\begin{document}
\title{Finding direct product decompositions in polynomial time}
\author{James B. Wilson}
\address{
Department of Mathematics\\
The Ohio State University\\
Columbus, OH 43210
}
\email{[email protected]}
\date{\today}
\thanks{This research was supported in part by NSF Grant DMS 0242983.}
\keywords{direct product, polynomial time, group variety, p-group, bilinear map}
\begin{abstract}
A polynomial-time algorithm is produced which,
given generators for a group of permutations on a finite set,
returns a direct product decomposition of the group
into directly indecomposable subgroups. The process uses bilinear maps and commutative rings to
characterize direct products of $p$-groups of class $2$ and reduces general groups to $p$-groups
using group varieties. The methods apply to quotients of permutation groups and operator groups as well.
\end{abstract}
\maketitle
\tableofcontents
\section{Introduction}
Forming direct products of groups is an old and elementary way to construct new groups from old ones. This paper concerns reversing that process by efficiently decomposing a group into a direct product of nontrivial subgroups in a maximal way, i.e. constructing a \emph{Remak decomposition} of the group. We measure efficiency by describing the time (number of operations) used by an algorithm, as a function of the input size. Notice that a small set of generating permutations or matrices can specify a group of exponentially larger size; hence, there is some work just to find the order of a group in polynomial time. In the last 40 years, problems of this sort have been attacked with ever increasing dependence on properties of simple groups, and primitive and irreducible actions, cf. \cite{Seress:book}. A polynomial-time algorithm to construct a Remak decomposition is an obvious addition to those algorithms and, as might be expected, our solution depends on many of those earlier works. Surprisingly, the main steps involve tools (bilinear maps, commutative rings, and group varieties) that are not standard in Computational Group Theory.
We solve the Remak decomposition problem for permutation groups and describe the method in a framework suitable for other computational settings, such as matrix groups. We prove:
\begin{thm}\label{thm:FindRemak}
There is a deterministic polynomial-time algorithm which, given a permutation group, returns a Remak decomposition of the group.
\end{thm}
It seems natural to solve the Remak decomposition problem by first locating a direct factor of the group, constructing a direct complement, and then recursing on the two factors. Indeed, Luks \cite{Luks:comp} and Wright \cite{Wright:comp} (cf. \thmref{thm:FindComp}) gave polynomial-time algorithms to test if a subgroup is a direct factor and if so to construct a direct complement. But how do we find a proper nontrivial direct factor to start with? A critical case for that problem is $p$-groups. A $p$-group generally has an exponential number of normal subgroups so that searching for direct factors of a $p$-group appears impossible.
The algorithm for \thmref{thm:FindRemak} does not proceed in the natural fashion just described, and it is more of a construction than a search. In fact, the algorithm does not produce a single direct factor of the original group until the final step, at which point it has produced an entire Remak decomposition.
It was the study of central products of $p$-groups which inspired the approach we use for \thmref{thm:FindRemak}. In \cite{Wilson:unique-cent,Wilson:algo-cent}, central products of a $p$-group $P$ of class $2$ were linked, via a bilinear map $\mathsf{Bi} (P)$, to idempotents in a Jordan algebra in a way that explained their size, their $(\Aut P)$-orbits, and demonstrated how to use the polynomial-time algorithms for rings (Ronyai \cite{Ronyai}) to construct fully refined central decompositions all at once (rather than incrementally refining a decomposition). This approach is repeated here, only we replace Jordan algebras with a canonical commutative ring $C(P):=C(\mathsf{Bi} (P))$ (cf. \eqref{eq:Bi} and \defref{def:centroid}).
Thus, we characterize directly indecomposable $p$-groups of class $2$ as follows:
\begin{thm}\label{thm:indecomp-class2}
If $P$ and $Q$ are finite $p$-groups of nilpotence class $2$ then $C(P\times Q)\cong C(P)\oplus C(Q)$. Hence, if $C(P)$ is a local ring and $\zeta_1(P)\leq \Phi(P)$, then $P$ is directly indecomposable. Furthermore, if $P^p=1$ then the converse also holds.
\end{thm}
The algorithm applies the implications of \thmref{thm:indecomp-class2} and begins with the \emph{unique} Remak decomposition of a commutative ring. This process is repeated across several sections of the group. Using group varieties we organize the various sections. Group varieties behave well regarding direct products and come with natural and computable normal subgroups used to create the sections. To work within these sections of a permutation group we have had to prove \thmref{thm:FindRemak} in the generality of quotients of permutation groups and thus we have used the Kantor-Luks polynomial-time quotient group algorithms \cite{KL:quotient}. Those methods depend on the Classification of Finite Simple Groups and, in this way, so does \thmref{thm:FindRemak}. A final generalization of the main result is the need to allow groups with operators $\Omega$ and consider Remak $\Omega$-decompositions. The most general version of our main result is summarized in \thmref{thm:FindRemak-Q} followed by a variant for matrix groups in \corref{coro:FindRemak-matrix}.
\thmref{thm:FindRemak} was proved in 2008 \cite{Wilson:thesis}.
That same year, with entirely different methods, Kayal-Nezhmetdinov \cite{KN:direct} proved
there is a deterministic polynomial-time algorithm which, given a group $G$ specified by its multiplication table (i.e. the size of input is $|G|^2$),
returns a Remak decomposition of $G$. The same result follows as a
corollary to \thmref{thm:FindRemak} by means of the regular permutation representation
of $G$. \thmref{thm:nearly-linear} states that in that special situation there is a nearly-linear-time algorithm for the task.
\subsection{Outline}
We organize the paper as follows.
In Section \ref{sec:background} we introduce the notation and definitions we use throughout. This includes the relevant group theory background, discussion of group varieties, rings and modules, and a complete listing of the prerequisite tools for \thmref{thm:FindRemak}.
In Section \ref{sec:lift-ext} we show when and how a direct decomposition of a subgroup or quotient group can be extended or lifted to a direct decomposition of the whole group (Sections \ref{sec:induced}--\ref{sec:chains}). That task centers around the selection of good classes of groups as well as appropriate normal subgroups. The results in that section are largely non-algorithmic though they lay foundations for the correctness proofs and suggest how the data will be processed by the algorithm for \thmref{thm:FindRemak}.
Section \ref{sec:lift-ext-algo} applies the results of the earlier section to produce a polynomial-time algorithm which can effect the lifting/extending of direct decompositions of subgroups and quotient groups. First we show how to construct direct $\Omega$-complements of a direct $\Omega$-factor of a group (Section \ref{sec:complements}) by modifying some earlier unpublished work of Luks \cite{Luks:comp} and Wright \cite{Wright:comp}. Those algorithms answer Problem 2, and (subject to some constraints) also Problem 4 of \cite[p. 13]{KN:direct}. The rest of the work concerns the algorithm \textalgo{Merge} described in Section \ref{sec:merge} which does the `glueing' together of direct factors from a normal subgroup and its quotient.
In Section \ref{sec:bi} we characterize direct decompositions of $p$-groups of class $2$ by means of an associated commutative ring and prove \thmref{thm:indecomp-class2}. We close that section with some likely well-known results on groups with trivial centers.
In Section \ref{sec:Remak} we prove \thmref{thm:FindRemak} and its generalization \thmref{thm:FindRemak-Q}. This is a specific application which demonstrates the general framework setup in Sections \ref{sec:lift-ext} and \ref{sec:lift-ext-algo}. \thmref{thm:FindRemak-Q} answers Problem 3 of \cite[p. 13]{KN:direct} and
\corref{coro:FindRemak-matrix} essentially answers Problem 5 of \cite[p. 13]{KN:direct}.
Section \ref{sec:ex} is an example of how the algorithm's main components operate on a specific group.
The execution is explained with an effort to indicate where some of the subtle points in the process arise.
Section \ref{sec:closing} wraps up loose ends and poses some questions.
\section{Background}\label{sec:background}
We begin with a survey of the notation, definitions, and algorithms we use throughout the paper.
Much of this preliminary material can be found in standard texts on Group Theory; see, for example,
\cite[Vol. I \S\S 15--18; Vol. II \S\S 45--47]{Kurosh:groups}.
Typewriter fonts $\mathtt{X}, \mathtt{R}$, etc. denote sets without implied properties;
Roman fonts $G$, $H$, etc., denote groups; Calligraphic fonts
$\mathcal{H}, \mathcal{X}$, etc. denote sets and multisets of groups; and the Fraktur
fonts $\mathfrak{X}$, $\mathfrak{N}$, etc. denote classes of groups.
With few exceptions we consider only finite groups. Functions are evaluated on the right and group actions are denoted exponentially. We write $\End G$ for the set of endomorphisms of $G$ and $\Aut G$ for the group of
automorphisms. The \emph{centralizer} of a subgroup $H\leq G$ is
$C_G(H)=\{g\in G: \forall h\in H,~h^g=h\}$. The \emph{upper central series} is
$\{\zeta_i(G): i\in\mathbb{N}\}$ where $\zeta_0(G)=1$, $\zeta_{i}(G)\normal \zeta_{i+1}(G)$
and $\zeta_{i+1}(G)/\zeta_i(G)=C_{G/\zeta_i(G)}(G/\zeta_i(G))$, for all $i\in\mathbb{N}$.
The commutator of subgroups $H$ and $K$ of $G$ is
$[H,K]=\langle [h,k]:h\in H, k\in K\rangle$. The \emph{lower central series} is
$\{\gamma_i(G):i\in\mathbb{Z}^+\}$ where $\gamma_1(G)=G$ and
$\gamma_{i+1}(G)=[G,\gamma_i(G)]$ for all $i\in\mathbb{Z}^+$.
The \emph{Frattini} subgroup $\Phi(G)$ is the intersection of all maximal subgroups.
\subsection{Operator groups}\label{sec:op-groups}
An $\Omega$-group $G$ is a group, a possibly empty set $\Omega$, and a function
$\theta:\Omega\to \End G$. Throughout the paper we write $g^{\omega}$ for
$g(\omega\theta)$, for all $g\in G$ and all $\omega\in \Omega$.
With the exception of Section \ref{sec:gen-ops}, we insist that $\Omega\theta\subseteq \Aut G$.
In a natural way, $\Omega$-groups have all the usual definitions of
$\Omega$-subgroups, quotient $\Omega$-groups, and
$\Omega$-homomorphisms. Call $H$ \emph{fully invariant}, resp. \emph{characteristic}, if it is
an $(\End G)$-, resp. $(\Aut G)$-, subgroup. As we insist that $\Omega\theta\subseteq \Aut G$, in this work every characteristic subgroup of $G$ is automatically an $\Omega$-subgroup.
Let $\Aut_{\Omega} G$ denote the $\Omega$-automorphisms of $G$.
We describe normal $\Omega$-subgroups $M$ of $G$ simply as
$(\Omega\union G)$-subgroups of $G$.
The following characterization is critical to our proofs.
\begin{align}
\label{eq:central}
\Aut_{\Omega\cup G} G & =
\{\varphi\in \Aut_{\Omega} G: \forall g\in G, g\varphi \equiv g \pmod{\zeta_1(G)}\}.
\end{align}
It is also evident that $\Aut_{\Omega\cup G} G$ acts as the identity
on $\gamma_2(G)$.
Such automorphisms are called \emph{central},
but for uniformity we describe them as $(\Omega\cup G)$-automorphisms.
We repeatedly use the following property of the $(\Omega\cup G)$-subgroup lattice.
\begin{lemma}[Modular law]\cite[Vol. II \S 44: pp. 91-92]{Kurosh:groups}\label{lem:modular}
If $M$, $H$, and $R$ are $(\Omega\cup G)$-subgroups of an $\Omega$-group $G$ and
$M\leq H$, then $H\cap RM=(H\cap R)M$.
\end{lemma}
\subsection{Decompositions, factors, and refinement}\label{sec:decomps}
Let $G$ be an $\Omega$-group.
An \emph{$\Omega$-decomposition} of $G$ is a set $\mathcal{H}$ of
$(\Omega\union G)$-subgroups of $G$ which generates $G$ but no proper
subset of $\mathcal{H}$ does. A \emph{direct $\Omega$-decomposition} is
an $\Omega$-decomposition $\mathcal{H}$ where $H\intersect \langle\mathcal{H}-\{H\}\rangle=1$,
for all $H\in\mathcal{H}$. In that case, elements $H$ of $\mathcal{H}$ are direct $\Omega$-factors of $G$ and $\langle\mathcal{H}-\{H\}\rangle$ is a \emph{direct $\Omega$-complement} to $H$.
Call $G$ \emph{directly $\Omega$-indecomposable}
if $\{G\}$ is the only direct $\Omega$-decomposition of $G$. Finally, a
\emph{Remak $\Omega$-decomposition} means a direct $\Omega$-decomposition
consisting of directly $\Omega$-indecomposable groups.
Our definitions imply that the trivial subgroup $1$ is not a direct $\Omega$-factor.
Furthermore, the only direct decomposition of $1$ is $\emptyset$ and so $1$ is not
directly $\Omega$-indecomposable.
We repeatedly use the following notation. Fix an $\Omega$-decomposition
$\mathcal{H}$ of an $\Omega$-group $G$, and an $(\Omega\union G)$-subgroup $M$ of $G$.
Define the sets
\begin{align}
\mathcal{H}\intersect M & = \{ H\intersect M : H\in\mathcal{H} \} -\{1\},\\
\mathcal{H}M & = \{ HM : H\in\mathcal{H}\} - \{M\},\textnormal{ and }\\
\mathcal{H}M/M & = \{ HM/M : H\in\mathcal{H} \} -\{M/M\}.
\end{align}
If $f:G\to H$ is an $\Omega$-homomorphism then define
\begin{align}
\mathcal{H}f = \{ Hf: H\in\mathcal{H}\}-\{1\}.
\end{align}
Each of these sets consists of $\Omega$-subgroups of $M$, $G$, $G/M$, and
$\im f$, respectively.
It is not generally true that these sets are $\Omega$-decompositions.
In particular, for arbitrary $M$, we should not expect a relationship between the direct
$\Omega$-decompositions of $G/M$ and those of $G$.
If $\mathfrak{X}$ is a class of groups then set
\begin{align}
\mathcal{H}\cap \mathfrak{X} & = \{ H\in\mathcal{H}: H\in\mathfrak{X}\},\textnormal{ and}\\
\mathcal{H}-\mathfrak{X} & = \mathcal{H}-(\mathcal{H}\cap\mathfrak{X}).
\end{align}
An $\Omega$-decomposition $\mathcal{H}$ of $G$ \emph{refines} an $\Omega$-decomposition $\mathcal{K}$
of $G$ if for each $H\in\mathcal{H}$, there is a unique $K\in\mathcal{K}$ such that
$H\leq K$ and furthermore,
\begin{equation}\label{eq:refine}
\forall K\in\mathcal{K},\quad K =\langle H\in\mathcal{H} : H\leq K\rangle.
\end{equation}
When $\mathcal{K}$ is a direct $\Omega$-decomposition, \eqref{eq:refine} implies
the uniqueness preceding the equation. If $\mathcal{H}$ is a direct
$\Omega$-decomposition then $\mathcal{K}$ is a direct $\Omega$-decomposition.
An essential tool for us is the so-called ``Krull-Schmidt'' theorem for finite groups.
\begin{thm}[``Krull-Schmidt'']\label{thm:KRS}\cite[Vol. II, p. 120]{Kurosh:groups}
If $G$ is an $\Omega$-group and $\mathcal{R}$ and $\mathcal{T}$ are Remak $\Omega$-decompositions of $G$,
then for every $\mathcal{X}\subseteq \mathcal{R}$, there is a $\varphi\in \Aut_{\Omega\cup G} G$ such that
$\mathcal{X}\varphi\subseteq \mathcal{T}$ and $\varphi$ is the identity on $\mathcal{R}-\mathcal{X}$.
In particular, $\mathcal{R}\varphi=\mathcal{X}\varphi\sqcup (\mathcal{R}-\mathcal{X})$ is a Remak
$\Omega$-decomposition of $G$.
\end{thm}
\begin{remark}
The ``Krull-Schmidt'' theorem combines two distinct properties.
First, it is a theorem about exchange (comparable to basis exchange in linear algebra).
That property was proved by Wedderburn \cite{Wedderburn:direct} in 1909.
Secondly, it is a theorem about the transitivity of a group action.
That property was the contribution of Remak \cite{Remak:direct} in 1911.
Remak was made aware of Wedderburn's work in the course of publishing his paper
and added to his closing remarks \cite[p. 308]{Remak:direct} that
Wedderburn's proof contained an unsupported leap (specifically at
\cite[p.175, l.-4]{Wedderburn:direct}). This leap is not so great
by contemporary standards, for example it occurs in \cite[p.81, l.-12]{Rotman:grp}.
Few references seem to be made to Wedderburn's work following Remak's publication.
In 1913, Schmidt \cite{Schmidt:direct} simplified and extended the work of Remak and
in 1925 Krull \cite{Krull:direct} considered direct products of finite and infinite
abelian $\Omega$-groups. Fitting \cite{Fitting:direct} invented the standard proof
using idempotents, Ore \cite{Ore:lattice1} grounded the concepts in Lattice theory, and
in several works Kurosh \cite[\S 17,
\S\S 42--47]{Kurosh:groups} and others unified and expanded these results.
By the 1930's direct decompositions of maximum length appear as ``Remak decompositions''
while at the same time the theorem is referenced as ``Krull-Schmidt''.
\end{remark}
\subsection{Free groups, presentations, and constructive presentations}
\label{sec:free}
In various places we use free groups.
Fix a set $\mathtt{X}\neq \emptyset$ and a group $G$. Let $G^{\mathtt{X}}$ denote
the set of functions from $\mathtt{X}$ to $G$, equivalently, the set of all $\mathtt{X}$-tuples of $G$.
Every $f\in G^{\mathtt{X}}$ is the restriction of a unique homomorphism $\hat{f}$ from the free group $F(\mathtt{X})$ into $G$, that is:
\begin{equation}
\forall x\in\mathtt{X},
\quad x\hat{f} = xf.
\end{equation}
We use $\hat{f}$ exclusively in that manner. As usual we call
$\langle\mathtt{X} | \mathtt{R}\rangle$ a \emph{presentation} for a group $G$ with respect
to $f:\mathtt{X}\to G$
if $\mathtt{X} f$ generates $G$ and $\ker \hat{f}$ is the smallest normal subgroup
of $F(\mathtt{X})$ containing $\mathtt{R}$.
Following \cite[Section 3.1]{KLM:Sylow},
$\{\langle\mathtt{X}|\mathtt{R}\rangle, f:\mathtt{X}\to G,\ell:G\to F(\mathtt{X})\}$ is a \emph{constructive presentation} for $G$,
if $\langle\mathtt{X} | \mathtt{R}\rangle$ is a presentation for $G$ with
respect to $f$ and $\ell \hat{f}$ is the identity on $G$.
More generally, if $M$ is a normal subgroup of $G$ then call
$\{\langle\mathtt{X}| \mathtt{R}\rangle,f:\mathtt{X}\to G,\ell:G\to F(\mathtt{X})\}$ a \emph{constructive presentation
for $G$ mod $M$} if $\langle \mathtt{X}|\mathtt{R}\rangle$ is a
presentation of $G/M$ with respect to the induced function
$\mathtt{X}\overset{f}{\to} G\to G/M$, also $\ell\hat{f}$ is the identity on $G$, and
$M\ell\leq \langle \mathtt{R}^{F(\mathtt{X})}\rangle$.
\subsection{Group classes, varieties, and verbal and marginal subgroups}
\label{sec:varieties}
In this section we continue the notation given in Section \ref{sec:free} and
introduce the vocabulary and elementary properties of group varieties studied at length in \cite{HNeumann:variety}.
By a \emph{class of $\Omega$-groups} we shall mean a class which contains
the trivial group and is closed to $\Omega$-isomorphic images. If $\mathfrak{X}$
is a class of ordinary groups, then $\mathfrak{X}^{\Omega}$ denotes
the subclass of $\Omega$-groups in $\mathfrak{X}$.
A \emph{variety} $\mathfrak{V}=\mathfrak{V}(\mathtt{W})$ is a class of groups defined by a
set $\mathtt{W}$ of words, known as \emph{laws}. Explicitly, $G\in\mathfrak{V}$ if, and only if, every $f\in G^{\mathtt{X}}$ has $\mathtt{W}\subseteq \ker \hat{f}$. We say that $w\in F(\mathtt{X})$ is a \emph{consequence}
of the laws $\mathtt{W}$ if for every $G\in\mathfrak{V}$ and every $f\in G^{\mathtt{X}}$,
$w\in \ker \hat{f}$.
The relevance of these classes to direct products is captured in the following:
\begin{thm}[Birkhoff-Kogalovski]\cite[15.53]{HNeumann:variety}\label{thm:BK}
A class of groups is a variety if, and only if, it is nonempty
and is closed to homomorphic images, subgroups,
and direct products (including infinite products).
\end{thm}
Fix a word $w\in F(\mathtt{X})$. We regard $w$ as a function $G^{\mathtt{X}}\to G$, denoted $w$, where
\begin{equation}\label{eq:w-map}
\forall f\in G^{\mathtt{X}},\quad w(f) = w\hat{f}.
\end{equation}
On occasion we write $w(f)$ as $w(g_1,g_2,\dots)$, where $f\in G^{\mathtt{X}}$ is understood
as the tuple $(g_1,g_2,\dots)$. For example, if $w=[x_1,x_2]$, then $w:G^2\to G$
can be defined as $w(g_1,g_2)=[g_1,g_2]$, for all $g_1,g_2\in G$.
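As a concrete illustration (ours, not part of the algorithms of this paper), the word map $w(g_1,g_2)=[g_1,g_2]$ can be evaluated by brute force in $S_3$; closing its image under products yields the subgroup generated by all commutators, namely $A_3$:

```python
# Evaluate the commutator word w(g1, g2) = [g1, g2] = g1^-1 g2^-1 g1 g2
# over all pairs in S_3 (permutations stored as tuples), then close the
# image under multiplication to obtain the subgroup it generates.
from itertools import permutations

def compose(p, q):
    # apply p, then q -- matching the paper's right-action convention
    return tuple(q[p[i]] for i in range(len(p)))

def inverse(p):
    inv = [0] * len(p)
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

def comm(g, h):
    return compose(compose(inverse(g), inverse(h)), compose(g, h))

S3 = list(permutations(range(3)))
image = {comm(g, h) for g in S3 for h in S3}

verbal = set(image)
while True:
    new = {compose(a, b) for a in verbal for b in verbal} - verbal
    if not new:
        break
    verbal |= new

print(len(verbal))  # 3: the commutators of S_3 generate A_3
```

In general the image of a word map need not be a subgroup, so the closure step is essential.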
Levi and Hall separately introduced two natural subgroups to associate with the
function $w:G^{\mathtt{X}}\to G$.
First, to approximate the image of $w$ with a group, we have the \emph{verbal} subgroup
\begin{equation}\label{eq:def-verbal}
w(G) = \langle w(f): f\in G^{\mathtt{X}}\rangle.
\end{equation}
Secondly, to mimic the radical of a multilinear map, we use the \emph{marginal} subgroup
\begin{equation}\label{eq:marginal}
w^*(G) =
\{ g \in G~:~\forall f'\in \langle g\rangle^{\mathtt{X}}, \forall f\in G^{\mathtt{X}},~w(ff')=w(f)\}.
\end{equation}
(To be clear, $ff'\in G^{\mathtt{X}}$ is the pointwise product: $x(ff')=(xf)(xf')$ for all $x\in \mathtt{X}$.)
Thus, $w:G^{\mathtt{X}}\to G$ factors through $w:(G/w^*(G))^{\mathtt{X}}\to w(G)$. For a set $\mathtt{W}$ of words,
the $\mathtt{W}$-verbal subgroup is $\langle w(G): w\in \mathtt{W}\rangle$ and the $\mathtt{W}$-marginal
subgroup is $\bigcap \{w^*(G): w\in \mathtt{W}\}$. Observe that for finite sets $\mathtt{W}$ a single word
may be used instead, e.g. replace $\mathtt{W}=\{[x_1,x_2], x_1^2\}
\subseteq F(\{x_1,x_2\})$ with $w=[x_1,x_2]x_3^2\in F(\{x_1,x_2,x_3\})$.
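The marginal subgroup \eqref{eq:marginal} can likewise be computed by brute force in small groups. The sketch below (our illustration, with $D_4$ realized inside $S_4$; the realization is our own choice) confirms that for $w=[x_1,x_2]$ the marginal subgroup is the center:

```python
# Brute-force marginal subgroup of w = [x1, x2] in D_4 <= S_4, with
# permutations stored as tuples.
def compose(p, q):
    return tuple(q[p[i]] for i in range(len(p)))

def inverse(p):
    inv = [0] * len(p)
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

def comm(g, h):
    return compose(compose(inverse(g), inverse(h)), compose(g, h))

def closure(gens):
    # subgroup generated by gens, by repeated multiplication
    G = {tuple(range(len(gens[0])))}
    frontier = set(gens)
    while frontier:
        G |= frontier
        frontier = {compose(a, b) for a in G for b in G} - G
    return G

r, s = (1, 2, 3, 0), (0, 3, 2, 1)   # a rotation and a reflection generating D_4
G = closure([r, s])

# g lies in w*(G) iff multiplying either argument by elements of <g>
# never changes the value of the commutator word.
margin = {g for g in G
          if all(comm(compose(g1, a), compose(g2, b)) == comm(g1, g2)
                 for a in closure([g]) for b in closure([g])
                 for g1 in G for g2 in G)}

center = {g for g in G if all(compose(g, h) == compose(h, g) for h in G)}
print(len(G), margin == center)  # 8 True
```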
If we have a variety $\mathfrak{V}$ defined by two sets $\mathtt{W}$ and ${\tt U}$ of laws, then
every $u\in {\tt U}$ is a consequence of the laws $\mathtt{W}$. From the definitions above it
follows that $u(G)\leq \mathtt{W}(G)$ and $\mathtt{W}^*(G)\leq u^*(G)$. Reversing the roles of
$\mathtt{W}$ and ${\tt U}$, it follows that $\mathtt{W}(G)={\tt U}(G)$ and
$\mathtt{W}^*(G)={\tt U}^*(G)$. This justifies the notation
\begin{align*}
\mathfrak{V}(G) & = \mathfrak{V}(\mathtt{W})(G)=\mathtt{W}(G),\\
\mathfrak{V}^*(G) & = \mathfrak{V}(\mathtt{W})^*(G) = \mathtt{W}^*(G).
\end{align*}
The verbal and marginal groups are dual in the following sense \cite{Hall:margin}:
for a group $G$,
\begin{equation}
\mathfrak{V}(G)=1\quad \Leftrightarrow\quad G\in\mathfrak{V}
\quad \Leftrightarrow \quad \mathfrak{V}^*(G)=G.
\end{equation}
Also, verbal subgroups are radical, $\mathfrak{V}(G/\mathfrak{V}(G))=1$, and marginal
subgroups are idempotent, $\mathfrak{V}^*(\mathfrak{V}^*(G))=\mathfrak{V}^*(G)$, but
verbal subgroups are not generally idempotent and marginal subgroups are not generally radical.
\begin{ex}\label{ex:varieties}
\begin{enumerate}[(i)]
\item The class $\mathfrak{A}$ of abelian groups is a group variety defined by $[x_1, x_2]$.
The $\mathfrak{A}$-verbal subgroup of a group is the commutator subgroup and the
$\mathfrak{A}$-marginal subgroup is the center.
\item The class $\mathfrak{N}_c$ of nilpotent groups of class at most $c$ is a group variety
defined by $[x_1,\dots,x_{c+1}]$ (i.e. $[x_1]=x_1$ and
$[x_1,\dots,x_{i+1}]=[[x_1,\dots,x_i],x_{i+1}]$, for all $i\in \mathbb{N}$).
Also, $\mathfrak{N}_c(G)=\gamma_{c+1}(G)$ and $\mathfrak{N}_c^*(G)=\zeta_c(G)$
\cite[2.3]{Robinson}.
\item The class $\mathfrak{S}_d$ of solvable groups of derived length at most $d$ is
a group variety defined by $\delta_d(x_1,\dots,x_{2^d})$ where
$\delta_1(x_1)=x_1$ and for all $i\in\mathbb{N}$,
$$\delta_{i+1}(x_1,\dots,x_{2^{i+1}})
=[\delta_i(x_1,\dots,x_{2^i}),\delta_i(x_{2^i+1},\dots,x_{2^{i+1}})].$$
Predictably, $\mathfrak{S}_d(G)=G^{(d)}$ is the
$d$-th derived group of $G$. It appears that $\mathfrak{S}_d^*(G)$ is not often used
and has no name. (There may be good reason: $\mathfrak{S}_d^*(G)$ can be trivial while $G$ is solvable; thus, the series $\mathfrak{S}^*_1(G)\leq
\mathfrak{S}^*_2(G)\leq \cdots$ need not be strictly increasing.)
\end{enumerate}
\end{ex}
Verbal and marginal subgroups are characteristic in $G$ and verbal subgroups are also fully
invariant \cite{Hall:margin}. So if $G$ is an $\Omega$-group then so is $\mathfrak{V}(G)$.
Moreover,
\begin{equation}\label{eq:verbal-closure}
G\in\mathfrak{V}^{\Omega} \textnormal{ if, and only if, $G$ is an $\Omega$-group and }
\mathfrak{V}(G)=1.
\end{equation}
Unfortunately, marginal subgroups need not be fully invariant (e.g. the
center of a group). In their place, we use the $\Omega$-invariant marginal subgroup
$(\mathfrak{V}^{\Omega})^{*}(G)$, i.e. the largest normal $\Omega$-subgroup of
$\mathfrak{V}^*(G)$.
Since $\mathfrak{V}$ is closed to subgroups it follows that $(\mathfrak{V}^{\Omega})^{*}(G)\in\mathfrak{V}$. Furthermore, if $G$ is an $\Omega$-group and $G\in\mathfrak{V}$ then
$\mathfrak{V}^*(G)=G$ and so the $\Omega$-invariant marginal subgroup is $G$. Thus,
\begin{equation}\label{eq:marginal-closure}
G\in\mathfrak{V}^{\Omega} \textnormal{ if, and only if, $G$ is an $\Omega$-group and }
\mathfrak{V}^{*}(G)=G.
\end{equation}
In our special setting all operators
act as automorphisms and so the invariant marginal subgroup is indeed the marginal subgroup.
Nevertheless, to avoid confusion we insist that the marginal subgroup of a variety
of $\Omega$-groups refers to the $\Omega$-invariant marginal subgroup.
\subsection{Rings, frames, and modules}\label{sec:rings}
We involve some standard theorems for associative unital finite rings and modules.
Standard references for our uses include \cite[Chapters 1--3]{Herstein:rings} and
\cite[Chapters I--II, V.3]{Jacobson:Lie}. Throughout this section $R$ denotes
a finite associative unital ring.
An element $e\in R-\{0\}$ is \emph{idempotent} if $e^2=e$. An idempotent is
\emph{proper} if it is not $1$ (as we have excluded $0$ as an idempotent). Two idempotents $e,f\in R$
are \emph{orthogonal} if $ef=0=fe$. An idempotent
is \emph{primitive} if it is not the sum of two orthogonal idempotents. Finally,
a \emph{frame} $\mathcal{E}\subseteq R$ is a set of pairwise orthogonal primitive idempotents
of $R$ which sum to $1$. We use the following properties.
\begin{lem}[Lifting idempotents]\label{lem:lift-idemp}
Let $R$ be a finite ring.
\begin{enumerate}[(i)]
\item If $e\in R$ such that $e^2-e\in J(R)$ (the Jacobson radical)
then for some $n\leq \log_2 |J(R)|$, $(e^2-e)^n=0$ and
\begin{equation*}
\hat{e}= \sum_{i=0}^{n-1}\binom{2n-1}{i} e^{2n-1-i}(1-e)^i
\end{equation*}
is an idempotent in $R$. Furthermore, $\widehat{1-e}=1-\hat{e}$.
\item If $\mathcal{E}$ is a frame of $R/J(R)$ then $\hat{\mathcal{E}}=\{\hat{e}:e\in\mathcal{E}\}$
is a frame of $R$.
\item Frames in $R$ are conjugate by a unit in $R$; in particular, if $R$ is commutative then
$R$ has a unique frame.
\end{enumerate}
\end{lem}
\begin{proof}
Part (i) is verified directly, compare \cite[(6.7)]{Curtis-Reiner}.
Part (ii) follows from (i) by induction. For (iii) see \cite[p. 141]{Curtis-Reiner}.
\end{proof}
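The lifting formula in part (i) can be checked numerically. A small sketch (ours) in $R=\mathbb{Z}/8$: the element $e=3$ satisfies $e^2-e=6\in J(R)=(2)$ with $(e^2-e)^3\equiv 0$, and the formula lifts it to the idempotent $\hat{e}=1$, with $\hat{e}-e\in J(R)$:

```python
# Check the idempotent-lifting formula of the lemma in Z/8.
from math import comb

def lift_idempotent(e, n, mod):
    # hat(e) = sum_{i=0}^{n-1} C(2n-1, i) * e^(2n-1-i) * (1-e)^i
    return sum(comb(2 * n - 1, i) * pow(e, 2 * n - 1 - i, mod) * pow(1 - e, i, mod)
               for i in range(n)) % mod

mod = 8
e = 3          # e^2 - e = 6 lies in the Jacobson radical (2) of Z/8
n = 3          # (e^2 - e)^3 = 216 = 0 (mod 8)
eh = lift_idempotent(e, n, mod)
print(eh)                     # 1
print((eh * eh - eh) % mod)   # 0: hat(e) is idempotent
```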
If $M$ is an $R$-module and $e$ is an idempotent of $\End_R M$ then $M=Me\oplus M(1-e)$.
Furthermore, if $M=E\oplus F$ as an $R$-module, then the projection $e_E:M\to M$ with kernel $F$ and image $E$ is an idempotent endomorphism of $M$. Thus, every direct $R$-decomposition $\mathcal{M}$ of $M$ is parameterized by a set $\mathcal{E}(\mathcal{M})=\{e_E : E\in\mathcal{M}\}$ of pairwise orthogonal idempotents of $\End_R M$ which sum to $1$. Remak $R$-decompositions of $M$ correspond to frames of $\End_R M$.
\subsection{Polynomial-time toolkit}
\label{sec:tools}
We use this section to specify how we intend to compute with groups of permutations.
We operate in the context of quotients of permutation groups and borrow from the large
library of polynomial-time algorithms for this class of groups. We detail
the problems we use in our proof of \thmref{thm:FindRemak} so that in principle any computational
domain with polynomial-time algorithms for these problems will admit a theorem
similar to \thmref{thm:FindRemak}. The majority of algorithms which we cite do not provide specific
estimates on the polynomial timing. Therefore, our own main theorems will not have specific estimates.
Here $S_n$ denotes the group of all permutations of $\{1,\dots,n\}$. Given $\mathtt{X}\subseteq S_n$, a
\emph{straight-line program} over $\mathtt{X}$ is a recursively defined
function on $\mathtt{X}$ which evaluates to a word over $\mathtt{X}$, but can be stored and evaluated in an
efficient manner; see \cite[p. 10]{Seress:book}. To simplify notation we treat these as elements in
$S_n$.
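To indicate the flavor of straight-line programs (the data structure below is our simplified sketch, not the precise formalism of \cite[p. 10]{Seress:book}), each instruction either loads a generator or multiplies previously computed lines, so words of length $2^k$ are reached with about $k$ instructions:

```python
# Evaluate a toy straight-line program over generators in S_n,
# with permutations stored as tuples.
def evaluate_slp(program, generators, n):
    def compose(p, q):
        return tuple(q[p[i]] for i in range(n))
    values = []
    for instr in program:
        if instr[0] == 'gen':            # load a generator by index
            values.append(generators[instr[1]])
        elif instr[0] == 'mul':          # multiply two earlier lines
            values.append(compose(values[instr[1]], values[instr[2]]))
    return values[-1]

# A 5-cycle g; the program computes g^4 = (g^2)^2 with three instructions
# instead of a length-4 word.
g = (1, 2, 3, 4, 0)
prog = [('gen', 0), ('mul', 0, 0), ('mul', 1, 1)]
print(evaluate_slp(prog, [g], 5))  # (4, 0, 1, 2, 3), the inverse of g
```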
Write $\mathbb{G}_n$ for the class of groups $G$
encoded by $(\mathtt{X}:\mathtt{R})$ where $\mathtt{X}\subseteq S_n$
and $\mathtt{R}$ is a set of straight-line programs such that
\begin{equation}\label{eq:def-G}
G=\langle\mathtt{X}\rangle/N,\qquad N:=\left\langle \mathtt{R}^{\langle \mathtt{X}\rangle}\right\rangle\leq \langle\mathtt{X}\rangle\leq S_n.
\end{equation}
The notation $\mathbb{G}_n$ intentionally avoids reference to the permutation domain as the algorithms we consider can be adapted to other computational domains. Also, observe that a group $G\in\mathbb{G}_n$
may have no small degree permutation representation. For example, the extraspecial group $2^{1+2n}_+$ is
a quotient of $D_8^{n}\leq S_{4n}$; yet, the smallest faithful permutation representation
of $2^{1+2n}_+$ has degree $2^n$ \cite[Introduction]{Neumann:perm-grp}.
It is misleading to think of $\mathtt{R}$ in \eqref{eq:def-G} as relations for the generators $\mathtt{X}$;
indeed,
elements in $\mathtt{X}$ are also permutations and so there are relations implied on $\mathtt{X}$
which may not be implied by $\mathtt{R}$.
We write $\ell(\mathtt{R})$ for the sum of the lengths of straight-line programs in $\mathtt{R}$.
A homomorphism $f:G\to H$ of groups
$G=(\mathtt{X} :\mathtt{R}),H=({\tt Y}:{\tt S}) \in\mathbb{G}_n$
is encoded by storing $\mathtt{X} f$ as straight-line programs in ${\tt Y}$.
An $\Omega$-group $G$ is encoded by $G=(\mathtt{X}:\mathtt{R})\in\mathbb{G}_n$ along with a function
$\theta:\Omega\to \End G$. We write $\mathbb{G}_n^{\Omega}$
for the set of $\Omega$-groups encoded in that fashion.
A \emph{polynomial-time} algorithm with input
$G=(\mathtt{X}:\mathtt{R})\in \mathbb{G}_n^{\Omega}$ returns its output using a number of steps
polynomial in $|\mathtt{X}|n+\ell(\mathtt{R})+\ell(\Omega)$. In some cases
$|\mathtt{X}|n+\ell(\mathtt{R})\in O(\log |G|)$; so, $|G|$ can be exponentially larger than the input size.
When we say ``given an $\Omega$-group $G$'' we shall mean $G\in\mathbb{G}_n^{\Omega}$.
Our objective in this paper is to solve the following problem.
\begin{prob}{\sc Remak-$\Omega$-Decomposition}\label{prob:FindRemak}
\begin{description}
\item[Given] an $\Omega$-group $G$,
\item[Return] a Remak $\Omega$-decomposition for $G$.
\end{description}
\end{prob}
The problems \probref{prob:Order}--\probref{prob:MinSNorm} have polynomial-time solutions
for groups in $\mathbb{G}_n^{\Omega}$.
\begin{prob}{\sc Order}\label{prob:Order}\cite[P1]{KL:quotient}
\begin{description}
\item[Given] a group $G$,
\item[Return] $|G|$.
\end{description}
\end{prob}
\begin{prob}{\sc Member}\label{prob:Member}\cite[3.1]{KL:quotient}
\begin{description}
\item[Given] a group $G$, a subgroup $H=(\mathtt{X}':\mathtt{R}')$ of $G$, and $g\in G$,
\item[Return] false if $g\notin H$; else, a straight-line program in $\mathtt{X}'$ reaching $g\in H$.
\end{description}
\end{prob}
We require the means to solve systems of linear equations, or determine that no solution exists,
in the following generalized setting.
\begin{prob}{\sc Solve}\label{prob:Solve}\cite[Proposition 3.7]{KLM:Sylow}
\begin{description}
\item[Given] a group $G$, an abelian normal subgroup $M$, a function
$f\in G^{\mathtt{X}}$ of constants in $G$, and a set $\mathtt{W}\subseteq F(\mathtt{X})$ of words encoded via straight-line programs;
\item[Return] false if there is no $\mu\in M^{\mathtt{X}}$ with $w(f\mu)=1$ for all $w\in\mathtt{W}$; else, generators
for the solution space $\{\mu\in M^{\mathtt{X}} : \forall w\in \mathtt{W},~w(f\mu)=1\}$.
\end{description}
\end{prob}
\begin{prob}{\sc Presentation}\label{prob:pres}\cite[P2]{KL:quotient}
\begin{description}
\item[Given] a group $G$ and a normal subgroup $M$,
\item[Return] a constructive presentation $\{\langle\mathtt{X}|\mathtt{R}\rangle, f,\ell\}$ for $G$ mod $M$.
\end{description}
\end{prob}
\begin{prob}{\sc Minimal-Normal}\label{prob:MinNorm}\cite[P11]{KL:quotient}
\begin{description}
\item[Given] a group $G$,
\item[Return] a minimal normal subgroup of $G$.
\end{description}
\end{prob}
\begin{prob}{\sc Normal-Centralizer}\label{prob:CentNorm}\cite[P6]{KL:quotient}
\begin{description}
\item[Given] a group $G$ and a normal subgroup $H$,
\item[Return] $C_G(H)$.
\end{description}
\end{prob}
\begin{prob}{\sc Primary-Decomposition}\label{prob:Primary}
\begin{description}
\item[Given] an abelian group $A\in \mathbb{G}_n$,
\item[Return] a primary decomposition for $A=\bigoplus_{v\in{\tt B}} \mathbb{Z}_{p^e} v$,
where for each $v\in {\tt B}$, $|v|=p^e$ for some prime $p=p(v)$ and exponent $e=e(v)$.
\end{description}
\end{prob}
We call ${\tt B}$, as in {\sc Primary-Decomposition}, a \emph{basis} for $A$.
The polynomial-time solution of {\sc Primary-Decomposition} is routine.
Let $A=(\mathtt{X} : \mathtt{R})\in \mathbb{G}_n$. Use {\sc Order} to compute $|A|$.
As $A$ is a quotient of a permutation group, the primes dividing $|A|$ are less than $n$.
For each prime $p\mid |A|$, write $|A|=p^e m$ where $(p,m)=1$ and set
$A_p=A^{m}$.
Using {\sc Member} build a basis ${\tt B}_p$ for $A_p$ by unimodular linear algebra.
(Compare \cite[Section 2.3]{Wilson:algo-cent}.) The return is $\bigsqcup_{p\mid |A|} {\tt B}_p$.
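The arithmetic behind this routine can be illustrated outside the permutation-group setting. The following Python sketch (illustrative only, with names of our choosing; it works from a list of cyclic factor orders rather than from generating permutations) splits each cyclic order into its prime-power parts, mirroring the passage from $A$ to the primary components $A_p=A^m$:

```python
def primary_decomposition(orders):
    # Given the orders of independent cyclic factors of a finite abelian
    # group, return {p: [p^e, ...]} listing the primary cyclic factors.
    # Uses the CRT isomorphism Z_n = (+) Z_{p^e} over the primes p | n.
    comps = {}
    for n in orders:
        m, d = n, 2
        while d * d <= m:
            if m % d == 0:
                e = 0
                while m % d == 0:
                    m //= d
                    e += 1
                comps.setdefault(d, []).append(d ** e)
            d += 1
        if m > 1:  # leftover prime factor
            comps.setdefault(m, []).append(m)
    return comps
```

For example, a group with cyclic factors of orders $12$ and $10$ has primary parts $\mathbb{Z}_4\oplus\mathbb{Z}_2$, $\mathbb{Z}_3$, and $\mathbb{Z}_5$.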
We involve some problems for associative rings. For ease we assume that all
rings $R$ are finite of characteristic $p^e$ and specified with a basis
${\tt B}$ over $\mathbb{Z}_{p^e}$.
To encode the multiplication in $R$ we store structure constants
$\{\lambda_{xy}^z \in \mathbb{Z}_{p^e} : x,y,z\in {\tt B}\}$ which are defined so that:
\begin{equation*}
\left(\sum_{x\in{\tt B}} r_x x\right)\left(\sum_{y\in{\tt B}} s_y y\right)
=\sum_{z\in{\tt B}} \left(\sum_{x,y\in{\tt B}} r_x \lambda_{xy}^{z} s_y\right)z
\end{equation*}
where, for all $x$ and all $y$ in ${\tt B}$, $r_x,s_y\in\mathbb{Z}_{p^e}$.
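To make the encoding concrete, the following Python sketch (with names of our own choosing) multiplies two elements given by their coefficient vectors and the structure constants $\lambda_{xy}^z$; the toy ring is $\mathbb{Z}_4[i]$ with basis $\{1,i\}$, where $i^2=-1\equiv 3$:

```python
def ring_mul(r, s, lam, basis, pe):
    # Product of r = sum_x r_x x and s = sum_y s_y y, coefficients mod p^e,
    # using structure constants lam[(x, y, z)] = lambda_{xy}^z (0 if absent).
    out = {z: 0 for z in basis}
    for x in basis:
        for y in basis:
            for z in basis:
                out[z] = (out[z] + r[x] * lam.get((x, y, z), 0) * s[y]) % pe
    return out

# Z_4[i] with basis {1, i}: the only nonzero structure constants
basis = ['1', 'i']
lam = {('1', '1', '1'): 1, ('1', 'i', 'i'): 1,
       ('i', '1', 'i'): 1, ('i', 'i', '1'): 3}  # i*i = -1 = 3 mod 4
```

For instance $(1+i)^2 = 1 + 2i + i^2 \equiv 2i$ in this ring.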
\begin{prob}{\sc Frame}\label{prob:Frame}
\begin{description}
\item[Given] an associative unital ring $R$,
\item[Return] a frame of $R$.
\end{description}
\end{prob}
{\sc Frame} has various nondeterministic solutions \cite{EG:fast-alge,Ivanyos:fast-alge} of
astonishing speed; however, we need a deterministic solution, such as that in the work of Ronyai.
\begin{thm}[Ronyai \cite{Ronyai}]\label{thm:Frame}
For rings $R$ specified as an additive group in $\mathbb{G}_n$ with a basis
and with structure constants with respect to the basis,
{\sc Frame} is solvable in time polynomial in $p+n$, where $|R|=p^n$.
\end{thm}
\begin{proof}
First pass to ${\bf R}=R/pR$ and so create an algebra
over the field $\mathbb{Z}_p$. Now \cite[Theorem 2.7]{Ronyai} gives a
deterministic polynomial-time algorithm which finds a basis for the Jacobson
radical of ${\bf R}$. This allows us to pass to ${\bf S}={\bf R}/J({\bf R})$,
which is isomorphic to a direct product of matrix rings over finite fields.
Finding the frame for ${\bf S}$ can be done by finding the minimal ideals
$\mathcal{M}$ of ${\bf S}$ \cite[Corollary 3.2]{Ronyai}. Next, for each
$M\in\mathcal{M}$, build an isomorphism $M\to M_n(\mathbb{F}_q)$ \cite[Corollary 5.3]{Ronyai}
and choose a frame of idempotents from $M_n(\mathbb{F}_q)$ and let $\mathcal{E}_M$
be the pullback to $M$. Set $\mathcal{E} =\bigsqcup_{M\in\mathcal{M}} \mathcal{E}_M$
noting that $\mathcal{E}$ is a frame for ${\bf S}$. Hence, use the power series of
\lemref{lem:lift-idemp} to lift the frame $\mathcal{E}$ to a frame $\hat{\mathcal{E}}$ for
$R$.
\end{proof}
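The final lifting step can be illustrated by the classical Newton-type iteration $e\mapsto 3e^2-2e^3$, which doubles the precision of an approximate idempotent. The Python sketch below (illustrative, with hypothetical names, working with square matrices over $\mathbb{Z}_{p^e}$) is one elementary way to realize a power-series lift in the spirit of \lemref{lem:lift-idemp}:

```python
def matmul(a, b, mod):
    # product of square matrices over Z_mod
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) % mod
             for j in range(n)] for i in range(n)]

def lift_idempotent(e, p, target):
    # If e^2 = e (mod p^k), then 3e^2 - 2e^3 is idempotent mod p^(2k).
    # Iterate, doubling the precision, until it reaches p^target.
    k = 1
    while k < target:
        k = min(2 * k, target)
        mod = p ** k
        e2 = matmul(e, e, mod)
        e3 = matmul(e2, e, mod)
        e = [[(3 * e2[i][j] - 2 * e3[i][j]) % mod for j in range(len(e))]
             for i in range(len(e))]
    return e
```

For instance, $\begin{smallmatrix}1&1\\0&5\end{smallmatrix}$ is idempotent modulo $5$ and one iteration lifts it to an idempotent modulo $25$.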
With \thmref{thm:Frame} we set up and solve a special instance of \thmref{thm:FindRemak}.
\begin{prob}{\sc Abelian.Remak-$\Omega$-Decomposition}\label{prob:FindRemak-abelian}
\begin{description}
\item[Given] an abelian $\Omega$-group $A$,
\item[Return] a Remak $\Omega$-decomposition for $A$.
\end{description}
\end{prob}
\begin{coro}\label{coro:FindRemak-abelian}
{\sc Abelian.Remak-$\Omega$-Decomposition} has a polynomial-time solution.
\end{coro}
\begin{proof}
Let $A\in\mathbb{G}_n^{\Omega}$ be abelian.
\emph{Algorithm.}
Use {\sc Primary-Decomposition} to write $A$ in a primary decomposition.
For each prime $p$ dividing $|A|$, let $A_p$ be the $p$-primary component.
Write a basis for $\End A_p$ (noting that $\End A_p$ is a checkered
matrix ring determined completely by the Remak decomposition of $A_p$ as a $\mathbb{Z}$-module
\cite[p. 196]{McDonald:fin-ring})
and use {\sc Solve} to find a basis for $\End_{\Omega} A_p$. Finally, use
{\sc Frame} to find
a frame $\mathcal{E}_p$ for $\End_{\Omega} A_p$. Set $\mathcal{A}_p=\{A_p e: e\in\mathcal{E}_p\}$.
Return $\bigsqcup_{p\mid |A|} \mathcal{A}_p$.
\emph{Correctness.} Every direct $\Omega$-decomposition of $A$ corresponds to a set
of pairwise orthogonal idempotents in $\End_{\Omega} A$ which sum to $1$. Furthermore,
Remak $\Omega$-decompositions correspond to frames.
\emph{Timing.}
The polynomial-timing follows from \thmref{thm:Frame} together with the
observation that $p\leq n$ whenever $A\in \mathbb{G}_n$.
\end{proof}
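To illustrate the last step of the algorithm: the direct factors cut out by a frame are simply the images $A_p e$. A minimal Python sketch (with illustrative names; endomorphisms act on row vectors over $\mathbb{Z}_p$, so the image of $v\mapsto ve$ is the row space of the matrix $e$):

```python
def row_space_basis(mat, p):
    # Gaussian elimination over Z_p (p prime): an echelon basis for the
    # row space of mat, i.e. for the image of the map v -> v * mat.
    rows = [[x % p for x in r] for r in mat]
    basis = []
    for col in range(len(mat[0])):
        pivot = next((r for r in rows if r[col] != 0), None)
        if pivot is None:
            continue
        rows.remove(pivot)
        inv = pow(pivot[col], -1, p)          # inverse mod p (Python >= 3.8)
        pivot = [(x * inv) % p for x in pivot]
        rows = [[(r[i] - r[col] * pivot[i]) % p for i in range(len(r))]
                for r in rows]
        basis.append(pivot)
    return basis

# A frame for End(Z_5^2) = M_2(Z_5): two orthogonal idempotents summing to 1
e1, e2 = [[1, 0], [0, 0]], [[0, 0], [0, 1]]
factors = [row_space_basis(e, 5) for e in (e1, e2)]
```

Here `factors` recovers the decomposition $\mathbb{Z}_5^2=\mathbb{Z}_5(1,0)\oplus\mathbb{Z}_5(0,1)$.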
\begin{remark}\label{rem:matrix}
In the context of groups of matrices our solution to
{\sc Abelian.Remak-$\Omega$-decomposition} cannot be used, as it invokes
integer factorization, and {\sc Member} becomes a version of the discrete logarithm problem in that
setting. The primes involved in the orders
of matrix groups can be exponentially large in the input length and so these two
routines are infeasible. For solvable matrix groups whose primes are bounded, the so-called
$\Gamma_d$-matrix groups, the required problems
in this section have polynomial-time solutions, cf. \cite{Luks:mat,Taku}.
\end{remark}
\begin{prob}{\sc Irreducible}\label{prob:Irreducible}\cite[Corollary 5.4]{Ronyai}
\begin{description}
\item[Given] an associative unital ring $R$, an abelian group $V$, and a homomorphism $\varphi:R\to \End V$,
\item[Return] an irreducible $R$-submodule of $V$.
\end{description}
\end{prob}
As with {\sc Frame}, there are nearly optimal nondeterministic methods for {\sc Irreducible},
for example the MeatAxe \cite{Meataxe1,Meataxe2}; however, here we are concerned solely with
deterministic methods.
\begin{prob}{\sc Minimal-$\Omega$-Normal}\label{prob:MinSNorm}
\begin{description}
\item[Given] an $\Omega$-group $G$ where $\Omega$ acts on $G$ as automorphisms,
\item[Return] a minimal $(\Omega\cup G)$-subgroup of $G$.
\end{description}
\end{prob}
\begin{prop}
{\sc Minimal-$\Omega$-Normal} has a polynomial-time solution.
\end{prop}
\begin{proof}
Let $G=(\mathtt{X}: \mathtt{R})\in\mathbb{G}_n^{\Omega}$.
\emph{Algorithm.}
Use {\sc Minimal-Normal} to compute a minimal normal subgroup $N$ of $G$.
Using {\sc Member}, run the following transitive closure: set $M:=N$, then
while there exists $w\in \Omega\cup \mathtt{X}$ such that $M^w\neq M$, set $M:=\langle M,M^w\rangle$.
Now $M=\langle N^{\Omega\cup G}\rangle$. If $N$ is non-abelian then return $M$; otherwise,
treat $M$ as an $(\Omega\cup G)$-module and use {\sc Irreducible} to find an irreducible
$(\Omega\cup G)$-submodule $K$ of $M$. Return $K$.
\emph{Correctness.}
Note that $M=\langle N^{\Omega\cup G}\rangle=N N^{w_1} N^{w_2}\cdots N^{w_t}$ for some
$w_1,\dots,w_t\in \langle \Omega\theta\rangle\ltimes G\leq \Aut G\ltimes G$. As $N$ is minimal normal,
so is each $N^{w_i}$ and therefore $M$ is a direct product of isomorphic simple groups.
If $N$ is non-abelian then the normal subgroups of $M$ are its direct factors and furthermore, every
direct factor $F$ of $M$ satisfies $M=\langle F^{\Omega\cup G}\rangle$. If $N$ is abelian then $N\cong\mathbb{Z}_p^d$
for some prime $p$, and $M$ is elementary abelian. A minimal $(\Omega\cup G)$-subgroup of $M$ is therefore an irreducible
$(\Omega\cup G)$-submodule of $M$.
\emph{Timing.} The algorithm first executes a normal closure using the polynomial-time solution to
{\sc Member}. We test if $N$ is abelian by computing the commutators of its generators. The final
step is the polynomial-time solution to {\sc Irreducible}.
\end{proof}
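The transitive-closure loop of the algorithm can be illustrated on small permutation groups. The following Python sketch (illustrative only; it stores a subgroup by listing its elements, which is exponential in general and so is not the polynomial-time data structure used above) adjoins conjugates of the generators until the subgroup is stable:

```python
def mul(a, b):
    # compose permutations given as tuples: apply a, then b
    return tuple(b[i] for i in a)

def inv(a):
    out = [0] * len(a)
    for i, j in enumerate(a):
        out[j] = i
    return tuple(out)

def generated(gens):
    # all elements of <gens>, by breadth-first closure
    idp = tuple(range(len(gens[0])))
    elems, frontier = {idp}, [idp]
    while frontier:
        nxt = []
        for x in frontier:
            for g in gens:
                y = mul(x, g)
                if y not in elems:
                    elems.add(y)
                    nxt.append(y)
        frontier = nxt
    return elems

def normal_closure(gens_N, gens_G):
    # the transitive-closure loop: enlarge M while some conjugate escapes
    gens = list(gens_N)
    M = generated(gens)
    changed = True
    while changed:
        changed = False
        for g in gens_G:
            for x in list(M):
                y = mul(mul(inv(g), x), g)  # the conjugate x^g
                if y not in M:
                    gens.append(y)
                    M = generated(gens)
                    changed = True
    return M
```

In $S_3$, for example, the normal closure of a transposition is all of $S_3$, while a $3$-cycle already generates a normal subgroup.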
\section{Lifting, extending, and matching direct decompositions}\label{sec:lift-ext}
We dedicate this section to understanding when a direct decomposition of a quotient or subgroup
lifts or extends to a direct decomposition of the whole group. Ultimately we plan to use these
ideas in the algorithm for \thmref{thm:FindRemak}, but the questions are of independent interest. The highlights of this section are Theorems \ref{thm:Lift-Extend} and \ref{thm:chain} and Corollaries \ref{coro:canonical-graders} and \ref{coro:canonical-grader-II}.
Fix a short exact sequence of $\Omega$-groups:
\begin{equation}\label{eq:SES}
\xymatrix{
1\ar[r] & K \ar[r]^{i} & G\ar[r]^{q} & Q\ar[r] & 1.
}
\end{equation}
With respect to \eqref{eq:SES} we study instances of the following problems.
\begin{description}
\item[Extend] for which direct $(\Omega\cup G)$-decompositions $\mathcal{K}$ of $K$
is there a Remak $\Omega$-decomposition $\mathcal{R}$ of $G$ such that
$\mathcal{K}i = \mathcal{R}\cap (Ki)$?
\item[Lift] for which direct $(\Omega\cup G)$-decompositions $\mathcal{Q}$ of
$Q$ is there a Remak $\Omega$-decomposition $\mathcal{R}$ of $G$ such that
$\mathcal{Q} = \mathcal{R}q$?
\item[Match] for which pairs $(\mathcal{K},\mathcal{Q})$ of direct $(\Omega\cup G)$-decompositions of
$K$ and $Q$, respectively, is there a Remak $\Omega$-decomposition $\mathcal{R}$ of $G$ which is an extension of
$\mathcal{K}$ and a lift of $\mathcal{Q}$, i.e. $\mathcal{K}i=\mathcal{R}\cap (Ki)$ and
$\mathcal{Q}=\mathcal{R}q$?
\end{description}
Finding direct decompositions which extend or lift is surprisingly easy
(\thmref{thm:Lift-Extend}), but we have had only narrow success in finding matches.
Crucial exceptions are $p$-groups of class $2$ (\thmref{thm:Match-class2})
where the problem reduces to commutative ring theory.
\subsection{Graded extensions}\label{sec:induced}
In this section we place some reasonable parameters on the short exact sequences
which we consider in the role of \eqref{eq:SES}.
This section depends mostly on the material of Sections
\ref{sec:op-groups}--\ref{sec:decomps}.
\begin{lemma}\label{lem:induced}
Let $G$ be a group with a direct $\Omega$-decomposition $\mathcal{H}$. If $X$ is an
$(\Omega\cup G)$-subgroup of $G$ and $X=\langle \mathcal{H}\intersect X\rangle$, then
\begin{enumerate}[(i)]
\item $\mathcal{H}\intersect X$ is a direct $\Omega$-decomposition of $X$,
\item
$\mathcal{H}X/X$ is a direct $\Omega$-decomposition of $G/X$,
\item
$\mathcal{H}-\{H\in\mathcal{H} : H\leq X\}$, $\mathcal{H}X$, and $\mathcal{H}X/X$
are in a natural bijection, and
\item if $Y$ is an $(\Omega\cup G)$-subgroup of $G$ with $Y=\langle\mathcal{H}\cap Y\rangle$
then $X\cap Y=\langle\mathcal{H}\cap (X\cap Y)\rangle$ and
$XY=\langle \mathcal{H}\cap XY\rangle$.
\end{enumerate}
\end{lemma}
\begin{proof} For (i), $(H\cap X)\cap \langle \mathcal{H}\cap X-
\{H\cap X\}\rangle=1$ for all $H\cap X\in\mathcal{H}\cap X$.
For (ii), let $|\mathcal{H}|>1$, take $H\in\mathcal{H}$, and set
$J=\langle\mathcal{H}-\{H\}\rangle$. From (i):
$HX\intersect JX=(H\times (J\intersect X))\intersect
((H\intersect X)\times J)=(H\intersect X)\times (J\intersect X)=X$.
For (iii), the functions $H\mapsto HX\mapsto HX/X$, for each
$H\in\mathcal{H}-\{H\in\mathcal{H}: H\leq X\}$, suffice. Finally for (iv),
let $g\in X\cap Y$. So there are unique $h\in H$ and $k\in\langle\mathcal{H}-\{H\}\rangle$
with $g=hk$. By (i) and the uniqueness, we get that $h\in (H\cap X)\cap (H\cap Y)$ and
$k\in \langle \mathcal{H}-\{H\}\rangle \cap (X\cap Y)$. So
$g\in \langle \{H\cap (X\cap Y),\langle \mathcal{H}-\{H\}\rangle \cap (X\cap Y)\}\rangle$.
By induction on $|\mathcal{H}|$, $X\cap Y\leq \langle \mathcal{H}\cap (X\cap Y)\rangle
\leq X\cap Y$, so equality holds. The claim for $XY$ is proved similarly.
\end{proof}
We now specify which short exact sequence we consider.
\begin{defn}\label{def:graded}
A short exact sequence $1\to K\overset{i}{\to} G\overset{q}{\to} Q\to 1$ of $\Omega$-groups
is \emph{$\Omega$-graded} if for every (finite) direct $\Omega$-decomposition
$\mathcal{H}$ of $G$, it follows that $Ki = \langle \mathcal{H}\cap (Ki) \rangle$.
Also, if $M$ is an $(\Omega\cup G)$-subgroup of $G$ such that
the canonical short exact sequence $1\to M\to G\to G/M\to 1$ is $\Omega$-graded
then we say that $M$ is $\Omega$-graded.
\end{defn}
\lemref{lem:induced} parts (i) and (ii) imply that every direct $\Omega$-decomposition of
$G$ induces direct $\Omega$-decompositions of $K$ and $Q$ whenever
$1\to K\overset{i}{\to} G\overset{q}{\to} Q\to 1$ is $\Omega$-graded. The universal quantifier
in the definition of graded exact sequences may seem difficult to satisfy; nevertheless, in
Section \ref{sec:direct-ext} we show many well-known subgroups are graded, for example the commutator
subgroup.
\begin{prop}\label{prop:graded-lat}
\begin{enumerate}[(i)]
\item If $M$ is an $\Omega$-graded subgroup of $G$ and $N$ an $(\Omega\cup G)$-graded subgroup
of $M$, then $N$ is an $\Omega$-graded subgroup of $G$.
\item
The set of $\Omega$-graded subgroups of $G$ is a modular
sublattice of the lattice of $(\Omega\cup G)$-subgroups of $G$.
\end{enumerate}
\end{prop}
\begin{proof}
For (i), if $\mathcal{H}$ is a direct $\Omega$-decomposition of $G$ then by
\lemref{lem:induced}(i), $\mathcal{H}\cap M$ is a direct $\Omega$-decomposition of $M$
and so $\mathcal{H}\cap N=(\mathcal{H}\cap M)\cap N$ is a direct
$\Omega$-decomposition of $N$. Also (ii) follows from \lemref{lem:induced}(iv).
\end{proof}
\begin{lem}\label{lem:KRS}
For all Remak $\Omega$-decompositions $\mathcal{H}$ and all
direct $\Omega$-decompositions $\mathcal{K}$ of $G$,
\begin{enumerate}[(i)]
\item $\mathcal{H}M$ refines $\mathcal{K}M$ for all
$(\Omega\cup G)$-subgroups $M\geq \zeta_1(G)$,
\item $\mathcal{H}\intersect M$ refines $\mathcal{K}\intersect M$ for all
$(\Omega\cup G)$-subgroups $M \leq \gamma_2(G)$.
\end{enumerate}
\end{lem}
\begin{proof} Let $\mathcal{T}$ be a Remak $\Omega$-decomposition of $G$ which refines
$\mathcal{K}$. By \thmref{thm:KRS}, there is a $\varphi\in \Aut_{\Omega\cup G} G$ such that
$\mathcal{H}\varphi=\mathcal{T}$. From \eqref{eq:central} it follows that
$\mathcal{H}\zeta_1(G)=\mathcal{H}\zeta_1(G)\varphi=\mathcal{T}\zeta_1(G)$ and
$\mathcal{H}\cap \gamma_2(G)=(\mathcal{H}\cap\gamma_2(G))\varphi=\mathcal{T}\cap\gamma_2(G)$.
As $\mathcal{T}$ refines $\mathcal{K}$, both (i) and (ii) follow.
\end{proof}
\begin{thm}\label{thm:Lift-Extend}
Given the commutative diagram in \figref{fig:LIFT-EXT} which is exact and $\Omega$-graded
in all rows and all columns, the following hold.
\begin{figure}
\caption{A commutative diagram of $\Omega$-groups which is exact and $\Omega$-graded in all rows and all columns.}
\label{fig:LIFT-EXT}
\end{figure}
\begin{enumerate}[(i)]
\item
If $\zeta_1(\hat{Q})r=1$ then for every Remak $\Omega$-decomposition $\hat{\mathcal{Q}}$ of $\hat{Q}$ and
every Remak $\Omega$-decomposition $\mathcal{H}$ of $G$, $\mathcal{Q}:=\hat{\mathcal{Q}}r$ refines
$\mathcal{H}q$.
In particular, $\mathcal{H}$ lifts a partition of $\mathcal{Q}$ which is unique to $(G, i,q)$.
\item
If $\gamma_2(K)\leq \hat{K}j$ then for every Remak $(\Omega\cup G)$-decomposition $\mathcal{K}$ of $K$ and
every Remak $\Omega$-decomposition $\mathcal{H}$ of $G$, $\mathcal{K}i\cap \hat{K}\hat{i}$
refines $\mathcal{H}\cap \left(\hat{K}\hat{i}\right)$.
In particular, $\mathcal{H}$ extends a partition of $\hat{\mathcal{K}}:=
(\mathcal{K}\cap \hat{K}j)j^{-1}$ which is unique to $(G,\hat{i},\hat{q})$.
\end{enumerate}
\end{thm}
\begin{proof}
Fix a Remak $\Omega$-decomposition $\mathcal{H}$ of $G$.
As $\hat{K}$ and $K$ are $\Omega$-graded, it follows that $\mathcal{H}\hat{q}$ is
a direct $\Omega$-decomposition of $\hat{Q}$ (\lemref{lem:induced}(ii)).
Let $\mathcal{T}$ be a Remak $\Omega$-decomposition of $\hat{Q}$ which refines
$\mathcal{H}\hat{q}$. By \lemref{lem:KRS}(i), $\hat{\mathcal{Q}}\zeta_1(\hat{Q})
=\mathcal{T}\zeta_1(\hat{Q})$ and so $\hat{\mathcal{Q}}r = \mathcal{T}r$. Therefore,
$\mathcal{Q}:=\hat{\mathcal{Q}}r$ refines $\mathcal{H}\hat{q}r=\mathcal{H}q$. That proves (i).
To prove (ii), by \lemref{lem:induced}(i) we have that
$\mathcal{H}\cap (Ki)$ is a direct $(\Omega\cup G)$-decomposition of $Ki$.
Let $\mathcal{T}$ be a Remak $(\Omega\cup G)$-decomposition
of $Ki$ which refines $\mathcal{H}\cap (Ki)$. By \lemref{lem:KRS}(ii),
$\hat{\mathcal{K}}=\mathcal{K}i\cap (\hat{K}\hat{i})=\mathcal{T}\cap \left(\hat{K}\hat{i}\right)$.
Therefore, $\mathcal{K}i \cap \left(\hat{K}\hat{i}\right)$ refines
$\mathcal{H}\cap \left(\hat{K}\hat{i}\right)$.
\end{proof}
\thmref{thm:Lift-Extend} implies the following special setting where the match problem can be answered.
This is the only instance we know where the matching problem can be solved without considering
the cohomology of the extension.
\begin{coro}\label{coro:Match-perfect-centerless}
If $1\to K\to G\to Q\to 1$ is an $\Omega$-graded short exact sequence where
$K=\gamma_2(K)$ and $\zeta_1(Q)=1$; then for every
Remak $(\Omega\cup G)$-decomposition $\mathcal{K}$ of $K$ and $\mathcal{Q}$ of $Q$,
there are partitions $[\mathcal{K}]$ and $[\mathcal{Q}]$ unique to the short exact sequence
such that every Remak $\Omega$-decomposition $\mathcal{H}$ of $G$ matches
$([\mathcal{K}],[\mathcal{Q}])$.
\end{coro}
\subsection{Direct classes, and separated and refined decompositions}\label{sec:direct-class}
In this section we begin our work to consider the extension, lifting, and matching problems
in a constructive fashion. We introduce classes of groups which are
closed to direct products and direct decompositions and show how to use these classes to
control the exchange of direct factors.
\begin{defn}
A class $\mathfrak{X}$ (or $\mathfrak{X}^{\Omega}$ if context demands)
of $\Omega$-groups is \emph{direct} if $1\in\mathfrak{X}$,
and $\mathfrak{X}$ is closed to $\Omega$-isomorphisms, as well as the following:
\begin{enumerate}[(i)]
\item if $G\in\mathfrak{X}$ and $H$ is a direct $\Omega$-factor of $G$, then $H\in\mathfrak{X}$,
and
\item if $H,K\in\mathfrak{X}$ then $H\times K\in\mathfrak{X}$.
\end{enumerate}
\end{defn}
Every variety of $\Omega$-groups is a direct class by \thmref{thm:BK} and to specify the finite
groups in a direct class it is sufficient to specify the directly $\Omega$-indecomposable groups
it contains. However, in practical terms there are few settings where the directly
$\Omega$-indecomposable groups are known.
\begin{defn}
A direct $\Omega$-decomposition $\mathcal{H}$ is \emph{$\mathfrak{X}$-separated} if
for each $H\in\mathcal{H}-\mathfrak{X}$, every nontrivial direct $\Omega$-factor
$K$ of $H$ satisfies $K\notin\mathfrak{X}$. If additionally every member of
$\mathcal{H}\cap \mathfrak{X}$ is directly $\Omega$-indecomposable, then
$\mathcal{H}$ is \emph{$\mathfrak{X}$-refined}.
\end{defn}
\begin{prop}\label{prop:direct-class}
Suppose that $\mathfrak{X}$ is a direct class of $\Omega$-groups, $G$ an $\Omega$-group,
and $\mathcal{H}$ a direct $\Omega$-decomposition of $G$. The following hold.
\begin{enumerate}[(i)]
\item $\langle\mathcal{H}\cap\mathfrak{X}\rangle\in\mathfrak{X}$.
\item
If $\mathcal{H}$ is $\mathfrak{X}$-separated and $\mathcal{K}$ is a direct $\Omega$-decomposition of $G$ which refines $\mathcal{H}$, then $\mathcal{K}$ is $\mathfrak{X}$-separated.
\item $\mathcal{H}$ is $\mathfrak{X}$-separated
if, and only if, $\{\langle\mathcal{H}-\mathfrak{X}\rangle,
\langle\mathcal{H}\cap\mathfrak{X}\rangle\}$ is $\mathfrak{X}$-separated.
\item Every Remak $\Omega$-decomposition is $\mathfrak{X}$-refined.
\item If $\mathcal{H}$ and $\mathcal{K}$ are $\mathfrak{X}$-separated direct
$\Omega$-decompositions of $G$ then
$(\mathcal{H}-\mathfrak{X})\sqcup (\mathcal{K}\cap\mathfrak{X})$ is an
$\mathfrak{X}$-separated direct $\Omega$-decomposition of $G$.
\end{enumerate}
\end{prop}
\begin{proof}
First, (i) follows as $\mathfrak{X}$ is closed to direct $\Omega$-products.
For (ii), notice that a direct $\Omega$-factor of a $K\in \mathcal{K}$ is also a
direct $\Omega$-factor of the unique $H\in\mathcal{H}$ where $K\leq H$.
For (iii), the reverse direction follows from (ii). For the forward direction,
let $K$ be a direct $\Omega$-factor of $\langle \mathcal{H}-\mathfrak{X}\rangle$.
Because $\mathfrak{X}$ is closed to
direct $\Omega$-factors, if $K\in\mathfrak{X}$ then so is every directly
$\Omega$-indecomposable direct $\Omega$-factor of $K$, and so we may assume that $K$ is
directly $\Omega$-indecomposable. Therefore $K$ lies in a Remak $\Omega$-decomposition
of $\langle \mathcal{H}-\mathfrak{X}\rangle$. Let $\mathcal{R}$ be
a Remak $\Omega$-decomposition of $\langle \mathcal{H}-\mathfrak{X}\rangle$
which refines $\mathcal{H}-\mathfrak{X}$. By \thmref{thm:KRS} there
is a $\varphi\in \Aut_{\Omega\cup G} \langle \mathcal{H}-\mathfrak{X}\rangle$
such that $K\varphi\in \mathcal{R}$ and so $K\varphi$ is a direct
$\Omega$-factor of the unique $H\in\mathcal{H}$ where $K\varphi\leq H$.
As $\mathcal{H}$ is $\mathfrak{X}$-separated
and $K\varphi$ is a direct $\Omega$-factor of $H\in\mathcal{H}$, it follows
that $K\varphi \notin\mathfrak{X}$. Thus, $K\notin\mathfrak{X}$ and
$\{\langle\mathcal{H}-\mathfrak{X}\rangle,
\langle\mathcal{H}\cap\mathfrak{X}\rangle\}$ is $\mathfrak{X}$-separated.
For (iv), note that elements of a Remak $\Omega$-decomposition have no proper
direct $\Omega$-factors.
Finally for (v), let $\mathcal{R}$ and $\mathcal{T}$ be Remak $\Omega$-decompositions
of $G$ which refine $\mathcal{H}$ and $\mathcal{K}$ respectively.
Set $\mathcal{U}=\{R\in\mathcal{R}: R\leq \langle \mathcal{H}\cap \mathfrak{X}\rangle\}$.
By \thmref{thm:KRS} there is a $\varphi\in\Aut_{\Omega\cup G} G$ such that
$\mathcal{U}\varphi\subseteq \mathcal{T}$ and $\mathcal{R}\varphi
=(\mathcal{R}-\mathcal{U})\sqcup \mathcal{U}\varphi$. As $\mathfrak{X}$ is closed
to isomorphisms, it follows that $\mathcal{U}\varphi\subseteq\mathcal{T}\cap\mathfrak{X}$.
As $\mathcal{H}$ is $\mathfrak{X}$-separated, $\mathcal{U}=\mathcal{R}\cap\mathfrak{X}$.
As $\Aut_{\Omega\cup G} G$ is transitive on the set of all Remak $\Omega$-decompositions
of $G$ (\thmref{thm:KRS}), we have that
$|\mathcal{T}\cap\mathfrak{X}|=|\mathcal{R}\cap\mathfrak{X}|=|\mathcal{U}\varphi|$.
In particular, $\mathcal{U}\varphi=\mathcal{T}\cap\mathfrak{X}=
\{T\in\mathcal{T}: T\leq \langle\mathcal{K}\cap\mathfrak{X}\rangle\}$. Hence,
$\mathcal{R}\varphi$ refines $(\mathcal{H}-\mathfrak{X})\sqcup (\mathcal{K}\cap \mathfrak{X})$
and so the latter is a direct $\Omega$-decomposition.
\end{proof}
\subsection{Up grades and down grades}\label{sec:direct-ext}
Here we introduce a companion subgroup to a direct class $\mathfrak{X}$ of $\Omega$-groups.
These groups specify the kernels we consider in the problems of extending and lifting
in concrete settings.
\begin{defn}\label{def:grader}
An \emph{up $\Omega$-grader} (resp. \emph{down $\Omega$-grader}) for a direct class $\mathfrak{X}$
of $\Omega$-groups is a function $G\mapsto \mathfrak{X}(G)$ of finite $\Omega$-groups $G$ where
$\mathfrak{X}(G)\in\mathfrak{X}$ (resp. $G/\mathfrak{X}(G)\in\mathfrak{X}$) and such that the
following hold.
\begin{enumerate}[(i)]
\item If $G\in\mathfrak{X}$ then $\mathfrak{X}(G)=G$ (resp. $\mathfrak{X}(G)=1$).
\item $\mathfrak{X}(G)$ is an $\Omega$-graded subgroup of $G$.
\item For every direct $\Omega$-factor $H$ of $G$, $\mathfrak{X}(H)=H\cap \mathfrak{X}(G)$.
\end{enumerate}
The pair $(\mathfrak{X},G\mapsto \mathfrak{X}(G))$ is an up/down \emph{$\Omega$-grading pair}.
\end{defn}
If $(\mathfrak{X},G\mapsto\mathfrak{X}(G))$ is an $\Omega$-grading pair then we have
$\mathfrak{X}(H\times K)=\mathfrak{X}(H)\times \mathfrak{X}(K)$.
First we concentrate on general and useful instances of grading pairs.
\begin{prop}\label{prop:V-inter-1}
The marginal subgroup of a variety of $\Omega$-groups is an up $\Omega$-grader and the
verbal subgroup is a down $\Omega$-grader for the variety.
\end{prop}
\begin{proof}
Let $\mathfrak{V}=\mathfrak{V}^{\Omega}$ be a variety of $\Omega$-groups with
defining laws $\mathtt{W}$ and fix an $\Omega$-group $G$. As the marginal function is idempotent,
\eqref{eq:marginal-closure} implies that $\mathfrak{V}^*(G)\in\mathfrak{V}$ and
that if $G\in\mathfrak{V}$ then $G=\mathfrak{V}^*(G)$. Similarly,
verbal subgroups are radical so that by \eqref{eq:verbal-closure} we have
$G/\mathfrak{V}(G)\in\mathfrak{V}$ and when $G\in\mathfrak{V}$ then $\mathfrak{V}(G)=1$.
It remains to show properties (ii) and (iii) of \defref{def:grader}.
Fix a direct $\Omega$-decomposition $\mathcal{H}$ of $G$, fix an $H\in\mathcal{H}$, and
set $K=\langle\mathcal{H}-\{H\}\rangle$.
For each $f\in G^{\mathtt{X}}=(H\times K)^{\mathtt{X}}$ there are unique $f_H\in H^{\mathtt{X}}$ and $f_K\in K^{\mathtt{X}}$
such that $f=f_H f_K$.
Thus, for all $w\in \mathtt{W}$, $w(f)=w(f_H)w(f_K)$ and so
$w(H\times K)=w(H)\times w(K)$. Hence, $\mathfrak{V}(H\times K)=\mathfrak{V}(H)\times
\mathfrak{V}(K)$. By induction
on $|\mathcal{H}|$, $\mathcal{H}\cap\mathfrak{V}(G)=\{\mathfrak{V}(H):H\in\mathcal{H}\}$ is
a direct $\Omega$-decomposition of $\mathfrak{V}(G)$. So $\mathfrak{V}(G)$ is a down $\Omega$-grader.
For the marginal case, for all $f'\in \langle (h,k)\rangle^{\mathtt{X}}\leq (H\times K)^{\mathtt{X}}=G^{\mathtt{X}}$ and
all $f\in G^{\mathtt{X}}$, again there exist unique $f_H,f'_H\in H^{\mathtt{X}}$ and $f_K,f'_K\in K^{\mathtt{X}}$ such that $f=f_H f_K$
and $f'=f'_H f'_K$.
Also, $w(f f')=w(f)$ if, and only if, $w(f_H f'_H)=w(f_H)$ and $w(f_K f'_K)=w(f_K)$. Thus,
$w^*(H\times K)=w^*(H)\times w^*(K)$. Hence, $\mathfrak{V}^*(H\times K)
=\mathfrak{V}^*(H)\times \mathfrak{V}^*(K)$ and by induction
$\mathcal{H}\cap\mathfrak{V}^*(G)$ is a direct $\Omega$-decomposition of $\mathfrak{V}^*(G)$.
Thus, $\mathfrak{V}^*(G)$ is an up $\Omega$-grader.
\end{proof}
\begin{remark}
There are examples of infinite direct decompositions $\mathcal{H}$ of infinite groups $G$
and varieties $\mathfrak{V}$, where $\mathfrak{V}(G)\neq \langle \mathcal{H}\cap \mathfrak{V}(G)\rangle$
\cite{Asmanov}.
However, our definition of grading purposefully avoids infinite direct decompositions.
\end{remark}
With \propref{prop:V-inter-1} we get a simultaneous proof of some individually evident examples
of direct ascenders and descenders.
\begin{coro}\label{coro:canonical-graders}
Following the notation of \exref{ex:varieties} we have the following.
\begin{enumerate}[(i)]
\item The class $\mathfrak{N}_c$ of nilpotent groups of class at most $c$ is a direct class
with up grader $G\mapsto \zeta_c(G)$ and down grader $G\mapsto \gamma_c(G)$.
\item The class $\mathfrak{S}_d$ of solvable groups of derived length at most $d$ is a direct class
with up grader $G\mapsto (\delta_d)^*(G)$ and down grader $G\mapsto G^{(d)}$.
\item For each prime $p$ the class $\mathfrak{V}([x,y]z^p)$ of elementary abelian $p$-groups is a direct class with up grader $G\mapsto \Omega_1(\zeta_1(G))$ and down grader $G\mapsto [G,G]\mho_1(G)$.\footnote{Here $\Omega_1(X)=\langle x\in X: x^p=1\rangle$ and $\mho_1(X)=\langle x^p : x\in X\rangle$, which are traditional notations having nothing to do with our use of $\Omega$ for operators elsewhere.}
\end{enumerate}
\end{coro}
We also wish to include direct classes $\mathfrak{N}:=\bigcup_{c\in\mathbb{N}} \mathfrak{N}_c$
and $\mathfrak{S}:=\bigcup_{d\in\mathbb{N}} \mathfrak{S}_d$. These classes are not varieties (they are not closed to infinite direct products as required by \thmref{thm:BK}). Therefore, we must consider alternatives to verbal and marginal subgroups for appropriate graders. Our approach mimics the definitions $G\mapsto O_p(G)$ and $G\mapsto O^p(G)$. We treat only the up grader case.
\begin{defn}
For a class $\mathfrak{X}$, the $\mathfrak{X}$-core, $O_{\mathfrak{X}}(G)$, of a
finite group $G$ is
the intersection of all maximal $(\Omega\cup G)$-subgroups contained
in $\mathfrak{X}$.
\end{defn}
If $\mathfrak{V}$ is a union of a chain $\mathfrak{V}_0\subseteq \mathfrak{V}_1\subseteq\cdots $ of varieties then $1\in\mathfrak{V}$, and so the set of maximal $(\Omega\cup G)$-subgroups
of a group $G$ contained in $\mathfrak{V}$ is nonempty. Also $\mathfrak{V}$ is
closed to subgroups, so that $O_{\mathfrak{V}}(G)\in \mathfrak{V}$.
\begin{ex}\label{ex:cores}
\begin{enumerate}[(i)]
\item $O_{\mathfrak{A}}(G)$ is the intersection of all maximal
normal abelian subgroups of $G$. In general there can be any
number of maximal normal abelian subgroups of $G$, so
$O_{\mathfrak{A}}(G)$ need not be a trivial intersection.
\item $O_{\mathfrak{N}_c}(G)$ is the intersection of
all maximal normal nilpotent subgroups of $G$ with class at most $c$.
As in (i), this need not be a trivial intersection. However, if
$c>\log |G|$ then all nilpotent subgroups of $G$ have class at
most $c$ and therefore $O_{\mathfrak{N}}(G)=O_{\mathfrak{N}_c}(G)$ is the Fitting
subgroup of $G$: the unique maximal normal nilpotent subgroup of $G$.
\item $O_{\mathfrak{S}_d}(G)$, $d>\log |G|$,
is the unique maximal normal solvable subgroup of $G$, i.e.:
the solvable radical $O_{\mathfrak{S}}(G)$ of $G$.
\end{enumerate}
\end{ex}
\begin{lemma}\label{lem:margin-join}
Let $\mathfrak{V}$ be a group variety of $\Omega$-groups and
$G$ an $\Omega$-group. If $H$ is a $\mathfrak{V}$-subgroup of $G$ then
so is $\mathfrak{V}^*(G)H$, that is: $\mathfrak{V}^*(G)H\in\mathfrak{V}$.
\end{lemma}
\begin{proof}
Let $\mathtt{W}$ be a set of defining laws for $\mathfrak{V}$.
Let $f\in G^{\mathtt{X}}$ with $\im f\subseteq \mathfrak{V}^*(G) H$. Thus,
for all $w\in \mathtt{W}$, there is a decomposition $f=f' f''$ where $\im f'\subseteq w^*(G)$
and $\im f''\subseteq H$. As $w^*(G)$ is marginal to $G$ it is marginal to $H$ and so
$w(f)=w(f'')$. As $H\in\mathfrak{V}$, $w(f'')=1$. Thus, $w(f)=1$ and so
$w(w^*(G)H)=1$. It follows that $\mathfrak{V}^*(G)H\in\mathfrak{V}$.
\end{proof}
\begin{prop}\label{prop:margin-core}
If $\mathfrak{V}$ is a group variety of $\Omega$-groups and $G$
an $\Omega$-group, then
\begin{enumerate}[(i)]
\item $\mathfrak{V}^*(G)\leq O_{\mathfrak{V}}(G)$, and
\item if $M$ is an $(\Omega\cup G)$-subgroup then
$O_{\mathfrak{V}}(G)O_{\mathfrak{V}}(M)$
is an $(\Omega\cup G)$-subgroup contained in $\mathfrak{V}$.
\end{enumerate}
\end{prop}
\begin{proof}
$(i)$. By \lemref{lem:margin-join}, every maximal normal
$\mathfrak{V}$-subgroup of $G$ contains $\mathfrak{V}^*(G)$.
$(ii)$. As $M\normaleq G$ and $O_{\mathfrak{V}}(M)$ is characteristic
in $M$, it follows that $O_{\mathfrak{V}}(M)$ is a normal
$\mathfrak{V}$-subgroup of $G$. Thus, $O_{\mathfrak{V}}(M)$ lies
in a maximal normal $\mathfrak{V}$-subgroup $N$ of $G$. As
$O_{\mathfrak{V}}(G)\leq N$ we have $O_{\mathfrak{V}}(G)O_{\mathfrak{V}}(M)
\leq N\in\mathfrak{V}$. As $\mathfrak{V}$ is closed to subgroups,
it follows that $O_{\mathfrak{V}}(G)O_{\mathfrak{V}}(M)$ is
in $\mathfrak{V}$.
\end{proof}
\begin{remark}
It is possible to have $\mathfrak{V}^*(G)<O_{\mathfrak{V}}(G)$. For instance,
with $G=S_3\times C_2$ and the class $\mathfrak{A}$ of abelian groups,
the $\mathfrak{A}$-marginal subgroup is the center $1\times C_2$, whereas the
$\mathfrak{A}$-core is $C_3\times C_2$.
\end{remark}
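The inequality in this remark can be checked mechanically. The Python sketch below (illustrative only; it enumerates all subsets of the $12$-element group, which is feasible only for tiny examples) realizes $S_3\times C_2$ as permutations of $\{0,\dots,4\}$, computes the center and the $\mathfrak{A}$-core by brute force, and confirms $|\zeta_1(G)|=2<6=|O_{\mathfrak{A}}(G)|$:

```python
def mul(a, b):  # compose permutations as tuples: apply a, then b
    return tuple(b[i] for i in a)

def inv(a):
    out = [0] * len(a)
    for i, j in enumerate(a):
        out[j] = i
    return tuple(out)

def generated(gens):
    idp = tuple(range(len(gens[0])))
    elems, frontier = {idp}, [idp]
    while frontier:
        nxt = []
        for x in frontier:
            for g in gens:
                y = mul(x, g)
                if y not in elems:
                    elems.add(y)
                    nxt.append(y)
        frontier = nxt
    return elems

# S3 x C2: S3 acting on {0,1,2}, C2 swapping {3,4}
G = sorted(generated([(1, 0, 2, 3, 4), (1, 2, 0, 3, 4), (0, 1, 2, 4, 3)]))
idp = tuple(range(5))

normal_abelian = []
for mask in range(1 << len(G)):
    S = {G[i] for i in range(len(G)) if mask >> i & 1}
    if (idp in S
            and all(mul(a, b) in S for a in S for b in S)           # subgroup
            and all(mul(a, b) == mul(b, a) for a in S for b in S)   # abelian
            and all(mul(mul(inv(g), s), g) in S for g in G for s in S)):  # normal
        normal_abelian.append(S)

maximal = [S for S in normal_abelian if not any(S < T for T in normal_abelian)]
core = set(G)
for S in maximal:
    core &= S
center = {g for g in G if all(mul(g, h) == mul(h, g) for h in G)}
```

Running this confirms the remark: the center has order $2$ and is properly contained in the $\mathfrak{A}$-core of order $6$.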
\begin{prop}\label{prop:V-inter-core}
Let $G$ be a finite group with a direct decomposition $\mathcal{H}$. If $\mathfrak{V}$ is a group variety then
\begin{equation*}
\mathcal{H}\intersect O_{\mathfrak{V}}(G)
=\{O_{\mathfrak{V}}(H): H\in\mathcal{H}\}
\end{equation*}
and this is a direct decomposition of $O_{\mathfrak{V}}(G)$. In particular,
$G\mapsto O_{\mathfrak{V}}(G)$ is an up $\Omega$-grader. Furthermore, if $\mathfrak{V}$ is a union of a chain $\mathfrak{V}_0\subseteq \mathfrak{V}_1\subseteq\cdots $ of group varieties then $G\mapsto O_{\mathfrak{V}}(G)$
is an up $\Omega$-grader.
\end{prop}
\begin{proof}
Let $H\in \mathcal{H}$ and $K:=\langle \mathcal{H}-\{H\}\rangle$.
Let $M$ be a maximal normal $\mathfrak{V}$-subgroup of $G=H\times K$.
Let $M_H$ be the projection of $M$ to the $H$-component. As
$\mathfrak{V}$ is closed to homomorphic images,
$M_H\in\mathfrak{V}$. Furthermore, $M_H\normaleq H$ so there
is a maximal normal $\mathfrak{V}$-subgroup $N$ of $H$ such that
$M_H\leq N$.
We claim that $MN\in\mathfrak{V}$.
As $G=H\times K$, every $g\in M$ has the unique form $g=hk$,
$h\in H$, $k\in K$. As $M_H$ is the projection of $M$ to $H$,
$h\in M_H\leq N$. Thus, $g,h\in MN$ so $k\in MN$. Thus,
$MN=N\times M_K$, where $M_K$ is the projection of $M$ to $K$.
Now let $w$ be any defining law of $\mathfrak{V}$.
For each $f:\mathtt{X}\to MN$, write $f=f_N \times f_K$ where $f_N:\mathtt{X}\to N$ and
$f_K:\mathtt{X}\to M_K$. Hence, $w(f)=w(f_N \times f_K)=w(f_N)\times w(f_K)$.
However, $w(N)=1$ and $w(M_K)=1$ as $N,M_K\in\mathfrak{V}$. Thus,
$w(f)=1$, which proves that $w(MN)=1$. So $MN\in\mathfrak{V}$
as claimed.
As $M$ is a maximal normal $\mathfrak{V}$-subgroup of $G$,
$M=MN$ and $N=M_H$. Hence, $H\intersect M=N$ is a maximal normal
$\mathfrak{V}$-subgroup of $H$. So we have characterized the
maximal normal $\mathfrak{V}$-subgroups of $G$ as
the direct products of maximal normal $\mathfrak{V}$-subgroups of
members $H\in\mathcal{H}$.
Thus, $\mathcal{H}\intersect O_{\mathfrak{V}}(G)
=\{O_{\mathfrak{V}}(H) : H\in\mathcal{H}\}$ and this generates
$O_{\mathfrak{V}}(G)$. By \lemref{lem:induced}, $\mathcal{H}\intersect
O_{\mathfrak{V}}(G)$ is a direct decomposition of $O_{\mathfrak{V}}(G)$.
\end{proof}
\begin{coro}\label{coro:canonical-grader-II}
\begin{enumerate}[(i)]
\item The class $\mathfrak{N}$ of nilpotent groups is a direct class and
$G\mapsto O_{\mathfrak{N}}(G)$ (the Fitting subgroup) is an up grader.
\item The class $\mathfrak{S}$ of solvable groups is a direct class and
$G\mapsto O_{\mathfrak{S}}(G)$ (the solvable radical) is an up grader.
\end{enumerate}
\end{coro}
\begin{proof}
For a finite group $G$, the Fitting subgroup is the $\mathfrak{N}_c$-core
where $c>|G|$. Likewise, the solvable radical is the $\mathfrak{S}_d$-core
where $d>|G|$. The rest follows from \propref{prop:V-inter-core}.
\end{proof}
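As a sanity check on the characterization of the Fitting subgroup as the $\mathfrak{N}_c$-core, the sketch below (brute force on $S_3$, an illustration only) finds the largest normal subgroup whose lower central series terminates at the identity, recovering $\mathrm{Fit}(S_3)=C_3$.

```python
from itertools import combinations

# S3 as permutations of {0,1,2}
perms = [(0,1,2),(0,2,1),(1,0,2),(1,2,0),(2,0,1),(2,1,0)]
e = (0,1,2)

def mul(p, q):
    return tuple(q[p[i]] for i in range(3))

def inv(p):
    q = [0, 0, 0]
    for i, v in enumerate(p):
        q[v] = i
    return tuple(q)

def comm(x, y):                       # [x, y] = x^-1 y^-1 x y
    return mul(mul(mul(inv(x), inv(y)), x), y)

def closure(gens):
    S = {e}
    while True:
        new = {mul(s, g) for s in S for g in gens} - S
        if not new:
            return S
        S |= new

def is_nilpotent(S):                  # lower central series reaches {e}?
    T = set(S)
    while True:
        T2 = closure({comm(x, y) for x in S for y in T})
        if T2 == T:
            return T == {e}
        T = T2

def is_normal(S):
    return all(mul(mul(inv(g), s), g) in S for g in perms for s in S)

subgroups = {frozenset({e})} | {frozenset(closure({a, b}))
                                for a, b in combinations(perms, 2)}
fitting = max((S for S in subgroups if is_normal(S) and is_nilpotent(S)),
              key=len)
assert len(fitting) == 3              # Fit(S3) = C3, the maximal normal nilpotent subgroup
```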
We now turn our attention away from examples of grading pairs and
focus on their uses, chiefly the following ``local-global'' property,
which clarifies, in the up grader case, when a direct factor of a subgroup is also a direct factor
of the whole group.
\begin{prop}\label{prop:extendable}
Let $G\mapsto \mathfrak{X}(G)$ be an up $\Omega$-grader for a direct class $\mathfrak{X}$ of $\Omega$-groups
and let $G$ be an $\Omega$-group.
If $H$ is an $(\Omega\cup G)$-subgroup of $G$ and the following hold:
\begin{enumerate}[(a)]
\item for some direct $\Omega$-factor $R$ of $G$, $H\mathfrak{X}(G)=R\mathfrak{X}(G)>\mathfrak{X}(G)$, and
\item $H$
lies in an $\mathfrak{X}$-separated direct $(\Omega\cup G)$-decomposition
of $H\mathfrak{X}(G)$;
\end{enumerate}
then $H$ is a direct $\Omega$-factor of $G$.
\end{prop}
\begin{proof}
By (a) there is a direct $(\Omega\cup G)$-complement $C$ in $G$ to $R$.
Also $\mathfrak{X}(G)=\mathfrak{X}(R)\times \mathfrak{X}(C)$,
as $\mathfrak{X}(G)$ is $\Omega$-graded. Hence, $R\mathfrak{X}(G)=R\times \mathfrak{X}(C)$.
By (b), there is an $\mathfrak{X}$-separated direct $\Omega$-decomposition
$\mathcal{H}$ of $H\mathfrak{X}(G)$ such that $H\in\mathcal{H}$. As $H\mathfrak{X}(G)>\mathfrak{X}(G)$
it follows that $H\notin\mathfrak{X}$ and so by \lemref{lem:induced}(iii),
$\mathcal{H}-\mathfrak{X}=\{H\}$ and $X=\langle\mathcal{H}\cap\mathfrak{X}\rangle\in \mathfrak{X}$.
So
$$R\times \mathfrak{X}(C)=R\mathfrak{X}(G)=H\mathfrak{X}(G)=H\times X.$$
Let $\mathcal{A}$ be a Remak $(\Omega\cup G)$-decomposition of $R$. Since $\mathfrak{X}(C)\in\mathfrak{X}$,
$\mathcal{A}\sqcup\{\mathfrak{X}(C)\}$ is an $\mathfrak{X}$-separated direct $(\Omega\cup G)$-decomposition
of $R\mathfrak{X}(G)$. By \propref{prop:direct-class}(v),
$$\mathcal{C}=\{H\}\sqcup \{\mathfrak{X}(C)\}\sqcup (\mathcal{A}\cap\mathfrak{X})$$
is an $\mathfrak{X}$-separated direct $(\Omega\cup G)$-decomposition of $R\mathfrak{X}(G)$,
and we note that $\{H\}=\mathcal{C}-\mathfrak{X}$. We claim that
$\{H,C\}\sqcup(\mathcal{A}\cap \mathfrak{X})$ is a direct $\Omega$-decomposition of $G$.
Indeed, $H\cap \langle C,\mathcal{A}\cap \mathfrak{X}\rangle\leq R\mathfrak{X}(G)\cap C\mathfrak{X}(G)
=\mathfrak{X}(G)$ and so $H\cap \langle C,\mathcal{A}\cap \mathfrak{X}\rangle
=H\cap \langle \mathfrak{X}(C),\mathcal{A}\cap \mathfrak{X}\rangle=1$.
Also, $\mathfrak{X}(C)\leq \langle H,C,\mathcal{A}\cap\mathfrak{X}\rangle$ thus
$\langle H,C,\mathcal{A}\cap\mathfrak{X}\rangle=G$. As the members of
$\{H,C\}\sqcup(\mathcal{A}\cap\mathfrak{X})$ are $(\Omega\cup G)$-subgroups we have proved the claim.
In particular, $H$ is a direct $\Omega$-factor of $G$.
\end{proof}
\subsection{Direct chains}\label{sec:chains}
In \thmref{thm:Lift-Extend} we specified conditions under which any direct decomposition of
an appropriate subgroup, resp. quotient, led to a solution of the extension (resp. lifting) problem.
However, within that theorem we see that it is not the direct decomposition of the subgroup
(resp. quotient group) itself which can be extended (resp. lifted), but rather a certain unique
partition of that direct decomposition. Finding the correct partition by trial and error is
an exponentially large search. To avoid this we outline a data structure which enables a
greedy algorithm to find this unique partition. The algorithm itself is given in Section
\ref{sec:merge}. The key result of this section is \thmref{thm:chain}.
Throughout this section we suppose that $G\mapsto \mathfrak{X}(G)$ is an (up) $\Omega$-grader for a
direct class $\mathfrak{X}$.
\begin{defn}\label{def:chain}
A \emph{direct chain} is a proper chain $\mathcal{L}$ of $(\Omega\cup G)$-subgroups starting at $\mathfrak{X}(G)$ and ending at $G$, and where there is a direct $\Omega$-decomposition $\mathcal{R}$ of $G$ with:
\begin{enumerate}[(i)]
\item for all $L\in\mathcal{L}$, $L=\langle\mathcal{R}\cap L\rangle$, and
\item for each $L\in\mathcal{L}-\{G\}$, there is a unique $R\in\mathcal{R}$
such that the successor $M\in\mathcal{L}$ to $L$ satisfies:
$R\mathfrak{X}(G)\cap L\neq R\mathfrak{X}(G)\cap M$. We call $R$ the \emph{direction of $L$}.
\end{enumerate}
We call $\mathcal{R}$ a set of directions for $\mathcal{L}$.
\end{defn}
If $\mathcal{L}$ is a direct chain with directions $\mathcal{R}$, then
for all $L\in\mathcal{L}$, $\mathcal{R}\cap L$ is a direct $\Omega$-decomposition of $L$
(\lemref{lem:induced}(i)). When working with direct chains it helps to remember that
for all $(\Omega\cup G)$-subgroups $L$ and $R$ of $G$, if $\mathfrak{X}(G)\leq L$, then
$(R\cap L)\mathfrak{X}(G)=R\mathfrak{X}(G)\cap L$. Also, if $\mathfrak{X}(G)\leq L< M\leq G$,
$L=\langle\mathcal{R}\cap L\rangle$ and $M=\langle\mathcal{R}\cap M\rangle$, and
\begin{equation}\label{eq:unique-direction}
\forall R\in\mathcal{R}-\mathfrak{X},\qquad
R\mathfrak{X}(G)\cap L=R\mathfrak{X}(G)\cap M
\end{equation}
then $L=\langle\mathcal{R}\cap L\rangle=\langle\mathcal{R}\cap L,\mathfrak{X}(G)\rangle
=\langle\mathcal{R}\cap M,\mathfrak{X}(G)\rangle=\langle\mathcal{R}\cap M\rangle=M$.
Therefore, it suffices
to show there is
at most one $R\in\mathcal{R}-\mathfrak{X}$ such that $R\mathfrak{X}(G)\cap L\neq
R\mathfrak{X}(G)\cap M$.
\begin{lemma}\label{lem:cap}
Suppose that $\mathcal{H}=\mathcal{H}\mathfrak{X}(G)$ is an $(\Omega\cup G)$-decomposition of $G$
such that $\mathcal{H}$ refines $\mathcal{R}\mathfrak{X}(G)$, for a direct $\Omega$-decomposition $\mathcal{R}$. It follows that,
if $L=\langle\mathcal{J},\mathfrak{X}(G)\rangle$, for some $\mathcal{J}\subseteq \mathcal{H}$,
then $L=\langle \mathcal{R}\cap L\rangle$.
\end{lemma}
\begin{proof}
As $\mathfrak{X}(G)\leq L$, for each $R\in\mathcal{R}$, $R\cap \mathfrak{X}(G)\leq R\cap L$.
As $\mathfrak{X}(G)$ is $(\Omega\cup G)$-graded, $\mathfrak{X}(G)=\langle\mathcal{R}\cap \mathfrak{X}(G)\rangle$.
Thus, $\mathfrak{X}(G)\leq \langle \mathcal{R}\cap L\rangle$.
Also, $\mathcal{H}$ refines $\mathcal{R}\mathfrak{X}(G)$. Thus, for each
$J\in\mathcal{J}\subseteq \mathcal{H}$ there is a unique
$R\in\mathcal{R}-\{R\in\mathcal{R}: R\leq \mathfrak{X}(G)\}$ such that $J\leq R\mathfrak{X}(G)$.
As $L=\langle\mathcal{J},\mathfrak{X}(G)\rangle$, $J\leq L$ and so $J\leq R\mathfrak{X}(G)\cap L=(R\cap L)\mathfrak{X}(G)$.
Now $R\cap L,\mathfrak{X}(G)\leq \langle \mathcal{R}\cap L\rangle$ thus
$J\leq \langle\mathcal{R}\cap L\rangle$. Hence $L=\langle\mathcal{J},\mathfrak{X}(G)\rangle\leq
\langle\mathcal{R}\cap L\rangle\leq L$.
\end{proof}
\begin{lemma}\label{lem:drop-H}
If $\mathcal{H}$ is an $(\Omega\cup G)$-decomposition of $G$ and
$\mathcal{R}$ a direct $(\Omega\cup G)$-decomposition of $G$ such that
$\mathcal{H}=\mathcal{H}\mathfrak{X}(G)$ refines $\mathcal{R}\mathfrak{X}(G)$,
then for all $\mathcal{J}\subset\mathcal{H}$ and all $H\in\mathcal{H}-\mathcal{J}$,
there is a unique $R\in\mathcal{R}$ such that $H\leq R\mathfrak{X}(G)$
and
$$\langle\mathcal{R}-\{R\}\rangle \mathfrak{X}(G)\cap \langle H,\mathcal{J},\mathfrak{X}(G)\rangle
=\langle\mathcal{R}-\{R\}\rangle\mathfrak{X}(G)\cap \langle\mathcal{J},\mathfrak{X}(G)\rangle.$$
\end{lemma}
\begin{proof}
Fix $\mathcal{J}\subseteq\mathcal{H}$ and $H\in\mathcal{H}-\mathcal{J}$.
By the definition of refinement there is a unique $R\in\mathcal{R}$ such that
$H\leq R\mathfrak{X}(G)$. Set $J=\langle\mathcal{J},\mathfrak{X}(G)\rangle$ and
$C=\langle \mathcal{R}-\{R\}\rangle$.
By \lemref{lem:cap}, $\mathcal{R}\cap HJ$ and $\mathcal{R}\cap J$ are
direct $(\Omega\cup G)$-decompositions of $HJ$ and $J$ respectively.
As $J=(R\cap J)\times (C\cap J)$ and $\mathfrak{X}(G)\leq J$, we get that
$J=(R\mathfrak{X}(G)\cap J)(C\mathfrak{X}(G)\cap J)$.
Also, $\mathfrak{X}(G)$ is $(\Omega\cup G)$-graded; hence, by
\lemref{lem:induced}(ii), $G/\mathfrak{X}(G)=R\mathfrak{X}(G)/\mathfrak{X}(G)
\times C\mathfrak{X}(G)/\mathfrak{X}(G)$ and $C\mathfrak{X}(G)\cap R\mathfrak{X}(G)=
\mathfrak{X}(G)$.
Combining the modular law with $\mathfrak{X}(G)\leq H\leq R\mathfrak{X}(G)$ and
$R\mathfrak{X}(G)\cap C\mathfrak{X}(G)=\mathfrak{X}(G)$
we have that
\begin{align*}
C\mathfrak{X}(G)\cap HJ
& = C\mathfrak{X}(G) \cap \Big( H(R\mathfrak{X}(G)\cap J)\cdot
(C\mathfrak{X}(G)\cap J)\Big)\\
& = \Big(C\mathfrak{X}(G)\cap H(R\mathfrak{X}(G)\cap J)\Big)
( C\mathfrak{X}(G)\cap J)\\
& = (C\mathfrak{X}(G)\cap R\mathfrak{X}(G)\cap HJ)(C\mathfrak{X}(G)\cap J) \\
& = \mathfrak{X}(G)(C\mathfrak{X}(G)\cap J)=C\mathfrak{X}(G)\cap J.
\end{align*}
Thus, $C\mathfrak{X}(G)\cap HJ=C\mathfrak{X}(G)\cap J$.
\end{proof}
\begin{prop}\label{prop:chain-chain}
If $\mathcal{H}=\mathcal{H}\mathfrak{X}(G)$ is an $(\Omega\cup G)$-decomposition of $G$ and
$\mathcal{R}$ is a direct $\Omega$-decomposition of $G$ such that $\mathcal{H}$
refines $\mathcal{R}\mathfrak{X}(G)$, then every maximal proper chain $\mathscr{C}$ of
subsets of $\mathcal{H}$ induces a direct chain $\{\langle \mathcal{C},\mathfrak{X}(G)\rangle :
\mathcal{C}\in\mathscr{C}\}$.
\end{prop}
\begin{proof}
For each $\mathcal{C}\subseteq\mathcal{H}$, by \lemref{lem:cap},
$\langle\mathcal{C},\mathfrak{X}(G)\rangle=\big\langle\mathcal{R}\cap \langle\mathcal{C},\mathfrak{X}(G)\rangle\big\rangle$.
The rest follows from \lemref{lem:drop-H}.
\end{proof}
The following \thmref{thm:chain} is a critical component of the proof of the algorithm for \thmref{thm:FindRemak}, specifically in proving \thmref{thm:merge}. It says that we can proceed up any direct chain: the $\mathfrak{X}$-separated direct decompositions of lower terms in the chain induce direct factors of the next term in the chain, and in a predictable manner.
\begin{thm}\label{thm:chain}
If $\mathcal{L}$ is a direct chain with directions $\mathcal{R}$,
$L\in\mathcal{L}-\{G\}$, and $R\in\mathcal{R}$ is
the direction of $L$, then for every
$\mathfrak{X}$-separated direct $(\Omega\cup G)$-decomposition $\mathcal{K}$ of $L$
such that $\mathcal{K}\mathfrak{X}(G)$ refines $\mathcal{R}\mathfrak{X}(G)\cap L$,
it follows that
$$\big\{K\in\mathcal{K}-\mathfrak{X}: K\leq
\langle \mathcal{R}-\{R\}\rangle \mathfrak{X}(G)\big\}$$
lies in an $\mathfrak{X}$-separated direct $(\Omega\cup G)$-decomposition of the successor
to $L$.
\end{thm}
\begin{proof}
Let $M$ be the successor to $L$ in $\mathcal{L}$ and set $C=\langle\mathcal{R}-\{R\}\rangle$.
As $\mathcal{K}\mathfrak{X}(G)$ refines
$\mathcal{R}\mathfrak{X}(G)\cap L$, it also refines
$\{R\mathfrak{X}(G)\cap L,C\mathfrak{X}(G)\cap L\}$
and so
\begin{align*}
C\mathfrak{X}(G)\cap L
& = \langle K\in\mathcal{K}, K\leq C\mathfrak{X}(G)\rangle
= \langle K\in\mathcal{K}-\mathfrak{X}, K\leq C\mathfrak{X}(G)\rangle\mathfrak{X}(G).
\end{align*}
Since $\mathcal{K}$ is $\mathfrak{X}$-separated, $F=\langle K\in\mathcal{K}-\mathfrak{X},
K\leq C\mathfrak{X}(G)\rangle$ has no direct $(\Omega\cup G)$-factor in $\mathfrak{X}$. Also,
as the direction of $L$ is $R$, $C\mathfrak{X}(G)\cap M=C\mathfrak{X}(G)\cap L$ and so
\begin{align*}
(C\cap M)\mathfrak{X}(G)
& = C\mathfrak{X}(G)\cap M\\
& = C\mathfrak{X}(G)\cap L\\
& = \langle K\in\mathcal{K}-\mathfrak{X}, K\leq C\mathfrak{X}(G)\rangle \mathfrak{X}(G)\\
& =F \times \langle \mathcal{K}\cap \mathfrak{X}\rangle.
\end{align*}
Using $(M,F,C\cap M)$ in the role of $(G,H,R)$ in \propref{prop:extendable}, it
follows that $F$ is a direct $(\Omega\cup G)$-factor of $M$.
In particular, $\{K\in\mathcal{K}-\mathfrak{X}, K\leq C\mathfrak{X}(G)\}$
lies in a direct $(\Omega\cup G)$-decomposition of $M$.
\end{proof}
\section{Algorithms to lift, extend, and match direct decompositions}\label{sec:lift-ext-algo}
Here we transition into algorithms beginning with a small modification of a technique introduced
by Luks and Wright to find a direct complement to a direct factor (\thmref{thm:FindComp-invariant}). We then produce an algorithm
\textalgo{Merge} (\thmref{thm:merge}) to lift direct decompositions for appropriate quotients.
That algorithm is the work-horse which glues together the unique constituents predicted by \thmref{thm:Lift-Extend}. That task asks us to locate a unique partition of a certain set, but in a manner that does not test each of the exponentially many partitions. The proof relies heavily on results such as \thmref{thm:chain} to prove that an essentially greedy algorithm will suffice.
For brevity we have opted to describe the algorithms only for the case of lifting decompositions. The natural duality of up and down graders makes it possible to modify the methods to prove similar results for extending decompositions.
This section assumes familiarity with Sections \ref{sec:tools} and \ref{sec:lift-ext}.
\subsection{Constructing direct complements}\label{sec:complements}
In this section we solve the following problem in polynomial-time.
\begin{prob}{\sc Direct-$\Omega$-Complement}
\begin{description}
\item[Given] an $\Omega$-group $G$ and an $\Omega$-subgroup $H$,
\item[Return] an $\Omega$-subgroup $K$ of $G$ such that $G=H\times K$,
or certify that no such $K$ exists.
\end{description}
\end{prob}
Luks and Wright gave independent solutions to
{\sc Direct-$\emptyset$-Complement} in
back-to-back lectures at the University of Oregon \cite{Luks:comp,Wright:comp}.
\begin{thm}[Luks \cite{Luks:comp},Wright \cite{Wright:comp}]\label{thm:FindComp}
For groups of permutations,
{\sc Direct-$\emptyset$-Complement} has a polynomial-time solution.
\end{thm}
Both \cite{Luks:comp} and \cite{Wright:comp} reduce
\textalgo{Direct-$\emptyset$-Complement} to the following problem
(here generalized to $\Omega$-groups):
\begin{prob}{\sc $\Omega$-Complement-Abelian}
\begin{description}
\item[Given] an $\Omega$-group $G$ and an abelian $(\Omega\cup G)$-subgroup $M$,
\item[Return] an $\Omega$-subgroup $K$ of $G$ such that $G=M\rtimes K$,
or certify that no such $K$ exists.
\end{description}
\end{prob}
To deal with operator groups we use some modifications to the problems above. Many of the steps
are conceived within the group $\langle \Omega \theta\rangle\ltimes G\leq \Aut G\ltimes G$.
However, to execute these algorithms we cannot assume that $\langle \Omega\theta\rangle\ltimes G$ is a permutation
group as it is possible that these groups have no small degree permutation representations (e.g. $G=\mathbb{Z}_p^d$
and $\langle \Omega\theta\rangle=\GL(d,p)$). Instead we operate
within $G$ and account for the action of $\Omega$ along the way.
\begin{lemma}\label{lem:pres}
Let $G$ be an $\Omega$-group where $\theta:\Omega\to \Aut G$.
If $\{\langle \mathtt{X} |\mathtt{R}\rangle,f,\ell\}$ is a constructive presentation
for $G$ and $\langle \Omega|\mathtt{R}'\rangle$ a presentation
for $A:=\langle \Omega\theta\rangle\leq \Aut G$ with respect to $\theta$, then
$\langle \Omega\sqcup \mathtt{X}| \mathtt{R}'\ltimes\mathtt{R}\rangle$
is a presentation for $A\ltimes G$ with respect to $\theta\sqcup f$, where
\begin{equation*}
\mathtt{R}'\ltimes\mathtt{R}= \mathtt{R}'\sqcup\mathtt{R}\sqcup
\{(xf)^{s}\ell\cdot (x^{s})^{-1} :
x\in\mathtt{X},s\in\Omega\},\textnormal{ and}
\end{equation*}
\begin{equation*}
\forall z\in \Omega\sqcup \mathtt{X},\quad
z(\theta\sqcup f)=\left\{\begin{array}{cc}
z\theta & z\in\Omega,\\
zf & z\in \mathtt{X}.
\end{array}\right.
\end{equation*}
\end{lemma}
\begin{proof}
Without loss of generality we assume
$F(\Omega),F(\mathtt{X})\leq F(\Omega\sqcup\mathtt{X})$.
Let $K$ be the normal closure of $\mathtt{R}'\ltimes \mathtt{R} $ in
$F(\Omega\sqcup\mathtt{X})$. For each $s\in\Omega$ and each $x\in\mathtt{X}$
it follows that $Kx^{s}=K(xf)^{s}\ell\leq N=\langle K,F(\mathtt{X})\rangle$.
In particular, $N$ is normal in $F(\Omega\sqcup\mathtt{X})$. Set
$C=\langle K,F(\Omega)\rangle$. It follows that $F(\Omega\sqcup\mathtt{X})
=\langle C,N\rangle=CN$. Thus, $H=F(\Omega\sqcup\mathtt{X})/K=CN/K=(C/K)(N/K)$
and $N/K$ is normal in $H$.
Since $C/K$ and $N/K$ satisfy the presentations for $A$ and $G$ respectively,
it follows that $H$ is a quotient of $A\ltimes G$. To show that $H\cong A\ltimes G$
it suffices to notice that $A\ltimes G$ satisfies the relations in
$\mathtt{R}'\ltimes\mathtt{R} $, with respect to
$\Omega\sqcup\mathtt{X}$ and $\theta\sqcup f$. Indeed,
for all $s\in \Omega$ and all $x\in\mathtt{X}$ we see that
\begin{align*}
x^{s}(\widehat{\theta\sqcup f})
& = (s\theta^{-1},1)(1,xf)(s\theta,1)
= (1,(xf)^{s})
= (1, (xf)^{s} \ell \hat{f})
= (xf)^{s} \ell(\widehat{\theta\sqcup f}),
\end{align*}
which implies that $(xf)^{s}\ell (x^{s})^{-1}\in \ker \widehat{\theta\sqcup f}$;
so, $K\leq \ker \widehat{\theta\sqcup f}$. Hence, $\langle \Omega\sqcup \mathtt{X}|
\mathtt{R}'\ltimes\mathtt{R} \rangle$ is a presentation for $A\ltimes G$.
\end{proof}
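A concrete instance of \lemref{lem:pres}: take $G=\langle x\mid x^3\rangle$ and $A=\langle s\mid s^2\rangle$ acting by inversion, so the mixed relations $(xf)^{s}\ell\cdot(x^{s})^{-1}$ reduce to $s^{-1}xs=x^{-1}$, and the presented group $A\ltimes G$ is $S_3$. The sketch below (an illustration, with the semidirect product realized by a twisted multiplication on normal forms $x^is^j$) checks these relations and that $x,s$ generate a group of order $6$.

```python
def mul(u, v):                       # (i, j) <-> x^i s^j in <s|s^2> |x <x|x^3>
    (i1, j1), (i2, j2) = u, v
    return ((i1 + (-1) ** j1 * i2) % 3, j1 ^ j2)

e, x, s = (0, 0), (1, 0), (0, 1)
G = [(i, j) for i in range(3) for j in range(2)]

def inv(g):
    return next(h for h in G if mul(g, h) == e)

def closure(gens):
    S = {e}
    while True:
        new = {mul(a, g) for a in S for g in gens} - S
        if not new:
            return S
        S |= new

# relations R of G and R' of A hold:
assert mul(mul(x, x), x) == e and mul(s, s) == e
# the mixed relation: x^s equals the image of x under s, here inversion
assert mul(mul(inv(s), x), s) == inv(x)
# x and s generate all of A |x G, of order 6
assert closure({x, s}) == set(G)
```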
\begin{prop}\label{prop:InvComp}
{\sc $\Omega$-Complement-Abelian} has a polynomial-time solution.
\end{prop}
\begin{proof}
Let $M, G\in \mathbb{G}_n$, and $\theta:\Omega\to \Aut G$ a function, where $M$ is an abelian
$(\Omega\cup G)$-subgroup of $G$.
\emph{Algorithm.}
Use {\sc Presentation} to produce a constructive
presentation $\{\langle \mathtt{X}|\mathtt{R}\rangle,f,\ell\}$ for $G$ mod $M$. For each
$s\in \Omega$ and each $x\in\mathtt{X}$, define
$$r_{s,x}=(xf)^{s}\ell\cdot(x^{s})^{-1}\in F(\Omega\sqcup\mathtt{X}).$$
Use {\sc Solve} to decide if there is a
$\mu\in M^{\mathtt{X}}$ where
\begin{align}\label{eq:comp-rel-1}
\forall r\in \mathtt{R} ,&\quad r(f\mu) = 1,\textnormal{ and }\\
\label{eq:comp-rel-2}
\forall s\in\Omega,\forall x\in\mathtt{X} & \quad r_{s,x}(f\mu)=1.
\end{align}
If no such $\mu$ exists, then assert that $M$ has no $\Omega$-complement in $G$; otherwise,
return $K=\langle x(f\mu)=(xf)(x\mu) : x\in\mathtt{X}\rangle$.
\emph{Correctness.} Let $A=\langle\Omega\theta\rangle\leq \Aut G$ and let
$\langle \Omega|\mathtt{R} '\rangle$ be a presentation of $A$ with
respect to $\theta$. The algorithm creates a constructive presentation
$\{\langle\mathtt{X}|\mathtt{R}\rangle ,f,\ell\}$ for $G$ mod $M$ and so by
\lemref{lem:pres}, $\langle\Omega\sqcup \mathtt{X}|\mathtt{R} '\ltimes\mathtt{R} \rangle$
is a presentation for $A\ltimes G$ mod $M$ with respect to $\theta\sqcup f$.
First suppose that the algorithm returns $K=\langle x(f\mu):x\in\mathtt{X}\rangle$.
As $\mathtt{X} f\subseteq KM$ we get that
$G=\langle \mathtt{X} f\rangle\leq KM\leq G$. By \eqref{eq:comp-rel-1},
$r(f\mu)=1$ for all $r\in \mathtt{R} $. Therefore $K$ satisfies the defining
relations of $G/M\cong K/(K\cap M)$, which forces $K\cap M=1$ and so $G=K\ltimes M$.
By \eqref{eq:comp-rel-1} and \eqref{eq:comp-rel-2}, the generator set
$\Omega\theta\sqcup\{x(f\mu):x\in\mathtt{X} \}$ of
$\langle A,K\rangle$ satisfies the defining relations $\mathtt{R} '\ltimes \mathtt{R} $
of $(A\ltimes G)/M$ and so $\langle A,K\rangle$ is isomorphic to a quotient of
$(A\ltimes G)/M$ where $K$ is the image of $G/M$. This shows $K$ is normal in
$\langle A,K\rangle$. In particular, $\langle K^{\Omega}\rangle\leq K$. Therefore if the algorithm
returns a subgroup then the return is correct.
Now suppose that there is a $K\leq G$ such that $\langle K^{\Omega}\rangle\leq K$ and $G=K\ltimes M$.
We must show that in this case the algorithm returns a subgroup.
We have that $G=\langle\mathtt{X} f\rangle$ and the generators $\mathtt{X} f$ satisfy
(mod $M$) the relations $\mathtt{R} $. Let $\varphi:G/M\to K$
be the isomorphism $kM\varphi=k$, for all $km\in KM=G$, where $k\in K$ and $m\in M$.
Define $\tau:\mathtt{X} \to M$ by $x\tau=(xf)^{-1} (xfM)\varphi$, for all $x\in\mathtt{X} $.
Notice $\langle x(f\tau)=(xf)(x\tau): x\in\mathtt{X} \rangle=K$. Furthermore,
$\Phi:(a,hM)\mapsto (a,hM\varphi)$ is an isomorphism $A\ltimes(G/M)\to A\ltimes K$.
As $\mathtt{R} \subseteq F(\mathtt{X} )$ it follows that
$r((\theta\sqcup f)\Phi)=r(f)\Phi=1$, for all $r\in \mathtt{R}$. Also,
\begin{align*}
\forall z\in \Omega\sqcup \mathtt{X} ,& \quad
z(\theta\sqcup f)\Phi = \left\{
\begin{array}{cc}
(z\theta, 1), & z\in \Omega;\\
(1,(zfM)\varphi) = (1, z(f\tau)), & z\in \mathtt{X} .
\end{array}\right.
\end{align*}
Therefore, $r(f\tau)=r((\theta\sqcup f)\Phi)=1$ for all $r\in\mathtt{R} $. Thus,
an appropriate $\tau\in M^{\mathtt{X} }$ exists and the algorithm is guaranteed to find
such an element and return an $\Omega$-subgroup of $G$ complementing $M$.
\emph{Timing.} The algorithm applies two polynomial-time algorithms.
\end{proof}
\begin{thm}\label{thm:FindComp-invariant}
\textalgo{Direct-$\Omega$-Complement} has a polynomial-time solution.
\end{thm}
\begin{proof}
Let $H, G\in\mathbb{G}_n$ and $\theta:\Omega\to \Aut G$, where $\langle H^{\Omega}\rangle\leq H\leq G$.
\emph{Algorithm.} Use {\sc Member} to determine if $H$ is an $(\Omega\cup G)$-subgroup of $G$.
If not, then this certifies that $H$ is not a direct factor of $G$.
Otherwise, use {\sc Normal-Centralizer} to compute $C_G(H)$ and $ \zeta_1(H)$.
Using {\sc Member}, test if $G=HC_G(H)$ and if $\langle C_G(H)^{\Omega}\rangle= C_G(H)$. If either fails, then certify that
$H$ is not a direct $\Omega$-factor of $G$.
Next, use \propref{prop:InvComp} to find an $\Omega$-subgroup $K\leq C_G(H)$ such that
$C_G(H)=\zeta_1(H)\rtimes K$, or determine that no such $K$ exists. If
$K$ exists, return $K$; otherwise, $H$ is not a direct $\Omega$-factor of $G$.
\emph{Correctness.} Note that if $G=H\times J$ is a direct $\Omega$-decomposition then
$H$ and $J$ are $(\Omega\cup G)$-subgroups of $G$, $G=HC_G(H)$, and $C_G(H)=\zeta_1(H)\times J$.
As $\Omega\theta\subseteq \Aut G$,
$\zeta_1(H)$ is an $\Omega$-subgroup and therefore $C_G(H)$ is an $\Omega$-subgroup. Therefore the
tests within the algorithm properly identify cases where $H$ is not a direct $\Omega$-factor of $G$.
Finally, if the algorithm returns an $\Omega$-subgroup $K$ such that
$C_G(H)=\zeta_1(H)\rtimes K=\zeta_1(H)\times K$, then $G=H\times K$ is a direct $\Omega$-decomposition.
\emph{Timing.} The algorithm makes a bounded number of calls to polynomial-time algorithms.
\end{proof}
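The reduction in this proof can be traced on a small example. The sketch below (a brute-force illustration, not the polynomial-time routine) takes $G=S_3\times C_2$ and $H=S_3\times 1$, checks $G=HC_G(H)$, and then complements $\zeta_1(H)$ inside $C_G(H)$; here $\zeta_1(H)=1$, so $K=C_G(H)=1\times C_2$ works and $G=H\times K$.

```python
# Trace Direct-Complement on G = S3 x C2 with H = S3 x 1 (brute force).
perms = [(0,1,2),(0,2,1),(1,0,2),(1,2,0),(2,0,1),(2,1,0)]
e = ((0,1,2), 0)
G = [(p, b) for p in perms for b in (0, 1)]

def mul(x, y):
    p, q = x[0], y[0]
    return (tuple(q[p[i]] for i in range(3)), x[1] ^ y[1])

H = {(p, 0) for p in perms}                      # S3 x 1, a normal subgroup

# Step 1: compute C_G(H) and zeta_1(H) = Z(H).
CH = {g for g in G if all(mul(g, h) == mul(h, g) for h in H)}
Z = {h for h in H if all(mul(h, k) == mul(k, h) for k in H)}

# Step 2: H can be a direct factor only if G = H * C_G(H).
assert {mul(h, c) for h in H for c in CH} == set(G)

# Step 3: complement zeta_1(H) in C_G(H); here Z(S3) = 1, so K = C_G(H).
assert Z == {e}
K = CH                                           # K = 1 x C2
assert len(H) * len(K) == len(G)                 # G = H x K
```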
\subsection{Merge}\label{sec:merge}
In this section we provide an algorithm which, given an appropriate direct decomposition of
a quotient group, produces a direct decomposition of the original group.
Throughout this section we assume that $(\mathfrak{X},G\mapsto \mathfrak{X}(G))$ is an up $\Omega$-grading pair in which $\zeta_1(G)\leq \mathfrak{X}(G)$.
The constraints of exchange by $\Aut_{\Omega\cup G} G$ given in \lemref{lem:KRS} can be sharpened
to individual direct factors as follows. (Note that \propref{prop:back-forth} is false when
considering the action of $\Aut G$ on direct factors.)
\begin{prop}\label{prop:back-forth}
Let $X$ and $Y$ be direct $\Omega$-factors of $G$ with no abelian direct $\Omega$-factor.
The following are equivalent.
\begin{enumerate}[(i)]
\item $X\varphi=Y$ for some $\varphi\in \Aut_{\Omega\cup G} G$.
\item $X\zeta_1(G)=Y\zeta_1(G)$.
\end{enumerate}
\end{prop}
\begin{proof}
By \eqref{eq:central}, $\Aut_{\Omega\cup G} G$ is the
identity on $G/\zeta_1(G)$; therefore (i) implies (ii).
Next we show (ii) implies (i). Recall that $\mathfrak{A}$ is the class of abelian groups.
Let $\{X,A\}$ and $\{Y,B\}$ be direct $\Omega$-decompositions of $G$. Choose
Remak $(\Omega\cup G)$-decompositions $\mathcal{R}$ and $\mathcal{C}$
which refine $\{X,A\}$ and $\{Y,B\}$ respectively.
Let $\mathcal{X}=\{R\in\mathcal{R}: R\leq X\}$. By \thmref{thm:KRS} there is a
$\varphi\in\Aut_{\Omega\cup G} G$ such that $\mathcal{X}\varphi\subseteq \mathcal{C}$.
However, $\varphi$ is the identity on $G/\zeta_1(G)$. Hence,
$\langle\mathcal{X}\rangle\zeta_1(G)=X\zeta_1(G)\varphi=Y\zeta_1(G)$.
Thus, $\mathcal{X}\varphi\subseteq \{C\in \mathcal{C}: C\leq Y\zeta_1(G)\}-\mathfrak{A}$.
Yet, $\mathcal{C}$ refines $\{Y,B\}$ and $Y$ has no direct $\Omega$-factor in $\mathfrak{A}$.
Thus, $$\{C\in \mathcal{C}: C\leq Y\zeta_1(G)=Y\times \zeta_1(B)\}-\mathfrak{A}
=\{C\in\mathcal{C}:C\leq Y\}=:\mathcal{Y}.$$
Thus, $\mathcal{X}\varphi\subseteq\mathcal{Y}$. By reversing the roles of $X$ and $Y$
we see that $\mathcal{Y}\varphi'\subseteq\mathcal{X}$ for some $\varphi'$. Thus,
$|\mathcal{X}|=|\mathcal{Y}|$. So we conclude that $\mathcal{X}\varphi=\mathcal{Y}$
and $X\varphi=Y$.
\end{proof}
\begin{thm}\label{thm:Extend}
There is a polynomial-time algorithm which, given an $\Omega$-group $G$ and a set
$\mathcal{K}$ of $(\Omega\cup G)$-subgroups such that
\begin{enumerate}[(a)]
\item $\mathfrak{X}(\langle\mathcal{K}\rangle)=\mathfrak{X}(G)$ and
\item $\mathcal{K}$ is a direct $(\Omega\cup G)$-decomposition of $\langle \mathcal{K}\rangle$,
\end{enumerate}
returns a direct $\Omega$-decomposition $\mathcal{H}$ of $G$ such that
\begin{enumerate}[(i)]
\item $|\mathcal{H}-\mathcal{K}|\leq 1$,
\item if $K\in\mathcal{K}$ is such that
$\langle \mathcal{H}\cap\mathcal{K}, K\rangle$ has a direct $\Omega$-complement in $G$, then
$K\in\mathcal{H}$; and
\item if $K\in\mathcal{K}-\mathfrak{X}$
is such that $K$ is a direct $(\Omega\cup G)$-factor of $G$, then $K\in\mathcal{H}$.
\end{enumerate}
\end{thm}
\begin{proof}
\emph{Algorithm.}
\begin{code}{Extend$(~G,~\mathcal{K}~)$}
$\mathcal{L}=\emptyset$; $\lfloor G\rfloor = G$;\\
\textit{/* Using the algorithm for \thmref{thm:FindComp-invariant} to determine
the existence of $H$, execute the following. */}\\
\cwhile{$\exists K\in\mathcal{K}, \exists H, \mathcal{L}\sqcup\{K,H\}$ is a direct
$\Omega$-decomposition of $G$}
\begin{block*}
$\lfloor G\rfloor = H$;\\
$\mathcal{L} = \mathcal{L}\sqcup\{K\}$;\\
$\mathcal{K} = \mathcal{K}-\{K\}$;\\
\end{block*}
\creturn{$\mathcal{H}=\mathcal{L}\sqcup\{\lfloor G\rfloor\}$}
\end{code}
\emph{Correctness.} We maintain the following loop invariant (true at the start and end
of each iteration of the loop): $\mathcal{L}\sqcup\{\lfloor G\rfloor\}$ is a direct
$(\Omega\cup G)$-decomposition of $G$ and $\mathcal{L}\subseteq \mathcal{K}$. The loop exits once $\mathcal{L}\sqcup\{\lfloor G\rfloor\}$
satisfies (ii). Hence, $\mathcal{H}=\mathcal{L}\sqcup\{\lfloor G\rfloor\}$ satisfies (i) and (ii).
For (iii), suppose that $\mathcal{K}$ is $\mathfrak{X}$-separated and that
$K\in(\mathcal{K}-\mathfrak{X})-\mathcal{H}$ such that $K$ is a direct
$(\Omega\cup G)$-factor of $G$. Let $\langle F^{\Omega}\rangle\leq F\leq G$ be such that $\{F,K\}$
is a direct $(\Omega\cup G)$-decomposition of $G$ and $\mathcal{R}$ a Remak
$(\Omega\cup G)$-decomposition of $G$ which refines $\{F,K\}$. Also let
$\mathcal{T}$ be a Remak $(\Omega\cup G)$-decomposition of $G$ which refines $\mathcal{H}$.
Set $\mathcal{X}=\{R\in\mathcal{R}: R\leq K\}$, and note
that $\mathcal{X}\subseteq \mathcal{R}-\mathfrak{X}$ as $K$ has no direct $\Omega$-factor
in $\mathfrak{X}$. By \thmref{thm:KRS} we can exchange $\mathcal{X}$ with
a $\mathcal{Y}\subseteq \mathcal{T}-\mathfrak{X}$ to create a Remak $(\Omega\cup G)$-decomposition
$(\mathcal{T}-\mathcal{Y})\sqcup \mathcal{X}$ of $G$. As $\zeta_1(G)\leq\mathfrak{X}(G)$
we get $\mathcal{R}\mathfrak{X}(G)=\mathcal{T}\mathfrak{X}(G)$ and
$\mathcal{X}\mathfrak{X}(G)=\mathcal{Y}\mathfrak{X}(G)$ (\lemref{lem:KRS},
\propref{prop:back-forth}). Thus, by (a) and then (b),
\begin{align*}
\langle\mathcal{Y}\rangle \cap
\langle\mathcal{H}\cap \mathcal{K}\rangle
& \equiv \langle\mathcal{X}\rangle \cap
\langle\mathcal{H}\cap \mathcal{K}\rangle
& \pmod{\mathfrak{X}(G)}\\
& \equiv K \cap \langle\mathcal{H}\cap \mathcal{K}\rangle
& \pmod{\mathfrak{X}(\langle \mathcal{K}\rangle)}\\
& \leq K\cap \langle \mathcal{K}-\{K\}\rangle\\
& \equiv 1.
\end{align*}
Therefore $\langle\mathcal{Y}\rangle \leq
\langle(\mathcal{T}-\mathfrak{X})-\{T\in\mathcal{T}:
T\leq \langle\mathcal{H}-\mathcal{K}\rangle\}\rangle$.
Thus,
$$\mathcal{J}=(\mathcal{H}\cap\mathcal{K})\sqcup \{K\}\sqcup \big\{\big\langle (\mathcal{T}
-\mathcal{Y})-\{T\in\mathcal{T}:T\leq \langle\mathcal{H}\cap\mathcal{K}\rangle\}\big\rangle\big\}$$
is a direct $\Omega$-decomposition of $G$ and $(\mathcal{H}\cap\mathcal{K})\sqcup\{K\}\subseteq
\mathcal{J}\cap\mathcal{K}$ which shows that $\mathcal{L}$ is not maximal. By the contrapositive we have (iii).
\emph{Timing.} This loop makes $|\mathcal{K}|\leq \log_2 |G|$ calls to a polynomial-time algorithm
for \textalgo{Direct-$\Omega$-Complement}.
\end{proof}
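The greedy loop in \textalgo{Extend} can be imitated in a toy setting. In the sketch below (illustrative only) subgroups of $\mathbb{Z}_2^3$ stand in for direct $\Omega$-factors, and the call to the algorithm of \thmref{thm:FindComp-invariant} is replaced by a brute-force complement oracle.

```python
from itertools import combinations, product
from math import prod

# Toy model: G = Z2^3 as bit-vectors under XOR; subgroups are F2-subspaces.
G = [bits for bits in product((0, 1), repeat=3)]
e = (0, 0, 0)

def add(x, y):
    return tuple(a ^ b for a, b in zip(x, y))

def span(gens):
    S = {e}
    while True:
        new = {add(s, g) for s in S for g in gens} - S
        if not new:
            return S
        S |= new

def complement(H):
    """Brute-force direct-complement oracle: K with G = H x K, or None."""
    for r in range(4):
        for gens in combinations(G, r):
            K = span(set(gens))
            if len(K) * len(H) == len(G) and K & H == {e}:
                return K
    return None

def extend(Ks):
    """Greedy Extend loop: L grows, the floor shrinks, K's move into L."""
    L, rest, floor = [], list(Ks), set(G)
    progress = True
    while progress:
        progress = False
        for K in rest:
            M = span(set.union({e}, *L, K))        # <L, K>
            C = complement(M)
            if len(M) == prod(len(S) for S in L + [K]) and C is not None:
                L.append(K)
                rest.remove(K)
                floor = C
                progress = True
                break
    return L + [floor]

decomposition = extend([span({(1, 0, 0)}), span({(1, 1, 0)})])
assert len(decomposition) == 3
assert span(set.union(*decomposition)) == set(G)
```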
Under the hypothesis of \thmref{thm:Extend} it is not possible to extend (iii) to
say that if $K\in\mathcal{K}$ and $K$ is a direct $\Omega$-factor of $G$ then $K\in \mathcal{H}$.
Consider the following example (where $\Omega=\emptyset$).
\begin{ex}
Let $G=D_8\times \mathbb{Z}_2$, $D_8=\langle a,b|a^4,b^2,(ab)^2\rangle$. Use
$\mathfrak{A}$ (the class of abelian groups) for $\mathfrak{X}$ and
$\mathcal{K}=\{\langle (1,1)\rangle,\langle (a^2,1)\rangle\}$.
Each member of $\mathcal{K}$ is a direct factor of $G$, but $\mathcal{K}$ is not
contained in any direct decomposition of $G$.
\end{ex}
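The failure in this example can be confirmed exhaustively. The sketch below (brute force, illustrative only) encodes $D_8\times\mathbb{Z}_2$ with elements $((r,s),z)\leftrightarrow(a^rb^s,z)$, checks that each of the two central involution subgroups in $\mathcal{K}$ is individually a direct factor, and that no subgroup $F$ of order $4$ gives a direct decomposition together with both of them; since any direct decomposition containing $\mathcal{K}$ could be coarsened to such a triple, none exists.

```python
from itertools import combinations, product

# D8 = <a, b | a^4, b^2, (ab)^2> as pairs (r, s) <-> a^r b^s.
def dmul(x, y):                       # b a b = a^-1 gives the twist below
    (r1, s1), (r2, s2) = x, y
    return ((r1 + (-1) ** s1 * r2) % 4, s1 ^ s2)

def mul(x, y):                        # G = D8 x Z2
    return (dmul(x[0], y[0]), x[1] ^ y[1])

e = ((0, 0), 0)
G = [((r, s), z) for r in range(4) for s in (0, 1) for z in (0, 1)]

def inv(x):
    return next(y for y in G if mul(x, y) == e)

def closure(gens):
    S = {e}
    while True:
        new = {mul(s, g) for s in S for g in gens} - S
        if not new:
            return S
        S |= new

def is_normal(S):
    return all(mul(mul(inv(g), s), g) in S for g in G for s in S)

def is_direct(parts):                 # internal direct product test
    if not all(is_normal(P) for P in parts):
        return False
    prods = []
    for choice in product(*[sorted(P) for P in parts]):
        x = e
        for y in choice:
            x = mul(x, y)
        prods.append(x)
    return len(set(prods)) == len(G)

A = closure({((0, 0), 1)})            # the central involution from the Z2 factor
B = closure({((2, 0), 1)})            # generated by (a^2, z); also central, order 2

# Each is a direct factor of G on its own ...
assert is_direct([A, closure({((1, 0), 0), ((0, 1), 0)})])   # G = A x (D8 x 1)
assert is_direct([B, closure({((1, 0), 1), ((0, 1), 0)})])   # G = B x <(a,z),(b,1)>

# ... but {A, B} extends to no direct decomposition of G.
fours = {frozenset(closure({x, y})) for x, y in combinations(G, 2)}
assert not any(is_direct([A, B, set(F)]) for F in fours if len(F) == 4)
```

The last assertion succeeds because every normal subgroup of order $4$ in this $2$-group meets the center $\langle A,B\rangle$ nontrivially.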
\begin{lem}\label{lem:count}
If $\mathcal{K}$ is an $\mathfrak{X}$-refined direct $(\Omega\cup G)$-decomposition of $G$ such
that $\mathcal{K}\mathfrak{X}(G)$ refines $\mathcal{R}\mathfrak{X}(G)$ for some Remak
$(\Omega\cup G)$-decomposition $\mathcal{R}$ of $G$, then $\mathcal{K}$ is a Remak
$(\Omega\cup G)$-decomposition of $G$.
\end{lem}
\begin{proof}
As $\mathcal{R}$ is a Remak $(\Omega\cup G)$-decomposition
of $G$, by \lemref{lem:KRS}, $\mathcal{R}\mathfrak{X}(G)$ refines $\mathcal{K}\mathfrak{X}(G)$
and so $\mathcal{K}\mathfrak{X}(G)=\mathcal{R}\mathfrak{X}(G)$. Hence, $|\mathcal{K}-\mathfrak{X}|=|\mathcal{R}-\mathfrak{X}|$ and because $\mathcal{K}$ is $\mathfrak{X}$-refined we also have:
$|\mathcal{K}\cap\mathfrak{X}|=|\mathcal{R}\cap\mathfrak{X}|$. Therefore,
$|\mathcal{K}|=|\mathcal{K}-\mathfrak{X}|+|\mathcal{K}\cap\mathfrak{X}|
=|\mathcal{R}-\mathfrak{X}|+|\mathcal{R}\cap\mathfrak{X}|=|\mathcal{R}|$.
As every Remak $(\Omega\cup G)$-decomposition of $G$ has the same size, it follows that
$\mathcal{K}$ cannot be refined by a larger direct $(\Omega\cup G)$-decomposition of $G$.
Hence $\mathcal{K}$ is a Remak $(\Omega\cup G)$-decomposition of $G$.
\end{proof}
\begin{thm}\label{thm:merge}
There is a polynomial-time algorithm which, given $G\in\mathbb{G}_n$,
sets $\mathcal{A},\mathcal{H}\subseteq\mathbb{G}_n$, and a function $\theta:\Omega\to\Aut G$,
such that
\begin{enumerate}[(a)]
\item $\mathcal{A}$ is a Remak $(\Omega\cup G)$-decomposition of $\mathfrak{X}(G)$,
\item $\forall H\in\mathcal{H}$, $\mathfrak{X}(H)=\mathfrak{X}(G)$,
\item $\mathcal{H}/\mathfrak{X}(G)$ is a direct $\Omega$-decomposition of $G/\mathfrak{X}(G)$;
\end{enumerate}
returns an $\mathfrak{X}$-refined direct $\Omega$-decomposition $\mathcal{K}$ of $G$ with the following property. If $\mathcal{R}$ is a direct $\Omega$-decomposition
of $G$ where $\mathcal{H}$ refines $\mathcal{R}\mathfrak{X}(G)$
then $\mathcal{K}\mathfrak{X}(G)$ refines
$\mathcal{R}\mathfrak{X}(G)$; in particular, if $\mathcal{R}$ is Remak then $\mathcal{K}$ is Remak.
\end{thm}
\begin{proof}
\emph{Algorithm.}
\begin{code}{Merge$(~\mathcal{A},~\mathcal{H}~)$}
$\mathcal{K} = \mathcal{A}$;\\
$\forall H\in\mathcal{H}$\\
\begin{block*}
$\mathcal{K}=${\tt Extend}$(~\langle H,\mathcal{K}\rangle,~\mathcal{K}~)$;
\end{block*}
\creturn{$\mathcal{K}$}
\end{code}
\emph{Correctness.} Fix a direct $\Omega$-decomposition $\mathcal{R}$ of $G$
where $\mathcal{H}$ refines $\mathcal{R}\mathfrak{X}(G)$. We can assume
$\mathcal{R}$ is $\mathfrak{X}$-refined.
The loop runs through a maximal chain $\mathscr{C}$ of subsets of $\mathcal{H}$ and so we
track the iterations by considering the members of $\mathscr{C}$. By \propref{prop:chain-chain},
$\mathcal{L}=\{L=L_{\mathcal{C}}=\langle\mathcal{C},\mathfrak{X}(G)\rangle:
\mathcal{C}\in\mathscr{C}\}$ is a direct chain.
We claim the following properties as loop invariants.
At the iteration $\mathcal{C}\in\mathscr{C}$, we claim that $(\mathcal{C},L,\mathcal{K})$ satisfies:
\begin{enumerate}[(P.1)]
\item\label{P:1} $\mathfrak{X}(L)=\mathfrak{X}(G)$,
\item\label{P:3} $\mathcal{K}\mathfrak{X}(G)$ refines
$\mathcal{R}\mathfrak{X}(G)\cap L$, and
\item\label{P:4} $\mathcal{K}$ is an $\mathfrak{X}$-refined direct $(\Omega\cup G)$-decomposition
of $L$.
\end{enumerate}
Thus, when the loop completes, $L=\langle\mathcal{H}\rangle=G$.
By (P.\ref{P:3})
$\mathcal{K}\mathfrak{X}(G)$ refines $\mathcal{R}\mathfrak{X}(G)$. By (P.\ref{P:4}), $\mathcal{K}$
is an $\mathfrak{X}$-refined direct $\Omega$-decomposition of $G$.
Following \lemref{lem:count}, if $\mathcal{R}$ is a Remak
$(\Omega\cup G)$-decomposition of $G$ then $\mathcal{K}$ is a Remak
$(\Omega\cup G)$-decomposition. We prove (P.\ref{P:1})--(P.\ref{P:4}) by induction.
As we begin with $\mathcal{K}=\mathcal{A}$, in the base case $\mathcal{C}=\emptyset$,
$L=\mathfrak{X}(G)$, and so (P.\ref{P:1}) holds.
As $\mathcal{K}\mathfrak{X}(G)=\emptyset$ and $\mathcal{R}\mathfrak{X}(G)\cap\mathfrak{X}(G)=\emptyset$
we have (P.\ref{P:3}). Also (P.\ref{P:4}) holds because of (a).
Now suppose for induction that for some $\mathcal{C}\in\mathscr{C}$,
$(\mathcal{C},L,\mathcal{K})$ satisfies (P.\ref{P:1})--(P.\ref{P:4}). Let
$\mathcal{D}=\mathcal{C}\sqcup\{H\}\in\mathscr{C}$ be the successor to $\mathcal{C}$,
for the appropriate $H\in\mathcal{H}-\mathcal{C}$.
Set $M=\langle H,L\rangle$, and
$\mathcal{M}={\tt Extend}(M,\mathcal{K})$.
Since $H\leq M$ it follows from (b) that $\mathfrak{X}(G)\leq \mathfrak{X}(M)
\leq \mathfrak{X}(H)=\mathfrak{X}(G)$ so that $\mathfrak{X}(M)=\mathfrak{X}(G)$; hence, (P.\ref{P:1}) holds for $(\mathcal{D}, M,\mathcal{M})$.
Next we prove (P.\ref{P:3}) holds for $(\mathcal{D}, M,\mathcal{M})$.
As $L,M\in\mathcal{L}$ and $\mathcal{L}$
is a direct chain with directions $\mathcal{R}$, $\mathcal{R}\cap L$ and $\mathcal{R}\cap M$
are direct $(\Omega\cup G)$-decomposition of $L$ and $M$, respectively.
Following \thmref{thm:Extend}(i),
$|\mathcal{M}-\mathcal{K}|\leq 1$. As $H\nleq L$, $\mathcal{M}\neq \mathcal{K}$,
and there is a group $\lfloor H\rfloor$ in
$\mathcal{M}-\mathcal{K}$ with $H\leq \lfloor H\rfloor\mathfrak{X}(G)$.
By assumption, $\mathcal{H}$ refines $\mathcal{R}\mathfrak{X}(G)$.
Hence, there is a unique $R\in \mathcal{R}-\mathfrak{X}$
such that $\mathfrak{X}(G)<H\leq R\mathfrak{X}(G)$. Indeed, $R$ is the direction
of $L$. Let
$C=\langle (\mathcal{R}-\{R\})-\mathfrak{X}\rangle$ and define
$$\mathcal{J}=\{K\in\mathcal{K}-\mathfrak{X}: K\leq C\mathfrak{X}(G)\}.$$
As the direction of $L$ is $R$,
$C\mathfrak{X}(G)\cap M=C\mathfrak{X}(G)\cap L=\langle\mathcal{J}\rangle\mathfrak{X}(G)$
and by \thmref{thm:chain}, $\mathcal{J}$ lies in a $\mathfrak{X}$-separated direct $(\Omega\cup G)$-decomposition of $M$.
Thus, by \thmref{thm:Extend}(ii), $\mathcal{J}\subseteq \mathcal{M}\cap \mathcal{K}$.
Also, $M = \langle\mathcal{M}-\mathcal{J}\rangle\times \langle \mathcal{J}\rangle$
and $\mathfrak{X}(M)=\mathfrak{X}(G)$, so
\begin{align*}
M/\mathfrak{X}(G)
& = \langle\mathcal{M}-\mathcal{J}\rangle\mathfrak{X}(G)/\mathfrak{X}(G)\times
\langle\mathcal{J}\rangle\mathfrak{X}(G)/\mathfrak{X}(G)\\
& = \langle\mathcal{M}-\mathcal{J}\rangle\mathfrak{X}(G)/\mathfrak{X}(G)\times
(C\mathfrak{X}(G) \cap M)/\mathfrak{X}(G).
\end{align*}
Thus, $\langle\mathcal{M}-\mathcal{J}\rangle\mathfrak{X}(G)\cap C\mathfrak{X}(G)=\mathfrak{X}(G)$.
Suppose that $X$ is a directly $(\Omega\cup G)$-indecomposable direct $(\Omega\cup G)$-factor of
$\langle\mathcal{M}-\mathcal{J}\rangle\mathfrak{X}(G)$ which does not lie in $\mathfrak{X}$.
As $\mathcal{R}\cap M$ is a direct $(\Omega\cup M)$-decomposition of $M$ and $X$ lies in a
Remak $(\Omega\cup G)$-decomposition of $M$,
then by \lemref{lem:KRS}, $X\leq R\mathfrak{X}(M)=R\mathfrak{X}(G)$ or
$X\leq C\mathfrak{X}(M)=C\mathfrak{X}(G)$.
Yet, $X\not\in\mathfrak{X}$ so that $X\nleq \mathfrak{X}(G)$ and
$$X\cap C\mathfrak{X}(G)\leq
\langle\mathcal{M}-\mathcal{J}\rangle\mathfrak{X}(G)\cap C\mathfrak{X}(G)
=\mathfrak{X}(G);$$
hence, $X\nleq C\mathfrak{X}(G)$. Thus, $X\leq R\mathfrak{X}(G)$ and as $X$ is
arbitrary, we get
$$\langle\mathcal{M}-\mathcal{J}\rangle\mathfrak{X}(G)\leq R\mathfrak{X}(G).$$
As $M/\mathfrak{X}(G)=(R\mathfrak{X}(G)\cap M)/\mathfrak{X}(G)\times (C\mathfrak{X}(G)\cap M)/\mathfrak{X}(G)$
we indeed have
$$\langle\mathcal{M}-\mathcal{J}\rangle\mathfrak{X}(G)= R\mathfrak{X}(G)\cap M.$$
In particular, $\mathcal{M}\mathfrak{X}(G)$ refines $\mathcal{R}\mathfrak{X}(G)\cap M$ and so
(P.\ref{P:3}) holds.
Finally to prove (P.\ref{P:4}) it suffices to show that $\lfloor H\rfloor$ has no direct $(\Omega\cup G)$-factor in $\mathfrak{X}$. Suppose otherwise:
so $\lfloor H\rfloor$ has a direct $(\Omega\cup G)$-decomposition $\{H_0,A\}$ where $A\in\mathfrak{X}$
and $A$ is directly $(\Omega\cup G)$-indecomposable.
Swap out $\lfloor H\rfloor$ in $\mathcal{M}$ for $\{H_0,A\}$ creating
$$\mathcal{M}'=(\mathcal{M}-\{\lfloor H\rfloor \})\sqcup\{H_0,A\}
=(\mathcal{M}\cap\mathcal{K})\sqcup\{H_0,A\}.$$
As $A\in \mathfrak{X}$ it follows that
$A\leq \mathfrak{X}(M)=\mathfrak{X}(G)=\mathfrak{X}(L)$. In particular, $A\leq L\leq M$. As
$A$ is a direct $(\Omega\cup G)$-factor of $M$, $A$ is also a direct
$(\Omega\cup G)$-factor of $L$.
Since $\langle A,\mathcal{M}\cap \mathcal{K}\rangle\leq L$ it follows that
$$\mathcal{M}'\cap L=\{H_0 \cap L, A\}\sqcup (\mathcal{M}\cap \mathcal{K})$$
is a direct $(\Omega\cup G)$-decomposition of $L$.
Furthermore, $A$ is directly $(\Omega\cup G)$-in\-de\-comp\-o\-sa\-ble, $A\in\mathfrak{X}$,
and $A$ lies in a Remak $(\Omega\cup G)$-decomposition of $L$. Also
$\mathcal{K}\cap\mathfrak{X}$ lies in a Remak $(\Omega\cup G)$-decomposition $\mathcal{T}$
of $L$ in which $\mathcal{K}\cap\mathfrak{X}=\mathcal{T}\cap\mathfrak{X}$
(\propref{prop:direct-class}(iv) and (v)); thus, by \thmref{thm:KRS} there is a
$B\in\mathcal{K}\cap \mathfrak{X}$ such that
$$(\mathcal{M}'\cap L-\{A\})\sqcup \{B\}$$
is a direct $(\Omega\cup G)$-decomposition of $L$. Hence,
$\mathcal{M}''=(\mathcal{M}'-\{A\})\sqcup\{B\}$
is a direct $(\Omega\cup G)$-decomposition of $M$. However,
$\mathcal{M}''\cap \mathcal{K}=(\mathcal{M}\cap \mathcal{K})\cup\{B\}$. By
\thmref{thm:Extend}(i), $\mathcal{M}\cap \mathcal{K}$ is maximal with respect to inclusion in
$\mathcal{K}$, such
that $\mathcal{M}\cap \mathcal{K}$ is contained in a direct $(\Omega\cup G)$-decomposition
of $M$. Thus, $B\in\mathcal{M}\cap\mathcal{K}$. That is impossible, since it would
imply that $\mathcal{M}'\cap L$ and $(\mathcal{M}'-\{A\})\cap L$ are both direct
$(\Omega\cup G)$-decompositions of $L$, i.e. that $A\cap L=1$; but $1<A\leq L$.
This contradiction demonstrates that $\lfloor H\rfloor$ has no direct
$(\Omega\cup G)$-factor in $\mathfrak{X}$. Therefore, $\mathcal{M}$ is $\mathfrak{X}$-refined.
Having shown that $M$ and $\mathcal{M}$ satisfy (P.\ref{P:1})--(P.\ref{P:4}),
at the end of the loop
$\mathcal{K}$ and $L$ are reassigned to $\mathcal{M}$ and $M$ respectively and
so maintain the loop invariants.
\emph{Timing.} The algorithm loops over every element of $\mathcal{H}$ applying the
polynomial-time algorithm of \thmref{thm:Extend} once in each loop. Thus, {\tt Merge}
is a polynomial-time algorithm.
\end{proof}
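The control flow of {\tt Merge} is a simple fold of {\tt Extend} over the members of $\mathcal{H}$. The toy sketch below (our own illustration, not the algorithm of \thmref{thm:Extend}) models direct decompositions as partitions of a finite index set and substitutes a naive coarsening step for {\tt Extend}, purely to exhibit the loop structure:

```python
# Toy model of the Merge loop: "decompositions" are partitions of an index
# set, and extend(...) is a naive stand-in for Extend that merges every
# block meeting the support of the new member h.  Only the control flow of
# Merge is faithful here; the real Extend is far more delicate.

def extend(h, partition):
    touched = [blk for blk in partition if blk & h]
    untouched = [blk for blk in partition if not (blk & h)]
    merged = set(h)
    for blk in touched:
        merged |= blk
    return untouched + [merged]

def merge(A, H):
    K = list(A)          # start from the fine decomposition A
    for h in H:          # fold the extension step over the members of H
        K = extend(h, K)
    return sorted(map(sorted, K))

# A: six directly indecomposable "factors"; H: members supported on
# {0,1,2} and on {3,4}.  Factor 5 survives untouched.
A = [{i} for i in range(6)]
print(merge(A, [{0, 1, 2}, {3, 4}]))  # [[0, 1, 2], [3, 4], [5]]
```

In this toy model the final partition is independent of the order in which $\mathcal{H}$ is processed, mirroring the loop-invariant argument in the proof above.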
\section{Bilinear maps and $p$-groups of class $2$}\label{sec:bi}
In this section we introduce bilinear maps and a certain commutative ring as a means to access direct decompositions of a $p$-group of class $2$. In our minds, those groups represent the most difficult case of the direct product problem. This is because $p$-groups of class $2$ have so many normal subgroups, and many of those pairwise intersect trivially making them appear to be direct factors when they are not. Thus, a greedy search is almost certain to fail. Instead, we have had to consider a certain commutative ring that can be derived from a $p$-group. As commutative rings have unique Remak decomposition, and a decomposable $p$-group will have many Remak decompositions, we might expect such a method to have lost vital information. However, in view of results such as \thmref{thm:Lift-Extend} we recognize that in fact what we will have constructed leads us to a matching for the extension $1\to \zeta_1(G)\to G\to G/\zeta_1(G)\to 1$.
Unless specified otherwise, in this section $G$ is a $p$-group of class $2$.
\subsection{Bilinear maps}
Here we introduce $\Omega$-bilinear maps and direct
$\Omega$-decompositions of $\Omega$-bilinear maps. This allows us to solve the match
problem for $p$-groups of class $2$.
Let $V$ and $W$ denote abelian $\Omega$-groups.
A map $b:V\times V\to W$ is $\Omega$-\emph{bilinear} if
\begin{align}
b(u+u',v+v') & = b(u,v)+b(u',v)+b(u,v')+b(u',v'), \textnormal{ and }\\
b(ur,v) & = b(u,v)r = b( u,vr),
\end{align}
for all $u,u',v,v'\in V$ and all $r\in \Omega$. Every $\Omega$-bilinear map is also
$\mathbb{Z}$-bilinear. Define
\begin{equation}
b(X,Y) = \langle b(x,y) : x\in X, y\in Y\rangle
\end{equation}
for $X,Y\subseteq V$. If $X\leq V$ then define the \emph{submap}
\begin{equation}
b_X:X\times X\to b(X,X)
\end{equation}
as the restriction of $b$ to inputs from $X$.
The \emph{radical} of $b$ is
\begin{equation}
\rad b = \{ v\in V : b(v,V)=0=b(V,v) \}.
\end{equation}
We say that $b$ is \emph{nondegenerate} if $\rad b=0$.
Finally, call $b$ \emph{faithful} $\Omega$-bilinear when
$(0:_{\Omega} V)\cap (0:_{\Omega} W)=0$,
where $(0:_{\Omega} X)=\{r\in \Omega: Xr=0\}$, $X\in \{V,W\}$.
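To compute with these notions one fixes bases and records $b$ as a tensor of structure constants, a format we return to in Section~\ref{sec:Remak}; the radical is then a common nullspace. The sketch below is our own illustration over the field $\mathbb{Z}_p$, not an algorithm from this paper:

```python
# A sketch (our own illustration): computing dim(rad b) for a bilinear map
# b : V x V -> W over Z_p, where b is stored as a structure-constant tensor
# B with b(u, v)[k] = sum_{i,j} u[i] * B[i][j][k] * v[j].  The radical is
# the common nullspace of the maps v -> b(v, -) and v -> b(-, v).

def nullspace_dim(rows, ncols, p):
    """Dimension of {x in Z_p^ncols : rows @ x = 0}, by Gaussian elimination."""
    rows = [list(r) for r in rows]
    rank = 0
    for col in range(ncols):
        piv = next((r for r in range(rank, len(rows)) if rows[r][col] % p), None)
        if piv is None:
            continue
        rows[rank], rows[piv] = rows[piv], rows[rank]
        inv = pow(rows[rank][col], p - 2, p)  # modular inverse, p prime
        rows[rank] = [x * inv % p for x in rows[rank]]
        for r in range(len(rows)):
            if r != rank and rows[r][col] % p:
                c = rows[r][col]
                rows[r] = [(x - c * y) % p for x, y in zip(rows[r], rows[rank])]
        rank += 1
    return ncols - rank

def radical_dim(B, p):
    n, m = len(B), len(B[0][0])
    rows = []
    for j in range(n):          # one linear condition per basis vector e_j
        for k in range(m):      # ... and per coordinate of W
            rows.append([B[i][j][k] % p for i in range(n)])  # b(v, e_j)[k] = 0
            rows.append([B[j][i][k] % p for i in range(n)])  # b(e_j, v)[k] = 0
    return nullspace_dim(rows, n, p)

# V = Z_3^3, W = Z_3, b(u, v) = u[0]v[1] - u[1]v[0]: the third basis
# vector is radical, so b is degenerate with a 1-dimensional radical.
B = [[[0], [1], [0]],
     [[-1], [0], [0]],
     [[0], [0], [0]]]
print(radical_dim(B, 3))  # 1
```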
\begin{defn}
Let $\mathcal{B}$ be a family of $\Omega$-bilinear maps
$b:V_b\times V_b\to W_b$, $b\in\mathcal{B}$.
Define $\oplus\mathcal{B}=\bigoplus_{b\in\mathcal{B}} b$ as the
$\Omega$-bilinear map
$\bigoplus_{b\in\mathcal{B}} V_b\times\bigoplus_{b\in\mathcal{B}} V_b
\to \bigoplus_{b\in\mathcal{B}} W_b$ where:
\begin{equation}
\left(\oplus\mathcal{B}\right)\left(
(u_b)_{b\in\mathcal{B}},(v_b)_{b\in\mathcal{B}}\right)
= (b(u_b,v_b))_{b\in\mathcal{B}},\qquad
\forall (u_b)_{b\in\mathcal{B}},(v_b)_{b\in\mathcal{B}}
\in \bigoplus_{b\in\mathcal{B}}V_b.
\end{equation}
\end{defn}
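In coordinates the direct sum is block diagonal in the $V$-coordinates and in the $W$-coordinates simultaneously, so cross terms between summands vanish automatically. A toy sketch (the structure-constant storage format is our own choice, not part of the definition):

```python
# Toy illustration of the direct sum of bilinear maps given by structure
# constants: b(u, v)[k] = sum_{i,j} u[i] * B[i][j][k] * v[j].  The direct
# sum of tensors B and C is block diagonal in both the V-coordinates and
# the W-coordinates, so cross terms b(X_b, X_c) vanish automatically.

def evaluate(B, u, v):
    n, m = len(B), len(B[0][0])
    return [sum(u[i] * B[i][j][k] * v[j] for i in range(n) for j in range(n))
            for k in range(m)]

def direct_sum(B, C):
    nB, mB = len(B), len(B[0][0])
    nC, mC = len(C), len(C[0][0])
    n, m = nB + nC, mB + mC
    D = [[[0] * m for _ in range(n)] for _ in range(n)]
    for i in range(nB):
        for j in range(nB):
            for k in range(mB):
                D[i][j][k] = B[i][j][k]
    for i in range(nC):
        for j in range(nC):
            for k in range(mC):
                D[nB + i][nB + j][mB + k] = C[i][j][k]
    return D

# Two copies of the alternating form on a 2-dimensional space:
B = [[[0], [1]], [[-1], [0]]]
D = direct_sum(B, B)
# Vectors supported on different summands pair to zero:
print(evaluate(D, [1, 0, 0, 0], [0, 0, 1, 0]))  # [0, 0]
```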
\begin{lem}\label{lem:internal-direct-bi}
If $b:V\times V\to W$ is an $\Omega$-bilinear map,
$\mathcal{C}$ a finite set of submaps of $b$ such that
\begin{enumerate}[(i)]
\item $\{X_c: c:X_c\times X_c\to Z_c\in\mathcal{C}\}$ is a direct
$\Omega$-decomposition of $V$,
\item $\{Z_c: c:X_c\times X_c\to Z_c\in\mathcal{C}\}$ is a direct
$\Omega$-decomposition of $W$, and
\item $b(X_c,X_{d})=0$ for distinct $c,d\in\mathcal{C}$;
\end{enumerate}
then
$b=\bigoplus\mathcal{C}$.
\end{lem}
\begin{proof}
By $(i)$, we may write each $u\in V$ as
$u=(u_c)_{c\in\mathcal{C}}$ with $u_c\in X_c$, for all
$c:X_c\times X_c\to Z_c\in
\mathcal{C}$. By $(iii)$ followed by $(ii)$ we have that
$b(u,v)=\sum_{c,d\in\mathcal{C}} b(u_c,v_d)
=\sum_{c\in\mathcal{C}} c(u_c,v_c)
=\left(\oplus\mathcal{C}\right)(u,v).$
\end{proof}
\begin{defn}\label{def:ddecomp-bi}
A \emph{direct $\Omega$-decomposition} of an $\Omega$-bilinear map $b:V\times V\to W$
is a set $\mathcal{B}$ of submaps of $b$
satisfying the hypothesis of \lemref{lem:internal-direct-bi}.
Call $b$ directly $\Omega$-indecomposable if its only direct $\Omega$-decomposition
is $\{b\}$.
A Remak $\Omega$-decomposition of $b$ is a direct $\Omega$-decomposition
whose members are directly $\Omega$-indecomposable.
\end{defn}
The bilinear maps we consider were created by Baer \cite{Baer:class-2} and are the
foundation for the many Lie methods that have been associated to $p$-groups. Further
details of our account can be found in \cite[Section 5]{Warfield:nil}.
The principal example of such maps is the commutation of an $\Omega$-group $G$ where
$\gamma_2(G)\leq \zeta_1(G)$. There we define $V=G/\zeta_1(G)$, $W=\gamma_2(G)$, and
$b=\mathsf{Bi} (G):V\times V\to W$ where
\begin{equation}\label{eq:Bi}
b(\zeta_1(G) x,\zeta_1(G) y )=[x,y],\qquad \forall x,y\in G.
\end{equation}
It is directly verified that $b$ is $\mathbb{Z}_{p^e}[\Omega]$-bilinear where $G^{p^e}=1$, and
furthermore, nondegenerate. When working in $V$ and $W$ we use additive notation.
Given $H\leq G$ we define $U=H\zeta_1(G)/\zeta_1(G)\leq V$,
$Z=H\cap \gamma_2(G)\leq W$, and $c:=\mathsf{Bi} (H;G):U\times U\to Z$ where
\begin{equation}
c(u,v) = b(u,v),\qquad \forall u,v\in U.
\end{equation}
\begin{prop}\label{prop:p-group-bi}
If $G$ is a $\Omega$-group and $\gamma_2(G)\leq \zeta_1(G)$, then every direct $\Omega$-decomposition
$\mathcal{H}$ of $G$ induces a direct $\Omega$-decomposition
\begin{equation}
\mathsf{Bi} (\mathcal{H}) = \{ \mathsf{Bi} (H; G) : H\in\mathcal{H}\}.
\end{equation}
If $\mathsf{Bi} (G)$ is directly $\Omega$-indecomposable and $\zeta_1(G)\leq \Phi(G)$, then
$G$ is directly $\Omega$-indecomposable.
\end{prop}
\begin{proof}
Set $b:=\mathsf{Bi} (G)$.
By \lemref{lem:induced} and \propref{prop:V-inter-1}, $\mathcal{H}\zeta_1(G)/\zeta_1(G)$
is a direct $\Omega$-decomposition of $V=G/\zeta_1(G)$ and $\mathcal{H}\cap \gamma_2(G)$ is
a direct $\Omega$-decomposition of $W=\gamma_2(G)$. Furthermore, for each $H\in\mathcal{H}$,
$$b(H\zeta_1(G)/\zeta_1(G), \langle\mathcal{H}-\{H\}\rangle\zeta_1(G)/\zeta_1(G))
=[H,\langle\mathcal{H}-\{H\}\rangle]=0\in W.$$
In particular, $\mathsf{Bi} (\mathcal{H})$ is
a direct $\Omega$-decomposition of $b$.
Finally, if $\mathsf{Bi} (G)$ is directly indecomposable then $|\mathsf{Bi} (\mathcal{H})|=1$. Thus,
$\mathcal{H}\zeta_1(G)=\{G\}$. Therefore $\mathcal{H}$ has exactly one non-abelian member.
Suppose $Z\in\mathcal{H}\cap\mathfrak{A}$. As $Z$ is abelian, $Z\leq \zeta_1(G)$. If
$\zeta_1(G)\leq \Phi(G)$ then the elements of $\zeta_1(G)$ are non-generators. In particular,
$G=\langle\mathcal{H}\rangle=\langle \mathcal{H}-\{Z\}\rangle$. But by definition no
proper subset of a direct decomposition generates the group. So $\mathcal{H}\cap\mathfrak{A}=\emptyset$.
Thus, $\mathcal{H}=\{G\}$ and $G$ is directly $\Omega$-indecomposable.
\end{proof}
Baer and later others observed a partial reversal of the map $G\mapsto \mathsf{Bi} (G)$. Our account
follows \cite{Warfield:nil}. In particular,
if $b:V\times V\to W$ is a $\mathbb{Z}_{p^e}$-bilinear map then we may define a group
$\mathsf{Grp} (b)$ on the set $V\times W$ where the product is given by:
\begin{equation}
(u,w)*(u',w') = (u+u', w+b(u,u')+w'),
\end{equation}
for all $(u,w)$ and all $(u',w')$ in $V\times W$. The following are immediate from the definition.
\begin{enumerate}[(i)]
\item $(0,0)$ is the identity and for all $(u,w)\in V\times W$, $(u,w)^{-1}=(-u,-w + b(u,u))$.
\item For all $(u,w)$ and all $(v,w')$ in $V\times W$, $[(u,w), (v,w')] = (0, b(u,v)-b(v,u))$.
\end{enumerate}
If $b$ is $\Omega$-bilinear then $\mathsf{Grp} (b)$ is an $\Omega$-group where
$$\forall s\in \Omega, \forall (u,w)\in V\times W,\qquad (u,w)^{s}=(u^s, w^s).$$
In light of (ii), if $p>2$ and $b$ is alternating, i.e. for all $u$ and all $v$ in $V$,
$b(u,v)=-b(v,u)$, then $[(u,w),(v,w')]=(0,2b(u,v))$. For that reason it is typical to consider
$\mathsf{Grp} (\frac{1}{2}b)$ in those settings so that $[(u,w),(v,w')]=(0,b(u,v))$. We shall not require
this approach. If $G^p=1$ then $G\cong \mathsf{Grp} (\mathsf{Bi} (G))$ \cite[Proposition 3.10(ii)]{Wilson:unique-cent}.
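The construction $\mathsf{Grp}(b)$ is convenient to experiment with. The sketch below (with a toy choice of $b$, our own example) implements the group law over $\mathbb{Z}_p$ and checks the inverse formula of (i) and the commutator identity of (ii):

```python
# A sketch of Grp(b) for a toy bilinear map b : Z_p^2 x Z_p^2 -> Z_p
# (b(u, v) = u[0]*v[1] mod p is our own example).  Elements are pairs
# (u, w) with the product (u,w)*(u',w') = (u+u', w + b(u,u') + w').

p = 5

def b(u, v):
    return (u[0] * v[1]) % p

def mul(x, y):
    (u, w), (u2, w2) = x, y
    return (tuple((a + c) % p for a, c in zip(u, u2)), (w + b(u, u2) + w2) % p)

def inv(x):
    u, w = x
    # inverse formula (i): (u, w)^{-1} = (-u, -w + b(u, u))
    return (tuple(-a % p for a in u), (-w + b(u, u)) % p)

def comm(x, y):
    # [x, y] = x^{-1} y^{-1} x y
    return mul(mul(inv(x), inv(y)), mul(x, y))

x, y = ((1, 2), 3), ((4, 0), 1)
u, v = x[0], y[0]
# commutator identity (ii): [(u,w),(v,w')] = (0, b(u,v) - b(v,u))
print(comm(x, y), ((0, 0), (b(u, v) - b(v, u)) % p))  # both equal ((0, 0), 2)
```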
\begin{coro}\label{coro:exp-p}
If $G$ is a $p$-group with $G^p=1$ and $\gamma_2(G)\leq \zeta_1(G)$ then $G$ is directly
$\Omega$-indecomposable if, and only if, $\mathsf{Bi} (G)$ is directly $\Omega$-indecomposable and
$\zeta_1(G)\leq \Phi(G)$.
\end{coro}
\begin{proof}
The reverse direction is \propref{prop:p-group-bi}. We focus on the forward direction.
As $G^p=1$ it follows that $G\cong \mathsf{Grp} (\mathsf{Bi} (G))=:\hat{G}$. Set
$b:=\mathsf{Bi} (G)$. Let $\mathcal{B}$ be a direct $\Omega$-decomposition
of $b$. For each $c:X_c\times X_c\to Z_c\in\mathcal{B}$,
define $\mathsf{Grp} (c;b)=X_c\times Z_c\leq V\times W$. We claim that $\mathsf{Grp} (c;b)$ is an $\Omega$-subgroup of
$\mathsf{Grp} (b)$. In particular, $(0,0)\in \mathsf{Grp} (c;b)$ and for all $(x,w),(y,w')\in \mathsf{Grp} (c;b)$,
$(x,w)*(-y,-w'+b(y,y))= (x-y, w-b(x,y)-w'+b(y,y))\in X_c\times Z_c=\mathsf{Grp} (c;b)$. Furthermore,
\begin{align*}
\left[\mathsf{Grp} (c;b), \mathsf{Grp} \left( \sum_{d\in\mathcal{B}-\{c\}} d; b\right) \right]
& = \left( 0, b\left( X_c, \sum_{d\in\mathcal{B}-\{c\}} X_d\right)
- b\left( \sum_{d\in\mathcal{B}-\{c\}} X_d, X_c\right) \right)=(0,0).
\end{align*}
Combined with $\mathsf{Grp} (b)=\langle \mathsf{Grp} (c;b): c\in\mathcal{B}\rangle$ it follows that
$\mathsf{Grp} (c;b)$ is normal in $\mathsf{Grp} (b)$. Finally,
\begin{align*}
\mathsf{Grp} (c;b) \cap \mathsf{Grp} \left(\sum_{d\in\mathcal{B}-\{c\}} d; b\right)
& = (X_c\times Z_c) \cap \sum_{d\in\mathcal{B}-\{c\}}(X_d\times Z_d) = 0\times 0.
\end{align*}
Thus, $\mathcal{H}=\{\mathsf{Grp} (c;b): c\in\mathcal{B}\}$ is a direct $\Omega$-decomposition of $\mathsf{Grp} (b)$.
As $G$ is directly $\Omega$-indecomposable it follows that $\mathcal{H}=\{\mathsf{Grp} (b)\}$ and so
$\mathcal{B}=\{b\}$. Thus, $b$ is directly $\Omega$-indecomposable.
\end{proof}
\subsection{Centroids of bilinear maps}
\label{sec:enrich}
In this section we replicate the classic interplay of idempotents of a ring and direct decompositions
of an algebraic object, but now in the context of bilinear maps. The relevant ring is the centroid,
defined similarly to the centroid of a nonassociative ring \cite[Section X.1]{Jacobson:Lie}. As with
nonassociative rings, the idempotents of the centroid of a bilinear map correspond to direct decompositions.
Myasnikov \cite{Myasnikov} may have been the
first to generalize such methods to bilinear maps.
\begin{defn}\label{def:centroid}
The \emph{centroid} of an $\Omega$-bilinear map $b:V\times V\to W$ is
\begin{equation*}
C_{\Omega}(b) = \{ (f,g)\in \End_{\Omega} V\oplus \End_{\Omega} W:
b(uf,v)=b(u,v)g=b(u,vf),\forall u,v\in V\}.
\end{equation*}
If $\Omega=\emptyset$ then write $C(b)$.
\end{defn}
\begin{lem}\label{lem:centroid}
Let $b:V\times V\to W$ be an $\Omega$-bilinear map. Then the following hold.
\begin{enumerate}[(i)]
\item $C_{\Omega}(b)$ is a subring of $\End_{\Omega} V\oplus \End_{\Omega} W$, and
$V$ and $W$ are $C_{\Omega}(b)$-modules.
\item If $b$ is $K$-bilinear for a ring $K$, then $K/((0:_K V)\cap (0:_K W))$ embeds
in $C(b)$.
In particular, $C(b)$ is the largest
ring over which $b$ is faithful bilinear.
\item If $b$ is nondegenerate and $W=b(V,V)$ then $C_{\Omega}(b)=C(b)$
and $C(b)$ is commutative.
\end{enumerate}
\end{lem}
\begin{proof}
Parts (i) and (ii) are immediate from the definitions. For part (iii), if $s\in \Omega$ and
$(f,g)\in C(b)$, then $b((su)f,v)=b(su,vf)=sb(u,vf)=b(s(uf),v)$ for all $u$ and
all $v\in V$. As $b$ is nondegenerate and
$b((su)f-s(uf),V)=0$, it follows that $(su)f=s(uf)$. In a similar fashion, $g\in\End_{\Omega}W$
so that $(f,g)\in C_{\Omega}(b)$. For commutativity we repeat the same shuffling game
above: if $(f,g),(f',g')\in C(b)$ then $b(u(ff'),v)=b(u,vf)g'=b(uf',vf)=b(u(f'f),v)$. By the
nondegeneracy assumption we get that $ff'=f'f$, and similarly $gg'=g'g$.
\end{proof}
\begin{remark}
If $\rad b=0$ and $(f,g),(f',g)\in C(b)$ then $f=f'$. If
$W=b(V,V)$ and $(f,g),(f,g')\in C(b)$ then $g=g'$. So if $\rad b=0$
and $W=b(V,V)$ then the first variable determines the second and vice-versa.
\end{remark}
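\defref{def:centroid} is a system of linear equations, so the centroid can be found by linear algebra over the coefficient ring. The sketch below (our own illustration over $\mathbb{Z}_p$, with $b$ stored as a structure-constant tensor) computes only $\dim C(b)$ by imposing $b(uf,v)=b(u,v)g=b(u,vf)$ on basis vectors:

```python
# Sketch (our own illustration): dim C(b) over Z_p for b given by a tensor
# B with b(u, v)[k] = sum_{i,j} u[i] * B[i][j][k] * v[j].  Unknowns are the
# n x n matrix f and the m x m matrix g, flattened row-major; each basis
# pair (e_i, e_j) and W-coordinate k contributes two linear conditions.

def nullspace_dim(rows, ncols, p):
    rows = [list(r) for r in rows]
    rank = 0
    for col in range(ncols):
        piv = next((r for r in range(rank, len(rows)) if rows[r][col] % p), None)
        if piv is None:
            continue
        rows[rank], rows[piv] = rows[piv], rows[rank]
        inv = pow(rows[rank][col], p - 2, p)  # modular inverse, p prime
        rows[rank] = [x * inv % p for x in rows[rank]]
        for r in range(len(rows)):
            if r != rank and rows[r][col] % p:
                c = rows[r][col]
                rows[r] = [(x - c * y) % p for x, y in zip(rows[r], rows[rank])]
        rank += 1
    return ncols - rank

def centroid_dim(B, p):
    n, m = len(B), len(B[0][0])
    N = n * n + m * m        # unknowns: f[a][b], then g[c][d]
    rows = []
    for i in range(n):
        for j in range(n):
            for k in range(m):
                # b(e_i f, e_j)[k] - (b(e_i, e_j) g)[k] = 0
                row = [0] * N
                for a in range(n):
                    row[i * n + a] += B[a][j][k]          # coefficient of f[i][a]
                for c in range(m):
                    row[n * n + c * m + k] -= B[i][j][c]  # coefficient of g[c][k]
                rows.append([x % p for x in row])
                # b(e_i f, e_j)[k] - b(e_i, e_j f)[k] = 0
                row = [0] * N
                for a in range(n):
                    row[i * n + a] += B[a][j][k]
                    row[j * n + a] -= B[i][a][k]
                rows.append([x % p for x in row])
    return nullspace_dim(rows, N, p)

# The dot product on Z_3^2: only (f, g) = (c*I, c) satisfy the identities.
dot = [[[1], [0]], [[0], [1]]]
print(centroid_dim(dot, 3))  # 1
```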
\subsection{Idempotents, frames, and direct $\Omega$-decompositions}\label{sec:bi-direct}
In this section we extend the usual interplay of idempotents and direct decompositions to the context of bilinear maps and then to $p$-groups of class $2$. This allows us to prove \thmref{thm:indecomp-class2}.
This section follows the notation described in Subsection \ref{sec:rings}.
\begin{lem}\label{lem:idemp}
Let $b:V\times V\to W$ be an $\Omega$-bilinear map.
\begin{enumerate}[(i)]
\item A set $\mathcal{B}$ of $\Omega$-submaps of $b$ is a direct $\Omega$-decomposition
of $b$ if, and only if,
\begin{equation}
\mathcal{E}(\mathcal{B})
=\{ (e(V_c),e(W_c)) : c:V_c\times V_c\to W_c\in \mathcal{B}\}
\end{equation}
is a set of pairwise orthogonal idempotents of $C_{\Omega}(b)$ which sum to $1$.
\item
$\mathcal{B}$ is a Remak $\Omega$-decomposition of $b$ if, and only if,
$\mathcal{E}(\mathcal{B})$ is a frame.
\item If $b$ is nondegenerate and $W=b(V,V)$,
then $b$ has a unique Remak $\Omega$-decomposition of $b$.
\end{enumerate}
\end{lem}
\begin{proof}
For $(i)$, by \defref{def:ddecomp-bi}, $\{V_b: b\in\mathcal{B}\}$ is a direct
decomposition of $V$ and $\{W_b:b\in\mathcal{B}\}$ is a direct decomposition
of $W$. Thus, $\mathcal{E}(\mathcal{B})$ is a set of
pairwise orthogonal idempotents which sum to $1$.
Let $(e,f)\in\mathcal{E}(\mathcal{B})$. As
$1-e=\sum_{(e',f')\in\mathcal{E}(\mathcal{B})-\{(e,f)\}} e'$ it follows that
for all $u,v\in V$ we have $b(ue,v(1-e))\in b(Ve,V(1-e))=0$ by
the assumptions on $\mathcal{B}$. Also, $b(ue,v e)\in Wf$. Together we have:
\begin{eqnarray*}
b(ue,v) & = & b(ue,v e) + b(ue,v(1-e))
= b(ue,v e),\\
b(u,ve) & = & b(u(1-e),v e) + b(ue,ve)
= b(ue,v e),\textnormal{ and }\\
b(u,v)f & = & \left(\sum_{(e',f')\in \mathcal{E}(\mathcal{B})}
b(ue',ve')f'\right)f = b(ue,v e)f=b(ue,ve).
\end{eqnarray*}
Thus $b(ue,v)=b(u,v)f=b(u,ve)$ which proves
$(e,f)\in C_{\Omega}(b)$; hence, $\mathcal{E}(\mathcal{B})\subseteq C_{\Omega}(b)$.
Now suppose that $\mathcal{E}$ is a set of pairwise orthogonal idempotents
of $C_{\Omega}(b)$ which sum to $1$. It follows that
$\{Ve: (e,f)\in\mathcal{E}\}$ is a direct $\Omega$-decomposition of $V$ and
$\{Wf: (e,f)\in\mathcal{E}\}$ is a direct $\Omega$-decomposition of $W$. Finally,
$b(ue,ve')=b(uee',v)=0$ for distinct $(e,f),(e',f')\in\mathcal{E}$. Thus, $\{b|_{(e,f)}:Ve\times Ve\to Wf: (e,f)\in\mathcal{E}\}$
is a direct $\Omega$-decomposition of $b$.
Now $(ii)$ follows. For $(iii)$, we know by \lemref{lem:centroid}(iii) that
$C(b)=C_{\Omega}(b)$ is commutative Artinian. The rest follows from \lemref{lem:lift-idemp}(iv).
\end{proof}
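The dictionary between frames and decompositions is the classical one from commutative ring theory. In miniature (our own toy example, unrelated to any particular bilinear map): in $\mathbb{Z}/n$ the primitive idempotents form the unique frame, cutting the ring into its directly indecomposable prime-power summands.

```python
# Frames in miniature: the primitive idempotents of Z/n.  An idempotent f
# lies "under" e when f*e = f; e is primitive when the only nonzero
# idempotent under it is e itself.  The primitive idempotents of Z/n are
# pairwise orthogonal and sum to 1: they form the unique frame.

def idempotents(n):
    return [e for e in range(n) if e * e % n == e]

def frame(n):
    idem = [e for e in idempotents(n) if e != 0]
    return [e for e in idem
            if all(f == e or f * e % n != f for f in idem)]

F = frame(60)  # 60 = 2^2 * 3 * 5, so the frame has three members
print(sorted(F), sum(F) % 60)  # [36, 40, 45] 1
```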
\begin{thm}\label{thm:Match-class2}
If $G$ is a $p$-group and $\gamma_2(G)\leq \zeta_1(G)$, then there is a unique frame
$\mathcal{E}$ in $C(\mathsf{Bi} (G))$. Furthermore, if $\gamma_2(G)=\zeta_1(G)$ then every Remak $\Omega$-decomposition $\mathcal{H}$ of
$G$ matches a unique partition of $(\mathcal{K},\mathcal{Q})$ where
\begin{align*}
\mathcal{K} & := \{ W\hat{e}: (e,\hat{e})\in \mathcal{E}\},\\
\mathcal{Q} & := \{ Ve : (e,\hat{e})\in\mathcal{E}\}.
\end{align*}
If $G^p=1$ then every Remak $\Omega$-decomposition of $G$ matches $(\mathcal{K},\mathcal{Q})$.
\end{thm}
\begin{proof}
This follows from \propref{prop:p-group-bi}, \lemref{lem:idemp}, and \corref{coro:exp-p}.
\end{proof}
\subsection{Proof of \thmref{thm:indecomp-class2}}
This follows from \thmref{thm:Match-class2}.
$\Box$
\subsection{Centerless groups}
We close this section with a brief consideration of groups with a trivial center.
\begin{lemma}\label{lem:centerless}
Let $G$ be an $\Omega$-group with $\zeta_1(G)=1$ and $N$ a minimal $(\Omega\cup G)$-subgroup of $G$. Then the following hold.
\begin{enumerate}[(i)]
\item $G$ has a unique Remak $\Omega$-decomposition $\mathcal{R}$.
\item There is a unique $R\in\mathcal{R}$ such that $N\leq R$.
\item $\{C_R(N),\langle\mathcal{R}-\{R\}\rangle\}$ is a direct
$(\Omega\cup G)$-decomposition of $C_G(N)$.
\item Every Remak $(\Omega\cup G)$-decomposition $\mathcal{H}$ of
$C_G(N)$ refines $\{C_R(N),\langle\mathcal{R}-\{R\}\rangle\}$.
\end{enumerate}
\end{lemma}
\begin{proof}
Given Remak $\Omega$-decompositions $\mathcal{R}$ and $\mathcal{S}$ of $G$, by
\lemref{lem:KRS} and the assumption that $\zeta_1(G)=1$, it follows that
$\mathcal{R}=\mathcal{R}\zeta_1(G)=\mathcal{S}\zeta_1(G)=\mathcal{S}$. This proves (i).
For (ii), if $N$ is a minimal $(\Omega\cup G)$-subgroup of $G$ then
$[R,N]\leq R\cap N\in \{1,N\}$, for all $R\in\mathcal{R}$. If $[R,N]=1$
for all $R\in\mathcal{R}$ then $N\leq \zeta_1(G)=1$ which contradicts the assumption
that $N$ is minimal. Thus, for some $R\in\mathcal{R}$, $N\leq R$. The
uniqueness follows as $R\cap\langle\mathcal{R}-\{R\}\rangle=1$.
By (ii), $[N,\langle \mathcal{R}-\{R\}\rangle]\leq[R,\langle \mathcal{R}-\{R\}\rangle]=1$ which shows
$\langle \mathcal{R}-\{R\}\rangle\leq C_G(N)$. Hence,
$C_G(N)=C_R(N)\times \langle\mathcal{R}-\{R\}\rangle$.
This proves (iii).
Finally we prove (iv). Let $\mathcal{K}$ be a Remak $(\Omega\cup G)$-decomposition of $C_G(N)$.
Let $\mathcal{S}$ be a Remak $(\Omega\cup G)$-decomposition of $C_G(N)$ which refines the direct
$(\Omega\cup G)$-decomposition $C_G(N)=C_R(N)\times \langle\mathcal{R}-\{R\}\rangle$
given by (iii). Note that $\mathcal{R}-\{R\}\subseteq \mathcal{S}$ as
members of $\mathcal{R}$ cannot be refined further. By
\thmref{thm:KRS}, there is a $\mathcal{J}\subseteq\mathcal{K}$ such that we may exchange
$\mathcal{R}-\{R\}\subseteq \mathcal{S}$ with $\mathcal{J}$; hence, $\{C_R(N)\}\sqcup \mathcal{J}$
is a direct $(\Omega\cup G)$-decomposition of $C_G(N)$. Now $R\cap \langle\mathcal{J}\rangle
\leq C_R(N)\cap \langle\mathcal{J}\rangle=1$. Also
\begin{equation}
\langle R,\mathcal{J}\rangle=\langle R, C_R(N),\mathcal{J}\rangle
=\langle R,\mathcal{R}-\{R\}\rangle=G.
\end{equation}
As every member of $\mathcal{J}$ is an $(\Omega\cup G)$-subgroup of $G$, it follows
that they are normal in $G$ and so $\{R\}\sqcup\mathcal{J}$ is a direct $\Omega$-decomposition
of $G$. As the members of $\mathcal{J}$ are $\Omega$-indecomposable it follows that
$\{R\}\sqcup\mathcal{J}$ is a Remak $\Omega$-decomposition of $G$. However, $G$ has a unique
Remak $\Omega$-decomposition so $\mathcal{J}=\mathcal{R}-\{R\}$. As $\mathcal{J}$ was a subset
of an arbitrary Remak $(\Omega\cup G)$-decomposition of $C_G(N)$ it follows that every Remak
$(\Omega\cup G)$-decomposition of $C_G(N)$ contains $\mathcal{R}-\{R\}$.
\end{proof}
\begin{prop}\label{prop:centerless-Extend}
For groups $G$ with $\zeta_1(G)=1$, the set $\mathcal{M}$ of minimal
$(\Omega\cup G)$-subgroups is a direct $(\Omega\cup G)$-decomposition of the socle of $G$
and furthermore there is a unique partition of $\mathcal{M}$ which extends to the
Remak $\Omega$-decomposition of $G$.
\end{prop}
The following consequence shows how the global Remak decomposition of a group with
trivial solvable radical is determined precisely by a unique partition of the Remak
decomposition of its socle.
\begin{coro}
If $G$ has trivial solvable radical and $\mathcal{R}$ is its Remak decomposition then
$\mathcal{R}=\{C_G(C_G(\soc(R))): R\in\mathcal{R}\}.$
\end{coro}
\section{The Remak Decomposition Algorithm}\label{sec:Remak}
In this section we prove \thmref{thm:FindRemak}. The approach is to break up a given group
into sections for which a Remak $(\Omega\cup G)$-decomposition can be computed directly.
The base cases include $\Omega$-modules (\corref{coro:FindRemak-abelian}), $p$-groups of class $2$ (which
follows from \thmref{thm:Match-class2}), and groups with a trivial center. We use \thmref{thm:Lift-Extend} as justification that we can interlace these base cases
to sequentially lift direct decompositions via the algorithm {\tt Merge} of \thmref{thm:merge}.
\subsection{Finding Remak $\Omega$-decompositions for nilpotent groups of class $2$}
\label{sec:FindRemak-class2}
In this section we prove \thmref{thm:FindRemak} for the case of nilpotent groups $G$
of class $2$. The algorithm depends on \thmref{thm:Match-class2} and \thmref{thm:merge}.
To specify a $\mathbb{Z}$-bilinear map $b:V\times V\to W$ for computation we need only provide
the \emph{structure constants} with respect to fixed bases of $V$ and $W$.
Specifically let $\mathcal{X}$ be a basis of $V$ and $\mathcal{Y}$ a basis of $W$. Define
$B_{xy}^{(z)}\in\mathbb{Z}$ so that the following equation is satisfied:
\begin{align*}
b\left(\sum_{x\in\mathcal{X}} \alpha_x x,\sum_{y\in\mathcal{X}} \beta_y y \right)
& = \sum_{z\in\mathcal{Y}} \left(\sum_{x,y\in\mathcal{X}} \alpha_x B_{xy}^{(z)} \beta_y\right)z
& (\forall \alpha_x,\beta_y\in\mathbb{Z}).
\end{align*}
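For example (our own illustration), commutation on the Heisenberg group of order $p^3$ and exponent $p$ has $V$ with basis $\mathcal{X}=\{x,y\}$, $W$ with basis $\{z\}$, and the only nonzero structure constants are $B_{xy}^{(z)}=1$ and $B_{yx}^{(z)}=-1$:

```python
# Structure constants of the commutation map of the Heisenberg group of
# order p^3 and exponent p: basis {x, y} of V, basis {z} of W, and
# [x, y] = z gives B_xy^(z) = 1, B_yx^(z) = -1 (all others vanish).

B = {('x', 'y'): {'z': 1}, ('y', 'x'): {'z': -1}}

def b(u, v):
    """Evaluate b on vectors given as coordinate dictionaries over {x, y}."""
    out = {'z': 0}
    for (xi, yj), row in B.items():
        for z, c in row.items():
            out[z] += u[xi] * c * v[yj]
    return out

print(b({'x': 1, 'y': 0}, {'x': 0, 'y': 1}))  # {'z': 1}
```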
\begin{lem}\label{lem:Remak-bilinear}
There is a deterministic polynomial-time algorithm, which given $\Omega$-modules
$V$ and $W$ and a nondegenerate $\Omega$-bilinear map $b:V\times V\to W$
with $W=b(V,V)$, returns a Remak $\Omega$-decomposition of $b$.
\end{lem}
\begin{proof}
\emph{Algorithm}.
Solve a system of linear equations in the (additive) abelian group
$\End_{\Omega} V\times \End_{\Omega} W$ to find generators for $C_{\Omega}(b)$.
Use {\sc Frame} to find a frame $\mathcal{E}$
of $C_{\Omega}(b)$. Return
$\{b|_{(e,f)}:Ve\times Ve\to Wf : (e,f)\in\mathcal{E}\}$.
\emph{Correctness}.
This is supported by \lemref{lem:idemp} and \thmref{thm:Frame}.
\emph{Timing}.
This follows from the timing of {\sc Solve} and {\sc Frame}.
\end{proof}
\begin{thm}\label{thm:FindRemak-class2}
There is a polynomial-time algorithm which, given a nilpotent $\Omega$-group of class $2$, returns a Remak $\Omega$-decomposition
of the group.
\end{thm}
\begin{proof}
Let $G\in\mathbb{G}_n^{\Omega}$ with $\gamma_2(G)\leq \zeta_1(G)$.
\emph{Algorithm}.
Use {\sc Order} to compute $|G|$. For each prime $p$ dividing $|G|$, write
$|G|=p^e m$ where $(p,m)=1$ and set $P:=G^{m}$. Set $b_p:=\mathsf{Bi} (P)$.
Use the algorithm of \lemref{lem:Remak-bilinear} to
find a Remak $\Omega$-decomposition $\mathcal{B}$ of $b_p$.
Define each of the following:
\begin{align*}
\mathcal{X}(\mathcal{B}) & = \{ X_c : c:X_c\times X_c\to Z_c\in \mathcal{B}\}\\
\mathcal{H} & = \{H\leq P: \zeta_1(P)\leq H, H/\zeta_1(P)\in \mathcal{X}(\mathcal{B})\}.
\end{align*}
Use \corref{coro:FindRemak-abelian} to build a Remak $\Omega$-decomposition
$\mathcal{Z}$ of $\zeta_1(P)$. Set $\mathcal{R}_p:={\tt Merge}(\mathcal{Z},\mathcal{H})$.
Return $\bigcup_{p\mid |G|} \mathcal{R}_p$.
\emph{Correctness}.
By \lemref{lem:Remak-bilinear} the set $\mathcal{B}$ is the unique Remak
$\Omega$-decomposition of $b_p=\mathsf{Bi} (P)$. By \thmref{thm:Match-class2} and
\thmref{thm:merge} the return is a Remak $\Omega$-decomposition of $G$.
\emph{Timing}.
For each of the primes dividing $|G|$ the algorithm applies a bounded number of polynomial-time subroutines.
\end{proof}
We have need of one final observation which allows us to modify certain decompositions into ones that satisfy the hypothesis of \thmref{thm:merge}(b) when the upgrading pair is $(\mathfrak{N}_c,G\mapsto \zeta_c(G))$.
\begin{lemma}\label{lem:centralize}
There is a polynomial-time algorithm which, given an $\Omega$-decomposition $\mathcal{H}=
\mathcal{H}\zeta_c(G)$
of a group $G$, returns the finest $\Omega$-decomposition $\mathcal{K}$ refined by $\mathcal{H}$
and such that for all $K\in\mathcal{K}$, $\zeta_c(K)=\zeta_c(G)$. (The proof also
shows there is a unique such $\mathcal{K}$.)
\end{lemma}
\begin{proof}
Observe that $\mathcal{K}=\{\langle H\in\mathcal{H}: [K,H,\dots,H]\neq 1\rangle: K\in\mathcal{K}\}$.
We can create $\mathcal{K}$ by a transitive closure algorithm.
\end{proof}
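The transitive closure can be realized with a union-find pass; in the sketch below the symmetric relation {\tt related} is an abstract stand-in for the failure of the centralizing condition $[K,H,\dots,H]\neq 1$, and the data are our own toy example:

```python
# Sketch of the transitive-closure step: coarsen the members of H into the
# finest partition K in which "interfering" blocks (modeled by an abstract
# symmetric relation) land in the same part.  Union-find with path halving
# does this in near-linear time.

def coarsen(items, related):
    parent = {x: x for x in items}
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    def union(x, y):
        parent[find(x)] = find(y)
    for x in items:
        for y in items:
            if x != y and related(x, y):
                union(x, y)
    blocks = {}
    for x in items:
        blocks.setdefault(find(x), set()).add(x)
    return sorted(map(sorted, blocks.values()))

# Toy: members 0..4; 0,1 interfere, 1,2 interfere, 3,4 interfere.
pairs = {(0, 1), (1, 2), (3, 4)}
rel = lambda x, y: (x, y) in pairs or (y, x) in pairs
print(coarsen(range(5), rel))  # [[0, 1, 2], [3, 4]]
```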
\begin{thm}\label{thm:FindRemak-Q}
{\sc Find-$\Omega$-Remak} has polynomial-time solution.
\end{thm}
\begin{proof}
Let $G\in \mathbb{G}_n^{\Omega}$.
\emph{Algorithm.}
If $G=1$ then return $\emptyset$. Otherwise, compute $\zeta_1(G)$.
If $G=\zeta_1(G)$ then use {\sc Abelian.Remak-$\Omega$-Decomposition} and return
the result. Else, if $\zeta_1(G)=1$ then use {\sc Minimal-$\Omega$-Normal} to find
a minimal $(\Omega\cup G)$-subgroup $N$ of $G$. Use {\sc Normal-Centralizer} to compute
$C_G(N)$. If $C_G(N)=1$ then return $\{G\}$. Otherwise, recurse with
$C_G(N)$ in the role of $G$ to find a Remak $\Omega$-decomposition
$\mathcal{K}$ of $C_G(N)$. Call $\mathcal{H}:=\textalgo{Extend}(G,\mathcal{K})$ to create
a direct $\Omega$-decomposition $\mathcal{H}$ extending $\mathcal{K}$ maximally. Return $\mathcal{H}$.
Now $G>\zeta_1(G)>1$. Compute $\zeta_2(G)$ and use \thmref{thm:FindRemak-class2} to
construct a Remak $(\Omega\cup G)$-decomposition $\mathcal{A}$ of $\zeta_2(G)$. If
$G=\zeta_2(G)$ then return $\mathcal{A}$; otherwise, $G>\zeta_2(G)$ (consider \figref{fig:rel-ext}).
Use a recursive call on $G/\zeta_1(G)$ to find $\mathcal{H}=\mathcal{H}\zeta_1(G)$ such that
$\mathcal{H}/\zeta_1(G)$ is a Remak $\Omega$-decomposition of $G/\zeta_1(G)$. Apply \lemref{lem:centralize} to $\mathcal{H}$ and then set $\mathcal{J}:=\textalgo{Merge}(\mathcal{A},\mathcal{H})$, and return $\mathcal{J}$.
\emph{Correctness.}
The case $G=\zeta_1(G)$ is proved by \corref{coro:FindRemak-abelian} and the case $G=\zeta_2(G)$ is proved in \thmref{thm:FindRemak-class2}.
Now suppose that $G>\zeta_1(G)=1$. Following \lemref{lem:centerless}, $G$ has a unique Remak $\Omega$-decomposition $\mathcal{R}$ and there is a unique $R\in\mathcal{R}$ such that $N\leq R$ and $\langle\mathcal{R}-\{R\}\rangle\leq C_G(N)$. So if $C_G(N)=1$ then $G$ is directly indecomposable and the return of the algorithm is correct. Otherwise the algorithm makes a recursive call to find a Remak $(\Omega\cup G)$-decomposition $\mathcal{K}$ of $C_G(N)$. By \lemref{lem:centerless}(iv), $\mathcal{K}$ contains $\mathcal{R}-\{R\}$ and so there is a unique maximal extension of $\mathcal{K}$, namely $\mathcal{R}$, and so by \thmref{thm:Extend}, the algorithm $\textalgo{Extend}$ creates the Remak $\Omega$-decomposition of $G$ so the return in this case is correct.
Finally suppose that $G>\zeta_2(G)\geq\zeta_1(G)>1$. There we have the commutative
diagram \figref{fig:rel-ext} which is exact in rows and columns.
\begin{figure}
\caption{The relative extension $1<\zeta_1(G)\leq\zeta_2(G)<G$. The rows and columns are exact.}
\label{fig:rel-ext}
\end{figure}
By \thmref{thm:Lift-Extend}, $\mathcal{H}\zeta_2(G)$ refines $\mathcal{R}\zeta_2(G)$ and so
the algorithm \textalgo{Merge} is guaranteed by
\thmref{thm:merge} to return a Remak $\Omega$-decomposition of $G$ (consider \figref{fig:recurse}).
\begin{figure}
\caption{The recursive step parameters feed into {\tt Merge}.}
\label{fig:recurse}
\end{figure}
\emph{Timing.} The algorithm enters a recursive call only if $\zeta_1(G)=1$ or
$G>\zeta_2(G)\geq \zeta_1(G)>1$. As these two cases are exclusive, at most one recursive call is
made by the algorithm.
The remainder of the algorithm uses polynomial time methods as indicated.
\end{proof}
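The case analysis in the proof can be summarized as a control-flow skeleton. Every subroutine below is a hypothetical stand-in for the correspondingly named subroutine of the paper ({\sc Abelian.Remak-$\Omega$-Decomposition}, {\sc Minimal-$\Omega$-Normal}, \textalgo{Extend}, \textalgo{Merge}, and so on); the toy model at the end, in which a ``group'' is just a set of labels of indecomposable centerless factors, exercises only the centerless branch.

```python
from types import SimpleNamespace

def find_remak(G, o):
    # Control-flow skeleton of the algorithm above.  Every member of `o`
    # is a hypothetical stand-in for one of the paper's subroutines.
    if o.is_trivial(G):
        return []
    Z1 = o.center(G)
    if Z1 == G:                                  # abelian case
        return o.abelian_remak(G)
    if o.is_trivial(Z1):                         # centerless case
        N = o.minimal_normal(G)
        C = o.centralizer(G, N)
        if o.is_trivial(C):
            return [G]                           # directly indecomposable
        K = find_remak(C, o)
        return o.extend(G, K)
    Z2 = o.second_center(G)                      # now G > Z1 > 1
    A = o.class2_remak(Z2)
    if Z2 == G:
        return A
    H = o.centralize(find_remak(o.quotient(G, Z1), o))
    return o.merge(A, H)

# Toy model: a "group" is a frozenset of labels of indecomposable,
# centerless direct factors, so only the centerless branch is reached.
toy = SimpleNamespace(
    is_trivial=lambda G: len(G) == 0,
    center=lambda G: frozenset(),
    minimal_normal=lambda G: frozenset({min(G)}),
    centralizer=lambda G, N: G - N,
    extend=lambda G, K: K + [G - frozenset().union(*K)],
    # branches never reached in the toy model:
    abelian_remak=None, second_center=None, class2_remak=None,
    quotient=None, centralize=None, merge=None,
)
```

In the toy model the recursion peels off one indecomposable factor per level, mirroring the role of $C_G(N)$ in the centerless case.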
\subsection{Proof of \thmref{thm:FindRemak}}
This is a corollary to \thmref{thm:FindRemak-Q}.
$\Box$
\begin{coro}\label{coro:FindRemak-matrix}
$\textalgo{FindRemak}$ has a deterministic polynomial-time solution for matrix $\Gamma_d$-groups.
\end{coro}
\begin{proof}
This follows from Section \ref{sec:tools}, \remref{rem:matrix}, and \thmref{thm:FindRemak-Q}.
\end{proof}
\subsection{General operator groups}\label{sec:gen-ops}
Now we suppose that $G\in \mathbb{G}_n$ is an $\Omega$-group for a general set $\Omega$ of
operators. That is, $\Omega\theta\subseteq \End G$. To solve \textalgo{Remak-$\Omega$-Decomposition}
in full generality it suffices to reduce to the case where $\Omega$ acts as automorphisms on $G$,
where we invoke \thmref{thm:FindRemak-Q}.
For that, suppose we have $\omega\theta\in \End G-\Aut G$. By Fitting's lemma we have that:
\begin{equation}
G=\ker \omega^{\ell(G)} \times \im \omega^{\ell(G)}.
\end{equation}
To compute such a decomposition we compute $\im\omega^{\ell(G)}$ and then apply
{\sc Direct-$\Omega$-Complement} to compute $\ker \omega^{\ell(G)}$. As $\Omega$ is part of the input, we may test each $\omega\in\Omega$ to find those $\omega$ where $\omega\theta\notin\Aut G$, and with each produce a direct $\Omega$-decomposition. The restriction of $\omega$ to the constituents induces either zero map, or an automorphism. Thus the remaining cases are handled by \thmref{thm:FindRemak-Q}.
$\Box$
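Fitting's lemma above can be checked concretely on a cyclic group: for an endomorphism $\omega$ of a finite group, iterating $\omega$ until the kernel and image stabilize yields the decomposition. The sketch below (our illustration, not one of the paper's subroutines) does this for $\mathbb{Z}_{12}$ with $\omega(x)=2x$.

```python
def fitting_decomposition(n, k):
    """Fitting decomposition of Z_n for the endomorphism w(x) = k*x mod n:
    iterate w until ker w^e and im w^e stabilize, then return (ker, im)."""
    def kernel(e):   # ker w^e
        return {x for x in range(n) if (k ** e * x) % n == 0}
    def image(e):    # im w^e
        return {(k ** e * x) % n for x in range(n)}
    e = 1
    while kernel(e) != kernel(e + 1) or image(e) != image(e + 1):
        e += 1
    return kernel(e), image(e)
```

For $n=12$, $k=2$ the iteration stabilizes at $e=2$ with $\ker = \{0,3,6,9\}$ and $\im = \{0,4,8\}$, which intersect trivially and have orders multiplying to $12$, exhibiting the direct decomposition.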
\section{An example}\label{sec:ex}
Here we give an example of the execution of the algorithm for \thmref{thm:FindRemak-Q} which covers several
of the interesting components (but of course fails to address all situations).
We will operate without a specific representation in mind, since we are interested in demonstrating
the high-level techniques of the algorithm for \thmref{thm:FindRemak-Q}.
We trace through how the algorithm might process the group
$$G=D_8 \times Q_8\times \SL(2,5)\times \big(\SL(2,5)\circ \SL(2,5)\big).$$
First the algorithm recurses until it reaches the group
$$\hat{G}=G/\zeta_2(G)\cong \PSL(2,5)^3.$$
At this point it finds a minimal normal subgroup $N$ of $\hat{G}$, of which there
are three, so we pick $N=\PSL(2,5)\times 1\times 1$. Next the algorithm computes
a Remak decomposition of $C_G(N)=1\times \PSL(2,5)\times \PSL(2,5)$. At this point
the algorithm returns the unique Remak decomposition
$$\mathcal{Q}:=\{\PSL(2,5)\times 1\times 1,1\times \PSL(2,5)\times 1,1\times 1\times \PSL(2,5)\}.$$
These are pulled back to the set $\{H_1, H_2, H_3\}$ of subgroups in $G$.
Next the algorithm constructs a Remak $G$-decomposition of $\zeta_2(G)$.
For that the algorithm constructs the bilinear map of commutation from
$\zeta_2(G)/\zeta_1(G)\cong \mathbb{Z}_2^4$
into $\gamma_2(\zeta_2(G))=\langle z_1,z_2\rangle\cong \mathbb{Z}_2^2$, i.e.
$$b:=\mathsf{Bi} (\zeta_2(G)):\mathbb{Z}_2^4\times \mathbb{Z}_2^4\to \mathbb{Z}_2^2.$$
Below we have described the structure constants for $b$ in a nice basis but remark that
unless we already know the direct factors of $\zeta_2(G)$ it is unlikely to have such a natural form.
\begin{equation}
b(u,v) = u
\begin{bmatrix}
0 & z_1 & & \\
-z_1 & 0 & & \\
& & 0 & z_2 \\
& & -z_2 & 0
\end{bmatrix} v^t,\qquad \forall u,v\in \mathbb{Z}_2^4.
\end{equation}
A basis for the centroid of $b$ is computed:
\begin{equation}
C(b) = \left\{\left(\begin{bmatrix}
a & 0 & & \\
0 & a & & \\
& & b & 0 \\
& & 0 & b
\end{bmatrix},
\begin{bmatrix} a & 0 \\ 0 & b\end{bmatrix}\right) :
a,b\in\mathbb{Z}_2\right\}\cong \mathbb{Z}_2\oplus\mathbb{Z}_2.
\end{equation}
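The centroid can be recovered by brute force from the structure constants: enumerate the $4\times 4$ matrices $X$ over $\mathbb{Z}_2$ with $X^{t}M_k=M_kX$ for the two structure matrices $M_1,M_2$ of $b$ (equivalently, $b(Xu,v)=b(u,Xv)$), and keep those for which the induced map $Y$ on the codomain satisfies $b(Xu,v)=Yb(u,v)$ for all $u,v$. The following sketch is our illustration (note $-1=1$ over $\mathbb{Z}_2$); it confirms $|C(b)|=4$, matching $C(b)\cong\mathbb{Z}_2\oplus\mathbb{Z}_2$.

```python
import numpy as np
from itertools import product

# Structure matrices of b over GF(2): b(u, v) = (u M1 v^T, u M2 v^T).
M1 = np.array([[0,1,0,0],[1,0,0,0],[0,0,0,0],[0,0,0,0]])
M2 = np.array([[0,0,0,0],[0,0,0,0],[0,0,0,1],[0,0,1,0]])

def b(u, v):
    return (u @ M1 @ v % 2, u @ M2 @ v % 2)

vecs = [np.array(t) for t in product((0, 1), repeat=4)]
e = np.eye(4, dtype=int)

centroid = []
for bits in product((0, 1), repeat=16):
    X = np.array(bits).reshape(4, 4)
    # adjoint condition b(Xu, v) = b(u, Xv), i.e. X^T M = M X for M1, M2
    if not ((X.T @ M1 % 2 == M1 @ X % 2).all() and
            (X.T @ M2 % 2 == M2 @ X % 2).all()):
        continue
    # Y is forced by Y z1 = b(X e1, e2) and Y z2 = b(X e3, e4);
    # keep X only if b(Xu, v) = Y b(u, v) holds for all u, v.
    Y = np.array([b(X @ e[0] % 2, e[1]), b(X @ e[2] % 2, e[3])]).T
    if all(tuple(Y @ np.array(b(u, v)) % 2) == b(X @ u % 2, v)
           for u in vecs for v in vecs):
        centroid.append((X, Y))
```

The four survivors are exactly $X=aI_2\oplus bI_2$ paired with $Y=\mathrm{diag}(a,b)$, $a,b\in\mathbb{Z}_2$, as displayed above.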
Next, the unique frame
$\mathcal{E}=\{ (I_2\oplus 0_2, 1\oplus 0), (0_2\oplus I_2,0\oplus 1)\}$
of $C(b)$ is built and used to create the subgroups
$\mathcal{K}:=\{D_8\times Z(Q_8), Z(D_8)\times Q_8\}$ in $\zeta_2(G)$. Here, using
an arbitrary basis $\mathcal{X}$ for $\zeta_1(G)=\mathbb{Z}_2^2\times \mathbb{Z}_4^2$,
the algorithm {\tt Merge}$(\mathcal{X},\mathcal{K})$
constructs a Remak decomposition $\mathcal{A}:=\{H,K,C_1, C_2\}$ of $\zeta_2(G)$
where $H\cong D_8$, $K\cong Q_8$, and $C_1\cong C_2\cong \mathbb{Z}_4$.
Finally, the algorithm {\tt Merge}$(\mathcal{A},\mathcal{H})$ returns a
Remak decomposition of $G$. To explain the merging process we trace that algorithm
through as well.
Let $R=\SL(2,5)\times 1\times 1$ and $S=1\times \big(\SL(2,5)\circ\SL(2,5)\big)$.
These groups are directly indecomposable direct factors of $G$ and
serve as the hypothesized directions for the direct chain used by {\tt Merge}.
Without loss of generality we index the $H$'s so that $H_2=R\zeta_2(G)$ and $H_1H_3=S\zeta_2(G)$
and
$$G/\zeta_2(G)=\PSL(2,5)\times \PSL(2,5)\times \PSL(2,5)
=H_2/\zeta_2(G)\times H_1/\zeta_2(G)\times H_3/\zeta_2(G).$$
Furthermore, $\zeta_2(H_i)=\zeta_2(G)$ for all $i\in\{1,2,3\}$. Therefore,
$(\mathcal{A},\mathcal{H})$ satisfies the hypothesis of \thmref{thm:merge}.
The loop in {\tt Merge} begins with $\mathcal{K}_0=\mathcal{A}$ and seeks to extend
$\mathcal{A}$ to $H_1$ by selecting an appropriate subset $\mathcal{A}_1\subseteq \mathcal{K}_0=\mathcal{A}$
and finding a complement $\lfloor H_1\rfloor\leq H_1$ such that
$\mathcal{K}_1=\mathcal{A}_1\sqcup\{\lfloor H_1\rfloor\}$ is a direct decomposition of $H_1$.
The configuration at this stage is seen in \figref{fig:merge-1}. By \thmref{thm:Extend}, we have $H,K\in \mathcal{A}_1$ (as those lie outside the center) and one of the $C_i$'s (though no unique choice exists there).
In the second loop iteration we extend $\mathcal{K}_1$ to a $\mathfrak{N}_2$-refined
direct decomposition of $H_1 H_2$. This selects a subset $\mathcal{A}_2\subseteq \mathcal{K}_1\cap \zeta_2(G)$. Also $H_1$ and $H_2$ are in different
directions, specifically $H_2=R\zeta_2(G)$ and $H_1\leq S\zeta_2(G)$, so
the algorithm is forced to include $\lfloor H_1\rfloor\in\mathcal{K}_2$ (cf. \thmref{thm:Extend}(iii)) and then creates
a complement $\lfloor H_2\rfloor\cong\SL(2,5)$ to $\langle \mathcal{A}_2,\lfloor H_1\rfloor\rangle$.
The configuration is illustrated in \figref{fig:merge-2}. As before, we have $H,K\in\mathcal{K}_2$ as well, but the cyclic groups are now gone as the centers of $\lfloor H_i\rfloor$, $i\in\{1,2\}$, fill out a direct decomposition of $\zeta_2(G)$.
Finally, in the third loop iteration, the direction is back towards $S$ and so
the extension $\mathcal{K}_3$ of $\mathcal{K}_2$ to $H_1 H_2 H_3$ contains
$\lfloor H_2\rfloor$ and is $\mathfrak{N}_2$-refined. However, the group $\lfloor H_1\rfloor$
is not a direct factor of $G$, as it is one term in a nontrivial central product. Therefore
that group is replaced by a subgroup $\lfloor H_1 H_3\rfloor\cong \SL(2,5)\circ\SL(2,5)$. The final configuration
is illustrated in \figref{fig:merge-3}. $\mathcal{K}_3$ is a Remak decomposition of $G$.
\begin{figure}
\caption{The lattice encountered during the first iteration of the loop in the algorithm
{\tt Merge}.}
\label{fig:merge-1}
\end{figure}
\begin{figure}
\caption{The lattice encountered during the second iteration of the loop in the algorithm
{\tt Merge}.}
\label{fig:merge-2}
\end{figure}
\begin{figure}
\caption{The lattice encountered during the third iteration of the loop in the algorithm
{\tt Merge}.}
\label{fig:merge-3}
\end{figure}
\section{Closing remarks}\label{sec:closing}
Historically the problem of finding a Remak decomposition focused on groups given by their multiplication table since, even there, there did not seem to be a polynomial-time solution. It was known that a Remak decomposition could be found by checking all partitions of all minimal generating sets of a group $G$, and so the problem had a sub-exponential complexity of $|G|^{\log |G|+O(1)}$. That placed it in the company of other interesting problems, including testing for an isomorphism between two groups \cite{Miller:nlogn}. Producing an algorithm that is polynomial-time in the size of the group's multiplication table (i.e. polynomial in $|G|^2$) was progress, achieved independently in \cite{KN:direct} and \cite{Wilson:thesis}. Evidently, \thmref{thm:FindRemak} provides a polynomial-time solution for groups input in this way (e.g. use a regular representation). With a few observations we sharpen \thmref{thm:FindRemak} in that specific context to the following:
\begin{thm}\label{thm:nearly-linear}
There is a deterministic nearly-linear-time algorithm which, given a group's multiplication table,
returns a Remak decomposition of the group.
\end{thm}
\begin{proof}
The algorithm for \thmref{thm:FindRemak-Q} is polynomial in $\log |G|$. As the input length here is $|G|^2$, it suffices to show that the problems listed in Section \ref{sec:tools} have $O(|G|^2\log^c |G|)$-time or better solutions. Evidently, \textalgo{Order}, \textalgo{Member}, and \textalgo{Solve} each have brute-force linear-time solutions. \textalgo{Presentation} can be solved in linear time by selecting a minimal generating set $\{g_1,\dots,g_{\ell}\}$ (which has size at most $\log |G|$) and acting on the cosets of $\{\langle g_i,\dots, g_{\ell}\rangle : 1\leq i\leq \ell\}$ to produce defining relations of the generators, in a fashion similar to \cite[Exercise 5.2]{Seress:book}. For \textalgo{Minimal-Normal}, begin with an element and take its normal closure. If this is a proper subgroup then recurse; otherwise, try an element which is not conjugate to the first, and repeat until either a proper normal subgroup is discovered or it is proved that the group is simple. That takes $O(|G|^2)$-time. The remaining algorithms \textalgo{Primary-Decomposition}, \textalgo{Irreducible}, and \textalgo{Frame} have brute-force linear-time solutions. Thus, the algorithm can be modified to run in time $O(|G|^2 \log^c |G|)$.
\end{proof}
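The normal-closure loop used for \textalgo{Minimal-Normal} in the proof can be illustrated on a group given by its multiplication table. The sketch below (our illustration) builds the table of $S_3$ and closes a set under products and conjugation; the closure of a $3$-cycle is the proper normal subgroup $A_3$ of order $3$, while a transposition closes to all of $S_3$.

```python
from itertools import permutations

# Multiplication table of S3 as permutations of {0,1,2};
# table[i][j] is the index of the composition g_i . g_j.
elems = sorted(permutations(range(3)))
index = {g: i for i, g in enumerate(elems)}
compose = lambda g, h: tuple(g[h[k]] for k in range(3))
table = [[index[compose(g, h)] for h in elems] for g in elems]
inverse = [row.index(index[(0, 1, 2)]) for row in table]

def normal_closure(table, inverse, g):
    """Smallest normal subgroup containing g, read off the multiplication
    table: close under conjugation x -> h x h^{-1} and under products
    (both orders); in a finite group this closure is a subgroup."""
    n = len(table)
    closure, frontier = {g}, [g]
    while frontier:
        x = frontier.pop()
        new = {table[h][table[x][inverse[h]]] for h in range(n)}  # h x h^-1
        new |= {table[x][y] for y in closure}
        new |= {table[y][x] for y in closure}
        for z in new - closure:
            closure.add(z)
            frontier.append(z)
    return closure
```

Each element enters the worklist once, and its processing touches every row of the table, giving the $O(|G|^2)$ bound cited in the proof.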
Section \ref{sec:lift-ext} lays out a framework which permits a local view of the direct products of a group. We have some lingering questions in this area.
\begin{enumerate}
\item What is the best series of subgroups to use for the algorithm of \thmref{thm:FindRemak-Q}?
Corollaries \ref{coro:canonical-graders} and \ref{coro:canonical-grader-II} offer alternative series to use in the algorithm. There is an option for a top-down algorithm based on down graders. That may allow for a black-box algorithm, since verbal subgroups can be constructed in black-box groups; see \cite[Section 2.3.4]{Seress:book}.
\item Is there a parallel NC solution for \textalgo{Remak-$\Omega$-Decomposition}?
We can speculate how this may proceed. First, select an appropriate series $1\leq G_1\leq\cdots \leq G_n=G$ for $G$ and distribute and use parallel linear algebra methods to find Remak decompositions $\mathcal{A}_{i0}$ of each $G_{i+1}/G_{i}$, for $1\leq i<n$. Then for $0\leq j\leq \log n$, for each $1\leq i\leq n/2^j$ in parallel compute $\mathcal{A}_{i(j+1)}:=\textalgo{Merge}(\mathcal{A}_{ij},\mathcal{A}_{(i+1)j})$. When $j=\lfloor \log n\rfloor$ we have a direct decomposition $\mathcal{A}_{1\log n}$ of $G$ and have used poly-logarithmic time. Unfortunately, \thmref{thm:merge}(a) is not satisfied in these recursions, so we cannot be certain that the result is a Remak decomposition.
\end{enumerate}
\section*{Acknowledgments}
I am indebted to W. M. Kantor for taking a great interest in this work and offering guidance. Thanks to E. M. Luks, C.R.B. Wright, and \'{A}. Seress for encouragement and many helpful remarks.
\end{document} |
\begin{document}
\title[Continuity of attractors for $\mathcal{C}^1$ perturbations]{Continuity of attractors for $\mathcal{C}^1$ perturbations of a smooth domain}
\author[P. S. Barbosa, A. L. Pereira ]{Pricila S. Barbosa$^{\star}$ and Ant\^onio L. Pereira$^{\diamond}$ }
\thanks{$^\diamond$Partially supported by FAPESP-Brazil grant 2016/02150-8.}
\address{Ant\^onio L. Pereira
\break
Instituto de Matem\'atica e Estat\'istica \\
Universidade de S\~ao Paulo - S\~ao Paulo - Brazil}
\email{[email protected]}
\address{Pricila S. Barbosa
\break
Universidade Tecnol\'ogica Federal do Paran\'a - Paran\'a - Brazil }
\email{[email protected]}
\date{}
\subjclass{Primary: 35B41 ; Secondary: 35K20, 58D25 }
\keywords{parabolic problem, perturbation of the domain, global attractor, continuity of attractors.}
\begin{abstract}
We consider a family of semilinear parabolic problems with nonlinear boundary conditions
\[
\left\{
\begin{aligned}
u_t(x,t)&=\Delta u(x,t) -au(x,t) + f(u(x,t)) ,\,\,\ x \in \Omega_\epsilon
\,\,\,\mbox{and}\,\,\,\,\,\,t>0\,,
\\
\displaystyle\frac{\partial u}{\partial N}(x,t)&=g(u(x,t)), \,\, x \in \partial\Omega_\epsilon \,\,\,\mbox{and}\,\,\,\,\,\,t>0\,,
\end{aligned}
\right.
\]
where $\Omega_0 \subset \mathbb{R}^n$ is a smooth (at least $\mathcal{C}^2$) domain, $\Omega_{\epsilon} = h_{\epsilon}(\Omega_0)$ and $h_{\epsilon}$ is a family of diffeomorphisms converging to the identity in the $\mathcal{C}^1$-norm. Assuming suitable regularity and dissipative conditions for the nonlinearities, we show that the problem is well posed for $\epsilon>0$ sufficiently small in a suitable scale of fractional spaces, the associated semigroup has a global attractor $\mathcal{A}_{\epsilon}$ and the family $\{\mathcal{A}_{\epsilon}\}$ is continuous at $\epsilon = 0$.
\end{abstract}
\maketitle
\allowdisplaybreaks
\section{Introduction} \label{intro}
Let $\Omega= \Omega_0 \subset \mathbb{R}^n$ be a $\mathcal{C}^2$ domain, $a$ a positive number, $f, g: \mathbb{R} \to \mathbb{R}$ real functions, and consider the family of semilinear parabolic problems with nonlinear Neumann boundary conditions:
\begin{equation} \label{nonlinBVP} \tag{$P_{\epsilon}$}
\begin{array}{rcl}
\left\{
\begin{array}{rcl}
u_t(x,t)&=&\Delta u(x,t) -au(x,t) + f(u(x,t)) ,\,\,\ x \in \Omega_\epsilon
\,\,\,\mbox{and}\,\,\,\,\,\,t>0\,,
\\
\displaystyle\frac{\partial u}{\partial N}(x,t)&=&g(u(x,t)), \,\, x \in \partial\Omega_\epsilon \,\,\,\mbox{and}\,\,\,\,\,\,t>0\,,
\end{array}
\right.
\end{array}
\end{equation}
where $\Omega_{\epsilon}= \Omega_{h_\epsilon} =h_{\epsilon}(\Omega_0)$ and $h_{\epsilon}: \Omega_0 \to \mathbb{R}^n $ is a family
of $\mathcal{C}^m, m \geq 2$ maps satisfying suitable conditions to be specified later.
One of the central questions concerning this problem is the existence and properties of \emph{global attractors} since, as is well known, they determine the dynamics of the entire system (see, for example, \cite{Hale} or \cite{Teman}). The continuity with respect to parameters present in the equation is also of interest, since it can be seen as a desirable property of ``robustness'' in the model. In many cases, however, the form of the equation is fixed, so the ``parameter'' of interest is the domain where the problem is posed.
The existence of a global compact attractor for problem (\ref{nonlinBVP}) has been proved in
\cite{COPR} and \cite{OP}, under stronger smoothness hypotheses on the domains and growth and dissipative conditions on the nonlinearities $f$ and $g$.
\par The problem of existence and continuity of global attractors for semilinear parabolic problems, with respect to change of domains has also been considered in \cite{AC1}, for the problem with homogeneous boundary conditions
\begin{equation*}
\begin{array}{rlr}
\left\{
\begin{array}{rcl}
u_t = \Delta u + f(x,u) \,\,\,\mbox{in}\,\,\, \,\,\,\Omega_\epsilon \\
\displaystyle\frac{\partial u}{\partial N}= 0 \,\,\,\mbox{on}\,\,\,\,\,\,\partial\Omega_\epsilon\,,
\end{array}
\right.
\end{array}
\end{equation*}
where $\Omega_\epsilon$, $0 \leq \epsilon \leq \epsilon_0$ are bounded domains
with Lipschitz boundary in $\mathbb{R}^N$, $N \geq 2$. There it is proved that, if the perturbations are such that the convergence of the eigenvalues and eigenfunctions of the linear part of the problem can be shown, then the upper semicontinuity of the attractors follows. With the additional assumption that the equilibria are all hyperbolic, the lower semicontinuity is also obtained.
\par The behavior of the equilibria of (\ref{nonlinBVP}) was studied in
\cite{AB2} and \cite{AB3}. In these papers, the authors consider
a family of smooth domains $\Omega_\epsilon \subset \mathbb{R}^N$, $N \geq 2$ and $0 \leq \epsilon \leq \epsilon_0$ whose boundary oscillates rapidly when the parameter
$\epsilon \to 0$ and prove that the equilibria, as well as the spectra of the linearised problem around them, converge to the solution of a ``limit problem''.
In \cite{PP} the authors prove the continuity of the attractors of (\ref{nonlinBVP}) with respect to
$C^2$-perturbations of a smooth domain of $\mathbb{R}^n$.
These results do not extend immediately to the case considered here, due to the lack of smoothness of the domains considered and the fact that the perturbations do not
converge to the inclusion in the $C^2$-norm.
In this work, we follow the general approach of \cite{PP}, which consists basically in ``pull-backing'' the perturbed problems to the fixed domain $ {\Omega}$
and then considering the family of abstract semilinear problems thus generated. We present a brief overview of this approach in the next section for the convenience of the reader. Our aim here is then to prove well-posedness,
establish the existence of a global attractor $\mathcal{A}_{\epsilon}$
for sufficiently small $\epsilon \geq 0$, and prove that the family of attractors
is continuous at $\epsilon = 0$.
These results were obtained in our previous paper \cite{BPP} for the family of perturbations
of the unit square in $\mathbb{R}^2$ given by
\begin{equation} \label{per}
h_{\epsilon} (x_1,x_2) = (\,x_1\,,\,x_2 + x_2\,\epsilon\, \sin(x_1/\epsilon^\alpha)\,)
\end{equation}
with $0 <\alpha <1$ and $\epsilon >0$ sufficiently small
(see Figure \ref{figura}).
\begin{figure}
\caption{The perturbed region}
\label{figura}
\end{figure}
In the present paper, we generalize these results in two directions: we consider the problem in arbitrary spatial dimension and, also, instead of a specific family of perturbations, we consider general families
$h_{\epsilon}: \Omega_0 \to \mathbb{R}^n $
of $\mathcal{C}^m, m \geq 2$ maps satisfying the following abstract hypotheses:
\begin{itemize}
\item $({\bf H_1})$ \ $ \|h_{\epsilon}- i_{\Omega_0} \|_{\mathcal{C}^1{(\Omega)}} \to 0$ as
$\epsilon \to 0.$
\item $ ({\bf H_2})$ \ The Jacobian determinant $Jh_{\epsilon}$ of $h_{\epsilon}$ is differentiable, and
\\ $ \|\nabla Jh_{\epsilon} \|_{\infty} =
\sup \{ \, \|\nabla Jh_{\epsilon}(x) \| \ , \ x \in \Omega \} \to 0 $ as $\epsilon \to 0$.
\end{itemize}
We show in section \ref{linear_semi} that the family $h_{\epsilon}$ considered in \cite{BPP} satisfies
the conditions $({\bf H_1})$ and $({\bf H_2})$. Since the domain $\Omega$ is not of class $\mathcal{C}^1$, the results obtained here do not immediately apply. However, since the perturbations occur only in a
smooth portion of the boundary, they could easily be adapted to this case. We also give there more general examples of families satisfying our properties.
The paper is organized as follows: in section \ref{prelim} we show how the problem can be reduced to a family of problems in the initial domain and collect some results needed later. In section
\ref{domains} we give some rather general examples of families satisfying our basic assumptions. In section
\ref{linear_semi} we show that the perturbed linear operators are sectorial operators in suitable spaces and study properties of the linear semigroup generated by them. In section \ref{abstract} we show that the problem \ref{nonlinBVP} can be reformulated as an abstract problem in a scale of Banach spaces, which is shown to be locally well-posed in section \ref{wellposed}, under suitable growth assumptions on
$f$ and $g$. In section \ref{global_exist}, assuming a dissipative condition for the problem, we use comparison results to prove that the solutions are globally defined and the family of associated semigroups are uniformly bounded.
In section \ref{attract} we prove the existence of global attractors.
In section \ref{upper}, we show that these attractors
behave upper semicontinuously. Finally, in section \ref{lower}, with some additional assumptions on the nonlinearities and on the set of equilibria, we show that they are also lower semicontinuous at $\epsilon=0$.
\section{Reduction to a fixed domain} \label{prelim}
One of the difficulties encountered in problems of perturbation of the domain
is that the function spaces change with the change of the region. One way to overcome this difficulty is to effect a ``change of variables'' in order to bring
the problem back to a fixed region. This approach was developed by D. Henry in \cite{He1} and is the one we adopt here.
We describe it briefly here, for convenience of the reader.
For a different
approach, see \cite{AB2}, \cite{AB3} and \cite{AC1}.
Given an open bounded $C^m$ region $\Omega \subset \mathbb{R}^{n}$, $m \geq 1$, denote by $\textrm{Diff}^m(\Omega)$
the set of $\mathcal{C}^m $ embeddings (= diffeomorphisms from $\Omega$
to its image).
We define a topology in $\textrm{Diff}^m(\Omega)$ by declaring that $ \Omega$ is in an $\epsilon$-neighborhood of $\Omega_0$ if $ \Omega = h(\Omega_0)$, with $ \|h - i_{\Omega_0}\|_{\mathcal{C}^m(\Omega_0 )} < \epsilon $.
It has been shown in \cite{Mich} that this topology is metrizable and we
denote by
${\mathcal{M}}_{m}(\Omega)$ or simply ${\mathcal{M}}_{m}$ this (separable)
metric space.
We say that a function $F$ defined in the
space ${\mathcal{M}}_{m}$ with values in a Banach space is $C^m$ or
analytic if $h \mapsto F(h(\Omega))$ is $C^m$ or analytic as a map
of Banach spaces ($h$ near $i_{\Omega}$ in $C^m(\Omega,
\mathbb{R}^n)$). In this sense, we may express problems of perturbation
of the boundary of a boundary value problem as problems of
differential calculus in Banach spaces.
If $h: \Omega \mapsto \mathbb{R}^n$ is a $C^k$, $k \leq m$ embedding, we may
consider the `pull-back' of $h$
$$ h^{*}: C^k(h(\Omega)) \to C^k(\Omega) \quad (0 \leq k \leq m)$$
defined by $h^{*}(\varphi) = \varphi \circ h$, which is an
isomorphism with inverse ${h^{-1}}^{*}$. Other function spaces can
be used instead of $C^k$, and we will actually be interested
mainly in Sobolev spaces and fractional power spaces.
Now, if
$
F_{h(\Omega)} : C^m(h(\Omega)) \to C^0(h(\Omega))$ is a (generally
nonlinear) differential operator in $\Omega_h= h(\Omega) $
we may consider the operator
$h^{*} F_{h(\Omega)} {h^{*}}^{-1}$, which is a differential
operator in the fixed region $\Omega$.
Let now $h_{\epsilon}: \Omega_0 \to \mathbb{R}^n $ be a family of maps satisfying
the conditions $({\bf H_1})$ and $({\bf H_2})$ and
$\Omega_{\epsilon} = h_{\epsilon}(\Omega)$ the corresponding family of ``perturbed domains''.
\begin{lema}\label{estimateh}
If $\epsilon> 0$ is sufficiently small, the map $h_\epsilon$ belongs to $\textrm{Diff}^m(\Omega)$, that is, it is a diffeomorphism from
$\Omega$ to its image.
\end{lema}
\noindent{\bf Proof. \quad} Straightforward.
\begin{lema} \label{isosobolev}
If $ 0< s \leq m$
and $\epsilon > 0$ is small enough, the map
$$
\begin{array}{llcll}
h_\epsilon^* &:& H^s(\Omega_{\epsilon}) &\to& H^s(\Omega) \\
&& u & \longmapsto & u \circ h_\epsilon
\end{array}
$$
is an isomorphism, with inverse ${h_\epsilon^*}^{-1} = (h_\epsilon^{-1})^*$.
\end{lema}
\noindent{\bf Proof. \quad} See \cite{BPP}.
\mbox{${\square}$}
Using Lemma \ref{estimateh} we may bring the problem (\ref{nonlinBVP}) back to the fixed region $\Omega_0$.
For this purpose, observe that
$v(.\,,t)$ is a solution of (\ref{nonlinBVP}) in the perturbed region
${\Omega_{\epsilon} = h_\epsilon(\Omega)}$, if and only if
$u(.\,,t)={h_\epsilon^*v(.,t)}$ satisfies
\begin{equation}\label{nonlinBVP_fix}
\begin{array}{rcl}
\left\{
\begin{array}{rcl}
u_t(x,t)&=& {h_\epsilon^*\Delta_{\Omega_{\epsilon}}h_\epsilon^{*^{-1}}} u(x,t) -au(x,t) + f(u(x,t)) ,\,\, x \in \Omega \,\,\,\mbox{and}\,\,\,t>0, \\
{h_\epsilon^*\displaystyle\frac{\partial }{\partial N_{\Omega_{\epsilon}}}h_\epsilon^{*^{-1}}}u(x,t)&=&g(u(x,t)), \,\, x \in \partial\Omega \,\,\,\mbox{and}\,\,\,t>0\,,
\end{array}
\right.
\end{array}
\end{equation}
where $ h_{\epsilon}^{*} \Delta_{\Omega_{\epsilon}} {h_{\epsilon}^{*}}^{-1}$ and
$ h_{\epsilon}^{*} \frac{\partial}{\partial N_{\Omega_{\epsilon}}} {h_{\epsilon}^{*}}^{-1}$
are defined by
$$
h_{\epsilon}^{*} \Delta_{\Omega_{\epsilon}} {h_{\epsilon}^{*}}^{-1}u (x) =
\Delta_{\Omega_{\epsilon}} (u\circ h_{\epsilon}^{-1})(h_{\epsilon}(x))
$$
and
$$
h_{\epsilon}^{*} \frac{\partial}{\partial N_{\Omega_{\epsilon}}} {h_{\epsilon}^{*}}^{-1} u(x)=
\frac{\partial}{\partial N_{\Omega_{\epsilon}}}(u\circ h_{\epsilon}^{-1})(h_{\epsilon}(x))
$$
(in appropriate spaces). In particular, if
${\mathcal{A}}_{\epsilon}$ is the global attractor of (\ref{nonlinBVP}) in $
H^s(\Omega_{\epsilon})$, then $ {\tilde{\mathcal{A}}}_{\epsilon} = \{v \circ h_\epsilon \mid v
\in {\mathcal{A}}_{\epsilon} \}$ is the global attractor of (\ref{nonlinBVP_fix})
in $H^s(\Omega)$ and conversely. In this way we can consider
the problem of continuity of the attractors as $\epsilon \to
0 $ in a fixed phase space.
For later use, we now compute an expression for the
differential operator $h_\epsilon^*\Delta_{\Omega_{\epsilon}}h_\epsilon^{*-1}$ in the fixed region $\Omega$, in terms of $h_\epsilon$.
\noindent Writing
$h_\epsilon(x)\,=\,h_\epsilon(x_1,x_2, \cdots, x_n)\,=\,((h_\epsilon)_1(x),(h_\epsilon)_2(x), \cdots, (h_\epsilon)_n(x) )\,=\,(y_1,y_2, \cdots, y_n)\,=\,y\,$, we obtain, for $i=1, 2, \cdots, n$
\begin{equation}
\begin{split} \label{deriv}
\left(h_\epsilon^* \displaystyle\frac{\partial}{\partial y_i} h_\epsilon^{*-1}(u)\right)(x) &= \displaystyle\frac{\partial}{\partial y_i}(u\circ h_\epsilon^{-1})(h_\epsilon(x)) \\
&= \displaystyle\sum^n_{j=1} \left[\left(\displaystyle\frac{\partial h_\epsilon}{\partial {x}}\right)^{-1}\right]_{j,i}(x)\frac{\partial u}{\partial x_j}(x) \\
&= \displaystyle\sum^{{n}}_{j=1} b_{ij}^{\epsilon}(x)\displaystyle\frac{\partial u}{\partial x_j}(x)\,,
\end{split}
\end{equation}
where $b_{ij}^{\epsilon}(x)$ is the $i,j$-entry of the
inverse transpose of the Jacobian matrix of $h_\epsilon$. From now on, we omit the
$\epsilon$ from the notation for simplicity.
Therefore,
\begin{equation}
\begin{split} \label{laph}
h_\epsilon^*\Delta_{\Omega_{\epsilon}}h_\epsilon^{*-1}(u)(x) & =
\sum_{i=1}^n \left(h_\epsilon^*\frac{\partial^2}{\partial y_i^2}h_\epsilon^{*-1}(u)\right)(x) \\
& = \sum_{i=1}^n \left(\sum_{k=1}^n b_{i\,k} \frac{\partial}{\partial x_k}\left(\sum_{j=1}^n b_{i j}\frac{\partial u}{\partial x_j}\right) \right)(x) \\
& = \sum_{k=1}^n \frac{\partial}{\partial x_k}
\left( \sum_{j=1}^n \sum_{i=1}^n b_{i j} b_{i k}
\frac{\partial u}{\partial x_j} \right){(x)} -
\sum_{j=1}^n \left( \sum_{i,k=1}^n
\frac{\partial}{\partial x_k} (b_{i k} ) b_{i j} \right)
\frac{\partial u}{\partial x_j} {(x)}\\
& = \sum_{k=1}^n \frac{\partial}{\partial x_k}
\left( \sum_{j=1}^n C_{kj}
\frac{\partial u}{\partial x_j} \right){(x)} -
\sum_{j=1}^n A_j
\frac{\partial u}{\partial x_j}{(x)},
\end{split}
\end{equation}
where $ C_{kj} = \sum_{i=1}^n b_{i j} b_{i k} $ and
$A_j = \sum_{i,k=1}^n
\frac{\partial}{\partial x_k} (b_{i k} ) b_{i j} $.
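The identity just derived can be verified symbolically. The sketch below uses a test diffeomorphism and test function of our own choosing (not the family $h_\epsilon$ of the paper, but one with an explicit inverse) and computes both sides: the pulled-back Laplacian $\big(\Delta(u\circ h^{-1})\big)\circ h$ directly, and the divergence-form expression with the coefficients $C_{kj}$ and $A_j$.

```python
import sympy as sp

x1, x2, y1, y2, eps = sp.symbols('x1 x2 y1 y2 epsilon')

# A simple test diffeomorphism with explicit inverse (our choice):
# h(x1, x2) = (x1, x2*(1 + eps*x1)).
h = sp.Matrix([x1, x2 * (1 + eps * x1)])
hinv = sp.Matrix([y1, y2 / (1 + eps * y1)])
u = sp.sin(x1) * x2 ** 2                      # arbitrary smooth test function

B = h.jacobian([x1, x2]).inv().T              # b_{ij}(x), inverse transpose

# Left side: (Delta (u o h^{-1}))(h(x)).
v = u.subs({x1: hinv[0], x2: hinv[1]}, simultaneous=True)
lhs = (sp.diff(v, y1, 2) + sp.diff(v, y2, 2)).subs(
    {y1: h[0], y2: h[1]}, simultaneous=True)

# Right side: divergence-form expression with C_{kj} and A_j as above.
xs = [x1, x2]
C = lambda k, j: sum(B[i, j] * B[i, k] for i in range(2))
A = lambda j: sum(sp.diff(B[i, k], xs[k]) * B[i, j]
                  for i in range(2) for k in range(2))
rhs = sum(sp.diff(sum(C(k, j) * sp.diff(u, xs[j]) for j in range(2)), xs[k])
          for k in range(2)) \
    - sum(A(j) * sp.diff(u, xs[j]) for j in range(2))
```

The difference \texttt{lhs - rhs} simplifies to zero, in agreement with the computation above.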
We also need to compute the boundary condition $h_\epsilon^*\displaystyle\frac{\partial}{\partial N_{\Omega_{\epsilon}}}h_\epsilon^{*-1}u=0$ in the fixed region $\Omega$ in terms of $h_\epsilon$. Let $N_{h_\epsilon(\Omega)}$ denote the {outward unit} normal to the boundary of $h_\epsilon(\Omega):=\Omega_{\epsilon}$. From (\ref{deriv}), we obtain
\begin{equation}
\begin{split}
\left(h_\epsilon^*\displaystyle\frac{\partial}{\partial N_{\Omega_{\epsilon}}}h_\epsilon^{*-1}u\right)(x) &= \sum_{i=1}^n
\left(h_\epsilon^*\displaystyle\frac{\partial}{\partial y_i}h_\epsilon^{*-1}u\right)(x)\left(N_{\Omega_{\epsilon}}\right)_i(h_\epsilon (x)) \\
&= \sum_{i=1}^n \displaystyle\frac{\partial}{\partial y_i}(u \circ h_\epsilon^{-1})(h_\epsilon (x))\left(N_{\Omega_{\epsilon}}\right)_i(h_\epsilon (x)) \\
&= \sum_{i,j=1}^n b_{ij}(x)\displaystyle\frac{\partial u}{\partial x_j}(x)\left(N_{\Omega_{\epsilon}}\right)_i(h_\epsilon(x)) \label{normalh}
\end{split}
\end{equation}
\par Since
$$ N_{\Omega_{\epsilon}}(h_\epsilon(x)) = h_\epsilon^*N_{\Omega_{\epsilon}}(x) =
\displaystyle\frac{[h_\epsilon^{-1}]_x^T N_\Omega(x)}{||\,[h_\epsilon^{-1}]_x^T N_\Omega(x)\,||}
$$
(see \cite{He1}), we obtain
$$ \left(N_{\Omega_{\epsilon}}(h_\epsilon(x)) \right)_i =
\frac{1}{||\,[h_\epsilon^{-1}]_x^T N_\Omega(x)\,||}
\sum_{k=1}^n b_{ik} (N_{\Omega})_k.$$
Thus, from (\ref{normalh})
\begin{equation}
\begin{split}
\left(h_\epsilon^*\displaystyle\frac{\partial}{\partial N_{\Omega_{\epsilon}}}h_\epsilon^{*-1}u\right)(x) &=
\frac{1}{||\,[h_\epsilon^{-1}]_x^T N_\Omega(x)\,||}
\sum_{k=1}^n \left( \sum_{i,j=1}^n b_{ik} b_{ij}(x)\displaystyle\frac{\partial u}{\partial x_j}(x) \right) (N_{\Omega})_k \\
&= \frac{1}{||\,[h_\epsilon^{-1}]_x^T N_\Omega(x)\,||}
\sum_{k=1}^n \left( \sum_{j=1}^n C_{kj}\displaystyle\frac{\partial u}{\partial x_j}(x) \right) (N_{\Omega})_k
\end{split}
\end{equation}
Thus, the boundary condition
$
\left(h_\epsilon^*\displaystyle\frac{\partial}{\partial N_{\Omega_{\epsilon}}}h_\epsilon^{*-1}u\right)(x) = 0\,,
$
becomes
$$
\sum_{j,k=1}^n \left(N_\Omega(x)\right)_k(C_{kj}D_ju) = 0 \,\,\mbox{on}\,\, \partial\Omega\,,
$$
\noindent so the boundary condition is exactly the
``oblique normal derivative'' with respect to the divergence part of the
operator
$ h_\epsilon^*\Delta_{\Omega_{\epsilon}}h_\epsilon^{*-1}.$
\section{Basic assumptions and examples on domain perturbations}
\label{domains}
We assume that the unperturbed domain $ \Omega_0 $ is of class $\mathcal{C}^2$, and consider rather general examples of families
$h_{\epsilon}: \Omega_0 \to \mathbb{R}^n $
of $\mathcal{C}^2 $ maps satisfying the hypotheses $({\bf H_1})$ and $({\bf H_2})$ stated in the introduction.
\begin{ex}
The family $h_{\epsilon}$ of perturbations
of the unit square in $\mathbb{R}^2$ considered in \cite{BPP} given by
\begin{equation}
h_{\epsilon} (x_1,x_2) = (\,x_1\,,\,x_2 + x_2\,\epsilon\, \sin(x_1/\epsilon^\alpha)\,)
\end{equation}
with $0 <\alpha <1$ and $\epsilon >0$ sufficiently small
(see Figure \ref{figura}),
satisfies the conditions $({\bf H_1})$ and $({\bf H_2})$.
We observe that the unperturbed region \emph{is not of class $\mathcal{C}^2 $} and, therefore, does not strictly satisfy our hypotheses. However, since the perturbation occurs only at a smooth portion of the boundary and the elliptic problem in this case is well posed (see \cite{Gri}), the problem can actually be included in the framework considered here, with only minor modifications.
In fact, hypothesis $({\bf H_1})$ was verified in \cite{BPP}
(Lemma 2.1). A simple computation gives $\nabla Jh_{\epsilon} = ( \epsilon^{1-\alpha}
\cos(x_1/ \epsilon^{\alpha} ), 0) $, from which $({\bf H_2})$ follows easily.
From $({\bf H_1})$, it follows that the boundary Jacobian $\mu_{\epsilon}= J_{\partial \Omega} {h_{\epsilon}}_{|\, \partial \Omega} \to 1$ uniformly as $\epsilon \rightarrow 0$. This can also be checked by explicit computation, as done in \cite{BPP}:
{\[ \mu_{\epsilon} = \left\{
\begin{array}{l}
\displaystyle\frac{\sqrt{1 + \epsilon^{\,2-2\alpha}{cos}^{\,2} (x_1/\epsilon^{\alpha})}}{1 + \epsilon\sin(x_1/\epsilon^\alpha)}
\textrm{ for } x \in I_1:= \left\{ (x_1,1) \, | \, 0
\leq x_1 \leq 1 \right\}, \nonumber \\
\displaystyle\frac{1}{ 1+ \epsilon \sin(x_1/\epsilon^{\alpha})} \textrm{ for } x \in I_3:= \left\{ (x_1,0) \, | \, 0
\leq x_1 \leq 1 \right\}, \nonumber \\
1 \textrm{ for } x \in I_2:= \left\{ (1,x_2) \, | \, 0
\leq x_2 \leq 1 \right\} \textrm{ and } I_4:= \left\{ (0,x_2) \,
| \, {0} \leq x_2 \leq 1 \right\}. \nonumber \\
\end{array} \right.
\]
}
\end{ex}
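For the reader's convenience, we sketch the computation behind $({\bf H_2})$ in the example above. Differentiating $h_{\epsilon}$, we obtain
\begin{equation*}
Dh_{\epsilon}(x) =
\begin{pmatrix}
1 & 0 \\
\epsilon^{1-\alpha}\, x_2 \cos(x_1/\epsilon^{\alpha}) & 1 + \epsilon \sin(x_1/\epsilon^{\alpha})
\end{pmatrix},
\qquad
Jh_{\epsilon}(x) = 1 + \epsilon \sin(x_1/\epsilon^{\alpha}),
\end{equation*}
so $\nabla Jh_{\epsilon}(x) = \left( \epsilon^{1-\alpha} \cos(x_1/\epsilon^{\alpha}),\, 0 \right)$, which converges to zero uniformly as $\epsilon \to 0$, since $0 < \alpha < 1$.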
Much more general families satisfying the conditions $({\bf H_1})$ and $({\bf H_2})$ are given in the examples below.
\begin{ex}
\end{ex}
Let $ \Omega \subset \mathbb{R}^n$ be a $\mathcal{C}^{2} $ domain, and $X: U \subset \mathbb{R}^n \to \mathbb{R}^n $ a smooth (say $\mathcal{C}^1 $)
vector field defined in an open set containing $\Omega$ and $x(t,x_0)$ the solution of
\[ \left\{
\begin{array}{lcl}
\frac{ dx}{dt} & = & X(x) \\
x(0) & = & x_0.
\end{array} \right.
\]
Then, the map $$ x: (t, \xi) \mapsto x(t, \xi) : (-r, r) \times \partial \Omega \to V \subset \mathbb{R}^n $$ is a diffeomorphism for some $r>0$ and some open neighborhood $V$ of $\partial \Omega$. Let $ W$ be a smaller open neighborhood of $\partial \Omega$, that is, one with $\overline{W} \subset V$, and define $ h_{\epsilon} : W \to \mathbb{R}^n$ by $ h_{\epsilon}(x(t,\xi)) = x(t+ \eta(t)\, \theta_{\epsilon}(\xi) ,\xi) $, where $\theta_{\epsilon}: \partial \Omega \to \mathbb{R} $ is a $\mathcal{C}^1$ function with $\|\theta_{\epsilon}\|_{\mathcal{C}^1(\partial \Omega)} \to 0 $ as $\epsilon \to 0$, and $\eta: [-r, r] \to [0,1] $ is a $\mathcal{C}^2$ function with $\eta(0) = 1 $ and $\eta(t) = 0$ if $|t| \geq \frac{r}{2}$.
Observe that $h_{\epsilon}$ is well defined and $ \{ h_{\epsilon}, \ 0 \leq \epsilon \leq \epsilon_0 \}$ is a family of $ \mathcal{C}^{1}$ maps for $\epsilon_0$ sufficiently small, with $\| h_{\epsilon} - i_{W} \|_{\mathcal{C}^1{(W)}}
\to 0 $ as $\epsilon \to 0$, where $i_W$ denotes the inclusion of $W$ in $\mathbb{R}^n$.
We may extend $h_{\epsilon}$ to a diffeomorphism of $\mathbb{R}^n$,
satisfying $({\bf H_1})$, which we still write simply
as $ h_{\epsilon} $ by defining it as the identity outside $W$.
If $ \phi: U \subset \mathbb{R}^{n-1} \to \mathbb{R}^n $ is a local coordinate system for
$\partial \Omega$ in a neighborhood of $x_0 \in \partial \Omega$, then the map
$\Psi (t, y)= x(t, \phi(y)) : (-r, r) \times U \to \mathbb{R}^n $ is a $ \mathcal{C}^1$ coordinate system around the point
$x_0 \in \mathbb{R}^n$ and
$ \Psi^{-1} h_{\epsilon} \Psi (t,y) = (t + \eta(t) \theta_{\epsilon}(\phi(y)), y)$.
By an easy computation, we find that the Jacobian of
$\Psi^{-1} h_{\epsilon} \Psi$ is given by
$J (\Psi^{-1} h_{\epsilon} \Psi(t,y)) = 1 + \eta'(t) \theta_{\epsilon}(\phi(y)) $ and, therefore
$J h_{\epsilon} (x)= \left[ 1 + \eta'(t(x)) \theta_{\epsilon}(\phi(\pi(x)))
\right]\cdot J \Psi \left( \Psi^{-1}( h_{\epsilon} (x)) \right)\cdot J\Psi^{-1} (x) $
for $x \in W $.
Since
$\| h_{\epsilon} - Id_{ R^n } \|_{\mathcal{C}^1}\to 0$, the condition $({\bf H_2})$ follows.
We can also compute $J_{\partial \Omega} {h_{\epsilon}}_{|\, \partial \Omega} $, the Jacobian of
$ h_{\epsilon} $ restricted to ${\partial \Omega}$. We drop the subscript
$\partial \Omega$ to simplify the notation.
Note that the coordinate system $\Psi$ above takes $ \{0 \} \times U$
into a neighborhood of $x_0 \in \partial \Omega,$ and
$ \Psi^{-1} {h_{\epsilon}}_{|\, \partial \Omega} \Psi (0, y)= ( \theta_{\epsilon}(\phi(y)), y)$.
A straightforward computation then gives
$J (\Psi^{-1} {h_{\epsilon}}_{|\, \partial \Omega} \Psi(0,y)) =
\sqrt{ 1 + \|\nabla \theta_{\epsilon} (\phi(y))\|^2} $ and, therefore
$J {h_{\epsilon}}_{|\, \partial \Omega} (\phi(y))=
\left[ \sqrt{ 1 + \|\nabla \theta_{\epsilon} (\phi(y))\|^2}
\right] J \Psi \left( \Psi^{-1}_{\epsilon}( h_{\epsilon} (\phi(y))) \right)\cdot J\Psi^{-1}
(\Psi (0,y)) $
for $y \in
U $, where $ \Psi_{0}$ and $ \Psi_{\epsilon}$ denote the restriction
of $\Psi$ to $ \{ (0, y) \, |y \in U \}$ and
$ \{ ( \theta_{\epsilon}(\phi(y)) , y) \, |y \in U \}$, respectively. Since
$\| h_{\epsilon} - Id_{ R^n } \|_{\mathcal{C}^1}$ and $\|\theta_{\epsilon}(\xi)\|_{\mathcal{C}^1{(\partial\Omega)} } \to 0 $, it follows that
$J {h_{\epsilon}}_{|\, \partial \Omega} (\phi(y)) \to 1 $ as $\epsilon \to 0$, uniformly in $\partial \Omega$.
\mbox{${\square}$}
\begin{ex}
\end{ex}
We can choose the vector field $X$ in the previous example as an extension of $N : \partial \Omega \to \mathbb{R}^n$, the unit outward normal to $ \partial \Omega $. In this case $t(x) = \pm \textrm{dist}(x, \partial \Omega)$ ($+$ outside, $-$ inside), $\pi(x)$ is the point of $\partial \Omega$ nearest to $x$, and
$B_r(\partial \Omega) = \{x \in \mathbb{R}^n \, | \, \textrm{dist}(x, \partial \Omega) < r \}.$
Then, the map $ \rho: (t, \xi) \mapsto \xi + t N(\xi) : (-r, r) \times \partial \Omega \to
B_r(\partial \Omega)$ is a diffeomorphism, for some $r> 0$, with inverse
$x \mapsto (t(x), \pi(x))$
(see \cite{He1}).
Define $ h_{\epsilon} : B_r(\partial \Omega) \to \mathbb{R}^n$ by
$ h_{\epsilon}(\rho(t,\xi)) = \xi + t N(\xi) +
\eta(t) \theta_{\epsilon}(\xi) N(\xi) =
\rho(t,\xi) + \eta(t)\theta_{\epsilon}(\xi) N(\xi) $, where
$\theta_{\epsilon}: \partial \Omega \to \mathbb{R} $ is a $\mathcal{C}^1$ function, with
$\|\theta_{\epsilon} \|_{\mathcal{C}^1{(\partial\Omega)}} \to 0 $ as $\epsilon \to 0$,
$\eta: [-r, r] \to [0,1] $ is a $\mathcal{C}^2$ function, with
$\eta(0) = 1 $ and $\eta(t) = 0$ if $|t| \geq \frac{r}{2}$.
Then, $ \{ h_{\epsilon}, \ 0 \leq \epsilon \leq \epsilon_0 \}$ is a family of $ \mathcal{C}^{1}$ maps for $\epsilon_0$ sufficiently small, with
$\| h_{\epsilon} - i_{ B_{r}(\partial \Omega) } \|_{\mathcal{C}^1}
\to 0$ as $\epsilon \to 0.$
We may extend $h_{\epsilon}$ to a diffeomorphism of $\mathbb{R}^n$,
satisfying $({\bf H_1})$, which we still write simply
as $ h_{\epsilon} $ by defining it as the identity outside $B_r(\partial
\Omega)$.
If $ \phi: U \subset \mathbb{R}^{n-1} \to \mathbb{R}^n $ is a local coordinate system for
$\partial \Omega$ in a neighborhood of $x_0 \in \partial \Omega$, then the map
$\Psi (t, y)= \phi(y) + t N( \phi(y)) = \rho(t, \phi(y)) : (-r, r) \times U \to \mathbb{R}^n $ is a $ \mathcal{C}^1$ coordinate system around the point
$x_0 \in \mathbb{R}^n$ and
$ \Psi^{-1} h_{\epsilon} \Psi (t,y) = (t + \eta(t) \theta_{\epsilon}(\phi(y)), y)$.
The condition $({\bf H_2})$ can now be checked as in the previous example.
\begin{rem}
We may choose the function $\theta_{\epsilon}$ with ``oscillatory behavior'', so the example above essentially includes the case considered in \cite{BPP}, since the perturbation there is nonzero only in a smooth portion of the boundary.
\end{rem}
\section{The linear semigroup} \label{linear_semi}
In this section we consider the linear semigroups generated by the family of differential operators
$ - h_\epsilon^*\Delta_{\Omega_{\epsilon}}h_\epsilon^{*-1} {+\,aI}$, appearing in
(\ref{nonlinBVP_fix}).
\subsection{Strong form in $L^p$ spaces}
Consider the operator in $L^p(\Omega), \ p \geq 2 $, given by
\begin{equation}\label{operator_Lp}
A_{\epsilon}:= \left(\,- h_\epsilon^*\Delta_{\Omega_{\epsilon}}h_\epsilon^{*-1} +aI\,\right)
\end{equation}
with domain
\begin{equation}\label{domain_Lp}
D\left(A_{\epsilon}\right) = \left\{u \in W^{2,p} (\Omega) \,\bigg|\,
h_{\epsilon}^*\displaystyle\frac{\partial }{\partial N_{\Omega_{\epsilon}}}h_{\epsilon}^{*-1}u = 0,\, \textrm{ on } \, \partial \Omega \right\}.
\end{equation}
(We will denote simply by $A$ the unperturbed operator
$\left(\,{-}\Delta_{\Omega} +aI\,\right)$).
\begin{teo}\label{sectorial_Lp}
If $\epsilon> 0$ is sufficiently small and $h_\epsilon \in \mathrm{Diff}^1(\Omega)$,
then the operator $A_{\epsilon}= \left(\,- h_\epsilon^*\Delta_{\Omega_{\epsilon}}h_\epsilon^{*-1} +aI\,\right)$ defined by (\ref{operator_Lp}) and (\ref{domain_Lp})
is sectorial.
\end{teo}
\noindent{\bf Proof. \quad}
Consider the operator $- \Delta_{\Omega_{\epsilon}} $ defined in $ L^p( h_{\epsilon}(\Omega))$, with
domain
\[ D( - \Delta_{\Omega_{\epsilon}}) = \left\{u \in W^{2,p} (\Omega_{\epsilon}) \,\bigg|\,
\frac{\partial }{\partial N_{\Omega_{\epsilon}}}u = 0,\, \textrm{ on } \, \partial \Omega_{\epsilon} \right\},\]
where $\Omega_{\epsilon} = h_{\epsilon}(\Omega)$.
It is well known that $- \Delta_{\Omega_{\epsilon}}$ is sectorial, with spectrum contained in the interval
$ [\,0, \infty) \subset \mathbb{R} $.
If $\lambda \in \mathbb{C} $ and $f \in L^p(\Omega)$, we have
\begin{align}
&\left( \, h_\epsilon^*\Delta_{\Omega_{\epsilon}}h_\epsilon^{*-1} + \lambda I \right) u (x) = f (x) \nonumber \\
\Leftrightarrow & \left( \Delta_{\Omega_{\epsilon}} + \lambda I \right) u \circ h^{-1}_{{\epsilon}} (h_{{\epsilon}} (x)) =
f \circ h^{-1}_{{\epsilon}} (h_{{\epsilon}} (x)) \nonumber \\
\Leftrightarrow & \left( \Delta_{\Omega_{\epsilon}} + \lambda I \right) v (y ) =
g (y),
\end{align}
where $v = u \circ h_\epsilon^{-1}$ and $g = f \circ h_\epsilon^{-1}$.
Since $ u \mapsto h^*_{{\epsilon}} u := u \circ h_{{\epsilon}} $ is an isomorphism from $ L^p(\Omega_{\epsilon})$ to
$ L^p(\Omega )$ with inverse $ ({h^{-1}_{{\epsilon}} })^{*} $, it follows that the
first equation is uniquely solvable in $ L^p(\Omega )$ if and only if the last equation is
uniquely solvable in $ L^p(\Omega_{\epsilon})$.
Suppose $\lambda $ belongs to $ \rho( -\Delta_{\Omega_{\epsilon}} ) $, the resolvent set of $-\Delta_{\Omega_{\epsilon}}$. Then we have
\begin{align*}
\| u \|_{L^p(\Omega)}^p & = \int_{\Omega} |u (x)|^p \, d \, x \\
& = \int_{\Omega} |v \circ h_{{\epsilon}} (x)|^p \, d \, x \\
& = \int_{\Omega_{\epsilon} } |v (y) |^p |J h^{-1}_{{\epsilon}} (y) | \, d \, y \\
& \leq \|Jh^{-1}_{{\epsilon}} \|_{\infty} \| v \|_{L^p(\Omega_{\epsilon})}^p \\
& \leq \|Jh^{-1}_{{\epsilon}} \|_{\infty} \cdot
\| \left( \Delta_{\Omega_{\epsilon}} + \lambda I \right)^{-1} \|_{\mathcal{L}( L^p (\Omega_{\epsilon}))}^p \cdot
\| g \|_{{L^p(\Omega_ {\epsilon})} }^p.
\end{align*}
On the other hand,
\begin{align*}
\| g \|_{{L^p(\Omega_ {\epsilon})} }^p & = \int_{\Omega_{{\epsilon}} } |g(y)|^p \, d \, y \\
& = \int_{\Omega_{{\epsilon}} } |f \circ h^{-1}_{{\epsilon}} (y)|^p \, d \, y \\
& = \int_{{\Omega} } |f (x) |^p |Jh_{{\epsilon}} (x) | \, d \, x \\
& \leq \|Jh_{{\epsilon}} \|_{\infty} \| f \|_{{L^p(\Omega )} }^p.
\end{align*}
It follows that
\[
\| u \|_{L^p(\Omega)}^p \leq \|Jh_{{\epsilon}} \|_{\infty} \cdot \|Jh^{-1}_{{\epsilon}} \|_{\infty} \cdot \| \left( \Delta_{\Omega_{\epsilon}} + \lambda I \right)^{-1} \|_{\mathcal{L}( L^p (\Omega_{\epsilon}))}^p \cdot \| f \|_{{L^p(\Omega )} }^p.
\]
Therefore, $ \lambda \in \rho(- h_\epsilon^* \Delta_{\Omega_{\epsilon}}h_\epsilon^{*-1} ) $ and
\begin{equation} \label{resolv_Lp}
\| \left( h_\epsilon^* \Delta_{\Omega_{\epsilon}} h_\epsilon^{*-1} + \lambda I \right)^{-1} \|_{\mathcal{L}( L^p (\Omega))} \leq
\left( \|Jh_{{\epsilon}} \|_{\infty} \cdot \|Jh^{-1}_{{\epsilon}} \|_{\infty} \right)^{\frac{1}{p}} \|
\left( \Delta_{\Omega_{\epsilon}} + \lambda I \right)^{-1} \|_{\mathcal{L}( L^p (\Omega_{\epsilon}))}.
\end{equation}
Conversely, one can prove similarly that $ \lambda \in \rho(- h_\epsilon^* \Delta_{\Omega_{\epsilon}}h_\epsilon^{*-1} )
\Rightarrow \lambda \in \rho(- \Delta_{\Omega_{\epsilon}} ). $
Finally, if $ B_{\epsilon} = - \Delta_{\Omega_{\epsilon}} + a I $ is sectorial with $ \|(\lambda -
B_{\epsilon})^{-1}\| \leq \displaystyle\frac{M}{|\lambda -a' |} $ for all $\lambda$ in the
sector $ S_{ a',\phi_0} = \{ \lambda \ | \ \phi_0 \leq
|\arg(\lambda-a')| \leq \pi, \lambda \neq a' \} $, for some $a'\in
\mathbb{R}$ and $0\leq \phi_0<\pi/2$, it follows from
\eqref{resolv_Lp} that $ A_{\epsilon} = - h_\epsilon^* \Delta_{\Omega_{\epsilon}}h_\epsilon^{*-1} + aI $
satisfies $ \|(\lambda -
A_{\epsilon})^{-1}\| \leq \displaystyle\frac{M'}{|\lambda -a' |} $ for all $\lambda$ in the
same sector, so the sectoriality of $A_{\epsilon}$ follows from the sectoriality of $B_{\epsilon}$.
\mbox{${\square}$}
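In operator terms, the solvability argument in the proof can be summarized by the conjugation identity
\begin{equation*}
h_\epsilon^* \Delta_{\Omega_{\epsilon}} h_\epsilon^{*-1} + \lambda I
= h_\epsilon^* \left( \Delta_{\Omega_{\epsilon}} + \lambda I \right) h_\epsilon^{*-1},
\qquad \text{hence} \qquad
\left( h_\epsilon^* \Delta_{\Omega_{\epsilon}} h_\epsilon^{*-1} + \lambda I \right)^{-1}
= h_\epsilon^* \left( \Delta_{\Omega_{\epsilon}} + \lambda I \right)^{-1} h_\epsilon^{*-1}
\end{equation*}
whenever the right-hand side exists: since $h_\epsilon^*$ is an isomorphism, the two operators have the same resolvent set, and the resolvent norms differ only by the factors estimated above.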
\begin{rem}
From Theorem \ref{sectorial_Lp} and results in \cite{He2}, it follows that $A_\epsilon$ generates a linear analytic semigroup in $L^p(\Omega)$, for each sufficiently small $\epsilon \geq 0$.
\end{rem}
\subsection{Weak form in $L^p$ spaces}
One would like to prove that the operators $A_{\epsilon}$ defined by \eqref{operator_Lp} and \eqref{domain_Lp} become close to the operator $A$ as $\epsilon \to 0$ in a certain sense. This is possible when the perturbation diffeomorphisms $h_{\epsilon}$ converge to the identity in the
$\mathcal{C}^2$-norm (see, for example \cite{OPP} and \cite{PP}).
To obtain similar results here, we need to consider the problem in weaker topologies, that is, we need to extend those operators. To this end, we now want to consider the operator
$A_{\epsilon} =
\left(-h_\epsilon^*\Delta_{\Omega_{\epsilon}}h_\epsilon^{*-1} + aI\right)$ as an operator $\widetilde{A}_{\epsilon}$
in $(W^{1,q}(\Omega))'$ with $D(\widetilde{A}_{\epsilon})=W^{1,p} (\Omega)$, where
$q$ is the conjugate exponent of $p$, that is $\frac{1}{p} + \frac{1}{q}=1$.
\par If $u \in D(A_{\epsilon})=\left\{\,u \in W^{2,p}(\Omega)\,\,\bigg|\,\,h_\epsilon^*\displaystyle\frac{\partial}{\partial N_{\Omega_{\epsilon}}}h_\epsilon^{*-1}u=0\,\right\}$, $\psi \in W^{1,q}(\Omega)$, and
$v = u \circ h_\epsilon^{-1}$, we obtain, integrating by parts
\begin{align} \label{weak_form_Lp}
\left\langle A_{\epsilon}u\,,\,\psi\right\rangle_{{-1,1}} &= - \displaystyle\int_\Omega (h_\epsilon^*\Delta_{\Omega_{\epsilon}}h_\epsilon^{*-1}u)(x)\,\psi(x)\,dx+ a\displaystyle\int_\Omega u(x)\psi(x)\,dx \nonumber \\
&= - \displaystyle\int_\Omega \Delta_{\Omega_{\epsilon}} (u \circ h_\epsilon^{-1})(h_\epsilon(x))\,\psi(x)\,dx + a\displaystyle\int_\Omega u(x)\psi(x)\,dx \nonumber \\
&= - \displaystyle\int_{\Omega_{\epsilon}} \Delta_{\Omega_{\epsilon}} v(y)\psi(h_\epsilon^{-1}(y)) \displaystyle\frac{1}{|Jh_\epsilon(h_\epsilon^{-1}(y))|}dy + a\displaystyle\int_{\Omega_{\epsilon}} u(h_\epsilon^{-1}(y))\psi(h_\epsilon^{-1}(y))\displaystyle\frac{1}{|Jh_\epsilon(h_\epsilon^{-1}(y))|}dy \nonumber \\
&= -\displaystyle\int_{\partial \Omega_{\epsilon}} \displaystyle\frac{\partial v}{\partial N_{\Omega_{\epsilon}}}(y)\, \psi(h_\epsilon^{-1}(y)) \displaystyle\frac{1}{|\,Jh_\epsilon(h_\epsilon^{-1}(y))\,|}\,d\sigma (y) \nonumber \\
& + \displaystyle\int_{\Omega_{\epsilon}} \nabla_{\Omega_{\epsilon}} v(y)\cdot \nabla_{\Omega_{\epsilon}} \left[ \psi(h_\epsilon^{-1}(y)) \displaystyle\frac{1}{|\,Jh_\epsilon(h_\epsilon^{-1}(y))\,|} \right] \,dy \nonumber \\
& \,+ a\displaystyle\int_{\Omega_{\epsilon}} u(h_\epsilon^{-1}(y))\psi(h_\epsilon^{-1}(y))\,\displaystyle\frac{1}{|\,Jh_\epsilon(h_\epsilon^{-1}(y))\,|} \,dy\, \nonumber \\
&=
\displaystyle\int_{\Omega_{\epsilon}} \nabla_{\Omega_{\epsilon}} v(y)\cdot \nabla_{\Omega_{\epsilon}}
\left[ \psi(h_\epsilon^{-1}(y)) \displaystyle\frac{1}{|\,Jh_\epsilon(h_\epsilon^{-1}(y))\,|} \right] \,dy
\nonumber\\
& \, + a\displaystyle\int_{\Omega_{\epsilon}} u(h_\epsilon^{-1}(y))\psi(h_\epsilon^{-1}(y))\,\displaystyle\frac{1}{|\,Jh_\epsilon(h_\epsilon^{-1}(y))\,|}\,dy\nonumber\\
&=
\displaystyle\int_{\Omega} \nabla_{\Omega_{\epsilon}}v(h_\epsilon(x))\cdot \nabla_{\Omega_{\epsilon}}
\left[ \psi \circ h_\epsilon^{-1}\displaystyle\frac{1}{|Jh_\epsilon \circ h_\epsilon^{-1}|}(h_\epsilon(x))
\right]
\, |Jh_\epsilon(x)|dx \nonumber
+ a\displaystyle\int_{\Omega} u(x)\psi(x)dx\nonumber \\
&=
\displaystyle\int_{\Omega} (h_\epsilon^*\nabla_{\Omega_{\epsilon}}h_\epsilon^{*-1}u)(x) \cdot\left[ h_\epsilon^* \nabla_{\Omega_{\epsilon}} h_\epsilon^{*-1} \displaystyle\frac{\psi}{Jh_\epsilon}\right]
(x)\,|\,Jh_\epsilon(x)\,|\,dx\,
+ a\displaystyle\int_{\Omega} u(x)\psi(x)\,dx \nonumber \\
&=
\displaystyle\int_{\Omega} (h_\epsilon^*\nabla_{\Omega_{\epsilon}}h_\epsilon^{*-1}u)(x) \cdot h_\epsilon^* \nabla_{\Omega_{\epsilon}} h_\epsilon^{*-1} \displaystyle \psi
(x) \,dx\, + a\displaystyle\int_{\Omega} u(x)\psi(x)\,dx \nonumber \\
& +
\int_{\Omega} (h_\epsilon^*\nabla_{\Omega_{\epsilon}}h_\epsilon^{*-1}u)(x) \cdot ( h_\epsilon^* \nabla_{\Omega_{\epsilon}} h_\epsilon^{*-1} \displaystyle Jh_\epsilon)(x) \, \cdot
\frac{1}{Jh_\epsilon } \cdot \psi
(x) \,dx.
\end{align}
Since (\ref{weak_form_Lp}) is well defined for $u \in W^{1,p}(\Omega)$, we may define an extension
$\widetilde{A}_{\epsilon}$ of $A_{\epsilon}$, with domain $W^{1,p}(\Omega)$ and values in $(W^{1,q}(\Omega))'$, by
\begin{align} \label{operator_weak_Lp}
\left\langle \widetilde{A}_{\epsilon}u\,,\,\psi\right\rangle_{{-1,1}} := &
\displaystyle\int_{\Omega} (h_\epsilon^*\nabla_{\Omega_{\epsilon}}h_\epsilon^{*-1}u)(x) \cdot h_\epsilon^* \nabla_{\Omega_{\epsilon}} h_\epsilon^{*-1} \displaystyle \psi
(x) \,dx\, + a\displaystyle\int_{\Omega} u(x)\psi(x)\,dx \nonumber \\
+ &
\int_{\Omega} (h_\epsilon^*\nabla_{\Omega_{\epsilon}}h_\epsilon^{*-1}u)(x) \cdot ( h_\epsilon^* \nabla_{\Omega_{\epsilon}} h_\epsilon^{*-1} \displaystyle Jh_\epsilon)(x) \, \cdot
\frac{1}{Jh_\epsilon } \cdot \psi
(x) \,dx,
\end{align}
for any $\psi \in W^{1,q}(\Omega)$.
\begin{rem}
If $u$ is regular enough, then $\widetilde{A}_{\epsilon} u = A_{\epsilon} u$ implies that $u$ must satisfy the
boundary condition $ h_{\epsilon}^*\displaystyle\frac{\partial }{\partial N_{\Omega_{\epsilon}}}h_{\epsilon}^{*-1}u = 0 $ on $\partial \Omega$ but, since this condition is not well defined for a general $u \in W^{1,p}(\Omega) $, the domain of $\widetilde{A}_{\epsilon}$ does not incorporate it.
\end{rem}
For simplicity, we still denote this extension by $A_{\epsilon},$ whenever there is no danger of confusion. Also, from now on, we drop the absolute value in $ |\, Jh_{\epsilon} (x) \, |$, since the Jacobian of
$h_\epsilon$ is positive for sufficiently small $\epsilon$.
We now prove the following basic inequality.
\begin{teo}\label{fund_inequality_Lp} $D\big(A_{\epsilon}\big) \supset D\big(A\big)$ for any $\epsilon \geq 0$ and there exists a positive function $\tau(\epsilon)$ such that
$$
\big|\big|\,\big(A_{\epsilon} - A \big)u\,\big|\big|_{W^{1,q}(\Omega)'} \leq {\tau}(\epsilon)\big|\big|\, A \,u\,\big|\big|_{W^{1,q}(\Omega)'} \,,
$$
for all $u \in D\big( A \big)$, with $\displaystyle\lim_{\epsilon \to 0{^+}} {\tau}(\epsilon)=0.$
\end{teo}
\noindent{\bf Proof. \quad}
The assertion about the domain is immediate.
The inequality is
equivalent to
$$
\big|\,\big\langle\,\big(A_{\epsilon} -A \big)u\,,\,\psi\,\big\rangle_{-1,1}\big|
\leq \tau(\epsilon)||\,A u\,||_{(W^{1,q}(\Omega))' }||\,\psi\,||_{W^{1,q}(\Omega)}\,,
$$
for all $u\in W^{1,p}(\Omega) $, $\psi \in W^{1,q}(\Omega)$, with $\displaystyle\lim_{\epsilon \to 0{^+}} \tau(\epsilon)=0.$
For $\epsilon >0$, we have
\begin{align}
\left| \big\langle \,\big(A_{\epsilon} -A \,\big)u\,,\,\psi\,\big\rangle_{ {-1,1}}\right| & \leq
\left |\displaystyle\int_{\Omega} \big(\,h_\epsilon^*\nabla_{\Omega_{\epsilon}}h_\epsilon^{*-1}u\,\big)(x) \cdot \left[ \big(\,h_\epsilon^*\nabla_{\Omega_{\epsilon}}h_\epsilon^{*-1} \psi \,\big)(x) -
\big( \nabla_{\Omega} \psi \,\big)(x) \right] \, d\,x \right| \nonumber \\
& + \left |\displaystyle\int_{\Omega} \big(\,h_\epsilon^*\nabla_{\Omega_{\epsilon}}h_\epsilon^{*-1}u - \nabla_{\Omega} u \,\big)(x) \cdot
\big( \nabla_{\Omega} \psi \,\big)(x) \, d\,x \right| \nonumber \\
& +
\left| \int_{\Omega} (h_\epsilon^*\nabla_{\Omega_{\epsilon}}h_\epsilon^{*-1}u)(x) \cdot ( h_\epsilon^* \nabla_{\Omega_{\epsilon}} h_\epsilon^{*-1} \displaystyle Jh_\epsilon)(x) \, \cdot
\frac{1}{Jh_\epsilon } \cdot \psi
(x) \,dx \right|
\end{align}
Now, writing $ |v |_p = \left( \sum_{i=1}^n |v_i|^p \right)^{\frac{1}{p}}$, $1\leq p < \infty$, \ $ |v|_{\infty} = \sup \left( |v_i|, \ i=1,2, \cdots, n \right)$ for the $p$-norm of the vector
$v=(v_1,v_2, \cdots, v_n) \in \mathbb{R}^n $, we observe that
\begin{align*}
\left| \,h_\epsilon^*\nabla_{\Omega_{\epsilon}}h_\epsilon^{*-1}u\,(x) \right|_p & =
\left( \sum_i \left| \,h_\epsilon^* \frac{\partial }{\partial y_i} h_\epsilon^{*-1}u\,(x)
\right|^p \right)^{\frac{1}{p}}
= \left( \sum_i \left( \sum_j \left| b_{i,j}^{\epsilon} (x) \frac{\partial u}{\partial x_j} (x) \right| \right)^p \right)^{\frac{1}{p}} \\
&\leq
\left[ \sum_i \left( \sum_j | ( b_{i,j}^{\epsilon})|^q (x) \right)^{\frac{p}{q}}
\left( \sum_j \left( \left|\frac{\partial u}{\partial x_j} \right| \right)^p (x)\right) \right]^{\frac{1}{p}} \\
& \leq
\left[ \sum_i \left( \sum_j \left|( b_{i,j}^{\epsilon})\right|^q (x)\right)^{p-1} \right]^{\frac{1}{p}} |\nabla u (x) |_p \\
& \leq
\| ( b^{\epsilon})\|_{\infty} \left[ \sum_i n^{p-1} \right]^{\frac{1}{p}} |\nabla u (x) |_p \\
& \leq
n \| b^{\epsilon}\|_{\infty} |\nabla u (x) |_p \\
& \leq B(\epsilon) |\nabla u (x) |_p \\
& \\
\left| \,h_\epsilon^*\nabla_{\Omega_{\epsilon}}h_\epsilon^{*-1}u\,(x) - \nabla_{\Omega} u (x) \right|_p & =
\left( \sum_i \left| \,h_\epsilon^* \frac{\partial }{\partial y_i} h_\epsilon^{*-1}u\,(x) - \frac{\partial u}{\partial x_i}(x)
\right|^p \right)^{\frac{1}{p}} \\
& = \left( \sum_i \left( \sum_j \left| \left(b_{i,j}^{\epsilon} (x) - \delta_{i,j}\right) \frac{\partial u}{\partial x_j} (x) \right| \right)^p \right)^{\frac{1}{p}} \\
&\leq
\left[ \sum_i \left( \sum_j | ( b_{i,j}^{\epsilon} - \delta_{i,j} )|^q (x) \right)^{\frac{p}{q}}
\left( \sum_j \left( \left|\frac{\partial u}{\partial x_j} \right| \right)^p (x)\right) \right]^{\frac{1}{p}} \\
& \leq
\left[ \sum_i \left( \sum_j \left|( b_{i,j}^{\epsilon}) - \delta_{i,j} \right|^q (x)\right)^{p-1} \right]^{\frac{1}{p}} |\nabla u (x) |_p \\
& \leq
\| ( b^{\epsilon}-\delta)\|_{\infty} \left[ \sum_i n^{p-1} \right]^{\frac{1}{p}} |\nabla u (x) |_p \\
& \leq
n \| b^{\epsilon} -\delta \|_{\infty} |\nabla u (x) |_p \\
& \leq \eta(\epsilon) |\nabla u (x) |_p \\
& \\
\frac{1}{Jh_\epsilon(x)} \left| \,h_\epsilon^*\nabla_{\Omega_{\epsilon}}h_\epsilon^{*-1}Jh_{\epsilon}\,(x) \right|_{\infty} & = \frac{1}{Jh_\epsilon(x)}
\ \sup_i \left\{ \left| \,h_\epsilon^* \frac{\partial }{\partial y_i} h_\epsilon^{*-1}Jh_{\epsilon}\,(x) \right|
\right\} \\
& = \frac{1}{Jh_\epsilon(x)} \sup_i \left\{ \sum_j \left| b_{i,j}^{\epsilon} (x) \frac{\partial Jh_{\epsilon}}{\partial x_j} (x) \right| \right\} \\
& \leq \frac{1}{Jh_\epsilon(x)} \ \| b^{\epsilon} \|_{\infty} \sum_j \left| \frac{\partial Jh_{\epsilon}}{\partial x_j} (x) \right|
= \frac{1}{Jh_\epsilon(x)} \ \| b^{\epsilon} \|_{\infty} |\nabla Jh_{\epsilon} (x)|_1 \\
& \leq \frac{1}{Jh_\epsilon(x)} \ n \| b^{\epsilon} \|_{\infty} |\nabla Jh_{\epsilon} (x)|_{\infty}
\leq \frac{1}{Jh_\epsilon(x)} B(\epsilon) \|\nabla Jh_{\epsilon} \|_{\infty} \\
& \leq \mu(\epsilon) \\
& \\
\frac{1}{Jh_\epsilon(x)} \left| \,h_\epsilon^*\nabla_{\Omega_{\epsilon}}h_\epsilon^{*-1}Jh_{\epsilon}\,(x) \psi (x) \right|_{q} & = \frac{1}{Jh_\epsilon(x)}
\ \left( \sum_i \left| \,h_\epsilon^* \frac{\partial }{\partial y_i} h_\epsilon^{*-1}Jh_{\epsilon}\,(x) \cdot \psi(x) \right|^q \right)^{\frac{1}{q}}
\\
& \leq
\frac{1}{Jh_\epsilon(x)}
\left| h_\epsilon^* \nabla h_\epsilon^{*-1}Jh_{\epsilon}\,(x) \right|_{\infty} \ \left(
\sum_i \left| \psi(x) \right|^q \right)^{\frac{1}{q}}
\\
& \leq n \mu(\epsilon)\, |\psi(x)| ,
\end{align*}
where $\| b^{\epsilon} \|_{\infty}:= \sup \{ | b_{i,j}^{\epsilon} (x) |, \ \, 1 \leq i,j \leq n, \ x \in \Omega \}, $
$ \| b^{\epsilon} -\delta \|_{\infty}:= \sup \{ | b_{i,j}^{\epsilon} (x) - \delta_{i,j} |, \ \, 1 \leq i,j \leq n, \ x
\in \Omega \}, $ \ $B(\epsilon) \to n $ and $\eta(\epsilon)$,
$\mu(\epsilon) \to 0$ as $\epsilon \to 0$,
by hypotheses $({\bf H_1}) $ and $({\bf H_2})$.
In a similar way, we obtain
\begin{itemize}
\item $ \displaystyle \left| \,h_\epsilon^*\nabla_{\Omega_{\epsilon}}h_\epsilon^{*-1} \psi \,(x)
\right|_p
\leq B(\epsilon) |\nabla \psi (x) |_p, $
\item $ \displaystyle \left| \,h_\epsilon^*\nabla_{\Omega_{\epsilon}}h_\epsilon^{*-1} \psi \,(x)
- \nabla \psi (x) \right|_ p
\leq \eta(\epsilon) |\nabla \psi (x) |_p. $
\end{itemize}
It follows that
\begin{align}
\left| \big\langle \,\big(A_{\epsilon} -A \,\big)u\,,\,\psi\,\big\rangle_{ {-1,1}}\right| &
\leq
\int_{\Omega} \left| \big(\,h_\epsilon^*\nabla_{\Omega_{\epsilon}}h_\epsilon^{*-1}u\,\big)(x) \right|_p \cdot \left| \big(\,h_\epsilon^*\nabla_{\Omega_{\epsilon}}h_\epsilon^{*-1} \psi \,\big)(x) -
\big( \nabla_{\Omega} \psi \,\big)(x) \right|_q \, d\,x \nonumber \\
& + \int_{\Omega}\left| \big(\,h_\epsilon^*\nabla_{\Omega_{\epsilon}}h_\epsilon^{*-1}u - \nabla_{\Omega} u \,\big)(x) \right|_p \cdot \left|
\big( \nabla_{\Omega} \psi \,\big)(x) \right|_q \, d\,x \nonumber \\
& +
\int_{\Omega} \left|(h_\epsilon^*\nabla_{\Omega_{\epsilon}}h_\epsilon^{*-1}u)(x) \right|_p \cdot
\left| \frac{1}{Jh_\epsilon(x) } ( h_\epsilon^* \nabla_{\Omega_{\epsilon}} h_\epsilon^{*-1} Jh_\epsilon)(x) \,
\cdot \psi (x) \right|_{q}
\,dx \nonumber \\
& \leq B(\epsilon) \left[ \int_{\Omega} |\nabla u (x) |_p^p \, dx \right]^{\frac{1}{p}} \cdot
\eta(\epsilon) \left[ \int_{\Omega} |\nabla \psi (x) |_q^q \, dx \right]^{\frac{1}{q}} \nonumber \\
& + \eta(\epsilon) \left[ \int_{\Omega} |\nabla u (x) |_p^p \, dx \right]^{\frac{1}{p}} \cdot
\left[ \int_{\Omega} |\nabla \psi (x) |_q^q \, dx \right]^{\frac{1}{q}} \nonumber \\
& + B(\epsilon)\, n\cdot \mu(\epsilon) \left[ \int_{\Omega} |\nabla u (x) |_p^p \, dx \right]^{\frac{1}{p}} \cdot
\left[ \int_{\Omega} | \psi (x) |^q \, dx \right]^{\frac{1}{q}} \nonumber \\
& \leq \left( (1+B(\epsilon))\, \eta(\epsilon) + n B(\epsilon)\, \mu(\epsilon)\right) \|u \|_{W^{1,p} (\Omega)}
\cdot \| \psi \|_{W^{1,q}(\Omega)} \nonumber \\
& \leq K(\epsilon) \|u \|_{W^{1,p} (\Omega)}
\cdot \| \psi \|_{W^{1,q}(\Omega)} \nonumber
\end{align}
with $\displaystyle\lim_{\epsilon \to 0{^+}} K(\epsilon)=0$ (independently of $u$).
We conclude that
\begin{align} \label{dif_estimate}
\| \left({A}_{\epsilon} - {A} \right)u \|_{ {W^{1,q}(\Omega)}'}
& \leq K(\epsilon)||\,u\,||_{W^{1,p}(\Omega)} \nonumber \\
& \leq \tau(\epsilon)||\,{A} u\,||_{ {W^{1,q}(\Omega)}'}
\end{align}
with $ \displaystyle\lim_{\epsilon \to 0{^+} } \tau(\epsilon)=0$,
(and $\tau(\epsilon)$ does not depend on $u$).
\mbox{${\square}$}
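The last step in \eqref{dif_estimate} uses only that the (extended) unperturbed operator ${A}$ is an isomorphism from $W^{1,p}(\Omega)$ onto $(W^{1,q}(\Omega))'$ (which holds, for instance, if $a>0$), so that
\begin{equation*}
\| u \|_{W^{1,p}(\Omega)} \leq \big\| {A}^{-1} \big\|_{\mathcal{L}\left( (W^{1,q}(\Omega))',\, W^{1,p}(\Omega) \right)} \, \| {A} u \|_{(W^{1,q}(\Omega))'},
\end{equation*}
and one may therefore take $\tau(\epsilon) = K(\epsilon) \, \big\| {A}^{-1} \big\|$.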
\subsection{Existence and continuity of the linear semigroup}
Using well known facts about the ``unperturbed operator'' $A$
and Theorem
\ref{fund_inequality_Lp},
one can now establish existence and continuity of the linear semigroup, based on the following results:
\begin{lema} \label{sector}
Suppose $A $ is a sectorial operator with $ \|(\lambda -
A)^{-1}\| \leq \displaystyle\frac{M}{|\lambda -a |} $ for all $\lambda$ in the
sector $ S_{a,\phi_0} = \{ \lambda \ | \ \phi_0 \leq
|\arg(\lambda-a)| \leq \pi, \lambda \neq a \} $, for some $a\in
\mathbb{R}$ and $0\leq \phi_0<\pi/2$. Suppose also that $B$ is a
linear operator with $D(B ) \supset D(A)$ and $\|Bx - A x\|
\leq \varepsilon \|Ax\| + K \|x\| $, for any $x \in D(A) $, where
$K$ and $ \varepsilon $ are positive constants with $
\varepsilon \leq \displaystyle\frac{1}{4(1+LM)},\, K \leq
\displaystyle\frac{\sqrt{5}}{20M} \frac{\sqrt{2}L - 1}{L^2 - 1} $, for some
$L>1$.
Then $B$ is also sectorial. More precisely, if $b =
\displaystyle\frac{L^2}{L^2 -1} a - \frac{\sqrt{2} L}{L^2 -1} |a| $, $ \phi =
\max \left\{ \phi_0, \displaystyle\frac{\pi}{4}\right\}$ and $ M' = 2 M \sqrt{5} $
then
\[
\|(\lambda - B)^{-1}\| \leq \frac{M'}{|\lambda -b|},
\]
in the sector $ S_{b,\phi} = \{ \lambda \ | \ \phi \leq
|\arg(\lambda-b)| \leq \pi, \lambda \neq b \} $.
\end{lema}
{\noindent{\bf Proof. \quad} See \cite{PP}, p.~348.
\mbox{${\square}$}
}
\begin{rem} \label{positive} \rm
Observe that $b$ can be made arbitrarily close to $a$ by taking
$L$ sufficiently large. In particular, if $a>0$ then $b>0$.
\end{rem}
\begin{teo} \label{cont_lin_semigroup}
Suppose that $A$ is as in Lemma \ref{sector}, $\Lambda$ a topological
space and $\{A_\gamma\}_{\gamma \in \Lambda}$
is a family of operators in $X$ with $A_{\gamma_0} = A$ satisfying the
following conditions:
\begin{enumerate}
\item $D(A_{\gamma} ) \supset D(A)$, for all $\gamma\in \Lambda$;
\item $\|A_{\gamma}x - A x\| \leq \epsilon(\gamma) \|Ax\| + K(\gamma)
\|x\| $ for any $x \in D(A)$,
where $K(\gamma)$ and $\epsilon(\gamma)$ are positive functions with
$\displaystyle\lim_{\gamma \to \gamma_0} \epsilon(\gamma)=0$
and $\displaystyle\lim_{ \gamma \to \gamma_0} K(\gamma) = 0 $.
\end{enumerate}
Then, there exists a neighborhood $V$ of $\gamma_0$ such that
$A_{\gamma}$ is sectorial if $\gamma \in V$ and
the family of (linear) semigroups $e^{-tA_{\gamma}}$ {satisfies}
\begin{align} \label{cont_lin_semigroup_eq}
\| e^{-tA_{\gamma}}- e^{-tA} \| \leq C(\gamma) e^{-bt} \nonumber \\
\| A \left( e^{-tA_{\gamma}}- e^{-tA} \right) \| \leq
C(\gamma) \frac{1}{t} e^{-bt} \nonumber \\
\| A^{\alpha} \left( e^{-tA_{\gamma}}- e^{-tA} \right) \| \leq
C(\gamma)
\frac{1}{ t^{\alpha}} e^{-bt}, \quad 0< \alpha < 1
\end{align}
for $t >0$, where $b$ is as in Lemma \ref{sector} and
$C(\gamma) \to 0$ as $\gamma \to \gamma_0$.
\end{teo}
{\noindent{\bf Proof. \quad} See \cite{PP}, p.~349.
\mbox{${\square}$}
}
\begin{teo} \label{family_sec_Lp}
The operators $A_{\epsilon}$ given by \eqref{operator_weak_Lp} in the space $ X= (W^{1,q}(\Omega))'$, with domain
$W^{1,p}(\Omega)$, $ 1 < p < \infty$, $\frac{1}{p} + \frac{1}{q} =1$, are sectorial operators, with sector and constant in the sectorial inequality independent of $\epsilon$, for $\epsilon$ sufficiently small. The family of analytic linear semigroups $e^{- tA_{\epsilon}}$ generated by $A_{\epsilon}$
in the ``base space'' $X$, satisfies
\eqref{cont_lin_semigroup_eq}.
\end{teo}
\noindent{\bf Proof. \quad}
The first assertion follows from Lemma \ref{sector} and the second from Theorem
\ref{cont_lin_semigroup}.
\mbox{${\square}$}
\section{The abstract problem in a scale of Banach spaces}
\label{abstract}
Our goal in this section is to pose the problem (\ref{nonlinBVP}) in a convenient abstract setting.
We proved in Theorem \ref{sectorial_Lp}
that, if $\epsilon$ is small, the operator $A_{\epsilon}$ in $L^p(\Omega)$ defined by
(\ref{operator_Lp}) with domain given in (\ref{domain_Lp}) is sectorial and, in Theorem \ref{family_sec_Lp} that the same is true for its extension
$\widetilde{A}_{\epsilon}$ to $(W^{1,q}(\Omega))'$.
It is then well known that the domains $X_{\epsilon}^{\alpha}$ (resp. $\widetilde{X}_\epsilon^{\alpha}$),
$ \alpha \geq 0$, of the fractional powers of $A_{\epsilon}$ (resp. $\widetilde{A}_{\epsilon}$)
are Banach spaces, with $X_{\epsilon}^0 = L^p(\Omega)$ (resp. $\widetilde{X}_\epsilon^0 = (W^{1,q} (\Omega))'$),
$X_{\epsilon}^1 = D(A_{\epsilon}) \subset W^{2,p} (\Omega) $ (resp. $\widetilde{X}_\epsilon^1 = D(\widetilde{A}_{\epsilon}) =
W^{1,p} (\Omega) $);
moreover, $X_{\epsilon}^\alpha$ (resp. $\widetilde{X}_\epsilon^\alpha$) is compactly embedded in
$ X_{\epsilon}^\beta$ (resp. $ \widetilde{X}_\epsilon^\beta$) when $ 0 \leq \beta < \alpha \leq 1 $,
and $X_{\epsilon}^{\alpha} = W^{2\alpha,\,p}(\Omega)$, when $2 \alpha$ is an integer.
Since $ X_{\epsilon}^{\frac{1}{2}}= \widetilde{X}_{\epsilon}^1 $, it follows easily that
$ X_{\epsilon}^{\alpha - \frac{1}{2} }= \widetilde{X}_{\epsilon}^\alpha $ for $ \frac{1}{2}
\leq \alpha \leq 1 $. By an abuse of notation,
we will still write $X_{\epsilon}^{\alpha - \frac{1}{2} } $ instead of
$\widetilde{X}_\epsilon^{\alpha}$ for $ 0 \leq
\alpha \leq \frac{1}{2} $, so that we may denote by
$ \{ X_{\epsilon}^{\alpha}, \, \, -\frac{1}{2} \leq \alpha \leq 1 \} =
\{ X_{\epsilon}^{\alpha}, \, \, 0 \leq \alpha \leq 1 \} \cup
\{ \widetilde{X}_\epsilon^{\alpha}, \, \, 0 \leq \alpha \leq 1 \}$
the whole family of fractional power spaces.
We will denote simply by $X^{\alpha}$ the fractional power spaces associated to the unperturbed operator $A$.
For any $ \displaystyle -\frac{1}{2} \leq \beta \leq 0$, we may now
define an operator in these spaces as the restriction of
$\widetilde{A}_{\epsilon}$. We then have the following result.
\begin{teo}\label{sec_scale} For any $-\frac{1}{2} \leq \beta \leq 0$ and $\epsilon$ sufficiently small, the operator
$(A_{\epsilon})_{\beta}$ in $ X_{\epsilon}^{\beta}$, obtained by restricting
$ \widetilde{A}_{\epsilon}$, with domain $ X_{\epsilon}^{\beta+1}$ is a sectorial operator.
\end{teo}
\noindent{\bf Proof. \quad}
Writing $\beta = -\frac{1}{2} + \delta $, for some $ 0 \leq \delta \leq
\frac{1}{2}$, we have $(A_{\epsilon})_{\beta} =
\widetilde{A}_{\epsilon}^{-\delta} \widetilde{A}_{\epsilon} \widetilde{A}_{\epsilon}^{\delta}. $ Since $\widetilde{A}_{\epsilon}^{\delta}$ is an isometry from
$X_{\epsilon}^{\beta}$ to $X_{\epsilon}^{ {-\frac{1}{2}}} = (W^{1,q} (\Omega))',$ the result follows easily.
{
\mbox{${\square}$}
}
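For completeness, the similarity argument can be made explicit. Writing the resolvent of the conjugated operator (a sketch; we assume, as in the proof above, that $\widetilde{A}_{\epsilon}^{\delta}$ is an isometry between the indicated spaces and that $\lambda$ lies in the resolvent set of $\widetilde{A}_{\epsilon}$):

```latex
\[
\big(\lambda - (A_{\epsilon})_{\beta}\big)^{-1}
= \widetilde{A}_{\epsilon}^{-\delta}\big(\lambda - \widetilde{A}_{\epsilon}\big)^{-1}\widetilde{A}_{\epsilon}^{\delta},
\qquad
\big\| \big(\lambda - (A_{\epsilon})_{\beta}\big)^{-1} \big\|_{\mathcal{L}(X_{\epsilon}^{\beta})}
= \big\| \big(\lambda - \widetilde{A}_{\epsilon}\big)^{-1} \big\|_{\mathcal{L}(X_{\epsilon}^{-\frac{1}{2}})},
\]
```

so $(A_{\epsilon})_{\beta}$ inherits both the sector and the sectorial constant of $\widetilde{A}_{\epsilon}$.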
We
can now pose the problem (\ref{nonlinBVP_fix}) as an abstract problem in the scale of Banach spaces $ \{{X}^{\beta}_{\epsilon},
\, -\frac{1}{2} \leq \beta \leq 0 \} $:
\begin{equation}\label{abstract_scale}
\left\{
\begin{aligned}
u_t & = - (A_{\epsilon})_{\beta}u + (H_{\epsilon})_{\beta}u \, , \, t>t_0\,;
\\
u(t_0 & )=u_0 \in X_{\epsilon}^{\eta}\, ,
\end{aligned}
\right.
\end{equation}
where
\begin{equation} \label{defH}
(H_{\epsilon})_{\beta} = H(\cdot,\epsilon):=(F_{\epsilon})_{\beta}
+ (G_{\epsilon})_{\beta} :X_{\epsilon} ^{\eta} \to X_{\epsilon} ^{\beta},
\ \ \epsilon >0 \textrm{ and } 0 \leq \eta \leq \beta +1,
\end{equation}
\begin{itemize}
\item[(i)] $(F_{\epsilon})_{\beta} = F(\cdot,\epsilon) :X_{\epsilon} ^{\eta} \to X_{\epsilon} ^{\beta}$ is given by
\begin{eqnarray}\label{Fh}
\left\langle F(u,\epsilon)\,,\,\Phi \right\rangle_{\beta \,,\, - \beta} =\displaystyle\int_{\Omega} f(u)\,\Phi\,dx, \ \ \textrm{ for any } \Phi \in (X_{\epsilon}^{ \beta})',
\end{eqnarray}
\item[(ii)] $(G_{\epsilon})_{\beta} = G(\cdot, {\epsilon}) :
X_{\epsilon} ^{\eta} \to X_{\epsilon} ^{\beta} $ is given by
\begin{eqnarray}\label{Gh}
\left\langle G(u,\epsilon)\,,\,\Phi \right\rangle_{\beta \,,\, - \beta } =\displaystyle\int_{\partial \Omega}g(\gamma(u))\,\gamma(\Phi)\left|\frac{J_{\partial \Omega}h_\epsilon}{Jh_\epsilon}\right|\,d\sigma(x)\,, \ \ \textrm{ for any } \Phi \in (X_{\epsilon}^{ \beta})',
\end{eqnarray}
where $\gamma$ is the trace map {and $J_{\partial\Omega}h_\epsilon$ is the determinant of the Jacobian matrix of the diffeomorphism $h_\epsilon: \partial\Omega \longrightarrow \partial h_\epsilon(\Omega)$}.
\end{itemize}
We will choose $\beta$ small enough that $X_{\epsilon}^{\beta+1}$ does not incorporate the boundary conditions (that is, the closure of the subset of smooth functions satisfying the Neumann boundary condition is the whole space).
It is not difficult to show, integrating by parts, that a regular enough solution of
\eqref{abstract_scale}, must satisfy \eqref{nonlinBVP_fix}
(see \cite{COPR} or \cite{OP}).
\section{Local well-posedness}
\label{wellposed}
In order to prove local well-posedness for the abstract problem, without assuming growth conditions on
the nonlinearities, we have two somewhat conflicting requirements for our phase space: we need it to be continuously embedded in $L^{\infty} $, and we also do not want it to incorporate the boundary conditions. To this end, we need to choose $\eta$ and $p$ big enough so that the hypotheses of Theorem
\ref{inclusion} hold and, on the other hand, we need $\eta$ small enough so that the normal derivative does not have a well defined trace. To achieve both requirements we will henceforth
assume that
\begin{equation} \label{hip_inclusion}
p \textrm{ and } \eta \textrm{ are such that the inclusion } \ \eqref{includ_C} \
\textrm{ holds, for some } \mu \geq 0 \quad \textrm{ and } \eta < \frac{1}{2}.
\end{equation}
It is easy to check that \eqref{hip_inclusion} holds, for instance, if
$p = 2n$ \textrm{ and }
$ \frac{1}{4} < \eta < \frac{1}{2} $. Also, the last inequality is automatically satisfied if we choose our
base space $X_{\epsilon}^{\beta} = X_{\epsilon}^{-\frac{1}{2}} = (W^{1,q})' $, where $q $ and $p$ are conjugate exponents, since we must have $\eta- \beta <1 . $
\begin{lema}\label{Flip}
Suppose that $p $ and $\eta$ are such that \eqref{hip_inclusion} holds and $f$ is locally Lipschitz.
Then, the operator
$ {(F_{\epsilon})_\eta} :X_{\epsilon}^{\eta} {\rightarrow} X_{\epsilon}^{-\frac{1}{2}}$ given by
(\ref{Fh}) is well defined and Lipschitz in bounded sets.
\end{lema}
\noindent{\bf Proof. \quad}
Suppose
$u \in X_{\epsilon}^{\eta} $. From \eqref{includ_C} and the hypotheses it follows
that $u \in L^{\infty}(\Omega) $ and, therefore, if $L_f$ is the Lipschitz constant of $f$ in the interval
$ [ -\| u \|_{\infty} , \| u \|_{\infty} ]$,
it follows that
$ | f(u(x)) - f(0) | \leq L_f | u(x) | $, for any $x \in \Omega$.
If
$\Phi \in (X_{\epsilon}^{-\frac{1}{2}})' = W^{1,q}$,
\begin{align*}
\big|\left\langle {(F_{\epsilon})_\eta}(u) \,,\,\Phi \right\rangle_{ {\beta, -\beta}}\big| &\leq \displaystyle\int_\Omega |\,f(u)\,|\,|\,\Phi\,|\,dx \nonumber \\
&\leq L_f \displaystyle\int_\Omega |\,u\,|\,|\,\Phi\,|\,dx + \displaystyle\int_\Omega |\,f(0)\,|\,|\,\Phi\,|\,dx \nonumber \\
&\leq L_f \| u \|_{L^p(\Omega)} \cdot \| \Phi \|_{L^q(\Omega)} + \| f(0)\|_{L^p(\Omega)} \cdot \| \Phi \|_{L^q(\Omega)}
\end{align*}
\par Since $ W^{1,q} \subset L^q(\Omega)$ and $X_{\epsilon}^{\eta} \subset L^p(\Omega) $
with stronger norms, we have
\[
\big|\left\langle {(F_{\epsilon})_\eta}(u)\,,\,\Phi \right\rangle_{ { \beta \,,\,-\beta}}\big|
\leq L_f \|\,u \|_{L^p(\Omega)}||\,\Phi\,||_{W^{1,q}} + \| f(0)\|_{L^{p} (\Omega)} ||\,\Phi\,||_{W^{1,q}},
\]
so $(F_{\epsilon})_\eta$ is well defined and
\begin{align}
\|(F_{\epsilon})_{\eta}(u)\|_{(W^{1,q} )' } & \leq L_f \|u\|_{L^p(\Omega)} + \| f(0)\|_{L^{p} (\Omega)} \label{normF_Lp} \\
& \leq L_f \|u\|_{X_{\epsilon}^\eta} + \| f(0)\|_{L^{p} (\Omega)} \label{normF}
\end{align}
where $L_f$ is the Lipschitz constant of $f$ in the interval
$ [ -\| u \|_{\infty} , \| u \|_{\infty} ]$.
Alternatively, if $M_f = M_f(u) := \sup \{|f(x)| : x \in
[ -\| u \|_{\infty} , \| u \|_{\infty} ] \}$,
it follows that
\begin{align*}
\big|\left\langle {(F_{\epsilon})_\eta}(u) \,,\,\Phi \right\rangle_{ {\beta, -\beta}}\big| &\leq \displaystyle\int_\Omega |\,f(u)\,|\,|\,\Phi\,|\,dx
\nonumber \\
&\leq M_f \displaystyle\int_\Omega \,|\,\Phi\,|\,dx
\nonumber \\
& \leq M_f |\Omega|^{\frac{1}{p} } \,\|\,\Phi\,\|_{L^q(\Omega)} \\
& \leq M_f |\Omega|^{\frac{1}{p} } \,\|\,\Phi\,\|_{W^{1,q}(\Omega)}
\end{align*}
Thus
\begin{align}
\|(F_{\epsilon})_{\eta}(u)\|_{(W^{1,q} )' } & \leq M_f |\Omega|^{\frac{1}{p} }
\label{normF_infty}
\end{align}
Suppose now that $u_1, u_2$ belong to a bounded set $B \subset X_{\epsilon}^{ {\eta}}$.
From \eqref{includ_C} and the hypotheses it follows now that $u_1, u_2$ belong to a ball of radius
$R = \sup_{u\in B} \|u\|_{\infty}$
in $L^{\infty}(\Omega) $ and, therefore, if $L_f$ is the Lipschitz constant of $f$ in the interval $[-R,R]$,
we have
$ | f(u_1(x)) - f(u_2(x)) | \leq L_f | u_1(x) - u_2(x) | $, for any $x \in \Omega$.
If
$\Phi \in (X_{\epsilon}^{-\frac{1}{2}})' = W^{1,q}$, we obtain
\begin{align*}
\left|\left\langle{(F_{\epsilon})_\eta}(u_1) - {(F_{\epsilon})_\eta}(u_2) \,,\,\Phi \right\rangle_{\beta, -\beta}\right| & =
\left|\displaystyle\int_{\Omega} [f(u_1)-f(u_2)]\,\Phi\,dx\,\right| \\
&\leq \displaystyle\int_{\Omega} L_f \,|\,u_1-u_2\,|\,|\,\Phi\,|\,dx \\
&\leq L_f \| u_1 - u_2 \|_{L^p(\Omega)} \cdot \| \Phi \|_{L^q(\Omega)} \\
&\leq L_f \| u_1 - u_2 \|_{X_{\epsilon}^\eta} \cdot \| \Phi \|_{W^{1,q}(\Omega)}
\end{align*}
Thus
\begin{align}
\|(F_{\epsilon})_{\eta}(u_1) - (F_{\epsilon})_{\eta}(u_2)\|_{(W^{1,q})' } & \leq
L_f \| u_1 - u_2 \|_{L^p(\Omega)} \label{LipF_Lp}\\
& \leq L_f \| u_1 - u_2 \|_{X_{\epsilon}^\eta}. \label{LipF}
\end{align}
This concludes the proof.
\mbox{${\square}$}
\begin{lema}\label{Gbem}
Suppose that $p $ and $\eta$ are such that \eqref{hip_inclusion} holds and $g$ is locally Lipschitz.
Then, if $\epsilon_0$ is sufficiently small, the operator $(G_{\epsilon})_{\eta} {=G} :X_{\epsilon}^{\eta} {\rightarrow} (W^{1,q})' $
given by (\ref{Gh}) is well defined, for $0 \leq \epsilon <\epsilon_0$
and bounded in bounded sets.
\end{lema}
\noindent{\bf Proof. \quad}
Suppose
$u \in X_{\epsilon}^{\eta} $. From \eqref{includ_C} and the hypotheses it follows that $u \in L^{\infty}(\Omega) $ and, therefore, if $L_g$ is the Lipschitz constant of $g$ in the interval
$ [ -\| u \|_{\infty}, \| u \|_{\infty} ]$,
it follows that
$ | g(\gamma(u)(x)) - g(0) | \leq L_g | \gamma(u)(x) | $, for any $x \in \partial \Omega$.
If $u \in X_{\epsilon}^{\eta}$ and $\Phi \in (X_{\epsilon}^{-\frac{1}{2}})' = W^{1,q}$, we have
\begin{align*}
\big|\left\langle G(u,\epsilon)\,,\,\Phi \right\rangle_{\beta \,,\, -\beta }\big| &\leq\displaystyle\int_{\partial \Omega} |\,g(\gamma(u))\,|\,|\,\gamma(\Phi)\,
|\left|\frac{J_{\partial \Omega}h_\epsilon}{Jh_\epsilon}\right|\,d\sigma(x) \nonumber \\
&\leq ||\mu||_\infty \displaystyle\int_{\partial \Omega} L_g |\gamma(u)| |\gamma(\Phi)| +
|g(0)|\,|\gamma(\Phi)| \, d\sigma(x) \nonumber \\
&\leq ||\mu||_\infty \left( L_g \|\gamma(u) \|_{L^p(\partial \Omega)} \cdot
\|\gamma(\Phi)\|_{L^q(\partial \Omega)}
+ \| g(0) \|_{L^p(\partial \Omega)} \cdot
\|\gamma(\Phi)\|_{L^q(\partial \Omega)} \right)
\end{align*}
where $\mu(x, \epsilon) = \left|\frac{J_{\partial \Omega}h_\epsilon}{Jh_\epsilon}\right|$, and $\|\mu\|_\infty = \sup \left\{ |\mu(x, \epsilon)| \, | \,
x\in \partial \Omega, \, 0 \leq \epsilon
\leq \epsilon_0 \right\}$ is finite by
hypothesis $ ({\bf H_1})$.
By Theorem \ref{trace}, there exist constants ${K}_1$, ${K}_2$ such that
$||\gamma(\Phi)||_{L^q(\partial \Omega)} \leq {K}_1\,||\Phi||_{W^{1,q}(\Omega)} \,,\
||\gamma(u)||_{L^p(\partial \Omega)} \leq {K}_2 \,||u||_{X_{\epsilon}^{\eta}}\, .
$
Thus
\begin{align*}
\big|\left\langle G(u,\epsilon)\,,\,\Phi \right\rangle_{\beta \,,\, -\beta }\big|
&\leq ||\mu||_\infty \left( L_g K_1 \,||\gamma(u)||_{L^p(\partial \Omega) }
\|\Phi \|_{W^{1,q} (\Omega)}
+ K_1 \| g(0) \|_{L^p(\partial \Omega)} \cdot
\|\Phi \|_{W^{1,q}(\Omega) } \right)
\end{align*}
proving that $(G_{\epsilon})_{\beta} $ is well defined and
\begin{align}
\|G(u,\epsilon) \|_{(W^{1,q}(\Omega))' } & \leq
||\mu||_\infty \left( L_g K_1 \,||\gamma(u)||_{ L^p(\partial \Omega) }
+ K_1 \| g(0) \|_{L^p(\partial \Omega)} \right) \label{normG_Lp} \\
& \leq
||\mu||_\infty \left( L_g K_1 K_2 \,||u||_{X_{\epsilon}^{\eta}}
+ K_1 \| g(0) \|_{L^p(\partial \Omega)} \right) \label{normG}
\end{align}
Alternatively, if $M_g= M_g (u) := \sup \{|g(x)| : x \in
[ -\| u \|_{\infty} , \| u \|_{\infty} ] \}$,
it follows that
\begin{align*}
\big|\left\langle G(u,\epsilon)\,,\,\Phi \right\rangle_{\beta \,,\, -\beta }\big| &\leq\displaystyle\int_{\partial \Omega} |\,g(\gamma(u))\,|\,|\,\gamma(\Phi)\,
|\left|\frac{J_{\partial \Omega}h_\epsilon}{Jh_\epsilon}\right|\,d\sigma(x) \nonumber \\
&\leq ||\mu||_\infty M_g \displaystyle\int_{\partial \Omega} \,|\gamma(\Phi)| \, d\sigma(x) \nonumber \\
&\leq ||\mu||_\infty M_g |{\partial \Omega}|^{\frac{1}{p} } \, \|\gamma(\Phi)\|_{L^q( \partial \Omega)} \nonumber \\
&\leq ||\mu||_\infty M_g |{\partial \Omega}|^{\frac{1}{p} } \,K_1 \|\Phi \|_{W^{1,q}( \Omega)}
\end{align*}
Thus
\begin{align}
\|G(u,\epsilon) \|_{(W^{1,q}(\Omega))' }
&\leq ||\mu||_\infty M_g |{\partial \Omega}|^{\frac{1}{p} } \,K_1 \label{normG_infty}
\end{align}
\mbox{${\square}$}
\begin{lema}\label{Glip}
Suppose the same hypotheses of Lemma \ref{Gbem} hold.
Then
the operator
$G(u,\epsilon) {=G(u)} :X_{\epsilon}^{\eta} \times [0, \epsilon_0] \to (W^{1,q})'$ given
by (\ref{Gh})
is uniformly continuous in $\epsilon$, for $u$ in bounded sets of $X_{\epsilon}^{\eta}$
and
locally Lipschitz continuous in $u$, uniformly in
$\epsilon$.
\end{lema}
\noindent{\bf Proof. \quad}
We first show that $(G_{\epsilon})_{\beta} $ is locally Lipschitz continuous in $u \in X_{\epsilon}^{\eta}$.
Suppose that $u_1, u_2$ belong to a bounded set $B \in X_{\epsilon}^{ {\eta}}$.
From \eqref{includ_C}, the Trace Theorem and the hypotheses, it follows now that $\gamma(u_1), \gamma(u_2)$ belong to a ball of some radius $R$
in $L^{\infty}(\partial \Omega) $ and, therefore, if $L_g$ is the Lipschitz constant of $g$ in the interval $[-R,R]$,
we have
$ | g(\gamma(u_1)(x)) - g(\gamma(u_2)(x))) | \leq L_g | \gamma(u_1)(x) - \gamma(u_2)(x) | $, for any $x \in \partial \Omega$.
If
$\Phi \in W^{1,q}(\Omega)$ and $ \epsilon \in [0, \epsilon_0]$, we obtain
\begin{align*}
\left|\left\langle G(u_1,\epsilon) - G(u_2,\epsilon),\Phi \right\rangle_{\beta,-\beta}\right |
&\leq \displaystyle\int_{\partial\Omega} |\,g(\gamma(u_1))-g(\gamma(u_2))\,|\,\big|\,\gamma(\Phi)\,\big|\,\left|\frac{J_{\partial\Omega} h_\epsilon}{Jh_\epsilon}\right|\,d \sigma (x) \\
&\leq \displaystyle\int_{\partial\Omega} L_g |\gamma(u_1)-\gamma(u_2)|\left|\gamma(\Phi)\right|\left|\frac{J_{\partial\Omega} h_\epsilon}{Jh_\epsilon}\right|d\sigma (x) \\
&\leq
L_g ||\mu||_\infty \displaystyle\int_{\partial\Omega} |\gamma(u_1)-\gamma(u_2)|\left|\gamma(\Phi)\ \right|d\sigma (x) \\
&\leq L_g ||\mu||_\infty \| \gamma(u_1) - \gamma( u_2 ) \|_{L^p(\partial \Omega)} \cdot \| \gamma(\Phi ) \|_{L^q(\partial \Omega)} \\
&\leq L_g ||\mu||_\infty K_1 K_2 \| u_1 - u_2 \|_{X_{\epsilon}^{\eta}} \cdot \| \Phi \|_{W^{1,q}(\Omega)},
\end{align*}
where $K_1, K_2$ are the norms of the trace mappings, given by Theorem \ref{trace}.
Therefore,
\begin{align}
\| G(u_1,\epsilon) - G(u_2,\epsilon) \|_{(W^{1,q})' }
&\leq L_g ||\mu||_\infty K_1 \| \gamma(u_1) - \gamma( u_2 ) \|_{L^p(\partial \Omega)}
\label{LipG_Lp} \\
&\leq L_g ||\mu||_\infty K_1 K_2 \| u_1 - u_2 \|_{X_{\epsilon}^{\eta}} \label{LipG}
\end{align}
so $(G_{\epsilon})_{\beta} $ is locally Lipschitz in $u$.
\par Now, if $u \in X_{\epsilon}^{\eta}$, $\Phi \in W^{1,q}(\Omega)$ and
$ \epsilon_1, \epsilon_2 \in [0, \epsilon_0]$, we have
\begin{align*}
\big|\langle G(u,{\epsilon_1})- & G(u,{\epsilon_2}),\Phi\rangle_{\beta,-\beta}\big| \leq
\displaystyle\int_{\partial\Omega}|\,g(\gamma(u))\,|\,|\,\gamma(\Phi)\,|\left|\,\left(\,\left|\frac{J_{\partial \Omega}h_{\epsilon_1}}{Jh_{\epsilon_1}}\right|-\left|\frac{J_{\partial \Omega}h_{\epsilon_2}}{Jh_{\epsilon_ 2}}\right|\,\right)\,\right|\,d\sigma(x) \\
&\leq
\|\mu_{\epsilon_1} - \mu_{\epsilon_2} \|_{\infty}
\displaystyle\int_{\partial\Omega} |\,g(\gamma(u))\,|\,|\,\gamma(\Phi)\,|\, d\sigma(x) \\
&\leq
\|\mu_{\epsilon_1} - \mu_{\epsilon_2} \|_{\infty}
\displaystyle\int_{\partial\Omega} \left( L_{g} |\,\gamma(u)\, | + |\, g(0)\, | \right) \,|\,\gamma(\Phi)\,|\, d\sigma(x) \\
&\leq \|\mu_{\epsilon_1} - \mu_{\epsilon_2} \|_{\infty} \left( L_g \|\gamma(u) \|_{L^p(\partial \Omega)} \cdot
\|\gamma(\Phi)\|_{L^q(\partial \Omega)}
+ \| g(0) \|_{L^p(\partial \Omega)} \cdot
\|\gamma(\Phi)\|_{L^q(\partial \Omega)} \right) \\
& \leq
\|\mu_{\epsilon_1} - \mu_{\epsilon_2} \|_{\infty} \left( L_g K_1 K_2 \,||u||_{X^{\eta}}
\|\Phi \|_{W^{1,q} (\Omega)}
+ K_1 \| g(0) \|_{L^p(\partial \Omega)} \cdot
\|\Phi \|_{W^{1,q}(\Omega) } \right)
\end{align*}
\noindent where $\|\mu_{\epsilon_1}- \mu_{\epsilon_2} \|_\infty =
\sup \left\{ \left| \left|\frac{J_{\partial \Omega}h_{\epsilon_1}}{Jh_{\epsilon_1}}\right|-\left|\frac{J_{\partial \Omega}h_{\epsilon_2}}{Jh_{\epsilon_ 2}}\right| \right| \, : \,
x\in \partial \Omega
\right\} \to 0 $ as $ |\epsilon_1 - \epsilon_2| \to 0 $,
by hypothesis $ ({\bf H_1})$ and $K_1, K_2$ are trace constants given by Theorem \ref{trace}.
It follows that
\begin{equation} \label{unif_contG}
\|G(u,\epsilon_1) - G(u,\epsilon_2)\|_{(W^{1,q}(\Omega))' } \leq
\|\mu_{\epsilon_1} - \mu_{\epsilon_2} \|_{\infty} \left( L_g K_1 K_2 \,||u||_{X^{\eta}}
+ K_1 \| g(0) \|_{L^p(\partial \Omega)} \right)
\end{equation}
Alternatively, if $M_g= M_g (u) := \sup \{|g(x)| : x \in
[ -\| u \|_{\infty} , \| u \|_{\infty} ] \}$,
\begin{align*}
\big|\langle G(u,{\epsilon_1})- & G(u,{\epsilon_2}),\Phi\rangle_{\beta,-\beta}\big| \leq
\displaystyle\int_{\partial\Omega}|\,g(\gamma(u))\,|\,|\,\gamma(\Phi)\,|\left|\,\left(\,\left|\frac{J_{\partial \Omega}h_{\epsilon_1}}{Jh_{\epsilon_1}}\right|-\left|\frac{J_{\partial \Omega}h_{\epsilon_2}}{Jh_{\epsilon_ 2}}\right|\,\right)\,\right|\,d\sigma(x) \\
&\leq
\|\mu_{\epsilon_1} - \mu_{\epsilon_2} \|_{\infty}
\displaystyle\int_{\partial\Omega} |\,g(\gamma(u))\,|\,|\,\gamma(\Phi)\,|\, d\sigma(x) \\
&\leq
\|\mu_{\epsilon_1} - \mu_{\epsilon_2} \|_{\infty} M_g
\displaystyle\int_{\partial\Omega} \,|\,\gamma(\Phi)\,|\, d\sigma(x) \\
&\leq
\|\mu_{\epsilon_1} - \mu_{\epsilon_2} \|_{\infty} M_g |\partial \Omega|^{\frac{1}{p} }
\| \,\gamma(\Phi) \|_{L^q(\partial \Omega)} \\
&\leq
\|\mu_{\epsilon_1} - \mu_{\epsilon_2} \|_{\infty} M_g |\partial \Omega|^{\frac{1}{p} }
K_1 \| \,\Phi \|_{W^{1,q}(\Omega)}
\end{align*}
It follows that
\begin{equation} \label{unif_contG_infty}
\|G(u,\epsilon_1) - G(u,\epsilon_2)\|_{(W^{1,q}(\Omega))' } \leq
\|\mu_{\epsilon_1} - \mu_{\epsilon_2} \|_{\infty} M_g |\partial \Omega|^{\frac{1}{p} } K_1
\end{equation}
\mbox{${\square}$}
\begin{cor} \label{propH} Suppose the hypotheses of Lemmas \ref{Flip} and \ref{Gbem} hold. Then the map
$(H (u,\epsilon))_{\eta} := ( F_{\epsilon} (u))_{\eta} + (G (u,\epsilon))_{\eta} :X_{\epsilon}^{\eta} \times [0, \epsilon_0] \to (W^{1,q})'$ is well defined, bounded in bounded sets uniformly in $\epsilon$, uniformly continuous in $\epsilon$ for $u$ in bounded sets of $X_{\epsilon}^{\eta}$
and
locally Lipschitz continuous in $u$ uniformly in
$\epsilon$.
\end{cor}
\noindent{\bf Proof. \quad}
From \eqref{normF_Lp}, \eqref{normF}, \eqref{normG_Lp} and \eqref{normG}, we obtain
\begin{align}
\|(H_{\epsilon})_{\eta}(u)\|_{(W^{1,q} )' } & \leq L_f \|u \|_{L^p(\Omega)} +
L_g K_1 \|\gamma(u) \|_{L^p(\partial \Omega)}
+ \| f(0)\|_{L^{p} (\Omega)}
+ K_1 \| g(0) \|_{L^p(\partial \Omega)} \label{normH_Lp} \\
& \leq \left( L_f +
L_g K_1 K_2 \right) ||u||_{X_{\epsilon}^{\eta}}
+ \| f(0)\|_{L^{p} (\Omega)}
+ K_1 \| g(0) \|_{L^p(\partial \Omega)}, \label{normH}
\end{align}
where $L_f$ and $L_g$ are the Lipschitz constants of $f$ and $g$ in the interval
$ [ -\| u \|_{\infty} , \| u \|_{\infty} ]$.
Alternatively, if $M_f = M_f(u) := \sup \{|f(x)| : x \in
[ -\| u \|_{\infty} , \| u \|_{\infty} ] \}$ and
$M_g= M_g (u) := \sup \{|g(x)| : x \in
[ -\| u \|_{\infty} , \| u \|_{\infty} ] \}$, we obtain from \eqref{normF_infty} and
\eqref{normG_infty}
\begin{align}
\|(H_{\epsilon})_{\eta}(u)\|_{(W^{1,q} )' } & \leq M_f |\Omega|^{\frac{1}{p} } +
||\mu||_\infty M_g |{\partial \Omega}|^{\frac{1}{p} } \,K_1 \label{normH_infty}
\end{align}
From \eqref{LipF_Lp}, \eqref{LipF}, \eqref{LipG_Lp} and \eqref{LipG},
\begin{align}
\| H(u_1,\epsilon) - H (u_2,\epsilon) \|_{(W^{1,q})' }
&\leq L_g ||\mu||_\infty K_1 \| \gamma(u_1) -
\gamma( u_2 ) \|_{L^p(\partial \Omega)} + L_f \| u_1 - u_2 \|_{L^p(\Omega)}
\label{LipH_Lp} \\
&\leq \left( L_g ||\mu||_\infty K_1 K_2 + L_f \right)
\| u_1 - u_2 \|_{X_{\epsilon}^{\eta}} \label{LipH}
\end{align}
From \eqref{unif_contG}
\begin{equation} \label{unif_contH}
\|H(u,\epsilon_1) - H(u,\epsilon_2)\|_{(W^{1,q}(\Omega))' } \leq
\|\mu_{\epsilon_1} - \mu_{\epsilon_2} \|_{\infty} \left( L_g K_1 K_2 \,||u||_{X^{\eta}}
+ K_1 \| g(0) \|_{L^p(\partial \Omega)} \right)
\end{equation}
Alternatively, from \eqref{unif_contG_infty}
\begin{equation} \label{unif_contH_infty}
\|H(u,\epsilon_1) - H(u,\epsilon_2)\|_{(W^{1,q}(\Omega))' } \leq
\|\mu_{\epsilon_1} - \mu_{\epsilon_2} \|_{\infty} M_g |\partial \Omega|^{\frac{1}{p} } K_1
\end{equation}
In the estimates above ${K}_1$ and ${K}_2$ are the norms of the trace mappings (see Theorem
\ref{trace}).
\mbox{${\square}$}
\begin{teo}\label{loc_exist}
Suppose the hypotheses of Corollary \ref{propH} hold.
Then, for any $(t_0, u_0) \in \mathbb{R} \times X_{\epsilon}^\eta$, the problem (\ref{abstract_scale})
has a unique solution $u(t,t_0, u_0,\epsilon)$ with initial value $u(t_0) = u_0$.
\end{teo}
\noindent{\bf Proof. \quad}
From Theorem \ref{sec_scale} it follows that $ {(}A_{ {\epsilon}} {)_\beta}$ is a sectorial operator in
$(W^{1,q})'$, with domain $X_{\epsilon}^{\frac{1}{2}} = W^{1,p} $, if $\epsilon $ is small enough.
The result follows then from Corollary \ref{propH} and results in \cite{He2}
and \cite{PP}.
\mbox{${\square}$}
\section{Global existence and boundedness of the semigroup}
\label{global_exist}
We will use the notation $T_{\epsilon}(t) u_0$ for the (local) solution of the problem \eqref{abstract_scale} given by
Theorem \ref{loc_exist}, with initial condition $u_0$ in some fractional power space of $A_{\epsilon}$.
We now want to show that these solutions are globally defined if additional (dissipative) hypotheses on $f$ and $g$ are assumed.
These hypotheses are the following:
There exist constants $c_0$ and $d_0$ such that
\begin{equation} \label{dissipative}
\displaystyle\limsup_{|\,u\,| \to \infty }\frac{f(u)}{u} \leq c_0\,, \quad
\displaystyle\limsup_{|\,u\,| \to \infty }\frac{g(u)}{u} \leq d_0
\end{equation}
and the first eigenvalue $\mu_1(\epsilon)$ of the problem
\begin{equation} \label{compet}
\left\{
\begin{array}{lll}
- h_{\epsilon}^{*} \Delta_{\Omega_\epsilon} {h_{\epsilon}^{*}}^{-1} u + (a - c_0)u = \mu u \,\,\mbox{in} \,\,\Omega \\
\displaystyle h_{\epsilon}^{*} \frac{\partial u}{\partial N_\Omega}{h_{\epsilon}^{*}}^{-1} =d_0\,u \,\, \mbox{on} \,\, \partial\Omega
\end{array}
\right.
\end{equation}
is positive for $\epsilon$ sufficiently small.
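As a simple illustration (our own example, not part of the original hypotheses), a dissipative cubic nonlinearity satisfies the first condition in \eqref{dissipative}:

```latex
\[
f(u) = c_0 u - u^3
\quad\Longrightarrow\quad
\limsup_{|u| \to \infty} \frac{f(u)}{u}
= \limsup_{|u| \to \infty} \bigl( c_0 - u^2 \bigr)
= -\infty \leq c_0 .
\]
```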
\begin{rem} \label{dissip_perturbed}
Observe that if hypothesis (\ref{compet}) holds
for $\epsilon = 0$, then it also holds for $\epsilon$ small, since the eigenvalues change continuously with $\epsilon$ by (\ref{dif_estimate}).
\end{rem}
\begin{rem}
The arguments below are a slight modification of the ones in
{\cite{OP}}, but we include them here for the sake of completeness. Similar arguments were used in \cite{ACB} in a somewhat different setting.
\end{rem}
In order to use comparison results, we start
by defining the concepts of sub- and super-solutions.
\begin{defn}
Suppose $\Omega$ is a $\mathcal{C}^{1,\alpha}$ domain for some $\alpha \in (0,1)$, $L$ is a uniformly elliptic second order differential operator in $\overline{\Omega}$,
$u_0\in \mathcal{C }^{\alpha}(\Omega)$, $T>0$ and
$\bar{u}:[0,T]\times\overline{\Omega}\to\mathbb{R}$ (respectively, ${\underline u}$)
is a function which is continuous in $[0,T]\times\bar\Omega$,
continuously differentiable in $t$ and twice continuously differentiable in
$x$
for
$(t,x)\in (0,T]\times\Omega$. Then $\overline{u}$ (respectively, ${\underline u}$)
is a super-solution (sub-solution) of the problem
\begin{equation}
\left\{
\begin{aligned}
\displaystyle
u_t & = Lu +f(u), \qquad \hbox{in} \quad (0,T]\times \Omega, \\
\frac{\partial u}{\partial N} & = g(u), \qquad \hbox{on}\quad \partial \Omega
\\
u(0) & =u_0.
\label{subsup}
\end{aligned}
\right.
\end{equation}
if it satisfies
\begin{equation}
\left\{
\begin{aligned}
\displaystyle
u_t & \geq Lu +f(u) , \qquad \hbox{in} \quad (0,T]\times \Omega, \\
\frac{\partial u}{\partial N} & \geq g(u), \qquad \hbox{on}\quad \partial \Omega
\\
u(0) & \geq u_0.
\label{super}
\end{aligned}
\right.
\end{equation}
(respectively, with the $\geq$ sign replaced by the $\leq$ sign).
\end{defn}
A basic result for our arguments is the following
\begin{teo}(Pao \cite{Pao})
\label{pao}
If $f$ is locally Lipschitz and ${\bar u}$ and ${\underline u}$ are
respectively
a super and sub-solution of the problem (\ref{subsup}), satisfying
\[
{\underline u}\leq {\bar u},{\hbox{ in }}\Omega\times(0,T),
\]
then there exists a solution $u$ of (\ref{subsup}) such that
\[
{\underline u}\leq u\leq {\bar u},{\hbox{ in }}\Omega\times(0,T).
\]
\end{teo}
Let now $\varphi_{\epsilon}$ be the first positive normalized
eigenfunction of \eqref{compet} and
$\displaystyle m_{\epsilon} =\min_{x\in{\bar\Omega}}\varphi_{\epsilon} (x)$. We know that $m_{\epsilon}>0$.
For each real $\theta > 0$, define
\[
\Sigma_\theta^{\epsilon}=\displaystyle\left\{u\in X^{\eta}_{\epsilon} : |u(x)|\leq \theta\varphi_{\epsilon} (x),
{\hbox{ for all }}x\in{\bar\Omega}\right\}.
\]
From the dissipative hypothesis \eqref{dissipative} on $f$ and $g$, we know that there
exists
$\xi\in\mathbb{R}$, such that
\[
\displaystyle
\frac{f(s)}{s}\leq c_0{\hbox{ and }} \frac{g(s)}{s}\leq d_0,
\]
for all $s$ with $|s|\geq\xi$.
To simplify the notation, we take $\epsilon = 0$ in the proofs below, since the argument is the same for any $\epsilon$ such that \eqref{compet} is true (see Remark \ref{dissip_perturbed}).
\begin{lema}
\label{posinv} Suppose, in addition to
the hypotheses of Theorem \ref{loc_exist}, that \eqref{dissipative} and
\eqref{compet} hold.
Then, if $\theta m_{\epsilon} \geq\xi$ and $\epsilon$ is small enough, the set
$\Sigma_\theta^{\epsilon} $ is a positively invariant set for $T(t)$.
\end{lema}
\noindent{\bf Proof. \quad}
Let
\[
\begin{array}{l}
\displaystyle \Sigma^1_\theta =\{u\in X^{\eta} : u(x)\leq \theta\varphi(x),{\hbox{
for all }}
x\in{\bar\Omega}\}\\
\\
\displaystyle \Sigma^2_\theta =\{u\in X^{\eta} : u(x)\geq -\theta\varphi(x),{\hbox{
for all }}
x\in{\bar\Omega}\}
\end{array}
\]
Since $\Sigma_\theta=\Sigma^1_\theta\cap\Sigma^2_\theta$ it is enough to show
that $\Sigma^1_\theta$ and $\Sigma^2_\theta$ are positively invariant.
Let $u_0\in\Sigma^1_\theta$, and suppose, for contradiction, that there
exists $t_0\in[0,t_{\max}[$ and $x_0\in{\bar\Omega}$ such that
\[
T(t_0)u_0(x_0) > \theta\varphi(x_0).
\]
Consider ${\bar v}(t)=e^{-\mu(t-t_0)}\theta\varphi$, where $\mu$ is the
eigenvalue associated with $\varphi$. We have that
\[ \left\{
\begin{aligned}
\frac{\partial{\bar v}}{\partial t} &= -\mu{\bar v} = \Delta {\bar v}-
a{\bar v}+c_0{\bar v}
\geq
\Delta {\bar v}
- a{\bar v}+ f({\bar v})
\\
\frac{\partial {\bar v}}{ \partial N} & = d_0{\bar v} \geq g({\bar v}),
\end{aligned}
\right.
\]
for all $t\in]0,t_0]$.
Thus ${\bar v}$ is a super-solution for the problem (\ref{nonlinBVP_fix}). It follows
from Theorem~{\ref{pao}} that
\[
T(t)u_0\leq {\bar v}(t),{\hbox{ in }}{\bar\Omega}{\hbox{ for all
}}t\in[0,t_0[.
\]
In particular, $T(t_0)u_0(x_0)\leq\theta\varphi(x_0)$ and we reach a
contradiction.
To prove that $\Sigma^2_\theta$ is positively invariant we proceed in a
similar
way,
using now that ${\underline v}=-{\bar v}$ is a sub-solution for the problem
(\ref{nonlinBVP_fix}).
\mbox{${\square}$}
\begin{lema}
\label{bound_theta}
Suppose the hypotheses of Lemma \ref{posinv} hold.
If $\theta m_{\epsilon} \geq \xi$, and $ \eta \leq \alpha < \frac{1}{2} $, there exists a constant $R= R(\theta, \eta)$, and $T>0$ independent of $\epsilon$, such that the orbit of any
bounded subset V of $X^{\eta}_{\epsilon} \cap \Sigma_\theta^{\epsilon}$ under $T_{\epsilon}(t)$ is in the ball of radius
$R$ of $X^{\alpha}_{\epsilon}$, for $ t > T $. In particular, the solutions with initial condition in $X^{\eta}_{\epsilon} \cap \Sigma_\theta$ are globally defined.
\end{lema}
\noindent{\bf Proof:}
Lemma~\ref{posinv} implies that
$T_{\epsilon}(t)u_0\in\Sigma_\theta^{\epsilon}$, for all $t\in[0,t_{\max}[$ so
\[
\|T_{\epsilon}(t)u_0\|_\infty\leq\theta\|\varphi\|_\infty.
\]
Applying the variation of constants formula, we obtain (see \cite{He2})
\[
\displaystyle
\|T(t)u_0\|_{X^{\alpha}}\leq M t^{-(\alpha - \eta ) } e^{-\delta
t}\|u_0\|_{X^{\eta}}+M\int_0^t(t-s)^{-(\alpha +\frac{1}{2})}
e^{-\delta(t-s)}\|(H_{\epsilon})_{\eta}(T(s)u_0)\|_{X^{-\frac{1}{2} } } \,ds,
\]
where the $M,\delta >0$ are constants depending only on the decay of the linear semigroup
$e^{A_{\epsilon}t}$, and can be chosen independently of $\epsilon$.
By \eqref{normH_infty}
\begin{align*}
\|(H_{\epsilon})_{\eta}(T(s)u_0)\|_{X^{-\frac{1}{2} }} & \leq M_f |\Omega|^{\frac{1}{p} } +
||\mu||_\infty M_g |{\partial \Omega}|^{\frac{1}{p} } \,K_1 \\
\end{align*}
where now
$M_f = M_f(u) := \sup \{|f(x)| : x \in I \}$,
$M_g= M_g (u) := \sup \{|g(x)| : x \in I \}$
with $ [ - \theta \| \varphi_{\epsilon}\|_{\infty} , \theta \| \varphi_{\epsilon}\|_{\infty} ] \subset I $, for all
$\epsilon$ sufficiently small.
Thus, writing $K= M_f |\Omega|^{\frac{1}{p} } +
||\mu||_\infty M_g |{\partial \Omega}|^{\frac{1}{p} } \,K_1$, we obtain
\begin{align*}
\displaystyle
\|T_{\epsilon}(t)u_0\|_{X^{\alpha}} & \leq M t^{-(\alpha - \eta ) } e^{-\delta
t}\|u_0\|_{X^{\eta}}+ K M\int_0^t(t-s)^{-(\alpha +\frac{1}{2})}
e^{-\delta(t-s)} \,ds, \ \\
& \leq
M t^{-(\alpha - \eta ) } e^{-\delta
t}\|u_0\|_{X^{\eta}}+
KM \frac{ \Gamma( \frac{1}{2} -\alpha ) }{\delta^{\frac{1}{2}- \alpha} }.
\end{align*}
for all $t\in[0,t_{max}[$.
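The last inequality uses a standard bound for the singular convolution integral; for completeness (a routine computation, extending the integral to $(0,\infty)$ and substituting $\tau = \delta u$):

```latex
\[
\int_0^t (t-s)^{-(\alpha+\frac{1}{2})} e^{-\delta(t-s)}\,ds
= \int_0^t u^{-(\alpha+\frac{1}{2})} e^{-\delta u}\,du
\leq \delta^{\alpha-\frac{1}{2}} \int_0^{\infty} \tau^{-(\alpha+\frac{1}{2})} e^{-\tau}\,d\tau
= \frac{\Gamma(\frac{1}{2}-\alpha)}{\delta^{\frac{1}{2}-\alpha}},
\]
```

which is finite precisely because $\alpha < \frac{1}{2}$.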
Therefore $\|T_{\epsilon}(t)u_0\|_{X^{\alpha}}$ is bounded by a constant for any $t > 0$. Since
$ X^{\alpha}$ is compactly embedded in $ X^{\eta }$, if $\alpha > \eta$, it follows that the solution is globally defined. Also, if $T$ is such that
$ t^{-(\alpha - \eta ) } e^{-\delta
t}\|u_0\|_{X^{\eta}} \leq
K \frac{ \Gamma( \frac{1}{2} -\alpha ) }{\delta^{\frac{1}{2}- \alpha} } $, then
$T_{\epsilon}(t)u_0$ belongs to the ball of $ X^{\alpha}$
of radius
$R(\theta) = 2 KM \frac{ \Gamma( \frac{1}{2} -\alpha ) }{\delta^{\frac{1}{2}- \alpha} } $, for $t \geq T$.
\mbox{${\square}$}
\section{Existence of Global Attractors}
\label{attract}
The first step to show the existence of global attractors will be to obtain
a ``contraction property'' of the sets $\Sigma_\theta$, similar to the
property for rectangles, considered by Smoller \cite{smoller}.
\begin{lema}
\label{contract}
Suppose that the hypotheses of Lemma \ref{posinv} hold
and $\bar\theta\in \mathbb{R}$ satisfies $\bar\theta m_{\epsilon} >\xi$. Then, for any
$\theta$ there exists a $\bar t$, which can be chosen independently of $\epsilon$, such that
\[
T_{\epsilon}(t)\Sigma_\theta^{\epsilon}\subset\Sigma_{\bar\theta}^{\epsilon},
\]
for all $t\geq{\bar t}$.
\end{lema}
\noindent{\bf Proof:}
Let $u\in\Sigma_\theta$. We can suppose without loss of generality that
$\theta\geq\bar\theta$. Let $\bar v=e^{-t\mu_{\epsilon}}\theta\varphi$, $\underline v=
-\bar v$.
As in Lemma~\ref{posinv}, we can prove that $\bar v$ and $\underline v$ are
super- and sub-solutions respectively. Thus, using Theorem~\ref{pao} and the
uniqueness of solution, we have that
\[
\underline v\leq T_{\epsilon} (t)u\leq \bar v.
\]
Therefore $T_{\epsilon}(t)u$ enters $\Sigma_{\bar\theta}$ after a time depending only on
$\theta$, and on the first eigenvalue $\mu_{\epsilon} $ of $A_{\epsilon} $ (and not on the particular solution $u \in \Sigma_{\theta}$). Since $\mu_{\epsilon} $ is bigger than a constant $\mu$, for
$\epsilon$ sufficiently small, and
$\Sigma_{\bar\theta}$
is positively invariant, the result follows.
\mbox{${\square}$}
\begin{teo}
\label{exatr}
Suppose that the hypotheses of Lemma \ref{posinv} hold. Then
the problem (\ref{abstract_scale}) has a global attractor $\mathcal{A}_{\epsilon}$ in $X^{\eta}_{\epsilon}
$. Furthermore
$\mathcal{A_{\epsilon}} \subset \Sigma_{\theta}^{\epsilon}$ if $\theta m_{\epsilon}\geq
\xi$.
\end{teo}
\noindent{\bf Proof:}
Let $V $ be a bounded subset of $ X^{\eta}$, and
$\bar\theta\in\mathbb{R} $ be such that $\bar\theta m\geq\xi$. If $u$ is any
element of $X^{\eta}$, it follows from the continuity
of the embedding $X^{\eta}\hookrightarrow C^0(\bar\Omega)$ that
$u\in\Sigma_{\theta}$,
for some $\theta$, and then, applying Lemma~\ref{contract}, we conclude that
$T(t)u\in\Sigma_{\bar\theta}$, for $t$ large enough.
From Lemma \ref{bound_theta}, it follows that $V $ enters and remains in a ball
of $ X^{\alpha} $, with $\alpha > \eta$, of radius $R(\alpha, \bar{\theta})$, which does not depend on $V$. Since this ball is a compact subset of
$ X^{\eta} $, the existence of a global compact attractor
$\mathcal{A}$ follows immediately. Furthermore, since $ \Sigma_{\bar{\theta}} $ is positively invariant by
Lemma
\ref{posinv}, it also follows that $\mathcal{A} \subset \Sigma_{\bar{\theta}} $,
as claimed.
\mbox{${\square}$}
\begin{cor} \label{uniform_bound_Linf}
Suppose that the hypotheses of Lemma \ref{posinv} hold.
If $\epsilon_0 $ is sufficiently small, the attractor $\mathcal{A}_{\epsilon}$ is uniformly bounded in
$ L^{\infty} $, for $0 \leq \epsilon \leq \epsilon_0$.
\end{cor}
\noindent{\bf Proof:}
From \eqref{fund_inequality_Lp} and results in \cite{kato}, it follows that the first eigenvalue and eigenfunction of $A_{\epsilon}$ are continuous in $W^{1,p} $ and, therefore, also in $L^{\infty}$. Thus the sets
$ \Sigma_{{\theta}}^{\epsilon}$ are uniformly bounded in $ L^{\infty} $ and the result follows from Theorem \ref{exatr}.
\mbox{${\square}$}
\section{Upper semicontinuity of the family of global attractors}
\label{upper}
Recall that a family of subsets $\mathcal{A}_{\lambda}$ of
a metric space $(X,d)$ is
said to be {\em upper semicontinuous} at $\lambda = \lambda_0$
if $\delta(\mathcal{A}_{\lambda}, \mathcal{A}_{\lambda_0}) \to 0$
as $ \lambda \to \lambda_0$, where
$\delta(A,B) = \sup_{x\in A} d(x,B) =
\sup_{x\in A} \inf_{y \in B} d(x,y) $
and {\em lower semicontinuous} if
$\delta(\mathcal{A}_{\lambda_0}, \mathcal{A}_{\lambda}) \to 0$
as $ \lambda \to \lambda_0$.
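For instance (a standard illustration, included only for motivation), taking $X=\mathbb{R}$, $\mathcal{A}_{\lambda}=\{0\}$ for $\lambda\neq\lambda_0$ and $\mathcal{A}_{\lambda_0}=[0,1]$, we have
\[
\delta(\mathcal{A}_{\lambda},\mathcal{A}_{\lambda_0})=0 \ \textrm{ for all } \lambda,
\qquad \textrm{while} \qquad
\delta(\mathcal{A}_{\lambda_0},\mathcal{A}_{\lambda})=1 \ \textrm{ for } \lambda\neq\lambda_0,
\]
so this family is upper but not lower semicontinuous at $\lambda_0$.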
To prove the upper semicontinuity of the family of attractors $\mathcal{A}_{\epsilon}$, given by Theorem \ref{exatr},
in the (fixed) fractional space
$X^{\eta}$, $0 < \eta < \frac{1}{2}$, we will need two main ingredients: the uniform boundedness of the family and the continuity of the nonlinear semigroup $T_{\epsilon}$ with respect to $\epsilon$. This is the content of the next two results. In view of the uniform boundedness of the solutions, proved in Corollary \ref{uniform_bound_Linf}, we may suppose, without loss of generality, the following hypotheses on the nonlinearities:
\begin{align} \label{glob_Lip}
\bullet \ f \textrm{ and } g & \textrm{ are globally bounded}. \nonumber \\
\bullet \ f \textrm{ and } g & \textrm{ are globally Lipschitz,
with Lipschitz constants } L_f \textrm{ and } L_g, \textrm{ respectively},
\end{align}
\begin{lema} \label{uniform_bound_space}
Suppose that the hypotheses of Lemma \ref{posinv} and \eqref{glob_Lip} hold.
If $\epsilon_0 $ is sufficiently small, the family of attractors $\mathcal{A}_{\epsilon}$ given by Theorem \ref{exatr} is uniformly bounded
in the (fixed) fractional space
$X^{\eta}$, $0 < \eta < \frac{1}{2}$, for $0 \leq \epsilon \leq \epsilon_0$.
\end{lema}
\noindent{\bf Proof. \quad}
Let $b$ be the exponential rate of decay of the linear semigroup
generated by $A_{\epsilon}$, for $\epsilon$ small, given by
Theorem \ref{cont_lin_semigroup}. Let $u \in \mathcal{A}_{\epsilon}$. By the variation of constants formula, Lemma \ref{sector} and Theorem \ref{cont_lin_semigroup}, we obtain
\begin{align*}
\| T_{\epsilon}(t)(u) \|_{\eta} & \leq
\| e^{A_{\epsilon}(t)} u \|_{\eta}
+ \int_{0}^{t} \| e^{A_{\epsilon}(t-s)}
H_{\epsilon}(T_{\epsilon}(s) u )\|_{\eta} \, ds \\
& \leq \| e^{A(t)} u \|_{\eta} + \| \left( e^{A_{\epsilon}(t)} - e^{A(t)} \right) u \|_{\eta}
+ \int_{0}^{t} \| e^{A(t-s)}
H_{\epsilon}(T_{\epsilon}(s) u )\|_{\eta} \, ds \\
& +
\int_{0}^{t} \| \left( e^{A_{\epsilon}(t-s)} - e^{A(t-s)} \right)
H_{\epsilon}(T_{\epsilon}(s) u) \|_{\eta} \, ds \\
& \leq \left( C e^{-at} + C(\epsilon) e^{-bt} \right)
\frac{1}{t^{\eta+\frac{1}{2}}} \|u\|
+
\int_{0}^{t} C e^{-a(t-s)} \frac{1}{(t-s)^{\eta+\frac{1}{2}}}
\| H_{\epsilon}(T_{\epsilon}(s) u ) \| \, ds \\
& + \int_{0}^{t} C e^{ -b (t-s)} \frac{1}{(t-s)^{\eta + \frac{1}{2}}} \|
H_{\epsilon}(T_{\epsilon}(s) u ) \| \, ds.
\end{align*}
By \eqref{normH_infty},
\begin{align*}
\|(H_{\epsilon})_{-\frac{1}{2}}(T_{\epsilon}(s)u)\|_{X^{-\frac{1}{2} }} & \leq M_f |\Omega|^{\frac{1}{p} } +
\|\mu\|_\infty M_g |{\partial \Omega}|^{\frac{1}{p} } \,K_1 \\
& \leq
\|f\|_{\infty} |\Omega|^{\frac{1}{p} } +
\|\mu\|_{\infty} \|g\|_{\infty} |{\partial \Omega}|^{\frac{1}{p} } \,K_1,
\end{align*}
where $K_1$ is a constant of the trace mapping (see Theorem \ref{trace}).
Thus
\begin{align*}
\| T_{\epsilon}(t)(u) \|_{\eta} & \leq
C' e^{-bt} \frac{1}{t^{\eta + \frac{1}{2}}} \|u\|_{\infty} + C'' \left( \|f\|_{\infty} |\Omega|^{\frac{1}{p} } +
\|\mu\|_{\infty} \|g\|_{\infty} |{\partial \Omega}|^{\frac{1}{p} } \right)
\int_{0}^{t} e^{-b(t-s)} \frac{1}{(t-s)^{\eta + \frac{1}{2}}} \, ds,
\end{align*}
where the constants $C' $ and $C''$ do not depend on $\epsilon$.
Since the right hand side is uniformly bounded for $u \in \mathcal{A}_{\epsilon}$ and $t$ bounded away from $0$, and the attractors are invariant, the result follows immediately.
\mbox{${\square}$}
\begin{lema} \label{uniform_cont_space}
Suppose that the hypotheses of Lemma \ref{uniform_bound_space} hold.
Then the map
\[ (u, \epsilon)\in X^{\eta} \times [0, \epsilon_0] \mapsto T_{\epsilon}(t) u \in X^{\eta} \]
is continuous at $\epsilon = 0$, uniformly for $u$ in bounded sets and $ 0 < t \leq T < \infty$.
\end{lema}
\noindent{\bf Proof. \quad}
Using the variation of constants formula, \eqref{normH_infty}, \eqref{unif_contH_infty} and
\eqref{LipH}, we obtain
\begin{align*}
\| T_{\epsilon}(t)(u) - T(t)(u) \|_{\eta} & \leq
\| e^{A_{\epsilon}(t)} u - e^{A(t)} u \|_{\eta} \\
& + \int_{0}^{t} \| \left( e^{A_{\epsilon}(t-s)} - e^{A(t-s)} \right)
H_{\epsilon} (T_{\epsilon}(s) u)\|_{\eta} \, ds \\
& +
\int_{0}^{t} \| e^{A(t-s)} \left(
H_{\epsilon}(T_{\epsilon}(s) u ) - H(T_{\epsilon} (s) u ) \right)\|_{\eta} \, ds \\
& + \int_{0}^{t} \| e^{A(t-s)} \left(
H(T_{\epsilon}(s) u ) - H(T (s) u) \right)\|_{\eta} \, ds
\\
& \leq C(\epsilon ) e^{-bt} \frac{1}{t^{\eta + \frac{1}{2} }} \|u\|
+
\int_{0}^{t} C (\epsilon) e^{-b(t-s)} \frac{1}{(t-s)^{\eta+\frac{1}{2}}}
\| H_{\epsilon}(T_{\epsilon}(s) u ) \| \, ds \\
& + \int_{0}^{t} C e^{ -b (t-s)} \frac{1}{(t-s)^{\eta + \frac{1}{2}}} \|
H_{\epsilon}(T_{\epsilon}(s) u ) - H(T_{\epsilon}(s) u) \| \, ds \\
& + \int_{0}^{t} C e^{ -b (t-s)} \frac{1}{(t-s)^{\eta + \frac{1}{2}}} \|
H(T_{\epsilon}(s) u) - H(T(s) u ) \| \, ds \\
& \leq C(\epsilon ) e^{-bt} \frac{1}{t^{\eta + \frac{1}{2} }} \|u\|
\\
& +
\int_{0}^{t} C (\epsilon) e^{-b(t-s)} \frac{1}{(t-s)^{\eta+\frac{1}{2}}}
\left( \|f\|_{\infty} |\Omega|^{\frac{1}{p} } +
\|\mu\|_{\infty} \|g\|_{\infty} |{\partial \Omega}|^{\frac{1}{p} } \,K_1 \right) \, ds \\
& + \int_{0}^{t} C e^{ -b (t-s)} \frac{1}{(t-s)^{\eta + \frac{1}{2}}}
\left( \|\mu_{\epsilon} - 1 \|_{\infty} M_g |\partial \Omega|^{\frac{1}{p} } K_1 \right) \, ds \\
& + \int_{0}^{t} C e^{ -b (t-s)} \frac{1}{(t-s)^{\eta + \frac{1}{2}}}
\left( L_g \|\mu\|_\infty K_1 K_2 + L_f \right)
\| T_{\epsilon}(s) u - T(s) u \|_{X_{\epsilon}^{\eta}} \, ds.
\end{align*}
Writing
\begin{align*}
A(\epsilon) & : = C(\epsilon ) \|u\|
+ t^{\eta + \frac{1}{2}}
\int_{0}^{t} C (\epsilon) e^{bs} \frac{1}{(t-s)^{\eta+\frac{1}{2}}}
\left( \|f\|_{\infty} |\Omega|^{\frac{1}{p} }
+
||\mu||_{\infty} \|g\|_{\infty} |{\partial \Omega}|^{\frac{1}{p} } \,K_1 \right) \,
\, d \, s \\
& + t^{\eta + \frac{1}{2}} \int_{0}^{t} C e^{ bs} \frac{1}{(t-s)^{\eta + \frac{1}{2}}}
\left( \|\mu_{\epsilon} - 1 \|_{\infty} M_g |\partial \Omega|^{\frac{1}{p} } K_1 \right) \, ds \\
\\
B & : = C
\left( L_g \|\mu\|_\infty K_1 K_2 + L_f \right),
\end{align*}
we obtain
\begin{align*}
e^{bt} \| T_{\epsilon}(t)(u) - T(t)(u) \|_{\eta} & \leq
A(\epsilon) t^{ - (\eta + \frac{1}{2}) } +
B \int_0^t (t-s)^{ - (\eta + \frac{1}{2}) }
e^{ b s}
\| T_{\epsilon}(s) u - T(s) u \|_{X_{\epsilon}^{\eta}} \, ds.
\end{align*}
From the singular Gronwall inequality, it follows that
\begin{align*}
\| T_{\epsilon}(t)(u) - T(t)(u) \|_{\eta} & \leq
A(\epsilon) M e^{-bt} t^{ - (\eta + \frac{1}{2}) },
\end{align*}
for $0< t \leq T$, where the constant $M$ depends on $B, \eta $ and $T$, for $u$ in a bounded set
of $X_{\epsilon}^{\eta}$.
\mbox{${\square}$}
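For the reader's convenience, we recall the version of the singular Gronwall inequality used above (see, e.g., \cite{He2}): if $\phi$ is nonnegative and locally integrable on $(0,T]$ and, for some constants $a, \beta \geq 0$ and $0\leq \alpha <1$,
\[
\phi(t)\leq a\,t^{-\alpha}+\beta\int_0^t (t-s)^{-\alpha}\phi(s)\,ds,
\qquad 0<t\leq T,
\]
then there exists a constant $M=M(\beta,\alpha,T)$ such that $\phi(t)\leq a\,M\,t^{-\alpha}$, for $0<t\leq T$. It is applied here with $\phi(t)= e^{bt}\,\| T_{\epsilon}(t)(u) - T(t)(u) \|_{\eta}$ and $\alpha=\eta+\frac{1}{2}<1$.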
\begin{teo} \label{upper_semi} Suppose that the hypotheses of Lemma \ref{uniform_bound_space} hold.
Then the family of
attractors ${\mathcal{A}}_{\epsilon}$, given by Theorem \ref{exatr},
is upper semicontinuous with
respect to $\epsilon$ at $\epsilon= 0$.
\end{teo}
\noindent{\bf Proof. \quad} From Lemma \ref{uniform_bound_space} there exists a bounded set
$B \subset X^{\eta}$ such that $ \bigcup_{0 \leq \epsilon \leq \epsilon_{0} } \mathcal{A}_{\epsilon} \subset B$.
Given $\delta > 0$, there exists $t_{\delta} >0$ such that $T(t_{\delta})(B) \subset
\mathcal{A}_0^{\frac{\delta}{2}}$, where
${\mathcal{A}}_{0}^{\frac{\delta}{2}}$ is the
$\frac{\delta}{2}$-neighborhood of ${\mathcal{A}}_{0}$.
From Lemma \ref{uniform_cont_space}, there exists $\bar{\epsilon}
> 0$ such that
$\| T_{\epsilon}(t_{\delta}) u - T(t_{\delta}) u \|_{X^{\eta}} \leq \frac{\delta}{2} $, for
every $u \in B$ and $ 0 \leq \epsilon \leq \bar{\epsilon}$. It follows that
$ T_{\epsilon}(t_{\delta}) B \subset \mathcal{A}_0^{\delta}$. In particular,
$ T_{\epsilon}(t_{\delta}) \mathcal{A}_{\epsilon} \subset \mathcal{A}_0^{\delta}$.
Since $\mathcal{A}_{\epsilon}$ is invariant under $T_{\epsilon}$, we conclude that
$ \mathcal{A}_{\epsilon} \subset \mathcal{A}_0^{\delta}$, for $0\leq \epsilon \leq \bar{\epsilon}$, thus proving the claim.
\mbox{${\square}$}
From the semicontinuity of attractors, we can easily prove the corresponding
property for the equilibria.
\begin{cor}\label{upperequil}
Suppose the hypotheses of Theorem \ref{upper_semi} hold.
Then the family of sets of equilibria
$\{ E_{\epsilon } \, | \, 0\leq \epsilon \leq \epsilon_0 \}$ of
the problem (\ref{abstract_scale}) is upper semicontinuous in $X^\eta$.
\end{cor}
\noindent{\bf Proof. \quad}
The result is well-known, but we sketch a proof here for completeness.
Suppose $u_{n} \in E_{\epsilon_n}$, with $\displaystyle\lim_{n\to \infty}\epsilon_n= 0$. We choose an arbitrary subsequence and still call it $(u_{n})$, for simplicity. It is enough to show that there exists a subsequence $(u_{n_k})$ which converges to a point $u_0 \in E_0$. Since $E_{\epsilon_n} \subset \mathcal{A}_{\epsilon_n}$ and, by Theorem \ref{upper_semi}, $\delta(\mathcal{A}_{\epsilon_n}, \mathcal{A}_{0}) \to 0$, there exists
$v_n \in \mathcal{A}_0$ with $ \|u_n - v_n\|_{\eta} \to 0 $. Since $\mathcal{A}_0$ is compact,
there
exists a subsequence $(v_{n_k})$ which converges to a point $u_0 \in
\mathcal{A}_0$, so also $u_{n_k} \to u_0$.
Now, since the flow $T_\epsilon(t)$ is continuous in $\epsilon$
and each $u_{n_k}$ is an equilibrium, we have, for
any $t>0$,
\[ u_{n_k} = T_{\epsilon_{n_k}}(t) u_{n_k} \to
T_{0}(t) u_0. \]
Thus, by uniqueness of the limit, $T_{0}(t) u_0 =u_0$, for any $t> 0$, so
$u_0 \in E_0$.
\mbox{${\square}$}
\section{Lower semicontinuity}
\label{lower}
For the lower semicontinuity we will need to
assume the following additional properties for the nonlinearities.
\begin{align} \label{boundfg}
f \textrm{ and } g \textrm{ are in } C^1(\mathbb{R},\mathbb{R}) \textrm{ with bounded derivatives}.
\end{align}
\begin{lema}
\label{FGateaux}
Suppose that $\eta$ and $p$ are such that \eqref{hip_inclusion} holds and $f$ satisfies (\ref{boundfg}).
Then
the operator
$F :X^{\eta}\times \mathbb{R} {\rightarrow} X^{-\frac{1}{2}}$ given by
(\ref{Fh}) is Gateaux differentiable with respect to
$u$, with Gateaux differential
$\displaystyle{\frac{\partial F}{\partial u}(u,\epsilon)w}$ given by
\begin{equation}\label{FGateaux_form}
\left\langle \frac{\partial F}{\partial u}(u,\epsilon)w\,,\,
\Phi\right\rangle_{-\frac{1}{2},\frac{1}{2}} = \displaystyle\int_\Omega f^{\,'}(u)w\,\Phi\,dx\,,
\end{equation}
for all $w \in X^\eta$ and $\Phi \in X^{\frac{1}{2}}$.
\end{lema}
\noindent{\bf Proof. \quad}
Observe first that $F(u, {\epsilon})$ is well-defined, since the conditions of Lemma \ref{Flip} are met.
It is clear that $\displaystyle\frac{\partial F}{\partial u}(u,\epsilon)$ is linear. We now show that it is bounded. In fact we have,
for all
$u,w \in X^{\eta} $ and
$\Phi \in X^{\frac{1}{2}}$
\begin{eqnarray*}
\left|\left\langle \frac{\partial F}{\partial u}(u,\epsilon)w\,,\,
\Phi\right\rangle_{-\frac{1}{2},\frac{1}{2}} \right| &\leq & \displaystyle\int_\Omega
|\,f^{\,'}(u)\,|\, |\,w\,| \,|\,\Phi\,|\, dx\ \\
&\leq & \|\,f'\,\|_{\infty}\displaystyle\int_\Omega |\,w\,| \, |\,\Phi\,|\,dx \\
&\leq & \|\,f'\,\|_{\infty}\displaystyle \|\,w\,\|_{L^p(\Omega)} \,
\|\,\Phi\,\|_{L^q(\Omega)} \\
&\leq & \|\,f'\,\|_{\infty}\displaystyle \|w\|_{ X^{\eta}} \,
\|\,\Phi\,\|_{ X^{\frac{1}{2}}}\,,
\end{eqnarray*}
where $\|f'\|_{\infty}= \sup \{ |f'(x)| \,|\, x \in \mathbb{R} \} $. This proves boundedness.
Now, we have, for all
$u,w \in X^{\eta} $ and
$\Phi \in X^{\frac{1}{2}}$
\begin{eqnarray*}
&& \left|\frac{1}{t}\left\langle F(u + tw,\epsilon) - F(u,\epsilon) - t \frac{\partial F}{\partial u}(u,\epsilon) w,\Phi\right\rangle_{-\frac{1}{2},\frac{1}{2}}\right| \\
&\leq& \frac{1}{|t|}\displaystyle \int_\Omega \big|\,[\,f(u + tw) - f(u)
- tf^{\,'}(u)w\,]\,\Phi\,\big|\,dx \\
&\leq & \frac{1}{|t|} \left(\displaystyle\int_\Omega \big|f(u + tw) - f(u) - tf^{\,'}(u) w\big|^{p}dx\right)^\frac{1}{p}||\Phi||_{X^{\frac{1}{2}}} \\
& \leq &
\left(\,\displaystyle \underbrace{\int_\Omega \big|\,
\left(f'(u + \bar{t}w ) - f^{\,'}(u)\right) w\,\big|^{\,p} \,dx\,
}_{(I)}\right)^\frac{1}{p}||\,\Phi\,||_{X^{\frac{1}{2}}},
\end{eqnarray*}
where $ 0 \leq \bar{t} = \bar{t}(x) \leq t$.
Since $f'$ is bounded and continuous, the integrand of $(I)$
is bounded by an integrable function and goes to $0$ as $t \to 0$.
Thus, the integral $(I)$ goes to $ 0$ as $t \to 0$, from Lebesgue's Dominated
Convergence Theorem. It follows that \newline
$ \displaystyle{\lim_{t \to 0} \frac{ F(u + tw {,\epsilon}) - F(u {,\epsilon})}{t} = \frac{\partial F}{\partial u}(u,\epsilon) w \ \textrm{ in} \ X^{-\frac{1}{2}},}$
for all $u,w \in X^{\eta} $; so $F$ is Gateaux differentiable with
Gateaux differential given by (\ref{FGateaux_form}).
\mbox{${\square}$}
We now want to prove that the Gateaux differential of $F(u, \epsilon)$
is continuous in $u$. Let us denote by
$\mathcal{B}(X, Y)$
the space of
linear bounded operators from $X$ to $Y$.
We will need the following result, whose simple proof is omitted.
\begin{lema}\label{strong_uniform_operators}
Suppose $X,Y$ are Banach spaces and $ T_n : X \to Y$ is a sequence
of linear operators converging strongly to the linear operator
$T:X \to Y$. Suppose also that
$X_1 \subset X$ is a Banach space, the inclusion
$i: X_1 \hookrightarrow X$
is compact and let $ \widetilde{T}_n = T_n \circ i$
and $ \widetilde{T} = T \circ i$. Then
$\widetilde{T}_n \to \widetilde{T} $ uniformly for $x$ in a
bounded subset of $X_1$ (that is, in the norm of
$\mathcal{B}(X_1,Y )$).
\end{lema}
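A possible argument runs as follows (sketched here since the proof was omitted): by the Uniform Boundedness Principle, $\|T_n\|_{\mathcal{B}(X,Y)}\leq C$ for some $C>0$, and also $\|T\|_{\mathcal{B}(X,Y)}\leq C$. Given a bounded set $B\subset X_1$ and $\varepsilon>0$, the compactness of $i$ yields $x_1,\dots,x_m\in B$ such that, for each $x\in B$, $\|i(x)-i(x_j)\|_{X}<\frac{\varepsilon}{3C}$ for some $j$. Choosing $n_0$ so that $\|(T_n-T)\,i(x_j)\|_{Y}<\frac{\varepsilon}{3}$ for all $j$ and $n\geq n_0$, the triangle inequality gives
\[
\|\widetilde{T}_n x-\widetilde{T}x\|_{Y}
\leq \|T_n\|\,\|i(x)-i(x_j)\|_{X}
+\|(T_n-T)\,i(x_j)\|_{Y}
+\|T\|\,\|i(x_j)-i(x)\|_{X}<\varepsilon,
\]
for all $x\in B$ and $n\geq n_0$.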
\begin{lema}\label{FGateaux_cont}
Suppose that $\eta$ and $p$ are such that \eqref{hip_inclusion} holds and $f$ satisfies (\ref{boundfg}).
Then
the Gateaux differential of $F(u,\epsilon)$, with respect to $u$ is
continuous in $u$, that is, the map
$ u \mapsto \displaystyle\frac{\partial F}{\partial u}( u,\epsilon)
\in \mathcal{B}(X {^\eta}, X^{-\frac{1}{2}})$
is continuous.
\end{lema}
\noindent{\bf Proof. \quad} Let $u_n$ be a sequence converging to
$u$ in $X^{\eta}$, and choose $0<\widetilde{\eta}< \eta$, such that the hypotheses still hold.
Then, we have for any
$\Phi \in X^{\frac{1}{2}}$ and $w \in X^{\widetilde{\eta}}$:
\begin{eqnarray*}
\bigg|\left\langle \left( \frac{\partial F}{\partial u}(u_n, {\epsilon}) - \frac{\partial F}{\partial u}(u, {\epsilon})\right)w\,,\,\Phi \right\rangle_{-\frac{1}{2}\,,\, \frac{1}{2}}\bigg|
&\leq& \displaystyle\int_\Omega \bigg|\,\big(\,f^{\,'}(u) - f^{\,'}(u_n)\,\big)w\,\Phi\,\bigg|\,dx \nonumber \\
&\leq& \bigg(\displaystyle\int_\Omega \big|\big(f^{\,'}(u) - f^{\,'}(u_n)\big)w\big|^{p}dx\bigg)^\frac{1}{p}\bigg(\displaystyle\int_\Omega \big|\Phi\big|^{q}dx\bigg)^\frac{1}{q} \nonumber \\
&\leq& \,\bigg(\displaystyle\underbrace{\int_\Omega \big|\big(f^{\,'}(u) - f^{\,'}(u_n)\big)w\big|^{p}dx}_{(I)}\bigg)^\frac{1}{p}\,||\Phi||_{ {X^{\frac{1}{2}}}}\,,
\end{eqnarray*}
\par Now, the integrand in $(I)$ is bounded by the
integrable function $\,\|\,f^{\,'}\|_\infty^{\,p}\,|w|^{\,p}$ and
goes to $0$ a.e. as $u_n \to u$ in $X^\eta$.
Therefore the sequence of operators
$\displaystyle\frac{\partial F}{\partial u}( u_n,\epsilon)$ converges strongly
in the space $\mathcal{B}(X^{\widetilde{\eta}}, X^{-\frac{1}{2}})$ to the operator $ \displaystyle\frac{\partial F}{\partial u}( u,\epsilon)$.
From Lemma \ref{strong_uniform_operators}
the convergence holds in the norm of $\mathcal{B}(X^{\eta}, X^{-\frac{1}{2}})$,
since $X^{\eta}$ is compactly embedded in $X^{\widetilde{\eta}}$.
\mbox{${\square}$}
\begin{lema}
\label{GGateaux}
Suppose that $\eta$ and $p$ are such that \eqref{hip_inclusion} holds and $g$ satisfies (\ref{boundfg}).
Then
the operator
$G :X^{\eta}\times \mathbb{R} {\rightarrow} X^{-\frac{1}{2}}$ given by
(\ref{Gh}) is Gateaux differentiable with respect to
$u$, with Gateaux differential
\begin{equation}\label{GGateaux_form}
\left\langle \frac{\partial G}{\partial u}(u, {\epsilon})w\,,\,\Phi\right\rangle_{-\frac{1}{2}, \frac{1}{2}} = \displaystyle\int_{\partial\Omega} g^{\,'}(\gamma(u))\gamma(w)\,\gamma(\Phi)\,\left|\displaystyle\frac{J_{\partial\Omega}h_\epsilon}{Jh_\epsilon}\right|\,d\sigma(x)\,,
\end{equation}
for all $w \in X^\eta$ and $\Phi \in X^{\frac{1}{2}}$.
\end{lema}
\noindent{\bf Proof. \quad}
Observe first that $G(u, {\epsilon})$ is well-defined, since the conditions of Lemma
\ref{Gbem} are met.
It is clear that $\displaystyle\frac{\partial G}{\partial u}(u,\epsilon)$ is linear. We now show that it is bounded. In fact we have,
for all
$u,w \in X^{\eta} $ and
$\Phi \in X^{\frac{1}{2}}$
\begin{eqnarray*}
\left| \left\langle \frac{\partial G}{\partial u}(u, {\epsilon})w\,,\,\Phi\right\rangle_{-\frac{1}{2}, \frac{1}{2}} \right| & = & \left| \displaystyle\int_{\partial\Omega} g^{\,'}(\gamma(u))\gamma(w)\,\gamma(\Phi)\,\left|\displaystyle\frac{J_{\partial\Omega}h_\epsilon}{Jh_\epsilon}\right|\,d\sigma(x)\, \right| \\
&\leq &
\|\mu \|_{\infty} \, \|g'\|_{\infty}
\displaystyle\int_{\partial\Omega} | \gamma(w)|\, |\gamma(\Phi) |\,
\,d\sigma(x)\,
\\
&\leq & \|\mu \|_{\infty} \, \|g'\|_{\infty}
\displaystyle \|\gamma(w)\|_{L^p(\partial \Omega)} \,
\|\,\gamma(\Phi)\,\|_{L^q(\partial \Omega)}\\
&\leq &
K_1 K_2 \|\mu \|_{\infty} \, \|g'\|_{\infty}
\displaystyle \|w\|_{X^{\eta}} \,
\|\,\Phi\,\|_{X^{\frac{1}{2}}}\,,
\end{eqnarray*}
where $\|g'\|_{\infty}= \sup \left\{ |g'(x)| \,|\, x \in \mathbb{R} \right\} $,
$\|\mu \|_{\infty} = \sup \left\{ |\mu(x, \epsilon)| \, | \,
x\in \partial \Omega \right\} = \sup \left\{
\left| \displaystyle\frac{J_{\partial\Omega}h_\epsilon}{Jh_\epsilon} (x) \right| \, \bigg| \,
x\in \partial \Omega \right\}$
and $K_1$, $K_2$ are embedding constants given by Theorem \ref{trace}. This proves boundedness.
Now, we have, for all
$u,w \in X^{\eta} $ and
$\Phi \in X^{\frac{1}{2}}$
\begin{eqnarray*}
& & \left| \frac{1}{t} \left\langle G(u + tw {,\epsilon}) - G(u {,\epsilon}) - t \frac{\partial G}{\partial u}(u,\epsilon) w\,,\,\Phi\right\rangle_{-\frac{1}{2},\frac{1}{2}}\right| \\
&\leq& \frac{1}{|t|}\displaystyle \int_{\partial {\Omega}} \left|\,g(\gamma(u + tw)) -g(\gamma(u))
- tg'(\gamma(u))\gamma(w)\,\right|\, {\left|\gamma(\Phi)\right|}\,\left| \frac{J_{\partial\Omega}h_\epsilon}{Jh_\epsilon} \right|\, {d\sigma(x)} \\
& \leq& K_1 \|\mu \|_{\infty}
\frac{1}{|t|}\displaystyle \left\{ \int_{\partial {\Omega}} \left|\,g(\gamma(u + tw)) -
g(\gamma(u))
- tg'(\gamma(u))\gamma(w)\,\right|^p \, {d\sigma(x)} \right\}^{\frac{1}{p}}
\| \Phi \|_{X^{\frac{1}{2}}} \\
&\leq& K_1 \|\mu \|_{\infty} \displaystyle \left\{ \underbrace{\int_{\partial {\Omega}}
\left|\,\left[\,g'(\gamma(u + \bar{t}w))
- g'(\gamma(u))\,\right]\gamma(w)\,\right|^p \, {d\sigma(x)}}_{(I)} \right\}^{\frac{1}{p}}
\| \Phi \|_{X^{\frac{1}{2}}} \,,
\end{eqnarray*}
where $K_1$ is
the embedding constant given by Theorem \ref{trace} and Lemma \ref{inclusion}, and $ 0 \leq \bar{t} = \bar{t}(x) \leq t$.
Since $g'$ is bounded and continuous, the integrand of $(I)$
is bounded by an integrable function and goes to $0$ as $t \to 0$.
Thus, the integral $(I)$ goes to $ 0$ as $t \to 0$, from Lebesgue's Dominated
Convergence Theorem. It follows that
$ \displaystyle{\lim_{t \to 0} \frac{ G(u + tw {,\epsilon}) - G(u {,\epsilon})}{t} =
\frac{\partial G}{\partial u}(u,\epsilon) w \ \textrm{ in} \ X^{-\frac{1}{2}},}$
for all $u,w \in X^{\eta} $; so $G$ is Gateaux differentiable with
Gateaux differential given by (\ref{GGateaux_form}).
\mbox{${\square}$}
\begin{lema}\label{GGateaux_cont}
Suppose that $\eta$ and $p$ are such that \eqref{hip_inclusion} holds and $g$ satisfies (\ref{boundfg}).
Then
the Gateaux differential of $G(u,\epsilon)$, with respect to $u$ is
continuous in $u$ (that is, the map
$ u \mapsto \displaystyle\frac{\partial G}{\partial u}( u,\epsilon)
\in \mathcal{B}(X^{\eta}, X^{-\frac{1}{2}})$
is continuous) and uniformly continuous in $\epsilon$ for $u$ in bounded sets
of $X^{\eta}$ and
$0\leq \epsilon \leq \epsilon_0 <1$.
\end{lema}
\noindent{\bf Proof. \quad} Let $0\leq \epsilon \leq \epsilon_0$, $u_n$ be a sequence converging to
$u$ in $X^{\eta}$, and choose $ 0< \widetilde{\eta} < \eta$, still satisfying the hypotheses.
Then, we have for any
$\Phi \in X {^{\frac{1}{2}}}$ and $w \in X^{\widetilde{\eta}}$:
\begin{eqnarray*}
& &\bigg|\,\left\langle \,\left(\, \frac{\partial G}{\partial u}(u_n,\epsilon) - \frac{\partial G}{\partial u}(u,\epsilon)\,\right)w\,,\,\Phi \,\right\rangle_{-\frac{1}{2}\,,\, \frac{1}{2}}\,\bigg| \\
& \leq & \displaystyle\int_{\partial\Omega}
\left| \left(g'(\gamma(u)) - g'(\gamma(u_n))\right)
\gamma(w)\,\gamma(\Phi)\right|\,\left|\displaystyle\frac{J_{\partial\Omega}h_\epsilon}{Jh_\epsilon}\right|\,d\sigma(x)
\nonumber \\
& \leq & \|\mu_{\epsilon} \|_{\infty} \left\{ \displaystyle\int_{\partial\Omega}
\left| \left(g'(\gamma(u)) - g'(\gamma(u_n))\right)
\gamma(w)\,\right|^p\,d\sigma(x)\, \right\}^{\frac{1}{p}}
\left\{ \displaystyle\int_{\partial\Omega}
\left| \,\gamma(\Phi)\,\right|^q\,d\sigma(x)\, \right\}^{\frac{1}{q}}
\nonumber \\
&\leq &K_1 \|\mu_{\epsilon}\|_{\infty} \left\{ \displaystyle \underbrace{\int_{\partial\Omega}
\left| \left(g'(\gamma(u)) - g'(\gamma(u_n))\right)
\gamma(w)\,\right|^p\,d\sigma(x)}_{(I)}\, \right\}^{\frac{1}{p}}
\|\Phi\|_{ {X^{\frac{1}{2}} }}\,,
\nonumber
\end{eqnarray*}
where $K_1$ is the constant due to continuity of the trace map from
$X^{\frac{1}{2}}$ into
$L^q(\partial \Omega)$, as in Lemma \ref{Gbem}.
\par Now, the integrand in $(I)$ is bounded by the
integrable function $\|\,g'\,\|_\infty^{\,p} \left|
\gamma(w)\right|^{\,p}$ and
goes to $0$ a.e. as $u_n \to u$ {in} $X^\eta$.
Therefore the sequence of operators
$ \displaystyle\frac{\partial G}{\partial u}( u_n,\epsilon)$ converges strongly
in the space $\mathcal{B}(X^{\widetilde{\eta}}, X^{-\frac{1}{2}})$ to the operator $ \displaystyle\frac{\partial G}{\partial u}( u,\epsilon)$.
From Lemma \ref{strong_uniform_operators}
the convergence holds in the norm of $\mathcal{B}(X^{\eta}, X^{-\frac{1}{2}})$,
since $X^{\eta}$ is compactly embedded in $X^{\widetilde{\eta}}$ (see \cite{He2}).
Finally, if $0\leq \epsilon_1 \leq \epsilon_2 <\epsilon_0$, we have
for any
$\Phi \in {X^{\frac{1}{2}}}$ and $w \in X^{\eta}$:
\begin{eqnarray*}
& &\bigg|\,\left\langle \,\left(\, \frac{\partial G}{\partial u}(u,\epsilon_1) - \frac{\partial G}{\partial u}(u,\epsilon_2)\,\right)w\,,\,\Phi \,\right\rangle_{-\frac{1}{2}\,,\, \frac{1}{2}}\,\bigg| \\
& \leq & \displaystyle\int_{\partial\Omega}
\left|\,g'(\gamma(u))
\gamma(w)\,\gamma(\Phi)\,\right|\,\left|\displaystyle \mu_{\epsilon_1} -
\mu_{\epsilon_2}\right|\,d\sigma(x)\,,
\nonumber \\
& \leq& \|\mu_{\epsilon_1} -
\mu_{\epsilon_2} \|_{\infty} \left\{ \displaystyle\int_{\partial\Omega}
\left| g'(\gamma(u))
\gamma(w)\,\,\right|^p\,d\sigma(x)\, \right\}^{\frac{1}{p}}
\left\{ \displaystyle\int_{\partial\Omega}
\left| \,\gamma(\Phi)\,\right|^q \,d\sigma(x)\, \right\}^{\frac{1}{q}}
\nonumber \\
& \leq & K_1 K_2\|\,g'\,\|_{\infty}\,\|\,w\, \|_{ {X^\eta}}
\|\Phi\|_{ {X^{\frac{1}{2}}}}
\|\mu_{\epsilon_1}-\mu_{\epsilon_2}\|_{\infty},
\nonumber
\end{eqnarray*}
where $K_2$ is the constant due to continuity of the trace map from
$X^{\eta}$ into
$L^p(\partial \Omega)$, as before.
This proves uniform continuity in $\epsilon$.
\mbox{${\square}$}
\begin{lema}\label{Hfrechet}
Suppose that $\eta$ and $p$ are such that \eqref{hip_inclusion} holds and $f$ and $g$ satisfy (\ref{boundfg}).
Then,
the map
$(H_{\epsilon})_{-\frac{1}{2}}= (F_{\epsilon})_{-\frac{1}{2}} + (G_{\epsilon})_{-\frac{1}{2}} : X^{\eta}\times \mathbb{R} \to X^{-\frac{1}{2}}$ given by
(\ref{defH}) is continuously Fr\'echet differentiable with respect to
$u$ and the derivative $\displaystyle\frac{\partial H}{\partial u}$ is uniformly continuous with respect to $\epsilon$, for $u$ in bounded sets
of $X^{\eta}$ and $0\leq \epsilon \leq\epsilon_0 < 1$.
\end{lema}
\noindent{\bf Proof. \quad}
The proof follows from Lemmas \ref{FGateaux_cont}, \ref{GGateaux_cont}
and Proposition 2.8 in \cite{Rall}.
\mbox{${\square}$}
We now prove lower semicontinuity for the equilibria.
\begin{teo}\label{equicon}
Suppose that $f$ and $g$ satisfy the conditions of Lemma \ref{posinv}
and
also (\ref{boundfg}), that the equilibria
of (\ref{abstract_scale}) with $\epsilon = 0$ are all hyperbolic,
and that $\frac{1}{4}<\eta< \frac{1}{2}$. Then the family of sets of equilibria
$\{ E_{\epsilon} \, | \, 0 \leq \epsilon <\epsilon_0 \}$ of
(\ref{abstract_scale}) is
lower semicontinuous in $X^{\eta}$ at $\epsilon = 0$.
\end{teo}
\noindent{\bf Proof. \quad} A point $e \in X^{\eta} $ is an equilibrium of (\ref{abstract_scale})
if and only if it is a root of the map
$$
\begin{array}{rlc}
Z: W^{1,p}(\Omega) \times {\mathbb{R}}& \longrightarrow &X^{-\frac{1}{2}} \, \\
(u\,,\,\epsilon)& \longmapsto & (A_{ {\epsilon}})_{-\frac{1}{2}}(u) + (H_{\epsilon})_{-\frac{1}{2}}(u)\,.
\end{array}
$$
By Lemma \ref{Hfrechet} the map $(H_\epsilon)_{-\frac{1}{2}}: X^{\eta} \to X^{-\frac{1}{2}}$ is continuously Fr\'echet differentiable with
respect to $u$ and by Lemmas \ref{Glip} and \ref{Flip} it is also continuous in $\epsilon$
if $\eta= \frac{1}{2} - \delta$, with $\delta>0$ sufficiently small.
Therefore, the same holds if $\eta = \frac{1}{2}$.
The map $A_\epsilon= -h_\epsilon^{*} \Delta_{\Omega_\epsilon} h_\epsilon^{*} \,+\, aI$
is a bounded linear operator from $W^{1,p}(\Omega)$ to $X^{-\frac{1}{2}}$.
It is also
continuous in $\epsilon$ since it is analytic as a function of
$h_\epsilon \in {\rm Diff}^{1}(\Omega)$ and
$ h_\epsilon$ is continuous in $\epsilon$.
Thus, the map $Z$ is continuously differentiable in $u$ and continuous in
$\epsilon$.
The derivative $\displaystyle\frac{\partial Z}{\partial u}(e, 0)$
is an isomorphism by hypothesis.
Therefore, the Implicit Function Theorem applies, implying that the
zeroes of $Z(\cdot, \epsilon)$ are given by a continuous function
$ e(\epsilon)$. This proves the claim.
\mbox{${\square}$}
To prove the lower semi continuity of the attractors, we also need the continuity of local unstable manifolds at equilibria.
\begin{teo}\label{manifcont}
Suppose that $\eta$ and $p$ are such that \eqref{hip_inclusion} holds and $f$ and $g$ satisfy (\ref{boundfg}),
$u_0$ is an equilibrium of (\ref{abstract_scale}) with $\epsilon = 0$,
and for each $\epsilon>0$ sufficiently small, let $u {_\epsilon}$
be the unique equilibrium of (\ref{abstract_scale}), whose existence
is asserted by Corollary \ref{upperequil} and Theorem \ref{equicon}.
Then, for $\epsilon$ and $\delta$ sufficiently small, there exists a
local unstable manifold
$
W_{\rm loc}^u(u_{\epsilon})
$ of $u_{\epsilon}$, and if we denote
$ W_{\delta}^u(u_{\epsilon}) =\{ w \in W_{\rm loc}^u(u_{\epsilon}) \ | \
\|w-u_{\epsilon} \|_{X^{\eta}} < \delta \}$, then
\[
{\rm dist}\Big(W_{\delta}^u(u_{\epsilon}),W_{\delta}^u( u_0) \Big) \quad \textrm{and} \quad
{\rm dist}\Big(W_{\delta}^u(u_{0}),W_{\delta}^u( u_{\epsilon}) \Big)
\]
approach zero as $\epsilon \to 0$, where
${\rm dist}(O,Q)=\displaystyle\sup_{o \in O} \inf_{q \in Q}
\|q-o\|_{ {X^{\eta}}}$ for $O$, $Q\subset X^{\eta}$.
\end{teo}
\noindent{\bf Proof. \quad}
Let $H_{\epsilon}(u)=H(u,\epsilon)$ be the map defined by (\ref{defH}) and $u_{\epsilon}$ a hyperbolic equilibrium of (\ref{abstract_scale}). Since $H(u,\epsilon)$ is differentiable by Lemma \ref{Hfrechet},
it follows that $H(u_{\epsilon}+w , \epsilon)=
H(u_{\epsilon},\epsilon)
+ H_u(u_{\epsilon} , \epsilon)w + r(w,\epsilon)= A_{\epsilon}u_{\epsilon} + H_u(u_{\epsilon} , \epsilon)w + r(w , \epsilon)$,
with $r(w,\epsilon)=o(\|w\|_{X^\eta})$, as $\|w\|_{X^\eta} \to 0$.
The claimed result was proved in \cite{PP}, assuming the following properties of $H_{\epsilon}$:
\begin{itemize}
\item[a)] $||\,r(w,0)-r(w,\epsilon)\,||_{X^{-\frac{1}{2}}} \leq C({\epsilon})$,
with $C({\epsilon}) \to 0 \textrm{ when } \epsilon \to 0$, uniformly for $w$ in a neighborhood of $0$ in $X^{\eta}$.
\item[b)] $||\,r(w_1,\epsilon)-r(w_2,\epsilon)\,||_{X^{-\frac{1}{2}}} \leq k(\rho) ||\,w_1-w_2\,||_{X^{\eta}}$, for $||\,w_1\,||_{X^{\eta}}\leq \rho$, $||\,w_2\,||_{X^{\eta}}\leq \rho$, with $k(\rho) \to 0$ when $\rho \to 0^+$ and $k(\cdot)$ nondecreasing.
\end{itemize}
Property a) follows easily from the fact that both $H(u,\epsilon)$ and
$H_u(u,\epsilon)$ are
uniformly continuous in $\epsilon$ for $u$ in bounded sets
of $X^{\eta}$, by Lemmas \ref{Glip}, {\ref{Flip}} and \ref{Hfrechet}.
It remains to prove property b).
If $w_1,w_2 \in X^{\eta}$ and $\epsilon \in [0, \epsilon_0]$, with
$0 < \epsilon_0 <1 $ small enough, we have
\begin{eqnarray}
||\,r(w_1\,,\,\epsilon)-r(w_2\,,\,\epsilon)\,||_{X^{-\frac{1}{2}}} &=& ||\,H(u_{\epsilon} + w_1\,,\,\epsilon) - H(u_{\epsilon}\,,\,\epsilon) - H_u(u_{\epsilon}\,,\,\epsilon)w_1 \nonumber\\ &&-\,
H(u_{\epsilon} + w_2\,,\,\epsilon) + H(u_{\epsilon}\,,\,\epsilon) + H_u(u_{\epsilon}\,,\,\epsilon)w_2\,||_{X^{-\frac{1}{2}}}
\nonumber\\
& \leq& ||\,F(u_{\epsilon} + w_1\,,\,\epsilon) - F(u_{\epsilon}\,,\,\epsilon) - F_u(u_{\epsilon}\,,\,\epsilon)w_1 \label{7}\\
&&-\, F(u_{\epsilon} + w_2\,,\,\epsilon) + F(u_{\epsilon}\,,\,\epsilon) + F_u(u_{\epsilon}\,,\,\epsilon)w_2\,||_{X^{-\frac{1}{2}}}\nonumber \\
&&+\, ||\,G(u_{\epsilon} + w_1\,,\,\epsilon) -G(u_{\epsilon}\,,\,\epsilon) - G_u(u_{\epsilon}\,,\,\epsilon)w_1 \label{8}\\
&&-\, G(u_{\epsilon} + w_2\,,\,\epsilon) + G(u_{\epsilon}\,,\,\epsilon) + G_u(u_{\epsilon}\,,\,\epsilon)w_2\,||_{X^{-\frac{1}{2}}} \,.\nonumber
\end{eqnarray}
We first estimate (\ref{7}). Since $f'$ is bounded by (\ref{boundfg}), we have
\begin{align*}
& \bigg|\,\left\langle \,F(u_{\epsilon} + w_1\,,\,\epsilon) - F(u_{\epsilon}\,,\,\epsilon) - F_u(u_{\epsilon}\,,\,\epsilon)w_1 \right.
\left. -F(u_{\epsilon} + w_2\,,\,\epsilon) + F(u_{\epsilon}\,,\,\epsilon) + F_u(u_{\epsilon}\,,\,\epsilon)w_2 \,,\,\Phi\,\right\rangle_{ {-\frac{1}{2}, \frac{1}{2}}}
\,\bigg| \\
&\leq
\displaystyle\int_{\Omega} \left|\,[\,f(u_{\epsilon}+w_1)-f(u_{\epsilon})- f'(u_{\epsilon})w_1
-f(u_{\epsilon}+w_2)+ f(u_{\epsilon})+ f'(u_{\epsilon})w_2\,]\,\Phi\, \right|\,dx\,
\nonumber\\
& =
\displaystyle\int_{\Omega} \left|\,[\,f'(u_{\epsilon}+ \xi_x)-f'(u_{\epsilon})\,](w_1(x)-w_2(x))\,\Phi \,\right|\,dx \nonumber\\
& \leq
K_1 \displaystyle \left\{\int_{\Omega} \left|\,[\,f'(u_{\epsilon}+ \xi_x)-f'(u_{\epsilon})\,]^p
(w_1(x)-w_2(x))^p\,\right|\,dx\right\}^{\frac{1}{p}} \|\Phi \|_{X^{\frac{1}{2}} }\,,
\nonumber\\
& \leq K_1 K_2 \displaystyle \left\{\int_{\Omega} \left|\,[\,f'(u_{\epsilon}+ \xi_x)-f'(u_{\epsilon})\,]^p
\,\right|\,dx\right\}^{\frac{1}{p}} \| w_1 -w_2 \|_{X^{\eta} } \cdot \|\Phi \|_{X^{\frac{1}{2}} }\,,
\end{align*}
where $K_1$ is the embedding constant
of $X^{\frac{1}{2}}$ {into} $L^q(\Omega)$, $K_2$ is the embedding constant of
$ X^{\eta} $ in $L^{\infty} (\Omega) $ and
$ w_1(x) \leq \xi_x \leq w_2(x)$ or $ w_2(x) \leq \xi_x \leq w_1(x)$.
Therefore, we have
$$
\begin{array}{lll}
&&||\,F(u_{\epsilon} + w_1,\epsilon) - F(u_{\epsilon},\epsilon) - F_u(u_{\epsilon},\epsilon)w_1 -
F(u_{\epsilon} + w_2,\epsilon) + F(u_{\epsilon},\epsilon) + F_u(u_{\epsilon},\epsilon)w_2||_{X^{-\frac{1}{2}}} \\
&\leq& K_1K_2\left\{ \displaystyle\int_{\Omega} [\,f'(u_{\epsilon}+ \xi_x)-f'(u_{\epsilon})\,]^{\,p}dx \right\}^\frac{1}{p}
||\,w_1-w_2\,||_{X^{\eta}}\,.
\end{array}
$$
Now the integrand above is bounded by $2^{\,p}||f'||_\infty^{\,p}$
and goes a.e. to $0$ as $\rho \to 0$, since $||\,w_1\,||_{X^{\eta}}\leq \rho$, $||\,w_2\,||_{X^{\eta}}\leq \rho$ and $\xi_x$ lies between $w_1(x)$ and $w_2(x)$.
Thus, the integral goes to $0$ by Lebesgue's dominated convergence theorem.
We now estimate (\ref{8}):
\begin{align*}
& \bigg|\,\left\langle G(u_{\epsilon} + w_1\,,\,\epsilon) - G(u_{\epsilon}\,,\,\epsilon) - G_u(u_{\epsilon}\,,\,\epsilon)w_1
-G(u_{\epsilon} + w_2\,,\,\epsilon) + G(u_{\epsilon}\,,\,\epsilon) + G_u(u_{\epsilon}\,,\,\epsilon)w_2 \,,\,\Phi\,\right\rangle_{ {-\frac{1}{2}, \frac{1}{2}}}\,\bigg| \\
& \leq
\displaystyle\int_{\partial\Omega} \bigg|\,[\,g(\gamma(u_{\epsilon}+w_1)) - g(\gamma(u_{\epsilon}))- g'(\gamma(u_{\epsilon}))w_1 \\
& \quad \quad \quad \quad \quad \quad \quad -g(\gamma(u_{\epsilon}+w_2)) + g(\gamma(u_{\epsilon}))+ g'(\gamma(u_{\epsilon}))w_2\,]
\,\gamma(\Phi)\gamma\left(\,\left|\,\displaystyle\frac{J_{\partial\Omega} {h_\epsilon}}{J {h_\epsilon}}\,\right|\,\right)\,\bigg|\,d\sigma(x)\\
& =
\displaystyle\int_{\partial\Omega} \left|\,[\,g'(\gamma(u_{\epsilon}+ \xi_x)) - g'(\gamma(u_{\epsilon}))\,]\gamma(\,w_1(x)-w_2(x)\,)\,\gamma(\Phi)\gamma\left(\,\left|\,\displaystyle\frac{J_{\partial\Omega} {h_\epsilon}}{J {h_\epsilon}}\,\right|\,\right)\, \right|\,d\sigma(x) \\
& \leq K_1 \,\left\{\displaystyle\int_{\partial\Omega} [\,g'(\gamma(u_{\epsilon}+ \xi_x)) - g'(\gamma(u_{\epsilon}))\,]^{\,p}\,[\gamma(w_1(x)-w_2(x))]^{p}\left[\gamma\left(\left|\displaystyle\frac{J_{\partial\Omega} {h_\epsilon}}{J {h_\epsilon}}\right|\right)
\right]^{p}d\sigma(x)\right\}^\frac{1}{p}\|\Phi\|_{X^\frac{1}{2}}\,, \\
& \leq K_1 K_2 \, \mu_{\epsilon} \,\left\{\displaystyle\int_{\partial\Omega} [\,g'(\gamma(u_{\epsilon}+ \xi_x)) - g'(\gamma(u_{\epsilon}))\,]^{\,p}d\sigma(x)\right\}^\frac{1}{p}
\|w_1 - w_2\|_{ X^{\eta} }\,\|\Phi\|_{X^\frac{1}{2}}\,,
\end{align*}
where
$\mu_{\epsilon} = \left|\displaystyle\frac{J_{\partial\Omega} {h_\epsilon}}{J {h_\epsilon}}\right|$ is bounded, uniformly in $\epsilon$
and $ w_1(x) \leq \xi_x \leq w_2(x)$ or $ w_2(x) \leq \xi_x \leq w_1(x)$.
Now the integrand above is bounded by $2^{\,{p}}||\,g'\,||_\infty^{\,{p}}$ and goes to $0$ a.e. as $\rho \to 0$, since $||\,w_1\,||_{X^{\eta}}\leq \rho$, $||\,w_2\,||_{X^{\eta}}\leq \rho$ and $\xi_x$ lies between $w_1(x)$ and $w_2(x)$.
Thus, the integral goes to $0$ by Lebesgue's dominated convergence Theorem.
\mbox{${\square}$}
We are now in a position to prove the main result of this section.
\begin{teo}
Assume the hypotheses of Theorem \ref{equicon} hold. Then the family of attractors
$\{\mathcal{A}_{\epsilon}\,|\, 0 \leq \epsilon \leq \epsilon_0\}$ of
the problem (\ref{abstract_scale}), whose existence is guaranteed by
Theorem \ref{exatr}, is lower semicontinuous in $X^\eta$.
\end{teo}
\noindent{\bf Proof. \quad}
The system generated by
(\ref{abstract_scale}) is gradient for any $\epsilon$ and its equilibria are all hyperbolic for $\epsilon $ in a neighborhood of $0$. Also,
the equilibria are continuous in $\epsilon$ by Theorem \ref{equicon}, the linearisation is continuous
in $\epsilon$ as shown during the proof of Theorem \ref{equicon} and the local unstable manifolds of the equilibria are continuous
in $\epsilon$, by Theorem \ref{manifcont}. The result then follows from Theorem 3.10 of \cite{PP}.
\mbox{${\square}$}
\end{document}
\begin{document}
\title {Superconvergent interpolatory HDG methods for reaction diffusion equations I: An HDG$_{k}$ method}
\author{Gang Chen
\thanks{School of Mathematics Sciences, University of Electronic Science and Technology of China, Chengdu, China, [email protected]}
\and
Bernardo Cockburn\thanks {School of Mathematics, University of Minnesota, Minneapolis, MN, [email protected]}
\and
John~R.~Singler\thanks {Department of Mathematics and Statistics, Missouri University of Science and Technology, Rolla, MO, [email protected]}
\and
Yangwen Zhang\thanks {Department of Mathematics and Statistics, Missouri University of Science and Technology, Rolla, MO, [email protected]}
}
\date{}
\maketitle
\begin{abstract}
In our earlier work \cite{CockburnSinglerZhang1}, we approximated solutions of a general class of scalar parabolic semilinear PDEs by an interpolatory hybridizable discontinuous Galerkin (Interpolatory HDG) method. This method reduces the computational cost compared to standard HDG since the HDG matrices are assembled once before the time integration. Interpolatory HDG also achieves optimal convergence rates; however, we did not observe superconvergence after an element-by-element postprocessing. In this work, we revisit the Interpolatory HDG method for reaction diffusion problems, and use the postprocessed approximate solution to evaluate the nonlinear term. We prove this simple change restores the superconvergence and keeps the computational advantages of the Interpolatory HDG method. We present numerical results to illustrate the convergence theory and the performance of the method.
\end{abstract}
\textbf{Keywords} Interpolatory hybridizable discontinuous Galerkin method, superconvergence
\section{Introduction}
In our earlier work \cite{CockburnSinglerZhang1}, we introduced an interpolatory hybridizable discontinuous Galerkin (Interpolatory HDG) method to approximate the solution of semilinear parabolic PDEs. In contrast to standard HDG, the Interpolatory HDG method uses an elementwise interpolation procedure to approximate the nonlinear term; therefore, all quadrature for the nonlinear term can be performed once before the time integration, which yields a significant computational cost reduction. The Interpolatory HDG method still converged at optimal rates, but superconvergence using element-by-element postprocessing was lost.
Superconvergence is an excellent feature of HDG methods; therefore, in this work we modify the Interpolatory HDG method from \cite{CockburnSinglerZhang1} and restore superconvergence for reaction diffusion PDEs.
Specifically, we consider the following class of scalar reaction diffusion PDEs on a Lipschitz polyhedral domain $\Omega\subset \mathbb R^d $, $ d = 2, 3$, with boundary $\partial\Omega$:
\begin{equation}\label{semilinear_pde1}
\begin{split}
\partial_tu-\Delta u+ F(u)&= f \quad \mbox{in} \; \Omega\times(0,T],\\
u&=0 \quad \mbox{on} \; \partial\Omega\times(0,T],\\
u(\cdot,0)&=u_0~~\mbox{in}\ \Omega.
\end{split}
\end{equation}
In \Cref{sec:HDG}, we provide background on HDG methods and describe the new Interpolatory HDG approach in detail. We use the HDG$_k$ method to approximate the linear terms in the equation; i.e., $k$th order discontinuous polynomials are used to approximate the flux $\bm q = -\nabla u$, the scalar variable $u$, and its trace, and the stabilization function is chosen as $O(1)$ piecewise constant. For the nonlinear term, we again use an elementwise Lagrange interpolation operator, as in \cite{CockburnSinglerZhang1}, but now we also approximate $ u $ using a postprocessing approach. This modified approximate nonlinearity restores the superconvergence and, as in \cite{CockburnSinglerZhang1}, yields simple explicit expressions for the nonlinear term and the Jacobian matrix, which leads to an efficient and unified implementation.
We analyze the semidiscrete Interpolatory HDG$_k$ method in \Cref{Error_analysis}. We first assume the nonlinearity satisfies a global Lipschitz condition and prove the superconvergence. Next, we establish the superconvergence under a local Lipschitz condition, assuming the mesh is quasi-uniform.
In \Cref{sec:numerics}, we illustrate the convergence theory with numerical experiments and also demonstrate the performance of the Interpolatory HDG$_k$ method on a reaction diffusion PDE system.
We note that interpolatory finite element methods for nonlinear PDEs are well-known to have computational advantages and have a long history. The approach has been given many different names, including finite element methods with interpolated coefficients, product approximation, and the group finite element method. For more information, see \cite{MR0502033,MR641309,MR798845,MR702221,MR967844,MR731213,MR2752869,MR2273051,MR2112661, MR1030644,MR1172090,MR973559,MR3178584,MR2294957,MR1068202,MR2391691,MR3403707,MR2587427} and the references therein.
\section{Interpolatory HDG$_k$ formulation and implementation}
\label{sec:HDG}
Hybridizable discontinuous Galerkin (HDG) methods were proposed by Cockburn et al.\ in \cite{MR2485455}. HDG methods work with the mixed formulation of the PDE, and on each element the approximate solution and flux are expressed in terms of the approximate solution trace on the element boundary. The approximate trace is uniquely determined by requiring the normal component of the numerical trace of the flux to be continuous across element boundaries. This allows the approximate solution and approximate flux variables to be eliminated locally on each element; the result is a global system of equations for the approximate solution trace only. Therefore, the number of globally coupled degrees of freedom for HDG methods is significantly lower than for standard DG methods. HDG methods have been successfully applied to linear PDEs \cite{MR2485455,MR2772094,MR2629996,MR2513831} and nonlinear PDEs \cite{NguyenPeraireCockburnHDGAIAAINS10,PeraireNguyenCockburnHDGAIAACNS10,NguyenPeraireCM12,MoroNguyenPeraireSCL12,MR3626531,NguyenPeraireCockburnEDG15,KabariaLewCockburn15,MR2558780,MR3463051}.
To describe the Interpolatory HDG$_k$ method, we introduce notation below. We mostly follow the notation used in \cite{MR2485455}, where HDG methods were considered for linear, steady-state diffusion.
Let $\mathcal{T}_h$ be a collection of disjoint simplexes $K$ that partition $\Omega$. Let $\partial \mathcal{T}_h$ denote the set $\{\partial K: K\in \mathcal{T}_h\}$. For an element $K$ in the collection $\mathcal{T}_h$, let $e = \partial K \cap \partial\Omega$ denote the boundary face of $ K $ if the $(d-1)$-dimensional Lebesgue measure of $e$ is nonzero. For two elements $K^+$ and $K^-$ of the collection $\mathcal{T}_h$, let $e = \partial K^+ \cap \partial K^-$ denote the interior face between $K^+$ and $K^-$ if the $(d-1)$-dimensional Lebesgue measure of $e$ is nonzero. Let $\varepsilon_h^o$ and $\varepsilon_h^{\partial}$ denote the sets of interior and boundary faces, respectively, and let $\varepsilon_h$ denote the union of $\varepsilon_h^o$ and $\varepsilon_h^{\partial}$. We use the mesh-dependent inner products
\begin{align*}
(w,v)_{\mathcal{T}_h} := \sum_{K\in\mathcal{T}_h} (w,v)_K, \quad\quad\quad\quad\left\langle \zeta,\rho\right\rangle_{\partial\mathcal{T}_h} := \sum_{K\in\mathcal{T}_h} \left\langle \zeta,\rho\right\rangle_{\partial K},
\end{align*}
where $(\cdot,\cdot)_D$ denotes the $L^2(D)$ inner product for a set $D\subset\mathbb{R}^d$ and $\langle \cdot, \cdot\rangle_{\Gamma} $ denotes the $L^2(\Gamma)$ inner product for a set $\Gamma \subset \mathbb{R}^{d-1}$.
Let $\mathcal{P}^k(D)$ denote the set of polynomials of degree at most $k$ on a domain $D$. We consider the discontinuous finite element spaces
\begin{align}
\bm{V}_h &:= \{\bm{v}\in [L^2(\Omega)]^d: \bm{v}|_{K}\in [\mathcal{P}^k(K)]^d, \forall K\in \mathcal{T}_h\},\\
{W}_h &:= \{{w}\in L^2(\Omega): {w}|_{K}\in \mathcal{P}^{k }(K), \forall K\in \mathcal{T}_h\},\\
{Z}_h &:= \{{z}\in L^2(\Omega): {z}|_{K}\in \mathcal{P}^{k+1}(K), \forall K\in \mathcal{T}_h\},\\
{M}_h &:= \{{\mu}\in L^2(\varepsilon_h): {\mu}|_{e}\in \mathcal{P}^k(e), \forall e\in \varepsilon_h,\mu|_{\varepsilon_h^\partial} = 0\}.
\end{align}
All spatial derivatives of functions in these spaces should be understood piecewise on each element $K\in \mathcal T_h$.
We consider the HDG method that approximates the scalar variable $ u $, flux $ \bm q = -\nabla u $, and boundary trace $ \widehat{u} $ using the spaces $ W_h $, $ \bm V_h $, and $ M_h $, respectively; i.e., polynomials of degree $ k $ are used for all variables. We call this specific method HDG$_k$ to distinguish it from the wide variety of other available HDG methods, see, e.g., \cite{CockburnFuSayasM17, CockburnFuM2D17, CockburnFuM3D17, MR3507267}. The space $ Z_h $ is used for postprocessing.
For the Interpolatory HDG$_k$ method, we use an elementwise interpolatory procedure along with postprocessing to approximate the nonlinear term. Let $\mathcal I_h$ be the elementwise interpolation operator with respect to the finite element nodes for the postprocessing space $ Z_h $. Therefore, for any function $ g $ that is continuous on each element we have $ \mathcal{I}_h g \in Z_h $.
The Interpolatory HDG$_k$ formulation reads: find $(\bm q_h,u_h,\widehat{u}_h)\in \bm V_h\times W_h\times M_h$ such that, for all $(\bm r_h,v_h,\widehat{v}_h)\in \bm V_h\times W_h\times M_h$, we have
\begin{subequations}\label{HDG-O}
\begin{align}
(\bm{q}_h,\bm{r}_h)_{\mathcal{T}_h}-(u_h,\nabla\cdot \bm{r}_h)_{\mathcal{T}_h}+\left\langle\widehat{u}_h,\bm r_h\cdot\bm n \right\rangle_{\partial{\mathcal{T}_h}} &= 0, \label{HDG-O_a}\\
(\partial_t u_h, v_h)_{\mathcal T_h}-(\bm{q}_h,\nabla v_h)_{\mathcal{T}_h}+\left\langle\widehat{\bm{q}}_h\cdot \bm{n},v_h\right\rangle_{\partial{\mathcal{T}_h}} + ( \mathcal I_h F(u_h^\star),v_h)_{\mathcal{T}_h}&= (f,v_h)_{\mathcal{T}_h},\label{HDG-O_b}\\
\left\langle\widehat{\bm{q}}_h\cdot \bm{n}, \widehat{v}_h\right\rangle_{\partial{\mathcal{T}_h}\backslash\varepsilon^{\partial}_h} &=0,\label{HDG-O_c}\\
u_h(0) &= \Pi u_0,\label{HDG-O_d}
\end{align}
\end{subequations}
where $\Pi$ is a projection mapping into $ W_h $ and the numerical trace for the flux is defined by
\begin{align}\label{num_tra_HDG-O}
\widehat{\bm q}_h\cdot\bm n &= \bm q_h\cdot\bm n+\tau ( u_h -\widehat u_h).
\end{align}
Here, the stabilization function $ \tau $ is nonnegative, constant on each element, and $ O(1) $. Furthermore, the postprocessed scalar variable $u_h^\star=\mathfrak{q}_h^{k+1}(\bm q_h, u_h) \in Z_h $ is determined on each element $ K $ by
\begin{subequations}\label{post_process_1}
\begin{align}
(\nabla\mathfrak{q}_h^{k+1}(\bm q_h, u_h),\nabla z_h)_K&=-(\bm q_h,\nabla z_h)_K,\label{post_process_1_a}\\
(\mathfrak{q}_h^{k+1}(\bm q_h, u_h),w_h)_K&=(u_h,w_h)_K,\label{post_process_1_b}
\end{align}
\end{subequations}
for all $(z_h, w_h)\in [\mathcal P^{k+1}(K)]^{\perp}\times\mathcal{P}^{0}(K) $, where
\begin{align}
[\mathcal P^{k+1}(K)]^{\perp} = \{z\in \mathcal P^{k+1}(K) : (z,w_0)_K = 0 \ \textup{for all } w_0 \in \mathcal{P}^{0}(K) \}.
\end{align}
\begin{remark}
In our original Interpolatory HDG work \cite{CockburnSinglerZhang1}, we used $ \mathcal{I}_h^k F(u_h) $ to approximate the nonlinear term, where $ \mathcal I_h^k $ is the elementwise interpolation operator mapping into $ W_h $. We proved optimal convergence rates for all variables, but we did not observe superconvergence after an element-by-element postprocessing. In this work, we approximate the nonlinearity using $ \mathcal I_h $ and postprocessing, i.e., $\mathcal I_h F(u_h^\star)$. Note that this approximate nonlinearity is in $ Z_h $ instead of $ W_h $ as in our first work. This simple change yields the superconvergence and keeps all the advantages of the original Interpolatory HDG$_k$ method proposed in \cite{CockburnSinglerZhang1}. We provide details on the computational advantages of this approach in \Cref{sec:HDG2}.
\end{remark}
\subsection{ Implementation}
\label{sec:HDG2}
In our original work \cite{CockburnSinglerZhang1} on Interpolatory HDG, we provided details of the implementation for the method. Since we changed the discretization of the nonlinear term in this work, the implementation is different; therefore, we provide details for the implementation of this new formulation and show how all matrices need only be assembled once before the time integration. As in our earlier work \cite{CockburnSinglerZhang1}, we describe the implementation using a simple time discretization approach: backward Euler with a Newton iteration to solve the nonlinear system at each time step. Using Interpolatory HDG with other time discretization approaches is also possible.
Let $N$ be a positive integer and define the time step $\Delta t = T/N$. We denote the approximation of $(\bm q_h(t),u_h(t),\widehat u_h(t))$ by $(\bm q^n_h,u^n_h,\widehat u^n_h)$ at the discrete time $t_n = n\Delta t $, for $n = 0,1,2,\ldots,N$. We replace the time derivative $\partial_tu_h$ in \eqref{HDG-O} by the backward Euler difference quotient
\begin{align}
\partial^+_tu^n_h = \frac{u^n_h-u^{n-1}_h}{\Delta t}.\label{backward_Euler}
\end{align}
This gives the following fully discrete method: find $(\bm q^n_h,u^n_h,\widehat u^n_h)\in \bm V_h\times W_h\times M_h$ satisfying
\begin{subequations}\label{full_discretion_standard}
\begin{align}
(\bm{q}^n_h,\bm{r})_{\mathcal{T}_h}-(u^n_h,\nabla\cdot \bm{r})_{\mathcal{T}_h}+\left\langle\widehat{u}^n_h,\bm{r\cdot n} \right\rangle_{\partial{\mathcal{T}_h}} &= 0,\label{full_discretion_standard_a} \\
(\partial^+_tu^n_h,w)_{\mathcal T_h}-(\bm{q}^n_h,\nabla w)_{\mathcal{T}_h}+\left\langle\widehat{\bm{q}}^n_h\cdot \bm{n},w\right\rangle_{\partial{\mathcal{T}_h}} + ( \mathcal I_hF( u_h^{n \star}),w)_{\mathcal{T}_h}&= (f^n,w)_{\mathcal{T}_h},\label{full_discretion_standard_b}\\
\left\langle\widehat{\bm{q}}^n_h\cdot \bm{n}, \mu\right\rangle_{\partial{\mathcal{T}_h}\backslash\varepsilon^{\partial}_h} &=0,\label{full_discretion_standard_c}\\
u^0_h &=\Pi u_0,\label{full_discretion_standard_d}
\end{align}
\end{subequations}
for all $(\bm r,w,\mu)\in \bm V_h\times W_h\times M_h$ and $n=1,2,\ldots,N$. In \eqref{full_discretion_standard}, $f^n=f(t_n,\cdot)$, the numerical trace for the flux on $\partial\mathcal T_h$ is defined by
\begin{align}\label{num_tr_s}
\widehat{\bm q}_h^n\cdot\bm n=\bm q_h^n\cdot\bm n+\tau(u_h^n-\widehat u_h^n),
\end{align}
and the postprocessed approximate solution $u_h^{n\star}$ is determined on each element $ K $ by solving
\begin{subequations}\label{post_process_1i}
\begin{align}
(\nabla u_h^{n\star},\nabla z_h)_K&=-(\bm q_h^n,\nabla z_h)_K,\label{post_process_1_ai}\\
(u_h^{n\star},w_h)_K&=(u_h^n,w_h)_K,\label{post_process_1_bi}
\end{align}
\end{subequations}
for all $(z_h, w_h)\in [\mathcal P^{k+1}(K)]^{\perp}\times\mathcal{P}^{0}(K) $.
As is discussed below, the Interpolatory HDG$_k$ method takes great advantage of nodal basis functions; however, the postprocessing \eqref{post_process_1i} uses an orthogonal complement space, which complicates the implementation. To avoid this, on each element $ K $, we introduce a Lagrange multiplier $\eta_h^n\in \mathcal P^0(K)$ such that
\begin{subequations}\label{post_process_2i}
\begin{align}
(\nabla u_h^{n\star},\nabla z_h)_K + (\eta_h^n,z_h)_K&=-(\bm q_h^n,\nabla z_h)_K,\label{post_process_2_ai}\\
(u_h^{n\star},w_h)_K&=(u_h^n,w_h)_K,\label{post_process_2_bi}
\end{align}
\end{subequations}
holds for all $(z_h, w_h)\in \mathcal P^{k+1}(K)\times\mathcal{P}^{0}(K) $.
\begin{remark}
In this work, we used $w_h\in\mathcal P^{\ell}(K)$ with $\ell = 0$ in \eqref{post_process_2i}. In fact, any $\ell = 0,1, \ldots, k-1$ works in both the analysis and the numerical experiments. In Part II of this work, we use the same postprocessing \eqref{post_process_2i} with $\ell = k$.
\end{remark}
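The postprocessing with a Lagrange multiplier is a small saddle-point solve on each element. The following numpy sketch is only an illustration, under the assumption that the element-level matrices are already assembled; the function name and argument names (\texttt{S} for the local $\mathcal P^{k+1}$ stiffness matrix, \texttt{Aq} for the local flux coupling, \texttt{b1} and \texttt{b2} for the vectors of integrals of the local bases) are ours, not from an actual implementation.

```python
import numpy as np

def local_postprocess(S, Aq, b1, b2, alpha, beta):
    """Solve the element-level saddle-point system for u_h^star.

    Hypothetical names: S is the local stiffness matrix of the P^{k+1}
    basis, Aq couples the flux basis with gradients of the P^{k+1} basis,
    b1 and b2 hold the integrals of the P^{k+1} and P^k bases, and
    alpha, beta are the local flux / scalar coefficient vectors.
    """
    n = S.shape[0]
    lhs = np.zeros((n + 1, n + 1))
    lhs[:n, :n] = S          # (grad u*, grad z)_K block
    lhs[:n, n] = b1          # Lagrange multiplier column: (eta, z)_K
    lhs[n, :n] = b1          # mean-value constraint row: (u*, 1)_K
    rhs = np.concatenate([-Aq @ alpha, [b2 @ beta]])  # (u, 1)_K on the right
    sol = np.linalg.solve(lhs, rhs)
    return sol[:n], sol[n]   # postprocessed dofs gamma and multiplier eta
```

Since the stiffness block is only singular along constants, the mean-value constraint makes the bordered system uniquely solvable on each element.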
Assume $\bm{V}_h = \mbox{span}\{\bm \varphi_i\}_{i=1}^{N_1}$, $W_h=\mbox{span}\{\phi_i\}_{i=1}^{N_2}$, $Z_h=\mbox{span}\{\chi_i\}_{i=1}^{N_3}$, and $M_h=\mbox{span}\{\psi_i\}_{i=1}^{N_4}$. Then
\begin{equation}\label{expre}
\begin{split}
\bm q^{n}_{h}= \sum_{j=1}^{N_1}\alpha_{j}^{n}\bm\varphi_j, \qquad
u^{n}_h= \sum_{j=1}^{N_2} \beta_{j}^{n}\phi_j, \\
u^{n \star}_h= \sum_{j=1}^{N_3}\gamma_{j}^{n}\chi_j, \qquad
\widehat{u}^{n}_h= \sum_{j=1}^{N_4}\zeta_{j}^{n}\psi_{j}.
\end{split}
\end{equation}
Also, define the following matrices
\begin{align*}
A_1 &= [(\nabla\chi_j,\nabla\chi_i )_{\mathcal{T}_h}], & A_2 &= [(\bm{\varphi}_j,\nabla\chi_i)_{\mathcal{T}_h}], & A_3 &= [(\bm\varphi_j,\bm\varphi_i )_{\mathcal{T}_h}],\\
A_4 &= [(\phi_j,\nabla\cdot\bm{\varphi}_i)_{\mathcal{T}_h}], & A_5 &= [\left\langle\psi_j,{\bm\varphi_i\cdot\bm n}\right\rangle_{\partial\mathcal{T}_h}], & A_6 &= [\left\langle\tau\phi_j,{\phi_i}\right\rangle_{\partial\mathcal{T}_h}],\\
A_7 &= [\left\langle\tau\psi_j,{\phi_i}\right\rangle_{\partial\mathcal{T}_h}], & A_8 &= [\left\langle\tau\psi_j,{\psi_i}\right\rangle_{\partial\mathcal{T}_h}], & A_9 &= [(\chi_j,\phi_i)_{\mathcal{T}_h}],\\
& & M &= [(\phi_j,\phi_i )_{\mathcal{T}_h}], & &
\end{align*}
and vectors
\begin{align*}
b_1 &= [(\chi_j,1 )_{\mathcal{T}_h}], & b_2 &= [(\phi_j,1 )_{\mathcal{T}_h}], & b_3^n &= [(f^n,\phi_i )_{\mathcal{T}_h}].
\end{align*}
Since $ \bm V_h $, $ W_h $, and $ Z_h $ are discontinuous finite element spaces, many of the matrices are block diagonal with small blocks.
Substituting \eqref{expre} into the postprocessing equation \eqref{post_process_2i} and testing with the corresponding basis functions on each element $K\in \mathcal T_h$ gives the following local postprocessing equation:
\begin{align*}
{\begin{bmatrix}
A_1^k & (b_1^k)^T\\
b_1^k & 0
\end{bmatrix}}
{\left[ {\begin{array}{*{20}{c}}
\bm\gamma^{n}_k\\
\bm\eta^{n}_k\\
\end{array}} \right]}
={\begin{bmatrix}
-A_2^k & 0\\
0& b_2^k
\end{bmatrix}}
{\left[ {\begin{array}{*{20}{c}}
\bm\alpha^{n}_k\\
\bm\beta^{n}_k
\end{array}} \right]},
\end{align*}
where $A_1^k$ is the $k$th block of the matrix $A_1$, and $A_2^k, b_1^k$, and $b_2^k$ are defined similarly. That is,
\begin{align}
{\left[ {\begin{array}{*{20}{c}}
\bm\gamma^{n}_k\\
\bm\eta^{n}_k\\
\end{array}} \right]}
&={\begin{bmatrix}
A_1^k & (b_1^k)^T\\
b_1^k & 0
\end{bmatrix}}^{-1}
{\begin{bmatrix}
-A_2^k & 0\\
0& b_2^k
\end{bmatrix}}
{\left[ {\begin{array}{*{20}{c}}
\bm\alpha^{n}_k\\
\bm\beta^{n}_k
\end{array}} \right]}\nonumber\\
& = {\begin{bmatrix}
B_{11}^k & B_{12}^k\\
B_{21}^k& B_{22}^k
\end{bmatrix}}
{\left[ {\begin{array}{*{20}{c}}
\bm\alpha^{n}_k\\
\bm\beta^{n}_k
\end{array}} \right]},\label{post_process}
\end{align}
i.e.,
\begin{align*}
\bm \gamma^n_k = B_{11}^k \bm\alpha^{n}_k+ B_{12}^k \bm\beta^{n}_k.
\end{align*}
Let $B_{11}$ and $B_{12}$ be the block diagonal matrices with $k$th blocks $B_{11}^k$ and $B_{12}^k$, respectively.
As in \cite{CockburnSinglerZhang1}, once we test \eqref{full_discretion_standard_b} using $ w = \phi_i $ we can express the Interpolatory HDG$_k$ nonlinear term by the matrix-vector product
\begin{align*}
[ ( \mathcal I_h F(u_h^{n \star}),\phi_i)_{\mathcal{T}_h} ] &= A_9 \mathcal F(\bm\gamma^{n})\\
& = A_9 \mathcal F(B_{11} \bm\alpha^{n}+ B_{12} \bm\beta^{n}),
\end{align*}
where $\mathcal F$ is defined by
\begin{align}
\mathcal F(\bm\gamma^{n}) &= [F(\gamma_1^{n}), F(\gamma_2^{n}),\ldots,F(\gamma_{N_3}^{n})]^T.
\end{align}
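Because the interpolation operator acts on nodal values, evaluating the nonlinear term reduces to entrywise application of $F$ followed by fixed matrix products. A minimal numpy sketch of this step (the function name is ours; \texttt{F} is assumed to be vectorized over arrays):

```python
import numpy as np

def interpolatory_nonlinear_term(F, A9, B11, B12, alpha, beta):
    """Evaluate A_9 * F(B_11 alpha + B_12 beta).

    gamma holds the nodal values of the postprocessed solution u_h^star,
    so interpolating F(u_h^star) amounts to applying F entrywise; all
    quadrature is frozen inside the precomputed matrix A_9.
    """
    gamma = B11 @ alpha + B12 @ beta
    return A9 @ F(gamma)
```

This is the computational payoff of the interpolatory approach: no quadrature of $F$ is performed during time integration.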
Then the system \eqref{full_discretion_standard} can be rewritten as
\begin{align}\label{system_equation_group3}
\underbrace{\begin{bmatrix}
A_3 & -A_4 &A_5\\
A_4^T & \Delta t^{-1} M+A_6 &-A_7\\
A_5^T & A_7^T &-A_8
\end{bmatrix}}_{K}
\underbrace{\left[ {\begin{array}{*{20}{c}}
\bm\alpha^{n}\\
\bm\beta^{n}\\
\bm\zeta^{n}\\
\end{array}} \right]}_{\bm x_{n}}+
\underbrace{\left[ {\begin{array}{*{20}{c}}
0\\
A_9 \mathcal F(B_{11} \bm\alpha^{n}+ B_{12} \bm\beta^{n})\\
0
\end{array}} \right]}_{\mathscr F(\bm x_{n})}
=\underbrace{\left[ {\begin{array}{*{20}{c}}
0\\
b_3^n+{\Delta t}^{-1}M\bm\beta^{n-1} \\
0
\end{array}} \right]}_{\bm b_n},
\end{align}
i.e.,
\begin{align}\label{system_equation_group4}
K\bm x_n + \mathscr F(\bm x_n) = \bm b_n.
\end{align}
To apply Newton's method to solve the nonlinear equations \eqref{system_equation_group4}, define $G:\mathbb R^{N_1+N_2+N_4}\to \mathbb R^{N_1+N_2+N_4}$ by
\begin{align}\label{system_equation_group5}
G(\bm x_n ) = K\bm x_n + \mathscr F(\bm x_n) - \bm b_n.
\end{align}
At each time step $t_n $ for $ 1\le n\le N$, given an initial guess $\bm x_n^{(0)}$, Newton's method yields
\begin{align}\label{system_equation_group6}
\bm x_n^{(m)} =\bm x_n^{(m-1)} - \left[G'(\bm x_n^{(m-1)})\right]^{-1}G(\bm x_n^{(m-1)}), \quad m=1,2,3,\ldots
\end{align}
where the Jacobian matrix $G'(\bm x_n^{(m-1)})$ is given by
\begin{align}\label{system_equation_group7}
G'(\bm x_n^{(m-1)}) = K+\mathscr F'(\bm x_n^{(m-1)}).
\end{align}
Similar to our earlier work \cite{CockburnSinglerZhang1} on Interpolatory HDG, the term $\mathscr F'(\bm x_n^{(m-1)})$ is easily computed by
\begin{align*}
\mathscr F'(\bm x_n^{(m-1)}) = \begin{bmatrix}
0 & 0 &0 \\
A_{10}^{n,(m)}&A_{11}^{n,(m)}&0\\
0 &0 & 0
\end{bmatrix},
\end{align*}
where $ A_{10}^{n,(m)} $ and $ A_{11}^{n,(m)} $ can be efficiently computed using sparse matrix operations by
\begin{align*}
A_{10}^{n,(m)} &= A_9 \, \text{diag}(\mathcal F'(B_{11} \bm\alpha^{n,(m-1)}+ B_{12} \bm\beta^{n,(m-1)})) B_{11},\\
A_{11}^{n,(m)} &= A_9 \, \text{diag}(\mathcal F'(B_{11} \bm\alpha^{n,(m-1)}+ B_{12} \bm\beta^{n,(m-1)})) B_{12}.
\end{align*}
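Since $\text{diag}(\mathcal F'(\cdot))$ is diagonal, these products amount to scaling the rows of $B_{11}$ and $B_{12}$ before a single matrix product. A dense numpy sketch of this step (in practice the matrices would be stored sparse; the names mirror the text, and \texttt{Fprime} applies $F'$ entrywise):

```python
import numpy as np

def jacobian_blocks(Fprime, A9, B11, B12, alpha, beta):
    """Compute A_10 = A_9 diag(F'(gamma)) B_11 and A_11 = A_9 diag(F'(gamma)) B_12,
    where gamma = B_11 alpha + B_12 beta."""
    d = Fprime(B11 @ alpha + B12 @ beta)   # diagonal entries F'(gamma)
    A10 = A9 @ (d[:, None] * B11)          # row scaling == diag(d) @ B11
    A11 = A9 @ (d[:, None] * B12)
    return A10, A11
```

No new assembly loop is needed: only the nodal values $F'(\gamma)$ change between Newton iterations.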
Therefore, equation \eqref{system_equation_group6} can be rewritten as
\begin{align}\label{system_equation_group1}
\begin{bmatrix}
A_3 & -A_4 &A_5\\
A_4^T + A_{10}^{n,(m)} & \Delta t^{-1} M+A_6 + A_{11}^{n,(m)} &-A_7\\
A_5^T & A_7^T &-A_8
\end{bmatrix}
\left[ {\begin{array}{*{20}{c}}
\bm\alpha^{n,(m)}\\
\bm\beta^{n,(m)}\\
\bm\zeta^{n,(m)}
\end{array}} \right]
=\bm {\widetilde b},
\end{align}
where
\begin{align}
\bm {\widetilde b} = G'(\bm x_n^{(m-1)}) \bm x_n^{(m-1)} - G(\bm x_n^{(m-1)}).
\end{align}
This equation can be solved by locally eliminating the unknowns $\bm\alpha^{n,(m)}$ and $\bm\beta^{n,(m)}$; see \cite{CockburnSinglerZhang1} for details.
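Putting the pieces together, each backward Euler step solves $K\bm x_n + \mathscr F(\bm x_n) = \bm b_n$ by the Newton iteration \eqref{system_equation_group6}. A schematic dense numpy version (for clarity only; \texttt{Fvec} and \texttt{Fjac} are stand-ins for $\mathscr F$ and its Jacobian $\mathscr F'$, and a real implementation would use the local elimination discussed above rather than a dense solve):

```python
import numpy as np

def newton_step(K, Fvec, Fjac, b, x0, tol=1e-12, maxit=25):
    """Solve K x + Fvec(x) = b by Newton's method starting from x0.
    Fjac(x) returns the Jacobian of Fvec, so G'(x) = K + Fjac(x)."""
    x = x0.astype(float)
    for _ in range(maxit):
        G = K @ x + Fvec(x) - b          # residual G(x)
        if np.linalg.norm(G) < tol:
            break
        x = x - np.linalg.solve(K + Fjac(x), G)
    return x
```

The matrix $K$ and the postprocessing blocks are assembled once; only the Newton correction changes from step to step.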
\begin{remark}
In this new Interpolatory HDG$_k$ formulation, we only need to assemble the HDG matrices and the HDG postprocessing matrices $B_{11}$ and $B_{12}$ once before the time integration. Hence, we keep all the advantages from our earlier work \cite{CockburnSinglerZhang1}: the new approach eliminates the computational cost of matrix reassembly and gives simple explicit expressions for the nonlinear term and Jacobian matrix, which leads to a simple unified implementation for a variety of nonlinear PDEs.
\end{remark}
\section{Error analysis}
\label{Error_analysis}
In this section, we give a rigorous error analysis for the semidiscrete Interpolatory HDG$_k$ method. Below, we state our assumptions and briefly outline the main results. Then we provide an overview of the projections required for the analysis in \Cref{subsec:basic_projections}. The proofs of the main results follow. We first assume in \Cref{subsec:analysis_global_Lip} that the nonlinearity satisfies a global Lipschitz condition. Finally, in \Cref{local} we extend the results to locally Lipschitz nonlinearities; however, we assume the mesh is quasi-uniform and $ h $ is sufficiently small for this case.
We use the standard notation $W^{m,p}(\Omega)$ for Sobolev spaces on $\Omega$ with norm $\|\cdot\|_{m,p,\Omega}$ and seminorm $|\cdot|_{m,p,\Omega}$. We also write $H^{m}(\Omega)$ instead of $W^{m,2}(\Omega)$, and we omit the index $p$ in the corresponding norms and seminorms.
Throughout, we assume the solution of the PDE \eqref{semilinear_pde1} exists and is unique for $ t \in [0,T] $, the function $F$, the problem data, and the solution of the PDE are smooth enough, and the semidiscrete Interpolatory HDG$_k$ equations \eqref{HDG-O} have a unique solution on $ [0,T] $. Furthermore, we assume the mesh is uniformly shape regular, $ h \leq 1 $, and the projection $ \Pi $ used for the initial condition in \eqref{HDG-O_d} is $ \Pi = \Pi_W $, where $ \Pi_W $ is defined below in \Cref{subsec:basic_projections}.
We also make the following regularity assumption on the dual problem: there exists a constant $ C $ such that for any $\Theta \in L^2(\Omega)$, the solution $ (\bm \Phi, \Psi) $ of the dual problem
\begin{equation}\label{Dual_PDE1_assumption}
\begin{split}
\bm{\Phi}+\nabla\Psi&=0\qquad\qquad\text{in}\ \Omega,\\
\nabla\cdot\bm \Phi &=\Theta\qquad\quad~~\text{in}\ \Omega,\\
\Psi &= 0\qquad\qquad\text{on}\ \partial\Omega,
\end{split}
\end{equation}
satisfies $ (\bm \Phi, \Psi) \in [H^1(\Omega)]^d \times H^2(\Omega) $ and
\begin{align}\label{regularity_PDE_assumption}
\|\bm \Phi\|_{H^{1}(\Omega)} + \|\Psi\|_{H^{2}(\Omega)} \le C \|\Theta\|_{L^{2}(\Omega)}.
\end{align}
This assumption is satisfied if $ \Omega $ is convex.
We show that, for all $0\le t\le T$, the solution $ (\bm q_h, u_h, u_h^\star) $ of the semidiscrete Interpolatory HDG$_k$ equations \eqref{HDG-O} satisfies
\begin{align*}
\|\bm q(t) - \bm q_h(t)\|_{\mathcal T_h}&\le C h^{k+1}, \
\|u(t) - u_h(t)\|_{\mathcal T_h}\le C h^{k+1}, \
\|u(t) - u_h^\star(t)\|_{\mathcal T_h}\le C h^{k+1+\min\{k,1\}}.
\end{align*}
In our error estimates, the constants $ C $ can vary from line to line and may depend on the exact solution and the final time $ T $. As in the linear case \cite{ChabaudCockburn12}, superconvergence is only obtained for $ k \geq 1 $.
\begin{remark}
In \cite{ChabaudCockburn12}, the $ L^\infty(L^2) $ error for $ u - u_h^\star $ superconverges at a rate of $ \sqrt{ \log \kappa } \,\, h^{k+2} $, where $ \kappa $ depends on the mesh and the term $ \sqrt{ \log \kappa } $ grows very slowly as $ h $ tends to zero. The term $ \sqrt{ \log \kappa } $ results from the parabolic duality argument used in \cite{ChabaudCockburn12}. It appears this parabolic duality argument is not applicable to Interpolatory HDG. Therefore, in this work we use a duality argument based on Wheeler's work \cite{MR0351124} and avoid the term $ \sqrt{ \log \kappa } $ in our error estimates; however, we require that the solution have higher regularity than is needed in \cite{ChabaudCockburn12} for the linear case.
\end{remark}
\subsection{Projections and basic estimates}
\label{subsec:basic_projections}
We first introduce the HDG$_k$ projection operator $\Pi_h(\bm{q},u) := (\bm{\Pi}_{V} \bm{q},\Pi_{W}u)$ defined in \cite{MR2629996}, where
$\bm{\Pi}_{V} \bm{q}$ and $\Pi_{W}u$ denote components of the projection of $\bm{q}$ and $u$ into $\bm{V}_h$ and $W_h$, respectively.
For each element $K\in\mathcal T_h$, the projection is determined by the equations
\begin{subequations}\label{HDG_projection_operator}
\begin{align}
(\bm\Pi_V\bm q,\bm r)_K &= (\bm q,\bm r)_K,\qquad\qquad \forall \bm r\in[\mathcal P_{k-1}(K)]^d,\label{projection_operator_1}\\
(\Pi_Wu, w)_K &= (u, w)_K,\qquad\qquad \forall w\in \mathcal P_{k-1}(K ),\label{projection_operator_2}\\
\langle\bm\Pi_V\bm q\cdot\bm n+\tau\Pi_Wu,\mu\rangle_{e} &= \langle\bm q\cdot\bm n+\tau u,\mu\rangle_{e},~\;\forall \mu\in \mathcal P_{k}(e),\label{projection_operator_3}
\end{align}
\end{subequations}
for all faces $e$ of the simplex $K$. The approximation properties of the HDG$_k$ projection \eqref{HDG_projection_operator} are given in the following result from \cite{MR2629996}:
\begin{lemma}\label{pro_error}
Suppose $k\ge 0$, $\tau|_{\partial K}$ is nonnegative and $\tau_K^{\max}:=\max\tau|_{\partial K}>0$. Then the system \eqref{HDG_projection_operator} is uniquely solvable for $\bm{\Pi}_V\bm{q}$ and $\Pi_W u$. Furthermore, there is a constant $C$ independent of $K$ and $\tau$ such that
\begin{subequations}
\begin{align}
\|{\bm{\Pi}_V}\bm{q}-\bm{q}\|_K &\leq Ch_{K}^{\ell_{\bm{q}}+1}|\bm{q}|_{\ell_{\bm{q}}+1,K}+Ch_{K}^{\ell_{{u}}+1}\tau_{K}^{*}{|u|}_{\ell_{{u}}+1,K},\label{Proerr_q}\\
\|{{\Pi}_W}{u}-u\|_K &\leq Ch_{K}^{\ell_{{u}}+1}|{u}|_{\ell_{{u}}+1,K}+C\frac{h_{K}^{\ell_{{\bm{q}}}+1}}{\tau_K^{\max}}{|\nabla\cdot \bm{q}|}_{\ell_{\bm{q}},K}\label{Proerr_u}
\end{align}
\end{subequations}
for $\ell_{\bm{q}},\ell_{u}$ in $[0,k]$. Here $\tau_K^{*}:=\max\tau|_{{\partial K}\backslash e^{*}}$, where $e^{*}$ is a face of $K$ at which $\tau|_{\partial K}$ is maximum.
\end{lemma}
Next, for each simplex $ K $ in $ \mathcal T_h $ and each boundary face $ e $ of $ K $, let $ \Pi_\ell $ (for any $ \ell \geq 0 $) and $ P_M $ denote the standard $L^2$ orthogonal projection operators $\Pi_{\ell}: L^2(K)\to \mathcal P^{\ell}(K)$ and $P_M : L^2(e)\to \mathcal P^{k}(e)$ satisfying
\begin{subequations}
\begin{align}
(\Pi_{\ell} u, v_h)_K &= (u,v_h)_K,\quad \forall v_h\in \mathcal P^{\ell}(K),\label{L2_do}\\
\langle P_M u, \widehat v_h \rangle_e &= \langle u, \widehat v_h \rangle_e,\quad \forall \widehat v_h\in \mathcal P^{k}(e).\label{L2_edge}
\end{align}
\end{subequations}
The following error estimates for the $ L^2 $ projections and the elementwise interpolation operator $ \mathcal I_h $ from \Cref{sec:HDG} are standard and can be found in \cite{MR2373954}:
\begin{lemma}\label{lemmainter}
Suppose $k, \ell \ge 0$. There exists a constant $C$ independent of $K\in\mathcal T_h$ such that
\begin{subequations}
\begin{align}
&\|w - \mathcal I_h w\|_K + h_K\|\nabla(w - \mathcal I_h w)\|_K \le Ch^{k+2} \|w\|_{k+2,K}, & &\forall w\in C(\bar K)\cap H^{k+2}(K), \label{lemmainter_inter}\\
&\|w - \Pi_{\ell} w\|_K \le Ch^{\ell+1} \|w\|_{\ell+1,K}, & &\forall w\in H^{\ell+1}(K), \label{lemmainter_orthoo}\\
&\|w- P_M w\|_{\partial K} \le Ch^{k+1/2} \|w\|_{k+1,K}, & &\forall w\in H^{k+1}(K). \label{lemmainter_orthoe}
\end{align}
\end{subequations}
\end{lemma}
\subsection{Error analysis under a global Lipschitz condition}
\label{subsec:analysis_global_Lip}
In this section, we assume the nonlinearity $F$ is globally Lipschitz:
\begin{assumption}\label{gloablly_lip}
There is a constant $L>0$ such that
\begin{align*}
|F(u) - F(v)|_{\mathbb R}\le L |u-v|_{\mathbb R}
\end{align*}
for all $ u, v \in \mathbb{R}$.
\end{assumption}
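For illustration, the bounded-derivative nonlinearity $F(u) = \sin u$ satisfies this assumption with $L = 1$, by the mean value theorem:
\begin{align*}
|\sin u - \sin v| \le |u - v| \quad \text{for all } u, v \in \mathbb R.
\end{align*}
By contrast, the cubic nonlinearity $F(u) = u^3 - u$ used in \Cref{sec:numerics} is not globally Lipschitz, since $F'(u) = 3u^2 - 1$ is unbounded on $\mathbb R$.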
We remove this restriction in the next section. Our proof relies on techniques used in \cite{CockburnSinglerZhang1,ChabaudCockburn12}. We split the proof of the main result into several steps.
To begin, we rewrite the semidiscrete interpolatory HDG equations \eqref{HDG-O}: subtract \eqref{HDG-O_c} from \eqref{HDG-O_b} and integrate by parts to obtain the following formulation:
\begin{lemma}
The Interpolatory HDG$_k$ method finds $(\bm q_h,u_h,\widehat{u}_h)\in \bm V_h\times W_h\times M_h$ satisfying
\begin{subequations}\label{HDGO}
\begin{align}
(\bm{q}_h,\bm{r}_h)_{\mathcal{T}_h}-(u_h,\nabla\cdot \bm{r}_h)_{\mathcal{T}_h}+\left\langle\widehat{u}_h,\bm r_h\cdot\bm n \right\rangle_{\partial{\mathcal{T}_h}} &= 0, \label{HDGO_a}\\
(\partial_t u_h, v_h)_{\mathcal T_h} + ( \mathcal I_h F(u_h^\star),v_h)_{\mathcal{T}_h}+ (\nabla\cdot\bm{q}_h, v_h)_{\mathcal{T}_h}
-\langle \bm q_h\cdot\bm n,\widehat{v}_h \rangle_{\partial{\mathcal{T}_h}}&
\nonumber\\
+\left\langle \tau( u_h -\widehat u_h),v_h-\widehat{v}_h\right\rangle_{\partial{\mathcal{T}_h}} &= (f,v_h)_{\mathcal{T}_h},\label{HDGO_b}\\
u_h(0) &= \Pi_W u_0,\label{HDGO_c}
\end{align}
\end{subequations}
for all $(\bm r_h,v_h,\widehat{v}_h)\in \bm V_h\times W_h\times M_h$.
\end{lemma}
We also define the HDG$_k$ operator $\mathscr B$:
\begin{align}\label{def_B}
\begin{split}
\hspace{1em}&\hspace{-1em}\mathscr B (\bm q_h, u_h, \widehat u_h;\bm r_h, v_h, \widehat v_h )\\
& = (\bm{q}_h,\bm{r}_h)_{\mathcal{T}_h}-(u_h,\nabla\cdot \bm{r}_h)_{\mathcal{T}_h}+\left\langle\widehat{u}_h,\bm r_h\cdot\bm n \right\rangle_{\partial{\mathcal{T}_h}}\\
&\quad + (\nabla\cdot\bm{q}_h, v_h)_{\mathcal{T}_h}
-\langle \bm q_h\cdot\bm n,\widehat{v}_h \rangle_{\partial{\mathcal{T}_h}} +\left\langle
\tau( u_h -\widehat u_h),v_h-\widehat{v}_h\right\rangle_{\partial{\mathcal{T}_h}}.
\end{split}
\end{align}
This allows us to rewrite the semidiscrete Interpolatory HDG$_k$ formulation \eqref{HDGO} as follows: find $(\bm q_h, u_h, \widehat u_h)\in \bm V_h\times W_h\times M_h$ such that
\begin{subequations}\label{HDGO-B}
\begin{align}
(\partial_t u_h, v_h)_{\mathcal T_h}+\mathscr B (\bm q_h, u_h, \widehat u_h;\bm r_h, v_h, \widehat v_h ) + ( \mathcal I_h F(u_h^\star),v_h)_{\mathcal{T}_h} &= (f,v_h)_{\mathcal{T}_h},\\
u_h(0) &= \Pi_W u_0,
\end{align}
\end{subequations}
for all $(\bm r_h,v_h,\widehat{v}_h)\in \bm V_h\times W_h\times M_h$.
\subsubsection{Step 1: Equations for the projection of the errors}
\begin{lemma}\label{error_u3}
For $\varepsilon_h^{\bm q}=\bm{\Pi}_{V}\bm q-\bm q_h $, $ \varepsilon_h^{ u}=\Pi_{W} u-u_h $, and $ \varepsilon_h^{ \widehat{u}}=P_M u-\widehat{u}_h$, we have
\begin{subequations}\label{error_u1}
\begin{align}
\hspace{3em}&\hspace{-3em}(\partial_t \varepsilon_h^u, v_h)_{\mathcal T_h } + \mathscr B(\varepsilon_h^{\bm q},\varepsilon_h^{u},\varepsilon_h^{\widehat u}; \bm r_h, v_h, \widehat v_h) + (F(u) - \mathcal I_h F(u_h^\star), v_h )_{\mathcal T_h} \nonumber\\
& = (\bm \Pi_V \bm{q} - \bm q,\bm{r}_h)_{\mathcal{T}_h} + (\Pi_W u_t - u_t , v_h)_{\mathcal{T}_h},\label{error_u1_a}\\
\varepsilon_h^u|_{t=0} &=0,\label{error_u1_b}
\end{align}
\end{subequations}
for all $(\bm r_h,v_h,\widehat v_h)\in \bm V_h\times W_h\times M_h$.
\end{lemma}
\begin{proof}
By the definition of the operator $\mathscr B$ in \eqref{def_B}, we have
\begin{align*}
\hspace{2em}&\hspace{-2em} \mathscr B (\bm\Pi_V\bm q,\Pi_W u,P_M u;\bm r_h,v_h,\widehat v_h)\\
&= (\bm\Pi_V\bm q,\bm{r}_h)_{\mathcal{T}_h}-(\Pi_W u,\nabla\cdot\bm{r}_h)_{\mathcal{T}_h}+\left\langle P_M u, \bm{r}_h\cdot \bm{n}\right\rangle_{\partial\mathcal{T}_h} +(\nabla\cdot \bm\Pi_V\bm q, v_h)_{\mathcal{T}_h}\\
&\quad- \left\langle \bm\Pi_V\bm q \cdot\bm n , \widehat v_h\right\rangle_{\partial {\mathcal{T}_h}} +\left\langle \tau(\Pi_W u - u), v_h- \widehat v_h\right\rangle_{\partial\mathcal{T}_h}\\
&= (\bm{\Pi}_V \bm q - \bm q,\bm{r}_h)_{\mathcal{T}_h}+ (\bm q,\bm{r}_h)_{\mathcal{T}_h}-( u,\nabla\cdot\bm{r}_h)_{\mathcal{T}_h}+\left\langle u, \bm{r}_h\cdot \bm{n}\right\rangle_{\partial\mathcal{T}_h} \\
&\quad +(\nabla\cdot\bm q, v_h)_{\mathcal T_h} +(\nabla\cdot(\bm \Pi_V \bm q - \bm q), v_h)_{\mathcal T_h}- \left\langle (\bm\Pi_V\bm q - \bm q) \cdot\bm n , \widehat v_h\right\rangle_{\partial {\mathcal{T}_h}}\\
&\quad +\left\langle \tau(\Pi_W u - u), v_h- \widehat v_h\right\rangle_{\partial\mathcal{T}_h}\\
&= (\bm{\Pi}_V \bm q - \bm q,\bm{r}_h)_{\mathcal{T}_h}+(f -F(u)-\partial_t u, v_h)_{\mathcal T_h},
\end{align*}
where we used the HDG$_k$ projection \eqref{HDG_projection_operator} and the $ L^2 $ projection $P_M$ \eqref{L2_edge}. Use \eqref{HDGO-B} and subtract to obtain the result.
\end{proof}
\subsubsection{Step 2: Estimate of $\varepsilon_h^u$ in $L^{\infty}(L^2)$ by an energy argument}
\label{sec:energy_argument_q2}
\begin{lemma}\label{super_con}
For any $t\in[0,T]$, we have
\begin{align*}
\|\Pi_{k+1} u - u_h^\star\|_{\mathcal T_h} &\le C ( \|\varepsilon_h^u\|_{\mathcal T_h} + \| u - \mathcal I_h u \|_{\mathcal T_h} + \delta_{k0}\|\Pi_W u - u\|_{\mathcal T_h}) \\
& \quad +Ch(\|\varepsilon_h^{\bm q}\|_{\mathcal T_h}+\|\bm{q}-\bm\Pi_V\bm{q}\|_{\mathcal T_h} +\|\nabla(u - \mathcal I_h u)\|_{\mathcal T_h}),
\end{align*}
where $ \delta_{k0} $ denotes the Kronecker delta symbol so that $ \delta_{k0} = 1 $ for $ k = 0 $ and $ \delta_{k0} = 0 $ for $ k \geq 1 $.
\end{lemma}
\begin{proof}
We begin with the case $k\ge 1$. The argument is similar to one in \cite{MR1086845}, but we include it for completeness. By the properties of $\Pi_W$ and $\Pi_{k+1}$, we obtain
\begin{align*}
(\Pi_W u,w_0)_K &=(u,w_0)_K,\quad \text{ for all } w_0\in \mathcal{P}^{0}(K),\\
(\Pi_{k+1} u,w_0)_K&=(u,w_0)_K, \quad \text{ for all } w_0\in \mathcal{P}^{0}(K).
\end{align*}
Hence, for all $w_{0}\in \mathcal P^{0}(K)$, we have
\begin{align*}
(\Pi_W u-\Pi_{k+1} u, w_{0})_K = 0.
\end{align*}
Let $e_h=u_h^\star - u_h+\Pi_W u-\Pi_{k+1} u$. Using the postprocessing equation \eqref{post_process_1}, $ \bm q = -\nabla u $, and an inverse inequality gives
\begin{align}
\|\nabla e_{h}\|_K^2&=(\nabla (u_h^\star - u_h),\nabla e_{h} )_K+( \nabla (\Pi_Wu -\Pi_{k+1} u),\nabla e_{h} )_K \nonumber\\
&=(-\nabla u_h-\bm{q}_h,\nabla e_{h} )_K+( \nabla (\Pi_W u-\Pi_{k+1} u),\nabla e_{h} )_K \nonumber\\
&=(\nabla (\Pi_Wu - u_h)-(\bm{q}_h-\bm\Pi_V\bm{q}) + (\bm{q}-\bm\Pi_V\bm{q}) +\nabla (u-\Pi_{k+1} u),\nabla e_{h})_K \nonumber\\
&\le C (h_K^{-1}\|\Pi_W u- u_h\|_K+\|\varepsilon_h^{\bm q}\|_K + \|\bm{q}-\bm\Pi_V\bm{q}\|_{K} +\|\nabla(u - \Pi_{k+1} u)\|_{K} )\|\nabla e_{h}\|_K.\label{H1}
\end{align}
Since $(e_h,1)_K=0$, apply the Poincar\'{e} inequality and the above estimate \eqref{H1} to give
\begin{align*}
\|e_{h}\|_K&\le C h_K \|\nabla e_h\|_K \le C \|\varepsilon_h^u\|_K+Ch_K(\|\varepsilon_h^{\bm q}\|_K +\|\bm{q}-\bm\Pi_V\bm{q}\|_{K} +\|\nabla(u - \Pi_{k+1} u)\|_{K} ).
\end{align*}
Next, estimate the last term using an inverse inequality:
\begin{align*}
h_K \|\nabla(u - \Pi_{k+1} u)\|_{K} &\leq h_K \|\nabla(u - \mathcal I_h u)\|_{K} + h_K \|\nabla(\mathcal I_h u - \Pi_{k+1} u)\|_{K}\\
&\leq h \|\nabla(u - \mathcal I_h u)\|_{K} + C\| \mathcal I_h u - \Pi_{k+1} u\|_{K}\\
&\leq h \|\nabla(u - \mathcal I_h u)\|_{K} + C\| u - \mathcal I_h u \|_{K}.
\end{align*}
This implies
\begin{align*}
\|e_{h} \|_{\mathcal T_h} \le C ( \|\varepsilon_h^u\|_{\mathcal T_h} + \| u - \mathcal I_h u \|_{\mathcal T_h} ) +Ch (\|\varepsilon_h^{\bm q}\|_{\mathcal T_h} +\|\bm{q}-\bm\Pi_V\bm{q}\|_{\mathcal T_h} + \|\nabla(u - \mathcal I_h u)\|_{\mathcal T_h}).
\end{align*}
Hence, we have
\begin{align*}
\|\Pi_{k+1} u - u_h^\star\|_{\mathcal T_h}&\le\|\Pi_{k+1} u -\Pi_W u- u_h^\star + u_h\|_{\mathcal T_h} + \|\Pi_W u-u_h\|_{\mathcal T_h} \nonumber\\
& \le C ( \|\varepsilon_h^u\|_{\mathcal T_h} + \| u - \mathcal I_h u \|_{\mathcal T_h} ) +Ch (\|\varepsilon_h^{\bm q}\|_{\mathcal T_h} +\|\bm{q}-\bm\Pi_V\bm{q}\|_{\mathcal T_h} + \|\nabla(u - \mathcal I_h u)\|_{\mathcal T_h}).
\end{align*}
This completes the proof for the case $ k \geq 1 $.
For the case $k=0$, we follow the same steps but replace the projection $\Pi_W$ with $\Pi_k$ to obtain
\begin{align*}
\|\Pi_{k+1} u - u_h^\star\|_{\mathcal T_h}&\le\|\Pi_{k+1} u -\Pi_k u- u_h^\star + u_h\|_{\mathcal T_h} + \|\Pi_k u-u_h\|_{\mathcal T_h} \nonumber\\
& \le C ( \|\Pi_k u-u_h\|_{\mathcal T_h} + \| u - \mathcal I_h u\|_{\mathcal T_h} ) \\
& \quad +Ch(\|\varepsilon_h^{\bm q}\|_{\mathcal T_h}+\|\bm{q}-\bm\Pi_V\bm{q}\|_{\mathcal T_h} +\|\nabla(u - \mathcal I_h u)\|_{\mathcal T_h})\\
& \le C ( \|\varepsilon_h^u\|_{\mathcal T_h} + \|\Pi_k u - u\|_{\mathcal T_h} + \|\Pi_W u - u\|_{\mathcal T_h} + \| u - \mathcal I_h u\|_{\mathcal T_h})\\
&\quad +Ch(\|\varepsilon_h^{\bm q}\|_{\mathcal T_h}+\|\bm{q}-\bm\Pi_V\bm{q}\|_{\mathcal T_h} +\|\nabla(u - \mathcal I_h u)\|_{\mathcal T_h}).
\end{align*}
The optimality of the $ L^2 $ projection gives $ \|\Pi_k u - u\|_{\mathcal T_h} \leq \|\Pi_W u - u\|_{\mathcal T_h} $, and this completes the proof.
\end{proof}
To bound the error in the nonlinear term, we split $F(u)-\mathcal I_hF( u_h^\star)$ as
\begin{align*}
F( u)-\mathcal I_hF( u_h^\star) &= F(u)- \mathcal I_h F(u) + \mathcal I_h F(u) - \mathcal I_h F(\Pi_{k+1} u) + \mathcal I_h F(\Pi_{k+1} u) -\mathcal I_hF(u_h^\star)\\
&=: R_1 + R_2 + R_3.
\end{align*}
A bound for the first term $R_1$ follows directly from the standard FE interpolation error estimate \eqref{lemmainter_inter} in \Cref{lemmainter} due to the smoothness assumption for the function $F$. Error bounds for $R_2$ and $R_3$ are given in the following result:
\begin{lemma}\label{non_est}
We have
\begin{align*}
\|\mathcal I_h F(u) - \mathcal I_h F(\Pi_{k+1} u) \|_{\mathcal T_h}&\le C \| u- \mathcal I_h u\|_{\mathcal T_h},\\
\| \mathcal I_h F(\Pi_{k+1} u) -\mathcal I_hF(u_h^\star)\|_{\mathcal T_h} &\le C\| \Pi_{k+1} u- u_h^\star\|_{\mathcal T_h}.
\end{align*}
\end{lemma}
The proofs of the estimates in \Cref{non_est} are similar to proofs in \cite{CockburnSinglerZhang1} and also use $ \| \Pi_{k+1} u- u\|_{\mathcal T_h} \leq \| u- \mathcal I_h u\|_{\mathcal T_h} $. We omit the details.
\begin{lemma}\label{theorem_err_u3}
We have the estimate
\begin{align*}
\|\varepsilon_h^{u}(t)\|^2_{\mathcal{T}_h}+ \int_0^t \left[\|\varepsilon_h^{\bm q}\|^2_{\mathcal{T}_h}+\langle \tau(\varepsilon_h^u -\varepsilon_h^{\widehat{u}}), \varepsilon_h^u -\varepsilon_h^{\widehat{u}}\rangle_{\partial{\mathcal{T}_h}} \right] \le C \int_0^t \bm {\mathscr H}^2,
\end{align*}
where
\begin{align}\label{def_H}
\begin{split}
\bm {\mathscr H} &= \|\bm\Pi_V {\bm{q}} -\bm q\|_{\mathcal T_h} + \| \Pi_W u_t - u_t \|_{\mathcal{T}_h} + \delta_{k0}\|\Pi_W u - u\|_{\mathcal T_h}\\
&\quad + \| F(u)- \mathcal I_h F(u) \|_{\mathcal T_h} + \| u - \mathcal I_h u\|_{\mathcal T_h} + h \|\nabla (u - \mathcal I_h u)\|_{\mathcal T_h}.
\end{split}
\end{align}
\end{lemma}
\begin{proof}
Take $(\bm r_h,v_h,\widehat{v}_h)=(\varepsilon_h^{\bm q},\varepsilon_h^{u},\varepsilon_h^{\widehat u})$ in the error equation \eqref{error_u1} to give
\begin{align}\label{error_q1_at_0}
\begin{split}
\hspace{2em}&\hspace{-2em} \frac 1 2\frac{d}{dt}\|\varepsilon_h^{u}\|^2_{\mathcal{T}_h}+ \|\varepsilon_h^{\bm q}\|^2_{\mathcal{T}_h}+\langle \tau(\varepsilon_h^u -\varepsilon_h^{\widehat{u}}), \varepsilon_h^u -\varepsilon_h^{\widehat{u}}\rangle_{\partial{\mathcal{T}_h}} \\
& = (\bm\Pi_V {\bm{q}} -\bm q, \varepsilon_h^{\bm q})_{\mathcal{T}_h} + (\Pi_W u_t - u_t , \varepsilon_h^{u})_{\mathcal{T}_h} - (F(u) - \mathcal I_h F(u_h^\star),\varepsilon_h^u)_{\mathcal T_h}.
\end{split}
\end{align}
Apply the Cauchy-Schwarz inequality to each term of the right-hand side of the above identity and use $ h \leq 1 $, \Cref{super_con}, and \Cref{non_est} to get
\begin{align*}
\frac{d}{dt}\|\varepsilon_h^{u}\|^2_{\mathcal{T}_h}+ \|\varepsilon_h^{\bm q}\|^2_{\mathcal{T}_h}+\langle \tau(\varepsilon_h^u -\varepsilon_h^{\widehat{u}}), \varepsilon_h^u -\varepsilon_h^{\widehat{u}}\rangle_{\partial{\mathcal{T}_h}} \le C \bm {\mathscr H}^2 + C \|\varepsilon_h^{u}\|^2_{\mathcal{T}_h}.
\end{align*}
Gronwall's inequality, $ \varepsilon_h^u(0) = 0 $, and $ e^{Ct} \leq e^{CT} $ give the result.
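In detail (a standard step, spelled out here for completeness): setting $y(t) = \|\varepsilon_h^{u}(t)\|^2_{\mathcal{T}_h}$, the inequality above yields $y'(t) \le C \bm{\mathscr H}^2(t) + C y(t)$ with $y(0) = 0$, and hence
\begin{align*}
y(t) \le \int_0^t e^{C(t-s)} C \bm {\mathscr H}^2(s)\, ds \le C e^{CT} \int_0^t \bm {\mathscr H}^2.
\end{align*}
Integrating the nonnegative $\varepsilon_h^{\bm q}$ and stabilization terms in time and using this bound gives the stated estimate.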
\end{proof}
\subsubsection{Step 3: Estimate of $\varepsilon_h^{q}$ in $L^{\infty}(L^2)$ by an energy argument}
\begin{lemma}\label{error_ana_q}
We have
\begin{align*}
\|\varepsilon_h^{\bm q}(t)\|_{\mathcal T_h}^2 +\|\sqrt{\tau}(\varepsilon_h^u(t)-\varepsilon_h^{\widehat u}(t))\|_{\partial{\mathcal{T}_h}}^2\le C \bigg(\|(\bm\Pi_V {\bm{q}} -\bm q) (0)\|_{\mathcal{T}_h}^2+ \int_0^t \bm {\mathscr G}^2 \bigg),
\end{align*}
where
\begin{align}\label{def_G}
\begin{split}
\bm {\mathscr G} &= \|\bm\Pi_V {\bm{q}} -\bm q\|_{\mathcal T_h} + \|\Pi_Wu_t-u_t\|_{\mathcal T_h} + \|\bm\Pi_V {\bm{q}}_t -\bm q_t\|_{\mathcal{T}_h} + \delta_{k0} \|\Pi_W u - u\|_{\mathcal T_h} \\
&\quad + \| F(u)- \mathcal I_h F(u) \|_{\mathcal T_h} + \| u - \mathcal I_h u\|_{\mathcal T_h} + h\|\nabla (u - \mathcal I_h u)\|_{\mathcal T_h}.
\end{split}
\end{align}
\end{lemma}
\begin{proof}
Take $(\bm r_h, v_h, \widehat v_h) = (\bm r_h,0 , 0)$ in the error equation \eqref{error_u1} and differentiate the result with respect to time. Also take $(\bm r_h, v_h, \widehat v_h) = (0, v_h, \widehat v_h)$ in \eqref{error_u1} to get
\begin{subequations}\label{err_eq_2}
\begin{align}
(\partial_t\varepsilon_h^{\bm q},\bm r_h)_{\mathcal T_h}-(\partial_t\varepsilon_h^u,\nabla\cdot\bm r_h)_{\mathcal T_h}+\langle \partial_t\varepsilon_h^{\widehat u},\bm r_h\cdot\bm n\rangle_{\partial\mathcal T_h} &= (\bm\Pi_V\bm q_t-\bm q_t,\bm r_h)_{\mathcal T_h},\label{err_eq_a2}\\
(\partial_t\varepsilon_h^u,v_h)_{\mathcal T_h}+(\nabla\cdot\varepsilon_h^{\bm q}, v_h)_{\mathcal{T}_h}-\langle {\varepsilon}_h^{{\bm q}}\cdot\bm n,\widehat v_h\rangle_{\partial{\mathcal{T}_h}} \quad &\nonumber\\
+\langle \tau(\varepsilon_h^u- \varepsilon_h^{\widehat u}), v_h- \widehat v_h\rangle_{\partial\mathcal{T}_h}+ (F(u)-\mathcal I_hF(u_h^\star),v_h)_{\mathcal T_h} &=(\Pi_Wu_t-u_t,v_h)_{\mathcal T_h},\label{err_eq_b2}\\
\varepsilon_h^u|_{t=0}&=0,\label{err_eq_c2}
\end{align}
\end{subequations}
for all $(\bm r_h,v_h,\widehat v_h)\in \bm V_h\times W_h\times M_h$.
Next, take $\bm r_h=\varepsilon_h^{\bm q} $ in \eqref{err_eq_a2} and $(v_h,\widehat v_h)=(\partial_t\varepsilon_h^u,\partial_t\varepsilon_h^{\widehat u})$ in \eqref{err_eq_b2}, and add the resulting equations to obtain
\begin{align*}
\hspace{1em}&\hspace{-1em}\|\partial_t\varepsilon_h^u\|^2_{\mathcal T_h}+(\partial_t\varepsilon_h^{\bm q},\varepsilon_h^{\bm q})_{\mathcal T_h}+\langle \tau(\varepsilon_h^u-\varepsilon_h^{\widehat u}),\partial_t\varepsilon_h^u-\partial_t\varepsilon_h^{\widehat u}\rangle_{\partial\mathcal T_h} \\
&=(\bm\Pi_V\bm q_t-\bm q_t,\varepsilon_h^{\bm q})_{\mathcal T_h}+(\Pi_Wu_t-u_t,\partial_t\varepsilon_h^u)_{\mathcal T_h}- (F( u)-\mathcal I_hF(u_h^\star),\partial_t\varepsilon_h^u)_{\mathcal T_h}.
\end{align*}
Integrating in time gives
\begin{align*}
\hspace{1em}&\hspace{-1em}\frac 1 2 [\|\varepsilon_h^{\bm q}(t)\|_{\mathcal T_h}^2 +\|\sqrt{\tau}(\varepsilon_h^u(t)-\varepsilon_h^{\widehat u}(t))\|_{\partial{\mathcal{T}_h}}^2]+\int_0^t \|\partial_t\varepsilon_h^{u}\|_{\mathcal T_h}^2 \\
&= \frac 12 [\|\varepsilon_h^{\bm q}(0)\|_{\mathcal T_h}^2 +\|\sqrt{\tau}(\varepsilon_h^u(0)-\varepsilon_h^{\widehat u}(0))\|_{\partial{\mathcal{T}_h}}^2 ] +\int_0^t (\bm\Pi_V {\bm{q}_t} -\bm q_t, \varepsilon_h^{\bm q})_{\mathcal{T}_h}\\
&\quad +\int_0^t (\Pi_Wu_t-u_t,\partial_t\varepsilon_h^u)_{\mathcal T_h} - \int_0^t (F(u) - \mathcal I_h F( u_h^\star),\partial_t\varepsilon_h^u)_{\mathcal T_h}.
\end{align*}
Use the Cauchy-Schwarz inequality, $ h \leq 1 $, \Cref{super_con}, and \Cref{non_est} to get
\begin{align*}
\hspace{1em}&\hspace{-1em} \|\varepsilon_h^{\bm q}(t)\|_{\mathcal T_h}^2 +\|\sqrt{\tau}(\varepsilon_h^u(t)-\varepsilon_h^{\widehat u}(t))\|_{\partial{\mathcal{T}_h}}^2 \\
&\le \|\varepsilon_h^{\bm q}(0)\|_{\mathcal T_h}^2 +\|\sqrt{\tau}(\varepsilon_h^u(0)-\varepsilon_h^{\widehat u}(0))\|_{\partial{\mathcal{T}_h}}^2 + C \int_0^t \left( \bm {\mathscr G}^2 + \|\varepsilon_h^u\|_{\mathcal T_h}^2 + \|\varepsilon_h^{\bm q}\|_{\mathcal T_h}^2 \right).
\end{align*}
Apply the integral Gronwall's inequality to obtain
\begin{align*}
\hspace{2em}&\hspace{-2em}\|\varepsilon_h^{\bm q}(t)\|_{\mathcal T_h}^2 +\|\sqrt{\tau}(\varepsilon_h^u(t)-\varepsilon_h^{\widehat u}(t))\|_{\partial{\mathcal{T}_h}}^2\\
&\le C \left( \|\varepsilon_h^{\bm q}(0)\|_{\mathcal T_h}^2 +\|\sqrt{\tau}(\varepsilon_h^u(0)-\varepsilon_h^{\widehat u}(0))\|_{\partial{\mathcal{T}_h}}^2 + \int_0^t \bm {\mathscr G}^2 + \int_0^t \|\varepsilon_h^u\|_{\mathcal T_h}^2\right).
\end{align*}
Next, let $t=0$ in \eqref{error_q1_at_0} and use $\varepsilon_h^{u}(0) = 0$ to get
\begin{align*}
\|\varepsilon_h^{\bm q}(0)\|_{\mathcal T_h}^2 +\|\sqrt{\tau}(\varepsilon_h^u-\varepsilon_h^{\widehat u})(0)\|_{\partial{\mathcal{T}_h}}^2 &= ((\bm\Pi_V {\bm{q}} -\bm q)(0), \varepsilon_h^{\bm q}(0))_{\mathcal{T}_h}.
\end{align*}
Therefore,
\begin{align*}
\|\varepsilon_h^{\bm q}(0)\|_{\mathcal T_h}^2 +\|\sqrt{\tau}(\varepsilon_h^u-\varepsilon_h^{\widehat u})(0)\|_{\partial{\mathcal{T}_h}}^2 \le \|(\bm\Pi_V {\bm{q}} -\bm q)(0)\|_{\mathcal{T}_h}^2,
\end{align*}
and the estimate for $\|\varepsilon_h^u\|^2_{\mathcal T_h}$ in \Cref{theorem_err_u3} completes the proof.
\end{proof}
\subsubsection{Step 4: Superconvergent estimate for $\varepsilon_h^{u}$ in $L^{\infty}(L^2)$ by a duality argument}
\label{subsec:superconv_duality_argument}
To obtain a superconvergent rate for $\|\varepsilon_h^{u}\|_{\mathcal T_h}$, we adapt a duality argument of Wheeler \cite{MR0351124}, which uses an elliptic projection that commutes with the time derivative. It is easy to check that the operator $\Pi_W$ defined in \eqref{HDG_projection_operator} also commutes with the time derivative, i.e.,
$\partial_t \Pi_W u =\Pi_W u_t $.
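To see this (assuming the stabilization parameter $\tau$ does not depend on $t$), note that the defining equations \eqref{HDG_projection_operator} are linear with time-independent test functions; differentiating them in time shows that $(\partial_t \bm\Pi_V \bm q, \partial_t \Pi_W u)$ satisfies the same system with data $(\bm q_t, u_t)$. For example, \eqref{projection_operator_2} gives
\begin{align*}
(\partial_t \Pi_W u, w)_K = \partial_t (\Pi_W u, w)_K = \partial_t (u, w)_K = (u_t, w)_K, \quad \forall w\in \mathcal P_{k-1}(K),
\end{align*}
and similarly for \eqref{projection_operator_1} and \eqref{projection_operator_3}. The unique solvability in \Cref{pro_error} then gives $\partial_t \Pi_W u = \Pi_W u_t$ and $\partial_t \bm\Pi_V \bm q = \bm\Pi_V \bm q_t$.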
For any $t\in[0,T]$, let $(\overline{\bm q}_h,\overline{u}_h,\widehat{\overline u}_h)\in \bm V_h\times W_h\times M_h$ be the solution of the following steady state problem
\begin{align}\label{HDGO_a2}
\mathscr B (\overline{\bm q}_h,\overline{u}_h,\widehat{\overline u}_h; \bm r_h, v_h, \widehat v_h )= (f - \Pi_W u_t - F(u),v_h)_{\mathcal{T}_h},
\end{align}
for all $(\bm r_h,v_h,\widehat{v}_h)\in \bm V_h\times W_h\times M_h$.
The following estimates are proved in \Cref{AppendixA}.
\begin{lemma}\label{error_dual}
For any $t\in[0,T]$, we have
\begin{subequations}
\begin{align}
\|\bm{\Pi}_{V}\bm q- \overline{\bm q}_h\|_{\mathcal{T}_h} &\le C (\|\bm q - \bm\Pi_V \bm q\|_{\mathcal T_h} +\|u_t - \Pi_W u_t\|_{\mathcal T_h}),\label{error_dual_a}\\
\|\Pi_{W} {u}-\overline u_h\|_{\mathcal{T}_h} &\le Ch^{\min\{k,1\}} (\|\bm q - \bm\Pi_V \bm q\|_{\mathcal T_h} +\|u_t - \Pi_W u_t\|_{\mathcal T_h}),\label{error_dual_b}\\
\|\partial_t(\Pi_W u - \overline{u}_h)\|_{\mathcal T_h} &\le Ch^{\min\{k,1\}} (\|\bm q_t - \bm\Pi_V \bm q_t\|_{\mathcal T_h} +\|u_{tt} - \Pi_W u_{tt}\|_{\mathcal T_h}).\label{error_dual_c}
\end{align}
\end{subequations}
\end{lemma}
\begin{lemma}\label{error_e}
Let $e_h^{\bm q}=\bm q_h -\overline{\bm q}_h $, $ e_h^{ u}= u_h - \overline{u}_h $, and $ e_h^{ \widehat{u}}=\widehat{u}_h - {\widehat{\overline u}}_h$. Then for any $t\in[0,T]$ we have
\begin{align*}
\|e_h^u(t)\|_{\mathcal T_h}^2 \le \|\Pi_Wu_0 - \overline u_h(0) \|_{\mathcal T_h}^2 +C \int_0^t \left(\|\partial_t (\Pi_W u - \overline{u}_h)\|_{\mathcal T_h}^2 +h^2\|(\bm\Pi_V {\bm{q}} -\bm q)(0)\|_{\mathcal{T}_h}^2 + \bm {\mathscr K}^2 \right),
\end{align*}
where
\begin{align}\label{def_K}
\begin{split}
\bm {\mathscr K} &= h(\|\bm\Pi_V {\bm{q}} -\bm q\|_{\mathcal T_h} + \|\Pi_Wu_t-u_t\|_{\mathcal T_h} + \|\bm\Pi_V {\bm{q}}_t -\bm q_t\|_{\mathcal{T}_h}) \\
&\quad + \| F(u)- \mathcal I_h F(u) \|_{\mathcal T_h} + \| u - \mathcal I_h u\|_{\mathcal T_h} + h\|\nabla (u - \mathcal I_h u)\|_{\mathcal T_h} + \delta_{k0} \|\Pi_W u - u\|_{\mathcal T_h}.
\end{split}
\end{align}
\end{lemma}
\begin{proof}
By the definition of the operator $\mathscr B$ in \eqref{def_B}, we have
\begin{align*}
\hspace{1em}&\hspace{-1em}(\partial_t e_h^u, v_h)_{\mathcal T_h} + \mathscr B (e_h^{\bm q}, e_h^{u}, e_h^{\widehat u}; \bm r_h, v_h, \widehat v_h )\\
& = (\partial_t u_h, v_h)_{\mathcal T_h} + \mathscr B (\bm q_h, u_h,\widehat u_h; \bm r_h, v_h, \widehat v_h ) - (\partial_t \overline{u}_h, v_h)_{\mathcal T_h} - \mathscr B (\overline{\bm{q}}_h, \overline u_h,\widehat {\overline u}_h; \bm r_h, v_h, \widehat v_h )\\
& =(f, v_h)_{\mathcal T_h} - (\mathcal I_h F(u_h^\star), v_h)_{\mathcal T_h} - (\partial_t \overline{u}_h, v_h)_{\mathcal T_h} - (f - \Pi_W u_t - F(u),v_h)_{\mathcal{T}_h}\\
& =(\partial_t (\Pi_W u - \overline{u}_h), v_h)_{\mathcal T_h} + (F(u)-\mathcal I_h F(u_h^\star), v_h)_{\mathcal T_h}.
\end{align*}
Take $(\bm r_h, v_h,\widehat v_h) = (e_h^{\bm q}, e_h^{u}, e_h^{\widehat u})$ and use \Cref{super_con}, \Cref{non_est}, \Cref{error_ana_q}, the bound
$$
\| \varepsilon_h^u \|_{\mathcal T_h} = \| \Pi_W u - \overline{u}_h - e_h^u \|_{\mathcal T_h} \leq \| \Pi_W u - \overline{u}_h \|_{\mathcal T_h} + \| e_h^u \|_{\mathcal T_h},
$$
and also \Cref{error_dual} to give
\begin{align*}
\hspace{1em}&\hspace{-1em}\frac 12 \frac{d}{dt} \|e_h^u\|_{\mathcal T_h}^2 + \|e_h^{\bm q}\|_{\mathcal T_h}^2 + \langle \tau(e_h^{u} - e_h^{\widehat u}), e_h^{u}- e_h^{\widehat u} \rangle_{\partial\mathcal T_h}\\
&\le C \left( \bm {\mathscr K}^2 + \|\partial_t (\Pi_W u - \overline{u}_h)\|_{\mathcal T_h}^2 +h^2\|(\bm\Pi_V {\bm{q}} -\bm q)(0)\|_{\mathcal{T}_h}^2 + \|e_h^u\|_{\mathcal T_h}^2 \right).
\end{align*}
Integration from $0$ to $t$ gives
\begin{align*}
\hspace{1em}&\hspace{-1em} \|e_h^u(t)\|_{\mathcal T_h}^2 +\int_0^t \left[\|e_h^{\bm q}\|_{\mathcal T_h}^2 + \langle \tau(e_h^{u} - e_h^{\widehat u}), e_h^{u}- e_h^{\widehat u} \rangle_{\partial\mathcal T_h}\right] \\
&\le \|e_h^u(0)\|_{\mathcal T_h}^2 +C \int_0^t (\|\partial_t (\Pi_W u - \overline{u}_h)\|_{\mathcal T_h}^2 +h^2 \|(\bm\Pi_V {\bm{q}} -\bm q)(0)\|_{\mathcal{T}_h}^2 + \bm {\mathscr K}^2) +C \int_0^t \|e_h^u\|_{\mathcal T_h}^2.
\end{align*}
By Gronwall's inequality and $e_h^u(0) = u_h(0) - \overline u_h(0) = \Pi_Wu_0 - \overline u_h(0) $, we have
\begin{align*}
\|e_h^u(t)\|_{\mathcal T_h}^2 \le \|\Pi_Wu_0 - \overline u_h(0) \|_{\mathcal T_h}^2 +C \int_0^t (\|\partial_t (\Pi_W u - \overline{u}_h)\|_{\mathcal T_h}^2 +h^2\|(\bm\Pi_V {\bm{q}} -\bm q)(0)\|_{\mathcal{T}_h}^2 + \bm {\mathscr K}^2).
\end{align*}
\end{proof}
A combination of \Cref{error_dual} and \Cref{error_e} gives the following lemma:
\begin{lemma}\label{supper}
For any $t\in[0,T]$, we have
\begin{align*}
\|\varepsilon_h^u(t)\|_{\mathcal T_h}\le Ch^{\min\{k,1\}} (\|\bm q(0) - \bm\Pi_V \bm q(0)\|_{\mathcal T_h} +\|u_t(0) - \Pi_W u_t(0)\|_{\mathcal T_h}) + C\int_0^t \bm {\mathscr L},
\end{align*}
where
\begin{align}\label{def_L}
\begin{split}
\bm {\mathscr L} &= h (\|\bm\Pi_V {\bm{q}} -\bm q\|_{\mathcal T_h} + \|\Pi_Wu_t-u_t\|_{\mathcal T_h} ) + h^{\min\{k,1\}} ( \|u_{tt} - \Pi_W u_{tt}\|_{\mathcal T_h} + \|\bm\Pi_V {\bm{q}}_t -\bm q_t\|_{\mathcal{T}_h} ) \\
&\quad + \| F(u)- \mathcal I_h F(u) \|_{\mathcal T_h} + \| u - \mathcal I_h u\|_{\mathcal T_h} + h\|\nabla (u - \Pi_{k+1} u)\|_{\mathcal T_h} + \delta_{k0} \|\Pi_W u - u\|_{\mathcal T_h}.
\end{split}
\end{align}
\end{lemma}
Using $ u - u_h = (u-\Pi_W u) + \varepsilon_h^u $, $ \bm q - \bm q_h = (\bm q-\bm\Pi_V \bm q) + \varepsilon_h^{\bm q} $, $ u - u_h^\star = (u-\Pi_{k+1} u) + (\Pi_{k+1}u-u_h^\star) $, and the triangle inequality, we obtain the main result:
\begin{theorem}\label{main_err_qu}
If the nonlinearity $F$ satisfies the global Lipschitz condition in \Cref{gloablly_lip} and the assumptions at the beginning of \Cref{Error_analysis} hold, then for all $0\le t\le T$ the solution $ (\bm q_h, u_h,u_h^\star) $ of the semidiscrete Interpolatory HDG$_k$ equations satisfies
\begin{align*}
\|\bm q(t) - \bm q_h(t)\|_{\mathcal T_h} &\le \|\bm q(t) - \bm{\Pi}_V \bm q(t)\|_{\mathcal T_h} + C\|(\bm\Pi_V {\bm{q}} -\bm q)(0)\|_{\mathcal{T}_h} + C \int_0^t \bm{\mathscr G}, \\
\|u(t) - u_h(t)\|_{\mathcal T_h} & \le \|u(t) - \Pi_W u(t)\|_{\mathcal T_h} + C\int_0^t \bm{\mathscr H},\\
\|u(t) - u_h^\star(t)\|_{\mathcal T_h}&\le Ch^{\min\{k,1\}} (\|\bm q(0) - \bm\Pi_V \bm q(0)\|_{\mathcal T_h} +\|u_t(0) - \Pi_W u_t(0)\|_{\mathcal T_h}) \\
&\quad + \|u(t) - \Pi_{k+1} u(t)\|_{\mathcal T_h} + C \int_0^t \bm {\mathscr L},
\end{align*}
where $\bm{\mathscr G}$, $\bm{\mathscr H}$ and $\bm{\mathscr L}$ are defined in \eqref{def_G}, \eqref{def_H} and \eqref{def_L}, respectively.
\end{theorem}
By \Cref{pro_error}, \Cref{lemmainter}, and \Cref{main_err_qu}, we obtain convergence rates for smooth solutions.
\begin{corollary}\label{res_coll}
If, in addition, $ u $, $ \bm q $, and $ F(u) $ are sufficiently smooth for $ t \in [0,T] $, then for all $0\le t\le T$ the solution $ (\bm q_h, u_h, u_h^\star) $ of the semidiscrete Interpolatory HDG$_k$ equations satisfies
\begin{align*}
\|\bm q(t) - \bm q_h(t)\|_{\mathcal T_h}&\le C h^{k+1}, \
\|u(t) - u_h(t)\|_{\mathcal T_h}\le C h^{k+1}, \
\|u(t) - u_h^\star(t)\|_{\mathcal T_h}\le C h^{k+1+\min\{k,1\}}.
\end{align*}
\end{corollary}
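To indicate how these rates follow (a sketch; the constants depend on Sobolev norms of $u$, $\bm q$, $u_t$, $u_{tt}$, and $F(u)$), consider the bound for $u_h$. Taking $\ell_{\bm q} = \ell_u = k$ in \Cref{pro_error} and applying \Cref{lemmainter} to the interpolation and nonlinear terms gives
\begin{align*}
\bm {\mathscr H}(t) \le C h^{k+1} + C h^{k+2} \le C h^{k+1},
\end{align*}
where the $h^{k+1}$ contribution comes from the projection errors (and, for $k=0$, the $\delta_{k0}$ term) and the $h^{k+2}$ contribution from the interpolation errors. \Cref{main_err_qu} then yields $\|u(t) - u_h(t)\|_{\mathcal T_h} \le C h^{k+1}$; the bounds for $\bm q_h$ and $u_h^\star$ follow in the same way from $\bm{\mathscr G}$ and $\bm{\mathscr L}$.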
\subsection{Error analysis under a local Lipschitz condition}
\label{local}
In applications, the nonlinearity $F$ might not satisfy the global Lipschitz condition of \Cref{gloablly_lip}. Instead, let
\begin{align}\label{Max_value}
M = \max\{|u(t,x)| : x\in \overline{\Omega},\ t\in [0,T]\} + 1.
\end{align}
In this section, we assume the mesh is quasi-uniform, the polynomial degree satisfies $k\ge 1$, and the nonlinearity $F$ satisfies the following local Lipschitz condition:
\begin{assumption}\label{Locally_lip}
There is a constant $L(M)>0$ such that
\begin{align*}
|F(u) - F(v)|_{\mathbb R}\le L(M) |u-v|_{\mathbb R}
\end{align*}
for all $ u, v \in [-M, M]$.
\end{assumption}
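For instance, for the nonlinearity $F(u) = u^3 - u$ of \Cref{sec:numerics}, this assumption holds with an explicit constant: for $u, v \in [-M, M]$,
\begin{align*}
|F(u) - F(v)| = |u - v|\,|u^2 + uv + v^2 - 1| \le (3M^2 + 1)\,|u - v|,
\end{align*}
so \Cref{Locally_lip} holds with $L(M) = 3M^2 + 1$.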
Our proof relies on techniques used in \cite{MR3403707}. Below, we use the notation $ \rho_h^{u^\star} = \Pi_{k+1} u - u_h^\star $.
\begin{lemma}\label{theorem_err_u3local}
If $h$ is small enough and $ k \geq 1 $, then there exists $t^*_h \in (0, T]$ such that \Cref{error_ana_q} and \Cref{supper} hold for all $t\in [0, t^*_h]$.
\end{lemma}
\begin{proof}
Take $(\bm r_h,v_h,\widehat{v}_h)=(\varepsilon_h^{\bm q},\varepsilon_h^{u},\varepsilon_h^{\widehat u})$ in \eqref{error_u1} to give
\begin{align}\label{error_q1_at_00}
\begin{split}
\hspace{2em}&\hspace{-2em} \frac 1 2\frac{d}{dt}\|\varepsilon_h^{u}\|^2_{\mathcal{T}_h}+ \|\varepsilon_h^{\bm q}\|^2_{\mathcal{T}_h}+\langle \tau(\varepsilon_h^u -\varepsilon_h^{\widehat{u}}), \varepsilon_h^u -\varepsilon_h^{\widehat{u}}\rangle_{\partial{\mathcal{T}_h}} \\
& = (\bm\Pi_V {\bm{q}} -\bm q, \varepsilon_h^{\bm q})_{\mathcal{T}_h} + (\Pi_W u_t - u_t , \varepsilon_h^{u})_{\mathcal{T}_h} - (F(u) - \mathcal I_h F(u_h^\star),\varepsilon_h^u)_{\mathcal T_h}.
\end{split}
\end{align}
Take $t=0$ and use the fact $\varepsilon_h^u(0) = 0$ to obtain
\begin{align*}
\|\varepsilon_h^{\bm q}(0)\|_{\mathcal T_h} \le C \|\bm\Pi_V {\bm{q}}(0) -\bm q(0)\|_{\mathcal T_h}.
\end{align*}
By \Cref{super_con}, we have
\begin{align*}
\|\rho_h^{u^\star}(0)\|_{\mathcal T_h}\le C \| u(0) - \mathcal I_h u(0) \|_{\mathcal T_h} + Ch(\|\bm\Pi_V {\bm{q}}(0) -\bm q(0)\|_{\mathcal T_h} + \|\nabla(u(0) - \mathcal I_h u(0))\|_{\mathcal T_h}).
\end{align*}
The inverse inequality gives
\begin{align*}
\|\rho_h^{u^\star}(0)\|_{L^\infty(\Omega)} &\le C h^{-\frac d 2}\| u(0) - \mathcal I_h u(0) \|_{\mathcal T_h}\\
&\quad + Ch^{1-\frac d 2}(\|\bm\Pi_V {\bm{q}}(0) -\bm q(0)\|_{\mathcal T_h} + \|\nabla(u(0) - \mathcal I_h u(0))\|_{\mathcal T_h}).
\end{align*}
Since the exact solution is smooth at $ t = 0 $, we can choose $ h $ small enough so that $ \|\rho_h^{u^\star}(0)\|_{L^\infty(\Omega)} < 1/2 $. Moreover, since the error equation \eqref{error_u1} depends continuously on $t$, another application of the inverse inequality shows that there exists $t^*_h \in(0, T]$ such that for all $ h $ small enough,
\begin{align}\label{proof_error_u1}
\|\rho_h^{u^\star}(t)\|_{L^\infty(\Omega)} \le 1/2 \quad \mbox{for all $t\in [0, t^*_h]$.}
\end{align}
For all $h$ sufficiently small we have
\begin{align}\label{eqn:u_L2_proj_error_small_enough}
\| u(t) - \Pi_{k+1} u(t) \|_{L^\infty(\Omega)} \le 1/2 \quad \mbox{for all $t\in [0, t^*_h]$.}
\end{align}
This implies for all $ h $ small enough and all $t\in [0, t^*_h]$,
\begin{align*}
\| \Pi_{k+1} u \|_{L^\infty} &\leq \| u \|_{L^\infty} + \| u - \Pi_{k+1} u \|_{L^\infty} \leq \| u \|_{L^\infty} + 1/2 \leq M,\\
\| u_h^\star \|_{L^\infty} &\leq \| \Pi_{k+1} u \|_{L^\infty} + \| \Pi_{k+1} u - u_h^\star \|_{L^\infty} \leq ( \| u \|_{L^\infty} + 1/2 ) + 1/2 = M.
\end{align*}
Therefore, $u$, $ \Pi_{k+1} u $, and $u_h^\star$ are located in the interval $[-M,M]$, where $M$ is defined in \eqref{Max_value}. This allows us to take advantage of the local Lipschitz condition of $F(u)$ for all $t \in [0, t^*_h]$. Hence, \Cref{error_ana_q} and \Cref{supper} hold for all $t\in [0, t^*_h]$.
\end{proof}
\begin{lemma}\label{theorem_err_u3_extend}
For $h$ small enough and $k\ge 1$, the conclusions of \Cref{error_ana_q} and \Cref{supper} are true on the whole time interval $[0,T]$.
\end{lemma}
\begin{proof}
Fix $ h^* > 0 $ so that \eqref{proof_error_u1}, \eqref{eqn:u_L2_proj_error_small_enough}, and \Cref{theorem_err_u3local} are true for all $ h \leq h^* $, and assume $t^*_h$ is the largest value for which \eqref{proof_error_u1} is true for all $ h \leq h^* $. Define the set $ \mathcal{A} = \{ h \in [0,h^*] : t^*_h \neq T \} $. If the result is not true, then $ \mathcal{A} $ is nonempty, $ \inf \{ h : h\in \mathcal{A} \} = 0 $, and also
\begin{align}\label{assum}
\|\rho_h^{u^\star}(t^*_h)\|_{L^\infty(\Omega)} =1/2 \quad \mbox{for all $ h \in \mathcal{A} $.}
\end{align}
However, by the inverse inequality, $k\ge 1$, and since \Cref{theorem_err_u3local} is true, we have
\begin{align*}
\|\rho_h^{u^\star}(t^*_h)\|_{L^{\infty}(\Omega)} \le Ch^{-\frac d 2}\|\rho_h^{u^\star}(t^*_h)\|_{\mathcal T_h}\le C h^{3-d/2} \quad \mbox{for all $ h \in \mathcal{A} $.}
\end{align*}
Since $ C $ does not depend on $ h $, there exists $ h^*_1 \leq h^* $ such that $ \|\rho_h^{u^\star}(t^*_h)\|_{L^{\infty}(\Omega)}<1/2 $ for all $ h \in \mathcal{A} $ such that $ h \leq h^*_1 $. This contradicts \eqref{assum}, and therefore $t^*_h = T$ for all $ h $ small enough.
\end{proof}
\begin{theorem}\label{main_err_qu2}
If the nonlinearity $F$ satisfies the local Lipschitz condition in \Cref{Locally_lip}, the mesh is quasi-uniform, $ k \geq 1 $, and the assumptions at the beginning of \Cref{Error_analysis} hold, then for all $ h $ small enough the conclusions of \Cref{main_err_qu} and \Cref{res_coll} are true for all $0\le t\le T$.
\end{theorem}
\section{Numerical Results}
\label{sec:numerics}
In this section, we present two examples to demonstrate the performance of the Interpolatory HDG$_k$ method.
\begin{example}[The Allen-Cahn or Chaffee-Infante equation]
We begin with an example with an exact solution in order to illustrate the convergence theory. The domain is the unit square $\Omega = [0,1]\times [0,1]\subset \mathbb R^2$, the nonlinear term is $F( u) = u^3-u$, and the source term $f$ is chosen so that the exact solution is $u = \sin(t)\sin(\pi x)\sin(\pi y)$. Backward Euler and Crank-Nicolson are applied for the time discretization when $k=0$ and $k=1$, respectively, where $k$ is the degree of the polynomial. The time step is chosen as $\Delta t = h$ when $k=0$ and $\Delta t = h^2$ when $k=1$. We report the errors at the final time $ T = 1 $ for polynomial degrees $ k = 0 $ and $ k = 1 $ in \Cref{table_1}. The observed convergence rates match the theory.
\begin{table}
\small
\caption{History of convergence.}\label{table_1}
\centering
Example 1: Errors for $\bm{q}_h$, $u_h$ and $u_h^\star$ of Interpolatory HDG$_k$ {
\begin{tabular}{c|c|c|c|c|c|c|c}
\Xhline{1pt}
\multirow{2}{*}{Degree}
&\multirow{2}{*}{$\frac{h}{\sqrt{2}}$}
&\multicolumn{2}{c|}{$\|\bm{q}-\bm{q}_h\|_{0,\Omega}$}
&\multicolumn{2}{c|}{$\|u-u_h\|_{0,\Omega}$}
&\multicolumn{2}{c}{$\|u-u_h^\star\|_{0,\Omega}$} \\
\cline{3-8}
& &Error &Rate
&Error &Rate
&Error &Rate
\\
\cline{1-8}
\multirow{5}{*}{ $k=0$}
&$2^{-1}$ &1.2889 & &5.0344E-01 & &4.5836E-01 &\\
&$2^{-2}$ &7.0471E-01 &0.87 &2.8491E-01 &0.82 &2.5673E-01 &0.84\\
&$2^{-3}$ & 3.5473E-01 &0.99 &1.5511E-01 &0.88 &1.4105E-01 &0.86\\
&$2^{-4}$ &1.7648E-01 &1.00 &8.0617E-02 &0.94 &7.3725E-02 &0.94\\
&$2^{-5}$ &8.7855E-02 &1.00 &4.1025E-02 &0.97 & 3.7627E-02 &0.97\\
\cline{1-8}
\multirow{5}{*}{$k=1$}
&$2^{-1}$ & 3.7304E-01 & &1.7028E-01 & &3.0236E-02 &\\
&$2^{-2}$ &9.9820E-02 &1.90 &4.8288E-02 &1.82 &3.9074E-03 &2.95\\
&$2^{-3}$ &2.5307E-02 &1.98 &1.2561E-02 &1.94 &4.7940E-04 &3.02\\
&$2^{-4}$ &6.3422E-03 &2.00 &3.1825E-03 &1.98 &5.9047E-05 &3.02\\
&$2^{-5}$ &1.5858E-03 &2.00 &7.9966E-04 &2.00 &7.3168E-06 &3.01\\
\Xhline{1pt}
\end{tabular}
}
\end{table}
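As a sanity check, the tabulated rates can be recomputed from consecutive errors: since the mesh size halves between rows, the observed rate is $\log_2(e_{2h}/e_h)$. A short script (errors taken from the $k=1$, $\|\bm q-\bm q_h\|_{0,\Omega}$ column of \Cref{table_1}) illustrates this:

```python
import math

# L2 errors for q_h with k = 1 (Table 1); the mesh size halves at each level
errors = [3.7304e-01, 9.9820e-02, 2.5307e-02, 6.3422e-03, 1.5858e-03]

# observed rate between consecutive refinement levels: log2(e_coarse / e_fine)
rates = [math.log2(e0 / e1) for e0, e1 in zip(errors, errors[1:])]
print([round(r, 2) for r in rates])  # [1.9, 1.98, 2.0, 2.0], matching the table
```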
\end{example}
\begin{example}[The Schnakenberg model]
Next, we consider a more complicated example: a reaction diffusion PDE system with homogeneous Neumann boundary conditions that does not satisfy the assumptions of the convergence theory established here. This example demonstrates the applicability of the Interpolatory HDG$_k$ method to more general problems.
Specifically, we consider the Schnakenberg model, which has been used to model the spatial distribution of a morphogen; see \cite{MR2511741} for more details. The Schnakenberg system has the form
\begin{align*}
\frac{\partial C_a}{\partial t} & = D_1\nabla^2 C_a + \kappa(a - C_a +C_a^2 C_i),\\
\frac{\partial C_i}{\partial t} & = D_2\nabla^2 C_i+ \kappa(b - C_a^2 C_i),
\end{align*}
with initial conditions
\begin{align*}
C_a(\cdot,0) & =a+b + 10^{-3}\exp\big(-100((x-1/3)^2 + (y-1/2)^2)\big),\\
C_i(\cdot,0)& = \frac {b}{(a+b)^2},
\end{align*}
and homogeneous Neumann boundary conditions. The parameter values are $\kappa = 100$, $a = 0.1305$, $b = 0.7695$, $D_1 = 0.05$, and $D_2 = 1$. We choose polynomial degree $k=1$ and apply Crank-Nicolson for the time discretization with time step $\Delta t = 0.001$.
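Note that the initial data perturb the spatially homogeneous steady state $(C_a, C_i) = (a+b,\, b/(a+b)^2)$, at which both reaction terms vanish; a quick check with the parameter values above:

```python
kappa, a, b = 100.0, 0.1305, 0.7695

# homogeneous steady state of the Schnakenberg kinetics
Ca = a + b
Ci = b / (a + b) ** 2

# both reaction terms vanish there (up to floating-point rounding)
r1 = kappa * (a - Ca + Ca ** 2 * Ci)
r2 = kappa * (b - Ca ** 2 * Ci)
print(abs(r1) < 1e-10, abs(r2) < 1e-10)  # True True
```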
We vary the spatial domain but keep all of the parameters in the model unchanged. The first domain is the unit square $\Omega = [0,1]\times [0,1]$, which is partitioned into $2048$ elements. The second domain is the circle $\Omega = \{(x,y) : (x-0.5)^2 + (y - 0.5)^2 <0.5^2\}$, for which we use $7168$ elements.
Numerical results are shown in \Cref{fig:rd_system_square}--\Cref{fig:rd_system_circle}. Spot patterns form on the square and circular domains. Our numerical results are very similar to results reported in \cite{MR2511741}.
\begin{figure}
\caption{Spot pattern formed by the Schnakenberg model on the square domain.}
\label{fig:rd_system_square}
\end{figure}
\begin{figure}
\caption{Spot pattern formed by the Schnakenberg model on the circular domain.}
\label{fig:rd_system_circle}
\end{figure}
\end{example}
\section{Conclusion}
In our earlier work \cite{CockburnSinglerZhang1}, we considered an Interpolatory HDG$_k$ method for semilinear parabolic PDEs with a general nonlinearity of the form $F(\nabla u, u)$. The interpolatory approach achieves optimal convergence rates and reduces the computational cost compared to standard HDG, since all of the HDG matrices are assembled once before the time stepping procedure. However, that method does not exhibit superconvergence by postprocessing.
In this work, we proposed a new superconvergent Interpolatory HDG$_k$ method for approximating the solution of reaction diffusion PDEs. Unlike our earlier Interpolatory HDG$_k$ work \cite{CockburnSinglerZhang1}, the new method uses a postprocessing $u_h^\star$ to evaluate the nonlinear term. This change provides the superconvergence, and the new method also keeps all of the computational advantages of using an interpolatory approach for the nonlinear term. We proved the superconvergence under a global Lipschitz condition for the nonlinearity, and then extended the superconvergence results to a local Lipschitz condition assuming the mesh is quasi-uniform.
In the second part of this work \cite{ChenCockburnSinglerZhang2}, we again consider reaction diffusion equations and extend the ideas here to derive other superconvergent interpolatory HDG methods inspired by hybrid high-order methods \cite{MR3507267}. However, it is currently not clear whether the present approach can be used to obtain the superconvergence for semilinear PDEs with a general nonlinearity $F(\nabla u, u)$. We are currently exploring this issue.
\section{Appendix A}\label{AppendixA}
Recall the steady state problem \eqref{HDGO_a2} from \Cref{subsec:superconv_duality_argument}, which we repeat here for convenience: let $(\overline{\bm q}_h,\overline{u}_h,\widehat{\overline u}_h)\in \bm V_h\times W_h\times M_h$ be the solution of
\begin{align}\label{HDGO_aa2}
\mathscr B (\overline{\bm q}_h,\overline{u}_h,\widehat{\overline u}_h, \bm r_h, v_h, \widehat v_h )= (f - \Pi_W u_t - F(u),v_h)_{\mathcal{T}_h},
\end{align}
for all $(\bm r_h,v_h,\widehat{v}_h)\in \bm V_h\times W_h\times M_h$. Since $ \Pi_W $ commutes with the time derivative, taking the partial derivative of \eqref{HDGO_aa2} with respect to $t$ shows $(\partial_t \overline{\bm q}_h,\partial_t\overline{u}_h,\partial_t\widehat{\overline u}_h)\in \bm V_h\times W_h\times M_h$ is the solution of
\begin{align}\label{HDGO_3}
\mathscr B (\partial_t\overline{\bm q}_h,\partial_t\overline{u}_h,\partial_t\widehat{\overline u}_h, \bm r_h, v_h, \widehat v_h )&= (f_t - \Pi_W u_{tt} - F'(u)u_t,v_h)_{\mathcal{T}_h},
\end{align}
for all $(\bm r_h,v_h,\widehat{v}_h)\in \bm V_h\times W_h\times M_h$.
The proof of the following lemma is very similar to a proof in \cite{MR2629996}, hence we omit it here.
\begin{lemma}\label{error_u1_appendix}
For $\varepsilon_h^{\overline{\bm q}}=\bm{\Pi}_{V}\bm q- \overline{\bm q}_h $, $ \varepsilon_h^{ \overline u}=\Pi_{W} {u}-\overline u_h $, and $ \varepsilon_h^{ \widehat{\overline u}}=P_M u-\widehat{\overline u}_h$, we have
\begin{align}\label{error_u11appendix}
\mathscr B(\varepsilon_h^{\overline{\bm q}},\varepsilon_h^{\overline u},\varepsilon_h^{\widehat {\overline u}}; \bm r_h, v_h, \widehat v_h) = (\bm \Pi_V \bm{q} - \bm q,\bm{r}_h)_{\mathcal{T}_h}+ (\Pi_W u_t - u_t , v_h)_{\mathcal{T}_h},
\end{align}
for all $(\bm r_h,v_h,\widehat v_h)\in \bm V_h\times W_h\times M_h$.
\end{lemma}
Next, we consider the dual problem \eqref{Dual_PDE1_assumption}, which we again repeat for convenience: let
\begin{equation}\label{Dual_PDE1}
\begin{split}
\bm{\Phi}+\nabla\Psi&=0\qquad\qquad\text{in}\ \Omega,\\
\nabla\cdot\bm \Phi &=\Theta\qquad\quad~~\text{in}\ \Omega,\\
\Psi &= 0\qquad\qquad\text{on}\ \partial\Omega.
\end{split}
\end{equation}
By the assumption at the beginning of \Cref{Error_analysis}, this boundary value problem admits the regularity estimate
\begin{align}\label{regularity_PDE}
\|\bm \Phi\|_{H^{1}(\Omega)} + \|\Psi\|_{H^{2}(\Omega)} \le C \|\Theta\|_{L^{2}(\Omega)},
\end{align}
for all $\Theta \in L^2(\Omega)$.
\begin{lemma}\label{dual_ar}
We have
\begin{align*}
\|\varepsilon^{\overline u}_h\|_{\mathcal{T}_h} &\le Ch^{\min\{k,1\}} (\|\bm q - \bm\Pi_V \bm q\|_{\mathcal T_h}+\|u_t - \Pi_W u_t\|_{\mathcal T_h})\\
\|\varepsilon^{\overline {\bm q}}_h\|_{\mathcal{T}_h} &\le C (\|\bm q - \bm{\Pi}_V \bm q\|_{\mathcal T_h} +\|u_t - \Pi_W u_t\|_{\mathcal T_h}).
\end{align*}
\end{lemma}
\begin{proof}
Let $\Theta = \varepsilon_h^{\overline u}$ in the dual problem \eqref{Dual_PDE1}, and take $(\bm r_h,v_h,\widehat v_h) = (-\bm\Pi_V\bm{\Phi}, \Pi_W\Psi,P_M \Psi)$ in the definition of $ \mathscr{B} $ \eqref{def_B} to get
\begin{align*}
\mathscr B &(\varepsilon^{\overline{\bm q}}_h,\varepsilon^{\overline u}_h,\varepsilon^{\widehat {\overline u}}_h;-\bm\Pi_V\bm{\Phi},\Pi_W\Psi,P_M\Psi)\\
&=-(\varepsilon^{\overline{\bm q}}_h,\bm\Pi_V\bm{\Phi})_{\mathcal{T}_h}+(\varepsilon^{\overline u}_h,\nabla\cdot\bm\Pi_V\bm{\Phi})_{\mathcal{T}_h}-\langle \varepsilon^{\widehat {\overline u}}_h, \bm\Pi_V\bm{\Phi}\cdot \bm{n}\rangle_{\partial\mathcal{T}_h}+ (\nabla\cdot \varepsilon^{\overline{\bm q}}_h, \Pi_W\Psi)_{\mathcal{T}_h}\\
&\quad -\langle \varepsilon^{\overline{\bm q}}_h\cdot \bm{n}, P_M \Psi \rangle_{\partial {\mathcal{T}_h}} + \left\langle\tau(\varepsilon^{\overline u}_h-\varepsilon^{\widehat {\overline u}}_h), \Pi_W \Psi - P_M\Psi \right\rangle_{\partial\mathcal{T}_h}\\
&=-(\varepsilon^{\overline {\bm q}}_h,\bm{\Phi})_{\mathcal{T}_h}+(\varepsilon^{\overline{\bm q}}_h,\bm \Phi - \bm \Pi_V \bm\Phi)_{\mathcal{T}_h}+(\varepsilon^{\overline u}_h,\nabla\cdot\bm{\Phi})_{\mathcal{T}_h}-(\varepsilon^{\overline u}_h,\nabla\cdot (\bm \Phi - \bm \Pi_V \bm\Phi))_{\mathcal{T}_h}\\
&\quad +\langle \varepsilon^{\widehat {\overline u}}_h, (\bm \Phi - \bm \Pi_V \bm\Phi)\cdot \bm{n}\rangle_{\partial\mathcal{T}_h} + (\nabla\cdot \varepsilon^{\overline{\bm q}}_h, \Psi)_{\mathcal{T}_h} + (\nabla\cdot \varepsilon^{\overline{\bm q}}_h, \Pi_W \Psi - \Psi)_{\mathcal{T}_h}\\
&\quad -\langle \varepsilon^{\overline{\bm q}}_h\cdot \bm{n}, \Psi \rangle_{\partial {\mathcal{T}_h}} + \left\langle\tau(\varepsilon^{\overline u}_h-\varepsilon^{\widehat {\overline u}}_h), \Pi_W \Psi - P_M\Psi \right\rangle_{\partial\mathcal{T}_h}\\
&=(\varepsilon^{\overline{\bm q}}_h,\bm \Phi - \bm \Pi_V \bm\Phi)_{\mathcal{T}_h} + \|\varepsilon^{\overline u}_h\|_{\mathcal T_h}^2.
\end{align*}
On the other hand, take $(\bm r_h,v_h,\widehat v_h) = (-\bm\Pi_V\bm{\Phi}, \Pi_W\Psi,P_M \Psi)$ in \eqref{error_u11appendix} to get
\begin{align}\label{two_e}
\mathscr B (\varepsilon^{\overline{\bm q}}_h,\varepsilon^{\overline u}_h,\varepsilon^{\widehat {\overline u}}_h;-\bm\Pi_V\bm{\Phi},\Pi_W\Psi,P_M\Psi)= (\bm q - \bm\Pi_V \bm q,\bm\Pi_V\bm{\Phi})_{\mathcal T_h} + (\Pi_W u_t - u_t , \Pi_W \Psi)_{\mathcal{T}_h}.
\end{align}
Comparing the above two equalities gives
\begin{align*}
\|\varepsilon^{\overline u}_h\|^2_{\mathcal{T}_h}
&=-(\varepsilon^{\overline{\bm q}}_h,\bm \Phi - \bm \Pi_V \bm\Phi)_{\mathcal{T}_h}+ (\bm q - \bm{\Pi}_V \bm q,\bm\Pi_V\bm{\Phi})_{\mathcal T_h} + (\Pi_W u_t - u_t , \Pi_W \Psi)_{\mathcal{T}_h} \\
&=-(\varepsilon^{\overline{\bm q}}_h,\bm \Phi - \bm \Pi_V \bm\Phi)_{\mathcal{T}_h}+ (\bm q - \bm\Pi_V \bm q,\bm\Pi_V\bm{\Phi} - \bm \Phi)_{\mathcal T_h} \\
&\quad + (\bm q - \bm\Pi_V \bm q,\bm \Phi)_{\mathcal T_h} + (\Pi_W u_t - u_t , \Pi_W \Psi)_{\mathcal{T}_h} \\
&=-(\varepsilon^{\overline{\bm q}}_h,\bm \Phi - \bm \Pi_V \bm\Phi)_{\mathcal{T}_h}+ (\bm q - \bm\Pi_V \bm q,\bm\Pi_V\bm{\Phi} - \bm \Phi)_{\mathcal T_h} \\
&\quad - (\bm q - \bm\Pi_V \bm q,\nabla \Psi)_{\mathcal T_h} + (\Pi_W u_t - u_t , \Pi_W \Psi)_{\mathcal{T}_h} \\
&=-(\varepsilon^{\overline{\bm q}}_h,\bm \Phi - \bm \Pi_V \bm\Phi)_{\mathcal{T}_h}+ (\bm q - \bm\Pi_V \bm q,\bm\Pi_V\bm{\Phi} - \bm \Phi)_{\mathcal T_h} \\
&\quad - (\bm q - \bm\Pi_V \bm q,\nabla (\Psi - \Pi_W\Psi))_{\mathcal T_h} + (\Pi_W u_t - u_t , \Pi_W \Psi - \min\{k,1\}\Pi_0 \Psi)_{\mathcal{T}_h}.
\end{align*}
Hence, by the regularity of the dual PDE \eqref{regularity_PDE}, we have
\begin{align}\label{errir_Rew}
\|\varepsilon^{\overline u}_h\|^2_{\mathcal{T}_h} \le Ch^2 \|\varepsilon^{\overline{\bm q}}_h\|_{\mathcal T_h}^2 + Ch^{\min\{2k,2\}}\|\bm q - \bm{\Pi}_V \bm q\|_{\mathcal T_h}^2 + Ch^{\min\{2k,2\}}\|u_t - \Pi_W u_t\|_{\mathcal T_h}^2.
\end{align}
Next, take $(\bm r_h,v_h,\widehat{v}_h)=(\varepsilon_h^{\overline {\bm q}},\varepsilon_h^{\overline u},\varepsilon_h^{\widehat {\overline u}})$ in \eqref{error_u11appendix} to obtain
\begin{align*}
\hspace{1em}&\hspace{-1em} \|\varepsilon_h^{\overline{\bm q}}\|^2_{\mathcal{T}_h}+\langle \tau(\varepsilon_h^{\overline u} -\varepsilon_h^{\widehat{ \overline u}}), \varepsilon_h^{\overline u} -\varepsilon_h^{\widehat{ \overline u}} \rangle_{\partial{\mathcal{T}_h}} \\
&= (\bm\Pi_V {\bm{q}} -\bm q, \varepsilon_h^{\overline {\bm q}})_{\mathcal{T}_h} + (\Pi_W u_t - u_t,\varepsilon_h^{\overline u})_{\mathcal T_h}\\
&\le C\|\bm\Pi_V {\bm{q}} -\bm q\|_{\mathcal T_h}^2 + \frac 1 2 \|\varepsilon_h^{\overline {\bm q}}\|^2_{\mathcal{T}_h} + 4C\| \Pi_W u_t - u_t\|_{\mathcal T_h}^2 +\frac 1 {4C} \|\varepsilon_h^{\overline u}\|_{\mathcal T_h}^2.
\end{align*}
This implies
\begin{align}\label{energy_q_appdenix}
\|\varepsilon_h^{\overline{\bm q}}\|^2_{\mathcal{T}_h}+\langle \tau(\varepsilon_h^{\overline u} -\varepsilon_h^{\widehat{ \overline u}}), \varepsilon_h^{\overline u} -\varepsilon_h^{\widehat{ \overline u}} \rangle_{\partial{\mathcal{T}_h}}
\le 2C\|\bm\Pi_V {\bm{q}} -\bm q\|_{\mathcal T_h}^2 + 8C\| \Pi_W u_t - u_t\|_{\mathcal T_h}^2 +\frac 1 {2C} \|\varepsilon_h^{\overline u}\|_{\mathcal T_h}^2.
\end{align}
Next, use $h\le 1$ and substitute \eqref{energy_q_appdenix} into \eqref{errir_Rew} to yield the result.
\end{proof}
Following the same steps, we obtain the following result:
\begin{lemma}\label{dual_ar2}
We have
\begin{align*}
\|\partial_t(\Pi_W u - \overline u_h)\|_{\mathcal{T}_h} &\le Ch^{\min\{k,1\}} (\|\bm q_t - \bm\Pi_V \bm q_t\|_{\mathcal T_h} +\|u_{tt} - \Pi_W u_{tt}\|_{\mathcal T_h}).
\end{align*}
\end{lemma}
\end{document} |
\begin{document}
\title{\large A lower bound on HMOLS with equal sized holes}
\author{Michael Bailey}
\address{\rm Michael Bailey:
Mathematics and Statistics,
University of Victoria, Victoria, BC, Canada
}
\email{[email protected]}
\author{Coen del Valle}
\address{\rm Coen del Valle:
Mathematics and Statistics,
University of Victoria, Victoria, BC, Canada}
\email{[email protected]}
\author{Peter J.~Dukes}
\address{\rm Peter J.~ Dukes:
Mathematics and Statistics,
University of Victoria, Victoria, BC, Canada
}
\email{[email protected]}
\thanks{Research of Peter Dukes is supported by NSERC grant 312595--2017}
\date{\today}
\begin{abstract}
It is known that $N(n)$, the maximum number of mutually orthogonal latin squares of order $n$, satisfies the lower bound $N(n) \ge n^{1/14.8}$ for large $n$. For $h\ge 2$, relatively little is known about the quantity $N(h^n)$, which denotes the maximum number of `HMOLS' or mutually orthogonal latin squares having a common equipartition into $n$ holes of a fixed size $h$.
We generalize a difference matrix method that had been used previously for explicit constructions of HMOLS.
An estimate of R.M. Wilson on higher cyclotomic numbers guarantees our construction succeeds in suitably large finite fields.
Feeding this into a generalized product construction,
we are able to establish the lower bound $N(h^n) \ge (\log n)^{1/\delta}$ for any $\delta>2$ and all $n > n_0(h,\delta)$.
\end{abstract}
\maketitle
\hrule
\section{Introduction}
\subsection{Overview}
A \emph{latin square} is an $n \times n$ array with entries from an
$n$-element set of symbols such that every row and column is a permutation
of the symbols. Often the symbols are taken to be from
$[n]:=\{1,\dots,n\}$. The integer $n$ is called the \emph{order} of the
square.
Two latin squares $L$ and $L'$ of order $n$ are \emph{orthogonal} if
$\{(L_{ij},L'_{ij}): i,j \in [n]\}=[n]^2$; that is, two
squares are orthogonal if, when superimposed, all ordered pairs of symbols
are distinct.
A family of latin squares in which any pair are orthogonal is
called a set of \emph{mutually orthogonal latin squares}, or `MOLS' for
short. The maximum size of a set of MOLS of order $n$ is denoted $N(n)$.
It is easy to see that $N(n) \le n-1$ for $n>1$, with equality if and only
if there exists a projective plane of order $n$. Consequently, $N(q)=q-1$
for prime powers $q$. Using a number sieve and some recursive constructions,
Beth showed \cite{Beth} (building on \cite{CES,WilsonMOLS}) that
$N(n) \ge n^{1/14.8}$ for large $n$. In fact, by inspecting the sieve
a little more closely, $14.8$ can be replaced by $14.7994$; we use this observation
later to keep certain bounds a little cleaner.
In this article, we are interested in a variant on MOLS.
An \emph{incomplete latin square} of order $n$
is an
$n \times n$ array $L=(L_{ij}: i,j \in [n])$ with entries either blank or in $[n]$,
together with a partition $(H_1,\dots,H_m)$ of some subset of $[n]$ such that
\begin{itemize}
\item
$L_{ij}$ is empty if $(i,j) \in \cup_{k=1}^m H_k \times H_k$ and otherwise contains exactly one symbol;
\item
every row and every column in $L$ contains each symbol at most once; and
\item
symbols in $H_k$ do not appear in rows or columns indexed by $H_k$, $k=1,\dots,m$.
\end{itemize}
The sets $H_k$ are often taken to be intervals of consecutive
rows/columns/symbols (but need not be).
As one special case, when $m=n$ and each $H_k=\{k\}$, the definition is equivalent to an \emph{idempotent} latin square $L$, that is, one satisfying $L_{ii}=i$ for each $i \in [n]$, except that the diagonal is removed to produce the corresponding incomplete latin square.
The \emph{type} of an incomplete latin square is the list $(h_1,\dots,h_m)$, where $h_i = |H_i|$ for each $i=1,\dots,m$. When $h_i=n/m$ for all $i$, so that the set of holes is a uniform partition of $[n]$, the term `holey latin square' is used, and the type is abbreviated to $h^m$, where $h=n/m$. To clarify the notation, we henceforth recycle the parameter $n$ as the number of holes, so that type $h^n$ is considered. The relevant squares are then $hn \times hn$.
Two holey latin squares $L,L'$ of type $h^n$ (and sharing the same hole partition) are said to be \emph{orthogonal} if each of the $(hn)^2-nh^2$ ordered pairs of symbols from different holes appear exactly once when $L$ and $L'$ are superimposed. As with MOLS, we use the term `mutually orthogonal' for a set of holey latin squares, any two of which are orthogonal. The abbreviation HMOLS is standard in the more modern literature; see for instance \cite{ABG,Handbook}. Following \cite[\S III.4.4]{Handbook}, we use a similar function $N(h^n)$ as for MOLS to denote the maximum number of HMOLS of type $h^n$. (Some context is needed to properly parse this notation and not mistake `$h^n$' for exponentiation of integers.)
\begin{ex} As an example, we give a pair of HMOLS of type $2^4$, also shown in~\cite{DS}. In our (slightly different) presentation, the holes are $\{1,2\}$, $\{3,4\}$, $\{5,6\}$, $\{7,8\}$.
\begin{center}
\begin{tabular}{|cccccccc|}
\hline
&&8&6&3&7&4&5\\
&&5&7&8&4&6&3\\
7&6&&&1&8&5&2\\
5&8&&&7&2&1&6\\
4&7&2&8&&&3&1\\
8&3&7&1&&&2&4\\
3&5&6&2&4&1&&\\
6&4&1&5&2&3&&\\
\hline
\end{tabular}
\hspace{1.5cm}
\begin{tabular}{|cccccccc|}
\hline
&&5&7&8&4&6&3\\
&&8&6&3&7&4&5\\
5&8&&&7&2&1&6\\
7&6&&&1&8&5&2\\
8&3&7&1&&&2&4\\
4&7&2&8&&&3&1\\
6&4&1&5&2&3&&\\
3&5&6&2&4&1&&\\
\hline
\end{tabular}
\end{center}
\end{ex}
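These conditions are easy to check mechanically. The following verification sketch (not part of the original text; `0` marks a blank cell, and symbol $s$ lies in hole $\lceil s/2\rceil$) confirms that the two squares above are holey latin squares of type $2^4$ and that the pair is orthogonal in the sense defined above:

```python
from itertools import product

# the two squares of the example, row by row; 0 marks a blank cell
L1 = [[0,0,8,6,3,7,4,5],[0,0,5,7,8,4,6,3],[7,6,0,0,1,8,5,2],[5,8,0,0,7,2,1,6],
      [4,7,2,8,0,0,3,1],[8,3,7,1,0,0,2,4],[3,5,6,2,4,1,0,0],[6,4,1,5,2,3,0,0]]
L2 = [[0,0,5,7,8,4,6,3],[0,0,8,6,3,7,4,5],[5,8,0,0,7,2,1,6],[7,6,0,0,1,8,5,2],
      [8,3,7,1,0,0,2,4],[4,7,2,8,0,0,3,1],[6,4,1,5,2,3,0,0],[3,5,6,2,4,1,0,0]]

def hole(s):           # symbols 1..8 fall into holes {1,2},{3,4},{5,6},{7,8}
    return (s - 1) // 2

def is_holey_latin(L):
    n = len(L)
    for i, j in product(range(n), repeat=2):
        blank = hole(i + 1) == hole(j + 1)           # cell (i,j) lies in a hole
        if blank != (L[i][j] == 0):
            return False
        if not blank and hole(L[i][j]) in (hole(i + 1), hole(j + 1)):
            return False                              # symbol meets its own hole
    rows_ok = all(len(set(row) - {0}) == n - 2 for row in L)
    cols_ok = all(len({L[i][j] for i in range(n)} - {0}) == n - 2 for j in range(n))
    return rows_ok and cols_ok

# superimpose: the 8^2 - 4*2^2 = 48 ordered cross-hole pairs must each occur once
pairs = [(L1[i][j], L2[i][j]) for i, j in product(range(8), repeat=2) if L1[i][j]]
orthogonal = len(set(pairs)) == 48 and all(hole(a) != hole(b) for a, b in pairs)
print(is_holey_latin(L1), is_holey_latin(L2), orthogonal)  # True True True
```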
It is easy to see that $N(n-1) \le N(1^n) \le N(n)$, so Beth's result gives a lower bound on HMOLS in the special case $h=1$.
Also, if there exist $k$ HMOLS of type $1^n$ and $k$ MOLS of order $h$, then $N(h^n) \ge k$ follows easily by a standard product construction.
However, very little else is known about HMOLS with holes of a fixed size greater than $1$. Some explicit results are known for a small number of squares. Dinitz and Stinson showed \cite{DS} that $N(2^n) \ge 2$ for $n \ge 4$. Stinson and Zhu \cite{SZ} extended this to $N(h^n) \ge 2$ for all $h \ge 2$, $n \ge 4$. Bennett, Colbourn and Zhu \cite{BCZ} settled the case of three HMOLS with a handful of exceptions. Abel, Bennett and Ge \cite{ABG} obtained several constructions of four, five or six HMOLS and produced a table of lower bounds on $N(h^n)$ for $h \le 20$ and $n \le 50$. For $2 \le h \le 6$, the largest entry in this table is 7, due to Abel and Zhang in \cite{AZ}.
As a na\"{i}ve upper bound, we have $N(h^n) \le n-2$ by an argument similar to that for the standard MOLS upper bound. In more detail, we may permute symbols in a set of HMOLS so that the first row of each square contains symbols $h+1,\dots,nh$, where symbols $1,\dots,h$ are missing and columns $1,\dots,h$ are blank.
Consider the symbols occurring in entry $(h+1,1)$ across the family of HMOLS. Each such symbol avoids $H_1$ and $H_2$, and no hole can contribute to two different squares, since superimposing those squares would produce a forbidden same-hole pair at that cell. Hence at most one symbol from each of the holes $H_3,\dots,H_n$ can appear, and the family contains at most $n-2$ squares.
Our main result is a general lower bound on the rate of growth of $N(h^n)$ for fixed $h$ and large $n$.
\begin{thm}
\label{main}
Let $h$ be a positive integer and $\epsilon>0$. For $k>k_0(h,\epsilon)$, there exists a set of $k$ HMOLS of type $h^n$ for all $n \ge k^{(3+\epsilon)\omega(h)k^2}$, where $\omega(h)$ denotes the number of distinct prime factors of $h$.
\end{thm}
To our knowledge, it has not even been stated that $N(h^n)$ tends to infinity, though by now this is implicit from some results on graph decompositions; see Section~\ref{sec:graph-decompositions}. The bound of Theorem~\ref{main} is very weak, yet we are satisfied at present due to the apparent difficulty of obtaining direct constructions. Unlike in the case of MOLS, or in the case $h=1$, there is no obvious `finite-geometric' object to get started for hole size $h \ge 2$. Indeed, the bulk of our lower bound is needed to get just a single finite field construction of $k$ HMOLS; see Section~\ref{sec:cyclotomic}.
\subsection{Related objects}
Let $n$ and $k$ be positive integers, where $k \ge 2$.
A \emph{transversal design} TD$(k,n)$ consists of an $nk$-element set of \emph{points} partitioned into $k$ \emph{groups}, each of size $n$, and equipped with a family of $n^2$ \emph{blocks} of size $k$ having the property that any two points in distinct groups appear together in exactly one block.
There exists a TD$(k,n)$ if and only if there exists a set of $k-2$ MOLS of order $n$. This equivalence is seen by indexing groups of the partition by rows, columns, and symbols from each square. Transversal designs are closely connected to orthogonal arrays.
As with MOLS, it is possible to extend the definition to include holes. A \emph{holey transversal design} HTD$(k,h^n)$ is a $khn$-element set, say $[k] \times X$, where $X$ has cardinality $hn$ and an equipartition $(H_1,\dots,H_n)$ of \emph{holes} of size $h$, together with a collection of $h^2n(n-1)$ blocks that cover, exactly once each, every pair of elements $(i,x)$, $(j,y)$ in which $i \neq j$ and $x,y$ are in different holes. Of course, one could extend the definition to allow holes of mixed sizes, but the uniform hole size case suffices for our purposes.
For a graph $G$ and positive integer $t$, let $G(t)$ denote the graph obtained by replacing every vertex of $G$ by an independent set of $t$ vertices, and replacing every edge of $G$ by a complete bipartite subgraph between corresponding $t$-sets. In other words, $G(t)$ is the lexicographic graph product $G \cdot \overline{K_t}$. Let us identify in the natural way the set of points of an HTD$(k,h^n)$ with vertices of the graph $K_k(h) \times K_n$. If we interpret the blocks of the transversal design as $k$-cliques on the underlying set of points, then the condition that two elements appear in a block (exactly once) if and only if they are in distinct groups and distinct holes amounts to every edge of $K_k(h) \times K_n$ falling into precisely one $k$-clique.
The following is a summary of the preceding equivalences.
\begin{prop}
Let $h,k,n$ be positive integers with $k \ge 2$. The following are equivalent:
\begin{itemize}
\item
the existence of a set of $k-2$ HMOLS of type $h^n$;
\item
the existence of a holey transversal design HTD$(k,h^n)$; and
\item
the existence of a $K_k$-decomposition of $K_k (h) \times K_n$.
\end{itemize}
\end{prop}
\subsection{Existence via graph decompositions}
\label{sec:graph-decompositions}
In \cite{BKLOT}, Barber, K\"uhn, Lo, Osthus and Taylor prove a powerful existence result on $K_k$-decompositions of `dense' $k$-partite graphs. In a little more detail, let us call a $k$-partite graph $G$ \emph{balanced} if every partite set has the same cardinality and \emph{locally balanced} if every vertex has the same number of neighbors in each of the other partite sets. The main result of \cite{BKLOT} assures that any balanced and locally balanced $k$-partite graph on $kn$ vertices has a $K_k$-decomposition if $n$ is sufficiently large and the minimum degree satisfies $\delta(G) > C(k) (k-1)n$. Here, $C(k)$ is a constant less than one associated with the `fractional $K_k$-decomposition' threshold.
We remark that the preceding machinery is enough to guarantee, for fixed $k$ and $h$, the existence of an HTD$(k,h^n)$ for sufficiently large $n$, since $K_k(h) \times K_n \cong K_k \times K_n(h)$ is $k$-partite and $r$-regular, where $r=h(k-1)(n-1)=(k-1)hn - h(k-1)$. In fact, even a slowly growing parameter $h$ (as a function of $n$) can be accommodated. However, the result in \cite{BKLOT} makes no attempt to quantify how large $n$ must be for the decomposition. Even in the case of the structured $k$-partite graph we are considering, it is likely hopeless to obtain a reasonable bound on $n$ by this method.
Separately, the theory \cite{DMW,LW} of `edge-colored graph decompositions' due to R.M. Wilson and others can be applied to the setting of HMOLS. To sketch the details, we fix $h$ and $k$ and consider the graph $K_k(h) \times K_n$ for large $n$. From this, we set up a directed complete graph $K_n^*$ with $r=(kh)^2-kh^2$ edge-colors between two vertices. Each color corresponds to an edge of the bipartite graph $K_k(h) \times K_2$ occurring between two of the $n$ vertices. Let $\mathcal{H}$ denote the family of all $r$-edge-colored cliques $K_k$ which correspond to legal placements of a block in our TD. We seek an $\mathcal{H}$-decomposition of $K_n^*$, and this is guaranteed for sufficiently large $n$ by \cite[Theorem 1.2]{LW}. (We omit several routine calculations needed to check the hypotheses.)
Wilson's approach makes it difficult to obtain reasonable bounds on $n$, although in this context it is worth mentioning the bounds of Y.~Chang \cite{Chang-BIBD2,Chang-TD} for block designs and transversal designs.
\subsection{Outline} The outline of the rest of the paper is as follows. In Section~\ref{sec:cyclotomic}, we obtain a direct construction of $k$ HMOLS of type $h^q$ for large prime powers $q$. This finite field construction is inspired by a method in \cite{DS} that was applied for $k \le 6$. Then, in Section~\ref{sec:constructions}, we adapt a product-style MOLS construction in \cite{WilsonMOLS} to the setting of HMOLS. The proof of our main result, Theorem~\ref{main}, is completed in Section~\ref{sec:proof}. We conclude with a discussion of a few next steps for research on HMOLS.
\section{A cyclotomic construction}
\label{sec:cyclotomic}
\subsection{Expanding transversal designs of higher index}
Let $\lambda$ be a positive integer. We define a TD$_\lam(k,n)$ similarly to a TD$(k,n)$, except that any two points in distinct groups appear together in exactly $\lam$ blocks (and in no block otherwise).
The integer $\lam$ is called the \emph{index} of the transversal design.
The main idea in what follows is to expand a TD$_\lam(k,h)$, where $h$ is the desired hole size, into an HTD$(k,h^q)$ for suitable large prime powers $q$. Unless $k$ is small relative to $h$, the input design for this construction may require large index $\lam$. When $h$ is itself a prime power, a
TD$_\lam(k,h)$ naturally arises from a linear algebraic construction.
\begin{prop}
\label{td-projection}
Let $h$ be a prime power and let $d$ and $k$ be positive integers with $k \le h^d$. Then there exists a TD$_{h^{d-1}}(k,h)$.
\end{prop}
\begin{proof}
Let $H$ be a field of order $h$. Our construction uses points $H \times H^d$, where groups are of the form $H \times \{v\}$, $v \in H^d$. Consider the family of blocks
$$\mathcal{B} =\{ \{ (a+u\cdot v,v): v \in H^d \} : a \in H, u \in H^d \},$$
where $u \cdot v$ denotes the usual dot product in the vector space $H^d$.
The family $\mathcal{B}$ can be viewed as the result of developing the subfamily
$\mathcal{B}_0 =\{ \{ (u\cdot v,v): v \in H^d \} : u \in H^d \}$
additively under $H$.
Fix two elements $v_1 \neq v_2$ in $H^d$ and a `difference' $\delta \in H$. Then since $|\{u : u \cdot (v_1-v_2)=\delta\}|=h^{d-1}$, it follows that there are exactly $h^{d-1}$ elements in $\mathcal{B}_0$ which achieve difference $\delta$ across the groups indexed by $v_1$ and $v_2$. Therefore, two points $(a_1,v_1)$, $(a_2,v_2) \in H \times H^d$ with $a_1-a_2=\delta$ lie together in exactly one translate of each of those blocks, and hence in exactly $h^{d-1}$ blocks of $\mathcal{B}$. We have shown that $\mathcal{B}$ produces a transversal design of index $h^{d-1}$ on the indicated points and group partition; the restriction to (any) $k$ groups produces the desired TD$_{h^{d-1}}(k,h)$.
\end{proof}
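The construction is small enough to verify computationally. The following sketch (an illustration, not part of the original text) takes $h=2$, $d=2$, so $H=\mathbb{F}_2$ and the index is $h^{d-1}=2$, and checks every cross-group pair count directly:

```python
from itertools import product

h, d = 2, 2
H = range(h)                           # the field F_2; arithmetic is mod h
Hd = list(product(H, repeat=d))        # group labels v in H^d

def dot(u, v):                         # the usual dot product in H^d
    return sum(x * y for x, y in zip(u, v)) % h

# develop the base blocks additively: one block for each (a, u) in H x H^d
blocks = [frozenset(((a + dot(u, v)) % h, v) for v in Hd)
          for a in H for u in Hd]

# every pair of points from distinct groups must lie in exactly h^(d-1) blocks
points = [(x, v) for x in H for v in Hd]
ok = all(sum({p, q} <= b for b in blocks) == h ** (d - 1)
         for p in points for q in points if p[1] != q[1])
print(ok, len(blocks))  # True 8
```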
We now use a standard product construction to build higher index transversal designs for the case where $h$ has multiple distinct prime divisors.
\begin{prop}
If there exists both a TD$_{\lambda_1}(k,h_1)$ and a TD$_{\lambda_2}(k,h_2)$, then there exists a TD$_{\lambda_1\lambda_2}(k,h_1h_2)$.
\end{prop}
\begin{proof}
Take the given TD$_{\lambda_i}(k,h_i)$ on point set $[k]\times H_i$, $i=1,2$, where $[k]$ indexes the groups. We construct our TD$_{\lambda_1\lambda_2}(k,h_1h_2)$ on points $[k]\times H_1 \times H_2$. For each block $\beta$ of the TD$_{\lambda_1}(k,h_1)$, we put the blocks of a TD$_{\lambda_2}(k,h_2)$ on $\{(x,y,z) : (x,y)\in\beta, z\in H_2\}$. It is easy to verify the resulting design is a TD$_{\lambda_1\lambda_2}(k,h_1h_2)$.
\end{proof}
The next result follows immediately from the previous two propositions and induction.
\begin{cor}
\label{prod-inf}
Let $h \ge 2$ be an integer which factors into prime powers as $q_1q_2\cdots q_{\omega(h)}$.
Put $\lam(h,k):=\prod_{i=1}^{\omega(h)} q_i^{d_i-1}$, where $d_i=\lceil \log_{q_i} k\rceil$ for each $i$.
Then there exists a TD$_{\lam(h,k)}(k,h)$.
\end{cor}
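The quantity $\lam(h,k)$ is straightforward to compute from the prime-power factorization of $h$; the helper below (an illustration, not from the text) returns, for example, $\lam(6,5)=2^{2}\cdot 3^{1}=12$, since $d_1=\lceil\log_2 5\rceil=3$ and $d_2=\lceil\log_3 5\rceil=2$:

```python
def prime_power_factors(h):
    """Full prime powers q = p^e exactly dividing h, for h >= 2."""
    factors, p = [], 2
    while p * p <= h:
        if h % p == 0:
            q = 1
            while h % p == 0:
                q, h = q * p, h // p
            factors.append(q)
        p += 1
    if h > 1:
        factors.append(h)
    return factors

def lam(h, k):
    """lambda(h, k) = prod q_i^(d_i - 1) with d_i = ceil(log_{q_i} k), for k >= 2."""
    total = 1
    for q in prime_power_factors(h):
        d = 1
        while q ** d < k:      # smallest d with q^d >= k, i.e. d = ceil(log_q k)
            d += 1
        total *= q ** (d - 1)
    return total

print(lam(6, 5), lam(4, 5), lam(9, 4))  # 12 4 1
```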
Next, we show how to expand a transversal design of group size $h$ and index $\lam$ into an HTD with hole size $h$ (and index one). Roughly speaking, elements are expanded into copies of a finite field $\F_q$, where $q \equiv 1 \pmod{\lam}$. Each block is lifted so that previously overlapping pairs now cover the cyclotomic classes of index $\lam$, and then blocks are developed additively in $\F_q$. To this end, we cite a guarantee of R.M. Wilson on cyclotomic difference families in sufficiently large finite fields.
\begin{lemma}[Wilson; see \cite{WilsonCyc}, Theorem 3]
\label{wilson-cyc}
Let $\lambda$ and $k$ be given integers, $\lambda,k \ge 2$. For any prime power $q \equiv 1 \pmod{\lambda}$ with $q>\lambda^{k(k-1)}$, there exists a $k$-tuple $(a_1,\dots,a_k) \in \F_q^k$ such that the $\binom{k}{2}$ differences $a_j-a_i$, $1 \le i < j \le k$, belong to any prespecified cosets of the index-$\lambda$ subgroup of $\F_q^\times$.
\end{lemma}
Applying this, we have the following result which mirrors \cite[Construction 6]{DLL}.
\begin{prop}
\label{htd-cons}
Suppose there exists a TD$_\lam(k,h)$ and $q$ is a prime power with $q \equiv 1 \pmod{\lam}$, $q>\lam^{k(k-1)}$. Then there exists an HTD$(k,h^q)$.
\end{prop}
\begin{proof}
Consider a TD$_\lam(k,h)$ on $[k] \times H$, with block collection $\mathcal{B}$. Consider the collection of point-block incidences $S:=\{((x,y),\beta) : (x,y)\in\beta\in\mathcal{B}\}$. Let $\mu : {S \choose 2}\to\{0,1,\dots,\lambda-1\}$ be defined such that for each fixed pair $(i,y)$, $(j,y')$ with $1\leq i<j\leq k$, $$\left\{\mu\left(\{((i,y),\beta),((j,y'),\beta)\}\right) : \beta\supset \{(i,y),(j,y')\}\right\}=\{0,1,\dots,\lambda-1\}.$$ (One can choose such a $\mu$ via a `greedy
labeling'.)
Pick a prime power $q\equiv 1\pmod{\lambda}$, $q>\lam^{k(k-1)}$, and let $C_0,C_1,\dots, C_{\lambda-1}$ denote the cyclotomic classes of index $\lambda$ in $\F_q$. By Lemma~\ref{wilson-cyc} there is a map $\phi : S\to \F_q$ such that for every block $\beta\in\mathcal{B}$, and $i<j$, $\phi((i,y),\beta)-\phi((j,y'),\beta)\in C_t$, where $t=\mu(\{((i,y),\beta),((j,y'),\beta)\})$. We construct an HTD$(k,h^q)$ on $[k]\times H\times \F_q$ as follows. For each $a\in C_0$, $\beta\in \mathcal{B}$, and $c\in \F_q$, include the block $a\beta'+c$, where $a\beta'+c=\{(x,y,a\phi((x,y),\beta)+c) : (x,y)\in \beta\}$.
It is clear that if two points are in the same group, they will appear together in no common blocks; this is inherited from the original TD$_\lam(k,h)$. Consider two points in different groups, but the same hole, say $(x,y,z)$, and $(x',y',z)$, where $x \neq x'$. If there were some block $a\beta'+c$ containing both points then we would have $\phi((x,y),\beta)=\phi((x',y'),\beta)$, an impossibility. It remains to show that any two points from different groups and holes appear together in exactly one block. Let $(x,y,z)$, and $(x',y',z')$ be two such points. By construction there is exactly one block $\beta$ satisfying $z-z'\in C_t$ where $t=\mu(\{((x,y),\beta),((x',y'),\beta)\})$. Then, there is some $a\in C_0$ satisfying $a(\phi((x,y),\beta)-\phi((x',y'),\beta))=z-z'$, and so our two chosen points belong to the block $a\beta'+c$, where $c=z-a\phi((x,y),\beta)$.
\end{proof}
Combining Corollary \ref{prod-inf} and Proposition \ref{htd-cons}, we obtain a construction of HTD$(k,h^q)$ for general $h$ and $k$ and certain large prime powers $q$.
\begin{thm}
\label{cyclotomic-hmols}
Let $h \ge 2$ be an integer which factors into prime powers as $q_1q_2\cdots q_{\omega(h)}$.
Then there exists an HTD$(k,h^q)$ for all prime powers $q\equiv 1\pmod{\lam(h,k)}$, $q>\lam(h,k)^{k(k-1)}$. In other words, $N(h^q) \ge k$ for all prime powers $q\equiv 1 \pmod{\lam(h,k+2)}$ with
$q>\lam(h,k+2)^{(k+2)(k+1)}$.
\end{thm}
\subsection{Template matrices and explicit computation}
We include here some remarks on explicit computer-aided construction of HTD$(k,h^q)$ in the special case of prime hole size $h$.
The `expansion' construction of Proposition~\ref{htd-cons} relies on lifting all blocks so that the differences across any two points fall into distinct cyclotomic classes. Using a `template matrix' method introduced by Dinitz and Stinson \cite{DS}, it is possible to impose some additional structure on this lifting to gain an efficiency in computations.
With notation similar to before, we define an $h^d \times h^d$ `template matrix' $T_h(d)$ as the Gram matrix of the vector space $H^d$. That is, rows and columns of $T_h(d)$ are indexed by $H^d$, and $T_h(d)_{uv} = u \cdot v$. Although the order in which columns appear is unimportant, it is convenient to index the rows in lexicographic order. When $h=2$, the template is simply a Walsh-Hadamard matrix (with entries $0,1$ instead of $\pm 1$). We offer another example below.
\begin{ex}
Consider the case $h=3$, $d=2$, which is suitable for the construction of up to $7=3^2-2$ HMOLS having hole size $3$.
With rows (and columns) indexed by the lex order on $\F_3^2$, we have
$$T_3(2)=
\left[\begin{array}{ccc|ccc|ccc}
0&0&0&0&0&0&0&0&0\\
0&1&2&0&1&2&0&1&2\\
0&2&1&0&2&1&0&2&1\\
\hline
0&0&0&1&1&1&2&2&2\\
0&1&2&1&2&0&2&0&1\\
0&2&1&1&0&2&2&1&0\\
\hline
0&0&0&2&2&2&1&1&1\\
0&1&2&2&0&1&1&2&0\\
0&2&1&2&1&0&1&0&2\\
\end{array}\right].$$
\end{ex}
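The template matrix is straightforward to generate by machine. The sketch below (illustrative only) builds $T_h(d)$ as the mod-$h$ Gram matrix of $\F_h^d$ with rows and columns in lexicographic order, and reproduces the $9\times 9$ matrix displayed above.

```python
from itertools import product

def template_matrix(h, d):
    """T_h(d): rows and columns indexed by F_h^d in lexicographic order;
    the (u, v) entry is the dot product u.v reduced mod h."""
    vecs = list(product(range(h), repeat=d))
    return [[sum(a * b for a, b in zip(u, v)) % h for v in vecs] for u in vecs]
```

One can also check directly, for any two distinct columns, the difference property discussed next.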
Observe that the difference between any two distinct columns of $T_h(d)$ achieves every value in $H$ exactly $h^{d-1}$ times; this is essentially the content of Proposition~\ref{td-projection}. For $k \le h^d$, the restriction of $T_h(d)$ to any $k$ columns has the same property. As we illustrate in Example~\ref{401} to follow, aiming for a value of $k$ less than $h^d$ may be a worthwhile tradeoff in computations.
We use the template matrix in conjunction with the following `relative difference matrix' setup for HTDs.
\begin{lemma}[see \cite{DS}]\label{difcon}
Let $G$ be an abelian group of order $g$ with subgroup $H$ of order $h$, and $\mathcal{B}\subseteq G^k$. If for all $r,s$ with $1\leq r<s\leq k$ and each $a\in G\setminus H$ there is a unique $b\in \mathcal{B}$ with $b_r-b_s=a$, then there exists an HTD$(k,h^{g/h})$.
\end{lemma}
To further set up the construction, fix integers $d,h \ge 2$, and put $\lam=h^{d-1}$. Let $q\equiv 1\pmod{\lam}$ be a prime power.
Let $\omega$ be a multiplicative generator of $\F_q$, and define $C_0:= \langle \omega^{\lam} \rangle$
to be the index-$\lam$ subgroup of $\F_q^\times$.
For $1\leq i < \lam$, we denote the coset $\omega^i C_0$ by $C_i$.
Let $k \le h^d$.
Given two $k$-tuples $t \in X^k$ and $u \in Y^k$, define $t \circ u=((t_i,u_i) : 1 \le i \le k) \in (X \times Y)^k$. We take $X=H=\F_h$ and $Y=\F_q$ in what follows, so that the group in Lemma~\ref{difcon} is $G= \F_h \times \F_q$ with subgroup $\F_h \times \{0\}$.
Letting $t_1,t_2,\dots, t_{h^d}$ denote the rows of $T_h(d)$, our construction amounts to a selection of vectors $u_1,u_2,\dots,u_{h^d} \in \F_q^k$ such that
$$\mathcal{B} = \{ t_i \circ (x u_i) : x \in C_0, 1 \le i \le h^d \}$$
satisfies the hypotheses of Lemma~\ref{difcon}.
Dinitz and Stinson \cite{DS} and later, Abel and Zhang \cite{AZ}, reduce the search for such vectors $u_i$ by assuming they have the form
$$
u_1, \omega u_1, \omega^2 u_1, \dots, \omega^{\lam-1}u_1, \dots,
u_h, \omega u_h, \omega^2 u_h, \dots, \omega^{\lam-1}u_h.$$
With this reduction, $\mathcal{B}$ produces an HTD if the quotients $(u_{ir}-u_{is})(u_{jr}-u_{js})^{-1}$ lie in certain cosets of $C_0$ for each pair $r,s$ with $1 \le r < s \le k$. In more detail, fix two such column indices and consider two blocks $b,b' \in \mathcal{B}$ arising from a choice of two rows of $T_h(d)$. When these rows are in the same block of $\lam=h^{d-1}$ consecutive rows, we automatically avoid $b_r-b_s = b'_r-b'_s$ because of the different powers of $\omega$ multiplying the same $u_i$. On the other hand, when these rows are in, say, the $i$th block and $j$th block of $\lam$ rows, $i\neq j$, we must ensure that the quotient $(u_{ir}-u_{is})(u_{jr}-u_{js})^{-1}$ avoids those cyclotomic classes indexed by $e'-e \pmod{\lam}$, whenever $\omega^e u_i$ and $\omega^{e'} u_j$ index rows of $T_h(d)$ which have equal $(r,s)$-differences.
It is a routine (but somewhat tedious) exercise to characterize the `allowed cosets', either computationally for specific $h,d$ or in general. We omit the details, but point out that, for each $r$ and $s$, an arithmetic progression of cyclotomic classes (with difference a power of $h$) is available.
Now, given a table of allowed cosets, the vectors $u_1,\dots,u_h \in \F_q^k$ can be chosen one at a time, where each new vector has coset restrictions on its $(r,s)$-differences. The guarantee of Lemma~\ref{wilson-cyc} can be used for this purpose (giving an alternate proof of Theorem~\ref{cyclotomic-hmols} in the case of prime $h$). However, in practice it often suffices to take significantly smaller values of $q$.
\begin{ex}
\label{401}
To illustrate the method, we construct 9 HMOLS of type $2^{401}$ in $\F_2 \times \F_{401}$; that is, we consider $h=2$, $q=401$. Instead of using all $16$ columns of the template $T_2(4)$, we require only $9+2=11$, as indicated below. Let
\begin{align*}
u_1&=(284, 136, 249, 334, 1, 202, 140, 307, -, 35, 312, -, 0, -, -, -)\\
\text{and}~u_2&=(283, 297, 137, 60, 1, 210, 102, 39, -, 241, 111,-, 0, -, -, -).
\end{align*}
It can be verified that the quotients $(u_{2r}-u_{2s})(u_{1r}-u_{1s})^{-1}$ all lie in allowed cosets for $T_2(4)$ for any distinct indices $r$ and $s$ such that our vectors are nonblank. (As an explanation for the unnatural ordering of entries, it turns out that a column-permutation of the template $T_2(4)$ was more convenient for the computations, at least with our approach.)
\end{ex}
To our knowledge, Example~\ref{401} provides the first (explicit) construction of more than $6$ HMOLS of type $2^n$ for any $n>1$.
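As a partial machine check of Example~\ref{401} (a full verification would require the table of allowed cosets, which we have omitted), the following sketch confirms that the nonblank entries of $u_1$ and $u_2$ are pairwise distinct, so every relevant difference is invertible modulo $401$ and the quotients in question are well defined.

```python
q = 401
# the vectors of Example 401, with None marking the blank positions
u1 = [284, 136, 249, 334, 1, 202, 140, 307, None, 35, 312, None, 0, None, None, None]
u2 = [283, 297, 137, 60, 1, 210, 102, 39, None, 241, 111, None, 0, None, None, None]
idx = [i for i in range(16) if u1[i] is not None]  # the 11 nonblank columns

def quotients(u, v):
    """(v_r - v_s)(u_r - u_s)^{-1} mod q over all nonblank column pairs r < s;
    inverses via Fermat's little theorem since q is prime."""
    out = {}
    for a, r in enumerate(idx):
        for s in idx[a + 1:]:
            du, dv = (u[r] - u[s]) % q, (v[r] - v[s]) % q
            assert du != 0 and dv != 0          # all differences invertible
            out[(r, s)] = (dv * pow(du, q - 2, q)) % q
    return out
```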
\section{Recursive constructions}
\label{sec:constructions}
As we move away from prime powers, we present a product construction which scales the number of holes. The idea is to join (copies of) equal-sized HMOLS on the diagonal and ordinary MOLS off the diagonal. Our proof uses the language of transversal designs.
\begin{prop}
\label{prod2}
$N(h^{mn}) \ge \min \{N(1^m),N(hn),N(h^n) \}$.
\end{prop}
\begin{proof}
We show that the existence of an HTD$(k,h^{mn})$ is implied by the existence of an HTD$(k,1^m)$, TD$(k,hn)$ and HTD$(k,h^n)$. Let us take as points $[k] \times H \times M \times X$, where $|H|=h$, $|M|=m$, $|X|=n$. The groups of our TD are $\{i\} \times H \times M \times X$ and the holes are $[k] \times H \times \{w\} \times \{x\}$, where $i \in [k]$, $w \in M$, and $x \in X$. We construct the block set in two pieces. First, on each layer of points of the form $[k] \times H \times \{w\} \times X$, where $w \in M$, we include the blocks of an HTD$(k,h^n)$ with groups $\{i\} \times H \times \{w\} \times X$ and holes $[k] \times H \times \{w\} \times \{x\}$. Second, let us take an HTD$(k,1^m)$ on $[k] \times M$ and, for each block $B=\{(i,w_i): i = 1,\dots,k\}$, include the blocks of a TD$(k,hn)$ on $\cup_i \{i\} \times H \times \{w_i\} \times X$, where in each case we use the natural group partition induced by first coordinates.
It remains to verify that the block set as constructed covers every pair of points as needed for an HTD$(k,h^{mn})$. To begin, since each of our ingredient blocks is transverse to the group partition, it is clear that two distinct points in the same group are together in no block. Moreover, two distinct points in the same hole appear in the same HTD$(k,h^n)$, the hole partition of which is inherited from our resultant design. Therefore, such elements are also together in no block.
Consider then, two elements $(i_1,j_1,w_1,x_1)$ and $(i_2,j_2,w_2,x_2)$ with $i_1 \neq i_2$ and $(w_1,x_1) \neq (w_2,x_2)$. If $w_1=w_2$, this pair of points occurs in the same HTD$(k,h^n)$, and thus in exactly one block. On the other hand, if $w_1 \neq w_2$, we first locate the unique block of the HTD$(k,1^m)$ containing $(i_1,w_1)$ and $(i_2,w_2)$, and then, within the corresponding TD$(k,hn)$, identify the unique block containing our two given points. We remark that the holes can be safely ignored in this latter case, since we are assuming $w_1 \neq w_2$.
\end{proof}
Although the idea behind the construction in Proposition~\ref{prod2} is very standard, we could not find this result mentioned explicitly in the literature.
To set up our next construction, we recall that an incomplete latin square can have holes that partition a proper subset of the index set. In particular, we consider sets of mutually orthogonal $n \times n$ incomplete latin squares with a single common $h \times h$ hole for integers $n>h>0$. The maximum number of squares in such a set is commonly denoted $N(n;h)$.
\begin{ex}
A noteworthy value is $N(6;2)=2$ in spite of the nonexistence of a pair of orthogonal latin squares of order six. The squares, with common hole $H=\{1,2\}$, are shown below.
\begin{center}
\begin{tabular}{|cccccc|}
\hline
&&3&4&5&6\\
&&4&3&6&5\\
6&3&5&1&4&2\\
4&5&6&2&3&1\\
3&6&2&5&1&4\\
5&4&1&6&2&3\\
\hline
\end{tabular}
\hspace{1.5cm}
\begin{tabular}{|cccccc|}
\hline
&&3&5&6&4\\
&&4&6&5&3\\
3&5&2&4&1&6\\
6&4&1&3&2&5\\
4&6&5&1&3&2\\
5&3&6&2&4&1\\
\hline
\end{tabular}
\end{center}
\end{ex}
Given a set of $k-2$ mutually orthogonal incomplete latin squares of type $(n;h)$, reading blocks as $k$-tuples produces
an \emph{incomplete transversal design}, abbreviated either ITD$(k,(n;h))$ or $\text{TD}(k,n)-\text{TD}(k,h)$. The latter notation is not meant to suggest that a TD$(k,h)$ exists as a subdesign, but rather that pairs of elements from the hole are left uncovered by blocks.
The interested reader is referred to \cite{Handbook,MOLStable} for more information and references on these objects.
We now extend Proposition~\ref{prod2} to get an analog of Wilson's MOLS construction \cite[Theorem 2.3]{WilsonMOLS}.
\begin{prop}
\label{wilsonish-cons}
For $0 \le u < t$,
\begin{equation*}
N(h^{mt+u}) \ge \min \{N(t)-1,N(h^m),N(hm),N(hm+h;h),N(h^u) \}.
\end{equation*}
\end{prop}
\begin{proof}
We show that the existence of an HTD$(k,h^{mt+u})$ is implied by the existence of a TD$(k+1,t)$, HTD$(k,h^m)$, TD$(k,hm)$, HTD$(k,h^u)$ and an ITD$(k,(hm+h;h))$.
We remark that the first of these is equivalent to a resolvable TD$(k,t)$.
The set of points for our design is $[k] \times H \times (M \times X \cup Y)$, where $|H|=h$, $|M|=m$, $|X|=t$, and $|Y|=u$. Similar to the proof of Proposition~\ref{prod2}, the groups are induced by first coordinates, and the holes are `copies of $H$'.
Begin with a TD$(k+1,t)$ on $([k] \cup \{0\}) \times X$, say with blocks $\mathcal{A}$. Let $x_* \in X$ and assume, without loss of generality, that the blocks in $\mathcal{A}$ incident with $(0,x_*)$ are of the form $\{(i,x):i=1,\dots,k\} \cup \{(0,x_*)\}$. In other words, in the induced resolvable TD$(k,t)$, assume one parallel class is labeled as $[k] \times \{x\}$, $x \in X$. Let us identify $Y$ with any $u$-element subset of $\{0\} \times (X \setminus \{x_*\})$.
For each block in $\mathcal{A}$ of the form $\{(i,x):i=1,\dots,k\} \cup \{(0,x_*)\}$, include the blocks of an HTD$(k,h^m)$, on $[k] \times H \times M \times \{x\}$ with groups and holes as usual.
Consider now a block $B \in \mathcal{A}$ which does not contain $(0,x_*)$, say $B=\{(i,x_i):i=0,1,\dots,k\} \in \mathcal{A}$ where $x_0 \neq x_*$.
Put $y_0 = (0,x_0)$. If $y_0 \not\in Y$ (that is if $B$ does not intersect $Y$), we include the blocks of a TD$(k,hm)$ on $\cup_{i=1}^k \{i\} \times H \times M \times \{x_i\}$ with groups and holes as usual. On the other hand, if $y_0 \in Y$ (that is if $B$ intersects $Y$), we include the blocks of an ITD$(k,(hm+h;h))$ on the points $\cup_{i=1}^k \{i\} \times H \times (M \times \{x_i\} \cup \{y_0\})$ and such that the hole of this ITD occurs as
$\cup_{i=1}^k \{i\} \times H \times \{y_0\}$. To finish the construction, we include the blocks of an HTD$(k,h^u)$ on $[k] \times H \times Y$, where again the natural partition into groups and holes is used. We have used four types of blocks, to be referenced below in the order just described.
As a verification, we consider two elements in different groups and holes. Suppose $i$ and $i'$ index two different groups. There are cases to consider.
Consider first a pair of points of the form $(i,j,w,x)$ and $(i',j',w',x')$, where $(w,x) \neq (w',x')$. If $x=x'$, then the two points appear together in exactly one block of the first kind. If $x \neq x'$, then we consider the unique block $B \in \mathcal{A}$ containing $(i,x)$ and $(i',x')$. Our two points are either in exactly one block of the second kind if $B \cap Y = \emptyset$ or exactly one block of the third kind otherwise.
Consider now the points $(i,j,w,x)$ and $(i',j',y')$, where $y' \in Y$. There is exactly one block of the TD$(k+1,t)$ containing $(i,x)$ and $y_0$. Examining the ITD prescribed by the construction, the points $(i,j,w,x)$ and $(i',j',y')$ appear together in exactly one block of the third kind, since $i \neq i'$ and only one of these points belongs to the hole.
Finally, two points $(i,j,y)$ and $(i',j',y')$ appear together in one (and only one) block of the fourth kind if and only if $y \neq y'$.
\end{proof}
\begin{rk}
The construction in Proposition~\ref{prod2} is just the specialization $Y=\emptyset$ of that of Proposition~\ref{wilsonish-cons}. However, we have kept the former stated separately since it requires no assumption on $N(hm+h;h)$.
\end{rk}
\section{Lower bounds}
\label{sec:proof}
\subsection{Preliminary bounds}
To make use of Proposition~\ref{wilsonish-cons}, it is helpful to have a lower bound on $N(n;h)$ resembling Beth's bound for $N(n)$.
\begin{thm}
\label{imols-bound}
Let $h$ be a positive integer. Then $N(n;h) > n^{1/29.6}$ for sufficiently large $n$.
\end{thm}
\begin{proof}
We use the construction of \cite[Theorem 2.4]{WilsonMOLS}. A minor variant gives that, for $0 \le u,v \le t$,
\begin{equation}
\label{imols-cons}
N(mt+u+v;v) \ge \min \{N(m),N(m+1),N(m+2),N(t)-2,N(u)\}.
\end{equation}
(To clarify, the cited theorem `fills the hole' of size $v$ so that the left side becomes $N(mt+u+v)$ and the minimum on the right side includes $N(v)$.)
Put $v=h$ and suppose $k$ is a large integer. For $n \ge k^{29.6}$, write $n = mt+u$, where $m,t,u \ge k^{14.7995}$. Then, from Beth's inequality, there exist $k$ MOLS of each of the side lengths $m,m+1,m+2,u$, and also $k+2$ MOLS of side length $t$.
It follows from (\ref{imols-cons}) that $N(n+h;h) \ge k$, as required.
\end{proof}
The forthcoming proof of Theorem~\ref{main} makes use of two number-theoretic lemmas which are minor variants of classical results. The first of these concerns the selection of a prime, with a congruence restriction, in a large and wide enough interval.
\begin{lemma}
\label{Dirichlet-estimate}
For any sufficiently large integer $M$ and any real number $x>e^M$ there exists a prime $p \equiv 1 \pmod{M}$ satisfying $x < p \le 2x$.
\end{lemma}
\begin{proof}
We use a result \cite[Theorem 1.3]{Dirichlet-bounds} of Bennett, Martin, O'Bryant and Rechnitzer concerning the prime-counting function
$$\pi(x;q,a) := \#\{p \le x: p~\text{ is prime}, p \equiv a \pmod{q}\}.$$
With $q=M$ and $a=1$, their estimate implies
\begin{equation*}
\left| \pi(x;M,1) - \frac{{\mathrm{Li}}(x)}{\phi(M)} \right| \le \frac{1}{160} \frac{x}{(\log x)^2}
\end{equation*}
for $M>10^5$ and all $x > e^M$. Since $\mathrm{Li}(x)\sim x/\log(x)$ and $\phi(M) < \log x$, a routine calculation gives $\pi(2x;M,1)-\pi(x;M,1) \ge 1$ for sufficiently large $x$ and $M$.
\end{proof}
\begin{rk}
Lemma~\ref{Dirichlet-estimate} actually holds with `2' replaced by any constant greater than one; however, the present form suffices for our purposes.
\end{rk}
Next, we
have a Frobenius-style representation theorem for large integers.
\begin{lemma}
\label{Frobenius}
Let $a,b$ and $C$ be positive integers with $\gcd(a,b)=1$. Any $n > a(b+1)(b+C)$ can be written in the form $n=ax+by$ where $x$ and $y$ are integers satisfying $x \ge C$ and $y > ax$.
\end{lemma}
\begin{proof}
The integers $a(C+1), a(C+2), \dots, a(C+b)$ cover all congruence classes mod $b$. Suppose $n \equiv a(C+j) \pmod{b}$, where $j \in \{1,\dots,b\}$. Put $x=C+j$ and
$y=(n-ax)/b$. Then $y$ is an integer with
\begin{equation*}
y > \frac{a(b+1)(C+b)-a(C+b)}{b} = a(C+b) \ge ax.\qedhere
\end{equation*}
\end{proof}
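The proof of Lemma~\ref{Frobenius} is constructive, and can be illustrated with a short computational sketch (illustrative only); the parameter choice $a=8$, $b=49$, $C=99$ anticipates the explicit example given later.

```python
from math import gcd

def frobenius_rep(a, b, C, n):
    """Following the proof of the lemma: for gcd(a, b) = 1 and
    n > a(b+1)(b+C), return (x, y) with n = a*x + b*y, x >= C, y > a*x."""
    assert gcd(a, b) == 1 and n > a * (b + 1) * (b + C)
    for j in range(1, b + 1):      # a(C+1), ..., a(C+b) cover all classes mod b
        if (n - a * (C + j)) % b == 0:
            x = C + j
            y = (n - a * x) // b
            return x, y
```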
\subsection{Proof of the main result}
We are now ready to prove our asymptotic lower bound on HMOLS of type $h^n$.
\begin{proof}[Proof of Theorem~\ref{main}]
Put $M=\lam(h,k+2)$, as defined in Corollary~\ref{prod-inf}. Note that $M \le (k+2)^{\omega(h)}$. Let $K$ denote the ceiling of
$k^{(\omega(h)+\epsilon/4)k^2}$, which we note for large $k$ exceeds both $e^M$ and $M^{(k+2)(k+1)}$.
Using Lemma~\ref{Dirichlet-estimate}, choose two primes $q_1,q_2 \equiv 1 \pmod{M}$ where $q_2 \in (K,2K]$ and $q_1 \in (2K,4K]$.
With $m=q_2$, we have $N(h^m) \ge k$ from Theorem~\ref{cyclotomic-hmols}, $N(hm) \ge k$ from Beth's inequality, and $N(hm+h;h) \ge k$ from Theorem \ref{imols-bound}. The latter two bounds use the assumption that $k$ is large.
From the hypothesis on $n$ and choice of $q_i$, we have for large $k$,
$$n > k^{(3+\epsilon)\omega(h)k^2} > 17K^3 > q_1(q_2+1)(q_2+k^{14.8}).$$
Using Lemma~\ref{Frobenius}, write $n=q_1s+q_2t$, where $s,t$ are integers satisfying $s \ge k^{14.8}$ and $t > q_1s$. Put $u=q_1s$ so that, with this alternate notation, we have $n=mt+u$ with $t>u$. Observe that $N(h^u) \ge k$ from Proposition~\ref{prod2} with $s$ taking the role of $m$ and $q_1$ taking the role of $n$. We additionally have $N(t) > k$ from Beth's inequality and our lower bound on $t$.
From the above properties of $m,t,u$, Proposition~\ref{wilsonish-cons} implies $N(h^n) \ge k$.
\end{proof}
\begin{ex}
We illustrate the proof method by computing an explicit bound for the existence of six HMOLS of type $2^n$. We show that $n> 8 \times 50 \times 148 = 59200$ suffices.
From \cite[Table 1]{ABG}, there exist six HMOLS of types $2^8$ and $2^{49}$. (The former does not arise from the cyclotomic construction of Section 2, but it helps us optimize the bound.)
From \cite[Table III.3.83]{Handbook}, we have $N(1^s) \ge 6$ for all $s \ge 99$. It follows by Proposition~\ref{prod2} that $N(2^{8s}) \ge 6$ for all $s \ge 99$. Put $m=49$ and note that
$N(2m)=N(98) \ge 6$ and $N(2m+2;2) = N(100;2) \ge 6$, where the latter appears in \cite[Table III.4.14]{Handbook}.
Write $n=8s+49t$ where $t >8s$. From \cite[Table III.3.81]{Handbook}, we have $N(t) \ge 7$. Letting $h=2$ and $u=8s$, we conclude from Proposition~\ref{wilsonish-cons} that $N(2^n) \ge 6$.
\end{ex}
\subsection{Inverting the bound}
Here, we offer a lower bound on $N(h^n)$ in terms of $n$.
\begin{thm}\label{N-bound}
Let $h\geq 2$ be an integer and $\delta>2$ a real number. Then $N(h^n)\ge (\log n)^{1/\delta}$ for all $n>n_0(h,\delta)$.
\end{thm}
\begin{proof}
If $k$ is an integer not exceeding the right side of the above bound, then $\log n \ge k^\delta > C k^2 \log k $ for any constant $C >3 \omega(h)$ and sufficiently large $k$. The existence of $k$ HMOLS of type $h^n$ then follows from Theorem~\ref{main}.
\end{proof}
\begin{rk}
Various slightly better `inverse bounds' are possible. For instance, we have
$N(h^n)\ge e^{\frac{1}{2}W(\log(n)/2\omega(h))}$ for sufficiently large $n$, where $W$ denotes Lambert's function, the inverse of $x \mapsto xe^x$. The constants here represent one choice of many.
\end{rk}
\section{Future directions}
It would of course be desirable to produce a lower bound of the form $N(h^n) \ge n^{\delta}$ for some $\delta>0$. This appears difficult using the presently available methods over finite fields, although a sophisticated randomized construction is plausible.
When $k$ is a fixed positive integer (rather than sufficiently large), our proof method can still compute, in principle, a lower bound on $n$ such that $N(h^n) \ge k$. However, this bound incurs a considerable penalty in the analytic number theory for small integers $k$.
To track this penalty, we require a bound on the selection of prime powers in the spirit of \cite[Lemma 5.3]{Chang-BIBD1} or a deeper look at explicit estimates for primes in Dirichlet's theorem such as in the data attached to \cite{Dirichlet-bounds}.
Concerning explicit sets of HMOLS, we showed $N(2^{401}) \ge 9$ in Example~\ref{401}. A few other sets of 9 HMOLS of type $2^n$ for $n<1000$ were found, along with a set of 10 HMOLS of type $2^{1009}$. Our search for a pair of vectors with the needed cyclotomic constraints was very na\"ive. We also restricted our computational efforts to the case $h=2$.
With some improved code to search for vectors $u_1,u_2,\dots,u_h$ producing a set of HMOLS, one could envision an expanded table of lower bounds. Such an undertaking could offer a better sense of what to expect in practice from the standard construction methods and set a benchmark for future bounds.
As a separate line of investigation, it would be of interest to improve the exponent of Theorem~\ref{imols-bound}, our bound on MOLS with exactly one hole of a fixed size. A direct use of the Buchstab sieve as in \cite{WilsonMOLS} is likely to do a bit better than our (indirect) method.
\end{document} |
\begin{document}
\author{Kamran Lamei \and Siamak Yassemi}
\address[Kamran Lamei]{School of Mathematics, Statistics and Computer Science \\ University of Tehran \\ Tehran \\ Iran}
\email{Kamran [email protected]}
\address[Siamak Yassemi]{School of Mathematics, Statistics and Computer Science \\ University of Tehran \\ Tehran \\ Iran}
\email[corresponding author]{[email protected]}
\title{Graded Betti numbers of good filtrations}
\keywords{Betti numbers, Good filtrations, Vector partition function}
\subjclass[2000]{13A30, 13D02, 13D40, 13B22}
\begin{abstract}
The asymptotic behavior of graded Betti numbers of powers of homogeneous ideals in a polynomial ring
over a field has recently been studied. We extend the quasi-polynomial behavior of graded Betti numbers of powers of homogeneous ideals to $\mathbb{Z}$-graded algebras over a Noetherian local ring. Furthermore, our main result treats the Betti tables of filtrations which are finite or integral over the Rees algebra.
\end{abstract}
\maketitle
\section{Introduction}
A significant result on the Castelnuovo-Mumford regularity of powers of a homogeneous ideal $I$ in a polynomial ring $S$ shows that the maximal degree of the $i$-th syzygy of $I^t$ is a linear function of $t$ for $t$ large enough. This result has various generalizations, including the case where $I$ is a homogeneous ideal in a standard graded algebra over a Noetherian ring. Bagheri, Chardin and H\`a \cite{BCH} extend this by investigating the eventual behavior of all the minimal generators of the $i$-th syzygy module of $MI^t$, where $M$ is a finitely generated $\mathbb{Z}$-graded $S$-module. Their approach rests on the fact that the module $\oplus_{t} \tor_{i}^{S} (MI^{t} , k )$, for a homogeneous ideal $I$ in a graded ring $S$, has the structure of a finitely generated graded module over a non-standard graded polynomial ring over $k$, from which one can deduce the behavior of $\tor_{i}^{S} (I^{t} , k )$ as $t$ varies.\\
In the case that $I$ is generated by forms of the same degree, an interesting result in \cite{BCH} shows that the Betti tables of the modules $MI^t$ can be encoded by a variant of Hilbert-Serre polynomials. This is a refinement of the asymptotic stability of total Betti numbers proved by Kodiyalam \cite{kodiyalam1993homological}. The question of tracking the asymptotic behavior of graded Betti numbers in the case that the ideal $I$ is generated by forms of degrees (not necessarily equal) $d_1,\ldots ,d_r$ was the core of our previous work \cite{BK}. The fact that $\oplus_{t} \tor_{i}^{S} (I^{t} , k )$ is finitely generated over a multigraded polynomial ring $B=K[T_1, \ldots,T_r]$, with degree $(d_i , 1)$ assigned to each variable $T_i$ for $1\leq i \leq r$, guarantees a bigraded minimal free $B$-resolution with free module $\oplus B(-a,-b)^{\beta_{(i,(a,b))}}$ in homological degree $i$. As a consequence, we proved in \cite{BK} that $\mathbb{Z}^2$ can be split into a finite number of regions (see Figure \ref{regions}) such that each region corresponds to quasi-polynomial behavior of the Betti numbers of homogeneous ideals in a polynomial ring over a field. Accordingly, the graded Betti tables of the powers $I^n$ can be encoded by a set of polynomials for $n$ large enough. More generally, concerning the stabilization of graded Betti numbers for a collection of graded ideals, we use the fact that the module
$$
B_i:=\oplus_{t_1,\ldots ,t_s}\tor_i^R(MI_1^{t_1}\cdots I_{s}^{t_s},k)
$$
is a finitely generated $(\mathbb{Z}^p\times \mathbb{Z}^s)$-graded module over $k[T_{i,j}]$, setting $\deg (T_{i,j})=(\deg (f_{i,j}),e_i)$
with $e_i$ the $i$-th canonical generator of $\mathbb{Z}^s$, where for fixed $i$ the $f_{i,j}$ form a set of minimal generators of $I_i$. We proved in \cite{BK} that the chamber complex associated to a positive $\mathbb{Z}^d$-grading of $R:=k[T_{i,j}]$ splits $\mathbb{Z}^d$ into finitely many maximal cells on which the eventual behavior of the graded Betti numbers is encoded by a set of polynomials.\\
\begin{figure}
\includegraphics[scale=0.20]{BR1.JPG}
\xymatrix{& \ar@{~>}[rrr]_{\mathrm{as\,\, degrees\,\,go\,\, large\,\, enough}}^-{In\,\, the\,\, case \,of\, one\,\, graded\,\, ideal}&&&}
\includegraphics[scale=0.20]{BR2.JPG}
\caption{}
\label{regions}
\end{figure}
Let $S=A[x_1,\ldots,x_n]$ be a graded algebra over a commutative Noetherian local ring $A$ with residue field $k$ and let $I \subseteq S$ be a homogeneous ideal.
In Theorem \ref{garaded algebra} we generalize the quasi-polynomial behavior of the Betti numbers to the case of homogeneous ideals in a $G$-graded algebra over a Noetherian local ring. By using the concept of vector partition functions we present an effective method to find such a set of polynomials. Roughly speaking, a vector partition function counts the number of ways a vector decomposes as a linear combination, with nonnegative integral coefficients, of a fixed set of vectors. On the other hand, let $e_i$ be the $i$-th standard basis vector of $\mathbb{R}^n$ for $1\leqslant i \leqslant n$ and suppose that the linear map $f : \mathbb{R}^{n} \rightarrow \mathbb{R}^d $ is defined by $f(e_i) = v_i$. The rational convex polytope can be written as
$$
P(b) := f^{-1} (b) \cap \mathbb{R}_{\geqslant 0}^{n} = \{ x \in \mathbb{R}^{n} \mid A x = b ,\ x \geqslant 0 \}
$$
where $A$ is the matrix of $f$. If $b$ is in the interior of $\pos(A):=\{\sum_{i=1}^{n} \lambda_i v_i\in\mathbb{R}^d \mid \lambda_i\geq 0,\ 1\leq i\leq n\}$, the polytope $P(b)$ has dimension $n-d$. One can see that evaluating the vector partition function is equivalent to computing the number of integral points in a rational convex polytope. Accordingly, in order to compute the Hilbert function $HF(S;b)$ of a polynomial ring $S$ over an infinite field $K$ of characteristic $0$, weighted by a set of vectors in $\mathbb{Z}^r$, we use an algorithm for enumerating the integral points in the polytope $P(b)$.
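As an aside, for small data the vector partition function can be evaluated by directly enumerating the integral points of $P(b)$, ignoring all questions of efficiency. A sketch (illustrative only):

```python
from itertools import product

def vector_partition(A, b, bound):
    """Number of nonnegative integer solutions x of A x = b, where the
    columns of A are the fixed set of vectors; brute force over
    0 <= x_i <= bound (bound must dominate every coordinate of a solution)."""
    rows, cols = len(A), len(A[0])
    count = 0
    for x in product(range(bound + 1), repeat=cols):
        if all(sum(A[i][j] * x[j] for j in range(cols)) == b[i] for i in range(rows)):
            count += 1
    return count
```

For example, with a single row $A=(1,2,3)$ and $b=5$ this counts the partitions of $5$ into parts at most $3$.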
A classical result of G. Pick (1899) for two-dimensional polygons states that if $P \subset \mathbb{R}^2$ is an integer polygon, then the number of integer points in
$P$ is
$
|P \cap \mathbb{Z}^2| = \mathrm{area} (P) + \frac{| \partial P \cap \mathbb{Z}^2|}{2} + 1
$. One of the important generalizations of Pick's formula is the theorem of Ehrhart. In general, for any rational polyhedron $P \subset \mathbb{R}^n$ we consider the following generating function:
$$
f(P,\textbf{x}) = \sum_{m \in P \cap \mathbb{Z}^n} \textbf{x}^{m}
$$
where $m = (m_{1},\cdots ,m_{n})$ and $\textbf{x}^m = x_{1}^{m_1}\cdots x_{n}^{m_n}$. By Brion's theorem \cite{B}, the generating function of the polytope $P$ is equal to the sum of the generating functions of its vertex cones. More precisely,
$$
f (P; \textbf{x}) = \sum_{m \in P \cap \mathbb{Z}^n} \textbf{x}^{m} = \sum_{v \in \Omega(P)} f (K_v; \textbf{x})
$$
where $\Omega (P)$ is the set of vertices of $P$ and $K_v$ denotes the vertex cone of $P$ at $v$. In order to find the generating function of an arbitrary pointed cone, Stanley \cite{S} gives a triangulation of a rational cone into simplicial cones. Barvinok \cite{BP} proved that every rational polyhedral cone can be decomposed into unimodular cones. The method of Barvinok enables us to calculate the generating function of the above polytope depending on $b$. Before doing so, we should mention that the polytope $P(b)$ associated to the matrix $A$ is not full dimensional, so to use Barvinok's method we need to transform $P(b)$ into a polytope $Q$ which is full dimensional and whose integer points are in one-to-one correspondence with the integer points of $P(b)$. In Example \ref{4-gen} we show an implementation of the above method in order to obtain an effective procedure for our objective.
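Pick's formula itself is easy to test by machine on small instances. The sketch below (a toy illustration only, unrelated to the specific polytopes arising from our gradings) compares the formula with direct enumeration for an integer triangle.

```python
from math import gcd

def pick_count(vertices):
    """Lattice points of an integer polygon via Pick's formula:
    area + boundary/2 + 1.  vertices: counterclockwise integer points."""
    edges = list(zip(vertices, vertices[1:] + vertices[:1]))
    twice_area = abs(sum(x1 * y2 - x2 * y1 for (x1, y1), (x2, y2) in edges))
    # lattice points on an edge between integer endpoints: gcd of coordinate gaps
    boundary = sum(gcd(abs(x2 - x1), abs(y2 - y1)) for (x1, y1), (x2, y2) in edges)
    return twice_area / 2 + boundary / 2 + 1
```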
Finally, in order to investigate the behavior of graded Betti numbers of integral closures and Ratliff-Rush closures of powers of ideals, we use the notion of $I$-good filtrations. This is a family $\mathcal{J}=\lbrace \mathcal{J}_n \rbrace_{ n\geq 0}$ of ideals such that $\oplus_{n\geqslant 0} \mathcal{J}_n$ is a finite module over the Rees ring. Our result on graded Betti numbers of $I$-good filtrations takes the following form:
\textbf{Theorem.}
Let $S = A\left[x_{1}, \ldots , x_{n}\right]$ be a graded algebra over a Noetherian local ring $(A, \mathfrak{m}) \subset S_{0}$. Let $\mathcal{J}=\lbrace \mathcal{J}_n \rbrace_{ n\geq 0}$ be an $I$-good filtration of ideals $\mathcal{J}_n$ of $S$, let $\mathcal{J}_1 = (f_1,f_2,\ldots,f_r)$
with $\deg f_i = d_i$ be a $\mathbb{Z}$-homogeneous ideal in $S$, and let $R=S[T_1, \ldots, T_r]$ be a bigraded polynomial extension of $S$
with $\deg(T_i)=(d_i, 1)$ and $\deg (a) = (\deg (a) , 0) \in \mathbb{Z} \times\{0\}$ for all $a \in S$. Then:
\begin{itemize}
\item For all $i$,
$\tor_{i}^{R}(\mathcal{R}_{\mathcal{J}},k)$ is a finitely generated $k[T_1, \ldots, T_r]$-module.
\item There exist
$t_0,m,D\in \mathbb{Z}$, linear functions $L_i(t)=a_i t+b_i$,
for $i=0,\ldots ,m$, with $a_i$ among the degrees
of the minimal generators of $I$ and $b_i\in \mathbb{Z}$, and polynomials $Q_{i,j}\in \mathbb{Q} [x,y]$ for
$i=1,\ldots ,m$ and $j\in \{1,\ldots ,D\}$, such that, for $t\geq t_0$,
(i) $L_i(t)<L_j(t)\ \Leftrightarrow\ i<j$,
(ii) if $\mu <L_0(t)$ or $\mu >L_m(t)$, then $\tor_i^S(\mathcal{J}_t, k)_{\mu}=0$,
(iii) if $L_{i-1} (t)\leq \mu \leq L_{i}(t)$ and
$a_i t-\mu \equiv j \pmod{D}$, then
$$
\dim_k\tor_i^S(\mathcal{J}_t, k)_{\mu}=Q_{i,j}(\mu ,t).
$$
\end{itemize}
It is worth mentioning that, even in a simple situation, namely when $I$ is a complete intersection ideal, quite a few polynomials are involved in describing the asymptotic behavior of the graded Betti numbers of the powers of $I$.
\section{Preliminaries} \label{sec.prel}
In this section, we briefly recall the notation and terminology used in the article.
Let $S=k[x_1, \ldots, x_n]$ be a polynomial ring over a field $k$ and let $\mathbb{G}$ be an abelian group. A $\mathbb{G}$-grading of $S$ is a morphism $\deg : \mathbb{Z}^n \longrightarrow \mathbb{G}$; if $\mathbb{G}$ is torsion-free and $S_0=k$, the grading is called positive. Criteria for positivity are given in \cite[8.6]{ms}. When $\mathbb{G}=\mathbb{Z}^d$ and the grading is positive, (generalized) Laurent series are associated to finitely generated graded modules:
\begin{definition}
The Hilbert function of a finitely generated module $M$ over a positively graded polynomial ring is the map:
$$\begin{array}{llll}
HF(M; -):&\mathbb{Z}^d&\longrightarrow& \mathbb{N}\\
&\mu&\longmapsto&\dim_k(M_\mu).
\end{array}$$
The Hilbert series of $M$ is the Laurent series $$H(M;t)=\sum_{\mu\in \mathbb{Z}^d}\dim_k(M_{\mu})t^{\mu}.$$
\end{definition}
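As a sanity check of the definition (an illustration we add, not from the original text), for the standard $\mathbb{Z}$-grading of $S=k[x_1,\ldots,x_n]$ the Hilbert function is $HF(S;d)=\binom{d+n-1}{n-1}$, the coefficient of $t^d$ in $H(S;t)=1/(1-t)^n$. A brute-force count of monomials confirms this:

```python
from itertools import product
from math import comb

def hilbert_function_poly_ring(n, d):
    """dim_k of the degree-d piece of k[x_1,...,x_n]: the number of
    monomials x^a with a_1 + ... + a_n = d, counted by brute force."""
    return sum(1 for a in product(range(d + 1), repeat=n) if sum(a) == d)

# Closed form: binomial(d + n - 1, n - 1), the coefficient of t^d in 1/(1-t)^n.
for n in (1, 2, 3):
    for d in range(6):
        assert hilbert_function_poly_ring(n, d) == comb(d + n - 1, n - 1)
```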
Let $M$ be a finitely generated $\mathbb{Z}^d$-graded $S$-module. It admits a
finite minimal graded free $S$-resolution
$$\mathbb{F}_\bullet: 0\rightarrow F_u \rightarrow \ldots \rightarrow F_1\rightarrow F_0\rightarrow M\rightarrow 0.$$
Writing $$F_i=\oplus_\mu S(-\mu)^{\beta_{i,\mu}(M)},$$ the minimality
shows that $\beta_{i,\mu}(M)=\dim_k\left(\tor_i^S(M,k)\right)_\mu$,
as the maps of $\mathbb{F}_\bullet\otimes_S k$ are zero.
We also recall that the support of a $\mathbb{Z}^d$-graded module $N$ is
$$\supp_{\mathbb{Z}^d} (N):=\{\mu\in \mathbb{Z}^d \mid N_\mu\neq0\} .$$
\subsection{Partition function}
Let $e_1, \ldots, e_r$ be the standard basis of $\mathbb{R}^r$. Let $f$ be the linear map
$f : \mathbb{R}^{r} \rightarrow \mathbb{R}^d $ defined by $f(e_i) = v_i$ and denote by $V$ the linear span of $\{ v_1, \ldots , v_r\}$. For $a \in V$ consider the following convex polytope:
$$
P(a) := f^{-1} (a) \cap \mathbb{R}_{\geqslant 0}^{r} = \{ x= (x_1, \ldots , x_r) \in \mathbb{R}^{r} \mid \sum_{i=1}^{r} x_{i}v_{i} = a ,\ x \geqslant 0 \}.
$$
\begin{definition}\label{partition function-def}
The function $\varphi_A : \mathbb{N}^d \rightarrow \mathbb{N}$ defined by $\varphi_{A}(a) = \sharp ( f^{-1} (a) \cap \mathbb{Z}_{\geqslant 0}^{r})$ is called the vector partition function corresponding to the matrix $A = (v_1, \ldots , v_r)$.
\end{definition}
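To make Definition \ref{partition function-def} concrete (a toy illustration with data we chose, not taken from the text), take $d=1$ and $A=(2\ 3)$; then $\varphi_A(a)$ counts the solutions of $2x_1+3x_2=a$ in nonnegative integers, and it agrees with a polynomial of degree $r-d=1$ on each residue class modulo $6$, i.e., it is a quasi-polynomial:

```python
def phi(A_cols, a):
    """Brute-force vector partition function for d = 1: the number of
    nonnegative integer combinations x1*v1 + x2*v2 = a."""
    v1, v2 = A_cols
    count = 0
    for x1 in range(a // v1 + 1):
        rest = a - v1 * x1
        if rest % v2 == 0:      # rest >= 0 is guaranteed by the range bound
            count += 1
    return count

# phi agrees with a degree-1 polynomial on each residue class mod lcm(2,3) = 6.
assert [phi((2, 3), a) for a in range(8)] == [1, 0, 1, 1, 1, 1, 2, 1]
assert phi((2, 3), 12) == 3     # (6,0), (3,2), (0,4)
```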
For more details about vector partition functions, in particular the definitions of chambers, the chamber complex and quasi-polynomials, we use the terminology of \cite{BV, Strm}.
We now recall the vector partition function theorem:
\begin{theorem}\label{vector partition}(See \cite[Theorem 1]{Strm})
For each chamber $C$ of maximal dimension in the chamber complex of $A$,
there exist a polynomial $P$ of degree
$n-d$, a collection of polynomials $Q_\sigma$,
and functions $\Omega_\sigma: G_\sigma\setminus\{0\}\rightarrow \mathbb{Q}$ indexed by the non-trivial $\sigma\in \Delta(C)$,
such that, if $u\in \mathbb{N} A\cap \overline{C}$,
$$\varphi_A(u)=P(u)+\sum\{\Omega_\sigma([u]_\sigma)\cdot Q_\sigma(u) : \sigma\in \Delta(C) ,\ [u]_\sigma\neq 0\}$$
where $[u]_\sigma$ denotes the image of $u$ in $G_\sigma$. Furthermore, $\deg (Q_\sigma )=\#\sigma-d$.
\end{theorem}
\begin{corollary}\label{vector partition2}
\cite{BK} For each chamber $C$ of maximal dimension in the chamber complex of $A$,
there exists a collection of polynomials $Q_\tau$ for $\tau\in \mathbb{Z}^d/\Lambda$ such that
$$
\varphi_A(u)=Q_\tau (u), \ \hbox{if}\ u\in \mathbb{N} A\cap \overline{C}\ \hbox{and}\ u \in \tau+\Lambda_{C},
$$
where $\Lambda_{C} = \bigcap_{\sigma \in \Delta(C)} \Lambda_{\sigma}$.
\end{corollary}
Notice that, setting $\Lambda$ to be the intersection of the lattices
$\Lambda_\sigma$ with $\sigma$ maximal, the class of $u$ mod $\Lambda$ determines the class of $u$ mod $\Lambda_{C}$; hence the corollary holds with $\Lambda$ in place of $\Lambda_{C}$.
\section{Structure of the $\tor$ module of the Rees algebra}
Let $S=A[x_1, \ldots, x_n]$ be a graded algebra over a commutative Noetherian local
ring $ S_0= (A,\mathfrak{m})$ with residue field $k$, and set $R=S[T_1, \ldots, T_r]$ and $B=k[T_1, \ldots, T_r]$. We set $\deg(T_i)=(d_i, 1)$ and extend the grading from $S$ to $R$ by setting $\deg(x_i) = (\deg(x_i) , 0)$.
Let $M$ be a finitely generated graded $S$-module and $I$ a graded $S$-ideal generated in degrees $d_1,\ldots, d_r$. In this section we use the important fact that there is a $B$-module structure on $\oplus_{t} \tor_{i}^{S} (MI^{t} , k )$, which was already central to the work \cite{BCH}.
\begin{theorem}\label{garaded algebra}
Let $S=A[x_1, \ldots, x_n]$ be a $\mathbb{Z}$-graded algebra over a Noetherian local ring $(A,\mathfrak{m},k)$. Let $I = (f_1,f_2,\ldots,f_r)$ be a homogeneous ideal in $S$ with $\deg f_i = d_i$, and let $R=S[T_1, \ldots, T_r]$ be a bigraded polynomial extension of $S$ with $\deg(T_i)=(d_i, 1)$ and $\deg (a) = (\deg (a) , 0) \in \mathbb{Z}\times\{0\}$ for all $a \in S$. Then there exist
$t_0,m,D\in \mathbb{Z}$, linear functions $L_i(t)=a_i t+b_i$,
for $i=0,\ldots ,m$, with $a_i$ among the degrees
of the minimal generators of $I$ and $b_i\in \mathbb{Z}$, and polynomials $Q_{i,j}\in \mathbb{Q} [x,y]$ for
$i=1,\ldots ,m$ and $j\in \{1,\ldots ,D\}$, such that, for $t\geq t_0$,
(i) $L_i(t)<L_j(t)\ \Leftrightarrow\ i<j$,
(ii) if $\mu <L_0(t)$ or $\mu >L_m(t)$, then $\tor_i^S(I^t, k)_{\mu}=0$,
(iii) if $L_{i-1} (t)\leq \mu \leq L_{i}(t)$ and
$a_i t-\mu \equiv j \pmod{D}$, then
$$
\dim_k\tor_i^S(I^t, k)_{\mu}=Q_{i,j}(\mu ,t).
$$
\end{theorem}
\begin{proof}
The natural surjective map $R \rightarrow \mathcal{R}_{I}:=\bigoplus_{t\geqslant 0} I^{t}$ sending $T_i$ to $f_i$ makes $\mathcal{R}_{I}$ a finitely generated graded $R$-module. It was shown in \cite{BCH} that $\tor_{i}^{S}( I^{t} , A )$ is a finitely generated $k[T_1, \ldots, T_r]$-module. Let $f: S \rightarrow A$ be the canonical map; then there is a graded spectral sequence with second term $E_{p,q}^{2} = \tor_{p}^{A}(\tor_{q}^{S} (I^{t} , A)_{\nu},k) \Rightarrow \tor_{p+q}^{S} (I^{t} , k )_{\nu}$, and therefore $\tor_{i}^{S} (I^{t} , k )$ is a finitely generated $k[T_1, \ldots, T_r]$-module. The result then follows from \cite[Proposition 4.5]{BK}.
\end{proof}
\begin{example}\label{4-gen}
Let $S=A[x_1,\ldots,x_n]$ be a graded algebra over a commutative Noetherian local ring $S_0 =(A,\mathfrak{m})$. Let $I\subseteq S$ be a complete intersection ideal generated by four forms $f_1,f_2,f_3,f_4$ of degrees $3,5,8,9$ respectively. Let $R=S[T_1,T_2,T_3,T_4]$ be a $\mathbb{Z} \times \mathbb{Z}$-graded polynomial extension of $S$ with $\deg x_i = (1,0)$ and $\deg (T_1)=(3,1)$, $\deg (T_2)=(5,1)$, $\deg (T_3)=(8,1)$, $\deg (T_4)= (9,1)$. By Theorem \ref{garaded algebra}, $\tor_{i}^{R}(\mathcal{R}_{I},k )$ is a finitely generated $B=k[T_1, \ldots, T_4]$-module with the assigned weights. Let $A= \begin{pmatrix}
3 & 5 & 8 & 9 \\
1 & 1 & 1 & 1
\end{pmatrix}$ be the matrix of degrees of $B$; then by \cite[Proposition 3.1]{BK} the Hilbert function of $B$ at degree $(\nu,n)$ equals the number of lattice points of the following convex polytope:
$$
P(\nu,n) = \{ x \in \mathbb{R}^{r} \mid A x = (\nu,n) ,\ x \geqslant 0 \}.
$$
To take advantage of Barvinok's method for enumerating lattice points, we need a transformation of the polytope preserving a one-to-one correspondence between lattice points, via the following procedure explained in \cite{Lo}:
\begin{enumerate}
\item Let $P = \{ x\in \mathbb{R}^n \mid Ax = a,\ Bx \leqslant b\}$ be a polytope associated to a full row-rank $d\times n$ matrix $A$.
\item Find generators $\{g_1,\ldots , g_{n-d}\}$ of the integer null-space of $A$.
\item Find an integer solution $x_0$ of $Ax = a$.
\item Substitute the general integer solution $x = x_{0} + \sum_{i=1}^{n-d}\beta_{i}g_{i}$ into the inequalities $Bx \leqslant b$.
\item The substitution in (4) yields a new system $C\beta \leqslant c$, which defines the new polytope $Q = \{\beta \in \mathbb{R}^{n-d} \mid C\beta \leqslant c \}$.
\end{enumerate}
In order to find the integer null-space of $A$ we first calculate the Hermite
normal form (HNF) of $A$, which is
$$
H:=HNF(A)=
\begin{pmatrix}
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0
\end{pmatrix}
$$
and
$$
U=
\begin{pmatrix}
1 & 4 & 3 & 2 \\
-2 & -4 & -5 & -3 \\
1 & 1 & 2 & 0 \\
0 & 0 & 0 & 1
\end{pmatrix}
$$
Here $U$ is a unimodular matrix such that $H = AU$. Then the columns of $U_{1}=\begin{pmatrix}
3 & 2 \\
-5 & -3 \\
2 & 0 \\
0 & 1
\end{pmatrix}$,
i.e. the last two columns of $U$, give the generators of the integer null-space of $A$: indeed $AU_{1}=0$, and the unimodularity of $U$ guarantees that these columns span the full integer null-space lattice. Moreover, $x_{0} = U(\nu,n,0,0)^{T} = (\nu+4n,\ -2\nu-4n,\ \nu+n,\ 0)$ is an integer solution of $Ax = (\nu,n)$. Hence, by the above procedure, the polytope $Q(\nu,n)$ is defined by the solutions of the following system of linear inequalities:
$$\left\{\begin{array}{r}
3\lambda_{1}+2\lambda_{2} \geq -\nu-4n,\\
5\lambda_{1}+3\lambda_{2} \leq -2\nu-4n,\\
2\lambda_{1} \geq -\nu-n,\\
\lambda_{2} \geq 0.
\end{array}\right.$$
\end{example}
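The computation in Example \ref{4-gen} can be checked mechanically. The following sketch (our addition; the variable names are ours) verifies that the null-space generators, i.e., the last two columns of $U$, are annihilated by $A$, and that for a few bidegrees $(\nu,n)$ the lattice points of $P(\nu,n)$ are in bijection with the lattice points of the transformed full-dimensional polytope $Q(\nu,n)$:

```python
from itertools import product

# Degree matrix of B = k[T_1,...,T_4] from the example above.
A = ((3, 5, 8, 9), (1, 1, 1, 1))

def apply_A(x):
    return tuple(sum(r * xi for r, xi in zip(row, x)) for row in A)

# The last two columns of the unimodular matrix U with AU in HNF; a valid
# integer null-space generator g must satisfy A g = 0.
g1, g2 = (3, -5, 2, 0), (2, -3, 0, 1)
assert apply_A(g1) == (0, 0) and apply_A(g2) == (0, 0)

def x0(nu, n):
    """Particular integer solution of A x = (nu, n): x0 = U . (nu, n, 0, 0)^T."""
    return (nu + 4 * n, -2 * nu - 4 * n, nu + n, 0)

def count_P(nu, n):
    """Lattice points of P(nu, n) = {x >= 0 : A x = (nu, n)}, brute force.
    The second row of A forces sum(x) = n, so each coordinate is <= n."""
    return sum(1 for x in product(range(n + 1), repeat=4)
               if apply_A(x) == (nu, n))

def count_Q(nu, n):
    """Lattice points of the transformed polytope Q(nu, n)."""
    base, hits, B = x0(nu, n), 0, 5 * (nu + n)   # crude but sufficient box
    for l1 in range(-B, B + 1):
        for l2 in range(0, B + 1):               # x_4 = l2 forces l2 >= 0
            x = tuple(base[i] + l1 * g1[i] + l2 * g2[i] for i in range(4))
            if min(x) >= 0:
                hits += 1
    return hits

# The counts agree, reflecting the one-to-one correspondence of lattice points.
for nu, n in [(16, 3), (25, 4), (30, 5)]:
    assert count_P(nu, n) == count_Q(nu, n)
```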
\section{Structure of the $\tor$ module of Hilbert filtrations}
To study blowup algebras, Northcott and Rees defined the notion of a reduction of an ideal $I$ in a commutative ring $R$. An ideal $J \subseteq I$ is a reduction of $I$ if there exists $r$ such that $JI^{r} = I^{r+1}$ (equivalently, this holds for all $r \gg 0$). An important fact about reductions of ideals is that this property is equivalent to the fact that
$$
\mathcal{R}_{J} = \oplus_{n} J^{n} \rightarrow \mathcal{R}_{I} = \oplus_{n} I^{n}
$$
is a finite morphism. Okon and Ratliff in \cite{OR} extended the above notion of reduction to the case of filtrations via the following definition:
\begin{definition}
Let $R$ be a ring, $I$ an $R$-ideal, and $\mathcal{J}=\lbrace \mathcal{J}_n \rbrace_{ n\geq 0}$ and $\mathcal{I}=\lbrace \mathcal{I}_n \rbrace_{ n\geq 0}$ two filtrations on $R$:
\begin{itemize}
\item[(1)] $\mathcal{J} \leq \mathcal{I}$ if $\mathcal{J}_n \subseteq \mathcal{I}_n$ for all $n\geq 0$.
\item[(2)] $\mathcal{J}$ is a reduction of $\mathcal{I}$ if $\mathcal{J} \leq \mathcal{I}$ and there exists a positive integer $d$ such that $\mathcal{I}_n = \sum_{i=0}^{d} \mathcal{J}_{n-i}\mathcal{I}_i$ for all $n\geq 1$.
\item[(3)] $\mathcal{J}$ is an $I$-good filtration if $I\mathcal{J}_i \subseteq \mathcal{J}_{i+1}$ for all $i\geq 0$
and $\mathcal{J}_{n+1} = I\mathcal{J}_n$ for all $n\gg 0$.
\end{itemize}
\end{definition}
In contrast to the ideal case, minimal reductions of a filtration do not exist in general. However,
Hoa and Zarzuela showed in \cite{HZ} the existence of a minimal reduction for $I$-good filtrations.
If $\mathcal{J}=\lbrace \mathcal{J}_n \rbrace_{ n\geq 0}$ is an $I$-good filtration on $R$, then $\mathcal{R}_\mathcal{J} := \oplus_{n\geqslant 0} \mathcal{J}_n$ is a finite $\mathcal{R}_{I}$-module \cite[Theorem III.3.1.1]{}. This is why we are interested in $I$-good filtrations in order to generalize the previous results.
The following theorem explains the structure of the $\tor$ modules of $I$-good filtrations:
\begin{theorem}\label{Tor-Hilbert filtrations}
Let $S = A\left[x_{1}, \ldots , x_{n}\right]$ be a graded algebra over a Noetherian local ring $(A, \mathfrak{m},k) \subset S_{0}$. Let $\mathcal{J}=\lbrace \mathcal{J}_n \rbrace_{ n\geq 0}$ be an $I$-good filtration of $\mathbb{Z}$-homogeneous ideals in $S$, with $\mathcal{J}_1 = (f_1,f_2,\ldots,f_r)$
and $\deg f_i = d_i$. Let $R=S[T_1, \ldots, T_r]$ be a bigraded polynomial extension of $S$
with $\deg(T_i)=(d_i, 1)$ and $\deg (a) = (\deg (a) , 0) \in \mathbb{Z} \times\{0\}$ for all $a \in S$. Then:
(1) For all $i$,
$\tor_{i}^{R} (\mathcal{R}_{\mathcal{J}},k) $ is a finitely generated $k[T_1, \ldots, T_r]$-module.
(2) There exist
$t_0,m,D\in \mathbb{Z}$, linear functions $L_i(t)=a_i t+b_i$,
for $i=0,\ldots ,m$, with $a_i$ among the degrees
of the minimal generators of $I$ and $b_i\in \mathbb{Z}$, and polynomials $Q_{i,j}\in \mathbb{Q} [x,y]$ for
$i=1,\ldots ,m$ and $j\in \{1,\ldots ,D\}$, such that, for $t\geq t_0$,
(i) $L_i(t)<L_j(t)\ \Leftrightarrow\ i<j$,
(ii) if $\mu <L_0(t)$ or $\mu >L_m(t)$, then $\tor_i^S(\mathcal{J}_t, k)_{\mu}=0$,
(iii) if $L_{i-1} (t)\leq \mu \leq L_{i}(t)$ and
$a_i t-\mu \equiv j \pmod{D}$, then
$$
\dim_k\tor_i^S(\mathcal{J}_t, k)_{\mu}=Q_{i,j}(\mu ,t).
$$
\end{theorem}
\begin{proof}
Recall that $\mathcal{R}_\mathcal{J}$ is a finite $\mathcal{R}_I$-module, hence a finitely generated $\mathbb{Z}^2$-graded $R$-module. Let $F_{\bullet}$ be a $\mathbb{Z} \times \mathbb{Z}$-graded minimal free resolution of $\mathcal{R}_{\mathcal{J}}$ over $R$. Each $F_i = \oplus_{\mu ,n} R (-\mu ,- n)^{\beta_{\mu ,n}^{i}}$ is of finite rank due to the Noetherianity of $A$. The graded strand $F_\bullet^t := (F_\bullet)_{\ast,t}$ is a $\mathbb{Z}$-graded free resolution of $\mathcal{J}_t$ over $S = R_{(*,0)}$. Moreover, there is an $S$-graded isomorphism
$$
R (-\mu ,- n)_{(*,t)} \simeq R (-\mu)_{(*,t-n)}.
$$
Thus $\tor^S_i(\mathcal{J}_t,A) = H_i(F_\bullet^t \otimes_S A)$, and by the above isomorphism $\tor^S_i(\mathcal{J}_t,A)$ is a subquotient of $ A(-\mu)^{\beta_{\mu ,n}^{i}} \otimes_A \left( A[T_1, \ldots, T_r]\right) _{t-n}$. Since $A$ is Noetherian, it follows that $\tor^S_i(\mathcal{J}_t,A)$ is a finitely generated $A[T_1, \ldots, T_r]$-module. By a similar approach one has $\tor^S_i(\mathcal{J}_t,k) = H_i(F_\bullet^t \otimes_S k).$
Moreover, taking homology respects the graded structure and therefore
$$H_i(F_\bullet^t \otimes_S k) = H_i(F_\bullet \otimes_R R/(\mathfrak{m} +\mathfrak{n}) R)_{(*,t)},$$
where $\mathfrak{n} = (x_1, \dots, x_n)$ is the homogeneous irrelevant ideal of $S$. It follows that $\tor_{i}^{R}(\mathcal{R}_{\mathcal{J}},k) $ is a finitely generated graded $k[T_1, \ldots, T_r]$-module.
To prove the second part, let $E:=\{ d_1,\ldots ,d_r\}$ with $d_1<\cdots <d_r$ be a set of positive integers. For $\ell$ from
$1$ up to $r-1$, let $$\Omega_\ell :=\{ a\binom{d_\ell}{1}+b\binom{d_{\ell +1}}{1},\ (a,b)\in \mathbb{R}_{\geq 0}^2\} $$ be the closed cone spanned by $\binom{d_\ell}{1}$ and $\binom{d_{\ell +1}}{1}$. For integers $i\neq j$, let $\Lambda_{i,j}$ be the lattice spanned by $\binom{d_i}{1}$ and $\binom{d_{j}}{1}$ and $
\Lambda_\ell :=\bigcap_{i\leq \ell < j}\Lambda_{i,j}.
$
Also we set $\Lambda :=\bigcap_{i < j}\Lambda_{i,j}$ with $\Delta = \det(\Lambda)$. It follows from Theorem \ref{vector partition} that $\dim_k B_{\mu ,t}=0$ if $(\mu ,t)\not\in \Omega:= \bigcup_\ell \Omega_\ell$. Parts (ii) and (iii) then follow from \cite[Lemma 4.4 and Proposition 4.5]{BK}.
\end{proof}
This in particular applies to the following situations:
\begin{itemize}
\item $I$ is a graded ideal of $S$ and $S$ is an analytically unramified ring without
nilpotent elements. Then the integral closure filtration $\mathcal{J}=\lbrace \overline{I^n}\rbrace$ is an $I$-good filtration \cite{rees}.
\item $I$ is a graded ideal of $S$; then the Ratliff-Rush closure filtration $\mathcal{J}=\lbrace \widetilde{I^{n}}\rbrace$ is an $I$-good filtration \cite{rr}.
\end{itemize}
\end{document} |
\begin{document}
\begin{frontmatter}
\dochead{4th International Conference on Industry 4.0 and Smart Manufacturing}
\title{Active Transfer Prototypical Network: An Efficient Labeling Algorithm for Time-Series Data}
\author[a,b]{Yuqicheng Zhu$^{*}$}
\author[a,c]{Mohamed-Ali Tnani}
\author[b]{Timo Jahnz}
\author[a]{Klaus Diepold}
\address[a]{Department of Electrical and Computer Engineering, Technical University of Munich, Arcisstraße 21, 80333 Munich, Germany}
\address[b]{Department of Brake System Engineering, Robert Bosch GmbH, Robert-Bosch-Allee 1, 74232 Abstatt, Germany}
\address[c]{Department of Factory of the Future, Bosch Rexroth AG, Lise-Meitner-Straße 4, 89081 Ulm, Germany}
\correspondingauthor[*]{ Corresponding author. Tel.: +49 711 685 88127. {\it E-mail address:} [email protected]}
\begin{abstract}
The paucity of labeled data is a typical challenge in the automotive industry. Annotating time-series measurements requires solid domain knowledge and in-depth exploratory data analysis, which implies a high labeling effort. Conventional \textit{Active Learning} (AL) addresses this issue by actively querying the most informative instances based on the estimated classification probability and retraining the model iteratively. However, the learning efficiency strongly relies on the initial model, resulting in the trade-off between the size of the initial dataset and the query number. This paper proposes a novel \textit{Few-Shot Learning} (FSL)-based AL framework, which addresses the trade-off problem by incorporating a \textit{Prototypical Network} (ProtoNet) in the AL iterations. The results show an improvement, on the one hand, in the robustness to the initial model and, on the other hand, in the learning efficiency of the ProtoNet through the active selection of the support set in each iteration. This framework was validated on UCI HAR/HAPT dataset and a real-world braking maneuver dataset. The learning performance significantly surpasses traditional AL algorithms on both datasets, achieving 90\% classification accuracy with 10\% and 5\% labeling effort, respectively.
\end{abstract}
\begin{keyword}
Active Learning; Few-Shot Learning; Prototypical Network; Time-series Classification; Automotive Data
\end{keyword}
\end{frontmatter}
\section{Introduction}
The rapid growth of digitalization and information systems allows industries to collect vast amounts of data from devices such as machines and sensors. In particular, in the automotive industry, modern vehicles generate gigabytes of time-series data every hour. This large amount of data drives the development of innovative applications that support the user, accelerate innovation cycles, and enhance the entire workflow in manufacturing.
Although the time-series classification problem has been well studied both for non-deep-learning classifiers \cite{middlehurstHIVECOTENewMeta2021, bagnallTimeSeriesClassificationCOTE2015, hillsClassificationTimeSeries2014} and for deep learning classifiers \cite{fawazDeepLearningTime2019}, a sufficient amount of labeled training data is required in both cases to achieve satisfying performance. Enormous amounts of unlabeled data are frequently available, while obtaining a complete set of labeled data might be complicated or expensive. In the automotive industry, especially for the braking system, the amount of sensor signals collected daily by test vehicles is increasing enormously. Due to the complex interconnection of the sensor signals under various braking scenarios, systematic signal analysis and detailed discussions between system experts and application engineers are frequently required to determine the corresponding braking maneuvers. The solid domain knowledge and the long time required for time-series labeling have made the large-scale application of state-of-the-art machine learning techniques in the automotive industry challenging.
Labeling effort of the time-series data can be reduced through \textit{Active Learning} (AL) \cite{angluinQueriesConceptLearning1988} or \textit{Few-Shot Learning} (FSL) \cite{wangGeneralizingFewExamples2020}. AL addresses the labeling issue by actively querying the most informative instances to be labeled from the unlabeled data pool \cite{settlesActiveLearningLiterature}. As the number of manually annotated instances increases, the classification accuracy is expected to improve rapidly. However, the learning performance heavily relies on the initial model. A poor initial model leads to querying several uninformative instances at the beginning \cite{yuanInitialTrainingData2011}. Therefore, high learning efficiency can not be guaranteed due to the random nature of the initial model. On the other hand, FSL attempts to generalize the prior knowledge quickly to new tasks using a few labeled instances, i.e., a support set \cite{wangGeneralizingFewExamples2020}. It has a pre-trained encoder trained on a large labeled dataset of similar tasks, which is used to convert the input instances into embedding space. A new test sample is classified based on the similarity to the embedding of the support set. Nevertheless, an unrepresentative support set may lead to inaccurate estimates of similarity and therefore fail to generalize prior knowledge.
\begin{figure}
\caption{Overview of the algorithm architecture.}
\label{fig:overview}
\end{figure}
The combination of AL and FSL incorporates the advantages and compensates for the disadvantages of both sides. This paper proposes a novel AL framework based on an FSL algorithm, i.e., the \textit{Prototypical Network} (ProtoNet) \cite{snellPrototypicalNetworksFewshot2017}. Fig. \ref{fig:overview} visualizes the algorithm architecture. It starts from an unlabeled data pool and a ProtoNet with a pre-trained encoder as the initial model. The goal is to annotate all instances in the unlabeled data pool with minimal manual effort. The ProtoNet evaluates the entire unlabeled data pool. Then, the query strategy picks the most informative instances based on the estimated classification probabilities. These instances are labeled by experts and used in two parts of the ProtoNet: recalculating the prototypes and fine-tuning the encoder. After updating, the next AL iteration starts. The algorithm evaluates the rest of the unlabeled instances through the updated ProtoNet, picks the most informative instances, and updates the ProtoNet with the accumulated query set. The iteration repeats until it reaches the maximal query number.
We implemented three different variants of the AL-ProtoNet combination to compare with standard AL and passive learning on public time-series datasets - \textit{UCI HAR\&HAPT} dataset. The influence of different pre-trained encoders and query strategies is investigated. Furthermore, the algorithm with the optimal hyperparameters was validated on a real-world Bosch braking maneuver dataset.
The rest of the paper is organized as follows. Section \ref{relatedwork} sorts out the related works on AL, FSL, and their combination for time-series classification. Relevant background is provided in section \ref{background}. We introduce our proposed novel AL algorithm and other two variants in detail in section \ref{methodology}. The investigation focuses on the learning performance of algorithm structure, query strategy, and pre-trained encoder. Finally, section \ref{exp} describes the experimental settings, followed by the corresponding results and conclusions of our work.
\section{Related Work}\label{relatedwork}
The concept of AL was introduced in 1988 by Angluin, where several query approaches for the problem of using queries to learn an unknown concept were investigated \cite{angluinQueriesConceptLearning1988}. The objective of AL is to reduce the training set size by involving human annotators in the AL loop. State-of-the-art AL algorithms have been combined with other advanced machine learning approaches to adapt to the properties of specific tasks. For example, Wei et al. used semi-supervised learning to recognize unlabeled instances automatically \cite{Wei2006}; Bengar et al. and Emam et al. integrated self-supervised learning into the AL framework \cite{bengar2021, emam2021active}. However, the performance of the algorithm is strongly dependent on a randomly selected initial set. Therefore, our work addresses this problem by leveraging the prior information to select a representative initial data set.
FSL tackles the issue of learning new concepts efficiently by exploiting an encoder trained on a variety of tasks \cite{wangGeneralizingFewExamples2020}. With this pre-trained encoder as the initial model, a classifier can be adapted to previously unseen classes from just a few instances \cite{brenden2015fewshot}. Concretely, \textit{Siamese Network} contains two or more identical sub-networks to find the similarity of the inputs by comparing its feature vectors \cite{Koch2015SiameseNN}. However, using pointwise learning, \textit{Siamese Network} produces a non-probabilistic output and slows the classification task significantly as it requires quadratic pairs to learn. A ProtoNet was proposed by Snell et al., where a metric (embedding) space is learned by computing distances to prototype representations of each class \cite{snellPrototypicalNetworksFewshot2017}. It has been well studied mainly in computer vision to deal with problems such as character recognition \cite{souibgui2021,asish2020}, image classification \cite{snellPrototypicalNetworksFewshot2017,hou2021}, object recognition \cite{prabhudesai2020}, etc.
Nevertheless, insufficient research has been conducted for time-series classification, even though it is one of the most relevant tasks in industry. Zhang et al. proposed a novel attentional ProtoNet, \textit{TapNet}, for multivariate time-series classification and extended it into a semi-supervised setting \cite{Zhang_Gao_Lin_Lu_2020}. The \textit{Semi-TapNet}, which utilizes the unlabeled data to improve the classification performance, inspires the idea of combining ProtoNet with AL. The idea of an active ProtoNet has been investigated and demonstrated to outperform the baseline in the \textit{Computer Vision} \cite{WoodwardF17} and \textit{Natural Language Processing} \cite{muller2022active} fields. However, Pezeshkpour et al. pointed out that current AL cannot improve few-shot learning models significantly compared to random sampling \cite{Pezeshkpour2020on}. The main reason could be that AL and ProtoNet are processed separately in their work, i.e., AL is only used for preparing the support set of the ProtoNet. Our proposed novel algorithm, the Active Transfer Prototypical Network (ATPN), focuses on training a ProtoNet-enhanced AL learner. The idea is similar to \cite{WoodwardF17}, which introduces an active learner combining meta-learning and reinforcement learning to choose whether to have an instance labeled by the annotator. But instead of reinforcement learning, we formulate a framework to integrate the ProtoNet properly in the AL loop.
\section{Background}\label{background}
The proposed model consists of two machine learning techniques, namely 1) \textbf{AL} to iteratively select a representative support set for the ProtoNet; 2) \textbf{ProtoNet} to provide a more robust initial model and a more efficient learning mechanism. The learning efficiency is expected to be significantly higher and robust regardless of the size of the initial model. In this section, the relevant background is given in detail.
\subsection{Prototypical Network}
The novel approach is mainly based on the ProtoNet, where an embedding space is learned by computing distances to prototype representations of each class \cite{snellPrototypicalNetworksFewshot2017}.
It requires a small representative labeled dataset $S = \{(x_1, y_1), \dots, (x_N, y_N)\}$ called the support set, where $N$ is the number of examples and $x_i\in R^D$ refers to the $D$-dimensional feature vector. Assuming there are $K$ different classes, $S_k$ denotes the subset of the support set with label $k$. Per-class prototypes $c_k$ are calculated by averaging the $S_k$ in the $M$-dimensional embedding space using an encoder $f_\phi:R^D\rightarrow R^M$. The classification probabilities $p_{\phi}$ can be calculated based on a distance function $d:R^M\times R^M\rightarrow [0,+\infty)$; Euclidean distance is used in this paper, since the original ProtoNet paper \cite{snellPrototypicalNetworksFewshot2017} demonstrates that, compared to cosine distance and other distance metrics, Euclidean distance significantly improves the results.
\begin{equation}
\label{eqn:classProb}
p_{\phi}(y=k|x) = \frac{\exp(-d(f_{\phi}(x), c_k))}{\sum_{i=1}^K\exp(-d(f_{\phi}(x), c_i))}, \ \ \ \text{where}\ \ \ \ \ \ c_k = \frac{1}{|S_k|}\sum_{(x_i, y_i)\in S_k}f_\phi(x_i)
\end{equation}
The learnable parameters $\phi$ can then be computed by minimizing the cost function $J(\phi)=-\log p_\phi(y=k|x)$ on a training set $D$, where $|D|\gg |S|$.
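Equation \eqref{eqn:classProb} can be sketched in a few lines (a minimal illustration of ours with made-up 2-D embeddings; the toy support set stands in for the output of the pre-trained encoder $f_\phi$):

```python
from math import exp, dist   # math.dist: Euclidean distance (Python >= 3.8)

def prototypes(support):
    """Per-class prototypes c_k: mean of the embedded support points S_k
    (2-D embeddings in this toy example)."""
    protos = {}
    for k, points in support.items():
        n = len(points)
        protos[k] = tuple(sum(p[i] for p in points) / n for i in range(2))
    return protos

def class_probs(x, protos):
    """Softmax over negative Euclidean distances to the prototypes,
    as in Eq. (1)."""
    scores = {k: exp(-dist(x, c)) for k, c in protos.items()}
    z = sum(scores.values())
    return {k: s / z for k, s in scores.items()}

# Toy support set: embeddings assumed already computed by the frozen encoder.
support = {"brake": [(0.0, 0.0), (0.2, 0.0)], "coast": [(3.0, 3.0), (3.0, 3.2)]}
p = class_probs((0.1, 0.1), prototypes(support))
assert p["brake"] > p["coast"] and abs(sum(p.values()) - 1.0) < 1e-9
```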
\subsection{Query Strategy}\label{querystrategy}
The AL algorithms can query the data based on \textit{uncertainty sampling} [16], \textit{query-by-committee} [17], or \textit{expected model change} [18]. Among them, query-by-committee requires an ensemble model, and expected model change is computationally expensive. Therefore, uncertainty sampling is chosen due to its simplicity and interpretability.
\textbf{Least Confidence} The most straightforward measure is the uncertainty of classification defined by $U(x) = 1-P(\hat{x}|x)$ \cite{lewis1994sequential}, where $x$ is the data point to be classified and $\hat{x}$ is the prediction with the highest classification probability. It selects the instances that the model is least confident in predicting based on the currently estimated classification probabilities.
\textbf{Margin Sampling} Classification margin \cite{lewis1994sequential} is the difference between the probability of the first and second most likely prediction, which is defined by $M(x) = P(\hat{x_1}|x) - P(\hat{x_2}|x)$, where $\hat{x_1}$ and $\hat{x_2}$ are the classes with first and second-highest classification probability. It assumes that the most informative instances fall within the margin.
\textbf{Entropy Sampling} In information theory, the terminology "entropy" \cite{shannon1948} refers to the average level of information or uncertainty inherent to the variable's possible outcomes. Inspired by this, the classification uncertainty can also be represented by $H(x) = -\sum_{i=1}^{n}P(x_i)\log P(x_i)$, where $P(x_i)$ is the probability of the sample belonging to the $i$th class. Heuristically, the entropy is proportional to the average number of guesses one has to make to find the true class: the closer the distribution is to uniform, the larger the entropy. This means that instances with similar classification probabilities for all classes have a higher chance of being selected while querying.
\textbf{Batch-Mode Sampling} More instances can be queried in a single AL iteration; \textit{ranked batch-mode query} \cite{CARDOSO2017313} is used for batch-mode sampling. The score of an instance is $score=\alpha (1-\Phi(x, X_{labeled}))+(1-\alpha)U(x)$, where $\alpha=\frac{|X_{unlabeled}|}{|X_{unlabeled}|+|X_{labeled}|}$, $X_{labeled}$ is the labeled dataset, $U(x)$ is the least-confidence measure, and $\Phi$ is cosine similarity in our implementation. The first term measures the diversity of the current instance with respect to the labeled dataset; the second term accounts for the prediction uncertainty of $x$. After scoring, the highest-scoring instance is placed at the top of a list and removed from the pool, and the scores are recalculated; this repeats until the desired number of instances has been selected.
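As an illustration, the single-mode measures above can be sketched in a few lines of NumPy (a hedged sketch; the helper names and the toy probability matrix are ours, not part of any reference implementation):

```python
import numpy as np

def least_confidence(probs):
    # U(x) = 1 - P(x_hat | x): one minus the top class probability.
    return 1.0 - probs.max(axis=1)

def margin(probs):
    # M(x) = P(x_hat_1 | x) - P(x_hat_2 | x): gap between the two
    # most likely classes; a SMALL margin marks an informative instance.
    part = np.sort(probs, axis=1)
    return part[:, -1] - part[:, -2]

def entropy(probs, eps=1e-12):
    # H(x) = -sum_k p_k log p_k: maximal for a uniform distribution.
    return -(probs * np.log(probs + eps)).sum(axis=1)

# Toy pool of two instances: one confident, one uncertain prediction.
probs = np.array([[0.90, 0.05, 0.05],
                  [0.40, 0.35, 0.25]])
query_lc = int(np.argmax(least_confidence(probs)))
query_margin = int(np.argmin(margin(probs)))
query_ent = int(np.argmax(entropy(probs)))
```

All three criteria agree here and would query the second, uncertain instance.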
\section{Methodology}\label{methodology}
In this section, we describe the design of ATPN and of the variants used in the ablation study.
\subsection{Active Transfer Prototypical Network}\label{ATFN}
With a large number of unlabeled measurements, as in automotive industry use cases, the entire unlabeled dataset should be evaluated to find the most informative instances in each AL iteration. Therefore, a pool-based AL strategy is more suitable than stream-based selective sampling. In a pool-based scenario, AL attempts to find a small set of representative data $L$ from a large unlabeled data pool $U$ such that it achieves satisfactory performance with much less labeling effort, i.e., $|L| \ll |U|$. Algorithm \ref{alg:main} is designed to reach a minimal $|L|$. A pre-trained encoder $f_\phi$ is used instead of learning $\phi$ on $D$, to avoid updating the entire parameter set in every AL iteration. Only the parameters $\phi'$ of the feature layer, i.e., the layer preceding the output layer of the encoder, are updated in AL iterations; $\hat{\phi}$ denotes the parameters frozen during AL iterations.
\begin{algorithm}[H]
\caption{\textit{Training Active Transfer Prototypical Network}}\label{alg:main}
\textbf{Input:} pre-trained encoder $f_{\phi}(x)$; unlabeled data pool $\mathbf{U}=\{x_1,\dots,x_m\}$
\begin{algorithmic}
\State $\mathbf{L} \gets RandomSample(\mathbf{U}, 1)$
\State $\mathbf{U} \gets \mathbf{U}\setminus\mathbf{L}$
\State \textbf{Initialization of Feature Layer}
\State $\mathbf{J_{\phi'}}\gets \sum_{(x_i, y_i)\in\mathbf{L}}(y_i - softmax(f_{\phi', \hat{\phi}}(x_i)))^2$
\State $\phi'\gets argmin_{\phi'}\mathbf{J_{\phi'}}$
\State \textbf{Initialization of Prototypical Network}
\State $c_k\gets\frac{1}{|\mathbf{L}_k|}\sum_{(x_i,y_i)\in\mathbf{L}_k}f_{\phi', \hat{\phi}}(x_i)$
\State \textbf{Active Learning Iterations}
\For{$n$ in $\{1,\dots,N\}$}
\State $p_{\phi', \hat{\phi}}(y=k|\textbf{x}) \gets \frac{\exp(-d(f_{\phi', \hat{\phi}}(\textbf{x}), c_k))}{\sum_{i=1}^{K}\exp(-d(f_{\phi', \hat{\phi}}(\textbf{x}), c_{i}))}$
\State $\mathbf{S} \gets QueryStrategy(\mathbf{U}, p_{\phi', \hat{\phi}})$
\State $\mathbf{L} \gets \mathbf{L}\bigcup\mathbf{S}$
\State $\phi'\gets argmin_{\phi'}\sum_{(x_i, y_i)\in\mathbf{L}}(y_i - softmax(f_{\phi', \hat{\phi}}(x_i)))^2$
\State $c_k\gets\frac{1}{|\mathbf{L}_k|}\sum_{(x_i,y_i)\in\mathbf{L}_k}f_{\phi', \hat{\phi}}(x_i)$
\EndFor
\end{algorithmic}
\end{algorithm}
The ATPN starts with random initialization: one instance is randomly sampled from $U$ and labeled by an expert. The pseudo-label of the initially sampled point is predicted using the pre-trained encoder $f_{\phi}$, with the probability obtained from the softmax layer placed on top of the encoder. $\phi'$ is initially updated by minimizing the \textit{Mean Square Error} (MSE) on the initial point. $L$ is considered a support set for the ProtoNet; the prototypes $c_k$ are therefore initialized from $L$ by averaging each $L_k$ in the embedding space.
In the AL iterations, the classification probability $p_{\phi', \hat{\phi}}$ is estimated based on the negative Euclidean distance $d$ between the embedded input feature $f_{\phi', \hat{\phi}}$ and the prototypes $c_k$. Then, the query strategy evaluates the informativeness for every single instance in $U$ using $p_{\phi', \hat{\phi}}$ as input. Intuitively, the instances far away from the prototypes are regarded as informative instances. The most informative instance or set of instances $S$ is selected and added to $L$.
Furthermore, the labeled dataset $L$ is used to update two parts of our algorithm. On the one hand, the prototypes are updated by averaging the embeddings in the new $L$ to increase the representativeness of each class. On the other hand, we fine-tune the encoder, i.e., update the parameters of the feature layer $\phi'$ by minimizing the MSE in $L$. It aims to preserve the learning ability of the algorithm even when the test data and the training data for the pre-trained encoder are not similar. Finally, the iteration process repeats $N$ times, which is the maximal query number. The maximal query number $N$ corresponds to the budget for data labeling.
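The two update steps above, prototype averaging and distance-based classification, can be sketched as follows (a minimal NumPy sketch; the toy embeddings are illustrative, whereas the real model would embed inputs with the encoder $f_{\phi', \hat{\phi}}$):

```python
import numpy as np

def prototypes(embeddings, labels, n_classes):
    # c_k: mean of the embedded support points of class k.
    return np.stack([embeddings[labels == k].mean(axis=0)
                     for k in range(n_classes)])

def proto_probs(z, protos):
    # p(y=k|x): softmax over negative squared Euclidean distances
    # between the embedded query z and each prototype.
    d = ((z[:, None, :] - protos[None, :, :]) ** 2).sum(axis=2)
    logits = -d
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    e = np.exp(logits)
    return e / e.sum(axis=1, keepdims=True)

# Toy 2-D embeddings of a labeled support set L with two classes.
emb = np.array([[0.0, 0.0], [0.0, 1.0], [4.0, 4.0], [4.0, 5.0]])
lab = np.array([0, 0, 1, 1])
c = prototypes(emb, lab, 2)                 # class means in embedding space
p = proto_probs(np.array([[0.0, 0.0]]), c)  # query near class-0 prototype
```

A query point close to the class-0 prototype receives probability close to one for class 0, which is exactly the quantity the query strategy consumes.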
\subsection{Active Prototypical Network Variants}
To investigate the influence of each relevant part of ATPN, we implement two other variants: \textit{Active Offline Prototypical Network} (OfflinePN) and \textit{Active Online Prototypical Network} (OnlinePN). The former does not update the encoder's feature layer $\phi'$ during AL iterations, in order to observe the impact of fine-tuning; the latter removes the pre-trained encoder and trains the encoder entirely online with the support set $L$, in order to isolate the improvement contributed by the pre-trained encoder.
Algorithm \ref{alg:offline} removes the fine-tuning part from ATPN. In Algorithm \ref{alg:online}, there is no pre-trained encoder. The learnable parameters $\phi', \hat{\phi}$ are initialized randomly and updated online in AL iterations with support set $L$.
\begin{minipage}{0.49\textwidth}
\begin{algorithm}[H]
\caption{\textit{Training Active Offline Prototypical Network}}\label{alg:offline}
\textbf{Input:} pre-trained encoder $f_{\phi}(x)$; unlabeled data pool $\mathbf{U}=\{x_1,\dots,x_m\}$
\begin{algorithmic}
\State $\mathbf{L} \gets RandomSample(\mathbf{U}, 1)$
\State $\mathbf{U} \gets \mathbf{U}\setminus\mathbf{L}$
\State \textbf{Initialization of Prototypical Network}
\State $c_k\gets\frac{1}{|\mathbf{L}_k|}\sum_{(x_i,y_i)\in\mathbf{L}_k}f_{\phi', \hat{\phi}}(x_i)$
\State \textbf{Active Learning Iterations}
\For{$n$ in $\{1,\dots,N\}$}
\State $p_{\phi', \hat{\phi}}(y=k|\textbf{x}) \gets \frac{\exp(-d(f_{\phi', \hat{\phi}}(\textbf{x}), c_k))}{\sum_{i=1}^{K}\exp(-d(f_{\phi', \hat{\phi}}(\textbf{x}), c_{i}))}$
\State $\mathbf{S} \gets QueryStrategy(\mathbf{U}, p_{\phi', \hat{\phi}})$
\State $\mathbf{L} \gets \mathbf{L}\bigcup\mathbf{S}$
\State $c_k\gets\frac{1}{|\mathbf{L}_k|}\sum_{(x_i,y_i)\in\mathbf{L}_k}f_{\phi', \hat{\phi}}(x_i)$
\EndFor
\end{algorithmic}
\end{algorithm}
\end{minipage}
\begin{minipage}{0.49\textwidth}
\begin{algorithm}[H]
\caption{\textit{Training Active Online Prototypical Network}}\label{alg:online}
\textbf{Input:} unlabeled data pool $\mathbf{U}=\{x_1,\dots,x_m\}$
\begin{algorithmic}
\State $\mathbf{L} \gets RandomSample(\mathbf{U}, 1)$
\State $\mathbf{U} \gets \mathbf{U}\setminus\mathbf{L}$
\State \textbf{Initialization of Encoder}
\State $\phi', \hat{\phi}\gets Randomly\ initialize\ f_{\phi', \hat{\phi}}(x_i)$
\State \textbf{Initialization of Prototypical Network}
\State $c_k\gets\frac{1}{|\mathbf{L}_k|}\sum_{(x_i,y_i)\in\mathbf{L}_k}f_{\phi', \hat{\phi}}(x_i)$
\State \textbf{Active Learning Iterations}
\For{$n$ in $\{1,\dots,N\}$}
\State $p_{\phi', \hat{\phi}}(y=k|\textbf{x}) \gets \frac{\exp(-d(f_{\phi', \hat{\phi}}(\textbf{x}), c_k))}{\sum_{i=1}^{K}\exp(-d(f_{\phi', \hat{\phi}}(\textbf{x}), c_{i}))}$
\State $\mathbf{S} \gets QueryStrategy(\mathbf{U}, p_{\phi', \hat{\phi}})$
\State $\mathbf{L} \gets \mathbf{L}\bigcup\mathbf{S}$
\State $\phi', \hat{\phi}\gets argmin_{\phi', \hat{\phi}}\{-\log(p_{\phi', \hat{\phi}}(y=k|\textbf{x}))\}$
\State $c_k\gets\frac{1}{|\mathbf{L}_k|}\sum_{(x_i,y_i)\in\mathbf{L}_k}f_{\phi', \hat{\phi}}(x_i)$
\EndFor
\end{algorithmic}
\end{algorithm}
\end{minipage}
\section{Experimental Settings}\label{exp}
The algorithms are validated on the \textit{UCI HAR} \cite{HAR} \textit{\& HAPT} dataset \cite{HAPT} and a Bosch braking maneuver dataset. We run each algorithm five times and record the accuracy at each AL iteration. The average accuracy is then calculated, and the confidence interval is determined using the \textit{bootstrapping} method \cite{diciccio1996bootstrap}. Finally, we plot accuracy as a function of the query number to evaluate model performance.
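A percentile-bootstrap confidence interval over repeated runs can be computed along these lines (a sketch; the function name, resample count, and toy accuracies are illustrative assumptions):

```python
import numpy as np

def bootstrap_ci(accs, n_boot=2000, alpha=0.05, seed=0):
    # Percentile bootstrap: resample the run accuracies with
    # replacement and take quantiles of the resampled means.
    rng = np.random.default_rng(seed)
    accs = np.asarray(accs)
    means = np.array([rng.choice(accs, size=accs.size, replace=True).mean()
                      for _ in range(n_boot)])
    return np.quantile(means, alpha / 2), np.quantile(means, 1 - alpha / 2)

# Five accuracies of one AL iteration across the five repeated runs.
lo, hi = bootstrap_ci([0.71, 0.74, 0.69, 0.73, 0.72])
```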
\subsection{UCI HAR\&HAPT Dataset}
HAR and HAPT are human activity recognition datasets built from recordings of subjects performing activities of daily living while carrying a waist-mounted smartphone with embedded inertial sensors. The trials of both datasets were conducted on 30 participants ranging in age from 19 to 48 years, who wore a smartphone (Samsung Galaxy S II) around their waist during the experiment. With the device's built-in accelerometer and gyroscope, 3-axial linear acceleration and 3-axial angular velocity were recorded at a rate of 50 Hz. The HAR dataset contains six basic human activities: three static postures, \textit{standing}, \textit{sitting}, and \textit{lying}, and three dynamic activities, \textit{walking}, \textit{walking downstairs}, and \textit{walking upstairs}. The HAPT dataset additionally includes the postural transitions that occur between the static postures: \textit{stand-to-sit, sit-to-stand, sit-to-lie, lie-to-sit, stand-to-lie, and lie-to-stand}. The sensor signals (accelerometer and gyroscope) were pre-processed by applying median filters and a third-order low-pass Butterworth filter with a corner frequency of 20 Hz to remove noise. The signals were then sampled in fixed-width sliding windows of 2.56 s with 50\% overlap (128 readings per window) to standardize the signal length.
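The windowing step can be reproduced as follows (a sketch of the segmentation only, applied to a single synthetic channel; the filtering steps are omitted):

```python
import numpy as np

def sliding_windows(signal, width=128, overlap=0.5):
    # Fixed-width windows with 50% overlap: 2.56 s at 50 Hz
    # gives 128 readings per window, as in the HAR preprocessing.
    step = int(width * (1 - overlap))
    n = (len(signal) - width) // step + 1
    return np.stack([signal[i * step: i * step + width] for i in range(n)])

x = np.arange(512.0)        # one synthetic sensor channel
w = sliding_windows(x)      # 7 windows of 128 samples each
```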
\subsection{BOSCH Braking Maneuver Dataset}
To validate the investigated algorithms, they were tested on a real-world industrial database, the Bosch \textit{Automated Maneuver Detection} (AMD) database. It contains braking maneuvers for software functions, e.g., \textit{Anti-lock Braking System} (ABS) and \textit{Electronic Brake Force Distribution} (EBD). This database aims to train a general feature extractor to automate the maneuver detection task. The measurements were originally collected from the quality center database, the official data pool for storing brake maneuver measurements from customer and platform development projects; they were mainly recorded by sensors in test vehicles. 10 ABS maneuver types and 5 EBD maneuver types are used in this paper, each containing hundreds of measurements: 12277 measurements in total, of which 8536 are ABS and 3741 are EBD measurements. Each measurement records braking events under different scenarios, such as different road friction coefficients and pavement slopes. 18 relevant sensor signals, including braking pressure, wheel velocity, etc., are selected as input for our model. All signals are re-sampled to length 4000 using linear interpolation to obtain a uniform input dimension.
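The final resampling step admits a one-function sketch (assuming uniformly spaced samples; the toy signal is illustrative):

```python
import numpy as np

def resample(signal, target_len=4000):
    # Linear interpolation of a 1-D signal onto a uniform grid
    # of target_len points, yielding a fixed input dimension.
    old = np.linspace(0.0, 1.0, num=len(signal))
    new = np.linspace(0.0, 1.0, num=target_len)
    return np.interp(new, old, signal)

y = resample(np.array([0.0, 1.0, 2.0]), target_len=5)
```

Endpoints are preserved and interior values are linearly interpolated.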
\subsection{Pre-trained Encoder}
We use a \textit{1D Convolutional Neural Network} (CNN) \cite{LeCun1998} as the time-series encoder; in principle, it can be replaced by any neural network capable of extracting solid features from time series, such as a \textit{Long Short-Term Memory} (LSTM) network \cite{HochSchm97}. The objective of the pre-trained encoder in our model is to provide a solid initial model for the AL loop. As long as the pre-trained encoder extracts common features, the choice of architecture and the hyperparameter tuning have only a minor effect on the final result; since the feature layer is fine-tuned in the AL loop, adaptation from different pre-trained encoders is efficient. Additionally, this paper focuses on investigating the overall combination concept of AL and ProtoNet. We therefore select an efficient hyperparameter setting to reduce the experiment time.
For the UCI HAR dataset, the encoder contains two 1D convolutional layers with 32 filters each, kernel size 7, and \textit{Rectified Linear Unit} (ReLU) \cite{relu} activations, followed by a max-pooling layer with pool size 2 and a dropout layer with rate 0.5. The encoder ends with a flatten layer and a softmax layer. We use Adam as the optimizer and categorical cross-entropy as the loss function, and train for 10 epochs with batch size 32.
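A shape-level sketch of this convolutional stack, written in plain NumPy rather than a deep-learning framework (the 9-channel input, random weights, and layer ordering are illustrative assumptions; the dropout, flatten, and softmax layers are omitted):

```python
import numpy as np

def conv1d(x, w):
    # 'Valid' 1D convolution: x has shape (T, C_in),
    # w has shape (K, C_in, C_out).
    K = w.shape[0]
    T = x.shape[0] - K + 1
    return np.stack([np.tensordot(x[t:t + K], w, axes=([0, 1], [0, 1]))
                     for t in range(T)])

def relu(x):
    return np.maximum(x, 0.0)

def maxpool1d(x, size=2):
    T = x.shape[0] // size
    return x[:T * size].reshape(T, size, -1).max(axis=1)

rng = np.random.default_rng(0)
x = rng.standard_normal((128, 9))            # one window, 9 channels
w1 = rng.standard_normal((7, 9, 32)) * 0.1   # 32 filters, kernel size 7
w2 = rng.standard_normal((7, 32, 32)) * 0.1
h = maxpool1d(relu(conv1d(relu(conv1d(x, w1)), w2)))
```

The output has shape $(58, 32)$: two valid convolutions shrink 128 time steps to 116, and pooling halves them.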
Further hyperparameter tuning and more training epochs improve the pre-trained encoder itself; still, our experiments show that the resulting difference in overall performance is minor. The training data used for the encoder, however, is relevant (see section \ref{pre-train exp}).
\subsection{Experiments}\label{expsetting}
We conduct four experiments on the UCI HAR\&HAPT dataset to investigate the algorithm performance in terms of algorithm structure, query strategy, and pre-trained encoder. The first experiment compares all algorithm variants to standard AL and passive learning; a 1D CNN is used as the machine learning model in the AL framework with least-confidence sampling as the query strategy. The same AL architecture with random sampling as the query strategy serves as passive learning in this context. The second and third experiments investigate the impact of the query strategy, applying the strategies from section \ref{querystrategy}. In the last experiment, the pre-trained encoder is trained on datasets with different degrees of similarity to the test dataset: for the ``weak encoder'', the encoder is trained on a different dataset, namely the Bosch braking maneuver dataset; the ``normal encoder'' uses UCI HAR as training data; and the ``strong encoder'' further includes some UCI HAPT data.
To identify the improvement of the novel algorithm in a real-world application, only ATPN and OnlinePN are compared to passive learning, which is commonly used in industry. The best algorithm settings based on the previous experiments are used for the Bosch dataset. Since the robustness of the pre-trained encoder is vital for the model, the braking measurements of 9 braking maneuvers are randomly selected from the 15 maneuvers as training data for the encoder, and the measurements of the remaining 6 maneuvers are considered test data. This random selection is repeated five times. The best and worst cases are recorded in Fig. \ref{fig:bosch}.
\section{Results}
To better describe the observations, we define two main processes in each learning curve of an algorithm.
\begin{itemize}
\item \textbf{Exploration Process}: during this process, AL attempts to learn a model reliable enough to estimate classification probabilities for the query strategy.
\item \textbf{Exploitation Process}: AL has learned a model sufficient to predict reliable informativeness of the unlabeled data. Additional labeled instances do not change the unlabeled instances' informativeness significantly; therefore, the accuracy increases slowly in this process. The slope represents the learning efficiency of AL.
\end{itemize}
\begin{figure}
\caption{The results of experiments undertaken on the UCI HAR\&HAPT dataset. a) different versions of active ProtoNet are compared with standard AL and passive learning. b) different single-mode query strategies are tested for ATPN. c) batch-mode query strategies are compared with margin sampling and random sampling. d) pre-trained encoders with different generalization levels are tested for ATPN with margin sampling.}
\label{fig:HAR}
\end{figure}
\subsection{Algorithm Structure}
Fig. \ref{fig:HAR} a) shows that the learning curve of passive learning has an overall linear growth trend with substantial deviation. Standard AL improves learning efficiency by requiring fewer queries to achieve the same classification accuracy; however, standard AL has a noisy exploration process. The results of all AL-ProtoNet variants follow a pattern in which the curve has a significantly larger slope in the exploration process than in the exploitation process. OnlinePN and ATPN both reach 60\% accuracy within 50 queries. OnlinePN has a flatter exploitation curve and converges with standard AL after 1000 queries; in contrast, the accuracy of ATPN keeps increasing visibly in the exploitation process. Surprisingly, the exploration process of OfflinePN is the most efficient, but its accuracy improvement stops at 80\% within very few queries.
The comparison between OnlinePN and standard AL indicates the structure of ProtoNet provides a more efficient exploration process; they differ only in the machine learning algorithm, i.e., OnlinePN uses ProtoNet instead of 1D CNN as an AL algorithm. OfflinePN leverages the prior knowledge in a pre-trained encoder and further optimizes the exploration efficiency, but the training dataset's quality strictly limits the optimal accuracy. Fine-tuning during the AL iterations balances the exploration and exploitation. Therefore, ATPN outperforms other variants in terms of the overall learning ability.
\subsection{Query Strategy}
The influence of different query strategies on ATPN is investigated in Fig. \ref{fig:HAR} b) and c). Different query strategies do not change the shape of the learning curve significantly. Margin sampling and least-confidence sampling perform slightly better than random sampling and behave comparably in the exploration process, with margin sampling showing marginally higher learning efficiency in the exploitation process. In contrast, entropy sampling yields worse learning performance than random sampling in both the exploration and exploitation processes, with an unusual plunge close to the critical point between exploration and exploitation. Furthermore, a batch-mode query strategy could in principle increase the learning efficiency because it considers informativeness and diversity simultaneously; however, Fig. \ref{fig:HAR} c) does not fully support this hypothesis. Random batch sampling performs worse than random sampling because an unsuitable batch can introduce a considerable bias into the model, leading to a more fluctuating learning curve. According to these experiments, margin sampling is therefore best suited for ATPN.
\subsection{Pre-trained Encoder}\label{pre-train exp}
Fig. \ref{fig:HAR} d) demonstrates that the change in the exploration and exploitation process due to different pre-trained encoders is minor. The learning curves seem to be only shifted vertically for different encoders. In other words, the pre-trained encoder influences the turning point from the exploration to the exploitation process.
\subsection{Validation on Bosch dataset}
To validate the novel algorithm on a real-world dataset, we mainly focus on ATPN. The goal is to quantify the improvement of the novel algorithm over the approach currently used in the automotive industry, i.e., passive learning. The validation of ATPN is carried out with the different test settings described in section \ref{expsetting}. The average learning curve shows behaviour similar to that on the UCI HAR dataset, which verifies the generalization of the novel algorithm to a more complex time-series dataset. We visualize the worst and best cases of the ATPN learning curve to demonstrate the algorithm's robustness in practice. Even though the previous results have shown that the pre-trained encoder has an impact on the overall learning performance, the resulting deviation is acceptable in practice: ATPN reaches over 90\% accuracy with less than 10\% of the labeling effort on average.
Moreover, in an extreme case, the training data for the pre-trained encoder and the test set can be independent. ATPN and OnlinePN are equivalent in this circumstance, since the encoder is not pre-trained for the current task; the learning curve of OnlinePN thus indicates the lower bound of the learning performance. The results show that ATPN yields a significant improvement even if the pre-trained encoder is not well trained.
\begin{figure}
\caption{The learning curves of ATPN, OnlinePN and passive learning for the Bosch brake maneuver dataset. For the best algorithm - ATPN, experiments with different pre-trained encoders and test sets are conducted; the worst \& best case and the average learning curve are shown in the figure.}
\label{fig:bosch}
\end{figure}
\section{Conclusions and Future Work}\label{conclusion}
We propose a novel AL framework incorporating a fine-tuned ProtoNet for time-series classification. The investigation of different combinations has shown that ATPN has the most comprehensive learning ability, in terms of both learning efficiency and robustness. Furthermore, the research has shown that the quality of the pre-trained encoder influences the learning performance of ATPN; however, in practice the encoder keeps the deviation of the learning curve within acceptable limits. Moreover, margin sampling provided the best learning performance on both datasets. This paper indicates that ATPN can address the trade-off dilemma between the training data size for the initial model and the query number, and that it provides a more efficient and robust learning process for a pool-based AL strategy.
A training set of similar tasks is required to obtain the optimal learning curve using ATPN. Developing a similarity metric for the labeled training data and the current data could address this problem. Combining an unsupervised encoder such as Autoencoder would also be fruitful for further work. More advanced AL strategies, e.g., query by committee, could improve the learning performance. Notwithstanding these limitations, the study indicates that the proposed algorithm can reduce the effort for labeling time-series data dramatically in practice.
\end{document} |
\begin{document}
\title{The Foliated Weinstein Conjecture}
\subjclass[2010]{Primary: 53D10.}
\date{May, 2015}
\keywords{contact structures, foliations, Weinstein conjecture}
\author{\'Alvaro del Pino}
\address{Universidad Aut\'onoma de Madrid and Instituto de Ciencias Matem\'aticas -- CSIC.
C. Nicol\'as Cabrera, 13--15, 28049, Madrid, Spain.}
\email{[email protected]}
\author{Francisco Presas}
\address{Instituto de Ciencias Matem\'aticas -- CSIC.
C. Nicol\'as Cabrera, 13--15, 28049, Madrid, Spain.}
\email{[email protected]}
\begin{abstract}
A foliation is said to admit a foliated contact structure if there is a codimension $1$ distribution in the tangent space of the foliation such that the restriction to any leaf is contact. We prove a version of the Weinstein conjecture in the presence of an overtwisted leaf. The result is shown to be sharp.
\end{abstract}
\maketitle
\section{Introduction}
The \textbf{Weinstein conjecture} \cite{Wei} states that the Reeb vector field associated to a contact form $\alpha$ on a closed $(2n+1)$--manifold $M$ always carries a closed periodic orbit. Hofer proved in \cite{Ho} that the Weinstein conjecture holds for any $3$--dimensional contact manifold $(M^3, \alpha)$ that is overtwisted or satisfies $\pi_2(M) \neq 0$. It was later proven in full generality by Taubes \cite{Tau} by localising the Seiberg--Witten equations along Reeb orbits.
The main theorem of this note -- definitions of the relevant objects will be given in the next section -- reads as follows:
\begin{theorem} \label{thm:main}
Let $(M^{3+m}, {\mathcal{F}}^3, \xi^2)$ be a contact foliation in a closed manifold $M$. Let $\alpha$ be a defining 1--form for an extension of $\xi$ and let $R$ be its Reeb vector field. Let ${\mathcal{L}}^3 \hookrightarrow M$ be a leaf.
\begin{itemize}
\item[i.] If $({\mathcal{L}}, \xi|_{{\mathcal{L}}})$ is an overtwisted contact manifold, $R$ possesses a closed orbit in the closure of ${\mathcal{L}}$.
\item[ii.] If $\pi_2({\mathcal{L}}) \neq 0$, $R$ possesses a closed orbit in the closure of ${\mathcal{L}}$.
\end{itemize}
\end{theorem}
The case where the leaf ${\mathcal{L}}$ is closed corresponds to the Weinstein conjecture. This result contrasts, just as in the non--foliated case, with the behaviour of \textsl{smooth flows}: it was proven in \cite{CPP15} that any nowhere vanishing vector field tangent to a foliation $(M^{3+m}, {\mathcal{F}}^3)$ can be homotoped, using parametric plugs, to a tangent vector field without periodic orbits.
The proof of Theorem \ref{thm:main}, based on Hofer's methods, occupies the last section of the note. Before that, several examples showing the sharpness of the result are discussed.
In Subsection \ref{ssec:tight}, Proposition \ref{prop:noGeodesics} constructs a contact foliation in the 4--torus $\mathbb{T}^4$ that has \textsl{all leaves tight} and that has no Reeb orbits. Naturally, in this example all leaves are open. This shows that the \textbf{foliated Weinstein conjecture} does not necessarily hold as soon as we drop the assumption on overtwistedness. Then Proposition \ref{prop:geodesic} presents a more sophisticated example of a contact foliation in $\mathbb{S}^3 \times \mathbb{S}^1$ with all leaves tight and with closed Reeb orbits appearing only in the unique compact leaf of the foliation.
In Subsection \ref{ssec:sharp} we construct a foliation in $\mathbb{S}^2 \times \mathbb{S}^1 \times \mathbb{S}^1$ that has two compact leaves $\mathbb{S}^2 \times \mathbb{S}^1 \times \{0,\pi\}$ on which all others accumulate. We then endow it with a foliated contact structure that makes all leaves overtwisted but that has closed Reeb orbits only in the compact ones. Theorem \ref{thm:main} is therefore sharp in the sense that an overtwisted leaf might not possess a Reeb orbit \textsl{itself}.
In Subsection \ref{ssec:nonComplete} we construct Reeb flows with no closed orbits in every \textsl{open contact manifold}.
\textsl{Acknowledgements.} Part of this work was carried out while the first author was visiting the group of Andr\'as Stipsicz at the Alfr\'ed R\'enyi Institute. We are very thankful to Klaus Niederkr\"uger for the many hours of insightful conversations about this project. The authors would also like to thank Viktor Ginzburg for his interest in this work and for his suggestions regarding Propositions \ref{prop:openLeaf} and \ref{prop:noGeodesics}. Several other people have discussed with us when this work was in progress: S. Behrens, R. Casals, M. Hutchings, E. Miranda, D. Pancholi, and D. Peralta--Salas.
The first author is supported by a La Caixa--Severo Ochoa grant. Both authors are supported by Spanish National Research Project MTM2013--42135 and the ICMAT Severo Ochoa grant SEV--2011--0087 through the V. Ginzburg Lab.
\section{The relevant concepts involved}
All objects considered henceforth will be smooth. Foliations and distributions will be oriented and cooriented. Often, arguments where orientability assumptions are dropped would go through by taking double or quadruple covers appropriately.
\subsection{Contact structures}
\begin{definition}
Let $W$ be a $2n+1$ dimensional manifold. A distribution $\xi^{2n} \subset TW$ is said to be a \textbf{contact distribution} if it is maximally non--integrable. A $1$--form $\alpha \in \Omega^1(W)$ satisfying $\ker(\alpha) = \xi$ is called a \textbf{contact form}. $\xi$ being maximally non--integrable amounts to $\alpha$ satisfying $\alpha \wedge d\alpha^n \neq 0$.
We say that the pair $(W,\xi)$ is a \textbf{contact manifold}.
\end{definition}
A map $\phi: (W_1, \xi_1) \to (W_2,\xi_2)$ satisfying $\phi^* \xi_2 = \xi_1$ is a contact map. If $\phi$ is additionally a diffeomorphism we will say that $\phi$ is a \textbf{contactomorphism}.
\begin{example}
Consider $\mathbb{R}^{2n+1}$ with coordinates $(x_1,y_1,\cdots,x_n,y_n,z)$. The $2n$--distribution $\xi_{st} = \ker(dz-\sum_{i=1}^{n} x_i\,dy_i)$ is called the standard \textbf{tight} contact structure.
\end{example}
\begin{example}
Consider $\mathbb{R}^3$ with cylindrical coordinates $(r,\theta,z)$. The $2$--distribution $\xi_{ot} = \ker(\cos(r)\,dz+r\sin(r)\,d\theta)$ is called the contact structure \textbf{overtwisted at infinity}. The disc $\Delta = \{z=0, r \leq \pi\}$ is called the \textbf{overtwisted disc}.
\end{example}
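A direct computation verifies the contact condition in this example. Writing $\alpha_{ot} = \cos(r)\,dz + r\sin(r)\,d\theta$, we get
\[
d\alpha_{ot} = -\sin(r)\,dr\wedge dz + \big(\sin(r)+r\cos(r)\big)\,dr\wedge d\theta,
\]
\[
\alpha_{ot}\wedge d\alpha_{ot} = \big(r+\sin(r)\cos(r)\big)\,dr\wedge d\theta\wedge dz.
\]
Since $dr\wedge d\theta\wedge dz = r^{-1}\,dx\wedge dy\wedge dz$ away from the axis and $(r+\sin(r)\cos(r))/r \to 2$ as $r\to 0$, the form $\alpha_{ot}\wedge d\alpha_{ot}$ is nowhere vanishing, so $\xi_{ot}$ is indeed a contact structure.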
It was shown by Bennequin in \cite{Be} that the structures $(\mathbb{R}^3, \xi_{st})$ and $(\mathbb{R}^3, \xi_{ot})$, although homotopic as plane fields, are distinct as contact structures.
\subsubsection{Overtwisted contact structures in dimension $3$}
\begin{definition}
Let $(W^3,\xi^2)$ be a contact manifold. $(W,\xi)$ is said to be an \textbf{overtwisted} contact manifold if there is an embedded 2--disc $D \subset W$ and a contactomorphism $\phi: \nu(\Delta) \to \nu(D)$ between a neighbourhood $\nu(\Delta)$ of the overtwisted disc $\Delta \subset \mathbb{R}^3$ and a neighbourhood $\nu(D) \subset W$ of $D$.
\end{definition}
The relevance of this notion stems from the following theorem stating that overtwisted contact manifolds are completely classified by their underlying algebraic topology.
\begin{theorem} \emph{(Eliashberg \cite{El89})} \label{thm:El89}
Let $W^3$ be a $3$--fold. Any plane field $\eta \subset TW$ is homotopic to an overtwisted contact structure.
Further, any two overtwisted contact structures $\xi_1, \xi_2 \subset TW$ homotopic as plane fields are homotopic through overtwisted contact structures. In particular, they are contactomorphic.
\end{theorem}
Theorem \ref{thm:El89} says that $2$--plane fields and contact structures on $3$--manifolds are in one-to-one correspondence at the level of connected components. Eliashberg's result is stronger than stated here: indeed, there is a weak homotopy equivalence if one restricts to the class of plane fields containing a fixed overtwisted disc.
Overtwisted contact structures in $\mathbb{R}^3$ were completely classified by Eliashberg in \cite{El93}. In particular, the following proposition will be used in Subsection \ref{ssec:sharp}.
\begin{proposition} \emph{(Eliashberg \cite{El93})} \label{prop:El93}
Let $\xi$ be a contact structure in $\mathbb{R}^3$ that is overtwisted in the complement of every compact subset. Then $\xi$ is isotopic to $\xi_{ot}$.
\end{proposition}
Contact structures with the property that they remain overtwisted after removing any compact subset are called \textbf{overtwisted at infinity}.
\subsubsection{Overtwisted contact structures in higher dimensions}
Overtwisted contact structures have been defined in full generality -- for every dimension -- in \cite{BEM}. In \cite{CMP} it has been shown that the overtwisted disc in higher dimensions can be understood as a stabilisation of the overtwisted disc in dimension $3$.
The following lemma will be useful in Subsection \ref{ssec:nonComplete}. Its proof is based on a \textsl{swindling} argument, as found in \cite{El92}.
\begin{lemma} \label{coro:BEM} \emph{(\cite[Corollary 1.4]{BEM})}
Let $(M^{2n+1}, \xi_M)$ be a connected overtwisted contact manifold and let $(N^{2n+1}, \xi_N)$ be an open contact manifold of the same dimension. Let $f : N \rightarrow M$ be a smooth embedding covered by a contact bundle homomorphism $\Phi : TN \rightarrow TM$ -- that is, $\Phi|_{\xi_N(p)}$ maps into $\xi_M(f(p))$ and preserves the conformal symplectic structure -- and assume that $df$ and $\Phi$ are homotopic as injective bundle homomorphisms $TN \rightarrow TM$.
Then $f$ is isotopic to a contact embedding $\tilde{f} : (N, \xi_N) \rightarrow (M, \xi_M)$.
\end{lemma}
\subsubsection{Convex surfaces} \label{sssec:convex}
Let $(W^3, \xi^2)$ be a contact manifold. Let $\Sigma^2 \subset W$ be an immersed surface. The intersection $\xi \cap T\Sigma$ yields a singular foliation by lines on $\Sigma$, which is called the \textbf{characteristic foliation}. In the generic case, it can be assumed that the singularities -- the points where $\xi_p = T_p\Sigma$ -- are isolated points, that can then be classified into \textbf{nicely elliptic} and \textbf{hyperbolic}.
\begin{example} \label{ex:ot}
By our characterisation of overtwistedness, any overtwisted manifold $(W, \xi)$ contains a disc $\Sigma$ with a single singular point, which is nicely elliptic, and whose boundary is legendrian. All other leaves spiral towards the legendrian boundary at one end and converge to the elliptic point at the other. Such a disc appears as a $C^\infty$--small perturbation of the overtwisted disc $\Delta$.
\end{example}
\begin{example} \label{ex:tight}
Consider the unit sphere $\mathbb{S}^2$ in $(\mathbb{R}^3, \xi_{st})$. Its singular foliation has two critical points located at the poles, which are nicely elliptic. All other leaves are diffeomorphic to $\mathbb{R}$ and they connect the poles.
\end{example}
\begin{theorem}[Eliashberg, Giroux, Fuchs] \label{thm:EGF}
Let $\Sigma \cong \mathbb{S}^2$ be an embedded sphere in a tight contact manifold $(W, \xi)$. Then, after a $C^0$--small perturbation of the embedding, the characteristic foliation of $\Sigma$ is conjugate to that of the unit sphere in the tight $(\mathbb{R}^3, \xi_{st})$.
\end{theorem}
\subsection{Contact foliations}
The contents of this section appear in more detail in \cite{CPP}.
\begin{definition}
A \textbf{contact foliation} is a triple $(M^{2n+1+m}, {\mathcal{F}}^{2n+1}, \xi^{2n})$ where $M$ is a manifold of dimension $2n+1+m$, ${\mathcal{F}}$ is a foliation of codimension $m$, and $\xi \subset T{\mathcal{F}}$ is a distribution of dimension $2n$ that is contact on each leaf of ${\mathcal{F}}$.
Often we will say that $\xi$ is a \textbf{foliated contact structure} on the foliation $(M, {\mathcal{F}})$.
\end{definition}
Contact foliations do exist in abundance as the following result shows:
\begin{theorem} \emph{(\cite{CPP})}
Let $(M^{3+m}, {\mathcal{F}}^3)$ be a foliation such that the structure group of $T{\mathcal{F}}$ reduces to $U(1)\oplus 1$. Then ${\mathcal{F}}$ admits a foliated contact structure with all leaves overtwisted.
\end{theorem}
This result is the foliated counterpart of Eliashberg's result \cite{El89}.
We say that a distribution $\Theta^{2n+m} \subset TM$ satisfying $\xi = \Theta \cap T{\mathcal{F}}$ is an \textbf{extension} of $\xi$; a $1$--form $\alpha$ with $\Theta = \ker(\alpha)$ is called a regular equation for $\Theta$. It follows that $d\alpha$ restricts to a symplectic form on $\xi$, but not necessarily on $\Theta$.
\begin{definition}
Let $(M,{\mathcal{F}},\xi)$ be a contact foliation. Let $\Theta$ be an extension of $\xi$ with regular equation $\alpha$. The \textbf{Reeb vector field} $R$ associated to $\alpha$ is the unique vector field satisfying $R \in \Gamma(T{\mathcal{F}})$, $(i_R d\alpha)|_{T{\mathcal{F}}} = 0$, and $\alpha(R) = 1$.
\end{definition}
Of course this is nothing but the leafwise Reeb vector field induced by the restriction of $\alpha$ to each leaf of ${\mathcal{F}}$.
\subsubsection{The space of foliated contact elements}
The following concept will be relevant in the subsequent construction.
\begin{definition}
A \textbf{strong symplectic foliation} is a triple $(M^{m+2n},{\mathcal{F}}^{2n}, \omega)$ where $M$ is a smooth manifold, ${\mathcal{F}}$ a foliation, and $\omega \in \Omega^2(M)$ a closed 2--form that is symplectic on the leaves of ${\mathcal{F}}$.
\end{definition}
Let $(M^{n+m}, {\mathcal{F}}^n)$ be a smooth foliation. The cotangent space to the foliation $\pi: T^*{\mathcal{F}} \to M$ is an $n$--dimensional bundle over $M$ that carries a natural foliation ${\mathcal{F}}^* = \coprod_{{\mathcal{L}} \in {\mathcal{F}}} \pi^{-1}({\mathcal{L}})$. Additionally, it is endowed with a canonical $1$--form:
\[ \lambda_{(p,w)}(v) = w \circ d_{(p,w)}\pi(v), \text{ at a point $(p,w)$, $p \in M$, $w \in T^*_p{\mathcal{F}}$.} \]
If ${\mathcal{L}} \subset M$ is a leaf of ${\mathcal{F}}$ this is nothing but the \textbf{Liouville $1$--form} on $T^*{\mathcal{L}}$. Therefore, since $d\lambda$ is a leafwise symplectic form that is globally exact, $(T^*{\mathcal{F}}, {\mathcal{F}}^*, d\lambda)$ is a strong symplectic foliation.
Fix a leafwise metric $g$ in $M$. Then there is a bundle isomorphism $\#: T^*{\mathcal{F}} \to T{\mathcal{F}}$. This defines a metric in $T^*{\mathcal{F}}$ by setting $g^*(w_1,w_2) = g(\# w_1, \# w_2)$. The presence of $g^*$ allows one to consider the unit cotangent bundle $\mathbb{S}(T^*{\mathcal{F}})$ as a submanifold of $T^*{\mathcal{F}}$ transverse to ${\mathcal{F}}^*$.
The intersection of $\mathbb{S}(T^*{\mathcal{F}})$ with a leaf ${\mathcal{L}}$ is by construction the sphere bundle $\mathbb{S}(T^*{\mathcal{L}})$ which, endowed with the restriction of $\lambda$, is the contact manifold known as the \textbf{space of oriented contact elements} of ${\mathcal{L}}$. Therefore $(\mathbb{S}(T^*{\mathcal{F}}), {\mathcal{F}}^* \cap \mathbb{S}(T^*{\mathcal{F}}), \ker(\lambda))$ is a contact foliation. We call it the \textbf{space of foliated oriented contact elements}.
\begin{lemma}
The Reeb flow in $(\mathbb{S}(T^*{\mathcal{F}}), {\mathcal{F}}^* \cap \mathbb{S}(T^*{\mathcal{F}}), \lambda)$ coincides with the leafwise cogeodesic flow of $g$.
\end{lemma}
This lemma can be proved just as in the case of contact manifolds (see \cite[Theorem 1.5.2]{Ge}). This construction will be used in Subsection \ref{ssec:tight}.
\subsubsection{The symplectisation of a contact foliation}
\begin{definition}
Let $(M^{2n+1+m}, {\mathcal{F}}^{2n+1}, \xi^{2n})$ be a contact foliation. Let $\Theta^{2n+m} \subset TM$ be an extension of $\xi$, and let $\alpha$ be a defining $1$--form for $\Theta$, $\ker(\alpha) = \Theta$.
We say that
\[ ({\mathbb{R}} \times M, {\mathcal{F}}_{\mathbb{R}} = \coprod_{{\mathcal{L}} \in {\mathcal{F}}} {\mathbb{R}} \times {\mathcal{L}}, \omega = d(e^t\alpha)), \text{ with $t$ the coordinate in $\mathbb{R}$, }\]
is the \textbf{symplectisation} of $(M, {\mathcal{F}}, \xi)$.
\end{definition}
The symplectisation is another instance of a strong symplectic foliation. Restricted to every individual leaf this is the standard symplectisation of the contact structure on the leaf.
Abusing notation, we write $\alpha$ for $\pi^* \alpha$, where $\pi: {\mathbb{R}} \times M \to M$ is the projection onto the second factor. We also write $\xi$ for the restriction of $(d\pi)^{-1} \xi$ to $T(\{t\} \times M)$ and $R$ for the lift of the Reeb vector field to $\{t\} \times M$.
Let us also introduce the projection $\pi_\xi: T({\mathbb{R}} \times M) \to \xi$ along the $\partial_t$ and $R$ directions.
\section{Several examples}
\subsection{Non--complete Reeb vector fields with no closed orbits} \label{ssec:nonComplete}
It is first reasonable to wonder about the Weinstein conjecture for open manifolds in general. In this direction, not much is known. In \cite{vdBPV} and its sequel \cite{vdBPRV} it is shown that the Weinstein conjecture holds for non--compact energy surfaces in cotangent bundles as long as one imposes certain topological conditions on the hypersurface and certain growth conditions on the Hamiltonian, which is assumed to be of mechanical type.
\begin{proposition} \label{prop:openLeaf}
Let $(N^{2n+1}, \xi)$ be an open contact manifold. Then there is a contact form $\alpha$, $\ker(\alpha) = \xi$, whose (possibly non--complete) associated Reeb flow has no periodic orbits.
\end{proposition}
\begin{proof}
Fix some small ball $U \subset N$. Modify $\xi$ within $U$ to introduce an overtwisted disc $\Delta$ in the sense of \cite{BEM}. By applying the relative $h$--principle for overtwisted contact structures, there is a contact structure $\xi_{ot}$ on $N$ that agrees with $\xi$ outside of $U$ and has $\Delta$ as an overtwisted disc. This new contact structure is homotopic to the original one as almost contact structures.
Let $\{N_i\}_{i \in \mathbb{N}}$ be an exhaustion of $N$ by compact sets, $N_i \subset N_{i+1}$. Fix a non--degenerate contact form $\alpha_{ot}$ for the overtwisted structure $\xi_{ot}$. Its closed Reeb orbits are isolated and countable; moreover, we may assume that no closed orbit is fully contained in $\Delta$. We index them as follows: each compact set $N_i$ is intersected by finitely many closed orbits and hence we write $\{\gamma^i_j\}_{j \in I_i}$ for the collection of closed Reeb orbits intersecting $N_i$ but not $N_{i-1}$.
Construct a path $\beta: [0,\infty) \to N$, avoiding $\Delta$, that is proper and such that $N \setminus \beta([0,\infty))$ is diffeomorphic to $N$ by a map isotopic to the identity. Then, for each $i$, and each $j \in I_i$, we can construct paths $\beta^i_j: [0,1] \to N_i$ such that the $\beta_j^i$ are all pairwise disjoint, they intersect ${\operatorname{Image}}(\beta)$ only at $\beta^i_j(0) \in {\operatorname{Image}}(\beta)$, they satisfy $\beta^i_j(1) \in \gamma^i_j \cap N_i$, and they avoid $\Delta$.
Since the images of $\beta$ and the $\beta^i_j$ avoid $\Delta$, we can fix a closed contractible neighbourhood $V$ of $\Delta$ disjoint from them as well. Construct a path $\beta_{ot}: [0,1] \to N$ with $\beta_{ot}(0) \in \partial V$, $\beta_{ot}(1) \in {\operatorname{Image}}(\beta)$ and otherwise avoiding $V$ and all other paths.
Consider the tree $T = \beta \cup \{\cup_{i \in \mathbb{N}, j \in I_i} \beta_j^i \} \cup \beta_{ot}$. Denote by $\nu(T)$ a small closed neighbourhood that deformation retracts onto $T$. We can assume that $N$ is diffeomorphic to $N' = N \setminus (\nu(T) \cup V)$ by a diffeomorphism $f: N \to N'$ that is isotopic to the identity.
The embedding $f: (N,\xi) \to (N' \cup V,\xi_{ot})$ has image $N'$ and is covered by a contact bundle homomorphism: indeed, $f$ is isotopic to the identity in $N$, and $\xi$ and $\xi_{ot}$ are homotopic. An application of Lemma \ref{coro:BEM} now yields an isocontact embedding $\tilde f: (N,\xi) \to (N' \cup V, \xi_{ot})$. The Reeb flow of $\alpha_{ot}$ has no periodic orbits contained in $N' \cup V$ by construction, and hence neither does that of the pullback form $\alpha = \tilde f^*\alpha_{ot}$.
\end{proof}
\begin{remark}
A natural open question is whether it is true that every open contact manifold can be endowed with a contact form inducing a complete Reeb flow with no closed orbits.
\end{remark}
\subsection{The Weinstein conjecture does not hold for contact foliations with all leaves tight} \label{ssec:tight}
We shall construct first a contact foliation with all leaves tight and with periodic orbits lying in the only compact leaf.
\begin{proposition} \label{prop:geodesic}
Let $(\mathbb{S}^3, {\mathcal{F}}_{Reeb})$ be the Reeb foliation on the $3$--sphere and let $g$ be the round metric on $\mathbb{S}^3$. Consider the contact foliation $(\mathbb{S}^3 \times \mathbb{S}^1, \ker(\lambda))$ given by the space of foliated oriented contact elements of ${\mathcal{F}}_{Reeb}$. Its only closed Reeb orbits lie in the leaf over the compact torus leaf.
\end{proposition}
The proposition is an easy consequence of the following lemma.
\begin{lemma}
Consider the Riemannian manifold $(\mathbb{R}^2,g)$, where $g$ is of the form $dr \otimes dr + f(r) d\theta \otimes d\theta$, with $f$ a smooth function satisfying $f(r) = r^2$ close to the origin and $f'(r) > 0$ for $r > 0$. Then $(\mathbb{R}^2,g)$ has no closed geodesics.
\end{lemma}
\begin{proof}
Applying the Koszul formula yields the following equations for the Christoffel symbols:
\[ g(\nabla_{\partial_r}\partial_\theta, \partial_\theta) = f'/2 = \Gamma_{r\theta}^\theta g(\partial_\theta, \partial_\theta) = \Gamma_{r\theta}^\theta f, \]
\[ g(\nabla_{\partial_\theta}\partial_r, \partial_\theta) = f'/2 = \Gamma_{\theta r}^\theta g(\partial_\theta, \partial_\theta) = \Gamma_{\theta r}^\theta f, \]
\[ g(\nabla_{\partial_\theta}\partial_\theta, \partial_r) = -f'/2 = \Gamma_{\theta \theta}^r g(\partial_r, \partial_r) = \Gamma_{\theta \theta}^r. \]
And hence the geodesic equations read:
\[ \ddot{r} = \dfrac{f'}{2}\,\dot{\theta}^2, \]
\[ \ddot{\theta} = -(\log f)'\,\dot{\theta}\,\dot{r}. \]
If at any point $\dot{\theta}=0$, then $\dot{\theta} = 0$ for all times and $\dot{r}$ is constant; these solutions are the radial lines.
All other geodesics have $\dot{\theta} \neq 0$ for all times. Since the quantity $f\dot{\theta}$ is conserved and non--zero along them, they avoid the origin, and hence satisfy $\ddot{r} > 0$. In particular, as soon as a geodesic has $\dot{r} \geq 0$ at some point, it has $\dot{r} > 0$ along its forward orbit, and hence it cannot close up.
For a geodesic to close up it must therefore have $\dot{r} < 0$ for all times, but then $r$ is strictly decreasing along it and the geodesic cannot close up either.
\end{proof}
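The Christoffel symbols appearing in the proof can be double--checked symbolically; the following is a small verification sketch using Python's \texttt{sympy} (it is not part of the argument), computing $\Gamma^k_{ij} = \tfrac{1}{2} g^{kl}(\partial_i g_{jl} + \partial_j g_{il} - \partial_l g_{ij})$ for $g = dr \otimes dr + f(r)\, d\theta \otimes d\theta$:

```python
import sympy as sp

# Metric g = dr⊗dr + f(r) dθ⊗dθ in coordinates x = (r, θ).
r, theta = sp.symbols('r theta')
f = sp.Function('f', positive=True)(r)
x = (r, theta)
g = sp.Matrix([[1, 0], [0, f]])
ginv = g.inv()

# Christoffel symbols via the coordinate formula
# Γ^k_{ij} = (1/2) g^{kl} (∂_i g_{jl} + ∂_j g_{il} - ∂_l g_{ij}).
def Gamma(k, i, j):
    return sp.simplify(sum(
        sp.Rational(1, 2) * ginv[k, l]
        * (sp.diff(g[j, l], x[i]) + sp.diff(g[i, l], x[j]) - sp.diff(g[i, j], x[l]))
        for l in range(2)))

fp = sp.diff(f, r)
# Γ^r_{θθ} = -f'/2 and Γ^θ_{rθ} = f'/(2f), as obtained from the Koszul formula.
print(sp.simplify(Gamma(0, 1, 1) + fp / 2))        # → 0
print(sp.simplify(Gamma(1, 0, 1) - fp / (2 * f)))  # → 0
```

The two printed differences vanish identically, confirming the coefficients used in the geodesic equations.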
\begin{proof}[Proof of Proposition \ref{prop:geodesic}]
Consider $\mathbb{S}^3$ lying in $\mathbb{C}^2$, with coordinates $(z_1,z_2) = (r_1,\theta_1,r_2,\theta_2)$. The Reeb foliation can be assumed to have the Clifford torus $|z_1|^2 = |z_2|^2 = 1/2$ as its torus leaf. One of the solid tori, denote it by $T$, corresponds to $\{|z_1|^2 \leq 1/2, |z_2|^2 = 1 - |z_1|^2\}$ and the other one is given by the symmetric equation. Let us cover the solid torus $T$ with the map $\phi: \mathbb{R} \times \mathbb{D}^2_{1/\sqrt{2}} \rightarrow T$ given by $\phi(s,r,\theta) = (r,\theta,\sqrt{1-r^2},s)$. For all purposes we can work in $\mathbb{R} \times \mathbb{D}^2_{1/\sqrt{2}}$, which is the universal cover of $T$, and hence we shall do so.
The restriction of the flat metric of $\mathbb{C}^2$
\[ g = \sum_{i=1,2} dr_i \otimes dr_i + r_i^2 d\theta_i \otimes d\theta_i \]
to $\mathbb{S}^3$ is precisely the round metric. In the parametrisation of $T$ given above it reads as:
\[ \phi^* g = \dfrac{1}{1-r^2} dr \otimes dr + r^2 d\theta \otimes d\theta + (1-r^2) ds \otimes ds. \]
In particular, this readily shows that the metric induced on the Clifford torus is flat.
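The pullback computation above can likewise be verified symbolically; a \texttt{sympy} sketch (not part of the argument; the coordinate ordering $(s,r,\theta)$ on the domain is ours):

```python
import sympy as sp

# Check that φ(s, r, θ) = (r, θ, sqrt(1 - r²), s) pulls the round metric
# back to (1/(1-r²)) dr² + r² dθ² + (1-r²) ds².
s, th = sp.symbols('s theta', real=True)
r = sp.symbols('r', positive=True)
r1, r2 = sp.symbols('r1 r2', positive=True)

# Flat metric of C² in polar coordinates (r1, θ1, r2, θ2).
G = sp.diag(1, r1**2, 1, r2**2)

phi = sp.Matrix([r, th, sp.sqrt(1 - r**2), s])
J = phi.jacobian(sp.Matrix([s, r, th]))

pull = sp.simplify(J.T * G.subs({r1: r, r2: sp.sqrt(1 - r**2)}) * J)
expected = sp.diag(1 - r**2, 1 / (1 - r**2), r**2)  # order (s, r, θ)
print(sp.simplify(pull - expected))  # → zero 3x3 matrix
```

The difference simplifies to the zero matrix, matching the displayed formula for $\phi^* g$.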
Consider the embeddings
\[ \psi_c: \mathbb{R}^2 \rightarrow \mathbb{R} \times \mathbb{D}^2_{1/\sqrt{2}} \]
\[ \psi_c(\rho,\theta) = (f(\rho)+c,\dfrac{\rho}{\sqrt{2}(1+\rho)},\theta), \]
with $f: \mathbb{R} \to \mathbb{R}$ a smooth increasing function that agrees with $\rho^2$ near the origin and with the identity away from it. They realise the non--compact leaves of the Reeb foliation in $T$. It is clear that the leafwise metric is of the form
\[ \psi_c^* \phi^* g = h_1(\rho) d\rho \otimes d\rho + h_2(\rho) d\theta \otimes d\theta \]
with $h_2(\rho)$ increasing and converging to $1/2$ as $\rho \rightarrow \infty$ and $h_1(\rho)$ bounded from above and behaving as $O(\rho)$ near the origin.
At every point of $\mathbb{R}^2$ a vector field $X$ pointing radially and of unit length can be defined. The properties of $h_1$ imply that $X$ is complete and following $X$ yields a reparametrisation $\Phi: \mathbb{R}^2 \rightarrow \mathbb{R}^2$ that satisfies:
\[ \Phi^*\psi_c^* \phi^* g = d\rho \otimes d\rho + \tilde{h}(\rho) d\theta \otimes d\theta \]
with $\tilde{h}(\rho)$ still increasing and converging to $1/2$ as $\rho \rightarrow \infty$. Now the Lemma yields the result.
\end{proof}
\begin{remark}
Taking the universal cover of a leaf yields the standard tight $\mathbb{R}^3$, so all leaves are tight.
\end{remark}
One can actually construct a contact foliation with no periodic orbits of the Reeb vector field at all.
\begin{proposition} \label{prop:noGeodesics}
Consider the manifold $\mathbb{T}^3$, endowed with the Euclidean metric $g$, and the foliation ${\mathcal{F}}$ by planes given by two rationally independent slopes. The space of foliated cooriented contact elements $\mathbb{S}(T^*{\mathcal{F}})$ has no closed Reeb orbits.
\end{proposition}
\begin{proof}
Let ${\mathcal{L}}$ be any leaf of ${\mathcal{F}}$. The corresponding leaf of the contact foliation is $\mathbb{S}(T^*{\mathcal{L}}) \cong \mathbb{R}^2 \times \mathbb{S}^1$, whose universal cover is the standard tight $\mathbb{R}^3$; hence it is a tight contact manifold. Since the restriction of $g$ to ${\mathcal{L}}$ is Euclidean, there are no closed geodesics on ${\mathcal{L}}$ and hence no closed Reeb orbits in its sphere cotangent bundle.
\end{proof}
\subsection{A sharp example. Overtwisted leaves with no closed orbits} \label{ssec:sharp}
\subsubsection{$\mathbb{R}^3$ overtwisted at infinity with no closed orbits}
Consider the following $1$--form in $\mathbb{R}^3$ in cylindrical coordinates:
\[ \alpha = \cos(r) dz + (r\sin(r) + f(z)\phi(r))d\theta. \]
If $f(z)\phi(r) = 0$ identically, this is the standard form $\alpha_{ot}$ for the contact structure $\xi_{ot}$ that is overtwisted at infinity. We will henceforth assume that $f(z)\phi(r)$ is $C^1$--small, so that $\alpha$ is a contact form as well. In particular, by Proposition \ref{prop:El93}, the contact structure it defines is contactomorphic to $\xi_{ot}$. Let us compute:
\[ d\alpha = -\sin(r) dr \wedge dz + [\sin(r) + r\cos(r) + \phi'(r)f(z)]dr \wedge d\theta + f'(z)\phi(r) dz \wedge d\theta \]
whose kernel, away from the origin, is spanned by:
\[ X = -f'(z)\phi(r) \partial_r + [\sin(r) + r\cos(r) + \phi'(r)f(z)] \partial_z + \sin(r) \partial_\theta. \]
It is easy to check that $\alpha(X) > 0$ far from the origin, and hence the Reeb is a positive multiple of $X$.
Assume that $\phi(r)$ is a monotone function that is identically $0$ close to $0$ and identically $1$ in $[\delta, \infty)$, for $\delta > 0$ small. Then the Reeb vector field at the origin is $\partial_z$, and it remains almost vertical nearby. Assume further that $f$ is strictly decreasing and $C^1$--small. Then the Reeb vector field has a positive radial component wherever $\phi > 0$, while close to the axis its $z$--component is positive. We conclude that it has no closed orbits.
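That $X$ indeed spans the kernel of $d\alpha$ away from the origin can be checked symbolically for generic smooth $f$ and $\phi$; the following \texttt{sympy} sketch (not part of the argument) verifies $d\alpha(\partial_i, X) = 0$ for each coordinate direction:

```python
import sympy as sp

# α = cos(r) dz + (r sin(r) + f(z) φ(r)) dθ, coordinates ordered (r, θ, z).
r, th, z = sp.symbols('r theta z')
f = sp.Function('f')(z)
phi = sp.Function('phi')(r)

x = (r, th, z)
a = sp.Matrix([0, r * sp.sin(r) + f * phi, sp.cos(r)])  # components of α

# dα encoded as the antisymmetric matrix Ω_{ij} = ∂_i α_j − ∂_j α_i.
Om = sp.Matrix(3, 3, lambda i, j: sp.diff(a[j], x[i]) - sp.diff(a[i], x[j]))

# The claimed kernel vector field, components (X^r, X^θ, X^z).
X = sp.Matrix([
    -sp.diff(f, z) * phi,                             # ∂_r component
    sp.sin(r),                                        # ∂_θ component
    sp.sin(r) + r * sp.cos(r) + sp.diff(phi, r) * f,  # ∂_z component
])
print(sp.simplify(Om * X))  # dα(∂_i, X) = 0 for each i → zero vector
```

The product simplifies to the zero vector, so $i_X d\alpha = 0$ as claimed.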
\subsubsection{$\mathbb{S}^2 \times \mathbb{R}$ overtwisted at infinity with no closed orbits}
Consider coordinates $(z,\theta; s)$ in $\mathbb{S}^2 \times \mathbb{R}$, with $z \in [0, 2\pi]$, and construct the following $1$--form:
\[ \lambda_0 = \cos(z)ds + z(z-2\pi)\sin(z)d\theta. \]
It is easy to see that it is a contact form that defines two families of overtwisted discs sharing a common boundary: $\{ z \in [0,\pi], s = s_0\}$ and $\{ z \in [\pi,2\pi], s = s_0\}$. It is therefore overtwisted at infinity.
$\lambda_0$ defines two cylinders consisting of Reeb orbits: $\{z = \pi/2\}$ and $\{z = 3\pi/2\}$. Therefore, proceeding as in the previous example, we add a small perturbation that gets rid of these orbits. Consider the form:
\[ \lambda = \cos(z)ds + [z(z-2\pi)\sin(z) + f(s)\phi(z)]d\theta. \]
Here we require $\phi(z)$ to be constant close to the points $0$, $\pi/2$, $\pi$, $3\pi/2$ and $2\pi$, to satisfy:
\[ \phi(0) = \phi(\pi) = \phi(2\pi) = 0, \quad \phi(\pi/2) = \phi(3\pi/2) = 1 \]
and to be monotone in the subintervals in between. We assume that $f$ is strictly monotone and $C^1$--small. Computing:
\[ d\lambda = -\sin(z)dz \wedge ds + [ (z-2\pi)\sin(z) + z\sin(z) + z(z-2\pi)\cos(z) + f(s)\phi'(z)]dz \wedge d\theta + f'(s)\phi(z) ds \wedge d\theta \]
so the Reeb flow is a multiple of:
\[ X = -f'(s)\phi(z) \partial_z + [ (z-2\pi)\sin(z) + z\sin(z) + z(z-2\pi)\cos(z) + f(s)\phi'(z)] \partial_s + \sin(z) \partial_\theta. \]
Near $z = 0, \pi, 2\pi$ the Reeb is very close to $\pm \partial_s$. Away from those points, it has a non--zero $z$--component. It follows that it cannot have closed orbits.
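The analogous kernel verification for $\lambda$ can be carried out symbolically as well (again a \texttt{sympy} sketch, not part of the argument):

```python
import sympy as sp

# λ = cos(z) ds + [z(z-2π) sin(z) + f(s) φ(z)] dθ, coordinates ordered (z, θ, s).
z, th, s = sp.symbols('z theta s')
f = sp.Function('f')(s)
phi = sp.Function('phi')(z)

x = (z, th, s)
lam = sp.Matrix([0, z * (z - 2 * sp.pi) * sp.sin(z) + f * phi, sp.cos(z)])

# dλ encoded as the antisymmetric matrix Ω_{ij} = ∂_i λ_j − ∂_j λ_i.
Om = sp.Matrix(3, 3, lambda i, j: sp.diff(lam[j], x[i]) - sp.diff(lam[i], x[j]))

# ∂_s coefficient of X: (z-2π)sin z + z sin z + z(z-2π)cos z + f φ'.
B = sp.diff(z * (z - 2 * sp.pi) * sp.sin(z), z) + f * sp.diff(phi, z)
X = sp.Matrix([-sp.diff(f, s) * phi,  # ∂_z component
               sp.sin(z),             # ∂_θ component
               B])                    # ∂_s component
print(sp.simplify(Om * X))  # dλ(∂_i, X) = 0 for each i → zero vector
```

Again the product simplifies to the zero vector, confirming that the Reeb flow is a multiple of $X$.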
\subsubsection{Constructing the foliation}
Consider $\mathbb{S}^2 \times \mathbb{S}^1 \times \mathbb{S}^1$ with coordinates $(z,\theta;s,t)$, $t \in [0,2]$. It can be endowed with the following $1$--form:
\[ \tilde\lambda = \cos(z)ds + [z(z-2\pi)\sin(z) + F(t)\phi(z)]d\theta, \]
with $F$ strictly increasing in $(0,1)$, strictly decreasing in $(1,2)$, $C^1$--small, and having vanishing derivatives of all orders at $\{0,1\}$. $\phi$ is the bump function defined in the previous subsection.
Let $\Phi: \mathbb{S}^1 \to \mathbb{S}^1$ be a diffeomorphism of the circle that fixes $\{0,1\}$ and no other points, is strictly increasing in $(0,1)$ as a map $(0,1) \to (0,1)$, and is strictly decreasing in $(1,2)$ as a map $(1,2) \to (1,2)$. $\Phi$ defines a foliation ${\mathcal{F}}_\Phi$ on $\mathbb{S}^2 \times \mathbb{S}^1 \times \mathbb{S}^1$ called the \textbf{suspension} of $\Phi$.
${\mathcal{F}}_\Phi$ can be constructed as follows. Find a family of functions $\Phi_s: \mathbb{S}^1 \to \mathbb{S}^1$, $s \in [0,1]$, satisfying:
\begin{equation} \begin{cases}
\Phi_0 = {\operatorname{Id}}, \quad \Phi_1 = \Phi, \\
\text{ the map } s \to \Phi_s(t) \text{ is strictly increasing in $(0,1)$ and strictly decreasing in $(1,2)$}, \\
\left.\dfrac{\partial}{\partial s}\right|_{s=1} \Phi_s(t) = \left.\dfrac{\partial}{\partial s}\right|_{s=0} \Phi_s(\Phi_1(t)) \quad \text{ for all } t.
\end{cases} \end{equation}
Then the curves $\gamma_t(s) = (s,\Phi_s(t))$ induce a foliation in $[0,1] \times \mathbb{S}^1$ which glues to yield a foliation by curves in the $2$--torus. ${\mathcal{F}}_\Phi$ is the pullback of this foliation under the projection $\mathbb{S}^2 \times \mathbb{S}^1 \times \mathbb{S}^1 \to \mathbb{S}^1 \times \mathbb{S}^1$.
The leaves of the foliation in the $2$--torus are obtained by concatenating the segments $\gamma_t$. $\gamma_0$ and $\gamma_1$ yield closed curves $\tilde\gamma_0$ and $\tilde\gamma_1$. All other curves are diffeomorphic to $\mathbb{R}$, and we denote them by $\tilde\gamma_t(s) = (s,h_t(s))$, $t \in (0,1) \cup (1,2)$. By our assumption on $\Phi_s$, the functions $h_t$ are strictly increasing if $t \in (0,1)$ and strictly decreasing if $t \in (1,2)$. Observe that the non--compact leaves accumulate against the two compact ones.
The contact structure in the compact leaves $\mathbb{S}^2 \times \tilde\gamma_t$, $t=0,1$, is given by
\[ \cos(z)ds + z(z-2\pi)\sin(z)d\theta. \]
In particular, they both have infinitely many closed orbits.
The contact structure in the non compact leaves $\mathbb{S}^2 \times \tilde\gamma_t$, $t \in (0,1) \cup (1,2)$, reads
\[ \cos(z)ds + [z(z-2\pi)\sin(z) + F(h_t(s))\phi(z)]d\theta \]
Since $F \circ h_t$ is non--zero, strictly monotone and $C^1$--small, it is of the form described in the previous subsection. It follows that these leaves have no periodic orbits.
\begin{remark}
In this example all leaves involved are overtwisted. Further, the non--compact leaves are overtwisted at infinity. It would be interesting to construct an example of a contact foliation where the non--compact leaves are overtwisted, the leaves in their closure are tight and the only periodic orbits appear in the tight leaves.
\end{remark}
\section{$J$--holomorphic curves in the symplectisation of a contact foliation}
In this section we generalise the standard setup for moduli spaces of pseudoholomorphic curves to the foliated setting. The main result is Theorem \ref{thm:removalSingularities}, which deals with the removal of singularities. The proof is standard and closely follows that of \cite{Ho}; the only essential difference lies in the fact that, although the leaves might be open, they live inside a compact ambient manifold, so the Arzel\`a--Ascoli theorem can still be applied when carrying out the bubbling analysis.
\subsection{Setup}
Consider the contact foliation $(M^{m+2n+1}, {\mathcal{F}}^{2n+1}, \xi^{2n})$, with extension $\Theta^{2n+m}$ given by a $1$--form $\alpha$, and write $({\mathbb{R}} \times M, {\mathcal{F}}_\mathbb{R}, \omega)$ for its symplectisation.
\subsubsection{The space of almost complex structures}
The symplectic bundle $(\xi, d\alpha)$ can be endowed with a complex structure compatible with $d\alpha$, which we denote by $J_\xi$. The space of such choices is non--empty and contractible. $J_\xi$ induces a unique $\mathbb{R}$--invariant leafwise complex structure, $J \in {\operatorname{End}}(T{\mathcal{F}}_\mathbb{R})$, $J^2 = -{\operatorname{Id}}$, as follows:
\[ J|_{\xi} = J_\xi \]
\[ J(\partial_t) = R \]
Observe that $J$ is \textbf{compatible} with $\omega$, and hence they define a leafwise metric; this metric, however, is not complete on the leaves of the symplectisation. Instead, we shall consider the better behaved $\mathbb{R}$--invariant leafwise riemannian metric $g$ in ${\mathbb{R}} \times {\mathcal{F}}$ given by:
\[ g = dt \otimes dt + \alpha \otimes \alpha + d\alpha(J_\xi \circ \pi_\xi , \pi_\xi ). \]
\subsubsection{$J$--holomorphic curves}
Let $(S, i)$ be a Riemann surface, possibly with boundary. A map satisfying
\begin{equation} \begin{cases} \label{eq:holCurves}
F: (S,i) \to ({\mathbb{R}} \times M,J) \\
dF(TS) \subset {\mathcal{F}}_\mathbb{R} \\
J \circ dF = dF \circ i
\end{cases} \end{equation}
is called a parametrised \textbf{foliated $J$--holomorphic curve}. The second condition implies that $F(S)$ is contained in a leaf ${\mathbb{R}} \times {\mathcal{L}}$ of ${\mathcal{F}}_\mathbb{R}$. Indeed, $J$ is an almost complex structure in the open manifold ${\mathbb{R}} \times {\mathcal{L}}$, and $F$, regarded as a map into ${\mathbb{R}} \times {\mathcal{L}}$, is a \textbf{$J$--holomorphic curve} in the standard sense.
By our choice of $J$, there is an $\mathbb{R}$--action on the space of foliated $J$--holomorphic curves given by translation on the $\mathbb{R}$ term of ${\mathbb{R}} \times M$.
\subsubsection{Foliated $J$--holomorphic planes and cylinders}
A solution of Equation (\ref{eq:holCurves})
\[ F = (a,u): (\mathbb{C},i) \rightarrow ({\mathbb{R}} \times M,J) \]
is called a \textbf{foliated $J$--holomorphic plane}. If we write $\mathcal{M}_J^{\mathcal{F}}$ for the space of such maps, it is clear that the group of complex automorphisms of $\mathbb{C}$ acts on it by precomposition in the domain.
$\mathcal{M}_J^{\mathcal{F}}$ is non--empty. Every Reeb orbit $\gamma: \mathbb{R} \rightarrow M$ has an associated foliated $J$--holomorphic plane given by
\[ F(s,t) = (s,\gamma(t))\quad \text{ where $z = s+it$ are the standard complex coordinates in $\mathbb{C}$.}\]
We call these the \textsl{trivial} solutions.
Similarly, a solution of Equation (\ref{eq:holCurves})
\[ F = (a,u): (-\infty, \infty) \times \mathbb{S}^1 \rightarrow {\mathbb{R}} \times M \]
is called a \textbf{foliated $J$--holomorphic cylinder}. We let $(s,t)$ be the coordinates in the cylinder and take its complex structure to be given by $i(\partial_s) = \partial_t$. A closed Reeb orbit $\gamma: \mathbb{S}^1 \to M$ gives a \textsl{trivial} cylinder $F(s,t) = (s,\gamma(t))$.
Recall that the cylinder $(-\infty, \infty) \times \mathbb{S}^1$ is biholomorphic to $\mathbb{C} \setminus \{0\}$ via the exponential map, and for convenience we will often consider both domains interchangeably. In particular, given a foliated $J$--holomorphic plane, we can define a foliated $J$--holomorphic cylinder by introducing a puncture in the domain. Therefore, we say that a foliated $J$--holomorphic map
\[ F=(a,u): \mathbb{C} \setminus \{0\} \to {\mathbb{R}} \times M \]
can be \textbf{extended} over zero (or $\infty$) if there is a foliated $J$--holomorphic map with domain $\mathbb{C}$ (resp. the punctured Riemann sphere $\hat{\mathbb{C}} \setminus \{0\}$) that agrees with $F$ in $\mathbb{C} \setminus \{0\}$.
\subsubsection{Energy}
After introducing the \textsl{trivial} foliated $J$--holomorphic curves, we would like to introduce an \textsl{energy constraint} that singles out more interesting solutions of Equation (\ref{eq:holCurves}). This leads us to the following definitions.
\begin{definition}
Consider the space of functions
\[ \Gamma = \{ \phi \in C^\infty(\mathbb{R}, [0,1]) \mid \phi' \geq 0 \}. \]
Let $F: S \to {\mathbb{R}} \times M$ be a foliated $J$--holomorphic curve.
Its \textbf{energy} is defined by:
\begin{align} \label{eq:energy}
E(F) = \sup_{\phi \in \Gamma} \int_S F^* d(\phi\alpha).
\end{align}
Its \textbf{horizontal energy} is defined by:
\begin{align} \label{eq:horizontalEnergy}
E^h(F) = \int_S F^* d\alpha.
\end{align}
\end{definition}
Trivial solutions correspond to the following general phenomenon.
\begin{lemma} \label{lem:zeroHorizontalEnergy}
Let $F=(a,u): (S,i) \to ({\mathbb{R}} \times M,J)$ be a foliated $J$--holomorphic curve. $E^h(F) = 0$ if and only if ${\operatorname{Image}}(F) \subset {\mathbb{R}} \times \gamma$, where $\gamma$ is a Reeb orbit.
\end{lemma}
\begin{proof}
Given a ball $U \subset S$ find complex coordinates $(s,t)$. Then:
\[ \int_U F^* d\alpha = \int_U d\alpha(u_s, u_t) \, ds \wedge dt = \int_U d\alpha(\pi_\xi u_s, \pi_\xi u_t) \, ds \wedge dt = \]
\[ \int_U d\alpha(\pi_\xi u_s, J_\xi\, \pi_\xi u_s) \, ds \wedge dt = \int_U |\pi_\xi u_s|^2 \, ds \wedge dt, \]
where we used that $d\alpha$ vanishes in the $\partial_t$ and $R$ directions and that $\pi_\xi u_t = J_\xi\, \pi_\xi u_s$, since $F$ is $J$--holomorphic;
and since
\[ E^h(F) = \int_S F^* d\alpha = \int_S u^* d\alpha \]
the claim follows.
\end{proof}
The following lemma states that cylinders with finite energy and vanishing horizontal energy that cannot be extended to planes are necessarily trivial, and hence imply the existence of a closed Reeb orbit.
\begin{lemma} \label{lem:trivialCylinder}
Let $F$ be a foliated $J$--holomorphic map
\[ F=(a,u): \mathbb{C} \setminus \{0\} \to {\mathbb{R}} \times M \]
satisfying $E(F) < \infty$ and $E^h(F) = 0$. If $F$ cannot be extended over its punctures, then $t \to u(e^{2\pi it})$, $t \in [0,1]$, is a parametrised closed Reeb orbit.
\end{lemma}
\begin{proof}
By Lemma \ref{lem:zeroHorizontalEnergy}, we know that there is some Reeb orbit $\gamma$ (not necessarily closed) such that ${\operatorname{Image}}(F) \subset {\mathbb{R}} \times \gamma$. We can identify the universal cover of ${\mathbb{R}} \times \gamma$ with $\mathbb{C}$ with its standard complex structure.
We claim that $\gamma$ is a closed orbit and that $F$ is a non--contractible map into ${\mathbb{R}} \times \gamma$. Assuming otherwise, regard $F$ as a holomorphic map $f: \mathbb{C} \setminus \{0\} \to \mathbb{C} \subset \hat{\mathbb{C}}$. Each of its punctures is then a removable singularity, a pole, or an essential singularity. The punctures cannot be removable singularities with values in $\mathbb{C}$, since by assumption $F$ does not extend over them.
If $f$ has a pole at one of the punctures, a neighbourhood of the puncture branch covers a neighbourhood of $\infty$ in the Riemann sphere. In particular, there is a band $[a,b] \times {\mathbb{R}} \subset {\operatorname{Image}}(f) \subset \mathbb{C}$, with $a < b$ large enough. This contradicts the assumption that $E(F)$ is finite.
If $f$ has an essential singularity, then Picard's great theorem states that every point in $\mathbb{C}$, except possibly one, is contained in ${\operatorname{Image}}(f)$. Again, this contradicts the assumption that $E(F)$ was finite.
We deduce that $\gamma$ is a closed orbit and that $F$ is a non--contractible map into the cylinder ${\mathbb{R}} \times \gamma$. The exponential is a biholomorphism between the cylinder and $\mathbb{C} \setminus \{0\}$, so now we regard $F$ as a holomorphic map $h: \mathbb{C} \setminus \{0\} \to \mathbb{C} \setminus \{0\}$.
Suppose one of the punctures was an essential singularity for $h$. Since $h$ has no zeroes or poles, Picard's theorem states that all other points in the Riemann sphere have infinitely many preimages by $h$. This contradicts $E(F) < \infty$.
Therefore, $h$ extends over the punctures, taking the value zero or $\infty$ there. Then $h$ is a meromorphic function on the Riemann sphere, and hence a quotient of two polynomials. Since $h$ has no zeroes or poles in $\mathbb{C} \setminus \{0\}$, this implies that $h(z) = az^k$ for some $k \in \mathbb{Z} \setminus \{0\}$ and $a \in \mathbb{C} \setminus \{0\}$. This shows that $t \to u(e^{2\pi it})$ parametrises the $k$--fold cover of $\gamma$.
\end{proof}
Exactly the same analysis yields the following lemma.
\begin{lemma} \label{lem:trivialPlane}
Let $F$ be a foliated $J$--holomorphic map
\[ F=(a,u): \mathbb{C} \to {\mathbb{R}} \times M \]
satisfying $E^h(F) = 0$. Then either $F$ is the constant map or $E(F) = \infty$.
\end{lemma}
\begin{proof}
Let $\gamma$ be the Reeb orbit such that ${\operatorname{Image}}(F) \subset {\mathbb{R}} \times \gamma$. By taking the universal cover of ${\mathbb{R}} \times \gamma$, regard $F$ as a map $\mathbb{C} \to \mathbb{C}$, as in Lemma \ref{lem:trivialCylinder}. Now study the extension problem of $F$ to $\infty$. If it corresponds to a removable singularity with values in $\mathbb{C}$, then $F$ is the constant map. Otherwise, if it is either a pole or a non--removable singularity, it has infinite energy.
\end{proof}
\subsubsection{Riemannian and symplectic area}
In the case of compact symplectic manifolds, there is an interplay between the symplectic area of a $J$--holomorphic curve and its riemannian area for the metric given by the symplectic form and the compatible almost complex structure.
In our case, $g$ is not of that form. Rather, it is $\mathbb{R}$--invariant, while $\omega$ is not: $\mathbb{R}$--translations of the same $J$--holomorphic curve have different symplectic areas, and indeed there are no universal constants relating the $\omega$--area and the $g$--area.
However, $E$ and $E^h$ are invariant under the $\mathbb{R}$-action. Given $F$, a foliated $J$--holomorphic curve, let ${\operatorname{area}}_g(F)$ be its Riemannian area in terms of $g$, and let ${\operatorname{area}}_{\omega_\phi}(F)$ be its symplectic area in terms of $\omega_\phi = d(\phi\alpha)$.
\begin{lemma} \label{lem:noSphereBubbling}
Let $F=(a,u): (S,i) \to ({\mathbb{R}} \times M,J)$ be a parametrised foliated $J$--holomorphic curve. If $a$ is bounded above and below, then:
\[ {\operatorname{area}}_g(F) < C{\operatorname{area}}_\omega(F) < C'\int_{\partial S} \alpha, \]
for some constants $C, C'$ depending only on the upper and lower bounds of $a$.
\end{lemma}
\begin{proof}
Consider $a_0$ and $a_1$ satisfying $a_0 < a < a_1$. Let $\phi \in \Gamma$ be such that $\phi(t) = \frac{t-a_0}{3(a_1-a_0)} + 1/3$ on $[a_0,a_1]$. Then $\omega_\phi$ is a symplectic form on $[a_0,a_1] \times M$ and $J$ is $\omega_\phi$--compatible. Since $0 < D < \phi, \phi' < D' < \infty$, there are universal constants relating the metrics $g$ and $g_\phi = \omega_\phi(-,J-)$ on $[a_0,a_1] \times M$.
Since $J$ is $\omega_\phi$--compatible, $F$ being $J$--holomorphic implies that ${\operatorname{area}}_{g_\phi}(F) = {\operatorname{area}}_{\omega_\phi}(F)$, and the first inequality follows. The second inequality follows by applying Stokes' theorem.
\end{proof}
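For the reader's convenience, the chain of estimates behind the lemma can be sketched as follows, using the stated bounds on $\phi$ (the last inequality also uses $\phi < D'$ along the boundary):
\[ {\operatorname{area}}_g(F) \leq C\, {\operatorname{area}}_{g_\phi}(F) = C\, {\operatorname{area}}_{\omega_\phi}(F) = C \int_S F^* d(\phi\alpha) = C \int_{\partial S} F^*(\phi\alpha) \leq C' \int_{\partial S} \alpha. \]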
An immediate consequence of Lemma \ref{lem:noSphereBubbling} is that there cannot be \textsl{closed} foliated $J$--holomorphic curves in ${\mathbb{R}} \times M$.
\subsection{Bubbling}
As we shall see in Section \ref{sec:main}, we will prove the existence of a periodic orbit of the Reeb vector field by constructing a $1$--dimensional moduli space of pseudoholomorphic discs that will necessarily be open at one of its ends. The following lemma shows that the only possible reason for this openness is that the gradient is not uniformly bounded over the discs in the moduli space.
\begin{proposition} \label{prop:ArzelaAscoli}
Fix ${\mathcal{L}}$ a leaf of ${\mathcal{F}}$. Let $W \subset {\mathbb{R}} \times {\mathcal{L}}$ be a totally real compact submanifold, possibly with boundary.
Let $(S,i)$ be a compact Riemann surface with boundary. Consider the sequence of foliated $J$--holomorphic maps
\[ F_k: (S, \partial S) \to ({\mathbb{R}} \times {\mathcal{L}}, W), \quad k \in \mathbb{N}. \]
Suppose that there is a uniform bound $||dF_k|| < C < \infty$. Then there is a subsequence $F_{k_i}$, $k_i \to \infty$, convergent in the $C^\infty$--topology to a foliated $J$--holomorphic map
\[ F_\infty: (S, \partial S) \to ({\mathbb{R}} \times M, W). \]
\end{proposition}
\begin{proof}
Observe that since we have a uniform gradient bound and $F_k(\partial S) \subset W$ for all $k$, the images of all the $F_k$ necessarily lie in a compact subset of ${\mathbb{R}} \times {\mathcal{L}}$. One can then proceed as in the standard case to obtain $C^\infty$ bounds from the $C^1$ bounds and apply the Arzel\`a--Ascoli theorem to conclude.
\end{proof}
\begin{remark}
The same statement holds for surfaces without boundary, as long as one requires the images of all the $F_k$ to lie in a compact subset of the leaf.
\end{remark}
Proposition \ref{prop:ArzelaAscoli} suggests that we should study sequences of maps
\[ F_k: (S, \partial S) \to ({\mathbb{R}} \times {\mathcal{L}}, W), \quad k \in \mathbb{N} \]
in which $||dF_k||$ is not uniformly bounded. We have to consider two separate cases.
\subsubsection{Plane bubbling}
\begin{proposition} \label{prop:planeBubbling}
Consider a sequence of foliated $J$--holomorphic curves
\[ F_k: (S, \partial S) \to ({\mathbb{R}} \times {\mathcal{L}}, W), \quad k \in \mathbb{N} \]
and a corresponding sequence of points $q_k$ in $S$ having $M_k = ||d_{q_k}F_k|| \to \infty$ and converging to a point $q \in S$.
Suppose that there is a uniform bound $E(F_k) < C < \infty$. If ${\operatorname{dist}}(q_k,\partial S)M_k \to \infty$, there is a foliated $J$--holomorphic plane
\[ F_\infty: \mathbb{C} \to {\mathbb{R}} \times {\mathcal{L}}' \]
with $E(F_\infty) < C$, where ${\mathcal{L}}'$ is a leaf in the closure of ${\mathcal{L}}$.
\end{proposition}
\begin{proof}
After possibly modifying the $q_k$ slightly, there are charts
\[ \phi_k: \mathbb{D}^2(R_k) \to S \]
\[ \phi_k(z) = q_k + \dfrac{z}{M_k} \]
with $R_k < {\operatorname{dist}}(q_k,\partial S)M_k$, $R_k \to \infty$, $R_k/M_k \to 0$ and $||d(F_k \circ \phi_k)|| < 2$ -- this last condition is achieved using Hofer's lemma, see \cite[Lemma 26]{Ho}.
The maps $F_k \circ \phi_k$ have $C^1$ bounds by construction, but they have no $C^0$ bounds. By our construction of $J$, the vertical translation of a $J$--holomorphic map is still $J$--holomorphic and hence we can compose with a vertical translation $\tau_k$ guaranteeing that $\tau_k \circ F_k \circ \phi_k$ takes the point $0$ to the level $\{0\} \times {\mathcal{L}}$. Then, for every compact subset $\Omega \subset \mathbb{C}$, the maps $\tau_k \circ F_k \circ \phi_k: \Omega \to {\mathbb{R}} \times M$ are equicontinuous and bounded -- note that this is where we use that ${\mathcal{L}}$ lies inside the compact manifold $M$.
Recall that uniform $C^1$ bounds imply uniform $C^\infty$ bounds. Hence, an application of Arzel\`a--Ascoli shows that a subsequence converges in $C^\infty_{loc}$ to a map $F_\infty: \mathbb{C} \to {\mathbb{R}} \times M$ that must be foliated and $J$--holomorphic; it need not lie in ${\mathbb{R}} \times {\mathcal{L}}$, but possibly in ${\mathbb{R}} \times {\mathcal{L}}'$ for some new leaf ${\mathcal{L}}'$.
Note that the energy of the map $\tau_k \circ F_k \circ \phi_k$ is bounded above by that of $F_k$. Since we have uniform bounds for the energy of the $F_k$, we have uniform energy bounds for the maps $\tau_k \circ F_k \circ \phi_k$ and hence for their limit $F_\infty$. Note that $F_\infty$ is necessarily non constant, since $||d_0F_\infty|| = 1$ by construction. In particular, it has non--zero energy.
\end{proof}
\begin{remark}
The map $F_\infty$ produced in the proof is called a \textbf{plane bubble}. If $F_\infty$ could be extended over the puncture to a map with domain the Riemann sphere $\mathbb{S}^2$, this would contradict Lemma \ref{lem:noSphereBubbling}.
\end{remark}
\subsubsection{Disc bubbling}
\begin{proposition} \label{prop:discBubbling}
Consider a sequence of foliated $J$--holomorphic curves
\[ F_k: (S, \partial S) \to ({\mathbb{R}} \times {\mathcal{L}}, W), \quad k \in \mathbb{N} \]
and a corresponding sequence of points $q_k$ in $S$ having $M_k = ||d_{q_k}F_k|| \to \infty$ converging to a point $q \in S$.
Suppose that there is a uniform bound $E(F_k) < C < \infty$. If ${\operatorname{dist}}(q_k,\partial S)M_k$ is uniformly bounded from above, there is a foliated $J$--holomorphic disc
\[ F_\infty: (\mathbb{D}^2, \partial \mathbb{D}^2) \to ({\mathbb{R}} \times {\mathcal{L}},W) \]
with $E(F_\infty) < C$.
\end{proposition}
\begin{proof}
Since we are assuming that $W$ is compact, the usual rescaling argument for the disc bubbling goes through and yields a punctured disc bubble lying in ${\mathbb{R}} \times {\mathcal{L}}$ and having bounded gradient. Then, the standard removal of singularities gives a disc bubble $F_\infty$.
\end{proof}
\subsection{Removal of singularities}
The aim of this subsection is to prove the following result, which is one of the key ingredients for proving Theorem \ref{thm:main}.
\begin{theorem}[Removal of singularities] \label{thm:removalSingularities}
Let $F=(a,u): \mathbb{D}^2 \setminus \{0\} \to {\mathbb{R}} \times {\mathcal{L}} \subset {\mathbb{R}} \times M$ be a $J$--holomorphic curve with $0 < E(F) < \infty$, ${\mathcal{L}}$ a leaf of ${\mathcal{F}}$.
Then, either $F$ extends to a $J$--holomorphic map over $\mathbb{D}^2$ or for every sequence of radii $r_k \to 0$ the curves $\gamma_{r_k}(s) = u(r_k e^{is})$ converge in $C^\infty$ --possibly after taking a subsequence-- to a parametrised closed Reeb orbit lying in the closure of ${\mathcal{L}}$.
\end{theorem}
\begin{proof}[Proof of Theorem \ref{thm:removalSingularities}]
Let us state the problem in terms of cylinders. Identify $\mathbb{D}^2 \setminus \{0\}$ with $[0, \infty) \times \mathbb{S}^1$ by using the biholomorphism $-\log(z)$, and regard $F$ as a foliated $J$--holomorphic map $[0, \infty) \times \mathbb{S}^1 \to {\mathbb{R}} \times M$. Then, the following maps are foliated $J$--holomorphic:
\[ F_k=(a_k,u_k): [-R_k/2, \infty) \times \mathbb{S}^1 \to {\mathbb{R}} \times M \]
\[ F_k(s,t) = (a(s+R_k,t) - a(R_k,0),u(s+R_k,t)) \]
and by assumption they have a uniform bound $E(F_k) < C < \infty$ and $\lim_{k \to \infty} E^h(F_k) = 0$. Here $R_k = -\log(r_k) \to \infty$.
Suppose that the gradient were not uniformly bounded for the family $F_k$. We could then find a sequence of points $q_k \in [0, \infty) \times \mathbb{S}^1$ escaping to infinity and satisfying $||d_{q_k}F|| \to \infty$. Then we are under the assumptions of Proposition \ref{prop:planeBubbling}, and this yields a plane bubble $G: \mathbb{C} \to {\mathbb{R}} \times M$ with $E^h(G) = 0$, which must lie on top of a Reeb orbit by Lemma \ref{lem:zeroHorizontalEnergy}. By our bubbling analysis, $G$ cannot be constant, since its gradient at the origin is $1$; but then Lemma \ref{lem:trivialPlane} forces $E(G) = \infty$, contradicting $E(G) < \infty$.
We conclude that the family $F_k$ has uniform $C^1$ bounds and hence uniform $C^\infty$ bounds. By construction $a_k(0,0) = 0$, which means that we have uniform $C^0$ bounds on every compact subset of $(-\infty, \infty) \times \mathbb{S}^1$ --here is where we use the compactness of $M$. The Arzel\`a--Ascoli theorem implies that --after possibly taking a subsequence-- the maps $F_k$ converge in $C^\infty_{loc}$ to a map $F_\infty: (-\infty, \infty) \times \mathbb{S}^1 \to {\mathbb{R}} \times M$ with $E(F_\infty) < \infty$ and $E^h(F_\infty) = 0$.
Observe that
\[ \lim_{r \to 0} \int_{\gamma_r} \alpha = \int_{\gamma_1} \alpha - \int_{\mathbb{D}^2 \setminus \{0\}} u^* d\alpha. \]
If this limit is zero, then the argument above shows that the $\gamma_r$, $r \to 0$, tend to the constant map in the $C^\infty$ sense, and hence $F$ extends to a map over $\mathbb{D}^2$. Assuming otherwise, it is clear that $F_\infty$ cannot be the constant map and hence Lemma \ref{lem:trivialCylinder} implies the conclusion.
\end{proof}
\section{Existence of contractible periodic orbits in the closure of an overtwisted leaf} \label{sec:main}
After setting up the study of foliated $J$--holomorphic curves in the previous section and dealing with its compactness issues, we use this machinery to conclude the proof of Theorem \ref{thm:main}. The setting of the theorem is as follows: $(M^{m+3}, {\mathcal{F}}^3, \xi^2)$ is a contact foliation with $\Theta^{2+m}$ an extension given by a $1$--form $\alpha$. We write $({\mathbb{R}} \times M, {\mathcal{F}}_\mathbb{R}, \omega)$ for its symplectisation. ${\mathcal{L}}^3$ is a leaf of ${\mathcal{F}}$.
\subsection{The Bishop family}
The following results have a local nature and hence do not depend on whether ${\mathcal{L}}$ is compact or not. Their proofs can be found in \cite{Ho}.
\subsubsection{The Bishop family at an elliptic point}
If $({\mathcal{L}}, \xi)$ is an overtwisted manifold, let $\Sigma$ be an overtwisted disc for $\xi$. Otherwise, if $\pi_2({\mathcal{L}}) \neq 0$, let $\Sigma$ be some sphere realising a non--zero class in $\pi_2$. Assume, after a small perturbation, that the characteristic foliations are as described in Subsection \ref{sssec:convex} in Exercises \ref{ex:ot} and \ref{ex:tight} and Theorem \ref{thm:EGF}. Denote by $\Gamma_\Sigma$ the set of singular points of the characteristic foliation of $\Sigma$.
Let $p \in \Gamma_\Sigma$ be a nicely elliptic point. The maps satisfying:
\begin{equation} \begin{cases} \label{eq:holDiscs}
F = (a, u): (\mathbb{D}^2, \partial \mathbb{D}^2) \to ({\mathbb{R}} \times {\mathcal{L}}, \{0\} \times \Sigma) \\
dF \circ i = J \circ dF, \\
{\operatorname{wind}}(F, p) = \pm 1, \\
{\operatorname{ind}}(F) = 4,
\end{cases} \end{equation}
will be called the \textbf{Bishop family}. ${\operatorname{wind}}(F, p)$ refers to the winding number of $F(\partial \mathbb{D}^2)$ around the elliptic point $p$.
The condition ${\operatorname{ind}}(F) = 4$ is implied by the other assumptions. It means that the linearised Cauchy--Riemann operator at $F$ has index $4$, and hence, if there is transversality, the solutions of Equation \ref{eq:holDiscs} close to $F$ form a smooth $4$--dimensional manifold. Since the M\"obius transformations of the disc form a group of real dimension $3$, this implies that the image of $F$ is part of a $1$--dimensional family of distinct discs.
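To make the dimension count explicit (a bookkeeping sketch; we write ${\operatorname{Aut}}(\mathbb{D}^2)$ for the M\"obius group of the disc):
\[ {\operatorname{ind}}(F) - \dim {\operatorname{Aut}}(\mathbb{D}^2) = 4 - 3 = 1. \]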
The Bishop family is not empty under some integrability assumptions.
\begin{proposition} \label{prop:Bishop} \emph{(\cite{Bi}, \cite[Section 4.2]{Ho})}
For a suitable choice of $J_\xi$, $J$ is integrable close to $p$. Then there is a smooth family of maps $F_s$, $s \in [0,\varepsilon)$, with $F_0(z) = p$ and $F_s$, $s>0$, disjoint embeddings satisfying Equation \ref{eq:holDiscs}.
Additionally, there is a small neighbourhood $U$ of $p$ such that any other disc satisfying Equation \ref{eq:holDiscs} and intersecting $U$ is a reparametrisation of one of the $F_s$.
\end{proposition}
\subsubsection{Continuation of the Bishop family}
The following statement shows that transversality always holds for the linearised Cauchy--Riemann operator for maps belonging to the Bishop family.
\begin{proposition} \emph{(\cite[Theorem 17]{Ho})} \label{prop:openess}
Let $F$ satisfy Equation \ref{eq:holDiscs}. Then there is a smooth family of disjoint embeddings $F_s$, $s \in (-\varepsilon, \varepsilon)$, satisfying Equation \ref{eq:holDiscs}, such that $F_0 = F$. Additionally, any two such families are related by a reparametrisation of the parameter space and a smooth family of M\"obius transformations.
\end{proposition}
\subsubsection{Properties of the Bishop family}
Convexity of $\{0\} \times {\mathcal{L}}$ inside of ${\mathbb{R}} \times {\mathcal{L}}$ and an application of the maximum principle yield the following lemma. It will be useful to show that there is no disc bubbling.
\begin{lemma} \emph{(\cite[Lemma 19]{Ho})} \label{lem:winding}
Let $F: (\mathbb{D}^2, \partial \mathbb{D}^2) \to ({\mathbb{R}} \times {\mathcal{L}}, \{0\} \times \Sigma)$ be a $J$--holomorphic map. Then $F(\partial \mathbb{D}^2)$ is transverse to the characteristic foliation of $\Sigma$ and $F(\mathbb{D}^2)$ is transverse to $\{0\} \times {\mathcal{L}}$.
\end{lemma}
In order to apply Theorem \ref{thm:removalSingularities} we must have energy bounds, which are provided by the following result.
\begin{proposition} \emph{(\cite[Proposition 27]{Ho})} \label{prop:energyBounds}
There are uniform energy bounds $0 < C_1 < E(F),E^h(F) < C_2 < \infty$ for every $F$ satisfying Equation \ref{eq:holDiscs} and having
\[ {\operatorname{dist}}({\operatorname{Image}}(F), \Gamma_\Sigma) > \varepsilon > 0. \]
\end{proposition}
\begin{proof}
By Stokes' theorem:
\[ E(F) = \sup_{\phi \in \Gamma} \int_{\mathbb{D}^2} F^* d(\phi\alpha) = \sup_{\phi \in \Gamma} \int_{\partial \mathbb{D}^2} F^* (\phi\alpha) = \sup_{\phi \in \Gamma} \phi(0) \int_{\partial \mathbb{D}^2} F^*\alpha = \int_{F(\partial \mathbb{D}^2)} \alpha. \]
$F(\partial \mathbb{D}^2)$ winds around the elliptic point exactly once and hence bounds a disc within $\Sigma$. The area of such a disc is always bounded above by a universal constant, and it is bounded below under the assumption that ${\operatorname{dist}}({\operatorname{Image}}(F), \Gamma_\Sigma) > \varepsilon$. The claim follows.
A similar estimate holds for $E^h$.
\end{proof}
\subsection{Proof of Theorem \ref{thm:main}}
We now tie together all the results discussed so far.
\begin{lemma} \label{lem:jump1ot}
Let ${\mathcal{L}}$ be a leaf of ${\mathcal{F}}$ and assume that $({\mathcal{L}}, \xi)$ is an overtwisted contact manifold. Then there is a finite energy plane contained in ${\mathbb{R}} \times {\mathcal{L}}' \in {\mathcal{F}}_\mathbb{R}$, with ${\mathcal{L}}'$ lying in the closure of ${\mathcal{L}}$.
\end{lemma}
\begin{proof}
Proposition \ref{prop:Bishop} shows that the set of solutions of Equation \ref{eq:holDiscs} is non--empty and Proposition \ref{prop:openess} shows that, up to M\"obius transformations, it is an open $1$--dimensional manifold. Denote by $\mathcal{M}$ the component that contains the solutions arising from the elliptic point.
The boundaries of the maps in $\mathcal{M}$ are pairwise disjoint by Proposition \ref{prop:openess} and they remain transverse to the characteristic foliation by Lemma \ref{lem:winding}. Hence, they define an open submanifold $D$ of the overtwisted disc.
Take a sequence in $\mathcal{M}$ whose distance to the elliptic point is uniformly bounded from below. Then Proposition \ref{prop:ArzelaAscoli} says that either the gradient is unbounded in the family or their limit (by taking a subsequence) is a new solution of Equation \ref{eq:holDiscs}.
We conclude that if the gradient does not explode, $D$ is both open and compact. But then $D$ would have a tangency with the boundary of the overtwisted disc, contradicting Lemma \ref{lem:winding}.
Since the gradient explodes, we know by Propositions \ref{prop:planeBubbling} and \ref{prop:discBubbling} that either a plane or a disc bubble appears. In the case of a disc bubble, the standard analysis as in \cite{Ho} shows that bubbles connect, and hence we must have two $J$--holomorphic discs touching at a point and whose winding numbers add up to $1$. This is a contradiction with Lemma \ref{lem:winding}.
We conclude that necessarily a plane bubble must appear.
\end{proof}
\begin{lemma} \label{lem:jump1pi2}
Let ${\mathcal{L}}$ be a leaf of ${\mathcal{F}}$ and assume that $\pi_2({\mathcal{L}}) \neq 0$. Then there is a finite energy plane contained in ${\mathbb{R}} \times {\mathcal{L}}' \in {\mathcal{F}}_\mathbb{R}$, with ${\mathcal{L}}'$ lying in the closure of ${\mathcal{L}}$.
\end{lemma}
\begin{proof}
Let us denote by $p_-$ and $p_+$ the two elliptic points of the convex $2$--sphere $\Sigma$ realising a non--trivial element of $\pi_2({\mathcal{L}})$. Proposition \ref{prop:Bishop} gives two different Bishop families starting at each point, which we denote by $\mathcal{M}^-$ and $\mathcal{M}^+$, respectively.
Assume that the gradient is uniformly bounded in the Bishop family $\mathcal{M}^-$. Then $\mathcal{M}^-$ is open and compact, and an application of Proposition \ref{prop:openess} shows that it can be continued until the boundaries of the discs in the family reach $p_+$. Since we know by Proposition \ref{prop:Bishop} that in a neighbourhood of $p_+$ the only curves are those in $\mathcal{M}^+$, both families must be the same. The evaluation map
\[ {\operatorname{ev}}: \mathcal{M}^- \times \mathbb{D}^2 \approx [0,1] \times \mathbb{D}^2 \to {\mathcal{L}} \]
\[ {\operatorname{ev}}(F=(a,u), z) = u(z) \]
satisfies ${\operatorname{ev}}(\partial (\mathcal{M}^- \times \mathbb{D}^2)) = \Sigma$, which contradicts the fact that $\Sigma$ was non--trivial in $\pi_2({\mathcal{L}})$.
Therefore, the gradient must explode, and since a disc bubble cannot appear, the claim follows.
\end{proof}
We are now ready to prove Theorem \ref{thm:main}.
\begin{proof}[Proof of Theorem \ref{thm:main}]
Let $({\mathcal{L}}, \xi)$ be overtwisted. Lemma \ref{lem:jump1ot} yields a finite energy plane $F: \mathbb{C} \to {\mathbb{R}} \times {\mathcal{L}}'$, with ${\mathcal{L}}'$ a leaf of ${\mathcal{F}}$ contained in the closure of ${\mathcal{L}}$. By Lemma \ref{lem:noSphereBubbling} this plane cannot be completed to a sphere. Now an application of Theorem \ref{thm:removalSingularities} shows that there is a closed Reeb orbit in some leaf ${\mathcal{L}}''$ lying in the closure of ${\mathcal{L}}'$. Since ${\mathcal{L}}''$ is in the closure of ${\mathcal{L}}$, the claim follows.
The same argument goes through by applying Lemma \ref{lem:jump1pi2} if $\pi_2({\mathcal{L}}) \neq 0$.
\end{proof}
\begin{remark}
As we have seen, Lemmas \ref{lem:jump1ot} and \ref{lem:jump1pi2} yield a finite energy plane in a leaf that might not be the one containing the overtwisted disc or the convex $2$--sphere. Then, an application of Theorem \ref{thm:removalSingularities} shows that the plane is asymptotic to a trivial cylinder that might live in yet another leaf.
Our example in Subsection \ref{ssec:sharp} shows that at least one of these two phenomena must take place. Is it possible for a ``\textsl{double jump}'' to actually happen?
\end{remark}
\begin{remark}
Let $(M^{2n+1+m}, {\mathcal{F}}^{2n+1}, \xi)$ be a contact foliation. Let ${\mathcal{L}}$ be a leaf of ${\mathcal{F}}$ and let $({\mathcal{L}}, \xi)$ be overtwisted in the sense of \cite{BEM}. More generally, assume that $({\mathcal{L}}, \xi)$ contains a plastikstufe \cite{Nie}.
It is immediate that the Bishop family arising from the plastikstufe can be employed to show that there must be a Reeb orbit, so Theorem \ref{thm:main} also holds true for overtwisted manifolds in all dimensions.
\end{remark}
\section{The non--degenerate case}
In this section we show that under non--degeneracy assumptions none of the \textsl{jumps} between leaves can happen.
\begin{definition}
Let $(M, {\mathcal{F}}, \xi)$ be a contact foliation and let $\alpha$ be the defining $1$--form for some extension $\Theta$ of $\xi$.
A closed orbit of the Reeb vector field associated to $\alpha$ is called \textbf{non--degenerate} if it is isolated among Reeb orbits having the same period and lying in the same leaf of ${\mathcal{F}}$.
The form $\alpha$ is called \textbf{non--degenerate} if all the closed orbits of its Reeb vector field are non--degenerate.
\end{definition}
The statement we want to show is the following. It is a stronger version of the Removal of Singularities (Theorem \ref{thm:removalSingularities}) in the non--degenerate case.
\begin{theorem} \label{thm:nonDegenerateRemovalSingularities}
Let $(M, {\mathcal{F}}, \xi)$ be a contact foliation and let $\alpha$ be the defining $1$--form for some extension $\Theta$ of $\xi$. Assume $\alpha$ is non--degenerate.
Let $F=(a,u): \mathbb{D}^2 \setminus \{0\} \to {\mathbb{R}} \times {\mathcal{L}} \subset {\mathbb{R}} \times M$ be a $J$--holomorphic curve with $0 < E(F) < \infty$, ${\mathcal{L}}$ a leaf of ${\mathcal{F}}$.
Then, either $F$ extends to a $J$--holomorphic map over $\mathbb{D}^2$ or the curves $\gamma_r(s) = u(e^{-r+is})$ converge in $C^\infty$, as $r \to \infty$, to a closed Reeb orbit $\gamma$ lying in ${\mathcal{L}}$.
\end{theorem}
\begin{proof}
We proceed by contradiction. Assume that $\gamma$, the limit of some $\gamma_{r_i}$, $r_i \to \infty$, is contained in some leaf ${\mathcal{L}}' \neq {\mathcal{L}}$.
Denote $T = \int_\gamma \alpha$, the period of $\gamma$. By our assumption on $\alpha$, we can find a closed foliation chart $U \subset M$ diffeomorphic to $\mathbb{D}^2 \times \mathbb{S}^1 \times [-1,1]$ around $\gamma$ such that the plaque in $U$ containing $\gamma$ intersects no other orbits of period $T$. Write $h: U \to [-1,1]$ for the height function of the chart: we can assume that $h^{-1}(0)$ is the plaque containing $\gamma$.
Since the curves $\gamma_{r_i}$ converge in $C^\infty$ to $\gamma$, their images are contained in $U$ for large enough $i$. Assume, by possibly restricting to a subsequence, that each ${\operatorname{Image}}(\gamma_{r_i})$ lies in a different plaque of ${\mathcal{F}} \cap U$. Then, for each $i$, there is a smallest radius $r_i < R_i < r_{i+1}$ such that ${\operatorname{Image}}(\gamma_r)$ is disjoint from the plaque containing ${\operatorname{Image}}(\gamma_{r_i})$, for all $r > R_i$. In particular, ${\operatorname{Image}}(\gamma_{R_i})$ intersects $\partial U$.
Consider the maps
\[ F_i: [r_i - R_i, r_{i+1} - R_i] \times \mathbb{S}^1 \to {\mathbb{R}} \times M \]
\[ F_i(t,s) = (a(e^{-(t+R_i)+is}) - a(e^{-R_i}), u(e^{-(t+R_i)+is})) \]
By construction, $F_i(0,0) \in \{0\} \times M$, the loop $s \mapsto F_i(0,s)$ intersects ${\mathbb{R}} \times (\partial U)$, and $\lim_{i \to \infty} h \circ F_i = 0$, where $h$ is applied to the $M$--component of $F_i$.
By carrying out the bubbling analysis, we can assume that the $F_i$ have bounded gradient. In particular, $r_{i+1} - r_i$ must be uniformly bounded from below by a non--zero constant. The Arzel\`a--Ascoli theorem implies that the $F_i$ converge in $C^\infty_{loc}$ --maybe after taking a subsequence-- to a map $F_\infty$ with $E^h(F_\infty) = 0$, and therefore lying on top of some Reeb orbit.
By the properties of the $F_i$, $F_\infty$ must have image contained in ${\mathbb{R}} \times {\mathcal{L}}'$ and intersecting ${\mathbb{R}} \times (h^{-1}(0) \cap \partial U)$. In particular, ${\operatorname{Image}}(F_\infty)$ is not contained in ${\mathbb{R}} \times \gamma$. If $\lim_{i \to \infty} R_i-r_i < \infty$, the curves $s \to F_i(r_i-R_i,s)$ would converge to $\gamma$, which is a contradiction. Similarly we deduce that $\lim_{i \to \infty} r_{i+1} - R_i = \infty$.
Since it has finite energy, $F_\infty: (-\infty, \infty) \times \mathbb{S}^1 \to {\mathbb{R}} \times {\mathcal{L}}'$ must yield a periodic orbit of the Reeb vector field. It must be a closed orbit different from $\gamma$, having period $T$ and intersecting the plaque containing $\gamma$, which is a contradiction.
We have proved that the limit must lie in ${\mathcal{L}}$. It is then standard that the limit does not depend on the chosen sequence $r_i$.
\end{proof}
\begin{remark}
Theorem \ref{thm:nonDegenerateRemovalSingularities} immediately implies that a finite energy plane is asymptotic to a trivial cylinder lying in the same leaf.
Similarly, it shows that the Bishop family always yields a plane bubble in the original leaf ${\mathcal{L}}$: outside of a finite set of points, the Bishop family converges to a foliated $J$--holomorphic curve with boundary in the overtwisted disc and possibly many punctures that are asymptotic at $-\infty$ to a number of Reeb orbits necessarily lying in ${\mathcal{L}}$.
\end{remark}
\end{document} |
\begin{document}
\title{An entangling quantum-logic gate operated with\\ an ultrabright single-photon source}
\author{O. Gazzano$^{1}$, M. P. Almeida$^{2,3}$, A. K. Nowak $^1$, S. L. Portalupi$^1$, A. Lema\^itre$^1$, I. Sagnes$^1$, A. G. White$^{2,3}$ and P. Senellart$^1$}
\affiliation{$^1$Laboratoire de Photonique et de Nanostructures, CNRS, UPR20, Route de Nozay, 91460 Marcoussis, France\\
$^2$Centre for Engineered Quantum Systems \& $^3$Centre for Quantum Computer and Communication Technology, School of Mathematics and Physics, University of Queensland, Brisbane QLD 4072, Australia\\}
\begin{abstract}
\noindent We demonstrate unambiguous entangling operation of a photonic quantum-logic gate driven by an ultrabright solid-state single-photon source. Indistinguishable single photons emitted by a single semiconductor quantum dot in a micropillar optical cavity are used as target and control qubits. For a source brightness of 0.56 collected photons-per-pulse, the measured truth table has an overlap with the ideal case of $68.4{\pm}0.5$\%, increasing to $73.0{\pm}0.6$\% for a source brightness of 0.17 photons-per-pulse. The gate is entangling: at a source brightness of 0.48, the Bell-state fidelity is above the entangling threshold of 50\%, and reaches $71.0{\pm}3.6$\% for a source brightness of 0.15.
\end{abstract}
\maketitle
The heart of quantum information processing is entangling separate qubits using multi-qubit gates: the canonical entangling gate is the controlled-\textsc{not} (\textsc{cnot}) gate, which flips the state of a target qubit depending on the state of the control. A universal quantum computer can be built using solely \textsc{cnot} gates and arbitrary local rotations \cite{quantumcomp}, the latter being trivial in photonics. In 2001, Knill, Laflamme and Milburn (KLM) demonstrated that photonic multi-qubit gates could be implemented using only linear-optical components, projective measurements, and feedforward~\cite{KLM}. Since then, many schemes to implement linear-optical \textsc{cnot} gates have been theoretically proposed~\cite{cnot1,cnot2,cnot3} and experimentally demonstrated~\cite{cnot4,cnot5,cnot6,cnot7,cnot8,cnot9,cnot10}. These demonstrations all used parametric down-conversion photon sources; however, such sources are not suitable for scalable implementations due to their inherently low source brightness---$10^{-6}$ to $10^{-4}$ photons-per-excitation pulse---and contamination with a small but significant multiple-photon component \cite{weinholdarxiv2008, barbieri, jennewein}.
Semiconductor quantum dots (QDs) confined in micropillar optical cavities are close to ideal as photon sources, emitting pulses containing one and only one photon, with high efficiency and brightness. QDs have been shown to emit single photons~\cite{michler2001}, indistinguishable photons~\cite{santori2002}, and entangled photon pairs~\cite{gershoni,NJPtoshiba}. Intrinsically, the dots emit photons isotropically: both tapered single-mode waveguides~\cite{claudon2012} and micropillar cavities~\cite{doussenat,nat-comm} have enabled the fabrication of single-photon sources with brightness of $\sim$80\%. In the latter case, the Purcell effect further allows reducing the dephasing induced by the solid-state environment, yielding photons with a large degree of indistinguishability~\cite{santori2002,varoutsis2005,nat-comm}.
Very recently, quantum-dot photon sources have been used to drive linear-optical entangling gates: on a semiconductor waveguide chip, where the truth table was measured \cite{toshibaCNOT}; and in bulk polarisation-optics \cite{naturenanoCNOT}, where the gate process fidelity was bounded by measurements in two orthogonal bases \cite{qprocess}. These are necessary, but not sufficient measurements for unambiguously establishing entanglement \cite{White2007}, e.g. a \textsc{cnot} gate has the same truth table as a classical, reversible-\textsc{xor} gate.
Here we show unambiguous operation of an entangling \textsc{cnot} gate using single photons emitted by a single quantum dot deterministically coupled to the optical mode of a pillar microcavity. The source is operated at a remarkably high brightness---above 0.65 collected photons-per-pulse---and successively emitted photons present a mean wave-packet overlap~\cite{santori2002} between 50$\%$ and 72$\%$. Bell-state fidelities above 50\% are an unimpeachable entanglement witness \cite{White2007}: we see fidelities up to $71.0{\pm}3.6$\%.
\begin{figure}
\caption{a) Schematic of the experimental setup. Single photons are produced by a QD in a micro-pillar optical cavity, excited by two consecutive laser pulses, temporally-separated by 2.3~ns. A non-polarising beam splitter reflecting 90\% of the QD signal is employed to send the QD emission into a single-mode fiber and to the input of the CNOT gate. Polarizers, half- and quarter-wave plates are used for state preparation and analysis. The photons are spectrally filtered by two spectrometers and detected by single-photon avalanche photodiodes (SPAD). b) Experimental schematic of the CNOT gate, as described in \cite{cnot4}.}
\label{fig:1}
\end{figure}
Our source was grown by molecular beam epitaxy, and consists of an annealed InGaAs QD layer between two Bragg reflectors with 16 (36) pairs for the top (bottom) mirror. After spin-coating the sample with a photoresist, low-temperature \emph{in-situ} lithography is used to define pillars deterministically coupled to single QDs~\cite{Dousse2008}. We first select QDs with optimal quantum efficiency and appropriate emission wavelength to be spectrally matched to 2.5~$\mu$m diameter pillar cavities. A green laser beam is used to expose the disk defining the pillar centered on the selected QD with 50~nm accuracy. To operate the source close to maximum brightness and maintain a reasonably high degree of indistinguishability, we use a two-color excitation scheme. A 905~nm, 82~MHz pulsed laser resonant with an excited state of the QD is used to saturate the QD transition, while a low-power continuous-wave laser at 850~nm is used to fill traps in the QD surroundings, thereby reducing fluctuations of the electrostatic environment. For more details see reference~\cite{nat-comm}. Our source has a maximum brightness of 0.79 photons per excitation pulse, Fig.~1d, as measured at the first collection lens, Fig.~1a.
The QD emission is collected by a 0.4~NA microscope objective and coupled to a single-mode fiber with $70\%$ efficiency, estimated by comparing the measured single-photon count rate with and without fiber coupling. The typical spectrum of the source is shown in the inset of Fig.~1d; note the single emission line at 930~nm. To characterize the purity of the single-photon emission, we measure the second-order correlation function, $g^{(2)}$, using a Hanbury Brown--Twiss setup~\cite{HBT}. Figure~1c shows the measured autocorrelation function under pulsed excitation only. We obtain $g^{(2)}(0){=}0.01{\pm}0.01$---without background correction. For the QD under study, the fine-structure splitting of the exciton line is below 2~$\mu$eV~\cite{annealingQD}. Thanks to the enhancement of spontaneous emission by the Purcell factor, $F_p{=}3.8$, the photons are indistinguishable in any polarization basis, as shown in~\cite{nat-comm} using the Hong-Ou-Mandel experiment. In the following, we operate the source at a brightness of 0.75 for measuring the gate truth table and at a brightness of 0.65 for demonstrating two-photon entanglement.
To generate the target and control input photons, the source is excited twice every 12.2~ns---the repetition rate of the laser---with a delay of 2.3~ns between the two excitations. The two photons are non-deterministically spatially separated by coupling the source to a 50/50 fiber beam splitter, and non-deterministically temporally overlapped by adding the 2.3~ns delay to one of the fiber paths. We implement the \textsc{cnot} gate following the design of reference~\cite{cnot4}, which requires both classical and quantum multi-path interference, Fig.~1b. The logical qubits are encoded on the polarization state of the photons with $\ket{0}{\equiv}\ket{H}$ and $\ket{1}{\equiv}\ket{V}$. We initialize with polarizers and set the gate input state using half-wave plates. Half-wave plates on the control input and output act as Hadamard gates, and the internal half-wave plate implements the three 1/3 beam splitters at the heart of this gate \cite{cnot4}. The wave plates and polarizers on the output modes enable analysis in any polarization basis. For spectral filtering, the gate outputs are coupled to spectrometers and are detected by single-photon avalanche photodiodes with 350~ps time resolution.
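The logical action of the gate can be checked numerically. The following sketch is illustrative only: it abstracts away the 1/3 beam splitters and postselection of the optical circuit, and uses the textbook decomposition with Hadamards on the flipped qubit rather than the control-mode convention of this setup.

```python
import numpy as np

# Illustrative sketch of the ideal logical gate, with |0> = |H> and |1> = |V>.
# The optical implementation (1/3 beam splitters, postselection) is abstracted
# away; this is the textbook CNOT built from a controlled-Z conjugated by
# Hadamards on the qubit being flipped.
H2 = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
CZ = np.diag([1.0, 1.0, 1.0, -1.0])
CNOT = np.kron(np.eye(2), H2) @ CZ @ np.kron(np.eye(2), H2)

ket = {0: np.array([1.0, 0.0]), 1: np.array([0.0, 1.0])}
for c in (0, 1):
    for t in (0, 1):
        out = CNOT @ np.kron(ket[c], ket[t])
        assert np.allclose(out, np.kron(ket[c], ket[t ^ c]))  # target flips iff c = 1

# Control |D> = (|H>+|V>)/sqrt(2) and target |H> yield the Bell state
# (|HH>+|VV>)/sqrt(2), the state used in the entanglement measurement.
D = (ket[0] + ket[1]) / np.sqrt(2)
bell = CNOT @ np.kron(D, ket[0])
```

The same truth table underlies Table II below, up to the $1/9$ success probability of the postselected optical gate.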
\begin{figure*}
\caption{(a) Example of a correlation histogram, measured at the output of the \textsc{cnot} gate.
\label{fig:2}}
\end{figure*}
To obtain the truth table, we measured the output of the gate for each of the four possible logical basis input states $\{\ket{HH},\ket{HV},\ket{VH},\ket{VV}\}$, where $\ket{ct}$ denotes the control and target qubit states. Figure~2a presents a typical experimental correlation histogram. Every 12.2~ns, a set of five peaks is observed: each peak corresponds to one of the five possible paths followed by the two photons generated with a 2.3~ns delay. The central peak, at zero delay, corresponds to events where both the control and target photons enter the gate simultaneously. We will hereafter refer to the set of five peaks centered at zero delay as \emph{correlated peaks} and to the sets of peaks centered at $p{\times}12.2$~ns ($p\in\mathbb{Z}^{*}$) as \emph{uncorrelated peaks}. For each set of peaks, we also define five time bins of variable width, separated by 2.3~ns, in order to analyze the time evolution of the signal. To evaluate the gate properties, we measure the area of the peaks for a given time-bin size. Because the emission decay time of the source is 750~ps, adjacent peaks overlap temporally by 5 to 10\%. The experimental data presented hereafter are corrected for this overlap (see Supplementary Information).
Figures 2b and 2c present the measured areas of the correlated and uncorrelated peaks for the control qubit set to $\ket{0}$; here the target and control photons do not interfere, so the result of the measurement depends only on the purity of the single-photon source, $g^{(2)}(0)$. In Figures 2d and 2e, the control qubit is set to $\ket{1}$, and the measurements are obtained with a time bin of 1~ns: in this case, the signal measured at the output depends on two-photon interference. For perfectly indistinguishable photons, the peak at zero delay in Fig.~2d should completely vanish, whereas for perfectly distinguishable photons this peak is expected to have the same area as the peaks at $\pm 2.3$~ns. Our observation of an intermediate case reflects the non-unity indistinguishability of successively emitted photons.
Each experimental curve is normalized using the areas of the central uncorrelated peaks at $p{\times}12.2$~ns ($p\in\mathbb{Z}^{*}$), which can be easily calculated by considering the optical paths followed by non-temporally-overlapping photons with Poisson statistics. Doing so, we find that the experimental areas (blue bars), averaged over 200 sets of uncorrelated peaks, are in very good agreement with theoretical expectations (red lines in Figs 2b--e). This normalization procedure allows us to measure the output coincident count rates normalized to the input pair mode, as shown in Table I for a time-bin width of 1~ns. In the 8 logical configurations indicated by $\alpha$ and $\beta$, there is actually no signal on one of the detectors: the dark-count-to-signal ratio leads to $\alpha{<}0.005{\times}\frac{1}{9}$ and $\beta{<}0.01{\times}\frac{1}{9}$. Using a photon mean-wavepacket overlap, $M$, of $50\%$, we see that the measured configurations, Table I, are in very good agreement with those predicted for an ideal gate, Table II \cite{cnot2}. The value of $M$ is not corrected for imperfections in the experimental setup---such as the visibility of the single-photon interference, the polarization ratio of the calcite, etc.---and is therefore a lower bound on the source indistinguishability; it compares well with previously reported values \cite{nat-comm}.
\begin{table}[h!]
\begin{center}
\begin{tabularx}{1\linewidth}{|c|XXXX|}
\hline
Input & C$_{\ket{\text{HH}}}$ & C$_{\ket{\text{HV}}}$ & C$_{\ket{\text{VH}}}$ & C$_{\ket{\text{VV}}}$ \\
\hline
${\ket{\text{HH}}}$ &$1.12{\times}\frac{1}{9}$ & $\alpha$ & $0.015{\times}\frac{1}{9}$ & $\alpha$ \\
${\ket{\text{HV}}}$ &$\alpha$ & $0.97{\times}\frac{1}{9}$ & $\alpha$ & $0.04{\times}\frac{1}{9}$ \\
${\ket{\text{VH}}}$ &$\beta$ & $\alpha$ & $0.50{\times}\frac{2}{9}$ & $0.92{\times}\frac{1}{9}$ \\
${\ket{\text{VV}}}$ &$\alpha$ & $\beta$ & $0.75{\times}\frac{1}{9}$ & $0.502{\times}\frac{2}{9}$ \\
\hline
\end{tabularx}
\caption{Experimental output coincident count rates normalized to the input pair mode. Input $\ket{\psi_\text{in}}{=} \ket{\text{control,target}}$ qubit states are indicated in the left column.
}
\label{Table:1}
\end{center}
\end{table}
\begin{table}[h!]
\begin{center}
\begin{tabularx}{1\linewidth}{|c|XXXX|}
\hline
Input & C$_{\ket{\text{HH}}}$ & C$_{\ket{\text{HV}}}$ & C$_{\ket{\text{VH}}}$ & C$_{\ket{\text{VV}}}$ \\
\hline
${\ket{\text{HH}}}$ & $\frac{1}{9}$ & 0 & 0 & 0 \\
${\ket{\text{HV}}}$ &0 & $\frac{1}{9}$ & 0 & 0 \\
${\ket{\text{VH}}}$ &0 & 0 & $\frac{2}{9}(1-M)$ & $\frac{1}{9}$ \\
${\ket{\text{VV}}}$ &0 & 0 & $\frac{1}{9}$ & $\frac{2}{9}(1-M)$ \\
\hline
\end{tabularx}
\caption{Theoretical output coincident count rates normalized to the input pair mode. Input $\ket{\psi_\text{in}}{=} \ket{\text{control,target}}$ qubit states are indicated in the left column.
}
\label{Table:2}
\end{center}
\end{table}
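As a quick arithmetic cross-check (illustrative only, not the paper's analysis code): the two interference-sensitive diagonal entries of Table I have the form $f{\times}\frac{2}{9}$, and comparison with the ideal $\frac{2}{9}(1-M)$ of Table II implies an overlap $M = 1 - f$ for each.

```python
# Cross-check of Tables I and II (illustrative arithmetic, not analysis code):
# measured entries of the form f * (2/9), versus the ideal (1 - M) * (2/9),
# imply a mean wavepacket overlap M = 1 - f entry by entry.
measured_factors = {"VH->VH": 0.50, "VV->VV": 0.502}
implied_M = {k: 1.0 - f for k, f in measured_factors.items()}
# Both entries are consistent with the M = 50% overlap quoted in the text.
```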
The left ordinate of Fig.~2f plots the overlap between the measured and ideal \textsc{cnot} gate truth tables---defined as the probability of obtaining the correct output, averaged over the four possible inputs \cite{White2007}---as a function of time-bin width. The right ordinate of Fig.~2f shows the number of collected photons per pulse as a function of the time-bin width, given by $I_{max}\times \int_0^{t_{bin}}e^{-t/\tau}\,dt/\int_0^{\infty}e^{-t/\tau}\,dt$, where $I_{max}$ is the source operating brightness---here, $I_{max}{=}0.75$ collected photons-per-pulse---and $\tau{=}750$~ps is the decay time of the single-photon emission. Figure~2f shows that the overlap between the measured and ideal truth tables increases from $0.684{\pm}0.005$ at a brightness of 0.56 to $0.730{\pm}0.016$ when reducing the time bin, thanks to the improved indistinguishability of photons emitted at shorter delays \cite{varoutsis2005, toshibaPRL2012}.
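The ratio of integrals above has the closed form $1 - e^{-t_{bin}/\tau}$; a short numerical sketch (assuming the quoted $I_{max}{=}0.75$ and $\tau{=}750$~ps) illustrates the brightness/time-bin trade-off:

```python
import math

# Collected photons-per-pulse versus time-bin width, using the closed form of
# the integral ratio in the text: I(t_bin) = I_max * (1 - exp(-t_bin / tau)).
def brightness(t_bin_ps, I_max=0.75, tau_ps=750.0):
    return I_max * (1.0 - math.exp(-t_bin_ps / tau_ps))

wide_bin = brightness(1e6)     # a very wide bin recovers the full 0.75
one_ns = brightness(1000.0)    # ~0.55: narrowing the bin trades brightness
                               # for indistinguishability, as in Fig. 2f
```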
\begin{figure}
\caption{a--f) Area of the correlation peaks as a function of time delay for the correlated peaks (left) and uncorrelated peaks (right) for a time bin of 1~ns. For all measurements the input state is $\ket{D,H}$.
\label{fig:3}}
\end{figure}
To certify that this gate and source combination can produce entangled states from unentangled inputs, we measure the fidelity of the output state with respect to an ideal Bell state. Setting the control qubit to $\ket{D}{=}(\ket{V}{+}\ket{H})/\sqrt{2}$ and the target qubit to $\ket{H}$, the output of an ideal gate is $\Phi^{+}{=}(\ket{V,V}{+}\ket{H,H})/\sqrt{2}$. To measure the fidelity of the experimentally generated state, we measure the polarization correlations in three bases~\cite{White2007,measuringqbit}:
$$ E_{\alpha,\beta}=\frac{A_{\alpha,\alpha}+A_{\beta,\beta}-A_{\alpha,\beta}-A_{\beta,\alpha}}{A_{\alpha,\alpha}+A_{\beta,\beta}+A_{\alpha,\beta}+A_{\beta,\alpha}}$$ where $A_{\beta,\alpha}$ is the zero-delay peak area measured with the output control photon detected in $\beta$ polarization and the output target photon in $\alpha$ polarization. The fidelity to the Bell state is then given by $F_{\Phi^{+}}{=}\left(1{+}E_{H,V}{+}E_{D,A}{-}E_{R,L}\right)/4$, where the anti-diagonal polarization is $\ket{A}{=}(\ket{H}{-}\ket{V})/\sqrt{2}$, and the circular-basis polarizations are right, $\ket{R}{=}(\ket{H}{+}\ii \ket{V})/\sqrt{2}$, and left, $\ket{L}{=}(\ket{H}{-}\ii \ket{V})/\sqrt{2}$. Figures 3a--f show the experimental correlation curves for two polarization configurations in each basis. Note that for both the linear and diagonal bases, the results of the measurement depend on two-photon quantum interference only when the output photons are in $\ket{V,H}$, $\ket{V,V}$, $\ket{A,D}$ or $\ket{A,A}$. The four other terms result only from single-photon interference (not shown).
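A minimal sketch of this estimator (the two limiting cases below are standard identities, not measured data):

```python
# Sketch of the fidelity estimator: visibilities E are formed from zero-delay
# peak areas A_{c,t} in each basis, then combined into the Bell-state
# fidelity. Inputs here are illustrative, not measured data.
def visibility(A_aa, A_bb, A_ab, A_ba):
    return (A_aa + A_bb - A_ab - A_ba) / (A_aa + A_bb + A_ab + A_ba)

def bell_fidelity(E_HV, E_DA, E_RL):
    return (1.0 + E_HV + E_DA - E_RL) / 4.0

# An ideal Phi+ state gives E_HV = E_DA = +1 and E_RL = -1, hence F = 1;
# a fully mixed output gives all E = 0 and F = 1/4.
F_ideal = bell_fidelity(visibility(1, 1, 0, 0), visibility(1, 1, 0, 0), -1.0)
F_mixed = bell_fidelity(0.0, 0.0, 0.0)
```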
\begin{figure}
\caption{a) Fidelity to the Bell state $\Phi^{+}$.
\label{fig:4}}
\end{figure}
Figure 4a presents the fidelity to the Bell state, $F_{\Phi^{+}}$, as a function of the time bin. For all time bins, the fidelity to the Bell state is above the 0.5 limit for quantum correlations. For these entanglement measurements, we have only slightly decreased the source brightness, to $I_{max}{=}0.65$, in order to obtain a better degree of indistinguishability \cite{nat-comm}. Our results show the creation of an entangled two-photon state for a source brightness as large as 0.48 collected photons-per-pulse. When reducing the time bin---and thereby the source brightness, as indicated on the right ordinate of Fig.~4a---the fidelity increases up to $0.710{\pm}0.036$.
Figure 4b presents the expected fidelity to the Bell state as a function of the mean wavepacket overlap, $M$. Following \cite{cnot2} to calculate the output coincident count rates for all basis configurations, it can be shown that $F_{\Phi^{+}}{=}\frac{1+M}{2(2-M)}$. For $M{=}0$, the fidelity is 0.25, which is the value experimentally observed for the uncorrelated peaks (square). For a time bin of 2~ns, the measured fidelity of 0.5 is consistent with $M{=}0.5$ (circle), which is a lower bound for $M$ since our modeling does not take into account the experimental imperfections of the setup. For a time bin of 400~ps, the measured fidelity of 0.71 indicates a mean wavepacket overlap of at least $M{=}0.76$ (triangle).
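The three operating points quoted above can be checked directly against this relation (values as quoted in the text):

```python
# Check of F = (1 + M) / (2 * (2 - M)) at the three points discussed above.
def fidelity_from_overlap(M):
    return (1.0 + M) / (2.0 * (2.0 - M))

assert abs(fidelity_from_overlap(0.0) - 0.25) < 1e-12    # uncorrelated peaks
assert abs(fidelity_from_overlap(0.5) - 0.50) < 1e-12    # 2 ns time bin
assert abs(fidelity_from_overlap(0.76) - 0.71) < 5e-3    # 400 ps time bin
```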
In conclusion, we have demonstrated the successful implementation of an entangling \textsc{cnot} gate operating with an ultrabright single-photon source. The gate is entangling for source brightnesses up to 0.48 collected photons-per-pulse, reaching a Bell-state fidelity of $71.0{\pm}3.6$\% at a source brightness of 0.15 collected photons-per-pulse. To improve the fidelity of the gate operation while maintaining a high source brightness, one could use an adiabatic design of the micropillar to benefit from a larger Purcell effect and further improve the source indistinguishability~\cite{niels}. These advances in quantum-dot single-photon technologies open exciting possibilities for linear optical computing. Their main asset compared with heralded single-photon sources based on parametric down-conversion is the possibility of obtaining very bright sources with negligible multiphoton events. Photonic quantum technologies will require access to multiple single photons, multiplexed in different spatial modes. Small-scale implementation of quantum logic circuits is the first step towards incorporating quantum-dot-based single-photon sources into these technologies.
\begin{acknowledgments}
This work was partially supported by: the French ANR P3N DELIGHT, ANR JCJC MIND, the ERC starting grant 277885 QD-CQED, the French RENATECH network and the CHISTERA project SSQN; and the Australia Research Council Centre for Engineered Quantum Systems (CE110001013) and the Centre for Quantum Computation and Communication Technology (CE110001027). O.G. acknowledges support by the French Delegation Generale de l'Armement; MPA by the Australian Research Council Discovery Early Career Award (DE120101899); and AGW by the University of Queensland Vice-Chancellor's Senior Research Fellowship.
\end{acknowledgments}
\end{document} |
\begin{document}
\title{Time-reversal of rank-one quantum strategy\newline functions}
\date{October 1, 2018}
\author{Yuan Su}
\affiliation{
Department of Computer Science,
Institute for Advanced Computer Studies, and
Joint Center for Quantum Information and Computer Science,
University of Maryland, USA
}
\homepage{http://quics.umd.edu/people/yuan-su}
\orcid{0000-0003-1144-3563}
\author{John Watrous}
\affiliation{Institute for Quantum Computing and School of Computer
Science, University of Waterloo, Canada
}
\affiliation{Canadian Institute for Advanced Research, Toronto,
Canada
}
\homepage{https://cs.uwaterloo.ca/~watrous/}
\orcid{0000-0002-4263-9393}
\begin{abstract}
The \emph{quantum strategy} (or \emph{quantum combs}) framework is a useful
tool for reasoning about interactions among entities that process and
exchange quantum information over the course of multiple turns.
We prove a time-reversal property for a class of linear functions, defined
on quantum strategy representations within this framework, that corresponds
to the set of rank-one positive semidefinite operators on a certain space.
This time-reversal property states that the maximum value obtained by such a
function over all valid quantum strategies is also obtained when the
direction of time for the function is reversed, despite the fact that the
strategies themselves are generally not time reversible.
An application of this fact is an alternative proof of a known relationship
between the conditional min- and max-entropy of bipartite quantum states,
along with generalizations of this relationship.
\end{abstract}
\section{The quantum strategy framework}
The \emph{quantum strategy framework} \cite{GutoskiW07}, which is also known as
the \emph{quantum combs framework} \cite{ChiribellaDP08,ChiribellaDP09},
provides a useful tool for reasoning about networks of quantum channels.
It may be used to model scenarios in which two or more entities, which we will
call \emph{players}, process and exchange quantum information over the course
of multiple rounds of communication; and it is particularly useful when one
wishes to consider an optimization over all possible behaviors of one
player, for any given specification of the other player or players.
Various developments, applications, and variants of the quantum strategy
framework can be found in
\cite{ChiribellaDP08b,ChiribellaDPSW13,ChiribellaE16,Gutoski09,Hardy12},
for instance, and in a number of other sources.
In the discussion of the quantum strategy framework that follows, as well as in
the subsequent sections of this paper, we assume that the reader is familiar
with quantum information theory and semidefinite programming.
References on this material include
\cite{NielsenC00, Wilde13, KitaevSV02, WolkowiczSV00} as well as
\cite{Watrous18}, which we follow closely with respect to notation and
terminology.
In particular, we denote quantum registers by capital sans serif letters such
as $\reg{X}$, $\reg{Y}$, and $\reg{Z}$ (sometimes with natural number
subscripts), while the same letters (with matching subscripts) in a scripted
font, such as $\X$, $\Y$, and $\Z$, denote the complex Euclidean spaces
(i.e., finite-dimensional complex Hilbert spaces) associated with the
corresponding registers.
The set $\setft{L}(\X,\Y)$ denotes the set of all linear operators from $\X$ to
$\Y$;
$\setft{L}(\X)$ is a shorthand for $\setft{L}(\X,\X)$;
$\setft{Herm}(\X)$, $\setft{Pos}(\X)$, $\setft{D}(\X)$, and $\setft{U}(\X)$ denote the sets
of all Hermitian operators, positive semidefinite operators, density operators,
and unitary operators acting on $\X$, respectively;
$\setft{C}(\X,\Y)$ denotes the set of all channels (i.e., completely positive and
trace-preserving maps) mapping $\setft{L}(\X)$ to $\setft{L}(\Y)$; and
$\setft{C}(\X)$ is a shorthand for $\setft{C}(\X,\X)$.
The adjoint of an operator $A$ is denoted $A^{\ast}$, the entry-wise complex
conjugate is denoted $\overline{A}$, and the transpose is denoted $A^{{\scriptscriptstyle\mathsf{T}}}$.
A similar notation is used for the adjoint and transpose of a channel $\Phi$
(the meaning of which, in the case of the transpose, will be clarified later).
The (Hilbert-Schmidt) inner-product is defined as $\ip{A}{B} = \operatorname{Tr}(A^{\ast}B)$
for all operators $A,B\in\setft{L}(\X)$.
Some additional notation will be introduced as it is used.
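As a small numerical aside (not part of the paper), the following verifies two standard facts about this notation: $\ip{A}{A}$ equals the squared Frobenius norm, and the inner product is computed from the adjoint as defined above.

```python
import numpy as np

# Illustration of the notation: adjoint A*, and the Hilbert-Schmidt inner
# product <A, B> = Tr(A* B), on two small arbitrary matrices.
A = np.array([[1.0, 1.0j], [0.0, 2.0]])
B = np.array([[3.0, 0.0], [1.0j, 1.0]])

inner = np.trace(A.conj().T @ B)       # <A, B>
norm_sq = np.trace(A.conj().T @ A)     # <A, A>

# <A, A> is real and nonnegative, and equals the squared Frobenius norm.
assert np.isclose(norm_sq.imag, 0.0)
assert np.isclose(norm_sq.real, np.linalg.norm(A, 'fro') ** 2)
```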
\subsection*{An example of a six-message interaction}
To explain the aspects of the quantum strategy framework that are relevant to
this paper, we will begin by discussing an example of an interaction structure
involving six messages exchanged between two players, Alice and Bob.
We have chosen to describe a six-message interaction because it is simple and
concrete, but nevertheless clearly suggests the underlying structure of an
interaction having any finite number of message exchanges.
Our main result holds in the general case, which will be considered later,
where an arbitrary finite number of message exchanges may take place.
\begin{figure}
\caption{A six message interaction between Alice and Bob, after which Bob
produces a measurement outcome.}
\label{fig:Alice-and-Bob-interact}
\end{figure}
Figure~\ref{fig:Alice-and-Bob-interact} illustrates an interaction between
Alice and Bob.
In this figure, time proceeds from left to right, and the arrows represent
registers either being sent from one player to the other
(as is the case for the registers $\reg{X}_1$, $\reg{Y}_1$, $\reg{X}_2$,
$\reg{Y}_2$, $\reg{X}_3$, and $\reg{Y}_3$), or momentarily stored by one of the
two players (as is the case for $\reg{Z}_1$ and $\reg{Z}_2$, stored by Alice,
and $\reg{W}_1$, $\reg{W}_2$, $\reg{W}_3$, $\reg{W}_4$, stored by Bob).
Alice's actions are represented by the channels $\Phi_1$, $\Phi_2$, and
$\Phi_3$, and Bob's actions are represented by the channels $\Psi_1$, $\Psi_2$,
$\Psi_3$, and $\Psi_4$, as well as a final measurement, which is not given a
name in the figure.
Suppose that Bob's specification has been fixed, including his choices for the
channels $\Psi_1$, $\Psi_2$, $\Psi_3$, and $\Psi_4$, as well as his final
measurement, and suppose further that one of Bob's possible measurement
outcomes is to be viewed as desirable to Alice.
It is then natural to consider an optimization over Alice's possible actions,
maximizing the probability that Bob's measurement produces the outcome Alice
desires.
The quantum strategy framework reveals that this optimization problem can be
expressed as a semidefinite program, in the manner that will now be described.
First, a single channel $\Xi_3$ that transforms
$(\reg{X}_1,\reg{X}_2,\reg{X}_3)$ to $(\reg{Y}_1,\reg{Y}_2,\reg{Y}_3)$ is
associated with any given choice for Alice's actions.
That is, the channel $\Xi_3$ takes the form
\begin{equation}
\label{eq:3-channel-form}
\Xi_3\in\setft{C}(\X_1\otimes\X_2\otimes\X_3,\Y_1\otimes\Y_2\otimes\Y_3),
\end{equation}
and for a particular selection of $\Phi_1$, $\Phi_2$, and $\Phi_3$ may be
expressed as
\begin{equation}
\label{eq:3-channel-composition}
\Xi_3 = \bigl(\I_{\setft{L}(\Y_1\otimes\Y_2)} \otimes \Phi_3\bigr)
\bigl(\I_{\setft{L}(\Y_1)} \otimes \Phi_2 \otimes \I_{\setft{L}(\X_3)}\bigr)
\bigl(\Phi_1 \otimes \I_{\setft{L}(\X_2 \otimes \X_3)}\bigr).
\end{equation}
Formally speaking, this composition requires that we view $\Phi_1$, $\Phi_2$,
and $\Phi_3$ as channels of the form
$\Phi_1\in\setft{C}(\X_1,\Y_1\otimes\Z_1)$,
$\Phi_2 \in \setft{C}(\Z_1\otimes\X_2,\Y_2\otimes\Z_2)$, and
$\Phi_3 \in \setft{C}(\Z_2\otimes\X_3,\Y_3)$,
as opposed to the forms
$\Phi_1\in\setft{C}(\X_1,\Z_1\otimes\Y_1)$,
$\Phi_2 \in \setft{C}(\Z_1\otimes\X_2,\Z_2\otimes\Y_2)$, and
$\Phi_3 \in \setft{C}(\Z_2\otimes\X_3,\Y_3)$ suggested by
Figure~\ref{fig:Alice-and-Bob-interact}, so that the ordering of the tensor
factors of the various input and output spaces is consistent with the
composition.
Similar re-orderings of tensor factors should be assumed implicitly throughout
this paper as needed.
This understanding should not be a source of confusion because we always
assign distinct names to distinct registers (and their associated spaces).
Figure~\ref{fig:Alice-as-channel} illustrates the action of the channel
$\Xi_3$, which in words may be described as the channel obtained if all three
of the registers $(\reg{X}_1,\reg{X}_2,\reg{X}_3)$ are provided initially, and
then Alice's actions are composed in the natural way to produce
$(\reg{Y}_1,\reg{Y}_2,\reg{Y}_3)$ as output registers.
\begin{figure}
\caption{The channel $\Xi_3$ that describes Alice's actions in the
interaction illustrated in Figure~\ref{fig:Alice-and-Bob-interact}
\label{fig:Alice-as-channel}
\end{figure}
It may appear that by considering the channel $\Xi_3$, one is ignoring the
possibility that Bob's actions could, for instance, allow the contents of
$\reg{Y}_1$ or $\reg{Y}_2$ to influence what is input into $\reg{X}_2$ or
$\reg{X}_3$.
Despite this appearance, the influence that Alice's actions have from the
viewpoint of Bob, including the probability for each of his measurement
outcomes to appear, is uniquely determined by the channel $\Xi_3$.
Naturally, not all channels of the form \eqref{eq:3-channel-form} will arise
from a composition of channels $\Phi_1$, $\Phi_2$, and $\Phi_3$ as in
\eqref{eq:3-channel-composition}; the fact that $\Phi_1$ is effectively
performed first, $\Phi_2$ is performed second, and $\Phi_3$ is performed third
imposes constraints on the channels $\Xi_3$ that can be obtained.
In particular, consider the channel that results when $\Xi_3$ is performed
and then the partial trace is performed on $\Y_3$.
As $\Phi_3$ is a channel, discarding its output is equivalent to discarding its
inputs, from which it follows that
\begin{equation}
\label{eq:channel-constraint-3}
\operatorname{Tr}_{\Y_3} \circ \Xi_3 = \Xi_2 \circ \operatorname{Tr}_{\X_3},
\end{equation}
where the circles represent channel compositions and
$\Xi_2\in\setft{C}(\X_1\otimes\X_2,\Y_1\otimes\Y_2)$ is the channel defined as
\begin{equation}
\Xi_2 = \bigl(\I_{\setft{L}(\Y_1)} \otimes (\operatorname{Tr}_{\Z_2} \circ \Phi_2)\bigr)
\bigl(\Phi_1 \otimes \I_{\setft{L}(\X_2)}\bigr).
\end{equation}
That is, $\Xi_2$ is the channel obtained from $\Phi_1$ and $\Phi_2$, followed
by the partial trace over $\Z_2$, by a similar process to the one used to
obtain $\Xi_3$.
By similar reasoning, one finds that
\begin{equation}
\label{eq:channel-constraint-2}
\operatorname{Tr}_{\Y_2} \circ \Xi_2 = \Xi_1 \circ \operatorname{Tr}_{\X_2},
\end{equation}
where $\Xi_1\in\setft{C}(\X_1,\Y_1)$ is the channel given by
$\Xi_1 = \operatorname{Tr}_{\Z_1}\circ \Phi_1$.
Somewhat remarkably, this is not only a necessary condition on the channel
$\Xi_3$, but also a sufficient one, for it to be obtained from a composition of
channels $\Phi_1$, $\Phi_2$, and $\Phi_3$ as described above.
That is, given any channel
\begin{equation}
\Xi_3\in\setft{C}(\X_1\otimes\X_2\otimes\X_3,\Y_1\otimes\Y_2\otimes\Y_3)
\end{equation}
satisfying \eqref{eq:channel-constraint-3} and \eqref{eq:channel-constraint-2},
for some choice of channels
\begin{equation}
\begin{gathered}
\Xi_2\in\setft{C}(\X_1\otimes\X_2,\Y_1\otimes\Y_2),\\
\Xi_1\in\setft{C}(\X_1,\Y_1),
\end{gathered}
\end{equation}
there must exist channels
\begin{equation}
\begin{gathered}
\Phi_1\in\setft{C}(\X_1,\Y_1\otimes\Z_1),\\
\Phi_2\in\setft{C}(\Z_1\otimes\X_2,\Y_2\otimes\Z_2),\\
\Phi_3\in\setft{C}(\Z_2\otimes\X_3,\Y_3),
\end{gathered}
\end{equation}
for spaces $\Z_1$ and $\Z_2$ having sufficiently large dimension, so that
\eqref{eq:3-channel-composition} holds.
This fact is proved in \cite{ChiribellaDP08,ChiribellaDP09,GutoskiW07}, and we
note that a key idea through which this equivalence is proved may be found in
\cite{EggelingSW02}.
The next step toward an expression of the optimization problem suggested above
as a semidefinite program makes use of the Choi representation of channels.
The Choi representation of the channel $\Xi_3$ takes the form
\begin{equation}
J(\Xi_3) \in
\setft{Pos}(\Y_1\otimes\Y_2\otimes\Y_3\otimes\X_1\otimes\X_2\otimes\X_3),
\end{equation}
as the complete positivity of $\Phi_1$, $\Phi_2$, and $\Phi_3$ implies that
$\Xi_3$ is also completely positive, and therefore $J(\Xi_3)$ is positive
semidefinite.
The constraints on the channel $\Xi_3$ described previously correspond
(conveniently) to linear constraints;
one has that \eqref{eq:channel-constraint-3} and
\eqref{eq:channel-constraint-2} hold, for some choice of channels $\Xi_2$ and
$\Xi_1$, if and only if the Choi representation $X_3 = J(\Xi_3)$ of $\Xi_3$
satisfies
\begin{equation}
\begin{gathered}
\operatorname{Tr}_{\Y_3}(X_3) = X_2 \otimes \I_{\X_3},\\
\operatorname{Tr}_{\Y_2}(X_2) = X_1 \otimes \I_{\X_2},\\
\operatorname{Tr}_{\Y_1}(X_1) = \I_{\X_1},
\end{gathered}
\end{equation}
for some choice of operators
\begin{equation}
\begin{gathered}
X_2 \in \setft{Pos}(\Y_1\otimes\Y_2\otimes\X_1\otimes\X_2),\\
X_1 \in \setft{Pos}(\Y_1\otimes\X_1).
\end{gathered}
\end{equation}
These operators correspond to the Choi representations $X_2 = J(\Xi_2)$ and
$X_1 = J(\Xi_1)$.
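These constraints are easy to verify numerically in the simplest case $n = 1$. The sketch below is an illustration (not part of the paper); it adopts the $\Y\otimes\X$ ordering of the Choi representation used here, and checks that $\operatorname{Tr}_{\Y_1}(X_1) = \I_{\X_1}$ holds for the Choi operator of a unitary channel.

```python
import numpy as np

# Illustration of the k = 1 constraint Tr_Y(X_1) = I_X for the Choi
# representation J(Xi) on Y (x) X of the unitary channel rho -> U rho U*.
d = 2
U = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)

J = np.zeros((d * d, d * d), dtype=complex)
for i in range(d):
    for j in range(d):
        E = np.zeros((d, d)); E[i, j] = 1.0
        J += np.kron(U @ E @ U.conj().T, E)   # sum_ij Phi(E_ij) (x) E_ij

# Partial trace over the first (output) tensor factor Y.
R = J.reshape(d, d, d, d)
TrY = sum(R[k, :, k, :] for k in range(d))

assert np.allclose(TrY, np.eye(d))               # trace preservation
assert np.min(np.linalg.eigvalsh(J)) > -1e-9     # positive semidefinite
```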
Finally, the probability that Bob's measurement produces a given outcome
is a linear function of the channel $\Xi_3$, and is therefore a linear function
of the Choi representation $X_3 = J(\Xi_3)$.
Although this process is not relevant to the main result of this paper, we note
that it is possible to obtain an explicit description of this linear function
given a specification of Bob's actions, including his final measurement.
In somewhat vague terms, the linear function describing Bob's probability to
produce a particular measurement outcome is given by $\ip{P}{X_3}$, where
\begin{equation}
P\in\setft{Pos}(\Y_1\otimes\Y_2\otimes\Y_3\otimes\X_1\otimes\X_2\otimes\X_3)
\end{equation}
is an operator that is obtained from $\Psi_1$, $\Psi_2$, $\Psi_3$, $\Psi_4$,
and the measurement operator corresponding to the outcome being considered
by a process very similar to the one through which $X_3$ is obtained from
$\Phi_1$, $\Phi_2$, and $\Phi_3$.
The reader is again referred to
\cite{GutoskiW07,ChiribellaDP08,ChiribellaDP09} for further details.
More generally, an arbitrary real-valued linear function of the operator
$X_3$ may be expressed as $\ip{H}{X_3}$ for some choice of a Hermitian operator
\begin{equation}
H\in\setft{Herm}(\Y_1\otimes\Y_2\otimes\Y_3\otimes\X_1\otimes\X_2\otimes\X_3),
\end{equation}
which need not represent the probability with which a particular measurement
outcome is obtained for channels $\Psi_1,\ldots,\Psi_4$ followed by a
measurement.
Such a function could, for instance, represent an expected payoff for Alice's
actions, under the assumption that a real-valued payoff is associated with each
of Bob's measurement outcomes.
\subsection*{General semidefinite programming formulation}
As mentioned previously, the six-message example just described generalizes to
any finite number of message exchanges.
If the number of message exchanges is equal to $n$, the input registers to
Alice (the player whose actions are being optimized) are
$\reg{X}_1,\ldots,\reg{X}_n$, and the output registers of Alice are
$\reg{Y}_1,\ldots,\reg{Y}_n$, then the possible strategies for Alice are
represented by channels of the form
\begin{equation}
\Xi_n \in \setft{C}(\X_1\otimes\cdots\otimes\X_n,
\Y_1\otimes\cdots\otimes\Y_n)
\end{equation}
that obey constraints that generalize \eqref{eq:channel-constraint-3} and
\eqref{eq:channel-constraint-2}.
Specifically, there must exist channels
\begin{equation}
\begin{gathered}
\Xi_{n-1} \in \setft{C}(\X_1\otimes\cdots\otimes\X_{n-1},
\Y_1\otimes\cdots\otimes\Y_{n-1})\\
\vdots\\
\Xi_1\in\setft{C}(\X_1,\Y_1)
\end{gathered}
\end{equation}
such that
\begin{equation}
\operatorname{Tr}_{\Y_k} \circ \Xi_k = \Xi_{k-1} \circ \operatorname{Tr}_{\X_k}
\end{equation}
for all $k\in\{2,\ldots,n\}$.
For the maximization of a real-valued linear function over all strategies for
Alice, represented by a Hermitian operator
\begin{equation}
H\in\setft{Herm}(\Y_1\otimes\cdots\otimes\Y_n\otimes\X_1\otimes\cdots\otimes\X_n),
\end{equation}
one obtains the semidefinite program described in
Figure~\ref{figure:semidefinite-program}.
The primal problem corresponds to an optimization over all Choi
representations of the channels $\Xi_1,\ldots,\Xi_n$.
This semidefinite programming formulation is implicit in \cite{GutoskiW07},
and first appeared explicitly in \cite{Gutoski09}.
It also appears in \cite{ChiribellaE16}, where it was used to define
a generalized notion of min-entropy for quantum networks.
\begin{figure}
\caption{The semidefinite program representing a maximization of a linear
function of an $n$-turn strategy.}
\label{figure:semidefinite-program}
\end{figure}
It may be noted that the general problem just formulated concerns interactions
involving an even number of register exchanges, where Alice (the player whose
actions are being optimized) always receives the first transmission,
represented by $\reg{X}_1$, and sends the last transmission, represented by
$\reg{Y}_n$.
However, one is free to take either or both of the registers $\reg{X}_1$ and
$\reg{Y}_n$ to be trivial registers, so that correspondingly $\X_1 = \complex$
and/or $\Y_n = \complex$.
This is tantamount to allowing either an odd number of register exchanges or an
even number in the situation that Alice sends the first (nontrivial) register
and receives the last.
\section{Statement and proof of the main result}
The main result of the current paper concerns the optimization problem
described in the previous section, as represented by the semidefinite program
in Figure~\ref{figure:semidefinite-program}, in the case that $H = u u^{\ast}$
is a rank-one positive semidefinite operator.
The result to be described does not hold in general when $H$ does not take this
form.
In order to explain the main result in precise terms, it will be helpful to
introduce some notation.
Suppose that a positive integer $n$ along with spaces $\X_1,\ldots,\X_n$ and
$\Y_1,\ldots,\Y_n$ have been fixed.
For each $k\in\{1,\ldots,n\}$, let
\begin{equation}
\S_k(\X_1,\ldots,\X_k; \Y_1,\ldots,\Y_k)
\subset \setft{Pos}(\Y_1\otimes\cdots\otimes\Y_k\otimes\X_1\otimes\cdots\otimes\X_k)
\end{equation}
denote the primal-feasible choices for the operator $X_k$ in the semidefinite
program specified in Figure~\ref{figure:semidefinite-program}.
That is, we define
\begin{equation}
\S_1(\X_1;\Y_1) = \bigl\{X_1\in\setft{Pos}(\Y_1\otimes\X_1)\,:\,
\operatorname{Tr}_{\Y_1}(X_1) = \I_{\X_1}\bigr\}
\end{equation}
(which is the set of all Choi operators of channels of the form
$\Xi_1 \in \setft{C}(\X_1,\Y_1)$), and
\begin{equation}
\begin{multlined}
\S_k(\X_1,\ldots,\X_k;\Y_1,\ldots,\Y_k)\\[2mm]
= \bigl\{
X_k\in\setft{Pos}(\Y_1\otimes\cdots\otimes\Y_k\otimes\X_1\otimes\cdots\otimes\X_k)
\,:\,\operatorname{Tr}_{\Y_k}(X_k) = X_{k-1}\otimes\I_{\X_k}\\
\text{for some}\;X_{k-1}\in\S_{k-1}(\X_1,\ldots,
\X_{k-1};\Y_1,\ldots,\Y_{k-1})\bigr\}
\end{multlined}
\end{equation}
for $k \in \{2,\ldots,n\}$.
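For concreteness, these membership conditions are easy to verify numerically. The sketch below (an illustration, not part of the text; the channel and dimensions are arbitrary choices) builds the Choi operator of a qubit unitary channel in the $\Y_1\otimes\X_1$ ordering used above and checks the defining constraint of $\S_1(\X_1;\Y_1)$.

```python
import numpy as np

def choi(kraus_ops, d_in):
    # J(Phi) = sum_{a,b} Phi(|a><b|) (x) |a><b|, with the output factor first,
    # matching the ordering Pos(Y_1 (x) X_1) used in the text.
    d_out = kraus_ops[0].shape[0]
    J = np.zeros((d_out * d_in, d_out * d_in), dtype=complex)
    for a in range(d_in):
        for b in range(d_in):
            E = np.zeros((d_in, d_in), dtype=complex)
            E[a, b] = 1.0
            phi_E = sum(A @ E @ A.conj().T for A in kraus_ops)
            J += np.kron(phi_E, E)
    return J

def trace_out_first(X, d_first, d_second):
    # Partial trace over the first tensor factor.
    return np.einsum('abad->bd', X.reshape(d_first, d_second, d_first, d_second))

# Example: the Hadamard channel on a qubit.
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
X1 = choi([H], 2)
# Membership in S_1(X_1; Y_1): positivity plus Tr_{Y_1}(X1) = I_{X_1}.
in_S1 = (np.all(np.linalg.eigvalsh(X1) > -1e-12)
         and np.allclose(trace_out_first(X1, 2, 2), np.eye(2)))
```

The same partial-trace check, applied recursively through the nested constraint, tests membership in $\S_k$ for $k\geq 2$.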
The primal form of the semidefinite program described in
Figure~\ref{figure:semidefinite-program} can therefore be expressed succinctly
as
\begin{equation}
\begin{aligned}
\text{maximize:} \quad & \ip{H}{X}\\
\text{subject to:} \quad & X\in
\S_n(\X_1,\ldots,\X_n;\Y_1,\ldots,\Y_n).
\end{aligned}
\end{equation}
We will refer to operators in the sets defined above as
\emph{strategy operators}, as they represent $n$-turn strategies with respect
to the quantum strategy framework.
Let us also define an isometry
\begin{equation}
W \in
\setft{U}(\Y_1\otimes\cdots\otimes\Y_n\otimes\X_1\otimes\cdots\otimes\X_n,
\X_n\otimes\cdots\otimes\X_1\otimes\Y_n\otimes\cdots\otimes\Y_1)
\end{equation}
by the action
\begin{equation}
W (y_1\otimes\cdots\otimes y_n\otimes x_1\otimes\cdots\otimes x_n)
= x_n\otimes\cdots\otimes x_1\otimes y_n\otimes\cdots\otimes y_1
\end{equation}
for all vectors $x_1\in\X_1,\ldots,x_n\in\X_n$ and
$y_1\in\Y_1,\ldots,y_n\in\Y_n$.
In words, $W$ simply reverses the order of the tensor factors of the space
$\Y_1\otimes\cdots\otimes\Y_n\otimes\X_1\otimes\cdots\otimes\X_n$, yielding a
vector in $\X_n\otimes\cdots\otimes\X_1\otimes\Y_n\otimes\cdots\otimes\Y_1$
that, aside from this re-ordering of tensor factors, is the same as its input
vector.
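Concretely, $W$ is a permutation matrix. The following sketch (arbitrary small dimensions, purely illustrative) constructs it by permuting multi-indices and checks both the reversal action on product vectors and the isometry property.

```python
import numpy as np

def reversal_operator(dims):
    # Builds W reversing the tensor factors of C^{d_1} (x) ... (x) C^{d_m}.
    D = int(np.prod(dims))
    W = np.zeros((D, D))
    for idx in np.ndindex(*dims):
        col = np.ravel_multi_index(idx, dims)
        row = np.ravel_multi_index(idx[::-1], tuple(reversed(dims)))
        W[row, col] = 1.0
    return W

# Check the action on a product vector with factor dimensions (2, 3):
# W (a (x) b) = b (x) a, and W is an isometry (here a permutation matrix).
a = np.array([1.0, 2.0])
b = np.array([3.0, 4.0, 5.0])
W = reversal_operator((2, 3))
reversed_ok = np.allclose(W @ np.kron(a, b), np.kron(b, a))
is_isometry = np.allclose(W.T @ W, np.eye(6))
```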
\subsection*{Statement of the main result}
With the notation just introduced in hand, the main theorem may now be stated.
\begin{theorem}
\label{theorem:main}
Let $\X_1,\ldots,\X_n$ and $\Y_1,\ldots,\Y_n$ be complex Euclidean spaces,
for $n$ a positive integer, let
\begin{equation}
u \in \Y_1\otimes\cdots\otimes\Y_n\otimes\X_1\otimes\cdots\otimes\X_n
\end{equation}
be a vector, and let
\begin{equation}
X\in\S_n(\X_1,\ldots,\X_n;\Y_1,\ldots,\Y_n)
\end{equation}
be a strategy operator.
There exists a strategy operator
\begin{equation}
Y\in\S_n(\Y_n,\ldots,\Y_1;\X_n,\ldots,\X_1)
\end{equation}
such that
\begin{equation}
\label{eq:main-theorem-inequality}
\ip{W u u^{\ast} W^{\ast}}{Y} \geq \ip{u u^{\ast}}{X}.
\end{equation}
If it is the case that
$\dim(\Y_1\otimes\cdots\otimes\Y_n)\leq\dim(\X_1\otimes\cdots\otimes\X_n)$,
then the operator $Y$ may be chosen so that equality holds in
\eqref{eq:main-theorem-inequality}.
\end{theorem}
\begin{cor}
\label{cor:main}
Let $\X_1,\ldots,\X_n$ and $\Y_1,\ldots,\Y_n$ be complex Euclidean spaces,
for $n$ a positive integer, and let
\begin{equation}
u \in \Y_1\otimes\cdots\otimes\Y_n\otimes\X_1\otimes\cdots\otimes\X_n
\end{equation}
be a vector.
The semidefinite optimization problems
\begin{equation}
\begin{aligned}
\text{maximize:}\quad & \ip{u u^{\ast}}{X}\\
\text{subject to:}\quad
& X \in \S_n(\X_1,\ldots,\X_n;\Y_1,\ldots,\Y_n)
\end{aligned}
\end{equation}
and
\begin{equation}
\begin{aligned}
\text{maximize:}\quad & \ip{W u u^{\ast}W^{\ast}}{Y}\\
\text{subject to:}\quad
& Y \in \S_n(\Y_n,\ldots,\Y_1;\X_n,\ldots,\X_1)
\end{aligned}
\end{equation}
have the same optimum value.
\end{cor}
\begin{remark}
Using the notation introduced in \cite{ChiribellaE16}, which defines a
quantum network generalization of conditional min-entropy, the equivalence
expressed by Corollary~\ref{cor:main} may alternatively be written
\begin{equation}
\begin{multlined}
\op{H}_{\text{min}}\bigl(\reg{Y}_n\mid\reg{X}_1,\reg{Y}_1,\ldots,
\reg{X}_{n-1},\reg{Y}_{n-1},\reg{X}_n\bigr)_{u u^{\ast}}\\[1mm]
= \op{H}_{\text{min}}\bigl(\reg{X}_1\mid\reg{Y}_n,\reg{X}_n,\ldots,
\reg{Y}_{2},\reg{X}_{2},\reg{Y}_1\bigr)_{u u^{\ast}}
\end{multlined}
\end{equation}
for every vector $u \in \X_1 \otimes\Y_1\otimes\cdots\otimes\X_n\otimes\Y_n$.
\end{remark}
\subsection*{Interpretations of the main theorem}
Theorem~\ref{theorem:main} establishes a \emph{time-reversal property} of
rank-one strategy functions.
Intuitively speaking, the linear function
\begin{equation}
Y \mapsto \ip{W u u^{\ast} W^{\ast}}{Y}
\end{equation}
defined on $\S_n(\Y_n,\ldots,\Y_1;\X_n,\ldots,\X_1)$ represents the
\emph{time-reversal} of the linear function
\begin{equation}
X \mapsto \ip{u u^{\ast}}{X}
\end{equation}
defined on $\S_n(\X_1,\ldots,\X_n;\Y_1,\ldots,\Y_n)$, in the sense that the two
functions differ only in the reversal of the ordering of the register
exchanges:
$\reg{X}_1$, $\reg{Y}_1$, $\ldots$, $\reg{X}_n$, $\reg{Y}_n$ for the function
corresponding to $u u^{\ast}$ and $\reg{Y}_n$, $\reg{X}_n$, $\ldots$,
$\reg{Y}_1$, $\reg{X}_1$ for the function corresponding to
$W u u^{\ast} W^{\ast}$.
For a given choice of $X\in\S_n(\X_1,\ldots,\X_n;\Y_1,\ldots,\Y_n)$, it is
generally not the case that
$W^{\ast} X W\in\S_n(\Y_n,\ldots,\Y_1;\X_n,\ldots,\X_1)$.
It may not even be the case that $W^{\ast} X W$ is the Choi representation of a
channel, and in the case that $W^{\ast} X W$ is the Choi representation of a
channel, it will generally not be the case that this channel obeys the
constraints necessary for it to be a valid strategy operator.
When combined with the observation that
$\S_n(\X_1,\ldots,\X_n;\Y_1,\ldots,\Y_n)$ and
$\S_n(\Y_n,\ldots,\Y_1;\X_n,\ldots,\X_1)$ are compact and convex sets, this
fact implies, by the separating hyperplane theorem, that the main theorem
cannot possibly hold for all Hermitian operators $H$.
For small values of $n$ and for spaces having small dimensions, simple
examples of operators $H$ for which the main theorem fails may also easily be
obtained through random selections.
In Section~\ref{sec:entanglement-manipulation} we discuss another
interpretation of Theorem~\ref{theorem:main}, which concerns multiple round
entanglement manipulation.
\subsection*{Proof of Theorem~\ref{theorem:main}}
We will now prove Theorem~\ref{theorem:main}.
The first step is to express the strategy represented by $X$ as a sequence of
channels corresponding to invertible isometries (i.e., unitary operators for
which the input and output spaces have different names but necessarily the same
dimension), assuming an auxiliary input space initialized to a pure state
is made available.
Through the repeated application of the Stinespring dilation theorem, together
with the result of \cite{GutoskiW07,ChiribellaDP08,ChiribellaDP09} establishing
that $X=J(\Xi_n)$ is the Choi representation of a channel $\Xi_n$ arising from
a valid $n$-turn strategy, one finds that there must exist complex Euclidean
spaces $\Z_0,\ldots,\Z_n$ satisfying
$\dim(\Z_{k-1}\otimes\X_k) = \dim(\Z_k\otimes\Y_k)$ for all
$k\in\{1,\ldots,n\}$, a unit vector $v \in \Z_0$, and invertible isometries
$U_1,\ldots,U_n$ of the form
\begin{equation}
U_k\in\setft{U}(\Z_{k-1}\otimes \X_k,\Y_k\otimes\Z_k)
\end{equation}
such that
\begin{equation}
\label{eq:Xi_n}
\Xi_n(Z) = \operatorname{Tr}_{\Z_n} \bigl(U (v v^{\ast} \otimes Z) U^{\ast}\bigr)
\end{equation}
for all $Z\in\setft{L}(\X_1\otimes\cdots\otimes\X_n)$, where
\begin{equation}
\begin{multlined}
U = (\I_{\Y_1\otimes\cdots\otimes\Y_{n-1}}\otimes U_n)
\cdots (U_1 \otimes \I_{\X_2\otimes\cdots\otimes\X_n})\\[1mm]
\in\setft{U}(\Z_0\otimes\X_1\otimes\cdots\otimes\X_n,
\Y_1\otimes\cdots\otimes\Y_n\otimes\Z_n).
\end{multlined}
\end{equation}
In words, the strategy represented by the operator $X$ is implemented by
first initializing a register $\reg{Z}_0$ to the pure state $v$, then
applying the invertible isometric channels corresponding to $U_1,\ldots,U_n$,
and finally discarding $\reg{Z}_n$ after the interaction has finished.
(The top picture in Figure~\ref{fig:unitary-Alice} illustrates this for the
case $n=3$.)
\begin{figure}
\caption{An arbitrary strategy may be implemented by initializing a register
$\reg{Z}_0$ to a pure state, applying a sequence of invertible isometric
channels, and discarding the register $\reg{Z}_n$ once the interaction ends.}
\label{fig:unitary-Alice}
\end{figure}
The vector $u$ may be expressed as
\begin{equation}
u = \sum_{\substack{a_1,\ldots,a_n\\b_1,\ldots,b_n}}
u(b_1,\ldots,b_n,a_1,\ldots,a_n)
\ket{b_1}\cdots\ket{b_n}\ket{a_1}\cdots\ket{a_n},
\end{equation}
where the sum is over all standard basis states $\ket{a_1},\ldots,\ket{a_n}$ of
$\X_1,\ldots,\X_n$ and $\ket{b_1},\ldots,\ket{b_n}$ of $\Y_1,\ldots,\Y_n$,
respectively.
Based on this expression, define an operator $A\in\setft{L}(\Z_0,\Z_n)$ as
\begin{equation}
\begin{multlined}
A = \sum_{\substack{a_1,\ldots,a_n\\b_1,\ldots,b_n}}
u(b_1,\ldots,b_n,a_1,\ldots,a_n)
(\bra{b_n}\otimes\I_{\Z_n})U_n(\I_{\Z_{n-1}}\otimes\ket{a_n})
\rule{10mm}{0mm}\\[-4mm]
\cdots
(\bra{b_1}\otimes\I_{\Z_1})U_1(\I_{\Z_{0}}\otimes\ket{a_1}).
\end{multlined}
\end{equation}
By considering the action of the strategy represented by $v$ and
$U_1,\ldots,U_n$, and performing the operator-vector multiplications
required to evaluate the expression $\ip{u u^{\ast}}{X}$
when $X = J(\Xi_n)$ for $\Xi_n$ given by \eqref{eq:Xi_n}, one concludes that
\begin{equation}
\ip{u u^{\ast}}{X} = \norm{A v}^2 = \ip{v v^{\ast}}{A^{\ast} A}.
\end{equation}
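This identity can be sanity-checked numerically. The sketch below (an illustration only; the dimensions are arbitrary, and real-valued data is used to sidestep conjugation conventions) treats the case $n=1$.

```python
import numpy as np

rng = np.random.default_rng(7)
dZ0, dX, dY, dZ1 = 2, 3, 3, 2                     # dim(Z0 (x) X) = dim(Y (x) Z1)
U, _ = np.linalg.qr(rng.standard_normal((6, 6)))  # U in U(Z0 (x) X, Y (x) Z1)
v = rng.standard_normal(dZ0)
v /= np.linalg.norm(v)
u = rng.standard_normal(dY * dX)                  # u in Y (x) X

# Choi operator X = J(Xi_1) for Xi_1(Z) = Tr_{Z1}(U (v v* (x) Z) U*).
Xop = np.zeros((dY * dX, dY * dX))
for a in range(dX):
    for b in range(dX):
        E = np.zeros((dX, dX))
        E[a, b] = 1.0
        rho = U @ np.kron(np.outer(v, v), E) @ U.T
        Xi_E = np.einsum('yzwz->yw', rho.reshape(dY, dZ1, dY, dZ1))
        Xop += np.kron(Xi_E, E)

# A = sum_{a,b} u(b, a) (<b| (x) I_{Z1}) U (I_{Z0} (x) |a>).
U4 = U.reshape(dY, dZ1, dZ0, dX)                  # entries U[(b, z1), (z0, a)]
A = np.einsum('ba,bzca->zc', u.reshape(dY, dX), U4)

lhs = u @ Xop @ u                                 # <u u*, X>
rhs = np.linalg.norm(A @ v) ** 2                  # ||A v||^2
```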
Next we turn to the reversed interaction.
To obtain a strategy operator $Y$ satisfying the requirements of the theorem,
we consider the strategy obtained by initializing the register $\reg{Z}_n$ to
a particular choice of a pure state $w$, which will be selected later,
then applying in sequence the invertible isometric channels corresponding to
the operators $U_n^{{\scriptscriptstyle\mathsf{T}}},\ldots,U_1^{{\scriptscriptstyle\mathsf{T}}}$.
(The bottom picture in Figure~\ref{fig:unitary-Alice} illustrates this for the
case $n=3$.)
That is, for
\begin{equation}
\begin{multlined}
V = (\I_{\X_n\otimes\cdots\otimes\X_{2}}\otimes U_1^{{\scriptscriptstyle\mathsf{T}}})
\cdots (U_n^{{\scriptscriptstyle\mathsf{T}}} \otimes \I_{\Y_{n-1}\otimes\cdots\otimes\Y_1})\\[1mm]
\in\setft{U}(\Z_n\otimes\Y_n\otimes\cdots\otimes\Y_1,
\Z_0\otimes\X_n\otimes\cdots\otimes\X_1),
\end{multlined}
\end{equation}
we consider the channel $\Theta_n\in\setft{C}(\Y_n\otimes\cdots\otimes\Y_1,
\X_n\otimes\cdots\otimes\X_1)$ defined as
\begin{equation}
\Theta_n(Z) = \operatorname{Tr}_{\Z_0}\bigl( V (w w^{\ast} \otimes Z) V^{\ast}\bigr)
\end{equation}
for all $Z\in\setft{L}(\Y_n\otimes\cdots\otimes\Y_1)$.
It is evident from the specification of this channel,
irrespective of the choice of the pure state $w$, that
$Y = J(\Theta_n)\in\S_n(\Y_n,\ldots,\Y_1;\X_n,\ldots,\X_1)$.
By considering the action of this strategy, a similar calculation to the one
above reveals that
\begin{equation}
\ip{Wu u^{\ast}W^{\ast}}{Y} = \norm{A^{{\scriptscriptstyle\mathsf{T}}} w}^2 =
\bigip{w w^{\ast}}{\overline{A}A^{{\scriptscriptstyle\mathsf{T}}}}.
\end{equation}
The nonzero eigenvalues of $A^{\ast} A$ and $\overline{A} A^{{\scriptscriptstyle\mathsf{T}}}$ are equal,
and therefore by choosing $w$ to be an eigenvector corresponding to the largest
eigenvalue of $\overline{A} A^{{\scriptscriptstyle\mathsf{T}}}$ one obtains
\begin{equation}
\label{eq:final-inequality}
\ip{Wu u^{\ast}W^{\ast}}{Y} =
\bigip{w w^{\ast}}{\overline{A}A^{{\scriptscriptstyle\mathsf{T}}}}
\geq \ip{v v^{\ast}}{A^{\ast} A} = \ip{u u^{\ast}}{X}.
\end{equation}
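The two spectral facts used here, namely that the nonzero eigenvalues of $A^{\ast}A$ and $\overline{A}A^{{\scriptscriptstyle\mathsf{T}}}$ agree and that a top eigenvector $w$ of $\overline{A}A^{{\scriptscriptstyle\mathsf{T}}}$ yields a value dominating $\ip{v v^{\ast}}{A^{\ast}A}$ for every unit vector $v$, admit a quick numerical check (the dimensions below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)
dZ0, dZn = 5, 3
A = rng.standard_normal((dZn, dZ0)) + 1j * rng.standard_normal((dZn, dZ0))

S_in = A.conj().T @ A     # A* A, acting on Z_0
S_out = A.conj() @ A.T    # conj(A) A^T, acting on Z_n

e_in = np.sort(np.linalg.eigvalsh(S_in))[::-1]
e_out = np.sort(np.linalg.eigvalsh(S_out))[::-1]
spectra_agree = np.allclose(e_in[:dZn], e_out)    # nonzero spectra coincide

# A top eigenvector w of conj(A) A^T attains the largest eigenvalue, which
# dominates <v v*, A* A> for every unit vector v.
w = np.linalg.eigh(S_out)[1][:, -1]
v = rng.standard_normal(dZ0) + 1j * rng.standard_normal(dZ0)
v /= np.linalg.norm(v)
dominates = (w.conj() @ S_out @ w).real >= (v.conj() @ S_in @ v).real - 1e-12
```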
If it holds that $\dim(\Y_1\otimes\cdots\otimes\Y_n)\leq
\dim(\X_1\otimes\cdots\otimes\X_n)$, then $\dim(\Z_0)\leq\dim(\Z_n)$, which
implies that the inequality in \eqref{eq:final-inequality} may be taken as an
equality for an appropriate choice of a pure state $w$.
This completes the proof.
\section{Application to min- and max-entropy}
In this section we connect the main result proved in the previous section to
the conditional min- and max-entropy functions.
These functions, which were first introduced in \cite{Datta09}, may be defined
as follows.
First, one defines the max- and min-relative entropy of $P$ with respect to
$Q$, for positive semidefinite operators $P$ and $Q$ (acting on the same
space), as follows:
\begin{align}
\op{D}_{\text{max}}(P \,\|\, Q) & = \log\bigl(
\min\{\lambda \geq 0\,:\, P \leq \lambda Q\}\bigr),\\
\op{D}_{\text{min}}(P \,\|\, Q) & = -\log\bigl(\operatorname{F}(P,Q)^2\bigr).
\end{align}
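Assuming $Q$ is positive definite, both quantities can be computed directly: the minimization defining $\op{D}_{\text{max}}$ is attained by the largest eigenvalue of $Q^{-1/2} P Q^{-1/2}$, and $\op{D}_{\text{min}}$ follows from the fidelity. A sketch (base-2 logarithms assumed; the helper functions are illustrative):

```python
import numpy as np

def psd_sqrt(M):
    # Square root of a positive semidefinite matrix via eigendecomposition.
    w, V = np.linalg.eigh(M)
    return (V * np.sqrt(np.clip(w, 0.0, None))) @ V.conj().T

def d_max(P, Q):
    # log of min{lam >= 0 : P <= lam Q}, i.e. the largest eigenvalue of
    # Q^{-1/2} P Q^{-1/2} (Q assumed positive definite).
    Qih = np.linalg.inv(psd_sqrt(Q))
    return np.log2(np.linalg.eigvalsh(Qih @ P @ Qih).max())

def fidelity(P, Q):
    s = psd_sqrt(P)
    return np.trace(psd_sqrt(s @ Q @ s)).real

def d_min(P, Q):
    return -np.log2(fidelity(P, Q) ** 2)

# Commuting example, where closed forms are available.
p = np.array([0.5, 0.3, 0.2])
q = np.array([0.2, 0.3, 0.5])
P, Q = np.diag(p), np.diag(q)
dmax_ok = np.isclose(d_max(P, Q), np.log2((p / q).max()))
dmin_ok = np.isclose(d_min(P, Q), -np.log2(np.sqrt(p * q).sum() ** 2))
```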
Then, with respect to a given state $\rho\in\setft{D}(\X\otimes\Y)$ of a pair of
registers $(\reg{X},\reg{Y})$, one defines
\begin{align}
\op{H}_{\text{min}}(\reg{X} | \reg{Y}) = -\inf_{\sigma\in\setft{D}(\Y)}
\op{D}_{\text{max}}\bigl(\rho\,\big\|\,\I_{\X} \otimes \sigma\bigr),\\
\op{H}_{\text{max}}(\reg{X} | \reg{Y}) = -\inf_{\sigma\in\setft{D}(\Y)}
\op{D}_{\text{min}}\bigl(\rho\,\big\|\,\I_{\X} \otimes \sigma\bigr).
\end{align}
It is known that these two quantities are related in the following way:
with respect to any pure state $u u^{\ast}$ of a triple of registers
$(\reg{X},\reg{Y},\reg{Z})$, one has that
\begin{equation}
\label{eq:relation-min-and-max-entropy}
\op{H}_{\text{min}}(\reg{X} | \reg{Y})
= -\op{H}_{\text{max}}(\reg{X} | \reg{Z}).
\end{equation}
(Indeed, in \cite{KoenigRS09} the conditional max-entropy of a
state of $(\reg{X},\reg{Z})$ is \emph{defined} by the equation
\eqref{eq:relation-min-and-max-entropy}, which does not depend on which
purification of this state is chosen, and is then proved to agree with the
definition stated previously.)
Consider any unit vector $u \in \X \otimes \Y \otimes \Z$, which defines a pure
state $u u^{\ast}$ of a triple of registers $(\reg{X},\reg{Y},\reg{Z})$.
We will consider two optimization problems defined by $u$, the first of which
is as follows:
\begin{equation}
\label{eq:min-entropy-problem}
\begin{aligned}
\text{maximize:} & \quad\bigip{ u u^{\ast} }{ X } \\
\text{subject to:} & \quad X \in \S_2(\Y,\Z;\X,\complex).
\end{aligned}
\end{equation}
This optimization problem is illustrated in Figure~\ref{fig:min-entropy}.
\begin{figure}
\caption{The optimization problem \eqref{eq:min-entropy-problem}.}
\label{fig:min-entropy}
\end{figure}
In this case, the channel $\Phi_2$ takes registers $\reg{Z}$ and $\reg{W}$
as input and outputs nothing (which is equivalent to outputting the unique
state $1 \in \setft{D}(\complex)$ of a one-dimensional system).
That is, $\Phi_2$ must be the trace mapping.
One may therefore simplify this problem, obtaining the following semidefinite
program:
\begin{center}
\begin{minipage}[t]{.42\textwidth}
\centerline{\underline{Primal problem}}
\begin{align*}
\text{maximize:}\quad & \ip{\operatorname{Tr}_{\Z}(u u^{\ast})}{X}\\
\text{subject to:}\quad
& \operatorname{Tr}_{\X}(X) = \I_{\Y},\\
& X \in \setft{Pos}(\X\otimes\Y).
\end{align*}
\end{minipage}
\begin{minipage}[t]{.42\textwidth}
\centerline{\underline{Dual problem}}
\begin{align*}
\text{minimize:}\quad & \operatorname{Tr}(Y)\\
\text{subject to:}\quad
& \I_{\X} \otimes Y \geq \operatorname{Tr}_{\Z}(u u^{\ast}),\\
& Y \in \setft{Herm}(\Y).
\end{align*}
\end{minipage}
\end{center}
By examining the dual problem, one sees that the optimal value of this
semidefinite program is
\begin{equation}
\label{eq:H_min-SDP-optimum}
2^{-\op{H}_{\text{min}}(\reg{X}|\reg{Y})}
\end{equation}
with respect to the state $u u^{\ast}$ of $(\reg{X},\reg{Y},\reg{Z})$.
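Weak duality for this primal-dual pair is straightforward to check numerically with one feasible point on each side; the sketch below (arbitrary dimensions and feasible choices, with no optimization performed) is illustrative only.

```python
import numpy as np

rng = np.random.default_rng(5)
dX, dY, dZ = 2, 2, 3
u = rng.standard_normal(dX * dY * dZ) + 1j * rng.standard_normal(dX * dY * dZ)
u /= np.linalg.norm(u)
uu = np.outer(u, u.conj()).reshape(dX, dY, dZ, dX, dY, dZ)
R = np.einsum('abzcdz->abcd', uu).reshape(dX * dY, dX * dY)  # Tr_Z(u u*)

# Primal-feasible X: sigma (x) I_Y satisfies Tr_X(X) = I_Y for any density sigma.
Xfeas = np.kron(np.eye(dX) / dX, np.eye(dY))
primal_val = np.trace(R @ Xfeas).real

# Dual-feasible choice: lam * I with lam the largest eigenvalue of Tr_Z(u u*),
# so that the operator inequality constraint holds.
lam = np.linalg.eigvalsh(R).max()
dual_val = lam * dY

weak_duality = primal_val <= dual_val + 1e-12
```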
K\"onig, Renner, and Schaffner \cite{KoenigRS09} observed that the optimal
value of the primal problem also coincides with the value represented by the
expression \eqref{eq:H_min-SDP-optimum}, which is consistent with the fact
that strong duality always holds for this semidefinite program (as may be
verified through Slater's theorem, for instance).
The second optimization problem we consider is the time-reversal of the first,
and may be stated as follows:
\begin{equation}
\label{eq:max-entropy-problem}
\begin{aligned}
\text{maximize:} & \quad\bigip{ W u u^{\ast} W^{\ast} }{ Y } \\
\text{subject to:} & \quad Y \in \S_2(\complex,\X;\Z,\Y).
\end{aligned}
\end{equation}
Figure~\ref{fig:max-entropy} illustrates the interaction corresponding to this
optimization problem.
\begin{figure}
\caption{The optimization problem \eqref{eq:max-entropy-problem}.}
\label{fig:max-entropy}
\end{figure}
The inclusion $X\in\S_2(\complex,\X;\Z,\Y)$, for a given operator
$X\in\setft{Pos}(\Z\otimes\Y\otimes\X)$, is equivalent to the condition that
$\operatorname{Tr}_{\Y}(X) = \sigma\otimes\I_{\X}$ for some $\sigma\in\setft{D}(\Z)$.
After re-ordering tensor factors, we obtain the following semidefinite program:
\begin{center}
\begin{minipage}[t]{.42\textwidth}
\centerline{\underline{Primal problem}}
\begin{align*}
\text{maximize:}\quad & \ip{u u^{\ast}}{X}\\
\text{subject to:}\quad
& \operatorname{Tr}_{\Y}(X) = \I_{\X} \otimes \sigma,\\
& X \in \setft{Pos}(\X\otimes\Y\otimes\Z),\\
& \sigma \in \setft{D}(\Z).
\end{align*}
\end{minipage}
\begin{minipage}[t]{.42\textwidth}
\centerline{\underline{Dual problem}}
\begin{align*}
\text{minimize:}\quad & \lambda\\
\text{subject to:}\quad
& Y \otimes \I_{\Y} \geq u u^{\ast},\\
& \lambda \I_{\Z} \geq \operatorname{Tr}_{\X}(Y),\\
& Y \in \setft{Herm}(\X\otimes\Z),\\
& \lambda \in \real.
\end{align*}
\end{minipage}
\end{center}
An examination of the primal problem reveals (through Uhlmann's theorem) that
the optimal value of this semidefinite program is
\begin{equation}
\label{eq:H_max-SDP-optimum}
2^{\op{H}_{\text{max}}(\reg{X}|\reg{Z})}.
\end{equation}
By our main theorem, it follows that the two optimization problems have the
same optimal value, and therefore we obtain an alternative proof that with
respect to every pure state of a triple of registers
$(\reg{X},\reg{Y},\reg{Z})$ one has
\begin{equation}
\label{eq:min-max-entropy-identity}
\op{H}_{\text{min}}(\reg{X} | \reg{Y})
= -\op{H}_{\text{max}}(\reg{X} | \reg{Z}).
\end{equation}
It is natural to ask if the connections among min-entropy, max-entropy, and
optimization problems involving three-message strategies have interesting
implications or generalizations for interactions involving four or more
messages.
As a partial answer to this question, we observe that when our main result is
applied to the four-message interaction depicted in
Figure~\ref{fig:four-message-interaction}, it reveals the identity
\begin{equation}
\label{eq:four-message-identity}
\max_{\Phi\in\setft{C}(\Y,\X)} \operatorname{F}\bigl(
\operatorname{Tr}_{\W}(u u^{\ast}), J(\Phi) \otimes \I_{\Z}\bigr)
= \max_{\Psi\in\setft{C}(\W,\Z)} \operatorname{F}\bigl(
\operatorname{Tr}_{\X}(u u^{\ast}), \I_{\Y} \otimes J(\Psi)\bigr)
\end{equation}
for all vectors $u\in\X\otimes\Y\otimes\Z\otimes\W$.
\begin{figure}
\caption{Maximizing the linear function defined by $u u^{\ast}$ over
four-message strategies.}
\label{fig:four-message-interaction}
\end{figure}
This identity is appealing in its simplicity and symmetry, and by taking
$\W=\complex$ (or $\Y=\complex$) a statement equivalent to
\eqref{eq:min-max-entropy-identity} for all pure states of
$(\reg{X},\reg{Y},\reg{Z})$ is obtained.
We do not know, however, if the quantity represented by either side of the
identity has any direct operational significance.
Other identities may be obtained through a similar methodology, although they
become increasingly complex as the number of messages is increased.
\section{Online pure state entanglement manipulation}
\label{sec:entanglement-manipulation}
The following three statements are equivalent for a given operator
$X\in\setft{L}(\Y\otimes\X)$:
\begin{enumerate}
\item[1.]
$X\in\S_1(\X;\Y)$.
(Equivalently, $X\in\setft{Pos}(\Y\otimes\X)$ and $\operatorname{Tr}_{\Y}(X) = \I_{\X}$.)
\item[2.] $X = (\Phi\otimes\I_{\setft{L}(\X)})(\operatorname{vec}(\I_{\X})\operatorname{vec}(\I_{\X})^{\ast})$
for some channel $\Phi\in\setft{C}(\X,\Y)$.
\item[3.] $X = (\I_{\setft{L}(\Y)}\otimes \Psi)(\operatorname{vec}(\I_{\Y})\operatorname{vec}(\I_{\Y})^{\ast})$
for some completely positive and unital map $\Psi\in\setft{CP}(\Y,\X)$.
\end{enumerate}
(Here and throughout this section, $\operatorname{vec}$ refers to the vectorization
mapping, which is the mapping obtained by extending the transformation
$\ket{a}\bra{b} \mapsto \ket{a}\ket{b}$ for standard basis states to arbitrary
operators by linearity.
In particular, $\operatorname{vec}(\I_{\X})$ is a non-normalized vector proportional to the
canonical maximally entangled pure state corresponding to two identical copies
of a system whose state space is $\X$.)
The maps $\Phi$ and $\Psi$ uniquely determine one another, and it is reasonable
to view these maps as being related by transposition (with respect to the
standard basis): $\Psi = \Phi^{{\scriptscriptstyle\mathsf{T}}}$ and $\Phi = \Psi^{{\scriptscriptstyle\mathsf{T}}}$.
To obtain a Kraus representation for $\Psi$, for instance, one may simply
take a Kraus representation of $\Phi$ and transpose each of the Kraus
operators.
(The transpose of an arbitrary map can be defined in a manner that is
consistent with these statements, but it is sufficient for our needs to focus
on channels and completely positive unital maps.)
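This relationship can be verified directly: for a channel $\Phi$ with Kraus operators $A_1,\ldots,A_r$, the map $\Psi$ with Kraus operators $A_1^{{\scriptscriptstyle\mathsf{T}}},\ldots,A_r^{{\scriptscriptstyle\mathsf{T}}}$ is unital, and the expressions in statements 2 and 3 above produce the same operator $X$. A numerical sketch (arbitrary dimensions; the Stinespring-style construction of a random channel is an assumption of the illustration):

```python
import numpy as np

rng = np.random.default_rng(11)
dX, dY, dE = 2, 3, 2

# Kraus operators of a random channel Phi in C(X, Y), from a random isometry
# V : X -> Y (x) E obtained by QR factorization.
G = rng.standard_normal((dY * dE, dX)) + 1j * rng.standard_normal((dY * dE, dX))
V, _ = np.linalg.qr(G)
kraus = [V.reshape(dY, dE, dX)[:, e, :] for e in range(dE)]

def vec(M):
    # vec(|a><b|) = |a>|b>, i.e. row-major flattening.
    return M.reshape(-1)

# Statement 2: X = (Phi (x) 1)(vec(I_X) vec(I_X)*).
X2 = sum(np.outer(np.kron(A, np.eye(dX)) @ vec(np.eye(dX)),
                  (np.kron(A, np.eye(dX)) @ vec(np.eye(dX))).conj())
         for A in kraus)

# Statement 3: X = (1 (x) Psi)(vec(I_Y) vec(I_Y)*), where Psi has each Kraus
# operator transposed.
X3 = sum(np.outer(np.kron(np.eye(dY), A.T) @ vec(np.eye(dY)),
                  (np.kron(np.eye(dY), A.T) @ vec(np.eye(dY))).conj())
         for A in kraus)

same_choi = np.allclose(X2, X3)
# Psi is unital whenever Phi is trace preserving.
psi_unital = np.allclose(sum(A.T @ A.conj() for A in kraus), np.eye(dX))
```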
A generalization of the equivalence mentioned above to the quantum strategy
framework may also be verified.
For an operator
$X\in\setft{L}(\Y_1\otimes\cdots\otimes\Y_n\otimes\X_1\otimes\cdots\otimes\X_n)$,
these three statements are equivalent:
\begin{enumerate}
\item[1.] $X\in\S_n(\X_1,\ldots,\X_n;\Y_1,\ldots,\Y_n)$.
\item[2.] There exist complex Euclidean spaces $\Z_1,\ldots,\Z_{n-1}$
(and $\Z_0 = \complex$ and $\Z_n = \complex$), along
with channels $\Phi_1, \ldots,\Phi_n$ having the form
\begin{equation}
\Phi_k \in \setft{C}(\Z_{k-1}\otimes\X_k,\Y_k\otimes\Z_k),
\end{equation}
such that the channel
$\Xi_n\in\setft{C}(\X_1\otimes\cdots\otimes\X_n,\Y_1\otimes\cdots\otimes\Y_n)$
defined as
\begin{equation}
\Xi_n = \bigl(\I_{\setft{L}(\Y_1\otimes\cdots\otimes\Y_{n-1})}\otimes\Phi_n\bigr)
\cdots \bigl(\Phi_1 \otimes \I_{\setft{L}(\X_2\otimes\cdots\otimes\X_n)}\bigr)
\end{equation}
satisfies
\begin{equation}
X = \bigl(\Xi_n \otimes \I_{\setft{L}(\X_1\otimes\cdots\otimes\X_n)}\bigr)
(\operatorname{vec}(\I_{\X_1\otimes\cdots\otimes\X_n})
\operatorname{vec}(\I_{\X_1\otimes\cdots\otimes\X_n})^{\ast}).
\end{equation}
\item[3.] There exist complex Euclidean spaces $\Z_1,\ldots,\Z_{n-1}$
(and $\Z_0 = \complex$ and $\Z_n = \complex$), along
with completely positive and unital maps $\Psi_1, \ldots,\Psi_n$ having the
form
\begin{equation}
\Psi_k \in \setft{CP}(\Y_k\otimes\Z_{k},\Z_{k-1}\otimes\X_k),
\end{equation}
such that the unital map
$\Lambda_n\in\setft{CP}(\Y_1\otimes\cdots\otimes\Y_n,\X_1\otimes\cdots\otimes\X_n)$
defined as
\begin{equation}
\Lambda_n =
\bigl(\Psi_1 \otimes \I_{\setft{L}(\X_2\otimes\cdots\otimes\X_n)}\bigr)\cdots
\bigl(\I_{\setft{L}(\Y_1\otimes\cdots\otimes\Y_{n-1})}\otimes\Psi_n\bigr)
\end{equation}
satisfies
\begin{equation}
X = \bigl(\I_{\setft{L}(\Y_1\otimes\cdots\otimes\Y_n)}\otimes\Lambda_n\bigr)
(\operatorname{vec}(\I_{\Y_1\otimes\cdots\otimes\Y_n})
\operatorname{vec}(\I_{\Y_1\otimes\cdots\otimes\Y_n})^{\ast}).
\end{equation}
\end{enumerate}
Through this equivalence, for a given state
$\rho\in\setft{D}(\Y_1\otimes\cdots\otimes\Y_n
\otimes\X_1\otimes\cdots\otimes\X_n)$,
one arrives at an alternative interpretation of the semidefinite program
\begin{equation}
\label{eq:SDP-entanglement-concentration}
\begin{aligned}
\text{maximize:} \quad & \ip{\rho}{X}\\
\text{subject to:} \quad & X\in
\S_n(\X_1,\ldots,\X_n;\Y_1,\ldots,\Y_n)
\end{aligned}
\end{equation}
that concerns an online variant of entanglement manipulation, as is explained
shortly.
The term ``online'' in this context refers to a situation in which a quantum
state must be manipulated over multiple turns, with an output required
immediately after each input system arrives and before the next input system
is made available, in analogy with online algorithms.
By the equivalence of the third statement above to the first, a maximization
over all $X\in\S_n(\X_1,\ldots,\X_n;\Y_1,\ldots,\Y_n)$ is equivalent to a
maximization over all operators
\begin{equation}
\bigl(\I_{\setft{L}(\Y_1\otimes\cdots\otimes\Y_n)}\otimes\Lambda_n\bigr)
(\operatorname{vec}(\I_{\Y_1\otimes\cdots\otimes\Y_n})
\operatorname{vec}(\I_{\Y_1\otimes\cdots\otimes\Y_n})^{\ast})
\end{equation}
for
\begin{equation}
\Lambda_n =
\bigl(\Psi_1 \otimes \I_{\setft{L}(\X_2\otimes\cdots\otimes\X_n)}\bigr)\cdots
\bigl(\I_{\setft{L}(\Y_1\otimes\cdots\otimes\Y_{n-1})}\otimes\Psi_n\bigr)
\end{equation}
and $\Psi_1,\ldots,\Psi_n$ being completely positive and unital maps of
the form
\begin{equation}
\Psi_k \in \setft{CP}(\Y_k\otimes\Z_k,\Z_{k-1}\otimes\X_k).
\end{equation}
The value of the objective function $\ip{\rho}{X}$ may therefore be expressed
as
\begin{equation}
\bigip{
(\I_{\setft{L}(\Y_1\otimes\cdots\otimes\Y_n)}\otimes\Lambda_n^{\ast})(\rho)
}{
\operatorname{vec}(\I_{\Y_1\otimes\cdots\otimes\Y_n})
\operatorname{vec}(\I_{\Y_1\otimes\cdots\otimes\Y_n})^{\ast}
},
\end{equation}
which is $\dim(\Y_1\otimes\cdots\otimes\Y_n)$ times the squared fidelity
between the maximally entangled state
\tau\in\setft{D}(\Y_1\otimes\cdots\otimes\Y_n\otimes
\Y_1\otimes\cdots\otimes\Y_n)$ given by
\begin{equation}
\tau = \frac{\operatorname{vec}(\I_{\Y_1\otimes\cdots\otimes\Y_n})
\operatorname{vec}(\I_{\Y_1\otimes\cdots\otimes\Y_n})^{\ast}}{
\dim(\Y_1\otimes\cdots\otimes\Y_n)}
\end{equation}
and the state obtained by applying the channel $\Lambda_n^{\ast}$ to the
portion of $\rho$ corresponding to the spaces $\X_1,\ldots,\X_n$.
In the case that $n=1$, K\"onig, Renner, and Schaffner \cite{KoenigRS09}
refer to this quantity as the \emph{quantum correlation}.
This situation is illustrated for the case $n=3$ in
Figure~\ref{fig:online-entanglement-manipulation-1}.
\begin{figure}
\caption{The channel $\Lambda_3^{\ast}$ applied to the portion of the state
$\rho$ corresponding to the spaces $\X_1$, $\X_2$, and $\X_3$.}
\label{fig:online-entanglement-manipulation-1}
\end{figure}
\begin{figure}
\caption{A similar process to the one illustrated in
Figure~\ref{fig:online-entanglement-manipulation-1}, but with the ordering of
the channels and of the registers on which they act reversed.}
\label{fig:online-entanglement-manipulation-2}
\end{figure}
By Theorem~\ref{theorem:main}, one finds that when $\rho$ is pure, the same
optimal value is achieved when the ordering of the channels and the registers
on which they act is reversed, as illustrated in
Figure~\ref{fig:online-entanglement-manipulation-2} for the case $n=3$.
That is, when $\rho$ is a pure state, the optimal value of the semidefinite
program \eqref{eq:SDP-entanglement-concentration} represents the value
\begin{equation}
\bigip{(\Xi_n\otimes\I_{\setft{L}(\X_1\otimes\cdots\otimes\X_n)})(\rho)}{
\operatorname{vec}(\I_{\X_1\otimes\cdots\otimes\X_n})
\operatorname{vec}(\I_{\X_1\otimes\cdots\otimes\X_n})^{\ast}},
\end{equation}
maximized over all channels $\Xi_n\in\setft{C}(\Y_1\otimes\cdots\otimes\Y_n,
\X_1\otimes\cdots\otimes\X_n)$ of the form
\begin{equation}
\Xi_n = \bigl(\Phi_1\otimes\I_{\setft{L}(\X_2\otimes\cdots\otimes\X_n)}\bigr)
\cdots
\bigl(\I_{\setft{L}(\Y_1\otimes\cdots\otimes\Y_{n-1})}\otimes \Phi_n\bigr)
\end{equation}
for channels $\Phi_1,\ldots,\Phi_n$ taking the form
\begin{equation}
\Phi_k \in \setft{C}(\Y_k\otimes\Z_k,\Z_{k-1}\otimes\X_k)
\end{equation}
and for $\Z_1,\ldots,\Z_{n-1}$ arbitrary complex Euclidean spaces
(along with $\Z_0 = \complex$ and $\Z_n = \complex$).
\section{Conclusion}
We have identified a time-reversal property for rank-one quantum strategy
functions, explained its connection to conditional min- and max-entropy, and
described an alternative view of this property through an online variant of
pure state entanglement manipulation.
An obvious question arises: are there interesting applications or implications
of this property beyond those we have mentioned?
\subsection*{Acknowledgments}
Yuan Su was supported in part by the Army Research Office (MURI award
W911NF-16-1-0349) and the National Science Foundation (grant 1526380).
John Watrous acknowledges the support of Canada's NSERC.
We thank Fr\'{e}d\'{e}ric Dupuis, James R. Garrison, Brian Swingle, Penghui
Yao, Ronald de Wolf, and M\={a}ris Ozols for helpful discussions, and we thank
the anonymous referees for their comments and suggestions.
\end{document} |
\begin{document}
\markboth{David Loeffler and Sarah Livia Zerbes}
{Iwasawa theory and p-adic L-functions over $\ZZ_p^2$-extensions}
\title{IWASAWA THEORY AND $p$-ADIC $L$-FUNCTIONS OVER $\ZZ_p^2$-EXTENSIONS}
\author{DAVID LOEFFLER}
\thanks{Supported by a Royal Society University Research Fellowship.}
\address{Mathematics Institute\\
Zeeman Building\\
University of Warwick\\
Coventry CV4 7AL, United Kingdom}
\email{[email protected]}
\author{SARAH LIVIA ZERBES}
\thanks{Supported by EPSRC First Grant EP/J018716/1.}
\address{Mathematics Department\\
University College London\\
Gower Street\\
London WC1E 6BT, United Kingdom}
\email{[email protected]}
\begin{abstract}
We construct a two-variable analogue of Perrin-Riou's $p$-adic regulator map for the Iwasawa cohomology of a crystalline representation of the absolute Galois group of $\QQ_p$, over a Galois extension whose Galois group is an abelian $p$-adic Lie group of dimension 2.
We use this regulator map to study $p$-adic representations of global Galois groups over certain abelian extensions of number fields whose localisation at the primes above $p$ is an extension of the above type. In the example of the restriction to an imaginary quadratic field of the representation attached to a modular form, we formulate a conjecture on the existence of a ``zeta element'', whose image under the regulator map is a $p$-adic $L$-function. We show that this conjecture implies the known properties of the 2-variable $p$-adic $L$-functions constructed by Perrin-Riou and Kim.
\end{abstract}
\keywords{Iwasawa theory, $p$-adic regulator, $p$-adic $L$-function}
\subjclass[2010]{Primary 11R23, 11G40; Secondary 11S40, 11F80}
\maketitle
\section{Introduction}
\label{sect:intro}
In the first part of this paper (Sections \ref{sect:yager} and \ref{sect:regulator}), we develop a ``two-variable'' analogue of Perrin-Riou's theory of $p$-adic regulator maps for crystalline representations of $p$-adic Galois groups.
Let us briefly recall Perrin-Riou's cyclotomic theory as developed in \cite{perrinriou95}. Let $p$ be an odd prime, $F$ a finite unramified extension of $\QQ_p$, and $V$ a continuous $p$-adic representation of the absolute Galois group $\mathcal{G}_F$ of $F$, which is crystalline with Hodge--Tate weights $\ge 0$ and with no quotient isomorphic to the trivial representation. Then there is a ``regulator'' or ``big logarithm'' map
\[ \mathcal{L}^\Gamma_{F, V} : H^1_{\Iw}(F(\mu_{p^\infty}), V) \rTo \mathcal{H}_{\QQ_p}(\Gamma) \otimes_{\QQ_p} \mathbb{D}_{\mathrm{cris}}(V)\]
which interpolates the values of the Bloch--Kato dual exponential and logarithm maps for the twists $V(j)$, $j \in \mathbb{Z}$, over each finite subextension $F(\mu_{p^n})$. Here $\mathcal{H}_{\QQ_p}(\Gamma)$ is the algebra of $\QQ_p$-valued distributions on the group $\Gamma = \Gal(F(\mu_{p^\infty}) / F) \cong \ZZ_p^\times$, and the Iwasawa cohomology $H^1_{\Iw}(F(\mu_{p^\infty}), V)$ is defined as $\QQ_p \otimes_{\ZZ_p} \varprojlim_n H^1(F(\mu_{p^n}), T)$ where $T$ is any $\mathcal{G}_F$-stable $\ZZ_p$-lattice in $V$. This map plays a crucial role in cyclotomic Iwasawa theory for $p$-adic representations of the Galois groups of number fields, as a bridge between cohomological objects and $p$-adic $L$-functions.
It is natural to ask whether or not the construction of the maps $\mathcal{L}^\Gamma_{F, V}$ may be extended to consider twists of $V$ by more general characters of $\mathcal{G}_{F}$. In this paper, we give a complete answer to this question for characters factoring through an extension $K_\infty / F$ which is abelian over $\QQ_p$ (thus for all characters if $F = \QQ_p$). Any such character factors through the Galois group $G$ of an extension of the form $K_\infty = F_\infty(\mu_{p^\infty})$, where $F_\infty$ is an unramified extension of $F$ which is a finite extension of the unique unramified $\ZZ_p$-extension of $F$. Denote by $\widehat{F}_\infty$ the $p$-adic completion of $F_\infty$, and $\mathcal{H}_{\widehat{F}_\infty}(G)$ the algebra of $\widehat{F}_\infty$-valued distributions on $G$.
\begin{theorem}
For any crystalline representation $V$ of $\mathcal{G}_{F}$ with non-negative Hodge--Tate weights, there exists a regulator map
\[ \mathcal{L}^{G}_{V} : H^1_{\Iw}(K_\infty, V) \rTo \mathcal{H}_{\widehat{F}_\infty}(G) \otimes_{\QQ_p} \mathbb{D}_{\mathrm{cris}}(V)\]
interpolating the maps $\mathcal{L}^\Gamma_{K, V}$ for all unramified extensions $K/F$ contained in $F_\infty$.
\end{theorem}
See Theorem \ref{thm:localregulator} for a precise statement of the result. In contrast with the cyclotomic case, this result holds whether or not $V$ has a quotient isomorphic to the trivial representation.
In Sections \ref{sect:semilocalregulator} and \ref{sect:defmoduleofLfunctions}, we use the $2$-variable $p$-adic regulator to study global Galois representations. Let $K$ be a finite extension of $\mathbb{Q}$, let $\mathfrak{p}$ be a prime of $K$ above $p$ which is unramified, and let $K_\infty$ be a $p$-adic Lie extension of $K$ such that for any prime $\mathfrak{P}$ of $K_\infty$ above $\mathfrak{p}$, the local extension $K_{\infty, \mathfrak{P}} / K_{\mathfrak{p}}$ is of the type considered above. Let $G=\Gal(K_\infty\slash K)$. In Section \ref{sect:semilocalregulator}, we extend the regulator map to a map
\[ \mathcal{L}^G_{\mathfrak{p}, V} : Z^1_{\Iw, \mathfrak{p}}(K_\infty, V) \rTo \mathcal{H}_{\widehat{F}_\infty}(G) \otimes_{\QQ_p} \mathbb{D}_{\mathrm{cris}}(K_{\mathfrak{p}}, V) \]
where $Z^1_{\Iw, \mathfrak{p}}(K_\infty, V)$ is the direct sum of the Iwasawa cohomology groups at each of the primes $\mathfrak{q} \mid \mathfrak{p}$, and $\mathbb{D}_{\mathrm{cris}}(K_{\mathfrak{p}}, V)$ is the Fontaine $\mathbb{D}_{\mathrm{cris}}$ functor for $V$ regarded as a representation of a decomposition group at $\mathfrak{p}$. There is a natural localisation map
\[ H^1_{\Iw, S}(K_\infty, V) \to \bigoplus_{\mathfrak{p} \mid p} Z^1_{\Iw, \mathfrak{p}}(K_\infty, V)\]
where $H^1_{\Iw, S}(K_\infty,V)$ denotes the inverse limit of the global cohomology groups unramified outside a fixed set of primes $S$. As in the case of Perrin-Riou's cyclotomic regulator map, our maps $\mathcal{L}^G_{\mathfrak{p}, V}$ allow elements of Iwasawa cohomology (or, more generally, of its exterior powers) to be interpreted as $\mathbb{D}_{\mathrm{cris}}$-valued distributions on $G$ (after extending scalars). Assuming a plausible conjecture analogous to Leopoldt's conjecture, we use these maps to define a certain submodule $\mathbb{I}_{\arith}(V)$ of the distributions on $G$ with values in an exterior power of $\mathbb{D}_{\mathrm{cris}}$. Following Perrin-Riou \cite{perrinriou95}, we call $\mathbb{I}_{\arith}(V)$ the \emph{module of $2$-variable $L$-functions}. We conjecture that there exist special elements of the top exterior power of $H^1_{\Iw,S}(K_\infty, V)$ (``zeta elements'') whose images under the regulator map are $p$-adic $L$-functions, and that these should generate $\mathbb{I}_{\arith}(V)$ as a module over the Iwasawa algebra $\Lambda_{\QQ_p}(G)$.
In Section \ref{sect:imquad}, we investigate in detail two instances of this conjecture that occur when the field $K$ is imaginary quadratic. We first show that for the representation $\ZZ_p(1)$, our regulator map coincides with the map constructed in \cite{yager82}. In \emph{op.~cit.}, Yager shows that his map sends the Euler system of elliptic units to Katz's $p$-adic $L$-function. As a second example, we study the representation attached to a weight 2 cusp form for $\GL_2 / K$: here we predict the existence of multiple distributions, depending on a choice of Frobenius eigenvalue at each prime above $p$ (Conjecture \ref{conj:twovarmf}), and we show that our conjectures imply the known properties of the 2-variable $p$-adic $L$-functions constructed by Perrin-Riou \cite{perrinriou88} (for $f$ ordinary) and by B.D.~Kim \cite{kim-preprint} (for $f$ non-ordinary). However, our conjectures also predict the existence of some new $p$-adic $L$-functions. (The existence of these $p$-adic $L$-functions is verified in a forthcoming paper \cite{loeffler13} of the first author.)
In our paper \cite{leiloefflerzerbes12} (joint with Antonio Lei), we use the $2$-variable $p$-adic regulator to study the critical slope $p$-adic $L$-functions of an ordinary CM modular form. In this case, there are two candidates for the $p$-adic $L$-function, one arising from Kato's Euler system and a second from $p$-adic modular symbols. The latter has been studied by Bella\"\i che \cite{bellaiche11}, who has proved a formula (Theorem 2 of \emph{op.cit.}) relating it to the Katz $L$-function for the CM field. We use the methods of the present paper to prove a corresponding formula for the $L$-function arising from Kato's construction, implying that the two $p$-adic $L$-functions in fact coincide.
\section{Setup and notation}
\subsection{Fields and their extensions}
\label{sect:fieldsandtheirextensions}
Let $p$ be an odd prime, and denote by $\mu_{p^\infty}$ the set of $p$-power roots of unity. Let $K$ be a finite extension of either $\mathbb{Q}$ or $\QQ_p$. Define the Galois groups $\mathcal{G}_K=\Gal(\overline{K}\slash K)$ and $H_K=\Gal(\overline{K}\slash K(\mu_{p^\infty}))$. A \emph{$p$-adic Lie extension} of $K$ is a Galois extension $K_\infty/K$ such that $\Gal(K_\infty / K)$ is a compact $p$-adic Lie group of finite dimension.
We write $\Gamma$ for the Galois group $\Gal(\mathbb{Q}(\mu_{p^\infty}) / \mathbb{Q})\cong \Gal(\QQ_p(\mu_{p^\infty})\slash \QQ_p)$, which we identify with $\ZZ_p^\times$ via the cyclotomic character $\chi$. Then $\Gamma \cong \Delta \times \Gamma_1$, where $\Delta$ is cyclic of order $p-1$ and $\Gamma_1 = \Gal(\QQ_p(\mu_{p^\infty}) / \QQ_p(\mu_p)) \cong \ZZ_p$, so in particular $\mathbb{Q}(\mu_{p^\infty})$ (resp.\ $\QQ_p(\mu_{p^\infty})$) is a $p$-adic Lie extension of $\mathbb{Q}$ (resp.\ $\QQ_p$) of dimension $1$.
\subsection{Iwasawa algebras and power series}
\label{sect:iwasawaalgs}
Let $G$ be a compact $p$-adic Lie group, and $L$ a complete discretely valued extension of $\QQ_p$ with ring of integers $\mathcal{O}_L$. We let $\Lambda_{\mathcal{O}_L}(G)$ be the Iwasawa algebra $\varprojlim_U \mathcal{O}_L[G /U]$, where the limit is taken over open subgroups $U \subseteq G$. We shall always equip this with the inverse limit topology (sometimes called the ``weak topology'') for which it is a Noetherian topological $\mathcal{O}_L$-algebra (cf.~\cite[Theorem 6.2.8]{emerton04}). If $L / \QQ_p$ is a finite extension then $\Lambda_{\mathcal{O}_L}(G)$ is compact (but not otherwise).
We let $\Lambda_L(G) = L \otimes_{\mathcal{O}_L} \Lambda_{\mathcal{O}_L}(G)$, which is also Noetherian; it is isomorphic to the continuous dual of the space $C(G, L)$ of continuous $L$-valued functions on $G$. (See \cite[Corollary 2.2]{schneiderteitelbaum02} for a proof of the last statement when $L / \QQ_p$ is a finite extension; this extends immediately to general discretely-valued $L$, since $\Lambda_{L}(G) = L \mathbin{\hat\otimes}_{\QQ_p} \Lambda_{\QQ_p}(G)$ and similarly for $C(G, L)$.)
Let $\mathcal{H}_L(G)$ be the space of $L$-valued locally analytic distributions on $G$ (the continuous dual of the space $C^{\mathrm{la}}(G, L)$ of $L$-valued locally analytic functions on $G$). There is an injective algebra homomorphism $\Lambda_L(G) \hookrightarrow \mathcal{H}_L(G)$ (see \cite[Proposition 2.2.7]{emerton04}), dual to the inclusion of $C^{\mathrm{la}}(G, L)$ as a dense subspace of $C(G, L)$. We endow $\mathcal{H}_L(G)$ with its natural topology as an inverse limit of Banach spaces, with respect to which the map $\Lambda_L(G) \hookrightarrow \mathcal{H}_L(G)$ is continuous.
We shall mostly be concerned with the case when $G$ is abelian, in which case $G$ has the form $H \times \ZZ_p^d$ for $H$ a finite abelian group. In this case $\Lambda_{\mathcal{O}_L}(G)$ is isomorphic to the power series ring $\mathcal{O}_L[H][[X_1, \dots, X_d]]$, where $X_i = \gamma_i - 1$ for generators $\gamma_1, \dots, \gamma_d$ of the $\ZZ_p^d$ factor (see \cite[\S 8.4.1]{nekovar06}). The weak topology on $\Lambda_{\mathcal{O}_L}(G)$ is the $I$-adic topology, where $I$ is the ideal $(p, X_1, \dots, X_d)$. Meanwhile, $\mathcal{H}_L(G)$ identifies with the algebra of $L[H]$-valued power series in $X_1, \dots, X_d$ converging on the rigid-analytic unit ball $|X_i| < 1$, with the topology given by uniform convergence on the closed balls $|X_i| \le r$ for all $r < 1$.
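For instance, when $G = \ZZ_p$, this identification is the classical Amice transform: a distribution $\mu$ corresponds to the power series
\[ A_\mu(X) = \int_{\ZZ_p} (1+X)^x \, \mu(x), \]
so that the Dirac distribution at $a \in \ZZ_p$ (that is, the group element $a \in G$ viewed inside $\Lambda_{\mathcal{O}_L}(G)$) corresponds to $(1+X)^a$.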
In particular, for the group $\Gamma \cong \ZZ_p^\times$ as in Section \ref{sect:fieldsandtheirextensions}, we may identify $\mathcal{H}_L(\Gamma)$ with the space of formal power series
\[ \{f \in L[\Delta][[X]]:\text{$f$ converges everywhere on the open unit $p$-adic disc}\},\]
where $X$ corresponds to $\gamma - 1$ for $\gamma$ a topological generator of $\Gamma_1$; and $\Lambda_L(\Gamma)$ corresponds to the subring of $\mathcal{H}_L(\Gamma)$ consisting of power series with bounded coefficients. Similarly, we define $\mathcal{H}_L(\Gamma_1)$ as the subring of $\mathcal{H}_L(\Gamma)$ given by power series over $L$, rather than $L[\Delta]$.
For each $i \in \mathbb{Z}$, we define an element $\ell_i \in \mathcal{H}_{\QQ_p}(\Gamma_1)$ by
\[ \ell_i = \frac{\log \gamma}{\log \chi(\gamma)} - i\]
for any non-identity element $\gamma \in \Gamma_1$ (cf.~\cite[\S II.1]{berger03}); note that this differs by a sign from the element denoted by the same symbol in \cite{perrinriou94}.
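The element $\ell_i$ does not depend on the choice of $\gamma$: any other non-identity element of $\Gamma_1$ has the form $\gamma^a$ with $a \in \ZZ_p \setminus \{0\}$, and
\[ \frac{\log \gamma^a}{\log \chi(\gamma^a)} = \frac{a \log \gamma}{a \log \chi(\gamma)} = \frac{\log \gamma}{\log \chi(\gamma)}. \]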
\subsection{Fontaine rings}
\label{sect:Fontainerings}
We review the definitions of some of Fontaine's rings that we use in this paper. Details can be found in \cite{berger04} or \cite{leiloefflerzerbes11}. Let $K$ be a finite extension of $\mathbb{Q}_p$; the rings we shall require are those denoted by $\mathbb{A}_{K}$, $\mathbb{A}^+_K$, $\mathbb{B}_K$, $\mathbb{B}^+_K$, and $\mathbb{B}^+_{\rig, K}$.
These rings have intrinsic definitions independent of any choices and valid for any $K$; but we shall be interested in the case when $K$ is unramified over $\mathbb{Q}_p$. In this case, they have concrete (but slightly noncanonical) descriptions as follows. A choice of compatible system $(\zeta_n)_{n \ge 0}$ of $p$-power roots of unity defines an element $\pi \in \mathbb{A}_K^+$, and allows us to identify $\mathbb{A}_K^+$ with the formal power series ring $\mathcal{O}_K[[\pi]]$. The ring $\mathbb{A}_K$ is the $p$-adic completion $\widehat{\mathbb{A}^+_K[1/\pi]}$ of $\mathbb{A}^+_K[1/\pi]$. The ring $\mathbb{B}^+_K$ is defined as $\mathbb{A}^+_K[1/p]$, and similarly $\mathbb{B}_K = \mathbb{A}_K[1/p]$. Finally, we let $\mathbb{B}^+_{\rig, K}$ be the ring of power series $f \in K[[\pi]]$ which converge on the open unit disc $|\pi| < 1$.
All these rings are endowed with an $\mathcal{O}_K$-linear action of $\Gamma$ by $\gamma(\pi)=(\pi+1)^{\chi(\gamma)}-1$, and with a Frobenius $\varphi$ which acts as the usual arithmetic Frobenius on $\mathcal{O}_K$ and on $\pi$ by $\varphi(\pi)=(\pi+1)^p-1$. There is also a left inverse $\psi$ of $\varphi$ on all of the above rings, satisfying
\[
\varphi\circ\psi(f(\pi))=\frac{1}{p}\sum_{\zeta^p=1}f(\zeta(1+\pi)-1).
\]
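As a quick illustration, note that $\psi(1+\pi) = 0$: taking $f(\pi) = 1+\pi$ in the formula above gives
\[ \varphi \circ \psi(1+\pi) = \frac{1}{p} \sum_{\zeta^p = 1} \zeta(1+\pi) = \frac{1+\pi}{p} \sum_{\zeta^p = 1} \zeta = 0, \]
and $\varphi$ is injective. The same calculation shows that $\psi\big((1+\pi)^a\big) = 0$ for any $a \in \ZZ_p^\times$, since $\sum_{\zeta^p = 1} \zeta^a = 0$ when $a$ is a unit.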
Write $t=\log(1+\pi) \in \BB^+_{\rig,\QQ_p}$, and $q=\varphi(\pi)/\pi\in\mathbb{A}_{\QQ_p}^+$. A formal power series calculation shows that $g(t) = \chi(g) t$ for $g \in \Gamma$, and $\varphi(t) = pt$.
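Explicitly, for $g \in \Gamma$ we have
\[ g(t) = \log\big(1 + g(\pi)\big) = \log\big((1+\pi)^{\chi(g)}\big) = \chi(g) \log(1+\pi) = \chi(g)\, t, \]
and similarly $\varphi(t) = \log\big((1+\pi)^p\big) = p\, t$.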
The action of $\Gamma$ on $\mathbb{A}^+_{K}$ gives an isomorphism of $\Lambda_{\mathcal{O}_K}(\Gamma)$ with the submodule $(\mathbb{A}^+_{K})^{\psi=0}$, the so-called ``Mellin transform''
\begin{align*}
\mathfrak{M}: \Lambda_{\mathcal{O}_K}(\Gamma) & \rightarrow (\mathbb{A}^+_{K})^{\psi=0} \\
f(\gamma-1) & \mapsto f(\gamma-1) \cdot (\pi+1).
\end{align*}
This extends to bijections $\Lambda_K(\Gamma) \cong (\mathbb{B}^+_{K})^{\psi=0}$ and $\mathcal{H}_K(\Gamma) \cong (\mathbb{B}^+_{\rig, K})^{\psi=0}$. (See \cite[\S 1.3]{perrinriou90}, \cite[Proposition 1.2.7]{perrinriou94}, or \cite[\S 1.C.2]{leiloefflerzerbes11} for more details.)
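For example, $\mathfrak{M}(1) = 1+\pi$, and more generally $\mathfrak{M}(g) = g \cdot (1+\pi) = (1+\pi)^{\chi(g)}$ for $g \in \Gamma$. These elements do indeed lie in $(\mathbb{A}^+_{K})^{\psi=0}$, since $\psi\big((1+\pi)^a\big) = 0$ for every $a \in \ZZ_p^\times$.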
\subsection{Crystalline and de Rham representations}
\label{sect:crystallinereps}
Let $K$ be a finite extension of $\QQ_p$, and $V$ a continuous representation of $\mathcal{G}_K$ on a $\QQ_p$-vector space of dimension $d$. Recall that $\mathbb{D}_{\dR}(V)$ denotes the space $(V \otimes_{\QQ_p} \mathbb{B}_{\dR})^{\mathcal{G}_K}$, where $\mathbb{B}_{\dR}$ is Fontaine's ring of periods. This space $\mathbb{D}_{\dR}(V)$ is a filtered $K$-vector space of dimension $\le d$, and we say $V$ is \emph{de Rham} if equality holds. If $j \in \mathbb{Z}$, $\Fil^j \mathbb{D}_{\dR}(V)$ denotes the $j$-th step in the Hodge filtration of $\mathbb{D}_{\dR}(V)$.
If $L$ is a finite extension of $K$, we shall sometimes write $\mathbb{D}_{\dR}(L, V)$ for $\mathbb{D}_{\dR}(V|_{\mathcal{G}_L})$, which can be canonically identified with $L \otimes_K \mathbb{D}_{\dR}(V)$.
We also consider the crystalline period ring $\BB_{\cris} \subset \mathbb{B}_{\dR}$, and define similarly $\mathbb{D}_{\mathrm{cris}}(V) = (V \otimes_{\QQ_p} \BB_{\cris})^{\mathcal{G}_K}$. This is a $K_0$-vector space of dimension $\le d$, where $K_0$ is the maximal unramified subfield of $K$, endowed with a semilinear Frobenius (acting as the usual arithmetic Frobenius on $K_0$). We say $V$ is \emph{crystalline} if $\dim_{K_0} \mathbb{D}_{\mathrm{cris}}(V) = d$, in which case $V$ is automatically de Rham, and there is a canonical isomorphism of $K$-vector spaces $\mathbb{D}_{\dR}(V) \cong K \otimes_{K_0} \mathbb{D}_{\mathrm{cris}}(V)$. As above, we will write $\mathbb{D}_{\mathrm{cris}}(L, V)$ for $\mathbb{D}_{\mathrm{cris}}(V|_{\mathcal{G}_L})$, where $L$ is a finite extension of $K$; if $V$ is crystalline over $K$ this is isomorphic to $L_0 \otimes_{K_0} \mathbb{D}_{\mathrm{cris}}(V)$.
For an integer $j$, $V(j)$ denotes the $j$-th Tate twist of $V$, i.e. $V(j)=V\otimes_{\ZZ_p} (\varprojlim_n \mu_{p^n})^{\otimes j}$. If $\zeta = (\zeta_n)_{n \ge 0}$ is a choice of a compatible system of $p$-power roots of unity, this defines a basis vector $e_j$ of $\QQ_p(j)$ and an element $t^{-j} \in \mathbb{B}_{\dR}$; these each depend on $\zeta$, but the element $t^{-j} e_j \in \mathbb{D}_{\dR}(\QQ_p(j))$ does not, and tensoring with $t^{-j}e_j$ thus gives a canonical isomorphism $\mathbb{D}_{\dR}(\QQ_p(j)) \cong \QQ_p$ for each $j$.
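To see that $t^{-j} e_j$ is independent of the choice, note that replacing $\zeta$ by $\zeta^u$ for $u \in \ZZ_p^\times$ multiplies $e_1$, and hence $t$, by $u$; thus $e_j$ is multiplied by $u^j$ and $t^{-j}$ by $u^{-j}$, so the product $t^{-j} e_j$ is unchanged.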
We write
\[ \exp_{K,V} : \frac{\mathbb{D}_{\dR}(V)}{\Fil^0 \mathbb{D}_{\dR}(V) + \mathbb{D}_{\mathrm{cris}}(V)^{\varphi=1}} \rInto H^1(K,V)\]
for the \emph{Bloch--Kato exponential} of $V$ over $K$ (cf.~\cite{blochkato90}), which is the boundary map in the cohomology of the ``fundamental exact sequence''
\[ 0 \rTo V \rTo V \otimes_{\QQ_p} \BB_{\cris}^{\varphi = 1} \rTo V \otimes_{\QQ_p} \left( \mathfrak{r}ac{\mathbb{B}_{\dR}}{\mathbb{B}_{\dR}^+}\right) \rTo 0. \]
The image of this map is denoted $H^1_e(K,V)$, and we denote its inverse by
\[ \log_{K, V} : H^1_e(K, V) \rTo^\cong \frac{\mathbb{D}_{\dR}(V)}{\Fil^0 \mathbb{D}_{\dR}(V) + \mathbb{D}_{\mathrm{cris}}(V)^{\varphi=1}}.\]
We also denote by
\[ \exp_{K,V}^*: H^1(K,V^*(1)) \rightarrow \Fil^0\mathbb{D}_{\dR}(V^*(1))\]
the \emph{dual exponential} map, which is the dual of $\exp_{K, V}$ with respect to the Tate duality pairing (cf.~\cite[\S II.1.4]{kato93}); it satisfies the identity
\[ \langle \exp_{K, V}(a), b \rangle_{\mathrm{Tate}} = \langle a, \exp^*_{K, V}(b)\rangle_{\dR, K}\]
for all $a \in \mathbb{D}_{\dR}(V)$ and $b \in H^1(K, V^*(1))$, where $\langle -, - \rangle_{\mathrm{Tate}}$ is the Tate pairing and $\langle-, -\rangle_{\dR, K}$ is the pairing
\[ \mathbb{D}_{\dR}(V) \otimes \mathbb{D}_{\dR}(V^*(1)) \rTo \mathbb{D}_{\dR}(\QQ_p(1)) \cong K \rTo^{\operatorname{trace}} \QQ_p.\]
Finally, if $L$ is a number field, $V$ is a $p$-adic representation of $\mathcal{G}_L$ and $\mathfrak{p}$ is a prime of $L$ above $p$, we write $\mathbb{D}_{\dR}(L_\mathfrak{p}, V)$ and $\mathbb{D}_{\mathrm{cris}}(L_\mathfrak{p}, V)$ for the Fontaine spaces attached to $V$ regarded as a representation of $\Gal(\overline{L}_{\mathfrak{P}} / L_\mathfrak{p})$ for any choice of prime $\mathfrak{P} \mid \mathfrak{p}$ of $\overline{L}$; up to a canonical isomorphism these spaces are independent of the choice of $\mathfrak{P}$.
\subsection{$(\varphi,\Gamma)$-modules and Wach modules}
\label{sect:phigammawach}
Let $K$ be a finite extension of $\QQ_p$, and let $T$ be a $\ZZ_p$-representation of $\mathcal{G}_K$ (that is, a finite-rank free module over $\ZZ_p$ with a continuous action of $\mathcal{G}_K$). Denote the $(\varphi,\Gamma)$-module of $T$ by $\mathbb{D}_K(T)$. This is a module over Fontaine's ring $\mathbb{A}_K$.
If $K$ is unramified over $\QQ_p$ and $T$ is a $\ZZ_p$-representation of $\mathcal{G}_K$ which is crystalline (i.e.~such that $V = T[1/p]$ is crystalline), Wach and Berger have shown that there exists a canonical $\mathbb{A}^+_K$-submodule $\mathbb{N}_K(T) \subset \mathbb{D}_K(T)$, the \emph{Wach module} (see \cite{wach96}, \cite{berger04}); this is the unique submodule such that
\begin{itemize}
\item $\mathbb{N}_K(T)$ is free of rank $d$ over $\mathbb{A}^+_K$,
\item the action of $\Gamma$ preserves $\mathbb{N}_K(T)$ and is trivial on $\mathbb{N}_K(T) / \pi \mathbb{N}_K(T)$,
\item there exists $b \in \mathbb{Z}$ such that $\varphi(\pi^b \mathbb{N}_K(T)) \subseteq \pi^b \mathbb{N}_K(T)$ and the quotient $\pi^b \mathbb{N}_K(T) / \varphi^*(\pi^b \mathbb{N}_K(T))$ is killed by a power of $q = \varphi(\pi)/\pi$.
\end{itemize}
Here $\varphi^*(\pi^b \mathbb{N}_K(T))$ denotes the $\mathbb{A}^+_K$-submodule of $\mathbb{D}_K(T)$ generated by $\varphi(\pi^b \mathbb{N}_K(T))$.
The following lemma is immediate from the definition of the functors $\mathbb{D}_K(-)$ and $\mathbb{N}_K(-)$:
\begin{lemma}
\label{lemma:unramifiedphiGammabasechange}
Assume that $T$ is a $\ZZ_p$-representation of $\mathcal{G}_{K}$, and $L$ a finite extension of $K$, with $L$ and $K$ both unramified over $\QQ_p$. There is a canonical isomorphism of $(\varphi, \Gamma)$-modules
\[ \mathbb{D}_L(T) \cong \mathbb{D}_K(T)\otimes_{\mathcal{O}_K}\mathcal{O}_L,\]
where $\varphi$ acts on $\mathcal{O}_L$ via the arithmetic Frobenius $\sigma_p \in \Gal(L/\QQ_p)$. If $V = T[1/p]$ is crystalline, then this isomorphism restricts to an isomorphism
\[ \mathbb{N}_L(T)\cong \mathbb{N}_K(T)\otimes_{\mathcal{O}_K}\mathcal{O}_L.\]
\end{lemma}
\subsection{Iwasawa cohomology and the Perrin-Riou pairing}
\label{sect:iwasawacoho}
Let $K$ be a finite extension of $\mathbb{Q}_\ell$ for some prime $\ell$ (which may or may not equal $p$) and let $T$ be a $\ZZ_p$-representation of $\mathcal{G}_K$. Let $K_\infty$ be a $p$-adic Lie extension of $K$, and write $G = \Gal(K_\infty / K)$.
\begin{definition}
\label{def:Iwasawacohomology}
We define
\[ H^i_{\Iw}(K_\infty, T) := \varprojlim H^i(L, T),\]
where $L$ varies over the finite extensions of $K$ contained in $K_\infty$, and the inverse limit is taken with respect to the corestriction maps.
If $V = \QQ_p \otimes_{\ZZ_p} T$, we write
\[ H^i_{\Iw}(K_\infty, V) := \QQ_p \otimes_{\ZZ_p} H^i_{\Iw}(K_\infty,T)\]
(which is independent of the choice of $\ZZ_p$-lattice $T \subset V$).
\end{definition}
It is clear that the groups $H^i_{\Iw}(K_\infty, T)$ are $\Lambda_{\ZZ_p}(G)$-modules; we show in \S \ref{appendix:iwacoho} below that they are finitely generated.
There is a natural extension of the Tate pairing to this setting. We may clearly choose an increasing sequence $\{K_n\}$ of finite extensions of $K$ with $\bigcup_n K_n = K_\infty$ and each $K_n$ Galois over $K$. If $\langle -, - \rangle_{K_n}$ denotes the Tate pairing $H^1(K_n, T) \times H^1(K_n, T^*(1)) \to \ZZ_p$, and $x = (x_n)$ and $y = (y_n)$ are sequences in $H^1_{\Iw}(K_\infty, T)$ and $H^1_{\Iw}(K_\infty, T^*(1))$, then the sequence whose $n$-th term is
\begin{equation}
\label{eq:PRpairing}
\sum_{\sigma \in \Gal(K_n / K)} \langle x_n, \sigma(y_n) \rangle_{K_n} [\sigma] \in \ZZ_p[\Gal(K_n / K)]
\end{equation}
is compatible under the natural projection maps, and hence defines an element of $\Lambda_{\ZZ_p}(G)$.
\begin{definition}
We define the \emph{Perrin-Riou pairing} to be the pairing
\[ \langle - , - \rangle_{K_\infty, T} : H^1_{\Iw}(K_\infty, T) \times H^1_{\Iw}(K_\infty, T^*(1)) \to \Lambda_{\ZZ_p}(G)\]
defined by the inverse limit of the pairings \eqref{eq:PRpairing}.
\end{definition}
It is easy to see that for $\alpha, \beta \in G$ we have
\[ \langle \alpha x , \beta y \rangle_{K_\infty, T} = \alpha \cdot \langle x , y \rangle_{K_\infty, T} \cdot \beta^{-1}.\]
(The above construction is valid for any $p$-adic Lie extension $K_\infty / K$, but in this paper we shall only use the above construction when $G$ is abelian, in which case the distinction between left and right multiplication is not significant.)
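To check the displayed identity, consider for instance $\alpha = \tau \in G$ and $\beta = 1$. The Tate pairing is equivariant for $\Gal(K_n / K)$, so $\langle \tau x_n, \sigma(y_n) \rangle_{K_n} = \langle x_n, \tau^{-1}\sigma(y_n) \rangle_{K_n}$, and substituting $\sigma' = \tau^{-1}\sigma$ in \eqref{eq:PRpairing} gives
\[ \sum_{\sigma \in \Gal(K_n / K)} \langle \tau x_n, \sigma(y_n) \rangle_{K_n} [\sigma] = \sum_{\sigma' \in \Gal(K_n / K)} \langle x_n, \sigma'(y_n) \rangle_{K_n} [\tau \sigma'] = \tau \cdot \sum_{\sigma' \in \Gal(K_n / K)} \langle x_n, \sigma'(y_n) \rangle_{K_n} [\sigma']. \]
The case of general $\beta$ is handled in the same way.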
\begin{lemma}
\label{lemma:PRpairingtwist}
If $\eta$ is any continuous $\ZZ_p$-valued character of $G$, and we identify $H^1_{\Iw}(K_\infty, T(\eta))$ with $H^1_{\Iw}(K_\infty, T)(\eta)$, then we have
\[ \langle x, y \rangle_{K_\infty, T(\eta)} = \Tw_{\eta^{-1}} \langle x, y \rangle_{K_\infty, T},\]
where $\Tw_{\eta}$ is the map $\Lambda_{\ZZ_p}(G) \to \Lambda_{\ZZ_p}(G)$ mapping $g \in G$ to $\eta(g) g$.
\end{lemma}
\begin{proof}
This is immediate if $\eta$ has finite order, and follows for all $\eta$ by reduction modulo powers of $p$; cf.~\cite[\S 3.6.1]{perrinriou94}.
\end{proof}
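For example, in the cyclotomic setting, if $G = \Gamma_1$ with topological generator $\gamma$ and $\eta = \chi^j$ for some $j \in \mathbb{Z}$, then $\Tw_\eta$ sends $\gamma \mapsto \chi(\gamma)^j \gamma$; under the identification $\Lambda_{\ZZ_p}(\Gamma_1) \cong \ZZ_p[[X]]$ with $X = \gamma - 1$, this is the substitution $X \mapsto \chi(\gamma)^j (1+X) - 1$.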
If $V = T[1/p]$, we obtain by extending scalars a pairing
\[ H^1_{\Iw}(K_\infty, V) \times H^1_{\Iw}(K_\infty, V^*(1)) \to \Lambda_{\QQ_p}(G)\]
which we denote by $\langle -, - \rangle_{K_\infty, V}$. This pairing is independent of the choice of lattice $T \subseteq V$.
It is clear that if $T$ is an $\mathcal{O}_E$-module for some finite extension $E / \QQ_p$, then we may similarly define an $\mathcal{O}_E$-linear analogue of the Perrin-Riou pairing, and in this case Lemma \ref{lemma:PRpairingtwist} applies to any $\mathcal{O}_E$-valued character $\eta$.
\subsection{The Fontaine isomorphism}
In the case when $K_\infty=K(\mu_{p^\infty})$, we can describe $H^1_{\Iw}(K_\infty,T)$ in terms of the $(\varphi, \Gamma)$-module $\mathbb{D}_K(T)$. Let $\Gamma_K = \Gal(K(\mu_{p^\infty}) / K)$, which we identify with a subgroup of $\Gamma$. The following result is originally due to Fontaine (unpublished); for a reference see \cite[Section II]{cherbonniercolmez99}.
\begin{theorem}
We have a canonical isomorphism of $\Lambda_{\ZZ_p}(\Gamma_K)$-modules
\begin{equation}
\label{eq:fontaineisom1}
h^1_{\Iw,T}: \mathbb{D}_K(T)^{\psi=1} \rTo^\cong H^1_{\Iw}(K(\mu_{p^\infty}),T).
\end{equation}
\end{theorem}
If $T$ is a representation of $\mathcal{G}_{\QQ_p}$, then the action of $\Gamma$ extends to an action of $\Gal(K(\mu_{p^\infty}) / \QQ_p)$ on both sides of equation \eqref{eq:fontaineisom1}, and the map $h^1_{\Iw, T}$ commutes with the action of this larger group. We shall apply this below in the case when $K$ is an unramified extension of $\QQ_p$, so that $\Gamma_K = \Gamma$ and $\Gal(K(\mu_{p^\infty}) / \QQ_p) = \Gamma \times U_K$, where $U_K = \Gal(K / \QQ_p)$.
Now let $K$ be a finite unramified extension of $\QQ_p$, and assume that $V$ is a crystalline representation of $\mathcal{G}_K$ whose Hodge--Tate weights\footnote{In this paper we adopt the convention that the Hodge--Tate weight of the cyclotomic character is $+1$.} lie in the interval $[a,b]$. The following result is due to Berger \cite[Theorem A.2]{berger03}.
\begin{theorem}
\label{thm:crystallinepsiinvariants}
We have $\mathbb{D}_K(T)^{\psi=1}\subset \pi^{a-1}\mathbb{N}_K(T)$. Moreover, if $V$ has no quotient isomorphic to $\QQ_p(a)$, then $\mathbb{D}_K(T)^{\psi=1}\subset \pi^a\mathbb{N}_K(T)$.
\end{theorem}
In particular, if $V$ has non-negative Hodge--Tate weights and no quotient isomorphic to $\QQ_p$, we have $\mathbb{N}_K(T)^{\psi=1} = \mathbb{D}_K(T)^{\psi=1}$. Then \eqref{eq:fontaineisom1} becomes
\begin{equation}
\label{eq:fontaineisom2}
h^1_{\Iw,T}: \mathbb{N}_K(T)^{\psi=1} \rTo^\cong H^1_{\Iw}(K(\mu_{p^\infty}),T).
\end{equation}
\subsection{Gauss sums, $L$- and epsilon-factors}
\label{sect:epsfactors}
In many of our formulae, epsilon-factors attached to characters of the Galois group (or rather the Weil group) of $\QQ_p$ will make an appearance, so we shall fix normalizations for these. We follow the conventions of \cite{deligne73}.
Let $E$ be an algebraically closed field of characteristic 0, and let $\zeta = (\zeta_n)_{n \ge 0}$ be a choice of a compatible system of $p$-power roots of unity in $E$. The data of such a choice is equivalent to the data of an additive character $\lambda: \QQ_p \to E^\times$ with kernel $\ZZ_p$, defined by $\lambda(1/p^n) = \zeta_n$.
We first define the Gauss sum of a finitely ramified character $\omega$ of the Weil group $W_{\mathbb{Q}_p}$, which will in fact depend only on the restriction of $\omega$ to the inertia subgroup $\Gal(\overline{\QQ}_p / \QQ_p^{\mathrm{nr}})$. If $\omega$ has conductor $n$, then we define
\[ \tau(\omega, \zeta) = \sum_{\sigma \in \Gal(\QQ_p^{\mathrm{nr}}(\mu_{p^n}) / \QQ_p^{\mathrm{nr}})} \omega(\sigma)^{-1} \zeta_{n}^\sigma.\]
Now let us recall the definition of epsilon-factors given in \cite{deligne73} for locally constant characters of $\QQ_p^\times$. These depend on the character $\omega$, the auxiliary additive character $\lambda$, and a choice of Haar measure $\mathrm{d}x$; we choose $\mathrm{d}x$ so that $\ZZ_p$ has volume 1. The definition is given as
\[ \varepsilon(\omega, \lambda, \mathrm{d}x) =
\begin{cases}
1 & \text{if $\omega$ is unramified,} \\
\int_{\QQ_p^\times} \omega(x^{-1}) \lambda(x)\, \mathrm{d}x & \text{if $\omega$ is ramified.}
\end{cases}
\]
As shown in \textit{op.cit.}, if the conductor of $\omega$ is $n$, then it suffices to take the integral over $p^{-n} \ZZ_p^\times$. For consistency with \cite{cfksv}, we will work with the additive character $\lambda(-x)$ rather than $\lambda(x)$; then we find that
\[ \varepsilon(\omega, \lambda(-x), \mathrm{d}x) = \omega(p)^n \sum_{x \in (\mathbb{Z} / p^n \mathbb{Z})^\times} \omega(x)^{-1} \zeta_n^{-x}.\]
We now recall that the local reciprocity map $\operatorname{rec}_{\QQ_p}$ of class field theory identifies $W_{\QQ_p}^{\mathrm{ab}}$ with $\mathbb{Q}_p^\times$. Following \cite{deligne73}, we normalize $\operatorname{rec}_{\QQ_p}$ such that \emph{geometric} Frobenius elements of $W_{\QQ_p}^\mathrm{ab}$ are sent to uniformizers. Then the restriction of $\operatorname{rec}_{\QQ_p}$ to $\Gal(\mathbb{Q}_p^\mathrm{ab} / \QQ_p^{\mathrm{nr}})$ gives an isomorphism
\[ \Gal(\mathbb{Q}_p^\mathrm{ab} / \QQ_p^{\mathrm{nr}}) \rTo \ZZ_p^\times.\]
Our choice of normalization for the local reciprocity map implies that this coincides with the cyclotomic character. On the other hand, $p \in \mathbb{Q}_p^\times$ corresponds to $\tilde\sigma_p^{-1}$, where $\tilde\sigma_p$ is the unique element of $\Gal(\QQ_p^\mathrm{ab} / \QQ_p)$ which acts as the arithmetic Frobenius $\sigma_p$ on $\QQ_p^{\mathrm{nr}}$ and acts trivially on all $p$-power roots of unity. Hence
\[ \varepsilon(\omega^{-1}, \lambda(-x), \mathrm{d}x) = \omega(\tilde\sigma_p)^{n} \sum_{\sigma \in \Gamma / \Gamma_n} \omega(\sigma) \zeta_n^{-\sigma} = \frac{p^n \omega(\tilde\sigma_p)^n}{\tau(\omega, \zeta)}.\]
This quantity $\varepsilon(\omega^{-1}, \lambda(-x), \mathrm{d}x)$, which we shall abbreviate to $\varepsilon(\omega^{-1})$, will appear in our formulae for the two-variable regulator.
We shall also need to consider the case when $E$ is a $p$-adic field and $\omega$ is a continuous character of $\mathcal{G}_{\QQ_p}^{\mathrm{ab}}$ which is Hodge--Tate, but not necessarily finitely ramified. Any such character is potentially crystalline, and a well-known construction of Fontaine \cite{fontaine94c} allows us to regard $\mathbb{D}_{\mathrm{pst}}(\omega)$ as a one-dimensional representation of the Weil group; concretely, if $\omega = \chi^j \omega'$ where $\omega'$ is finitely ramified, then $\sigma \in W_{\QQ_p}$ acts on $\mathbb{D}_{\mathrm{pst}}(\omega)$ as $p^{j n(\sigma)} \omega'(\sigma)$, where $n(\sigma)$ is the power of the arithmetic Frobenius by which $\sigma$ acts on $\QQ_p^{\mathrm{nr}}$. We define $\varepsilon(\omega) = \varepsilon(\omega, \lambda(-x), \mathrm{d}x)$ to be the epsilon-factor attached to $\mathbb{D}_{\mathrm{pst}}(\omega)$, so
\[ \varepsilon(\omega^{-1}) = \frac{p^{n(1+j)} \omega(\tilde\sigma_p)^n}{\tau(\omega, \zeta)}.\]
We write $P(\omega, X)$ for the $L$-factor of the Weil--Deligne representation $\mathbb{D}_{\mathrm{pst}}(\omega)$. This is a polynomial in $X$, which is identically 1 if $\omega$ is not crystalline; otherwise, it is given by $P(\omega, X) = 1 - u X$, where $u$ is the scalar by which crystalline Frobenius acts on $\mathbb{D}_{\mathrm{cris}}(\omega)$, so $u = p^{-j} \omega'(\sigma_p)^{-1}$ if $\omega = \chi^j \omega'$ with $\omega'$ unramified.
\section{Local theory: Yager modules and Wach modules}
\label{sect:yager}
\subsection{Some cohomological preliminaries}
Let $F$ be a finite unramified extension of $\QQ_p$, and let $F_\infty / F$ be an unramified $p$-adic Lie extension with Galois group $U$. (Thus $U$ is either a finite cyclic group, or the product of such a group with $\ZZ_p$.) Let $\widehat{\cO}_{F_\infty}$ be the completion of the ring of integers of $F_\infty$.
\begin{lemma}
\label{lemma:ranktwistedmodule}
Let $M$ be a free $\ZZ_p$-module of rank $d < \infty$, with a continuous action of $U$. Then the module
\[ H^0(U, \widehat{\cO}_{F_\infty} \otimes_{\ZZ_p} M)\]
is free of rank $d$ over $\mathcal{O}_F$, and
\[ H^1(U, \widehat{\cO}_{F_\infty} \otimes_{\ZZ_p} M) = 0.\]
\end{lemma}
\begin{proof}
This is a form of Hilbert's Theorem 90; for the form of the statement given here see e.g.~\cite[Proposition 1.2.4]{fontaine90}.
\end{proof}
We will need the following result on trace maps for unramified extensions.
\begin{proposition}
\label{prop:iwasawastructure}
The module
\[ \varprojlim_{K} \mathcal{O}_{K}, \]
where $K$ varies over finite extensions of $F$ contained in $F_\infty$ and the inverse limit is with respect to the trace maps, is free of rank 1 over $\Lambda_{\mathcal{O}_F}(U)$.
\end{proposition}
\begin{proof}
We first note that if $L / K$ is any finite unramified extension of local fields, then the trace map $\mathcal{O}_{L} \to \mathcal{O}_K$ is surjective, since the residue extension $k_{L} / k_K$ is separable and hence its trace map is surjective. Moreover, $\mathcal{O}_L$ is free of rank 1 over $\mathcal{O}_K[\Gal(L/K)]$; elements of $\mathcal{O}_{L}$ that generate it as an $\mathcal{O}_K[\Gal(L/K)]$-module are called \emph{integral normal basis generators} of $L/K$. We must show that there exists a trace-compatible sequence $x = (x_K) \in \varprojlim_{K} \mathcal{O}_{K}$ such that $x_K$ is an integral normal basis generator of $K/F$ for all $K$.
Let $F_0$ be the largest subfield of $F_\infty$ such that $[F_0 : F]$ is prime to $p$; this is a finite extension of $F$, by our hypotheses on $F_\infty$. Choose an integral normal basis generator $x_0$ of $F_0 / F$.
We claim that if $K$ is any finite extension of $F_0$ contained in $F_\infty$, and $x$ is any element of $\mathcal{O}_K$ with $\operatorname{Tr}_{K/F_0}(x) = x_0$, then $x$ is an integral normal basis generator of $K/F$.
To prove this, consider the group ring $R = \mathcal{O}_F[\Gal(K/F)]$. As noted above, $\mathcal{O}_K$ is a free $R$-module of rank 1. Let $I$ be the ideal of $R$ given by the kernel of the natural map $\mathcal{O}_F[\Gal(K/F)] \to \mathcal{O}_F[\Gal(F_0/F)]$. Then $I$ is contained in the Jacobson radical $J$ of $R$ (indeed $J$ is generated by $I$ and $p$). So, by Nakayama's lemma, an element $x \in \mathcal{O}_K$ generates $\mathcal{O}_K$ as an $R$-module if and only if its image in $\mathcal{O}_K / I \mathcal{O}_K$ generates this quotient; but the trace map $\Tr_{K/F_0}: \mathcal{O}_K \to \mathcal{O}_{F_0}$ is surjective and factors through $\mathcal{O}_K / I \mathcal{O}_K$, and $\mathcal{O}_{F_0}$ and $\mathcal{O}_K / I \mathcal{O}_K$ are free $\ZZ_p$-modules of the same rank, so $\Tr_{K/F_0}$ must give an isomorphism $\mathcal{O}_K / I \mathcal{O}_K \to \mathcal{O}_{F_0}$. This proves the claim.
So it suffices to take any element of $\varprojlim_{K} \mathcal{O}_{K}$ lifting $x_0$.
\end{proof}
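For clarity, we record the rank computation implicit in the above argument: since $\mathcal{O}_K$ is free of rank 1 over $R$, we have
\[ \mathcal{O}_K / I \mathcal{O}_K \cong \mathcal{O}_K \otimes_R (R/I) \cong \mathcal{O}_F[\Gal(F_0/F)], \]
which is free over $\ZZ_p$ of rank $[F : \QQ_p] \cdot [F_0 : F] = [F_0 : \QQ_p] = \operatorname{rank}_{\ZZ_p} \mathcal{O}_{F_0}$, as claimed.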
\begin{remark}
As noted in \cite{pickett10}, one can also deduce the above claim from the work of Semaev \cite[Lemma 4.1]{semaev88} on normal bases of extensions of \emph{finite} fields, which does not explicitly use Nakayama's lemma.
\end{remark}
\subsection{The Yager module}
In this section we develop a variant of the construction in~\cite[\S 2]{yager82} in order to construct a certain module which, in a sense we shall make precise below, encodes the periods for the unramified characters of $\mathcal{G}_{F}$.
\begin{definition}
Let $K / F$ be a finite unramified extension. For $x \in \mathcal{O}_K$, we define
\[ y_{K/F}(x) = \sum_{\sigma \in \Gal(K / F)} x^\sigma\, [\sigma^{-1}] \in \mathcal{O}_{K}[\Gal(K / F)].\]
\end{definition}
It is clear that $y_{K/F}$ is $\mathcal{O}_F$-linear and injective, and we have $y_{K/F}(x^g) = [g] y_{K/F}(x)$ for all $g \in \Gal(K / F)$, where $[u]$ is the image of $u$ in the group ring. Moreover, the image of $y_{K/F}$ is precisely the submodule $S_{K/F}$ of $\mathcal{O}_{K}[\Gal(K / F)]$ consisting of elements satisfying $y^g = [g] y$ for all $g \in \Gal(K / F)$, where $y^g$ denotes the action of $\Gal(K / F)$ on the coefficients $\mathcal{O}_K$.
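The equivariance property can be checked directly: since $K/F$ is unramified, $\Gal(K/F)$ is cyclic (generated by the Frobenius), hence abelian, and substituting $\tau = g\sigma$ gives
\[ y_{K/F}(x^g) = \sum_{\sigma \in \Gal(K/F)} x^{g\sigma}\, [\sigma^{-1}] = \sum_{\tau \in \Gal(K/F)} x^{\tau}\, [\tau^{-1} g] = [g] \sum_{\tau \in \Gal(K/F)} x^{\tau}\, [\tau^{-1}] = [g]\, y_{K/F}(x). \]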
\begin{proposition}
\label{prop:reduction}
If $L \supset K \supset F$ are finite unramified extensions and $x \in \mathcal{O}_{L}$, the image of $y_{L/F}(x)$ under the reduction map
\[ \mathcal{O}_{L}[\Gal(L / F)] \to \mathcal{O}_{L}[\Gal(K / F)] \]
induced by the surjection $\Gal(L / F) \to \Gal(K / F)$ is equal to $y_{K/F}(\operatorname{Tr}_{L/K} x)$. In particular, the reduction has coefficients in $\mathcal{O}_{K}$.
\end{proposition}
\begin{proof}
Clear from the formula defining the maps $y_{K/F}$ and $y_{L/F}$.
\end{proof}
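Explicitly, writing each $\sigma \in \Gal(L/F)$ as $\sigma = h\tau$ with $h \in \Gal(L/K)$ and $\tau$ a fixed lift of $\bar\tau \in \Gal(K/F)$, the reduction of $y_{L/F}(x)$ is
\[ \sum_{\bar\tau \in \Gal(K/F)} \Big( \sum_{h \in \Gal(L/K)} x^{h\tau} \Big) [\bar\tau^{-1}] = \sum_{\bar\tau \in \Gal(K/F)} \left(\operatorname{Tr}_{L/K} x\right)^{\bar\tau} [\bar\tau^{-1}] = y_{K/F}(\operatorname{Tr}_{L/K} x). \]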
Now let $F_\infty / F$ be any unramified $p$-adic Lie extension with Galois group $U$, as in the previous section. Passing to inverse limits with respect to the trace maps, we deduce that there is an isomorphism of $\Lambda_{\mathcal{O}_F}(U)$-modules
\begin{equation}
\label{def:y}
y_{F_\infty / F} : \varprojlim_{F \subseteq K \subseteq F_\infty} \mathcal{O}_{K} \rTo^\cong S_{F_\infty / F} := \varprojlim_{F \subseteq K \subseteq F_\infty} S_{K/F}.
\end{equation}
\begin{proposition}
We have
\[ S_{F_\infty / F} = \{ f \in \Lambda_{\widehat{\cO}_{F_\infty}}(U) : f^u = [u] f\}\]
for any topological generator $u$ of $U$.
\end{proposition}
\begin{proof}
Let us set $X = \{ f \in \Lambda_{\widehat{\cO}_{F_\infty}}(U) : f^u = [u] f\}$. Let $(F_n)_{n \ge 0}$ be an increasing family of finite extensions of $F$ whose union is $F_\infty$, and let $U_n = \Gal(F_\infty / F_n)$.
Firstly, since $S_{F_n / F} \subseteq \mathcal{O}_{F_n}[U / U_n] \subseteq \widehat{\cO}_{F_\infty}[U / U_n]$, we clearly have an embedding $S_{F_\infty / F} \hookrightarrow \Lambda_{\widehat{\cO}_{F_\infty}}(U)$, which must land in $X$, because of the Galois-equivariance property of the elements of $S_{F_n / F}$. Conversely, for any $x \in X$, the image $x_n$ of $x$ in $\widehat{\cO}_{F_\infty}[U / U_n]$ has coefficients in $\mathcal{O}_{F_n}$ (since $(\widehat{\cO}_{F_\infty})^{U_n} = \mathcal{O}_{F_n}$ by Lemma \ref{lemma:ranktwistedmodule}) and satisfies $(x_n)^u = [u] x_n$, thus lies in $S_{F_n / F}$. So the map $S_{F_\infty / F} \hookrightarrow X$ is a bijection.
\end{proof}
We shall always equip $S_{F_\infty / F}$ with the inverse limit topology (arising from the $p$-adic topology of the finitely generated $\ZZ_p$-modules $S_{F_n / F}$). This topology is compact and Hausdorff, and coincides with the subspace topology from $\Lambda_{\widehat{\cO}_{F_\infty}}(U)$.
\begin{definition}
We refer to $S_{F_\infty / F}$ as the \emph{Yager module}, since it is closely related to the objects appearing in \cite[\S 2]{yager82}.
\end{definition}
We now explain the relation between $S_{F_\infty / F}$ and the periods for characters of $U$. Let $M$ be a finite-rank free $\ZZ_p$-module with an action of $U$, given by a continuous map $\rho: U \to \operatorname{Aut}_{\ZZ_p}(M)$. Then $\rho$ induces a ring homomorphism $\Lambda_{\widehat{\cO}_{F_\infty}}(U) \to \widehat{\cO}_{F_\infty} \otimes_{\ZZ_p} \End_{\ZZ_p} M$, which we also denote by $\rho$.
\begin{proposition}
\label{prop:twist}
Let $\omega \in S_{F_\infty / F}$. Then $\rho(\omega) \in \widehat{\cO}_{F_\infty} \otimes_{\ZZ_p} \End_{\ZZ_p}(M)$ is a period for $\rho$, in the sense that
\[
\rho(\omega)^u = \rho(u) \cdot \rho(\omega) .
\]
for all $u \in U$.
\end{proposition}
\begin{proof}
Since $\omega \in S_{F_\infty / F}$, we have $\omega^u = [u] \omega$ for any $u \in U$. However, the map $\Lambda_{\widehat{\cO}_{F_\infty}}(U) \to \widehat{\cO}_{F_\infty} \otimes_{\ZZ_p} \End_{\ZZ_p} M$ commutes with the action of $U$ on the coefficient ring $\widehat{\cO}_{F_\infty}$; so we have
\[ \rho(\omega)^u = \rho(\omega^u) = \rho([u] \cdot \omega) = \rho(u) \rho(\omega).\]
\end{proof}
\begin{remark}
After the results in this section had been proven, we discovered that similar results had been obtained by Pasol in his unpublished PhD thesis \cite[\S 2.5]{pasol05}. Our module $S_{F_\infty / F}$ is the same as his module $\mathbb{D}_0$. He uses the module $\mathbb{D}_0$ to relate Katz's $2$-variable $p$-adic $L$-functions attached to a CM elliptic curve to the modular symbols construction by Greenberg and Stevens \cite{greenbergstevens93}.
\end{remark}
\subsection{$p$-adic representations}
Let $T$ be a crystalline $\ZZ_p$-representation of $\mathcal{G}_{F}$. If $K / F$ is any unramified extension, we have isomorphisms $\mathbb{N}_K(T) \cong \mathbb{N}_F(T) \otimes_{\mathcal{O}_F} \mathcal{O}_K$, so we have trace maps $\mathbb{N}_L(T) \to \mathbb{N}_K(T)$ whenever $L \supseteq K$ are finite unramified extensions of $F$.
\begin{definition}
\label{def:Dinfty}
Let $\mathbb{N}_{F_\infty}(T) = \varprojlim_{F \subseteq K \subseteq F_\infty} \mathbb{N}_K(T)$, where the inverse limit is taken with respect to the trace maps.
\end{definition}
By construction, $\mathbb{N}_{F_\infty}(T)$ has actions of $\Gamma$ and $U$, since these act on the modules $\mathbb{N}_K(T)$ compatibly with the trace maps.
\begin{proposition}
We have an isomorphism of topological modules
\[ \mathbb{N}_{F_\infty}(T) \cong \mathbb{N}_F(T) \mathbin{\hat\otimes}_{\mathcal{O}_F} S_{F_\infty / F}.\]
\end{proposition}
\begin{proof}
Clear by construction: for each finite $K$, the isomorphisms $\mathbb{N}_K(T) \cong \mathbb{N}_F(T) \otimes_{\mathcal{O}_F} \mathcal{O}_K$ and $y_{K/F} : \mathcal{O}_K \rTo^\cong S_{K/F}$ are compatible with the trace maps, so it suffices to pass to the inverse limit.
\end{proof}
These actions of $\Gamma$ and $U$ are $\mathcal{O}_F$-linear, and they extend to a continuous action of $\Lambda_{\mathcal{O}_F}(\Gamma \times U)$.
Define $\varphi^* \mathbb{N}_{F_\infty}(T)$ as the $\mathbb{A}^+_{F}$-submodule of $\mathbb{N}_{F_\infty}(T)[q^{-1}]$ generated by $\varphi(\mathbb{N}_{F_\infty}(T))$; this is in fact an $\mathbb{A}^+_{F} \mathbin{\hat\otimes}_{\mathcal{O}_F} \Lambda_{\mathcal{O}_F}(U)$-submodule, since $\varphi$ acts bijectively on $\Lambda_{\mathcal{O}_F}(U)$. If $T$ has non-negative Hodge--Tate weights, then we have an inclusion
\[ \mathbb{N}_{F_\infty}(T) \hookrightarrow \varphi^* \mathbb{N}_{F_\infty}(T),\]
with quotient annihilated by $q^h$, for any $h$ such that the Hodge--Tate weights of $T$ lie in $[0, h]$. Note that the map $\varphi : \mathbb{N}_{F_\infty}(T) \to \varphi^* \mathbb{N}_{F_\infty}(T)$ commutes with the action of $G = U \times \Gamma$. Similarly, the maps $\psi$ on $\mathbb{N}_K(T)[q^{-1}]$ for each $K$ assemble to a map
\[ \psi : \varphi^* \mathbb{N}_{F_\infty}(T) \to \mathbb{N}_{F_\infty}(T),\]
which is a left inverse of $\varphi$.
The following proposition will be important for constructing the regulator map:
\begin{proposition}
\label{prop:kernelofpsi}
We have
\[ \left( \varphi^* \mathbb{N}_{F_\infty}(T) \right)^{\psi = 0} = \left(\varphi^* \mathbb{N}_F(T)\right)^{\psi = 0} \mathbin{\hat\otimes}_{\mathcal{O}_F} S_{F_\infty / F}.\]
\end{proposition}
\begin{proof}
Choose a basis $n_1, \dots, n_d$ of $\mathbb{N}_F(T)$ as an $\mathbb{A}^+_F$-module, and a basis $\Omega$ of $S_{F_\infty / F}$ as a $\Lambda_{\mathcal{O}_F}(U)$-module. Then any vector $v \in \varphi^* \mathbb{N}_{F_\infty}(T)$ can be uniquely written as
\[ v = \sum_{i = 0}^{p-1} \sum_{j = 1}^d (1 + \pi)^i \varphi(x_{ij}) \cdot (\varphi(n_j) \otimes \Omega),\]
for some $x_{ij} \in \mathbb{A}^+_{F} \mathbin{\hat\otimes}_{\mathcal{O}_F} \Lambda_{\mathcal{O}_F}(U)$, since $\{ (1 + \pi)^{i}: 0 \le i \le p-1\}$ is a basis of $\mathbb{A}^+_{F}$ over $\varphi(\mathbb{A}^+_{F})$.
Applying $\psi$, we have
\[ \psi(v) = \sum_{i = 0}^{p-1} \sum_{j = 1}^d \psi\left((1 + \pi)^i\right) x_{ij} \cdot (n_j \otimes \sigma_p^{-1}\Omega),\]
where $\sigma_p$ is the arithmetic Frobenius element of $\Gal(F_\infty / \QQ_p)$. The element $\sigma_p^{-1} \Omega$ is also a $\Lambda_{\mathcal{O}_F}(U)$-generator of $S_{F_\infty / F}$. Moreover, it is well known that $\psi\left((1 + \pi)^i\right)$ is $1$ if $i = 0$ and $0$ if $1 \le i \le p-1$. So we have $\psi(v) = 0$ if and only if $v$ is in the submodule
\[ \bigoplus_{i = 1}^{p-1} (1 + \pi)^i\varphi(\mathbb{N}_F(T)) \mathbin{\hat\otimes}_{\mathcal{O}_F} S_{F_\infty / F} = \varphi^*\mathbb{N}_F(T)^{\psi = 0} \mathbin{\hat\otimes}_{\mathcal{O}_F} S_{F_\infty/F}.\]
\end{proof}
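The standard computation of $\psi\left((1+\pi)^i\right)$ used above can be made explicit. Recall that $\psi = \tfrac{1}{p}\, \varphi^{-1} \circ \operatorname{Tr}_{\mathbb{A}^+_{F} / \varphi(\mathbb{A}^+_{F})}$, and that $1 + \pi$ satisfies $X^p = \varphi(1+\pi)$ over $\varphi(\mathbb{A}^+_{F})$, so its conjugates are the elements $\zeta(1+\pi)$ for $\zeta \in \mu_p$. Hence
\[ \operatorname{Tr}_{\mathbb{A}^+_{F} / \varphi(\mathbb{A}^+_{F})}\left((1+\pi)^i\right) = \sum_{\zeta \in \mu_p} \zeta^i (1+\pi)^i = \begin{cases} p & \text{if } i = 0, \\ 0 & \text{if } 1 \le i \le p-1, \end{cases} \]
since $\sum_{\zeta \in \mu_p} \zeta^i = 0$ for $p \nmid i$; this gives $\psi(1) = 1$ and $\psi\left((1+\pi)^i\right) = 0$ for $1 \le i \le p-1$.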
\subsection{Recovering unramified twists}
Let us pick a finite-rank free $\ZZ_p$-module $M$ equipped with a continuous action of $U$, via a homomorphism $\rho: \Lambda_{\ZZ_p}(U) \to \End_{\ZZ_p}(M)$ as above.
There is a ``twisting'' map from $M \otimes_{\ZZ_p} \Lambda_{\ZZ_p}(U)$ to itself, defined by $m \otimes [u] \mapsto \rho(u)^{-1} m \otimes [u]$ for $u \in U$. This map intertwines two different actions of $U$: on the left-hand side the action given by
\[ u \cdot (m \otimes [v]) = m \otimes [u^{-1}v]\]
and on the right the action given by
\[ u \cdot (m \otimes [v]) = \rho(u) m \otimes [u^{-1} v].\]
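The intertwining property is a direct check. Writing $\theta$ for the twisting map (a notation introduced here only for this verification), and using that $U$ is abelian, so that $\rho(u)$ and $\rho(v)$ commute, we have
\[ \theta\big(u \cdot (m \otimes [v])\big) = \rho(u^{-1}v)^{-1}\, m \otimes [u^{-1}v] = \rho(u)\rho(v)^{-1}\, m \otimes [u^{-1}v] = u \cdot \theta(m \otimes [v]), \]
where the first action of $U$ is used on the left and the second on the right.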
Taking the completed tensor product with $\widehat{\cO}_{F_\infty}$ (endowed with its natural $U$-action) and passing to $U$-invariants, we obtain a bijection
\[ i_M: M \otimes_{\ZZ_p} S_{F_\infty/F} \rTo^\cong S_{F_\infty/F} \cdot \left(M \otimes_{\ZZ_p} \widehat{\cO}_{F_\infty} \right)^{U}.\]
\begin{proposition}
There is a canonical isomorphism
\begin{equation}
\label{eq:unramifiedisom}
\mathbb{N}_F(T) \otimes_{\mathcal{O}_F} \left(\widehat{\cO}_{F_\infty} \otimes_{\ZZ_p} M \right)^{U} \rTo^\cong \mathbb{N}_F(T \otimes_{\ZZ_p} M),
\end{equation}
commuting with the actions of $\mathbb{A}^+_{F}$, $\Gamma$, $\varphi$ and $\psi$ (where the latter two elements act on $\widehat{\cO}_{F_\infty}$ as the arithmetic Frobenius and its inverse).
\end{proposition}
\begin{proof}
Wach modules are known to commute with tensor products \cite{berger04}, so it suffices to check that \[ \mathbb{N}_F(M) = \mathbb{A}^+_F \otimes_{\mathcal{O}_F} \left(\widehat{\cO}_{F_\infty} \otimes_{\ZZ_p} M \right)^{U}.\]
This follows from the fact that there is a canonical embedding of $\widehat{\cO}_{F_\infty}$ into Fontaine's ring $\mathbb{A}$, hence there is a canonical inclusion
\[ \left(\widehat{\cO}_{F_\infty} \otimes_{\ZZ_p} M \right)^{U} \subseteq \left(M \otimes_{\ZZ_p} \mathbb{A}\right)^{H_F} = \mathbb{D}_F(M).\]
Since the left-hand side is free of rank $d = \operatorname{rank}_{\ZZ_p} M$ over $\mathcal{O}_F$, extending scalars to $\mathbb{A}^+_F$ gives a submodule of $\mathbb{D}_F(M)$ which is free of rank $d$ over $\mathbb{A}^+_F$ and clearly satisfies the conditions defining the Wach module $\mathbb{N}_F(M) \subset \mathbb{D}_F(M)$.
\end{proof}
\begin{remark}
Suppose (for simplicity) that $F = \QQ_p$ and $M \cong \ZZ_p$ with $U$ acting via a character $\tau: U \to \ZZ_p^\times$. Since $\left(M \otimes \widehat{\cO}_{F_\infty} \right)^{U}$ is free of rank 1 over $\ZZ_p$, any choice of basis of this space gives a non-canonical isomorphism between $\mathbb{N}_{\QQ_p}(T(\tau))$ and $\mathbb{N}_{\QQ_p}(T)$ with its $\varphi$-action twisted by $\tau(\sigma_p)^{-1}$. However, the isomorphism \eqref{eq:unramifiedisom} \emph{is} canonical and does not depend on any such choice.
\end{remark}
\begin{theorem}
\label{thm:unramifiedtwist}
There is a canonical isomorphism
\[ i_{M}: M \otimes_{\ZZ_p} \mathbb{N}_{F_\infty}(T) \rTo^\cong \mathbb{N}_{F_\infty}(M \otimes_{\ZZ_p} T) \]
which commutes with the actions of $\varphi$, $\Gamma$, $\mathbb{A}^+_{F}$ and $\End_{\mathcal{G}_F}(M)$, and satisfies
\[ i_{M}(u \cdot x) = \rho(u)^{-1} u \cdot i_M(x)\]
for $u \in U$ and $x \in \mathbb{N}_{F_\infty}(T)$.
\end{theorem}
\begin{proof}
This follows immediately by tensoring the map
\[ i_M : M \otimes_{\ZZ_p} S_{F_\infty / F} \rTo^\cong S_{F_\infty/F} \cdot \left(M \otimes_{\ZZ_p} \widehat{\cO}_{F_\infty}\right)^{U}\]
with $\mathbb{N}_F(T)$, and using the isomorphism \eqref{eq:unramifiedisom}.
\end{proof}
\section{The 2-variable p-adic regulator}
\label{sect:regulator}
\subsection{A lemma on universal norms}
Let $F$ be a finite unramified extension of $\QQ_p$, and let $T$ be a $\ZZ_p$-rep\-re\-sen\-ta\-tion of $\mathcal{G}_{F}$.
\begin{definition}
\label{def:goodcrystalline}
The representation $T$ is \emph{good crystalline} if $V = T[1/p]$ is crystalline and has non-negative Hodge--Tate weights.
\end{definition}
By \cite[Theorem A.3]{berger03}, for any good crystalline $T$ there is a canonical isomorphism
\[ H^1_{\Iw}(F(\mu_{p^\infty}), T) \rTo^\cong \left( \pi^{-1} \mathbb{N}_F(T) \right)^{\psi = 1}.\]
We define a ``residue'' map
\[ r_{F, V} : H^1_{\Iw}(F(\mu_{p^\infty}), T) \to \mathbb{D}_{\mathrm{cris}}(F, V)\]
by composing the above isomorphism with the natural map
\[ \pi^{-1} \mathbb{N}_F(T) \to \frac{\pi^{-1} \mathbb{N}_F(V)}{\mathbb{N}_F(V)} \cong \frac{\mathbb{N}_F(V)}{\pi\mathbb{N}_F(V)} \cong \mathbb{D}_{\mathrm{cris}}(F, V).\]
As is shown in the proof of \cite[Theorem A.3]{berger03}, the image of the map $r_{F, V}$ is contained in $\mathbb{D}_{\mathrm{cris}}(F, V)^{\varphi = 1}$; in particular, if the latter space is zero, then $H^1_{\Iw}(F(\mu_{p^\infty}), T) \cong \mathbb{N}_F(T)^{\psi = 1}$.
We now consider the behaviour of these maps in unramified towers. Let $F_\infty$ be an infinite unramified $p$-adic Lie extension of $F$, so we may write $F_\infty = \bigcup_n F_n$ where $F_0 / F$ is a finite extension and $F_n$ is the unramified extension of $F_0$ of degree $p^n$. As we have seen above, $\mathbb{D}_{\mathrm{cris}}(F_n, V) \cong \mathbb{D}_{\mathrm{cris}}(F, V) \otimes_{F} F_n$. Let us formally write $\mathbb{D}_{\mathrm{cris}}(F_\infty, V) = F_\infty \otimes_{F} \mathbb{D}_{\mathrm{cris}}(F, V)$.
\begin{proposition}
There is an $n_0$ (depending on $V$) such that
\[ \mathbb{D}_{\mathrm{cris}}(F_\infty, V)^{\varphi = 1} = \mathbb{D}_{\mathrm{cris}}(F_n, V)^{\varphi = 1} \quad\text{for all } n \ge n_0.\]
\end{proposition}
\begin{proof}
Since the spaces $\mathbb{D}_{\mathrm{cris}}(F_n, V)^{\varphi = 1}$ are an increasing sequence of finite-dimensional $\QQ_p$-vector spaces, it suffices to show that their union $\mathbb{D}_{\mathrm{cris}}(F_\infty, V)^{\varphi = 1}$ is finite-dimensional over $\QQ_p$. This follows from the fact that $F_\infty$ is a field, and $\varphi$ acts on $F_\infty$ as the arithmetic Frobenius $\sigma_p$, so $(F_\infty)^{\varphi = 1} = \QQ_p$. Thus
\[\dim_{\QQ_p} \left(F_\infty \otimes_F \mathbb{D}_{\mathrm{cris}}(V)\right)^{\sigma_p \otimes \varphi = 1} \le \dim_{\QQ_p} V,\]
by Propositions 1.4.2(i) and 1.6.1 of \cite{fontaine94b}.
\end{proof}
\begin{proposition}
Let $\mathbb{D}_{\mathrm{cris}}(T)$ be the $\mathcal{O}_F$-lattice in $\mathbb{D}_{\mathrm{cris}}(V)$ which is the image of $\mathbb{N}_F(T)$. If $m \ge n \ge n_0$, $x \in H^1_{\Iw}(F_{m}(\mu_{p^\infty}), T)$, and $y = \operatorname{cores}_{F_m/F_{n}}(x) \in H^1_{\Iw}(F_{n}(\mu_{p^\infty}), T)$, then we have
\[ r_{F_{n}, V}(y) \in p^{m-n} \mathcal{O}_{F_n} \otimes_{\mathcal{O}_F} \mathbb{D}_{\mathrm{cris}}(T).\]
\end{proposition}
\begin{proof}
This follows from the fact that for any $n \ge 0$, we have a commutative diagram
\[
\begin{diagram}
H^1_{\Iw}(F_{n+1}(\mu_{p^\infty}), T) & \rTo^{r_{F_{n+1}, V}} & \mathbb{D}_{\mathrm{cris}}(F_{n+1}, V)^{\varphi = 1}\\
\dTo^{\operatorname{cores}_{F_{n+1}/F_n}} & & \dTo^{\operatorname{Tr}_{F_{n+1}/F_n}} \\
H^1_{\Iw}(F_n(\mu_{p^\infty}), T) & \rTo^{r_{F_{n}, V}} & \mathbb{D}_{\mathrm{cris}}(F_n, V)^{\varphi = 1}.
\end{diagram}
\]
If $n \ge n_0$, then the trace map on the right-hand side is simply multiplication by $[F_{n+1} : F_n] = p$.
\end{proof}
\begin{theorem}
\label{thm:unramunivnorms}
Let $F_\infty$ be an infinite unramified $p$-adic Lie extension of $F$, and let $x \in H^1_{\Iw}(F_\infty(\mu_{p^\infty}), T)$. Then for any $n \ge 0$, the image $y$ of $x$ in $H^1_{\Iw}(F_n(\mu_{p^\infty}), T) \cong \left(\pi^{-1}\mathbb{N}_{F_n}(T)\right)^{\psi = 1}$ is contained in $\mathbb{N}_{F_n}(T)^{\psi = 1}$.
\end{theorem}
\begin{proof}
This follows immediately from the preceding proposition: $y$ is the corestriction of a class defined over $F_m(\mu_{p^\infty})$ for every $m \ge n$, so $r_{F_{n}, V}(y)$ must be divisible by arbitrarily large powers of $p$ and hence is zero.
\end{proof}
\subsection{The regulator map}
\label{sect:2variableregulator}
For the rest of Section \ref{sect:regulator}, we assume that $T$ is a good crystalline representation of $\mathcal{G}_{F}$, for $F$ a finite unramified extension of $\QQ_p$, and we let $F_\infty$ be any unramified $p$-adic Lie extension of $F$ with Galois group $U$ as before. We define $K_\infty = F_\infty(\mu_{p^\infty})$, and $G = \Gal(K_\infty / F) \cong U \times \Gamma$.
\begin{proposition}
\label{prop:yagercohomology2}
We have a canonical isomorphism
\[ H^1_{\Iw}(K_\infty ,T) \cong \left(\pi^{-1}\mathbb{N}_{F_\infty}(T)\right)^{\psi = 1}.\]
If either $F_\infty / F$ is infinite, or $T$ has no quotient isomorphic to the trivial representation, then we have
\[ H^1_{\Iw}(K_\infty, T) \cong \mathbb{N}_{F_\infty}(T)^{\psi = 1}.\]
\end{proposition}
\begin{proof}
If $F_\infty / F$ is a finite extension, we may assume $F_\infty = F$, and this is \cite[Theorem A.1]{berger03}.
If $F_\infty / F$ is an infinite extension, then we note that for each finite subextension $K / F$ contained in $F_\infty$ we have an isomorphism
\[ H^1_{\Iw}(K(\mu_{p^\infty}),T) \cong \left(\pi^{-1}\mathbb{N}_{K}(T)\right)^{\psi=1},\]
and if $L / K$ are two such fields, then the corestriction map
\[ H^1_{\Iw}(L(\mu_{p^\infty}),T)\rTo H^1_{\Iw}(K(\mu_{p^\infty}),T)\]
corresponds to the map
\[ \pi^{-1}\mathbb{N}_{L}(T)\rTo \pi^{-1}\mathbb{N}_{K}(T)\]
induced from the trace map $\mathcal{O}_L \to \mathcal{O}_K$. By Theorem \ref{thm:unramunivnorms}, we have an isomorphism
\[
H^1_{\Iw}(K_\infty, T) = \varprojlim_K \left( \pi^{-1} \mathbb{N}_{K}(T)\right)^{\psi=1} \cong \varprojlim_K \mathbb{N}_{K}(T)^{\psi=1}=
\mathbb{N}_{F_\infty}(T)^{\psi=1},
\]
which finishes the proof.
\end{proof}
As shown in \cite[Proposition 2.11]{leiloefflerzerbes11}, we have a $\Lambda_{\mathcal{O}_F}(\Gamma)$-equivariant embedding
\[ \big(\varphi^*\mathbb{N}_F(T)\big)^{\psi=0}\subset \mathcal{H}_{F}(\Gamma) \otimes_{F} \mathbb{D}_{\mathrm{cris}}(V),\]
which is continuous with respect to the weak topology on $\big(\varphi^* \mathbb{N}_F(T)\big)^{\psi=0}$ and the usual Fr\'echet topology on $\mathcal{H}_{F}(\Gamma)$. Moreover, we have a continuous injection
\[ S_{F_\infty / F} \hookrightarrow \Lambda_{\widehat{\cO}_{F_\infty}}(U) \hookrightarrow \mathcal{H}_{\widehat{F}_\infty}(U).\]
Tensoring these together we obtain a continuous, $\Lambda_{\mathcal{O}_F}(G)$-linear map
\begin{multline*}
\big(\varphi^*\mathbb{N}_F(T)\big)^{\psi=0} \mathbin{\hat\otimes}_{\mathcal{O}_F} S_{F_\infty / F} \hookrightarrow \mathcal{H}_{\widehat{F}_\infty}(U) \mathbin{\hat\otimes}_{\mathcal{O}_F} \mathcal{H}_F(\Gamma) \otimes_{F} \mathbb{D}_{\mathrm{cris}}(V) \\ = \mathcal{H}_{\widehat{F}_\infty}(G) \otimes_{F} \mathbb{D}_{\mathrm{cris}}(V).
\end{multline*}
\begin{definition}
\label{def:2variableregulator}
We define the $p$-adic regulator
\[ \mathcal{L}^{G}_V : H^1_{\Iw}(K_\infty, T) \rTo \mathcal{H}_{\widehat{F}_\infty}(G) \otimes_{F} \mathbb{D}_{\mathrm{cris}}(V)\]
to be the composite map
\begin{align*}
H^1_{\Iw}(K_\infty, T) & \rTo^\cong \mathbb{N}_{F_\infty}(T)^{\psi=1} \cong \left(\mathbb{N}_F(T) \mathbin{\hat\otimes}_{\mathcal{O}_F} S_\infty\right)^{\psi = 1}\\
& \rTo^{1- \varphi} \big(\varphi^*\mathbb{N}_F(T)\big)^{\psi=0} \mathbin{\hat\otimes}_{\mathcal{O}_F} S_\infty \\
& \rTo \mathcal{H}_{\widehat{F}_\infty}(G) \otimes_{F} \mathbb{D}_{\mathrm{cris}}(V).
\end{align*}
Here we write $S_\infty = S_{F_\infty / F}$ for brevity, and we use that $\left(\varphi^* \mathbb{N}_{F_\infty}(T)\right)^{\psi = 0} \cong \big(\varphi^*\mathbb{N}_F(T)\big)^{\psi=0} \mathbin{\hat\otimes}_{\mathcal{O}_F} S_\infty$ by Proposition \ref{prop:kernelofpsi}.
\end{definition}
By construction, $\mathcal{L}^{G}_V$ is a morphism of $\Lambda_{\mathcal{O}_F}(G)$-modules. As suggested by the notation, we will usually invert $p$ and regard $\mathcal{L}^{G}_V$ as a map on $H^1_{\Iw}(K_\infty, V)$, associating to each compatible system of cohomology classes in $H^1_{\Iw}(K_\infty, V)$ a distribution on $G$ with values in $\widehat{F}_\infty \otimes_{F} \mathbb{D}_{\mathrm{cris}}(V)$.
We can summarise the properties of the map we have constructed by the following theorem:
\begin{theorem}
\label{thm:localregulator}
Let $F$ be a finite unramified extension of $\QQ_p$, and $K_\infty$ a $p$-adic Lie extension of $F$ with Galois group $G$ such that
\[ F(\mu_{p^\infty}) \subseteq K_\infty \subset F \cdot \QQ_p^{\mathrm{ab}}.\]
Let $T$ be a good crystalline $\ZZ_p$-representation of $\mathcal{G}_{F}$ in the sense of Definition \ref{def:goodcrystalline}, with $V = T[1/p]$, and assume that either $K_\infty / F(\mu_{p^\infty})$ is infinite, or $T$ has no quotient isomorphic to the trivial representation.
Then there exists a morphism of $\Lambda_{\mathcal{O}_F}(G)$-modules
\[ \mathcal{L}_V^G : H^1_{\Iw}(K_\infty, T) \rTo \mathcal{H}_{\widehat{F}_\infty}(G) \otimes_{F} \mathbb{D}_{\mathrm{cris}}(V),\]
where $F_\infty$ is the maximal unramified subfield of $K_\infty$, such that:
\begin{enumerate}
\item for any finite unramified extension $K / F$ contained in $K_\infty$, we have a commutative diagram
\[
\begin{diagram}
H^1_{\Iw}(K_\infty, V) &&\rTo^{\mathcal{L}_V^G} &&\mathcal{H}_{\widehat{F}_\infty}(G) \otimes_{F} \mathbb{D}_{\mathrm{cris}}(V)\\
\dTo & & & &\dOnto \\
H^1_{\Iw}(K(\mu_{p^\infty}), V )& \rTo^{\mathcal{L}_V^{G'}} & \mathcal{H}_{K}(G') \otimes_{F} \mathbb{D}_{\mathrm{cris}}(V) & \rInto & \mathcal{H}_{\widehat{F}_\infty}(G') \otimes_{F} \mathbb{D}_{\mathrm{cris}}(V).
\end{diagram}
\]
Here $G' = \Gal(K(\mu_{p^\infty}) / F)$, the right-hand vertical arrow is the map on distributions corresponding to the projection $G \twoheadrightarrow G'$, and the map $\mathcal{L}_V^{G'}$ is defined by
\[ \mathcal{L}_V^{G'}(x) = \sum_{\sigma \in \Gal(K / F)} [\sigma] \cdot \mathcal{L}^{\Gamma}_{K, V}(\sigma^{-1} \circ x),\]
where $\mathcal{L}^\Gamma_{K, V}$ is the Perrin-Riou regulator map for $K(\mu_{p^\infty}) / K$.
\item For any $x \in H^1_{\Iw}(K_\infty, V)$ and any character $\eta$ of $\Gamma$, the distribution $\pr^{\eta}(\mathcal{L}^G_{V}(x))$ on $U$, which is defined by twisting by $\eta$ and pushing forward along the projection to $U$, is bounded.
\end{enumerate}
Moreover, the conditions (1) and (2) above uniquely determine the morphism $\mathcal{L}_V^G$.
\end{theorem}
\begin{proof}
Let us show first that the map $\mathcal{L}_{V}^G$ defined above satisfies (1) and (2).
Let $K$ be any finite unramified extension of $F$ contained in $F_\infty$. Then the diagram
\[
\begin{diagram}
\mathbb{N}_{F_\infty}(T)^{\psi = 1} & \rTo^{1-\varphi} & \varphi^*\mathbb{N}_{F_\infty}(T)^{\psi = 0}\\
\dTo & & \dTo\\
\mathbb{N}_K(T)^{\psi = 1} & \rTo^{1-\varphi} & \varphi^*\mathbb{N}_{K}(T)^{\psi = 0}
\end{diagram}
\]
evidently commutes; and we also have a commutative diagram
\[
\begin{diagram}
\varphi^* \mathbb{N}_F(T)^{\psi = 0} \mathbin{\hat\otimes}_{\mathcal{O}_F} S_{F_\infty/F} & \rTo^{i_{F_\infty / F}} & \mathcal{H}_{\widehat{F}_\infty}(G) \otimes_{F} \mathbb{D}_{\mathrm{cris}}(V) \\
\dTo & & \dTo\\
\varphi^* \mathbb{N}_K(T)^{\psi = 0} \otimes_{\ZZ_p} S_{K/F} & \rTo^{i_{K/F}} & \mathcal{H}_{\widehat{F}_\infty}(G') \otimes_{F} \mathbb{D}_{\mathrm{cris}}(V),
\end{diagram}
\]
where the arrows $i_{F_\infty / F}$ and $i_{K/F}$ are induced by the inclusions $S_{F_\infty/F} \hookrightarrow \mathcal{H}_{\widehat{F}_\infty}(U)$ and $S_{K/F} \hookrightarrow \mathcal{O}_K[U'] \hookrightarrow \widehat{F}_\infty[U']$ (with $U' = \Gal(K/F)$), and the right vertical arrow is the one arising from the projection $G \to G'$. If we combine the two diagrams using the identification $\mathbb{N}_K(T) \cong \mathbb{N}_F(T) \otimes_{\mathcal{O}_F} S_{K/F}$ and similarly for $F_\infty$, the composite of the maps on the top row is the definition of $\mathcal{L}_{V}^G$, and the composite of the arrows on the bottom row is the map $\mathcal{L}_V^{G'}$. The commutativity of these diagrams therefore proves (1).
Property (2) is clear, since the image of $\Lambda_{\widehat{F}_\infty}(U)$ in $\mathcal{H}_{\widehat{F}_\infty}(U)$ is exactly the bounded distributions.
We now show that these properties characterise $\mathcal{L}^G_{V}$ uniquely. It suffices to show that (1) and (2) determine the value of $\mathcal{L}^G_{V}(x)$ at any character of $G$. Such a character has the form $\eta\varpi$, where $\eta$ is a character of $\Gamma$ and $\varpi$ is a character of $U$. Property (1) uniquely determines the value at $\eta\varpi$ if $\varpi$ has finite order, and property (2) implies that for each fixed $\eta$, the function $\varpi \mapsto \mathcal{L}^G_{V}(x)(\eta\varpi)$ is a bounded analytic function on the rigid space parametrising characters of $U$, and hence is determined uniquely by its values at the finite-order characters $\varpi$.
\end{proof}
We now record some properties of the map $\mathcal{L}^G_{V}$.
\begin{proposition}
\label{prop:orderofdistributions}
Let $W \subseteq \mathbb{D}_{\mathrm{cris}}(V)$ be a $\varphi$-invariant $F$-subspace such that all eigenvalues of $\varphi$ on the quotient $Q = \mathbb{D}_{\mathrm{cris}}(V) / W$ have $p$-adic valuation $\ge -h$ (where we normalise the $p$-adic valuation on $\overline{\QQ}_p$ such that $v_p(p) = 1$).
Then for any $x \in H^1_{\Iw}(K_\infty, V)$, the image of $x$ under
\[ H^1_{\Iw}(K_\infty, V) \rTo^{\mathcal{L}^G_V} \mathcal{H}_{\widehat{F}_\infty}(G) \otimes_{F} \mathbb{D}_{\mathrm{cris}}(V) \to \mathcal{H}_{\widehat{F}_\infty}(G) \otimes_{F} Q\]
lies in $D^{(0,h)}(G, \widehat{F}_\infty) \otimes Q$, where $D^{(0,h)}(G, \widehat{F}_\infty)$ is the space of $\widehat{F}_\infty$-valued distributions of order $(0, h)$ with respect to the subgroups $(U, \Gamma)$.
\end{proposition}
\begin{proof}
This is immediate from the definition of the 2-variable regulator map and the corresponding statement for the 1-variable regulator, which is well known.
\end{proof}
\begin{proposition}
\label{prop:galoisequivariance}
If $u \in U$ and $\tilde u$ is the unique lifting of $u$ to $G$ acting trivially on $F(\mu_{p^\infty})$, then for any $x \in H^1_{\Iw}(K_\infty, V)$ we have
\[\mathcal{L}^G_{V}(x)^u = [\tilde u] \cdot \mathcal{L}^G_{V}(x).\]
\end{proposition}
\begin{proposition}
If $m_1, \dots, m_d$ are a $\Lambda_{\mathcal{O}_F}(\Gamma)$-basis of $\big(\varphi^* \mathbb{N}_F(T)\big)^{\psi = 0}$, and $\omega$ is a $\Lambda_{\mathcal{O}_F}(U)$-basis of $S_{F_\infty / F}$, then the image of the $p$-adic regulator is contained in the $\Lambda_{\mathcal{O}_F}(G)$-span of the vectors
\[ \left(i_{F_\infty / F}(m_j \mathbin{\hat\otimes} \omega)\right)_{j = 1, \dots, d}.\]
\end{proposition}
\begin{proposition}
\label{prop:injectivereg}
If $F_\infty / F$ is infinite, the regulator map $\mathcal{L}^{G}_V$ is injective.
\end{proposition}
\begin{proof}
As before, let us identify $F_\infty$ with the unramified $\ZZ_p$-extension of a finite extension $F_0 / F$. Since the kernel of $1 - \varphi$ on $\mathbb{N}_{F_\infty}(T)^{\psi = 1}$ is contained in $\mathbb{N}_{F_\infty}(T)^{\varphi = 1}$, and the remaining maps in the definition of $\mathcal{L}^G_V$ are injective, it suffices to show that $\mathbb{N}_{F_\infty}(T)^{\varphi = 1} = 0$. Let $\pr_n:\mathbb{N}_{F_\infty}(T) \rightarrow \mathbb{N}_{F_n}(T)$ be the projection map; for $x\in\mathbb{N}_{F_\infty}(T)$, we have $\varphi(x) = x$ if and only if $\pr_n(x) \in \mathbb{N}_{F_n}(T)^{\varphi = 1}$ for all $n$. However,
\[ \mathbb{N}_{F_n}(T)^{\varphi = 1} \subset \mathbb{D}_{F_n}(T)^{\varphi=1} = T^{H_{F_n}}.\]
As $T$ is a finitely generated $\ZZ_p$-module, there must be some $m$ such that $T^{H_{F_n}} = T^{H_{F_m}}$ for all $n \ge m$. However, for $n \ge m$ the transition map $T^{H_{F_{n+1}}} \to T^{H_{F_{n}}}$ is the trace of $\Gal(F_{n+1}/F_n)$, which is multiplication by $[F_{n+1} : F_n] = p$ since the two modules coincide; so $\pr_m(x)$ is divisible by arbitrarily high powers of $p$ and is thus zero. Hence $x = 0$.
\end{proof}
The next statement requires some extra notation. Let $\varpi$ be a continuous character $U \to \mathcal{O}_E^\times$, where $E$ is some finite extension of $\QQ_p$. Then there is an obvious isomorphism
\begin{equation}
\label{eq:iwacohotwist}
H^1_{\Iw}(K_\infty, T(\varpi)) \cong H^1_{\Iw}(K_\infty, T)(\varpi).
\end{equation}
Moreover, via the isomorphism $V \otimes_{\QQ_p} \BB_{\cris} \cong \mathbb{D}_{\mathrm{cris}}(V) \otimes_{F} \BB_{\cris}$, we can regard the space
\[ \mathbb{D}_{\mathrm{cris}}(V(\varpi)) = \left( E \otimes_{\QQ_p} V \otimes_{\QQ_p} \BB_{\cris}\right)^{\mathcal{G}_{\QQ_p} = \varpi^{-1}}\]
as a subspace of $E \otimes_{\QQ_p} \mathbb{D}_{\mathrm{cris}}(V) \otimes_{F} \BB_{\cris}$.
Since the natural inclusion $\widehat{F}_\infty \hookrightarrow \BB_{\cris}$ induces an injection
\[ (E \otimes_{\QQ_p} K_\infty)^{U = \varpi^{-1}} \hookrightarrow (E \otimes_{\QQ_p} \BB_{\cris})^{\mathcal{G}_{\QQ_p} = \varpi^{-1}}\]
which must be an isomorphism (as the right-hand side must have $E$-dimension $\le 1$), we have a canonical isomorphism
\[ \mathbb{D}_{\mathrm{cris}}(V(\varpi)) = \mathbb{D}_{\mathrm{cris}}(V) \otimes_{F} \left(E \otimes_{\QQ_p} K_\infty\right)^{U = \varpi^{-1}}.\]
In particular, there is a canonical isomorphism
\[ \widehat{F}_\infty \otimes_{F} \mathbb{D}_{\mathrm{cris}}(V(\varpi)) \cong E \otimes_{\QQ_p} \widehat{F}_\infty \otimes_{F} \mathbb{D}_{\mathrm{cris}}(V).\]
We also have a canonical map
\[ \Tw_{\varpi^{-1}} : E \otimes_{\QQ_p} \mathcal{H}_{\widehat{F}_\infty}(G) \to E \otimes_{\QQ_p} \mathcal{H}_{\widehat{F}_\infty}(G)\]
which on group elements corresponds to the map $g \mapsto \varpi(g)^{-1} g$. Tensoring with the canonical isomorphism above, we obtain a map (which we also denote by $\Tw_{\varpi^{-1}}$)
\[ E \otimes_{\QQ_p} \mathcal{H}_{\widehat{F}_\infty}(G) \otimes_{F} \mathbb{D}_{\mathrm{cris}}(V) \rTo \mathcal{H}_{\widehat{F}_\infty}(G) \otimes_{F} \mathbb{D}_{\mathrm{cris}}(V(\varpi)).\]
\begin{proposition}
\label{prop:twisting}
With the identifications described above, the regulator $\mathcal{L}^G_V$ is compatible with unramified twists: there is a commutative diagram
\begin{diagram}
\mathcal{O}_E \otimes H^1_{\Iw}(K_\infty, T) & \rTo^{\mathcal{L}_V^G} & E \otimes_{\QQ_p} \mathcal{H}_{\widehat{F}_\infty}(G) \otimes_{F} \mathbb{D}_{\mathrm{cris}}(V) \\
\dTo^\cong & & \dTo^{\Tw_{\varpi^{-1}}}\\
H^1_{\Iw}(K_\infty, T(\varpi)) & \rTo^{\mathcal{L}_{V(\varpi)}^{G}} & \mathcal{H}_{\widehat{F}_\infty}(G) \otimes_{F} \mathbb{D}_{\mathrm{cris}}(V(\varpi))
\end{diagram}
\end{proposition}
\begin{proof}
By \eqref{eq:iwacohotwist}, we have canonical isomorphisms $H^1_{\Iw}(K_\infty, T) \otimes_{\ZZ_p} \mathcal{O}_E \cong \mathbb{N}_{F_\infty}(T)^{\psi = 1} \otimes_{\ZZ_p} \mathcal{O}_E$, and $H^1_{\Iw}(K_\infty, T(\varpi)) \cong \mathbb{N}_{F_\infty}(T(\varpi))^{\psi = 1}$. We can therefore rewrite the above diagram to obtain the following:
\begin{diagram}
\mathbb{N}_{F_\infty}(T)^{\psi = 1} \otimes_{\ZZ_p} \mathcal{O}_E & \rTo^{1-\varphi} & \left(\varphi^*\mathbb{N}_{F_\infty}(T)\right)^{\psi = 0} \otimes_{\ZZ_p} \mathcal{O}_E & \rTo & \mathcal{H}_{\widehat{F}_\infty}(G) \otimes_{F} \mathbb{D}_{\mathrm{cris}}(V) \otimes_{\QQ_p} E \\
\dTo & & \dTo & & \dTo^{\Tw_{\varpi^{-1}}}\\
\mathbb{N}_{F_\infty}(T(\varpi))^{\psi = 1} & \rTo^{1-\varphi} & \left(\varphi^*\mathbb{N}_{F_\infty}(T(\varpi))\right)^{\psi = 0} & \rTo & \mathcal{H}_{\widehat{F}_\infty}(G) \otimes_{F} \mathbb{D}_{\mathrm{cris}}(V(\varpi)).
\end{diagram}
Here the left and middle vertical maps are obtained by restriction from that of Theorem \ref{thm:unramifiedtwist}, taking $M = \mathcal{O}_E(\varpi)$; as noted above, this isomorphism commutes with $\varphi$ and $\psi$.
The commutativity of the left square is clear. Moreover, the isomorphisms
\[ \mathbb{N}_F(T(\varpi)) \cong \mathbb{N}_F(T) \otimes_{\ZZ_p} \left( \mathcal{O}_E \otimes_{\ZZ_p} \widehat{\cO}_{F_\infty}\right)^{U = \varpi^{-1}}\]
and
\[ \mathbb{D}_{\mathrm{cris}}(V(\varpi)) \cong \mathbb{D}_{\mathrm{cris}}(V) \otimes_{\ZZ_p} \left( \mathcal{O}_E \otimes_{\ZZ_p} \widehat{\cO}_{F_\infty} \right)^{U = \varpi^{-1}}\]
are compatible (since the first is given by multiplication in $\mathbb{A}$, the second in $\BB_{\cris}$, and the inclusion of $\widehat{\cO}_{F_\infty}$ in $\BB_{\cris}$ factors through the natural maps $\mathbb{A}^+ \hookrightarrow \tilde{\mathbb{A}}^+ \hookrightarrow \mathbb{A}_{\cris}$). Hence the commutativity of the right square follows, as the twisting maps $\Lambda_{\widehat{\cO}_{F_\infty}}(U) \to \Lambda_{\widehat{\cO}_{F_\infty}}(U)$ and $\mathcal{H}_{\widehat{F}_\infty}(U) \to \mathcal{H}_{\widehat{F}_\infty}(U)$ are evidently compatible.
\end{proof}
\subsection{An explicit formula for the values of the regulator}
\label{sect:explicitformula}
In this section, we use the results of the previous section to give a direct interpretation of the values of the regulator map $\mathcal{L}^G_V$ at any de Rham character of $G$, relating them to the values of the Bloch--Kato exponential maps for $V$ and its twists. Throughout this section we assume (for simplicity) that $F = \QQ_p$.
As above, let $\varpi$ be a continuous character of $U$ with values in $\mathcal{O}_E$, for some finite extension $E / \QQ_p$. Combining Proposition \ref{prop:twisting} with the defining property of $\mathcal{L}^G_{V(\varpi)}$ in Theorem \ref{thm:localregulator}, we have:
\begin{theorem}
The following diagram commutes:
\[
\begin{diagram}
H^1_{\Iw}(K_\infty, V) & \rTo^{\mathcal{L}_{V}^G} & \mathcal{H}_{\widehat{F}_\infty}(G)^{\circ} \otimes_{\QQ_p} \mathbb{D}_{\mathrm{cris}}(V) \\
\dTo_{\pr_{\Iw}^\varpi} & & \dTo_{\pr^\varpi_{\cris}}\\
H^1_{\Iw}(\QQ_p(\mu_{p^\infty}), V(\varpi)) & \rTo^{\mathcal{L}^\Gamma_{V(\varpi)}} & \mathcal{H}_{\QQ_p}(\Gamma) \otimes_{\QQ_p} \mathbb{D}_{\mathrm{cris}}(V(\varpi)).
\end{diagram}
\]
\end{theorem}
Here $\mathcal{H}_{\widehat{F}_\infty}(G)^\circ$ denotes the subspace of $\mathcal{H}_{\widehat{F}_\infty}(G)$ satisfying the Galois-equivariance property of Proposition \ref{prop:galoisequivariance}. The map $\pr_{\Iw}^\varpi$ is the composite of the isomorphism \eqref{eq:iwacohotwist} with the corestriction map; the right-hand vertical map is the composite of $\Tw_{\varpi^{-1}}$ with push-forward to $\Gamma$. (Hence both vertical maps are $U$-equivariant, if we let $U$ act on the bottom row by $\varpi^{-1}$.)
We now apply the results of \S \ref{appendix:cyclo} to each unramified twist $V(\varpi)$ of $V$ to determine exactly the values of $\mathcal{L}_V^G$ at any character of $G$ which is Hodge--Tate, in terms of the dual exponential and logarithm maps (cf.~\S\ref{sect:crystallinereps} above).
\begin{definition}
Let $\omega$ be any continuous character of $G$ with values in some finite extension $E / \QQ_p$. For $x\in H^1_{\Iw}(K_\infty,V)$, we write $x_{\omega, 0}$ for the image of $x$ in $H^1(\QQ_p, V(\omega^{-1}))$.
\end{definition}
We can now apply Theorem \ref{thm:explicitformulacyclo} to obtain the following formulae for the values of $\mathcal{L}^G_V$:
\begin{theorem}
\label{thm:explicitformula}
Let $x\in H^1_{\Iw}(K_\infty,V)$, and let $\omega$ be a de Rham character of $G$ with values in a finite extension $E / \QQ_p$. Let $j$ be the Hodge--Tate weight of $\omega$, and $n$ its conductor. If $n = 0$, suppose that $\mathbb{D}_{\mathrm{cris}}(V(\omega^{-1}))^{\varphi = p^{-1}} = 0$. Then we have
\begin{multline*}
\mathcal{L}^G_{V}(x)(\omega) =
\Gamma^*(1+j) \cdot \varepsilon(\omega^{-1}) \cdot \frac{\Phi^n P(\omega^{-1}, \Phi)}{P(\omega, p^{-1} \Phi^{-1})} \\
\times \begin{cases}
\exp^*_{V(\omega^{-1})^*(1)}(x_{\omega,0}) \otimes t^{-j} e_j & \text{if $j \ge 0$,}\\
\log_{\QQ_p, V(\omega^{-1})}(x_{\omega,0}) \otimes t^{-j} e_j & \text{if $j \le -1$,}
\end{cases}
\end{multline*}
where the notation is as follows:
\begin{itemize}
\item $\Gamma^*(1 + j)$ is the leading term of the Taylor expansion of the Gamma function at $1 + j$,
\[ \Gamma^*(1 + j) =
\begin{cases}
j! & \text{if $j \ge 0$,}\\
\frac{(-1)^{-j-1}}{(-j-1)!} & \text{if $j \le -1$.}
\end{cases}
\]
\item $P(\omega, \cdot)$ and $\varepsilon(\omega)$ are the $L$- and $\varepsilon$-factors of the Weil--Deligne representation $\mathbb{D}_{\mathrm{pst}}(\omega)$ (see \S \ref{sect:epsfactors} above).
\item $\Phi$ denotes the operator on $\mathbb{D}_{\mathrm{cris}}(V) \otimes_{\QQ_p} \widehat{F}_\infty$ which is obtained by extending the Frobenius of $\mathbb{D}_{\mathrm{cris}}(V)$ to act trivially on $\widehat{F}_\infty$ (rather than as the usual Frobenius on $\widehat{F}_\infty$).
\end{itemize}
\end{theorem}
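As a quick consistency check on the normalisation of $\Gamma^*$, here is an illustrative computation (added for the reader's convenience; it is not used in the sequel):

```latex
% Worked values of the leading coefficient $\Gamma^*(1+j)$.
% For $j = 2$ the Gamma function is regular at $3$, so
% $\Gamma^*(3) = \Gamma(3) = 2! = 2$. For $j = -2$ the Gamma function
% has a simple pole at $-1$, and $\Gamma^*(-1)$ is its residue there:
\[
  \Gamma^*(3) = 2! = 2,
  \qquad
  \Gamma^*(-1) = \operatorname*{Res}_{s = -1} \Gamma(s)
               = \frac{(-1)^{1}}{1!} = -1,
\]
% matching the formula $\Gamma^*(1+j) = (-1)^{-j-1}/(-j-1)!$ at $j = -2$.
```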
\begin{remark}
To define $\mathcal{L}^G_{V}$ we made a choice of compatible system of $p$-power roots of unity $\zeta$; but the dependence of $\mathcal{L}^G_V$ on $\zeta$ is clear from the formula of Theorem \ref{thm:explicitformula}. If we temporarily write $\mathcal{L}^{G}_{V}(x, \zeta)$ for the regulator using the roots of unity $\zeta$, then for any $\gamma \in \Gamma$ we have
\[ \mathcal{L}^G_V(x, \gamma \zeta)(\omega) = \omega(\tilde \gamma)^{-1} \mathcal{L}^G_V(x, \zeta)(\omega),\]
where $\tilde \gamma$ is the unique lifting of $\gamma$ to the inertia subgroup of $G$.
\end{remark}
\subsection{A local reciprocity formula}
\label{sect:localrecip}
Our final local result will be an analogue of Perrin-Riou's local reciprocity formula, relating the maps $\mathcal{L}^G_V$ and $\mathcal{L}^G_{V^*(1)}$. The cyclotomic version of this formula, conjecture $\operatorname{Rec}(V)$ in \cite{perrinriou94}, was originally formulated in terms of Perrin-Riou's exponential map $\Omega_{V, h}$, and proved independently by Colmez \cite{colmez98} and Benois \cite{benois00}. In Appendix \ref{appendix:cyclo} below we formulate and prove a version using the map $\mathcal{L}^\Gamma_V$ instead.
Here, as in Appendix \ref{appendix:cyclo}, it will be convenient for us to extend the definition of the regulator map to representations which are crystalline, but which may have some negative Hodge--Tate weights. To do this, we note that if $V$ is good crystalline, then for any $k \ge 0$ we have
\[ \Tw_{\chi^k}\left( \mathcal{L}^G_{V(k)}(x \otimes e_{k})\right) = (\ell_{-1} \circ \dots \circ \ell_{-k})\left(\mathcal{L}^G_V(x)\right) \otimes t^{-k} e_k.\]
So for arbitrary $V$, and any $j \gg 0$ such that $V(j)$ is good crystalline, we may \emph{define} $\mathcal{L}^G_{V}$ by the formula
\[ \mathcal{L}^G_V(x) = (\ell_{-1} \circ \dots \circ \ell_{-j})^{-1}\left( \Tw_{\chi^j}\left(\mathcal{L}^G_{V(j)}(x \otimes e_{j})\right)\right) \otimes t^j e_{-j}\]
and this does not depend on the choice of $j$; this then takes values in the fraction field of $\mathcal{H}_{\widehat{F}_\infty}(G)$.
\begin{theorem}
For any crystalline representation $V$ and any classes $x \in H^1_{\Iw}(K_\infty, V)$ and $y \in H^1_{\Iw}(K_\infty, V^*(1))$, we have
\[ \langle \mathcal{L}_V^G(x), \mathcal{L}_{V^*(1)}^G(y) \rangle_{\mathrm{cris}, V} = -\sigma_{-1} \cdot \ell_0 \cdot \langle x, y \rangle_{K_\infty, V},\]
where $\sigma_{-1}$ denotes the unique element of the inertia subgroup of $G$ such that $\chi(\sigma_{-1}) = -1$.
\end{theorem}
\begin{proof}
Using Lemma \ref{lemma:PRpairingtwist} and Proposition \ref{prop:twisting} for each unramified character $\tau$ of $G$ reduces this immediately to the corresponding statement for the cyclotomic regulator maps $\mathcal{L}^\Gamma_{V(\tau)}$, which is Theorem \ref{thm:cycloreciprocity}.
\end{proof}
\section{Regulators for extensions of number fields}
\label{sect:globalregulator}
\label{sect:semilocalregulator}
In this section, we show how to define an extension of the regulator map in the context of certain $p$-adic Lie extensions of number fields. This section draws heavily on the cyclotomic case studied by Perrin-Riou in \cite{perrinriou94}; see also \cite{iovitapollack06} for the case of more general $\ZZ_p$-extensions of number fields.
Let $K$ be a number field, $p$ a (rational) prime, and $\mathfrak{p}$ a prime of $K$ above $p$. We choose a prime $\mathfrak{P}$ of $\overline{K}$ above $\mathfrak{p}$.
\subsection{Semilocal cohomology}
Let $T$ be a finitely generated $\ZZ_p$-module with a continuous action of $\mathcal{G}_K$. For each finite extension $L$ of $K$, the set of primes $\mathfrak{q}$ of $L$ above $\mathfrak{p}$ is finite, and for each $i$ we may define the semilocal cohomology group
\[ Z^i_{\mathfrak{p}}(L, T) = \bigoplus_{\mathfrak{q} \mid \mathfrak{p}} H^i(L_\mathfrak{q}, T).\]
If $L / K$ is Galois, with Galois group $G$, then we have a canonical isomorphism
\begin{equation}
\label{eq:semilocalcoho}
Z^i_{\mathfrak{p}}(L, T) \cong \ZZ_p[G] \otimes_{\ZZ_p[G_\mathfrak{P}]} H^i(L_\mathfrak{P}, T),
\end{equation}
where $G_\mathfrak{P}$ is the decomposition group of $\mathfrak{P}$ in $G$. In particular, it has an action of $\ZZ_p[G]$, and it is easy to see that the localization map
\[ \loc_{\mathfrak{p}} = \bigoplus_{\mathfrak{q} \mid \mathfrak{p}} \loc_\mathfrak{q} : H^i(L, T) \to Z^i_\mathfrak{p}(L, T)\]
is $G$-equivariant.
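As a simple sanity check on \eqref{eq:semilocalcoho}, consider the totally split case (an illustration we add; the facts used are standard):

```latex
For example, if $\mathfrak{p}$ splits completely in $L / K$, then the
decomposition group $G_\mathfrak{P}$ is trivial, and
\eqref{eq:semilocalcoho} becomes
\[ Z^i_{\mathfrak{p}}(L, T) \cong \ZZ_p[G] \otimes_{\ZZ_p} H^i(L_\mathfrak{P}, T), \]
i.e.\ $Z^i_{\mathfrak{p}}(L, T)$ is the induced module, with one direct
summand $H^i(L_\mathfrak{q}, T)$ for each of the $|G|$ primes
$\mathfrak{q} \mid \mathfrak{p}$.
```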
If now $K_\infty / K$ is a $p$-adic Lie extension of number fields with Galois group $G$, we may define semilocal Iwasawa cohomology groups
\[ Z^i_{\Iw, \mathfrak{p}}(K_\infty, T) = \varprojlim_{K'} Z^i_{\mathfrak{p}}(K', T),\]
where the inverse limit is over finite Galois extensions $K' / K$ contained in $K_\infty$. The isomorphisms \eqref{eq:semilocalcoho} for each finite subextension imply that
\begin{equation}
\label{eq:semilocaliwasawacoho}
Z^i_{\Iw, \mathfrak{p}}(K_\infty, T) = \Lambda_{\ZZ_p}(G) \otimes_{\Lambda_{\ZZ_p}(G_\mathfrak{P})} H^i_{\Iw}(K_{\infty, \mathfrak{P}}, T).
\end{equation}
\begin{theorem}
\label{thm:semilocalreg}
Let $K_\infty / K$ be any $p$-adic Lie extension of number fields with Galois group $G$, $\mathfrak{p}$ a prime of $K$ above $p$, and $\mathfrak{P}$ a prime of $\overline{K}$ above $\mathfrak{p}$, such that
\begin{itemize}
\item $K_\mathfrak{p}$ is unramified over $\QQ_p$,
\item the completion $K_{\infty, \mathfrak{P}}$ is of the form $F_\infty(\mu_{p^\infty})$, for $F_\infty$ an infinite unramified extension of $K_{\mathfrak{p}}$.
\end{itemize}
Then there is a unique homomorphism of $\Lambda_{\ZZ_p}(G)$-modules
\[ \mathcal{L}^G_{V} : Z^1_{\Iw, \mathfrak{p}}(K_\infty, V) \to \mathcal{H}_{\widehat{F}_\infty}(G) \otimes_{\QQ_p} \mathbb{D}_{\mathrm{cris}}(K_\mathfrak{p}, V), \]
where $\widehat{F}_\infty$ is the $p$-adic completion of the maximal unramified subfield of $K_{\infty, \mathfrak{P}}$, whose restriction to $H^1_{\Iw}(K_{\infty, \mathfrak{P}}, V)$ is the local regulator map $\mathcal{L}^{G_\mathfrak{P}}_V$.
\end{theorem}
\begin{proof}
Immediate by tensoring the local regulator $\mathcal{L}^{G_\mathfrak{P}}_V$ with $\Lambda_{\ZZ_p}(G)$, using equation \eqref{eq:semilocaliwasawacoho}.
\end{proof}
\begin{remark}
Note that if $\mathfrak{p}$ is finitely decomposed in $K_\infty$, so $[G : G_{\mathfrak{P}}]$ is finite, one can describe $\mathcal{L}^G_{V}$ as a direct sum of local regulators:
\[ \mathcal{L}^G_{V}(x) = \bigoplus_{\sigma \in G / G_{\mathfrak{P}}} [\sigma] \cdot \mathcal{L}^{G_{\mathfrak{P}}}_{V}(\loc_{\mathfrak{p}} \sigma^{-1}(x)).\]
However, the construction also applies when $\mathfrak{p}$ is infinitely decomposed. Thus, for instance, if $d > 1$ and $K$ is a CM field of degree $2d$ in which $p$ splits completely, then one can take $K_\infty$ to be the $(d + 1)$-dimensional abelian $p$-adic Lie extension given by the ray class field $K(p^\infty)$.
\end{remark}
\begin{remark}
One can use the regulator maps to construct Coleman maps and restricted Selmer groups of $V$ over $K_\infty$, in the spirit of the constructions in \cite{leiloefflerzerbes10} for the cyclotomic extension.
\end{remark}
\subsection{The module of $p$-adic $L$-functions}
\label{sect:defmoduleofLfunctions}
We now assume that the number field $K$ is totally complex and Galois over $\mathbb{Q}$, and that $p$ splits completely in $K$, $(p)=\mathfrak{p}_1\dots \mathfrak{p}_e$. For each of these primes, fix an embedding of $\overline{\QQ}$ into $\overline{K_{\mathfrak{p}_i}}$.
Let $T$ be a $\ZZ_p$-representation of $\mathcal{G}_K$, and let $V=T[p^{-1}]$.
\begin{assumption}
\label{assumption:crystalline}
For all $1\leq i\leq e$, the restriction of $V$ to $\mathcal{G}_{K_{\mathfrak{p}_i}}$ is good crystalline.
\end{assumption}
Let $S$ be the finite set of primes of $K$ containing all the primes above $p$, all the archimedean places and all the places whose inertia group acts non-trivially on $T$. Denote by $K^S$ the maximal extension of $K$ unramified outside $S$. Let $K_\infty$ be a $p$-adic Lie extension of $K$ contained in $K^S$ which is Galois over $\mathbb{Q}$ and satisfies the conditions of Theorem \ref{thm:semilocalreg} for each of the primes $\mathfrak{p}_1, \dots, \mathfrak{p}_e$.
\begin{definition}
Define $H^1_{\Iw,S}(K_\infty,T)=\varprojlim H^1(\Gal(K^S\slash K_n),T)$, where $\{K_n\}$ is an increasing sequence of finite extensions of $K$ such that $K_\infty=\bigcup K_n$. We also let
\[ H^1_{\Iw,S}(K_\infty,V)=H^1_{\Iw,S}(K_\infty,T)\otimes_{\ZZ_p}\QQ_p.\]
\end{definition}
\begin{assumption}
\label{assumption:eltsorderp}
The Galois group $G = \Gal(K_\infty / K)$ has no element of order $p$.
\end{assumption}
\begin{remark}
Examples of $p$-adic Lie extensions satisfying the above hypotheses occur naturally in the context of class field theory; for instance, if $K$ is a CM field in which $p$ splits, and $K_\infty$ the ray class field $K(p^\infty)$, all the conditions are automatic except possibly Assumption \ref{assumption:eltsorderp}, and this may be dealt with by replacing $K_\infty$ by a finite subextension.
We shall study extensions of this type in more detail in \S \ref{sect:imquad} below, where we take $K$ to be an imaginary quadratic field.
\end{remark}
As $K_\infty$ is a Galois extension of $\mathbb{Q}$, the Galois groups $\Gal(K_{\infty, \mathfrak{p}_i}\slash K_{\mathfrak{p}_i})$, $1\leq i\leq e$, are conjugate to each other in $\Gal(K_\infty\slash \mathbb{Q})$, as are their inertia subgroups. If $L_{\infty,i}$ denotes the maximal unramified extension of $K_{\mathfrak{p}_i}$ in $K_{\infty,\mathfrak{p}_i}$, we therefore get canonical identifications of $L_{\infty,i}$ with $L_{\infty,j}$ for all $1\leq i,j\leq e$. We can thus drop the index and denote this unramified extension of $\QQ_p$ by $F_\infty$.
As explained in Section \ref{sect:semilocalregulator}, for $1\leq i\leq e$, we have a regulator map
\[\mathcal{L}^G_{V,\mathfrak{p}_i} : Z^1_{\Iw, \mathfrak{p}_i}(K_\infty, T) \rightarrow \mathbb{D}_{\cris,\mathfrak{p}_i}(V) \otimes_{\QQ_p} \mathcal{H}_{\widehat{F}_\infty}(G).\]
Via the localisation map $\loc_{\mathfrak{p}_i}:H^1_{\Iw,S}(K_\infty, T) \rightarrow Z^1_{\Iw, \mathfrak{p}_i}(K_\infty, T)$, it induces a map
\[ H^1_{\Iw,S}(K_\infty, T) \rTo \mathcal{H}_{\widehat{F}_\infty}(G) \otimes_{\QQ_p} \mathbb{D}_{\cris,\mathfrak{p}_i}(V)\]
which we also denote by $\mathcal{L}^G_{V,\mathfrak{p}_i}$. Let
\[
\mathbb{D}_p(V) =\mathbb{D}_{\mathrm{cris}}\Big(\big(\Ind_{K\slash\mathbb{Q}}V\big)|_{\mathcal{G}_{\QQ_p}}\Big) \cong \bigoplus_{i=1}^e \mathbb{D}_{\cris,\mathfrak{p}_i}(V).
\]
Define
\[ \mathcal{L}^G_V = \bigoplus_{i=1}^e \mathcal{L}^G_{V,\mathfrak{p}_i} : H^1_{\Iw,S}(K_\infty,V) \rTo \mathcal{H}_{\widehat{F}_\infty}(G) \otimes_{\QQ_p} \mathbb{D}_p(V).\]
Denote by $\mathcal{K}_{\widehat{F}_\infty}(G)$ the fraction field of $\mathcal{H}_{\widehat{F}_\infty}(G)$. Assume that Conjecture $\Leop(K_\infty,V)$ (as formulated in \S \ref{sect:globalranks} below) holds, so $H^2_{\Iw}(K_\infty\slash K,V)$ is $\Lambda_{\QQ_p}(G)$-torsion. Let $d = \frac{1}{2}[K : \mathbb{Q}] \dim_{\QQ_p}(V)$. As $\rank_{\Lambda_{\ZZ_p}(G)} H^1_{\Iw,S}(K_\infty,T)=d$ by Theorem \ref{thm:globalrank}, the regulator $\mathcal{L}^G_V$ induces a map\footnote{For the definition of the determinant of a finitely generated $\Lambda_{\ZZ_p}(G)$-module, see \cite{knudsenmumford76}; cf.\ also \cite[\S 3.1.5]{perrinriou94}.}
\[ \det \mathcal{L}^G_V : \det_{\Lambda_{\QQ_p}(G)} H^1_{\Iw,S}(K_\infty,V) \rTo \mathcal{K}_{\widehat{F}_\infty}(G)\otimes_{\QQ_p}\bigwedge^d \mathbb{D}_p(V).\]
\begin{definition}
\label{def:Iarith}
Define $\mathbb{I}_{\arith,p}(V)$ to be the $\Lambda_{\QQ_p}(G)$-submodule of $\mathcal{K}_{\widehat{F}_\infty}(G)\otimes_{\QQ_p}\bigwedge^d\mathbb{D}_p(V)$
\[ \mathbb{I}_{\arith,p}(V)= \det \mathcal{L}^G_V\big(H^1_{\Iw,S}(K_\infty,T)\big) \otimes \left(\det H^2_{\Iw}(K_\infty,T)\right)^{-1}.\]
\end{definition}
In the spirit of Perrin-Riou (cf.\ \cite[\S 3.1]{perrinriou03}), we can give an explicit description of $\mathbb{I}_{\arith,p}(V)$ as follows. Let $f_2 \in \Lambda_{\QQ_p}(G)$ be a generator of the characteristic ideal of $H^2_{\Iw}(K_\infty,T)$, so that $\det H^2_{\Iw}(K_\infty,T) = f_2^{-1} \Lambda_{\QQ_p}(G)$.
\begin{proposition}
\label{prop:explicitdescription}
Let $\mathfrak{c}=\{c_1,\dots,c_d\}\subset H^1_{\Iw,S}(K_\infty,V)$ be elements such that if $\mathcal{C}$ denotes the $\Lambda_{\QQ_p}(G)$-submodule of $H^1_{\Iw,S}(K_\infty,V)$ spanned by the elements of $\mathfrak{c}$, then the quotient $H^1_{\Iw,S}(K_\infty,V)\slash \mathcal{C}$ is $\Lambda_{\QQ_p}(G)$-torsion. Denote by $f_{\mathfrak{c}}\in\Lambda_{\QQ_p}(G)$ the corresponding characteristic element. Then
\[ \mathbb{I}_{\arith,p}(V)=\Lambda_{\QQ_p}(G) \, f_2 f_{\mathfrak{c}}^{-1} \, \mathcal{L}^G_V(c_1)\wedge\dots\wedge \mathcal{L}^G_V(c_d).\]
\end{proposition}
\begin{proof}
Clear from the construction.
\end{proof}
\begin{remark}
If $H^1_{\Iw,S}(K_\infty,V)$ is free as a $\Lambda_{\QQ_p}(G)$-module, then $\mathbb{I}_{\arith,p}(V)$ must be contained in $\mathcal{H}_{\widehat{F}_\infty}(G)$.
\end{remark}
\begin{remark}
Via the isomorphism
\[ \mathcal{K}_{\widehat{F}_\infty}(G)\otimes\bigwedge^d\mathbb{D}_p(V)\cong \Hom_{\QQ_p}\Big(\bigwedge^d\mathbb{D}_p(V^*(1)),\mathcal{K}_{\widehat{F}_\infty}(G)\Big),\]
we can consider the $\Lambda_{\QQ_p}(G)$-module $\mathbb{I}_{\arith,p}(V)$ as a submodule of
\[ \Hom_{\QQ_p}\big(\bigwedge^d\mathbb{D}_p(V^*(1)),\mathcal{K}_{\widehat{F}_\infty}(G)\big).\]
\end{remark}
The following proposition implies that $\mathbb{I}_{\arith,p}(V)\neq 0$:
\begin{proposition}
\label{prop:smallkernel}
Assume that conjecture $\Leop(K_\infty,V^*(1))$ holds. Then the kernel of the homomorphism
\[ \loc_p:H^1_{\Iw,S}(K_\infty, T) \rTo \bigoplus_{i=1}^e Z^1_{\Iw,\mathfrak{p}_i}(K_\infty,T)\]
is $\Lambda_{\ZZ_p}(G)$-torsion.
\end{proposition}
\begin{proof}
We adapt the arguments in \cite[\S A.2]{perrinriou95}. For $0\leq j\leq 2$, define the $\Lambda_{\ZZ_p}(G)$-modules
\[ Z^j_S(K_\infty,T)=\bigoplus_{v\in S} H^j_{\Iw}(K_{\infty,v},T)\hspace{3ex} \text{and}\hspace{3ex} Z^j_p(K_\infty,T)= \bigoplus_{i=1}^e Z^j_{\Iw,\mathfrak{p}_i}(K_\infty,T).\]
Also, define
\[ X^i_{\infty,S}(K_\infty,T)=H^i\big(G_S(K_\infty),V^*(1)\slash T^*(1)\big). \]
Taking the limit over the $K_{m,n}$ of the Poitou-Tate exact sequence gives an exact sequence of $\Lambda_{\ZZ_p}(G)$-modules
\begin{align*}
0 &\rTo X^2_{\infty,S}(K_\infty,T)^\vee \rTo H^1_{\Iw,S}(K_\infty,T) \rTo^{\loc_S} Z^1_S(K_\infty,T) \\
&\rTo X^1_{\infty,S}(K_\infty,T)^\vee \rTo H^2_{\Iw,S}(K_\infty,T) \rTo Z^2_{S}(K_\infty,T) \\
&\rTo X^0_{\infty,S}(K_\infty,T)^\vee\rTo 0.
\end{align*}
By Theorem \ref{thm:globalrank}, since we are assuming $\Leop(K_\infty,V^*(1))$, the module $X^2_{\infty,S}(K_\infty,T)$ is $\Lambda_{\ZZ_p}(G)$-cotorsion. Thus $\ker(\loc_S)$ is $\Lambda_{\ZZ_p}(G)$-torsion. As
\[ \rank_{\Lambda_{\ZZ_p}(G)}Z^1_S(K_\infty,T)=\rank_{\Lambda_{\ZZ_p}(G)}Z^1_p(K_\infty,T)\]
by Proposition \ref{thm:localrank}, this implies the result.
\end{proof}
\begin{corollary}
If conjecture $\Leop(K_\infty,V)$ holds, the $\Lambda_{\ZZ_p}(G)$-mod\-ule $\mathbb{I}_{\arith,p}(V)$ is non-zero.
\end{corollary}
\begin{proof}
Consequence of Propositions \ref{prop:smallkernel} and \ref{prop:injectivereg}.
\end{proof}
As in the cyclotomic case, we conjecture that $\mathbb{I}_{\arith,p}(V)$ should have a canonical basis vector -- a $p$-adic $L$-function for $V$ -- whose image under evaluation at de Rham characters of $G$ is related to the critical $L$-values of $V$ and its twists. In this generality such a conjecture is necessarily rather vague, as even the analytic continuation and algebraicity of the values of the complex $L$-function are conjectural. In the next section, we make this philosophy precise in some special cases: we show that it is consistent with known results on $p$-adic $L$-functions, but that it also implies some new conjectures regarding $p$-adic $L$-functions of modular forms.
\section{Imaginary quadratic fields}
\label{sect:imquad}
\subsection{Setup}
\label{sect:setupimquad}
Throughout this section, let $K$ be an imaginary quadratic field in which $p$ splits; write $(p)=\mathfrak{p} \bar{\mathfrak{p}}$. We now introduce a specific class of extensions $K_\infty / K$ for which the hypotheses of Section \ref{sect:defmoduleofLfunctions} are satisfied. Let $\mathfrak{f}$ be an integral ideal of $K$ prime to $\mathfrak{p}$ and $\bar{\mathfrak{p}}$, and let $K_\infty$ be the ray class field $K(\mathfrak{f} p^\infty)$. We assume that $\mathfrak{f}$ is stable under $\Gal(K / \mathbb{Q})$, which is equivalent to the assumption that $K_\infty$ is Galois over $\mathbb{Q}$. It is well known that $K_\infty \supseteq K(\mu_{p^\infty})$, and that the primes $\mathfrak{p}$ and $\bar{\mathfrak{p}}$ are finitely decomposed in $K_\infty$; so $G = \Gal(K_\infty / K)$ is an abelian $p$-adic Lie group of dimension 2, and the decomposition groups $G_{\mathfrak{p}}$ and $G_{\bar{\mathfrak{p}}}$ are open subgroups.
\begin{lemma}
If $p$ is coprime to the order of the ray class group $\operatorname{Cl}_{\mathfrak{f}}(K)$, then $G$ has no elements of order $p$.
\end{lemma}
\begin{proof}
By class field theory, we have an exact sequence
\[ 0 \rTo \overline{U_\mathfrak{f}} \rTo (\mathcal{O}_K \otimes \ZZ_p)^\times \rTo \operatorname{Cl}_{\mathfrak{f} p^\infty}(K) \rTo \operatorname{Cl}_{\mathfrak{f}}(K) \rTo 0,\]
where $U_\mathfrak{f}$ is the group of units of $\mathcal{O}_K$ that are congruent to $1$ modulo $\mathfrak{f}$, and $\overline{U_\mathfrak{f}}$ is the closure of $U_\mathfrak{f}$ in $(\mathcal{O}_K \otimes \ZZ_p)^\times$. So it suffices to show that the quotient
\[ \frac{(\mathcal{O}_K \otimes \ZZ_p)^\times}{\overline{U_\mathfrak{f}}}\]
is $p$-torsion-free. However, since $K$ is imaginary quadratic, $\overline{U_\mathfrak{f}} = U_\mathfrak{f}$ is a finite group, and as $p$ is odd and split in $K$, we have $p \nmid |\overline{U_\mathfrak{f}}|$. Since $(\mathcal{O}_K \otimes \ZZ_p)^\times$ is $p$-torsion-free, the result follows.
\end{proof}
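To make the hypothesis of the lemma concrete, here is a standard example (added as an illustration; it plays no role in the argument):

```latex
Take $K = \QQ(i)$ and $\mathfrak{f} = (1)$. Then
$\operatorname{Cl}_{\mathfrak{f}}(K) = \operatorname{Cl}(K)$ is trivial, so
its order is coprime to every prime; hence for each split prime $p$ (that
is, each $p \equiv 1 \bmod 4$) the lemma shows that
$G = \Gal(K(p^\infty)/K)$ has no elements of order $p$.
```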
Let us write $\Gamma_\mathfrak{p} = \Gal(K_\infty / K(\mathfrak{f} \bar{\mathfrak{p}}^\infty))$ and $\Gamma_{\bar{\mathfrak{p}}} = \Gal(K_\infty / K(\mathfrak{f} \mathfrak{p}^\infty))$. Note that $\Gamma_\mathfrak{p}$ and $\Gamma_{\bar{\mathfrak{p}}}$ are $p$-adic Lie groups of dimension 1 whose intersection is trivial, and the open subgroup $\Gal(K_\infty / K(\mathfrak{f}))$ is isomorphic to $\Gamma_\mathfrak{p} \times \Gamma_{\bar{\mathfrak{p}}}$.
Let $T$ be a $\ZZ_p$-representation of $\mathcal{G}_K$ of rank $d$, and let $V=T[p^{-1}]$. As in the previous section, assume that for $\star\in \{\mathfrak{p}, \bar{\mathfrak{p}}\}$, the restriction of $V$ to $\mathcal{G}_{K_\star}$ is good crystalline.
\begin{lemma}
We have a canonical decomposition
\[ \bigwedge^d \mathbb{D}_p(V) \cong \bigoplus_{m+n=d} \left( \bigwedge^m \mathbb{D}_{\cris,\mathfrak{p}}(V)\otimes_{\QQ_p} \bigwedge^n \mathbb{D}_{\cris,\bar{\mathfrak{p}}}(V) \right). \]
\end{lemma}
\begin{proof}
Clear from the definition.
\end{proof}
To simplify the notation, let us write \[\mathbb{D}_p(V)^{(m,n)}=\bigwedge^m \mathbb{D}_{\cris,\mathfrak{p}}(V)\otimes_{\QQ_p} \bigwedge^n \mathbb{D}_{\cris,\bar{\mathfrak{p}}}(V).\]
\subsection{Galois descent of the module of L-functions}
\label{sect:Galoisdescent}
\begin{definition}
For $m,n\in\mathbb{N}$ such that $m+n=d$, define $\mathbb{I}^{(m,n)}_{\arith,p}(V)$ to be the image of $\mathbb{I}_{\arith,p}(V)$ in $\mathcal{K}_{\widehat{F}_\infty}(G)\otimes_{\QQ_p} \mathbb{D}_p(V)^{(m,n)}$ under the map induced by the projection $\bigwedge^d \mathbb{D}_p(V)\rTo \mathbb{D}_p(V)^{(m,n)}$.
\end{definition}
The following theorem is the main result of this section:
\begin{theorem}
\label{thm:coefficients}
If $d = 2n$, then $\mathbb{I}^{(n,n)}_{\arith,p}(V) \subset \mathcal{K}_{L}(G) \otimes_{\QQ_p} \mathbb{D}_p(V)^{(n,n)}$, where $L$ is a finite unramified extension of $\QQ_p$.
\end{theorem}
The rest of this section is devoted to proving this theorem. As in Proposition \ref{prop:galoisequivariance}, let $\tilde \sigma_{\mathfrak{p}}$ be the unique element of $G_{\mathfrak{p}}$ which lifts the Frobenius automorphism at $\mathfrak{p}$ of $K(\mathfrak{f} \bar{\mathfrak{p}}^\infty)$ and which is trivial on $K(\mu_{p^\infty})$, and similarly for $\bar{\mathfrak{p}}$.
\begin{lemma}
The element $\tilde \sigma_{\mathfrak{p}} \tilde \sigma_{\bar{\mathfrak{p}}} \in G$ has finite order.
\end{lemma}
\begin{proof}
Let us consider the ``semilocal Artin map''
\[ \theta = (\theta_\mathfrak{p}, \theta_{\bar{\mathfrak{p}}}) : K_{\mathfrak{p}}^\times \times K_{\bar{\mathfrak{p}}}^\times \to G.\]
Here $\theta_{\mathfrak{p}}$ is the Artin map for $K_\mathfrak{p}$, normalised so that uniformisers map to geometric Frobenius elements. The kernel of $\theta$ is the image in $K_{\mathfrak{p}}^\times \times K_{\bar{\mathfrak{p}}}^\times$ of the elements of $K^\times$ which are units outside $p$ and congruent to $1 \bmod \mathfrak{f}$.
By the functoriality of the global Artin map (cf.~\cite[VI.5.2]{neukirch99}), there is a commutative diagram
\[
\begin{diagram}
K_{\mathfrak{p}}^\times \times K_{\bar{\mathfrak{p}}}^\times & \rTo^\theta & G \\
\dTo^{N_{K/\mathbb{Q}}} & & \dTo \\
\QQ_p^\times & \rTo & \Gamma,
\end{diagram}
\]
The bottom horizontal map is the local Artin map for $\mathbb{Q}(\mu_{p^\infty}) / \mathbb{Q}$; if we identify $\Gamma$ with $\ZZ_p^\times$, this map is the identity on $\ZZ_p^\times$ and sends $p$ to 1.
Consider the element $(p, 1)$ of $K_{\mathfrak{p}}^\times \times K_{\bar{\mathfrak{p}}}^\times$. Its image in the group $\Gal(K(\mathfrak{f} \bar{\mathfrak{p}}^\infty) / K)$ is the Frobenius $\sigma_{\mathfrak{p}}$, and its image in $\QQ_p^\times$ is $p$, which is mapped to the identity in $\Gamma$. Hence the image of $(p, 1)$ in $G$ is $\tilde \sigma_\mathfrak{p}$. Similarly, $(1, p)$ is a lifting of $\tilde \sigma_{\bar{\mathfrak{p}}}$.
Hence $\tilde \sigma_{\mathfrak{p}} \tilde \sigma_{\bar{\mathfrak{p}}}$ is the image of the element $(p, p)$ of $K_{\mathfrak{p}}^\times \times K_{\bar{\mathfrak{p}}}^\times$. Thus if $m$ is such that $p^m \equiv 1 \bmod \mathfrak{f}$, then $(p^m, p^m) \in K_{\mathfrak{p}}^\times \times K_{\bar{\mathfrak{p}}}^\times$ lies in the kernel of $\theta$, and hence $(\tilde \sigma_{\mathfrak{p}} \tilde \sigma_{\bar{\mathfrak{p}}})^m = 1$ in $G$.
\end{proof}
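The proof in fact gives an effective bound on the order, which the following added illustration makes explicit:

```latex
For instance, take $K = \QQ(i)$, $\mathfrak{f} = (3)$ and $p = 5$ (a split
prime, as $5 \equiv 1 \bmod 4$). The least $m$ with $5^m \equiv 1 \bmod 3$
is $m = 2$, since $5 \equiv 2 \bmod 3$ and $5^2 = 25 \equiv 1 \bmod 3$;
so in this case
$(\tilde\sigma_{\mathfrak{p}} \tilde\sigma_{\bar{\mathfrak{p}}})^2 = 1$.
```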
\begin{corollary}
\label{cor:coefficients}
Let $x_1,\dots,x_n$ be any elements of $Z^1_{\mathfrak{p}}(K_\infty, V)$, and similarly let $y_1,\dots,y_n \in Z^1_{\bar{\mathfrak{p}}}(K_{\infty}, V)$. Then the element
\[ \left(\mathcal{L}^G_{V, \mathfrak{p}}(x_1) \wedge \dots \wedge \mathcal{L}^G_{V, \mathfrak{p}}(x_n) \right) \otimes \left(\mathcal{L}^G_{V, \bar{\mathfrak{p}}}(y_1) \wedge \dots \wedge \mathcal{L}^G_{V, \bar{\mathfrak{p}}}(y_n) \right) \]
of $ \mathcal{H}_{\widehat{F}_\infty}(G) \otimes_{\QQ_p} \mathbb{D}_p(V)^{(n, n)}$
in fact lies in
\( \mathcal{H}_{L}(G) \otimes_{\QQ_p} \mathbb{D}_p(V)^{(n, n)},\)
where $L$ is a finite unramified extension of $\QQ_p$.
\end{corollary}
\begin{proof}
Clear, since the Frobenius automorphism of $\widehat{F}_\infty$ acts on this element as multiplication by $[\tilde{\sigma}_{\mathfrak{p}} \tilde{\sigma}_{\bar{\mathfrak{p}}}]^n$, which we have seen has finite order.
\end{proof}
\begin{remark}
As is clear from the proof of the lemma and its corollary, the degree of $L / \QQ_p$ is bounded by the exponent of the ray class group of $K$ modulo $\mathfrak{f}$, and in particular is independent of $V$.
\end{remark}
We deduce Theorem \ref{thm:coefficients} by combining Corollary \ref{cor:coefficients} with Proposition \ref{prop:explicitdescription}.
\subsection{Orders of distributions}
Let us choose subspaces $W_\mathfrak{p} \subseteq \bigwedge^m \mathbb{D}_{\mathrm{cris}}(K_\mathfrak{p}, V)$ and $W_{\bar{\mathfrak{p}}} \subseteq \bigwedge^n \mathbb{D}_{\mathrm{cris}}(K_{\bar{\mathfrak{p}}}, V)$. Then the space
\[ Q = \left( \bigwedge^m \mathbb{D}_{\mathrm{cris}}(K_\mathfrak{p}, V) / W_\mathfrak{p} \right) \otimes_{\QQ_p} \left( \bigwedge^n \mathbb{D}_{\mathrm{cris}}(K_{\bar{\mathfrak{p}}}, V) / W_{\bar{\mathfrak{p}}} \right)\]
is a quotient of $\mathbb{D}_p(V)^{(m, n)}$ and hence of $\mathbb{D}_p(V)$. So, for any $c_1, \dots, c_d \in H^1_{\Iw, S}(K_\infty, V)$, we may consider the projection of
\[ \mathcal{L}^G_V(c_1, \dots, c_d) = \mathcal{L}^G_V(c_1) \wedge \dots \wedge \mathcal{L}^G_V(c_d)\]
to $Q$.
\begin{theorem}
\label{thm:orderofproduct}
The distribution $\pr_Q\left(\mathcal{L}^G_V(c_1) \wedge \dots \wedge \mathcal{L}^G_V(c_d)\right)$ is a distribution on $G$ of order $(h_\mathfrak{p}, h_{\bar{\mathfrak{p}}})$ with respect to the subgroups $(\Gamma_{\mathfrak{p}}, \Gamma_{\bar{\mathfrak{p}}})$, where $h_\mathfrak{p}$ (resp. $h_{\bar{\mathfrak{p}}}$) is the largest valuation of any eigenvalue of $\varphi$ on $\bigwedge^m \mathbb{D}_{\mathrm{cris}}(K_\mathfrak{p}, V) / W_\mathfrak{p}$ (resp. on $\bigwedge^n \mathbb{D}_{\mathrm{cris}}(K_{\bar{\mathfrak{p}}}, V) / W_{\bar{\mathfrak{p}}}$).
\end{theorem}
\begin{proof}
Let us write $c_{j, \mathfrak{p}}$ for the localisation of $c_j$ at $\mathfrak{p}$, and similarly for $\bar{\mathfrak{p}}$. By Proposition \ref{prop:orderofdistributions}, for each subset $\{j_1, \dots, j_m\} \subseteq \{1, \dots, d\}$ of order $m$, the projection of the element
\[ \mathcal{L}_{V, \mathfrak{p}}^G(c_{j_1,\mathfrak{p}}) \wedge \dots \wedge \mathcal{L}_{V, \mathfrak{p}}^G(c_{j_m, \mathfrak{p}})\]
to $\bigwedge^m \mathbb{D}_{\mathrm{cris}}(K_\mathfrak{p}, V) / W_{\mathfrak{p}}$ is a distribution of order $(h_{\mathfrak{p}}, 0)$ with respect to the subgroups $(\Gamma_\mathfrak{p}, U)$, where $U = \Gal(K_\infty / K(\mu_{p^\infty}))$. By the change-of-variable result of Proposition \ref{prop:changevar} in the appendix, it is also a distribution of order $(h_{\mathfrak{p}},0)$ with respect to the subgroups $(\Gamma_{\mathfrak{p}}, \Gamma_{\bar{\mathfrak{p}}})$.
There is a corresponding result for the projection to $\bigwedge^n \mathbb{D}_{\mathrm{cris}}(K_{\bar{\mathfrak{p}}}, V) / W_{\bar{\mathfrak{p}}}$ of the distribution obtained from any $n$-element subset of $\{1, \dots, d\}$: this gives a distribution of order $(0, h_{\bar{\mathfrak{p}}})$ with respect to $(\Gamma_\mathfrak{p}, \Gamma_{\bar{\mathfrak{p}}})$. Since the product of distributions of order $(a,0)$ and $(0, b)$ is a distribution of order $(a, b)$ by Proposition \ref{prop:orderofproduct}, the product of the distributions attached to any two such subsets is a distribution with values in $Q$ of order $(h_\mathfrak{p}, h_{\bar{\mathfrak{p}}})$. Since $\pr_Q\left(\mathcal{L}^G_V(c_1) \wedge \dots \wedge \mathcal{L}^G_V(c_d)\right)$ is a finite linear combination of products of this form, the theorem follows.
\end{proof}
\subsection{Example 1: Gr\"ossencharacters and Katz's $L$-function}
\label{sect:KatzLfunction}
\subsubsection{Kummer maps}
We recall the well-known local theory of exponential maps for the representation $\ZZ_p(1)$. For any finite extension $L / \QQ_p$, there is a Kummer map $\kappa_L: \mathcal{O}_L^\times \to H^1(L, \ZZ_p(1))$, whose kernel is the Teichm\"uller lifting of $k_L^\times$. In particular, the restriction of $\kappa_L$ to $U^1(L)$, the kernel of reduction modulo the maximal ideal, is injective.
Moreover, after inverting $p$, we have a commutative diagram relating the Kummer map to the exponential map of Bloch--Kato (see \cite{blochkato90}):
\[
\begin{diagram}
\QQ_p \otimes_{\ZZ_p} U^1(L) & \rTo^{\kappa_L} & H^1(L, \QQ_p(1)) \\
\dTo & & \dEq\\
\mathbb{D}_{\dR, L}(\QQ_p(1)) &\rTo^{\exp_{L, \QQ_p(1)}} & H^1(L, \QQ_p(1))
\end{diagram}
\]
where the vertical map sends
$u$ to $t^{-1} \log(u) \otimes e_1$, where $e_1$ is the basis vector of $\QQ_p(1)$ corresponding to our compatible system of roots of unity.
The maps $\kappa_L$ are compatible with the norm and corestriction maps for finite extensions $L' / L$, so for an infinite algebraic extension $K_\infty / \QQ_p$ we can take the inverse limit over the finite extensions of $\QQ_p$ contained in $K_\infty$ to define
\[ \kappa_{K_\infty} : U^1(K_\infty) \rTo H^1_{\Iw}(K_\infty, \QQ_p(1)),\]
where $U^1(K_\infty) := \varprojlim_{K' \subset K_\infty} U^1(K')$.
\subsubsection{Coleman series}
We recall the following basic result, due to Coleman. Let $\mathcal{F}$ be any height 1 Lubin--Tate group over $\QQ_p$, and $F$ an unramified extension of $\QQ_p$. Fix a generator $v = (v_n)$ of the Tate module of $\mathcal{F}$ (that is, a norm-compatible sequence of $p^n$-torsion points of $\mathcal{F}$).
\begin{theorem}[{\cite{coleman79}}]
\label{thm:colemanstheorem}
Let $F$ be a finite unramified extension of $\QQ_p$. Then for each $\beta = (\beta_n) \in U^1(F(\mathcal{F}_{p^\infty}))$, there is a unique power series
\[ g_{F,\mathcal{F}}(\beta) \in \mathcal{O}_F[[X]]^{\times,N_\mathcal{F} = 1}\]
where $N_\mathcal{F}$ is Coleman's norm operator, such that for all $n \ge 1$ we have
\[ \beta_n = [g_{F,\mathcal{F}}(\beta)]^{\sigma^{-n}}(v_n).\]
\end{theorem}
Here $\sigma$ is the arithmetic Frobenius automorphism of $F / \QQ_p$, which we extend to an automorphism of $\mathcal{O}_F[[X]]$ acting trivially on the variable $X$.
If $\mathcal{F}$ is the formal multiplicative group $\hat{\mathbb{G}}_m$, then we shall drop the suffix $\mathcal{F}$; and we take $v_n = \zeta_n - 1$, where $(\zeta_n)$ is our chosen compatible sequence of $p$-power roots of unity. In this case, if we identify $X$ with the variable $\pi$ in Fontaine's rings, the relation between the map $g_F$ and the Perrin-Riou regulator map is given by the following diagram:
\[
\begin{diagram}
U^1(F(\mu_{p^\infty})) & \rTo^{\kappa_{F(\mu_{p^\infty})}} & H^1_{\Iw}(F(\mu_{p^\infty}), \ZZ_p(1))\\
\dTo^{g_F} & & \\
\mathcal{O}_F[[ \pi ]]^{\times,N = 1} & & \dTo^{\mathcal{L}_{F, \QQ_p(1)}^\Gamma}\\
\dTo^{(1 - \tfrac{\varphi}{p})\log} & & \\
\mathcal{O}_F[[ \pi ]]^{\psi = 0} & \rTo & \mathcal{H}(\Gamma) \otimes_{\QQ_p} \mathbb{D}_{\mathrm{cris}}(F, \QQ_p(1))
\end{diagram}
\]
If we identify $\mathbb{D}_{\mathrm{cris}}(F, \QQ_p(1))$ with $F$ via the basis vector $t^{-1} \otimes e_1$, then the bottom map sends $f \in \mathcal{O}_F[[ \pi ]]^{\psi = 0}$ to $\ell_0 \cdot \mathfrak{M}^{-1}(f)$, where $\ell_0 = \frac{\log \gamma}{\log \chi(\gamma)}$ for any non-identity element $\gamma \in \Gamma_1$ and $\mathfrak{M}$ is the Mellin transform as defined in Section \ref{sect:Fontainerings}. (See e.g.~the proof of Proposition 1.5 of \cite{leiloefflerzerbes11}.) Thus the image of the bottom map is precisely $\ell_0 \cdot \Lambda_{\mathcal{O}_F}(\Gamma) \subseteq \mathcal{H}_F(\Gamma)$; and if we define
\[ h_F(\beta) = \ell_0^{-1}\cdot \mathcal{L}_{F, \QQ_p(1)}^\Gamma(\kappa_{F_\infty}(\beta)) \in \Lambda_{\mathcal{O}_F}(\Gamma),\]
then we have
\[ \mathfrak{M}(h_F(\beta)) = (1 - \tfrac{\varphi}{p})\log g_F(\beta).\]
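As a classical illustration of these constructions (included here only as an example, and not used in the sequel), take $F = \QQ_p$, so that $\sigma = \operatorname{id}$, and let $a$ be an integer prime to $p$. The cyclotomic units $\beta_n = \frac{\zeta_n^a - 1}{\zeta_n - 1}$ form a norm-compatible system, and the defining relation of Theorem \ref{thm:colemanstheorem} is satisfied by
\[ g_{\QQ_p}(\beta) = \frac{(1+X)^a - 1}{X},\]
since $g_{\QQ_p}(\beta)(v_n) = \frac{\zeta_n^a - 1}{\zeta_n - 1} = \beta_n$. (Strictly speaking $\beta_n \equiv a \pmod{\mathfrak{m}}$, so $\beta$ need not be a principal unit; but $\beta^{p-1}$ lies in $U^1(\QQ_p(\mu_{p^\infty}))$ and has Coleman series $g_{\QQ_p}(\beta)^{p-1}$.)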
\subsubsection{Two-variable Coleman series}
Now let $K_\infty / \QQ_p$ be an abelian $p$-adic Lie extension containing $\QQ_p(\mu_{p^\infty})$ such that $G=\Gal(K_\infty\slash\QQ_p)$ is a $p$-adic Lie group of dimension $2$. Let $\widehat{F}_\infty$ be the completion of the maximal unramified subextension of $K_\infty$. We define
\[ h_{\infty} : U^1(K_\infty) \to \Lambda_{\widehat{\cO}_{F_\infty}}(G)\]
to be the unique map such that the composite
\[ U^1(K_\infty) \rTo^{\kappa_{K_\infty}} H^1_{\Iw}(K_\infty/\QQ_p, \ZZ_p(1)) \rTo^{\mathcal{L}_{\QQ_p(1)}^G} \mathcal{H}_{\widehat{F}_\infty}(G)\]
is equal to $\ell_0 \cdot h_\infty$.
\begin{proposition}
\label{prop:colemanmaps}
The element $h_\infty(\beta)$ is uniquely determined by the relation
\[ h_\infty(\beta) = \sum_{\sigma \in U_F} h_{F}(\beta_F)^\sigma [\sigma] \pmod{I_F}\]
for all unramified subextensions $F \subset K_\infty$, where $U_F = \Gal(F / \QQ_p)$, $I_F$ is the kernel of the natural map $\Lambda_{\widehat{\cO}_{F_\infty}}(G) \to \Lambda_{\widehat{\cO}_{F_\infty}}(U_F \times \Gamma)$, and $\beta_F$ denotes the image of $\beta$ in $U^1(F(\mu_{p^\infty}))$.
\end{proposition}
\begin{proof}
This follows from the compatibility of the maps $\mathcal{L}^\Gamma$ and $\mathcal{L}^G$ (Theorem \ref{thm:localregulator}(i)).
\end{proof}
We would like to compare this result to Theorem 5 of \cite{yager82}. Our method differs from that of Yager, as we build measures on $G$ out of measures on the Galois groups of extensions $F(\mu_{p^\infty}) / F$ for unramified extensions $F \subset K_\infty$, while Yager considers instead the extensions $F(\mathcal{F}_{p^\infty})/F$ where $\mathcal{F}$ is the Lubin--Tate group corresponding to an elliptic curve with CM by $\mathcal{O}_K$.
Let $\mathcal{F}$ be any Lubin--Tate formal group over $\QQ_p$ which becomes isomorphic to $\hat{\mathbb{G}}_m$ over $\widehat{F}_\infty$. If $F$ is any finite unramified extension of $\QQ_p$ contained in $K_\infty$, then $F(\mathcal{F}_{p^\infty}) \subseteq K_\infty$. For any $\beta \in U^1(K_\infty)$, let $\beta_{F, \mathcal{F}}$ be its image in $U^1(F(\mathcal{F}_{p^\infty}))$. Then Coleman's theorem (Theorem \ref{thm:colemanstheorem}) gives us an element
\[ g_{F, \mathcal{F}}(\beta_{F, \mathcal{F}}) \in \mathcal{O}_F[[X]]^{\times,N_\mathcal{F} = 1}.\]
We write
\[ h_{F, \mathcal{F}}(\beta_{F, \mathcal{F}}) = \mathfrak{M}^{-1} \left[ \left(1 - \tfrac{\varphi}{p}\right) \log\left( g_{F, \mathcal{F}}(\beta_{F, \mathcal{F}}) \circ \theta\right)\right] \]
where $\theta$ is the unique power series in $\widehat{\cO}_{F_\infty}[[X]]$ giving an isomorphism $\mathcal{F} \rTo^\cong \hat{\mathbb{G}}_m$ such that $v_n$ maps to $\zeta_n - 1$.
\begin{theorem}[de Shalit]
We have
\[ h_\infty(\beta) = \sum_{\sigma \in U_F} h_{F, \mathcal{F}}(\beta_{F,\mathcal{F}})^\sigma [\sigma] \pmod{I_{F, \mathcal{F}}},\]
where $I_{F, \mathcal{F}}$ is the kernel of the natural map
\[ \Lambda_{\widehat{\cO}_{F_\infty}}(G) \rTo \Lambda_{\widehat{\cO}_{F_\infty}}( \Gal(F(\mathcal{F}_{p^\infty}) / F)).\]
\end{theorem}
\begin{proof}
See \cite[\S I.3.8]{deshalit87}. (Note that the theorem is stated there for $K_\infty = \mathbb{Q}_p^{\mathrm{ab}}$, the maximal abelian extension of $\QQ_p$; but the theorem, and the proof given, are true with $K_\infty$ replaced by any smaller extension over which the formal groups concerned become isomorphic.)
\end{proof}
It follows that the map $h_{\infty}$ defined above coincides with the map constructed (under more restrictive hypotheses) by Yager in \cite{yager82}. In particular, if $c$ denotes the element of the global $H^1_{\Iw}$ obtained by applying the Kummer map to the elliptic units, then $\mathcal{L}^G_{\mathfrak{p}, V} (c)$ is equal to $\ell_0 \mu$ where $\mu$ is Katz's $p$-adic $L$-function.
\subsection{Example 2: Two-variable L-functions of modular forms}
We now consider the restriction to $\mathcal{G}_{K}$ of the representation $V$ of $\mathcal{G}_\mathbb{Q}$ attached to a modular form $f$ of weight 2, level $N$ prime to $p \Delta_{K/\mathbb{Q}}$, and character $\delta$. Let $E \subseteq \overline{\QQ}_p$ be the completion of $\mathbb{Q}(f) \subseteq \overline{\QQ}$ at our chosen prime of $\overline{\QQ}$. We take $V = V_f^*$, so $V$ has Hodge--Tate weights $\{0, 1\}$ at each of $\mathfrak{p}$ and $\bar{\mathfrak{p}}$. Let $\{\alpha, \beta\}$ be the roots of $X^2 - a_p X + p \delta(p)$, so the eigenvalues of $\varphi$ on either $\mathbb{D}_{\mathrm{cris}}(K_\mathfrak{p}, V)$ or $\mathbb{D}_{\mathrm{cris}}(K_{\bar{\mathfrak{p}}}, V)$ are $\alpha^{-1}$ and $\beta^{-1}$.
\begin{definition}
A $p$-refinement of $f$ is a pair $u = (u_\mathfrak{p}, u_{\bar{\mathfrak{p}}}) \in \{\alpha, \beta\}^{2}$. We say that $u$ is \emph{non-critical} if $v_p(u_\mathfrak{p}), v_p(u_{\bar{\mathfrak{p}}}) < 1$; otherwise $u$ is \emph{critical}.
\end{definition}
Let $K_\infty$ be an extension of $K$ with Galois group $G$, satisfying the hypotheses specified in Section \ref{sect:setupimquad}. For a finite-order character $\omega$ of $G$, let $L_{\{\mathfrak{p}, \bar{\mathfrak{p}}\}}(f/K, \omega^{-1}, s)$ denote the twisted $L$-function of $f$ with the Euler factors at $\mathfrak{p}$ and $\bar{\mathfrak{p}}$ removed. Let $\Omega^+_f$ and $\Omega^-_f$ be the real and complex periods of $f$ (which are defined up to multiplication by an element of $\mathbb{Q}(f)^\times$).
\begin{conjecture}[Existence of $L$-functions]
\label{conj:lfunc}
Let $(u_\mathfrak{p}, u_{\bar{\mathfrak{p}}})$ be a $p$-refinement which is non-critical. Then there exists a distribution $\mu_f(u_\mathfrak{p}, u_{\bar{\mathfrak{p}}})$ on $G$, of order $(v_p(u_\mathfrak{p}), v_p(u_{\bar{\mathfrak{p}}}))$ with respect to the subgroups $(\Gamma_\mathfrak{p}, \Gamma_{\bar{\mathfrak{p}}})$, such that for all finite-order characters $\omega$ we have
\begin{align}
& \int_{G} \omega\, \mathrm{d}\mu_f(u_\mathfrak{p}, u_{\bar{\mathfrak{p}}}) = \notag \\
& \left( \prod_{q \in \{\mathfrak{p}, \bar{\mathfrak{p}}\}} u_q^{-c_q(\omega)} e_q(\omega^{-1}) \frac{P_q(\omega^{-1}, u_q^{-1})}{P_q(\omega, p^{-1} u_q)} \right)
\frac{L_{\{\mathfrak{p}, \bar{\mathfrak{p}}\}} (f / K, \omega^{-1}, 1)}{\Omega^+_f \Omega^-_f}.
\label{eq:interpolating}
\end{align}
\end{conjecture}
\begin{remark}
The definition of the order of a distribution on $\ZZ_p^2$ is given in Section \ref{sect:order}. The hypothesis that the $p$-refinement be non-critical implies that the distribution $\mu_f(u_\mathfrak{p}, u_{\bar{\mathfrak{p}}})$ is unique if it exists, since a distribution of order $(r,s)$ with $r,s < 1$ is uniquely determined by its values at finite-order characters.
\end{remark}
Two approaches to constructing such $L$-functions are known: either via $p$-adic interpolation of Rankin--Selberg convolutions, as in \cite{hida88,perrinriou88,kim-preprint}, or via the combinatorics of modular symbols on symmetric spaces attached to $\GL(2, \mathbb{A}_K)$, as in \cite{haran87}. The details have not been written down in the full generality described above (although M.~Emerton and B.~Zhang have announced results of this kind in a paper which is currently in preparation). The literature to date contains constructions of $\mu_f(u_\mathfrak{p}, u_{\bar{\mathfrak{p}}})$ in the following cases:
\begin{itemize}
\item if $f$ is ordinary, $\delta = 1$, and $u$ is the ``ordinary refinement'' $(\alpha, \alpha)$, where $\alpha$ is the unit root \cite{haran87};
\item if $f$ is ordinary, $u$ is the ordinary refinement, and $G$ decomposes as a direct product of eigenspaces for complex conjugation \cite{perrinriou88};
\item if $f$ is non-ordinary, $u_{\mathfrak{p}} = u_{\bar{\mathfrak{p}}}$, $[K(\mathfrak{f} p) : K]$ is prime to $p$, $\delta^2 = 1$, and we consider only the restriction of the distribution to the set of characters whose restriction to $\Gal(K(\mathfrak{f} p) / K)$ does not factor through a Dirichlet character via the norm map \cite{kim-preprint}.
\end{itemize}
\begin{remark}\mbox{~}
(i) We have chosen to write the interpolating formula \eqref{eq:interpolating} in a way that emphasises the similarity with that of \cite{cfksv}. The cited references use a range of different formulations, and the distributions they construct differ from ours by various correction factors; but in each case the \emph{existence} of a measure satisfying their conditions is equivalent to the conjecture above.
(ii) If $f$ is ordinary and $u$ is the ordinary refinement, the condition that $\mu_f(u)$ has order $(0, 0)$ is simply that it be a measure. In the non-ordinary case considered by Kim, the condition that $\mu_f(u)$ has order $(v_p(u_\mathfrak{p}), v_p(u_{\bar{\mathfrak{p}}}))$ is more delicate, and depends crucially on the decomposition of $\Gal(K_\infty / K(\mathfrak{f}))$ as the direct product of the distinguished subgroups $\Gamma_{\mathfrak{p}}$ and $\Gamma_{\bar{\mathfrak{p}}}$ corresponding to the two primes above $p$.
\end{remark}
We now give a conjectural interpretation of these $p$-adic $L$-functions in terms of our regulator map $\mathcal{L}^G_V$. Let us write
\[ Z^1_{\Iw, p}(V) = Z^1_{\Iw, \mathfrak{p}}(V) \oplus Z^1_{\Iw, \bar{\mathfrak{p}}}(V).\]
We write $\exp^*_{V}$ for the map $\exp^*_{K_\mathfrak{p}, V} \oplus \exp^*_{K_{\bar{\mathfrak{p}}}, V} : Z^1_{\Iw,p}(V) \to \mathbb{D}_p(V)$, and similarly $\mathcal{L}^G_V$ for the map
$\mathcal{L}^G_{\mathfrak{p}, V} \oplus \mathcal{L}^G_{\bar{\mathfrak{p}}, V} : Z^1_{\Iw, p}(V) \to \mathcal{H}_{\widehat{F}_\infty}(G) \otimes \mathbb{D}_p(V)$. Both of these induce maps on the wedge square, which we denote by the same symbols.
The following conjecture can be seen as a special case of the very general ``$\zeta$-isomorphism conjecture'' of Fukaya and Kato (Conjecture 2.3.2 of \cite{fukayakato06}), applied to the module $\Lambda_{\ZZ_p}(G) \otimes T$ for $T$ a $\ZZ_p$-lattice in $V$.
\begin{conjecture}
\label{conj:twovarmf}
Choose a basis $v$ of $\Fil^0 \mathbb{D}_{\mathrm{cris}}(K_\mathfrak{p}, V) \otimes_{\QQ_p} \Fil^0 \mathbb{D}_{\mathrm{cris}}(K_{\bar{\mathfrak{p}}}, V) \subseteq \mathbb{D}_p(V)^{(1, 1)}$. Then there is a distinguished element $\mathfrak{c} \in \bigwedge^2 H^1_{\Iw, S}(K_\infty, V)$ such that for all finite-order characters $\omega$, we have
\[ \exp^*_{V(\omega^{-1})^*(1)}(\mathfrak{c}_\omega) = \frac{L(f / K, \omega^{-1}, 1)}{ \Omega^+_f \Omega^-_f }\, v.\]
Moreover, $\mathfrak{c}$ is a $\Lambda_{\ZZ_p}(G)$-basis of $\mathbb{I}_{\arith,p}(V)$.
\end{conjecture}
We choose a basis $v_{\mathfrak{p}, \alpha}, v_{\mathfrak{p}, \beta}$ of $\varphi$-eigenvectors in $\mathbb{D}_{\mathrm{cris}}(K_\mathfrak{p}, V)$, and similarly for $\mathbb{D}_{\mathrm{cris}}(K_{\bar{\mathfrak{p}}}, V)$; and for a $p$-refinement $u = (u_\mathfrak{p}, u_{\bar{\mathfrak{p}}})$, we let $v_u = v_{\mathfrak{p}, u_{\mathfrak{p}}} \otimes v_{\bar{\mathfrak{p}}, u_{\bar{\mathfrak{p}}}} \in \mathbb{D}_p(V)^{(1, 1)}$. We may normalise these bases such that $v_{\mathfrak{p}} = v_{\mathfrak{p}, \alpha} + v_{\mathfrak{p}, \beta}$ is a basis of $\Fil^0 \mathbb{D}_{\mathrm{cris}}(K_\mathfrak{p}, V)$ (and respectively for $\bar{\mathfrak{p}}$); then $v = v_{\mathfrak{p}} \otimes v_{\bar{\mathfrak{p}}}$ is a basis of $\Fil^0 \mathbb{D}_p(V)$.
\begin{proposition}
Let $\mathfrak{c} \in \bigwedge^2 H^1_{\Iw, S}(K_\infty, V)$. Then for each $p$-refinement $u$ (critical or otherwise), the projection of $\mathcal{L}^G_V(\mathfrak{c})$ to the subspace $E \cdot v_u \subseteq \mathbb{D}_p(V)^{(1, 1)}$ is a distribution of order $(v_p(u_\mathfrak{p}), v_p(u_{\bar{\mathfrak{p}}}))$. If $\mathfrak{c}$ satisfies the condition of Conjecture \ref{conj:twovarmf}, then the projection of $\mathcal{L}^G_V(\mathfrak{c})$ satisfies the interpolating property \eqref{eq:interpolating}.
\end{proposition}
\begin{proof}
The values of $\mathcal{L}^G_{V}(\mathfrak{c})$ at $\omega$ can be expressed in terms of those of the dual exponential map using Proposition \ref{thm:explicitformula}, which clearly gives the formula of \eqref{eq:interpolating}.
The statement regarding the orders of the projections is an instance of Theorem \ref{thm:orderofproduct}. Concretely, suppose we choose elements $\mathfrak{c}_1, \mathfrak{c}_2$ such that $\mathfrak{c}_1 \wedge \mathfrak{c}_2 = \mathfrak{c}$. Then we have
\begin{multline*}
\mathcal{L}^G_{V}(\mathfrak{c}) = (v_{\mathfrak{p}, \alpha}\mathcal{L}_{V, \mathfrak{p}}(\mathfrak{c}_1)_{\alpha} + v_{\mathfrak{p}, \beta}\mathcal{L}_{V, \mathfrak{p}}(\mathfrak{c}_1)_{\beta} + v_{\bar{\mathfrak{p}}, \alpha}\mathcal{L}_{V, \bar{\mathfrak{p}}}(\mathfrak{c}_1)_{\alpha} + v_{\bar{\mathfrak{p}}, \beta}\mathcal{L}_{V, \bar{\mathfrak{p}}}(\mathfrak{c}_1)_{\beta}) \\ \wedge (v_{\mathfrak{p}, \alpha}\mathcal{L}_{V, \mathfrak{p}}(\mathfrak{c}_2)_{\alpha} + v_{\mathfrak{p}, \beta}\mathcal{L}_{V, \mathfrak{p}}(\mathfrak{c}_2)_{\beta} + v_{\bar{\mathfrak{p}}, \alpha}\mathcal{L}_{V, \bar{\mathfrak{p}}}(\mathfrak{c}_2)_{\alpha} + v_{\bar{\mathfrak{p}}, \beta}\mathcal{L}_{V, \bar{\mathfrak{p}}}(\mathfrak{c}_2)_{\beta}),
\end{multline*}
so the projection of $\mathcal{L}^G_{V}(\mathfrak{c})$ to the line spanned by $v_u$ is
\[ v_u \cdot
\begin{vmatrix}
\mathcal{L}^G_{V, \mathfrak{p}}(\mathfrak{c}_1)_{u_\mathfrak{p}} & \mathcal{L}^G_{V, \mathfrak{p}}(\mathfrak{c}_2)_{u_\mathfrak{p}}\\
\mathcal{L}^G_{V, \bar{\mathfrak{p}}}(\mathfrak{c}_1)_{u_{\bar{\mathfrak{p}}}} & \mathcal{L}^G_{V, \bar{\mathfrak{p}}}(\mathfrak{c}_2)_{u_{\bar{\mathfrak{p}}}}
\end{vmatrix}.
\]
Since $\mathcal{L}^G_{V, \mathfrak{p}}(\mathfrak{c}_?)_{u_\mathfrak{p}}$ (for $? \in \{1, 2\}$) is a distribution of order $(v_p(u_\mathfrak{p}), 0)$, and $\mathcal{L}^G_{V, \bar{\mathfrak{p}}}(\mathfrak{c}_?)_{u_{\bar{\mathfrak{p}}}}$ is a distribution of order $(0, v_p(u_{\bar{\mathfrak{p}}}))$, the determinant gives a distribution of order $(v_p(u_\mathfrak{p}), v_p(u_{\bar{\mathfrak{p}}}))$ as claimed.
\end{proof}
In particular, when the refinement $u$ is non-critical, we conclude that Conjecture \ref{conj:twovarmf} implies Conjecture \ref{conj:lfunc}, and that the projection of $\mathcal{L}^G_{V}(\mathfrak{c})$ to $v_u$ is equal to the uniquely determined distribution $\mu_f(u)$.
\begin{remark}
If Conjecture \ref{conj:twovarmf} holds, then one can also project the element $\mathcal{L}^G_V(\mathfrak{c})$ into $\mathbb{D}_p(V)^{(2, 0)}$ (or into $\mathbb{D}_p(V)^{(0, 2)}$). The resulting distributions are of a rather simpler type: if $\mathfrak{c} = \mathfrak{c}_1 \wedge \mathfrak{c}_2$ as before, then
\[
\pr_{2, 0} \mathcal{L}^G_{V}(\mathfrak{c}) = \mathcal{L}^G_{\mathfrak{p}, V}(\mathfrak{c}_1) \wedge \mathcal{L}^G_{\mathfrak{p}, V}(\mathfrak{c}_2).
\]
This is a distribution on $G$ with values in the 1-dimensional space $\mathbb{D}_p(V)^{(2, 0)} = \det_{\QQ_p} \mathbb{D}_{\mathrm{cris}}(K_\mathfrak{p}, V)$ of order $(1, 0)$, divisible by the image in $\mathcal{H}_{\widehat{F}_\infty}(G)$ of the distribution $\ell_0 \in \mathcal{H}_{\QQ_p}(\Gamma_\mathfrak{p})$, so dividing by this factor gives a bounded measure on $G$ with values in $\widehat{F}_\infty$. Note that acting by the arithmetic Frobenius of $\widehat{F}_\infty$ on this measure corresponds to multiplication by $[\sigma_\mathfrak{p}]^2$, so it \emph{never} descends to a finite extension of $\QQ_p$.
It is natural to conjecture (and would follow from Conjecture 2.3.2 of \cite{fukayakato06}) that if $\tau$ is a character of $G$ whose Hodge--Tate weights at $\mathfrak{p}$ and $\bar{\mathfrak{p}}$ are $(r,s)$ with $r \ge 1$ and $s \le -1$, so $\Fil^0 \bigwedge^2 \mathbb{D}_{p}(V(\tau^{-1})) = \mathbb{D}_p(V)^{(2,0)}$, then the value of $\pr_{2, 0} \mathcal{L}^G_V(\mathfrak{c})$ at $\tau$ should (after dividing by an appropriate period) correspond to the value at $1$ of the $L$-function of the automorphic representation $\operatorname{BC}(\pi_f) \otimes \tau$ of $\operatorname{GL}(2,\mathbb{A}_K)$. Up to a shift by the cyclotomic character, this corresponds to the set of characters denoted by $\Sigma^{(2)}(\mathfrak{f})$ in \cite{BDP13}, while the finite-order characters covered by the interpolating property in Conjecture \ref{conj:twovarmf} correspond to the set denoted there by $\Sigma^{(1)}(\mathfrak{f})$.
If this conjecture holds, the image of $\pr_{2, 0} \mathcal{L}^G_{V}(\mathfrak{c}) / \ell_0$ in the Galois group of the anticyclotomic $\ZZ_p$-extension of $K$ should be related to the $L$-functions of \cite[Proposition 6.10]{BDP13} and \cite{brakocevic}, which interpolate the $L$-values of twists of $f$ by anticyclotomic characters in $\Sigma^{(2)}(\mathfrak{f})$. We intend to study this question further in a future paper.
\end{remark}
\appendix
\section{Local and global Iwasawa cohomology}
\label{appendix:iwacoho}
In this section, we shall recall some results on the structure of Iwasawa cohomology groups of $p$-adic Galois representations over towers of extensions of local and global fields. These are generalizations of well-known results for cyclotomic towers due to Perrin-Riou (cf.~\cite[\S 2]{perrinriou92}); much more general results have since been obtained by Nekov\'a\v{r} \cite{nekovar06}, and we briefly indicate how to derive the results we need from those of \emph{op.cit.}
\subsection{Conventions}
We shall work with extensions of (local or global) fields $F_\infty/ F$ whose Galois group is of the form $G = \Delta \times \ZZ_p^e$, where $e \ge 1$ and $\Delta$ is a finite abelian group of order prime to $p$. The Iwasawa algebra $\Lambda_{\ZZ_p}(G)$ is a reduced ring, but it is not in general an integral domain; rather, it is isomorphic to the direct product of the subrings $e_\eta \Lambda_{\ZZ_p}(G)$, where $\eta$ ranges over the $\overline{\QQ}_p / \QQ_p$-conjugacy classes of characters of $\Delta$. For each such $\eta$, $e_\eta \Lambda_{\ZZ_p}(G)$ is a local integral domain.
In order to greatly simplify the presentation of our results, we shall adopt a minor abuse of notation, following the conventions of \cite{perrinriou95}.
\begin{definition}
We shall say that a finitely generated $\Lambda_{\ZZ_p}(G)$-module $M$ has rank $r$ if $e_\eta M$ has rank $r$ over $e_\eta \Lambda_{\ZZ_p}(G)$ for every $\eta$.
\end{definition}
When using this notation, it is important to bear in mind that when $\Delta$ is nontrivial, most finitely generated $\Lambda_{\ZZ_p}(G)$-modules will not have a well-defined rank.
\subsection{The local case}
\label{sect:localranks}
Let $F$ be a finite extension of $\mathbb{Q}_\ell$, for some prime $\ell$. Let $V$ be a $\QQ_p$-representation of $\mathcal{G}_{F}$ of dimension $d$, and choose a Galois-invariant $\ZZ_p$-lattice $T$. For $F_\infty / F$ an abelian extension satisfying the conditions above, we define
\[ H^i_{\Iw}(F_\infty, T) = \varprojlim_{K} H^i(K, T)\]
where the limit is over all finite extensions $K/F$ contained in $F_\infty$, with respect to the corestriction maps; and $H^i_{\Iw}(F_\infty, V) = \QQ_p \otimes_{\ZZ_p} H^i_{\Iw}(F_\infty, T)$.
\begin{theorem}
\label{thm:localrank}
The groups $H^i_{\Iw}(F_\infty, T)$ are finitely-generated $\Lambda_{\ZZ_p}(G)$-modules, zero if $i \notin \{1, 2\}$. We have an isomorphism
\[ H^2_{\Iw}(F_\infty, T) \cong H^0(F_\infty, T^\vee(1))^\vee,\]
where $(-)^\vee$ denotes the Pontryagin dual; in particular $H^2_{\Iw}(F_\infty, T)$ is $\Lambda_{\ZZ_p}(G)$-torsion.
The group $H^1_{\Iw}(F_\infty, T)$ has well-defined rank given by
\[ \operatorname{rk}_{\Lambda_{\ZZ_p}(G)} H^1_{\Iw}(F_\infty, T) = \begin{cases} 0& \text{if $\ell \ne p$,} \\ [F : \QQ_p] d & \text{if $\ell = p$}.\end{cases}\]
\end{theorem}
\begin{proof}
We have assumed that $G$ has a subgroup isomorphic to $\ZZ_p^e$ with $e \ge 1$; thus the profinite degree of $F_\infty / F$ is divisible by $p^\infty$, so $H^0_{\Iw}(F_\infty, T) = 0$ by \cite[8.3.5 Proposition]{nekovar06}.
For the finiteness statements for $i > 0$, we note that
\[ H^i_{\Iw}(F_\infty, T) \cong H^i(F, \Lambda_{\ZZ_p}(G) \otimes_{\ZZ_p} T)\]
by \cite[8.4.4.2 Proposition]{nekovar06}, where the action of $\mathcal{G}_F$ on $\Lambda_{\ZZ_p}(G)$ is via the inverse of the canonical character $\mathcal{G}_F \to G \to \Lambda_{\ZZ_p}(G)^\times$. This implies the finite generation of the groups $H^i_{\Iw}(F_\infty, T)$, and their vanishing for $i \ge 3$, by Proposition 4.2.2 of \emph{op.cit.}.
The isomorphism $H^2_{\Iw}(F_\infty, T)^\vee \cong H^0(F_\infty, T^\vee(1))$ follows by applying local Tate duality to each finite extension $K / F$ contained in $F_\infty$. Finally, the formula for the rank of $H^1_{\Iw}(F_\infty, T)$ follows from Tate's local Euler characteristic formula for finite modules and Corollary 4.6.10 of \emph{op.cit.}.
\end{proof}
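By way of example (a standard computation, stated here without proof): take $\ell = p$, $F = \QQ_p$, $T = \ZZ_p(1)$ and $F_\infty = \QQ_p(\mu_{p^\infty})$, so that $G \cong \ZZ_p^\times \cong \Delta \times \ZZ_p$ with $\#\Delta = p - 1$. Kummer theory identifies
\[ H^1_{\Iw}(F_\infty, \ZZ_p(1)) \cong \varprojlim_n \left( \mathbb{Q}_{p,n}^\times \right)^{\wedge},\]
where $(-)^\wedge$ denotes the $p$-adic completion and the limit is taken with respect to the norm maps; this module has $\Lambda_{\ZZ_p}(G)$-rank $1 = [F : \QQ_p]\, d$, while $H^2_{\Iw}(F_\infty, \ZZ_p(1)) \cong H^0(F_\infty, \QQ_p/\ZZ_p)^\vee \cong \ZZ_p$ is $\Lambda_{\ZZ_p}(G)$-torsion, as the theorem predicts.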
\subsection{The global case}
\label{sect:globalranks}
We now let $K$ be a number field.
Let $V$ be a $\QQ_p$-representation of $\mathcal{G}_K$ of dimension $d$, and choose a $\mathcal{G}_K$-invariant $\ZZ_p$-lattice $T$. Let $S$ be a finite set of places of $K$ containing all the primes above $p$, all infinite places and all the places whose inertia group acts non-trivially on $V$, and let $K^S$ be the maximal extension of $K$ unramified outside $S$.
\begin{theorem}[Tate's global Euler characteristic formula]
If $M$ is a $\ZZ_p$-module of finite length with a continuous action of $\Gal(K^S / K)$, then the modules $H^i(K^S / K, M)$ are finite groups, zero for $i \ge 3$. If $K$ is totally complex, then we have
\[ \prod_{i =0}^2 \left( \# H^i(K^S / K, M)\right)^{(-1)^i} = (\# M)^{-\tfrac{1}{2}[K : \mathbb{Q}]}.\]
\end{theorem}
\begin{proof}
See \cite[8.3.17, 8.6.14]{nsw}.
\end{proof}
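For instance, if $K$ is imaginary quadratic and $M = \ZZ/p$ with trivial Galois action, the formula gives
\[ \frac{\# H^0(K^S / K, \ZZ/p) \cdot \# H^2(K^S / K, \ZZ/p)}{\# H^1(K^S / K, \ZZ/p)} = p^{-1},\]
and hence $\dim_{\mathbb{F}_p} H^1(K^S / K, \ZZ/p) = 2 + \dim_{\mathbb{F}_p} H^2(K^S / K, \ZZ/p)$, in accordance with the rank formula of Theorem \ref{thm:globalrank} below (with $d = 1$ and $[K : \mathbb{Q}] = 2$).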
We now consider a Galois extension $K_\infty / K$, contained in $K^S$, whose Galois group $G$ is of the form $\Delta \times \ZZ_p^e$, where $e\geq 1$ and $\Delta$ is abelian of order prime to $p$, as above. For $i \ge 0$, we define
\[ H^i_{\Iw, S}(K_\infty, T) = \varprojlim_{L} H^i(K^S / L, T)\]
where the limit is taken over number fields $L$ satisfying $K \subseteq L \subset K_\infty$, with respect to the corestriction maps.
\begin{theorem}
\label{thm:globalrank}
The groups $H^i_{\Iw, S}(K_\infty, T)$ are finitely-generated $\Lambda_{\ZZ_p}(G)$-modules, zero if $i = 0$ or $i \ge 3$. If $K$ is totally complex, then for each character $\eta$ of $\Delta$ we have
\[ \rank_{e_\eta \Lambda_{\ZZ_p}(G)} e_\eta H^1_{\Iw, S}(K_\infty, T) = \tfrac 1 2 [K : \mathbb{Q}] d + \rank_{e_\eta \Lambda_{\ZZ_p}(G)} e_\eta H^2_{\Iw, S}(K_\infty, T).\]
\end{theorem}
\begin{proof}
This follows exactly as in Theorem \ref{thm:localrank}, using Tate's global Euler characteristic formula in place of the local one. (There are no issues with real embeddings, thanks to our running assumption that $p$ be odd.)
\end{proof}
\begin{proposition}
\label{prop:leopoldt}
The following statements are equivalent:
\begin{enumerate}[(i)]
\item $H^2_{\Iw, S}(K_\infty, T)$ is $\Lambda_{\ZZ_p}(G)$-torsion.
\item For each character $\eta$ of $\Delta$, there is a character $\tau$ of $G$ such that $\tau|_\Delta = \eta$ and $H^2(K^S / K, V(\tau)) = 0$.
\item $H^2(K^S / K_\infty, T \otimes \QQ_p/\ZZ_p)$ is a cotorsion $\Lambda_{\ZZ_p}(G)$-module.
\item $H^2(K^S / K_\infty, T \otimes \QQ_p/\ZZ_p) = 0$.
\end{enumerate}
\end{proposition}
\begin{proof}
Since $\Delta$ has order prime to $p$, we may assume $\Delta = 1$, so $G \cong \ZZ_p^e$ and $\Lambda = \Lambda_{\ZZ_p}(G)$ is a local integral domain.
We first show (i) $\Leftrightarrow$ (ii). By \cite[8.4.8.2 Corollary, (ii)]{nekovar06} we have an isomorphism
\[ H^2_{\Iw, S}(K_\infty, T) \otimes_{\Lambda} \ZZ_p(\tau) \cong H^2(K^S / K, T(\tau^{-1})).\]
If $H^2_{\Iw, S}(K_\infty, T)$ is torsion, then it is annihilated by some non-zero $f \in \Lambda$. Since $f \ne 0$, there exists a character $\tau$ such that $f(\tau) \ne 0$; but by the above formula $f(\tau)$ annihilates $H^2(K^S / K, T(\tau^{-1}))$, so $H^2(K^S / K, V(\tau^{-1})) = 0$. Conversely, if $H^2(K^S/K, V(\tau^{-1})) = 0$ for some $\tau$, then $H^2(K^S / K, T(\tau^{-1}))$ is $\ZZ_p$-torsion, so by a form of Nakayama's lemma -- see \cite[Theorem 2]{balisterhowson} -- we can conclude that $H^2_{\Iw, S}(K_\infty, T)$ is a torsion $\Lambda$-module.
We now show (ii) $\Leftrightarrow$ (iii). We know that $H^2(K^S / K, T(\tau) \otimes \QQ_p/\ZZ_p)$ is finite if and only if $H^2(K^S / K, V(\tau)) = 0$. From the Hochschild--Serre spectral sequence and Poincar\'e duality for $G$-cohomology we have an isomorphism
\[ H^2(K^S / K_\infty, T \otimes \QQ_p/\ZZ_p)^\vee \otimes_{\Lambda} \ZZ_p(\tau) \cong H^2(K^S / K, T(\tau) \otimes \QQ_p/\ZZ_p)^\vee\]
and we conclude by the same argument as before.
To finish the proof, it suffices to show that (iii) $\Rightarrow$ (iv). We claim that the module $H^2(K^S / K_\infty, T \otimes \QQ_p/\ZZ_p)$ is co-free over $\Lambda$, i.e.~its Pontryagin dual $X = H^2(K^S / K_\infty, T \otimes \QQ_p/\ZZ_p)^\vee$ is a free $\Lambda$-module; thus if it is cotorsion, it must be zero. For $e = 1$ this is a theorem of Greenberg, cf.~\cite[Proposition 1.3.2]{perrinriou95}, so we shall reduce to this case by induction on $e$.
Let us choose topological generators $\gamma_1, \dots, \gamma_e$ of $\Gal(K_\infty / K) \cong \ZZ_p^e$, and set $u_i = [\gamma_i] - 1 \in \Lambda$. Then $\Lambda \cong \ZZ_p[[u_1, \dots, u_e]]$ and in particular $(p, u_1, \dots, u_e)$ is a regular sequence for $\Lambda$; so in order to show that $X$ is free, it suffices to show that $X[u_e] = 0$ and $X/u_e X$ is free as a module over $\Lambda / u_e \Lambda$.
If we let $U$ be the subgroup of $G$ generated by $\gamma_e$, then
\[ X[u_e] = H^1(U, H^2(K^S / K_\infty, T \otimes \QQ_p/\ZZ_p))^\vee,\]
and by the Hochschild--Serre exact sequence, $H^1(U, H^2(K^S / K_\infty, T \otimes \QQ_p/\ZZ_p))$ injects into $H^3(K^S / (K_\infty)^U, T \otimes \QQ_p/\ZZ_p)$, which is 0 (since $p$ is odd); and we have
\[ X / u_e X = H^2(K^S / (K_\infty)^U, T \otimes \QQ_p/\ZZ_p)^\vee,\]
which (by the induction hypothesis) is free over $\ZZ_p[[u_1, \dots, u_{e-1}]]$, so we are done.
\end{proof}
To define our module of $p$-adic $L$-functions we will need to assume the following conjecture, which corresponds to the ``conjecture de Leopoldt faible'' of \cite[\S 1.3]{perrinriou95}:
\begin{conjecture}[Conjecture $\operatorname{Leop}(K_\infty, V)$]
\label{conj:leopoldt}
The equivalent conditions of Proposition \ref{prop:leopoldt} hold, for some (and hence every) $\ZZ_p$-lattice $T$ in $V$.
\end{conjecture}
Note that if $K_\infty, L_\infty$ are two extensions of $K$ satisfying our conditions, with $K_\infty \subseteq L_\infty$, and $\Gal(L_\infty/K_\infty)$ is torsion-free (hence isomorphic to a product of copies of $\ZZ_p$), then conjecture $\operatorname{Leop}(K_\infty, V)$ implies conjecture $\operatorname{Leop}(L_\infty, V)$, since $\Gal(K_\infty / K)$ and $\Gal(L_\infty / K)$ have the same torsion subgroup and thus condition (ii) of Proposition \ref{prop:leopoldt} for $K_\infty$ implies the corresponding condition for $L_\infty$. It is conjectured that $\operatorname{Leop}(K(\mu_{p^\infty}), V)$ should hold for any $V$, and this is known in many cases; see \cite[Appendix B]{perrinriou95}.
\begin{example}
Let $V$ be the 2-dimensional $p$-adic representation of $\mathcal{G}_\mathbb{Q}$ associated to a modular form, $K / \mathbb{Q}$ an imaginary quadratic field, and $K_\infty$ the unique $\ZZ_p^2$-extension of $K$. Then $\operatorname{Leop}(K_\infty, V)$ holds.
To see this, we use the fact that $\operatorname{Leop}(K_\infty, V)$ is implied by $\operatorname{Leop}(K^{\mathrm{cyc}}, V)$, where $K^{\mathrm{cyc}}$ is the cyclotomic $\ZZ_p$-extension of $K$. However, by Shapiro's lemma the conjecture $\operatorname{Leop}(K^{\mathrm{cyc}}, V)$ is equivalent to $\operatorname{Leop}(\mathbb{Q}^{\mathrm{cyc}}, V \oplus V(\varepsilon_K))$, where $\mathbb{Q}^{\mathrm{cyc}}$ is the cyclotomic $\ZZ_p$-extension of $\mathbb{Q}$ and $\varepsilon_K$ is the quadratic Dirichlet character associated to $K$. The conjectures $\operatorname{Leop}(\mathbb{Q}^{\mathrm{cyc}}, V)$ and $\operatorname{Leop}(\mathbb{Q}^{\mathrm{cyc}}, V(\varepsilon_K))$ follow from \cite[Theorem 12.4]{kato04} applied to $f$ and its twist by $\varepsilon_K$.
\end{example}
\begin{corollary}
If $K$ is totally complex and Conjecture $\operatorname{Leop}(K_\infty, V)$ holds, then the module $H^1_{\Iw, S}(K_\infty, T)$ has well-defined $\Lambda_{\ZZ_p}(G)$-rank, equal to $\tfrac{1}{2}[K : \mathbb{Q}]d$, where $d=\rank_{\ZZ_p}T$.
\end{corollary}
\section{Explicit formulae for Perrin-Riou's p-adic regulator}
\label{appendix:cyclo}
In this section, we give the proof of the formulae for the cyclotomic regulator used in the proof of Proposition \ref{thm:explicitformula}. As we work only over $\QQ_p$ here, we shall write $\mathbb{D}(-)$ and $\mathbb{N}(-)$ for $\mathbb{D}_{\QQ_p}(-)$ and $\mathbb{N}_{\QQ_p}(-)$ respectively.
Let $V$ be a good crystalline representation of $\mathcal{G}_{\QQ_p}$, and $x \in \mathcal{H}(\Gamma) \otimes_{\Lambda_{\ZZ_p}(\Gamma)} H^1_{\Iw}(\mathbb{Q}_{p, \infty}, V)$. We write $x_j$ for the image of $x$ in $H^1_{\Iw}(\mathbb{Q}_{p, \infty}, V(-j))$, and $x_{j, n}$ for the image of $x_j$ in $H^1(\mathbb{Q}_{p, n}, V(-j))$. If we identify $x$ with its image in $\mathbb{D}(V)^{\psi = 1}$, then $x_j$ corresponds to the element $x \otimes e_{-j} \in \mathbb{D}(V)^{\psi = 1} \otimes e_{-j} = \mathbb{D}(V(-j))^{\psi = 1}$.
Since $V$ has non-negative Hodge--Tate weights, we may interpret $x$ as an element of the module $\left( \mathbb{B}^+_{\rig, \QQ_p}\left[\tfrac1t\right] \otimes \mathbb{D}_{\mathrm{cris}}(V)\right)^{\psi = 1}$.
We shall assume:
\[ \tag{$\dag$} x \in \left(\mathbb{B}^+_{\rig, \QQ_p} \otimes_{\mathbb{B}^+_{\QQ_p}} \mathbb{N}(V)\right)^{\psi = 1} \subseteq \left( \mathbb{B}^+_{\rig, \QQ_p}\left[\tfrac1t\right] \otimes_{\QQ_p} \mathbb{D}_{\mathrm{cris}}(V)\right)^{\psi = 1}.\]
This condition is satisfied in the following two situations:
\begin{itemize}
\item if $V$ has no quotient isomorphic to $\QQ_p$, by \cite[Theorem A.3]{berger03};
\item or if $x$ is in the image of the Iwasawa cohomology over $F_\infty(\mu_{p^\infty})$, by Theorem \ref{thm:unramunivnorms} above.
\end{itemize}
We will base our proofs on the work of Berger \cite{berger03}, so we recall the notation of that reference. Let $\partial$ denote the differential operator $(1 + \pi) \tfrac{\mathrm{d}}{\mathrm{d}\pi}$ on $\BB^+_{\rig,\Qp}$. We also use Berger's notation $\partial_V \circ \varphi^{-n}$ for the map
\[ \BB^+_{\rig,\Qp}\left[\tfrac1t\right] \otimes_{\QQ_p} \mathbb{D}_{\mathrm{cris}}(V) \rTo \mathbb{Q}_{p, n}\otimes_{\QQ_p} \mathbb{D}_{\mathrm{cris}}(V)\]
which sends $\pi^k\otimes v$ to the constant coefficient of $(\zeta_n \exp(t/p^n) - 1)^k \otimes \varphi^{-n}(v)\in \mathbb{Q}_{p,n}((t))\otimes_{\QQ_p}\mathbb{D}_{\mathrm{cris}}(V)$.
For $m \in \mathbb{Z}$, define $\Gamma^*(m)$ to be the leading term of the Taylor series expansion of $\Gamma(x)$ at $x = m$ (cf.~\cite[\S 3.3.6]{fukayakato06}); thus
\[ \Gamma^*(1 + j) = \begin{cases} j! & \text{if $j \ge 0$,} \\ \frac{(-1)^{-j-1}}{(-j-1)!} & \text{if $j \le -1$.}\end{cases}\]
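For orientation, here is a short consistency check of this normalisation (our computation, not taken from \cite{fukayakato06}): the first few values are
\[ \Gamma^*(1) = 0! = 1, \qquad \Gamma^*(0) = \frac{(-1)^0}{0!} = 1, \qquad \Gamma^*(-1) = \frac{(-1)^1}{1!} = -1,\]
and for $j \le -1$ and $h \ge 1$ one has
\[ \frac{\Gamma^*(j+1)}{\Gamma^*(j - h + 1)} = (-1)^h\, \frac{(h-j-1)!}{(-j-1)!} = j(j-1) \dots (j-h+1),\]
the identity used in the proof of the proposition below.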
\begin{proposition}
For $x$ satisfying $(\dag)$, let us define
\[ R_{j, n}(x) = \frac{1}{\Gamma^*(1 + j)} \times
\begin{cases}
p^{-n} \partial_{V(-j)}(\varphi^{-n}(\partial^{j} x \otimes t^j e_{-j})) & \text{if $n \ge 1$,}\\
(1 - p^{-1} \varphi^{-1}) \partial_{V(-j)}(\partial^{j} x \otimes t^j e_{-j}) & \text{if $n = 0$.}
\end{cases}
\]
Then we have
\[ R_{j, n}(x) =
\begin{cases}
\exp^*_{\mathbb{Q}_{p, n}, V(-j)^*(1)}(x_{j, n}) & \text{for $j \ge 0$,}\\
\log_{\mathbb{Q}_{p, n}, V(-j)}(x_{j, n}) & \text{for $j \le -1$.}
\end{cases}
\]
\end{proposition}
\begin{proof}
This result is essentially a minor variation on \cite[Theorem II.10]{berger03}. The case $j \ge 0$ is immediate from Theorem II.6 of \emph{op.cit.} applied with $V$ replaced by $V(-j)$ and $x$ by $x \otimes e_{-j}$, using the formula
\[ \partial_{V(-j)}(\varphi^{-n}(x \otimes e_{-j})) = \frac{1}{j!} \partial_{V(-j)}(\varphi^{-n}(\partial^j x \otimes t^{j} e_{-j})).\]
For the formula when $j \le -1$, we choose an auxiliary integer $h \ge 1$ such that $\Fil^{-h} \mathbb{D}_{\mathrm{cris}}(V) = \mathbb{D}_{\mathrm{cris}}(V)$. The element $\partial^{j} x \otimes t^{j} e_{-j}$ lies in $\left(\BB^+_{\rig,\Qp} \otimes_{\QQ_p} \mathbb{D}_{\mathrm{cris}}(V(-j))\right)^{\psi = 1}$, by $(\dag)$. Applying Theorem II.3 of \emph{op.cit.} with $V$, $h$ and $x$ replaced by $V(-j)$, $h-j$, and $\partial^{j} x \otimes t^{j} e_{-j}$, we see that
\[
\Gamma^*(j+1) R_{j, n}(x) = \Gamma^*(j - h + 1) \log_{\mathbb{Q}_{p, n}, V(-j)}
\left[ \left(\ell_0 \dots \ell_{h-1} x \right)_{j, n}\right].
\]
For $x \in \mathcal{H}(\Gamma) \otimes_{\Lambda_{\ZZ_p}(\Gamma)} H^1_{\Iw}(\mathbb{Q}_{p, \infty}, V)$, we have
\[ \left( \ell_r x\right)_{j, n} = (j - r) x_{j, n},\]
so (since $j \le -1$) we have
\[ \left( \ell_0 \dots \ell_{h-1} x \right)_{j, n} = (j)(j-1) \dots (j - h + 1) x_{j, n} = \frac{\Gamma^*(j+1)}{\Gamma^*(j - h + 1)} x_{j, n}\]
as required.
\end{proof}
\begin{proposition}
If $x$ is as above, and $\mathcal{L}^\Gamma_V(x)$ is the unique element of $\mathcal{H}(\Gamma) \otimes_{\QQ_p} \mathbb{D}_{\mathrm{cris}}(V)$ such that $\mathcal{L}^\Gamma_V(x) \cdot (1 + \pi) = (1 - \varphi) x$, then for any $j \in \mathbb{Z}$ we have
\[ (1 - \varphi) \cdot \partial_{V(-j)}(\varphi^{-n}(\partial^{j} x \otimes t^j e_{-j})) = \mathcal{L}^\Gamma(x)(\chi^j) \otimes t^j e_{-j},\]
while for any finite-order character $\omega$ of $\Gamma$ of conductor $n \ge 1$, we have
\begin{multline*} \left(\sum_{\sigma \in \Gamma / \Gamma_n} \omega(\sigma)^{-1} \sigma\right) \cdot \partial_{V(-j)}(\varphi^{-n}(\partial^{j} x \otimes t^j e_{-j})) \\=
\tau(\omega) \varphi^{-n} \left(\mathcal{L}^\Gamma(x)(\chi^j \omega) \otimes t^j e_{-j}\right).
\end{multline*}
\end{proposition}
\begin{proof}
We note that
\[ \mathcal{L}^\Gamma_{V(-j)}(\partial^{j} x \otimes t^j e_{-j}) = \Tw_j(\mathcal{L}^\Gamma_V(x)) \otimes t^{j} e_{-j},\]
so it suffices to prove the result for $j = 0$. Suppose we have
\[ x = \sum_{k \ge 0} v_k \pi^k,\quad v_k \in \mathbb{D}_{\mathrm{cris}}(V).\]
Then
\[ \partial_V(\varphi^{-n}(x)) = \sum_{k \ge 0} \varphi^{-n}(v_k) \left(\zeta_{p^n} - 1 \right)^k.\]
On the other hand
\[ \partial_V(\varphi^{-n}((1-\varphi) x)) = \sum_{k \ge 0} \varphi^{-n}(v_k) \left(\zeta_{p^n} - 1 \right)^k - \sum_{k \ge 0} \varphi^{1-n}(v_k) \left(\zeta_{p^{n-1}} - 1 \right)^k.\]
Applying the operator $e_\omega = \sum_{\sigma \in \Gamma / \Gamma_n} \omega(\sigma)^{-1} \sigma$, we have for $n \ge 1$
\[ e_\omega \cdot \partial_V(\varphi^{-n}(x)) = e_\omega \cdot \partial_V(\varphi^{-n}((1-\varphi) x)),\]
since $e_\omega$ is zero on $\mathbb{Q}_{p, n-1}((t))$.
However, since the map $\partial_V \circ \varphi^{-n}$ is a homomorphism of $\Gamma$-modules, we have
\begin{align*}
e_\omega \cdot \partial_V(\varphi^{-n}((1-\varphi) x)) &= e_\omega \cdot \partial_V( \mathcal{L}^\Gamma(x) \cdot (1 + \pi))\\
&= \varphi^{-n}(\mathcal{L}^\Gamma(x)) \cdot e_\omega \partial_{\QQ_p}(\varphi^{-n}(1 + \pi))\\
&= \tau(\omega) \varphi^{-n}\left( \mathcal{L}^\Gamma(x)(\omega)\right).
\end{align*}
This completes the proof of the proposition for $j = 0$.
\end{proof}
\begin{definition}
Let $x\in H^1_{\Iw}(\mathbb{Q}_{p,\infty},V)$. If $\eta$ is any continuous character of $\Gamma$, denote by $x_\eta$ the image of $x$ in $H_{\Iw}^1(\mathbb{Q}_{p,\infty}, V(\eta^{-1}))$. If $n\geq 0$, denote by $x_{\eta,n}$ the image of $x_\eta$ in $H^1(\mathbb{Q}_{p,n}, V(\eta^{-1}))$.
\end{definition}
Thus $x_{\chi^j, n} = x_{j, n}$ in the previous notation. The next lemma is valid for arbitrary de Rham representations of $\mathcal{G}_{\QQ_p}$ (with no restriction on the Hodge--Tate weights):
\begin{lemma}
For any finite-order character $\omega$ factoring through $\Gamma / \Gamma_n$, with values in a finite extension $E / \QQ_p$, we have
\[ \sum_{\sigma \in \Gamma / \Gamma_n} \omega(\sigma)^{-1} \exp^*_{\mathbb{Q}_{p,n}, V^*(1)}(x_{0, n})^\sigma = \exp^*_{\QQ_p, V(\omega^{-1})^*(1)}(x_{\omega,0})\]
and
\[ \sum_{\sigma \in \Gamma / \Gamma_n} \omega(\sigma)^{-1} \log_{\mathbb{Q}_{p, n}, V}(x_{0,n})^\sigma = \log_{\QQ_p, V(\omega^{-1})}(x_{\omega,0})\]
where we make the identification
\[ \mathbb{D}_{\dR}(V(\omega^{-1})) \cong \left(E \otimes_{\QQ_p} \mathbb{Q}_{p, n} \otimes_{\QQ_p} \mathbb{D}_{\mathrm{cris}}(V)\right)^{\Gamma = \omega}.\]
\end{lemma}
\begin{proof}
This follows from the compatibility of the maps $\exp^*$ and $\log$ with the corestriction maps (cf.~\cite[\S\S II.2 \& II.3]{berger03}).
\end{proof}
Combining the three results above, we obtain:
\begin{theorem}
\label{thm:explicitformulacyclo}
Let $j \in \mathbb{Z}$ and let $x$ satisfy $(\dag)$. Let $\eta$ be a continuous character of $\Gamma$ of the form $\chi^j \omega$, where $\omega$ is a finite-order character of conductor $n$.
\begin{enumerate}[(a)]
\item If $j \ge 0$, we have
\begin{multline*}
\mathcal{L}^\Gamma_{V}(x)(\eta)
= j! \times\\
\begin{cases}
(1 - p^j \varphi)(1 - p^{-1-j} \varphi^{-1})^{-1} \left( \exp^*_{\QQ_p, V(\eta^{-1})^*(1)}(x_{\eta,0}) \otimes t^{-j} e_j\right) & \text{if $n = 0$,}\\
\tau(\omega)^{-1} p^{n(1+j)} \varphi^n \left(\exp^*_{\QQ_p, V(\eta^{-1})^*(1)}(x_{\eta,0}) \otimes t^{-j} e_j\right) & \text{if $n \ge 1$.}
\end{cases}
\end{multline*}
\item If $j \le -1$, we have
\begin{multline*}
\mathcal{L}^\Gamma_{V}(x)(\eta)
= \frac{(-1)^{-j-1}}{(-j-1)!} \times\\
\begin{cases}
(1 - p^j \varphi)(1 - p^{-1-j} \varphi^{-1})^{-1} \left( \log_{\QQ_p, V(\eta^{-1})}(x_{\eta,0}) \otimes t^{-j} e_j\right) & \text{if $n = 0$,}\\
\tau(\omega)^{-1} p^{n(1+j)} \varphi^n \left(\log_{\QQ_p, V(\eta^{-1})}(x_{\eta,0}) \otimes t^{-j} e_j\right) & \text{if $n \ge 1$.}
\end{cases}
\end{multline*}
\end{enumerate}
(In both cases, we assume that $(1 - p^{-1-j} \varphi^{-1})$ is invertible on $\mathbb{D}_{\mathrm{cris}}(V)$ when $\eta = \chi^j$.)
\end{theorem}
From this theorem it is straightforward to deduce a version of Perrin-Riou's explicit reciprocity formula, relating the regulator for $V$ and for $V^*(1)$. We recall from \S\ref{sect:iwasawacoho} the definition of the Perrin-Riou pairing
\[ \langle -, - \rangle_{\mathbb{Q}_{p, \infty}} : H^1_{\Iw}(\mathbb{Q}_{p, \infty}, V) \times H^1_{\Iw}(\mathbb{Q}_{p, \infty}, V^*(1)) \to \Lambda_{\QQ_p}(\Gamma).\]
Let $h$ be sufficiently large that $V^*(1 + h)$ has Hodge--Tate weights $\ge 0$. Recall that we write $y_{-h}$ for the image of $y$ in $H^1_{\Iw}(\mathbb{Q}_{p, \infty}, V^*(1 + h))$. Define $\mathcal{L}^\Gamma_{V^*(1)}$ by
\begin{multline}
\label{eq:twist1}
\mathcal{L}^\Gamma_{V^*(1)}(y) = (\ell_{-1} \ell_{-2} \cdots \ell_{-h})^{-1} \Tw_{-h} \left(\mathcal{L}_{V^*(1 + h)}(y_{-h})\right) \otimes t^h e_{-h} \\ \in \operatorname{Frac} \mathcal{H}_{\QQ_p}(\Gamma) \otimes \mathbb{D}_{\mathrm{cris}}(V^*(1));
\end{multline}
note that this definition is independent of the choice of $h \gg 0$. Write $\langle \cdot, \cdot \rangle_{\cris, V}$ for the natural pairing $\mathbb{D}_{\mathrm{cris}}(V) \times \mathbb{D}_{\mathrm{cris}}(V^*(1)) \to \mathbb{D}_{\mathrm{cris}}(\QQ_p(1)) \cong \QQ_p$. We extend the crystalline pairing $\Lambda_{\ZZ_p}(\Gamma)$-linearly in the first argument and antilinearly in the second argument.
\begin{theorem}
\label{thm:cycloreciprocity}
For all $x\in H^1_{\Iw}(\mathbb{Q}_{p,\infty},V)$ and $y\in H^1_{\Iw}(\mathbb{Q}_{p,\infty},V^*(1))$, we have
\[ \left\langle \mathcal{L}_V(x), \mathcal{L}_{V^*(1)}(y)\right\rangle_{\cris, V} = -\sigma_{-1} \cdot \ell_0 \cdot \langle x, y \rangle_{\mathbb{Q}_{p, \infty}, V},\]
where $\sigma_{-1}$ is the unique element of $\Gamma$ such that $\chi(\sigma_{-1}) = -1$.
\end{theorem}
\begin{proof}
By Theorem \ref{thm:explicitformulacyclo} (a), for $j \ge 1+h$ we have
\[
\mathcal{L}_V(x)(\chi^j) = j! (1 - p^j \varphi)(1 - p^{-1-j} \varphi^{-1})^{-1} \left( \exp^*_{0, V^*(1+j)}(x_{j,0}) \otimes t^{-j} e_j\right)\]
and
\begin{multline*}
\mathcal{L}_{V^*(1 + h)}(y_{-h})(\chi^{h-j}) \otimes t^h e_{-h} =\frac{(-1)^{j-h-1}}{(j-h-1)!} \times\\ (1 - p^{-j} \varphi)(1 - p^{j-1} \varphi^{-1})^{-1} \left( \log_{\QQ_p, V^*(1 + j)} (y_{-j,0}) \otimes t^{j} e_{-j}\right).
\end{multline*}
Hence we have
\begin{align*}
\big\langle \mathcal{L}_V(x)(\chi^j), & \mathcal{L}_{V^*(1 + h)}(y_{-h})(\chi^{h-j}) \otimes t^h e_{-h}\big\rangle_{\cris, V} \\
& = \frac{(-1)^{h-j-1} j!}{(j-h-1)!} \langle \exp^*_{0, V^*(1+j)}(x_{j,0}), \log_{\QQ_p, V^*(1 + j)} (y_{-j,0}) \rangle_{\cris, V(-j)}\\
& = \frac{(-1)^{h-j-1} j!}{(j-h-1)!} \langle x_{j,0}, y_{-j,0}\rangle_{\QQ_p, V(-j)}\\
& = (-1)^{h+1} \left[ \sigma_{-1} \cdot (\ell_0 \dots \ell_{h}) \cdot \langle x, y \rangle_{\mathbb{Q}_{p, \infty}, V} \right] (\chi^j).
\end{align*}
Using the definition of $\mathcal{L}^\Gamma_{V^*(1)}$ as in \eqref{eq:twist1}, this relation takes the more pleasing form
\[ \left\langle \mathcal{L}_V(x), \mathcal{L}_{V^*(1)}(y)\right\rangle_{\cris, V} = -\sigma_{-1} \cdot \ell_0 \cdot \langle x, y \rangle_{\mathbb{Q}_{p, \infty}, V}.\]
\end{proof}
\section{Functions of two $p$-adic variables}
Let $p$ be a prime. We let $L$ be a complete discretely valued subfield of $\CC_p$, and let $v_p$ denote the $p$-adic valuation on $L$, normalised in the usual fashion, so $v_p(p) = 1$.
\subsection{Functions and distributions of one variable}
We recall the theory in the one-variable case, as presented in \cite{colmez10}. Let $h \in \mathbb{R}$, $h \ge 0$.
Let $f$ be a function $\ZZ_p \to L$. We say $f$ has order $h$ if, informally, it may be approximated by a Taylor series of degree $[h]$ at every point with an error term of order $h$. More precisely, $f$ has order $h$ if there exist functions $f^{(j)}, 0 \le j \le [h]$, such that the quantity
\[ \varepsilon_f(x,y) = f(x + y) - \sum_{j=0}^{[h]} \frac{f^{(j)}(x) y^j}{j!},\]
satisfies
\[ \inf_{x \in \ZZ_p, y \in p^n \ZZ_p} v_p \left( \varepsilon_f(x, y)\right) -hn \to \infty\]
as $n \to \infty$. (It is clear that this determines the functions $f^{(0)}, \dots, f^{([h])}$ uniquely.)
We write $C^h(\ZZ_p, L)$ for the space of such functions, with a Banach space structure given by the valuation
\[ v_{C^h}(f) = \inf\left( \inf_{0 \le j \le [h], x \in \ZZ_p} v_p(f^{(j)}(x)), \inf_{x,y \in \ZZ_p} v_p(\varepsilon_f(x,y)) - h v_p(y)\right).\]
We define the space $D^h(\ZZ_p, L)$ of \emph{distributions of order $h$} to be the continuous dual of $C^h(\ZZ_p, L)$.
Then we have the following celebrated theorem, due to Mahler \cite{mahler58} for $h = 0$ and to Amice \cite{amice64} for $h > 0$:
\begin{enumerate}
\item[(1)] The space $C^h(\ZZ_p, L)$ has a Banach space basis given by the functions
\[ x \mapsto p^{[h \ell(n)]} \binom{x}{n}\]
for $n \ge 0$, where $\ell(n)$ is, as in \S 1.3.1 of \cite{colmez10}, the smallest integer $m$ such that $p^m > n$.
\item[(2)] The space $LP^{N}(\ZZ_p, L)$ of $L$-valued locally polynomial functions of degree $\le N$ is dense in $C^h(\ZZ_p, L)$ for any $N \ge [h]$, and a linear functional
\[ \mu: LP^{N}(\ZZ_p, L) \to L\]
extends continuously to a distribution of order $h$ if and only if there is a constant $C$ such that we have
\[ v_p\left(\int_{x \in a + p^n \ZZ_p} \left( \frac{x - a}{p^n} \right)^k \, \mathrm{d}\mu\right) \ge C - hn\]
for all $a \in \ZZ_p$, $n \in \mathbb{N}$ and $0 \le k \le N$.
\end{enumerate}
A modern account of this theorem is given in \cite[\S\S 1.3, 1.5]{colmez10}.
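As a concrete illustration of Mahler's theorem in the case $h = 0$ (our own toy computation, not part of the text above), the Mahler coefficients of $f$ are the iterated finite differences $a_n = \sum_{k=0}^{n} (-1)^{n-k}\binom{n}{k} f(k)$, and for a polynomial of degree $d$ they vanish for $n > d$:

```python
from math import comb

def mahler_coefficients(f, n_max):
    # a_n = sum_{k=0..n} (-1)^(n-k) * C(n,k) * f(k): iterated finite
    # differences of f at 0, i.e. the coefficients in f(x) = sum_n a_n * C(x,n).
    return [sum((-1) ** (n - k) * comb(n, k) * f(k) for k in range(n + 1))
            for n in range(n_max + 1)]

# x^2 = binom(x,1) + 2*binom(x,2), so the coefficients are 0, 1, 2, 0, 0, ...
print(mahler_coefficients(lambda x: x * x, 5))  # [0, 1, 2, 0, 0, 0]
```

For continuous but non-polynomial $f$ the coefficients instead tend to $0$ $p$-adically, which is the content of Mahler's theorem.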
\subsection{The two-variable case}
\label{sect:order}
We now consider functions of two variables. For $a, b \ge 0$, we define the space
\[ C^{(a,b)}(\ZZ_p^2, L) := C^a(\ZZ_p, L) \mathbin{\hat\otimes}_L C^b(\ZZ_p, L),\]
with its natural completed tensor product topology. We regard this as a space of functions on $\ZZ_p^2$ in the obvious way, and refer to these as the $L$-valued functions on $\ZZ_p^2$ of order $(a, b)$. It is clear that $C^{(0, 0)}(\ZZ_p^2, L)$ is simply the space of continuous $L$-valued functions on $\ZZ_p^2$, and that if $a' \ge a$ and $b' \ge b$, then $C^{(a',b')}(\ZZ_p^2, L)$ is dense in $C^{(a,b)}(\ZZ_p^2, L)$. Moreover, for any $(a,b)$ the space $LA(\ZZ_p^2, L)$ of locally analytic functions on $\ZZ_p^2$ is a dense subspace of $C^{(a,b)}(\ZZ_p^2, L)$.
Note that any choice of Banach space bases for the two factors in the tensor product gives a Banach space basis for $C^{(a,b)}(\ZZ_p^2, L)$. In particular, from (1) above we have a Banach basis given by the functions
\[ c_{n_1,n_2} : (x_1,x_2) \mapsto p^{[a \ell(n_1)] + [b \ell(n_2)]} \binom{x_1}{n_1}\binom{x_2}{n_2}.\]
The following technical proposition will be useful to us in the main text:
\begin{proposition}
\label{prop:changevar}
For any $h \ge 0$, the space $C^{(0, h)}(\ZZ_p^2, L)$ is invariant under pullback via the map $\Phi: (x, y) \mapsto (x, ax + y)$, for any $a \in \ZZ_p$.
\end{proposition}
\begin{proof}
It suffices to show that $\Phi^*(c_{n_1,n_2})$ can be written as a convergent series in terms of the functions $c_{m_1, m_2}$ with uniformly bounded coefficients. We find that
\begin{align*}
\Phi^*(c_{n_1,n_2})(x_1, x_2) &= p^{[h \ell(n_2)]} \binom{x_1}{n_1} \binom{ax_1 + x_2}{n_2}\\
&= \sum_{i=0}^{n_2} p^{[h \ell(n_2)]} \binom{x_1}{n_1} \binom{ax_1}{n_2 - i} \binom{x_2}{i}.
\end{align*}
The functions $x_1 \mapsto \binom{x_1}{n_1} \binom{ax_1}{n_2 - i}$ are continuous $\ZZ_p$-valued functions on $\ZZ_p$, and hence the coefficients of their Mahler expansions are integral; and since the function $\ell(n)$ is increasing, we see that the coefficients of $\Phi^*(c_{n_1,n_2})$ in this basis are in fact bounded by 1.
\end{proof}
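The binomial expansion used in the proof is the Vandermonde convolution $\binom{ax_1+x_2}{n_2} = \sum_{i} \binom{ax_1}{n_2-i}\binom{x_2}{i}$; the following small numerical check (ours, purely illustrative) confirms it on non-negative integer inputs:

```python
from math import comb

def vandermonde_check(a, x1, x2, n2):
    # C(a*x1 + x2, n2) == sum_i C(a*x1, n2 - i) * C(x2, i)
    lhs = comb(a * x1 + x2, n2)
    rhs = sum(comb(a * x1, n2 - i) * comb(x2, i) for i in range(n2 + 1))
    return lhs == rhs

print(all(vandermonde_check(a, x1, x2, n2)
          for a in range(5) for x1 in range(6)
          for x2 in range(6) for n2 in range(8)))  # True
```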
Dually, we define a distribution of order $(a,b)$ to be an element of the dual of $C^{(a,b)}(\ZZ_p^2, L)$; the space $D^{(a,b)}(\ZZ_p^2, L)$ of such distributions is canonically isomorphic to the completed tensor product $D^a(\ZZ_p, L) \mathbin{\hat\otimes}_L D^b(\ZZ_p, L)$.
An analogue of (2) above is also true for these spaces. Let us write $LP^{(N_1, N_2)}(\ZZ_p^2, L)$ for the space of functions on $\ZZ_p^2$ which are locally polynomial of degree $\le N_1$ in $x_1$ and of degree $\le N_2$ in $x_2$; that is, the algebraic tensor product $LP^{N_1}(\ZZ_p, L) \otimes_L LP^{N_2}(\ZZ_p, L)$.
\begin{proposition}
Suppose $N_1 \ge [a]$ and $N_2 \ge [b]$. Then the subspace $LP^{(N_1, N_2)}(\ZZ_p^2, L)$ is dense in $C^{(a,b)}(\ZZ_p^2, L)$, and a linear functional on $LP^{(N_1, N_2)}(\ZZ_p^2, L)$ extends to an element of $D^{(a, b)}(\ZZ_p^2, L)$ if and only if there is a constant $C$ such that
\begin{multline}
\label{eq:ordercondition}
v_p \left(\int_{(x_1, x_2) \in (a_1 + p^{n_1}\ZZ_p) \times (a_2 + p^{n_2}\ZZ_p)} \left( \frac{x_1 - a_1}{p^{n_1}}\right)^{k_1} \left( \frac{x_2 - a_2}{p^{n_2}}\right)^{k_2}\,\mathrm{d}\mu\right) \\ \ge C - a n_1 - b n_2
\end{multline}
for all $(a_1, a_2) \in \ZZ_p^2$, $(n_1, n_2) \in \mathbb{N}^2$, $0 \le k_1 \le N_1$ and $0 \le k_2 \le N_2$.
\end{proposition}
The proof of this result is virtually identical to the 1-variable case, so we shall not give the full details here.
In particular, if $a,b < 1$, we may take $N_1 = N_2 = 0$, and a distribution of order $(a, b)$ is uniquely determined by its values on locally constant functions, or equivalently, by its values on the indicator functions of open subsets of $\ZZ_p^2$. Conversely, a finitely-additive function $\mu$ on open subsets of $\ZZ_p^2$ defines a distribution of order $(a,b)$ if and only if there is $C$ such that
\[ v_p \mu\left( (a_1 + p^{n_1}\ZZ_p) \times (a_2 + p^{n_2}\ZZ_p) \right) \ge C - a n_1 - b n_2.\]
The following is easily verified:
\begin{proposition}
\label{prop:orderofproduct}
The convolution of distributions of order $(a,b)$ and $(a', b')$ has order $(a + a', b + b')$.
\end{proposition}
It is important to note that the spaces of functions and of distributions of order $(a,b)$ depend on a choice of coordinates; they are not invariant under automorphisms of $\ZZ_p^2$, \emph{even if} $a = b$. However, dualising Proposition \ref{prop:changevar} above, the space of distributions of order $(0, h)$ is invariant under automorphisms preserving the subgroup $(0, \ZZ_p)$.
\begin{remark}
One can also define a function $f:\ZZ_p^2 \to L$ to be of order $h$, for a single non-negative real $h$, if $f$ has a Taylor expansion of degree $[h]$ at every point, with the error term $\varepsilon(x,y)$ (defined as above) satisfying
\[ \inf_{x \in \ZZ_p^2, y \in p^n \ZZ_p^2} v_p \varepsilon(x,y) - hn \to \infty.\]
This definition \emph{is} invariant under automorphisms of $\ZZ_p^2$ (and indeed under arbitrary morphisms of locally $\QQ_p$-analytic manifolds). However it is not so convenient for us, since locally constant functions are only dense for $h < 1$, and a finitely-additive function on open subsets extends to a linear functional on this space if we can find a $C$ such that
\begin{equation}
\label{eq:badordercondition}
v_p \mu\left( a + p^n \ZZ_p^2 \right) \ge C - nh.
\end{equation}
The requirement that this be satisfied, for some $h < 1$, is much stronger than the requirement that \eqref{eq:ordercondition} is satisfied for some $a, b < 1$.
\end{remark}
We shall also use the concept of distributions of order $(a, b)$ on a slightly larger class of groups: if we have an abelian $p$-adic Lie group $G$, and an open subgroup $H$ with distinguished subgroups $H_1,H_2$ such that $H = H_1 \times H_2$ and $H_1 \cong H_2 \cong \ZZ_p$, then we may define a distribution on $G$ to have order $(a, b)$ if its restriction to every coset of $H$ has order $(a, b)$ in the above sense. Note that this does not depend on a choice of generators of the groups $H_i$, but it does depend on the choice of the subgroups $H_1, H_2$; so when there is a possibility of ambiguity we shall write ``order $(a,b)$ with respect to the subgroups $H_1, H_2$''.
Note that an application of Proposition \ref{prop:changevar} shows that a distribution has order $(0, h)$ with respect to the subgroups $(H_1, H_2)$ if and only if it has order $(0, h)$ with respect to $(H_1', H_2)$ for any other subgroup $H_1'$ complementary to $H_2$; that is, in this special case the definition of ``order $(0, h)$'' depends only on the choice of $H_2$.
\end{document} |
\begin{document}
\title{A Proof of the $(n,k,t)$ Conjectures}
\begin{abstract}
An \emph{$(n,k,t)$-graph} is a graph on $n$ vertices in which every
set of $k$ vertices contains a clique on $t$ vertices.
Tur\'an's Theorem (complemented) states that the unique minimum $(n,k,2)$-graph is a disjoint union of cliques.
We prove that minimum $(n,k,t)$-graphs are always disjoint unions of cliques for any $t$ (despite nonuniqueness of extremal examples), thereby generalizing Tur\'an's Theorem and
confirming two conjectures of Hoffman et al.
\noindent \textsc{Keywords.} Extremal Graph Theory, Tur\'{a}n's Theorem, $(n,k,t)$ Conjectures
\noindent \textsc{Mathematics Subject Classification.} 05C35
\end{abstract}
\section{Introduction}
The protagonists of this paper are $(n,k,t)$-graphs.
Throughout this paper, it is assumed $n,k,t$ and $r$ are positive integers and $n \geq k \geq t$.
\begin{definition}
A graph $G$ is an $(n,k,t)$-\emph{graph} if $\vert V(G) \vert = n$ and every induced subgraph on $k$ vertices contains a clique on $t$ vertices.
A \emph{minimum} $(n,k,t)$-graph is an $(n,k,t)$-graph with the minimum number of edges among all $(n,k,t)$-graphs.
\end{definition}
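For small parameters the defining condition can be checked by exhaustive search. The following sketch (our own helper, not from the paper; a graph is given by a vertex count and an edge list) does exactly this:

```python
from itertools import combinations

def is_nkt_graph(n, edges, k, t):
    """Check that every k-subset of {0,...,n-1} spans a clique on t vertices."""
    E = {frozenset(e) for e in edges}
    return all(any(all(frozenset(p) in E for p in combinations(c, 2))
                   for c in combinations(X, t))
               for X in combinations(range(n), k))

# Two disjoint triangles form a (6,4,2)-graph: any 4 vertices include two
# in the same triangle, hence an edge.
two_triangles = [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5)]
print(is_nkt_graph(6, two_triangles, 4, 2))  # True
print(is_nkt_graph(6, two_triangles, 2, 2))  # False: {0, 3} spans no edge
```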
The study of the minimum number of edges in an $(n,k,t)$-graph, and so implicitly of the structure of minimum $(n,k,t)$-graphs, is a natural extremal graph theory problem; as we will see shortly, this setting generalizes the flagship theorems of Mantel and Tur\'an.
In \cite{Hoffman} the following were conjectured.
\begin{conjecture}[The Weak $(n,k,t)$-Conjecture]
There exists a minimum $(n,k,t)$-graph that is a disjoint union of cliques.
\end{conjecture}
\begin{conjecture}[The Strong $(n,k,t)$-Conjecture] \label{strongconj}
All minimum $(n,k,t)$-graphs are a disjoint union of cliques.
\end{conjecture}
Note that throughout this paper ``disjoint union'' refers to a \emph{vertex}-disjoint union, and a disjoint union of cliques allows isolated vertices (namely cliques of size 1).
We prove the following theorem, confirming the Strong (and therefore also the Weak) $(n,k,t)$-Conjecture.
\begin{theorem} \label{strongthm}
All minimum $(n,k,t)$-graphs are a disjoint union of cliques.
\end{theorem}
We prove Theorem \ref{strongthm} by proving a stronger statement involving the independence number of the graph (see Theorem \ref{mainthm}).
\section{Previous Results}
Given graphs $G$ and $H$, let $\overline{G}$ denote the complement of $G$, let $G + H$ denote the disjoint union of $G$ and $H$, and for a positive integer $s$ let $sG = G + \dots + G$ ($s$ times).
The authors in \cite{Hoffman} discussed the following basic cases of the Strong $(n,k,t)$-Conjecture.
\begin{itemize}
\item $t=1$. Every graph on $n$ vertices is an $(n,k,1)$-graph, so the unique minimum $(n,k,1)$-graph is $nK_1$.
\item $k=t \geq 2$. The unique minimum $(n,k,k)$-graph is $K_n$.
\item $n=k$. The unique minimum $(n,n,t)$-graph is $(n-t)K_1+K_t$.
\end{itemize}
When $t=2$, the strong $(n,k,t)$-conjecture is equivalent to Tur\'{a}n's Theorem, which we recall here.
If $r \leq n$ are positive integers, let $\cTr{n}{r} = K_{p_1} + \dots + K_{p_r}$ where $p_1 \leq p_2 \leq \dots \leq p_r \leq p_1+1$ and $p_1 + p_2 + \dots + p_r = n$ (the complement $\overline{\cTr{n}{r}}$ is the usual \emph{Tur\'{a}n graph}).
That is to say, $\cTr{n}{r}$ is a disjoint union of $r$ cliques where the number of vertices in each clique differs by at most one.
Tur\'{a}n's Theorem will be used frequently throughout this paper.
The complement of the traditional statement is below.
\begin{theorem}[Tur\'{a}n's Theorem]\label{turan}
The unique graph on $n$ vertices without an independent set of size $r+1$ with the minimum number of edges is $\cTr{n}{r}$.
\end{theorem}
By Tur\'{a}n's Theorem, the unique minimum $(n,k,2)$-graph is $\cTr{n}{k-1}$.
Indeed, the fact that Tur\'{a}n's Theorem (this version) depends on the independence number of a graph inspired the proof of the main theorem.
Arguably the most famous research direction in extremal graph theory is the study of Tur\'an numbers: for a fixed graph $F$ and the class $\mathcal{F}orb_n(F)$ of $n$-vertex graphs not containing a copy of $F$, what is the maximum number of edges? Tur\'an's Theorem gives an exact answer when $F$ is complete. Denoting by $\chi(F)$ the chromatic number of $F$, the classical Erd\H{o}s--Stone Theorem \cite{erdos1946structure} gives an asymptotic answer of $(1-1/(\chi(F)-1)) \binom{n}{2}$ for nonbipartite $F$ as $n \rightarrow \infty$, and the bipartite $F$ case is still a very active area of research (see e.g.\ \cite{bukh2015random}, \cite{furedi2013history}). A more general question considers a family $\mathcal{F}$ of graphs and the collection $\mathcal{F}orb_n(\mathcal{F})$ of all $n$-vertex graphs not containing any subgraph in $\mathcal{F}$. In this language, an $(n,k,t)$-graph is precisely a complement of a graph in $\mathcal{F}orb_n(\mathcal{F})$, where $\mathcal{F}$ is the family of $k$-vertex graphs $F$ where $\bar{F}$ does not contain $K_t$. Note that this $\mathcal{F}$ almost always has $\max_{F\in \mathcal{F}} \{\chi(F)\} >2$, meaning the Erd\H{o}s--Stone Theorem determines the correct edge density for large $n$. But Theorem \ref{mainthm} is still interesting for two reasons:
\begin{itemize}
\item The values of $k$ and $t$ are arbitrary, not fixed (and may grow with $n$), and
\item This result is exact, not asymptotic.
\end{itemize}
There has been some previous work towards the proof of the conjectures.
In \cite{noble2017application}, the Strong $(n,k,t)$-Conjecture was proved for $n \geq k \geq t \geq 3$ and $k \leq 2t-2$, utilizing
an extremal result about vertex covers attributed by Hajnal \cite{hajnal1965theorem} to Erd\H{o}s and Gallai \cite{erdos1961minimal}.
In \cite{Hoffman}, for all $n \geq k \geq t$, the structure of minimum $(n,k,t)$-graphs that are disjoint unions of cliques was described more precisely as follows.
\begin{theorem} [\cite{Hoffman}] \label{mincliquegraph}
Suppose $G$ has the minimum number of edges of all $(n,k,t)$-graphs that are a disjoint union of cliques.
Then $G=aK_1+ \cTr{n-a}{b}$, for some $a,b$ satisfying
$$
a+b(t-1)=k-1,
$$
and
$$
b \leq \min \left( \left\lfloor \frac{k-1}{t-1} \right\rfloor, n-k+1 \right).
$$
\end{theorem}
The following example shows Theorem \ref{strongthm} cannot be strengthened to include uniqueness (and in particular the choice of $b$ above need not be unique).
\begin{example} \label{multmin}
Theorem \ref{mincliquegraph} tells us the graphs with the minimum number of edges of all $(10,8,3)$-graphs that are a disjoint union of cliques are among $5K_1+\cTr{5}{1} = 5K_1+K_5$, $3K_1+\cTr{7}{2}=3K_1 + K_3 + K_4$, or $K_1+\cTr{9}{3}=K_1 + 3K_3$ (as given by $b=1,2,3$ in the above).
These graphs have $10$, $9$, and $9$ edges respectively.
So, $3K_1+\cTr{7}{2}=3K_1 + K_3 + K_4$ and $K_1+\cTr{9}{3} =K_1 + 3K_3$ both have the minimum number of edges of all $(10,8,3)$-graphs among disjoint unions of cliques.
\end{example}
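Example \ref{multmin} can be verified by brute force; the self-contained sketch below (helper names are ours) recomputes the edge counts and confirms that all three graphs are $(10,8,3)$-graphs:

```python
from itertools import combinations

def clique_union(sizes):
    # Disjoint union of cliques with the given sizes; returns (n, edge set).
    edges, start = set(), 0
    for s in sizes:
        edges |= {frozenset(p) for p in combinations(range(start, start + s), 2)}
        start += s
    return start, edges

def is_nkt_graph(n, edges, k, t):
    # Every k-subset of the n vertices must span a t-clique.
    return all(any(all(frozenset(p) in edges for p in combinations(c, 2))
                   for c in combinations(X, t))
               for X in combinations(range(n), k))

for sizes in ([1, 1, 1, 1, 1, 5], [1, 1, 1, 3, 4], [1, 3, 3, 3]):
    n, E = clique_union(sizes)
    print(sizes, len(E), is_nkt_graph(n, E, 8, 3))
# edge counts 10, 9, 9; all three are (10,8,3)-graphs
```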
Hoffman and Pavlis \cite{HoffPav} have
shown that for
any positive integer $N$ there exist some $n \geq k \geq 3$ with
at least $N$ non-isomorphic minimum $(n,k,3)$-graphs (minimum among graphs which are disjoint unions of cliques, although Theorem \ref{mainthm} now removes this restriction).
Further, Allen et al.\ \cite{REU} determined precisely the values of $b$ from Theorem \ref{mincliquegraph} that minimize the number of edges.
This abundance of minimum examples might seem off-putting, but Observation \ref{uniqueindnum} puts such concerns to rest (and may be the reason why the problem remains tractable).
Notice that the independence numbers of the graphs in Example \ref{multmin} are $6$, $5$, and $4$ respectively.
These lead to Observation \ref{uniqueindnum}, but first we state the following observation that will be used in the proof of Observation \ref{uniqueindnum} and frequently throughout the remainder of the paper.
Let $c(G)$ denote the number of connected components of a graph $G$, and let
$\alpha(G)$ denote its independence number.
\begin{observation} \label{indnumcomponents}
Suppose $G$ is a disjoint union of cliques. Then $\alpha(G) =c(G)$.
\end{observation}
The following observation inspired considering the independence number in the main proof.
\begin{observation} \label{uniqueindnum}
Suppose $G_1$ and $G_2$ are distinct minimum $(n,k,t)$-graphs, each of which is a disjoint union of cliques. Then $\alpha(G_1) \neq \alpha(G_2)$.
\end{observation}
\begin{proof}
If $t=2$, then the uniqueness in Tur\'{a}n's Theorem forces $G_1=G_2$, contrary to assumption; so assume $t>2$.
For a contradiction, suppose $\alpha(G_1) = \alpha(G_2)$.
By Theorem \ref{mincliquegraph}, there exist positive integers $a_1, b_1, a_2, b_2$ such that $G_i = a_iK_1+ \cTr{n-a_{i}}{b_i}$ and $a_i+b_i(t-1)=k-1$ (for $i=1,2$).
Thus, $a_1+b_1(t-1)=a_2+b_2(t-1)$.
Also, by Observation \ref{indnumcomponents}, $a_1+b_1=\alpha(G_1)=\alpha(G_2)=a_2+b_2$.
Subtracting the second equation from the first gives $b_1(t-2)=b_2(t-2)$, so $t>2$ forces $b_1=b_2$ and hence $a_1=a_2$.
But then $G_1=G_2$, a contradiction.
\end{proof}
\section{The Proof}
We collect together some preliminary lemmas. Firstly,
because we will use induction on $t$ in Theorem \ref{mainthm}, we observe that $(n,k,t)$-graphs contain $(n',k',t')$-graphs for certain smaller values of $n'$, $k'$, and $t'$.
Given a set $X \subseteq V(G)$, let $G[X]$ denote the subgraph of $G$ induced by the vertices in $X$ and let $G-X = G[V(G) \setminus X]$.
\begin{lemma} \label{minusset}
If $G$ is an $(n,k,t)$-graph and $S \subseteq V(G)$ is an independent set,
then $G-S$ is an $(n- \vert S \vert, k - \vert S \vert, t-1)$-graph.
\end{lemma}
\begin{proof}
Clearly $\vert V(G-S) \vert = n - \vert S \vert$.
Let $X$ be a subset of $V(G-S)$ with $\vert X \vert = k- \vert S \vert$.
Because $G$ is an $(n,k,t)$-graph and $\vert S \cup X \vert = k$, the induced subgraph $G[S \cup X]$ contains a $K_t$.
Because $S$ is an independent set, at most $1$ vertex in this $K_t$ was from $S$.
Thus, $(G-S)[X]$ must contain a $K_{t-1}$.
\end{proof}
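A small instance of Lemma \ref{minusset} can be checked directly (a self-contained sketch with our own helper names): $G = K_1 + 3K_3$ is a $(10,8,3)$-graph, and deleting the independent set $S = \{0,1\}$ leaves an $(8,6,2)$-graph, as the lemma predicts.

```python
from itertools import combinations

def has_kt_property(vertices, edges, k, t):
    # Every k-subset of `vertices` must span a t-clique of `edges`.
    E = {frozenset(e) for e in edges}
    return all(any(all(frozenset(p) in E for p in combinations(c, 2))
                   for c in combinations(X, t))
               for X in combinations(vertices, k))

# G = K_1 + 3K_3: vertex 0 isolated, triangles {1,2,3}, {4,5,6}, {7,8,9}.
E = [p for tri in ((1, 2, 3), (4, 5, 6), (7, 8, 9))
     for p in combinations(tri, 2)]
S = {0, 1}                                    # an independent set of G
rest = [v for v in range(10) if v not in S]
E_rest = [e for e in E if set(e) <= set(rest)]
print(has_kt_property(range(10), E, 8, 3))    # True: G is a (10,8,3)-graph
print(has_kt_property(rest, E_rest, 6, 2))    # True: G - S is an (8,6,2)-graph
```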
Because Theorem \ref{mainthm} considers the independence number of $(n,k,t)$-graphs, we now
enrich the $(n,k,t)$ notation to include the independence number.
\begin{definition}
A graph $G$ is an $(n,k,t,r)$-\emph{graph} if $G$ is an $(n,k,t)$-graph and $\alpha(G)=r$.
A \emph{minimum} $(n,k,t,r)$-graph is an $(n,k,t,r)$-graph with the minimum number of edges among all $(n,k,t,r)$-graphs.
\end{definition}
The following lemma determines an upper bound for the independence number of an $(n,k,t)$-graph.
\begin{lemma} \label{maxalpha}
If $G$ is an $(n,k,t)$-graph, then $\alpha(G) < k-t+2$.
\end{lemma}
\begin{proof}
For a contradiction, suppose $G$ has an independent set, call it $S$, of size $k-t+2$.
Let $X'$ be any $t-2$ vertices in $G-S$ (this is possible because $\vert V(G-S) \vert = n- (k-t+2) = (n-k) + t-2 \geq t-2$).
Define $X \coloneqq S \cup X'$, so $\vert X \vert = k$.
Then there exists a $K_t$ in $G[X]$, which necessarily contains at least $2$ vertices in $S$.
This is a contradiction because $S$ is independent.
Thus $\alpha(G) < k-t+2$.
\end{proof}
But, subject to this bound, all independence numbers $\alpha(G)$ are attainable.
Indeed, for any $1\leq r \leq k-t+1$ the graph $(r-1)K_1 + K_{n-r+1}$ is an example of an $(n,k,t,r)$-graph.
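For instance, with $n=8$, $k=7$, $t=4$ the admissible range is $1 \leq r \leq 4$, and exhaustive search (a sketch with helper names of our choosing) confirms that $(r-1)K_1+K_{n-r+1}$ realizes every such $r$:

```python
from itertools import combinations

def clique_union(sizes):
    """Adjacency sets for a vertex-disjoint union of cliques."""
    adj, v0 = {}, 0
    for s in sizes:
        verts = set(range(v0, v0 + s))
        for v in verts:
            adj[v] = verts - {v}
        v0 += s
    return adj

def is_nkt(adj, k, t):
    """Brute-force (n,k,t) check: every k-subset induces a K_t."""
    def has_kt(X):
        return any(all(u in adj[v] for u, v in combinations(T, 2))
                   for T in combinations(X, t))
    return all(has_kt(X) for X in combinations(sorted(adj), k))

def alpha(adj):
    """Independence number by exhaustive search over all vertex subsets."""
    V = sorted(adj)
    return max(len(X) for m in range(len(V) + 1)
               for X in combinations(V, m)
               if all(u not in adj[v] for u, v in combinations(X, 2)))

n, k, t = 8, 7, 4
for r in range(1, k - t + 2):                 # r = 1, ..., k-t+1
    G = clique_union([1] * (r - 1) + [n - r + 1])
    print(r, is_nkt(G, k, t), alpha(G) == r)  # True, True for every r
```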
Below is an observation about minimum $(n,k,t)$-graphs.
It will be used in the proof of Theorem \ref{mainthm}.
\begin{lemma} \label{notkminus1}
Suppose $k>t$
and let $a \leq n$ be a positive integer.
Let $H$ be an $(n,k,t)$-graph with the minimum number of edges among $(n,k,t)$-graphs with independence number at most $a$.
If $\alpha(H) <a$, then $H$ is not an $(n,k-1,t)$-graph.
\end{lemma}
\begin{proof}
Among all $(n,k,t)$-graphs with independence number at most $a$, suppose $H$ has the fewest edges.
For a contradiction suppose $\alpha(H)<a$ and $H$ is an $(n,k-1,t)$-graph.
Because $\alpha(H)<a \leq n$, there exists an edge $e$ in $H$.
Let $H^-$ be the graph formed from $H$ by deleting $e$ and let
$v \in e$.
Let $X$ be a subset of $V(H^-)$ with $\vert X \vert = k$.
If $X$ does not contain $v$, then $H^-[X]=H[X]$ contains a $K_t$.
So, suppose $X$ contains $v$.
Then $\vert X \setminus \{v\} \vert = k-1$.
Because $H$ is an $(n, k-1, t)$-graph, $H[X \setminus \{v\}]=H^-[X \setminus \{v\}]$ contains a $K_t$.
So $H^-[X]$ contains a $K_t$.
Also, the independence number of $H^-$ is at most $1$ greater than that of $H$.
Thus, $H^-$ is an $(n,k,t)$-graph with fewer edges than $H$ and $\alpha(H^-) \leq a$, contradicting minimality of $H$.
\end{proof}
We caution that Lemma \ref{notkminus1} is not necessarily true when $\alpha(H)=a$.
For example, the minimum $(8,8,4)$-graph, $H$, with independence number $\alpha(H) \leq a:= 2$ is $K_4 + K_4$.
Despite this, $K_4 + K_4$ is also an $(8,7,4)$-graph.
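This caution can be verified directly by brute force (a sketch with helper names of our choosing): $K_4+K_4$ has independence number $2$ and satisfies both the $(8,8,4)$ and the $(8,7,4)$ property.

```python
from itertools import combinations

def clique_union(sizes):
    """Adjacency sets for a vertex-disjoint union of cliques."""
    adj, v0 = {}, 0
    for s in sizes:
        verts = set(range(v0, v0 + s))
        for v in verts:
            adj[v] = verts - {v}
        v0 += s
    return adj

def is_nkt(adj, k, t):
    """Brute-force (n,k,t) check: every k-subset induces a K_t."""
    def has_kt(X):
        return any(all(u in adj[v] for u, v in combinations(T, 2))
                   for T in combinations(X, t))
    return all(has_kt(X) for X in combinations(sorted(adj), k))

def alpha(adj):
    """Independence number by exhaustive search."""
    V = sorted(adj)
    return max(len(X) for m in range(len(V) + 1)
               for X in combinations(V, m)
               if all(u not in adj[v] for u, v in combinations(X, 2)))

H = clique_union([4, 4])
print(alpha(H), is_nkt(H, 8, 4), is_nkt(H, 7, 4))   # 2 True True
```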
Finally, since we will be using the $(n,k,t)$ condition to build the desired disjoint union of cliques, it is useful to keep track of the size of their largest $K_t$-free subgraph.
\begin{lemma}\label{l.nktrelation}
Suppose $\Gamma$ is a graph which is a disjoint union of cliques. Denote by $A_\Gamma$ the subgraph consisting of the components (cliques) with $<t$ vertices, and by $B_\Gamma$ the subgraph consisting of the components with $\geq t$ vertices (so that $\Gamma=A_\Gamma + B_\Gamma$).
Then:
\begin{enumerate}[a.]
\item the largest $K_t$-free subgraph of $\Gamma$ has $(t-1)c(B_\Gamma) +|V(A_\Gamma)|$ vertices,
\item if also $\Gamma$ is an $(n,k,t)$-graph, then \[
k-1 \geq (t-1) c(B_\Gamma) +|V(A_\Gamma)|, \hspace{5mm} \text{ and}
\]
\item if furthermore $\Gamma$ is \emph{not} an $(n,k-1,t)$-graph, the above inequality is an equality.
\end{enumerate}
\end{lemma}
\begin{proof}
The largest subgraph $F$ of $\Gamma$ with no $K_t$ is obtained by starting with $A_{\Gamma}$ and adding $t-1$ vertices from each clique of $B_{\Gamma}$. So $|V(F)|=(t-1)c(B_\Gamma)+|V(A_\Gamma)|$. If $\Gamma$ is an $(n,k,t)$-graph, then $F$ has at most $k-1$ vertices in total. If $\Gamma$ is not an $(n,k-1,t)$-graph, then there is some $(k-1)$-set of vertices without a $K_t$; as $F$ is largest, it follows that $|V(F)| \geq k-1$, so we have equality.
\end{proof}
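The count in part (a) and the relations in (b) and (c) can be spot-checked numerically (a sketch with helper names of our choosing). For $\Gamma=K_2+K_4+K_4$ and $t=3$ we have $A_\Gamma=K_2$ and $c(B_\Gamma)=2$, so the largest $K_3$-free subgraph has $(t-1)c(B_\Gamma)+|V(A_\Gamma)|=6$ vertices; accordingly $\Gamma$ is a $(10,7,3)$-graph but not a $(10,6,3)$-graph, giving equality in (c).

```python
from itertools import combinations

def clique_union(sizes):
    """Adjacency sets for a vertex-disjoint union of cliques."""
    adj, v0 = {}, 0
    for s in sizes:
        verts = set(range(v0, v0 + s))
        for v in verts:
            adj[v] = verts - {v}
        v0 += s
    return adj

def has_kt(adj, X, t):
    """Does the subgraph induced on X contain a K_t?"""
    return any(all(u in adj[v] for u, v in combinations(T, 2))
               for T in combinations(sorted(X), t))

def is_nkt(adj, k, t):
    """Brute-force (n,k,t) check."""
    return all(has_kt(adj, X, t) for X in combinations(sorted(adj), k))

def largest_kt_free(adj, t):
    """Order of the largest induced subgraph with no K_t, by exhaustive search."""
    V = sorted(adj)
    return max(len(X) for m in range(len(V) + 1)
               for X in combinations(V, m)
               if not has_kt(adj, X, t))

G = clique_union([2, 4, 4])               # A = K_2,  B = K_4 + K_4,  t = 3
print(largest_kt_free(G, 3))              # 6 = (3-1)*2 + 2, part (a)
print(is_nkt(G, 7, 3), is_nkt(G, 6, 3))   # True False: equality in part (c)
```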
We now proceed with the proof of the main theorem.
\begin{theorem} \label{mainthm}
Every minimum $(n,k,t,r)$-graph is a disjoint union of cliques.
\end{theorem}
\begin{proof}
The proof proceeds by induction on $t$.
If $t=1$, then every graph on $n$ vertices is an $(n,k,1)$-graph because all sets of $k$ vertices contain a $K_1$.
Thus, the unique minimum $(n,k,1,r)$-graph is $\cTr{n}{r}$ by Tur\'{a}n's Theorem.
Else, $t \geq 2$.
Suppose $r \geq k-t+2$.
By Lemma \ref{maxalpha}, there does not exist an $(n,k,t,r)$-graph.
So the theorem holds vacuously.
Next suppose $r < \frac{k}{t-1}$.
By Tur\'{a}n's Theorem, $\cTr{n}{r}$ is the unique graph with independence number $r$ and the minimum number of edges.
So it suffices to check that $\cTr{n}{r}$ is genuinely an $(n,k,t)$-graph.
Let $X$ be a subset of $V(\cTr{n}{r})$ with $\vert X \vert = k$.
By the pigeonhole principle, $X$ contains at least $\frac{k}{r} > t-1$ vertices from one connected component of $\cTr{n}{r}$.
Because all connected components of $\cTr{n}{r}$ are cliques, $\cTr{n}{r}[X]$ contains a $K_t$.
Therefore $\cTr{n}{r}$ is the unique minimum $(n,k,t,r)$-graph.
So suppose instead $\frac{k}{t-1} \leq r < k-t+2$.
Let $G$ be an $(n,k,t,r)$-graph.
We first construct an $(n,k,t,r)$-graph $G'$ that is a disjoint union of cliques with $\vert E(G) \vert \geq \vert E(G') \vert$.
Let $S$ be an independent set in $G$ with $\vert S \vert = r$.
Then $\alpha(G-S) \leq \alpha(G) = r$.
By Lemma \ref{minusset}, $G-S$ must be an $(n-r, k-r, t-1)$-graph.
Let $H$ be any minimum $(n-r, k-r, t-1)$-graph with $\alpha(H) \leq r$,
so that $\vert E(G-S) \vert \geq \vert E(H) \vert$.
By the induction hypothesis, $H$ is a disjoint union of cliques with $\alpha(H) \leq r$.
So, by Observation \ref{indnumcomponents}, $c(H) \leq r$.
Let $G'$ be the graph formed by joining one distinct vertex to each clique component of $H$ and adding $r-c(H)$ new isolated vertices
\begin{figure}
\centering
\begin{tabular}{cc}
\begin{tikzpicture}[scale=0.6]
\filldraw[] (0,4) circle [radius=0.125];
\draw[very thick] (0,4) -- (-0.25,0.5);
\draw[very thick] (0,4) -- (0.25,0.5);
\filldraw[] (1,4) circle [radius=0.125];
\draw[very thick] (1,4) -- (0.25,0.5);
\draw[very thick] (1,4) -- (0.75,0.5);
\draw[very thick] (1,4) -- (1.25,0.5);
\draw[very thick] (1,4) -- (1.75,0.5);
\filldraw[] (2,4) circle [radius=0.125];
\draw[very thick] (2,4) -- (2,0.5);
\filldraw[] (3,4) circle [radius=0.125];
\filldraw[] (4,4) circle [radius=0.125];
\draw[very thick] (4,4) -- (3.5,0.5);
\draw[very thick] (4,4) -- (4.5,0.5);
\filldraw[] (5,4) circle [radius=0.125];
\draw[very thick] (5,4) -- (4.5,0.5);
\draw[very thick] (5,4) -- (5.5,0.5);
\filldraw[] (6,4) circle [radius=0.125];
\filldraw[] (7,4) circle [radius=0.125];
\draw[very thick] (7,4) -- (6.5,0.5);
\draw[very thick] (7,4) -- (7,0.5);
\draw[very thick] (7,4) -- (7.5,0.5);
\filldraw[] (8,4) circle [radius=0.125];
\draw[very thick] (8,4) -- (8,0.5);
\draw [thick, decorate,
decoration = {brace, raise=2, amplitude=10}] (-0.25,4.4) -- (8.25,4.4);
\node at (4,5.5) {$\vert S \vert =r$};
\filldraw [gray!20, fill=gray!20] (-0.5,-3.9) rectangle (9,2);
\node at (4,0) {Some};
\node at (4,-0.75) {$(n-r, k-r, t-1)$-graph};
\end{tikzpicture}
&
\begin{tikzpicture}[scale=0.6]
\filldraw[] (0.25,4) circle [radius=0.125];
\filldraw[] (0.75,4) circle [radius=0.125];
\filldraw[] (1.25,4) circle [radius=0.125];
\filldraw[] (1.75,4) circle [radius=0.125];
\filldraw[] (2.25,4) circle [radius=0.125];
\filldraw[] (3.25,4) circle [radius=0.125];
\filldraw[] (3.75,4) circle [radius=0.125];
\filldraw[] (5.6,4) circle [radius=0.125];
\filldraw[] (7.4,4) circle [radius=0.125];
\draw [thick, decorate,
decoration = {brace, raise=2, amplitude=10}] (3,4.4) -- (7.65,4.4);
\node at (5.325,5.5) {$c(H)$};
\draw [thick, decorate,
decoration = {brace, raise=2, amplitude=10}] (0,4.4) -- (2.5,4.4);
\node at (1.6,5.5) {$r-c(H)$};
\filldraw [gray!20, fill=gray!20] (-0.5,-3.9) rectangle (9,2);
\node at (4,-3.2) {$H$};
\draw[very thick] (3.25,4) -- (0.5,0.7);
\draw[very thick] (3.25,4) -- (1,0.7);
\draw[very thick] (3.75,4) -- (1.7,0.7);
\draw[very thick] (3.75,4) -- (2.1,0.7);
\draw[very thick] (5.6,4) -- (4.5,1.1);
\draw[very thick] (5.6,4) -- (5,1.1);
\draw[very thick] (5.6,4) -- (5.5,1.1);
\draw[very thick] (7.4,4) -- (6.75,1.1);
\draw[very thick] (7.4,4) -- (7.25,1.1);
\draw[very thick] (7.4,4) -- (7.75,1.1);
\draw[very thick, fill=gray!60] \boundellipse{0.75,0}{0.5}{0.8};
\draw[very thick, fill=gray!60] \boundellipse{2,0}{0.5}{0.8};
\draw[very thick, fill=gray!60]
\boundellipse{5,0}{1}{1.25};
\draw[very thick, fill=gray!60]
\boundellipse{7.25,0}{1}{1.25};
\draw[dashed] (0,1) -- (0,-1.75);
\draw[dashed] (0,-1.75) -- (2.75,-1.75);
\draw[dashed] (2.75,-1.75) -- (2.75,1);
\draw[dashed] (0,1) -- (2.75,1);
\node at (1.375,-1.25) {$A_H$};
\draw[] (-0.25,1.45) -- (-0.25,-2.5);
\draw[] (-0.25,-2.5) -- (3.15,-2.5);
\draw[] (3.15,-2.5) -- (3.15,1.45);
\node at (1.2,2.95) {$A_{G'}$};
\draw[] (-0.25,1.45) -- (-0.25,3.75);
\draw[] (-0.25,3.75) -- (-0.25, 4.25);
\draw[] (-0.25, 4.25) -- (4.25, 4.25);
\draw[] (4.25, 4.25) -- (4.25, 3.75);
\draw[] (4.25, 3.75) -- (3.15,1.45);
\draw[dashed] (3.75,1.45) -- (3.75,-1.75);
\draw[dashed] (3.75,-1.75) -- (8.5,-1.75);
\draw[dashed] (8.5,-1.75) -- (8.5,1.45);
\draw[dashed] (3.75,1.45) -- (8.5,1.45);
\node at (6.325,-1.25) {$B_H$};
\draw[] (3.5,1.45) -- (3.5,-2.5);
\draw[] (3.5,-2.5) -- (8.75,-2.5);
\draw[] (8.75,-2.5) -- (8.75,1.45);
\node at (6.4,2.95) {$B_{G'}$};
\draw[] (3.5,1.45) -- (4.55,3.75);
\draw[] (4.55,3.75) -- (4.55, 4.25);
\draw[] (4.55, 4.25) -- (8.75, 4.25);
\draw[] (8.75, 4.25) -- (8.75, 3.75);
\draw[] (8.75, 3.75) -- (8.75,1.45);
\end{tikzpicture}\\
$G$ & $G'$\\
\varepsilonnd{tabular}
\caption{Construction of the disjoint union of cliques $G'$ (right) from the $(n,k,t,r)$-graph $G$ (left).}
\label{Gdef}
\end{figure}
(see Figure \ref{Gdef}).
Thus, $c(G')= c(H) + (r - c(H)) = r$.
So, $G'$ is also a disjoint union of cliques and, by Observation \ref{indnumcomponents}, $\alpha(G')=r$.
Each vertex in $V(G-S)$ must be adjacent to at least one vertex in $S$, else there is an independent set in $G$ with more than $r$ vertices. Also, by construction of $G'$, each vertex in $H$ has exactly $1$ edge to the vertices in $G'-H$.
So,
\begin{equation} \label{edgesGG'}
\begin{split}
\vert E(G) \vert & = \vert E(G-S) \vert + \sum_{v \in V(G-S)} \vert E(v,S) \vert\\
& \geq \vert E(H) \vert + \sum_{v \in V(G-S)} 1 \\
& = \vert E(H) \vert + \sum_{v \in V(H)} \vert E(v, V(G'-H)) \vert \\
& = \vert E(G') \vert, \\
\end{split}
\end{equation}
where $E(v,W)$ denotes the set of edges between $v$ and $W$.
We now show $G'$ is an $(n,k,t,r)$-graph.
First, let $A_{H}$ be the subgraph of $H$ consisting of connected components (necessarily cliques) with strictly less than $t-1$ vertices and $B_{H}$ be the subgraph of $H$ consisting of components
with at least $t-1$ vertices.
Lemma \ref{l.nktrelation} (b) applied to
the $(n-r,k-r,t-1)$-graph $H$
gives
\begin{equation} \label{Hfact}
k-r > (t-2)c(B_H) +\vert V(A_H) \vert.
\end{equation}
Now let $A_{G'}$ be the subgraph of $G'$ consisting of connected components (by construction of $G'$, necessarily cliques) with strictly less than $t$ vertices and $B_{G'}$ be the subgraph of $G'$ consisting of components (necessarily cliques)
with at least $t$ vertices.
Then:
\begin{itemize}
\item $c(B_{G'})=c(B_H)$, as every clique of size $\geq t-1$ in $H$ has become a clique of size $\geq t$ in $G'$ (by construction of $G'$); and
\item $\vert V(A_{G'}) \vert = \vert V(A_H) \vert + r - c(B_H)$, as every clique of size $< t-1$ in $H$ has become a clique of size $<t$ in $G'$, and only $c(B_H)$ of the $r$ vertices in $G'-H$ are not added to the larger $B_{H}$ components.
\end{itemize}
Let $X$ be a subset of $V(G')$ with $\vert X \vert = k$.
By definition of $G'$,
at most $\vert V(A_{G'}) \vert = \vert V(A_H) \vert + r - c(B_H)$ vertices in $X$ are in components with less than $t$ vertices.
Thus, by the pigeonhole principle, some component of $B_{G'}$ contains at least $\frac{k- \vert V(A_{G'}) \vert}{c(B_{G'})}$ vertices from $X$.
Moreover, we can bound this quantity from below:
\begin{equation*}
\begin{aligned}
\frac{k- \vert V(A_{G'}) \vert}{c(B_{G'})} & = \frac{ k-(\vert V(A_H) \vert + r - c(B_H))}{c(B_H)} & \\
& = \frac{ k-\vert V(A_H) \vert -r}{c(B_H)} +1 & \\
& > (t-2) + 1 & \text{(By \eqref{Hfact})}\\
& =t-1. & \\
\varepsilonnd{aligned}
\end{equation*}
Thus, $G'[X]$ contains a $K_t$.
This proves $G'$ is an $(n,k,t,r)$-graph.
So, we have found an $(n,k,t,r)$-graph $G'$ which is a disjoint union of cliques and with $\vert E(G) \vert \geq \vert E(G') \vert$
(as needed for the weak $(n,k,t)$-conjecture).
Now, suppose $G$ is also a minimum $(n,k,t,r)$-graph.
So the inequality in \varepsilonqref{edgesGG'} is actually an equality.
Thus, each vertex $v \in G-S$ is adjacent to exactly one vertex in $S$.
Also, $\vert E(G-S) \vert = \vert E(H) \vert$, so $G-S$ is a minimum $(n-r, k-r, t-1)$ graph with independence number at most $r$ and is therefore also a disjoint union of cliques by the induction hypothesis.
We show $G$ must also be a disjoint union of cliques. This will be implied by the following two additional facts.
\begin{itemize}
\item[$(\diamondsuit)$] If $u$,$v$ are two vertices in different components of $G-S$, then they cannot be adjacent to the same vertex $w \in S$.
\item[$(\heartsuit)$] If $u$,$v$ are two vertices in the same component of $G-S$, then they are adjacent to the same vertex in $S$.
\end{itemize}
To prove $(\diamondsuit)$, suppose that $u$ and $v$ lie in different components of $G-S$ and are both adjacent to the same vertex $w \in S$. Then $(S \setminus \{w\}) \cup \{u,v\}$ is an independent set in $G$ with $r+1$ vertices, a contradiction.
We now prove $(\heartsuit)$.
Similarly to before, define $A_{G-S}$ to be the subgraph of $G-S$ consisting of connected components with strictly less than $t-1$ vertices and $B_{G-S}$ the subgraph of $G-S$ consisting of components
with at least $t-1$ vertices.
For a contradiction, suppose $u$ and $v$ are two vertices in the same component, $F$, of $G-S$ and they are adjacent to two distinct vertices in $S$.
By $(\diamondsuit)$, distinct components of $G-S$ are attached to disjoint subsets of $S$; since the vertices of component $F$ are adjacent to at least $2$ distinct vertices in $S$, this forces $c(G-S) < |S|=r$.
This leads to two important facts:
\begin{enumerate}[(i)]
\item By Observation \ref{indnumcomponents}, $\alpha(G-S)<r$.
\item $A_{G-S}$ contains only isolated vertices (else choose a vertex in $A_{G-S}$ with at least one incident edge and delete all edges incident to it; because a component with fewer than $t-1$ vertices cannot contain a $K_{t-1}$, the result is still an $(n-r, k-r, t-1)$-graph, its independence number is at most $r$ since it increases by at most $1$ and $\alpha(G-S)<r$ by (i), and it has strictly fewer edges than $G-S$, contradicting minimality). In particular, as $|F| \geq 2$, $F$ must be a component of $B_{G-S}$.
\end{enumerate}
Let $Y$ be the union of subsets of $t-2$ vertices from each connected component of $B_{G-S}-V(F)$.
Therefore, $\vert Y \vert = (t-2)(c(B_{G-S})-1)$.
Let $Z$ be a set of $t-3$ vertices in $V(F)-\{u,v\}$ (this is possible because of (ii)).
Define
\begin{equation*}
X'= S \cup V(A_{G-S}) \cup Y \cup Z \cup \{u,v\}.
\end{equation*}
We now show $\vert X' \vert =k$.
Consider two cases.
Suppose $k-r > t-1$.
Because $G-S$ is an $(n-r, k-r, t-1)$-graph,
using (i)
and
Lemma \ref{notkminus1} shows $G-S$ is not an $(n- r, k-r-1, t-1)$-graph.
So
Lemma \ref{l.nktrelation} (c) gives
\begin{equation} \label{largestset}
k-r-1
=
\vert V(A_{G-S}) \vert +(t-2)c(B_{G-S}).
\end{equation}
Now, suppose $k-r=t-1$.
Then $G-S$ is an $(n-r,t-1,t-1)$-graph, i.e. every $(t-1)$-set of its vertices induces a clique, so $G-S=K_{n-r}$, $\vert V(A_{G-S}) \vert =0$ and $c(B_{G-S})=1$.
Therefore, Equation \varepsilonqref{largestset} holds for all $k-r \geq t-1$.
Thus, in either case,
\begin{equation*}
\begin{array}{llr}
\vert X' \vert & \multicolumn{2}{l}{= r + \vert V(A_{G-S}) \vert + (t-2)(c(B_{G-S})-1) + (t-3) + 2}\\
& = r + (k-r-1) + 1 & \text{ (By \eqref{largestset})}\\
& = k. & \\
\end{array}
\end{equation*}
Any clique of size $t$ in $G[X']$ may contain at most $1$ vertex from $S$ since $S$ is an independent set.
Thus, if $G[X']$ contains a $K_t$, then at least $t-1$ vertices in $X'$ must be in the same connected component of $G[V(A_{G-S}) \cup Y \cup Z \cup \{u,v\}]$.
The set $X'$ does not contain $t-1$ vertices from the same connected component of $A_{G-S}$ or of $Y$.
The set $X'$ does contain $t-1$ vertices from $V(F)$ (namely, $Z \cup \{u,v\}$), but because each vertex in $V(F)$ is adjacent to exactly $1$ vertex in $S$ and $u$ and $v$ are adjacent to two distinct vertices in $S$, $u$ and $v$ are not in a clique on $t$ vertices in $G[X']$.
Thus, $G[X']$ does not contain a clique on $t$ vertices, contradicting $G$ being an $(n,k,t)$-graph.
\end{proof}
Theorem \ref{mainthm} implies Theorem \ref{strongthm}, proving the $(n,k,t)$-conjectures.
Indeed, a minimum $(n,k,t)$-graph is one with the minimum number of edges among all minimum $(n,k,t,r)$-graphs for $1 \leq r < k-t+2$.
Note, even if we relax the definition of an $(n,k,t,r)$-graph and let $n$, $k$, and $t$ be any positive integers, Theorem \ref{mainthm} still holds.
If $k > n$ then there does not exist a set of $k$ vertices, so all graphs on $n$ vertices are $(n,k,t,r)$-graphs.
Thus, the unique minimum $(n,k,t,r)$-graph is $\cTr{n}{r}$ by Tur\'{a}n's Theorem.
Also, if $n \geq k$ and $t>k$, then an induced subgraph on $k$ vertices cannot contain a clique on $t$ vertices, so no graphs are $(n,k,t,r)$-graphs.
So, the theorem holds vacuously.
In light of Observation \ref{uniqueindnum}, one might be led to believe that for every positive integer $r$, there exists a unique minimum $(n,k,t,r)$-graph.
However, this is not the case.
\begin{observation}
There exist $n,k,t,$ and $r$ for which
the minimum $(n,k,t,r)$-graph is not unique.
\end{observation}
For example, $2K_2+K_5$ and $K_1+2K_4$ are both minimum $(9,8,4,3)$-graphs.
These can each be formed as described in the proof of Theorem \ref{mainthm} by letting $G-S$ be the minimum $(6,5,3)$-graphs, $2K_1+K_4$ or $K_3 + K_3$, respectively.
However, by Observation \ref{uniqueindnum} each minimum $(n,k,t)$-graph has a unique independence number.
In this example, by Theorem \ref{mincliquegraph} and Theorem \ref{strongthm}, the minimum $(9,8,4)$-graph is $4K_1 + K_5$ and this is the unique minimum $(9,8,4,5)$-graph.
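These small examples can be confirmed by machine (a sketch with helper names of our choosing): $2K_2+K_5$ and $K_1+2K_4$ both have $12$ edges, independence number $3$, and the $(9,8,4)$ property, while $4K_1+K_5$ achieves the $(9,8,4)$ property with only $10$ edges (and independence number $5$).

```python
from itertools import combinations

def clique_union(sizes):
    """Adjacency sets for a vertex-disjoint union of cliques."""
    adj, v0 = {}, 0
    for s in sizes:
        verts = set(range(v0, v0 + s))
        for v in verts:
            adj[v] = verts - {v}
        v0 += s
    return adj

def is_nkt(adj, k, t):
    """Brute-force (n,k,t) check."""
    def has_kt(X):
        return any(all(u in adj[v] for u, v in combinations(T, 2))
                   for T in combinations(X, t))
    return all(has_kt(X) for X in combinations(sorted(adj), k))

def alpha(adj):
    """Independence number by exhaustive search."""
    V = sorted(adj)
    return max(len(X) for m in range(len(V) + 1)
               for X in combinations(V, m)
               if all(u not in adj[v] for u, v in combinations(X, 2)))

def edges(adj):
    """Number of edges (each counted once)."""
    return sum(len(nb) for nb in adj.values()) // 2

for sizes in ([2, 2, 5], [1, 4, 4], [1, 1, 1, 1, 5]):
    G = clique_union(sizes)
    print(sizes, edges(G), alpha(G), is_nkt(G, 8, 4))
```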
\section{Future Directions}
Our main result and \cite{REU} together solve the extremal problem of finding the minimum number of edges in an $(n,k,t)$-graph.
Viewing edges as cliques on $2$ vertices leads to one possible generalization of this problem.
\begin{question} \label{futurescliques}
Let $s$ be a positive integer. What is the minimum number of cliques on $s$ vertices in an $(n,k,t)$-graph?
\end{question}
A logical first step may be to ask:
What is the minimum number of cliques on $s$ vertices in an $(n,k,2)$-graph?
Recall, when $s=2$, this is equivalent to Tur\'{a}n's Theorem.
For general $s$, it turns out this is a special case of a question asked by Erd\H{o}s \cite{Erdos}.
He conjectured that a disjoint union of cliques would always be best.
However, Nikiforov \cite{nik} disproved this conjecture, observing that a clique-blowup of $C_5$ has independence number $2$ (so is an $(n,3,2)$-graph), but has fewer cliques on $4$ vertices than $2K_{\frac{n}{2}}$, the graph which has the fewest $K_4$'s among all $(n,3,2)$-graphs that are a disjoint union of cliques.
Das et al. \cite{das} and Pikhurko and Vaughan \cite{pik} independently found the minimum number of cliques on $4$ vertices in an $(n,3,2)$-graph for $n$ sufficiently large, and Pikhurko and Vaughan \cite{pik} found the minimum number of cliques on $5$, $6$, and $7$ vertices in an $(n,3,2)$-graph for $n$ sufficiently large.
But their approaches are non-elementary and rely heavily on Razborov's flag algebra method.
A summary of related results can be found in Razborov's survey \cite{razborov}.
In \cite{noble2017application}, Noble et al. showed that for $n \geq k \geq t \geq 3$ and $k \leq 2t-2$, every $(n,k,t)$-graph must contain a $K_{n-k+t}$.
Because $(k-t) K_1 + K_{n-k+t}$ is an $(n,k,t)$-graph, this shows for $n \geq k \geq t \geq 3$ and $k \leq 2t-2$ the minimum number of $s$-cliques an $(n,k,t)$-graph must contain is $\binom{n-k+t}{s}$
(noting $\binom{n-k+t}{s}=0$ if $s > n-k+t$).
Thus, the smallest interesting open question of this form is as follows:
\begin{question}
What is the minimum possible number of triangles in an $(n,5,3)$-graph?
\end{question}
We can also consider the other end of the spectrum.
What is the minimum number of cliques on $t$ vertices in an $(n,k,t)$-graph?
Also, for what values of $n$, $k$, and $t$ does an $(n,k,t)$-graph necessarily contain at least one clique on $t+1$ vertices?
Similar to Question \ref{futurescliques}, one could ask: What is the minimum $n$ such that every $(n,k,t)$-graph must contain a clique on $s$ vertices?
However, if $t=2$, this is exactly the Ramsey number $R(k,s)$.
Thus, this question seems very hard in general.
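To make the connection concrete: for $t=2$ and $k=s=3$ the answer is $R(3,3)=6$, which is already small enough to verify exhaustively. The sketch below (the function name is ours) enumerates all graphs on $n$ vertices as edge bitmasks and checks that each contains a triangle or an independent set of size $3$:

```python
from itertools import combinations

def every_graph_has_k3_or_i3(n):
    """True iff every graph on n vertices contains a triangle or an
    independent set of 3 vertices, i.e. iff n >= R(3,3)."""
    pairs = list(combinations(range(n), 2))
    idx = {p: i for i, p in enumerate(pairs)}
    # For each vertex triple, the bit positions of its three pair slots.
    triples = [tuple(idx[q] for q in combinations(T, 2))
               for T in combinations(range(n), 3)]
    for mask in range(1 << len(pairs)):            # every graph on n vertices
        if not any(all(mask >> i & 1 for i in tr)          # triangle
                   or not any(mask >> i & 1 for i in tr)   # independent 3-set
                   for tr in triples):
            return False
    return True

print(every_graph_has_k3_or_i3(5))   # False: C_5 witnesses R(3,3) > 5
print(every_graph_has_k3_or_i3(6))   # True:  R(3,3) = 6
```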
Another common direction in extremal graph theory, once the extremal structures have been identified, is to ask whether every \emph{near-optimal} example can be modified into an optimal one using relatively few edits. For example, a result of F\"{u}redi \cite{furedi2015proof}, when complemented, gives the following strengthening of Tur\'{a}n's Theorem:
\begin{theorem}\label{furedi}
Let $G$ be an $n$-vertex graph without an independent set of size $r+1$ with $|E(G)| \leq |E(\cTr{n}{r})|+q$. Then, upon adding at most $q$ additional edges, $G$ contains a spanning vertex-disjoint union of at most $r$ cliques.
\end{theorem}
Fixing an independence number $r$ and writing $G'$ for a minimum $(n,k,t,r)$-graph
(as in Theorem \ref{mainthm}), this leads us to the following question:
\begin{question}
Suppose $G$ is an $(n,k,t)$-graph with independence number $r$, satisfying $|E(G)| \leq |E(G')|+q$. Does there exist a function $f(q)$ such that only $f(q)$ edges need adding to make $G$ contain a spanning disjoint union of $\leq r$ cliques?
\end{question}
One would need the function $f(q)$ to be independent of $n$ in order for the above to be interesting. Does $f$ need to depend on $k$? Or on $t$? Theorem \ref{furedi} suggests that the answer to the first question may actually be no.
Finally, in light of the constructive nature of the proof of Theorem \ref{mainthm}, one may ask whether every \emph{inclusion-minimal} $(n,k,t)$-graph (that is, an $(n,k,t)$-graph which is no longer an $(n,k,t)$-graph upon the removal of any edge) is necessarily a disjoint union of cliques. But there are counterexamples to this even in the original setting of Tur\'an's theorem ($t=2$). For example, $C_5$ is an inclusion-minimal $(5,3,2)$-graph, but contains neither of the inclusion-minimal $(5,3,2)$-graphs which are disjoint unions of cliques ($K_2+K_3$ and $K_1+K_4$).
Since there are inclusion-minimal $(n,k,t)$-graphs which are not disjoint unions of cliques, this suggests the saturation problem for $(n,k,t)$-graphs is interesting:
\begin{question}
Among all inclusion-minimal $(n,k,t)$-graphs, what is the maximum possible number of edges?
\end{question}
The classical saturation result of Zykov \cite{zykov1949some} and Erd\H{o}s-Hajnal-Moon \cite{erdos1964problem} (when complemented) says that when $t=2$, the answer is given by $(k-2)K_1+K_{n-k+2}$.
The example above shows there are instances of inclusion-minimal $(n,k,t)$-graphs that are not disjoint unions of cliques, but $K_1+K_4$ still has more edges than $C_5$. In general, we conjecture that the maximum is still always attained by a disjoint union of cliques.
\begin{thebibliography}{10}
\bibitem{REU}
J.~Allen, B.~Drynan, D.~Dunst, Z.~Hefty, and E.~Smith.
\newblock A precise definition of an extremal family.
\newblock In preparation.
\bibitem{bukh2015random}
B.~Bukh.
\newblock Random algebraic construction of extremal graphs.
\newblock {\em Bulletin of the London Mathematical Society}, 47(6):939--945,
2015.
\bibitem{das}
S.~Das, H.~Huang, J.~Ma, H.~Naves, and B.~Sudakov.
\newblock A problem of Erd{\H{o}}s on the minimum number of $k$-cliques.
\newblock {\em Journal of Combinatorial Theory, Series B}, 103(3):344--373,
2013.
\bibitem{Erdos}
P.~Erd\H{o}s.
\newblock On the number of complete subgraphs contained in certain graphs.
\newblock {\em Magyar Tud. Akad. Mat. Kutat\'{o} Int. K\"{o}zl.}, 7:459--464,
1962.
\bibitem{erdos1964problem}
P.~Erd\H{o}s, A.~Hajnal, and J.~W. Moon.
\newblock A problem in graph theory.
\newblock {\em The American Mathematical Monthly}, 71(10):1107--1110, 1964.
\bibitem{erdos1961minimal}
P.~Erd\H{o}s and T.~Gallai.
\newblock On the minimal number of vertices representing the edges of a graph.
\newblock {\em Publ. Math. Inst. Hungar. Acad. Sci.}, 6:181--203, 1961.
\bibitem{erdos1946structure}
P.~Erd\H{o}s and A.~H. Stone.
\newblock On the structure of linear graphs.
\newblock {\em Bulletin of the American Mathematical Society},
52(12):1087--1091, 1946.
\bibitem{furedi2015proof}
Z.~F{\"u}redi.
\newblock A proof of the stability of extremal graphs, Simonovits' stability
from Szemer{\'e}di's regularity.
\newblock {\em Journal of Combinatorial Theory, Series B}, 115:66--71, 2015.
\bibitem{furedi2013history}
Z.~F{\"u}redi and M.~Simonovits.
\newblock The history of degenerate (bipartite) extremal graph problems.
\newblock In {\em Erd{\H{o}}s Centennial}, pages 169--264. Springer, 2013.
\bibitem{hajnal1965theorem}
A.~Hajnal.
\newblock A theorem on $k$-saturated graphs.
\newblock {\em Canadian Journal of Mathematics}, 17:720--724, 1965.
\bibitem{Hoffman}
D.~Hoffman, P.~Johnson, and J.~McDonald.
\newblock Minimum {$(n,k,t)$} clique graphs.
\newblock {\em Congr. Numer.}, 223:199--204, 2015.
\bibitem{HoffPav}
D.~Hoffman and H.~Pavlis.
\newblock Private communication.
\bibitem{nik}
V.~Nikiforov.
\newblock On the minimum number of $k$-cliques in graphs with restricted
independence number.
\newblock {\em Combinatorics, Probability and Computing}, 10(4):361--366, 2001.
\bibitem{noble2017application}
M.~Noble, P.~Johnson, D.~Hoffman, and J.~McDonald.
\newblock Application of an extremal result of Erd{\H{o}}s and Gallai to the
$(n,k,t)$ problem.
\newblock {\em Theory and Applications of Graphs}, 4(2):1, 2017.
\bibitem{pik}
O.~Pikhurko and E.~R. Vaughan.
\newblock Minimum number of $k$-cliques in graphs with bounded independence
number.
\newblock {\em Combinatorics, Probability and Computing}, 22(6):910--934, 2013.
\bibitem{razborov}
A.~A. Razborov.
\newblock Flag algebras: an interim report.
\newblock In {\em The Mathematics of Paul Erd{\H{o}}s II}, pages 207--232.
Springer, 2013.
\bibitem{zykov1949some}
A.~A. Zykov.
\newblock On some properties of linear complexes.
\newblock {\em Matematicheskii sbornik}, 66(2):163--188, 1949.
\end{thebibliography}
\end{document}
\begin{document}
\setlength{\parindent}{0pt}
\title[Local regularity and Hausdorff dimension of Gaussian fields]{From almost sure local regularity to almost sure Hausdorff dimension for Gaussian fields}
\author{Erick Herbin}
\address{Ecole Centrale Paris, Grande Voie des Vignes, 92295 Chatenay-Malabry, France} \email{[email protected]}
\author{Benjamin Arras}
\email{[email protected]}
\author{Geoffroy Barruel}
\email{[email protected]}
\subjclass[2000]{60G15, 60G17, 60G10, 60G22, 60G60}
\keywords{Gaussian processes, Hausdorff dimension, (multi)fractional Brownian motion, multiparameter processes, H\"older regularity, stationarity.}
\begin{abstract}
Fine regularity of stochastic processes is usually measured in a local way by local H\"older exponents and in a global way by fractal dimensions.
Following a previous work of Adler, we connect these two concepts for multiparameter Gaussian random fields. More precisely, we prove that almost surely the Hausdorff dimensions of the range and the graph in any ball $B(t_0,\rho)$ are bounded from above using the local H\"older exponent at $t_0$.
We define the deterministic local sub-exponent of Gaussian processes, which allows us to obtain an almost sure lower bound for these dimensions.
Moreover, the Hausdorff dimensions of the sample path on an open interval are controlled almost surely by the minimum of the local exponents.
Then, we apply these generic results to the cases of the multiparameter fractional Brownian motion, the multifractional Brownian motion whose regularity function $H$ is irregular and the generalized Weierstrass function, whose Hausdorff dimensions were unknown so far.
\end{abstract}
\maketitle
\section{Introduction}
Since the 70's, the regularity of stochastic processes has been considered in two different ways.
On one hand, the local regularity of sample paths is usually measured by local moduli of continuity and H\"older exponents (e.g. \cite{dudley, 2ml, Orey.Pruitt(1973), Yadrenko}). And on the other hand, the global regularity can be quantified by the global H\"older exponent (e.g. \cite{Xiao2009, Xiao2010}) or by fractal dimensions (Hausdorff dimension, box-counting dimension, packing dimension, \dots) and respective measures of the graph of the processes (e.g. \cite{Berman(1972), Pruitt1969, Strassen(1964)}).
As an example, if $B^H=\{B^H_t;\;t\in\mathbf{R}_+\}$ is a real-valued fractional Brownian motion (fBm) with self-similarity index $H\in (0,1)$, the pointwise H\"older exponent at any point $t\in\mathbf{R}_+$ satisfies $\texttt{\large $\boldsymbol{\alpha}$}_{B^H}(t) = H$ almost surely.
Besides, the Hausdorff dimension of the graph of $B^H$ is given by $\dim_{\mathcal{H}} (\mathrm{Gr}_{B^H}) = 2-H$ almost surely.
In this specific case, we observe a connection between the global and local points of view of regularity for fBm.
Is it possible to obtain some general result, for some larger class of processes?
In \cite{Adler77}, Adler showed that the Hausdorff dimension of the graph of an $\mathbf{R}^d$-valued Gaussian field $X=\{X^{(i)}_t;\;1\leq i\leq p,\; t\in\mathbf{R}^N_+\}$, made of i.i.d. Gaussian coordinate processes $X^{(i)}$ with stationary increments, can be deduced from the local behavior of its incremental variance. More precisely, when the quantities $\sigma^2(t)=\mathbf{E}[|X^{(i)}_{t+t_0}-X^{(i)}_{t_0}|^2]$, which are independent of $1\leq i\leq p$ and $t_0\in\mathbf{R}^N_+$, satisfy
\begin{equation}\label{ineqAdler}
\forall \epsilon>0,\quad
|t|^{\alpha+\epsilon} \leq \sigma(t) \leq |t|^{\alpha-\epsilon}
\quad\textrm{as } t\rightarrow 0,
\end{equation}
the Hausdorff dimension of the graph $\mathrm{Gr}_X=\{(t,X_t):t\in\mathbf{R}^N_+\}$ of $X$ is proved to be
\begin{align*}
\dim_{\mathcal{H}} (\mathrm{Gr}_X) &= \min\left\{ \frac{N}{\alpha}, N+d (1-\alpha) \right\}.
\end{align*}
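As a consistency check (our rephrasing, not part of Adler's statement), specializing to the real-valued fBm, that is $N=d=1$ and $\alpha=H$, recovers the dimension of the graph quoted above:
\begin{align*}
\dim_{\mathcal{H}} (\mathrm{Gr}_{B^H}) &= \min\left\{ \frac{1}{H},\; 1+(1-H) \right\} = 2-H,
\end{align*}
since $\frac{1}{H}-(2-H)=\frac{(1-H)^2}{H}\geq 0$ for every $H\in(0,1)$.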
This result followed Yoder's previous works in \cite{Yoder} where the Hausdorff dimensions of the graph and also the range $\mathrm{Rg}_X=\{ X_t: t\in\mathbf{R}^N_+ \}$ were obtained for a multiparameter Brownian motion in $\mathbf{R}^d$.
As an application of Adler's result, the Hausdorff dimension of the graph of fractional Brownian motion can be deduced from the local H\"older exponents of its sample paths.
As an extension of this result, Xiao has completely determined in \cite{Xiao95} the Hausdorff dimensions of the image $X(K)$ and the graph $\mathrm{Gr}_X(K)$ of a Gaussian field $X$ as above, for a compact set $K\subset\mathbf{R}^N_+$, as a function of $\dim_{\mathcal{H}} K$.
In this paper, we aim at extending Adler's result to Gaussian random fields with non-stationary increments. We will see that this goal requires a localization of Adler's index $\alpha$ along the sample paths.
There is a large literature on the local regularity of Gaussian processes. We refer to \cite{AT, davar, ledouxtalagrand, marcusrosen} for a contemporary and detailed review.
This field of research is still very active, especially in the multiparameter context, and a non-exhaustive list of authors and recent works in this area includes Ayache \cite{AyLeVe, Ayache.Shieh.ea(2011)}, Mountford \cite{Baraka.Mountford.ea(2009)}, Dozzi \cite{dozzi07}, Khoshnevisan \cite{KX05}, Lawler \cite{lawler2011}, L\'evy V\'ehel \cite{2ml}, Lind \cite{lind08} and Xiao \cite{mwx, tudorxiao07, Xiao95, Xiao2009, Xiao2010}.
Usually the local regularity of an $\mathbf{R}^d$-valued stochastic process $X$ at $t_0\in\mathbf{R}^N_+$ is measured by the pointwise and local H\"older exponents $\texttt{\large $\boldsymbol{\alpha}$}_X(t_0)$ and $\widetilde{\texttt{\large $\boldsymbol{\alpha}$}}_X(t_0)$ defined by
\begin{align}
\texttt{\large $\boldsymbol{\alpha}$}_X(t_0) &= \sup\left\{ \alpha>0: \limsup_{\rho\rightarrow 0}
\sup_{s,t\in B(t_0,\rho)} \frac{\|X_t-X_s\|}{\rho^{\alpha}} < +\infty \right\},\nonumber\\
\widetilde{\texttt{\large $\boldsymbol{\alpha}$}}_X(t_0) &= \sup\left\{ \alpha>0: \lim_{\rho\rightarrow 0}
\sup_{s,t\in B(t_0,\rho)} \frac{\|X_t-X_s\|}{\| t-s \|^{\alpha}} < +\infty \right\}.\label{eq:localHolder-exp}
\end{align}
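To make these definitions concrete, here is a minimal numerical sketch (illustrative only, not part of the paper's framework; the function \texttt{local\_holder\_proxy}, its grid discretization, and the choice of test function are our own). It approximates the local exponent by the infimum of log-increment ratios over a small ball, and recovers the exponent $1/2$ of $t\mapsto|t|^{1/2}$ at $t_0=0$:

```python
import math

def local_holder_proxy(f, t0, rho=0.1, n=400):
    """Crude grid proxy for the local Hoelder exponent of f at t0 (cf. the
    sup-ratio definition over shrinking balls): the infimum, over pairs
    s, t in B(t0, rho) with f(s) != f(t), of log|f(t)-f(s)| / log|t-s|."""
    ts = [t0 - rho + 2.0 * rho * k / n for k in range(n + 1)]
    fs = [f(t) for t in ts]
    best = float("inf")
    for i in range(len(ts)):
        for j in range(i + 1, len(ts)):
            df = abs(fs[i] - fs[j])
            dt = ts[j] - ts[i]
            # only pairs with a nonzero increment and |t-s| < 1 are informative
            if df > 0.0 and dt < 1.0:
                best = min(best, math.log(df) / math.log(dt))
    return best

# f(t) = |t|^{1/2} is Hoelder of exact order 1/2 near t0 = 0.
est = local_holder_proxy(lambda t: math.sqrt(abs(t)), t0=0.0)
```

Since $|\sqrt{|t|}-\sqrt{|s|}|\leq\sqrt{|t-s|}$, every ratio is at least $1/2$, and the pair $(0,h)$ attains it, so the proxy returns $1/2$ up to floating-point error.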
A general connection between the local structure of a stochastic process and the Hausdorff dimension of its graph has already been studied. In \cite{BCI03}, the specific case of local self-similarity property has been considered.
Here, we show how the local H\"older regularity of a Gaussian random field allows one to estimate the Hausdorff dimensions of its range $\mathrm{Rg}_X$ and its graph $\mathrm{Gr}_X$.
Recently in \cite{2ml}, the quantities $\mathbf{E}[|X_t - X_s|^2]$ for $s,t$ close to $t_0\in\mathbf{R}^N_+$ were proved to capture a great deal of information about the almost sure local regularity. More precisely, the almost sure $2$-microlocal frontier of $X$ at $t_0$ allows one to predict the evolution of the local regularity at $t_0$ under fractional integration or differentiation. In particular, as special points of the $2$-microlocal frontier, both the pointwise and local H\"older exponents can be derived from the study of $\mathbf{E}[|X_t - X_s|^2]$.
For all $t_0\in\mathbf{R}_+^N$, we define in Section~\ref{sec:subexp} the exponents $\underline{\mathbb{\alpha}}_X(t_0)$ and $\widetilde{\mathbb{\alpha}}_X(t_0)$ of a real-valued Gaussian process $X$ as, respectively, the smallest $\underline{\mathbb{\alpha}}>0$ and the largest $\widetilde{\mathbb{\alpha}}>0$ such that
\begin{equation*}
\forall s,t\in B(t_0,\rho_0),\quad
\| t-s \|^{2\, \underline{\mathbb{\alpha}}} \leq \mathbf{E}[|X_t-X_s|^2]
\leq \| t-s \|^{2\, \widetilde{\mathbb{\alpha}}},
\end{equation*}
for some $\rho_0>0$.
The exponents of the components $X^{(i)}$ of a Gaussian random field $X=(X^{(1)},\dots,X^{(d)})$ yield almost sure lower and upper bounds for the quantities
$$\lim_{\rho\rightarrow 0}\dim_{\mathcal{H}} (\mathrm{Gr}_X(B(t_0,\rho)))\quad\textrm{and}\quad\lim_{\rho\rightarrow 0}\dim_{\mathcal{H}} (\mathrm{Rg}_X(B(t_0,\rho))).$$
After the statement of the main result in Section~\ref{sec:main}, the almost sure local Hausdorff dimensions are given uniformly in $t_0\in\mathbf{R}_+^N$, and the global dimensions $\dim_{\mathcal{H}} (\mathrm{Gr}_X(I))$ and $\dim_{\mathcal{H}} (\mathrm{Rg}_X(I))$ are almost surely bounded for any open interval $I\subset\mathbf{R}_+^N$, as functions of $\inf_{t\in I}\underline{\mathbb{\alpha}}_{X^{(i)}}(t)$ and $\inf_{t\in I}\widetilde{\mathbb{\alpha}}_{X^{(i)}}(t)$.
Sections~\ref{sec:up} and \ref{sec:low} are devoted to the proofs of the upper bound and lower bound of the Hausdorff dimensions respectively.
In Section~\ref{sec:app}, the main result is applied to some stochastic processes whose increments are not stationary and whose Hausdorff dimension is still unknown.
The first one is the multiparameter fractional Brownian motion (MpfBm), derived from the set-indexed fractional Brownian motion introduced in \cite{sifBm, MpfBm}. In contrast to the fractional Brownian sheet studied in \cite{AyXiao, WuXiao}, the MpfBm does not satisfy the increment stationarity property. The study of the local regularity of its sample paths then allows us to determine the Hausdorff dimension of its graph in Section~\ref{sec:mpfbm}.
The second application is the multifractional Brownian motion (mBm), introduced in \cite{RPJLV,BJR} as an extension of the classical fractional Brownian motion where the self-similarity index $H\in (0,1)$ is substituted with a function $H:\mathbf{R}_+\rightarrow (0,1)$ in order to allow the local regularity to vary along the sample path.
The immediate consequence is the loss of the increment stationarity property. The knowledge of the local H\"older regularity then yields the Hausdorff dimensions of the graph and the range of the mBm.
In the case of a regular function $H$, the almost sure value of $\lim_{\rho\rightarrow 0}\dim_{\mathcal{H}} (\mathrm{Gr}_X(B(t_0,\rho)))$ was already known to be $2-H(t_0)$ for any fixed $t_0\in\mathbf{R}_+$. In Section~\ref{sec:mbm}, this almost sure result is proved uniformly in $t_0$. The new case of an irregular function $H$ is also considered.
The last application of this article concerns the generalized Weierstrass function, defined as a stochastic Gaussian version of the well-known Weierstrass function, where the index varies along the trajectory. The local H\"older regularity is determined in Section~\ref{sec:GW} and, consequently, the Hausdorff dimension of its sample path.
\section{Hausdorff dimension of the sample paths of Gaussian random fields}
In this paper, a {\em multiparameter Gaussian random field} in $\mathbf{R}^d$ denotes a stochastic process $X=\{ X_t;\;t\in\mathbf{R}_+^N \}$, where $X_t = (X^{(1)}_t,\dots,X^{(d)}_t)\in\mathbf{R}^d$ for all $t\in\mathbf{R}^N_+$ and the coordinate processes $X^{(i)}=\{ X^{(i)}_t;\;t\in\mathbf{R}^N_+\}$ are independent real-valued Gaussian processes with the same law.
\subsection{A new local exponent}\label{sec:subexp}
According to \cite{2ml}, the local regularity of a Gaussian process $X=\{X_t;\;t\in\mathbf{R}^N_+\}$ can be described by the {\em deterministic local H\"older exponent}
\begin{equation}\label{DetLocalHolder}
\widetilde{\mathbb{\alpha}}_X(t_0) = \sup\left\{ \alpha>0 :
\lim_{\rho\rightarrow 0} \sup_{s,t\in B(t_0,\rho)} \frac{\mathbf{E}[|X_t-X_s|^2]}{\|t-s\|^{2\alpha}}
<+\infty \right\}.
\end{equation}
More precisely, the local H\"older exponent of $X$ at any $t_0\in\mathbf{R}_+^N$ is proved to satisfy $\widetilde{\texttt{\large $\boldsymbol{\alpha}$}}_X(t_0)=\widetilde{\mathbb{\alpha}}_X(t_0)$ a.s.
In order to get a localized version of (\ref{ineqAdler}), we need to introduce a new exponent $\underline{\mathbb{\alpha}}_X(t_0)$, the {\em deterministic local sub-exponent} at any $t_0\in\mathbf{R}_+^N$,
\begin{align}\label{DetUpLocalHolder}
\underline{\mathbb{\alpha}}_X(t_0) &= \inf\left\{ \alpha>0 :
\lim_{\rho\rightarrow 0} \inf_{s,t\in B(t_0,\rho)} \frac{\mathbf{E}[|X_t-X_s|^2]}{\|t-s\|^{2\alpha}}
=+\infty \right\} \\
&= \sup\left\{ \alpha>0 :
\lim_{\rho\rightarrow 0} \inf_{s,t\in B(t_0,\rho)} \frac{\mathbf{E}[|X_t-X_s|^2]}{\|t-s\|^{2\alpha}}
=0 \right\}. \nonumber
\end{align}
As usual, this double definition relies on the identity
\begin{align*}
\frac{\mathbf{E}[|X_t-X_s|^2]}{\|t-s\|^{2 \alpha'}}
= \frac{\mathbf{E}[|X_t-X_s|^2]}{\|t-s\|^{2 \alpha}}
\times \| t-s \|^{2(\alpha-\alpha')}.
\end{align*}
\
\begin{lemma}\label{lemcovinc}
Let $X=\{X_t;\;t\in\mathbf{R}^N_+\}$ be a multiparameter Gaussian process.\\
Consider $\widetilde{\mathbb{\alpha}}_X(t_0)$ and $\underline{\mathbb{\alpha}}_X(t_0)$ the deterministic local H\"older exponent and local sub-exponent of $X$ at $t_0\in\mathbf{R}_+^N$ (as defined in (\ref{DetLocalHolder}) and (\ref{DetUpLocalHolder})).
For any $\epsilon>0$, there exists $\rho_0>0$ such that
\begin{equation*}
\forall s,t\in B(t_0,\rho_0),\quad
\| t-s \|^{2\, \underline{\mathbb{\alpha}}_X(t_0) +\epsilon} \leq \mathbf{E}[|X_t-X_s|^2]
\leq \| t-s \|^{2\, \widetilde{\mathbb{\alpha}}_X(t_0) -\epsilon}.
\end{equation*}
\end{lemma}
\
\begin{proof}
For any $\epsilon >0$, the definition of $\widetilde{\mathbb{\alpha}}_X(t_0)$ leads to
\begin{equation*}
\lim_{\rho\rightarrow 0} \sup_{s,t\in B(t_0,\rho)} \frac{\mathbf{E}[|X_t-X_s|^2]}{\|t-s\|^{2\,\widetilde{\mathbb{\alpha}}_X(t_0) - \epsilon}} =0.
\end{equation*}
Then there exists $\rho_1>0$ such that
\begin{equation*}
0<\rho\leq\rho_1 \Rightarrow \forall s,t\in B(t_0,\rho),\ \mathbf{E}[|X_t-X_s|^2] \leq \|t-s\|^{2\,\widetilde{\mathbb{\alpha}}_X(t_0) - \epsilon}
\end{equation*}
and then
\begin{equation*}
\forall s,t\in B(t_0,\rho_1),\quad \mathbf{E}[|X_t-X_s|^2] \leq \|t-s\|^{2\,\widetilde{\mathbb{\alpha}}_X(t_0) - \epsilon}.
\end{equation*}
For the lower bound, we use the definition of the new exponent $\underline{\mathbb{\alpha}}_X(t_0)$
\begin{align*}
\lim_{\rho\rightarrow 0} \inf_{s,t\in B(t_0,\rho)} \frac{\mathbf{E}[|X_t-X_s|^2]}{\|t-s\|^{2\,\underline{\mathbb{\alpha}}_X(t_0) + \epsilon}} =+\infty.
\end{align*}
Then, there exists $\rho_2>0$ such that
\begin{equation*}
0<\rho\leq\rho_2 \Rightarrow \forall s,t\in B(t_0,\rho),\ \mathbf{E}[|X_t-X_s|^2] \geq \|t-s\|^{2\,\underline{\mathbb{\alpha}}_X(t_0) + \epsilon}
\end{equation*}
and then
\begin{equation*}
\forall s,t\in B(t_0,\rho_2),\quad
\mathbf{E}[|X_t-X_s|^2] \geq \|t-s\|^{2\,\underline{\mathbb{\alpha}}_X(t_0) + \epsilon}.
\end{equation*}
The result follows by setting $\rho_0=\rho_1\wedge\rho_2$.
\end{proof}
\
From the previous result, we can derive an ordering relation between the deterministic local sub-exponent and the deterministic local H\"older exponent.
We have
\begin{equation}\label{ineqexp}
\forall t_0\in\mathbf{R}^N_+,\quad
\widetilde{\mathbb{\alpha}}_X(t_0) \leq \underline{\mathbb{\alpha}}_X(t_0).
\end{equation}
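For instance, equality holds in (\ref{ineqexp}) for fractional Brownian motion $B^H$. By stationarity of increments and self-similarity, a short check:

```latex
% The incremental variance of fBm is exactly a power of the increment:
\mathbf{E}[|B^H_t - B^H_s|^2] = |t-s|^{2H}
\quad\textrm{for all } s,t\in\mathbf{R}_+ ,
% so the sup-ratio in (\ref{DetLocalHolder}) stays bounded iff \alpha \leq H,
% while the inf-ratio in (\ref{DetUpLocalHolder}) tends to +\infty iff
% \alpha > H; hence, at every t_0,
\widetilde{\mathbb{\alpha}}_{B^H}(t_0) = \underline{\mathbb{\alpha}}_{B^H}(t_0) = H .
```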
\subsection{Main result: The Hausdorff dimension of Gaussian random fields}\label{sec:main}
For the sake of self-containedness, we recall the basic framework of the definition of the Hausdorff dimension.
For all $\delta>0$, a $\delta$-covering of a non-empty subset $E$ of $\mathbf{R}^d$ is any collection $A = (A_i)_{i\in\mathbf{N}}$ such that
\begin{itemize}
\item $\forall i \in \mathbf{N}, \mathrm{diam}(A_i) < \delta$, where $\mathrm{diam}(A_i)$ denotes $\sup(\|x-y\|;\; x,y\in A_i)$ ; and
\item $E \subseteq \bigcup_{i \in \mathbf{N}}A_i$.
\end{itemize}
We denote by $\Sigma_\delta(E)$ the set of $\delta$-coverings of $E$ and by $\Sigma(E)$ the set of coverings of $E$. We define
$$\mathcal{H}^s_{\delta}(E)=\inf_{A\in{\Sigma_\delta(E)}}\left\{\sum_{i=1}^{\infty}\mathrm{diam}(A_i)^s\right\},$$
and the Hausdorff measure of $E$ by
\begin{align*}
\mathcal{H}^s(E)=\lim_{\delta\rightarrow 0}\mathcal{H}^s_{\delta}(E)
= \begin{cases}
+\infty & \text{if } 0 \leq s < \dim_{\mathcal{H}} (E), \\
0 & \text{if } s > \dim_{\mathcal{H}} (E).
\end{cases}
\end{align*}
The quantity $\dim_{\mathcal{H}} (E)$ is the Hausdorff dimension of $E$. It is defined by
$$\dim_{\mathcal{H}} (E)=\inf \left\{s \in \mathbf{R}_+: \mathcal{H}^s(E)=0\right\}=\sup\left\{s \in \mathbf{R}_+: \mathcal{H}^s(E)=+\infty\right\}.$$
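As an elementary illustration of these definitions, consider the unit interval $E=[0,1]\subset\mathbf{R}$:

```latex
% Covering [0,1] by n intervals of length 1/n gives
\mathcal{H}^s_{1/n}([0,1]) \leq n \cdot n^{-s} = n^{1-s} ,
% so \mathcal{H}^s([0,1]) = 0 for every s > 1; since \mathcal{H}^1 coincides
% with the Lebesgue measure on \mathbf{R}, \mathcal{H}^1([0,1]) = 1, and
\dim_{\mathcal{H}} ([0,1]) = 1 .
```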
\
For any random field $X=\{X^{(i)}_t;\;1\leq i\leq d,\;t\in\mathbf{R}^N_+\}$ made of i.i.d. Gaussian coordinate processes with possibly non-stationary increments, the Hausdorff dimensions of the range $\mathrm{Rg}_X(B(t_0,\rho)) = \{ X_t;\; t\in B(t_0,\rho)\}$ and the graph $\mathrm{Gr}_X(B(t_0,\rho)) = \{ (t,X_t);\; t\in B(t_0,\rho)\}$ of $X$ in the ball $B(t_0,\rho)$ of center $t_0$ and radius $\rho>0$ can be estimated when $\rho$ goes to $0$, using the deterministic local H\"older exponent and the deterministic local sub-exponent of $X^{(i)}$ at $t_0$.
In the following statements and in the sequel of the paper, the deterministic local H\"older exponent $\widetilde{\mathbb{\alpha}}_{X^{(i)}}(t_0)$ and the deterministic local sub-exponent $\underline{\mathbb{\alpha}}_{X^{(i)}}(t_0)$ of $X^{(i)}$ at any $t_0\in\mathbf{R}^N_+$ are independent of $1\leq i\leq d$, since the components $X^{(i)}$ are assumed to be i.i.d.
\begin{theorem}[Pointwise almost sure result]\label{thmain}
Let $X=\{X_t;\;t\in\mathbf{R}^N_+\}$ be a multiparameter Gaussian random field in $\mathbf{R}^d$. Let $\widetilde{\mathbb{\alpha}}_{X^{(i)}}(t_0)$ be the deterministic local H\"older exponent and $\underline{\mathbb{\alpha}}_{X^{(i)}}(t_0)$ the deterministic local sub-exponent of $X^{(i)}$ at $t_0\in\mathbf{R}_+^N$ as defined in (\ref{DetLocalHolder}) and (\ref{DetUpLocalHolder}), independent of $1\leq i\leq d$.
Assume that $\widetilde{\mathbb{\alpha}}_{X^{(i)}}(t_0)>0$.
Then, the Hausdorff dimensions of the graph and the range of $X$ satisfy almost surely,
\begin{align*}
\left. \begin{array}{l r}
\textrm{if } N\leq d\ \underline{\mathbb{\alpha}}_{X^{(i)}}(t_0),
& N/\underline{\mathbb{\alpha}}_{X^{(i)}}(t_0) \\
\textrm{if } N> d\ \underline{\mathbb{\alpha}}_{X^{(i)}}(t_0),
& N + d(1-\underline{\mathbb{\alpha}}_{X^{(i)}}(t_0))
\end{array} \right\}
\leq &\lim_{\rho\rightarrow 0}\dim_{\mathcal{H}} (\mathrm{Gr}_X(B(t_0,\rho))) \\
&\leq \min\left\{\frac{N}{\widetilde{\mathbb{\alpha}}_{X^{(i)}}(t_0)} ; N + d(1-\widetilde{\mathbb{\alpha}}_{X^{(i)}}(t_0))\right\}
\end{align*}
and
\begin{align*}
\left. \begin{array}{l r}
\textrm{if } N\leq d\ \underline{\mathbb{\alpha}}_{X^{(i)}}(t_0),
& N/\underline{\mathbb{\alpha}}_{X^{(i)}}(t_0) \\
\textrm{if } N> d\ \underline{\mathbb{\alpha}}_{X^{(i)}}(t_0),
& d
\end{array} \right\}
\leq \lim_{\rho\rightarrow 0}\dim_{\mathcal{H}} (\mathrm{Rg}_X(B(t_0,\rho)))
\leq \min\left\{\frac{N}{\widetilde{\mathbb{\alpha}}_{X^{(i)}}(t_0)} ; d\right\}.
\end{align*}
\end{theorem}
The proof of Theorem \ref{thmain} relies on Propositions \ref{propmajdimH} and \ref{propmindimH}.
\
\begin{theorem}[Uniform almost sure result]\label{thmainunif}
Let $X=\{X_t;\;t\in\mathbf{R}^N_+\}$ be a multiparameter Gaussian random field in $\mathbf{R}^d$. Let $\widetilde{\mathbb{\alpha}}_{X^{(i)}}(t)$ be the deterministic local H\"older exponent and $\underline{\mathbb{\alpha}}_{X^{(i)}}(t)$ the deterministic local sub-exponent of $X^{(i)}$ at any $t\in\mathbf{R}_+^N$.
Set $\mathcal{A} = \{ t\in\mathbf{R}_+^N: \liminf_{u\rightarrow t}\widetilde{\mathbb{\alpha}}_{X^{(i)}}(u)>0\}$.
Then, with probability one, for all $t_0\in\mathcal{A}$,
\begin{itemize}
\item if $N\leq d\ \liminf_{u\rightarrow t_0}\underline{\mathbb{\alpha}}_{X^{(i)}}(u)$ then
\begin{align*}
\frac{N}{\displaystyle\liminf_{u\rightarrow t_0}\underline{\mathbb{\alpha}}_{X^{(i)}}(u)}
\leq \lim_{\rho\rightarrow 0}&\dim_{\mathcal{H}} (\mathrm{Gr}_X(B(t_0,\rho))) \\
&\leq \min\left\{\frac{N}{\displaystyle\liminf_{u\rightarrow t_0}\widetilde{\mathbb{\alpha}}_{X^{(i)}}(u)} ; N + d(1-\liminf_{u\rightarrow t_0}\widetilde{\mathbb{\alpha}}_{X^{(i)}}(u))\right\}
\end{align*}
and
\begin{align*}
\frac{N}{\displaystyle\liminf_{u\rightarrow t_0}\underline{\mathbb{\alpha}}_{X^{(i)}}(u)}
\leq \lim_{\rho\rightarrow 0}&\dim_{\mathcal{H}} (\mathrm{Rg}_X(B(t_0,\rho)))
\leq \min\left\{\frac{N}{\displaystyle\liminf_{u\rightarrow t_0}\widetilde{\mathbb{\alpha}}_{X^{(i)}}(u)} ; d\right\}.
\end{align*}
\item if $N> d\ \liminf_{u\rightarrow t_0}\underline{\mathbb{\alpha}}_{X^{(i)}}(u)$ then
\begin{align*}
N + d(1-\liminf_{u\rightarrow t_0}\underline{\mathbb{\alpha}}_{X^{(i)}}(u))
\leq \lim_{\rho\rightarrow 0}&\dim_{\mathcal{H}} (\mathrm{Gr}_X(B(t_0,\rho))) \\
&\leq \min\left\{\frac{N}{\displaystyle\liminf_{u\rightarrow t_0}\widetilde{\mathbb{\alpha}}_{X^{(i)}}(u)} ; N + d(1-\liminf_{u\rightarrow t_0}\widetilde{\mathbb{\alpha}}_{X^{(i)}}(u))\right\}
\end{align*}
and
\begin{align*}
\lim_{\rho\rightarrow 0}\dim_{\mathcal{H}} (\mathrm{Rg}_X(B(t_0,\rho))) = d.
\end{align*}
\end{itemize}
\end{theorem}
The proof of Theorem \ref{thmainunif} relies on Proposition \ref{propmajdimH} and Corollary \ref{cormindimHunif2}.
\begin{theorem}[Global almost sure result]\label{thmaincompact}
Let $X=\{X_t;\;t\in\mathbf{R}^N_+\}$ be a multiparameter Gaussian field in $\mathbf{R}^d$. Let $\widetilde{\mathbb{\alpha}}_{X^{(i)}}(t)$ be the deterministic local H\"older exponent and $\underline{\mathbb{\alpha}}_{X^{(i)}}(t)$ the deterministic local sub-exponent of $X^{(i)}$ at any $t\in\mathbf{R}_+^N$.
For any open interval $I \subset\mathbf{R}^N_+$, assume that the quantities $\underline{\mathbb{\alpha}} = \inf_{t\in I}\underline{\mathbb{\alpha}}_{X^{(i)}}(t)$ and $\widetilde{\mathbb{\alpha}} = \inf_{t\in I}\widetilde{\mathbb{\alpha}}_{X^{(i)}}(t)$ satisfy
$0 < \widetilde{\mathbb{\alpha}} \leq \underline{\mathbb{\alpha}}$.
Then, with probability one,
\begin{align*}
\left. \begin{array}{l r}
\textrm{if } N\leq d\ \underline{\mathbb{\alpha}},
& N/\underline{\mathbb{\alpha}} \\
\textrm{if } N> d\ \underline{\mathbb{\alpha}},
& N + d(1-\underline{\mathbb{\alpha}})
\end{array} \right\}
\leq \dim_{\mathcal{H}} (\mathrm{Gr}_X(I))
\leq \min\left\{N/\widetilde{\mathbb{\alpha}} ; N + d(1-\widetilde{\mathbb{\alpha}}) \right\}
\end{align*}
and
\begin{align*}
\left. \begin{array}{l r}
\textrm{if } N\leq d\ \underline{\mathbb{\alpha}},
&N/\underline{\mathbb{\alpha}} \\
\textrm{if } N> d\ \underline{\mathbb{\alpha}},
& d
\end{array} \right\}
\leq \dim_{\mathcal{H}} (\mathrm{Rg}_X(I))
\leq \min\left\{N/\widetilde{\mathbb{\alpha}} ; d\right\}.
\end{align*}
\end{theorem}
The proof of Theorem \ref{thmaincompact} relies on Corollary \ref{cormajdimHunif1} and Corollary \ref{cormindimHunif1}.
\subsection{Upper bound for the Hausdorff dimension}\label{sec:up}
\begin{lemma}\label{lemdimHmaj}
Let $X=\{X_t;\;t\in\mathbf{R}^N_+\}$ be a multiparameter random process with values in $\mathbf{R}^d$. Let $\widetilde{\texttt{\large $\boldsymbol{\alpha}$}}_X(t_0)$ be the local H\"older exponent of $X$ at $t_0\in\mathbf{R}_+^N$.
For any $\omega$ such that $\widetilde{\texttt{\large $\boldsymbol{\alpha}$}}_X(t_0)>0$,
\begin{align*}
\lim_{\rho\rightarrow 0}\dim_{\mathcal{H}} (\mathrm{Rg}_X(B(t_0,\rho))) \leq \lim_{\rho\rightarrow 0}\dim_{\mathcal{H}} &(\mathrm{Gr}_X(B(t_0,\rho))) \\
&\leq \min\left\{ \frac{N}{\widetilde{\texttt{\large $\boldsymbol{\alpha}$}}_X(t_0)} ; N + d(1-\widetilde{\texttt{\large $\boldsymbol{\alpha}$}}_X(t_0))\right\}.
\end{align*}
\end{lemma}
\begin{proof}
The first inequality follows from the fact that the range $\mathrm{Rg}_X(B(t_0,\rho))$ is a projection of the graph $\mathrm{Gr}_X(B(t_0,\rho))$.
For the second inequality, we localize the argument of Yoder \cite{Yoder}, who proved the upper bound for the Hausdorff dimensions of the range and the graph of a H\"older continuous function from $\mathbf{R}^N$ (or $[0,1]^N$) to $\mathbf{R}^d$ (see also \cite{falconer}, Corollary~11.2 p.~161).
Assume that $\omega$ is fixed such that $\widetilde{\texttt{\large $\boldsymbol{\alpha}$}}_X(t_0,\omega)>0$.
By definition of $\widetilde{\texttt{\large $\boldsymbol{\alpha}$}}_X(t_0)$, for all $\epsilon>0$ there exists $\rho_0>0$ such that for all $\rho\in (0,\rho_0]$,
\begin{align*}
\forall s,t\in B(t_0,\rho),\quad
\|X_t(\omega)-X_s(\omega)\| \leq \|t-s\|^{\widetilde{\boldsymbol{\alpha}}_X(t_0,\omega)-\epsilon}.
\end{align*}
There exists a real $0<\delta_0<1$ such that for all $u\in [0,1]^N$, $t_0 + \delta_0.u\in B(t_0,\rho_0)$ and consequently,
\begin{align*}
\forall u,v\in [0,1]^N,\quad
\|X_{t_0+\delta_0.u}(\omega)-X_{t_0+\delta_0.v}(\omega)\| \leq (\delta_0\ \|u-v\|)^{\widetilde{\boldsymbol{\alpha}}_X(t_0,\omega)-\epsilon}.
\end{align*}
Then, the function $Y_{\bullet}(\omega):u\mapsto Y_u(\omega)=X_{t_0+\delta_0.u}(\omega)$ is H\"older continuous of order $\widetilde{\texttt{\large $\boldsymbol{\alpha}$}}_X(t_0,\omega)-\epsilon$ on $[0,1]^N$ and therefore, according to \cite{Yoder},
\begin{align*}
\dim_{\mathcal{H}} (\mathrm{Rg}_{Y_{\bullet}(\omega)}([0,1]^N)) \leq \dim_{\mathcal{H}} &(\mathrm{Gr}_{Y_{\bullet}(\omega)}([0,1]^N)) \\
&\leq \min\left\{ \frac{N}{\widetilde{\texttt{\large $\boldsymbol{\alpha}$}}_X(t_0,\omega)-\epsilon} ; N + d(1-\widetilde{\texttt{\large $\boldsymbol{\alpha}$}}_X(t_0,\omega)+\epsilon)\right\}.
\end{align*}
We can observe that the graph $\mathrm{Gr}_{X_{\bullet}(\omega)}(t_0+\delta_0.[0,1]^N)$ is an affine transformation of the graph $\mathrm{Gr}_{Y_{\bullet}(\omega)}([0,1]^N)$; therefore their Hausdorff dimensions are equal.
Moreover, there exists $\rho>0$ such that $B(t_0,\rho) \subset t_0+\delta_0.[0,1]^N$. By monotonicity of the function $\rho\mapsto\dim_{\mathcal{H}} (\mathrm{Gr}_{X_{\bullet}(\omega)}(B(t_0,\rho)))$, we can write
\begin{align*}
\lim_{\rho\rightarrow 0}\dim_{\mathcal{H}} &(\mathrm{Gr}_{X_{\bullet}(\omega)}(B(t_0,\rho)))
\leq \min\left\{ \frac{N}{\widetilde{\texttt{\large $\boldsymbol{\alpha}$}}_X(t_0,\omega)-\epsilon} ; N + d(1-\widetilde{\texttt{\large $\boldsymbol{\alpha}$}}_X(t_0,\omega)+\epsilon)\right\}.
\end{align*}
Since this inequality holds for all $\epsilon>0$, we get
\begin{align*}
\lim_{\rho\rightarrow 0}\dim_{\mathcal{H}} &(\mathrm{Gr}_{X_{\bullet}(\omega)}(B(t_0,\rho)))
\leq \min\left\{ \frac{N}{\widetilde{\texttt{\large $\boldsymbol{\alpha}$}}_X(t_0,\omega)} ; N + d(1-\widetilde{\texttt{\large $\boldsymbol{\alpha}$}}_X(t_0,\omega))\right\}.
\end{align*}
\end{proof}
Lemma \ref{lemdimHmaj} gives a random upper bound for the Hausdorff dimensions of the (localized) range and graph of the sample path, as a function of its local H\"older exponent.
When $X$ is a multiparameter Gaussian field in $\mathbf{R}^d$, we prove that this upper bound can be expressed almost surely with the deterministic local H\"older exponent of the Gaussian component processes $X^{(i)}$.
\begin{proposition}\label{propmajdimH}
Let $X=\{X_t;\;t\in\mathbf{R}^N_+\}$ be a multiparameter Gaussian field in $\mathbf{R}^d$. Let $\widetilde{\mathbb{\alpha}}_{X^{(i)}}(t_0)$ be the deterministic local H\"older exponent of $X^{(i)}$ at $t_0\in\mathbf{R}_+^N$ and assume that $\widetilde{\mathbb{\alpha}}_{X^{(i)}}(t_0)>0$.
Then, almost surely
\begin{align*}
\lim_{\rho\rightarrow 0}\dim_{\mathcal{H}} (\mathrm{Rg}_X(B(t_0,\rho))) \leq \lim_{\rho\rightarrow 0}\dim_{\mathcal{H}} &(\mathrm{Gr}_X(B(t_0,\rho))) \\
&\leq \min\left\{N/\widetilde{\mathbb{\alpha}}_{X^{(i)}}(t_0) ; N + d(1-\widetilde{\mathbb{\alpha}}_{X^{(i)}}(t_0))\right\}.
\end{align*}
Moreover, a uniform result can be stated on the set
$$\mathcal{A}=\{t_0\in\mathbf{R}^N_+: \liminf_{u\rightarrow t_0}\widetilde{\mathbb{\alpha}}_{X^{(i)}}(u)>0\}.$$ With probability one, for all $t_0\in \mathcal{A}$,
\begin{align*}
\lim_{\rho\rightarrow 0}\dim_{\mathcal{H}} (\mathrm{Rg}_X(B(t_0,\rho))) \leq &\lim_{\rho\rightarrow 0}\dim_{\mathcal{H}} (\mathrm{Gr}_X(B(t_0,\rho))) \\
&\quad\leq \min\left\{N/\liminf_{u\rightarrow t_0}\widetilde{\mathbb{\alpha}}_{X^{(i)}}(u) ; N + d(1-\liminf_{u\rightarrow t_0}\widetilde{\mathbb{\alpha}}_{X^{(i)}}(u))\right\}.
\end{align*}
\end{proposition}
\begin{proof}
In \cite{2ml}, the local H\"older exponent of any Gaussian process $Y$ at $t_0\in\mathbf{R}^N_+$ such that $\widetilde{\mathbb{\alpha}}_Y(t_0)>0$ is proved to satisfy $\widetilde{\texttt{\large $\boldsymbol{\alpha}$}}_Y(t_0) = \widetilde{\mathbb{\alpha}}_Y(t_0)$ almost surely.
Therefore, by definition of $\widetilde{\texttt{\large $\boldsymbol{\alpha}$}}_{X^{(i)}}(t_0)$, for all $\epsilon>0$ there exists $\rho_0>0$ such that for all $\rho\in (0,\rho_0]$, we have almost surely
\begin{align*}
\forall s,t\in B(t_0,\rho),\quad
|X^{(i)}_t-X^{(i)}_s| \leq \|t-s\|^{\widetilde{\mathbb{\alpha}}_{X^{(i)}}(t_0)-\epsilon},
\end{align*}
and consequently, almost surely
\begin{align}\label{eqmajGaussField}
\forall s,t\in B(t_0,\rho),\quad
\|X_t-X_s\| \leq K\ \|t-s\|^{\widetilde{\mathbb{\alpha}}_{X^{(i)}}(t_0)-\epsilon},
\end{align}
for some constant $K>0$.
From (\ref{eqmajGaussField}), we deduce that $\widetilde{\texttt{\large $\boldsymbol{\alpha}$}}_X(t_0)\geq \widetilde{\mathbb{\alpha}}_{X^{(i)}}(t_0)$ almost surely.
Then Lemma \ref{lemdimHmaj} implies almost surely
\begin{align*}
\lim_{\rho\rightarrow 0}\dim_{\mathcal{H}} (\mathrm{Rg}_X(B(t_0,\rho))) \leq \lim_{\rho\rightarrow 0}\dim_{\mathcal{H}} &(\mathrm{Gr}_X(B(t_0,\rho))) \\
&\leq \min\left\{N/\widetilde{\mathbb{\alpha}}_{X^{(i)}}(t_0) ; N + d(1-\widetilde{\mathbb{\alpha}}_{X^{(i)}}(t_0))\right\}.
\end{align*}
For the uniform result in $t_0\in\mathbf{R}^N_+$, we use Theorem 3.14 of \cite{2ml}, which states that if $Y$ is a Gaussian process such that the function $t_0\mapsto\liminf_{u\rightarrow t_0}\widetilde{\mathbb{\alpha}}_Y(u)$ is positive, then with probability one,
\begin{align*}
\forall t_0\in\mathbf{R}^N_+,\quad
\liminf_{u\rightarrow t_0}\widetilde{\mathbb{\alpha}}_Y(u)
\leq \widetilde{\texttt{\large $\boldsymbol{\alpha}$}}_Y(t_0)
\leq \limsup_{u\rightarrow t_0}\widetilde{\mathbb{\alpha}}_Y(u).
\end{align*}
This inequality yields the existence of $\Omega_i\in\mathcal{F}$ for all $1\leq i\leq d$ with $\mathbf{P}(\Omega_i)=1$ such that: \\
For all $\omega\in\Omega_i$, all $t_0\in \mathcal{A}$ and all $\epsilon>0$, there exists $\rho_0>0$ such that for all $\rho\in (0,\rho_0]$,
\begin{align*}
\forall s,t\in B(t_0,\rho),\quad
|X^{(i)}_t(\omega)-X^{(i)}_s(\omega)| \leq \|t-s\|^{\liminf_{u\rightarrow t_0}\widetilde{\mathbb{\alpha}}_{X^{(i)}}(u)-\epsilon}.
\end{align*}
This yields: For all $\omega\in\bigcap_{1\leq i\leq d}\Omega_i$, all $t_0\in \mathcal{A}$ and all $\epsilon>0$, there exists $\rho_0>0$ such that for all $\rho\in (0,\rho_0]$,
\begin{align*}
\forall s,t\in B(t_0,\rho),\quad
\|X_t(\omega)-X_s(\omega)\| \leq K\ \|t-s\|^{\liminf_{u\rightarrow t_0}\widetilde{\mathbb{\alpha}}_{X^{(i)}}(u)-\epsilon},
\end{align*}
for some constant $K>0$.
With the argument of Lemma \ref{lemdimHmaj}, we deduce
\begin{align*}
\lim_{\rho\rightarrow 0}\dim_{\mathcal{H}} (\mathrm{Rg}_X(B(t_0,\rho),\omega)) &\leq \lim_{\rho\rightarrow 0}\dim_{\mathcal{H}} (\mathrm{Gr}_X(B(t_0,\rho),\omega)) \\
&\quad\leq \min\left\{N/\liminf_{u\rightarrow t_0}\widetilde{\mathbb{\alpha}}_{X^{(i)}}(u) ; N + d(1-\liminf_{u\rightarrow t_0}\widetilde{\mathbb{\alpha}}_{X^{(i)}}(u))\right\},
\end{align*}
which is the result stated.
\end{proof}
\begin{corollary}\label{cormajdimHunif1}
Let $X=\{X_t;\;t\in\mathbf{R}^N_+\}$ be a multiparameter Gaussian field in $\mathbf{R}^d$ and $\widetilde{\mathbb{\alpha}}_{X^{(i)}}(t_0)$ the deterministic local H\"older exponent of $X^{(i)}$ at $t_0\in\mathbf{R}_+^N$.\\
Assume that for some bounded interval $I\subset\mathbf{R}^N_+$,
we have
$\alpha = \inf_{t_0\in I}\widetilde{\mathbb{\alpha}}_{X^{(i)}}(t_0) >0$.
Then, with probability one,
$$\dim_{\mathcal{H}} (\mathrm{Rg}_X(I)) \leq \dim_{\mathcal{H}} (\mathrm{Gr}_X(I))\leq \min\left\{N/\alpha ; N + d(1-\alpha)\right\}.$$
\end{corollary}
\begin{proof}
With the same arguments as in the proof of Proposition \ref{propmajdimH}, we can claim that, with probability one,
$\forall t_0\in I,\ \alpha\leq\widetilde{\texttt{\large $\boldsymbol{\alpha}$}}_X(t_0).$
Then, there exists $\Omega_0\in\mathcal{F}$ with $\mathbf{P}(\Omega_0)=1$ such that:
For all $\omega\in\Omega_0$, all $t_0\in I$ and all $\epsilon>0$, there exist $\rho_0>0$ and $K>0$ such that $\forall \rho\in (0,\rho_0]$,
\begin{align*}
\forall s,t\in B(t_0,\rho),\quad
\| X_t(\omega)-X_s(\omega) \| \leq K\ \| t-s \|^{\alpha-\epsilon}.
\end{align*}
Then the continuity of $t\mapsto X_t(\omega)$ on the bounded interval $I$ allows us to deduce that, for all $\omega\in\Omega_0$ and all $\epsilon>0$, there exists a constant $K'>0$ such that
\begin{align}\label{eq:maj-holder-global}
\forall s,t\in I,\quad
\| X_t(\omega)-X_s(\omega) \| \leq K'\ \| t-s \|^{\alpha-\epsilon}.
\end{align}
If the interval $I$ is compact, we can exhibit an affine one-to-one mapping $I\rightarrow [0,1]^N$ and conclude with the arguments of Lemma \ref{lemdimHmaj}, using \cite{Yoder}, that
\begin{align*}
\dim_{\mathcal{H}} (\mathrm{Rg}_{X_{\bullet}(\omega)}(I)) \leq \dim_{\mathcal{H}} (\mathrm{Gr}_{X_{\bullet}(\omega)}(I))
\leq \min\left\{ \frac{N}{\alpha-\epsilon} ; N + d(1-\alpha+\epsilon)\right\}
\qquad\textrm{a.s.}
\end{align*}
Since this inequality holds for any $\epsilon>0$, the result follows in that case.
If $I$ is not closed, we remark that
$$\dim_{\mathcal{H}} (\mathrm{Rg}_{X_{\bullet}(\omega)}(I)) \leq \dim_{\mathcal{H}} (\mathrm{Rg}_{X_{\bullet}(\omega)}(\overline{I}))\quad\textrm{and}\quad \dim_{\mathcal{H}} (\mathrm{Gr}_{X_{\bullet}(\omega)}(I)) \leq \dim_{\mathcal{H}} (\mathrm{Gr}_{X_{\bullet}(\omega)}(\overline{I})).$$
Then, extending the inequality (\ref{eq:maj-holder-global}) to $\overline{I}$ by continuity, the result for the compact interval $\overline{I}$ is proved as before.
\end{proof}
\
\subsection{Lower bound for the Hausdorff dimension}\label{sec:low}
Frostman's Theorem constitutes the key argument to prove the lower bounds for the Hausdorff dimensions. We recall the basic notions of potential theory which are used throughout the proofs of this section.
For any Borel set $E\subseteq\mathbf{R}^d$, the $\beta$-dimensional energy of a probability measure $\mu$ on $E$ is defined by
\begin{equation*}
I_{\beta}(\mu) = \int_{E\times E} \|x-y\|^{-\beta}\ \mu(dx)\ \mu(dy).
\end{equation*}
Then, the $\beta$-dimensional Bessel-Riesz capacity of $E$ is defined as
$$ C_{\beta}(E) = \sup\left( \frac{1}{I_{\beta}(\mu)};\; \mu\textrm{ probability measure on }E \right). $$
According to Frostman's Theorem, the Hausdorff dimension of $E$ is obtained from the capacity of $E$ by the expression
\begin{align*}
\dim_{\mathcal{H}} E = \sup\left(\beta: C_{\beta}(E)>0\right)
=\inf\left(\beta: C_{\beta}(E)=0\right).
\end{align*}
Consequently, if $I_{\beta}(\mu) < +\infty$ for some probability measure (or some mass distribution) $\mu$ on $E$, then $\dim_{\mathcal{H}} E \geq \beta$.
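As a sanity check of this energy criterion, one can compare a numerical approximation of $I_\beta(\lambda)$, for $\lambda$ the Lebesgue measure on $E=[0,1]$, with the closed form $\int_0^1\!\int_0^1 |x-y|^{-\beta}\,dx\,dy = 2/((1-\beta)(2-\beta))$, which is finite precisely when $\beta<1$, so that $\dim_{\mathcal{H}}[0,1]\geq\beta$ for every $\beta<1$. A rough sketch (our own illustration; the midpoint rule and the choice $n=500$ are arbitrary):

```python
def energy_uniform(beta, n=500):
    """Midpoint-rule approximation of the beta-dimensional energy of the
    Lebesgue measure on [0,1]; the singular diagonal cells are dropped,
    which slightly underestimates the integral."""
    h = 1.0 / n
    xs = [(k + 0.5) * h for k in range(n)]  # cell midpoints
    total = 0.0
    for i in range(n):
        for j in range(n):
            if i != j:
                total += abs(xs[i] - xs[j]) ** (-beta) * h * h
    return total

approx = energy_uniform(0.5)  # closed form: 2 / (0.5 * 1.5) = 8/3
```

The approximation lands within a few percent of $8/3$ for $\beta=1/2$, and the energy grows with $\beta$, as expected since every $|x-y|<1$.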
\begin{proposition}\label{propmindimH}
Let $X=\{X_t;\;t\in\mathbf{R}^N_+\}$ be a multiparameter Gaussian field in $\mathbf{R}^d$ and $\underline{\mathbb{\alpha}}_{X^{(i)}}(t_0)$ the deterministic local sub-exponent of $X^{(i)}$ at $t_0\in\mathbf{R}_+^N$.
Then, almost surely
\begin{align*}
\lim_{\rho\rightarrow 0}\dim_{\mathcal{H}} (\mathrm{Gr}_X(B(t_0,\rho))) \geq
\left\{ \begin{array}{l l}
N/\underline{\mathbb{\alpha}}_{X^{(i)}}(t_0)
& \textrm{if }N\leq d\ \underline{\mathbb{\alpha}}_{X^{(i)}}(t_0) ; \\
N + d(1-\underline{\mathbb{\alpha}}_{X^{(i)}}(t_0))
& \textrm{if }N > d\ \underline{\mathbb{\alpha}}_{X^{(i)}}(t_0) ;
\end{array} \right.
\end{align*}
and
\begin{align*}
\lim_{\rho\rightarrow 0}\dim_{\mathcal{H}} (\mathrm{Rg}_X(B(t_0,\rho))) \geq
\left\{ \begin{array}{l l}
N/\underline{\mathbb{\alpha}}_{X^{(i)}}(t_0)
& \textrm{if }N\leq d\ \underline{\mathbb{\alpha}}_{X^{(i)}}(t_0) ; \\
d
& \textrm{if }N > d\ \underline{\mathbb{\alpha}}_{X^{(i)}}(t_0).
\end{array} \right.
\end{align*}
\end{proposition}
\begin{proof}
Following Adler's proof for the lower bound in the case of processes with stationary increments, we distinguish two cases: $N\leq d\ \underline{\mathbb{\alpha}}_{X^{(i)}}(t_0)$ and $N > d\ \underline{\mathbb{\alpha}}_{X^{(i)}}(t_0)$.
\begin{itemize}
\item Assume that $N\leq d\ \underline{\mathbb{\alpha}}_{X^{(i)}}(t_0)$.
In that case, we prove that almost surely,
\begin{align}\label{eqmindimH1}
\lim_{\rho\rightarrow 0}\dim_{\mathcal{H}} (\mathrm{Gr}_X(B(t_0,\rho)))
\geq \lim_{\rho\rightarrow 0}\dim_{\mathcal{H}} (\mathrm{Rg}_X(B(t_0,\rho)))
\geq \frac{N}{\underline{\mathbb{\alpha}}_{X^{(i)}}(t_0)}.
\end{align}
For any $\epsilon>0$, we consider any $\beta<N/(\underline{\mathbb{\alpha}}_{X^{(i)}}(t_0)+\epsilon) \leq d$ and we aim to show that the $\beta$-dimensional capacity $C_\beta(\mathrm{Rg}_X(B(t_0,\rho)))$ is positive almost surely for all $\rho>0$.
\noindent To this end, for $E=\mathrm{Rg}_X(B(t_0,\rho))=X(B(t_0,\rho))$, we consider the $\beta$-dimensional energy $I_{\beta}(\mu)$ of the mass distribution $\mu = \lambda|_{B(t_0,\rho)} \circ X^{-1}$ on $E$, where $\lambda|_{B(t_0,\rho)}$ denotes the restriction of the Lebesgue measure to $B(t_0,\rho)$. As mentioned above (see also Theorem B in \cite{Taylor}), a sufficient condition for the capacity to be positive is that, almost surely
\begin{align}\label{eqcapa}
\int_{E\times E}\|x-y\|^{-\beta}\ \mu(dx)\ \mu(dy) =
\int_{B(t_0,\rho)\times B(t_0,\rho)}\|X_t-X_s\|^{-\beta}\ ds\ dt
< +\infty.
\end{align}
Since the $X^{(i)}$ are independent and have the same distribution, we compute for all $s,t\in\mathbf{R}^N_+$,
\begin{align*}
\mathbf{E}\left[\|X_t-X_s\|^{-\beta}\right]=\frac{1}{[2\pi \sigma^2(s,t)]^{d/2}}
\int_{\mathbf{R}^d} \|x\|^{-\beta} \exp\left(-\frac{\|x\|^2}{2\ \sigma^2(s,t)}\right)\ dx,
\end{align*}
where $\sigma^2(s,t) = \mathbf{E}[| X^{(i)}_t-X^{(i)}_s |^2]$ is independent of $1\leq i\leq d$.\\
Let us consider the hyperspherical change of variables $(\mathbf{R}_+\setminus\{0\})\times\mathbf{S}^{d-1}\rightarrow\mathbf{R}^d\setminus\{0\}$ defined by $(r,u)\mapsto r\,u$, where $\mathbf{S}^{d-1}$ denotes the unit sphere of $\mathbf{R}^d$. The previous expression becomes
\begin{align*}
\mathbf{E}\left[\|X_t-X_s\|^{-\beta}\right] &= \frac{K_1}{[2\pi \sigma^2(s,t)]^{d/2}}
\int_{\mathbf{R}_+} r^{d-1-\beta} \exp\left(-\frac{r^2}{2\ \sigma^2(s,t)}\right)\ dr \\
&= K_1\ (\sigma(s,t))^{-\beta}\int_{\mathbf{R}_+} z^{d-1-\beta}\exp\left(-\frac{1}{2}z^2\right)\ dz,
\end{align*}
where $K_1$ is a positive constant, the second equality following from the change of variables $r=\sigma(s,t)\ z$.\\
Since the integral is finite when $\beta<d$, we get
\begin{equation}\label{eqCovInc_beta}
\forall s,t\in\mathbf{R}_+^N,\quad
\mathbf{E}\left[\|X_t-X_s\|^{-\beta}\right]\leq K_2\ (\sigma(s,t))^{-\beta},
\end{equation}
for some positive constant $K_2$.\\
By Tonelli's theorem and Lemma \ref{lemcovinc}, this inequality implies the existence of $\rho_0>0$ such that for all $\rho\in (0,\rho_0]$,
\begin{align*}
\mathbf{E}\left[\int_{B(t_0,\rho)\times B(t_0,\rho)}\|X_t-X_s\|^{-\beta}\ dt\ ds\right]& \\
\leq \int_{B(t_0,\rho)\times B(t_0,\rho)}K_2\ &\|t-s\|^{-\beta(\underline{\mathbb{\alpha}}_{X^{(i)}}(t_0) + \epsilon)}\ dt\ ds < +\infty
\end{align*}
because $\beta(\underline{\mathbb{\alpha}}_{X^{(i)}}(t_0) + \epsilon)< N$.
Thus (\ref{eqcapa}) holds and for all $\rho\in (0,\rho_0]$,
\begin{align*}
\dim_{\mathcal{H}} (\mathrm{Rg}_X(B(t_0,\rho)))
\geq \frac{N}{\underline{\mathbb{\alpha}}_{X^{(i)}}(t_0)+\epsilon} \qquad\textrm{a.s.}
\end{align*}
Taking $\rho,\epsilon\in\mathbf{Q}_+$, this yields
\begin{align*}
\lim_{\rho\rightarrow 0}\dim_{\mathcal{H}} (\mathrm{Rg}_X(B(t_0,\rho)))
\geq \frac{N}{\underline{\mathbb{\alpha}}_{X^{(i)}}(t_0)} \qquad\textrm{a.s.},
\end{align*}
which proves (\ref{eqmindimH1}).
\
\item Assume $N>d\ \underline{\mathbb{\alpha}}_{X^{(i)}}(t_0)$.
We use the previous method to prove that almost surely
\begin{equation}\label{eqmindimHrg2}
\lim_{\rho\rightarrow 0}\dim_{\mathcal{H}} (\mathrm{Rg}_X(B(t_0,\rho)))\geq d.
\end{equation}
For any $\epsilon>0$ such that $d<N/(\underline{\mathbb{\alpha}}_{X^{(i)}}(t_0)+\epsilon)$, consider any real $\beta$ such that $\beta<d$.
As previously, we show that equation (\ref{eqcapa}) holds, which implies that the $\beta$-dimensional capacity $C_\beta(\mathrm{Rg}_X(B(t_0,\rho)))$ is positive almost surely for all $\rho>0$.
\noindent Since $\beta<d$, equation (\ref{eqCovInc_beta}) still holds.
As in the previous case, the inequality $\beta(\underline{\mathbb{\alpha}}_{X^{(i)}}(t_0) + \epsilon)< N$ implies (\ref{eqcapa}) for $\rho$ small enough and then
\begin{align*}
\dim_{\mathcal{H}} (\mathrm{Rg}_X(B(t_0,\rho))) \geq d \qquad\textrm{a.s.}
\end{align*}
Taking $\rho\in\mathbf{Q}_+$, the inequality (\ref{eqmindimHrg2}) follows.
\
\item Assume again that $N>d\ \underline{\mathbb{\alpha}}_{X^{(i)}}(t_0)$.
To prove the lower bound for the Hausdorff dimension of the graph,
\begin{equation}\label{eqmindimH2}
\lim_{\rho\rightarrow 0}\dim_{\mathcal{H}} (\mathrm{Gr}_X(B(t_0,\rho)))\geq N + d(1-\underline{\mathbb{\alpha}}_{X^{(i)}}(t_0)) \qquad\textrm{a.s.},
\end{equation}
we use the same potential-theoretic arguments as for the range.
\noindent For any $\epsilon>0$, consider any real $\beta$ such that $d<\beta<N+d(1-\underline{\mathbb{\alpha}}_{X^{(i)}}(t_0)-\epsilon)$.
In order to prove that the $\beta$-dimensional capacity $C_\beta(\mathrm{Gr}_X(B(t_0,\rho)))$ is positive almost surely for all $\rho>0$, it is sufficient to show that
\begin{equation}\label{eqcapa2}
\int_{B(t_0,\rho)\times B(t_0,\rho)} \|(t,X_t)-(s,X_s)\|^{-\beta}\ ds\ dt<+\infty
\qquad\textrm{a.s.}
\end{equation}
\noindent
Since the components $X^{(i)}$ ($1\leq i\leq d$) of $X$ are i.i.d., we compute
\begin{align*}
&\mathbf{E}\left[(\|X_t-X_s\|^2+\|t-s\|^2)^{-\beta/2}\right] \\
&\qquad =\frac{1}{[2\pi \sigma^2(s,t)]^{d/2}}
\int_{\mathbf{R}^d}\left(\|x\|^2+\|t-s\|^2\right)^{-\beta/2}
\exp\left(-\frac{\|x\|^2}{2\ \sigma^2(s,t)}\right)\ dx.
\end{align*}
As in the previous case, by using the hyperspherical change of variables $(r,u)\in\mathbf{R}_+\times\mathbf{S}^{d-1}$ and then $r=\sigma(s,t)\ z$, we get
\begin{align*}
&\mathbf{E}\left[(\|X_t-X_s\|^2 + \|t-s\|^2)^{-\beta/2}\right] \\
&\qquad = K_3 \int_{\mathbf{R}_+} \left(z^2\sigma^2(s,t)+\|t-s\|^2\right)^{-\beta/2}\ z^{d-1}\ e^{-\frac{1}{2}z^2}\ dz \\
&\qquad = K_3\ \sigma(s,t)^{-\beta} \int_{\mathbf{R}_+} \left(z^2+\frac{\|t-s\|^2}{\sigma^2(s,t)}\right)^{-\beta/2}\ z^{d-1}\ e^{-\frac{1}{2}z^2}\ dz,
\end{align*}
where $K_3$ is a positive constant. Then, since $\beta>d$, the following inequality holds
\begin{align*}
&\mathbf{E}\left[(\|X_t-X_s\|^2 + \|t-s\|^2)^{-\beta/2}\right] \\
&\qquad \leq \frac{2^{-\beta/2}\ K_3}{\sigma(s,t)^{\beta}} \left[ \int_0^\frac{\|t-s\|}{\sigma(s,t)}
\left( \frac{\|t-s\|}{\sigma(s,t)} \right)^{-\beta} z^{d-1}\ dz
+ \int_\frac{\|t-s\|}{\sigma(s,t)}^\infty z^{d-1-\beta}\ dz \right] \\
&\qquad \leq \frac{K_4}{\sigma(s,t)^{\beta}}\ \left( \frac{\|t-s\|}{\sigma(s,t)} \right)^{d-\beta}
\leq K_4\ \frac{\|t-s\|^{d-\beta}}{\sigma(s,t)^d}.
\end{align*}
\noindent By Tonelli's Theorem and Lemma \ref{lemcovinc}, this inequality implies the existence of $\rho_0>0$ such that for all $\rho\in (0,\rho_0]$,
\begin{align*}
&\mathbf{E}\left[ \int_{B(t_0,\rho)\times B(t_0,\rho)} \|(t,X_t)-(s,X_s)\|^{-\beta}\ dt\ ds \right]\\
&\qquad\qquad\leq \int_{B(t_0,\rho)\times B(t_0,\rho)} K_4\
\frac{\|t-s\|^{d-\beta}}{\sigma(s,t)^d}\ ds\ dt \\
&\qquad\qquad\leq \int_{B(t_0,\rho)\times B(t_0,\rho)} K_4\
\|t-s\|^{-\beta+d(1-\underline{\mathbb{\alpha}}_{X^{(i)}}(t_0)-\epsilon)}\ ds\ dt
<+\infty,
\end{align*}
because $\beta<N+d(1-\underline{\mathbb{\alpha}}_{X^{(i)}}(t_0)-\epsilon)$.
Thus (\ref{eqcapa2}) holds and for all $\rho\in (0,\rho_0]$,
\begin{equation*}
\dim_{\mathcal{H}} (\mathrm{Gr}_X(B(t_0,\rho))) \geq N+d(1-\underline{\mathbb{\alpha}}_{X^{(i)}}(t_0)-\epsilon)
\qquad\textrm{a.s.}
\end{equation*}
Taking $\rho,\epsilon\in\mathbf{Q}_+$, this yields
\begin{equation*}
\lim_{\rho\rightarrow 0}\dim_{\mathcal{H}} (\mathrm{Gr}_X(B(t_0,\rho))) \geq
N+d(1-\underline{\mathbb{\alpha}}_{X^{(i)}}(t_0)) \qquad\textrm{a.s.},
\end{equation*}
which proves (\ref{eqmindimH2}).
\end{itemize}
\end{proof}
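The negative-moment bound (\ref{eqCovInc_beta}) driving the proof admits the closed form $\mathbf{E}[\|X\|^{-\beta}]=\sigma^{-\beta}\,2^{-\beta/2}\,\Gamma((d-\beta)/2)/\Gamma(d/2)$ for a centered Gaussian vector $X\sim\mathcal{N}(0,\sigma^2 I_d)$ and $\beta<d$. The following Monte Carlo sanity check (illustrative only; the sample size, seed and tolerance are arbitrary choices, not part of the proof) confirms the $\sigma^{-\beta}$ scaling:

```python
import math
import random

def mc_neg_moment(d, beta, sigma, n=100_000, seed=0):
    """Monte Carlo estimate of E[||X||^{-beta}] for X ~ N(0, sigma^2 I_d)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        r2 = sum(rng.gauss(0.0, sigma) ** 2 for _ in range(d))
        total += r2 ** (-beta / 2)
    return total / n

def exact_neg_moment(d, beta, sigma):
    """Closed form: sigma^{-beta} 2^{-beta/2} Gamma((d-beta)/2) / Gamma(d/2),
    valid for beta < d."""
    return sigma ** (-beta) * 2 ** (-beta / 2) \
        * math.gamma((d - beta) / 2) / math.gamma(d / 2)

d, beta = 3, 1.0
for sigma in (0.5, 2.0):
    est, ref = mc_neg_moment(d, beta, sigma), exact_neg_moment(d, beta, sigma)
    assert abs(est - ref) < 0.05 * ref  # agrees with the sigma^{-beta} scaling
```

The estimate matches the closed form to within a few percent, which is the only property the proof uses: the negative moment is a constant multiple of $\sigma(s,t)^{-\beta}$.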
We now investigate uniform extensions of Proposition \ref{propmindimH}.
\begin{corollary}\label{cormindimHunif1}
Let $X=\{X_t;\;t\in\mathbf{R}^N_+\}$ be a multiparameter Gaussian field in $\mathbf{R}^d$ and $\underline{\mathbb{\alpha}}_{X^{(i)}}(t)$ the deterministic local sub-exponent of $X^{(i)}$ at any $t\in\mathbf{R}_+^N$.
Assume that for some open subset $I \subset\mathbf{R}^N_+$, we have
$\underline{\alpha} = \inf_{t\in I} \underline{\mathbb{\alpha}}_{X^{(i)}}(t) > 0$.
Then, with probability one,
\begin{align*}
\dim_{\mathcal{H}} (\mathrm{Gr}_X(I)) \geq
\left\{ \begin{array}{l l}
N/\underline{\alpha} & \textrm{if } N \leq d\ \underline{\alpha} ; \\
N + d(1-\underline{\alpha}) & \textrm{if } N > d\ \underline{\alpha} ;
\end{array} \right.
\end{align*}
and
\begin{align*}
\dim_{\mathcal{H}} (\mathrm{Rg}_X(I)) \geq
\left\{ \begin{array}{l l}
N/\underline{\alpha} & \textrm{if } N \leq d\ \underline{\alpha} ; \\
d & \textrm{if } N > d\ \underline{\alpha}.
\end{array} \right.
\end{align*}
\end{corollary}
\begin{proof}
For any open subset $I\subset\mathbf{R}^N_+$, we first prove that for all $\omega$, the Hausdorff dimension of the graph of $X_{\bullet}(\omega):t\mapsto X_t(\omega)$ satisfies
\begin{align}\label{ineqdimHboules}
\dim_{\mathcal{H}} (\mathrm{Gr}_{X_{\bullet}(\omega)}(I)) \geq \sup_{t_0\in I} \lim_{\rho\rightarrow 0}\dim_{\mathcal{H}} (\mathrm{Gr}_{X_{\bullet}(\omega)}(B(t_0,\rho))).
\end{align}
Since $I$ is an open subset of $\mathbf{R}^N_+$, for all $t_0\in I$, there exists $\rho>0$ such that $B(t_0,\rho)\subset I$.
This leads to $\dim_{\mathcal{H}} (\mathrm{Gr}_{X_{\bullet}(\omega)}(B(t_0,\rho))) \leq \dim_{\mathcal{H}} (\mathrm{Gr}_{X_{\bullet}(\omega)}(I))$ and then
$$ \dim_{\mathcal{H}} (\mathrm{Gr}_{X_{\bullet}(\omega)}(I)) \geq \lim_{\rho\rightarrow 0}\dim_{\mathcal{H}} (\mathrm{Gr}_{X_{\bullet}(\omega)}(B(t_0,\rho))), $$
since $\rho\mapsto \dim_{\mathcal{H}} (\mathrm{Gr}_{X_{\bullet}(\omega)}(B(t_0,\rho)))$ is nondecreasing, so that its limit as $\rho\rightarrow 0$ exists and is bounded above by its value at any fixed $\rho>0$.
Then (\ref{ineqdimHboules}) follows.
In the same way, we prove that for all $\omega$,
\begin{align}\label{ineqdimHboulesRg}
\dim_{\mathcal{H}} (\mathrm{Rg}_{X_{\bullet}(\omega)}(I)) \geq \sup_{t_0\in I} \lim_{\rho\rightarrow 0}\dim_{\mathcal{H}} (\mathrm{Rg}_{X_{\bullet}(\omega)}(B(t_0,\rho))).
\end{align}
Following the proof of Proposition \ref{propmindimH}, we distinguish the two cases: $N \leq d\ \underline{\alpha}$ and $N > d\ \underline{\alpha}$ with $\underline{\alpha} = \inf_{t\in I} \underline{\mathbb{\alpha}}_{X^{(i)}}(t)$.
\begin{itemize}
\item Assume that $N \leq d\ \underline{\alpha}$. In that case, for all $t_0\in I$, we have $N \leq d\ \underline{\mathbb{\alpha}}_{X^{(i)}}(t_0)$.
Equations (\ref{eqmindimH1}), (\ref{ineqdimHboules}) and (\ref{ineqdimHboulesRg}) imply almost surely
\begin{equation*}
\dim_{\mathcal{H}} (\mathrm{Gr}_{X_{\bullet}(\omega)}(I)) \geq \dim_{\mathcal{H}} (\mathrm{Rg}_{X_{\bullet}(\omega)}(I))
\geq \frac{N}{\underline{\alpha}}.
\end{equation*}
\item Assume that $N > d\ \underline{\alpha}$. By definition of the infimum $\underline{\alpha}$, for all $\epsilon>0$ with $N>d\ (\underline{\alpha} + \epsilon)$, there exists $t_0\in I$ such that
$$ \underline{\alpha} \leq \underline{\mathbb{\alpha}}_{X^{(i)}}(t_0) < \underline{\alpha} + \epsilon. $$
Then, we have $N > d\ \underline{\mathbb{\alpha}}_{X^{(i)}}(t_0)$. In the proof of Proposition \ref{propmindimH}, we proved that this implies almost surely
\begin{equation*}
\lim_{\rho\rightarrow 0}\dim_{\mathcal{H}} (\mathrm{Rg}_X(B(t_0,\rho)))\geq d
\end{equation*}
and
\begin{align*}
\lim_{\rho\rightarrow 0}\dim_{\mathcal{H}} (\mathrm{Gr}_X(B(t_0,\rho)))
&\geq N + d(1-\underline{\mathbb{\alpha}}_{X^{(i)}}(t_0)) \\
&\geq N + d(1-\underline{\alpha} - \epsilon)
\end{align*}
for all $\epsilon \in\mathbf{Q}_+$ with $N>d\ (\underline{\alpha} + \epsilon)$.
Then almost surely,
\begin{equation*}
\sup_{t_0\in I}\lim_{\rho\rightarrow 0}\dim_{\mathcal{H}} (\mathrm{Rg}_X(B(t_0,\rho)))\geq d
\end{equation*}
and
\begin{align*}
\sup_{t_0\in I}\lim_{\rho\rightarrow 0}\dim_{\mathcal{H}} (\mathrm{Gr}_X(B(t_0,\rho)))
\geq N + d(1-\underline{\alpha}).
\end{align*}
\end{itemize}
\end{proof}
\begin{corollary}\label{cormindimHunif2}
Let $X=\{X_t;\;t\in\mathbf{R}^N_+\}$ be a multiparameter Gaussian field in $\mathbf{R}^d$ and $\underline{\mathbb{\alpha}}_{X^{(i)}}(t)$ the deterministic local sub-exponent of $X^{(i)}$ at any $t\in\mathbf{R}_+^N$.\\
Set $\underline{\mathcal{A}} = \{ t\in\mathbf{R}_+^N: \liminf_{u\rightarrow t}\underline{\mathbb{\alpha}}_{X^{(i)}}(u)>0\}$.
Then, with probability one, for all $t_0\in\underline{\mathcal{A}}$,
\begin{align*}
\lim_{\rho\rightarrow 0}\dim_{\mathcal{H}} (\mathrm{Gr}_X(B(t_0,\rho))) \geq
\left\{ \begin{array}{l l}
\displaystyle
N/\liminf_{t\rightarrow t_0}\underline{\mathbb{\alpha}}_{X^{(i)}}(t)
& \displaystyle\textrm{if }N\leq d\ \liminf_{t\rightarrow t_0}\underline{\mathbb{\alpha}}_{X^{(i)}}(t);\\
\displaystyle
N + d\left(1-\liminf_{t\rightarrow t_0}\underline{\mathbb{\alpha}}_{X^{(i)}}(t) \right)
& \displaystyle\textrm{if }N>d\ \liminf_{t\rightarrow t_0}\underline{\mathbb{\alpha}}_{X^{(i)}}(t) ;
\end{array} \right.
\end{align*}
and
\begin{align*}
\lim_{\rho\rightarrow 0}\dim_{\mathcal{H}} (\mathrm{Rg}_X(B(t_0,\rho))) \geq
\left\{ \begin{array}{l l}
\displaystyle
N/\liminf_{t\rightarrow t_0}\underline{\mathbb{\alpha}}_{X^{(i)}}(t)
& \displaystyle\textrm{if } N \leq d\ \liminf_{t\rightarrow t_0} \underline{\mathbb{\alpha}}_{X^{(i)}}(t) ; \\
d & \displaystyle\textrm{if } N>d\ \liminf_{t\rightarrow t_0} \underline{\mathbb{\alpha}}_{X^{(i)}}(t).
\end{array} \right.
\end{align*}
\end{corollary}
\begin{proof}
Corollary \ref{cormindimHunif1} implies the existence of $\Omega^*\in\mathcal{F}$ with $\mathbf{P}(\Omega^*)=1$ such that: For all $\omega\in\Omega^*$ and all $a,b\in\mathbf{Q}^N_+$ with $a\prec b$, such that $\underline{\alpha} = \inf_{t\in (a,b)} \underline{\mathbb{\alpha}}_{X^{(i)}}(t) > 0$, we have
$\dim_{\mathcal{H}} (\mathrm{Gr}_{X_{\bullet}(\omega)}((a,b)))
\geq N/\underline{\alpha}$ if $N\leq d\ \underline{\alpha}$ and $\geq N + d(1-\underline{\alpha})$ if $N> d\ \underline{\alpha}$ and
$\dim_{\mathcal{H}} (\mathrm{Rg}_{X_{\bullet}(\omega)}((a,b)))
\geq N/\underline{\alpha}$ if $N\leq d\ \underline{\alpha}$ and $\geq d$ if $N> d\ \underline{\alpha}$.
Therefore, taking two sequences $(a_n)_{n\in\mathbf{N}}$ and $(b_n)_{n\in\mathbf{N}}$ in $\mathbf{Q}^N_+$ with $a_n\prec t_0\prec b_n$ for all $n\in\mathbf{N}$, both converging to $t_0$, we get
\begin{align*}
\lim_{n\rightarrow\infty}\dim_{\mathcal{H}} (\mathrm{Gr}_{X_{\bullet}(\omega)}((a_n,b_n))) \geq
\left\{ \begin{array}{l l}
\displaystyle
N/\liminf_{t\rightarrow t_0}\underline{\mathbb{\alpha}}_{X^{(i)}}(t)
& \displaystyle\textrm{if }N\leq d\ \liminf_{t\rightarrow t_0}\underline{\mathbb{\alpha}}_{X^{(i)}}(t);\\
\displaystyle
N + d(1-\liminf_{t\rightarrow t_0}\underline{\mathbb{\alpha}}_{X^{(i)}}(t) )
& \displaystyle\textrm{if }N>d\ \liminf_{t\rightarrow t_0}\underline{\mathbb{\alpha}}_{X^{(i)}}(t) ;
\end{array} \right.
\end{align*}
and
\begin{align*}
\lim_{n\rightarrow\infty}\dim_{\mathcal{H}} (\mathrm{Rg}_{X_{\bullet}(\omega)}((a_n,b_n))) \geq
\left\{ \begin{array}{l l}
\displaystyle
N/\liminf_{t\rightarrow t_0}\underline{\mathbb{\alpha}}_{X^{(i)}}(t)
& \displaystyle\textrm{if } N \leq d\ \liminf_{t\rightarrow t_0}\underline{\mathbb{\alpha}}_{X^{(i)}}(t) ; \\
d & \displaystyle\textrm{if } N>d\ \liminf_{t\rightarrow t_0}\underline{\mathbb{\alpha}}_{X^{(i)}}(t).
\end{array} \right.
\end{align*}
By monotonicity of the Hausdorff dimension, the result follows.
\end{proof}
\section{Applications}\label{sec:app}
In this section, we apply the main results to Gaussian processes whose fine regularity is not completely known: the multiparameter fractional Brownian motion, the multifractional Brownian motion whose regularity function is less regular than the process itself, and the generalized Weierstrass function.
\subsection{Multiparameter fractional Brownian motion}\label{sec:mpfbm}
The multiparameter fractional Brownian motion (MpfBm) $\mathbf{B}^H=\{ \mathbf{B}^H_t;\; t\in\mathbf{R}_+^N\}$ of index $H\in (0,1/2]$ is defined as a particular case of set-indexed fractional Brownian motion (see \cite{sifBm, MpfBm}), where the indexing collection is $\mathcal{A}=\{ [0,t];\; t\in\mathbf{R}_+^N\} \cup\{\emptyset\}$.
It is characterized as a real-valued mean-zero Gaussian process with covariance function
\begin{align*}
\forall s,t\in\mathbf{R}_+^N,\quad
\mathbf{E}[\mathbf{B}^H_s \mathbf{B}^H_t] = \frac{1}{2} \left[ m([0,s])^{2H} + m([0,t])^{2H} - m([0,s]\bigtriangleup [0,t])^{2H} \right],
\end{align*}
where $m$ denotes a Radon measure on $\mathbf{R}^N_+$.
In the specific case where $N=2$ and $m$ is the Lebesgue measure of $\mathbf{R}^2_+$, the covariance structure of the MpfBm is
\begin{align*}
\forall s,t\in\mathbf{R}_+^2,\quad
\mathbf{E}[\mathbf{B}^H_s \mathbf{B}^H_t] = \frac{1}{2} \left[ (s_1s_2)^{2H}+(t_1t_2)^{2H}-(s_1s_2+t_1t_2-2(s_1\wedge t_1)(s_2\wedge t_2))^{2H} \right].
\end{align*}
Then, its incremental variance is
\begin{align}\label{eqcovincMpfBm}
\forall s,t\in \mathbf{R}_+^2,\quad
\mathbf{E}\left[|\mathbf{B}^H_t-\mathbf{B}^H_s|^2\right]=(s_1 s_2 + t_1 t_2 - 2 (s_1\wedge t_1)(s_2\wedge t_2))^{2H}.
\end{align}
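The algebra linking the covariance to the incremental variance, $\mathbf{E}[|\mathbf{B}^H_t-\mathbf{B}^H_s|^2]=\mathbf{E}[(\mathbf{B}^H_t)^2]+\mathbf{E}[(\mathbf{B}^H_s)^2]-2\,\mathbf{E}[\mathbf{B}^H_s\mathbf{B}^H_t]$, can be verified mechanically. A minimal numerical check (hypothetical helper names; $N=2$ and $m$ the Lebesgue measure, as above) confirms that it reduces to (\ref{eqcovincMpfBm}):

```python
import random

def cov(s, t, H):
    """Covariance of the N = 2 MpfBm with m = Lebesgue measure."""
    s1, s2 = s
    t1, t2 = t
    return 0.5 * ((s1 * s2) ** (2 * H) + (t1 * t2) ** (2 * H)
                  - (s1 * s2 + t1 * t2 - 2 * min(s1, t1) * min(s2, t2)) ** (2 * H))

def inc_var(s, t, H):
    """E[|B_t - B_s|^2] = cov(t,t) + cov(s,s) - 2 cov(s,t)."""
    return cov(t, t, H) + cov(s, s, H) - 2 * cov(s, t, H)

rng = random.Random(1)
H = 0.3
for _ in range(100):
    s = (rng.uniform(0.1, 2), rng.uniform(0.1, 2))
    t = (rng.uniform(0.1, 2), rng.uniform(0.1, 2))
    # target = m([0,s] symdiff [0,t])^{2H}, the incremental variance above
    target = (s[0]*s[1] + t[0]*t[1] - 2*min(s[0], t[0])*min(s[1], t[1])) ** (2*H)
    assert abs(inc_var(s, t, H) - target) < 1e-9
```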
The stationarity of the increments of the multiparameter fractional Brownian motion is studied in \cite{MpfBm}. Among the various definitions of stationarity for a multiparameter process, the MpfBm does not satisfy the increment stationarity assumption of \cite{Adler77}. Indeed, (\ref{eqcovincMpfBm}) shows that $\mathbf{E}\left[|\mathbf{B}^H_t-\mathbf{B}^H_s|^2\right]$ does not depend on $t-s$ only.
Since the Hausdorff dimension of its graph does not come directly from \cite{Adler77},
we use the generic results of Section \ref{sec:main}.
\begin{lemma}\label{lemdist}
If $m$ is the Lebesgue measure of $\mathbf{R}^N$, then for any $a\prec b$ in
$\mathbf{R}^N_{+}\setminus\left\{0\right\}$, there exist two positive constants
$m_{a,b}$ and $M_{a,b}$ such that
\begin{equation*}
\forall s,t\in [a,b];\quad
m_{a,b}\ d_1(s,t)\leq m([0,s]\bigtriangleup [0,t])
\leq M_{a,b}\ d_{\infty}(s,t)
\end{equation*}
where $d_1$ and $d_{\infty}$ are the usual distances of $\mathbf{R}^N$ defined by
\begin{align*}
d_1:(s,t)&\mapsto\|t-s\|_1=\sum_{i=1}^N |t_i-s_i| \\
d_{\infty}:(s,t)&\mapsto\|t-s\|_{\infty}=\max_{1\leq i\leq N} |t_i-s_i|.
\end{align*}
\end{lemma}
\begin{proof}
For all $s,t\in [a,b]$, we write
\begin{align*}
[0,s] \bigtriangleup [0,t]=\left([0,s]\setminus [0,t]\right) \cup
\left([0,t]\setminus [0,s]\right).
\end{align*}
Let $I\subset\left\{1,\dots,N\right\}$ denote the set of indices $i$ such that $s_i>t_i$,
so that $s_i\leq t_i$ for all $i\in\left\{1,\dots,N\right\}\setminus I$.
For any subset $J$ of $\{1,\dots,N\}$, we denote by $\prod_{i\in J}[0,s_i]$ the Cartesian product of the intervals $[0,s_i]$ for $i\in J$.
\noindent We have
\begin{align*}
[0,s] &= \prod_{i\notin I}[0,s_i] \times \prod_{i\in I}\left([0,t_i]\cup [t_i,s_i]\right) \\
&= \left( \prod_{i\notin I}[0,s_i] \times \prod_{i\in I}[0,t_i] \right) \cup
\bigcup_{J\subsetneq I}\left( \prod_{i\notin I}[0,s_i] \times \prod_{i\in J}[0,t_i]
\times \prod_{i\in I\setminus J}[t_i,s_i] \right),
\end{align*}
and then
\begin{align*}
[0,s]\setminus [0,t] &= \bigcup_{J\subsetneq I}\left( \prod_{i\notin I}[0,s_i] \times \prod_{i\in J}[0,t_i] \times \prod_{i\in I\setminus J}[t_i,s_i] \right) \\
&= \left\{ x\in [0,s]:\;\exists i\in I;\; t_i<x_i\leq s_i \right\}.
\end{align*}
We deduce
\begin{align*}
m([0,s]\setminus [0,t]) = \prod_{i\notin I} |s_i|\ \sum_{J\subsetneq I} \left( \prod_{i\in J} |t_i|
\prod_{i\in I\setminus J} |t_i-s_i| \right).
\end{align*}
In the same way, we get
\begin{align*}
m([0,t]\setminus [0,s]) = \prod_{i\in I} |t_i|\ \sum_{J\subsetneq I^c} \left(
\prod_{i\in J} |s_i| \prod_{i\in I^c\setminus J} |t_i-s_i| \right).
\end{align*}
\noindent
For all $1\leq i\leq N$, we have $|a| \leq |s_i| \leq |b|$ and $|a| \leq |t_i| \leq |b|$.
Then,
\begin{align*}
m([0,s] &\bigtriangleup [0,t]) \\
&\leq |b|^{\# I^c} \sum_{J\subsetneq I} |b|^{\# J}
d_{\infty}(s,t)^{\#(I\setminus J)} + |b|^{\# I} \sum_{J\subsetneq I^c} |b|^{\# J}
d_{\infty}(s,t)^{\#(I^c\setminus J)} \\
&\leq d_{\infty}(s,t)\ \underbrace{\left[ |b|^{\# I^c} \sum_{J\subsetneq I} |b|^{\# J}
d_{\infty}(s,t)^{\#(I\setminus J)-1} + |b|^{\# I} \sum_{J\subsetneq I^c} |b|^{\# J}
d_{\infty}(s,t)^{\#(I^c\setminus J)-1}\right]}_{\textrm{bounded in }[a,b]}\\
&\leq M_{a,b}\ d_{\infty}(s,t).
\end{align*}
For the lower bound, we write
\begin{align*}
m([0,s] \bigtriangleup [0,t])
\geq |a|^{\# I^c} \sum_{J\subsetneq I} |a|^{\# J}
\prod_{i\in I\setminus J} |t_i-s_i| + |a|^{\# I} \sum_{J\subsetneq I^c} |a|^{\# J}
\prod_{i\in I^c\setminus J} |t_i-s_i|.
\end{align*}
Let $m_a$ be the minimum of $|a|^k$ for $1\leq k\leq N$. We get
\begin{align}\label{eqminda}
m([0,s] \bigtriangleup [0,t])
\geq m_a^2 \sum_{J\subsetneq I} \prod_{i\in I\setminus J} |t_i-s_i|
+ m_a^2 \sum_{J\subsetneq I^c} \prod_{i\in I^c\setminus J} |t_i-s_i|.
\end{align}
\noindent Let us remark that
\begin{align*}
\sum_{J\subsetneq I} \prod_{i\in I\setminus J} |t_i-s_i|
= \prod_{i\in I} \left(1+|t_i-s_i|\right) - 1.
\end{align*}
Using the expansion
\begin{align*}
\log\prod_{i\in I} \left(1+|t_i-s_i|\right) = \sum_{i\in I}\log\left(1+|t_i-s_i|\right)
=\sum_{i\in I} |t_i-s_i| + o(|t_i-s_i|^2),
\end{align*}
which implies
\begin{align*}
\prod_{i\in I} \left(1+|t_i-s_i|\right) = 1+\sum_{i\in I} |t_i-s_i| + o(|t_i-s_i|^2),
\end{align*}
the inequality (\ref{eqminda}) becomes
\begin{align*}
m([0,s] \bigtriangleup [0,t])
\geq m_a^2 \sum_{1\leq i\leq N} |t_i-s_i| + o(\|t-s\|_{\infty}).
\end{align*}
The result follows.
\end{proof}
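The two-sided bound of Lemma \ref{lemdist} can be probed numerically for $N=2$, where $m([0,s]\bigtriangleup[0,t])=s_1s_2+t_1t_2-2(s_1\wedge t_1)(s_2\wedge t_2)$. The constants below ($m_{a,b}=1$, $M_{a,b}=8$) are ad hoc choices that happen to work for the particular box $[1,2]^2$, not the sharp constants from the proof:

```python
import random

def sym_diff_area(s, t):
    """Lebesgue measure of [0,s] symmetric-difference [0,t] for s, t in R_+^2."""
    return s[0]*s[1] + t[0]*t[1] - 2*min(s[0], t[0])*min(s[1], t[1])

rng = random.Random(2)
m_ab, M_ab = 1.0, 8.0   # ad hoc constants for the box [1,2]^2
for _ in range(1000):
    s = (rng.uniform(1, 2), rng.uniform(1, 2))
    t = (rng.uniform(1, 2), rng.uniform(1, 2))
    d1 = abs(t[0]-s[0]) + abs(t[1]-s[1])          # l^1 distance
    dinf = max(abs(t[0]-s[0]), abs(t[1]-s[1]))    # l^infinity distance
    area = sym_diff_area(s, t)
    # sandwich of the lemma: m_ab * d_1 <= m(sym diff) <= M_ab * d_infinity
    assert m_ab * d1 - 1e-12 <= area <= M_ab * dinf + 1e-12
```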
\begin{lemma}\label{lemMpfBmexp}
Let $\mathbf{B}^H=\{ \mathbf{B}^H_t;\; t\in\mathbf{R}_+^N\}$ be a multiparameter fractional Brownian motion with index $H\in (0, 1/2]$. The deterministic local H\"older exponent and the deterministic local sub-exponent of $\mathbf{B}^H$ at any $t_0\in\mathbf{R}_+^N$ are given by
$\widetilde{\mathbb{\alpha}}_X(t_0)=\underline{\mathbb{\alpha}}_X(t_0)=H$.
\end{lemma}
\begin{proof}
We prove that $\widetilde{\mathbb{\alpha}}_X(t_0)\geq H$ and $\underline{\mathbb{\alpha}}_X(t_0)\leq H$.
The result will follow from $\widetilde{\mathbb{\alpha}}_X(t_0)\leq\underline{\mathbb{\alpha}}_X(t_0)$.
Since for all $s,t\in \mathbf{R}^N_+$,
\begin{align*}
\frac{\mathbf{E}\left[|\mathbf{B}^H_t - \mathbf{B}^H_s|^2\right]}{\|t-s\|^{2H}} =
\left( \frac{m([0,s] \bigtriangleup [0,t])}{d_2(s,t)} \right)^{2H},
\end{align*}
Lemma \ref{lemdist} implies that for all $s,t$ in any interval $[a,b]$,
\begin{align}\label{ineqMpfBm}
M_1 \left( \frac{d_1(s,t)}{d_2(s,t)} \right)^{2H} \leq
\frac{\mathbf{E}\left[|\mathbf{B}^H_t - \mathbf{B}^H_s|^2\right]}{\|t-s\|^{2H}}
\leq M_2 \left( \frac{d_{\infty}(s,t)}{d_2(s,t)} \right)^{2H},
\end{align}
for some positive constants $M_1$ and $M_2$.
\noindent Since the distances $d_1$, $d_2$ and $d_{\infty}$ are equivalent, the inequality (\ref{ineqMpfBm}) implies that the quantity $\mathbf{E}\left[|\mathbf{B}^H_t - \mathbf{B}^H_s|^2\right]/\|t-s\|^{2H}$ is bounded above and below by positive constants on any interval $[a,b]$.
Consequently, for all $t_0\in\mathbf{R}^N_+$, $\widetilde{\mathbb{\alpha}}_X(t_0)\geq H$ and
$\underline{\mathbb{\alpha}}_X(t_0)\leq H$, by definition of the deterministic local H\"older exponent and the deterministic local sub-exponent.
\end{proof}
\
A direct consequence of Lemma \ref{lemMpfBmexp} is the local regularity of the sample paths of the multiparameter fractional Brownian motion. In \cite{2ml}, Corollary 3.15 states that for any Gaussian process $X$ such that the function $t\mapsto\widetilde{\mathbb{\alpha}}_X(t)$ is continuous and positive, the local H\"older exponents satisfy, with probability one, $\widetilde{\texttt{\large $\boldsymbol{\alpha}$}}_X(t)=\widetilde{\mathbb{\alpha}}_X(t)$ for all $t\in\mathbf{R}_+^N$. Since the deterministic local H\"older exponents of the MpfBm are constant and positive, the following result follows directly.
\begin{corollary}
The local H\"older exponent of the multiparameter fractional Brownian motion $\mathbf{B}^H=\{ \mathbf{B}^H_t;\; t\in\mathbf{R}_+^N\}$ (with $0<H\leq 1/2$) satisfies, with probability one, $\widetilde{\texttt{\large $\boldsymbol{\alpha}$}}_{\mathbf{B}^H}(t_0)=H$ for all $t_0\in\mathbf{R}^N_+$.
\end{corollary}
As an application of Theorem \ref{thmaincompact}, the constant local regularity of the multiparameter fractional Brownian motion yields sharp results about the Hausdorff dimensions of its graph and its range.
\begin{proposition}\label{prop:mpfbm}
Let $X=\{ X_t;\; t\in\mathbf{R}_+^N\}$ be a multiparameter fractional Brownian field with index $H\in (0, 1/2]$, i.e. whose coordinate processes $X^{(1)},\dots,X^{(d)}$ are i.i.d. multiparameter fractional Brownian motions with index $H$. \\
With probability one, the Hausdorff dimensions of the graph and the range of the sample paths of $X$ are
\begin{align*}
\forall I=(a,b)\subset\mathbf{R}_+^N,\quad
\dim_{\mathcal{H}} (\mathrm{Gr}_{X}(I)) &= \min\{ N / H; N + d(1 - H) \}, \\
\dim_{\mathcal{H}} (\mathrm{Rg}_{X}(I)) &= \min\{ N / H; d \}.
\end{align*}
\end{proposition}
\begin{corollary}\label{cor:mpfbm}
Let $\mathbf{B}^H=\{ \mathbf{B}^H_t;\; t\in\mathbf{R}_+^N\}$ be a multiparameter fractional Brownian motion with index $H\in (0, 1/2]$.
With probability one, the Hausdorff dimensions of the graph and the range of the sample paths of $\mathbf{B}^H$ are
\begin{align*}
\forall I=(a,b)\subset\mathbf{R}_+^N,\quad
\dim_{\mathcal{H}} (\mathrm{Gr}_{\mathbf{B}^H}(I)) &= N + 1 - H, \\
\dim_{\mathcal{H}} (\mathrm{Rg}_{\mathbf{B}^H}(I)) &= 1.
\end{align*}
\end{corollary}
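The specialization from Proposition \ref{prop:mpfbm} to Corollary \ref{cor:mpfbm} rests on the elementary inequality $N/H\geq 2N\geq N+1-H$ for $d=1$ and $H\in(0,1/2]$, so both minima collapse. A short illustrative script (not part of the argument) confirms this:

```python
def graph_dim(N, d, H):
    """Hausdorff dimension of the graph, as in Proposition prop:mpfbm."""
    return min(N / H, N + d * (1 - H))

def range_dim(N, d, H):
    """Hausdorff dimension of the range, as in Proposition prop:mpfbm."""
    return min(N / H, d)

# For d = 1 and H in (0, 1/2]: N/H >= 2N >= N + 1 - H, so the minima
# reduce to the formulas of the corollary.
for N in (1, 2, 3):
    for H in (0.1, 0.25, 0.5):
        assert abs(graph_dim(N, 1, H) - (N + 1 - H)) < 1e-12
        assert range_dim(N, 1, H) == 1
```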
Proposition \ref{prop:mpfbm} and Corollary \ref{cor:mpfbm} should be compared to Theorem 1.3 of \cite{AyXiao} which states the Hausdorff dimensions of the range and the graph of the fractional Brownian sheet (result extended by Proposition 1 and Theorem 3 of \cite{WuXiao}). In particular, the Hausdorff dimensions of the sample path (range and graph) of the multiparameter fractional Brownian motion are equal to the respective quantities for the fractional Brownian sheet, when the Hurst index is the same along each axis.
\subsection{Irregular Multifractional Brownian motion}\label{sec:mbm}
The multifractional Brownian motion (mBm) is an extension of the fractional Brownian motion in which the self-similarity index $H\in (0,1)$ is replaced by a function $H:\mathbf{R}_+\rightarrow (0,1)$ (see \cite{RPJLV} and \cite{BJR}). More precisely, it can be defined as a zero-mean Gaussian process $\{X_t;\;t\in\mathbf{R}_+\}$ with
\begin{equation*}
X_t = \int_{-\infty}^0 \left[(t-u)^{H(t)-1/2} - (-u)^{H(t)-1/2}\right].\mathbbm{W}(du)
+\int_0^t (t-u)^{H(t)-1/2}.\mathbbm{W}(du)
\end{equation*}
or
\begin{equation}\label{eq:mbm-harm}
X_t = \int_{\mathbf{R}} \frac{e^{it\xi}-1}{|\xi|^{H(t)+1/2}}.\widehat{\mathbbm{W}}(d\xi),
\end{equation}
where $\mathbbm{W}$ is a Gaussian measure on $\mathbf{R}$ and $\widehat{\mathbbm{W}}$ is the Fourier transform of a Gaussian measure with values in $\mathbf{C}$.
The variety of the class of multifractional Brownian motions is described in \cite{StoevTaqqu}.
In the first definitions of the mBm, the various authors worked under the assumption that $H$ is a $\beta$-H\"older function with $H(t)<\beta$ for all $t\in\mathbf{R}_+$.
Under this so-called $(H_{\beta})$-assumption, the local regularity of the sample paths was described by
\begin{equation*}
\texttt{\large $\boldsymbol{\alpha}$}_X(t_0) = \widetilde{\texttt{\large $\boldsymbol{\alpha}$}}_X(t_0) = H(t_0) \qquad\textrm{a.s.}
\end{equation*}
where $\texttt{\large $\boldsymbol{\alpha}$}_X(t_0)$ and $\widetilde{\texttt{\large $\boldsymbol{\alpha}$}}_X(t_0)$ denote the pointwise and local H\"older exponents of $X$ at any $t_0\in\mathbf{R}_+$.
A localization of the Hausdorff dimension of the graph was also proved: for any $t_0\in\mathbf{R}_+$,
\begin{equation*}
\lim_{\rho\rightarrow 0}\dim_{\mathcal{H}} \left[ \mathrm{Gr}_X\left( B(t_0,\rho) \right)\right] = 2-H(t_0)
\qquad\textrm{a.s.}
\end{equation*}
Let us notice that this result is not a direct consequence of Adler's earlier work \cite{Adler77}, since the multifractional Brownian motion, unlike the classical fractional Brownian motion, does not have stationary increments.
In \cite{EH06, 2ml}, the fine regularity of the multifractional Brownian motion has been studied in the irregular case, i.e. when the function $H$ is only assumed to be $\beta$-H\"older continuous with $\beta>0$. In this more general case, the pointwise and local H\"older exponents of $X$ at any $t_0\in\mathbf{R}_+$ satisfy respectively
\begin{align*}
\texttt{\large $\boldsymbol{\alpha}$}_X(t_0) &= H(t_0) \wedge \alpha_H(t_0) \qquad\textrm{a.s.}\\
\widetilde{\texttt{\large $\boldsymbol{\alpha}$}}_X(t_0) &= H(t_0) \wedge \widetilde{\alpha}_H(t_0) \qquad\textrm{a.s.},
\end{align*}
where
\begin{align*}
\alpha_H(t_0)&= \sup\left\{ \alpha>0: \limsup_{\rho\rightarrow 0}
\sup_{s,t\in B(t_0,\rho)} \frac{|H(t)-H(s)|}{\rho^{\alpha}} < +\infty \right\};\\
\widetilde{\alpha}_H(t_0)&= \sup\left\{ \alpha>0: \lim_{\rho\rightarrow 0}
\sup_{s,t\in B(t_0,\rho)} \frac{|H(t)-H(s)|}{|t-s|^{\alpha}} < +\infty \right\}.
\end{align*}
Roughly speaking, when the function $H$ is irregular, it transmits its local regularity to the sample paths of the mBm.
But in that case, nothing is known about the Hausdorff dimension of the range or the graph of the process.
In this section, the main results stated in Section \ref{sec:main} are applied to derive information about these Hausdorff dimensions, without additional regularity assumptions on the function $H$.
As for Gaussian processes, we define the {\em local sub-exponent} of $H$ at $t_0\in\mathbf{R}_+$ by
\begin{align*}
\underline{\alpha}_H(t_0)&= \inf\left\{ \alpha>0 :
\lim_{\rho\rightarrow 0} \inf_{s,t\in B(t_0,\rho)} \frac{|H(t)-H(s)|}{|t-s|^{\alpha}}
=+\infty \right\} \\
&= \sup\left\{ \alpha>0 :
\lim_{\rho\rightarrow 0} \inf_{s,t\in B(t_0,\rho)} \frac{|H(t)-H(s)|}{|t-s|^{\alpha}}
=0 \right\}.
\end{align*}
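To see how the sub-exponent behaves on a concrete (hypothetical) example, take the linear function $H(t)=c\,t$. Then $|H(t)-H(s)|/|t-s|^{\alpha}=c\,|t-s|^{1-\alpha}$, so the infimum over $s\neq t\in B(t_0,\rho)$ equals $c\,(2\rho)^{1-\alpha}$ for $\alpha>1$ (attained at the endpoints) and $0$ for $\alpha<1$ (approached as $s\rightarrow t$); hence $\underline{\alpha}_H(t_0)=1$ at every $t_0$. The sketch below evaluates this closed form and checks the two limiting regimes:

```python
def inf_ratio_linear(c, rho, alpha):
    """Closed-form infimum of |H(t)-H(s)| / |t-s|^alpha over s != t in
    B(t0, rho) for the linear function H(t) = c*t (any t0):
    the ratio is c*|t-s|^(1-alpha)."""
    if alpha > 1:
        return c * (2 * rho) ** (1 - alpha)  # minimized at |t-s| = 2*rho
    if alpha == 1:
        return c                             # ratio is constant
    return 0.0                               # infimum as s -> t

c = 0.2
# alpha > 1: the infimum blows up as rho -> 0 ...
for alpha in (1.2, 1.5):
    vals = [inf_ratio_linear(c, rho, alpha) for rho in (1e-1, 1e-2, 1e-3)]
    assert vals[0] < vals[1] < vals[2]
# ... while for alpha < 1 it is identically 0: the sub-exponent equals 1.
for alpha in (0.5, 0.9):
    assert inf_ratio_linear(c, 1e-3, alpha) == 0.0
```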
\begin{proposition}\label{prop:mbm}
Let $X=\{X_t;\; t\in\mathbf{R}_+\}$ be the multifractional Brownian motion of integral representation (\ref{eq:mbm-harm}), with regularity function $H:\mathbf{R}_+\rightarrow (0,1)$ assumed to be $\beta$-H\"older-continuous with $\beta>0$.
Let $\widetilde{\alpha}_H(t_0)$ and $\underline{\alpha}_H(t_0)$ be respectively the local H\"older exponent and sub-exponent of $H$ at $t_0\in\mathbf{R}_+$.
In the following three cases, the Hausdorff dimension of the graph of the sample path of $X$ satisfies:
\begin{enumerate}[(i)]
\item If $H(t_0) < \widetilde{\alpha}_H(t_0) \leq \underline{\alpha}_H(t_0)$ for $t_0\in\mathbf{R}_+$, then
\begin{align*}
\lim_{\rho\rightarrow 0}\dim_{\mathcal{H}} (\mathrm{Gr}_X(B(t_0,\rho))) = 2-H(t_0) \qquad\textrm{a.s.}
\end{align*}
\item If $\widetilde{\alpha}_H(t_0) < H(t_0) \leq \underline{\alpha}_H(t_0)$ for $t_0\in\mathbf{R}_+$, then
\begin{align*}
2-H(t_0) \leq \lim_{\rho\rightarrow 0}\dim_{\mathcal{H}} (\mathrm{Gr}_X(B(t_0,\rho))) &\leq 2-\widetilde{\alpha}_H(t_0) \qquad\textrm{a.s.}
\end{align*}
\item If $\widetilde{\alpha}_H(t_0) \leq \underline{\alpha}_H(t_0) < H(t_0)$ for $t_0\in\mathbf{R}_+$, then
\begin{align*}
2-\underline{\alpha}_H(t_0) \leq \lim_{\rho\rightarrow 0}\dim_{\mathcal{H}} (\mathrm{Gr}_X(B(t_0,\rho))) &\leq 2-\widetilde{\alpha}_H(t_0) \qquad\textrm{a.s.}
\end{align*}
\end{enumerate}
With probability one, the Hausdorff dimension of the range of the sample path of $X$ satisfies:
\begin{align*}
\forall t_0\in\mathbf{R}_+,\quad
\lim_{\rho\rightarrow 0}\dim_{\mathcal{H}} (\mathrm{Rg}_X(B(t_0,\rho))) = 1.
\end{align*}
Moreover if the $(H_{\beta})$-assumption holds then, with probability one,
\begin{align*}
\forall t_0\in\mathbf{R}_+,\quad
\lim_{\rho\rightarrow 0}\dim_{\mathcal{H}} (\mathrm{Gr}_X(B(t_0,\rho))) = 2-H(t_0).
\end{align*}
\end{proposition}
\begin{proof}
In \cite{EH06}, the asymptotic behaviour of the incremental variance of the multifractional Brownian motion in a neighborhood $B(t_0,\rho)$ of any $t_0\in\mathbf{R}_+$, as $\rho$ goes to $0$, is given by: $\forall s,t\in B(t_0,\rho)$,
\begin{align}\label{eq:asympMBM}
\mathbf{E}[|X_t-X_s|^2] \sim K(t_0)\ |t-s|^{H(t)+H(s)} + L(t_0)\ [H(t)-H(s)]^2,
\end{align}
where $K(t_0)$ and $L(t_0)$ are positive constants.\\
From (\ref{eq:asympMBM}), for any $t_0\in\mathbf{R}_+$, for all $\alpha>0$ and for all $s,t\in B(t_0,\rho)$,
\begin{align}\label{eq:asympMBM2}
\frac{\mathbf{E}[|X_t-X_s|^2]}{|t-s|^{2\alpha}} \sim K(t_0)\ |t-s|^{H(t)+H(s)-\alpha} + L(t_0)\ \left[\frac{H(t)-H(s)}{|t-s|^{\alpha}}\right]^2,
\end{align}
when $\rho\rightarrow 0$.
This expression allows us to evaluate the exponents $\widetilde{\mathbb{\alpha}}_X(t_0)$ (and consequently $\widetilde{\texttt{\large $\boldsymbol{\alpha}$}}_X(t_0)$) and $\underline{\mathbb{\alpha}}_X(t_0)$ in terms of the corresponding exponents of the function $H$.
The local behaviour of $H$ around $t_0$ is described by one of the two following situations:
\begin{itemize}
\item Either there exists $\rho>0$ such that the restriction $\left.H\right|_{B(t_0,\rho)}$ is increasing or decreasing.
In that case, $\underline{\alpha}_H(t_0)\in\mathbf{R}_+\cup\{+\infty\}$.
\item Or for all $\rho>0$, there exist $s,t\in B(t_0,\rho)$ such that $H(t)=H(s)$.\\
In that case, for all $\alpha>0$ and for all $\rho>0$, $\displaystyle\inf_{s,t\in B(t_0,\rho)} \frac{|H(t)-H(s)|}{|t-s|^{\alpha}}=0$ and therefore, $\underline{\alpha}_H(t_0)=+\infty$.
\end{itemize}
Since $\widetilde{\alpha}_H(t_0) \leq \underline{\alpha}_H(t_0)$ for all $t_0\in\mathbf{R}_+$ as noticed in Section~\ref{sec:subexp}, we distinguish the three following cases:
\begin{enumerate}[(i)]
\item If $H(t_0) < \widetilde{\alpha}_H(t_0) \leq \underline{\alpha}_H(t_0)$ for some $t_0\in\mathbf{R}_+$, then for all $0<\epsilon<\widetilde{\alpha}_H(t_0) - H(t_0)$, there exists $\rho_0>0$ such that
$$ \forall t\in B(t_0,\rho_0),\quad H(t_0)-\epsilon < H(t) < H(t_0)+\epsilon,$$
and thus
\begin{equation}\label{eq:mbm-H}
\forall s,t\in B(t_0,\rho_0),\quad |t-s|^{2H(t_0)+2\epsilon} \leq |t-s|^{H(s)+H(t)} \leq |t-s|^{2H(t_0)-2\epsilon}.
\end{equation}
Then, expression (\ref{eq:asympMBM2}) implies $H(t_0)-\epsilon\leq\widetilde{\bbalpha}_X(t_0)$ and $\underline{\bbalpha}_X(t_0) \leq H(t_0)+\epsilon$, by definition of the exponents. Letting $\epsilon$ tend to $0$, and using $\widetilde{\bbalpha}_X(t_0) \leq \underline{\bbalpha}_X(t_0)$, we get $\widetilde{\bbalpha}_X(t_0) = \underline{\bbalpha}_X(t_0) = H(t_0)$.
\noindent Then, Theorem \ref{thmain} (with $N>d\ \underline{\bbalpha}_X(t_0)$) implies:
\begin{align*}
\lim_{\rho\rightarrow 0}\dim_{\mathcal{H}} (\mathrm{Gr}_X(B(t_0,\rho))) = 2-H(t_0) \qquad\textrm{a.s.}
\end{align*}
\item If $\widetilde{\alpha}_H(t_0) < H(t_0) \leq \underline{\alpha}_H(t_0)$ for some $t_0\in\mathbf{R}_+$, then as previously, we consider any $0<\epsilon<H(t_0)-\widetilde{\alpha}_H(t_0)$ and we show that expression (\ref{eq:asympMBM2}) and inequalities (\ref{eq:mbm-H}) imply $\widetilde{\bbalpha}_X(t_0) = \widetilde{\alpha}_H(t_0)$ and $\underline{\bbalpha}_X(t_0) = H(t_0)$.
Theorem \ref{thmain} (with $N>d\ \underline{\bbalpha}_X(t_0)$) implies:
\begin{align*}
2-H(t_0) \leq \lim_{\rho\rightarrow 0}\dim_{\mathcal{H}} (\mathrm{Gr}_X(B(t_0,\rho))) \leq 2-\widetilde{\alpha}_H(t_0) \qquad\textrm{a.s.}
\end{align*}
\item If $\widetilde{\alpha}_H(t_0) \leq \underline{\alpha}_H(t_0) < H(t_0)$ for some $t_0\in\mathbf{R}_+$, then as previously, we consider any $0<\epsilon<H(t_0)-\underline{\alpha}_H(t_0)$ and we show that expression (\ref{eq:asympMBM2}) and inequalities (\ref{eq:mbm-H}) imply $\widetilde{\bbalpha}_X(t_0) = \widetilde{\alpha}_H(t_0)$ and $\underline{\bbalpha}_X(t_0) = \underline{\alpha}_H(t_0)$.
Theorem \ref{thmain} (with $N>d\ \underline{\bbalpha}_X(t_0)$) implies:
\begin{align*}
2-\underline{\alpha}_H(t_0) \leq \lim_{\rho\rightarrow 0}\dim_{\mathcal{H}} (\mathrm{Gr}_X(B(t_0,\rho))) \leq 2-\widetilde{\alpha}_H(t_0) \qquad\textrm{a.s.}
\end{align*}
\end{enumerate}
Since $H$ is $\beta$-H\"older-continuous with $\beta>0$, Theorem \ref{thmainunif} can be applied with $\mathcal{A}=\mathbf{R}_+$.
In the three previous cases, we observe that $\underline{\bbalpha}_X(u) < 1$ for all $u\in\mathbf{R}_+$. Consequently, $N>d\ \liminf_{u\rightarrow t_0}\underline{\bbalpha}_X(u)$ and, with probability one,
\begin{align*}
\forall t_0\in\mathbf{R}_+,\quad
\lim_{\rho\rightarrow 0}\dim_{\mathcal{H}} (\mathrm{Rg}_X(B(t_0,\rho))) = 1.
\end{align*}
When the $(H_{\beta})$-assumption holds, $\widetilde{\bbalpha}_X(t_0) = \widetilde{\alpha}_H(t_0)=\underline{\bbalpha}_X(t_0)$ for all $t_0\in\mathbf{R}_+$, and by continuity of $H$,
$$ \liminf_{u\rightarrow t_0}\widetilde{\bbalpha}_X(u) =
\liminf_{u\rightarrow t_0}\underline{\bbalpha}_X(u) = H(t_0). $$
Then, Theorem \ref{thmainunif} implies:
With probability one,
\begin{align*}
\forall t_0\in\mathbf{R}_+,\quad
\lim_{\rho\rightarrow 0}\dim_{\mathcal{H}} (\mathrm{Gr}_X(B(t_0,\rho))) = 2-H(t_0).
\end{align*}
\end{proof}
According to Proposition \ref{prop:mbm}, the general theorems of Section \ref{sec:main} do not yield sharp values for the Hausdorff dimensions of the sample paths of the multifractional Brownian motion when the $(H_{\beta})$-assumption for the function $H$ is not satisfied.
This is due to the fact that the irregularity of $H$ is not completely controlled by the exponents $\widetilde{\alpha}_H(t_0)$ and $\underline{\alpha}_H(t_0)$. A deeper analysis of the function $H$ is required in order to determine the exact Hausdorff dimensions of the mBm.
\subsection{Generalized Weierstrass function}\label{sec:GW}
The local regularity of the Weierstrass function $W_H$, defined by
\begin{equation*}
t\mapsto W_H(t)=\sum_{j=1}^{\infty}\lambda^{-j H}\ \sin\lambda^{j}t,
\end{equation*}
where $\lambda \geq 2$ and $H\in (0,1)$, has been studied in depth in the literature (e.g. see \cite{falconer}).
When $\lambda$ is large enough, the box-counting dimension of the graph of $W_H$ is known to be $2-H$.
Nevertheless the exact value of the Hausdorff dimension remains unknown at this stage.
Different stochastic versions of the Weierstrass function have been considered in \cite{AyLeVe, falconer, 2ml, hunt, liningPhD} and their geometric properties have been investigated.
In this section, we consider the {\em generalized Weierstrass function (GW)}, defined as the Gaussian process $X=\left\{X_t;\;t\in\mathbf{R}_{+}\right\}$,
\begin{equation}\label{def:Weierstrass}
\forall t\in\mathbf{R}_+,\quad
X_t=\sum_{j=1}^{\infty}Z_j\ \lambda^{-j H(t)}\ \sin(\lambda^{j}t + \theta_j)
\end{equation}
where
\begin{itemize}
\item $\lambda \geq 2$,
\item $t\mapsto H(t)$ takes values in $(0,1)$,
\item $\left(Z_j\right)_{j\geq 1}$ is a sequence of $\mathcal{N}(0,1)$ i.i.d. random variables,
\item and $\left(\theta_j\right)_{j\geq 1}$ is a sequence of random variables uniformly distributed on $[0,2\pi)$, independent of $\left(Z_j\right)_{j\geq 1}$.
\end{itemize}
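For readers who wish to experiment numerically, the series (\ref{def:Weierstrass}) can be approximated by truncating it at a finite level. The sketch below simulates one sample path on a grid; the truncation level $J$ and the profile $H$ are illustrative choices, not taken from the text.

```python
import numpy as np

def gw_path(ts, H, lam=2.0, J=60, seed=None):
    """Simulate a truncated generalized Weierstrass path on the grid `ts`:
    X_t ~ sum_{j=1}^J Z_j * lam**(-j*H(t)) * sin(lam**j * t + theta_j),
    with Z_j iid N(0,1) and theta_j iid uniform on [0, 2*pi)."""
    rng = np.random.default_rng(seed)
    j = np.arange(1, J + 1)
    Z = rng.standard_normal(J)
    theta = rng.uniform(0.0, 2.0 * np.pi, J)
    ts = np.asarray(ts, dtype=float)
    Ht = np.array([H(t) for t in ts])
    # rows: grid points; columns: frequencies j = 1, ..., J
    terms = Z * lam ** (-j * Ht[:, None]) * np.sin(lam ** j * ts[:, None] + theta)
    return terms.sum(axis=1)

ts = np.linspace(0.0, 1.0, 513)
x = gw_path(ts, H=lambda t: 0.4 + 0.3 * t, seed=0)   # H takes values in (0,1)
```

Truncation at level $J$ leaves a tail of order $\lambda^{-J\inf H}$, negligible here since $\lambda\geq 2$.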
In the specific case of $\theta_j=0$ for all $j\geq 1$, Theorem 4.9 of \cite{2ml} determines the local regularity of the sample path of the GW through its $2$-microlocal frontier, when the function $H$ is $\beta$-H\"older continuous with $\beta>0$ and when the $(H_{\beta})$-assumption holds, i.e. $H(t)<\beta$ for all $t\in\mathbf{R}_+$.
In particular, the deterministic local H\"older exponent is proved to be $\widetilde{\bbalpha}_X(t_0) = H(t_0)$ for all $t_0\in\mathbf{R}_+$ and the local H\"older exponent satisfies, with probability one,
\begin{align*}
\forall t_0\in\mathbf{R}_+,\quad
\widetilde{\texttt{\large $\boldsymbol{\alpha}$}}_X(t_0) = H(t_0).
\end{align*}
Moreover, when $H$ is constant and $\theta_j=0$ for all $j\geq 1$, the Hausdorff dimension of the graph of the sample path of the GW is proved to be equal to $2-H$, as a particular case of Theorem 5.3.1 of \cite{liningPhD}. In the sequel, we use Theorem \ref{thmainunif} to extend this result when $H$ is no longer constant and the $\theta_j$'s are not equal to $0$.
The two following lemmas are the key results to determine the deterministic local H\"older exponent and sub-exponent of the GW in the general case. Their proofs are sketched in \cite{falconer} when the $\left(\theta_j\right)_{j\geq 1}$ are independent and uniformly distributed on $[0,2\pi)$; for the sake of completeness, we detail them in this section without requiring the independence of the $\theta_j$'s, before considering the case of a non-constant function $H$.
\begin{lemma}\label{lem:GW-inc-var}
Let $\{X_t;\;t\in\mathbf{R}_+\}$ be the stochastic Weierstrass function defined by (\ref{def:Weierstrass}).
Then, the incremental variance between $u,v\in\mathbf{R}_+$ is given by
\begin{align}\label{eq:GW-inc-var}
\mathbf{E}[|X_u-X_v|^2] = 2\sum_{j\geq 1} \lambda^{-2j H(u)}
\sin^2\left(\lambda^j \frac{u-v}{2}\right)
+ \sum_{j\geq 1} \left( \lambda^{-j H(v)} - \lambda^{-j H(u)} \right)^2.
\end{align}
\end{lemma}
\begin{proof}
For all $u,v\in\mathbf{R}_+$, we compute
\begin{align*}
X_u-X_v &= \sum_{j\geq 1} Z_j\ \lambda^{-j H(u)} \left[ \sin(\lambda^j u + \theta_j) - \sin(\lambda^j v + \theta_j) \right] \\
&\qquad\qquad + \sum_{j\geq 1} Z_j\ \left[ \lambda^{-j H(v)} - \lambda^{-j H(u)} \right] \sin(\lambda^j v + \theta_j) \\
&= 2 \sum_{j\geq 1} Z_j\ \lambda^{-j H(u)} \sin\left( \lambda^j \frac{u-v}{2} \right) \cos\left( \lambda^j \frac{u+v}{2} + \theta_j \right) \\
&\qquad\qquad + \sum_{j\geq 1} Z_j\ \left[ \lambda^{-j H(v)} - \lambda^{-j H(u)} \right] \sin(\lambda^j v + \theta_j).
\end{align*}
In the expression of $\mathbf{E}[|X_u-X_v|^2]$, the three following terms appear:
\begin{itemize}
\item $\displaystyle\mathbf{E}\left[ Z_j Z_k \cos\left( \lambda^j \frac{u+v}{2} + \theta_j \right) \cos\left( \lambda^k \frac{u+v}{2} + \theta_k \right) \right]$,
\item $\displaystyle\mathbf{E}\left[ Z_j Z_k \sin\left( \lambda^j v + \theta_j \right) \sin\left( \lambda^k v + \theta_k \right) \right]$
\item and $\displaystyle\mathbf{E}\left[ Z_j Z_k \cos\left( \lambda^j \frac{u+v}{2} + \theta_j \right) \sin\left( \lambda^k v + \theta_k \right) \right]$,
\end{itemize}
where $j,k \geq 1$.
The first two terms are treated in the same way. For the second one, we have
\begin{align*}
\mathbf{E}\left[ Z_j Z_k \right. &\left.\sin\left( \lambda^j v + \theta_j \right) \sin\left( \lambda^k v + \theta_k \right) \right] \\
&= \mathbf{E}\left( \mathbf{E}\left[ Z_j Z_k \sin\left( \lambda^j v + \theta_j \right) \sin\left( \lambda^k v + \theta_k \right) \mid Z_j, Z_k\right] \right) \\
&= \mathbf{E}[Z_j Z_k]\ \mathbf{E}\left[ \sin\left( \lambda^j v + \theta_j \right) \sin\left( \lambda^k v + \theta_k \right) \right],
\end{align*}
using the independence of $(\theta_j, \theta_k)$ with $(Z_j, Z_k)$.
Then, since $\mathbf{E}[Z_j Z_k] = \mathbbm{1}_{j=k}$ and
\begin{align*}
\mathbf{E}[\sin^2(\lambda^j v + \theta_j)] = \frac{1}{2\pi} \int_{[0,2\pi)} \sin^2(\lambda^j v + x)\ dx = \frac{1}{2},
\end{align*}
we get
\begin{align*}
\mathbf{E}\left[ Z_j Z_k \sin\left( \lambda^j v + \theta_j \right) \sin\left( \lambda^k v + \theta_k \right) \right]
= \frac{1}{2}\,\mathbbm{1}_{j=k}.
\end{align*}
In the same way, we prove that
\begin{align*}
\mathbf{E}\left[ Z_j Z_k \cos\left( \lambda^j \frac{u+v}{2} + \theta_j \right) \cos\left( \lambda^k \frac{u+v}{2} + \theta_k \right) \right]
= \frac{1}{2}\,\mathbbm{1}_{j=k}.
\end{align*}
For the third term, we compute as previously
\begin{align*}
\mathbf{E}\bigg[ Z_j Z_k & \cos\left( \lambda^j \frac{u+v}{2} + \theta_j \right) \sin\left( \lambda^k v + \theta_k \right) \bigg] \\
&= \mathbf{E}[Z_j Z_k] \ \mathbf{E}\left[ \cos\left( \lambda^j \frac{u+v}{2} + \theta_j \right) \sin\left( \lambda^k v + \theta_k \right) \right] \\
&= \mathbbm{1}_{j=k}\ \mathbf{E}\left[ \cos\left( \lambda^j \frac{u+v}{2} + \theta_j \right) \sin\left( \lambda^j v + \theta_j \right) \right] \\
&= \mathbbm{1}_{j=k}\ \frac{1}{2\pi}\int_{[0,2\pi)} \cos\left( \lambda^j \frac{u+v}{2} + x \right) \sin\left( \lambda^j v + x \right) \ dx =0,
\end{align*}
by a parity argument.
The result follows.
\end{proof}
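Formula (\ref{eq:GW-inc-var}) lends itself to a direct Monte Carlo check: simulating the truncated series for fresh draws of $(Z_j,\theta_j)$ and averaging $|X_u-X_v|^2$ should reproduce the right-hand side. A minimal sketch, where the profile $H$, the pair $(u,v)$ and the truncation level $J$ are illustrative choices:

```python
import numpy as np

lam, J = 2.0, 40
H = lambda t: 0.5 + 0.1 * np.sin(t)     # illustrative profile with values in (0,1)
u, v = 0.30, 0.31
j = np.arange(1, J + 1)

# right-hand side of the lemma, truncated at J terms
rhs = (2.0 * np.sum(lam ** (-2 * j * H(u)) * np.sin(lam ** j * (u - v) / 2.0) ** 2)
       + np.sum((lam ** (-j * H(v)) - lam ** (-j * H(u))) ** 2))

# Monte Carlo estimate of E|X_u - X_v|^2 over independent copies of (Z, theta)
rng = np.random.default_rng(0)
N = 50_000
Z = rng.standard_normal((N, J))
theta = rng.uniform(0.0, 2.0 * np.pi, (N, J))
Xu = np.sum(Z * lam ** (-j * H(u)) * np.sin(lam ** j * u + theta), axis=1)
Xv = np.sum(Z * lam ** (-j * H(v)) * np.sin(lam ** j * v + theta), axis=1)
mc = np.mean((Xu - Xv) ** 2)
```

For a constant $H$ the second sum vanishes and only the oscillatory term remains, as in Lemma \ref{lem:GW-const}.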
\begin{lemma}\label{lem:GW-const}
Let $\{X_t;\;t\in\mathbf{R}_+\}$ be the stochastic Weierstrass function defined by (\ref{def:Weierstrass}), where the function $H$ is assumed to be constant.
Then, for every compact subset $I\subset\mathbf{R}_+$, there exist two constants $C_1>0$ and $C_2>0$ such that for all $u,v\in I$,
\begin{align}\label{eq:GW-const}
0 < C_1 \leq \frac{\mathbf{E}[|X_u-X_v|^2]}{|u-v|^{2H}} \leq C_2 < +\infty.
\end{align}
\end{lemma}
\begin{proof}
According to Lemma \ref{lem:GW-inc-var}, the incremental variance of $X$ is given by
\begin{align}\label{eq:GW-inc-var-const}
\mathbf{E}[|X_u-X_v|^2] = 2\sum_{j\geq 1} \lambda^{-2j H}
\sin^2\left(\lambda^j \frac{u-v}{2}\right).
\end{align}
Let $N$ be the integer such that $\lambda^{-(N+1)} \leq |u-v| < \lambda^{-N}$.
For all $j\leq N$, $\displaystyle\lambda^j \frac{|u-v|}{2} < \frac{1}{2}$.
Since $\displaystyle x^2 - \frac{x^4}{3} \leq \sin^2 x \leq x^2$ for all $x\in [0,1]$, expression (\ref{eq:GW-inc-var-const}) implies
\begin{align}\label{eq:GW-const-up}
\mathbf{E}[|X_u-X_v|^2] &\leq 2\sum_{j=1}^N \lambda^{-2j H} \lambda^{2j} \left(\frac{u-v}{2}\right)^2 + 2\sum_{j\geq N+1} \lambda^{-2j H} \nonumber\\
&\leq 2\sum_{j=1}^N \lambda^{2j(1-H)} \left(\frac{u-v}{2}\right)^2
+ \frac{2\ \lambda^{-2H(N+1)}}{1-\lambda^{-2H}} \nonumber\\
&\leq 2\sum_{j=1}^N \lambda^{2j(1-H)} \left(\frac{u-v}{2}\right)^2
+ \frac{2\ |u-v|^{2H}}{1-\lambda^{-2H}}
\end{align}
and
\begin{align}\label{eq:GW-const-low}
\mathbf{E}[|X_u-X_v|^2] &\geq 2\sum_{j=1}^N \lambda^{-2j H} \lambda^{2j} \left(\frac{u-v}{2}\right)^2 - \frac{2}{3} \sum_{j=1}^N \lambda^{-2j H} \lambda^{4j} \left(\frac{u-v}{2}\right)^4 \nonumber\\
&\geq 2\sum_{j=1}^N \lambda^{2j(1-H)} \left(\frac{u-v}{2}\right)^2
- \frac{1}{24} \lambda^{-4N} \sum_{j=1}^N \lambda^{j(4-2H)} \nonumber\\
&\geq 2\sum_{j=1}^N \lambda^{2j(1-H)} \left(\frac{u-v}{2}\right)^2
- \frac{1}{24} \lambda^{-4N} \lambda^{4-2H} \frac{\lambda^{(4-2H)N} - 1}{\lambda^{4-2H} - 1} \nonumber\\
&\geq 2\sum_{j=1}^N \lambda^{2j(1-H)} \left(\frac{u-v}{2}\right)^2
- \frac{1}{24} \frac{\lambda^{4-2H}}{\lambda^{4-2H} - 1} \lambda^{-2HN} \nonumber\\
&\geq 2\sum_{j=1}^N \lambda^{2j(1-H)} \left(\frac{u-v}{2}\right)^2
- \frac{1}{24} \frac{\lambda^{4}}{\lambda^{4-2H} - 1} |u-v|^{2H}.
\end{align}
Now, it remains to compare the term $\displaystyle\sum_{j=1}^N \lambda^{2j(1-H)} \left(\frac{u-v}{2}\right)^2$ with $|u-v|^{2H}$.
By definition of the integer $N$, we have
\begin{align}\label{eq:GW-const-ineq-1er}
\frac{\lambda^{-2(N+1)}}{4} \sum_{j=1}^N \lambda^{2j(1-H)}
\leq \sum_{j=1}^N \lambda^{2j(1-H)} \left(\frac{u-v}{2}\right)^2
\leq \frac{\lambda^{-2N}}{4} \sum_{j=1}^N \lambda^{2j(1-H)}.
\end{align}
But
\begin{align*}
\frac{\lambda^{-2N}}{4} \sum_{j=1}^N \lambda^{2j(1-H)}
&= \frac{\lambda^{-2N}}{4} \lambda^{2(1-H)} \frac{\lambda^{2N(1-H)}-1}{\lambda^{2(1-H)}-1} \\
&= \frac{\lambda^{2(1-H)}}{4(\lambda^{2(1-H)}-1)} \left(\lambda^{-2NH}-\lambda^{-2N}\right).
\end{align*}
Using the definition of $N$, we get
\begin{align*}
|u-v|^{2H} - \lambda^2\ |u-v|^2
\leq \lambda^{-2NH}-\lambda^{-2N} \leq
\lambda^{2H}\ |u-v|^{2H} - |u-v|^2.
\end{align*}
Then there exist two constants $c_1>0$ and $c_2>0$ such that for all $u,v\in I$,
\begin{align*}
c_1\ |u-v|^{2H}
\leq \frac{\lambda^{-2N}}{4} \sum_{j=1}^N \lambda^{2j(1-H)} \leq
c_2\ |u-v|^{2H}.
\end{align*}
Then, the result follows from \eqref{eq:GW-const-up}, \eqref{eq:GW-const-low} and \eqref{eq:GW-const-ineq-1er}.
\end{proof}
When the function $H:\mathbf{R}_+\rightarrow (0,1)$ is $\beta$-H\"older continuous (and no longer constant), the double inequality (\ref{eq:GW-const}) can be improved by the following result.
\begin{proposition}\label{prop:GW}
Let $X=\{X_t;\;t\in\mathbf{R}_+\}$ be a generalized Weierstrass function defined by (\ref{def:Weierstrass}), where the function $H$ is assumed to be $\beta$-H\"older-continuous with $\beta>0$.
Then, for any $t_0\in\mathbf{R}_+$,
for all $\epsilon > 0$, there exist $\rho_0>0$ and positive constants $c_1, c_2, c_3, c_4$ such that for all $u,v\in B(t_0,\rho_0)$,
\begin{align}
& c_1\ |u-v|^{2 H(t_0)+\epsilon} + c_3\ [H(u) - H(v)]^2 \leq \mathbf{E}[|X_u-X_v|^2] \label{eq:GW-min}\\
&\textrm{and} \qquad
\mathbf{E}[|X_u-X_v|^2] \leq c_2\ |u-v|^{2 H(t_0)-\epsilon} + c_4\ [H(u) - H(v)]^2. \label{eq:GW-max}
\end{align}
\end{proposition}
\begin{proof}
Since the function $H:\mathbf{R}_+\rightarrow (0,1)$ is continuous, for all $t_0\in\mathbf{R}_+$ and all $\epsilon>0$, there exists $\rho_0>0$ such that
\begin{align*}
\forall u,v\in B(t_0,\rho_0),\quad
H(u), H(v) \in (H(t_0)-\epsilon, H(t_0)+\epsilon).
\end{align*}
Then, the first term of the expression (\ref{eq:GW-inc-var}) for $\mathbf{E}[|X_u-X_v|^2]$ satisfies
\begin{align*}
2\sum_{j\geq 1} \lambda^{-2j H(u)}\ \sin^2\left(\lambda^j \frac{u-v}{2}\right) \leq
2\sum_{j\geq 1} \lambda^{-2j (H(t_0)-\epsilon)}\ \sin^2\left(\lambda^j \frac{u-v}{2}\right)
\end{align*}
and
\begin{align*}
2\sum_{j\geq 1} \lambda^{-2j H(u)}\ \sin^2\left(\lambda^j \frac{u-v}{2}\right) \geq
2\sum_{j\geq 1} \lambda^{-2j (H(t_0)+\epsilon)}\ \sin^2\left(\lambda^j \frac{u-v}{2}\right).
\end{align*}
Then, according to Lemma \ref{lem:GW-const}, there exist two constants $c_1>0$ and $c_2>0$ such that for all $u,v\in B(t_0,\rho_0)$,
\begin{align}\label{eq:GW-first}
c_1\ |u-v|^{2(H(t_0) + \epsilon)}
\leq 2\sum_{j\geq 1} \lambda^{-2j H(u)}\ \sin^2\left(\lambda^j \frac{u-v}{2}\right) \leq
c_2\ |u-v|^{2(H(t_0) - \epsilon)}.
\end{align}
For the second term of the expression (\ref{eq:GW-inc-var}) for $\mathbf{E}[|X_u-X_v|^2]$, we consider the function $\psi_{\lambda,j}:x\mapsto \lambda^{-jx}=e^{-jx \ln\lambda}$ of derivative $\psi'_{\lambda,j}(x) = -j \ln\lambda \ \lambda^{-jx}$.
From the mean value theorem, for all $u, v\in B(t_0,\rho_0)$, there exists $h_{uv}$ between $H(u)$ and $H(v)$ (i.e. in either $(H(u),H(v))$ or $(H(v),H(u))$) such that
\begin{align*}
|\lambda^{-j H(u)} - \lambda^{-j H(v)}|
= |H(u)-H(v)|\ j \ln\lambda \ \lambda^{-j h_{uv}}.
\end{align*}
Since $H(u)$ and $H(v)$ belong to the interval $(H(t_0)-\epsilon,H(t_0)+\epsilon)$, so does $h_{uv}$, and we get
\begin{align*}
&|H(u)-H(v)|\ j \ln\lambda \ \lambda^{-j (H(t_0)+\epsilon)}
\leq |\lambda^{-j H(u)} - \lambda^{-j H(v)}| \\
\textrm{and}\quad &|\lambda^{-j H(u)} - \lambda^{-j H(v)}|
\leq |H(u)-H(v)|\ j \ln\lambda \ \lambda^{-j (H(t_0)-\epsilon)}.
\end{align*}
Since $\sum_{j\geq 1}j\lambda^{-j(H(t_0)-\epsilon)} < +\infty$ and $\sum_{j\geq 1}j\lambda^{-j(H(t_0)+\epsilon)} < +\infty$, the second term of (\ref{eq:GW-inc-var}) satisfies
\begin{align}\label{eq:GW-2nd}
c_3\ [H(u)-H(v)]^2
\leq \sum_{j\geq 1}\left[\lambda^{-j H(u)} - \lambda^{-j H(v)}\right]^2 \leq
c_4\ [H(u)-H(v)]^2.
\end{align}
The result follows from (\ref{eq:GW-inc-var}), (\ref{eq:GW-first}) and (\ref{eq:GW-2nd}).
\end{proof}
The following result shows that Theorem \ref{thmainunif} allows us to derive the Hausdorff dimensions of the graph and the range of the generalized Weierstrass function.
\begin{corollary}\label{prop:Weierstrass}
Let $X=\{X_t;\;t\in\mathbf{R}_+\}$ be a generalized Weierstrass function defined by (\ref{def:Weierstrass}), where the function $H$ is assumed to be $\beta$-H\"older-continuous with $\beta>0$ and satisfies the $(H_{\beta})$-assumption.
Then, the local H\"older exponents and sub-exponents of $X$ are given by
\begin{align*}
\forall t_0\in\mathbf{R}_+,\quad
\widetilde{\bbalpha}_X(t_0)=\underline{\bbalpha}_X(t_0)=H(t_0).
\end{align*}
Consequently, the Hausdorff dimensions of the graph and the range of the sample path of $X$ satisfy: With probability one,
\begin{align*}
\forall t_0\in\mathbf{R}_+,\quad
\lim_{\rho\rightarrow 0}\dim_{\mathcal{H}} (\mathrm{Gr}_X(B(t_0,\rho))) &= 2-H(t_0), \\
\lim_{\rho\rightarrow 0}\dim_{\mathcal{H}} (\mathrm{Rg}_X(B(t_0,\rho))) &= 1.
\end{align*}
\end{corollary}
\begin{proof}
According to the $(H_{\beta})$-assumption, $H(t_0) < \beta$ for all $t_0\in\mathbf{R}_+$.
Let us fix $t_0\in\mathbf{R}_+$ and consider any $0<\epsilon< 2(\beta-H(t_0))$.
From Proposition \ref{prop:GW} and the fact that $H$ is $\beta$-H\"older continuous with $2H(t_0)-\epsilon<2H(t_0)+\epsilon<2\beta$, there exist $\rho_0>0$ and two constants $C_1>0$ and $C_2>0$ such that for all $u,v \in B(t_0,\rho_0)$,
\begin{align*}
C_1\ |u-v|^{2H(t_0) + \epsilon} \leq \mathbf{E}[|X_u-X_v|^2] \leq
C_2\ |u-v|^{2H(t_0) - \epsilon}.
\end{align*}
From the definitions of the deterministic local H\"older exponent and sub-exponent
$\widetilde{\bbalpha}_X(t_0)$ and $\underline{\bbalpha}_X(t_0)$, we get
\begin{align*}
\forall 0<\epsilon< 2(\beta-H(t_0)), \quad &\widetilde{\bbalpha}_X(t_0) \geq H(t_0) - \epsilon/2, \\
&\underline{\bbalpha}_X(t_0) \leq H(t_0) + \epsilon/2
\end{align*}
and therefore, $H(t_0)\leq\widetilde{\bbalpha}_X(t_0)\leq\underline{\bbalpha}_X(t_0)\leq H(t_0)$ leads to $\widetilde{\bbalpha}_X(t_0)=\underline{\bbalpha}_X(t_0)=H(t_0)$.
Consequently, by continuity of the function $H$, Theorem \ref{thmainunif} implies: With probability one,
\begin{align*}
\forall t_0\in\mathbf{R}_+,\quad
\lim_{\rho\rightarrow 0}\dim_{\mathcal{H}} (\mathrm{Gr}_X(B(t_0,\rho))) &= 2-H(t_0), \\
\lim_{\rho\rightarrow 0}\dim_{\mathcal{H}} (\mathrm{Rg}_X(B(t_0,\rho))) &= 1.
\end{align*}
\end{proof}
\begin{remark}
Corollary \ref{prop:Weierstrass} should be compared to Theorem 1 of \cite{hunt},
where the Hausdorff dimension of the graph of the process $\{Y_t;\;t\in\mathbf{R}_+\}$ defined by
$$\forall t\in\mathbf{R}_+,\quad Y_t = \sum_{n=1}^{+\infty} \lambda^{-nH} \sin(\lambda^n t + \theta_n),$$
where $\lambda\geq 2$, $H\in (0,1)$ and $(\theta_n)_{n\geq 1}$ are independent random variables uniformly distributed on $[0, 2\pi)$, is proved to be $D=2-H$.
The generalized Weierstrass function $X$ differs from the process $Y$ in the form of the random series (the $\theta_n$'s in the definition of $Y_t$ cannot all be equal) and in the fact that the exponent $H$ is constant in the definition of $Y$, contrary to $X$.
\end{remark}
\end{document} |
\begin{document}
\begin{frontmatter}
\title{Test for Temporal Homogeneity of Means in High-dimensional Longitudinal Data}
\runtitle{Tests for Temporal Homogeneity}
\begin{aug}
\author{\fnms{Ping-Shou} \snm{Zhong}\ead[label=e1]{[email protected]}}
\and
\author{\fnms{Jun} \snm{Li}\ead[label=e2]{[email protected]}}
\affiliation{ Michigan State University and Kent State University}
\address{Department of Statistics and Probability\\
Michigan State University\\
East Lansing, MI 48824,
USA\\
\printead{e1}}
\address{Department of Mathematical Sciences\\
Kent State University\\
Kent, OH 44242,
USA\\
\printead{e2}\\
\phantom{E-mail:\ }}
\end{aug}
\begin{abstract}
This paper considers the problem of testing temporal homogeneity of $p$-dimensional population mean vectors from the repeated measurements of $n$ subjects over $T$ times.
To cope with the challenges brought by high-dimensional longitudinal data,
we propose a test statistic that takes into account not only the ``large $p$, large $T$ and small $n$'' situation, but also the complex temporospatial dependence.
The asymptotic distribution of the proposed test statistic is established under mild conditions. When the null hypothesis of temporal homogeneity is rejected, we further propose a binary segmentation method shown to be consistent
for multiple change-point identification. Simulation studies and an application to fMRI data are provided to demonstrate the performance of the proposed methods.
\end{abstract}
\begin{keyword}
\kwd{Homogeneity Test}
\kwd{Longitudinal Data}
\kwd{Spatial and Temporal Dependence}
\end{keyword}
\end{frontmatter}
\section{Introduction}
High-dimensional longitudinal data are often observed in modern applications such as genomics studies and neuroimaging studies of brain function. Collected by repeatedly measuring a large number of components from
a small number of subjects over many time points, the high-dimensional longitudinal data exhibit complex temporospatial dependence: the spatial dependence among the components of each high-dimensional measurement at a particular time point, and the temporal dependence among different high-dimensional measurements collected at different time points. For example, the functional magnetic resonance imaging (fMRI) data are collected by repeatedly measuring the $p$ blood oxygen level-dependent (BOLD) responses from the brains over $T$ times while a small number of subjects are given some task to perform ($p$, $T$ and $n$ are typically at the order of $100,000$, $100$ and $10$, respectively). The fMRI data are characterized by the spatial dependence between the BOLD response in one voxel and a large number of responses measured at neighboring voxels at one time, and the temporal dependence among the BOLD responses of the same subject repeatedly measured at different time points (Ashby, 2011).
This article aims to develop a data-driven and nonparametric method to detect and
identify temporal changes in a course of high-dimensional time dependent data. Specifically, letting $X_{it}=(X_{it1}, \cdots, X_{itp})^{\prime}$ be a $p$-dimensional random vector observed for the $i$-th subject ($i=1, \cdots, n$) at time $t$ $(t=1, \cdots, T)$, we are interested in testing
\begin{eqnarray}
&H_0&: \mu_1=\cdots=\mu_T, \qquad \mbox{vs.} \nonumber\\
&H_1&: \mu_1=\cdots=\mu_{\tau_1}\ne \mu_{\tau_1+1}=\cdots=\mu_{\tau_q}\ne \mu_{\tau_q+1}=\cdots=\mu_T, \label{Hypo}
\end{eqnarray}
where $\mu_t$ $(t=1, \cdots, T)$ is a $p$-dimensional population mean vector and $1 \le \tau_1 < \cdots < \tau_q <T$ are $q$ ($q<\infty$) unknown locations of change-points. If the null hypothesis is rejected, we will further estimate the locations of change-points.
The above hypotheses assume that all the individuals come from the same population with
the same mean vectors and change-points. In many applications such as fMRI studies, it is more meaningful to allow the responding mechanism to be different across subjects. This motivates us to further generalize the above hypotheses to (\ref{general-hypo}) where the whole population consists of $G$ ($G>1$) groups, and each group has its own unique means and change-points. A mixture model is proposed to accommodate such group effect (the details will be introduced in Section 4).
The classical multivariate analysis of variance (MANOVA) assumes that there exists a finite number ($T<\infty$) of independent normal populations with mean vectors $\mu_1,\cdots,\mu_T$
and common covariance $\Sigma$. In the classical setting with $p<n$, the likelihood ratio test (Wilks, 1932) and Hotelling's $T^2$ test are commonly applied.
When $p>n$, Dempster (1958, 1960) first considered the MANOVA problem in the two-sample case. Since then, more methods have been developed in the literature.
For instance, Bai and Saranadasa (1996) proposed a test by assuming $p/n$ is a finite constant.
Chen and Qin (2010) further improved the test in Bai and Saranadasa (1996) by proposing a test statistic formulated through the $U$-statistics.
See also Schott (2007) and Srivastava and Kubokawa (2013).
Recently, Wang, Peng and Li (2015) proposed a new multivariate test which is able to accommodate heavier tail distributed data.
Paul and Aue (2014) discussed the applications of random matrix theory in the MANOVA problem. Readers are referred to Fujikoshi et al. (2011) and Hu et al. (2015) for excellent reviews.
There exist several significant differences between the hypotheses (\ref{Hypo}) considered in this article and the classical MANOVA problem. First, the number of mean vectors $T$
in (\ref{Hypo}) is allowed to diverge to infinity, whereas the typical MANOVA considers the comparison of a finite number of mean vectors. Second, the data considered in this article exhibit complex temporal and spatial
dependence.
However, the MANOVA problem typically considers the inference for independent samples without taking into account temporal dependence. Finally, the classical MANOVA problem assumes the homogeneity among subjects but this paper considers the mixture model to accommodate the group effect such that each group is allowed to have its own mean vectors and change-points. Based on the above facts, all of the aforementioned MANOVA methods cannot be applied to the hypotheses (\ref{Hypo}).
In this paper, we propose a new testing procedure for the hypotheses (\ref{Hypo}) under the ``large $p$, large $T$ and small $n$'' paradigm. Most importantly, it takes into account both the spatial dependence among different components of $X_{it}$ and the temporal dependence between $X_{it}$ and $X_{is}$ collected at time points $t \ne s$. The proposed test statistic is constructed in two steps.
In the first step, test statistics are constructed at each $t\in\{1,\cdots, T-1\}$ to distinguish the null from the alternative. In the second step, we choose the maximum of the $T-1$ statistics from the first step to make the test free of any tuning parameters and further improve the power. Under some regularity conditions, the maximized statistic is shown to follow the Gumbel distribution if both $T$ and $p$ diverge as $n$ goes to infinity.
When the null hypothesis of (\ref{Hypo}) is rejected, we further propose a binary segmentation method to identify all the change-points $1 \le \tau_1< \cdots < \tau_q<T$. The proposed method is shown to be consistent for
the change-point identification by allowing $p$ and $T$ to increase as $n$ increases. Moreover, the rate of convergence is established for the proposed change-point estimator, which explicitly includes the effect of the dimension
$p$, time $T$, sample size $n$ as well as the signal-to-noise ratio.
It is worth mentioning that the current work is different from the recent literature on change-point identification under high-dimensionality in several important ways. First, we consider the identification of high-dimensional mean changes that are common to a subgroup of subjects such that inference can be made for a certain population, whereas existing work (e.g., Chen and Zhang, 2015; Jirak, 2015) focuses on change-point identification for high-dimensional time series or panel data with only one subject ($n=1$). Consequently, the proposed method can establish the consistency of the change-point estimators rather than the ratio consistency (Jirak, 2015). Second, compared with Chen and Zhang (2015) and Jirak (2015), the proposed binary segmentation is computationally efficient. No resampling or simulation methods are needed to find the critical values for the change-point identification. Finally, the current work takes into account both temporal and spatial dependence, and the assumptions on dependence structures are very mild. This is different from Chen and Zhang (2015), who assume no temporal dependence, and Jirak (2015), who imposes
some spatial dependence that requires a natural ordering of $p$ random variables in $X_{it}$.
The rest of the paper is organized as follows. Section 2 introduces the temporal homogeneity test for the equality of high-dimensional mean vectors at a large number of time points. Its theoretical properties are also investigated. Section 3
proposes a change-point identification estimator whose rate of convergence is derived. To further identify multiple change-points, we consider a binary segmentation algorithm, which is shown to be consistent. Section 4 extends the
established temporal homogeneity test and change-point identification method to the mixture model. Simulation experiments and a case study are conducted in Sections 5 and 6 to demonstrate the empirical performance of the proposed
methods. A brief discussion is given in Section 7. All technical details are relegated to the Appendix. Some technical lemmas and additional simulation results are included in the supplementary material.
\section{Temporal Homogeneity Test}
\subsection{Testing Statistic}
We now propose a test statistic for the hypotheses (\ref{Hypo}). Toward this end, for any $t \in \{1, \cdots, T-1\}$, we first quantify the difference between the two sets of mean vectors $\{\mu_{s_1}\}_{s_1=1}^{t}$ and $\{\mu_{s_2}\}_{s_2=t+1}^{T}$ by defining the measure
\begin{equation}
M_t=h^{-1}(t)\sum_{s_1=1}^t \sum_{s_2=t+1}^T (\mu_{s_1}-\mu_{s_2})^{\prime} (\mu_{s_1}-\mu_{s_2}), \label{pop-mean}
\end{equation}
where the scale function $h(t)=t(T-t)$. From its definition, $M_t$ is an average of $t(T-t)$ terms, each of which is a squared Euclidean distance between two population mean vectors chosen before and after a specific
$t \in \{1, \cdots, T-1\}$.
Since $M_t=0$ under $H_0$ and $M_t \ne 0$ under $H_1$, it can be used to distinguish the alternative from the null hypothesis. Another advantage of proposing $M_t$ is that it always attains its maximum at one of the change-points $\{\tau_1, \cdots, \tau_q\}$, as shown in Lemma 3 in the supplementary material. Thus, it can also be used as a measure for identifying change-points when $H_0$ is rejected (details will be covered in Section 3). Although there exist other measures for the hypotheses (\ref{Hypo}), some of them might not be designed for identifying change-points.
For example, the test statistic of Schott (2007) was based on the measure
$S_{1T}=T\sum_{s=1}^T(\mu_s-\bar{\mu})^{\prime}(\mu_s-\bar{\mu})=\sum_{1\leq s_1<s_2\leq T} (\mu_{s_1}-\mu_{s_2})^{\prime}(\mu_{s_1}-\mu_{s_2})$ where
$\bar{\mu}=\sum_{s=1}^T\mu_s/T$. It can be shown that $S_{1T}=h(t)M_t+S_{1t}+S_{(t+1)T}$. Note that $S_{1t}$ measures the distance among mean vectors before time $t$
and $S_{(t+1)T}$ measures the distance among mean vectors after time $t$. Neither $S_{1t}$ nor $S_{(t+1)T}$ is informative about the differences
between the mean vectors $\{\mu_{s_1}\}_{s_1=1}^{t}$ and $\{\mu_{s_2}\}_{s_2=t+1}^{T}$.
In practice, $M_t$ is unknown. Given a random sample $\{X_{it}=(X_{it1}, \cdots, X_{itp})^{\prime}, i=1, \cdots, n \,\, \mbox{and} \,\, t=1, \cdots, T\}$, it can be estimated by
\begin{eqnarray*}
\hat{M}_t=\frac{1}{h(t)n(n-1)}\sum_{s_1=1}^t \sum_{s_2=t+1}^T \Biggl({\sum_{i\ne j}^nX_{is_1}^{\prime}X_{js_1}}+{\sum_{i\ne j}^nX_{is_2}^{\prime}X_{js_2}}-2{\sum_{i\ne j}^nX_{is_1}^{\prime}X_{js_2}} \Biggr).
\end{eqnarray*}
Some elementary derivations show that $\mbox{E}(\hat{M}_t)=M_t$. Thus, $\hat{M}_t$ is chosen to be the test statistic for the hypotheses (\ref{Hypo}).
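To make the estimator concrete, the identity $\sum_{i\ne j} X_{is}^{\prime}X_{jt} = (\sum_i X_{is})^{\prime}(\sum_j X_{jt}) - \sum_i X_{is}^{\prime}X_{it}$ lets $\hat{M}_t$ be computed without an explicit double loop over subjects. Below is a minimal sketch in Python; the array layout and function name are our own, not the paper's.

```python
import numpy as np

def m_hat(X, t):
    """U-statistic estimate of M_t; X has shape (n, T, p), t is the 1-based split point."""
    n, T, p = X.shape
    h = t * (T - t)

    def cross(U, V):
        # sum_{i != j} U_i' V_j = (sum_i U_i)'(sum_j V_j) - sum_i U_i' V_i
        return U.sum(axis=0) @ V.sum(axis=0) - np.einsum('ik,ik->', U, V)

    total = 0.0
    for s1 in range(t):             # s1 = 1, ..., t      (0-based)
        for s2 in range(t, T):      # s2 = t+1, ..., T    (0-based)
            A, B = X[:, s1, :], X[:, s2, :]
            total += cross(A, A) + cross(B, B) - 2.0 * cross(A, B)
    return total / (h * n * (n - 1))
```

On noiseless data with a single mean shift of squared norm $d^2$ after time $t$, the sketch returns $d^2$ exactly, consistent with $\mbox{E}(\hat{M}_t)=M_t$.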
If $T=2$, the above statistic reduces to the two-sample U-statistic studied by Chen and Qin (2010) for testing the equality of two population means. There are, however, significant differences between the settings
considered in the current paper and those in Chen and Qin (2010). First, instead of two independent samples as in Chen and Qin (2010), we consider high-dimensional time dependent data for testing the equality of more than two
population mean vectors. There are two types of dependence to account for: the spatial dependence across the components of $X_{it}$ at a specific time $t$ and the temporal dependence between
$X_{is}$ and $X_{it}$ with $s \ne t$. Second, although the dimension is much larger than the sample size in Chen and Qin (2010), $T$ there is fixed and equal to 2. Here, we consider the ``large $p$, large $T$ and small $n$" paradigm
in the sense that both the dimension $p$ and the number of time points $T$ are much larger than the sample size $n$.
We model $X_{it}$ using a general factor model:
\begin{equation}
X_{it}=\mu_t+\Gamma_t Z_{i} \qquad \mbox{for} \quad i=1, \cdots, n \quad \mbox{and} \quad t=1, \cdots, T, \label{model0}
\end{equation}
where $\Gamma_t$ is a $p \times m$ matrix with $m\ge p$ and $\{Z_i\}_{i=1}^n$ are $m$-variate i.i.d. random vectors satisfying $\mbox{E} (Z_i)=0$, $\mbox{Var} (Z_i)= I_m$, the $m\times m$ identity matrix. If we write $Z_i=(z_{i1}, \cdots, z_{im})^{\prime}$ and let $\Delta$ be a finite constant, we further assume that
\begin{equation}
\mbox{E}(z_{ik}^4)=3+\Delta, \; \mbox{and} \; \mbox{E}(z_{ik_1}^{l_1}z_{ik_2}^{l_2}\cdots z_{ik_h}^{l_h})=\mbox{E}(z_{ik_1}^{l_1})\mbox{E}(z_{ik_2}^{l_2})\cdots \mbox{E}(z_{ik_h}^{l_h}),\label{factor}
\end{equation}
where $h$ is a positive integer such that $\sum_{j=1}^h l_j \le 8$ and $l_1\ne l_2 \ne \cdots \ne l_h$.
The above model is designed to accommodate high-dimensional time-dependent data. First, (\ref{model0}) enables us to incorporate both the spatial and temporal dependence of the data. Let $\delta_{ij}=1$ if $i=j$, and $0$ otherwise. From (\ref{model0}), it immediately follows that
\begin{equation}
\mbox{Cov}(X_{is}, X_{jt})=\delta_{ij}\Gamma_s \Gamma_t^{\prime}\equiv \delta_{ij}\Xi_{st}. \nonumber
\end{equation}
Here $\Xi_{st}$ quantifies the temporal correlation between $X_{is}$ and $X_{it}$ for the same individual measured at different time points $s$ and $t$. Moreover, $\Xi_{st}$ becomes the covariance matrix $\Sigma_t$ if $s=t$, describing the spatial dependence of $X_{it}$ at time $t$. Second, similar to Chen and Qin (2010) and Bai and Saranadasa (1996), the moment condition (\ref{factor}) allows us to analyze data beyond the commonly assumed Gaussian distribution.
Define
\begin{align}
A_{0t}&=\!\sum_{r_1=1}^t\sum_{r_2=t+1}^T (\Gamma_{r_1}-\Gamma_{r_2})^{\prime}(\Gamma_{r_1}-\Gamma_{r_2})\;\mbox{and}\;\nonumber\\
A_{1t}&=\!\sum_{r_1=1}^t\sum_{r_2=t+1}^T (\mu_{r_1}-\mu_{r_2})^{\prime}(\Gamma_{r_1}-\Gamma_{r_2}). \label{variance}
\end{align}
The following proposition summarizes the variance of the test statistic $\hat{M}_t$.
\begin{proposition}
Under (\ref{model0}),
\begin{eqnarray}
{\rm Var}(\hat{M}_t)\equiv \sigma_{nt}^2=h^{-2}(t)\Big\{\frac{2}{n(n-1)}{\rm tr}(A_{0t}^2)+\frac{4}{n}||A_{1t}||^2\Big\},\label{mean-var}
\end{eqnarray}
where $A_{0t}$ and $A_{1t}$ are specified in (\ref{variance}), and $\| \cdot \|$ denotes the vector $l^2$-norm.
\end{proposition}
In particular, $A_{1t}$ becomes the $1\times m$ zero vector under $H_0$ of (\ref{Hypo}). Proposition 1 then says that the variance of $\hat{M}_t$ under $H_0$ is
$\sigma_{nt,0}^2=2\mbox{tr}(A_{0t}^2)/\{h^{2}(t)n(n-1)\}$.
\subsection{Asymptotic Distribution of the Proposed Test Statistic }
To establish the asymptotic normality of the proposed test statistic $\hat{M}_t$ at any $t \in \{1, \cdots, T-1 \}$, we require the following condition.
(C1). As $n \to \infty$, $p \to \infty$ and $T \to \infty$, $\mbox{tr}(A_{0t}^4)=o\{\mbox{tr}^2(A_{0t}^2)\}$. In addition, under $H_1$, $A_{1t}A_{0t}^2 A_{1t}^{\prime}=o\{\mbox{tr}(A_{0t}^2)\,\|A_{1t}\|^2\}$.
Imposing $\mbox{tr}(A_{0t}^4)=o\{\mbox{tr}^2(A_{0t}^2)\}$ generalizes condition (3.6) in Chen and Qin (2010) from the fixed $T$ to the diverging $T$ case. Given that $A_{1t}A_{0t}^2 A_{1t}^{\prime}\leq (\max_{k}\lambda_k)\|A_{1t}\|^2$, where the $\lambda_k$'s are the eigenvalues of $A_{0t}^2$, we have $A_{1t}A_{0t}^2 A_{1t}^{\prime}=o\{\mbox{tr}(A_{0t}^2)\,\|A_{1t}\|^2\}$ whenever $\max_{k}\lambda_k=o\{\mbox{tr}(A_{0t}^2)\}$.
If the number of non-zero $\lambda_k$'s diverges and all the non-zero $\lambda_k$'s are bounded, condition (C1) is easily satisfied.
\begin{theorem}
\label{th1}
Under (\ref{model0}), (\ref{factor}) and condition (C1), as $n \to \infty$, $p \to \infty$ and $T \to \infty$,
\[
(\hat{M}_t-M_t)/{\sigma_{nt}} \xrightarrow{d} N(0, 1),
\]
where $\sigma_{nt}$ is defined in (\ref{mean-var}).
\end{theorem}
In particular, under $H_0$, the variance of $\hat{M}_t$ is $\sigma_{nt,0}^2=2\mbox{tr}(A_{0t}^2)/\{h^{2}(t)n(n-1)\}$ with $A_{0t}$ given in (\ref{variance}), and $\hat{M}_t/{\sigma_{nt,0}} \xrightarrow{d} N(0, 1)$.
In practice, $\sigma_{nt,0}^2$ is unknown. To implement a testing procedure, we estimate $\sigma_{nt,0}^2$ by
\begin{eqnarray}
\hat{\sigma}_{nt,0}^2=\frac{2}{h^2(t)n(n-1)}\sum_{r_1,s_1=1}^t\sum_{r_2,s_2=t+1}^T \sum_{a,b,c,d \in \{1,2\}}(-1)^{|a-b|+|c-d|}\widehat{\mbox{tr}(\Gamma_{r_b}^{\prime}\Gamma_{r_a}\Gamma_{s_c}^{\prime}\Gamma_{s_d})}, \nonumber
\end{eqnarray}
where, by defining $P_n^4=n(n-1)(n-2)(n-3)$ to be the permutation number,
\begin{align}
\widehat{\mbox{tr}(\Gamma_{r_b}^{\prime}\Gamma_{r_a}\Gamma_{s_c}^{\prime}\Gamma_{s_d})}=&
\frac{1}{P_n^4} \sum_{i \ne j \ne k \ne l}^n (X_{i r_a}^{\prime} X_{j r_b} X_{i s_c}^{\prime} X_{j s_d}-X_{i r_a}^{\prime} X_{j r_b} X_{i s_c}^{\prime} X_{k s_d}\nonumber\\
&\qquad-X_{i r_a}^{\prime} X_{j r_b} X_{k s_c}^{\prime} X_{j s_d}+X_{i r_a}^{\prime} X_{j r_b} X_{k s_c}^{\prime} X_{l s_d}). \label{variance-est}
\end{align}
Note that the computational cost of $\hat{\sigma}_{nt,0}^2$ is not an issue, for two reasons. First, some simple algebra simplifies the summations so that the computational complexity is of order $O(n^2T^2p)$.
Second, the computational cost is mainly driven by the sizes of $n$ and $T$ rather than $p$, and $n$ and $T$ are
typically not large in fMRI and genomics applications.
The ratio consistency of $\hat{\sigma}_{nt,0}^2$ is established by the following theorem.
\begin{theorem}
Assume the same conditions as in Theorem 1. As $n \to \infty$, $p \to \infty$ and $T \to \infty$,
\[
{\hat{\sigma}_{nt,0}^2}/{\sigma_{nt,0}^2}-1=O_p\Big\{ n^{-\frac{1}{2}}{\rm tr}^{-1}(A_{0t}^{2}){\rm tr}^{\frac{1}{2}}(A_{0t}^4)+ n^{-1}\Big\}=o_p(1).
\]
\end{theorem}
Theorems 1 and 2 lead to a testing procedure that rejects $H_0$ if $\hat{M}_t/ \hat{\sigma}_{nt,0} > z_{\alpha}$, where $z_{\alpha}$ is the upper $\alpha$ quantile of $\mbox{N}(0, 1)$. To implement the testing procedure, we also need to specify $t$, which can be thought of as a tuning parameter. Although the type I error of the test is not affected by the choice of $t \in \{1, \cdots, T-1\}$, the power can differ substantially across different $t$.
To make our testing procedure free of any tuning parameter, we consider the following test statistic for the hypotheses (\ref{Hypo}):
\begin{equation}
\hat{\mathscr{M}}=\max_{0< t/T <1} {\hat{M}_t}/{\hat{\sigma}_{nt,0}}, \label{max-test}
\end{equation}
which can be readily shown to attain better power than $\hat{M}_t/\hat{\sigma}_{nt,0}$ at any fixed $t \in \{1, \cdots, T-1\}$ (see the paragraph after Theorem 3 for a proof).
To establish the asymptotic distribution of $\hat{\mathscr{M}}$, we also need (C2) in addition to (C1).
(C2). There exists $\phi(k)$ satisfying $\sum_{k=1}^T\phi^{1/2}(k)<\infty$ such that for any $r, s \in \{1, \cdots, T\}$, $\mbox{tr}(\Xi_{r s}\Xi_{r s}^\prime)\asymp \phi(|r-s|)\mbox{tr}(\Sigma_{r}\Sigma_{s})$. Here $ a \asymp b$ means that
$a$ and $b$ are of the same order.
Condition (C2) imposes a mild assumption on the temporal dependence among the time series $\{X_{it}\}_{t=1}^T$.
It essentially requires that the time series be weakly dependent, which ensures the tightness of the process $\hat{\sigma}_{nt,0}^{-1}\hat{M}_t$ (Billingsley, 1999).
To establish the weak convergence of $\hat{\mathscr{M}}$, we also define the correlation coefficient
$r_{nz, uv}=2\mbox{tr}(A_{0u}A_{0v})/\{n(n-1) h(u) h(v) \sigma_{nu,0} \sigma_{nv, 0}\}$ and its limit $r_{z, uv}=\lim_{n \to \infty} r_{nz, uv}.$
\begin{theorem}
\label{th3}
Assume (\ref{model0}), (\ref{factor}), (C1), (C2) and $H_0$ of (\ref{Hypo}). As $n\to\infty$ and $p \to \infty$, (i) if $T$ is finite,
$\hat{\mathscr{M}} \xrightarrow{d} \max_{0< t/T < 1} Z_t,$
where $Z_t$ is the $t$-th component of $Z=(Z_1, \cdots, Z_{T-1})^{\prime} \sim \mbox{N}(0, R_Z)$ with $R_Z=(r_{z, uv})$;
(ii) if $T \to \infty$ and the maximum eigenvalue of $R_Z$ is bounded, then
$P(\hat{\mathscr{M}}\leq \sqrt{2\log(T)-\log\log(T)+x})\to \exp\Big\{-(2\sqrt{\pi})^{-1}\exp(-x/2)\Big\}.$
\end{theorem}
For the fMRI data analysis, $T$ is typically large and we can apply part (ii) of Theorem \ref{th3}. Specifically, with $x_{\alpha}=-2\log\{-2\sqrt{\pi}\log (1-\alpha)\}$ defined to be the upper $\alpha$ quantile of the Type I extreme value distribution,
an $\alpha$-level test rejects $H_0$ of (\ref{Hypo}) if
$\hat{\mathscr{M}}> {\mathscr{M}}_{\alpha}$ where ${\mathscr{M}}_{\alpha}=\sqrt{2\log(T)-\log\log(T)+x_{\alpha}}$. Moreover, from Theorems \ref{th1}-\ref{th3}, the lower bound of the power of the test based on $\hat{\mathscr{M}}$ is
\begin{align}
\mbox{P}(\hat{\mathscr{M}} >\mathscr{M}_{\alpha})\ge\max_t\mbox{P}(\frac{\hat{M}_t}{\sigma_{nt,0}}>\mathscr{M}_{\alpha})
=\max_t\Phi \Big(-\frac{\sigma_{nt, 0}}{\sigma_{nt}}\mathscr{M}_{\alpha}+\frac{M_t}{\sigma_{nt}} \Big),
\end{align}
where $\Phi(\cdot)$ is the cumulative distribution function of the standard normal. If $\log(T)=o(M_t^2/\sigma_{nt}^2)$ for all $t$, the right-hand side of the above expression approaches the maximum power attainable by the tests based on the individual $\hat{M}_t$'s.
This indicates that the test based on $\hat{\mathscr{M}}$ is more powerful than the test based on the asymptotic normality of $\hat{M}_t$ at a single $t$.
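For concreteness, the critical value ${\mathscr{M}}_{\alpha}$ from part (ii) of Theorem 3 can be tabulated directly; a small sketch (the helper name is our own):

```python
import math

def critical_value(T, alpha=0.05):
    """Gumbel-based critical value sqrt(2 log T - log log T + x_alpha),
    with x_alpha = -2 log{-2 sqrt(pi) log(1 - alpha)} as in the text."""
    x_alpha = -2.0 * math.log(-2.0 * math.sqrt(math.pi) * math.log(1.0 - alpha))
    return math.sqrt(2.0 * math.log(T) - math.log(math.log(T)) + x_alpha)
```

For example, `critical_value(100)` is approximately 3.33, and the threshold grows slowly, like $\sqrt{2\log T}$, as $T$ increases.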
\section{Change-points Identification}
When $H_0$ of (\ref{Hypo}) is rejected, it is often of interest to further identify the change-points.
To simplify the exposition, we first consider the simplest case with only one change-point $\tau \in \{1,\cdots, T-1\}$ satisfying $\tau/T=\kappa$ with $0<\kappa<1$.
It can be shown that $M_t$ attains its maximum at $\tau$, which motivates us to identify the change-point $\tau$ by the following estimator
\begin{equation}
\hat{\tau}=\arg \max_{0 < t/T < 1} \hat{M}_t. \label{est_CP}
\end{equation}
Let ${v}_{\max}=\max_{1 \le t \le T-1}\max \Big\{\sqrt{\mbox{tr}(\Sigma_t^2)}, \sqrt{n (\mu_{1}-\mu_{T})^{\prime}\Sigma_t(\mu_{1}-\mu_{T})} \Big\}$
and
$\delta^2=(\mu_{1}-\mu_{T})^{\prime}(\mu_{1}-\mu_{T})$.
The following theorem establishes the rate of convergence for the change-point estimator $\hat{\tau}$.
\begin{theorem}
Assume that a change-point $\tau \in \{1,\cdots, T-1\}$ satisfies $\tau/T=\kappa$ with $0<\kappa<1$, and
$(\mu_{1}-\mu_{T})^{\prime}\Xi_{r s}(\mu_{1}-\mu_{T})\asymp \phi(|r-s|)(\mu_{1}-\mu_{T})^{\prime}\Sigma_{r}(\mu_{1}-\mu_{T})$,
where $\phi(\cdot)$ is defined in condition (C2).
Under (\ref{model0}), (\ref{factor}), (C1) and (C2), as $n \to \infty$,
\[
\hat{\tau}-\tau=O_p\Big\{\sqrt{T\log(T)}\,\,{v}_{\max}/(n \,\delta^2) \Big\}.
\]
\end{theorem}
Theorem 4 shows that $\hat{\tau}$ is consistent for $\tau$ if $n \delta^2 /\{{v}_{\max} \sqrt{T\log(T)}\} \to \infty$, where $n\delta^2$ is a measure of the signal and ${v}_{\max}$ is associated with the noise. Most importantly, it explicitly demonstrates the contributions of the dimension $p$, the number of time points $T$ and the sample size $n$ to the rate of convergence. First, if both $p$ and $T$ are fixed, $\hat{\tau}-\tau=O_p(n^{-1/2})$ as $n \to \infty$. Second, if $p$ is fixed but $T$ diverges as $n$ increases, $\hat{\tau}-\tau=O_p(\sqrt{T\log(T)/n})$.
Last but not least, if both $p$ and $T$ diverge as $n$ increases,
the convergence rate can be faster than $O_p(\sqrt{T\log(T)/n})$. To appreciate this, we consider a special setting where $X_{it}$ in (\ref{model0}) has the identity covariance $\Sigma_t=I_p$,
the non-zero components of $\mu_1-\mu_T$ are equal and fixed, and the number of non-zero components is $p^{1-\beta}$ for $\beta \in (0,1)$. Under this setting,
\[
\hat{\tau}-\tau=O_p\Biggl(\frac{\{T\log(T)\}^{1/2}}{\min\{np^{1/2-\beta}, n^{{1}/{2}} \,p^{(1-\beta)/2}\}} \Biggr),
\]
which is faster than the rate $O_p\{\sqrt{T\log(T)/n}\}$ if $n^{1/2}p^{1/2-\beta}\to\infty$.
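The special-case rate can be checked by direct substitution: with $\Sigma_t=I_p$ we have $\mbox{tr}(\Sigma_t^2)=p$ and $(\mu_1-\mu_T)^{\prime}\Sigma_t(\mu_1-\mu_T)=\delta^2\asymp p^{1-\beta}$, so

```latex
\begin{align*}
v_{\max} &\asymp \max\bigl\{\sqrt{p},\; \sqrt{n\,\delta^2}\bigr\}
          = \max\bigl\{p^{1/2},\; n^{1/2}p^{(1-\beta)/2}\bigr\},\\
\frac{\sqrt{T\log(T)}\, v_{\max}}{n\,\delta^2}
         &\asymp \sqrt{T\log(T)}\,
           \max\Bigl\{\frac{1}{n\,p^{1/2-\beta}},\;
                      \frac{1}{n^{1/2}p^{(1-\beta)/2}}\Bigr\}
          = \frac{\sqrt{T\log(T)}}{\min\{n\,p^{1/2-\beta},\; n^{1/2}p^{(1-\beta)/2}\}},
\end{align*}
```

in agreement with the rate displayed above.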
Next, we consider the case of more than one change-point. To identify these change-points, we first define some notation. Let $\mathbb{S}= \{1 \le \tau_1 < \cdots < \tau_q <T\}$ be the set containing all $q$ ($q \ge 1$) change-points. For any $t_1, t_2 \in \{1, \cdots, T\}$ satisfying $t_1 < t_2$, let $\hat{\mathscr{M}}[t_1, t_2]$ and $\mathscr{M}_{\alpha_n}[t_1, t_2]$ denote the maximum test statistic in (\ref{max-test}) and the corresponding upper $\alpha_n$ quantile, calculated based on the data collected between the time points $t_1$ and $t_2$. Lemma 3 in the supplementary material shows that $M_t$ in (\ref{pop-mean}) always attains its maximum at one of the change-points, which motivates us to identify all change-points by the following binary segmentation algorithm (Venkatraman, 1992).
\begin{enumerate}
\item[(1).] Check if $\hat{\mathscr{M}}[1, T] \le \mathscr{M}_{\alpha_n}[1, T]$. If yes, then no change-point is identified and stop. Otherwise, a change-point $\hat{\tau}_{(1)}$ is selected by
$\hat{\tau}_{(1)}=\arg \max_{1\le t \le T-1} \hat{M}_t$,
and included in $\hat{\mathbb{S}}=\{ \hat{\tau}_{(1)} \}$;
\item[(2).] Treat $\{1, \hat{\tau}_{(1)}, T\}$ as new ending points and first check if $\hat{\mathscr{M}}[1, \hat{\tau}_{(1)}] \le \mathscr{M}_{\alpha_n}[1, \hat{\tau}_{(1)}]$. If yes, no change-point is selected from time 1 to $\hat{\tau}_{(1)}$. Otherwise, one change-point is selected by
$\hat{\tau}^1_{(2)}=\arg \max_{1\le t \le \hat{\tau}_{(1)}-1} \hat{M}_t$,
and $\hat{\mathbb{S}}$ is updated by adding $\hat{\tau}^1_{(2)}$. Next check if $\hat{\mathscr{M}}[\hat{\tau}_{(1)}+1, T] \le \mathscr{M}_{\alpha_n}[\hat{\tau}_{(1)}+1, T]$. If yes, no change-point is selected from time $\hat{\tau}_{(1)}+1$ to $T$. Otherwise, one change-point is selected by $\hat{\tau}^2_{(2)}=\arg \max_{\hat{\tau}_{(1)}+1 \le t \le T-1} \hat{M}_t$,
and $\hat{\mathbb{S}}$ is updated by including $\hat{\tau}^2_{(2)}$. If no change-point has been identified from either $[1, \hat{\tau}_{(1)}]$ or $[\hat{\tau}_{(1)}+1, T]$, then stop. Otherwise, rearrange $\hat{\mathbb{S}}$ by sorting its elements from smallest to largest and update the ending points to $\{1, \hat{\mathbb{S}}, T\}$;
\item[(3).] Repeat step 2 until no more change-points are identified within any time segment, and return the final set $\hat{\mathbb{S}}$ as an estimate of $\mathbb{S}$.
\end{enumerate}
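The three steps above amount to a short recursion. The sketch below is our own illustration, with user-supplied statistic, arg-max, and threshold routines, and zero-based half-open segments in place of the paper's one-based indexing:

```python
def binary_segmentation(stat, argmax_t, thresh, t1, t2, found=None):
    """Recursively search the half-open segment [t1, t2) for change-points.
    stat(a, b): maximum standardized statistic on [a, b);
    argmax_t(a, b): split point maximizing the statistic on [a, b);
    thresh(a, b): segment-specific critical value.
    A returned index marks the first time point of a new regime."""
    if found is None:
        found = []
    if t2 - t1 < 2 or stat(t1, t2) <= thresh(t1, t2):
        return sorted(found)                 # nothing detected: stop
    tau = argmax_t(t1, t2)                   # tau in {t1+1, ..., t2-1}
    found.append(tau)
    binary_segmentation(stat, argmax_t, thresh, t1, tau, found)
    binary_segmentation(stat, argmax_t, thresh, tau, t2, found)
    return sorted(found)


# Noiseless illustration with scalar piecewise-constant means:
mu = [0]*4 + [5]*4 + [0]*4

def M(t, a, b):    # average squared mean gap across the split at t within [a, b)
    return sum((mu[s1] - mu[s2])**2
               for s1 in range(a, t) for s2 in range(t, b)) / ((t - a)*(b - t))

stat     = lambda a, b: max(M(t, a, b) for t in range(a + 1, b))
argmax_t = lambda a, b: max(range(a + 1, b), key=lambda t: M(t, a, b))
thresh   = lambda a, b: 0.5
print(binary_segmentation(stat, argmax_t, thresh, 0, len(mu)))   # -> [4, 8]
```

Both regime boundaries are recovered, and each recursive call stops as soon as the segment statistic falls below its threshold.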
Define $\tau_0=1$ and $\tau_{q+1}=T$. Let $I_t$ be any time interval of the form $I_t=[\tau_i+1, \tau_j]$ with $i+1<j$ that contains at least one change-point, and define the smallest maximum signal-to-noise ratio among all such intervals to be $\mathscr{R}^*=\min_{I_t} \max_{\tau_i \in I_t} M[I_t]/\sigma_n[I_t]$, where $M[I_t]$ and $\sigma_n[I_t]$ are (\ref{pop-mean}) and (\ref{mean-var}) restricted to $I_t$, respectively. To establish the consistency of $\hat{\mathbb{S}}$ obtained from the above binary segmentation algorithm, we need the following condition in addition to (C1) and (C2).
(C3). As $T \to \infty$, $\tau_i/T$ converges to $\kappa_i$ for $i=1, \cdots, q$ with fixed $q \ge 1$, satisfying $0 < \kappa_1< \cdots < \kappa_q <1$.
\begin{theorem}
Assume (\ref{model0}), (\ref{factor}), (C1)-(C3), and that $\mathscr{R}^*$ diverges such that the upper $\alpha_n$-quantile of the Gumbel distribution satisfies
$\mathscr{M}_{\alpha_n}=o(\mathscr{R}^*)$ as $\alpha_n \to 0$. Furthermore, assume $v_{\max}[I_t]=o\{n\delta^2[I_t]/\sqrt{T\log(T)}\}$ for all $I_t$ containing at least one change-point. Then,
$\hat{\mathbb{S}} \xrightarrow{p} \mathbb{S}$ as $n \to \infty$ and $T \to \infty$.
\end{theorem}
\section{An Extension to Mixture Models}
Thus far we have focused on temporal homogeneity detection under the assumption that all subjects in the sample come from a population with the same change-points. In fMRI experiments, if different subjects choose different strategies
to solve the same task, the patterns activated by the stimuli will differ across subjects (Ashby, 2011). Analytically, it is more realistic to assume that subjects share the same activation pattern within each group but show
different patterns across groups.
In this section, we will generalize the approaches developed in the last two sections to accommodate such group effect. Instead of the model (\ref{model0}) considered in Section 2, we assume that the data follow a mixture model
\begin{equation}
X_{it}=\sum_{g=1}^G \Lambda_{ig}\mu_{gt}+\Gamma_t Z_i,
\label{mixture-of-means}
\end{equation}
where, independently of $\{Z_i \}_{i=1}^n$, $(\Lambda_{i1},\cdots, \Lambda_{iG})$ follows a multinomial distribution with parameters 1 and $p=(p_1,\cdots,p_G)$.
This implies that $\sum_{g=1}^G\Lambda_{ig}=1$ with $\Lambda_{ig}\in\{0,1\}$, and
$\mbox{P}(\Lambda_{ig}=1)=p_g$ satisfying $\sum_{g=1}^G p_g=1$, where the number of groups is $G \ge 1$.
Note that the above model implies that the $i$-th subject belongs to exactly one of the $G$ groups.
The mixture model is more general because (\ref{model0}) is the special case of (\ref{mixture-of-means}) with only one group ($G=1$).
The mixture model (\ref{mixture-of-means}) is also flexible because it allows each group to have its own population mean vectors $\{\mu_{gt}\}_{t=1}^T$ for $g=1, \cdots, G$. In analogy to (\ref{Hypo}), we want to know whether there exist change-points within some groups by testing
\begin{align}
H_0^*: \mu_{g1}&=\mu_{g2}=\cdots=\mu_{gT}\;\;\mbox{for all $1\leq g\leq G$ \quad vs.}\nonumber \\
H_1^*: \mu_{g1}&=\cdots=\mu_{g\tau_1^{(g)}}\neq \mu_{g(\tau_1^{(g)}+1)}=\cdots=\mu_{g\tau_{q_g}^{(g)}}\neq \mu_{g(\tau_{q_g}^{(g)}+1)}=\cdots=\mu_{gT}\nonumber\\
&\;\mbox{for some $g$}.\label{general-hypo}
\end{align}
If $H_0^*$ is rejected, we further identify $\{\tau_1^{(g)},\tau_2^{(g)},\cdots,\tau_{q_g}^{(g)}\}_{g=1}^G$, the collection of $q=\sum_{g=1}^G q_g$ change-points from the $G$ groups.
Toward this end, we first evaluate the mean and variance of the test statistic $\hat{M}_t$ under the mixture model (\ref{mixture-of-means}). Similar to Proposition 1, the mean is $E(\hat{M}_t)=\tilde{M}(t)=
h^{-1}(t)\sum_{r_1=1}^t\sum_{r_2=t+1}^{T}(\tilde{\mu}_{r_1}-\tilde{\mu}_{r_2})^{\prime}(\tilde{\mu}_{r_1}-\tilde{\mu}_{r_2})$ with $\tilde{\mu}_{r_i}=\sum_{g=1}^G p_g\mu_{g r_i}$ for $i=1, 2$. The variance of $\hat{M}_t$ is
\begin{equation}
\mbox{Var}(\hat{M}_t)\equiv \tilde{\sigma}_{nt}^2= \frac{2}{n(n-1)h^{2}(t)}\{\mbox{tr}(A_{0t}^2)+\tilde{A}_{3t}\}+\frac{4}{nh^{2}(t)}\{||\tilde{A}_{1t}||^2+\tilde{A}_{2t}\}, \label{mix_var}
\end{equation}
where $A_{0t}$ is defined in (\ref{variance}), $\tilde{A}_{1t}=\sum_{r_1=1}^t \sum_{r_2=t+1}^T(\tilde{\mu}_{r_1}-\tilde{\mu}_{r_2})^{\prime}(\Gamma_{r_1}-\Gamma_{r_2})$. In addition, with $\delta_{g_1 g_2 r_i}=\mu_{g_1 r_i}-\mu_{g_2 r_i}$ for $i=1, 2$,
\begin{align}
\tilde{A}_{2t}&=\sum_{g_1<g_2}^G p_{g_1} p_{g_2} \Big\{\sum_{r_1=1}^t \sum_{r_2=t+1}^T(\delta_{g_1 g_2 r_1}-\delta_{g_1 g_2 r_2})^{\prime}(\tilde{\mu}_{r_1}-\tilde{\mu}_{r_2}) \Big\}^2\;\;\mbox{and}\;\;\nonumber\\
\tilde{A}_{3t}&=\sum_{g_1<g_2, g_3<g_4}^G p_{g_1} p_{g_2}p_{g_3} p_{g_4} \Big\{\sum_{r_1=1}^t \sum_{r_2=t+1}^T(\delta_{g_1 g_2 r_1}-\delta_{g_1 g_2 r_2})^{\prime}(\delta_{g_3 g_4 r_1}-\delta_{g_3 g_4 r_2})
\Big\}^2.\nonumber
\end{align}
It is worth discussing some special cases of (\ref{mix_var}). First, if there is only one group ($G=1$), it can be shown that $\tilde{A}_{2t}=\tilde{A}_{3t}=0$, and $\tilde{A}_{1t}=A_{1t}$ defined in (\ref{variance}). Therefore, the variance formulated in Proposition 1 is a special case of the variance (\ref{mix_var}) under the mixture model. Second, under $H_0^*$ of (\ref{general-hypo}),
$\tilde{\sigma}_{nt,0}^2\equiv\mbox{Var}(\hat{M}_t)=2\mbox{tr}(A_{0t}^2)/\{n(n-1)h^{2}(t)\}$ because $\tilde{A}_{1t}=\tilde{A}_{2t}=\tilde{A}_{3t}=0$. The unknown $\tilde{\sigma}_{nt,0}^2$ can be estimated by
$$
\widehat{\tilde{\sigma}}_{nt,0}^2=\frac{2}{h^2(t)n^2(n-1)^2}\sum_{i \ne j}^n\Big\{\sum_{r_1=1}^t\sum_{r_2=t+1}^T \sum_{a,b\in \{1,2\}}(-1)^{|a-b|}
X_{i r_a}^{\prime} X_{j r_b}\Big\}^2.
$$
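Writing the inner double sum as $\sum_{r_1\le t}\sum_{r_2>t}(X_{ir_1}-X_{ir_2})^{\prime}(X_{jr_1}-X_{jr_2})$, the estimator $\widehat{\tilde{\sigma}}_{nt,0}^2$ can be evaluated with segment sums rather than a quadruple loop; a vectorized sketch (our own implementation, assuming a data array of shape $(n,T,p)$):

```python
import numpy as np

def sigma2_tilde(X, t):
    """Estimate of sigma^2_{nt,0} under the mixture model; t is the 1-based split point.
    Uses inner(i,j) = sum_{r1<=t} sum_{r2>t} (X_{i r1}-X_{i r2})'(X_{j r1}-X_{j r2})."""
    n, T, p = X.shape
    h = t * (T - t)
    d = np.einsum('isk,jsk->ijs', X, X)     # d[i, j, s] = X_is' X_js
    A = X[:, :t, :].sum(axis=1)             # A[i] = sum_{r <= t} X_ir
    B = X[:, t:, :].sum(axis=1)             # B[i] = sum_{r > t}  X_ir
    inner = ((T - t) * d[:, :, :t].sum(axis=2) + t * d[:, :, t:].sum(axis=2)
             - A @ B.T - B @ A.T)
    np.fill_diagonal(inner, 0.0)            # keep only the i != j terms
    return 2.0 * (inner ** 2).sum() / (h ** 2 * n ** 2 * (n - 1) ** 2)
```

The expansion of the inner product reproduces the four signed terms $(-1)^{|a-b|} X_{i r_a}^{\prime} X_{j r_b}$ in the display above, so this costs $O(n^2 T p)$ rather than a sum over all $(r_1, r_2)$ pairs per $(i,j)$.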
Similar to $\hat{\mathscr{M}}$ given by (\ref{max-test}), we define
$\tilde{\mathscr{M}}=\max_{1\le t \le T-1} \hat{M}_t/\widehat{\tilde{\sigma}}_{nt,0}$. The temporal homogeneity detection and
identification procedures developed in Sections 2 and 3 can be extended to testing the hypothesis in (\ref{general-hypo}) by replacing $\hat{\mathscr{M}}$ with $\tilde{\mathscr{M}}$.
Furthermore, the asymptotic results in Theorems 1--5 can be established for the mixture model (\ref{mixture-of-means}) under some regularity
conditions. Due to space limitations, we only demonstrate the empirical performance under the mixture model through simulation studies and leave exploration of the theoretical results to future study.
\section{Simulation Studies}
In this section, we evaluate the finite sample performance of the methods proposed in Sections 2--4.
\subsection{Test for the Homogeneity of Means }
We first evaluate the numerical performance of the test procedure proposed in Section 2.
The random sample $\{X_{it}\}$ for $i=1, \cdots, n$ and $t=1, \cdots, T$ was generated from the following
multivariate linear process
\begin{equation}
X_{it}=\mu_t+\sum_{l=0}^J Q_{lt}\,\epsilon_{i(t-l)}, \label{data_g}
\end{equation}
where $\mu_t$ is the $p$-dimensional population mean vector at time $t$, $Q_{lt}$ is a $p \times p$ matrix and $\epsilon_{it}$ is $p$-variate normally distributed with mean $0$ and identity covariance $\mbox{I}_p$. The model accounts for both the temporal dependence between $X_{it}$ and $X_{is}$ at $t\ne s$ and the spatial dependence among the $p$ components of $X_{it}$ at a specific time $t$. Specifically, it can be seen that
$\mbox{Cov}(X_{it}, X_{is})= \sum_{l=t-s}^J Q_{lt} Q_{(l-t+s)s}^{\prime}$ if $0 \le t-s\le J$ and $\mbox{Cov}(X_{it}, X_{is})=0$ otherwise.
Note that $J$ is used to control the level of dependence. As $J$ increases, the temporal dependence among $\{X_{it}\}_{t=1}^T$ becomes stronger.
In the simulation, we chose $J=2$ and $Q_{lt}=\{0.5^{|i-j|}\mbox{I}(|i-j|<p/2)/(J-l+1)\}$ for $i, j=1, \cdots, p$, and $l=0, 1, 2$.
To evaluate the empirical size of the proposed test, we simply chose $\mu_t=0$ for all $t$ under $H_0$ of (\ref{Hypo}).
Under $H_1$, we considered one change-point located at $0.4 T$, i.e. $\mu_t=0$ for $t=1, \cdots, 0.4T$ and $\mu_t=\mu$ for $t=0.4T+1, \cdots, T$. The non-zero mean vector $\mu$ had $[p^{0.7}]$ non-zero components whose locations were drawn uniformly at random from the $p$ coordinates $\{1, \cdots, p\}$. Here, $[a]$ denotes the integer part of $a$. The magnitude of each non-zero entry of $\mu$ was a constant $\delta$ multiplied by a random sign. The effects of the sample size, dimensionality and length of the time series on the performance of the proposed testing procedure were examined through different combinations of $n \in \{30, 60, 90\}$, $p \in \{50, 100, 200\}$ and $T \in \{50, 100, 150\}$. The nominal significance level was chosen to be $0.05$. All the simulation results were obtained based on 1000 replications.
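The data-generating design above is straightforward to reproduce; a minimal sketch of the linear process (\ref{data_g}) follows (the helper name and the burn-in handling are our own choices):

```python
import numpy as np

def simulate(n, T, p, mu, J=2, seed=None):
    """Draw X of shape (n, T, p) with X_it = mu_t + sum_{l=0}^J Q_l eps_{i,t-l},
    eps_it ~ N(0, I_p), and Q_l = {0.5^|i-j| 1(|i-j| < p/2) / (J - l + 1)}."""
    rng = np.random.default_rng(seed)
    idx = np.arange(p)
    gap = np.abs(idx[:, None] - idx[None, :])
    base = 0.5 ** gap * (gap < p / 2)
    Q = [base / (J - l + 1) for l in range(J + 1)]
    eps = rng.standard_normal((n, T + J, p))   # J extra slices so eps_{i,t-l} exists
    X = np.empty((n, T, p))
    for t in range(T):
        X[:, t, :] = mu[t]
        for l in range(J + 1):
            X[:, t, :] += eps[:, t + J - l, :] @ Q[l].T
    return X
```

Here `mu` is a $(T, p)$ array of population means; setting a block of rows to a non-zero vector reproduces the single change-point design described above.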
Table \ref{case1} summarizes the empirical performance of the proposed procedure for testing the homogeneity of means.
All the empirical sizes ($\delta=0$) were well controlled at the nominal significance level $0.05$, although some of them were relatively conservative.
This is largely due to the slow convergence to the Gumbel limit. Furthermore, the empirical powers increased as $p$, $T$ and $n$ increased,
which confirms the theoretical properties of the proposed testing procedure.
\begin{table}[t]
\tabcolsep 4pt
\begin{center}
\caption{Empirical sizes and powers of the proposed test for homogeneity of means under different combinations of $n$, $p$ and $T$.}
\label{case1}
\begin{tabular}{ccccccccccccccc}
\hline &\multicolumn{2}{c}{}&\multicolumn{4}{c}{$T=50$} &\multicolumn{4}{c}{$T=100$} &\multicolumn{4}{c}{$T=150$}\\[1mm]
\cline{5-7} \cline{9-11} \cline{13-15}$\delta$& &n& &$p=50$ & $100$ & $200$ & &$p=50$ & $100$ & $200$ & & $p=50$
& $100$& $200$ \\\hline
& &$30$ & & $0.040$ & $0.033$ & $0.028$ & &$0.049$& $0.030$ & $0.017$& &$0.034$& $0.042$& $0.024$ \\
0 & &$60$ & & $0.052$ & $0.036$ & $0.021$ & &$0.033$& $0.031$ & $0.021$& &$0.029$& $0.014$& $0.015$ \\
& &$90$ & & $0.050$ & $0.032$ & $0.022$ & &$0.033$& $0.024$ & $0.017$& &$0.051$& $0.031$& $0.018$ \\\hline
& &$30$ & & $0.117$ & $0.121$ & $0.122$ & &$0.141$& $0.157$ & $0.216$& &$0.186$& $0.209$& $0.276$ \\
0.2 & &$60$ & & $0.309$ & $0.362$ & $0.504$ & &$0.523$& $0.738$ & $0.833$& &$0.731$& $0.884$& $0.982$ \\
& &$90$ & & $0.578$ & $0.790$ & $0.922$ & &$0.918$& $0.995$ & $0.999$& &$0.993$& $1.000$& $1.000$ \\
\hline
& &$30$ & & $0.378$ & $0.474$ & $0.633$ & &$0.648$& $0.826$ & $0.938$& &$0.860$& $0.954$& $0.992$ \\
0.3 & &$60$ & & $0.956$ & $0.994$ & $1.000$ & &$1.000$& $1.000$ & $1.000$& &$1.000$& $1.000$& $1.000$ \\
& &$90$ & & 1.000 & 1.000 & 1.000 & & 1.000 & 1.000 & 1.000 & & 1.000 & 1.000 & 1.000 \\\hline
\end{tabular}
\end{center}
\end{table}
\subsection{Change-Point Identification }
Simulation experiments were also conducted to evaluate the change-point identification procedure proposed in Section 3. We generated data using a setup similar to that for change-point testing in the last subsection, but with two
change-points at $0.4 T$ and $0.7 T$, i.e. $\mu_t=0$ for $t=1, \cdots, 0.4T$, $\mu_t=\mu$ for $t=0.4T+1, \cdots, 0.7T$ and $\mu_t=0$ for $t=0.7T+1, \cdots, T$.
Again, the non-zero mean vector $\mu$ had $[p^{0.7}]$ non-zero components whose locations were drawn uniformly at random from $\{1, \cdots, p\}$.
The magnitude of each non-zero entry of $\mu$ was $\delta=0.5$ or $\delta=0.6$, multiplied by a random sign.
\begin{figure}[t!]
\begin{center}
\includegraphics[width=0.4\textwidth,height=0.4\textwidth]{errord05n30.pdf}
\includegraphics[width=0.4\textwidth,height=0.4\textwidth]{errord05n60.pdf}\\
\includegraphics[width=0.4\textwidth,height=0.4\textwidth]{errord06n30.pdf}
\includegraphics[width=0.4\textwidth,height=0.4\textwidth]{errord06n60.pdf}
\caption{The average FP+FN under different combinations of signal strength $\delta$, dimension $p$, time $T$ and sample size $n$. The total number of change-points is set to $2$. }
\label{fig2}
\end{center}
\end{figure}
There are two types of errors in change-point identification: false positives (FP) and false negatives (FN).
An FP occurs when a time point without a mean change is wrongly identified as a change-point, and an FN occurs when a change-point is wrongly treated as a time point without a mean change.
The accuracy of the proposed change-point identification was measured by the sum FP+FN. Simulation results were obtained based on 100 replications.
Figure \ref{fig2} demonstrates the FP+FN associated with the proposed change-point identification procedure under different combinations of $\delta$, $p$, $T$ and $n$.
More specifically, the average FP+FN decreased as $\delta$ increased with $p$, $T$ and $n$ fixed.
It also decreased as either $p$ increased with $\delta$, $T$ and $n$ fixed, or $n$ increased with $\delta$, $p$ and $T$ fixed.
In the supplementary material, we also summarize the performance using the number of true positives (TP). The results show that the TP identified by the proposed procedure converged to the number of change-points (see supplementary material for details).
We also conducted simulation studies for the proposed change-point detection and identification methods with non-Gaussian data. Instead of using the normally distributed $\epsilon_{it}$ in (\ref{data_g}),
we considered the centralized Gamma(4, 0.5) distribution. The results were similar to those given in Table 1 and Figure 1, suggesting that the proposed test does not rely on Gaussian data. Due to space limitations, the results are reported in the supplementary material.
\subsection{Detection and Identification Under the Mixture Model}
To evaluate the performance of the proposed methods under the mixture model (\ref{mixture-of-means}), we generated the data from the following model with three groups:
\begin{equation}
X_{it}=\sum_{g=1}^3 \Lambda_{ig}\mu_{gt}+\sum_{l=0}^J Q_{lt}\,\epsilon_{i(t-l)},
\label{mixture-of-means-simu}
\end{equation}
where $(\Lambda_{i1},\Lambda_{i2}, \Lambda_{i3})$ follows a multinomial distribution with parameters 1 and $p=(p_1, p_2,p_3)$
satisfying $P(\Lambda_{ig}=1)=p_g$ for $g=1, 2, 3$. In the simulation, we set $(p_1,p_2,p_3)=(0.3,0.3,0.4)$. Among the three groups, we considered two change-points $\tau_1=0.4 T$ and $\tau_2=0.7 T$.
Specifically, for the first group ($g=1$), $\mu_{1t}=0$ for $1 \leq t\leq \theta_\alphau_1$ and $\mu_{1t}=\mu_1$ for $\theta_\alphau_1+1\leq t\leq T$, where $\mu_{1}$ had $[p^{0.7}]$ non-zero components drawn uniformly and
randomly from $\{1, \cdots, p\}$. The magnitude of non-zero entry of $\mu_{1}$ was $\delta_1$ multiplied by a random sign.
For the second group ($g=2$), the mean vectors $\mu_{2t}$ were obtained similarly to those for the first group except that we changed $\theta_\alphau_1$ to $\theta_\alphau_2$, and $\delta_1$ to $\delta_2$.
For the third group ($g=3$), we set $\mu_{3t}=0$ for $1 \leq t\leq \theta_\alphau_1$, $\mu_{3t}$ equal to the non-zero mean vectors similar to those in group 2 for $\theta_\alphau_1+1 \le t \le \theta_\alphau_2$,
and $\mu_{3t}=\mu_3$ for $\theta_\alphau_2+1 \leq t\leq T$ where $\mu_3$ were generated similarly to that in the first group except that we changed $\delta_1$ to $\delta_3$.
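The data-generating steps above can be sketched in a few lines; this is a minimal illustration only, in which the MA coefficient matrices $Q_{lt}$ (not reproduced in this section) are taken as $0.5^l I_p$ and the innovations are standard normal, and the group-3 middle segment simply reuses $\mu_2$.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_mixture(n=30, p=50, T=50, J=2, deltas=(0.25, 0.35, 0.4),
                     probs=(0.3, 0.3, 0.4)):
    """Sketch of the three-group mixture model (mixture-of-means-simu).

    Assumptions (placeholders, not from the text): Q_{lt} = 0.5**l * I_p,
    and i.i.d. N(0, 1) innovations eps_{it}.
    """
    tau1, tau2 = int(0.4 * T), int(0.7 * T)
    k = int(p ** 0.7)                          # number of non-zero mean entries

    def signal(delta):
        mu = np.zeros(p)
        idx = rng.choice(p, size=k, replace=False)
        mu[idx] = delta * rng.choice([-1.0, 1.0], size=k)
        return mu

    mu1, mu2, mu3 = (signal(d) for d in deltas)
    mu = np.zeros((3, T, p))                   # group-specific mean paths mu_{gt}
    mu[0, tau1:] = mu1                         # group 1: one change at tau1
    mu[1, tau2:] = mu2                         # group 2: one change at tau2
    mu[2, tau1:tau2] = mu2                     # group 3: changes at tau1 and tau2
    mu[2, tau2:] = mu3

    labels = rng.choice(3, size=n, p=probs)    # multinomial group memberships
    eps = rng.standard_normal((n, T + J, p))   # innovations, including lags
    weights = 0.5 ** np.arange(J + 1)          # assumed Q_{lt} = w_l * I_p
    X = np.zeros((n, T, p))
    for t in range(T):
        noise = sum(w * eps[:, t + J - l] for l, w in enumerate(weights))
        X[:, t] = mu[labels, t] + noise
    return X, labels, (tau1, tau2)

X, labels, cps = simulate_mixture()
print(X.shape, cps)
```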
\begin{table}[t]
\tabcolsep 3pt
\begin{center}
\caption{Empirical powers of the proposed test under the mixture model with different combinations of $n$, $p$ and $T$.} \label{case2}
\begin{tabular}{ccccccccccccccc}
\hline &\multicolumn{2}{c}{}&\multicolumn{4}{c}{$T=50$} &\multicolumn{4}{c}{$T=100$} &\multicolumn{4}{c}{$T=150$}\\[1mm]
\cline{5-7} \cline{9-11} \cline{13-15}
$(\delta_1,\delta_2,\delta_3)$& &n& &$p=50$ & $100$ & $200$ & &$p=50$ & $100$ & $200$ & & $p=50$
& $100$& $200$ \\\hline
& &$30$ & & 0.094 & 0.088 & 0.123 & & 0.119 & 0.142 & 0.179 & & 0.139 & 0.174 & 0.241\\
(0.25, 0.35, 0.4) & &$60$ & & 0.300 & 0.349 & 0.445 & & 0.463 & 0.592 & 0.712 & & 0.618 & 0.752 & 0.882\\
& &$90$ & & 0.533 & 0.690 & 0.814 & & 0.817 & 0.928 & 0.979 & & 0.929 & 0.981 & 0.998\\\hline
& &$30$ & & 0.691 & 0.816 & 0.907 & & 0.892 & 0.957 & 0.987 & & 0.946 & 0.991 & 0.999\\
(0.5, 0.7, 0.8) & &$60$ & & 0.997 & 1.000 & 1.000 & & 1.000 & 1.000 & 1.000 & & 1.000 & 1.000 & 1.000\\
& &$90$ & & 1.000 & 1.000 & 1.000 & & 1.000 & 1.000 & 1.000 & & 1.000 & 1.000 & 1.000\\ \hline
\end{tabular}
\end{center}
\end{table}
We first evaluate the proposed test under the mixture model (\ref{mixture-of-means-simu}). Since the empirical sizes under the mixture model were very similar to those in Table \ref{case1},
we only report the empirical powers in Table \ref{case2}. The patterns are very similar to those observed in Table \ref{case1}.
We also observe that the empirical powers increased as $(\delta_1,\delta_2, \delta_3)$, or $n$, $p$ and $T$, increased. This suggests that the proposed test procedure is consistent under the mixture model.
Based on the same setup, we also conducted a simulation experiment to evaluate the performance of the proposed change-point identification procedure under the mixture model (\ref{mixture-of-means-simu}).
The accuracy of the procedure is measured by the sum of FP and FN, which is illustrated in Figure \ref{fig4}. We observe that the patterns are similar to those reported in Figure \ref{fig2}. As $n$,
$p$ and $T$, or $(\delta_1,\delta_2, \delta_3)$, increased, the FP+FN decreased. In particular, it was close to 0 when $p=200$, $n=60$ and $(\delta_1,\delta_2, \delta_3)=(0.7, 1.3, 0.8)$, showing that the procedure is consistent under the mixture model.
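As a concrete illustration of this accuracy measure, FP+FN can be computed from the estimated and true change-point sets as follows. The matching convention (exact matches by default, via a tolerance of 0) is an assumption here, since the formal definitions of FP and FN are given earlier in the paper.

```python
def fp_fn(estimated, true_cps, tol=0):
    """Sum of false positives and false negatives for change-point estimates.

    Assumed convention: an estimate within `tol` of an unmatched true
    change-point is a true positive; unmatched estimates count as FP,
    undetected true change-points count as FN.
    """
    matched = set()
    fp = 0
    for e in estimated:
        hits = [t for t in true_cps if abs(e - t) <= tol and t not in matched]
        if hits:
            matched.add(hits[0])
        else:
            fp += 1
    fn = len(true_cps) - len(matched)
    return fp + fn

print(fp_fn([20, 35, 40], [20, 35]))   # one spurious estimate, so FP+FN = 1
```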
\begin{figure}[t!]
\begin{center}
\includegraphics[width=0.4\textwidth,height=0.4\textwidth]{errord05n30mix.pdf}
\includegraphics[width=0.4\textwidth,height=0.4\textwidth]{errord05n60mix.pdf}\\
\includegraphics[width=0.4\textwidth,height=0.4\textwidth]{errord07n30mix.pdf}
\includegraphics[width=0.4\textwidth,height=0.4\textwidth]{errord07n60mix.pdf}
\caption{The average FP+FN under the mixture model with different combinations of signal strength $\delta$, dimension $p$, time $T$ and sample size $n$. The total number of change-points is set to $2$. }
\label{fig4}
\end{center}
\end{figure}
\section{Real Data Analysis}
Recent studies suggest that the parahippocampal region of the brain activates more significantly to images with spatial structures than others without such structures (Epstein and Kanwisher, 1998; Henderson et al., 2007). An experiment was conducted to investigate the functions of such region in scene processing.
During the experiment, fourteen students in Michigan State University were presented alternatively with six sets of
scene images and six sets of object images. The order of presenting the images follows ``sososososoos'' where `s' and `o' represent a set of scene images and object images, respectively.
The fMRI data were acquired by placing each brain into a 3T GE Sigma EXCITE scanner. After the data were preprocessed by shifting time difference, correcting rigid-body motion and removing trends
(more detail can be found in Henderson et al., 2011), the resulted dataset consists of BOLD measurements of 33,866 voxels from $14$ subjects and at $192$ time points,
which clearly is a ``large $p$, large $T$ and small $n$" case.
\begin{figure}[t!]
\begin{center}
\includegraphics[width=0.95\textwidth,height=5cm]{change-points-plot.pdf}\vspace{-0.2cm}
\caption{The illustration of change-points identified by the proposed method.
The green solid and dashed curves, respectively, represent the expected BOLD responses to the scene and object images. The x-values and y-values of the red stars marked on the curves are the identified change-points and the corresponding BOLD responses. The blue plus signs represent the locations where subjects rest, so that the BOLD responses are zero.
Out of the 59 identified change-points, 58 are expected to have signal changes.
}
\label{cpts-fig}
\end{center}
\end{figure}
Let $X_{it}$ be a $p$-dimensional ($p=33,866$) random vector representing the fMRI image data for the $i$-th subject measured at time point $t$ ($i=1,\cdots, 14$ and $t=1,\cdots, 192$).
We first applied the testing procedure described in Section 4 to the dataset to test the homogeneity of the mean vectors, namely the hypothesis (\ref{general-hypo}).
The test statistic was $\tilde{\mathscr{M}}= 9.117$ with p-value less than $10^{-6}$, which indicates the existence of change-points. After further implementing the proposed binary segmentation approach,
we identified 59 change-points. The large number of change-points is not surprising, given the time-alternating scene and object image stimuli.
To crosscheck the credibility of the identified change-points, we compared them with the predicted BOLD responses obtained from the convolution of the boxcar function with a gamma hemodynamic response function (HRF) (Ashby, 2011).
In Figure \ref{cpts-fig}, the green solid and the green dashed curves, which follow the order of presenting the images, are the predicted BOLD responses to the scene images and object images, respectively. The x-values and y-values of the red stars marked on the curves are the identified change-points and the corresponding BOLD responses. Based on the predicted BOLD response function, we found that 58 out of the 59 identified change-points were expected to have signal changes.
Keeping in mind that the proposed change-point detection and identification approach is nonparametric, with no attempt to model neural activation, we conclude that it has satisfactory performance for the fMRI data analysis.
\begin{figure}[bt!]
\begin{center}
\includegraphics[width=0.26\textwidth,height=0.26\textwidth]{cpt582.pdf}
\includegraphics[width=0.26\textwidth,height=0.26\textwidth]{cpt5101.pdf}\\
\includegraphics[width=0.26\textwidth,height=0.26\textwidth]{cpt5780.pdf}
\includegraphics[width=0.26\textwidth,height=0.26\textwidth]{Cpt57110.pdf}
\caption{Upper Panels: the activated brain regions at the 5th identified change-point (17th time point) where the object images were presented. Most of the significant changes (red areas) occurred at visual cortex areas. Lower Panels: the activated brain regions at the 57th change-point (188th time point) where the scene images were presented. Most of the significant changes (red areas) occurred at both visual cortex and parahippocampal areas.}
\label{significant-cpt5}
\end{center}
\end{figure}
To confirm that the parahippocampal region is selectively activated by scenes rather than objects, we compared the brain region activated by the scene images with that activated by the object images. To do this, we let $X_{i \tau j}$ be the $j$-th component (voxel) of the random vector $X_{i \tau}$ for the $i$-th subject at the change-point $\tau$, where $i=1, \cdots, 14$, $\tau=1, \cdots, 59$ and $j=1, \cdots, 33,866$. Similarly, let $X_{i (\tau+1) j}$ be the $j$-th component of the random vector $X_{i (\tau+1)}$ after the change-point $\tau$.
For each voxel ($j=1,\cdots, 33,866$), we computed the difference between the two sample means $\bar{X}_{\tau j}$ and $\bar{X}_{(\tau+1) j}$ and then conducted a paired t-test for the significance of the mean difference before and after the change-point.
Based on the obtained p-values, we identified the activated brain regions composed of all significant voxels after controlling the false discovery rate at $0.01$ (Storey, 2003). The results showed that the activated brain regions were quite similar across the same type of images, but significantly different between scene and object images. More specifically, the brain region activated by the scene images covered both the visual cortex area and the parahippocampal area, whereas the region activated by the object images was located only at the visual cortex area.
Our findings are consistent with the results in Henderson et al. (2011). For illustration purposes, we only include pictures at two change-points in Figure \ref{significant-cpt5}.
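The voxel-wise comparison just described can be sketched as follows. Note that the paper controls FDR via Storey's (2003) q-values, whereas this sketch substitutes the simpler Benjamini-Hochberg step-up rule, and the array names and synthetic data are purely illustrative.

```python
import numpy as np
from scipy import stats

def activated_voxels(X_before, X_after, alpha=0.01):
    """Voxel-wise paired t-tests across subjects with FDR control.

    X_before, X_after: arrays of shape (n_subjects, n_voxels) holding BOLD
    measurements just before and after an identified change-point.
    Benjamini-Hochberg is used here as a stand-in for Storey's q-values.
    """
    _, pvals = stats.ttest_rel(X_after, X_before, axis=0)
    m = pvals.size
    order = np.argsort(pvals)
    thresh = alpha * np.arange(1, m + 1) / m        # BH step-up thresholds
    passed = pvals[order] <= thresh
    k = np.max(np.nonzero(passed)[0]) + 1 if passed.any() else 0
    significant = np.zeros(m, dtype=bool)
    significant[order[:k]] = True                   # reject k smallest p-values
    return significant

rng = np.random.default_rng(1)
n, p = 14, 500
before = rng.standard_normal((n, p))
after = before + 0.1 * rng.standard_normal((n, p))
after[:, :30] += 2.0                                # strong activation in 30 voxels
print(activated_voxels(before, after).sum())
```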
\section{Discussion}
Motivated by real applications such as fMRI studies, we consider the problem of testing the homogeneity of high dimensional mean vectors under the ``large $p$, large $T$ and small $n$'' paradigm. We propose a new test statistic and establish its asymptotic distribution under mild conditions. One important feature of the proposed test is that it accommodates both temporal and spatial dependence. To the best of our knowledge, temporal dependence has not been investigated in the literature on high dimensional MANOVA problems, so the proposed method bridges this gap. When the null hypothesis is rejected, we further propose a procedure which is shown to identify the change-points with probability converging to one. The rate of consistency of the change-point estimator is also established.
The proposed methods have also been generalized to a mixture model to allow heterogeneity among subjects. Numerical results demonstrate that this extension is promising. Due to space limitations, we will explore the theoretical properties of the mixture-model extension in a separate paper. Although the current article demonstrates the empirical performance of the proposed methods through an fMRI data analysis, they can also be applied to other high-dimensional longitudinal data.
\setcounter{equation}{0}
\def\theequation{A.\arabic{equation}}
\def\thesection{A}
\section*{\large Appendix: Technical Details}
In this Appendix, we provide proofs of the Theorems and Propositions in the paper. Assume $\mu_t=0$ in (\ref{model0}) and (\ref{factor}). For any $m \times m$ matrices $A$ and $B$, the following results, commonly
used in this Appendix, can be derived: $\mbox{E}(X_{is}^{\prime}A X_{it})= \mbox{tr}(\Gamma_s^{\prime}A \Gamma_t),$
and
\begin{align}
\mbox{E}(X_{is}^{\prime}A X_{it}X_{is^*}^{\prime}B X_{it^*})&=\mbox{tr}(\Gamma_s^{\prime}A \Gamma_t)\mbox{tr}(\Gamma_{s^*}^{\prime}B \Gamma_{t^*})
+\mbox{tr}(\Gamma_s^{\prime}A \Gamma_t\Gamma_{s^*}^{\prime}B \Gamma_{t^*})\nonumber\\
&+\mbox{tr}(\Gamma_s^{\prime}A \Gamma_t\Gamma_{t^*}^{\prime}B^{\prime} \Gamma_{s^*})+(3+\Delta)\mbox{tr}(\Gamma_s^{\prime}A \Gamma_t \circ\Gamma_{s^*}^{\prime}B \Gamma_{t^*}),\label{com1}
\end{align}
where $A\circ B$ is the Hadamard product of $A$ and $B$.
\bigskip
\noindent{\bf A.1. Proof of Theorem 1.}
\bigskip
Theorem 1 can be established by the martingale central limit theorem. Toward this end, we first construct a martingale difference sequence. If we define $Y_{i s_a}=X_{i s_a}-\mu_{s_a}$, then
$\hat{M}_t-M_t=\sum_{i=1}^n M_{ti}, $
where
\begin{align*}
M_{ti}=&\frac{2}{n(n-1)h(t)}\sum_{j=1}^{i-1}\Big\{\sum_{s_1=1}^t \sum_{s_2=t+1}^T \sum_{a,b \in \{1,2\}} (-1)^{|a-b|}Y_{i s_a}^{\prime}Y_{j s_b}\Big\}\\
&+\frac{2}{nh(t)} \sum_{s_1=1}^t \sum_{s_2=t+1}^T \sum_{a,b \in \{1,2\}} (-1)^{|a-b|} \mu^{\prime}_{s_a} Y_{i s_b}.
\end{align*}
Let $\{\mathscr{F}_i, 1 \le i \le n\}$ be the $\sigma$-fields given by $\mathscr{F}_i=\sigma\{\mathbb{Y}_1, \cdots, \mathbb{Y}_i \}$, where $\mathbb{Y}_i=\{Y_{i1}, \cdots, Y_{i T} \}^{\prime}$.
Then it can be shown that ${\mbox{E}}(M_{tk}|\mathscr{F}_{k-1})=0$ for $k=1, \cdots, n$. Therefore, $\{M_{ti}, 1\le i \le n\}$ is a martingale difference sequence with respect to the $\sigma$-fields $\{\mathscr{F}_i, 1 \le i \le n\}$.
Based on Lemmas 1 and 2 proved in the supplementary material, Theorem 1 can be proved using the martingale central limit theorem
(Hall and Heyde, 1980).
\bigskip
\noindent{\bf A.2. Proof of Theorem 2.}
\bigskip
Note that the estimator $\widehat{\mbox{tr}(\Xi_{r_a s_c} \Xi_{r_b s_d}^{\prime})}$ in (\ref{variance-est}) is invariant under the transformation of $X_{it}$ to $X_{it}-\mu_{t}$ for $t=1, \cdots, \tau$.
Without loss of generality, we may therefore assume that $\mu_1=\mu_2=\cdots=\mu_T=0$. First,
\begin{align*}
&\quad\mbox{E}\Big\{\widehat{\mbox{tr}(\Xi_{r_a s_c} \Xi_{r_b s_d}^{\prime})} \Big\}\\
&=\mbox{E}( X_{i r_a}^{\prime} X_{j r_b} X_{i s_c}^{\prime} X_{j s_d}) -\mbox{E} (X_{i r_a}^{\prime} X_{j r_b} X_{i s_c}^{\prime} X_{k s_d})\\
&\quad -\mbox{E} (X_{i r_a}^{\prime} X_{j r_b} X_{k s_c}^{\prime} X_{j s_d})+\mbox{E}( X_{i r_a}^{\prime} X_{j r_b} X_{k s_c}^{\prime} X_{l s_d})
= \mbox{tr}(\Xi_{r_a s_c} \Xi_{r_b s_d}^{\prime}).
\end{align*}
This shows that $\mbox{E}(\hat{\sigma}_{nt, 0}^2)=\sigma_{nt,0}^2$. Therefore, to prove Theorem 2, we only need to show that
$\mbox{Var}(\hat{\sigma}_{nt, 0}^2)/\sigma_{nt,0}^4 \to 0.$
For convenience, we denote the summation $\sum_{r_1=1}^t\sum_{r_2=t+1}^T\sum_{s_1=1}^t\sum_{s_2=t+1}^T$ by $\sum_{r_1, r_2, s_1, s_2}$.
Define the right hand side of ``$=$'' in (\ref{variance-est}) as $B_1+B_2+B_3+B_4$, and accordingly,
\begin{eqnarray}
\hat{\sigma}_{nt,0}^2&=&\frac{2}{h^{2}(t)n(n-1)}\sum_{r_1, r_2, s_1, s_2} \sum_{a,b,c,d \in \{1,2\}}(-1)^{|a-b|+|c-d|} (B_1+B_2+B_3+B_4)\nonumber\\
&\equiv& \hat{\sigma}_{nt,0}^{2 (1)}+ \hat{\sigma}_{nt,0}^{2 (2)}+ \hat{\sigma}_{nt,0}^{2 (3)}+ \hat{\sigma}_{nt,0}^{2 (4)}.\nonumber
\end{eqnarray}
Therefore, we only need to show that $\mbox{Var}(\hat{\sigma}_{nt, 0}^{2 (i)})/ \sigma_{nt,0}^4 \to 0$ for $i=1,2,3$ and $4$ respectively. Toward this end, we first show that $\mbox{Var}(\hat{\sigma}_{nt, 0}^{2 (1)} )/ \sigma_{nt,0}^4 \to 0$ as follows.
\begin{align}
&\mbox{Var}(\hat{\sigma}_{nt, 0}^{2 (1)})\nonumber\\
=&\frac{4}{h^{4}(t)n^4(n-1)^4}\mbox{Var}\Big\{ \sum_{r_1, r_2, s_1, s_2} \sum_{a,b,c,d \in \{1,2\}} (-1)^{|a-b|+|c-d|} \sum_{i \ne j}^n X_{i r_a}^{\prime} X_{j r_b} X_{i s_c}^{\prime} X_{j s_d} \Big\}\nonumber\\
=&\frac{4}{h^{4}(t)n^4(n-1)^4}\overline{\sum} \Big\{ \sum_{i \ne j, k\ne l}^n \mbox{E}(X_{i r_a}^{\prime} X_{j r_b} X_{i s_c}^{\prime} X_{j s_d}X_{k r^*_{a^*}}^{\prime} X_{l r^*_{b^*}} X_{k s^*_{c^*}}^{\prime} X_{l s^*_{d^*}})\nonumber\\
&\qquad\qquad\qquad
-n^2(n-1)^2\mbox{tr}(\Gamma_{r_a}^{\prime} \Gamma_{r_b} \Gamma_{s_c}^{\prime} \Gamma_{s_d})\mbox{tr}(\Gamma_{r^*_{a^*}}^{\prime} \Gamma_{r^*_{b^*}} \Gamma_{s^*_{c^*}}^{\prime} \Gamma_{s^*_{d^*}})\Big\},
\label{Th2-1}
\end{align}
where $\overline{\sum}$ represents $\sum_{r_1, r_2, s_1, s_2} \sum_{a,b,c,d \in \{1,2\}}\sum_{r_1^*, r_2^*, s_1^*, s_2^*} \sum_{a^*,b^*,c^*,d^* \in \{1,2\}}$.
Now we evaluate $\sum_{i \ne j, k\ne l}^n \mbox{E}(X_{i r_a}^{\prime} X_{j r_b} X_{i s_c}^{\prime} X_{j s_d}X_{k r^*_{a^*}}^{\prime} X_{l r^*_{b^*}} X_{k s^*_{c^*}}^{\prime} X_{l s^*_{d^*}})$ with respect to different
cases in the following.
First, suppose all indices are distinct, i.e., $i\ne j \ne k \ne l$. Using (\ref{com1}), we have
\begin{align*}
\sum_{i \ne j, k\ne l}^n &\mbox{E}(X_{i r_a}^{\prime} X_{j r_b} X_{i s_c}^{\prime} X_{j s_d}X_{k r^*_{a^*}}^{\prime} X_{l r^*_{b^*}} X_{k s^*_{c^*}}^{\prime} X_{l s^*_{d^*}})\\
&\asymp n^4\mbox{tr}(\Gamma_{r_a}^{\prime}\Gamma_{r_b}\Gamma_{s_d}^{\prime}\Gamma_{s_c})\mbox{tr}(\Gamma_{r^*_{a^*}}^{\prime}\Gamma_{r^*_{b^*}}\Gamma_{s^*_{d^*}}^{\prime}\Gamma_{s^*_{c^*}}).
\end{align*}
Next, if $(i=k)\ne j \ne l$, then by (\ref{com1}),
\begin{align*}
&\quad\sum_{i \ne j, k\ne l}^n \mbox{E}(X_{i r_a}^{\prime} X_{j r_b} X_{i s_c}^{\prime} X_{j s_d}X_{k r^*_{a^*}}^{\prime} X_{l r^*_{b^*}} X_{k s^*_{c^*}}^{\prime} X_{l s^*_{d^*}})\\
&\asymp n^3\Big\{(3+\Delta) \mbox{tr}(\Gamma_{r_a}^{\prime}\Gamma_{r_b}\Gamma_{s_d}^{\prime}\Gamma_{s_c}\circ\Gamma_{r^*_{a^*}}^{\prime}\Gamma_{r^*_{b^*}}\Gamma_{s^*_{d^*}}^{\prime}\Gamma_{s^*_{c^*}})\\
&\quad+\mbox{tr}(\Gamma_{r_a}^{\prime}\Gamma_{r_b}\Gamma_{s_d}^{\prime}\Gamma_{s_c})\mbox{tr}(\Gamma_{r^*_{a^*}}^{\prime}\Gamma_{r^*_{b^*}}\Gamma_{s^*_{d^*}}^{\prime}\Gamma_{s^*_{c^*}})\\
&\quad+\mbox{tr}(\Gamma_{r_a}^{\prime}\Gamma_{r_b}\Gamma_{s_d}^{\prime}\Gamma_{s_c}\Gamma_{r^*_{a^*}}^{\prime}\Gamma_{r^*_{b^*}}\Gamma_{s^*_{d^*}}^{\prime}\Gamma_{s^*_{c^*}})+\mbox{tr}(\Gamma_{r_a}^{\prime}\Gamma_{r_b}\Gamma_{s_d}^{\prime}\Gamma_{s_c}\Gamma_{s^*_{c^*}}^{\prime}\Gamma_{s^*_{d^*}}\Gamma_{r^*_{b^*}}^{\prime}\Gamma_{r^*_{a^*}})\Big\},
\end{align*}
and the same holds for the cases $(j=k)\ne i \ne l$, $(i=l)\ne j \ne k$ and $(j=l)\ne i \ne k$.
Finally, we consider the cases $(i=k) \ne (j=l)$ and $(i=l) \ne (j=k)$. For the case $(i=k) \ne (j=l)$,
\begin{align*}
&\quad\sum_{i \ne j, k\ne l}^n \mbox{E}(X_{i r_a}^{\prime} X_{j r_b} X_{i s_c}^{\prime} X_{j s_d}X_{k r^*_{a^*}}^{\prime} X_{l r^*_{b^*}} X_{k s^*_{c^*}}^{\prime} X_{l s^*_{d^*}})\\
\asymp& n^2\Big\{3\mbox{tr}(\Gamma_{r_a}^{\prime}\Gamma_{r_b}\Gamma_{s_d}^{\prime}\Gamma_{s_c})\mbox{tr}(\Gamma_{r^*_{a^*}}^{\prime}\Gamma_{r^*_{b^*}}\Gamma_{s^*_{d^*}}^{\prime}\Gamma_{s^*_{c^*}})
+3Q_1+(3+\Delta)Q_2\\
&+3(3+\Delta) \mbox{tr}(\Gamma_{s_d}^{\prime}\Gamma_{s_c}\Gamma_{r_a}^{\prime}\Gamma_{r_b}\circ\Gamma_{s^*_{d^*}}^{\prime}\Gamma_{s^*_{c^*}}\Gamma_{r^*_{a^*}}^{\prime}\Gamma_{r^*_{b^*}})\\
&
+(3+\Delta)^2\sum_{\alpha \beta}(\Gamma_{r_a}^{\prime}\Gamma_{r_b})_{\alpha \beta}(\Gamma_{s_d}^{\prime}\Gamma_{s_c})_{\beta \alpha}(\Gamma_{r^*_{a^*}}^{\prime}\Gamma_{r^*_{b^*}})_{\alpha \beta}(\Gamma_{s^*_{d^*}}^{\prime}\Gamma_{s^*_{c^*}})_{\beta \alpha}\Big\},
\end{align*}
where $Q_1=\mbox{tr}(\Gamma_{s_d}^{\prime}\Gamma_{s_c}\Gamma_{r_a}^{\prime}\Gamma_{r_b}\Gamma_{s^*_{d^*}}^{\prime}\Gamma_{s^*_{c^*}}\Gamma_{r^*_{a^*}}^{\prime}\Gamma_{r^*_{b^*}})+\mbox{tr}(\Gamma_{s_d}^{\prime}\Gamma_{s_c}\Gamma_{r_a}^{\prime}\Gamma_{r_b}\Gamma_{r^*_{b^*}}^{\prime}\Gamma_{r^*_{a^*}}\Gamma_{s^*_{c^*}}^{\prime}\Gamma_{s^*_{d^*}})$ and
$Q_2=\mbox{tr}(\Gamma_{r_a}^{\prime}\Gamma_{r_b}\Gamma_{s_d}^{\prime}\Gamma_{s_c}\circ\Gamma_{r^*_{a^*}}^{\prime}\Gamma_{r^*_{b^*}}\Gamma_{s^*_{d^*}}^{\prime}\Gamma_{s^*_{c^*}})+
\mbox{tr}(\Gamma_{r_a}^{\prime}\Gamma_{r_b}\Gamma_{r^*_{b^*}}^{\prime}\Gamma_{r^*_{a^*}}\circ\Gamma_{s_{d}}^{\prime}\Gamma_{s_{c}}\Gamma_{s^*_{c^*}}^{\prime}\Gamma_{s^*_{d^*}})+\mbox{tr}(\Gamma_{r_a}^{\prime}\Gamma_{r_b}\Gamma_{s^*_{d^*}}^{\prime}\Gamma_{s^*_{c^*}}\circ\Gamma_{r^*_{a^*}}^{\prime}\Gamma_{r^*_{b^*}}\Gamma_{s_{d}}^{\prime}\Gamma_{s_{c}})$.
It can be shown that the case $(i=l) \ne (j=k)$ is the same as the case $(i=k) \ne (j=l)$.
Plugging all the above results into (\ref{Th2-1}), we have
\[
\mbox{Var}(\hat{\sigma}_{nt, 0}^{2 (1)})\asymp h^{-4}(t)n^{-5}\overline{\sum}\mbox{tr}(\Gamma_{r_b}^{\prime}\Gamma_{r_a}\Gamma_{s_c}^{\prime}\Gamma_{s_d}\Gamma_{s^*_{d^*}}^{\prime}\Gamma_{s^*_{c^*}}\Gamma_{r^*_{a^*}}^{\prime}\Gamma_{r^*_{b^*}})+h^{-4}(t)n^{-6}\mbox{tr}(A_{0t}^2).
\]
Following the same procedure, it can also be shown that $\mbox{Var}(\hat{\sigma}_{nt, 0}^{2 (j)})=o\{\mbox{Var}(\hat{\sigma}_{nt, 0}^{2 (1)})\}$ for $j=2, 3$ and $4$.
Then, using condition (C1), we have $\mbox{Var}(\hat{\sigma}_{nt, 0}^{2 (j)})/ \sigma_{nt,0}^4 \to 0$ for $j=1, 2, 3$ and $4$. This completes the proof of Theorem 2.
\bigskip
\noindent{\bf A.3. Proof of Theorem 3.}
\bigskip
First, we derive $\mbox{Cov}(\hat{M}_u, \hat{M}_v)$ for $u, v \in \{1, \cdots, T-1\}$ under $H_0$ of (\ref{Hypo}). Without loss of generality, we assume that $\mu_1=\mu_2=\cdots=\mu_T=0$. Recall that
\begin{align}
\hat{M}_u&=\frac{1}{h(u)n(n-1)}\sum_{s_1=1}^u \sum_{s_2=u+1}^T \Biggl\{{\sum_{i\ne j}^nX_{is_1}^{\prime}X_{js_1}}+{\sum_{i\ne j}^nX_{is_2}^{\prime}X_{js_2}}-2{\sum_{i\ne j}^nX_{is_1}^{\prime}X_{js_2}} \Biggr\}, \nonumber\\
\hat{M}_v&=\frac{1}{h(v)n(n-1)}\sum_{s_1=1}^v \sum_{s_2=v+1}^T \Biggl\{{\sum_{i\ne j}^nX_{is_1}^{\prime}X_{js_1}}+{\sum_{i\ne j}^nX_{is_2}^{\prime}X_{js_2}}-2{\sum_{i\ne j}^nX_{is_1}^{\prime}X_{js_2}} \Biggr\}. \nonumber
\end{align}
Following similar derivations for the variance of $\hat{M}_t$ in the proof of Proposition 1 in the supplementary material, we can derive that
\begin{align*}
\mbox{Cov}(\hat{M}_u, \hat{M}_v)&=\frac{2}{h(u) h(v) n(n-1)}\sum_{r_1=1}^u\sum_{r_2=u+1}^T\sum_{s_1=1}^v\sum_{s_2=v+1}^T \\
&\times \sum_{a,b,c,d \in \{1,2\}}(-1)^{|a-b|+|c-d|}\mbox{tr}(\Xi_{r_a s_c}\Xi^{\prime}_{r_b s_d}).
\end{align*}
Next, we show that $\{\hat{M}_t\}_{t=1}^{T-1}$ follows a joint multivariate normal distribution when $T$ is fixed. By the Cram\'er-Wold device, we only need to show that for any non-zero constant vector
$a=(a_1, \cdots, a_{T-1})^{\prime}$, $\sum_{t=1}^{T-1} a_t \hat{M}_t$ is asymptotically normal under $H_0$ of (\ref{Hypo}). Toward this end, we note that
$\mbox{Var}(\sum_{t=1}^{T-1} a_t \hat{M}_t)=\sum_{u=1}^{T-1} \sum_{v=1}^{T-1} a_u a_v \mbox{Cov}(\hat{M}_u, \hat{M}_v)$. Then we only need to show that
$\sum_{t=1}^{T-1} a_t \hat{M}_t /\sqrt{\mbox{Var}(\sum_{t=1}^{T-1} a_t \hat{M}_t)} \xrightarrow{d} N(0, 1)$, which can be proved by the martingale central limit theorem.
Since the proof is very similar to that of Theorem 1, we omit it. With the joint normality of $\{\hat{M}_t\}_{t=1}^{T-1}$, the convergence in distribution $\hat{\mathscr{M}}\to \max_{1\leq t\leq T-1}Z_t$
can be established by the continuous mapping theorem.
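For concreteness, the statistic $\hat{M}_t$ recalled above can be computed directly from data as in the following sketch. The normalizing factor $h(t)$ is defined in the main text and is not reproduced in this excerpt, so $h(t)=t(T-t)$ is assumed here purely as a placeholder.

```python
import numpy as np

def M_hat(X, t):
    """Compute the two-sample U-type statistic M-hat_t for split point t.

    X has shape (n, T, p); 1 <= t <= T-1. Uses the identity
    sum_{i != j} X_is' X_jr = S_s' S_r - sum_i X_is' X_ir, S_s = sum_i X_is.
    Placeholder normalization h(t) = t*(T-t) is an assumption.
    """
    n, T, _ = X.shape
    S = X.sum(axis=0)                       # S[s] = sum_i X_{is}, shape (T, p)
    G = S @ S.T                             # (sum_i X_is)' (sum_j X_jr)
    D = np.einsum('isp,irp->sr', X, X)      # sum_i X_is' X_ir
    U = G - D                               # U[s, r] = sum_{i != j} X_is' X_jr
    d = np.diag(U)
    total = ((T - t) * d[:t].sum() + t * d[t:].sum()
             - 2.0 * U[:t, t:].sum())
    return total / (t * (T - t) * n * (n - 1))

rng = np.random.default_rng(2)
X = rng.standard_normal((20, 10, 15))
X[:, 5:] += 1.0                             # mean shift after t = 5
print(round(float(M_hat(X, 5)), 2))
```

Under a mean shift of size $\|\mu_{s_1}-\mu_{s_2}\|^2$, $\hat{M}_t$ is centered away from zero, and it stays near zero for pure-noise data.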
To establish the asymptotic distribution of $\hat{\mathscr{M}}$ when $T$ diverges, we need to show that under $H_0$, $\max_{1\leq t\leq T-1}\sigma_{nt}^{-1}\hat{M}_t$
converges to $\max_{1\leq t\leq T-1} Z_t$, where $Z_t$ is a Gaussian process with mean $0$ and covariance $\Sigma_Z$.
To this end, we need to show (i) the joint asymptotic normality of $(\sigma_{nt_1}^{-1}\hat{M}_{t_1},\cdots, \sigma_{nt_d}^{-1}\hat{M}_{t_d})^{\prime}$ for $t_1<t_2<\cdots<t_d$, and (ii) the tightness of
$\max_{1\leq t\leq T-1}\sigma_{nt}^{-1}\hat{M}_t$. The proof of (i) is similar to the proof of the joint asymptotic normality in the finite-$T$ case, so we focus on (ii).
To prove (ii), let $W_n(s_1,s_2)=\sum_{a,b\in\{1,2\}}(-1)^{|a-b|}\{n(n-1)\}^{-1}\sum_{i\neq j} X_{is_a}^{\prime}X_{js_b}$, and define the first-order projections
$W_{n1}(s_1)=\{n(n-1)\}^{-1}\sum_{i\neq j} X_{is_1}^{\prime}X_{js_1}$ and, analogously, $W_{n2}(s_2)$. Then we have the following Hoeffding-type decomposition for $\hat{M}_t$,
\begin{align*}
\hat{M}_t=\sum_{s_1=1}^t\sum_{s_2=t+1}^Tg_n(s_1,s_2)+\sum_{s_1=1}^t\sum_{s_2=t+1}^T\{W_{n1}(s_1)+W_{n2}(s_2)\}:=\hat{M}_t^{(1)}+\hat{M}_t^{(2)},
\end{align*}
where $g_n(s_1,s_2)=W_n(s_1,s_2)-W_{n1}(s_1)-W_{n2}(s_2)$. The covariance between $\hat{M}_t^{(1)}$ and $\hat{M}_t^{(2)}$ is 0. We first compute the variance of $\hat{M}_t^{(2)}$ under
the null hypothesis $H_0$. Write $\hat{M}_t^{(2)}=(T-t)\sum_{s_1=1}^tW_{n1}(s_1)+t\sum_{s_2=t+1}^TW_{n2}(s_2):=\hat{M}_t^{(21)}+\hat{M}_t^{(22)}$. Then we have
\begin{align*}
\mbox{Var}(\hat{M}_t^{(21)})=\frac{2(T-t)^2}{n(n-1)}\sum_{s_1=1}^t\sum_{r_1=1}^t \mbox{tr}(\Xi_{s_1 r_1}\Xi_{s_1r_1}^\prime).
\end{align*}
Similarly, we have
\begin{align*}
\mbox{Var}(\hat{M}_t^{(22)})=\frac{2t^2}{n(n-1)}\sum_{s_2=t+1}^T\sum_{r_2=t+1}^T \mbox{tr}(\Xi_{s_2 r_2}\Xi_{s_2r_2}^\prime).
\end{align*}
In addition, the covariance between $\hat{M}_t^{(21)}$ and $\hat{M}_t^{(22)}$ is,
\begin{align*}
\mbox{Cov}(\hat{M}_t^{(21)}, \hat{M}_t^{(22)})=\frac{2t(T-t)}{n(n-1)}\sum_{s_1=1}^t\sum_{s_2=t+1}^T\mbox{tr}(\Xi_{s_1 s_2}\Xi_{s_1s_2}^\prime).
\end{align*}
In summary, the variance for $\hat{M}_t^{(2)}$ is
\begin{align*}
\mbox{Var}(\hat{M}_t^{(2)})&=\frac{2}{n(n-1)}\sum_{s_1,r_1=1}^t \sum_{s_2,r_2=t+1}^T\{\mbox{tr}(\Xi_{s_1 r_1}\Xi_{s_1r_1}^\prime)+\mbox{tr}(\Xi_{s_2 r_2}\Xi_{s_2r_2}^\prime)\\
&\quad+2\mbox{tr}(\Xi_{s_1 s_2}\Xi_{s_1s_2}^\prime)\}.
\end{align*}
Moreover, we have
\begin{align*}
\mbox{Var}(\hat{M}_t^{(1)})&=\frac{4}{n(n-1)}\sum_{s_1=1}^t\sum_{s_2=t+1}^T\{\mbox{tr}(\Sigma_{s_1}\Sigma_{s_2})+\mbox{tr}(\Xi_{s_2 s_1}\Xi_{s_2s_1})\}\\
&\quad+\frac{4}{n(n-1)}\sum_{s_1\neq r_1=1}^t \sum_{s_2\neq r_2=t+1}^T\{\mbox{tr}(\Xi_{s_1 r_1}\Xi_{s_2r_2}^\prime)+\mbox{tr}(\Xi_{s_2 r_1}\Xi_{s_1r_2}^\prime)\}.
\end{align*}
According to condition (C2), $\mbox{tr}(\Xi_{s_1 r_1}\Xi_{s_1r_1}^\prime)\asymp \phi(|s_1-r_1|)\mbox{tr}(\Sigma_{s_1}\Sigma_{r_1})$ and $\sum_{k=1}^T\phi^{1/2}(k)<\infty$. Under the null hypothesis $H_0$, we have
\begin{align*}
&\mbox{Var}(\hat{M}_t^{(2)})\\
&\asymp \frac{2\mbox{tr}(\Sigma^2)}{n(n-1)}\sum_{s_1,r_1=1}^t \sum_{s_2,r_2=t+1}^T\{\phi(|s_1-r_1|)+\phi(|s_2-r_2|)+2\phi(|s_1-s_2|)\}\\
&\asymp \frac{2\mbox{tr}(\Sigma^2)}{n(n-1)}\{(T-t)^2t+t^2(T-t)\}.
\end{align*}
On the other hand, we notice that the first term of $\mbox{Var}(\hat{M}_t^{(1)})$ is of the same order as $t(T-t)\mbox{tr}(\Sigma^2)/\{n(n-1)\}$.
Using the Cauchy-Schwarz inequality, under $H_0$ we have
$$
\mbox{tr}^2(\Xi_{s_1 r_1}\Xi_{s_2r_2}^\prime)\leq \mbox{tr}(\Xi_{s_1 r_1}\Xi_{s_1r_1}^\prime)\mbox{tr}(\Xi_{s_2 r_2}\Xi_{s_2r_2}^\prime)\asymp
\phi(|s_1-r_1|)\phi(|s_2-r_2|)\mbox{tr}^2(\Sigma^2).
$$
Therefore, using the condition $\sum_{k=1}^T\phi^{1/2}(k)<\infty$, the second term in $\mbox{Var}(\hat{M}_t^{(1)})$ is also of order $t(T-t)\mbox{tr}(\Sigma^2)/\{n(n-1)\}$. In summary, $\hat{M}_t^{(1)}$
is of smaller order than $\hat{M}_t^{(2)}$. This also implies that $\sigma_{nt}^2=\mbox{Var}(\hat{M}_t^{(2)})\{1+o(1)\}$.
Consider $t=[T\nu]$ for $\nu=j/T\in(0,1)$ with $j=1,\cdots, T-1$. Based on the above results, showing the tightness of $\max_{1\leq t\leq T-1}\sigma_{nt}^{-1}\hat{M}_t$ is equivalent to showing the tightness
of $G_n(\nu)$, where
$$G_n(\nu)=T^{-3/2}n^{-1}\mbox{tr}^{-1/2}(\Sigma^2)(\hat{M}_{[T\nu]}^{(1)}+\hat{M}_{[T\nu]}^{(2)}):=G_n^{(1)}(\nu)+G_n^{(2)}(\nu).$$
We first show the tightness of $G_n^{(1)}(\nu)$. To this end, we first note that, for $1>\eta>\nu>0$,
\begin{align*}
&E\Big\{|G_n^{(1)}(\nu)-G_n^{(1)}(\eta)|^2\Big\}\\
&=\frac{1}{T^{3}n^{2}\mbox{tr}(\Sigma^2)}E\Big\{\Big|\sum_{s_1=1}^{[T\nu]}\sum_{s_2=[T\nu]+1}^{[T\eta]}g_n(s_1,s_2)-\sum_{s_1=[T\nu]+1}^{[T\eta]}\sum_{s_2=[T\eta]+1}^{T}g_n(s_1,s_2)\Big|^2\Big\}\\
&\leq CT^{-3} \{[T\nu]([T\eta]-[T\nu])+(T-[T\eta])([T\eta]-[T\nu])\}\leq C(\eta-\nu)/T.
\end{align*}
Applying the above inequality with $\nu=k/T$ and $\eta=m/T$ for integers $0\leq k\leq m< T$, and using Chebyshev's inequality, we have, for any $\epsilon>0$,
\small
\begin{align*}
P\Big(\Big|G_n^{(1)}(k/T)-G_n^{(1)}(m/T)\Big|\geq \epsilon\Big)&\leq E\Big\{|G_n^{(1)}(k/T)-G_n^{(1)}(m/T)|^2\Big\}/\epsilon^2\\
&\leq C(m-k)/(\epsilon T)^2\leq (C/\epsilon^2)(m-k)^{1+\alpha}/T^{2-\alpha},
\end{align*}
\normalsize
where $0<\alpha<1/2$. Now define $\xi_i=G_n^{(1)}(i/T)-G_n^{(1)}((i-1)/T)$ for $i=1,\cdots, T-1$. Then $G_n^{(1)}(i/T)$ equals the partial sum $S_i=\xi_1+\cdots+\xi_i$, with $S_0=0$.
Hence we have
$$P(|S_m-S_k|\geq \epsilon)\leq (1/\epsilon^2)\{C^{1/(1+\alpha)}(m-k)/T^{(2-\alpha)/(1+\alpha)}\}^{1+\alpha}.$$
Then, using Theorem 10.2 in Billingsley (1999), we conclude that
$$
P(\max_{1\leq i\leq T}|S_i|\geq \epsilon)\leq (KC/\epsilon^2)\{T/T^{(2-\alpha)/(1+\alpha)}\}^{1+\alpha}\leq (KC/\epsilon^2)T^{-1+2\alpha}.
$$
The right hand side of the above inequality goes to 0 as $T\to\infty$ because $\alpha<1/2$. Based on the relationship between $S_i$ and $G_n^{(1)}(i/T)$, we have shown the
tightness of $G_n^{(1)}(\nu)$.
Next, we consider the tightness of $G_n^{(2)}(\nu)$. Recall that
\begin{align*}
G_n^{(2)}(\nu)&=T^{-3/2}n^{-1}\mbox{tr}^{-1/2}(\Sigma^2)\sum_{s_1=1}^{[T\nu]}\sum_{s_2=[T\nu]+1}^T\{W_{n1}(s_1)+W_{n2}(s_2)\}\\
&=T^{-3/2}n^{-1}\mbox{tr}^{-1/2}(\Sigma^2)(T-[T\nu])\sum_{s_1=1}^{[T\nu]}W_{n1}(s_1)\\
&\quad+T^{-3/2}n^{-1}\mbox{tr}^{-1/2}(\Sigma^2)[T\nu]\sum_{s_2=[T\nu]+1}^{T}W_{n2}(s_2):=G_n^{(21)}(\nu)+G_n^{(22)}(\nu).
\end{align*}
It suffices to show the tightness of $G_n^{(21)}(\nu)$, since the tightness of $G_n^{(22)}(\nu)$ follows similarly.
Let $h(i,j)=T^{-1/2}\sum_{s_1=[T\nu]+1}^{[T\eta]} (X_{is_1}-\mu)^{\prime}(X_{js_1}-\mu).$ Then, we have the following
\begin{align*}
G_n^{(21)}(\eta)-G_n^{(21)}(\nu)&=T^{-1/2}n^{-1}\mbox{tr}^{-1/2}(\Sigma^2)\sum_{s_1=[T\nu]+1}^{[T\eta]} \frac{1}{n(n-1)}\sum_{i\neq j} X_{is_1}^{\prime}X_{js_1}\\
&=\frac{1}{\sqrt{n(n-1)}\mbox{tr}(\Sigma^2)}\sum_{i\neq j} h(i,j).
\end{align*}
First, note that
\begin{align*}
&\{G_n^{(21)}(\eta)-G_n^{(21)}(\nu)\}^2\\
&=\frac{2}{n(n-1)\mbox{tr}(\Sigma^2)}\sum_{i\neq j} h^2(i,j)+\frac{4}{n(n-1)\mbox{tr}(\Sigma^2)}\sum_{i\neq j\neq k} h(i,j)h(i,k)\\
&\quad+\frac{1}{n(n-1)\mbox{tr}(\Sigma^2)}\sum_{i\neq j\neq k\neq l} h(i,j)h(k,l).
\end{align*}
Then, we have the following
\begin{align*}
E[\{G_n^{(21)}(\eta)-G_n^{(21)}(\nu)\}^4]&\leq E\Big[\frac{8}{n^2(n-1)^2\mbox{tr}^2(\Sigma^2)}\Big\{\sum_{i\neq j} h^2(i,j)\Big\}^2\Big]\\
&\quad+E\Big[\frac{32}{n^2(n-1)^2\mbox{tr}^2(\Sigma^2)}\Big\{\sum_{i\neq j\neq k} h(i,j)h(i,k)\Big\}^2\Big]\\
&\quad+E\Big[\frac{2}{n^2(n-1)^2\mbox{tr}^2(\Sigma^2)}\Big\{\sum_{i\neq j\neq k\neq l} h(i,j)h(k,l)\Big\}^2\Big]\\
&:=I_1+I_2+I_3.
\end{align*}
First, we consider $I_1$ in the above expression.
\begin{align*}
I_1&=E\Big[\frac{8}{n^2(n-1)^2\mbox{tr}^2(\Sigma^2)}\sum_{i\neq j}\sum_{i_1\neq j_1} h^2(i,j)h^2(i_1,j_1)\Big]\\
&=E\Big[\frac{16}{n^2(n-1)^2\mbox{tr}^2(\Sigma^2)}\sum_{i\neq j} h^4(i,j)\Big]\\
&\quad+E\Big[\frac{32}{n^2(n-1)^2\mbox{tr}^2(\Sigma^2)}\sum_{i\neq j\neq k} h^2(i,j)h^2(i,k)\Big]\\
&\quad+E\Big[\frac{8}{n^2(n-1)^2\mbox{tr}^2(\Sigma^2)}\sum_{i\neq j\neq i_1\neq j_1} h^2(i,j)h^2(i_1,j_1)\Big]:=I_{11}+I_{12}+I_{13}.
\end{align*}
\end{align*}
We see that
\begin{align*}
I_{13}&\asymp \frac{C}{T^2\mbox{tr}^2(\Sigma^2)}\Big\{\sum_{s_1=[T\nu]+1}^{[T\eta]}\sum_{r_1=[T\nu]+1}^{[T\eta]}\mbox{tr}(\Xi_{s_1r_1}\Xi_{s_1r_1}^\prime)\Big\}^2\asymp \frac{C}{T^2}\big\{[T\eta]-[T\nu]\big\}^2.
\end{align*}
After some calculation, we obtain that
\begin{align*}
I_{11}&=\frac{C}{n(n-1)T^2\mbox{tr}^2(\Sigma^2)}\Big[\Big\{\sum_{s_1=[T\nu]+1}^{[T\eta]}\sum_{r_1=[T\nu]+1}^{[T\eta]}\mbox{tr}(\Xi_{s_1r_1}\Xi_{s_1r_1}^\prime)\Big\}^2\\
&\quad+\sum_{s_1=[T\nu]+1}^{[T\eta]}\sum_{r_1=[T\nu]+1}^{[T\eta]}\sum_{u_1=[T\nu]+1}^{[T\eta]}\sum_{v_1=[T\nu]+1}^{[T\eta]} \mbox{tr}(\Xi_{r_1s_1}\Xi_{s_1v_1}\Xi_{v_1u_1}\Xi_{u_1r_1})\Big]=o(I_{13}).
\end{align*}
Similarly, it can be shown that $I_{12}=o(I_{13})$. In summary, $I_1\leq C\big\{[T\eta]-[T\nu]\big\}^2/T^2.$
Now, we check $I_2$. We have the following
\begin{align*}
I_2&=E\Big[\frac{64}{n^2(n-1)^2\mbox{tr}^2(\Sigma^2)}\sum_{i\neq i_1\neq j\neq k}h(i,j)h(i,k)h(i_1,j)h(i_1,k)\Big]\\
&\quad+E\Big[\frac{64}{n^2(n-1)^2\mbox{tr}^2(\Sigma^2)}\sum_{i\neq j\neq k}h(i,j)h(i,k)h(i,j)h(i,k)\Big]:=I_{21}+I_{22}.
\end{align*}
It can be seen that
\begin{align*}
I_{21}&\leq \frac{C}{\mbox{tr}^2(\Sigma^2)}E\Big[h(i,j)h(i,k)h(i_1,j)h(i_1,k)\Big]\\
&=\frac{C}{T^2\mbox{tr}^2(\Sigma^2)}\sum_{s_1,r_1,u_1,v_1}\mbox{tr} (\Xi_{s_1r_1}\Xi_{r_1v_1}\Xi_{v_1u_1}\Xi_{u_1s_1}),
\end{align*}
which is of smaller order than $I_{13}$.
For $I_{22}$, we have
\small
\begin{align*}
I_{22}&=\frac{C}{n\mbox{tr}^2(\Sigma^2)}E\Big[h(i,j)h(i,k)h(i,j)h(i,k)\Big]\\
&=\frac{C}{nT^2\mbox{tr}^2(\Sigma^2)}\sum_{s_1,r_1,u_1,v_1}\Big\{\mbox{tr}(\Xi_{s_1u_1}\Xi_{s_1u_1}^\prime)\mbox{tr}(\Xi_{r_1v_1}\Xi_{r_1v_1}^\prime)
+\mbox{tr} (\Xi_{s_1u_1}\Xi_{u_1r_1}\Xi_{r_1v_1}\Xi_{v_1s_1})\Big\}.
\end{align*}
\normalsize
Therefore, $I_{22}$ is also of smaller order than $I_{13}$. In summary, $I_2$ is of smaller order than $I_{13}$.
Finally, let us consider $I_3$. After some calculation, we have the following
\begin{align*}
I_{3}&\asymp E\Big[\frac{C}{\mbox{tr}^2(\Sigma^2)}\{h^2(i,j)h^2(k,l)+h(i,j)h(k,l)h(i,k)h(j,l)\}\Big]\\
&=\frac{C}{T^2\mbox{tr}^2(\Sigma^2)}\Big\{\sum_{s_1=[T\nu]+1}^{[T\eta]}\sum_{r_1=[T\nu]+1}^{[T\eta]}\mbox{tr}(\Xi_{s_1r_1}\Xi_{s_1r_1}^\prime)\Big\}^2\\
&\quad+\frac{C}{T^2\mbox{tr}^2(\Sigma^2)}\sum_{s_1,r_1,u_1,v_1}\mbox{tr} (\Xi_{s_1r_1}\Xi_{r_1v_1}\Xi_{v_1u_1}\Xi_{u_1s_1}).
\end{align*}
Now it is clear that the first term in $I_3$ is of the same order as $I_{13}$ and the second term is of the same order as $I_{21}$. Therefore, $I_3\leq C\Big\{[T\eta]-[T\nu]\Big\}^2/T^2.$
Let $\nu=k/T$ and $\eta=m/T$, where $k$, $m$ and $T$ are integers with $0\leq k\leq m< T$. Using the above bound on the fourth moment of $|G_n^{(21)}(\eta)-G_n^{(21)}(\nu)|$, we have, for any $L>0$,
\begin{align*}
P\Big(\Big|G_n^{(21)}(k/T)-G_n^{(21)}(m/T)\Big|\geq L\Big)&\leq E\Big\{|G_n^{(21)}(k/T)-G_n^{(21)}(m/T)|^4\Big\}/L^4\\
&\leq (C/L^4)\{(m-k)/T\}^2.
\end{align*}
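For readability, the maximal inequality invoked in the next step is, roughly, of the following form (our paraphrase of the type of bound given by Theorem 10.2 of Billingsley (1999), stated here as an assumption rather than a quotation): if a process $\{S_i\}_{i=0}^{T}$ satisfies $P(|S_j-S_i|\geq L)\leq (C/L^4)\{(j-i)/T\}^2$ for all $0\leq i\leq j\leq T$, then there is an absolute constant $K$ such that
\[
P\Big(\max_{1\leq i\leq T}|S_i|\geq L\Big)\leq \frac{KC}{L^4}.
\]
Taking $S_i=G_n^{(21)}(i/T)$ gives the displayed bound.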
Applying Theorem 10.2 in Billingsley (1999) again, we have
$$
P(\max_{1\leq i\leq T}|G_n^{(21)}(i/T)|\geq L)\leq KC/L^4.
$$
If $L$ is large enough, the above probability could be smaller than any $\epsilon>0$. Therefore, $\max_{1\leq i\leq T}|G_n^{(21)}(i/T)|$ is tight. Similarly, we can show the tightness of $\max_{1\leq i\leq T}|G_n^{(22)}(i/T)|$.
In summary, we have shown the tightness of $G_n^{(1)}(\nu)$ and $G_n^{(2)}(\nu)$. Hence, $G_n(\nu)$ is also tight.
Combining (i) and (ii) together, we know that $\sigma_{nt}^{-1}\hat{M}_t$ converges to a Gaussian process with mean 0 and covariance $\Sigma_Z$.
Finally, applying Lemma 4 in the supplementary material,
we can show that the asymptotic distribution of $\max_{1\leq t\leq T-1}\sigma_{nt, 0}^{-1}\hat{M}_t$ is the desired Gumbel distribution.
This completes the proof of Theorem 3.
\bigskip
\noindent{\bf A.5. Proof of Theorem 4.}
\bigskip
Recall that $\sigma_{\max}=\max_{0< t/T<1}\max \{\sqrt{\mbox{tr}(A_{0t}^2)/h^2(t)}, \sqrt{n||A_{1t}||^2/h^2(t)}\}$ and $\delta=\|\mu_1-\mu_T\|^2$.
Given a constant $C$, we define a set $$K(C)=\{t: |t-\tau| > C T\mbox{log}^{1/2} T \sigma_{\max}/(n\delta), \quad 1 \le t \le T-1 \}.$$ To show Theorem 4, we first show that for any $\epsilon>0$, there exists a constant $C$ such that
\begin{equation}
\mbox{P}\Big\{|\hat{\tau}-\tau|> C T \mbox{log}^{1/2} T \sigma_{\max}/(n\delta)\Big\} < \epsilon. \label{consist}
\end{equation}
Since the event $\{\hat{\tau} \in K(C)\}$ implies the event $\{\max_{t \in K(C)} \hat{M}_t > \hat{M}_{\tau} \}$, it is enough to show that
\[
\mbox{P}\Big(\max_{t \in K(C)} \hat{M}_t > \hat{M}_{\tau} \Big) < \epsilon.
\]
Toward this end, we first derive the result based on the definition of $M_t$:
\[
M_t=\Big\{\frac{T-\tau}{T-t}I(1\le t \le \tau)
+\frac{\tau}{t}I(\tau<t \le T)\Big\}\delta,
\]
where $\delta=(\mu_{1}-\mu_{T})^{\prime}(\mu_{1}-\mu_{T})$. In particular, $M_t$ attains its maximum $\delta$ at $t=\tau$, since $1/(T-t)$ is an increasing function and $1/t$ is a decreasing function. As a result, by the union bound and letting $A(t, \tau |1, T)={1}/{(T-t)}I(1\le t \le \tau)+{1}/{t}I(\tau<t \le T)$, we have
\begin{align*}
\mbox{P}\Biggl(\max_{t \in K(C)} \hat{M}_t > \hat{M}_{\tau} \Biggr)&\le \sum_{t \in K(C)} \mbox{P}\Biggl( \hat{M}_t-M_t+M_t-M_{\tau} > \hat{M}_{\tau}-M_{\tau} \Biggr)\\
&\le\sum_{t \in K(C)} \mbox{P}\Biggl\{\Big|\frac{\hat{M}_t-M_t}{\sigma_{nt}}\Big| > \frac{A(t, \tau |1, T)}{2}\frac{\delta}{\sigma_{\max}}|\tau-t|\Biggr\} \\
&+\sum_{t \in K(C)} \mbox{P}\Biggl\{\Big|\frac{\hat{M}_{\tau}-M_{\tau}}{\sigma_{n\tau}}\Big| > \frac{A(t, \tau |1, T)}{2}\frac{\delta}{\sigma_{\max}}|\tau-t|\Biggr\}\\
&\le\sum_{t \in K(C)} \mbox{P}\Biggl\{\Big|\frac{\hat{M}_t-M_t}{\sigma_{nt}}\Big| > \sqrt{C \mbox{log} T}\Biggr\}\\
&+\sum_{t \in K(C)} \mbox{P}\Biggl\{\Big|\frac{\hat{M}_{\tau}-M_{\tau}}{\sigma_{n\tau}}\Big|> \sqrt{C \mbox{log} T}\Biggr\},
\end{align*}
where the result of $A(t, \tau|1, T)=O(1/T)$ has been used.
Since $({\hat{M}_t-M_t})/{\sigma_{nt}} \sim \mbox{N}(0, 1)$, for a large $C$,
\[
\sum_{t \in K(C)}\mbox{P}\Biggl\{\Big|\frac{\hat{M}_t-M_t}{\sigma_{nt}}\Big| > \sqrt{C \mbox{log} T}\Biggr\}=\sum_{t \in K(C)} C (\mbox{log} T)^{-1/2} T^{-C}
\le \epsilon.
\]
Similarly, we can show that
\[
\sum_{t \in K(C)} \mbox{P}\Biggl\{\Big|\frac{\hat{M}_{\tau}-M_{\tau}}{\sigma_{n\tau}}\Big|> \sqrt{C \mbox{log} T}\Biggr\} \le \epsilon.
\]
Hence, (\ref{consist}) is true, which implies that $\hat{\tau}-\tau=O_p\Big\{T\mbox{log}^{1/2} T \sigma_{\max}/(n\delta)\Big\}$.
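As a quick numerical sanity check of the Gaussian tail rate used above (an illustration we add, not part of the original proof; the particular values of $C$ and $T$ are arbitrary), the following sketch verifies that the two-sided normal tail at the threshold $x=\sqrt{C\log T}$ is dominated by the Mills-ratio bound $2\varphi(x)/x$ and decays polynomially in $T$:

```python
import math

def gaussian_two_sided_tail(x):
    # P(|Z| > x) for Z ~ N(0, 1), computed via the complementary error function
    return math.erfc(x / math.sqrt(2.0))

def mills_bound(x):
    # Mills-ratio upper bound 2*phi(x)/x on P(|Z| > x), valid for x > 0
    phi = math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi)
    return 2.0 * phi / x

# With the threshold x = sqrt(C log T), the tail decays polynomially in T.
C, T = 4.0, 1000
x = math.sqrt(C * math.log(T))
tail = gaussian_two_sided_tail(x)
assert 0.0 < tail <= mills_bound(x)
assert tail < T ** (-C / 2.0)
```

This is the mechanism behind summing $T$ terms of size $T^{-C}$ in the display above: for $C$ large enough, the union over $t \in K(C)$ remains below any prescribed $\epsilon$.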
Recall that ${\sigma}_{\max}=\max_{0< t/T<1}\max \{\sqrt{\mbox{tr}(A_{0t}^2)/h^2(t)}, \sqrt{n||A_{1t}||^2/h^2(t)}\}$ and the assumption
$\mbox{tr}(\Xi_{s_1 r_1}\Xi_{s_1r_1}^\prime)\asymp \phi(|s_1-r_1|)\mbox{tr}(\Sigma_{s_1}\Sigma_{r_1})$ and $\sum_{k=1}^T\phi^{1/2}(k)<\infty$,
following the proofs in Theorem 3, we have $\mbox{tr}(A_{0t}^2)\asymp T^3\mbox{tr}(\Sigma^2)$. Thus we have $\mbox{tr}(A_{0t}^2)/h^2(t)\asymp \mbox{tr}(\Sigma^2)/T$.
For the second part in ${\sigma}_{\max}$, if $1\leq t\leq \tau$, we have
\begin{align*}
\|A_{1t}\|^2=(\mu_1-\mu_T)^{\prime}\sum_{r_1,s_1=1}^t\sum_{r_2,s_2=t+1}^T(\Gamma_{r_1}-\Gamma_{r_2})(\Gamma_{s_1}-\Gamma_{s_2})^{\prime}(\mu_1-\mu_T).
\end{align*}
Using the assumption that $(\mu_1-\mu_T)^{\prime}\Xi_{r_1s_1}(\mu_1-\mu_T)\asymp \phi(|r_1-s_1|)(\mu_1-\mu_T)^{\prime}\Sigma(\mu_1-\mu_T)$, it can be checked that
$\|A_{1t}\|^2\asymp T^3 (\mu_1-\mu_T)^{\prime}\Sigma(\mu_1-\mu_T)$. In summary, we have
$$
\sigma_{\max}=\max \{\sqrt{\mbox{tr}(\Sigma^2)}, \sqrt{n(\mu_1-\mu_T)^{\prime}\Sigma(\mu_1-\mu_T)}\}/\sqrt{T}=v_{\max}/\sqrt{T}.
$$
This completes the proof of Theorem 4.
\bigskip
\noindent{\bf A.6. Proof of Theorem 5.}
\bigskip
To prove Theorem 5, we need the following Lemma \ref{lem3}, whose proof is presented in the supplementary material.
Lemma \ref{lem3} states that the maximum of $M_t$ given by (\ref{pop-mean}) is attained at one of the change-points $1 \le \tau_1 < \cdots < \tau_q <T$.
\setcounter{lemma}{2}
\begin{lemma}
\label{lem3}
Let $1 \le \tau_1 < \cdots < \tau_q <T$ be $q \ge 1$ change-points such that $\mu_1=\cdots=\mu_{\tau_1}\ne \mu_{\tau_1+1}=\cdots=\mu_{\tau_q}\ne \mu_{\tau_q+1}=\cdots=\mu_T$.
Then, $M_t$ defined by (\ref{pop-mean}) attains its maximum at one of the change-points.
\end{lemma}
Now let us prove Theorem 5. Recall that within the time interval $[1, T]$, there are $q$ change-points. First, we show that the proposed binary segmentation algorithm detects the existence of change-points with probability tending to one. According to Theorem 3, we only need to show that $\mbox{P}(\hat{\mathscr{M}} [1, T]>\mathscr{M}_{\alpha_n}[1, T])\to 1$, where $\mathscr{M}_{\alpha_n}$ is the upper $\alpha_n$ quantile of the Gumbel distribution. This holds because for any $1 \le t \le T-1$,
\begin{eqnarray}
\mbox{P}(\hat{\mathscr{M}} [1, T]>\mathscr{M}_{\alpha_n}[1, T]) &\ge& \mbox{P}(\frac{\hat{{M}_t}}{\sigma_{nt,0}}>\mathscr{M}_{\alpha_n}[1, T])\\
&=&1-\Phi(\frac{\sigma_{nt,0}}{\sigma_{nt}}\mathscr{M}_{\alpha_n}-\frac{M_t}{\sigma_{nt}}), \nonumber
\end{eqnarray}
which converges to 1 because $\sigma_{nt,0} \le \sigma_{nt}$, $M_t/\sigma_{nt} \ge \mathscr{M}^* \to \infty$, and $\mathscr{M}_{\alpha_n}=o(\mathscr{M}^*)$.
Once the existence of change-points is detected, the proposed binary segmentation algorithm continues to identify change-points. Since $v_{\max}=o\{n \delta/ (T \sqrt{\log T}) \}$, one change-point $\tau_{(1)} \in \{\tau_1, \cdots, \tau_q \}$ can be identified correctly with probability tending to one, based on derivations similar to those in the proof of Theorem 4 and the fact that $M_t$ achieves its maximum at one of the change-points, as shown in Lemma 3.
Since each subsequence satisfies the condition that $\mathscr{M}_{\alpha_n}=o(\mathscr{M}^*)$, the detection continues. Suppose that fewer than $q$ change-points have been identified successfully; then there exists a segment $I_t$ that contains a change-point. Since $\mathscr{M}_{\alpha_n}=o(\mathscr{M}^*)$ and $v_{\max}[I_t]=o\{n \delta[I_t]/ (T \sqrt{\log T}) \}$, this change-point will be detected and identified by the proposed binary segmentation method. Once all $q$ change-points have been identified consistently, each of the subsequent segments has two end points chosen from $1, \tau_1, \cdots, \tau_q, T$. The proposed binary segmentation algorithm will then not wrongly detect a change-point in any segment $I_t$ that contains no change-point, because according to Theorem 3,
$
\mbox{P}(\hat{\mathscr{M}} [I_t]>\mathscr{M}_{\alpha_n}[1, T])=\alpha_n \to 0,
$
which implies that no change-point will be identified further. This completes the proof of Theorem 5.
\begin{thebibliography}{99}
\bibitem{r1}
\textsc{Ashby, F. G.} (2011). \textit{Statistical Analysis of fMRI Data}, MIT Press.
\bibitem{r2}
\textsc{Bai, Z. D. and Saranadasa, H.} (1996). Effect of high dimension: by an example of a two sample problem. \textit{Statistica Sinica}, \textbf{6}, 311-329.
\bibitem{r3}
\textsc{Billingsley, P.} (1999). \textit{Convergence of Probability Measures}, Wiley.
\bibitem{r4}
\textsc{Chen, S. X. and Qin, Y.} (2010). A two-sample test for high-dimensional data with applications to gene-set testing. \textit{The Annals of Statistics}, \textbf{38}, 808-835.
\bibitem{r4a}
\textsc{Chen, H. and Zhang, N. R.} (2015). Graph-based change-point detection. \textit{The Annals of Statistics}, \textbf{43}, 139-176.
\bibitem{r5}
\textsc{Dempster, A.} (1958). A high dimensional two sample significance test. \textit{The Annals of Mathematical Statistics}, \textbf{29}, 995-1010.
\bibitem{r6}
\textsc{Dempster, A.} (1960). A significance test for the separation of two highly multivariate small samples. \textit{Biometrics}, \textbf{16}, 41-50.
\bibitem{r8}
\textsc{Epstein, R. and Kanwisher, N.} (1998). A cortical representation of the local visual environment. \textit{Nature}, \textbf{392}, 598-601.
\bibitem{r9}
\textsc{Fujikoshi, Y., Ulyanov, V. and Shimizu, R.} (2011). \textit{Multivariate Statistics: High-Dimensional and Large-Sample Approximations}, Wiley.
\bibitem{r10}
\textsc{Hall, P. and Heyde, C.} (1980). \textit{Martingale Limit Theory and Its Application}, Academic Press, New York.
\bibitem{r11}
\textsc{Henderson, J., Larson, C. and Zhu, D.} (2007). Cortical activation to indoor versus outdoor scenes: an fMRI study. \textit{Experimental Brain Research}, \textbf{179}, 75-84.
\bibitem{r12}
\textsc{Henderson, J., Zhu, D. and Larson, C.} (2011). Functions of parahippocampal place area and retrosplenial cortex in real-world scene analysis: an fMRI study. \textit{Visual Cognition}, \textbf{19}, 910-927.
\bibitem{r13}
\textsc{Hu, J., Bai, Z., Wang, C. and Wang, W.} (2015). On testing the equality of high dimensional mean vectors with unequal covariance matrices. \textit{Annals of the Institute of Statistical Mathematics}, 1-23.
\bibitem{r14}
\textsc{Jirak, M.} (2015). Uniform change point tests in high dimension. \textit{The Annals of Statistics}, \textbf{43}, 2451-2483.
\bibitem{r15}
\textsc{Paul, D. and Aue, A.} (2014). Random matrix theory in statistics: a review. \textit{Journal of Statistical Planning and Inference}, \textbf{150}, 1-29.
\bibitem{15a}
\textsc{Srivastava, M. and Kubokawa, T.} (2013). Tests for multivariate analysis of variance in high dimension under non-normality. \textit{Journal of Multivariate Analysis}, \textbf{115}, 204-216.
\bibitem{r16}
\textsc{Schott, J. R.} (2007). Some high-dimensional tests for a one-way MANOVA. \textit{Journal of Multivariate Analysis}, \textbf{98}, 1825-1839.
\bibitem{r17}
\textsc{Storey, J.} (2003). The positive false discovery rate: a Bayesian interpretation and the q-value. \textit{The Annals of Statistics}, \textbf{31}, 2013-2035.
\bibitem{r18}
\textsc{Wang, L., Peng, B. and Li, R.} (2015). A high-dimensional nonparametric multivariate test for mean vector. \textit{Journal of the American Statistical Association}, \textbf{110}, 1658-1669.
\bibitem{r19}
\textsc{Wilks, S. S.} (1932). Certain generalizations in the analysis of variance. \textit{Biometrika}, \textbf{24}, 471-494.
\bibitem{r20}
\textsc{Venkatraman, E.} (1992). Consistency results in multiple change-points problems. Technical Report No. 24, Stanford University.
\end{thebibliography}
\end{document} |
\begin{document}
\title{"Physical quantity" and "physical reality" in Quantum Mechanics: an epistemological path}
\author{David Vernette}
\author{Michele Caponigro}
\affiliation{}
\date{\today}
\begin{abstract}
We briefly reconsider the \textbf{relation} between
"\textbf{physical quantity}" and "\textbf{physical reality}" in
the light of recent interpretations of Quantum Mechanics. We
argue that these interpretations are conditioned by the
epistemological relation between these two fundamental concepts:
the choice of which concept is taken as ontic affects the
corresponding interpretation. We note, for instance, that the
informational view of quantum mechanics (the primacy of
subjectivity) is due mainly to taking "random"
physical quantities as the ontic element. We analyze four
positions: Einstein, Rovelli, d'Espagnat and Zeilinger.
\end{abstract}
\maketitle
\section{Introduction}
What do we mean by physical quantities? In quantum mechanics
they play a central role, specifically in a measurement process.
Physical quantities give us information on the state of a physical
system. What, instead, do we mean by physical reality? We have no
clear definition. There are many hypotheses about their relation;
the most important, Einstein's position, was the attempt to establish
a perfect "isomorphism". We are interested in analyzing this
possible relation. We regard this debate as fundamental, because the
evolution of the two concepts is strictly linked with the
foundations of physical laws. We will use the foundations of
Quantum Theory as a useful tool to go to the heart of the problem.
\subsection{Realism}
We need to give a "general" definition of "realism". There are
many forms of realism, stronger and weaker. Realism, roughly
speaking, is the belief that there exists an objective world “out
there” independent of our observations. The doctrines of realism
are divided into a number of varieties: ontological, semantical,
epistemological, axiological, and methodological. The ontological variety
studies the nature of reality, especially problems concerning existence;
the semantical is interested in the relation between language and
reality. The epistemological investigates the possibility, nature, and
scope of human knowledge. The question of the aims of enquiry is
one of the subjects of axiology, while the methodological studies the
best, or most effective, means of attaining knowledge. In
synthesis:
\begin{itemize}
\item (ontological): Which entities are real? Is there a mind-independent world?
\item (semantical): Is truth an objective language-world relation?
\item (epistemological): Is knowledge about the world possible?
\item (axiological): Is truth one of the aims of enquiry?
\item (methodological): What are the best methods for pursuing
knowledge?
\end{itemize}
In this paper, we are interested in "ontological realism",
specifically ontological realism in quantum mechanics. We will
analyze four significant positions: Einstein, Rovelli,
d'Espagnat and Zeilinger. In advance we can say that, going
from Einstein to Zeilinger, we will witness a gradual
disappearance of physical reality (and of the corresponding
isomorphism).
\section{Physical quantity and physical reality in: Einstein, Rovelli, d'Espagnat, Zeilinger}
\subsection{Einstein position\cite{Ei}:} \emph{If, without in any way disturbing a system, we can predict
with certainty (i.e., with probability equal to unity) the value
of a physical quantity, then there exists an element of physical
reality corresponding to this physical quantity}.\\
\\
{\footnotesize This was the basic conjecture of the EPR argument,
whose primary objective was to prove the incompleteness of QM. The
original paper used entangled pairs of particles, i.e., states
whose wave function cannot be written as a tensor product. Instead of
this quite general configuration, one usually considers an
entangled pair of spin-$\frac{1}{2}$ particles prepared,
following Bohm\cite{Bo}, in the so-called \emph{singlet state},
which is rotation invariant and given along any vector by
\[\Psi(x_1,x_2)=\frac{1}{\sqrt{2}}(| +\rangle _1\otimes|
-\rangle_2-| -\rangle_1\otimes| +\rangle_2).\]}
The above citation leads us to analyze the two mentioned fundamental
concepts: (i) physical quantity and (ii) physical reality. We
regard the debate on these two notions as completely open, because
we have no univocal and deep definition of either. The importance
of the above statement, to us, lies in the following strong
epistemological affirmation:\\
\emph{[..]\textbf{then there exists an element of physical reality
corresponding to this physical quantity}}.\\
We note a forced "isomorphism" between the two concepts. Along this
line, in our view, begin the genuine differences between the various
interpretations of quantum theory. Is it correct to "force" the
isomorphic relation? The relation could be much more complex.\\
We can make some theoretical considerations. First: in a "realist's
world view", there exist physical quantities with "objective
properties", which are independent of any acts of observation or
measurement; but we cannot exclude the existence of other elements
of physical reality, with definite values, which do not depend
on measurement. We summarize the theoretical conjectures below:
\begin{itemize}
\item The perfect "isomorphism" between the two concepts (e.g. Einstein's
position)
\item Physical quantity (measurable) without correspondence in physical
reality (e.g. Zeilinger's position)
\item Physical quantity (measurable) with a \emph{"veiled"} correspondence in
physical reality (e.g. d'Espagnat's position)
\item Unmeasurable physical quantity with possible existence in
physical reality.
\item Unmeasurable physical quantity without any existence in
physical reality.
\end{itemize}
Of course, philosophers can ascribe these epistemological
positions to philosophical schools. Here, we can easily raise many
questions, for instance: (i) what is an unmeasurable physical
reality? (ii) Is it possible that all physical quantities are
measurable? (iii) What is a physical
quantity without a corresponding physical reality? \\
How can we get out of this? There are some interesting works, for
instance, relational quantum mechanics.
\subsection{Rovelli position\cite{Rov}:} \emph{Rovelli departs radically from
such strict Einstein realism: the \textbf{physical reality} is
taken to be formed by the \textbf{individual quantum events}
through which interacting systems (objects) affect one another.
\textbf{Quantum events exist only in interactions} and the reality
of each quantum event is only relative to the system involved in
the interaction. In Relational QM, the preferred observer is
abandoned. Indeed, it is a fundamental assumption of this approach
that nothing distinguishes,a priori, systems and observers: any
physical system provides a potential observer, and physics
concerns what can be said about nature on the basis of the
information that any physical system can, in principle, have.
Different observers can of course exchange information, but we
must not forget that such information exchange is itself a quantum
mechanical interaction. An exchange of information is therefore a
quantum measurement performed by one observing system $A$ upon
another observing system $B$.} \\
These considerations are based on the following basic
concepts\cite{Rov}:\\
\emph{The physical theory is concerned with relations between
physical systems. In particular, it is concerned with the
description that observers give about observed systems. Following
our hypothesis ( i.e. All systems are equivalent: Nothing a priori
distinguishes observer systems from quantum systems. If the
observer O can give a description of the system S, then it is also
legitimate for an observer O' to give a quantum description of the
system formed by the observer O),we reject any fundamental or
metaphysical distinctions as: system / observer, quantum system
/classical system, physical system / consciousness. We assume the
existence of an ensemble of systems, each of which can be
equivalently considered as an observing system or as an observed
system. A system (observing system ) may have information about
another system (observed system). Information is exchanged via
physical interactions. The actual process through which
information is collected and stored is not of particular interest
here, but can be physically described in any specific instance.}
\\
Rovelli's position leads us to the following epistemological
implications:
\begin{itemize}
\item (i) rejection of the individual object
\item (ii) rejection of individual intrinsic properties
\end{itemize}
Some consequences: (a) it is not possible to give a definition of an
\textbf{individual} object in a spatio-temporal location; (b) it is
not possible to characterize the properties of objects so as
to distinguish them from one another. In other words, if we
adopt \textbf{interaction} as the basic level of physical
reality, we accept the philosophy of \textbf{relations}, and:
\begin{itemize}
\item (i) we renounce the possible existence of intrinsic
properties.
\item (ii) we accept relational properties (mathematical models).
\end{itemize}
{\footnotesize We recall, for instance, that a mathematical
model based on the relationist principle accepts that the position
of an object can only be defined with respect to other matter. We do
not venture into the philosophical implications of relationalism
(i.e. the monism which affirms that there is no a priori
distinction between physical entities). An important advantage of these
approaches is the possibility of eliminating the privileged role of
the observer. This is the importance of Rovelli's approach to
quantum mechanics. In detail, Rovelli\cite{Rov} claims that QM
itself drives us to the relational perspective, and the founding
postulate of relational quantum mechanics is to stipulate that we
shall not talk about \textbf{properties of systems in the
abstract}, but only of properties of systems relative to one
system; we can never juxtapose properties relative to different
systems. Relational QM is not the claim that reality is described
by the collection of all properties relative to all systems;
rather, reality admits one description per (observing)
system, and each such description is internally consistent. As
Einstein's original motivation with EPR was not to question
locality, but rather to question the completeness of QM, so the
relational interpretation can be interpreted as the discovery of the
incompleteness of the description of reality that any
\textbf{single observer} can give: in this particular sense,
relational QM can be said to show the "incompleteness" of
single-observer Copenhagen QM.}
Rovelli's approach seems not to venture into clarifying the two
notions of physical quantity and physical reality. As we have seen,
he regards the relation \textbf{between} systems as fundamental. The
\textbf{mathematical nature} of the relation is the real problem. Of
course, we can ask: a mathematical law of what?
\subsection{d'Espagnat position\cite{Es}:} \emph{"defines his philosophical view as ”open realism”;
existence precedes knowledge; something exists independently of us
even if it cannot be described".} According to d'Espagnat, we are
unable to describe physical reality, but he admits its
\textbf{existence}. For this reason, with respect to our analysis,
this position is not clear: according to d'Espagnat, we can trust only
physical quantities, but we have no tool to verify their
correspondence in the physical world.
\subsection{Zeilinger position\cite{Ze}:}
The notion of \textbf{individuality} has recently introduced a radical
interpretation of quantum mechanics. The forced equivalence is
between \textbf{information and individuality} (and not between
physical quantity and physical reality): this is
Zeilinger's\cite{Ze} view. He put forward an idea which connects the concept of information with the notion of elementary systems:\\
\emph{First we note that our description of the physical world is
represented by propositions. Any physical object can be described
by a set of true propositions. Second, we have knowledge or
information about an object only through observations. It does not
make any sense to talk about reality without the information about
it. Any complex object which is represented by numerous
propositions can be decomposed into constituent systems which need
fewer propositions to be specified. The process of subdividing
reaches its limit when the individual subsystems only represent a
single proposition, and such a system is denoted as an elementary
system. (qubit of modern quantum physics).}\\
In short, the \textbf{randomness of physical quantities} is what
rules out \textbf{any} fixed correspondence with
physical reality: the opposite of Einstein's position.
\section{conclusion}
We have analyzed how, starting from genuine realism, we have
reached a genuine subjectivism. Physical reality has, step by
step, \textbf{gradually disappeared}, replaced by the subject. We
ascribe this evolution to the unclear
epistemological relation between physical quantity and physical
reality; thus, the interpretation of quantum mechanics is not due only
to the analysis of the formalism. Finally, we conclude with a
paradoxical question: was Einstein a realist? As we have seen, he
was the only \textbf{real "idealist"}, because he never gave up
the search for physical reality.
{\footnotesize------------------\\ $\diamond$ David
Vernette: Quantum Philosophy Theories,
www.qpt.org.uk\\ david.vernette.org.uk
\\
$\diamond$ Michele Caponigro: University of Camerino, Physics
Department, [email protected]}
\end{document} |
\begin{document}
\title{On resolvent matrix, Dyukarev-Stieltjes parameters and orthogonal matrix polynomials via $a$-Stieltjes transformed sequences}
\begin{abstract}
By using \tKt{ed} sequences and Dyukarev-Stieltjes parameters we obtain a new representation of the resolvent matrix corresponding to the truncated matricial Stieltjes moment problem. Explicit relations between orthogonal matrix polynomials and matrix polynomials of the second kind constructed from consecutive \tKt{ed} sequences are obtained. Additionally, a \tnnH{} measure for which the matrix polynomials of the second kind are the orthogonal matrix polynomials is found.
\end{abstract}
\subsection*{Keywords:}
Resolvent matrix, orthogonal matrix polynomials, Dyukarev-Stieltjes parameters, \tKt{ed} sequences.
\section{Introduction}
This paper is a continuation of work done in the papers~\zitas{MR3324594,MR3327132,MR2735313,MR3014201,MR3133464}, where two truncated matricial power moment problems on semi-infinite intervals made up one of the main topics. The starting point of studying such problems was the famous two part memoir of \Stieltjes{}~\zitas{MR1508159,MR1508160} where the author's investigations on questions for special continued fractions led him to the power moment problem on the interval \(\ra\). A complete theory of the treatment of power moment problems on semi-infinite intervals in the scalar case was developed by M.~G.~Kre{\u\i}n in collaboration with A.~A.~Nudelman (see~\zitaa{MR0044591}{\cSect{10}},~\zita{MR0233157},~\zitaa{MR0458081}{\cchap{V}}). For a modern operator-theoretical treatment of the power moment problems named after Hamburger and Stieltjes and its interrelations, we refer the reader to Simon~\zita{MR1627806}.
The matrix version of the classical \Stieltjes{} moment problem was studied in Adamyan/Tkachenko~\zitas{MR2155645,MR2215856}, And\^o~\zita{MR0290157}, Bolotnikov~\zitas{MR975671,MR1362524,MR1433234}, Bolotnikov/Sakhnovich~\zita{MR1722780}, Chen/Hu~\zita{MR1807884}, Chen/Li~\zita{MR1670527}, Dyukarev~\zitas{Dyu81,MR686076}, Dyukarev/Katsnel{\('\)}son~\zitas{MR645305,MR752057}, Hu/Chen~\zita{MR2038751}.
The central research object of the present work is the resolvent matrix (RM) $\Uu{m}$ of the truncated matricial Stieltjes matrix moment (TSMM) problem. The importance of the knowledge of the RM $\Uu{m}$ is explained by the fact that the matrix $\Uu{m}$ generates the solution set of the TSMM problem via a linear fractional transformation. The multiplicative decomposition of $\Uu{m}$ in simplest factors containing Dyukarev-Stieltjes (DS) parameters $\DSpm{j}$ and $\DSpl{j}$~\zita{MR3324594} allowed us to attain interesting interrelations between the orthogonal polynomials $P_{k,j}$, their second kind polynomials $Q_{k,j}$ and the DS-parameters $\DSpm{j}$ and $\DSpl{j}$, as well as the Schur complements $\widehat H_{k,j}=L_{k,j}$; see~\zita{MR3324594}.
In the present work, inspired by Dyukarev's multiplicative decomposition of \(\Uu{m}\) (see \rprop{P1033}), the \tKt{ed} sequences \zitas{114arxiv,141arxiv}, and the representation of the RM in terms of the polynomials $P_{k,j}$ and $Q_{k,j}$~\zita{MR3324594}, a factorization of the RM $\Uu{m}$ is obtained. This representation is constructed through a sequence of \taKt{\ell}ed sequences and corresponding polynomials $P_{k,j}^{(\ell)}$ and $Q_{k,j}^{(\ell)}$. An important consequence of such representation is the fact that a \tnnH{} measure for which the polynomials of the second kind $Q_{k,j}^{(1)}$ are orthogonal is explained. By employing the interrelations between the orthogonal matrix polynomials and the Hurwitz type matrix polynomials (see~\cite{MR3327132}) new identities involving $P_{k,j}$, $Q_{k,j}$ and $P_{k,j}^{(1)}$, $Q_{k,j}^{(1)}$ are attained; see \rtheo{Tnew1}. The scalar version of the mentioned interrelations were studied in~\cite{CR16}.
The starting point of our considerations in this paper is the \tDSp{} \([\seq{\DSpl{k}}{k}{0}{\infty},\seq{\DSpm{k}}{k}{0}{\infty}]\) of a matricial moment sequence corresponding to a completely \tnd{} \tnnH{} measure on the right half-axis \(\ra\), having moments up to any order. Yu.~M.~Dyukarev~\zita{MR2053150} introduced these parameters in connection with a multiplicative decomposition of his resolvent matrix for the truncated matricial Stieltjes moment problem into elementary factors to characterize indeterminacy. In the scalar case the \tDSp{} coincides with the classical parameters \([\seq{\spl{k}}{k}{0}{\infty},\seq{\spm{k}}{k}{0}{\infty}]\) used by \Stieltjes{}~\zitas{MR1508159,MR1508160} to formulate his indeterminacy criterion. M.~G.~Kre{\u\i}n gave a mechanical interpretation for \Stieltjes{'} investigations on continued fractions (see Gantmacher/Kre{\u\i}n~\zitaa{MR0114338}{\canha{2}} or Akhiezer~\zitaa{MR0184042}{Appendix}) by a weightless thread carrying point masses \(m_k\) with intermediate distances \(l_k\). In~\zita{MR3014201} another parametrization of moment sequences related to a semi-infinite interval \([\alpha,\infp)\) was introduced, the so-called \tSp{} \(\seq{\Spu{j}}{j}{0}{\infi}\). This parametrization is strongly connected to a Schur-type algorithm considered in~\zitas{MR2038751,MR1807884,114arxiv,141arxiv} for solving the truncated matricial Stieltjes moment problem step-by-step by reducing the number of given data. In one step the sequence of prescribed moments is transformed into a shorter sequence, the so-called \tKt{}, eliminating one moment. This procedure is equivalent to dropping the first Stieltjes parameter \(\Spu{0}\). In \rtheo{R0921} we show that this transformation is also essentially equivalent to dropping the Dyukarev-Stieltjes parameter \(\DSpm{0}\) and interchanging the roles of \(\DSpl{k}\) and \(\DSpm{k}\),
which is especially interesting against the background of M.~G.~Kre{\u\i}n's mechanical interpretation. By dividing the elementary factors of Yu.~M.~Dyukarev's above-mentioned multiplicative representation of his resolvent matrix into two groups, in \rtheo{Taaa007} we give a factorization of this resolvent matrix, which corresponds to a splitting of the original problem into two smaller moment problems associated with the first part of the original sequence of prescribed moments and a second part obtained by repeated application of the \tKt{ation}. Comparing blocks in this formula, in \rtheo{T0820} we can represent the orthogonal matrix polynomials and the matrix polynomials of the second kind with respect to the \tKt{ed} moment sequence in terms of the polynomials corresponding to the original moment sequence. In particular, we state orthogonality relations for the matrix polynomials of the second kind in \rprop{orthg-001}.
In order to formulate the moment problems we are going to study, we first review some notation. Let \(\C\)\index{c@\(\C\)}, \(\R\)\index{r@\(\R\)}, \(\NO\)\index{n@\(\NO\)}, and \(\N\)\index{n@\(\N\)} be the set of all complex numbers, the set of all real numbers, the set of all \tnn{} integers, and the set of all positive integers, respectively. Throughout this paper, let \(p,q\in\N\)\index{p@\(p\)}\index{q@\(q\)}. For all \(\alpha,\beta\in\R\cup\set{-\infty,\infp}\), let \(\mn{\alpha}{\beta}\)\index{z@\(\mn{\alpha}{\beta}\)} be the set of all integers \(k\) for which \(\alpha\leq k\leq\beta\) holds. If \(\cX\) is a nonempty set, then \(\cX^\pxq\)\index{\(\cX^\pxq\)} stands for the set of all \tpqa{matrices}, each entry of which belongs to \(\cX\), and \(\cX^p\)\index{\(\cX^p\)} is short for \(\cX^\xx{p}{1}\). If \((\Omega,\gA)\) is a measurable space, then each countably additive mapping whose domain is \(\gA\) and whose values belong to the set \(\Cggq\)\index{c@\(\Cggq\)} of all \tnn{} \tH{} complex \tqqa{matrices} is called a \tnnH{} \tqqa{measure} on \((\Omega,\gA)\). Denote by \symba{\Cgq}{c} the set of all \tpH{} complex \tqqa{matrices}.
Let \(\Bra\)\index{b@\(\Bra\)} be the \(\sigma\)\nobreakdash-algebra of all Borel subsets of \(\ra\), let \(\Mggqa{\ra}\)\index{m@\(\Mggqa{\ra}\)} be the set of all \tnn{} \tH{} \tqqa{measures} on \((\ra,\Bra)\) and, for all \(\kappa\in\NOinf \), let \(\Mgguqa{\kappa}{\ra}\)\index{m@\(\Mgguqa{\kappa}{\ra}\)} be the set of all \(\sigma\in\Mggqa{\ra}\) such that the integral
\bgl{so}
\suo{j}{\sigma}
\defg\int_{\ra}t^j\sigma(\dif t)
\eg
\index{s@\(\suo{j}{\sigma}\)}exists for all \(j\in\mn{0}{\kappa}\).
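For a finitely supported measure, the moment integral \eqref{so} reduces to a weighted power sum. The following sketch (an illustration of ours, not part of the formal development; the measure \(\sigma=\frac{1}{2}\delta_1+\frac{1}{2}\delta_3\) is chosen purely for concreteness) computes the scalar moments exactly:

```python
from fractions import Fraction as F

# sigma = (1/2)*delta_1 + (1/2)*delta_3, a nonnegative measure on [0, inf)
support = [F(1), F(3)]
weights = [F(1, 2), F(1, 2)]

def moment(j):
    """s_j^{(sigma)} = integral of t^j d(sigma) = sum_k w_k * t_k^j."""
    return sum(w * t ** j for t, w in zip(support, weights))

s = [moment(j) for j in range(5)]   # s_0, ..., s_4
```

Exact rational arithmetic is used so that the Hankel-matrix tests discussed below are free of rounding effects.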
Two matricial power moment problems lie in the background of our considerations. The first one is the following:
\begin{description}
\item[\mproblem{\ra}{\kappa}{=}]\index{m@\mproblem{\ra}{\kappa}{=}} Let \(\kappa\in\NOinf \) and let \(\seqska \) be a sequence of complex \tqqa{matrices}. Describe the set \(\Mggqaag{\ra}{\seqska }\)\index{m@\(\Mggqaag{\ra}{\seqska }\)} of all \(\sigma\in\Mgguqa{\kappa}{\ra}\) for which \(\suo{j}{\sigma}=\su{j}\) is fulfilled for all \(j\in\mn{0}{\kappa}\).
\end{description}
The second matricial moment problem under consideration is a truncated one with an additional inequality condition for the last prescribed moment:
\begin{description}
\item[\mproblem{\ra}{m}{\leq}]\index{m@\mproblem{\ra}{m}{\leq}} Let \(m\in\NO\) and let \(\seq{\su{j}}{j}{0}{m}\) be a sequence of complex \tqqa{matrices}. Describe the set \(\Mggqaakg{\ra}{\seq{\su{j}}{j}{0}{m}}\)\index{m@\(\Mggqaakg{\ra}{\seq{\su{j}}{j}{0}{m}}\)} of all \(\sigma\in\Mgguqa{m}{\ra}\) for which \(\su{m}-\suo{m}{\sigma}\) is \tnn{} \tH{} and, in the case \(m>0\), moreover \(\suo{j}{\sigma}=\su{j}\) is fulfilled for all \(j\in\mn{0}{m-1}\).
\end{description}
In order to give a better motivation for our considerations in this paper,
we are going to recall the characterizations of solvability of the above-mentioned moment problems, which were obtained in~\zita{MR2735313}.
This requires some preparations.
For all \(n\in\NO\), let \(\Kggqu{2n+1}\)\index{k@\(\Kggqu{2n+1}\)} be the set of all sequences \(\seq{\su{j}}{j}{0}{2n+1}\) of complex \tqqa{matrices}, such that the block \Hankel{} matrices
\begin{align}\label{HK}
\Hu{n}&\defg\matauuo{\su{j+k}}{j,k}{0}{n}&
&\text{and}&
\Ku{n}&\defg\matauuo{\su{j+k+1}}{j,k}{0}{n}
\end{align}
\index{h@\(\Hu{n}\)}\index{k@\(\Ku{n}\)}are both \tnnH{}. For all \(n\in\NO\), let \(\Kggqu{2n}\)\index{k@\(\Kggqu{2n}\)} be the set of all sequences \(\seq{\su{j}}{j}{0}{2n}\) of complex \tqqa{matrices}, such that \(\Hu{n}\) is \tnnH{} and, in the case \(n\geq1\), furthermore \(\Ku{n-1}\) is \tnnH{}. For all \(m\in\NO\), let \(\Kggequ{m}\)\index{k@\(\Kggequ{m}\)} be the set of all sequences \(\seq{\su{j}}{j}{0}{m}\) of complex \tqqa{matrices} for which a complex \tqqa{matrix} \(\su{m+1}\) exists such that \(\seq{\su{j}}{j}{0}{m+1}\) belongs to \(\Kggqu{m+1}\). Let \(\Kggqu{\infty}\)\index{k@\(\Kggqinf\)} be the set of all sequences \(\seqsinf \) of complex \tqqa{matrices} such that \(\seq{\su{j}}{j}{0}{m}\) belongs to \(\Kggqu{m}\) for all \(m\in\NO\), and let \(\Kggequ{\infty}\defg\Kggqu{\infty}\)\index{k@\(\Kggeqinf\)}. For all \(\kappa\in\NOinf \), we call a sequence \(\seqska \) \noti{\tSnnd}{Stieltjes!\tnn{} definite} (resp.\ \noti{\tSnnde}{Stieltjes!\tnn{} definite!extendable}) if it belongs to \(\Kggqu{\kappa}\) (resp.\ to \(\Kggequ{\kappa}\)). Observe that these notions coincide with the right-sided version in~\zitaa{MR3014201}{\cdefnp{1.3}{213}} for \(\alpha=0\).
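Membership in the classes above amounts to finitely many semidefiniteness tests on the block \Hankel{} matrices \eqref{HK}. As a small illustration of ours in the scalar case \(q=1\), the following sketch uses the factorial moments \(\su{j}=j!\) of the measure \(e^{-t}\dif t\) on \(\ra\) (our own choice of test data) and checks the stronger positive-definiteness condition via Sylvester's criterion; genuine semidefiniteness would require inspecting all principal minors:

```python
from fractions import Fraction as F
from math import factorial

def det(M):
    """Exact determinant via cofactor expansion (fine for small matrices)."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j]
               * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def hankel(s, shift, n):
    """The matrix H_n (shift=0) resp. K_n (shift=1) built from moments s."""
    return [[s[j + k + shift] for k in range(n + 1)] for j in range(n + 1)]

def is_positive_definite(M):
    """Sylvester's criterion: all leading principal minors are positive."""
    return all(det([row[:k] for row in M[:k]]) > 0
               for k in range(1, len(M) + 1))

s = [F(factorial(j)) for j in range(6)]   # illustrative moments s_j = j!
H2, K2 = hankel(s, 0, 2), hankel(s, 1, 2)
stieltjes_pd = is_positive_definite(H2) and is_positive_definite(K2)
```

Here both \(\Hu{2}\) and \(\Ku{2}\) turn out positive definite, so in particular \(\seqs{5}\) lies in the class \(\Kggqu{5}\).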
Using the sets of matrix sequences above, we are able to formulate the solvability criteria for the problems~\mproblem{\ra}{m}{=} and~\mproblem{\ra}{m}{\leq}, which were obtained in~\zita{MR2735313} for intervals \([\alpha,\infp)\) with arbitrary \(\alpha\in\R\):
\btheol{T1121}
Let \(\kappa\in\NOinf \) and let \(\seqska \) be a sequence of complex \tqqa{matrices}.
Then \(\Mggqaag{\ra}{\seqska }\neq\emptyset\) if and only if \(\seqska \in\Kggequ{\kappa}\).
\etheo
In the case \(\kappa\in\NO\), \rtheo{T1121} is a special case of~\zitaa{MR2735313}{\cthmp{1.3}{909}}. If \(\kappa=\infi\), the asserted equivalence can be proved using the equation \(\Mggqaag{\ra }{\seqsinf }=\bigcap_{m=0}^\infty\Mggqaag{\ra }{\seq{\su{j}}{j}{0}{m}}\) and the matricial version of the Helly-Prohorov theorem (see~\zitaa{MR975253}{\cSatzp{9}{388}}). We omit the details of the proof, the essential idea of which originates in~\zitaa{MR0184042}{proof of \cthmp{2.1.1}{30}}.
\bthmnl{see~\zitaa{MR2735313}{\cthmp{1.4}{909}}}{T1122}
Let \(m\in\NO\) and let \(\seq{\su{j}}{j}{0}{m}\) be a sequence of complex \tqqa{matrices}. Then \(\Mggqaakg{\ra}{\seq{\su{j}}{j}{0}{m}}\neq\emptyset\) if and only if \(\seq{\su{j}}{j}{0}{m}\in\Kggqu{m}\).
\end{thm}
The importance of \rtheoss{T1121}{T1122} led us in~\zitas{MR2735313,MR3014201,MR3133464} to a closer look at the properties of sequences of complex \tqqa{matrices}, which are \tSnnd{} or \tSnnde{}. Guided by our former investigations on \tHnnd{} sequences and \tHnnde{} sequences, which were done in~\zita{MR2570113}, in~\zitaa{MR2735313}{\cSect{4}} and in~\zita{MR3014201} we started a thorough study of the structure of \tSnnd{} sequences and \tSnnde{} sequences.
This paper is organized as follows: In \rsect{S1557} we recall some results on the Schur complements $\Hu{j}/\Hu{j-1}$, which form the \tSp{} $(\Spu{j})_{j=0}^\kappa$ of a sequence $(s_j)_{j=0}^\kappa$. In \rsect{S0820} the \tDSp{} given by the sequence $[\seq{\DSpl{k}}{k}{0}{\infi},\seq{\DSpm{k}}{k}{0}{\infi}]$ is recalled. Interrelations between this pair of sequences and the \tSp{}, in both directions, are discussed. In \rsect{S1538}, via a set of nonnegative column pairs $\tmat{\phi(z)\\\psi(z)}$, the parametrization of the solution set in the non-degenerate case is rewritten in terms of a linear fractional transformation. Moreover, the representation of the resolvent matrix corresponding to a matricial truncated Stieltjes moment problem with matrix polynomials orthogonal on $\ra$ and matrix polynomials of the second kind is recalled. In \rsect{S1649} we consider the notion and main results concerning the \tKt{}. \rsect{S1011} is devoted to the \tDSp{} $[\seq{\DSpol{k}{\ell}}{k}{0}{\infi},\seq{\DSpom{k}{\ell}}{k}{0}{\infi}]$ of the \taKta{\ell}{ $\seqsinf $}. A representation of the resolvent matrix of the matricial truncated Stieltjes moment problem in terms of the \tKt{ed} moment sequence is obtained in \rsect{S0834}. In \rsect{S1945}, interrelations between the polynomials $P^{(1)}_{k,j}$ and $Q^{(1)}_{k,j}$ constructed from the \tKt{ed} moment sequence and the polynomials $P_{k,j}$ and $Q_{k,j}$ corresponding to the original moment sequence are presented. A \tnnH{} measure with respect to which the matrix polynomials of the second kind are orthogonal is obtained.
\section{\tSp{}}\label{S1557}
With later applications to the matrix version of the \Stieltjes{} moment problem in mind, a particular inner parametrization, called \tSp{}, for matrix sequences was developed in~\zita{MR3014201}. First we are going to recall the definition of the \tSp{} of a sequence of complex \tpqa{matrices}. To prepare this notion, we need some further matrices built from the given data. If \(A\in\Cpq\), then a unique matrix $G\in\Cqp$ exists which satisfies the four equations $AGA=A$, $GAG=G$, $(AG)^\ad=AG$ and $(GA)^\ad=GA$. This matrix $G$ is called the \noti{Moore-Penrose inverse of $A$}{Moore-Penrose inverse} and is denoted by \(A^\MP\)\index{\(A^\MP\)}.
Let \(\kappa\in\NOinf \) and let \(\seqska \) be a sequence of complex \tpqa{matrices}. Then, let
\begin{align}\label{yz}
\yuu{\ell}{m}&\defg
\bMat
\su{\ell}\\
\su{\ell+1}\\
\vdots\\
\su{m}
\eMat&
&\text{and}&
\zuu{\ell}{m}&\defg\brow\su{\ell},\su{\ell+1},\dotsc,\su{m}\erow
\end{align}
\index{y@\(\yuu{\ell}{m}\)}\index{z@\(\zuu{\ell}{m}\)}for all \(\ell,m\in\NO\) with \(\ell\leq m\leq\kappa\). We use the notation
\begin{align}\label{L}
\Lu{0}&\defg\su{0}&
&\text{and}&
\Lu{n}&\defg\su{2n}-\zuu{n}{2n-1}\Hu{n-1}^\MP\yuu{n}{2n-1}
\end{align}
\index{l@\(\Lu{n}\)}for all \(n\in\N\) with \(2n\leq\kappa\), and the notation
\begin{align}\label{La}
\Lau{0}&\defg\su{1}&
&\text{and}&
\Lau{n}&\defg\su{2n+1}-\zuu{n+1}{2n}\Ku{n-1}^\MP\yuu{n+1}{2n}
\end{align}
\index{l@\(\Lau{n}\)}for all \(n\in\N\) with \(2n+1\leq\kappa\). Observe that for \(n\geq1\) the matrix \(\Lu{n}\) is the Schur complement \(\Hu{n}/\Hu{n-1}\) of \(\Hu{n-1}\) in the block \Hankel{} matrix \(\Hu{n}=\tmat{\Hu{n-1}&\yuu{n}{2n-1}\\\zuu{n}{2n-1}&\su{2n}}\) corresponding to the sequence \(\seqska \), whereas the matrix \(\Lau{n}\) is the Schur complement \(\Ku{n}/\Ku{n-1}\) of \(\Ku{n-1}\) in the block \Hankel{} matrix \(\Ku{n}=\tmat{\Ku{n-1}&\yuu{n+1}{2n}\\\zuu{n+1}{2n}&\su{2n+1}}\)
corresponding to the sequence \(\seq{\sau{j}}{j}{0}{\kappa-1}\) defined by
\bgl{s+}
\sau{j}
\defg\su{j+1}.
\eg
\index{\(\sau{j}\)}The sequence \(\seq{\sau{j}}{j}{0}{\kappa-1}\) coincides with the right-sided version of the sequence in~\zitaa{MR3014201}{\cdefnp{2.1}{217}} for \(\alpha=0\).
Now we are able to recall the notion of \tSp{} which was introduced in~\zitaa{MR3014201}{\cdefnp{4.2}{223}} as \emph{right-sided \(\alpha\)\nobreakdash-\Stieltjes{} parametrization} denoted by \(\seq{Q_j}{j}{0}{\kappa}\) in a more general context related to a semi-infinite interval \([\alpha,\infp)\). There one can find further details.
\bdefil{D1021}
Let \(\kappa\in\NOinf \) and let \(\seqska \) be a sequence of complex \tpqa{matrices}. Then the sequence \(\seq{\Spu{j}}{j}{0}{\kappa}\)\index{q@\(\seq{\Spu{j}}{j}{0}{\kappa}\)} given by \(\Spu{2k}\defg\Lu{k}\) for all \(k\in\NO\) with \(2k\leq\kappa\), and by \(\Spu{2k+1}\defg\Lau{k}\) for all \(k\in\NO\) with \(2k+1\leq\kappa\) is called the \noti{\tSpa{\(\seqska \)}}{Stieltjes!parametrization}.
\end{defn}
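In the scalar case the Schur complements entering \rdefi{D1021} can be computed as ratios of consecutive \Hankel{} determinants, since \(\det\Hu{n}=\det\Hu{n-1}\cdot(\Hu{n}/\Hu{n-1})\) whenever \(\Hu{n-1}\) is invertible. A sketch of ours, with the factorial moments \(\su{j}=j!\) as illustrative test data:

```python
from fractions import Fraction as F
from math import factorial

def det(M):
    """Exact determinant via cofactor expansion (fine for small matrices)."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j]
               * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def hankel(s, shift, n):
    """H_n (shift=0) resp. K_n (shift=1) built from the moments s."""
    return [[s[j + k + shift] for k in range(n + 1)] for j in range(n + 1)]

def stieltjes_param(s, kappa):
    """Scalar (q=1) Stieltjes parametrization: Q_{2k} = det H_k / det H_{k-1}
    and Q_{2k+1} = det K_k / det K_{k-1}, with det H_{-1} = det K_{-1} = 1.
    This uses the Schur-determinant identity, valid for invertible blocks."""
    Q, dH, dK = [], F(1), F(1)
    for j in range(kappa + 1):
        d = det(hankel(s, j % 2, j // 2))
        if j % 2 == 0:
            Q.append(d / dH); dH = d
        else:
            Q.append(d / dK); dK = d
    return Q

s = [F(factorial(j)) for j in range(6)]   # illustrative moments s_j = j!
Q = stieltjes_param(s, 5)                 # -> [1, 1, 1, 2, 4, 12]
```

All computed \(\Spu{j}\) are positive, in accordance with \rprop{T1337}, since this moment sequence is \tSpd{}.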
There is a one-to-one correspondence between a sequence \(\seqska \) of complex \tpqa{matrices} and its
\tSp{} \(\seq{\Spu{j}}{j}{0}{\kappa}\) (see~\zitaa{MR3014201}{\cremp{4.3}{224}}). In particular, the original sequence can be explicitly reconstructed from its \tSp{}. Let \symba{\nul{A}}{n} be the null space of a complex matrix \(A\).
\bpropnl{see~\zitaa{MR3014201}{\cthmp{4.12(b)}{225}}}{P0926}
Let \(\kappa\in\NOinf \) and let \(\seqska \) be a sequence of complex \tqqa{matrices} with \tSp{} \((\Spu{j})_{j=0}^\kappa\). Then \(\seqska \in\Kggqu{\kappa}\) if and only if \(\Spu{j}\in\Cggq\) for all \(j\in\mn{0}{\kappa}\) and \(\nul{\Spu{j}}\subseteq\nul{\Spu{j+1}}\) for all \(j\in\mn{0}{\kappa-2}\).
\eprop
\bpropnl{see~\zitaa{MR3014201}{\cthmp{4.12(c)}{225}}}{P0937}
Let \(\kappa\in\NOinf \) and let \(\seqska \) be a sequence of complex \tqqa{matrices} with \tSp{} \((\Spu{j})_{j=0}^\kappa\). Then \(\seqska \in\Kggequ{\kappa}\) if and only if \(\Spu{j}\in\Cggq\) for all \(j\in\mn{0}{\kappa}\) and \(\nul{\Spu{j}}\subseteq\nul{\Spu{j+1}}\) for all \(j\in\mn{0}{\kappa-1}\).
\eprop
Now we introduce an important subclass of the class of \tSnnd{} sequences. More precisely, we turn our attention to some subclass of \(\Kggqu{\kappa}\), which is characterized by stronger positivity properties. For all \(n\in\NO\), let \(\Kgqu{2n+1}\)\index{k@\(\Kgqu{2n+1}\)} be the set of all sequences \(\seq{\su{j}}{j}{0}{2n+1}\) of complex \tqqa{matrices}, such that \(\Hu{n}\) and \(\Ku{n}\) are both \tpH{}. For all \(n\in\NO\), let \(\Kgqu{2n}\)\index{k@\(\Kgqu{2n}\)} be the set of all sequences \(\seq{\su{j}}{j}{0}{2n}\) of complex \tqqa{matrices}, such that \(\Hu{n}\) is \tpH{} and, in the case \(n\geq1\), furthermore \(\Ku{n-1}\) is \tpH{}. Let \(\Kgqu{\infty}\)\index{k@\(\Kgqinf\)} be the set of all sequences \(\seqsinf \) of complex \tqqa{matrices} such that \(\seq{\su{j}}{j}{0}{m}\) belongs to \(\Kgqu{m}\) for all \(m\in\NO\). For all \(\kappa\in\NOinf \), we call a sequence \(\seqska \) \noti{\tSpd}{Stieltjes!positive definite} if it belongs to \(\Kgqu{\kappa}\). For \(\kappa\in\NOinf \), we have \(\Kgqkappa\subseteq\Kggeqkappa\subseteq\Kggqkappa\) (see~\zitaa{114arxiv}{\cpropp{3.8}{12}}). In view of \rtheoss{T1121}{T1122} we obtain then:
\begin{rem}\label{R1524}
Let \(\kappa\in\NOinf \) and let \(\seqska \in\Kgqu{\kappa}\). Then, \(\Mggqaag{\ra}{\seqska }\neq\emptyset\) and, in the case \(\kappa<\infi\), furthermore \(\Mggqaakg{\ra}{\seqska }\neq\emptyset\).
\end{rem}
If \(m\in\NO\) and \(\seqs{m}\in\Kgqu{m}\), the associated truncated moment problems~\mproblem{\ra}{m}{=} and~\mproblem{\ra}{m}{\leq} have infinitely many solutions. Since every principal submatrix of a \tpH{} matrix is again \tpH{}, we can easily see:
\bremal{R2347}
Let \(\kappa\in\Ninf\) and let \(\seqska\in\Kgqkappa\). Then \(\seq{\sau{j}}{j}{0}{\kappa-1}\in\Kgqu{\kappa-1}\).
\erema
\bpropnl{see~\zitaa{MR3014201}{\cthmp{4.12(d)}{225}}}{T1337}
Let \(\kappa\in\NOinf \) and let \(\seqska \) be a sequence of complex \tqqa{matrices} with \tSp{} \((\Spu{j})_{j=0}^\kappa\). Then \(\seqska \in\Kgqu{\kappa}\) if and only if \(\Spu{j}\in\Cgq\) for all \(j\in\mn{0}{\kappa}\).
\eprop
Based on the matrices defined via \eqref{L} and \eqref{La}, we now introduce a further important subclass of $\Kggqkappa$. Let $m\in\NO$ and let $\seqs{m}\in\Kggqu{m}$. Then $\seqs{m}$ is called \notii{completely degenerate} if $\Lu{n}=\Oqq$ in the case \(m=2n\) with some \(n\in\NO\) or if $\Lau{n}=\Oqq$ in the case \(m=2n+1\) with some \(n\in\NO\). The set \symba{\Kggdqu{m}}{k} of all completely degenerate sequences belonging to $\Kggqu{m}$ is a subset of $\Kggequ{m}$ (see~\zitaa{MR3014201}{\cpropp{5.9}{231}}). The moment problem~\mproblem{\ra}{m}{=} has a unique solution if and only if \(\seqs{m}\) belongs to \(\Kggdqu{m}\) (see~\zitaa{142arxiv}{\cthmp{13.3}{53}}).
\bpropnl{cf.~\zitaa{MR3014201}{\cpropp{5.3}{229}}}{111.P5-3}
Let $m\in\NO$ and $\seqs{m}\in\Kggqu{m}$ with \tSp{} \((\Spu{j})_{j=0}^m\). Then $\seqs{m}\in\Kggdqu{m}$ if and only if $\Spu{m}=\Oqq$.
\eprop
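As an illustration of ours: the point mass \(\delta_1\) has scalar moments \(\su{j}=1\) for all \(j\), and already the Schur complement \(\Spu{2}=\Lu{1}\) vanishes, so \(\seqs{2}\) is completely degenerate and the corresponding problem~\mproblem{\ra}{2}{=} has the unique solution \(\delta_1\):

```python
from fractions import Fraction as F

s = [F(1)] * 4          # moments of the point mass delta_1: s_j = 1

Q0 = s[0]                                # Q_0 = s_0
Q1 = s[1]                                # Q_1 = s_1
Q2 = s[2] - s[1] * s[0] ** -1 * s[1]     # L_1      = H_1 / H_0
Q3 = s[3] - s[2] * s[1] ** -1 * s[2]     # Lambda_1 = K_1 / K_0
# Q2 == 0: the sequence (s_j)_{j=0}^2 is completely degenerate
```

Only \(1\times1\) inverses occur here, so the Moore-Penrose inverse reduces to the ordinary reciprocal.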
If $\seqs{m}\in\Kggequ{m}$ with \tSp{} \((\Spu{j})_{j=0}^m\), then from~\zitaa{MR2735313}{\clemmss{4.15}{4.16}} one can easily see that $\seqs{m}$ belongs to $\Kggdqu{m}$ if and only if there is some $\ell\in\mn{0}{m}$ such that $\Spu{\ell}=\Oqq$.
Let $\seqsinf\in\Kggqinf$. Then \(\seqsinf\) is said to be \notii{completely degenerate} if there is some $m\in\NO$ such that $\seqs{m}$ is a \tcdSnnd{} sequence. By \symba{\Kggdqinf}{k} we denote the set of all \tcdSnnd{} sequences $\seqsinf $ of complex \tqqa{matrices}.
\bpropnl{cf.~\zitaa{MR3014201}{\ccorp{5.4}{230}}}{111.C5-4}
Let $\seqsinf\in\Kggqinf$ with \tSp{} \((\Spu{j})_{j=0}^\infi\). Then $\seqsinf\in\Kggdqinf$ if and only if there exists some \(m\in\NO\) with $\Spu{m}=\Oqq$.
\eprop
The sequence $\seqsinf $ is called \notii{completely degenerate of order $m$} if $\seqs{m}$ is completely degenerate. By \symba{\Kggdoq{m}}{k} we denote the set of all \tSnnd{} sequences $\seqsinf $ from $\Cqq$ which are completely degenerate of order $m$. If $m\in\NO$ and $\seqsinf \in\Kggdoq{m}$, then $\seqs{\ell}\in\Kggdqu{\ell}$ for each $\ell\in\minf{m}$ (cf.~\zitaa{MR3014201}{\clemp{5.5}{230}}).
\bdefil{D0909}
Let \(m\in\NO\) and let \(\seqs{m}\) be a sequence of complex \tpqa{matrices}. Let the sequence \(\seq{\Spl{j}}{j}{0}{\infi}\) be given by
\[
\Spl{j}
\defg
\begin{cases}
\Spu{j}\ifa{j\leq m}\\
\Opq\ifa{j>m}
\end{cases}.
\]
Then, we call the unique sequence \(\seqzinf\)\index{$\seqzinf$} with \tSp{} \(\seq{\Spl{j}}{j}{0}{\infi}\) the \notii{\tzexto{\(\seqs{m}\)}}.
\edefi
\blemml{L0916}
Let \(m\in\NO\) and let \(\seqs{m}\in\Kggequ{m}\). Denote by \(\seqzinf\) the \tzexto{\(\seqs{m}\)}. Then, \(\zext{s}{j}=\su{j}\) for all \(j\in\mn{0}{m}\) and \(\seqzinf\in\Kggdoq{m+1}\).
\elemm
\bproof
Denote by \(\seq{\Spu{j}}{j}{0}{m}\) the \tSpa{$\seqs{m}$} and by \(\seq{\Spl{j}}{j}{0}{\infi}\) the \tSpa{$\seqzinf$}. Since \(\Spl{j}=\Spu{j}\) holds true for all \(j\in\mn{0}{m}\), we have \(\zext{s}{j}=\su{j}\) for all \(j\in\mn{0}{m}\). Using \rpropss{P0937}{P0926} we can conclude furthermore \(\seqzinf\in\Kggqinf\). In view of \(\Spl{m+1}=\Oqq\) and \rprop{111.P5-3}, thus \(\seqzinf\in\Kggdoq{m+1}\) follows.
\eproof
\blemml{L0912}
Let \(m\in\NO\) and let \(\seqsinf\in\Kggdoq{m+1}\). Denote by \(\seqzinf\) the \tzexto{\(\seqs{m}\)}. Then, \(\zext{s}{j}=\su{j}\) for all \(j\in\NO\).
\elemm
\bproof
Denote by \(\seq{\Spu{j}}{j}{0}{\infi}\) the \tSpa{$\seqsinf$} and by \(\seq{\Spl{j}}{j}{0}{\infi}\) the \tSpa{$\seqzinf$}. Since \(\seq{\Spu{j}}{j}{0}{m}\) is then the \tSpa{$\seqs{m}$}, we have by definition \(\Spl{j}=\Spu{j}\) for all \(j\in\mn{0}{m}\). Because of \(\seqsinf\in\Kggdoq{m+1}\), the sequence \(\seqs{m+1}\) belongs to \(\Kggdqu{m+1}\). Thus, \rprop{111.P5-3} yields \(\Spu{m+1}=\Oqq\). From \rprop{P0926}, we obtain then \(\Spu{j}=\Oqq\) for all \(j\in\mn{m+2}{\infi}\). By definition, we have furthermore \(\Spl{j}=\Oqq\) for all \(j\in\mn{m+1}{\infi}\). Hence, \(\Spl{j}=\Spu{j}\) for all \(j\in\mn{m+1}{\infi}\). We have shown that the \tSp{s} of $\seqsinf$ and $\seqzinf$ coincide, which completes the proof.
\eproof
\section{\tDSp{}}\label{S0820}
In~\zita{MR2053150} Yu.~M.~\tDyukarev{} studied the moment problem \mproblem{\ra}{\infty}{=}. One of his main results (see~\zitaa{MR2053150}{\cthmp{8}{78}}) is a generalization of a classical criterion due to \Stieltjes{}~\zitas{MR1508159,MR1508160} for the indeterminacy of this moment problem. In order to find an appropriate matricial version of \Stieltjes{'} indeterminacy criterion Yu.~M.~\tDyukarev{} had to look for a convenient matricial generalization of the parameter sequences which \Stieltjes{} obtained from the consideration of particular continued fractions associated with the sequence \(\seqsinf \). In this way, Yu.~M.~\tDyukarev{} found an interesting inner parametrization of sequences belonging to \(\Kgqinf\). The main theme of this section is to recall some interrelations obtained in~\zitaa{MR3133464}{\S8} between Yu.~M.~\tDyukarev{'s} parametrization and the \tSp{} introduced in \rdefi{D1021}.
The notations \(\Iq\)\index{i@\(\Iq\)} and \(\Opq\)\index{0@\(\Opq\)} stand for the identity matrix in \(\Cqq\) and for the zero matrix in \(\Cpq\), respectively. If \(\kappa\in\NOinf \) and \(\seqska \in\Kgqkappa\), then the matrix \(\Hu{k}\) is \tpH{} and, in particular, invertible for all \(k\in\NO\) with \(2k\leq\kappa\), and the matrix \(\Ku{k}\) is \tpH{} and, in particular, invertible for all \(k\in\NO\) with \(2k+1\leq\kappa\). Let
\begin{align}\label{v}
\vqu{0}&\defg\Iq&
&\text{and}&
\vqu{k}&\defg\bMat\Iq\\\Ouu{kq}{q}\eMat
\end{align}
\index{v@\(\vqu{k}\)}for all \(k\in\N\). The following construction of a pair of sequences of \tqqa{matrices} associated with a \tSpd{} sequence goes back to Yu.~M.~\tDyukarev{}~\zitaa{MR2053150}{p.~77}:
Let \(\kappa\in\NOinf \) and let \(\seqska \in\Kgqkappa\). Then let
\beql{M0}
\DSpm{0}
\defg\su{0}^\inv
\eeq
and, in the case \(\kappa\geq1\), let
\beql{L0}
\DSpl{0}
\defg\su{0}\su{1}^\inv\su{0}.
\eeq
Furthermore, let
\beql{Mk}
\DSpm{k}
\defg\vqu{k}^\ad\Hu{k}^\inv\vqu{k}-\vqu{k-1}^\ad\Hu{k-1}^\inv\vqu{k-1}
\eeq
\index{m@\(\DSpm{k}\)}for all \(k\in\N\) with \(2k\leq\kappa\), and let
\beql{Lk}
\DSpl{k}
\defg\yuu{0}{k}^\ad\Ku{k}^\inv\yuu{0}{k}-\yuu{0}{k-1}^\ad\Ku{k-1}^\inv\yuu{0}{k-1}
\eeq
\index{l@\(\DSpl{k}\)}for all \(k\in\N\) with \(2k+1\leq\kappa\).
Obviously, for all \(k\in\NO\) with \(2k\leq\kappa\), the matrix \(\DSpm{k}\)
only depends on the matrices \(\su{0},\dotsc,\su{2k}\), and, for all \(k\in\NO\) with \(2k+1\leq\kappa\), the matrix \(\DSpl{k}\) only depends on the matrices \(\su{0},\su{1},\dotsc,\su{2k+1}\).
\bdefil{D1455}
Let \(\seqsinf \in\Kgqinf\). Then the ordered pair \([\seq{\DSpl{k}}{k}{0}{\infty},\seq{\DSpm{k}}{k}{0}{\infty}]\)
is called the \noti{\tDSp{}}{d@\tDSp{}} (shortly \noti{\tdsp{}}{d@\tDSp{}}) \emph{of \(\seqsinf \)}.
\edefi
It should be mentioned that, for a given sequence \(\seqsinf \in\Kgqinf\), Yu.~M.~Dyukarev~\zita{MR2053150} treated the moment problem~\mproblem{\ra}{\infi}{=} by approximation through the sequence \((\mproblem{\ra}{k}{\leq})_{k\in\NO}\) of truncated moment problems. One of his central results~\zitaa{MR2053150}{\cthmp{7}{77}} shows that the resolvent matrices for the truncated moment problems can be multiplicatively decomposed into elementary factors which are determined by the corresponding first sections of the \tdspa{\(\seqsinf \)}.
If \(\seqsinf \in\Kgqinf\) with \tdsp{} \([\seq{\DSpl{k}}{k}{0}{\infty},\seq{\DSpm{k}}{k}{0}{\infty}]\), then, in view of~\zitaa{MR2053150}{\cthmp{7}{77}}, the matrices \(\DSpl{k}\) and \(\DSpm{k}\) are \tpH{} for all \(k\in\NO\). According to~\zitaa{MR3133464}{\cpropp{8.26}{3923}}, every sequence \(\seqsinf \in\Kgqinf\) can be recursively reconstructed from its \tdsp{} \([\seq{\DSpl{k}}{k}{0}{\infty},\seq{\DSpm{k}}{k}{0}{\infty}]\). A similar result for the truncated \tdsp{} \([\seq{\DSpl{k}}{k}{0}{m-1},\seq{\DSpm{k}}{k}{0}{m}]\) (resp.\ \([\seq{\DSpl{k}}{k}{0}{m},\seq{\DSpm{k}}{k}{0}{m}]\)) was obtained in~\zitaa{MR3327132}{\cpropp{4.9}{68}}. Furthermore,~\zitaa{MR3133464}{\cpropp{8.27}{3924}} shows that each pair \([\seq{\DSpl{k}}{k}{0}{\infty},\seq{\DSpm{k}}{k}{0}{\infty}]\) of sequences of \tpH{} complex \tqqa{matrices} is the \tdsp{} of some sequence \(\seqsinf \in\Kgqinf\). Hence, the \tdsp{} establishes a one-to-one correspondence between \tSpd{} sequences \(\seqsinf \) and ordered pairs \([\seq{\DSpl{k}}{k}{0}{\infty},\seq{\DSpm{k}}{k}{0}{\infty}]\) of sequences of \tpH{} complex \tqqa{matrices}.
In~\zitaa{MR3133464}{\cpropp{8.30}{3925}} it was shown that in the scalar case the \tdsp{} of a \tSpd{} sequence coincides with the classical parameters used by \Stieltjes{}~\zitas{MR1508159,MR1508160} to formulate his indeterminacy criterion. We mention that M.~G.~Kre{\u\i}n was able to find a mechanical interpretation for \Stieltjes{'} investigations on continued fractions (see Gantmacher/Kre{\u\i}n~\zitaa{MR0114338}{\canha{2}} or Akhiezer~\zitaa{MR0184042}{Appendix}). Against the background of this mechanical interpretation, M.~G.~Kre{\u\i}n divided \Stieltjes{'} original parameters into two groups which play the roles of lengths and masses, respectively.
Now we want to recall the concrete definition of these parameters (see Kre{\u\i}n/Nudel{\('\)}man~\zitaa{MR0458081}{\cchap{V}, \cform{(6.1)}}) and their connection to the \tdsp{}.
\bdefil{D0834}
Let \(\seqsinf \in\Kguu{1}{\infty}\) and let \(\detHu{n}\defg\det\Hu{n}\)\index{d@\(\detHu{n}\)} and \(\detKu{n}\defg\det\Ku{n}\)\index{d@\(\detKu{n}\)} for all \(n\in\NO\). Let
\begin{align*}
\spl{k}&\defg\frac{\detHu{k}^2}{\detKu{k}\detKu{k-1}}&
&\text{and}&
\spm{k}&\defg\frac{(\detKu{k-1})^2}{\detHu{k}\detHu{k-1}}
\end{align*}
\index{l@\(\spl{k}\)}\index{m@\(\spm{k}\)}for all \(k\in\NO\), where \(\detHu{-1}\defg1\) and \(\detKu{-1}\defg1\). Then the ordered pair \([\seq{\spl{k}}{k}{0}{\infty},\seq{\spm{k}}{k}{0}{\infty}]\) is called the \noti{\tKSp{} of \(\seqsinf \)}{k@\tKSp{}}.
\edefi
\bpropnl{see~\zitaa{MR3133464}{\cpropp{8.30}{3925}}}{P0857}
Let \(\seqsinf \in\Kguu{1}{\infty}\) with \tKSp{} \([\seq{\spl{k}}{k}{0}{\infty},\seq{\spm{k}}{k}{0}{\infty}]\) and \tdsp{} \([\seq{\DSpl{k}}{k}{0}{\infty},\seq{\DSpm{k}}{k}{0}{\infty}]\). Then \(\spl{k}=\DSpl{k}\) and \(\spm{k}=\DSpm{k}\) for all \(k\in\NO\).
\eprop
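In the scalar case the quantities of \rdefi{D0834} are directly computable from \Hankel{} determinants. A sketch of ours with the factorial moments \(\su{j}=j!\) (an illustrative choice of test data):

```python
from fractions import Fraction as F
from math import factorial

def det(M):
    """Exact determinant via cofactor expansion (fine for small matrices)."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j]
               * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def hankel(s, shift, n):
    return [[s[j + k + shift] for k in range(n + 1)] for j in range(n + 1)]

s = [F(factorial(j)) for j in range(6)]                   # s_j = j!
dH = [F(1)] + [det(hankel(s, 0, n)) for n in range(3)]    # det H_{-1..2}
dK = [F(1)] + [det(hankel(s, 1, n)) for n in range(3)]    # det K_{-1..2}

# lengths l_k and masses m_k as in the definition above:
l = [dH[k + 1] ** 2 / (dK[k + 1] * dK[k]) for k in range(3)]
m = [dK[k] ** 2 / (dH[k + 1] * dH[k]) for k in range(3)]
```

By \rprop{P0857} these values also give the \tdsp{} of the sequence; for this example all masses equal \(1\), while the lengths are \(1,\frac{1}{2},\frac{1}{3}\).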
Now we recall the connection between the \tdsp{} of a \tSpd{} sequence and its \tSp{}. If \(\seqsinf \in\Kgqinf\) with \tSp{} \(\seq{\Spu{j}}{j}{0}{\infty}\), then, in view of \rprop{T1337}, the matrices \(\Spu{j}\) are \tpH{} and, in particular, invertible for all \(j\in\NO\).
\bthmnl{see~\zitaa{MR3133464}{\cthmp{8.22}{3921}}}{T1513}
Let \(\seqsinf \in\Kgqinf\) with \tSp{} \(\seq{\Spu{j}}{j}{0}{\infty}\) and \tdsp{} \([\seq{\DSpl{k}}{k}{0}{\infty},\seq{\DSpm{k}}{k}{0}{\infty}]\). Then
\[
\DSpl{k}
=\rk*{\rprod_{j=0}^k\Spu{2j}\Spu{2j+1}^\inv}\Spu{2k+1}\rk*{\rprod_{j=0}^k\Spu{2j}\Spu{2j+1}^\inv}^\ad
\]
and
\[
\DSpm{k}
=
\begin{cases}
\Spu{0}^\inv\incase{k=0}\\
\rk*{\rprod_{j=0}^{k-1}\Spu{2j}^\inv\Spu{2j+1}}\Spu{2k}^\inv\rk*{\rprod_{j=0}^{k-1}\Spu{2j}^\inv\Spu{2j+1}}^\ad\incase{k\geq1}
\end{cases}
\]
for all \(k\in\NO\).
\etheo
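In the scalar case the formulas of \rtheo{T1513} can be checked numerically. The following sketch of ours uses the values \(\Spu{j}\), \(\DSpl{k}\), \(\DSpm{k}\) precomputed by us from the illustrative factorial moments \(\su{j}=j!\):

```python
from fractions import Fraction as F
from math import prod

# Scalar data precomputed from the illustrative moments s_j = j!:
Q = [F(1), F(1), F(1), F(2), F(4), F(12)]   # Stieltjes parametrization
l = [F(1), F(1, 2), F(1, 3)]                # Dyukarev-Stieltjes lengths  L_k
m = [F(1), F(1), F(1)]                      # Dyukarev-Stieltjes masses   M_k

for k in range(3):
    # first formula: L_k = (prod_{j<=k} Q_{2j} Q_{2j+1}^{-1})^2 * Q_{2k+1}
    c = prod(Q[2 * j] / Q[2 * j + 1] for j in range(k + 1))
    assert l[k] == c ** 2 * Q[2 * k + 1]
    # second formula: M_0 = Q_0^{-1}, and for k >= 1
    # M_k = (prod_{j<k} Q_{2j}^{-1} Q_{2j+1})^2 * Q_{2k}^{-1}
    d = prod(Q[2 * j + 1] / Q[2 * j] for j in range(k))
    assert m[k] == (1 / Q[0] if k == 0 else d ** 2 / Q[2 * k])
```

In the commutative scalar setting the adjoint products of the theorem collapse to squares, which is the simplification exploited here.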
\bthmnl{see~\zitaa{MR3133464}{\cthmp{8.24}{3923}} and~\zitaa{MR3327132}{\ccorp{4.10}{72}}}{T1523}
Let \(\seqsinf \in\Kgqinf\) with \tSp{} \(\seq{\Spu{j}}{j}{0}{\infty}\) and \tdsp{} \([\seq{\DSpl{k}}{k}{0}{\infty},\seq{\DSpm{k}}{k}{0}{\infty}]\). Then
\[
\Spu{2k}
=
\begin{cases}
\DSpm{0}^\inv\incase{k=0}\\
\rk*{\rprod_{j=0}^{k-1}\DSpm{j}\DSpl{j}}^\invad\DSpm{k}^\inv\rk*{\rprod_{j=0}^{k-1}\DSpm{j}\DSpl{j}}^\inv\incase{k\geq1}
\end{cases}
\]
and
\[
\Spu{2k+1}
=\rk*{\rprod_{j=0}^k\DSpm{j}\DSpl{j}}^\invad\DSpl{k}\rk*{\rprod_{j=0}^k\DSpm{j}\DSpl{j}}^\inv
\]
for all \(k\in\NO\).
\etheo
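Conversely, \rtheo{T1523} reconstructs the \tSp{} from the \tdsp{}. A scalar sketch of ours, again with the values coming from the illustrative factorial moments \(\su{j}=j!\):

```python
from fractions import Fraction as F

l = [F(1), F(1, 2), F(1, 3)]   # lengths, from the illustrative example
m = [F(1), F(1), F(1)]         # masses

Q, p = [], F(1)                # p = prod_{j<k} m_j * l_j (empty product = 1)
for k in range(3):
    Q.append(1 / (p ** 2 * m[k]))   # Q_{2k}   by the first formula
    p *= m[k] * l[k]                # now p = prod_{j<=k} m_j * l_j
    Q.append(l[k] / p ** 2)         # Q_{2k+1} by the second formula
# Q recovers the Stieltjes parametrization [1, 1, 1, 2, 4, 12] of s_j = j!
```

This round trip illustrates the one-to-one correspondence between \tSpd{} sequences and pairs of sequences of \tpH{} matrices mentioned above.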
\section{Parametrization of all solutions in the non-degenerate case}\label{S1538}
In this section we recall a parametrization of the solution set of the moment problem \mproblem{\ra}{m}{\leq} for the so-called non-degenerate case. Let \(\SFq\)\index{s@\(\SFq\)} be the set of all holomorphic functions \(F\colon\Cs\to\Cqq\) which satisfy the following conditions:
\bAeqi{0}
\item The matrix \(\im F(w)\) is \tnnH{} for all \(w\in\C\) with \(\im w>0\).
\item The matrix \(F(x)\) is \tnnH{} for all \(x\in(-\infty,0)\).
\eAeqi
Further, let \(\SOFq\)\index{s@\(\SOFq\)} be the set of all \(S\in\SFq\) such that
\[
\sup_{y\in[1,\infp)}y\normAE{S(\ii y)}
<\infp,
\]
where \(\normaE{\cdot}\)\index{e@\(\normaE{\cdot}\)} is the Euclidean matrix norm. We have the following integral representation for functions belonging to \(\SOFq\):
\bthmnl{cf.~\zitaa{142arxiv}{\cthmp{5.1}{19}}}{T1434}
Let \(S\colon\Cs\to \Cqq\).
\benui
\il{T1434.a} If \(S\in \SOFq\), then there exists a unique \tnnH{} measure \(\sigma\in\Mggqa{\ra}\) such that
\begin{equation}\label{T1434.1}
S(z)
=\int_{\ra} \frac{1}{t-z} \sigma(\dif t)
\end{equation}
for all \(z\in\Cs\).
\il{T1434.b} If there exists a \tnnH{} measure \(\sigma\in\Mggqa{\ra}\) such that \(S\) can be represented via \eqref{T1434.1} for all \(z\in\Cs\), then \(S\) belongs to \(\SOFq\).
\eenui
\etheo
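For a finitely supported measure the integral \eqref{T1434.1} is a finite sum of Cauchy kernels, and the defining conditions of \(\SFq\) and \(\SOFq\) can be spot-checked numerically. A scalar sketch of ours with the illustrative measure \(\sigma=\frac{1}{2}\delta_1+\frac{1}{2}\delta_3\):

```python
# Stieltjes transform of sigma = (1/2) delta_1 + (1/2) delta_3 on [0, inf):
def S(z):
    return 0.5 / (1.0 - z) + 0.5 / (3.0 - z)

# Im S(w) > 0 in the upper half-plane (spot checks):
upper = all(S(x + 1.0j).imag > 0 for x in (-2.0, 0.5, 4.0))

# y * |S(iy)| stays bounded for y >= 1 (here even <= 1), so S satisfies
# the growth restriction singling out the subclass of Stieltjes transforms:
bounded = all(y * abs(S(1.0j * y)) <= 1.0 for y in (1.0, 10.0, 1e3, 1e6))
```

Both checks succeed, consistent with part~\ref{T1434.b} of \rtheo{T1434} applied to this measure.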
If \(\sigma\) is a measure belonging to \(\Mggqa{\ra}\), then we will call the matrix-valued function \(S\colon\Cs\to\Cqq\) which is given for all \(z\in\Cs\) by \eqref{T1434.1} the \noti{\tSta{\(\sigma\)}}{Stieltjes!transform} and write \(\sttrhl{\sigma}\) for \(S\). If \(S\in\SOFq\), then the unique measure \(\sigma\) which belongs to \(\Mggqa{\ra}\) and which fulfills \eqref{T1434.1} for all \(z\in\Cs\) is said to be the \noti{\tSma{\(S\)}}{Stieltjes!measure}.
In view of \rtheo{T1434}, the moment problem~\mproblem{\ra}{m}{\leq} can be reformulated:
\begin{quote}
Let \(m\in\NO\) and let \(\seq{\su{j}}{j}{0}{m}\) be a sequence of complex \tqqa{matrices}. Describe the set \(\SOFqakg{\seq{\su{j}}{j}{0}{m}}\)\index{s@\(\SOFqakg{\seq{\su{j}}{j}{0}{m}}\)} of all \(S\in\SOFq\) with \tSm{} belonging to \(\Mggqaakg{\ra}{\seq{\su{j}}{j}{0}{m}}\).
\end{quote}
Let the \taaa{2q}{2q}{signature} matrices \(\Jreq\) and \(\Jimq\)
associated with the real and imaginary part be defined by
\begin{align*}
\Jreq
&\defg
\bMat
\Oqq&\Iq\\
\Iq&\Oqq
\eMat&
&\text{and}&
\Jimq
&\defg
\bMat
\Oqq&\ii\Iq\\
-\ii\Iq&\Oqq
\eMat.
\end{align*}
\index{j@\(\Jreq\)}\index{j@\(\Jimq\)}Let \(\SPq\)\index{s@\(\SPq\)} be the set of all ordered pairs \((\phi,\psi)\) of \(\Cqq\)\nobreakdash-valued functions \(\phi\) and \(\psi\) which are meromorphic in \(\Cs\) and for which there exists a discrete subset \(\Dpq\) of \(\Cs\) such that the following conditions are fulfilled:
\bAeqi{2}
\item The functions \(\phi\) and \(\psi\) are both holomorphic in \(\C\setminus(\ra\cup\Dpq)\).
\item \(\rank\tmat{\phi(z)\\\psi(z)}=q\) for all \(z\in\C\setminus(\ra\cup\Dpq)\).
\item For all \(z\in\C\setminus(\R\cup\Dpq)\),
\[
\frac{1}{\im z}
\bMat
\phi(z)\\
\psi(z)
\eMat^\ad\Jimq
\bMat
\phi(z)\\
\psi(z)
\eMat
\in\Cggq.
\]
\item For all \(z\in\C\setminus\Dpq\) with \(\re z<0\),
\[
\bMat
\phi(z)\\
\psi(z)
\eMat^\ad\Jreq
\bMat
\phi(z)\\
\psi(z)
\eMat
\in\Cggq.
\]
\eAeqi
Two pairs \((\phi_1,\psi_1),(\phi_2,\psi_2)\in\SPq\) are called \noti{equivalent}{pairs!equivalent} if there exists a \(\Cqq\)\nobreakdash-valued function \(\eta\) which is meromorphic in \(\Cs\) such that \(\det\eta\) does not vanish identically and
\[
\bMat
\phi_2\\
\psi_2
\eMat
=
\bMat
\phi_1\\
\psi_1
\eMat\eta.
\]
For all \(n\in\NO\), let
\[
\Tqu{n}
\defg\matauuo{\Kronu{j,k+1}\Iq}{j,k}{0}{n}
\]
\index{t@\(\Tqu{n}\)}and let \(\Rqu{n}\colon\C\to\Coo{(n+1)q}{(n+1)q}\)\index{r@\(\Rqu{n}\)} be defined by
\[
\Rqua{n}{z}
\defg(\Iu{(n+1)q}-z\Tqu{n})^\inv.
\]
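Since \(\Tqu{n}\) is nilpotent, the matrix \(\Iu{(n+1)q}-z\Tqu{n}\) is invertible for every \(z\in\C\), and \(\Rqua{n}{z}\) equals the finite Neumann sum \(\sum_{m=0}^{n}z^m\Tqu{n}^m\). A scalar (\(q=1\)) sketch of ours verifying this at an arbitrarily chosen rational point:

```python
from fractions import Fraction as F

n, z = 3, F(2, 5)   # illustrative choices of size and evaluation point

# T_{q,n} for q = 1: the lower shift matrix with entries delta_{j,k+1}
T = [[F(1) if j == k + 1 else F(0) for k in range(n + 1)]
     for j in range(n + 1)]
E = [[F(1) if j == k else F(0) for k in range(n + 1)]
     for j in range(n + 1)]                     # identity matrix

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

# The Neumann sum I + zT + ... + (zT)^n is lower triangular with
# entry z^(j-k) in position (j, k); it must invert I - z*T:
R = [[z ** (j - k) if j >= k else F(0) for k in range(n + 1)]
     for j in range(n + 1)]
IzT = [[E[j][k] - z * T[j][k] for k in range(n + 1)] for j in range(n + 1)]
assert matmul(IzT, R) == E   # confirms R = R_n(z)
```

The lower-triangular Toeplitz shape of \(R\) reflects the shift structure of \(\Tqu{n}\).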
Let \(\kappa\in\NOinf \) and let \(\seqska \) be a sequence of complex \tqqa{matrices}. Then let
\begin{align*}
\uu{0}&\defg\Oqq&
&\text{and}&
\uu{k}&\defg\bMat\Oqq\\-\yuu{0}{k-1}\eMat
\end{align*}
\index{u@\(\uu{k}\)}for all \(k\in\mn{1}{\kappa+1}\).
Now we suppose that \(\seqska \)
belongs to \(\Kgqkappa\). Let
\(\pau{n}\colon\C\to\Cqq\)\index{a@\(\pau{n}\)} and
\(\pcu{n}\colon\C\to\Cqq\)\index{c@\(\pcu{n}\)} be defined by
\begin{align}
\paua{n}{z}&\defg\Iq-z\uu{n}^\ad\ek*{\Rqua{n}{\ko z}}^\ad
\Hu{n}^\inv\vqu{n} \label{alp0}
\shortintertext{and}
\pcua{n}{z}&\defg-z\vqu{n}^\ad\ek*{\Rqua{n}{\ko z}}^\ad
\Hu{n}^\inv\vqu{n}\label{bet0}
\end{align}
for all \(n\in\NO\) with \(2n\leq\kappa\), and let \(\pbu{n}\colon\C\to\Cqq\)\index{b@\(\pbu{n}\)} and \(\pdu{n}\colon\C\to\Cqq\)\index{d@\(\pdu{n}\)} be defined by
\begin{align}
\pbua{n}{z}&\defg
\begin{cases}
\Oqq\incase{n=0}\\
\yuu{0}{n-1}^\ad\ek*{\Rqua{n-1}{\ko z}}^\ad\Ku{n-1}^\inv
\yuu{0}{n-1}\incase{n\geq1} \label{gam0}
\end{cases}
\shortintertext{and}
\pdua{n}{z}&\defg
\begin{cases}
\Iq\incase{n=0}\\
\Iq+z\vqu{n-1}^\ad\ek*{\Rqua{n-1}{\ko z}}^\ad\Ku{n-1}^\inv
\yuu{0}{n-1}\incase{n\geq1} \label{del0}
\end{cases}
\end{align}
for all \(n\in\NO\) with \(2n-1\leq\kappa\). Then
\begin{align*}
\paua{0}{z}&=\Iq,&&&\pbua{0}{z}&=\Oqq\\
\pcua{0}{z}&=-z\su{0}^\inv,&&\text{and}&\pdua{0}{z}&=\Iq
\end{align*}
for all \(z\in\C\). Let
\begin{equation} \label{Uu2n}
\Uu{2n}
\defg
\bMat
\pau{n}&\pbu{n}\\
\pcu{n}&\pdu{n}\\
\eMat
\end{equation}
\index{u@\(\Uu{2n}\)}for all \(n\in\NO\) with \(2n\leq\kappa\), and let
\begin{equation} \label{Uu2n1}
\Uu{2n+1}
\defg
\bMat
\pau{n}&\pbu{n+1}\\
\pcu{n}&\pdu{n+1}\\
\eMat
\end{equation}
\index{u@\(\Uu{2n+1}\)}for all \(n\in\NO\) with
\(2n+1\leq\kappa\).
\bdefil{D1151}
Let \(\seqsinf\in\Kgqinf\). Then \symba{\seq{\Uu{m}}{m}{0}{\infi}}{u} is called the \noti{\tsqDmpo{\(\seqsinf\)}}{d@\tqDyump{}}.
\edefi
Now we are able to recall a parametrization of
the set \(\SOFqakg{\seq{\su{j}}{j}{0}{m}}\) with pairs belonging to
\(\SPq\):
\bthmnl{see~\zitaa{MR3324594}{\cthmp{3.2}{9}} or~\zitas{Dyu81,MR1699439}}{T1327}
Let \(m\in\NO\) and let \(\seq{\su{j}}{j}{0}{m}\in\Kgqu{m}\). Let \(\Uu{m}=\tmat{A_m&B_m\\C_m&D_m}\) be the \tqqa{block} representation of \(\Uu{m}\). Then:
\benui
\il{T1327.a} Let \((\phi,\psi)\in\SPq\). Then \(\det(C_m\phi+D_m\psi)\) does not vanish identically in \(\Cs\) and
\[
(A_m\phi+B_m\psi)(C_m\phi+D_m\psi)^\inv
\in\SOFqakg{\seq{\su{j}}{j}{0}{m}}.
\]
\item Let \((\phi_1,\psi_1),(\phi_2,\psi_2)\in\SPq\) be such that
\[
(A_m\phi_1+B_m\psi_1)(C_m\phi_1+D_m\psi_1)^\inv
=(A_m\phi_2+B_m\psi_2)(C_m\phi_2+D_m\psi_2)^\inv.
\]
Then, \((\phi_1,\psi_1)\) and \((\phi_2,\psi_2)\) are equivalent.
\item Let \(S\in\SOFqakg{\seq{\su{j}}{j}{0}{m}}\). Then, there exists a pair \((\phi,\psi)\in\SPq\) such that
\[
S
=(A_m\phi+B_m\psi)(C_m\phi+D_m\psi)^\inv.
\]
\eenui
\etheo
If \(\kappa\in\NOinf \) and \(\seqska \in\Kgqkappa\), then let \(\Mmu{k}\colon\C\to\Coo{2q}{2q}\)\index{m@\(\Mmu{k}\)} be defined by
\begin{equation}
\Mmua{k}{z}
\defg
\bMat
\Iq&\Oqq\\
-z\DSpm{k}&\Iq
\eMat
\label{ma11}
\end{equation}
for all \(k\in\NO\) with \(2k\leq\kappa\), and let
\begin{equation}
\Llu{k}
\defg
\bMat
\Iq&\DSpl{k}\\
\Oqq&\Iq
\eMat
\label{la11}
\end{equation}
\index{l@\(\Llu{k}\)}for all \(k\in\NO\) with \(2k+1\leq\kappa\).
We have the following factorization of \(\Uu{m}\) into a product of
complex \taaa{2q}{2q}{matrix} polynomials of degree \(1\):
\bpropnl{see~\zitaa{MR3324594}{\ceqqp{(4.11)}{(4.12)}{11}} or~\zitaa{MR2053150}{\cthmp{7}{77}}}{P1033}
Let \(\kappa\in\NOinf \) and let
\(\seqska \in\Kgqkappa\), then
\begin{align*}
\Uu{0}&=\Mmu{0}&
&\text{and}&
\Uu{2n}&=\rk*{\rprod_{k=0}^{n-1}\Mmu{k}\Llu{k}}\Mmu{n}
\end{align*}
for all \(n\in\N\) with \(2n\leq\kappa\), and
\[
\Uu{2n+1}
=\rprod_{k=0}^n(\Mmu{k}\Llu{k})
\]
for all \(n\in\NO\) with \(2n+1\leq\kappa\).
\eprop
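Since every factor \(\Mmua{k}{z}\) and \(\Llu{k}\) in this factorization is unit block-triangular, \(\det\Uua{m}{z}=1\) for all \(z\in\C\). A small numerical sketch of the odd-index product (random positive Hermitian matrices stand in for the parameters \(\DSpm{k}\) and \(\DSpl{k}\); all names and sizes are illustrative):

```python
import numpy as np

# Sketch (assumption: random positive Hermitian stand-ins for the parameters
# m_k, l_k).  Each factor
#   M_k(z) = [[I, 0], [-z*m_k, I]],   L_k = [[I, l_k], [0, I]]
# is unit block-triangular, so U_{2n+1}(z) = prod_k M_k(z) L_k has det 1.
rng = np.random.default_rng(1)
q, n, z = 2, 2, 0.3 + 0.4j

def rand_pos_hermitian(q):
    a = rng.standard_normal((q, q)) + 1j * rng.standard_normal((q, q))
    return a @ a.conj().T + np.eye(q)

U = np.eye(2 * q, dtype=complex)
for k in range(n + 1):
    m_k, l_k = rand_pos_hermitian(q), rand_pos_hermitian(q)
    M = np.block([[np.eye(q), np.zeros((q, q))], [-z * m_k, np.eye(q)]])
    L = np.block([[np.eye(q), l_k], [np.zeros((q, q)), np.eye(q)]])
    U = U @ M @ L
print(np.isclose(np.linalg.det(U), 1.0))
```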
\bnotal{N1104}
Let \(\kappa\in\NOinf \) and let
\(\seqska \in\Kgqkappa\). Let
\(\ophu{n}\colon\C\to\Cqq\)\index{p@\(\ophu{n}\)} and
\(\sphu{n}\colon\C\to\Cqq\)\index{q@\(\sphu{n}\)} be defined by
\begin{align*}
\ophua{n}{z}&\defg
\begin{cases}
\Iq\incase{n=0}\\
\vqu{n}^\ad\ek*{\Rqua{n}{\ko z}}^\ad\tmat{-\Hu{n-1}^\inv\yuu{n}{2n-1}\\\Iq}\incase{n\geq1}
\end{cases}
\shortintertext{and}
\sphua{n}{z}&\defg
\begin{cases}
\Oqq\incase{n=0}\\
-\uu{n}^\ad\ek*{\Rqua{n}{\ko z}}^\ad\tmat{-\Hu{n-1}^\inv\yuu{n}{2n-1}\\\Iq}\incase{n\geq1}
\end{cases}
\end{align*}
for all \(n\in\NO\) with \(2n-1\leq\kappa\), and let \(\opku{n}\colon\C\to\Cqq\)\index{p@\(\opku{n}\)} and \(\spku{n}\colon\C\to\Cqq\)\index{q@\(\spku{n}\)} be defined by
\begin{align*}
\opkua{n}{z}&\defg
\begin{cases}
\Iq\incase{n=0}\\
\vqu{n}^\ad\ek*{\Rqua{n}{\ko z}}^\ad\tmat{-\Ku{n-1}^\inv\yuu{n+1}{2n}\\\Iq}\incase{n\geq1}
\end{cases}
\shortintertext{and}
\spkua{n}{z}&\defg
\begin{cases}
\su{0}\incase{n=0}\\
\zuu{0}{n}\ek*{\Rqua{n}{\ko z}}^\ad\tmat{-\Ku{n-1}^\inv\yuu{n+1}{2n}\\\Iq}\incase{n\geq1}
\end{cases}
\end{align*}
for all \(n\in\NO\) with \(2n\leq\kappa\).
\enota
\bdefil{D1137}
Let \(\seqsinf\in\Kgqinf\). Then \(\PQPQ\) is called the \noti{\tStiqosoompo{\(\seqsinf\)}}{s@\tStiqosoomp{}}.
\edefi
\bremal{R0001}
Let \(\kappa\in\NOinf \) and let \(\seqska \in\Kgqkappa\). For all \(n\in\NO\) with \(2n-1\leq\kappa\), the functions \(\ophu{n}\) and \(\sphu{n}\) are \tqqa{matrix} polynomials, where \(\ophu{n}\) has degree \(n\) and leading coefficient \(\Iq\). Similarly, for all \(n\in\NO\) with \(2n\leq\kappa\), the functions \(\opku{n}\) and \(\spku{n}\) are \tqqa{matrix} polynomials, where \(\opku{n}\) has degree \(n\) and leading coefficient \(\Iq\). Furthermore, if \(\sigmah\in\Mggqaag{\ra}{\seqska }\), we have the orthogonality relations
\[
\int_{\ra}\ek*{\ophua{m}{t}}^\ad\sigmah(\dif t)\ek*{\ophua{n}{t}}
=
\begin{cases}
\Oqq\incase{m\neq n}\\
\Lu{n}\incase{m=n}
\end{cases}
\]
for all \(m,n\in\NO\) with \(2n-1\leq\kappa\) and \(2m-1\leq\kappa\), and
\[
\int_{\ra}\ek*{\opkua{m}{t}}^\ad\sigmak(\dif t)\ek*{\opkua{n}{t}}
=
\begin{cases}
\Oqq\incase{m\neq n}\\
\Lau{n}\incase{m=n}
\end{cases}
\]
for all \(m,n\in\NO\) with \(2n\leq\kappa\) and \(2m\leq\kappa\), where \(\sigmak\colon\Bra\to\Cggq\)\index{s@$\sigmak$} is defined by \(\sigmak(B)\defg\int_{B}t\sigmah(\dif t)\) and belongs to \(\Mggqaag{\ra}{\seq{\sau{j}}{j}{0}{\kappa-1}}\) (see \eqref{s+} and \eqref{so}).
\erema
\bremnl{see~\zitaa{MR3324594}{\clemp{4.3}{11}}}{R1017}
Let \(\kappa\in\NOinf \) and let \(\seqska \in\Kgqkappa\), then
\[
\ophua{n}{0}
=(-1)^n\rk*{\rprod_{k=0}^{n-1}\DSpm{k}\DSpl{k}}^\inv
\]
for all \(n\in\N\) with \(2n-1\leq\kappa\), and
\[
\spkua{n}{0}
=(-1)^n\ek*{\rk*{\rprod_{k=0}^{n-1}\DSpm{k}\DSpl{k}}\DSpm{n}}^\inv
\]
for all \(n\in\N\) with \(2n\leq\kappa\).
\erema
We have the following representation of \(\Uu{m}\) in terms of the
above introduced matrix polynomials:
\bpropnl{see~\zitaa{MR3324594}{\cthmp{4.7}{68}}}{P1422}
Let \(\kappa\in\NOinf \) and let \(\seqska \in\Kgqkappa\). Then, for all \(z\in\C\), we have
\[
\Uua{2n}{z}
=
\bMat
\spkua{n}{z}&-\sphua{n}{z}\\
-z\opkua{n}{z}&\ophua{n}{z}
\eMat
\bMat
[\spkua{n}{0}]^\inv&\Oqq\\
\Oqq&[\ophua{n}{0}]^\inv
\eMat
\]
for all \(n\in\NO\) with \(2n\leq\kappa\), and
\[
\Uua{2n+1}{z}
=
\bMat
\spkua{n}{z}&-\sphua{n+1}{z}\\
-z\opkua{n}{z}&\ophua{n+1}{z}
\eMat
\bMat
[\spkua{n}{0}]^\inv&\Oqq\\
\Oqq&[\ophua{n+1}{0}]^\inv
\eMat
\]
for all \(n\in\NO\) with \(2n+1\leq\kappa\).
\eprop
\section{The \hKt{}}\label{S1649}
In~\zitaa{114arxiv}{\S7} a transformation for sequences of complex \tpqa{matrices} was considered using the following concept of \tsraute{s} presented in~\zita{MR3014197}. The paper~\zita{MR3014197} deals with the question of invertibility as it applies to matrix sequences.
\begin{defn}\label{D1430}
Let \(\kappa \in \NOinf \) and let \(\seqska \) be a sequence of complex \tpqa{matrices}. The sequence \(\seq{\su{j}^\rez}{j}{0}{\kappa}\)\index{$\seq{\su{j}^\rez}{j}{0}{\kappa}$} given recursively by
\begin{align*}
s_0^\rez&\defg\su{0}^\MP&
&\text{and}&
s_j^\rez&\defg-\su{0}^\MP \sum_{\ell=0}^{j-1} s_{j-\ell} \su{\ell}^\rez
\end{align*}
for all \(j\in\mn{1}{\kappa}\) is called the \noti{\tsrautea{\seqska }}{r@\tsraute}.
\end{defn}
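The recursion in \rdefi{D1430} is directly computable. A minimal sketch (illustrative helper name; Moore--Penrose inverse via \texttt{numpy.linalg.pinv}) which, for invertible \(\su{0}\), also checks that the reciprocal sequence is the coefficient sequence of the inverse formal power series:

```python
import numpy as np

# Illustrative sketch (helper name is not from the text): the reciprocal
# sequence is given by
#   s_0^#  = s_0^+  (Moore-Penrose inverse),
#   s_j^#  = -s_0^+ * sum_{l=0}^{j-1} s_{j-l} s_l^#   for j >= 1.
def reciprocal_sequence(s):
    s0_plus = np.linalg.pinv(s[0])
    r = [s0_plus]
    for j in range(1, len(s)):
        acc = sum(s[j - l] @ r[l] for l in range(j))
        r.append(-s0_plus @ acc)
    return r

# For invertible s_0 the convolution (s * s^#)(j) equals delta_{j0} I, i.e.
# s^# collects the coefficients of the inverse formal power series.
rng = np.random.default_rng(0)
q = 2
s = [rng.standard_normal((q, q)) for _ in range(5)]
s[0] = s[0] @ s[0].T + q * np.eye(q)      # make s_0 invertible
r = reciprocal_sequence(s)
conv = [sum(s[j - l] @ r[l] for l in range(j + 1)) for j in range(5)]
print(np.allclose(conv[0], np.eye(q)))
print(all(np.allclose(c, 0) for c in conv[1:]))
```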
\begin{defn}\label{D1059}
Let \(\kappa\in\Ninf \) and let \(\seqska \) be a sequence of complex \tpqa{matrices}. Then, the sequence \(\seq{\su{j}^\Sta{1}}{j}{0}{\kappa-1}\)\index{$\seq{\su{j}^\Sta{1}}{j}{0}{\kappa-1}$} given by
\[
\su{j}^\Sta{1}
\defg-\su{0}\su{j+1}^\rez\su{0}
\]
for all \(j\in\mn{0}{\kappa-1}\) is called the \noti{first \tKta{\(\seqska \)}}{s@\hKt{}!first}.
\end{defn}
Observe that this transformation coincides with the first \tlasnt{\alpha} from~\zitaa{114arxiv}{\cdefnp{7.1}{33}} for \(\alpha=0\) and, together with its counterpart for matrix-valued functions, served in~\zitas{114arxiv,141arxiv} as the elementary step of a Schur-type algorithm to solve the truncated matricial moment problem on the semi-infinite interval \([\alpha,\infp)\).
\bdefil{D1632}
Let \(\kappa\in\NOinf \) and let \(\seqska \) be a sequence of complex \tpqa{matrices}. The sequence \(\seq{\su{j}^\Sta{0}}{j}{0}{\kappa}\) given by \(\su{j}^\Sta{0}\defg\su{j}\) for all \(j\in\mn{0}{\kappa}\) is called the \noti{\taKta{0}{\(\seqska \)}}{s@\hKt{}!$0$th}. In the case \(\kappa\geq1\), for all \(k\in\mn{1}{\kappa}\), the \noti{\taKt{k} \(\seq{\su{j}^\Sta{k}}{j}{0}{\kappa-k}\) of \(\seqska \)}{s@\hKt{}!$k$th}\index{$\seq{\su{j}^\Sta{k}}{j}{0}{\kappa-k}$} is recursively defined by
\[
\su{j}^\Sta{k}
\defg t_j^\Sta{1}
\]
for all \(j\in\mn{0}{\kappa-k}\), where \(\seq{t_j}{j}{0}{\kappa-(k-1)}\) denotes the \taKta{(k-1)}{\(\seqska \)}.
\edefi
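Iterating the first \tKt{} yields the \taKt{k}. A self-contained numerical sketch (illustrative names; the reciprocal recursion of \rdefi{D1430} is repeated here) shows that each step shortens the sequence by one term:

```python
import numpy as np

# Illustrative sketch (names not from the text): iterate the first
# K-transform t_j = -s_0 s_{j+1}^# s_0 to obtain the k-th transform.
def reciprocal_sequence(s):
    s0_plus = np.linalg.pinv(s[0])
    r = [s0_plus]
    for j in range(1, len(s)):
        r.append(-s0_plus @ sum(s[j - l] @ r[l] for l in range(j)))
    return r

def k_transform(s, k):
    for _ in range(k):
        r = reciprocal_sequence(s)
        s = [-s[0] @ r[j + 1] @ s[0] for j in range(len(s) - 1)]
    return s

q = 2
s = [np.eye(q) * (j + 1) for j in range(6)]   # toy Hermitian sequence
t = k_transform(s, 2)                          # 2nd K-transform
print(len(t))                                  # one term shorter per step
```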
\bpropnl{see~\zitaa{114arxiv}{\cthmp{8.10(c)}{42}}}{P1542c}
Let \(\kappa\in\NOinf \) and \(\seqska \in\Kgqu{\kappa}\). Then \(\seq{\su{j}^\Sta{k}}{j}{0}{\kappa-k}\in\Kgqu{\kappa-k}\) for all \(k\in\mn{0}{\kappa}\).
\eprop
\bpropnl{see~\zitaa{114arxiv}{\cthmp{8.10(d)}{42}}}{P1542d}
Let \(\kappa\in\NOinf \), let \(m\in\NO\), and let \(\seqska \in\Kggdoq{m}\). Then \(\seq{\su{j}^\Sta{k}}{j}{0}{\kappa-k}\in\Kggdoq{\max\set{0,m-k}}\) for all \(k\in\mn{0}{\kappa}\).
\eprop
\bpropnl{see~\zitaa{114arxiv}{\cthmp{8.10(e)}{42}}}{P1542e}
Let \(\kappa\in\NOinf \) and \(\seqska \in\Kggdqu{\kappa}\). Then \(\seq{\su{j}^\Sta{k}}{j}{0}{\kappa-k}\in\Kggdqu{\kappa-k}\) for all \(k\in\mn{0}{\kappa}\).
\eprop
\bpropl{T1658}
Let \(\kappa\in\NOinf \), let \(\seqska \in\Kgqu{\kappa}\) with \tSp{} \(\seq{\Spu{j}}{j}{0}{\kappa}\), and let \(k\in\mn{0}{\kappa}\). Then \(\seq{\Spu{k+j}}{j}{0}{\kappa-k}\) is the \tSpa{\(\seq{\su{j}^\Sta{k}}{j}{0}{\kappa-k}\)}.
\eprop
\bproof
According to~\zitaa{MR3014201}{\cpropp{2.20}{221}} we have \(\seqska \in\Kggeqkappa\). Hence, the application of~\zitaa{114arxiv}{\cthmp{9.26}{57}} yields the assertion.
\eproof
\section{The \hdsp{} after \hKt{ation}}\label{S1011}
From now on we consider only infinite sequences \(\seqsinf\in\Kgqinf\). Let \(\ell\in\NO\). According to \rprop{P1542c}, the \taKt{\ell} \(\seq{\su{j}^\Sta{\ell}}{j}{0}{\infi}\) of \(\seqsinf\) belongs to \(\Kgqinf\). If \(X\) is an object built from the sequence \(\seqsinf\), then we will use the notation \(X^{(\ell)}\)\index{\(X^{(\ell)}\)} for this object built from the sequence \(\seq{\su{j}^\Sta{\ell}}{j}{0}{\infi}\), e.\,g., in view of \eqref{HK}, we have\index{h@$\Hu{n}^\Sta{\ell}\defg\matauuo{\su{j+k}^\Sta{\ell}}{j,k}{0}{n}$}\index{k@$\Ku{n}^\Sta{\ell}\defg\matauuo{\su{j+k+1}^\Sta{\ell}}{j,k}{0}{n}$}
\begin{align*}
\Hu{n}^{(\ell)}&\defg\matauuo{\su{j+k}^\Sta{\ell}}{j,k}{0}{n}&
&\text{and}&
\Ku{n}^{(\ell)}&\defg\matauuo{\su{j+k+1}^\Sta{\ell}}{j,k}{0}{n}
\end{align*}
for all \(n\in\NO\). From \rdefi{D1021} and \rprop{T1658} we see:
\bremal{R2352}
Let \(\seqsinf\in\Kgqinf\). Then \(\Lu{k}^{(1)}=\Lau{k}\) and \(\Lau{k}^{(1)}=\Lu{k+1}\) for all \(k\in\NO\).
\erema
In view of \rprop{P1542c} and \rdefi{D1455}, we are particularly able to introduce the following notation:
\bnotal{N1045}
Let \(\seqsinf \in\Kgqinf\) and let \(\ell\in\NO\). Then, we write \symb{[\seq{\DSpol{k}{\ell}}{k}{0}{\infi},\seq{\DSpom{k}{\ell}}{k}{0}{\infi}]} for the \tdspa{the \taKta{\ell}{\(\seqsinf\)}}.
\enota
From \rtheo{T1513} and \rprop{T1658} we obtain then:
\blemml{L1656}
Let \(\seqsinf \in\Kgqinf\) and let \(\ell\in\NO\), then
\begin{align*}
\DSpol{k}{\ell}
&=\rk*{\rprod_{j=0}^k\Spu{2j+\ell}\Spu{2j+\ell+1}^\inv}\Spu{2k+\ell+1}\rk*{\rprod_{j=0}^k\Spu{2j+\ell}\Spu{2j+\ell+1}^\inv}^\ad
\shortintertext{and}
\DSpom{k}{\ell}
&=
\begin{cases}
\Spu{\ell}^\inv\incase{k=0}\\
\rk*{\rprod_{j=0}^{k-1}\Spu{2j+\ell}^\inv\Spu{2j+\ell+1}}\Spu{2k+\ell}^\inv\rk*{\rprod_{j=0}^{k-1}\Spu{2j+\ell}^\inv\Spu{2j+\ell+1}}^\ad\incase{k\geq1}
\end{cases}
\end{align*}
for all \(k\in\NO\).
\elemm
Using \rlemm{L1656} we conclude:
\blemml{L0759}
Let \(\seqsinf \in\Kgqinf\) and let \(\ell\in\NO\), then
\begin{align*}
\DSpol{k}{\ell}
&=\rk*{\rprod_{j=0}^{m-1}\Spu{2j+\ell}\Spu{2j+\ell+1}^\inv}\DSpol{k-m}{\ell+2m}\rk*{\rprod_{j=0}^{m-1}\Spu{2j+\ell}\Spu{2j+\ell+1}^\inv}^\ad
\shortintertext{and}
\DSpom{k}{\ell}
&=\rk*{\rprod_{j=0}^{m-1}\Spu{2j+\ell}^\inv\Spu{2j+\ell+1}}\DSpom{k-m}{\ell+2m}\rk*{\rprod_{j=0}^{m-1}\Spu{2j+\ell}^\inv\Spu{2j+\ell+1}}^\ad
\end{align*}
for all \(k\in\N\) and \(m\in\mn{1}{k}\).
\elemm
\blemml{L0855}
Let \(\seqsinf \in\Kgqinf\) and let \(\ell\in\NO\), then
\begin{align*}
\DSpol{k}{\ell+1}&=\Spu{\ell}\DSpom{k+1}{\ell}\Spu{\ell}&
&\text{and}&
\DSpom{k}{\ell+1}&=\Spu{\ell}^\inv\DSpol{k}{\ell}\Spu{\ell}^\inv
\end{align*}
for all \(k\in\NO\).
\elemm
\bproof
From \rprop{T1337} we know that the matrices \(\Spu{j}\) are \tH{} and invertible for all \(j\in\NO\). Using \rlemm{L1656}, we obtain thus
\[
\begin{split}
\DSpol{k}{\ell+1}
&=\Spu{\ell}\Spu{\ell}^\inv\DSpol{k}{\ell+1}\Spu{\ell}^\invad\Spu{\ell}^\ad\\
&=\Spu{\ell}\Spu{\ell}^\inv\rk*{\rprod_{j=0}^k\Spu{2j+\ell+1}\Spu{2j+\ell+2}^\inv}\Spu{2k+\ell+2}\rk*{\rprod_{j=0}^k\Spu{2j+\ell+1}\Spu{2j+\ell+2}^\inv}^\ad\Spu{\ell}^\invad\Spu{\ell}^\ad\\
&=\Spu{\ell}\rk*{\rprod_{j=0}^k\Spu{2j+\ell}^\inv\Spu{2j+\ell+1}}\Spu{2k+\ell+2}^\inv\Spu{2k+\ell+2}\Spu{2k+\ell+2}^\invad\rk*{\rprod_{j=0}^k\Spu{2j+\ell}^\inv\Spu{2j+\ell+1}}^\ad\Spu{\ell}^\ad\\
&=\Spu{\ell}\rk*{\rprod_{j=0}^k\Spu{2j+\ell}^\inv\Spu{2j+\ell+1}}\Spu{2(k+1)+\ell}^\inv\rk*{\rprod_{j=0}^k\Spu{2j+\ell}^\inv\Spu{2j+\ell+1}}^\ad\Spu{\ell}
=\Spu{\ell}\DSpom{k+1}{\ell}\Spu{\ell}
\end{split}
\]
for all \(k\in\NO\) and
\[
\begin{split}
\DSpom{0}{\ell+1}
=\Spu{\ell}^\inv\Spu{\ell}\DSpom{0}{\ell+1}\Spu{\ell}^\ad\Spu{\ell}^\invad
&=\Spu{\ell}^\inv\Spu{\ell}\Spu{\ell+1}^\inv\Spu{\ell}^\ad\Spu{\ell}^\invad
=\Spu{\ell}^\inv\Spu{\ell}\Spu{\ell+1}^\inv\Spu{\ell+1}\Spu{\ell+1}^\invad\Spu{\ell}^\ad\Spu{\ell}^\invad\\
&=\Spu{\ell}^\inv(\Spu{\ell}\Spu{\ell+1}^\inv)\Spu{\ell+1}(\Spu{\ell}\Spu{\ell+1}^\inv)^\ad\Spu{\ell}^\inv
=\Spu{\ell}^\inv\DSpol{0}{\ell}\Spu{\ell}^\inv
\end{split}
\]
and furthermore
\[
\begin{split}
&\DSpom{k}{\ell+1}
=\Spu{\ell}^\inv\Spu{\ell}\DSpom{k}{\ell+1}\Spu{\ell}^\ad\Spu{\ell}^\invad\\
&=\Spu{\ell}^\inv\Spu{\ell}\rk*{\rprod_{j=0}^{k-1}\Spu{2j+\ell+1}^\inv\Spu{2j+\ell+2}}\Spu{2k+\ell+1}^\inv\rk*{\rprod_{j=0}^{k-1}\Spu{2j+\ell+1}^\inv\Spu{2j+\ell+2}}^\ad\Spu{\ell}^\ad\Spu{\ell}^\invad\\
&=\Spu{\ell}^\inv\Spu{\ell}\rk*{\rprod_{j=0}^{k-1}\Spu{2j+\ell+1}^\inv\Spu{2j+\ell+2}}\Spu{2k+\ell+1}^\inv\Spu{2k+\ell+1}\Spu{2k+\ell+1}^\invad\rk*{\rprod_{j=0}^{k-1}\Spu{2j+\ell+1}^\inv\Spu{2j+\ell+2}}^\ad\Spu{\ell}^\ad\Spu{\ell}^\inv\\
&=\Spu{\ell}^\inv\rk*{\rprod_{j=0}^k\Spu{2j+\ell}\Spu{2j+\ell+1}^\inv}\Spu{2k+\ell+1}\rk*{\rprod_{j=0}^k\Spu{2j+\ell}\Spu{2j+\ell+1}^\inv}^\ad\Spu{\ell}^\inv
=\Spu{\ell}^\inv\DSpol{k}{\ell}\Spu{\ell}^\inv
\end{split}
\]
for all \(k\in\N\).
\eproof
In view of \rdefi{D1021} and \eqref{L}, we obtain from \rlemm{L0855} the following relation between the \tdsp{s} of a \tSpd{} sequence and its first \tKt{}:
\btheol{R0921}
Let \(\seqsinf\in\Kgqinf\) with \tdsp{} \([\seq{\DSpl{k}}{k}{0}{\infty},\seq{\DSpm{k}}{k}{0}{\infty}]\). Then, the \tdsp{} \([\seq{\DSpol{k}{1}}{k}{0}{\infty},\seq{\DSpom{k}{1}}{k}{0}{\infty}]\) of \(\seq{\su{j}^\Sta{1}}{j}{0}{\infty}\) is given by
\begin{align*}
\DSpol{k}{1}&=\su{0}\DSpm{k+1}\su{0}&
&\text{and}&
\DSpom{k}{1}&=\su{0}^\inv\DSpl{k}\su{0}^\inv
\end{align*}
for all \(k\in\NO\).
\etheo
\section{The resolvent matrix corresponding to a \hKt{ed} moment sequence}\label{S0834}
In view of \rtheo{T1523} and~\zitaa{MR3133464}{\cremp{8.4}{24}} we have:
\bremal{R1054}
Let \(\seqsinf \in\Kgqinf\) with \tdsp{} \([\seq{\DSpl{k}}{k}{0}{\infty},\seq{\DSpm{k}}{k}{0}{\infty}]\) and \tSp{} \(\seq{\Spu{j}}{j}{0}{\infi}\), then
\begin{align*}
\rprod_{j=0}^m\Spu{2j}^\inv\Spu{2j+1}&=\rk*{\rprod_{j=0}^m\DSpm{j}\DSpl{j}}^\inv&
&\text{and}&
\rprod_{j=0}^m\Spu{2j}\Spu{2j+1}^\inv&=\rk*{\rprod_{j=0}^m\DSpm{j}\DSpl{j}}^\ad
\end{align*}
for all \(m\in\NO\).
\erema
\blemml{L0926}
Let \(\seqsinf \in\Kgqinf\) with \tdsp{} \([\seq{\DSpl{k}}{k}{0}{\infty},\seq{\DSpm{k}}{k}{0}{\infty}]\) and \tStiqosoomp{} \(\PQPQ\). Then,
\[
\DSpl{k}
=\ek*{\ophua{m}{0}}^\invad\DSpol{k-m}{2m}\ek*{\ophua{m}{0}}^\inv
=\spkua{m}{0}\DSpom{k-m}{2m+1}\ek*{\spkua{m}{0}}^\ad
\]
and
\begin{align*}
\DSpm{k}&=\ophua{m}{0}\DSpom{k-m}{2m}\ek*{\ophua{m}{0}}^\ad,&
\DSpm{k+1}=\ek*{\spkua{m}{0}}^\invad\DSpol{k-m}{2m+1}\ek*{\spkua{m}{0}}^\inv
\end{align*}
for all \(k\in\NO\) and \(m\in\mn{0}{k}\).
\elemm
\bproof
Let \(k\in\NO\) and \(m\in\mn{0}{k}\). In the case \(m=0\) all assertions follow from \rnota{N1104} and \rrema{R0921}. Now suppose that \(k\in\N\) and \(m\in\mn{1}{k}\). From \rremass{R1054}{R1017} we obtain
\begin{align*}
\rprod_{j=0}^{m-1}\Spu{2j}^\inv\Spu{2j+1}
&=(-1)^m\ophua{m}{0}&
&\text{and}&
\rprod_{j=0}^{m-1}\Spu{2j}\Spu{2j+1}^\inv&=(-1)^m\ek*{\ophua{m}{0}}^\invad.
\end{align*}
Hence, using \rlemm{L0759} with \(\ell=0\), we get
\[\begin{split}
\DSpl{k}
&=\rk*{\rprod_{j=0}^{m-1}\Spu{2j}\Spu{2j+1}^\inv}\DSpol{k-m}{2m}\rk*{\rprod_{j=0}^{m-1}\Spu{2j}\Spu{2j+1}^\inv}^\ad
=\ek*{\ophua{m}{0}}^\invad\DSpol{k-m}{2m}\ek*{\ophua{m}{0}}^\inv
\end{split}\]
and
\begin{equation}\label{L0926.1}\begin{split}
\DSpm{k}
&=\rk*{\rprod_{j=0}^{m-1}\Spu{2j}^\inv\Spu{2j+1}}\DSpom{k-m}{2m}\rk*{\rprod_{j=0}^{m-1}\Spu{2j}^\inv\Spu{2j+1}}^\ad
=\ophua{m}{0}\DSpom{k-m}{2m}\ek*{\ophua{m}{0}}^\ad.
\end{split}\end{equation}
According to \rlemm{L0855} we have
\begin{align*}
\DSpol{k-m}{2m+1}&=\Spu{2m}\DSpom{k-m+1}{2m}\Spu{2m}&
&\text{and}&
\DSpom{k-m}{2m+1}&=\Spu{2m}^\inv\DSpol{k-m}{2m}\Spu{2m}^\inv.
\end{align*}
\rtheo{T1523} and \rrema{R1017} yield
\[
\Spu{2m}
=\rk*{\rprod_{j=0}^{m-1}\DSpm{j}\DSpl{j}}^\invad\DSpm{m}^\inv\rk*{\rprod_{j=0}^{m-1}\DSpm{j}\DSpl{j}}^\inv
=\ek*{\ophua{m}{0}}^\ad\spkua{m}{0}.
\]
Thus, we get
\[
\begin{split}
\ek*{\ophua{m}{0}}^\invad\DSpol{k-m}{2m}\ek*{\ophua{m}{0}}^\inv
&=\ek*{\ophua{m}{0}}^\invad\Spu{2m}\DSpom{k-m}{2m+1}\Spu{2m}^\ad\ek*{\ophua{m}{0}}^\inv\\
&=\spkua{m}{0}\DSpom{k-m}{2m+1}\ek*{\spkua{m}{0}}^\ad
\end{split}
\]
and, using the already proved identity \eqref{L0926.1} with \(k+1\) instead of \(k\), furthermore
\[
\begin{split}
\DSpm{k+1}
=\ophua{m}{0}\DSpom{k-m+1}{2m}\ek*{\ophua{m}{0}}^\ad
&=\ophua{m}{0}\Spu{2m}^\invad\DSpol{k-m}{2m+1}\Spu{2m}^\inv\ek*{\ophua{m}{0}}^\ad\\
&=\ek*{\spkua{m}{0}}^\invad\DSpol{k-m}{2m+1}\ek*{\spkua{m}{0}}^\inv.\qedhere
\end{split}
\]
\eproof
Let \(\seqsinf \in\Kgqinf\). Then, for all \(n\in\NO\), the matrices \(\ophua{n}{0}\) and \(\spkua{n}{0}\)
are invertible according to \rrema{R1017} and \rnota{N1104}. Thus, for all \(n\in\NO\), let
\beql{PP}
\PPu{n}
\defg
\bMat
[\ophua{n}{0}]^\invad&\Oqq\\
\Oqq&\ophua{n}{0}
\eMat
\eeq
\index{p@$\PPu{n}$}and let \(\QQu{n}\colon\C\to\Coo{2q}{2q}\)\index{q@$\QQu{n}$} be defined by
\beql{QQ}
\QQua{n}{z}
\defg
\bMat
\Oqq&\spkua{n}{0}\\
-z[\spkua{n}{0}]^\invad&\Oqq
\eMat.
\eeq
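For \(z\neq0\) and invertible \(\spkua{n}{0}\), the inverse of \(\QQua{n}{z}\) is \(\tmat{\Oqq&-z^\inv[\spkua{n}{0}]^\ad\\ [\spkua{n}{0}]^\inv&\Oqq}\), a formula used repeatedly in the computations below. A quick numerical sanity check (random invertible \(b\) standing in for \(\spkua{n}{0}\); all names are illustrative):

```python
import numpy as np

# Sketch: for invertible b (a random stand-in, not from the text) and z != 0,
#   Q(z)     = [[0, b], [-z b^{-*}, 0]]
#   Q(z)^{-1} = [[0, -z^{-1} b^*], [b^{-1}, 0]].
rng = np.random.default_rng(2)
q, z = 2, 1.5 - 0.5j
b = rng.standard_normal((q, q)) + 1j * rng.standard_normal((q, q)) + 3 * np.eye(q)
O = np.zeros((q, q))
b_invad = np.linalg.inv(b.conj().T)

Q = np.block([[O, b], [-z * b_invad, O]])
Q_inv = np.block([[O, -b.conj().T / z], [np.linalg.inv(b), O]])
print(np.allclose(Q @ Q_inv, np.eye(2 * q)))
```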
Obviously, the matrix \(\PPu{n}\) is invertible for all
\(n\in\NO\) and \(\QQua{n}{z}\) is invertible for all \(n\in\NO\)
and all \(z\in\C\setminus\set{0}\). Furthermore, let
\begin{align}\label{LLMMl}
\Lluo{k}{\ell}
&\defg
\bMat
\Iq&\DSpol{k}{\ell}\\
\Oqq&\Iq
\eMat&
&\text{and}&
\Mmuo{k}{\ell}
&\defg
\bMat
\Iq&\Oqq\\
-z\DSpom{k}{\ell}&\Iq
\eMat
\end{align}
for all \(k,\ell\in\NO\) and all \(z\in\C\), where \([\seq{\DSpol{k}{\ell}}{k}{0}{\infi},\seq{\DSpom{k}{\ell}}{k}{0}{\infi}]\) denotes the \tdspa{the \taKta{\ell}{\(\seqsinf\)}}.
\blemml{L1402}
Let \(\seqsinf \in\Kgqinf\), then
\[
\Llu{k}
=\PPu{m}\Lluo{k-m}{2m}\PPu{m}^\inv
=\QQu{m}\Mmuo{k-m}{2m+1}\QQu{m}^\inv
\]
and
\begin{align*}
\Mmu{k}&=\PPu{m}\Mmuo{k-m}{2m}\PPu{m}^\inv,&
\Mmu{k+1}=\QQu{m}\Lluo{k-m}{2m+1}\QQu{m}^\inv
\end{align*}
for all \(k\in\NO\) and \(m\in\mn{0}{k}\).
\elemm
\bproof
Let \(k\in\NO\), let \(m\in\mn{0}{k}\), and let \(z\in\C\setminus\set{0}\). In view of \rlemm{L0926} we have then
\[
\begin{split}
&\PPu{m}\Lluo{k-m}{2m}\PPu{m}^\inv\\
&=
\bMat
[\ophua{m}{0}]^\invad&\Oqq\\
\Oqq&\ophua{m}{0}
\eMat
\bMat
\Iq&\DSpol{k-m}{2m}\\
\Oqq&\Iq
\eMat
\bMat
[\ophua{m}{0}]^\invad&\Oqq\\
\Oqq&\ophua{m}{0}
\eMat^\inv\\
&=
\bMat
[\ophua{m}{0}]^\invad&[\ophua{m}{0}]^\invad\DSpol{k-m}{2m}\\
\Oqq&\ophua{m}{0}
\eMat
\bMat
[\ophua{m}{0}]^\ad&\Oqq\\
\Oqq&[\ophua{m}{0}]^\inv
\eMat\\
&=
\bMat
\Iq&[\ophua{m}{0}]^\invad\DSpol{k-m}{2m}[\ophua{m}{0}]^\inv\\
\Oqq&\Iq
\eMat
=
\bMat
\Iq&\DSpl{k}\\
\Oqq&\Iq
\eMat
=\Llu{k}
\end{split}
\]
and
\[
\begin{split}
&\QQua{m}{z}\Mmuoa{k-m}{2m+1}{z}\ek*{\QQua{m}{z}}^\inv\\
&=
\bMat
\Oqq&\spkua{m}{0}\\
-z[\spkua{m}{0}]^\invad&\Oqq
\eMat
\bMat
\Iq&\Oqq\\
-z\DSpom{k-m}{2m+1}&\Iq
\eMat
\bMat
\Oqq&\spkua{m}{0}\\
-z[\spkua{m}{0}]^\invad&\Oqq
\eMat^\inv\\
&=
\bMat
-z\spkua{m}{0}\DSpom{k-m}{2m+1}&\spkua{m}{0}\\
-z[\spkua{m}{0}]^\invad&\Oqq
\eMat
\bMat
\Oqq&-z^\inv[\spkua{m}{0}]^\ad\\
[\spkua{m}{0}]^\inv&\Oqq
\eMat\\
&=
\bMat
\Iq&\spkua{m}{0}\DSpom{k-m}{2m+1}[\spkua{m}{0}]^\ad\\
\Oqq&\Iq
\eMat
=
\bMat
\Iq&\DSpl{k}\\
\Oqq&\Iq
\eMat
=\Llu{k}
\end{split}
\]
and furthermore
\[
\begin{split}
&\PPu{m}\Mmuoa{k-m}{2m}{z}\PPu{m}^\inv\\
&=
\bMat
[\ophua{m}{0}]^\invad&\Oqq\\
\Oqq&\ophua{m}{0}
\eMat
\bMat
\Iq&\Oqq\\
-z\DSpom{k-m}{2m}&\Iq
\eMat
\bMat
[\ophua{m}{0}]^\invad&\Oqq\\
\Oqq&\ophua{m}{0}
\eMat^\inv\\
&=
\bMat
[\ophua{m}{0}]^\invad&\Oqq\\
-z\ophua{m}{0}\DSpom{k-m}{2m}&\ophua{m}{0}
\eMat
\bMat
[\ophua{m}{0}]^\ad&\Oqq\\
\Oqq&[\ophua{m}{0}]^\inv
\eMat\\
&=
\bMat
\Iq&\Oqq\\
-z\ophua{m}{0}\DSpom{k-m}{2m}[\ophua{m}{0}]^\ad&\Iq
\eMat
=
\bMat
\Iq&\Oqq\\
-z\DSpm{k}&\Iq
\eMat
=\Mmua{k}{z}
\end{split}
\]
and
\begin{equation*}
\begin{split}
&\QQua{m}{z}\Lluo{k-m}{2m+1}\ek*{\QQua{m}{z}}^\inv\\
&=
\bMat
\Oqq&\spkua{m}{0}\\
-z[\spkua{m}{0}]^\invad&\Oqq
\eMat
\bMat
\Iq&\DSpol{k-m}{2m+1}\\
\Oqq&\Iq
\eMat
\bMat
\Oqq&\spkua{m}{0}\\
-z[\spkua{m}{0}]^\invad&\Oqq
\eMat^\inv\\
&=
\bMat
\Oqq&\spkua{m}{0}\\
-z[\spkua{m}{0}]^\invad&-z[\spkua{m}{0}]^\invad\DSpol{k-m}{2m+1}
\eMat
\bMat
\Oqq&-z^\inv[\spkua{m}{0}]^\ad\\
[\spkua{m}{0}]^\inv&\Oqq
\eMat\\
&=
\bMat
\Iq&\Oqq\\
-z[\spkua{m}{0}]^\invad\DSpol{k-m}{2m+1}[\spkua{m}{0}]^\inv&\Iq
\eMat
=
\bMat
\Iq&\Oqq\\
-z\DSpm{k+1}&\Iq
\eMat=\Mmua{k+1}{z}.\qedhere
\end{split}
\end{equation*}
\eproof
In view of \rprop{P1542c} and \rdefi{D1137}, we are able to introduce the following notations:
\bnotal{N1053}
Let \(\seqsinf \in\Kgqinf\) and let \(\ell\in\NO\). Then, we write \symb{[\seq{\ophuo{k}{\ell}}{k}{0}{\infi},\seq{\sphuo{k}{\ell}}{k}{0}{\infi},\seq{\opkuo{k}{\ell}}{k}{0}{\infi},\seq{\spkuo{k}{\ell}}{k}{0}{\infi}]} for the \tStiqosoompo{the \taKta{\ell}{\(\seqsinf\)}}.
\enota
Let \(\seqsinf \in\Kgqinf\) and \(\ell\in\NO\). Then, the matrices \(\ophuoa{n}{\ell}{0}\) and \(\spkuoa{n}{\ell}{0}\) are invertible for all \(n\in\NO\). In accordance with \eqref{PP} and \eqref{QQ}, let
\beql{PPell}
\PPuo{n}{\ell}
\defg
\bMat
[\ophuoa{n}{\ell}{0}]^\invad&\Oqq\\
\Oqq&\ophuoa{n}{\ell}{0}
\eMat
\eeq
\index{p@$\PPuo{n}{\ell}$}for all \(n\in\NO\), and let \(\QQuo{n}{\ell}\colon\C\to\Coo{2q}{2q}\)\index{q@$\QQuo{n}{\ell}$} be defined by
\beql{QQell}
\QQuoa{n}{\ell}{z}
\defg
\bMat
\Oqq&\spkuoa{n}{\ell}{0}\\
-z[\spkuoa{n}{\ell}{0}]^\invad&\Oqq
\eMat.
\eeq
Obviously, the matrix \(\PPuo{n}{\ell}\) is invertible for all \(n\in\NO\) and \(\QQuoa{n}{\ell}{z}\) is invertible for all \(n\in\NO\) and all \(z\in\C\setminus\set{0}\). For all \(m\in\NO\), let\index{q@$\Qq{m}$}
\beql{Qq}
\Qq{m}
\defg\rprod_{\ell=0}^m\QQuo{0}{\ell}.
\eeq
In particular, we have \(\Qq{0}=\QQuo{0}{0}=\QQu{0}\).
In view of \rprop{P1542c}, we can apply \rlemm{L1402} with \(m=0\) to the \taKta{\ell}{a sequence \(\seqsinf\in\Kgqinf\)} and obtain:
\bremal{R1422}
Let \(\seqsinf\in\Kgqinf\), then
\begin{align*}
\Lluo{k}{\ell}\QQuo{0}{\ell}&=\QQuo{0}{\ell}\Mmuo{k}{\ell+1}&
&\text{and}&
\Mmuo{k+1}{\ell}\QQuo{0}{\ell}&=\QQuo{0}{\ell}\Lluo{k}{\ell+1}
\end{align*}
for all \(k,\ell\in\NO\).
\erema
By repeated application of \rrema{R1422} we get:
\blemml{L1446}
Let \(\seqsinf\in\Kgqinf\), then
\begin{align*}
\Qq{2n}\Lluo{k}{2n+1}&=\Mmu{k+n+1}\Qq{2n},&
\Qq{2n}\Mmuo{k}{2n+1}&=\Llu{k+n}\Qq{2n}
\shortintertext{and}
\Qq{2n+1}\Lluo{k}{2n+2}&=\Llu{k+n+1}\Qq{2n+1},&
\Qq{2n+1}\Mmuo{k}{2n+2}&=\Mmu{k+n+1}\Qq{2n+1}
\end{align*}
for all \(k,n\in\NO\).
\elemm
\bproof
Using \rrema{R1422} and \eqref{Qq}, we obtain
\begin{align*}
\Qq{0}\Lluo{k}{1}
&=\QQuo{0}{0}\Lluo{k}{0+1}
=\Mmuo{k+1}{0}\QQuo{0}{0}
=\Mmu{k+1}\Qq{0},\\
\Qq{0}\Mmuo{k}{1}
&=\QQuo{0}{0}\Mmuo{k}{0+1}
=\Lluo{k}{0}\QQuo{0}{0}
=\Llu{k}\Qq{0},\\
\Qq{1}\Lluo{k}{2}
&=\QQuo{0}{0}\QQuo{0}{1}\Lluo{k}{1+1}\\
&=\QQuo{0}{0}\Mmuo{k+1}{1}\QQuo{0}{1}
=\QQuo{0}{0}\Mmuo{k+1}{0+1}\QQuo{0}{1}
=\Lluo{k+1}{0}\QQuo{0}{0}\QQuo{0}{1}
=\Llu{k+1}\Qq{1},
\shortintertext{and}
\Qq{1}\Mmuo{k}{2}
&=\QQuo{0}{0}\QQuo{0}{1}\Mmuo{k}{1+1}\\
&=\QQuo{0}{0}\Lluo{k}{1}\QQuo{0}{1}
=\QQuo{0}{0}\Lluo{k}{0+1}\QQuo{0}{1}
=\Mmuo{k+1}{0}\QQuo{0}{0}\QQuo{0}{1}
=\Mmu{k+1}\Qq{1}
\end{align*}
for all \(k\in\NO\). Now let \(n\in\N\) and suppose that
\begin{align*}
\Qq{2n-1}\Lluo{j}{2n}&=\Llu{j+n}\Qq{2n-1},&
\Qq{2n-1}\Mmuo{j}{2n}&=\Mmu{j+n}\Qq{2n-1}
\end{align*}
hold true for all \(j\in\NO\). Taking additionally into account \eqref{Qq} and \rrema{R1422}, we get for all \(k\in\NO\) then
\[\begin{split}
\Qq{2n}\Lluo{k}{2n+1}
=\Qq{2n-1}\QQuo{0}{2n}\Lluo{k}{2n+1}
&=\Qq{2n-1}\Mmuo{k+1}{2n}\QQuo{0}{2n}\\
&=\Mmu{k+1+n}\Qq{2n-1}\QQuo{0}{2n}
=\Mmu{k+n+1}\Qq{2n}
\end{split}\]
and
\[\begin{split}
\Qq{2n}\Mmuo{k}{2n+1}
=\Qq{2n-1}\QQuo{0}{2n}\Mmuo{k}{2n+1}
&=\Qq{2n-1}\Lluo{k}{2n}\QQuo{0}{2n}\\
&=\Llu{k+n}\Qq{2n-1}\QQuo{0}{2n}
=\Llu{k+n}\Qq{2n}.
\end{split}\]
Using this and again \eqref{Qq} and \rrema{R1422}, we obtain for all \(\ell\in\NO\) furthermore
\[\begin{split}
\Qq{2n+1}\Lluo{\ell}{2n+2}
&=\Qq{2n}\QQuo{0}{2n+1}\Lluo{\ell}{(2n+1)+1}\\
&=\Qq{2n}\Mmuo{\ell+1}{2n+1}\QQuo{0}{2n+1}
=\Llu{\ell+1+n}\Qq{2n}\QQuo{0}{2n+1}
=\Llu{\ell+n+1}\Qq{2n+1}
\end{split}\]
and
\[\begin{split}
\Qq{2n+1}\Mmuo{\ell}{2n+2}
&=\Qq{2n}\QQuo{0}{2n+1}\Mmuo{\ell}{(2n+1)+1}\\
&=\Qq{2n}\Lluo{\ell}{2n+1}\QQuo{0}{2n+1}
=\Mmu{\ell+n+1}\Qq{2n}\QQuo{0}{2n+1}
=\Mmu{\ell+n+1}\Qq{2n+1}.
\end{split}\]
Thus, the assertion is proved by mathematical induction.
\eproof
\bnotal{N1053AA}
Let \(\seqsinf \in\Kgqinf\). For all \(\ell\in\NO\) denote by \(\seq{\Uuo{m}{\ell}}{m}{0}{\infi}\) the \tsqDmpo{the \taKta{\ell}{\(\seqsinf\)}}. Then, for all \(k,\ell\in\NO\), let \symba{\UCR{k}{\ell}\defg\Uuo{k-\ell}{\ell}}{u}.
\enota
In particular, this means \(\UCR{k}{0}\defg\Uu{k}\) and, in view of \eqref{Uu2n} and \eqref{Uu2n1}, furthermore
\begin{align}
\UCR{2n}{2k}
&\defg
\bMat
\pauo{n-k}{2k}&\pbuo{n-k}{2k}\\
\pcuo{n-k}{2k}&\pduo{n-k}{2k}\\
\eMat,&
\UCR{2n}{2k+1}
&\defg
\bMat
\pauo{n-k-1}{2k+1}&\pbuo{n-k}{2k+1}\\
\pcuo{n-k-1}{2k+1}&\pduo{n-k}{2k+1}\\
\eMat,\label{aaa66a}
\shortintertext{and}
\UCR{2n+1}{2k}
&\defg
\bMat
\pauo{n-k}{2k}&\pbuo{n-k+1}{2k}\\
\pcuo{n-k}{2k}&\pduo{n-k+1}{2k}\\
\eMat,&
\UCR{2n+1}{2k+1}
&\defg
\bMat
\pauo{n-k}{2k+1}&\pbuo{n-k}{2k+1}\\
\pcuo{n-k}{2k+1}&\pduo{n-k}{2k+1}\\
\eMat\label{aaa77a}
\end{align}
for all \(n,k\in\NO\).
Now we are able to write down the connection between the resolvent matrices \(\UCR{m}{0}=\Uu{m}\) and \(\UCR{m}{1}=\Uuo{m-1}{1}\) given via \eqref{aaa66a} and \eqref{aaa77a}.
\bpropl{P1443}
Let \(\seqsinf\in\Kgqinf\). Then
\(
\UCR{m}{1}
=\QQu{0}^\inv\Mmu{0}^\inv\UCR{m}{0}\QQu{0}
\)
for all \(m\in\N\).
\eprop
\bproof
According to \rprop{P1542c}, we have \(\seq{\su{j}^\Sta{1}}{j}{0}{\infi}\in\Kgqinf\). \rprop{P1033} yields
\begin{align*}
\UCR{1}{0}&=\Mmu{0}\Llu{0}&
&\text{and}&
\UCR{1}{1}&=\Mmuo{0}{1}.
\end{align*}
Hence, using \rlemm{L1402}, we obtain
\[\begin{split}
\QQu{0}^\inv\Mmu{0}^\inv\UCR{1}{0}\QQu{0}
&=\QQu{0}^\inv\Mmu{0}^\inv\Mmu{0}\Llu{0}\QQu{0}\\
&=\QQu{0}^\inv\Mmu{0}^\inv\Mmu{0}\QQu{0}\Mmuo{0}{1}\QQu{0}^\inv\QQu{0}
=\Mmuo{0}{1}
=\UCR{1}{1}.
\end{split}\]
Let \(n\in\N\). \rprop{P1033} yields then
\begin{align*}
\UCR{2n}{0}&=\rk*{\rprod_{k=0}^{n-1}\Mmu{k}\Llu{k}}\Mmu{n},&
\UCR{2n+1}{0}&=\rprod_{k=0}^{n}\rk{\Mmu{k}\Llu{k}}
\shortintertext{and}
\UCR{2n}{1}&=\rprod_{k=0}^{n-1}(\Mmuo{k}{1}\Lluo{k}{1}),&
\UCR{2n+1}{1}&=\rk*{\rprod_{k=0}^{n-1}\Mmuo{k}{1}\Lluo{k}{1}}\Mmuo{n}{1}.
\end{align*}
Hence, using \rlemm{L1402}, we obtain
\[
\begin{split}
\QQu{0}^\inv\Mmu{0}^\inv\UCR{2n}{0}\QQu{0}
&=\QQu{0}^\inv\Mmu{0}^\inv\rk*{\rprod_{k=0}^{n-1}\Mmu{k}\Llu{k}}\Mmu{n}\QQu{0}
=\QQu{0}^\inv\Mmu{0}^\inv\Mmu{0}\rk*{\rprod_{k=0}^{n-1}\Llu{k}\Mmu{k+1}}\QQu{0}\\
&=\QQu{0}^\inv\rk*{\rprod_{k=0}^{n-1}\QQu{0}\Mmuo{k}{1}\QQu{0}^\inv\QQu{0}\Lluo{k}{1}\QQu{0}^\inv}\QQu{0}
=\rprod_{k=0}^{n-1}(\Mmuo{k}{1}\Lluo{k}{1})
=\UCR{2n}{1}
\end{split}
\]
and
\[
\begin{split}
\QQu{0}^\inv\Mmu{0}^\inv\UCR{2n+1}{0}\QQu{0}
&=\QQu{0}^\inv\Mmu{0}^\inv\rk*{\rprod_{k=0}^{n}\Mmu{k}\Llu{k}}\QQu{0}
=\QQu{0}^\inv\Mmu{0}^\inv\Mmu{0}\rk*{\rprod_{k=0}^{n-1}\Llu{k}\Mmu{k+1}}\Llu{n}\QQu{0}\\
&=\QQu{0}^\inv\rk*{\rprod_{k=0}^{n-1}\QQu{0}\Mmuo{k}{1}\QQu{0}^\inv\QQu{0}\Lluo{k}{1}\QQu{0}^\inv}\QQu{0}\Mmuo{n}{1}\QQu{0}^\inv\QQu{0}\\
&=\rk*{\rprod_{k=0}^{n-1}\Mmuo{k}{1}\Lluo{k}{1}}\Mmuo{n}{1}
=\UCR{2n+1}{1}.\qedhere
\end{split}
\]
\eproof
In view of \rprop{P1542c}, we can apply \rprop{P1443} to the \taKta{\ell}{a sequence \(\seqsinf\in\Kgqinf\)} and obtain:
\begin{rem}\label{aaaa400}
Let \(\seqsinf\in\Kgqinf\). Then
\(
\UCR{m}{\ell}
=\Mmuo{0}{\ell}\QQuo{0}{\ell}\UCR{m}{\ell+1}\rk{\QQuo{0}{\ell}}^\inv
\)
for all \(m\in\N\) and all \(\ell\in\mn{0}{m-1}\).
\end{rem}
By applying \rrema{aaaa400} twice, we obtain:
\blemml{cor00aa00}
Let \(\seqsinf\in\Kgqinf\). Then
\[
\UCR{m}{0}
=\Mmu{0}\Llu{0}\rk{\QQu{0}\QQuo{0}{1}}\UCR{m}{2}\rk{\QQu{0}\QQuo{0}{1}}^\inv
\]
for all \(m\in\minf{2}\).
\elemm
\bproof
According to \rprop{P1542c} the sequences \(\seq{\su{j}^\Sta{1}}{j}{0}{\infi}\) and \(\seq{\su{j}^\Sta{2}}{j}{0}{\infi}\) both belong to \(\Kgqinf\). The application of \rrema{aaaa400} to \(\seqsinf\) and \(\seq{\su{j}^\Sta{1}}{j}{0}{\infi}\) yields
\begin{align*}
\UCR{m}{0}&=\Mmu{0}\QQu{0}\UCR{m}{1}\QQu{0}^\inv&
&\text{and}&
\UCR{m}{1}&=\Mmuo{0}{1}\QQuo{0}{1}\UCR{m}{2}\rk{\QQuo{0}{1}}^\inv.
\end{align*}
Hence, \(\UCR{m}{0}=\Mmu{0}\QQu{0}\Mmuo{0}{1}\QQuo{0}{1}\UCR{m}{2}\rk{\QQuo{0}{1}}^\inv\QQu{0}^\inv\). From \rrema{R1422} with \(k=\ell=0\), we obtain furthermore \(\QQu{0}\Mmuo{0}{1}=\Llu{0}\QQu{0}\) which completes the proof.
\eproof
\blemml{L1105}
Let \(\seqsinf \in\Kgqinf\), then
\[
\UCR{m}{0}\Qq{m}
=\rprod_{\ell=0}^m\rk{\Mmuo{0}{\ell}\QQuo{0}{\ell}}
\]
for all \(m\in\NO\).
\elemm
\bproof
According to \rprop{P1542c}, we have \(\seq{\su{j}^\Sta{\ell}}{j}{0}{\infi}\in\Kgqinf\) for all \(\ell\in\NO\). \rprop{P1033} yields
\begin{align*}
\Uu{0}&=\Mmu{0}&
&\text{and}&
\Uu{1}&=\Mmu{0}\Llu{0}.
\end{align*}
Hence, we obtain
\[
\Uu{0}\Qq{0}
=\Mmu{0}\QQu{0}
=\Mmuo{0}{0}\QQuo{0}{0}
\]
and, using \rrema{R1422}, furthermore
\[
\Uu{1}\Qq{1}
=\Mmu{0}\Llu{0}\QQuo{0}{0}\QQuo{0}{1}
=\Mmu{0}\QQu{0}\Mmuo{0}{1}\QQuo{0}{1}
=\rk{\Mmuo{0}{0}\QQuo{0}{0}}\rk{\Mmuo{0}{1}\QQuo{0}{1}}.
\]
Now let \(n\in\N\) and suppose that
\[
\Uu{2n-1}\Qq{2n-1}
=\rprod_{\ell=0}^{2n-1}\rk{\Mmuo{0}{\ell}\QQuo{0}{\ell}}
\]
holds true. \rprop{P1033} yields
\begin{align*}
\Uu{2n}&=\Uu{2n-1}\Mmu{n}&
&\text{and}&
\Uu{2n+1}&=\Uu{2n}\Llu{n}.
\end{align*}
Hence, using \rlemm{L1446} with \(k=0\), we obtain
\[\begin{split}
\Uu{2n}\Qq{2n}
=\Uu{2n-1}\Mmu{n}\Qq{2n-1}\QQuo{0}{2n}
=\Uu{2n-1}\Qq{2n-1}\Mmuo{0}{2n}\QQuo{0}{2n}
=\rprod_{\ell=0}^{2n}\rk{\Mmuo{0}{\ell}\QQuo{0}{\ell}}
\end{split}
\]
and with that furthermore
\[
\Uu{2n+1}\Qq{2n+1}
=\Uu{2n}\Llu{n}\Qq{2n}\QQuo{0}{2n+1}
=\Uu{2n}\Qq{2n}\Mmuo{0}{2n+1}\QQuo{0}{2n+1}
=\rprod_{\ell=0}^{2n+1}\rk{\Mmuo{0}{\ell}\QQuo{0}{\ell}}.
\]
In view of \(\Uu{m}=\UCR{m}{0}\) for all \(m\in\NO\), the assertion is thus proved by mathematical induction.
\eproof
The following result expresses the resolvent matrix $\UCR{m}{0}=\Uu{m}$ in terms of the resolvent matrices \(\UCR{\ell}{0}=\Uu{\ell}\) and \(\UCR{m}{\ell+1}=\Uuo{m-\ell-1}{\ell+1}\), which can be considered as a splitting of the original moment problem:
\btheol{Taaa007}
Let \(\seqsinf\in\Kgqinf\). Then
\(
\UCR{m}{0}
=\UCR{\ell}{0}\Qq{\ell}\UCR{m}{\ell+1}\Qq{\ell}^\inv
\)
for all \(m\in\N\) and \(\ell\in\mn{0}{m-1}\).
\etheo
\bproof
Let \(m\in\N\). According to \rprop{P1033} and \rrema{aaaa400} we have
\[
\UCR{0}{0}\Qq{0}\UCR{m}{1}\Qq{0}^\inv
=\Uuo{0}{0}\QQuo{0}{0}\UCR{m}{1}\rk{\QQuo{0}{0}}^\inv
=\Uu{0}\QQu{0}\UCR{m}{1}\QQu{0}^\inv
=\Mmu{0}\QQu{0}\UCR{m}{1}\QQu{0}^\inv
=\UCR{m}{0}.
\]
Now suppose \(m\geq2\) and that
\[
\UCR{m}{0}
=\UCR{\ell}{0}\Qq{\ell}\UCR{m}{\ell+1}\Qq{\ell}^\inv
\]
holds true for some \(\ell\in\mn{0}{m-2}\). Using \rlemm{L1105} and \rrema{aaaa400}, we can conclude then
\[\begin{split}
\UCR{\ell+1}{0}\Qq{\ell+1}\UCR{m}{\ell+2}\Qq{\ell+1}^\inv
&=\ek*{\rprod_{k=0}^{\ell+1}\rk{\Mmuo{0}{k}\QQuo{0}{k}}}\UCR{m}{\ell+2}\rk{\Qq{\ell}\QQuo{0}{\ell+1}}^\inv\\
&=\ek*{\rprod_{k=0}^{\ell}\rk{\Mmuo{0}{k}\QQuo{0}{k}}}\rk{\Mmuo{0}{\ell+1}\QQuo{0}{\ell+1}}\UCR{m}{(\ell+1)+1}\rk{\QQuo{0}{\ell+1}}^\inv\Qq{\ell}^\inv\\
&=\UCR{\ell}{0}\Qq{\ell}\UCR{m}{\ell+1}\Qq{\ell}^\inv
=\UCR{m}{0}.
\end{split}\]
Thus, the assertion is proved by mathematical induction.
\eproof
\section[Orthogonal matrix polynomials corresponding to a transformed sequence]{Orthogonal matrix polynomials corresponding to a \hKt{ed} moment sequence}\label{S1945}
From \rpropss{P1443}{P1422} we obtain the following connection between the matrix polynomials \(\ophu{n}\), \(\sphu{n}\), \(\opku{n}\), and \(\spku{n}\), and the matrix polynomials \(\ophuo{n}{1}\), \(\sphuo{n}{1}\), \(\opkuo{n}{1}\), and \(\spkuo{n}{1}\) given in \rnotass{N1104}{N1053} and \rdefi{D1137} corresponding to a \tSpd{} sequence and its first \tKt, respectively:
\bpropl{C1536}
Let \(\seqsinf\in\Kgqinf\) with first \tKt{} $(s_j^\Sta{1})_{j=0}^\infi$. Then the \tStiqosoomp{} \(\PQPQ\) of \(\seqsinf\) and \([\seq{\ophuo{k}{1}}{k}{0}{\infi},\seq{\sphuo{k}{1}}{k}{0}{\infi},\seq{\opkuo{k}{1}}{k}{0}{\infi},\seq{\spkuo{k}{1}}{k}{0}{\infi}]\) of $(s_j^\Sta{1})_{j=0}^\infi$, resp., fulfill
\begin{align*}
\ek*{\ophuoa{n}{1}{z}}\ek*{\ophuoa{n}{1}{0}}^\inv&=\ek*{\su{0}^\inv\spkua{n}{z}}\ek*{\su{0}^\inv\spkua{n}{0}}^\inv,\\
\ek*{\sphuoa{n}{1}{z}}\ek*{\ophuoa{n}{1}{0}}^\inv&=\ek*{\spkua{n}{z}-\su{0}\opkua{n}{z}}\ek*{\su{0}^\inv\spkua{n}{0}}^\inv,\\
\ek*{\opkuoa{n-1}{1}{z}}\ek*{\spkuoa{n-1}{1}{0}}^\inv&=\ek*{\su{0}^\inv\sphua{n}{z}}\ek*{-\su{0}\ophua{n}{0}}^\inv,
\shortintertext{and}
\ek*{\spkuoa{n-1}{1}{z}}\ek*{\spkuoa{n-1}{1}{0}}^\inv&
=\ek*{z\sphua{n}{z}-\su{0}\ophua{n}{z}}\ek*{-\su{0}\ophua{n}{0}}^\inv
\end{align*}
for all \(n\in\N\) and all \(z\in\C\).
\eprop
\bproof
According to \rprop{P1542c}, we have \(\seq{\su{j}^\Sta{1}}{j}{0}{\infty}\in\Kgqinf\).
In view of \eqref{ma11}, \eqref{QQ}, and \(\su{0}^\ad=\su{0}\), we get
\begin{align*}
\Mmua{0}{z}
&=
\bMat
\Iq&\Oqq\\
-z\su{0}^\inv&\Iq
\eMat&
&\text{and}&
\QQua{0}{z}
&=
\bMat
\Oqq&\su{0}\\
-z\su{0}^\inv&\Oqq
\eMat.
\end{align*}
Hence,
\[
\Mmua{0}{z}\QQua{0}{z}
=
\bMat
\Oqq&\su{0}\\
-z\su{0}^\inv&-z\Iq
\eMat
\]
and thus
\[
\ek*{\QQua{0}{z}}^\inv\ek*{\Mmua{0}{z}}^\inv
=\ek*{\Mmua{0}{z}\QQua{0}{z}}^\inv
=
\bMat
-\Iq&-z^\inv\su{0}\\
\su{0}^\inv&\Oqq
\eMat.
\]
Using \eqref{aaa66a} and \rprop{P1443} we obtain then
\[\begin{split}
\bMat
\pauoa{n-1}{1}{z}&\pbuoa{n}{1}{z}\\
\pcuoa{n-1}{1}{z}&\pduoa{n}{1}{z}\\
\eMat
&=\UCRa{2n}{1}{z}
=\ek*{\QQua{0}{z}}^\inv\ek*{\Mmua{0}{z}}^\inv\UCRa{2n}{0}{z}\QQua{0}{z}\\
&=
\bMat
-\Iq&-z^\inv\su{0}\\
\su{0}^\inv&\Oqq
\eMat
\bMat
\pauoa{n}{0}{z}&\pbuoa{n}{0}{z}\\
\pcuoa{n}{0}{z}&\pduoa{n}{0}{z}\\
\eMat
\bMat
\Oqq&\su{0}\\
-z\su{0}^\inv&\Oqq
\eMat\\
&=
\bMat
\ek{z\pbuoa{n}{0}{z}+\su{0}\pduoa{n}{0}{z}}\su{0}^\inv&-\ek{\pauoa{n}{0}{z}+z^\inv\su{0}\pcuoa{n}{0}{z}}\su{0}\\
-z\su{0}^\inv\pbuoa{n}{0}{z}\su{0}^\inv&\su{0}^\inv\pauoa{n}{0}{z}\su{0}
\eMat.
\end{split}
\]
Since \eqref{aaa66a} and \rprop{P1422} yield
\[
\bMat
\pauoa{n}{0}{z}&\pbuoa{n}{0}{z}\\
\pcuoa{n}{0}{z}&\pduoa{n}{0}{z}\\
\eMat
=
\bMat
\ek{\spkua{n}{z}}[\spkua{n}{0}]^\inv&-\ek{\sphua{n}{z}}[\ophua{n}{0}]^\inv\\
-z\ek{\opkua{n}{z}}[\spkua{n}{0}]^\inv&\ek{\ophua{n}{z}}[\ophua{n}{0}]^\inv
\eMat
\]
and
\[
\bMat
\pauoa{n-1}{1}{z}&\pbuoa{n}{1}{z}\\
\pcuoa{n-1}{1}{z}&\pduoa{n}{1}{z}\\
\eMat
=
\bMat
\ek{\spkuoa{n-1}{1}{z}}[\spkuoa{n-1}{1}{0}]^\inv&-\ek{\sphuoa{n}{1}{z}}[\ophuoa{n}{1}{0}]^\inv\\
-z\ek{\opkuoa{n-1}{1}{z}}[\spkuoa{n-1}{1}{0}]^\inv&\ek{\ophuoa{n}{1}{z}}[\ophuoa{n}{1}{0}]^\inv
\eMat
\]
we can conclude
\begin{align*}
&\ek*{\spkuoa{n-1}{1}{z}}\ek*{\spkuoa{n-1}{1}{0}}^\inv
=\ek*{z\rk*{-\ek*{\sphua{n}{z}}\ek*{\ophua{n}{0}}^\inv}+\su{0}\ek*{\ophua{n}{z}}\ek*{\ophua{n}{0}}^\inv}\su{0}^\inv,\\
-&\ek*{\sphuoa{n}{1}{z}}\ek*{\ophuoa{n}{1}{0}}^\inv
=-\ek*{\ek*{\spkua{n}{z}}\ek*{\spkua{n}{0}}^\inv+z^\inv\su{0}\rk*{-z\ek*{\opkua{n}{z}}\ek*{\spkua{n}{0}}^\inv}}\su{0},\\
-z&\ek*{\opkuoa{n-1}{1}{z}}\ek*{\spkuoa{n-1}{1}{0}}^\inv
=-z\su{0}^\inv\rk*{-\ek*{\sphua{n}{z}}\ek*{\ophua{n}{0}}^\inv}\su{0}^\inv,
\shortintertext{and}
&\ek*{\ophuoa{n}{1}{z}}\ek*{\ophuoa{n}{1}{0}}^\inv
=\su{0}^\inv\rk*{\ek*{\spkua{n}{z}}\ek*{\spkua{n}{0}}^\inv}\su{0}.
\end{align*}
Hence, the asserted identities follow.
\eproof
\blemml{L0822}
Let \(\seqsinf\in\Kgqinf\). Then
\begin{align*}
\ophuoa{n}{1}{0}&=\su{0}^\inv\spkua{n}{0}&
&\text{and}&
\spkuoa{n}{1}{0}&=-\su{0}\ophua{n+1}{0}
\end{align*}
for all \(n\in\N\).
\elemm
\bproof
According to \rprop{P1542c}, we have \(\seq{\su{j}^\Sta{1}}{j}{0}{\infty}\in\Kgqinf\). Denote by \([\seq{\DSpl{k}}{k}{0}{\infty},\seq{\DSpm{k}}{k}{0}{\infty}]\) and \([\seq{\DSpol{k}{1}}{k}{0}{\infty},\seq{\DSpom{k}{1}}{k}{0}{\infty}]\) the \tdspa{\(\seqsinf\) and \(\seq{\su{j}^\Sta{1}}{j}{0}{\infty}\)}, respectively. Let \(n\in\N\). Using \rremass{R1017}{R0921} and in view of \eqref{M0}, we then have
\[
\begin{split}
\ophuoa{n}{1}{0}
&=(-1)^n\rk*{\rprod_{k=0}^{n-1}\DSpom{k}{1}\DSpol{k}{1}}^\inv
=(-1)^n\rk*{\rprod_{k=0}^{n-1}\su{0}^\inv\DSpl{k}\su{0}^\inv\su{0}\DSpm{k+1}\su{0}}^\inv\\
&=(-1)^n\ek*{\su{0}^\inv\DSpl{0}\rk*{\rprod_{\ell=1}^{n-1}\DSpm{\ell}\DSpl{\ell}}\DSpm{n}\su{0}}^\inv
=(-1)^n\ek*{\DSpm{0}\DSpl{0}\rk*{\rprod_{\ell=1}^{n-1}\DSpm{\ell}\DSpl{\ell}}\DSpm{n}\su{0}}^\inv\\
&=\su{0}^\inv\rk*{(-1)^n\ek*{\rk*{\rprod_{\ell=0}^{n-1}\DSpm{\ell}\DSpl{\ell}}\DSpm{n}}^\inv}
=\su{0}^\inv\spkua{n}{0}
\end{split}
\]
and
\[
\begin{split}
\spkuoa{n}{1}{0}
&=(-1)^n\ek*{\rk*{\rprod_{k=0}^{n-1}\DSpom{k}{1}\DSpol{k}{1}}\DSpom{n}{1}}^\inv\\
&=(-1)^n\ek*{\rk*{\rprod_{k=0}^{n-1}\su{0}^\inv\DSpl{k}\su{0}^\inv\su{0}\DSpm{k+1}\su{0}}\su{0}^\inv\DSpl{n}\su{0}^\inv}^\inv\\
&=(-1)^n\ek*{\su{0}^\inv\DSpl{0}\rk*{\rprod_{\ell=1}^{n}\DSpm{\ell}\DSpl{\ell}}\su{0}^\inv}^\inv
=(-1)^n\ek*{\DSpm{0}\DSpl{0}\rk*{\rprod_{\ell=1}^{n}\DSpm{\ell}\DSpl{\ell}}\su{0}^\inv}^\inv\\
&=-\su{0}\ek*{(-1)^{n+1}\rk*{\rprod_{\ell=0}^{n}\DSpm{\ell}\DSpl{\ell}}^\inv}
=-\su{0}\ophua{n+1}{0}.\qedhere
\end{split}
\]
\eproof
\btheol{T0820}
Let \(\seqsinf\in\Kgqinf\) with first \tKt{} $(s_j^\Sta{1})_{j=0}^\infi$. Then the \tStiqosoomp{} \(\PQPQ\) of \(\seqsinf\) and \([\seq{\ophuo{k}{1}}{k}{0}{\infi},\seq{\sphuo{k}{1}}{k}{0}{\infi},\seq{\opkuo{k}{1}}{k}{0}{\infi},\seq{\spkuo{k}{1}}{k}{0}{\infi}]\) of $(s_j^\Sta{1})_{j=0}^\infi$, resp., fulfill
\begin{align*}
\ophuoa{n}{1}{z}&=\su{0}^\inv\spkua{n}{z},&\sphuoa{n}{1}{z}&=\spkua{n}{z}-\su{0}\opkua{n}{z},\\
\opkuoa{n-1}{1}{z}&=\su{0}^\inv\sphua{n}{z},&\spkuoa{n-1}{1}{z}&=z\sphua{n}{z}-\su{0}\ophua{n}{z}
\end{align*}
for all \(n\in\N\) and all \(z\in\C\).
\etheo
\bproof
Combine \rprop{C1536} with \rlemm{L0822}.
\eproof
\bpropl{orthg-001}
Let \(\seqsinf\in\Kgqinf\) with \tStiqosoomp{} \(\PQPQ\) and first \tKt{} $(s_j^\Sta{1})_{j=0}^\infi$. Then the sequences $(s_j^\Sta{1})_{j=0}^\infi$ and $(s_{j+1}^\Sta{1})_{j=0}^\infi$ both belong to \(\Kgqinf\). Furthermore \(\seq{\su{0}^\inv\spku{k}}{k}{0}{\infi}\) is a \tmrosmpa{$(s_j^\Sta{1})_{j=0}^\infi$} and \(\seq{\su{0}^\inv\sphu{k+1}}{k}{0}{\infi}\) is a \tmrosmpa{$(s_{j+1}^\Sta{1})_{j=0}^\infi$}. In particular, if \(\muh\in\Mggqaag{\ra}{(s_j^\Sta{1})_{j=0}^\infi}\), we have the orthogonality relations
\begin{align*}
\int_{\ra}\ek*{\su{0}^\inv\spkua{m}{t}}^\ad\muh(\dif t)\ek*{\su{0}^\inv\spkua{n}{t}}
&=
\begin{cases}
\Oqq\incase{m\neq n}\\
\Lau{n}\incase{m=n}
\end{cases}
\shortintertext{and}
\int_{\ra}\ek*{\su{0}^\inv\sphua{m}{t}}^\ad\muk(\dif t)\ek*{\su{0}^\inv\sphua{n}{t}}
&=
\begin{cases}
\Oqq\incase{m\neq n}\\
\Lu{n}\incase{m=n}
\end{cases}
\end{align*}
for all \(m,n\in\NO\), where \(\muk\colon\Bra\to\Cggq\) is defined by \symba{\muk(B)\defg\int_{\ra}t\muh(\dif t)}{m} and belongs to \(\Mggqaag{\ra}{(s_{j+1}^\Sta{1})_{j=0}^\infi}\).
\eprop
\bproof
In view of \rprop{P1542c} and \rrema{R2347}, we see that $(s_j^\Sta{1})_{j=0}^\infi$ and $(s_{j+1}^\Sta{1})_{j=0}^\infi$ both belong to \(\Kgqinf\). Since \(\spkua{0}{z}=\su{0}\) for all \(z\in\C\), the combination of \rtheo{T0820} with \rremass{R0001}{R2352} completes the proof.
\eproof
Let \(m\in\NO\) and let \(\seqs{m}\in\Kgqu{m}\). Then we want to draw attention to two distinguished elements of the solution set \(\Mggqaakg{\ra}{\seq{\su{j}}{j}{0}{m}}\). This concerns those measures \(\smin{m}\)\index{s@\(\smin{m}\)} and \(\smax{m}\)\index{s@\(\smax{m}\)}, respectively, the \tSt{s} of which are generated via \rthmp{T1327}{T1327.a} by the two constant pairs \((\iota,\theta),(\theta,\iota)\in\SPq\), where \(\iota\) and \(\theta\) are the constant functions in \(\Cs\) with values \(\Iq\) and \(\Oqq\), respectively. The measures \(\smin{m}\) and \(\smax{m}\) are called the \noti{lower}{extremal element of $\Mggqaakg{\ra}{\seq{\su{j}}{j}{0}{m}}$!lower} and \noti{upper extremal elements of \(\Mggqaakg{\ra}{\seq{\su{j}}{j}{0}{m}}\)}{extremal element of $\Mggqaakg{\ra}{\seq{\su{j}}{j}{0}{m}}$!upper}, respectively. In view of \rtheo{T1327}, \eqref{Uu2n}, \eqref{Uu2n1}, and \rprop{P1422}, for \(n\in\NO\) and \(z\in\Cs\) we infer for the corresponding \tSt{s}
\begin{align}
\sttrhla{\smin{2n}}{z}&=\ek*{\paua{n}{z}}\ek*{\pcua{n}{z}}^\inv=-\ek*{\spkua{n}{z}}\ek*{z\opkua{n}{z}}^\inv,\label{min0}\\
\sttrhla{\smax{2n}}{z}&=\ek*{\pbua{n}{z}}\ek*{\pdua{n}{z}}^\inv=-\ek*{\sphua{n}{z}}\ek*{\ophua{n}{z}}^\inv\label{max0}
\shortintertext{and}
\sttrhla{\smin{2n+1}}{z}&=\ek*{\paua{n}{z}}\ek*{\pcua{n}{z}}^\inv=-\ek*{\spkua{n}{z}}\ek*{z\opkua{n}{z}}^\inv,\label{min1}\\
\sttrhla{\smax{2n+1}}{z}&=\ek*{\pbua{n+1}{z}}\ek*{\pdua{n+1}{z}}^\inv=-\ek*{\sphua{n+1}{z}}\ek*{\ophua{n+1}{z}}^\inv.\label{max1}
\end{align}
The functions introduced in \eqref{min0}--\eqref{max1} play an important role in the considerations of Yu.~M.~Dyukarev~\zita{MR2053150}. We refer the reader to~\zitaa{MR2053150}{\cSect{3}} for a detailed discussion of these functions and their extremality properties.
Taking into account \rnota{N1104}, we see that \(\sttrhl{\smin{2n+1}}\) coincides with \(\sttrhl{\smin{2n}}\) and does not depend on \(\su{2n+1}\) and that \(\sttrhl{\smax{2n}}\) coincides with \(\sttrhl{\smax{2n-1}}\) and does not depend on \(\su{2n}\). In particular
\begin{align}\label{o=e}
\smin{2n+1}&=\smin{2n}&
&\text{and}&
\smax{2n}&=\smax{2n-1}.
\end{align}
Since \(\smin{m}\) and \(\smax{m}\) both belong to $\Mggqaakg{\ra}{\seq{\su{j}}{j}{0}{m}}$, we can hence conclude
\begin{align}\label{sm=}
\smin{2n}&\in\MggqAAg{\ra}{\seq{\su{j}}{j}{0}{2n}},&
\smax{2n-1}&\in\MggqAAg{\ra}{\seq{\su{j}}{j}{0}{2n-1}}.
\end{align}
The lower and upper extremal elements \(\smin{m}\) and \(\smax{m}\) of \(\Mggqaakg{\ra}{\seq{\su{j}}{j}{0}{m}}\) are concentrated on a finite number of points in \(\ra\). In particular, they possess power moments up to any order, which coincide with the \tzext{s} introduced in \rdefi{D0909}:
\blemml{L1014}
Let \(n\in\N\) and let \(\seqs{2n-1}\in\Kgqu{2n-1}\). Then \(\int_\ra x^j\smax{2n-1}(\dif x)=\zext{s}{j}\) for all \(j\in\NO\), where \(\seqzinf\) is the \tzexto{\(\seqs{2n-1}\)}.
\elemm
\bproof
For all \(j\in\NO\) let \(t_j\defg\int_{\ra}x^j\smax{2n-1}(\dif x)\). According to \rtheo{T1121}, we have then \(\seq{t_j}{j}{0}{\infi}\in\Kggqinf\). Because of \eqref{sm=}, we get furthermore \(\su{j}=t_j\) for all \(j\in\mn{0}{2n-1}\). From \rprop{P0926} and \eqref{L} we can consequently conclude that the matrix \(t_{2n}-\Theta_{2n}\) is \tnnH{}, where \(\Theta_{2n}\defg\zuu{n}{2n-1}\Hu{n-1}^\inv\yuu{n}{2n-1}\). Now, we consider an arbitrary \(\epsilon>0\). Let \(\su{2n}\defg\Theta_{2n}+\epsilon\Iq\) and denote by \((\Spu{j})_{j=0}^{2n}\) the \tSpa{\(\seqs{2n}\)}. In view of \eqref{L}, the matrix \(\Spu{2n}=\Lu{n}=\epsilon\Iq\) is \tpH{}. From \rprop{T1337} we then can easily conclude \(\seqs{2n}\in\Kgqu{2n}\). As an element of \(\Mggqaakg{\ra}{\seqs{2n}}\), the measure \(\smax{2n}\) fulfills \(\su{2n}-\int_\ra x^{2n}\smax{2n}(\dif x)\in\Cggq\). Using \eqref{o=e}, we obtain
\[
\Theta_{2n}
\leq t_{2n}
=\int_\ra x^{2n}\smax{2n}(\dif x)
\leq\su{2n}
=\Theta_{2n}+\epsilon\Iq.
\]
Since this holds true for all \(\epsilon>0\), we get \(t_{2n}=\Theta_{2n}\). Thus, the sequence \(\seq{t_j}{j}{0}{\infi}\) belongs to \(\Kggdoq{2n}\). Hence, we can apply \rlemm{L0912} to see that \(\seq{t_j}{j}{0}{\infi}\) is the \tzexto{\(\seq{t_j}{j}{0}{2n-1}\)}. Since \(\su{j}=t_j\) for all \(j\in\mn{0}{2n-1}\), it follows that \(t_j=\zext{s}{j}\) for all \(j\in\NO\), which completes the proof.
\eproof
\blemml{L1037}
Let \(n\in\NO\) and let \(\seqs{2n}\in\Kgqu{2n}\). Then \(\int_\ra x^j\smin{2n}(\dif x)=\zext{s}{j}\) for all \(j\in\NO\), where \(\seqzinf\) is the \tzexto{\(\seqs{2n}\)}.
\elemm
\bproof
For all \(j\in\NO\) let \(t_j\defg\int_{\ra}x^j\smin{2n}(\dif x)\). According to \rtheo{T1121}, we have then \(\seq{t_j}{j}{0}{\infi}\in\Kggqinf\). Because of \eqref{sm=}, we get furthermore \(\su{j}=t_j\) for all \(j\in\mn{0}{2n}\). From \rprop{P0926} and \eqref{La} we can consequently conclude that the matrix \(t_{2n+1}-\Theta_{2n+1}\) is \tnnH{}, where \(\Theta_1\defg\Oqq\) and \(\Theta_{2n+1}\defg\zuu{n+1}{2n}\Ku{n-1}^\inv\yuu{n+1}{2n}\) for \(n\in\N\). Now, we consider an arbitrary \(\epsilon>0\). Let \(\su{2n+1}\defg\Theta_{2n+1}+\epsilon\Iq\) and denote by \((\Spu{j})_{j=0}^{2n+1}\) the \tSpa{\(\seqs{2n+1}\)}. In view of \eqref{La}, the matrix \(\Spu{2n+1}=\Lau{n}=\epsilon\Iq\) is \tpH{}. From \rprop{T1337} we then can easily conclude \(\seqs{2n+1}\in\Kgqu{2n+1}\). As an element of \(\Mggqaakg{\ra}{\seqs{2n+1}}\), the measure \(\smin{2n+1}\) fulfills \(\su{2n+1}-\int_\ra x^{2n+1}\smin{2n+1}(\dif x)\in\Cggq\). Using \eqref{o=e}, we obtain
\[
\Theta_{2n+1}
\leq t_{2n+1}
=\int_\ra x^{2n+1}\smin{2n+1}(\dif x)
\leq\su{2n+1}
=\Theta_{2n+1}+\epsilon\Iq.
\]
Since this holds true for all \(\epsilon>0\), we get \(t_{2n+1}=\Theta_{2n+1}\). Thus, the sequence \(\seq{t_j}{j}{0}{\infi}\) belongs to \(\Kggdoq{2n+1}\). Hence, we can apply \rlemm{L0912} to see that \(\seq{t_j}{j}{0}{\infi}\) is the \tzexto{\(\seq{t_j}{j}{0}{2n}\)}. Since \(\su{j}=t_j\) for all \(j\in\mn{0}{2n}\), it follows that \(t_j=\zext{s}{j}\) for all \(j\in\NO\), which completes the proof.
\eproof
In combination with \eqref{min0}--\eqref{max1}, \rtheo{T0820} yields a relation between the lower and upper extremal elements associated with a \tSpd{} sequence and its first \tKt{}:
\bpropl{P1058}
Let \(\seqsinf\in\Kgqinf\) with first \tKt{} $(s_j^\Sta{1})_{j=0}^\infi$. Then \(\seqs{m}\) and $(s_j^\Sta{1})_{j=0}^m$ both belong to \(\Kgqu{m}\) for all \(m\in\NO\). For all \(m\in\NO\) denote by \(\smin{m}\) and \(\smax{m}\) the lower and upper extremal element of \(\Mggqaakg{\ra}{\seq{\su{j}}{j}{0}{m}}\). Furthermore, for all \(m\in\N\), let \(\smino{m}{1}\) and \(\smaxo{m}{1}\) be the lower and upper extremal element of \(\Mggqaakg{\ra}{\seq{\su{j}^\Sta{1}}{j}{0}{m}}\). Then
\begin{align}
\sttrhla{\smino{2n-2}{1}}{z}&=\sttrhla{\smino{2n-1}{1}}{z}=-\su{0}-\su{0}\ek*{z\sttrhla{\smax{2n}}{z}}^\inv\su{0}\label{P1058.B1}
\shortintertext{and}
\sttrhla{\smaxo{2n-1}{1}}{z}&=\sttrhla{\smaxo{2n}{1}}{z}=-\su{0}-\su{0}\ek*{z\sttrhla{\smin{2n}}{z}}^\inv\su{0}\label{P1058.B2}
\end{align}
for all \(n\in\N\) and all \(z\in\Cs\).
\eprop
\bproof
According to \rprop{P1542c}, we have \(\seq{\su{j}^\Sta{1}}{j}{0}{\infty}\in\Kgqinf\). Denote by \(\PQPQ\) the \tStiqosoomp{} of \(\seqsinf\) and by \([\seq{\ophuo{k}{1}}{k}{0}{\infi},\seq{\sphuo{k}{1}}{k}{0}{\infi},\seq{\opkuo{k}{1}}{k}{0}{\infi},\seq{\spkuo{k}{1}}{k}{0}{\infi}]\) the \tStiqosoomp{} of $(s_j^\Sta{1})_{j=0}^\infi$. Let \(n\in\N\) and \(z\in\Cs\). In view of \eqref{min0} and \eqref{min1}, we have
\[
\sttrhla{\smino{2n-2}{1}}{z}
=\sttrhla{\smino{2n-1}{1}}{z}
=-\ek*{\spkuoa{n-1}{1}{z}}\ek*{z\opkuoa{n-1}{1}{z}}^\inv.
\]
Using \rtheo{T0820}, we furthermore obtain
\[\begin{split}
-\ek*{\spkuoa{n-1}{1}{z}}\ek*{z\opkuoa{n-1}{1}{z}}^\inv
&=-\ek*{z\sphua{n}{z}-\su{0}\ophua{n}{z}}\ek*{z\su{0}^\inv\sphua{n}{z}}^\inv\\
&=-\su{0}+\frac{1}{z}\su{0}\ek*{\ophua{n}{z}}\ek*{\sphua{n}{z}}^\inv\su{0}.
\end{split}\]
Taking additionally into account \eqref{max0}, \eqref{P1058.B1} follows.
In view of \eqref{max0} and \eqref{max1}, we have
\[
\sttrhla{\smaxo{2n-1}{1}}{z}
=\sttrhla{\smaxo{2n}{1}}{z}
=-\ek*{\sphuoa{n}{1}{z}}\ek*{\ophuoa{n}{1}{z}}^\inv.
\]
Using \rtheo{T0820}, we furthermore obtain
\[\begin{split}
-\ek*{\sphuoa{n}{1}{z}}\ek*{\ophuoa{n}{1}{z}}^\inv
&=-\ek*{\spkua{n}{z}-\su{0}\opkua{n}{z}}\ek*{\su{0}^\inv\spkua{n}{z}}^\inv\\
&=-\su{0}+\su{0}\ek*{\opkua{n}{z}}\ek*{\spkua{n}{z}}^\inv\su{0}.
\end{split}\]
Taking additionally into account \eqref{min0}, \eqref{P1058.B2} follows.
\eproof
From the identities derived in the proof of \rprop{P1058}, we can easily obtain the following relations between the matrix polynomials \(\PQPQ\) and \([\seq{\ophuo{k}{1}}{k}{0}{\infi},\seq{\sphuo{k}{1}}{k}{0}{\infi},\seq{\opkuo{k}{1}}{k}{0}{\infi},\seq{\spkuo{k}{1}}{k}{0}{\infi}]\):
\btheol{Tnew1}
Let \(\seqsinf\in\Kgqinf\) with first \tKt{} $(s_j^\Sta{1})_{j=0}^\infi$ and \tStiqosoomp{} \(\PQPQ\). Then $(s_j^\Sta{1})_{j=0}^\infi$ belongs to \(\Kgqinf\). Denote by \([\seq{\ophuo{k}{1}}{k}{0}{\infi},\seq{\sphuo{k}{1}}{k}{0}{\infi},\seq{\opkuo{k}{1}}{k}{0}{\infi},\seq{\spkuo{k}{1}}{k}{0}{\infi}]\) the \tStiqosoompo{$(s_j^\Sta{1})_{j=0}^\infi$}. Then
\begin{align*}
\su{0}\ek*{\ophua{n}{z}}\ek*{\sphua{n}{z}}^\inv\su{0}+\ek*{\spkuoa{n-1}{1}{z}}\ek*{\opkuoa{n-1}{1}{z}}^\inv
&=z\su{0}
\shortintertext{and}
\su{0}\ek*{\opkua{n}{z}}\ek*{\spkua{n}{z}}^\inv\su{0}+\ek*{\sphuoa{n}{1}{z}}\ek*{\ophuoa{n}{1}{z}}^\inv
&=\su{0}
\end{align*}
for all \(n\in\N\) and all \(z\in\Cs\).
\etheo
Note that interrelations similar to those exposed in \rtheoss{T0820}{Tnew1} between the polynomials \(\PQPQ\) and \([\seq{\ophuo{k}{1}}{k}{0}{\infi},\seq{\sphuo{k}{1}}{k}{0}{\infi},\seq{\opkuo{k}{1}}{k}{0}{\infi},\seq{\spkuo{k}{1}}{k}{0}{\infi}]\) were considered in the scalar case in~\cite{CR16}. \rprop{P1058} can also be seen from the following matrix continued fraction expansions, which appear in connection with matrix Hurwitz type polynomials in~\zita{MR3327132}. For $A,B\in \Cqq $ with $B$ invertible, set \symb{\frac{A}{B}\defg AB^\inv}.
\bpropnl{\zitaa{MR3324594}{\ctheo{3.4}}}{prop2000}
Let \(\seqsinf\in\Kgqinf\) with \tdsp{} \([\seq{\DSpl{k}}{k}{0}{\infty},\seq{\DSpm{k}}{k}{0}{\infty}]\). For all \(n\in\NO\) and all \(z\in\Cs\), then
\begin{align*}
\sttrhla{\smin{2n}}{z}
&
=\cfrac{\Iq }{-z\DSpm{0}+
\cfrac{\Iq }{\DSpl{0}+
\cfrac{\Iq}{-z\DSpm{1}+
\cfrac{\ddots}{-z\DSpm{n-1}+
\cfrac{\Iq }{\DSpl{n-1}-z^\inv \DSpm{n}^\inv}}}}}
\shortintertext{and}
\sttrhla{\smax{2n}}{z}
&
=\cfrac{\Iq }{-z\DSpm{0}+
\cfrac{\Iq }{\DSpl{0}+
\cfrac{\Iq }{-z\DSpm{1}+
\cfrac{\ddots}{\DSpl{n-2}+
\cfrac{\Iq }{-z\DSpm{n-1}+\DSpl{n-1}^\inv}}}}}.
\end{align*}
\end{prop}
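Such stacked fractions are naturally evaluated from the innermost level outwards. The following Python sketch illustrates this in the scalar case \(q=1\) with arbitrary illustrative values for \(z\), \(\DSpl{k}\), and \(\DSpm{k}\) (the helper \texttt{eval\_cfrac} and the lists \texttt{l}, \texttt{m} are our own devices, not taken from the cited works); it checks the bottom-up evaluator against the directly nested expression for depth \(n=2\).

```python
def eval_cfrac(terms, tail):
    """Evaluate 1/(a_0 + 1/(a_1 + ... + 1/(a_{r-1} + tail))) bottom-up (scalar case)."""
    val = terms[-1] + tail
    for a in reversed(terms[:-1]):
        val = a + 1.0 / val
    return 1.0 / val

# Scalar stand-ins for the matrix parameters (hypothetical values):
z, l, m = 1.5, [0.7, 1.3], [0.9, 1.1, 0.8]
n = 2

# Partial denominators of the first expansion, read top to bottom:
# -z m_0, l_0, -z m_1, ..., l_{n-1}, with trailing correction -z^{-1} m_n^{-1}.
terms = []
for k in range(n):
    terms += [-z * m[k], l[k]]
tail = -1.0 / (z * m[n])

lhs = eval_cfrac(terms, tail)
# Direct nesting for n = 2, written out by hand:
rhs = 1.0 / (-z * m[0] + 1.0 / (l[0] + 1.0 / (-z * m[1] + 1.0 / (l[1] - 1.0 / (z * m[2])))))
assert abs(lhs - rhs) < 1e-12
```

In the genuine matrix case the scalar reciprocals become matrix inverses, but the bottom-up order of evaluation is the same.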
\appendix
\section{Orthogonal matrix polynomials on $\ra$}
Let us recall some notions on orthogonal matrix polynomials (OMP) which were used in~\zitas{CR13,MR3275438}. Let $P$ be a complex $p \times q$~matrix polynomial. For all $n\in\NO $, let
\[
Y^{[P]}_{n}
\defg
\begin{bmatrix}
A_0\\
A_1\\
\vdots\\
A_n
\end{bmatrix},
\]
where $(A_j)_{j=0}^\infi$ is the unique sequence of complex $p\times q$~matrices such that for all $z\in\C$ the polynomial $P$ admits the representation $P(z)=\sum_{j=0}^\infi z^jA_j$. Furthermore, we denote by $\deg P\defg \sup\setaa{j\in\NO}{A_j\neq\Opq}$ the \noti{degree of $P$}{degree}. Observe that in the case $P(z)=\Opq $ for all $z\in\C$ we have thus $\deg P=-\infi$. If $k\defg \deg P\geq0$, we refer to $A_k$ as the \noti{leading coefficient of $P$}{leading coefficient}.
\begin{defn}\label{def3.2}
Let $\kappa\in\NOinf $, and let $\seqs{2\kappa}$ be a sequence of complex \tqqa{matrices}. A sequence $(P_k)_{k=0}^\kappa$ of complex \tqqa{matrix} polynomials is called a \noti{\tmrosmpa{$\seqs{2\kappa}$}}{monic right orthogonal system of matrix polynomials} if the following three conditions are fulfilled:
\bAeqi{0}
\il{MLOS.I} $\deg P_k=k$ for all $k\in \mn{0}{\kappa}$.
\il{MLOS.II} $P_k$ has the leading coefficient $\Iq$ for all $k\in\mn{0}{\kappa}$.
\il{MLOS.III} $\rk{Y_n^{[P_j]}}^\ad\Hu{n}Y_n^{[P_k]}= \Oqq$ for all $j,k\in\mn{0}{\kappa}$ with $j\neq k$, where $n\defg \max\set{j,k}$.
\eAeqi
\end{defn}
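In the scalar case \(q=1\), the third condition in Definition~\ref{def3.2} can be checked with exact rational arithmetic. The following Python sketch is our own illustration (not from the text): it uses the moments \(s_j=1/(j+1)\) of the Lebesgue measure on \([0,1]\), whose monic orthogonal polynomials are the monic shifted Legendre polynomials, and verifies that the off-diagonal pairings against the Hankel matrix vanish.

```python
from fractions import Fraction as F

# Moments of Lebesgue measure on [0, 1]: s_j = 1/(j+1).
s = [F(1, j + 1) for j in range(5)]

# Monic shifted-Legendre polynomials P_0, P_1, P_2 as coefficient vectors
# (constant term first), i.e. the columns Y_n^{[P_k]} padded as needed.
P = [[F(1)], [F(-1, 2), F(1)], [F(1, 6), F(-1), F(1)]]

def pairing(j, k):
    """Compute (Y_n^{[P_j]})^* H_n Y_n^{[P_k]} with n = max(j, k), H_n the Hankel matrix of s."""
    n = max(j, k)
    Yj = P[j] + [F(0)] * (n + 1 - len(P[j]))   # pad coefficient vectors to length n+1
    Yk = P[k] + [F(0)] * (n + 1 - len(P[k]))
    return sum(Yj[a] * s[a + b] * Yk[b] for a in range(n + 1) for b in range(n + 1))

# Condition (iii): off-diagonal pairings vanish exactly.
assert all(pairing(j, k) == 0 for j in range(3) for k in range(3) if j != k)
# Diagonal pairings are positive, reflecting positive definiteness of H_n.
assert all(pairing(k, k) > 0 for k in range(3))
```

Since \((H_n)_{ab}=s_{a+b}=\int_0^1 t^{a+b}\,dt\), the pairing computed above equals \(\int_0^1 P_j(t)P_k(t)\,dt\), which is the orthogonality stated in the subsequent remark.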
\bremnl{cf.~\zitaa{MR3275438}{\cremp{3.6}{1652}}}{R2312}
Let $\kappa\in \NOinf$ and let $\seqs{2\kappa}$ be a sequence of complex \tqqa{matrices} such that the block Hankel matrix \(\Hu{n}\) is \tpH{} for all \(n\in\mn{0}{\kappa}\). Denote by $(P_k)_{k=0}^\kappa$ the \tmrosmpa{$\seqs{2\kappa}$}. Let $\sigma$ be a \tnnH{} \tqqa{measure} on a non-empty Borel subset $\Omega$ of \(\R\) satisfying $s_j=\int_\Omega t^j\sigma(\dif t)$ for all \(j\in\mn{0}{2\kappa}\). Then
\[
\int_\Omega\ek*{P_j(t)}^\ad\sigma(\dif t)\ek*{P_k(t)}
=
\begin{cases}
\Oqq\incase{j\neq k}\\
\Lu{n}\incase{j=k}
\end{cases}
\]
for all $j,k\in\mn{0}{\kappa}$.
\end{rem}
\noindent
\begin{minipage}{240pt}
A.~E.~Choque-Rivero\\
Instituto de F\'isica y Matem\'aticas\\
Universidad Michoacana de San Nicol\'as de Hidalgo\\
Ciudad Universitaria\\
Morelia, Mich.\\
C.P.~58048\\
M\'exico\\
\texttt{[email protected]}
\end{minipage}
\hspace{10pt}
\begin{minipage}{160pt}
C.~M\"adler\\
Universit\"at Leipzig\\
Mathematisches Institut\\
Augustusplatz~10\\
04109~Leipzig\\
Germany\\
\texttt{[email protected]}
\end{minipage}
\end{document} |
\begin{document}
\title[Uniqueness]{On a general variational framework for existence and uniqueness in Differential Equations}
\author{Pablo Pedregal}
\date{}
\thanks{Supported by the Spanish Ministerio de Ciencia, Innovación y Universidades through project MTM2017-83740-P}
\begin{abstract}
Starting from the classic contraction mapping principle, we establish a general, flexible, variational setting that turns out to be applicable to many situations of existence in Differential Equations. We show its potentiality with some selected examples including initial-value, Cauchy problems for ODEs; non-linear, monotone PDEs; linear and non-linear hyperbolic problems; and steady Navier-Stokes systems.
\end{abstract}
\maketitle
\section{Introduction}
Possibly the most fundamental result yielding existence and uniqueness of solutions of an equation is the classic Banach contraction mapping principle.
\begin{theorem}\label{contraction}
Let $\mathbf T:\mathbb H\to\mathbb H$ be a mapping from a Banach space $\mathbb H$ into itself that is contractive in the sense
$$
\|\mathbf T\mathbf{x}-\mathbf T\mathbf{y}\|\le k\|\mathbf{x}-\mathbf{y}\|,\quad k\in[0, 1), \mathbf{x}, \mathbf{y}\in\mathbb H.
$$
Then $\mathbf T$ admits a unique fixed point $\overline\mathbf{x}\in\mathbb H$,
$$
\mathbf T\overline\mathbf{x}=\overline\mathbf{x}.
$$
\end{theorem}
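The constructive content of the theorem is the convergence of the iterates \(\mathbf{x}_{n+1}=\mathbf T\mathbf{x}_n\). A minimal numerical sketch in Python (with an illustrative map of our choosing, not from the text): \(T(x)=\cos x\) maps \([0,1]\) into itself and is contractive there with \(k=\sin 1<1\).

```python
import math

def banach_iterate(T, x0, tol=1e-12, max_iter=10_000):
    """Iterate x_{n+1} = T(x_n) until successive steps fall below tol."""
    x = x0
    for _ in range(max_iter):
        x_next = T(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x

# T(x) = cos(x) is contractive on [0, 1] with Lipschitz constant k = sin(1) < 1.
T = math.cos
fixed = banach_iterate(T, 0.5)

# The fixed-point equation T(x) = x holds up to the tolerance.
assert abs(T(fixed) - fixed) < 1e-9
```

The computed value is the unique solution of \(\cos x = x\), independent of the starting point \(x_0\in[0,1]\).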
The proof is well-known, elementary, and independent of dimension. The most fascinating issue is that this basic principle is at the heart of many uniqueness results in
Applied Analysis and Differential Equations. Our aim is to stress this fact from a variational stand-point. This means that we would like to rephrase the previous principle into a variational form that could be directly and flexibly used in many of the situations where uniqueness of solutions is known or expected. Our basic principle is the following.
\begin{proposition}\label{basica}
Let $E:\mathbb H\to\mathbb{R}^+$ be a non-negative, lower semi-continuous functional in a Banach space $\mathbb H$, such that
\begin{equation}\label{enhcoerf}
\|\mathbf{x}-\mathbf{y}\|\le C(E(\mathbf{x})+E(\mathbf{y})),\quad C>0, \mathbf{x}, \mathbf{y}\in\mathbb H.
\end{equation}
Suppose, in addition, that
\begin{equation}\label{infcero}
\inf_{\mathbf{z}\in\mathbb H}E(\mathbf{z})=0.
\end{equation}
Then there is a unique minimizer, i.e. a unique $\overline\mathbf{x}\in\mathbb H$ such that $E(\overline\mathbf{x})=0$, and
\begin{equation}\label{errorint}
\|\mathbf{x}-\overline\mathbf{x}\|\le CE(\mathbf{x}),\quad \mathbf{x}\in\mathbb H.
\end{equation}
\end{proposition}
The proof again is elementary, because every minimizing sequence $\{\mathbf{x}_j\}$ with $E(\mathbf{x}_j)\searrow0$ must be a Cauchy sequence in $\mathbb H$, according to \eqref{enhcoerf}, and so it converges to some $\overline\mathbf{x}\in\mathbb H$. The lower semicontinuity implies that
$$
0\le E(\overline\mathbf{x})\le\liminf_{j\to\infty}E(\mathbf{x}_j)=0,
$$
and $\overline\mathbf{x}$ is a minimizer. Condition \eqref{enhcoerf} implies automatically that such a minimizer is unique, and leads to \eqref{errorint}.
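This argument can be traced on a toy example in \(\mathbb H=\mathbb{R}\) (our illustration, not from the text): for \(E(x)=|x-2|\), the triangle inequality yields \eqref{enhcoerf} with \(C=1\), so every minimizing sequence is Cauchy and the estimate \eqref{errorint} holds.

```python
# Toy instance on H = R: E(x) = |x - 2| is non-negative and lower
# semi-continuous, and the triangle inequality |x - y| <= E(x) + E(y)
# is exactly the enhanced coercivity condition with C = 1.
E = lambda x: abs(x - 2.0)
C = 1.0

# Any minimizing sequence E(x_j) -> 0 is automatically Cauchy:
xs = [2.0 + (-1.0) ** j / (j + 1.0) for j in range(1, 200)]
for i, xi in enumerate(xs):
    for xj in xs[i:]:
        assert abs(xi - xj) <= C * (E(xi) + E(xj)) + 1e-12

# The unique minimizer is x = 2, and |x - 2| <= C * E(x) for every x.
assert E(2.0) == 0.0
assert all(abs(x - 2.0) <= C * E(x) + 1e-12 for x in [-5.0, 0.0, 3.7])
```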
Condition \eqref{errorint} is a very clear statement that the functional $E$ in Proposition \ref{basica} is a measure of how far we are from $\overline\mathbf{x}$, the unique point where $E$ vanishes. Indeed, this consequence already points in the direction in which to look for functionals $E$ in specific situations: they should be set up as a way to measure departure from the solutions sought. This will be taken as a guiding principle in concrete examples. The usual least-squares method (see \cite{bogunz}, \cite{glowinski}, for example), suitably adapted to each situation, stands as a main, natural possibility for $E$.
It is not surprising that Proposition \ref{basica} is more general than Theorem \ref{contraction}, in the sense that the latter is a consequence of the former by considering the natural functional
\begin{equation}\label{error}
E(\mathbf{x})=\|\mathbf T\mathbf{x}-\mathbf{x}\|.
\end{equation}
Indeed, for an arbitrary pair $\mathbf{x}, \mathbf{y}\in\mathbb H$,
$$
\|\mathbf{x}-\mathbf{y}\|\le\|\mathbf{x}-\mathbf T\mathbf{x}\|+\|\mathbf T\mathbf{x}-\mathbf T\mathbf{y}\|+\|\mathbf T\mathbf{y}-\mathbf{y}\|,
$$
and
$$
\|\mathbf{x}-\mathbf{y}\|\le E(\mathbf{x})+E(\mathbf{y})+k\|\mathbf{x}-\mathbf{y}\|.
$$
From here, we immediately find \eqref{enhcoerf}
$$
\|\mathbf{x}-\mathbf{y}\|\le \frac1{1-k}(E(\mathbf{x})+E(\mathbf{y})).
$$
Along every sequence of iterates, we have \eqref{infcero} if $\mathbf T$ is contractive. Of course, minimizers for $E$ in \eqref{error} are exactly fixed points for $\mathbf T$.
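In one dimension this constant can be checked numerically. The following Python sketch (a toy example of ours, not from the text) uses the affine contraction \(T(x)=x/2+1\) with \(k=1/2\), fixed point \(\overline{x}=2\), and \(C=1/(1-k)=2\), and verifies the estimate \(\|x-\overline{x}\|\le C\,E(x)\) at sampled points.

```python
# Affine contraction T(x) = x/2 + 1 on the real line:
# Lipschitz constant k = 1/2, fixed point 2, so C = 1/(1-k) = 2.
k = 0.5
T = lambda x: k * x + 1.0
x_bar = 2.0
C = 1.0 / (1.0 - k)

def E(x):
    """Error functional E(x) = |T(x) - x|: distance from being a fixed point."""
    return abs(T(x) - x)

# The bound |x - x_bar| <= C * E(x) holds at sampled points
# (here it is in fact an equality).
for x in [-10.0, -1.0, 0.0, 1.9, 2.0, 2.1, 50.0]:
    assert abs(x - x_bar) <= C * E(x) + 1e-12
```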
Our objective is to argue that the basic variational principle in Proposition \ref{basica} is quite flexible, and can be implemented in many of the situations in Differential Equations where uniqueness of solutions is known.
There are two main requisites in Proposition \ref{basica}. The first one, \eqref{enhcoerf}, has to be shown directly in each particular scenario where uniqueness is sought. Note that it is some kind of enhanced coercivity and, as such, stronger than plain coercivity.
Concerning \eqref{infcero}, there is, however, a general strategy based on smoothness that can be applied to most of the interesting situations in practice. For the sake of simplicity, we restrict attention to a Hilbert space situation, and regard $\mathbb H$ as a Hilbert space henceforth.
If a non-negative functional $E:\mathbb H\to\mathbb{R}^+$ is $\mathcal C^1$, then
$$
\inf_{\mathbf{x}\in\mathbb H}\|E'(\mathbf{x})\|=0.
$$
Therefore, it suffices to demand that
$$
\lim_{E'(\mathbf{x})\to\mathbf 0}E(\mathbf{x})=0
$$
to enforce \eqref{infcero}. Proposition \ref{basica} becomes then:
\begin{proposition}\label{basicad}
Let $E:\mathbb H\to\mathbb{R}^+$ be a non-negative, $\mathcal C^1$-functional in a Hilbert space $\mathbb H$, such that
\begin{equation}\label{enhcoers}
\|\mathbf{x}-\mathbf{y}\|\le C(E(\mathbf{x})+E(\mathbf{y})),\quad C>0, \mathbf{x}, \mathbf{y}\in\mathbb H.
\end{equation}
Suppose, in addition, that
\begin{equation}\label{infceros}
\lim_{E'(\mathbf{x})\to\mathbf 0}E(\mathbf{x})=0.
\end{equation}
Then there is a unique $\overline\mathbf{x}\in\mathbb H$ such that $E(\overline\mathbf{x})=0$, and
$$
\|\mathbf{x}-\overline\mathbf{x}\|\le CE(\mathbf{x})
$$
for every $\mathbf{x}\in\mathbb H$.
\end{proposition}
Though the following is a simple observation, it is worth noting explicitly.
\begin{proposition}
Under the same conditions as in Proposition \ref{basicad}, the functional $E$ enjoys the Palais-Smale condition.
\end{proposition}
We remind readers that the fundamental Palais-Smale condition reads:
\begin{quote}
If the sequence $\{\mathbf{x}_j\}$ is bounded in $\mathbb H$, and $E'(\mathbf{x}_j)\to\mathbf 0$ in $\mathbb H$, then, at least for some subsequence, $\{\mathbf{x}_j\}$ converges in $\mathbb H$.
\end{quote}
Again, the proof is not difficult to guess. Condition \eqref{infceros} informs us that Palais-Smale sequences ($\{\mathbf{x}_j\}$, bounded and $E'(\mathbf{x}_j)\to\mathbf 0$) are always minimizing sequences for $E$ ($E(\mathbf{x}_j)\to0$), while the estimate \eqref{enhcoers} ensures that the full sequence is a Cauchy sequence in $\mathbb H$. Notice, however, that, due to \eqref{infceros}, $0$ is the only possible critical value of $E$, and so critical points become automatically global minimizers regardless of convexity considerations.
In view of the relevance of conditions \eqref{enhcoers} and \eqref{infceros}, we adopt the following definition in which we introduce some simple, helpful changes to broaden its applicability. We also change the notation to stress that vectors in $\mathbb H$ will be functions for us.
\begin{definition}\label{errorgeneral}
A non-negative, $\mathcal C^1$-functional
$$
E(\mathbf{u}):\mathbb H\to\mathbb{R}^+
$$
defined over a Hilbert space $\mathbb H$ is called an error functional if
\begin{enumerate}
\item behavior as $E'\to\mathbf 0$:
\begin{equation}\label{comporinff}
\lim_{E'(\mathbf{u})\to\mathbf 0}E(\mathbf{u})=0
\end{equation}
over bounded subsets of $\mathbb H$; and
\item enhanced coercivity: there is a positive constant $C$, such that for every pair $\mathbf{u}, \mathbf{v}\in\mathbb H$
we have
\begin{equation}\label{enhcoer}
\|\mathbf{u}-\mathbf{v}\|^2\le C(E(\mathbf{u})+E(\mathbf{v})).
\end{equation}
\end{enumerate}
\end{definition}
Our basic result Proposition \ref{basicad} remains the same.
\begin{proposition}\label{basicas}
Let $E:\mathbb H\to\mathbb{R}^+$ be an error functional according to Definition \ref{errorgeneral}. Then there is a unique minimizer $\mathbf{u}_\infty\in\mathbb H$ such that $E(\mathbf{u}_\infty)=0$, and
\begin{equation}\label{medidaerror}
\|\mathbf{u}-\mathbf{u}_\infty\|^2\le CE(\mathbf{u}),
\end{equation}
for every $\mathbf{u}\in\mathbb H$.
\end{proposition}
It is usually said that the contraction mapping principle Theorem \ref{contraction}, though quite helpful in ODEs, is almost inoperative for PDEs. We will attempt to convince readers that, on the contrary, Proposition \ref{basicad} is equally helpful for ODEs and PDEs. To this end, we will examine several selected examples as a sample of the potentiality of these ideas. Specifically, we will look at the following situations, though none of our existence results is new at this stage:
\begin{enumerate}
\item Cauchy, initial-value problems for ODEs;
\item linear hyperbolic examples;
\item non-linear, monotone PDEs;
\item non-linear wave models;
\item steady Navier-Stokes system.
\end{enumerate}
We systematically will have to check the two basic properties \eqref{enhcoer} and \eqref{comporinff} in each situation treated.
We can dispense with condition \eqref{comporinff} and replace it by \eqref{infcero} if more general results not requiring smoothness are sought. On the other hand, in many regular situations linearization may lead in a systematic way to the following.
\begin{proposition}\label{principalbis}
Let
$$
E(\mathbf{u}):\mathbb H\to\mathbb{R}^+
$$
be a $\mathcal C^1$-functional verifying the enhanced coercivity condition \eqref{enhcoer}.
Suppose there is
$\mathbf T:\mathbb H\to\mathbb H$,
a locally Lipschitz operator, such that
\begin{equation}\label{propoper}
\langle E'(\mathbf{u}), \mathbf T\mathbf{u}\rangle=-dE(\mathbf{u})
\end{equation}
for every $\mathbf{u}\in\mathbb H$, and some constant $d>0$. Then $E$ is an error functional (according to Definition \ref{errorgeneral}), and, consequently, there is a unique global minimizer $\mathbf{u}_\infty$ with $E(\mathbf{u}_\infty)=0$, and \eqref{medidaerror} holds
\begin{equation}\label{medidaerrorbis}
\|\mathbf{u}-\mathbf{u}_\infty\|^2\le CE(\mathbf{u}),
\end{equation}
for some constant $C$, and every $\mathbf{u}\in\mathbb H$.
\end{proposition}
Note how condition \eqref{propoper} leads immediately to \eqref{comporinff}.
In this contribution, we will assume smoothness in all of our examples.
Typically our Hilbert spaces $\mathbb H$ will be the usual Sobolev spaces in different situations, so standard facts about these spaces will be taken for granted. In particular, the following Hilbert spaces will play a basic role for us in those various situations mentioned above:
$$
H^1(0, T; \mathbb{R}^N),\quad H^1_0(\mathbb{R}^N_+),\quad H^1(\mathbb{R}^N_+),\quad H^1_0(\Omega), \quad H^1_0(\Omega; \mathbb{R}^N),
$$
for a domain $\Omega\subset\mathbb{R}^N$ as regular as we may need it to be.
If one is interested in numerical or practical approximation of solutions $\mathbf{u}_\infty$, note how \eqref{medidaerrorbis} is a clear invitation to seek approximations to $\mathbf{u}_\infty$ by minimizing $E(\mathbf{u})$. The standard way to take a functional to its minimum value is to use a steepest descent algorithm or some suitable variant of it. It is true that such a procedure is designed, in fact, to drive the derivative $E'(\mathbf{u})$ to zero; but condition \eqref{comporinff} is precisely what guarantees that in doing so we are also converging to $\mathbf{u}_\infty$. We are not pursuing this direction here, though it has been implemented in some scenarios (\cite{munped2}, \cite{pedregal3}).
Definition \ref{errorgeneral} is global. A local parallel concept may turn out to be necessary in some situations. We will show this in our final example dealing with the steady Navier-Stokes system. The application to parabolic problems, though still feasible, is, in general, more delicate.
\section{Cauchy problems for ODEs}
As a preliminary step, we start testing our ideas with a typical initial-value, Cauchy problem for the non-linear system
\begin{equation}\label{ode}
\mathbf{x}'(t)=\mathbf{f}(\mathbf{x}(t))\hbox{ in }(0, +\infty),\quad \mathbf{x}(0)=\mathbf{x}_0
\end{equation}
where the map
$$
\mathbf{f}(\mathbf{y}):\mathbb{R}^N\to\mathbb{R}^N
$$
is smooth and globally Lipschitz, and $\mathbf{x}_0\in\mathbb{R}^N$. Under these circumstances, it is well-known that \eqref{ode} possesses a unique solution.
Let us pretend not to know anything about problem \eqref{ode}, and see if our formalism could be applied in this initial situation to prove the following classical theorem.
\begin{theorem}
If the mapping $\mathbf{f}(\mathbf{y})$ is globally Lipschitz, there is a unique absolutely continuous solution
$$
\mathbf{x}(t):[0, \infty)\to\mathbb{R}^N
$$
for \eqref{ode}.
\end{theorem}
According to our previous discussion, we need a functional $E:\mathbb H\to\mathbb{R}^+$ defined on an appropriate Hilbert space $\mathbb H$ complying with the necessary properties.
For a fixed, but otherwise arbitrary, positive time $T$, we will take
\begin{gather}
\mathbb H=\{\mathbf{z}(t):[0, T]\to\mathbb{R}^N: \mathbf{z}\in H^1(0, T; \mathbb{R}^N), \mathbf{z}(0)=\mathbf 0\},\nonumber\\
E(\mathbf{z})=\frac12\int_0^T|\mathbf{z}'(s)-\mathbf{f}(\mathbf{x}_0+\mathbf{z}(s))|^2\,ds\label{erroredo}.
\end{gather}
$\mathbb H$ is a subspace of the standard Sobolev space $H^1(0, T; \mathbb{R}^N)$, under the norm (recall that $\mathbf{z}(0)=\mathbf 0$ for paths in $\mathbb H$)
$$
\|\mathbf{z}\|^2=\int_0^T|\mathbf{z}'(s)|^2\,ds.
$$
Note that paths $\mathbf{z}\in\mathbb H$ are absolutely continuous, and hence $E(\mathbf{z})$ is well-defined over $\mathbb H$.
We first focus on \eqref{enhcoer}.
\begin{lemma}
For paths $\mathbf{y}$, $\mathbf{z}$ in $\mathbb H$, we have
$$
\|\mathbf{y}-\mathbf{z}\|^2\le C(E(\mathbf{y})+E(\mathbf{z})),\quad C>0.
$$
\end{lemma}
\begin{proof}
The proof is, in fact, quite elementary. With a slight abuse of notation, we argue with the shifted trajectories, still denoted $\mathbf{y}$ and $\mathbf{z}$, so that
$$
\mathbf{z}(0)=\mathbf{y}(0)=\mathbf{x}_0,
$$
and $\mathbf{y}-\mathbf{z}\in\mathbb H$. Then
\begin{align}
\mathbf{y}(t)-\mathbf{z}(t)=&\int_0^t (\mathbf{y}'(s)-\mathbf{z}'(s))\,ds\nonumber\\
=&\int_0^t (\mathbf{y}'(s)-\mathbf{f}(\mathbf{y}(s)))\,ds+\int_0^t (\mathbf{f}(\mathbf{y}(s))-\mathbf{f}(\mathbf{z}(s)))\,ds\nonumber\\
&+\int_0^t (\mathbf{f}(\mathbf{z}(s))-\mathbf{z}'(s))\,ds.\nonumber
\end{align}
From here, we immediately find
$$
|\mathbf{y}(t)-\mathbf{z}(t)|^2\le C(E(\mathbf{y})+E(\mathbf{z}))+CM^2\int_0^t|\mathbf{y}(s)-\mathbf{z}(s)|^2\,ds,
$$
where $M$ is the Lipschitz constant of the map $\mathbf{f}$, and $C$ is a generic, universal constant whose value we allow to change from one occurrence to the next. Gronwall's lemma then yields
\begin{equation}\label{dercero}
|\mathbf{y}(t)-\mathbf{z}(t)|^2\le C(E(\mathbf{y})+E(\mathbf{z}))e^{CM^2T}
\end{equation}
for all $t\in[0, T]$. This means
\begin{gather}
\|\mathbf{y}-\mathbf{z}\|_{L^\infty(0, T; \mathbb{R}^N)}\le e^{CM^2T/2}\sqrt{C(E(\mathbf{y})+E(\mathbf{z}))},\nonumber\\
\|\mathbf{y}-\mathbf{z}\|_{L^2(0, T; \mathbb{R}^N)}^2\le CTe^{CM^2T}(E(\mathbf{y})+E(\mathbf{z})).\label{cota}
\end{gather}
With this information at hand, the above decomposition allows us to write, in a similar manner,
$$
\int_0^t |\mathbf{y}'(s)-\mathbf{z}'(s)|^2\,ds\le C(E(\mathbf{y})+E(\mathbf{z}))+CM^2\int_0^t|\mathbf{y}(s)-\mathbf{z}(s)|^2\,ds
$$
and
$$
\|\mathbf{y}'-\mathbf{z}'\|^2_{L^2(0, T; \mathbb{R}^N)}\le C(E(\mathbf{y})+E(\mathbf{z}))+CM^2\|\mathbf{y}-\mathbf{z}\|^2_{L^2(0, T; \mathbb{R}^N)},
$$
and thus, taking into account \eqref{cota},
$$
\|\mathbf{y}'-\mathbf{z}'\|^2_{L^2(0, T; \mathbb{R}^N)}\le (C+C^2M^2Te^{CM^2T})(E(\mathbf{y})+E(\mathbf{z})).
$$
Our estimate \eqref{enhcoer} is then a consequence of the fact that the norm in $\mathbb H$ can be taken to be the $L^2$-norm of the derivative.
\end{proof}
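The coercivity estimate just proved can be illustrated numerically. The following minimal sketch (the data $f=\sin$, $x_0=0.3$, $T=1$, and the two sample paths are assumptions chosen purely for illustration) discretizes $E$ by trapezoidal quadrature and checks that the ratio $\|\mathbf{y}-\mathbf{z}\|^2/(E(\mathbf{y})+E(\mathbf{z}))$ remains bounded, as the lemma predicts:

```python
import numpy as np

# Illustrative data (assumptions): f = sin is globally Lipschitz with M = 1
T, n = 1.0, 2000
h = T / n
t = np.linspace(0.0, T, n + 1)
x0 = 0.3
f = np.sin

# Two explicit sample paths in H (both vanish at t = 0), with derivatives
y, dy = np.sin(t), np.cos(t)
z, dz = t * (1.0 - t), 1.0 - 2.0 * t

def trap(v):
    # trapezoidal quadrature on the uniform grid
    return 0.5 * h * np.sum(v[1:] + v[:-1])

def E(path, dpath):
    # discretized error functional (1/2) int |z' - f(x0 + z)|^2 dt
    r = dpath - f(x0 + path)
    return 0.5 * trap(r ** 2)

dist2 = trap((dy - dz) ** 2)          # squared norm of y - z in H
ratio = dist2 / (E(y, dy) + E(z, dz))
print(ratio)                          # stays bounded, as the lemma asserts
```

The bound obtained in the proof grows like $e^{CM^2T}$, so for moderate $T$ and $M$ the observed ratio is small.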
The second basic ingredient is \eqref{comporinff}. We will be using Proposition \ref{principalbis}. We assume further that the mapping $\mathbf{f}$ is smooth, with a uniformly bounded derivative so as to guarantee the global Lipschitz condition.
For the operator $\mathbf T$, we will put $\mathbf{Z}=\mathbf T\mathbf{z}$ for $\mathbf{z}\in\mathbb H$, and linearize \eqref{ode} at the path $\mathbf{x}_0+\mathbf{z}(t)$ to write
\begin{gather}
\mathbf{Z}'(t)=\mathbf{f}(\mathbf{x}_0+\mathbf{z}(t))+\nabla\mathbf{f}(\mathbf{x}_0+\mathbf{z}(t))\mathbf{Z}(t)-\mathbf{z}'(t)\hbox{ in }[0, T],\label{linealizacion}\\
\mathbf{Z}(0)=\mathbf 0.\nonumber
\end{gather}
This is a linear differential system for $\mathbf{Z}$, with non-constant coefficients depending on $\mathbf{z}$.
Under smoothness assumptions, which we take for granted, such an operator $\mathbf T$ is locally Lipschitz because the image $\mathbf{Z}=\mathbf T\mathbf{z}$ is defined through a linear initial-value, Cauchy problem with coefficients depending continuously on $\mathbf{z}$.
The important property to be checked, concerning $\mathbf T$, is \eqref{propoper}. It is elementary to see, under smoothness assumptions which, as indicated, we have taken for granted, that
$$
\langle E'(\mathbf{z}), \mathbf{Z}\rangle=
\int_0^T(\mathbf{z}'(s)-\mathbf{f}(\mathbf{x}_0+\mathbf{z}(s)))(\mathbf{Z}'(s)-\nabla\mathbf{f}(\mathbf{x}_0+\mathbf{z}(s))\mathbf{Z}(s))\,ds.
$$
Hence, for $\mathbf{Z}=\mathbf T\mathbf{z}$ coming from \eqref{linealizacion}, we immediately deduce that
$$
\langle E'(\mathbf{z}), \mathbf{Z}\rangle=-2E(\mathbf{z}).
$$
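It may be worth noting that, with a consistent forward-difference discretization of \eqref{erroredo} and \eqref{linealizacion}, this identity is preserved at the discrete level. The following sketch (the data $f=\sin$, $x_0=0.3$, and the random path are illustrative assumptions) checks $\langle E'(\mathbf{z}), \mathbf T\mathbf{z}\rangle=-2E(\mathbf{z})$ by a central finite difference along $\mathbf{Z}=\mathbf T\mathbf{z}$:

```python
import numpy as np

# Illustrative data (assumptions): f = sin, x0 = 0.3, random H^1 path
T, n = 1.0, 50
h = T / n
x0 = 0.3
f, fp = np.sin, np.cos

rng = np.random.default_rng(0)
z = np.concatenate(([0.0], np.cumsum(rng.normal(size=n) * h)))  # z(0) = 0

def E(z):
    # discretized error functional (erroredo)
    r = (z[1:] - z[:-1]) / h - f(x0 + z[:-1])
    return 0.5 * h * np.dot(r, r)

# Z = Tz: discrete solution of Z' = f(x0+z) + f'(x0+z) Z - z', Z(0) = 0
Z = np.zeros(n + 1)
for i in range(n):
    r_i = (z[i + 1] - z[i]) / h - f(x0 + z[i])
    Z[i + 1] = Z[i] + h * (-r_i + fp(x0 + z[i]) * Z[i])

# directional derivative of E along Z by central differences
eps = 1e-6
dd = (E(z + eps * Z) - E(z - eps * Z)) / (2 * eps)
print(dd, -2 * E(z))      # the two values should agree closely
```
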
We are then entitled to apply Proposition \ref{principalbis} to conclude that the functional $E$ in
\eqref{erroredo} is an error functional in the sense of Definition \ref{errorgeneral}, and hence to utilize Proposition \ref{basicas} to conclude the following.
\begin{theorem}
If the mapping $\mathbf{f}(\mathbf{y}):\mathbb{R}^N\to\mathbb{R}^N$ is of class $\mathcal C^1$ with a globally bounded gradient, then, for arbitrary $\mathbf{x}_0\in\mathbb{R}^N$ and $T>0$, problem \eqref{ode} admits a unique $\mathcal C^1$-solution
$$
\overline{\mathbf{x}}(t):[0, T]\to\mathbb{R}^N,
$$
and there is a positive constant $C$ such that
$$
\|\mathbf{x}-\overline{\mathbf{x}}\|_{\mathbb H}^2\le C\int_0^T|\mathbf{x}'(s)-\mathbf{f}(\mathbf{x}_0+\mathbf{x}(s))|^2\,ds
$$
for every $\mathbf{x}\in\mathbb H$.
\end{theorem}
There is no difficulty in showing a local version of this result by using the same ideas.
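As mentioned earlier, \eqref{medidaerrorbis} invites approximating the solution by minimizing $E$ itself. A minimal steepest descent sketch for the discretized functional \eqref{erroredo} (the model $f(x)=-x$, $x_0=1$, and all discretization parameters are illustrative assumptions) recovers the solution of \eqref{ode}:

```python
import numpy as np

# Illustrative model (assumptions): x' = -x, x(0) = 1, exact solution e^{-t}
T, n = 1.0, 20
h = T / n
x0 = 1.0
f  = lambda x: -x            # right-hand side of the ODE
fp = lambda x: -1.0          # its derivative

z = np.zeros(n + 1)          # grid values of the path; z[0] = 0 enforced
for _ in range(50000):
    # residual of the discretized ODE on each subinterval
    r = (z[1:] - z[:-1]) / h - f(x0 + z[:-1])
    # gradient of the discretized E with respect to z_1, ..., z_n
    g = r.copy()
    g[:-1] -= (1.0 + h * fp(x0 + z[1:-1])) * r[1:]
    z[1:] -= 0.02 * g        # descent step; z[0] = 0 stays untouched

r = (z[1:] - z[:-1]) / h - f(x0 + z[:-1])
E = 0.5 * h * np.dot(r, r)
print(E, abs(x0 + z[-1] - np.exp(-T)))   # E near zero; endpoint near e^{-1}
```

Driving the discretized $E$ to zero recovers, in this least-squares sense, the forward Euler approximation of the trajectory $\mathbf{x}_0+\mathbf{z}$.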
\section{Linear hyperbolic example}
Since readers will most likely not be used to thinking about hyperbolic problems in these terms, we will treat the most transparent example of a linear, hyperbolic problem from this perspective, and later apply the method to a non-linear wave equation.
We seek a (weak) solution $u(t, \mathbf{x})$, in a sense to be made precise, of the problem
\begin{gather}
u_{tt}(t, \mathbf{x})-\Delta u(t, \mathbf{x})-u(t, \mathbf{x})=f(t, \mathbf{x})\hbox{ in }\mathbb{R}^N_+,\label{ondas}\\ u(0, \mathbf{x})=0, u_t(0, \mathbf{x})=0\hbox{ on }t=0,\nonumber
\end{gather}
for $f\in L^2(\mathbb{R}^N_+)$. Here $\mathbb{R}^N_+$ is the upper half-space $[0, +\infty)\times\mathbb{R}^N$. We look for
$$
u(t, \mathbf{x})\in H^1_0(\mathbb{R}^N_+)
$$
(jointly in time and space) such that
\begin{gather}
\int_{\mathbb{R}^N_+}[u_t(t, \mathbf{x})w_t(t, \mathbf{x})-\nabla u(t, \mathbf{x})\cdot\nabla w(t, \mathbf{x})\label{formadeb}\\
+(f(t, \mathbf{x})+u(t, \mathbf{x}))w(t, \mathbf{x})]\,d\mathbf{x}\,dt=0\nonumber
\end{gather}
for every test function
$$
w(t, \mathbf{x})\in H^1(\mathbb{R}^N_+).
$$
Note how the arbitrary values of the test function $w$ for $t=0$ impose the vanishing initial velocity $u_t(0, \mathbf{x})=0$.
To set up a suitable error functional
$$
E(u): H^1_0(\mathbb{R}^N_+)\to\mathbb{R}^+
$$
for every $u(t, \mathbf{x})$,
and not just for the solution we seek, we utilize a natural least-squares concept as indicated in the Introduction. Define an appropriate defect or residual function
$$
U(t, \mathbf{x})\in H^1(\mathbb{R}^N_+),
$$
for each such $u\in H^1_0(\mathbb{R}^N_+)$, as the unique variational solution of
\begin{equation}\label{debilondas}
\int_{\mathbb{R}^N_+}[(u_t+U_t)w_t-(\nabla u-\nabla U)\cdot\nabla w+(f+u+U) w]\,d\mathbf{x}\,dt=0
\end{equation}
valid for every $w\in H^1(\mathbb{R}^N_+)$. This function $U$ is indeed the unique minimizer over $H^1(\mathbb{R}^N_+)$ of the strictly convex, quadratic functional
$$
I(w)=\int_{\mathbb{R}^N_+}\left(\frac12[(w_t+u_t)^2+|\nabla w-\nabla u|^2+(u+w)^2]+fw\right)\,d\mathbf{x}\,dt
$$
for each fixed $u$.
The size of $U$ is regarded as a measure of the departure of $u$ from being a solution of our problem:
\begin{gather}
E:H^1_0(\mathbb{R}^N_+)\to\mathbb{R}^+,\nonumber\\
E(u)=\int_{\mathbb{R}^N_+}\frac12[U^2_t(t, \mathbf{x})+|\nabla U(t, \mathbf{x})|^2+U^2(t, \mathbf{x})]\,d\mathbf{x}\,dt.\label{errorhyp}
\end{gather}
We can also put, in a short form,
\begin{equation}\label{sencillo}
E(u)=\frac12\|U\|_{H^1(\mathbb{R}^N_+)}^2;
\end{equation}
or even
$$
E(u)=\frac12\|u_{tt}-\Delta u-u-f\|^2_{H^{-1}(\mathbb{R}^N_+)},
$$
though we will stick to \eqref{sencillo} to better manipulate $E$.
We would like to apply Proposition \ref{basicas} in this situation, and hence, we set to ourselves the task of checking the two main assumptions in Definition \ref{errorgeneral}.
Our functional $E$ is definitely smooth and non-negative to begin with.
It is not surprising that in order to work with the wave equation the following two linear operators
\begin{gather}
\mathbb S:H^1(\mathbb{R}^N_+)\to H^1(\mathbb{R}^N_+),\quad \mathbb S w(t, \mathbf{x})=w(t, -\mathbf{x}),\nonumber\\
\mathcal S: H^1(\mathbb{R}^N_+)\to H^1(\mathbb{R}^N_+)^*,\nonumber\\
\mathcal S u(t, \mathbf{x})=(u(t, -\mathbf{x}), u_t(t, -\mathbf{x}), \nabla u(t, -\mathbf{x})),\nonumber
\end{gather}
will play a role. $H^1(\mathbb{R}^N_+)^*$ is here the dual space of $H^1(\mathbb{R}^N_+)$, not to be confused with $H^{-1}(\mathbb{R}^N_+)$.
Put $\mathbb H=\mathcal S(H^1_0(\mathbb{R}^N_+))$. The following fact is elementary; see for instance \cite{brezis}.
\begin{lemma}\label{hiperbolico}
\begin{enumerate}
\item The map $\mathbb S$ is an isometry.
\item $\mathbb H$ is a closed subspace of $H^1(\mathbb{R}^N_+)^*$, and
$$
\mathcal S:H^1_0(\mathbb{R}^N_+)\to\mathbb H
$$
is a bijective, continuous mapping. In fact, we clearly have
\begin{equation}\label{coer}
\|u\|_{H^1_0(\mathbb{R}^N_+)}\le \|\mathcal S u\|_{H^1(\mathbb{R}^N_+)^*}.
\end{equation}
\end{enumerate}
\end{lemma}
We can now proceed to prove inequality \eqref{enhcoer} in this new context.
\begin{proposition}\label{enhcoerhyp}
There is a constant $K>0$ such that
$$
\|u-v\|_{H^1_0(\mathbb{R}^N_+)}^2\le K(E(u)+E(v)),
$$
for every pair $u, v\in H^1_0(\mathbb{R}^N_+)$.
\end{proposition}
\begin{proof}
Let $U, V\in H^1(\mathbb{R}^N_+)$ be the respective residual functions associated with $u$ and $v$. Because we are in a linear situation, if we replace
$$
u-v\mapsto u,\quad U-V\mapsto U,
$$
we would have
\begin{equation}\label{debilcero}
\int_{\mathbb{R}^N_+}[(u_t+U_t)w_t-(\nabla u-\nabla U)\cdot\nabla w+(u+U) w]\,d\mathbf{x}\,dt=0,
\end{equation}
for every $w\in H^1(\mathbb{R}^N_+)$.
If we use $\mathbb S w$ in \eqref{debilcero} instead of $w$, we immediately find
\begin{gather}
\int_{\mathbb{R}^N_+}[(u_t+U_t)w_t(t, -\mathbf{x})+(\nabla u-\nabla U)\cdot\nabla w(t, -\mathbf{x})\nonumber\\
+(u+U)w(t, -\mathbf{x})]\,d\mathbf{x}\,dt=0.\nonumber
\end{gather}
The terms involving $U$ can be written in compact form as
$$
\langle U, \mathbb S w\rangle_{\langle H^1(\mathbb{R}^N_+), H^1(\mathbb{R}^N_+)\rangle}
$$
while a natural change of variables in the terms involving $u$ leads to writing these in the form
$$
\langle w, \mathcal S u\rangle_{\langle H^1(\mathbb{R}^N_+), H^1(\mathbb{R}^N_+)^*\rangle}.
$$
Hence, for every $w\in H^1(\mathbb{R}^N_+)$, we find
$$
\langle U, \mathbb S w\rangle_{\langle H^1(\mathbb{R}^N_+), H^1(\mathbb{R}^N_+)\rangle}+
\langle w, \mathcal S u\rangle_{\langle H^1(\mathbb{R}^N_+), H^1(\mathbb{R}^N_+)^*\rangle}=0.
$$
Bearing in mind this identity, we have, through Lemma \ref{hiperbolico},
\begin{align}
\|u\|_{H^1_0(\mathbb{R}^N_+)}\le&\|\mathcal S u\|_{H^1(\mathbb{R}^N_+)^*}\nonumber\\
=&\sup_{\|w\|_{H^1(\mathbb{R}^N_+)}\le 1}\langle w, \mathcal S u\rangle_{\langle H^1(\mathbb{R}^N_+), H^1(\mathbb{R}^N_+)^*\rangle}\nonumber\\
=&\sup_{\|w\|_{H^1(\mathbb{R}^N_+)}\le 1}\left(-\langle U, \mathbb S w\rangle_{\langle H^1(\mathbb{R}^N_+), H^1(\mathbb{R}^N_+)\rangle}\right)\nonumber\\
\le&\|U\|_{H^1(\mathbb{R}^N_+)},\nonumber
\end{align}
If we go back to
$$
u\mapsto u-v,\quad U\mapsto U-V,
$$
we are led to
\begin{align}
\|u-v\|^2_{H^1_0(\mathbb{R}^N_+)}\le &\|U-V\|^2_{H^1(\mathbb{R}^N_+)}\nonumber\\
\le &C\left(\|U\|^2_{H^1(\mathbb{R}^N_+)}+\|V\|^2_{H^1(\mathbb{R}^N_+)}\right)\nonumber\\
\le &C\left(E(u)+E(v)\right),\nonumber
\end{align}
for some constant $C>0$.
\end{proof}
The second main ingredient to apply Proposition \ref{basicas} is to show that
$E$ defined in \eqref{errorhyp} complies with \eqref{comporinff} too. To this end, we need to compute the derivative $E'(u)$, and so
we perform a perturbation
$$
u+\epsilon v\mapsto U+\epsilon V,
$$
in \eqref{debilondas} to write
\begin{gather}
\int_{\mathbb{R}^N_+}[(u_t+\epsilon v_t+U_t+\epsilon V_t)w_t-(\nabla u+\epsilon\nabla v-\nabla U-\epsilon\nabla V)\cdot\nabla w\nonumber\\
+(f+u+\epsilon v+U+\epsilon V) w]\,d\mathbf{x}\,dt=0.\nonumber
\end{gather}
The first-order term in $\epsilon$ must vanish:
\begin{equation}\label{firstorder}
\int_{\mathbb{R}^N_+}[(v_t+V_t)w_t-(\nabla v-\nabla V)\cdot\nabla w+(v+V) w]\,d\mathbf{x}\,dt=0
\end{equation}
for every $w\in H^1(\mathbb{R}^N_+)$. On the other hand, by differentiating
$$
E(u+\epsilon v)=\int_{\mathbb{R}^N_+}\frac12((U+\epsilon V)^2_t+|\nabla U+\epsilon\nabla V|^2+(U+\epsilon V)^2)\,d\mathbf{x}\,dt,
$$
with respect to $\epsilon$, and setting $\epsilon=0$, we arrive at
$$
\langle E'(u), v\rangle=\int_{\mathbb{R}^N_+}(U_tV_t+\nabla U\cdot\nabla V+UV)\,d\mathbf{x}\,dt.
$$
By taking $w=U$ in \eqref{firstorder}, we can also write
\begin{align}
\langle E'(u), v\rangle=&\int_{\mathbb{R}^N_+}(\nabla v\cdot\nabla U-v_tU_t-vU)\,d\mathbf{x}\,dt\nonumber\\
=&-\langle \mathbb S v, \mathcal S U\rangle_{\langle H^1(\mathbb{R}^N_+), H^1(\mathbb{R}^N_+)^*\rangle}.\nonumber
\end{align}
From this identity, which ought to be valid for every $v\in H^1_0(\mathbb{R}^N_+)$, we clearly conclude that if
$E'(u)\to\mathbf 0$ then $\mathcal S U\to\mathbf 0$ as well, because $\mathbb S$ preserves the norm. Realizing that
$$
E(u)=\frac12\|U\|_{H^1(\mathbb{R}^N_+)}^2\le \frac12\|\mathcal S U\|_{H^1(\mathbb{R}^N_+)^*}^2,
$$
by estimate \eqref{coer}, we conclude the following.
\begin{proposition}
The functional $E$ in \eqref{errorhyp} is an error functional in the sense of Definition \ref{errorgeneral}.
\end{proposition}
Our main abstract result, Proposition \ref{basicas}, applies in this situation too, and we can conclude the following.
\begin{theorem}
Problem \eqref{ondas} admits a unique weak solution $u\in H^1_0(\mathbb{R}^N_+)$ in the sense \eqref{formadeb},
and for every other $v\in H^1_0(\mathbb{R}^N_+)$, we have
$$
\|u-v\|_{H^1_0(\mathbb{R}^N_+)}^2\le KE(v),
$$
for some positive constant $K$.
\end{theorem}
\section{Non-linear monotone problems}\label{seven}
Suppose we would like to solve, or approximate the solution of, a certain non-linear elliptic system of PDEs of the form
$$
\operatorname{div}[\Phi(\nabla u)]=0\hbox{ in }\Omega,\quad u=u_0\hbox{ on }\partial\Omega,
$$
for a non-linear, smooth map
$$
\Phi(\mathbf{a}):\mathbb{R}^N\to\mathbb{R}^N.
$$
$\Omega\subset\mathbb{R}^N$ is assumed to be a regular, bounded domain.
One can set up a natural, suitable, non-negative, smooth functional based on the least-squares idea, as already introduced,
\begin{equation}\label{funcellnl}
E(v):H^1_0(\Omega)\to\mathbb{R}
\end{equation}
by putting
$$
E(v)=\frac12\int_\Omega|\nabla U(\mathbf{x})|^2\,d\mathbf{x}
$$
where
$$
\operatorname{div}[\Phi(\nabla v+\nabla u_0)+\nabla U]=0\hbox{ in }\Omega
$$
and $U\in H^1_0(\Omega)$. We can also put
$$
E(v)=\frac12\|\operatorname{div}[\Phi(\nabla v+\nabla u_0)]\|^2_{H^{-1}(\Omega)}.
$$
Our goal is to apply again Proposition \ref{basicas} or, since we are now in a non-linear situation, Proposition \ref{principalbis}. In any case, \eqref{enhcoer} is necessary.
\begin{lemma}
Let $\Phi(\mathbf{a}):\mathbb{R}^N\to\mathbb{R}^N$ be a smooth map with linear growth at infinity, i.e.
\begin{equation}\label{lineargr}
|\Phi(\mathbf{a})|\le C_1|\mathbf{a}|+C_0,
\end{equation}
with $C_1>0$, and strictly monotone in the sense
\begin{equation}\label{monotonia}
(\Phi(\mathbf{a}_1)-\Phi(\mathbf{a}_0))\cdot(\mathbf{a}_1-\mathbf{a}_0)\ge c|\mathbf{a}_1-\mathbf{a}_0|^2,\quad c>0,
\end{equation}
for every pair of vectors $\mathbf{a}_i$, $i=0, 1$. Then there is a positive constant $C$ such that
$$
\|u-v\|_{H^1_0(\Omega)}^2\le C(E(u)+E(v)),
$$
for every pair $u, v\in H^1_0(\Omega)$.
\end{lemma}
\begin{proof}
Let $u, v\in H^1_0(\Omega)$, and let $U, V\in H^1_0(\Omega)$ be their respective residuals in the sense
\begin{equation}\label{sistemas}
\operatorname{div}[\Phi(\nabla u+\nabla u_0)+\nabla U]=0,\quad
\operatorname{div}[\Phi(\nabla v+\nabla u_0)+\nabla V]=0
\end{equation}
in $\Omega$, and
$$
E(u)=\frac12\|\nabla U\|^2_{L^2(\Omega; \mathbb{R}^N)},\quad E(v)=\frac12\|\nabla V\|^2_{L^2(\Omega; \mathbb{R}^N)}.
$$
If we use $u-v$ as test field in \eqref{sistemas}, we find
\begin{gather}
\int_\Omega\Phi(\nabla u+\nabla u_0)\cdot(\nabla u-\nabla v)\,d\mathbf{x}=-\int_\Omega\nabla U\cdot(\nabla u-\nabla v)\,d\mathbf{x},\nonumber\\
\int_\Omega\Phi(\nabla v+\nabla u_0)\cdot(\nabla u-\nabla v)\,d\mathbf{x}=-\int_\Omega\nabla V\cdot(\nabla u-\nabla v)\,d\mathbf{x}.\nonumber
\end{gather}
The monotonicity condition, together with these identities, takes us, by subtracting one from the other, to
$$
c\int_\Omega|\nabla u-\nabla v|^2\,d\mathbf{x}\le\int_\Omega(\nabla V-\nabla U)\cdot(\nabla u-\nabla v)\,d\mathbf{x}.
$$
The standard Cauchy-Schwarz inequality implies that
$$
c\|\nabla u-\nabla v\|_{L^2(\Omega; \mathbb{R}^N)}\le \|\nabla U-\nabla V\|_{L^2(\Omega; \mathbb{R}^N)},
$$
and thus, taking into account the triangular inequality, we have
$$
c^2\|\nabla u-\nabla v\|_{L^2(\Omega; \mathbb{R}^N)}^2\le 4E(u)+4E(v).
$$
The use of Poincaré's inequality yields our statement.
\end{proof}
The second ingredient needed to apply Proposition \ref{principalbis} is the operator $\mathbf T$, which comes directly from linearization, that is, from Newton's method. Given an approximation $v+u_0$ of the solution, with $v\in H^1_0(\Omega)$, we seek a correction $V\in H^1_0(\Omega)$ determined by
\begin{equation}\label{sistemalineal}
\operatorname{div}[\Phi(\nabla v+\nabla u_0)+\nabla\Phi(\nabla v+\nabla u_0)\nabla V]=0\hbox{ in }\Omega,
\end{equation}
as a linear approximation of
$$
\operatorname{div}[\Phi(\nabla v+\nabla u_0+\nabla V)]=0\hbox{ in }\Omega.
$$
We therefore define
\begin{equation}\label{newoper}
\mathbf T:H^1_0(\Omega)\to H^1_0(\Omega),\quad \mathbf T v=V,
\end{equation}
where $V$ is the solution of \eqref{sistemalineal}. The fact that $\mathbf T$ is well-defined is a direct consequence of the standard Lax-Milgram lemma and the identification
$$
\mathbf A=\nabla\Phi(\nabla v+\nabla u_0),\quad \mathbf{a}=\Phi(\nabla v+\nabla u_0),
$$
provided
$$
|\nabla\Phi(\mathbf{v})|\le M,\quad
\mathbf{u}\cdot\nabla\Phi(\mathbf{v})\mathbf{u}\ge c|\mathbf{u}|^2,\quad M, c>0.
$$
The first bound is compatible with linear growth at infinity, \eqref{lineargr}, while the second one is a consequence of monotonicity \eqref{monotonia}. On the other hand, the smoothness of $\mathbf T$ depends directly on the smoothness of $\Phi$; specifically, we assume $\Phi$ to be $\mathcal C^2$. Since $\mathbf T$ comes from Newton's method, condition \eqref{propoper} is guaranteed. We are hence entitled to apply Proposition \ref{principalbis} and conclude the following.
\begin{theorem}
Let $\Phi(\mathbf{a}):\mathbb{R}^N\to\mathbb{R}^N$ be a $\mathcal C^2$-mapping such that
\begin{gather}
|\nabla\Phi(\mathbf{a})|\le M,\nonumber\\
(\Phi(\mathbf{a}_1)-\Phi(\mathbf{a}_0))\cdot(\mathbf{a}_1-\mathbf{a}_0)\ge c|\mathbf{a}_1-\mathbf{a}_0|^2,\nonumber
\end{gather}
for constants $M, c>0$, and every $\mathbf{a}$, $\mathbf{a}_1$, $\mathbf{a}_0$ in $\mathbb{R}^N$.
There is a unique weak solution $u\in u_0+H^1_0(\Omega)$, for arbitrary $u_0\in H^1(\Omega)$, of the equation
$$
\operatorname{div}[\Phi(\nabla u)]=0\hbox{ in }\Omega,\quad u-u_0\in H^1_0(\Omega).
$$
Moreover
$$
\|u-v\|^2_{H^1_0(\Omega)}\le CE(v)
$$
for every other $v\in u_0+H^1_0(\Omega)$.
\end{theorem}
It is not hard to design appropriate sets of assumptions to deal with more general equations of the form
$$
\operatorname{div}[\Phi(\nabla v(\mathbf{x}), v(\mathbf{x}), \mathbf{x})]=0.
$$
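In one space dimension, the Newton operator $\mathbf T$ of \eqref{newoper} can be sketched with finite differences. In the example below (the choice $\Phi(a)=a+0.1\tanh(a)$, which satisfies the hypotheses of the theorem with $c=1$ and $M=1.1$, as well as the boundary data and the grid, are all illustrative assumptions), each iteration solves the tridiagonal linearized system \eqref{sistemalineal}, and the iterates converge to the exact solution $u(x)=x$:

```python
import numpy as np

# Illustrative monotone flux (assumption): Phi' lies in [1, 1.1]
Phi  = lambda a: a + 0.1 * np.tanh(a)
dPhi = lambda a: 1.0 + 0.1 / np.cosh(a) ** 2

# Solve (Phi(u'))' = 0 on (0,1), u(0) = 0, u(1) = 1, by Newton's method
n = 50
h = 1.0 / n
x = np.linspace(0.0, 1.0, n + 1)
u = x + 0.5 * np.sin(np.pi * x)    # initial guess u0 + v, with v in H^1_0

for _ in range(30):
    du = (u[1:] - u[:-1]) / h                  # u' on each subinterval
    F = (Phi(du[1:]) - Phi(du[:-1])) / h       # residual at interior nodes
    d = dPhi(du) / h ** 2
    J = np.zeros((n - 1, n - 1))               # tridiagonal Jacobian of F
    for i in range(n - 1):
        J[i, i] = -(d[i] + d[i + 1])
        if i > 0:
            J[i, i - 1] = d[i]
        if i < n - 2:
            J[i, i + 1] = d[i + 1]
    V = np.linalg.solve(J, -F)                 # Newton correction V = Tv
    u[1:-1] += V
    if np.max(np.abs(V)) < 1e-12:
        break

print(np.max(np.abs(u - x)))   # the exact solution here is u(x) = x
```

Since $\Phi$ is strictly monotone, $\Phi(u')$ must be constant, forcing $u'$ constant, so the exact solution of this model is indeed the linear interpolant of the boundary data.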
\section{Non-linear waves}
We would like to explore non-linear equations of the form
$$
u_{tt}(t, \mathbf{x})-\Delta u(t, \mathbf{x})-f(\nabla u(t, \mathbf{x}), u_t(t, \mathbf{x}), u(t, \mathbf{x}))=0\hbox{ for }(t, \mathbf{x})\in\mathbb{R}^N_+,
$$
subject to the initial conditions
$$
u(0, \mathbf{x})=u_0(\mathbf{x}),\quad u_t(0, \mathbf{x})=u_1(\mathbf{x})
$$
for appropriate data $u_0$ and $u_1$ belonging to suitable spaces to be determined.
Dimension $N$ is taken to be at least two.
Though more complicated situations, allowing for a monotone main part in the equation as in the previous section, could be considered, we will restrict ourselves to the equation above in order to better understand the effect of the lower-order terms. Conditions on the non-linear term
$$
f(\mathbf{z}, z, u):\mathbb{R}^N\times\mathbb{R}\times\mathbb{R}\to\mathbb{R}
$$
will be specified along the way as needed.
Our ambient space will be $H^1(\mathbb{R}^N_+)$, where weak solutions $u$ are sought. If we assume
$$
u_0\in H^{1/2}(\mathbb{R}^N),\quad u_1\in L^2(\mathbb{R}^N),
$$
we can take for granted, without loss of generality, that both $u_0$ and $u_1$ identically vanish and $u\in H^1_0(\mathbb{R}^N_+)$, at the expense of permitting
$$
f(\mathbf{z}, z, u, t, \mathbf{x}):\mathbb{R}^N\times\mathbb{R}\times\mathbb{R}\times\mathbb{R}\times\mathbb{R}^N\to\mathbb{R}.
$$
We will therefore stick to the problem
\begin{equation}\label{nonlinw}
u_{tt}-\Delta u-f(\nabla u, u_t, u, t, \mathbf{x})=0\hbox{ for }(t, \mathbf{x})\in\mathbb{R}^N_+,
\end{equation}
subject to the initial conditions
\begin{equation}\label{iniccond}
u(0, \mathbf{x})=0,\quad u_t(0, \mathbf{x})=0.
\end{equation}
A weak solution $u\in H^1_0(\mathbb{R}^N_+)$ of \eqref{nonlinw} is such that
\begin{equation}\label{formadebilnlw}
\int_{\mathbb{R}^N_+}[-u_tw_t+\nabla u\cdot\nabla w-f(\nabla u, u_t, u, t, \mathbf{x})w]\,d\mathbf{x}\,dt=0
\end{equation}
for every $w\in H^1(\mathbb{R}^N_+)$. This weak formulation asks for the non-linear term
recorded in the function $f$ to comply with
\begin{equation}\label{nonlinterm}
|f(\mathbf{z}, z, u, t, \mathbf{x})|\le C(|\mathbf{z}|+|z|+|u|^{(N+1)/(N-1)})+f_0(t, \mathbf{x})
\end{equation}
for a function $f_0\in L^2(\mathbb{R}^N_+)$, in such a way that the composition
$$
f(\nabla u, u_t, u, t, \mathbf{x})\in L^2(\mathbb{R}^N_+)
$$
for every $u\in H^1(\mathbb{R}^N_+)$. As expected, for every $u\in H^1_0(\mathbb{R}^N_+)$ we define its residual $U\in H^1(\mathbb{R}^N_+)$ through
\begin{equation}\label{resnlw}
\int_{\mathbb{R}^N_+}[(U_t+u_t)w_t-(\nabla u-\nabla U)\cdot\nabla w+(U+f(\nabla u, u_t, u, t, \mathbf{x}))w]\,d\mathbf{x}\,dt=0
\end{equation}
which must hold for every test function $w\in H^1(\mathbb{R}^N_+)$; and the functional
\begin{equation}\label{funcerrnlw}
E(u):H^1_0(\mathbb{R}^N_+)\to\mathbb{R}^+,\quad E(u)=\frac12\|U\|^2_{H^1(\mathbb{R}^N_+)},
\end{equation}
as a measure of departure of $u$ from being a weak solution of \eqref{nonlinw}.
Note how \eqref{resnlw} determines $U$ in a unique way. Indeed, such $U$ is the unique minimizer of the strictly convex, quadratic functional
$$
\frac12\int_{\mathbb{R}^N_+}\left[(U_t+u_t)^2+|\nabla U-\nabla u|^2+(U+f(\nabla u, u_t, u, t, \mathbf{x}))^2\right]\,d\mathbf{x}\,dt
$$
defined for $U\in H^1(\mathbb{R}^N_+)$.
We claim that, under appropriate additional hypotheses, we can apply Proposition \ref{principalbis} to this situation. To explain things in the most accessible way, however, we will show that Proposition \ref{basicas} can also be applied directly. This requires checking that $E$ in \eqref{funcerrnlw} is indeed an error functional in the sense of Definition \ref{errorgeneral}.
We will be using the operators and the formalism right before Lemma \ref{hiperbolico}, as well as bound \eqref{coer} in this lemma.
\begin{lemma}
Suppose the function $f(\mathbf{z}, z, u, t, \mathbf{x})$ is such that
\begin{enumerate}
\item $f(\mathbf 0, 0, 0, t, \mathbf{x})\in L^2(\mathbb{R}^N_+)$;
\item the difference $f(\mathbf{z}, z, u, t, \mathbf{x})-u$ is globally Lipschitz with respect to triplets $(\mathbf{z}, z, u)$ in the sense
\begin{gather}
|f(\mathbf{z}, z, u, t, \mathbf{x})-u-f(\mathbf{y}, y, v, t, \mathbf{x})+v|\le \nonumber\\
M\left(|\mathbf{z}-\mathbf{y}|+|z-y|+\frac1D|u-v|^{(N+1)/(N-1)}\right),\nonumber
\end{gather}
where $D$ is the constant of the corresponding embedding
$$
H^1(\mathbb{R}^N_+)\subset L^{2(N+1)/(N-1)}(\mathbb{R}^N_+),
$$
and $M<1$.
\end{enumerate}
Then there is a positive constant $K$ with
$$
\|u-v\|_{H^1_0(\mathbb{R}^N_+)}^2\le K(E(u)+E(v)),
$$
for every pair $u, v\in H^1_0(\mathbb{R}^N_+)$, where $E$ is given in \eqref{funcerrnlw}.
\end{lemma}
\begin{proof}
Note how our hypotheses on the nonlinearity $f$ imply the bound \eqref{nonlinterm} by taking
$$
\mathbf{y}=\mathbf 0,\quad y=v=0,\quad f_0(t, \mathbf{x})=f(\mathbf 0, 0, 0, t, \mathbf{x}).
$$
If $u$, $v$ belong to $H^1_0(\mathbb{R}^N_+)$, and $U$, $V$ in $H^1(\mathbb{R}^N_+)$ are their respective residuals, then
\begin{gather}
\int_{\mathbb{R}^N_+}[(U_t+u_t)w_t-(\nabla u-\nabla U)\cdot\nabla w+(U+f(\nabla u, u_t, u, t, \mathbf{x}))w]\,d\mathbf{x}\,dt=0,\nonumber\\
\int_{\mathbb{R}^N_+}[(V_t+v_t)w_t-(\nabla v-\nabla V)\cdot\nabla w+(V+f(\nabla v, v_t, v, t, \mathbf{x}))w]\,d\mathbf{x}\,dt=0,\nonumber
\end{gather}
for every $w\in H^1(\mathbb{R}^N_+)$. By subtracting one from the other, and letting
$$
s=u-v,\quad S=U-V,
$$
we find
\begin{gather}
\int_{\mathbb{R}^N_+}[(S_t+s_t)w_t-(\nabla s-\nabla S)\cdot\nabla w+\nonumber\\
(S+f(\nabla u, u_t, u, t, \mathbf{x})-f(\nabla v, v_t, v, t, \mathbf{x}))w]\,d\mathbf{x}\,dt=0,\nonumber
\end{gather}
for every $w\in H^1(\mathbb{R}^N_+)$. We can recast this identity, by using the formalism in the corresponding linear situation around Lemma \ref{hiperbolico}, as
\begin{gather}
\langle S, \mathbb S w\rangle_{\langle H^1(\mathbb{R}^N_+), H^1(\mathbb{R}^N_+)\rangle}+
\langle w, \mathcal S s\rangle_{\langle H^1(\mathbb{R}^N_+), H^1(\mathbb{R}^N_+)^*\rangle}\nonumber\\
=-\int_{\mathbb{R}^N_+}[(f(\nabla u, u_t, u, t, \mathbf{x})-f(\nabla v, v_t, v, t, \mathbf{x})-s)w]\,d\mathbf{x}\,dt.
\nonumber
\end{gather}
The same manipulations as in the proof of Proposition \ref{enhcoerhyp}, together with the assumed Lipschitz property on $f$, lead immediately to
\begin{align}
\|s\|_{H^1_0(\mathbb{R}^N_+)}\le &\|\mathcal S s\|_{H^1(\mathbb{R}^N_+)^*}\nonumber\\
=&\sup_{\|w\|_{H^1(\mathbb{R}^N_+)}\le 1}|\langle w, \mathcal S s\rangle_{\langle H^1(\mathbb{R}^N_+), H^1(\mathbb{R}^N_+)^*\rangle}|\nonumber\\
=&\sup_{\|w\|_{H^1(\mathbb{R}^N_+)}\le 1}\left|\langle S, \mathbb S w\rangle+\int_{\mathbb{R}^N_+}(f_u-f_v-s)w\,d\mathbf{x}\,dt\right|\nonumber\\
\le&\|S\|_{H^1(\mathbb{R}^N_+)}\,\|w\|_{H^1(\mathbb{R}^N_+)}+M\|s\|_{H^1_0(\mathbb{R}^N_+)}\|w\|_{H^1(\mathbb{R}^N_+)}\nonumber\\
\le&\|S\|_{H^1(\mathbb{R}^N_+)}+M\|s\|_{H^1_0(\mathbb{R}^N_+)}.\nonumber
\end{align}
We are putting
$$
f_u=f(\nabla u, u_t, u, t, \mathbf{x}),\quad f_v=f(\nabla v, v_t, v, t, \mathbf{x}),
$$
for notational convenience. Note also the use of the embedding constant.
The resulting final inequality, together with the fact that $M<1$, shows our claim.
\end{proof}
We turn to the second important property for $E$ to become an error functional, namely,
$$
\lim_{E'(\mathbf{u})\to\mathbf 0}E(\mathbf{u})=0
$$
over bounded subsets of $H^1_0(\mathbb{R}^N_+)$. We assume that the non-linearity $f$ is $\mathcal C^1$ with respect to $(\mathbf{z}, z, u)$, with uniformly bounded partial derivatives.
To compute the derivative $E'(u)$ at an arbitrary $u\in H^1_0(\mathbb{R}^N_+)$, we perform, as usual, the perturbation to first-order
$$
u\mapsto u+\epsilon v,\quad U\mapsto U+\epsilon V,
$$
and introduce them in \eqref{resnlw}. After differentiation with respect to $\epsilon$, and setting $\epsilon=0$, we find
\begin{equation}\label{perturbacion}
\int_{\mathbb{R}^N_+}[(V_t+v_t)w_t-(\nabla v-\nabla V)\cdot\nabla w+(V+\overline f_\mathbf{z}\cdot\nabla v+\overline f_z v_t+\overline f_u v)w]\,d\mathbf{x}\,dt=0,
\end{equation}
for all $w\in H^1(\mathbb{R}^N_+)$, where
$$
\overline f_\mathbf{z}(t, \mathbf{x})=f_\mathbf{z}(\nabla u(t, \mathbf{x}), u_t(t, \mathbf{x}), u(t, \mathbf{x}), t, \mathbf{x}),
$$
and the same for $\overline f_z(t, \mathbf{x})$ and $\overline f_u(t, \mathbf{x})$. On the other hand,
$$
\langle E'(u), v\rangle=\lim_{\epsilon\to0}\frac1\epsilon(E(u+\epsilon v)-E(u))
$$
is clearly given by
$$
\langle E'(u), v\rangle=\int_{\mathbb{R}^N_+}\left(\nabla U\cdot\nabla V+U_t\,V_t+U\,V\right)\,d\mathbf{x}\,dt.
$$
If we use $w=U$ in \eqref{perturbacion}, we can write
$$
\langle E'(u), v\rangle=\int_{\mathbb{R}^N_+}\left[-v_t\,U_t+\nabla v\cdot\nabla U-(\overline f_\mathbf{z}\cdot\nabla v+\overline f_z v_t+\overline f_u v)U\right]\,d\mathbf{x}\,dt.
$$
The validity of this representation for every $v\in H^1_0(\mathbb{R}^N_+)$ enables us to identify $E'(u)$ with the triplet
$$
(-U_t-\overline f_zU, \nabla U-U\overline f_\mathbf{z}, -\overline f_uU),
$$
in the sense $E'(u)=\mathcal S U$ where the linear operator
$$
\mathcal S:H^1(\mathbb{R}^N_+)\to H^{-1}(\mathbb{R}^N_+)
$$
is precisely determined by
$$
\langle\mathcal S U, v\rangle=\int_{\mathbb{R}^N_+}\left[-v_t\,U_t+\nabla v\cdot\nabla U-(\overline f_\mathbf{z}\cdot\nabla v+\overline f_z v_t+\overline f_u v)U\right]\,d\mathbf{x}\,dt
$$
for every $v\in H^1_0(\mathbb{R}^N_+)$. Notice how this operator $\mathcal S$ is well-defined because the non-linearity $f$ has been assumed to be globally Lipschitz with uniformly bounded partial derivatives. To conclude that $E'(u)\to\mathbf 0$ implies $U\to0$ and, hence, $E(u)\to0$, we need to ensure that this operator $\mathcal S$ is injective. We conjecture that this is so, without further requirements; but to simplify the argument here, we add the assumption that
$$
|f_u(\mathbf{z}, z, u, t, \mathbf{x})|\ge \epsilon>0
$$
for every $(\mathbf{z}, z, u, t, \mathbf{x})$. Under this additional hypothesis, the condition
$$
(-U_t-\overline f_zU, \nabla U-U\overline f_\mathbf{z}, -\overline f_uU)=\mathbf 0
$$
automatically implies
$$
U=\nabla U=U_t=0
$$
and hence $E(u)=0$.
\begin{theorem}
Suppose the non-linearity $f(\mathbf{z}, z, u, t, \mathbf{x})$ is $\mathcal C^1$ with respect to the variables $(\mathbf{z}, z, u)$, and:
\begin{enumerate}
\item $f(\mathbf 0, 0, 0, t, \mathbf{x})\in L^2(\mathbb{R}^N_+)$;
\item the difference $f(\mathbf{z}, z, u, t, \mathbf{x})-u$ is Lipschitz with respect to triplets $(\mathbf{z}, z, u)$ in the sense
\begin{gather}
|f(\mathbf{z}, z, u, t, \mathbf{x})-u-f(\mathbf{y}, y, v, t, \mathbf{x})+v|\le \nonumber\\
M\left(|\mathbf{z}-\mathbf{y}|+|z-y|+\frac1D|u-v|^{(N+1)/(N-1)}\right),\nonumber
\end{gather}
where $D$ is the constant of the corresponding embedding
$$
H^1(\mathbb{R}^N_+)\subset L^{2(N+1)/(N-1)}(\mathbb{R}^N_+),
$$
and $M<1$;
\item non-vanishing of $f_u$: there is some $\epsilon>0$ with
$$
|f_u(\mathbf{z}, z, u, t, \mathbf{x})|\ge \epsilon>0.
$$
\end{enumerate}
Then the problem
$$
u_{tt}-\Delta u+f(\nabla u, u_t, u, t, \mathbf{x})=0,\quad (t, \mathbf{x})\in\mathbb{R}^N_+,
$$
under vanishing initial conditions
$$
u(0, \mathbf{x})=u_t(0, \mathbf{x})=0,\quad \mathbf{x}\in\mathbb{R}^N,
$$
admits a unique weak solution $u\in H^1_0(\mathbb{R}^N_+)$ in the sense \eqref{formadebilnlw}, and
$$
\|v-u\|^2_{H^1_0(\mathbb{R}^N_+)}\le K E(v),
$$
for any other $v\in H^1_0(\mathbb{R}^N_+)$.
\end{theorem}
Without the global Lipschitz condition on $f$ in the previous statement, assuming only smoothness with respect to the triplets $(\mathbf{z}, z, u)$,
only a local existence result is possible. This is standard.
\section{The steady Navier-Stokes system}
For a bounded, Lipschitz, connected domain $\Omega\subset\mathbb{R}^N$, $N=2, 3$, we are concerned with the steady Navier-Stokes system
\begin{equation}\label{navsto}
-\nu\Delta\mathbf{u}+\nabla\mathbf{u}\,\mathbf{u}+\nabla u=\mathbf{f},\quad \operatorname{div}\mathbf{u}=0\hbox{ in }\Omega,
\end{equation}
for a vector field $\mathbf{u}\in H^1_0(\Omega; \mathbb{R}^N)$, and a scalar, pressure field $u\in L^2(\Omega)$. The external force field $\mathbf{f}$ is assumed to belong to the dual space $H^{-1}(\Omega; \mathbb{R}^N)$. The parameter $\nu>0$ is viscosity.
Because of the incompressibility condition, the system can also be written in the form
$$
-\nu\Delta\mathbf{u}+\operatorname{div}(\mathbf{u}\otimes\mathbf{u})+\nabla u=\mathbf{f},\quad \operatorname{div}\mathbf{u}=0\hbox{ in }\Omega.
$$
A weak solution is a divergence-free vector field $\mathbf{u}\in H^1_0(\Omega; \mathbb{R}^N)$, and a scalar field $u\in L^2(\Omega)$, normalized by demanding vanishing average in $\Omega$, such that
$$
\int_\Omega [\nu\nabla\mathbf{u}(\mathbf{x}):\nabla\mathbf{v}(\mathbf{x})-\mathbf{u}(\mathbf{x})\nabla\mathbf{v}(\mathbf{x})\mathbf{u}(\mathbf{x})-u(\mathbf{x})\operatorname{div}\mathbf{v}(\mathbf{x})]\,d\mathbf{x}=\langle\mathbf{f}, \mathbf{v}\rangle
$$
where the right-hand side stands for the duality pairing between $H^{-1}(\Omega; \mathbb{R}^N)$ and $H^1_0(\Omega; \mathbb{R}^N)$.
We propose to look at this problem by incorporating the incompressibility constraint into the space, as part of feasibility, as is usually done. There is also the alternative of incorporating a penalization on the divergence into the functional, instead of including it in the class of admissible fields (see \cite{lemmunped}). The pressure field arises as the multiplier corresponding to the divergence-free constraint.
Let
$$
\mathbb{D}\equiv H^1_{0, div}(\Omega; \mathbb{R}^N)=\{\mathbf{u}\in H^1_0(\Omega; \mathbb{R}^N): \operatorname{div}\mathbf{u}=0\hbox{ in }\Omega\}.
$$
For every such $\mathbf{u}$, we determine its residual $\mathbf{U}$, in a unique way, as the solution of the restricted variational problem
$$
\hbox{Minimize in }\mathbf{V}\in \mathbb{D}:\quad \int_\Omega\left[\frac12|\nabla\mathbf{V}|^2+(\nu\nabla\mathbf{u}-\mathbf{u}\otimes\mathbf{u}):\nabla\mathbf{V}\right]\,d\mathbf{x}-\langle\mathbf{f}, \mathbf{V}\rangle.
$$
The pressure $v$ arises as the corresponding multiplier for the divergence-free constraint, in such a way that the unique minimizer $\mathbf{U}$ is determined through the variational equality
$$
\int_\Omega (\nabla\mathbf{U}:\nabla\mathbf{V}+(\nu\nabla\mathbf{u}-\mathbf{u}\otimes\mathbf{u}):\nabla\mathbf{V}+v\operatorname{div}\mathbf{V})\,d\mathbf{x}-\langle\mathbf{f}, \mathbf{V}\rangle=0,
$$
valid for every test field $\mathbf{V}\in H^1_0(\Omega; \mathbb{R}^N)$. This is the weak form of the optimality condition associated with the previous variational problem
\begin{equation}\label{residuo}
-\Delta\mathbf{U}-\nu\Delta\mathbf{u}+\operatorname{div}(\mathbf{u}\otimes\mathbf{u})-\mathbf{f}+\nabla v=\mathbf 0\hbox{ in }\Omega,
\end{equation}
for $\mathbf{U}\in\mathbb{D}$. The multiplier $v\in L^2_0(\Omega)$ (square-integrable fields with a vanishing average) is the pressure. We define
\begin{equation}\label{funcionalns}
E(\mathbf{u}):\mathbb{D}\to\mathbb{R}^+,\quad E(\mathbf{u})=\frac12\int_\Omega|\nabla\mathbf{U}(\mathbf{x})|^2\,d\mathbf{x}.
\end{equation}
For $\mathbf{u}, \mathbf{v}\in\mathbb{D}$, let $\mathbf{U}, \mathbf{V}\in\mathbb{D}$ be their respective residuals, and $u, v$ their respective pressure fields. Put
$$
\mathbf{w}=\mathbf{u}-\mathbf{v}\in\mathbb{D},\quad \mathbf{W}=\mathbf{U}-\mathbf{V}\in\mathbb{D}, \quad w=u-v\in L^2(\Omega).
$$
It is elementary to find, by subtraction of the corresponding system \eqref{residuo} for $\mathbf{u}$ and $\mathbf{v}$, that
\begin{equation}\label{esta}
-\Delta\mathbf{W}-\nu\Delta\mathbf{w}+\operatorname{div}(\mathbf{u}\otimes\mathbf{u}-\mathbf{v}\otimes\mathbf{v})+\nabla w=\mathbf 0\hbox{ in }\Omega.
\end{equation}
It is the presence of the non-linear term $\operatorname{div}(\mathbf{u}\otimes\mathbf{u})$, so fundamental to the Navier-Stokes system, that makes the situation different from a linear setting.
We write
$$
\operatorname{div}(\mathbf{u}\otimes\mathbf{u}-\mathbf{v}\otimes\mathbf{v})=\operatorname{div}(\mathbf{w}\otimes\mathbf{u})+\operatorname{div}(\mathbf{v}\otimes\mathbf{w}),
$$
and bear in mind the well-known fact
\begin{equation}\label{identidades}
\int_\Omega(\mathbf{v}\otimes\mathbf{v}:\nabla\mathbf{u}+\mathbf{u}\otimes\mathbf{v}:\nabla\mathbf{v})\,d\mathbf{x}=\int_\Omega\mathbf{u}\otimes\mathbf{v}:\nabla\mathbf{u}\,d\mathbf{x}=0
\end{equation}
for every $\mathbf{u}, \mathbf{v}\in\mathbb{D}$. If we use $\mathbf{w}$ as a test function in \eqref{esta}, we find that
\begin{equation}\label{uno1}
\int_\Omega[\nabla\mathbf{W}:\nabla \mathbf{w}+\nu|\nabla\mathbf{w}|^2-(\mathbf{w}\otimes\mathbf{u}):\nabla\mathbf{w}-(\mathbf{v}\otimes\mathbf{w}):\nabla\mathbf{w}]\,d\mathbf{x}=0.
\end{equation}
Note that the integral involving $w$ vanishes because $\mathbf{w}$ is divergence-free. By \eqref{identidades},
\begin{gather}
\int_\Omega(\mathbf{w}\otimes\mathbf{u}):\nabla\mathbf{w}\,d\mathbf{x}=0,\nonumber\\
\int_\Omega(\mathbf{v}\otimes\mathbf{w}):\nabla\mathbf{w}\,d\mathbf{x}=-\int_\Omega (\mathbf{w}\otimes\mathbf{w}):\nabla\mathbf{v}\,d\mathbf{x}.\nonumber
\end{gather}
Identity \eqref{uno1} becomes
\begin{equation}\label{dos2}
\int_\Omega[\nabla\mathbf{W}:\nabla \mathbf{w}+\nu|\nabla\mathbf{w}|^2+(\mathbf{w}\otimes\mathbf{w}):\nabla\mathbf{v}]\,d\mathbf{x}=0.
\end{equation}
We can use $\mathbf{v}$ as a test function in the corresponding system \eqref{residuo} for $\mathbf{v}$ to have
$$
\int_\Omega(\nabla \mathbf{V}:\nabla\mathbf{v}+\nu|\nabla\mathbf{v}|^2)\,d\mathbf{x}-\langle\mathbf{f}, \mathbf{v}\rangle=0.
$$
Again we have utilized that fields in $\mathbb{D}$ are divergence-free, and the second identity in \eqref{identidades}. This last identity implies, in an elementary way, that
\begin{equation}\label{tres3}
\nu\|\mathbf{v}\|_{H^1_0(\Omega; \mathbb{R}^N)}\le \|\mathbf{f}\|_{H^{-1}(\Omega; \mathbb{R}^N)}+\sqrt{2E(\mathbf{v})}.
\end{equation}
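To make the elementary step explicit (all norms below are $H^1_0$ norms, identified with the $L^2$ norms of the gradients): by the Cauchy-Schwarz inequality,
$$
\nu\|\mathbf{v}\|^2=-\int_\Omega\nabla\mathbf{V}:\nabla\mathbf{v}\,d\mathbf{x}+\langle\mathbf{f},\mathbf{v}\rangle\le\left(\|\mathbf{V}\|+\|\mathbf{f}\|_{H^{-1}(\Omega;\mathbb{R}^N)}\right)\|\mathbf{v}\|,
$$
and dividing by $\|\mathbf{v}\|$ and using $\|\mathbf{V}\|=\sqrt{2E(\mathbf{v})}$ gives \eqref{tres3}.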
Recall that
$$
2E(\mathbf{v})=\|\mathbf{V}\|^2_{H^1_0(\Omega; \mathbb{R}^N)}.
$$
We now have all the suitable elements to exploit \eqref{dos2}. If $C=C(N)$ is the constant of the Sobolev embedding of $H^1(\Omega)$ into $L^4(\Omega)$ for $N\le4$, then \eqref{dos2} leads to
$$
\nu\|\mathbf{w}\|^2\le \|\mathbf{W}\|\,\|\mathbf{w}\|+C^2\|\mathbf{w}\|^2\|\mathbf{v}\|
$$
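To spell out how \eqref{dos2} yields this bound: the first term is estimated by the Cauchy-Schwarz inequality, and the quadratic term by Hölder's inequality together with the embedding $H^1(\Omega)\subset L^4(\Omega)$:
$$
\left|\int_\Omega(\mathbf{w}\otimes\mathbf{w}):\nabla\mathbf{v}\,d\mathbf{x}\right|\le\|\mathbf{w}\|^2_{L^4(\Omega;\mathbb{R}^N)}\,\|\nabla\mathbf{v}\|_{L^2(\Omega)}\le C^2\|\mathbf{w}\|^2\,\|\mathbf{v}\|.
$$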
where all norms here are in $H^1_0(\Omega; \mathbb{R}^N)$. On the other hand, if we replace the size of $\mathbf{v}$ by the estimate \eqref{tres3}, we are carried to
$$
\nu\|\mathbf{w}\|^2\le \|\mathbf{W}\|\,\|\mathbf{w}\|+\frac {C^2}\nu\|\mathbf{w}\|^2\left(\|\mathbf{f}\|_{H^{-1}(\Omega; \mathbb{R}^N)}+\sqrt{2E(\mathbf{v})}\right),
$$
or
$$
\left(\nu-\frac {C^2}\nu\left(\|\mathbf{f}\|+\sqrt{2E(\mathbf{v})}\right)\right)\|\mathbf{w}\|^2\le \|\mathbf{W}\|\,\|\mathbf{w}\|.
$$
Since
$$
\|\mathbf{W}\|=\|\mathbf{U}-\mathbf{V}\|\le \|\mathbf{U}\|+\|\mathbf{V}\|,
$$
we would have
$$
\left(\nu-\frac {C^2}\nu\left(\|\mathbf{f}\|+\sqrt{2E(\mathbf{v})}\right)\right)^2\|\mathbf{u}-\mathbf{v}\|^2\le 4 (E(\mathbf{u})+E(\mathbf{v})).
$$
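The last inequality follows by squaring and using $(a+b)^2\le2(a^2+b^2)$ together with $\|\mathbf{U}\|^2=2E(\mathbf{u})$ and $\|\mathbf{V}\|^2=2E(\mathbf{v})$:
$$
\|\mathbf{W}\|^2\le\left(\|\mathbf{U}\|+\|\mathbf{V}\|\right)^2\le2\left(\|\mathbf{U}\|^2+\|\mathbf{V}\|^2\right)=4\left(E(\mathbf{u})+E(\mathbf{v})\right).
$$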
The form of this inequality leads us to the following interesting generalization of Definition \ref{errorgeneral}.
\begin{definition}\label{errorconcota}
A non-negative, $\mathcal C^1$-functional
$$
E(\mathbf{u}):\mathbb H\to\mathbb{R}^+
$$
defined over a Hilbert space $\mathbb H$ is called an error functional if
there is some positive constant $c$ (including $c=+\infty$) such that:
\begin{enumerate}
\item behavior as $E'\to\mathbf 0$:
$$
\lim_{E'(\mathbf{u})\to\mathbf 0}E(\mathbf{u})=0
$$
over bounded subsets of $\mathbb H$; and
\item enhanced coercivity: there is a positive constant $C$ (that might depend on $c$), such that for every pair $\mathbf{u}, \mathbf{v}$
belonging to the sub-level set $\{E\le c\}$,
we have
$$
\|\mathbf{u}-\mathbf{v}\|^2\le C(E(\mathbf{u})+E(\mathbf{v})).
$$
\end{enumerate}
\end{definition}
It is interesting to note that the sub-level sets $\{E\le d\}$ for $d<c$, for a functional $E$ verifying Definition \ref{errorconcota}, cannot have several connected components.
Because our basic result Proposition \ref{basicas} is concerned with zeros of $E$, it is still valid under Definition \ref{errorconcota}.
\begin{proposition}\label{basicass}
Let $E:\mathbb H\to\mathbb{R}^+$ be an error functional according to Definition \ref{errorconcota}. Then there is a unique minimizer $\mathbf{u}_\infty\in\mathbb H$ such that $E(\mathbf{u}_\infty)=0$, and
$$
\|\mathbf{u}-\mathbf{u}_\infty\|^2\le CE(\mathbf{u}),
$$
for every $\mathbf{u}\in\mathbb H$
provided $E(\mathbf{u})$ is sufficiently small ($E(\mathbf{u})\le c$, the constant in Definition \ref{errorconcota}).
\end{proposition}
The calculations that motivated this generalization yield the following.
\begin{proposition}
Let $N\le4$, and $\Omega\subset\mathbb{R}^N$, a bounded, Lipschitz, connected domain. If $\nu>0$ and $\mathbf{f}\in H^{-1}(\Omega; \mathbb{R}^N)$ are such that the quotient
$\|\mathbf{f}\|/\nu^2$ is sufficiently small, then the functional $E$ in \eqref{funcionalns} complies with Definition \ref{errorconcota}.
\end{proposition}
We now turn to examining the interconnection between $E$ and $E'$. To this end, we gather here \eqref{residuo} and \eqref{funcionalns}
\begin{gather}
E(\mathbf{u})=\frac12\int_\Omega|\nabla\mathbf{U}(\mathbf{x})|^2\,d\mathbf{x},\nonumber\\
-\Delta\mathbf{U}-\nu\Delta\mathbf{u}+\operatorname{div}(\mathbf{u}\otimes\mathbf{u})-\mathbf{f}+\nabla u=\mathbf 0\hbox{ in }\Omega,\nonumber
\end{gather}
for $\mathbf{u}, \mathbf{U}\in\mathbb{D}$ and $u\in L^2_0(\Omega)$. If we replace
$$
\mathbf{u}\mapsto\mathbf{u}+\epsilon\mathbf{v},\quad \mathbf{U}\mapsto\mathbf{U}+\epsilon\mathbf{V},\quad u\mapsto u+\epsilon v,
$$
to first-order in $\epsilon$, we would have
\begin{gather}
E(\mathbf{u}+\epsilon\mathbf{v})=\frac12\int_\Omega|\nabla\mathbf{U}+\epsilon\nabla\mathbf{V}|^2\,d\mathbf{x},\nonumber\\
-\Delta(\mathbf{U}+\epsilon\mathbf{V})-\nu\Delta(\mathbf{u}+\epsilon\mathbf{v})+\operatorname{div}((\mathbf{u}+\epsilon\mathbf{v})\otimes(\mathbf{u}+\epsilon\mathbf{v}))\nonumber\\
-\mathbf{f}+\nabla (u+\epsilon v)=\mathbf 0.\nonumber
\end{gather}
By differentiating with respect to $\epsilon$, and setting $\epsilon=0$, we arrive at
\begin{gather}
\langle E'(\mathbf{u}), \mathbf{v}\rangle=\int_\Omega\nabla\mathbf{U}\cdot\nabla\mathbf{V}\,d\mathbf{x},\nonumber\\
-\Delta\mathbf{V}-\nu\Delta\mathbf{v}+\operatorname{div}(\mathbf{u}\otimes\mathbf{v}+\mathbf{v}\otimes\mathbf{u})+\nabla v=\mathbf 0.\nonumber
\end{gather}
If we use $\mathbf{U}$ as a test function in this last system, we realize that
$$
\langle E'(\mathbf{u}), \mathbf{v}\rangle=\int_\Omega [-\nu\nabla\mathbf{v}\cdot\nabla\mathbf{U}+\mathbf{u}\otimes\mathbf{v}:\nabla\mathbf{U}+\mathbf{v}\otimes\mathbf{u}:\nabla\mathbf{U}]\,d\mathbf{x}
$$
for every $\mathbf{v}\in\mathbb{D}$. If we set $\mathbf{w}=E'(\mathbf{u})\in\mathbb{D}$, then
$$
\int_\Omega[\nabla\mathbf{w}\cdot\nabla \mathbf{v}+\nu\nabla\mathbf{U}\cdot\nabla\mathbf{v}+(\nabla\mathbf{U}\mathbf{u}+\mathbf{u}\nabla\mathbf{U})\cdot\mathbf{v}]\,d\mathbf{x}=0,
$$
for every $\mathbf{v}\in\mathbb{D}$. In particular, if we plug $\mathbf{v}=\mathbf{U}$ in, bearing in mind that due to \eqref{identidades} the last two terms drop out, we are left with
$$
\nu\|\nabla\mathbf{U}\|^2=-\langle\nabla\mathbf{w}, \nabla\mathbf{U}\rangle,
$$
or
$$
\nu\|\nabla\mathbf{U}\|\le\|\nabla\mathbf{w}\|.
$$
If the term on the right-hand side, which is $\|E'(\mathbf{u})\|$, tends to zero, so does the one on the left-hand side, which is $\nu\sqrt{2E(\mathbf{u})}$. This shows the second basic property of an error functional.
As a result, Proposition \ref{basicass} can be applied.
\begin{theorem}
If $\Omega\subset\mathbb{R}^N$, $N\le4$, is a bounded, Lipschitz, connected domain, and $\nu>0$ and $\mathbf{f}\in H^{-1}(\Omega; \mathbb{R}^N)$ in the steady Navier-Stokes system \eqref{navsto} are such that the quotient
$\|\mathbf{f}\|/\nu^2$ is sufficiently small, then there is a unique weak solution $\mathbf{u}$ in $\mathbb{D}$, and
$$
\|\mathbf{u}-\mathbf{v}\|^2_{H^1_0(\Omega; \mathbb{R}^N)}\le CE(\mathbf{v})
$$
provided $E(\mathbf{v})$ is sufficiently small.
\end{theorem}
\end{document} |
\begin{document}
\title{On locally convex PL-manifolds \\and fast verification of convexity}
\author{Konstantin Rybnikov\\
[email protected]\\
http://faculty.uml.edu/krybnikov}
\date{\today}
\maketitle
\centerline{Short Version}
\begin{abstract}
We show that a PL-realization of a closed connected manifold of
dimension $n-1$ in $\mathbb{R}^n\:(n \ge 3)$ is the boundary of a
convex polyhedron if and only if the interior of each
$(n-3)$-face has a point, which has a neighborhood lying on the
boundary of a convex $n$-dimensional body. This result is derived from a
generalization of Van Heijenoort's theorem on locally convex
manifolds to the spherical case. Our convexity criterion
for PL-manifolds implies an easy polynomial-time algorithm for
checking convexity of a given PL-surface in $\mathbb{R}^n$.
\end{abstract}
There are a number of theorems that infer global convexity from local convexity. The oldest one belongs to Jacques Hadamard (1897) and asserts that any compact smooth surface embedded in $\R^3$ with strictly positive Gaussian curvature is the boundary of a convex body. Local convexity can be defined in many different ways (see van Heijenoort (1952) for a survey). We will use Bouligand's (1932) notion of local convexity. In this definition a surface $M$ in the affine space $\mathbb{R}^n$ is called locally convex at a point ${\bf p}$ if ${\bf p}$ has a neighborhood which lies on the boundary of a convex $n$-dimensional body $K_{\bf p}$; if, in addition, $K_{\bf p}\backslash\{{\bf p}\}$ lies in an open half-space defined by a hyperplane containing ${\bf p}$, then $M$ is called strictly convex at ${\bf p}$.
This paper is mainly devoted to local convexity of piecewise-linear (PL) surfaces, in particular, polytopes. A PL-surface in $\mathbb{R}^n$ is a pair $M=({\cal M},r)$, where ${\cal M}$ is a topological manifold with a fixed cell-partition and $r$ is a continuous {\it realization} map from ${\cal M}$ to $\mathbb{R}^n$ that satisfies the following conditions:
\par \noindent 1) $r$ is a bijection on the closures of all cells of ${\cal M}$;
\par \noindent 2) for each $k$-cell $C$ of $\cM$ the image $r(C)$ lies on a $k$-dimensional affine subspace of $\mathbb{R}^n$; $r(C)$ is then called a $k$-face of $M$.
\par Thus, $r$ need not be an immersion, but its restriction to the closure of any cell of ${\cal M}$ must be. By a fixed cell-partition of ${\cal M}$ we mean that ${\cal M}$ has a structure of a CW-complex where all gluing mappings are homeomorphisms (such complexes are called \emph{regular} by J.H.C. Whitehead). \emph{All cells and faces are assumed to be open.} We will also call $M=({\cal M},r)$ a PL-realization of ${\cal M}$ in $\mathbb{R}^n$.
\begin{definition}
\emph{We say that $M=({\cal M},r)$ is the boundary of a convex body $P$ if $r$ is a homeomorphism between ${\cal M}$ and ${\partial P}$.}
\end{definition}
Hence, we exclude the cases when $r(\cM)$ coincides with the boundary of a convex set, but $r$ is not injective. Of course, the algorithmic and topological sides of this case are rather important for computational geometry and we will consider them in further works. Notice that for $n>2$ a closed $(n-1)$-manifold $\cM$ cannot be immersed into $\R^n$ by a
non-injective map $r$ so that $r(\cM)$ is the boundary of a convex set, since any covering space of a simply connected manifold must be simply connected. However, such immersions are possible in the hyperbolic space $\HH^n$.
Our main theorem asserts that any closed PL-surface $M$ immersed in $\mathbb{R}^n \:(n \ge 3)$ with at least one point of strict convexity, and such that each $(n-3)$-cell has a point at which $M$ is locally convex, is convex. Notice that if the last condition holds for some point on an $(n-3)$-face, it holds for all points of this face.
This theorem implies a test for global convexity of PL-surfaces: check local convexity on each of the $(n-3)$-faces. Notice that if for all $k$ and every $k$-face $F$ there is an $(n-k-1)$-sphere $\SSS$, lying in a complementary subspace and centered at some point of $F$, such that $\SSS \cap M$ is a convex surface, then $r$ is an immersion. The algorithm implicitly checks whether a given realization is an immersion, and reports ``not convex'' if it is not.
The pseudo-code for the algorithm is given in this article. The complexity of this test depends on the way the surface is given as input data. Assuming we are given either the coordinates of the vertices and the poset of faces of dimensions $n-1$, $n-2$, $n-3$ and $0$, or the equations of the facets and the poset of faces of dimensions $n-1$, $n-2$, and $n-3$, the complexity of the algorithm for a general closed PL-manifold is $O(f_{n-3,n-2})=O(f_{n-3,n-1})$, where $f_{k,l}$ is the number of incidences between cells of dimension $k$ and $l$.
If the vertices of the manifold are assumed to be in sufficiently general position, then the dimension of the space does not affect the complexity at all. Another advantage of this algorithm is that it consists of $f_{n-3}$ independent subroutines, one for each $(n-3)$-face, each with complexity linear in the number of $(n-1)$-cells incident to that $(n-3)$-face.
The complexity of our algorithm is asymptotically equal to the complexity of algorithms suggested by Devillers et al (1998) and Mehlhorn et al (1999) for simplicial 2-dimensional surfaces; for $n>3$ our algorithm is asymptotically faster than theirs. These authors verify convexity not by checking it locally at $(n-3)$-faces, but by different, rather global methods (their notion of local convexity is, in fact, a global notion). Devillers et al (1998) and Mehlhorn et al (1999) make much stronger initial assumptions about the input, such as the orientability of the input surface; they also presume that for each $(n-1)$-face of the surface an external normal is given, and that the directions of these normals define an orientation of the surface. Then they call the surface locally convex at an $(n-2)$-face $F$ if the angle between the normals of two $(n-1)$-faces adjacent to $F$ is obtuse. Of course this notion of ``local'' convexity is not local.
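For concreteness, here is a minimal sketch (in Python, with illustrative names not taken from any of the cited papers) of the ridge-based test just described: two facets adjacent to an $(n-2)$-face are consistent with convexity exactly when each lies on the non-positive side of the other's supporting hyperplane.

```python
def dot(u, v):
    # Euclidean inner product of two coordinate tuples.
    return sum(a * b for a, b in zip(u, v))


def locally_convex_at_ridge(other_normal, point_on_facet, ridge_point, tol=1e-12):
    """One-sided convexity test across an (n-2)-face (a ridge).

    other_normal   -- outward normal of the facet on the other side of the ridge
    point_on_facet -- a point of this facet not lying on the ridge
    ridge_point    -- any point on the common ridge

    Returns True when this facet lies on the non-positive side of the
    other facet's supporting hyperplane, i.e. when the dihedral angle
    at the ridge is consistent with convexity.
    """
    offset = tuple(a - b for a, b in zip(point_on_facet, ridge_point))
    return dot(other_normal, offset) <= tol


# Cube edge between the top facet (normal (0,0,1)) and a side facet
# (normal (1,0,0)): the configuration is convex.
print(locally_convex_at_ridge((1.0, 0.0, 0.0), (0.0, 0.0, 1.0), (1.0, 0.0, 1.0)))  # prints True

# A reflex "valley" along the y-axis: facets z = x (x >= 0) and z = -x
# (x <= 0) with upward outward normals fail the test.
s = 2 ** -0.5
print(locally_convex_at_ridge((s, 0.0, s), (1.0, 0.0, 1.0), (0.0, 0.0, 0.0)))  # prints False
```

Running such a test over all ridges of an oriented PL-surface gives a check in the spirit of Devillers et al and Mehlhorn et al; the criterion of the present paper instead examines the $(n-3)$-faces.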
The main theorem is deduced from a direct generalization of van Heijenoort's theorem to the spherical case.
Van Heijenoort's theorem asserts that an immersion in $\mathbb{R}^n$ of any closed connected manifold ${\cal M}$, which is locally convex at all points, strictly locally convex in at least one point, and is complete with respect to the metric induced by $r$, is the boundary of a convex $n$-dimensional set. Van Heijenoort (1952) noticed that for $n=3$ his theorem immediately follows from four theorems contained in Alexandrov (1948); however, according to van Heijenoort, Alexandrov's methods do not extend to $n>3$, and his approach is technically more complicated. We show that this theorem also holds for spheres, but not for the hyperbolic space. While all notions of affine convexity can be obviously generalized to the hyperbolic space, there are two possible generalizations in the spherical case; neither of these generalizations is perfect. The main question is whether we want all geodesics joining points of a convex set $S$ to be contained in $S$, or at least one. In the first case subspheres are not convex; in the second case two convex sets can intersect in a non-convex set. The latter problem can be solved by requiring a convex set to be open, but this is not very convenient, since, again, it excludes subspheres. We call a set $S$ in $\X^n$ convex if for any two points $\p,\q \in S$ there is some geodesic $[\p,\q] \subset S$.
\begin{proposition}
If the intersection $I$ of two convex sets in $\SSS^n$, $n>0$ is not convex, then $I$ contains two opposite points.
\end{proposition}
To have unified terminology we will call subspheres subspaces.
Besides the algorithmic implications, our generalization implies that any $(n-3)$-simple PL-surface in $\mathbb{R}^n$ with convex facets is the boundary of a convex polyhedron.
\section{van Heijenoort-Alexandrov's Theorem \\for Spaces of Constant Curvature}
Throughout the paper $\X^n$ denotes $\R^n$, $\SSS^n$, or $\HH^n$.
Following van Heijenoort's original proof, we will now show that his theorem holds in a somewhat stronger form for $\SSS^n$ for $n>2$. Van Heijenoort's theorem does not hold for unbounded surfaces in $\HH^n$. We will give three different kinds of counterexamples and pose a conjecture about simply connected locally compact embeddings of manifolds in $\HH^n$.
Imagine a ``convex strip'' in 3D which is bent in the form of a handwritten $\varphi$ so that it intersects itself, but not locally. Consider the intersection of this strip with a ball of appropriate radius, so that the self-intersection of the strip happens to be inside the ball and the boundary of the strip outside. Regarding the interior of the ball as Klein's model of $\HH^3$, we conclude that the constructed surface is strictly locally convex at all points and has a complete metric induced by the immersion into the hyperbolic space. This gives an example of an \emph{immersion of a simply connected manifold} into $\HH^n$ which does not bound a convex surface. Notice that in this counterexample the surface intersects itself.
Consider the (affine) product of a non-convex quadrilateral, lying inside a unit sphere centered at the origin, and a line in $\R^n$. The result is a non-convex polyhedral cylindrical surface $P$. Pick a point $\p$ inside the sphere, but outside the cylinder, whose vertical projection on the cylinder is the affine center of one of its facets $F$. Replace this facet of $P$ with the cone, with apex $\p$, over $F \cap \{\x | \|\x\| \le 1\}$. The part of the resulting polyhedral surface that lies inside the unit sphere is indeed a PL-surface \emph{embedded} in $\HH^n$ (in Klein's model). The surface is locally convex at every point and strictly convex at $\p$. However, it is not the boundary of a convex body. Notice that this surface is \emph{not simply connected}.
Consider a locally convex spiral, embedded in the $yz$-plane, with two limiting sets: the circle $\{\p | x=0, y^2+z^2=1 \}$ and the origin. That is, this spiral coils around the origin and also (from inside) around the circle. Let $M$ be the double cone over this spiral with apexes at $(1,0,0)$ and $(-1,0,0)$, intersected with the unit ball $\{\x | \|\x\| < 1\}$. This \emph{non-convex} surface is obviously \emph{simply connected, embedded,} locally convex at every point, and strictly convex at all points of the spiral.
A locally compact realization $r$ of $\cM$ is a realization such that for any compact subset $C$ of $\X^n$, the set $C \cap r(\cM)$ is compact. The question remains:
\begin{problem} Is it true that any locally compact embedding of a simply connected surface in $\HH^n$ is convex?
\end{problem}
It remains an open question whether van Heijenoort's (1952) criterion works for \emph{embedded} \emph{unbounded} surfaces in $\HH^n$. We conjecture that it is, indeed, the case. The proof of the main theorem makes use of quite a number of technical propositions and lemmas. The proofs of these statements for $\R^n$ can, for the most part, be directly repeated for $\X^n$, but in some situations extra care is needed. If the reader is referred to van Heijenoort's paper for a proof, it means that the original proof works without any changes.
\underline{Notation:} The calligraphic font is used for sets in the abstract topological manifold. The regular mathematics font is used for the images of these sets in $\X^n$, a space of constant curvature. The interior of a set $S$ is denoted by $(S)$, while the closure by $[S]$. The boundary of S is denoted by $\partial S$. Since this paper is best read together with van Heijenoort's (1952) paper, we would like to explain the differences between his and our notations. van Heijenoort denotes a subset in the abstract manifold $\overline{M}$ by $\overline{S}$, while denoting its image in $\mathbb{R}^n$ by $S$; an interior of a set $S$ in $\mathbb{R}^n$ is denoted in his paper by $\dot{S}$.
The immersion $r$ induces a metric on ${\cal M}$ by \[ d(p,q)=\GLB\limits_{arc(p,q) \subset {\cal M}}\{|r(arc(p,q))|\} \]
where $\GLB$ stands for the greatest lower bound, and $|r(arc(p,q))|$ denotes the length of an arc joining $\p$ and $\q$ on $M$, which is the $r$-image of an arc joining these points on ${\cal M}$. We will call this metric the $r$-metric.
\begin{lemma}\label{arcwise} (van Heijenoort) Any two points of ${\cal M}$ can be connected by an arc of a finite length. Thus ${\cal M}$ is not only connected, but also arcwise connected.
\end{lemma}
\begin{lemma} (van Heijenoort) The metric topology defined by the $r$-metric is equivalent to the original topology on ${\cal M}$.
\end{lemma}
\begin{lemma} (van Heijenoort) $r(S)$ is closed in $\X^n$ for any closed subset $S$ of ${\cal M}$.
\end{lemma}
\begin{lemma} (van Heijenoort) If on a bounded (in $r$-metric) closed subset $S \subset {\cal M}$ mapping $r$ is one-to-one, then $r$ is a homeomorphism between $S$ and $r(S)$.
\end{lemma}
The proofs of the last two lemmas have been omitted in van Heijenoort (1952), but they are well known in topology.
\begin{theorem}\label{main}
Let $\X^n$ ($n>2$) be a Euclidean, spherical, or hyperbolic space. Let $M=({\cal M},r)$ be an immersion of an $(n-1)$-manifold ${\cal M}$ in $\X^n$, such that $r(\cM)$ is bounded in $\X^n$. Suppose that $M=({\cal M},r)$ satisfies the following conditions:
\noindent 1) ${\cal M}$ is complete with respect to the metric induced on ${\cal M}$ by the immersion $r$,
\noindent 2) ${\cal M}$ is connected,
\noindent 3) $M$ is locally convex at each point,
\noindent 4) $M$ is strictly convex in at least one point.
Then $r$ is a homeomorphism from ${\cal M}$ onto the boundary of a compact convex body.
\end{theorem}
\begin{proof}
Notice that our theorem for $\X^n=\HH^n$ directly follows from van Heijenoort's proof of the Euclidean case. Any immersion of $\cM$ into $\HH^n$ can be regarded as an immersion into the interior of a unit ball with a hyperbolic metric, according to Klein's model. If conditions 1)-4) are satisfied for the hyperbolic metric, they are satisfied for the Euclidean metric on this ball. Geodesics in Klein's model are straight line segments and, therefore, for a bounded closed surface in $\HH^n$ that satisfies the conditions of the theorem, the convexity follows from the Euclidean version of this theorem.
Van Heijenoort's original proof is based on the notion of convex part. A {\it convex part} of $M$, centered at a point of strict convexity $\oo=r(o)$, $o \in \cM$, is an open connected subset $C$ of $r({\cal M})$ that contains $\oo$ and such that: (1) $\partial C= H \cap r({\cal M})$, where $H$ is a hyperplane in $\X^n$, not passing through $\mathbf{o}$, (2) $C$ lies on the boundary of a closed convex body $K_C$ bounded by $C$ and $H$. We call $H \cap K_C$ the lid of the convex part. Let $H_0$ be a supporting hyperplane at $\oo$. We call the \emph{open} half-space defined by $H_0$, where the convex part lies, the \emph{positive half-space} and denote it by $H^+_0$. We call the $r$-preimage of a convex part $C$ in ${\cal M}$ an abstract convex part, and denote it by ${\cal C}$. In van Heijenoort's paper $H$ is required to be parallel to the supporting hyperplane $H_0$ of $r({\cal M})$ at $\oo$, but this is not essential. In fact, we just need a family of hyperplanes such that: (1) they do not intersect in the positive half-space, (2) the intersections of these hyperplanes with the positive half-space form a partition of the positive half-space, (3) all these hyperplanes are orthogonal to a line $l$, passing through $\oo$. Let us call such a family a fiber bundle $\{H_z\}_{(l,H_0)}$ of the positive half-space defined by $l$ and $H_0$. (In the case of $\X^n=\R^n$ it is a vector bundle.) In fact, it is not necessary to assume that $l$ passes through $\oo$, but this assumption simplifies our proofs. Here $z>0$ denotes the distance, \emph{along the line $l$}, between the hyperplane $H_z$ in this family and $H_0$. We will call $z$ the height of $H_z$.
\begin{proposition}
A convex part exists.
\end{proposition}
\begin{proof}van Heijenoort's proof works for $\X^n$, $n>2$, without changes.
\end{proof}
Denote by $\zeta$ the least upper bound of the set of heights of the lids of convex parts centered at $\oo$ and defined by some fixed fiber bundle $\{H_z\}_{(l,H_0)}$. Since $r(\cM)$ is bounded, $\zeta < \infty$.
Consider the union $G$ of all convex parts, centered at $\oo$. We want to prove that this union is also a convex part. Let us depart for a short while (this paragraph) from the assumption that $r(\cM)$ is bounded. $G$ may only be unbounded in the hyperbolic and Euclidean cases. As shown by van Heijenoort (1952), if $\X^n=\R^n$ and $\zeta < \infty$, $G$ must be bounded even when $r(\cM)$ is allowed to be unbounded. If $\X^n=\HH^n$ and $\zeta < \infty$, $G$ can be unbounded, and this is precisely the reason why van Heijenoort's theorem does not hold for unbounded surfaces in hyperbolic spaces.
Since in this theorem $r(\cM)$ is assumed to be bounded, $G$ is bounded.
Let us presume from now on that $\X^n=\SSS^n$ (the case of $\X^n=\HH^n$ is considered in the beginning of the proof).
$\partial G$ belongs to the hyperplane $H_{\zeta}$ and is equal to $H_{\zeta} \cap M$. $\partial G$ bounds a closed bounded convex set $D$ in $H_{\zeta}$. Two mutually excluding cases are possible.
Case 1: $\dim D<n-1$. Then, following the argument of van Heijenoort (Part 2: pages 239-230, Part 5: page 241, Part 3: II on page 231), we conclude that $G \cup D$ is the homeomorphic $r$-image of an $(n-1)$-sphere ${\cal G} \cup {\cal D} \subset {\cal M}$. Since ${\cal M}$ is connected, ${\cal G} \cup {\cal D} = {\cal M}$, and ${\cal M}$ is a convex surface.
Case 2: $\dim D=n-1$. The following lemma is a key part of the proof of the main theorem. Roughly speaking, it asserts that
if the lid of a convex part is of co-dimension 1, then either this convex part is a subset of a bigger convex part, or this convex part, together with the lid, is homeomorphic to $\cM$ via mapping $r$.
\begin{lemma}\label{alternative} Suppose $\X^n=\SSS^n$. Let $C$ be a convex part centered at a point $\oo$ and defined by a hyperplane $H_z$ from a fiber bundle $\{H_z\}_{(l,H_0)}$. Suppose $B=\partial C$ is the boundary of an $(n-1)$-dimensional closed convex set $S$ in $H_z$. Either $S$ is the $r$-image of an $(n-1)$-disk ${\cal S}$ in ${\cal M}$ and ${\cal M}={\cal C} \cup {\cal S}$, where ${\cal C}$ is the abstract convex part with $r({\cal C})=C$, or $C$ is a proper subset of a larger convex part, defined by the same fiber bundle $\{H_z\}_{(l,H_0)}$.
\end{lemma}
\begin{proof} Using a perturbation argument, we will prove this lemma by reducing the spherical case to the Euclidean one.
Since $S$ is $(n-1)$-dimensional and belongs to one of the hyperplanes in the fiber bundle $\{H_z\}_{(l,H_0)}$, $[\conv C] \cap H_0$ is either empty or $(n-1)$-dimensional. If it is non-empty, $[\conv C] \cap H_0$ must have a point other than $\oo$ and its opposite. The closure of a convex set in $\X^n$ is convex. Since $[\conv C ]$ is convex, if it contains a point $\p$ of $H_0$ other than $\oo$ and its opposite, it contains some geodesic segment $[\oo \p]$ \emph{lying in} $H_0$. Since $\oo$ is a point of strict convexity, there is a neighborhood of $\oo$ on $\oo \p$ all of whose points, except for $\oo$, are not points of $[\conv C]$, which contradicts the choice of $[\oo \p]$.
Hence $[\conv C] \cap H_0$ is empty. Since, by Lemma \ref{arcwise}, $r(M)$ is arcwise connected,
all of $[\conv C]$, except for the point $\oo$, lies in the positive subspace. Therefore, there is a hyperplane $H$ in $\SSS^n$ such that $[C]$ lies in an open halfspace $H_+$ defined by $H$. We can regard $\SSS^n$ as a standard sphere in $\R^{n+1}$. $H$ defines a hyperplane in $\R^{n+1}$. Consider an $n$-dimensional plane $E_n$ in $\R^{n+1}$ parallel to this hyperplane and not passing through the origin. Central projection $r_1$ of $M \cap H_+$ onto $E_n$ obviously induces an immersion $r_1r$ of a submanifold $\cM^{\prime}$ of $\cM$ into $E_n$.
This submanifold $\cM^{\prime}$ is defined as the maximal arcwise connected open subset of $\cM$ such that (1) all points of this subset are mapped by $r$ to $H_+$, and (2) it contains $\oo$. It is obviously a manifold. Let us prove that it exists: consider the union of all open arcwise connected subsets satisfying (1) that contain $\oo$. This union is open, satisfies (1), and is arcwise connected, since each of these subsets contains $\oo$. Let $M^{\prime}=(\cM^{\prime}, r_1r)$.
The immersion $r_1r$ obviously satisfies Conditions 2-4 of the main Theorem \ref{main}. $r_1r$ defines a metric on $\cM^{\prime}$. Any Cauchy sequence on $\cM^{\prime}$ under this metric is also a Cauchy sequence on $\cM$ under the metric induced by $r$. Therefore $\cM^{\prime}$ is complete and satisfies the conditions of the main Theorem \ref{main}. The central projection onto $E_n$ maps a spherical convex part of $M$ onto a Euclidean convex part of $M^{\prime}$; it also maps the fiber bundle $\{H_z\}_{(l,H_0)}$ to a fiber bundle in the Euclidean $n$-plane $E_n$. van Heijenoort (1952) proved Lemma \ref{alternative} for the Euclidean case. Therefore, either ${\cal M}={\cal C} \cup {\cal S}$, or $C$ is a proper subset of a larger convex part centered at $\oo$ and defined by the same fiber bundle $\{H_z\}_{(l,H_0)}$.
\end{proof}
The second alternative ($C$ is a subset of a larger convex part) is obviously excluded, since $C$ is the convex part corresponding to the height which is the least upper bound of all possible heights of convex parts. Therefore in Case 2 ${\cal M}$ is the boundary of a convex body which consists of a maximal convex part and a convex $(n-1)$-disk, lying in the hyperplane $H_{\zeta}$.
\end{proof}
\section{Locally convex PL-surfaces}
\begin{theorem}\label{strict} Let $r$ be a realization map from a \emph{compact} connected manifold ${\cal M}$ of dimension $n-1$ into $\mathbb{X}^n$ ($n>2$) such that $\cM$ is complete with respect to the $r$-metric. Suppose that $M=(\cM,r)$ is locally convex at all points. Then $M=(\cM,r)$ is either strictly locally convex in at least one point, or is a spherical hypersurface of the form $\mathbb{S}^n \cap \partial C$, where $C$ is a convex cone in $\mathbb{R}^{n+1}$, whose face of the smallest dimension contains the origin (in particular, $C$ may be a hyperplane in $\mathbb{R}^{n+1}$).
\end{theorem}
\begin{proof} The proof of this rather long and technical theorem will be included in the full length paper of Rybnikov (200X).
\end{proof}
\begin{theorem}\label{PL-case} Let $r$ be a realization map from a closed connected $(n-1)$-dimensional manifold ${\cal M}$, with a regular CW-decomposition, in $\mathbb{R}^n$ or $\SSS^n$ ($n>2$) such that on the closure of each cell $C$ of ${\cal M}$ the map $r$ is one-to-one and $r(C)$ lies on a subspace of dimension equal to $\dim C$. Suppose that $r({\cal M})$ is strictly locally convex in at least one point. The surface $r({\cal M})$ is the boundary of a convex polyhedron if and only if each $(n-3)$-face has a point with an $M$-neighborhood which lies on the boundary of a convex $n$-dimensional set.
\end{theorem}
\begin{proof}
$M$ is locally convex at all points of its $(n-3)$-cells. Suppose we have shown that $M$ is locally convex at each $k$-face, $0<k \le n-3$. Consider a $(k-1)$-face $F$. Consider the intersection of $\Star(F)$ with a sufficiently small $(n-k)$-sphere ${\mathbb S}$ centered at some point $\p$ of $F$ and lying in a subspace complementary to $F$. $M$ is locally convex at $F$ if and only if the hypersurface ${\mathbb S} \cap \Star(F)$ on the sphere ${\mathbb S}$ is convex. Since $M$ is locally convex at each $k$-face, the hypersurface ${\mathbb S} \cap \Star(F)$ is locally convex at each vertex. By Theorem \ref{strict}, ${\mathbb S} \cap \Star(F)$ is either the intersection of the boundary of a convex cone with ${\mathbb S}$ or has a point of strict convexity on the sphere ${\mathbb S}$. In the latter case the spherical generalization of van Heijenoort's theorem implies that ${\mathbb S} \cap \Star(F)$ is convex. Thus $M$ is convex at $\p$ and therefore at all points of $F$.
This induction argument shows that $M$ must be locally convex at all vertices. Since it is locally convex at all vertices, it is locally convex at all points. We assumed that $M$ has a point of strict convexity, and the metric induced by $r$ is complete. By van Heijenoort's theorem and Theorem \ref{main}, $M$ is the boundary of a convex polyhedron.
\end{proof}
\section{New Algorithm for Checking Global \\Convexity of PL-surfaces}
\underline{Idea:} check convexity for the star of each $(n-3)$-cell of $M$.
We present an algorithm for checking the convexity of a PL-realization $M=({\cal M},r)$ (in the sense outlined above) of a closed compact manifold ${\cal M}$.
The main algorithm uses an auxiliary algorithm C-check. The input of this algorithm is
a pair $(T,{\cal T})$, where ${\cal T}$ is a one-vertex tree with a cyclic orientation of edges and $T$ is its rectilinear realization in 3-space. This pair can be thought of as a PL-realization of a plane fan (a partition of the plane into cones with common origin) in 3-space. The output is 1 if this realization is convex, and 0 otherwise. This question is equivalent to verifying the convexity of a plane polygon: for the plane of reference we choose a plane perpendicular to the sum of all unit vectors directed along the edges of the fan. The latter question can be resolved in time linear in the number of edges of the tree (e.g., see Devillers et al (1998), Mehlhorn et al (1999)).
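The reduction performed by C-check can be sketched in code. The sketch below is only illustrative (it is not the implementation referenced above, and all helper names are hypothetical): it takes the cyclically ordered edge directions of the fan, projects them onto the reference plane perpendicular to the sum of the unit edge vectors, and declares the fan convex when all consecutive turns of the projected polygon have the same orientation. Degenerate configurations are not handled.

```python
from math import sqrt

def _sub(a, b): return tuple(x - y for x, y in zip(a, b))
def _dot(a, b): return sum(x * y for x, y in zip(a, b))
def _cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])
def _unit(v):
    n = sqrt(_dot(v, v))
    return tuple(x / n for x in v)

def c_check(edge_dirs):
    """Return 1 if the fan realization with the given cyclically ordered
    edge directions is convex, 0 otherwise (simplified sketch)."""
    units = [_unit(v) for v in edge_dirs]
    # Reference direction: sum of the unit vectors along the edges of the fan.
    axis = _unit(tuple(sum(c) for c in zip(*units)))
    # Project each edge direction onto the plane perpendicular to `axis`.
    pts = [_sub(u, tuple(_dot(u, axis) * a for a in axis)) for u in units]
    # Convexity of the projected polygon: every consecutive turn, measured by
    # the component of the cross product along `axis`, must have the same sign.
    m = len(pts)
    signs = []
    for i in range(m):
        e1 = _sub(pts[(i + 1) % m], pts[i])
        e2 = _sub(pts[(i + 2) % m], pts[(i + 1) % m])
        signs.append(_dot(axis, _cross(e1, e2)))
    return 1 if all(s > 0 for s in signs) or all(s < 0 for s in signs) else 0
```

For the edge directions of a round cone the check succeeds; permuting the cyclic order destroys convexity of the projected polygon and the check fails.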
\underline{Input and Preprocessing:} The poset of faces of dimensions $n-3,n-2$, and $n-1$ of ${\cal M}$ and the equations of the facets, OR the poset of faces of dimensions $n-3,n-2,n-1$, and $0$ of ${\cal M}$ and the positions of the vertices. We assume that we know the correspondence between the rank of a face in the poset and its dimension. There are mutual links between the facets (or vertices) of ${\cal M}$ in the poset and the records containing their realization information. All $(n-3)$-faces of ${\cal M}$ are put into a stack $S_{n-3}$.
There are mutual links between elements of this stack and corresponding elements of the face lattice of ${\cal M}$.
\underline{Output:} YES, if $r({\cal M})$ is the boundary of a convex polyhedron, NO otherwise.
\begin{tabbing}
1. {\bf while} $S_{n-3}$ is not empty, pick an $(n-3)$-face $F$ from $S_{n-3}$;\\
2. compute the projection of $F$, and of all $(n-2)$-faces incident to $F$,\\
~~~onto an affine 3-plane complementary to $F$; denote this projection by $\PStar(F)$;\\
3. compute the cyclically ordered one-vertex tree ${\cal T}(F)$, whose edges\\
~~~are the $(n-2)$-faces of $\PStar(F)$ and whose vertex is $F$; \\
4. Apply to $(\PStar(F),{\cal T}(F))$ the algorithm C-check\\
~~~~~{\bf if} C-check$(\PStar(F),{\cal T}(F))=1$ \= {\bf then} remove $F$ from the stack $S_{n-3}$ \\
\> {\bf else} Output:=NO; terminate \\
~~~{\bf endwhile}\\
5. Output:=YES \\
\end{tabbing}
\begin{remark} The algorithm processes the stars of all $(n-3)$-faces independently. On a parallelized computer the stars of all $(n-3)$-faces can be processed in parallel.
\end{remark}
\textbf{Proof of Correctness.} The algorithm checks the local convexity of $M$ at the stars of all $(n-3)$-cells. Since $\cM$ is compact and closed, by the Krein--Milman theorem (or Theorem \ref{strict}) $M$ has at least one strictly convex vertex. By Theorem \ref{PL-case}, local convexity at all vertices, together with the existence of at least one strictly convex vertex, is necessary and sufficient for $M$ to be the boundary of a convex body.
\underline{\bf Complexity estimates}
Denote by $f_k$ the number of $k$-faces of ${\cal M}$, and by $f_{k,l}$ the number of incidences between $k$-faces and $l$-faces in ${\cal M}$. Step 1 is repeated at most $f_{n-3}$ times. Steps 2-4 take at most $\const \cdot f_{n-2,n-3}(\Star(F))$ arithmetic operations for each $F$, where $\const$ does not depend on $F$. Thus, steps 2-4, repeated for all $(n-3)$-faces of ${\cal M}$, require $O(f_{n-2,n-3})$ operations. Therefore, the total number of operations for this algorithm is $O(f_{n-2,n-3})$.
\begin{remark} The algorithm does not use all of the face lattice of ${\cal M}$.
\end{remark}
\begin{remark} The algorithm requires computing polynomial predicates only. The highest degree of algebraic predicates that the algorithm uses is $d$, which is optimal (see Devillers et al, 1998).
\end{remark}
From a practical point of view, it makes sense to say that a surface $M$ is almost convex if it lies within a small Hausdorff distance from a convex surface $S$ that bounds an $n$-dimensional convex set $B$. In this case, the measure of lines that pass through interior points of $B$ and intersect $S$ in more than 2 points will be small compared to the measure of all lines passing through interior points of $B$. These statements can be given a rigorous meaning in the language of integral geometry, also called ``geometric probability'' (see Klain and Rota 1997).
\begin{remark} If there is a 3-dimensional coordinate subspace $L$ of $\R^n$ such that all the subspaces spanned by $(n-3)$-faces are complementary to $L$, the polyhedron can be projected on $L$ and all computations can be done in 3-space. This reduces the degree of predicates from $d$ to 3. In this case the boolean complexity of the algorithm does not depend on the dimension at all. Therefore, for sufficiently generic realizations the algorithm has degree 3 and complexity not depending on $n$.
\end{remark}
\begin{remark} This algorithm can also be applied without changes to compact PL-surfaces in $\SSS^n$ or $\HH^n$.
\end{remark}
\end{document} |
\begin{document}
\author[1]{Alex J. Chin}
\author[2]{Gary Gordon}
\author[3]{Kellie J. MacPhee}
\author[2]{Charles Vincent}
\affil[1]{North Carolina State University, Raleigh, NC}
\affil[2]{Lafayette College, Easton, PA}
\affil[3]{Dartmouth College, Hanover, NH}
\title{Random subtrees of complete graphs}
\maketitle
\begin{abstract} We study the asymptotic behavior of four statistics associated with subtrees of complete graphs: the uniform probability $p_n$ that a random subtree is a spanning tree of $K_n$, the weighted probability $q_n$ (where the probability a subtree is chosen is proportional to the number of edges in the subtree) that a random subtree spans, and the two expectations associated with these two probabilities. We find $p_n$ and $q_n$ both approach $e^{-e^{-1}}\approx .692$, while both expectations approach the size of a spanning tree, i.e., a random subtree of $K_n$ has approximately $n-1$ edges.
\end{abstract}
\section{Introduction} We are interested in the following two questions:
\begin{center}
\begin{itemize}
\item [Q1.] What is the asymptotic probability that a random subtree of $K_n$ is a spanning tree?
\item [Q2.] How many edges (asymptotically) does a random subtree of $K_n$ have?
\end{itemize}
\end{center}
In answering both questions, we consider two different probability measures: a uniform random probability $p_n$, where each subtree has an equal probability of being selected, and a weighted probability $q_n$, where the probability a subtree is selected is proportional to its size (measured by the number of edges in the subtree).
As expected, weighting subtrees by their size increases the chances of selecting a spanning tree, i.e., $p_n<q_n$. Table~\ref{globalprobdata} gives data for these values when $ n \leq 100$.
\begin{table}[htdp]
\begin{center}
\begin{tabular}{c|ll}
$n$ & $p_n$ & $q_n$ \\ \hline
10 & 0.617473 & 0.652736 \\
20 & 0.657876 & 0.672725 \\
30 & 0.669904 & 0.679294 \\
40 & 0.675689 & 0.682552 \\
50 & 0.67909 & 0.684497 \\
60 & 0.681329 & 0.685789 \\
70 & 0.682915 & 0.686711 \\
80 & 0.684097 & 0.687401 \\
90 & 0.685012 & 0.687936 \\
100 & 0.685741 & 0.688365 \\
\end{tabular}
\caption{Probabilities of selecting a spanning tree using uniform and weighted probabilities.}
\label{globalprobdata}
\end{center}
\end{table}
The (somewhat) surprising result is that $p_n$ and $q_n$ approach the same limit as $n \to \infty$. This is our first main result, completely answering Q1.
\begin{thm}\label{T:main1}
\begin{enumerate}
\item Let $p_n$ be the probability of choosing a spanning tree among all subtrees of $K_n$ with uniform probability, i.e., the probability any subtree is selected is the same. Then $$\lim_{n \to \infty} p_n=e^{-e^{-1}}=0.692201\dots$$
\item Let $q_n$ be the probability of choosing a spanning tree among all subtrees of $K_n$ with weighted probability, i.e., the probability any subtree is selected is proportional to its number of edges. Then $$\lim_{n \to \infty} q_n=e^{-e^{-1}}=0.692201\dots$$
\end{enumerate}
\end{thm}
For the second question Q2, the expected number of edges of a random subtree of $K_n$ is $\sum pr(T) |E(T)|$, where $pr(T)$ is the probability a tree $T$ is selected, $E(T)$ is the edge set of $T$, and the sum is over all subtrees $T$ of $K_n$. Since we have two distinct probability functions $p_n$ and $q_n$, we obtain two distinct expected values.
The relation between these two expected values is equivalent to a famous example from elementary probability:
\begin{quote}
All universities report ``average class size.'' However, this average depends on whether you first choose a class at random, or first select a student at random, and then ask that student to randomly select one of their classes.
\end{quote}
Our uniform expectation is exactly analogous to the first situation, and our weighted expectation is equivalent to the student weighted average. In this context, edges play the role of students and the subtrees are the classes. It is a straightforward exercise to show the student weighting always produces a larger expectation. This was first noticed by Feld and Grofman in 1977 in \cite{fg}. For our purposes, this result will show the weighted expectation is always greater than the uniform expectation.
Since both of these expected values are obviously bounded above by $n-1$, the size of a spanning tree, we use a variation of {\it subtree density} first defined in \cite{jam1}. We divide our expected values by $n-1$, the number of edges in a spanning tree, to convert our expectations to densities. Letting $a_k$ equal the number of $k$-edge subtrees in $K_n$, this gives us two formulas for subtree density, one using uniform probability and one using weighted probability:
$$\mbox{Uniform density: } \mu_p(n)=\frac{\sum_{k=1}^{n-1}ka_k}{(n-1)\sum_{k=1}^{n-1}a_k} \hskip.5in \mbox{Weighted density: } \mu_q(n)=\frac{\sum_{k=1}^{n-1}k^2a_k}{(n-1)\sum_{k=1}^{n-1}ka_k}$$
Table~\ref{globaldensities} gives data for these two densities when $n \leq 100$.
\begin{table}[htdp]
\begin{center}
\begin{tabular}{c|ll}
$n$ & $\mu_p(n)$ & $\mu_q(n)$ \\ \hline
10 & 0.945976 & 0.952436 \\
20 & 0.977928 & 0.97912 \\
30 & 0.986177 & 0.986661 \\
40 & 0.989945 & 0.990205 \\
50 & 0.9921 & 0.992263 \\
60 & 0.993496 & 0.993607 \\
70 & 0.994472 & 0.994553 \\
80 & 0.995194 & 0.995255 \\
90 & 0.995749 & 0.995797 \\
100 & 0.996189 & 0.996228 \\
\end{tabular}
\caption{Subtree densities using uniform and weighted probabilities.}
\label{globaldensities}
\end{center}
\end{table}
Evidently, both of these densities approach 1. This is our second main result, and our answer to Q2.
\begin{thm}\label{T:main2}
\begin{enumerate}
\item $\displaystyle{\lim_{n\to\infty}\mu_p(n) = 1,}$
\item $\displaystyle{\lim_{n\to\infty} \mu_q(n) = 1.}$
\end{enumerate}
\end{thm}
The fact that the probabilities and the densities do not depend on which probability measure we use is an indication of the dominance of the number of spanning trees in $K_n$ compared to the number of non-spanning trees. Theorems~\ref{T:main1} and \ref{T:main2} are proven in Section~\ref{S:global}. The proofs follow from a rather detailed analysis of the growth rate of individual terms in the sums that are used to compute all of the statistics. But we emphasize that these proof techniques are completely elementary.
Subtree densities have been studied before, but apparently only when the graph is itself a tree. Jamison introduced this concept in \cite{jam1} and studied its properties in \cite{jam2}. A more recent paper of Vince and Wang \cite{vw} characterizes extremal families of trees with the largest and smallest subtree densities, answering one of Jamison's questions. A recent survey of results connecting subtrees of trees with other invariants, including the Weiner index, appears in \cite{sw}.
There are several interesting directions for future research in this area. We indicate some possible projects in Section~\ref{S:future}.
\section{Global probabilities}\label{S:global}
Our goal in this section is to provide proofs of Theorems~\ref{T:main1} and \ref{T:main2}. As usual, $K_n$ represents the complete graph on $n$ vertices. We fix notation for the subtree enumeration we will need.
\begin{notation} Assume $n$ is fixed. We define $a_k, b_k, A$ and $B$ as follows.
\begin{itemize}
\item Let $a_k$ denote the number of $k$-edge subtrees in $K_n$. (We ignore subtrees of size 0, although setting $a_0=n$ will not change the asymptotic behavior of any of our statistics.)
\item Let $\displaystyle{A=\sum_{k=1}^{n-1}a_k}$ be the total number of subtrees of all sizes in $K_n$.
\item Let $b_k=ka_k$ denote the number of edges used by all of the $k$-edge subtrees in $K_n$.
\item Let $\displaystyle{B=\sum_{k=1}^{n-1}b_k}$ be the sum of the sizes (number of edges) of all the subtrees of $K_n$.
\end{itemize}
\end{notation}
It is immediate from Cayley's formula that $\displaystyle{ a_k= \binom{n}{k+1} (k+1)^{k-1}.}$ We can view $B$ as the sum of all the entries in a 0--1 edge--tree incidence matrix.
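The counts $a_k$, $A$, and $B$ follow directly from Cayley's formula. The short sketch below (hypothetical helper names, not from the paper) is convenient for checking the numerical tables above.

```python
from math import comb

def subtree_counts(n):
    """Return (a, A, B) for K_n: a[k] = number of k-edge subtrees,
    A = total number of subtrees, B = sum of their sizes."""
    # Cayley's formula: a_k = C(n, k+1) * (k+1)^(k-1), for 1 <= k <= n-1.
    a = {k: comb(n, k + 1) * (k + 1) ** (k - 1) for k in range(1, n)}
    A = sum(a.values())                      # total number of subtrees
    B = sum(k * ak for k, ak in a.items())   # total number of edges used
    return a, A, B
```

For $K_4$ this gives $a_1=6$, $a_2=12$, $a_3=16$, $A=34$, and $B=78$, matching the example below.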
The four statistics we study here, $p_n, q_n, \mu_p(n)$ and $\mu_q(n)$, can be computed using $A, B, a_k$ and $b_k$. We omit the straightforward proof of the next result.
\begin{lem}\label{L:global} Let $p_n, q_n, \mu_p(n)$ and $\mu_q(n)$ be as given above. Then
\begin{enumerate}
\item $\displaystyle{p_n=\frac{a_{n-1}}{A}}$,
\item $\displaystyle{q_n=\frac{b_{n-1}}{B}}$,
\item $\displaystyle{\mu_p(n)=\frac{B}{(n-1)A}}$,
\item $\displaystyle{\mu_q(n)=\frac{\sum_{k=1}^{n-1}kb_k}{(n-1)B}=\frac{\sum_{k=1}^{n-1}k^2a_k}{(n-1)B}}$.
\end{enumerate}
\end{lem}
\begin{ex} We compute each of these statistics for the graph $K_4$. In this case, there are 6 subtrees of size one (the 6 edges of $K_4$), 12 subtrees of size two and 16 spanning trees (of size three). Then we find
$\displaystyle{p_4=\frac{16}{34}=.471\dots}$,
$\displaystyle{q_4=\frac{48}{78}=.615\dots}$,
$\displaystyle{\mu_p(4)=\frac{78}{102}=.768\dots}$,
and $\displaystyle{\mu_q(4)=\frac{198}{234}=.846\dots}$.
\end{ex}
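The formulas of Lemma~\ref{L:global} are easy to evaluate exactly. The sketch below (hypothetical names, exact rational arithmetic) reproduces the $K_4$ example and the entries of Tables~\ref{globalprobdata} and \ref{globaldensities}.

```python
from fractions import Fraction
from math import comb

def statistics(n):
    """Return (p_n, q_n, mu_p(n), mu_q(n)) for K_n as exact fractions."""
    a = {k: comb(n, k + 1) * (k + 1) ** (k - 1) for k in range(1, n)}
    A = sum(a.values())
    B = sum(k * ak for k, ak in a.items())
    C = sum(k * k * ak for k, ak in a.items())
    p = Fraction(a[n - 1], A)               # uniform spanning probability
    q = Fraction((n - 1) * a[n - 1], B)     # weighted spanning probability
    mu_p = Fraction(B, (n - 1) * A)         # uniform density
    mu_q = Fraction(C, (n - 1) * B)         # weighted density
    return p, q, mu_p, mu_q
```

For $n=4$ this returns $16/34$, $48/78$, $78/102$, and $198/234$, as in the example above.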
The remainder of this section is devoted to proofs of Theorems~\ref{T:main1} and \ref{T:main2}. We first prove part (1) of Theorem~\ref{T:main2}, then use the bounds from Lemmas~\ref{lemmatop} and \ref{lemmabottom} to help prove both parts of Theorem~\ref{T:main1}. Lastly, we prove part (2) of Theorem~\ref{T:main2}.
Thus, our immediate goal is to prove that the uniform density $\displaystyle{\mu_p(n)=\frac{B}{(n-1)A}}$ approaches 1 as $n \to \infty$. Our approach is as follows: We bound the numerator $B$ from below and the term $A$ in the denominator from above so that $\mu_p(n)$ is bounded below by a function that approaches 1 as $n \to \infty$. Lemma~\ref{lemmatop} establishes the lower bound for $B$ and Lemma~\ref{lemmabottom} establishes the upper bound for $A$, from which the result follows.
\begin{lem}
\label{lemmatop} Let $\displaystyle{B=\sum_{k=1}^{n-1}k\binom{n}{k+1} (k+1)^{k-1}}$, as above. Then
\begin{equation}
B > (n-1)n^{n-2} \left( \frac{n-3}{n-1}\right) e^{e^{-1}}.
\label{topresult}
\end{equation}
\end{lem}
\begin{proof}
Recall $b_{n-1} = (n-1)n^{n-2}$ counts the total number of edges used in all the spanning trees of $K_n$. We examine the ratio $b_{i}/b_{i-1}$ in order to establish a lower bound for each $b_i$ in terms of $b_{n-1}$.
\begin{equation}
\frac{b_i}{b_{i-1}} = \frac{\binom{n}{i+1} (i+1)^{i-1} i}{\binom{n}{i} i^{i-2} (i-1)} = \frac{n-i}{i+1}\cdot \frac{i}{i-1}\cdot \frac{(i+1)^{i-1}}{i^{i-2}} = (n-i)\cdot\frac{i}{i-1} \cdot\left(\frac{i+1}{i}\right)^{i-2}
\label{topratio}
\end{equation}
We use the fact that $e$ is the least upper bound for the sequence $\left\{\left(\frac{i+1}{i}\right)^{i-2}\right\}$ to rewrite \eqref{topratio} as the inequality
$$
b_{i-1} \geq \frac{i-1}{i(n-i)e} \cdot b_i,
$$
and this is valid for $i = 2, 3,\dots,n-1$. In general, for $i<n$, an inductive argument on $n-i$ establishes the following:
\begin{equation}
b_{i} \geq \frac{i}{(n-i-1)!(n-1)e^{n-i-1}} \cdot b_{n-1}.
\label{topinequality}
\end{equation}
Then equation \eqref{topinequality} gives
\begin{equation}
B = \sum_{i=1}^{n-1} b_i \geq \frac{b_{n-1}}{n-1} \left((n-1) + \frac{n-2}{e} + \frac{n-3}{2e^2} + \frac{n-4}{6e^3} + \dots + \frac{1}{(n-2)!e^{n-2}}\right).
\end{equation}
We bound this sum below using standard techniques from calculus. Let
$$h(k) = 1 + \frac{1}{e} + \frac{1}{2e^2} + \frac{1}{6e^3} + \dots + \frac{1}{k!e^{k}}.$$
Then
\begin{equation}
B \geq \frac{b_{n-1}}{n-1}(h(n-2) + h(n-3) + \dots + h(0)).
\label{deriv}
\end{equation}
Now $\displaystyle{e^{x} = \sum_{i=0}^{k}\frac{x^i}{i!} + R_k(x)}$, where $\displaystyle{R_k(x) = \frac{e^y}{(k+1)!} x^{k+1}}$
for some $y \in (0,x)$. We are interested in this expression when $x=e^{-1}$; in that case $R_k(x)$ is bounded above by taking $y = e^{-1}$. So, using $e^{(e^{-1})} \approx 1.44 \ldots<2$, we have
$$
R_k\left(e^{-1}\right) \leq \frac{e^{(e^{-1})}}{{(k+1)!} }(e^{-1})^{k+1} \leq \frac{2}{(k+1)!e^{k+1}}.
$$
Now, using this upper bound on the error in the Maclaurin polynomial for $e^x$ at $x = e^{-1}$ gives
$$
e^{e^{-1}} = h(k) + R_k\left(e^{-1}\right) \leq h(k) + \frac{2}{(k+1)!e^{k+1}},
$$
so
$$
h(k) \geq e^{e^{-1}} - \frac{2}{(k+1)!e^{k+1}}.
$$
Substituting into \eqref{deriv},
\begin{align*}
B &\geq \frac{b_{n-1}}{n-1}\sum_{k=0}^{n-2} h(k)
\geq \frac{b_{n-1}}{n-1} \left((n-1)e^{e^{-1}} - 2\sum_{k=0}^{n-2}\frac{1}{(k+1)!e^{k+1}}\right) \\
&> \frac{b_{n-1}}{n-1} \left((n-1)e^{e^{-1}} - 2\sum_{i=0}^\infty\frac{1}{i!e^i}\right)
= \frac{b_{n-1}}{n-1} \left((n-1)e^{e^{-1}} - 2e^{e^{-1}}\right) = b_{n-1} \left( \frac{n-3}{n-1}\right)e^{e^{-1}}.
\end{align*}
\end{proof}
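As a sanity check, the lower bound \eqref{topresult} can be compared with the exact value of $B$ for small $n$ (an illustrative sketch with hypothetical names, not part of the proof):

```python
from math import comb, e

def B_exact(n):
    # B = sum_{k=1}^{n-1} k * C(n, k+1) * (k+1)^(k-1)
    return sum(k * comb(n, k + 1) * (k + 1) ** (k - 1) for k in range(1, n))

def B_lower_bound(n):
    # (n-1) * n^(n-2) * ((n-3)/(n-1)) * e^(1/e), the bound of the lemma
    return (n - 1) * n ** (n - 2) * ((n - 3) / (n - 1)) * e ** (1 / e)
```

The bound is vacuous for $n=3$ and holds for every $n \ge 4$; moreover the ratio $B/\big((n-1)n^{n-2}\big)$ approaches $e^{e^{-1}} \approx 1.4447$, consistent with the proof.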
We now give an upper bound for $A$, the total number of subtrees of $K_n$.
\begin{lem}
\label{lemmabottom}
Let $\displaystyle{A=\sum_{k=1}^{n-1}a_k}$ be the total number of subtrees of all sizes in $K_n$, as above. Then, for every $\varepsilon >0$, there is a positive integer $r(\varepsilon) \in \mathbb{N}$ so that, for all $n>r(\varepsilon)$,
\begin{equation}
A <n^{n-2} \left(e^{(e-\varepsilon)^{-1}} + \frac{e}{r(\varepsilon)!}\right)
\label{bottomresult}
\end{equation}
\end{lem}
\begin{proof}
Recall $a_{n-1}=n^{n-2}$ is the number of spanning trees in $K_n$. As in Lemma~\ref{lemmatop}, we examine ratios of consecutive terms, but this time we need to establish an upper bound for the $a_i$ in terms of $a_{n-1}$. Now
\begin{equation}
\frac{a_i}{a_{i-1}} = \frac{\binom{n}{i+1} (i+1)^{i-1}}{\binom{n}{i} i^{i-2}} = \frac{n-i}{i+1}\cdot \frac{(i+1)^{i-1}}{i^{i-2}} = (n-i) \left(\frac{i+1}{i}\right)^{i-2}
\label{bottomratio}
\end{equation}
Let $\varepsilon > 0$. Since $\lim_{i\to\infty} \left(\frac{i+1}{i}\right)^{i-2} = e$, there exists a $k(\varepsilon)$ such that
$$
a_{i-1} \leq \frac{a_i}{(n-i)(e-\varepsilon)}
$$
for every $k(\varepsilon) < i \leq n-1$.
As in the proof of Lemma~\ref{lemmatop}, an inductive argument can be used to show
$$
a_i \leq \frac{a_{n-1}}{(n-i-1)!(e-\varepsilon)^{n-i-1}}
$$
for all $i$ such that $k(\varepsilon) < i \leq n-1$.
On the other hand, if $i\leq k(\varepsilon)$, then
$$
a_{i-1}= \frac{a_i}{(n-i)\left(\frac{i+1}{i}\right)^{i-2}} \leq \frac{a_i}{n-i} \leq \frac{a_{n-1}}{(n-i)!}
$$
where we have bounded $\left(\frac{i+1}{i}\right)^{i-2}$ below by 1 and the final inequality follows by a similar inductive argument.
Therefore,
\begin{equation*}
A = \sum_{i=1}^{n-1}a_i \leq a_{n-1} (f(n,k(\varepsilon)) + g(n,k(\varepsilon)))
\end{equation*}
where
\begin{equation*}
f(n,k(\varepsilon)) = 1 + \frac{1}{e-\varepsilon} + \frac{1}{2!(e-\varepsilon)^2} + \dots + \frac{1}{(n-k(\varepsilon))!(e-\varepsilon)^{n-k(\varepsilon)}},
\end{equation*}
corresponding to those terms where $i>k(\varepsilon)$, and
\begin{equation*}
g(n,k(\varepsilon)) = \frac{1}{(n-k(\varepsilon)+1)!} + \frac{1}{(n-k(\varepsilon)+2)!} + \dots + \frac{1}{(n-2)!} + \frac{1}{(n-1)!}
\end{equation*}
corresponds to the terms where $i\leq k(\varepsilon)$.
Using the Maclaurin expansion for $e^x$ evaluated at $x = (e-\varepsilon)^{-1}$ gives an upper bound for $f(n,k(\varepsilon))$:
\begin{equation*}
f(n,k(\varepsilon)) = \sum_{i=0}^{n-k(\varepsilon)} \frac{1}{i!(e-\varepsilon)^i} < \sum_{i=0}^{\infty} \frac{1}{i!(e-\varepsilon)^i} = e^{(e-\varepsilon)^{-1}}.
\end{equation*}
For $g(n,k(\varepsilon))$, we have
\begin{equation*}
g(n,k(\varepsilon)) = \sum_{i = n-k(\varepsilon)+2}^{n} \frac{1}{(i-1)!} < \sum_{i=n-k(\varepsilon)+2}^{\infty} \frac{1}{(i-1)!} <\frac{e}{r(\varepsilon)!},
\end{equation*}
where $r(\varepsilon) = n-k(\varepsilon) + 1$.
Therefore,
\begin{equation*}
A \leq a_{n-1}(f(n,k(\varepsilon)) + g(n,k(\varepsilon))) < a_{n-1}\left(e^{(e-\varepsilon)^{-1}} + \frac{e}{r(\varepsilon)!}\right).
\end{equation*}
\end{proof}
We can now prove part (1) of Theorem~\ref{T:main2}.
\begin{proof} [Proof: Theorem~\ref{T:main2} (1)]
Recall $b_{n-1} = (n-1)n^{n-2}$ and $a_{n-1} = n^{n-2}$, so
\begin{equation*}
\frac{1}{n-1} \cdot \frac{b_{n-1}}{a_{n-1}} = 1.
\end{equation*}
Therefore, \eqref{topresult} and \eqref{bottomresult} imply
\begin{equation*}
\mu_p(n) = \frac{1}{n-1} \cdot \frac{B}{A} > \frac{1}{n-1} \cdot \frac{b_{n-1}}{a_{n-1}} \cdot \frac{\left( \frac{n-3}{n-1} \right) e^{e^{-1}} }{e^{(e-\varepsilon)^{-1}} + \frac{e}{r(\varepsilon)!}} =\left( \frac{n-3}{n-1}\right) \cdot \frac{e^{e^{-1}} }{e^{(e-\varepsilon)^{-1}} + \frac{e}{r(\varepsilon)!}}.
\end{equation*}
Then
$\displaystyle{\lim_{n\to\infty} \mu_p(n) = 1}$, since $\varepsilon$ can be chosen arbitrarily small and $r(\varepsilon)$ can be made arbitrarily large.
\end{proof}
We now prove Theorem~\ref{T:main1}.
\begin{proof}[Proof: Theorem~\ref{T:main1}]
\begin{enumerate}
\item Recall $p_n=\frac{a_{n-1}}{A}$, where $a_{n-1}$ is the number of spanning trees in $K_n$ and $A$ is the total number of subtrees of $K_n$. Then the argument in Lemma~\ref{lemmabottom} can be modified to prove
$$A \geq a_{n-1}\sum_{i=0}^{n-2}\frac{1}{i!e^i}.$$
This follows by bounding $\displaystyle{ \left(\frac{i+1}{i}\right)^{i-2}}$ above by $e$ in equation~\eqref{bottomratio} -- this is the same bound we needed in our proof of Lemma~\ref{lemmatop}. Then, bounding $\displaystyle{ \frac{1}{p_n}=\frac{A}{a_{n-1}} }$, we have
$$e^{e^{-1}}-\varepsilon' \leq \sum_{i=0}^{n-2}\frac{1}{i!e^i} \leq \frac{A} {a_{n-1}}\leq \left(e^{(e-\varepsilon)^{-1}} + \frac{e}{r(\varepsilon)!}\right),$$ where $\varepsilon$ and $\varepsilon'$ can be made as small as we like, and $r(\varepsilon)$ can be made arbitrarily large.
Hence $$p_n=\frac{a_{n-1}}{A} \to \frac{1}{e^{e^{-1}}} \approx 0.6922\dots$$ as $n \to \infty$.
\item Note that $$q_n=\frac{(n-1)a_{n-1}}{B}=\frac{(n-1)A}{B}\cdot \frac{a_{n-1}}{A} = \frac{p_n}{\mu_p(n)}.$$
By Theorem~\ref{T:main2}(1), $\mu_p(n)\to 1$ and, by part (1) of this theorem, $p_n \to e^{-e^{-1}}$. The result now follows immediately.
\end{enumerate}
\end{proof}
An interesting consequence of our proof of part (2) of Theorem~\ref{T:main1} is that $p(G)=q(G)\mu(G)$ for any connected graph $G$, so $p(G)<q(G)$. Thus, if $G=C_n$ is a cycle, then the (uniform) density is approximately $\frac12$, so the weighted probability that a random subtree spans is approximately twice the probability for the uniform case (although both probabilities approach 0 as $n \to \infty$).
We state this observation as a corollary.
\begin{cor}\label{C:pnqn} Let $G$ be any connected graph, let $p(G)$ and $q(G)$ be the uniform and weighted probabilities (resp.) that a random subtree is spanning, and let $\mu(G)$ be the uniform subtree density. Then $p(G)=q(G)\mu(G).$
\end{cor}
If $\mathcal{G}$ is an infinite family of graphs, then we can interpret Cor.~\ref{C:pnqn} asymptotically. In this case, the two probabilities $p$ and $q$ coincide (and are non-zero) in the limit if and only if the density approaches 1.
We conclude this section with a very short proof of part (2) of Theorem~\ref{T:main2}.
\begin{proof}[Proof: Theorem~\ref{T:main2} (2)] We have $\mu_p(n)<\mu_q(n)<1$ for all $n$ by a standard argument in probability (see \cite{fg}). Since $\mu_p(n) \to 1$ as $n \to \infty$, we are done.
\end{proof}
\section{Conjectures, extensions and open problems}\label{S:future}
We believe the study of subtrees in arbitrary graphs is a fertile area for interesting research questions. We outline several ideas that should be worthy of future study. Many of these topics are addressed in \cite{cgmv}.
\begin{enumerate}
\item {\bf Local statistics.} Compute ``local'' versions of the four statistics given here. Fix an edge $e$ in $K_n$. Then it is straightforward to compute $p_n', q_n', \mu_{p'}(n)$ and $\mu_{q'}(n)$, where each of these is described below.
\begin{itemize}
\item $p_n'$ is the (uniform) probability that a random subtree {\it containing the edge $e$} is a spanning tree. This is given by
$$p_n'=\frac{n^{n-3}}{\sum_{k=0}^{n-2}{n-2 \choose k}(k+2)^{k-1}}.$$
The number of spanning trees containing a given edge is $2n^{n-3}$ -- this is easy to show using the tree-edge incidence matrix. Incidence counts can also show the total number of subtrees containing the edge $e$ is given by $\displaystyle{\sum_{k=0}^{n-2}2{n-2 \choose k}(k+2)^{k-1}}$. This can also be derived by using the {\it hyperbinomial transform} of the sequence of 1's.
\item $q_n'$ is the (weighted) probability that a random subtree {\it containing the edge $e$} is a spanning tree. This time, we get
$$q_n'=\frac{(n-1)n^{n-3}}{\sum_{k=0}^{n-2}{n-2 \choose k}(k+2)^{k-1}}.$$
\item $ \mu_{p'}(n)$ is the local (uniform) density, so
$$\mu_{p'}(n)=\frac{\sum_{k=0}^{n-2}(k+1){n-2 \choose k}(k+2)^{k-1}}{(n-1)\sum_{k=0}^{n-2}{n-2 \choose k}(k+2)^{k-1}}.$$
\item $ \mu_{q'}(n)$ is the local (weighted) density, which gives
$$\mu_{q'}(n)=\frac{\sum_{k=0}^{n-2}(k+1)^2{n-2 \choose k}(k+2)^{k-1}}{(n-1)\sum_{k=0}^{n-2}(k+1){n-2 \choose k}(k+2)^{k-1}}.$$
\end{itemize}
It is not difficult to prove the same limits that hold for the global versions of these statistics also hold for the local versions: $p'_n$ and $q'_n$ both approach $e^{-e^{-1}}$ and $ \mu_{p'}(n)$ and $\mu_{q'}(n)$ both approach 1 as $n \to \infty$. (All of these can be proven by arguments analogous to the global versions.)
For $p'_n$, however, the connection to the global statistics is even stronger: $p_n'=q_n$ for all $n$, i.e., the global weighted probability exactly matches the local uniform probability of selecting a spanning tree. (This can be proven by a direct calculation, but it is immediate from the ``average class size'' formulation of the weighted probability.)
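These counts can be sanity-checked by brute force for small $n$. The Python sketch below (helper names are our own) enumerates the subtrees of $K_n$ containing the fixed edge $e=\{0,1\}$ and compares the result against the closed-form counts above.

```python
from itertools import combinations
from math import comb

def count_subtrees_with_edge(n, e=(0, 1)):
    """Brute force over K_n: return (# spanning trees containing e,
    # subtrees of any size containing e)."""
    all_edges = list(combinations(range(n), 2))
    spanning = total = 0
    for r in range(1, n):                      # a subtree has at most n-1 edges
        for es in combinations(all_edges, r):
            if e not in es:
                continue
            verts = {v for ed in es for v in ed}
            if len(verts) != r + 1:            # a tree satisfies |V| = |E| + 1
                continue
            adj = {v: [] for v in verts}
            for a, b in es:
                adj[a].append(b)
                adj[b].append(a)
            seen, stack = {e[0]}, [e[0]]       # DFS connectivity check
            while stack:
                for w in adj[stack.pop()]:
                    if w not in seen:
                        seen.add(w)
                        stack.append(w)
            if seen == verts:
                total += 1
                if len(verts) == n:
                    spanning += 1
    return spanning, total

# n = 4: 2*4^(4-3) = 8 spanning trees through e, and
# sum_k 2*C(2,k)*(k+2)^(k-1) = 13 subtrees through e, so p_4' = 8/13.
```

For $n=4$ this gives $8$ spanning trees and $13$ subtrees through $e$, matching $2n^{n-3}$ and the hyperbinomial sum.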
Other statistics that might be of interest here include larger local versions of these four: suppose we consider only subtrees that contain a given pair of adjacent edges, or a given subtree with 3 edges. Will the limiting probabilities match the ones given here?
\item {\bf Non-spanning subtrees.} Explore the probabilities that a random subtree of $K_n$ has exactly $k$ edges for specified $k<n-1$. The analysis given here can be used to show that the uniform and weighted probabilities of choosing a subtree with $n-2$ edges (one less than a spanning tree) both approach $e^{-1-e^{-1}} =0.254646\dots.$ This follows by observing that the ratio $a_{n-1}/a_{n-2} \to e$ as $n \to \infty$. (It is interesting that both sequences are decreasing here, in contrast to the sequences $p_n$ and $q_n$.)
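A one-line sketch of this limit, writing $a_k$ for the number of $k$-edge subtrees of $K_n$ (as in the ratio above) and $s_n$ for the total number of subtrees:

```latex
\[
  \frac{a_{n-2}}{s_n}
  \;=\;
  \frac{a_{n-2}}{a_{n-1}}\cdot\frac{a_{n-1}}{s_n}
  \;\longrightarrow\;
  e^{-1}\cdot e^{-e^{-1}}
  \;=\;
  e^{-1-e^{-1}},
\]
since $a_{n-1}/a_{n-2}\to e$ and $a_{n-1}/s_n = p_n \to e^{-e^{-1}}$.
```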
\item {\bf Other graphs.} Explore these statistics for other classes of graphs. For instance, when $G=K_{n,n}$ is a complete bipartite graph, the limiting value of the probability (uniform or weighted) that a random subtree spans is
$$e^{-2e^{-1}} = 0.478965\dots,$$
the square of the limiting value we obtained for the complete graph. Both the uniform and weighted densities approach 1 in this case (forced by Cor.~\ref{C:pnqn}).
On the other hand, consider the theta graph $\theta_{n,n}$, formed by adding an edge ``in the middle'' of a $(2n-2)$-cycle (see Fig.~\ref{F:theta}). Then the uniform density approaches $\frac23$ as $n \to \infty$, and both the uniform and weighted probabilities that a random subtree spans tend to 0 as $n \to \infty$ \cite{cgmv}.
\begin{figure}
\caption{$\theta_{4,4}$}
\label{F:theta}
\end{figure}
We conjecture that, for any given graph, the number of edges determines whether or not these limiting densities are 0.
\begin{conj}
If $|E|=O(n^2)$, then the probability (either uniform or weighted) of selecting a spanning tree is non-zero (in the limit).
\end{conj}
\begin{conj}
If $|E|=O(n)$, then the probability (either uniform or weighted) of selecting a spanning tree is zero (in the limit).
\end{conj}
\item {\bf Optimal graphs.} Determine the ``best'' graph on $n$ vertices and $m$ edges. There are many possible interpretations for ``best'' here: for instance, which graph maximizes $p(G)$, and which maximizes the density? Must a graph that maximizes one of these statistics maximize all of them? This is closely related to the work in \cite{vw}, where extremal classes of trees are determined for uniform density.
\item {\bf Subtree polynomial.} We can define a polynomial to keep track of the number of subtrees of size $k$. If $G$ is any graph with $n$ vertices, let $a_k$ be the number of subtrees of size $k$. Then define a {\it subtree polynomial} by
$$s_G(x) = \sum_{k=0}^{n-1} a_k x^k.$$
We can compute the subtree densities directly from this polynomial and its derivatives:
$$\mu_p(G) = \frac{s_G'(1)}{(n-1)s_G(1)} \hskip.2in \mbox{and} \hskip.2in \mu_q(G) = \frac{s_G'(1)+s''_G(1)}{(n-1)s'_G(1)}.$$
It would be worthwhile to study the roots and various properties of this polynomial. In particular, we conjecture that the coefficients of the polynomial are unimodal.
\begin{conj}
The coefficients of $s_G(x)$ are unimodal.
\end{conj}
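To experiment with this conjecture on small graphs, $s_G(x)$ can be computed by direct enumeration. Below is a minimal Python sketch (helper names are our own; each single vertex is counted as a $k=0$ subtree, matching the $k=0$ term of the sum).

```python
from itertools import combinations

def subtree_poly(n, edges):
    """Coefficients [a_0, ..., a_{n-1}] of s_G(x), where a_k is the number of
    subtrees with k edges; each single vertex counts as a k = 0 subtree."""
    coeffs = [n] + [0] * (n - 1)
    for r in range(1, n):
        for es in combinations(edges, r):
            verts = {v for ed in es for v in ed}
            if len(verts) != r + 1:            # trees satisfy |V| = |E| + 1
                continue
            adj = {v: [] for v in verts}
            for a, b in es:
                adj[a].append(b)
                adj[b].append(a)
            start = next(iter(verts))
            seen, stack = {start}, [start]     # DFS connectivity check
            while stack:
                for w in adj[stack.pop()]:
                    if w not in seen:
                        seen.add(w)
                        stack.append(w)
            if seen == verts:
                coeffs[r] += 1
    return coeffs

def densities(coeffs):
    """mu_p = s'(1)/((n-1) s(1)) and mu_q = (s'(1)+s''(1))/((n-1) s'(1))."""
    n = len(coeffs)
    s1 = sum(coeffs)
    sp1 = sum(k * a for k, a in enumerate(coeffs))
    spp1 = sum(k * (k - 1) * a for k, a in enumerate(coeffs))
    return sp1 / ((n - 1) * s1), (sp1 + spp1) / ((n - 1) * sp1)

def is_unimodal(seq):
    """True if the sequence never rises again after it first falls."""
    falling = False
    for x, y in zip(seq, seq[1:]):
        if y > x and falling:
            return False
        if y < x:
            falling = True
    return True
```

For the triangle $K_3$, for example, the coefficients are $[3,3,3]$, giving $\mu_p(K_3)=1/2$ and $\mu_q(K_3)=5/6$.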
As an example, for the interesting infinite 2-parameter family of $\theta$-graphs, the coefficients of $s_{\theta_{a,b}}(x)$ are unimodal (see Fig.~\ref{F:thetacoef}).
\begin{figure}
\caption{Coefficients of $s_{\theta_{24,48}}(x)$}
\label{F:thetacoef}
\end{figure}
The mode of the sequence of coefficients gives another measure of the subtree density. For the $\theta$-graph $\theta_{n,n}$, we can show the mode is approximately $\sqrt{2}n$ (in \cite{cgmv}).
Stating the unimodality conjecture in terms of a polynomial has the advantage of potentially drawing on the very well developed theory of polynomials. Surveys that address unimodality of coefficients in polynomials include Stanley's classic paper \cite{st} and a recent paper of Pemantle \cite{pe}. In particular, if all the roots of $s_G(x)$ are negative reals, then the coefficients are unimodal. (Such polynomials are called {\it stable}.) Although $s_G(x)$ is not stable in general, perhaps it is for some large class of graphs.
It is not difficult to find different graphs (in fact, different trees) with the same subtree polynomial.
\begin{conj}
For any $n$, there exist $n$ pairwise non-isomorphic graphs that all share the same subtree polynomial.
\end{conj}
This is expected by pigeonhole considerations -- there should be far more graphs on $n$ vertices than potential polynomials.
\item {\bf Density monotonicity.} Does adding an edge always increase the density? This is certainly false for disconnected graphs -- simply add an edge to a small component; this will lower the overall density. But we conjecture this cannot happen if $G$ is connected:
\begin{conj}
Suppose $G$ is a connected graph, and $G+e$ is obtained from $G$ by adding an edge between two distinct vertices of $G$. Then $\mu(G)<\mu(G+e)$.
\end{conj}
One consequence of this conjecture would be that, starting with a tree, we can add edges one at a time to create a complete graph, increasing the density at each stage. This could be a useful tool in studying optimal families.
\item {\bf Matroid generalizations.} Instead of using subtrees of $G$, we could use {\it subforests}. This has the advantage of being well behaved under deletion and contraction. In particular, the total number of subforests of a graph $G$ is an evaluation of its Tutte polynomial: $T_G(2,1)$. The number of spanning trees is also an evaluation of the Tutte polynomial: $T_G(1,1)$. (In fact, this is how Tutte originally defined his {\it dichromatic} polynomial.)
All of the statistics studied here would then have direct analogues to the subtree problem. This entire approach would then generalize to matroids. In this context, subforests correspond to independent sets in the matroid, and spanning trees correspond to bases. It would be of interest to study basis probabilities and densities for the class of binary matroids, for example.
\end{enumerate}
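The two Tutte evaluations mentioned in the last item are easy to check by enumeration on small graphs. The following Python sketch (helper names are our own) counts subforests and spanning trees directly; for $K_4$ it recovers $T_{K_4}(2,1)=38$ and $T_{K_4}(1,1)=16$.

```python
from itertools import combinations

def acyclic(n, es):
    """Union-find cycle test for an edge subset of a graph on vertices 0..n-1."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for a, b in es:
        ra, rb = find(a), find(b)
        if ra == rb:
            return False                       # edge closes a cycle
        parent[ra] = rb
    return True

def tutte_counts(n, edges):
    """Return (# subforests, # spanning trees), i.e. (T_G(2,1), T_G(1,1))."""
    forests = trees = 0
    for r in range(len(edges) + 1):
        for es in combinations(edges, r):
            if acyclic(n, es):
                forests += 1
                if r == n - 1:                 # n-1 acyclic edges must span
                    trees += 1
    return forests, trees

K4 = [(a, b) for a in range(4) for b in range(a + 1, 4)]
```

Here a subforest is any acyclic edge subset (including the empty one), matching the $T_G(2,1)$ evaluation.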
\end{document} |
\begin{document}
\title{Battle Against Fluctuating Quantum Noise: Compression-Aided Framework to Enable \\Robust Quantum Neural Network\\
{
\centering
\author{
\setlength{\baselineskip}{20pt}
Zhirui Hu$^{1,2}$, Youzuo Lin$^{3}$, Qiang Guan$^{4}$, Weiwen Jiang$^{1,2}$
\\
$^{1}$Electrical and Computer Engineering Department, George Mason University, Fairfax, VA 22030, US\\
$^{2}$Quantum Science and Engineering Center, George Mason University, Fairfax, VA 22030, US
\\$^{3}$Earth and Environmental Sciences Division, Los Alamos National Laboratory, NM, 87545, US
\\ $^{4}$Department of Computer Science, Kent State University, 800 E Summit St, Kent, OH 44240
\\ ([email protected]; [email protected])
}
}
}
\maketitle
\begin{abstract}
Recently, we have been witnessing the scale-up of superconducting quantum computers; however, the noise of quantum bits (qubits) remains an obstacle preventing real-world applications from leveraging the power of quantum computing.
Although there exist error-mitigation and error-aware designs for quantum applications, the inherent fluctuation of noise (a.k.a., instability) can easily collapse the performance of error-aware designs.
Worse still, users may not even be aware of the performance degradation caused by the change in noise.
To address both issues, in this paper we use Quantum Neural Network (QNN) as a vehicle to present a novel compression-aided framework, namely \textit{QuCAD}, which will adapt a trained QNN to fluctuating quantum noise.
In addition, with the historical calibration (noise) data,
our framework will build a model repository offline, which will significantly reduce the optimization time in the online adaptation process.
Emulation results on an earthquake detection dataset show that QuCAD can achieve 14.91\% accuracy gain on average in 146 days over a noise-aware training approach. For the execution on a 7-qubit IBM quantum processor, ibm-jakarta, QuCAD can consistently achieve 12.52\% accuracy gain on earthquake detection.
\end{abstract}
\section{Introduction}
We are currently in the Noisy Intermediate-Scale Quantum (NISQ) era where noise and scalability have been two well-known and critical issues in quantum computing.
Nowadays, we have been witnessing the rapid development of superconducting quantum computers, and the scalability issue is gradually being mitigated. As an example, IBM took only 6 years to scale up its quantum computers from 5 to 433 qubits, and quantum computing research \cite{liang2022variational,liang2021can,wang2021exploration,jiang2021co,hu2022design} is advancing rapidly.
However, the high noise in quantum computing is still an obstacle for real-world applications to take advantage of quantum computing.
There are many sources of noise in quantum computing, such as gate errors and readout errors.
As shown in Fig.~\ref{fig:introduction}, the color of qubits and their connections indicates the Pauli-X and CNOT gate error from the IBM Belem backend.
Unlike CMOS error rates, which stay in a tiny range below $10^{-15}$ (so that classical computing focuses on performance and efficiency rather than noise \cite{yang2023device,yang2022hardware,liao2021shadow}), the noise on qubits can reach $10^{-4}$ to $10^{-2}$.
Moreover, as shown in Fig.~\ref{fig:introduction}, the noise profiled over one year on the IBM Belem backend fluctuates over a wide range; we call this ``fluctuating quantum noise''.
Although there exist works to improve the robustness of quantum circuits to noise, such as error mitigation \cite{takagi2022fundamental} and noise-aware designs \cite{ji2022calibration,bhattacharjee2019muqut,wang2022quantumnat}, these works commonly perform the optimization based on the noise at one moment.
The fluctuating quantum noise can easily make the quantum circuit lose its robustness.
Thus, new innovations are needed to deal with the fluctuating noise.
In this paper, we propose a novel framework, namely ``QuCAD'', to address the above issues.
To illustrate our framework, we use the Quantum Neural Network (QNN), a.k.a. variational quantum circuit (VQC), as an example, since the learning approach has been shown to be effective for a wide range of applications from different domains (such as chemistry \cite{sajjan2022quantum}, healthcare \cite{maheshwari2022quantum}, and finance \cite{orus2019quantum}); meanwhile, \cite{liu2021rigorous} recently showed potential quantum speedups using VQC.
To deal with fluctuating quantum noise, a straightforward method is to apply a noise-aware training approach to retrain the QNN before each inference; however, this obviously incurs high costs.
More importantly, as the quantum noise changes over a wide range, we observe that a set of parameters will deviate from the loss surface, which impedes noise-aware training from finding optimal solutions.
\begin{figure}
\caption{The fluctuating noise observed on IBM backend belem}
\label{fig:introduction}
\end{figure}
To address this problem, we investigated and found that these parameter values lead to shorter physical circuits after the logical-to-physical compilation.
By analogy with classical neural networks, these QNN parameter values can be regarded as compression levels.
Therefore, QNN compression can be helpful to optimize the model in a noisy environment.
Based on our observations, in QuCAD, we develop a novel noise-aware QNN compression algorithm.
It interplays with two other components: (1) a model repository constructor that builds a repository of compressed models offline using representative historical data; and (2) a model repository manager that performs online adaptation of models to the fluctuating noise.
As such, QuCAD can automatically and efficiently obtain the QNN models adapted to run-time noise for high performance.
The main contributions of this paper are as follows.
\begin{itemize}
\item We reveal that the fluctuating quantum noise will collapse the performance of quantum neural networks (QNNs).
\item We develop a noise-aware QNN compression algorithm to adapt a pretrained QNN model to given noise.
\item On top of the noise-aware compression algorithm, we further propose a 2-stage framework to adapt the QNN model to fluctuating quantum noise automatically.
\end{itemize}
Evaluation is conducted on both IBM Aer simulator and IBM quantum processor ibm-jakarta.
Experiment results on MNIST, an earthquake detection dataset, and Iris show the effectiveness and efficiency of QuCAD. In particular, QuCAD achieves superior performance over all competitors on all datasets.
It achieves 16.32\%, 38.88\%, and 15.36\% accuracy gains, respectively, compared with the default configuration without optimization.
Even compared with noise-aware training before every execution, QuCAD can still achieve
over a 15\% accuracy improvement on average; meanwhile, QuCAD reduces online training time by over 100$\times$.
For the execution on a 7-qubit IBM quantum processor ibm-jakarta, QuCAD can consistently achieve a 13.7\% accuracy gain on the earthquake detection dataset.
The remainder of the paper is organized as follows. Section 2
reviews the related work and provides observations, challenges and motivations. Section 3 presents the proposed
QuCAD framework. Experimental results are provided in Section 4 and concluding remarks are given in Section 5.
\section{Related Work, Challenge, and Motivation}
\subsection{Related Work}
There are two commonly applied approaches to mitigate the effect of quantum noise. One is to adjust quantum circuits by noise-aware training \cite{wang2022quantumnat} and mapping \cite{bhattacharjee2019muqut}. Noise-aware training is an adversarial method that injects noise into the training process so that the model parameters can be learned from both the dataset and the device. Noise-aware mapping minimizes the sum of accumulated noise by changing the mapping from logical qubits to physical qubits. Another approach is to estimate the ideal outputs by post-processing measurement data, e.g., zero-noise extrapolation \cite{giurgica2020digital}, probabilistic error cancellation \cite{endo2018practical}, and virtual distillation \cite{huggins2021virtual}. However, both kinds of approaches are based on the noise at one moment, so the performance can hardly be maintained without redoing them whenever the noise changes.
There are a few works addressing the fluctuating-noise issue, which can be quantitatively analyzed in two aspects: the noise of the device and the distance between ideal and noisy results. \cite{dasgupta2021stability} evaluated the stability of NISQ devices with multiple metrics that characterize stability. \cite{dasgupta2022characterizing,dasgupta2022assessing} defined Hellinger distance, computational accuracy, and result reproducibility to characterize the distance between the ideal result and the noisy observation. These works show the significance of reproducing results in a long-term execution on a quantum device. However, it is still not clear what effect the fluctuating noise has on quantum applications.
\subsection{Observation, Challenge and Motivation}
This section will provide our observations on the simulation performance of a quantum neural network model on a noisy quantum computer over a period of one year.
\begin{figure}
\caption{The accuracy of QNN on 4-class MNIST from August 2021 to August 2022 on IBM backend belem using Qiskit Simulation.}
\label{fig:motivation1}
\end{figure}
\textbf{Observation 1: Fluctuating noise can collapse the model accuracy of a noise-aware trained QNN model.}
Fig.~\ref{fig:motivation1}(a) shows the daily accuracy of a QNN model on the quantum processor in Fig.~\ref{fig:introduction}, which demonstrates that the accuracy varies significantly along with the fluctuating noise.
The QNN model is trained using the noise-aware training method \cite{wang2022quantumnat} based on the calibration (noise) data on day 1 (8/10/22).
Its accuracy is over 80\% from day 1 to day 22; however, when error rates increased on day 24, the accuracy decreased to 22\%.
This observation shows that a noise-aware design can obtain a quantum circuit robust to a certain range of noise, but the performance collapses when the noise goes beyond a threshold.
\begin{figure}
\caption{Noise-aware training may miss optimal solution: (a) Optimization surface of 2-parameter VQC under noise free environment. (b) Optimization surface of the same VQC under a noisy environment. (c) Difference between (a) and (b).}
\label{fig:loss_3d}
\end{figure}
\textbf{Challenge 1: Noise on qubits can create breakpoints beyond the optimization surface, creating difficulties in training.}
Relying on training only to adapt to noise may miss the opportunity to find the optimal solution.
Fig. \ref{fig:loss_3d}(a)-(b) shows an example of the results of a quantum circuit with 2 parameters (x-axis and y-axis) in a perfect environment (simulation) and in a noisy environment (realistic).
From the difference shown in Fig. \ref{fig:loss_3d}(c), we can see breakpoints in the loss landscape where the noise-induced deviation is much lower.
Since the optimization starts from a random point, reaching these breakpoints would require a dedicated learning rate for each parameter, which is almost impossible in a training algorithm.
\textbf{Motivation 1: Not only relying on training but also involving compression to battle against quantum noise.}
Looking closer at these breakpoints in Fig. \ref{fig:loss_3d}(c), we observe that the parameter value of 0 creates these breakpoints. We also observe such breakpoints around other specific values, like $\frac{\pi}{2}$, $\pi$, $\frac{3\pi}{2}$, etc.
We further investigated the root cause and found that the improvement in performance is caused by a reduction of the physical circuit length.
More specifically, due to the transpilation (i.e., logical-to-physical qubit mapping), a logical quantum gate with different parameter values will result in varied circuit lengths on the physical qubits.
Analogous to classical neural networks, this reduction of network complexity is known as compression.
Based on the above understanding, it motivates us to employ compression techniques to address the quantum noise.
For the same noise setting as Fig. \ref{fig:motivation1}(a), we perform the model compression on Day 1, and results are reported in Fig. \ref{fig:motivation1}(b).
It is clear that the results of compression (i.e., yellow points) are much better than the noise-aware training (i.e., blue points).
However, we still observe a significant accuracy drop between March 15 and May 29.
\textbf{Observation 2: The behavior of fluctuating noise is heterogeneous across qubits.}
To explore the root cause of the accuracy drop, we investigate the change of CNOT noise on all pairs of qubits on three dates: (1) Feb. 12, (2) March 15, and (3) April 25.
Fig.~\ref{fig:motivation2}(a) reports the results.
It is clear that the qubits on March 15 and April 25 have much higher noise than that on Feb. 12.
A more interesting observation is that the fluctuating noise on different qubits has heterogeneity.
Specifically, on Feb. 12, $\langle q_3,q_4\rangle$ has the highest noise, while $\langle q_1,q_2\rangle$ becomes the noisiest one on March 15 and April 25.
Along with the heterogeneous changes of noise on qubits, the compressed QNN model may lose its robustness, which causes the accuracy degradation between March 15 and May 29.
\textbf{Motivation 2: Adding noise awareness in compression to battle against fluctuating quantum noise.}
To address the above challenges, conducting noise-agnostic compression with the sole objective of minimizing circuit length is not enough; instead, we need to take the heterogeneous noise on qubits into consideration.
This motivates us to develop a ``noise-aware compression'' on QNNs, which can involve the noise level at each qubit to guide how the model will be compressed. Details will be introduced in Sec \ref{sec:framework}.
We use an example to illustrate the necessity of noise-aware compression. As shown in Fig. \ref{fig:motivation2} (b), on March 15 and April 25, we observe significant changes in the noise of qubits, which makes the original compressed model suffer an accuracy drop from 79\% to 22.5\% and 56.5\%.
By using noise-aware compression on these dates, we recover the accuracy to 38.5\% and 80\% on March 15 and April 25, respectively.
\begin{figure}
\caption{Noise-aware compression is needed: (a) CNOT gate noise on three days. (b) Noise-aware compressing and tuning models on the three days in (a) and testing them on the following days.
}
\label{fig:motivation2}
\end{figure}
With noise-aware compression in hand, the problem becomes how frequently we should perform the compression.
Due to the fluctuation of quantum noise, one straightforward idea is to apply the noise-aware compression before each use, so that the model adapts to the new noise. However, this can be too costly.
\textbf{Observation 3: Models can be re-utilized.}
Consider the original model in Fig. \ref{fig:motivation2}(b), which experienced a dramatic accuracy drop on March 15.
However, 9 days later, on March 24, the accuracy of this model recovered to 80.5\%. Therefore, a previous model can be re-utilized later.
\textbf{Motivation 3: A model repository can avoid model optimization every day to improve efficiency.}
The above observation inspires us to build a model repository to keep a set of models.
At run time, instead of directly performing optimization (i.e., compression) for a new noise, we can first check if models in the repository can adapt to the noise.
In this way, it is possible to reuse pre-optimized models and significantly reduce the optimization cost at run time.
With the above observations and motivations, we propose a novel framework QuCAD in the following section to devise a noise-aware compression algorithm, build a model repository upon historical data offline, and manage it at run time.
\section{Compression-Aided Framework}
This section will introduce our proposed compression-aided framework, namely QuCAD, to battle against fluctuating quantum noise.
Before introducing details, we first formally formulate the problem as follows:
Given a QNN model $M$, a quantum processor $Q$ with historical calibration (noise) data $D_t$, and the current calibration data $D_c$ of $Q$, the problem is how to leverage the calibration data (i.e., $D_t$ and $D_c$) to find a model $M^{\prime}$ that maximizes the accuracy of $M^{\prime}$ on $Q$ under $D_c$.
\subsection{Framework Overview}\label{sec:framework}
Fig.~\ref{fig:overview} shows the overview of the QuCAD framework.
It contains 3 main components: (1) a noise-aware compression algorithm, (2) an offline model repository constructor, and (3) an online model repository manager.
The noise-aware compression algorithm is the core of QuCAD in both offline and online optimizations.
The offline optimization includes the following steps to build a model repository.
We will first use the historical calibration data to obtain their corresponding performance for the given QNN model $M$.
Then, a clustering algorithm is developed to create several groups in terms of calibration data and corresponding model performance.
The centroid of the calibration data in each group ($D^{\prime}$) will be used to optimize model $M$ via the compression algorithm to generate the compressed model $M^{\prime}$.
The pair of $\langle M^{\prime}, D^{\prime}\rangle$ will be placed
into the model repository.
The online optimization will get the current calibration data $D_c$ of quantum computer $Q$ as input.
We match $D_c$ with the existing calibration data $D^{\prime}$ of items $\langle M^{\prime}, D^{\prime}\rangle$ in the model repository, aiming at finding the most similar one.
The model repository manager will use the difference between calibration data $D_c$ and $D^{\prime}$ to make a judgment whether model $M^{\prime}$ can be directly used under $D_c$.
If the distance exceeds a pre-set threshold, performance degradation is predicted;
in that case, we regard the current calibration data as a new centroid, generate a new model by noise-aware compression, and put it into the model repository.
Otherwise, we output the matched model directly.
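The manager's match-or-compress decision can be sketched as follows (a minimal Python sketch; the Euclidean distance metric and all names here are our own illustrative choices, not the paper's exact implementation):

```python
import math

def calib_distance(d_c, d_ref):
    """Euclidean distance between two calibration (noise) vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(d_c, d_ref)))

def fetch_or_flag(repo, d_c, threshold):
    """repo: list of (model, calibration) pairs <M', D'>.
    Return (model, None) when the closest stored calibration is within the
    threshold; otherwise (None, d_c), flagging d_c as a new centroid that
    needs a freshly compressed model."""
    best_model, best_calib = min(repo, key=lambda mc: calib_distance(d_c, mc[1]))
    if calib_distance(d_c, best_calib) <= threshold:
        return best_model, None
    return None, d_c
```

When the second return value is non-empty, the framework would run noise-aware compression on it and add the resulting pair to the repository.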
\begin{figure}
\caption{Illustration of the proposed Compression-Aided Framework (QuCAD).}
\label{fig:overview}
\end{figure}
\subsection{Noise-Aware Compression}
Quantum neural network compression was recently proposed \cite{hu2022quantum} to reduce circuit length, where an alternating direction method of multipliers (ADMM) approach is applied.
In this paper, we will also employ ADMM as the optimizer for noise-aware compression, and the new challenge here is how to involve noise awareness in the compression process.
In the previous work, the authors conducted compression upon the logical quantum circuit.
However, to involve noise in the compression of quantum gates, we need to fix the physical qubits of each quantum gate.
\textit{Therefore, we will take the quantum circuit after routing on the restricted topology as input, instead of the logical quantum circuit.}
Before introducing our proposed algorithm, we first define the notations and the optimization problem which will be used in ADMM.
We denote $\bm{T}$ as a table of compression levels, which are the breakpoints in the example of \textit{Motivation 1}.
We denote the function of a VQC under a noise-free (a.k.a., perfect, denoted by $p$) environment as $W_p(\bm{\theta})$, where $\bm{\theta} = [\theta_1,\theta_2,\cdots,\theta_n]$ are a set of trainable parameters, and $\theta_i$ is the parameter of gate $g_i$.
With the consideration of noise, the function of VQC will be changed to $W_n(\bm{\theta})$.
On top of these, the deviation caused by noise can be defined as $N(\bm{\theta}) = W_n(\bm{\theta}) - W_p(\bm{\theta})$.
For a quantum gate $g_i$, it is associated with a physical qubit $q_k$ or a pair of physical qubits $\langle q_k,q_l\rangle$.
For simplicity, we denote such an association as a function $\langle q_k,q_l\rangle=A(g_i)$, and we use $k=l$ to represent that $g_i$ is associated with the single qubit $q_k$.
We denote $\bm{C}$ as a table of calibration data, and use the notation $n_{k,l}\in \bm{C}$ or the function $C(q_k,q_l)$ to represent the noise rate on qubits $q_k$ and $q_l$ (or on $q_k$ if $k=l$).
Then, the problem can be formulated as below.
\begin{equation}
\begin{aligned}
&\min_{\bm{\theta}} \quad W_p(\bm{\theta}) + N(\bm{\theta})
\end{aligned}
\label{eq:optprob}
\end{equation}
Now, to enable ADMM to
perform noise-aware compression,
we first decompose the optimization problem in Eq. \ref{eq:optprob} into two sub-problems and solve them separately: (1) maximize accuracy and (2) minimize the deviation caused by noise.
The first subproblem can be solved by a gradient-based optimizer. For the second problem, we will use a set of auxiliary variables and an indicator function to resolve it, which will be introduced later.
Therefore, we reformulate the optimization problem that can be solved by ADMM as below.
\begin{equation}
\begin{aligned}
&\min_{\{ \theta_i\}} \quad f(W_p(\bm{\theta})) + N(\bm{Z}) + \sum_{\forall z_i\in\bm{Z}}{s_i(z_i)}, \\
\end{aligned}
\label{eq:admm_train}
\end{equation}
where $\bm{Z}$ is a set of auxiliary variables for subproblem decomposition and each $z_i\in \bm{Z}$ corresponds to $\theta_i\in \bm{\theta}$; function $f$ represents the training loss on the given dataset; $\bm{T^{admm}}$ is a gate-related compression table built on $\bm{T}$; and $s_i(z_i)$ is an indicator that specifies whether the parameter $\theta_i$ will be pruned or not.
\begin{figure}
\caption{Noise-aware mask generation in ADMM process. }
\label{fig:maskexample}
\end{figure}
In the $r^{th}$ iteration of ADMM optimization, one key step is to determine whether or not to compress a parameter.
According to the parameter $\theta_i$, compression table $T$, and noise data $n_{k,l}$, we will build a mask.
Fig. \ref{fig:maskexample} illustrates the process to create the mask, which is composed of three steps.
First, by comparing the parameter $\theta_i$ with each compression level in table $T$, we generate two tables: $\bm{T^{admm}}$ and $\bm{D}$.
The $i^{th}$ element in $\bm{T^{admm}}$ is denoted as $T_{i}^{admm}$, which is the compression level nearest to parameter $\theta_i$, while $d_i$ is the minimum distance between parameter $\theta_i$ and any compression level.
\textit{Note that in a noise-agnostic compression \cite{hu2022quantum}, a mask will be generated by using table $\bm{D}$.
In the second step, we further consider gate noise and generate a priority table $\bm{P}$.}
Notation $p_i\in \bm{P}$ indicates the priority of gate $g_i$ to be pruned; then, we have $p_i=\frac{C(A(g_i))}{d_i}$, where $A(g_i)$ is the qubits associated with $g_i$ and $C(A(g_i))$ is the noise rate.
Based on $p_i$, we formulate the mask $mask(g_i,\bm{C})^r$ for gate $g_i$ on the given calibration data $\bm{C}$ in the $r^{th}$ iteration as below.
\begin{equation}
\begin{aligned}
mask(g_i ,\bm{C})^r =\begin{cases}
0& \text{ if } p_i^r < threshold \\
1& \text{ if } otherwise.
\end{cases}
\end{aligned}
\label{eq:mask}
\end{equation}
In the above formula, a mask value of $1$ indicates that the gate $g_i$ has a high priority, larger than a pre-set threshold, to be compressed. To maintain high accuracy, we will utilize the compression level $T_i^{admm}$ if $g_i$ is masked to be compressed. Based on these understandings, we define the indicator function $s_i(z_i)$:
\begin{equation}
\begin{aligned}
s_i(z_i)=\begin{cases}
0& \text{ if } z_i = T^{admm}_i\times mask(g_i,\bm{C})^r \ \\
& \ \ \ \ or \ mask(g_i,\bm{C})^r=0 \\
+\infty & \text{ if } otherwise.
\end{cases}
\end{aligned}
\label{eq:indicator_func}
\end{equation}
Note that $s_i$ is used in Eq.~\ref{eq:admm_train} to restrict the value of $z_i$; a valid solution requires $s_i(z_i)=0$.
In the above formula, we set $s_i(z_i)=0$ in two cases: (1) $mask(g_i ,\bm{C})^r=0$, indicating that we do not require compression on gate $g_i$; or (2) $z_i = T^{admm}_i\times mask(g_i,\bm{C})^r$, indicating that the parameter $\theta_i$ has to be compressed to the compression level $T^{admm}_i$.
In each round of ADMM, we obtain $\bm{\theta}^r$ and $\bm{Z}^r$ alternately. At the end, we obtain the optimized parameters $\bm{\theta}$ that minimize Eq.~\ref{eq:optprob}.
Then, we employ noise injection to fine-tune the parameters $\bm{\theta}$ to further improve performance, where we freeze the compressed parameters using the final $mask$ so that they are not tuned.
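The mask construction of Fig. \ref{fig:maskexample} and Eq. \ref{eq:mask} can be sketched as follows (Python; the compression-level table and the threshold value are illustrative assumptions of ours, not values from the paper):

```python
import math

# Illustrative compression-level table T (the "breakpoints" of Motivation 1).
T = [0.0, math.pi / 2, math.pi, 3 * math.pi / 2]

def nearest_level(theta):
    """Return (T_i^admm, d_i): the nearest compression level and its distance."""
    level = min(T, key=lambda t: abs(theta - t))
    return level, abs(theta - level)

def noise_aware_mask(thetas, gate_noise, threshold):
    """Per gate i: priority p_i = C(A(g_i)) / d_i; mask_i = 1 iff p_i >= threshold.
    gate_noise[i] plays the role of C(A(g_i)) for gate g_i."""
    masks, targets = [], []
    for theta, noise in zip(thetas, gate_noise):
        level, d = nearest_level(theta)
        p = noise / d if d > 0 else math.inf   # already at a level: free to prune
        masks.append(1 if p >= threshold else 0)
        targets.append(level)
    return masks, targets
```

A gate sitting close to a breakpoint on a noisy qubit gets a high priority and is snapped to its nearest level, while a gate far from every level on a quiet qubit is left untouched.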
\begin{table*}[t]
\centering
\caption{Performance comparison of different methods on 3 datasets over 146 consecutive days with fluctuating noise.}
\tabcolsep 6pt
\begin{tabular}{|c|cccccccccc|}
\hline
Dataset &
Method &
\begin{tabular}[c]{@{}c@{}}Mean\\ Accuracy\end{tabular} &
\begin{tabular}[c]{@{}c@{}}vs. \\ Baseline\end{tabular} &
Variance &
\begin{tabular}[c]{@{}c@{}}Days\\ over 0.8\end{tabular} &
\begin{tabular}[c]{@{}c@{}}vs.\\ Baseline\end{tabular} &
\begin{tabular}[c]{@{}c@{}}Days \\ over 0.7\end{tabular} &
\begin{tabular}[c]{@{}c@{}}vs. \\ Baseline\end{tabular} &
\begin{tabular}[c]{@{}c@{}}Days \\ over 0.5\end{tabular} &
\begin{tabular}[c]{@{}c@{}}vs.\\ Baseline\end{tabular} \\ \hline
\multirow{6}{*}{\begin{tabular}[c]{@{}c@{}}4-class \\ MNIST\end{tabular}} &
Baseline &
59.35\% &
0.00\% &
0.070 &
24 &
0 &
93 &
0 &
100 &
0 \\
&
Noise-aware Train Once \cite{wang2022quantumnat} &
58.69\% &
-0.65\% &
0.060 &
8 &
-16 &
92 &
-1 &
100 &
0 \\
&
Noise-aware Train Everyday &
59.39\% &
0.05\% &
0.070 &
28 &
4 &
83 &
-10 &
99 &
-1 \\
&
One-time Compression \cite{hu2022quantum} &
68.44\% &
9.09\% &
0.050 &
80 &
56 &
102 &
9 &
117 &
17 \\
&
QuCAD w/o offline &
72.31\% &
12.96\% &
0.030 &
77 &
53 &
98 &
5 &
134 &
34 \\
&
QuCAD (ours) &
\textbf{75.67\%} &
\textbf{16.32\%} &
\textbf{0.020} &
\textbf{100} &
\textbf{76} &
\textbf{134} &
\textbf{41} &
\textbf{134} &
\textbf{34} \\ \hline
\multirow{6}{*}{Iris} &
Baseline &
37.85\% &
0.00\% &
\textbf{0.006} &
0 &
0 &
0 &
0 &
8 &
0 \\
&
Noise-aware Train Once \cite{wang2022quantumnat} &
54.38\% &
16.53\% &
0.043 &
29 &
29 &
46 &
46 &
70 &
62 \\
&
Noise-aware Train Everyday &
56.62\% &
18.78\% &
0.044 &
38 &
38 &
56 &
56 &
72 &
64 \\
&
One-time Compression \cite{hu2022quantum} &
69.20\% &
31.36\% &
0.043 &
84 &
84 &
90 &
90 &
103 &
95 \\
&
QuCAD w/o offline &
75.30\% &
37.46\% &
0.025 &
\textbf{84} &
\textbf{84} &
104 &
104 &
128 &
120 \\
&
QuCAD (ours) &
\textbf{76.73\%} &
\textbf{38.88\%} &
0.015 &
83 &
83 &
\textbf{108} &
\textbf{108} &
\textbf{141} &
\textbf{133} \\ \hline
\multirow{6}{*}{\begin{tabular}[c]{@{}c@{}}Seismic\\ Wave\end{tabular}} &
Baseline &
68.40\% &
0.00\% &
0.014 &
18 &
0 &
70 &
0 &
137 &
0 \\
&
Noise-aware Train Once \cite{wang2022quantumnat} &
68.85\% &
0.45\% &
0.014 &
19 &
1 &
78 &
8 &
137 &
0 \\
&
Noise-aware Train Everyday &
68.28\% &
-0.11\% &
0.013 &
22 &
4 &
69 &
-1 &
138 &
1 \\
&
One-time Compression \cite{hu2022quantum} &
78.99\% &
10.59\% &
0.007 &
80 &
62 &
130 &
60 &
144 &
7 \\
&
QuCAD w/o offline &
82.34\% &
13.95\% &
0.001 &
110 &
92 &
145 &
75 &
146 &
9 \\
&
QuCAD (ours) &
\textbf{83.75\%} &
\textbf{15.36\%} &
\textbf{0.001} &
\textbf{133} &
\textbf{115} &
\textbf{146} &
\textbf{76} &
\textbf{146} &
\textbf{9} \\ \hline
\end{tabular}
\label{tab:main_result}
\end{table*}
\subsection{Offline Model Repository Constructor}
As discussed in \textit{Motivation 3}, it is possible to use noise-aware compression to optimize the model before using the QNN, but doing so on demand is too costly; we can improve efficiency by building a model repository so that models can be reused.
In this subsection, we will explain how to build the repository using a modified noise-aware k-means clustering algorithm.
The model repository constructor takes two inputs: (1) offline calibration data, denoted as $\mathbf{C = [c_1,c_2,...,c_n]} \in R^{n\times d}$; and (2) the corresponding QNN accuracy under these calibrations, denoted as $\mathbf{p} = [p_1,p_2,...,p_n] \in R^n$, where $n$ is the number of calibration data and $d$ is the total number of noise rates in each calibration datum.
Our objective is to split the calibration data ($\bf{C}$) into $k$ groups such that samples in the same group ($\bm{g_i}$) have similar calibration data and performance; meanwhile, the centroid $\bm{r_i}$ is representative of its group.
Then, we can use the representative centroid $\bm{r_i}$ for noise-aware compression, and the compressed model is added to the repository.
\textit{Objective Function and Distance.}
To achieve our goal, we first define performance-aware weight, which is $\mathbf{w}=[w_1,w_2,...,w_d]$, where $w_j$ is the absolute correlation coefficient $\rho = |\frac{\text{cov}(X,Y)}{\sigma_x \sigma_y}|$ between the model performance $\mathbf{p}$ and the $j^{th}$ dimension in calibration data $\mathbf{C_{:,j}}$.
The weighted distance (denoted $dist_{L1}^w$) between two samples $\mathbf{c_i}$ and $\mathbf{c_j}$ of calibration data can be defined as
\begin{equation}
dist_{L1}^w(\mathbf{c_i},\mathbf{c_j}) = dist_{L_1}(\mathbf{w \cdot c_i}, \mathbf{w \cdot c_j})
\label{eq:distance}
\end{equation}
where $dist_{L_1}$ is the Manhattan distance between two vectors.
The objective function is designed to partition the data into $k$ clusters such that the weighted sum of Manhattan ($L_1$) absolute errors (WSAE) over all clusters is minimized. Therefore, the objective function WSAE is designed as:
\begin{equation}
WSAE = \sum_{\mathbf{g_i} \in \mathbf{G}} \sum_{\forall\mathbf{c} \in \mathbf{g_i}} dist_{L_1}^w(\mathbf{r_i},\mathbf{c})
\label{eq:cluster}
\end{equation}
$\bf{G}$ is all groups, $\bm{r_i}$ is the representative of group $\bf{g_i}\in\bf{G}$, and $\bm{c}\in \bm{g_i}$ is the candidate calibration in group $\bm{g_i}$.
Since the noise is considered in model performance, the clustering not only considers the value of noise but also the effects of noise on the given QNN model.
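The quantities above can be sketched in a few lines of NumPy. This is an illustrative sketch of the performance-aware weights, the weighted distance of Eq.~\ref{eq:distance}, and the WSAE objective of Eq.~\ref{eq:cluster}; the function names are ours and the paper's actual implementation may differ:

```python
import numpy as np

def performance_weights(C, p):
    # w_j = |corr(p, C[:, j])|: performance-aware weight for each noise rate
    w = np.zeros(C.shape[1])
    for j in range(C.shape[1]):
        sx, sy = C[:, j].std(), p.std()
        if sx > 0 and sy > 0:
            cov = ((C[:, j] - C[:, j].mean()) * (p - p.mean())).mean()
            w[j] = abs(cov / (sx * sy))
    return w

def wdist_l1(a, b, w):
    # weighted Manhattan distance between two calibration samples
    return float(np.abs(w * a - w * b).sum())

def wsae(C, labels, R, w):
    # clustering objective: total weighted L1 error to each cluster's centroid
    return sum(wdist_l1(R[labels[i]], C[i], w) for i in range(len(C)))
```

Note that a noise rate uncorrelated with accuracy gets weight near zero, so the clustering effectively ignores it, which is exactly the intent of weighting by the correlation coefficient.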
\subsection{Online Model Repository Manager}
After building the model repository, the next question is how to use it for online algorithms.
We provide the following guidance to use the repository efficiently.
\textit{Guidance 1:} The cluster results can help to judge whether to add new models to the model repository. At the offline stage, we can compute the average weighted distance $$\overline{dist_{L1}^w}_{\,i}= \frac{\sum_{\mathbf{c} \in \mathbf{g_i}} dist_{L_1}^w(\mathbf{r_i},\mathbf{c})}{n_i} $$ between the centroid $\mathbf{r_i}$ and all samples $\mathbf{c} \in \mathbf{g_i}$ in the $i^{th}$ cluster, where $n_i$ is the number of samples in the $i^{th}$ cluster. We set $\max_{i}\big(\overline{dist_{L1}^w}_{\,i}\big)$ as a threshold $th_w$ to decide whether to add a new centroid (i.e., a new representative) to the model repository. If $\min_{j}(dist_j) > th_w$, where $dist_j$ is the $dist_{L1}^w$ distance between the $j^{th}$ centroid and the current calibration data, we add the current calibration data to the model repository.
\textit{Guidance 2:} The cluster results can be utilized to predict the performance of the given model with current calibration data.
Specifically, we can obtain the average accuracy of each cluster, say $\overline{acc_i}$ for the $i^{th}$ cluster.
According to the users' requirement on QNN model accuracy $A$, we can mark cluster $g_i$ as an invalid cluster if its average accuracy $\overline{acc_i}$ is less than $A$.
Then, at the online stage, if the current calibration matches the centroid of an invalid cluster, we mark the current calibration data as invalid and output a failure report.
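Put together, the two pieces of guidance yield a simple online decision rule. The sketch below is our own hedged illustration of that rule (names such as \texttt{online\_decision} are assumptions for illustration, not the framework's API):

```python
import numpy as np

def online_decision(calib, centroids, w, th_w, mean_acc, A):
    """Decide how to handle today's calibration data.

    Guidance 1: if no centroid is within th_w (weighted L1 distance),
    the calibration is unrepresented and a new model should be generated.
    Guidance 2: if the matched cluster's average accuracy is below the
    user requirement A, output a failure report instead of deploying.
    """
    dists = [float(np.abs(w * calib - w * c).sum()) for c in centroids]
    j = int(np.argmin(dists))
    if dists[j] > th_w:
        return "generate_new_model", j
    if mean_acc[j] < A:
        return "failure_report", j
    return "reuse_model", j
```

In the common case the nearest centroid is both close and accurate enough, and the corresponding pre-compressed model is reused with no online optimization at all, which is where the speedup comes from.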
\section{Experiments}
\subsection{Experiments Setup}
\textbf{Datasets and model.} We evaluate our framework on three classification tasks. (1) We extract 4 classes (0,1,3,6) from MNIST \cite{lecun1998gradient}, using the first 90\% of samples for training and the last 200 samples for testing. To process MNIST data, we apply angle encoding \cite{larose2020robust} to encode $4 \times 4$ images into 4 qubits, and adopt 2 repeats of a VQC block (4RY +4CRY + 4RY +4RX +4CRX +4RX + 4RZ + 4CRZ +4RZ + 4CRZ) as the original model.
(2) We extract 1500 samples of the earthquake detection dataset from FDSN \cite{fsdn}. Each sample has a positive or negative label. We utilize 90\% and 10\% samples for training and testing, respectively. We encode features to 4 qubits and employ the same VQC as MNIST.
(3) We use Iris \cite{hoey2004statistical} dataset with 66.6\% and 33.4\% samples for training and testing. Features are encoded to 4 qubits with 3 repeats of VQC blocks.
\textbf{Calibrations and Environment Settings.}
We pull history calibrations from Aug. 10, 2021 to Sep. 20, 2022 from the IBM backend (ibm\_belem) using the Qiskit interface. The first 243 days are used for offline optimization and the remaining 146 days' data are used for online tests. We generate noise models from the history calibration data and integrate them into the Qiskit noise simulator for online simulation. Besides, we deploy QuCAD and evaluate its output model on a real IBM quantum processor, ibm-jakarta. Our code is based on Qiskit APIs and Torch-Quantum~\cite{hanruiwang2022quantumnas}.
\textbf{Competitors.}
We employ different approaches for comparison, including:
(1) Baseline: training in a noise-free environment without optimization.
(2) Noise-aware Training Once \cite{wang2022quantumnat}: applying noise injection on the first day for training.
(3) Noise-aware Training Everyday: extending the noise-aware training to every day.
(4) One-time Compression \cite{hu2022quantum}: applying compression with the objective of minimizing circuit length on the first day.
(5) QuCAD w/o offline: generating the models by our framework without an offline stage.
(6) QuCAD: generating the models by our framework during the offline and online stages.
\subsection{Main Results of our proposed QuCAD}
\textbf{Effective evaluation on noisy simulation.}
Table~\ref{tab:main_result} reports the comparison results of different methods on MNIST, Iris, and earthquake detection datasets.
Columns ``Mean accuracy'' and ``Variance'' give statistics of model accuracy over the 146 consecutive days.
Column ``Days over 0.8'' stands for the number of days on which the model accuracy is higher than 80\%; the ``Days over 0.7'' and ``Days over 0.5'' columns are defined similarly.
From this table, we can clearly see that our proposed QuCAD outperforms all competitors.
Specifically, on these 3 datasets, QuCAD achieves improvements of 16.32\%, 38.88\%, and 15.36\% respectively, compared with the baseline.
Although in one case the `days over 0.8' of `QuCAD w/o offline' is 1 day more than that of `QuCAD' on Iris, QuCAD has more days with accuracy over 70\%.
Besides, QuCAD has the lowest variance except for the baseline on Iris, showing the stability of QuCAD; kindly note that the mean accuracy of the baseline on Iris is much lower than that of QuCAD.
We also observed that QuCAD outperforms competitors on the number of days over different accuracy requirements, which again demonstrates the effectiveness of our framework.
Compared with noise-aware training, even one-time compression has a significant improvement on all datasets, which validates our observation in Fig. \ref{fig:loss_3d}.
On the other hand, QuCAD w/o offline outperforms one-time compression, indicating that online adaptation is needed.
Furthermore, compared with `QuCAD w/o offline', QuCAD achieves improvements of 3.36\%, 1.14\%, and 1.41\%, showing the effectiveness of offline optimization in QuCAD.
\begin{figure}
\caption{The comparison of training time and accuracy.}
\label{fig:effeciency}
\end{figure}
\textbf{Efficiency evaluation.}
We recorded the training time at the online stage in Fig. \ref{fig:effeciency} on 4-class MNIST.
In the figure, the bars represent the mean accuracy (i.e., right axis) and each point with a number corresponds to a normalized training time (i.e., left axis).
From the results, we can see that QuCAD achieves 146$\times$ and 110.3$\times$ speedups over ``compression everyday'' and ``noise-aware train everyday'', respectively.
This shows the efficiency of our framework, which mainly comes from the reduction in the number of online optimizations.
More specifically, the centroids generated offline can well represent the calibrations at the online stage if the distribution of calibrations doesn't change much.
\begin{figure}
\caption{On earthquake detection dataset, the performance of different approaches on the 7-qubit quantum device, ibm-jakarta.}
\label{fig:real_device}
\end{figure}
\textbf{Evaluation on real quantum computers.} We also evaluate QuCAD on the earthquake detection dataset using the IBM quantum processor ibm-jakarta with different calibration data at different times.
Results are shown in Fig.~\ref{fig:real_device}.
QuCAD can consistently outperform the two competitors by 13.7\% and 12.52\% on average.
Besides, we can see that the accuracy of QuCAD on different days is more stable than that of the others, which reflects that our method can adapt QNN models to fluctuating noise.
\subsection{Ablation Study}
\begin{table}[t]
\caption{Comparison of different clustering methods}
\tabcolsep 5pt
\centering
\begin{tabular}{|c|ccc|}
\hline
Method &
K &
\begin{tabular}[c]{@{}c@{}}Mean Acc.\\ of Clusters\end{tabular} &
\begin{tabular}[c]{@{}c@{}}Mean Acc. \\ of Samples\end{tabular} \\ \hline
K-Means with L2 & 6 & 72.94\% & 78.45\% \\ \hline
Proposed K-Means with $dist_{L1}^w$ & 6 & \textbf{75.83\%} & \textbf{80.68\%} \\ \hline
\end{tabular}
\label{tab:cluster}
\end{table}
\begin{figure}
\caption{Ablation Study: (a) Compression Everyday vs. QuCAD. (b) Noise-Aware vs. Noise-Agnostic Training.}
\label{fig:ablation10days}
\end{figure}
\textbf{QuCAD vs. Practical Upper Bound.}
We further investigate the performance of 8 representative days in Fig.~\ref{fig:ablation10days}(a). We apply the result of noise-aware compression every day as a practical upper bound.
From the results, we can observe that QuCAD has a competitive performance compared to the practical upper bound, showing that QuCAD can obtain approximately optimal results.
\textbf{Noise-Aware vs. Noise-Agnostic Compression.}
We further evaluate the proposed compression algorithm on those 8 representative days, as shown in Fig.~\ref{fig:ablation10days}(b). Results show that noise-aware compression can outperform noise-agnostic compression on most days, showing the effectiveness of our proposed noise-aware compression.
We also observe that both compression approaches obtained the same accuracy on 5/4 and 7/14. There are two possible reasons: (1) there is no significant difference among qubits; and (2) the noise level is small, so simple compression is enough to achieve good performance.
\textbf{Model repository constructor.}
We also did an ablation study on the clustering algorithm developed in the model repository constructor.
We compare our proposed noise-aware distance $dist_{L1}^w$ with the standard L2-norm distance.
Results reported in Table~\ref{tab:cluster} show that our method achieves 2.89\% higher mean accuracy over the 6 clusters and a 2.23\% accuracy gain over all samples.
These results indicate that our proposed method can improve the quality of the centroids and make each centroid better represent the other samples in its cluster.
\section{Conclusion}
In this work, we reveal that noise in quantum computing fluctuates over time and significantly affects the performance of quantum applications.
To battle against fluctuating noise, we further observe that a high noise level may create breakpoints in the loss surface and, in turn, noise-aware training may find inferior solutions.
By investigating the breakpoints, we observe that quantum neural network compression can be an effective remedy for the noise issue.
We build a compression-aided framework, namely QuCAD, which can automatically adapt a given model to fluctuating quantum noise.
Evaluations on MNIST, the earthquake detection dataset, and Iris show the effectiveness and efficiency of QuCAD.
In particular, QuCAD obtains stable performance on the earthquake detection task using a real IBM quantum processor.
\end{document} |
\begin{document}
\title{Boundary multifractal behaviour for harmonic functions in the ball
}
\author{Fr\'ed\'eric Bayart \and
Yanick Heurteaux
}
\institute{F. Bayart \at
Clermont Universit\'e, Universit\'e Blaise Pascal, Laboratoire de Math\'ematiques, BP 10448, F-63000 CLERMONT-FERRAND -
CNRS, UMR 6620, Laboratoire de Math\'ematiques, F-63177 AUBIERE \\
\email{[email protected]}
\and
Y. Heurteaux \at
Clermont Universit\'e, Universit\'e Blaise Pascal, Laboratoire de Math\'ematiques, BP 10448, F-63000 CLERMONT-FERRAND -
CNRS, UMR 6620, Laboratoire de Math\'ematiques, F-63177 AUBIERE \\
\email{[email protected]}\\
Tel.: +33 4 73 40 50 64\\
Fax: +33 4 73 40 79 72\\
}
\date{Received: date / Accepted: date}
\maketitle
\begin{abstract}
It is well known that if $h$ is a nonnegative harmonic function in the ball of
$\mathbb R^{d+1}$ or if $h$ is harmonic in the ball with integrable
boundary values, then the radial limit of $h$ exists at almost every point of the
boundary. In this paper, we are interested in the exceptional set of points
of divergence and in the speed of divergence at these points. In particular, we prove that
for generic harmonic functions and for any $\beta\in [0,d]$, the Hausdorff
dimension of the set of points
$\xi$ on the sphere such that
$h(r\xi)$ looks like $(1-r)^{-\beta}$ is equal to $d-\beta$.
\keywords{Boundary behaviour \and Multifractal analysis \and Genericity}
\subclass{31B25 \and 35C15}
\end{abstract}
\section{Introduction}
The story of this paper begins in 1906, when P. Fatou proved in \cite{Fat03}
that bounded
harmonic functions in the unit disk have nontangential limits almost
everywhere on the circle.
Later on, this result was improved by Hardy and Littlewood in dimension 2,
and by Wiener, Bochner and many others in arbitrary dimension (a complete
historical account can be found in \cite{Ste}). Let us also mention R. Hunt
and R. Wheeden who proved that a similar result holds for nonnegative
harmonic functions in Lipschitz domains (\cite{HW1,HW2}).
To state the result of the
nontangential convergence of harmonic functions in the ball,
we need to introduce some terminology.
Let $d\geq 1$ and let $\mathcal S_{d}$ (resp. $B_{d+1}$) be the (euclidean) unit sphere
(resp. the unit ball) in $\mathbb R^{d+1}$. The euclidean norm in $\mathbb R^{d+1}$ will
be denoted by $\|\cdot\|$. For $\mu\in\mathcal M(\mathcal S_{d})$, the set of complex Borel
measures on $\mathcal S_{d}$, the Poisson integral of $\mu$, denoted by $P[\mu]$, is
the function on $B_{d+1}$ defined
by
$$P[\mu](x)=\int_{\mathcal S_{d}}P(x,\xi)d\mu(\xi),$$
where $P(x,\xi)$ is the Poisson kernel,
$$P(x,\xi)=\frac{1-\|x\|^2}{\|x-\xi\|^{d+1}}.$$
When $f$ is a function in $L^1(\mathcal S_{d})$, we denote simply by $P[f]$ the function
$P[fd\sigma]$.
Here and elsewhere, $d\sigma$ denotes the normalized Lebesgue measure
on $\mathcal S_{d}$.
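As a quick numerical sanity check (purely illustrative, not part of the paper's argument), in the lowest dimension $d=1$ the kernel above reduces to the classical Poisson kernel of the unit disk, and its integral against the normalized measure $\sigma$ equals $1$. A short Python sketch verifying this:

```python
import numpy as np

# For d = 1, S_1 is the unit circle in R^2, sigma is dtheta/(2*pi), and
# P(x, xi) = (1 - ||x||^2) / ||x - xi||^2.  Its average over the circle
# should equal 1 for any x = r*N with 0 <= r < 1.
def poisson_mean(r, n=200_000):
    theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    xi = np.stack([np.sin(theta), np.cos(theta)], axis=1)  # points on S_1
    x = np.array([0.0, r])                                 # r times the north pole N = (0, 1)
    dist2 = ((xi - x) ** 2).sum(axis=1)
    return float(((1.0 - r * r) / dist2).mean())           # mean = integral against sigma
```

The uniform grid is an accurate quadrature here because the integrand is smooth and periodic; the computed mean agrees with $1$ to machine precision even for $r$ close to $1$.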
For any $\mu\in\mathcal M(\mathcal S_{d})$, $P[\mu]$ is a harmonic function in $B_{d+1}$
and it is well known that, for instance, every bounded harmonic function in
$B_{d+1}$ is the Poisson integral
$P[f]$ of a certain $f\in L^{\infty}(\mathcal S_{d})$. It is also well known that every
nonnegative harmonic function in $B_{d+1}$ is the Poisson integral $P[\mu]$ of a
positive finite measure $\mu\in\mathcal M(\mathcal S_{d})$.
The Fatou theorem for Poisson integrals of $L^1$-functions says that,
given a function $f\in L^1(\mathcal S_{d})$,
then $P[f](ry)$ tends to $f(y)$ for almost every $y\in\mathcal S_{d}$ when $r$
increases to 1. More generally, if $\mu\in\mathcal M(\mathcal S_{d})$, $P[\mu](ry)$
tends to $\frac{d\mu}{d\sigma}(y)$ almost everywhere and in fact, the
limit exists for nontangential access.
In this paper, we are interested in the radial behaviour on exceptional sets,
and especially in the
following questions. How quickly can $P[f](ry)$ grow? For a prescribed growth
$\tau(r)$,
how big can be the sets of $y\in\mathcal S_{d}$ such that
$\limsup_{r\to 1}|P[f](ry)|/\tau(r)=+\infty$?
It is easy to see that the growth cannot be too fast. Indeed, the Poisson
kernel satisfies, for any $y,\xi\in\mathcal S_{d}$,
$$P(ry,\xi)\leq\frac2{(1-r)^d},$$
so that for any $f\in L^1(\mathcal S_{d})$, for any $y\in\mathcal S_{d}$ and any $r\in(0,1)$,
$$|P[f](ry)|\leq \frac{2\|f\|_1}{(1-r)^d}.$$
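The kernel bound follows from two elementary estimates (a routine verification, included here for completeness): for $y,\xi\in\mathcal S_{d}$ and $r\in(0,1)$,
$$\|ry-\xi\|\geq \|\xi\|-r\|y\|=1-r\quad\text{and}\quad 1-r^2=(1-r)(1+r)\leq 2(1-r),$$
so that
$$P(ry,\xi)=\frac{1-r^2}{\|ry-\xi\|^{d+1}}\leq\frac{2(1-r)}{(1-r)^{d+1}}=\frac{2}{(1-r)^d},$$
and integrating against $|f|\,d\sigma$ gives the second inequality.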
This motivates us to introduce, for a fixed $\beta\in(0,d)$ and any
$f\in L^1(\mathcal S_{d})$, the exceptional set
$$\mathcal E(\beta,f)=\left\{y\in\mathcal S_{d};\ \limsup_{r\to 1}\frac{|P[f](ry)|}
{(1-r)^{-\beta}}=+\infty\right\},$$
and we ask for the size of $\mathcal E(\beta,f)$. To measure the size of
subsets of $\mathcal S_{d}$, we
shall use the notion of Hausdorff dimension (see Section \ref{SECPREL}
for precise definitions).
Our first main result is the following.
\begin{theorem}\label{THMMAIN1}
Let $\beta\in[0,d]$ and let $f\in L^1(\mathcal S_{d})$. Then
$\dim_\mathcal H\big(\mathcal E(\beta,f)\big)\leq d-\beta$. Conversely,
given a subset $E$ of $\mathcal S_{d}$ such that $\dim_{\mathcal H} (E)<d-\beta$,
there exists $f\in L^1(\mathcal S_{d})$ such that $E\subset \mathcal E(\beta,f)$.
\end{theorem}
The first part of Theorem \ref{THMMAIN1} has already been obtained by D. Armitage in \cite{Ar81} in the context of Poisson integrals on the upper half-space
(see also \cite{Wat80} for analogous results regarding solutions of the heat equation).
However, we will produce a complete proof of Theorem \ref{THMMAIN1}. Our method of proof differs substantially from that of \cite{Ar81}.
Moreover, it provides a more general result (see Theorem \ref{THMAUBRYLIKE} below). It seems that this last statement cannot be obtained
from Armitage's work without adding assumptions on $\varphi$ and $\tau$.
Our second task is to perform a multifractal analysis of the radial behaviour of harmonic functions,
as is done in \cite{BH11}, \cite{BH11b} for the divergence of Fourier series. For a given function
$f\in L^1(\mathcal S_{d})$ and a given $y\in\mathcal S_{d}$, we define the real number $\beta(y)$ as the infimum of the real numbers $\beta$ such that $|P[f](ry)|=O\left( (1-r)^{-\beta}\right)$. The level sets of the function $\beta$ are defined by
\begin{eqnarray*}
E(\beta,f)&=&\left\{y\in\mathcal S_{d};\ \beta(y)=\beta\right\}\\
&=&\left\{y\in\mathcal S_{d};\ \limsup_{r\to1}\frac{\log |P[f](ry)|}{-\log(1-r)}=\beta\right\}.
\end{eqnarray*}
We can ask for which values of $\beta$ the sets
$E(\beta,f)$ are non-empty. This set of values will be called the domain of definition of the spectrum of singularities of $f$.
If $\beta$ belongs to the domain of definition of the spectrum of singularities, it is also interesting to estimate the
Hausdorff dimension of the sets $E(\beta,f)$. The function $\beta\mapsto
\dim_\mathcal{H}(E(\beta,f))$ will be called the spectrum of singularities of the
function $f$.
Theorem \ref{THMMAIN1} ensures that $\dim_\mathcal{H}(E(\beta,f))\leq d-\beta$ and our second main result
is that a \emph{typical} function $f\in L^1(\mathcal S_{d})$ satisfies $\dim_\mathcal{H}(E(\beta,f))=d-\beta $ for \emph{any} $\beta\in[0,d]$.
In particular, such a function $f$ has a multifractal behavior, in the sense that the domain of definition of its spectrum of singularities contains an interval with non-empty interior.
\begin{theorem}\label{THMMAIN2}
For quasi-all functions $f\in L^1(\mathcal S_{d})$, for any $\beta\in[0,d]$, $$\dim_\mathcal{H}\big(E(\beta,f)\big)=d-\beta.$$
\end{theorem}
The terminology ``quasi-all'' used here is relative to the Baire category theorem. It means that this property is true for a residual
set of functions in $L^1(\mathcal S_{d})$.
\noindent \textsc{Notations.} Throughout the paper, $\mathbf N=(0,\dots,0,1)$ will denote the north pole of $\mathcal S_{d}$.
The letter $C$ will denote a positive constant whose value may change from line to line. This value may depend on the dimension $d$, but it will never depend on the other parameters which are involved.
\noindent \textsc{Acknowledgements.} We thank the referee for his/her careful reading and for having provided to us references \cite{Ar81} and \cite{Wat80}.
\section{Preliminaries}\label{SECPREL}
In this section, we survey some results regarding Hausdorff measures. We refer to \cite{Falc} and to \cite{Mat95} for more on this subject. Let $(X,d)$ be a metric space such that,
for every $\rho>0$, the space $X$ can be covered by a countable number of balls
with diameter less than $\rho$. If $B=B(x,r)$ is a ball in $X$ and $\lambda>0$,
$|B|$ denotes the diameter of $B$ and $\lambda B$ denotes the ball $B$
scaled by a factor $\lambda$, i.e. $\lambda B=B(x,\lambda r)$.
A dimension function $\varphi:\mathbb R_+\to\mathbb R_+$ is a continuous nondecreasing function satisfying $\varphi(0)=0$. Given $E\subset X$, the $\varphi$-Hausdorff outer measure of $E$ is defined by
$$\mathcal H^{\varphi}(E)=\lim_{\varepsilon\to 0}\inf_{r\in R_\varepsilon(E)}\sum_{B\in r}\varphi(|B|),$$
where $R_\varepsilon(E)$ is the set of countable coverings of $E$ with balls $B$ with diameter $|B|\leq\varepsilon$.
When $\varphi_s(x)=x^s$, we write for short $\mathcal H^s$ instead of $\mathcal H^{\varphi_s}$. The Hausdorff dimension of a set $E$
is
$$\dim_{\mathcal H}(E):=\sup\{s>0;\ \mathcal H^s (E)>0\}=\inf\{s>0;\ \mathcal H^s(E)=0\}.$$
We will need to construct on $\mathcal S_{d}$ a family of subsets with prescribed Hausdorff dimension.
For this we shall use results of \cite{BV06}. Recall that a function
$\varphi:\mathbb R_+\to\mathbb R_+$ is doubling provided there exists $\lambda>1$ such that,
for any $x>0$, $\varphi(2x)\leq\lambda \varphi(x)$. From now on, we suppose that the metric space $(X,d)$ supports
a doubling dimension function $\varphi$ such that
$$\frac1C \varphi(|B|)\leq \mathcal H^\varphi(B)\leq C\varphi(|B|)$$
where $C$ is a positive constant independent of $B$.
The previous assumption is satisfied when $X=\mathcal S_{d}$, endowed with the distance inherited
from $\mathbb R^{d+1}$, and $\varphi(x)=x^{d}$.
Given a dimension function $\psi$ and a ball $B=B(x,r)$, we denote by $B^\psi$ the ball
$B^\psi=B(x,\psi^{-1}\circ\varphi(r))$. The following mass transference principle of \cite{BV06} will be used.
\begin{lemma}[The mass transference principle]\label{LEMMTP}
Let $(B_i)$ be a sequence of balls in $X$ whose radii go to zero. Let $\psi$ be a dimension
function such that $\psi(x)/\varphi(x)$ is monotonic and suppose that, for any ball $B$ in $X$,
$$\mathcal H^\varphi\left(B\cap\limsup_{i\to+\infty}B_i\right)=\mathcal H^\varphi(B).$$
Then, for any ball $B$ in $X$,
$$\mathcal H^\psi\left(B\cap\limsup_{i\to+\infty}B_i^\psi\right)=\mathcal H^\psi(B).$$
\end{lemma}
Finally, the following basic covering lemma due to Vitali will be required (see \cite{Mat95}).
\begin{lemma}[The $5r$-covering lemma]
Every family $\mathcal F$ of balls with uniformly bounded diameters in a
separable metric space $(X,d)$
contains a disjoint subfamily $\mathcal G$ such that
$$\bigcup_{B\in\mathcal F}B\subset \bigcup_{B\in\mathcal G}5B.$$
\end{lemma}
\section{Majorisation of the Hausdorff dimension}
Let $f\in L^1(\mathcal S_{d})$. We intend to show that $P[f](r\cdot)$ cannot grow too fast on sets with large Hausdorff dimension. More generally, we shall do this for $\mu\in\mathcal M(\mathcal S_{d})$ and $P[\mu]$ instead of $P[f]$. If $y \in\mathcal S_{d}$ and $\delta>0$, we introduce
$$\kappa(y,\delta)=\big\{\xi\in\mathcal S_{d};\ \|\xi-y\|<\delta\big\}$$
the open spherical cap on $\mathcal S_{d}$ with center $y$ and radius $\delta>0$. The set $\kappa(y,\delta)$ is just the ball with center $y$ and radius $\delta$ in the metric space $(\mathcal S_{d},\|\cdot\|)$. Let us also define the slice
$$\mathcal S(y,\delta_1,\delta_2)=\big\{\xi\in\mathcal S_{d};\ \delta_1\leq \|\xi-y\|<\delta_2\big\}$$
where $0\leq\delta_1<\delta_2$.
The starting point of our argument is a result linking the radial behaviour of $P[\mu]$ to the Hardy-Littlewood maximal function. More precisely, it is well known that if $y\in\mathcal S_{d}$, then
$$\sup_{r\in(0,1)}\big|P[\mu](ry)\big|\leq \sup_{\delta>0}\frac{|\mu|(\kappa(y,\delta))}{\sigma(\kappa(y,\delta))}$$
(see for example \cite{ABR}). Our aim is to control, for a fixed $r$ close to 1, the minimal size of the caps which come into play on the
right-hand side.
\begin{lemma}\label{LEMHL}
Let $\mu\in \mathcal M(\mathcal S_{d})$, $r\in(0,1)$ and $y\in \mathcal S_{d}$. There exists $\delta\geq 1-r$ such that
$$\big|P[\mu](ry)\big|\leq C\frac{|\mu|(\kappa(y,\delta))}{\sigma(\kappa(y,\delta))},$$
where $C$ is a constant independent of $\mu$, $r$ and $y$.
\end{lemma}
\begin{proof}
Replacing $\mu$ by $|\mu|$, we may assume that $\mu$ is positive. Moreover, without loss of generality, we may assume that $y=\mathbf N$ is the north pole. Observe that
$$P[\mu](r\mathbf N)=\int_{\mathcal S_{d}}P(r\mathbf N,\xi)d\mu(\xi),$$
with
\begin{eqnarray*}
P(r\mathbf N,\xi)&=&\frac{1-r^2}{\|r\mathbf N-\xi\|^{d+1}}\\
&=&\frac{1-r^2}{(1-2r\xi_{d+1}+r^2)^{(d+1)/2}}.
\end{eqnarray*}
Observe also that $\|\xi-\mathbf N\|^2=2(1-\xi_{d+1})$ if $\xi\in \mathcal S_{d}$. In particular, $P(r\mathbf N, \xi)$ just depends on $\|\xi-\mathbf N\|$ and $r$. Moreover, $P(r\mathbf N,\xi)$ decreases when $\|\xi-\mathbf N\|$ increases, $\xi$ staying on $\mathcal S_{d}$.
We shall approximate $\xi\mapsto P(r\mathbf N,\xi)$ by functions which are constant on slices.
The function $\xi\mapsto P(r\mathbf N,\xi)$ is harmonic and nonnegative in the ball
$$\{\xi\in\mathbb R^{d+1};\ \|\xi-\mathbf N\|<1-r\}.$$ By the Harnack inequality, there exists $C_0>0$ (which does not depend on $r$) such that, for any $\xi\in\mathbb R^{d+1}$ with $\|\xi-\mathbf N\|\leq (1-r)/2$,
$$P(r\mathbf N,\xi)\geq C_0P(r\mathbf N,\mathbf N).$$
Necessarily, $C_0$ belongs to $(0,1)$. We then define an integer $k>0$ and a finite sequence $\delta_0,\dots,\delta_k$ by
\begin{itemize}
\item $\delta_0=0$;
\item $\delta_1=(1-r)/2$;
\item $\delta_{j+1}$ (if it exists) is the real number in $[\delta_j,2]$ such that $P(r\mathbf N,\xi^{j+1})=C_0P(r\mathbf N,\xi^j)$ where $\xi^j$ (resp. $\xi^{j+1}$) is an arbitrary point of
$\mathcal S_{d}$ such that $\|\xi^j-\mathbf N\|=\delta_j$ (resp. $\|\xi^{j+1}-\mathbf N\|=\delta_{j+1}$) (remember that $P(r\mathbf N,\xi)$ only depends on $\|\xi-\mathbf N\|$);
\item $\delta_{j+1}=2$ and $k=j+1$ otherwise.
\end{itemize}
Observe that the sequence is well defined and that, by compactness, the process ends up after a finite number of steps.
We set $c_j=P(r\mathbf N,\xi^j)$, $0\leq j \leq k-1$ where $\xi^j$ is an arbitrary point in $\mathcal S_{d}$ such that $\|\mathbf N-\xi^j\|=\delta_j$. Let us also remark that, if $\xi\in\mathcal S_{d}$, $\xi\neq-\mathbf N$,
$$C_0\sum_{j=0}^{k-1}c_j \mathbf 1_{\mathcal S(\mathbf N,\delta_j,\delta_{j+1})}(\xi)\leq P(r\mathbf N,\xi)\leq \sum_{j=0}^{k-1}c_j \mathbf 1_{\mathcal S(\mathbf N, \delta_j,\delta_{j+1})}(\xi).$$
The sequence $(c_j)_{j\ge 0}$ is decreasing. Thus, we can rewrite the step function using only caps as
$$\sum_{j=0}^{k-1}c_j \mathbf 1_{\mathcal S(\mathbf N,\delta_j,\delta_{j+1})}=
\sum_{j=1}^{k}d_j \mathbf 1_{\kappa(\mathbf N,\delta_j)}$$
where the real numbers $d_j$ are \emph{positive}. In fact, $d_1=c_0$ and $d_j=c_{j-1}-c_j$ if $j\ge 2$.
Then we get
\begin{eqnarray}\label{EQLEMHL}
C_0\sum_{j=1}^{k} d_j\mathbf 1_{\kappa(\mathbf N,\delta_j)}\leq P(r\mathbf N,\xi)\leq \sum_{j=1}^{k}d_j \mathbf 1_{\kappa(\mathbf N,\delta_j)}.
\end{eqnarray}
We integrate the right-hand inequality with respect to $\mu$ to obtain
\begin{eqnarray*}
P[\mu](r\mathbf N)&\leq&\sum_{j=1}^k d_j \mu(\kappa(\mathbf N,\delta_j))\\
&\leq&\sup_{j=1,\dots, k} \frac{\mu(\kappa(\mathbf N,\delta_j))}{\sigma(\kappa(\mathbf N,\delta_j))}\sum_{j=1}^k d_j \sigma(\kappa(\mathbf N,\delta_j))\\
&\leq&C_0^{-1}\sup_{j=1,\dots, k} \frac{\mu(\kappa(\mathbf N,\delta_j))}{\sigma(\kappa(\mathbf N,\delta_j))}\int_{\mathcal S_{d}}P(r\mathbf N,\xi)d\sigma(\xi)
\end{eqnarray*}
where the last inequality is obtained by integrating the left part of (\ref{EQLEMHL}) over $\mathcal S_{d}$ with respect to the surface measure $\sigma$. This yields the lemma, since $\int_{\mathcal S_{d}}P(r\mathbf N,\xi)d\sigma(\xi)= 1$, except that
we have found a cap with radius greater than $(1-r)/2$ instead of $1-r$. Fortunately, it is easy
to dispense with the factor $1/2$. Indeed,
$$\frac{\mu(\kappa(\mathbf N,\delta))}{\sigma(\kappa(\mathbf N,\delta))}\leq C\frac{\mu(\kappa(\mathbf N,\delta))}{\sigma(\kappa(\mathbf N,2\delta))}\leq C\frac{\mu(\kappa(\mathbf N,2\delta))}{\sigma(\kappa(\mathbf N,2\delta))}.$$
\end{proof}
The previous lemma is the main step to obtain an upper bound of the Hausdorff dimension of the sets where $P[\mu](r\cdot)$ behaves badly.
\begin{theorem}\label{THMAUBRYLIKE}
Let $\mu\in\mathcal M(\mathcal S_{d})$ and let $\tau:(0,1)\to(0,+\infty)$ be nonincreasing, with $\lim_{x\to 0^+}\tau(x)=+\infty$.
Let us define
$$\mathcal E(\tau,\mu)=\left\{y\in\mathcal S_{d};\ \limsup_{r\to 1}\frac{|P[\mu](ry)|}{\tau(1-r)}=+\infty\right\}.$$
Let $\varphi:(0,+\infty)\to(0,+\infty)$ be a dimension function satisfying $\varphi(s)=O(\tau(s)s^d)$.
Then $$\mathcal H^\varphi\big(\mathcal E(\tau,\mu)\big)=0.$$
\end{theorem}
\begin{proof}
For any $M>1$, we introduce
$$\mathcal E_M=\left\{y\in\mathcal S_{d};\ \limsup_{r\to 1}\frac{|P[\mu](ry)|}{\tau(1-r)}> M\right\}.$$
Let $\varepsilon>0$ and $y\in \mathcal E_M$. The definition of $\mathcal E_M$ and Lemma \ref{LEMHL} ensure that we can find $r_y\in(0,1)$, as close to 1 as we want, and a cap $\kappa_y=\kappa(y,\delta_y)$ with $\delta_y\geq 1-r_y$ satisfying
\begin{eqnarray}\label{EQTAU}
M\tau(1-r_y)\leq |P[\mu](r_yy)|\leq C\frac{|\mu|(\kappa_y)}{\sigma(\kappa_y)}.
\end{eqnarray}
Observe that
$$\sigma(\kappa_y)\le\frac{C|\mu|(\mathcal S_{d})}{M\tau(1-r_y)}.$$
It follows that $\delta_y\to 0$ when $r_y\to 1$. We can then always ensure that $|\kappa_y|\le\varepsilon$.
The family $(\kappa_{y})_{y\in \mathcal E_M}$ is an $\varepsilon$-covering of $\mathcal E_M$.
By the 5r-covering lemma, one can extract from it a countable family of disjoint caps $(\kappa_{y_i})_{i\in\mathbb N}$ such that $\mathcal E_M\subset\bigcup_i 5\kappa_{y_i}$. Inequality (\ref{EQTAU}) implies that
$$M\sum_i \tau(1-r_{y_i})\sigma(\kappa_{y_i})\leq C\|\mu\|.$$
If we remark that $|5\kappa_{y_i}|\ge\delta_{y_i}\ge1-r_{y_i}$, we can conclude that
$$\sum_i \tau(|5\kappa_{y_i}|)|5\kappa_{y_i}|^d\leq \frac{C}M \|\mu\|.$$
Our assumption on $\phi$ ensures that $\mathcal H^{\phi}(\mathcal E_M)\leq C(\phi,\mu)/M$. The result follows from the equality $\mathcal E(\tau,\mu)=\bigcap_{M>1}\mathcal E_M$.
\end{proof}
Applying this to the function $\tau(s)=s^{-\beta}$, we get the first half of Theorem \ref{THMMAIN1}.
\begin{corollary}\label{CORAUBRYLIKE}
For any $\beta\in[0,d]$, for any $\mu\in \mathcal M(\mathcal S_{d})$, $\dim_{\mathcal H}\big(\mathcal E(\beta,\mu)\big)\leq d-\beta$.
\end{corollary}
\begin{remark}The corresponding result for the divergence of Fourier series was obtained in
\cite{Aub06} using the Carleson-Hunt theorem (see also \cite{BH11b} for the $L^1$-case). Our proof
in this context is much more elementary, since we do not need the maximal inequality for the
Hardy-Littlewood maximal function.
\end{remark}
\section{Minorisation of the Hausdorff dimension}
In this section, we prove the converse part of Theorem \ref{THMMAIN1}. We first need a technical lemma on the Poisson kernel.
\begin{lemma}\label{LEMPOISSONKERNEL}
There exists a constant $C>0$ such that, for any $r\in(1/2,1)$ and any $y\in\mathcal S_{d}$,
$$\int_{\kappa(y,1-r)}P(ry,\xi)d\sigma(\xi)\geq C.$$
\end{lemma}
\begin{proof} We may assume $y=\mathbf N$. Let $\rho=1-r$. A generic point $x=(x_1,\cdots,x_{d+1})\in \mathbb R^{d+1}$ will be denoted by $x=(x',x_{d+1})$ with $x'\in\mathbb R^d$. In particular, $x\in\kappa(\mathbf N,\rho)$ if and only if $\|x'\|^2+x_{d+1}^2=1$ and
$\|x'\|^2+(1-x_{d+1})^2<\rho^2$.
Let $\mathcal C$ be the cylinder
$$\mathcal C=\left\{ x\in\mathbb R^{d+1}\ ;\ \|x'\|^2<\rho^2/2\ \mbox{and}\ 1-2\rho< x_{d+1}< 1\right\}.$$
It is not hard to show that $\mathcal S_d\cap \overline{\mathcal C}\subset\kappa(\mathbf N,\rho)$ when $1/2<r<1$. We
now define two harmonic functions: $h$ is the harmonic function in $\mathcal C$ such that
$h(x)=1$ if $x\in\partial \mathcal C\cap\{ x_{d+1}=1\}$ and $h(x)=0$ if $x\in\partial \mathcal C\cap\{ x_{d+1}<1\}$;
$u$ is the harmonic function in $B_{d+1}$ such that $u=1$ on $\kappa(\mathbf N,\rho)$ and
$u=0$ elsewhere on $\mathcal S_d$ ($h$ and $u$ are the Perron-Wiener-Brelot solutions of the Dirichlet problem with the given boundary data).
We claim that $h\leq u$ on $\partial(\mathcal C\cap B_{d+1})$. Indeed, we can decompose
$\partial (\mathcal C\cap B_{d+1})$ into $E\cup F$, with $E\subset \mathcal S_d\cap\overline{\mathcal C}$
and $F\subset \partial \mathcal C\cap\{x_{d+1}<1\}$. Now, $u=1\geq h$ on $E$ and $u\ge 0=h$ on $F$. By the maximum principle in $\mathcal C\cap B_{d+1}$, we deduce that $u(x)\geq h(x)$ for any $x\in \mathcal C\cap B_{d+1}$. In particular this holds for $x=(1-\rho)\mathbf N=r\mathbf N$, so that
$$\int_{\kappa(\mathbf N,\rho)}P(r\mathbf N,\xi)d\sigma(\xi)\ge h(r\mathbf N).$$
On the other hand, $\mathcal C$ is just the translation and dilation of a fixed domain: $\mathcal C=\mathbf N+\rho\mathcal U$, where
$$\mathcal U=\left\{ x\in\mathbb R^{d+1}\ ;\ \|x'\|^2<1/2\ \mbox{and}\ -2< x_{d+1}< 0\right\}.$$
Thus the quantity $h(r\mathbf N)$ is strictly positive and independent of $r$. We can then take $C=h(r\mathbf N)$.
\begin{center}
\includegraphics[width=13cm]{multifimageeps.eps}
\end{center}
\end{proof}
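As a numerical sanity check of Lemma \ref{LEMPOISSONKERNEL} (not part of the argument), the case $d=1$ can be computed directly: the cap $\kappa(\mathbf N,1-r)$ is an arc of the circle and the kernel is the classical Poisson kernel of the disk. The following Python sketch, with names of our choosing, estimates the integral by the midpoint rule and confirms a lower bound uniform in $r$.

```python
import math

def poisson_cap_integral(r, m=200_000):
    """d = 1 case: integral of the Poisson kernel of the unit disk over
    the cap kappa(N, 1-r), i.e. the arc {e^{it} : |e^{it} - 1| < 1-r}."""
    half = 2 * math.asin((1 - r) / 2)   # half-width of the arc in t
    h = 2 * half / m
    total = 0.0
    for k in range(m):
        t = -half + (k + 0.5) * h       # midpoint rule
        kernel = (1 - r * r) / (2 * math.pi * (1 - 2 * r * math.cos(t) + r * r))
        total += kernel * h
    return total

values = [poisson_cap_integral(r) for r in (0.9, 0.99, 0.999)]
assert all(0.3 < v < 1.0 for v in values)   # bounded below, uniformly in r
```

The computed values stay close to $1/2$ as $r\to 1$, matching the constant $C=h(r\mathbf N)$ of the proof being independent of $r$.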
Here is the converse part of Theorem \ref{THMMAIN1}.
\begin{theorem}\label{THMAUBRYLIKE2}
Let $E\subset\mathcal S_{d}$, let $\phi$ be a dimension function and let $\tau:(0,1)\to(0,+\infty)$
be nonincreasing with $\lim_{x\to 0^+}\tau(x)=+\infty$. Suppose that $\mathcal H^{\phi}(E)=0$ and that $\tau(s)=O\big(s^{-d}\phi(s)\big)$. Then there exists $f\in L^1(\mathcal S_{d})$ such that, for any $y \in E$,
$$\limsup_{r\to 1}\frac{P[f](ry)}{\tau(1-r)}=+\infty.$$
\end{theorem}
A remarkable feature of Theorem \ref{THMAUBRYLIKE} and Theorem \ref{THMAUBRYLIKE2} is that they are sharp: if $\phi(s)=\tau(s)s^d$ is a dimension function and
$$\mathcal E(\tau,f)=\left\{y\in\mathcal S_{d};\ \limsup_{r\to 1}\frac{|P[f](ry)|}{\tau(1-r)}=+\infty\right\},$$
then
\begin{enumerate}
\item for any $f\in L^1(\mathcal S_{d})$, $\mathcal H^\phi\big(\mathcal E(\tau,f)\big)=0$;
\item if $E$ is a set satisfying $\mathcal H^\phi(E)=0$, we can find $f\in L^1(\mathcal S_{d})$ such that
$\mathcal E(\tau,f)\supset E$.
\end{enumerate}
\begin{proof}[Proof of Theorem \ref{THMAUBRYLIKE2}]
Let $j\ge 1$. Since $\mathcal H^\phi(E)=0$, we can find a covering $\mathcal R_j$ of $E$
by caps with diameter less than $2^{-j}$ and such that $\sum_{\kappa\in\mathcal R_j} \phi(|\kappa|)\leq 2^{-j}.$ We collect together the caps with approximately the same size. Precisely, if $n\geq 1$, let
$$\mathcal C_n=\left\{\kappa\in\bigcup_j \mathcal R_j;\ 2^{-(n+1)}<|\kappa|\leq 2^{-n}\right\}.$$
Let also $E_n=\bigcup_{\kappa\in\mathcal C_n}\kappa$ so that $E\subset\limsup_n E_n$ and
$$\sum_{n\ge 1}\sum_{\kappa\in\mathcal C_n}\phi(|\kappa|)\leq\sum_{j\ge 1} \sum_{\kappa\in\mathcal R_j}
\phi(|\kappa|)\leq 1.$$
In particular, there exists a sequence $(\omega_n)_{n\ge 1}$ tending to infinity such that
$$\sum_{n\ge 1} \sum_{\kappa\in\mathcal C_n}\omega_n \phi(|\kappa|)<+\infty.$$
For any $n\geq 1$, let $x_{n,1},\dots,x_{n,{m_n}}$ be the centers of the caps
appearing in $\mathcal C_n$ and let
$\kappa_{n,i}=\kappa(x_{n,i},2\cdot 2^{-n})$.
We define
$$f=\sum_{n\ge 1} \sum_{i=1}^{m_n}\omega_n \tau(2^{-n})\mathbf 1_{\kappa_{n,i}}.$$
$f$ belongs to $L^1(\mathcal S_{d})$. Indeed,
\begin{eqnarray*}
\|f\|_1&\leq&C\sum_{n\ge 1}\sum_{i=1}^{m_n}\omega_n \tau(2^{-n})(2^{-n})^d\\
&\leq&C\sum_{n\ge 1}\sum_{i=1}^{m_n}\omega_n\phi(2^{-n})\\
&\leq&C\sum_{n\ge 1} \sum_{\kappa\in\mathcal C_n}\omega_n \phi(|\kappa|)<+\infty.
\end{eqnarray*}
Moreover, let $y\in E_n$ and let $r=1-2^{-n}$. Let also
$\kappa_y=\kappa(x_{n,i},\delta_{n,i})\in\mathcal C_n$
such that $y$ belongs to $\kappa_y$. It is clear that $\|y-x_{n,i}\|\le\delta_{n,i}\le 2^{-n}$ so that
$\kappa(y,2^{-n})\subset \kappa_{n,i}$. By the positivity of $f$ and of the Poisson kernel,
\begin{eqnarray*}
P[f](ry)&\geq&\int_{\kappa(y,2^{-n})}\omega_n\tau(2^{-n})P(ry,\xi)d\sigma(\xi)\\
&\geq&C \omega_n\tau(1-r)
\end{eqnarray*}
where $C$ is the constant that appears in Lemma \ref{LEMPOISSONKERNEL}.
Thus, provided $y$ belongs to $\limsup_n E_n$, we get
$$\limsup_{r\to 1}\frac{P[f](ry)}{\tau(1-r)}=+\infty,$$
which is exactly what we need.
\end{proof}
\section{Construction of saturating functions}
In this section, we turn to the construction of functions in $L^1(\mathcal S_{d})$ having multifractal behaviour. Our first step is a construction of a sequence of nets in $\mathcal S_{d}$ which play the same role as dyadic numbers in the interval.
\begin{lemma}
There exists a sequence $(\mathcal R_n)_{n\geq 1}$ of finite subsets of $\mathcal S_{d}$ satisfying
\begin{itemize}
\item $\mathcal R_n\subset \mathcal R_{n+1}$;
\item $\bigcup_{x\in\mathcal R_n}\kappa(x,2^{-n})=\mathcal S_{d}$;
\item $\mathop{\rm card}\,(\mathcal R_n)\leq C2^{nd}$;
\item for any $x,y\in\mathcal R_n$ with $x\neq y$, $|x-y|\geq 2^{-n}$.
\end{itemize}
\end{lemma}
\begin{proof}
Let $\mathcal R_0=\varnothing$ and let us explain how to construct $\mathcal R_{n+1}$
from $\mathcal R_n$. $\mathcal R_{n+1}$ is a maximal subset of $\mathcal S_{d}$ containing $\mathcal R_n$
and such that any distinct points in $\mathcal R_{n+1}$ have their distance greater than or
equal to $2^{-(n+1)}$. Then $\bigcup_{x\in\mathcal R_{n+1}}\kappa\left(x,2^{-(n+1)}\right)=\mathcal S_{d}$
by maximality of $\mathcal R_{n+1}$. Then, comparing surface measures and using that the caps $\kappa\left(x,2^{-(n+2)}\right)$,
$x\in\mathcal R_{n+1}$, are pairwise disjoint, we get
$$\mathop{\rm card}\,(\mathcal R_{n+1})\times C2^{-(n+2)d}\leq 1.$$
\end{proof}
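The greedy construction in this proof is easy to emulate. The following Python sketch (an illustration with names of our choosing, run on a finite sample of the circle $\mathcal S_1$ rather than the whole sphere) builds a maximal $\varepsilon$-separated subset and checks the separation, covering and cardinality properties used in the lemma.

```python
import math, random

def greedy_net(points, eps):
    """Greedy maximal eps-separated subset of `points`: net points are
    pairwise >= eps apart, and (by maximality) every point of `points`
    lies within eps of the net -- the two properties used in the lemma."""
    net = []
    for p in points:
        if all(math.dist(p, q) >= eps for q in net):
            net.append(p)
    return net

# sample of the circle S_1 and a net at scale eps = 2**-4
rng = random.Random(0)
pts = [(math.cos(t), math.sin(t))
       for t in (rng.uniform(0, 2 * math.pi) for _ in range(5000))]
eps = 2 ** -4
net = greedy_net(pts, eps)
# separation: distinct net points are at distance >= eps
assert all(math.dist(p, q) >= eps
           for i, p in enumerate(net) for q in net[:i])
# covering: the caps kappa(x, eps), x in the net, cover the sample
assert all(any(math.dist(p, q) < eps for q in net) for p in pts)
# cardinality bound of order (1/eps)^d, with d = 1 here
assert len(net) <= 10 * 2 ** 4
```

The nesting $\mathcal R_n\subset\mathcal R_{n+1}$ of the lemma would be obtained by seeding the greedy pass at scale $2^{-(n+1)}$ with the net already built at scale $2^{-n}$.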
From now on, we fix a sequence $(\mathcal R_n)_{n\ge 0}$ as in the previous lemma. Our sets with big Hausdorff dimension will be based on open caps centered at points of $\mathcal R_n$. Precisely, let $\alpha>1$ and let $N_{n,\alpha}=[n/\alpha]+1$ where $[n/\alpha]$ denotes the integer part of $n/\alpha$. We introduce
$$D_{n,\alpha}=\bigcup_{x\in \mathcal R_{N_{n,\alpha}}}\kappa\left(x,2^{-n}\right).$$
\begin{lemma}\label{LEMHD}
Let $\alpha>1$ and let $(n_k)_{k\ge 0}$ be a sequence of integers growing to infinity. Then
$$\mathcal H^{d/\alpha}\left(\limsup_{k\to +\infty}D_{n_k,\alpha}\right)=+\infty.$$
\end{lemma}
\begin{proof}
This follows from an application of the mass transference principle (Lemma \ref{LEMMTP}), applied with the function
$\psi(x)=x^{d/\alpha}$ and $\phi(x)=x^d$. The key points are that
$$\bigcup_{x\in\mathcal R_{N_{n,\alpha}}}\kappa\left(x,2^{-N_{n,\alpha}}\right)=\mathcal S_{d}$$
and that $\kappa\left(x,2^{-n}\right)\supset \kappa\left(x,\psi^{-1}\circ \phi(2^{-N_{n,\alpha}})\right)$ since $\alpha N_{n,\alpha}\geq n$.
\end{proof}
We now construct saturating functions step by step.
\begin{lemma}\label{LEMSAT}
Let $n\geq 1$. There exists a nonnegative function $f_n\in L^1(\mathcal S_{d})$, satisfying $\|f_n\|_1=1$, such that, for any $\alpha>1$,
for any $y\in D_{n,\alpha}$,
$$P[f_n](r_ny)\geq\frac C{n}2^{(n-N_{n,\alpha})d},$$
where $1-r_n={2^{-n}}$, $N_{n,\alpha}=[n/\alpha]+1$ and $C$ is independent of $n$ and $\alpha$.
\end{lemma}
\begin{proof}
We define $\tilde f_n$ by
$$\tilde f_n:=\frac 1{n+1}\sum_{N=1}^{n+1} \sum_{x\in\mathcal R_N}2^{(n-N)d}\mathbf 1_{\kappa(x,2\cdot 2^{-n})}.$$
The triangle inequality ensures that
\begin{eqnarray*}
\|\tilde f_n\|_1&\leq&\frac C{n+1}\sum_{N=1}^{n+1} \mathop{\rm card}\,(\mathcal R_N)2^{(n-N)d}2^{-nd}\\
&\leq &C.
\end{eqnarray*}
Let $y\in D_{n,\alpha}$ and let $x\in\mathcal R_{N_{n,\alpha}}$ such that
$y\in \kappa\left(x,2^{-n}\right)$. Observe that
$\kappa\left(y,2^{-n}\right)\subset \kappa\left(x,2\cdot 2^{-n}\right)$. Moreover,
$1\le N_{n,\alpha}\le n+1$. Using the positivity of the Poisson kernel, we get
$$P[\tilde f_n](ry)\geq\int_{\kappa\left(y,2^{-n}\right)}\frac{2^{(n-N_{n,\alpha})d}}{n+1}P(ry,\xi)d\sigma(\xi).$$
Lemma \ref{LEMPOISSONKERNEL} ensures that
$$P[\tilde f_n](r_ny)\ge \frac{C}{n+1}2^{(n-N_{n,\alpha})d} $$
and it suffices to take $f_n=\frac{\tilde f_n}{\Vert \tilde f_n\Vert_1}$.
\end{proof}
We are now ready for the proof of our second main theorem.
\begin{proof}[Proof of Theorem \ref{THMMAIN2}]
Let $(g_n)_{n\ge 1}$ be a dense sequence in $L^1(\mathcal S_{d})$ such that each $g_n$ is continuous and $\|g_n\|_\infty\leq n$. The maximum principle ensures that for any $r\in(0,1)$ and for any
$\xi\in\mathcal S_{d}$,
$$|P[g_n](r\xi)|\leq n.$$
Let $(f_n)$ be the sequence given by Lemma \ref{LEMSAT} and let us set
$$h_n=g_n+\frac 1n f_n.$$
$(h_n)_{n\ge 1}$ remains dense in $L^1(\mathcal S_{d})$. Moreover, if $r_n=1-{2^{-n}}$, $\alpha>1$ and $y\in D_{n,\alpha}$,
\begin{eqnarray*}
P[h_n](r_ny)&\geq&C\frac{2^{\left(n-N_{n,\alpha}\right)d}}{n^2}-n\\
&\geq& C \frac{2^{\left(n-N_{n,\alpha}\right )d}}{2n^2}
\end{eqnarray*}
provided $n$ is sufficiently large. Let us finally consider $\delta_n>0$ sufficiently small such that
$$\|P[f](r_n\cdot)\|_\infty\leq 1\quad\mbox{if}\quad \|f\|_1\le \delta_n.$$
The residual set we will consider is the dense $G_\delta$-set
$$A=\bigcap_{l\ge 1}\bigcup_{n\geq l}B_{L^1}(h_n,\delta_n).$$
Pick any $f\in A$. One can find an increasing sequence of integers $(n_k)$
such that $f\in B_{L^1}(h_{n_k},\delta_{n_k})$ for any $k$. Let $\alpha>1$
and let $y\in\limsup_k D_{n_k,\alpha}=:D_\alpha(f)$.
Then we can find integers $n$, picked in the sequence $(n_k)_{k\ge 1}$, as large as we want such that
$$P[f](r_n y)\geq P[h_n](r_ny)-1\geq C \frac{2^{\left(n-N_{n,\alpha}\right)d}}{2n^2}-1.$$
Observe that for such values of $n$,
$$\frac{\log|P[f](r_ny)|}{-\log(1-r_n)}\ge\frac{\left( n-N_{n,\alpha}\right)d}{n}+o(1).$$
Hence,
$$\limsup_{r\to 1}\frac{\log |P[f](ry)|}{-\log(1-r)}\geq \lim_{n\to +\infty}\left(1-\frac{N_{n,\alpha}}n\right)d=\left(1-\frac1\alpha\right)d.$$
Furthermore, Lemma \ref{LEMHD} tells us that $\mathcal H^{d/\alpha}(D_\alpha(f))=+\infty$.
We divide $D_\alpha(f)$ into two parts:
\begin{eqnarray*}
D_\alpha^{(1)}(f)&=&\left\{y\in D_\alpha(f);\ \limsup_{r\to 1}\frac{\log |P[f](ry)|}{-\log(1-r)}=\left(1-\frac1\alpha\right)d\right\}\\
D_\alpha^{(2)}(f)&= &\left\{y\in D_\alpha(f);\ \limsup_{r\to 1}\frac{\log |P[f](ry)|}{-\log(1-r)}> \left(1-\frac1\alpha\right)d\right\}.
\end{eqnarray*}
Let $(\beta_n)_{n\ge 0}$ be a sequence of real numbers such that $$\beta_n>\left(1-\frac1\alpha\right)d\quad\mbox{and}\quad\lim_{n\to +\infty} \beta_n=\left(1-\frac1\alpha\right)d.$$
Then
$$D_\alpha^{(2)}(f)\subset\bigcup_{n\geq 0}\mathcal E(\beta_n,f).$$
Observe that $\frac{d}\alpha>d-\beta_n$. Then, by Corollary \ref{CORAUBRYLIKE},
$\mathcal H^{d/\alpha}(\mathcal E(\beta_n,f))=0$. We get
$$\mathcal H^{d/\alpha}(D_\alpha^{(2)}(f))=0\quad\mbox{and}\quad\mathcal H^{d/\alpha}(D_\alpha^{(1)}(f))=+\infty.$$
Finally,
$$E\left(\left(1-\frac1\alpha\right)d,f\right)\supset D_\alpha^{(1)}(f)$$
and
$$\dim_{\mathcal H}\left(E\left(\left(1-\frac1\alpha\right)d,f\right)\right)\geq \frac d\alpha.$$
By Corollary \ref{CORAUBRYLIKE} again, this inequality is necessarily an equality, and we conclude
that $f$ satisfies the conclusion of Theorem \ref{THMMAIN2} by setting
$$\left(1-\frac1\alpha\right)d=\beta\iff \frac d\alpha=d-\beta.$$
\end{proof}
One can also ask whether the Poisson integral of a typical Borel measure on $\mathcal S_{d}$ has a multifractal behaviour.
Here, we have to take care of the topology on $\mathcal M(\mathcal S_{d})$. We endow it with the
weak-star topology, which turns the unit ball $B_{\mathcal M(\mathcal S_{d})}$ of the dual space $\mathcal M(\mathcal S_{d})$ into a compact space. We need the following folklore lemma:
\begin{lemma}
The set of measures $fd\sigma$, with $f\in\mathcal C(\mathcal S_{d})$, is weak-star dense in $\mathcal M(\mathcal S_{d})$.
\end{lemma}
\begin{proof}
The set of measures with finite support is weak-star dense in $\mathcal M(\mathcal S_{d})$ (see for instance \cite{Bil}). Thus, let $\xi\in\mathcal S_{d}$, let $\varepsilon>0$ and let $g_1,\dots,g_n\in\mathcal C(\mathcal S_{d})$. It suffices to prove that one can find $f\in\mathcal C(\mathcal S_{d})$ such that, for
any $i\in\{1,\dots,n\}$,
$$\left|g_i(\xi)-\int_{\mathcal S_{d}}g_i(y)f(y)d\sigma(y)\right|<\varepsilon.$$
Since each $g_i$ is continuous at $\xi$, one can find $\delta>0$ such that $|\xi-y|<\delta$ implies $|g_i(\xi)-g_i(y)|<\varepsilon$. Let $f$ be a continuous and nonnegative function on $\mathcal S_{d}$ with support
in $\kappa(\xi,\delta)$ and whose integral is equal to 1. Then
\begin{eqnarray*}
\left|g_i(\xi)-\int_{\mathcal S_{d}}g_i(y)f(y)d\sigma(y)\right|&\leq&\int_{\kappa(\xi,\delta)}|g_i(\xi)-g_i(y)|f(y)d\sigma(y)\\
&\leq&\varepsilon.
\end{eqnarray*}
\end{proof}
Mimicking the proof of Theorem \ref{THMMAIN2}, we can prove the following result.
\begin{theorem}
For quasi-all measures $\mu\in B_{\mathcal M(\mathcal S_{d})}$, for any $\beta\in [0,d]$,
$$\dim_{\mathcal H}\big(E(\beta,\mu)\big)=d-\beta.$$
\end{theorem}
\begin{proof}
Let $(g_n)_{n\ge 1}$ be a dense sequence in the unit ball of $\mathcal C(\mathcal S_{d})$ such that $\|g_n\|_\infty\leq 1-\frac1n$. The sequence $(g_nd\sigma)_{n\ge 1}$ is weak-star dense in $B_{\mathcal M(\mathcal S_{d})}$. Let $(f_n)_{n\ge 1}$ be the sequence given by Lemma \ref{LEMSAT} and let us set
$$h_n=g_n+\frac1n f_n$$
so that $(h_nd\sigma)_{n\ge 1}$ lives in the unit ball $B_{\mathcal M(\mathcal S_{d})}$ and is always a weak-star dense sequence in $B_{\mathcal M(\mathcal S_{d})}$. For any $\alpha>1$ and any $y\in D_{n,\alpha}$,
\begin{eqnarray*}
P[h_n](r_ny)&\geq& C \frac{2^{\left(n-N_{n,\alpha} \right)d}}{n^2}-1
\end{eqnarray*}
with $r_n=1- 2^{-n}$. The function $(y,\xi)\mapsto P(r_n y,\xi)$ is uniformly continuous on $\mathcal S_{d}\times\mathcal S_{d}$. In particular, using the compactness of $\mathcal S_{d}$, one may find $y_1,\dots,y_s\in\mathcal S_{d}$ such that, for any $y\in\mathcal S_{d}$,
there exists $j\in\{1,\dots,s\}$ satisfying
$$\forall \xi\in\mathcal S_{d},\quad \big|P(r_ny,\xi)-P(r_ny_j,\xi)\big|\leq 1.$$
Let $\mathcal U_n$ be the following weak-star open neighbourhood of $h_nd\sigma$ in
$B_{\mathcal M(\mathcal S_{d})}$:
\begin{eqnarray*}
\mathcal U_n&=&\Bigg\{\mu\in B_{\mathcal M(\mathcal S_{d})};\ \textrm{for all }j\in\{1,\dots,s\},\\
&&\quad\quad \left|\int_{\mathcal S_{d}}P(r_n y_j,\xi)d\mu-\int_{\mathcal S_{d}}P(r_n y_j,\xi)h_n(\xi)d\sigma\right|<1\Bigg\}.
\end{eqnarray*}
By the triangle inequality, for any $y\in\mathcal S_{d}$ and any $\mu\in\mathcal U_n$,
$$\big|P[\mu-h_nd\sigma](r_ny)\big|\leq 3.$$
We now define $A=\bigcap_{l\ge 1}\bigcup_{n\geq l}\mathcal U_n$ which is a
dense
$G_\delta$-subset
of $B_{\mathcal M(\mathcal S_{d})}$, and we conclude as in the proof of Theorem \ref{THMMAIN2}.
\end{proof}
If we remember that $\mu\mapsto P[\mu]$ is a bijection between the set of
nonnegative finite measures on the sphere $\mathcal S_{d}$ and the set of
nonnegative harmonic functions in the ball
$B_{d+1}$, we can also obtain the following result.
\begin{theorem}\label{THMPOSITIVE}
For quasi-all nonnegative harmonic functions $h$ in the unit ball $B_{d+1}$, for any
$\beta\in [0,d]$,
$$\dim_{\mathcal H}\big(E(\beta,h)\big)=d-\beta$$
where $E(\beta,h)$ is defined here by $\displaystyle E(\beta,h)=\left\{ y\in\mathcal S_{d}\ ;\ \limsup_{r\to 1}\frac{\log h(ry)}{-\log(1-r)}=\beta\right\}$.
\end{theorem}
The set $\mathcal H^+(B_{d+1})$ of nonnegative harmonic functions in the unit ball
$B_{d+1}$ is endowed with the topology of the locally uniform convergence. It
is a closed cone in the complete vector space of all continuous functions in the
ball. So it satisfies Baire's property.
\begin{proof}[Proof of Theorem \ref{THMPOSITIVE}.]We begin with the following
lemma.
\begin{lemma}\label{LEMDENSE}
The set of nonnegative functions which are continuous in the closed unit ball
$\overline{B_{d+1}}$ and harmonic in the open ball $B_{d+1}$ is dense in
$\mathcal H^+$.
\end{lemma}
\begin{proof}
Let $h\in\mathcal H^+$ and $\rho_n<1$ be a sequence of real numbers that
increases to 1. Set $f_n(\xi)=h(\rho_n\xi)$ if $\xi\in\mathcal S_{d}$ and
$h_n(x)=h(\rho_nx)=P[f_n](x)$ if $x\in B_{d+1}$. The functions $h_n$ are
nonnegative, harmonic and continuous
on the closed ball $\overline{B_{d+1}}$. Moreover, let $\rho<1$. The uniform
continuity of $h$ in the closed ball $\bar B(0,\rho)=\{x\ ;\ \|x\|\le\rho\}$
ensures that $h_n$ converges uniformly to $h$ in the compact set $\bar
B(0,\rho)$.
\end{proof}
We can now prove Theorem \ref{THMPOSITIVE}, proceeding as in the proof of Theorem
\ref{THMMAIN2}. Let $(g_n)_{n\ge 1}$ be a dense sequence in the set
of nonnegative continuous functions in $\mathcal S_{d}$.
Lemma \ref{LEMDENSE} ensures that the sequence $(P[g_n])_{n\ge
1}$ is dense in $\mathcal H^+$. Moreover, we can suppose that
$\| g_n\|_\infty\le n$ so that by the
maximum principle, $0\le P[g_n](x)\le n$ for any $x\in B_{d+1}$. Let
$(f_n)_{n\ge 1}$ be the sequence given by Lemma \ref{LEMSAT} and observe that
if $\|x\|\le\rho$,
$$\left|\frac1nP[f_n](x)\right|\le\frac2{n(1-\rho)^d}\|f_n\|_1=
\frac2{n(1-\rho)^d}.$$
It follows that $\frac1nP[f_n]$ goes to 0 in $\mathcal H^+$. Define
$$h_n=P[g_n]+\frac1n P[f_n]$$
so that $(h_n)_{n\ge 1}$ is always dense in $\mathcal H^+$. Let $\alpha>1$,
$y\in D_{n,\alpha}$ and $r_n=1- 2^{-n}$. Lemma \ref{LEMSAT} ensures that
\begin{eqnarray*}
h_n(r_ny)&\geq& C \frac{2^{\left(n-N_{n,\alpha} \right)d}}{n^2}-n.
\end{eqnarray*}
We can define
$$A=\bigcap_{l\ge 1}\bigcup_{n\ge l}\left\{ h\in \mathcal H^+\ ;\
\sup_{\|x\|\le r_n}|h(x)-h_n(x)|<1\right\}$$
which is a dense $G_\delta$-set in $\mathcal H^+$ and we can
conclude as in the proof of Theorem \ref{THMMAIN2}.
\end{proof}
\end{document} |
\begin{document}
\input {epsf}
\newcommand{\be}{\begin{equation}}
\newcommand{\ee}{\end{equation}}
\newcommand{\bea}{\begin{eqnarray}}
\newcommand{\eea}{\end{eqnarray}}
\def\ov{\overline}
\def\ra{\rightarrow}
\def\leavevmode\hbox{\small1\kern-3.8pt\normalsize1}{\leavevmode\hbox{\small1\kern-3.8pt\normalsize1}}
\def\a{\alpha}
\def\b{\beta}
\def\g{\gamma}
\def\r{\rho}
\def\,-\,{\,-\,}
\def\x{\bf x}
\def\k{\bf k}
\def\ket#1{|\,#1\,\rangle}
\def\bra#1{\langle\, #1\,|}
\def\braket#1#2{\langle\, #1\,|\,#2\,\rangle}
\def\proj#1#2{\ket{#1}\bra{#2}}
\def\expect#1{\langle\, #1\, \rangle}
\def\trialexpect#1{\expect#1_{\rm trial}}
\def\ensemblexpect#1{\expect#1_{\rm ensemble}}
\def\ket{\psi}{\ket{\psi}}
\def\ket{\phi}{\ket{\phi}}
\def\bpsi{\bra{\psi}}
\def\bphi{\bra{\phi}}
\def\rule[0.5ex]{2cm}{.4pt}\enspace{\rhoule[0.5ex]{2cm}{.4pt}\enspace}
\def\th{\thinspace}
\def\ni{\noindent}
\def\thirty{\hbox to \hsize{
\rule[5pt]{2.5cm}{0.5pt}
}}
\def\set#1{\{ #1\}}
\def\setbuilder#1#2{\{ #1:\; #2\}}
\def\Prob#1{{\rm Prob}(#1)}
\def\pair#1#2{\langle #1,#2\rangle}
\def\bf 1{\betaf 1}
\def\dee#1#2{\frac{\partial #1}{\partial #2}}
\def\deetwo#1#2{\frac{\partial\,^2 #1}{\partial #2^2}}
\def\deethree#1#2{\frac{\partial\,^3 #1}{\partial #2^3}}
\newcommand{{\scriptstyle -}\hspace{-.5pt}x}{{\scriptstyle -}\hspace{-.5pt}x}
\newcommand{{\scriptstyle -}\hspace{-.5pt}y}{{\scriptstyle -}\hspace{-.5pt}y}
\newcommand{{\scriptstyle -}\hspace{-.5pt}z}{{\scriptstyle -}\hspace{-.5pt}z}
\newcommand{{\scriptstyle -}\hspace{-.5pt}k}{{\scriptstyle -}\hspace{-.5pt}k}
\newcommand{{\scriptscriptstyle -}\hspace{-.5pt}x}{{\scriptscriptstyle -}\hspace{-.5pt}x}
\newcommand{{\scriptscriptstyle -}\hspace{-.5pt}y}{{\scriptscriptstyle -}\hspace{-.5pt}y}
\newcommand{{\scriptscriptstyle -}\hspace{-.5pt}z}{{\scriptscriptstyle -}\hspace{-.5pt}z}
\newcommand{{\scriptscriptstyle -}\hspace{-.5pt}k}{{\scriptscriptstyle -}\hspace{-.5pt}k}
\title{Quantum Cryptography using entangled photons in
energy-time Bell states}
\author{
W. Tittel, J. Brendel, H. Zbinden, and N. Gisin
\\
\small
{\it Group of Applied Physics, University of Geneva, CH-1211, Geneva 4,
Switzerland}}
\maketitle
\abstract{We present a setup for quantum cryptography based on photon
pairs in
energy-time Bell states and show its feasibility in a laboratory experiment.
Our scheme combines the advantages of
using photon pairs instead of faint laser pulses and the
possibility to preserve energy-time
entanglement over long distances.
Moreover, using 4-dimensional energy-time states,
no fast random change of bases is required in our setup: Nature itself
decides whether to measure in the energy or in the time basis.}
\noindent
PACS Nos. 3.67.Dd, 3.67.Hk
\normalsize
Quantum communication is probably one of the most rapidly growing
and most exciting fields
of physics of recent years \cite{physworld}. Its most mature
application is quantum cryptography (also called quantum key
distribution),
ensuring the distribution of a secret key between two parties. This key
can be used afterwards
to encrypt and decrypt secret messages using the one
time pad \cite{Welsh}.
In contrast to the widely used "public key" systems
\cite{Welsh}, the security of
quantum cryptography is not based on
mathematical complexity but on an inherent property of
single quanta.
Roughly speaking, since it
is not possible to measure an unknown quantum system without modifying it,
an eavesdropper manifests herself by introducing errors
in the transmitted data. During the last years, several prototypes
based on faint
laser pulses mimicking single photons have been
developed, demonstrating that
quantum cryptography not only works inside the laboratory, but in the
"real world" as well \cite{physworld,plug&play,expquantumcryptography}.
Besides, it has been shown that two-photon entanglement
can be preserved over large distances \cite{Longdistbell},
especially when being entangled in energy and time \cite{fullengthbell}.
As pointed out by Ekert in 1991 \cite{Ekert91},
the nonlocal correlations engendered by such states can also be
used to establish
sequences of correlated bits at distant places, the advantage over
systems based on
faint laser pulses being a reduced vulnerability to
a certain kind of
eavesdropping attack \cite{QNDattack,security}.
Besides improvements in the domain of quantum key distribution,
recent experimental progress in generating,
manipulating and measuring the so-called Bell-states \cite{Bellstates},
has led to fascinating applications like
quantum teleportation \cite{teleportation}, dense-coding \cite{densecoding}
and entanglement swapping
\cite{entanglementswapping}.
In a recent paper, we proposed and tested a novel source for quantum
communication
generating a new kind of Bell states based on energy-time
entanglement \cite{newsource}.
In this paper, we present a first
application, exploiting this new source for quantum cryptography.
Our scheme
follows Ekert's
initial idea concerning the use of photon-pair correlations.
However, in contrast to that proposal, it implements Bell states and
can thus be seen in the
broader context of quantum communication.
Moreover, the fact that energy-time entanglement
can be preserved over long distances renders our source particularly
interesting for long-distance applications.
To understand the principle of our idea, we look at Fig. 1.
A short light pulse emitted at time $t_0$ enters an interferometer having a
path length difference which is large compared to the duration of the pulse.
The pulse is thus split into two pulses of smaller amplitudes,
following each other with a fixed phase relation. The light is then
focussed into a nonlinear crystal where some of the pump photons are
downconverted into photon pairs.
Working with pump energies low enough to ensure that the generation of
two photon pairs,
by the same as well as by two subsequent pulses,
can be neglected, we describe a
created photon pair by
\begin{eqnarray}
\ket{\psi}=\frac{1}{\sqrt2}\bigg(\ket{s}_A\ket{s}_B +
e^{i\phi} \ket{l}_A\ket{l}_B\bigg).
\label{Bellstate}
\end{eqnarray}
$\ket{s}$ and $\ket{l}$ denote a photon created by a pump photon having
traveled via the short or
the long arm of the interferometer, and the indices A, B label the photons.
The state (\ref{Bellstate}) is composed of only two discrete
emission times and not
of a continuous spectrum.
This contrasts with the energy-time entangled states used up to now
\cite{Ekert91,Franson89}.
Please note that, depending on the phase $\phi$,
Eq. (\ref{Bellstate}) describes two
of the four Bell states. Interchanging $\ket{s}$ and $\ket{l}$
for one of the
two photons leads to generation of the remaining two Bell-states.
In general, the coefficients describing the amplitudes of
the $\ket{s}\ket{s}$ and $\ket{l}\ket{l}$ states
can be different, leading to nonmaximally entangled states. However, in this
article, we will deal only with maximally entangled states.
Behind the crystal, the photons are separated and are sent to Alice and Bob,
respectively (see Fig.1).
There, each photon travels via another interferometer,
introducing exactly the same difference of travel times through one or
the other arm
as did the previous
interferometer, acting on the pump pulse. If Alice looks at the arrival
times of the photons
with respect to the emission time of the pump pulse $t_0$ --
note that she has two detectors to look at -- she
will find the photons in one of three time slots.
For instance, detection of a photon in
the first slot corresponds to "pump photon having traveled via the short
arm and
downconverted photon via the short arm". To keep it short, we refer to
this process as
$\ket{s}_P;\ket{s}_A$,
where $P$ stands for the pump- and $A$ for Alice's photon.
However, the characterization of the complete photon pair
is still ambiguous, since,
at this point, the path of the photon having
traveled to Bob (short or long in his interferometer)
is unknown to Alice.
Fig. 1 illustrates all processes leading to a detection in the different
time slots both at
Alice's and at Bob's detector. Obviously, this reasoning holds
for any combination of two detectors.
In order to build up the secret key,
Alice and Bob now publicly agree about the events where both
detected a photon
in one of the satellite
peaks -- without revealing in which one -- or both in the central
peak -- without
revealing the detector. This additional information enables both of
them to know
exactly via which arm the sister photon, detected by the other person,
has traveled.
For instance, to come back to the above given example, if
Bob tells Alice that he detected his photon in a satellite peak as well,
she knows that
the process must have been
$\ket{s}_P; \ket{s}_A \ket{s}_B$. The same holds for Bob, who now knows
that Alice's photon
traveled via the short arm in her interferometer.
If both find the photons in the right satellite peak, the process was
$\ket{l}_P; \ket{l}_A \ket{l}_B$.
In either case, Alice and Bob have correlated detection times.
The cross terms
where one of them detects a photon in the left and the other one
in the right satellite peak do not occur.
Assigning now bitvalues 0 (1) to the short (long) processes, Alice and Bob
finally end up with a sequence of correlated bits.
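The time-basis bookkeeping just described -- pump pulse through the short or long arm, each photon through its local interferometer, events kept only when both photons fall in satellite peaks -- can be summarised in a short classical simulation. This is only a sketch (the function and variable names are ours); the energy basis, which relies on interference, is not modelled here.

```python
import random

def time_basis_sift(n, seed=2):
    """Classical sketch of time-basis sifting: the arrival slot equals
    the number of long arms traversed (0, 1 or 2); slot 1 is the
    central peak, slots 0 and 2 are the satellite peaks."""
    rng = random.Random(seed)
    key_a, key_b = [], []
    for _ in range(n):
        pump = rng.randint(0, 1)   # 0 = short arm, 1 = long arm
        a = rng.randint(0, 1)      # path of Alice's photon
        b = rng.randint(0, 1)      # path of Bob's photon
        slot_a, slot_b = pump + a, pump + b
        # keep only events where BOTH photons land in a satellite peak
        if slot_a != 1 and slot_b != 1:
            key_a.append(slot_a // 2)   # 0 for |s>|s>, 1 for |l>|l>
            key_b.append(slot_b // 2)
    return key_a, key_b

key_a, key_b = time_basis_sift(10_000)
assert key_a == key_b   # correlated bits; cross terms never survive sifting
```

A satellite detection forces the local path to coincide with the pump path, so the kept bits agree automatically, mirroring the absence of cross terms noted above.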
Otherwise, if both find the photon in
the central slot, the process must have been
$\ket{s}_P; \ket{l}_A \ket{l}_B$ or $\ket{l}_P; \ket{s}_A \ket{s}_B$. If
both possibilities are indistinguishable, we face the
situation of interference and
the probability for detection by a given combination of
detectors (e.g. the
"+"-labeled detector at Alice's and the "--" labeled one at Bob's)
depends on
the phases $\alpha$, $\beta$ and $\phi$ in the three interferometers.
The quantum mechanical treatment leads
to $P_{i,j}~=~\frac{1}{2}\big( 1+ij\cos(\alpha+\beta-\phi)\big)$
with $i$,$j$ = $\pm1$ denoting the detector labels \cite{newsource}.
Hence, choosing appropriate phase settings,
Alice and Bob will always find perfect correlations in the output ports.
Either both detect the photons
in detector "--" (bitvalue "0"), or both in detector "+" (bitvalue "1").
Since the correlations depend on the phases and thus on the energy of the
pump, signal and idler photons, we refer to this basis as the energy basis
(showing wave-like behaviour),
stressing the complementarity with the other one, the time basis
(showing particle-like behaviour).
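The quoted joint probabilities can be sampled directly. The following Python sketch (names are ours) draws detector pairs $(i,j)$ with weights proportional to $1+ij\cos(\alpha+\beta-\phi)$ and confirms that, for phase settings with $\alpha+\beta-\phi=0$, the outcomes in the energy basis are perfectly correlated.

```python
import math, random

def sample_outcomes(alpha, beta, phi, n, seed=0):
    """Draw detector pairs (i, j), i, j in {+1, -1}, with joint
    probability proportional to 1 + i*j*cos(alpha + beta - phi)."""
    rng = random.Random(seed)
    c = math.cos(alpha + beta - phi)
    pairs = [(+1, +1), (+1, -1), (-1, +1), (-1, -1)]
    weights = [1 + i * j * c for (i, j) in pairs]
    return rng.choices(pairs, weights=weights, k=n)

# Phases tuned so that alpha + beta - phi = 0: the anti-correlated pairs
# get weight 0, so Alice and Bob always fire the same-labelled detector.
outcomes = sample_outcomes(alpha=0.3, beta=0.2, phi=0.5, n=1000)
assert all(i == j for (i, j) in outcomes)
```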
\begin{figure}
\infig{fig1.eps}{0.9\columnwidth}
\caption{Schematics of quantum key distribution using
energy-time Bell states.}
\end{figure}
Like in the BB84 protocol \cite{BB84}
it is the use of complementary bases that ensures the detection
of an eavesdropper \cite{eavesBB84}.
If we consider for instance the most intuitive intercept/resend
strategy, the eavesdropper intercepts the photons, measures
them in one of the two bases and sends new, accordingly
prepared photons instead.
Since she never knows in which basis Bob's measurement will take place,
she will in half of the cases eavesdrop and resend the
photons in the "wrong basis"
and therefore will statistically introduce errors in Bob's results,
revealing in turn her presence. For a more general treatment of quantum
key distribution and eavesdropping using energy-time complementarity,
we refer the reader to \cite{highalphabet}.
To generate the short pump pulses, we use a pulsed diode laser
(PicoQuant PDL 800),
emitting 600ps (FWHM) pulses of
655 nm wavelength at
a repetition frequency of 80 MHz. The average power is $\approx$ 10 mW,
equivalent to an energy of 125 pJ per pulse.
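The quoted pulse energy follows directly from the average power and the repetition rate; a one-line check (plain Python):

```python
# Energy per pulse = average power / repetition rate.
average_power = 10e-3   # 10 mW, in W
rep_rate = 80e6         # 80 MHz, in Hz
energy_per_pulse = average_power / rep_rate
assert abs(energy_per_pulse - 125e-12) < 1e-14  # 125 pJ, as quoted
```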
The light passes a dispersive prism, which prevents the small amount
of infrared light that is also emitted from entering the subsequent setup, and
a polarizing beamsplitter (PBS), serving as an optical isolator.
The pump is then focussed into a singlemode fiber and is
guided into a fiber optical
Michelson interferometer made of a 3 dB fiber coupler and chemically
deposited silver end mirrors. The path length difference corresponds to a
difference of travel times of $\approx$ 1.2 ns, splitting the pump
pulse into two
well separated pulses. The arm-length difference of the whole
interferometer
can be controlled using a piezoelectric actuator in order to ensure
any desired phase difference. In addition, the temperature is kept stable.
In order to control the evolution of the polarization in the fibers,
we implement three
fiber-optical polarization controllers, each one consisting of three
inclinable fiber
loops -- equivalent to three waveplates in the case of bulk optics.
The first device is placed before the interferometer and ensures
that all light, leaving the Michelson interferometer by the input port
will be reflected by the already mentioned PBS and thus will
not impinge onto the laser diode. The second controller serves to
equalize the evolution of
the polarization states within
the different arms of the interferometer, and the last one enables
control of the polarization state
of the light that leaves the interferometer by the second output port.
The horizontally polarized light is now focussed into a 4x3x12 mm
$KNbO_3$ crystal,
cut and oriented to ensure degenerate collinear phasematching,
hence producing photon pairs
at 1310 nm wavelength -- within the so-called second
telecommunication window.
Due to injection losses of the pump into the fiber and losses
within the interferometer, the average power
before the crystal drops to $\approx$ 1 mW, and the energy per
pulse -- remember that each initial pump pulse is now split into two --
to $\approx$ 6 pJ. The probability of creating more than one photon
pair within the
same or within two subsequent pulses is smaller than 1 \%,
justifying the assumption that led to
Eq. \ref{Bellstate}. Behind the crystal, the red pump light
is absorbed by a filter (RG 1000). The downconverted photons
are then focussed into
a fiber coupler, separating them in half of the cases, and are guided
to Alice and Bob, respectively. The interferometers (type Michelson)
located there have been
described in detail in \cite{fullengthbell}. They consist of a 3-port
optical circulator,
providing access to the second output arm of the interferometer, a
3 dB fiber coupler and
Faraday mirrors in order to compensate any birefringence within
the fiber arms.
To control their phases, the temperature
can be varied or kept stable. Overall
losses are about 6 dB. The
path length differences of both interferometers are equal with respect
to the coherence length of the downconverted photons --
approximately 20 $\mu$m.
In addition, the travel time difference is the same as the one
introduced by the interferometer acting on the pump pulse. In this case,
"the same" means equal to within
the coherence time of the pump photons, around 800 fs
or 0.23 mm, respectively.
To detect the photons, the output ports
are connected to single-photon counters -- passively quenched
germanium avalanche photodiodes, operated in
Geiger-mode and cooled to 77 K \cite{fullengthbell}. We operate
them at dark count rates
of 30 kHz, leading to quantum efficiencies of $\approx$ 5 \%.
The single photon detection
rates are 4-7 kHz, the discrepancy being due to different
detection efficiencies and losses in the circulators.
The signals from the detectors as well as signals being coincident with
the emission of a pump pulse
are fed into fast AND-gates.
To demonstrate our scheme, we first measure the correlated events
in the time base.
Conditioning the detection at Alice's and Bob's detectors
both on the left satellite peaks,
($\ket{s_p}$,$\ket{s_A}$ and $\ket{s_p}$,$\ket{s_B}$, respectively)
we count the
number of coincident detections between both AND-gates, that is the
number of triple coincidences between emission of the pump-pulse
and detections at Alice's and Bob's.
In subsequent runs, we measure these rates
for the
right-right ($\ket{l_p}$,$\ket{l_A}$ AND $\ket{l_p}$,$\ket{l_B}$)
events, as well as for the
right-left cross terms.
We find
values around 1700 coincidences per 100 sec for the correlated,
and around 80
coincidences for the
non-correlated events (table 1).
From the four times four
rates (remember that we have four pairs of detectors),
we calculate the different quantum bit error rates (QBER), i.e. the
ratio of wrong to detected events.
We find values between 3.7 and 5.7 \%,
leading to a mean QBER for the time basis of (4.6 $\pm$0.1)\%.
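These numbers can be reproduced from the coincidence counts; the sketch below (plain Python) uses the "++" column of table 1, where the correlated rates are the left-left and right-right events and the errors are the two cross terms:

```python
# "++" detector pair, counts per 100 s (table 1):
correlated = 278 + 304   # left-left plus right-right events
errors = 11 + 11.2       # the two cross terms
qber_pp = 100 * errors / (correlated + errors)
assert abs(qber_pp - 3.7) < 0.1   # matches the 3.7 % quoted for "++"

# Mean over the four detector pairs:
mean_qber = sum([3.7, 4.6, 4.5, 5.7]) / 4
assert abs(mean_qber - 4.6) < 0.05
```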
In order to evaluate the QBER in the energy base, we condition the
detection at Alice's
and Bob's on the central peaks. Changing the phases in any of the
three interferometers, we observe interference fringes in the triple
coincidence count
rates (Fig.2).
Fits yield visibilities of 89.3 to 94.5 \% for the different detector
pairs (table 2). In the case of appropriately chosen phases,
the number of correlated events is
around 800 in 50 sec, and the number of errors is around 35. From these
values, we calculate
the QBERs for the four detector pairs. We find values between
2.8 and 5.4 \%, leading to a mean QBER for the energy base of
(3.9$\pm$0.4) \%.
Note that this experiment can be
seen as a Franson-type test of Bell inequalities
as well \cite{Franson89,Bell}. From the mean visibility of
(92.2$\pm$0.8)\%,
we can infer a violation of Bell inequalities by 27
standard deviations.
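Both the energy-basis QBER and the quoted violation can be recovered from the mean visibility; this sketch (plain Python) uses the standard relations $\mathrm{QBER}=(1-V)/2$ and the Franson-type threshold $V>1/\sqrt{2}$ for a Bell violation:

```python
import math

V, sigma_V = 0.922, 0.008   # mean visibility and its uncertainty

# Energy-basis QBER from the visibility: QBER = (1 - V) / 2.
assert abs(100 * (1 - V) / 2 - 3.9) < 0.05

# The Bell inequality is violated once V exceeds 1/sqrt(2);
# distance to the bound in units of sigma_V:
n_sigma = (V - 1 / math.sqrt(2)) / sigma_V
assert abs(n_sigma - 27) < 0.5   # the 27 standard deviations quoted above
```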
\begin{figure}
\infig{fig2.eps}{0.95\columnwidth}
\caption{Results of the measurement in the energy basis.
The different mean values are due to different detector efficiencies.}
\label{fig2}
\end{figure}
As in all experimental quantum key distribution schemes,
the measured QBERs are non-zero, even in the absence of any eavesdropping.
Nevertheless, they are still small enough to guarantee the detection of an
eavesdropper attack, allowing thus secure key distribution.
The remaining $\approx$ 4\%
are due to (1) accidental coincidences from uncorrelated events at the
single photon detectors,
(2) a not perfectly localized pump pulse,
(3) the finite time resolution of the photon detectors,
and, (4) in the case of the energy basis,
non-perfect interference. Note that the last mentioned errors
decrease at the same rate as the number of correlated events when
increasing the
distance between Alice and Bob (that is when increasing the losses).
In contrast to that, the number
of errors due to uncorrelated events (point 1) stays almost constant
since it is
dominated by the detector noise. Thus,
the QBER increases with distance, however, only at a small rate since
this contribution
was found to be small.
Experimental investigations show that introducing 6 dB overall
losses -- in the best case equivalent to 20 km of
optical fiber -- leads to
an increase of only around 1\%,
hence to a QBER of 5-6\%.
Besides detector noise, another major problem of all quantum key
distribution schemes
developed up to now is stability, the only exception being \cite{plug&play}.
In order to really implement our setup for quantum cryptography,
the interferometers have to be actively stabilized, taking for instance
advantage of the free ports. The chosen passive stabilization
by controlling the temperature is not sufficient to ensure stable
phases over a
long time.
To conclude, we presented a new setup for quantum cryptography using
Bell states based on
energy-time entanglement, and demonstrated its feasibility in a
laboratory experiment.
We found bit rates of around 33 Hz and
quantum bit error rates of around 4 \%, which is low
enough to ensure secure key distribution. Besides a smaller
vulnerability to eavesdropper attacks, the advantage of using discrete
energy-time states of dimension up to 4 in our scheme
is that no fast switching between non-commuting bases
is necessary. Nature itself chooses between the complementary
properties
energy and time. Furthermore, the recent demonstration that
energy-time entanglement can be preserved over long distances
\cite{fullengthbell}
shows that this scheme is perfectly adapted to long-distance
quantum cryptography.
We would like to thank J.D. Gautier for technical support and PicoQuant
for fast delivery of the laser. This work was supported by the Swiss PPOII
and the European QuCom IST projects.
\begin{thebibliography}{99}
\bibitem{physworld}Physics World, March 1998, special issue on quantum
communication, including an article by W. Tittel,
G. Ribordy, and N. Gisin on quantum cryptography.
\bibitem{Welsh} See e.g. D. Welsh, Codes and Cryptography,
Oxford Science Publications,
Clarendon Press, Oxford (1988).
\bibitem{plug&play}A. Muller, T. Herzog, B. Huttner, W. Tittel,
H. Zbinden, and N. Gisin,
Appl. Phys. Lett. {\bf 70}, 793 (1997).
G. Ribordy, J.-D. Gautier, N. Gisin, O. Guinnard,
and H. Zbinden. quant-ph/9905056.
\bibitem{expquantumcryptography}P.D. Townsend. Opt. Fiber
Technology {\bf 4}, 345 (1998).
R.J. Hughes, G.L. Morgan, and C.G. Peterson. quant-ph/9904038.
J.-M. M\'erolla, Y. Mazurenko, J.-P. Goedgebuer, and W.T. Rhodes.
Phys. Rev. Lett. {\bf 82}, 1656 (1999).
\bibitem{Longdistbell} P.R. Tapster, J.G. Rarity, and P.C.M. Owens.
Phys. Rev. Lett.
{\bf 73}, 1923 (1994).
W. Tittel, J. Brendel, N. Gisin, T. Herzog, and H. Zbinden.
Phys. Rev. A {\bf 57}, 3229 (1998).
W. Tittel, J. Brendel, H. Zbinden, and N. Gisin.
Phys. Rev. Lett. {\bf 81}, 3563 (1998).
G. Weihs, T. Jennewein, C. Simon, H. Weinfurter, and A. Zeilinger,
Phys. Rev. Lett. {\bf 81}, 5039 (1998).
\bibitem{fullengthbell} W. Tittel, J. Brendel, N. Gisin, and H. Zbinden.
Phys. Rev. A {\bf 59}, 4150 (1999).
\bibitem{Ekert91} A.K. Ekert, Phys. Rev. Lett. {\bf 67}, 661 (1991).
\bibitem{QNDattack} When using coherent states, the eavesdropper
can take advantage
of the fact that there is always a small probability of finding
more than one photon in the same pulse, hence in the same state.
In these cases, she simply measures one photon and lets
the other one pass unmeasured. This
strategy can be somewhat circumvented using photon pairs, where
one photon serves as a trigger for the second one.
However, since the probability of creating
more than one pair at the same time increases with the
pump intensity,
we face a similar problem concerning the simultaneous presence
of more than one photon in the same state. Note that
this probability
can be made arbitrarily small in both cases by using weaker pulses or
lower pump energies. Still, there is an important difference
favouring the implementation of photon pairs: whenever the trigger
photon has
been detected, there is one photon left in the setup. This
contrasts with
the schemes based on faint pulses, where the detection electronics
has to be activated
whenever a weak pulse (containing most of the time no photon) was
emitted.
\bibitem{security}G. Brassard, N. L\"utkenhaus, T. Mor, and
B.C. Sanders. quant-ph/9911054.
\bibitem{Bellstates}H. Weinfurter, Europhys. Lett. {\bf 25}, 559 (1994).
M. Michler, K. Mattle, H. Weinfurter, and A. Zeilinger.
Phys. Rev. A {\bf 53}, R1209 (1996).
\bibitem{teleportation}C.H. Bennett, G. Brassard, C. Crépeau, R. Jozsa,
A. Peres, and W.K. Wootters. Phys. Rev. Lett. {\bf 70}, 1895 (1993).
D. Bouwmeester, J.-W. Pan, K. Mattle, M. Eibl,
D. Boschi, S. Branca, F. De Martini, L. Hardy, and S. Popescu.
Phys. Rev. Lett. {\bf 80}, 1121 (1998).
A. Furusawa, J.L. Sorensen, S.L. Braunstein, C.A. Fuchs,
H.J. Kimble, and E.S. Polzik. Science {\bf 282}, 706 (1998).
\bibitem{densecoding}K. Mattle, H. Weinfurter, P.G. Kwiat, and A. Zeilinger.
Phys. Rev. Lett. {\bf 76}, 4656 (1996).
\bibitem{entanglementswapping}J.-W. Pan, D. Bouwmeester, H. Weinfurter,
and A. Zeilinger.
Phys. Rev. Lett. {\bf 80}, 3891 (1998).
\bibitem{newsource}J. Brendel, N. Gisin, W. Tittel, and H. Zbinden.
Phys. Rev. Lett. {\bf 82}, 2594 (1999).
\bibitem{Franson89}J.D. Franson, Phys. Rev. Lett. {\bf 62}, 2205 (1989).
\bibitem{BB84}C.H. Bennett and G. Brassard, in Proceedings
of the IEEE International
Conference on Computers, Systems and Signal Processing,
Bangalore, India
(IEEE, New York, 1984), p. 175.
\bibitem{eavesBB84}C. Fuchs, N. Gisin, R.B. Griffiths, C.S.
Niu, and A. Peres.
Phys. Rev. A {\bf 56}, 1163 (1997). I. Cirac and N. Gisin,
Phys. Lett. A
{\bf 229}, 1 (1997).
\bibitem{highalphabet}H. Bechmann-Pasquinucci and W. Tittel.
quant-ph/9910095.
\bibitem{Bell}J.S. Bell, Physics {\bf 1}, 195 (1964).
\end{thebibliography}
\begin{table}
\begin{tabular}{c|cccc}
&++&+--&--+&-- --\\
\hline
$s_Ps_A\&s_Ps_B$&278$\pm$6&197$\pm$5&187$\pm$5&147$\pm$4\\
$l_Pl_A\&l_Pl_B$&304$\pm$7&201$\pm$5&200$\pm$5&148$\pm$5\\
$s_Ps_A\&l_Pl_B$&11$\pm$0.8&10.4$\pm$0.8&9.2$\pm$0.7&9.4$\pm$0.7\\
$l_Pl_A\&s_Ps_B$&11.2$\pm$0.4&8.6$\pm$0.4&9.1$\pm$0.4&8.5$\pm$0.4\\
\hline
QBER [\%]&3.7$\pm$0.2&4.6$\pm$0.2&4.5$\pm$0.2&5.7$\pm0.3$
\label{time-basis}
\end{tabular}
\caption{Results of the measurement in the time basis.
The different coincidence count rates are
due to different
quantum efficiencies of the detectors, and the
slight asymmetry in the correlated events can be explained by
the non-equal transmission probabilities within the interferometers.}
\end{table}
\begin{table}
\begin{tabular}{c|cccc}
&++&+--&--+&-- --\\
\hline
Visibility [\%]&92.5$\pm$1.8&92.6$\pm$1.4&89.3$\pm$1.9&94.5$\pm$1.6\\
max.&518$\pm$13&416$\pm$8&359$\pm$9&279$\pm$7\\
min.&20$\pm$5&16$\pm$3&20$\pm$4&8$\pm$2\\
\hline
QBER [\%]&3.7$\pm$0.9&3.7$\pm$0.7&5.3$\pm$1&2.8$\pm0.8$
\label{energy-basis}
\end{tabular}
\caption{Results of the measurement in the energy-basis.}
\end{table}
\end{document} |
\begin{document}
\title{\TheTitle}
\begin{abstract}
We provide sharp decay estimates in time, in the context of Sobolev spaces, for smooth solutions to the one-dimensional Jin-Xin model under the diffusion scaling, which are uniform with respect to the singular parameter of the scaling. This provides convergence to the limit nonlinear parabolic equation both for large time and for the vanishing singular parameter. The analysis is performed by means of two main ingredients. First, a crucial change of variables highlights the dissipative property of the Jin-Xin system, and allows us to observe a faster decay of the dissipative variable with respect to the conservative one, which is essential in order to close the estimates. Second, the analysis relies on a detailed investigation of the Green function of the linearized Jin-Xin model, depending on the singular parameter, combined with the Duhamel formula in order to handle the nonlinear terms.
\end{abstract}
\begin{keywords}
Relaxation, Green function, asymptotic behavior, dissipation, global existence, decay estimates, diffusive scaling, conservative-dissipative form, BGK models.
\end{keywords}
\begin{AMS}
\end{AMS}
\section{Introduction}
We consider the following scaled version of the Jin-Xin approximation for systems of conservation laws in \cite{XinJin}:
\begin{equation}
\label{scaled_JinXin}
\begin{cases}
\partial_t u + \partial_x v=0, \\
\varepsilon^2 \partial_t v+ \lambda^2 \partial_x u=f(u)-v,
\end{cases}
\end{equation}
where $\lambda>0$ is a positive constant, $u, v$ depend on $(t, x) \in \mathbb{R}^+\times \mathbb{R}$ and take values in $\mathbb{R}$, while $f(u): \mathbb{R} \rightarrow \mathbb{R}$ is a Lipschitz function such that $f(0)=0,$ and $f'(0)=a,$ with $a$ a constant value.
The diffusion limit of this system for $\varepsilon \rightarrow 0$ has been studied in \cite{Jin, BGN}, where the convergence to the following equations is proved:
\begin{equation}
\label{limit_JinXin}
\begin{cases}
\partial_t u + \partial_x v=0 \\
v=f(u)-\lambda^2 \partial_x u.
\end{cases}
\end{equation}
From \cite{Natalini, BGN}, it is well-known that system (\ref{scaled_JinXin}) can be written in BGK formulation, \cite{Bouchut}, by means of the linear change of variables:
\begin{equation}
\label{change_BGK}
u=f_1^\varepsilon+f_2^\varepsilon, \qquad v=\dfrac{\lambda}{\varepsilon}(f_1^\varepsilon-f_2^\varepsilon).
\end{equation}
Precisely, the BGK form of (\ref{scaled_JinXin}) reads:
\begin{equation}
\label{BGK_scaled_JinXin}
\begin{cases}
\partial_t f_1^\varepsilon+\dfrac{\lambda}{\varepsilon}\partial_x f_1^\varepsilon=\dfrac{1}{\varepsilon^2}(M_1(u)-f_1^\varepsilon), \\
\partial_t f_2^\varepsilon-\dfrac{\lambda}{\varepsilon}\partial_x f_2^\varepsilon=\dfrac{1}{\varepsilon^2}(M_2(u)-f_2^\varepsilon), \\
\end{cases}
\end{equation}
where the so-called {Maxwellians} are:
\begin{equation}\label{Maxwellians_JinXin}
M_1(u)=\dfrac{u}{2}+\dfrac{\varepsilon f(u)}{2\lambda}, \qquad M_2(u)=\dfrac{u}{2}-\dfrac{\varepsilon f(u)}{2\lambda}.
\end{equation}
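The Maxwellians (\ref{Maxwellians_JinXin}) are consistent with the change of variables (\ref{change_BGK}): summing them recovers $u$, and their weighted difference recovers $f(u)$. A quick numerical check (plain Python, with arbitrary sample values and an illustrative flux $f$; not tied to the analysis below):

```python
eps, lam = 0.1, 2.0
f = lambda u: 0.5 * u + u**2   # illustrative flux with f(0) = 0
f1, f2 = 0.7, 0.3              # arbitrary sample kinetic densities

u = f1 + f2                    # conservative variable, u = f1 + f2
v = (lam / eps) * (f1 - f2)    # v = (lambda/eps) (f1 - f2)
M1 = u / 2 + eps * f(u) / (2 * lam)
M2 = u / 2 - eps * f(u) / (2 * lam)

assert abs((M1 + M2) - u) < 1e-12                   # Maxwellians sum to u
assert abs((lam / eps) * (M1 - M2) - f(u)) < 1e-12  # weighted difference gives f(u)
```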
According to the theory on diffusive limits of the Boltzmann equation and related BGK models, see \cite{Laure, Laure1}, we take some fluctuations of the Maxwellian functions as initial data for the Cauchy problem associated with system (\ref{scaled_JinXin}). Namely, given a function $\bar{u}_0(x),$ depending on the spatial variable, we assume
\begin{equation}
\label{approx_initial_data_JinXin}
(u(0, x), v(0, x))=(u_0, v_0)= (\bar{u}_0, \; f(\bar{u}_0)-\lambda^2 \partial_x \bar{u}_0),
\end{equation}
which are indeed perturbations of the Maxwellians, as is clear from expressing the initial data (\ref{approx_initial_data_JinXin}) through the change of variables (\ref{change_BGK}), i.e.
\begin{equation}
\label{approx_initial_data_JinXin_f}
(f_1^\varepsilon(0, x), f_2^\varepsilon(0, x))=\Bigg(M_1(\bar{u}_0)-\frac{\varepsilon \lambda}{2}\partial_x \bar{u}_0, \; M_2(\bar{u}_0)+\frac{\varepsilon \lambda}{2}\partial_x \bar{u}_0 \Bigg),
\end{equation}
where the fluctuations are given by $\pm \dfrac{\varepsilon \lambda}{2} \partial_x \bar{u}_0.$
\indent System (\ref{scaled_JinXin}) is the parabolic scaled version of the hyperbolic relaxation approximation for systems of conservation laws, the Jin-Xin system, introduced in \cite{XinJin} in 1995. This model has been studied in \cite{NataliniCPAM, Chern, XinJin}, where the hyperbolic relaxation limit has been investigated. A complete review of hyperbolic conservation laws with relaxation, with a focus on the Jin-Xin system, is presented in \cite{Mascia}. By means of the Chapman-Enskog expansion, local attractivity of diffusion waves for the Jin-Xin model was established in \cite{Chern}. In \cite{Natalini3}, the authors showed that, under some assumptions on the initial data and the function $f(u),$ the first component of system (\ref{scaled_JinXin}) with $\varepsilon=1$ decays asymptotically towards the fundamental solution to the Burgers equation, in the case $f(u)=\alpha u^2/2.$ Moreover, \cite{Zuazua} provides a complete study of the long time behavior of this model for a more general class of functions $f(u)=|u|^{q-1}u,$ with $q\ge 2.$ The method developed in \cite{Zuazua} can also be extended to the multidimensional case in space, and provides sharp decay rates. Here we study the parabolic scaled version of the system studied in \cite{Zuazua}, i.e. (\ref{scaled_JinXin}), and we consider a more general function $f(u)=a u + h(u),$ where $a$ is a constant and $h(u)$ is a quadratic function. We point out that only the case $a=0$ has been handled in \cite{Zuazua}, as well as in many previous works. In accordance with the theory presented in \cite{Bianchini} on partially dissipative hyperbolic systems, we are also able to cover the case $a\neq 0$. Furthermore, besides the asymptotic behavior of the solutions, here we are interested in studying the diffusion limit, for vanishing $\varepsilon,$ of the Jin-Xin system, which is the main improvement of the present paper with respect to the results achieved in \cite{Bianchini}.
Indeed, because of the presence of the singular parameter, we cannot approximate the analysis of the Green function of the linearized problems, as the authors did in \cite{Bianchini}, and explicit calculations are needed in that context.\\ The diffusive Jin-Xin system has already been investigated in the works below.
In \cite{Jin}, initial data around a traveling wave were considered, while in \cite{BGN} the authors write system (\ref{scaled_JinXin}) in terms of a BGK model, and the diffusion limit is studied by using monotonicity properties of the solution. In all these cases, $u, v$ are scalar functions. For simplicity, here we also take scalar unknowns $u, v.$ However, our approach, which takes its roots in \cite{Bianchini}, can be generalized to the case of vector-valued functions $u, v \in \mathbb{R}^N$. As mentioned before, the novelty of the present paper consists in dealing with the singular approximation and, at the same time, with the large time asymptotics of system (\ref{scaled_JinXin}), which behaves like the limit parabolic equation (\ref{limit_JinXin}), without using monotonicity arguments. We obtain, indeed, sharp decay estimates in time for the solution to system (\ref{scaled_JinXin}) in Sobolev spaces, which are uniform with respect to the singular parameter. This provides the convergence to the limit nonlinear parabolic equation (\ref{limit_JinXin}) both asymptotically in time and in the vanishing $\varepsilon$-limit. To this end, we perform a crucial change of variables that highlights the dissipative property of the Jin-Xin system, and provides a faster decay of the dissipative variable with respect to the conservative one, which allows us to close the estimates. Next, a detailed investigation of the Green function of the linearized system (\ref{scaled_JinXin}) and the related spectral analysis is provided, since explicit expressions are needed in order to deal with the singular parameter $\varepsilon$. The dissipative property of the diffusive Jin-Xin system, together with the uniform decay estimates discussed above, and the Green function analysis combined with the Duhamel formula provide our main results, stated in the following. Define
$$E_m=\max\{\|u_0\|_{L^1}+\varepsilon \|v_0-au_0\|_{L^1}, \|u_0\|_m+\varepsilon \|v_0-au_0\|_m\},$$
where $\|\cdot\|_m$ stands for the $H^m(\mathbb{R})$ Sobolev norm and $H^0(\mathbb{R})=L^2(\mathbb{R}).$
\begin{theorem}
\label{uniform_global_existence}
Consider the Cauchy problem associated with system (\ref{scaled_JinXin}) and initial data (\ref{approx_initial_data_JinXin}). If $E_2$ is sufficiently small,
then there exists a unique global solution $$(u, v) \in C([0, \infty), H^2(\mathbb{R})) \cap C^1([0, \infty), H^1(\mathbb{R})).$$ Moreover, the following decay estimate holds:
$$\|u(t)\|_2+\varepsilon \|v(t)-au(t)\|_2 \le C \min\{1, t^{-1/4}\}E_2.$$
\end{theorem}
Now consider the following equation
\begin{equation}
\label{CE_no_epsilon_JinXin}
\partial_t w_p + a \partial_x w_p + \partial_x h(w_p) -\lambda^2\partial_{xx} w_p=0.
\end{equation}
\begin{theorem}
\label{asymptotic_behavior_JinXin}
Let $w_p$ be the solution to the nonlinear equation (\ref{CE_no_epsilon_JinXin}) with sufficiently smooth initial data
$$w_p(0)=u(0)={u}_0,$$
where ${u}_0$ in (\ref{approx_initial_data_JinXin}) is the initial datum for the Jin-Xin system (\ref{scaled_JinXin}). For any $\mu \in [0, 1/2),$ if $E_1$ is sufficiently small with respect to $(1/2-\mu),$ then we have the following decay estimate:
\begin{equation}
\|D^\beta (u(t)-w_p(t))\|_0 \le C \varepsilon \min \{1, t^{-1/4-\mu -\beta/2}\} E_{|\beta|+4},
\end{equation}
with $C=C(E_{|\beta|+\sigma})$ for $\sigma $ large enough.
\end{theorem}
Once we have identified the right scaled variables $(u, \varepsilon^2 v)$ to study system (\ref{scaled_JinXin}), which are expressed at the beginning of Section \ref{Section1}, and found the strategy, discussed in Sections \ref{Section1} and \ref{Section2}, to achieve the so-called \emph{conservative-dissipative} (C-D) form of \cite{Bianchini} for our model, our approach essentially relies on the method developed in \cite{Bianchini}, with the substantial differences listed here.
\begin{itemize}
\item We need an explicit Green function analysis of the linearized system rather than expansions and approximations, in order to deal with the singular parameter $\varepsilon$. The analysis performed in the first part of Section \ref{Section3} is as precise as possible.
\item Some estimates in \cite{Bianchini} rely on the use of the Shizuta-Kawashima (SK) condition, explained in the following. Consider a linear first order system in compact form: $\partial_t \textbf{u}+A\partial_x \textbf{u}=G\textbf{u}.$ Passing to the Fourier transform, define $E(i \xi)=G-iA\xi.$ The (SK) condition states that, if $\lambda(z)$ is an eigenvalue of $E(z),$ then $\mathrm{Re}(\lambda(i\xi)) \le -c \frac{|\xi|^2}{1+|\xi|^2},$ for some constant $c>0$ and for every $\xi \in \mathbb{R} \setminus \{0\}$. As can be seen in (\ref{Eig_SK}), these eigenvalues for the
compact linearized system in (C-D) form (\ref{CD_real_JinXin_no_tilde}) of system (\ref{scaled_JinXin}) have different weights in $\varepsilon$. Thus, we cannot simply apply the (SK) condition to estimate the remainders in the paragraph \emph{Remainders in between} as the authors did in \cite{Bianchini}, since the weights in $\varepsilon$ are essential to deal with the singular nonlinear term in the Duhamel formula (\ref{Duhamel}). Again, a further analysis is needed.
\item Unlike \cite{Bianchini}, we do not assume a global in time solution, uniformly bounded in $\varepsilon$, for our singular system. The uniform global existence is proved in Theorem \ref{uniform_global_existence}.
\item The coupling between the convergence to the limit equation (\ref{limit_JinXin}) for vanishing $\varepsilon$ and for large time in the last section is the main novelty of the present paper, and new ideas are needed to get this result.
\end{itemize}
\section{General setting}\label{Section1}
First of all, we write system (\ref{scaled_JinXin}) in the following form:
\begin{equation}
\label{scaled_JinXin_new_variables}
\begin{cases}
\partial_t u + \dfrac{\partial_x(\varepsilon^2 v)}{\varepsilon^2}=0, \\
\partial_t(\varepsilon^2 v)+ \lambda^2 \partial_x u=f(u)-\dfrac{\varepsilon^2 v}{\varepsilon^2}.
\end{cases}
\end{equation}
The unknown variable is $\textbf{u}=(u, \varepsilon^2 v),$
in the spirit of the scaled variables introduced in \cite{Bianchini4}, which are the right scaling to get the conservative-dissipative form discussed below.
Here we write $f(u)=au+h(u),$ where $a=f'(0),$ and system (\ref{scaled_JinXin_new_variables}) reads
\begin{equation}
\label{scaled_JinXin_new_variables_new}
\begin{cases}
\partial_t u + \dfrac{\partial_x(\varepsilon^2 v)}{\varepsilon^2}=0, \\
\partial_t(\varepsilon^2 v)+ \lambda^2 \partial_x u=au + h(u)-\dfrac{\varepsilon^2 v}{\varepsilon^2}.
\end{cases}
\end{equation}
Equations (\ref{scaled_JinXin_new_variables_new}) can be written in compact form:
\begin{equation}
\label{JinXin_compact}
\partial_t \textbf{u} + A \partial_x \textbf{u} = -B \textbf{u}+N(u),
\end{equation}
where
\begin{equation}
\label{matrices_Jin_Xin_compact_form}
A=\left(\begin{array}{cc}
0 & \dfrac{1}{\varepsilon^2} \\
\lambda^2 & 0 \\
\end{array}\right), \qquad
-B=\left(\begin{array}{cc}
0 & 0 \\
a & -\dfrac{1}{\varepsilon^2}
\end{array}\right), \qquad
N(u)=\left(\begin{array}{c}
0 \\
h(u)
\end{array}\right).
\end{equation}
In particular, $-B\textbf{u}$ is the linear part of the source term, while $N(u)$ is the remaining nonlinear one, which only depends on the first component of $\textbf{u}=(u, \varepsilon^2 v)$.
Now, we look for a right constant symmetrizer $\Sigma $ for system (\ref{JinXin_compact}), which also highlights the dissipative properties of the linear source term. Thus, we find
\begin{equation}
\label{symmetrizer_JinXin}
\Sigma=\left(\begin{array}{cc}
1 & a \varepsilon^2 \\
a \varepsilon^2 & \lambda^2 \varepsilon^2 \\
\end{array}\right).
\end{equation}
Taking $\textbf{w}$ such that
\begin{equation}
\label{change_JinXin}
\textbf{u}=\left(\begin{array}{c}
u \\
\varepsilon^2 v
\end{array}\right)=\Sigma \textbf{w}=\left(\begin{array}{c}
(\Sigma \textbf{w})_1 \\
(\Sigma \textbf{w})_2
\end{array}\right), \;\; \text{where} \;\; \textbf{w}=\left(\begin{array}{c}
w_1 \\
w_2
\end{array}\right)
=\left(\begin{array}{c}
\dfrac{u \lambda^2-a \varepsilon^2 v}{\lambda^2 - a^2 \varepsilon^2} \\\\
\dfrac{v-au}{\lambda^2-a^2 \varepsilon^2}
\end{array}\right),
\end{equation}
system (\ref{JinXin_compact}) reads
\begin{equation}
\label{JinXin_compact_new_variables}
\Sigma \partial_t \textbf{w}+A_1\partial_x \textbf{w}=-B_1\textbf{w}+N((\Sigma \textbf{w})_1),
\end{equation}
where
\begin{equation*}
A_1=A_1^T=A\Sigma=\left(\begin{array}{cc}
a & \lambda^2 \\
\lambda^2 & a \lambda^2 \varepsilon^2
\end{array}\right),\qquad
-B_1=-B\Sigma=\left(\begin{array}{cc}
0 & 0 \\
0 & a^2 \varepsilon^2 - \lambda^2
\end{array}\right),
\end{equation*}
\begin{equation}
\label{matrices_JinXin_new_variables}
N((\Sigma \textbf{w})_1)=\left(\begin{array}{c}
0 \\
h(w_1+a\varepsilon^2 w_2)
\end{array}\right).
\end{equation}
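The algebra behind (\ref{symmetrizer_JinXin})-(\ref{matrices_JinXin_new_variables}) can be verified numerically; this sketch (plain Python, with arbitrary sample values for $\varepsilon$, $\lambda$, $a$, $u$, $v$) checks that $A\Sigma$ is symmetric and matches $A_1$, that $-B\Sigma$ matches $-B_1$, and that $\Sigma\textbf{w}$ in (\ref{change_JinXin}) reproduces $(u, \varepsilon^2 v)$:

```python
eps, lam, a = 0.1, 2.0, 0.5   # arbitrary sample parameters

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[0, 1 / eps**2], [lam**2, 0]]
minusB = [[0, 0], [a, -1 / eps**2]]
Sigma = [[1, a * eps**2], [a * eps**2, lam**2 * eps**2]]

A1 = matmul(A, Sigma)
assert abs(A1[0][1] - A1[1][0]) < 1e-9            # A Sigma is symmetric
assert abs(A1[0][0] - a) < 1e-9 and abs(A1[0][1] - lam**2) < 1e-9
assert abs(A1[1][1] - a * lam**2 * eps**2) < 1e-9

minusB1 = matmul(minusB, Sigma)
assert abs(minusB1[1][0]) < 1e-9                  # (2,1) entry vanishes
assert abs(minusB1[1][1] - (a**2 * eps**2 - lam**2)) < 1e-9

# The change of variables u = Sigma w reproduces (u, eps^2 v):
u, v = 0.3, -0.8
d = lam**2 - a**2 * eps**2
w1 = (u * lam**2 - a * eps**2 * v) / d
w2 = (v - a * u) / d
assert abs((w1 + a * eps**2 * w2) - u) < 1e-12
assert abs((a * eps**2 * w1 + lam**2 * eps**2 * w2) - eps**2 * v) < 1e-12
```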
By using the Cauchy inequality we get the following lemma.
\begin{lemma}
\label{Equivalence_scalar_product}
The symmetrizer $\Sigma$ is positive definite. Precisely,
\begin{equation}
\label{Equivalence_scalar_product_equation}
\dfrac{1}{2}\|w_1\|_0^2 + \varepsilon^2\|w_2\|_0^2 (\lambda^2-2a^2 \varepsilon^2) \le (\Sigma \textbf{w}, \textbf{w})_0 \le \|w_1\|_0^2 (1+a\varepsilon^2) + \|w_2\|_0^2 (a+\lambda^2)\varepsilon^2.
\end{equation}
\end{lemma}
Notice that from the theory on hyperbolic systems, \cite{Majda}, the Cauchy problem for (\ref{JinXin_compact_new_variables}) with initial data $\textbf{w}_0$ in $H^m(\mathbb{R}), \, m \ge 2,$ has a unique local smooth solution $\textbf{w}^\varepsilon$ for each fixed $\varepsilon > 0$. We denote by $T^\varepsilon$ the maximum time of existence of this local solution and, hereafter, we consider the time interval $[0, T^*],$ with $T^* \in [0, T^\varepsilon)$ for every $\varepsilon$.
In the following, we study the Green function of system (\ref{JinXin_compact_new_variables}), and we establish some uniform energy estimates and decay rates of the smooth solution to system (\ref{JinXin_compact_new_variables}).
\subsection{The conservative-dissipative form}
In this section, we introduce a linear change of variables, providing a particular structure for our system, the so-called \emph{conservative-dissipative form} (C-D) defined in \cite{Bianchini}. The (C-D) form allows us to identify a conservative variable and a dissipative one for system (\ref{scaled_JinXin}), such that a crucial faster decay of the dissipative variable is observed in the following. Thanks to this change of variables, we are indeed able to handle the case $a\neq 0$ in (\ref{scaled_JinXin_new_variables_new}).
Hereafter, $(\cdot, \cdot)$ denotes the standard scalar product in $L^2(\mathbb{R}),$ and $\| \cdot \|_m$ is the $H^m(\mathbb{R})$-norm, for $m \in \mathbb{N},$ where $H^0(\mathbb{R})=L^2(\mathbb{R}).$
\begin{proposition}
Given the right symmetrizer $\Sigma$ in (\ref{symmetrizer_JinXin}) for system (\ref{JinXin_compact}), denoting by
\begin{equation}
\label{change_CD_JinXin}
\tilde{\textbf{w}}=M\textbf{u}=\left(\begin{array}{cc}
1 & 0 \\\\
\dfrac{-a \varepsilon}{\sqrt{\lambda^2 - a^2 \varepsilon^2}} & \dfrac{1}{\varepsilon \sqrt{\lambda^2-a^2 \varepsilon^2}}
\end{array}\right) \textbf{u}=\left(\begin{array}{c}
u \\\\
\dfrac{\varepsilon(v-au)}{\sqrt{\lambda^2-a^2\varepsilon^2}}
\end{array}\right),
\end{equation}
system (\ref{JinXin_compact}) can be written in (C-D) form defined in \cite{Bianchini}, i.e.
\begin{equation}
\label{CD_real_JinXin}
\partial_t \tilde{\textbf{w}} + \tilde{A}\partial_x \tilde{\textbf{w}} = -\tilde{B} \tilde{\textbf{w}} + \tilde{N}(\tilde{w}_1),
\end{equation}
where
\begin{equation}
\label{matrices_CD_real_JinXin}
\small{
\begin{aligned}
\tilde{A}=\left(\begin{array}{cc}
a & \dfrac{\sqrt{\lambda^2-a^2\varepsilon^2}}{\varepsilon} \\\\
\dfrac{\sqrt{\lambda^2-a^2\varepsilon^2}}{\varepsilon} & {-a}
\end{array}\right), \;
\tilde{B}=\left(\begin{array}{cc}
0 & 0 \\\\
0 & \dfrac{1}{\varepsilon^2}
\end{array}\right), \; \tilde{N}(\tilde{w}_1)=\left(\begin{array}{c}
0 \\\\
\dfrac{h(\tilde{w}_1)}{\varepsilon\sqrt{\lambda^2-a^2\varepsilon^2}}
\end{array}\right).
\end{aligned}}
\end{equation}
\end{proposition}
\section{The Green function of the linear partially dissipative system} \label{Section2}
We consider the linear part of the (C-D) system (\ref{CD_real_JinXin})-(\ref{matrices_CD_real_JinXin}) without the \emph{tilde} for simplicity,
\begin{equation}
\label{CD_real_JinXin_no_tilde}
\partial_t \textbf{w}+A\partial_x \textbf{w}=-B\textbf{w}.
\end{equation}
We want to apply the approach developed in \cite{Bianchini} to study the singular approximation system above. The main difficulty here is dealing with the singular perturbation parameter $\varepsilon$.
We consider the Green kernel $\Gamma(t,x)$ of (\ref{CD_real_JinXin_no_tilde}), which satisfies
\begin{equation}
\label{Green_kernel_JinXin}
\begin{cases}
\partial_t \Gamma + A \partial_x \Gamma = -B\Gamma, \\
\Gamma (0, x)= \delta(x) I.
\end{cases}
\end{equation}
Taking the Fourier transform $\hat{\Gamma},$ we get
\begin{equation}
\label{Fourier_JinXin}
\begin{cases}
\frac{d}{dt}\hat{\Gamma}=(-B-i\xi A) \hat{\Gamma}, \\
\hat{\Gamma}(0, \xi)=I.
\end{cases}
\end{equation}
Consider the entire function
\begin{equation}
\label{E_JinXin}
E(z)=-B-zA=\left(\begin{array}{cc}
-az & -\dfrac{z\sqrt{\lambda^2-a^2\varepsilon^2}}{\varepsilon} \\\\
-\dfrac{z\sqrt{\lambda^2-a^2\varepsilon^2}}{\varepsilon} & {az}-\dfrac{1}{\varepsilon^2}
\end{array}\right).
\end{equation}
Formally, the solution to (\ref{Fourier_JinXin}) is given by
\begin{equation}
\label{Formal_solution_JinXin}
\hat{\Gamma}(t, \xi)=e^{E(i\xi)t}=\sum_{n=0}^\infty \dfrac{t^n}{n!}(-B -i\xi A)^n.
\end{equation}
Since $E(z)$ in (\ref{E_JinXin}) is symmetric, if $z$ is not exceptional we can write
$$E(z)=\lambda_1(z)P_1(z)+\lambda_2(z)P_2(z),$$
where $\lambda_1(z), \lambda_2(z)$ are the eigenvalues of $E(z),$ and $P_1(z), P_2(z)$ the related eigenprojections, given by
$$P_j(z)=-\frac{1}{2\pi i} \oint_{|\xi-\lambda_j(z)| \ll 1} (E(z)-\xi I)^{-1} \, d\xi, \qquad j=1, 2.$$
Following \cite{Bianchini}, we study the low frequencies (case $z=0$) and the high frequencies (case $z=\infty$) separately.
\paragraph{\textbf{Case $z=0$}}
The total projector for the eigenvalues near $0$ is
\begin{equation}
\label{total_projector_near_zero}
P(z)=-\dfrac{1}{2 \pi i} \oint_{|\xi| \ll 1} (E(z)-\xi I)^{-1} \, d\xi.
\end{equation}
Moreover, it admits the following expansion, see \cite{Kato},
\begin{equation}
\label{total_projector_expansion}
P(z)=P_0+\sum_{n \ge 1} z^{n}P^{n}(z),
\end{equation}
where $P_0$ is the eigenprojection for $E(0)-\xi I=-B-\xi I,$ i.e.
\begin{equation}
\label{Pn_JinXin}
P_0=-\dfrac{1}{2 \pi i } \oint_{|\xi| \ll 1} (-B-\xi I)^{-1} \, d\xi=:Q_0=\left(\begin{array}{cc}
1 & 0 \\
0 & 0
\end{array}\right), \; P^{n}(z)=-\dfrac{1}{2\pi i} \oint R^{(n)} (\xi) \, d\xi,
\end{equation}
with $R^{(n)}$ the $n$-th term in the expansion of the resolvent (\ref{Rn_XinJin}). Here $Q_0$ is the projection onto the null space of the source term, while we denote by $Q_{-}=I-Q_0$ the complementary projection, and by $L_{-}, L_0$ and $R_{-}, R_0$ the related left and right eigenprojectors, see \cite{Kato, Bianchini}, i.e.
$$L_{-}=R_{-}^T=\left(\begin{array}{cc}
0 & 1
\end{array}\right), \qquad L_0=R_0^T=\left(\begin{array}{cc}
1 & 0
\end{array}\right),$$
$$Q_{-}=R_{-}L_{-}, \qquad Q_0=R_0L_0.$$
On the other hand, from \cite{Kato},
\begin{align*}
R(\xi, z)=(E(z)-\xi I)^{-1} & =(-B-zA-\xi I)^{-1}=(-B-\xi I)^{-1} \sum_{n=0}^\infty (Az (-B-\xi I)^{-1})^n\\
&=(-B-\xi I)^{-1}+\sum_{n \ge 1} (-B-\xi I)^{-1} z^n (A (-B-\xi I)^{-1})^n\\
&=R_0 (\xi)+\sum_{n \ge 1} R^{(n)} (\xi),
\end{align*}
i.e.
\begin{equation}
\label{Rn_XinJin}
R^{(n)}=z^n (-B-\xi I)^{-1} (A (-B-\xi I)^{-1})^n.
\end{equation}
Since a neighborhood of $z=0$ is considered, at this point the authors in \cite{Bianchini} take the first two terms of the asymptotic expansion of the total projector (\ref{total_projector_expansion}), obtaining an expression with a remainder $O(z^2)$. We cannot approximate the projector in the same way, since we need to track the terms which are singular in $\varepsilon$. Thus, we perform an explicit spectral analysis for the Green function of our problem. First of all,
\begin{equation}
A(-B-\xi I)^{-1}=\left(\begin{array}{cc}
-\dfrac{a}{\xi} & -\dfrac{\varepsilon \sqrt{\lambda^2-a^2\varepsilon^2}}{1+\varepsilon^2 \xi} \\\\
-\dfrac{\sqrt{\lambda^2-a^2\varepsilon^2}}{\varepsilon \xi} & \dfrac{a\varepsilon^2}{1+\varepsilon^2 \xi}
\end{array}\right),
\end{equation}
which is diagonalizable, i.e.
$$A(-B-\xi I)^{-1}=V D V^{-1},$$
where $D$ is the diagonal matrix with entries given by the eigenvalues, and $V$ is the matrix with the eigenvectors on the columns. Explicitly, setting
\begin{equation}
\label{Box_JinXin}
\Box:=a^2+4\varepsilon^2 \lambda^2 \xi^2 + 4 \lambda^2 \xi,
\end{equation}
we have
\begin{equation}
\label{eigenvalues_vectors_XinJin}
D=\operatorname{diag} \Bigg\{ \dfrac{-a \pm \sqrt{\Box}}{2 \xi (1+\varepsilon^2 \xi)}\Bigg\}, \, V=\left(\begin{array}{cc}
\displaystyle \frac{\varepsilon (a+\sqrt{\Box}+2a \varepsilon^2 \xi)}{2(1+\varepsilon^2 \xi) \sqrt{\lambda^2-a^2 \varepsilon^2}} & \displaystyle \frac{\varepsilon (a-\sqrt{\Box}+2a \varepsilon^2 \xi)}{2(1+\varepsilon^2 \xi) \sqrt{\lambda^2-a^2 \varepsilon^2}} \\\\
1 & 1
\end{array}\right),
\end{equation}
\begin{equation}
\label{inv_eigenvectors_JinXin}
V^{-1}=\left(\begin{array}{cc}
\dfrac{(1+\varepsilon^2 \xi) \sqrt{\lambda^2-a^2\varepsilon^2}}{\varepsilon \sqrt{\Box}} & \dfrac{-a+\sqrt{\Box}-2a \varepsilon^2 \xi}{2 \sqrt{\Box}} \\\\
-\dfrac{(1+\varepsilon^2 \xi) \sqrt{\lambda^2-a^2\varepsilon^2}}{\varepsilon \sqrt{\Box}} & \dfrac{a+\sqrt{\Box}+2a \varepsilon^2 \xi}{2 \sqrt{\Box}}
\end{array}\right).
\end{equation}
This way, denoting by
$$\Diamond_1=a-\sqrt{\Box}+2a\varepsilon^2 \xi, \; \Diamond_2=a+\sqrt{\Box}+2a\varepsilon^2 \xi, \; \triangle_1=-\dfrac{a+\sqrt{\Box}}{2\xi (1+\varepsilon^2 \xi)}, \; \triangle_2=-\dfrac{a-\sqrt{\Box}}{2 \xi (1+\varepsilon^2 \xi)},$$
with $\Box$ in (\ref{Box_JinXin}), from (\ref{Rn_XinJin}) we have
$$R^{(n)}=z^n (-B-\xi I)^{-1}(A(-B-\xi I)^{-1})^n=z^n (-B-\xi I)^{-1}(V D^n V^{-1})$$
$$=z^n\left(\begin{array}{cc}
\dfrac{\Diamond_1 \triangle_2^n-\Diamond_2 \triangle_1^n}{2\sqrt{\Box} \xi} & -\varepsilon \sqrt{\dfrac{\lambda^2-a^2\varepsilon^2}{\Box}} (\triangle_1^n-\triangle_2^n) \\\\
-\varepsilon \sqrt{\dfrac{\lambda^2-a^2\varepsilon^2}{\Box}} (\triangle_1^n-\triangle_2^n) &
-\dfrac{\varepsilon^2}{2(1+\varepsilon^2 \xi)\sqrt{\Box}}(\Diamond_2 \triangle_2^n-\Diamond_1 \triangle_1^n)
\end{array}\right).$$
The matrix above is uniformly bounded in $\varepsilon,$ and so we can approximate the expression of the total projector (\ref{total_projector_expansion}) up to the second order. To this end, we take the previous expression of $R^{(n)}$ for $n=0, 1, 2,$ apply the integral formula (\ref{Pn_JinXin}), and obtain
\begin{equation}
\label{total_projector_approximation}
P(z)=\left(\begin{array}{cc}
1+O(z^2) & -\varepsilon z \sqrt{\lambda^2-a^2\varepsilon^2} + \varepsilon O(z^2) \\\\
-\varepsilon z \sqrt{\lambda^2-a^2\varepsilon^2} + \varepsilon O(z^2) \quad & \varepsilon^2 z^2(\lambda^2-a^2\varepsilon^2) + \varepsilon^2 O(z^3)
\end{array}\right).
\end{equation}
Now, we consider the left $L(z)$ and the right $R(z)$ eigenprojectors of $P(z),$ i.e.
\begin{align*}
& P(z)=R(z)L(z), \quad L(z)R(z)=I \\
& L(z)P(z)=L(z), \quad P(z)R(z)=R(z).
\end{align*}
We can limit ourselves to the second order approximation, according to (\ref{total_projector_approximation}). Then, we consider
\begin{equation}
\label{tilde_Projector}
\tilde{P}(z)=\left(\begin{array}{cc}
1& -\varepsilon z \sqrt{\lambda^2-a^2\varepsilon^2} \\\\
-\varepsilon z \sqrt{\lambda^2-a^2\varepsilon^2} \quad & \varepsilon^2 z^2(\lambda^2-a^2\varepsilon^2)
\end{array}\right),
\end{equation}
and, by applying the conditions above, we obtain
\begin{align*}
\tilde{L}(z)=\left(\begin{array}{cc}
1 & -\varepsilon z \sqrt{\lambda^2-a^2 \varepsilon^2}
\end{array}\right), \qquad \tilde{R}(z)=\left(\begin{array}{c}
1 \\
-\varepsilon z \sqrt{\lambda^2-a^2\varepsilon^2}
\end{array}\right),
\end{align*}
where
\begin{equation}
\begin{array}{cc}
\tilde{P}(z)=\tilde{R}(z)\tilde{L}(z), \quad & \tilde{L}(z)\tilde{R}(z)=1+\varepsilon^2 O(z^2), \\
\tilde{P}(z)\tilde{R}(z)=\tilde{R}(z)+\varepsilon^2 O(z^2) , \quad &
\tilde{L}(z)\tilde{P}(z)=\tilde{L}(z)+\varepsilon^2 O(z^2) ,
\end{array}
\end{equation}
and so $$P(z)=\tilde{P}(z)+O(z^2), \quad R(z)=\tilde{R}(z)+O(z^2), \quad \quad L(z)=\tilde{L}(z)+O(z^2).$$
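For instance, the relation $\tilde{L}(z)\tilde{R}(z)=1+\varepsilon^2 O(z^2)$ above can be made explicit (a direct computation, recorded here as a consistency check):
$$\tilde{L}(z)\tilde{R}(z)=1+\varepsilon^2 z^2 (\lambda^2-a^2\varepsilon^2),$$
so the correction to the normalization $L(z)R(z)=I$ is of order $\varepsilon^2 z^2,$ with a coefficient bounded uniformly in $\varepsilon$.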
Let us point out that the further expansions of $L(z), R(z)$ are not singular in $\varepsilon$ either, since the weights in $\varepsilon$ of these vectors come from (\ref{total_projector_approximation}). Precisely, one can see that $L^\varepsilon(z)$ depends on $\varepsilon$ as follows:
$$L^\varepsilon(\cdot)=\left(\begin{array}{cc}
1 \quad & O(\varepsilon)
\end{array}\right)=[{R(\cdot)^\varepsilon}]^T.$$
Now, by using the left and the right operators, we decompose $E(z)$ in the following way, see \cite{Bianchini},
\begin{equation}
\label{E_decompostition_JinXin}
E(z)=R(z)F(z)L(z)+R_{-}(z)F_{-}(z)L_{-}(z),
\end{equation}
where $L_{-}(z), R_{-}(z)$ are left and right eigenprojectors of
$P_{-}(z)=I-P(z),$ while
$$F(z)=L(z)E(z)R(z), \quad F_{-}(z)=L_{-}(z)E(z)R_{-}(z).$$
We use the approximations of $L(z), R(z)$ above, and so
\begin{equation}
\label{F_approximation_JinXin}
F(z)=(\tilde{L}(z)+O(z^2))(-B-Az)(\tilde{R}(z)+O(z^2))=-az+(\lambda^2-a^2\varepsilon^2)z^2+O(z^3).
\end{equation}
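As a sanity check, setting $s:=\sqrt{\lambda^2-a^2\varepsilon^2},$ the product of the leading approximations can be computed explicitly:
$$(-B-Az)\tilde{R}(z)=\left(\begin{array}{c}
-az+s^2z^2 \\
-a\varepsilon s z^2
\end{array}\right), \qquad \tilde{L}(z)(-B-Az)\tilde{R}(z)=-az+s^2z^2+a\varepsilon^2 s^2 z^3,$$
which recovers the first two terms of (\ref{F_approximation_JinXin}); in particular, the coefficient of $z^3$ coming from this part is $a\varepsilon^2(\lambda^2-a^2\varepsilon^2),$ bounded uniformly in $\varepsilon$.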
We study $F_{-}(z)$. Matrix (\ref{total_projector_approximation}) and the definition above imply that
\begin{equation}
\label{Compl_total_projector_JinXin}
P_{-}(z)=\left(\begin{array}{cc}
O(z^2) \quad & z \varepsilon \sqrt{\lambda^2-a^2 \varepsilon^2}+\varepsilon O(z^2) \\\\
z \varepsilon \sqrt{\lambda^2-a^2 \varepsilon^2}+\varepsilon O(z^2) \quad & 1+\varepsilon^2O(z^2)
\end{array}\right),
\end{equation}
and, approximating again,
$$L_{-}(z)=\tilde{L}_{-}(z)+O(z^2)=\left(\begin{array}{cc}
z \varepsilon \sqrt{\lambda^2-a^2\varepsilon^2} \quad & 1
\end{array}\right)+O(z^2),$$
$$R_{-}(z)=\tilde{R}_{-}(z)+O(z^2)=\left(\begin{array}{c}
z \varepsilon \sqrt{\lambda^2-a^2\varepsilon^2} \\
1
\end{array}\right)+O(z^2).$$
Thus,
\begin{equation}
\label{F_meno_approximation_JinXin}
F_{-}(z)=\tilde{L}_{-}(z)(-B-Az)\tilde{R}_{-}(z)+O(z^2)=-\frac{1}{\varepsilon^2}+az+O(z^2).
\end{equation}
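The analogous computation for the tilde part of $F_{-}(z),$ again with $s:=\sqrt{\lambda^2-a^2\varepsilon^2},$ gives
$$\tilde{L}_{-}(z)(-B-Az)\tilde{R}_{-}(z)=-\dfrac{1}{\varepsilon^2}+az-2s^2z^2-a\varepsilon^2 s^2 z^3,$$
so the coefficient of $z^2$ coming from this part equals $-2(\lambda^2-a^2\varepsilon^2)$ and is bounded uniformly in $\varepsilon$; the remaining contributions to the $O(z^2)$ term in (\ref{F_meno_approximation_JinXin}) come from the $O(z^2)$ errors on $L_{-}(z), R_{-}(z)$.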
This yields the proposition below.
\begin{proposition}
\label{Proposition_z_small}
We have the following decomposition near $z=0$:
\begin{equation}
\label{Decomposition_z_small}
E(z)=F(z)P(z)+E_{-}(z),
\end{equation}
with $F(z)$ in (\ref{F_approximation_JinXin}), $P(z)$ in (\ref{total_projector_approximation}), $E_{-}(z)=R_{-}(z)F_{-}(z)L_{-}(z),$ and $F_{-}(z)$ in (\ref{F_meno_approximation_JinXin}).
\end{proposition}
\paragraph{\textbf{Case $z=\infty$}}
We consider
$E(z)=-B-Az=z(-B/z-A)=z E_1(1/z)$ and, setting $z=i\xi$ and $\zeta=1/z=-i\eta,$ with $\xi, \eta \in \mathbb{R},$ we have $E_1(\zeta)=-A- \zeta B$
$$=\left(\begin{array}{cc}
-a \quad & -\dfrac{\sqrt{\lambda^2-a^2 \varepsilon^2}}{\varepsilon} \\\\
-\dfrac{\sqrt{\lambda^2-a^2 \varepsilon^2}}{\varepsilon} \quad & a- \dfrac{\zeta}{\varepsilon^2}
\end{array}\right)
=\left(\begin{array}{cc}
-a \quad & -\dfrac{\sqrt{\lambda^2-a^2 \varepsilon^2}}{\varepsilon} \\\\
-\dfrac{\sqrt{\lambda^2-a^2 \varepsilon^2}}{\varepsilon} \quad & a+ \dfrac{i \eta}{\varepsilon^2}
\end{array}\right).$$
Since $E_1(\zeta)$ is symmetric, we determine the eigenvalues and the right eigenprojectors,
$$-A-\zeta B=\lambda_1^{E_1}(\zeta) R_1(\zeta) R_1^T(\zeta) + \lambda_2^{E_1}(\zeta) R_2(\zeta) R_2^T(\zeta),$$
such that, for $j=1, 2,$ $R_j^T(\zeta) R_j(\zeta)=1$.
The eigenvalues of $E_1(i \eta)$ are given by
$$\lambda_{1, 2}^{E_1}(z)= \dfrac{i \eta}{2 \varepsilon^2} \pm \dfrac{\sqrt{4 \varepsilon^2 \lambda^2 + 4 a \eta \varepsilon^2 i - \eta^2}}{2 \varepsilon^2},$$ and it is simple to prove that both the corresponding eigenvalues of $E(z),$ obtained by multiplying $\lambda_1^{E_1}(z)$ and $\lambda_2^{E_1}(z)$ above by $z=i\xi=i/\eta,$
have a strictly negative real part in the high-frequency regime ($|\zeta|=|\eta| \ll 1$) and in the vanishing $\varepsilon$ limit. Moreover, setting
$\delta_{1, 2}=\sqrt{8 \varepsilon^2 \lambda^2 + 2 \zeta^2 - 8a\varepsilon^2 \zeta \pm (- 2 \zeta \sqrt{\mu}+4a\varepsilon^2 \sqrt{\mu})},$ where $\mu=4\varepsilon^2 \lambda^2+\zeta^2-4a\varepsilon^2\zeta,$ the normalized right eigenprojectors are given by:
$$R_1(\zeta)=\dfrac{1}{\delta_1}\left(\begin{array}{c}
{(2a\varepsilon^2-\zeta)+\sqrt{\mu}} \\\\
{2\varepsilon \sqrt{\lambda^2-a^2\varepsilon^2}}
\end{array}\right), \quad R_2(\zeta)=\dfrac{1}{\delta_2}\left(\begin{array}{c}
{(2a\varepsilon^2-\zeta)-\sqrt{\mu}} \\\\
{2\varepsilon \sqrt{\lambda^2-a^2\varepsilon^2}}
\end{array}\right).$$
The eigenprojectors are bounded in $\varepsilon,$ even for $\zeta$ near zero. Thus, we can approximate the total projector of
$E_1(\zeta)=-A-\zeta B$ in a more convenient way, i.e. we decompose
$$A=\lambda_1 R_1 R_1^T+\lambda_2 R_2 R_2^T,$$ where
$\lambda_1=\lambda/\varepsilon, \, \lambda_2=-\lambda/\varepsilon,$ and the corresponding eigenprojectors $$ R_1=\dfrac{1}{\sqrt{2\lambda}}\left(\begin{array}{c}
\sqrt{\dfrac{\lambda^2-a^2 \varepsilon^2}{(\lambda - a \varepsilon)}} \\\\
\sqrt{{\lambda-a \varepsilon}}
\end{array}\right),
\; R_2=\dfrac{1}{\sqrt{2\lambda}}\left(\begin{array}{c}
-\sqrt{\dfrac{\lambda^2-a^2 \varepsilon^2}{(\lambda+a \varepsilon)}} \\\\
\sqrt{{\lambda+a \varepsilon}}
\end{array}\right).$$
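Since $(\lambda^2-a^2\varepsilon^2)/(\lambda \mp a\varepsilon)=\lambda \pm a\varepsilon,$ these eigenvectors can be rewritten (a simplification we record for convenience) as
$$R_1=\dfrac{1}{\sqrt{2\lambda}}\left(\begin{array}{c}
\sqrt{\lambda+a\varepsilon} \\
\sqrt{\lambda-a\varepsilon}
\end{array}\right), \qquad R_2=\dfrac{1}{\sqrt{2\lambda}}\left(\begin{array}{c}
-\sqrt{\lambda-a\varepsilon} \\
\sqrt{\lambda+a\varepsilon}
\end{array}\right),$$
from which one checks at once that $|R_1|=|R_2|=1$ and that $AR_j=\lambda_j R_j$: for instance, the first component of $AR_1$ is $\frac{1}{\sqrt{2\lambda}}\big(a\sqrt{\lambda+a\varepsilon}+\frac{(\lambda-a\varepsilon)\sqrt{\lambda+a\varepsilon}}{\varepsilon}\big)=\frac{\lambda}{\varepsilon}\cdot\frac{\sqrt{\lambda+a\varepsilon}}{\sqrt{2\lambda}}.$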
Now, by considering the total projector for the family of eigenvalues converging to $\lambda_j=\pm \lambda/\varepsilon$ as $\zeta \to 0,$ we obtain the following approximations:
\begin{equation}
\label{F1_JinXin}
{F_1}_j(\zeta)=-\lambda_j I + \zeta R_j^T (-B) R_j + O(\zeta^2).
\end{equation}
Explicitly,
\begin{equation}
\label{F12_JinXin_explicit}
{F_1}_1(\zeta)=-\dfrac{\lambda}{\varepsilon} - \dfrac{(\lambda-a \varepsilon)\zeta}{2\lambda \varepsilon^2} + O(\zeta^2), \;\;\; {F_1}_2(\zeta)=\dfrac{\lambda}{\varepsilon} - \dfrac{(\lambda+a \varepsilon)\zeta}{2\lambda \varepsilon^2} + O(\zeta^2).
\end{equation}
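The first-order coefficients in (\ref{F12_JinXin_explicit}) follow from (\ref{F1_JinXin}) by a direct computation (a quick check): since $-B=\operatorname{diag}\{0, -1/\varepsilon^2\}$ and the second components of $R_1, R_2$ are $\sqrt{(\lambda-a\varepsilon)/(2\lambda)}$ and $\sqrt{(\lambda+a\varepsilon)/(2\lambda)}$ respectively,
$$R_1^T(-B)R_1=-\dfrac{\lambda-a\varepsilon}{2\lambda\varepsilon^2}, \qquad R_2^T(-B)R_2=-\dfrac{\lambda+a\varepsilon}{2\lambda\varepsilon^2}.$$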
Since $E(z)=z E_1(1/z),$ we multiply $F_1(\zeta)=F_1(1/z)$ by $z$ and, for $|z|\rightarrow + \infty,$
\begin{equation}
\label{eig_zeta_infty_JinXin}
\lambda_1(z)=-\dfrac{\lambda}{\varepsilon}z-\dfrac{\lambda-a \varepsilon}{2 \lambda \varepsilon^2}+O(1/z), \quad
\lambda_2(z)=\dfrac{\lambda}{\varepsilon}z-\dfrac{\lambda+a \varepsilon}{2 \lambda \varepsilon^2}+O(1/z),
\end{equation}
while the projectors are
\begin{equation}
\label{projector_z_big_JinXin}
\mathcal{P}_j(z)=R_j R_j^T+O(1/z), \quad j=1,2.
\end{equation}
\begin{remark}
\label{remark_zeta_infinity}
Notice that the term $O(1/z)$ in (\ref{eig_zeta_infty_JinXin}) could be singular in $\varepsilon$. However, from the previous discussion, the eigenvalues of $E(z)$ have a strictly negative real part. This implies that the coefficients of the even powers of $z$ in (\ref{eig_zeta_infty_JinXin}) have a negative sign, while the others are imaginary terms. Thus, $e^{\lambda_{1,2}(z)}$ are bounded in $\varepsilon$.
\end{remark}
\begin{proposition}
\label{Proposition_z_infinity}
We have the following decomposition near $z = \infty$:
\begin{equation}
\label{Decomposition_z_large}
E(z)=\lambda_1(z) \mathcal{P}_1(z)+\lambda_2(z) \mathcal{P}_2(z),
\end{equation}
with $\lambda_1(z), \lambda_2(z)$ in (\ref{eig_zeta_infty_JinXin}), and $\mathcal{P}_1(z), \mathcal{P}_2(z)$ in (\ref{projector_z_big_JinXin}).
\end{proposition}
\section{Green function estimates} \label{Section3}
\paragraph{Green function estimates near $z=0$}
We associate to (\ref{F_approximation_JinXin}) the parabolic equation
$$\partial_t w+a \partial_x w=(\lambda^2-a^2 \varepsilon^2)\partial_{xx} w.$$
We can write the explicit solution
\begin{equation}
\label{z_piccolo_JinXin}
g(t, x)= \dfrac{1}{2 \sqrt{(\lambda^2 - a^2 \varepsilon^2)\pi t}} \exp \Bigg\{ -\dfrac{(x-at)^2}{4(\lambda^2 - a^2 \varepsilon^2)t} \Bigg\}.
\end{equation}
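For later use, note that the Fourier transform of (\ref{z_piccolo_JinXin}) is
$$e^{-ia\xi t-(\lambda^2-a^2\varepsilon^2)\xi^2 t}=e^{F(i\xi)t+O(\xi^3 t)},$$
consistently with the expansion (\ref{F_approximation_JinXin}) evaluated at $z=i\xi$.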
This means that, for some $c_1, c_2>0$,
\begin{equation}
\label{z_piccolo_JinXin_estimate}
|g(t, x)| \le \dfrac{c_1}{\sqrt{t}} e^{-(x-at)^2/(c_2 t)}, \qquad (t, x) \in \mathbb{R}^{+}\times \mathbb{R}, \qquad \forall \varepsilon>0.
\end{equation}
Now, recalling Proposition \ref{Proposition_z_small} and considering the approximation $\tilde{P}(z)$ in (\ref{tilde_Projector}) of the total projector $P(z)$ in (\ref{total_projector_approximation}),
$$e^{E(z)t}=\hat{g}(t, z) \tilde{P}(z)+R_{-}(z)e^{F_{-}(z)t}L_{-}(z)+\hat{R}_1(t, z),$$
where $\hat{g}(t, z)=e^{(-az+(\lambda^2-a^2\varepsilon^2)z^2)t},$ and $\hat{R}_1(t, z)$ is a remainder term, we take the inverse Fourier transform of
\begin{equation}
\label{K_dritto_JinXin_Fourier}
\hat{K}(t, z)=\hat{g}(t, z) \tilde{P}(z),
\end{equation}
which yields the expression of the first part of the Green function near $z=0$, i.e.
\begin{equation}
\label{K_dritto_JinXin}
K(t, x)=\left(\begin{array}{cc}
g(t, x) \quad & \varepsilon \sqrt{\lambda^2 -a^2 \varepsilon^2} \Bigg(\dfrac{d g(t,x)}{dx}\Bigg) \\\\
\varepsilon \sqrt{\lambda^2 -a^2 \varepsilon^2} \Bigg(\dfrac{d g(t,x)}{dx}\Bigg) \quad & \varepsilon^2 (\lambda^2 - a^2 \varepsilon^2) \Bigg(\dfrac{d^2 g(t,x)}{d^2 x}\Bigg)
\end{array}\right).
\end{equation}
Here, $\hat{K}(t, \xi)$ is the approximation of $\hat{\Gamma}(t, \xi)$ in (\ref{Formal_solution_JinXin}) for $|\xi| \approx 0.$
Thus, for $\xi \in [-\delta, \delta] $ with $\delta>0$ sufficiently small, we consider the following remainder term
\begin{equation}
\label{Remainder_1_JinXin}
\begin{aligned}
R_1(t, x)&=\dfrac{1}{2\pi}\int_{-\delta}^\delta (e^{E(i \xi)t}-\hat{K}(t, \xi)) e^{i \xi x} \, d \xi \\
& = \dfrac{1}{2\pi}\int_{-\delta}^\delta e^{i\xi(x-at)-\xi^2(\lambda^2-a^2 \varepsilon^2)t} (e^{O(\xi^3 t)}P(i\xi)-\tilde{P}(i\xi)) \, d\xi \\
& + \dfrac{1}{2 \pi} \int_{-\delta}^\delta R_{-}(i \xi) e^{F_{-}(i \xi)t} L_{-}(i \xi) e^{i\xi x} \, d\xi.
\end{aligned}
\end{equation}
We need an estimate for the remainder above. First of all, from (\ref{F_meno_approximation_JinXin}) and (\ref{Compl_total_projector_JinXin}),
\begin{align*}
\Bigg| \dfrac{1}{2 \pi} \int_{-\delta}^\delta R_{-}(i \xi) e^{F_{-}(i \xi)t} L_{-}(i \xi) e^{i\xi x} \, d\xi \Bigg| & \le \Bigg| \dfrac{1}{2 \pi} \int_{-\delta}^\delta P_{-}(i \xi) e^{(-1/\varepsilon^2+a i \xi +O(\xi^2))t} e^{i\xi x} \, d\xi \Bigg| \\\\
& \le C e^{-t/\varepsilon^2}
\end{align*}
for some constant $C$.
Following \cite{Bianchini},
$$|e^{O(\xi^3 t)} P(i \xi)-\tilde{P}(i \xi)| = |z^3|t e^{2 \mu |z|^2 t} \left(\begin{array}{cc}
O(1) \quad &O( \varepsilon ) |z| \\
O(\varepsilon) |z| \quad & O(\varepsilon^2) |z|^2
\end{array}\right),$$
for a constant $\mu > 0$.
This way,
$$R_1(t, x) = e^{-(x-at)^2/(ct)} \left(\begin{array}{cc}
O(1)(1+t)^{-1} \quad & O(\varepsilon) (1+t)^{-3/2} \\
O(\varepsilon) (1+t)^{-3/2} \quad & O(\varepsilon^2) (1+t)^{-2}
\end{array}\right).$$
\paragraph{Green function estimates near $z=\infty$}
We associate to (\ref{eig_zeta_infty_JinXin}) the following equations:
\begin{align*}
\partial_t w + \dfrac{\lambda}{\varepsilon} \partial_x w = -\dfrac{\lambda-a \varepsilon}{2 \lambda \varepsilon^2} w, \quad
\partial_t w - \dfrac{\lambda}{\varepsilon} \partial_x w = -\dfrac{\lambda+a \varepsilon}{2 \lambda \varepsilon^2} w.
\end{align*}
We can write explicitly the solutions
\begin{align*}
g_1(t, x)=\delta (x-\lambda t/ \varepsilon ) e^{-(\lambda-a \varepsilon)t/(2 \lambda \varepsilon^2) }, \quad
g_2(t, x)=\delta (x+\lambda t/ \varepsilon ) e^{-(\lambda+a \varepsilon)t/(2 \lambda \varepsilon^2) }.
\end{align*}
Thus,
$$|g_{j}(t, x)| \le C \delta( x \mp \lambda t/\varepsilon) e^{-c t/\varepsilon^2 }, \quad j=1, 2.$$
We determine the Fourier transform of the Green function for $|z|$ going to infinity,
\begin{equation}
\label{K_storto_JinXin}
\hat{\mathcal{K}}(t, \xi) = \exp \Bigg\{- i \dfrac{\lambda t \xi}{\varepsilon} - \dfrac{(\lambda - a \varepsilon)t}{2 \lambda \varepsilon^2} \Bigg\}\mathcal{P}_1(\infty) + \exp \Bigg\{ i \dfrac{\lambda t \xi}{\varepsilon} - \dfrac{(\lambda + a \varepsilon)t}{2 \lambda \varepsilon^2} \Bigg\}\mathcal{P}_2(\infty).
\end{equation}
This way, from Proposition \ref{Proposition_z_infinity}, the remainder term here is
\begin{equation}\label{Remainder2_Jin_Xin}
R_2(t, x)=\dfrac{1}{2 \pi} \int_{|\xi| \ge N} (e^{E(i \xi)t}-\hat{\mathcal{K}}(t, \xi)) e^{i\xi x} \, d\xi,
\end{equation}
and
\begin{align*}
|R_2| & \le \dfrac{1}{2 \pi} \Bigg|\int_{|\xi| \ge N} e^{i\xi (x- \lambda t/\varepsilon)-(\lambda - a \varepsilon)t/(2 \lambda \varepsilon^2)} \cdot (e^{O(1)t/(i\xi) + O(1)t/\xi^2} \mathcal{P}_{1}(i\xi)-\mathcal{P}_{1}(\infty)) \, d\xi \Bigg|\\
& + \dfrac{1}{2 \pi} \Bigg|\int_{|\xi| \ge N} e^{i\xi (x+\lambda t/\varepsilon)-(\lambda+a \varepsilon)t/(2 \lambda \varepsilon^2)} \cdot (e^{O(1)t/(i\xi) + O(1)t/\xi^2} \mathcal{P}_{2}(i\xi)-\mathcal{P}_{2}(\infty)) \, d\xi \Bigg|.
\end{align*}
Following \cite{Bianchini} and thanks to Remark \ref{remark_zeta_infinity},
\begin{align*}
|R_2(t, x)| &\le Ce^{-ct/\varepsilon^2} \Bigg[ \Bigg| \int_{|\xi| \ge N} \dfrac{e^{i\xi(x\pm \lambda t/\varepsilon)}}{\xi} \, d\xi \Bigg| + \int_{|\xi|\ge N} \dfrac{1}{\xi^2} \, d\xi\Bigg] \le Ce^{-ct/\varepsilon^2}.
\end{align*}
\paragraph{Remainders in between}
\label{Remainders_between}
Until now, we studied the Green function of the linearized diffusive Jin-Xin system for $z \approx 0,$ which yields the parabolic kernel $\hat{K}$ in (\ref{K_dritto_JinXin_Fourier}), and for $z \approx \infty,$ so obtaining $\hat{\mathcal{K}}$ in (\ref{K_storto_JinXin}). In these two cases, we also provided estimates for the remainder terms:
\begin{itemize}
\item $R_1$ in (\ref{Remainder_1_JinXin}) for the parabolic kernel $K$ for $|\xi| \le \delta,$ with $\delta$ sufficiently small;
\item $R_2$ in (\ref{Remainder2_Jin_Xin}) for the transport kernel $\mathcal{K}$ for $|\xi| \ge N,$ with $N$ big enough.
\end{itemize}
It remains to estimate the last remainder terms, namely the parabolic kernel $K$ for $|\xi| \ge \delta, \; t \ge 1,$ the transport kernel $\mathcal{K}$ for $|\xi| \le N,$ and the kernel $E(z)$ for $\delta \le |\xi| \le N.$
\paragraph{Parabolic kernel $K(t, x)$ for $|\xi| \ge \delta,$ $\delta \ll 1$ }
Let us define
\begin{equation}
\label{Remainder_3_Jin_Xin}
R_3(t, x)=\dfrac{1}{2 \pi} \int_{|\xi| \ge \delta} \hat{K}(t, \xi) e^{i\xi x}\, d\xi.
\end{equation}
Thus, from (\ref{K_dritto_JinXin_Fourier}), for $t \ge 1,$
\begin{align*}
|R_3(t, x)|&\le C \Bigg| \int_{|\xi|\ge \delta} e^{i\xi(x-at)} e^{-(\lambda^2-a^2 \varepsilon^2)\xi^2 t} \tilde{P}(i \xi) \, d\xi \Bigg|\\
&\le \dfrac{C e^{-t/C}}{\sqrt{t}}\left(\begin{array}{cc}
O(1) \quad & O( \varepsilon) \\
O(\varepsilon) \quad & O( \varepsilon^2)
\end{array}\right).
\end{align*}
\paragraph{Transport kernel for $|\xi| \le N$}
Set
\begin{equation}
\label{Remainder_4_Jin_Xin}
R_4(t, x)=\dfrac{1}{2\pi} \int_{|\xi| \le N} \hat{\mathcal{K}}(t, \xi) e^{i\xi x} \, d\xi,
\end{equation}
and, from (\ref{K_storto_JinXin}),
\begin{align*}
|R_4(t, x)| & \le C e^{-(\lambda+|a|\varepsilon)t/(2 \lambda \varepsilon^2)} \sum \Bigg| \int_{-N}^N e^{i\xi (x\pm \lambda t/\varepsilon)} d\xi \Bigg| \\
&\le C e^{-ct/\varepsilon^2} \min \Bigg\{ N, \dfrac{1}{|x\pm \lambda t/\varepsilon|} \Bigg\}.
\end{align*}
\paragraph{Kernel $E(z)$ for $\delta \le |\xi| \le N$}
Finally, we set
$$R_5(t, x)=\dfrac{1}{2\pi}\int_{\delta \le |\xi| \le N} e^{E(i\xi)t} e^{i\xi x} \, d\xi.$$
Differently from \cite{Bianchini}, here we cannot simply apply the (SK) condition, as mentioned in the Introduction, and a further analysis is needed.
The eigenvalues of $E(i\xi)=-i\xi A-B$ are given by:
\begin{equation}
\label{Eig_SK}
\lambda_{1, 2}=\dfrac{1}{2 \varepsilon^2}\Bigg(-1 \pm \sqrt{1-4\varepsilon^2(ia\xi +\lambda^2\xi^2)}\Bigg)=\dfrac{- 2(a i \xi + \lambda^2 \xi^2)}{1 \pm \sqrt{1-4\varepsilon^2(a i \xi + \lambda^2 \xi^2)}}.
\end{equation}
By using a Taylor expansion for $\varepsilon \approx 0,$
$$\lambda_1 \approx -\dfrac{ai \xi + \lambda^2 \xi^2}{1-\varepsilon^2 (a i \xi + \lambda^2 \xi^2)}, \quad \lambda_{2}\approx-\dfrac{1}{\varepsilon^2}.$$
Explicitly, denoting by
$$\bigtriangleup=\sqrt{1-4\varepsilon^2 \xi (\lambda^2 \xi+ia)}, \;\;\;\; \Box_1=-1+\bigtriangleup+2ia\xi \varepsilon^2, \;\;\;\; \Box_2=1+\bigtriangleup-2ia\xi \varepsilon^2,$$
one can find that $e^{E(i\xi)t}$
\begin{equation*}
\begin{aligned}
&=\left(\begin{array}{cc}
\dfrac{e^{\lambda_2 t}}{2 \bigtriangleup}\Box_1+\dfrac{e^{\lambda_1 t}}{2\bigtriangleup}\Box_2 & -\dfrac{i\xi \varepsilon \sqrt{\lambda^2-a^2\varepsilon^2}(e^{\lambda_1 t}-e^{\lambda_2 t})}{\bigtriangleup}\\\\
-\dfrac{i\xi \varepsilon \sqrt{\lambda^2-a^2\varepsilon^2}(e^{\lambda_1 t}-e^{\lambda_2 t})}{\bigtriangleup} & \dfrac{e^{\lambda_1 t}}{2 \bigtriangleup}\Box_1+\dfrac{e^{\lambda_2 t}}{2\bigtriangleup}\Box_2
\end{array}\right),
\end{aligned}
\end{equation*}
$$\text{where} \quad \Box_1=-1+\bigtriangleup+2ia\xi \varepsilon^2=-2\varepsilon^2 \lambda^2 \xi^2+O(\varepsilon^4)=O(\varepsilon^2), \quad \Box_2=1+\bigtriangleup-2ia\xi \varepsilon^2=O(1),$$
and, in terms of the singular parameter $\varepsilon,$ this yields
\begin{equation*}
e^{E(i\xi)t}=\left(\begin{array}{cc}
O(1) (e^{\lambda_1 t}+e^{\lambda_2 t}) & O(\varepsilon) (e^{\lambda_1 t}-e^{\lambda_2 t})\\\\
O(\varepsilon) (e^{\lambda_1 t}-e^{\lambda_2 t}) & e^{\lambda_1t}O(\varepsilon^2)+O(1)e^{\lambda_2t}
\end{array}\right).
\end{equation*}
Putting all the calculations above together and integrating with respect to the Fourier variable for $\delta \le |\xi| \le N,$ we get
\begin{equation}
\label{Remainder_5_Jin_Xin}
|R_5(t, x)| \le C \left(\begin{array}{cc}
O(1)e^{-t/C} & O(\varepsilon) e^{-t/C}\\\\
O(\varepsilon) e^{-t/C} & O(\varepsilon^2)e^{-t/C}+O(1)e^{-t/\varepsilon^2}
\end{array}\right).
\end{equation}
From (\ref{Remainder_1_JinXin}), (\ref{Remainder2_Jin_Xin}), (\ref{Remainder_3_Jin_Xin}), (\ref{Remainder_4_Jin_Xin}), (\ref{Remainder_5_Jin_Xin}), we denote the remainder by
\begin{equation}
\label{Remainder_total}
R(t)=R_1(t)+R_2(t)+R_3(t)+R_4(t)+R_5(t).
\end{equation}
The estimates above provide the following lemma.
\begin{lemma}
\label{Decomposition_lemma}
Let $\Gamma(t, x)$ be the Green function of the linear system (\ref{CD_real_JinXin_no_tilde}). We have the following decomposition:
$$\Gamma(t, x)=K(t, x)+\mathcal{K}(t,x)+R(t, x),$$
with $K(t, x), \mathcal{K}(t, x), R(t, x)$ in (\ref{K_dritto_JinXin}), (\ref{K_storto_JinXin}) and (\ref{Remainder_total}) respectively. Moreover, for some constants $c, C,$
\begin{itemize}
\item $ |K(t, x)| \le e^{-(x-at)^2/(ct)} \left(\begin{array}{cc}
O(1)(1+t)^{-1} \quad & O(\varepsilon) (1+t)^{-3/2} \\
O(\varepsilon) (1+t)^{-3/2} \quad & O(\varepsilon^2) (1+t)^{-2}
\end{array}\right);$
\item $ |\mathcal{K}(t, x)| \le C e^{-ct/\varepsilon^2};$
\item
$
\begin{aligned}
|R(t)| \le e^{-(x-at)^2/(ct)} & \quad \left(\begin{array}{cc}
O(1)(1+t)^{-1} \quad & O(\varepsilon) (1+t)^{-3/2} \\
O(\varepsilon) (1+t)^{-3/2} \quad & O(\varepsilon^2) (1+t)^{-2}
\end{array}\right)\\
&+\left(\begin{array}{cc}
O(1) & O(\varepsilon) \\
O(\varepsilon) & O(\varepsilon^2)
\end{array}\right)e^{-ct}+ \mathrm{Id} \; e^{-ct/\varepsilon^2}.
\end{aligned}$
\end{itemize}
\end{lemma}
\paragraph{Decay estimates}
Let us consider the solution to the Cauchy problem associated with the linear system (\ref{CD_real_JinXin_no_tilde}) and initial data $\textbf{w}_0,$
$$\hat{\textbf{w}}(t, \xi)=\hat{\Gamma}(t, \xi) \hat{\textbf{w}}_0(\xi)=e^{E(i\xi)t} \hat{\textbf{w}}_0(\xi).$$ By using the decomposition provided by Lemma \ref{Decomposition_lemma}, we get the following theorem.
\begin{theorem}
\label{Decay_estimates_JinXin}
Consider the linear system in (\ref{CD_real_JinXin_no_tilde}), i.e.
$$\partial_t \textbf{w}+A\partial_x \textbf{w}=-B\textbf{w},$$
and let $Q_0=R_0L_0$ and $Q_{-}=R_{-}L_{-}$ be as before, i.e. the eigenprojectors onto the null space and onto the negative definite part of $-B$, respectively. Then, for any function $\textbf{w}_0 \in L^1 \cap L^2 (\mathbb{R}, \mathbb{R}),$ the solution $\textbf{w}(t)=\Gamma (t) \textbf{w}_0$ to the related Cauchy problem can be decomposed as
$$\textbf{w}(t)=\Gamma(t) \textbf{w}_0=K(t)\textbf{w}_0+\mathcal{K}(t)\textbf{w}_0+R(t) \textbf{w}_0.$$
Moreover, for any index $\beta$, the following estimates hold:\begin{equation}
\label{K_dritto_beta_estimates_L0_JinXin}
\begin{aligned}
\|L_0 D^\beta K(t) \textbf{w}_0\|_0 &\le C \min\{1, t^{-1/4-|\beta|/2}\} \|L_0 \textbf{w}_0\|_{L^1}\\
&+C \varepsilon \min\{ 1, t^{-3/4-|\beta|/2}\} \| L_{-} \textbf{w}_0\|_{L^1},
\end{aligned}
\end{equation}
\begin{equation}
\label{K_dritto_beta_estimates_Lmeno_JinXin}
\begin{aligned}
\|L_{-} D^\beta K(t) \textbf{w}_0\|_0 &\le C \varepsilon \min\{1, t^{-3/4-|\beta|/2}\} \|L_{0} \textbf{w}_0\|_{L^1}\\
&+C \varepsilon^2 \min\{ 1, t^{-5/4-|\beta|/2}\} \|L_{-} \textbf{w}_0\|_{L^1},
\end{aligned}
\end{equation}
\begin{equation}
\label{K_storto_beta_estimates_JinXin}
\|D^\beta \mathcal{K}(t) \textbf{w}_0\|_0 \le C e^{-ct/\varepsilon^2}\|D^\beta \textbf{w}_0\|_0,
\end{equation}
\begin{equation}
\label{remainder_total_beta_estimate_L0}
\begin{aligned}
\|L_0 D^\beta R(t) \textbf{w}_0\|_0 & \le C \min\{1, t^{-1/4-|\beta|/2}\} \|L_0 \textbf{w}_0\|_{L^1}\\
&+C \varepsilon \min\{ 1, t^{-3/4-|\beta|/2}\} \| L_{-} \textbf{w}_0\|_{L^1}\\
&+C e^{-ct} \|L_0 \textbf{w}_0\|_{L^1}+C\varepsilon e^{-ct}\|L_{-}\textbf{w}_0\|_{L^1}+Ce^{-ct/\varepsilon^2} \|\textbf{w}_0\|_{L^1},
\end{aligned}
\end{equation}
\begin{equation}
\label{remainder_total_beta_estimate_Lmeno}
\begin{aligned}
\|L_{-} D^\beta R(t) \textbf{w}_0\|_0 &\le C \varepsilon \min\{1, t^{-3/4-|\beta|/2}\} \|L_{0} \textbf{w}_0\|_{L^1}\\
&+C \varepsilon^2 \min\{ 1, t^{-5/4-|\beta|/2}\} \|L_{-} \textbf{w}_0\|_{L^1}\\
&+ C \varepsilon e^{-ct} \|L_0 \textbf{w}_0\|_{L^1}+C\varepsilon^2 e^{-ct}\|L_{-}\textbf{w}_0\|_{L^1}+Ce^{-ct/\varepsilon^2} \|\textbf{w}_0\|_{L^1}.
\end{aligned}
\end{equation}
\end{theorem}
\begin{proof}
From Lemma \ref{Decomposition_lemma}, for some constants $c, C >0$ and for any index $\beta,$ it holds that
\begin{equation}
\label{parabolic_kernel_L2_estimate_JinXin}
\|D^\beta \mathcal{K}(t) \textbf{w}_0\|_0 \le C e^{-ct/\varepsilon^2} \|D^\beta \textbf{w}_0\|_0.
\end{equation}
On the other hand, the parabolic kernel (\ref{K_dritto_JinXin_Fourier}) can be estimated as
\begin{align*}
& |L_0 \widehat{K(t)\textbf{w}_0}|\le C e^{-c|\xi|^2 t} (|L_0 \hat{\textbf{w}}_0|+\varepsilon|\xi| |L_{-}\hat{\textbf{w}}_0|), \\
& |L_{-} \widehat{K(t)\textbf{w}_0}|\le C e^{-c|\xi|^2 t} (\varepsilon|\xi||L_0 \hat{\textbf{w}}_0|+\varepsilon^2|\xi|^2 |L_{-}\hat{\textbf{w}}_0|).
\end{align*}
This yields
\begin{align*}
\|L_0 K(t) \textbf{w}_0 \|_0^2 &\le C \int_0^\infty \int_{S^0} e^{-2c|\xi|^2t} (|L_0\hat{\textbf{w}}_0(\xi)|^2 + \varepsilon^2|\xi|^2 |L_{-}\hat{\textbf{w}}_0(\xi)|^2) \, d\zeta d\xi \\
& \le C \min\{1, t^{-1/2}\} \|L_0 \hat{\textbf{w}}_0\|_\infty^2 + C \varepsilon^2 \min \{1, t^{-3/2}\} \|L_{-}\hat{\textbf{w}}_0\|_\infty^2 \\
& \le C \min\{1, t^{-1/2}\} \|L_0 {\textbf{w}}_0\|_{L^1}^2 + C \varepsilon^2 \min \{1, t^{-3/2}\} \|L_{-}{\textbf{w}}_0\|_{L^1}^2,
\end{align*}
and
\begin{align*}
\|L_{-}K(t)\textbf{w}_0\|_0^2 & \le C \int_0^\infty \int_{S^0} e^{-2c|\xi|^2 t} (\varepsilon^2 |\xi|^2 |L_0 \hat{\textbf{w}}_0(\xi)|^2+\varepsilon^4 |\xi|^4 |L_{-}\hat{\textbf{w}}_0(\xi)|^2) \, d\zeta d \xi \\
& \le C \varepsilon^2 \min\{1, t^{-3/2}\} \|L_0 \textbf{w}_0\|_{L^1}^2 + C \varepsilon^4 \min\{1, t^{-5/2}\} \|L_{-}\textbf{w}_0\|_{L^1}^2.
\end{align*}
Moreover, for every $\beta$, multiplying the integrand by $\xi^{2\beta}$ yields
\begin{equation*}
\begin{aligned}
\|L_0 D^\beta K(t) \textbf{w}_0\|_0 &\le C \min\{1, t^{-1/4-|\beta|/2}\} \|L_0 \textbf{w}_0\|_{L^1}\\
&+C \varepsilon \min\{ 1, t^{-3/4-|\beta|/2}\} \| L_{-} \textbf{w}_0\|_{L^1},
\end{aligned}
\end{equation*}
\begin{equation*}
\begin{aligned}
\|L_{-} D^\beta K(t) \textbf{w}_0\|_0 & \le C \varepsilon \min\{1, t^{-3/4-|\beta|/2}\} \|L_{0} \textbf{w}_0\|_{L^1}\\
&+C \varepsilon^2 \min\{ 1, t^{-5/4-|\beta|/2}\} \|L_{-} \textbf{w}_0\|_{L^1}.
\end{aligned}
\end{equation*}
The estimates for $R(t)$ are obtained in a similar way.
\end{proof}
\section{Decay estimates and convergence} \label{Section4}
Consider the local solution $\textbf{w}$ to the Cauchy problem associated with (\ref{CD_real_JinXin}) (where we drop the \emph{tilde}), with initial data $\textbf{w}_0$. The solution to the nonlinear problem (\ref{CD_real_JinXin}) can be expressed through the Duhamel formula \begin{equation}
\label{Duhamel}
\begin{aligned}
\textbf{w}(t)&=\Gamma(t)\textbf{w}_0+\int_0^t \Gamma (t-s) (N(w_1(s))-DN(0)w_1(s)) \, ds \\
& = \Gamma(t)\textbf{w}_0 + \int_0^t \Gamma(t-s) \left(\begin{array}{c}
0 \\
\dfrac{h(w_1(s))}{\varepsilon \sqrt{\lambda^2 - a^2 \varepsilon^2}}
\end{array}\right) \, ds \quad t \in [0, T^*].
\end{aligned}
\end{equation} From (\ref{Pn_JinXin}) and the formulas below, we recall that
$w_1=L_0\textbf{w}=(1-L_{-})\textbf{w}$ is the conservative variable, while $w_2=L_{-}\textbf{w}$ is the dissipative one. We recall the Green function decomposition given by Lemma \ref{Decomposition_lemma}. For the $\beta$-derivative,
\begin{align*}
D^\beta \textbf{w}(t)&=D^\beta K(t) \textbf{w}(0)+\mathcal{K}(t)D^\beta \textbf{w}(0)+R(t) D^\beta \textbf{w}(0) \\
&+\int_0^{t/2} D^\beta K(t-s) R_{-}L_{-} \left(\begin{array}{c}
0 \\
\dfrac{h(w_1(s))}{\varepsilon \sqrt{\lambda^2 - a^2 \varepsilon^2}}
\end{array}\right) \, ds \\
&+\int_{t/2}^t K(t-s) R_{-} D^\beta L_{-} \left(\begin{array}{c}
0 \\
\dfrac{h(w_1(s))}{\varepsilon \sqrt{\lambda^2 - a^2 \varepsilon^2}}
\end{array}\right) \, ds \\
&+\int_0^t \mathcal{K}(t-s)D^\beta \left(\begin{array}{c}
0 \\
\dfrac{h(w_1(s))}{\varepsilon \sqrt{\lambda^2 - a^2 \varepsilon^2}}
\end{array}\right) \, ds \\
&+\int_0^{t/2} D^\beta R(t-s)R_{-}L_{-} \left(\begin{array}{c}
0 \\
\dfrac{h(w_1(s))}{\varepsilon \sqrt{\lambda^2 - a^2 \varepsilon^2}}
\end{array}\right) \, ds \\
&+ \int_{t/2}^t R(t-s)R_{-} D^\beta L_{-} \left(\begin{array}{c}
0 \\
\dfrac{h(w_1(s))}{\varepsilon \sqrt{\lambda^2 - a^2 \varepsilon^2}}
\end{array}\right) \, ds\\\\
&= D^\beta K(t) \textbf{w}(0)+\mathcal{K}(t)D^\beta \textbf{w}(0)+R(t)D^\beta \textbf{w}(0) \\
&+\int_0^{t/2} \left(\begin{array}{c}
D^\beta K_{12}(t-s) \\
D^\beta K_{22}(t-s)
\end{array}\right)\dfrac{h(w_1(s))}{\varepsilon \sqrt{\lambda^2 - a^2 \varepsilon^2}} \, ds \\
&+\int_{t/2}^t \left(\begin{array}{c}
K_{12}(t-s)\\
K_{22}(t-s)
\end{array}\right) D^\beta \dfrac{h(w_1(s))}{\varepsilon \sqrt{\lambda^2 - a^2 \varepsilon^2}} \, ds \\
&+\int_0^t \mathcal{K}(t-s)D^\beta \left(\begin{array}{c}
0 \\
\dfrac{h(w_1(s))}{\varepsilon \sqrt{\lambda^2 - a^2 \varepsilon^2}}
\end{array}\right) \, ds
\end{align*}
\begin{align*}
\quad \;\; &+\int_0^{t/2} \left(\begin{array}{c}
D^\beta R_{12}(t-s) \\
D^\beta R_{22}(t-s)
\end{array}\right)\dfrac{h(w_1(s))}{\varepsilon \sqrt{\lambda^2 - a^2 \varepsilon^2}} \, ds \\\\
&+\int_{t/2}^t \left(\begin{array}{c}
R_{12}(t-s)\\
R_{22}(t-s)
\end{array}\right) D^\beta \dfrac{h(w_1(s))}{\varepsilon \sqrt{\lambda^2 - a^2 \varepsilon^2}} \, ds.
\end{align*}
Notice that, from (\ref{K_dritto_JinXin}), $K_{12}, K_{22}$ are of order $\varepsilon$ and $\varepsilon^2$ respectively, and the same holds for
\begin{align*}
&R_{12}=O(\varepsilon)(1+t)^{-3/2} e^{-(x-at)^2/ct}+O(\varepsilon)e^{-ct}+O(1)e^{-ct/\varepsilon^2},\\
&R_{22}=O(\varepsilon^2)(1+t)^{-2} e^{-(x-at)^2/ct}+O(\varepsilon^2)e^{-ct}+O(1)e^{-ct/\varepsilon^2}.
\end{align*}
From the assumptions above, $f(u)=f(w_1)=aw_1+h(w_1),$ where $h(w_1)=w_1^2 \tilde{h}(w_1)$ for some function $\tilde{h}(w_1).$
Thus, by using the estimates of Theorem \ref{Decay_estimates_JinXin}, and recalling that $\|\cdot\|_{m}=\|\cdot\|_{H^{m}(\mathbb{R})},$ for $m=0,1, 2,$ ($H^0=L^2$), we have
\begin{align*}
\|\textbf{w}(t)\|_{m} & \le C \min \{ 1, t^{-1/4}\} \|\textbf{w}_0\|_{L^1} + Ce^{-ct/\varepsilon^2}\|\textbf{w}_0\|_{m} \\
& + C \int_0^t \min\{1, (t-s)^{-3/4}\} (\|w_1^2 \tilde{h}(w_1)\|_{L^1}+ \|w_1^2 \tilde{h}(w_1)\|_m) \, ds \\
& + C \int_0^t e^{-c(t-s)} \|w_1^2 \tilde{h}(w_1)\|_m \, ds\\
& + C \int_0^t \dfrac{1}{\varepsilon} e^{-c(t-s)/\varepsilon^2} \|w_1^2 \tilde{h}(w_1)\|_m \, ds.
\end{align*}
For $m$ large enough,
\begin{align*}
\|\textbf{w}(t)\|_{m} & \le C \min \{ 1, t^{-1/4}\} \|\textbf{w}_0\|_{L^1} + Ce^{-ct/\varepsilon^2}\|\textbf{w}_0\|_{m} \\
& + \int_0^t \min\{1, (t-s)^{-3/4}\} C(|w_1|_\infty) \|w_1\|_m^2 \, ds \\
& + \int_0^t e^{-c(t-s)} C(|w_1|_\infty) \|w_1\|_m^2 \, ds \\
& + \int_0^t \dfrac{1}{\varepsilon} e^{-c(t-s)/\varepsilon^2} C(|w_1|_\infty) \|w_1\|_m^2 \, ds.
\end{align*}
From (\ref{change_CD_JinXin}), we recall that
$\textbf{w}=\left(\begin{array}{c}
w_1\\\\
w_2
\end{array}\right)
=\left(\begin{array}{c}
u\\\\
\dfrac{\varepsilon(v- au)}{\sqrt{\lambda^2-a^2\varepsilon^2}}
\end{array}\right),$
and so, for $m=2,$
\begin{align*}
\|u(t)\|_2 + c \varepsilon \|v(t)-au(t)\|_2 & \le C \min \{ 1, t^{-1/4}\} (\|{u}_0\|_{L^1}+c\varepsilon \|v_0-au_0\|_{L^1}) \\
&+ e^{-ct/\varepsilon^2}(\|{u}_0\|_{2}+c\varepsilon \|v_0-au_0\|_{2}) \\
& + \int_0^t \min\{1, (t-s)^{-3/4}\} C(|u|_\infty)\|u\|_2^2 \, ds \\
& + \int_0^t e^{-c(t-s)} C(|u|_\infty) \|u\|_2^2 \, ds \\
& + \int_0^t \dfrac{1}{\varepsilon}e^{-c(t-s)/\varepsilon^2} C(|u|_\infty) \|u\|_2^2 \, ds.
\end{align*}
Let us set
\begin{equation}
E_m=\max\{\|u_0\|_{L^1}+\varepsilon \|v_0-au_0\|_{L^1}, \|u_0\|_m+\varepsilon \|v_0-au_0\|_m\},
\end{equation}
where, according to (\ref{approx_initial_data_JinXin}),
$v_0=f(u_0)-\lambda^2 \partial_x u_0,$
and
\begin{equation}
\label{M0_JinXin}
M_0(t)=\sup_{0\le \tau \le t} \{ \max\{1, \tau^{1/4}\} (\|u(\tau)\|_2+\varepsilon \|v(\tau)-au(\tau)\|_2)\}.
\end{equation}
The first term on the right-hand side of the estimate above gives
\begin{align*}
C \min \{ 1, t^{-1/4}\} (\|{u}_0\|_{L^1}+c\varepsilon \|v_0-au_0\|_{L^1})&+ Ce^{-ct/\varepsilon^2}(\|{u}_0\|_{2}+c\varepsilon \|v_0-au_0\|_{2})\\
& \le C \min \{1, t^{-1/4}\} E_2.
\end{align*}
Besides,
$$C(|u|_\infty) \|u\|_2^2 \le C(|u|_\infty) \min\{1, s^{-1/2}\} (M_0(s))^2.$$
Thus,
\begin{align*}
\|u(t)\|_2+\varepsilon \|v(t)-au(t)&\|_2 \le C \min \{1, t^{-1/4}\} E_2 \\
&+ (M_0(t))^2 \int_0^t e^{-c(t-s)} c(|u|_\infty) \min \{1, s^{-1/2}\} \, ds \\
&+ (M_0(t))^2 \int_0^t \dfrac{1}{\varepsilon} e^{-c(t-s)/\varepsilon^2} c(|u|_\infty) \min \{1, s^{-1/2}\} \, ds \\
& + (M_0(t))^2 \int_0^t c(|u|_\infty) \min \{1, (t-s)^{-3/4}\} \min \{1, s^{-1/2}\} \, ds.
\end{align*}
From the Sobolev embedding theorem,
$$c(|u(s)|_\infty) \le c(\|u(s)\|_2) \le C \min \{1, s^{-1/4}\} M_0(s) \le C M_0(s).$$
This way,
\begin{align*}
\|u(t)\|_2+\varepsilon \|v(t)-au(t)\|_2 & \le C \min \{1, t^{-1/4}\} E_2 \\
&+C(M_0(t))^3 \int_0^t e^{-c(t-s)} \min \{1, s^{-1/2}\} \, ds \\
&+C(M_0(t))^3 \int_0^t \dfrac{1}{\varepsilon} e^{-c(t-s)/\varepsilon^2} \min \{1, s^{-1/2}\} \, ds \\
& + C(M_0(t))^3 \int_0^t \min \{1, (t-s)^{-3/4}\} \min \{1, s^{-1/2}\} \, ds.
\end{align*}
Notice that
\begin{align*}
\dfrac{1}{\varepsilon} \int_0^t e^{-c(t-s)/\varepsilon^2} \min\{1, s^{-1/2}\} \, ds &= \varepsilon e^{-ct/\varepsilon^2}\int_0^{t/\varepsilon^2} e^{c \tau} \min \{1, (\varepsilon\sqrt{\tau})^{-1}\} \, d\tau\\
& \le \varepsilon e^{-ct/\varepsilon^2}\int_0^{t/\varepsilon^2} e^{c \tau} \, d\tau\\
& = \dfrac{\varepsilon}{c} [1-e^{-ct/\varepsilon^2}] \\
& \le C\varepsilon.
\end{align*}
By using this inequality in the estimate above,
\begin{align*}
\|u(t)\|_2+\varepsilon \|v(t)-au(t)\|_2 & \le C \min \{1, t^{-1/4}\} E_2 \\
&+ C(M_0(t))^3 \int_0^t e^{-c(t-s)} \min \{1, s^{-1/2}\} \, ds \\
& + \varepsilon C(M_0(t))^3 \\
& + C(M_0(t))^3 \int_0^t \min \{1, (t-s)^{-3/4}\} \min \{1, s^{-1/2}\} \, ds.
\end{align*}
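The convolution integral appearing above is controlled by a standard bound, which we sketch here for completeness (the constant $C$ is not the same at each occurrence):
\begin{equation*}
\int_0^t \min \{1, (t-s)^{-3/4}\} \min \{1, s^{-1/2}\} \, ds \le C \min\{1, t^{-1/4}\}.
\end{equation*}
Indeed, for $t\le 1$ the integral is bounded by $t\le 1$; for $t\ge 1$, on $[0,t/2]$ one has $(t-s)^{-3/4}\le C t^{-3/4}$ and $\int_0^{t/2}\min\{1,s^{-1/2}\}\,ds\le C t^{1/2}$, while on $[t/2,t]$ one has $s^{-1/2}\le C t^{-1/2}$ and $\int_{t/2}^{t}\min\{1,(t-s)^{-3/4}\}\,ds\le C t^{1/4}$; both contributions are of order $t^{-1/4}$.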
By applying standard integration lemmas, such as Lemma 5.2 in \cite{Bianchini},
we obtain the following inequality:
\begin{align*}
M_0(t) \le C(E_2 + (M_0(t))^3).
\end{align*}
\begin{remark}
Notice that this last estimate and the calculations above are different from \cite{Bianchini}, since here we are not assuming the global well-posedness of our model; indeed, we prove it in what follows.
\end{remark}
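For completeness, we sketch the continuity (bootstrap) argument used in the next step; here $C$ is the constant of the last inequality, and we assume, as we may, $C \ge 1$. Since $M_0$ is continuous with $M_0(0)\le E_2 \le CE_2$, if $8C^3E_2^2<1$ then the bound $M_0(t)\le 2CE_2$ propagates: whenever it holds,
\begin{equation*}
M_0(t) \le C E_2 + C \,(2CE_2)^3 = C E_2 \,(1+8C^3E_2^2) < 2CE_2,
\end{equation*}
so the set $\{t \in [0,T^*] : M_0(t)\le 2CE_2\}$ is nonempty, open and closed, hence it is the whole interval.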
Then, if $E_2$ is small enough,
$$M_0(t) \le C E_2, $$ i.e.
\begin{equation}
\label{s_estimate_JinXin}
\|u(t)\|_2+\varepsilon \|v(t)-au(t)\|_2 \le C \min\{1, t^{-1/4}\} E_2.
\end{equation}
By arguing as before and following \cite{Bianchini}, we obtain the proposition below.
\begin{proposition}
The following estimates hold, with $C$ a constant independent of $\varepsilon$:
\begin{equation}
\label{decay_estimate_beta_JinXin}
\|D^\beta \textbf{w}(t)\|_0 \le C \min \{1, t^{-1/4-|\beta|/2}\} E_{|\beta|+3/2},
\end{equation}
\begin{equation}
\label{decay_estimate_beta_dissipative_JinXin}
\|D^\beta w_2(t)\|_0 \le C \min \{1, t^{-3/4-|\beta|/2}\} E_{|\beta|+3/2},
\end{equation}
\begin{equation}
\label{decay_estimate_time_derivative_beta_JinXin}
\|D^\beta \partial_t \textbf{w} (t)\|_{0} \le C \min \{1, t^{-3/4-|\beta|/2}\} E_{|\beta|+5/2},
\end{equation}
\begin{equation}
\label{decay_estimate_time_derivative_dissipative_beta_JinXin}
\|D^\beta \partial_t w_2 (t)\|_{0} \le C \min \{1, t^{-5/4-|\beta|/2}\} E_{|\beta|+7/2}.
\end{equation}
\end{proposition}
This last result and estimate (\ref{s_estimate_JinXin}) prove Theorem \ref{uniform_global_existence}.
\begin{remark}
Notice that the estimates for the partial derivative in time of the solution (\ref{decay_estimate_time_derivative_beta_JinXin}), (\ref{decay_estimate_time_derivative_dissipative_beta_JinXin}) are uniform in $\varepsilon$ thanks to the particular form of the initial data (\ref{approx_initial_data_JinXin_f}). In fact, these estimates can be obtained by applying the Duhamel formula again as before and, similarly to (\ref{s_estimate_JinXin}), we get a bound for $\|D^\beta \partial_t \textbf{w} (t)\|_{0}$ which depends on $\|D^\beta \partial_t \textbf{w}|_{t=0}\|$. This norm is not singular in $\varepsilon$, again because of the form of the initial data, as shown below.
The initial data satisfy (\ref{approx_initial_data_JinXin}), i.e.
$$v_0=f(u_0)-\lambda^2 \partial_x u_0=a u_0 + h(u_0) - \lambda^2\partial_x u_0.$$
In terms of the (C-D)-variable $\textbf{w},$
$$\begin{cases}
u=w_1, \\
v=aw_1+\dfrac{\sqrt{\lambda^2-a^2\varepsilon^2}}{\varepsilon}w_2,
\end{cases}$$
this gives the following relation:
\begin{equation}
\label{relation_initial_data_JinXin}
\dfrac{\sqrt{\lambda^2-a^2\varepsilon^2}}{\varepsilon}w_2^0=h(w_1^0)-\lambda^2 \partial_x w_1^0.
\end{equation}
Using (\ref{relation_initial_data_JinXin}) in system (\ref{CD_real_JinXin}) yields
$$\begin{cases}
\partial_t w_1|_{t=0}=-a\partial_x w_1^0 - \dfrac{\sqrt{\lambda^2-a^2\varepsilon^2}}{\varepsilon}\partial_x w_2^0, \\
\partial_t w_2|_{t=0}=- \dfrac{\sqrt{\lambda^2-a^2\varepsilon^2}}{\varepsilon}\partial_x w_1^0 + a\partial_x w_2^0+\dfrac{\lambda^2}{\varepsilon \sqrt{\lambda^2-a^2\varepsilon^2}}\partial_x w_1^0 \\
=\dfrac{a^2 \varepsilon}{\sqrt{\lambda^2-a^2\varepsilon^2}}\partial_x w_1^0+a\partial_x w_2^0.
\end{cases}$$
In terms of the original variable,
$$\begin{cases}
\partial_t w_1|_{t=0}=-\partial_x f(\bar{u}_0)+\lambda^2 \partial_{xx}\bar{u}_0, \\
\partial_t w_2|_{t=0}=\dfrac{a\varepsilon}{\sqrt{\lambda^2-a^2\varepsilon^2}}(\partial_x f (\bar{u}_0)-\lambda^2 \partial_{xx}\bar{u}_0).
\end{cases}$$
\end{remark}
\paragraph{Convergence in the diffusion limit and asymptotic behavior}
We perform the one-dimensional Chapman-Enskog expansion. Recalling that
$$w_1=u, \qquad w_2=u_d,$$
where $u$ is the conservative variable and $u_d$ is the dissipative one, system (\ref{CD_real_JinXin}) is
$$\partial_t \left(\begin{array}{c}
u \\
u_d
\end{array}\right) + A \partial_x \left(\begin{array}{c}
u \\
u_d
\end{array}\right) = \left(\begin{array}{c}
0 \\
q(u)
\end{array}\right),$$
with $A$ in (\ref{matrices_CD_real_JinXin}) and $q(u)=-\dfrac{w_2}{\varepsilon^2}+\dfrac{h(w_1)}{\varepsilon \sqrt{\lambda^2-a^2\varepsilon^2}}.$
We consider the following nonlinear parabolic equation
\begin{equation*}
\partial_t u + a \partial_x u + \partial_x h(u) -(\lambda^2-a^2 \varepsilon^2) \partial_{xx} u=\partial_x S,
\end{equation*}
where
\begin{equation}
\label{S_estimate}
S=\varepsilon \sqrt{\lambda^2-a^2\varepsilon^2}\{\partial_t u_d - a \partial_x u_d \}.
\end{equation}
The homogeneous equation is
\begin{equation}
\label{CE_JinXin}
\partial_t w_p + a \partial_x w_p + \partial_x h(w_p) -(\lambda^2-a^2 \varepsilon^2) \partial_{xx} w_p=0,
\end{equation}
and the associated Green function is
$$\Gamma_p(t)=K_{11}(t)+\tilde{\mathcal{K}}(t)+\tilde{R}(t),$$
with $K_{11}$ in (\ref{K_dritto_JinXin}). We take the difference between the conservative variable $u=w_1$ and $w_p$,
\begin{equation}
\label{difference}
\begin{aligned}
D^\beta (u(t)-w_p(t)) & = \int_0^{t/2} D^\beta D (K_{11}(t-s)+\tilde{R}(t-s)) (h(w_p(s))-h(u(s))) \, ds \\
& + \int_0^{t/2} D^\beta D(K_{11}(t-s) + \tilde{R}(t-s)) S(s) \, ds \\
& + \int_{t/2}^t D(K_{11}(t-s)+\tilde{R}(t-s))D^\beta (h(w_p(s))-h(u(s))+S(s)) \, ds \\
& + \int_0^t \tilde{\mathcal{K}}(t-s) D^\beta D(h(w_p(s))-h(u(s))+S(s)) \, ds.
\end{aligned}
\end{equation}
By using (\ref{decay_estimate_beta_JinXin}), (\ref{decay_estimate_beta_dissipative_JinXin}), (\ref{decay_estimate_time_derivative_dissipative_beta_JinXin}),
we have
$$\|D^\beta S \|_0 \le C \varepsilon \min \{1, t^{-5/4-|\beta|/2}\} E_{|\beta|+1}.$$
Let us define, for $\mu \in [0, 1/2),$
\begin{equation}
\label{m_0_JinXin}
m_0(t)=\sup_{ \tau \in [0, t] } \{ \max \{1, \tau^{1/4+\mu} \} \|u(\tau)-w_p(\tau)\|_0 \}.
\end{equation}
For $\beta=0,$
\begin{align*}
\|u(t)-w_p(t)\|_0 & \le C E_1 m_0(t) \int_0^t \min \{1, (t-s)^{-3/4}\} \min \{1, s^{-1/2-\mu}\} \, ds \\
& + C \varepsilon E_3 \int_0^t \min \{1, (t-s)^{-3/4}\} \min \{1, s^{-1} \} \, ds \\
& + {C(E_1 m_0(t) + {\varepsilon}E_4)}\int_0^t e^{-c(t-s)} \min \{1, s^{-5/4}\} \, ds \\
& \le C \min \{1, t^{-1/4-\mu} \} ( E_1 m_0(t) + \varepsilon E_1 + \varepsilon E_4+(1/2-\mu)^{-1} E_1 m_0(t)),
\end{align*}
i.e., if $E_1$ is small enough,
\begin{equation}
\label{m_0_estimate}
m_0(t) \le C \varepsilon E_4.
\end{equation}
Similarly, one can prove the following by induction: defining
\begin{equation}
\label{m_beta2_JinXin}
m_\beta (t)=\sup_{ \tau \in [0, t] } \{ \max \{1, \tau^{1/4+\mu+\beta/2} \} \|D^\beta (u(\tau)-w_p(\tau))\|_0 \},
\end{equation}
and assuming that $m_\gamma (t) \le C(\mu) \varepsilon E_{\gamma+4}$ for every $\gamma < \beta,$
then
$$\|D^\beta (h(u(s))-h(w_p(s)))\|_{0} \le C \min \{1, t^{-1/2-\mu-\beta/2}\} (C(\mu)E_{\beta+1}E_{\beta+3}+E_1m_\beta(t)).$$
Using this inequality, (\ref{S_estimate}) and (\ref{m_0_estimate}) in (\ref{difference}), finally we get
$$m_\beta (t) \le C(\mu) \varepsilon E_{\beta+4},$$
which completes the proof of Theorem \ref{asymptotic_behavior_JinXin}.
\section*{Acknowledgments}
The author is grateful to Roberto Natalini for helpful advice and encouragement, and to Enrique Zuazua for useful discussions.
\end{document} |
\begin{document}
\title{Geometric quantum phase for displaced states for a particle with an induced electric dipole moment}
In 1984, Berry \cite{berry} investigated the evolution of a quantum state of a system during an adiabatic cyclic evolution and showed that this quantum state acquires a phase shift when the system returns to its initial state. Now, we consider that the Hamiltonian operator depends on a set of parameters. In short, if these parameters vary slowly during the evolution of a quantum system and then return to their initial values, the quantum system returns to its initial state up to a phase. This phase shift depends only on the geometrical nature of the path of evolution of the quantum system; therefore, it became known as the geometric quantum phase or Berry phase. Moreover, Berry \cite{berry} showed that the quantum phase giving rise to the Aharonov-Bohm effect \cite{ab} is a special case of the geometric quantum phase. Indeed, the Aharonov-Bohm phase \cite{ab} is a topological quantum phase because it is independent of the path taken in the parameter space. On the other hand, the study of geometric quantum phases was extended to non-adiabatic cyclic evolutions by Aharonov and Anandan \cite{ahan} in 1987. Wilczek and Zee \cite{wilk}, in turn, generalized the Berry phase to non-Abelian cases. To illustrate this geometric phase, we consider a subspace spanned by the eigenvectors of a family of Hamiltonians $\mathcal{F}=\left\{H\left(\lambda\right)=U\left(\lambda\right)H_{0}U^{\dag}\left(\lambda\right);\lambda\in
\mathcal{M}\right\}$, where $U\left(\lambda\right)$ is the unitary operator and $\lambda$ corresponds to the control parameter \cite{zr1,pzr1,pc1,pc3}. Now we obtain the geometric phase performing the following procedure: we suggest that the control parameters are changed adiabatically along a loop in the control manifold $\mathcal{M}$. The action of the unitary operator $U\left(\lambda\right)$ on an initial state $\left|\psi_{0}\right\rangle$ brings it to a final state $\left|\psi\right\rangle=U\left(\lambda\right)\left|\psi_{0}\right\rangle$ . The general expression corresponding to the action of this unitary operator is $ \left|\psi\right\rangle=U\left(\lambda\right)\left|\psi_{0}\right\rangle=e^{-i\int_{0}^{t}E\left(t'\right)\,dt'}\,\Gamma_{A}\left(\lambda\right)\left|\psi_{0}\right\rangle,
$ where the first factor corresponds to the dynamical phase, and the second factor is the holonomy:
$
\Gamma_{A}\left(\lambda\right):=\hat{\mathcal{P}}\,\exp i \int_{C} \mathcal{A},$
where $\mathcal{A}=A\left(\lambda\right)\,d\lambda$ is a connection $1$-form called the Mead-Berry connection $1$-form \cite{tg1}. Notice that the expression $\Gamma_{A}$ is a geometric phase well-known as Berry's phase. This phase generally depends on the path $C$, i.e. on the geometrical nature of the pathway along which the system evolves. The quantity $A\left(\lambda\right)$ corresponds to the Mead-Berry vector potential, whose components are defined as $A^{\alpha\beta}=i \left\langle \psi^{\alpha}\left(\lambda\right)\right|\partial/\partial\lambda\left|\psi^{\beta}\left(\lambda\right)\right\rangle$. Besides, observations of adiabatic and non-adiabatic Berry quantum phases have been reported in several areas of physics, such as photon systems \cite{16,16a}, neutrons \cite{17} and nuclear spins \cite{18}. Several authors have investigated, from a theoretical viewpoint, the manifestation of Berry phases in various areas of physics \cite{19,20,bf21,epjcj}. Recently, Yang and Chen \cite{displaced1} have obtained the Abelian and non-Abelian Berry phases associated with displaced Fock states \cite{displaced} for Landau levels.
Recently, essential efforts have been devoted to investigating the behaviour of neutral particle systems in a situation analogous to the Landau quantization \cite{landau}. The first system was proposed by Ericsson and Sj\"oqvist \cite{er} by dealing with a neutral particle with a permanent magnetic dipole moment exposed to an electric field. The interaction between the magnetic dipole moment of the neutral particle and the electric field is described by the Aharonov-Casher coupling \cite{ac} and became known as the Landau-Aharonov-Casher quantization. By following this line of research, Ribeiro {\it et al.} \cite{lin} considered a neutral particle with a permanent electric dipole moment and, being based on the He-McKellar-Wilkens effect \cite{HMW,HMW2}, proposed the Landau-He-McKellar-Wilkens quantization. Moreover, the Landau quantization for a neutral particle possessing an induced electric dipole moment was investigated by Furtado {\it et al.} \cite{lwhw}. It is worth mentioning that the Landau quantization for a neutral particle possessing an induced electric dipole moment was based on the system proposed by Wei {\it et al.} \cite{wei,c1lwhw} in a study of geometric quantum phases. Recently, an experimental test of the He-McKellar-Wilkens effect \cite{HMW,HMW2} has been reported by Lepoutre {\it et al.} \cite{lepoutre,lepoutre1,lepoutre2,lepoutre3}, where the field configuration proposed by Wei {\it et al.} \cite{wei} was used in the experiment.
In this paper, we consider a neutral particle with an induced electric dipole moment interacting with electric and magnetic fields. We build, for the first time in the literature, the displaced Fock states from the Landau states obtained in Ref. \cite{lwhw} through the approach introduced by Feldman and Kahn \cite{feld}. Then, we investigate the appearance of Abelian and non-Abelian geometric quantum phases associated with these displaced Fock states during some adiabatic cyclic evolutions. It is worth mentioning that the field configuration giving rise to the Landau quantization of a neutral particle with an induced electric dipole moment provides a practical perspective in which geometric quantum phases can be used to investigate a way of performing holonomic quantum computation \cite{zr1} in atomic systems. We emphasize that the displaced states and the Abelian and non-Abelian phases are new results of the present contribution.
This paper is organized as follows. In section II, we make a brief review of the Landau quantization for a neutral particle possessing an induced electric dipole moment. In section III, we build the displaced Fock states based on the Landau system for a neutral particle with an induced electric dipole moment. In section IV, we study the appearance of Abelian and non-Abelian geometric phases associated with the displaced Fock states. Finally, in section V, we present our conclusions.
\section{Landau quantization}
In this section, we make a brief review of the Landau quantization for a neutral particle possessing an induced electric dipole moment. In \cite{wei} the quantum dynamics of a moving particle with an induced electric dipole moment is described as follows: first, let us assume that the dipole moment of an atom is proportional to the external electric field in the rest frame of the particle or in the laboratory frame, that is, $\vec{d}=\alpha\,\vec{E}$, where $\alpha$ is the dielectric polarizability of the atom; thus, for a moving particle, the dipole moment of the neutral particle interacts with a different electric field $\vec{E}'$, which is given by applying the Lorentz transformation of the electromagnetic field: $\vec{E}'=\vec{E}+\frac{1}{c}\,\vec{v}\times\vec{B}$ up to $\mathcal{O}\left(v^{2}/c^{2}\right)$, where $\vec{v}$ is the velocity of the neutral particle and the fields $\vec{E}$ and $\vec{B}$ correspond to the electric and magnetic fields in the laboratory frame, respectively. From now on, let us consider the units $\hbar=c=1$; therefore, we can write $
\vec{d}=\alpha\left(\vec{E}+\vec{v}\times\vec{B}\right).\label{1.1}$ Further, the Lagrangian is given by $\mathcal{L}=\frac{1}{2}\left(M+\alpha\,B^{2}\right)\,v^{2}+\vec{v}\cdot\left(\vec{B}\times\alpha\,\vec{E}\right)+\frac{1}{2}\,\alpha\,E^{2}$, where we assumed that the velocity of the dipole is perpendicular to the magnetic field. Note that this moving particle has an effective mass given by $m=M+\alpha\,B^{2}$, where $M$ is the mass of the neutral particle. Let us consider $B^{2}=B_{0}^{2}=\mathrm{constant}$. Thereby, after some calculations, the Schr\"odinger equation describing the quantum dynamics of a moving neutral particle with an induced electric dipole moment interacting with electric and magnetic fields can be written in the form:
\begin{eqnarray}
i\frac{\partial\psi}{\partial t}=\frac{1}{2m}\left[\vec{p}+\alpha\left(\vec{E}\times\vec{B}\right)\right]^{2}\psi-\frac{\alpha}{2}\,E^{2}\,\psi.
\label{1.2}
\end{eqnarray}
Now, based on the discussions presented in Refs. \cite{wei,c1lwhw} and considering particles with a large polarizability $\alpha\sim10^{-39}\,\mathrm{Fm}^{2} $ and $B\leq10\,\mathrm{T} $, we find that the term $\alpha B^{2}\leq 10^{-37}\,\mathrm{kg}$ corresponds to only $ 10^{-10}$ of the mass of one nucleon; therefore, this term has no significance in the Hamiltonian operator on the right-hand side of Eq. (\ref{1.2}). Moreover, by considering an electric field with intensity $E\approx10^{7}\,\mathrm{V}/\mathrm{m}$, one sees that the term $ \alpha E^{2}=10^{-25}\,\mathrm{J}$ is very small compared with the kinetic energy of the atoms; hence, it can be neglected in the Hamiltonian given in Eq. (\ref{1.2}) without loss of generality. In this way, we can write the Hamiltonian of Eq. (\ref{1.2}) in the form:
\begin{eqnarray}
\hat{\mathcal{H}}=\frac{1}{2M}\left[\vec{p}+\alpha\left(\vec{E}\times\vec{B}\right)\right]^{2}.
\label{1.3}
\end{eqnarray}
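The order-of-magnitude estimates above amount to simple arithmetic (we only check the powers of ten; the units are those quoted in the text):
\begin{equation*}
\alpha B^{2} \le 10^{-39}\times 10^{2}=10^{-37}, \qquad \alpha E^{2}\approx 10^{-39}\times\left(10^{7}\right)^{2}=10^{-25}.
\end{equation*}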
The Landau quantization, in turn, was established in Ref. \cite{lwhw} to exhibit the well-known Landau levels for systems of cold atoms \cite{cold1,cold2,cold3,cold4,cold5}. In this case, a cold atom is treated as a structureless particle with an induced electric dipole moment. The field configuration proposed in Ref. \cite{lwhw} in order to achieve the Landau quantization for a neutral particle with an induced electric dipole moment is
$
\vec{E}=\frac{\lambda}{2}\left(x,y,0\right)\quad\mbox{and}\quad\vec{B}=\left(0,0,B\right),
\label{EeB}
$
where $\lambda$ and $B$ are the uniform electric charge density and the magnetic field intensity, respectively. With this field configuration, we have an effective vector potential defined as $\vec{A}_\mathrm{eff}=\vec{E}\times\vec{B}=\frac{\lambda\,B}{2}\left(y,\,-x,\,0\right)$ and, thus, an effective magnetic field given by $\vec{B}_\mathrm{eff}=-\lambda\,B\,\left(0,0,1\right)$. Note that, with the field configuration (\ref{EeB}), the interaction between the electromagnetic field and the electric dipole moment of the atom described by the Hamiltonian operator (\ref{1.3}) is analogous to the minimal coupling of a charged particle to an external magnetic field. Hence, this field configuration yields the precise condition in which the Landau quantization occurs in a cold atom system, since the quantum particle is constrained to move in a plane in the presence of an effective uniform magnetic field $\vec{B}_\mathrm{eff}=-\lambda\,B\,\left(0,0,1\right)$. It is worth mentioning recent studies that have dealt with effective vector potentials and effective magnetic fields in order to obtain the Landau-Aharonov-Casher quantization \cite{er,bf20} and the Landau-He-McKellar-Wilkens quantization \cite{lin}. Thereby, by using the field configuration given in Eq. (\ref{EeB}), the Hamiltonian operator (\ref{1.3}) can be written in the form:
\begin{eqnarray}
\hat{\mathcal{H}}=\frac{1}{2M}\left[\left(p_{x}+\frac{M\omega}{2}\,y\right)^{2}+\left(p_{y}-\frac{M\omega}{2}\,x\right)^{2}\right],
\label{Hcoml2}
\end{eqnarray}
where $\omega=\omega_{\mathrm{WHW}}=\frac{\alpha\,\lambda\,B} {M}=\sigma\,\left|\omega\right|$ is the cyclotron frequency of the corresponding Landau levels for a neutral particle with an induced electric dipole moment \cite{lwhw} and $\sigma=\mathrm{sgn}(\lambda\,B)$ indicates the direction of rotation.
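As a consistency check, the effective magnetic field quoted above is obtained by taking the curl of the effective vector potential:
\begin{equation*}
\vec{B}_\mathrm{eff}=\nabla\times\vec{A}_\mathrm{eff}=\left(0,\,0,\,\partial_{x}\left(-\tfrac{\lambda B}{2}\,x\right)-\partial_{y}\left(\tfrac{\lambda B}{2}\,y\right)\right)=-\lambda\,B\,\left(0,0,1\right),
\end{equation*}
and substituting $\alpha\left(\vec{E}\times\vec{B}\right)=\tfrac{\alpha\lambda B}{2}\left(y,-x,0\right)$ into (\ref{1.3}) reproduces (\ref{Hcoml2}) with $M\omega=\alpha\,\lambda\,B$.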
\section{Displaced Fock states}
The concept of coherent states was proposed originally in 1926 by Schr\"odinger \cite{schro} in the context of classical states of the quantum harmonic oscillator. Klauder \cite{klauder1,klauder2}, Glauber \cite{glauber} and Sudarshan \cite{sudar} independently developed the concept of coherent states in quantum mechanics. Klauder and Sudarshan \cite{klauder} discussed how coherent states can be constructed from the Fock vacuum through the action of the displacement operators.
On the other hand, Venkata Satyanarayana \cite{ven} described states of the harmonic oscillator obtained through the action of the displacement operator on the Fock states, which are called displaced Fock states. In this paper, we use this denomination for states constructed from the Fock state $n$, in order to distinguish them from coherent states constructed from the Fock vacuum. The displaced Fock states have attracted great interest in several physical systems \cite{2,3,4,5,6,babi,homodyne,banas,kuzm}. In particular, the displaced Fock states have been used in studies of geometric quantum phases \cite{displaced} and holonomic quantum computation \cite{lbf}. In this section, we concentrate on the construction of displaced Fock states for an analogue of the Landau quantization for a neutral particle with an induced electric dipole moment. We follow the formalism adopted in Ref. \cite{feld} in order to build these states in the presence of electric and magnetic fields. First of all, let us introduce the following operators:
\begin{eqnarray}
a_{\pm}&=&\frac{l_m}{\sqrt{2}\hbar}\left[p_{x}\mp i\sigma p_{y}\pm \frac{i\hbar}{2l_{m}^{2}}\left(x\mp i\sigma\,y\right)\right],\nonumber\\
[-2mm]\label{b1}\\[-2mm]
b_{\pm}&=&\frac{1}{\sqrt{2}\,l_{m}}\left[\frac{1}{2}\left(x\mp i\sigma\,y\right)\pm \frac{il_{m}^{2}}{\hbar}\left(p_{x}\mp i\sigma\,p_{y}\right)\right],\nonumber
\end{eqnarray}
where we have defined the parameter $l_{m}=\sqrt{\hbar/M\,\left|\omega\right|}$ as the magnetic length and $\sigma=\pm1$. In particular, the operators $b_{\pm}$ are built from the orbit
center-coordinate operators $\hat{X}=\frac{1}{\sqrt{2}\,l_{m}}\left(\frac{1}{2}\hat{x} +\frac{l_m{}^2}{\hbar}\hat{p_{y}}\right)$ and $\hat{Y}=\frac{1}{\sqrt{2}\,l_{m}}\left(\frac{1}{2}\hat{y}-\frac{l_m{}^2}{\hbar}\hat{p_{x}}\right)$ via the relations $b_{\pm}=\hat{X}\mp i\,\sigma\,\hat{Y}$. Moreover, the operators $a_{\pm}$ and $ b_{\pm}$ obey the following commutation relations:
\begin{eqnarray}
\left[a_{i}\,,\,b_{i}\right]=\left[a_{i}\,,\,b_{j}\right]=0;\,\,\,\,\,\,\left[a_{-}\,,\,a_{+}\right]=\left[b_{+}\,,\,b_{-}\right]=1.
\end{eqnarray}
Thereby, from Eq. (\ref{b1}), the Hamiltonian operator (\ref{Hcoml2}) and the $z$-component of angular momenta $\hat{L}_{z}$ can be rewritten as:
\begin{eqnarray}
\hat{\mathcal{H}}=\hbar\,\left|\omega\right|\,\left(a_{+}\,a_{-}+\frac{1}{2}\right);\,\,\,\,\,\,\hat{L}_{z}=\sigma\,\hbar\left(b_{-}b_{+}-a_{+}a_{-}\right),
\label{HeLcomab}
\end{eqnarray}
where we have that $\left[\hat{\mathcal{H}},\,\hat{L}_{z}\right]=0$. Since $\hat{\mathcal{H}}$ commutes with $\hat{L}_{z}$, these operators share a common set of eigenstates; thus,
\begin{eqnarray}
\hat{\mathcal{H}}\,\left|n,\,\ell\right\rangle=\mathcal{E}_{n}\,\left|n,\,\ell\right\rangle;\,\,\,\,\hat{L}_{z}\,\left|n,\,\ell\right\rangle=\ell\,\hbar\,\left|n,\ell\right\rangle,
\label{HeLcomEel}
\end{eqnarray}
where $\mathcal{E}_{n}=\hbar\,\left|\omega\right|\,\left(n+\frac{1}{2}\right)$, $n=0,\,1,\,2,\,\ldots$ and $\ell=0,\,\pm1,\pm2,\,\pm3,\ldots$. Besides, we have
\begin{eqnarray}
\left[\hat{\mathcal{H}},\,a_{\pm}\right]=\pm\hbar\,\left|\omega\right|a_{\pm};\,\,\,\,\left[\hat{L}_{z},\,a_{\pm}\right]=\mp\,\sigma\,\hbar\,a_{\pm},
\label{ComHeLcoma}
\end{eqnarray}
which means, from the first commutation relation given in Eq. (\ref{ComHeLcoma}), that the operators $a_{+}$ and $a_{-}$ correspond to the raising and lowering operators for energy, respectively. However, from the second commutation relation given in Eq. (\ref{ComHeLcoma}), by taking $\sigma=-1$, it follows that the operators $a_{+}$ and $a_{-}$ play the role of the raising and lowering operators with respect to the eigenstates for $\hat{L}_{z} $, respectively; on the other hand, by taking $\sigma=+1$, the behaviour of these operators is inverted. Thereby, let us write
\begin{eqnarray}
a_{+}\,\left|n,\,\ell\right\rangle&=&\sqrt{n+1}\,\left|n+1,\,\ell-\sigma\right\rangle,\nonumber\\
a_{-}\,\left|n,\,\ell\right\rangle&=&\sqrt{n}\,\left|n-1,\,\ell+\sigma\right\rangle.
\label{anl}
\end{eqnarray}
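As a quick consistency check, the ladder-operator algebra above can be realized numerically on a truncated number basis. The sketch below is illustrative only (the quantum number $\ell$ is suppressed, and the top level of the truncated basis is unreliable); it verifies that $a_{+}a_{-}$ is diagonal with eigenvalues $n$, so that the spectrum $\hbar\left|\omega\right|\left(n+\frac{1}{2}\right)$ follows at once, and that $a_{+}$ acts as in Eq. (\ref{anl}).

```python
import numpy as np

# a_- on a truncated number basis: a_-|n> = sqrt(n)|n-1>.
dim = 12
a_minus = np.diag(np.sqrt(np.arange(1, dim)), 1)
a_plus = a_minus.T  # Hermitian conjugate (the matrices are real here)

# The number operator a_+ a_- is diagonal with eigenvalues 0, 1, ..., dim-1.
N = a_plus @ a_minus
assert np.allclose(np.diag(N), np.arange(dim))

# a_+|n> = sqrt(n+1)|n+1>, checked away from the truncation edge.
ket3 = np.zeros(dim); ket3[3] = 1.0  # the state |n=3>
assert np.isclose((a_plus @ ket3)[4], np.sqrt(4.0))

# [a_-, a_+] = 1 holds except on the last, truncated level.
comm = a_minus @ a_plus - a_plus @ a_minus
assert np.allclose(comm[:-1, :-1], np.eye(dim - 1))
print("ladder algebra checks passed")
```

The same construction with a second, independent pair of matrices would model $b_{\pm}$.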
We can also observe that $\left[\hat{\mathcal{H}},\,b_{\pm}\right]=0$ and $\left[\hat{L}_{z},\,b_{\pm}\right]=\mp\,\sigma\,\hbar\,b_{\pm}$; thus, the operators $b_{\pm}$ can raise or lower only the eigenstates of $\hat{L}_{z}$ in such a way that
\begin{eqnarray}
b_{+}\,\left|n,\,\ell\right\rangle&=&\sqrt{n+\sigma\,\ell}\,\left|n,\,\ell-\sigma\right\rangle\nonumber\\
b_{-}\,\left|n,\,\ell\right\rangle&=&\sqrt{n+\sigma\ell+1}\,\left|n,\,\ell+\sigma\right\rangle,
\label{bnl}
\end{eqnarray}
which shows that the operators $b_{+}$ and $b_{-}$ play the role of the raising and lowering operators with respect to the eigenstates of $\hat{L}_{z}$, respectively, for $\sigma=-1$. The behaviour of these operators is inverted for $\sigma=+1$.
From the relations established in Eq. (\ref{bnl}), by considering $\sigma=-1$, one can use the first relation of Eq. (\ref{bnl}) to show that for $\ell=n$ we have $b_{+}\left|n,\,n\right\rangle=0$ \cite{feld}. This means that the possible values of the quantum number $\ell$ are defined in the range $-\infty\,<\,\ell\,\leq\,n$. On the other hand, by considering $\sigma=+1$ and $\ell=-n$, we obtain the range $-n\,\leq\,\ell\,<\,+\infty$. Hence, we can see that the sum $n+\sigma\,\ell$ is always non-negative. Based on the algebraic method, one can define the ground state $\left|0,0\right\rangle$ in such a way that $a_{-}\left|0,0\right\rangle=b_{-}\left|0,0\right\rangle=0$; therefore, we can write
\begin{eqnarray}
\left|n,\,\ell\right\rangle=\frac{a_{+}^{n}b_{+}^{n+\sigma\,\ell}}{\sqrt{n!\,\left(n+\sigma\ell\right)!}}\,\left|0,\,0\right\rangle,
\label{nlstate}
\end{eqnarray}
and the ground state in the coordinate representation reads
\begin{eqnarray}
\psi_{00}=\left\langle x,y\right|\left.0,0\right\rangle=\frac{1}{2\,l_{m}^{2}\,\sqrt{\pi}}\,\exp\left[-\frac{\left(x^{2}+y^{2}\right)}{4\,l_{m}^{2}}\right].
\label{psi00}
\end{eqnarray}
From now on, let us build the displaced states for the Landau system of a neutral particle possessing an induced electric dipole moment. Following Refs. \cite{displaced,displaced1}, the displaced states can be constructed by applying the following unitary operator:
\begin{eqnarray}
\hat{D}\left(\nu\right)&=&\exp\left(\nu\,a_{+}-\nu^{*}\,a_{-}\right)\nonumber\\
[-2mm]\label{Dnu}\\[-2mm]
&=& e^{-\left|\nu\right|^{2}/2}\,e^{\nu\,a_{+}}\,e^{-\nu^{*}\,a_{-}},\nonumber
\end{eqnarray}
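The disentangled form in Eq. (\ref{Dnu}) follows from the Baker-Campbell-Hausdorff formula for operators whose commutator is a $c$-number, since $\left[\nu\,a_{+},\,-\nu^{*}a_{-}\right]=\left|\nu\right|^{2}$. A small numerical sketch (illustrative only: truncated basis, hand-rolled Taylor series for the matrix exponential) compares the two sides of Eq. (\ref{Dnu}) on the low-lying levels:

```python
import numpy as np

def expm(M, terms=80):
    # Matrix exponential via its Taylor series; adequate for these small norms.
    out = np.eye(M.shape[0], dtype=complex)
    term = np.eye(M.shape[0], dtype=complex)
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

dim = 25
a = np.diag(np.sqrt(np.arange(1, dim)), 1).astype(complex)   # a_-
ad = a.conj().T                                              # a_+
nu = 0.4 + 0.3j

D = expm(nu * ad - np.conj(nu) * a)                          # exp(nu a_+ - nu* a_-)
D_split = np.exp(-abs(nu) ** 2 / 2) * expm(nu * ad) @ expm(-np.conj(nu) * a)

# The two forms agree away from the truncation edge.
assert np.allclose(D[:10, :10], D_split[:10, :10], atol=1e-8)
print("BCH factorization verified on the truncated basis")
```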
where $\nu=\nu_{x}+i\,\nu_{y}$, and the displaced states are given in the form: $\left|n\left(\nu\right),\,\ell\right\rangle=\hat{D}\left(\nu\right)\,\left|n,\,\ell\right\rangle$. In this way, the time evolution of the displaced states of the Landau system of a neutral particle possessing an induced electric dipole moment is governed by the Hamiltonian operator given by
\begin{eqnarray}
\hat{\mathcal{H}}_{\nu}&=&\hat{D}\left(\nu\right)\,\hat{\mathcal{H}}\,\hat{D}^{\dagger}\left(\nu\right)\nonumber\\
&=&\hbar\,\left|\omega\right|\left[\left(a_{+}-\nu^{*}\right)\left(a_{-}-\nu\right)+\frac{1}{2}\right]\label{Hnu}\\
&=&\frac{1}{2m}\left[\left(\hat{p}_{x}+\frac{\hbar}{2l_{m}^{2}}\,\hat{y}-\frac{\sqrt{2}\,\hbar}{l_{m}}\,\nu_{x}\right)^{2}\right. \nonumber \\ &+&\left. \left(\hat{p}_{y}-\frac{\hbar}{2l_{m}^{2}}\,\hat{x}+\frac{\sqrt{2}\,\hbar}{l_{m}}\,\nu_{y}\right)^{2}\right].\nonumber
\end{eqnarray}
Note that the two new terms in the Hamiltonian given by Eq. (\ref{Hnu}) can be treated as new contributions to the analogue of the vector potential $\vec{A}_{\mathrm{eff}}=\vec{E}\times\vec{B}$ given in Eq. (\ref{1.3}). From the experimental viewpoint, these new contributions to the analogue of the vector potential can be achieved by adding a constant electric field parallel to the plane of motion of the neutral particle. This is reasonable because the conditions established in Ref. \cite{lwhw} for achieving the Landau quantization for a neutral particle with an induced electric dipole moment continue to be satisfied. Hence, by including a constant electric field $\vec{E}'=\left(E_{x}',E_{y}',0\right)$, one finds the parameters $\nu_{x}$ and $\nu_{y}$ given in Eq. (\ref{Hnu}) to be
\begin{eqnarray}
\nu_{x}=-\frac{\alpha\,l_{m}}{\sqrt{2}\,\hbar}\,E_{y}';\,\,\,\,\nu_{y}=-\frac{\alpha\,l_{m}}{\sqrt{2}\,\hbar}\,E_{x}'.
\label{nuxy}
\end{eqnarray}
Observe that the modification of the Hamiltonian given in Eq. (\ref{Hnu}) related to the parameters defined in Eq. (\ref{nuxy}) corresponds to the transformation $\left(x,y\right)\rightarrow\left(x+\delta x,\,y+\delta y\right)$ in the wave function, where $\delta x=\frac{2\,\alpha\,l_{m}^{2}}{\hbar}\,E_{x}'$ and $\delta y=\frac{2\,\alpha\,l_{m}^{2}}{\hbar}\,E_{y}'$. Hence, the electric field $\vec{E}'$ causes a small shift of the states of the Landau quantization of a neutral particle with an induced electric dipole moment in phase space, in a way analogous to the Landau states discussed in Ref. \cite{displaced1}, where this shift takes place in real space.
\section{Berry phase}
Now let us study the Berry phase associated with the displaced states for the Landau system of a neutral particle possessing an induced electric dipole moment. It follows from \cite{wilk,tg1} that, due to an adiabatic cyclic evolution in a degenerate subspace, the wave function of a quantum particle acquires a non-Abelian geometric phase. From the adiabatic theorem, the connection 1-form is given by
\begin{eqnarray}
A_{n}^{k,\ell}\left(\xi\right)=i \left\langle n\left(\nu\right),k \left|\frac{\partial}{\partial\xi}\right| n\left(\nu\right),\ell\right\rangle,
\label{AdeBerry}
\end{eqnarray}
where the parameter $\xi$ is the control parameter of the system. In the present case, the control parameters are defined by the components $E_{x}'$ and $E_{y}'$ of the electric field $\vec{E}'$, the electric charge density $\lambda$ and the magnetic field intensity $B$ established in Eq. (\ref{EeB}). Let us simplify our discussion by assuming that the control parameters are positive. Thereby, the non-zero components of the connection 1-form (\ref{AdeBerry}) for an energy level $n$ are determined by
\begin{eqnarray}
&&A_{n}^{k,\ell}\left(\xi\right)=-\left(\nu_{x}\frac{\partial\nu_{y}}{\partial\xi}-\nu_{y}\frac{\partial\nu_{x}}{\partial\xi}\right)\delta_{k,\ell}\nonumber\\
&-&\frac{1}{l_{m}}\frac{\partial\,l_{m}}{\partial\xi}\left(\nu^{*}\sqrt{n+\sigma\ell+1}\,\,\,\delta_{k,\ell+\sigma}+\nu\sqrt{n+\sigma\ell}\,\,\,\delta_{k,\ell-\sigma}\right).
\label{AdeXi}
\end{eqnarray}
By handling the control parameters $E_{x}'$, $E_{y}'$, $\lambda$ and $B$, we obtain
\begin{eqnarray}
A_{n}^{k,\ell}\left(E_{x}'\right)&=&-\frac{E_{y}'}{16\,u^{2}\,\lambda\,B}\,\delta_{k,\ell};\nonumber\\
A_{n}^{k,\ell}\left(E_{y}'\right)&=&-\frac{E_{x}'}{16\,u^{2}\,\lambda\,B}\,\delta_{k,\ell};\nonumber\\
[-2mm]\label{AdeE}\\[-2mm]
A_{n}^{k,\ell}\left(\lambda\right)&=&-\frac{u}{\lambda^{3/2}\,B^{1/2}}\left[\left(E_{y}'-iE_{x}'\right)\sqrt{n+\sigma\ell+1}\,\delta_{k,\ell+\sigma}\right. \nonumber\\&+&\left.\left(E_{y}'+iE_{x}'\right)\sqrt{n+\sigma\ell}\,\delta_{k,\ell-\sigma}\right];\nonumber\\
A_{n}^{k,\ell}\left(B\right)&=&-\frac{u}{\lambda^{1/2}\,B^{3/2}}\left[\left(E_{y}'-iE_{x}'\right)\sqrt{n+\sigma\ell+1}\,\delta_{k,\ell+\sigma}\right. \nonumber \\ &+& \left.\left(E_{y}'+iE_{x}'\right)\sqrt{n+\sigma\ell}\,\delta_{k,\ell-\sigma}\right],\nonumber
\end{eqnarray}
where $u=\sqrt{\frac{\hbar}{8\alpha}}$. In particular, the connection 1-forms $A_{n}^{k,\ell}\left(E_{x}'\right)$ and $A_{n}^{k,\ell}\left(E_{y}'\right)$ given in Eq. (\ref{AdeE}) are called the Berry connections or the Mead-Berry vector potentials \cite{tg1,berry}, and these connections give rise to Abelian geometric quantum phases. In this way, by taking $k=\ell$ and keeping the control parameters $\lambda$ and $B$ unchanged, one finds the corresponding quantum phase $\Gamma(C_{1})= e^{i\gamma_{n}^{\left(1\right)}}$, where the phase angle $\gamma_{n}^{\left(1\right)}$ is given by
\begin{eqnarray}
\gamma_{n}^{\left(1\right)}&=&\oint_{C_{1}}\left[A_{n}^{k,\ell}\left(E_{x}'\right)\,dE_{x}'+A_{n}^{k,\ell}\left(E_{y}'\right)\,dE_{y}'\right]\nonumber\\
&=&-\frac{1}{16\,u^{2}\,\lambda\,B}\,S_{1},
\label{gamma1}
\end{eqnarray}
where $C_{1}$ is the path of the adiabatic cyclic evolution taken in the $E_{x}'-E_{y}' $ plane and $S_{1}$ is the area enclosed by the path $C_{1}$ as shown in Fig. \ref{fig1}. Note that the expression (\ref{gamma1}) produces an Abelian geometric phase.
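For concreteness, Eq. (\ref{gamma1}) says that the Abelian phase angle depends only on the enclosed area $S_{1}$, not on the shape of $C_{1}$. The snippet below evaluates $\gamma_{n}^{(1)}$ for a circle and for a rectangle of equal area; the numerical parameter values are purely hypothetical and serve only to illustrate the area law.

```python
import numpy as np

# Hypothetical, dimensionless parameter values for illustration only.
u, lam, B = 1.0, 2.0, 0.5           # u = sqrt(hbar/(8 alpha)), lambda, B

def gamma1(S1):
    # Eq. (gamma1): the phase angle is proportional to the enclosed area S1.
    return -S1 / (16 * u**2 * lam * B)

E0 = 0.3                             # radius of a circular loop in the E'_x-E'_y plane
S_circle = np.pi * E0**2
Lx = 0.2                             # rectangle chosen to enclose the same area
Ly = S_circle / Lx

# Equal areas give equal phase angles, whatever the loop's shape.
assert np.isclose(gamma1(S_circle), gamma1(Lx * Ly))
print(gamma1(S_circle))
```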
\begin{figure}
\caption{Closed path of the adiabatic evolution of the control parameters $E_{x}'$ and $E_{y}'$.}
\label{fig1}
\end{figure}
In what follows, let us deal with the non-diagonal terms of the connection 1-form defined in Eq. (\ref{AdeXi}). The non-diagonal terms are defined by $A_{n}^{k,\ell}\left(\lambda\right)$ and $A_{n}^{k,\ell}\left(B\right)$ given in Eq. (\ref{AdeE}) by taking $k\neq\ell$. A particular case is given by considering $E_{x}'=0$; thus, the corresponding phase angle is
\begin{eqnarray}
\gamma_{n}^{\left(i\right)}&=&\oint_{C_{2}}\left[A_{n}^{k,\ell}\left(E_{y}'\right)\,dE_{y}'+A_{n}^{k,\ell}\left(\lambda\right)\,d\lambda+A_{n}^{k,\ell}\left(B\right)\,dB\right]\nonumber\\
&=&2u\left[\sqrt{n+\sigma\ell+1}\,\,\delta_{k,\ell+\sigma}+\sqrt{n+\sigma\ell}\,\,\delta_{k,\ell-\sigma}\right]\,S_{i},
\label{Si}
\end{eqnarray}
where $S_{i}$ is the area determined by a closed path as shown in Fig. \ref{fig2}. Three possible paths shown in Fig. \ref{fig2} are $S_{2}=S_{ABCHEFA}$, $S_{3}=S_{ABCHGFA}$ and $S_{4}=S_{ADCHEFA}$, whose areas are given by
\begin{eqnarray}
S_{2}&=&\left(\frac{E_{y2}}{\sqrt{\lambda_{2}}}-\frac{E_{y1}}{\sqrt{\lambda_{1}}}\right)\left(\frac{1}{\sqrt{B_{2}}}-\frac{1}{\sqrt{B_{1}}}\right)\nonumber\\
&-&\left(\frac{E_{y1}}{\sqrt{B_{2}}}-\frac{E_{y2}}{\sqrt{B_{1}}}\right)\left(\frac{1}{\sqrt{\lambda_{2}}}-\frac{1}{\sqrt{\lambda_{1}}}\right),\nonumber\\
S_{3}&=&\left(\frac{E_{y2}}{\sqrt{\lambda_{2}}}-\frac{E_{y1}}{\sqrt{\lambda_{1}}}\right)\left(\frac{1}{\sqrt{B_{2}}}-\frac{1}{\sqrt{B_{1}}}\right)\nonumber\\
&-&E_{y2}\left(\frac{1}{\sqrt{B_{2}}}-\frac{1}{\sqrt{B_{1}}}\right)\left(\frac{1}{\sqrt{\lambda_{2}}}-\frac{1}{\sqrt{\lambda_{1}}}\right),\nonumber\\
S_{4}&=&\left(\frac{E_{y1}}{\sqrt{\lambda_{2}}}-\frac{E_{y2}}{\sqrt{\lambda_{1}}}\right)\left(\frac{1}{\sqrt{B_{2}}}-\frac{1}{\sqrt{B_{1}}}\right)\nonumber\\
&-&E_{y1}\left(\frac{1}{\sqrt{B_{2}}}-\frac{1}{\sqrt{B_{1}}}\right)\left(\frac{1}{\sqrt{\lambda_{2}}}-\frac{1}{\sqrt{\lambda_{1}}}\right).\nonumber
\end{eqnarray}
\begin{figure}
\caption{Possible paths of the adiabatic evolution in the three parameters space.}
\label{fig2}
\end{figure}
The geometric phase $ \Gamma(C_{2})=\hat{\mathcal{P}}e^{i\gamma_{n}^{\left(i\right)}}$ is obtained from Eq. (\ref{Si}) and is given by
\begin{eqnarray}\label{nap}
\Gamma(C_{2})=\hat{\mathcal{P}}\exp\left\{i \oint_{C_{2}} 2u\left[\sqrt{n+\sigma\ell+1}\,\,\delta_{k,\ell+\sigma}+\right. \right. \\ \nonumber \left. \left.\sqrt{n+\sigma\ell}\,\,\delta_{k,\ell-\sigma}\right]\,S_{i}\right\},
\end{eqnarray}
and it is a non-Abelian geometric quantum phase, where $\hat{\mathcal{P}}$ is the path-ordering operator \cite{tg1,zr1}. We cannot obtain a general expression for the non-Abelian geometric phase (\ref{nap}) without specifying the path. Hence, let us consider a fixed value of $n$, choose a pair $k,\,\ell$, and perform a cyclic adiabatic evolution along one of the possible closed paths $ABCHEFA$, $ABCHGFA$ and $ADCHEFA$ shown in Fig. \ref{fig2}. A similar calculation was made for another system in Ref. \cite{lbf}.
One should note that, although the non-Abelian geometric phase given in Eq. (\ref{nap}) is determined by the control parameters $\lambda$ and $B$, the existence of such a phase depends on the control parameter $E_{y}'$. If $E_{y1}=E_{y2}$, we can see that the non-Abelian geometric phase (\ref{nap}) is an identity matrix. Of course, we can obtain other geometric phases in Eq. (\ref{nap}) by choosing other paths. Another case arises if one supposes the component $E_{x}'$ to be non-zero; then we must deal with four control parameters: $E_{x}'$, $E_{y}'$, $\lambda$ and $B$. In this case, the calculation of the geometric quantum phase is considerably harder, hence we do not discuss it here.
\section{conclusions}
By using the formalism adopted by Feldman and Kahn \cite{feld}, we have shown a way of building displaced states from the Landau quantization of a neutral particle possessing an induced electric dipole moment. The procedure for constructing the displaced states is based on adding a constant external electric field to the field configuration giving rise to the Landau quantization for a neutral particle with an induced electric dipole moment. Besides, we have investigated the emergence of Abelian and non-Abelian geometric quantum phases associated with these displaced Fock states during some adiabatic cyclic evolutions. We claim that we have obtained the displaced Fock states for the Landau quantization of the Wei \textit{et al.} Hamiltonian (\ref{1.3}), and studied the corresponding Abelian and non-Abelian Berry quantum phases for the first time in the literature. It is worth pointing out that systems with degeneracies in the energy levels may have Berry phases with non-Abelian structures, as in the case of the analogue of the Landau quantization studied here. We emphasize that different paths of the Hamiltonian produce geometric phases that in general do not commute among themselves, which suggests a way of implementing holonomic quantum computation~\cite{zr1,pzr1,pc1,pc3}. As we can see from Sec. IV, the space of parameters defined by $\{E_{x},E_{y},B,\lambda\}$ provides a large number of possible ways to calculate the geometric phase. In this theoretical study, we follow the field configurations inspired by the original work of Wei \textit{et al.} \cite{wei,c1lwhw,lin}, and we have used the charge density $\lambda$ as a variable parameter.
For a more experimentally realistic configuration, these charge densities can be replaced with a capacitor, where the field produced by it induces the dipole moment of the particle (atom), and the potential difference of this capacitor can be used as the variable parameter, as has been done recently in the experimental study of the He-McKellar-Wilkens quantum phase, where the experimental configuration proposed by Wei-Han-Wei \cite{wei,c1lwhw} was used by the Vigu\'e group \cite{lepoutre,lepoutre1,lepoutre2,lepoutre3} for its experimental detection. This richness of possibilities to obtain non-Abelian geometric phases is important for the investigation of holonomic quantum computation in this system of induced dipoles. An interesting point of discussion is the possible use of quantum holonomies defined by the non-Abelian geometric quantum phase in studies of holonomic quantum computation \cite{zr1}, since the present configuration can be investigated in systems of atoms with a large polarizability.
\acknowledgments
The authors would like to thank CNPq and CAPES for financial support.
\end{document}
\begin{document}
\title{Translational Embeddings via Stable Canonical Rules}
\begin{abstract}
This paper presents a new uniform method for studying modal companions of superintuitionistic deductive systems and related notions, based on the machinery of stable canonical rules. Using our method, we obtain an alternative proof of the Blok-Esakia theorem both for logics and for rule systems, and prove an analogue of the Dummett-Lemmon conjecture for rule systems. Since stable canonical rules may be developed for any rule system admitting filtration, our method generalises smoothly to richer signatures. We illustrate this by applying our techniques to prove analogues of the Blok-Esakia theorem (for both logics and rule systems) and of the Dummett-Lemmon conjecture (for rule systems) in the setting of tense companions of bi-superintuitionistic deductive systems. We also use our techniques to prove that the lattice of rule systems (logics) extending the modal intuitionistic logic $\mathtt{KM}$ and the lattice of rule systems (logics) extending the provability logic $\mathtt{GL}$ are isomorphic.
\end{abstract}
\section{Introduction}\addcontentsline{toc}{section}{Introduction}
A modal companion of a superintuitionistic logic $\mathtt{L}$ is defined as any normal modal logic $\mathtt{M}$ extending $\mathtt{S4}$ such that the \emph{Gödel translation} fully and faithfully embeds $\mathtt{L}$ into $\mathtt{M}$. The notion of a modal companion has sparked a remarkably prolific line of research, documented, e.g., in the surveys \cite{ChagrovZakharyashchev1992MCoIPL} and \cite{WolterZakharyaschev2014OtBET}. The jewel of this research line is the celebrated \emph{Blok-Esakia theorem}, first proved independently by \citet{Blok1976VoIA} and \citet{Esakia1976OMCoSL}. The theorem states that the lattice of superintuitionistic logics is isomorphic to the lattice of normal extensions of Grzegorczyk's modal logic $\mathtt{GRZ}$, via the mapping which sends each superintuitionistic logic $\mathtt{L}$ to the normal extension of $\mathtt{GRZ}$ with the set of all Gödel translations of formulae in $\mathtt{L}$.
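As a concrete illustration (not part of the paper's formal development), one standard presentation of the Gödel translation can be sketched as a recursion on formulae. The nested-tuple encoding below is an illustrative convention of the sketch, not the paper's notation; variants of the translation (e.g., prefixing $\Box$ to every subformula) are also common in the literature.

```python
def godel(phi):
    """Gödel translation T: T(p) = box p; T(bot) = bot;
    T commutes with conjunction and disjunction;
    T(phi -> psi) = box(T(phi) -> T(psi))."""
    op = phi[0]
    if op == 'var':
        return ('box', phi)
    if op == 'bot':
        return phi
    if op in ('and', 'or'):
        return (op, godel(phi[1]), godel(phi[2]))
    if op == 'imp':
        return ('box', ('imp', godel(phi[1]), godel(phi[2])))
    raise ValueError(f"unknown connective: {op}")

p, q = ('var', 'p'), ('var', 'q')
# T(p -> q) = box(box p -> box q)
print(godel(('imp', p, q)))
```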
\citet{Zakharyashchev1991MCoSLSSaPT} developed a unified approach to the theory of modal companions, via his technique of \emph{canonical formulae}. These formulae generalise the subframe formulae of \citet{Fine1985LCKPI}. Like a subframe formula, a canonical formula syntactically encodes the structure of a finite \emph{refutation pattern}, i.e., a finite transitive frame together with a (possibly empty) set of parameters. By applying a version of the \emph{selective filtration} construction, every formula can be matched with a finite set of finite refutation patterns, in such a way that the conjunction of all the canonical formulae associated with the refutation patterns is equivalent to the original formula. By studying how the Gödel translation affects superintuitionistic canonical formulae,
Zakharyashchev gave alternative proofs of classic theorems in the theory of modal companions, and extended this theory with several novel results. Among these, he confirmed the \emph{Dummett-Lemmon conjecture}, formulated in \cite{DummettLemmon1959MLbS4aS5}, which states that a superintuitionistic logic is Kripke complete iff its weakest modal companion is. \citet{Jerabek2009CR} generalized canonical formulae to \emph{canonical rules}, and applied this notion to extend Zakharyaschev's approach to the theory of modal companions to \emph{rule systems} (also known as \emph{multi-conclusion consequence relations}).
In \cite{BezhanishviliEtAl2016SCR,BezhanishviliEtAl2016CSL,BezhanishviliBezhanishvili2017LFRoHAaCF}, \emph{stable canonical formulae} and \emph{rules} were introduced as an alternative to Zakharyaschev and Je\v{r}ábek-style canonical rules and formulae. The basic idea is the same: a stable canonical formula or rule syntactically encodes the semantic structure of a finite refutation pattern. The main difference lies in how such structure is encoded, which affects how refutation patterns are constructed in the process of rewriting a formula (or rule) into a conjunction of stable canonical formulae (or rules). Namely, in the case of stable canonical formulae and rules finite refutation patterns are constructed by taking \emph{filtrations} rather than selective filtrations of countermodels. A survey of stable canonical formulae and rules can be found in \cite{BezhanishviliBezhanishvili2020JFaATfIL}.
This paper applies stable canonical rules to develop a novel, uniform approach to the study of modal companions and related notions. Our approach echoes the Zakharyaschev-Je\v{r}ábek approach in using rules encoding finite refutation patterns, but also bears circumscribed similarities with Blok's original algebraic approach in some proof strategies (see \Cref{remark:blok}). Our techniques deliver central results in the theory of modal companions in a notably uniform fashion, and with high potential for further generalisation. In particular, we obtain an alternative proof of the Blok-Esakia theorem for both logics and rule systems, and generalise the Dummett-Lemmon conjectures to rule systems. Moreover, due to the flexibility of filtration, our techniques easily generalise to rule systems in richer signatures. We illustrate this via two case studies. Firstly, we apply our methods to study the notion of \emph{tense companions} of bi-superintuitionistic deductive systems, introduced by \citet{Wolter1998OLwC} for logics. Here we generalise \cite[Theorem 23]{Wolter1998OLwC}, an analogue of the Blok-Esakia theorem, from logics to rule systems. Moreover, we obtain an analogue of the Dummett-Lemmon conjecture. Notably, these results are obtained via minimal adaptations of our technique, whereas extending the Zakharyaschev-Je\v{r}ábek technique to this setting is far from straightforward, as we argue in \Cref{sec:comparison}. Secondly, we apply our methods to study a Gödel translation-like correspondence between normal extensions of the intuitionistic provability logic $\mathtt{KM}$ and the normal extensions of the Gödel-Löb provability logic $\mathtt{GL}$. Here we prove that the lattice of normal modal superintuitionistic rule systems extending $\mathtt{KM}$ is isomorphic to the lattice of normal modal rule systems extending $\mathtt{GL}$. 
The corresponding result for logics, known as the \emph{Kuznetsov-Muravitsky theorem} \cite[Proposition 3]{KuznetsovMuravitsky1986OSLAFoPLE} follows as a corollary. In pursuing these two generalisations of our technique, we also develop new kinds of stable (or stable-like) canonical rules: for bi-superintuitionistic and tense logics on the one hand, and (more significantly) for modal superintuitionistic rule systems over $\mathtt{KM}$ and modal rule systems over $\mathtt{GL}$ on the other.
The techniques described in this paper can also be used to obtain axiomatic characterizations of the modal companion maps (and their counterparts in the richer signatures discussed here) in terms of stable canonical rules, as well as some results concerning the notion of stability \cite{BezhanishviliEtAl2018SML}. These results can be found in the recent master's thesis \cite{Cleani2021TEVSCR}, on which the present paper is based.
The paper is organised as follows. \Cref{ch:0} reviews general preliminaries. Each subsequent section presents and applies our methods to deductive systems in a specific pair of signature. \Cref{ch:1} studies modal companions of superintuitionistic deductive systems. \Cref{ch:2} studies tense companions of bi-superintuitionistic deductive system. \Cref{ch:3} studies the Kuznetsov-Muravitsky isomorphism between normal extensions of $\mathtt{KM}$ and normal extensions of $\mathtt{GL}$. We conclude in \Cref{ch:4}.
\section{General Preliminaries}
\label{ch:0}
This section fixes notational conventions and reviews the background theory needed throughout the paper. We collect here all definitions and results which all subsequent sections rely on. Preliminary information specific to the topic of a particular section is instead presented therein. We use \cite{BurrisSankappanavar1981ACiUA} as our standard reference for universal algebra, and \cite{Iemhoff2016CRaAR} for rule systems.
\subsection{Relations}
\label{sec:functionsrelations}
We begin by fixing some notation concerning binary relations. Let $X$ be a set, $R$ a transitive binary relation on $X$, and $U\subseteq X$. We define:
\begin{align}
\mathit{qmax}_R(U)&:=\{x\in U: \text{ for all }y\in U\text{, if } Rxy\text{ then }Ryx\}\\
\mathit{max}_R(U)&:=\{x\in U: \text{ for all }y\in U\text{, if } Rxy\text{ then }x=y\}\\
\mathit{qmin}_R(U)&:=\{x\in U: \text{ for all }y\in U\text{, if } Ryx\text{ then }Rxy\}\\
\mathit{min}_R(U)&:=\{x\in U: \text{ for all }y\in U\text{, if } Ryx\text{ then }x=y\}.
\end{align}
The elements of $\mathit{qmax}_R(U)$ and $\mathit{max}_R(U)$ are called \emph{$R$-quasi-maximal} and \emph{$R$-maximal} elements of $U$ respectively, and similarly the elements of $\mathit{qmin}_R(U)$ and $\mathit{min}_R(U)$ are called \emph{$R$-quasi-minimal} and \emph{$R$-minimal} elements of $U$ respectively. Note that if $R$ is a partial order then both $\mathit{qmax}_R(U)=\mathit{max}_R(U)$ and $\mathit{qmin}_R(U)=\mathit{min}_R(U)$. Lastly, we say that an element $x\in U$ is \emph{$R$-passive} in $U$ if for all $y\in X\smallsetminus U$, if $Rxy$ then there is no $z\in U$ such that $Ryz$. Intuitively, an $R$-passive element of $U$ is an $x\in U$ such that one cannot ``leave'' and ``re-enter'' $U$ starting from $x$ and ``moving through'' $R$. The set of all $R$-passive elements of $U$ is denoted by $\mathit{pas}_R(U)$.
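These order-theoretic notions are easy to compute on finite relations. The brute-force sketch below (an illustrative encoding of $R$ as a set of pairs, assumed transitive) mirrors the definitions and shows how a proper cluster has quasi-maximal but no maximal elements.

```python
def qmax(R, U):
    # R-quasi-maximal elements of U: every R-successor in U sees back.
    return {x for x in U if all((y, x) in R for y in U if (x, y) in R)}

def rmax(R, U):
    # R-maximal elements of U: no proper R-successor in U.
    return {x for x in U if all(x == y for y in U if (x, y) in R)}

def passive(R, X, U):
    # R-passive elements of U: one cannot leave U and re-enter it from x.
    return {x for x in U
            if all(not any((y, z) in R for z in U)
                   for y in X - U if (x, y) in R)}

# A transitive relation with a two-element cluster {1, 2} above the point 0.
X = {0, 1, 2}
R = {(0, 1), (0, 2), (1, 2), (2, 1), (1, 1), (2, 2)}

print(qmax(R, X))   # {1, 2}: the cluster is quasi-maximal
print(rmax(R, X))   # set(): a proper cluster has no maximal elements
```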
\subsection{Deductive Systems} We now review \emph{deductive systems}, which span both propositional logics and rule systems.
\label{sec:rulsys} The set $\mathit{Frm}_\nu(X)$ of \emph{formulae} in signature $\nu$ over a set of variables $X$ is the least set containing $X$ and such that for every $f\in \nu$ and $\varphi_1, \ldots, \varphi_n\in \mathit{Frm}_\nu(X)$ we have $f(\varphi_1, \ldots, \varphi_n)\in \mathit{Frm}_\nu(X)$, where $n$ is the arity of $f$. Henceforth we will take $\mathit{Prop}$ to be a fixed arbitrary countably infinite set of variables and write simply $\mathit{Frm}_\nu$ for $\mathit{Frm}_\nu(\mathit{Prop})$. We occasionally write formulae in the form $\varphi(p_1, \ldots, p_n)$ to indicate that the variables occurring in $\varphi$ are among $p_1, \ldots, p_n$. A \emph{substitution} is a map $s:\mathit{Prop}\to \mathit{Frm}_\nu(\mathit{Prop})$. Every substitution may be extended to a map $\bar s:\mathit{Frm}_\nu(\mathit{Prop})\to \mathit{Frm}_\nu(\mathit{Prop})$ recursively, by setting $\bar s(p)=s(p)$ if $p\in \mathit{Prop}$, and $\bar s(f(\varphi_1, \ldots, \varphi_n))=f(\bar s(\varphi_1), \ldots, \bar s(\varphi_n))$.
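The recursive extension $\bar s$ just described can be sketched directly; formulae are encoded as nested tuples, an illustrative convention of the sketch rather than the paper's notation.

```python
def subst(s, phi):
    # Extend a substitution s (a dict on variable names) to all formulae,
    # following the recursive definition of bar-s: variables are replaced,
    # and the extension commutes with every connective.
    if phi[0] == 'var':
        return s.get(phi[1], phi)   # treat s as the identity off its domain
    return (phi[0],) + tuple(subst(s, arg) for arg in phi[1:])

p, q, r = ('var', 'p'), ('var', 'q'), ('var', 'r')
s = {'p': ('and', q, r)}            # s sends p to q /\ r
print(subst(s, ('imp', p, q)))
```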
\begin{definition}
A \emph{logic} over $\mathit{Frm}_\nu$ is a set $\mathtt{L}\subseteq \mathit{Frm}_\nu$, such that
\begin{equation}
\varphi\in \mathtt{L}\Rightarrow \bar s(\varphi)\in \mathtt{L} \text{ for every substitution $s$.}\tag{structurality}
\end{equation}
\end{definition}
Interesting examples of logics, including those discussed in this paper, are normally closed under conditions other than structurality. If $\Gamma, \Delta$ are sets of formulae and $\mathcal{S}$ is a set of logics, we write $\Gamma\oplus_{\mathcal{S}}\Delta$ for the least logic in $\mathcal{S}$ extending both $\Gamma, \Delta$.
For any sets $X, Y$, write $X\subseteq_\omega Y$ to mean that $X\subseteq Y$ and $|X|$ is finite. A \emph{$($multi-conclusion$)$ rule} in signature $\nu$ over a set of variables $X$ is a pair $(\Gamma, \Delta)$ such that $\Gamma, \Delta\subseteq_\omega \mathit{Frm}_\nu(X)$. In case $\Delta=\{\varphi\}$ we write $\Gamma/\Delta$ simply as $\Gamma/\varphi$, and analogously if $\Gamma=\{\psi\}$. We use $;$ to denote union between finite sets of formulae, so that $\Gamma; \Delta=\Gamma\cup \Delta$ and $\Gamma; \varphi=\Gamma\cup \{\varphi\}$. We write $\mathit{Rul}_\nu(X)$ for the set of all rules in $\nu$ over $X$, and simply $\mathit{Rul}_\nu$ when $X=\mathit{Prop}$.
\begin{definition}
A \emph{rule system} is a set $\mathtt{S}\subseteq \mathit{Rul}_\nu(X)$ satisfying the following conditions.
\begin{enumerate}
\item If $\Gamma/\Delta\in \mathtt{S}$ then $\bar s[\Gamma]/\bar s[\Delta]\in \mathtt{S}$ for all substitutions $s$ (structurality).
\item $\varphi/\varphi\in \mathtt{S}$ for every formula $\varphi$ (reflexivity).
\item If $\Gamma/\Delta\in \mathtt{S}$ then $\Gamma;\Gamma'/\Delta;\Delta'\in \mathtt{S}$ for any finite sets of formulae $\Gamma',\Delta'$ (monotonicity).
\item If $\Gamma/\Delta;\varphi\in\mathtt{S}$ and $\Gamma;\varphi/\Delta\in \mathtt{S}$ then $\Gamma/\Delta\in \mathtt{S}$ (cut).
\end{enumerate}
\end{definition}
\begin{remark}
Rule systems are also called \emph{multiple-conclusion consequence relations} (e.g., in \cite{BezhanishviliEtAl2016SCR,Iemhoff2016CRaAR}). We prefer the terminology of rule systems (used in \cite{Jerabek2009CR}) for brevity.
\end{remark}
If $\mathcal{S}$ is a set of rule systems and $\Sigma, \Xi$ are sets of rules, we write $\Xi\oplus_{\mathcal{S}}\Sigma$ for the least rule system in $\mathcal{S}$ extending both $\Xi$ and $\Sigma$. A set of rules $\Sigma$ is said to \emph{axiomatise} a rule system $\mathtt{S}\in \mathcal{S}$ \emph{over} some rule system $\mathtt{S}'\in \mathcal{S}$ if $\mathtt{S}'\oplus_{\mathcal{S}}\Sigma=\mathtt{S}$.
If $\mathtt{S}$ is a rule system we let the set of \emph{tautologies} of $\mathtt{S}$ be the set
\[\mathsf{Taut}(\mathtt{S}):=\{\varphi\in \mathit{Frm}_\nu:/\varphi\in \mathtt{S}\}.\]
By the structurality condition for rule systems, it follows that $\mathsf{Taut}(\mathtt{S})$ is a logic for every rule system $\mathtt{S}$.
We interpret deductive systems over algebras in the same signature. If $\mathfrak{A}$ is a $\nu$-algebra we denote its carrier as $A$. Let $\mathfrak{A}$ be some $\nu$-algebra. A \emph{valuation} on $\mathfrak{A}$ is a map $V:\mathit{Prop}\to A$. Every valuation $V$ on $\mathfrak{A}$ may be recursively extended to a map $\bar V:\mathit{Frm}_\nu\to A$, by setting
\begin{align*}
\bar V(p)&:= V(p)\\
\bar V(f(\varphi_1, \ldots, \varphi_n))&:=f^{\mathfrak{A}}(\bar V(\varphi_1), \ldots, \bar V(\varphi_n)).
\end{align*}
A pair $(\mathfrak{A}, V)$ where $\mathfrak{A}$ is a $\nu$-algebra and $V$ a valuation on $\mathfrak{A}$ is called a \emph{model}. A rule $\Gamma/\Delta$ is \emph{valid} on a $\nu$-algebra $\mathfrak{A}$ if the following holds: for any valuation $V$ on $\mathfrak{A}$, if $\bar V(\gamma)=1$ for all $\gamma\in \Gamma$, then $\bar V(\delta)=1$ for some $\delta\in \Delta$. When this holds we write $\mathfrak{A}\models \Gamma/\Delta$, otherwise we write $\mathfrak{A}\nvDash \Gamma/\Delta$ and say that $\mathfrak{A}$ \emph{refutes} $\Gamma/\Delta$. As a special case, a formula $\varphi$ is valid on a $\nu$-algebra $\mathfrak{A}$ if the rule $/\varphi$ is. We write $\mathfrak{A}\models \varphi$ when this holds, $\mathfrak{A}\nvDash \varphi$ otherwise. The notion of validity extends to classes of $\nu$-algebras: $\mathcal{K}\models \Gamma/\Delta$ means that $\mathfrak{A}\models \Gamma/\Delta$ for every $\mathfrak{A}\in \mathcal{K}$, and $\mathcal{K}\nvDash \Gamma/\Delta$ means that $\mathfrak{A}\nvDash \Gamma/\Delta$ for some $\mathfrak{A}\in \mathcal{K}$. Analogous notation is used for formulae. Finally, if $\Xi$ is a set of formulae or rules and $\mathfrak{A}$ a $\nu$-algebra, $\mathfrak{A}\models \Xi$ means that every formula or rule in $\Xi$ is valid on $\mathfrak{A}$, $\mathfrak{A}\nvDash \Xi$ means that some formula or rule in $\Xi$ is not valid on $\mathfrak{A}$, and similarly for classes of $\nu$-algebras.
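To make the validity notion concrete, here is a brute-force sketch over the two-element Boolean algebra, a single $\nu$-algebra chosen purely for illustration; the tuple encoding of formulae is an assumption of the sketch, not the paper's notation.

```python
from itertools import product

# Operations of the two-element Boolean algebra on {0, 1}.
OPS = {
    'and': lambda x, y: x & y,
    'or':  lambda x, y: x | y,
    'imp': lambda x, y: (1 - x) | y,
    'not': lambda x: 1 - x,
}

def evaluate(V, phi):
    # The extension bar-V of a valuation V to all formulae.
    if phi[0] == 'var':
        return V[phi[1]]
    return OPS[phi[0]](*(evaluate(V, arg) for arg in phi[1:]))

def variables(phi):
    if phi[0] == 'var':
        return {phi[1]}
    return set().union(set(), *(variables(arg) for arg in phi[1:]))

def valid(gamma, delta):
    # Gamma/Delta is valid iff no valuation makes every premise equal to 1
    # while making no conclusion equal to 1.
    vs = sorted(set().union(set(), *(variables(f) for f in gamma + delta)))
    for bits in product((0, 1), repeat=len(vs)):
        V = dict(zip(vs, bits))
        if all(evaluate(V, g) == 1 for g in gamma) and \
           not any(evaluate(V, d) == 1 for d in delta):
            return False
    return True

p, q = ('var', 'p'), ('var', 'q')
print(valid([p, ('imp', p, q)], [q]))   # modus ponens: True
print(valid([p], [q]))                  # refuted by p = 1, q = 0: False
```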
Write $\mathcal{A}_\nu$ for the class of all $\nu$-algebras. For every deductive system $\mathtt{S}$ we define
\[\mathsf{Alg}(\mathtt{S}):=\{\mathfrak{A}\in \mathcal{A}_\nu:\mathfrak{A}\models \mathtt{S}\}.\]
Conversely, if $\mathcal{K}$ is a class of $\nu$-algebras we set
\begin{align*}
\mathsf{ThR}(\mathcal{K})&:=\{\Gamma/\Delta\in \mathit{Rul}_\nu:\mathcal{K}\models \Gamma/\Delta\}\\
\mathsf{Th}(\mathcal{K})&:=\{\varphi\in \mathit{Frm}_\nu:\mathcal{K}\models \varphi\}
\end{align*}
We also interpret deductive systems over $\nu$-formulae on expansions of Stone spaces dual to $\nu$-algebras, which for the moment we refer to as \emph{$\nu$-spaces}. Precise definitions of these topological structures and of valuations over them are given in each subsequent section. If $\mathfrak{X}$ is a $\nu$-space we denote its underlying domain as $X$, its family of open sets as $\mathcal{O}$, and its family of clopen sets as $\mathsf{Clop}(\mathfrak{X})$. Moreover, if $U\subseteq X$ we write $-U$ for $X\smallsetminus U$. Given a valuation $V$ on a $\nu$-space $\mathfrak{X}$, we call $(\mathfrak{X}, V)$ a (global) \emph{model}. A formula $\varphi$ is \emph{satisfied} on a model $(\mathfrak{X}, V)$ at a point $x\in X$ if $x\in \bar V(\varphi)$. In this case we write $\mathfrak{X}, V, x\models \varphi$; otherwise we write $\mathfrak{X}, V, x\nvDash \varphi$ and say that the model $(\mathfrak{X}, V)$ \emph{refutes} $\varphi$ at the point $x$. A rule $\Gamma/\Delta$ is \emph{valid} on a model $(\mathfrak{X}, V)$ if the following holds: if for every $x\in X$ we have $\mathfrak{X}, V, x\models \gamma$ for each $\gamma\in \Gamma$, then for every $x\in X$ we have $\mathfrak{X}, V, x\models \delta$ for some $\delta\in \Delta$. In this case we write $\mathfrak{X}, V\models \Gamma/\Delta$; otherwise we write $\mathfrak{X}, V\nvDash \Gamma/\Delta$ and say that the model $(\mathfrak{X}, V)$ \emph{refutes} $\Gamma/\Delta$. A rule $\Gamma/\Delta$ is \emph{valid} on a $\nu$-space $\mathfrak{X}$ if it is valid on the model $(\mathfrak{X}, V)$ for every valuation $V$ on $\mathfrak{X}$; otherwise $\mathfrak{X}$ \emph{refutes} $\Gamma/\Delta$. We write $\mathfrak{X}\models \Gamma/\Delta$ to mean that $\Gamma/\Delta$ is valid on $\mathfrak{X}$, and $\mathfrak{X}\nvDash\Gamma/\Delta$ to mean that $\mathfrak{X}$ refutes $\Gamma/\Delta$. 
As in the case of algebras we define validity on models and $\nu$-spaces for a formula $\varphi$ as validity of the rule $/\varphi$, and write $\mathfrak{X}\models \varphi$ if $\varphi$ is valid in $\mathfrak{X}$, otherwise $\mathfrak{X}\nvDash\varphi$. The notion of validity generalises to classes of $\nu$-spaces, so that if $\mathcal{K}$ is a class of $\nu$-spaces then $\mathcal{K}\models \Gamma/\Delta$ means $\mathfrak{X}\models \Gamma/\Delta$ for every $\mathfrak{X}\in \mathcal{K}$, and $\mathcal{K}\nvDash \Gamma/\Delta$ means $\mathfrak{X}\nvDash \Gamma/\Delta$ for some $\mathfrak{X}\in \mathcal{K}$. We extend the present notation for validity to sets of formulae or rules the same way as for algebras.
Write $\mathcal{S}_\nu$ for the class of all $\nu$-spaces. For every deductive system $\mathtt{S}$ we define
\[\mathsf{Spa}(\mathtt{S}):=\{\mathfrak{X}\in \mathcal{S}_\nu:\mathfrak{X}\models \mathtt{S}\}.\]
Conversely, if $\mathcal{K}$ is a class of $\nu$-spaces we set
\begin{align*}
\mathsf{ThR}(\mathcal{K})&:=\{\Gamma/\Delta\in \mathit{Rul}_\nu:\mathcal{K}\models \Gamma/\Delta\}\\
\mathsf{Th}(\mathcal{K})&:=\{\varphi\in \mathit{Frm}_\nu:\mathcal{K}\models \varphi\}
\end{align*}
Throughout the paper we study the structure of lattices of deductive systems via semantic methods. This is made possible by the following fundamental result, connecting the syntactic types of deductive systems to closure conditions on the classes of algebras validating them. \Cref{birkhoff} is widely known as \emph{Birkhoff's theorem}, after \cite{Birkhoff1935OtSoAA}.
\begin{theorem}[{\cite[Theorems II.11.9 and V.2.20]{BurrisSankappanavar1981ACiUA}}]
For every class $\mathcal{K}$ of $\nu$-algebras, the following conditions hold:\label{syntacticvarietiesuniclasses}
\begin{enumerate}
\item $\mathcal{K}$ is a variety iff $\mathcal{K}=\mathsf{Alg}(\mathtt{S})$ for some set of $\nu$-formulae $\mathtt{S}$. \label{birkhoff}
\item $\mathcal{K}$ is a universal class iff $\mathcal{K}=\mathsf{Alg}(\mathtt{S})$ for some set of $\nu$-rules $\mathtt{S}$.
\end{enumerate}
\end{theorem}
In this sense, $\nu$-logics correspond to varieties of $\nu$-algebras, whereas $\nu$-rule systems correspond to universal classes of $\nu$-algebras.
This concludes our general preliminaries. We now begin the study of modal companions via stable canonical rules.
\section{Modal Companions of Superintuitionistic Deductive Systems}
\label{ch:1}
This section studies the theory of modal companions of superintuitionistic deductive systems via stable canonical rules. Its main purpose is to present our method in detail and show that it performs as expected. After some brief preliminaries (\Cref{sec:preliminaries1}), we present superintuitionistic and modal stable canonical rules (\Cref{sec:scr1}). The main results of this section are included in \Cref{sec:structure1} and \Cref{sec:additionalresults}. The former uses stable canonical rules to give a characterisation of the set of modal companions of a superintuitionistic deductive system, and proves the Blok-Esakia theorem for both logics and rule systems. The latter proves an extension of the Dummett-Lemmon conjecture to rule systems, again using stable canonical rules.
The techniques presented in this section can also be applied to obtain axiomatic characterisations of the modal companion maps via stable canonical rules, as well as some results concerning the preservation of stability by the modal companion maps. More details on these topics can be found in \cite[Sections 2.3.3, 2.3.4]{Cleani2021TEVSCR}.
\subsection{Modal and Superintuitionistic Deductive Systems}\label{sec:preliminaries1}
We begin with a brief overview of the semantic and syntactic structures discussed throughout the present section.
\subsubsection{Superintuitionistic Deductive Systems, Heyting Algebras, and Esakia Spaces}\label{sec:int}
We work with the \emph{superintuitionistic signature}, \[si:=\{\land, \lor, \to, \bot, \top\}.\]
The set $\mathit{Frm_{si}}$ of superintuitionistic (si) formulae is defined recursively as follows.
\[\varphi::= p\, |\, \bot\, |\,\top\, |\, \varphi\land \varphi\, |\,\varphi\lor \varphi\, |\,\varphi\to \varphi.\]
We abbreviate $\varphi\leftrightarrow \psi:=(\varphi\to \psi)\land (\psi \to \varphi)$. We let $\mathtt{IPC}$ denote the \emph{intuitionistic propositional calculus}, and point the reader to \cite[Ch. 2]{ChagrovZakharyaschev1997ML} for an axiomatisation.
\begin{definition}
A \emph{superintuitionistic logic}, or si-logic for short, is a logic $\mathtt{L}$ over $\mathit{Frm}_{si}$ satisfying the following additional conditions:
\begin{enumerate}
\item $\mathtt{IPC}\subseteq \mathtt{L}$;
\item $\varphi\to \psi, \varphi\in \mathtt{L}$ implies $\psi\in \mathtt{L}$ (MP).
\end{enumerate}
A \emph{superintuitionistic rule system}, or si-rule system for short, is a rule system $\mathtt{L}$ over $\mathit{Frm}_{si}$ satisfying the following additional requirements.
\begin{enumerate}
\item $/\varphi\in \mathtt{L}$ whenever $\varphi\in \mathtt{IPC}$.
\item $\varphi,\varphi\to \psi /\psi\in \mathtt{L}$ (MP-R).
\end{enumerate}
\end{definition}
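For orientation, a standard concrete instance (not needed in what follows): classical logic arises inside this framework, since

```latex
\[
\mathtt{CPC} \;=\; \mathtt{IPC}\oplus(p\lor\neg p),
\qquad
\mathtt{CPC_R} \;=\; \mathtt{IPC_R}\oplus\{/\,p\lor\neg p\},
\]
```

where $\neg p$ abbreviates $p\to\bot$; every consistent si-logic is contained in $\mathtt{CPC}$.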
For every si-logic $\mathtt{L}$ write $\mathbf{Ext}(\mathtt{L})$ for the set of si-logics extending $\mathtt{L}$, and similarly for si-rule systems. Then $\mathbf{Ext}(\mathtt{IPC})$ is the set of all si-logics. It is well known that $\mathbf{Ext}(\mathtt{IPC})$ admits the structure of a complete lattice, with $\oplus_{\mathbf{Ext}(\mathtt{IPC})}$ serving as join and intersection as meet. Clearly, for every $\mathtt{L}\in \mathbf{Ext}(\mathtt{IPC})$ there exists a least si-rule system $\mathtt{L_R}$ containing $/\varphi$ for each $\varphi\in \mathtt{L}$. Hence $\mathtt{IPC_R}$ is the least rule system. The set $\mathbf{Ext}(\mathtt{IPC_R})$ is also a lattice when endowed with $\oplus_{\mathbf{Ext}(\mathtt{IPC_R})}$ as join and intersection as meet. Slightly abusing notation, we refer to these lattices as we refer to their underlying sets, i.e., $\mathbf{Ext}(\mathtt{IPC})$ and $\mathbf{Ext}(\mathtt{IPC_R})$ respectively. Additionally, we make use of systematic ambiguity and write both $\oplus_{\mathbf{Ext}(\mathtt{IPC})}$ and $\oplus_{\mathbf{Ext}(\mathtt{IPC_R})}$ simply as $\oplus$, leaving context to clarify which operation is meant.
The following proposition is central for transferring results about si-rule systems to si-logics. Its proof is routine.
\begin{proposition}
The mappings $(\cdot)_{\mathtt{R}}$ and $\mathsf{Taut}(\cdot)$ are mutually inverse complete lattice isomorphisms between $\mathbf{Ext}(\mathtt{IPC})$ and the sublattice of $\mathbf{Ext}(\mathtt{IPC_R})$ consisting of all si-rule systems $\mathtt{L}$ such that $\mathsf{Taut}(\mathtt{L})_\mathtt{R}=\mathtt{L}$.\label{deductivesystemisomorphismsi}
\end{proposition}
A \emph{Heyting algebra} is a tuple $\mathfrak{H}=(H, \land, \lor, \to, 0, 1)$ such that $(H, \land, \lor, 0, 1)$ is a bounded distributive lattice and for every $a, b, c\in H$ we have
\[c\leq a\to b\iff a\land c\leq b.\]
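Two familiar examples may help fix intuition (recorded here only for illustration). Every Boolean algebra is a Heyting algebra with $a\to b:=\neg a\lor b$, and for a topological space $(X,\mathcal{O})$ the lattice of open sets is a Heyting algebra with

```latex
\[
U\to V \;=\; \mathrm{int}\big((X\smallsetminus U)\cup V\big)
\qquad (U, V\in\mathcal{O}),
\]
```

since $W\subseteq \mathrm{int}((X\smallsetminus U)\cup V)$ iff $U\cap W\subseteq V$ for open $W$, as the residuation condition requires.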
We let $\mathsf{HA}$ denote the class of all Heyting algebras. By \Cref{syntacticvarietiesuniclasses}, $\mathsf{HA}$ is a variety. If $\mathcal{V}\subseteq \mathsf{HA}$ is a variety (resp: universal class) we write $\mathbf{Var}(\mathcal{V})$ and $\mathbf{Uni}(\mathcal{V})$ respectively for the lattice of subvarieties (resp: of universal subclasses) of $\mathcal{V}$.
The connections between $\mathbf{Ext}(\mathtt{IPC})$ and $\mathbf{Var}(\mathsf{HA})$ on the one hand, and between $\mathbf{Ext}(\mathtt{IPC_R})$ and $\mathbf{Uni}(\mathsf{HA})$ on the other, are as intimate as they come.
\begin{theorem}
The following maps are pairs of mutually inverse dual isomorphisms:
\begin{enumerate}
\item $\mathsf{Alg}:\mathbf{Ext}(\mathtt{IPC})\to \mathbf{Var}(\mathsf{HA})$ and $\mathsf{Th}:\mathbf{Var}(\mathsf{HA})\to \mathbf{Ext}(\mathtt{IPC})$;\label{algebraisationHAvar}
\item $\mathsf{Alg}:\mathbf{Ext}(\mathtt{IPC_R})\to \mathbf{Uni}(\mathsf{HA})$ and $\mathsf{ThR}:\mathbf{Uni}(\mathsf{HA})\to \mathbf{Ext}(\mathtt{IPC_R})$.\label{algebraisationHAuni}
\end{enumerate} \label{thm:algebraisationHA}
\end{theorem}
\Cref{algebraisationHAvar} is proved in \cite[Theorem 7.56]{ChagrovZakharyaschev1997ML}, whereas \Cref{algebraisationHAuni} follows from \cite[Theorem 2.2]{Jerabek2009CR} by standard techniques.
\begin{corollary}
Every si-logic $($resp. si-rule system$)$ is complete with respect to some variety $($resp. universal class$)$ of Heyting algebras. \label{completeness_si}
\end{corollary}
An \emph{Esakia space} is a tuple $\mathfrak{X}=(X, \leq, \mathcal{O})$, such that $(X, \mathcal{O})$ is a Stone space, $\leq$ is a partial order on $X$, and
\begin{enumerate}
\item ${\uparrow} x:=\{y\in X: x\leq y\}$ is closed for every $x\in X$;
\item ${\downarrow} U:=\{x\in X:{\uparrow} x\cap U\neq\varnothing\}\in \mathsf{Clop}(\mathfrak{X})$ for every $U\in \mathsf{Clop}(\mathfrak{X})$.
\end{enumerate}
We let $\mathsf{Esa}$ denote the class of all Esakia spaces. If $\mathfrak{X}, \mathfrak{Y}$ are Esakia spaces, a map $f:\mathfrak{X}\to \mathfrak{Y}$ is called a \emph{bounded morphism} if for all $x, y\in X$ we have that $x\leq y$ implies $f(x)\leq f(y)$, and for all $x\in X$ and $y\in Y$, $f(x)\leq y$ implies that there is $z\in X$ with $x\leq z$ and $f(z)=y$.
If $\mathfrak{X}$ is an Esakia space and $U\subseteq X$, we say that $U$ is an \emph{upset} if ${\uparrow}[U]=U$. We let $\mathsf{ClopUp}(\mathfrak{X})$ denote the set of clopen upsets in $\mathfrak{X}$. A \emph{valuation} on an Esakia space $\mathfrak{X}$ is a map $V:\mathit{Prop}\to \mathsf{ClopUp}(\mathfrak{X})$. A valuation $V$ on $\mathfrak{X}$ extends to a truth-set assignment $\bar V:\mathit{Frm}_{si}\to \mathsf{ClopUp}(\mathfrak{X})$ in the standard way, with
\[\bar V(\varphi\to \psi):=-{\downarrow}(\bar V(\varphi)\smallsetminus\bar V(\psi)).\]
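A small worked example (ours, purely illustrative): any finite poset with the discrete topology is an Esakia space, so take $X=\{x, y\}$ with $x\leq y$, and let $V(p)=\{y\}$ and $V(q)=\varnothing$. Then

```latex
\[
\bar V(p\to q)
\;=\; -{\downarrow}\big(\{y\}\smallsetminus\varnothing\big)
\;=\; -\{x, y\}
\;=\; \varnothing,
\qquad
\bar V(q\to p)\;=\; -{\downarrow}\varnothing\;=\; X,
\]
```

matching the familiar Kripke clause: $p\to q$ fails at both points because $y\geq x$ satisfies $p$ but not $q$.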
The following result recalls some important properties of Esakia spaces, used throughout the paper. For proofs the reader may consult \cite[Lemma 3.1.5, Theorem 3.2.1]{Esakia2019HADT}.
\begin{proposition}
Let $\mathfrak{X}\in \mathsf{Esa}$. Then for all $x, y\in X$ we have:\label{propesa}
\begin{enumerate}
\item If $x\not\leq y$ then there is $U\in \mathsf{ClopUp}(\mathfrak{X})$ such that $x\in U$ and $y\notin U$;\label{propesa1}
\item For all $U\in \mathsf{Clop}(\mathfrak{X})$ and $x\in U$, there is $y\in \mathit{max}_\leq (U)$ such that $x\leq y$.\label{propesa1b}
\end{enumerate}
\end{proposition}
\citet{Esakia1974TKM} proved that the category of Heyting algebras with corresponding homomorphisms is dually equivalent to the category of Esakia spaces with continuous bounded morphisms. The reader may consult \cite[\S 3.4]{Esakia2019HADT} for a detailed proof of this result. We denote the Esakia space dual to a Heyting algebra $\mathfrak{H}$ as $\mathfrak{H_*}$, and the Heyting algebra dual to an Esakia space $\mathfrak{X}$ as $\mathfrak{X}^*$.
\subsubsection{Modal Deductive Systems, Modal Algebras, and Modal Spaces}
We shall now work in the \emph{modal signature}, \[md:=\{\land, \lor, \neg, \square, \bot, \top\}.\]
The set $\mathit{Frm_{md}}$ of modal formulae is defined recursively as follows.
\[\varphi::= p\, |\, \bot\, |\,\top\, |\, \varphi\land \varphi\, |\,\varphi\lor \varphi\, |\,\neg \varphi\, |\, \square\varphi.\]
As usual we abbreviate $\lozenge\varphi:=\neg\square\neg \varphi$. Further, we let $\varphi\to \psi:=\neg\varphi\lor \psi$ and $\varphi\leftrightarrow\psi:=(\varphi\to\psi)\land (\psi\to \varphi)$.
\begin{definition}
A \emph{normal modal logic}, henceforth simply \emph{modal logic}, is a logic $\mathtt{M}$ over $\mathit{Frm}_{md}$ satisfying the following conditions:
\begin{enumerate}
\item $\mathtt{CPC}\subseteq \mathtt{M}$, where $\mathtt{CPC}$ is the classical propositional calculus;
\item $\square (\varphi\to \psi)\to (\square \varphi\to \square \psi)\in \mathtt{M}$;
\item $\varphi\to \psi, \varphi\in \mathtt{M}$ implies $\psi\in \mathtt{M}$ (MP);
\item $\varphi\in \mathtt{M}$ implies $\square \varphi\in \mathtt{M}$ (NEC).
\end{enumerate}
We denote the least modal logic as $\mathtt{K}$. A \emph{normal modal rule system}, henceforth simply \emph{modal rule system}, is a rule system $\mathtt{M}$ over $\mathit{Frm}_{md}$, satisfying the following additional requirements:
\begin{enumerate}
\item $/\varphi\in \mathtt{M}$ whenever $\varphi\in \mathtt{K}$;
\item $\varphi\to \psi,\varphi /\psi\in \mathtt{M}$ (MP-R);
\item $\varphi/\square\varphi\in \mathtt{M}$ (NEC-R).\label{nec}
\end{enumerate}
\end{definition}
If $\mathtt{M}$ is a modal logic let $\mathbf{NExt}(\mathtt{M})$ be the set of modal logics extending $\mathtt{M}$, and similarly for modal rule systems.
Obviously, the set of modal logics coincides with $\mathbf{NExt}(\mathtt{K})$. It is well known that $\mathbf{NExt}(\mathtt{K})$ forms a lattice under the operations $\oplus_{\mathbf{NExt}(\mathtt{K})}$ as join and intersection as meet. Clearly, for each $\mathtt{M}\in \mathbf{NExt}(\mathtt{K})$ there is always a least modal rule system $\mathtt{M_R}$ containing $/\varphi$ for each $\varphi\in \mathtt{M}$. Therefore, $\mathtt{K_R}$ is the least modal rule system. The set $\mathbf{NExt}(\mathtt{K_R})$ is also a lattice when endowed with $\oplus_{\mathbf{NExt}(\mathtt{K_R})}$ as join and intersection as meet. With slight abuse of notation, we refer to these lattices as we refer to their underlying sets, i.e., $\mathbf{NExt}(\mathtt{K})$ and $\mathbf{NExt}(\mathtt{K_R})$ respectively. Additionally, we make use of systematic ambiguity and write both $\oplus_{\mathbf{NExt}(\mathtt{K})}$ and $\oplus_{\mathbf{NExt}(\mathtt{K_R})}$ simply as $\oplus$, leaving context to clarify which operation is meant.
We have a modal counterpart of \Cref{deductivesystemisomorphismsi}.
\begin{proposition}
The mappings $(\cdot)_{\mathtt{R}}$ and $\mathsf{Taut}(\cdot)$ are mutually inverse complete lattice isomorphisms between $\mathbf{NExt}(\mathtt{K})$ and the sublattice of $\mathbf{NExt}(\mathtt{K_R})$ consisting of all modal rule systems $\mathtt{M}$ such that $\mathsf{Taut}(\mathtt{M})_\mathtt{R}=\mathtt{M}$.\label{deductivesystemisomorphismmodal}
\end{proposition}
A \emph{modal algebra} is a tuple $\mathfrak{A}=(A, \land, \lor, \neg, \square, 0, 1)$ such that $(A, \land, \lor, \neg, 0, 1)$ is a Boolean algebra and the following equations hold:
\begin{align}
\square 1&=1,\\
\square(a\land b)&=\square a\land \square b.
\end{align}
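The motivating example (standard, and used only for intuition): for a Kripke frame $(W, R)$, the powerset Boolean algebra $\wp(W)$ becomes a modal algebra under

```latex
\[
\square U \;=\; \{\,x\in W : R[x]\subseteq U\,\},
\]
```

and both equations are immediate: $R[x]\subseteq W$ always holds, and $R[x]\subseteq U\cap V$ iff $R[x]\subseteq U$ and $R[x]\subseteq V$.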
We let $\mathsf{MA}$ denote the class of all modal algebras. By \Cref{syntacticvarietiesuniclasses}, $\mathsf{MA}$ is a variety. We let $\mathbf{Var}(\mathsf{MA})$ and $\mathbf{Uni}(\mathsf{MA})$ be the lattice of subvarieties and the lattice of universal subclasses of $\mathsf{MA}$ respectively. We have the following analogue of \Cref{thm:algebraisationHA}.
\begin{theorem}
The following maps are pairs of mutually inverse dual isomorphisms: \label{thm:algebraisationMA}
\begin{enumerate}
\item $\mathsf{Alg}:\mathbf{NExt}(\mathtt{K})\to \mathbf{Var}(\mathsf{MA})$ and $\mathsf{Th}:\mathbf{Var}(\mathsf{MA})\to \mathbf{NExt}(\mathtt{K})$;\label{algebraisationMAvar}
\item $\mathsf{Alg}:\mathbf{NExt}(\mathtt{K_R})\to \mathbf{Uni}(\mathsf{MA})$ and $\mathsf{ThR}:\mathbf{Uni}(\mathsf{MA})\to \mathbf{NExt}(\mathtt{K_R})$.\label{algebraisationMAuni}
\end{enumerate}
\end{theorem}
\Cref{algebraisationMAvar} is proved in \cite[Theorem 7.56]{ChagrovZakharyaschev1997ML}, whereas \Cref{algebraisationMAuni} follows from \cite[Theorem 2.5]{BezhanishviliGhilardi2014MCRHSaSF}.
\begin{corollary}
Every modal logic $($resp. modal rule system$)$ is complete with respect to some variety $($resp. universal class$)$ of modal algebras. \label{completeness_md}
\end{corollary}
A \emph{modal space} is a tuple $\mathfrak{X}=(X, R, \mathcal{O})$, such that $(X, \mathcal{O})$ is a Stone space, $R\subseteq X\times X$ is a binary relation, and
\begin{enumerate}
\item $R[x]:=\{y\in X: Rxy\}$ is closed for every $x\in X$;
\item $R^{-1} (U):=\{x\in X:R[x]\cap U\neq\varnothing\}\in \mathsf{Clop}(\mathfrak{X})$ for every $U\in \mathsf{Clop}(\mathfrak{X})$.
\end{enumerate}
We let $\mathsf{Mod}$ denote the class of all modal spaces. If $\mathfrak{X}, \mathfrak{Y}$ are modal spaces, a map $f:\mathfrak{X}\to \mathfrak{Y}$ is called a \emph{bounded morphism} when for all $x, y\in X$, $Rxy$ implies $Rf(x)f(y)$, and for all $x\in X$ and $y\in Y$, $Rf(x)y$ implies that there is $z\in X$ with $Rxz$ and $f(z)=y$. A \emph{valuation} on a modal space $\mathfrak{X}$ is a map $V:\mathit{Prop}\to \mathsf{Clop}(\mathfrak{X})$. A valuation extends to a full truth-set assignment $\bar V:\mathit{Frm}_{md}\to \mathsf{Clop}(\mathfrak{X})$ in the usual way.
By a generalisation of Stone duality, the category of modal algebras with corresponding homomorphisms is dually equivalent to the category of modal spaces with continuous bounded morphisms. A proof of this result can be found, e.g., in \cite[Sections 3, 4]{SambinVaccaro1988TaDiML}. We denote the modal space dual to a modal algebra $\mathfrak{A}$ as $\mathfrak{A_*}$, and the modal algebra dual to a modal space $\mathfrak{X}$ as $\mathfrak{X}^*$.
In this paper we are mostly concerned with modal algebras and modal spaces validating one of the following modal logics.
\begin{align*}
\mathtt{K4}&:=\mathtt{K}\oplus \square p\to \square \square p\\
\mathtt{S4}&:=\mathtt{K4}\oplus \square p\to p
\end{align*}
We let $\mathsf{K4}:=\mathsf{Alg}(\mathtt{K4})$ and $\mathsf{S4}:=\mathsf{Alg}(\mathtt{S4})$. We call algebras in $\mathsf{K4}$ \emph{transitive algebras}, and algebras in $\mathsf{S4}$ \emph{closure algebras}. It is obvious that for every $\mathfrak{A}\in \mathsf{MA}$, $\mathfrak{A}\in \mathsf{K4}$ iff $\square a\leq \square \square a$ for every $a\in A$, and $\mathfrak{A}\in \mathsf{S4}$ iff $\mathfrak{A}\in \mathsf{K4}$ and additionally $\square a\leq a$ for every $a\in A$. Moreover, it is easy to see that a modal space validates $\mathtt{K4}$ iff it has a transitive relation, and that it validates $\mathtt{S4}$ iff it has a reflexive and transitive relation (see, e.g., \citealt[Section 3.8]{ChagrovZakharyaschev1997ML}).
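A classical source of closure algebras, noted only as an illustration: for any topological space $(X, \mathcal{O})$, the powerset algebra $\wp(X)$ with $\square:=\mathrm{int}$ is a closure algebra, since

```latex
\[
\mathrm{int}(A)\subseteq A,
\qquad
\mathrm{int}(A)\subseteq \mathrm{int}(\mathrm{int}(A))
\qquad (A\subseteq X),
\]
```

which are the algebraic forms of the reflexivity and transitivity axioms of $\mathtt{S4}$.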
Let $\mathfrak{X}\in \mathsf{Spa}(\mathtt{K4})$. A subset $C\subseteq X$ is called a \emph{cluster} if it is an equivalence class under the relation $\sim$ defined by $x\sim y$ iff $Rxy$ and $Ryx$. A cluster is called \emph{improper} if it is a singleton, otherwise we call it \emph{proper}.
We recall some basic properties of $\mathtt{K4}$- and $\mathtt{S4}$-spaces.
\begin{proposition}
Let $\mathfrak{X}\in \mathsf{Spa}(\mathtt{S4})$ and $U\in \mathsf{Clop}(\mathfrak{X})$. Then the following conditions hold: \label{props4}
\begin{enumerate}
\item The set $\mathit{qmax}_R(U)$ is closed;
\item If $x\in U$ then there is $y\in \mathit{qmax}_R(U)$ such that $Rxy$.
\end{enumerate}
Moreover, let $\mathfrak{X}\in \mathsf{Spa}(\mathtt{K4})$ and $U\in \mathsf{Clop}(\mathfrak{X})$. Then the following conditions hold:
\begin{enumerate}[resume]
\item The structure $(X, R^+)$, with the same topology as $\mathfrak{X}$, is an $\mathtt{S4}$-space, where for all $x, y\in X$ we have $R^+xy$ iff $Rxy$ or $x=y$;
\item The set $\mathit{qmax}_R(U)$ is closed;
\item If $x\in U$ then there is $y\in \mathit{qmax}_R(U)$ such that $Rxy$.
\end{enumerate}
\end{proposition}
Properties 1, 2 are proved in \cite[Theorems 3.2.1, 3.2.3]{Esakia2019HADT}. Property 3 is straightforward to check, and properties 4, 5 are immediate consequences of 1, 2, and 3.
Among extensions of $\mathtt{S4}$, the modal logic $\mathtt{GRZ}$ plays a particularly central role in this paper.
\begin{align*}
\mathtt{GRZ}:&=\mathtt{K}\oplus\square (\square( p\to\square p)\to p)\to p\\
&=\mathtt{S4}\oplus\square (\square( p\to\square p)\to p)\to p
\end{align*}
We let $\mathsf{GRZ}:=\mathsf{Alg}(\mathtt{GRZ})$. It is not difficult to see that $\mathsf{GRZ}$ coincides with the class of all closure algebras $\mathfrak{A}$ such that for every $a\in A$ we have
\[\square(\square( a\to\square a)\to a)\leq a\]
or equivalently,
\[a\leq \lozenge(a\land \neg \lozenge (\lozenge a \land \neg a)).\]
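The equivalence of these two formulations is a routine computation, which we record as a sketch: since $a\mapsto \neg a$ is a bijection of $A$, the first inequality holds for all $a$ iff it holds for all $\neg a$, and instantiating at $\neg a$ and contraposing gives

```latex
\begin{align*}
\square\big(\square(\neg a\to\square\neg a)\to\neg a\big)\leq\neg a
&\iff a\leq\lozenge\big(a\land\square(\neg a\to\square\neg a)\big)\\
&\iff a\leq\lozenge\big(a\land\neg\lozenge(\lozenge a\land\neg a)\big),
\end{align*}
```

where the last step uses $\neg(\neg a\to\square\neg a)=\neg a\land\lozenge a$, so that $\square(\neg a\to\square\neg a)=\neg\lozenge(\lozenge a\land\neg a)$.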
A poset $(X, R)$ is called \emph{Noetherian} if it contains no infinite $R$-ascending chain of pairwise distinct points. It is well known that $\mathtt{GRZ}$ is complete with respect to the class of Noetherian partially ordered Kripke frames \cite[Corollary 5.52]{ChagrovZakharyaschev1997ML}. In general, $\mathtt{GRZ}$-spaces may fail to be partially ordered.
Still, clusters cannot occur just anywhere in a $\mathtt{GRZ}$-space, as the following result clarifies.
\begin{proposition}
For every $\mathtt{GRZ}$-space $\mathfrak{X}$ and ${U}\in \mathsf{Clop}(\mathfrak{X})$, the following hold: \label{propgrz1}
\begin{enumerate}
\item $\mathit{qmax}_R(U)\subseteq \mathit{max}_R(U)$;\label{propgrz1:1}
\item The set $\mathit{max}_R(U)$ is closed;\label{propgrz1:2}
\item For every $x\in U$ there is $y\in \mathit{pas}_R(U)$ such that $Rxy$;\label{propgrz1:3}
\item $\mathit{max}_R(U)\subseteq\mathit{pas}_R(U)$. \label{propgrz1:4}
\end{enumerate}
\end{proposition}
\Cref{propgrz1:1} is proved in \cite[Theorem 3.5.6]{Esakia2019HADT}. \Cref{propgrz1:2} follows from \Cref{propgrz1:1} and \Cref{props4}. \Cref{propgrz1:3} is immediate from the $\mathtt{GRZ}$-axiom. \Cref{propgrz1:4} then follows from \Cref{props4}, \Cref{propgrz1:1}, and \Cref{propgrz1:3}.
Let us say that $U\subseteq X$ \emph{cuts} a cluster $C\subseteq X$ if both $U\cap C\neq\varnothing$ and $U\smallsetminus C\neq\varnothing$. As an immediate consequence of \Cref{propgrz1:4} in \Cref{propgrz1} we obtain that for any $U\in \mathsf{Clop}(\mathfrak{X})$, neither $\mathit{max}_R(U)$ nor $\mathit{pas}_R(U)$ cuts any cluster in $\mathfrak{X}$.
\subsection{Stable Canonical Rules for Superintuitionistic and Modal Rule Systems}\label{sec:scr1}
In both the si and the modal case, the \emph{filtration} technique can be used to construct finite countermodels to a non-valid rule $\Gamma/\Delta$. Roughly, this construction consists in expanding finitely generated subreducts, in a locally finite signature, of arbitrary countermodels to $\Gamma/\Delta$, in such a way that the operation added to the subreduct agrees with the original one on selected elements. Si and modal \emph{stable canonical rules} are essentially syntactic devices for encoding finite filtrations. The present section briefly reviews this method in both the si and the modal case. We point the reader to \cite{BezhanishviliEtAl2016SCR,BezhanishviliEtAl2016CSL,BezhanishviliBezhanishvili2017LFRoHAaCF,BezhanishviliBezhanishvili2020JFaATfIL} and \cite[Ch. 5]{Ilin2018FRLoSNCL} for more in-depth discussion.
\subsubsection{Superintuitionistic Case} We begin by defining si stable canonical rules.
\begin{definition}
Let $\mathfrak{H}\in \mathsf{HA}$ be finite and $D\subseteq H\times H$. For every $a\in H$ introduce a fresh propositional variable $p_a$. The \emph{si stable canonical rule} of $(\mathfrak{H}, D)$ is defined as the rule $\scrsi{H}{D}=\Gamma/\Delta$, where
\begin{align*}
\Gamma=&\{p_0\leftrightarrow 0\}\cup\{p_1\leftrightarrow 1\}\cup\\
&\{p_{a\land b}\leftrightarrow p_a\land p_b:a, b\in H\}\cup \{p_{a\lor b}\leftrightarrow p_a\lor p_b:a, b\in H\}\cup\\
& \{p_{a\to b}\leftrightarrow p_a\to p_b:(a, b)\in D\}\\
\Delta=&\{p_a\leftrightarrow p_b:a, b\in H\text{ with } a\neq b\}.
\end{align*}
\end{definition}
We write si stable canonical rules of the form $\scrsi{H}{\varnothing}$ simply as $\srsi{H}$, and call them \emph{stable rules}.
If $\mathfrak{H}, \mathfrak{K}\in \mathsf{HA}$, let us call a map $h:\mathfrak{H}\to \mathfrak{K}$ \emph{stable} if $h$ is a bounded lattice homomorphism, i.e., if it preserves $0, 1, \land$, and $\lor$. If $D\subseteq H\times H$, we say that $h$ satisfies the \emph{bounded domain condition} (BDC) for $D$ if \[h(a\to b)=h(a)\to h(b)\] for every $(a, b)\in D$. It is not difficult to check that every stable map $h:\mathfrak{H}\to \mathfrak{K}$ satisfies $h(a\to b)\leq h(a)\to h(b)$ for every $(a, b)\in H\times H$.
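A minimal example separating stability from the BDC (our own illustration): let $\mathfrak{K}$ be the three-element chain $0<b<1$ and $\mathfrak{H}$ the four-element diamond $\{0, b, c, 1\}$ with $b, c$ incomparable, both regarded as Heyting algebras. The inclusion $h:\mathfrak{K}\to \mathfrak{H}$ preserves $0, 1, \land$, and $\lor$, hence is stable, yet

```latex
\[
h(b\to_{\mathfrak{K}}0) \;=\; 0 \;<\; c \;=\; h(b)\to_{\mathfrak{H}} h(0),
\]
```

so $h$ fails the BDC for $D=\{(b, 0)\}$, showing that the inequality $h(a\to b)\leq h(a)\to h(b)$ can be strict.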
\begin{remark}
The BDC was originally called \emph{closed domain condition} in, e.g., \cite{BezhanishviliEtAl2016SCR,BezhanishviliBezhanishvili2017LFRoHAaCF}, following Zakharyaschev's terminology for a similar notion in the theory of his canonical formulae. The name \emph{stable domain condition} was later used in \cite{BezhanishviliBezhanishvili2020JFaATfIL} to stress the difference with Zakharyaschev's notion. However, this choice may create confusion between the BDC and the property of being a stable map. The terminology used in this paper is meant to avoid this, while concurrently highlighting the similarity between the geometric version of the BDC, to be presented in a few paragraphs, and the definition of a bounded morphism.
\end{remark}
The next two results characterise refutation conditions for si stable canonical rules. For detailed proofs the reader may consult \cite[Proposition 3.2]{BezhanishviliEtAl2016CSL}.
\begin{proposition}
For every finite $\mathfrak{H}\in \mathsf{HA}$ and $D\subseteq H\times H$, we have $\mathfrak{H}\nvDash \scrsi{H}{D}$. \label{prop:siscr-refutation-1}
\end{proposition}
\begin{proof}[Proof sketch]
Use the valuation $V(p_a)=a$.
\end{proof}
\begin{proposition}
For every $\mathfrak{H}, \mathfrak{K}\in \mathsf{HA}$ with $\mathfrak{H}$ finite, and every $D\subseteq H\times H$, we have $\mathfrak{K}\nvDash \scrsi{H}{D}$ iff there is a stable embedding $h:\mathfrak{H}\to \mathfrak{K}$ satisfying the BDC for $D$.\label{prop:siscr-refutation-2}
\end{proposition}
\begin{proof}[Proof sketch]
$(\Rightarrow)$ Assume $\mathfrak{K}\nvDash \scrsi{H}{D}$, and take a valuation $V$ on $\mathfrak{K}$ such that $\mathfrak{K}, V\nvDash \scrsi{H}{D}$. Define a map $h:\mathfrak{H}\to \mathfrak{K}$ by setting $h(a)=V(p_a)$. Then $h$ is the desired stable embedding satisfying the BDC for $D$.
$(\Leftarrow)$ Assume we have a stable embedding $h:\mathfrak{H}\to \mathfrak{K}$ satisfying the BDC for $D$. By the proof of \Cref{prop:siscr-refutation-1}, the valuation $V$ with $V(p_a)=a$ witnesses $\mathfrak{H}\nvDash \scrsi{H}{D}$. Since $h$ is a stable embedding satisfying the BDC for $D$, the valuation $V'$ with $V'(p_a)=h(a)$ then witnesses $\mathfrak{K}\nvDash \scrsi{H}{D}$.
\end{proof}
Si stable canonical rules also have uniform refutation conditions on Esakia spaces. If $\mathfrak{X}, \mathfrak{Y}$ are Esakia spaces, a map $f:\mathfrak{X}\to \mathfrak{Y}$ is called \emph{stable} if $x\leq y$ implies $f(x)\leq f(y)$, for all $x, y\in X$. If $\mathfrak{d}\subseteq Y$ we say that $f$ satisfies the BDC for $\mathfrak{d}$ if for all $x\in X$,
\[{{\uparrow}}f(x)\cap \mathfrak{d}\neq\varnothing\Rightarrow f[{{\uparrow}}x]\cap \mathfrak{d}\neq\varnothing.\]
If $\mathfrak{D}\subseteq \wp(Y)$ then we say that $f$ satisfies the BDC for $\mathfrak{D}$ if it does for each $\mathfrak{d}\in \mathfrak{D}$. If $\mathfrak{H}$ is a finite Heyting algebra and $D\subseteq H\times H$, for every $(a, b)\in D$ set $\mathfrak{d}_{(a, b)}:=\beta (a)\smallsetminus\beta (b)$, where $\beta(a)\in \mathsf{ClopUp}(\mathfrak{H}_*)$ denotes the clopen upset corresponding to $a$ under Esakia duality. Finally, put \[\mathfrak{D}:=\{\mathfrak{d}_{(a, b)}:(a, b)\in D\}.\] The following result follows straightforwardly from \cite[Lemma 4.3]{BezhanishviliBezhanishvili2017LFRoHAaCF}.
\begin{proposition}
For every Esakia space $\mathfrak{X}$ and any si stable canonical rule $\scrsi{H}{D}$, we have $\mathfrak{X}\nvDash\scrsi{H}{D}$ iff there is a continuous stable surjection $f:\mathfrak{X}\to \mathfrak{H}_*$ satisfying the BDC for the family $\mathfrak{D}:=\{\mathfrak{d}_{(a, b)}:(a, b)\in D\}$.\label{refutspace}
\end{proposition}
In view of \Cref{refutspace}, when working with Esakia spaces we shall often write a si stable canonical rule $\scrsi{H}{D}$ as $\scrsi{H_*}{\mathfrak{D}}$.
Stable maps and the BDC are closely related to the filtration construction. We recall its definition in an algebraic setting, and state the fundamental theorem used in most of its applications.
\begin{definition}
Let $\mathfrak{H}$ be a Heyting algebra, $V$ a valuation on $\mathfrak{H}$, and $\Theta$ a finite, subformula closed set of formulae. A (finite) model $(\mathfrak{K}', V')$ is called a (\emph{finite}) \emph{filtration of $(\mathfrak{H}, V)$ through $\Theta$} if the following hold:
\begin{enumerate}
\item $\mathfrak{K}'=(\mathfrak{K}, \to)$, where $\mathfrak{K}$ is the bounded sublattice of $\mathfrak{H}$ generated by $\bar V[\Theta]$;
\item $V(p)=V'(p)$ for every propositional variable $p\in \Theta$;
\item The inclusion $\subseteq:\mathfrak{K}'\to \mathfrak{H}$ is a stable embedding satisfying the BDC for the set \[\{(\bar V'(\varphi), \bar V'(\psi)):\varphi\to \psi\in \Theta\}.\]
\end{enumerate}
\end{definition}
\begin{theorem}[Filtration theorem for Heyting algebras]
Let $\mathfrak{H}\in \mathsf{HA}$, $V$ a valuation on $\mathfrak{H}$, and $\Theta$ a finite, subformula closed set of formulae. If $(\mathfrak{K}', V')$ is a filtration of $(\mathfrak{H}, V)$ through $\Theta$ then for every $\varphi\in \Theta$ we have
\[\bar V(\varphi)=\bar V'(\varphi).\]
Consequently, for every rule $\Gamma/\Delta$ such that $\gamma, \delta\in \Theta$ for each $\gamma\in \Gamma$ and $\delta\in \Delta$ we have
\[\mathfrak{H}, V\models \Gamma/\Delta\iff \mathfrak{K}', V'\models \Gamma/\Delta.\]
\end{theorem}
A proof of the filtration theorem above follows from, e.g., the proof of \cite[Lemma 3.6]{BezhanishviliBezhanishvili2017LFRoHAaCF}.
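To indicate the shape of the argument (a sketch only): the filtration theorem is proved by induction on $\varphi\in \Theta$. The cases for variables, $0$, $1$, $\land$, and $\lor$ hold because $\mathfrak{K}$ is a bounded sublattice of $\mathfrak{H}$ on which $V$ and $V'$ agree; for $\varphi\to\psi\in\Theta$, condition (3) and the induction hypothesis give

```latex
\[
\bar V'(\varphi\to\psi)
\;=\;\bar V'(\varphi)\to_{\mathfrak{K}'}\bar V'(\psi)
\;=\;\bar V'(\varphi)\to_{\mathfrak{H}}\bar V'(\psi)
\;=\;\bar V(\varphi)\to_{\mathfrak{H}}\bar V(\psi)
\;=\;\bar V(\varphi\to\psi).
\]
```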
The next result establishes that every si rule is equivalent to finitely many si stable canonical rules. This lemma was proved in \cite[Proposition 3.3]{BezhanishviliEtAl2016CSL}, but we rehearse the proof here to illustrate the exact role of filtration in the machinery of stable canonical rules.
\begin{lemma}
For every si rule $\Gamma/\Delta$ there is a finite set $\Xi$ of si stable canonical rules such that for any $\mathfrak{K}\in \mathsf{HA}$ we have $\mathfrak{K}\nvDash \Gamma/\Delta$ iff there is $\scrsi{H}{D}\in \Xi$ such that $\mathfrak{K}\nvDash \scrsi{H}{D}$.\label{rewritesi}
\end{lemma}
\begin{proof}
Since bounded distributive lattices are locally finite there are, up to isomorphism, only finitely many pairs $(\mathfrak{H}, D)$ such that
\begin{itemize}
\item $\mathfrak{H}$ is at most $k$-generated as a bounded distributive lattice, where $k=|\mathit{Sfor}(\Gamma/\Delta)|$;
\item $D=\{(\bar V(\varphi), \bar V(\psi)):\varphi\to \psi\in \mathit{Sfor}(\Gamma/\Delta)\}$, where $V$ is a valuation on $\mathfrak{H}$ refuting $\Gamma/\Delta$.
\end{itemize}
Let $\Xi$ be the set of all rules $\scrsi{H}{D}$ for all such pairs $(\mathfrak{H}, D)$, identified up to isomorphism.
$(\Rightarrow)$ Assume $\mathfrak{K}\nvDash \Gamma/\Delta$ and take a valuation $V$ on $\mathfrak{K}$ refuting $\Gamma/\Delta$. Consider the bounded distributive sublattice $\mathfrak{J}$ of $\mathfrak{K}$ generated by $\bar V[\mathit{Sfor}(\Gamma/\Delta)]$. Since bounded distributive lattices are locally finite, $\mathfrak{J}$ is finite. Define a binary operation $\rightsquigarrow$ on $\mathfrak{J}$ by setting, for all $a, b\in J$,
\[a\rightsquigarrow b:=\bigvee\{c\in J: a\land c\leq b\}.\]
Clearly, $\mathfrak{J}':=(\mathfrak{J}, \rightsquigarrow)$ is a Heyting algebra. Writing $\Theta:=\mathit{Sfor}(\Gamma/\Delta)$, define a valuation $V'$ on $\mathfrak{J}'$ with $V'(p)=V(p)$ if $p\in \Theta$, and $V'(p)$ arbitrary otherwise.
Since $\mathfrak{J}'$ is a sublattice of $\mathfrak{K}$, the inclusion $\subseteq$ is a stable embedding. Now let $\varphi\to \psi\in \Theta$. Then $\bar V'(\varphi)\to \bar V'(\psi)\in J$. From the fact that $\subseteq$ is a stable embedding it follows that $\bar V'(\varphi)\rightsquigarrow \bar V'(\psi)\leq \bar V'(\varphi)\to \bar V'(\psi)$. Conversely, by definition of $\rightsquigarrow$ we find $\bar V'(\varphi)\rightsquigarrow \bar V'(\psi)\land \bar V'(\varphi)\leq \bar V'(\psi)$. But then by the properties of Heyting algebras it follows that $\bar V'(\varphi)\to \bar V'(\psi)\leq \bar V'(\varphi)\rightsquigarrow \bar V'(\psi)$. Thus $\bar V'(\varphi)\rightsquigarrow \bar V'(\psi)= \bar V'(\varphi)\to \bar V'(\psi)$ as desired. We have shown that the model $(\mathfrak{J}', V')$ is a filtration of the model $(\mathfrak{K}, V)$ through $\mathit{Sfor}(\Gamma/\Delta)$, which implies $\mathfrak{J}', V'\nvDash \Gamma/\Delta$. Setting $D:=\{(\bar V'(\varphi), \bar V'(\psi)):\varphi\to \psi\in \mathit{Sfor}(\Gamma/\Delta)\}$, we have $\scrsi{J'}{D}\in \Xi$, and the inclusion $\subseteq:\mathfrak{J}'\to \mathfrak{K}$ is a stable embedding satisfying the BDC for $D$, so $\mathfrak{K}\nvDash \scrsi{J'}{D}$ by \Cref{prop:siscr-refutation-2}.
$(\Leftarrow)$ Assume that there is $\scrsi{H}{D}\in \Xi$ such that $\mathfrak{K}\nvDash \scrsi{H}{D}$. Let $V$ be the valuation on $\mathfrak{H}$ associated with $D$ in the sense spelled out above, so that $\mathfrak{H}, V\nvDash \Gamma/\Delta$. By \Cref{prop:siscr-refutation-2} there is a stable embedding $h:\mathfrak{H}\to \mathfrak{K}$ satisfying the BDC for $D$. Letting $U$ be the valuation on $\mathfrak{K}$ with $U(p):=h(V(p))$, the model $(\mathfrak{H}, V)$ is, up to the embedding $h$, a filtration of $(\mathfrak{K}, U)$ through $\mathit{Sfor}(\Gamma/\Delta)$, so by the filtration theorem $\mathfrak{K}, U\nvDash \Gamma/\Delta$.
\end{proof}
As an immediate consequence we obtain a uniform axiomatisation of all si-rule systems by means of si stable canonical rules.
\begin{theorem}[{\cite[Proposition 3.4]{BezhanishviliEtAl2016CSL}}]
Any si-rule system $\mathtt{L}\in \mathbf{Ext}(\mathtt{IPC_R})$ is axiom\-atisable over $\mathtt{IPC_R}$ by some set of si stable canonical rules. \label{axiomatisationsi}
\end{theorem}
\begin{proof}
Let $\mathtt{L}\in \mathbf{Ext}(\mathtt{IPC_R})$, and take a set of rules $\Xi$ such that $\mathtt{L}=\mathtt{IPC_R}\oplus \Xi$. By \Cref{rewritesi} and the completeness of $\mathtt{IPC_R}$ (\Cref{completeness_si}), for every $\Gamma/\Delta\in \Xi$ there is a finite set $\Pi_{\Gamma/\Delta}$ of si stable canonical rules whose conjunction is equivalent to $\Gamma/\Delta$. But then $\mathtt{L}=\mathtt{IPC_R}\oplus \bigcup_{\Gamma/\Delta\in \Xi}\Pi_{\Gamma/\Delta}$.
\end{proof}
\subsubsection{Modal Case} \label{sec:scrmod} We now turn to modal stable canonical rules.
\begin{definition}
Let $\mathfrak{A}\in \mathsf{MA}$ be finite and $D\subseteq A$. For every $a\in A$ introduce a fresh propositional variable $p_a$. The \emph{modal stable canonical rule} of $(\mathfrak{A}, D)$ is defined as the rule $\scrmod{A}{D}=\Gamma/\Delta$, where
\begin{align*}
\Gamma&=\{p_{\neg a}\leftrightarrow \neg p_a:a\in A\}\cup\\
&\{p_{a\land b}\leftrightarrow p_a\land p_b:a, b\in A\}\cup \{p_{a\lor b}\leftrightarrow p_a\lor p_b:a, b\in A\}\cup\\
& \{p_{\square a}\to \square p_a:a\in A\}\cup \{\square p_a\to p_{\square a}:a\in D\}\\
\Delta&=\{p_a:a\in A\smallsetminus \{1\}\}.
\end{align*}
\end{definition}
As in the si case, a modal stable canonical rule of the form $\scrmod{A}{\varnothing}$ is written simply as $\srmod{A}$ and called a \emph{stable rule}.
If $\mathfrak{A}, \mathfrak{B}\in \mathsf{MA}$ are modal algebras, let us call a map $h:\mathfrak{A}\to \mathfrak{B}$ \emph{stable} if for every $a\in A$ we have $h(\square a)\leq \square h(a)$. If $D\subseteq A$, we say that $h$ satisfies the \emph{bounded domain condition} (BDC) for $D$ if $h(\square a)= \square h(a)$ for every $a\in D$.
The following two propositions are modal counterparts to \Cref{prop:siscr-refutation-1,prop:siscr-refutation-2}. Their proofs are similar to those of the latter, and can be found in \cite[Lemma 5.3, Theorem 5.4]{BezhanishviliEtAl2016SCR}.
\begin{proposition}
For every finite $\mathfrak{A}\in \mathsf{MA}$ and $D\subseteq A$, we have $\mathfrak{A}\nvDash \scrmod{A}{D}$. \label{prop:modscr-refutation-1}
\end{proposition}
\begin{proposition}
For every $\mathfrak{A}, \mathfrak{B}\in \mathsf{MA}$ with $\mathfrak{A}$ finite, and every $D\subseteq A$, we have $\mathfrak{B}\nvDash \scrmod{A}{D}$ iff there is a stable embedding $h:\mathfrak{A}\to \mathfrak{B}$ satisfying the BDC for $D$.\label{prop:modscr-refutation-2}
\end{proposition}
Refutation conditions for modal stable canonical rules on modal spaces are obtained in analogous fashion to the si case. If $\mathfrak{X}, \mathfrak{Y}$ are modal spaces, a map $f:\mathfrak{X}\to \mathfrak{Y}$ is called \emph{stable} if for all $x, y\in X$, we have that $Rx y$ implies $Rf(x) f(y)$. If $\mathfrak{d}\subseteq Y$ we say that $f$ satisfies the BDC for $\mathfrak{d}$ if for all $x\in X$,
\[R[f(x)]\cap \mathfrak{d}\neq\varnothing\Rightarrow f[R[x]]\cap \mathfrak{d}\neq\varnothing.\]
If $\mathfrak{D}\subseteq \wp(Y)$ then we say that $f$ satisfies the BDC for $\mathfrak{D}$ if it does for each $\mathfrak{d}\in \mathfrak{D}$. If $\mathfrak{A}$ is a finite modal algebra and $D\subseteq A$, for every $a\in D$ set $\mathfrak{d}_{a}:=-\beta (a)$. Finally, put $\mathfrak{D}:=\{\mathfrak{d}_{a}:a\in D\}$. The following result is proved in \cite[Theorem 3.6]{BezhanishviliEtAl2016SCR}.
\begin{proposition}
For every modal space $\mathfrak{X}$ and any modal stable canonical rule $\scrmod{A}{D}$, $\mathfrak{X}\nvDash\scrmod{A}{D}$ iff there is a continuous stable surjection $f:\mathfrak{X}\to \mathfrak{A}_*$ satisfying the BDC for $\mathfrak{D}$.\label{refutspacemod}
\end{proposition}
In view of \Cref{refutspacemod}, when working with modal spaces we may write a modal stable canonical rule $\scrmod{A}{D}$ as $\scrmod{A_*}{\mathfrak{D}}$.
As in the si case, stable maps and the BDC are closely related to the filtration technique.
\begin{definition}
Let $\mathfrak{A}$ be a modal algebra, $V$ a valuation on $\mathfrak{A}$, and $\Theta$ a finite, subformula closed set of formulae. A (finite) model $(\mathfrak{B}, V')$ is called a (\emph{finite}) \emph{filtration of $(\mathfrak{A}, V)$ through $\Theta$} if the following conditions hold:\label{filtrmod}
\begin{enumerate}
\item $\mathfrak{B}=(\mathfrak{B}', \square)$, where $\mathfrak{B}'$ is the Boolean subalgebra of $\mathfrak{A}$ generated by $\bar V[\Theta]$;
\item $V(p)=V'(p)$ for every propositional variable $p\in \Theta$;
\item The inclusion $\subseteq:\mathfrak{B}\to \mathfrak{A}$ is a stable embedding satisfying the BDC for the set \[\{\bar V(\varphi):\square \varphi\in \Theta\}.\]
\end{enumerate}
\end{definition}
The following result is proved, e.g., in \cite[Lemma 4.4]{BezhanishviliEtAl2016SCR}.
\begin{theorem}[Filtration theorem for modal algebras]
Let $\mathfrak{A}\in \mathsf{MA}$ be a modal algebra, $V$ a valuation on $\mathfrak{A}$, and $\Theta$ a finite, subformula closed set of formulae. If $(\mathfrak{B}, V')$ is a filtration of $(\mathfrak{A}, V)$ through $\Theta$ then for every $\varphi\in \Theta$ we have
\[\bar V(\varphi)=\bar V'(\varphi).\]
Consequently, for every rule $\Gamma/\Delta$ such that $\gamma, \delta\in \Theta$ for each $\gamma\in \Gamma$ and $\delta\in \Delta$ we have
\[\mathfrak{A}, V\models \Gamma/\Delta\iff \mathfrak{B}, V'\models \Gamma/\Delta.\]
\end{theorem}
Unlike the si case, filtrations of a given model through a given set of formulae, when they exist, are not necessarily unique. Depending on which construction is preferred, different properties of the original model may or may not be preserved. In this section we mainly deal with closure algebras, whence we are particularly interested in filtrations preserving reflexivity and transitivity. It is easy to see that any filtration preserves reflexivity. While, in general, a filtration of a transitive model may fail to be transitive, transitive filtrations of transitive models can be constructed in multiple ways. Here we restrict attention to one particular construction.
\begin{definition}
Let $\mathfrak{A}\in \mathsf{S4}$, $V$ a valuation on $\mathfrak{A}$ and $\Theta$ a finite, subformula closed set of formulae. The \emph{$($least$)$ transitive filtration of $(\mathfrak{A}, V)$ through $\Theta$} is the pair $(\mathfrak{B}', V')$ with $\mathfrak{B}'=(\mathfrak{B}, \blacksquare)$, where $\mathfrak{B}$ and $V'$ are as per \Cref{filtrmod}, and for all $b\in B$ we have
\[\blacksquare b:=\bigvee\{\square a: \square a\leq \square b\text{ and }a, \square a\in B\}.\]
\end{definition}
It is easy to see that transitive filtrations of transitive models are indeed based on closure algebras (cf., e.g., \cite[Lemma 6.2]{BezhanishviliEtAl2016SCR}).
Transitive filtrations provide the necessary countermodels to rewrite modal rules into (conjunctions of) modal stable canonical rules. The following lemma, which is a modal counterpart to \Cref{rewritesi}, explains how.
\begin{lemma}[{\cite[Theorem 5.5]{BezhanishviliEtAl2016SCR}}]
For every modal rule $\Gamma/\Delta$ there is a finite set $\Xi$ of modal stable canonical rules of the form $\scrmod{A}{D}$ with $\mathfrak{A}\in \mathsf{S4}$, such that for any $\mathfrak{B}\in \mathsf{S4}$ we have that $\mathfrak{B}\nvDash \Gamma/\Delta$ iff there is $\scrmod{A}{D}\in \Xi$ such that $\mathfrak{B}\nvDash \scrmod{A}{D}$.\label{rewritemod}
\end{lemma}
\begin{proof}
Since Boolean algebras are locally finite there are, up to isomorphism, only finitely many pairs $(\mathfrak{A}, D)$ such that
\begin{itemize}
\item $\mathfrak{A}$ is at most $k$-generated as a Boolean algebra, where $k=|\mathit{Sfor}(\Gamma/\Delta)|$;
\item $D=\{\bar V(\varphi):\square\varphi\in \mathit{Sfor}(\Gamma/\Delta)\}$, where $V$ is a valuation on $\mathfrak{A}$ refuting $\Gamma/\Delta$.
\end{itemize}
Let $\Xi$ be the set of all rules $\scrmod{A}{D}$ for all such pairs $(\mathfrak{A}, D)$, identified up to isomorphism. Then we reason as in the proof of \Cref{rewritesi}, using the well-known fact that every model $(\mathfrak{B}, V)$ with $\mathfrak{B}\in \mathsf{S4}$ has a transitive filtration through $\mathit{Sfor}(\Gamma/\Delta)$ to establish the $(\Rightarrow)$ direction.
\end{proof}
Exactly mirroring the si case we apply \Cref{rewritemod} to obtain the following uniform axiomatisation of modal rule systems extending $\mathtt{S4_R}$.
\begin{theorem}
Every modal rule system $\mathtt{M}\in \mathbf{NExt}(\mathtt{S4_R})$ is axiom\-atisable over $\mathtt{S4_R}$ by some set of modal stable canonical rules of the form $\scrmod{A}{D}$, for $\mathfrak{A}\in \mathsf{S4}$.\label{thm:axiomatisationS4scr}
\end{theorem}
\subsection{Modal Companions of Superintuitionistic Deductive Systems via Stable Canonical Rules}\label{sec:modalcompanions}
We now turn to the main topic of this section. \Cref{sec:mappings1} reviews the basic ingredients of the theory of modal companions. \Cref{sec:structure1} shows how to apply stable canonical rules to give a novel proof of the Blok-Esakia theorem. Lastly, \Cref{sec:additionalresults} applies our methods to obtain an analogue of the Dummett-Lemmon conjecture for rule systems.
\subsubsection{Semantic Mappings} \label{sec:mappings1}
We begin by defining semantic transformations between Heyting and closure algebras. For more details, consult \cite[Section 3.5]{Esakia2019HADT}.
\begin{definition}
The mapping $\sigma: \mathsf{HA}\to \mathsf{S4}$ assigns every $\mathfrak{H}\in \mathsf{HA}$ to the algebra $\sigma \mathfrak{H}:=(B(\mathfrak{H}),\square)$, where $B(\mathfrak{H})$ is the free Boolean extension of $\mathfrak{H}$ and
\[\square a:=\bigvee \{b\in H: b\leq a\}.\]
\end{definition}
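To illustrate, let $\mathfrak{H}$ be the three-element Heyting chain $0<c<1$. Then $B(\mathfrak{H})$ is the four-element Boolean algebra with atoms $c$ and $\neg c$, and the definition gives
\[\square 0=0,\qquad \square c=c,\qquad \square 1=1,\qquad \square \neg c=\bigvee\{b\in H:b\leq \neg c\}=0,\]
so $\square$ fixes exactly the elements of $H$.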
It can be shown that for each $\mathfrak{H}\in \mathsf{HA}$ we have that $\sigma \mathfrak{H}$ is in fact a $\mathtt{GRZ}$-algebra \cite[Corollary 3.5.7]{Esakia2019HADT}.
\begin{definition}
The mapping $\rho:\mathsf{S4}\to \mathsf{HA}$ assigns every $\mathfrak{A}\in \mathsf{S4}$ to the algebra $\rho \mathfrak{A}:=(O(A), \land, \lor, \to, 0,1)$, where
\begin{align*}
O(A)&:=\{a\in A:\square a=a\}\\
a\to b&:=\square (\neg a\lor b)
\end{align*}
\end{definition}
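For a concrete illustration, consider the closure algebra $\mathfrak{A}$ on the four-element Boolean algebra $\{0, a, \neg a, 1\}$ with $\square 0=0$, $\square a=a$, $\square \neg a=0$ and $\square 1=1$. Then $O(A)=\{0, a, 1\}$, which under the operations above is the three-element Heyting chain; for instance, $a\to 0=\square(\neg a\lor 0)=\square \neg a=0$.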
The algebra $\rho(\mathfrak{A})$ is called the \emph{Heyting algebra of open elements} associated with $\mathfrak{A}$.
It is easy to verify that $\rho(\mathfrak{A})$ is indeed a Heyting algebra for any closure algebra $\mathfrak{A}$.
We now give a dual description of the maps $\sigma$, $\rho$ on modal and Esakia spaces.
\begin{definition}
If $\mathfrak{X}=(X, \leq, \mathcal{O})$ is an Esakia space we set $\sigma \mathfrak{X}:=(X, R, \mathcal{O})$ with $R:=\leq$. Let $\mathfrak{Y}:=(Y, R, \mathcal{O})$ be an $\mathtt{S4}$-space. For $x, y\in Y$ write $x\sim y$ iff $Rxy$ and $Ryx$. Define a map $\rho:Y\to \wp (Y)$ by setting $\rho(x)=\{y\in Y:x\sim y\}$. We define $\rho\mathfrak{Y}:=(\rho[Y], \leq, \rho[\mathcal{O}])$ where $\rho(x)\leq \rho(y)$ iff $Rxy$.
\end{definition}
Note that $\sigma$ here is effectively the identity map, though we find it useful to distinguish an Esakia space $\mathfrak{X}$ from $\sigma \mathfrak{X}$ notationally, in order to signal whether we are treating the space as a model for si or for modal deductive systems. The map $\rho$, on the other hand, acts on a modal space $\mathfrak{Y}$ by collapsing its $R$-clusters and endowing the result with the quotient topology. We shall refer to $\rho\mathfrak{Y}$ as the \emph{Esakia skeleton} of $\mathfrak{Y}$, and to $\sigma\rho\mathfrak{Y}$ as the \emph{modal skeleton} of $\mathfrak{Y}$. It is easy to see that the map $\rho:\mathfrak{Y}\to \rho \mathfrak{Y}$ is a surjective bounded morphism which moreover reflects $\leq$.
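For example, if $\mathfrak{Y}$ is the finite modal space consisting of a single two-element cluster $\{x, y\}$ with $R=Y\times Y$, then $\rho\mathfrak{Y}$ is the one-element Esakia space obtained by identifying $x$ and $y$, and $\sigma\rho\mathfrak{Y}$ is the same point viewed as a reflexive modal space.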
Routine arguments show that the algebraic and topological versions of the maps $\sigma, \rho$ are indeed dual to each other, as stated in the following proposition.
\begin{proposition}
The following hold.\label{prop:mcmapsdual}
\begin{enumerate}
\item Let $\mathfrak{H}\in \mathsf{HA}$. Then $(\sigma \mathfrak{H})_*\cong\sigma (\mathfrak{H}_*)$. Consequently, if $\mathfrak{X}$ is an Esakia space then $(\sigma \mathfrak{X})^*\cong\sigma (\mathfrak{X}^*)$.\label{prop:mcmapsdual1}
\item Let $\mathfrak{X}$ be an $\mathtt{S4}$ modal space. Then $(\rho\mathfrak{X})^*\cong\rho(\mathfrak{X}^*)$. Consequently, if $\mathfrak{A}\in \mathsf{S4}$, then $(\rho\mathfrak{A})_*\cong\rho(\mathfrak{A}_*)$.\label{prop:mcmapsdual2}
\end{enumerate}
\end{proposition}
The dual description of $\rho, \sigma$ makes the following result evident.
\begin{proposition}
For every $\mathfrak{H}\in \mathsf{HA}$ we have $\mathfrak{H}\cong \rho\sigma \mathfrak{H}$. Moreover, for every $\mathfrak{A}\in \mathsf{S4}$ we have $\sigma \rho\mathfrak{A}\rightarrowtail\mathfrak{A}$.\label{cor:representationHAS4}
\end{proposition}
\subsubsection{The Gödel Translation}
The close connection between Heyting and closure algebras just outlined manifests syntactically as the existence of a well-behaved translation of si formulae into modal ones, called the \emph{Gödel translation} after \citet{Goedel1933EIDIA}.
\begin{definition}[Gödel translation]
The \emph{Gödel translation} is a mapping $T:\mathit{Frm}_{si}\to \mathit{Frm}_{md}$ defined recursively as follows.
\begin{align*}
T(\bot)&:=\bot\\
T(\top)&:=\top\\
T(p)&:=\square p\\
T(\varphi\land \psi)&:=T(\varphi)\land T(\psi)\\
T(\varphi\lor \psi)&:=T(\varphi)\lor T(\psi)\\
T(\varphi\to \psi)&:=\square (\neg T(\varphi)\lor T(\psi))
\end{align*}
\end{definition}
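For instance, unfolding the clauses above on the formula $p\to q$ gives
\[T(p\to q)=\square(\neg \square p\lor \square q)=\square(\square p\to \square q),\]
so the translation prefixes every propositional variable and every implication with $\square$.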
We extend the Gödel translation from formulae to rules by setting
\[T(\Gamma/\Delta):=T[\Gamma]/T[\Delta].\]
We close this subsection by recalling the following key lemma due to \citet{Jerabek2009CR}.
\begin{lemma}[{\cite[Lemma 3.13]{Jerabek2009CR}}]
For every $\mathfrak{A}\in \mathsf{S4}$ and si rule $\Gamma/\Delta$, \label{lem:gtskeleton}
\[\mathfrak{A}\models T(\Gamma/\Delta)\iff \rho\mathfrak{A}\models \Gamma/\Delta\]
\end{lemma}
\subsubsection{Structure of Modal Companions}\label{sec:structure1}
We now have all the material needed to develop the theory of modal companions via the machinery of stable canonical rules.
\begin{definition}
Let $\mathtt{L}\in \mathbf{Ext}(\mathtt{IPC_R})$ be a si-rule system and $\mathtt{M}\in \mathbf{NExt}(\mathtt{S4_R})$ a modal rule system. We say that $\mathtt{M}$ is a \emph{modal companion} of $\mathtt{L}$ (or that $\mathtt{L}$ is the si fragment of $\mathtt{M}$) whenever $\Gamma/\Delta\in \mathtt{L}$ iff $T(\Gamma/\Delta)\in \mathtt{M}$. Moreover, let $\mathtt{L}\in \mathbf{Ext}(\mathtt{IPC})$ be a si-logic and $\mathtt{M}\in \mathbf{NExt}(\mathtt{S4})$ a modal logic. We say that $\mathtt{M}$ is a \emph{modal companion} of $\mathtt{L}$ (or that $\mathtt{L}$ is the si fragment of $\mathtt{M}$) whenever $\varphi\in \mathtt{L}$ iff $T(\varphi)\in \mathtt{M}$.
\end{definition}
Obviously, $\mathtt{M}\in \mathbf{NExt}(\mathtt{S4_R})$ is a modal companion of $\mathtt{L}\in \mathbf{Ext}(\mathtt{IPC_R})$ iff $\mathsf{Taut}(\mathtt{M})$ is a modal companion of $\mathsf{Taut}(\mathtt{L})$, and $\mathtt{M}\in \mathbf{NExt}(\mathtt{S4})$ is a modal companion of $\mathtt{L}\in \mathbf{Ext}(\mathtt{IPC})$ iff $\mathtt{M_R}$ is a modal companion of $\mathtt{L_R}$.
Define the following three maps between the lattices $\mathbf{Ext}(\mathtt{IPC_R})$ and $\mathbf{NExt}(\mathtt{S4_R})$.
\begin{align*}
\tau&:\mathbf{Ext}(\mathtt{IPC_R})\to \mathbf{NExt}(\mathtt{S4_R}) & \sigma&:\mathbf{Ext}(\mathtt{IPC_R})\to \mathbf{NExt}(\mathtt{S4_R}) \\
\mathtt{L}&\mapsto \mathtt{S4_R}\oplus \{T(\Gamma/\Delta):\Gamma/\Delta\in \mathtt{L}\} & \mathtt{L}&\mapsto \mathtt{GRZ_R}\oplus \tau \mathtt{L}\\
\end{align*}
\begin{align*}
\rho &:\mathbf{NExt}(\mathtt{S4_R}) \to \mathbf{Ext}(\mathtt{IPC_R}) \\
\mathtt{M}&\mapsto\{\Gamma/\Delta:T(\Gamma/\Delta)\in \mathtt{M}\}
\end{align*}
These mappings are readily extended to lattices of logics.
\begin{align*}
\tau&:\mathbf{Ext}(\mathtt{IPC})\to \mathbf{NExt}(\mathtt{S4}) & \sigma&:\mathbf{Ext}(\mathtt{IPC})\to \mathbf{NExt}(\mathtt{S4}) \\
\mathtt{L}&\mapsto \mathsf{Taut}(\tau\mathtt{L_R})=\mathtt{S4}\oplus \{T(\varphi):\varphi\in \mathtt{L}\} & \mathtt{L}&\mapsto \mathsf{Taut}(\sigma\mathtt{L_R})=\mathtt{GRZ}\oplus\{T(\varphi):\varphi\in \mathtt{L}\} \\
\end{align*}
\begin{align*}
\rho &:\mathbf{NExt}(\mathtt{S4}) \to \mathbf{Ext}(\mathtt{IPC}) \\
\mathtt{M}&\mapsto \mathsf{Taut}(\rho\mathtt{M_R})=\{\varphi:T(\varphi)\in \mathtt{M}\}
\end{align*}
Furthermore, extend the mappings $\sigma:\mathsf{HA}\to \mathsf{S4}$ and $\rho:\mathsf{S4}\to \mathsf{HA}$ to universal classes by setting
\begin{align*}
\sigma&:\mathbf{Uni}(\mathsf{HA})\to \mathbf{Uni}(\mathsf{S4}) & \rho&:\mathbf{Uni}(\mathsf{S4})\to \mathbf{Uni}(\mathsf{HA}) \\
\mathcal{U}&\mapsto \mathsf{Uni}\{\sigma \mathfrak{H}:\mathfrak{H}\in \mathcal{U}\} & \mathcal{W}&\mapsto \{\rho\mathfrak{A}:\mathfrak{A}\in \mathcal{W}\}.\\
\end{align*}
Finally, introduce a semantic counterpart to $\tau$ as follows.
\begin{align*}
\tau&: \mathbf{Uni}(\mathsf{HA})\to \mathbf{Uni}(\mathsf{S4}) \\
\mathcal{U}&\mapsto \{\mathfrak{A}\in \mathsf{S4}:\rho\mathfrak{A}\in \mathcal{U}\}
\end{align*}
The goal of this subsection is to give alternative proofs of the following two classic results in the theory of modal companions. Firstly, that for every si-deductive system $\mathtt{L}$, the modal companions of $\mathtt{L}$ are exactly the elements of the interval $\rho^{-1}(\mathtt{L})$ (\Cref{mcinterval}). Secondly, that the syntactic mappings $\sigma, \rho$ are mutually inverse isomorphisms (\Cref{blokesakia}). This last result (restricted to logics) is widely known as the \emph{Blok-Esakia theorem}.
The main difficulty in proving the results just mentioned lies in showing that the mapping $\sigma:\mathbf{Ext}(\mathtt{IPC_R})\to \mathbf{NExt}(\mathtt{GRZ_R})$ is surjective. We solve this problem by first applying stable canonical rules to show that the semantic mapping $\sigma:\mathbf{Uni}(\mathsf{HA})\to \mathbf{Uni}(\mathsf{GRZ})$ is surjective, and subsequently establishing that the syntactic and semantic versions of $\sigma$ capture essentially the same transformation. Our key tool is the following technical lemma.
\begin{lemma}
Let $\mathfrak{A}\in \mathsf{GRZ}$. Then for every modal rule $\Gamma/\Delta$, $\mathfrak{A}\models\Gamma/\Delta$ iff $\sigma\rho\mathfrak{A}\models \Gamma/\Delta$.\label{mainlemma-simod}
\end{lemma}
\begin{proof}
$(\Rightarrow)$ This direction follows from the fact that $\sigma\rho\mathfrak{A}\rightarrowtail\mathfrak{A}$ (\Cref{cor:representationHAS4}).
$(\Leftarrow)$ We prove the dual statement that $\mathfrak{A}_*\nvDash \Gamma/\Delta$ implies $\sigma\rho\mathfrak{A}_*\nvDash \Gamma/\Delta$. Let $\mathfrak{X}:=\mathfrak{A}_*$. By \Cref{rewritemod}, $\Gamma/\Delta$ is equivalent to a conjunction of modal stable canonical rules of finite closure algebras, so without loss of generality we may assume $\Gamma/\Delta=\scrmod{B}{D}$ for some finite $\mathfrak{B}\in \mathsf{S4}$. So suppose $\mathfrak{X}\nvDash \scrmod{B}{D}$ and let $\mathfrak{F}:=\mathfrak{B}_*$. By \Cref{refutspacemod}, there is a continuous stable surjection $f:\mathfrak{X}\to \mathfrak{F}$ satisfying the BDC for $\mathfrak{D}:=\{\mathfrak{d}_a:a\in D\}$. We construct a stable map $g:\sigma\rho\mathfrak{X}\to \mathfrak{F}$ which also satisfies the BDC for $\mathfrak{D}$. By \Cref{refutspacemod} again, this shows that $\sigma \rho\mathfrak{X}\nvDash\scrmod{B}{D}$, which concludes the proof.
Let $C\subseteq F$ be some cluster and consider $Z_C:=f^{-1}(C)$. As $f$ is continuous, $Z_C\in \mathsf{Clop}(\mathfrak{X})$. Moreover, since $f$ is stable, $Z_C$ does not cut any cluster. It follows that $\rho[Z_C]$ is clopen in $\sigma\rho\mathfrak{X}$, because $\sigma\rho \mathfrak{X}$ carries the quotient topology. Enumerate $C:=\{x_1, \ldots, x_n\}$. Then each $f^{-1}(x_i)\subseteq Z_C$ is clopen. By \Cref{propgrz1} we find that $M_i:=\mathit{max}(f^{-1}(x_i))$ is closed. Furthermore, as $\mathfrak{X}$ is a $\mathtt{GRZ}$ space and every element of $M_i$ is passive in $M_i$, by \Cref{propgrz1} again we have that $M_i$ does not cut any cluster. Therefore $\rho[M_i]$ is closed, again because $\sigma\rho\mathfrak{X}$ carries the quotient topology. Clearly, $\rho[M_i]\cap \rho[M_j]=\varnothing$ whenever $i\neq j$.
We shall now separate the closed sets $\rho[M_1], \ldots, \rho[M_n]$ by disjoint clopens. That is, we shall find disjoint clopens $U_1, \ldots, U_n\in \mathsf{Clop}(\sigma\rho\mathfrak{X})$ with $\rho[M_i]\subseteq U_i$ and $\bigcup_i U_i=\rho[Z_C]$.
Let $k\leq n$ and assume that $U_i$ has been defined for all $i<k$. If $k=n$ put $U_n=\rho[Z_C]\smallsetminus\left(\bigcup_{i< k} U_i\right)$ and we are done. Otherwise set $V_k:=\rho[Z_C]\smallsetminus\left(\bigcup_{i< k} U_i\right)$ and observe that it contains each $\rho[M_i]$ for $k\leq i\leq n$. By the separation properties of Stone spaces, for each $i$ with $k<i\leq n$ there is some $U_{k_i}\in \mathsf{Clop}(\sigma\rho\mathfrak{X})$ with $\rho[M_k]\subseteq U_{k_i}$ and $\rho[M_i]\cap U_{k_i}=\varnothing$. Then set $U_k:=\bigcap_{k<i\leq n} U_{k_i}\cap V_k$.
Now define a map
\begin{align*}
g_C&: \rho[Z_C]\to C\\
z&\mapsto x_i\iff z\in U_i.
\end{align*}
Note that $g_C$ is evidently relation preserving, and it is continuous because $g_C^{-1}(x_i)=U_i$ is clopen. Finally, define $g: \sigma\rho\mathfrak{X}\to \mathfrak{F}$ by setting
\[
g(\rho(z)):=\begin{cases}
f(z)&\text{ if } f(z)\text{ does not belong to any proper cluster }\\
g_C(\rho(z))&\text{ if }f(z)\in C\text{ for some proper cluster }C\subseteq F.
\end{cases}
\]
Now, $g$ is evidently relation preserving. Moreover, it is continuous because both $f$ and each $g_C$ are. Suppose $Rg(\rho(x)) y$ and $y\in \mathfrak{d}$ for some $\mathfrak{d}\in \mathfrak{D}$. By construction, $f(x)$ belongs to the same cluster as $g(\rho(x))$, so also $Rf(x)y$. Since $f$ satisfies the BDC for $\mathfrak{D}$, there must be some $z\in X$ such that $Rxz$ and $f(z)\in \mathfrak{d}$. Since $f^{-1}(f(z))\in \mathsf{Clop}(\mathfrak{X})$, by \Cref{propgrz1} there is $z'\in \mathit{max}(f^{-1}(f(z)))$ with $Rzz'$. Then also $Rxz'$ and $f(z')\in \mathfrak{d}$. But from $z'\in \mathit{max}(f^{-1}(f(z)))$ it follows that $f(z')=g(\rho(z'))$ by construction, so we have $g(\rho(z'))\in \mathfrak{d}$. As clearly $R\rho(x)\rho(z')$, we have shown that $g$ satisfies the BDC for $\mathfrak{D}$. By \Cref{refutspacemod} this implies $\sigma\rho\mathfrak{X}\not\models \scrmod{B}{D}$.
\end{proof}
\begin{theorem}
Every $\mathcal{U}\in \mathbf{Uni}(\mathsf{GRZ})$ is generated by its skeletal elements, i.e., $\mathcal{U}=\sigma \rho\mathcal{U}$. \label{unigrzgeneratedskel}
\end{theorem}
\begin{proof}
Since $\sigma \rho\mathfrak{A}\rightarrowtail\mathfrak{A}$ for every $\mathfrak{A}\in \mathcal{U}$ (\Cref{cor:representationHAS4}) and universal classes are closed under subalgebras, certainly $\sigma\rho\mathcal{U}\subseteq \mathcal{U}$. Conversely, suppose $\mathcal{U}\nvDash \Gamma/\Delta$. Then there is $\mathfrak{A}\in \mathcal{U}$ with $\mathfrak{A}\nvDash \Gamma/\Delta$. By \Cref{mainlemma-simod} it follows that $\sigma\rho\mathfrak{A}\nvDash\Gamma/\Delta$. This shows $\mathsf{ThR}(\sigma\rho\mathcal{U})\subseteq \mathsf{ThR}(\mathcal{U})$, which is equivalent to $\mathcal{U}\subseteq \sigma\rho\mathcal{U}$. Hence indeed $\mathcal{U}=\sigma \rho\mathcal{U}$.
\end{proof}
\begin{remark}
The restriction of \Cref{unigrzgeneratedskel} to varieties plays an important role in the algebraic proof of the Blok-Esakia theorem given by \citet{Blok1976VoIA}. The unrestricted version is explicitly stated and proved in \cite[Lemma 4.4]{Stronkowski2018OtBETfUC} using a generalisation of Blok's approach, although it also follows from \cite[Theorem 5.5]{Jerabek2009CR}. Blok establishes the restricted version of \Cref{unigrzgeneratedskel} as a consequence of what is now known as the \emph{Blok lemma}. The proof of the Blok lemma is notoriously involved. By contrast, our techniques afford a direct and, we believe, semantically transparent proof of \Cref{unigrzgeneratedskel}. \label{remark:blok}
\end{remark}
Given \Cref{unigrzgeneratedskel}, the main result of this section can be obtained via known routine arguments. First, we show that the syntactic modal companion maps $\tau, \rho, \sigma$ commute with $\mathsf{Alg}(\cdot)$.
\begin{lemma}[{\cite[Theorem 5.9]{Jerabek2009CR}}]
For each $\mathtt{L}\in \mathbf{Ext}(\mathtt{IPC_R})$ and $\mathtt{M}\in \mathbf{NExt}(\mathtt{S4_R})$, the following hold:\label{prop:mcmapscommute}
\begin{align}
\mathsf{Alg}(\tau\mathtt{L})&=\tau \mathsf{Alg}(\mathtt{L}) \label{prop:mcmapscommute1}\\
\mathsf{Alg}(\sigma\mathtt{L})&=\sigma \mathsf{Alg}(\mathtt{L})\label{prop:mcmapscommute2}\\
\mathsf{Alg}(\rho\mathtt{M})&=\rho \mathsf{Alg}(\mathtt{M})\label{prop:mcmapscommute3}
\end{align}
\end{lemma}
\begin{proof}
(\ref{prop:mcmapscommute1}) For every $\mathfrak{A}\in \mathsf{S4}$ we have $\mathfrak{A}\in \mathsf{Alg}(\tau\mathtt{L})$ iff $\mathfrak{A}\models T(\Gamma/\Delta)$ for all $\Gamma/\Delta\in \mathtt{L}$, iff (by \Cref{lem:gtskeleton}) $\rho\mathfrak{A}\models \Gamma/\Delta$ for all $\Gamma/\Delta\in \mathtt{L}$, iff $\rho\mathfrak{A}\in \mathsf{Alg}(\mathtt{L})$, iff $\mathfrak{A}\in \tau \mathsf{Alg}(\mathtt{L})$.
(\ref{prop:mcmapscommute2}) In view of \Cref{unigrzgeneratedskel} it suffices to show that $\mathsf{Alg}(\sigma\mathtt{L})$ and $\sigma \mathsf{Alg}(\mathtt{L})$ have the same skeletal elements. So let $\mathfrak{A}=\sigma\rho\mathfrak{A}\in \mathsf{GRZ}$. Assume $\mathfrak{A}\in\sigma \mathsf{Alg}(\mathtt{L})$. Since $\sigma \mathsf{Alg}(\mathtt{L})$ is generated by $\{\sigma\mathfrak{B}:\mathfrak{B}\in \mathsf{Alg}(\mathtt{L})\}$ as a universal class, by \Cref{cor:representationHAS4} and \Cref{lem:gtskeleton} we have $\mathfrak{A}\models T(\Gamma/\Delta)$ for every $\Gamma/\Delta\in \mathtt{L}$. But then $\mathfrak{A}\in \mathsf{Alg}(\sigma\mathtt{L})$. Conversely, assume $\mathfrak{A}\in \mathsf{Alg}(\sigma\mathtt{L})$. Then $\mathfrak{A}\models T(\Gamma/\Delta)$ for every $\Gamma/\Delta\in \mathtt{L}$. By \Cref{lem:gtskeleton} this is equivalent to $\rho\mathfrak{A}\in \mathsf{Alg}(\mathtt{L})$, therefore $\sigma\rho\mathfrak{A}=\mathfrak{A}\in \sigma\mathsf{Alg}(\mathtt{L})$.
(\ref{prop:mcmapscommute3}) Let $\mathfrak{H}\in \mathsf{HA}$. If $\mathfrak{H}\in \rho \mathsf{Alg}(\mathtt{M})$ then $\mathfrak{H}=\rho \mathfrak{A}$ for some $\mathfrak{A}\in \mathsf{Alg}(\mathtt{M})$. It follows that for every si rule $T(\Gamma/\Delta)\in \mathtt{M}$ we have $\mathfrak{A}\models T(\Gamma/\Delta)$, and so by \Cref{lem:gtskeleton} in turn $\mathfrak{H}\models\Gamma/\Delta$. Therefore indeed $\mathfrak{H}\in \mathsf{Alg}(\rho\mathtt{M})$. Conversely, for all si rules $\Gamma/\Delta$, if $\rho\mathsf{Alg}(\mathtt{M})\models \Gamma/\Delta$ then by \Cref{lem:gtskeleton} $\mathsf{Alg}(\mathtt{M})\models T(\Gamma/\Delta)$, hence $\Gamma/\Delta\in \rho\mathtt{M}$. Thus $\mathsf{ThR}(\rho\mathsf{Alg}(\mathtt{M}))\subseteq \rho\mathtt{M}$, and so $\mathsf{Alg}(\rho\mathtt{M})\subseteq \rho\mathsf{Alg}(\mathtt{M})$.
\end{proof}
The result just proved leads straightforwardly to the following, purely semantic characterisation of modal companions.
\begin{lemma}
$\mathtt{M}\in \mathbf{NExt}(\mathtt{S4_R})$ is a modal companion of $\mathtt{L}\in \mathbf{Ext}(\mathtt{IPC_R})$ iff $\mathsf{Alg}(\mathtt{L})=\rho\mathsf{Alg}(\mathtt{M})$.\label{mcsemantic}
\end{lemma}
\begin{proof}
$(\Rightarrow)$ Assume $\mathtt{M}$ is a modal companion of $\mathtt{L}$. Then we have $\mathtt{L}=\rho\mathtt{M}$. By \Cref{prop:mcmapscommute} $\mathsf{Alg}(\mathtt{L})=\rho\mathsf{Alg}(\mathtt{M})$.
$(\Leftarrow)$ Assume that $\mathsf{Alg}(\mathtt{L})=\rho\mathsf{Alg}(\mathtt{M})$. Then every $\mathfrak{H}\in \mathsf{Alg}(\mathtt{L})$ is of the form $\rho\mathfrak{A}$ for some $\mathfrak{A}\in \mathsf{Alg}(\mathtt{M})$, whence $\sigma\mathfrak{H}\cong\sigma\rho\mathfrak{A}\rightarrowtail \mathfrak{A}$ by \Cref{cor:representationHAS4}, and so $\sigma\mathfrak{H}\in \mathsf{Alg}(\mathtt{M})$ because universal classes are closed under subalgebras. Together with \Cref{lem:gtskeleton}, this implies that for every si rule $\Gamma/\Delta$, $\Gamma/\Delta\in \mathtt{L}$ iff $T(\Gamma/\Delta)\in \mathtt{M}$.
\end{proof}
We can now prove the main two results of this section.
\begin{theorem}[{\cite[Theorem 5.5]{Jerabek2009CR}}, {\cite[Theorem 3]{Zakharyashchev1991MCoSLSSaPT}}]
The following conditions hold: \label{mcinterval}
\begin{enumerate}
\item For every $\mathtt{L}\in \mathbf{Ext}(\mathtt{IPC_R})$, the modal companions of $\mathtt{L}$ form an interval $\{\mathtt{M}\in \mathbf{NExt}(\mathtt{S4_R}):\tau\mathtt{L}\leq \mathtt{M}\leq \sigma\mathtt{L}\}$.\label{mcinterval1}
\item For every $\mathtt{L}\in \mathbf{Ext}(\mathtt{IPC})$, the modal companions of $\mathtt{L}$ form an interval $\{\mathtt{M}\in \mathbf{NExt}(\mathtt{S4}):\tau\mathtt{L}\leq \mathtt{M}\leq \sigma\mathtt{L}\}$.\label{mcinterval2}
\end{enumerate}
\end{theorem}
\begin{proof}
(\ref{mcinterval1}) In view of \Cref{prop:mcmapscommute} it suffices to prove that $\mathtt{M}\in \mathbf{NExt}(\mathtt{S4_R})$ is a modal companion of $\mathtt{L}\in \mathbf{Ext}(\mathtt{IPC_R})$ iff $\sigma\mathsf{Alg}(\mathtt{L})\subseteq \mathsf{Alg}(\mathtt{M})\subseteq\tau\mathsf{Alg}(\mathtt{L})$.
($\Rightarrow$) Assume $\mathtt{M}$ is a modal companion of $\mathtt{L}$. Then by \Cref{mcsemantic} we have $\mathsf{Alg}(\mathtt{L})=\rho\mathsf{Alg}(\mathtt{M})$, therefore it is clear that $\mathsf{Alg}(\mathtt{M})\subseteq \tau \mathsf{Alg}(\mathtt{L})$. To see that $\sigma\mathsf{Alg}(\mathtt{L})\subseteq \mathsf{Alg}(\mathtt{M})$ it suffices to show that every skeletal algebra in $\sigma\mathsf{Alg}(\mathtt{L})$ belongs to $\mathsf{Alg}(\mathtt{M})$. So let $\mathfrak{A}\cong\sigma\rho\mathfrak{A}\in \sigma\mathsf{Alg}(\mathtt{L})$. Then $\rho\mathfrak{A}\in \mathsf{Alg}(\mathtt{L})$ by \Cref{lem:gtskeleton}, so there must be $\mathfrak{B}\in \mathsf{Alg}(\mathtt{M})$ such that $\rho\mathfrak{B}\cong \rho\mathfrak{A}$. But this implies $\sigma \rho\mathfrak{B}\cong \sigma \rho\mathfrak{A}\cong \mathfrak{A}$, and as universal classes are closed under subalgebras, by \Cref{cor:representationHAS4} we conclude $\mathfrak{A}\in \mathsf{Alg}(\mathtt{M})$.
($\Leftarrow$) Assume $\sigma\mathsf{Alg}(\mathtt{L})\subseteq \mathsf{Alg}(\mathtt{M})\subseteq\tau\mathsf{Alg}(\mathtt{L})$. It is an immediate consequence of \Cref{cor:representationHAS4} that $\rho\sigma \mathsf{Alg}(\mathtt{L})=\mathsf{Alg}(\mathtt{L})$, which gives us $\mathsf{Alg}(\mathtt{L})\subseteq\rho\mathsf{Alg}(\mathtt{M})$. Moreover, from $\mathsf{Alg}(\mathtt{M})\subseteq\tau\mathsf{Alg}(\mathtt{L})$ and the definition of $\tau$ it follows that $\rho\mathsf{Alg}(\mathtt{M})\subseteq\rho\tau\mathsf{Alg}(\mathtt{L})\subseteq\mathsf{Alg}(\mathtt{L})$. Therefore indeed $\rho\mathsf{Alg}(\mathtt{M})=\mathsf{Alg}(\mathtt{L})$, so by \Cref{mcsemantic} we conclude that $\mathtt{M}$ is a modal companion of $\mathtt{L}$.
(\ref{mcinterval2}) Immediate from \Cref{mcinterval1} and \Cref{deductivesystemisomorphismsi,deductivesystemisomorphismmodal}.
\end{proof}
\begin{theorem}[Blok-Esakia theorem]
The following conditions hold: \label{blokesakia}
\begin{enumerate}
\item The mappings $\sigma: \mathbf{Ext}(\mathtt{IPC_R})\to \mathbf{NExt}(\mathtt{GRZ_R})$ and $\rho:\mathbf{NExt}(\mathtt{GRZ_R})\to \mathbf{Ext}(\mathtt{IPC_R})$ are complete lattice isomorphisms and mutual inverses.\label{blokesakia:1}
\item The mappings $\sigma: \mathbf{Ext}(\mathtt{IPC})\to \mathbf{NExt}(\mathtt{GRZ})$ and $\rho:\mathbf{NExt}(\mathtt{GRZ})\to \mathbf{Ext}(\mathtt{IPC})$ are complete lattice isomorphisms and mutual inverses. \label{blokesakia:2}
\end{enumerate}
\end{theorem}
\begin{proof}
(\ref{blokesakia:1}) It is enough to show that the mappings $\sigma: \mathbf{Uni}(\mathsf{HA})\to \mathbf{Uni}(\mathsf{GRZ})$ and $\rho:\mathbf{Uni}(\mathsf{GRZ})\to \mathbf{Uni}(\mathsf{HA})$ are complete lattice isomorphisms and mutual inverses. Both maps are evidently order preserving, and preservation of infinite joins is an easy consequence of \Cref{lem:gtskeleton}. Let $\mathcal{U}\in \mathbf{Uni}(\mathsf{GRZ})$. Then $\mathcal{U}=\sigma\rho\mathcal{U}$ by \Cref{unigrzgeneratedskel}, so $\sigma$ is surjective and a left inverse of $\rho$. Now let $\mathcal{U}\in \mathbf{Uni}(\mathsf{HA})$. It is an immediate consequence of \Cref{cor:representationHAS4} that $\rho\sigma \mathcal{U}=\mathcal{U}$. Hence $\rho$ is surjective and a left inverse of $\sigma$. Thus $\sigma$ and $\rho$ are mutual inverses, and therefore both bijections.
(\ref{blokesakia:2}) Immediate from \Cref{blokesakia:1} and \Cref{deductivesystemisomorphismsi,deductivesystemisomorphismmodal}.
\end{proof}
As noted earlier, the arguments given in the proofs of \Cref{mcinterval,blokesakia} are standard. The novelty of our strategy lies in establishing the key fact on which these standard arguments depend, namely \Cref{unigrzgeneratedskel}, by means of stable canonical rules.
\subsubsection{The Dummett-Lemmon Conjecture} \label{sec:additionalresults}
We call a modal or si-rule system \emph{Kripke complete} if it is of the form $\mathtt{L}=\{\Gamma/\Delta:\mathcal{K}\models \Gamma/\Delta\}$ for some class of Kripke frames $\mathcal{K}$. \citet[Corollary 2]{Zakharyashchev1991MCoSLSSaPT} applied his canonical formulae to prove the \emph{Dummett-Lemmon conjecture} \cite{DummettLemmon1959MLbS4aS5}, which states that a si-logic is Kripke complete iff its weakest modal companion is. To our knowledge, a proof that the Dummett-Lemmon conjecture generalises to rule systems has not been published, although perhaps one could be given by applying Je\v{r}ábek-style canonical rules to adapt Zakharyaschev's argument. Here we prove, using stable canonical rules, that the Dummett-Lemmon conjecture does indeed generalise to rule systems.
It is easy to see that refutation conditions for stable canonical rules work essentially the same way for Kripke frames as they do for Esakia and modal spaces: for every Kripke frame $\mathfrak{X}$ and si stable canonical rule $\scrsi{F}{\mathfrak{D}}$, we have that $\mathfrak{X}\nvDash \scrsi{F}{\mathfrak{D}}$ iff there is a surjective stable homomorphism $f:\mathfrak{X}\to \mathfrak{F}$ satisfying the BDC for $\mathfrak{D}$, and analogously for the modal case. For details the reader may consult, e.g., \cite{BezhanishviliEtAl2016SCR}. The mappings $\sigma, \tau, \rho$ also extend to classes of Kripke frames in an obvious way. Finally, \Cref{lem:gtskeleton} holds for Kripke frames as well, appropriately reformulated to incorporate the refutation conditions for stable canonical rules just stated.
We now introduce the notion of a \emph{collapsed} stable canonical rule. We prefer to do so in a geometric setting, so as to emphasize the main intuition behind this concept.
\begin{definition}
Let $\scrmod{F}{\mathfrak{D}}$ be some modal stable canonical rule, with $\mathfrak{F}\in \mathsf{Spa}(\mathtt{S4})$. The \emph{collapsed stable canonical rule} $\scrsi{\rho F}{\rho \mathfrak{D}}$ is obtained by setting
\[\rho \mathfrak{D}:=\{\rho [\mathfrak{d}]: \mathfrak{d}\in \mathfrak{D}\}.\]
\end{definition}
Intuitively, $\scrsi{\rho F}{\rho \mathfrak{D}}$ is obtained from $\scrmod{F}{\mathfrak{D}}$ by collapsing all clusters in $\mathfrak{F}$ and in the set of domains $\mathfrak{D}$ as well.
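For a minimal illustration (an example of ours, not drawn from the surrounding text), suppose $\mathfrak{F}$ consists of a single two-element cluster $C=\{x_1, x_2\}$, so that $Rx_1x_2$ and $Rx_2x_1$, and let $\mathfrak{D}=\{C\}$. Collapsing the cluster yields
\[\rho[F]=\{\rho(x_1)\},\qquad \rho\mathfrak{D}=\{\{\rho(x_1)\}\},\]
so $\scrsi{\rho F}{\rho \mathfrak{D}}$ is the si stable canonical rule of the one-point poset with its unique singleton domain.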
Collapsed rules obey the following refutation condition.
\begin{lemma}[Rule collapse lemma]
For all $\mathfrak{X}\in \mathsf{Spa}(\mathtt{S4})$ and every modal stable canonical rule $\scrmod{F}{\mathfrak{D}}$ with $\mathfrak{F}\in \mathsf{Spa}(\mathtt{S4})$, if $\mathfrak{X}\nvDash \scrmod{F}{\mathfrak{D}}$ then $\rho \mathfrak{X}\nvDash \scrsi{\rho F}{\rho \mathfrak{D}}$.\label{rulecollapse}
\end{lemma}
\begin{proof}
Assume $\mathfrak{X}\nvDash\scrmod{F}{\mathfrak{D}}$. Then there is a continuous, relation preserving map $f:\mathfrak{X}\to \mathfrak{F}$ that satisfies the BDC for $\mathfrak{D}$. Consider the map $g:\rho\mathfrak{X}\to \rho\mathfrak{F}$ given by
\[g(\rho(x))=\rho(f(x)).\]
The map $g$ is well defined: if $\rho(x)=\rho(y)$ then $x$ and $y$ belong to the same cluster, and since $f$ is relation preserving, so do $f(x)$ and $f(y)$, whence $\rho(f(x))=\rho(f(y))$. Now $\rho(x)\leq\rho(y)$ implies $Rxy$, and since $f$ is relation preserving also $Rf(x)f(y)$, which implies $\rho(f(x))\leq \rho(f(y))$. So $g$ is relation preserving. Furthermore, again because $f$ is relation preserving, for any $U\subseteq F$ the set $f^{-1}(U)$ does not cut clusters, whence $g^{-1}(U)=\rho[f^{-1}(\rho^{-1}(U))]$ is clopen for any $U\subseteq \rho[F]$, as $\rho\mathfrak{X}$ has the quotient topology. Thus $g$ is continuous. Let us check that $g$ satisfies the BDC for $\rho\mathfrak{D}$. Assume that ${\uparrow} g(\rho(x))\cap \rho[\mathfrak{d}]\neq \varnothing$ for $\mathfrak{d}\in \mathfrak{D}$. Then there is some $\rho(y)\in \rho[F]$ with $\rho(f(x))\leq\rho(y)$ and $\rho(y)\in \rho[\mathfrak{d}]$. By construction, wlog we may assume that $y\in \mathfrak{d}$. As $\rho$ is relation reflecting it follows that $Rf(x) y$, and so we have that $R[ f(x)]\cap \mathfrak{d}\neq\varnothing$. Since $f$ satisfies the BDC for $\mathfrak{D}$ we conclude that $f[R[x]]\cap \mathfrak{d}\neq\varnothing$. So there is some $z\in X$ with $Rxz$ and $f(z)\in \mathfrak{d}$. By definition, $\rho(f(z))\in \rho[\mathfrak{d}]$. Hence we have shown that $\rho[f[R[x]]]\cap \rho[\mathfrak{d}]\neq\varnothing$, and so $g$ indeed satisfies the BDC for $\rho\mathfrak{D}$.
\end{proof}
We are now ready to prove the Dummett-Lemmon conjecture for rule systems.
\begin{theorem}[Dummett-Lemmon conjecture for si-rule systems]
For every si-rule system $\mathtt{L}\in \mathbf{Ext}(\mathtt{IPC_R})$, we have that $\mathtt{L}$ is Kripke complete iff $\tau\mathtt{L}$ is.\label{dummettlemmon}
\end{theorem}
\begin{proof}
$(\Rightarrow)$ Let $\mathtt{L}$ be Kripke complete. Suppose that $\Gamma/\Delta\notin \tau\mathtt{L}$. Then there is $\mathfrak{X}\in \mathsf{Spa}(\tau\mathtt{L})$ such that $\mathfrak{X}\nvDash \Gamma/\Delta$. By \Cref{thm:axiomatisationS4scr}, we may assume that $\Gamma/\Delta=\scrmod{F}{\mathfrak{D}}$ for $\mathfrak{F}$ a preorder. By the rule collapse lemma
it follows that $\rho\mathfrak{X}\nvDash \scrsi{\rho F}{\rho \mathfrak{D}}$. Moreover, by \Cref{lem:gtskeleton} it follows that $\rho\mathfrak{X}\models \mathtt{L}$, and so
we conclude $\scrsi{\rho F}{\rho \mathfrak{D}}\notin \mathtt{L}$. Since $\mathtt{L}$ is Kripke complete, there is a si Kripke frame $\mathfrak{Y}$ such that $\mathfrak{Y}\nvDash \scrsi{\rho F}{\rho \mathfrak{D}}$. By \Cref{refutspace}, there is a stable map $f:\mathfrak{Y}\to \rho \mathfrak{F}$ satisfying the BDC for $\rho \mathfrak{D}$. Work in $\rho \mathfrak{F}$. For every $x\in \rho [F]$ look at $\rho^{-1}(x)$, let $k=|\rho^{-1}(x)|$ and enumerate $\rho^{-1}(x)=\{x_1, \ldots, x_k\}$. Now work in $\mathfrak{Y}$. For every $y\in f^{-1}(x)$ replace $y$ with a $k$-cluster $y_1, \ldots, y_k$ and extend the relation $R$ clusterwise: $Ry_iz_j$ iff either $y=z$ or $Ryz$. Call the result $\mathfrak{Z}$. Clearly $\mathfrak{Z}$ is a Kripke frame, and moreover $\mathfrak{Z}\models\tau \mathtt{L}$, because $\rho \mathfrak{Z}\cong \mathfrak{Y}$. For convenience, identify $\rho\mathfrak{Z}=\mathfrak{Y}$. For every $x\in \rho [F]$ define a map $g_x:f^{-1}(x)\to \rho^{-1}(x)$ by setting $g_x(y_i)=x_i$ ($i\leq k$). Finally, define $g:\mathfrak{Z}\to \mathfrak{F}$ by putting $g=\bigcup_{x\in \rho[F]} g_x$.
The map $g$ is evidently well defined, surjective, and relation preserving. We claim that moreover, it satisfies the BDC for $\mathfrak{D}$. To see this, suppose that $R [g(y_i)] \cap \mathfrak{d}\neq\varnothing$ for some $\mathfrak{d}\in \mathfrak{D}$. Then there is $x_j\in F$ with $x_j\in \mathfrak{d}$ and $Rg(y_i)x_j$. By construction also $\rho(x_j)\in \rho [\mathfrak{d}]$ and $R f(\rho (y_i))\rho(x_j)$. As $f$ satisfies the BDC for $\rho\mathfrak{D}$ it follows that there is some $z\in Y$ such that $R\rho(y_i)z$ and $f(z)\in \rho[\mathfrak{d}]$. We may view $z$ as $\rho(z_n)$ where $\rho^{-1}(f(z))$ has cardinality $k\geq n$. Surely $Ry_iz_n$. Furthermore, since $f(z)\in \rho[\mathfrak{d}]$ there must be some $m\leq k$ such that $f(z)_m=g(z_m)\in \mathfrak{d}$. By construction $Rz_nz_m$ and so in turn $Ry_iz_m$. This establishes that $g$ indeed satisfies the BDC for $\mathfrak{D}$. Thus we have shown $\mathfrak{Z}\nvDash \scrmod{F}{\mathfrak{D}}$. It follows that $\tau \mathtt{L}$ is Kripke complete.
$(\Leftarrow)$ Assume that $\tau\mathtt{L}$ is Kripke complete. Suppose that $\Gamma/\Delta\notin \mathtt{L}$. Then there is an Esakia space $\mathfrak{X}$ such that $\mathfrak{X}\nvDash \Gamma/\Delta$, and therefore $\sigma\mathfrak{X}\nvDash T(\Gamma/\Delta)$. Surely $\sigma \mathfrak{X}\models \tau \mathtt{L}$, so $T(\Gamma/\Delta)\notin\tau \mathtt{L}$, and thus there is a Kripke frame $\mathfrak{Y}$ such that $\mathfrak{Y}\models \tau \mathtt{L}$ and $\mathfrak{Y}\nvDash T(\Gamma/\Delta)$. But then $\rho\mathfrak{Y}\nvDash \Gamma/\Delta$. Moreover, $\rho\mathfrak{Y}$ is a Kripke frame and validates $\mathtt{L}$ by \Cref{lem:gtskeleton}. Therefore we have shown that $\mathtt{L}$ is indeed Kripke complete.
\end{proof}
\section{Tense Companions of Super Bi-intuitionistic Deductive Systems}
\label{ch:2}
We now apply the techniques presented in \Cref{ch:1} to the study of tense companions of bi-superintuitionistic deductive systems. We begin by reviewing some preliminaries in \Cref{sec:preliminaries2}. In \Cref{sec:scr2} we develop tense and bi-superintuitionistic stable canonical rules, which generalise the modal and si stable canonical rules seen in \Cref{sec:scr1}. We then apply such rules to extend the results of \Cref{sec:modalcompanions} to the bi-superintuitionistic and tense setting in \Cref{sec:tensecompanions}. Here we obtain a characterisation of the set of tense companions of a bi-superintuitionistic deductive system, and extensions of the Blok-Esakia theorem and of the Dummett-Lemmon conjecture to the bi-superintuitionistic and tense setting (\Cref{sec:semantictensecompanions}). These results were known for logics (cf. \cite{Wolter1998OLwC}), but are new for rule systems.
Besides the original results just mentioned, the main contribution of this section is showcasing the uniformity of our method across signatures. The majority of results in this section are obtained via straightforward generalisations of arguments already seen in \Cref{ch:1}. This is a major virtue of our approach, which the canonical formula- and rule-based approach of Zakharyaschev and Je\v{r}ábek does not seem to share to the same extent (\Cref{sec:comparison}).
As in the case of modal companions, our techniques also yield axiomatic characterisations of the tense companion maps via stable canonical rules, as well as some results concerning the preservation of stability by the tense companion maps. These topics are discussed in \cite[Sections 3.3.3, 3.3.4, 3.3.5]{Cleani2021TEVSCR}.
\subsection{Bi-superintuitionistic and Tense Deductive Systems}\label{sec:preliminaries2}
We begin by reviewing definitions and basic facts concerning the structures dealt with in this section.
\subsubsection{Bi-superintuitionistic Deductive Systems, bi-Heyting Algebras, and bi-Esakia Spaces}
We work in the \emph{bi-superintuitionistic signature}, \[bsi:=\{\land, \lor, \to,\leftarrow, \bot, \top\}.\]
The set $\mathit{Frm_{bsi}}$ of bi-superintuitionistic (bsi) formulae is defined recursively as follows.
\[\varphi::= p\, |\, \bot\, |\,\top\, |\, \varphi\land \varphi\, |\,\varphi\lor \varphi\, |\,\varphi\to \varphi\, |\,\varphi\leftarrow\varphi\]
We let $\gen\varphi:=\varphi\leftarrow \top$ and $\varphi\leftrightarrow \psi:=(\varphi\to \psi)\land (\psi\to \varphi)$. The \emph{bi-intuitionistic propositional calculus} $\mathtt{bi\text{-}IPC}$ is defined as the least logic over $\mathit{Frm_{bsi}}$ containing $\mathtt{IPC}$ together with the axioms
\begin{align*}
&p\to (q\lor (q\leftarrow p)) & (q\leftarrow p)\to \gen(p\to q)\\
&(r\leftarrow (q\leftarrow p))\to ((p\lor q)\leftarrow p) &\neg (p\leftarrow q)\to (p\to q) \\
&\neg\gen (p\leftarrow p)
\end{align*}
and such that if $\varphi, \varphi\to \psi\in \mathtt{bi\text{-}IPC}$ then $\psi\in \mathtt{bi\text{-}IPC}$, and if $\varphi\in \mathtt{bi\text{-}IPC}$ then $\neg \gen \varphi\in \mathtt{bi\text{-}IPC}$. The logic $\mathtt{bi\text{-}IPC}$ was introduced and extensively studied by \citet{Rauszer1974AFotPCoHBL,Rauszer1974SBAaTAtILwDO,Rauszer1977AoKMtHBL}. It was also investigated by \citet{Esakia1975TPoDitILaBL}, and more recently by \citet{Gore2000DILR}.
\begin{definition}
A \emph{bsi-logic} is a logic $\mathtt{L}$ over $\mathit{Frm_{bsi}}$ containing $\mathtt{bi\text{-}IPC}$ and satisfying the following conditions:
\begin{itemize}
\item If $\varphi, \varphi\to \psi\in \mathtt{L}$ then $\psi\in \mathtt{L}$ (MP);
\item If $\varphi\in \mathtt{L}$ then $\neg \gen \varphi\in \mathtt{L}$ (DN).
\end{itemize}
A \emph{bsi-rule system} is a rule system $\mathtt{L}$ over $\mathit{Frm_{bsi}}$ satisfying the following conditions:
\begin{itemize}
\item $\varphi, \varphi\to \psi/\psi\in \mathtt{L}$ (MP-R);
\item $\varphi/\neg\gen\varphi\in \mathtt{L}$ (DN-R);
\item $/\varphi\in \mathtt{L}$ for every $\varphi\in \mathtt{bi\text{-}IPC}$.
\end{itemize}
\end{definition}
If $\mathtt{L}$ is a bsi-logic let $\mathbf{Ext}(\mathtt{L})$ be the set of bsi-logics containing $\mathtt{L}$, and similarly for bsi-rule systems. Then $\mathbf{Ext}(\mathtt{bi\text{-}IPC})$ is the set of all bsi-logics. It is easy to see that $\mathbf{Ext}(\mathtt{bi\text{-}IPC})$ forms a complete lattice, with $\oplus_{\mathbf{Ext}(\mathtt{bi\text{-}IPC})}$ as join and intersection as meet. Observe that for every $\mathtt{L}\in \mathbf{Ext}(\mathtt{bi\text{-}IPC})$ there is a least bsi-rule system containing $/\varphi$ for each $\varphi\in \mathtt{L}$, which we denote by $\mathtt{L_R}$. Then $\mathtt{bi\text{-}IPC_R}$ is the least bsi-rule system and $\mathbf{Ext}(\mathtt{bi\text{-}IPC_R})$ is the set of all bsi-rule systems. Again, it is not hard to verify that $\mathbf{Ext}(\mathtt{bi\text{-}IPC_R})$ forms a complete lattice with $\oplus_{\mathbf{Ext}(\mathtt{bi\text{-}IPC_R})}$ as join and intersection as meet. Henceforth we write both $\oplus_{\mathbf{Ext}(\mathtt{bi\text{-}IPC})}$ and $\oplus_{\mathbf{Ext}(\mathtt{bi\text{-}IPC_R})}$ simply as $\oplus$, leaving context to clarify any ambiguity.
We generalise \Cref{deductivesystemisomorphismsi} to the bsi setting.
\begin{proposition}
The mappings $(\cdot)_{\mathtt{R}}$ and $\mathsf{Taut}(\cdot)$ are mutually inverse complete lattice isomorphisms between $\mathbf{Ext}(\mathtt{bi\text{-}IPC})$ and the sublattice of $\mathbf{Ext}(\mathtt{bi\text{-}IPC_R})$ consisting of all bsi-rule systems $\mathtt{L}$ such that $\mathsf{Taut}(\mathtt{L})_\mathtt{R}=\mathtt{L}$.\label{deductivesystemisomorphismbsi}
\end{proposition}
A \emph{bi-Heyting algebra} is a tuple $\mathfrak{H}=(H, \land, \lor, \to,\leftarrow, 0, 1)$ such that the $\leftarrow$-free reduct of $\mathfrak{H}$ is a Heyting algebra, and such that for all $a, b, c\in H$ we have
\[a\leftarrow b\leq c\iff a\leq b\lor c.\]
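For intuition, note that every Boolean algebra becomes a bi-Heyting algebra by setting $a\leftarrow b:=a\land \neg b$ (a simple illustration of the residuation law above, not part of Rauszer's treatment): if $a\land \neg b\leq c$ then
\[a=(a\land b)\lor (a\land \neg b)\leq b\lor c,\]
and conversely if $a\leq b\lor c$ then $a\land \neg b\leq (b\lor c)\land \neg b\leq c$.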
Bi-Heyting algebras are discussed at length in \cite{Rauszer1974AFotPCoHBL,Rauszer1974SBAaTAtILwDO,Rauszer1977AoKMtHBL} and more recently in \cite{Taylor2017DHA,PedrosoDeLimaMartins2021BGAaCT}. Let $\mathsf{bi\text{-}HA}$ denote the class of all bi-Heyting algebras. By \Cref{syntacticvarietiesuniclasses}, $\mathsf{bi\text{-}HA}$ is a variety.
Let $\mathfrak{L}=(L, \land, \lor, 0, 1)$ be a bounded lattice. The \emph{order dual} of $\mathfrak{L}$ is the lattice $\bar{\mathfrak{L}}=(L, \lor, \land, 1, 0)$, where $\lor$ is viewed as the meet operation and $\land$ as the join operation. We have the following elementary but important fact.
\begin{proposition}[Order duality principle for bi-Heyting algebras]
For every bi-Heyting algebra $\mathfrak{H}$, the order dual $\bar{\mathfrak{H}}$ of $\mathfrak{H}$ is a Heyting algebra, where implication is defined, for all $a, b\in H$, by \label{orderdual2ha}
\[a\leftarrow b:=\bigwedge\{c\in H:a\leq b\lor c\}.\]
\end{proposition}
This observation can be leveraged to establish a number of properties about bi-Heyting algebras via straightforward adaptations of the theory of Heyting algebras. We shall see numerous examples of this strategy in this section.
We write $\mathbf{Var}(\mathsf{bi\text{-}HA})$ and $\mathbf{Uni}(\mathsf{bi\text{-}HA})$ respectively for the lattice of subvarieties and of universal subclasses of $\mathsf{bi\text{-}HA}$. The following result may be proved via the same techniques used to prove \Cref{thm:algebraisationHA}. A recent self-contained proof of \Cref{algebraisation2havar} may be found in \cite[Theorem 2.8.3]{PedrosoDeLimaMartins2021BGAaCT}.
\begin{theorem}
The following maps are pairs of mutually inverse dual isomorphisms:\label{algebraisation2ha}
\begin{enumerate}
\item $\mathsf{Alg}:\mathbf{Ext}(\mathtt{bi\text{-}IPC})\to \mathbf{Var}(\mathsf{bi\text{-}HA})$ and $\mathsf{Th}:\mathbf{Var}(\mathsf{bi\text{-}HA})\to \mathbf{Ext}(\mathtt{bi\text{-}IPC})$;\label{algebraisation2havar}
\item $\mathsf{Alg}:\mathbf{Ext}(\mathtt{bi\text{-}IPC_R})\to \mathbf{Uni}(\mathsf{bi\text{-}HA})$ and $\mathsf{ThR}:\mathbf{Uni}(\mathsf{bi\text{-}HA})\to \mathbf{Ext}(\mathtt{bi\text{-}IPC_R})$.\label{algebraisation2hauni}
\end{enumerate}
\end{theorem}
\begin{corollary}
Every bsi-logic $($resp. bsi-rule system$)$ is complete with respect to some variety $($resp. universal class$)$ of bi-Heyting algebras. \label{completeness_bsi}
\end{corollary}
A \emph{bi-Esakia space} is an Esakia space $\mathfrak{X}=(X, \leq, \mathcal{O})$, satisfying the following additional conditions:
\begin{itemize}
\item ${{\downarrow}}x$ is closed for every $x\in X$;
\item ${{\uparrow}}[U]\in \mathsf{Clop}(\mathfrak{X})$ whenever $U\in \mathsf{Clop}(\mathfrak{X})$.
\end{itemize}
Bi-Esakia spaces were introduced by \citet{Esakia1975TPoDitILaBL}. We let $\mathsf{bi\text{-}Esa}$ denote the class of all bi-Esakia spaces. For $\mathfrak{X}\in \mathsf{bi\text{-}Esa}$, we write $\mathsf{ClopDown}(\mathfrak{X})$ for the set of clopen downsets in $\mathfrak{X}$. If $\mathfrak{X}, \mathfrak{Y}\in \mathsf{bi\text{-}Esa}$, a map $h:\mathfrak{X}\to \mathfrak{Y}$ is called a \emph{bounded morphism} if for all $x, y\in X$, we have that $x\leq y$ implies $h(x)\leq h(y)$, and moreover:
\begin{itemize}
\item $h(x)\leq y$ implies that there is $z\in X$ with $x\leq z$ and $h(z)=y$;
\item $h(x)\geq y$ implies that there is $z\in X$ with $x\geq z$ and $h(z)=y$.
\end{itemize}
If $\mathfrak{X}=(X, \leq , \mathcal{O})$ is an Esakia space, the \emph{order dual} $\bar{\mathfrak{X}}$ of $\mathfrak{X}$ is the structure $\bar{\mathfrak{X}}=(X, \geq, \mathcal{O})$, where $\geq$ is the converse of $\leq$. The algebraic order duality principle of \Cref{orderdual2ha} has the following geometric counterpart.
\begin{proposition}
For every bi-Esakia space $\mathfrak{X}$, the order dual $\bar{\mathfrak{X}}$ of $\mathfrak{X}$ is an Esakia space.
\end{proposition}
As in the case of algebras, a number of results from the theory of Esakia spaces can be transferred smoothly to bi-Esakia spaces in virtue of this fact. For example, we may generalise \Cref{propesa} to the following result.
\begin{proposition}
Let $\mathfrak{X}\in \mathsf{bi\text{-}Esa}$. Then for all $x, y\in X$ we have:\label{propesa2}
\begin{enumerate}
\item If $x\not\leq y$ then there is $U\in \mathsf{ClopUp}(\mathfrak{X})$ such that $x\in U$ and $y\notin U$;\label{propesa21}
\item If $y\not\leq x$ then there is $U\in \mathsf{ClopDown}(\mathfrak{X})$ such that $x\in U$ and $y\notin U$.\label{propesa22}
\end{enumerate}
\end{proposition}
\begin{proof}
(\ref{propesa21}) is just \Cref{propesa}, whereas (\ref{propesa22}) follows from (\ref{propesa21}) and the order-duality principle.
\end{proof}
A \emph{valuation} on a bi-Esakia space $\mathfrak{X}$ is a map $V:\mathit{Prop}\to \mathsf{ClopUp}(\mathfrak{X})\cup \mathsf{ClopDown}(\mathfrak{X})$. Bsi formulae are interpreted over bi-Esakia spaces the same way si formulae are interpreted over Esakia spaces, except for the following additional clause for co-implication (here $\mathfrak{X}\in \mathsf{bi\text{-}Esa}$, $x\in X$ and $V$ is a valuation on $\mathfrak{X}$).
\[\mathfrak{X}, V, x\models \varphi\leftarrow \psi\iff \text{ there is }y\in {\downarrow} x: \mathfrak{X}, V, y\models \varphi\text{ and }\mathfrak{X}, V, y\nvDash \psi\]
It is known that the category of bi-Heyting algebras with corresponding homomorphisms is dually equivalent to the category of bi-Esakia spaces with continuous bounded morphisms. This result generalizes Esakia duality, and is proved in \cite{Esakia1975TPoDitILaBL}. We denote the bi-Esakia space dual to a bi-Heyting algebra $\mathfrak{H}$ as $\mathfrak{H_*}$, and the bi-Heyting algebra dual to a bi-Esakia space $\mathfrak{X}$ as $\mathfrak{X}^*$.
\subsubsection{Tense Deductive Systems, Tense Algebras, and Tense Spaces}
We now work in the \emph{tense signature},
\[ten:=\{\land, \lor, \neg, \square_F, \lozenge_P, \bot, \top\}.\]
We prefer this signature to one with two primitive boxes to strengthen the connection between bi-Heyting coimplication and backwards looking modalities. As usual, we write $\lozenge_F=\neg\square_F\neg$ and $\square_P=\neg \lozenge_P\neg$. The set $\mathit{Frm_{ten}}$ of \emph{tense formulae} is defined recursively as follows:
\[\varphi::= p\, |\, \bot\, |\,\top\, |\,\neg\varphi\, |\, \varphi\land \varphi\, |\,\varphi\lor \varphi\, |\,\square_F \varphi\, |\,\lozenge_P\varphi.\]
We introduce \emph{tense deductive systems}. Good references on tense logics include \cite[Ch. 1, Ch. 4]{BlackburnEtAl2001ML} and \cite{GabbayEtAl1994TLMFaCA}. Tense rule systems have not received much attention in the literature.
\begin{definition}
A \emph{(normal) tense logic} is a logic $\mathtt{M}$ over $\mathit{Frm}_{ten}$ satisfying the following conditions:
\begin{enumerate}
\item $\mathtt{S4}_{\square_F},\mathtt{S4}_{\lozenge_P}\subseteq \mathtt{M}$, where $\mathtt{S4}_{\heartsuit}$ is the normal modal logic $\mathtt{S4}$ formulated in the modal signature with modal operator $\heartsuit\in \{\square_F, \lozenge_P\}$;
\item $\varphi\to \square_F\lozenge_P\varphi\in \mathtt{M}$;
\item $\varphi\to \psi, \varphi\in \mathtt{M}$ implies $\psi\in \mathtt{M}$ (MP);
\item $\varphi\in \mathtt{M}$ implies $\square_F \varphi\in \mathtt{M}$ (NEC$_F$);
\item $\varphi\in \mathtt{M}$ implies $\square_P\varphi\in \mathtt{M}$ (NEC$_P$);
\end{enumerate}
We let $\mathtt{S4.t}$ denote the least normal tense logic. A \emph{(normal) tense rule system} is a rule system $\mathtt{M}$ over $\mathit{Frm}_{ten}$ satisfying the following requirements:
\begin{enumerate}
\item $\varphi, \varphi\to \psi/\psi\in \mathtt{M}$ (MP-R);
\item $\varphi/\square_F\varphi\in \mathtt{M}$ (NEC$_F$-R);
\item $\varphi/\square_P\varphi\in \mathtt{M}$ (NEC$_P$-R);
\item $/\varphi\in \mathtt{M}$ whenever $\varphi\in \mathtt{S4.t}$.
\end{enumerate}
\end{definition}
We note that, for convenience, we are using a somewhat non-standard notion of a tense deductive system, in that we require tense deductive systems to contain $\mathtt{S4}$. It is more customary to require only that they contain $\mathtt{K}$.
If $\mathtt{M}$ is a tense logic let $\mathbf{NExt}(\mathtt{M})$ be the set of normal tense logics containing $\mathtt{M}$, and similarly for tense rule systems. Then $\mathbf{NExt}(\mathtt{S4.t})$ is the set of all tense logics. It is easily checked that $\mathbf{NExt}(\mathtt{S4.t})$ is a complete lattice, with $\oplus_{\mathbf{NExt}(\mathtt{S4.t})}$ as join and intersection as meet. Note that for every $\mathtt{M}\in \mathbf{NExt}(\mathtt{S4.t})$ there is always a least tense rule system containing $/\varphi$ for each $\varphi\in \mathtt{M}$, which we denote by $\mathtt{M_R}$. Then $\mathtt{S4.t_R}$ is the least tense rule system and $\mathbf{NExt}(\mathtt{S4.t_R})$ is the set of all tense rule systems. Again, one can easily verify that $\mathbf{NExt}(\mathtt{S4.t_R})$ forms a complete lattice with $\oplus_{\mathbf{NExt}(\mathtt{S4.t_R})}$ as join and intersection as meet. As usual, we write both $\oplus_{\mathbf{NExt}(\mathtt{S4.t})}$ and $\oplus_{\mathbf{NExt}(\mathtt{S4.t_R})}$ simply as $\oplus$.
We have the following tense counterpart of \Cref{deductivesystemisomorphismbsi}.
\begin{proposition}
The mappings $(\cdot)_{\mathtt{R}}$ and $\mathsf{Taut}(\cdot)$ are mutually inverse complete lattice isomorphisms between $\mathbf{NExt}(\mathtt{S4.t})$ and the sublattice of $\mathbf{NExt}(\mathtt{S4.t_R})$ consisting of all tense rule systems $\mathtt{M}$ such that $\mathsf{Taut}(\mathtt{M})_\mathtt{R}=\mathtt{M}$.\label{deductivesystemisomorphismten}
\end{proposition}
A \emph{tense algebra} is a structure $\mathfrak{A}=(A, \land, \lor, \neg, \square_F, \lozenge_P, 0, 1)$, such that both the $\square_F$-free and the $\lozenge_P$-free reducts of $\mathfrak{A}$ are closure algebras, and $\square_F, \lozenge_P$ form a residual pair, that is, for all $a, b\in A$ the following equivalence holds:
\[\lozenge_P a \leq b\iff a\leq \square_F b.\]
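For intuition (a standard illustration, supplementary to the text), consider the powerset of a preorder $(X, R)$ with the operations
\[\square_F U=\{x\in X: R[x]\subseteq U\},\qquad \lozenge_P U=R[U].\]
These form a residual pair: $R[U]\subseteq V$ says precisely that every $R$-successor of every point of $U$ lies in $V$, which is exactly the condition $U\subseteq \square_F V$.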
Tense algebras are extensively discussed in, e.g., \cite{Kowalski1998VoTA} and \cite[Section 8.1]{Venema2007AaC}. We let $\mathsf{Ten}$ denote the class of tense algebras. It is well known that $\mathsf{Ten}$ is equationally definable
(see, e.g., \cite[Proposition 8.5]{Venema2007AaC}), and hence is a variety by \Cref{syntacticvarietiesuniclasses}. We let $\mathbf{Var}(\mathsf{Ten})$ and $\mathbf{Uni}(\mathsf{Ten})$ be the lattice of subvarieties and of universal subclasses of $\mathsf{Ten}$ respectively. The following result can be obtained by similar techniques as \Cref{thm:algebraisationMA}.
\begin{theorem}
The following maps are pairs of mutually inverse dual isomorphisms:\label{algebraisationtense}
\begin{enumerate}
\item $\mathsf{Alg}:\mathbf{NExt}(\mathtt{S4.t})\to \mathbf{Var}(\mathsf{Ten})$ and $\mathsf{Th}:\mathbf{Var}(\mathsf{Ten})\to \mathbf{NExt}(\mathtt{S4.t})$;\label{algebraisationtensevar}
\item $\mathsf{Alg}:\mathbf{NExt}(\mathtt{S4.t_R})\to \mathbf{Uni}(\mathsf{Ten})$ and $\mathsf{ThR}:\mathbf{Uni}(\mathsf{Ten})\to \mathbf{NExt}(\mathtt{S4.t_R})$.\label{algebraisationtenseuni}
\end{enumerate}
\end{theorem}
\begin{corollary}
Every tense logic $($resp. tense rule system$)$ is complete with respect to some variety $($resp. universal class$)$ of tense algebras. \label{completeness_ten}
\end{corollary}
A \emph{tense space} is an $\mathtt{S4}$-modal space $\mathfrak{X}=(X, R, \mathcal{O})$, satisfying the following additional conditions:
\begin{itemize}
\item $R^{-1}(x)$ is closed for every $x\in X$;
\item $R[U]\in \mathsf{Clop}(\mathfrak{X})$ whenever $U\in \mathsf{Clop}(\mathfrak{X})$.
\end{itemize}
It should be clear from the above definition that tense spaces, like bi-Esakia spaces, also satisfy an order-duality principle.
\begin{proposition}
For every tense space $\mathfrak{X}=(X, R, \mathcal{O})$, its \emph{order dual} $\bar{\mathfrak{X}}=(X, \breve{R}, \mathcal{O})$, where $\breve{R}$ is the converse of $R$, is an $\mathtt{S4}$-modal space.
\end{proposition}
If $\mathfrak{X}, \mathfrak{Y}$ are tense spaces, a map $f:\mathfrak{X}\to \mathfrak{Y}$ is called a \emph{bounded morphism} if for all $x, y\in X$, if $Rxy$ then $Rf(x)f(y)$, and moreover for all $x\in X$ and $y\in Y$ the following conditions hold:
\begin{itemize}
\item If $Rf(x)y$ then there is $z\in X$ such that $Rxz$ and $f(z)=y$;
\item If $Ryf(x)$ then there is $z\in X$ such that $Rzx$ and $f(z)=y$.
\end{itemize}
A \emph{valuation} on a tense space $\mathfrak{X}$ is a map $V:\mathit{Prop}\to \mathsf{Clop}(\mathfrak{X})$. The geometrical semantics of tense logics and rule systems over tense spaces is a routine generalisation of the semantics of modal logics and rule systems on modal spaces, using $R$ to interpret $\square_F$ and $\breve{R}$ to interpret $\square_P$.
We list some important properties of tense spaces, which are obtained straightforwardly from \Cref{props4} and the order-duality principle.
\begin{proposition}
Let $\mathfrak{X}\in \mathsf{Spa}(\mathtt{S4.t})$ and $U\in \mathsf{Clop}(\mathfrak{X})$. Then the following conditions hold: \label{proptensespace}
\begin{enumerate}
\item The sets $\mathit{max}_R(U)$, $\mathit{min}_R(U)$ are closed;
\item If $x\in U$ then there is $y\in \mathit{qmax}_R(U)$ such that $Rxy$, and there is $z\in \mathit{qmin}_R(U)$ such that $Rzx$.
\end{enumerate}
\end{proposition}
As a straightforward extension of the duality between modal algebras and modal spaces, one can prove that the category of tense algebras with homomorphisms is dually equivalent to the category of tense spaces with continuous bounded morphisms. We denote the tense space dual to a tense algebra $\mathfrak{A}$ as $\mathfrak{A_*}$, and the tense algebra dual to an tense space $\mathfrak{X}$ as $\mathfrak{X}^*$.
We will pay particular attention to tense algebras and spaces validating the tense logic $\mathtt{GRZ.T}$ below.
\begin{align*}
\mathtt{GRZ.T}:=\mathtt{S4.t}&\oplus \square_F(\square_F( p\to\square_F p)\to p)\to p\\
&\oplus p\to \lozenge_P(p\land \neg \lozenge_P(\lozenge_P p\land \neg p)).
\end{align*}
We name this logic $\mathtt{GRZ.T}$ rather than $\mathtt{GRZ.t}$ to emphasize that the $\mathtt{GRZ}$-axiom is required for both operators rather than just for $\square_F$. We let $\mathsf{GRZ.T}:=\mathsf{Alg}(\mathtt{GRZ.T})$. Clearly, for any $\mathfrak{A}\in \mathsf{Ten}$ we have $\mathfrak{A}\in \mathsf{GRZ.T}$ iff every $a\in A$ satisfies both the inequalities
\begin{gather*}
\square_F(\square_F( a\to\square_F a)\to a)\leq a,\\
a\leq \lozenge_P(a\land \neg \lozenge_P(\lozenge_P a\land \neg a)).
\end{gather*}
The following proposition is a counterpart to \Cref{propgrz1}, and is proved straightforwardly using the latter and the order-duality principle.
\begin{proposition}
For every $\mathtt{GRZ.T}$-space $\mathfrak{X}$ and ${U}\in \mathsf{Clop}(\mathfrak{X})$, the following hold: \label{propGrz.T}
\begin{enumerate}
\item $\mathit{qmax}_R(U)\subseteq \mathit{max}_R(U)$, and $\mathit{qmin}_R(U)\subseteq \mathit{min}_R(U)$;\label{propGrz.T:1}
\item The sets $\mathit{max}_R(U)$ and $\mathit{min}_R(U)$ are closed;\label{propGrz.T:2}
\item For every $x\in U$ there are $y\in \mathit{pas}_R(U)$ such that $Rxy$, and $z\in \mathit{pas}_{\breve{R}}(U)$ such that $Rzx$;\label{propGrz.T:3}
\item $\mathit{max}_R(U)\subseteq\mathit{pas}_R(U)$ and $\mathit{min}_R(U)\subseteq\mathit{pas}_{\breve{R}}(U)$. \label{propGrz.T:4}
\end{enumerate}
\end{proposition}
Recall that for $\mathfrak{X}$ a $\mathtt{GRZ.T}$-space, a set $U\subseteq X$ is said to \emph{cut} a cluster $C\subseteq X$ when both $U\cap C\neq\varnothing$ and $U\smallsetminus C\neq\varnothing$. As a consequence of \Cref{propGrz.T:4} in \Cref{propGrz.T} above, we obtain in particular that in any $\mathtt{GRZ.T}$-space $\mathfrak{X}$, no cluster $C\subseteq X$ can be cut by either of $\mathit{max}_R(U), \mathit{pas}_R(U),\mathit{min}_R(U), \mathit{pas}_{\breve{R}}(U)$ for any $U\in \mathsf{Clop}(\mathfrak{X})$.
\subsection{Stable Canonical Rules for Bi-superintuitionistic and Tense Rule Systems}\label{sec:scr2}
In this section we generalise the si and modal stable canonical rules from \Cref{sec:scr1} to the bsi and tense setting respectively. While bsi and tense stable canonical rules are not discussed in the existing literature, the differences between their theory and that of si and modal stable canonical rules are few and inessential. In particular, all proofs of results in this section are straightforward adaptations of corresponding results in \Cref{sec:scr1}, which is why we omit most of them.
\subsubsection{Bi-superintuitionistic Case} We begin by defining bsi stable canonical rules.
\label{sec:bsirules}
\begin{definition}
Let $\mathfrak{H}\in \mathsf{bi\text{-}HA}$ be finite and $D^\to, D^\leftarrow\subseteq H\times H$. For every $a\in H$ introduce a fresh propositional variable $p_a$. The \emph{bsi stable canonical rule} of $(\mathfrak{H}, D^\to, D^\leftarrow)$ is defined as the rule $\scrbsi{H}{D}=\Gamma/\Delta$, where
\begin{align*}
\Gamma=&\{p_0\leftrightarrow 0\}\cup\{p_1\leftrightarrow 1\}\cup\\
&\{p_{a\land b}\leftrightarrow p_a\land p_b:a,b\in H\}\cup \{p_{a\lor b}\leftrightarrow p_a\lor p_b:a,b\in H\}\cup\\
&\{p_{a\to b}\leftrightarrow p_a\to p_b:(a, b)\in D^\to\}\cup \{p_{a\leftarrow b}\leftrightarrow p_a\leftarrow p_b:(a, b)\in D^\leftarrow\}\\
\Delta=&\{p_a\leftrightarrow p_b:a, b\in H\text{ with } a\neq b\}.
\end{align*}
\end{definition}
The notion of a stable map between bi-Heyting algebras is defined exactly as in the Heyting case, i.e., stable maps are simply bounded lattice homomorphisms. We note that for any stable map $h:\mathfrak{H}\to \mathfrak{K}$ with $\mathfrak{H}, \mathfrak{K}\in \mathsf{bi\text{-}HA}$ and all $a, b\in H$ we also have
\[h(a\leftarrow b)\geq h(a)\leftarrow h(b).\]
Indeed, this is obvious in view of the order-duality principle. If $D\subseteq H\times H$ and $\heartsuit \in \{\to, \leftarrow\}$, we say that $h$ satisfies the \emph{$\heartsuit$-bounded domain condition} (BDC$^\heartsuit$) for $D$ if $h(a\heartsuit b)=h(a)\heartsuit h(b)$ for every $(a, b)\in D$.
If $D^\to, D^\leftarrow\subseteq H\times H$, for brevity we say that $h$ satisfies the BDC for $(D^\to, D^\leftarrow)$ to mean that $h$ satisfies the BDC$^\to$ for $D^\to$ and the BDC$^\leftarrow$ for $D^\leftarrow$.
The next two results characterise algebraic refutation conditions for bsi stable canonical rules.
\begin{proposition}
For all finite $\mathfrak{H}\in \mathsf{bi\text{-}HA}$ and $D^\to, D^\leftarrow\subseteq H\times H$, we have $\mathfrak{H}\nvDash \scrbsi{H}{D}$.
\end{proposition}
\begin{proposition}
For every bsi stable canonical rule $\scrbsi{H}{D}$ and every $\mathfrak{K}\in \mathsf{bi\text{-}HA}$, we have $\mathfrak{K}\nvDash \scrbsi{H}{D}$ iff there is a stable embedding $h:\mathfrak{H}\to \mathfrak{K}$ satisfying the BDC for $(D^\to, D^\leftarrow)$.
\end{proposition}
We now characterise geometric refutation conditions of bsi stable canonical rules on bi-Esakia spaces. Since bi-Esakia spaces are Esakia spaces, the notion of a stable map applies. Let $\mathfrak{X}, \mathfrak{Y}\in \mathsf{bi\text{-}Esa}$ and $\mathfrak{d}\subseteq {Y}$. A stable map $f:\mathfrak{X}\to \mathfrak{Y}$ is said to satisfy
\begin{itemize}
\item The BDC$^\to$ for $\mathfrak{d}$ if for all $x\in X$ we have
\[{\uparrow} f(x)\cap \mathfrak{d}\neq\varnothing\Rightarrow f[{\uparrow} x]\cap \mathfrak{d}\neq\varnothing;\]
\item The BDC$^\leftarrow$ for $\mathfrak{d}$ if for all $x\in X$ we have
\[{\downarrow} f(x)\cap \mathfrak{d}\neq\varnothing\Rightarrow f[{\downarrow} x]\cap \mathfrak{d}\neq\varnothing.\]
\end{itemize}
If $\mathfrak{D}\subseteq \wp(Y)$, we say that $f$ satisfies the BDC$^\heartsuit$ for $\mathfrak{D}$ when it does for each $\mathfrak{d}\in \mathfrak{D}$, where $\heartsuit\in \{\to, \leftarrow\}$. Given $\mathfrak{D}^\to, \mathfrak{D}^\leftarrow \subseteq \wp(Y)$ we write that $f$ satisfies the BDC for $(\mathfrak{D}^\to, \mathfrak{D}^\leftarrow)$ if $f$ satisfies the BDC$^\to$ for $\mathfrak{D}^\to$ and the BDC$^\leftarrow$ for $\mathfrak{D}^\leftarrow$. Finally, if $\scrbsi{H}{D}$ is a bsi stable canonical rule consider $\mathfrak{X}:=\mathfrak{H}_*$ and let
\[\mathfrak{D}^\heartsuit:=\{\mathfrak{d}^\heartsuit_{(a, b)}:(a, b)\in D^\heartsuit\}\]
where
\[\mathfrak{d}^\heartsuit_{(a, b)}:= \beta (a)\smallsetminus \beta (b)\]
for $\heartsuit\in \{\to, \leftarrow\}$.
\begin{proposition}
For any bi-Esakia space $\mathfrak{X}$ and any bsi stable canonical rule $\scrbsi{H}{D}$, we have $\mathfrak{X}\nvDash\scrbsi{H}{D}$ iff there is a continuous stable surjection $f:\mathfrak{X}\to \mathfrak{H}_*$ satisfying the BDC for $(\mathfrak{D}^\to, \mathfrak{D}^\leftarrow)$ defined as above.\label{refutspace2}
\end{proposition}
In view of \Cref{refutspace2}, in geometric settings we prefer to write a bsi stable canonical rule $\scrbsi{H}{D}$ as $\scrbsi{H_*}{\mathfrak{D}}$.
We now elucidate the notion of filtration for bi-Heyting algebras presupposed by our bsi stable canonical rules.
\begin{definition}
Let $\mathfrak{H}$ be a bi-Heyting algebra, $V$ a valuation on $\mathfrak{H}$, and $\Theta$ a finite, subformula closed set of formulae. A (finite) model $(\mathfrak{K}', V')$ is called a (\emph{finite}) \emph{filtration of $(\mathfrak{H}, V)$ through $\Theta$} if the following hold:
\begin{enumerate}
\item $\mathfrak{K}'=(\mathfrak{K}, \to, \leftarrow)$, where $\mathfrak{K}$ is the bounded sublattice of $\mathfrak{H}$ generated by $\bar V[\Theta]$;
\item $V(p)=V'(p)$ for every propositional variable $p\in \Theta$;
\item The inclusion $\subseteq:\mathfrak{K}'\to \mathfrak{H}$ is a stable embedding satisfying the BDC for $(D^\to, D^\leftarrow)$, where
\[D^\heartsuit :=\{(\bar V(\varphi), \bar V(\psi)):\varphi\heartsuit \psi\in \Theta\}\]
for $\heartsuit\in \{\to, \leftarrow\}$.
\end{enumerate}
\end{definition}
\begin{theorem}[Filtration theorem for bi-Heyting algebras]
Let $\mathfrak{H}\in \mathsf{bi\text{-}HA}$ be a bi-Heyting algebra, $V$ a valuation on $\mathfrak{H}$, and $\Theta$ a finite, subformula closed set of formulae. If $(\mathfrak{K}', V')$ is a filtration of $(\mathfrak{H}, V)$ through $\Theta$ then for every $\varphi\in \Theta$ we have
\[\bar V(\varphi)=\bar V'(\varphi).\]
Consequently, for every bsi rule $\Gamma/\Delta$ such that $\gamma, \delta\in \Theta$ for each $\gamma\in \Gamma$ and $\delta\in \Delta$ we have
\[\mathfrak{H}, V\models \Gamma/\Delta\iff \mathfrak{K}', V'\models \Gamma/\Delta.\]
\end{theorem}
The next lemma is a counterpart to \Cref{rewritesi}.
\begin{lemma}
For every bsi rule $\Gamma/\Delta$ there is a finite set $\Xi$ of bsi stable canonical rules such that for any $\mathfrak{K}\in \mathsf{bi\text{-}HA}$ we have that $\mathfrak{K}\nvDash \Gamma/\Delta$ iff there is $\scrbsi{H}{D}\in \Xi$ such that $\mathfrak{K}\nvDash \scrbsi{H}{D}$.\label{rewritebsi}
\end{lemma}
\begin{proof}
The proof is a straightforward generalisation of the proof of \Cref{rewritesi}, using the fact that every finite bounded distributive lattice $\mathfrak{J}$ may be expanded to a bi-Heyting algebra $\mathfrak{J}'=(\mathfrak{J}, \rightsquigarrow, \leftsquigarrow)$ by setting:
\begin{align*}
a\rightsquigarrow b&:=\bigvee\{c\in J: a\land c\leq b\}\\
a\leftsquigarrow b&:=\bigwedge\{c\in J: a\leq b\lor c\}.
\end{align*}
\end{proof}
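For a concrete instance of this expansion, let $\mathfrak{J}$ be a finite chain $0<a_1<\dots<a_n<1$. A direct computation then gives
\begin{align*}
a\rightsquigarrow b&=\begin{cases}1&\text{if }a\leq b\\ b&\text{otherwise}\end{cases}&
a\leftsquigarrow b&=\begin{cases}0&\text{if }a\leq b\\ a&\text{otherwise,}\end{cases}
\end{align*}
so in particular every finite chain carries a bi-Heyting structure.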
Reasoning as in the proof of \Cref{axiomatisationsi} we obtain the following axiomatisation result.
\begin{theorem}
Every bsi-rule system $\mathtt{L}\in \mathbf{Ext}(\mathtt{bi\text{-}IPC_R})$ is axiomatisable over $\mathtt{bi\text{-}IPC_R}$ by some set of bsi stable canonical rules. \label{axiomatisationbsi}
\end{theorem}
\subsubsection{Tense Case} We now turn to tense stable canonical rules. \label{sec:tenrules}
\begin{definition}
Let $\mathfrak{A}\in \mathsf{Ten}$ be finite and $D^{\square_F}, D^{\lozenge_P}\subseteq A$. For every $a\in A$ introduce a fresh propositional variable $p_a$. The \emph{tense stable canonical rule} of $(\mathfrak{A}, D^{\square_F}, D^{\lozenge_P})$ is defined as the rule $\scrten{A}{D}=\Gamma/\Delta$, where
\begin{align*}
\Gamma=&\{p_{a\land b}\leftrightarrow p_a\land p_b:a,b\in A\}\cup \{p_{a\lor b}\leftrightarrow p_a\lor p_b:a,b\in A\}\cup \\
&\{p_{\neg a}\leftrightarrow \neg p_a:a\in A\}\cup\\
&\{\square_Fp_a\to p_{\square_F a}: a\in A\}\cup \{p_{\lozenge_P a}\to \lozenge_Pp_a: a\in A\}\cup\\
&\{p_{\square_F a}\to \square_Fp_a: a\in D^{\square_F}\}\cup \{\lozenge_P p_a\to p_{\lozenge_P a}: a\in D^{\lozenge_P}\}\\
\Delta=&\{p_a:a\in A\smallsetminus \{1\}\}.
\end{align*}
\end{definition}
If $\mathfrak{A}, \mathfrak{B}\in \mathsf{Ten}$ are tense algebras, a map $h:\mathfrak{A}\to \mathfrak{B}$ is called \emph{stable} if for every $a\in A$ the following conditions hold:
\[h(\square_F a)\leq \square_F h(a)\qquad \lozenge_P h(a)\leq h(\lozenge_P a).\]
If $D\subseteq A$ and $\heartsuit \in \{\square_F, \lozenge_P\}$, we say that $h$ satisfies the \emph{$\heartsuit$-bounded domain condition} (BDC$^\heartsuit$) for $D$ if $h(\heartsuit a)= \heartsuit h(a)$ for every $a\in D$. If $D^{\square_F}, D^{\lozenge_P}\subseteq A$, for brevity we say that $h$ satisfies the BDC for $(D^{\square_F}, D^{\lozenge_P})$ to mean that $h$ satisfies the BDC$^{\square_F}$ for $D^{\square_F}$ and the BDC$^{\lozenge_P}$ for $D^{\lozenge_P}$.
We outline algebraic refutation conditions for tense stable canonical rules.
\begin{proposition}
For all finite $\mathfrak{A}\in \mathsf{Ten}$ and $D^{\square_F}, D^{\lozenge_P}\subseteq A$, we have $\mathfrak{A}\nvDash \scrten{A}{D}$. \label{refutationten1}
\end{proposition}
\begin{proposition}
For every tense stable canonical rule $\scrten{A}{D}$ and any $\mathfrak{B}\in \mathsf{Ten}$, we have $\mathfrak{B}\nvDash \scrten{A}{D}$ iff there is a stable embedding $h:\mathfrak{A}\to \mathfrak{B}$ satisfying the BDC for $(D^{\square_F}, D^{\lozenge_P})$.\label{refutationten2}
\end{proposition}
Tense spaces are modal spaces, therefore the notion of a stable map applies. Let $\mathfrak{X}, \mathfrak{Y}$ be tense spaces and $\mathfrak{d}\subseteq {Y}$. A stable map $f:\mathfrak{X}\to \mathfrak{Y}$ is said to satisfy
\begin{itemize}
\item The BDC$^{\square_F}$ for $\mathfrak{d}$ if for all $x\in X$ we have
\[R[f(x)]\cap \mathfrak{d}\neq\varnothing\Rightarrow f[R[x]]\cap \mathfrak{d}\neq\varnothing;\]
\item The BDC$^{\lozenge_P}$ for $\mathfrak{d}$ if for all $x\in X$ we have
\[\breve{R}[f(x)]\cap \mathfrak{d}\neq\varnothing\Rightarrow f[\breve{R}[x]]\cap \mathfrak{d}\neq\varnothing.\]
\end{itemize}
If $\mathfrak{D}\subseteq \wp(Y)$, we say that $f$ satisfies the BDC$^\heartsuit$ for $\mathfrak{D}$ when it does for each $\mathfrak{d}\in \mathfrak{D}$, where $\heartsuit\in \{\square_F, \lozenge_P\}$. Given $\mathfrak{D}^{\square_F}, \mathfrak{D}^{\lozenge_P} \subseteq \wp(Y)$ we write that $f$ satisfies the BDC for $(\mathfrak{D}^{\square_F}, \mathfrak{D}^{\lozenge_P})$ if $f$ satisfies the BDC$^{\square_F}$ for $\mathfrak{D}^{\square_F}$ and the BDC$^{\lozenge_P}$ for $\mathfrak{D}^{\lozenge_P}$. Finally, if $\scrten{A}{D}$ is a tense stable canonical rule consider $\mathfrak{X}:=\mathfrak{A}_*$ and for $\heartsuit\in \{\square_F, \lozenge_P\}$ let
\[\mathfrak{D}^{\heartsuit}:=\{\mathfrak{d}^\heartsuit_{a}:a\in D^\heartsuit\}\]
where for each $a\in A$ we have
\begin{align*}
\mathfrak{d}^{\square_F}_{a}&:= -\beta (a)\\
\mathfrak{d}^{\lozenge_P}_a&:= \beta (a)
\end{align*}
\begin{proposition}
For any tense space $\mathfrak{X}$ and any tense stable canonical rule $\scrten{A}{D}$, we have $\mathfrak{X}\nvDash\scrten{A}{D}$ iff there is a continuous stable surjection $f:\mathfrak{X}\to \mathfrak{A}_*$ satisfying the BDC for $(\mathfrak{D}^{\square_F}, \mathfrak{D}^{\lozenge_P})$ defined as above.\label{refutspaceten}
\end{proposition}
In view of \Cref{refutspaceten}, in geometric settings we prefer to write a tense stable canonical rule $\scrten{A}{D}$ as $\scrten{A_*}{\mathfrak{D}}$.
We now introduce the notion of filtration implicit in tense stable canonical rules. Filtration for tense logics was considered, e.g., in \cite{Wolter1997CaDoTLCRtLaK} from a frame-theoretic perspective. Here we prefer an algebraic approach in line with \Cref{ch:1}.
\begin{definition}
Let $\mathfrak{A}$ be a tense algebra, $V$ a valuation on $\mathfrak{A}$, and $\Theta$ a finite, subformula closed set of formulae. A (finite) model $(\mathfrak{B}', V')$ is called a (\emph{finite}) \emph{filtration of $(\mathfrak{A}, V)$ through $\Theta$} if the following hold:
\begin{enumerate}
\item $\mathfrak{B}'=(\mathfrak{B}, \square_F, \lozenge_P)$, where $\mathfrak{B}$ is the Boolean subalgebra of $\mathfrak{A}$ generated by $\bar V[\Theta]$;
\item $V(p)=V'(p)$ for every propositional variable $p\in \Theta$;
\item The inclusion $\subseteq:\mathfrak{B}'\to \mathfrak{A}$ is a stable embedding satisfying the BDC for $(D^{\square_F}, D^{\lozenge_P})$, where
\[D^{\heartsuit}:=\{\bar V(\varphi):\heartsuit\varphi\in \Theta\}\]
for $\heartsuit\in \{\square_F, \lozenge_P\}$.
\end{enumerate}
\end{definition}
\begin{theorem}[Filtration theorem for tense algebras]
Let $\mathfrak{A}\in \mathsf{Ten}$ be a tense algebra, $V$ a valuation on $\mathfrak{A}$, and $\Theta$ a finite, subformula closed set of formulae. If $(\mathfrak{B}', V')$ is a filtration of $(\mathfrak{A}, V)$ through $\Theta$ then for every $\varphi\in \Theta$ we have
\[\bar V(\varphi)=\bar V'(\varphi).\]
Consequently, for every tense rule $\Gamma/\Delta$ such that $\gamma, \delta\in \Theta$ for each $\gamma\in \Gamma$ and $\delta\in \Delta$ we have
\[\mathfrak{A}, V\models \Gamma/\Delta\iff \mathfrak{B}', V'\models \Gamma/\Delta.\]
\end{theorem}
Just like in the $\mathtt{S4}$ case, not every filtration of some model based on a tense algebra is itself based on a tense algebra, because the $\mathtt{S4}$-axiom for either $\square_F$ or $\lozenge_P$ may not be preserved. However, given any model based on a tense algebra, there is always a method for filtrating it through any finite set of formulae which yields a model based on a tense algebra.
\begin{definition}
Let $\mathfrak{A}\in \mathsf{Ten}$, $V$ a valuation on $\mathfrak{A}$ and $\Theta$ a finite, subformula closed set of formulae. The (least) \emph{transitive filtration} of $(\mathfrak{A}, V)$ is the pair $(\mathfrak{B}', V')$ with $\mathfrak{B}'=(\mathfrak{B}, \blacksquare_F,\blacklozenge_P)$, where $\mathfrak{B}$ and $V'$ are as per \Cref{filtrmod}, and for all $b\in B$ we have
\begin{align*}
\blacksquare_F b&:=\bigvee\{\square_F a: \square_F a\leq \square_F b\text{ and }a, \square_F a\in B\}\\
\blacklozenge_P b&:=\bigwedge\{\lozenge_P a:\ \lozenge_P b\leq\lozenge_P a \text{ and }a, \lozenge_P a\in B\}
\end{align*}
\end{definition}
Via duality, it is not difficult to see that the least transitive filtration of any model based on a tense algebra is again based on a tense algebra.
At this stage, reasoning as in the proof of \Cref{rewritemod} using transitive filtrations we obtain the following results.
\begin{lemma}
For every tense rule $\Gamma/\Delta$ there is a finite set $\Xi$ of tense stable canonical rules such that for any $\mathfrak{K}\in \mathsf{Ten}$ we have that $\mathfrak{K}\nvDash \Gamma/\Delta$ iff there is $\scrten{A}{D}\in \Xi$ such that $\mathfrak{K}\nvDash \scrten{A}{D}$.\label{rewritetense}
\end{lemma}
\begin{theorem}
Every tense rule system is axiomatisable over $\mathtt{S4.t_R}$ by some set of tense stable canonical rules. \label{axiomatisationten}
\end{theorem}
\subsubsection{Comparison with Je\v{r}ábek-style Canonical Rules}\label{sec:comparison}
Our bsi and tense stable canonical rules generalise si and modal stable canonical rules in a way that mirrors the simple and intimate connection between Heyting and bi-Heyting algebras on the one hand, and modal and tense algebras on the other, explicated by the order-duality principles. Just as a bi-Heyting algebra is simply a Heyting algebra whose order-dual is also a Heyting algebra, so every bsi stable canonical rule is a sort of ``independent fusion'' of two si stable canonical rules whose associated Heyting algebras are order-dual to each other. Similarly for the tense case.
Je\v{r}ábek-style si and modal canonical rules (like Zakharyaschev-style si and modal canonical formulae), by contrast, do not generalise as smoothly to the bsi and tense case. Algebraically, a Je\v{r}ábek-style si canonical rule may be defined as follows (cf. \cite{BezhanishviliBezhanishvili2009AAAtCFIC,BezhanishviliEtAl2016CSL}).
\begin{definition}
Let $\mathfrak{H}\in \mathsf{HA}$ be finite and let $D\subseteq H$. The \emph{si canonical rule} of $(\mathfrak{H}, D)$ is the rule $\zeta(\mathfrak{H}, D)=\Gamma/\Delta$, where
\begin{align*}
\Gamma:=&\{p_0\leftrightarrow \bot\}\cup\\
&\{p_{a\land b}\leftrightarrow p_a\land p_b:a, b\in H\}\cup\{p_{a\to b}\leftrightarrow p_a\to p_b:a, b\in H\}\cup\\
&\{p_{a\lor b}\leftrightarrow p_a\lor p_b:(a, b)\in D\}\\
\Delta:=&\{p_a\leftrightarrow p_b:a, b\in H\text{ with }a\neq b\}.
\end{align*}
\end{definition}
Generalising the proof of \cite[Corollary 5.10]{BezhanishviliEtAl2016CSL}, one can show that every si rule is equivalent to finitely many si canonical rules. The key ingredient in this proof is a characterisation of the refutation conditions for si canonical rules: $\zeta(\mathfrak{H}, D)$ is refuted by a Heyting algebra $\mathfrak{K}$ iff there is a $(\land, \to, 0)$-embedding $h:\mathfrak{H}\to \mathfrak{K}$ preserving $\lor$ on elements from $D$. Because $(\land, \to, 0)$-algebras are locally finite (a result known as \emph{Diego's theorem}), one can then reason as in the proof of, e.g., \Cref{rewritesi} to reach the desired result.
It should be clear that if one defined the bsi canonical rule $\zeta_B(\mathfrak{H}, D, D')$ by combining the rules $\zeta(\mathfrak{H}, D)$ and $\zeta(\bar{\mathfrak{H}}, D')$ the same way bsi stable canonical rules combine si stable canonical rules, then $\zeta_B(\mathfrak{H}, D, D')$ would be refuted by a bi-Heyting algebra $\mathfrak{K}$ iff there is a bi-Heyting algebra embedding $h:\mathfrak{H}\to \mathfrak{K}$.
Since the variety of bi-Heyting algebras is not locally finite, this refutation condition is clearly too strong to deliver a result to the effect that every bsi rule is equivalent to a set of bsi canonical rules. Without such a result, in turn there is no hope of axiomatising every rule system over $\mathtt{bi\text{-}IPC}$ by means of bsi canonical rules.
Similar remarks hold in the tense case, although here the details are too complex to do them justice in the limited space we have at our disposal, so we limit ourselves to a rough sketch. \citet{BezhanishviliEtAl2011AAAtSLMC} show that the proof of the fact that every modal formula is equivalent, over $\mathtt{S4}$, to finitely many Zakharyaschev-style canonical formulae of closure algebras rests on an application of Diego's theorem \cite[cf.][Main Lemma]{BezhanishviliEtAl2011AAAtSLMC}. This has to do with how selective filtrations of closure algebras are constructed. Given a closure algebra $\mathfrak{A}$ refuting a rule $\Gamma/\Delta$, a key step in constructing a finite selective filtration of $\mathfrak{A}$ through $\mathit{Sfor}(\Gamma/\Delta)$ consists in generating a $(\land, \to, 0)$-subalgebra of $\rho \mathfrak{A}$ from a finite subset of $O(A)$. This structure is guaranteed to be finite by Diego's theorem. On the most obvious ways of generalising this construction to tense algebras, we would need to replace this step with one of the following:
\begin{enumerate}
\item Generate both a $(\land, \to, 0)$-subalgebra of $\rho \mathfrak{A}$ and a $(\lor, \leftarrow, 1)$-subalgebra of $\rho \mathfrak{A}$ from a finite subset of $O(A)$;
\item Generate a bi-Heyting subalgebra of $\rho \mathfrak{A}$ from a finite subset of $O(A)$.
\end{enumerate}
On option 1, Diego's theorem and its order dual would guarantee that both the $(\land, \to, 0)$-subalgebra of $\rho \mathfrak{A}$ and the $(\lor, \leftarrow, 1)$-subalgebra of $\rho \mathfrak{A}$ are finite. However, it is not clear how one could then combine the two subalgebras into a bi-Heyting algebra, which is required to obtain a selective filtration based on a tense algebra. On option 2, on the other hand, we would indeed obtain a bi-Heyting subalgebra of $\rho \mathfrak{A}$, but not necessarily a finite one, since bi-Heyting algebras are not locally finite.
We realise that the argument sketches just presented are far from conclusive, so we do not go as far as ruling out the possibility that Je\v{r}ábek-style bsi and tense canonical rules could somehow be developed into suitable tools for the theory of tense companions of bsi-rule systems. What such rules would look like, and in what sense they would constitute genuine generalisations of Je\v{r}ábek's canonical rules and Zakharyaschev's canonical formulae, are interesting questions, but this paper is not the appropriate place to pursue them. At this stage we merely wish to stress that answering these sorts of questions is a non-trivial matter, whereas generalising stable canonical rules to the bsi and tense setting and applying them to develop the theory of tense companions is a completely routine task. On our approach, exactly the same methods used in the si and modal case work equally well in the bsi-tense case.
\subsection{Tense Companions of Bi-superintuitionistic Rule Systems}\label{sec:tensecompanions}
We now turn to the main topic of this section: generalising the results of \Cref{sec:modalcompanions} to the bsi-tense setting. As anticipated, this is done using exactly the same techniques seen in the si and modal case, which is one of the main advantages of our method.
\subsubsection{Semantic Mappings}\label{sec:mappingstc}
We begin by generalising the semantic transformations for turning Heyting algebras into corresponding closure algebras and vice versa, seen in \Cref{sec:modalcompanions}, to transformations between bi-Heyting and tense algebras. The results in this section are well known, and the reader may consult \cite[Section 7]{Wolter1998OLwC} for a more detailed overview.
\begin{definition}
The mapping $\sigma: \mathsf{bi\text{-}HA}\to \mathsf{Ten}$ assigns every $\mathfrak{H}\in \mathsf{bi\text{-}HA}$ to the algebra $\sigma \mathfrak{H}:=(B(\mathfrak{H}),\square_F, \lozenge_P)$, where $B(\mathfrak{H})$ is the free Boolean extension of $\mathfrak{H}$ and
\begin{align*}
\square_F a&:=\bigvee \{b\in H: b\leq a\}\\
\lozenge_P a&:=\bigwedge \{b\in H: a\leq b\}
\end{align*}
\end{definition}
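As a small illustration, let $\mathfrak{H}$ be the three-element chain $0<a<1$, viewed as a bi-Heyting algebra. Its free Boolean extension $B(\mathfrak{H})$ is the four-element Boolean algebra with atoms $a$ and $\neg a$, and the definitions above give
\[\square_F(\neg a)=\bigvee\{b\in H: b\leq \neg a\}=0\qquad \lozenge_P(\neg a)=\bigwedge\{b\in H: \neg a\leq b\}=1,\]
while both $\square_F$ and $\lozenge_P$ fix every element of $H$. In particular, the $\square_F$-fixpoints of $\sigma\mathfrak{H}$ are exactly the elements of $H$.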
That $\square_F, \lozenge_P$ are well-defined operations on $B(\mathfrak{H})$ follows from the order-duality principle and the results in the previous section. It is easy to verify that $\sigma \mathfrak{H}$ validates the $\mathtt{S4}$ axioms for both $\square_F$ and $\lozenge_P$. Moreover, for any $a\in B(H)$ clearly $\lozenge_P a\in H$, so $\square_F\lozenge_P a=\lozenge_P a$. This implies $a\leq \square_F\lozenge_P a$. Therefore indeed $\sigma \mathfrak{H}\in \mathsf{Ten}$.
\begin{definition}
The mapping $\rho:\mathsf{Ten}\to \mathsf{bi\text{-}HA}$ assigns every $\mathfrak{A}\in \mathsf{Ten}$ to the algebra $\rho \mathfrak{A}:=(O(A), \land, \lor, \to, \leftarrow, 0,1)$, where
\begin{align*}
O(A)&:=\{a\in A:\square_F a=a\}=\{a\in A:\lozenge_P a=a\}\\
a\to b&:=\square_F (\neg a\lor b)\\
a\leftarrow b&:=\lozenge_P ( a\land \neg b).
\end{align*}
\end{definition}
Using the order-duality principle, it is easy to verify that for every $\mathfrak{A}\in \mathsf{Ten}$, the algebra $\rho \mathfrak{A}$ is indeed a bi-Heyting algebra.
Recall the geometric mappings $\sigma :\mathsf{Esa}\to \mathsf{Spa}(\mathtt{GRZ})$ and $\rho:\mathsf{Spa}(\mathtt{S4})\to \mathsf{Esa}$. Since bi-Esakia spaces are Esakia spaces, and tense spaces are $\mathtt{S4}$-spaces, we may restrict these mappings to $\sigma :\mathsf{bi\text{-}Esa}\to \mathsf{Spa}(\mathtt{GRZ.T})$ and $\rho:\mathsf{Spa}(\mathtt{S4.t})\to \mathsf{bi\text{-}Esa}$ and obtain geometric counterparts to the algebraic mappings between bi-Heyting and tense algebras defined in the present subsection. Reasoning as in the proof of \Cref{prop:mcmapsdual} we find that the algebraic and geometric versions of the maps $\sigma, \rho$ are indeed dual to each other.
\begin{proposition}
The following hold.\label{tcmapsdual}
\begin{enumerate}
\item Let $\mathfrak{H}\in \mathsf{bi\text{-}HA}$. Then $(\sigma \mathfrak{H})_*\cong\sigma (\mathfrak{H}_*)$. Consequently, if $\mathfrak{X}$ is a bi-Esakia space then $(\sigma \mathfrak{X})^*\cong\sigma (\mathfrak{X}^*)$.
\item Let $\mathfrak{X}$ be a tense space. Then $(\rho\mathfrak{X})^*\cong\rho(\mathfrak{X}^*)$. Consequently, if $\mathfrak{A}\in \mathsf{Alg}(\mathtt{S4.t})$, then $(\rho\mathfrak{A})_*\cong\rho(\mathfrak{A}_*)$.
\end{enumerate}
\end{proposition}
As an easy corollary, we obtain the following analogue of \Cref{cor:representationHAS4}.
\begin{proposition}
For every $\mathfrak{H}\in \mathsf{bi\text{-}HA}$ we have $\mathfrak{H}\cong \rho\sigma \mathfrak{H}$. Moreover, for every $\mathfrak{A}\in \mathsf{Ten}$ we have $\sigma \rho\mathfrak{A}\rightarrowtail\mathfrak{A}$.\label{representation2haGrz.T}
\end{proposition}
\subsubsection{A Gödelian Translation}
We extend the Gödel translation of the previous section to a translation from bsi formulae to tense ones.
\begin{definition}[Gödelian translation - bsi to tense]
The \emph{Gödelian translation} is a mapping $T:\mathit{Tm}_{bsi}\to \mathit{Tm}_{ten}$ defined recursively as follows.
\begin{align*}
T(\bot)&:=\bot\\
T(\top)&:=\top\\
T(p)&:=\square_F p\\
T(\varphi\land \psi)&:=T(\varphi)\land T(\psi)\\
T(\varphi\lor \psi)&:=T(\varphi)\lor T(\psi)\\
T(\varphi\to \psi)&:=\square_F (\neg T(\varphi)\lor T(\psi))\\
T(\varphi\leftarrow \psi)&:=\lozenge_P ( T(\varphi)\land \neg T(\psi))
\end{align*}
\end{definition}
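For example, for the bsi formula $(p\leftarrow q)\to r$ the translation unfolds as
\[T((p\leftarrow q)\to r)=\square_F\bigl(\neg\lozenge_P(\square_F p\land \neg \square_F q)\lor \square_F r\bigr),\]
matching the definitions of $\to$ and $\leftarrow$ in $\rho\mathfrak{A}$ above.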
An essentially equivalent translation was considered in \cite{Wolter1998OLwC}, though using $\square_P$ instead of $\lozenge_P$ to interpret $\leftarrow$.
The following analogue of \Cref{lem:gtskeleton} is proved the same way as the latter.
\begin{lemma}
For every $\mathfrak{A}\in \mathsf{Ten}$ and bsi rule $\Gamma/\Delta$, \label{lem:gtskeleton2}
\[\mathfrak{A}\models T(\Gamma/\Delta)\iff \rho\mathfrak{A}\models \Gamma/\Delta\]
\end{lemma}
We note that \Cref{lem:gtskeleton2} does not appear in the literature, which only mentions similar results concerning formulae rather than rules.
\subsubsection{Structure of Tense Companions}\label{sec:semantictensecompanions}
We are now ready to generalise \Cref{mcinterval} and \Cref{blokesakia} to the bsi-tense setting. We do so in this section. All the results of this section are new inasmuch as they involve rule systems. Their restrictions to logics were established by \citet{Wolter1998OLwC}, although our proofs differ from Wolter's Blok-style algebraic approach.
We begin by formally defining the notion of a \emph{tense companion}.
\begin{definition}
Let $\mathtt{L}\in \mathbf{Ext}(\mathtt{bi\text{-}IPC_R})$ be a bsi-rule system and $\mathtt{M}\in \mathbf{NExt}(\mathtt{S4.t_R})$ a tense rule system. We say that $\mathtt{M}$ is a \emph{tense companion} of $\mathtt{L}$ (or that $\mathtt{L}$ is the bsi fragment of $\mathtt{M}$) whenever $\Gamma/\Delta\in \mathtt{L}$ iff $T(\Gamma/\Delta)\in \mathtt{M}$ for every bsi rule $\Gamma/\Delta$. Moreover, let $\mathtt{L}\in \mathbf{Ext}(\mathtt{bi\text{-}IPC})$ be a bsi-logic and $\mathtt{M}\in \mathbf{NExt}(\mathtt{S4.t})$ a tense logic. We say that $\mathtt{M}$ is a \emph{tense companion} of $\mathtt{L}$ (or that $\mathtt{L}$ is the bsi fragment of $\mathtt{M}$) whenever $\varphi\in \mathtt{L}$ iff $T(\varphi)\in \mathtt{M}$.
\end{definition}
Clearly, $\mathtt{M}\in \mathbf{NExt}(\mathtt{S4.t_R})$ is a tense companion of $\mathtt{L}\in \mathbf{Ext}(\mathtt{bi\text{-}IPC_R})$ iff $\mathsf{Taut}(\mathtt{M})$ is a tense companion of $\mathsf{Taut}(\mathtt{L})$, and $\mathtt{M}\in \mathbf{NExt}(\mathtt{S4.t})$ is a tense companion of $\mathtt{L}\in \mathbf{Ext}(\mathtt{bi\text{-}IPC})$ iff $\mathtt{M_R}$ is a tense companion of $\mathtt{L_R}$.
Define the following three maps between $\mathbf{Ext}(\mathtt{bi\text{-}IPC_R})$ and $\mathbf{NExt}(\mathtt{S4.t_R})$.
\begin{align*}
\tau&:\mathbf{Ext}(\mathtt{bi\text{-}IPC_R})\to \mathbf{NExt}(\mathtt{S4.t_R}) & \sigma&:\mathbf{Ext}(\mathtt{bi\text{-}IPC_R})\to \mathbf{NExt}(\mathtt{S4.t_R}) \\
\mathtt{L}&\mapsto \mathtt{S4.t_R}\oplus \{T(\Gamma/\Delta):\Gamma/\Delta\in \mathtt{L}\} & \mathtt{L}&\mapsto \mathtt{GRZ.T_R}\oplus \tau \mathtt{L}
\end{align*}
\begin{align*}
\rho &:\mathbf{NExt}(\mathtt{S4.t_R}) \to \mathbf{Ext}(\mathtt{bi\text{-}IPC_R}) \\
\mathtt{M}&\mapsto\{\Gamma/\Delta:T(\Gamma/\Delta)\in \mathtt{M}\}
\end{align*}
These mappings are readily extended to lattices of logics.
\begin{align*}
\tau&:\mathbf{Ext}(\mathtt{bi\text{-}IPC})\to \mathbf{NExt}(\mathtt{S4.t}) & \sigma&:\mathbf{Ext}(\mathtt{bi\text{-}IPC})\to \mathbf{NExt}(\mathtt{S4.t}) \\
\mathtt{L}&\mapsto \mathsf{Taut}(\tau\mathtt{L_R})=\mathtt{S4.t}\oplus \{T(\varphi):\varphi\in \mathtt{L}\} & \mathtt{L}&\mapsto \mathsf{Taut}(\sigma\mathtt{L_R})=\mathtt{GRZ.T}\oplus\{T(\varphi):\varphi\in \mathtt{L}\}
\end{align*}
\begin{align*}
\rho &:\mathbf{NExt}(\mathtt{S4.t}) \to \mathbf{Ext}(\mathtt{bi\text{-}IPC}) \\
\mathtt{M}&\mapsto \mathsf{Taut}(\rho\mathtt{M_R})=\{\varphi:T(\varphi)\in \mathtt{M}\}
\end{align*}
Furthermore, extend the mappings $\sigma:\mathsf{bi\text{-}HA}\to \mathsf{Ten}$ and $\rho:\mathsf{Ten}\to \mathsf{bi\text{-}HA}$ to universal classes by setting
\begin{align*}
\sigma&:\mathbf{Uni}(\mathsf{bi\text{-}HA})\to \mathbf{Uni}(\mathsf{Ten}) & \rho&:\mathbf{Uni}(\mathsf{Ten})\to \mathbf{Uni}(\mathsf{bi\text{-}HA}) \\
\mathcal{U}&\mapsto \mathsf{Uni}\{\sigma \mathfrak{H}:\mathfrak{H}\in \mathcal{U}\} & \mathcal{W}&\mapsto \{\rho\mathfrak{A}:\mathfrak{A}\in \mathcal{W}\}.
\end{align*}
Finally, introduce a semantic counterpart to $\tau$ as follows.
\begin{align*}
\tau&: \mathbf{Uni}(\mathsf{bi\text{-}HA})\to \mathbf{Uni}(\mathsf{Ten}) \\
\mathcal{U}&\mapsto \{\mathfrak{A}\in \mathsf{Ten}:\rho\mathfrak{A}\in \mathcal{U}\}
\end{align*}
The following lemma is a counterpart to \Cref{mainlemma-simod}. It is proved via essentially the same argument which establishes the latter, though some adaptations are necessary which may be less than completely obvious. For this reason, as well as for the central place this lemma occupies in our strategy, we spell out the proof in some detail.
\begin{lemma}
Let $\mathfrak{A}\in \mathsf{GRZ.T}$. Then for every tense rule $\Gamma/\Delta$, we have $\mathfrak{A}\models\Gamma/\Delta$ iff $\sigma\rho\mathfrak{A}\models \Gamma/\Delta$.\label{mainlemma-bsimod}
\end{lemma}
\begin{proof}
$(\Rightarrow)$ This direction follows from the fact that $\sigma\rho\mathfrak{A}\rightarrowtail\mathfrak{A}$ (\Cref{representation2haGrz.T}).
$(\Leftarrow)$ We prove the dual statement that $\mathfrak{A}_*\nvDash \Gamma/\Delta$ implies $\sigma\rho\mathfrak{A}_*\nvDash \Gamma/\Delta$. Let $\mathfrak{X}:=\mathfrak{A}_*$. In view of \Cref{axiomatisationten} it is enough to consider the case $\Gamma/\Delta=\scrten{B}{D}$, for $\mathfrak{B}\in \mathsf{Ten}$ finite. So suppose $\mathfrak{X}\nvDash \scrten{B}{D}$ and let $\mathfrak{F}:=\mathfrak{B}_*$. Then there is a stable map $f:\mathfrak{X}\to \mathfrak{F}$ satisfying the BDC for $(\mathfrak{D}^{\square_F}, \mathfrak{D}^{\lozenge_P})$. We construct a stable map $g:\sigma\rho\mathfrak{X}\to \mathfrak{F}$ which satisfies the BDC for $(\mathfrak{D}^{\square_F}, \mathfrak{D}^{\lozenge_P})$.
Let $C:=\{x_1, \ldots, x_n\}\subseteq F$ be some cluster and let $Z_C:=f^{-1}(C)$. Reasoning as in the proof of \Cref{mainlemma-simod}, we obtain that $\rho[Z_C]$ is clopen, and so is $f^{-1}(x_i)$ for each $x_i\in C$. Now for each $x_i\in C$ let
\begin{align*}
M_i&:=\mathit{max}_R(f^{-1}(x_i))\\
N_i&:=\mathit{min}_R(f^{-1}(x_i)).
\end{align*}
By \Cref{propGrz.T}, both $M_i, N_i$ are closed, and moreover neither cuts any cluster. Since $\sigma\rho\mathfrak{X}$ has the quotient topology, it follows that both $\rho[M_i], \rho[N_i]$ are closed as well.
For each $x_i\in C$ let $O_i:=M_i\cup N_i$. Clearly, $O_i\cap O_j=\varnothing$ whenever $i\neq j$ for $i, j\leq n$. Therefore, using the separation properties of Stone spaces to reason as in the proof of \Cref{mainlemma-simod}, there are disjoint clopens $U_1, \ldots, U_n\in \mathsf{Clop}(\sigma\rho\mathfrak{X})$ with $\rho[O_i]\subseteq U_i$ and $\bigcup_{i\leq n} U_i=\rho[Z_C]$.
We can now define a map
\begin{align*}
g_C&: \rho[Z_C]\to C\\
z&\mapsto x_i\iff z\in U_i.
\end{align*}
Clearly, $g_C$ is relation preserving and continuous. Finally, define $g: \sigma\rho\mathfrak{X}\to \mathfrak{F}$ by setting
\[
g(\rho(z)):=\begin{cases}
f(z)&\text{ if } f(z)\text{ does not belong to any proper cluster }\\
g_C(\rho(z))&\text{ if }f(z)\in C\text{ for some proper cluster }C\subseteq F.
\end{cases}
\]
Now, $g$ is evidently relation preserving. Moreover, it is continuous because both $f$ and each $g_C$ are. Reasoning as in the proof of \Cref{mainlemma-simod}, we obtain that $g$ satisfies the BDC$^{\square_F}$ for $\mathfrak{D}^{\square_F}$. The proof of the fact that $g$ satisfies the BDC$^{\lozenge_P}$ for $\mathfrak{D}^{\lozenge_P}$ is a straightforward adaptation of the latter, using that for all $U\in \mathsf{Clop}(\mathfrak{X})$, if $x\in U$ there is $y\in \mathit{min}_R(U)$ such that $Ryx$ (\Cref{propGrz.T}).
\end{proof}
\begin{theorem}
Every $\mathcal{U}\in \mathbf{Uni}(\mathsf{GRZ.T})$ is generated by its skeletal elements, i.e. $\mathcal{U}=\sigma \rho\mathcal{U}$. \label{uniGrz.Tensegeneratedskel}
\end{theorem}
\begin{proof}
Follows easily from \Cref{mainlemma-bsimod}, reasoning as in the proof of \Cref{unigrzgeneratedskel}.
\end{proof}
As in the previous section, the next step is to apply \Cref{mainlemma-bsimod} to prove that the syntactic tense companion maps $\tau, \rho, \sigma$ commute with $\mathsf{Alg}(\cdot)$, which leads to a purely semantic characterisation of tense companions.
\begin{lemma}
For each $\mathtt{L}\in \mathbf{Ext}(\mathtt{bi\text{-}IPC_R})$ and $\mathtt{M}\in \mathbf{NExt}(\mathtt{S4.t_R})$, the following hold:\label{prop:tenmapscommute}
\begin{align}
\mathsf{Alg}(\tau\mathtt{L})&=\tau \mathsf{Alg}(\mathtt{L}) \label{prop:tenmapscommute1}\\
\mathsf{Alg}(\sigma\mathtt{L})&=\sigma \mathsf{Alg}(\mathtt{L})\label{prop:tenmapscommute2}\\
\mathsf{Alg}(\rho\mathtt{M})&=\rho \mathsf{Alg}(\mathtt{M})\label{prop:tenmapscommute3}
\end{align}
\end{lemma}
\begin{proof}
The proof of \Cref{prop:tenmapscommute1} is trivial. To prove \Cref{prop:tenmapscommute2}, in view of \Cref{uniGrz.Tensegeneratedskel} it is enough to show that $\mathsf{Alg}(\sigma\mathtt{L})$ and $\sigma \mathsf{Alg}(\mathtt{L})$ have the same skeletal elements. This is proved the same way as \Cref{prop:mcmapscommute2} in \Cref{prop:mcmapscommute}. Finally, \Cref{prop:tenmapscommute3} is proved analogously to \Cref{prop:mcmapscommute3} in \Cref{prop:mcmapscommute}, applying \Cref{lem:gtskeleton2} instead of \Cref{lem:gtskeleton}.
\end{proof}
\begin{lemma}
$\mathtt{M}\in \mathbf{NExt}(\mathtt{S4.t_R})$ is a tense companion of $\mathtt{L}\in \mathbf{Ext}(\mathtt{bi\text{-}IPC_R})$ iff $\mathsf{Alg}(\mathtt{L})=\rho\mathsf{Alg}(\mathtt{M})$.\label{tcsemantic}
\end{lemma}
\begin{proof}
Analogous to \Cref{mcsemantic}.
\end{proof}
The main results of this section can now be proved.
\begin{theorem}
The following conditions hold: \label{tcinterval}
\begin{enumerate}
\item For every $\mathtt{L}\in \mathbf{Ext}(\mathtt{bi\text{-}IPC_R})$, the tense companions of $\mathtt{L}$ form an interval \label{tcinterval1} \[\{\mathtt{M}\in \mathbf{NExt}(\mathtt{S4.t_R}):\tau\mathtt{L}\leq \mathtt{M}\leq \sigma\mathtt{L}\};\]
\item For every $\mathtt{L}\in \mathbf{Ext}(\mathtt{bi\text{-}IPC})$, the tense companions of $\mathtt{L}$ form an interval \label{tcinterval2} \[\{\mathtt{M}\in \mathbf{NExt}(\mathtt{S4.t}):\tau\mathtt{L}\leq \mathtt{M}\leq \sigma\mathtt{L}\}.\]
\end{enumerate}
\end{theorem}
\begin{proof}
\Cref{tcinterval1} is proved the same way as \Cref{mcinterval1} in \Cref{mcinterval}. \Cref{tcinterval2} is immediate from \Cref{tcinterval1}.
\end{proof}
\begin{theorem}[Blok-Esakia theorem for bsi- and tense deductive systems]
The following conditions hold: \label{blokesakia2}
\begin{enumerate}
\item The mappings $\sigma: \mathbf{Ext}(\mathtt{bi\text{-}IPC_R})\to \mathbf{NExt}(\mathtt{GRZ.T_R})$ and $\rho:\mathbf{NExt}(\mathtt{GRZ.T_R})\to \mathbf{Ext}(\mathtt{bi\text{-}IPC_R})$ are complete lattice isomorphisms and mutual inverses. \label{blokesakia2:1}
\item The mappings $\sigma: \mathbf{Ext}(\mathtt{bi\text{-}IPC})\to \mathbf{NExt}(\mathtt{GRZ.T})$ and $\rho:\mathbf{NExt}(\mathtt{GRZ.T})\to \mathbf{Ext}(\mathtt{bi\text{-}IPC})$ are complete lattice isomorphisms and mutual inverses. \label{blokesakia2:2}
\end{enumerate}
\end{theorem}
\begin{proof}
\Cref{blokesakia2:1} is proved the same way as \Cref{blokesakia:1} in \Cref{blokesakia}. \Cref{blokesakia2:2} follows straightforwardly from \Cref{blokesakia2:1} and \Cref{deductivesystemisomorphismbsi,deductivesystemisomorphismten}.
\end{proof}
\subsubsection{The Dummett-Lemmon Conjecture for Bsi-Rule Systems} \label{sec:additionaltc}
The construction used to prove the Dummett-Lemmon conjecture for rule systems straightforwardly generalises to a proof of a variant of the conjecture applying to bsi-rule systems and their weakest tense companions. To establish this result, we first extend the notion of a collapsed rule and the rule collapse lemma to the bsi and tense setting.
\begin{definition}
Let $\scrten{F}{\mathfrak{D}}$ be a tense stable canonical rule. The \emph{collapsed tense stable canonical rule} $\scrbsi{\rho F}{\rho \mathfrak{D}}$ is defined by setting
\begin{align*}
\rho\mathfrak{D}^\to&:= \{\rho[\mathfrak{d}]:\mathfrak{d}\in \mathfrak{D}^{\square_F}\}\\
\rho\mathfrak{D}^\leftarrow&:= \{\rho[\mathfrak{d}]:\mathfrak{d}\in \mathfrak{D}^{\lozenge_P}\}.
\end{align*}
\end{definition}
\begin{lemma}[Rule collapse lemma - bsi-tense]
For every tense space $\mathfrak{X}$ and every tense stable canonical rule $\scrten{F}{\mathfrak{D}}$, we have that $\mathfrak{X}\nvDash \scrten{F}{\mathfrak{D}}$ implies $\rho\mathfrak{X}\nvDash \scrbsi{\rho F}{\rho\mathfrak{D}}$.\label{rulecollapse2}
\end{lemma}
\begin{proof}
Analogous to the proof of \Cref{rulecollapse}.
\end{proof}
At this point, we can establish the desired result via a straightforward adaptation of our proof of \Cref{dummettlemmon}.
\begin{theorem}[Dummett-Lemmon conjecture for bsi-rule systems]
For every bsi-rule system $\mathtt{L}\in \mathbf{Ext}(\mathtt{bi\text{-}IPC_R})$, $\mathtt{L}$ is Kripke complete iff $\tau \mathtt{L}$ is.
\end{theorem}
\begin{proof}
$(\Rightarrow)$ Let $\mathtt{L}$ be Kripke complete. Suppose that $\Gamma/\Delta\notin \tau\mathtt{L}$. Then there is $\mathfrak{X}\in \mathsf{Spa}(\tau\mathtt{L})$ such that $\mathfrak{X}\nvDash \Gamma/\Delta$. By \Cref{axiomatisationten}, we may assume that $\Gamma/\Delta=\scrten{F}{\mathfrak{D}}$ for $\mathfrak{F}$ a preorder. By \Cref{rulecollapse2}
and \Cref{lem:gtskeleton2} it follows that $\rho\mathfrak{X}\models \mathtt{L}$, and so $\scrbsi{\rho F}{\rho \mathfrak{D}}\notin \mathtt{L}$. Since $\mathtt{L}$ is Kripke complete, there is a bsi Kripke frame $\mathfrak{Y}$ such that $\mathfrak{Y}\nvDash \scrbsi{\rho F}{\rho \mathfrak{D}}$. Take a stable map $f:\mathfrak{Y}\to \rho \mathfrak{F}$ satisfying the BDC for $\rho \mathfrak{D}$. Proceed as in the proof of \Cref{dummettlemmon} to construct a Kripke frame $\mathfrak{Z}$ with $\mathfrak{Z}\models\tau \mathtt{L}$ by expanding clusters in $\mathfrak{Y}$. We identify $\rho\mathfrak{Z}=\mathfrak{Y}$, and define a map $g:\mathfrak{Z}\to \mathfrak{F}$ via the same construction used in the proof of \Cref{dummettlemmon}. Clearly, $g$ is well defined, surjective, and relation preserving. We know that $g$ satisfies the BDC for $\mathfrak{D}^\to$ from the proof of \Cref{dummettlemmon}, and symmetric reasoning shows that $g$ also satisfies the BDC for $\mathfrak{D}^\leftarrow$.
$(\Leftarrow)$ Analogous to the si and modal case.
\end{proof}
\section{The Kuznetsov-Muravitsky Isomorphism for Logics and Rule Systems}
\label{ch:3}
In this section, we generalise our techniques further to study translational embeddings of (normal) \emph{modal superintuitionistic} rule systems and logics into modal ones. We develop algebra-based rules for modal superintuitionistic rule systems over the intuitionistic provability logic $\mathtt{KM}$, as well as a new kind of algebra-based rules for modal rule systems over the Gödel-Löb provability logic (\Cref{sec:scrgl}). We call these \emph{pre-stable canonical rules}. We apply pre-stable canonical rules to prove that the lattice of modal superintuitionistic rule systems (resp.~logics) over $\mathtt{KM}$ is isomorphic to the lattice of modal rule systems (resp.~logics) over $\mathtt{GL}$ via a Gödel-style translational embedding (\Cref{sec:isomorphismgl}). This result was proved for logics by \citet{KuznetsovMuravitsky1986OSLAFoPLE}, but appears to be new for rule systems.
For reasons of space, this section does not pursue the full theory of modal companions of superintuitionistic logics in the sense of either \cite{Esakia2006TMHCaCMEotIL} or \cite{WolterZakharyaschev1997IMLAFoCBL,WolterZakharyaschev1997OtRbIaCML}, although we are confident that our techniques would work in that setting as well. Because of this, the Dummett-Lemmon conjecture has no counterpart in the present section.
Besides supplying new results, this section further highlights the flexibility and uniformity of our techniques. Standard filtration does not work well for $\mathtt{KM}$ and $\mathtt{GL}$, suggesting that a different, less standard notion of filtration should be used to generalise stable canonical rules to the present setting. The rest of our approach delivers the desired results despite this different design choice, which shows its flexibility. Moreover, it does so without any major changes or accommodations: the proofs of the main results in this section follow the basic blueprints of their counterparts from \Cref{ch:1}. This, once again, shows the uniformity of our approach.
\subsection{Deductive Systems for Provability}
We begin by briefly reviewing definitions and basic properties of the structures under discussion.
\subsubsection{Intuitionistic Provability, Frontons, and $\mathtt{KM}$-spaces}
In this subsection we shall work with the \emph{modal superintuitionistic signature}, \[msi:=\{\land, \lor, \to,\boxtimes, \bot, \top\}.\] The set $\mathit{Frm}_{msi}$ of \emph{modal superintuitionistic (msi) formulae} is defined recursively as follows.
\[\varphi::= p \, |\, \bot \, |\, \top \, |\, \varphi \land \varphi \, |\, \varphi\lor \varphi \, |\, \varphi\to \varphi \, |\, \boxtimes\varphi \]
where $p\in \mathit{Prop}$.
The logic $\mathtt{IPCK}$ is obtained by extending $\mathtt{IPC}$ by the $\mathtt{K}$-axiom \[\boxtimes(p\to q)\to (\boxtimes p \to \boxtimes q)\] and closing under necessitation, that is, requiring that whenever $\varphi\in \mathtt{IPCK}$ then $\boxtimes\varphi\in \mathtt{IPCK}$ as well.
\begin{definition}
A \emph{normal modal superintuitionistic logic}, or msi-logic for short, is a logic $\mathtt{L}$ over $\mathit{Frm}_{msi}$ satisfying the following additional conditions:
\begin{enumerate}
\item $\mathtt{IPCK}\subseteq \mathtt{L}$;
\item If $\varphi\to \psi, \varphi\in \mathtt{L}$ then $\psi\in \mathtt{L}$ (MP);
\item If $\varphi\in \mathtt{L}$ then $\boxtimes \varphi\in \mathtt{L}$ (NEC).
\end{enumerate}
A \emph{modal superintuitionistic rule system}, or msi-rule system for short, is a rule system $\mathtt{L}$ over $\mathit{Frm}_{msi}$ satisfying the following additional requirements.
\begin{enumerate}
\item $/\varphi\in \mathtt{L}$ whenever $\varphi\in \mathtt{IPCK}$;
\item $\varphi, \varphi\to \psi/\psi\in \mathtt{L}$ (MP-R);
\item $\varphi/\boxtimes\varphi\in \mathtt{L}$ (NEC-R).
\end{enumerate}
\end{definition}
If $\mathtt{L}$ is an msi-logic (resp. msi-rule system) we write $\mathbf{NExt}(\mathtt{L})$ for the set of msi-logics (resp. msi-rule systems) extending $\mathtt{L}$. Clearly, the set of msi-logics coincides with $\mathbf{NExt}(\mathtt{IPCK})$. It is easy to check that $\mathbf{NExt}(\mathtt{IPCK})$ forms a lattice under the operation $\oplus_{\mathbf{NExt}(\mathtt{IPCK})}$ as join and intersection as meet. If $\mathtt{L}\in \mathbf{NExt}(\mathtt{IPCK})$, let $\mathtt{L_R}$ be the least msi-rule system containing $/\varphi$ for each $\varphi\in \mathtt{L}$. Then $\mathtt{IPCK_R}$ is the least msi-rule system. The set $\mathbf{NExt}(\mathtt{IPCK_R})$ of msi-rule systems is also a lattice when endowed with $\oplus_{\mathbf{NExt}(\mathtt{IPCK_R})}$ as join and intersection as meet. As usual, we refer to these lattices as we refer to their underlying sets, i.e. $\mathbf{NExt}(\mathtt{IPCK})$ and $\mathbf{NExt}(\mathtt{IPCK_R})$ respectively. We also write both $\oplus_{\mathbf{NExt}(\mathtt{IPCK})}$ and $\oplus_{\mathbf{NExt}(\mathtt{IPCK_R})}$ simply as $\oplus$, leaving context to resolve ambiguities. Clearly, for every $\mathtt{L}\in \mathbf{NExt}(\mathtt{IPCK})$ we have that $\mathsf{Taut}(\mathtt{L_R})=\mathtt{L}$, which establishes the following result.
\begin{proposition}
The mappings $(\cdot)_{\mathtt{R}}$ and $\mathsf{Taut}(\cdot)$ are mutually inverse complete lattice isomorphisms between $\mathbf{NExt}(\mathtt{IPCK})$ and the sublattice of $\mathbf{NExt}(\mathtt{IPCK_R})$ consisting of all msi-rule systems $\mathtt{L}$ such that $\mathsf{Taut}(\mathtt{L})_\mathtt{R}=\mathtt{L}$.\label{deductivesystemisomorphismmsi}
\end{proposition}
Rather than studying $\mathbf{NExt}(\mathtt{IPCK_R})$ in its entirety, we shall focus on the sublattice of $\mathbf{NExt}(\mathtt{IPCK_R})$ consisting of all normal extensions of the rule system $\mathtt{KM_R}$, where $\mathtt{KM}$ is the msi-logic axiomatised as follows.
\[\mathtt{KM}:=\mathtt{IPCK}\oplus p\to \boxtimes p \oplus (\boxtimes p\to p)\to p\oplus \boxtimes p\to (q\lor (q\to p)).\]
The logic $\mathtt{KM}$ was introduced by \citet{Kuznetsov1978PIL} (see also \cite{KuznetsovMuravitsky1986OSLAFoPLE}) and later studied by \citet{Esakia2006TMHCaCMEotIL}. Its main motivation lies in its close connection with the Gödel-Löb provability logic, to be discussed in the next section. An extensive overview of both the history and theory of $\mathtt{KM}$ may be found in \cite{Muravitsky2014LKaB}.
A \emph{fronton} is a tuple $\mathfrak{H}=(H, \land, \lor, \to,\boxtimes, 0, 1)$ such that $(H, \land, \lor,\to , 0, 1)$ is a Heyting algebra and for every $a, b\in H$, $\boxtimes$ satisfies
\begin{align}
\boxtimes 1&=1\\
\boxtimes (a\land b)&=\boxtimes a\land \boxtimes b\\
a&\leq \boxtimes a\\
\boxtimes a\to a&=a\\
\boxtimes a &\leq b\lor (b\to a).
\end{align}
Frontons are discussed in detail, e.g., in \cite{Esakia2006TMHCaCMEotIL,Litak2014CMwPS}. We let $\mathsf{Frt}$ denote the class of all frontons. By \Cref{syntacticvarietiesuniclasses}, $\mathsf{Frt}$ is a variety. We write $\mathbf{Var}(\mathsf{Frt})$ and $\mathbf{Uni}(\mathsf{Frt})$ respectively for the lattice of subvarieties and of universal subclasses of $\mathsf{Frt}$. \Cref{algebraisationfrtvar} in the following result follows from, e.g., \cite[Proposition 7]{Muravitsky2014LKaB}, whereas \Cref{algebraisationfrtuni} can be obtained via the techniques used in the proofs of \Cref{thm:algebraisationHA,thm:algebraisationMA}.
\begin{theorem}
The following maps are pairs of mutually inverse dual isomorphisms:\label{algebraisationfrt}
\begin{enumerate}
\item $\mathsf{Alg}:\mathbf{NExt}(\mathtt{KM})\to \mathbf{Var}(\mathsf{Frt})$ and $\mathsf{Th}:\mathbf{Var}(\mathsf{Frt})\to \mathbf{NExt}(\mathtt{KM})$;\label{algebraisationfrtvar}
\item $\mathsf{Alg}:\mathbf{NExt}(\mathtt{KM_R})\to \mathbf{Uni}(\mathsf{Frt})$ and $\mathsf{ThR}:\mathbf{Uni}(\mathsf{Frt})\to \mathbf{NExt}(\mathtt{KM_R})$.\label{algebraisationfrtuni}
\end{enumerate}
\end{theorem}
\begin{corollary}
Every msi-logic $($resp. msi-rule system$)$ extending $\mathtt{KM}$ is complete with respect to some variety $($resp. universal class$)$ of frontons. \label{completeness_msi}
\end{corollary}
We mention a simple yet important property of frontons, which plays a key role in the development of algebra-based rules for rule systems in $\mathbf{NExt}(\mathtt{KM_R})$.
\begin{proposition}[cf. {\cite[Proposition 5]{Esakia2006TMHCaCMEotIL}}]
Every fronton $\mathfrak{H}$ satisfies the identity
\[\boxtimes a=\bigwedge\{b\lor (b\to a):b\in H\}.\]
for every $a\in H$. \label{frontonsquare}
\end{proposition}
It follows that for every Heyting algebra $\mathfrak{H}$, there is at most one way of expanding $\mathfrak{H}$ to a fronton, namely by setting
\[\boxtimes a:=\bigwedge\{b\lor (b\to a):b\in H\}.\]
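To illustrate, consider the three-element Heyting chain $0<c<1$ (a routine computation, included here only for concreteness). Unfolding the definition of $\boxtimes$ gives
\begin{align*}
\boxtimes 0&=(0\lor (0\to 0))\land (c\lor (c\to 0))\land (1\lor (1\to 0))=1\land c\land 1=c,\\
\boxtimes c&=(0\lor (0\to c))\land (c\lor (c\to c))\land (1\lor (1\to c))=1\land 1\land 1=1,\\
\boxtimes 1&=1.
\end{align*}
One checks directly that the fronton axioms hold: $a\leq \boxtimes a$ for each element, $\boxtimes 0\to 0=c\to 0=0$, and $\boxtimes c\to c=1\to c=c$.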
A \emph{$\mathtt{KM}$-space} is a tuple $\mathfrak{X}=(X, \leq, \sqsubseteq, \mathcal{O})$, such that $(X,\leq, \mathcal{O})$ is an Esakia space,
and $\sqsubseteq$ is a binary relation on $X$ satisfying the following conditions, where ${\Uparrow} x:=\{y\in X:x\sqsubseteq y\}$ and ${\Downarrow} x:=\{y\in X:y\sqsubseteq x\}$, and $x<y$ iff $x\leq y$ and $x\neq y$:\label{def:kmspace}
\begin{enumerate}
\item $x<y$ implies $x\sqsubseteq y$;
\item $x\sqsubseteq y$ implies $x\leq y$;
\item ${\Uparrow} x$ is closed for all $x\in X$;
\item ${\Downarrow} [U]\in \mathsf{Clop}(\mathfrak{X})$ for every $U\in \mathsf{ClopUp}(\mathfrak{X})$;
\item For every $U\in \mathsf{ClopUp}(\mathfrak{X})$ and $x\in X$, if $x\notin U$ then there is $y\in -U$ such that $x\leq y$ and ${{\Uparrow}} y\subseteq U$.
\end{enumerate}
$\mathtt{KM}$-spaces are discussed in \cite{Esakia2006TMHCaCMEotIL}, and at greater length in \cite{CastiglioniEtAl2010OFHA}.
A \emph{valuation} on a $\mathtt{KM}$-space $\mathfrak{X}$ is a map $V:\mathit{Prop}\to \mathsf{ClopUp}(\mathfrak{X})$. The geometrical semantics for msi-rule systems extending $\mathtt{KM_R}$ over $\mathtt{KM}$-spaces is obtained straightforwardly by combining the geometrical semantics of si-rule systems and that of modal rule systems. The relation $\leq$ is used to interpret the implication connective $\to$, and the relation $\sqsubseteq$ is used to interpret the modal operator $\boxtimes$.
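Spelling out the satisfaction clauses for the two non-lattice connectives (these are the standard clauses implicit in the above), given a valuation $V$ and $x\in X$:
\begin{align*}
x\models_V \varphi\to\psi&\iff \text{for all }y\in X,\ x\leq y\text{ and }y\models_V\varphi\text{ imply }y\models_V\psi,\\
x\models_V \boxtimes\varphi&\iff \text{for all }y\in X,\ x\sqsubseteq y\text{ implies }y\models_V\varphi.
\end{align*}
In particular, $\bar V(\boxtimes\varphi)=\{x\in X:{\Uparrow} x\subseteq \bar V(\varphi)\}$, matching the computation of this set in \Cref{propKMspa}.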
If $\mathfrak{X}, \mathfrak{Y}$ are $\mathtt{KM}$-spaces, a map $f:\mathfrak{X}\to \mathfrak{Y}$ is called a \emph{bounded morphism} if for all $x, y\in X$ we have:
\begin{itemize}
\item $x\leq y$ implies $f(x)\leq f(y)$;
\item $x\sqsubseteq y$ implies $f(x)\sqsubseteq f(y)$;
\item $f(x)\leq y$ implies that there is $z\in X$ with $x\leq z$ and $f(z)=y$;
\item $f(x)\sqsubseteq y$ implies that there is $z\in X$ with $x\sqsubseteq z$ and $f(z)=y$
\end{itemize}
We recall some useful properties of $\mathtt{KM}$-spaces, which are proved in \cite[Proposition 4.8]{CastiglioniEtAl2010OFHA}.
\begin{proposition}
For every $\mathtt{KM}$-space $\mathfrak{X}$, the following conditions hold:\label{propKMspa}
\begin{enumerate}
\item For every $U\in \mathsf{ClopUp}(\mathfrak{X})$ we have $\{x\in X: {\Uparrow} x\subseteq U\}= U\cup \mathit{max}_\leq (-U)$;
\item If $\mathfrak{X}$ is finite, then for all $x, y\in X$ we have $x\sqsubseteq y$ iff $x<y$.
\end{enumerate}
\end{proposition}
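For a concrete check of the first condition (a routine verification, not drawn from the cited sources), take the finite $\mathtt{KM}$-space given by the two-element chain $x<y$, where $\sqsubseteq$ coincides with $<$ by the second condition, and let $U=\{y\}$. Then ${\Uparrow} x=\{y\}\subseteq U$ and ${\Uparrow} y=\varnothing\subseteq U$, so
\[\{z\in X:{\Uparrow} z\subseteq U\}=\{x, y\}=\{y\}\cup\{x\}=U\cup \mathit{max}_\leq(-U),\]
as the proposition predicts.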
It is known that the category of frontons with corresponding homomorphisms is dually equivalent to the category of $\mathtt{KM}$-spaces with continuous bounded morphisms. This result was announced in \cite[354--5]{Esakia2006TMHCaCMEotIL}, and proved in detail in \cite[Theorem 4.4]{CastiglioniEtAl2010OFHA}. We denote the $\mathtt{KM}$-space dual to a fronton $\mathfrak{H}$ as $\mathfrak{H_*}$, and the fronton dual to a $\mathtt{KM}$-space $\mathfrak{X}$ as $\mathfrak{X}^*$.
\subsubsection{Classical Provability, Magari Algebras, and $\mathtt{GL}$-spaces}
We now work in the modal signature $md$ already discussed in \Cref{ch:1}. The modal logic $\mathtt{GL}$ is axiomatised by extending $\mathtt{K}$ with the well-known \emph{Löb formula}:
\begin{align*}
\mathtt{GL}:=&\mathtt{K}\oplus \square (\square p\to p)\to \square p\\
=&\mathtt{K4}\oplus \square (\square p\to p)\to \square p
\end{align*}
The logic $\mathtt{GL}$ was independently discovered by Boolos and the Siena logic group led by Magari (cf. \cite{Sambin1974UDTDL,Sambin1976AEFPTiIDA,Magari1975TDA,SambinValentini1982TMLoPtSA,Boolos1980OSoMLwPI}) as a formalisation of the provability predicate of Peano arithmetic. The arithmetical completeness of $\mathtt{GL}$ was proved by \citet{Solovay1976PIoML} (see also \cite{JonghMontagna1988PFP}). The reader may consult \cite{Boolos1993TLoP} (as well as the more recent if less comprehensive \cite{Muravitsky2014LKaB}) for an overview of known results concerning $\mathtt{GL}$.
A modal algebra $\mathfrak{A}$ is called a \emph{Magari algebra} (after \cite{Magari1975TDA}) if it satisfies the identity
\[\square (\square a\to a)=\square a\]
for all $a\in A$.
Magari algebras are also called $\mathtt{GL}$-algebras, e.g. in \cite{Litak2014CMwPS}. We let $\mathsf{Mag}$ denote the variety of all Magari algebras. Clearly, every Magari algebra is a transitive modal algebra, and moreover $\mathsf{Mag}$ coincides with the class of all modal algebras satisfying the equation
\[\lozenge a =\lozenge (\square \neg a\land a).\]
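As a sanity check (a routine computation, not from the cited sources), consider the two-element modal algebra dual to the single irreflexive point, where $\square 0=\square 1=1$ vacuously. Both identities hold:
\[\square(\square a\to a)=\square(1\to a)=\square a=1, \qquad \lozenge a=\neg\square\neg a=0=\lozenge(\square \neg a\land a).\]
By contrast, on the single reflexive point, where $\square a=a$, the Magari identity fails at $a=0$: $\square(\square 0\to 0)=\square 1=1\neq 0=\square 0$.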
The following result is a straightforward consequence of \Cref{thm:algebraisationMA}.
\begin{theorem}
The following maps are pairs of mutually inverse dual isomorphisms:\label{algebraisationgl}
\begin{enumerate}
\item $\mathsf{Alg}:\mathbf{NExt}(\mathtt{GL})\to \mathbf{Var}(\mathsf{Mag})$ and $\mathsf{Th}:\mathbf{Var}(\mathsf{Mag})\to \mathbf{NExt}(\mathtt{GL})$;\label{algebraisationglvar}
\item $\mathsf{Alg}:\mathbf{NExt}(\mathtt{GL_R})\to \mathbf{Uni}(\mathsf{Mag})$ and $\mathsf{ThR}:\mathbf{Uni}(\mathsf{Mag})\to \mathbf{NExt}(\mathtt{GL_R})$.\label{algebraisationgluni}
\end{enumerate}
\end{theorem}
\begin{corollary}
Every modal logic $($resp. modal rule system$)$ extending $\mathtt{GL}$ is complete with respect to some variety $($resp. universal class$)$ of Magari algebras. \label{completeness_gl}
\end{corollary}
Modal spaces dual to Magari algebras are called \emph{$\mathtt{GL}$-spaces}. $\mathtt{GL}$-spaces display various similarities with $\mathtt{GRZ}$-spaces, as the reader can appreciate by comparing the following result with \Cref{propgrz1}.
\begin{proposition}[{cf. \cite{Magari1975RaDTfDA}}]
For every $\mathtt{GL}$-space $\mathfrak{X}$ and ${U}\in \mathsf{Clop}(\mathfrak{X})$, the following conditions hold: \label{propgl1}
\begin{enumerate}
\item If $x\in \mathit{max}_R(U)$ then $R[x]\cap U=\varnothing$; \label{propgl1:1}
\item $\mathit{max}_R(U)\in \mathsf{Clop}(\mathfrak{X})$; \label{propgl1:2}
\item If $x\in U$ then either $x\in \mathit{max}_R(U)$ or there is $y\in \mathit{max}_R(U)$ such that $Rxy$;\label{propgl1:3}
\item If $\mathfrak{X}$ is finite then $R$ is irreflexive. \label{propgl1:4}
\end{enumerate}
\end{proposition}
$\mathtt{GL}$ is well known to be complete with respect to the class of irreflexive and transitive Kripke frames containing no infinite ascending chains. However, like $\mathtt{GRZ}$-spaces, $\mathtt{GL}$-spaces may contain clusters, and a fortiori reflexive points.
\subsection{Pre-stable Canonical Rules for Normal Extensions of $\mathtt{KM_R}$ and $\mathtt{GL_R}$}\label{sec:scrgl}
In this section we develop a new kind of algebra-based rules, serving as analogues of stable canonical rules for rule systems in $\mathbf{NExt}(\mathtt{KM_R})$ and $\mathbf{NExt}(\mathtt{GL_R})$. These rules encode a notion of filtration weaker than standard filtration, and are better suited than the latter to the rule systems under discussion. We call them \emph{pre-stable canonical rules}.
\subsubsection{The $\mathtt{KM_R}$ Case}
We have seen notions of filtration for both Heyting and modal algebras. One would hope that combining the latter would yield a suitable notion of filtration for frontons, which could then be used to develop stable canonical rules for rule systems in $\mathbf{NExt}(\mathtt{KM_R})$. This is in principle possible, but suboptimal. The reason is that with filtrations understood this way, rule systems in $\mathbf{NExt}(\mathtt{KM_R})$ would turn out to admit very few filtrations. To see this, recall (\Cref{propKMspa}) that in every finite $\mathtt{KM}$-space $\mathfrak{X}$ we have that $x\sqsubseteq y$ iff $x<y$ for all $x, y\in X$. Now let $\mathfrak{X}$ be any $\mathtt{KM}$-space such that there are $x, y\in X$ with $x\neq y$ and $x\sqsubseteq y$. Then any finite image of $\mathfrak{X}$ under a $\sqsubseteq$-preserving map $h$ with $h(x)=h(y)$ would contain a reflexive point, hence would fail to be a $\mathtt{KM}$-space.
We know that every finite distributive lattice has a unique Heyting algebra expansion, and moreover that every finite Heyting algebra has a unique fronton expansion. These constructions lead to a natural method for extracting finite countermodels based on frontons to non-valid msi rules, which we illustrate in the proof of \Cref{filtrationfronton}. This result, in a somewhat different formulation, was first proved by \citet{Muravitskii1981FAotICatEoaEHNM} via frame-theoretic methods.
\begin{lemma}
For any msi rule $\Gamma/\Delta$, if $\mathsf{Frt}\nvDash\Gamma/\Delta$ then there is a finite fronton $\mathfrak{H}\in \mathsf{Frt}$ such that $\mathfrak{H}\nvDash \Gamma/\Delta$.\label{filtrationfronton}
\end{lemma}
\begin{proof}
Assume $\mathsf{Frt}\nvDash\Gamma/\Delta$ and let $\mathfrak{H}\in \mathsf{Frt}$ be a fronton with $\mathfrak{H}\nvDash \Gamma/\Delta$. Take a valuation $V$ with $\mathfrak{H}, V\nvDash \Gamma/\Delta$. Put $\Theta=\mathit{Sfor}(\Gamma/\Delta)$ and set
\begin{align*}
D^\to&:= \{(\bar V(\varphi), \bar V(\psi))\in H\times H: \varphi\to \psi\in \Theta\}\cup \{(\bar V(\varphi), a): a\in D^\boxtimes\text{ and } \varphi\in \Theta\}\\
D^\boxtimes&:=\{\bar V(\varphi)\in H:\boxtimes \varphi\in \Theta\}
\end{align*}
Let $\mathfrak{K}$ be the bounded distributive sublattice of $\mathfrak{H}$ generated by $\bar V[\Theta]$. For all $a, b\in K$ define
\begin{align*}
a\rightsquigarrow b&:=\bigvee\{c\in K: a\land c\leq b\}\\
\boxtimes' a&:= \bigwedge_{b\in K} (b\lor (b\rightsquigarrow a))
\end{align*}
Obviously $(\mathfrak{K}, \rightsquigarrow)$ is a Heyting algebra, and by \Cref{frontonsquare} it follows that $\mathfrak{K}':=(\mathfrak{K}, \rightsquigarrow, \boxtimes')$ is a fronton. Moreover, the inclusion $\subseteq:\mathfrak{K}'\to \mathfrak{H}$ is a bounded lattice embedding satisfying
\begin{align*}
a\rightsquigarrow b&\leq a\to b & &\text{for all }(a,b)\in K\times K\\
a\rightsquigarrow b&= a\to b & &\text{for all }(a,b)\in D^\to\\
\boxtimes' a&=\boxtimes a & &\text{for all }a\in D^\boxtimes.
\end{align*}
The first two claims are proved the same way as in the proof of \Cref{rewritesi}. For the third claim we reason as follows. Suppose $a\in D^\boxtimes$. Then $(b, a)\in D^\to$ for every $b\in K$ by construction. Therefore,
\[\boxtimes' a=\bigwedge_{b\in K} b\lor (b\rightsquigarrow a)= \bigwedge_{b\in K} b\lor (b\to a).\]
By the axioms of frontons we have $\boxtimes a\leq b\lor (b\to a)$ for all $b\in H$, hence for all $b\in K$ in particular. Therefore $\boxtimes a\leq \boxtimes' a$. Conversely, for any $a\in D^\boxtimes$ we have $\boxtimes a\in K$ (as $\boxtimes\varphi\in \Theta$ implies $\bar V(\boxtimes\varphi)\in \bar V[\Theta]$), and so
\begin{align*}
\boxtimes' a&\leq \boxtimes a\lor (\boxtimes a \rightsquigarrow a) \\
&\leq\boxtimes a \lor (\boxtimes a\to a) \tag{by $\boxtimes a\rightsquigarrow a\leq \boxtimes a\to a$}\\
&=\boxtimes a. \tag{by $\boxtimes a\to a= a\leq \boxtimes a$}
\end{align*}
Let $V'$ be an arbitrary valuation on $\mathfrak{K}'$ with $V'(p)=V(p)$ whenever $p\in \mathit{Sfor}(\Gamma/\Delta)\cap \mathit{Prop}$. Then for every $\varphi\in \Theta$ we have $V(\varphi)=V'(\varphi)$. This is shown easily by induction on the structure of $\varphi$. Therefore, $\mathfrak{K}', V'\nvDash \Gamma/\Delta$.
\end{proof}
The proof of \Cref{filtrationfronton} motivates an alternative notion of filtration for frontons. Let $\mathfrak{H}, \mathfrak{K}\in \mathsf{Frt}$. A map $h:\mathfrak{H}\to \mathfrak{K}$ is called \emph{pre-stable} if for every $a, b\in H$ we have $h(a\to b)\leq h(a)\to h(b)$. For $a, b\in H$, we say that $h$ satisfies the $\to$-\emph{bounded domain condition} (BDC$^\to$) for $(a, b)$ if $h(a\to b)=h(a)\to h(b)$. For $D\subseteq H$, we say that $h$ satisfies the $\boxtimes$-\emph{bounded domain condition} (BDC$^\boxtimes$) for $D$ if $h(\boxtimes a)=\boxtimes h(a)$ for every $a\in D$. If $D\subseteq H\times H$, we say that $h$ satisfies the BDC$^\to$ for $D$ if it does for each $(a, b)\in D$, and analogously for the BDC$^\boxtimes$. Lastly, if $D^\to\subseteq H\times H$ and $D^\boxtimes\subseteq H$, we say that $h$ satisfies the BDC for $(D^\to, D^\boxtimes)$ if $h$ satisfies the BDC$^\to$ for $D^\to$ and the BDC$^\boxtimes$ for $D^\boxtimes$.
\begin{definition}
Let $\mathfrak{H}$ be a fronton, $V$ a valuation on $\mathfrak{H}$, and $\Theta$ a finite, subformula-closed set of formulae. A (finite) model $(\mathfrak{K}', V')$, with $\mathfrak{K}'\in \mathsf{Frt}$, is called a (\emph{finite}) \emph{weak filtration of $(\mathfrak{H}, V)$ through $\Theta$} if the following hold:
\begin{enumerate}
\item $\mathfrak{K}'=(\mathfrak{K},\to , \boxtimes)$, where $\mathfrak{K}$ is the bounded sublattice of $\mathfrak{H}$ generated by $\bar V[\Theta]$;
\item $V(p)=V'(p)$ for every propositional variable $p\in \Theta$;
\item The inclusion $\subseteq:\mathfrak{K}'\to \mathfrak{H}$ is a pre-stable embedding satisfying the BDC$^\to$ for the set $\{(\bar V(\varphi), \bar V(\psi)):\varphi\to \psi\in \Theta\}$, and the BDC$^\boxtimes$ for the set $\{\bar V(\varphi):\boxtimes \varphi\in \Theta\}$.
\end{enumerate}
\end{definition}
A straightforward induction on structure establishes the following filtration theorem.
\begin{theorem}[Filtration theorem for frontons]
Let $\mathfrak{H}$ be a fronton, $V$ a valuation on $\mathfrak{H}$, and $\Theta$ a finite, subformula-closed set of formulae. If $(\mathfrak{K}', V')$ is a weak filtration of $(\mathfrak{H}, V)$ through $\Theta$ then for every $\varphi\in \Theta$ we have
\[\bar V(\varphi)=\bar V'(\varphi).\]
Consequently, for every rule $\Gamma/\Delta$ such that $\gamma, \delta\in \Theta$ for each $\gamma\in \Gamma$ and $\delta\in \Delta$ we have
\[\mathfrak{H}, V\models \Gamma/\Delta\iff \mathfrak{K}', V'\models \Gamma/\Delta.\]
\end{theorem}
We now introduce algebra-based rules for rule systems in $\mathbf{NExt}(\mathtt{KM_R})$ by syntactically encoding weak filtrations as just defined. We call these \emph{pre-stable canonical rules} to emphasize the role of pre-stable maps as opposed to stable maps in their refutation conditions.
\begin{definition}
Let $\mathfrak{H}\in \mathsf{Frt}$ be a finite fronton, and let $D^\to\subseteq H\times H$, $D^\boxtimes\subseteq H$ be such that $a\in D^\boxtimes$ implies $(b, a)\in D^\to$ for every $b\in H$. The \emph{pre-stable canonical rule} of $(\mathfrak{H}, D^\to, D^\boxtimes)$ is defined as $\pscrmsi{H}{D}=\Gamma/\Delta$, where
\begin{align*}
\Gamma:=&\{p_0\leftrightarrow 0\}\cup\{p_1\leftrightarrow 1\} \cup\\
&\{p_{a\land b}\leftrightarrow p_a\land p_b:a, b\in H\}\cup
\{p_{a\lor b}\leftrightarrow p_a\lor p_b:a, b\in H\}\cup\\
& \{p_{a\to b}\leftrightarrow p_a\to p_b:(a, b)\in D^\to\}\cup \{p_{\boxtimes a}\leftrightarrow \boxtimes p_a:a\in D^\boxtimes\}\\
\Delta:=&\{p_a\leftrightarrow p_b:a, b\in H\text{ with } a\neq b\}.
\end{align*}
\end{definition}
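For a concrete instance (a routine illustration, not drawn from the cited sources), let $\mathfrak{H}$ be the two-element fronton on $\{0, 1\}$, where necessarily $\boxtimes 0=\boxtimes 1=1$ by \Cref{frontonsquare}, and take $D^\to=D^\boxtimes=\varnothing$, so the proviso on $D^\boxtimes$ holds vacuously. Then $\Gamma$ consists of $p_0\leftrightarrow 0$ and $p_1\leftrightarrow 1$ together with the (here redundant) $\land$- and $\lor$-clauses, while $\Delta=\{p_0\leftrightarrow p_1\}$. By the refutation condition below, a fronton $\mathfrak{K}$ refutes this rule precisely when there is a pre-stable embedding of the two-element fronton into $\mathfrak{K}$, i.e. precisely when $\mathfrak{K}$ is nontrivial.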
The next two results outline algebraic refutation conditions for msi pre-stable canonical rules. They may be proved with straightforward adaptations of the proofs of similar results seen in earlier sections.
\begin{proposition}
For every finite fronton $\mathfrak{H}$ and $D^\to\subseteq H\times H$, $D^\boxtimes\subseteq H$, we have $\mathfrak{H}\nvDash \pscrmsi{H}{D}$. \label{msiscr-refutation1}
\end{proposition}
\begin{proposition}
For every msi pre-stable canonical rule $\pscrmsi{H}{D}$ and any $\mathfrak{K}\in \mathsf{Frt}$, we have $\mathfrak{K}\nvDash \pscrmsi{H}{D}$ iff there is a pre-stable embedding $h:\mathfrak{H}\to \mathfrak{K}$ satisfying the BDC for $(D^\to,D^\boxtimes)$.\label{msiscr-refutation2}
\end{proposition}
We now give refutation conditions for msi pre-stable canonical rules on $\mathtt{KM}$-spaces. If $\mathfrak{X}, \mathfrak{Y}$ are $\mathtt{KM}$-spaces, a map $f:\mathfrak{X}\to \mathfrak{Y}$ is called \emph{pre-stable} if for all $x, y\in X$, $x\leq y$ implies $f(x)\leq f(y)$. Clearly, if $f$ is pre-stable then for all $x, y\in X$, $x\sqsubseteq y$ implies $f(x)\leq f(y)$. Now let $\mathfrak{d}\subseteq Y$. We say that $f$ \emph{satisfies the BDC$^\to$ for $\mathfrak{d}$} if for all $x\in X$,
\[{{\uparrow}}[ f(x)]\cap \mathfrak{d}\neq\varnothing\Rightarrow f[{{\uparrow}} x]\cap \mathfrak{d}\neq\varnothing.\]
We say that $f$ \emph{satisfies the BDC$^\boxtimes$ for $\mathfrak{d}$} if for all $x\in X$ the following two conditions hold.
\begin{align*}
{{\Uparrow}}[f(x)]\cap \mathfrak{d}\neq\varnothing \Rightarrow f[{{\Uparrow}} x]\cap \mathfrak{d}\neq\varnothing\tag {BDC$^\boxtimes$-back}\\
f[{{\Uparrow}} x]\cap \mathfrak{d}\neq\varnothing \Rightarrow {{\Uparrow}}[f(x)]\cap \mathfrak{d}\neq\varnothing\tag {BDC$^\boxtimes$-forth}
\end{align*}
If $\mathfrak{D}\subseteq \wp(Y)$, then we say that $f$ satisfies the BDC$^\to$ for $\mathfrak{D}$ if it does for every $\mathfrak{d}\in \mathfrak{D}$, and similarly for the BDC$^\boxtimes$. Finally, if $\mathfrak{D}^\to, \mathfrak{D}^\boxtimes\subseteq \wp(Y)$, then we say that $f$ satisfies the BDC for $(\mathfrak{D}^\to, \mathfrak{D}^\boxtimes)$ if $f$ satisfies the BDC$^\to$ for $\mathfrak{D}^\to$ and the BDC$^\boxtimes$ for $\mathfrak{D}^\boxtimes$.
Let $\mathfrak{H}$ be a finite fronton. If $D^\to\subseteq H\times H$, for every $(a, b)\in D^\to$ set $\mathfrak{d}^\to_{(a, b)}:=\beta (a)\smallsetminus\beta (b)$. If $D^\boxtimes\subseteq H$, for every $a\in D^\boxtimes$ set $\mathfrak{d}^\boxtimes_{a}:=-\beta (a)$. Finally, put $\mathfrak{D}^\to:=\{\mathfrak{d}^\to_{(a, b)}:(a, b)\in D^\to\}$, $\mathfrak{D}^\boxtimes:=\{\mathfrak{d}^\boxtimes_{a}:a \in D^\boxtimes\}$.
\begin{proposition}
For every msi pre-stable canonical rule $\pscrmsi{H}{D}$ and any $\mathtt{KM}$-space $\mathfrak{X}$, we have $\mathfrak{X}\nvDash\pscrmsi{H}{D}$ iff there is a continuous pre-stable surjection $f:\mathfrak{X}\to \mathfrak{H}_*$ satisfying the BDC for $(\mathfrak{D}^\to, \mathfrak{D}^\boxtimes)$.\label{refutspacemsi}
\end{proposition}
\begin{proof}
$(\Rightarrow)$ Assume $\mathfrak{X}\nvDash\pscrmsi{H}{D}$. Then there is a pre-stable embedding $h:\mathfrak{H}\to\mathfrak{X}^*$ satisfying the BDC for $(D^\to,D^\boxtimes)$. Reasoning as in the proofs of \Cref{refutspace} and \Cref{refutspacemod} it follows that there is a pre-stable map $f:\mathfrak{X}\to \mathfrak{H}_*$ satisfying the BDC$^\to$ for $\mathfrak{D}^\to$ and satisfying the BDC$^\boxtimes$-back for $\mathfrak{D}^\boxtimes$, namely the map $f=h^{-1}$.
Let us check that $f$ satisfies the BDC$^\boxtimes$-forth for $\mathfrak{D}^\boxtimes$. Let $\mathfrak{d}^\boxtimes_a\in \mathfrak{D}^\boxtimes$. Assume $f[{{\Uparrow}} x]\cap \mathfrak{d}^\boxtimes_a\neq\varnothing$, i.e., that there is $y\in {{\Uparrow}} x$ with $f(y)\in \mathfrak{d}^\boxtimes_a$. So $x\notin \boxtimes_\sqsubseteq h(U)$, where $U:=-\mathfrak{d}^\boxtimes_a$. Since $h$ satisfies the BDC$^\boxtimes$ for $\mathfrak{d}^\boxtimes_a$ we have $\boxtimes_\sqsubseteq h(U)=h(\boxtimes_\sqsubseteq U)$, and so $x\notin h(\boxtimes_\sqsubseteq U)$. This implies $f(x)\notin \boxtimes_\sqsubseteq(U)$, therefore there must be some $z\in \mathfrak{d}^\boxtimes_a$ such that $f(x)\sqsubseteq z$, i.e. ${{\Uparrow}} [f(x)]\cap \mathfrak{d}^\boxtimes_a\neq\varnothing$.
$(\Leftarrow)$ Assume that there is a continuous pre-stable surjection $f:\mathfrak{X}\to \mathfrak{H}_*$ satisfying the BDC for $(\mathfrak{D}^\to,\mathfrak{D}^\boxtimes)$. By the proof of \Cref{refutspace}, $f^{-1}:\mathfrak{H}\to \mathfrak{X}^*$ is a pre-stable embedding satisfying the BDC$^\to$ for $D^\to$. Let us check that $f^{-1}$ satisfies the BDC$^\boxtimes$ for $D^\boxtimes$. Let $U:=\beta(a)$ for some $a\in D^\boxtimes$, and reason as follows.
\begin{align*}
x\notin f^{-1}(\boxtimes_\sqsubseteq U)&\iff {{\Uparrow}} x\cap f^{-1}(\mathfrak{d}^\boxtimes_a)\neq\varnothing\\
&\iff {{\Uparrow}} [f(x)]\cap \mathfrak{d}^\boxtimes_a\neq\varnothing \tag{$f$ satisfies the BDC$^\boxtimes$ for $\mathfrak{d}^\boxtimes_a$}\\
&\iff x\notin \boxtimes_\sqsubseteq f^{-1}(U).
\end{align*}
\end{proof}
In view of \Cref{refutspacemsi}, when working with $\mathtt{KM}$-spaces we may write an msi pre-stable canonical rule $\pscrmsi{H}{D}$ as $\pscrmsi{H_*}{\mathfrak{D}}$.
We close this subsection by proving that our msi pre-stable canonical rules are expressive enough to axiomatise every rule system in $\mathbf{NExt}(\mathtt{KM_R})$.
\begin{lemma}
For every msi rule $\Gamma/\Delta$ there is a finite set $\Xi$ of msi pre-stable canonical rules such that for any $\mathfrak{K}\in \mathsf{Frt}$ we have $\mathfrak{K}\nvDash \Gamma/\Delta$ iff there is $\pscrmsi{H}{D}\in \Xi$ such that $\mathfrak{K}\nvDash \pscrmsi{H}{D}$.\label{rewritemsi}
\end{lemma}
\begin{proof}
Since the variety of bounded distributive lattices is locally finite, there are, up to isomorphism, only finitely many triples $(\mathfrak{H}, D^\to, D^\boxtimes)$ such that
\begin{itemize}
\item $\mathfrak{H}\in \mathsf{Frt}$ and $\mathfrak{H}$ is at most $k$-generated as a bounded distributive lattice, where $k=|\mathit{Sfor}(\Gamma/\Delta)|$;
\item There is a valuation $V$ on $\mathfrak{H}$ refuting $\Gamma/\Delta$, such that
\begin{align*}
D^\to=&\{(\bar V(\varphi), \bar V(\psi)):\varphi\to \psi\in \mathit{Sfor}(\Gamma/\Delta)\}\cup \\
& \{(\bar V(\varphi), b):\boxtimes\varphi\in \mathit{Sfor}(\Gamma/\Delta)\text{ and }b\in H\}\\
D^\boxtimes=&\{\bar V(\varphi): \boxtimes\varphi\in \mathit{Sfor}(\Gamma/\Delta)\}.
\end{align*}
\end{itemize}
Let $\Xi$ be the set of all msi pre-stable canonical rules $\pscrmsi{H}{D}$ for all such triples $(\mathfrak{H}, D^\to, D^\boxtimes)$, identified up to isomorphism.
$(\Rightarrow)$ Let $\mathfrak{K}\in \mathsf{Frt}$ and suppose $\mathfrak{K}\nvDash \Gamma/\Delta$. Take a valuation $V$ on $\mathfrak{K}$ such that $\mathfrak{K}, V\nvDash \Gamma/\Delta$. Then by the proof of \Cref{filtrationfronton} there is a weak filtration $(\mathfrak{H}', V')$ of $(\mathfrak{K}, V)$ through $\mathit{Sfor}(\Gamma/\Delta)$, which by the filtration theorem for frontons is such that $\mathfrak{H}', V'\nvDash \Gamma/\Delta$. This implies that there is a pre-stable embedding $h:\mathfrak{H}'\to \mathfrak{K}$, which again by the proof of \Cref{filtrationfronton} satisfies the BDC for the pair $(D^\to, D^\boxtimes)$ defined as above.
Therefore $\pscrmsi{H'}{D}\in \Xi$ and $\mathfrak{K}\nvDash\pscrmsi{H'}{D}$.
$(\Leftarrow)$ Analogous to the same direction in, e.g., \Cref{rewritesi}.
\end{proof}
\begin{theorem}
Every msi-rule system $\mathtt{L}\in \mathbf{NExt}(\mathtt{KM_R})$ is axiomatisable over $\mathtt{KM_R}$ by some set of msi pre-stable canonical rules of the form $\pscrmsi{H}{D}$, where $\mathfrak{H}\in \mathsf{KM}$. \label{axiomKMscr}
\end{theorem}
\begin{proof}
Analogous to \Cref{axiomatisationsi}.
\end{proof}
\subsubsection{The $\mathtt{GL_R}$ Case}
Modal stable canonical rules as developed in \Cref{sec:scrmod} can axiomatise every rule system in $\mathbf{NExt}(\mathtt{GL_R})$ \cite[Theorem 5.6]{BezhanishviliEtAl2016SCR}. However, modal stable canonical rules differ significantly from msi pre-stable canonical rules: they are based on a different notion of filtration, which is stated in terms of stable rather than pre-stable maps.
Moreover, $\mathtt{GL_R}$ admits very few filtrations. The situation is similar to the case of $\mathbf{NExt}(\mathtt{KM_R})$. For recall (\Cref{propgl1}) that finite $\mathtt{GL}$-spaces are strict partial orders. If $\mathfrak{X}$ is a $\mathtt{GL}$-space and $f:\mathfrak{X}\to \mathfrak{Y}$ is a stable map from $\mathfrak{X}$ onto some finite modal space $\mathfrak{Y}$ such that $f(x)=f(y)$ for some $x, y\in X$ with $Rxy$, then $\mathfrak{Y}$ contains a reflexive point, hence cannot be a $\mathtt{GL}$-space.
In response to this problem, an alternative notion of filtration was introduced in \cite{BenthemBezhanishviliforthcomingMFoF}, who note that the same technique was already used in \cite{Boolos1993TLoP}. We call it \emph{weak filtration}. As usual, we prefer an algebraic definition. If $\mathfrak{A}, \mathfrak{B}$ are modal algebras and $D\subseteq A$, let us say that a map $h:\mathfrak{A}\to \mathfrak{B}$ satisfies the $\square$-\emph{bounded domain condition} (BDC$^\square$) for $D$ if $h(\square a)= \square h(a)$ for every $a\in D$.
\begin{definition}
Let $\mathfrak{B}\in \mathsf{Mag}$ be a Magari algebra, $V$ a valuation on $\mathfrak{B}$, and $\Theta$ a finite, subformula closed set of formulae. A (finite) model $(\mathfrak{A}', V')$, with $\mathfrak{A}'\in \mathsf{Mag}$, is called a (\emph{finite}) \emph{weak filtration of $(\mathfrak{B}, V)$ through $\Theta$} if the following hold:
\begin{enumerate}
\item $\mathfrak{A}'=(\mathfrak{A}, \square)$, where $\mathfrak{A}$ is the Boolean subalgebra of $\mathfrak{B}$ generated by $\bar V[\Theta]$;
\item $V(p)=V'(p)$ for every propositional variable $p\in \Theta$;
\item The inclusion $\subseteq:\mathfrak{A}'\to \mathfrak{B}$ satisfies the BDC$^\square$ for $D:=\{\bar V'(\varphi):\square\varphi\in \Theta\}$.
\end{enumerate}
\end{definition}
\begin{theorem}
Let $\mathfrak{B}\in \mathsf{Mag}$ be a Magari algebra, $V$ a valuation on $\mathfrak{B}$, and $\Theta$ a finite, subformula closed set of formulae. Let $(\mathfrak{A}', V')$ be a weak filtration of $(\mathfrak{B}, V)$ through $\Theta$. Then for every $\varphi\in \Theta$ we have
\[\bar V(\varphi)=\bar V'(\varphi).\]
\end{theorem}
\begin{proof}
Straightforward induction on the structure of $\varphi$.
\end{proof}
Unlike weak filtrations in the msi setting, modal weak filtrations are not in general unique. We will be particularly interested in weak filtrations satisfying an extra condition, which we will construe as a modal counterpart to pre-stability in the msi setting. For any modal algebra $\mathfrak{A}$ and $a\in A$ we write $\square^+(a):=\square a\land a$. Let $\mathfrak{A}, \mathfrak{B}\in \mathsf{Mag}$ be Magari algebras. A Boolean homomorphism $h:\mathfrak{A}\to \mathfrak{B}$ is called \emph{pre-stable} if for every $a\in A$ we have $h(\square^+ a)\leq \square^+h(a)$. Clearly, every stable Boolean homomorphism $h:\mathfrak{A}\to \mathfrak{B}$ is pre-stable, since $h(\square a)\leq \square h(a)$ implies $h(\square a\land a)=h(\square a)\land h(a)\leq \square h(a)\land h(a)$. A weak filtration $(\mathfrak{A}', V')$ of some model $(\mathfrak{B}, V)$ through some finite, subformula closed set of formulae $\Theta$ is called \emph{pre-stable} if the embedding $\subseteq :\mathfrak{A}'\to \mathfrak{B}$ is pre-stable.
If $\mathfrak{A}, \mathfrak{B}$ are modal algebras and $D\subseteq A$, a map $h:\mathfrak{A}\to \mathfrak{B}$ satisfies the $\square^+$-\emph{bounded domain condition} (BDC$^{\square^+}$) for $D$ if $h(\square^+ a)= \square^+h(a)$ for every $a\in D$. Note that if $(\mathfrak{A}', V')$ is a weak filtration of $(\mathfrak{B}, V)$ through some $\Theta$, then for every $D\subseteq A$ the inclusion $\subseteq:\mathfrak{A}'\to \mathfrak{B}$ satisfies the BDC$^{\square^+}$ for $D$ iff it satisfies the BDC$^\square$ for $D$. Indeed, since $\Theta$ is subformula-closed we have that $\square^+ \varphi\in \Theta$ implies $\square \varphi\in \Theta$, which gives the ``only if'' direction, whereas the converse follows from the fact that $\subseteq$ is a Boolean embedding.
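For the converse direction, writing $h$ for the inclusion, the verification is a one-line computation: if $h$ satisfies the BDC$^\square$ for $a\in D$, then
\begin{align*}
h(\square^+ a) &= h(\square a\land a) = h(\square a)\land h(a) && (h\text{ is a Boolean homomorphism})\\
&= \square h(a)\land h(a) = \square^+ h(a). && (\text{BDC}^\square\text{ for } a)
\end{align*}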
Our algebra-based rules encode pre-stable weak filtrations as defined above, and explicitly include a parameter $D^{\square^+}$, linked to the BDC$^{\square^+}$, intended as a counterpart to the parameter $D^\to$ of msi pre-stable canonical rules. We call these rules \emph{modal pre-stable canonical rules.}
\begin{definition}
Let $\mathfrak{A}\in \mathsf{MA}$ be a finite modal algebra, and let $D^{\square^+}, D^\square\subseteq A$. Let $\square^+\varphi:=\square \varphi\land\varphi$. The \emph{pre-stable canonical rule} of $(\mathfrak{A}, D^{\square^+}, D^\square)$ is defined as $\pscrmod{A}{D}=\Gamma/\Delta$, where
\begin{align*}
\Gamma:=&\{p_{a\land b}\leftrightarrow p_a\land p_b:a, b\in A\}\cup \{p_{a\lor b}\leftrightarrow p_a\lor p_b:a, b\in A\}\cup \\
&\{p_{\neg a}\leftrightarrow \neg p_a:a\in A\}\cup \{p_{\square^+a}\to \square^+p_a:a\in A\}\cup\\
& \{ \square^+ p_a\to p_{\square^+ a}:a\in D^{\square^+}\}\cup \{p_{\square a}\leftrightarrow \square p_a:a\in D^\square\}\\
\Delta:=&\{p_a:a\in A\smallsetminus \{1\}\}.
\end{align*}
\end{definition}
It is helpful to conceptualise modal pre-stable canonical rules as algebra-based rules for bi-modal rule systems in the signature $\{\land, \lor, \neg,\square, \square^+,0,1\}$ (so that $\square^+$ is an independent operator rather than defined from $\square$) and containing $\square^+ p\leftrightarrow \square p\land p$ as an axiom.\footnote{This view of $\mathtt{GL}$ as a bimodal logic is the main insight informing Litak's \cite{Litak2014CMwPS} strategy for deriving \Cref{KMisovar} of \Cref{KMiso} from the theory of polymodal companions of msi-logics as developed by \citet{WolterZakharyaschev1997IMLAFoCBL,WolterZakharyaschev1997OtRbIaCML}. In that setting, msi formulae are translated into formulae in a bimodal signature, but the two modalities of the latter can be regarded as implicitly interdefinable in logics where one satisfies the Löb formula.} From this perspective, modal pre-stable canonical rules are rather similar to msi pre-stable canonical rules.
Using by now familiar reasoning, it is easy to verify that modal pre-stable canonical rules display the intended refutation conditions. For brevity, let us say that a pre-stable map $h$ satisfies the BDC for $(D^{\square^+}, D^\square)$ if $h$ satisfies the BDC$^{\square^+}$ for $D^{\square^+}$ and the BDC$^\square$ for $D^\square$.
\begin{proposition}
For every finite modal algebra $\mathfrak{A}\in \mathsf{MA}$ and $D^{\square^+}, D^\square\subseteq A$, we have $\mathfrak{A}\nvDash \pscrmod{A}{D}$. \label{md+scr-refutation1}
\end{proposition}
\begin{proposition}
For every modal algebra $\mathfrak{B}\in \mathsf{MA}$ and any modal pre-stable canonical rule $\pscrmod{A}{D}$, we have $\mathfrak{B}\nvDash \pscrmod{A}{D}$ iff there is a pre-stable embedding $h:\mathfrak{A}\to \mathfrak{B}$ satisfying the BDC for $(D^{\square^+}, D^\square)$.\label{md+scr-refutation2}
\end{proposition}
If $\mathfrak{X}$ is any modal space, for any $x, y\in X$ define $R^+xy$ iff $Rxy$ or $x=y$. Let $\mathfrak{X}, \mathfrak{Y}$ be $\mathtt{GL}$-spaces. A map $f:\mathfrak{X}\to \mathfrak{Y}$ is called \emph{pre-stable} if for all $x, y\in X$ we have that $R^+x y$ implies $R^+f(x) f(y)$. If $\mathfrak{d}\subseteq Y$, we say that $f$ satisfies the BDC$^{\square^+}$ for $\mathfrak{d}$ if for all $x\in X$,
\[R^+[ f(x)]\cap \mathfrak{d}\neq\varnothing\Rightarrow f[R^+[x]]\cap \mathfrak{d}\neq\varnothing.\]
Furthermore, we say that $f$ satisfies the BDC$^\square$ for $\mathfrak{d}$ if for all $x\in X$ the following two conditions hold.
\begin{align*}
R[f(x)]\cap \mathfrak{d}\neq\varnothing &\Rightarrow f[R[x]]\cap \mathfrak{d}\neq\varnothing\tag {BDC$^\square$-back}\\
f[R[x]]\cap \mathfrak{d}\neq\varnothing &\Rightarrow R[f(x)]\cap \mathfrak{d}\neq\varnothing.\tag {BDC$^\square$-forth}
\end{align*}
Finally, if $\mathfrak{D}\subseteq\wp(Y)$ we say that $f$ satisfies the BDC$^{\square^+}$ (resp. BDC$^{\square}$) for $\mathfrak{D}$ if it does for every $\mathfrak{d}\in \mathfrak{D}$, and if $\mathfrak{D}^{\square^+}, \mathfrak{D}^\square\subseteq\wp(Y)$ we write that $f$ satisfies the BDC for $(\mathfrak{D}^{\square^+}, \mathfrak{D}^\square)$ if $f$ satisfies the BDC$^{\square^+}$ for $\mathfrak{D}^{\square^+}$ and the BDC$^{\square}$ for $\mathfrak{D}^\square$. Let $\mathfrak{A}$ be a finite Magari algebra. If $D^{\square^+}\subseteq A$, for every $a\in D^{\square^+}$ set $\mathfrak{d}^{\square^+}_{a}:=-\beta (a)$. If $D^\square\subseteq A$, for every $a\in D^\square$ set $\mathfrak{d}^\square_{a}:=-\beta (a)$. Finally, put $\mathfrak{D}^{\square^+}:=\{\mathfrak{d}^{\square^+}_{a}:a\in D^{\square^+}\}$, $\mathfrak{D}^\square:=\{\mathfrak{d}^\square_{a}:a \in D^\square\}$.
\begin{proposition}
For all $\mathtt{GL}$-spaces $\mathfrak{X}$ and any modal pre-stable canonical rule $\pscrmod{A}{D}$, we have $\mathfrak{X}\nvDash\pscrmod{A}{D}$ iff there is a continuous pre-stable surjection $f:\mathfrak{X}\to \mathfrak{A}_*$ satisfying the BDC for $(\mathfrak{D}^{\square^+}, \mathfrak{D}^\square)$.\label{refutspacemd+}
\end{proposition}
As usual, in view of \Cref{refutspacemd+} we write a modal pre-stable canonical rule $\pscrmod{A}{D}$ as $\pscrmod{A_*}{\mathfrak{D}}$ in geometric settings.
We close this section by proving that pre-stable canonical rules axiomatise any rule system in $\mathbf{NExt}(\mathtt{GL_R})$.
\begin{lemma}
For every modal rule $\Gamma/\Delta$ there is a finite set $\Xi$ of modal pre-stable canonical rules of the form $\pscrmod{A}{D}$ with $\mathfrak{A}\in \mathsf{K4}$, such that for any $\mathfrak{B}\in \mathsf{Mag}$ we have $\mathfrak{B}\nvDash \Gamma/\Delta$ iff there is $\pscrmod{A}{D}\in \Xi$ such that $\mathfrak{B}\nvDash \pscrmod{A}{D}$.\label{rewritemd+}
\end{lemma}
\begin{proof}
Since the variety of Boolean algebras is locally finite, there are, up to isomorphism, only finitely many triples $(\mathfrak{A}, D^{\square^+}, D^\square)$ such that
\begin{itemize}
\item $\mathfrak{A}\in \mathsf{K4}$ and $\mathfrak{A}$ is at most $k$-generated as a Boolean algebra, where $k=|\mathit{Sfor}(\Gamma/\Delta)|$;
\item There is a valuation $V$ on $\mathfrak{A}$ refuting $\Gamma/\Delta$, such that
\begin{align*}
D^{\square^+}&=\{\bar V(\varphi): \square^+\varphi\in \mathit{Sfor}(\Gamma/\Delta)\}\\
D^{\square}&=\{\bar V(\varphi): \square\varphi\in \mathit{Sfor}(\Gamma/\Delta)\}
\end{align*}
\end{itemize}
Let $\Xi$ be the set of all modal pre-stable canonical rules $\pscrmod{A}{D}$ for all such triples $(\mathfrak{A}, D^{\square^+}, D^\square)$, identified up to isomorphism.
$(\Rightarrow)$ Let $\mathfrak{B}\in \mathsf{Mag}$ and suppose $\mathfrak{B}\nvDash \Gamma/\Delta$. Take a valuation $V$ on $\mathfrak{B}$ such that $\mathfrak{B}, V\nvDash \Gamma/\Delta$. As is well known, there is a transitive filtration $(\mathfrak{A}', V')$ of $(\mathfrak{B}, V)$ through $\mathit{Sfor}(\Gamma/\Delta)$. Then $\mathfrak{A}'\in \mathsf{K4}$. Moreover, clearly every filtration is a weak filtration, hence so is $(\mathfrak{A}', V')$. Therefore there is a Boolean embedding $h:\mathfrak{A}'\to \mathfrak{B}$ satisfying the BDC for $(D^{\square^+}, D^\square)$, where $D^{\square^+}:=\{\bar V'(\varphi): \square^+\varphi\in \mathit{Sfor}(\Gamma/\Delta)\}$ and $D^{\square}:=\{\bar V'(\varphi): \square\varphi\in \mathit{Sfor}(\Gamma/\Delta)\}$. Indeed, it is obvious that $h$ is a Boolean embedding which satisfies the BDC$^{\square}$ for $D^{\square}$. The fact that $h$ satisfies the BDC$^{\square^+}$ follows by noting that, additionally, $\square\varphi\in \mathit{Sfor}(\square^+\varphi)$ for every modal formula $\varphi$. Lastly, since $(\mathfrak{A}', V')$ is actually a filtration, $h$ is stable, a fortiori pre-stable. Hence $\pscrmod{A'}{D}\in \Xi$ and $\mathfrak{B}\nvDash \pscrmod{A'}{D}$.
$(\Leftarrow)$ Routine.
\end{proof}
\begin{theorem}
Every modal rule system $\mathtt{M}\in \mathbf{NExt}(\mathtt{GL_R})$ is axiomatisable over $\mathtt{GL_R}$ by some set of modal pre-stable canonical rules of the form $\pscrmod{A}{D}$, where $\mathfrak{A}\in \mathsf{K4}$. \label{axiomGLscr}
\end{theorem}
\subsection{The Kuznetsov-Muravitsky Isomorphism via Stable Canonical Rules}\label{sec:isomorphismgl}
We are ready for the main topic of this section, the Kuznetsov-Muravitsky isomorphism and its extension to rule systems. We apply pre-stable canonical rules to prove this and related results in the vicinity, using essentially the same techniques seen in \Cref{sec:modalcompanions,sec:tensecompanions}.
\subsubsection{Semantic Mappings}\label{sec:mappingsmsi}
We begin by reviewing the constructions for transforming frontons into corresponding Magari algebras and vice versa. The results in this paragraph are known, and recent proofs can be found in, e.g., \cite{Esakia2006TMHCaCMEotIL}.
\begin{definition}
The mapping $\sigma: \mathsf{Frt}\to \mathsf{Mag}$ assigns to every $\mathfrak{H}\in \mathsf{Frt}$ the algebra $\sigma \mathfrak{H}:=(B(\mathfrak{H}), \square)$, where $B(\mathfrak{H})$ is the free Boolean extension of $\mathfrak{H}$ and for every $a\in B(H)$ we have
\begin{align*}
Ia&:=\bigvee\{b\in H:b\leq a\}\\
\square a&:=\boxtimes Ia.
\end{align*}
\end{definition}
Observe that if $a\in H$ then $a$ itself is the greatest element of $\{b\in H:b\leq a\}$, so $Ia=a$ and hence $\square a=\boxtimes a$. Consequently, if $a\in H$ then also $\square^+ a=\boxtimes^+ a$.
\begin{definition}
The mapping $\rho:\mathsf{Mag}\to \mathsf{Frt}$ assigns to every Magari algebra $\mathfrak{A}\in \mathsf{Mag}$ the algebra $\rho \mathfrak{A}:=(O(A), \land, \lor, \to, \boxtimes, 1, 0)$, where
\begin{align*}
O(A)&:=\{a\in A:\square^+ a=a\}\\
a\to b&:=\square^+ (\neg a\lor b)\\
\boxtimes a&:= \square a
\end{align*}
\end{definition}
By unpacking the definitions just presented, it is not difficult to verify that the following proposition holds.
\begin{proposition}
For every $\mathfrak{H}\in \mathsf{Frt}$ we have $\mathfrak{H}\cong \rho\sigma \mathfrak{H}$. Moreover, for every $\mathfrak{A}\in \mathsf{Mag}$ we have $\sigma \rho\mathfrak{A}\rightarrowtail\mathfrak{A}$.\label{representationFrtMag}
\end{proposition}
We call a Magari algebra $\mathfrak{A}$ \emph{skeletal} if $\sigma \rho\mathfrak{A}\cong\mathfrak{A}$ holds.
We now give more suggestive dual descriptions of the maps $\sigma, \rho$ on $\mathtt{KM}$- and $\mathtt{GL}$-spaces, which also make it easier to verify that $\sigma, \rho$ have the intended ranges.
\begin{definition}
If $\mathfrak{X}=(X, \leq,\sqsubseteq, \mathcal{O})$ is a $\mathtt{KM}$-space we set $\sigma \mathfrak{X}:=(X, R, \mathcal{O})$, where $R=\sqsubseteq$. Let $\mathfrak{Y}:=(Y, R, \mathcal{O})$ be a $\mathtt{GL}$-space. For $x, y\in Y$ write $x\sim y$ iff $Rxy$ and $Ryx$. Define a map $\rho:Y\to \wp (Y)$ by setting $\rho(x)=\{y\in Y:x\sim y\}$. We define $\rho\mathfrak{Y}:=(\rho[Y], \leq_\rho, \sqsubseteq_\rho, \rho[\mathcal{O}])$, where $\rho(x)\sqsubseteq_\rho \rho(y)$ iff $Rxy$, and $\rho(x)\leq_\rho \rho(y)$ iff $\rho(x)\sqsubseteq_\rho \rho(y)$ or $\rho(x)=\rho(y)$.
\end{definition}
\begin{proposition}
The following conditions hold.\label{msimapsdual}
\begin{enumerate}
\item Let $\mathfrak{H}\in \mathsf{Frt}$. Then $(\sigma \mathfrak{H})_*\cong\sigma (\mathfrak{H}_*)$. Consequently, if $\mathfrak{X}$ is a $\mathtt{KM}$-space then $(\sigma \mathfrak{X})^*\cong\sigma (\mathfrak{X}^*)$. \label{msimapsdual1}
\item Let $\mathfrak{X}$ be a $\mathtt{GL}$-space. Then $(\rho\mathfrak{X})^*\cong\rho(\mathfrak{X}^*)$. Consequently, if $\mathfrak{A}\in \mathsf{Mag}$, then $(\rho\mathfrak{A})_*\cong\rho(\mathfrak{A}_*)$. \label{msimapsdual2}
\end{enumerate}
\end{proposition}
\begin{proposition}
For every fronton $\mathfrak{H}\in \mathsf{Frt}$ we have that $\sigma \mathfrak{H}$ is a Magari algebra, and for every Magari algebra $\mathfrak{A}\in \mathsf{Mag}$ we have that $\rho\mathfrak{A}$ is a fronton.
\end{proposition}
\subsubsection{A Gödelian Translation} We now show how to translate msi formulae into modal formulae in a way which suits our current goals. The main idea, already anticipated when developing msi stable canonical rules, is to conceptualise rule systems in $\mathbf{NExt}(\mathtt{GL_R})$ as stated in a signature containing two modal operators $\square, \square^+$, so to use $\square$ to translate $\boxtimes$ and $\square^+$ to translate $\to$. This leads to the following Gödelian translation function.
\begin{definition}
The Gödelian translation $T:\mathit{Tm}_{msi}\to \mathit{Tm}_{md}$ is defined recursively as follows.
\begin{align*}
T(\bot)&:=\bot\\
T(\top)&:=\top\\
T(p)&:=\square p\\
T(\varphi\land \psi)&:=T(\varphi)\land T(\psi)\\
T(\varphi\lor \psi)&:=T(\varphi)\lor T(\psi)\\
T(\varphi\to \psi)&:=\square^+ (\neg T(\varphi)\lor T(\psi))\\
T(\boxtimes \varphi)&:=\square T(\varphi)
\end{align*}
\end{definition}
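To illustrate, applying the clauses above to the (arbitrarily chosen) msi formula $\boxtimes p\to p$ gives
\begin{align*}
T(\boxtimes p\to p) &= \square^+ (\neg T(\boxtimes p)\lor T(p))\\
&= \square^+ (\neg \square T(p)\lor \square p)\\
&= \square^+ (\neg \square\square p\lor \square p).
\end{align*}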
The translation $T$ above was originally proposed by \citet{KuznetsovMuravitsky1986OSLAFoPLE}, and is systematically studied in \cite{WolterZakharyaschev1997IMLAFoCBL,WolterZakharyaschev1997OtRbIaCML}. Our presentation contains a revised clause for the case of $T(\boxtimes \varphi)$, which was originally defined as
\[T(\boxtimes \varphi):=\square^+\square T(\varphi).\]
However, it is not difficult to verify that $\mathsf{Mag}\models \square p\leftrightarrow \square^+\square p$, which justifies our revised clause.
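Spelled out algebraically, the verification amounts to a single line: since the transitivity inequality $\square a\leq \square\square a$ is derivable from the Löb identity, every Magari algebra satisfies
\[\square^+\square a=\square\square a\land \square a=\square a.\]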
As usual, we extend the translation $T$ from terms to rules by setting
\[T(\Gamma/\Delta):=T[\Gamma]/T[\Delta].\]
The following key lemma describes the semantic behaviour of $T(\cdot)$ in terms of the map $\rho$.
\begin{lemma}
For every $\mathfrak{A}\in \mathsf{Mag}$ and msi rule $\Gamma/\Delta$, \label{translationmsiskeleton}
\[\mathfrak{A}\models T(\Gamma/\Delta)\iff \rho\mathfrak{A}\models \Gamma/\Delta.\]
\end{lemma}
\begin{proof}
A simple induction on structure shows that for every msi term $\varphi$, every $\mathtt{GL}$-space $\mathfrak{X}$, every valuation $V$ on $\mathfrak{X}$ and every point $x\in X$ we have
\[\mathfrak{X}, V, x\models T(\varphi)\iff \rho\mathfrak{X}, \rho[V], \rho(x)\models\varphi.\]
Using this equivalence and noting that every valuation $V$ on some $\mathtt{KM}$-space $\rho\mathfrak{X}$ can be seen as of the form $\rho[V']$ for some valuation $V'$ on $\mathfrak{X}$, the rest of the proof is easy.
\end{proof}
\subsubsection{The Kuznetsov-Muravitsky Theorem}\label{sec:kmiso}
We are now ready to state and prove the main result of the present section.
Extend the mappings $\sigma:\mathsf{Frt}\to \mathsf{Mag}$ and $\rho:\mathsf{Mag}\to \mathsf{Frt}$ by setting
\begin{align*}
\sigma&:\mathbf{Uni}(\mathsf{Frt})\to \mathbf{Uni}(\mathsf{Mag}) & \rho&:\mathbf{Uni}(\mathsf{Mag})\to \mathbf{Uni}(\mathsf{Frt}) \\
\mathcal{U}&\mapsto \mathsf{Uni}\{\sigma \mathfrak{H}:\mathfrak{H}\in \mathcal{U}\} & \mathcal{W}&\mapsto \{\rho\mathfrak{A}:\mathfrak{A}\in \mathcal{W}\}.
\end{align*}
Now define the following two syntactic counterparts to $\sigma, \rho$ between $\mathbf{NExt}(\mathtt{KM_R})$ and $\mathbf{NExt}(\mathtt{GL_R})$.
\begin{align*}
\sigma&:\mathbf{NExt}(\mathtt{KM_R})\to \mathbf{NExt}(\mathtt{GL_R}) & \rho &:\mathbf{NExt}(\mathtt{GL_R}) \to \mathbf{NExt}(\mathtt{KM_R}) \\
\mathtt{L}&\mapsto\mathtt{GL_R}\oplus\{T(\Gamma/\Delta):\Gamma/\Delta\in \mathtt{L}\} & \mathtt{M}&\mapsto \{\Gamma/\Delta:T(\Gamma/\Delta)\in \mathtt{M}\}
\end{align*}
These maps easily extend to lattices of logics, by setting:
\begin{align*}
\sigma&:\mathbf{NExt}(\mathtt{KM})\to \mathbf{NExt}(\mathtt{GL}) & \rho &:\mathbf{NExt}(\mathtt{GL}) \to \mathbf{NExt}(\mathtt{KM}) \\
\mathtt{L}&\mapsto \mathsf{Taut}(\sigma\mathtt{L_R})=\mathtt{GL}\oplus\{T(\varphi):\varphi\in \mathtt{L}\} & \mathtt{M}&\mapsto \mathsf{Taut}(\rho\mathtt{M_R})=\{\varphi:T(\varphi)\in \mathtt{M}\}
\end{align*}
The goal of this subsection is to establish the following result using pre-stable canonical rules.
\begin{theorem}[Kuznetsov-Muravitsky theorem]
The following conditions hold: \label{KMiso}
\begin{enumerate}
\item $\sigma:\mathbf{NExt}(\mathtt{KM_R})\to \mathbf{NExt}(\mathtt{GL_R})$ and $\rho:\mathbf{NExt}(\mathtt{GL_R}) \to \mathbf{NExt}(\mathtt{KM_R})$ are mutually inverse complete lattice isomorphisms.\label{KMisouni}
\item $\sigma:\mathbf{NExt}(\mathtt{KM})\to \mathbf{NExt}(\mathtt{GL})$ and $\rho:\mathbf{NExt}(\mathtt{GL}) \to \mathbf{NExt}(\mathtt{KM})$ are mutually inverse complete lattice isomorphisms.\label{KMisovar}
\end{enumerate}
\end{theorem}
Similarly to the previous sections, the main difficulty to overcome here consists in showing that $\sigma: \mathbf{NExt}(\mathtt{KM_R})\to \mathbf{NExt}(\mathtt{GL_R})$ is surjective. We approach this problem by applying our pre-stable canonical rules, following a similar blueprint as that used in the previous sections. The following lemma is a counterpart of \Cref{mainlemma-simod}. Its proof is similar to the latter's, thanks to the similarities between $\mathtt{GRZ}$- and $\mathtt{GL}$-spaces.
\begin{lemma}
Let $\mathfrak{A}\in \mathsf{Mag}$. Then for every modal rule $\Gamma/\Delta$ we have $\mathfrak{A}\models\Gamma/\Delta$ iff $\sigma\rho\mathfrak{A}\models \Gamma/\Delta$.\label{mainlemma-msimod}
\end{lemma}
\begin{proof}
$(\Rightarrow)$ This direction follows from the fact that $\sigma\rho\mathfrak{A}\rightarrowtail\mathfrak{A}$ (\Cref{representationFrtMag}).
$(\Leftarrow)$ We prove the dual statement that $\mathfrak{A}_*\nvDash \Gamma/\Delta$ implies $(\sigma\rho\mathfrak{A})_*\nvDash \Gamma/\Delta$. Let $\mathfrak{X}:=\mathfrak{A}_*$. In view of \Cref{axiomGLscr} it suffices to consider the case $\Gamma/\Delta=\pscrmod{B}{D}$, for $\mathfrak{B}\in \mathsf{K4}$ finite. So suppose $\mathfrak{X}\nvDash \pscrmod{B}{D}$ and let $\mathfrak{F}:=\mathfrak{B}_*$. Then there is a pre-stable map $f:\mathfrak{X}\to \mathfrak{F}$ satisfying the BDC for $(\mathfrak{D}^{\square^+}, \mathfrak{D}^\square)$. We construct a pre-stable map $g:\sigma\rho\mathfrak{X}\to \mathfrak{F}$ which also satisfies the BDC for $(\mathfrak{D}^{\square^+}, \mathfrak{D}^\square)$.
Let $C$ be a cluster in $\mathfrak{F}$. Consider $Z_C:=f^{-1}(C)$. As $f$ is continuous, $Z_C$ is clopen. Moreover, since $f$ is pre-stable, $Z_C$ does not cut any cluster. It follows that $\rho[Z_C]$ is clopen in $\rho\mathfrak{X}$, because $\rho \mathfrak{X}$ has the quotient topology.
Enumerate $C:=\{x_1, \ldots, x_n\}$. Then $f^{-1}(x_i)\subseteq Z_C$ is clopen. By \Cref{propgl1}, we have that $M_i:=\mathit{max}_R(f^{-1}(x_i))$ is clopen. Furthermore, as every element of $M_i$ is maximal in $M_i$, by \Cref{propgl1} again we have that $M_i$ does not cut any cluster. Therefore $\rho[M_i]$ is clopen, because $\rho\mathfrak{X}$ has the quotient topology. Clearly, $\rho[M_i]\cap \rho[M_j]=\varnothing$ for each $i\neq j$. Therefore there are disjoint clopens $U_1, \ldots, U_n$ with $\rho[M_i]\subseteq U_i$ and $\bigcup_i U_i=\rho[Z_C]$. Just take $U_i:=\rho[M_i]$ if $i\neq n$, and \[U_n:=\rho[Z_C]\smallsetminus\left( \bigcup_{i<n} U_i\right).\]
Now define
\[g_C: \rho[Z_C]\to C\]
\[g_C(z)=x_i\iff z\in U_i\]
Note that $g_C$ is evidently relation preserving, and continuous by construction. Finally, define $g: \sigma\rho\mathfrak{X}\to \mathfrak{F}$ by setting
\[g(\rho(z)):=\begin{cases}
f(z)&\text{ if } f(z)\text{ does not belong to any proper cluster }\\
g_C(\rho(z))&\text{ if }f(z)\in C\text{ for some proper cluster }C\subseteq F
\end{cases}\]
Now, $g$ is evidently pre-stable. Moreover, it is continuous because both $f$ and each $g_C$ are. Let us check that $g$ satisfies the BDC for $(\mathfrak{D}^{\square^+}, \mathfrak{D}^\square)$.
\begin{itemize}
\item (BDC$^{\square^+}$) This may be shown reasoning the same way as in the proof of \Cref{mainlemma-simod}.
\item (BDC$^\square$-back) Let $\mathfrak{d}\in \mathfrak{D}^\square$ and $\rho(x)\in \rho[X]$. Suppose that $R[g(\rho(x))]\cap \mathfrak{d} \neq\varnothing$. Let $U:=f^{-1}(f(x))$. Then $x\in U$, so by \Cref{propgl1} either $x\in \mathit{max}_{R}(U)$ or there exists $x'\in \mathit{max}_R(U)$ such that $R xx'$. We consider the former case only, the latter is analogous. Since $x\in \mathit{max}_{R}(U)$, by construction we have
$g(\rho(x))=f(x)$. Thus $R[f(x)]\cap \mathfrak{d}\neq\varnothing$. Since $f$ satisfies the BDC for $\mathfrak{d}$, it follows that there is $y\in X$ such that $Rxy$ and $f(y)\in \mathfrak{d}$. As $x\in \mathit{max}_R(U)$ we must have $f(x)\neq f(y)$. Now let $V:=f^{-1}(f(y))$. As $y\in V$, by \Cref{propgl1} either $y\in \mathit{max}_R(V)$ or there exists some $y'\in \mathit{max}_R(V)$ such that $Ryy'$. Wlog, suppose the former. Consequently, $f(y)=g(\rho(y))$. But then we have shown that $R \rho(x)\rho(y)$ and $g(\rho(y))\in \mathfrak{d}$, i.e. $g[R[\rho(x)]]\cap \mathfrak{d}\neq\varnothing$.
\item (BDC$^\square$-forth) Let $\mathfrak{d}\in \mathfrak{D}^\square$ and $\rho(x)\in \rho[X]$. Suppose that $g[R[\rho(x)]]\cap \mathfrak{d}\neq\varnothing$. Observe that $g[R[\rho(x)]]\cap \mathfrak{d}\neq\varnothing$ is equivalent to $R[\rho(x)]\cap g^{-1}(\mathfrak{d})\neq\varnothing$. Therefore there is some $y\in \mathfrak{d}$ such that $R[\rho(x)]\cap g^{-1}(y)\neq\varnothing$. By \Cref{propgl1} there is $z\in \mathit{max}_{R}(g^{-1}(y))$ with $R_\rho \rho(x)\rho(z)$. Observe that since $g$ is pre-stable, $R^+g(\rho(x)) g(\rho(z))$, whence if $g(\rho(x))\neq g(\rho(z))$ in turn $Rg(\rho(x)) g(\rho(z))$ and we are done. So suppose otherwise that $g(\rho(x))= g(\rho(z))$. We distinguish two cases.
\begin{itemize}
\item\emph{Case 1}: $y\notin R[y]$. Then $y$ cannot belong to a proper cluster, so by construction $f(x)=g(\rho(x))$ and $f(z)=g(\rho(z))$. From $R \rho(x)\rho(z)$ it follows that $Rxz$, whence $R[x]\cap f^{-1}(\mathfrak{d})\neq\varnothing$. Since $f$ satisfies the BDC$^\square$-forth for $\mathfrak{d}$, there must be some $u\in \mathfrak{d}$ with $R f(x)u$. Then also $Rg(\rho(x)) u$, i.e. $R[g(\rho(x))]\cap \mathfrak{d}\neq\varnothing$ as desired.
\item \emph{Case 2}: $y\in R[y]$. But then $R g(\rho(x)) y$. This shows $R[g(\rho(x))]\cap \mathfrak{d}\neq\varnothing$ as desired.
\end{itemize}
\end{itemize}
\end{proof}
\begin{proposition}
Every universal class $\mathcal{U}\in \mathbf{Uni}(\mathsf{Mag})$ is generated by its skeletal elements, i.e., $\mathcal{U}=\sigma \rho\mathcal{U}$.\label{uniglgeneratedskel}
\end{proposition}
\begin{proof}
Analogous to \Cref{unigrzgeneratedskel}, but applying \Cref{mainlemma-msimod} instead of \Cref{mainlemma-simod}.
\end{proof}
We now apply \Cref{mainlemma-msimod} to characterise the maps $\sigma:\mathbf{NExt}(\mathtt{KM_R})\to \mathbf{NExt}(\mathtt{GL_R})$ and $\rho:\mathbf{NExt}(\mathtt{GL_R}) \to \mathbf{NExt}(\mathtt{KM_R})$ in terms of their semantic counterparts.
\begin{lemma}
For each $\mathtt{L}\in \mathbf{Ext}(\mathtt{KM_R})$ and $\mathtt{M}\in \mathbf{NExt}(\mathtt{GL_R})$, the following hold:\label{msimapscommute}
\begin{align}
\mathsf{Alg}(\sigma\mathtt{L})&=\sigma \mathsf{Alg}(\mathtt{L})\label{msimapscommute1}\\
\mathsf{Alg}(\rho\mathtt{M})&=\rho \mathsf{Alg}(\mathtt{M})\label{msimapscommute2}
\end{align}
\end{lemma}
\begin{proof}
(\ref{msimapscommute1}) By \Cref{uniglgeneratedskel} it suffices to show that $\mathsf{Alg}(\sigma\mathtt{L})$ and $\sigma \mathsf{Alg}(\mathtt{L})$ have the same skeletal elements. So let $\mathfrak{A}=\sigma\rho\mathfrak{A}\in \mathsf{Mag}$. Assume $\mathfrak{A}\in\sigma \mathsf{Alg}(\mathtt{L})$. Since $\sigma \mathsf{Alg}(\mathtt{L})$ is generated by $\{\sigma\mathfrak{B}:\mathfrak{B}\in \mathsf{Alg}(\mathtt{L})\}$ as a universal class, by \Cref{representationFrtMag} and \Cref{translationmsiskeleton} we have $\mathfrak{A}\models T(\Gamma/\Delta)$ for every $\Gamma/\Delta\in \mathtt{L}$. But then $\mathfrak{A}\in \mathsf{Alg}(\sigma\mathtt{L})$. Conversely, assume $\mathfrak{A}\in \mathsf{Alg}(\sigma\mathtt{L})$. Then $\mathfrak{A}\models T(\Gamma/\Delta)$ for every $\Gamma/\Delta\in \mathtt{L}$. By \Cref{translationmsiskeleton} this is equivalent to $\rho\mathfrak{A}\in \mathsf{Alg}(\mathtt{L})$, therefore $\sigma\rho\mathfrak{A}=\mathfrak{A}\in \sigma\mathsf{Alg}(\mathtt{L})$.
(\ref{msimapscommute2}) Let $\mathfrak{H}\in \mathsf{Frt}$. If $\mathfrak{H}\in \rho \mathsf{Alg}(\mathtt{M})$ then $\mathfrak{H}=\rho \mathfrak{A}$ for some $\mathfrak{A}\in \mathsf{Alg}(\mathtt{M})$. It follows that for every rule $T(\Gamma/\Delta)\in \mathtt{M}$ we have $\mathfrak{A}\models T(\Gamma/\Delta)$, and so by \Cref{translationmsiskeleton} in turn $\mathfrak{H}\models\Gamma/\Delta$. Therefore indeed $\mathfrak{H}\in \mathsf{Alg}(\rho\mathtt{M})$. Conversely, for all rules $\Gamma/\Delta$, if $\rho\mathsf{Alg}(\mathtt{M})\models \Gamma/\Delta$ then by \Cref{translationmsiskeleton} $\mathsf{Alg}(\mathtt{M})\models T(\Gamma/\Delta)$, hence $\Gamma/\Delta\in \rho\mathtt{M}$. Thus $\mathsf{ThR}(\rho\mathsf{Alg}(\mathtt{M}))\subseteq \rho\mathtt{M}$, and so $\mathsf{Alg}(\rho\mathtt{M})\subseteq \rho\mathsf{Alg}(\mathtt{M})$.
\end{proof}
We are now ready to prove the main result of this section.
\begingroup
\def\ref{KMiso}{\ref{KMiso}}
\begin{theorem}[Kuznetsov-Muravitsky theorem]
The following conditions hold:
\begin{enumerate}
\item $\sigma:\mathbf{NExt}(\mathtt{KM_R})\to \mathbf{NExt}(\mathtt{GL_R})$ and $\rho:\mathbf{NExt}(\mathtt{GL_R}) \to \mathbf{NExt}(\mathtt{KM_R})$ are mutually inverse complete lattice isomorphisms.\label{KMisouni}
\item $\sigma:\mathbf{NExt}(\mathtt{KM})\to \mathbf{NExt}(\mathtt{GL})$ and $\rho:\mathbf{NExt}(\mathtt{GL}) \to \mathbf{NExt}(\mathtt{KM})$ are mutually inverse complete lattice isomorphisms.\label{KMisovar}
\end{enumerate}
\end{theorem}
\addtocounter{theorem}{-1}
\endgroup
\begin{proof}
(\ref{KMisouni}) It suffices to show that the two mappings $\sigma: \mathbf{Uni}(\mathsf{Frt})\to \mathbf{Uni}(\mathsf{Mag})$ and $\rho:\mathbf{Uni}(\mathsf{Mag})\to \mathbf{Uni}(\mathsf{Frt})$ are complete lattice isomorphisms and mutual inverses. Both maps are evidently order preserving, and preservation of infinite joins is an easy consequence of \Cref{translationmsiskeleton}.
Let $\mathcal{U}\in \mathbf{Uni}(\mathsf{Mag})$. Then $\mathcal{U}=\sigma\rho\mathcal{U}$ by \Cref{uniglgeneratedskel}, so $\sigma$ is surjective and a left inverse of $\rho$. Now let $\mathcal{U}\in \mathbf{Uni}(\mathsf{Frt})$. It follows immediately from \Cref{representationFrtMag} that $\rho\sigma \mathcal{U}=\mathcal{U}$. Therefore $\rho$ is surjective and a left inverse of $\sigma$. But then $\sigma$ and $\rho$ are mutual inverses, whence both are bijections.
(\ref{KMisovar}) Follows immediately from \Cref{KMisouni} and \Cref{deductivesystemisomorphismmsi}.
\end{proof}
\addcontentsline{toc}{section}{Conclusions and Future Work}
\section{Conclusions and Future Work}
\label{ch:4}
This paper presented a novel approach to the study of modal companions and related notions based on stable canonical rules. We hope to have shown that our method is effective and quite uniform. With only minor adaptations to a fixed collection of techniques, we provided a unified treatment of the theories of modal and tense companions, and of the Kuznetsov-Muravitsky isomorphism. We both offered alternative proofs of classic theorems and established new results.
The techniques presented in this paper are based on a blueprint easily applicable across signatures. Stable canonical rules can be formulated for any class of algebras which admits a locally finite expandable reduct in the sense of \cite[Ch. 5]{Ilin2018FRLoSNCL}, and once stable canonical rules are available there is a clear recipe for adapting our strategy to the case at hand. We propose that further research be done in this direction, in particular addressing the following topics.
Firstly, for reasons of space we have not addressed the full theory of modal companions of msi deductive systems, as developed in \cite{WolterZakharyaschev1997IMLAFoCBL,WolterZakharyaschev1997OtRbIaCML}. We conjecture that our techniques can recover several of the main known results in this area, and generalise them to rule systems. We hope that further work will confirm this.
Secondly, \citet{GrootEtAl2021GMTaBEfHLI} recently proved an analogue of the Blok-Esakia theorem for extensions of the \emph{Heyting-Lemmon logic}, which expands superintuitionistic logic with a strict implication connective. Our techniques could be applied to generalise this result to rule systems, and more generally to develop a rich theory of modal companions of deductive systems over the Heyting-Lemmon logic.
Thirdly, \citet{Goldblatt1974SAoO} formulated a Gödel-style translation giving a full and faithful embedding of the propositional logic $\mathtt{Ort}$ of all \emph{ortholattices} into the \emph{Brouwerian modal logic} $\mathtt{B}=\mathtt{K}\oplus \square p\to p\oplus p\to \square\lozenge p$. To the best of our knowledge, the theory of modal companions of extensions of $\mathtt{Ort}$ (which include quantum logics) has not been developed, and in particular it is unknown whether Goldblatt's translation gives rise to an analogue of the Blok-Esakia theorem. If a suitable expandable locally finite reduct of ortholattices can be found, stable canonical rules for rule systems over $\mathtt{Ort}$ can be developed, and a clear strategy for attacking the aforementioned problem becomes available.
\addcontentsline{toc}{section}{Bibliography}
\end{document} |
\begin{document}
\title{Quantum information processing using strongly-dipolar coupled nuclear spins}
\author{T. S. Mahesh and Dieter Suter}
\address{Department of Physics, University of Dortmund, 44221 Dortmund, Germany}
\date{\today}
\begin{abstract}
{Dipolar coupled homonuclear spins present challenging, yet useful
systems for quantum information processing.
In such systems, the eigenbasis of the system Hamiltonian
is the appropriate computational basis and coherent control can
be achieved by specially designed strongly modulating pulses.
In this letter we
describe the first experimental implementation of the quantum
algorithm for numerical gradient estimation on the eigenbasis of
a four-spin system.}
\end{abstract}
\pacs{02.30.Yy, 03.67.Lx, 61.30.-v, 82.56.-b}
\keywords{Quantum information processing, nuclear magnetic resonance,
partially oriented molecules, strongly modulating pulses, numerical gradient estimation}
\maketitle
An important issue in experimental quantum information processing (QIP) is
achieving coherent control while increasing the number of qubits
\cite{divincenzo,corychuangrev}.
In all existing implementations of quantum computation, the execution time
is limited by durations of nonlocal gates.
In the case of nuclear magnetic resonance (NMR) implementations of QIP,
where qubits are formed by mutually coupled spin-1/2 nuclei,
most of the existing implementations used isotropic liquids.
In these systems, the durations of nonlocal gates are limited by the strength
of (indirect) scalar couplings, which typically are of the order of 10-100 Hz.
Apart from the scalar couplings, nuclear spins also interact through magnetic
dipole-dipole couplings, which are 2-3 orders of magnitude stronger than the
scalar couplings, and would therefore allow significantly faster gate operations.
In the case of isotropic liquids, the rapid molecular reorientation averages
the dipolar interactions to zero.
On the other hand, in oriented systems, the dipolar
interactions survive and are therefore potentially better candidates for NMR-QIP
\cite{cory3qss}.
The Hamiltonian for a dipolar coupled $n$-spin system is,
\begin{eqnarray}
{\cal H} = \sum_{j=1}^n \hbar \omega_j I^j_z +
\sum_{1 \le k < j \le n} 2\pi\hbar {\mathrm D}_{jk}
(3{\mathrm{I}}_z^j {\mathrm{I}}_z^k -
{\mathrm{\bf I}}^j \cdot {\mathrm{\bf I}}^k)
\end{eqnarray}
where $\omega_j$ are the chemical shifts, ${\mathrm D}_{jk}$
are the dipolar coupling constants and
$I_z^j$ are z-components of the spin angular
momentum operators {\bf I}$^j$ \cite{ernstbook}.
The first term corresponds to the Zeeman interaction and the
second term describes the dipolar interaction.
When $\vert \mathrm{D}_{jk} \vert \ll \vert \omega_j - \omega_k \vert$,
one usually invokes the `weak-coupling' approximation \cite{ernstbook,levittbook},
where one ignores the off-diagonal parts of the Hamiltonian.
Then the Zeeman product states are
the eigenstates of the full Hamiltonian and
individual spins can be conveniently treated as qubits.
Most of the
experiments in NMR-QIP have so far been carried out on such systems
\cite{corychuangrev}.
However to maximize the
execution speed of our quantum processor it is desirable to
use stronger couplings, including
$\vert \mathrm{D}_{jk} \vert \ge \vert \omega_j - \omega_k \vert$.
In this case, we have to use the complete Hamiltonian (1), where
the Zeeman and the coupling parts do not commute and not all eigenstates
are Zeeman product states but linear combinations of them.
Unlike in the case of a weakly coupled spin system,
individual spins of a strongly coupled system lose addressability.
In such cases the eigenstates of the full Hamiltonian can
form individually controllable subsystems and then the eigenbasis
becomes the natural and accurate choice for the computational basis.
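To make the strong-coupling statement concrete, the following numerical sketch (our own illustration, with hypothetical parameter values) builds the two-spin version of Hamiltonian (1) and diagonalizes it; when $\vert {\mathrm D}_{12} \vert \gtrsim \vert \omega_1 - \omega_2 \vert$ the middle eigenstates are strong mixtures of the Zeeman product states $\vert 01 \rangle$ and $\vert 10 \rangle$:

```python
import numpy as np

# Single-spin operators (hbar = 1)
Iz = np.diag([0.5, -0.5]).astype(complex)
Ix = np.array([[0, 0.5], [0.5, 0]], dtype=complex)
Iy = np.array([[0, -0.5j], [0.5j, 0]])
E = np.eye(2, dtype=complex)

# Hypothetical shifts and dipolar coupling (Hz); strong coupling: |D| > |w1 - w2|
w1, w2, D = 100.0, -100.0, 500.0

# Two-spin embeddings
Iz1, Iz2 = np.kron(Iz, E), np.kron(E, Iz)
Ix1, Ix2 = np.kron(Ix, E), np.kron(E, Ix)
Iy1, Iy2 = np.kron(Iy, E), np.kron(E, Iy)

IdotI = Ix1 @ Ix2 + Iy1 @ Iy2 + Iz1 @ Iz2
H = 2 * np.pi * (w1 * Iz1 + w2 * Iz2 + D * (3 * Iz1 @ Iz2 - IdotI))

evals, evecs = np.linalg.eigh(H)
# Each column of evecs is an eigenstate; the strongly mixed ones have no
# dominant computational-basis component.
for v in evecs.T:
    print(np.round(np.abs(v) ** 2, 3))
```

With these parameters the two eigenstates in the zero-quantum manifold carry at most about 69\% of their weight on any single product state, so treating the individual spins as qubits is no longer appropriate.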
Though strongly dipolar coupled spin-systems have been suggested for QIP earlier
\cite{maheshCS}, so far it has not been possible to implement general unitary
gate operations that can be applied to arbitrary initial conditions.
Here we show how quantum computation can be realized on the eigenbasis
by efficient coherent control techniques.
As a specific system, we use a strongly dipolar coupled four-spin
system partially oriented in a liquid crystal. Such systems have certain key merits.
Unlike in the liquid state systems, the intramolecular dipolar couplings are not
averaged out completely, but are only scaled down by the order parameter of the
solute molecules oriented in the liquid crystal \cite{diehlvol6,emsleybook}.
Since intermolecular interactions are averaged out (unlike the crystalline systems),
liquid crystalline systems provide well defined quantum registers
with low decoherence rates.
We achieve high-fidelity coherent control with the help of specially designed strongly modulating pulses (SMPs).
To demonstrate these techniques, we use an interesting algorithm
suggested by S. P. Jordan \cite{spjordan}:
the quantum algorithm for numerical gradient estimation (QNGE).
Before we discuss our implementation, we summarize
the theoretical description of QNGE \cite{spjordan}.
The gradient of a one-dimensional real function $f$ over a small real range
$l$ is written as $\nabla f = \{f(l/2) - f(-l/2)\}/l$.
Thus classically two function evaluations are necessary to estimate the
gradient of a one-dimensional function and a minimum of $d+1$ function evaluations
are necessary for a $d$-dimensional function. On the other hand, QNGE requires only one
function evaluation independent of the dimension of the function.
Here the
function $f$ is encoded in an $n$-qubit input register.
An ancilla
register is also required whose size ($n_0$ qubits) depends on
the maximum possible value of the gradient.
In QIP, numbers are represented in binary form.
To encode the real number $x$ in the input register,
we have to convert it into a nonnegative integer $\delta \in \{0,1,\cdots, N-1\}$,
where $N = 2^n$.
The encoding is defined by
\begin{eqnarray}
x = \frac{l}{N-1}\left( \delta - \frac{N-1}{2} \right).
\label{eqnforx}
\end{eqnarray}
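For instance (a quick numerical check, in our own notation), with $n=2$ input qubits and $l=1$ the encoding maps $\delta \in \{0,1,2,3\}$ onto the grid $x \in \{-1/2, -1/6, 1/6, 1/2\}$ used in the experiment below:

```python
N, l = 4, 1.0      # n = 2 input qubits, so N = 2**n = 4
xs = [l / (N - 1) * (d - (N - 1) / 2) for d in range(N)]
print(xs)          # approximately [-1/2, -1/6, 1/6, 1/2]
```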
The circuit diagram for the quantum algorithm is shown in Figure
\ref{gecircuit}.
\begin{figure}
\caption{Circuit diagram for the quantum algorithm for numerical gradient estimation (QNGE).}
\label{gecircuit}
\end{figure}
Initially the ancilla register ($an$) is set to 1 and the input register
($in$) is set to 0. The inverse quantum Fourier transform (IQFT)
prepares a plane wave state on the ancilla register and
the Hadamard transform (H) prepares a uniform superposition
on the input register \cite{chuangbook}:
\begin{eqnarray}
&&\underbrace{\vert 00 \cdots 01 \rangle}_{an}
\underbrace{\vert 00 \cdots 00 \rangle}_{in}
\stackrel{\mathrm{IQFT}^{an}}{\longrightarrow}
\stackrel{\mathrm{H}^{in}}{\longrightarrow} \nonumber \\
&&\hspace{2cm}\frac{1}{\sqrt{N_0}}
\sum_{k=0}^{N_0-1} e^{-i\frac{2 \pi k}{N_0}} \vert k \rangle
\frac{1}{\sqrt{N}}\sum_{\delta = 0}^{N-1}\vert \delta \rangle
\label{afterqft} \nonumber
\end{eqnarray}
The ancilla register
is now in an eigenstate of the addition modulo $N_0$.
On applying oracle
$U_f: \vert an,in \rangle \rightarrow \vert sf(x) \oplus_{N_0} an,in \rangle$,
where $s$ is a scaling factor, the total state becomes
\begin{equation}
\frac{1}{\sqrt{N_0N}}
\sum_{k=0}^{N_0-1} e^{-i\frac{2 \pi k}{N_0}} \vert k \rangle
\sum_{\delta =0}^{N-1} e^{i \frac{2 \pi}{N_0} sf(x)} \vert \delta \rangle
\nonumber.
\label{afteraddition}
\end{equation}
For a small $x$, $f(x) \approx f(0)+x \nabla f$. Substituting for $x$ from equation
(\ref{eqnforx}) and ignoring the global phase,
the input register reduces to,
\begin{eqnarray}
\frac{1}{\sqrt{N}}\sum_{\delta =0}^{N-1}
e^{i \frac{2 \pi}{N_0} \frac{ls \delta}{N-1} \nabla f}
\vert \delta \rangle \nonumber.
\end{eqnarray}
The scaling factor $s$ is set to maximize the precision, i.e.,
\begin{eqnarray}
s = N_0 (N-1)/Nl.
\label{eqnfors}
\end{eqnarray}
Now a phase estimation is carried out with the quantum Fourier transform (QFT)
on the input register
\begin{eqnarray}
\frac{1}{\sqrt{N}}\sum_{\delta =0}^{N-1}
e^{i \frac{2 \pi \delta}{N} \nabla f}
\vert \delta \rangle
\stackrel{\mathrm{QFT}}{\longrightarrow}
\left\vert \nabla f \right\rangle \nonumber.
\label{phaseestim}
\end{eqnarray}
Measuring the input register in the computational basis now gives an estimate
of $\nabla f$.
In our 4-qubit case, we select two qubits as ancillas and the other two as input qubits i.e., $n_0 =n=2$,
$N_0=N = 4$, and $\delta \in
\{\vert 00 \rangle, \vert 01 \rangle, \vert 10 \rangle, \vert 11 \rangle \}$.
From equation (\ref{eqnforx}), $x \in \{-1/2, -1/6, 1/6, 1/2\}$ for $l = 1$.
Let us consider an example function $f(x) \in \{0, 2/3, 4/3, 2\} $,
which has a gradient $\nabla f = 2$. Using equation (\ref{eqnfors}) we obtain
the scaling factor $s=3$
so that $sf(x) \in \{0, 2, 4, 6\}$. For $\delta \in
\{\vert 00 \rangle,\vert 10 \rangle\}$, the addition $\oplus_{N_0}$ acts as the identity.
For $\delta \in \{\vert 01 \rangle,\vert 11 \rangle\}$, it
adds 2 to the ancilla register, i.e., flips the first ancilla qubit. Therefore,
for this example, the oracle is a CNOT($an_1,in_2$) gate i.e.,
a NOT operation on the first ancilla qubit
controlled by the second input qubit.
The propagator for the entire algorithm is therefore,
\begin{eqnarray}
U_{\mathrm{QNGE}} &=& U_{\mathrm{QFT}}^{in} \cdot U_{\mathrm{CNOT}(an_1,in_2)}
\cdot U_{\mathrm{H}}^{in}
\cdot U_{\mathrm{IQFT}}^{an} \nonumber.
\end{eqnarray}
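This propagator can be checked by direct simulation. The following sketch (our own numerical verification, not part of the experiment) runs the $n_0=n=2$ case for the example function above, $f(x)=2x+1$ with $\nabla f = 2$, using a single oracle call:

```python
import numpy as np

N0 = N = 4                          # two ancilla and two input qubits
l = 1.0
s = N0 * (N - 1) / (N * l)          # scaling factor, here s = 3
f = lambda x: 2 * x + 1             # example function: f(x) in {0, 2/3, 4/3, 2}

# IQFT on |01>_an gives the plane-wave state; H on |00>_in the uniform superposition
anc = np.exp(-2j * np.pi * np.arange(N0) / N0) / np.sqrt(N0)
inp = np.ones(N, dtype=complex) / np.sqrt(N)
state = np.outer(anc, inp)          # joint amplitudes, indexed [ancilla, input]

# Oracle U_f: add s*f(x_delta) modulo N0 into the ancilla register
for d in range(N):
    x = l / (N - 1) * (d - (N - 1) / 2)
    state[:, d] = np.roll(state[:, d], int(round(s * f(x))))

# QFT on the input register (sign convention matching the phases above)
QFT = np.exp(-2j * np.pi * np.outer(np.arange(N), np.arange(N)) / N) / np.sqrt(N)
state = state @ QFT.T

probs = np.abs(state) ** 2
print(probs.sum(axis=0).round(6))   # input register sharply peaked at delta = 2
```

The simulated input register ends up entirely in $\vert 10 \rangle$, i.e. $\delta = 2 = \nabla f$, while the ancilla register remains in the plane-wave state, exactly as derived above.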
We design a single SMP corresponding to this entire operation on
the input states.
\begin{figure}
\caption{A single strongly modulating pulse (SMP) implementing the full QNGE algorithm on the four-qubit system.}
\label{smprob}
\end{figure}
An SMP is a cascade of radio-frequency (RF) pulses numerically calculated based on
the precise knowledge of the internal Hamiltonian of the qubit system and the target propagator
\cite{fortunato,navin}.
Given a target operator $U_{\mathrm T}$,
Fortunato et al.\ \cite{fortunato} have described a numerical search for
an SMP propagator $U_{\mathrm{SMP}}$
based on four RF parameters for each segment $k$:
duration ($\tau^{(k)}$), amplitude ($\omega_A^{(k)}$),
phase ($\phi^{(k)}$) and frequency ($\omega_F^{(k)}$).
We found that it is useful to add one more degree of freedom:
after each pulse segment, we introduce a variable delay $\tau_d^{(k)}$.
The delays are computationally inexpensive to optimize, easy and accurate to implement,
and make designing non-local gates easier.
All the parameters are determined so as to maximize the fidelity
\begin{eqnarray}
F = \left\vert \mathrm{trace} \left[ U_{\mathrm T}^{-1} \cdot U_{\mathrm{SMP}} \right] \right\vert /M \nonumber
\end{eqnarray}
where $U_{\mathrm{SMP}}$ is the propagator for the SMP and
$M$ is the dimension of the operators.
A MATLAB package has been developed
which uses the Nelder-Mead simplex algorithm as the maximization routine
\cite{maheshsmp}.
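As a toy version of this optimization (our own self-contained sketch, not the authors' MATLAB package: a crude stochastic hill-climb stands in for the Nelder-Mead simplex routine, and the 500 Hz offset and pulse parameters are hypothetical), one can polish a single one-spin pulse segment toward a target $\pi$ rotation by maximizing $F$:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

offset = 500.0          # hypothetical resonance offset (Hz)
U_T = -1j * sx          # target gate: a pi rotation about x

def propagator(p):
    """Propagator of one RF segment: duration tau (s), amplitude (Hz), phase (rad)."""
    tau, amp, phase = p
    v = np.array([amp * np.cos(phase), amp * np.sin(phase), offset])
    a = np.pi * tau * np.linalg.norm(v)        # half the rotation angle
    n = v / np.linalg.norm(v)
    return np.cos(a) * np.eye(2) - 1j * np.sin(a) * (n[0]*sx + n[1]*sy + n[2]*sz)

def infidelity(p):
    return 1 - abs(np.trace(U_T.conj().T @ propagator(p))) / 2

# Stochastic hill-climb (stand-in for the simplex search); only improving
# steps are accepted, so the fidelity can never fall below its start value.
rng = np.random.default_rng(0)
best = np.array([1e-4, 5000.0, 0.0])           # nominal 100 us pi pulse
best_f = infidelity(best)
scale = np.array([2e-6, 50.0, 0.02])
for _ in range(3000):
    cand = best + rng.normal(size=3) * scale
    fc = infidelity(cand)
    if fc < best_f:
        best, best_f = cand, fc
print(1 - best_f)       # fidelity of the optimized segment
```

In the real problem the propagator of each segment is a $16\times 16$ matrix exponential of the full Hamiltonian, and several segments plus delays are optimized jointly, but the structure of the search is the same.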
The search constraints can be simplified if the input state
is definitely known. The fidelity of an SMP specific to
a known initial state $\rho_{\mathrm{in}}$
can be written as,
\begin{eqnarray}
F' = \frac{\mathrm{trace} \left[ \rho_T \cdot \rho_{\mathrm{SMP}} \right]}
{\sqrt{\mathrm{trace} \left[ \rho_T^2 \right] \cdot \mathrm{trace}
\left[ \rho_{\mathrm{SMP}}^2 \right]}},
\label{inspefid}
\end{eqnarray}
where $\rho_T = U_{\mathrm T} \cdot \rho_{\mathrm{in}} \cdot U_{\mathrm T}^{-1}$ and
$\rho_{\mathrm{SMP}} = U_{\mathrm{SMP}} \cdot \rho_{\mathrm{in}} \cdot U_{\mathrm{SMP}}^{-1}$.
The SMPs are made robust against the spatial inhomogeneous
distributions of RF amplitudes and of static fields ($\Delta \omega_i$)
by maximizing an average fidelity,
$F'_{avg} = \sum_i F'(\omega_{iA},\Delta\omega_i)$ \cite{nicrfi}.
Figure \ref{smprob} shows a single SMP performing the full QNGE
on the specific four-qubit system.
Though it is possible to decompose the target propagator
into several SMPs each corresponding to one or two-qubit gates,
it is more efficient, at least for small spin systems, to
design and execute a single robust SMP implementing the
entire algorithm.
\begin{figure}
\caption{\label{strhameltr}
\label{strhameltr}
\end{figure}
As the quantum register,
we used the four $^1$H spins of 1-Chloro-2-iodobenzene (CIB; Figure \ref{strhameltr})
(purchased from Sigma Aldrich$^{\scriptsize\textregistered}$) oriented in
liquid crystal ZLI-1132 (purchased from Merck$^{\scriptsize\textregistered}$)
forming a 10 mM solution. All experiments were carried out on a 500 MHz
Bruker Avance spectrometer at 300 K.
Figure \ref{ngespec} shows the
$^1$H NMR spectrum of the partially oriented CIB.
The line widths of the various transitions
range from 1.7 Hz to 4.0 Hz indicating coherence times ($T_2^*$ relaxation times)
between 1.8 s and 0.7 s.
The coherence times are sufficiently long to ignore
relaxation effects in the design of the SMP.
The procedure for analyzing the NMR spectra of partially oriented systems has been
well studied \cite{diehlvol6,emsleybook}.
We have developed a numerical
procedure to iteratively determine the system Hamiltonian from
its spectrum and a guess Hamiltonian \cite{maheshsamat}.
The 37 strongest transitions of the CIB spectrum were used and a unique fit was obtained.
The mean frequency and intensity errors
between the experimental and the calculated spectra are less than
0.1 Hz and 6\% respectively.
The elements of the diagonalized system Hamiltonian and
the corresponding energy level diagram
are shown in Figure \ref{strhameltr}.
\begin{figure}
\caption{$^1$H NMR spectrum of the partially oriented CIB. The inset at the center shows the pulse sequence for preparing POPS and the corresponding spectrum.}
\label{ngespec}
\end{figure}
In NMR-QIP, the initial states are not pure states but are pseudopure states
that are isomorphic to pure states.
The pseudopure states differ from the pure states by a uniform
background population on all states.
It is easier, however, to prepare a pair of
pseudopure states (POPS) \cite{Fungpps}.
We prepared the pair
$\vert 0100 \rangle \langle 0100 \vert -
\vert 0000 \rangle \langle 0000 \vert$.
The first term represents the desired initial state; the second term does not interfere with
the QNGE experiment,
because the operation IQFT,
when acted on $\vert 00 \rangle$, creates a uniform superposition
of the ancilla qubits.
Such a state is invariant under the $f(x) \oplus_{N_0}$ operation
and therefore the output state corresponding to $\vert 0000 \rangle \langle 0000 \vert$
is independent of the oracle $U_f$.
The inset at the center of Figure \ref{ngespec} shows the pulse sequence
for preparing POPS and the corresponding spectrum. The POPS spectrum
is obtained by applying a linear detection pulse after inverting transition 2
and subtracting the linearly detected spectrum of the equilibrium state
from the resulting spectrum.
\begin{figure}
\caption{Results of the diagonal tomography.}
\label{barplots}
\end{figure}
Since we use the eigenbasis as the computational basis, a projective
measurement onto the computational basis is equivalent to
measuring the populations after destroying the coherences.
First we dephase the coherence by using a pulsed field gradient (PFG).
A PFG does not efficiently dephase homonuclear
zero-quantum coherence. Therefore we use a random delay
(between 0 and 10 ms) after the PFG and average over 32
transients. The populations are then measured using a linear
detection (small flip-angle) pulse \cite{ernstbook}.
Using the normalization condition,
there are 15 unknowns for the 16 eigenstates. The results of the diagonal tomography,
obtained as the mean of three sets, each of 15 linearly independent transitions,
are shown in Figure \ref{barplots}.
After the projective measurement in the eigenbasis at the end of the quantum algorithm,
the input qubits are in the state $\vert 10 \rangle$ encoding the gradient
$\nabla f = 2$,
while the ancilla qubits have equal probability in all possible states. Therefore,
the theoretical output diagonal state of the combined system is
$I_{an} \otimes (\vert 10 \rangle \langle 10 \vert -
\vert 00 \rangle \langle 00 \vert)$,
where $I_{an}$ is the identity operator for the ancilla qubits
and the two parts in the parenthesis correspond to the two parts
of the POPS.
The diagonal
correlation $C$ between the theoretical density matrix ($\rho_{\mathrm{T}}$) and the
experimental density matrix ($\rho_{\mathrm{E}}$), is defined as
\begin{eqnarray}
C = \frac{ \mathrm{trace} [\vert \rho_{\mathrm{T}} \vert
\cdot \vert \rho_{\mathrm{E}}\vert]}
{\sqrt{ \mathrm{trace} [\vert \rho_{\mathrm{T}} \vert ^2]
\cdot \mathrm{trace} [\vert \rho_{\mathrm{E}} \vert ^2]}},
\end{eqnarray}
where $\vert \cdot \vert$ denotes extraction of the diagonal part. The average
diagonal correlations were
0.999 and 0.979 for the POPS and the result of the QNGE, respectively.
The lower value of the latter may be attributed to decoherence and
spectrometer nonlinearities.
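The computation of $C$ is a direct transcription of the formula above (our own sketch; the density matrices used here are synthetic stand-ins, not the measured data):

```python
import numpy as np

def diag_correlation(rho_T, rho_E):
    """Correlation between the diagonal parts of two density matrices."""
    dT = np.diag(rho_T).real
    dE = np.diag(rho_E).real
    return (dT @ dE) / np.sqrt((dT @ dT) * (dE @ dE))

rho_T = np.diag([0.0, 1.0, -1.0, 0.0])     # ideal POPS-like diagonal state
rho_E = np.diag([0.1, 0.9, -0.9, -0.1])    # synthetic "experimental" diagonal
print(diag_correlation(rho_T, rho_T))      # 1.0 for perfect agreement
print(diag_correlation(rho_T, rho_E))      # slightly below 1 for imperfect data
```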
In conclusion, we have demonstrated quantum computation on the eigenbasis
of the system Hamiltonian
using coherent control techniques.
As an example, we described the first implementation of
Jordan's algorithm for numerical gradient estimation.
Compared to the usual approach using weakly coupled systems,
the present method using strongly dipolar coupled systems
yields significantly faster execution times and is therefore less susceptible
to decoherence effects.
From molecular and spectroscopic considerations a combination of homo-
and heteronuclear spins in either liquid crystalline or molecular
single crystalline environments is a natural way to build
larger qubit-systems (with $>10$ qubits).
The coherent control of strongly dipolar coupled systems
then becomes important and the present work is the
first step in this direction.
We believe that the coherent control techniques demonstrated here
for dipolar coupled nuclear spins will turn out to be essential also
for other solid-state implementations of quantum information processing.
\acknowledgments
This work was supported by the Alexander von Humboldt Foundation
and the DFG through grant numbers Su192/19-1 and Su192/11-1.
\references
\bibitem{divincenzo}
D. P. DiVincenzo, Fortschr. Phys. {\bf 48}, 771 (2000).
\bibitem{corychuangrev}
C. Ramanathan, N. Boulant, Z. Chen, D. G. Cory, I. L. Chuang, and M. Steffen,
Quantum Information Processing, {\bf 3}, 15 (2004).
\bibitem{cory3qss}
J. Baugh, O. Moussa, C. A. Ryan, R. Laflamme, C. Ramanathan, T. F. Havel, and D. G. Cory,
Phys. Rev. A {\bf 73}, 022305 (2006).
\bibitem{ernstbook}
R. R. Ernst, G. Bodenhausen, and A. Wokaun,
{\it Principles of
Nuclear Magnetic Resonance in One and Two Dimensions,
Oxford Science Publications}, (1987).
\bibitem{levittbook}
M. H. Levitt, {\it Spin Dynamics, J. Wiley and Sons Ltd.}, 2002.
\bibitem{maheshCS}
T. S. Mahesh, N. Sinha, A. Ghosh, R. Das, N. Suryaprakash,
M. H. Levitt, K. V. Ramanathan, and A. Kumar,
Curr. Sci. {\bf 85}, 932 (2003); also available as
arXiv:quant-ph/0212123.
\bibitem{diehlvol6}
NMR-Basic Principles and Progress,
P. Diehl, H. Kellerhals, and E. Lustig,
Eds. P. Diehl, E. Fluck and R. Kosfeld,
Springer-Verlag, New York, Vol. 6, 1972.
\bibitem{emsleybook}
NMR spectroscopy using liquid crystal solvents,
J. W. Emsley and J. C. Lindon,
Pergamon Press, 1975.
\bibitem{spjordan}
S. P. Jordan, Phys. Rev. Lett. {\bf 95}, 050501 (2005).
\bibitem{chuangbook}
M. A. Nielsen and I. L. Chuang, {\it Quantum Computation and Quantum Information,
Cambridge University Press,} 2002.
\bibitem{fortunato}
E. M. Fortunato, M. A. Pravia, N. Boulant, G. Teklemariam, T. F. Havel and D. G. Cory,
J. Chem. Phys. {\bf 116}, 7599 (2002).
\bibitem{navin}
N. Khaneja, T. Reiss, C. Kehlet, T. Schulte-Herbr\"{u}ggen, and S. J. Glaser,
J. Magn. Reson. {\bf 172}, 296 (2005).
\bibitem{maheshsmp}
T. S. Mahesh and D. Suter, to be published elsewhere.
\bibitem{nicrfi}
N. Boulant, J. Emerson, T. F. Havel, S. Furuta and D. G. Cory, J. Chem. Phys.
{\bf 121}, 2955 (2004).
\bibitem{maheshsamat}
T. S. Mahesh and D. Suter, to be published elsewhere.
\bibitem{Fungpps}
B. M. Fung, Phys. Rev. A. {\bf 63}, 022304 (2001).
\end{document} |
\begin{document}
\title{Knot 4--genus and the rank of classes in $ \boldsymbol{W}({\bf Q}(\boldsymbol{t})) $ }
\author{Charles Livingston }
\thanks{This work was supported in part by NSF-DMS-0707078.}
\address{Charles Livingston: Department of Mathematics, Indiana University, Bloomington, IN 47405 }
\email{[email protected]}
\begin{abstract} The Witt rank $\rho(w)$ of a class $w$ in the Witt group $ W({\bf F})$ of a field with involution ${\bf F}$ is the minimal rank of a representative of the class. In the case of the Witt group of hermitian forms over the rational function field ${\bf Q}(t)$, we define an easily computed invariant $r(w)$ and prove that modulo torsion in the Witt group, $r$ determines $\rho$; more specifically, $\rho(4w) = r(4w)$ for all $w \in W({\bf Q}(t))$. The need to determine the Witt rank arises naturally in the study of the 4--genus of knots; we illustrate the application of our algebraic results to knot theoretic problems, providing examples for which $r$ provides stronger bounds on the 4--genus of a knot than do classical signature bounds or Ozsv\'ath-Szab\'o and Rasmussen-Khovanov bounds. \end{abstract}
\maketitle
\section{Introduction.}
For a knot $K \subset S^3$, the 4--genus of $K$, $g_4(K)$, is the minimum genus of a smoothly embedded surface in $B^4$ bounded by $K$. Although the study of this invariant has been a focus of knot theoretic research for over 50 years, it remains an intractable invariant to compute; for instance, the determination of the 4--genus for knots with 10 or fewer crossings has only recently been completed, with even the computation for individual knots being the subject of papers, for instance~\cite{ka}.
The depth of continuing interest in the 4--genus is indicated by the application of the deepest tools now available in low-dimensional topology: Kronheimer-Mrowka's study of 4-dimensional gauge theory~\cite{km}, Ozsv\'ath-Szab\'o's development of Heegaard-Floer theory~\cite{os1}, and Rasmussen's work on Khovanov homology~\cite{ra}, have each been used to establish Milnor's conjecture that for torus knots $T_{p,q}$, $g_4(T_{p,q}) = (|p|-1)(|q|-1) /2$.
Work in the 1960s identified the central role of algebraically defined Witt groups to understanding the 4--genus. As we will review in a brief appendix, to each knot $K$ there is naturally associated a Witt class, $w_K \in W({\bf Q}(t))$, the Witt group of hermitian forms over the rational function field, having involution induced by $t \to t^{-1}$. (Here $w_K $ is represented by the matrix $(1-t)V_K + (1-t^{-1})V_K^t$, where $V_K$ is an integer matrix associated to $K$, the {\it Seifert matrix}.) A fundamental result states $g_4(K) \ge \frac{1}{2}\rho(w_K)$, where $\rho$ is defined as follows:
\begin{definition} For a class $w \in W({\bf F})$, the rank of $w$, $\rho(w)$, is the minimum dimension of a square hermitian matrix representing $w$.
\end{definition}
For a given class $w$, determining $\rho(w)$ can be very difficult and the most effective tools for bounding $\rho(w)$ are based on bounds on signature functions associated to the class $w$. A few of the early papers that applied signatures to the 4--genus are \cite{levine, milnor, murasugi, taylor, tristram, trotter}.
The goal of this paper is to more closely examine the function $\rho(w)$. We define an easily computed invariant $r(w)$ which provides stronger bounds on $\rho(w)$ than were previously known. We then prove that $r(w)$ completely determines $\rho(w)$, modulo torsion in the Witt group. More precisely, our main result states
\vskip.15in
\noindent{\bf Theorem} {\it For all $w \in W({\bf Q}(t))$, $\rho(w) \ge r(w)$ and $\rho(4w) = r(4w)$.}
\vskip.15in
\noindent{\bf Outline} Let $W({\bf F})$ denote the Witt group of nonsingular hermitian bilinear forms over a field ${\bf F}$ with (possibly trivial) involution. In the case of ${\bf F} = {\bf R}$ or ${\bf F} = {\bf C}$ (with involution given by conjugation), a relatively simple exercise shows that $\rho(w) = \sigma(w)$, where $\sigma$ is the signature. For ${\bf F} = {\bf Q}$ the situation is more complicated. The diagonal form $w$ with diagonal $[1, -2]$ is not Witt trivial, and thus $\rho(w) = 2$, but $\sigma(w) = 0$; note however that since $\sigma(w) = 0$, $w$ represents an element of order four in $W({\bf Q})$, and thus $\rho(4w) = 0$. More generally, for $w \in W({\bf Q})$, $\rho(4w) = \sigma(4w)$. Details are presented in Section~\ref{secwq}.
The arguments in Section~\ref{secwq} are fairly basic, but they illustrate the structure of the proof of our main theorem regarding $W({\bf Q}(t))$. In Section~\ref{sectiondefs} we will set the notation to be used throughout the paper and define the function $r$. We will also discuss explicit means for computing $r$.
In Section~\ref{secbounds} we will prove the first part of the main theorem: for $w \in W({\bf Q}(t))$, $\rho(w) \ge r(w)$. Following this we have a realization result, showing in Section~\ref{secrealize} that for any form $w$, there is a form $w'$ having an identical signature function and for which $\rho(w') = r(w')$. Finally, in Section~\ref{sector} it is shown that a class with trivial signature function represents 4--torsion in $W({\bf Q}(t))$. The main theorem is an immediate consequence of this result.
The paper concludes with Section~\ref{sectionnorm} which describes how $\rho$ leads naturally to a norm on $W({\bf Q}(t)) \otimes {\bf Q}$ and then Section~ \ref{sectionapps} presenting examples of the computation of this norm, with specific applications to determining the 4--genus of low-crossing number knots.
\vskip.1in
\noindent{\it Acknowledgments} Thanks are due to Pat Gilmer and Neal Stoltzfus for early discussions regarding this work. Thanks are also due to Jim Davis and Andrew Ranicki for their insights regarding the structure of the Witt group $W({\bf Q}(t))$. Special thanks go to Stefan Friedl for his thoughtful commentary on an earlier version of this paper.
\section{ $ \rho(4w) = \sigma(4w)$ for $w \in W({\bf Q})$. }\label{secwq}
The proof that $\rho(4w) = r(4w)$ for $w\in W({\bf Q}(t))$ is in structure the same as the proof that $\rho(4w) = 4\sigma(w)$ for $w \in W({\bf Q})$. The proof we give here is broken up into three main steps corresponding to Sections~\ref{secbounds}, \ref{secrealize}, and \ref{sector}. A fourth concluding step is identical in both settings.
\begin{theorem}\label{thmmain} If $w \in W({\bf Q})$ then $\rho(4w) = 4\sigma(w)$.
\end{theorem}
\begin{proof} $ $
\begin{enumerate}
\item $\rho(w) \ge \sigma(w)$: For $w \in W({\bf Q})$ this is immediate from the definition of the signature. In the case of $W({\bf Q}(t))$ the corresponding proof will reduce to a careful algebraic calculation, based on the details of the definition of $r(w)$ given in the next section. The argument occupies Section~\ref{secbounds}.
\item If $w \in W({\bf Q})$, there is a class $w' \in W({\bf Q})$ with $\sigma(w) = \sigma(w')$ and $\rho(w') = \sigma(w')$: The form $w'$ is simply the form represented by the identity matrix of dimension $\sigma(w)$. In the case of $W({\bf Q}(t))$, the construction of the appropriate form $w'$ for which $r(w) = r(w') $ and $\rho(w') = r(w')$ is more delicate.
\item If $\sigma(w) = 0$ then $4w = 0 $: This depends on the structure of $W({\bf Q})$. As described for instance in~\cite{mh}, there is a split short exact sequence $$0 \to W({\bf Z}) \to W({\bf Q}) \to \oplus_p W({\bf F}_p),$$ where ${\bf F}_p$ is the finite field with $p$ elements, $p$ a prime integer. The groups $W({\bf F}_p)$ are all 4--torsion, and $W({\bf Z}) \cong {\bf Z}$, with the isomorphism given by the signature. In the case of $W({\bf Q}(t))$, the exact sequence is replaced with the sequence $$0 \to W({\bf Z}[t,t^{-1}]) \to W({\bf Q}(t)) \to \oplus_{\alpha} W({\bf Q}(\alpha)),$$ where the $\alpha$ are all unit complex roots of symmetric irreducible rational polynomials. For the analysis of these Witt groups, we turn to the references~\cite{con, lith, ranicki}.
\item Conclusion: Since the signatures are the same, we have $4w = 4w' \in W({\bf Q})$. Certainly $\rho(4w') \ge 4\sigma(w')$, but by construction, $4w'$ has a representative of rank exactly $4\sigma(w')$. The desired equality follows. The argument in the case of $W({\bf Q}(t))$ is identical.
\end{enumerate}
\end{proof}
\section{Definition of $r(w)$.} \label{sectiondefs}
\begin{definition} For a nonsingular matrix $A$ with entries in ${\bf Q}(t)$ that is hermitian with respect to the involution induced by $t \to t^{-1}$, the signature function $\sigma_A'(t)$ is defined by $\sigma_A'(t) =\text{signature} (A(e^{2\pi i t}))$. This is a well-defined function on $[0,\frac{1}{2})$ except at the finite set of points corresponding to poles of the entries of $A$.
\end{definition}
Since $A$ is hermitian, $\sigma'$ is symmetric about $\frac{1}{2}$; this justifies the restriction to the interval $[0, \frac{1}{2})$. As given in the next definition, taking averages and differences gives the Levine~\cite{levine} and Milnor~\cite{milnor} signature functions. (According to Matumoto~\cite{matumoto} the jumps in the Levine signature function are determined by the Milnor signatures. Notice that the factor of $\frac{1}{2}$ in the definition of $J_\omega$ implies that $J_\omega(t) $ represents half the jump in the signature function at $t$.)
\begin{definition} If $w \in W({\bf Q}(t))$, and $A$ is a matrix representing the class $w$, then:\vskip.05in
\begin{enumerate}
\item $\sigma_w(t) =\frac{1}{2}( \lim_{\tau \downarrow t }\sigma'_A(\tau) + \lim_{\tau \uparrow t }\sigma_A'(\tau))$.\vskip.1in
\item $J_w(t) = \frac{1}{2}( \lim_{\tau \downarrow t }\sigma'_A(\tau) - \lim_{\tau \uparrow t }\sigma_A'(\tau))$.\vskip.1in
\item For $t=0$ we define $\sigma_w(0) = \lim_{\tau \downarrow 0 }\sigma'_A(\tau) $ and $J_w(0) =0$.
\end{enumerate}
\end{definition}
An elementary argument shows that these functions are well defined; that is, $\sigma_w(t)$ and $J_w(t)$ depend only on the class in $W({\bf Q}(t))$ represented by the matrix $A$.\vskip.1in
\noindent{\bf Example.} Figure~\ref{siggraph} illustrates a possible signature function on the interval $[0, \frac{1}{2})$. We will construct a class with signature function having such a graph later, being more specific about the points $\alpha_i$. For the specific matrix used, the values of the signatures at the discontinuities will not be known, but upon averaging, the values will be as shown in the figure. In particular, the values of the jumps at the five discontinuities are $[1, 1, 1, -3, 1]$.
\begin{figure}
\caption{Example of a signature function}
\label{siggraph}
\end{figure}
To define the function $r \colon\thinspace W({\bf Q}(t)) \to {\bf Z}_{\ge 0}$ we need to focus on the set of discontinuities of the signature function and include in that set the value $t = 0$ for technical reasons. That set has a natural decomposition, indexed by symmetric irreducible rational polynomials. Throughout this paper polynomials will be Laurent polynomials $p(t) \in {\bf Q}[t, t^{-1}]$ and symmetric means $p(t) = p(t^{-1})$. We will view these polynomials as defined on the unit complex circle.
\begin{definition} For $w \in W({\bf Q}(t))$, let $T_w$ denote the (finite) set of discontinuities of $\sigma_w(t)$. For each $t \in T_w$, $e^{2 \pi i t}$
is an algebraic number with rational irreducible polynomial. For each symmetric irreducible polynomial $\delta$
let $ T_{w, \delta} $ be the set $\{ t \in T_w\ | \ \delta(e^{2 \pi i t}) = 0\}$.
\end{definition}
Given this we can define $r$.
\begin{definition} Let $w \in W({\bf Q}(t))$.
\begin{enumerate}
\item For each $\delta$ set $r_\delta (w) = \max _{t \in T_{w,\delta}} \{| \sigma_w(t)| \}+ \max_{t \in T_{w,\delta}} \{|J_w(t)| \}.$\vskip.05in
\item $r(w) = \max_\delta \{r_\delta(w), \sigma_w(0)\}.$
\end{enumerate}
\end{definition}
\subsection{Example} Consider the graph of the signature function illustrated in Figure~\ref{siggraph}. We assume that the five jumps occur at the roots of the same irreducible symmetric polynomial. (We will see, as part of a general realization result, Theorem~\ref{thmrealize}, that such an example does occur. For example, the $\alpha_i$ can be primitive $22$nd roots of unity, the zeroes of the cyclotomic polynomial $\phi_{22}(t)$. This is the Alexander polynomial of the torus knot $T_{2,11}$, although this torus knot does not have this signature function.) In this example, the values of the signatures (at the $\alpha_i$, along with the value at 0) are $[0, 1,3, 5, 3, 1]$ and the values of the jumps are $[1, 1, 1, -3, 1]$.
The value of $r$ for this example will be $$ \max\ \{1,3, 5, 3, 1\} + \max \{1, 1, 1, 3, 1\} = 5+3 = 8.$$ Notice that this is greater than the maximum absolute value of the signature function, which is 6.
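The arithmetic in this example is easily mechanized. The following Python sketch simply transcribes the signature and jump values from the example (with all five discontinuities assumed to lie on a single polynomial $\delta$, as in the text) and evaluates the definition of $r$:

```python
# Values read from the example: sigma_w at 0 and at the five
# discontinuities alpha_1..alpha_5, and the half-jumps J_w there.
sigma_values = [0, 1, 3, 5, 3, 1]   # [sigma(0), sigma(alpha_1), ..., sigma(alpha_5)]
jump_values = [1, 1, 1, -3, 1]      # [J(alpha_1), ..., J(alpha_5)]

# r_delta = max |sigma| over T_{w,delta} plus max |J| over T_{w,delta};
# here there is only one delta, so r = max(r_delta, sigma(0)).
r_delta = max(abs(s) for s in sigma_values[1:]) + max(abs(j) for j in jump_values)
r = max(r_delta, sigma_values[0])
print(r)  # 8
```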
\vskip.1in
\noindent{\bf Computations} For any hermitian matrix $A$ representing a class in $W({\bf Q}(t))$, standard mathematical computer packages can be used to diagonalize $A$ and to arrange that the diagonal entries are Laurent polynomials. Factoring these diagonal entries and removing factors of the form $f(t)f(t^{-1})$ ensures that these diagonal entries have factorizations as $ \delta_1(t) \cdots \delta_n(t)$, where the $\delta_i$ are distinct symmetric polynomials ($\delta_i(t^{-1}) = \delta_i(t) $) with exponent one. By symmetry, the values of the $\delta_i$ at points on the unit circle are real, and thus the signs can be determined (that is, the numerical approximation of $\delta_i(e^{i \theta})$ will be given as $a + \epsilon i$ for some small $\epsilon$, which can be ignored in the determination of the sign). With this, the signature function can be approximated with the necessary accuracy. At the same time, the roots of the $\delta_i$ on the unit circle will be identified. These computations are sufficient to completely determine the value of $r([A])$.
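A minimal sketch of this sign-counting procedure, using the symmetric polynomial $\delta_6(t) = t^{-1} - 1 + t$ (which reappears in Section~\ref{sectionapps}) on a hypothetical diagonal $[\delta_6, 1]$; the function names here are illustrative, not from any package:

```python
import cmath

def delta6(z):
    # delta_6(t) = t^{-1} - 1 + t; real on |z| = 1 since the polynomial is symmetric.
    return (1 / z - 1 + z).real

def signature(diagonal, t):
    """Signature of a diagonal form evaluated at e^{2 pi i t}: count signs."""
    z = cmath.exp(2j * cmath.pi * t)
    return sum(1 if f(z) > 0 else -1 for f in diagonal)

# Hypothetical diagonal [delta_6, 1]; delta_6's only root on [0, 1/2) is t = 1/6.
form = [delta6, lambda z: 1.0]
print(signature(form, 0.05))   # 2: both entries positive before t = 1/6
print(signature(form, 0.25))   # 0: delta_6 has changed sign
```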
\section{$r(w)$ bounds $\rho(w)$ for $w \in W({\bf Q}(t))$.}\label{secbounds}
\begin{theorem} For any matrix representative $A$ of a class $w \in W({\bf Q}(t))$, $\dim(A) \ge r(w)$.
\end{theorem}
\begin{proof} To simplify notation, we will let $\sigma_t = \sigma_w(t)$, $\sigma_t^{+} = \lim_{\tau \downarrow t} \sigma_w(\tau)$, and $\sigma_t^{-} = \lim_{\tau \uparrow t} \sigma_w(\tau)$. (For instance, for the signature function in Figure~\ref{siggraph}, $\sigma_{\alpha_2}^- = 2$ and $\sigma_{\alpha_2}^+ = 4$.)
Since $A$ is a hermitian matrix with entries in ${\bf Q}(t)$, we can diagonalize $A$, clear denominators, and remove square factors and factors of the form $f(t)f(t^{-1})$ in the diagonal entries. Thus, there is a diagonalization where each diagonal entry factors as the product of distinct symmetric irreducible rational polynomials. Let $D$ be one such diagonalization. Note that the discontinuities of the signature function can occur only at roots of the diagonal factors.
Let $T_w = \{ \alpha_1, \ldots , \alpha_k \}$ be the set of discontinuity points for the signature function on $[0,\frac{1}{2})$.
Let $\delta$ be an irreducible symmetric polynomial such that $\sigma_w(t)$ has a discontinuity at $t_0$ and $\delta(e^{2\pi i t_0}) = 0$. By reordering, we can assume that $D$ has diagonal $$[f_1 \delta, f_2 \delta, \ldots , f_m \delta, g_1, \ldots , g_n],$$ where each $f_i$ and $g_i$ is a product of distinct irreducible symmetric polynomials, none of which are $\delta$.
Let $\alpha \in T_{w, \delta}$. Evaluating the diagonal at $e^{2\pi i \alpha^-}$, where $\alpha^-$ is a number close to but smaller than $\alpha$, let $m_+$ denote the number of positive entries in $[f_1 \delta, f_2 \delta, \ldots , f_m \delta]$ and $m_-$ the number of negative entries. Similarly, let $n_+$ and $n_-$ denote the numbers of positive and negative entries in $[g_1, \ldots , g_n]$.
Here are some elementary calculations:
\begin{itemize}
\item $m_+ + m_- = m $
\item $n_+ + n_- = n $
\item $\sigma^-_\alpha = m_+ + n_+ - m_- - n_-$
\end{itemize}
If we switch from $\alpha^-$ to $\alpha^+$ the only change in signs occurs because of the change in the sign of $\delta$ at $e^{2\pi i \alpha}$, and thus the signs of all the diagonal entries with $\delta$ factors change, so that we have $m_-$ positive entries and $m_+$ negative entries. It follows, using a little arithmetic for the second calculation, that
\begin{itemize}
\item $\sigma^+_\alpha = m_- + n_+ - m_+ - n_-$
\item $J_\alpha = m_{-} - m_+ $
\end{itemize}
Note that $J_\alpha \equiv m \mod 2$.
The following identities are verified by simply substituting for $m$, $J_\alpha$, $n$, and $\sigma_\alpha^-$ in each:
\begin{itemize}
\item $ m_\pm = \frac{1}{2} m \mp \frac{1}{2}J_\alpha $
\item $n_\pm = \frac{1}{2} n \pm (\frac{1}{2} \sigma^-_\alpha +\frac{1}{2} J_\alpha)$
\end{itemize}
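For the reader's convenience, the substitutions can be carried out explicitly; this is a sketch using only the three relations displayed earlier in the proof, writing $J_\alpha$ for the jump invariant at $\alpha$:

```latex
m_+ + m_- = m,\quad m_- - m_+ = J_\alpha
\;\Longrightarrow\; m_\pm = \tfrac{1}{2}m \mp \tfrac{1}{2}J_\alpha;
\qquad
n_+ - n_- = \sigma^-_\alpha - (m_+ - m_-) = \sigma^-_\alpha + J_\alpha
\;\Longrightarrow\; n_\pm = \tfrac{1}{2}n \pm \bigl(\tfrac{1}{2}\sigma^-_\alpha + \tfrac{1}{2}J_\alpha\bigr).
```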
Given that $m_\pm $ and $n_\pm$ are nonnegative, we have
\begin{itemize}
\item $ \frac{1}{2} m\ge |\frac{1}{2}J_\alpha| $
\item $ \frac{1}{2} n \ge |\frac{1}{2} \sigma^-_\alpha +\frac{1}{2} J_\alpha| = \frac{1}{2} |\sigma_\alpha| $
\end{itemize}
These inequalities hold at each $\alpha \in T_{w, \delta}$, so, multiplying by 2 and taking maxima, we find
\begin{itemize}
\item $ m\ge \max_{\alpha \in T_{w, \delta}} | J_\alpha|$
\item $ n \ge \max_{\alpha \in T_{w, \delta}} | \sigma_\alpha| $
\end{itemize}
This proves the theorem, except we have not yet dealt with the signature at 0. But clearly $\rho(w) \ge \sigma_w(0)$, since $\sigma_w(0)$ is the signature of $A$ evaluated near $1$, so the proof is complete.
\end{proof}
\vskip.1in
\section{Realization result.}\label{secrealize}
We now want to show that every step function $s(t)$ satisfying certain criteria occurs as the signature function for some class $w \in W({\bf Q}(t))$ and for that class, $\rho(w) = r(w)$.
\begin{definition} For a step function $s(t)$, let $J_s(t) = \frac{1}{2}( \lim_{\tau \downarrow t } s(\tau) - \lim_{\tau \uparrow t } s(\tau)) $.\vskip.05in
\end{definition}
\begin{definition} Let $\mathcal{S}$ denote the set of integer valued step functions defined on $[0,\frac{1}{2})$ satisfying the following list of conditions. If $s \in \mathcal{S}$, then:\vskip.05in
\begin{enumerate}
\item The set of discontinuities of $s$ is finite and $s$ is continuous at $t = 0$.\vskip.05in
\item For all $t$, $s(t) =\frac{1}{2}( \lim_{\tau \downarrow t } s(\tau) + \lim_{\tau \uparrow t } s(\tau))$.\vskip.05in
\item For all $t$, $ J_s(t) \in {\bf Z}$. \vskip.05in
\item If $ J_s(t) \ne 0$ then $e^{2 \pi i t}$ is the root of an irreducible symmetric rational polynomial.
\item If $\alpha_1$ and $\alpha_2$ satisfy $\delta(e^{2 \pi i \alpha_i})=0$ for some symmetric irreducible rational polynomial $\delta$, then $J_s(\alpha_1) \equiv J_s(\alpha_2) \mod 2$.
\end{enumerate}
\end{definition}
The definitions in the previous section were given purely in terms of the signature function of a class $w \in W({\bf Q}(t))$, so they extend to $\mathcal{S}$ as now described.
\begin{definition} Let $s \in \mathcal{S}$:\vskip.05in
\begin{itemize}
\item $T_s = \{ t \ | \ J_s(t) \ne 0 \}$.
\item For an irreducible symmetric polynomial $\delta$, $ T_{s,\delta} = \{ t\in T_s\ |\ \delta(e^{2 \pi i t}) = 0\}$.\vskip.05in
\item For each $\delta$, $r_\delta (s) = \max_{t \in T_{s,\delta}} \{| s(t)| \}+ \max_{t \in T_{s,\delta}} \{|J_s(t)| \}.$\vskip.05in
\item $r(s) = \max_\delta \{r_\delta(s), s(0)\}.$
\end{itemize}
\end{definition}
\begin{lemma} For all $s \in \mathcal{S}$, $r(s) \equiv s(0) \mod 2$.
\end{lemma}
\begin{proof} Since $J_s(t) \in {\bf Z}$ and $2J_s(\alpha)$ is the jump in the signature function at each discontinuity $\alpha$, $s(t) \equiv s(0) \mod 2$ if $t$ is not a point of discontinuity. Thus, at each discontinuity $\alpha$, $s(\alpha) + J_s(\alpha) = \lim_{\tau \downarrow \alpha} s(\tau) \equiv s(0) \mod 2$. For each $\delta$, for all $\alpha_1, \alpha_2 \in T_{s,\delta}$ we have $J_s(\alpha_1) \equiv J_s(\alpha_2) \mod 2$. It follows that $s(\alpha_1) \equiv s(\alpha_2) \mod 2$.
For a fixed $\delta$, $r_\delta (s) = \max_{t \in T_{s,\delta}} \{| s(t)| \} + \max_{t \in T_{s,\delta}} \{|J_s(t)| \},$ and we have now seen that, mod 2, all the terms in the sets of values over which the maxima are being taken are equal. Thus, if $\alpha \in T_{s,\delta}$, $r_\delta (s) \equiv s(\alpha) + J_s(\alpha) \mod 2$. We have already seen that this sum equals $s(0)$, modulo 2.
Finally, since $r(s) = \max_\delta \{r_\delta(s), s(0)\}$ and each of these quantities equals $s(0)$ modulo 2, the maximum also equals $s(0)$ modulo 2.
\end{proof}
\begin{theorem}\label{thmrealize} Suppose that $s \in \mathcal{S}$. There exists a hermitian matrix $A$ of rank $r (s)$ having signature function $s$.
\end{theorem}
\begin{proof}
To construct $A$ we begin with the diagonal matrix $D_0$ of rank $r(s)$ in which $\delta_i$ appears as a factor of exponent one of the first $\max_{\alpha \in T_{s, \delta_i}} |J_s(\alpha)|$ entries. If these conditions, taken over all $i$, do not specify all the entries of $D_0$, we set the remaining entries equal to 1.
Next, we change the sign of some of the diagonal entries to form $D_1$ so that the signature near $t = 0$ is $s(0)$. This is possible since, by the previous lemma, $r(s) \equiv s(0) \mod 2$.
To continue the modification, we must introduce a family of polynomials, $q_\theta(t) = t^{-1} - 2\cos(2 \pi \theta) + t$, $0 < \theta < \frac{1}{2}$.
For a dense set of $\theta$, $q_\theta(t)$ is
a rational polynomial having its only root on the upper half circle at $e^{2 \pi \theta i}$. For $t $ close to 0, $q_\theta(e^{2 \pi t i})$ is positive
and for $t$ close to $\frac{1}{2}$, $q_\theta(e^{2 \pi t i})$ is negative.
If some of the diagonal entries of $D_1$ are multiplied by $q_\theta(t)$, the value of the signature is unchanged for $t < \theta$. The signature can change for values of $t > \theta$. However, jumps can continue to appear only at the roots of the $\delta_i$, as well as, possibly, $\theta$. The construction of the desired form consists of making such modifications to create a form with signature function $s(t)$.
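The sign behavior of $q_\theta$ on the unit circle, which drives this construction, can be confirmed numerically; this sketch uses the identity $q_\theta(e^{2\pi i t}) = 2\cos(2\pi t) - 2\cos(2\pi\theta)$ and an arbitrary choice $\theta = 0.3$:

```python
import math

def q(theta, t):
    # q_theta evaluated at e^{2 pi i t}:
    # e^{-2 pi i t} - 2 cos(2 pi theta) + e^{2 pi i t} = 2 cos(2 pi t) - 2 cos(2 pi theta)
    return 2 * math.cos(2 * math.pi * t) - 2 * math.cos(2 * math.pi * theta)

theta = 0.3
print(q(theta, 0.10) > 0)            # True: positive for t < theta
print(q(theta, 0.45) < 0)            # True: negative for theta < t < 1/2
print(abs(q(theta, theta)) < 1e-12)  # True: the only sign change on [0, 1/2) is at theta
```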
Suppose that $e^{2 \pi \alpha i }$ is a root of $\delta$, one of the $\delta_i$, corresponding to a nontrivial jump. Suppose also that the form $D_i$ has been constructed so that its signature function agrees with $s$ for all $t < \alpha$. We want to alter $D_i$, building $D_{i+1}$, so that its signature function is unchanged for $t < \alpha$ and has the same jump at $\alpha$ as $s(t)$; that is, $J_\alpha$. Pick a $\theta < \alpha$ with $\alpha - \theta$ small.
Suppose that the first $m$ entries of $D_i$ are the ones divisible by $\delta$, and that the number of remaining entries is $n$. If the desired form $D_{i+1}$ is to have a jump of $2 J_\alpha$ at $\alpha$, then (when evaluated at a point $\alpha^-$ close to but less than $\alpha$) the number of positive and negative entries in $D_{i+1}$ among the first $m$ diagonal entries must be $m_+ = \frac{1}{2}m - \frac{1}{2} J_\alpha$ and $m_- = \frac{1}{2}m + \frac{1}{2} J_\alpha$. Similarly, if the signature to the left of $\alpha$ is to be $s_\alpha^-$, we must have the number of positive and negative entries among the last $n$ diagonal entries of $ D_{i+1}$ be $n_+ = \frac{1}{2}n + \frac{1}{2}s_\alpha^- + \frac{1}{2} J_\alpha$ and $n_- = \frac{1}{2}n - \frac{1}{2}s_\alpha^- - \frac{1}{2} J_\alpha$. (Recall that the jump is determined by the first $m$ entries, since only those change sign as $t$ increases near $\alpha$.)
The desired sign distribution of the diagonal can be achieved by multiplying some of the diagonal entries by $q_\theta(t)$. The only concern is that each of the numbers $m_+ = \frac{1}{2}m - \frac{1}{2} J_\alpha$, $m_- = \frac{1}{2}m + \frac{1}{2} J_\alpha$, $n_+ = \frac{1}{2}n + \frac{1}{2}s_\alpha^- + \frac{1}{2} J_\alpha$, and $n_- = \frac{1}{2}n - \frac{1}{2}s_\alpha^- - \frac{1}{2} J_\alpha$ must be nonnegative. This will be the case as long as $m \ge |J_\alpha|$ and $n \ge |s_\alpha^- + J_\alpha|$, which is ensured by our initial choice of the dimension of $D_0$ to be $r(s)$. (Note that in the definition of $r(s)$ one of the two maxima is over the numbers $s(\alpha) $, and $s(\alpha) = s_\alpha^- + J_\alpha$.) Observe that since $s_\alpha^-$ is unchanged, no jump has been introduced at $\theta$.
\end{proof}
\section{$\rho(4w) = r(4w)$.}\label{sector}
Here we prove the main theorem:
\begin{theorem} For $w \in W({\bf Q}(t))$, $\rho(4w) = r(4w)$.
\end{theorem}
\begin{proof} For the given form $w$, we apply Theorem~\ref{thmrealize} to the function $s = \sigma_w(t)$ to find a hermitian matrix $A$ of dimension $r(w)$. If we denote the class represented by $A$ in $W({\bf Q}(t))$ by $w'$, then $\sigma_{w'}(t) = \sigma_w(t)$.
It follows that $\sigma_{w \oplus - w'}(t) = 0$. Lemma~\ref{lemmasig} below then shows that $w \oplus -w'$ represents an element of order 1, 2, or 4 in
$W({\bf Q}(t))$. Thus, $4w = 4w' \in W({\bf Q}(t))$.
Since $w'$ is constructed to have a representative of rank $r(w')$, clearly $\rho(4w') \le 4 r(w')$. On the
other hand, it follows immediately from the definition of $r$ that $r(nw) = n r(w)$ for any $w \in W({\bf Q}(t))$. Thus we have $\rho(4w') \ge r(4w') = 4r(w')$. The proof of the theorem is complete, given the next lemma.
\end{proof}
\begin{lemma}\label{lemmasig} For a class $w \in W({\bf Q}(t))$, if $\sigma_w(t) = 0$, then $4w = 0 \in W({\bf Q}(t))$.
\end{lemma}
\begin{proof}
Background for the structure of Witt groups of symmetric bilinear forms is contained in~\cite{mh}. The specifics in the case of hermitian forms are contained in~\cite{lith}. A more complete description is in Ranicki's book~\cite{ranicki}, with the details of the structure of hermitian forms over number rings presented in~\cite{con}.
For each symmetric irreducible $\delta \in {\bf Q}[t,t^{-1}]$ there is a homomorphism $$\partial_\delta \colon\thinspace W({\bf Q}(t)) \to W({\bf Q}[t,t^{-1}]/\left<\delta(t)\right>),$$ defined as follows. If $w \in W({\bf Q}(t))$ is represented by a diagonal matrix $A$ with diagonal entries $[\delta f_1, \ldots , \delta f_m , g_1, \ldots , g_n]$, where each $f_i$ and $g_i$ is a product of symmetric irreducible polynomials prime to $\delta$, then $\partial_\delta(w) = [f_1, \ldots , f_m]$. This induces a split exact sequence $$ 0 \to W({\bf Q}[t,t^{-1}]) \to W({\bf Q}(t)) \to \oplus_\delta W({\bf Q}[t,t^{-1}]/\left<\delta(t)\right>).$$
According to~\cite{ranicki} (see also \cite{lith} for an elementary argument) the inclusion $W({\bf Q}) \to W({\bf Q}[t,t^{-1}])$ is an isomorphism, so the previous sequence can be rewritten as $$ 0 \to W({\bf Q}) \to W({\bf Q}(t)) \to \oplus_\delta W({\bf Q}[t,t^{-1}]/\left<\delta(t)\right>).$$
The field ${\bf Q}[t,t^{-1}]/\left<\delta(t)\right>$ is an algebraic extension of ${\bf Q}$, which we write as ${\bf Q}(\alpha)$, a field with involution given by $\alpha \to \alpha^{-1}$. Denoting by $F$ the fixed field of the involution, an element in the Witt group of hermitian forms over this ${\bf Q}(\alpha)$ is of finite order (actually 4--torsion) if and only if, for all complex embeddings of ${\bf Q}(\alpha)$ that restrict to a real embedding of $F$, the signature of the corresponding complex hermitian form is 0. This condition on the embedding implies that $\alpha$ maps to a unit complex root of $\delta$.
Since the jump function for $w$ is 0 at all unit roots of $\delta$, it is then clear that the signature of $\partial_\delta(w)$ is also 0. It follows that $w$ maps to an element of finite order in $ \oplus_\delta W({\bf Q}[t,t^{-1}]/\left<\delta(t)\right>).$ In particular, modulo an element of order dividing four,
$w$ lies in $W({\bf Q}[t, t^{-1}]) \cong W({\bf Q})$.
But any element in the image of $W({\bf Q})$ has constant
signature function, and in our case this implies that $4w$ is represented by a class in $W({\bf Q})$ with 0 signature. But $W({\bf Q}) \cong {\bf Z} \oplus T$, where
$T$ is the torsion subgroup of $W({\bf Q})$ and satisfies $4T = 0$. It now follows, using the fact that the exact sequence is split, that $4w = 0 \in W({\bf Q}(t))$, as desired.
\end{proof}
\section{$\rho$ as a norm on $W({\bf Q}(t))$.}\label{sectionnorm}
In order to compare $r$ as a bound on $\rho$ with bounds based on the maximum of the signature function, we want to view these functions as norms on a vector space.
The function $r$ on $W({\bf Q}(t))$ is multiplicative: $r(nw) = nr(w)$ for $n \in {\bf Z}$. Thus $r$ induces a well-defined rational valued function on $W_{\bf Q}({\bf Q}(t))= W({\bf Q}(t)) \otimes {\bf Q}$. The same is not true for $\rho$ since it can be nonzero on torsion elements in $W({\bf Q}(t))$, but we can define a stable version of $\rho$ by $\rho_s(w) = \frac{1}{4}\rho(4w)$. It follows then by Theorem~\ref{thmmain} that $\rho_s(w) = r(w)$, so $\rho_s$ also determines a well-defined rational valued function on $W_{\bf Q}({\bf Q}(t))$.
If we define $s(w) = \max(|\sigma_w(t)|)$, then $s$ also defines a function on $W_{\bf Q}({\bf Q}(t))$.
\begin{theorem} For all $w \in W_{\bf Q}({\bf Q}(t))$, $\rho_s(w) \ge s(w)$.
\end{theorem}
\begin{proof} Suppose that the maximum value of $|\sigma_w(t)|$ occurs at $t_0$ and $\alpha_0$ is the largest value of a discontinuity that is less than $t_0$. Then $ \sigma_w(t_0) = \sigma_w(\alpha_0) + J_w(\alpha_0)$, so $|\sigma_w(t_0)| \le |\sigma_w(\alpha_0)| + |J_w(\alpha_0)|$. The conclusion now follows from the definition of $ r(w)$ (which equals $\rho_s(w)$).
\end{proof}
Recall that a norm on a vector space $V$ is a function $\nu$ satisfying $\nu(v ) \ge 0 $ for all $v \in V$, $\nu(v) = 0 $ if and only if $v = 0$, and $\nu(v + w) \le \nu(v) + \nu (w)$ for all $v$ and $w$. An immediate consequence of Lemma~\ref{lemmasig} is the following.
\begin{theorem} Both $\rho_s$ and $s$ are norms on $W_{\bf Q}({\bf Q}(t))$.
\end{theorem}
\begin{definition}If $\nu$ is a norm on a vector space $V$, the unit ball of $\nu$ is defined by $B_\nu = \{ v \in V \ |\ \nu(v) \le 1\}$.
\end{definition}
\section{Knot theoretic application; an example contrasting $\rho_s$ and $s$.}\label{sectionapps}
Here we illustrate the strength of $r$ over basic signature bounds in determining the rank of a Witt class. We begin with a specific class, $w_1 \oplus w_2$ defined below. We then expand on this to consider all linear combinations $xw_1 \oplus yw_2$.
\subsection{Construction and results for $w_1 \oplus w_2$} Let $\delta_6(t) = t^{-1} -1 + t$ and let $\delta_{10} =t^{-2} - t^{-1} +1 - t +t^2$. These are the sixth and tenth cyclotomic polynomials, having roots at $e^{2\pi i t}$ for $t = \frac{1}{6}$ and $t = \frac{1}{10}, \frac{3}{10}$, respectively, on $[0,\frac{1}{2})$.
We let $w_1$ be the class in $W({\bf Q}(t))$ with diagonal representative $[- \delta_{10} \delta_6, -\delta_6 , 1, 1] $ and let $w_2$ be the class with diagonal $[- \delta_{10}, 1] $. The graphs of the signature functions of $w_1, w_2$, and $w_1 \oplus w_2$ are illustrated in Figure~\ref{fig25}. These signature functions occur for the knots $-5_1$, $10_{132}$, and $-5_1 \oplus 10_{132}$. (The choice of signs simplifies some of the calculations that follow.)
\begin{figure}
\caption{Signature functions for $w_1$, $w_2$, and $w_1 \oplus w_2$.}
\label{fig25}
\end{figure}
The maximum absolute values of the signature for these three forms are seen to be $s(w_1) = 4$, $s(w_2) = 2$, and $s(w_1 \oplus w_2) = 4$. In the first two cases we have the same result for $r$: $r(w_1) = 4$ and $r(w_2) = 2$. However, for $w_1 \oplus w_2$ the jumps at the tenth roots of unity are $\{2, 0\}$ and the signatures at the tenth roots of unity are $\{2, 4\}$. Thus, the sum of the two maxima is $r(w_1 \oplus w_2) = 6$.
These calculations lead to the following theorem, where the knots $5_1$ and $10_{132}$ are as found in the tables at~\cite{cl}.
\begin{theorem} The Witt rank of $w_1 \oplus w_2$ is $\rho(w_1 \oplus w_2) = 6$. In particular, the knot $-5_1 \# 10_{132}$ has 4--genus 3.
\end{theorem}
\begin{proof} The algebraic statements are demonstrated in the discussion preceding the statement of the theorem. For the geometric result it follows from the algebra that $g_4(-5_1 \# 10_{132}) \ge 3$. But it is known (for example,~see~\cite{cl}) that $g_4(5_1) = 2$ and $g_4(10_{132}) = 1$, so $g_4(-5_1 \# 10_{132}) \le 3$.
\end{proof}
\vskip.1in
\noindent{\bf Comment} This topological result can be obtained by using Ozsv\'ath-Szab\'o invariants~\cite{os1} or Khovanov-Rasmussen invariants~\cite{ra}, which apply only in the smooth category. In the topological category, neither the Murasugi nor the Tristram-Levine signatures~\cite{levine, murasugi, tristram} can give this genus bound.
\subsection{$\rho_s$ and $s$ on the span of $w_1$ and $w_2$ in $W_{\bf Q}({\bf Q}(t))$.}
We now compute and compare the values of $\rho_s = r$ and $s$ on the span of $w_1$ and $w_2$ in $W_{\bf Q}({\bf Q}(t))$. Both are determined by their unit balls.
The value of $s(xw_1 + yw_2)$ for $x, y \in {\bf Q}$ is given by $$s(xw_1 + yw_2)= \max\{ |2x + 2y| , |4x|\}.$$ For the value of $r$ we sum the maximum absolute value of the signature at the points $t=\frac{1}{10} $ and $t=\frac{3}{10}$ and the maximum absolute value of the jump function at those two points. The result is:
$$r( xw_1 + yw_2)= \max\{ |x + y| , |3x +y|\} + \max\{ |x + y| , |x -y|\} .$$ The unit balls for these norms are drawn in Figure~\ref{balls}; the larger region represents the $s$ ball, and the smaller hatched region is the $\rho_s$ ball.
\begin{figure}
\caption{Unit balls for $s$ and $\rho_s$.}
\label{balls}
\end{figure}
In this figure, we see that the point $(\frac{1}{4},\frac{1}{4})$ is in the unit $s$ ball; as seen earlier, $s(w_1 \oplus w_2) = 4$. Also, as we computed, $(\frac{1}{4},\frac{1}{4})$ is not in the unit $\rho_s$ ball, but $(\frac{1}{6},\frac{1}{6})$ is, since $\rho_s(w_1 \oplus w_2) = 6.$
Another interesting point in the diagram is $(-\frac{1}{4}, \frac{3}{4})$. The graph of the signature function of $-w_1 \oplus 3w_2$ is illustrated in Figure~\ref{signgraph2}. The maximum absolute value of the signature is $4$, but the value of $\rho_s$ is $\max\{ 2 , 0\} + \max\{ 2, |-4|\} = 6$.
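The closed-form expressions for $s$ and $r$ on this span make such comparisons one-line checks. In this Python sketch the coordinates $(x, y)$ stand for the class $xw_1 + yw_2$:

```python
def s_norm(x, y):
    # s(x w1 + y w2) = max(|2x + 2y|, |4x|)
    return max(abs(2 * x + 2 * y), abs(4 * x))

def r_norm(x, y):
    # r(x w1 + y w2) = max(|x + y|, |3x + y|) + max(|x + y|, |x - y|)
    return max(abs(x + y), abs(3 * x + y)) + max(abs(x + y), abs(x - y))

print(s_norm(1, 1), r_norm(1, 1))     # 4 6: the class w1 + w2
print(s_norm(-1, 3), r_norm(-1, 3))   # 4 6: the class -w1 + 3 w2
```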
\begin{figure}
\caption{Signature function for $-w_1 \oplus 3 w_2$.}
\label{signgraph2}
\end{figure}
\vskip.1in
\noindent{\bf Knot theoretic comment} It follows from this calculation that $g_4(5_1 \# 3(10_{132})) \ge 3$. A straightforward knot theoretic exercise in fact shows that $g_4(5_1 \# 3(10_{132}) )\le 3$. This presents another example for which $r$ detects the 4--genus of a knot, but signatures do not. More interestingly, in this case the Ozsv\'ath-Szab\'o and Rasmussen-Khovanov invariants are both insufficient to determine the 4--genus; both turn out to give a lower bound of 1.
\appendix
\section{Witt class invariants of knots}
Here we summarize the geometric background related to $W({\bf Q}(t))$ invariants of knots. Details can be found in such references as~\cite{rolf}.
Every smooth oriented knot $K\subset S^3$ bounds a smoothly embedded oriented surface $F\subset S^3$. There is a {\it Seifert} pairing $V\colon\thinspace H_1(F) \times H_1(F) \to {\bf Z}$ given by $V(x, y) = lk (x, i_+(y))$, where $i_+$ is the map $F \to S^3 - F$ given by pushing off in the positive direction and $lk$ is the linking number. A simple observation is that the intersection number of classes $x, y \in H_1(F)$ is given by $V(x,y) - V(y,x)$. In particular, any matrix representation of $V$ has determinant $\pm 1$.
Suppose the genus of $F$ is $n_1$, and $K = \partial G$, where $G$ is properly embedded in $B^4$ and is of genus $n_2$. Then $F \cup G$ is a closed surface in $B^4$, and it bounds an embedded 3--manifold $M\subset B^4$. An argument using Poincar\'e duality shows that the kernel $\mathcal{K}$ of the inclusion $H_1(F \cup G, {\bf Q}) \to H_1(M, {\bf Q})$ is of dimension $(n_1 + n_2)$. Since $H_1(F)$ is a $2n_1$ dimensional subspace of $H_1(F \cup G)$, which is of dimension $2(n_1 + n_2)$, a simple linear algebra argument shows that $\mathcal{K}' = \mathcal{K} \cap H_1(F,{\bf Q})$ is of dimension at least $n_1 - n_2$.
Another simple geometric argument implies that $V$ vanishes on the subspace $\mathcal{K}' \subset H_1(F, {\bf Q})$. If we now write $V$ for a matrix representation of the Seifert pairing, the form $(1-t)V + (1 -t^{-1})V^T $ defines a hermitian pairing over the rational function field. As above, if $K$ bounds a surface of genus $n_2$ in $B^4$, then this form vanishes on a subspace of $H_1(F,{\bf Q})$ of dimension $(n_1 - n_2)$. Thus the form splits as a direct sum of forms, one of which is metabolic and of dimension $2(n_1 -n_2)$; the other summand is of dimension $2 n_1 - 2(n_1 - n_2) = 2n_2$.
In summary, we see that if a knot $K$ bounds a surface of genus $g$ in $B^4$, then the Witt class of its hermitianized Seifert form has a representative of dimension $2g$.
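As a concrete (and hedged) illustration, the signature function of the hermitianized Seifert form can be computed numerically for a genus-one example. The matrix below is a standard Seifert matrix for a trefoil under one common convention (signs depend on orientation and handedness conventions); its Alexander polynomial $t^{-1} - 1 + t$ has its root on the upper half circle at $t = \frac{1}{6}$, where the signature jumps:

```python
import cmath

# A standard Seifert matrix for a trefoil (one common convention).
V = [[-1, 1], [0, -1]]

def signature_at(t):
    """Signature of (1 - w) V + (1 - w^{-1}) V^T at w = e^{2 pi i t}."""
    w = cmath.exp(2j * cmath.pi * t)
    a, b = 1 - w, 1 - w.conjugate()
    # Entries of the 2x2 hermitian matrix M = a V + b V^T.
    m11 = a * V[0][0] + b * V[0][0]
    m12 = a * V[0][1] + b * V[1][0]
    m22 = a * V[1][1] + b * V[1][1]
    # Eigenvalues of [[m11, m12], [conj(m12), m22]] are real.
    tr = (m11 + m22).real
    det = (m11 * m22).real - abs(m12) ** 2
    disc = (tr * tr - 4 * det) ** 0.5
    eigs = [(tr + disc) / 2, (tr - disc) / 2]
    return sum(1 if e > 0 else -1 for e in eigs)

print(signature_at(0.10))   # 0: before the Alexander root at t = 1/6
print(signature_at(0.40))   # -2: after the root
```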
\end{document} |
\begin{document}
\title{Alexandrov's Approach to the Minkowski Problem}
\author{S.~S. Kutateladze}
\address[]{
Sobolev Institute of Mathematics\newline
\indent 4 Koptyug Avenue\newline
\indent Novosibirsk, 630090
\indent RUSSIA}
\email{
[email protected]
}
\begin{abstract}
This article is dedicated to the centenary of the birth
of Aleksandr D. Alexandrov (1912--1999). His
functional-analytical approach to the solving of the
Minkowski problem is examined and applied to the extremal problems of
isoperimetric type with conflicting goals.
\end{abstract}
\date{August 30, 2012}
\maketitle
The {\it Mathematics Subject Classification}, produced jointly by the editorial staffs
of {\it Mathematical Reviews} and {\it Zentralblatt f\"ur Mathematik} in 2010, has Section 53C45
``Global surface theory (convex surfaces \`a la A.~D. Aleksandrov).''
This article surveys some mathematics of the sort.
Good mathematics starts as a first love. If great, it turns into adult sex
and happy marriage. If ordinary, it ends in dumping, cheating or divorce.
If awesome, it becomes eternal.
Alexandrov's mathematics is great (see \cite{SW-I}--\cite{SW-II}). To demonstrate, inspect his solution of the Minkowski problem.
Alexandrov's mathematics is alive, expanding and flourishing for decades.
Dido's problem in today's setting is one of the examples.
\section*{The Space of Convex Bodies}
A~{\it convex figure\/} is a~compact convex set. A~{\it convex body\/}
is a~solid convex figure.
The {\it Minkowski duality\/} identifies
a~convex figure $S$ in
$\mathbb R^N$ and its {\it support function\/}
$S(z):=\sup\{(x,z)\mid x\in S\}$ for $z\in \mathbb R^N$.
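As a small numerical aside (our sketch in Python, not part of the survey; the polygon and all names are illustrative assumptions), one can evaluate the support function of a plane convex polygon directly from its vertices, check its sublinearity, and verify Cauchy's classical formula expressing the perimeter as the integral of the support function over $S_1$:

```python
import math

def support(vertices, z):
    # Support function S(z) = max_{x in S} <x, z> of the convex hull of `vertices`.
    return max(x[0] * z[0] + x[1] * z[1] for x in vertices)

# Unit square centered at the origin.
square = [(0.5, 0.5), (-0.5, 0.5), (-0.5, -0.5), (0.5, -0.5)]

# Sublinearity: S(z + w) <= S(z) + S(w).
z, w = (1.0, 2.0), (-3.0, 0.5)
assert support(square, (z[0] + w[0], z[1] + w[1])) <= support(square, z) + support(square, w)

# Cauchy's formula in the plane: perimeter = int_0^{2pi} S(cos t, sin t) dt.
n = 20000
perimeter = (2 * math.pi / n) * sum(
    support(square, (math.cos(2 * math.pi * k / n), math.sin(2 * math.pi * k / n)))
    for k in range(n))
```

For the unit square the integral evaluates to $4$, the perimeter; integrals of support functions against measures on the sphere are exactly what the pairings considered below encode.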
Considering the members of $\mathbb R^N$ as singletons, we assume that
$\mathbb R^N$ lies in the set $\mathscr V_N$
of all compact convex subsets
of $\mathbb R^N$.
The Minkowski duality makes $\mathscr V_N$ into a~cone
in the space $C(S_{N-1})$ of continuous functions on the Euclidean unit sphere
$S_{N-1}$, the boundary of the unit ball $\mathfrak z_N$.
The
{\it linear span\/}
$[\mathscr V_N]$ of~$\mathscr V_N$ is dense in $C(S_{N-1})$, bears
a~natural structure of a~vector lattice
and is usually referred to as the {\it space of convex sets}.
The study of this space stems from the pioneering breakthrough of
Alexandrov in 1937 and the further insights of
Radstr\"{o}m, H\"{o}rmander, and Pinsker.
\section*{Linear Inequalities over Convex Surfaces}
A measure $\mu$ {\it linearly majorizes\/} or {\it dominates\/}
a~measure $\nu$ on $S_{N-1}$ provided that to each decomposition of
$S_{N-1}$ into finitely many disjoint Borel sets $U_1,\dots,U_m$
there are measures $\mu_1,\dots,\mu_m$ with sum $\mu$
such that every difference $\mu_k - \nu|_{U_k}$
annihilates
all restrictions to $S_{N-1}$ of linear functionals over
$\mathbb R^N$. In symbols, we write $\mu\,{\gg}{}_{\mathbb R^N} \nu$.
Reshetnyak proved in 1954 (cp.~\cite{Reshetnyak}) that
$$
\int_{S_{N-1}} p\, d\mu \ge \int_{S_{N-1}} p\, d\nu
$$
for each sublinear functional $p$
on $\mathbb R^N$ if $\mu\,{\gg}{}_{\mathbb R^N} \nu$.
This gave an important trick for generating positive linear functionals
over various classes of convex surfaces and functions.
\section*{Choquet's Order}
A~measure $\mu$ {\it affinely majorizes\/} or {\it dominates\/}
a measure $\nu$, both given on a compact convex subset $Q$ of a locally convex space $X$,
provided that to each decomposition of
$\nu$ into finitely many summands
$\nu_1,\dots,\nu_m$ there are measures $\mu_1,\dots,\mu_m$
whose sum is $\mu$ and for which every difference
$\mu_k - \nu_k$ annihilates all restrictions
to $Q$ of affine functionals over $X$.
In symbols, $\mu\,{\gg}{}_{\operatorname{Aff}(Q)} \nu$.
Cartier, Fell, and Meyer proved in 1964 (cp.~\cite{Cartier_et_al}) that
$$
\int_{Q} f d\mu \ge \int_{Q} f d\nu
$$
for each continuous convex function $f$
on $Q$ if and only if $\mu\,{\gg}{}_{\operatorname{Aff}(Q)} \nu$.
An analogous necessity part for linear majorization was published
in 1969 (cp.~\cite{Kut69}--\cite{Dinges}).
\section*{Decomposition Theorem}
Majorization is a vast subject (cp.~\cite{Marshall_Olkin}).
The general form for many cones is as follows (cp.~\cite{Kut75}):
{\sl
Assume that $H_1,\dots,H_N$ are cones in a vector lattice~$X$.
Assume further that $f$ and $g$ are positive linear functionals on~$X$.
The inequality
$$
f(h_1\vee\dots\vee h_N)\ge g(h_1\vee\dots\vee h_N)
$$
holds for all
$h_k\in H_k$ $(k:=1,\dots,N)$
if and only if to each decomposition
of~$g$ into a~sum of~$N$ positive terms
$g=g_1+\dots+g_N$
there is a decomposition of~$f$ into a~sum of~$N$
positive terms $f=f_1+\dots+f_N$
such that
$$
f_k(h_k)\ge g_k(h_k)\quad
(h_k\in H_k;\ k:=1,\dots,N).
$$
}
\section*{Alexandrov Measures}
Alexandrov proved the existence, and uniqueness up to translation, of
a convex body with a given surface area function, thus completing the solution of
the Minkowski problem.
Each surface area function is an {\it Alexandrov measure}:
so we call any positive measure on the unit sphere which is supported by
no great hypersphere and which annihilates
singletons.
Each Alexandrov measure is a translation-inva\-riant
additive functional over the cone
$\mathscr V_N$.
The cone of positive translation-invariant measures in the
dual $C'(S_{N-1})$ of
$C(S_{N-1})$ is denoted by~$\mathscr A_N$.
\section*{Blaschke's Sum}
Given $\mathfrak x, \mathfrak y\in \mathscr V_N$, the record
$\mathfrak x\,{=}{}_{\mathbb R^N}\mathfrak y$ means that $\mathfrak x$
and $\mathfrak y$ are equal up to translation or, in other words,
are translates of one another.
So, ${=}{}_{\mathbb R^N}$ is the equivalence associated with
the preorder ${\ge}{}_{\mathbb R^N}$ on $\mathscr V_N$ given by
the possibility of inserting one figure into the other
by translation.
The sum of the surface area measures of
$\mathfrak x$ and $\mathfrak y$ generates the unique class
$\mathfrak x\# \mathfrak y$ of translates which is referred to as the
{\it Blaschke sum\/} of $\mathfrak x$ and~$\mathfrak y$.
There is no need to discriminate between a convex figure,
the coset of its translates in $\mathscr V_N/\mathbb R^N$,
and the corresponding measure in $\mathscr A_N$.
\section*{Comparison Between the Structures}
\begin{tabular}{|r|r|r|}
\hline
{\scshape Objects}&{\scshape Minkowski's Structure} &{\scshape
Blaschke's Structure}\\
\hline
cone of sets &${\mathscr V}_N/\mathbb R^N$\hfil &${\mathscr A}_N$\\
dual cone &${\mathscr V}^*_N$\hfil &${\mathscr A}^*_N$\hfil\\
positive cone &$\mathscr A^*_N $\hfil &$\mathscr A_N$\hfil\\
linear functional &$V_1 (\mathfrak z_N,\,\cdot\,)$, breadth& $V_1(\,\cdot\,,\mathfrak z_N)$, area\hfil\\
concave functional &$V^{1/N}(\,\cdot\,)$\hfil &$V^{(N-1)/N}(\,\cdot\,)$\hfil\\
convex program & isoperimetric problem\hfil & Urysohn's problem\hfil \\
operator constraint & inclusion-like\hfil & curvature-like\hfil \\
Lagrange's multiplier & surface\hfil & function\hfil \\
gradient\hfil &$V_1(\bar{\mathfrak x},\,\cdot\,) $\hfil& $V_1(\,\cdot\,,\bar{\mathfrak x})$\hfil\\
\hline
\end{tabular}
\section*{The Natural Duality}
Let $C(S_{N-1})/\mathbb R^N$ stand for the factor space of
$C(S_{N-1})$ by the subspace of all restrictions of linear
functionals on $\mathbb R^N$ to $S_{N-1}$.
Let $[\mathscr A_N]$ be the space $\mathscr A_N-\mathscr A_N$
of translation-invariant measures, in fact, the linear span
of the set of Alexandrov measures.
$C(S_{N-1})/\mathbb R^N$ and $[\mathscr A_N]$ are made dual
by the canonical bilinear form
$$
\gathered
\langle f,\mu\rangle=\frac{1}{N}\int\nolimits_{S_{N-1}}fd\mu\\
(f\in C(S_{N-1})/\mathbb R^N,\ \mu \in[\mathscr A_N]).
\endgathered
$$
For $\mathfrak x\in\mathscr V_N/\mathbb R^N$ and $\mathfrak y\in\mathscr A_N$,
the quantity
$\langle {\mathfrak x},{\mathfrak y}\rangle$ coincides with the
{\it mixed volume\/}
$V_1 (\mathfrak y,\mathfrak x)$.
\section*{Solution of Minkowski's Problem}
Alexandrov observed that the gradient of $V(\cdot)$ at $\mathfrak x$ is proportional
to $\mu(\mathfrak x)$ and so minimizing $\langle \cdot,\mu\rangle$ over $\{V=1\}$
will yield the equality $\mu=\mu(\mathfrak x)$ by the Lagrange multiplier
rule. But this idea fails since the interior of ${\mathscr V}_{N}$ is empty.
The fact that DC-functions are dense in $C(S_{N-1})$ is not helpful at all.
Alexandrov extended the volume to the positive cone of $C(S_{N-1})$ by the formula
$V(f):=\langle f,\mu(\operatorname{co}(f))\rangle$ with $\operatorname{co}(f)$ the envelope of support functions
below $f$. This ingenious trick settled the Minkowski problem.
It was done in 1938 but still remains one of the summits of convexity.
In fact, Alexandrov suggested a functional analytical approach to extremal problems
for convex surfaces. To follow it directly in the general setting is impossible
without the above description of the polar cones. The obvious limitations of the Lagrange multiplier rule are immaterial in the case of convex programs. It should be emphasized that the classical isoperimetric problem is not a Minkowski convex program in dimensions greater than~2. The convex counterpart is the Urysohn problem of maximizing volume given integral breadth \cite{Urysohn}.
The constraints of inclusion type are convex in the Minkowski structure, which
opens the way to the complete solution of new classes of Urysohn-type problems (cp.~\cite{Kut07}).
\section*{The External Urysohn Problem}
Among the convex figures circumscribing $\mathfrak x_0 $ and having
fixed integral breadth, find a convex body of greatest volume.
{\sl
A feasible convex body $\bar {\mathfrak x}$ is a solution
to~the external Urysohn problem
if and only if there are a positive measure~$\mu $
and a positive real $\bar \alpha \in \mathbb R_+$ satisfying
$(1)$ $\bar \alpha \mu
(\mathfrak z_N)\,{\gg}{}_{\mathbb R^N}\mu (\bar {\mathfrak x})+\mu $;
$(2)$~$V(\bar {\mathfrak x})+\frac{1}{N}\int\nolimits_{S_{N-1}}
\bar {\mathfrak x}d\mu =\bar \alpha V_1 (\mathfrak z_N,\bar {\mathfrak x})$;
$(3)$~$\bar{\mathfrak x}(z)={\mathfrak x}_0 (z)$
for all $z$ in the support of~$\mu $.
}
\section*{Solutions}
If ${\mathfrak x}_0 ={\mathfrak z}_{N-1}$ then $\bar{\mathfrak x}$
is a {\it spherical lens} and $\mu$ is the restriction
of the surface area function
of the ball of radius
$\bar \alpha ^{1/(N-1)}$
to the complement in~$S_{N-1}$ of the support of the lens.
If ${\mathfrak x}_0$ is an equilateral triangle then the solution
$\bar {\mathfrak x}$ looks as follows:
\centerline{\epsfxsize4cm\epsfbox{ury_ext.eps}}
$\bar {\mathfrak x}$ is the union of~${\mathfrak x}_0$
and three congruent slices of a circle of radius~$\bar \alpha$ and
centers $O_1$--$O_3$, while
$\mu$ is the restriction of $\mu(\mathfrak z_2)$
to the subset of $S_1$ comprising the endpoints
of the unit vectors of the shaded zone.
\section*{Symmetric Solutions}
This is the general solution of the internal Urysohn problem inside a triangle
in the class of centrally symmetric convex figures:
\centerline{\epsfxsize6cm\epsfbox{ury_sym.eps}}
\section*{Current Hyperplanes}
Find two convex figures $\bar{\mathfrak x}$ and $\bar{\mathfrak y}$
lying in a given convex body
$\mathfrak x_0$,
separated by a~hyperplane with the unit outer normal~$z_0$,
and having the greatest total volume
of $\bar{\mathfrak x}$ and~$\bar{\mathfrak y}$
given the sum of their integral breadths.
{\sl
A feasible pair of convex bodies $\bar{\mathfrak x}$ and $\bar{\mathfrak y}$
solves the internal Urysohn problem with a current hyperplane
if and only if
there are convex figures $\mathfrak x$ and $\mathfrak y$
and positive reals
$\bar\alpha $ and $\bar\beta$ satisfying
{\rm(1)} $\bar{\mathfrak x}=\mathfrak x \# \bar\alpha\mathfrak z_N$;
{\rm(2)} $\bar{\mathfrak y}=\mathfrak y \# \bar\alpha\mathfrak z_N$;
{\rm(3)} $\mu(\mathfrak x) \ge \bar\beta\varepsilon_{z_0} $, $\mu(\mathfrak y) \ge \bar\beta\varepsilon_{-z_0} $;
{\rm(4)} $\bar {\mathfrak x}(z)=\mathfrak x_0 (z)$ for all $z\in \operatorname{spt}(\mathfrak x)\setminus \{z_0\} $;
{\rm(5)} $\bar {\mathfrak y}(z)=\mathfrak x_0 (z)$ for all $z\in \operatorname{spt}(\mathfrak y)\setminus \{-z_0\} $,
\noindent
with $\operatorname{spt}(\mathfrak x)$ standing for the {\it support\/} of $\mathfrak x$,
i.e. the support of the surface area measure $\mu(\mathfrak x)$
of~$\mathfrak x$.
}
\section*{Is Dido's Problem Solved?}
From a utilitarian standpoint, the answer is
definitely in the affirmative. There is no evidence that Dido
experienced any difficulties, showed indecisiveness, or procrastinated over the choice of the tract of land.
Practically speaking, the situation in which Dido made her decision
was not as primitive as it seems at first glance.
Assume that Dido
had known the isoperimetric property of the circle
and had been aware of the symmetrization processes that were elaborated
in the nineteenth century. Would this knowledge be sufficient for Dido
to choose the tract of land? Definitely, it would not.
The real coastline may be rather ragged and craggy.
The photo snaps of coastlines are exhibited as the most visual
examples of fractality. From a theoretical standpoint, the free boundary
in Dido's planar problem may be nonrectifiable, and so the concept of area
as the quantity to be optimized is itself rather ambiguous.
Choosing the tract of land, Dido had no right to trespass on
the territory under the control of the local sovereign.
She had to choose the tract so as to encompass the camps of her
subjects and satisfy some fortification requirements.
Clearly, this generality is unavailable in the
mathematical models known as the classical isoperimetric problem.
Nowadays there is much research aiming at the problems with conflicting
goals (cp., for instance, \cite{MCDM}). One of the simplest and most
popular approaches is based on the concept of Pareto optimum.
\section*{Pareto Optimality}
Consider a~bunch of economic agents
each of which intends to maximize his own income.
The {\it Pareto efficiency principle\/} asserts
that as an effective agreement of the conflicting goals it is reasonable
to take any state in which nobody can increase his income in any way other
than diminishing the income of at least one of the other fellow members.
Formally speaking, this amounts to searching for the maximal elements
of the set comprising the tuples of incomes of the agents
at every state; i.e., some vectors of a finite-dimensional
arithmetic space endowed with the coordinatewise order. Clearly,
the concept of Pareto optimality has already been abstracted to arbitrary
ordered vector spaces.
By way of example, consider a few multiple criteria problems of isoperimetric type.
For more detail, see \cite{Kut09}.
\section*{Vector Isoperimetric Problem}
Given are some convex bodies
$\mathfrak y_1,\dots,\mathfrak y_M$.
Find a convex body $\mathfrak x$ encompassing a given volume
and minimizing each of the mixed volumes $V_1(\mathfrak x,\mathfrak y_1),\dots,V_1(\mathfrak x,\mathfrak y_M)$.
In symbols,
$$
\mathfrak x\in\mathscr A_N;\
\widehat p(\mathfrak x)\ge \widehat p(\bar{\mathfrak x});\
(\langle\mathfrak y_1,\mathfrak x\rangle,\dots,\langle\mathfrak y_M,\mathfrak x\rangle)\rightarrow\inf\!.
$$
Clearly, this is a~Slater regular convex program in the Blaschke structure.
{\sl
Each Pareto-optimal solution $\bar{\mathfrak x}$ of the vector isoperimetric problem
has the form}
$$
\bar{\mathfrak x}=\alpha_1{\mathfrak y}_1+\dots+\alpha_M{\mathfrak y}_M,
$$
where $\alpha_1,\dots,\alpha_M$ are positive reals.
\section*{The Leidenfrost Problem}
Given the volume of a three-dimensional
convex figure, minimize its surface area and vertical breadth.
By symmetry everything reduces to an analogous plane two-objective problem,
every Pareto-optimal solution of which is a~{\it stadium\/},
a weighted Minkowski sum of a disk and
a horizontal straight line segment.
{\sl A plane spheroid, a Pareto-optimal solution of the Leidenfrost problem,
is the result of rotation of a stadium around the vertical axis through
the center of the stadium}.
\section*{Internal Urysohn Problem with Flattening}
Given are a convex body
$\mathfrak x_0\in\mathscr V_N$ and a flattening direction~$\bar z\in S_{N-1}$.
Considering $\mathfrak x\subset\mathfrak x_0$ of
fixed integral breadth, maximize the volume of~$\mathfrak x$ and minimize the
breadth of $\mathfrak x$ in the flattening direction:
$\mathfrak x\in\mathscr V_N;\
\mathfrak x\subset{\mathfrak x}_0;\
\langle \mathfrak x,{\mathfrak z}_N\rangle \ge \langle\bar{\mathfrak x},{\mathfrak z}_N\rangle;\
(-p(\mathfrak x), b_{\bar z}(\mathfrak x)) \to\inf\!.
$
{\sl For a feasible convex body $\bar{\mathfrak x}$ to be Pareto-optimal in
the internal Urysohn problem with the flattening
direction~$\bar z$ it is necessary and sufficient that there be
positive reals $\alpha, \beta$ and a~convex figure $\mathfrak x$ satisfying}
$$
\gathered
\mu(\bar{\mathfrak x})=\mu(\mathfrak x)+ \alpha\mu({\mathfrak z}_N)+\beta(\varepsilon_{\bar z}+\varepsilon_{-\bar z});\\
\bar{\mathfrak x}(z)={\mathfrak x}_0(z)\quad (z\in\operatorname{spt}(\mu(\mathfrak x))).
\endgathered
$$
\section*{Rotational Symmetry}
Assume that a plane convex figure ${\mathfrak x}_0\in\mathscr V_2$ has the symmetry axis $A_{\bar z}$
with generator~$\bar z$. Assume further that ${\mathfrak x}_{00}$ is the result of rotating
$\mathfrak x_0$ around the symmetry axis $A_{\bar z}$ in~$\mathbb R^3$.
$$
\gathered
\mathfrak x\in\mathscr V_3;\\
\mathfrak x \text{\ is\ a\ convex\ body\ of\ rotation\ around}\ A_{\bar z};\\
\mathfrak x\supset{\mathfrak x}_{00};\
\langle {\mathfrak z}_N, \mathfrak x\rangle \ge \langle{\mathfrak z}_N,\bar{\mathfrak x}\rangle;\\
(-p(\mathfrak x), b_{\bar z}(\mathfrak x)) \to\inf\!.
\endgathered
$$
{\sl Each Pareto-optimal solution is the result
of rotating around the symmetry axis a Pareto-optimal solution of the plane internal
Urysohn problem with flattening in the direction of the axis}.
\section*{Soap Bubbles}
Little is known about the analogous problems in arbitrary dimensions.
An especial place
is occupied by a result of Pogorelov, who
demonstrated that the ``soap bubble'' in a tetrahedron
is the result of rolling a ball over a~solution
of the internal Urysohn problem, i.e., the weighted Blaschke sum of
a tetrahedron and a ball.
\section*{The External Urysohn Problem with Flattening}
Given are a convex body
$\mathfrak x_0\in\mathscr V_N$ and a~flattening direction~$\bar z\in S_{N-1}$.
Considering $\mathfrak x\supset {\mathfrak x}_0$ of fixed integral breadth,
maximize the volume and minimize the breadth in
the flattening direction:
$
\mathfrak x\in\mathscr V_N;\
\mathfrak x\supset{\mathfrak x}_0;\
\langle \mathfrak x,{\mathfrak z}_N\rangle \ge \langle\bar{\mathfrak x},{\mathfrak z}_N\rangle;\
(-p(\mathfrak x), b_{\bar z}(\mathfrak x)) \to\inf\!.
$
{\sl For a feasible convex body $\bar{\mathfrak x}$ to be a Pareto-optimal solution of
the external Urysohn problem with flattening it is necessary and
sufficient that there be
positive reals $\alpha, \beta$, and a convex figure $\mathfrak x$ satisfying}
$$
\gathered
\mu(\bar{\mathfrak x})+\mu(\mathfrak x)\gg{}_{{\mathbb R}^N} \alpha\mu({\mathfrak z}_N)+\beta(\varepsilon_{\bar z}+\varepsilon_{-\bar z});\\
V(\bar{\mathfrak x})+V_1(\mathfrak x,\bar{\mathfrak x})=\alpha V_1({\mathfrak z}_N,\bar{\mathfrak x})+ 2N\beta b_{\bar z}(\bar{\mathfrak x});\\
\bar{\mathfrak x}(z)={\mathfrak x}_0(z)\quad (z\in\operatorname{spt}(\mu(\mathfrak x))).
\endgathered
$$
\section*{Optimal Convex Hulls}
Given ${\mathfrak y}_1,\dots,{\mathfrak y}_m$ in~ $\mathbb R^N$,
place ${\mathfrak x}_k$ within~${\mathfrak y}_k$ for $k:=1,\dots,m$,
maximizing the volume of
each of the $\mathfrak x_1,\dots,\mathfrak x_m$ and minimizing
the integral breadth of their convex hull:
$$
\mathfrak x_k\subset\mathfrak y_k ;\
(-p({\mathfrak x}_1),\dots,-p({\mathfrak x}_m), \langle \operatorname{co}\{{\mathfrak x}_1,\dots,{\mathfrak x}_m\},{\mathfrak z}_N\rangle)\to\inf.
$$
{\sl For some feasible
${\bar{\mathfrak x}}_1,\dots,{\bar{\mathfrak x}}_m$ to have a
Pareto-optimal convex hull it is necessary and sufficient
that there be $\alpha_1,\dots,\alpha_m\in\mathbb R_+$ not
vanishing simultaneously and
positive Borel measures $\mu_1,\dots,\mu_m$ and $\nu_1,\dots, \nu_m$ on~$S_{N-1}$ such that}
$$
\gathered
\nu_1+\dots+\nu_m=\mu({\mathfrak z}_N);\\
\bar{\mathfrak x}_k(z)={\mathfrak y}_k(z)\quad (z\in\operatorname{spt}(\mu_k));\quad\\
\alpha_k \mu(\bar{\mathfrak x}_k)=\mu_k+\nu_k\ (k:=1,\dots,m).
\endgathered
$$
\end{document} |
\begin{document}
\title*{Null recurrence and transience of random difference equations in the contractive case}
\titlerunning{Null recurrence and transience of RDE in the contractive case}
\author{Gerold Alsmeyer, Dariusz Buraczewski and Alexander Iksanov}
\institute{Gerold Alsmeyer \at Institute of Mathematical Stochastics, Department
of Mathematics and Computer Science, University of M\"unster,
Einsteinstrasse 62, D-48149 M\"unster, Germany.\at
\email{[email protected]}\\
Dariusz Buraczewski \at Institute of Mathematics, University of Wroclaw,
pl. Grunwaldzki 2/4, 50-384 Wroclaw, Poland.\at
\email{[email protected]}\\
Alexander Iksanov \at Faculty of Computer Science and Cybernetics, Taras Shevchenko National University of Kyiv, 01601 Kyiv, Ukraine and Institute of Mathematics, University of Wroclaw, pl. Grunwaldzki 2/4, 50-384 Wroclaw, Poland. \at
\email{[email protected]}}
\maketitle
\abstract{Given a sequence $(M_{k}, Q_{k})_{k\ge 1}$ of independent, identically distributed ran\-dom vectors with nonnegative components, we consider the recursive Markov chain $(X_{n})_{n\ge 0}$, defined by the random difference equation $X_{n}=M_{n}X_{n-1}+Q_{n}$ for $n\ge 1$, where $X_{0}$ is independent of $(M_{k}, Q_{k})_{k\ge 1}$. Criteria for the null recurrence/transience are provided in the situation where $(X_{n})_{n\ge 0}$ is contractive in the sense that $M_{1}\cdot\ldots\cdot M_{n}\to 0$ a.s., yet occasional large values of the $Q_{n}$ overcompensate the contractive behavior so that positive recurrence fails to hold. We also investigate the attractor set of $(X_{n})_{n\ge 0}$ under the sole assumption that this chain is locally contractive and recurrent.}
\bigskip
{\noindent \textbf{AMS 2000 subject classifications:}
60J10; 60F15\ }
{\noindent \textbf{Keywords:} attractor set, null recurrence, perpetuity, random difference equation, transience}
\section{Introduction}\label{sec:intro}
Let $(M_{n}, Q_{n})_{n\ge 1}$ be a sequence of independent, identically distributed (iid) $\mathbb{R}_+^{2}$-valued random vectors with common law $\mu$ and generic copy $(M, Q)$, where $\mathbb{R}_+:=[0,\infty)$. Further, let $X_{0}$ be a nonnegative random variable which is independent of $(M_{n},Q_{n})_{n\ge 1}$. Then the sequence $(X_{n})_{n\ge 0}$, recursively defined by the random difference equation (RDE)
\begin{equation}\label{chain}
X_{n}\ :=\ M_{n}X_{n-1}+Q_{n},\quad n\ge 1,
\end{equation}
forms a temporally homogeneous Markov chain with transition kernel $P$ given by
$$ Pf(x)\ =\ \int f(mx+q)\ {\rm d}\mu(m,q) $$
for bounded measurable functions $f:\mathbb{R}\to\mathbb{R}$. The operator $P$ is Feller because it maps bounded continuous $f$ to functions of the same type. To underline the role of the starting point we occasionally write $X_{n}^{x}$ when $X_{0} = x$ a.s. Since $M$, $Q$ and $X_{0}$ are nonnegative, $(X_{n})_{n\ge 0}$ has state space $\mathbb{R}_{+}$.
The sequence $(X_{n})_{n\ge 0}$ may also be viewed as a \emph{forward iterated function system}, viz.
$$ X_{n}\ =\ \Psi_{n}(X_{n-1})\ =\ \Psi_{n}\circ\ldots\circ\Psi_{1}(X_{0}),\quad n\ge 1, $$
where $\Psi_{n}(t):=Q_{n}+M_{n} t$ for $n\ge 1$ and $\circ$ denotes composition, and thus opposed to its closely related counterpart
of \emph{backward iterations}
$$\widehat{X}_{0}\ :=\ X_{0}\quad\text{and}\quad \widehat{X}_{n}\ :=\ \Psi_{1}\circ\ldots\circ\Psi_{n}(X_{0}),\quad n\ge 1. $$
The relation is established by the obvious fact that $X_{n}$ has the same law as $\widehat{X}_{n}$ for each $n$, regardless of the law of $X_{0}$.
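As a quick sanity check (our Python sketch; the variable names are ours), one can run both iteration orders on a common realization of $(M_k,Q_k)_{k\le n}$: pathwise, the backward iterate equals $\Pi_n X_0+\sum_{k=1}^n\Pi_{k-1}Q_k$, while the forward iterate only shares its law.

```python
import random

random.seed(0)
n, x0 = 25, 1.0
# Illustrative choice: M_k uniform on (0.1, 0.9), Q_k standard exponential.
pairs = [(random.uniform(0.1, 0.9), random.expovariate(1.0)) for _ in range(n)]

forward = x0
for m, q in pairs:                 # X_n = Psi_n o ... o Psi_1 (X_0)
    forward = m * forward + q

backward = x0
for m, q in reversed(pairs):       # hat X_n = Psi_1 o ... o Psi_n (X_0)
    backward = m * backward + q

# Closed form of the backward iterate: hat X_n = Pi_n X_0 + sum_k Pi_{k-1} Q_k.
pi, acc = 1.0, 0.0
for m, q in pairs:
    acc += pi * q
    pi *= m
closed = pi * x0 + acc
```

Up to rounding, `backward` and `closed` coincide on every path, whereas `forward` generally differs from both pathwise.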
Put
$$\Pi_{0}\ :=\ 1\quad\text{and}\quad\Pi_{n}\ :=\ M_{1}M_{2}\cdot\ldots\cdot M_{n},\quad n\ge 1.$$
Assuming that
\begin{equation}\label{trivial}
\mathbb{P}(M=0)\ =\ 0\quad \text{and}\quad\mathbb{P} (Q=0)\ <\ 1
\end{equation}
and
\begin{equation}\label{trivial2}
\mathbb{P}(Mr+Q=r)\ <\ 1\quad\text{for all }r\ge 0,
\end{equation}
Goldie and Maller \cite[Theorem 2.1]{GolMal:00} showed (actually, these authors did not assume that $M$ and $Q$ are nonnegative) that the series
$\sum_{k\ge 1}\Pi_{k-1}Q_{k}$, called \emph{perpetuity}, is a.s.\ convergent provided that
\begin{equation}\label{33}
\lim_{n\to\infty}\Pi_{n}\ =\ 0\quad\text{a.s.\quad and}\quad I_{Q}\ :=\ \int_{(1,\,\infty)}J_{-}(x)\ \mathbb{P}(\log Q\in {\rm d}x)\ <\ \infty,
\end{equation}
where
\begin{equation}\label{jx}
J_{-}(y):=\frac{y}{\mathbb{E}(y\wedge\log_-M)},\quad y>0
\end{equation}
and $\log_- x=-\min(\log x, 0)$. Equivalently, the Markov chain $(X_{n})_{n\ge 0}$ is then positive recurrent with unique invariant distribution given by the law of the perpetuity. It is also well-known what happens in the ``trivial cases'' when at least one of the conditions \eqref{trivial} and \eqref{trivial2} fails \cite[Theorem 3.1]{GolMal:00}:
\begin{description}[(b)]
\item[(a)] If $\mathbb{P}(M=0)>0$, then $\tau:=\inf\{k\ge 1:M_{k}=0\}$ is a.s. finite, and the perpetuity trivially converges to the a.s.\ finite random variable $\sum_{k=1}^{\tau}\Pi_{k-1}Q_{k}$, its law being the unique invariant distribution of $(X_{n})_{n\ge 0}$.
\item[(b)] If $\mathbb{P}(Q=0)=1$, then $\sum_{k\ge 1}\Pi_{k-1}Q_{k}=0$ a.s.
\item[(c)] If $\mathbb{P}(Q+Mr=r)=1$ for some $r\ge 0$ and $\mathbb{P}(M=0)=0$, then either $\delta_{r}$, the Dirac measure at $r$, is the unique invariant distribution of $(X_{n})_{n\ge 1}$, or every distribution is invariant.
\end{description}
Further information on RDE and perpetuities can be found in the recent books \cite{BurDamMik:16} and \cite{Iksanov:17}.
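To get a concrete feel for the quantity $J_-$ in \eqref{jx}, here is a Monte Carlo sketch of ours (the choice of $M$ uniform on $(0,1)$ is an assumption made only for this illustration; then $\log_- M$ is standard exponential and $\mathbb{E}(y\wedge\log_- M)=1-e^{-y}$, so $J_-(y)=y/(1-e^{-y})$ in closed form):

```python
import math
import random

def J_minus_mc(y, n, rng):
    # Monte Carlo estimate of J_-(y) = y / E(min(y, log_- M)) for M uniform on (0,1),
    # in which case log_- M = -log M is Exp(1)-distributed.
    acc = sum(min(y, -math.log(1.0 - rng.random())) for _ in range(n))
    return y / (acc / n)

rng = random.Random(11)
y = 2.0
estimate = J_minus_mc(y, 200000, rng)
exact = y / (1.0 - math.exp(-y))
```

The estimate agrees with the closed form to within Monte Carlo error; condition \eqref{33} then asks whether $J_-(\log Q)$ is integrable over the event $\{\log Q>1\}$.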
If \eqref{trivial}, \eqref{trivial2},
\begin{equation}\label{30}
\lim_{n\to\infty}\Pi_{n}=0\quad\text{a.s.\quad and}\quad I_{Q}\,=\,\infty
\end{equation}
hold, which are assumptions in most of our results hereafter (with the exception of Section \ref{attr}) and particularly satisfied if
\begin{equation}\label{eq:log}
-\infty\ \le\ \mathbb{E}\log M\ <\ 0\quad\text{and}\quad\mathbb{E}\log_{+} Q\ =\ \infty,
\end{equation}
where $\log_+ x=\max(\log x, 0)$, then the afore-stated result \cite[Theorem 2.1]{GolMal:00} by Goldie and Maller implies that $(X_{n})_{n\ge 0}$ must be either null recurrent or transient. Our purpose is to provide conditions for each of these alternatives and also to investigate the path behavior of $(X_{n})_{n\ge 0}$. We refer to \eqref{30} as the \emph{divergent contractive case} because, on the one hand, $\Pi_{n}\to 0$ a.s. still makes the maps $\Psi_{n}\circ\ldots\circ\Psi_{1}$ contractions for sufficiently large $n$, while, on the other hand, $I_Q=\infty$ entails that occasional large values of the $Q_{n}$ overcompensate this contractive behavior in such a way that positive recurrence no longer holds. As a consequence, $\sum_{k\ge 1}\Pi_{k-1}Q_{k}=\infty$ a.s. and so the backward iterations $\widehat{X}_{n}=\Pi_{n}X_{0}+\sum_{k=1}^{n}\Pi_{k-1}Q_{k}$ diverge to $\infty$ a.s. regardless of whether the chain $(X_{n})_{n\ge 0}$ is null recurrent or transient. The question of which alternative occurs relies on a delicate interplay between the $\Pi_{n}$ and the $Q_{n}$. Our main results (Theorems \ref{main11} and \ref{main12}), for simplicity here confined to the situation when \eqref{trivial}, \eqref{trivial2} and \eqref{eq:log} hold and $s:=\lim_{t\to\infty}t\,\mathbb{P}(\log Q>t)$ exists, assert that $(X_{n})_{n\ge 0}$ is null recurrent if $s<-\mathbb{E}\log M$ and transient if $s>-\mathbb{E}\log M$. For deterministic $M\in (0,1)$, i.e., autoregressive sequences $(X_{n})_{n\ge 0}$, this result goes back to Kellerer \cite[Theorem 3.1]{Kellerer:92} and was later also proved by Zeevi and Glynn \cite[Theorem 1]{ZeeviGlynn:04}, though under an extra assumption, namely that $Q$ has log-Cauchy tails with scale parameter $s$, i.e.
$$ \mathbb{P}(\log(1+Q)>t)\ =\ \frac{1}{1+st}\quad\text{for all }t>0. $$
On the other hand, they could show null recurrence of $(X_{n})_{n\ge 0}$ even in the boundary case $s=-\log M$. Kellerer's result will be of some relevance here because we will take advantage of it in combination with a stochastic comparison technique (see Section \ref{M<=gamma<1}, in particular Proposition \ref{Kellerer's result}). Finally, we mention work by Bauernschubert \cite{Bauernschubert:13}, Buraczewski and Iksanov \cite{Buraczewski+Iksanov:15}, Pakes \cite{Pakes:83} and, most recently, by Zerner \cite{Zerner:2016+} on the divergent contractive case, yet only the last one studies the recurrence problem and is in fact close to our work. We will therefore comment on the connections in more detail in Remark \ref{zer}.
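The log-Cauchy tail displayed above is convenient to sample by inverse transform (our sketch; the sampler name is an assumption): solving $1/(1+st)=u$ gives $\log(1+Q)=(1/U-1)/s$ for $U$ uniform on $(0,1]$.

```python
import random

def sample_log1pQ(s, rng):
    # Inverse transform for P(log(1+Q) > t) = 1/(1 + s*t).
    u = 1.0 - rng.random()          # uniform on (0, 1]
    return (1.0 / u - 1.0) / s

s = 0.5
rng = random.Random(7)
draws = [sample_log1pQ(s, rng) for _ in range(200000)]

# Empirical tail at t0 versus the target 1/(1 + s*t0) (= 1/3 at t0 = 4).
t0 = 4.0
empirical = sum(1 for t in draws if t > t0) / len(draws)
target = 1.0 / (1.0 + s * t0)
```

Note that $\mathbb{E}\log_{+}Q=\infty$ for such $Q$, so sampled values of $Q$ itself quickly overflow floating point; working on the scale of $\log(1+Q)$, as here, avoids that.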
In the critical case $\mathbb{E}\log M=0$ not studied here, when $\limsup_{n\to\infty}\Pi_{n}=\infty$ a.s. and thus non-contraction holds, a sufficient criterion for the null recurrence of $(X_{n})_{n\ge 0}$ and the existence of an essentially unique invariant Radon measure $\nu$ was given by Babillot et al. \cite{BabBouElie:97}, namely
$$ \mathbb{E}|\log M|^{2+\delta}\,<\,\infty\quad\text{and}\quad\mathbb{E}(\log_{+}Q)^{2+\delta}\,<\,\infty\quad\text{for some }\delta>0. $$
For other aspects like the tail behavior of $\nu$ or the convergence of $\widehat{X}_{n}$ after suitable normalization see \cite{Brofferio:03, Buraczewski:07, Grincev:76, HitczenkoWes:11,Iksanov+Pilipenko+Samoilenko:17,RachSamo:95}.
The paper is organized as follows. In Section \ref{sec:background}, we review known results about general locally contractive Markov chains which form the theoretical basis of the present work. Our main results are stated in Section \ref{results} and proved in Sections \ref{M<=gamma<1}, \ref{sec:tail lemma} and \ref{sec:main11 and main12}. In Section \ref{attr} we investigate the attractor set of the Markov chain $(X_{n})_{n\ge 0}$ under the sole assumption that $(X_{n})_{n\ge 0}$ is locally contractive and recurrent.
\section{Theoretical background}\label{sec:background}
We start by giving some useful necessary and sufficient conditions for the transience and recurrence of the sequence $(X_{n})_{n\ge 0}$. The following definition plays a fundamental role in the critical case $\mathbb{E}\log M=0$, see \cite{BabBouElie:97,Benda:98b,Brofferio:03,BroBura:13,Buraczewski:07,PeigneWoess:11a}. A general Markov chain $(X_{n})_{n\ge 0}$, possibly taking values of both signs, is called \emph{locally contractive} if, for any compact set $K$ and all $x,y\in\mathbb{R}$,
\begin{equation}\label{eq: local contraction}
\lim_{n\to\infty}\big| X_{n}^{x}-X_{n}^{y}\big| \cdot\vec{1}_{\{X_{n}^{x}\in K \}}\ =\ 0\quad\text{a.s.}
\end{equation}
For the chain $(X_{n})_{n\ge 0}$ to be studied here, we observe that, under \eqref{30},
$$ \big| X_{n}^{x}-X_{n}^{y} \big|\ =\ \Pi_{n}|x-y|\ \underset{n\to\infty}{\longrightarrow} \ 0\quad\text{ a.s.} $$
for all $x,y\in\mathbb{R}$. This means that $(X_{n})_{n\ge 0}$ is contractive and hence locally contractive.
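This pathwise identity can be checked numerically (our Python sketch with illustrative parameter choices): coupling two copies of the chain through a common driving sequence, their gap is exactly $\Pi_n(x-y)$ up to rounding.

```python
import random

random.seed(3)
x, y, n = 10.0, -2.0, 40
Xx, Xy, pi = x, y, 1.0
for _ in range(n):
    # Common (M_k, Q_k) for both copies; M_k < 1 forces Pi_n -> 0.
    m, q = random.uniform(0.2, 0.8), random.expovariate(1.0)
    Xx, Xy, pi = m * Xx + q, m * Xy + q, pi * m

gap = Xx - Xy
predicted = pi * (x - y)
```

After $40$ steps the two copies are numerically indistinguishable, which is the contraction in action.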
$$ \mathbb{P}\left(\lim_{n\to\infty}|X_{n}^{x}-x|=\infty\right)\ =\ 1$$
for any $x\in\mathbb{R}$, in which case the chain is called \emph{transient}. We quote the following result from \cite[Lemma 2.2]{PeigneWoess:11a}.
\begin{Lemma}\label{lem:1}
If $(X_{n})_{n\ge 0}$ is locally contractive, then the following dichotomy holds: either
\begin{equation}\label{eq:3}
\mathbb{P}\left(\lim_{n\to\infty}|X_{n}^{x}-x|=\infty\right)\ =\ 0\quad\text{for all }x\in\mathbb{R}
\end{equation}
or
\begin{equation}\label{eq:4}
\mathbb{P}\left(\lim_{n\to\infty}|X_{n}^{x}-x|=\infty\right)\ =\ 1\quad\text{for all }x\in\mathbb{R}.
\end{equation}
\end{Lemma}
The lemma states that either $(X_{n})_{n\ge 0}$ is transient or visits a large interval infinitely often (i.o.). The Markov chain $(X_{n})_{n\ge 0}$ is called \emph{recurrent} if there exists a nonempty closed set $L\subset\mathbb{R}$ such that $\mathbb{P}(X_{n}^{x}\in U\text{ i.o.})=1$ for every $x\in L$
and every open set $U$ that intersects $L$. Plainly, recurrence is a local property of the path of $(X_{n})_{n\ge 0}$.
The next lemma can be found in \cite[Theorem 3.8]{Benda:98b} and \cite[Theorem 2.13]{PeigneWoess:11a}.
\begin{Lemma}\label{lem:3}
If $(X_{n})_{n\ge 0}$ is locally contractive and recurrent, then there exists a unique (up to a multiplicative constant) invariant locally finite measure $\nu$.
\end{Lemma}
The Markov chain $(X_{n})_{n\ge 0}$ is called \emph{positive recurrent} if $\nu(L)<\infty$ and \emph{null recurrent}, otherwise.
Our third lemma was stated as Proposition 1.3 in \cite{Benda:98b}. Since this report has never been published, we present a short proof.
\begin{Lemma}\label{lem:2}
Let $(X_{n})_{n\ge 0}$ be a locally contractive Markov chain and $U$ an open subset of $\mathbb{R}$. Then $\mathbb{P}(X_{n}^{x}\in U~{\rm i.o.})<1$ for some $x\in\mathbb{R}$ implies $\sum_{n\ge 0} \mathbb{P}(X_{n}^{y}\in K)<\infty$ for all $y\in\mathbb{R}$ and all compact $K\subset U$.
\end{Lemma}
\begin{proof}
Take $x$ such that $\mathbb{P}(X_{n}^{x}\in U\ \text{i.o.})<1$. Then there exists $n_{1}\in\mathbb{N}$ such that
$$ \mathbb{P}(X_{n}^{x}\notin U\text{ for all }n\ge n_{1})\ >\ 0. $$
Now fix an arbitrary $y\in \mathbb{R}$ and a compact $K\subset U$. With the compact set $K_{y}:=K \cup\{y\}$, local contractivity implies that for some $n_{2}\in\mathbb{N}$
\begin{equation}\label{star}
\mathbb{P}\left(X_{n}^{z}\notin K\text{ for all } n\ge n_{2}\ \text{ and some } z\in K_{y}\right)\ =:\ \delta\ >\ 0.
\end{equation}
For $z\in K_{y}$, consider the sequence of stopping times
\begin{align*}
T_{0}^{z}\ &=\ 0\quad\text{and}\quad T_{n}^{z}\ =\ \inf\{ k> T_{n-1}^{z}:\; X_{k}^{z}\in K\}\quad\text{for }n\ge 1.
\end{align*}
Then \eqref{star} implies that $\mathbb{P}(T^{z}_{n_{2}}<\infty)\le 1-\delta$ for each $z\in K_{y}$. Consequently,
$$ \mathbb{P}\left(T^{y}_{nn_{2}}<\infty\right)\ \le\ (1-\delta) \mathbb{P}\left(T_{(n-1)n_{2}}^{y}<\infty\right)\ \le\ (1-\delta)^{n} $$
for all $n\ge 1$ and thus
\begin{align*}
\sum_{n\ge 0} \mathbb{P}(X_{n}^{y} \in K)\ &=\ \mathbb{E}\left(\sum_{n\ge 0}\vec{1}_{\{X_{n}^{y}\in K\}}\right)\ \le\ \sum_{n\ge 0}\mathbb{P}\left(T_{n}^{y} <\infty\right)\\
&\le\ \sum_{n\ge 0} n_{2}\,\mathbb{P}\left(T_{n n_{2}}^{y} <\infty\right)\ \le\ n_{2}/\delta\ <\ \infty.\tag*{\qed}
\end{align*}
\end{proof}
A combination of Lemmata \ref{lem:1} and \ref{lem:2} provides us with
\begin{Prop}\label{transient}
For a locally contractive Markov chain $(X_{n})_{n\ge 0}$ on $\mathbb{R}$, the following assertions are equivalent:
\begin{description}[(b)]
\item[(a)] The chain is transient.
\item[(b)] $\lim_{n\to\infty}|X_{n}^{x}|=\infty$ a.s. for all $x\in\mathbb{R}$.
\item[(c)] $\mathbb{P}(X_{n}^{x}\in U~{\rm i.o.})< 1$ for any bounded open $U\subset\mathbb{R}$ and some/all $x\in\mathbb{R}$.
\item[(d)] $\sum_{n\ge 0}\mathbb{P}(X_{n}^{x}\in K)<\infty$ for any compact $K\subset\mathbb{R}$ and some/all $x\in\mathbb{R}$.
\end{description}
\end{Prop}
\begin{proof}
The equivalence of (a), (b) and (c) is obvious. By Lemma \ref{lem:2}, (c) entails (d), while the Borel--Cantelli lemma gives the converse.\qed
\end{proof}
Now we consider the case when \eqref{eq:4} is satisfied. For any $\omega$, we define $L^{x}(\omega)$ to be the set of accumulation points of $(X_{n}^x(\omega))_{n\ge 0}$, i.e.
$$ L^{x}(\omega)\ :=\ \bigcap_{m\ge 1}\overline{\{X_{n}^x(\omega):n\ge m\}}, $$
where $\overline{C}$ denotes the closure of a set $C$. It is known \cite{Benda:98b,PeigneWoess:11a} that $L^x(\omega)$ does not depend on $x$ and $\omega$. In fact, there exists a deterministic set $L\subset\mathbb{R}$ (called the \emph{attractor set} or \emph{limit set}) such that
$$ \mathbb{P}\{L^x(\cdot) = L \ \text{ for all } x\in \mathbb{R}\}=1. $$
\begin{Prop}\label{recur}
For a locally contractive Markov chain $(X_{n})_{n\ge 0}$ on $\mathbb{R}$, the following assertions are equivalent:
\begin{description}[(b)]
\item[(a)] The chain is recurrent.
\item[(b)] $\liminf_{n\to\infty}|X_{n}^x-x|<\infty$ a.s. for all $x\in\mathbb{R}$.
\item[(c)] $\liminf_{n\to\infty}|X_{n}|<\infty$ a.s.
\item[(d)] $\sum_{n\ge 0}\mathbb{P}\{X_{n}^{x}\in K\}=\infty$ for a nonempty compact set $K$ and some/all $x\in\mathbb{R}$.
\end{description}
\end{Prop}
\begin{proof}
In view of Proposition \ref{transient} (by contraposition), it only remains to verify for ``(a)$\Rightarrow$(d)'' that the sum in (d) is indeed infinite for some compact $K\ne\emptyset$ and \emph{all} $x\in\mathbb{R}$. W.l.o.g. let $K=[-2b,2b]$ for some $b>0$ and pick $y\in\mathbb{R}$ such that, by (a), $\sum_{n\ge 0}\vec{1}_{\{|X_{n}^{y}|\le b\}}=\infty$ a.s.\ and thus $\sum_{n\ge 0}\mathbb{P}(|X_{n}^{y}|\le b)=\infty$. Local contractivity implies that $\sigma_{x}:=\sup\{n\ge 0:|X_{n}^{x}-X_{n}^{y}|>b,\,|X_{n}^{y}|\le b\}$ is a.s.\ finite for \emph{all} $x\in\mathbb{R}$. Consequently, $X_{n}^{x}$ hits $[-2b,2b]$ whenever $X_{n}^{y}$ hits $[-b,b]$ for $n>\sigma_{x}$, in particular $\sum_{n\ge 0}\vec{1}_{\{|X_{n}^{x}|\le 2b\}}=\infty$ a.s.\ and thus $\sum_{n\ge 0}\mathbb{P}(|X_{n}^{x}|\le 2b)=\infty$ for all $x\in\mathbb{R}$.\qed
\end{proof}
\section{Results}\label{results}
In order to formulate the main result, we need
\begin{align}
\begin{split}\label{eq:def of s_* and s^*}
s_{*}\ :=\ &\liminf_{t\to\infty}\,t\,\mathbb{P}(\log Q>t),\\
s^{*}\ :=\ &\limsup_{t\to\infty}\,t\,\mathbb{P}(\log Q>t)
\end{split}
\end{align}
for which $0\le s_{*}\le s^{*}\le\infty$ holds true. In some places, the condition
\begin{equation}\label{tail2}
\lim_{t\to\infty}t\,\mathbb{P}(\log Q>t)\ =\ s\ \in\ [0,\infty]
\end{equation}
will be used. Finally, put $\fm^{\pm}:=\mathbb{E}\log_{\pm}M$ and, if $\fm^{+}\wedge\fm^{-}<\infty$,
$$ \fm\ :=\ \mathbb{E}\log M\ =\ \fm^{+}-\fm^{-} $$
which is then in $[-\infty,0)$ by our standing assumption $\Pi_{n}\to 0$ a.s.
\begin{Theorem}\label{main11}
Let $\fm\in [-\infty,0)$ and \eqref{trivial}, \eqref{trivial2}, \eqref{30} be valid. Then the following assertions hold:
\begin{description}[(b)]\itemsep2pt
\item[(a)] $(X_{n})_{n\ge 0}$ is null recurrent if $s^{*}<-\fm$.
\item[(b)] $(X_{n})_{n\ge 0}$ is transient if $s_{*}>-\fm\ ($thus $\fm>-\infty)$.
\end{description}
\end{Theorem}
\begin{Rem}\label{zer}\rm
In the recent paper \cite{Zerner:2016+}, Zerner studies the recurrence/transience of $(X_{n})_{n\ge 0}$ defined by \eqref{chain} in the more general setting where $M$ is a nonnegative $d\times d$ random matrix and $Q$ an $\mathbb{R}_{+}^{d}$-valued random vector. A specialization of his Theorem 5 to the one-dimensional case $d=1$ reads as follows. Suppose that
\begin{equation}\label{boun}
M\in [a,b]\quad\text{a.s.}
\end{equation}
for some $0<a<b<\infty$ and that either $\lim_{t\to\infty}\,t^{\beta}\,\mathbb{P}(\log Q>t)=0$ for some $\beta\in (2/3,1)$, or $s_{*}>-\fm$. Let $y\in (0,\infty)$ be such that $\mathbb{P}(Q\le y)>0$. Then $(X_{n})_{n\ge 0}$ is recurrent if, and only if,
\begin{equation}\label{eq:Zerner condition}
\sum_{n\ge 0}\prod_{k=0}^{n}\mathbb{P}(Q\le ye^{-k\fm})\ =\ \infty.
\end{equation}
It is not difficult to verify that \eqref{eq:Zerner condition} holds if $s^{*}<-\fm$ and that it fails if $s_{*}>-\fm$. Therefore, Zerner's result contains our Theorem \ref{main11} under the additional assumption \eqref{boun}.
\end{Rem}
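As a purely numerical aside (not part of the argument), the claim in the last sentence can be checked for the illustrative tail model $\mathbb{P}(\log Q>t)=\min(1,s/t)$ with drift $\fm=-1$: the partial sums of the series \eqref{eq:Zerner condition} keep growing when $s<-\fm$ and stabilize when $s>-\fm$. The sketch below and all its parameter values are hypothetical choices for illustration only.

```python
import math

def zerner_partial_sum(s, m, N, x=10.0):
    """Partial sum of sum_{n>=0} prod_{k=0}^{n} P(Q <= y*e^{-k*m}) with
    y = e^x, for the illustrative tail P(log Q > t) = min(1, s/t), i.e.
    F(t) = P(log Q <= t) = max(0, 1 - s/t) for t > 0; m < 0 is the drift."""
    total, log_prod = 0.0, 0.0
    for n in range(N + 1):
        t = x - m * n                        # argument of F; grows since m < 0
        F = max(0.0, 1.0 - s / t) if t > 0 else 0.0
        log_prod = -math.inf if F == 0.0 else log_prod + math.log(F)
        total += math.exp(log_prod)          # running partial sum of products
    return total

# s = 0.5 < -m = 1: products decay like n^{-1/2}, partial sums keep growing;
# s = 2.0 > -m = 1: products decay like n^{-2}, partial sums stabilize.
grow = [zerner_partial_sum(0.5, -1.0, N) for N in (10_000, 40_000)]
stab = [zerner_partial_sum(2.0, -1.0, N) for N in (10_000, 100_000)]
```

With these parameters the first pair roughly doubles while the second pair agrees to within about $10^{-2}$, matching the divergence/convergence dichotomy.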
\begin{Rem}\label{Lyapunov functions}\rm
If $\log M$ and $\log Q$ are both integrable and $D(x):=\log X_{1}^{x}-\log X_{0}^{x}$, then
$$ \mathbb{E} D(x)\ =\ \mathbb{E}\log(M+x^{-1}Q)\ \xrightarrow{x\to\infty}\ 0 $$
shows that $(\log X_{n})_{n\ge 0}$ forms a Markov chain with asymptotic drift zero. Such chains are studied at length by Denisov, Korshunov and Wachtel in a recent monograph-like publication \cite{DenKorWach:16}. They also provide conditions for recurrence and transience in terms of truncated moments of $D(x)$, see their Corollaries 2.11 and 2.16, but these appear to be more complicated and more restrictive than ours.
\end{Rem}
\begin{Rem}\rm
Here is a comment on the boundary case $s=-\fm$ not covered by Theorem \ref{main11}. Assuming $M=e^\fm$ a.s., it can be shown that the null recurrence/transience of $(X_{n})_{n\ge 0}$ is equivalent to the
divergence/convergence of the series
$$ \sum_{n\ge 1}\mathbb{P}\left(\max_{1\le k\le n}\,e^{\fm (k-1)}Q_{k}\le e^{x}\right)\ =\ \sum_{n\ge 1}\prod_{k=0}^{n-1} F(x-\fm k) $$
for some/all $x\ge 0$, where $F(y):=\mathbb{P}(\log Q\le y)$. Indeed, assuming $X_{0}=0$, the transience assertion follows when using $\mathbb{P}(X_{n}\le e^{x})\le \mathbb{P}(\max_{1\le k\le n}\,e^{\fm (k-1)}Q_{k}\le e^{x})$, while the null recurrence claim is shown by a thorough inspection and adjustment of the proof of Theorem \ref{main11}(a). Using Kummer's test as stated in \cite{Tong:94}, we then further conclude that $(X_{n})_{n\ge 0}$ is null recurrent if, and only if, there exist positive $p_{1},p_{2},\ldots$ such that $F(-\fm k)\ge p_{k}/p_{k+1}$ and $\sum_{k\ge 1}(1/p_{k})=\infty$. For applications, the following sufficient condition, which is a consequence of Bertrand's test \cite[p.~408]{Stromberg:81}, may be more convenient. If
$$ 1-F(x)\ =\ \frac{-\fm}{x}+\frac{f(x)}{x\log x}, $$
then $(X_{n})_{n\ge 0}$ is null recurrent if $\displaystyle\limsup_{x\to\infty}f(x)<-\fm$, and transient if $\displaystyle\liminf_{x\to\infty}f(x)>-\fm$.
\end{Rem}
If $\fm^{-}=\fm^{+}=\infty$ and $s^{*}<\infty$, then $(X_{n})_{n\ge 0}$ is \emph{always} null recurrent as the next theorem will confirm. Its proof will be based on finding an appropriate subsequence of $(X_{n})_{n\ge 0}$ which satisfies the assumptions of Theorem \ref{main11}(a).
\begin{Theorem}\label{main12}
Let $\fm^{+}=\fm^{-}=\infty$, $s^{*}<\infty$ and \eqref{trivial}, \eqref{trivial2}, \eqref{30} be valid. Then $(X_{n})_{n\ge 0}$ is null recurrent.
\end{Theorem}
The two theorems are proved in Section \ref{sec:main11 and main12} after some preparatory work in Sections \ref{M<=gamma<1} and \ref{sec:tail lemma}.
\begin{Rem}\label{rem:main12}\rm
It is worthwhile to point out that the assumptions of the previous theorem impose some constraint on the tails of $\log_{+}M$. Namely, given these assumptions, the negative divergence of the random walk $S_{n}:=\log\Pi_{n}$, $n\ge 0$, that is $S_{n}\to-\infty$ a.s., entails
\begin{align*}
I_{M}\ =\ \int_{(1,\,\infty)}J_{-}(x)\ \mathbb{P}(\log M\in {\rm d}x)\ <\ \infty
\end{align*}
by Erickson's theorem \cite[Theorem 2]{Erickson:73}. But this in combination with $I_{Q}=\infty$ and $s^{*}<\infty$ further implies by stochastic comparison that
$$ r_{*}\ :=\ \liminf_{t\to\infty}t\,\mathbb{P}(\log M>t)\ =\ 0. $$
Indeed, if the latter failed to hold, i.e. $r_{*}>0$, then
$$ \mathbb{P}(\log M>t)\ \ge\ \frac{r_{*}}{2t}\ \ge\ \frac{r_{*}}{4s^{*}}\,\mathbb{P}(\log Q>t) $$
for all sufficiently large $t$, say $t\ge t_{0}$, which in turn would entail the contradiction
$$ I_{M}-J_-(0+)\ \ge\ \frac{r_{*}}{4s^{*}}\int_{(t_{0},\infty)}J_{-}'(x)\ \mathbb{P}(\log Q>x)\ {\rm d}x\ =\ \infty. $$
\end{Rem}
\section{The cases $M\le\gamma$ and $M\ge\gamma$: Two comparison lemmata and Kellerer's result}\label{M<=gamma<1}
This section collects some useful results for the cases when $M\le\gamma$ or $M\ge\gamma$ a.s.\ for a constant $\gamma\in (0,1)$, in particular Kellerer's unpublished recurrence result \cite{Kellerer:92} for this situation, see Proposition \ref{Kellerer's result} below. Whenever given iid nonnegative $Q_{1},Q_{2},\ldots$ with generic copy $Q$, let $(X_{n}(\gamma))_{n\ge 0}$ be defined by
$$ X_{n}(\gamma)\ =\ \gamma X_{n-1}(\gamma)+Q_{n},\quad n\ge 1,$$ where $X_{0}(\gamma)$ is independent of $(Q_{n})_{n\ge 1}$.
We start with two comparison lemmata which treat two RDEs with identical $M\le\gamma$ but different $Q$.
\begin{Lemma}\label{comparison lemma}
Let $(M_{n},Q_{n},Q_{n}')_{n\ge 1}$ be a sequence of iid random vectors with nonnegative components and generic copy $(M,Q,Q')$ such that $M\le\gamma$ a.s. for some $\gamma\in (0,1)$ and
\begin{equation}\label{eq:tail condition Q'}
\mathbb{P}(Q'>t)\ \ge\ \mathbb{P}(Q>t)
\end{equation}
for some $t_{0}\ge 0$ and all $t\ge t_{0}$. Define
$$ X_{n}\,:=\,M_{n}X_{n-1}+Q_{n}\quad\text{and}\quad X_{n}'\,:=\,M_{n}X_{n-1}'+Q_{n}' $$
for $n\ge 1$, where $X_{0}'$ is independent of $X_{0}$ and $(M_{k}, Q_{k})_{k\ge 1}$. Then
\begin{align*}
(X_{n})_{n\ge 0}\text{ transient}\quad\Longrightarrow\quad (X_{n}')_{n\ge 0}\text{ transient}\\
\shortintertext{or, equivalently,}
(X_{n}')_{n\ge 0}\text{ recurrent}\quad\Longrightarrow\quad (X_{n})_{n\ge 0}\text{ recurrent.}
\end{align*}
\end{Lemma}
\begin{proof}
The tail condition \eqref{eq:tail condition Q'} ensures that we may choose a coupling $(Q,Q')$ such that $Q'\ge Q-t_{0}$ a.s. Then, with $(Q_{n},Q_{n}')_{n\ge 1}$ being iid copies of $(Q,Q')$, it follows that
\begin{align*}
X_{n}'-X_{n}\ &=\ M_{n}(X_{n-1}'-X_{n-1})\ +\ Q_{n}'-Q_{n}\\
&\ge\ M_{n}(X_{n-1}'-X_{n-1})\ -\ t_{0}\\
\ldots\ &\ge\ \left(\prod_{k=1}^{n}M_{k}\right)(X_{0}'-X_{0})\ -\ t_{0}\sum_{k=0}^{n-1}\gamma^{k}\quad\text{a.s.}
\shortintertext{and thereby}
&\liminf_{n\to\infty}\,(X_{n}'-X_{n})\ \ge\ -\frac{t_{0}}{1-\gamma}\quad\text{a.s.}
\end{align*}
which obviously proves the asserted implication.\qed
\end{proof}
\begin{Lemma}\label{comparison lemma 2}
Replace condition \eqref{eq:tail condition Q'} in Lemma \ref{comparison lemma} with
\begin{equation}
Q'\ =\ \vec{1}_{\{Q>\beta\}}Q
\end{equation}
for some $\beta>0$, thus $\mathbb{P}(Q'=0)=\mathbb{P}(Q\le\beta)$. Then
$$ (X_{n}')_{n\ge 0}\text{ recurrent}\quad\Longleftrightarrow\quad (X_{n})_{n\ge 0}\text{ recurrent}. $$
\end{Lemma}
\begin{proof}
Here it suffices to point out that
\begin{align*}
|X_{n}'-X_{n}|\ &=\ |M_{n}(X_{n-1}'-X_{n-1})\ +\ Q_{n}'-Q_{n}|\\
&\le\ M_{n}|X_{n-1}'-X_{n-1}|\ +\ \vec{1}_{\{Q_{n}\le\betaeta\}}Q_{n}\\
\ldots\ &\le\ \left(\prod_{k=1}^{n}M_{k}\right)|X_{0}'-X_{0}|\ +\ \beta\sum_{k=0}^{n-1}\gamma^{k}\\
&\le\ \gamma^{n}|X_{0}'-X_{0}|\ +\ \frac{\beta}{1-\gamma}\quad\text{a.s.}
\end{align*}
for all $n\ge 1$.\qed
\end{proof}
The announced result by Kellerer including its proof (with some minor modifications), taken from his unpublished Technical Report \cite[Theorem~3.1]{Kellerer:92}, is given next.
\begin{Prop}\label{Kellerer's result}
Let $0<\gamma<1$. Then the following assertions hold:
\begin{description}
\item[(a)] $(X_{n})_{n\ge 0}$ is transient if $M\ge\gamma$ a.s. and
\begin{equation}\label{lower tail condition}
s_{*}\ =\ \liminf_{t\to\infty}t\,\mathbb{P}(\log Q>t)\ >\ \log(1/\gamma).
\end{equation}
\item[(b)] $(X_{n})_{n\ge 0}$ is null recurrent if $M\le\gamma$ a.s. and
\begin{equation}\label{upper tail condition}
s^{*}\ =\ \limsup_{t\to\infty}t\,\mathbb{P}(\log Q>t)\ <\ \log(1/\gamma).
\end{equation}
\end{description}
\end{Prop}
\begin{proof}
It is enough to consider (in both parts) the case when $M=\gamma$ a.s. and thus the Markov chain $(X_{n}(\gamma))_{n\ge 0}$ as defined above. We may further assume that $X_{0}(\gamma)=0$ and put $\theta:=\log(1/\gamma)$.
\emph{Transience.} It suffices to show that $\sum_{n\ge 1}\mathbb{P}(\widehat{X}_{n}(\gamma)\le e^{t})<\infty$ for all $t\ge 0$. Fixing $t$ and any $\varepsilon>0$ with $(1+\varepsilon)\theta<s_{*}$, pick $m\in\mathbb{N}$ so large that
$$ \inf_{k\ge m+1}k\theta\,\mathbb{P}(\log Q>t+k\theta)\ \ge \ (1+\varepsilon)\theta. $$
Then we infer for all $n>m$
\begin{align*}
\mathbb{P}(\widehat{X}_{n}(\gamma)\le e^{t})\ &=\ \mathbb{P}\left(\sum_{k=1}^{n}\gamma^{k-1}Q_{k}\le e^{t}\right)\\
&\le\ \mathbb{P}\big(\log Q_{k}\le t+(k-1)\theta,\,1\le k\le n\big)\\
&\le\ \prod_{k=m+1}^{n}\big(1-\mathbb{P}(\log Q>t+k\theta)\big)\\
&\le\ \prod_{k=m+1}^{n}\left(1-\frac{1+\varepsilon}{k}\right)\\
&\le\ \prod_{k=m+1}^{n}\left(1-\frac{1}{k}\right)^{1+\varepsilon}\ =\ \left(\frac{m}{n}\right)^{1+\varepsilon}
\end{align*}
where $(1-x)^{1+\varepsilon}\ge 1-(1+\varepsilon)x$ for all $x\in [0,1]$ has been utilized for the last inequality. Consequently, $\sum_{n\ge 1}\mathbb{P}(\widehat{X}_{n}(\gamma)\le e^{t})<\infty$, and the transience of $(X_{n}(\gamma))_{n\ge 0}$ follows by Proposition \ref{transient}.
\emph{Null recurrence.}
By Lemma \ref{comparison lemma 2}, we may assume w.l.o.g. that, for some sufficiently small $\varepsilon>0$, $\delta\,:=\,\mathbb{P}(Q=0)\,\ge\,\gamma^{\varepsilon}$ and
\begin{align*}
&\sup_{t\ge 1}t\,\mathbb{P}(\log Q>t)\ \le\ (1-\varepsilon)\theta.
\end{align*}
Put also $m_{n}:=\theta^{-1}(m+\log n)$ for integer $m\ge 1$ so large that $$g(x,n):=(x-1)\theta-\log n\ge 1\vee (1-\varepsilon)\theta$$ for all $x\in (m_{n},\infty)$. Note that $\delta^{m_{n}}\ge (e^mn)^{-\varepsilon}$. For all $n\ge 1$ so large that $g(n,n)>\theta$, we then infer
\begin{align*}
\mathbb{P}(\widehat{X}_{n}(\gamma)\le 1)\ &\ge\ \mathbb{P}\left(\max_{1\le k\le n}\gamma^{k-1}Q_{k}\le\frac{1}{n}\right)\\
&\ge\ \mathbb{P}(Q=0)^{m_{n}}\prod_{m_{n}+1\le k\le n}\mathbb{P}(\log Q\le g(k,n))\\
&\ge\ \delta^{m_{n}}\prod_{m_{n}+1\le k\le n}\left(1-\frac{(1-\varepsilon)\theta}{g(k,n)}\right)\\
&\ge\ (e^m n)^{-\varepsilon}\prod_{k=m}^{n}\left(1-\frac{1-\varepsilon}{k}\right)\\
&\ge\ (e^m n)^{-\varepsilon}\prod_{k=2}^{n}\left(1-\frac{1}{k}\right)^{1-\varepsilon}\ =\ \frac{e^{-m\varepsilon}}{n}.
\end{align*}
Here $(1-x)^{1-\varepsilon}\le 1-(1-\varepsilon)x$ for all $x\in [0,1]$ has been utilized for the last inequality. Hence, $\sum_{n\ge 1}\mathbb{P}(\widehat{X}_{n}(\gamma)\le 1)=\infty$, giving the recurrence of $(X_{n}(\gamma))_{n\ge 0}$ by Proposition \ref{recur}.\qed
\end{proof}
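To see the dichotomy of Proposition \ref{Kellerer's result} in action, one can iterate $X_{n}(\gamma)=\gamma X_{n-1}(\gamma)+Q_{n}$ numerically in log-scale, which avoids overflow for heavy-tailed $Q$. The tail model $\mathbb{P}(\log Q>t)=\min(1,s/t)$, realized as $\log Q=s/(1-U)$ with $U$ uniform on $[0,1)$, and all parameter choices below are illustrative assumptions, not part of the proof.

```python
import numpy as np

def simulate_log_chain(gamma, s, n_steps, seed=0):
    """Return log X_0, ..., log X_n for X_n = gamma*X_{n-1} + Q_n with
    X_0 = 1, via log X_n = logaddexp(log(gamma) + log X_{n-1}, log Q_n).
    Illustrative tail: log Q = s/(1-U), U uniform on [0,1), so that
    t*P(log Q > t) -> s as t -> infinity."""
    rng = np.random.default_rng(seed)
    log_q = s / (1.0 - rng.uniform(size=n_steps))   # log Q_1, ..., log Q_n
    log_gamma = np.log(gamma)
    log_x = np.empty(n_steps + 1)
    log_x[0] = 0.0                                  # X_0 = 1
    for n in range(n_steps):
        log_x[n + 1] = np.logaddexp(log_gamma + log_x[n], log_q[n])
    return log_x

theta = 1.0                                         # theta = log(1/gamma)
lx_rec = simulate_log_chain(np.exp(-theta), s=0.2, n_steps=50_000)  # s < theta
lx_tra = simulate_log_chain(np.exp(-theta), s=5.0, n_steps=50_000)  # s > theta
# In the first run log X_n keeps returning to moderate levels (recurrence);
# in the second, the running minimum over late stretches drifts upward.
```

For $s=0$ the chain is deterministic with $Q\equiv 1$ and converges to $1/(1-\gamma)$, which gives a quick sanity check of the iteration.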
Given a Markov chain $(Z_{n})_{n\ge 0}$, a sequence $(\sigma_{n})_{n\ge 0}$ is called a \emph{renewal stopping sequence} for this chain if the following conditions hold:
\begin{description}[(R2)]
\item[(R1)] $\sigma_{0}=0$ and the $\tau_{n}:=\sigma_{n}-\sigma_{n-1}$ are iid for $n\ge 1$.
\item[(R2)] There exists a filtration $\mathcal{F}=(\mathcal{F}_{n})_{n\ge 0}$ such that $(Z_{n})_{n\ge 0}$ is Markov-adapted and each $\sigma_{n}$ is a stopping time with respect to $\mathcal{F}$.
\end{description}
We define
$$ S_{n}\ :=\ \log\Pi_{n}\ =\ \sum_{k=1}^{n}\log M_{k} $$
for $n\ge 0$ and recall that, by our standing assumption, $(S_{n})_{n\ge 0}$ is a negatively divergent random walk $(S_{n}\to-\infty$ a.s.). For $c\in\mathbb{R}$, let $(\sigma^{>}_{n}(c))_{n\ge 0}$ and $(\sigma^{<}_{n}(c))_{n\ge 0}$ denote the possibly defective renewal sequences of ascending and descending ladder epochs associated with the random walk $(S_{n}+cn)_{n\ge 0}$, in particular
\betaegin{align*}
\sigma^{>}(c)\ &=\ \sigma^{>}_{1}(c)\ :=\ \inf\{n\ge 1:S_{n}+cn>0\}\ =\ \inf\{n\ge 1:\Pi_{n}>e^{-cn}\},\\
\sigma^{<}(c)\ &=\ \sigma^{<}_{1}(c)\ :=\ \inf\{n\ge 1:S_{n}+cn<0\}\ =\ \inf\{n\ge 1:\Pi_{n}<e^{-cn}\}.
\end{align*}
Plainly, these are renewal stopping sequences for $(X_{n})_{n\ge 0}$ whenever nondefective.
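For concreteness, here is a minimal sketch of how the first ladder epochs $\sigma^{>}(c)$ and $\sigma^{<}(c)$ are read off a finite trajectory of $(S_{n}+cn)_{n\ge 1}$; the sample increments in the example are hypothetical, and a `None` return signals that the epoch did not occur within the observed window (it may be defective).

```python
import numpy as np

def first_ladder_epochs(log_m, c):
    """First ascending/descending ladder epochs of S_n + c*n along a
    finite trajectory, where S_n = log_m[0] + ... + log_m[n-1]."""
    s = np.cumsum(log_m)                   # S_1, S_2, ..., S_len
    n = np.arange(1, len(log_m) + 1)
    up = np.flatnonzero(s + c * n > 0)     # epochs with S_n + cn > 0
    down = np.flatnonzero(s + c * n < 0)   # epochs with S_n + cn < 0
    sigma_up = int(up[0]) + 1 if up.size else None
    sigma_down = int(down[0]) + 1 if down.size else None
    return sigma_up, sigma_down

# log M_k = (-2, 3, -1) and c = 0.5 give S_n + cn = (-1.5, 2.0, 1.5),
# hence sigma^<(c) = 1 and sigma^>(c) = 2.
sigma_up, sigma_down = first_ladder_epochs(np.array([-2.0, 3.0, -1.0]), 0.5)
```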
\begin{Lemma}\label{lem:bounding lemma}
Let $c\ge 0$ and $\gamma=e^{-c}$.
\begin{description}[(b)]
\item[(a)] If $c$ is such that $\sigma^{<}(c)<\infty$ a.s., then, with $(\sigma_{n})_{n\ge 0}:=(\sigma^{<}_{n}(c))_{n\ge 0}$,
$$ X_{\sigma_{n}}\ \le\ X_{\sigma_{n}}(\gamma)\quad\text{and}\quad\widehat{X}_{\sigma_{n}}\ \le\ \widehat{Y}_{n}\quad\text{a.s.} $$
for all $n\ge 0$, where $X_{0}=X_{0}(\gamma)=\widehat{Y}_{0}$ and
$$ \widehat{Y}_{n}\ :=\ \sum_{k=1}^{n}\gamma^{\sigma_{k-1}}Q_{k}^{*}\quad\text{with}\quad Q_{n}^{*}\ :=\ \sum_{k=1}^{\sigma_{n}-\sigma_{n-1}}\frac{\Pi_{\sigma_{n-1}+k-1}}{\Pi_{\sigma_{n-1}}}Q_{\sigma_{n-1}+k} $$
for $n\ge 1$ denotes the sequence of backward iterations pertaining to the recursive Markov chain $Y_{n}=\gamma^{\sigma_{n}-\sigma_{n-1}}Y_{n-1}+Q_{n}^{*}$.
\item[(b)] If $c$ is such that $\sigma^{>}(c)<\infty$ a.s., then, with $(\sigma_{n})_{n\ge 0}:=(\sigma^{>}_{n}(c))_{n\ge 0}$,
$$ X_{\sigma_{n}}\ \ge\ X_{\sigma_{n}}(\gamma)\quad\text{and}\quad\widehat{X}_{\sigma_{n}}\ \ge\ \widehat{Y}_{n}\quad\text{a.s.} $$
for all $n\ge 0$, where $X_{0}=X_{0}(\gamma)=\widehat{Y}_{0}$ and $\widehat{Y}_{n}$ is defined as in (a) for the $\sigma_{n}$ given here.
\end{description}
\end{Lemma}
Plainly, one can take $c\in (0,-\fm)$ in (a) and $c\in (-\fm,\infty)$ in (b) if $-\infty<\fm<0$.
\begin{proof}
(a) Suppose that the $\sigma^{<}_{n}(c)$ are a.s. finite. To prove our claim for $X_{\sigma_{n}}$, we use induction on $n$. Since $\sigma_{0}=0$, we have $X_{\sigma_{0}}=X_{\sigma_{0}}(\gamma)$. For the inductive step suppose that $X_{\sigma_{n-1}}\le X_{\sigma_{n-1}}(\gamma)$ for some $n\ge 1$. Observe that, with $\tau_{n}=\sigma_{n}-\sigma_{n-1}$,
\begin{align}
\begin{split}\label{eq:crucial estimate}
M_{\sigma_{n-1}+k+1}\cdot\ldots\cdot M_{\sigma_{n}}\ &=\ \frac{\Pi_{\sigma_{n}}}{\Pi_{\sigma_{n-1}+k}}\\
&=\ e^{(S_{\sigma_{n}}+c\sigma_{n})-(S_{\sigma_{n-1}+k}+c(\sigma_{n-1}+k))-c(\tau_{n}-k)}\ \le\ \gamma^{\tau_{n}-k}
\end{split}
\end{align}
for all $0\le k\le\tau_{n}$. Using this and the inductive hypothesis, we obtain
\begin{align*}
X_{\sigma_{n}}\ &=\ \frac{\Pi_{\sigma_{n}}}{\Pi_{\sigma_{n-1}}}X_{\sigma_{n-1}}+\sum_{k=1}^{\tau_{n}}\frac{\Pi_{\sigma_{n}}}{\Pi_{\sigma_{n-1}+k}}\,Q_{\sigma_{n-1}+k}\\
&\le\ \gamma^{\tau_{n}}X_{\sigma_{n-1}}(\gamma)+\sum_{k=1}^{\tau_{n}}\gamma^{\tau_{n}-k}\,Q_{\sigma_{n-1}+k}\ =\ X_{\sigma_{n}}(\gamma)\quad\text{a.s.}
\end{align*}
as asserted. Regarding the backward iteration $\widehat{X}_{\sigma_{n}}$, we find more directly that
\begin{align*}
\widehat{X}_{\sigma_{n}}\ &=\ \sum_{k=1}^{n}\Pi_{\sigma_{k-1}}\sum_{j=1}^{\tau_{k}}\frac{\Pi_{\sigma_{k-1}+j-1}}{\Pi_{\sigma_{k-1}}}Q_{\sigma_{k-1}+j}\\
&\le\ \sum_{k=1}^{n}\gamma^{\sigma_{k-1}}\sum_{j=1}^{\tau_{k}}\frac{\Pi_{\sigma_{k-1}+j-1}}{\Pi_{\sigma_{k-1}}}Q_{\sigma_{k-1}+j}\ =\ \widehat{Y}_{n}\quad\text{a.s.}
\end{align*}
for each $n\ge 1$.
(b) If $c$ is such that the $\sigma^{>}_{n}(c)$ are a.s. finite, then \eqref{eq:crucial estimate} turns into
\begin{equation*}
M_{\sigma_{n-1}+k+1}\cdot\ldots\cdot M_{\sigma_{n}}\ \ge\ \gamma^{\tau_{n}-k}
\end{equation*}
for all $n\in\mathbb{N}$ and $0\le k\le\tau_{n}$. Now it is easily seen that the inductive argument in (a) remains valid when reversing the inequality signs, and the same holds true for $\widehat{X}_{\sigma_{n}}$.\qed
\end{proof}
\section{Tail lemmata}\label{sec:tail lemma}
In order to prove our results, we need to verify that the tail condition \eqref{tail2} is preserved under stopping times with finite mean. To be more precise, let $\sigma$ be any such stopping time for $(M_{k},Q_{k})_{k\ge 1}$ and consider
$$ \widehat{X}_{\sigma}\ =\ \sum_{k=1}^{\sigma}\Pi_{k-1}Q_{k}. $$
Obviously,
\begin{equation}\label{Qsigma* bounds}
\max_{1\le k\le \sigma}\,\Pi_{k-1}Q_{k}\ \le\ \widehat{X}_{\sigma}\ \le\ \sigma\,\max_{1\le k\le \sigma}\,\Pi_{k-1}Q_{k}.
\end{equation}
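The sandwich \eqref{Qsigma* bounds} simply says that a sum of $\sigma$ nonnegative terms lies between its largest term and $\sigma$ times that term; a throwaway numerical check (all sampled values and distributions hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)
sigma = 7                                    # a fixed realization of the stopping time
M = rng.uniform(0.1, 0.9, size=sigma)        # hypothetical M_1, ..., M_sigma
Q = rng.pareto(2.0, size=sigma) + 1.0        # hypothetical Q_1, ..., Q_sigma
Pi = np.concatenate(([1.0], np.cumprod(M)))  # Pi_0, ..., Pi_sigma
terms = Pi[:-1] * Q                          # Pi_{k-1} * Q_k for k = 1, ..., sigma
x_hat_sigma = terms.sum()                    # the backward iteration X-hat_sigma
assert terms.max() <= x_hat_sigma <= sigma * terms.max()
```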
\begin{Lemma}\label{tail lemma}
Assuming \eqref{trivial}, \eqref{trivial2} and $\fm<0$, condition \eqref{tail2} entails
\begin{equation*}
\lim_{t\to\infty}t\,\mathbb{P}(\log \widehat{X}_{\sigma}>t)\ =\ s\,\mathbb{E}\sigma,
\end{equation*}
where the right-hand side equals $0$ if $s=0$, and $\infty$ if $s=\infty$.
\end{Lemma}
\begin{proof}
It suffices to prove
\begin{equation*}
\lim_{t\to\infty}t\,\mathbb{P}\left(\log\max_{1\le k\le \sigma}\,\Pi_{k-1}Q_{k}>t\right)\ =\ s\,\mathbb{E} \sigma
\end{equation*}
because \eqref{Qsigma* bounds} in combination with $\mathbb{E}\sigma<\infty$ entails
\begin{align*}
\mathbb{P}&\left(\log\max_{1\le k\le \sigma}\,\Pi_{k-1}Q_{k}>t\right)\ \le\ \mathbb{P}(\log \widehat{X}_{\sigma}>t)\\
&\le\ \mathbb{P}(\log\sigma>\varepsilon t)\ +\ \mathbb{P}\left(\log\max_{1\le k\le \sigma}\,\Pi_{k-1}Q_{k}>(1-\varepsilon)t\right)\\
&=\ o(t^{-1})\ +\ \mathbb{P}\left(\log\max_{1\le k\le \sigma}\,\Pi_{k-1}Q_{k}>(1-\varepsilon)t\right)
\end{align*}
for all $\varepsilon>0$.
(a) We first prove that
\begin{equation}\label{eq:upper bound}
\limsup_{t\to\infty}\,t\,\mathbb{P}\left(\log\max_{1\le k\le \sigma}\,\Pi_{k-1}Q_{k}>t\right)\ \le\ s\,\mathbb{E}\sigma
\end{equation}
which is nontrivial only when assuming $s\in [0,\infty)$. Put $\eta_{n}:=\log Q_{n}$ for $n\in\mathbb{N}$.
For any $\varepsilon\in (0,1)$, we then have
\begin{align*}
\mathbb{P}&\left(\log\max_{1\le k\le \sigma}\Pi_{k-1}Q_{k}>t\right)\ =\ \mathbb{P}\left(\max_{1\le k\le \sigma}\,(S_{k-1}+\eta_{k})>t\right)\\
&\le\ \mathbb{P}\left(\max_{0\le k\le \sigma}\,S_{k}>\varepsilon t\right)\ +\ \mathbb{P}\left(\max_{1\le k\le \sigma}\,\eta_{k}>(1-\varepsilon)t\right)\\
&=\ I_{1}(t)\ +\ I_{2}(t).
\end{align*}
Regarding $I_{1}(t)$, notice that
$$ \max_{0\le k\le\sigma}S_{k}\ \le\ \sum_{k=1}^{\sigma}\log_{+}M_{k}. $$
Since $\fm\in [-\infty,0)$ entails $\mathbb{E}\log_{+}M<\infty$, Wald's identity provides
$$ \mathbb{E}\left(\max_{0\le k\le\sigma}S_{k}\right)\ \le\ \mathbb{E}\left(\sum_{k=1}^{\sigma}\log_{+}M_{k}\right)\ =\ \mathbb{E}\sigma\,\mathbb{E}\log_{+}M\ <\ \infty. $$
As a consequence,
$$ \lim_{t\to\infty}t\,I_{1}(t)\ =\ 0. $$
Turning to $I_2(t)$, we obtain
\begin{align*}
t\,I_{2}(t)\ &\le\ t\,\mathbb{E}\sum_{k=1}^{\sigma}\vec{1}_{\{\eta_{k}>(1-\varepsilon)t\}}\\
&=\ t\,\mathbb{E}\sum_{k\ge 1}\vec{1}_{\{\eta_{k}>(1-\varepsilon)t,\,\sigma\ge k\}}\\
&=\ t\,\mathbb{P}(\eta_{1}>(1-\varepsilon)t)\sum_{k\ge 1}\mathbb{P}(\sigma\ge k)\\
&=\ t\,\mathbb{E}\sigma\,\mathbb{P}(\eta_{1}>(1-\varepsilon)t)\ <\ \infty
\end{align*}
and thereupon
$$ \limsup_{t\to\infty}\,t\,(I_{1}(t)+I_{2}(t))\ =\ \limsup_{t\to\infty}\,t\,I_{2}(t)\ \le\ \frac{s\,\mathbb{E}\sigma}{1-\varepsilon}. $$
Hence \eqref{eq:upper bound} follows upon letting $\varepsilon$ tend to 0.
(b) It remains to show the inequality
\begin{equation}\label{eq:lower bound}
\liminf_{t\to\infty}\,t\,\mathbb{P}\left(\log\max_{1\le k\le \sigma}\,\Pi_{k-1}Q_{k}>t\right)\ \ge\ s\,\mathbb{E}\sigma
\end{equation}
which is nontrivial only when assuming $s\in (0,\infty]$. To this end observe that
$$ \log\max_{1\le k\le \sigma}\,\Pi_{k-1}Q_{k}\ =\ \max_{1\le k\le\sigma}(S_{k-1}+\eta_{k})\ \ge\ \max_{1\le k\le\sigma\wedge\tau(c)}\eta_{k}-c $$
for any $c>0$, where $\tau(c):=\inf\{n\ge 1:S_{n}<-c\}$. Since, furthermore,
\begin{align*}
\mathbb{P}\left(\max_{1\le k\le \sigma\wedge\tau(c)}\,\eta_{k}>t\right)\ &=\ \mathbb{E}\left(\sum_{k=1}^{\sigma\wedge\tau(c)}\vec{1}_{\{\eta_{1}\vee\ldots\vee\eta_{k-1}\le t,\,\eta_{k}>t\}}\right)\\
&=\ \sum_{k\ge 1}\mathbb{P}\left(\max_{1\le j\le k-1}\eta_{j}\le t,\,\eta_{k}>t,\,\sigma\wedge\tau(c)\ge k\right)\\
&=\ \mathbb{P}(\eta_{1}>t)\sum_{k\ge 1}\mathbb{P}\left(\max_{1\le j\le k-1}\eta_{j}\le t,\,\sigma\wedge\tau(c)\ge k\right),
\end{align*}
we find
\begin{align*}
t\,&\mathbb{P}\left(\log\max_{1\le k\le \sigma}\,\Pi_{k-1}Q_{k}>t\right)\\
&\ge\ t\,\mathbb{P}(\eta_{1}>t+c)\sum_{k\ge 1}\mathbb{P}\left(\max_{1\le j\le k-1}\eta_{j}\le t,\,\sigma\wedge\tau(c)\ge k\right)\\
&\underset{t\to\infty}{\longrightarrow}\ s\,\mathbb{E}(\sigma\wedge\tau(c)),
\end{align*}
and this implies \eqref{eq:lower bound} upon letting $c$ tend to $\infty$, for $\sigma\wedge\tau(c)\uparrow\sigma$.\qed
\end{proof}
By combining the previous result with a simple stochastic majorization argument, we obtain the following extension.
\begin{Lemma}\label{tail lemma 2}
Let $s_{*}$ and $s^{*}$ be as defined in \eqref{eq:def of s_* and s^*}. Then
\begin{align}
\limsup_{t\to\infty}\,t\,\mathbb{P}(\log \widehat{X}_{\sigma}>t)\ &\le\ s^{*}\,\mathbb{E}\sigma\label{565+}\\
\text{and}\quad\liminf_{t\to\infty}\,t\,\mathbb{P}(\log \widehat{X}_{\sigma}>t)\ &\ge\ s_{*}\,\mathbb{E}\sigma.\label{565-}
\end{align}
\end{Lemma}
\begin{proof}
For \eqref{565+}, we may assume $s^{*}<\infty$. Recall the notation $F(t)=\mathbb{P}(\log Q\le t)$ and put $\overline{F}:=1-F$. Then define the new distribution function $G$ by
$$ \overline{G}(t)\ :=\ \vec{1}_{(-\infty,0]}(t)+\left(\overline{F}(t)\vee\frac{s}{s+t}\right)\vec{1}_{(0,\infty)}(t) $$
for some arbitrary $s>s^{*}\ ($we can even choose $s=s^{*}$ unless $s^{*}=0)$. Since $\overline{G}\ge\overline{F}$, we may construct (on a possibly enlarged probability space) random variables $Q',\,Q_{1}',\,Q_{2}',\ldots$ such that $(M,Q,Q'),\,(M_{1},Q_{1},Q_{1}'),\,(M_{2},Q_{2},Q_{2}'),\ldots$ are iid, the distribution function of $\log Q'$ is $G$, and $Q'\ge Q$, thus
$$ \widehat{X}_{\sigma}'\ :=\ \sum_{k=1}^{\sigma}\Pi_{k-1}Q_{k}'\ \ge\ \widehat{X}_{\sigma}. $$
On the other hand, $\overline{G}(t)=\mathbb{P}(\log Q'>t)$ satisfies the tail condition \eqref{tail2}, whence, by an appeal to Lemma \ref{tail lemma},
$$ \limsup_{t\to\infty}\,t\,\mathbb{P}(\log \widehat{X}_{\sigma}>t)\ \le\ \lim_{t\to\infty}\,t\,\mathbb{P}(\log \widehat{X}_{\sigma}'>t)\ =\ s\,\mathbb{E}\sigma. $$
This proves \eqref{565+} because $s-s^{*}$ can be chosen arbitrarily small.
Assertion \eqref{565-} for $s>0$ is proved in a similar manner. Indeed, pick any $s\in (0,s_{*})\ ($or even $s_{*}$ itself unless $s_{*}=\infty)$ and define
$$ \overline{G}(t)\ :=\ \left(\overline{F}(t)\wedge\frac{s}{s+t}\right)\vec{1}_{[0,\infty)}(t) $$
which obviously satisfies $\overline{G}\le\overline{F}$. In the notation from before, we now have $Q'\le Q$ and thus $\widehat{X}_{\sigma}'\le \widehat{X}_{\sigma}$. Since again $\overline{G}(t)=\mathbb{P}(\log Q'>t)$ satisfies the tail condition \eqref{tail2}, we easily arrive at the desired conclusion by another appeal to Lemma \ref{tail lemma}.\qed
\end{proof}
Our last tail lemma will be crucial for the proof of Theorem \ref{main12}. Given any $0<\gamma<1$, recall that $X_{0}(\gamma)=X_{0}$ and
$$ X_{n}(\gamma)\ =\ \gamma X_{n-1}(\gamma)+Q_{n} $$
for $n\ge 1$. Let $\sigma$ be any integrable stopping time for $(X_{n})_{n\ge 0}$ and note that
$$ X_{\sigma}(\gamma)\ =\ \gamma^{\sigma}X_{0}+Q(\gamma), $$
where
$$ Q(\gamma)\ :=\ \sum_{k=1}^{\sigma}\gamma^{\sigma-k}Q_{k}. $$
More generally, if $(\sigma_{n})_{n\ge 0}$ denotes a renewal stopping sequence for $(X_{n})_{n\ge 0}$ with $\sigma=\sigma_{1}$, then
$$ X_{\sigma_{n}}(\gamma)\ =\ \gamma^{\sigma_{n}-\sigma_{n-1}}X_{\sigma_{n-1}}(\gamma)+Q_{n}(\gamma) $$
for $n\ge 1$ with iid $(\gamma^{\sigma_{n}-\sigma_{n-1}},Q_{n}(\gamma))_{n\ge 1}$ and $Q_{1}(\gamma)=Q(\gamma)$.
\begin{Lemma}\label{tail lemma 3}
Let $\gamma\in (0,1)$ and $\sigma,Q(\gamma)$ be as just introduced. If $Q$ satisfies condition \eqref{tail2}, then
\begin{equation*}
\lim_{t\to\infty}t\,\mathbb{P}(\log Q(\gamma)>t)\ =\ s\,\mathbb{E}\sigma,
\end{equation*}
where the right-hand side equals $0$ if $s=0$, and $\infty$ if $s=\infty$. More generally, with $s_{*},s^{*}$ as defined in \eqref{eq:def of s_* and s^*}, it is always true that
\begin{align*}
\begin{split}
s_{*}\,\mathbb{E}\sigma\ &\le\ \liminf_{t\to\infty}t\,\mathbb{P}(\log Q(\gamma)>t)\\
&\le\ \limsup_{t\to\infty}t\,\mathbb{P}(\log Q(\gamma)>t)\ \le\ s^{*}\,\mathbb{E}\sigma.
\end{split}
\end{align*}
\end{Lemma}
\begin{proof}
Embarking on the obvious inequality (compare \eqref{Qsigma* bounds})
$$ \max_{1\le k\le\sigma}\gamma^{\sigma-k}Q_{k}\ \le\ Q(\gamma)\ \le\ \sigma\max_{1\le k\le\sigma}\gamma^{\sigma-k}Q_{k}, $$
the arguments are essentially the same and even slightly simpler than those given for the proofs of Lemmata \ref{tail lemma} and \ref{tail lemma 2}. We therefore omit further details.\qed
\end{proof}
\section{Proof of Theorems \ref{main11} and \ref{main12}}\label{sec:main11 and main12}
\begin{proof}[of Theorem \ref{main11}]
(a) \emph{Null recurrence}: We keep the notation of the previous sections, in particular $S_{n}=\log\Pi_{n}$ and $\eta_{n}=\log Q_{n}$ for $n\ge 1$. For an arbitrary $c>0$, let $(\sigma_{n})_{n\ge 0}$ be the integrable renewal stopping sequence with
$$ \sigma\ =\ \sigma_{1}\ :=\ \inf\{n\ge 1:S_{n}<-c\}. $$
Then
$$ \big(M_{n}^{*},Q_{n}^{*}\big)\ :=\ \left(\frac{\Pi_{\sigma_{n}}}{\Pi_{\sigma_{n-1}}},\sum_{k=\sigma_{n-1}+1}^{\sigma_{n}}\frac{\Pi_{k-1}}{\Pi_{\sigma_{n-1}}}Q_{k}\right),\quad n\ge 1, $$
are independent copies of $(\Pi_{\sigma},\sum_{k=1}^{\sigma}\Pi_{k-1}Q_{k})$. Put
$$ \Pi_{0}^{*}\ :=\ 1\quad\text{and}\quad\Pi_{n}^{*}\ :=\ \prod_{k=1}^{n}M_{k}^{*}\quad\text{for }n\ge 1. $$
By Lemma \ref{tail lemma 2},
$$ \limsup_{t\to\infty}\,t\,\mathbb{P}(\log Q_{1}^{*}>t)\ \le\ s^{*}\,\mathbb{E}\sigma. $$
As already pointed out in the Introduction, validity of \eqref{trivial}, \eqref{trivial2} and \eqref{30} implies that $(X_{n})_{n\ge 0}$ cannot be positive recurrent. We will always assume $X_{0}=\widehat{X}_{0}=0$ hereafter. By Proposition \ref{recur}, the null recurrence of $(X_{n})_{n\ge 0}$ follows if we can show that
\begin{equation}\label{eq:to show}
\sum_{n\ge 1}\mathbb{P}(X_{n}\le t)\ =\ \sum_{n\ge 1}\mathbb{P}(\widehat{X}_{n}\le t)\ =\ \infty
\end{equation}
for some $t>0$ or, a fortiori,
\begin{equation}\label{eq2:to show}
\sum_{n\ge 1}\mathbb{P}(\widehat{X}_{\sigma_{n}}\le t)\ =\ \infty.
\end{equation}
We note that $\widehat{X}_{\sigma_{n}}=\sum_{k=1}^{\sigma_{n}}\Pi_{k-1}Q_{k}=\sum_{k=1}^{n}\Pi_{k-1}^{*}Q_{k}^{*}$ and pick an arbitrary nondecreasing sequence $0=a_{0}\le a_{1}\le\ldots$ such that
$$ a\ :=\ \sum_{n\ge 0}e^{-a_{n}}\ <\ \infty. $$
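For instance, the choice $a_{n}=2\,\log(1+n)$, made again further below, gives
$$ a\ =\ \sum_{n\ge 0}e^{-2\log(1+n)}\ =\ \sum_{n\ge 0}\frac{1}{(1+n)^{2}}\ =\ \frac{\pi^{2}}{6}. $$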
Fix any $t>0$ so large that
\begin{equation*}
\mathbb{P}\left(Q_{1}^{*}\le\frac{t}{a}\right)\ >\ 0.
\end{equation*}
Using $M_{n}^{*}<1$ for all $n\ge 1$, we then infer that
\begin{align*}
\mathbb{P}\left(\max_{1\le k\le n}e^{a_{k-1}}\Pi_{k-1}^{*}Q_{k}^{*}\le\frac{t}{a}\right)\ &\ge\ \mathbb{P}\left(\max_{1\le k\le n}\Pi_{k-1}^{*}Q_{k}^{*}\le\frac{t}{a}\right)\\
&\ge\ \mathbb{P}\left(Q_{1}^{*}\le\frac{t}{a}\right)^{n}\ >\ 0.
\end{align*}
Furthermore,
\begin{equation*}
\mathbb{P}(\widehat{X}_{\sigma_{n}}\le t)\ =\ \mathbb{P}\left(\sum_{k=1}^{n}\Pi_{k-1}^{*}Q_{k}^{*}\le t\right)\ \ge\ \mathbb{P}\left(\max_{1\le k\le n}e^{\,a_{k-1}}\Pi_{k-1}^{*}Q_{k}^{*}\le\frac{t}{a}\right),
\end{equation*}
because $\max_{1\le k\le n}e^{\,a_{k-1}}\Pi_{k-1}^{*}Q_{k}^{*}\le\frac{t}{a}$ implies
$$ \sum_{k=1}^{n}\Pi_{k-1}^{*}Q_{k}^{*}\ \le\ \frac{t}{a}\sum_{k=1}^{n}e^{-a_{k-1}}\ \le\ t. $$
Consequently,
\begin{equation}\label{eq3:to show}
\sum_{n\ge 1}\mathbb{P}\left(\max_{1\le k\le n}e^{a_{k-1}}\Pi_{k-1}^{*}Q_{k}^{*}\le\frac{t}{a}\right)\ =\ \infty
\end{equation}
implies \eqref{eq2:to show}, and thus \eqref{eq:to show}.
By choice of the $\sigma_{n}$, we have $\log\Pi_{k}^{*}\le -ck$ a.s. Putting $x=\log t-\log a$ and choosing $a_{k}=o(k)$ as $k\to\infty$ (e.g. $a_{k}=2\,\log(1+k)$), we obtain
\begin{align*}
\sum_{n\ge 1}\,&\mathbb{P}\left(\max_{1\le k\le n}e^{a_{k-1}}\Pi_{k-1}^{*}Q_{k}^{*}\le\frac{t}{a}\right)\\
&\ge\ \sum_{n\ge 1}\mathbb{P}\left(\max_{1\le k\le n}\big(-c(k-1)+a_{k-1}+\log Q_{k}^{*}\big)\le x\right)\\
&=\ \sum_{n\ge 1}\prod_{k=1}^{n}\mathbb{P}\big(\log Q_{1}^{*}\le x-a_{k-1}+c(k-1)\big).
\end{align*}
Defining $b_{n}$ as the $n$th summand in the previous sum and writing $\sigma=\sigma(c)$ to show the dependence on $c$, Lemma \ref{tail lemma 2} provides us with
$$ \liminf_{n\to\infty}\,n\left(\frac{b_{n+1}}{b_{n}}-1\right)\ \ge\ -s^{*}\,\frac{\mathbb{E}\sigma(c)}{c}, $$
hence Raabe's test entails \eqref{eq3:to show} if we can fix $c>0$ such that
\begin{equation}\label{eq4:to show}
s^{*}\,\frac{\mathbb{E}\sigma(c)}{c}\ <\ 1.
\end{equation}
Plainly, the latter holds true for any $c>0$ if $s^{*}=0$. But if $s^{*}\in (0,\infty)$, then use the elementary renewal theorem to infer (also in the case $\fm=-\infty$)
$$ \lim_{c\to\infty}\,s^{*}\,\frac{\mathbb{E}\sigma(c)}{c}\ =\ \frac{s^{*}}{-\fm}\ <\ 1. $$
Hence, \eqref{eq4:to show} follows by our assumption $s^{*}<-\fm$.\qed
\noindent
(b) \emph{Transience}: By Proposition \ref{transient}, it must be shown that
$$ \sum_{n\ge 0}\mathbb{P}(\widehat{X}_{n}\le t)\ <\ \infty $$
for any $t>0$. We point out first that it suffices to show
\begin{equation}\label{reduced sum finite}
\sum_{n\ge 0}\mathbb{P}(\widehat{X}_{\sigma_{n}}\le t)\ <\ \infty
\end{equation}
for some integrable renewal stopping sequence $(\sigma_{n})_{n\ge 0}$. Namely, since $(\widehat{X}_{n})_{n\ge 0}$ is nondecreasing, it follows that
\begin{align*}
\sum_{n\ge 0}\mathbb{P}(\widehat{X}_{n}\le t)\ &=\ \sum_{n\ge 0}\mathbb{E}\left(\sum_{k=\sigma_{n}}^{\sigma_{n+1}-1}\vec{1}_{\{\widehat{X}_{k}\le t\}}\right)\\
&\le\ \sum_{n\ge 0}\mathbb{E}\big[(\sigma_{n+1}-\sigma_{n})\vec{1}_{\{\widehat{X}_{\sigma_{n}}\le t\}}\big]\\
&=\ \mathbb{E}\sigma\sum_{n\ge 0}\mathbb{P}(\widehat{X}_{\sigma_{n}}\le t),
\end{align*}
where we have used that $\sigma_{n+1}-\sigma_{n}$ is independent of $\widehat{X}_{\sigma_{n}}$ for each $n\ge 0$.
Choosing $(\sigma_{n})_{n\ge 0}=(\sigma^{>}_{n}(c))_{n\ge 0}$ as defined before Lemma \ref{lem:bounding lemma} for an arbitrary $c\in (-\fm,s_{*})$, part (b) of this lemma provides us with
\begin{align}\label{wh(X)_{n}>=wh(Y)_{n}}
\widehat{X}_{\sigma_{n}}\ \ge\ \widehat{Y}_{n}\ =\ \sum_{k=1}^{n}\gamma^{\sigma_{k-1}}Q_{k}^{*}\quad\text{a.s.}
\end{align}
for all $n\ge 0$, where the $Q_{n}^{*}$ are formally defined as in (a) for the $\sigma_{n}$ given here and the $\widehat{Y}_{n}$ are the backward iterations of the Markov chain defined by the RDE
$$ Y_{n}\ =\ \gamma^{\sigma_{n}-\sigma_{n-1}}Y_{n-1}+Q_{n}^{*},\quad n\ge 1. $$
By Lemma \ref{tail lemma 2},
$$ \liminf_{t\to\infty}t\,\mathbb{P}(\log Q^{*}>t)\ \ge\ s_{*}\mathbb{E}\sigma. $$
Let $(Q_{n}')_{n\ge 1}$ be a further sequence of iid random variables with generic copy $Q'$, independent of all other occurring random variables and such that
\begin{equation}\label{eq:tail Q'}
\lim_{t\to\infty}t\,\mathbb{P}(\log Q'>t)\ =:\ s\ \in\ (c,s_{*}).
\end{equation}
Put $\gamma:=e^{-c}$. Then Kellerer's result (Proposition \ref{Kellerer's result}) implies the transience of the Markov chain $X_{n}'(\gamma)=\gamma X_{n-1}'(\gamma)+Q_{n}'$, $n\ge 1$, and thus also of the subchain $(X_{\sigma_{n}}'(\gamma))_{n\ge 0}$. Since $\widehat{X}_{\sigma_{n}}'(\gamma)\ =\ \sum_{k=1}^{n}\gamma^{\sigma_{k-1}}\widehat{Q}_{k}$ with
$$ \widehat{Q}_{n}\ =\ \sum_{k=\sigma_{n-1}+1}^{\sigma_{n}}\gamma^{k-\sigma_{n-1}-1}Q_{k}' $$
for $n\ge 1$ and since, by \eqref{eq:tail Q'} and Lemma \ref{tail lemma},
$$ \lim_{t\to\infty}t\,\mathbb{P}(\log\widehat{Q}>t)\ =\ s\,\mathbb{E}\sigma, $$
thus $\mathbb{P}(Q^{*}>t)\ge\mathbb{P}(\widehat{Q}>t)$ for all sufficiently large $t$, we now infer by invoking our Comparison Lemma \ref{comparison lemma} that the transience of $(X_{\sigma_{n}}'(\gamma))_{n\ge 0}$ entails the transience of $(Y_{n})_{n\ge 0}$ given above and thus
$$ \sum_{n\ge 0}\mathbb{P}(\widehat{Y}_{n}\le t)\ <\ \infty $$
for all $t>0$. Finally, use \eqref{wh(X)_{n}>=wh(Y)_{n}} to arrive at \eqref{reduced sum finite}.
This completes the proof of part (b).\qed
\end{proof}
\begin{proof}[of Theorem \ref{main12}]
Fix $c>s^{*}$ and put as before $\gamma=e^{-c}$. Since $S_{n}=\log\Pi_{n}\to-\infty$ a.s. and $\fm^{+}=\fm^{-}=\infty$, we have
$$ \lim_{n\to\infty}\frac{S_{n}}{n}\ =\ \lim_{n\to\infty}\frac{S_{n}+an}{n}\ =\ -\infty\quad\text{a.s. for all }a\in\mathbb{R} $$
due to Kesten's trichotomy (see e.g. \cite[p.~3]{KesMal:96}) and hence in particular $S_{n}+cn\to -\infty$ a.s. As a consequence, the sequence $(\sigma_{n})_{n\ge 0}=(\sigma^{<}_{n}(c))_{n\ge 0}$ as defined before Lemma \ref{lem:bounding lemma} is an integrable renewal stopping sequence for $(X_{n})_{n\ge 0}$. Part (a) of this lemma implies
$$ X_{\sigma_{n}}\ \le\ X_{\sigma_{n}}(\gamma)\ =\ \gamma^{\sigma_{n}-\sigma_{n-1}}X_{\sigma_{n-1}}(\gamma)+Q_{n}(\gamma)\quad\text{a.s.} $$
for all $n\ge 0$, where $Q_{n}(\gamma)=\sum_{k=\sigma_{n-1}+1}^{\sigma_{n}}\gamma^{\sigma_{n}-k}Q_{k}$ for $n\ge 1$. Hence it is enough to prove the null recurrence of $(X_{\sigma_{n}}(\gamma))_{n\ge 0}$. To this end, note first that $\fm(\gamma):=\mathbb{E}\log\gamma^{\sigma_{1}}=-c\,\mathbb{E}\sigma_{1}\in (-\infty,0)$. Moreover, Lemma \ref{tail lemma 3} provides us with
$$ \limsup_{t\to\infty}t\,\mathbb{P}(\log Q_{1}(\gamma)>t)\ \le\ s^{*}\mathbb{E}\sigma_{1}\ <\ c\,\mathbb{E}\sigma_{1}\ =\ -\fm(\gamma), $$
and so the null recurrence of $(X_{\sigma_{n}}(\gamma))_{n\ge 0}$ follows from Theorem \ref{main11}.\qed
\end{proof}
\section{On the structure of the attractor set}\label{attr}
The purpose of this section is to investigate the structure of the attractor set $L$ for the Markov chain $(X_{n})_{n\ge 0}$ defined by \eqref{chain}. Unlike before, we assume hereafter that $(X_{n})_{n\ge 0}$ is locally contractive and recurrent, the latter being an inevitable assumption for $L\ne\emptyset$. To exclude the ``trivial case'' (as explained in the Introduction) we assume $\mathbb{P}(M=0)=0$. Recall from the paragraph preceding Proposition \ref{recur} that $L$ consists of all accumulation points of $(X_{n}^{x}(\omega))_{n\ge 0}$, which turns out to be the same for all $x\in\mathbb{R}_{+}$ and $\mathbb{P}$-almost all $\omega$. As already mentioned in the Introduction, $(X_{n})_{n\ge 0}$ possesses a unique invariant distribution, say $\nu$, if \eqref{trivial2} and \eqref{33} hold. The attractor set then coincides with the support of $\nu$. In the positive recurrent case the structure of $L$ was analyzed in \cite{BurDamMik:16}. According to Theorem 2.5.5 there, $L$ necessarily equals a half-line $[a,\infty)$ for some $a\ge 0$ if it is unbounded. If $L$ is bounded, no general results concerning local properties of $L$ are known: it may equally well be a fractal (for instance, a Cantor set) or an interval. Below we consider both the positive and the null recurrent case. The latter is implied by the hypotheses of Theorem \ref{main11}(a), but also holds when $\mathbb{E}\log M = 0$ (see \cite{BabBouElie:97,Benda:98b} for more details).
For $(m,q)\in\mathbb{R}_{+}^{2}$, let $g$ be the affine transformation of $\mathbb{R}$ defined by
\begin{equation*}
g(x)=mx+q\,,\quad x\in\mathbb{R}.
\end{equation*}
We will write $g=(m,q)$, thereby identifying $g$ with $(m,q)$. The affine transformations constitute a group ${\sf Aff}(\mathbb{R})$ with identity $(1,0)$
and multiplication defined by
$$g_{1}g_{2}=(m_{1},q_{1})\,(m_{2},q_{2})=(m_{1}m_{2},q_{1}+m_{1}q_{2})$$ for $g_{i}=(m_{i}, q_{i})$, $i=1,2$. The inverse of $g=(m,q)$ is given by $g^{-1}=(m^{-1},-m^{-1}q)$.
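Indeed, both assertions are verified directly from the multiplication rule; for the inverse, for example,
$$ (m,q)\,(m^{-1},-m^{-1}q)\ =\ \big(mm^{-1},\,q+m\cdot(-m^{-1}q)\big)\ =\ (1,0). $$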
Assuming $m\ne1$, let $x_{0} = x_{0}(g)=q/(1-m)$ be the unique fixed point of $g$, that is the unique solution to the equation $g(x)=x$. Then
$$ g(x)\ =\ m \,(x-x_{0})+x_{0}\,,\quad x\in\mathbb{R}$$ and similarly
\begin{equation}\label{eq:x0}
g^{n}(x)\ =\ m^{n}x+q_{n}= m^{n}\, (x-x_{0}) + x_{0}\,,\quad x\in \mathbb{R},\ n\ge 1,
\end{equation}
where $q_{n}=\sum _{i=0}^{n-1}m^i\,q$. Formula \eqref{eq:x0} tells us that, modulo $x_{0}$, the action of $g$ is either contractive or expanding depending on whether $m<1$ or $m>1$, respectively.
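As a simple illustration, $g=(\frac{1}{2},1)$ has fixed point $x_{0}=2$ and $g^{n}(x)=2^{-n}(x-2)+2\to 2$ for every $x\in\mathbb{R}$, whereas $g=(2,1)$ has fixed point $x_{0}=-1$ and $g^{n}(x)=2^{n}(x+1)-1\to\infty$ for every $x\ge 0$.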
We interpret $\mu$, the distribution of $(M,Q)$, as a probability measure on ${\sf Aff}(\mathbb{R})$ hereafter and let ${\rm supp}\,\mu$ denote its support. Consider the subsemigroup $T$ of ${\sf Aff}(\mathbb{R})$
generated by ${\rm supp}\,\mu$, i.e.
$$
T\ :=\ \{ g_{1}\cdot \ldots \cdot g_{n}: g_{i} \in {\rm supp}\,\mu,\ i=1,\ldots,n,\,n\ge 1\}\,,
$$
and let $\overline T$ be its closure. A set $S\subset \mathbb{R}$ is said to be {\em $\overline T$-invariant} if for every $g\in \overline T$ and $x\in S$, $g(x)=mx+q \in S$. The following result was stated in a slightly different setting as Proposition 2.5.3 in \cite{BurDamMik:16} and can be proved by the same arguments after minor changes.
\begin{Lemma}\label{lem:attractor}
Let $(X_{n})_{n\ge 0}$ be locally contractive and recurrent. Then $L=\overline S_{0}$, where
$$ S_{0}\ :=\ \{(1-m)^{-1}q: g=(m,q)\in T \,, m<1\}. $$
Moreover, $L$ equals the smallest $\overline T$-invariant subset of $\mathbb{R}$.
\end{Lemma}
For positive recurrent $(X_{n})_{n\ge 0}$, we have already pointed out that $L$, if unbounded, must be
a half-line $[a,\infty)$ ($a\ge 0$). The subsequent theorem provides the extension of this fact to any locally contractive and recurrent $(X_{n})_{n\ge 0}$.
\begin{Theorem}
Let $(X_{n})_{n\ge 0}$ be locally contractive and recurrent with unbounded attractor set $L$. If $\mathbb{P}(M=0)=0$, then $L=[a,\infty)$ for some $a\ge 0$.
\end{Theorem}
\begin{proof}
By Lemma \ref{lem:attractor}, the set $L$ is uniquely determined by ${\rm supp}\,\mu$ and does not depend on the values $\mu(A)$ for any particular sets $A$. Consequently, any modification of $\mu$ with the same support leaves $L$ invariant. We will use this observation and define a tilting $\widetilde\mu$ of $\mu$ of the form
$$ \widetilde\mu({\rm d}m,{\rm d}q)\ =\ f(m)h(q)\,\mu({\rm d}m,{\rm d}q) $$
for suitable positive functions $f,h$ such that, if $(M,Q)$ has law $\widetilde\mu$, then the corresponding Markov chain $(\widetilde X_{n})_{n\ge 0}$ is positive recurrent with unique invariant distribution $\widetilde\nu$. We thus conclude ${\rm supp}\,\widetilde\nu = L$ and thereupon the claim $L=[a,\infty)$ if $L$ is unbounded.
Put
\begin{align*}
f(m)\ &:=\ \begin{cases}
\displaystyle\frac{c_{0}}{|\log m|},&\text{if }0<m<\displaystyle\frac{1}{e},\\[1.5mm]
c_{0},&\text{if }\displaystyle\frac{1}{e}\le m<1,\\[1.5mm]
c_{0}\,c_{1},&\text{if }1\le m<e,\\[1.5mm]
\displaystyle\frac{c_{0}\,c_{1}}{\log m},&\text{if }m\ge e,
\end{cases}
\shortintertext{and}
h(q)\ &:=\ \begin{cases}
c_{2},&\text{if }0\le q<e,\\[1.5mm]
\displaystyle\frac{c_{2}}{\log q},&\text{if }q\ge e,
\end{cases}
\end{align*}
and fix $c_{0},c_{1},c_{2}>0$ such that
$$ \int f(m) h(q)\ \mu({\rm d}m,{\rm d}q)\ =\ 1. $$
Observe that, if $\mathbb{P}_{\widetilde\mu}$ is such that $(M,Q)$ has law $\widetilde\mu$ under this probability measure, then
\begin{align}\label{int}
\begin{split}
\mathbb{E}_{\widetilde\mu}\log M\ &=\ c_{0}\left[-\int_{(0,1]\times\mathbb{R}_{+}}(1\wedge|\log m|)\, h(q)\ \mu({\rm d}m,{\rm d}q)\right.\\
&\hspace{2cm}+\ c_{1}\left.\int_{(1,\infty)\times\mathbb{R}_{+}}(1\wedge\log m)\,h(q)\ \mu({\rm d}m,{\rm d}q)\right],
\end{split}
\end{align}
and from this it is readily seen that we can specify $c_{0},c_{1}$ further so as to have
$$ \mathbb{E}_{\widetilde\mu}\log M\ <\ 0. $$
Regarding $\mathbb{E}_{\widetilde\mu}\log^{+}Q$, we find
\begin{align*}
\mathbb{E}_{\widetilde\mu} \log^{+}Q\ &=\ c_{2}\left[\int_{\mathbb{R}_{+}\times [1,e]}\log q\,f(m)\ \mu({\rm d}m,{\rm d}q)\ +\ \int_{\mathbb{R}_{+}\times (e,\infty)} f(m)\ \mu({\rm d}m,{\rm d}q)\right]\ <\ \infty.
\end{align*}
Hence, if $(M,Q)$ has law $\widetilde\mu$, then the corresponding Markov chain $(\widetilde X_{n})_{n\ge 0}$ defined by \eqref{chain} is indeed positive recurrent. This completes the proof of the theorem.\qed
\end{proof}
The next lemma provides some conditions on $\mu$ that are easily checked and sufficient for $L$ to be unbounded.
\begin{Lemma}\label{lem: 3.4}
If $\mathbb{P}(M=0)=0$, then each of the following conditions on the law of $(M,Q)$ implies that $L$ is unbounded.
\begin{description}[(C2)]\itemsep2pt
\item[(C1)] The law of $Q$ has unbounded support.
\item[(C2)] $\mathbb{P}(M>1)>0$ and $\mathbb{P}(Q=0)<1$.
\end{description}
\end{Lemma}
\begin{proof}
Assume first (C1), put $\beta:=\sup\{x: x\in L\}$ and recall from Lemma \ref{lem:attractor} that $L$ is invariant under the action of ${\rm supp}\,\mu$, i.e., if $(m,q)\in {\rm supp}\,\mu$ and $x\in L$, then $mx+q\in L$. In particular,
\begin{equation}\label{eq:5}
m\beta+q\,\le\,\beta\quad\text{ for any } (m,q)\in {\rm supp}\,\mu.
\end{equation}
Hence, if $\beta>0$, we have $\beta\ge m\beta + q \ge q$ and conclude $\beta=\infty$, for $q$ can be chosen arbitrarily large.
Assuming now (C2), pick $g=(m,q)\in {\rm supp}\,\mu$ such that $m>1$. Notice that $x=x_{0}(g)=q/(1-m)$, the unique fixed point of $g$, is negative or zero because $m>1$. Since, under our hypothesis, the attractor set consists of at least two points, one can choose some positive $y\in L$. Using \eqref{eq:x0}, we then infer
$$ g^{n}(y)\ =\ m^{n}(y-x)\,+\,x\ \to\ \infty $$
as $n\to\infty$ which completes the proof.\qed
\end{proof}
The assumptions of Lemma \ref{lem: 3.4} are not optimal. Even if $\mathbb{P}(M<1)=1$ and the support of the distribution of $Q$ is bounded, the attractor set may be unbounded, as demonstrated by the next lemma.
\begin{Lemma}
Assume that $\mathbb{P}(M<1)=1$. Then the attractor set $L$ is bounded if, and only if, the set
$$ S_{1}\ =\ \big\{x_{0}=x_{0}(g):g\in {\rm supp}\,\mu\big\} $$
is bounded or, equivalently, $Q/(1-M)$ is a.s. bounded.
\end{Lemma}
\begin{proof}
Assuming that $S_{1}$ is bounded, denote by $a$ and $b$ its infimum and supremum, respectively. Since the closed interval $[a,b]$ is obviously $\overline T$-invariant, it must contain $L$ by Lemma \ref{lem:attractor} which implies that $L$ is bounded.
If $S_{1}$ is unbounded, then $S_{1}\subset S_{0}$ implies that $S_{0}$ and thus also $L=\overline S_{0}$ is unbounded by another appeal to Lemma \ref{lem:attractor}. \qed
\end{proof}
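To illustrate, suppose that $M=\frac{1}{2}$ a.s. and $Q$ has the uniform distribution on $[0,1]$. Then $Q/(1-M)=2Q\le 2$ a.s., whence $S_{1}\subset [0,2]$ and $L$ is bounded, although ${\rm supp}\,\mu$ is infinite. If instead $Q$ has unbounded support, then $L$ is unbounded by Lemma \ref{lem: 3.4}.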
Finally, we turn to the case when the attractor set $L$ is bounded. As already mentioned, the local structure of $L$ cannot generally be described precisely. If $\mu$ is supported by $(a,0)$ and $(a,1-a)$ for some $0<a<1/2$, then $L\subset [0,1]$ equals the Cantor set obtained by initially removing $(a,1-a)$ from $[0,1]$ and successive self-similar repetitions of this action for the remaining intervals (see also \cite[Remark 7]{BurtonRoesler:95}). So the Cantor ternary set is obtained if $a=1/3$. On the other hand, we have the following result.
\begin{Lemma}
For $\alpha,\beta<1$ with $\alpha+\beta\ge 1$ suppose that $(\alpha,q_{\alpha}), (\beta,q_{\beta})\in {\rm supp}\,\mu$ and further $x_{\alpha}:=q_{\alpha}/(1-\alpha)\le q_{\beta}/(1-\beta) =:x_{\beta}$. Then the interval $[x_{\alpha},x_{\beta}]$ is contained in $L$.
\end{Lemma}
\begin{proof}
W.l.o.g. we assume that $x_{\alpha}=0$ and $x_{\beta}=1$, so that the points in ${\rm supp}\,\mu$ are $f_{\alpha}:=(\alpha,0)$ and $f_{\beta}:=(\beta,1-\beta)$ rather than $(\alpha,q_{\alpha}), (\beta,q_{\beta})$, and $[0,1]\subset L$ must be verified.
Pick any $x\in (0,1)$. Let $U$ be the subsemigroup of ${\sf Aff}(\mathbb{R})$ generated by $f_{\alpha}$ and $f_{\beta}$. To prove that $x\in L$, it is sufficient by Lemma \ref{lem:attractor} to find a sequence $(g_{n})_{n\ge 1}$ in $U$ such that $x$ is an accumulation point of $(g_{n}(0))_{n\ge 1}$.
We construct this sequence inductively. Observe first that $\alpha+\beta \ge 1$ implies
$$ x\,\in\,(0,1)\,\subset\, [0,\alpha] \cup [1-\beta,1]. $$
If $x$ is an element of $[0,\alpha]$, take $g_{1} = f_{\alpha}$, otherwise take $g_{1}=f_{\beta}$. In both cases,
$$ x\,\in\, [g_{1}(0), g_{1}(1)]\quad\text{and} \quad |g_{1}(1) -g_{1}(0)|\,\le\,\alpha\vee\beta. $$
Assume we have found $g_{n}=(a_{n},b_{n})$ such that
$$ x\,\in\,[g_{n}(0), g_{n}(1)]\quad\text{and}\quad |g_{n}(1) -g_{n}(0)|\,=\, a_{n}\,\le\,\big(\alpha \vee \beta\big)^{n}. $$
Using again $\alpha+\beta\ge 1$, we have
\begin{align*}
x\ &\in\ [g_{n}(0), g_{n}(1)]\ =\ [b_{n}, a_{n} + b_{n}]\\
&\subset\ [b_{n}, \alpha a_{n} + b_{n}] \cup [(1-\beta)a_{n} + b_{n}, a_{n} + b_{n}]\\
&=\ [g_{n}f_{\alpha}(0), g_{n}f_{\alpha}(1)] \cup [g_{n}f_{\beta}(0), g_{n}f_{\beta}(1)].
\end{align*}
Thus $x$ must belong to one of these intervals. If $x\in [g_{n}f_{\alpha}(0), g_{n}f_{\alpha}(1)]$, put $g_{n+1}=g_{n} f_{\alpha}$, otherwise put $g_{n+1}=g_{n} f_{\beta}$. In both cases,
$$ x\,\in\,[g_{n+1}(0), g_{n+1}(1)]\quad\text{and} \quad |g_{n+1}(1)-g_{n+1}(0)|\,=\,a_{n} (\alpha \vee \beta)\,\le\,\big(\alpha \vee \beta\big)^{n+1}. $$
Hence, $x$ is indeed an accumulation point of the sequence $(g_{n}(0))_{n\ge 1}$ and therefore an element of $L$.\qed
\end{proof}
\footnotesize
\noindent {\bf Acknowledgements.}
The authors wish to thank two anonymous referees for various helpful remarks that improved the presentation and for bringing reference \cite{DenKorWach:16} to our attention.
G. Alsmeyer was partially supported by the Deutsche Forschungsgemeinschaft (SFB 878) ``Geometry, Groups and Actions''. D. Buraczewski was partially supported by the National Science Centre, Poland (Sonata Bis, grant number DEC-2014/14/E/ST1/00588).
Part of this work was done while A.~Iksanov was visiting M\"unster in January, February and July 2015, 2016. He gratefully acknowledges hospitality and financial support.
\def$'${$'$}
\begin{thebibliography}{10}
\bibitem{BabBouElie:97}
M.~Babillot, P.~Bougerol, and L.~Elie.
\newblock The random difference equation {$X_{n}=A_{n}X_{n-1}+B_{n}$} in the critical
case.
\newblock {\em Ann. Probab.}, 25(1):478--493, 1997.
\bibitem{Bauernschubert:13}
E.~Bauernschubert.
\newblock Perturbing transient random walk in a random environment with cookies
of maximal strength.
\newblock {\em Ann. Inst. Henri Poincar\'e Probab. Stat.}, 49(3):638--653,
2013.
\bibitem{Benda:98b}
M.~Benda.
\newblock Schwach kontraktive dynamische {S}ysteme ({W}eakly contractive
dynamical systems), 1998.
\newblock Ph.D. Thesis, Ludwig-Maximilians-Universit\"at M\"unchen.
\bibitem{Brofferio:03}
S.~Brofferio.
\newblock How a centred random walk on the affine group goes to infinity.
\newblock {\em Ann. Inst. H. Poincar\'e Probab. Statist.}, 39(3):371--384,
2003.
\bibitem{BroBura:13}
S.~Brofferio and D.~Buraczewski.
\newblock On unbounded invariant measures of stochastic dynamical systems.
\newblock {\em Ann. Probab.}, 43(3):1456--1492, 2015.
\bibitem{Buraczewski:07}
D.~Buraczewski.
\newblock On invariant measures of stochastic recursions in a critical case.
\newblock {\em Ann. Appl. Probab.}, 17(4):1245--1272, 2007.
\bibitem{BurDamMik:16}
D.~Buraczewski, E.~Damek, and T.~Mikosch.
\newblock {\em Stochastic models with power-law tails. The equation $X=AX+B$.}
\newblock Springer Series in Operations Research and Financial Engineering.
Springer International Publishing, Switzerland, 2016.
\bibitem{Buraczewski+Iksanov:15}
D.~Buraczewski and A.~Iksanov.
\newblock Functional limit theorems for divergent perpetuities in the contractive case.
\newblock{\em Electron. Commun. Probab.}, 20, article 10: 1--14, 2015.
\bibitem{BurtonRoesler:95}
R.~M. Burton and U.~R{\"o}sler.
\newblock An {$L_2$} convergence theorem for random affine mappings.
\newblock {\em J. Appl. Probab.}, 32(1):183--192, 1995.
\bibitem{DenKorWach:16}
D. Denisov, D. Korshunov and V.~Wachtel.
\newblock At the edge of criticality: Markov chains with asymptotically zero drift.
\newblock arXiv:1612.01592, 2016.
\bibitem{Erickson:73}
K.~B. Erickson.
\newblock The strong law of large numbers when the mean is undefined.
\newblock {\em Trans. Amer. Math. Soc.}, 185:371--381 (1974), 1973.
\bibitem{GolMal:00}
C.~M. Goldie and R.~A. Maller.
\newblock Stability of perpetuities.
\newblock {\em Ann. Probab.}, 28(3):1195--1218, 2000.
\bibitem{Grincev:76}
A.~{Grincevi\v{c}ius}.
\newblock {Limit theorems for products of random linear transformations on the
line.}
\newblock {\em {Lith. Math. J.}}, 15:568--579, 1976.
\bibitem{HitczenkoWes:11}
P.~Hitczenko and J.~Weso{\l}owski.
\newblock Renorming divergent perpetuities.
\newblock {\em Bernoulli}, 17(3):880--894, 2011.
\bibitem{Iksanov:17}
A.~Iksanov.
\newblock {\em Renewal theory for perturbed random walks and similar
processes}.
\newblock Probability and {I}ts {A}pplications. Birkh\"{a}user, Switzerland, 2016.
\bibitem{Iksanov+Pilipenko+Samoilenko:17}
A.~Iksanov, A.~Pilipenko and I.~Samoilenko.
\newblock Functional limit theorems for the maxima of perturbed random walk and
divergent perpetuities in the $M_1$-topology.
\newblock {\em Extremes}, 2017, to appear.
\bibitem{Kellerer:92}
H.~G. Kellerer.
\newblock Ergodic behaviour of affine recursions {I}: criteria for recurrence
and transience, 1992.
\newblock Technical Report, Univ. of Munich, Germany. Available at
http://www.mathematik.uni-muenchen.de/$\sim$kellerer/.
\bibitem{KesMal:96}
H.~Kesten and R.~A. Maller.
\newblock Two renewal theorems for general random walks tending to infinity.
\newblock {\em Probab. Theory Related Fields}, 106(1):1--38, 1996.
\bibitem{Pakes:83}
A.~G. Pakes.
\newblock Some properties of a random linear difference equation.
\newblock {\em Austral. J. Statist.}, 25(2):345--357, 1983.
\bibitem{PeigneWoess:11a}
M.~Peign{\'e} and W.~Woess.
\newblock Stochastic dynamical systems with weak contractivity properties {I}.
{S}trong and local contractivity.
\newblock {\em Colloq. Math.}, 125(1):31--54, 2011.
\bibitem{RachSamo:95}
S.~T. Rachev and G.~Samorodnitsky.
\newblock Limit laws for a stochastic process and random recursion arising in
probabilistic modelling.
\newblock {\em Adv. in Appl. Probab.}, 27(1):185--202, 1995.
\bibitem{Stromberg:81}
K.~R. Stromberg.
\newblock {\em Introduction to classical real analysis}.
\newblock Wadsworth International, Belmont, Calif., 1981.
\newblock Wadsworth International Mathematics Series.
\bibitem{Tong:94}
J.~C. Tong.
\newblock Kummer's test gives characterizations for convergence or divergence
of all positive series.
\newblock {\em Amer. Math. Monthly}, 101:450--452, 1994.
\bibitem{ZeeviGlynn:04}
A.~Zeevi and P.~W. Glynn.
\newblock Recurrence properties of autoregressive processes with
super-heavy-tailed innovations.
\newblock {\em J. Appl. Probab.}, 41(3):639--653, 2004.
\bibitem{Zerner:2016+}
M.~Zerner.
\newblock Recurrence and transience of contractive autoregressive processes and related Markov chains.
\newblock arXiv:1608.01394v2, 2016.
\end{thebibliography}
\end{document} |
\begin{document}
\title[Algebraic central division algebras]{On algebraic central division
algebras over Henselian fields of finite absolute Brauer
$p$-dimensions and residually arithmetic type}
\keywords{Division LBD-algebra, Henselian field, field of
arithmetic type, absolute Brauer
$p$-dimension, tamely ramified extension, $p$-splitting field\\
2020 MSC Classification: 16K40, 12J10 (primary), 12E15, 12G05,
12F10 (secondary).}
\author{I.D. Chipchakov}
\address{Institute of Mathematics and Informatics\\Bulgarian Academy
of Sciences\\1113 Sofia, Bulgaria; E-mail address:
[email protected]}
\begin{abstract}
Let $(K, v)$ be a Henselian field with a residue field $\widehat
K$ and a value group $v(K)$, and let $\muathbb{P}$ be the set of
prime numbers. This paper finds conditions on $K$, $v(K)$ and
$\widehat K$ under which every algebraic associative central
division $K$-algebra $R$ contains a $K$-subalgebra $\widetilde R$
decomposable into a tensor product of central $K$-subalgebras $R
_{p}$, $p \in \mathbb{P}$, of finite $p$-primary degrees, such
that each finite-dimensional $K$-subalgebra $\Delta $ of $R$ is
isomorphic to a $K$-subalgebra $\widetilde \Delta $ of $\widetilde
R$.
\end{abstract}
\maketitle
\section{\bf Introduction}
\medskip
All algebras considered in this paper are assumed to be
associative with a unit. Let $E$ be a field, $E _{\rm sep}$ its
separable closure, Fe$(E)$ the set of finite extensions of $E$ in
$E _{\rm sep}$, $\mathbb{P}$ the set of prime numbers, and for
each $p \in \mathbb{P}$, let $E(p)$ be the maximal $p$-extension
of $E$ in $E _{\rm sep}$, i.e. the compositum of all finite Galois
extensions of $E$ in $E _{\rm sep}$ whose Galois groups are
$p$-groups. It is known, by the Wedderburn-Artin structure theorem
(cf. \cite{He}, Theorem~2.1.6), that an Artinian $E$-algebra
$\mathcal{A}$ is simple if and only if it is isomorphic to the
full matrix ring $M _{n}(\mathcal{D}_{\mathcal{A}})$ of order $n$
over a division $E$-algebra $\mathcal{D}_{\mathcal{A}}$. When this
holds, $n$ is uniquely determined by $\mathcal{A}$, and so is
$\mathcal{D}_{\mathcal{A}}$, up to isomorphism;
$\mathcal{D}_{\mathcal{A}}$ is called the underlying division
$E$-algebra of $\mathcal{A}$. The $E$-algebras $\mathcal{A}$ and
$\mathcal{D}_{\mathcal{A}}$ share a common centre
$Z(\mathcal{A})$; we say that $\mathcal{A}$ is a central
$E$-algebra if $Z(\mathcal{A}) = E$.
\par
Denote by Br$(E)$ the Brauer group of $E$, by $s(E)$ the class of
finite-dimensional central simple algebras over $E$, and by $d(E)$
the subclass of division algebras $D \in s(E)$. For each $A \in
s(E)$, let deg$(A)$, ind$(A)$ and exp$(A)$ be the degree, the
Schur index and the exponent of $A$, respectively. It is
well-known (cf. \cite{P}, Sect. 14.4) that exp$(A)$ divides
ind$(A)$ and shares with it the same set of prime divisors; also,
ind$(A) \mid {\rm deg}(A)$, and deg$(A) = {\rm ind}(A)$ if and
only if $A \in d(E)$. Note that ind$(B _{1} \otimes _{E} B _{2}) =
{\rm ind}(B _{1}){\rm ind}(B _{2})$ whenever $B _{1}, B _{2} \in
s(E)$ and g.c.d.$\{{\rm ind}(B _{1}), {\rm ind}(B _{2})\} = 1$;
equivalently, $B _{1} ^{\prime } \otimes _{E} B _{2} ^{\prime }
\in d(E)$, if $B _{j} ^{\prime } \in d(E)$, $j = 1, 2$, and
g.c.d.$\{{\rm deg}(B _{1} ^{\prime }), {\rm deg}(B _{2} ^{\prime
})\}$ $= 1$ (see \cite{P}, Sect. 13.4). Since Br$(E)$ is an
abelian torsion group, and ind$(A)$, exp$(A)$ are invariants both
of $A$ and its equivalence class $[A] \in {\rm Br}(E)$, these
results prove the classical primary tensor product decomposition
theorem, for an arbitrary $D \in d(E)$ (see \cite{P}, Sect. 14.4).
They also indicate that the study of the restrictions on the pairs
ind$(A)$, exp$(A)$, $A \in s(E)$, reduces to the special case of
$p$-primary pairs, for an arbitrary $p \in \mathbb{P}$. The Brauer
$p$-dimensions Brd$_{p}(E)$, $p \in \mathbb{P}$, contain essential
information on these restrictions. We say that Brd$_{p}(E) = n <
\infty $, for a given $p \in \mathbb{P}$, if $n$ is the least
integer $\ge 0$, for which ind$(A _{p}) \mid {\rm exp}(A _{p})
^{n}$ whenever $A _{p} \in s(E)$ and $[A _{p}]$ lies in the
$p$-component Br$(E) _{p}$ of Br$(E)$; if no such $n$ exists, we
put Brd$_{p}(E) = \infty $. For instance, Brd$_{p}(E) \le 1$, for
all $p \in \mathbb{P}$, if and only if deg$(D) = {\rm exp}(D)$, for
each $D \in d(E)$; Brd$_{p'}(E) = 0$, for some $p ^{\prime } \in
\mathbb{P}$, if and only if Br$(E) _{p'}$ is trivial.
\par
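For example, it is classical that, for the field $\mathbb{R}$ of
real numbers, Br$(\mathbb{R})$ is a group of order $2$, generated
by the class of Hamilton's quaternion algebra $\mathbb{H}$; hence,
Brd$_{2}(\mathbb{R}) = 1$ and Brd$_{p}(\mathbb{R}) = 0$, for every
odd $p \in \mathbb{P}$.
\par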
The absolute Brauer $p$-dimension of $E$ is defined to be the
supremum ${\rm abrd}_{p}(E)$ of $\{{\rm Brd}_{p}(R)\colon R \in {\rm Fe}(E)\}$. It
is a well-known consequence of Albert-Hochschild's theorem (cf.
\cite{S1}, Ch. II, 2.2) that abrd$_{p}(E) = 0$, $p \in
\mathbb{P}$, if and only if $E$ is a field of dimension $\le 1$,
i.e. Br$(R) = \{0\}$, for every finite extension $R/E$. When $E$
is perfect, we have dim$(E) \le 1$ if and only if the absolute
Galois group $\mathcal{G}_{E} = \mathcal{G}(E _{\rm sep}/E)$ is a
projective profinite group, in the sense of \cite{S1}. Also, by
class field theory, Brd$_{p}(E) = {\rm abrd}_{p}(E) = 1$, $p \in
\mathbb{P}$, if $E$ is a global or local field.
\par
\medskip
This paper is devoted to the study of locally finite-dimensional
(abbr., LFD) subalgebras of algebraic central division algebras
over a field $K$ with abrd$_{p}(K)$ finite, for every $p \in
\mathbb{P}$. Our research is motivated by the following
conjecture:
\par
\medskip
\begin{conj}
\label{conj1.1} Assume that $K$ is a field with {\rm abrd}$_{p}(K)
< \infty $, $p \in \mathbb{P}$, and let $R$ be an algebraic
central division $K$-algebra. Then $R$ possesses a $K$-subalgebra
$\widetilde R$ with the following properties:
\par
{\rm (a)} $\widetilde R$ is $K$-isomorphic to the tensor product
$\otimes _{p \in \mathbb{P}} R _{p}$, where $\otimes = \otimes
_{K}$ and $R _{p} \in d(K)$ is a $K$-subalgebra of $R$ of
$p$-primary degree $p ^{k(p)}$, for each $p \in \mathbb{P}$;
\par
{\rm (b)} Every $K$-subalgebra $\Delta $ of $R$, which is {\rm
LFD} of at most countable dimension, is embeddable in $\widetilde
R$ as a $K$-subalgebra; in particular, $K$ equals the centralizer
$C _{R}(\widetilde R) = \{c \in R\colon c\tilde r = \tilde rc,
$\tilde r \in \widetilde R\}$, and for each $p \in \mathbb{P}$,
$k(p)$ is the maximal integer for which there is $\rho _{p} \in R$
such that $p ^{k(p)}$ divides $[K(\rho _{p})\colon K]$.
\varepsilonnd{conj}
\par
\muedskip
For technical reasons, we restrict our considerations almost
exclusively to the special case where $K$ is a virtually perfect
field. By definition, this means that char$(K) = q$, and in case
$q > 0$, the degree $[K\colon K ^{q}]$ is finite, where $K ^{q} =
\{\alpha ^{q}\colon \alpha \in K\}$ is the subfield of $K$ formed
by the $q$-th powers of its elements. Our main results, stated as
Theorems \ref{theo3.1} and \ref{theo3.2}, prove Conjecture
\ref{conj1.1}, under the hypothesis that $K$ lies in a large class
of noncountable Henselian fields with virtually perfect residue
fields of arithmetic type (see Definition~2), including higher
local fields and maximally complete equicharacteristic fields.
\par
\medskip
\section{\bf Background and further motivation}
\par
\medskip
Let $E$ be a field of characteristic $q$ with Brd$_{p}(E) < \infty
$, for all $p \in \mathbb{P}$. It follows from well-known general
properties of the basic types of algebraic extensions (cf.
\cite{L}, Ch. V, Sects.~4 and 6) that Brd$_{p}(E ^{\prime }) \le
{\rm abrd}_{p}(E)$, for any algebraic extension $E ^{\prime }/E$,
and any $p \in \mathbb{P}$ not equal to $q$ (see also \cite{Ch4},
(1.3), and \cite{P}, Sect.~13.4). The question of whether
algebraic extensions $E ^{\prime }/E$ satisfy the same inequality
in case $p = q$ seems to be difficult. Fortunately, it is not an
obstacle to our considerations if the field $E$ is virtually
perfect. When this holds, finite extensions of $E$ are virtually
perfect fields as well; in case $q > 0$, this is implied by the
following result (cf. \cite{BH}, Lemma~2.12, or \cite{L}, Ch. V,
Sect. 6):
\par
\medskip\noindent
(2.1) $[E ^{\prime }\colon E ^{\prime q}] = [E\colon E ^{q}]$, for
every finite extension $E ^{\prime }/E$.
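\par
\medskip\noindent
To illustrate (2.1) with a standard example (added here for orientation, not drawn from the sources cited above): if $E = \mathbb{F}_{q}(t)$ is a rational function field over the prime field of characteristic $q$, then $E ^{q} = \mathbb{F}_{q}(t ^{q})$ and $1, t, \dots , t ^{q-1}$ form a basis of $E$ over $E ^{q}$, whence $[E\colon E ^{q}] = q$. By (2.1), every finite extension $E ^{\prime }/E$ likewise satisfies $[E ^{\prime }\colon E ^{\prime q}] = q$, so all such $E ^{\prime }$ are virtually perfect.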
\par
\medskip\noindent
Statement (2.1) enables one to deduce from Albert's theorem (cf.
\cite{A1}, Ch. VII, Theorem~28) and the former conclusion of
\cite{Ch6}, Lemma~4.1, that if $[E\colon E ^{q}] = q ^{\kappa }$,
where $\kappa \in \mathbb{N}$, then Brd$_{q}(E ^{\prime }) \le
\kappa $. It is easily verified (cf. \cite{Ch3}, Proposition~2.4)
that every virtually perfect field $K$ with abrd$_{p}(K) < \infty
$, for all $p \in \mathbb{P}$, is an FC-field, in the sense of
\cite{Ch1} and \cite{Ch3}. As shown in \cite{Ch1} (see also
\cite{Ch3}), this sheds light on the structure of central division
LFD-algebras over $K$, as follows:
\par
\smallskip
\begin{prop}
\label{prop2.1} Let $K$ be a virtually perfect field with {\rm
abrd}$_{p}(K) < \infty $, for each $p \in \mathbb{P}$, and suppose
that $R$ is a central division {\rm LFD}-algebra over $K$, i.e.
finitely-generated $K$-subalgebras of $R$ are finite-dimensional.
Then $R$ possesses a $K$-subalgebra $\widetilde R$ with the
following properties:
\par
{\rm (a)} $\widetilde R$ is $K$-isomorphic to the tensor product
$\otimes _{p \in \mathbb P} R _{p}$, where $\otimes = \otimes _{K}$
and $R _{p} \in d(K)$ is a $K$-subalgebra of $R$ of $p$-primary
degree $p ^{k(p)}$, for each $p$;
\par
{\rm (b)} Every $K$-subalgebra $\Delta $ of $R$ of at most
countable dimension is embeddable in $\widetilde R$ as a
$K$-subalgebra; hence, for each $p \in \mathbb{P}$, $k(p)$ is the
greatest integer for which there exists $r _{p} \in R$ of degree
$[K(r _{p})\colon K]$ divisible by $p ^{k(p)}$;
\par
{\rm (c)} $\widetilde R$ is isomorphic to $R$ if the
dimension $[R\colon K]$ is at most countable.
\end{prop}
\par
\medskip
By the main result of \cite{Ch1}, the conclusion of Proposition
\ref{prop2.1} remains valid whenever $K$ is an FC-field; in
particular, this holds if char$(K) = q$, abrd$_{p}(K) < \infty $,
$p \in \mathbb{P} \setminus \{q\}$, and in case $q > 0$, there
exists $\mu \in \mathbb{N}$, such that Brd$_{q}(K ^{\prime }) \le
\mu $, for every finite extension $K ^{\prime }/K$. As already
noted, the latter condition is satisfied if $q > 0$ and $[K\colon
K ^{q}] = q ^{\mu }$. It is worth mentioning, however, that the
existence of an upper bound $\mu $ as above is sometimes possible
in case $[K\colon K ^{q}] = \infty $. More precisely, for each $q
\in \mathbb{P}$, there are fields $E _{n}$, $n \in \mathbb{N}$,
with the following properties, for each $n$ (see Proposition
\ref{prop4.9}):
\par
\medskip\noindent
(2.2) char$(E _{n}) = q$, $[E _{n}\colon E _{n} ^{q}] = \infty $,
Brd$_{p}(E _{n}) = {\rm abrd}_{p}(E _{n}) = [n/2]$, for all $p \in
\mathbb{P} \setminus \{q\}$, and Brd$_{q}(E _{n} ^{\prime }) = n -
1$, for every finite extension $E _{n} ^{\prime }/E _{n}$.
\par
\medskip\noindent
In particular, FC-fields of characteristic $q > 0$ need not be
virtually perfect. Therefore, it should be pointed out that if
$F/E$ is a finitely-generated extension of transcendency degree
$\nu > 0$, where $E$ is a field of characteristic $q > 0$, then
Brd$_{q}(F) < \infty $ if and only if $[E\colon E ^{q}] < \infty $
(\cite{Ch6}, Theorem~2.2); when $[E\colon E ^{q}] = q ^{u} < \infty $,
we have $[F\colon F ^{q}] = q ^{u+\nu }$, which means that
abrd$_{q}(F) \le \nu + u$. This attracts interest in the following
open problem:
\par
\medskip
\begin{prob}
\label{prob2.2} Let $E$ be a field with {\rm abrd}$_{p}(E) <
\infty $, for some $p \in \mathbb{P}$ different from {\rm
char}$(E)$. Find whether {\rm abrd}$_{p}(F) < \infty $, for any
finitely-generated transcendental field extension $F/E$.
\end{prob}
\par
\medskip
Global fields and local fields are virtually perfect (cf.
\cite{Ef}, Example~4.1.3) of absolute Brauer $p$-dimension one,
for all $p \in \mathbb{P}$, so they satisfy the conditions of
Proposition \ref{prop2.1}. In view of a more recent result of
Matzri \cite{Mat}, Proposition \ref{prop2.1} also applies to any
field $K$ of finite Diophantine dimension, that is, to any field
$K$ of type $C _{m}$, in the sense of Lang, for some integer $m
\ge 0$. By type $C _{m}$, we mean that every nonzero homogeneous
polynomial $f$ of degree $d$ and with coefficients from $K$ has a
nontrivial zero over $K$, provided that $f$ depends on $n > d
^{m}$ algebraically independent variables over $K$. For example,
algebraically closed fields are of type $C _{0}$; finite fields
are of type $C _{1}$, by the Chevalley-Warning theorem (cf.
\cite{GiSz}, Theorem~6.2.6). Complete discrete valued fields with
algebraically closed residue fields are also of type $C _{1}$
(Lang's theorem, see \cite{S1}, Ch. II, 3.3), and by Koll\'{a}r's
theorem (see \cite{FJ}, Remark~21.3.7), so are pseudo
algebraically closed (abbr., PAC) fields of characteristic zero.
Perfect PAC fields of characteristic $q > 0$ are of type $C _{2}$
(cf. \cite{FJ}, Theorem~21.3.6).
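\par
\medskip\noindent
As a minimal illustration of the $C _{1}$ property (a standard observation, added here for the reader's convenience): over a finite field $K$, any nonzero homogeneous quadratic form $f(x, y, z)$ depends on $n = 3 > 2 = d ^{1}$ variables; since $f(0, 0, 0) = 0$ and, by the Chevalley-Warning theorem, the number of zeros of $f$ in $K ^{3}$ is divisible by the characteristic of $K$, the form $f$ must have a nontrivial zero over $K$.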
\par
\medskip
The present research is essentially a continuation of \cite{Ch3}.
Since the class of fields of finite Diophantine dimension
consists of virtually perfect fields and is closed under the
formation of both field extensions of finite transcendency degree
(by the Lang-Nagata-Tsen theorem \cite{Na}) and formal Laurent
power series fields in one variable \cite{Gr}, the above-noted
result of \cite{Mat} significantly extends the scope of
applicability of the main result of \cite{Ch1}. This gives rise to
the expectation, expressed by Conjecture \ref{conj1.1}, that it is
possible to reduce the research into noncommutative algebraic
central division algebras over finitely-generated extensions $E$
of fields $E _{0}$ with interesting arithmetic, algebraic,
diophantine, topological, or other specific properties to the
study of their finite-dimensional $E$-subalgebras (see \cite{Ch3},
Theorem~4.2 and Sect.~5, for examples of such a reduction). In
view of Problem \ref{prob2.2}, it is presently unknown whether one
may take as $E _{0}$ any virtually perfect field with abrd$_{p}(E
_{0}) < \infty $, $p \in \mathbb{P}$. Therefore, it should be
noted that the suggested approach to the study of algebraic
central division $K$-algebras can be followed whenever $K$ is a
virtually perfect field with abrd$_{p}(K) < \infty $, $p \in
\mathbb{P}$, over which Conjecture \ref{conj1.1} holds in general.
\par
\medskip
One may clearly restrict considerations of the main aspects of our
conjecture to the case where $[R\colon K] = \infty $. For reasons
clarified in the sequel, in this paper we assume further that $R$
belongs to the class of $K$-algebras of linearly bounded degree,
in the sense of Amitsur \cite{Am}. This class is defined as
follows:
\par
\medskip\noindent
{\bf Definition 1.} An algebraic algebra $\Psi $ over a field $F$ is
said to be an algebra of linearly (or locally) bounded degree
(briefly, an LBD-algebra), if the following condition holds, for any
finite-dimensional $F$-subspace $V$ of $\Psi $: there exists $n(V)
\in \mathbb{N}$, such that $[F(v)\colon F] \le n(V)$, for each $v
\in V$.
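\par
\medskip\noindent
For orientation (a remark added here, not drawn from \cite{Am}), note that every division {\rm LFD}-algebra $\Psi $ over $F$ is an LBD-algebra: if $V$ is a finite-dimensional $F$-subspace of $\Psi $ with basis $v _{1}, \dots , v _{m}$, then the $F$-subalgebra of $\Psi $ generated by $v _{1}, \dots , v _{m}$ is finite-dimensional, say of dimension $d$ over $F$, and it contains the field $F(v) = F[v]$, for each $v \in V$; hence, one may take $n(V) = d$.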
\par
\medskip
It is not known whether every algebraic associative division
algebra $\Psi $ over a field $F$ is LFD. This problem was
posed by Kurosh in \cite{Ku} as a division ring-theoretic analogue
of the Burnside problem for torsion (periodic) groups. Evidently,
if the stated problem is solved affirmatively, then Conjecture
\ref{conj1.1} will turn out to be a restatement of Proposition
\ref{prop2.1}, in case $K$ is virtually perfect. The problem will
be solved in the same direction if and only if the answers to the
following two questions are positive:
\par
\medskip\noindent
{\bf Questions 2.3.} {\it Let $F$ be a field.
\par
{\rm (a)} Find whether algebraic division $F$-algebras are {\rm
LBD}-algebras over $F$.
\par
{\rm (b)} Find whether division {\rm LBD}-algebras over $F$ are
{\rm LFD}.}
\par
\medskip
Although Questions~2.3~(a) and (b) are closely related to the
Kurosh problem, each of them is of interest in its own right. For
example, the main results of \cite{Ch3} indicate that an
affirmative answer to Question~2.3~(a) would prove Conjecture
\ref{conj1.1} in the special case where $K$ is a global or local
field, or more generally, a virtually perfect field of arithmetic
type, in the following sense:
\par
\medskip\noindent
{\bf Definition 2.} A field $K$ is said to be of arithmetic type, if
abrd$_{p}(K)$ is finite and abrd$_{p}(K(p)) = 0$, for each $p \in
\mathbb{P}$.
\par
\medskip\noindent
It is of primary importance for the present research that the
answer to Question~2.3~(a) is affirmative when $F$ is a
noncountable field. Generally, by Amitsur's theorem \cite{Am},
algebraic associative algebras over such $F$ are LBD-algebras.
Furthermore, it follows from Amitsur's theorem that if $A$ is an
arbitrary LBD-algebra over any field $E$, then the tensor product
$A \otimes _{E} E ^{\prime }$ is an LBD-algebra over any extension
$E ^{\prime }$ of $E$ (see \cite{Am}). These results are
repeatedly used (without an explicit reference) in proving the
main results of this paper.
\par
\medskip
\section{\bf Statement of the main result}
\par
\medskip
Assume that $K$ is a virtually perfect field with abrd$_{p}(K) <
\infty $, for all $p \in \mathbb{P}$, and let $R$ be an algebraic
central division $K$-algebra. Evidently, if $R$ possesses a
$K$-subalgebra $\widetilde R$ with the properties described by
Conjecture \ref{conj1.1}, then there is a sequence $k(p)$, $p \in
\mathbb{P}$, of integers $\ge 0$, such that $p ^{k(p)+1}$ does not
divide $[K(r)\colon K]$, for any $r \in R$, $p \in \mathbb{P}$.
The existence of such a sequence is guaranteed if $R$ is an
LBD-algebra over $K$ (cf. \cite{Ch3}, Lemma~3.9). When $k(p) =
k(p) _{R}$ is the minimal integer satisfying the stated condition,
it is called a $p$-power of $R/K$. In this setting, the notion of
a $p$-splitting field of $R/K$ is defined as follows:
\par
\medskip\noindent
{\bf Definition 3.} Let $K ^{\prime }$ be a finite extension of $K$,
$R ^{\prime }$ the underlying (central) division $K ^{\prime
}$-algebra of $R \otimes _{K} K ^{\prime }$, and $\gamma (p)$ the
integer singled out by the Wedderburn-Artin $K ^{\prime
}$-isomorphism $R \otimes _{K} K ^{\prime } \cong M _{\gamma (p)}(R
^{\prime })$. We say that $K ^{\prime }$ is a $p$-splitting field of
$R/K$ if $p ^{k(p)}$ divides $\gamma (p)$.
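\par
\medskip\noindent
To see Definition 3 at work in the simplest situation (an illustration added here, not part of the cited sources): suppose $R \in d(K)$ has finite degree $p ^{k}$. By the Noether-K\"othe theorem, $R$ contains a separable maximal subfield $K ^{\prime } = K(r)$ with $[K ^{\prime }\colon K] = p ^{k}$, whence $k(p) = k$; moreover, $K ^{\prime }$ splits $R$, i.e. $R \otimes _{K} K ^{\prime } \cong M _{p ^{k}}(K ^{\prime })$. Here $R ^{\prime } = K ^{\prime }$ and $\gamma (p) = p ^{k}$, so $K ^{\prime }$ is a $p$-splitting field of $R/K$.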
\par
\medskip\noindent
Note that the class of $p$-splitting fields of a central division
LBD-algebra $R$ over a virtually perfect field $K$ with
abrd$_{p'}(K) < \infty $, $p' \in \mathbb{P}$, is closed under the
formation of finite extensions. Indeed, it is well-known (cf.
\cite{He}, Lemma~4.1.1) that $R \otimes _{K} K ^{\prime }$ is a
central simple $K ^{\prime }$-algebra, for any field extension $K
^{\prime }/K$. This algebra is a left (and right) vector space
over $R$ of dimension equal to $[K ^{\prime }\colon K]$, which
implies it is Artinian whenever $[K ^{\prime }\colon K]$ is
finite. As $R \otimes _{K} K _{2}$ and $(R \otimes _{K} K _{1})
\otimes _{K _{1}} K _{2}$ are isomorphic $K _{2}$-algebras, for
any tower of field extensions $K \subseteq K _{1} \subseteq K
_{2}$ (cf. \cite{P}, Sect. 9.4, Corollary~(a)), these observations
enable one to deduce our assertion about $p$-splitting fields of
$R/K$ from the Wedderburn-Artin theorem (and well-known properties
of tensor products of matrix algebras, see \cite{P}, Sect. 9.3,
Corollary~(b)). Further results on $k(p)$ and the $p$-power of the
underlying division $K ^{\prime }$-algebra of $R \otimes _{K} K
^{\prime }$, obtained in the case where $K ^{\prime }/K$ is a
finite extension, are presented at the beginning of Section~5 (see
Lemma \ref{lemm5.1}). They have been proved in \cite{Ch3}, Sect.
3, under the extra hypothesis that dim$(K _{\rm sol}) \le 1$,
where $K _{\rm sol}$ is the compositum of finite Galois extensions
of $K$ in $K _{\rm sep}$ with solvable Galois groups. These
results partially generalize well-known facts about
finite-dimensional central division algebras over arbitrary
fields, leaving open the question of whether the validity of the
derived information depends on the formulated hypothesis (see
Remark \ref{rema5.2}).
\par
\medskip
The results of Section 5, combined with Amitsur's theorem referred
to in Section 2, form the basis of the proof of the main results
of the present research. Our proof also relies on the theory of
Henselian fields and their finite-dimensional division algebras
(cf. \cite{JW}). Taking into consideration the generality of
Amitsur's theorem, we recall that the class $\mathcal{HNF}$ of
Henselian noncountable fields contains every maximally complete
field, i.e. any nontrivially valued field $(K, v)$ which does not
admit a valued proper extension with the same value group and
residue field. For instance, $\mathcal{HNF}$ contains the
generalized formal power series field $K _{0}((\Gamma ))$ over a
field $K _{0}$, where $\Gamma $ is a nontrivial ordered abelian
group, and $v$ is the standard valuation of $K _{0}((\Gamma ))$
trivial on $K _{0}$ (see \cite{Ef}, Example~4.2.1 and
Theorem~18.4.1). Moreover, for each $m \in \mathbb{N}$,
$\mathcal{HNF}$ contains every complete $m$-discretely valued
field with respect to its standard $\mathbb{Z} ^{m}$-valued
valuation, where $\mathbb{Z} ^{m}$ is viewed as an ordered abelian
group by the inverse-lexicographic ordering.
\par
\medskip
By a complete $1$-discretely valued field, we mean a complete
discrete valued field, and when $m \ge 2$, a complete
$m$-discretely valued field with an $m$-th residue field $K _{0}$
means a field $K _{m}$ which is complete with respect to a
discrete valuation $w _{0}$, such that the residue field $\widehat
K _{m} := K _{m-1}$ of $(K _{m}, w _{0})$ is a complete $(m -
1)$-discretely valued field with an $(m - 1)$-th residue field $K
_{0}$. If $m \ge 2$ and $v _{m-1}$ is the standard $\mathbb{Z}
^{m-1}$-valued valuation of $K _{m-1}$, then the composite
valuation $v _{m} = v _{m-1} \ast w _{0}$ is the standard
$\mathbb{Z} ^{m}$-valued valuation of $K _{m}$. It is known that
$v _{m}$ is Henselian (cf. \cite{TW}, Proposition~A.15) and $K
_{0}$ equals the residue field of $(K _{m}, v _{m})$. This applies
to the important special case where $K _{0}$ is a finite field,
i.e. $K _{m}$ is an $m$-dimensional local field, in the sense of
Kato and Parshin.
\par
\medskip
The purpose of this paper is to prove Conjecture \ref{conj1.1} for
two types of Henselian fields. Our first main result can be stated
as follows:
\par
\medskip
\begin{theo}
\label{theo3.1} Let $K = K _{m}$ be a complete $m$-discretely
valued field with a virtually perfect $m$-th residue field $K
_{0}$, for some integer $m > 0$, and let $R$ be an algebraic
central division $K$-algebra. Suppose that {\rm char}$(K _{0}) =
q$ and $K _{0}$ is of arithmetic type. Then $R$ possesses a
$K$-subalgebra $\widetilde R$ with the properties claimed by
Conjecture \ref{conj1.1}.
\end{theo}
\par
\medskip
When char$(K _{m}) = {\rm char}(K _{0})$, $(K _{m}, v _{m})$ is
isomorphic to the iterated formal Laurent power series field
$\mathcal{K} _{m} := K _{0}((X _{1})) \dots ((X _{m}))$ in $m$
variables, considered with its standard $\mathbb{Z} ^{m}$-valued
valuation, say $\tilde w _{m}$, acting trivially on $K _{0}$; in
particular, this is the case where char$(K _{0}) = 0$. It is known
that $(\mathcal{K} _{m}, \tilde w _{m})$ is maximally complete
(cf. \cite{Ef}, Sect. 18.4). As $K _{0}$ is a virtually perfect
field, so is $\mathcal{K} _{m}$; this enables one to prove
the assertion of Theorem \ref{theo3.1} by applying our second main
result to $(\mathcal{K} _{m}, \tilde w _{m})$:
\par
\medskip
\begin{theo}
\label{theo3.2} Let $(K, v)$ be a Henselian field with $\widehat
K$ of arithmetic type. Assume also that {\rm char}$(K) = {\rm
char}(\widehat K) = q$, $K$ is virtually perfect and {\rm
abrd}$_{p}(K)$ is finite, for each $p \in \mathbb{P} \setminus
\{q\}$. Then every central division {\rm LBD}-algebra $R$ over $K$
has a central $K$-subalgebra $\widetilde R$ admissible by
Conjecture \ref{conj1.1}.
\end{theo}
\par
\medskip
The assertions of Theorems \ref{theo3.1} and \ref{theo3.2} are
known in case $R \in d(K)$. When $[R\colon K] = \infty $, they can
be deduced from the following lemma, by the method of proving
Theorem~4.1 of \cite{Ch3} (see Remark \ref{rema5.7} and Section
9).
\par
\medskip
\begin{lemm}
\label{lemm3.3} Assume that $K$ is a field and $R$ is a central
division $K$-algebra, which satisfy the conditions of Theorem
\ref{theo3.1} or Theorem \ref{theo3.2}. Then, for each $p \in
\mathbb{P}$, $K$ has a finite extension $E _{p}$ in $K(p)$ that is
a $p$-splitting field of $R/K$; equivalently, $p$ does not divide
$[E _{p}(\rho _{p})\colon E _{p}]$, for any element $\rho _{p}$ of
the underlying division $E _{p}$-algebra $\mathcal{R} _{p}$ of $R
\otimes _{K} E _{p}$.
\end{lemm}
\par
\medskip
The fulfillment of the conditions of Lemma \ref{lemm3.3} ensures
that dim$(K _{\rm sol}) \le 1$ (see Lemmas \ref{lemm6.4} and
\ref{lemm6.5} below). This plays an essential role in the proof of
Theorems \ref{theo3.1} and \ref{theo3.2}, which also relies on the
following known result:
\par
\medskip\noindent
(3.1) Given an HDV-field $(K, v)$, the scalar extension map Br$(K)
\to {\rm Br}(K _{v})$, where $K _{v}$ is a completion of $K$ with
respect to the topology of $v$, is an injective homomorphism which
preserves Schur indices and exponents (cf. \cite{Cohn},
Theorem~1); hence, Brd$_{p'}(K) \le {\rm Brd}_{p'}(K _{v})$, for
every $p' \in \mathbb P$.
\par
\medskip
The earliest draft of this paper is contained in the manuscript
\cite{Ch2}. Here we extend the scope of the results of \cite{Ch2},
which were obtained before the theory of division algebras over
Henselian fields in \cite{JW}, together with the progress on
absolute Brauer $p$-dimensions made in \cite{Mat}, \cite{PS} and
other papers, allowed us to consider the topic of the present
research in the desired generality (including, for example,
$m$-dimensional local fields which are not of arithmetic type, in
the sense of Definition~2, for any $m \ge 2$; see Lemmas
\ref{lemm4.8}, \ref{lemm7.4} and Remark \ref{rema7.5}).
\par
\medskip
The basic notation, terminology and conventions kept in this paper
are standard and virtually the same as in \cite{He}, \cite{L},
\cite{P} and \cite{Ch6}. Throughout, Brauer groups, value groups
and ordered abelian groups are written additively, Galois groups
are viewed as profinite with respect to the Krull topology, and by
a profinite group homomorphism, we mean a continuous one. For any
algebra $A$, we consider only subalgebras containing its unit.
Given a field $E$, $E ^{\ast }$ denotes its multiplicative group,
$E ^{\ast n} = \{a ^{n}\colon \ a \in E ^{\ast }\}$, for each $n
\in \mathbb N$, and for any $p \in \mathbb P$, $_{p}{\rm Br}(E)$
stands for the maximal subgroup $\{b _{p} \in {\rm Br}(E)\colon \
pb _{p} = 0\}$ of Br$(E)$ of period dividing $p$. We denote by
$I(E ^{\prime }/E)$ the set of intermediate fields of any field
extension $E ^{\prime }/E$, and by Br$(E ^{\prime }/E)$ the
relative Brauer group of $E ^{\prime }/E$ (the kernel of the
scalar extension map Br$(E) \to {\rm Br}(E ^{\prime })$). In case
char$(E) = q > 0$, we write $[a, b) _{E}$ for the $q$-symbol
$E$-algebra generated by elements $\xi $ and $\eta $, such
that $\eta \xi = (\xi + 1)\eta $, $\xi ^{q} - \xi = a \in
E$ and $\eta ^{q} = b \in E ^{\ast }$.
\par
\medskip
Here is an overview of the rest of this paper. Section 4 includes
preliminaries on Henselian fields used in the sequel. It also
shows that a Henselian field $(K, v)$ satisfies the condition
abrd$_{p}(K) < \infty $, for some $p \in \mathbb{P}$ not equal to
char$(\widehat K)$, if and only if abrd$_{p}(\widehat K) < \infty
$ and the subgroup $pv(K)$ of the value group $v(K)$ is of finite
index. When $(K, v)$ is maximally complete with char$(K) = q > 0$,
we prove in addition that abrd$_{q}(K) < \infty $ if and only if
$K$ is virtually perfect. These results fully characterize
generalized formal power series fields (and, more generally,
maximally complete equicharacteristic fields, see \cite{Ka},
page~320, and \cite{Ef}, Theorem~18.4.1) of finite absolute Brauer
$p$-dimensions, for all $p \in \mathbb{P}$, and so prove their
admissibility by Proposition \ref{prop2.1}. Section 5 presents the
main ring-theoretic and Galois cohomological ingredients of the
proofs of Lemma \ref{lemm3.3} and our main results. Most of them
have been extracted from \cite{Ch3}, and are therefore stated
here without proof. As noted above, we also show in Section~5 how
to deduce Theorems \ref{theo3.1} and \ref{theo3.2} from Lemma
\ref{lemm3.3} by the method of proving \cite{Ch3}, Theorem~4.1. In
Section 6 we prove that Henselian fields $(K, v)$ with char$(K) =
q > 0$ satisfy abrd$_{q}(K(q)) = 0$, and so do HDV-fields of
residual characteristic $q$. Section 7 collects
valuation-theoretic ingredients of the proof of Lemma
\ref{lemm3.3}; these include a tame version of the noted lemma,
stated as Lemma \ref{lemm7.6}. Sections~8 and 9 are devoted to the
proof of Lemma \ref{lemm3.3}, which is done by adapting to
Henselian fields the method of proving \cite{Ch3}, Lemma~8.3.
Specifically, our proof relies on Lemmas \ref{lemm7.3} and
\ref{lemm7.6}, as well as on results of Sections~5 and 6. In the
setting of Theorem \ref{theo3.1}, when $m \ge 2$ and char$(K _{m})
= 0 < q$, we use Lemma \ref{lemm4.3} at a crucial point of the
proof.
\par
\medskip
\section{\bf Preliminaries on Henselian fields and their
finite-dimensional division algebras and absolute Brauer
$p$-dimensions}
\par
\medskip
Let $K$ be a field with a nontrivial valuation $v$, $O _{v}(K) =
\{a \in K\colon \ v(a) \ge 0\}$ the valuation ring of $(K, v)$, $M
_{v}(K) = \{\mu \in K\colon \ v(\mu ) > 0\}$ the maximal ideal of
$O _{v}(K)$, $O _{v}(K) ^{\ast } = \{u \in K\colon \ v(u) = 0\}$
the multiplicative group of $O _{v}(K)$, $v(K)$ and $\widehat K =
O _{v}(K)/M _{v}(K)$ the value group and the residue field of $(K,
v)$, respectively; put $\nabla _{0}(K) = \{\alpha \in K\colon \alpha
- 1 \in M _{v}(K)\}$. We say that $v$ is Henselian if it extends
uniquely, up to equivalence, to a valuation $v _{L}$ on each
algebraic extension $L$ of $K$. This holds, for example, if $K = K
_{v}$ and $v(K)$ is an ordered subgroup of the additive group
$\mathbb R$ of real numbers (cf. \cite{L}, Ch. XII). Maximally
complete fields are also Henselian, since Henselizations of valued
fields are their immediate extensions (see, e.g., \cite{Ef},
Proposition~15.3.7, or \cite{TW}, Corollary~A.28). In order that $v$
be Henselian, it is necessary and sufficient that any of the
following two equivalent conditions is fulfilled (cf. \cite{Ef},
Theorem~18.1.2, or \cite{TW}, Theorem~A.14):
\par
\medskip\noindent
(4.1) (a) Given a polynomial $f(X) \in O _{v}(K) [X]$ and an
element $a \in O _{v}(K)$, such that $2v(f ^{\prime }(a)) <
v(f(a))$, where $f ^{\prime }$ is the formal derivative of $f$,
there is a zero $c \in O _{v}(K)$ of $f$ satisfying the equality
$v(c - a) = v(f(a)/f ^{\prime }(a))$;
\par
(b) For each normal extension $\Omega /K$, $v ^{\prime }(\tau (\mu ))
= v ^{\prime }(\mu )$ whenever $\mu \in \Omega $, $v ^{\prime }$ is
a valuation of $\Omega $ extending $v$, and $\tau $ is a
$K$-automorphism of $\Omega $.
\medskip\noindent
When $v$ is Henselian, so is $v _{L}$, for any algebraic field
extension $L/K$. In this case, we put $O _{v}(L) = O _{v
_{L}}(L)$, $M _{v}(L) = M _{v_{L}}(L)$, $v(L) = v _{L}(L)$, and
denote by $\widehat L$ the residue field of $(L, v _{L})$.
Clearly, $\widehat L/\widehat K$ is an algebraic extension and
$v(K)$ is an ordered subgroup of $v(L)$; the index $e(L/K)$ of
$v(K)$ in $v(L)$ is called the ramification index of $L/K$. By
Ostrowski's theorem (see \cite{Ef}, Sects.~17.1, 17.2), if
$[L\colon K]$ is finite, then $[\widehat L\colon \widehat
K]e(L/K)$ divides $[L\colon K]$, and the integer $[L\colon
K][\widehat L\colon \widehat K] ^{-1}e(L/K) ^{-1}$ is not
divisible by any $p \in \mathbb P$, $p \neq {\rm char}(\widehat
K)$. The extension $L/K$ is defectless, i.e. $[L\colon K] =
[\widehat L\colon \widehat K]e(L/K)$, in the following three
cases:
\par
\medskip\noindent
(4.2) (a) If char$(\widehat K) \nmid [L\colon K]$ (apply
Ostrowski's theorem);
\par
(b) If $(K, v)$ is HDV and $L/K$ is separable (see \cite{TW},
Theorem~A.12);
\par
(c) When $(K, v)$ is maximally complete (cf. \cite{Wa},
Theorem~31.22).
\par
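\medskip\noindent
For a concrete instance of these notions (an illustration added here, not taken from the cited sources): let $K = \mathbb{Q} _{p}$ with its $p$-adic valuation $v$ and $L = K(\sqrt{p})$. Since $X ^{2} - p$ is an Eisenstein polynomial, $[L\colon K] = 2$ and $v _{L}(\sqrt{p}) = 1/2$, whence $e(L/K) = 2$ and $\widehat L = \widehat K = \mathbb{F} _{p}$; thus $[L\colon K] = [\widehat L\colon \widehat K]e(L/K)$, so $L/K$ is defectless and, in the terminology recalled below, totally ramified.
\par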
\medskip\noindent
Assume that $(K, v)$ is a Henselian field and $R/K$ is a finite
extension. We say that $R/K$ is inertial if $[R\colon K] =
[\widehat R\colon \widehat K]$ and $\widehat R/\widehat K$ is a
separable extension; $R/K$ is said to be totally ramified if
$e(R/K) = [R\colon K]$. Inertial extensions of $K$ have the
following useful properties (see \cite{TW}, Theorem~A.23,
Proposition~A.17 and Corollary~A.25):
\par
\medskip
\begin{lemm}
\label{lemm4.1} Let $(K, v)$ be a Henselian field. Then:
\par
{\rm (a)} An inertial extension $R ^{\prime }/K$ is Galois if and
only if $\widehat R ^{\prime }/\widehat K$ is Galois. When this
holds, the Galois groups $\mathcal{G}(R ^{\prime }/K)$ and
$\mathcal{G}(\widehat R ^{\prime }/\widehat K)$ are isomorphic.
\par
{\rm (b)} The compositum $K _{\rm ur}$ of inertial extensions of
$K$ in $K _{\rm sep}$ is a Galois extension of $K$ with
$\mathcal{G}(K _{\rm ur}/K) \cong \mathcal{G}_{\widehat K}$.
\par
{\rm (c)} Finite extensions of $K$ in $K _{\rm ur}$ are inertial,
and the natural mapping of $I(K _{\rm ur}/K)$ into $I(\widehat K
_{\rm sep}/\widehat K)$ is bijective.
\par
{\rm (d)} For each $K _{1} \in {\rm Fe}(K)$, the intersection $K _{0}
= K _{1} \cap K _{\rm ur}$ equals the maximal inertial extension of
$K$ in $K _{1}$; in addition, $\widehat K _{0} = \widehat K _{1}$.
\end{lemm}
\par
\medskip\noindent
When $(K, v)$ is Henselian, the finite extension $R/K$ is called
tamely ramified, if char$(\widehat K) \nmid e(R/K)$ and $\widehat
R/\widehat K$ is separable (this holds if char$(\widehat K) \nmid
[R\colon K]$). The next lemma gives an account of some basic
properties of tamely ramified extensions of $K$ in $K _{\rm sep}$
(see \cite{MT}, and \cite{TW}, Theorems~A.9 (i),(ii) and A.24):
\par
\medskip
\begin{lemm}
\label{lemm4.2} Let $(K, v)$ be a Henselian field with {\rm
char}$(\widehat K) = q$, $K _{\rm tr}$ the compositum of tamely
ramified extensions of $K$ in $K _{\rm sep}$, $\mathbb{P} ^{\prime
} = \mathbb{P} \setminus \{q\}$, and let $\hat \varepsilon _{p}$
be a primitive $p$-th root of unity in $\widehat K _{\rm sep}$,
for each $p \in \mathbb{P} ^{\prime }$. Then $K _{\rm tr}/K$ is a
Galois extension with $\mathcal{G}(K _{\rm tr}/K _{\rm ur})$
abelian, and the following holds:
\par
{\rm (a)} All finite extensions of $K$ in $K _{\rm tr}$ are tamely
ramified.
\par
{\rm (b)} There is $T(K) \in I(K _{\rm tr}/K)$ with $T(K) \cap K
_{\rm ur} = K$ and $T(K).K _{\rm ur} = K _{\rm tr}$; hence, finite
extensions of $K$ in $T(K)$ are tamely and totally ramified.
\par
{\rm (c)} The field $T(K)$ singled out in {\rm (b)} is isomorphic
as a $K$-algebra to $\otimes _{p \in \mathbb{P}'} T
_{p}(K)$, where $\otimes = \otimes _{K}$, and for each $p$, $T
_{p}(K) \in I(T(K)/K)$ and every finite extension of $K$ in $T
_{p}(K)$ is of $p$-primary degree; in particular, $T(K)$ equals
the compositum of the fields $T _{p}(K)$, $p \in \mathbb{P}
^{\prime }$.
\par
{\rm (d)} With notation being as in {\rm (c)}, $T _{p}(K) \neq K$,
for some $p \in \mathbb{P} ^{\prime }$, if and only if $v(K) \neq
pv(K)$; when this holds, $T _{p}(K) \in I(K(p)/K)$ if and only if
$\hat \varepsilon _{p} \in \widehat K$ (equivalently, if and only
if $K$ contains a primitive $p$-th root of unity).
\end{lemm}
\par
\medskip
The Henselian property of $(K, v)$ guarantees that $v$ extends to
a unique, up to equivalence, valuation $v _{D}$ on each $D \in
d(K)$ (cf. \cite{TW}, Sect. 1.2.2). Put $v(D) = v _{D}(D)$ and
denote by $\widehat D$ the residue division ring of $(D, v _{D})$.
It is known that $\widehat D$ is a division $\widehat K$-algebra,
$v(D)$ is an ordered abelian group and $v(K)$ is an ordered
subgroup of $v(D)$ of finite index $e(D/K)$ (called the
ramification index of $D/K$). Note further that $[\widehat D\colon
\widehat K] < \infty $, and by the Ostrowski-Draxl theorem (cf.
\cite{Dr2} and \cite{TW}, Propositions~4.20, 4.21), $[\widehat
D\colon \widehat K]e(D/K) \mid [D\colon K]$ and $[D\colon
K][\widehat D\colon \widehat K] ^{-1}e(D/K) ^{-1}$ has no prime
divisor $p \neq {\rm char}(\widehat K)$. The division $K$-algebra
$D$ is called inertial if
$[D\colon K] = [\widehat D\colon \widehat K]$ and $\widehat D \in
d(\widehat K)$; it is called totally ramified if
$[D\colon K] = e(D/K)$. We say that $D/K$ is defectless if $[D\colon
K] = [\widehat D\colon \widehat K]e(D/K)$; this holds in the
following two cases:
\par
\medskip
\noindent (4.3) (a) If char$(\widehat K) \nmid [D\colon K]$ (apply
the Ostrowski-Draxl theorem);
\par
(b) If $(K, v)$ is an HDV-field (see \cite{TY}, Proposition~2.2).
\par
\medskip\noindent
The algebra $D \in d(K)$ is called nicely semi-ramified (abbr.,
NSR), in the sense of \cite{JW}, if $e(D/K) = [\widehat D\colon
\widehat K] = {\rm deg}(D)$ and $\widehat D/\widehat K$ is a
separable field extension. As shown in \cite{JW}, when this holds,
$\widehat D/\widehat K$ is a Galois extension,
$\mathcal{G}(\widehat D/\widehat K)$ is isomorphic to the quotient
group $v(D)/v(K)$, and $D$ decomposes into a tensor product of
cyclic NSR-algebras over $K$ (see also \cite{TW},
Propositions~8.40 and 8.41). The result referred to allows us to
prove our next lemma, stated as follows:
\par
\muedskip
\begin{lemm}
\label{lemm4.3} Let $(K, v)$ be a Henselian field, such that {\rm
abrd}$_{p}(\widehat K(p)) = 0$, for some $p \in \mathbb{P}$ not
equal to {\rm char}$(\widehat K)$. Then every $\Delta _{p} \in
d(K)$ of $p$-primary degree has a splitting field that is a finite
extension of $K$ in $K(p)$.
\end{lemm}
\par
\medskip
Lemma \ref{lemm4.3} shows that if $R$ is a central division
LBD-algebra over a field $K$ satisfying the conditions of some of
the main results of the present paper, and if there is a
$K$-subalgebra $\widetilde R$ of $R$ with the properties claimed
by Conjecture \ref{conj1.1}, then for each $p \in \mathbb{P}$ with
at most one exception, $K$ has a finite extension $E _{p}$ in
$K(p)$ that is a $p$-splitting field of $R/K$ (see also Lemma
\ref{lemm5.3} (c) below). This leads to the idea of proving
Theorems \ref{theo3.1} and \ref{theo3.2} on the basis of Lemma
\ref{lemm3.3} (for further support of the idea and a step towards
its implementation, see Theorem \ref{theo6.3}).
\par
\vskip0.4truecm\noindent {\it Proof of Lemma \ref{lemm4.3}.} The
assertion is obvious if $\Delta _{p}$ is an NSR-algebra over $K$,
or more generally, if $\Delta _{p}$ is Brauer equivalent to a
tensor product of cyclic division $K$-algebras of $p$-primary
degrees. When the $K$-algebra $\Delta _{p}$ is inertial, we have
$\widehat \Delta _{p} \in d(\widehat K)$ (cf. \cite{JW},
Theorem~2.8), so our conclusion follows from the fact that
abrd$_{p}(\widehat K(p)) = 0$ and $\widehat {K(p)} = \widehat
K(p)$, which ensures
\par\vskip0.048truecm\noindent
that $[\Delta _{p}] \in {\rm Br}(K _{\rm ur} \cap K(p)/K)$. Since,
by \cite{JW}, Lemmas~5.14 and 6.2,
\par\vskip0.04truecm\noindent
$[\Delta _{p}] = [I _{p} \otimes _{K} N _{p} \otimes _{K} T
_{p}]$, for some inertial $K$-algebra $I _{p}$, an NSR-algebra $N
_{p}/K$, and a tensor product $T _{p}$ of totally ramified cyclic
division $K$-algebras,
\par\vskip0.04truecm\noindent
such that $[I _{p}], [N _{p}]$ and $[T _{p}] \in {\rm Br}(K)
_{p}$, these observations prove Lemma \ref{lemm4.3}.
\muedskip
The following two lemmas give a valuation-theoretic characterization
of those Henselian virtually perfect fields $(K, v)$ with char$(K) =
{\rm char}(\widehat K)$, which satisfy the condition abrd$_{p}(K) <
\infty $, for some $p \in \muathbb{P}$.
\par
\muedskip
\betaegin{lemm}
\lambdaambdabel{lemm4.4} Let $(K, v)$ be a Henselian field. Then {\rm
abrd}$_{p}(K) < \infty $, for a given $p \in \muathbb{P}$ different
from {\rm char}$(\widehat K)$, if and only if {\rm
abrd}$_{p}(\widehat K) < \infty $ and the quotient group
$v(K)/pv(K)$ is finite.
\varepsilonnd{lemm}
\par
\smallskip
\begin{proof}
We have abrd$_{p}(\widehat K) \le {\rm abrd}_{p}(K)$ (by
\cite{JW}, Theorem~2.8, and \cite{TW}, Theorem~A.23), so our
assertion can be deduced from \cite{Ch8}, Proposition~6.1,
Theorem~5.9 and Remark~6.2 (or \cite{Ch8}, (3.3) and Theorem~2.3).
\end{proof}
\par
\medskip
Lemma \ref{lemm4.4} and our next lemma show that a maximally
complete field $(K, v)$ with char$(K) = {\rm char}(\widehat K)$
satisfies abrd$_{p}(K) < \infty $, $p \in \mathbb{P}$, if and only
if $\widehat K$ is virtually perfect and, for each $p \in
\mathbb{P}$, abrd$_{p}(\widehat K) < \infty $ and $v(K)/pv(K)$ is
finite. When this holds, $K$ is virtually perfect as well (see
\cite{Ch8}, Lemma~3.2).
\par
\medskip
\begin{lemm}
\label{lemm4.5} Let $(K, v)$ be a Henselian field with {\rm
char}$(\widehat K) = q > 0$. Then:
\par
{\rm (a)} $[\widehat K\colon \widehat K ^{q}]$ and $v(K)/qv(K)$
are finite, provided that {\rm Brd}$_{q}(K) < \infty $;
\par
{\rm (b)} The inequality {\rm abrd}$_{q}(K) < \infty $ holds, in
case $\widehat K$ is virtually perfect and one of the following
two conditions is satisfied:
\par
{\rm (i)} $v$ is discrete;
\par
{\rm (ii)} {\rm char}$(K) = q$ and $K$ is virtually perfect; in
particular, this occurs if {\rm char}$(K) = q$, $v(K)/qv(K)$ is
finite and $(K, v)$ is maximally complete.
\end{lemm}
\par
\medskip
\begin{proof}
Statement (a) is implied by \cite{Ch7}, Proposition~3.4, so one
may assume that $[\widehat K\colon \widehat K ^{q}] = q ^{\mu }$
and $v(K)/qv(K)$ has order $q ^{\tau }$, for some integers $\mu
\ge 0$, $\tau \ge 0$. We prove statement (b) of the lemma. Suppose
first that $v$ is discrete. Then Brd$_{q}(K) \le {\rm Brd}_{q}(K
_{v})$, by (3.1), so it is sufficient to prove that abrd$_{q}(K) <
\infty $, provided that $K = K _{v}$. If char$(K) = 0$, this is
contained in \cite{PS}, Theorem~2, and when char$(K) = q$, the
finiteness of abrd$_{q}(K)$ is obtained as a special case of Lemma
\ref{lemm4.5} (b) (ii) and the fact that $(K, v)$ is maximally
complete (cf. \cite{L}, Ch. XII, page~488). It remains for us to
prove Lemma \ref{lemm4.5} (b) (ii). Our former assertion follows
from \cite{A1}, Ch. VII, Theorem~28, statement (2.1) and
\cite{Ch6}, Lemma~4.1 (which ensure that abrd$_{q}(K) \le {\rm
log}_{q}[K\colon K ^{q}]$). Observe finally that if $(K, v)$ is
maximally complete with char$(K) = q$, then $[K\colon K ^{q}] = q
^{\mu + \tau }$. This can be deduced from (4.2) (c), since $(K
^{q}, v _{q})$ is maximally complete, $v _{q}(K ^{q}) = qv(K)$ and
$\widehat K ^{q}$ is the residue field of $(K ^{q}, v _{q})$,
where $v _{q}$ is the valuation of $K ^{q}$ induced by $v$. More
precisely, it follows from (4.2) (c) and the noted properties of
$(K ^{q}, v _{q})$ that the degrees of finite extensions of $K
^{q}$ in $K$ are at most equal to $q ^{\mu + \tau }$, which yields
$[K\colon K ^{q}] \le q ^{\mu + \tau }$ and so allows us to conclude
that $[K\colon K ^{q}] = q ^{\mu + \tau }$ (whence, abrd$_{q}(K)
\le \mu + \tau $). Thus Lemma \ref{lemm4.5} (b) (ii) is proved.
\end{proof}
\par
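As a sanity check on the equality $[K\colon K ^{q}] = q ^{\mu + \tau }$, consider the following standard example (not from the text):

```latex
% K = \mathbb{F}_{q}((t)) with the t-adic valuation is maximally complete;
% \widehat K = \mathbb{F}_{q} is perfect, so \mu = 0, and v(K) = \mathbb{Z}
% gives v(K)/qv(K) of order q, i.e. \tau = 1. Since a ^{q} = a for each
% a \in \mathbb{F}_{q}, one has (\sum _{i} a _{i}t ^{i}) ^{q}
% = \sum _{i} a _{i}t ^{qi}, whence
\[
K ^{q} = \mathbb{F}_{q}((t ^{q})), \qquad
[K\colon K ^{q}] = q = q ^{\mu + \tau },
\]
% with 1, t, \dots , t ^{q-1} a basis of K over K ^{q}.
```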
\medskip
\begin{coro}
\label{coro4.6} Let $K _{0}$ be a field and $\Gamma $ a nontrivial
ordered abelian group. Then the formal power series field $K = K
_{0}((\Gamma ))$ satisfies the inequalities {\rm abrd}$_{p}(K) <
\infty $, $p \in \mathbb{P}$, if and only if $K _{0}$ is virtually
perfect, {\rm abrd}$_{p}(K _{0}) < \infty $ whenever $p \in
\mathbb{P} \setminus \{{\rm char}(K _{0})\}$, and the quotient
groups $\Gamma /p\Gamma $ are finite, for all $p \in \mathbb{P}$.
\end{coro}
\par
\smallskip
\begin{proof}
Let $v _{\Gamma }$ be the standard valuation of $K$ inducing on $K
_{0}$ the trivial valuation. Then $(K, v _{\Gamma })$ is maximally
complete (cf. \cite{Ef}, Theorem~18.4.1) with $v(K) = \Gamma $ and
$\widehat K = K _{0}$ (see \cite{Ef}, Sect. 2.8 and
Example~4.2.1), so Corollary \ref{coro4.6} can be deduced from
Lemmas \ref{lemm4.4} and \ref{lemm4.5}.
\end{proof}
\par
\medskip
\begin{rema}
\label{rema4.7} Given a field $K _{0}$ and an ordered abelian
group $\Gamma \neq \{0\}$, the standard realizability of the field
$K _{1} = K _{0}((\Gamma ))$ as a maximally complete field, used
in the proof of Corollary \ref{coro4.6}, allows us to determine
the sequence \par\noindent
$(b, a) = {\rm Brd}_{p}(K _{1}), {\rm
abrd}_{p}(K _{1})\colon p \in \mathbb{P}$, in the following two
cases: {\rm (i)} $K _{0}$ is a global or local field (see
\cite{Ch8}, Proposition~5.1, and \cite{Ch7}, Corollary~3.6 and
Sect. 4, respectively); {\rm (ii)} $K _{0}$ is perfect and dim$(K
_{0}) \le 1$ (see \cite{Ch7}, Proposition~3.5, and \cite{Ch8},
Propositions~5.3, 5.4). In both cases, $(b, a)$ depends only on $K
_{0}$ and $\Gamma $. Moreover, if $(K, v)$ is Henselian with
$\widehat K = K _{0}$ and $v(K) = \Gamma $, then: {\rm (a)} $(b,
a) = {\rm Brd}_{p}(K), {\rm abrd}_{p}(K)$, $p \in \mathbb{P}$,
provided that $(K, v)$ is maximally complete, $K _{0}$ is perfect and
char$(K) = {\rm char}(K _{0})$; {\rm (b)} {\rm Brd}$_{p}(K) = {\rm
Brd}_{p}(K _{1})$ and {\rm abrd}$_{p}(K) = {\rm abrd}_{p}(K _{1})$,
for each $p \neq {\rm char}(K _{0})$. When $K _{0}$ is finite and $p
\neq {\rm char}(K _{0})$, Brd$_{p}(K)$ has been computed also in
\cite{Br}, Sect. 7, by a method independent of \cite{Ch7} and
\cite{Ch8}.
\end{rema}
\par
\medskip
Next we show that abrd$_{p}(K _{m}) < \infty $, $p \in
\mathbb{P}$, if $K _{m}$ is a complete $m$-discretely valued field
with $m$-th residue field admissible by Theorem \ref{theo3.2}.
\par
\medskip
\begin{lemm}
\label{lemm4.8} Let $K _{m}$ be a complete $m$-discretely valued
field with an $m$-th residue field $K _{0}$. Then {\rm
abrd}$_{p}(K _{m}) < \infty $, for all $p \in \mathbb{P}$, if and
only if $K _{0}$ is virtually perfect with {\rm abrd}$_{p}(K _{0})
< \infty $, for every $p \in \mathbb{P} \setminus \{{\rm char}(K
_{0})\}$.
\end{lemm}
\par
\smallskip
\begin{proof}
Clearly, it suffices to consider the case where abrd$_{p}(K
_{m-1}) < \infty $, for all $p \in \mathbb{P}$. Then our assertion
follows from Lemmas \ref{lemm4.4} and \ref{lemm4.5}.
\end{proof}
\par
\medskip
The concluding result of this Section proves (2.2) and leads to
the following open question: given a field $E$ with char$(E) = q >
0$, $[E\colon E ^{q}] = \infty $ and abrd$_{q}(E) < \infty $, does
there exist an integer $\mu (E)$, such that Brd$_{q}(E ^{\prime })
\le \mu (E)$, for every finite extension $E ^{\prime }/E$? An
affirmative answer to this question would imply that removing the
condition that $K$ is a virtually perfect field does not affect
the validity of the assertion of Proposition \ref{prop2.1}.
\par
\medskip
\begin{prop}
\label{prop4.9} Let $F _{0}$ be an algebraically closed field of
nonzero characteristic $q$ and $F _{n}$: $n \in \mathbb{N}$, be a
tower of extensions of $F _{0}$ defined inductively as follows:
when $n > 0$, $F _{n} = F _{n-1}((T _{n}))$ is the formal Laurent
power series field in a variable $T _{n}$ over $F _{n-1}$. Then
the following holds, for each $n \in \mathbb{N}$:
\par
{\rm (a)} $F _{n}$ possesses a subfield $\Lambda _{n}$ that is a
purely transcendental extension of infinite transcendence degree over
the rational function field $F _{n-1}(T _{n})$.
\par
{\rm (b)} The maximal separable (algebraic) extension $E _{n}$ of
$\Lambda _{n}$ in $F _{n}$
satisfies the equalities $[E _{n}\colon E _{n} ^{q}] = \infty $, {\rm
Brd}$_{p}(E _{n}) = {\rm abrd}_{p}(E _{n}) = [n/2]$, for all
\par\noindent
$p \in \mathbb{P} \setminus \{q\}$, and {\rm Brd}$_{q}(E _{n}
^{\prime }) = n - 1$, for every finite extension $E _{n} ^{\prime }/E
_{n}$.
\end{prop}
\par
\smallskip
\begin{proof}
The assertion of Proposition \ref{prop4.9} (a) is known (cf.,
e.g., \cite{BlKu}), and it implies $[E _{n}\colon E _{n} ^{q}] =
\infty $. Let $w _{n}$ be the natural discrete valuation of $F
_{n}$ trivial on $F _{n-1}$, and $v _{n}$ be the valuation of $E
_{n}$ induced by $w _{n}$. Then $(F _{n}, w _{n})$ is complete and
$E _{n}$ is dense in $F _{n}$, which yields $v _{n}(E _{n}) = w
_{n}(F _{n})$ and $F _{n-1}$ is the residue field of $(E _{n}, v
_{n})$ and $(F _{n}, w _{n})$; hence, $v _{n}$ is discrete.
Similarly, if $n \ge 2$, then the natural $\mathbb{Z} ^{n}$-valued
valuation $\theta _{n} '$ of $F _{n}$ (trivial on $F _{0}$) is
Henselian and induces on $E _{n}$ a valuation $\theta _{n}$. Also,
$v _{n}$ is Henselian (cf. \cite{Ef}, Corollary~18.3.3), and
$\theta _{n}$ extends the natural $\mathbb{Z} ^{n-1}$-valued
valuation $\theta _{n-1}'$ of $F _{n-1}$. As $\theta _{n-1}'$ is
Henselian and $F _{n-1}(T _{n}) \subset E _{n}$, this ensures that
so is $\theta _{n}$ (see \cite{TW}, Proposition~A.15), $F _{0}$ is
the residue field of $(E _{n}, \theta _{n})$, and $\theta _{n}(E
_{n}) = \mathbb{Z} ^{n}$. At the same time, it follows from (3.1)
and the Henselian property of $v _{n}$ that Brd$_{p}(E _{n}) \le
{\rm Brd}_{p}(F _{n})$, for each $p$. In addition, $(F _{n},
\theta _{n}')$ is maximally complete with a residue field $F _{0}$
(cf. \cite{Ef}, Theorem~18.4.1),
\par\vskip0.057truecm\noindent whence, by \cite{Ch7},
Proposition~3.5, Brd$_{q}(F _{n}) = {\rm abrd}_{q}(F _{n}) = n -
1$. Since
\par\vskip0.057truecm\noindent
$\theta _{n}(E _{n} ^{\prime }) \cong \mathbb{Z} ^{n} \cong \theta
_{n}'(F _{n} ^{\prime })$ and $F _{0}$ is the residue field of $(E
_{n} ^{\prime }, \theta _{n,E _{n}'})$ and $(F _{n} ^{\prime }, \theta _{n,F
_{n}'}')$ whenever $E _{n} ^{\prime }/E _{n}$ and $F _{n} ^{\prime
}/F _{n}$ are finite extensions, one obtains from \cite{Ch8},
Proposition~5.3 (b), and \cite{Ch6}, Lemma~4.2, that Brd$_{q}(E
_{n} ^{\prime }) \ge n - 1$ and
\par\vskip0.057truecm\noindent
Brd$_{p}(E _{n} ^{\prime }) = {\rm Brd}_{p}(F _{n} ^{\prime }) =
[n/2]$, for each $p \neq q$. Note finally that
\par\vskip0.057truecm\noindent
$v _{n}(E _{n} ^{\prime }) \cong \mathbb{Z} \cong w _{n}(F _{n}
^{\prime })$, and the completion of $(E _{n} ^{\prime }, v _{n,E
_{n}'})$ is a finite
\par\vskip0.063truecm\noindent extension of $F _{n}$ (cf.
\cite{L}, Ch. XII, Proposition~3.1), so it follows from (3.1) and
the preceding observations that Brd$_{q}(E _{n} ^{\prime }) = n -
1$, for all $n$.
\end{proof}
\par
\medskip
\section{\bf On $p$-powers and finite-dimensional central
subalgebras of division {\rm LBD}-algebras}
\par
\medskip
Let $R$ be a central division LBD-algebra over a virtually perfect
field $K$ with abrd$_{p}(K) < \infty $, $p \in \mathbb{P}$. The
existence of finite $p$-powers $k(p)$ of $R/K$, $p \in
\mathbb{P}$, imposes essential restrictions on a number of
algebraic properties of $R$, especially on those extensions of
$K$ which are embeddable in $R$ as $K$-subalgebras. For example,
it turns out that if $K(p) \neq K$, for some $p > 2$, then
$K(p)/K$ is an infinite extension (the additive group $\mathbb{Z}
_{p}$ of $p$-adic integers, endowed with its natural topology, is
a homomorphic image of $\mathcal{G}(K(p)/K)$, see \cite{Wh}),
whence, $K(p)$ is not isomorphic to a $K$-subalgebra of $R$. In
this Section we present results on $p$-powers and $p$-splitting
fields, obtained in the case where dim$(K _{\rm sol}) \le 1$. These
results form the basis for the proofs of Theorems \ref{theo3.1},
\ref{theo3.2} and Lemma \ref{lemm3.3}. The first one is an
immediate consequence of \cite{Ch3}, Lemmas~3.12 and 3.13, and can
be stated as follows:
\par
\medskip
\begin{lemm}
\label{lemm5.1} Assume that $R$ is a central division {\rm
LBD}-algebra over a virtually perfect field $K$ with {\rm dim}$(K
_{\rm sol}) \le 1$ and {\rm abrd}$_{p}(K) < \infty $, $p \in
\mathbb{P}$. Let $K ^{\prime }/K$ be a finite extension, $R
^{\prime }$ the underlying (central) division $K ^{\prime
}$-algebra of the {\rm LBD}-algebra $R \otimes _{K} K ^{\prime }$,
$\gamma $ the integer for which $R \otimes _{K} K ^{\prime }$ and
the full matrix ring $M _{\gamma }(R ^{\prime })$ are isomorphic
as $K ^{\prime }$-algebras, and for each $p \in \mathbb{P}$, let
$k(p)$ and $k(p)'$ be the $p$-powers of $R/K$ and $R ^{\prime }/K
^{\prime }$, respectively. Then:
\par
{\rm (a)} The greatest integer $\mu (p) \ge 0$ for which $p ^{\mu
(p)} \mid \gamma $ is equal to $k(p) - k(p)'$; hence, $k(p) \ge
k(p)'$ and $p ^{1+k(p)} \nmid \gamma $, for any $p \in \mathbb{P}$;
\par
{\rm (b)} The equality $k(p) = k(p)'$ holds if and only if $p
\nmid \gamma $; specifically, if
\par\noindent
$k(p) = 0$, then $k(p)'= 0$ and $p \nmid \gamma $.
\par
{\rm (c)} $K ^{\prime }$ is a $p$-splitting field of $R/K$ if and
only if $k(p)' = 0$, that is,
\par\noindent
$p \nmid [K ^{\prime }(r')\colon K ^{\prime }]$, for any $r' \in R
^{\prime }$.
\end{lemm}
\par
\medskip
As a matter of fact, Lemma \ref{lemm5.1} (a) is identical in
content with \cite{Ch3}, Lemmas~3.12 and 3.13, and it also implies
Lemma \ref{lemm5.1} (b) and (c).
\par
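A numerical illustration of Lemma \ref{lemm5.1} (a) may be helpful; the numbers below are hypothetical, chosen only for concreteness:

```latex
% Suppose the p-power of R/K is k(p) = 3 and R \otimes _{K} K' \cong
% M _{\gamma }(R') with \gamma = p ^{2}m, p \nmid m. Then \mu (p) = 2, so
% the p-power of R'/K' is
\[
k(p)' = k(p) - \mu (p) = 3 - 2 = 1,
\]
% consistent with k(p) \ge k(p)' and with p ^{1+k(p)} = p ^{4} \nmid \gamma .
```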
\medskip
\begin{rema}
\label{rema5.2} The proofs of \cite{Ch3}, Lemmas~3.12 and 3.13,
rely essentially on the condition that dim$(K _{\rm sol}) \le 1$,
more precisely, on its restatement that abrd$_{p}(K _{p}) = 0$,
for each $p \in \mathbb{P}$, where $K _{p}$ is the fixed field of
a Hall pro-$(\mathbb{P} \setminus \{p\})$-subgroup $H _{p}$ of
$\mathcal{G}(K _{\rm sol}/K)$. It is not known whether the
assertions of Lemma \ref{lemm5.1} remain valid if this condition
is dropped; also, the question of whether {\rm dim}$(E _{\rm sol})
\le 1$, for every field $E$ (posed in \cite{Koe}), is open. Here we
note that the conclusion of Lemma \ref{lemm5.1} holds if the
assumption that dim$(K _{\rm sol}) \le 1$ is replaced by the one
that $R \otimes _{K} K ^{\prime }$ is a division $K ^{\prime
}$-algebra. Then it follows from \cite{Ch3}, Proposition~3.3, that
$k(p) = k(p)'$, for every $p \in \mathbb{P}$.
\end{rema}
\par
\medskip
\begin{lemm}
\label{lemm5.3} Assuming that $K$ and $R$ satisfy the conditions
of Lemma \ref{lemm5.1}, let $D \in d(K)$ be a $K$-subalgebra of
$R$, and let $k(p)$ and $k(p)'$, $p \in \mathbb{P}$, be the
$p$-powers of $R/K$ and $C _{R}(D)/K$, respectively. Then:
\par
{\rm (a)} For each $p \in \mathbb{P}$, $k(p) - k(p)'$ equals the
power of $p$ in the primary decomposition of {\rm deg}$(D)$; in
particular, $k(p) \ge k(p)'$;
\par
{\rm (b)} $k(p) = k(p)'$ if and only if $p \nmid {\rm deg}(D)$; in
this case, a finite extension $K ^{\prime }$ of $K$ is a
$p$-splitting field of $R/K$ if and only if so is $K ^{\prime }$ for
$C _{R}(D)/K$;
\par
{\rm (c)} If $k(p)' = 0$, for some $p \in \mathbb{P}$, then a
finite extension $K ^{\prime }$ of $K$ is a $p$-splitting field of
$R/K$ if and only if $p \nmid {\rm ind}(D \otimes _{K} K ^{\prime
})$.
\end{lemm}
\par
\medskip
\begin{proof}
It is known (cf. \cite{P}, Sect. 13.1, Corollary~b) that if $K
_{1}$ is a maximal subfield of $D$, then $K _{1}/K$ is a field
extension, $[K _{1}\colon K] = {\rm deg}(D) := d$ and $D \otimes
_{K} K _{1} \cong M _{d}(K _{1})$ as $K _{1}$-algebras. Also,
by the Double Centralizer Theorem (see \cite{He}, Theorems~4.3.2
and 4.4.2), $R = D \otimes _{K} C _{R}(D)$ and $C _{R}(D) \otimes
_{K} K _{1}$ is a central division $K _{1}$-algebra equal to $C
_{R}(K _{1})$. In view of \cite{Ch3}, Propositions~3.1 and 3.3,
this ensures that $k(p)'$ equals the $p$-power of $(C _{R}(D)
\otimes _{K} K _{1})/K _{1}$, for each $p \in \mathbb{P}$.
Applying now Lemma \ref{lemm5.1}, one proves Lemma \ref{lemm5.3}
(a). Lemma \ref{lemm5.3} (b)-(c) follows from Lemmas \ref{lemm5.1}
and \ref{lemm5.3} (a), combined with \cite{Ch3}, Lemma~3.5, and
\cite{P}, Sect. 9.3, Corollary~b.
\end{proof}
\par
\medskip
The following lemma (for a proof, see \cite{Ch3}, Lemma~7.4) can be
viewed as a generalization of the uniqueness part of the primary
tensor product decomposition theorem for algebras $D \in d(K)$ over
an arbitrary field $K$.
\par
\medskip
\begin{lemm}
\label{lemm5.4} Let $\Pi $ be a finite subset of $\mathbb P$, and
let $S _{1}$, $S _{2}$ be central division {\rm LBD}-algebras over
a field $K$ with {\rm abrd}$_{p}(K) < \infty $, for all $p \in
\mathbb P$. Assume that $k(p)_{S _{1}} = k(p)_{S _{2}} = 0$, $p
\in \Pi $, the $K$-algebras $R _{1} \otimes _{K} S _{1}$ and $R
_{2} \otimes _{K} S _{2}$ are $K$-isomorphic, where $R _{i} \in s(K)$,
$i = 1, 2$, and {\rm deg}$(R _{1}){\rm deg}(R _{2})$ is not
divisible by any $\bar p \in \mathbb P \setminus \Pi $. Then
$R_{1} \cong R _{2}$ as $K$-algebras.
\end{lemm}
\par
\medskip
For a proof of our next lemma, we refer the reader to \cite{Ch3},
Lemma~8.3, which has been proved under the assumption that $R$ is
a central division LBD-algebra over a field $K$ of arithmetic
type. Therefore, we note that the proof in \cite{Ch3} remains
valid if the assumption on $K$ is replaced by the one that
abrd$_{p}(K) < \infty $, $p \in \mathbb{P}$, dim$(K _{\rm sol})
\le 1$, $K$ is virtually perfect, and there exist $p$-splitting
fields $E _{p}: p \in \mathbb{P}$, of $R/K$ with $E _{p} \subseteq
K(p)$, for each $p$.
\par
\medskip
\begin{lemm}
\label{lemm5.5} Let $K$ be a field with {\rm dim}$(K _{\rm sol})
\le 1$, $R$ a central division {\rm LBD}-algebra over $K$, and
$k(p)\colon p \in \mathbb{P}$, the sequence of $p$-powers of
$R/K$. Assume that, for each $p \in \mathbb{P}$, $E _{p}$ is a
finite extension of $K$ in $K(p)$, which is a $p$-splitting field
of $R/K$. Then:
\par
{\rm (a)} The full matrix ring $M _{\gamma (p)}(R)$, where $\gamma
(p) = [E _{p}\colon K]\cdot p ^{-k(p)}$, is an
\par\noindent
Artinian central simple {\rm LBD}-algebra over $K$, which
possesses a subalgebra
\par\noindent
$\Delta _{p} \in s(K)$, such that {\rm deg}$(\Delta _{p}) = [E
_{p}\colon K]$ and $E _{p}$ is isomorphic to a $K$-subalgebra of
$\Delta _{p}$. Moreover, if $[E _{p}\colon K] = p ^{k(p)}$, i.e.
$E _{p}$ is embeddable in $R$ as a $K$-subalgebra, then $\Delta
_{p}$ is a $K$-subalgebra of $R$.
\par
{\rm (b)} The centralizer of $\Delta _{p}$ in $M _{\gamma (p)}(R)$
is a central division $K$-algebra of $p$-power zero.
\end{lemm}
\par
\medskip
The following lemma generalizes \cite{Ch3}, Lemma~8.5, to the case
where $K$ and $R$ satisfy the conditions of Lemma \ref{lemm5.5}.
For this reason, we take into account that the proof of the lemma
referred to, given in \cite{Ch3}, remains valid under the noted
weaker conditions. Our next lemma can also be viewed as a
generalization of the well-known fact that, for any field $E$, $D
_{1} \otimes _{E} D _{2} \in d(E)$ whenever $D _{i} \in d(E)$, $i
= 1, 2$, and $\gcd \{{\rm deg}(D _{1}), {\rm deg}(D _{2})\} = 1$
(see \cite{P}, Sect. 13.4). Using this lemma and the uniqueness
part of the Wedderburn-Artin theorem, one obtains that, in the
setting of Lemma \ref{lemm5.5}, the underlying central division
$K$-algebra of $\Delta _{p}$ is embeddable in $R$ as a
$K$-subalgebra.
\par
\medskip
\begin{lemm}
\label{lemm5.6} Let $K$ be a field, $R$ a central division {\rm
LBD}-algebra over $K$, and $E _{p}$, $p \in \mathbb{P}$, be
extensions of $K$ satisfying the conditions of Lemma
\ref{lemm5.5}. Also, let $D \in d(K)$ be a division $K$-algebra
such that $\gcd \{{\rm deg}(D), [K(\alpha )\colon K]\} = 1$, for
each $\alpha \in R$. Then $D \otimes _{K} R$ is a central division
{\rm LBD}-algebra over $K$.
\end{lemm}
\par
\medskip
\begin{rema}
\label{rema5.7} Assume that $K$ is a field and $R$ is a central
division $K$-algebra satisfying the conditions of Theorem
\ref{theo3.1} or Theorem \ref{theo3.2}, and let $E _{p}\colon p
\in \mathbb{P}$, be $p$-splitting fields of $R/K$ with the
properties required by Lemma \ref{lemm3.3}. Then it follows from
Lemmas \ref{lemm5.5} and \ref{lemm5.6} that, for each $p \in
\mathbb{P}$, there exists a unique, up to $K$-isomorphism,
$K$-subalgebra $R _{p} \in d(K)$ of $R$ of degree deg$(R _{p}) = p
^{k(p)}$, where $k(p)$ is the $p$-power of $R/K$. Moreover, Lemma
\ref{lemm5.3} implies that $R _{p}$, $p \in \mathbb{P}$, can be chosen
so that $R _{p'} \subseteq C _{R}(R _{p''})$ whenever $p', p'' \in
\mathbb{P}$ and $p' \neq p''$. Therefore, there exist
$K$-subalgebras $T _{n}$, $n \in \mathbb{N}$, of $R$, such that $T
_{n} \cong \otimes _{j=1} ^{n} R _{p _{j}}$ and $T _{n} \subseteq
T _{n+1}$, for each $n$; here $\otimes = \otimes _{K}$ and
$\mathbb{P}$ is presented as a sequence $p _{n}\colon n \in
\mathbb{N}$. Hence, the union $\widetilde R = \cup _{n=1} ^{\infty
} T _{n} := \otimes _{n=1} ^{\infty } R _{p _{n}}$ is a
\par\vskip0.048truecm\noindent central
$K$-subalgebra of $R$. Note further that $R = T _{n} \otimes _{K}
C _{R}(T _{n})$, for every
\par\vskip0.04truecm\noindent
$n \in \mathbb{N}$, which enables one to deduce from Lemmas
\ref{lemm5.1}, \ref{lemm5.4}, \ref{lemm5.6}, and \cite{Ch3},
Lemma~3.5, that a finite-dimensional $K$-subalgebra $T$ of $R$ is
embeddable in $T _{n}$ as a $K$-subalgebra in case $p _{n'} \nmid
[T\colon K]$, for any $n' > n$. One also sees that $K = \cap
_{n=1} ^{\infty } C _{R}(T _{n}) = C _{R}(\widetilde R)$, and by
\cite{Ch3}, Lemma~9.3, every LFD-subalgebra of $R$ (over $K$) of
countable dimension is embeddable in $\widetilde R$.
\end{rema}
\par
\medskip
The following two lemmas are used at crucial points of our proof
of Lemma \ref{lemm3.3}. The former has not been formally
stated in \cite{Ch3}. However, special cases of it have been used
in the proof of \cite{Ch3}, Lemma~8.3.
\par
\medskip
\begin{lemm}
\label{lemm5.8} Let $K$ and $R$ satisfy the conditions of Lemma
\ref{lemm5.1}, and let $K _{1}$, $K _{2}$ be finite extensions of
$K$ in an algebraic closure of $K _{\rm sep}$. Denote by $R _{1}$
and $R _{2}$ the underlying division algebras of $R \otimes _{K} K
_{1}$ and $R \otimes _{K} K _{2}$, respectively, and suppose that
there exist $D _{i} \in d(K _{i})$, $i = 1, 2$, such that $D _{i}$
is a $K _{i}$-subalgebra of $R _{i}$ and {\rm deg}$(D _{i}) = p
^{k(p)}$, for a given $p \in \mathbb{P}$ and each index $i$, where
$k(p)$ is the $p$-power of $R/K$. Then:
\par
{\rm (a)} The underlying division $K _{1}K _{2}$-algebras of
$R _{1} \otimes _{K _{1}} K _{1}K _{2}$, $R _{2} \otimes _{K _{2}} K
_{1}K _{2}$ and $R \otimes _{K} K _{1}K _{2}$ are isomorphic;
\par
{\rm (b)} $p$ does not divide $[K _{i}(c _{i})\colon K _{i}]$, for
any $c _{i} \in C _{R _{i}}(D _{i})$, and $i = 1, 2$.
\par
{\rm (c)} If $p \nmid [K _{1}K _{2}\colon K]$, then $D _{1}
\otimes _{K _{1}} K _{1}K _{2}$ and $D _{2} \otimes _{K _{2}} K _{1}K
_{2}$ are isomorphic central division $K _{1}K _{2}$-algebras; for
example, this holds in case $p \nmid [K _{i}\colon K]$, $i = 1, 2$, and
$\gcd \{[K _{1}\colon K _{0}], [K _{2}\colon K _{0}]\} = 1$, where $K
_{0} = K _{1} \cap K _{2}$.
\end{lemm}
\par
\smallskip
\begin{proof}
Note that $R \otimes _{K} K _{1}K _{2}$ and $(R \otimes _{K} K
_{i}) \otimes _{K _{i}} K _{1}K _{2}$, $i = 1, 2$, are isomorphic
$K _{1}K _{2}$-algebras. These algebras are central simple and
Artinian, which enables one to deduce Lemma \ref{lemm5.8} (a) from
the Wedderburn-Artin theorem and \cite{P}, Sect. 9.3, Corollary~b.
In addition, it follows from Lemma \ref{lemm5.1}, the assumptions
on $D _{1}$ and $D _{2}$, and the Double Centralizer Theorem that
$k(p)$ equals the $p$-powers of $R _{i}/K _{i}$, and $C _{R
_{i}}(D _{i})$ is a central division $K _{i}$-subalgebra of $R
_{i}$, for each $i$. Hence, by Lemma \ref{lemm5.3} (a), $C _{R
_{1}}(D _{1})/K _{1}$ and $C _{R _{2}}(D _{2})/K _{2}$ are of
$p$-power zero, which proves Lemma \ref{lemm5.8} (b).
\par
We turn to the proof of Lemma \ref{lemm5.8} (c). Assume that $p
\nmid [K _{1}K _{2}\colon K]$ and denote by $R ^{\prime }$ the
underlying division $K _{1}K _{2}$-algebra of $R \otimes _{K} K
_{1}K _{2}$. Then, by Lemma \ref{lemm5.1}, $k(p)$ equals the
$p$-power of $R ^{\prime }/K _{1}K _{2}$. Applying \cite{Ch3},
Lemma~3.5
\par\vskip0.054truecm\noindent
(or results of
\cite{P}, Sect. 13.4), one also obtains that $D _{i} \otimes _{K
_{i}} K _{1}K _{2} \in d(K _{1}K _{2})$
\par\vskip0.05truecm\noindent
and $D _{i} \otimes _{K _{i}} K _{1}K _{2}$ are embeddable in $R
^{\prime }$ as $K _{1}K _{2}$-subalgebras, for $i = 1, 2$.
\par\vskip0.07truecm\noindent
Let $D _{1} ^{\prime }$ and $D _{2} ^{\prime }$ be $K _{1}K
_{2}$-subalgebras of $R ^{\prime }$ isomorphic to $D _{1} \otimes _{K
_{1}} K _{1}K _{2}$ and
\par\vskip0.06truecm\noindent
$D _{2} \otimes _{K _{2}} K _{1}K _{2}$, respectively. As above,
it then follows that, for each index $i$,
\par\vskip0.06truecm\noindent
$R ^{\prime }$ coincides with $D _{i} ^{\prime } \otimes _{K _{1}K
_{2}} C _{R'}(D _{i} ^{\prime })$, $C _{R'}(D _{i} ^{\prime })$ is
a central division algebra
\par\vskip0.064truecm\noindent
over $K _{1}K _{2}$, and $C _{R'}(D _{i} ^{\prime })/K _{1}K _{2}$
is of zero $p$-power; thus $p \nmid [K _{1}K _{2}(c')\colon K
_{1}K _{2}]$,
\par\vskip0.064truecm\noindent
for any $c' \in C _{R'}(D _{1} ^{\prime }) \cup C _{R'}(D _{2}
^{\prime })$. Therefore, by Lemma \ref{lemm5.4}, $D _{1} ^{\prime
} \cong D _{2} ^{\prime }$,
\par\vskip0.064truecm\noindent
whence, $D _{1} \otimes _{K _{1}} K _{1}K _{2} \cong D _{2}
\otimes _{K _{2}} K _{1}K _{2}$ as $K _{1}K _{2}$-algebras. The
latter part
\par\vskip0.06truecm\noindent
of our assertion is obvious, so Lemma \ref{lemm5.8} is proved.
\end{proof}
\par
\smallskip
\begin{lemm}
\label{lemm5.9} Let $D$ be a finite-dimensional simple algebra
over a field $K$. Suppose that the centre $B$ of $D$ is a
compositum of extensions $B _{1}$ and $B _{2}$ of $K$ of
relatively prime degrees, and the following conditions are
fulfilled:
\par
{\rm (a)} $[D\colon B] = n ^{2}$ and $D$ possesses a maximal
subfield $E$ such that \par\noindent $[E\colon B] = n$ and $E =
B\widetilde E$, for some separable extension $\widetilde E/K$ of
degree $n$;
\par
{\rm (b)} $p > n$, for every $p \in \mathbb P$ dividing $[B\colon
K]$;
\par
{\rm (c)} $D \cong D _{i} \otimes _{B _{i}} B$ as a $B$-algebra, for
some $D _{i} \in s(B _{i})$, $i = 1, 2$.
\par\noindent
Then there exist $\widetilde D \in s(K)$ with $[\widetilde D\colon K]
= n ^{2}$, and isomorphisms of
\par\noindent
$B _{i}$-algebras $\widetilde D \otimes _{K} B _{i} \cong D _{i}$, $i
= 1, 2, 3$, where $B _{3} = B$ and $D _{3} = D$.
\end{lemm}
\par
\medskip
Lemma \ref{lemm5.9} has been proved as \cite{Ch3}, Lemma~8.2 (see
also \cite{Ch1}). It has been used for proving \cite{Ch3},
Lemma~8.3, and the main result of \cite{Ch1}. In the present
paper, the application of Lemma \ref{lemm5.9} in the proof of
Lemma \ref{lemm8.2} makes it possible to deduce Lemma
\ref{lemm3.3} by the method of proving \cite{Ch3}, Lemma~8.3.
\par
\medskip
\section{\bf Henselian fields $(K, v)$ with char$(\widehat K) = q >
0$ and {\rm abrd}$_{q}(K(q)) \le 1$}
\par
\medskip
The question of whether abrd$_{q}(\Phi (q)) = 0$, for every field
$\Phi $ of characteristic $q > 0$, seems to be open. This Section
gives a criterion for a Henselian field $(K, v)$ with
char$(\widehat K) = q$ and $\widehat K$ of arithmetic type to
satisfy the equality abrd$_{q}(K(q)) = 0$. To prove this criterion
we need the following two lemmas.
\par
\medskip
\begin{lemm}
\label{lemm6.1} Let $(K, v)$ be a Henselian field with {\rm
char}$(\widehat K) = q > 0$ and
\par\noindent
$\widehat K \neq \widehat K ^{q}$, and in case {\rm char}$(K) =
0$, suppose that $v$ is discrete and
\par\noindent
$v(q) \in qv(K)$. Let also $\widetilde \Lambda /\widehat K$ be an
inseparable extension of degree $q$. Then there is $\Lambda \in
I(K(q)/K)$, such that $[\Lambda \colon K] = q$ and $\widehat
\Lambda $ is $\widehat K$-isomorphic to $\widetilde \Lambda $.
\end{lemm}
\par
\smallskip
\begin{proof}
The assumption on $\widetilde \Lambda /\widehat K$ shows that
$\widetilde \Lambda = \widehat K(\sqrt[q]{\hat a})$, for some
$\hat a \in \widehat K \setminus \widehat K ^{q}$. Hence, in case
char$(K) = q$, by the Artin-Schreier theorem, one may take as
$\Lambda $ the extension of $K$ in $K _{\rm sep}$ obtained by
adjunction of a root of the polynomial $X ^{q} - X - a\pi ^{-q}$
(equivalently, of the polynomial $X ^{q} - \pi ^{q-1}X - a$),
where $a \in O _{v}(K)$ is a preimage of $\hat a$ and $\pi \in K
^{\ast }$ is any fixed element with $v(\pi ) > 0$. When char$(K) =
0$, our assertion is contained in \cite{Ch9}, Lemma~5.4, so Lemma
\ref{lemm6.1} is proved.
\end{proof}
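\par
\smallskip
The equivalence of the two polynomials in the proof of Lemma
\ref{lemm6.1} can be seen directly (we include the computation
only for the reader's convenience): if $\xi $ is a root of $X ^{q}
- X - a\pi ^{-q}$, then $\theta = \pi \xi $ satisfies
$$\theta ^{q} = \pi ^{q}\xi ^{q} = \pi ^{q}(\xi + a\pi ^{-q}) =
\pi ^{q-1}\theta + a,$$
so $\theta $ is a root of $X ^{q} - \pi ^{q-1}X - a$. Since $v(\pi
) > 0$ and $v(a) = 0$, this yields $v(\theta ) = 0$ and $\hat
\theta ^{q} = \hat a$, whence $\widehat {K(\theta )} \supseteq
\widehat K(\sqrt[q]{\hat a})$; as $[K(\theta )\colon K] \le q$,
this forces $[K(\theta )\colon K] = q$ and $\widehat {K(\theta )}
\cong \widetilde \Lambda $ over $\widehat K$.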
\par
\medskip
\begin{lemm}
\label{lemm6.2} Let $(K, v)$ be a Henselian field, $L/K$ an
inertial extension, and $N(L/K)$ the norm group of $L/K$. Then
$\nabla _{0}(K)$ is a subgroup of $N(L/K)$.
\end{lemm}
\par
\medskip
\begin{proof}
This is a special case of \cite{Er}, Proposition~2.
\end{proof}
\par
\medskip
Next we show that a Henselian field $(K, v)$ with char$(\widehat
K) = q > 0$ satisfies abrd$_{q}(K(q)) = 0$, provided that char$(K)
= q$ or the valuation $v$ is discrete. Note here that by the
Albert-Hochschild theorem, the equality abrd$_{q}(K(q)) = 0$
ensures that Br$(K(q) ^{\prime }) _{q} = \{0\}$, for every finite
extension $K(q) ^{\prime }/K(q)$.
\par
\vskip0.32truecm
\begin{theo}
\label{theo6.3} Let $(K, v)$ be a Henselian field with {\rm
char}$(\widehat K) = q > 0$, and in case {\rm char}$(K) = 0$, let
$v$ be discrete. Then $v(K(q)) = qv(K(q))$, the residue field
$\widehat {K(q)} = \widehat K(q)$ of $(K(q), v _{K(q)})$ is
perfect, and {\rm abrd}$_{q}(K(q)) = 0$.
\end{theo}
\par
\medskip
Since the proof of Theorem \ref{theo6.3} relies on the
presentability of cyclic \par\noindent $K$-algebras of degree $q$
as $q$-symbol algebras over $K$, we recall some basic facts
related to such algebras over any field $E$ with char$(E) = q >
0$. Firstly, for each pair $a \in E$, $b \in E ^{\ast }$, $[a, b)
_{E} \in s(E)$ and deg$([a, b) _{E}) = q$ (cf. \cite{GiSz},
Corollary~2.5.5). Secondly, if $[a, b) _{E} \in d(E)$, then the
polynomial
\par\noindent $f
_{a}(X) = X ^{q} - X - a \in E[X]$ is irreducible over $E$. This
follows from the Artin-Schreier theorem (see \cite{L}, Ch. VI,
Sect. 6), which also shows that if $f _{a}(X)$ is irreducible over
$E$ and $\xi \in E _{\rm sep}$ is a root of $f _{a}$, then $E(\xi
)/E$ is a cyclic field extension, $[E(\xi )\colon
E] = q$, and $[a, b) _{E}$ is isomorphic to the cyclic $E$-algebra
$(E(\xi )/E, \sigma , b)$, where $\sigma $ is the $E$-automorphism
of $E(\xi )$ mapping $\xi $ into $\xi + 1$; hence, by \cite{P},
Sect. 15.1, Proposition~b, $[a, b) _{E} \in d(E)$ if and only if
$b \notin N(E(\xi )/E)$.
\par
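\smallskip
In explicit terms, $[a, b) _{E}$ can be presented (with the
standard normalization, which we take to agree with that of
\cite{GiSz}) as the $E$-algebra with generators $\xi , \theta $
and defining relations
$$\xi ^{q} - \xi = a, \qquad \theta ^{q} = b, \qquad \theta \xi
\theta ^{-1} = \xi + 1.$$
In particular, when $f _{a}$ is irreducible over $E$, the subfield
$E(\xi )$ of $[a, b) _{E}$ is maximal, and conjugation by $\theta
$ induces on it the generator $\sigma $ of $\mathcal{G}(E(\xi
)/E)$ considered above.
\par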
\vskip0.48truecm\noindent {\it Proof of Theorem \ref{theo6.3}.} It
is clear from Galois theory, the definition of $K(q)$ and the
closedness of the class of pro-$q$-groups under the formation of
profinite group extensions that $\widetilde K(q) = K(q)$, for
every $\widetilde K \in I(K(q)/K)$; in particular, $K(q)(q) =
K(q)$, which means that $K(q)$ does not admit cyclic extensions of
degree $q$. As $(K(q), v_{K(q)})$ is Henselian, this allows us to
deduce from \cite{Ch6}, Lemma~4.2, and \cite{Ch9}, Lemma~2.3, that
$v(K(q)) = qv(K(q))$. We show that the field $\widehat {K(q)} =
\widehat K(q)$ is perfect. It follows from Lemma \ref{lemm6.1} and
\cite{Ch9}, Lemma~2.3, that in case char$(K) = 0$ (and $v$ is
discrete), one may assume without loss of generality that $v(q)
\in qv(K)$. Denote by $\Sigma $ the set of those fields $U \in
I(K(q)/K)$, for which $v(U) = v(K)$, $\widehat U \neq \widehat K$
and $\widehat U/\widehat K$ is a purely inseparable extension. In
view of Lemma \ref{lemm6.1}, our extra hypothesis ensures that
$\Sigma \neq \emptyset $. Also, $\Sigma $ is a partially ordered
set with respect to set-theoretic inclusion, so it follows from
Zorn's lemma that it contains a maximal element, say $U ^{\prime
}$. Using Lemma \ref{lemm6.1} again, one proves, by assuming the
opposite, that $\widehat U ^{\prime }$ is a perfect field. Since
$(K(q), v _{K(q)})/(U ^{\prime }, v _{U'})$ is a valued extension
and $\widehat K(q)/\widehat {U'}$ is an algebraic extension, this
implies that $\widehat K(q)$ is perfect as well.
\par
\smallskip
It remains to be seen that abrd$_{q}(K(q)) = 0$. Suppose first
that
\par\noindent
char$(K) = q$, fix an algebraic closure $\overline K$ of $K _{\rm
sep}$, and put $\bar v = v _{\overline K}$. It is known \cite{A1},
Ch. VII, Theorem~22, that if $K$ is perfect, then Br$(K ^{\prime
}) _{q} = \{0\}$, for every finite extension $K ^{\prime }/K$. We
assume further that $K$ is imperfect and $K _{\rm ins}$ is the
perfect closure of $K$ in $\overline K$. It is easily verified
that $K _{\rm ins}$ equals the union $\cup _{\nu =1} ^{\infty } K
^{q^{-\nu }}$ of the fields $K ^{q^{-\nu }} = \{\beta \in
\overline K\colon \beta ^{q^{\nu }} \in K\}$, $\nu \in
\mathbb{N}$, and $[K ^{q^{-\nu }}\colon K] \ge q ^{\nu }$, for
each index $\nu $. To prove the equality abrd$_{q}(K(q)) = 0$ it
suffices to show that Br$(L ^{\prime }) _{q} = \{0\}$, for an
arbitrary $L ^{\prime } \in {\rm Fe}(K(q))$. Clearly, Br$(L
^{\prime }) _{q}$ coincides with the union of the images of Br$(L
_{0} ^{\prime }) _{q}$ under the scalar extension maps Br$(L _{0}
^{\prime }) \to {\rm Br}(L ^{\prime })$, where $L _{0} ^{\prime }$
runs across the set of finite extensions of $K$ in $L ^{\prime }$.
Moreover, one may restrict to the set $\mathcal{L}$ of those
finite extensions $L _{0} ^{\prime }$ of $K$ in $L ^{\prime }$,
for which $L _{0}'.K(q) = L ^{\prime }$ (evidently, $\mathcal{L}
\neq \emptyset $). These observations, together with basic results
on tensor products (cf. \cite{P}, Sect. 9.4, Corollary~a),
indicate that the concluding assertion of Theorem \ref{theo6.3}
can be deduced from the following statement:
\par
\medskip\noindent
(6.1) Br$(L) _{q} = {\rm Br}(L.K(q)/L)$, for an arbitrary $L \in
{\rm Fe}(K)$.
\par
\medskip\noindent We prove (6.1) by showing that, for any fixed
$L$-algebra $D \in d(L)$ of
\par\noindent
$q$-primary degree, there is a finite extension $K _{1}$ of $K$ in
$K(q)$ (depending on $D$), such that $[D] \in {\rm Br}(LK
_{1}/L)$, i.e. the compositum $LK _{1}$ is a splitting field of
$D$. Our proof relies on the fact that $K _{\rm ins}$ is perfect.
This ensures that Br$(K _{\rm ins} ^{\prime }) _{q} = \{0\}$
whenever $K _{\rm ins} ^{\prime } \in I(\overline K/K _{\rm
ins})$, which implies Br$(L _{1}) _{q} = {\rm Br}(L _{1}K _{\rm
ins}/L _{1})$, for every $L _{1} \in {\rm Fe}(L)$. Thus it turns
out that $[D] \in {\rm Br}(L.J'/L)$, for some finite extension $J
^{\prime }$ of $K$ in $K _{\rm ins}$. In particular, $J'$ lies in
the set, say $\mathcal{D}$, of finite extensions $I ^{\prime }$ of
$K$ in $K _{\rm ins}$, for which $K$ has a finite extension
$\Lambda _{I'}$ in $K(q)$, such that $[D \otimes _{L} L\Lambda
_{I'}] \in {\rm Br}(L\Lambda _{I'}I'/L\Lambda _{I'})$. Choose $J
\in \mathcal{D}$ to be of minimal degree over $K$. We prove that
$J = K$, by assuming the opposite. For this purpose, we use the
following fact:
\par
\medskip\noindent
(6.2) For each $\beta \in K _{\rm ins} ^{\ast }$ and any nonzero
element $\pi \in M _{v}(K(q))$, there exists $\beta ^{\prime } \in
K(q) ^{\ast }$, such that $\bar v(\beta ^{\prime } - \beta ) >
v(\pi )$.
\par
\medskip\noindent
To prove (6.2) it is clearly sufficient to consider only the
special case of $\bar v(\beta ) \ge 0$. Note also that if $\beta
\in K$, then one may put $\beta ^{\prime } = \beta (1 + \pi
^{2})$, so we assume further that $\beta \notin K$. A standard
inductive argument leads to the conclusion that one may assume,
for our proof, that $[K(\beta )\colon K] = q ^{n}$ and the
assertion of (6.2) holds for any pair $\beta _{1} \in K _{\rm ins}
^{\ast }$, $\pi _{1} \in M _{v}(K(q)) \setminus \{0\}$ satisfying
$[K(\beta _{1})\colon K] < q ^{n}$. Since $[K(\beta ^{q})\colon K]
= q ^{n-1}$, our extra hypothesis ensures the existence of an
element $\tilde \beta \in K(q)$ with $\bar v(\tilde \beta - \beta
^{q}) > qv(\pi )$. Applying the Artin-Schreier theorem to the
polynomial $X ^{q} - X - \tilde \beta \pi ^{-q^{3}}$, one
\par\vskip0.038truecm\noindent
proves that the polynomial $X ^{q} - \pi ^{q^{2}(q-1)}X - \tilde
\beta \in K(q)[X]$ has a root
\par\vskip0.032truecm\noindent
$\beta ^{\prime } \in K(q)$. In view of the inequality $\bar
v(\tilde \beta ) \ge 0$, this implies consecutively
\par\vskip0.038truecm\noindent
that $\bar v(\beta ^{\prime }) \ge 0 $ and $\bar v(\beta ^{\prime
q} - \tilde \beta ) \ge q ^{2}(q-1).v(\pi )$. As $\bar v(\tilde
\beta - \beta ^{q}) > qv(\pi )$, it is
\par\vskip0.038truecm\noindent
now easy to see that $\bar v(\beta ^{\prime q} - \beta ^{q})
> qv(\pi )$, whence, $\bar v(\beta ^{\prime } - \beta ) > v(\pi
)$, as claimed by (6.2).
\par
\smallskip
We continue with the proof of (6.1). The assumption that $J \neq
K$ shows that there exists $I \in I(J/K)$ with $[I\colon K] =
[J\colon K]/q$; this means that $I \notin \mathcal{D}$. Take an
element $b \in I$ so that $J = I(\sqrt[q]{b})$ and $\bar v(b) \ge
0$, and put $\Lambda = \Lambda _{J}.I$, $\Lambda ^{\prime } =
L\Lambda $. As $\widehat K(q)$ is a perfect field (i.e. $\widehat
K(q) = \widehat K(q) ^{q}$) and $v(K(q)) = qv(K(q))$, one may assume,
for our proof, that $\Lambda _{J}$ is chosen so that $b = b _{1}
^{q}.\tilde b$, for some $b _{1} \in O _{v}(\Lambda _{J})$ and
$\tilde b \in \nabla _{0}(\Lambda _{J})$.
\par
Let now $\Delta $ be the underlying division $\Lambda ^{\prime
}$-algebra of $D \otimes _{L} \Lambda ^{\prime }$. It then follows
from \cite{P}, Sect. 13.4, Corollary, and the choice of $J$ that
$\Delta \neq \Lambda ^{\prime }$ and $[\Delta ] \in {\rm
Br}(\Lambda ^{\prime }J/\Lambda ^{\prime })$. This implies $\Delta
\cong [a, b) _{\Lambda '}$ as $\Lambda ^{\prime }$-algebras, for
some $a \in \Lambda ^{\prime \ast }$ (see, for instance, the end
of the proof of \cite{He}, Theorem~3.2.1). It is therefore clear
that the polynomial $h _{a}(X) = X ^{q} - X - a \in \Lambda
^{\prime }[X]$ has no root in $\Lambda ^{\prime }$, so it follows
from the Artin-Schreier theorem (see \cite{L}, Ch. VI, Sect. 6)
that $h _{a}$ is irreducible over $\Lambda ^{\prime }$, and the
field $W _{a} = \Lambda ^{\prime }(\xi _{a})$ is a degree $q$
cyclic extension of $\Lambda ^{\prime }$, where $\xi _{a} \in
\overline K$ and $h _{a}(\xi _{a}) = 0$. One also sees that $W
_{a}$ is embeddable in $\Delta $ as a $\Lambda ^{\prime
}$-subalgebra, and $\Delta $ is isomorphic to the cyclic $\Lambda
^{\prime }$-algebra $(W _{a}/\Lambda ^{\prime }, \sigma , b)$, for
a suitably chosen generator $\sigma $ of $\mathcal{G}(W
_{a}/\Lambda ^{\prime })$. Because of the above-noted presentation
$b = b _{1} ^{q}\tilde b$, this indicates that $\Delta \cong (W
_{a}/\Lambda ^{\prime }, \sigma , \tilde b)$. Note further that
the extension $W _{a}/\Lambda ^{\prime }$ is not inertial.
Assuming the opposite, one obtains from Lemma \ref{lemm6.2} that
$\tilde b \in N(W _{a}/\Lambda ^{\prime })$, which means that
$[\Delta ] = 0$ (cf.
\cite{P}, Sect. 15.1, Proposition~b). Since $\Delta \in d(\Lambda
^{\prime })$ and $\Delta \neq \Lambda ^{\prime }$, this is a
contradiction, proving our assertion. In view of Ostrowski's
theorem and the equality $[W _{a}\colon\Lambda ^{\prime }] = q$,
the considered assertion can be restated by saying that $\widehat
W _{a}/\widehat \Lambda ^{\prime }$ is a purely inseparable
extension of degree $q$ unless $\widehat W _{a} = \widehat \Lambda
^{\prime }$.
\par
Next we observe, using (4.1) (b), that $\eta = (\xi _{a} + 1)\xi
_{a} ^{-1}$ is a primitive element of $W _{a}/\Lambda ^{\prime }$
and $\eta \in O _{v}(W _{a}) ^{\ast }$; also, we denote by $f
_{\eta }(X)$ the minimal polynomial of $\eta $ over
$\Lambda^{\prime }$, and by $D(f _{\eta })$ the discriminant of $f
_{\eta }$. It is easily verified that $f _{\eta }(X) \in O _{v}(W
_{a})[X]$, $f _{\eta }(0) = (-1) ^{q}$, $D(f _{\eta }) \neq 0$,
and
\par\vskip0.041truecm\noindent
$\bar v(D(f _{\eta })) = q\bar v(f _{\eta } ^{\prime }(\eta ))
> 0$ (the inequality is strict, since $[W _{a}\colon \Lambda ^{\prime }] = q$ and
\par\vskip0.04truecm\noindent
$W _{a}/\Lambda ^{\prime }$ is not inertial). Moreover, it follows
from Ostrowski's theorem that
\par\vskip0.044truecm\noindent
there exists $\pi _{0} \in O _{v}(K)$ of value $v(\pi _{0}) =
[K(D(f _{\eta }))\colon K]\bar v(D(f _{\eta }))$. Note also that
$b ^{q ^{n-1}} \in K ^{\ast }$ (whence, $q ^{n-1}\bar v(b) \in
v(K)$), put $\pi '= \pi _{0}b ^{q ^{n-1}}$, and let
\par\vskip0.032truecm\noindent
$b'$ be the $q$-th root of $b$ lying in $K _{\rm ins}$. Applying
(6.2) to $b'$ and $\pi '$ (which is allowed because $v(\pi ') \ge
v(\pi _{0}) > 0$), one obtains that there is $\lambda \in K(q)
^{\ast }$
\par\vskip0.032truecm\noindent
with $\bar v(\lambda ^{q} - b) > qv(\pi ')$. Consider now the
fields $\Lambda _{J}(\lambda )$, $\Lambda (\lambda )$ and $\Lambda
^{\prime }(\lambda )$
\par\vskip0.032truecm\noindent
instead of $\Lambda _{J}$, $\Lambda $, and $\Lambda ^{\prime }$,
respectively. Clearly, $\Lambda _{J}(\lambda )$ is a finite
extension of
\par\vskip0.032truecm\noindent
$K$ in $K(q)$, $\Lambda (\lambda ) = \Lambda _{J}(\lambda ).I$ and
$\Lambda ^{\prime }(\lambda ) = L.\Lambda (\lambda )$, so our
choice of $J$ indicates that one may assume, for the proof of
(6.1), that $\lambda \in \Lambda _{J}$.
\par
\smallskip
We can now rule out the possibility that $J \neq K$, by showing that
\par\noindent
$[a, b) _{\Lambda '} \notin d(\Lambda ^{\prime })$ (in
contradiction with the choice of $J$, which requires that $I \notin
\mathcal{D}$). Indeed, the norm $N _{\Lambda '}^{W _{a}}(\lambda
\eta )$ is equal to $\lambda ^{q}$, and it follows from the
equality $\pi '= \pi _{0}b ^{q ^{n-1}}$ that $v(\pi ') \ge v(\pi
_{0}) + \bar v(b)$. Thus it turns out that
$$\bar v(\lambda ^{q}b ^{-1} - 1) > qv(\pi ') - \bar v(b) > v(\pi _{0})
\ge \bar v(D(f _{\eta })) = q\bar v(f _{\eta } ^{\prime }(\eta
)).$$
Therefore, applying (4.1) to the polynomial $f _{\eta }(X) + (-1)
^{q}(\lambda ^{q}b ^{-1} - 1)$ and the element $\eta $, one
obtains that $\lambda ^{q}b ^{-1}$ and $b$ are contained in $N(W
_{a}/\Lambda ^{\prime })$, which means that $[a, b) _{\Lambda '}
\notin d(\Lambda ^{\prime })$, as claimed. Hence, $J = K$, and by
the definition of the set $\mathcal{D}$, there exists a finite
extension $\Lambda _{K}$ of $K$ in $K(q)$, such that $[D \otimes
_{L} L\Lambda _{K}] \in {\rm Br}(L\Lambda _{K}/L\Lambda _{K}) =
\{0\}$. In other words, $[D] \in {\rm Br}(L\Lambda _{K}/L)$, so
(6.1) and the equality abrd$_{q}(K(q)) = 0$ are proved in case
char$(K) = q$.
\par
\smallskip
Our objective now is to prove Theorem \ref{theo6.3} in the special
case where $v$ is discrete. Clearly, one may assume, for our
proof, that char$(K) = 0$. Note that there exist fields $\Psi
_{\nu } \in I(K(q)/K)$, $\nu \in \mathbb{N}$, such that $\Psi
_{\nu }/K$ is a totally ramified Galois extension with $[\Psi
_{\nu }\colon K] = q ^{\nu }$ and $\mathcal{G}(\Psi _{\nu }/K)$
abelian of period $q$, for each index $\nu $, and $\Psi _{\nu '}
\cap \Psi _{\nu ''} = K$ whenever $\nu ', \nu '' \in \mathbb{N}$
and $\nu '\neq \nu ''$. This follows from \cite{Ch9}, Lemma~2.3
(and Galois theory, which ensures that each finite separable
extension has finitely many intermediate fields). Considering, if
necessary, $\Psi _{1}$ instead of $K$, one obtains further that it
is sufficient to prove Theorem \ref{theo6.3} under the extra
hypothesis that $v(q) \in qv(K)$. In addition, the proof of the
$q$-divisibility of $v(K(q))$ shows that, for the proof of Theorem
\ref{theo6.3}, one may consider only the special case where
$\widehat K$ is perfect.
\par
\smallskip
Let now $\Phi $ be a finite extension of $K$ in $K _{\rm sep}$,
and $\Omega \in d(\Phi )$ a division algebra, such that $[\Omega ]
\in {\rm Br}(\Phi ) _{q}$ and $[\Omega ] \neq 0$. We complete the
proof of Theorem \ref{theo6.3} by showing that $[\Omega ] \in {\rm
Br}(\Psi _{\nu }\Phi /\Phi )$, for every sufficiently large $\nu
\in \mathbb{N}$. As $v$ is discrete and Henselian with $\widehat
K$ perfect, the prolongation of $v$ on $\Phi $ (denoted also by
$v$) and its residue field $\widehat \Phi $ preserve the same
properties, so it follows from the assumptions on $\Omega $ that
it is a cyclic NSR-algebra over $\Phi $, in the sense of
\cite{JW}. In other words, there exists an inertial cyclic
extension $Y$ of $\Phi $ in $K _{\rm sep}$ of degree $[Y\colon
\Phi ] = {\rm deg}(\Omega )$, as well as an element $\tilde \pi
\in \Phi ^{\ast }$ and a generator $y$ of $\mathcal{G}(Y/\Phi )$,
such that $v(\tilde \pi ) \notin qv(\Phi )$ and $\Omega $ is
isomorphic to the cyclic $\Phi $-algebra $(Y/\Phi , y, \tilde \pi
)$. It follows from Galois theory and our assumptions on the
fields $\Psi _{\nu }$, $\nu \in \mathbb{N}$, that $\Psi _{\nu }
\cap Y = K$, for all $\nu $, with, possibly, finitely many
exceptions. Fix $\nu $ so that $\Psi _{\nu } \cap Y = K$ and
deg$(\Omega ).q ^{\mu } \le q ^{\nu }$, where $\mu $ is the
greatest integer for which $q ^{\mu } \mid [\Phi \colon K]$. Put
$\Omega _{\nu } = \Omega \otimes _{\Phi } \Psi _{\nu }\Phi $ and
denote by $v _{\nu }$ the valuation of $\Psi _{\nu }\Phi $
extending $v$. It is easily obtained from Galois theory and the
choice of $\nu $ (cf. \cite{L}, Ch. VI, Theorem~1.12) that $\Psi
_{\nu }Y/\Psi _{\nu }\Phi $ is a cyclic extension, $[\Psi _{\nu
}Y\colon \Psi _{\nu }\Phi ] = [Y\colon \Phi ] = {\rm deg}(\Omega
)$, $y$ extends uniquely to a $\Psi _{\nu }\Phi $-automorphism $y
_{\nu }$ of $\Psi _{\nu }Y$, $y _{\nu }$ generates
$\mathcal{G}(\Psi _{\nu }Y/\Psi _{\nu }\Phi )$, and $\Omega _{\nu
}$ is isomorphic to the cyclic $\Psi _{\nu }\Phi $-algebra $(\Psi
_{\nu }Y/\Psi _{\nu }\Phi , y _{\nu }, \tilde \pi )$. Also, the
assumptions on $\Psi _{\nu }$ show that $v _{\nu }(\tilde \pi )
\in q ^{\nu -\mu }v _{\nu }(\Psi _{\nu }\Phi )$. Therefore, by the
theory of cyclic algebras (cf. \cite{P}, Sect. 15.1), and the
divisibility deg$(\Omega ) \mid q ^{\nu -\mu }$, $\Omega _{\nu }$
is $\Psi _{\nu }\Phi $-isomorphic to $(\Psi _{\nu }Y/\Psi _{\nu
}\Phi , y _{\nu }, \lambda _{\nu })$, for some $\lambda _{\nu }
\in O _{v _{\nu }}(\Psi _{\nu }\Phi ) ^{\ast }$. Since $\widehat
K$ is perfect (that is, $\widehat K = \widehat K ^{q ^{\ell }}$,
for each $\ell \in \mathbb{N}$), a similar argument shows that
$\lambda _{\nu }$ can be chosen to be an element of $\nabla
_{0}(\Psi _{\nu }\Phi )$. Taking also into account
\par\vskip0.032truecm\noindent
that $\Psi _{\nu }Y/\Psi _{\nu }\Phi $ is inertial, one obtains
from Lemma \ref{lemm6.2} that \par\vskip0.032truecm\noindent
$\lambda _{\nu } \in N(\Psi _{\nu }Y/\Psi _{\nu }\Phi )$. Hence,
by the cyclicity of $\Psi _{\nu }Y/\Psi _{\nu }\Phi $, $[\Omega
_{\nu }] = 0$, i.e.
\par\vskip0.032truecm\noindent
$[\Omega ] \in {\rm Br}(\Psi _{\nu }\Phi /\Phi )$. As
$\Psi _{\nu } \in I(K(q)/K)$ and $\Omega \in d(\Phi )$ represents
an arbitrary nonzero element of Br$(\Phi ) _{q}$, now it is clear
that
\par\vskip0.028truecm\noindent
Br$(\Phi ) _{q} = {\rm Br}(K(q)\Phi /\Phi )$, for each $\Phi \in
{\rm Fe}(K)$, so Theorem \ref{theo6.3} is proved.
\par
\medskip
At the end of this Section, we prove two lemmas which show that
\par\noindent
dim$(K _{\rm sol}) \le 1$ whenever $K$ is a field satisfying the
conditions of Lemma \ref{lemm3.3}.
\par
\smallskip
\begin{lemm}
\label{lemm6.4} Let $(K, v)$ be a Henselian field with {\rm
char}$(\widehat K) = q$ and {\rm dim}$(\widehat K _{\rm sol})$
$\le 1$, and in case {\rm char}$(K) = 0 < q$, let $v$ be discrete.
Then {\rm dim}$(K _{\rm sol}) \le 1$.
\end{lemm}
\par
\smallskip
\begin{proof}
Put $\mathbb{P} ^{\prime } = \mathbb{P} \setminus \{q\}$, and for
each $p \in \mathbb{P} ^{\prime }$, fix a primitive $p$-th root of
unity $\varepsilon _{p} \in K _{\rm sep}$ and a field $T _{p}(K)
\in I(K _{\rm tr}/K)$ in accordance with Lemma \ref{lemm4.2}
(b)-(c). Note first that the compositum $T(K)$ of the fields $T
_{p}(K)$, $p \in \mathbb{P} ^{\prime }$, is a subfield of $K _{\rm
sol}$. Indeed, $T _{p}(K) \in I(K(\varepsilon _{p})(p)/K)$, for
each $p \in \mathbb{P} ^{\prime }$, so our assertion follows from
Galois theory, the cyclicity of the extension $K(\varepsilon
_{p})/K$, and the fact that finite solvable groups form a class
closed under taking subgroups, quotient groups and group
extensions. Secondly, Lemma \ref{lemm4.1} implies that the field $K
_{\rm ur} \cap K _{\rm sol} := U$ satisfies $\widehat U = \widehat
K _{\rm sol}$. Observing also that $v(T(K)) = pv(T(K))$, $p \in
\mathbb{P} ^{\prime }$, and dim$(\widehat K _{\rm sol}) \le 1$,
and using (4.2)~(a), (4.3)~(a) and \cite{JW}, Theorem~2.8, one
obtains that $v(K ^{\prime }) = pv(K ^{\prime })$, Br$(K ^{\prime
}) _{p} \cong {\rm Br}(\widehat K ^{\prime }) _{p}$ and
Brd$_{p}(\widehat K ^{\prime }) = {\rm Brd}_{p}(K ^{\prime }) =
0$, for each $p \in \mathbb{P} ^{\prime }$ and every finite
extension $K ^{\prime }/K_{\rm sol}$. When $q = 0$, this proves
Lemma \ref{lemm6.4}, and when $q > 0$, our proof is completed by
applying Theorem \ref{theo6.3}.
\end{proof}
\par
\smallskip
\begin{lemm}
\label{lemm6.5} Let $K _{m}$ be a complete $m$-discretely valued
field with {\rm dim}$(K _{0,{\rm sol}})$ $\le 1$, $K _{0}$ being
the $m$-th residue field of $K _{m}$. Then {\rm dim}$(K _{m,{\rm
sol}}) \le 1$.
\end{lemm}
\par
\smallskip
\begin{proof}
In view of Lemma \ref{lemm6.4}, one may consider only the case
where $m \ge 2$. Denote by $K _{m-j}$ the $j$-th residue field of
$K _{m}$, for $j = 1, \dots , m$. Suppose first that char$(K _{m})
= {\rm char}(K _{0})$. Using Lemma \ref{lemm6.4} repeatedly, one
obtains that dim$(K _{m,{\rm sol}}) \le 1$, which allows us to
assume, for the rest of our proof, that char$(K _{m}) = 0$ and
char$(K _{0}) = q > 0$. Let $\mu $ be the maximal integer for
which char$(K _{m-\mu }) = 0$. Then $0 \le \mu < m$, char$(K
_{m-\mu -1}) = q$, and in case $\mu < m - 1$, $K _{m-\mu -1}$ is a
complete $(m - \mu - 1)$-discretely valued field with last residue
field $K _{0}$; also, $K _{m-\mu }$ is a complete discretely valued
field with residue field $K _{m-\mu -1}$. Therefore, Lemma
\ref{lemm6.4} yields dim$(K _{m-\mu ',{\rm sol}}) \le 1$, for $\mu
' = \mu , \mu + 1$. Note finally that if $\mu > 0$, then $K _{m}$
is a complete $\mu $-discretely valued field with $\mu $-th
residue field $K _{m-\mu }$, and by Lemma \ref{lemm6.4} (used
repeatedly), dim$(K _{m-m',{\rm sol}}) \le 1$, $m' = 0, \dots ,
\mu -1$, as required.
\end{proof}
\par
\smallskip
\section{\bf Tame version of Lemma \ref{lemm3.3} for admissible
Henselian fields}
\par
\medskip
Let $(K, v)$ be a Henselian field with $\widehat K$ of arithmetic
type and characteristic $q$, put $\mathbb{P} _{q} = \mathbb{P}
\setminus \{q\}$, and suppose that abrd$_{p}(K) < \infty $, $p \in
\mathbb{P}$, and $R$ is a central division LBD-algebra over $K$.
Our main objective in this Section is to prove a modified version
of Lemma \ref{lemm3.3}, where the fields $E _{p}$, $p \in
\mathbb{P}$, are replaced by tamely ramified extensions $V _{p}$,
$p \in \mathbb{P} _{q}$, of $K$ in $K _{\rm sep}$, chosen so as to
satisfy the following conditions, for each $p \in \mathbb{P}
_{q}$: $V _{p}$ is a $p$-splitting field of $R/K$, $[V _{p}\colon
K]$ is a $p$-primary number, and $V _{p} \cap K _{\rm ur}
\subseteq K(p)$. The desired modification is stated as Lemma
\ref{lemm7.6}, and is also called a tame version of Lemma
\ref{lemm3.3}. Our first step towards this goal can be formulated
as follows:
\par
\medskip
\begin{lemm}
\label{lemm7.1} Let $(K, v)$ be a Henselian field and let $T/K$ be
a tamely totally ramified extension of $p$-primary degree
$[T\colon K] > 1$, for some $p \in \mathbb{P}$. Then there exists
a degree $p$ extension $T _{1}$ of $K$ in $T$. Moreover, $T
_{1}/K$ is a Galois extension if and only if $K$ contains a
primitive $p$-th root of unity.
\end{lemm}
\par
\smallskip
\begin{proof}
Our assumptions show that $v(T)/v(K)$ is an abelian $p$-group of
order equal to $[T\colon K]$, whence, there is $\theta \in T$
with $v(\theta ) \notin v(K)$ and $pv(\theta ) \in v(K)$. Therefore,
it follows that $K$ contains elements $\theta _{0}$ and $a$, such
that
\par\vskip0.044truecm\noindent
$v(\theta _{0}) = pv(\theta ) = v(\theta ^{p})$, $v(a) = 0$ and
$v(\theta ^{p} - \theta _{0}a) > 0$. This implies the
\par\vskip0.044truecm\noindent
existence of an
element $\theta ' \in T$ satisfying $v(\theta ') > 0$ and $\theta
^{p} = \theta _{0}a(1 + \theta ')$. Note further that, by the
assumption on $T/K$, $p \neq {\rm char}(\widehat T)$ and $\widehat T
= \widehat K$;
\par\vskip0.04truecm\noindent
hence, by (4.1) (a), applied to the binomial $X ^{p} - (1 + \theta
')$, we have $1 + \theta ' \in T ^{\ast p}$. More precisely, $1 + \theta '
= (1 + \theta _{1}) ^{p}$, for some $\theta _{1} \in T$ of value
$v(\theta _{1}) > 0$. Observing
\par\vskip0.044truecm\noindent
now that $v(\theta _{0}a)
\notin pv(K)$ and $(\theta (1 + \theta _{1}) ^{-1}) ^{p} = \theta
_{0}a$, one obtains that the
\par\vskip0.044truecm\noindent
field $T _{1} = K(\theta (1 + \theta _{1}) ^{-1})$ is a degree $p$
extension of $K$ in $T$. Suppose finally that $\varepsilon $ is a
primitive $p$-th root of unity lying in $T _{\rm sep}$. It is
clear from the noted properties of $T _{1}$ that $T
_{1}(\varepsilon )$ is the Galois closure of $T _{1}$ (in $T _{\rm
sep}$) over $K$. Since $[K(\varepsilon )\colon K] \mid p - 1$ (see
\cite{L}, Ch. VI, Sect. 3), this ensures that $T _{1}/K$ is a
Galois extension if and only if $\varepsilon \in K$, so Lemma
\ref{lemm7.1} is proved.
\end{proof}
\par
\medskip
The fields $V _{p}(K)$, $p \in \mathbb{P} \setminus \{{\rm
char}(\widehat K)\}$, singled out by the next lemma play the same
role in our tame version of Lemma \ref{lemm3.3} as the
maximal $p$-extensions $K(p)$, $p \in \mathbb{P}$, do in the
original version of Lemma \ref{lemm3.3}.
\par
\medskip
\begin{lemm}
\label{lemm7.2} Let $(K, v)$ be a Henselian field with {\rm
abrd}$_{p}(\widehat K(p)) = 0$, for some $p \in \mathbb{P}$
different from {\rm char}$(\widehat K)$. Fix $T _{p}(K) \in
I(T(K)/K)$ in accordance with Lemma \ref{lemm4.2} {\rm (c)}, and
put $K _{0}(p) = K(p) \cap K _{\rm ur}$. Then {\rm abrd}$_{p}(V
_{p}(K)) = 0$, where $V _{p}(K) = K _{0}(p).T _{p}(K)$.
\end{lemm}
\par
\medskip
\begin{proof}
It follows from Lemma \ref{lemm4.2} (b) and (c) that $v(T ^{\prime
}) = pv(T ^{\prime })$, for any $T ^{\prime } \in I(K _{\rm sep}/T
_{p}(K))$; therefore, if $D ^{\prime } \in d(T ^{\prime })$ is of
$p$-primary degree $\ge p$, then it is neither totally ramified
nor NSR over $T ^{\prime }$. As $p \neq {\rm char}(\widehat K)$,
this implies, in conjunction with Decomposition Lemmas~5.14 and 6.2
of \cite{JW}, that $D ^{\prime }/T ^{\prime }$ is inertial. Thus
it turns out that Br$(\widehat T ^{\prime }) _{p}$ must be
nontrivial whenever Br$(T ^{\prime }) _{p}$ is. Suppose now that
$T ^{\prime } \in I(K _{\rm sep}/V
_{p}(K))$. Then $\widehat T ^{\prime }/\widehat K(p)$ is a
separable field extension, so the condition that
abrd$_{p}(\widehat K(p)) = 0$ requires that Br$(\widehat T
^{\prime }) _{p} = \{0\}$. It is now easy to see that Br$(T
^{\prime }) _{p} = \{0\}$, i.e. Brd$_{p}(T ^{\prime }) = 0$. Since
the field $T ^{\prime }$ is an arbitrary element of $I(K _{\rm
sep}/V _{p}(K))$, this proves Lemma \ref{lemm7.2}.
\end{proof}
\par
\medskip
The following lemma presents the main properties of finite
extensions of $K$ in $V _{p}(K)$, which are used for proving Lemma
\ref{lemm3.3}.
\par
\medskip
\begin{lemm}
\label{lemm7.3} In the setting of Lemma \ref{lemm7.2}, let $V$ be
an extension of $K$ in $V _{p}(K)$ of degree $p ^{\ell } > 1$.
Then there exist fields $\Sigma _{0}, \dots , \Sigma _{\ell } \in
I(V/K)$, such that $[\Sigma _{j}\colon K] = p ^{j}$, $j = 0, \dots
, \ell $, and $\Sigma _{j-1} \subset \Sigma _{j}$ for every index
$j > 0$.
\end{lemm}
\par
\medskip
\begin{proof}
By Lemma \ref{lemm4.1} (d), the field $K$ has an inertial
extension $V _{0}$ in $V$ with $\widehat V _{0} = \widehat V$.
Moreover, it follows from (4.2) (a) and the inequality $p \neq
{\rm char}(\widehat K)$ that $V/V _{0}$ is totally ramified.
Considering the extensions $V _{0}/K$ and $V/V _{0}$, one
concludes that it is sufficient to prove Lemma \ref{lemm7.3} in
the special case where $V _{0} = V$ or $V _{0} = K$. If $V _{0} =
V$, then our assertion follows from Lemma \ref{lemm4.1} (c),
Galois theory and the subnormality of proper subgroups of finite
$p$-groups (cf. \cite{L}, Ch. I, Sect. 6). When $V _{0} = K$, by
Lemma \ref{lemm7.1}, there is a degree $p$ extension $V _{1}$ of
$K$ in $V$. As $V/V _{1}$ is totally ramified, this allows us to
complete the proof of Lemma \ref{lemm7.3} by a standard inductive
argument.
\end{proof}
\par
\medskip
Theorem \ref{theo6.3} and our next lemma characterize the fields
of arithmetic type among all fields admissible by one of Theorems
\ref{theo3.1} and \ref{theo3.2}. These results show that an
$m$-dimensional local field is of arithmetic type if and only if
$m = 1$. They also prove that if $(K, v)$ is a Henselian field
with char$(K) = {\rm char}(\widehat K)$, then $K$ is a field of
arithmetic type, provided that it is virtually perfect, $\widehat
K$ is of arithmetic type, $v(K)/pv(K)$ is a finite group, for each
$p \in \mathbb{P}$, and $\widehat K$ contains a primitive $p$-th
root of unity, for each $p \in \mathbb{P} \setminus \{{\rm
char}(\widehat K)\}$.
\par
\medskip
\begin{lemm}
\label{lemm7.4} Assume that $(K, v)$ and $p$ satisfy the
conditions of Lemma \ref{lemm7.2}, $\hat \varepsilon \in \widehat
K _{\rm sep}$ is a primitive $p$-th root of unity, and $\tau (p)$
is the dimension of the group $v(K)/pv(K)$, viewed as a vector
space over the field $\mathbb{Z}/p\mathbb{Z}$. Then {\rm
abrd}$_{p}(K(p)) = 0$ unless $\hat \varepsilon \notin \widehat K$
and $\tau (p) + {\rm cd}_{p}(\mathcal{G}_{\widehat K(p)}) \ge 2$.
\end{lemm}
\par
\smallskip
\begin{proof}
It follows from (4.2) (a) and Lemma \ref{lemm4.1} that if $v(K) =
pv(K)$, then $K(p) \subseteq K _{\rm ur}$, whence, $K(p) = K
_{0}(p) = V _{p}(K)$, and by Lemma \ref{lemm7.2}, abrd$_{p}(K(p))
= 0$. This agrees with the assertion of Lemma \ref{lemm7.4} in
case $v(K) = pv(K)$, since $p \neq {\rm char}(\widehat K)$ and, by
Galois cohomology, we have abrd$_{p}(\widehat K(p)) = 0$ if and
only if cd$_{p}(\mathcal{G}_{\widehat K(p)}) \le 1$ (see
\cite{GiSz}, Theorem~6.1.8, or \cite{S1}, Ch. II, 3.1). Therefore,
we assume in the rest of the proof that $v(K) \neq pv(K)$. Fix a
primitive $p$-th root of unity $\varepsilon \in K _{\rm sep}$, and
as in Lemma \ref{lemm7.3}, consider a finite extension $V$ of $K$
in $V _{p}(K)$. It is easily verified that $\varepsilon \in K$ if
and only if $\hat \varepsilon \in \widehat K$, and this holds if
and only if $\varepsilon \in V _{p}(K)$. Suppose that $[V\colon K]
= p ^{\ell } > 1$ and take fields $\Sigma _{j} \in I(V/K)$, $j =
0, 1, \dots , \ell $, as required by Lemma \ref{lemm7.3}.
Observing that $K(p) = K _{1}(p)$, for any $K _{1} \in I(K(p)/K)$,
and using Galois theory and the normality of maximal subgroups of
nontrivial finite $p$-groups, one obtains that $V \in I(K(p)/K)$
if and only if $\Sigma _{j}/\Sigma _{j-1}$ is a Galois extension,
for every $j
> 0$. In view of Lemma \ref{lemm7.1}, this occurs if and only if
$\varepsilon \in K$ or $V \in I(K _{0}(p)/K)$. It is now easy to
see that $K(p) = V _{p}(K)$ if $\varepsilon \in K$, and $K(p) = K
_{0}(p)$, otherwise. Hence, by Lemma \ref{lemm7.2},
abrd$_{p}(K(p)) = 0$ in case $\varepsilon \in K$, as claimed by
Lemma \ref{lemm7.4}.
\par
Assume finally that $v(K) \neq pv(K)$ and $\varepsilon \notin K$
(in this case, $p > 2$). It is easy to see that if
cd$_{p}(\mathcal{G}_{\widehat K(p)}) = 1$, then there is a finite
extension $Y$ of $K _{0}(p)$ in $K _{\rm ur}$, such that $\widehat
Y(p) \neq \widehat Y$. Therefore, there exists a degree $p$ cyclic
extension $Y ^{\prime }$ of $Y$ in $Y _{\rm ur} = Y.K _{\rm ur}$,
which ensures the existence of a nicely semi-ramified $Y$-algebra
$\Lambda \in d(Y)$, in the sense of \cite{JW}, of degree $p$; this
yields abrd$_{p}(K _{0}(p)) \ge {\rm Brd}_{p}(Y) \ge 1$. The
inequality abrd$_{p}(K _{0}(p)) \ge 1$ also holds if $\tau (p) \ge
2$, i.e. $v(K)/pv(K)$ is noncyclic. Indeed, then Brd$_{p}(K
_{0}(p)(\varepsilon )) \ge 1$; this follows from the fact that
$v(K _{0}(p)(\varepsilon )) = v(K)$, which implies the symbol $K
_{0}(p)(\varepsilon )$-algebra $A _{\varepsilon }(a _{1}, a _{2};
K _{0}(p)(\varepsilon ))$ (defined, e.g., in \cite{Mat}) is a
division algebra whenever $a _{1}$ and $a _{2}$ are elements of $K
^{\ast }$ chosen so that the cosets $v(a _{i}) + pv(K)$, $i = 1,
2$, generate a subgroup of $v(K)/pv(K)$ of order $p ^{2}$.
\par
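\smallskip\noindent
For the reader's convenience, we recall the standard presentation
of the symbol algebra used above (the ordering conventions for the
generators vary slightly in the literature, cf. \cite{Mat}): for a
field $F$ containing the primitive $p$-th root of unity
$\varepsilon $, and $a _{1}, a _{2} \in F ^{\ast }$,
\[
A _{\varepsilon }(a _{1}, a _{2}; F) = \langle x, y \mid x ^{p} =
a _{1}, \ y ^{p} = a _{2}, \ yx = \varepsilon xy \rangle ,
\]
a central simple $F$-algebra of degree $p$; above, $F = K
_{0}(p)(\varepsilon )$.
\par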
In order to complete the proof of Lemma \ref{lemm7.4} it remains
to be seen that abrd$_{p}(K _{0}(p)) = 0$ in case
cd$_{p}(\mathcal{G}_{\widehat K(p)}) = 0$ and $\tau (p) = 1$.
Since $p \neq {\rm char}(\widehat K)$, this amounts to proving
that cd$_{p}(\mathcal{G}_{K _{0}(p)}) \le 1$. As $K _{0}(p) = K
_{\rm ur} \cap K(p)$, we have $v(K _{0}(p)) = v(K)$ and $\widehat
{K _{0}(p)} = \widehat K(p)$, so it follows from \cite{Ch5},
Lemma~1.2, that cd$_{p}(\mathcal{G}_{K _{0}(p)}) = {\rm
cd}_{p}(\mathcal{G}_{\widehat K(p)}) + \tau (p) = 1$, as claimed.
\end{proof}
\par
\smallskip
\begin{rema}
\label{rema7.5} Summing up Lemmas \ref{lemm4.4}, \ref{lemm4.5} and
\ref{lemm7.4}, one obtains a complete valuation-theoretic
characterization of the fields of arithmetic type among the
maximally complete fields $(K, v)$ with abrd$_{p}(K) < \infty $,
for every $p \in \mathbb{P}$. As demonstrated in the proof of
Corollary \ref{coro4.6}, this fully describes the class
$\mathcal{C} _{0}$ of those fields of arithmetic type that lie
in the class $\mathcal{C}$ of generalized formal power series
fields of finite absolute Brauer $p$-dimensions. Note that
$\mathcal{C}$ is considerably larger than $\mathcal{C} _{0}$. For
example, if $K _{0}$ is a finite field and $\Gamma $ is an ordered
abelian group with finite quotients $\Gamma /p\Gamma $, for all $p
\in \mathbb{P}$, then $K _{0}((\Gamma )) \in \mathcal{C} \setminus
\mathcal{C} _{0}$ in case $\Gamma /p\Gamma $ are noncyclic, for
infinitely many $p$.
\end{rema}
\par
\medskip
\label{tame} The conclusion of Lemma \ref{lemm7.3} remains valid
if $K$ is an arbitrary field, $p \in \mathbb{P}$, and $V$ is a
finite extension of $K$ in $K(p)$ of degree $p ^{\ell } > 1$; then
the extensions $\Sigma _{j}/\Sigma _{j-1}$, $j = 1, \dots , \ell
$, are Galois of degree $p$ (see \cite{L}, Ch. I, Sect. 6).
Considering the proof of Lemma \ref{lemm7.4}, one also sees that,
in the setting of Lemma \ref{lemm7.3}, $V _{p}(K) \subseteq K(p)$
if and only if $\widehat K$ contains a primitive $p$-th root of
unity or $v(K) = pv(K)$. These observations and Lemma
\ref{lemm7.4} allow us to view the following result as a tame
version of Lemma \ref{lemm3.3}:
\par
\medskip
\begin{lemm}
\label{lemm7.6} Assume that $K$, $q$ and $R$ satisfy the
conditions of Theorem \ref{theo3.1} or Theorem \ref{theo3.2}. Put
$\mathbb{P} ^{\prime } = \mathbb{P} \setminus \{q\}$, and for each
$p \in \mathbb{P} ^{\prime }$, denote by $k(p)$ the $p$-power of
$R/K$, and by $V _{p}(K)$ the extension of $K$ in $K _{\rm sep}$
singled out by Lemma \ref{lemm7.2}. Then there exist finite
extensions $V _{p}$ of $K$, $p \in \mathbb{P} ^{\prime }$, with the
following properties, for each $p$:
\par
{\rm (c)} $V _{p}$ is a $p$-splitting field of $R/K$, i.e. $p$ does
not divide $[V _{p}(\delta _{p})\colon V _{p}]$, for any element
$\delta _{p}$ of the underlying central division $V _{p}$-algebra
$\Delta _{p}$ of $R \otimes _{K} V _{p}$;
\par
{\rm (cc)} $V _{p} \in I(V _{p}(K)/K)$, so $[V _{p}\colon
K] = p ^{\ell (p)}$, for some integer $\ell (p) \ge k(p)$, and the
maximal inertial extension $U _{p}$ of $K$ in $V _{p}$ is a subfield
of $K(p)$.
\end{lemm}
\par
\medskip
\begin{proof}
It is clearly sufficient to show that $R \otimes _{K} V _{p}(K)
\cong M _{p^{k(p)}}(R ^{\prime })$, for some central division $V
_{p}(K)$-algebra $R ^{\prime }$. Our proof relies on the inclusion
$V _{p}(K) \subseteq K _{\rm sol}$. In view of the $V
_{p}(K)$-isomorphism $R \otimes _{K} V _{p}(K) \cong (R \otimes
_{K} Y _{p}) \otimes _{Y_{p}} V _{p}(K)$, for each $Y _{p} \in I(V
_{p}(K)/K)$, this enables one to obtain from Lemma \ref{lemm5.1}
that $R \otimes _{K} V _{p}(K)$ is $V _{p}(K)$-isomorphic to $M
_{s(p)}(R _{p})$, for some central division $V _{p}(K)$-algebra $R
_{p}$ and some $s(p) \in \mathbb{N}$ dividing $p ^{k(p)}$. In
order to complete the proof of Lemma \ref{lemm7.6} we show that $p
^{k(p)} \mid s(p)$. Since, by Lemma \ref{lemm7.2}, abrd$_{p}(V
_{p}(K)) = 0$, it can be deduced from \cite{Ch3}, Lemma~3.6, that
for any finite extension $Y ^{\prime }$ of $V _{p}(K)$, $R _{p}
\otimes _{V _{p}(K)} Y ^{\prime }$ is isomorphic as a $Y ^{\prime
}$-algebra to $M _{y'}(R ^{\prime })$, for some $y' \in
\mathbb{N}$ not divisible by $p$, and some central division
LBD-algebra $R ^{\prime }$ over $Y ^{\prime }$. Note further that
there is a $Y ^{\prime }$-isomorphism $R \otimes _{K} Y ^{\prime }
\cong (R \otimes _{K} Y) \otimes _{Y} Y ^{\prime }$, for any $Y
\in I(Y ^{\prime }/K)$. This, applied to the case where $Y = V
_{p}(K)$, and together with the Wedderburn-Artin theorem and
\cite{P}, Sect. 9.3, Corollary~b, leads to the conclusion that $R
\otimes _{K} Y ^{\prime } \cong M _{s(p).y'}(R ^{\prime })$ as $Y
^{\prime }$-algebras. Considering again an arbitrary $Y \in I(Y
^{\prime }/K)$, one obtains similarly that if $R _{Y}$ is the
underlying division $Y$-algebra of $R \otimes _{K} Y$, then there
exists a $Y$-isomorphism $R \otimes _{K} Y \cong M _{y}(R _{Y})$,
for some $y \in \mathbb{N}$ dividing $s(p).y'$. Suppose now that
$Y ^{\prime } = V _{p}(K)Y$, for some finite extension $Y$ of $K$
in an algebraic closure of $V _{p}(K)$, such that $p ^{k(p)} \mid
[Y\colon K]$ and $Y$ embeds in $R$ as a $K$-subalgebra. Then, by
the previous observation, $p ^{k(p)} \mid s(p).y'$; since $p \nmid
y'$, this implies $p ^{k(p)} \mid s(p)$ and so completes the proof
of Lemma \ref{lemm7.6}.
\end{proof}
\par
\medskip
Lemmas \ref{lemm7.2}, \ref{lemm7.3}, \ref{lemm7.6} and the results
of Sections 4 and 5 enable us to deduce Lemma \ref{lemm3.3} by the
method of proving \cite{Ch3}, Lemma~8.3. This is done in the
following two Sections, in two steps.
\par
\medskip
\section{\bf A special case of Lemma \ref{lemm3.3}}
\par
\medskip
Let $K$ be a field and $R$ a central division LBD-algebra over $K$
satisfying the conditions of Theorem \ref{theo3.1} or Theorem
\ref{theo3.2}, and put $q = {\rm char}(K _{0})$ in the former
case, $q = {\rm char}(K)$ in the latter one. This Section gives a
proof of Lemma \ref{lemm3.3} in the case where $q$ does not divide
$[K(r)\colon K]$, for any $r \in R$. In order to achieve this goal
we need the following two lemmas:
\par
\medskip
\begin{lemm}
\label{lemm8.1} Let $(K, v)$ be a field with {\rm dim}$(K _{\rm
sol}) \le 1$ and {\rm abrd}$_{\ell }(K) < \infty $, for all $\ell
\in \mathbb{P}$. Fix $p \in \mathbb{P}$ and a field $M \in I(M
^{\prime }/K)$, for some finite Galois extension $M ^{\prime }$ of
$K$ in $K _{\rm sep}$ with $\mathcal{G}(M ^{\prime }/K)$ nilpotent
and $[M ^{\prime }\colon K]$ not divisible by $p$. Assume that $R$
is a central division {\rm LBD}-algebra over $K$, $R _{M}$ is the
underlying division $M$-algebra of $R \otimes _{K} M$, and there
is an $M$-subalgebra $\Delta _{M}$ of $R _{M}$, such that the
following (equivalent) conditions hold:
\par
{\rm (c)} $M$ is a $p'$-splitting field of $R/K$, for every $p' \in
\mathbb{P}$ dividing $[M\colon K]$;
\par\noindent
$\Delta _{M} \in d(M)$ and {\rm deg}$(\Delta _{M}) = p ^{k(p)}$,
where $k(p)$ is the $p$-power of $R/K$;
\par
{\rm (cc)} $\gcd \{p[M\colon K], [M(z _{M})\colon M]\} = 1$, for
every $z _{M} \in C _{R _{M}}(\Delta _{M})$.
\par\smallskip\noindent
Then $\Delta _{M} \cong \Delta \otimes _{K} M$ as $M$-algebras, for
some subalgebra $\Delta \in d(K)$ of $R$.
\end{lemm}
\par
\smallskip
\begin{proof}
The equivalence of conditions (c) and (cc) follows from Lemmas
\ref{lemm5.1} and \ref{lemm5.3}. Note further that if $M \neq K$,
then $M$ contains as a subfield a cyclic extension $M _{0}$ of $K$
of degree $p' \neq p$. This is a consequence of the normality of
maximal subgroups of nilpotent finite groups (established by the
Burnside-Wielandt theorem, see \cite{KM}, Theorem~17.1.4) and
Galois theory. Considering $M _{0}$ and the underlying division $M
_{0}$-algebra $R _{0}$ of $R \otimes _{K} M _{0}$, instead of $K$
and $R$, respectively, and taking into account that the $M$-algebras
$R \otimes _{K} M$ and $(R \otimes _{K} M _{0}) \otimes _{M _{0}} M$
are isomorphic, one concludes that conditions (c) and (cc) of Lemma
\ref{lemm8.1} are fulfilled again. Therefore, a standard inductive
argument shows that it suffices to prove Lemma \ref{lemm8.1} under
the extra hypothesis that there exists a subalgebra $\Delta _{0} \in
d(M _{0})$ of $R _{0}$, such that $\Delta _{0} \otimes _{M _{0}} M
\cong \Delta _{M}$ as $M$-algebras. Let $\varphi $ be a
$K$-automorphism of $M _{0}$ of order $p'$, and let $\bar \varphi $
be the unique $K$-automorphism of $R \otimes _{K} M _{0}$ extending
$\varphi $ and acting on $R$ as the identity. Then it follows from
the Skolem-Noether theorem (cf. \cite{He}, Theorem~4.3.1) and from
the existence of an $M _{0}$-isomorphism $R \otimes _{K} M _{0} \cong
M _{p*}(M _{0}) \otimes _{M _{0}} R _{0}$ (where $p* = p$ or $p* = 1$
depending on whether or not $M _{0}$ is embeddable in $R$ as a
$K$-subalgebra) that $R _{0}$ has a $K$-automorphism $\tilde \varphi
$ extending $\varphi $. Note also that $p \nmid [M _{0}(z _{0})\colon
M _{0}]$, for any $z _{0} \in C _{R _{0}}(\Delta _{0})$. This is
implied by Lemma \ref{lemm5.1}, condition (cc) of Lemma
\ref{lemm8.1}, and the fact that $C _{R _{M}}(\Delta _{M})$ is the
underlying division $M$-algebra of $C _{R _{M _{0}}}(\Delta _{0})
\otimes _{M _{0}} M$. Hence, by Lemma \ref{lemm5.4}, $\Delta _{0}$
is $M _{0}$-isomorphic to its image $\Delta _{0} ^{\prime }$ under
$\tilde \varphi $, so it follows from the Skolem-Noether theorem
that $\varphi $ extends to a $K$-automorphism of $\Delta _{0}$. As
$p \nmid [M _{0}\colon K]$ and deg$(\Delta _{0}) = {\rm
deg}(\Delta _{M}) = p ^{k(p)}$, this enables one to deduce from
Teichm\"{u}ller's theorem (cf. \cite{Dr1}, Sect. 9, Theorem~4) and
\cite{Ch3}, Lemma~3.5, that there exists an $M _{0}$-isomorphism
$\Delta _{0} \cong \Delta \otimes _{K} M _{0}$, for some central
$K$-subalgebra $\Delta $ of $R$.
\end{proof}
\par
\medskip
\begin{lemm}
\label{lemm8.2} Let $(K, v)$ be a Henselian field with {\rm
abrd}$_{\ell }(K) < \infty $, $\ell \in \mathbb{P}$, and $\widehat
K$ of arithmetic type, and let $R$ be a central division {\rm
LBD}-algebra over $K$. Fix a primitive $p$-th root of unity
$\varepsilon \in K _{\rm sep}$, for some $p \in \mathbb{P}$, $p
\neq {\rm char}(\widehat K)$, and suppose that {\rm dim}$(K _{\rm
sol}) \le 1$ and $R$ satisfies the following conditions:
\par
{\rm (i)} $p ^{2}$ and {\rm char}$(\widehat K)$ do not divide the
degree $[K(\delta )\colon K]$, for any $\delta \in R$;
\par
{\rm (ii)} There is a $K$-subalgebra $\Theta $ of $R$, which is a
totally ramified extension of $K$ of degree $[\Theta \colon K] =
p$.
\par\noindent
Then there exists a central $K$-subalgebra $\Delta $ of $R$, such
that {\rm deg}$(\Delta ) = p$ and $\Delta $ possesses a
$K$-subalgebra isomorphic to $\Theta $. Moreover, if $\varepsilon
\notin K$, then $\Delta $ contains as a $K$-subalgebra an inertial
cyclic extension of $K$ of degree $p$.
\end{lemm}
\par
\medskip
\begin{proof}
As in the proof of Lemma \ref{lemm7.1}, one obtains from the
assumption on $\Theta /K$ that $\Theta = K(\xi )$, where $\xi $ is
a $p$-th root of an element $\theta \in K ^{\ast }$ of value
$v(\theta ) \notin pv(K)$. Suppose first that $\varepsilon \in K$.
Then $\Theta /K$ is a cyclic extension, so it follows from the
Skolem-Noether theorem that there exists $\eta \in R ^{\ast }$,
such that $\eta \xi \eta ^{-1} = \varepsilon \xi $. As a first
step towards our proof, we show that $\eta $ can be chosen so as
to satisfy the following:
\par
\medskip\noindent
(8.1) The field extension $K(\eta ^{p})/K$ is inertial.
\par
\medskip\noindent
Put $\eta ^{p} = \rho $, $B = K(\rho )$, and $r = [B\colon
\mathcal{B}]$, where $\mathcal{B}$ is the maximal inertial
extension of $K$ in $B$. It is easily verified that $\xi \rho =
\rho \xi $. Since $\xi \eta \neq \eta \xi $ and $\varepsilon \in
K$, this means that $\eta \notin B$ and $[K(\eta )\colon B] = p$.
Observing that $[K(\eta )\colon K] = [K(\eta )\colon B].[B\colon
K]$, and by assumption, $p ^{2} \nmid [K(\eta )\colon K]$, one
also obtains that $p \nmid [B\colon K]$. Therefore, $p \nmid r$,
whence, the pairs $\xi , \eta $ and $\xi , \eta ^{r}$ generate the
same $K$-subalgebra of $R$. Similarly, condition (i) of Lemma
\ref{lemm8.2} shows that char$(\widehat K) \nmid r$, which leads
to the conclusion that the set of those $b \in B$, for which $v(b)
\in rv(B)$, equals the inner group product $\mathcal{B} ^{\ast
}.\nabla _{0}(B)$. Since, by the Henselian property of $(B, v
_{B})$, $\nabla _{0}(B) \subset B ^{\ast pr}$, this observation
indicates that there exists a pair $\rho _{0} \in \mathcal{B}
^{\ast }$, $\rho _{1} \in B ^{\ast }$, such that $\rho ^{r} = \rho
_{0}\rho _{1} ^{pr}$. Putting $\eta _{1} = (\eta \rho _{1} ^{-1})
^{r.r'}$, for a fixed $r' \in \mathbb{N}$ satisfying $r.r' \equiv
1 ({\rm mod} \ p)$, one obtains that $\eta _{1}\xi \eta _{1} ^{-1}
= \varepsilon \xi $ and $\eta _{1} ^{p} = \rho _{0} ^{r'} \in
\mathcal{B}$, which proves (8.1).
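\smallskip\noindent
In more detail, the final equalities rest on the following routine
computation, using that $\xi $ and $\eta $ both commute with every
element of $B = K(\rho )$, and that $r.r' \equiv 1 \ ({\rm mod} \
p)$:
\[
\eta _{1}\xi \eta _{1} ^{-1} = \varepsilon ^{r.r'}\xi =
\varepsilon \xi , \qquad \eta _{1} ^{p} = (\eta ^{p}\rho _{1}
^{-p}) ^{r.r'} = (\rho ^{r}\rho _{1} ^{-pr}) ^{r'} = \rho _{0}
^{r'}.
\]
\par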
\par
Our objective now is to prove the existence of a $K$-subalgebra
$\Delta $ of $R$ with the properties required by Lemma
\ref{lemm8.2}. Let $\mathbb{P} ^{\prime } = \mathbb{P} \setminus
\{{\rm char}(\widehat K), p\}$, and for each $p' \in \mathbb{P}
^{\prime }$, take an extension $V _{p'}$ of $K$ in $K _{\rm tr}$
in accordance with Lemma \ref{lemm7.6}, and put $U _{p'} = V _{p'}
\cap K _{\rm ur}$. Consider a sequence $\Pi _{n}$, $n \in
\mathbb{N}$, of pairwise distinct finite subsets of $\mathbb{P}
^{\prime }$, such that $\cup _{n=1} ^{\infty } \Pi _{n} =
\mathbb{P} ^{\prime }$ and $\Pi _{n} \subset \Pi _{n+1}$, for each
index $n$. Denote by $W _{n}$ the compositum of the fields $V _{p
_{n}}$, $p _{n} \in \Pi _{n}$, and by $R _{n}$ the underlying
division $W _{n}$-algebra of $R \otimes _{K} W _{n}$, for any $n$.
We show that $W _{n}$, $R _{n}$ and the $W _{n}$-algebra $\Theta
_{n} = \Theta \otimes _{K} W _{n}$ satisfy conditions (i) and (ii)
of Lemma \ref{lemm8.2}. It is easily verified that $[W _{n}\colon
K] = \prod _{p _{n} \in \Pi _{n}} [V _{p _{n}}\colon K]$; in
particular, $p \nmid [W _{n}\colon K]$, which ensures that $\Theta
_{n}$ is a field. Moreover, it follows from (4.2) (a) that
$\Theta _{n}/W _{n}$ is a totally ramified extension of degree
$p$. Using the fact that $R \otimes _{K} \Theta _{n}$ is
isomorphic to the $W _{n}$-algebras $(R \otimes _{K} \Theta )
\otimes _{\Theta } \Theta _{n}$ and $(R \otimes _{K} W _{n})
\otimes _{W _{n}} \Theta _{n}$ (cf. \cite{P}, Sect. 9.4,
Corollary~a), one obtains from \cite{Ch3}, Lemma~3.5, and the
uniqueness part of the Wedderburn-Artin theorem, that $\Theta
_{n}$ embeds in $R _{n}$ as a $W _{n}$-subalgebra. Note also that
$W _{n}$ and $R _{n}$ satisfy condition (i) of Lemma
\ref{lemm8.2}; since $p$ and char$(\widehat K)$ do not divide $[W
_{n}\colon K]$, this follows from Lemma \ref{lemm5.1} and
\cite{Ch3}, Lemma~3.5.
\par
\smallskip
The next step towards our proof of the lemma can be stated as
follows:
\par
\medskip\noindent
(8.2) When $n$ is sufficiently large, $R _{n}$ has a $W
_{n}$-subalgebra $\Delta _{n} \in d(W _{n})$, such that
deg$(\Delta _{n}) = p$ and $\Theta _{n}$ embeds in $\Delta _{n}$
as a $W _{n}$-subalgebra.
\par
\medskip\noindent
Our proof of (8.2) relies on Lemma \ref{lemm5.1} and the choice of
the fields $W _{\nu }$, $\nu \in \mathbb{N}$, which indicate that,
for any $\nu $, the degrees $[W _{\nu }(\delta _{\nu })\colon W
_{\nu }]$, $\delta _{\nu } \in R _{\nu }$, are not divisible by
any $p _{\nu } \in \Pi _{\nu }$. Arguing as in the proof of Lemma
\ref{lemm5.5}, given in \cite{Ch3}, Sect.~8, one obtains from
(8.1) the existence of a finite-dimensional $W _{\nu }$-subalgebra
$\Lambda _{\nu }$ of $R _{\nu }$ satisfying the following:
\par
\medskip\noindent
(8.3) (i) The centre $B _{\nu }$ of $\Lambda _{\nu }$ is an
inertial extension of $W _{\nu }$ of degree not divisible by
char$(\widehat K)$, $p$ and any $p _{n} \in \Pi _{n}$; moreover,
by (4.2) (a) and Lemma \ref{lemm4.1} (d), $B _{\nu } = \mathcal{B}
_{\nu }W _{\nu }$ and $[B _{\nu }\colon W _{\nu }] = [\mathcal{B}
_{\nu }\colon \mathcal{W} _{\nu }]$, where $\mathcal{B} _{\nu }$
and $\mathcal{W} _{\nu }$ are the maximal inertial extensions of
$K$ in $B _{\nu }$ and $W _{\nu }$, respectively.
\par
(ii) $\Lambda _{\nu }$ has degree $p$ as an algebra in $d(B _{\nu
})$, $C _{R _{\nu }}(\Lambda _{\nu })$ is a central division $B
_{\nu }$-algebra, and $C _{R _{\nu }}(\Lambda _{\nu })/B _{\nu }$
is of $p$-power zero (see Lemmas \ref{lemm5.1} and \ref{lemm5.3}).
\par
\medskip\noindent
It is easy to see that the field $\mathcal{W} _{\nu }$ defined in
(8.3) (i) equals the compositum of the fields $U _{p _{\nu }}$, $p
_{\nu } \in \Pi _{\nu }$, for each $\nu \in \mathbb{N}$. Let now
$\mathcal{W} _{\nu } ^{\prime }$ be the Galois closure of
$\mathcal{W} _{\nu }$ in $K _{\rm sep}$ over $K$, and let $B _{\nu
} ^{\prime }$ be a $W _{\nu }$-isomorphic copy of $B _{\nu }$ in
$K _{\rm sep}$. Then it follows from Galois theory and Lemma
\ref{lemm7.6} (cc) that $\mathcal{G}(\mathcal{W} _{\nu } ^{\prime
}/K)$ decomposes into a direct product of finite $p _{\nu
}$-groups, indexed by $\Pi _{\nu }$; hence, $p \nmid [\mathcal{W}
_{\nu } ^{\prime }\colon K]$, and by the Burnside-Wielandt
theorem, $\mathcal{G}(\mathcal{W} _{\nu } ^{\prime }/K)$ is
nilpotent, for every $\nu $. Next, observing that, by the same
theorem, maximal subgroups of nilpotent finite groups are normal
of prime indices, and using Galois theory and (8.3) (i), one
obtains that:
\par
\medskip\noindent
(8.4) For any pair of indices $\nu , \nu '$ with $\nu < \nu '$, $B
_{\nu } ^{\prime }W _{\nu '}/W _{\nu '}$ is a field extension of
degree dividing $[B _{\nu } ^{\prime }\colon W _{\nu }] = [B _{\nu
}\colon W _{\nu }]$.
\par
\medskip\noindent
It is clear from (8.3) (i) and the assumptions on $\Pi _{\nu }$,
$\nu \in \mathbb{N}$, that there exists an index $\nu _{0}$, such
that all prime divisors of $[B _{\nu }\colon W _{\nu }]$ are
greater than $p$, for each $\nu > \nu _{0}$. Similarly, for any
$\nu $, one can find $\xi (\nu ) \in \mathbb{N}$ satisfying the
condition $\gcd \{[B _{\nu '}\colon W _{\nu '}], [B _{\nu }\colon
W _{\nu }]\} = 1$ whenever $\nu ' \in \mathbb{N}$ and $\nu ' > \nu
+ \xi (\nu )$. Thus it follows that, for each pair $\nu , n \in
\mathbb{N}$ with $\nu _{0} < \nu < \xi (\nu ) < n - \nu $, we have
$\gcd \{[B _{\nu }\colon W _{\nu }], [B _{n}\colon W _{n}]\} = 1$
and $\gcd \{[B _{\nu }\colon W _{\nu }][B _{n}\colon W _{n}],
\tilde p\} = 1$, for every $\tilde p \in \mathbb{P}$ less than or
equal to $p$.
\par
\smallskip
We show that $R _{n}$ possesses a central $W _{n}$-subalgebra
$\Delta _{n}$ with the properties required by (8.2). Take a
generator $\varphi $ of $\mathcal{G}(\Theta /K)$, and for each
$\xi \in \mathbb{N}$, let $\varphi _{\xi }$ be the unique $W _{\xi
}$-automorphism of $\Theta _{\xi }$ extending $\varphi $. Fix an
embedding $\psi _{\xi }$ of $B _{\xi }$ in $K _{\rm sep}$ as a $W
_{\xi }$-algebra, and denote by $B _{\xi } ^{\prime }$ the image
of $B _{\xi }$ under $\psi _{\xi }$. Clearly, $\psi _{\xi }$ gives
rise to a canonical bijection of $s(B _{\xi })$ upon $s(B _{\xi }
^{\prime })$, which in turn induces an isomorphism $\psi _{\xi
}'\colon {\rm Br}(B _{\xi }) \to {\rm Br}(B _{\xi } ^{\prime })$.
Denote by $\Sigma _{\xi }$ and $\Sigma _{\xi } ^{\prime }$ the
underlying division algebras of $R _{\xi } \otimes _{W _{\xi }} B
_{\xi }$ and $R _{\xi } \otimes _{W _{\xi }} B _{\xi } ^{\prime
}$, respectively, and let $\widetilde B _{\xi }$ be a $W _{\xi
}$-isomorphic copy of $B _{\xi }$ in the full matrix $W _{\xi
}$-algebra $M _{b _{\xi }}(W _{\xi })$, where $b _{\xi } = [B
_{\xi }\colon W _{\xi }]$. Using the fact that $M _{b _{\xi }}(R
_{\xi }) \cong M _{b _{\xi }}(W _{\xi }) \otimes _{W _{\xi }} R
_{\xi }$ over $W _{\xi }$, and applying the Skolem-Noether theorem
to $B _{\xi }$ and $\widetilde B _{\xi }$, one obtains that $R
_{\xi } \otimes _{W _{\xi }} B _{\xi }$ and $M _{b _{\xi }}(C _{R
_{\xi }}(B _{\xi }))$ are isomorphic as $B _{\xi }$-algebras.
Hence, by the Wedderburn-Artin theorem, so are $\Sigma _{\xi }$
and $C _{R _{\xi }}(B _{\xi })$. These observations allow us to
identify the $B _{\xi } ^{\prime }$-algebras $R _{\xi } \otimes
_{W _{\xi }} B _{\xi } ^{\prime }$ and $M _{b _{\xi }}(\Sigma
_{\xi } ^{\prime })$ and to prove the following fact:
\par
\medskip\noindent
(8.5) There exists a $W _{\xi }$-isomorphism $\tilde \psi _{\xi
}\colon M _{b _{\xi }}(C _{R _{\xi }}(B _{\xi })) \to (R _{\xi }
\otimes _{W _{\xi }} B _{\xi } ^{\prime })$, which extends $\psi
_{\xi }$ and maps $C _{R _{\xi }}(B _{\xi })$ upon $\Sigma _{\xi }
^{\prime }$. The image $\Lambda _{\xi } ^{\prime }$ of $\Lambda
_{\xi }$ under $\tilde \psi _{\xi }$ is a central $B _{\xi }
^{\prime }$-subalgebra of $\Sigma _{\xi } ^{\prime }$ of degree
$p$, which is a representative of the equivalence class $\psi
_{\xi }'([\Lambda _{\xi }]) \in {\rm Br}(B _{\xi } ^{\prime })$.
\par
\medskip\noindent
Now fix a pair $\nu , n$ so that $\nu _{0} < \nu < \xi (\nu ) < n
- \nu $. Retaining notation as in (8.5), we turn to the proof of
the following assertion:
\par
\medskip\noindent
(8.6) The tensor products $\Lambda _{\nu } ^{\prime } \otimes _{B
_{\nu }'} (B _{\nu } ^{\prime }B _{n} ^{\prime })$, $(\Lambda
_{\nu } ^{\prime } \otimes _{B _{\nu }'} B _{\nu } ^{\prime }W
_{n}) \otimes _{B _{\nu }'W _{n}} (B _{\nu } ^{\prime }B _{n}
^{\prime })$ and $\Lambda _{n} ^{\prime } \otimes _{B _{n}'} (B
_{\nu } ^{\prime }B _{n} ^{\prime })$ are isomorphic central
division $B _{\nu } ^{\prime }B _{n} ^{\prime }$-algebras.
\par
\medskip\noindent
The statement that $\Lambda _{\nu } ^{\prime } \otimes _{B _{\nu
}'} (B _{\nu } ^{\prime }B _{n} ^{\prime }) \cong (\Lambda _{\nu }
^{\prime } \otimes _{B _{\nu }'} B _{\nu } ^{\prime }W _{n})
\otimes _{B _{\nu }'W _{n}} (B _{\nu } ^{\prime }B _{n} ^{\prime
})$ as $B _{\nu } ^{\prime }B _{n} ^{\prime }$-algebras is known
(cf. \cite{P}, Sect. 9.4, Corollary~a), so it suffices to show
that $\Lambda _{n} ^{\prime } \otimes _{B _{n}'} (B _{\nu }
^{\prime }B _{n} ^{\prime }) \cong (\Lambda _{\nu } ^{\prime }
\otimes _{B _{\nu }'} B _{\nu } ^{\prime }W _{n}) \otimes _{B
_{\nu }'W _{n}} (B _{\nu } ^{\prime }B _{n} ^{\prime })$ over $B
_{\nu } ^{\prime }B _{n} ^{\prime }$. Denote by $\Sigma _{\nu ,n}$
and $\Sigma _{\nu ,n} ^{\prime }$ the underlying division $B _{\nu
} ^{\prime }B _{n} ^{\prime }$-algebras of $\Sigma _{\nu }
^{\prime } \otimes _{B _{\nu }'} (B _{\nu } ^{\prime }B _{n}
^{\prime })$ and $\Sigma _{n} ^{\prime } \otimes _{B _{n}'} (B
_{n} ^{\prime }B _{\nu } ^{\prime })$, respectively. Using Lemma
\ref{lemm5.8}, one obtains that $\Sigma _{\nu ,n}$ and $\Sigma
_{\nu ,n} ^{\prime }$ are isomorphic to the underlying division $B
_{\nu } ^{\prime }B _{n} ^{\prime }$-algebra of $R \otimes _{K} B
_{\nu } ^{\prime }B _{n} ^{\prime }$. Note also that $p \nmid [(B
_{\nu } ^{\prime }B _{n} ^{\prime })\colon K]$; since $[B _{\nu }
^{\prime }B _{n} ^{\prime }\colon K] = [B _{\nu } ^{\prime }B _{n}
^{\prime }\colon W _{n}].[W _{n}\colon K]$ and $B _{\nu } ^{\prime
}B _{n} ^{\prime } = B _{\nu } ^{\prime }W _{n}.B _{n} ^{\prime
}$, the assertion follows from (8.3) (i), (8.4) and the fact that
$p \nmid [W _{n}\colon K]$. In view of (8.3) (ii) and \cite{Ch3},
Lemma~3.5, this ensures that $\Lambda _{\nu } ^{\prime } \otimes
_{B _{\nu }'} (B _{\nu } ^{\prime }B _{n} ^{\prime })$ and
$\Lambda _{n} ^{\prime } \otimes _{B _{n}'} (B _{\nu } ^{\prime }B
_{n} ^{\prime })$ are central division $B _{\nu } ^{\prime }B _{n}
^{\prime }$-algebras which are embeddable in $\Sigma _{\nu ,n}
^{\prime }$ as $B _{\nu } ^{\prime }B _{n} ^{\prime
}$-subalgebras. At the same time, it follows from Lemma
\ref{lemm5.1} and the observation on $[B _{\nu } ^{\prime }B _{n}
^{\prime }\colon K]$ and $p$ that $\Sigma _{\nu ,n} ^{\prime }/(B
_{\nu } ^{\prime }B _{n} ^{\prime })$ is of $p$-power one. Now the
proof of (8.6) is completed by applying Lemma \ref{lemm5.8}.
Since, by (8.4) and the choice of the indices $\nu , n$, we have
$\gcd \{[B _{\nu } ^{\prime }W _{n}\colon W _{n}], [B _{n}
^{\prime }\colon W _{n}]\} = 1$, statement (8.6) and Lemma
\ref{lemm5.9} imply the following:
\par
\medskip\noindent
(8.7) There exists $\Delta _{n} \in d(W _{n})$, such that $\Delta
_{n} \otimes _{W _{n}} B _{n} ^{\prime } \cong \Lambda _{n}
^{\prime }$ and $\Delta _{n} \otimes _{W _{n}} (B _{\nu } ^{\prime
}W _{n}) \cong \Lambda _{\nu } ^{\prime } \otimes _{B _{\nu }'} (B
_{\nu } ^{\prime }W _{n})$ (over $B _{n} ^{\prime }$ and $B _{\nu
} ^{\prime }W _{n}$, respectively).
\par
\medskip\noindent
It is clear from (8.7) and the $W _{n}$-isomorphism $B _{n} \cong
B _{n} ^{\prime }$ that the $B _{n}$-algebras $\Delta _{n} \otimes
_{W _{n}} B _{n}$ and $\Lambda _{n}$ are isomorphic, which proves
(8.2). Applying (8.1), one obtains that $\Delta _{n,0} \otimes
_{\mathcal{W} _{n}} W _{n} \cong \Delta _{n}$ as $W
_{n}$-algebras, for some $\Delta _{n,0} \in d(\mathcal{W} _{n})$
(here $\mathcal{W} _{n} = K _{\rm ur} \cap W _{n}$). Since
$\mathcal{G}(\mathcal{W} _{n} ^{\prime }/K)$ is nilpotent and $p
\nmid [\mathcal{W} _{n} ^{\prime }\colon K]$ (see the observations
proving (8.4)), this allows us to deduce the former assertion of
Lemma \ref{lemm8.2} from Lemma \ref{lemm8.1}, in case $\varepsilon
\in K$.
\par
\smallskip
Let now $\varepsilon \nuotin K$, $[K(\varepsilon )\colon K] = m$,
and $R _{\varepsilon }$ be the underlying division algebra of the
central simple $K(\varepsilon )$-algebra $R {\mathord{\,\otimes }\,}imes _{K}
K(\varepsilon )$. Then $K(\varepsilon )/K$ is a cyclic field
extension and $m \muid p - 1$, which implies $\Theta (\varepsilon
)/K(\varepsilon )$ is a totally ramified Kummer extension of
degree $p$. Observing also that $R _{\varepsilon }$ is a central
LBD-algebra over $K(\varepsilon )$, one obtains that $\Theta
(\varepsilon )$ embeds in $R _{\varepsilon }$ as a $K(\varepsilon
)$-subalgebra. At the same time, it follows from Lemma
\ref{lemm5.1} that the $p$-power $k(p)_{\varepsilon }$ of $R
_{\varepsilon }/K(\varepsilon )$ is less than $2$, i.e. $p ^{2}
\numid [K(\varepsilon , \deltaelta ')\colon K(\varepsilon )]$, for any
$\deltaelta ' \in R _{\varepsilon }$. Hence, $k(p)_{\varepsilon } =
1$, and by the already considered special case of our lemma, $R
_{\varepsilon }$ possesses a central $K(\varepsilon )$-subalgebra
$\Delta _{\varepsilon }$, such that deg$(\Delta _{\varepsilon }) =
p$ and there exists a $K(\varepsilon )$-subalgebra of $\Delta
_{\varepsilon }$ isomorphic to $\Theta (\varepsilon )$. Let now
$\varphi $ be a generator of $\muathcal{G}(K(\varepsilon )/K)$.
Then $\varphi $ extends to an automorphism $\betaar \varphi $ of $R
_{\varepsilon }$ (as a $K$-algebra), so Lemma \ref{lemm5.4}
ensures that $\Delta _{\varepsilon }$ is $K(\varepsilon
)$-isomorphic to its image under $\betaar \varphi $. Together with
the Skolem-Noether theorem, this shows that $\betaar \varphi $ can be
chosen so that $\betaar \varphi (\Delta _{\varepsilon }) = \Delta
_{\varepsilon }$. Now it follows from Teichm\"{u}ller's theorem
(and the equality $\gammacd \{m, p\} = 1$) that there is a
$K(\varepsilon )$-isomorphism $\Delta _{\varepsilon } \cong \Delta
{\mathord{\,\otimes }\,}imes _{K} K(\varepsilon )$, for some $\Delta \in d(K)$ with
deg$(\Delta ) = p$. Moreover, it can be deduced from \cite{Ch3},
Lemma~3.5, that $\Delta $ is isomorphic to a $K$-subalgebra of
$R$, which in turn has a $K$-subalgebra isomorphic to $\Theta $.
Hence, by Albert's criterion (see \cite{P}, Sect. 15.3), $\Delta $
is a cyclic $K$-algebra. Observe finally that cyclic degree $p$
extensions of $K$ are inertial. Since $p \nueq {\rm char}(\widehat
K)$ and $\varepsilon \nuotin K$, this is implied by (4.2) (a) and
Lemma \ref{lemm7.1}, so Lemma \ref{lemm8.2} is proved.
\varepsilonnd{proof}
\par
\smallskip
The main lemma of the present Section can be stated as follows:
\par
\medskip
\begin{lemm}
\label{lemm8.3} Assume that $(K, v)$ is a Henselian field with
$\widehat K$ of arithmetic type, {\rm dim}$(K _{\rm sol}) \le 1$,
and {\rm abrd}$_{p}(K) < \infty $, $p \in \mathbb{P}$, and let $R$
be a central division {\rm LBD}-algebra over $K$, such that {\rm
char}$(\widehat K) \nmid [K(\delta )\colon K]$, for any $\delta
\in R$. Then, for any $p \in \mathbb{P}$ not equal to {\rm
char}$(\widehat K)$, there exists a $p$-splitting field $E _{p}$
of $R/K$, that is included in $K(p)$.
\end{lemm}
\par
\smallskip
\begin{proof}
Fix an arbitrary $p \in \mathbb{P} \setminus \{{\rm char}(\widehat
K)\}$, take a primitive $p$-th root of unity $\varepsilon =
\varepsilon _{p}$ in $K _{\rm sep}$, suppose that $T _{p}(K)$ is
defined as in Lemma \ref{lemm4.1} (c), and put $V _{p}(K) = K
_{0}(p).T _{p}(K)$, where $K _{0}(p) = K(p) \cap K _{\rm ur}$. For
each $z \in \mathbb{P}$, denote by $k(z)$ the $z$-power of $R/K$,
and let $\ell $ be the minimal integer $\ell (p) \ge 0$, for which
there exists an extension $V _{p}$ of $K$ in $V _{p}(K)$
satisfying conditions (c) and (cc) of Lemma \ref{lemm7.6}. As
shown in the proof of Lemma \ref{lemm7.4}, $K(p) = V _{p}(K)$ if
$\varepsilon \in K$ or $v(K) = pv(K)$, and $K(p) = K _{0}(p)$,
otherwise. In the former case, $V _{p}$ clearly has the properties
claimed by Lemma \ref{lemm8.3}, so we suppose, for the rest of our
proof, that $\varepsilon \notin K$, $v(K) \neq pv(K)$ and $V
_{p}/K$ is chosen so that $[V _{p}\colon K] = p ^{\ell }$ and the
ramification index $e(V _{p}/K)$ be minimal. Let $E _{p}$ be the
maximal inertial extension of $K$ in $V _{p}$. Then it follows
from Lemma \ref{lemm4.1}~(d) and the inequality $p \neq {\rm
char}(\widehat K)$ that $\widehat E _{p} = \widehat V _{p}$; using
also (4.2) (a), one sees that $V _{p}/E _{p}$ is totally ramified
and $[V _{p}\colon E _{p}] = e(V _{p}/K)$. Note further that $E
_{p} \subseteq K _{0}(p)$, by Lemma \ref{lemm7.6}, so it suffices
for the proof of Lemma \ref{lemm8.3} to show that $V _{p} = E
_{p}$ (i.e. $e(V _{p}/K) = 1$). Assuming the opposite and using
Lemma \ref{lemm7.3}, with its proof, one obtains that there is an
extension $\Sigma $ of $E _{p}$ in $V _{p}$, such that $[\Sigma
\colon K] = p ^{\ell -1}$.
\par
\smallskip
The main step towards the proof of Lemma \ref{lemm8.3} is to show
that $p$, the underlying division $\Sigma $-algebra $R _{\Sigma }$
of $R \otimes _{K} \Sigma $, and the field extension $V
_{p}/\Sigma $ satisfy the conditions of Lemma \ref{lemm8.2}. Our
argument relies on the assumption that dim$(K _{\rm sol}) \le 1$.
In view of Lemma \ref{lemm5.1}, it guarantees that, for each $z
\in \mathbb{P} \setminus \{p\}$, $k(z)$ is the $z$-power of $R
_{\Sigma }/\Sigma $. Thus it turns out that char$(\widehat K)
\nmid [\Sigma (\rho ')\colon \Sigma ]$, for any $\rho ' \in R
_{\Sigma }$. At the same time, it follows from the
Wedderburn-Artin theorem and the choice of $V _{p}$ and $\Sigma $
that there exist isomorphisms $R \otimes _{K} \Sigma \cong M
_{\gamma }(R _{\Sigma })$ and $R \otimes _{K} V _{p} \cong M
_{\gamma '}(R _{V_{p}})$ (as algebras over $\Sigma $ and $V _{p}$,
respectively), where $\gamma '= p ^{k(p)}$, $\gamma \mid p
^{k(p)-1}$ and $R _{V_{p}}$ is the underlying division $V
_{p}$-algebra of $R \otimes _{K} V _{p}$. Note further that the
$\Sigma $-algebras $M _{\gamma }(R _{\Sigma })$ and $M _{\gamma
}(\Sigma ) \otimes _{\Sigma } R _{\Sigma }$ are isomorphic, which
enables one to deduce from the existence of a $V _{p}$-isomorphism
$R \otimes _{K} V _{p} \cong (R \otimes _{K} \Sigma ) \otimes
_{\Sigma } V _{p}$ (cf. \cite{P}, Sect. 9.4, Corollary~a) that $M
_{\gamma '}(R _{V_{p}}) \cong M _{\gamma }(V _{p}) \otimes
_{V_{p}} (R _{\Sigma } \otimes _{\Sigma } V _{p})$ as $V
_{p}$-algebras; hence, by Wedderburn-Artin's theorem and the
inequality $\gamma < \gamma '$, $R _{\Sigma } \otimes _{\Sigma } V
_{p}$ is not a division algebra. This, combined with \cite{Ch3},
Lemma~3.5, and the equality $[V _{p}\colon \Sigma ] = p$, proves
that $R _{\Sigma } \otimes _{\Sigma } V _{p} \cong M _{p}(R
_{V_{p}} ^{\prime })$, for some central division $V _{p}$-algebra
$R _{V_{p}} ^{\prime }$ (which means that $V _{p}$ is embeddable
in $R _{\Sigma }$ as a $\Sigma $-subalgebra). It is now easy to
see that
$$M _{\gamma '}(R _{V_{p}}) \cong M _{\gamma
}(V _{p}) \otimes (M _{p}(V _{p}) \otimes _{V _{p}} R _{V_{p}}
^{\prime }) \cong (M _{\gamma }(V _{p}) \otimes _{V _{p}} M _{p}(V
_{p})) \otimes _{V_{p}} R _{V_{p}} ^{\prime }$$ $$\cong M _{\gamma
p}(V _{p}) \otimes _{V _{p}} R _{V_{p}} ^{\prime } \cong M
_{p\gamma }(R _{V _{p}} ^{\prime }).$$
Using Wedderburn-Artin's theorem, one obtains that $\gamma =
\gamma '/p = p ^{k(p)-1}$
and $R _{V_{p}} \cong R _{V_{p}} ^{\prime }$ over $V _{p}$.
Therefore, by Lemma \ref{lemm5.1}, $p ^{2} \nmid [\Sigma (\rho
')\colon \Sigma ]$, for any $\rho ' \in R _{\Sigma }$, which
completes the proof of the fact that $p$, $R _{\Sigma }$ and $V
_{p}/\Sigma $ satisfy the conditions of Lemma \ref{lemm8.2}.
Furthermore, it follows that a finite extension of $\Sigma $ is a
$p$-splitting field of $R _{\Sigma }/\Sigma $ if and only if it is
such a field for $R/K$.
\par
\smallskip
We are now in a position to complete the proof of Lemma
\ref{lemm8.3} in the case where $\varepsilon \notin K$ and $v(K)
\neq pv(K)$. By Lemma \ref{lemm8.2}, there exists a central
$\Sigma $-subalgebra $\Delta $ of $R _{\Sigma }$, such that
deg$(\Delta ) = p$ and $V _{p}$ is embeddable in $\Delta $ as a
$\Sigma $-subalgebra; hence, by \cite{He}, Theorem~4.4.2, $R
_{\Sigma } = \Delta \otimes _{\Sigma } C(\Delta )$, where
$C(\Delta )$ is the centralizer of $\Delta $ in $R _{\Sigma }$. In
addition, $C(\Delta )$ is a central division $\Sigma $-algebra,
and since $p ^{2} \nmid [\Sigma (\rho ')\colon \Sigma ]$, for any
$\rho ' \in R _{\Sigma }$, it follows from Lemma \ref{lemm5.3}
that $p \nmid [\Sigma (c)\colon \Sigma ]$, for any $c \in C(\Delta
)$. Note also that $\gcd \{[K(\varepsilon )\colon K], [\Sigma
\colon K]\} = 1$, whence, $K(\varepsilon ) \cap \Sigma = K$ and
$\varepsilon \notin \Sigma $. Therefore, Lemma \ref{lemm8.2}
requires the existence of a degree $p$ cyclic extension $\Sigma
^{\prime }$ of $\Sigma $ in $K _{\rm sep}$, which is inertial over
$\Sigma $ (by Lemma \ref{lemm7.1}) and embeds in $\Delta $ as a
$\Sigma $-subalgebra. This implies $\Sigma ^{\prime }$ is a
$p$-splitting field of $R _{\Sigma }/\Sigma $ and $R/K$ (see Lemma
\ref{lemm5.3} (c) and \cite{P}, Lemma~13.4 and Corollary~13.4),
$[\Sigma ^{\prime }\colon K] = p ^{\ell }$, $e(\Sigma ^{\prime
}/K) = e(V _{p}/K)/p$, and $\widehat \Sigma ^{\prime }/\widehat
\Sigma $ is a cyclic extension of degree $p$. Taking finally into
consideration that $\widehat \Sigma \in I(\widehat K(p)/\widehat
K)$, and using Lemma \ref{lemm4.1}, one obtains consecutively that
$\widehat \Sigma ^{\prime } \in I(\widehat K(p)/\widehat K)$ and
$E _{p}$ has a degree $p$ extension $E ^{\prime }$ in $\Sigma
^{\prime } \cap K _{0}(p)$. It is now easy to see that $\Sigma
^{\prime } = E ^{\prime }\Sigma $ and $\Sigma ^{\prime } \in I(V
_{p}(K)/K)$. The obtained properties of $\Sigma ^{\prime }$ show
that it satisfies conditions (c) and (cc) of Lemma \ref{lemm7.6}.
As $e(\Sigma ^{\prime }/K) < e(V _{p}/K)$, this contradicts our
choice of $V _{p}$ and thereby yields $e(V _{p}/K) = 1$, i.e. $V
_{p} = E _{p}$, so Lemma \ref{lemm8.3} is proved.
\end{proof}
\par
\medskip
\section{\bf Proof of Lemma \ref{lemm3.3} and the main results}
\par
\medskip
We begin this Section with a lemma which shows how to deduce Lemma
\ref{lemm3.3} in general from its validity in the case where $q >
0 = k(q)$ ($q$ is defined at the beginning of Section 8, and
$k(q)$ is the $q$-power of $R/K$).
\par
\medskip
\begin{lemm}
\label{lemm9.1} Let $(K, v)$ be a Henselian field with $\widehat
K$ of arithmetic type,
\par\noindent
{\rm char}$(\widehat K) = q$, {\rm dim}$(K _{\rm sol}) \le 1$ and
{\rm abrd}$_{p}(K) < \infty $, $p \in \mathbb{P}$. Put $\mathbb{P}
^{\prime } = \mathbb{P} \setminus \{q\}$, take a central division
{\rm LBD}-algebra $R$ over $K$, and in case $q > 0$, assume that
$K$ has an extension $E _{q}$ in $K(q)$ that is a $q$-splitting
field of $R/K$. Then, for each $p \in \mathbb{P} ^{\prime }$,
there is a $p$-splitting field $E _{p}$ of $R/K$, lying in
$I(K(p)/K)$.
\end{lemm}
\par
\medskip
\begin{proof}
Our assertion is contained in Lemma \ref{lemm8.3} if $q = 0$, so
we assume that $q > 0$. Let $\mathcal{R} _{q}$ be the underlying
division $E _{q}$-algebra of $R \otimes _{K} E _{q}$, and for each
$p \in \mathbb{P}$, let $k(p)'$ be the $p$-power of $\mathcal{R}
_{q}/E _{q}$. Lemma \ref{lemm5.1} (c) and the assumption on $E
_{q}$ ensure that $k(q)' = 0$, and $k(p)'$ equals the $p$-power of
$R/K$ whenever $p \in \mathbb{P} ^{\prime }$. Therefore, by Lemma
\ref{lemm8.3}, for each $p \in \mathbb{P} ^{\prime }$, there is an
extension $E _{p} ^{\prime }$ of $E _{q}$ in $E _{q}(p)$, which is
a $p$-splitting field of $\mathcal{R} _{q}/E _{q}$. This enables
one to deduce from Lemmas \ref{lemm5.5} and \ref{lemm5.6} that
there exist $E _{q}$-algebras $\Delta _{p} ^{\prime } \in d(E
_{q})$, $p \in \mathbb{P} ^{\prime }$, embeddable in $\mathcal{R}
_{q}$, and such that deg$(\Delta _{p} ^{\prime }) = p ^{k(p)'}$,
for every $p \in \mathbb{P} ^{\prime }$. Hence, by Lemma
\ref{lemm8.1}, $R$ possesses central $K$-subalgebras $\Delta _{p}
\in d(K)$, $p \in \mathbb{P} ^{\prime }$, with $\Delta _{p}
\otimes _{K} E _{q} \cong \Delta _{p} ^{\prime }$ as $E
_{q}$-algebras, for each index $p$. In view of Lemmas
\ref{lemm4.3} and \ref{lemm5.3} (c), this proves Lemma
\ref{lemm9.1}.
\end{proof}
\par
\medskip
We are now prepared to complete the proof of Lemma \ref{lemm3.3}
in general, and thereby to prove Theorems \ref{theo3.1} and
\ref{theo3.2}. If $(K, v)$ and $\widehat K$ satisfy the conditions
of Theorem \ref{theo3.2}, then the conclusion of Lemma
\ref{lemm3.3} follows from Theorem \ref{theo6.3} and Lemma
\ref{lemm9.1}. As noted in Remark \ref{rema5.7}, this leads to a
proof of Theorem \ref{theo3.2}.
\par
\medskip
\begin{rema}
\label{rema9.2} Let $(K, v)$ be an HDV-field with $\widehat K$ of
arithmetic type and virtually perfect, and let $R$ be a central
division {\rm LBD}-algebra over $K$. Then it follows from Theorem
\ref{theo6.3} and Lemma \ref{lemm9.1} that, for each $p \in
\mathbb{P}$, there exists a finite extension $E _{p}$ of $K$ in
$K(p)$, which is a $p$-splitting field of $R/K$. Therefore, as in
Remark \ref{rema5.7}, one concludes that $R$ has a central
$K$-subalgebra $\widetilde R$ subject to the restrictions of
Conjecture \ref{conj1.1}. This proves Theorem \ref{theo3.1} in
case $m = 1$.
\end{rema}
\par
\medskip
In the rest of the proof of Lemma \ref{lemm3.3}, we assume that $m
\ge 2$, $K = K _{m}$ is a complete $m$-discretely valued field
whose $m$-th residue field $K _{0}$ is virtually perfect of
characteristic $q$ and arithmetic type, $v$ is the standard
Henselian $\mathbb{Z} ^{m}$-valued valuation of $K _{m}$ with
$\widehat K _{m} = K _{0}$, and $K _{m-m'}$ is the $m'$-th residue
field of $K _{m}$, for $m' = 1, \dots , m$. Recall that $K
_{m-m'+1}$ is complete with respect to a discrete valuation $w
_{m'-1}$ with a residue field $K _{m-m'}$, for each $m'$, and $v$
equals the composite valuation $w _{m-1} \circ \dots \circ w
_{0}$. Considering $(K, v)$ and $(K, w _{0})$, and arguing as in
the proof of Theorem \ref{theo3.2}, one obtains that the
conclusions of Theorem \ref{theo3.1} and Lemma \ref{lemm3.3} hold
if either $q = 0$ or char$(K _{m-1}) = q > 0$. It remains for us
to prove Theorem \ref{theo3.1} under the hypothesis that $q
> 0$ and char$(K _{m-1}) = 0$ (so char$(K _{m}) = 0$). Denote by
$\mu $ the maximal index for which char$(K _{m-\mu }) = 0$, fix a
primitive $q$-th root of unity $\varepsilon \in K _{\rm sep}$, put
$K ^{\prime } = K(\varepsilon )$, $v'= v _{K'}$, and denote by $R
^{\prime }$ the underlying division $K ^{\prime }$-algebra of $R
\otimes _{K} K ^{\prime }$. It is clear from Lemma \ref{lemm7.4},
applied to $R ^{\prime }$ and $(K ^{\prime }, v')$, that $R
^{\prime }$ satisfies the condition of Lemma \ref{lemm9.1},
whence, for each $p \in \mathbb{P}$, there exists a finite
extension $E _{p} ^{\prime }$ of $K ^{\prime }$ in $K ^{\prime
}(p)$, which is a $p$-splitting field of $R ^{\prime }/K ^{\prime
}$. Similarly to the proof of Lemma \ref{lemm9.1}, this allows us
to show that, for each $p \in \mathbb{P}$, $R ^{\prime }$ possesses a
$K ^{\prime }$-subalgebra $\Delta _{p} ^{\prime } \in d(K ^{\prime
})$ of degree $p ^{k(p)'}$, where $k(p)'$ is the $p$-power of $R
^{\prime }/K ^{\prime }$. Using the fact that $K ^{\prime }/K$ is
a cyclic field extension with $[K ^{\prime }\colon K] \mid q - 1$,
and applying Lemma \ref{lemm8.1} to $K ^{\prime }$, $R ^{\prime }$
and $\Delta _{q} ^{\prime }$, one concludes that $R$ has a
$K$-subalgebra $\Delta _{q} \in d(K)$, such that $\Delta _{q}
\otimes _{K} K ^{\prime } \cong \Delta _{q} ^{\prime }$ as a $K
^{\prime }$-subalgebra. Obviously, deg$(\Delta _{q}) = q
^{k(q)'}$, and it follows from Lemma \ref{lemm5.1} and the
divisibility $[K ^{\prime }\colon K] \mid q - 1$ that $k(q)'$
equals the $q$-power $k(q)$ of $R/K$. Observe now that abrd$_{q}(K
_{m-\mu }(q)) \le 1$ and abrd$_{q}(K _{m-\mu }) < \infty $. As
char$(K _{m-\mu -1}) = q$, the former inequality is implied by
Theorem \ref{theo6.3} and the fact that $(K _{m-\mu }, w _{\mu })$
is an HDV-field with a residue field $K _{m-\mu -1}$. The latter
one can be deduced from \cite{PS}, Corollary~2.5, since
$(K _{m-\mu }, w _{\mu })$ is complete and $[K _{m-\mu -1}\colon K
_{m-\mu -1} ^{q}] < \infty $. Note also that the
composite valuation $\kappa _{\mu } = w _{\mu -1} \circ \dots \circ w
_{0}$ of $K$ is Henselian with a residue
field $K _{m-\mu }$ and $\kappa _{\mu }(K) \cong \mathbb{Z} ^{\mu
}$. Hence, by Lemma \ref{lemm4.4}, abrd$_{q}(K _{m}) < \infty $.
Applying finally Lemma \ref{lemm4.3} to $\Delta _{q}/K$ and
$\kappa _{\mu }$, as well as Lemma \ref{lemm5.3} to $R$ and
$\Delta _{q}$, one concludes that $(K, v)$, $q$ and $R/K$ satisfy
the conditions of Lemma \ref{lemm9.1}. Therefore, for each $p \in
\mathbb{P}$, $K$ has a finite extension $E _{p}$ in $K(p)$, which
is a $p$-splitting field of $R/K$. Thus Lemma \ref{lemm3.3} is
proved. As explained in Remark \ref{rema5.7}, this enables one to
complete the proof of Theorem \ref{theo3.1}.
\muedskip
Note finally that, in the setting of Conjecture \ref{conj1.1}, it
is unknown whether there exists a sequence $E _{p}$, $p \in
\muathbb{P}$, of $p$-splitting fields of $R/K$, such that
\par\nuoindent
$E _{p} \sigmaubseteq K(p)$, for each $p$. In view of Proposition
\ref{prop2.1} and \cite{Me}, Conjecture~1 (see also the end of
\cite{P}, Ch. 15), and since Questions~2.3~(a) and (b) are open,
the answer is affirmative in all presently known cases. When $R$
is an LFD-algebra and $K$ contains a primitive $p$-th root of
unity, for every $p \in \muathbb{P}$, $p \nueq {\rm char}(K)$, such
an answer follows from Proposition \ref{prop2.1}, combined with
\cite{A1}, Ch. VII, Theorem~28, and the Merkur'ev-Suslin theorem
\cite{MS}, (16.1) (see also \cite{GiSz}, Theorem~9.1.4 and Ch. 8,
respectively). This supports the idea to make further progress in
the study on Conjecture \ref{conj1.1}, by finding a generalization
of Lemma \ref{lemm3.3} for more fields $K$ with abrd$_{p}(K) <
\infty $, $p \in \muathbb{P}$, than those singled out by Theorems
\ref{theo3.1} and \ref{theo3.2}. To conclude with, it would surely
be of interest to learn whether a proof of Conjecture
\ref{conj1.1}, for a field $K$ admissible by Proposition
\ref{prop2.1}, could lead to an answer to Question~2.3~(b), for
central division LBD-algebras over $K$.
\par
\muedskip
\varepsilonmph{Acknowledgement.} I am grateful to A.A. Panchishkin, P.N.
Siderov (1952-2016), A.V. Mikhal\"{e}v (1939-2022), V.N. Latyshev
(1934-2020), N.I. Dubrovin, and V.I. Yanchevskij for the useful
discussions on a number of aspects of valuation theory and
associative simple rings concerning the topic of this paper, which
stimulated my work on its earliest draft in \cite{Ch2}. The present
research has been partially supported by Grant KP-06 N 32/1 of the
Bulgarian National Science Fund.
\par
\vskip0.04truecm
\begin{thebibliography}{aa}
\bibitem{A1} A.A. Albert, \emph{Structure of Algebras}, Amer.
Math. Soc. Colloq. Publ., XXIV, 1939.
\bibitem{Am} S.A. Amitsur, \emph{Algebras over infinite fields},
Proc. Amer. Math. Soc. 7 (1956), No. 1, 35-48. DOI:
https://doi.org/10.1090/S0002-9939-1956-0075933-2
\bibitem{BH} N. Bhaskhar, B. Haase, \emph{Brauer $p$-dimension of
complete discretely valued fields}, Trans. Amer. Math. Soc. 373
(2020), No. 5, 3709-3732. DOI: https://doi.org/10.1090/tran/8038
\bibitem{BlKu} A. Blaszczok, F.-V. Kuhlmann, \emph{Algebraic
independence of elements in valued fields}, J. Algebra 425 (2015),
179-214. DOI: https://doi.org/10.1016/j.jalgebra.2014.10.050
\bibitem{Br} E. Brussel, \emph{Hasse invariant for the tame Brauer
group of a higher local field}, Trans. Amer. Math. Soc., Ser. B 9
(2022), 258-287. DOI: https://doi.org/10.1090/btran/107
\bibitem{Ch1} I.D. Chipchakov, \emph{The normality of locally finite
associative division algebras over classical fields}, Mosc. Univ.
Math. Bull. 43 (1988), No. 2, 18-21; translation from Vestn. Mosk.
Univ., Ser. I 1988 (1988), No. 2, 15-17 (Russian).
\bibitem{Ch2} I.D. Chipchakov, \emph{The structure of algebraic
skew-fields over fields of arithmetic type}, Manuscript registered
in VINITI USSR, 16 Nov. 1988, No. 8150-B-88 (Russian).
\bibitem{Ch3} I.D. Chipchakov, \emph{On the classification of central
division algebras of linearly bounded degree over global fields
and local fields}, J. Algebra 160 (1993), No. 2, 342-379. DOI:
https://doi.org/10.1006/jabr.1993.1191
\bibitem{Ch4} I.D. Chipchakov, \emph{On the residue fields of stable
fields with Henselian valuations}, J. Algebra 319 (2008), No. 1,
16-49. DOI: https://doi.org/10.1016/j.jalgebra.2007.08.034
\bibitem{Ch5} I.D. Chipchakov, \emph{On the Galois cohomological
dimensions of stable fields with Henselian valuation}, Comm.
Algebra 30 (2002), No.~4, 1549-1574. DOI:
https://doi.org/10.1081/AGB-120013200
\bibitem{Ch6} I.D. Chipchakov, \emph{On Brauer $p$-dimensions and
index-exponent relations over finitely-generated field
extensions}, Manuscr. Math. 148 (2015), 485-500. DOI:
https://doi.org/10.1007/s00229-015-0745-7
\bibitem{Ch7} I.D. Chipchakov, \emph{On index-exponent relations
over Henselian fields with local residue fields}, Serdica Math. J.
44 (2018), No. 3-4, 303-328.
\bibitem{Ch8} I.D. Chipchakov, \emph{On Brauer $p$-dimensions and
absolute Brauer $p$-dimensions of Henselian fields}, J. Pure Appl.
Algebra 223 (2019), No. 1, 10-29. DOI:
https://doi.org/10.1016/j.jpaa.2018.02.032
\bibitem{Ch9} I.D. Chipchakov, \emph{On Brauer $p$-dimension of
Henselian discrete valued fields of residual characteristic $p >
0$}, J. Pure Appl. Algebra {\bf 226} (2022), No. 8, Paper No.
106948. DOI: https://doi.org/10.1016/j.jalgebra.2014.12.035
\bibitem{Cohn} P.M. Cohn, \emph{On extending valuations in division
algebras}, Studia Sci. Math. Hungar. 16 (1981), 65-70.
\bibitem{Dr1} P.K. Draxl, \emph{Skew Fields}, London Math. Soc.,
Lecture Note Series, vol. 81, Cambridge etc., Cambridge Univ.
Press, 1983.
\bibitem{Dr2} P. Draxl, \emph{Ostrowski's theorem for Henselian valued
skew fields}, J. Reine Angew. Math. {\bf 354} (1984), 213-218.
DOI: https://doi.org/10.1515/crll.1984.354.213
\bibitem{Ef} I. Efrat, \emph{Valuations, Orderings, and Milnor
$K$-Theory}, Math. Surveys and Monographs, 124, Amer. Math. Soc.,
Providence, RI, 2006.
\bibitem{Er} Yu.L. Ershov, \emph{Henselian valuations of division
rings and the group $SK_{1}$}. Math. USSR, Sb. 45 (1983), 63-71
(translation from Mat. Sb. 117(159) (1982), No. 1, 60-68
(Russian)).
\bibitem{FJ} M. Fried, M. Jarden, \emph{Field Arithmetic}, Revised by
Moshe Jarden, third revised ed., Ergebnisse der Math. und ihrer
Grenzgebiete, 3. Folge, 11, Springer, Berlin, 2008.
\bibitem{GiSz} Ph. Gille, T. Szamuely, \emph{Central Simple Algebras
and Galois Cohomology}, Cambridge Stud. Adv. Math., vol. 101,
Cambridge Univ. Press, XI, Cambridge, 2006.
\bibitem{Gr} M.J. Greenberg, \emph{Rational points in Henselian
discrete valuation rings}, Inst. Hautes \'{E}tudes Sci. Publ.
Math. No. 31 (1966), 563-568. DOI:
https://doi.org/10.1007/BF02684802
\bibitem{He} I.N. Herstein, \emph{Noncommutative Rings}, Carus
Mathematical Monographs, No. 15, Math. Assoc. of America (distributed
by Wiley), 1968.
\bibitem{JW} B. Jacob, A. Wadsworth, \emph{Division algebras over
Henselian fields}, J. Algebra {\bf 128} (1990), 126-179. DOI:
https://doi.org/10.1016/0021-8693(90)90047-R
\bibitem{Ka} I. Kaplansky, \emph{Maximal fields with valuations},
Duke Math. J. 9 (1942), No. 2, 303-321. DOI:
10.1215/S0012-7094-42-00922-0
\bibitem{KM} M.I. Kargapolov, Yu.I. Merzlyakov, \emph{Fundamentals of
Group Theory}, third ed., Nauka, Moscow, 1982 (Russian).
\bibitem{Koe} J. Koenigsmann, \emph{Elementary characterization of
fields by their absolute Galois group}, Siberian Adv. Math. 14
(2004), No. 3, 16-42.
\bibitem{Ku} A.G. Kurosh, \emph{Problems in ring theory which are
related to the Burnside problem for periodic groups}, Izv. Akad.
Nauk SSSR 5 (1941), No. 3, 233-240 (Russian).
\bibitem{L} S. Lang, \emph{Algebra}, revised third ed., Graduate Texts
in Math., vol. 211, Springer, New York, 2005.
\bibitem{Mat} E. Matzri, \emph{Symbol length in the Brauer group of
a field}, Trans. Amer. Math. Soc. 368 (2016), No. 1, 413-427. DOI:
https://doi.org/10.1090/tran/6326
\bibitem{MT} O.V. Mel'nikov, O.I. Tavgen', \emph{The absolute
Galois group of a Henselian field}, Dokl. Akad. Nauk BSSR 29
(1985), 581-583 (Russian).
\bibitem{Me} A.S. Merkur'ev, \emph{Generators and relations of the
Brauer group of a field}, Proc. Steklov Inst. Math. 183 (1991),
163-169 (translation from Trudy Mat. Inst. Steklov 183 (1990),
139-144 (Russian)).
\bibitem{MS} A.S. Merkur'ev, A.A. Suslin, \emph{$K$-cohomology of
Severi-Brauer varieties and the norm residue homomorphism}, Math.
USSR, Izv. 21 (1983), 307-340 (translation from Izv. Akad. Nauk
SSSR 46 (1982), 1011-1046 (Russian)). DOI:
https://doi.org/10.1070/IM1983v021n02ABEH001793
\bibitem{Na} M. Nagata, \emph{Note on a paper of Lang concerning
quasi algebraic closure}, Mem. Coll. Sci., Univ. Kyoto, Ser. A,
{\bf 30} (1957), No. 3, 237-241. DOI: 10.1215/kjm/1250777008
\bibitem{PS} R. Parimala, V. Suresh, \emph{Period-index and
$u$-invariant questions for function fields over complete
discretely valued fields}, Invent. Math. 197 (2014), No. 1,
215-235. DOI: https://doi.org/10.1007/s00222-013-0483-y
\bibitem{P} R. Pierce, \emph{Associative Algebras}, Graduate Texts in
Math., vol. 88, Springer-Verlag, New York-Heidelberg-Berlin, 1982.
\bibitem{S1} J.-P. Serre, \emph{Galois Cohomology}, translated from
the French by Patrick Ion, Springer-Verlag, X,
Berlin-Heidelberg-New York, 1997.
\bibitem{TW} J.-P. Tignol, A.R. Wadsworth, \emph{Value Functions on
Simple Algebras, and Associated Graded Rings}, Springer Monographs
in Math., Springer, Cham-Heidelberg-New York-Dordrecht-London,
2015.
\bibitem{TY} I.L. Tomchin, V.I. Yanchevskij, \emph{On defects of
valued division algebras}, St. Petersbg. Math. J. 3 (1992), No. 3,
631-647 (translation from Algebra Anal. 3 (1991), No. 3, 147-164
(Russian)).
\bibitem{Wa} S. Warner, \emph{Topological Fields}. North-Holland
Math. Studies, 157; Notas de Mat\'{e}matica, 126; North-Holland
Publishing Co., Amsterdam, 1989.
\bibitem{Wh} G. Whaples, \emph{Algebraic extensions of arbitrary
fields}, Duke Math. J. 24 (1957), No. 2, 201-204. DOI:
10.1215/S0012-7094-57-02427-4
\end{thebibliography}
\end{document}
\begin{document}
\newtheorem{thm}{Theorem}[section]
\newtheorem{Def}{Definition}[section]
\newtheorem{lem}{Lemma}[section]
\newtheorem{rem}{Remark}[section]
\newtheorem{cor}{Corollary}[section]
\newtheorem{ex}{Example}[section]
\newtheorem{ass}{Assumption}[section]
\newtheorem*{bew}{Proof}
\title{Loss-guided Stability Selection}
\author{Tino Werner\footnote{Institute for Mathematics, Carl von Ossietzky University Oldenburg, P.O. Box 2503, 26111 Oldenburg (Oldb), Germany, \texttt{[email protected]}}}
\maketitle
\begin{footnotesize}
\begin{abstract}
In modern data analysis, sparse model selection becomes inevitable once the number of predictor variables is very high. It is well-known that model selection procedures like the Lasso or Boosting tend to overfit on real data. The celebrated Stability Selection overcomes these weaknesses by aggregating models, based on subsamples of the training data, and then choosing a stable predictor set which is usually much sparser than the predictor sets from the raw models. The standard Stability Selection is based on a global criterion, namely the per-family error rate, while additionally requiring expert knowledge to suitably configure the hyperparameters. Since model selection depends on the loss function, i.e., predictor sets selected w.r.t. some particular loss function differ from those selected w.r.t. another loss function, we propose a Stability Selection variant which respects the chosen loss function via an additional validation step based on out-of-sample validation data, optionally enhanced with an exhaustive search strategy. Our Stability Selection variants are widely applicable and user-friendly. Moreover, they can avoid the issue of severe underfitting which affects the original Stability Selection on noisy high-dimensional data, so our priority is not to avoid false positives at all costs but to obtain a sparse stable model with which one can make predictions. Experiments where we consider both regression and binary classification, using Boosting as the model selection algorithm, reveal a significant precision improvement compared to raw Boosting models while not suffering from any of the mentioned issues of the original Stability Selection.
\end{abstract}
\end{footnotesize}
\section{Introduction}
The first milestone in high-dimensional data analysis was achieved once the Lasso (\cite{tibsh96}) was introduced, performing automated sparse variable selection for the squared loss together with a shrinkage of the coefficients. Several years later, B\"{u}hlmann and Yu proposed $L_2$-Boosting (\cite{bu03}, see also \cite{bu06}), a sophisticated forward step-wise approach that combines simple linear regression models and turned out to be competitive with the Lasso; see for example \cite{efron04}, \cite{bu07}, \cite{bu06} for experiments and discussions on the differences between the Lasso and $L_2$-Boosting.
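To make the forward step-wise idea concrete, the following is a minimal sketch of componentwise $L_2$-Boosting with shrinkage; the function name, default iteration count and step length \texttt{nu} are illustrative choices, not taken from the cited papers:

```python
import numpy as np

def l2_boost(X, y, n_iter=100, nu=0.1):
    """Componentwise L2-Boosting: each iteration fits a simple least-squares
    regression of the current residuals on every single predictor and updates
    the coefficient of the best-fitting predictor by a shrunken step nu."""
    n, p = X.shape
    beta = np.zeros(p)
    resid = y.astype(float).copy()
    col_ss = (X ** 2).sum(axis=0)        # per-column sums of squares
    for _ in range(n_iter):
        coefs = X.T @ resid / col_ss     # simple LS coefficient per column
        gains = coefs ** 2 * col_ss      # reduction in residual sum of squares
        j = int(np.argmax(gains))        # best single predictor
        beta[j] += nu * coefs[j]
        resid -= nu * coefs[j] * X[:, j]
    return beta
```

Early stopping then simply means choosing \texttt{n\_iter} small enough that only few coordinates of \texttt{beta} become nonzero.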
However, although both algorithms have attractive asymptotic properties (\cite{bu06}, \cite{bu}), they still show an overfitting behaviour in practice. This issue resulted in several modifications of the Lasso, for example, the Adaptive Lasso, which is a two-stage procedure (\cite{zou06}) where the second stage thins out the selected set of variables from the first stage, or the Multistep Adaptive Lasso, originally introduced in \cite{bu08}, which is an extension of the Adaptive Lasso and asymptotically performs $\log$-regularized least squares minimization.
As for Boosting, the variant Sparse Boosting (\cite{bu06a}) has been proposed, which respects the model complexity, measured by the trace of the corresponding Boosting operator, leading to sparser models. Nevertheless, a standard approach to prevent Boosting models from overfitting is a suitable choice of the number of iterations. Since iterating the Boosting algorithm until convergence is not reasonable in terms of sparse variable selection, one tries to stop the algorithm before convergence. A considerable amount of work has already been done on this method, called ``early stopping''; see for example \cite{bu03}, \cite{zhang05b}, \cite{bu06} and \cite{mayr12}. Note that \cite{hofner15} point out that even Boosting with early stopping still tends to overfit.
The analogue of choosing the optimal number of iterations for Boosting models is choosing a suitable regularization parameter for Lasso-type models. Although cross-validation is a standard approach, \cite{bu10} point out that even for a sophisticatedly chosen grid $\Lambda$ for the regularization parameter, the optimal model may not be contained in the set $\{\hat S(\lambda) \ |\ \lambda \in \Lambda\}$, where $\hat S(\lambda)$ is the set of variables chosen by the Lasso with regularization parameter $\lambda$. They suggest a very fruitful strategy called ``Stability Selection'' which essentially combines the variable selection procedure with bagging (\cite{breiman96}) or, more precisely, subagging (\cite{bu02}), by aggregating models computed on subsamples. Only those variables that have been selected on sufficiently many subsamples enter the final stable model. The reason behind this model aggregation procedure is to immunize the final model against peculiar sample configurations. Stability Selection is a very powerful method that can be applied to Lasso or Graphical Lasso models (\cite{bu10}), and it has also been extended to Boosting models in \cite{hofner15}. Stability Selection has been applied successfully in the context of gene expression (\cite{bu10}, \cite{stekhoven}, \cite{shah13}, \cite{hofner15}), fMRI data (\cite{ryali}) and voice activity detection (\cite{hamaidi}), settings which typically have very few observations and a huge number of predictors.
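The subagging-and-aggregation step just described can be sketched as follows; here \texttt{select\_fn} stands for an arbitrary base selection procedure (e.g. the Lasso or Boosting), and the half-size subsamples and the default threshold are illustrative conventions, not the implementation of the cited works:

```python
import numpy as np

def selection_frequencies(select_fn, X, y, n_subsamples=100, rng=None):
    """Run the variable selector `select_fn` on random subsamples of half
    size and return the per-variable selection frequencies.

    `select_fn(X, y)` must return an iterable of selected column indices.
    """
    rng = np.random.default_rng(rng)
    n, p = X.shape
    counts = np.zeros(p)
    for _ in range(n_subsamples):
        idx = rng.choice(n, size=n // 2, replace=False)
        for j in set(select_fn(X[idx], y[idx])):
            counts[j] += 1
    return counts / n_subsamples

def stable_set(freqs, pi_thr=0.6):
    """The stable model: variables whose selection frequency reaches pi_thr."""
    return [j for j, f in enumerate(freqs) if f >= pi_thr]
```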
Despite the remarkable success of Stability Selection, there are still open questions. First, the Lasso or Boosting models are computed on different (training) subsamples and are aggregated without ever performing an out-of-sample validation, which is known to be potentially misleading. Second, the tuning parameters of the Stability Selection may require expert knowledge to be chosen appropriately. We experimentally show that, on noisy high-dimensional data, the error control in terms of the expected number of falsely selected variables, to which both parameters are related (\cite[Thm. 1]{bu10}), seems to be too strict, possibly resulting in an empty model. Third, variable selection clearly depends on the loss function in the sense that model selection w.r.t. different loss functions ends up in different models, which is not reflected by the global per-family error rate criterion.
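For reference, the error control mentioned above can be stated as follows; we quote it in the notation of \cite[Thm. 1]{bu10}, which assumes exchangeability of the noise variables and a base selector that is not worse than random guessing (the reader should consult the original for the exact hypotheses): for a threshold $\pi_{\mathrm{thr}} \in (1/2, 1]$,

```latex
\mathbb{E}(V) \;\le\; \frac{1}{2\pi_{\mathrm{thr}} - 1} \cdot \frac{q^{2}}{p},
```

where $V$ denotes the number of falsely selected stable variables, $p$ the total number of predictors and $q$ the expected number of variables selected per subsample. On noisy high-dimensional data, enforcing a small bound forces either a very high threshold or a very small $q$, which illustrates why the criterion can be too strict in practice.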
Therefore, we propose a loss-guided Stability Selection variant that includes a validation step after the aggregation of the subsample-specific models. In this validation step, we compare the performance w.r.t. the chosen loss function of different candidate stable models on an out-of-sample validation data set. The candidate stable models are computed using a pre-defined grid either of thresholds, i.e., the stable models include the variables whose aggregated selection frequencies exceed the respective threshold, or of cardinalities, i.e., the stable models consist of the respective number of variables with the highest aggregated selection frequencies. Then, the final stable model is the best-performing candidate stable model. Thanks to the sparseness of the candidate stable models, this validation step induces only very little computational overhead compared to the original Stability Selection. Motivated by the issue that, on very noisy data, even the relevant variables may be selected on a rather low fraction of the models, so that they may attain lower selection frequencies than some noise variables, we propose another strategy where the grid search is replaced by an exhaustive search over the best variables, again with limited computational overhead. Summarizing, we focus on finding a stable model with which one can make predictions instead of concentrating solely on avoiding false positives.
This article is organized as follows. In Sec. \ref{prelimsec}, we recapitulate Gradient Boosting and strategies that have been proposed to improve the model selection quality on real data and briefly summarize how Stability Selection works. Sec. \ref{ourstabselsec} is devoted to our Stability Selection variants. In Sec. \ref{simsec}, we conduct experiments, both on simulated and real data, that show the potential of our Stability Selection variant. Sec. \ref{outlook} concludes.
\section{Preliminaries} \label{prelimsec}
\subsection{Boosting and variable selection} \label{boostselsec}
Let $\mathcal{D}=(X,Y)$ be a data set where $X \in \mathbb{R}^{n \times p}$ is the predictor matrix and $Y \in \mathbb{R}^n$ is the corresponding response column. The rows $X_i$ of the regressor matrix are contained in some regressor space $\mathcal{X}\subset \mathbb{R}^p$, and the responses $Y_i$ belong to some space $\mathcal{Y}\subset \mathbb{R}$. We assume that the instances $(X_i,Y_i)$ are i.i.d. realizations of some underlying joint distribution. We denote the submatrix of $X$ with all column indices in some index set $J$ by $X_{\cdot,J}$. Let $L$ be a loss function, which in this work is a map $L: \mathcal{Y} \times \mathcal{Y} \rightarrow \mathbb{R}_{\ge 0}$.
The idea behind Boosting is to combine simple models (``weak learners'', ``base learners'') that are easy to fit in order to generate a final ``strong'' learning model. The first algorithm of this kind was the \texttt{AdaBoost} algorithm (\cite{freund97}) for binary classification problems. It has been shown in \cite{breiman99} that \texttt{AdaBoost} can be seen as a gradient descent algorithm for the exponential loss. \cite{friedman} showed that \texttt{AdaBoost} can also be identified with a forward stage-wise procedure. We recapitulate the generic \textbf{functional gradient descent (FGD)} algorithm in Alg. \ref{funcBoosting} (cf. \cite{bu07}).\ \\
\begin{algorithm}[H]
\label{funcBoosting}
\textbf{Initialization:} Data $(X,Y)$, step size $\kappa \in ]0,1]$, number $m_{iter}$ of iterations and the offset value $\hat f^{(0)}(\cdot)\equiv \argmin_c\left(\frac{1}{n}\sum_i L(Y_i,c)\right)$\;
\For{$k=1,...,m_{iter}$}{
Compute the negative gradients $U_i=-\partial_f L(Y_i,f)|_{f=\hat f^{(k-1)}(X_i)}$ and evaluate them at the current model for all $i=1,...,n$\;
Treat the vector $U=(U_i)_i$ as response and fit a model $(X_i,U_i)_i \overset{\text{base procedure}}{\longrightarrow} \hat g^{(k)}(\cdot)$ with a pre-selected learning algorithm as base procedure \;
Update the current model via $\hat f^{(k)}(\cdot)=\hat f^{(k-1)}(\cdot)+\kappa \hat g^{(k)}(\cdot)$
}
\caption{Generic functional Gradient Boosting}
\end{algorithm}\
The weak learners in the ``base procedure'' can, for example, be trees, smoothing splines or simple least squares models. The special case where the loss function is the squared loss is referred to as ``$L_2$-Boosting'' in this article. For an overview on Gradient Boosting algorithms and their paradigms, we refer to \cite{bu03} and \cite{bu07}.
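As an illustration, the FGD loop of Alg. \ref{funcBoosting} can be specialized to $L_2$-Boosting with componentwise linear least squares base learners. The following Python sketch (the function name \texttt{l2\_boost} and all implementation details are our own illustration, not the \texttt{mboost} implementation) shows the structure:

```python
import numpy as np

def l2_boost(X, Y, m_iter=100, kappa=0.1):
    """Illustrative sketch of generic FGD for the squared loss with
    componentwise linear least-squares base learners (no intercept in
    the base learners); not the mboost implementation."""
    n, p = X.shape
    offset = Y.mean()                 # argmin_c (1/n) sum_i (Y_i - c)^2
    beta = np.zeros(p)
    f = np.full(n, offset)
    for _ in range(m_iter):
        U = Y - f                     # negative gradient of the squared loss
        # fit one least-squares coefficient per variable ...
        coefs = X.T @ U / np.einsum('ij,ij->j', X, X)
        # ... and keep the variable with the smallest residual sum of squares
        rss = ((U[:, None] - X * coefs) ** 2).sum(axis=0)
        j = int(rss.argmin())
        beta[j] += kappa * coefs[j]   # weak update with step size kappa
        f += kappa * coefs[j] * X[:, j]
    return offset, beta
```

After early stopping, the set of variables $j$ with $\beta_j \neq 0$ is the selected variable set of the Boosting model.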
The number $m_{iter}$ of Boosting iterations becomes very important if a pure Boosting algorithm is applied without further variable selection criteria, due to the general overfitting behaviour of Boosting (see e.g. \cite{bu07}). More precisely, a small $m_{iter}$ leads to biased models with a low variance, whereas performing many iterations reduces the bias but increases the variance (\cite{mayr12}). While \cite{bu03} considered the relative difference of mean squared errors as a stopping criterion, \cite{bu06} invoked the corrected AIC, minimizing it over a suitable set of values for $m_{iter}$. Stopping before convergence is referred to as ``early stopping'' (see \cite{bu07} and \cite{mayr12}). \cite{mayr12} point out that early stopping often requires a very high initial number of iterations and propose a strategy, based on a cross-validation scheme, to circumvent this issue.
Boosting algorithms are extremely efficiently implemented in the $\mathsf{R}$-package \texttt{mboost} (\cite{mboost}, \cite{hofner14}, \cite{hothorn10}, \cite{bu07}, \cite{hofner15}, \cite{hothorn06}).
\subsection{Stability Selection} \label{stabselsec}
We now briefly recapitulate the Stability Selection introduced in \cite{bu10}. Let $\Lambda \subset \mathbb{R}_{\ge 0}$ be the set of regularization parameters $\lambda$ for Lasso-type algorithms. The estimated set of variables corresponding to a specific $\lambda$ is denoted by $\hat S(\lambda)$. When tuning the algorithm, usually by defining a grid of candidate values for $\lambda$ and fitting a model for each element of the grid, one essentially picks the model which behaves best w.r.t. some quality criterion. Following \cite{bu10}, variable selection by just choosing one element of the set $ \{ \hat S(\lambda) \ | \ \lambda \in \Lambda \}$ generally does not suffice due to the overestimating behaviour of algorithms like the Lasso. Instead, one draws $B$ subsamples of size around $n_{sub}=\lfloor n/2 \rfloor$, and only the variables that have been selected on sufficiently many subsamples are finally selected.
The probabilities $\hat \Pi_j(\lambda):=P(j \in \hat S(\lambda))$ are, for each $j$, approximated by the relative fraction of subsamples whose corresponding model contains variable $j$. Then, by fixing a \textbf{cutoff} $\pi_{thr}$, the Stability Selection defines the set \begin{equation} \label{stabsetthres} \hat S^{stab}:=\{j \ | \ \max_{\lambda \in \Lambda}(\hat \Pi_j(\lambda)) \ge \pi_{thr} \}. \end{equation} An important issue when selecting variables is the occurrence of type I errors, i.e., falsely selected variables. \cite[Thm. 1]{bu10} provides an upper bound for the expected number of false positives. They additionally show under which assumptions exact error control is possible with Stability Selection even in high-dimensional settings.
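When each subsample contributes a single selected set $\hat S^{(b)}$, as in the Boosting variant discussed below, the maximum over $\lambda$ collapses to one frequency per variable. A minimal Python sketch of the aggregation and the threshold rule (hypothetical helper names, chosen only for illustration):

```python
import numpy as np

def selection_frequencies(selected_sets, p):
    """Aggregated selection frequencies over B subsample models."""
    B = len(selected_sets)
    pi_hat = np.zeros(p)
    for S in selected_sets:
        pi_hat[list(S)] += 1.0 / B   # count each selected variable once per model
    return pi_hat

def stable_set_threshold(pi_hat, pi_thr):
    """Threshold rule: keep every j whose frequency is at least pi_thr."""
    return {j for j, v in enumerate(pi_hat) if v >= pi_thr}
```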
Since the original Stability Selection of \cite{bu10} was basically tailored to Lasso-type models, \cite{hofner15} provided a Stability Selection for Boosting models. Again, one generates $B$ subsamples and performs Boosting on each subsample. Each Boosting model is iterated until a pre-defined number $Q$ of variables is selected, respectively for each subsample. The number $Q$ and the threshold $\pi_{thr}$ (as in Eq. \ref{stabsetthres}) are related via the per-family error rate (PFER) due to \cite[Thm. 1]{bu10}, so, fixing two of these three quantities, the third can be set reasonably. This Stability Selection is implemented in the $\mathsf{R}$-package \texttt{stabs} (\cite{stabs}, \cite{hofner14}, \cite{mayr17}).
An extension of the Stability Selection is introduced in \cite{shah13}, who provide bounds for the type I error that are free from the exchangeability assumption and the assumption that the selection procedure is better than random guessing needed in \cite{bu10}. Another variant of the Stability Selection from \cite{bu10} has been proposed in \cite{zhou13}, who criticize that the selection of the stable features is done according to $\max_{\lambda \in \Lambda}(\hat \Pi_j(\lambda))$. They suggest taking the average of the $\hat \Pi_j(\lambda)$ for the best $k$ parameters $\lambda$, therefore calling their method Top-$k$-Stability Selection. A Stability Selection including a filtering step, in the sense that the grid $\Lambda$ is condensed to a sub-grid of regularization parameters corresponding to the lowest aggregated out-of-sample loss, has been suggested in \cite[Sec. 8]{yu20}. \cite{yu16} proposed a method based on the so-called estimation stability which, however, is restricted to the choice of the regularization parameter for the Lasso. \cite{pfister} suggested a stabilized regression method where regression models on different subsets of $\{1,...,p\}$ are computed and where a stability score is assigned to each model. Then, from the subset of those models with the highest stability scores, a subset of models with the highest prediction scores is taken and the corresponding models are averaged. A trimmed Stability Selection for contaminated data based on in-sample losses was introduced in \cite{TW21c}. The stability of variable selection and variable ranking has been considered in \cite{nogueira16}, \cite{nogueira17} and \cite{nogueira17b}, who discuss and propose similarity metrics in order to quantify this stability.
An algorithm called Bolasso (\cite{bach08}) can be interpreted as a version of Stability Selection which is based on bootstrapping instead of subsampling. The final set of selected variables is the intersection of all selected sets; therefore, the Bolasso may be identified with the threshold $\pi_{thr}=1$.
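In the same notation as above, the Bolasso rule amounts to intersecting the selected sets; a one-line sketch (the helper name is ours, for illustration only):

```python
def bolasso_select(selected_sets):
    """Bolasso: keep only the variables selected on every bootstrap model,
    i.e. the threshold rule with pi_thr = 1."""
    sets = [set(S) for S in selected_sets]
    return set.intersection(*sets) if sets else set()
```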
\section{A modified, loss-guided Stability Selection} \label{ourstabselsec}
Although the original Stability Selection has already led to excellent results in the literature, there are still open questions. First, the original Stability Selection, including its variants in the literature, solely considers a training data set, so the stable model is completely based on in-sample losses. Second, \cite{bu10} and \cite{hofner15} have made recommendations for the choice of $\pi_{thr}$ and advise not to give too much attention to it as long as it lies in a reasonable interval. On the other hand, issues with the Stability Selection as well as a considerable hyperparameter sensitivity have already been reported in the literature (\cite{li13b}, \cite{wang20g}). We believe that this parameter should be chosen in a data-driven manner by respecting the out-of-sample performance of the resulting models, analogously to cross-validation techniques which find the optimal hyperparameter from a grid of hyperparameters. Furthermore, for a user it is much more intuitive to specify the number of variables in the final model than to specify a threshold, since there is no intuition about how many variables a particular threshold corresponds to. This has already been suggested in \cite{zhou13}, but we additionally allow for a grid of such numbers so that the optimal number of stable variables is derived from the out-of-sample performance of the corresponding candidate stable models. The main feature of our aggregation and selection procedure is that it is not based on a global criterion like the PFER, as in the original Stability Selection, but adapted to the actual problem in the sense that the loss function directly guides the selection of the stable model.
A further problem is that the excellent implementation of the Boosting-based Stability Selection, in which the Boosting models are iterated until a given number of variables has been selected, backfires if the hyperparameters have not been chosen appropriately, since increasing this number may require recomputing all Boosting models, which is rather expensive on high-dimensional data. Another problem is that, on noisy data, even the relevant variables may tend to be selected on a rather low fraction of the resamples. Therefore, it can happen that noise variables are selected more frequently than the actual relevant variables, which, if one concentrates solely on keeping the number of false positives low, can lead to too sparse or even empty models.
\subsection{A grid search for the optimal stable model} \label{gridsubsec}
First, we partition the data $\mathcal{D}$ into a training set $\mathcal{D}^{train} \in \mathbb{R}^{n_{train} \times (p+1)}$ and a validation set $\mathcal{D}^{val} \in \mathbb{R}^{n_{val} \times (p+1)}$. The most generic procedure, which we also apply in this paper, is to draw $B$ subsamples $\mathcal{D}^{b;train}$ of size $n_{sub}$ from $\mathcal{D}^{train}$ and to compute predictor sets $\hat S^{(b)}$, $b=1,...,B$, by applying some model selection procedure (without any restrictions like stopping it once $Q$ variables are selected as in \cite{hofner15}) to subsample $\mathcal{D}^{b;train}$, respectively. Then, as in the literature, we compute the aggregated selection frequencies \begin{equation} \label{hatpi} \hat \Pi_j=\frac{1}{B}\sum_{b=1}^B I(j \in \hat S^{(b)}) \end{equation} for $j=1,...,p$, and either take only the variables $j$ for which $\hat \Pi_j \ge \pi_{thr}$, or we rank the components in descending order and take the first $q$ ones. Thus, we produce one of the final sets of selected predictors \begin{equation} \label{finalq} \hat S^{stab}(q):=\{j \ | \ \hat \Pi_j \ge \hat \Pi_{((p-q+1):p)}\} \end{equation} or \begin{equation} \label{finalpi} \hat S^{stab}(\pi_{thr}):=\{j \ | \ \hat \Pi_j \ge \pi_{thr}\}, \end{equation} where we denote the largest element of a vector $x$ of length $p$ by $x_{(p:p)}$ and so forth.
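The rank-based rule of Eq. \ref{finalq} keeps every variable reaching the $q$-th largest frequency, so ties may let more than $q$ variables enter. A small Python sketch (illustrative helper name of our own):

```python
import numpy as np

def stable_set_top_q(pi_hat, q):
    """Rank-based rule: keep every j whose frequency reaches the
    (p-q+1)-th order statistic of pi_hat (ties may yield > q variables)."""
    cutoff = np.sort(pi_hat)[len(pi_hat) - q]   # q-th largest frequency
    return {j for j, v in enumerate(pi_hat) if v >= cutoff}
```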
We first decide to define the stable model either according to $q$ or to $\pi_{thr}$. In the case of adjusting $q$, we need a reasonable subset of $\mathbb{N}$ which clearly satisfies \begin{equation} \label{qgrid} q_{grid} \subset \{1,2,...,\#\{j \ | \ \hat \Pi_j>0\}\}. \end{equation} In the case of adjusting $\pi_{thr}$, we discretize a reasonable interval, w.l.o.g. $]0,1]$, according to some mesh size $\Delta>0$, so we get the grid \begin{equation} \label{pigrid} \pi_{grid}=\{\Delta,2\Delta,...,1-\Delta,1\} . \end{equation} Then, we search for the optimal element of the grid by first computing the candidate stable model corresponding to each grid element according to Eq. \ref{finalpi} resp. Eq. \ref{finalq}. Then, using the validation data set $\mathcal{D}^{val}$, we compute the final coefficient set (w.l.o.g. least squares coefficients, see Subsec. \ref{coeffsec}) for each candidate stable model and compute the loss on $\mathcal{D}^{val}$. This guarantees that not only the predictor sets $\hat S^{(b)}$ are adapted to the loss function but that also the stable predictor set respects the loss appropriately. See Alg. \ref{stabselalg} for an overview of our loss-guided Stability Selection.
Of course, \cite[Thm. 1]{bu10}, which bounds the expected number of falsely selected variables in terms of $\bar q$, $\pi_{thr}$ and the number $p$ of variables, may be applied ex post, where $\bar q=\mathop{mean}_b(|\hat S^{(b)}|)$ is the average number of selected variables over the Boosting models. If $q_{opt}$ is the optimal number $q$ selected from the grid $q_{grid}$, the corresponding threshold lies in the interval $]\hat \Pi_{(p-q_{opt}):p},\hat \Pi_{(p-q_{opt}+1):p}]$. Therefore, if a grid as in Eq. \ref{qgrid} has been used, it holds (under the assumptions in \cite{bu10}) that \begin{center} $ \displaystyle \mbox{I\negthinspace E}[V] \le \frac{1}{2\hat \Pi_{(p-q_{opt}):p}-1}\frac{(\bar q)^2}{p}$, \end{center} provided that $\hat \Pi_{(p-q_{opt}):p}>0.5$. If we use a threshold grid as in Eq. \ref{pigrid} for our Stability Selection, one just replaces $\hat \Pi_{(p-q_{opt}):p}$ by $\pi_{opt}$, for $\pi_{opt}$ being the optimal element in $\pi_{grid}$, provided that $\pi_{opt}>0.5$. It is important to note that we would rather risk including wrong variables in our final stable model than obtaining an empty or heavily underfitted model, so we do not regard the inability to compute the error control bound ex ante as a weakness of our Stability Selection. \\
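For concreteness, the ex-post bound can be evaluated as in the following sketch (the helper name is ours; the bound itself is the one of \cite[Thm. 1]{bu10} and is only valid for thresholds above $0.5$):

```python
def pfer_bound(pi, q_bar, p):
    """Ex-post upper bound q_bar^2 / ((2*pi - 1) * p) on the expected
    number E[V] of falsely selected variables; requires pi > 0.5."""
    if pi <= 0.5:
        raise ValueError("the bound requires a threshold above 0.5")
    return q_bar ** 2 / ((2.0 * pi - 1.0) * p)
```

For instance, $\bar q=10$, a threshold of $0.75$ and $p=1000$ yield a bound of $0.2$ expected false positives.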
\begin{footnotesize}
\begin{algorithm}[H]
\label{stabselalg}
\textbf{Initialization:} Data $\mathcal{D}^{train}$, $\mathcal{D}^{val}$, size $n_{sub}$ of subsamples, binary variable \texttt{gridtype}, grid, hyperparameters for the underlying model selection procedure\;
\For{b=1,...,B}{
Draw a subsample $\mathcal{D}^{b;train} \in \mathbb{R}^{n_{sub} \times (p+1)}$ from $\mathcal{D}^{train}$\;
Apply the model selection algorithm and get a model $\hat S^{(b)}$\;
}
Compute the $\hat \Pi_j$ as in Eq. \ref{hatpi} \;
\eIf{\texttt{gridtype=='qgrid'}}{\For{$k=1,...,|q_{grid}|$}{
Get the stable model $\hat S^{stab}((q_{grid})_k)$ according to Eq. \ref{finalq}\;
Compute the coefficients on the reduced data $(X^{train}_{\cdot,\hat S^{stab}((q_{grid})_k)},Y^{train})$ and get the loss $L^{(val,k)}$ on the validation data $\mathcal{D}^{val}$
}}{\For{$k=1,...,|\pi_{grid}|$}{Get the stable model $\hat S^{stab}((\pi_{grid})_k)$ according to Eq. \ref{finalpi}\;
Compute the coefficients on the reduced data $(X^{train}_{\cdot, \hat S^{stab}((\pi_{grid})_k)},Y^{train})$ and get the loss $L^{(val,k)}$ on the validation data $\mathcal{D}^{val}$}}
Choose the model corresponding to $k_{opt}=\argmin_k(L^{(val,k)})$\;
Compute final coefficients w.r.t. the stable model on the whole data $\mathcal{D}$
\caption{Loss-guided Stability Selection}
\end{algorithm}
\end{footnotesize}
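The grid search of Alg. \ref{stabselalg} can be sketched as follows for the $q$-grid case with the squared loss (Python, illustrative names of our own; least squares without intercept for brevity):

```python
import numpy as np

def loss_guided_selection(pi_hat, q_grid, X_tr, Y_tr, X_val, Y_val):
    """Sketch of the q-grid branch of the loss-guided Stability Selection:
    refit least squares on each candidate stable set and keep the
    candidate with the lowest validation loss (squared loss here)."""
    best_S, best_loss = None, np.inf
    for q in q_grid:
        cutoff = np.sort(pi_hat)[len(pi_hat) - q]          # q-th largest frequency
        S = [j for j, v in enumerate(pi_hat) if v >= cutoff]
        beta, *_ = np.linalg.lstsq(X_tr[:, S], Y_tr, rcond=None)
        loss = np.mean((Y_val - X_val[:, S] @ beta) ** 2)  # out-of-sample loss
        if loss < best_loss:
            best_S, best_loss = S, loss
    return best_S, best_loss
```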
\begin{rem} \label{aggrem} The aggregation procedure may suffer from several issues, for example, contaminated instances/cells in the data set. Of course, one can apply robust model selection methods on the subsamples. Furthermore, several methods like computing the performance of the individual models on the subsamples and downweighting or trimming them when computing the $\hat \Pi_j$ have been considered in \cite{TWphd}, also including an additional outer cross-validation scheme where different partitions of the data into training and validation data are drawn. A trimming procedure based on the in-sample losses was suggested in \cite{TW21c}. \end{rem}
\begin{rem} We recommend to use $q$-grids instead of $\pi$-grids since they are more intuitive and since they guarantee that the Stability Selection does not end up in an empty model. It is important to note that the aggregated selection frequencies can clearly also be inspected, so if we force the Stability Selection to output $q$ stable variables, the analyst can investigate if they attain acceptable aggregated selection frequencies. \end{rem}
\subsection{Post Stability Selection model selection}
Consider, for simplicity, the situation that there are $k_r$ relevant variables, indexed by $i_1,...,i_{k_r}$, and $k_n$ non-relevant variables, indexed by $j_1,...,j_{k_n}$, whose aggregated selection frequencies satisfy $\hat \Pi_{j_1}>\hat \Pi_{j_2}>...>\hat \Pi_{j_{k_n}}>\hat \Pi_{i_1}>\hat \Pi_{i_2}>...>\hat \Pi_{i_{k_r}}$. If the threshold is low resp. if the number $q$ of stable variables is high, one inevitably selects noise variables. These noise variables may decrease the out-of-sample performance, so our loss-guided Stability Selection may select only very few variables, i.e., only those whose aggregated selection frequencies are larger than $\hat \Pi_{j_1}$. Of course, the same argument holds for more flexible settings than the one above when, for example, a sequence of true variables is disturbed by a single noise variable in terms of aggregated selection frequencies. Due to strictly respecting the ordering of the variables in terms of their aggregated selection frequencies, there is no chance to select the true best model if at least one relevant variable has a lower aggregated selection frequency than at least one noise variable.
We suggest exploiting the fact that stable variable sets (due to the rather low threshold that we use at this stage, we term them ``meta-stable'' variable sets) are usually rather sparse. Therefore, we propose a ``Post Stability Selection model selection'' approach which combines the resampling strategy of Stability Selection with well-established classical model selection and a validation step corresponding to the loss function. We perform an exhaustive search in order to overcome the problem of inappropriate variable orderings and call the strategy ``Post Stability Selection Exhaustive Search'' (PSS-ES). See Alg. \ref{stabselexh} for an overview, where $\mathcal{P}(A)$ denotes the power set of a discrete set $A$. Note that we leave the definition of the ``performance'' in the algorithm open since there are several possible alternatives, like a standard exhaustive search w.r.t. the validation loss, additional cross-validation or a hybrid approach between the in-sample and out-of-sample performances that we apply in Sec. \ref{simsec}.
Since the computation of the coefficients on the reduced data set is cheap, this strategy only induces an insignificant computational overhead provided that it is based on a sufficiently low number $q_0$ of variables (see Alg. \ref{stabselexh}). Alternatively, one can execute the corresponding forward or backward selection strategies. \\
\begin{footnotesize}
\begin{algorithm}[H]
\label{stabselexh}
\textbf{Initialization:} Training data $\mathcal{D}^{train}$, validation data $\mathcal{D}^{val}$, threshold $\pi_{thr}$, maximum number $q_0$ of candidate variables, aggregated selection frequencies $\hat \Pi_j$\;
$\hat S^{meta-stab}=\{j \ | \ \hat \Pi_j \ge \pi_{thr}\}$\;
\If{$|\hat S^{meta-stab}|>q_0$}{$\hat S^{meta-stab}=\{j \ | \ \hat \Pi_j \ge \hat \Pi_{((p-q_0+1):p)}\}$}
\For{$S \in \mathcal{P}(\hat S^{meta-stab})$}{
Compute $\hat \beta^S$ on $(X^{train}_{\cdot,S},Y^{train})$\;
Compute the performance of the model}
Select the model $\hat S^{stab}$ with the best performance\;
Compute final coefficients w.r.t. the stable model on $\mathcal{D}$
\caption{Post Stability Selection Exhaustive Search (PSS-ES)}
\end{algorithm}
\end{footnotesize}
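A Python sketch of PSS-ES follows (illustrative names of our own; here the ``performance'' is simply the out-of-sample squared loss of a least squares refit, one of the alternatives mentioned above):

```python
import numpy as np
from itertools import combinations

def pss_es(pi_hat, pi_thr, q0, X_tr, Y_tr, X_val, Y_val):
    """Sketch of PSS-ES: exhaustive search over all non-empty subsets of
    the meta-stable set, scored by the out-of-sample squared loss."""
    meta = [j for j, v in enumerate(pi_hat) if v >= pi_thr]
    if len(meta) > q0:                              # cap at the q0 most frequent
        meta = sorted(meta, key=lambda j: -pi_hat[j])[:q0]
    best_S, best_loss = [], np.mean(Y_val ** 2)     # empty model as baseline
    for r in range(1, len(meta) + 1):
        for S in combinations(meta, r):
            S = list(S)
            beta, *_ = np.linalg.lstsq(X_tr[:, S], Y_tr, rcond=None)
            loss = np.mean((Y_val - X_val[:, S] @ beta) ** 2)
            if loss < best_loss:
                best_S, best_loss = S, loss
    return best_S
```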
\subsection{Final coefficients} \label{coeffsec}
We want to point out that Stability Selection aggregates models, but not coefficients. Therefore, the coefficients computed on the subsamples are not considered further. We suggest simply computing the coefficients that minimize the selected loss function on the stable model, e.g., least squares coefficients for the quadratic loss function. In the presumably very rare, but not impossible, case that the stable model still has more than $n$ predictors, we would apply the underlying model selection algorithm again on the reduced data set where only the stable predictors remain. Note that \cite{cher18} propose a much more sophisticated strategy for computing the final coefficients, by averaging coefficients computed from solving moment equations in a cross-validation scheme, which could of course also be applied here.
\subsection{Computational aspects}
As for the computational complexity of both the original and our loss-guided Stability Selection, we can state the following lemma.
\begin{lem} \label{complem} \textbf{i)} The computational complexity of our loss-guided Stability Selection is contained in the same complexity class as the original Stability Selection provided that the complexity $\mathcal{O}(c_{base}(n,p))$ of the model selection algorithm applied on training data with $n$ observations and $p$ predictors is at least $\mathcal{O}(np)$, that the loss function can be evaluated in at most $\mathcal{O}(np)$ steps and that the final coefficients can be computed in at most $\mathcal{O}(np)$ steps. \\
\textbf{ii)} Statement i) remains valid for the Post Stability Selection model selection provided that the true number $s^0$ of relevant variables depends on neither $n$ nor $p$.\\
\textbf{iii)} Selecting the (candidate) stable model according to the rank-based strategy does not cause a computational overhead compared to the threshold-based strategy provided that $p$ is of order $\exp(n)$. \end{lem}
\begin{bew} The original Stability Selection requires drawing $B$ subsamples of size $n_{sub}$, leading to a complexity of $\mathcal{O}(Bn_{sub})$, the application of the base procedure on each subsample with a total complexity of $\mathcal{O}(Bc_{base}(n_{sub},p))$, the computation of the $\hat \Pi_j$, which is done in $\mathcal{O}(Bp)$ steps, and the selection of the variables according to Eq. \ref{finalpi}, requiring $\mathcal{O}(p)$ steps. Summarizing, the computational complexity is dominated by the base procedure according to the assumption, so that the total complexity is $\mathcal{O}(Bc_{base}(n,p))$. \\
\textbf{i)} The loss-guided Stability Selection with a grid of thresholds additionally requires checking, $|\pi_{grid}|$ times, which variables exceed the threshold, followed by the computation of the coefficients and the loss evaluation, leading to additional costs of order $\mathcal{O}(|\pi_{grid}|(p+n_{train}p+n_{val}p))=\mathcal{O}(np)$, which is captured by $\mathcal{O}(Bc_{base}(n,p))$ according to the assumptions.\\
\textbf{ii)} After having computed the meta-stable model, one additionally has to compute the coefficients and the performance measure for each considered predictor set, whose number is at most $2^{q_0}$ for the exhaustive search and definitely smaller for forward or backward strategies. This again leads to an additional complexity of $\mathcal{O}(np)$. Note that this argumentation only holds if the number $q_0$ does not grow with $n$ or $p$, which is true if the assumption is valid.\\
\textbf{iii)} The only difference is that the selection of the (candidate) stable models cannot be done in $\mathcal{O}(p)$ steps due to the ordering procedure which requires $\mathcal{O}(p\ln(p))$ steps if quick sorting algorithms are applied. However, this quantity is captured by $\mathcal{O}(np)$ for $p$ being of order $\exp(n)$.
\begin{flushright} $\Box$ \end{flushright} \end{bew}
\begin{ex} \textbf{i)} A variant of Lem. \ref{complem} clearly holds in general if the complexity of the model selection procedure dominates the coefficient computation and the loss evaluation. However, the $\mathcal{O}(np)$ complexity spelled out in the lemma is the most natural bound for many coefficient estimation costs and holds for the concrete ones in this paper.\\
\textbf{ii)} It may sound counter-intuitive to require the complexity of the loss evaluation to be contained in $\mathcal{O}(np)$ since the loss requires $Y$ and $\hat Y$ as input, so $p$ does not appear. We formulated it in this way due to loss functions like ranking loss functions (see \cite{TW19b} for an overview) which require $\mathcal{O}(n\ln(n))$ steps to be evaluated. Then, assuming that $p$ is of order $\exp(n)$ maintains the assumption that the loss evaluation complexity is $\mathcal{O}(np)$. \\
\textbf{iii)} For $L_2$-Boosting models with $m_{iter}$ Boosting iterations each, the complexity of the Stability Selection is $\mathcal{O}(Bn_{train}pm_{iter})$. \end{ex}
\begin{rem} As for the storage complexity, the Stability Selection essentially only requires memorizing the data set and, for each $b=1,...,B$, the coefficient vector (or its logical counterpart where a TRUE appears if the coefficient is non-zero), leading to storage costs of order $\mathcal{O}(np+Bp)$. Additionally, one has to store at least the current subsample, which however can be deleted afterwards. In principle, the aggregated selection frequencies can be updated iteratively so that one would only have to memorize the current coefficient vector or its logical counterpart. During the grid search, we only memorize the validation loss w.r.t. each element of the grid and finally report the coefficients of the optimal model and the aggregated selection frequencies, so, denoting our grid by $*_{grid}$ for $* \in \{\pi,q\}$, the storage costs for the whole Stability Selection are given by $\mathcal{O}(np+Bp+|*_{grid}|+p)$. A Post Stability Selection model selection strategy would require the same storage capacities where $|*_{grid}|$ is replaced by $2^{q_0}$. \end{rem}
Evidently, neither Stability Selection strategy requires any interaction between the different bootstrap replications at any stage, except for the final aggregation of the selection frequencies. Therefore, they can easily be parallelized, similarly to the original Stability Selection, by running the base procedure on the subsamples on different nodes.
\subsection{Other potential applications}
We want to emphasize that our Stability Selection is not limited to Boosting models on which we focus in this paper but clearly also applicable to model selection procedures like the Lasso and its variants, as done in \cite{bu10}, where the threshold in Eq. \ref{stabsetthres} resp. a corresponding number $q$ would be selected data-driven. As in \cite{bu10}, it can also be applied to sparse covariance estimation, for example with the Graphical Lasso (\cite{banerjee08}, \cite{friedman08}). The stable model is in this case a set of nodes, i.e., a subset of $\{(i,j) \ |\ i,j=1,...,p\}$. After having computed candidate stable models, one can proceed by computing the classical covariances based on the candidate node sets and by evaluating the non-penalized log-likelihood, so the stable model with the highest value is chosen as the final model.
A completely different setting is given by variable length Markov chains (VLMCs, see e.g. \cite{bu99}), where the memory length for each transition probability depends on the realizations in the previous time steps. Model selection in VLMCs consists of finding the optimal variable lengths, for which \cite{rissanen} proposed the context tree algorithm where first an overfitted maximal context tree is grown which is pruned afterwards. In this sense, the final context tree can be identified with a sparse model, i.e., a sparse VLMC representation of a full MC model of the respective order. A Stability Selection would aggregate the context trees fitted on suitable subsamples of the data to get a stable VLMC representation.
\section{Applications} \label{simsec}
\subsection{Data generation}
We generate data by drawing the $n$ rows $X_i$ of the regressor matrix independently from a multivariate normal distribution $\mathcal{N}_p(M_x,I_p)$ where $I_p$ denotes the identity matrix of dimension $p \times p$ and where $M_x=(\mu_x,\mu_x,...,\mu_x) \in \mathbb{R}^p$ for some $\mu_x \in \mathbb{R}$. We fix the number $s^0$ of true relevant variables and randomly select $s^0$ variable indices from the $p$ column indices without replacement. The $s^0$ coefficients $\beta_j$ corresponding to the relevant variables are drawn independently from a $\mathcal{N}(\mu_{\beta},1)$-distribution.
As for the signal-to-noise ratio (SNR), we can easily generate regression data with a prescribed SNR by computing $\Var(X\beta)$ and by adjusting the scaling of the noise term $\epsilon$ in the linear model $Y=X\beta+\epsilon$. For classification data however, we generate $X\beta$ and treat $\eta_i:=\exp(\overline{X_i \beta})/(1+\exp(\overline{X_i \beta}))$ with $\overline{X_i \beta}:=X_i\beta-\mathop{mean}(X\beta)$ (to avoid the issue of highly imbalanced data if (nearly) all components of $X\beta$ are positive resp. negative) as $P(Y_i=1)$, i.e., we draw the responses according to $Y_i \sim Bin(1,\eta_i)$. \cite[Sec. 16]{elstat} propose a noise-to-signal ratio $NSR=\Var(Y|X\beta)/\Var(X\beta)$. We are not aware of any method that allows for generating classification data with a prescribed SNR (or NSR), but for the sake of transparency, we compute the inverse of the $NSR$ on our data sets as a replacement for the SNR.
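The regression part of this data generation can be sketched as follows (Python, illustrative function name of our own; the classification variant replaces the noise scaling by the logistic transformation described above):

```python
import numpy as np

def make_regression_data(n, p, s0, snr, mu_x=0.0, mu_beta=1.0, rng=None):
    """Sketch of the data generation: rows drawn from N_p(M_x, I_p),
    s0 randomly chosen relevant variables with N(mu_beta, 1) coefficients,
    and noise scaled so that Var(X beta) / Var(eps) equals the given SNR."""
    if rng is None:
        rng = np.random.default_rng()
    X = rng.normal(mu_x, 1.0, size=(n, p))
    beta = np.zeros(p)
    idx = rng.choice(p, size=s0, replace=False)      # relevant variable indices
    beta[idx] = rng.normal(mu_beta, 1.0, size=s0)
    signal = X @ beta
    sigma = np.sqrt(signal.var() / snr)              # Var(eps) = Var(X beta) / SNR
    Y = signal + rng.normal(0.0, sigma, size=n)
    return X, Y, beta
```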
\subsection{Training}
We compare standard Boosting ($L_2$-Boosting for regression and \texttt{LogitBoost} for classification), the loss-based Stability Selection and PSS-ES. As for Boosting, the whole data set $\mathcal{D}$ is used as training set. As for our Stability Selection variants, we partition $\mathcal{D}$ into a training set $\mathcal{D}^{train}$ with $n_{train}$ instances and a validation set $\mathcal{D}^{val}$ with $n_{val}$ instances. In Sec. \ref{artificialsec} and \ref{realsec}, we generate an independent test data set with $n_{test}$ instances that will only appear during evaluation. In all cases, in the spirit of randomized cross-validation, we draw 10 such partitions into (test,) training and validation data. The subsamples in our Stability Selection are of fixed size $n_{sub}$. Note that after having selected the final stable model, the final coefficients are computed on the whole set $\mathcal{D}$ using standard least squares regression resp. logit regression. The Boosting parameters are per default $m_{iter}=100$ and $\kappa=0.1$. The whole procedure is repeated $V$ times for each scenario, i.e., we generate $V$ independent data sets.
As for PSS-ES, we suggest a hybrid approach that considers both in-sample and out-of-sample losses. More precisely, since the number of observations is often low, the validation set may not be sufficiently representative, so relying solely on out-of-sample data when selecting the best candidate stable model could be misleading. Our strategy is to perform the usual exhaustive search based on the in-sample losses as implemented in the $\mathsf{R}$-packages \texttt{leaps} (\cite{leaps}) and \texttt{bestglm} (\cite{bestglm}), which compute the best model for each cardinality up to at most $q_0$ (see Alg. \ref{stabselexh}). Normally, one would select the best of these competing models by some criterion like the AIC or the BIC. In our case, we instead compare these models by computing the out-of-sample performance of the least squares resp. logit coefficients fitted only on these variables. Finally, the best (measured by out-of-sample performance) of these best (measured by in-sample performance) models is chosen as the final stable model. We nevertheless also implemented a forward and a backward search purely based on the out-of-sample performance and refer to them as PSS-FW resp. PSS-BW.
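The hybrid ``best of the best'' selection can be summarized as follows (an illustrative Python sketch; we assume the in-sample-best variable set for each cardinality is already given, as \texttt{leaps} resp. \texttt{bestglm} would return them, and use least squares refits for simplicity):

```python
# Sketch of the hybrid selection: among the per-cardinality in-sample-best
# candidate sets, pick the one whose refitted least squares coefficients
# attain the lowest out-of-sample loss. All names are illustrative.
import numpy as np

def best_of_best(candidates, Xtr, ytr, Xval, yval):
    best_set, best_loss = None, np.inf
    for var_set in candidates:                  # one in-sample-best set per cardinality
        coef = np.linalg.lstsq(Xtr[:, var_set], ytr, rcond=None)[0]
        loss = np.mean((yval - Xval[:, var_set] @ coef) ** 2)   # validation loss
        if loss < best_loss:
            best_set, best_loss = var_set, loss
    return best_set, best_loss
```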
\subsection{Evaluation}
\subsubsection{Simulated data}
As already elucidated in \cite{wang20g}, Stability Selection is a model selection strategy, and the final performance depends on how the analyst uses the resulting stable model. Hence, if the true model is known, computing test losses would not be a fair way to compare the performance of Stability Selection with that of the underlying model selection algorithm.
Instead, we compute the average number of variables selected by each model selection strategy, as well as the average number of true positives. All averages are first taken over the 10 cross-validation loops and then over the $V$ repetitions. Of course, one can expect raw Boosting models to achieve a higher TPR than stable models, so just reporting the number of true positives would be somewhat unfair. We therefore also compute the precision, defined as $pr=TP/PP$ for the number $TP$ of true positives and the number $PP$ of predicted positives, i.e., the relative share of relevant variables among the selected variables, and again report the average over the cross-validation loops and the $V$ data sets.
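For concreteness, the precision $pr=TP/PP$ amounts to the following (a trivial Python sketch; the function name is ours):

```python
# Precision pr = TP / PP: the share of truly relevant variables among the
# selected ones. An empty selection is assigned precision 0 by convention.
def precision(selected, relevant):
    tp = len(set(selected) & set(relevant))     # true positives
    return tp / len(selected) if selected else 0.0

# e.g. 3 of 5 selected variables are relevant, so the precision is 0.6
```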
\subsubsection{Real data}
On real data, there is no known ground truth model. Of course, we can report the mean number of true positives, but this number alone would obviously bias the evaluation towards overfitting models.
Therefore, we compute the losses on the test data and report the average test loss over all cross-validation samples of the given data set. The intention behind this strategy is that if the assumption of a linear model is suitable, the prediction based on the true relevant variables should be better than any prediction based on a different set of variables. Note that we compute the least squares coefficients on the selected models, including that for $L_2$-Boosting since directly using the $L_2$-Boosting output would incorporate the additional shrinking effect of that algorithm which would make the results incomparable.
\subsection{Comparison on simulated data} \label{artificialsec}
We first compare our loss-guided Stability Selection and PSS-ES with the Stability Selection from \cite{hofner15} on three scenarios in order to highlight some issues with the latter one. Afterwards, we compare our Stability Selection variants with the corresponding Boosting algorithm on both 16 regression scenarios and 8 classification scenarios.
\subsubsection{Issues with the original Stability Selection} \label{hofnersec}
We start by showing that the Stability Selection proposed by \cite{hofner15} can fail on high-dimensional noisy data. The reason is that in such cases, there are seldom clearly dominating variables in terms of selection frequencies, or in other words, the Boosting algorithm gets irritated by the noise variables, resulting in no variable at all passing the threshold in the Stability Selection. As for Hofner's Stability Selection, we use its implementation as \texttt{stabsel} in the $\mathsf{R}$-packages \texttt{mboost} resp. \texttt{stabs}.
One can argue that one just has to modify either $Q$, the PFER or the threshold when applying \texttt{stabsel}, but as we show in scenario A, even that does not necessarily result in good models, apart from the immense computational overhead, since increasing $Q$ necessitates re-computing all Boosting models.
\begin{table}[H]
\begin{center}
\begin{tabular}{|p{1.25cm}|p{0.75cm}|p{1cm}|p{1cm}|p{1cm}|p{1cm}|p{0.5cm}|p{1cm}|p{0.75cm}|p{0.75cm}|p{0.75cm}|p{0.75cm}|} \hline
Scenario&$p$ & $n_{train}$ & $n_{sub}$ & $n_{val}$ & $n_{test}$ & $s^0$ & $\SNR$ & $\mu_{\beta}$ & $\mu_X$ & $B$ & $V$ \\ \hline
A & 100 & 100 & 50 & 25 & 25 & 5 & 0.5 & 4 & -2 & 50 & 200 \\ \hline
B & 1000 & 100 & 50 & 25 & 25 & 5 & 0.25 & 0 & 0 & 50 & 100 \\ \hline
C & 1000 & 100 & 50 & 25 & 25 & 5 & 0.25 & 0 & -2 & 50 & 100 \\ \hline
\end{tabular}
\caption{Scenario specification} \label{hofnertable}
\end{center}
\end{table}
In all scenarios A-C, we apply the Stability Selection 30 times for each combination of $Q \in \{5,6,...,10\}$ and $PFER \in \{1,2,3,4,5\}$. The reason is that using one single configuration would potentially be highly misleading and discredit the original Stability Selection. Therefore, we use the test losses in order to determine the best stable model from the original Stability Selection. We do not intend to detect the best configuration here (besides, different configurations may lead to the same stable model) but to approximate the optimal model that the original Stability Selection can select.
We then compute the number of selected variables and true positives \textbf{corresponding to the best model} in each cross-validation loop and the average over all 10 loops, and report the average of these cross-validated quantities over all $V$ replications. We also count the number of cases where no variable has been selected. The sampling scheme is per default the Shah-Samworth scheme with $B$ complementary pairs.
For the loss-guided Stability Selection, we always use a $q$-grid $q_{grid}=\{1,2,...,10\}$. We set $\pi_{thr}=0.25$ and $q_0=20$ for PSS-ES.
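The loss-guided choice over the $q$-grid can be sketched as follows (illustrative Python; the aggregated selection frequencies \texttt{freqs} would come from the Boosting fits on the subsamples, and we again use least squares refits):

```python
# Loss-guided Stability Selection over a q-grid: for each q, take the q
# variables with the highest aggregated selection frequencies as the stable
# set, refit on the training data, and keep the q with the best validation
# loss. All names are illustrative.
import numpy as np

def loss_guided_q(freqs, q_grid, Xtr, ytr, Xval, yval):
    order = np.argsort(freqs)[::-1]             # variables by aggregated frequency
    best_q, best_set, best_loss = None, None, np.inf
    for q in q_grid:
        stable = order[:q]
        coef = np.linalg.lstsq(Xtr[:, stable], ytr, rcond=None)[0]
        loss = np.mean((yval - Xval[:, stable] @ coef) ** 2)
        if loss < best_loss:
            best_q, best_set, best_loss = q, stable, loss
    return best_q, best_set
```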
\begin{table}[H]
\begin{center}
\begin{tabular}{|p{0.8cm}|p{1.25cm}|p{1cm}|p{1.35cm}|p{1.25cm}|p{1cm}|p{1.35cm}|p{1.75cm}|p{2.25cm}|} \hline
Scen.& \multicolumn{3}{c|}{Mean number of positives} & \multicolumn{3}{c|}{Mean number of TPs} & \multirow{2}{2cm}{\#\{empty models\}} & \multirow{2}{2.25cm}{\#\{all mod\-els empty\}} \\
& Hofner & LSS & PSS-ES & Hofner & LSS & PSS-ES & & \\ \hline
A & 3.002 & 4.71 & 5.511 & 2.783 & 2.965 & 3.102 & 0.158 & 0 \\ \hline
B & 0.672 & 3.164 & 3.39 & 0.596 & 0.909 & 0.893 & 14.012 & 11 \\ \hline
C & 0.628 & 3.222 & 3.535 & 0.547 & 0.887 & 0.902 & 15.233 & 9 \\ \hline
\end{tabular}
\caption{Results for Scenarios A-C} \label{hofnertableres}
\end{center}
\end{table}
We see in Tab. \ref{hofnertableres} that there do not seem to be severe issues with the original Stability Selection in scenario A where it achieves a significantly higher precision than our variants. Among the different hyperparameter configurations, less than 1\% of them (0.158 out of 30) led to an empty model. However, the results have to be interpreted with caution since they correspond to the best of the 30 different configurations.
Regarding scenarios B and C where weak coefficients appear\footnote{``Weak'' here has nothing to do with the $\SNR$ but with the magnitude of the true coefficients, which has a mean of 4 in scenario A but is centered at zero in the other scenarios.} in contrast to the strong signals in scenario A, the original Stability Selection, although its best model again achieves a somewhat higher precision than our variants, tends to underfit severely: around half of the configurations (14.012 resp. 15.233 out of 30, on average) result in an empty model. Even worse, in 11 resp. 9 out of the 100 repetitions in scenario B resp. C, all configurations end up with an empty model, hence all the computational effort was in vain since on these training data sets, no variable ever entered the stable model.
From the perspective of avoiding false positives at all costs, the original Stability Selection clearly beats our variants. From the perspective of finding a predictive model, however, the original Stability Selection backfires once weak coefficients appear: considerable computational effort is lost and there is no obvious way to proceed in this situation (would one apply the Stability Selection again with a higher false positive tolerance?). Our Stability Selection, in contrast, still includes false positives, with the benefit that it is very easy to use without requiring expert knowledge at any stage and that, when using a $q$-grid, it is guaranteed to output a model to work with.
\subsubsection{Comparison of our Stability Selection with Boosting for regression}
We study the performance of our Stability Selection variants on 16 different regression scenarios, see Tab. \ref{regtable}.
\begin{table}[H]
\begin{center}
\begin{tabular}{|p{1.5cm}|p{0.75cm}|p{1cm}|p{0.75cm}|p{0.75cm}|p{0.25cm}|p{0.75cm}|p{0.5cm}|p{0.5cm}|p{0.75cm}|p{0.75cm}|} \hline
Scenario &$p$ & $n_{train}$ & $n_{sub}$ & $n_{val}$ & $s^0$ & $\SNR$ & $\mu_{\beta}$ & $\mu_X$ & $B$ & $V$ \\ \hline
I & 1000 & 300 & 200 & 100 & 5 & 1 & 4 & -2 & 100 & 1000 \\ \hline
II & 1000 & 300 & 200 & 100 & 5 & 0.1 & 4 & -2 & 100 & 1000 \\ \hline
III & 100 & 300 & 200 & 100 & 5 & 1 & 4 & -2 & 100 & 1000 \\ \hline
IV & 100 & 300 & 200 & 100 & 5 & 0.1 & 4 & -2 & 100 & 1000 \\ \hline
V & 100 & 120 & 80 & 40 & 5 & 1 & 4 & -2 & 100 & 1000 \\ \hline
VI & 100 & 120 & 80 & 40 & 5 & 0.1 & 4 & -2 & 100 & 1000 \\ \hline
VII & 1000 & 120 & 80 & 40 & 5 & 1 & 4 & -2 & 100 & 1000 \\ \hline
VIII & 1000 & 120 & 80 & 40 & 5 & 0.1 & 4 & -2 & 100 & 1000 \\ \hline
IX-XVI & \multicolumn{10}{c|}{Identical to I-VIII but $\mu_X=\mu_{\beta}=0$} \\ \hline
\end{tabular}
\caption{Scenario specification for regression} \label{regtable}
\end{center}
\end{table}
For the loss-guided Stability Selection, we always use a $q$-grid $q_{grid}=\{1,2,...,10\}$. We set $\pi_{thr}=0.25$ and $q_0=20$ for PSS-ES resp. $q_0=50$ for PSS-FW and PSS-BW.
In Fig. \ref{scenIXVI}, we can observe that our loss-guided Stability Selection and PSS-ES lead to the highest precision. More precisely, they achieve from at least double to more than five times the precision of $L_2$-Boosting in scenarios I-VIII, and from at least 2.8 times up to nearly 9 times its precision in scenarios IX-XVI. The forward and backward strategies, while never significantly worse than Boosting, achieve much lower precision than the other variants. We believe that this results from the reduced data availability, as these strategies are based solely on $\mathcal{D}^{val}$ while PSS-ES takes both $\mathcal{D}^{train}$ and $\mathcal{D}^{val}$ into account when finding the stable model. The clear dominance of the forward over the backward search, however, can hardly be explained. One can also observe a tendency of PSS-ES to be slightly inferior to the loss-guided Stability Selection.
\begin{figure}\label{scenIXVI}
\end{figure}
As for the dependence of the performances on the data configuration, it is not surprising that noisy data are far more challenging than data with less noise, leading to a much lower precision on the scenarios with $\SNR=0.1$. The impact of changing $p$ does not seem to be considerable since there is no clear tendency when comparing the results of scenarios I, II, IX and X ($p$ high, $n$ high) with those of scenarios III, IV, XI and XII ($p$ low, $n$ high). Scenarios V, VI, XIII and XIV ($p$ low, $n$ low) reveal that decreasing $n$ leads to worse results while the performance on scenarios VII, VIII, XV and XVI ($p$ high, $n$ low) is significantly worse than for scenarios I, II, IX and X, so there is a clear joint impact, as expected.
It is noteworthy that the precision in scenarios IX-XVI is comparable to that in scenarios I-VIII although the coefficients tend to be weak, while the number of true positives is lower in nearly all scenarios. Moreover, the regressors are centered in IX-XVI while having mean $-2$ in scenarios I-VIII. Mathematically speaking, the effect of the weak coefficients is just that the norms of the response and error vectors are smaller (in expectation) in scenarios IX-XVI than in I-VIII; \cite{bu10}, however, distinguish between ``active'' variables, i.e., those with a non-zero coefficient, and ``relevant'' variables, whose coefficient exceeds some threshold in absolute value. Oddly, the precision even tends to increase on the scenarios with a low $\SNR$, especially in scenarios VIII and XVI, which are the most challenging ones.
Summarizing, our loss-guided Stability Selection and PSS-ES perform well on all scenarios by achieving a drastically higher precision than standalone Boosting, without suffering from any issue like resulting in an empty model.
\subsubsection{Comparison of our Stability Selection with Boosting for classification}
We now consider 8 classification scenarios specified in Tab. \ref{classtable}. For the loss-guided Stability Selection, we always use a $q$-grid $q_{grid}=\{1,2,...,10\}$. We set $\pi_{thr}=0.25$ and $q_0=15$ since \texttt{bestglm} does not seem to support exhaustive, forward or backward search for classification with more than 15 variables.
\begin{table}[H]
\begin{center}
\begin{tabular}{|p{1.5cm}|p{0.75cm}|p{1cm}|p{0.75cm}|p{0.75cm}|p{0.5cm}|p{1cm}|p{0.5cm}|p{0.5cm}|p{0.75cm}|p{0.75cm}|} \hline
Scenario&$p$ & $n_{train}$ & $n_{sub}$ & $n_{val}$ & $s^0$ & $\widehat{\SNR}$ & $\mu_{\beta}$ & $\mu_X$ & $B$ & $V$ \\ \hline
XVII & 1000 & 300 & 200 & 100 & 5 & 337.6 & 4 & -2 & 100 & 1000 \\ \hline
XVIII & 100 & 300 & 200 & 100 & 5 & 337.6 & 4 & -2 & 100 & 1000 \\ \hline
XIX & 100 & 120 & 80 & 40 & 5 & 340.4 & 4 & -2 & 100 & 1000 \\ \hline
XX & 1000 & 120 & 80 & 40 & 5 & 340.8 & 4 & -2 & 100 & 1000 \\ \hline
XXI & 1000 & 300 & 200 & 100 & 5 & 20.3 & 0 & 0 & 100 & 1000 \\ \hline
XXII & 100 & 300 & 200 & 100 & 5 & 19.7 & 0 & 0 & 100 & 1000 \\ \hline
XXIII & 100 & 120 & 80 & 40 & 5 & 19.9 & 0 & 0 & 100 & 1000 \\ \hline
XXIV & 1000 & 120 & 80 & 40 & 5 & 20.2 & 0 & 0 & 100 & 1000 \\ \hline
\end{tabular}
\caption{Scenario specification for classification. $\widehat{\SNR}$ refers to the average of the SNR's of the individual data sets} \label{classtable}
\end{center}
\end{table}
\begin{figure}\label{scenXVIIXXIV}
\end{figure}
Our loss-guided Stability Selection and PSS-ES achieve from at least the same to more than 10 times the precision of \texttt{LogitBoost} in scenarios XVII-XXIV. Note that the loss-guided Stability Selection achieves a precision higher than 99\% in scenarios XVII-XXI, which may result from the seemingly very large $\SNR$; nevertheless, the Boosting precision is far from that value in all scenarios.
The data configuration clearly affects the Boosting precision, while the precision of the loss-guided Stability Selection and PSS-ES is nearly unaffected. The most striking difference occurs once the coefficients are weak: besides the number of true positives decreasing significantly, there is a serious loss in the Boosting precision, while the precision of the loss-guided Stability Selection and PSS-ES decreases only slightly. The PSS-FW and PSS-BW strategies again show rather fragile behaviour.
Summarizing, the loss-guided Stability Selection and the PSS-ES show an excellent performance on all scenarios, drastically increasing the precision in comparison to \texttt{LogitBoost}.
\subsection{Real data} \label{realsec}
Finally, we consider the \texttt{riboflavin} data set from the package \texttt{hdi} (\cite{hdi}) which contains $n=71$ observations with $p=4088$ metric predictor variables. The response variable \texttt{y} is the log-transformed riboflavin production rate.
We use $B=50$ (which for Hofner's Stability Selection means to sample 50 complementary pairs) and $L_2$-Boosting as base algorithm. We partition the data 100 times according to $n_{train}=50$, $n_{val}=10$ and $n_{test}=11$. For Hofner's Stability Selection, we use the default Shah-Samworth sampling scheme and apply a grid search for each of the 50 combinations of $Q \in \{11,12,...,20\}$ and $PFER \in \{1,2,3,4,5\}$ and only report the model and test loss for the best model. For our variants, we use $n_{sub}=35$ and use $\pi_{thr}=0.25$ and $q_0=50$ resp. $\pi_{thr}=0.1$ and $q_0=50$ for PSS-ES. As for the grid in the loss-guided Stability Selection, we apply different $q$-grids, namely a) $\{11,...,20\}$; b) $\{1,...,20\}$; c) $\{1,...,10\}$; d) $\{21,...,30\}$ and e) $\{1,...,30\}$.
While these configurations serve as base scenario (call it \textbf{R1}), we also study the effect of $n_{sub}$ by setting it to $n_{sub}=25$ for our Stability Selection variants while keeping everything else as before resp. the grid as in a) (\textbf{R2}) and the effect of the partition by considering $n_{train}=40$, $n_{val}=16$, $n_{test}=15$ while keeping everything else as in R1 and while using only the grid as in a) (\textbf{R3}).
In Fig. \ref{scenribo}, we depict the mean number of selected variables and the mean test losses. Note that the results for Hofner's Stability Selection and $L_2$-Boosting in scenario R1 and R2 are identical as the subsample size here only affects our Stability Selection variants.
We can observe that, as expected, Hofner's Stability Selection leads to the sparsest models while raw $L_2$-Boosting selects the richest ones. Our loss-based Stability Selection chooses quite a lot of variables if the $q$-grid allows it; of course, the mean number of selected variables is smallest for grid c). One should note the difference between grids a) and b) and between d) and e): in a) and d), there is a lower bound on the number of stable variables which is absent in b) and e). Since the average number of stable variables is lower in b) resp. e) than in a) resp. d), there were data partitions that resulted in models with fewer than 11 variables in b) and fewer than 21 variables in e). Interestingly, PSS-ES selects a rather sparse stable model which is nevertheless richer than the one selected by the original Stability Selection. The effect of the subsample size or the data partition on the number of stable variables seems to be rather limited.
\begin{figure}\label{scenribo}
\end{figure}
Concerning the test losses, we can make the following observations. Our loss-guided Stability Selection variants achieve a better test loss than the original Stability Selection. The astounding result is that the rather sparse model selected by PSS-ES indeed achieves a much better test performance than all other stable methods, except for R1d) where the loss-guided Stability Selection outputs rather rich models which, however, achieve only a slightly better test performance. We interpret this result as follows. There are relevant variables that achieve a very high aggregated selection frequency and enter all stable sets. There are also many non-relevant variables which are nevertheless persistent and are selected on many subsamples. These are discarded by the original Stability Selection, which then cannot select relevant variables with even lower aggregated selection frequencies; therefore this method leads to very sparse models. Our loss-guided Stability Selection includes more variables, including these persistent noise variables, until their inclusion becomes unfavorable in terms of the validation loss. It is therefore capable of selecting much richer models than the original Stability Selection, but remains bound to the ordering of the variables. PSS-ES, in contrast, reliably picks variables that improve the test loss, disregarding their concrete ordering in terms of aggregated selection frequencies as long as each frequency is at least $\pi_{thr}$.
In this experiment, all stable models lead to a worse performance than the model selected by $L_2$-Boosting, which is not that surprising on such a high-dimensional data set: one can expect the number of relevant variables to be rather high, so stable models are doomed to underfit. In our interpretation, the false positives selected by $L_2$-Boosting have a rather small effect as the corresponding coefficients can be expected to be small in absolute value. Therefore, the rich $L_2$-Boosting model works better overall than the stable models, as the benefit of including further true variables outweighs the performance decrease due to overfitting. We emphasize again that an important advantage of sparse stable models is their better interpretability and their guidance of future experiments, since one can then focus on far fewer variables.
We have also run the simulations with $m_{iter}=300$, but this leads to such rich $L_2$-Boosting models that the number of variables exceeds the number of observations. The stable models, in contrast, were not affected, which is not surprising since increasing the iteration number only includes some additional variables while maintaining those selected during the first 100 iterations. The ordering of the variables would only be altered if some of these late variables were to attain higher aggregated selection frequencies than the early ones, which is not the case and clearly not expected. As for PSS-ES, the number $q_0$ of meta-stable variables is quickly attained in most cases, even for $m_{iter}=100$, so additional variables from a larger number of iterations would simply be discarded here.
\section{Conclusion and outlook} \label{outlook}
We have shown that the standard Stability Selection where the hyperparameters are chosen manually may fail on high-dimensional noisy data due to severe underfitting, potentially caused by both inappropriate hyperparameter settings by the analyst as well as the very strict false-positive avoiding paradigm. We presented a loss-guided Stability Selection variant that chooses the hyperparameters in a data-driven way according to a grid search w.r.t. the out-of-sample loss.
Motivated by the issue that true variables may get lower aggregated selection frequencies than some noise variables so that even our loss-guided Stability Selection cannot include these true variables without including the respective noise variables, we proposed another variant called Post Stability Selection exhaustive search (PSS-ES) which selects a meta-stable predictor set and performs an exhaustive search on this set.
The computational overhead of our Stability Selection variants is limited since the additional operations are executed on already sparse predictor sets. One can show that the complexity falls into the same $\mathcal{O}$-class as that of the original Stability Selection. Admittedly, the original Stability Selection may be preferable in the case of strong signals, i.e., high signal-to-noise ratios, since the relevant variables then generally enter the models quickly.
Essentially, we do not pay too much attention to error control but rather concentrate on finding a well-performing model instead of an overly parsimonious or even empty one, which can be interpreted as a relaxed perspective in contrast to that of \cite{bu10} or \cite{hofner15}. Although we lose the error control that the original Stability Selection provided, our Stability Selection variants are user-friendly as they do not require expert knowledge, intuitive as they allow for fixing a range for the number of variables in the stable model instead of some threshold, and, at least when using a grid over these numbers of stable variables, guaranteed to result in a non-empty model.
Our Stability Selection variants reliably achieve substantially higher precision, up to a factor of 10, compared to pure Boosting, in a plethora of simulated scenarios for regression and classification whose data configurations range from easy to very challenging (large number of predictors, low number of observations, weak coefficients, very large noise variance). We also applied our Stability Selection variants to a real high-dimensional data set where they lead to reasonable models and where PSS-ES proves capable of disregarding the variable ordering in terms of selection frequencies in favor of better stable models.
\setcitestyle{authoryear,open={((},close={))}}
\end{document} |
\begin{document}
\prvastrana=381
\poslednastrana=6
\defM. Fortunato et al{M. Fortunato et al}
\defQuantum control of chaos{Quantum control of chaos}
\headings{381}{386}
\setcounter{page}{381}
\title{Quantum Control of Chaos inside a Cavity}
\author{M. Fortunato\footnote{\email{[email protected]}},
W. P. Schleich\footnote{Also at Max-Planck Institut
f\"ur Quantenoptik, D-85748 Garching bei M\"unchen, Germany}}
{Abteilung f\"ur Quantenphysik, Universit\"at Ulm,
Albert-Einstein-Allee 11 \\ D-89069 Ulm, Germany}
\author{G. Kurizki}
{Department of Chemical Physics, The Weizmann Institute of Science \\
Rehovot 76100, Israel}
\datumy{31 May 1996}{7 June 1996}
\abstract{By sending many two-level atoms through a cavity resonant
with the atomic transition, with the interaction times between the
atoms and the cavity randomly distributed, we end up with a
predetermined Fock state of the electromagnetic field inside the
cavity, provided that after the interaction with the cavity we
perform a conditional measurement of the internal state of each atom
in a coherent superposition of its ground and excited states.
Differently from previous schemes, this procedure turns out to be
very stable under fluctuations in the interaction times.}
\kapitola{1. Introduction}
In the last decade, a great deal of attention has been dedicated to
the problem of quantum state preparation [1-5], since the
availability of non-classical states can allow the investigation of
fundamental problems in Quantum Mechanics [1]. Among them, Fock
states [2] are particularly intriguing because they do not
present intensity fluctuations.
Two major approaches to achieve this goal have been
proposed: the first one is based on unitary evolution [3],
that is on finding the right Hamiltonian which
evolves the initial state to the desired final one. The second approach is
based on the conditional measurement (CM) scheme [4] in which
the desired state is achieved after a measurement is performed on one of
two interacting systems. The CM approach has the disadvantage
that unsuccessful runs (experiments
in which the measurement does not give the right result) must be discarded,
and therefore it has a success probability which is always less than unity.
On the other hand, it has the clear advantage of a simple Hamiltonian
evolution, as, for example, the Jaynes-Cummings model.
In this paper, we present a new scheme which---differently from previous
ones [5]---allows the preparation of Fock states inside a cavity even
in the presence of large fluctuations in the interaction times between
the two-level atoms and the cavity field. It is based on the CM
approach and on the quantum interference between the two possible
final states of the atom. Together, these two effects strongly
suppress the influence of fluctuations in the atomic velocities and
make possible the convergence of the photon-number distribution
towards that of a number state.
Our proposal connects as well to the recently introduced field of
chaos control [6]. In fact, it has been shown theoretically
and experimentally that it is possible to use the extreme sensitivity
of chaotic systems to stabilise regular periodic orbits in the chaotic
dynamics. The classical version of our model, implemented
via non-selective measurements (NSMs), is indeed chaotic even for a
small spread in the interaction times [5].
Our CM scheme could then be interpreted as a ``quantum way'' of
controlling chaos: it effectively restores fixed points in the quantum
dynamics of a classically chaotic system.
\kapitola{2. The model}
We consider a model in which many two-level atoms are sent through
a cavity whose frequency is resonant with the atomic transition.
The atoms cross the cavity sequentially (one at a time) so that at most
one atom is present inside the cavity.
In the general case, the atoms are initially prepared in a coherent
superposition of their ground and excited states [7] with
the help of two classical fields $E_1$ (resonant) and $E_2$ (non-resonant).
After the preparation, the state of the $k$th atom is
$
|\phi_k^{(i)}\rangle = \alpha^{(i)}_k |e\rangle + \beta^{(i)}_k|g\rangle\; ,
$
where $|g\rangle$ and
$|e\rangle$ are the ground and the excited state of the atom, respectively.
On the other hand, the cavity field is initially prepared by a classical
oscillator $E_3$ in a coherent
state
\begin{equation}
|\psi_0\rangle = \sum_{n=0}^{\infty} d_n^{(0)} |n\rangle =
\exp\left(-{|\alpha|^2 \over 2}\right) \sum_{n=0}^{\infty}
{\alpha^n\over \sqrt{n!}} |n\rangle\; .
\label{eq:infstate}
\end{equation}
The interaction between the atoms and the cavity is described [8]
by the resonant Jaynes-Cummings model, namely, the total Hamiltonian
of the system (atom and field) is given by
\begin{equation}
\hat{H} = \frac{1}{2}\hbar\omega\hat{\sigma}_z+\hbar\omega
\left(\hat{a}^\dagger\hat{a}\right)+\hbar g\left(\hat{a}^\dagger
\hat{\sigma}_-+\hat{a}\hat{\sigma}_+\right)
= \hat{H}_0+\hat{H}_{\rm int}\; ,
\label{eq:totham}
\end{equation}
where $\omega$ is the resonance frequency of the atoms and of the cavity,
$\hat{a}$ and $\hat{a}^\dagger$ are the usual annihilation and creation
operators for the field mode, $\hat{\sigma}_i$ are the Pauli operators and
$
\hat{H}_{\rm int} = \hbar g\left(\hat{a}^\dagger
\hat{\sigma}_-+\hat{a}\hat{\sigma}_+\right)\;.
$
In Eq.~(\ref{eq:totham}) $g$ denotes the coupling constant between the
atoms and the field mode.
The atoms are detected, after they have passed through the cavity,
in the coherent superposition
$
|\phi^{(f)}_{k}\rangle = \alpha^{(f)}_k |e\rangle +
\beta^{(f)}_k|g\rangle\;,
$
again thanks to two classical fields with which
the atoms interact after they exit the cavity: $E_4$ (non-resonant)
and $E_5$ (resonant), like in the preparation region but in
reverse order [7].
The problem can be treated iteratively: we find the recurrence
relation between the coefficients of the Fock basis expansion of the
field state inside the cavity after the interaction and conditional
measurement of the $k$th atom and the corresponding coefficients
before [i.e., after the $(k-1)$th atom]. By repeatedly applying this
recurrence relation, we can compute the coefficients in the number
basis of the final field state (after a sequence of $N$ atoms),
starting from the initial coherent state, Eq.~(\ref{eq:infstate}).
For convenience we will work in the
interaction picture [where $\hat{H}_{\rm int}$ is regarded as the interaction
part of the Hamiltonian~(\ref{eq:totham})], and we will assume that
the resonant fields $E_1$, $E_3$, and $E_5$ are phase-locked.
In what follows we neglect spontaneous emission (since the transit
time of the atoms is much smaller than the typical decay time) and
any dissipation inside the cavity, assuming that the time required
for the whole sequence of atoms is much smaller than the cavity
lifetime.
Computing the evolved atom-field entangled state through the
unitary evolution given by the Hamiltonian~(\ref{eq:totham}),
and then projecting it onto the final atomic state, the following
recurrence relation between the state of the field
$|\psi_k\rangle=\sum_n d_n^{(k)}|n\rangle$ after the $k$th atom
and the corresponding state after the $(k-1)$th atom can be found
\begin{eqnarray}
d_n^{(k)} = P_k^{-1/2}\left\{ \left[
\alpha_k^{(i)}\alpha_k^{(f)*} C_n^{(k)}\right.\right.
& + & \left.\left.\beta_k^{(i)}\beta_k^{(f)*}C_{n-1}^{(k)}\right]
d_n^{(k-1)}
- i\alpha_k^{(i)}\beta_k^{(f)*}S_{n-1}^{(k)} d_{n-1}^{(k-1)}
\right.
\nonumber \\
& - & \left. i\beta_k^{(i)}\alpha_k^{(f)*}S_n^{(k)} d_{n+1}^{(k-1)}\right\}\;,
\label{eq:transf}
\end{eqnarray}
where $C_n^{(k)}=\cos(g\tau_k\sqrt{n+1})$,
$S_n^{(k)}=\sin(g\tau_k\sqrt{n+1})$, and $P_k$ is the
success probability of the CM, which is given by
the norm of the projection onto the final atomic state.
In Eq.~(\ref{eq:transf}) it is understood that $d_{-1}^{(k-1)}=0$.
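As a concrete illustration, the recurrence~(\ref{eq:transf}) can be iterated numerically in a truncated Fock basis. The following sketch is ours (the function names and the truncation at $N$ photons are not from the paper); it renormalizes by the success probability $P_k$ at each step:

```python
import numpy as np

def coherent_state(alpha, N):
    """Coefficients d_n of a coherent state |alpha>, truncated at N photons."""
    d = np.zeros(N, dtype=complex)
    d[0] = np.exp(-abs(alpha) ** 2 / 2)
    for n in range(1, N):
        d[n] = d[n - 1] * alpha / np.sqrt(n)
    return d

def apply_atom(d, g, tau, a_i, b_i, a_f, b_f):
    """One step of the recurrence (eq:transf): atom prepared in
    a_i|e> + b_i|g>, detected in a_f|e> + b_f|g>, interaction time tau."""
    n = np.arange(len(d))
    C,  S  = np.cos(g * tau * np.sqrt(n + 1)), np.sin(g * tau * np.sqrt(n + 1))
    Cm, Sm = np.cos(g * tau * np.sqrt(n)),     np.sin(g * tau * np.sqrt(n))
    dm = np.concatenate(([0], d[:-1]))   # d_{n-1}^{(k-1)}, with d_{-1} = 0
    dp = np.concatenate((d[1:], [0]))    # d_{n+1}^{(k-1)} (basis truncation)
    d_new = ((a_i * np.conj(a_f) * C + b_i * np.conj(b_f) * Cm) * d
             - 1j * a_i * np.conj(b_f) * Sm * dm
             - 1j * b_i * np.conj(a_f) * S * dp)
    P = float(np.sum(np.abs(d_new) ** 2))   # success probability P_k of the CM
    return d_new / np.sqrt(P), P

# excited atom (a_i = 1, b_i = 0) detected in |e>, fixed interaction time
d = coherent_state(3.0, 80)
d, P = apply_atom(d, g=1.0, tau=1.0, a_i=1, b_i=0, a_f=1, b_f=0)
```

Iterating `apply_atom` over a sequence of atoms reproduces the evolutions discussed below; the truncation must be chosen well above the photon numbers of interest.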
\kapitola{3. Field state dynamics in the presence of random fluctuations}
In this section we study the behaviour of the final field state
(after many atoms have passed through the cavity) when we
allow a spread in the atomic velocities, that is in the interaction
times of the atoms with the cavity.
The JC model has already been proposed [5] for the
production of Fock states of the electromagnetic field inside a
cavity, in connection with NSMs.
That model, however, is very sensitive to even a small spread in
atomic velocities [5], which eventually makes the system
escape any fixed points in the evolution of the photon number distribution.
As a consequence, such a scheme---notwithstanding its great pioneering
value---is of no practical use in the production of Fock states, since
any velocity selector for atomic beams allows a spread in the atomic
velocities. In that approach, the convergence to a Fock state is due to
the existence of the well known ``trapping states'' in
the Jaynes-Cummings evolution [8]. However, in the case of
$|e\rangle \rightarrow |e\rangle$ (elastic CMs) or $|e\rangle\rightarrow
|g\rangle$ (inelastic CMs) schemes, if the interaction times $\tau_k$
fluctuate randomly with $k$, there is a critical value of the spread
$\Delta\tau$ above which the number
distribution will broaden rather than converge. We can estimate this
critical value $\Delta\tau_c$ as the difference between the trapping
and the anti-trapping interaction times for a given $n$ [8], namely,
\begin{equation}
\Delta\tau_c\cong{\pi\over g \sqrt{n+1}}-{\pi\over 2 g \sqrt{n+1}}
={\pi\over 2 g \sqrt{n+1}}\; ,
\label{eq:dtcrit}
\end{equation}
where the sub-ensemble of $|e\rangle \rightarrow |e\rangle$
CMs has been considered.
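For quick experimentation, Eq.~(\ref{eq:dtcrit}) translates directly into code (a trivial helper of ours; the function names are not from the paper):

```python
import numpy as np

def trapping_times(n, g):
    """Trapping (g*tau*sqrt(n+1) = pi) and anti-trapping
    (g*tau*sqrt(n+1) = pi/2) interaction times for photon number n."""
    return np.pi / (g * np.sqrt(n + 1)), np.pi / (2 * g * np.sqrt(n + 1))

def critical_spread(n, g):
    """Delta-tau_c of Eq. (dtcrit): the gap between the two times above."""
    t_trap, t_anti = trapping_times(n, g)
    return t_trap - t_anti   # equals pi / (2 g sqrt(n+1))
```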
This phenomenon is shown in Fig.~1, where we plot for
$\Delta\tau=\Delta\tau_c$ the mean
value $\langle n \rangle$ and the rms spread $\Delta n =(\langle n^2
\rangle - \langle n \rangle^2)^{1/2}$ as a function of the number of
atoms injected into the cavity in their excited state
and detected afterwards in the same excited state.
Even though for $\Delta\tau\ll\Delta\tau_c$ a convergence
towards a Fock state is still possible, for $\Delta\tau\approx\Delta\tau_c$
such a convergence is completely destroyed: the system escapes every fixed
point because the trapping condition is different for each atom.
\begin{figure}
\caption{Fig.~1. Evolution of $\langle n \rangle$ and $\Delta n$ versus $k$
(number of atoms) in the scheme of elastic CMs in the case of a
large spread ($\Delta\tau=\Delta\tau_c$) in the interaction times.
The initial coherent state is $|\alpha\rangle = |3\rangle$.}
\label{fg:ee}
\end{figure}
In spite of this, we will show that it is possible to restore the
convergence to fixed points even for large fluctuations
($\Delta\tau > \Delta\tau_c$), if we allow the presence of quantum
interference between the sub-ensembles $|e\rangle\rightarrow |e\rangle$
and $|e\rangle \rightarrow |g\rangle$.
We explain this effect with a simple argument. Let us suppose that
the initial state of each atom is the excited one so that the general
transformation~(\ref{eq:transf}) simplifies to
\begin{equation}
d_n^{(k)} = P_k^{-1/2} \left[ \alpha_k^{(f)*} C_n^{(k)} d_n^{(k-1)}
-i\beta_k^{(f)*}S_{n-1}^{(k)} d_{n-1}^{(k-1)}\right]\;,
\label{eq:transfe}
\end{equation}
where the coefficients of the final atomic
superposition are expressed [9]
in terms of the Rabi frequency $\Omega_k^{(f)}$ and of the interaction
time $T_k^{(f)}$ with the resonant classical field $E_5$ according to
$
\alpha_k^{(f)} = \cos\left(\Omega_k^{(f)}T_k^{(f)}/2\right)
$
and
$
\beta_k^{(f)} = \sin\left(\Omega_k^{(f)}T_k^{(f)}/2\right)e^{i\varphi_f}\;.
$
If we now choose $\varphi_f\simeq -\pi/2$, and neglect the difference
between $n$ and $n-1$, that is $d_{n-1}^{(k-1)}\simeq d_n^{(k-1)}$ and
$S_{n-1}^{(k)}\simeq S_n^{(k)}$, Eq.~(\ref{eq:transfe})
approximately reads
\begin{equation}
d_n^{(k)}\simeq P_k^{-1/2}\cos\left(\Omega_k^{(f)}T_k^{(f)}/2-g\tau_k\sqrt{n+1}
\right) d_n^{(k-1)}\; .
\label{eq:transfes}
\end{equation}
Since the atoms (with thermal velocity) cross first the cavity and then
the classical field, $T_k^{(f)}$ and $\tau_k$ in Eq.~(\ref{eq:transfes})
are correlated, even if they are random. This yields a strong suppression
of the fluctuations in the argument of the cosine in~(\ref{eq:transfes}).
This is shown in Fig.~2 where we plot (for the scheme
$|e\rangle \rightarrow |e\rangle + |g\rangle$) $\langle n\rangle$ and
$\Delta n$ for fixed interaction times, and for small and large spreads
in the interaction times. Notwithstanding the large fluctuations, the
convergence towards the desired Fock state is still very good. This is
confirmed by the final photon-number distribution $P(n)$, which corresponds to
that of a number state: all the $P(n)$ vanish except one, $P(21)=1$.
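This cancellation can be checked with a toy calculation: since $\tau_k$ and $T_k^{(f)}$ are both inversely proportional to the same atomic velocity, writing them as a common random factor times their nominal values makes the argument of the cosine in~(\ref{eq:transfes}) velocity-independent once the nominal pulse area is matched to $g\tau_0\sqrt{n+1}$. All numerical values here are illustrative, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
g, n = 1.0, 21
tau0 = np.pi / (g * np.sqrt(n + 1))      # nominal cavity crossing time
phi0 = g * tau0 * np.sqrt(n + 1)         # matched pulse area Omega*T0/2

s  = rng.normal(1.0, 0.3, 100_000)       # common factor: tau_k, T_k ~ 1/v_k
s2 = rng.normal(1.0, 0.3, 100_000)       # hypothetical independent fluctuations

arg_corr  = phi0 * s  - g * (s  * tau0) * np.sqrt(n + 1)  # correlated: vanishes
arg_indep = phi0 * s2 - g * (s  * tau0) * np.sqrt(n + 1)  # uncorrelated: broad

print(np.std(arg_corr), np.std(arg_indep))
```

The correlated spread is zero up to rounding, while the uncorrelated one is of order the pulse area times the velocity spread.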
\begin{figure}
\caption{Fig.~2. Convergence towards a Fock state---even for a large spread in
the interaction times---in the scheme $|e\rangle \rightarrow
|e\rangle + |g\rangle$. Top: behaviour of $\langle n\rangle$ (left)
and
$\Delta n$ (right) versus $k$ (number of atoms) in the case of
fixed interaction times; middle: the same for small fluctuations
in the interaction times ($\Delta\tau = \Delta\tau_c/10$);
bottom: the same for large spread in the interaction times
($\Delta\tau = 2\Delta\tau_c$). In all cases the initial coherent
state is $|\alpha\rangle = |\protect\sqrt{21}\rangle$.}
\label{fg:int}
\end{figure}
\kapitola{4. Discussion and Conclusions}
In this paper we have presented a novel scheme which is able to
produce preselected Fock states inside a cavity in which a coherent
state was initially prepared. The final number state is achieved by
sending many two-level atoms in their excited state through the
cavity and by performing a conditional measurement of their internal
degree of freedom in a superposition of the ground and excited states
after they leave the cavity. The proposed scheme---differently from
previous ones [5]---is quite effective and immune even
to large fluctuations in the interaction times between the atoms and
the cavity field. This is achieved essentially thanks to two basic
ingredients: (a) the conditional measurement of the final state of the
atom, and (b) the quantum interference between the two possible atomic
states ($|e\rangle$ and $|g\rangle$) after the interaction with the
cavity. Since the classical NSMs counterpart of our model is chaotic
(in the regime of random interaction times), and has a quantum dynamics
similar to the classical one, such a striking behaviour
suggests an analogy with recently proposed methods [6] of
controlling classical chaos. These methods, mainly based on classical
feedback, use the extreme sensitivity of chaotic systems to small
perturbations in order to stabilise regular periodic orbits in the
chaotic dynamics. In this perspective, our method can be considered
as a novel (fully quantum) way of stabilising---even for large
fluctuations---fixed points in the quantum dynamics of a system which
is classically chaotic.
\noindent {\bf Acknowledgements}
\noindent
We acknowledge helpful and stimulating discussions with I. Averbukh,
R. Chiao, and K. Vogel. M.~F. would like to thank the European
Community (Human Capital and Mobility programme) for support, and
Prof. Gershon Kurizki and his group for the kind hospitality at the
Weizmann Institute of Science, Rehovot, Israel.
\small
\kapitola{References}
\begin{description}
\itemsep0pt
\item{[1]} J.~A.~Wheeler and W.~H.~Zurek (eds.), {\it Quantum Theory and
Measurement} (Princeton University Press, 1983);
\item{[2]} For previously proposed ways of producing Fock states, see:
\refer{J.~Krause, M.~O.~Scully, T.~Walther, and H.~Walther}
{Phys. Rev. A}{39}{1989}{1915}
J.~M.~Raimond {\it et al.}, in {\it Laser Spectroscopy IX},
edited by M.~S.~Feld, J.~E.~Thomas, and A.~Mooradian
(Academic Press, New York, 1989);
\refer{J.~R.~Kuklinski}{Phys. Rev. Lett.}{64}{1990}{2507}
\item{[3]} \refer{W.~E.~Lamb}{Physics Today}{22}{1969}{23} for a notable
example of this approach, see:
\refer{A.~S.~Parkins, P.~Marte, P.~Zoller, and H.~J.~Kimble}
{Phys. Rev. Lett.}{71}{1993}{3095}
\item{[4]} The CM approach has been suggested
by \refer{B.~Sherman and G.~Kurizki}{Phys. Rev. A}{45}{1992}{R7674}
and developed by \refer{B.~M.~Garraway, B.~Sherman, H.~Moya-Cessa,
P.~L.~Knight, and G.~Kurizki}{Phys. Rev. A}{49}{1994}{535}
see also \refer{K.~Vogel, V.~M.~Akulin,
and W.~P.~Schleich}{Phys. Rev. Lett.}{71}{1993}{1816}
\item{[5]}
\refer{P.~Filipowicz, J.~Javanainen, and P.~Meystre}
{J. Opt. Soc. Am. B}{3}{1986}{906}
\item{[6]} The use of small perturbations to control
chaos has been introduced by \refer{E.~Ott, C.~Grebogi, and
J.~A.~Yorke}{Phys. Rev. Lett.}{64}{1990}{1196}
it has been demonstrated experimentally by
\refer{E.~R.~Hunt}{Phys. Rev. Lett.}{67}{1991}{1953}
successive developments can be found, {\it e.g.}, in
\refer{S.~Boccaletti and F.~T.~Arecchi}
{Europhys. Lett.}{31}{1995}{127}
for a review, see \refer{T.~Shinbrot,
C.~Grebogi, E.~Ott, and J.~A.~Yorke}{Nature}{363}{1993}{411}
\item{[7]}
\refer{G.~Harel, G.~Kurizki, J.~K.~McIver, and E.~Coutsias}
{Phys. Rev. A}{53}{1996}{}
\item{[8]} P.~Meystre, in {\it Progress in Optics XXX},
E.~Wolf (ed.), Elsevier (1992);
\item{[9]} L.~Allen and J.~H.~Eberly, {\it Optical Resonance and Two-Level
Atoms} (Dover, New York, 1987);
\end{description}
\end{document}
\begin{document}
\begin{abstract}
We prove that for a polynomial diffeomorphism of ${\cc^2}$,
the support of any invariant measure, apart from a few obvious cases, is contained in the closure of the
set of saddle periodic points.
\end{abstract}
\maketitle
\section{Introduction and results}
Let $f$ be a polynomial diffeomorphism of ${\cc^2}$ with non-trivial dynamics. This hypothesis can
be expressed in a variety of ways; for instance, it is equivalent to the positivity of topological entropy.
The dynamics of such transformations has attracted a lot of attention in the past few decades (the reader can consult
e.g. \cite{bedford} for basic facts and references).
In this paper we make the standing assumption that $f$ is dissipative, i.e.
that the (constant) Jacobian of $f$ satisfies $\abs{\jac(f)}<1$.
We classically denote by $J^+$ the forward Julia set,
which can be characterized as usual in terms of normal families, or by saying that
$J^+ = \partial K^+$, where $K^+$ is the set of points with bounded forward orbits. Reasoning analogously for backward iteration
gives the backward Julia set $J^- = \partial K^-$.
Thus the 2-sided Julia set is naturally defined by $J = J^+\cap J^-$.
Another interesting dynamically defined subset is
the closure $J^*$ of the set of saddle periodic points (which is also the support of the unique entropy maximizing
measure \cite{bls}).
The inclusion $J^*\subset J$ is obvious.
It is a major open question in this area of research whether the converse inclusion holds.
Partial answers have been given in \cite{bs1, bs3, connex, lyubich peters 2, peters guerini}.
\medskip
Let $\nu$ be an ergodic $f$-invariant probability measure. If
$\nu$ is hyperbolic, that is, its two Lyapunov exponents\footnote{Recall that in holomorphic dynamics,
Lyapunov exponents always have even multiplicity.} are non-zero and of opposite sign,
then the so-called Katok closing lemma \cite{katok} implies that $\supp(\nu) \subset J^*$. It may also be the case that $\nu$ is supported in the Fatou set: by the classification of recurrent Fatou components in \cite{bs2}, this happens if and only if $\nu$ is supported on an attracting or semi-Siegel periodic orbit, or is the Haar measure on
a cycle of $k$ circles along which $f^k$ is conjugate to an irrational rotation (recall that
$f$ is assumed dissipative). Here by semi-Siegel periodic orbit,
we mean a linearizable periodic orbit with one attracting and one
irrationally indifferent multiplier.
The following ``ergodic closing lemma'' is the main result of this note:
\begin{thm}\label{thm:main}
Let $f$ be a dissipative polynomial diffeomorphism of ${\cc^2}$ with non-trivial dynamics, and $\nu$ be any invariant measure supported on $J$. Then $\supp(\nu)$ is contained in $J^*$.
\end{thm}
A consequence is that if $J\setminus J^*$ happens to be
non-empty, then the dynamics on $J\setminus J^*$ is ``transient" in a measure-theoretic sense.
Indeed, if $x\in J$, we can form an invariant
probability measure by taking a cluster limit of $\unsur{n}\sum_{k=0}^n \delta_{f^k(x)}$
and the theorem says that any such invariant measure will be concentrated on $J^*$. More generally the same argument implies:
\begin{cor}
Under the assumptions of the theorem, if $x\in J^+$, then $\omega(x)\cap J^*\neq \emptyset$.
\end{cor}
Here as usual $\omega(x)$ denotes the $\omega$-limit set of $x$. Note that for $x\in J^+$ it is obvious
that $\omega(x)\subset J$.
It would be interesting to know whether the conclusion of the corollary can be replaced by the sharper one: $\omega(x)\subset J^*$.
\medskip
Theorem \ref{thm:main} can be formulated slightly more precisely as follows.
\begin{thm}\label{thm:precised}
Let $f$ be a dissipative polynomial diffeomorphism of ${\cc^2}$ with non-trivial dynamics, and $\nu$ be any ergodic
invariant probability measure. Then one of the following situations holds:
\begin{enumerate}[{(i)}]
\item either $\nu$ is atomic and supported on an attracting or semi-Siegel cycle;
\item or $\nu$ is the Haar measure on an invariant cycle of circles contained in a periodic rotation domain;
\item or $\supp(\nu)\subset J^*$.
\end{enumerate}
\end{thm}
Note that the additional ergodicity assumption on $\nu$ is harmless since any invariant measure is an integral of ergodic ones.
The only new ingredient with respect to Theorem \ref{thm:main}
is the fact that measures supported on periodic orbits that do not fall in case {\em (i)}, that is,
are either semi-parabolic or semi-Cremer, are supported on $J^*$. For semi-parabolic points this is certainly known to the experts although apparently not available in print.
For semi-Cremer points this follows from the hedgehog construction of Firsova, Lyubich, Radu and Tanase
(see \cite{lyubich radu tanase}). For completeness we give full proofs below.
\medskip
\noindent{\bf Acknowledgments.} Thanks to Sylvain Crovisier and Misha Lyubich for inspiring conversations.
This work was motivated
by the work of Crovisier and Pujals on strongly
dissipative diffeomorphisms (see \cite[Thm 4]{crovisier pujals}) and by the work of Firsova, Lyubich, Radu and Tanase
\cite{firsova lyubich radu tanase, lyubich radu tanase}
on hedgehogs in higher dimensions (and the question whether hedgehogs for Hénon maps are contained in $J^*$).
\section{Proofs}
In this section we prove Theorem \ref{thm:precised} by dealing separately with the atomic and the non-atomic case. Theorem \ref{thm:main} follows immediately. Recall that $f$ denotes a
dissipative polynomial diffeomorphism with non-trivial dynamics and $\nu$ an $f$-invariant ergodic probability measure.
\subsection{Preliminaries}
Using the theory of laminar currents, it was shown in \cite{bls} that any saddle periodic point belongs to $J^*$. More generally, if $p$ and $q$ are saddle points, then
$J^* = \overline{ {W^u(p)\cap W^u(q)}}$ (see Theorems 9.6 and 9.9 in \cite{bls}).
This result was generalized in \cite{tangencies} as follows. If $p$ is any saddle point and $X\subset W^u(p)$, we respectively
denote by $\mathrm{Int}_i X$, $\mathrm{cl}_i X$, $\partial_i X$ the interior, closure and boundary of
$X$ relative to the intrinsic topology of
$W^u(p)$, that is the topology induced by the biholomorphism $W^u(p)\simeq \mathbb{C}$.
\begin{lem}[{\cite[Lemma 5.1]{tangencies}}]\label{lem:homoclinic}
Let $p$ be a saddle periodic point.
Relative to the intrinsic topology in $W^u(p)$, $\partial_i(W^u(p)\cap K^+)$ is
contained in the closure of the set of transverse homoclinic intersections. In particular $\partial_i(W^u(p)\cap K^+)\subset J^*$.
\end{lem}
Here is another statement along the same lines, which can easily be extracted from \cite{bls}.
\begin{lem}\label{lem:entire}
Let $\psi:\mathbb{C}\to {\cc^2}$ be an entire curve such that $\psi(\mathbb{C})\subset K^+$. Then for any saddle point
$p$, $\psi(\mathbb{C})$ admits transverse intersections with $W^u(p)$.
\end{lem}
\begin{proof}
This is identical to the first half of the proof of \cite[Lemma 5.4]{tangencies}.
\end{proof}
We will repeatedly use the following alternative which follows from the combination of the two previous lemmas. Recall that
a Fatou disk is a holomorphic disk along which the iterates $(f^n)_{n\geq 0}$ form a normal family.
\begin{lem}\label{lem:inters}
Let $\mathcal{E}$ be an entire curve contained in $K^+$, $p$ be any saddle point, and $t$ be a transverse intersection point between
$\mathcal{E}$ and $W^u(p)$. Then either $t\in J^*$ or there is a Fatou disk $\Delta\subset W^u(p)$ containing $t$.
\end{lem}
\begin{proof}
Indeed, either $t\in \partial_i(W^u(p)\cap K^+)$ so by Lemma \ref{lem:homoclinic}, $t\in J^*$,
or $t\in \mathrm{Int}_i(W^u(p)\cap K^+)$. In the latter case,
pick any open disk $\Delta\subset\mathrm{Int}_i(W^u(p)\cap K^+)$ containing $t$. Since $\Delta$ is contained in
$K^+$, its forward iterates remain bounded so it is a Fatou disk.
\end{proof}
\subsection{The atomic case}
Here we prove Theorem \ref{thm:precised} when $\nu$ is atomic. By ergodicity, this implies that $\nu$ is concentrated on a single
periodic orbit.
Replacing $f$ by an iterate we may assume that it is concentrated on a fixed point $p$. Since $f$ is dissipative, $p$ must have an attracting eigenvalue. A first possibility is that $p$ is attracting or semi-Siegel. Then we are
in case {\em (i)} and there is nothing to say. Otherwise
$p$ is semi-parabolic or semi-Cremer and we must show that $p\in J^*$. In both cases, $p$ admits a strong stable manifold
$W^{ss}(p)$ associated to the contracting eigenvalue, which is biholomorphic to $\mathbb{C}$ by a theorem of Poincaré. Let $q$ be a
saddle periodic
point and $t$ be a point of transverse intersection between $W^{ss}(p)$ and $W^u(q)$. If $t\in J^*$, then
since $f^n(t)$ converges to $p$ as $n\to \infty$ we are done. Otherwise there is a non-trivial Fatou disk $\Delta$ transverse to $W^{ss}(p)$ at $t$.
Let us show that this is contradictory.
In the semi-parabolic case, this is classical. A short argument goes as follows (compare \cite[Prop. 7.2]{ueda}). Replace $f$ by an iterate so that the neutral eigenvalue is equal to 1.
Since $f$ has no curve of fixed points there are local coordinates $(x,y)$
near $p$ in which $p=(0,0)$, $W^{ss}_{\rm loc}(p)$ is the $y$-axis $\set{x=0}$
and $f$ takes the form
$$(x,y) \longmapsto (x+ x^{k+1}+ {h.o.t.} , by+ {h.o.t.})\ , $$ with $\abs{b}<1$
(see \cite[\S 6]{ueda}). Then $f^n $ is of the form
$$(x,y) \longmapsto (x+ n x^{k+1}+ {h.o.t.} , b^n y + {h.o.t.} ) \ , $$
so we see that $f^n$ cannot be normal along any disk transverse to the $y$-axis, and we are done.
In the semi-Cremer case we rely on the hedgehog theory of \cite{firsova lyubich radu tanase,
lyubich radu tanase}.
Let $\phi:\mathbb{D}\to \Delta$ be any parameterization, and fix local coordinates $(x,y)$ as before in which
$p=(0,0)$, $W^{ss}_{\rm loc}(p)$ is the $y$-axis
and $f$ takes the form $$(x,y) \longmapsto (e^{i2\pi\theta} x , by) + {h.o.t.}$$
Let $B$ be a small neighborhood of the origin in which the hedgehog is well-defined.
Reducing $\Delta$ and iterating a few times if necessary, we can
assume that for all $k\geq 0$, $f^k(\Delta)\subset B$ and $\phi$ is of the form $s\mapsto (s, \phi_2(s))$.
Then the first coordinate of
$f^n\circ \phi$ is of the form $s\mapsto e^{i2n\pi\theta} s+ {h.o.t.}$.
If $(n_j)_{j\geq 0}$ is a
subsequence such that $f^{n_j}\circ \phi$ converges to some $\psi = (\psi_1, \psi_2)$, we get that
$\psi_1(s) = \alpha s+ h.o.t.$,
where $\abs{\alpha} = 1$. Thus $\psi(\mathbb{D}) = \lim f^{n_j}(\Delta)$ is a non-trivial holomorphic
disk $\Gamma$ through 0 that is smooth at the origin.
For
every $k\in \mathbb{Z}$ we have that $f^{k}(\Gamma) = \lim f^{n_j+k}(\Delta)\subset B$.
Therefore by the local uniqueness of hedgehogs (see \cite[Thm 2.2]{lyubich radu tanase})
$\Gamma$ is contained in $\mathcal{H}$. It follows that $\mathcal{H}$ has non-empty relative interior in any local center
manifold of $p$ and
from \cite[Cor. D.1]{lyubich radu tanase} we infer that $p$ is semi-Siegel, which is the desired contradiction.
\subsection{The non-atomic case}
Assume now that $\nu$ is non-atomic. If $\nu$ gives positive mass to the Fatou set, then by
ergodicity it must give full mass to a cycle of recurrent Fatou components. These were classified in
\cite[\S 5]{bs2}: they are either attracting basins or rotation domains.
Since $\nu$ is non-atomic we must be in the second situation. Replacing $f$ by $f^k$ we may assume that we are
in a fixed
Fatou component $\Omega$. Then $\Omega$ retracts onto some Riemann surface $S$ which is biholomorphic
to a disk or an annulus and on which the
dynamics is that of an irrational rotation. Furthermore all orbits in $\Omega$ converge to $S$. Thus $\nu$ must give full mass to $S$, and since $S$ is foliated by invariant circles, by ergodicity $\nu$ gives full mass to a single circle. Finally the
unique ergodicity of irrational rotations implies that $\nu$ is the Haar measure.
\medskip
Therefore we are left with the case where $\supp(\nu)\subset J$, that is, we must prove Theorem \ref{thm:main}.
Let us start by recalling some facts on the Oseledets-Pesin theory of our mappings.
Since $\nu$ is ergodic, by the Oseledets theorem there exist
$1\leq k\leq 2$, a set $\mathcal R$ of full measure, and for $x \in \mathcal R$ a
measurable splitting of $T_x{\cc^2}$, $T_x{\cc^2} = \bigoplus_{i=1}^k E_i(x)$, such that
for $v\in E_i(x)$, $\lim_{n\rightarrow\infty}\unsur{n}\log \norm{df^n_x(v)} = \chi_i$.
Moreover, $\sum \chi_i = \log \abs{\jac(f)}<0$,
and since $\nu$ is non-atomic the $\chi_i$ cannot both be negative (this is already part of Pesin's theory, see \cite[Prop. 2.3]{bls}).
Thus $k=2$ and the exponents satisfy $\chi_1<0$ and $\chi_2\geq 0$ (up to relabelling).
Without loss of generality, we may further assume that points in $\mathcal R$ satisfy
the conclusion of the Birkhoff ergodic theorem for $\nu$.
As observed in the introduction,
the ergodic closing lemma is well-known when $\chi_2 >0$, so
we need only consider the case $\chi_2=0$ (our proof actually treats both cases simultaneously).
To ease notation, let us denote by $E^s(x)$ the stable Oseledets subspace and by $\chi^s$ the corresponding Lyapunov exponent
($\chi^s<0$). The Pesin stable manifold theorem (see e.g. \cite{fathi herman yoccoz} for details)
asserts that
there exists a measurable
set $\mathcal{R}'\subset \mathcal R$ of full measure,
and a family of holomorphic disks
$W^s_{\rm loc}(x)$,
tangent to $E^s(x)$ at $x$ for $x\in \mathcal R'$, and such that
$f(W^s_{\rm loc}(x))\subset W^s_{\rm loc}(f(x))$. In addition for every $\varepsilon>0$ there exists a set
$\mathcal{R}'_\varepsilon$ of measure $\nu(\mathcal{R}'_\varepsilon) \geq 1-\varepsilon$ and constants
$r_\varepsilon$ and $C_\varepsilon$
such that for
$x\in \mathcal{R}'_\varepsilon$, $W^s_{\rm loc}(x)$ contains a graph of slope at most 1 over a ball of radius $r_\varepsilon$ in $E^s(x)$
and for $y\in W^s_{\rm loc}(x)$, $d(f^n(y), f^n(x))\leq C_\varepsilon\exp( (\chi^s+\varepsilon)n)$ for every $n\geq 0$. Furthermore, local stable
manifolds vary continuously on $\mathcal{R}'_\varepsilon$.
From this we can form global stable manifolds by declaring\footnote{If $\nu$ has a zero exponent, this may not be the stable manifold of $x$ in the usual sense, that is, there might exist points outside $W^s(x)$ whose orbit approaches that of $x$.} that $W^s(x)$ is the increasing union
of $f^{-n} (W^s_{\rm loc}(f^n(x)))$. Then it is a well-known fact that $W^s(x)$ is a.s. biholomorphically equivalent to $\mathbb{C}$
(see e.g. \cite[Prop 2.6]{bls}). Indeed,
almost every point visits $\mathcal{R}'_\varepsilon$ infinitely many times, and from this we
can view $W^s(x)$ as an increasing union of disks $D_j$ such that the modulus of the annuli $D_{j+1}\setminus D_j$ is
uniformly bounded from below. Discarding a set of zero measure if necessary, it is no loss of generality to
assume that $\bigcup_{\varepsilon>0} \mathcal R'_\varepsilon = \mathcal R'$ and that for every
$x\in \mathcal{R}'$, $W^s(x)\simeq \mathbb{C}$.
\medskip
To prove the theorem we show that for every $\varepsilon>0$, $\mathcal R'_\varepsilon \subset J^*$.
Fix $x\in \mathcal R'_\varepsilon$ and a saddle point $p$. By Lemma \ref{lem:entire}
there is a transverse intersection $t$ between $W^s(x)$
and $W^u(p)$. Since $x$ is recurrent and $d(f^n(x), f^n(t))\to 0$,
to prove that $x\in J^*$ it is enough to show that $t\in J^*$. We argue by contradiction, so assume that this is not the case. Then by Lemma \ref{lem:inters} there is a Fatou disk $\Delta$ through
$t$ inside $W^u(p)$. Reducing $\Delta$ a little if necessary we may assume that $f^n$ is a normal family in some
neighborhood of $\overline \Delta$ in $W^u(p)$.
Since $\nu$ is non-atomic and stable manifolds vary continuously for the $C^1$ topology on
$\mathcal R'_\varepsilon$, there is a set $A$ of positive measure such that if $y\in A$, $W^s(y)$
admits a transverse intersection with $\Delta$. The iterates $f^n(\Delta)$ form a normal family and $f^n(\Delta)$ is
exponentially close to $f^n(A)$. Let $(n_j)$ be some subsequence such that $f^{n_j}|_{\Delta}$ converges. Then the
limit map has generic rank 0 or 1: if $\phi : \mathbb{D}\to \Delta$ is a parameterization, $f^{n_j}\circ \phi$ converges uniformly on $\mathbb{D}$ to some limit map $\psi$, which is either constant or has generic rank 1.
Set $\Gamma = \psi(\mathbb{D})$. Let $\nu'$ be a cluster
value of the sequence of measures $(f^{n_j})_*(\nu|_{A})$. Then $\nu'$ is a measure of mass
$\nu(A)$, supported on $\overline \Gamma$ and $\nu'\leq \nu$. Since $\nu$ gives no mass to points, the rank 0 case is excluded so $\Gamma$ is a (possibly singular) curve. Notice also that if $z$ is an interior point of $\Delta$
(i.e. $z= \phi(\zeta)$ for some $\zeta \in \mathbb{D}$), then $\lim f^{n_j}(z) = \psi(\zeta)$ is an interior point of $\Gamma$. This shows that $\nu'$ gives full mass to $\Gamma$ (i.e. it is not concentrated on its boundary). Then
the proof of Theorem \ref{thm:main}
is concluded by the following result of independent interest.
\begin{prop}\label{prop:subvariety}
Let $f$ be a dissipative polynomial diffeomorphism of ${\cc^2}$ with non-trivial dynamics, and $\nu$ be an ergodic non-atomic
invariant measure, giving positive measure to a subvariety. Then
$\nu$ is the Haar measure on an invariant cycle of circles contained in a periodic rotation domain.
In particular a non-atomic invariant measure supported on $J$ gives no mass to subvarieties.
\end{prop}
\begin{proof}
Let $f$ and $\nu$ be as in the statement of the proposition, and $\Gamma_0$ be a subvariety such that $\nu(\Gamma_0)>0$.
Since $\nu$ gives no mass to the singular points of $\Gamma_0$, by reducing $\Gamma_0$ a bit we may assume that $\Gamma_0$ is smooth. If $M$ is an integer such that $1/M < \nu(\Gamma_0)$, by the pigeonhole principle
there exists $0\leq k < l \leq M$ such that
$\nu(f^k(\Gamma_0)\cap f^l(\Gamma_0))>0$, so $f^k(\Gamma_0)$ and $f^l(\Gamma_0)$
intersect along a relatively open set. Thus replacing
$f$ by some iterate $f^N$ (which does not change the Julia set)
we can assume that $\Gamma_0\cap f(\Gamma_0)$ is
relatively open in $\Gamma_0$ and $f(\Gamma_0)$.
Let now $\Gamma = \bigcup_{k\in \mathbb{Z}} f^k(\Gamma_0)$. This is an invariant,
injectively
immersed Riemann surface with $\nu(\Gamma)>0$. Notice that replacing $f$ by $f^N$ may
corrupt the ergodicity
of $\nu$ so if needed we replace $\nu$ by a component of its ergodic decomposition (under $f^N$)
giving positive (hence full) mass to $\Gamma$.
\medskip
We claim that $\Gamma$ is biholomorphic to a domain of the form $\set{z\in \mathbb{C}, \ r <\abs{z} <R}$
for some $0\leq r< R \leq \infty$, that
$f|_{\Gamma}$ is conjugate to an irrational rotation, and $\nu$ is the Haar measure on an invariant circle. This is
{\em a priori} not enough to
conclude the proof since at this stage nothing prevents such an invariant ``annulus'' to be contained in $J$.
To prove the claim, note first that since $\Gamma$ is non-compact, it is either biholomorphic
to $\mathbb{C}$ or $\mathbb{C}^*$, or it is a hyperbolic Riemann surface\footnote{In the situation of Theorem \ref{thm:main} we further know that $\Gamma\subset K$ so the first two cases are excluded.}. In addition $\Gamma$ possesses an automorphism $f$
with a non-atomic ergodic invariant measure. In the case of $\mathbb{C}$ and $\mathbb{C}^*$ all automorphisms are affine and
the only possibility is that $f$ is an irrational rotation. In the case of a hyperbolic Riemann surface, the list of possible dynamical
systems is also well-known (see e.g. \cite[Thm 5.2]{milnor}) and again the only possibility is that $f$ is conjugate to
an irrational rotation on a disk or an annulus. The fact that $\nu$ is a Haar measure follows as before.
{\bf M}edskip
Let $\gamma $ be the circle supporting $\nu$, and
$\widetilde \Gamma \subset \Gamma$ be a relatively compact invariant
annulus containing $\gamma$ in its interior.
To conclude the proof we must show that $\gamma$ is contained in the Fatou set.
This will result from the following lemma, which will be proven afterwards.
\begin{lem}\label{lem:dominated}
$f$ admits a dominated splitting along $\widetilde \Gamma$.
\end{lem}
See \cite{sambarino} for generalities on the notion of dominated splitting.
In our setting, since $\Gamma$ is an invariant complex submanifold and $f$ is dissipative,
the dominated splitting actually implies a normal hyperbolicity property.
Indeed, observe first that
$f|_{\widetilde\Gamma}$ is an isometry for the Poincaré metric $\mathrm{Poin}_\Gamma$ of $\Gamma$,
which is equivalent to the induced Riemannian metric on $\widetilde \Gamma$. In particular
$C^{-1} \leq \norm{df^n|_{T\widetilde\Gamma}}\leq C$ for some $C>0$ independent of $n$.
Therefore a dominated splitting for
$f|_{\widetilde \Gamma}$ means that there is a continuous splitting of
$T\mathbb{C}^2$ along $\widetilde \Gamma$, $T_x\mathbb{C}^2 = T_x\Gamma\oplus V_x$, and for every
$x\in \widetilde \Gamma$ and $n\geq 0$ we have $\norm{df_x^n|_{V_x}} \leq C'\lambda^n$ for some $C'>0$ and
$\lambda <1$.
In other words, $f$ is normally contracting along $\widetilde\Gamma$.
Thus in a neighborhood of $\gamma$,
all orbits converge to $\Gamma$. This completes the proof of Proposition \ref{prop:subvariety}.
\end{proof}
\begin{proof}[Proof of Lemma \ref{lem:dominated}]
By the cone criterion for dominated splitting (see \cite[Thm 1.2]{newhouse cone} or \cite[Prop. 3.2]{sambarino}) it is enough to
prove that for every $x\in \Gamma$ there exists a cone $\mathcal{C}_x$ about $T_x \Gamma$
in $T_x\mathbb{C}^2$ such that the field of
cones $(\mathcal{C}_x)_{x\in \widetilde \Gamma}$
is strictly contracted by the dynamics. For $x\in \Gamma$, choose a vector $e_x\in T_x\Gamma$ of unit norm relative to the Poincaré metric
$\mathrm{Poin}_\Gamma$
and pick $f_x$ orthogonal to $e_x$ in $T_x\mathbb{C}^2$ and such that $\det(e_x, f_x)=1$.
Since $\mathrm{Poin}_\Gamma|_{\widetilde\Gamma}$ is equivalent to the metric induced by the ambient Riemannian metric,
there exists a constant $C$ such that for all $x\in \widetilde\Gamma$,
$C^{-1}\leq \norm{e_x}\leq C$. Thus,
the basis $(e_x, f_x)$ differs from an orthonormal basis by bounded multiplicative constants, i.e.
there exists $C^{-1}\leq \alpha(x)\leq C$ such that $(\alpha(x)e_x, \alpha^{-1}(x)f_x)$ is orthonormal.
Let us work in the frame $\set{(e_x, f_x), x\in \Gamma}$. Since $df|_{\Gamma}$ is an isometry for the Poincaré metric
and $f(\Gamma) = \Gamma$,
the matrix expression of $df_x$ in this frame is of the form
$$\begin{pmatrix}
e^{i\theta(x)} & a(x) \\ 0 & e^{-i\theta(x)} J
\end{pmatrix},$$
where $J$ is the (constant) Jacobian. Fix $\lambda$ such that
$\abs{J}<\lambda < 1$, and for $\varepsilon>0$, let $\mathcal{C}_x^\varepsilon\subset T_x\mathbb{C}^2$ be the cone defined by
$$\mathcal{C}_x^\varepsilon = \set{ue_x+vf_x, \ \abs{v} \leq \varepsilon \abs{u}}.$$
Let also $A = \sup_{x\in \widetilde \Gamma}\abs{a(x)}$.
Working in coordinates, if $(u,v)\in {\bf M}athcal{C}_x^\varepsilon$ then
$$df_x(u,v) =: (u_1, v_1) = (e^{i\theta(x)} u + a(x) v , e^{-i\theta(x)}Jv),$$
hence $$ \abs{u_1} \geq \abs{u} - A \abs{v} \geq \abs{u} (1-A\varepsilon)
\text{ and } \abs{v_1} = \abs{Jv} \leq \varepsilon \abs{J} \abs{u}. $$
We see that if $\varepsilon$ is so small that $\abs{J} < \lambda(1-A\varepsilon)$, then for every $x\in \widetilde \Gamma$ we have that
$\abs{v_1}\leq \lambda\varepsilon\abs{u_1}$, that is,
$df_x(\mathcal{C}_x^\varepsilon)\subset \mathcal{C}_{f(x)}^{\lambda\varepsilon}$. The proof is complete.
\end{proof}
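The cone-contraction estimate above is elementary enough to check numerically. The following standalone Python sketch (an illustration, not part of the paper; the constants $\theta$, $a$, $J$, $\varepsilon$, $\lambda$ are assumed sample values) verifies that the triangular differential maps the cone $\abs{v}\le\varepsilon\abs{u}$ into $\abs{v_1}\le\lambda\varepsilon\abs{u_1}$ whenever $\abs{J}<\lambda(1-A\varepsilon)$:

```python
import cmath, math

# Standalone check (assumed sample constants) that the triangular differential
#   dF = [[e^{i*theta}, a], [0, e^{-i*theta}*J]]
# maps the cone |v| <= eps*|u| into |v1| <= lam*eps*|u1| when |J| < lam*(1 - A*eps).
def maps_cone(theta, a, J, eps, lam, samples=200):
    A = abs(a)
    assert abs(J) < lam * (1.0 - A * eps)
    for k in range(samples):
        phi = 2.0 * math.pi * k / samples
        u, v = 1.0, eps * cmath.exp(1j * phi)          # boundary of the cone
        u1 = cmath.exp(1j * theta) * u + a * v
        v1 = cmath.exp(-1j * theta) * J * v
        if abs(v1) > lam * eps * abs(u1):
            return False
    return True

print(maps_cone(theta=0.7, a=2.0, J=0.05, eps=0.1, lam=0.3))
```

Here $\abs{u_1}\ge 1-A\varepsilon=0.8$ and $\abs{v_1}=\abs{J}\varepsilon=0.005$, well inside the contracted cone.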
\begin{thebibliography}{[ABCD]}
\bibitem[B]{bedford} Bedford, Eric. {\em
Dynamics of polynomial diffeomorphisms in $\mathbb{C}^2$: foliations and laminations.}
ICCM Not. 3 (2015), no. 1, 58--63.
\bibitem[BS1]{bs1} Bedford, Eric; Smillie, John.
{\em Polynomial
diffeomorphisms of $\mathbb{C}^2$: currents, equilibrium measure and
hyperbolicity.}
Invent. Math. 103 (1991), 69--99.
\bibitem[BS2]{bs2} Bedford, Eric; Smillie, John. {\em
Polynomial diffeomorphisms
of $\mathbb{C}^2$. II: Stable manifolds and recurrence.}
J. Amer. Math. Soc. 4 (1991), 657--679.
\bibitem[BS3]{bs3} Bedford, Eric; Smillie, John. {\em Polynomial
diffeomorphisms of $\mathbb{C}^2$. III. Ergodicity, exponents and entropy of
the equilibrium measure.} Math. Ann. 294 (1992), 395--420.
\bibitem[BLS]{bls}
Bedford, Eric; Lyubich, Mikhail; Smillie, John. {\em Polynomial
diffeomorphisms of $\mathbb{C}^2$. IV. The measure of maximal entropy and
laminar currents.} Invent. Math. 112 (1993), 77--125.
\bibitem[CP]{crovisier pujals} Crovisier, Sylvain; Pujals, Enrique. {\em Strongly dissipative surface diffeomorphisms.}
Preprint {\tt arxiv:1608.05999}.
\bibitem[D]{connex} Dujardin, Romain.
\newblock{\em Some remarks on the connectivity of Julia sets for
2-dimensional diffeomorphisms,}
\newblock in {\em Complex Dynamics,} 63--84, Contemp. Math., 396,
Amer. Math. Soc., Providence, RI, 2006.
\bibitem[DL]{tangencies} Dujardin, Romain; Lyubich, Mikhail. {\em Stability and bifurcations for dissipative polynomial automorphisms of $\mathbb{C}^2$}.
Invent. Math. 200 (2015), 439--511.
\bibitem[FHY]{fathi herman yoccoz} Fathi, Albert; Herman, Michael R.; Yoccoz, Jean-Christophe. {\em
A proof of Pesin's stable manifold theorem. }Geometric dynamics (Rio de Janeiro, 1981), 177--215, Lecture Notes in Math., 1007, Springer, Berlin, 1983.
\bibitem[FLRT]{firsova lyubich radu tanase} Firsova, Tanya; Lyubich, Mikhail; Radu, Remus; Tanase, Raluca.
{\em Hedgehogs for neutral dissipative germs of holomorphic diffeomorphisms of $(\mathbb{C}^2, 0)$.}
Preprint {\tt arxiv:1611.09342}.
\bibitem[GP]{peters guerini} Guerini, Lorenzo; Peters, Han. {\em When are $J$ and $J^*$ equal?} Preprint
{\tt arxiv:1706.00220}.
\bibitem[K]{katok} Katok, Anatole. {\em Lyapunov exponents, entropy and periodic orbits for diffeomorphisms.}
Publications Mathématiques de l'IHÉS 51 (1980), 137--173.
\bibitem[LRT]{lyubich radu tanase} Lyubich, Mikhail; Radu, Remus; Tanase, Raluca. {\em Hedgehogs in higher dimension and their
applications.} Preprint {\tt arxiv:1611.09840}.
\bibitem[LP]{lyubich peters 2} Lyubich, Mikhail; Peters, Han.
{\em Structure of partially hyperbolic Hénon maps.}
To appear.
\bibitem[M]{milnor} Milnor, John. {\em Dynamics in one complex variable.}
Third edition. Annals of Mathematics Studies, 160. Princeton University Press, Princeton, NJ, 2006. viii+304 pp.
\bibitem[N]{newhouse cone} Newhouse, Sheldon.
{\em Cone-fields, domination, and hyperbolicity.}
Modern dynamical systems and applications, 419--432, Cambridge Univ. Press, Cambridge, 2004.
\bibitem[S]{sambarino} Sambarino, Martín.
{\em A (short) survey on dominated splittings.} Mathematical Congress of the Americas, 149--183,
Contemp. Math., 656, Amer. Math. Soc., Providence, RI, 2016.
\bibitem[U]{ueda} Ueda, Tetsuo. {\em Local structure of analytic transformations of two complex variables. I.}
J. Math. Kyoto Univ. 26 (1986), no. 2, 233--261.
\varepsilonnd{thebibliography}
\end{document}
\begin{document}
\begin{center}
\thispagestyle{empty}
{\large \bf Tail Asymptotics of Random Sum and Maximum of Log-Normal Risks}
\vskip 0.4 cm
\centerline{\large
Enkelejd Hashorva\footnote{University of Lausanne, UNIL-Dorigny 1015 Lausanne, Switzerland} and Dominik Kortschak\footnote{
Universit\'e de Lyon, F-69622, Lyon, France; Universit\'e Lyon 1, Laboratoire SAF, EA 2429, Institut de Science Financi\`ere et d'Assurances, 50 Avenue Tony Garnier, F-69007 Lyon, France}
}
\vskip 0.4 cm
\end{center}
{\bf Abstract:} In this paper we derive the asymptotic behaviour of the survival function of both the random sum and the random maximum of log-normal risks. As in the case of the finite sum and maximum investigated in Asmussen and Rojas-Nandaypa (2008), the principle of a single big jump also holds in the more general setup of random sums and random maxima. We investigate both log-normal sequences and some related dependence structures motivated by stationary Gaussian sequences.
{\bf Key words}: Risk aggregation; log-normal risks; exact asymptotics; Gaussian distribution; product of random variables.
\section{Introduction}
Let $Y_i,i\ge 1$ be positive random variables \rE{(rv's)} which model claim sizes of an insurance portfolio for a given observation period.
Denote by $N$ the total number of \rrE{claims} reported during the observation period; thus $N$ is a discrete rv, which we assume to be independent of the claim sizes $Y_i,i\ge 1$. The classical \rE{risk} model $S_N= \sum_{i=1}^N Y_i$ for the total loss amount assumes that the $Y_i$'s are independent and identically distributed (iid) rv's.
If the assumption of independence of claim sizes is dropped, \pD{one faces the problem of how to choose a meaningful dependence structure, which moreover should be tractable from a theoretical point of view. For example, Constantinescu et al.\ (2011) consider \rE{a} model \rE{where} the survival copula of claim sizes is assumed to be Archimedean. Such a model
has the interpretation that for some positive rv $V$
and iid unit exponential rv's $E_i,i\ge 1$ independent of $V$, the rv's $Y_i=V E_i,i\ge 1$ form a dependent sequence of claim sizes
derived by randomly scaling the iid claim sizes $E_i,i\ge 1$.}\\
\pD{In this paper we use dependent Gaussian sequences and related dependence structures \rE{to model claim sizes}.}
Specifically, if $X_i,i\ge 1$ \rE{are} dependent Gaussian rv's with $N(0,1)$ distribution, then $Y_i= e^{X_i},i\ge 1$ is the corresponding sequence of dependent log-normal rv's that can be used for modeling claim sizes. For instance, if $X_i,i\ge 1$ is a centered stationary Gaussian sequence with $N(0,1)$ components and constant correlation $\rho= \E{X_1X_i}\in (0,1),i>1$, then $Y_i=e^{X_i}$ is a sequence of dependent log-normal rv's.
Since we have (see e.g., Berman (1992))
\begin{eqnarray}
X_i= \rho Z_0+ \sqrt{1- \rho^2} Z_i, \label{seq}
\end{eqnarray}
with $Z_i,i\ge 0$ iid $N(0,1)$ rv's, then $Y_i= e^{\rho Z_0} e^{\sqrt{1- \rho^2}Z_i},i\ge 1$. For such $Y_i$'s,
by Asmussen and Rojas-Nandaypa (2008)
\begin{eqnarray}\label{NN}
\pk{S_n > u} \sim n \pk{X_1> \log u}, \quad u\to \infty
\end{eqnarray}
holds for any $n\ge 2$, where $\sim$ stands for asymptotic equivalence of two functions when the argument tends to infinity. In view of Asmussen et al.\ (2011) (see also Hashorva (2013)) $S_n$ is asymptotically tail equivalent with the maximum $Y_{n:n}= \max_{1 \le i \le n} Y_i$, i.e., $\pk{S_n> u} \sim \pk{Y_{n:n} >u}$ as $u\to \infty$.
Our analysis in this paper is concerned with the probability of observing large values for the random sum $S_N$, thus we shall investigate $\pk{S_N> u}$ when $u$ is large. Additionally, we shall consider also the tail asymptotics of the maximum claim $Y_{N:N}$ among
the claim sizes $Y_1 ,\ldots, Y_N$; we set $Y_{0:0}=0$ if $N=0$. \rrE{For the case that $N$ is non-random, see Jiang et al.\ (2014) and the references therein for recent results on max-sum equivalence.}
For our investigations of the tail behaviours of $S_N$ and $Y_{N:N}$ we shall follow two objectives:\\
A) We shall exploit the tractable
dependence structure \rE{implied} by \eqref{seq} choosing general $Z_i$'s such that $e^{Z_i}$ has survival function similar to that of a log-normal
rv; \\
B) We consider a log-normal dependence structure induced by a general Gaussian sequence $X_i,i\ge 1$ where $X_i,X_n$ can have a correlation $\rho_{in}$ which is allowed to converge to 1 as $n\to \infty$.
For both cases of dependent $Y_i$'s we show that the principle of a single big jump (see Foss et al.\ (2013) for details in iid setup)
holds if for the discrete rv $N$
we require that
\begin{eqnarray} \label{conN}
\E{(1+ \delta)^N} < \infty
\end{eqnarray}
is valid for some $\delta>0$; a large class of discrete rv's satisfies condition \eqref{conN}.\\
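Condition \eqref{conN} asks for a finite probability generating function at $1+\delta$. For example, for $N$ Poisson with mean $\lambda$ one has $\E{(1+\delta)^N}=e^{\lambda\delta}<\infty$ for every $\delta>0$. A short numerical check of this identity (illustration only; the constants are assumed sample values):

```python
import math

# For N ~ Poisson(lam): E[(1+delta)^N] = exp(lam*delta) < infinity for all delta > 0,
# so the moment condition holds. Check the identity by summing the series
# sum_k e^{-lam} (lam*(1+delta))^k / k!  (constants are assumed sample values).
def pgf_poisson(lam, z, terms=200):
    term = math.exp(-lam)                      # k = 0 term
    total = term
    for k in range(1, terms):
        term *= lam * z / k                    # recursive update of (lam*z)^k / k!
        total += term
    return total

lam, delta = 2.0, 0.5
print(pgf_poisson(lam, 1.0 + delta), math.exp(lam * delta))
```

The recursive update of the series term avoids overflowing `math.factorial` for large $k$.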
Brief organisation of the rest of the paper: We present our main results in Section 2
followed by the proofs in Section 3.
\section{Main Results}
We consider first $X_i$'s which \rrE{are in general not Gaussian}. So for a given fixed $\rho \in [0,1)$ let $Z_i,i\ge 0$ be independent rv's which define $X_i$'s via the dependence structure \eqref{seq}. We shall assume that
\begin{eqnarray}\label{14b}
\pk{e^{Z_0} > u}\sim \mathcal{L}(u) \Psi(\log(u)), \quad u\to \infty,
\end{eqnarray}
with $\Psi$ the survival function of an $N(0,1)$ rv and $\mathcal{L}(\cdot)$ a regularly varying function at $\infty$ with index $\beta\in \mathbb{R}$; see Bingham et al.\ (1987) or Mikosch (2009) for details on regularly \rrE{varying} functions. Clearly, \eqref{14b} is satisfied if $Z_0$ is an $N(0,1)$ rv. Considering $Z_0$ as a base risk, we shall further assume that, for some constants $c_i\in [0,\infty)$, uniformly in $i$
\begin{eqnarray}\label{14}
\pk{Z_i > u}\sim c_i\pk{{Z_0} > u}, \quad u\to \infty.
\end{eqnarray}
For such models the claim sizes
$Y_i=e^{X_i},i\ge 1$ have marginal distributions which are in general neither log-normal nor with tails which are proportional \rrE{to} those of log-normal rv's.
We state next our first result \rE{for $Y_{N:N}$ the maximal claim size among $Y_1=e^{X_1} ,\ldots, Y_N=e^{X_N}$
and the random sum $S_N=\sum_{i=1}^N Y_i$; we set $Y_{0:0}=0$ and $S_0:=0$.}
\begin{theo}\label{th0} Let $N$ be an integer-valued rv satisfying $\E{(1+\delta)^N} < \infty$ for some $\delta>0$. Let $X_i,i\ge 1$ be a sequence of rv's given by \eqref{seq}
with $Z_i,i\ge 0$ iid rv's and $\rho\in [0,1)$ some given constant. Suppose that \eqref{14b} and \eqref{14} hold
with $\max_{i \ge 1} c_i < \infty$. \rE{If further $N$ is independent of $X_i,i\ge 1$, then}
\begin{eqnarray}\label{NN2}
\pk{S_N > u} \sim \pk{Y_{N:N} > u} \sim \E{\sum_{i=1}^N c_i} \frac{\kal{L}(u^{\rho^2}) \kal{L}(u^{1-\rho^2}) }{ \sqrt{ 2 \pi} \log u} \exp\Bigl( - \frac{(\log u)^2}{2} \Bigr) , \quad u \to \infty.
\end{eqnarray}
\end{theo}
{\bf Remarks:} a) Clearly, if $Y=e^{Z}$ with $Z$ an $N(0,1)$ rv (thus $Y$ is a log-normal rv with $LN(0,1)$ distribution), then \eqref{14b} holds with $\kal{L}(u)=1, u>0$. \\
b) If $\kal{L}(\cdot)$ in \netheo{th0} is constant, then the tail asymptotic behaviour of $S_N$ and $Y_{N:N}$ is not influenced by the value of the dependence parameter $\rho$, and hence as expected the principle of a
single big jump holds. \rrE{However}, for non-constant $\kal{L}(\cdot)$ the dependence parameter $\rho$ plays a crucial role in the tail asymptotics derived in \eqref{NN2}. The reason for this is that by Lemma \ref{L1}
\begin{equation}
\pk{Y_i>u} \sim c_i \frac{\kal{L}(u^{\rho^2}) \kal{L}(u^{1-\rho^2}) }{ \sqrt{ 2 \pi} \log u} \exp\Bigl( - \frac{(\log u)^2}{2} \Bigr),
\quad u\to \infty.
\label{ff}
\end{equation}
Hence also in this case the principle of a single big jump applies.\\
c) In the proof of Theorem \ref{th0} we show that $S_N\stackrel{d}{=}e^{\rho Z_0}e^{\sqrt{1-\rho^2} Z^*}$ for some $Z^*$ independent of $Z_0$, and then we apply Lemma \ref{L1}. Here we want to mention that after proving
\eqref{ff} we can also apply Proposition 2.2 of Foss and Richards (2010) to determine the asymptotic of $\pk{S_n> u}$ as $u\to \infty$.
If we condition on $Z_0$ and set $\overline F(x) =\mathbb{P}(Y_1>x)$,
$B_i(x)=\{e^{\rho Z_0}\le x^\gamma\}$ for some $\gamma \in (\rho,1)$ and define \pD{$h(x)=x^{\xi}$} with \[1-\frac 12 \left(\frac{1-\gamma}{\sqrt{1-\rho^2}}\right)<\xi^2<1,
\]
then it is straightforward to show that the conditions of Proposition 2.2 of Foss and Richards (2010) are met.
Our second result is for log-normal rv's where we remove the assumption of equi-correlation.
Specifically, we consider for each $n$ claim sizes
$Y_{1,n}=e^{X_{1,n}},\ldots,Y_{n,n}= e^{X_{n,n}}$, where $(X_{1,n}, \ldots, X_{n,n})$ is a normal random vector with mean zero and covariance matrix $\Sigma^{(n)}$, which is a correlation matrix with entries $\sigma^{(n)}_{i,j}$. We shall assume that $\rho^n_{i,j}:=\sigma^{(n)}_{i,j}$ is bounded by the maximum of some sequence $\rho_n$ and some $\rho\in (0,1)$, i.e.,
\begin{eqnarray}\label{rhon}
\rho^n_{i,j}\le\max(\rho_n,\rho), \quad \rE{n\ge 1}
\end{eqnarray}
for all $i\not =j$. \rrE{Further, we suppose that the} sequence $\rho_n,n\ge 1$ \rrE{satisfies} for some $c^*>8$ and some $\eta> 0$
\begin{eqnarray}\label{cnu}
\rho_{n(u)} \le 1-\frac{c^* \log(\log(u))}{\log(u)}, \quad \text{with }
n(u)= \left \lfloor(1+ \eta ) \frac{ \rrE{(\log (u))^2} }{2\log(1+\delta)}\right\rfloor .
\end{eqnarray}
If for instance all \rE{${\rho_{i,j}^{n}}$} are bounded away from $1$, then clearly condition \eqref{cnu} is valid; it holds also if for some $c$ large enough $\rho_n\le 1-c \log(n)/\sqrt{n}$.\\
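The sufficiency of the bound $\rho_n\le 1-c\log(n)/\sqrt{n}$ can be probed numerically for concrete constants; in the sketch below all constants ($c=50$, $c^*=9$, $\eta=\delta=1/2$) are assumed sample values, not taken from the paper:

```python
import math

# Illustration with assumed constants (c=50, c_star=9, eta=delta=0.5), not from
# the paper: if rho_n <= 1 - c*log(n)/sqrt(n), then rho_{n(u)} obeys the bound
# in (cnu), where n(u) = floor((1+eta)*(log u)^2 / (2*log(1+delta))).
c, c_star, eta, delta = 50.0, 9.0, 0.5, 0.5

def condition_holds(u):
    n_u = math.floor((1.0 + eta) * math.log(u) ** 2 / (2.0 * math.log(1.0 + delta)))
    lhs = 1.0 - c * math.log(n_u) / math.sqrt(n_u)       # bound on rho_{n(u)}
    rhs = 1.0 - c_star * math.log(math.log(u)) / math.log(u)
    return lhs <= rhs

checks = [condition_holds(10.0 ** k) for k in range(2, 13)]
print(checks)
```

Since $n(u)$ grows like $(\log u)^2$, the left-hand side is of order $1-\mathrm{const}\cdot\log\log u/\log u$ with a large constant, which dominates the right-hand side.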
We present next our final result.
\begin{theo}\label{thm}
Let $Y_{1,n} ,\ldots, Y_{n,n}, n\ge 1$ be claim sizes as above being further independent of some integer-valued rv $N$ which satisfies
\eqref{conN} for some $\delta>0$. If further \eqref{rhon} holds with $\rho_n$ satisfying \eqref{cnu}, then
\begin{eqnarray}
\mathbb{P}\left( \max_{1\le i \le N}Y_{i,N}>u\right)\sim \mathbb{P}(S_N>u)\sim
\rE{\frac{ \mean{N} }{ \sqrt{ 2 \pi} \log u} \exp\Bigl( - \frac{(\log u)^2}{2} \Bigr), \quad u\to \infty}.
\end{eqnarray}
\end{theo}
{\bf Remarks:}
a) Our second result in \netheo{thm} shows that the principle of a single big jump still holds even if we allow \rrE{for a more general dependence structure}.\\
b) Kortschak (2012) derives second order asymptotic results for subexponential risks. Similar ideas as therein are utilised to derive second order asymptotic results for the aggregation of log-normal random vectors in Kortschak and Hashorva (\rrE{2013,2014}). In the setup of randomly weighted sums it is also possible to derive such results.
\section{Proofs}
We give next two lemmas needed in the proofs below. \rrE{The first lemma is of some interest on its own,
in particular it implies Lemma 2.3 in Farkas and Hashorva (2013) (see also Lemma 8.6 in Piterbarg (1996)).}
\begin{lem} \label{L1} Let $\kal{L}_i\rE{(\cdot)}, i=1,2$ be
regularly varying functions at infinity with index $\beta_i$.
If $Z_1,Z_2$ are two independent rv's such that $\pk{e^{Z_i}> u} \sim \kal{L}_i(u) \Psi(\log(u)), i=1,2$, then
for any two positive constants $\sigma_1,\sigma_2$
\begin{eqnarray}
\pk{e^{\sigma_1 Z_1+\sigma_2 Z_2} >u} \sim e^{\frac{\sigma_1^2 \sigma_2^2}{2 \sigma^2} (\beta_1 - \beta_2)^2 } \kal{L}_1(u^{\gamma})\kal{L}_2(u^{1-\gamma}) \Psi( (\log u)/\sigma)
\label{38}
\end{eqnarray}
holds as $u\to \infty$, where $\gamma=\sigma_1^2/(\sigma_1^2+\sigma_2^2)$ and $\sigma=\sqrt{\sigma_1^2+ \sigma_2^2}$.
\end{lem}
\prooflem{L1}
Choose an $\alpha>0$ such that \[
\frac{\sigma_1^2}{\sigma_2^2}<\frac{1+\alpha}{1-\alpha}.
\]
Then for any $a>0$ we have
\begin{eqnarray}
\frac{ \pk{e^{\sigma_1 Z_1 +\sigma_2 Z_2 }> u, e^{\sigma_2 Z_2} \le a}}{\pk{e^{\sigma_1 Z_1+ \sigma_2 Z_2 }> u, e^{\sigma_2 Z_2}> a}} & \le &
\frac{ \pk{e^ { \sigma_1 Z_1 } > u/a}} {\pk{e^{\sigma_1 Z_1} > u^\alpha} \pk{{e^{\sigma_2Z_2}}> u^{1-\alpha}}}\notag\\
&\sim & \frac{ \kal{L}_1( (u/a)^{1/\sigma_1})\Psi(\frac{1}{\sigma_1}\log (u/a))}{ \kal{L}_1( u^{\alpha/\sigma_1}) \kal{L}_2( u^{(1-\alpha)/\sigma_2}) \Psi(\frac{\alpha}{\sigma_1}\log (u)) \Psi(\frac{1-\alpha}{\sigma_2}\log (u)) }\notag\\
& \to & 0, \quad u\to \infty,\label{eq:temp1}
\end{eqnarray}
with $\Psi$ the survival function of an $N(0,1)$ rv. With the same argument we get that
for any $a>0$ we have
\begin{eqnarray}
\frac{ \pk{e^{\sigma_1 Z_1 +\sigma_2 Z_2 }> u, e^{\sigma_1 Z_1} \le a}}{\pk{e^{\sigma_1 Z_1+ \sigma_2 Z_2 }> u, e^{\sigma_1 Z_1}> a}} \to 0, \quad u\to \infty,\label{eq:temp1C}
\end{eqnarray}
and hence
\begin{align}
\pk{e^{\sigma_1 Z_1 +\sigma_2 Z_2 }> u}=&\pk{e^{\sigma_1 Z_1 +\sigma_2 Z_2 }> u, e^{\sigma_1 Z_1} > a, e^{\sigma_2 Z_2} > a}\notag\\&+\pk{e^{\sigma_1 Z_1 +\sigma_2 Z_2 }> u, e^{\sigma_1 Z_1} \le a}+\pk{e^{\sigma_1 Z_1 +\sigma_2 Z_2 }> u, e^{\sigma_2 Z_2} \le a}\notag
\\
\sim&\pk{e^{\sigma_1 Z_1 +\sigma_2 Z_2 }> u, e^{\sigma_1 Z_1} > a, e^{\sigma_2 Z_2} > a}, \quad u\to \infty.\label{eq:temp1b}
\end{align}
In view of \eqref{eq:temp1b} we have
\begin{eqnarray*}
\pk{e^{\sigma_1 Z_1+ \sigma_2 Z_2} > u}
&\sim& \pk{e^{\sigma_1 Z_1+ \sigma_2 Z_2} > u,e^{\sigma_1 Z_1}>\xi,e^{\sigma_2 Z_2}>\xi}, \quad u\to \infty.
\end{eqnarray*}
Assume next without loss of generality that $\sigma_1 \ge \sigma_2$.
If $H$ denotes the distribution of $e^{\sigma_1 Z_1}$, then for any $\xi>0$ with $u>2\xi$
\begin{eqnarray*}
\pk{e^{\sigma_1 Z_1+ \sigma_2 Z_2} > u,e^{\sigma_1 Z_1}>\xi,e^{\sigma_2 Z_2}>\xi}
&=&\pk{e^{\sigma_1 Z_1+ \sigma_2 Z_2} > u,u/\xi \ge e^{\sigma_1 Z_1}>\xi,e^{\sigma_2 Z_2}>\xi}\\&&+\ \pk{e^{\sigma_1 Z_1+ \sigma_2 Z_2} > u,e^{\sigma_1 Z_1}>u/\xi,e^{\sigma_2 Z_2}>\xi}\\
&=&\pk{e^{\sigma_1 Z_1+ \sigma_2 Z_2} > u,u/\xi \ge e^{\sigma_1 Z_1}>\xi}+\pk{e^{\sigma_1 Z_1}>u/\xi,e^{\sigma_2 Z_2}>\xi}\\
&=& \int_{\xi}^{u/\xi} \pk{ e^{\sigma_2 Z_2}> u/s} \ d H(s)+ \pk{ e^{\sigma_1 Z_1 }> u/ \xi ,e^{\sigma_2 Z_2}> \xi}.
\end{eqnarray*}
For all $u $ and $\xi$ large enough
$$\int_{\xi}^{u/\xi} \pk{ e^{\sigma_2 Z_2}> u/s} \ d H(s)\ge
\frac{1}{2}\pk{e^{\sigma_2 Z_2}> u/\xi}\ge \pk{ e^{\sigma_1 Z_1 }> u/ \xi ,e^{\sigma_2 Z_2}> \xi}$$
implying as $u\to \infty$
\begin{eqnarray*}
\int_{\xi}^{u/\xi} \pk{ e^{\sigma_2 Z_2}> u/s} \ d H(s)+ \pk{ e^{\sigma_1 Z_1 }> u/ \xi ,e^{\sigma_2 Z_2}> \xi}
&\sim& \int_{\xi}^{u/\xi} \pk{ e^{\sigma_2 Z_2}> u/s} \ d H(s).
\end{eqnarray*}
Further, since the constant $\xi$ can again be chosen arbitrarily large, we get for $\gamma=\sigma_1^2/(\sigma_1^2+\sigma_2^2)$
\begin{eqnarray*}
\lefteqn{\int_{\xi}^{u/\xi} \pk{ e^{\sigma_2 Z_2}> u/s} \ d H(s)} \\
&\sim&
\int_{\xi}^{u/\xi} \frac{\sigma_2^2\kal{L}_2(u/s)} {\sqrt{2 \pi \sigma_2^2}\log (u/s)} \exp\Biggl( - \frac{(\log (u/s))^2}{2 \sigma_2^2}\Biggr)\ d H(s)\\
&=& \frac{\sigma_2^2\kal{L}_2\left(u^{1-\gamma}\right)} {\sqrt{2 \pi \sigma_2^2}\log (u^{1- \gamma})} \int_{\xi u^{-\gamma}}^{\frac 1\xi u^{1-\gamma}} \frac{\kal{L}_2\left(\frac{u^{1-\gamma}} s\right) } {\kal{L}_2\left(u^{1-\gamma}\right) }
\frac{\log \left(u^{1-\gamma}\right) } {\log \left(\frac{u^{1-\gamma}} s\right) }
\exp\Biggl( - \frac{\left(\log \left(\frac{u^{1-\gamma}} s \right)\right)^2}{2 \sigma_2^2}\Biggr) \ d H( u^{\gamma}s)\\
&=& \frac{(\sigma_1^2+\sigma_2^2)\kal{L}_2\left(u^{1-\gamma}\right)} {\sqrt{2 \pi \sigma_2^2}\log (u)} \int_{\xi u^{-\gamma}}^{\frac 1\xi u^{1-\gamma}}
q(u, \gamma,s) \exp\Biggl( - \frac{\left(\log \left(\frac{u^{1-\gamma}} s \right)\right)^2}{2 \sigma_2^2}\Biggr) \ d H( u^{\gamma}s),
\end{eqnarray*}
with $q(u,\gamma,s)=\frac{\kal{L}_2\left(\frac{u^{1-\gamma}} s\right) } {\kal{L}_2\left(u^{1-\gamma}\right) }
\frac{\log \left(u^{1-\gamma}\right) } {\log \left(\frac{u^{1-\gamma}} s\right) }.$ For some $c>0$, \rE{by the uniform convergence theorem for regularly varying functions (see Theorem A3.2 in Embrechts et al.\ (1997))} we get uniformly in $1/c<s<c$
\[
\lim_{u\to\infty} q(u,\gamma,s)
=s^{-\beta_2}.
\]
\rrE{Further note that in the light of Potter's bound \rE{(see Bingham et al.\ (1987))}} for every $\epsilon>0$ and $A>1$ we can find a positive constant \eeE{$\xi$} such that for all $\xi u^{-\gamma} <s< \frac 1\xi u^{1-\gamma}$
\[
\frac 1 A s^{-\beta_2} \min(s^\epsilon,s^{-\epsilon}) \le
q(u,\gamma,s) \le A s^{-\beta_2} \max(s^\epsilon,s^{-\epsilon}).
\]
Consequently, for different values of $0 <a<b$ (that might depend on $u$) and $\beta$
we want to find the asymptotics of
\begin{align*}
& \int_{a}^{b}
s^\beta
\exp\Biggl( - \frac{\left(\log \left(\frac{u^{1-\gamma}} s \right)\right)^2}{2 \sigma_2^2}\Biggr) \ d H( u^{\gamma}s)\\&= -s^\beta
\exp\Biggl( - \frac{\left(\log \left(\frac{u^{1-\gamma}} s \right)\right)^2}{2 \sigma_2^2}\Biggr)\mathbb{P}\left(e^{\sigma_1Z_1}>u^{\gamma}s \right) \Bigg|_{s=a}^b \\&
\quad
+\int_a^b s^{\beta-1} \left(\beta + \frac{\log\left(\frac{u^{1-\gamma}} s \right)}{\sigma_2^2} \right)
\exp\Biggl( - \frac{\left(\log \left(\frac{u^{1-\gamma}} s \right)\right)^2}{2 \sigma_2^2}\Biggr)
\mathbb{P}\left(e^{\sigma_1Z_1}>u^{\gamma}s \right) \ ds.
\end{align*}
Since we can choose $\xi$ arbitrarily large we can replace $\mathbb{P}\left(e^{\sigma_1Z_1}>u^{\gamma}s \right)$ by its asymptotic form and hence we can use the approximation (set $\sigma_*:= \sigma_1 \sigma_2/\sqrt{\sigma_1^2+ \sigma_2^2}$)
\begin{align*}
&\exp\Biggl( - \frac{\left(\log \left(\frac{u^{1-\gamma}} s \right)\right)^2}{2 \sigma_2^2}\Biggr)
\mathbb{P}\left(e^{\sigma_1Z_1}>u^{\gamma}s \right)\\&
\approx \sigma_1^2\frac{\kal{L}_1(u^\gamma s)} {\sqrt{2 \pi \sigma_1^2}\log (u^\gamma s)} \exp\Biggl( - \frac{\left(\log \left(\frac{u^{1-\gamma}} s \right)\right)^2}{2 \sigma_2^2}-\frac{\left(\log \left(u^{\gamma}s \right)\right)^2}{2 \sigma_1^2}\Biggr)\\
&= \sigma_1^2\frac{\kal{L}_1(u^\gamma s)} {\sqrt{2 \pi \sigma_1^2}\log (u^\gamma s)}
\exp\Biggl( - \frac{\left(\sigma_1^2(1-\gamma)^2+\sigma_2^2\gamma^2\right) (\log (u))^2 + 2\left(\sigma_1^2(\gamma-1)+\sigma_2^2\gamma\right) \log (u)\log (s)+(\sigma_1^2+\sigma_2^2)(\log (s))^2 }{2 \sigma_1^2\sigma_2^2}\Biggr)\\
&=\sigma_1^2
\frac{\kal{L}_1(u^\gamma s)} {\sqrt{2 \pi \sigma_1^2}\log (u^\gamma s)}
\exp\Biggl( - \frac{(\log (u))^2}{2(\sigma_1^2+\sigma_2^2)}\Biggr) \exp\Biggl( -\frac{(\log (s))^2 }{2 \sigma_*^2}\Biggr).
\end{align*}
Since $ \sigma_1^2(\gamma -1)+ \sigma_2^2 \gamma=0$, using again Potter's bounds \rE{(see Bingham et al.\ (1987))} and the fact that $\kal{L}_1(\cdot)$ is regularly varying at infinity,
the above derivations imply
\begin{align*}
& \mathbb{P}(e^{\sigma_1 Z_1+\sigma_2 Z_2}>u)\\ &\sim \frac{\sigma_1^2 (\sigma_1^2 +\sigma_2^2) \kal{L}_1(u^{\gamma}) \kal{L}_2(u^{1-\gamma})} {\sigma_2^2\sqrt{2\pi \sigma_2^2\sigma_1^2} \log (u)}\frac{1-\gamma}{\gamma\sqrt{2\pi} }
\exp\Biggl( - \frac{(\log (u))^2}{2(\sigma_1^2+\sigma_2^2)}\Biggr)
\int_0^\infty s^{\beta_1-\beta_2-1} \exp\Biggl( -\frac{(\log (s))^2 }{2 \sigma_*^2 }\Biggr) \ ds \\
&= \frac{ \sqrt{\sigma_1^2+\sigma_2^2} \kal{L}_1(u^{\gamma}) \kal{L}_2(u^{1-\gamma})} {\sqrt{2 \pi} \log (u)}
\exp\Biggl( - \frac{(\log (u))^2}{2(\sigma_1^2+\sigma_2^2)}\Biggr)
\int_0^\infty \frac{1} {\sqrt{2\pi \sigma_*^2}}s^{\beta_1-\beta_2-1} \exp\Biggl( - \frac{(\log (s))^2 }{2 \sigma_*^2}\Biggr) \ ds\\
&= \sqrt{\sigma_1^2+\sigma_2^2} e^{\frac{\sigma_*^2 }{2} (\beta_1-\beta_2)^2 }\frac{ \kal{L}_1(u^{\gamma}) \kal{L}_2(u^{1-\gamma})} {\sqrt{2 \pi} \log (u)}
\exp\Biggl( - \frac{(\log (u))^2}{2(\sigma_1^2+\sigma_2^2)}\Biggr),
\end{align*}
hence the proof is complete.
$\Box$
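The last step of the proof rests on the Gaussian moment identity $\int_0^\infty s^{c-1}\exp(-(\log s)^2/(2\sigma_*^2))\, ds=\sqrt{2\pi}\,\sigma_* e^{\sigma_*^2 c^2/2}$ with $c=\beta_1-\beta_2$, obtained via the substitution $t=\log s$. A standalone numerical sanity check (illustration only; the values of $c$ and $\sigma_*$ are assumed):

```python
import math

# Numerical check (illustration only) of the Gaussian moment identity used above:
#   I(c, sig) := \int_0^\infty s^{c-1} exp(-(log s)^2 / (2 sig^2)) ds
#             =  sqrt(2*pi) * sig * exp(sig^2 * c^2 / 2),
# obtained by the substitution t = log s. Constants below are assumed sample values.
def moment_integral(c, sig, lo=-40.0, hi=40.0, n=400_000):
    h = (hi - lo) / n
    total = 0.0
    for k in range(n + 1):
        t = lo + k * h
        w = 0.5 if k in (0, n) else 1.0       # trapezoidal weights
        total += w * math.exp(c * t - t * t / (2.0 * sig * sig))
    return total * h

c, sig = 1.5, 0.8                              # c plays the role of beta1 - beta2
closed_form = math.sqrt(2.0 * math.pi) * sig * math.exp(sig**2 * c**2 / 2.0)
numeric = moment_integral(c, sig)
print(numeric, closed_form)
```

The integrand is a log-normal-type kernel peaked at $t=\sigma_*^2 c$, so the truncated trapezoidal rule converges rapidly.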
\begin{lem}\label{lem:asymptotictwo} Assume that $n\le n(u)$ with $n(u)$ \rE{defined in \eqref{cnu}} and \rrE{set} $\epsilon(u)=4\log(\log(u))/\log(u)$. If $Y_1$ is an $LN(0,1)$ rv and $X_{i,n}, i\le n$ are as in \netheo{thm}, then as $u\to \infty$
\[
\mathbb{P}(Y_1>u-n u^{1-\epsilon(u)}) \sim \mathbb{P}(Y_1>u)
\]
and for $i\not=j$
\[
\mathbb{P}(Y_{i,n}>u^{1-\epsilon(u)} ,Y_{j,n}>u^{1-\epsilon(u)}) =o(\mathbb{P}(Y_1>u)).
\]
\end{lem}
\prooflem{lem:asymptotictwo} By the assumptions on $n$ and $n(u)$ as $u\to \infty$ we have
\begin{align*}
\mathbb{P}(Y_1>u)\le \mathbb{P}(Y_1>u-n u^{1-\epsilon(u)}) &\le \mathbb{P}(Y_1>u-n(u) u^{1-\epsilon(u)})\\
&=\mathbb{P}\left(Y_1>u-\frac u{ \rrE{(\log (u))^4}} (1+ \eta ) \frac{ \rrE{(\log (u))^2} }{2\log(1+\delta)}\right)\\
&=\mathbb{P}\left(Y_1>u-\frac{(1+ \eta )}{2\log(1+\delta)} \frac u{ \rrE{(\log (u))^2} } \right)\\&\sim\mathbb{P}(Y_1>u).
\end{align*}
\rE{Next, denote by $f$ the probability density function of $Y_1$}. Let further
$\rrE{W_1} $ and $\rrE{W_2} $ be two independent $N(0,1)$ rv's, and write \rE{$\rho_*$} for the correlation between \rE{$\log Y_{i,n}$ and $\log Y_{j,n}$}.
We may write for $u>0$
\begin{eqnarray*}
\mathbb{P}(Y_{i,n}>u^{1-\epsilon(u)} ,Y_{j,n}>u^{1-\epsilon(u)})
&=&
\mathbb{P}( e^{\rrE{W_1} }> u^{1-\epsilon(u)}, e^{\rE{\rho_*}\rrE{W_1} +\sqrt{1-\rE{\rho_*}^2} \rrE{W_2} } > u^{1-\epsilon(u)})\\
&=& \mathbb{P}\left(e^{\rrE{W_1} }>\frac{u}{ \rrE{(\log (u))^4}},e^{ \rE{\rho_*}\rrE{W_1} } e^{\sqrt{1-\rE{\rho_*}^2} \rrE{W_2} } >\frac{u}{ \rrE{(\log (u))^4} } \right)\\
&\le &\mathbb{P}\left(\frac u{ \rrE{(\log (u))^4} }<e^{\rrE{W_1} }<2u,e^{ \rE{\rho_*} \rrE{W_1} } e^{\sqrt{1-\rE{\rho_*}^2} \rrE{W_2} } >\frac{u}{ \rrE{(\log (u))^4} } \right)+\mathbb{P}(e^{\rrE{W_1} }>2u)\\
&=&
\int_{\frac{u}{ \rrE{(\log (u))^4} }}^{2u}
\mathbb{P}\left(e^{\rrE{W_2} }>\left(\frac{u}{ \rrE{(\log (u))^4} x^{\rE{\rho_*}}}\right)^{1/\sqrt{1-\rE{\rho_*}^2}} \right)
f(x) d x +\mathbb{P}(e^{\rrE{W_1} }>2u)\\
&\le &\int_{\frac{u}{ \rrE{(\log (u))^4} }}^{2u}
\mathbb{P}\left(e^{\rrE{W_2} }>\left(\frac{u^{1-\rho_*}}{ \rrE{(\log (u))^4} 2^{\rho_*}}\right)^{1/\sqrt{1-\rE{\rho_*}^2}} \right)
f(x) d x+\mathbb{P}(e^{\rrE{W_1} }>2u)\\
&\le &\mathbb{P}\left(Y_1>\frac{u^{\sqrt{\frac{1-\rho_*}{1+\rho_*}}}}{2^{\rho_*} \rrE{(\log (u))^4} }\right)\mathbb{P}\left(Y_1>\frac{u}{ \rrE{(\log (u))^4} } \right)+\mathbb{P}(e^{\rrE{W_1} }>2u)\\
&=&o(\mathbb{P}(Y_1>u)), \quad u\to \infty
\end{eqnarray*}
since
\begin{eqnarray*}
\left(1+\frac{1-\rho_*}{1+\rho_*}\right)\log(u)&=&\frac{2}{1+\rho_*}\log(u)\\
&\ge& \frac{2}{1+\rho_{n(u)}}\log(u)\\
& \ge& \frac{2}{2-\frac{c^* \log(\log(u))} {\log(u)}} \log(u)\\
&=&\log(u) +\frac{2}{2-\frac{c^* \log(\log(u))} {\log(u)}} c^* \log(\log(u))\\
& \sim &\log(u) +c^* \log(\log(u)).
\end{eqnarray*}
Consequently, the assumption $c^*>8$ entails
\begin{align*}
& 2\log\left(\mathbb{P}\left(Y_1>\frac{u^{\sqrt{\frac{1-\rho_*}{1+\rho_*}}}}{2^{\rho_*} \rrE{(\log (u))^4} }\right)\mathbb{P}\left(Y_1>\frac{u}{ \rrE{(\log (u))^4} } \right) \right)\\& \sim -\left(\log\frac{u^{\sqrt{\frac{1-\rho_*}{1+\rho_*}}}}{2^{\rho_*} \rrE{(\log (u))^4} }\right)^2-
\left(\log\frac{u}{ \rrE{(\log (u))^4} }\right)^2\\
&\sim -\left(1+\frac{1-\rho_*}{1+\rho_*}\right)(\log(u))^2+8\log(u) \log(\log(u))+8\sqrt{\frac{1-\rho_*}{1+\rho_*}}\log(u) \log(\log(u))\\
&\lesssim -(\log(u))^2-(c^*-8) \log(u)\log(\log(u))
\end{align*}
establishing the proof.
$\Box$
\prooftheo{th0}
For any $u>0$ we have
\begin{eqnarray*}
\pk{S_N> u}&=& \pk{ e^{\rho Z_0} \sum_{i=1}^N e^{ \sqrt{1- \rho ^2} Z_i} > u}\\
&=:& \pk{ e^{\rho Z_0} W_N > u}.
\end{eqnarray*}
Since $e^{ \sqrt{1- \rho^2} Z_i},i\ge 1$ are subexponential risks, then along the lines of the proof of
Theorem 3.37 in Foss et al.\ (2013) (see also for similar result Theorem 1.3.9 in Embrechts et al.\ (1997))
$$\pk{W_N> u} \sim \Theta\pk{e^{ \sqrt{1- \rho^2} Z^*}> u}, \quad \Theta:= \E{\sum_{i=1}^N c_i} $$
as $u\to \infty$, with $Z^*$ an independent copy of $Z_0$. It can be easily checked that $Z_0$ and $\log(W_N)/\sqrt{1-\rho^2}$ fulfill the conditions of \nelem{L1}, hence
the asymptotic of $\mathbb{P}(S_N>u)$ follows. Similarly,
\begin{eqnarray*}
Y_{N:N}&=& \max_{1 \le i \le N} e^{ \rho Z_0+ \sqrt{1- \rho^2} Z_i}
= e^{\rho Z_0 } \max_{1 \le i\le N} e^{ \sqrt{1 -\rho^2} Z_i}=:e^{\rho Z_0} W_N^*.
\end{eqnarray*}
Since we have
$$ \pk{W_N^*> u} \sim \pk{W_N>u} \sim \Theta \pk{ \exp( \sqrt{1 - \rho^2} Z^*)> u}, \quad u \to \infty$$
the proof follows by applying once again \nelem{L1}.
$\Box$
\prooftheo{thm} Denote next by $Y_1$ an $LN(0,1)$ rv and let $\mathcal{I}_{\{\cdot\}}$ denote the indicator function.
Since for all fixed $n\ge 1$ we get by interchanging limit and finite sum that
\begin{eqnarray*}
\mathbb{P}(S_N>u)&=&\mathbb{P}(S_N>u,N\le n)+\mathbb{P}(S_N>u,N> n)\\
&\sim& \mean{N \mathcal I_{\{ N\le n\}}} \mathbb{P}(Y_1>u)+\mathbb{P}(S_N>u,N> n)
\end{eqnarray*}
we can assume w.l.o.g. that $\rho_{i,j}^n\le \rho_n$. From \eqref{conN} it follows that there exist $C_1,C_2>0$ such that
\begin{align*}
p_n:=\mathbb{P}(N=n)\le C_1 (1+\delta)^{-n} \quad\text{and}\quad \mathbb{P}(N>n)\le C_2(1+\delta)^{-n}.
\end{align*}
By the independence of $N$ and the claim sizes
\[
\mathbb{P}(S_N>u)=\sum_{n=1}^\infty p_n\mathbb{P}(S_n>u)
\]
and for $n(u)$ defined in \eqref{cnu}
\begin{eqnarray*}
\sum_{n=n(u)}^\infty p_n\mathbb{P}(S_n>u)&\le& \mathbb{P}(N>n(u)) \\
&\le &C_2(1+\delta)^{-n(u)} \\
&\le &C_2\exp\left(-\frac{1+ \eta }2 \rrE{(\log (u))^2} \right)\\
&=&o(\mathbb{P}(Y_1>u)).
\end{eqnarray*}
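The conditioning identity $\mathbb{P}(S_N>u)=\sum_{n} p_n\mathbb{P}(S_n>u)$ used above can be sanity-checked by exact enumeration; the following sketch uses a hypothetical toy model (uniform $N$ on $\{1,2,3\}$ and two-point claims, not the lognormal model of this paper):

```python
from itertools import product
from fractions import Fraction as F

# Hypothetical toy model: N uniform on {1,2,3}, claims Y_i i.i.d. uniform on {1,2}.
p_n = {1: F(1, 3), 2: F(1, 3), 3: F(1, 3)}
u = 2

# Left side: enumerate (N, Y_1, Y_2, Y_3) jointly; only the first N claims count.
lhs = sum(
    p_n[n] * F(1, 8)
    for n in p_n
    for ys in product([1, 2], repeat=3)
    if sum(ys[:n]) > u
)

# Right side: condition on N first, as in the displayed identity.
def p_exceed(n):
    """P(S_n > u) by exact enumeration of the 2^n outcomes."""
    outcomes = list(product([1, 2], repeat=n))
    return F(sum(1 for ys in outcomes if sum(ys) > u), len(outcomes))

rhs = sum(p_n[n] * p_exceed(n) for n in p_n)
```

Both computations return $7/12$, confirming the decomposition on this toy example.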
Since
\[\mathbb{P}(S_n>u)\ge n\mathbb{P}(Y_1>u)-\sum_{i\not=j} \mathbb{P}(Y_i>u,Y_j>u)\]
and by Lemma \ref{lem:asymptotictwo}
$$\mathbb{P}(Y_i>u,Y_j>u)=o(\mathbb{P}(Y_1>u)), \quad u\to \infty$$
it follows that
\begin{eqnarray*}
\sum_{n=0} ^{n(u)} p_n\mathbb{P}(S_n>u)&\ge& \mathbb{P}(Y_1>u) \left(\sum_{n=0} ^{n(u)} n p_n- o(1)\sum_{n=0} ^{n(u)} n^2 p_n \right)\\
&\sim &\mean{N} \mathbb{P}(Y_1>u), \quad u\to \infty.
\end{eqnarray*}
So we are left with finding an asymptotic upper bound. For $n\le n(u)$ we use the \rrE{following} decomposition (cf.\ \pD{Asmussen and Rojas-Nandayapa (2008)})
\begin{align*}
\mathbb{P}(S_n>u)=\sum_{i=1}^n \left[\mathbb{P}\left(S_n>u,Y_{i,n}\ge Y_{j,n},\max_{j\not=i} Y_{j,n}>u^{1-\epsilon(u)}\right) +\mathbb{P}\left(S_n>u,Y_{i,n}\ge Y_{j,n},\max_{j\not=i} Y_{j,n}\le u^{1-\epsilon(u)}\right)\right],
\end{align*}
where $\epsilon(u)=4\log(\log(u))/\log(u)$. By Lemma \ref{lem:asymptotictwo} we have
\begin{align*}
\sum_{i=1}^n \mathbb{P}\left(S_n>u,Y_{i,n}\ge Y_{j,n},\max_{j\not=i} Y_{j,n}>u^{1-\epsilon(u)}\right)
&\le \sum_{i=1}^n\sum_{j\not=i} \mathbb{P}\left(S_n>u,Y_{i,n}\ge Y_{j,n}, Y_{j,n}>u^{1-\epsilon(u)}\right)\\
&\le \sum_{i=1}^n\sum_{j\not=i} \mathbb{P}\left(Y_{i,n}>u^{1-\epsilon(u)},Y_{j,n}>u^{1-\epsilon(u)}\right)\\
&=n(n-1) o(\mathbb{P}(Y_1>u)).
\end{align*}
Further
\begin{align*}
\mathbb{P}\left(S_n>u,Y_{i,n}\ge Y_{j,n},\max_{j\not=i} Y_{j,n}\le u^{1-\epsilon(u)}\right)&\le\mathbb{P}\left(Y_{i,n}>u-\sum_{j\not=i} Y_{j,n},\max_{j\not=i} Y_{j,n}\le u^{1-\epsilon(u)}\right)\\
&\le \mathbb{P}\left(Y_{i,n}>u-n u^{1-\epsilon(u)},\max_{j\not=i} Y_{j,n}\le u^{1-\epsilon(u)}\right)\\
&\le\mathbb{P}\left(Y_{i,n}>u-n u^{1-\epsilon(u)}\right) \\
& \sim \mathbb{P}(Y_1>u)
\end{align*}
as $u \to \infty$, hence the proof for the tail asymptotics of $S_N$ follows \eeE{by} applying Lemma \ref{lem:asymptotictwo}. Since for any $u>0$
\[ n\mathbb{P}(Y_1>u)-\sum_{i\not=j} \mathbb{P}(Y_i>u,Y_j>u)\le \mathbb{P}\left( \max_{1\le i \le n}Y_{i,n}>u\right)\le \mathbb{P}(S_n>u)
\]
\rrE{the tail asymptotics of $\max_{1\le i \le N}Y_{i,N}$ can be easily established, \eeE{and} thus the proof is complete.}
$\Box$
\textbf{Acknowledgments.} We would like to thank the referees of the paper for several suggestions which improved our manuscript.
E. Hashorva kindly acknowledges partial support from the Swiss
National Science Foundation Grant 200021-140633/1 and RARE-318984 (an FP7 Marie Curie IRSES Fellowship).
\begin{thebibliography}{9}
\bibitem{}
Asmussen, S., Blanchet, J., Juneja, S., Rojas-Nandayapa, L. (2011)
Efficient simulation of tail probabilities of sums of correlated lognormals. \emph{Ann. Oper. Res.} {\bf 189}, 5--23.
\bibitem{}
Asmussen, S., Rojas-Nandayapa, L. (2008) Sums of dependent log-normal random variables with Gaussian copula. \emph{Stat. Probab. Lett.} {\bf 78}, 2709--2714.
\bibitem{BERMANc}
{Berman, S.M. (1992) {\it Sojourns and Extremes of Stochastic Processes}, Wadsworth \& Brooks/Cole, Boston.}
\bibitem{}
Bingham, N.H., Goldie, C.M., Teugels, J.L. (1987) {\it Regular Variation}. Cambridge, Cambridge University Press.
\bibitem{}
Constantinescu, C., Hashorva, E., Ji, L. (2011) The Archimedean copula in finite and infinite dimensions - with applications to ruin problems. \emph{Insurance: Mathematics and Economics}, {\bf 49}, 487--495.
\bibitem{}Embrechts, P., Kl\"{u}ppelberg, C., Mikosch, T. (1997)
\textit{Modeling Extremal Events for Finance and Insurance.} Berlin, Springer.
\bibitem{}
Farkas, J., Hashorva, E. (2013) Tail approximation for reinsurance portfolios of Gaussian-like risks.
\emph{Scandinavian Actuarial Journal}, DOI 10.1080/03461238.2013.825639, in press.
\bibitem{}Foss, S., Korshunov, D., Zachary, S. (2013)
\textit{An Introduction to Heavy-Tailed and Subexponential Distributions.} Springer-Verlag, 2nd Edition, New York.
\bibitem{} Foss, S., Richards, A. (2010) On sums of conditionally independent subexponential random variables,
\emph{Mathematics of Operations Research}, {\bf 35}, 102--119.
\bibitem{} Hashorva, E. (2013) Exact tail asymptotics of aggregated parametrised risk. \emph{J. Math. Anal. Appl.}, {\bf 400}, 187--199.
\bibitem{}
Jiang, T., Gao, Q., Wang, Y. (2014) Max-sum equivalence of conditionally dependent random variables.
\emph{Stat. Probab. Lett.} {\bf 84}, 60--66.
\bibitem{}
Kortschak, D. (2012) Second order tail asymptotics for the sum of dependent, tail-independent regularly varying risks.
\emph{Extremes}, {\bf 15}, 353--388.
\bibitem{}
Kortschak, D., Hashorva, E. (2014) Second order asymptotics of aggregated log-elliptical risk. \emph{Meth. Comp. Appl. Probab.}, DOI 10.1007/s11009-013-9356-5, in press.
\bibitem{}
Kortschak, D., Hashorva, E. (2013)
Efficient simulation of tail probabilities for sums of log-elliptical risks. \emph{J. Comp. Appl. Math.}, {\bf 247}, 53--67.
\bibitem{}
Mikosch, T. (2009) {\it Non-Life Insurance Mathematics. An Introduction with the Poisson Process}. 2nd Edition, Springer.
\bibitem{}
Mitra, A., Resnick, S.I. (2009) Aggregation of rapidly varying risks and asymptotic independence. \emph{Adv. Appl. Probab.} {\bf 41}, 797--828.
\bibitem {PIT96} Piterbarg, V.I. (1996) \textit{Asymptotic Methods in the Theory of Gaussian Processes and Fields}. AMS, Providence.
\bibitem {resnick2} Resnick, S.I. (1987) \textit{Extreme Values, Regular Variation and Point Processes.} Springer, New York.
\end{thebibliography}
\end{document}
{\bf Another question}:
With $Y_i(\rho)=e^{ \sqrt{\rho} Z+ \sqrt{1- \rho} X_i}$, where $Z,X_i$ are iid $N(0,1)$, what can we say about
\begin{eqnarray}
\pk{Y_1(\rho_1)+ Y_2(\rho_1)> u,Y_1(\rho_2)+ Y_2(\rho_2)> a u}, \quad u \to \infty
\end{eqnarray}
for $\rho_1 \not=\rho_2$ and $a\in (0,1]$?
\section{On another question}
The first task is to find $\theta_1,\theta_2,\theta_3$ which maximize the function
\[
f(\mathbf {\theta}):= \min\left(\max\left( \sqrt{\rho} \theta_1+\sqrt{1-\rho}\theta_2,\sqrt{\rho} \theta_1+\sqrt{1-\rho}\theta_3\right),\max\left( \sqrt{\gamma} \theta_1+\sqrt{1-\gamma}\theta_2,\sqrt{\gamma} \theta_1+\sqrt{1-\gamma}\theta_3\right) \right)
\]
under the condition $\theta_1^2+\theta_2^2+\theta_3^2=1$. W.l.o.g. we can assume that $\theta_i\ge0$.
Now there are different cases.
If $\theta_2>\theta_3$ (the case $\theta_2<\theta_3$ is symmetric and hence will not be considered) then
\[
f(\mathbf {\theta})= \min\left(\sqrt{\rho} \theta_1+\sqrt{1-\rho}\theta_2, \sqrt{\gamma} \theta_1+\sqrt{1-\gamma}\theta_2 \right)
\]
Now this is maximized if $\theta_3=0$ and $\theta_2=\sqrt{1-\theta_1^2}$,
hence we have to consider
\[
f(\mathbf {\theta})= \min\left(\sqrt{\rho} \theta_1+\sqrt{1-\rho}\sqrt{1-\theta_1^2}, \sqrt{\gamma} \theta_1+\sqrt{1-\gamma}\sqrt{1-\theta_1^2} \right)
\]
Since $\sqrt{\rho} \theta_1+\sqrt{1-\rho}\sqrt{1-\theta_1^2}$ is a concave function we get that the optimal $\theta_1$ fulfills the equation
\[
\sqrt{\rho} \theta_1+\sqrt{1-\rho}\sqrt{1-\theta_1^2}= \sqrt{\gamma} \theta_1+\sqrt{1-\gamma}\sqrt{1-\theta_1^2}.
\]
It follows that
\begin{align*}
\theta^*:=\theta_1&=\frac {1}{\sqrt{1+\left(\frac{\sqrt{\rho}-\sqrt{\gamma}}{\sqrt{1-\gamma}-\sqrt{1-\rho}} \right)^2 }}=\frac {1}{\sqrt{1+\frac{\gamma +\rho-2\sqrt{ \rho\gamma}}{2-\rho-\gamma -2\sqrt{(1-\rho)(1-\gamma)}} }}=\sqrt{\frac{2-\rho-\gamma -2\sqrt{(1-\rho)(1-\gamma)}} {2-2\sqrt{\rho\gamma} -2\sqrt{(1-\rho)(1-\gamma)}}}.\\
\theta_2&=\frac {\frac{\sqrt{\rho}-\sqrt{\gamma}}{\sqrt{1-\gamma}-\sqrt{1-\rho}} }{\sqrt{1+\left(\frac{\sqrt{\rho}-\sqrt{\gamma}}{\sqrt{1-\gamma}-\sqrt{1-\rho}} \right)^2 }}\quad\text{and}\quad \theta_3=0.
\end{align*}
Finally we have that
\begin{align*}
f(\theta^*)&=\left(\sqrt{\rho}+ \sqrt{1-\rho} \frac{\sqrt{\rho}-\sqrt{\gamma}}{\sqrt{1-\gamma}-\sqrt{1-\rho}} \right)\theta^*\\
&=\frac{\sqrt{\rho(1-\gamma)}-\sqrt{\gamma(1-\rho)} }{\sqrt{1-\gamma}-\sqrt{1-\rho}} \theta^*\\
&=\frac{\left|\sqrt{\rho(1-\gamma)}-\sqrt{\gamma(1-\rho)} \right|}{\sqrt{2-2\sqrt{\rho\gamma} -2\sqrt{(1-\rho)(1-\gamma)}}}.
\end{align*}
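The closed-form expressions for $\theta^*$ and $f(\theta^*)$ can be verified numerically; the sketch below (our own check, for the illustrative choice $\rho=0.9$, $\gamma=0.4$, with the factors of $2$ under the square roots written out explicitly) solves the defining equation by bisection and compares with the closed forms:

```python
import math

rho, gamma = 0.9, 0.4  # illustrative values with rho > gamma

def f_rho(t):
    """sqrt(rho)*theta_1 + sqrt(1-rho)*sqrt(1-theta_1^2)."""
    return math.sqrt(rho) * t + math.sqrt(1 - rho) * math.sqrt(1 - t * t)

def f_gamma(t):
    return math.sqrt(gamma) * t + math.sqrt(1 - gamma) * math.sqrt(1 - t * t)

# Bisection on g(t) = f_rho(t) - f_gamma(t): g(0) < 0 < g(1) when rho > gamma.
lo, hi = 0.0, 1.0
for _ in range(80):
    mid = (lo + hi) / 2
    if f_rho(mid) - f_gamma(mid) < 0:
        lo = mid
    else:
        hi = mid
theta_num = (lo + hi) / 2

# Closed form for theta^* (factors of 2 made explicit).
num = 2 - rho - gamma - 2 * math.sqrt((1 - rho) * (1 - gamma))
den = 2 - 2 * math.sqrt(rho * gamma) - 2 * math.sqrt((1 - rho) * (1 - gamma))
theta_closed = math.sqrt(num / den)

# Closed form for f(theta^*); note num = (sqrt(1-gamma)-sqrt(1-rho))^2.
f_closed = abs(math.sqrt(rho * (1 - gamma))
               - math.sqrt(gamma * (1 - rho))) / math.sqrt(den)
```

The agreement follows from the identity $2-\rho-\gamma-2\sqrt{(1-\rho)(1-\gamma)}=(\sqrt{1-\gamma}-\sqrt{1-\rho})^2$, so that $\theta^*=(\sqrt{1-\gamma}-\sqrt{1-\rho})/\sqrt{\mathrm{den}}$.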
Now the marginal density of $\theta_1$ is given by
\[
f_1(x)=\frac12, \quad x\in (\cK{-1},1)
\]
and the conditional density of $\theta_2/\sqrt{1-\theta_1^2}$ given $\theta_1$ is
\[ f_2(x|\theta_1)=\frac{1}{\pi} (1-x^2)^{-\frac{1}2}, \quad x\in (\cK{-1},1).
\]
We have to evaluate the integrals
\begin{align*}
&\frac{1}{2\pi}\int_{-1}^1 \int_{-1}^1
\mathbb{P}\Bigg(
e^{R\left(\sqrt{\rho} \theta+ \sqrt{(1-\rho)(1-\theta^2)} x\right)}+
e^{R\left(\sqrt{\rho} \theta+ \sqrt{(1-\rho)(1-\theta^2)(1-x^2)} \right)}>u\\&\quad\quad\quad\quad\quad\quad,
e^{R\left(\sqrt{\gamma} \theta+ \sqrt{(1-\gamma)(1-\theta^2)} x\right)}+e^{R\left(\sqrt{\gamma} \theta+ \sqrt{(1-\gamma)(1-\theta^2)(1-x^2)} \right)}>au
\Bigg)
(1-x^2)^{-\frac{1}2}d xd \theta\\
&+\frac{1}{2\pi}\int_{-1}^1 \int_{-1}^1
\mathbb{P}\Bigg(
e^{R\left(\sqrt{\rho} \theta+ \sqrt{(1-\rho)(1-\theta^2)} x\right)}+
e^{R\left(\sqrt{\rho} \theta- \sqrt{(1-\rho)(1-\theta^2)(1-x^2)} \right)}>u\\&\quad\quad\quad\quad\quad\quad,
e^{R\left(\sqrt{\gamma} \theta+ \sqrt{(1-\gamma)(1-\theta^2)} x\right)}+e^{R\left(\sqrt{\gamma} \theta- \sqrt{(1-\gamma)(1-\theta^2)(1-x^2)} \right)}>au
\Bigg) (1-x^2)^{-\frac{1}2}
d xd \theta.
\end{align*}
Considering the integration domain we see that an asymptotically significant contribution requires $\theta \approx\theta^*$ and either $x\approx 1$ or $x\approx 0$. Since the case $x\approx 0$ can best be dealt with by interchanging the roles of $\theta_2$ and $\theta_3$, and hence is in some sense symmetric to the case $x\approx 1$, we will only consider the case $x\approx 1$ and $\theta \approx \theta^*$.
For an $\epsilon>0$ define the set
$A_\epsilon=\{(\theta,x): |\theta-\theta^*|<\epsilon,\,x>1-\epsilon\}$ and choose $\epsilon$ small enough such that there exists a $\delta>0$ with
\begin{align*}
\sup_{(\theta,x)\in A_\epsilon} \left(\sqrt{\rho} \theta+ \sqrt{(1-\rho)(1-\theta^2)(1-x^2)} \right)+\delta&<\inf_{(\theta,x)\in A_\epsilon} \left(\sqrt{\rho} \theta+ \sqrt{(1-\rho)(1-\theta^2)} x\right)\\
\sup_{(\theta,x)\in A_\epsilon} \left(\sqrt{\gamma} \theta+ \sqrt{(1-\gamma)(1-\theta^2)(1-x^2)} \right)+\delta&<\inf_{(\theta,x)\in A_\epsilon} \left(\sqrt{\gamma} \theta+ \sqrt{(1-\gamma)(1-\theta^2)} x\right)
\end{align*}
It follows that as $u\to\infty$
\begin{align*}
&\frac{1}{2\pi}\int_{A_\epsilon}
\mathbb{P}\Bigg(
e^{R\left(\sqrt{\rho} \theta+ \sqrt{(1-\rho)(1-\theta^2)} x\right)}+
e^{R\left(\sqrt{\rho} \theta+ \sqrt{(1-\rho)(1-\theta^2)(1-x^2)} \right)}>u\\&\quad\quad\quad\quad\quad\quad,
e^{R\left(\sqrt{\gamma} \theta+ \sqrt{(1-\gamma)(1-\theta^2)} x\right)}+e^{R\left(\sqrt{\gamma} \theta+ \sqrt{(1-\gamma)(1-\theta^2)(1-x^2)} \right)}>au
\Bigg)
(1-x^2)^{-\frac{1}2}d xd \theta\\
&\sim \frac{1}{2\pi}\int_{\theta^*-\epsilon}^{\theta^*+\epsilon} \int_{1-\epsilon}^1
\mathbb{P}\Bigg(
e^{R\left(\sqrt{\rho} \theta+ \sqrt{(1-\rho)(1-\theta^2)} x\right)}>u,e^{R\left(\sqrt{\gamma} \theta+ \sqrt{(1-\gamma)(1-\theta^2)} x\right)}>au \Bigg)
(1-x^2)^{-\frac{1}2}d xd \theta\\
&= \frac{1}{2\pi}\int_{\theta^*-\epsilon}^{\theta^*+\epsilon} \int_{1-\epsilon}^1
\mathbb{P}\Bigg(
R>\frac{\log(u)} {\sqrt{\rho} \theta+ \sqrt{(1-\rho)(1-\theta^2)}x},R>\frac{\log(u) +\log(a)}{\sqrt{\gamma} \theta+ \sqrt{(1-\gamma)(1-\theta^2)} x} \Bigg)
(1-x^2)^{-\frac{1}2}d xd \theta\\
&\approx \frac 1 {\sqrt{1-{\theta^*}^2} } \frac{1}{2\pi}\int_{\theta^*-\epsilon}^{\theta^*+\epsilon} \int_{\sqrt{1-\theta^2} (1-\epsilon)}^{\sqrt{1-\theta^2}}
\mathbb{P}\Bigg(
R>\frac{\log(u)} {\sqrt{\rho} \theta+ \sqrt{(1-\rho)}x},R>\frac{\log(u) +\log(a)}{\sqrt{\gamma} \theta+ \sqrt{(1-\gamma)} x} \Bigg)
\left(1-\frac{x^2}{{1-{\theta}^2}}\right)^{-\frac{1}2}d xd \theta\\
&\approx \frac 1 {\sqrt{1-{\theta^*}^2} } \frac{\log(u)}{\sqrt{2\pi^3}\left(\sqrt{\gamma} \theta^*+ \sqrt{(1-\gamma)(1-{\theta^*}^2)}\right) } \\&\quad\times \int_{\theta^*-\epsilon}^{\theta^*+\epsilon} \int_{\sqrt{1-\theta^2} -\epsilon}^{\sqrt{1-\theta^2 }}
\exp\left\{-\frac 12 \max\left(
\frac{\log(u)} {\sqrt{\rho} \theta+ \sqrt{(1-\rho)}x},\frac{\log(u) +\log(a)}{\sqrt{\gamma} \theta+ \sqrt{(1-\gamma)} x} \right)^2\right\}
\left(1-\frac{x^2}{{1-{\theta}^2}}\right)^{-\frac{1}2}d xd \theta
\end{align*}
Here we used that
\[
\mathbb{P}(R>u)=\int_u^\infty \frac{\sqrt{2}}{\sqrt{\pi}} x^2e^{-\frac{x^2}2} d x\sim\frac{\sqrt{2}}{\sqrt{\pi}} ue^{-\frac{u^2}2}.
\]
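This tail equivalence for the radius $R$ (chi-distributed with three degrees of freedom) can be checked against the exact expression $\mathbb{P}(R>u)=\sqrt{2/\pi}\,u e^{-u^2/2}+\operatorname{erfc}(u/\sqrt 2)$, obtained by one integration by parts; a minimal numeric sketch (our own verification):

```python
import math

def tail_exact(u):
    """P(R > u) for R ~ chi with 3 d.f., via integration by parts:
    int_u^inf sqrt(2/pi) x^2 exp(-x^2/2) dx
      = sqrt(2/pi) u exp(-u^2/2) + erfc(u/sqrt(2))."""
    return (math.sqrt(2 / math.pi) * u * math.exp(-u * u / 2)
            + math.erfc(u / math.sqrt(2)))

def tail_asym(u):
    """Leading-order asymptotic used in the text."""
    return math.sqrt(2 / math.pi) * u * math.exp(-u * u / 2)

# The ratio decreases towards 1 as u grows, confirming the ~ relation.
ratios = [tail_exact(u) / tail_asym(u) for u in (3.0, 6.0, 10.0)]
```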
To evaluate the last integral, note that by Taylor expansion
\begin{align*}
& \frac{1} {\sqrt{\rho} \theta+ \sqrt{(1-\rho)}x}=\frac{1} {\sqrt{\rho} \theta^*+ \sqrt{(1-\rho)}x}-\frac{(\theta-\theta^*)\sqrt{\rho}} {(\sqrt{\rho} \theta^*+ \sqrt{(1-\rho)}x)^2} +\frac{(\theta-\theta^*)^2\rho} {\left(\sqrt{\rho} (\theta^*+\xi_1)+ \sqrt{(1-\rho)}x\right)^3}\\
& =\frac{1} {\sqrt{\rho} \theta^*+ \sqrt{(1-\rho)(1-\theta^2)}} -\frac{\sqrt{1-\rho}(x-\sqrt{1-\theta^2})} {\left(\sqrt{\rho} \theta^* +\sqrt{(1-\rho)(1-\theta^2)}\right)^2} -\frac{(\theta-\theta^*)\sqrt{\rho}} {(\sqrt{\rho} \theta^*+ \sqrt{(1-\rho)(1-\theta^2)})^2}
\\&\quad+2 \frac{\sqrt{\rho(1-\rho)}(\theta-\theta^*)(x-\sqrt{1-\theta^2})} {\left(\sqrt{\rho} \theta^*+ \sqrt{(1-\rho)}\left(\sqrt{(1-\theta^2)}+\xi_3\right) \right)^3}
\\&\quad +\frac{(\theta-\theta^*)^2\rho} {\left(\sqrt{\rho} (\theta^*+\xi_1)+ \sqrt{(1-\rho)}x\right)^3}+ \frac{(1-\rho)(x-\sqrt{1-\theta^2})^2} {\left(\sqrt{\rho} \theta^* +\sqrt{(1-\rho)}\left(\sqrt{(1-\theta^2)}+\xi_2\right)\right)^3}
\\
& =\frac{1} {\sqrt{\rho} \theta^*+ \sqrt{(1-\rho)(1-{\theta^*}^2)}} -\frac{\sqrt{1-\rho}(x-\sqrt{1-{\theta}^2})} {\left(\sqrt{\rho} \theta^* +\sqrt{(1-\rho)(1-{\theta^*}^2)}\right)^2} -\frac{(\theta-\theta^*)\left(\sqrt{\rho}-\frac {\sqrt{1-\rho} \theta^*}{\sqrt{1-{\theta^*}^2}} \right)}
{(\sqrt{\rho} \theta^*+ \sqrt{(1-\rho)(1-{\theta^*}^2)})^2}
\\&\quad+\frac{(\theta-\theta^*)^2\frac {(1-\rho) (\theta^*+\xi_3)^2}{1-(\theta^*+\xi_3)^2}}
{(\sqrt{\rho} \theta^*+ \sqrt{(1-\rho)(1-(\theta^*+\xi_3)^2)})^3}
+\frac{(\theta-\theta^*)^2 \frac{\sqrt{1-\rho}}{\sqrt{(1-(\theta^*+\xi_3)^2)^3}}}
{(\sqrt{\rho} \theta^*+ \sqrt{(1-\rho)(1-(\theta^*+\xi_3)^2)})^2}
\\
&\quad - \frac{2 (1-\rho)(x-\sqrt{1-\theta^2})(\theta-\theta^*) \frac{\theta^*+\xi_4}{\sqrt{1-(\theta^*+\xi_4)^2} }} {\left(\sqrt{\rho} \theta^* +\sqrt{(1-\rho)(1-{(\theta^*+\xi_4)}^2)}\right)^3}- \frac{2 \sqrt{\rho(1-\rho)}(\theta-\theta^*)^2 \frac{\theta^*+\xi_5}{\sqrt{1-(\theta^*+\xi_5)^2} }} {\left(\sqrt{\rho} \theta^* +\sqrt{(1-\rho)(1-{(\theta^*+\xi_5)}^2)}\right)^3}
\\&\quad+2 \frac{\sqrt{\rho(1-\rho)}(\theta-\theta^*)(x-\sqrt{1-\theta^2})} {\left(\sqrt{\rho} \theta^*+ \sqrt{(1-\rho)}\left(\sqrt{(1-\theta^2)}+\xi_3\right) \right)^3}
\\&\quad +\frac{(\theta-\theta^*)^2\rho} {\left(\sqrt{\rho} (\theta^*+\xi_1)+ \sqrt{(1-\rho)}x\right)^3}+ \frac{(1-\rho)(x-\sqrt{1-\theta^2})^2} {\left(\sqrt{\rho} \theta^* +\sqrt{(1-\rho)}\left(\sqrt{(1-\theta^2)}+\xi_2\right)\right)^3}
\end{align*}
A first check indicated (this has to be redone to be sure) that for $\epsilon$ sufficiently small the sum of all terms that are of higher order than $1$ is positive (the matrix of the second order derivatives is positive definite).
Define the constants
\begin{align*}
c_\rho=\frac{1} {\sqrt{\rho} \theta^*+ \sqrt{(1-\rho)(1-{\theta^*}^2)}} \quad &\text{and}\quad c_\gamma=\frac{1} {\sqrt{\gamma} \theta^*+ \sqrt{(1-\gamma)(1-{\theta^*}^2)}}\\
d_\rho= \left(\sqrt{\rho}-\sqrt{1-\rho}\frac{\theta^*}{\sqrt{1-{\theta^*}^2}} \right) \quad &\text{and}\quad d_\gamma= \left(\sqrt{\gamma}-\sqrt{1-\gamma}\frac{\theta^*}{\sqrt{1-{\theta^*}^2}} \right)
\end{align*}
and note that $c_\gamma=c_\rho$ as well as $d_\gamma=-d_\rho$.
To get the asymptotics of the integral we use the Taylor expansion to obtain an upper bound (since it is then straightforward to prove that this is also a lower bound, we omit that step); here $A_\rho$ and $A_\gamma$ are certain positive definite matrices.
\begin{align*}
&\int_{\theta^*-\epsilon}^{\theta^*+\epsilon} \int_{\sqrt{1-\theta^2} -\epsilon}^{\sqrt{1-\theta^2 }}
\exp\left\{-\frac 12 \max\left(
\frac{\log(u)} {\sqrt{\rho} \theta+ \sqrt{(1-\rho)}x},\frac{\log(u) +\log(a)}{\sqrt{\gamma} \theta+ \sqrt{(1-\gamma)} x} \right)^2\right\}
\left(1-\frac{x^2}{{1-{\theta}^2}}\right)^{-\frac{1}2}d xd \theta\\
&\le \int_{\theta^*-\epsilon}^{\theta^*+\epsilon} \int_{\sqrt{1-\theta^2} -\epsilon}^{\sqrt{1-\theta^2 }}
\exp\Bigg\{-\frac { \rrE{(\log (u))^2} }2 \max\Bigg(
c_\rho -c_\rho^2\sqrt{1-\rho}(x-\sqrt{1-{\theta}^2}) -c_\rho^2d_\rho(\theta-\theta^*)
+{{x-\sqrt{1-\theta^2}}\choose{\theta- \theta^* }}^T A_\rho {{x-\sqrt{1-\theta^2}}\choose{\theta- \theta^* }}
,\\&\quad c_\gamma -c_\gamma^2\sqrt{1-\rho}(x-\sqrt{1-{\theta}^2}) -c_\gamma^2d_\gamma(\theta-\theta^*)+c_\gamma\frac{\log(a)}{\log(u)}
+{{x-\sqrt{1-\theta^2}}\choose{\theta- \theta^* }}^T A_\gamma {{x-\sqrt{1-\theta^2}}\choose{\theta- \theta^* }} \Bigg)^2\Bigg\}
\left(1-\frac{x^2}{{1-{\theta}^2}}\right)^{-\frac{1}2}d xd \theta\\
&= \int_{\theta^*-\epsilon}^{\theta^*+\epsilon} \int_{0}^{\epsilon}
\exp\Bigg\{-\frac { \rrE{(\log (u))^2} }2 \max\Bigg(
c_\rho +c_\rho^2\sqrt{1-\rho}x -c_\rho^2d_\rho(\theta-\theta^*)
+{{-x}\choose{\theta- \theta^* }}^T A_\rho {{-x}\choose{\theta- \theta^* }}
,\\&\quad c_\gamma +c_\gamma^2\sqrt{1-\rho}x -c_\gamma^2d_\gamma(\theta-\theta^*)+c_\gamma\frac{\log(a)}{\log(u)}
+ {{-x}\choose{\theta- \theta^* }}^T A_\gamma {{-x}\choose{\theta- \theta^* }} \Bigg)^2\Bigg\}
\left(1-\frac{(-x+\sqrt{1-\theta^2})^2}{{1-{\theta}^2}}\right)^{-\frac{1}2}d xd \theta\\
&= \frac 1{ \rrE{(\log (u))^2} }\int_{-\epsilon\log(u)}^{\epsilon\log(u)} \int_{0}^{\epsilon\log(u)}
\exp\Bigg\{-\frac {c_\rho^2 \rrE{(\log (u))^2} }2 \max\Bigg(
1 +c_\rho\sqrt{1-\rho}\frac x{\log(u)} -c_\rho d_\rho \frac \theta{\log(u)}
+ \frac 1{c_\rho \rrE{(\log (u))^2} } {{-x}\choose{\theta }}^T A_\rho {{-x}\choose{\theta }}
,\\&\quad 1 +c_\gamma\sqrt{1-\rho}\frac x{\log(u)} -c_\gamma d_\gamma \frac \theta{\log(u)}+\frac{\log(a)}{\log(u)}
+\frac 1{c_\gamma \rrE{(\log (u))^2} } {{-x}\choose{\theta}}^T A_\gamma {{-x}\choose{\theta }} \Bigg)^2\Bigg\}
\left(1-\frac{\left(\frac {-x} {\log(u)} +\sqrt{1-\left(\theta^* +\frac \theta {\log(u)}\right) ^2}\right)^2}{1-\left(\theta^* +\frac \theta {\log(u)}\right) ^2}\right)^{-\frac{1}2}d xd \theta\\
&= \frac 1{ \rrE{(\log (u))^2} }\int_{-\epsilon\log(u)}^{\epsilon\log(u)} \int_{0}^{\epsilon\log(u)}
\exp\Bigg\{-\frac {c_\rho^2 \rrE{(\log (u))^2} }2 \max\Bigg(
1 +c_\rho\sqrt{1-\rho}\frac x{\log(u)} -c_\rho d_\rho \frac \theta{\log(u)}
+ \frac 1{c_\rho \rrE{(\log (u))^2} } {{-x}\choose{\theta }}^T A_\rho {{-x}\choose{\theta }}
,\\&\quad 1 +c_\gamma\sqrt{1-\rho}\frac x{\log(u)} -c_\gamma d_\gamma \frac \theta{\log(u)}+\frac{\log(a)}{\log(u)}
+\frac 1{c_\gamma \rrE{(\log (u))^2} } {{-x}\choose{\theta}}^T A_\gamma {{-x}\choose{\theta }} \Bigg)^2\Bigg\}
\left(\frac{ \frac {2x} {\log(u)} \sqrt{1-\left(\theta^* +\frac \theta {\log(u)}\right) ^2}-\frac {x^2} { \rrE{(\log (u))^2} }}{1-\left(\theta^* +\frac \theta {\log(u)}\right) ^2}\right)^{-\frac{1}2}d xd \theta\\
&\approx \frac 1{\sqrt{2}\log(u)^{3/2}} \left(1-\left(\theta^* +\frac \theta {\log(u)}\right) ^2\right)^{1/4}
\\&\times\int_{-\epsilon\log(u)}^{\epsilon\log(u)} \int_{0}^{\epsilon\log(u)}x^{-1/2}
\exp\Bigg\{-\frac {c_\rho^2 \rrE{(\log (u))^2} }2 \max\Bigg(
1 +c_\rho\sqrt{1-\rho}\frac x{\log(u)} -c_\rho d_\rho \frac \theta{\log(u)}
+ \frac 1{c_\rho \rrE{(\log (u))^2} } {{-x}\choose{\theta }}^T A_\rho {{-x}\choose{\theta }}
,\\&\quad 1 +c_\gamma\sqrt{1-\rho}\frac x{\log(u)} -c_\gamma d_\gamma \frac \theta{\log(u)}+\frac{\log(a)}{\log(u)}
+\frac 1{c_\gamma \rrE{(\log (u))^2} } {{-x}\choose{\theta}}^T A_\gamma {{-x}\choose{\theta }} \Bigg)^2\Bigg\}
d xd \theta
\end{align*}
Intuitively when $d_\rho>0$ this leads to the integrals
\begin{align*}
& \int_{-\epsilon\log(u)}^{\frac{-\log(a)}{2c_\rho d_\rho}} \int_{0}^{\epsilon\log(u)}x^{-1/2}
\exp\Bigg\{-\frac {c_\rho^2 \rrE{(\log (u))^2} }2 \Bigg(
1 +c_\rho\sqrt{1-\rho}\frac x{\log(u)} -c_\rho d_\rho \frac \theta{\log(u)}
+ \frac 1{c_\rho \rrE{(\log (u))^2} } {{-x}\choose{\theta }}^T A_\rho {{-x}\choose{\theta }}
\Bigg)^2\Bigg\}
d xd \theta\\
&\int^{\epsilon\log(u)}_{\frac{-\log(a)}{2c_\rho d_\rho}} \int_{0}^{\epsilon\log(u)}x^{-1/2}
\exp\Bigg\{-\frac {c_\rho^2 \rrE{(\log (u))^2} }2 \Bigg(1 +c_\gamma\sqrt{1-\rho}\frac x{\log(u)} -c_\gamma d_\gamma \frac \theta{\log(u)}+\frac{\log(a)}{\log(u)}
+\frac 1{c_\gamma \rrE{(\log (u))^2} } {{-x}\choose{\theta}}^T A_\gamma {{-x}\choose{\theta }} \Bigg)^2\Bigg\}
d xd \theta
\end{align*}
and for $d_\rho<0$ this leads to the integrals
\begin{align*}
& \int^{\epsilon\log(u)}_{\frac{-\log(a)}{2c_\rho d_\rho}} \int_{0}^{\epsilon\log(u)}x^{-1/2}
\exp\Bigg\{-\frac {c_\rho^2 \rrE{(\log (u))^2} }2 \Bigg(
1 +c_\rho\sqrt{1-\rho}\frac x{\log(u)} -c_\rho d_\rho \frac \theta{\log(u)}
+ \frac 1{c_\rho \rrE{(\log (u))^2} } {{-x}\choose{\theta }}^T A_\rho {{-x}\choose{\theta }}
\Bigg)^2\Bigg\}
d xd \theta\\
&\int_{-\epsilon\log(u)}^{\frac{-\log(a)}{2c_\rho d_\rho}} \int_{0}^{\epsilon\log(u)}x^{-1/2}
\exp\Bigg\{-\frac {c_\rho^2 \rrE{(\log (u))^2} }2 \Bigg(1 +c_\gamma\sqrt{1-\rho}\frac x{\log(u)} -c_\gamma d_\gamma \frac \theta{\log(u)}+\frac{\log(a)}{\log(u)}
+\frac 1{c_\gamma \rrE{(\log (u))^2} } {{-x}\choose{\theta}}^T A_\gamma {{-x}\choose{\theta }} \Bigg)^2\Bigg\}
d xd \theta
\end{align*}
As an example, let us evaluate one of the integrals at a heuristic asymptotic level.
\begin{align*}
&\int^{\epsilon\log(u)}_{\frac{-\log(a)}{2c_\rho d_\rho}} \int_{0}^{\epsilon\log(u)}x^{-1/2}
\exp\Bigg\{-\frac {c_\rho^2 \rrE{(\log (u))^2} }2 \Bigg(1 +c_\gamma\sqrt{1-\rho}\frac x{\log(u)} +c_\gamma d_\rho \frac \theta{\log(u)}+\frac{\log(a)}{\log(u)}
+\frac 1{c_\gamma \rrE{(\log (u))^2} } {{-x}\choose{\theta}}^T A_\gamma {{-x}\choose{\theta }} \Bigg)^2\Bigg\}
d xd \theta\\
&=\int^{\epsilon\log(u)+\frac{\log(a)}{2c_\rho d_\rho}}_{0} \int_{0}^{\epsilon\log(u)}x^{-1/2}
\\&\exp\Bigg\{-\frac {c_\rho^2 \rrE{(\log (u))^2} }2 \Bigg(1 +c_\gamma\sqrt{1-\rho}\frac x{\log(u)} +c_\gamma d_\rho \frac \theta{\log(u)}+\frac{\log(a)}{2\log(u)}
+\frac 1{c_\gamma \rrE{(\log (u))^2} } {{-x}\choose{\theta -\frac{\log(a)}{2c_\rho d_\rho}}}^T A_\gamma {{-x}\choose{\theta-\frac{\log(a)}{2c_\rho d_\rho} }} \Bigg)^2\Bigg\}
d xd \theta\\
&=\exp\Bigg\{-\frac {c_\rho^2 \rrE{(\log (u))^2} }2 \Bigg(1 +\frac{\log(a)}{2\log(u)}\Bigg)^2\Bigg\} \int^{\epsilon\log(u)+\frac{\log(a)}{2c_\rho d_\rho}}_{0} \int_{0}^{\epsilon\log(u)}x^{-1/2}
\\&
\exp\Bigg\{- {c_\rho^2 \rrE{(\log (u))^2} }\Bigg(1 +\frac{\log(a)}{2\log(u)}\Bigg) \Bigg(c_\gamma\sqrt{1-\rho}\frac x{\log(u)} +c_\gamma d_\rho \frac \theta{\log(u)}
+\frac 1{c_\gamma \rrE{(\log (u))^2} } {{-x}\choose{\theta -\frac{\log(a)}{2c_\rho d_\rho}}}^T A_\gamma {{-x}\choose{\theta-\frac{\log(a)}{2c_\rho d_\rho} }} \Bigg)\Bigg\} \\
&\exp\Bigg\{ -\frac {c_\rho^2 \rrE{(\log (u))^2} }2 \Bigg(c_\gamma\sqrt{1-\rho}\frac x{\log(u)} +c_\gamma d_\rho \frac \theta{\log(u)}
+\frac 1{c_\gamma \rrE{(\log (u))^2} } {{-x}\choose{\theta -\frac{\log(a)}{2c_\rho d_\rho}}}^T A_\gamma {{-x}\choose{\theta-\frac{\log(a)}{2c_\rho d_\rho} }} \Bigg)^2\Bigg\}
d xd \theta\\
&\approx\exp\Bigg\{-\frac {c_\rho^2 \rrE{(\log (u))^2} }2 \Bigg(1 +\frac{\log(a)}{2\log(u)}\Bigg)^2\Bigg\}\\&\times \int^{\epsilon\log(u)+\frac{\log(a)}{2c_\rho d_\rho}}_{0} \int_{0}^{\epsilon\log(u)}x^{-1/2}
\exp\Bigg\{- {c_\rho^2 \rrE{(\log (u))^2} } \Bigg(c_\gamma\sqrt{1-\rho}\frac x{\log(u)} +c_\gamma d_\rho \frac \theta{\log(u)}
\Bigg)\Bigg\}
d xd \theta\\
&\approx \log(u)^{3/2}\exp\Bigg\{-\frac {c_\rho^2 \rrE{(\log (u))^2} }2 \Bigg(1 +\frac{\log(a)}{2\log(u)}\Bigg)^2\Bigg\} \int^{\infty}_{0} \int_{0}^{\infty}x^{-1/2}
\exp\Bigg\{- c_\gamma^3\sqrt{1-\rho}x -c_\gamma^3 d_\rho \theta
\Bigg\}
d xd \theta
\end{align*}
Note that the last integral can be integrated explicitly.
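For instance, separating variables (with the shorthand $a:=c_\gamma^3\sqrt{1-\rho}$ and $b:=c_\gamma^3 d_\rho$, assuming both are positive; these abbreviations are ours), the Gamma integral gives:

```latex
\int_0^\infty \int_0^\infty x^{-1/2}\, e^{-a x - b\theta}\, \mathrm{d}x\, \mathrm{d}\theta
  = \left(\int_0^\infty x^{-1/2} e^{-a x}\, \mathrm{d}x\right)
    \left(\int_0^\infty e^{-b\theta}\, \mathrm{d}\theta\right)
  = \frac{\Gamma(1/2)}{\sqrt{a}} \cdot \frac{1}{b}
  = \frac{\sqrt{\pi}}{b\sqrt{a}}.
```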
\end{document} |
\begin{document}
\date{}
\pagerange{\pageref{firstpage}--\pageref{lastpage}}
\volume{}
\pubyear{}
\artmonth{}
\doi{}
\label{firstpage}
\begin{abstract}
Motivated by regression analysis for microbiome compositional data, this paper considers generalized linear regression analysis with compositional covariates, where a group of linear constraints on regression coefficients are imposed to account for the compositional nature of the data and to achieve subcompositional coherence. A penalized likelihood estimation procedure using a generalized accelerated proximal gradient method is developed to efficiently estimate the regression coefficients. A de-biased procedure is developed to obtain asymptotically unbiased and normally distributed estimates, which leads to valid confidence intervals of the regression coefficients. Simulation results show the correctness of the coverage probability of the confidence intervals and smaller variances of the estimates when the appropriate linear constraints are imposed. The methods are illustrated by a microbiome study in order to identify bacterial species that are associated with inflammatory bowel disease (IBD) and to predict IBD using the fecal microbiome.
\end{abstract}
\begin{keywords}
Accelerated proximal gradient; De-biased estimation; High dimensional data; Metagenomics; Penalized estimation
\end{keywords}
\maketitle
\section{Introduction}
The human microbiome consists of all living microorganisms in and on the human body. These microorganisms have been shown to be associated with complex diseases and to influence our health. Advanced sequencing technologies, such as 16S sequencing and shotgun metagenomic sequencing, provide powerful methods to quantify the relative abundances of bacterial taxa in large samples. Since only the relative abundances are available, the resulting data are compositional with a unit sum constraint. The compositional nature of the data requires additional care in statistical analysis, including linear regression analysis \citep{lin2014variable,shi2016,aitchison1984log}.
The main challenges of analyzing compositional data are to account for the unit sum structure and to achieve subcompositional coherence \citep{aitchison1982statistical}, which requires that the same results are obtained regardless of whether the data are normalized based on the whole composition or only on a subcomposition. To explore the association between the outcome and the compositional data, \citet{aitchison1984log} proposed a linear log-contrast model to link the outcome and the log of the compositional data. \citet{lin2014variable} further developed this model and considered variable selection via an $\ell_{1}$-penalized estimation procedure. To achieve subcompositional coherence, \citet{shi2016} extended the linear regression model by imposing a set of linear constraints. The log-contrast model and its extensions are suitable when the outcome variable is continuous and normally distributed.
In this paper, generalized linear regression models (GLMs) with linear constraints on the regression coefficients are proposed for microbiome compositional data, where a group of linear constraints is imposed to achieve subcompositional coherence. In order to identify the bacterial taxa that are associated with the outcome, a penalized estimation procedure for the regression coefficients via an $\ell_1$ penalty is introduced. To solve the computational problem, a generalized accelerated proximal gradient method is developed, which extends the standard accelerated proximal gradient method \citep{nesterov2013introductory} to account for linear constraints. The proposed method can efficiently solve the optimization problem of minimizing the penalized negative log-likelihood subject to a group of linear constraints.
Previous work on the inference of the Lasso for generalized linear models includes \citet{buhlmann2011statistics}, who provided properties of the penalized estimates such as a bound for the $\ell_1$ loss and an oracle inequality. However, these methods cannot be applied directly to the setting with linear constraints. Furthermore, it is known that the $\ell_1$ penalized estimates are biased and do not have a tractable asymptotic distribution. In order to correct such biases, work has been done for the Lasso estimate, including \citet{zhang2014confidence}, who proposed a low-dimensional projection estimator to correct the bias, and \citet{javanmard2014confidence}, who used a quadratic programming method to carry out the task. \citet{van2014asymptotically} considered an extension to generalized linear models. However, these methods still cannot be directly applied to our problem due to the linear constraints.
In order to make statistical inference on the regression coefficients, we propose a bias correction procedure for GLMs with linear constraints by extending the method of \citet{javanmard2014confidence}. Such a de-biased procedure provides asymptotically unbiased and normally distributed estimates of the regression coefficients, which can be used to construct confidence intervals. Our simulation results show the correctness of the coverage probability of the confidence intervals and smaller variances of the estimates when the appropriate linear constraints are imposed.
Section \ref{sec: model} develops the GLMs for compositional data and provides an efficient algorithm to solve the optimization problem. Section \ref{sec: de-biased} provides a de-biased procedure to correct the biases of the penalized estimates and derives the asymptotic distribution of the de-biased estimates. Section \ref{sec: IBD} presents the result of identifying gut bacterial species that are associated with inflammatory bowel disease. Section \ref{sec: simulation} provides the simulation results that illustrate the correctness of the proposed method. Some discussion and suggestions for future work are provided in Section \ref{sec: discussion}. Proofs of the theorems are included in the Appendix.
\section{GLMs with Linear Constraints for Microbiome Compositional Data}
\label{sec: model}
\subsection{GLMs with linear constraints}
Consider a microbiome study with outcome $y_i$ and a $p$-dimensional compositional covariate vector $\mathbf{X}_i=(x_{i1}, \cdots, x_{ip})$ with the unit sum constraint $\sum_{j}x_{ij}=1$ for $i=1, \cdots, n$, where $x_{ij}$ represents the relative abundance of the $j$th taxon in the $i$th sample.
To account for the compositional nature of the covariates, \citet{lin2014variable} proposed the linear model with constraint:
\begin{eqnarray}\label{linear}
y_i = \mathbf{Z}_i^\top {\boldsymbol \beta} + \epsilon_i, \mbox{ subject to } C^\top{\boldsymbol \beta}=0
\end{eqnarray}
where $\mathbf{Z}_i = (\log(x_{i1}), \ldots, \log(x_{ip}))^\top$ and $C = (1,1, \ldots,1)^\top$.
\citet{shi2016} further developed this method to allow multiple linear constraints by specifying the $p\times r$ constraint matrix $C$. Such constraints ensure that the regression coefficients are independent
of an arbitrary scaling of the basis from which a composition is obtained, and remain
unaffected by correctly excluding some or all of the zero components. This subcompositional coherence property is one of the principals of compositional data analysis \citep{aitchison1982statistical}.
For a general outcome, we extend the linear model \eqref{linear} to a generalized linear model with density function specified as
\begin{equation}\label{eq:exp_fam}
\begin{split}
f(y_i|{\boldsymbol \beta}, \mathbf{Z}_i) = h(y_i)\exp \left\{\eta_i y_i-A(\eta_i)\right\},\quad \eta_i = \mathbf{Z}_i^\top{\boldsymbol \beta}\\
{\mathbb E} y_i = \triangledown_{\eta_i} A(\eta_i)\equiv \mu({\boldsymbol \beta}, \mathbf{Z}_i),\quad {\rm Var} y_i = \triangledown_{\eta_i}^2 A(\eta_i) \equiv v({\boldsymbol \beta}, \mathbf{Z}_i) \\
\end{split}
\end{equation}
where ${\boldsymbol \beta} = (\beta_{1}, \beta_{2}, \ldots \beta_{p})^\top \in {\mathbb R}^{p}$ and satisfies
$$C^\top{\boldsymbol \beta}=0,$$
and $\mathbf{Z}_{i}^\top = (Z_{i1}, Z_{i2}, \ldots, Z_{ip})$. For simplicity, we assume the intercept is zero, though our formal justification allows for an intercept. For a binary outcome and logistic regression, we have
$$A(\eta) = \log(1+e^{\eta}), \mu({\boldsymbol \beta}, \mathbf{Z}_i) = \dfrac{e^{\mathbf{Z}_i^\top{\boldsymbol \beta}}}{1+e^{\mathbf{Z}_i^\top{\boldsymbol \beta}}}, v({\boldsymbol \beta}, \mathbf{Z}_i) = \dfrac{e^{\mathbf{Z}_i^\top{\boldsymbol \beta}}}{(1+e^{\mathbf{Z}_i^\top{\boldsymbol \beta}})^2}.$$
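The logistic instance above can be checked numerically; a minimal sketch (function names are ours) verifying that $\mu$ and $v$ are the first and second derivatives of the log-partition function $A$:

```python
import numpy as np

def A(eta):
    """Log-partition function A(eta) = log(1 + e^eta), computed stably."""
    return np.logaddexp(0.0, eta)

def mu(eta):
    """Mean function: first derivative of A."""
    return 1.0 / (1.0 + np.exp(-eta))

def v(eta):
    """Variance function: second derivative of A, equal to mu(1 - mu)."""
    m = mu(eta)
    return m * (1.0 - m)

eta = np.linspace(-3.0, 3.0, 7)
h = 1e-4
# Central finite differences agree with the closed forms.
assert np.allclose((A(eta + h) - A(eta - h)) / (2 * h), mu(eta), atol=1e-6)
assert np.allclose((A(eta + h) - 2 * A(eta) + A(eta - h)) / h**2, v(eta), atol=1e-5)
```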
\subsection{$\ell_1$ penalized estimation with constraints}
The log-likelihood function based on model \eqref{eq:exp_fam} is given by
\begin{align}
\label{eq: loglik}
\ell({\boldsymbol \beta}|{\mathbf Y},\mathbf{Z}) &= \sum_{i=1}^n\log h(y_i) + {\mathbf Y}^\top \mathbf{Z}{\boldsymbol \beta} - \sum_{i=1}^nA(\mathbf{Z}_i^\top{\boldsymbol \beta}),
\end{align}
with score function
and information matrix:
\begin{align*}
\nabla_{{\boldsymbol \beta}}\ell({\boldsymbol \beta}|{\mathbf Y},\mathbf{Z}) = ({\mathbf Y} - \boldsymbol{\mu}({\boldsymbol \beta}, \mathbf{Z}))^\top \mathbf{Z}, \mbox{ }
\nabla_{{\boldsymbol \beta}}^2\ell({\boldsymbol \beta}|{\mathbf Y},\mathbf{Z}) = -\mathbf{Z}^\top \mathbf{V}({\boldsymbol \beta},\mathbf{Z})\mathbf{Z},
\end{align*}
where $\mathbf{V}({\boldsymbol \beta},\mathbf{Z})={\rm diag}\{v({\boldsymbol \beta}, Z_1),\dots,v({\boldsymbol \beta}, Z_n)\}$.
The constraints on ${\boldsymbol \beta}$ are given by $C^\top {\boldsymbol \beta} =0$, where $C$ is a $p \times r$ matrix. Without loss of generality, the columns of $C$ are assumed to be orthonormal. Define $P_C = CC^\top$, $\widetilde{\mathbf{Z}}=\mathbf{Z}(\mathbf{I}_p-P_C)$ and $\widetilde{Z}_i = (\mathbf{I}_p-P_C)\mathbf{Z}_i$; then, under the constraint $C^\top{\boldsymbol \beta}=0$, $\mathbf{Z}$ and $\mathbf{Z}_i$ can be replaced throughout by $\widetilde{\mathbf{Z}}$ and $\widetilde{Z}_i$ because $\mathbf{Z}{\boldsymbol \beta}=\widetilde{\mathbf{Z}}{\boldsymbol \beta}$.
In high-dimensional settings, ${\boldsymbol \beta}$ is assumed to be $s$-sparse, where $s = \#\{i: {\boldsymbol \beta}_i \neq 0\}$ and $s = o(\sqrt{n} / \log p)$. The $\ell_1$ penalized estimate of ${\boldsymbol \beta}$ is given as the solution to the following problem:
\begin{equation}\label{eq:L1}
\hat{{\boldsymbol \beta}}^n = \argmin_{{\boldsymbol \beta}} \left\{-\dfrac{1}{n}[{\mathbf Y}^\top \widetilde{\mathbf{Z}}{\boldsymbol \beta} - \sum_{i=1}^nA(\widetilde{\mathbf{Z}}_i^\top{\boldsymbol \beta})]+\lambda||{\boldsymbol \beta}||_1\right\} \mbox{ subject to } C^\top{\boldsymbol \beta}=0,
\end{equation}
where $\lambda$ is a tuning parameter.
\subsection{Generalized accelerated proximal gradient method}
Due to the linear constraints in the optimization problem \eqref{eq:L1}, the standard coordinate descent algorithm cannot be applied directly. We therefore develop a generalized accelerated proximal gradient algorithm. Specifically, define $g$ and $h$ as follows:
\[
g({\boldsymbol \beta}) = -\dfrac{1}{n}[Y^\top \widetilde{\mathbf{Z}}{\boldsymbol \beta} - \sum_{i=1}^nA(\widetilde{Z}_i^\top{\boldsymbol \beta})], \quad h({\boldsymbol \beta}) = \lambda||{\boldsymbol \beta}||_{1}
\]
so the optimization problem \eqref{eq:L1} becomes
\[
\hat{{\boldsymbol \beta}}^n = \argmin_{{\boldsymbol \beta}} \left\{ g({\boldsymbol \beta})+ h({\boldsymbol \beta})\right\} \mbox{ subject to } C^\top{\boldsymbol \beta}=0.
\]
Since $g$ is convex and differentiable and $h$ is convex, the standard accelerated proximal gradient method \citep{nesterov2013introductory} is given by the following iterations:
\begin{align*}
& {\boldsymbol \beta}^{(k)} = \textbf{prox}_{t_{k}h} \left(y^{(k-1)} - t_{k} \nabla g(y^{(k-1)}) \right), \\
& y^{(k)} = {\boldsymbol \beta}^{(k)} + \frac{k-1}{k+r-1} ({\boldsymbol \beta}^{(k)} - {\boldsymbol \beta}^{(k-1)}),
\end{align*}
where $t_{k}$ is the step size in the $k$-th iteration and $r$ is a friction parameter. The proximal mapping of a convex function $h$, which is the key ingredient of this algorithm, is defined as:
\[
\textbf{prox}_{h} (x) = \argmin_{u} \left\{ h(u) + \frac{1}{2}||x - u||^{2}_{2}\right\}.
\]
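For $h(u) = \lambda\|u\|_{1}$, the proximal mapping is componentwise soft thresholding; a minimal numerical check (the scalar values are illustrative):

```python
import numpy as np

def soft_threshold(x, t):
    """prox_{t ||.||_1}(x): componentwise soft thresholding at level t."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

# Brute-force check on a fine grid that soft thresholding minimizes
# t|u| + 0.5 (x - u)^2 for a scalar x.
x, t = 1.3, 0.5
grid = np.linspace(-3.0, 3.0, 60001)
obj = t * np.abs(grid) + 0.5 * (x - grid) ** 2
u_star = grid[np.argmin(obj)]
assert abs(u_star - soft_threshold(np.array([x]), t)[0]) < 1e-3
```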
We generalize this method to handle the linear constraints. Denote $S_{C} = \{{\boldsymbol \beta} \in {\mathbb R}^{p} \ | \ C^\top {\boldsymbol \beta} =0\}$, a linear subspace of ${\mathbb R} ^p$. The generalized accelerated proximal gradient method becomes
\begin{align}
& {\boldsymbol \beta}^{(k)} = \argmin_{{\boldsymbol \beta} \in S_{C}} \left\{ \lambda t_{k} \|{\boldsymbol \beta}\|_{1} + \frac{1}{2} || y^{(k-1)} - t_{k} \nabla g(y^{(k-1)}) -{\boldsymbol \beta} ||^{2}_{2}\right\} \label{Prox},\\
& y^{(k)} = {\boldsymbol \beta}^{(k)} + \frac{k-1}{k+r-1} ({\boldsymbol \beta}^{(k)} - {\boldsymbol \beta}^{(k-1)}).
\end{align}
The minimization in \eqref{Prox} can be solved by soft thresholding followed by projection:
\[
{\boldsymbol \beta}^{(k)} =\Pi_{S_{C}} \left( S_{t_{k} \lambda} \left(y^{(k-1)} - t_{k} \nabla g(y^{(k-1)})\right) \right),
\]
where $S_{t_k\lambda}$ denotes componentwise soft thresholding at level $t_k\lambda$ and the linear operator $\Pi_{S_{C}} (u)$ projects $u$ onto the subspace $S_{C}$.
Since $C^\top$ can be regarded as a linear map from ${\mathbb R}^{p}$ to ${\mathbb R}^{r}$, we have $S_{C} = \ker (C^\top)$.
Denoting $u_p = \Pi_{S_{C}} (u)$, we have
\[
C^\top (u - u_p) = C^\top u
\]
Thus $u - u_p$ is the least squares solution $ u - u_p = (CC^\top)^{\dagger}CC^\top u$, where $A^\dagger$ denotes the Moore--Penrose pseudoinverse of a matrix $A$. Hence,
\[
\Pi_{S_{C}} (u) = u - (CC^\top)^{\dagger}CC^\top u.
\]
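The projection formula above can be sketched as follows (dimensions are illustrative; with orthonormal columns of $C$, the pseudo-inverse formula reduces to $u - CC^\top u$):

```python
import numpy as np

rng = np.random.default_rng(0)
p, r = 8, 2
# Orthonormalize a random constraint matrix C (p x r), as assumed in the text.
C, _ = np.linalg.qr(rng.standard_normal((p, r)))

def project_SC(u, C):
    """Project u onto S_C = ker(C^T) via u - (CC^T)^+ CC^T u."""
    P = C @ C.T
    return u - np.linalg.pinv(P) @ P @ u

u = rng.standard_normal(p)
up = project_SC(u, C)
assert np.allclose(C.T @ up, 0.0)          # feasibility: C^T u_p = 0
assert np.allclose(up, u - C @ (C.T @ u))  # orthonormal shortcut agrees
```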
The step size $t_{k}$ can be fixed or chosen by line search. The line search proceeds as follows:
starting with an initial $t = t_{k-1}$, repeat $t = 0.5 t$ until the following inequality holds:
\[
g(y - tG_{t}(y)) \leq g(y) - t \nabla g(y)^{\top} G_{t}(y) + \frac{t}{2}\|G_{t}(y)\|_{2}^{2}
\]
where $y = y^{(k-1)}$ and $G_{t}(y) = \big(y - \Pi_{S_C}\big(S_{t\lambda}(y - t\nabla g(y))\big)\big)/t$ is the generalized gradient step. For the friction parameter $r$, \citet{su2014differential} suggested that $r > 4.5$ leads to a fast convergence rate; we set $r = 10$.
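Putting the pieces together, here is a minimal sketch of the generalized accelerated proximal gradient iteration for the logistic case, with a fixed step size and synthetic data; `fit_constrained_lasso_glm` and all parameter values are ours, not the paper's:

```python
import numpy as np

def soft_threshold(x, t):
    """Componentwise soft thresholding: prox of t * l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def fit_constrained_lasso_glm(Y, Z, C, lam, t=0.1, r=10, n_iter=500):
    """Accelerated proximal gradient for l1-penalized logistic regression
    subject to C^T beta = 0 (C assumed to have orthonormal columns)."""
    n, p = Z.shape
    Zt = Z @ (np.eye(p) - C @ C.T)                 # tilde Z
    proj = lambda b: b - C @ (C.T @ b)             # projection onto ker(C^T)
    grad = lambda b: -(Zt.T @ (Y - 1.0 / (1.0 + np.exp(-Zt @ b)))) / n
    beta = np.zeros(p)
    y = beta.copy()
    for k in range(1, n_iter + 1):
        beta_new = proj(soft_threshold(y - t * grad(y), t * lam))
        y = beta_new + (k - 1) / (k + r - 1) * (beta_new - beta)
        beta = beta_new
    return beta

rng = np.random.default_rng(1)
n, p = 200, 10
C = np.ones((p, 1)) / np.sqrt(p)                   # single zero-sum constraint
Z = rng.standard_normal((n, p))
beta_true = np.array([1.0, -1.0] + [0.0] * (p - 2))
Y = (rng.random(n) < 1.0 / (1.0 + np.exp(-Z @ beta_true))).astype(float)
bhat = fit_constrained_lasso_glm(Y, Z, C, lam=0.02)
assert abs(np.sum(bhat)) < 1e-6                    # constraint holds
```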
\section{De-biased Estimator and its Asymptotic Distribution}
\label{sec: de-biased}
We collect here the notation used in the rest of the paper. For a vector ${\boldsymbol x}$, $\|{\boldsymbol x}\|_{p}$ is the standard $\ell_{p}$-norm. For a matrix ${\mathbf A} \in {\mathbb R}^{m \times n}$, $\|{\mathbf A}\|_{p}$ is the $\ell_{p}$ operator norm defined as $\|{\mathbf A}\|_{p} = \sup_{\|{\boldsymbol x}\|_{p}=1}\|{\mathbf A} {\boldsymbol x}\|_{p}$. In particular, $\|{\mathbf A}\|_{\infty} = \max_{1\leq i \leq m} \sum_{j=1}^{n} |a_{ij}|$, and $|{\mathbf A}|_{\infty} = \max_{i,j}|a_{ij}|$. For a square matrix ${\mathbf A}$, $\sigma_{\textrm{max}}({\mathbf A})$ ($\sigma_{\textrm{min}}({\mathbf A})$) denotes the largest (smallest) non-zero eigenvalue of ${\mathbf A}$.
\subsection{A de-biased Estimator}
Since $\widehat{{\boldsymbol \beta}}^n$ in equation \eqref{eq:L1} is a biased estimator for ${\boldsymbol \beta}$ due to $\ell_1$ penalization, we propose the following de-biased procedure, detailed as Algorithm \ref{alg: de-biased}, to obtain asymptotically unbiased estimates of ${\boldsymbol \beta}$.
\begin{algorithm}[h]
\caption{Constructing a de-biased estimator}\label{alg: de-biased}
\textbf{Input:} ${\mathbf Y}$, $\mathbf{Z}$, $\widehat{{\boldsymbol \beta}}^n$, and $\gamma$.
\textbf{Output:} $\widehat{{\boldsymbol \beta}}^u$
\begin{algorithmic}[1]
\State
Let $\widehat{{\boldsymbol \beta}}^n$ be the regularized estimator from optimization problem~\eqref{eq:L1}.
\State
Set $\widetilde{\mathbf{Z}}=\mathbf{Z}(\mathbf{I}_p-P_C)$, $\widehat{\boldsymbol{\Sigma}} = (\widetilde{\mathbf{Z}}^\top \mathbf{V}(\hat{{\boldsymbol \beta}}^n,\widetilde{\mathbf{Z}})\widetilde{\mathbf{Z}})/n$.
\State
\textbf{for} $i=1,2,\dots,p$ \textbf{do}
\State
Let $m_i$ be a solution of the convex program:
\begin{equation}\label{eq:opt}
\begin{split}
\mbox{minimize }& m^\top\widehat{\boldsymbol{\Sigma}}m\\
\mbox{subject to }& ||\widehat{\boldsymbol{\Sigma}}m-(\mathbf{I}_p-P_C )e_i||_{\infty}\leq \gamma. \\
\end{split}
\end{equation}
where $e_{i} \in {\mathbb R}^{p}$ is the vector with one at the $i$-th position and zero everywhere else.
\State
Set $M=(m_1,\dots,m_p)^\top$, set
\begin{equation}\label{eq:tildeM}
\widetilde{M}=(\mathbf{I}_p-P_C)M.
\end{equation}
\State
Define the estimator $\widehat{{\boldsymbol \beta}}^u$ as follows:
\begin{equation}\label{eq:algorithm}
\widehat{{\boldsymbol \beta}}^u=\widehat{{\boldsymbol \beta}}^n+\dfrac{1}{n}\widetilde{M}\widetilde{\mathbf{Z}}^\top({\mathbf Y}-\boldsymbol{\mu}(\widehat{{\boldsymbol \beta}}^n, \widetilde{\mathbf{Z}})).
\end{equation}
\end{algorithmic}
\end{algorithm}
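A minimal sketch of the de-biasing update in step 6 of the algorithm. For illustration we take $M$ to be the pseudo-inverse of $\widehat{\boldsymbol{\Sigma}}$, which is a feasible choice for the row-wise convex program with $\gamma = 0$ in a well-conditioned low-dimensional setting; the paper instead solves the program with a tolerance $\gamma$. All data and values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
n, p, r = 500, 6, 1
C = np.ones((p, r)) / np.sqrt(p)                 # orthonormal constraint matrix
Pc = C @ C.T
Z = rng.standard_normal((n, p))
Zt = Z @ (np.eye(p) - Pc)                        # tilde Z
beta = np.array([0.5, -0.5, 0.0, 0.0, 0.0, 0.0])
mu = lambda b: 1.0 / (1.0 + np.exp(-Zt @ b))     # logistic mean function
Y = (rng.random(n) < mu(beta)).astype(float)

beta_n = beta + 0.05                             # stand-in for a biased estimate
beta_n = beta_n - Pc @ beta_n                    # keep it feasible
V = np.diag(mu(beta_n) * (1.0 - mu(beta_n)))     # variance weights
Sigma_hat = Zt.T @ V @ Zt / n
M = np.linalg.pinv(Sigma_hat)                    # illustrative choice of M
M_tilde = (np.eye(p) - Pc) @ M
# De-biasing update: beta_u = beta_n + (1/n) M_tilde Zt^T (Y - mu(beta_n)).
beta_u = beta_n + M_tilde @ Zt.T @ (Y - mu(beta_n)) / n
assert np.abs(C.T @ beta_u).max() < 1e-8         # constraint is preserved
```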
From the construction of $\hat{{\boldsymbol \beta}}^u$, it is easy to check that $\hat{{\boldsymbol \beta}}^u$ still satisfies $C^\top\hat{{\boldsymbol \beta}}^u = 0$. To provide insight into this algorithm, note that by the mean value theorem there exists ${\boldsymbol \beta}_{i}^{0}$ such that
\begin{equation*}
\mu(\hat{{\boldsymbol \beta}}^n,\mathbf{Z}_i) - \mu({\boldsymbol \beta},\mathbf{Z}_i) = v({\boldsymbol \beta}_{i}^0,\mathbf{Z}_i)\mathbf{Z}_i^\top (\hat{{\boldsymbol \beta}}^n-{\boldsymbol \beta}), \ i=1,2, \ldots, n.
\end{equation*}
Define $\widehat{\boldsymbol{\Sigma}}^{0} = (\widetilde{\mathbf{Z}}^\top \mathbf{V}({\boldsymbol \beta}^{0},\widetilde{\mathbf{Z}})\widetilde{\mathbf{Z}})/n$, where $\mathbf{V}({\boldsymbol \beta}^{0},\widetilde{\mathbf{Z}}) = {\rm diag}\{v({\boldsymbol \beta}_{1}^{0}, Z_1),\dots,v({\boldsymbol \beta}_{n}^{0}, Z_n)\}$; we then have
\begin{align*}
\sqrt{n}\left( \hat{{\boldsymbol \beta}}^u - {\boldsymbol \beta} \right)
&= \sqrt{n}[(\mathbf{I}_p-P_C) - \widetilde{M}\widehat{\boldsymbol{\Sigma}}^{0}](\hat{{\boldsymbol \beta}}^n-{\boldsymbol \beta})+\dfrac{1}{\sqrt{n}}\widetilde{M}\widetilde{\mathbf{Z}}^\top(Y-\boldsymbol{\mu}({\boldsymbol \beta},\widetilde{\mathbf{Z}})) \tag{$*$} \label{difference} \\
& \equiv \Delta +R.
\end{align*}
Define $\boldsymbol{\Sigma} = (\widetilde{\mathbf{Z}}^\top \mathbf{V}({\boldsymbol \beta},\widetilde{\mathbf{Z}})\widetilde{\mathbf{Z}})/n$ and $\boldsymbol{\Sigma}_{{\boldsymbol \beta}} = {\mathbb E} \boldsymbol{\Sigma} = {\mathbb E}(v({\boldsymbol \beta},\widetilde{Z}_1)\widetilde{Z}_1\widetilde{Z}_1^\top)$, and suppose $\boldsymbol{\Sigma}_{{\boldsymbol \beta}} = V_{{\boldsymbol \beta}}\Lambda_{{\boldsymbol \beta}}V_{{\boldsymbol \beta}}^\top$ is the eigenvalue decomposition of $\boldsymbol{\Sigma}_{{\boldsymbol \beta}}$. Since $(V_{{\boldsymbol \beta}},C)$ is full rank and orthonormal, we have
$$\boldsymbol{\Sigma}_{{\boldsymbol \beta}}=(V_{{\boldsymbol \beta}},C)\left(\begin{array}{cc}
\Lambda_{{\boldsymbol \beta}} & 0 \\
0 & 0
\end{array}\right)(V_{{\boldsymbol \beta}},C)^\top, \mbox{ }
\Omega_{{\boldsymbol \beta}}=(V_{{\boldsymbol \beta}},C)\left(\begin{array}{cc}
\Lambda_{{\boldsymbol \beta}}^{-1} & 0 \\
0 & 0
\end{array}\right)(V_{{\boldsymbol \beta}},C)^\top,
$$
which implies
$$\boldsymbol{\Sigma}_{{\boldsymbol \beta}}\Omega_{{\boldsymbol \beta}}=(V_{{\boldsymbol \beta}},C)\left(\begin{array}{cc}
\mathbf{I}_{p-r} & 0 \\
0 & 0
\end{array}\right)(V_{{\boldsymbol \beta}},C)^\top=V_{{\boldsymbol \beta}}V_{{\boldsymbol \beta}}^\top=\mathbf{I}_p-P_C.$$
So Step 4 of Algorithm \ref{alg: de-biased} approximates $\Omega_{{\boldsymbol \beta}}$ row by row.
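The identity $\boldsymbol{\Sigma}_{\boldsymbol\beta}\Omega_{\boldsymbol\beta} = \mathbf{I}_p - P_C$ can be verified numerically; a minimal sketch with a random orthonormal pair $(V, C)$ and positive eigenvalues (all values illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
p, r = 6, 2
Q, _ = np.linalg.qr(rng.standard_normal((p, p)))
V, C = Q[:, : p - r], Q[:, p - r :]          # (V, C) orthonormal, full rank
lam = rng.uniform(1.0, 2.0, p - r)           # positive eigenvalues Lambda
Sigma = V @ np.diag(lam) @ V.T               # Sigma_beta = V Lambda V^T
Omega = V @ np.diag(1.0 / lam) @ V.T         # Omega_beta = V Lambda^{-1} V^T
# Sigma * Omega equals the projector I - P_C onto the orthocomplement of C.
assert np.allclose(Sigma @ Omega, np.eye(p) - C @ C.T)
```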
\subsection{Asymptotic distribution}
In order to derive the asymptotic distribution of the de-biased estimator $\hat{{\boldsymbol \beta}}^u$, several regularity conditions are required.
\begin{enumerate}
\item[C1.] \label{bound on I-Pc} $\|\mathbf{I}_p - P_{C}\|_{\infty} \leq k_{0}$ for a constant $k_{0}$ that is free of $p$.
\item[C2.] \label{bound on diagonal} The diagonal elements of $\mathbf{I}_p - P_{C}$ are greater than zero.
\end{enumerate}
Conditions C1 and C2 have been used in \citet{shi2016} and naturally hold in our setting as well.
In addition, define $\widetilde{\mathbf{Z}}^{*} = D\widetilde{\mathbf{Z}}$, where $D\in \widetilde{D}_{ab}$ and
$$ \widetilde{D}_{ab}= \{D = {\rm diag}(d_{1}, d_{2}, \ldots, d_{n}) \in {\mathbb R}^{n \times n}: a \leq d_{i} \leq b \}, \quad 0 < a < b.$$
For any matrix $A \in {\mathbb R}^{n \times m}$, the upper and lower restricted isometry property (RIP) constants of order $k$, $\delta_{k}^{+}(A)$ and $\delta_{k}^{-}(A)$, are defined as:
\begin{align*}
\delta_{k}^{+}(A) = \sup \left\{\dfrac{\|A\alpha\|_{2}^{2}}{\|\alpha\|_{2}^{2}}: \ \alpha \in {\mathbb R}^{m} \ \textrm{is a $k$-sparse vector}\right\}, \\
\delta_{k}^{-}(A) = \inf \left\{\dfrac{\|A\alpha\|_{2}^{2}}{\|\alpha\|_{2}^{2}}: \ \alpha \in {\mathbb R}^{m} \ \textrm{is a $k$-sparse vector}\right\}.
\end{align*}
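For small matrices the RIP constants can be computed by brute force directly from the definition, since the extremal $k$-sparse directions are eigenvectors of $k \times k$ Gram submatrices; a minimal sketch (names are ours):

```python
import numpy as np
from itertools import combinations

def rip_constants(A, k):
    """Brute-force upper/lower RIP constants of order k for a small matrix A:
    extremize ||A a||^2 / ||a||^2 over all k-sparse directions a."""
    m = A.shape[1]
    lo, hi = np.inf, -np.inf
    for S in combinations(range(m), k):
        G = A[:, S].T @ A[:, S]             # Gram matrix on support S
        w = np.linalg.eigvalsh(G)           # sorted eigenvalues
        lo, hi = min(lo, w[0]), max(hi, w[-1])
    return hi, lo                           # (delta_k^+, delta_k^-)

# Sanity check: a matrix with orthonormal columns has both constants equal to 1.
A = np.eye(4)
dp, dm = rip_constants(A, 2)
assert np.isclose(dp, 1.0) and np.isclose(dm, 1.0)
```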
We assume the following RIP condition:
\begin{enumerate}
\item[C3.] $\inf_{\widetilde{D}_{01}} \left((3\tau-1)\delta_{2s}^{-}(\widetilde{\mathbf{Z}}^{*}/\sqrt{n}) -(\tau+1)\delta_{2s}^{+}(\widetilde{\mathbf{Z}}^{*}/\sqrt{n}) \right)\geq 4\tau\phi_{0}$ for some constant $\phi_{0}$.
\end{enumerate}
Condition C3 is slightly stronger than its analogue for linear regression, in that we require the inequality to hold uniformly over a set of matrices. The following theorem quantifies the difference between $\hat{{\boldsymbol \beta}}^{n}$ and ${\boldsymbol \beta}$ in the $\ell_1$ norm.
\begin{theorem}
\label{thm: consistency}
Let $\hat{{\boldsymbol \beta}}^{n}$ be the solution of \eqref{eq:L1}, where ${\boldsymbol \beta}$ is $s$-sparse. If Conditions C1--C3 hold and the tuning parameter $\lambda = \tau \tilde{c}\sqrt{(\log p)/n}$, then
\[
{\mathbb{P}} \left( \|\hat{{\boldsymbol \beta}}^{n} - {\boldsymbol \beta}\|_1 \geq \dfrac{s\lambda(k_0+1/\tau)}{\phi_{0}} \right) \leq 2p^{-c'}
\]
where
$c^{\prime} = \dfrac{\tilde{c}^{2}}{2K^2}-1$ and $K =\max_{i} \sqrt{(\widetilde{\mathbf{Z}}^\top\widetilde{\mathbf{Z}}/n)_{i,i}}$.
\end{theorem}
In order to establish the asymptotic distribution of the de-biased estimates, additional conditions are required:
\begin{enumerate}
\item[C4.] There exist uniform constants $C_{\text{min}}$ and $C_{\text{max}}$ such that $0< C_{\text{min}} \leq \sigma_{\textrm{min}}(\boldsymbol{\Sigma}_{{\boldsymbol \beta}}) \leq \sigma_{\textrm{max}}(\boldsymbol{\Sigma}_{{\boldsymbol \beta}})\leq C_{\text{max}} < \infty$.
\item[C5.] $| \Omega_{{\boldsymbol \beta}}\Theta|_{\infty} < \infty$, where $\Theta= {\mathbb E}\, \widetilde{Z}_{1}\widetilde{Z}_{1}^\top$.
\item[C6.] The variance function $v({\boldsymbol \beta}, \mathbf{Z}_i)$ satisfies a Lipschitz condition with constant $C$.
\item[C7.] There exists a uniform constant $\kappa>0$ such that $\|\Omega_{{\boldsymbol \beta}}^{1/2} \widetilde{Z}_{k}\|_{\psi_{2}} \leq \kappa$ for all $k=1,\ldots, n$.
In Condition C7, the sub-Gaussian norm of a random vector $Z \in {\mathbb R}^{n}$ is defined as
\[
\|Z\|_{\psi_{2}} = \sup\left( \|Z^\top x\|_{\psi_{2}}: \ x\in {\mathbb R}^{n} \, \text{and} \, \|x\|_{2}=1\right),
\]
and
the sub-Gaussian norm for a random variable $X$, is defined as
\[
\|X\|_{\psi_{2}} = \sup_{q \geq 1}q^{-1/2}({\mathbb E}|X|^q)^{1/q}.
\]
Conditions C4 and C7 are bounded-eigenvalue and bounded sub-Gaussian-norm assumptions that are widely used in the literature on inference for Lasso-type estimators \citep{shi2016, javanmard2014confidence}. Condition C5 rules out extreme behavior of $| \Omega_{{\boldsymbol \beta}}\Theta|_{\infty}$ and can in fact be relaxed to hold in probability. For logistic regression, similar conditions are used in \citet{ning2017general}. Condition C6 is a Lipschitz condition on the variance function, which holds for many GLMs including logistic regression.
The following lemma shows that if the tuning parameter $\gamma$ in the optimization problem \eqref{eq:opt} is chosen to be $c \sqrt{(\log p) / n}$, then $\Omega_{{\boldsymbol \beta}}$ lies in the feasible set with large probability.
\begin{lemma}
\label{thm: feasible set}
Denote $\Theta= {\mathbb E}\, \widetilde{Z}_{1}\widetilde{Z}_{1}^\top$. Suppose Conditions C1--C7 hold; then for any constant $c >0$, the following inequality holds:
\[
{\mathbb{P}} \left( |\boldsymbol{\Omega}_{{\boldsymbol \beta}}\widehat{\boldsymbol{\Sigma}} -(\mathbf{I}_p-P_C)|_{\infty} \geq
c \sqrt{(\log p) / n} \right) \leq 2p^{-c_{1}^{''}}+2p^{-c_{2}^{''}},
\]
where $c_{1}^{''} = c^{2}C_{\text{min}}/ (24e^2 C_{\text{max}} \kappa^4)-2$ and $c_{2}^{''} = \hat{c}^{2}/(2K^2)-1$, with $\hat{c} = c \phi_{0}/\left(C|\boldsymbol{\Omega}_{{\boldsymbol \beta}}\Theta|_{\infty}s(k_0 \tau +1)\right)$
and $K =\max_{i} \sqrt{(\widetilde{\mathbf{Z}}^\top\widetilde{\mathbf{Z}}/n)_{i,i}}$.
\end{lemma}
The following theorem provides a bound on $\|\Delta\|_{\infty}$ and the asymptotic distribution of the de-biased estimates.
\begin{theorem}
\label{thm:bound on Delta}
For $\Delta= \sqrt{n} [(\mathbf{I}_p-P_C) - \widetilde{M}\widehat{\boldsymbol{\Sigma}}^{0}](\hat{{\boldsymbol \beta}}^n-{\boldsymbol \beta})$, if Conditions C1--C7 hold, then for $n$ large enough,
\[
\sqrt{n}(\hat{{\boldsymbol \beta}}^u - {\boldsymbol \beta}) = R + \Delta,
\]
where $R|\mathbf{Z} \rightarrow N(0,\widetilde{M} \widehat{\boldsymbol{\Sigma}} \widetilde{M}^\top)$ in distribution and $\|\Delta\|_{\infty}$ converges to $0$ as $n,p \rightarrow \infty$, i.e.,
\[
{\mathbb{P}}\left( \|\Delta\|_{\infty} > \dfrac{c\tilde{c}k_{0}(k_0\tau +1)}{\phi_0} \cdot \dfrac{s\log p}{\sqrt{n}} \right) \leq 2p^{-c'} +2p^{-c_{1}^{''}}+6p^{-c_{2}^{''}}
\]
for the constants $c^\prime$, $c_1^{\prime\prime}$ and $c_2^{\prime\prime}$ defined in Theorem \ref{thm: consistency} and Lemma \ref{thm: feasible set}.
\end{theorem}
This theorem allows us to obtain the confidence intervals for the regression coefficients, which can be used to further select the variables based on their statistical significance.
\subsection{Selections of tuning parameters}
The tuning parameter $\lambda$ in \eqref{eq:L1} can be selected using the extended Bayesian information criterion (EBIC) \citep{chen2008extended}, an extension of the standard BIC to high-dimensional settings. Specifically, denoting by $\hat{{\boldsymbol \beta}}^{n}_{\lambda}$ the solution of \eqref{eq:L1} with tuning parameter $\lambda$, the EBIC is defined as
\[
\text{EBIC}(\hat{{\boldsymbol \beta}}^{n}_{\lambda}) = -2\ell(\hat{{\boldsymbol \beta}}^{n}_{\lambda}|y,\mathbf{Z}) + \nu(\hat{{\boldsymbol \beta}}^{n}_{\lambda}) \log n + 2\nu(\hat{{\boldsymbol \beta}}^{n}_{\lambda})\xi \log p,
\]
where $\nu(\hat{{\boldsymbol \beta}}^{n}_{\lambda})$ is the number of nonzero components of $\hat{{\boldsymbol \beta}}^{n}_{\lambda}$. The parameter $\xi$ is chosen by solving $p = n^\delta$ for $\delta$ and setting $\xi = 1- 1/ (2 \delta)$, as suggested by \citet{chen2008extended}. The optimal $\lambda_{\text{opt}}$ minimizes the EBIC
\begin{equation}
\label{opt: tuning}
\lambda_{\text{opt}} = \argmin_{\lambda} \text{EBIC}(\hat{{\boldsymbol \beta}}^{n}_{\lambda})
\end{equation}
over $\lambda_{1}, \lambda_{2}, \ldots$, with $\nu(\hat{{\boldsymbol \beta}}^{n}_{\lambda_{i}}) = i$.
The tuning parameter $\gamma$ in \eqref{eq:opt} is chosen as $0.01 \lambda_{\text{opt}}$.
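A minimal sketch of the EBIC computation for a logistic fit; the toy data and the comparison of a sparse versus a dense coefficient vector are ours, for illustration only:

```python
import numpy as np

def ebic(beta, Y, Z, xi):
    """EBIC = -2 loglik + nu log n + 2 nu xi log p for a logistic model."""
    eta = Z @ beta
    loglik = Y @ eta - np.sum(np.logaddexp(0.0, eta))
    nu = np.count_nonzero(beta)            # number of nonzero coefficients
    n, p = Z.shape
    return -2.0 * loglik + nu * np.log(n) + 2.0 * nu * xi * np.log(p)

rng = np.random.default_rng(4)
n, p = 100, 20
Z = rng.standard_normal((n, p))
Y = (rng.random(n) < 0.5).astype(float)    # pure-noise outcome
delta = np.log(p) / np.log(n)              # solve p = n^delta for delta
xi = 1.0 - 1.0 / (2.0 * delta)
dense = rng.standard_normal(p) * 0.1       # 20 nonzero coefficients
sparse = np.zeros(p)
# With noise-only Y, the empty model should not score worse than a dense one.
assert ebic(sparse, Y, Z, xi) <= ebic(dense, Y, Z, xi)
```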
\section{Applications to Gut Microbiome Studies}
\label{sec: IBD}
The proposed method was applied to a study conducted at the University of Pennsylvania that explored the association between pediatric inflammatory bowel disease (IBD) and the gut microbiome \citep{lewis2015inflammation}.
This study collected fecal samples from 85 IBD cases and 26 normal controls and performed metagenomic sequencing on each sample, resulting in a total of 97 identified bacterial species. The 77 species with non-zero values in at least 20 percent of the samples were used in our analysis. The zero values in the relative abundance matrix were replaced with 0.5 times the minimum observed abundance, a common practice in microbiome data analysis \citep{kurtz2015sparse, CaoLinLi}. Since the relative abundances of the major species are relatively large, replacing the zeros with a small value does not influence our results. The species compositions were then computed after zero replacement and used to fit the regression model.
\subsection{Identifying bacterial species associated with IBD}
The proposed method was first applied to a logistic regression analysis of IBD on the log-transformed compositions of the 77 species. Specifically, let $y$ be the binary indicator of IBD and let $\log (X_{k})$ be the logarithm of the relative abundance of the $k$-th species. We consider the following model
\[
\text{logit}(Pr(y=1)) = {\boldsymbol \beta}_{0} + \sum_{k=1}^{77} {\boldsymbol \beta}_{k} \log (X_{k}), \quad \text{where} \ \sum_{k=1}^{77} {\boldsymbol \beta}_{k} =0.
\]
Our goal is to identify the bacteria species that are associated with IBD and to evaluate how well one can predict IBD based on the gut microbiome composition.
\begin{figure}
\caption{Analysis of the IBD microbiome data. (a) Lasso estimates, de-biased estimates and $95 \%$ confidence intervals of the regression coefficients. Species selected based on the CIs are annotated. (b) Boxplots of log-relative abundances of the five identified species. The red and blue boxplots correspond to controls and cases samples respectively.
(c) Fitted probability plot. (d) Selection stability plot.
}
\label{IBD_onec}
\end{figure}
Figure \ref{IBD_onec} (a) shows the Lasso estimates, de-biased estimates and $95 \%$ confidence intervals of the regression coefficients in the model. Five bacterial species were selected by our method with 95\% CIs excluding zero: \emph{Prevotella\_copri}, \emph{Ruminococcus\_bromii}, \emph{Clostridium\_leptum}, \emph{Escherichia\_coli} and \emph{Ruminococcus\_gnavus}. The estimated coefficients and the corresponding 95\% CIs are summarized in Table \ref{Estimation: IBD}. Among them, \emph{Prevotella\_copri}, \emph{Ruminococcus\_bromii} and \emph{Clostridium\_leptum} are negatively associated with the risk of IBD, indicating possible beneficial effects on IBD, while \emph{Escherichia\_coli} and \emph{Ruminococcus\_gnavus} are positively associated with IBD.
Figure \ref{IBD_onec} (b) plots the log-relative abundances of the five identified species in IBD children and in controls, showing that the identified species indeed have differential abundances between IBD cases and controls.
Our results confirm findings from other studies. \citet{kaakoush2012microbial} showed that healthy people have higher levels of \emph{Prevotella\_copri} in their fecal microbiota than Crohn's disease patients. \emph{Ruminococcus\_bromii} and \emph{Clostridium\_leptum} \citep{mondot2011highlighting, sokol2009low,kabeerdoss2013clostridium} were also shown to be negatively associated with the risk of IBD. Furthermore, \citet{rhodes2007role} pointed out the association between an increase in \emph{Escherichia\_coli} and IBD, and \citet{matsuoka2015gut} indicated that the abundance of \emph{Ruminococcus\_gnavus} is higher in IBD patients.
\begin{table}[ht!]
\caption{Selected bacteria and their corresponding phylum, estimated coefficients (standard errors in parentheses) and $95 \%$ confidence intervals.} \label{Estimation: IBD}
\begin{center}
\begin{tabular}{llr@{.}lc}
\hline
Bacteria name & Phylum &\multicolumn{2}{c}{${\boldsymbol \beta}$(se)} & CI\\
\hline
\emph{Prevotella\_copri} & Bacteroidetes & $-0$&$15(0.042)$ & $(-0.23 ,-0.064)$\\
\emph{Ruminococcus\_bromii} & Firmicutes & $-0$&$22(0.043)$ & $(-0.31 ,-0.18)$ \\
\emph{Clostridium\_leptum} & Firmicutes & $-0$&$15(0.052)$ & $(-0.25, -0.048)$ \\
\emph{Escherichia\_coli} & Proteobacteria & $0$&$14(0.035) $ & $(0.074 , 0.21)$ \\
\emph{Ruminococcus\_gnavus} & Firmicutes & $0$&$13(0.045)$ & $(0.043, 0.22)$ \\
\hline
\end{tabular}
\end{center}
\end{table}
\subsection{Stability, model fit and prediction evaluation}
To assess the stability of the results, we performed a stability selection analysis \citep{meinshausen2010stability} via sample splitting. In each of 50 replications, we randomly sampled two thirds of the data (56 cases and 16 controls) and fit the model under different tuning parameters. Figure \ref{IBD_onec} (d) shows the selection probability of each bacterium as a function of the tuning parameter. The species selected in the previous section have the highest stability selection probabilities, indicating that the five selected species are very stable. Figure \ref{IBD_onec} (c) shows the fitted probability curve constructed from the five identified species, indicating that our model fits the data well.
We then evaluated the prediction performance on the IBD data. The data were randomly split into a training set of 56 cases and 16 controls, used to estimate the parameters, and a testing set of 28 cases and 8 controls, used to evaluate prediction. We used the estimated parameters to predict IBD status in the testing set and measured performance by the area under the ROC curve (AUC). The procedure was repeated 50 times. The average AUCs (se) were 0.92 (0.049), 0.93 (0.043) and 0.93 (0.051) for the Lasso, the de-biased Lasso, and the de-biased Lasso using only the selected bacterial species, respectively, indicating that the model predicts IBD very well.
\section{Simulation Studies}
\label{sec: simulation}
We evaluate the performance of the proposed methods through a set of simulation studies. To simulate the covariates $\mathbf{Z}$ and outcome $Y$, we first generate the true bacterial abundances $W$, where each row of $W$ is drawn from a log-normal distribution $\ln N(\mu, \boldsymbol{\Sigma})$ with covariance matrix $\boldsymbol{\Sigma}_{ij} = \zeta^{|i-j|}$, $\zeta =0.2$, to reflect the correlation between taxa. The mean parameters are set as $\mu_{j} = \frac{p}{2}$ for $j= 1, \ldots,5$ and $\mu_{j} =1$ for $j=6, \ldots, p$. The log-compositional covariate matrix $\mathbf{Z}$ is obtained by normalizing the true abundances:
\[
\mathbf{Z}_{ij} = \log \left(\frac{W_{ij}}{\sum_{k=1}^{p} W_{ik}}\right),
\]
for $i= 1,2, \ldots, n$ and $j =1 ,2, \ldots, p$. The true parameter ${\boldsymbol \beta}$ is
\[
{\boldsymbol \beta} = (0.45, -0.4, 0.45, 0, -0.5, 0, 0, 0, 0, 0, -0.6, 0, 0.3, 0, 0, 0.3, 0, \ldots, 0)
\]
and ${\boldsymbol \beta}_{0}=-1$. Given these covariates, we simulate the binary outcome $Y$ from the logistic probability $p_{i} = \text{expit}( \mathbf{Z}_i^\top{\boldsymbol \beta}+ {\boldsymbol \beta}_{0})$, which yields cases and controls at roughly a 2:3 ratio. Different dimensions and sample sizes are considered, and each setting is repeated 100 times. The true regression coefficients ${\boldsymbol \beta}$ satisfy the following linear constraints:
\begin{equation*}
\begin{aligned}
\sum_{i=1}^{10} {\boldsymbol \beta}_{i}=0, \sum_{i=11}^{16} {\boldsymbol \beta}_{i}=0, \sum_{i=17}^{20} {\boldsymbol \beta}_{i}=0, \sum_{i=21}^{23} {\boldsymbol \beta}_{i}=0,\\
\sum_{i=24}^{30} {\boldsymbol \beta}_{i}=0, \sum_{i=31}^{32} {\boldsymbol \beta}_{i}=0, \sum_{i=33}^{40} {\boldsymbol \beta}_{i}=0, \sum_{i=41}^{p} {\boldsymbol \beta}_{i}=0.
\end{aligned}
\end{equation*}
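The simulation design above can be sketched as follows (the seed and sample size are illustrative; the coefficient positions follow the stated $\boldsymbol\beta$):

```python
import numpy as np

rng = np.random.default_rng(5)
n, p = 100, 50
zeta = 0.2
idx = np.arange(p)
Sigma = zeta ** np.abs(idx[:, None] - idx[None, :])   # Sigma_ij = zeta^|i-j|
mu = np.r_[np.full(5, p / 2), np.ones(p - 5)]         # mean parameters
W = np.exp(rng.multivariate_normal(mu, Sigma, size=n))  # log-normal abundances
X = W / W.sum(axis=1, keepdims=True)                  # compositions
Z = np.log(X)                                         # log-compositional covariates
beta = np.zeros(p)
# Nonzero positions 1,2,3,5,11,13,16 (1-based) from the stated beta vector.
beta[[0, 1, 2, 4, 10, 12, 15]] = [0.45, -0.4, 0.45, -0.5, -0.6, 0.3, 0.3]
beta0 = -1.0
prob = 1.0 / (1.0 + np.exp(-(Z @ beta + beta0)))      # expit(Z beta + beta0)
Y = (rng.random(n) < prob).astype(int)
assert np.allclose(X.sum(axis=1), 1.0)                # unit-sum constraint
```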
\subsection{Simulation results}
We evaluate performance by the coverage probability, the length of the confidence intervals, and the true and false positive rates of variable selection based on the confidence intervals. We compare the results of fitting the model with no constraint, one constraint, the true constraints, and the
misspecified constraints specified below:
\begin{equation*}
\begin{aligned}
\sum_{i=1}^{4} {\boldsymbol \beta}_{i}=0, \sum_{i=5}^{12} {\boldsymbol \beta}_{i}=0, \sum_{i=13}^{23} {\boldsymbol \beta}_{i}=0, \sum_{i=24}^{30} {\boldsymbol \beta}_{i}=0,
\sum_{i=31}^{p} {\boldsymbol \beta}_{i}=0.
\end{aligned}
\end{equation*}
Figure \ref{fig:sim_generated_cov} shows that the coverage probabilities approach $95 \%$ and the lengths of the CIs decrease as the sample size grows. In addition, the coverage probabilities under the true constraints are closest to the nominal level ($95\%$), especially when $n$ is relatively large ($n=200, 500$). The CIs using the true constraints are the shortest, while those under a single constraint or no constraints are relatively wider. We did not compare CI lengths under the misspecified constraints because the coverage probability in that case is very poor. The figure also shows that the coverage probabilities are more sensitive to the constraints when the sample size is large, whereas the CI length is more sensitive to the constraints when the sample size is small. This is expected: with a small sample, CIs tend to be wide, and correct constraints, which provide additional information, yield shorter CIs; for the coverage probability, since our CIs are asymptotic, the sample size matters more than the constraints. The coverage probability becomes very poor under misspecified constraints when $n=500$.
\begin{figure}
\caption{Coverage probabilities and length of confidence intervals based on 100 simulations for $p=50$ ((a) and (b)) and $p=100$ ((c) and (d)) and $n = 50,100,200,500$ (separated by vertical dashed lines).}
\label{fig:sim_generated_cov}
\end{figure}
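For concreteness, the coverage and length summaries reported above can be tabulated from the simulation replicates as in the following minimal sketch; the arrays of per-replicate confidence bounds and the true coefficient vector are hypothetical placeholders, not part of our implementation:

```python
import numpy as np

def coverage_and_length(lower, upper, beta_true):
    """Empirical coverage probability and average CI length over replicates.

    lower, upper : (n_sim, p) arrays of per-replicate confidence bounds
    beta_true    : (p,) vector of true regression coefficients
    """
    lower, upper = np.asarray(lower), np.asarray(upper)
    # fraction of (replicate, coefficient) pairs whose CI covers the truth
    covered = (lower <= beta_true) & (beta_true <= upper)
    return covered.mean(), (upper - lower).mean()

# toy check: 3 replicates, 2 coefficients, true beta = (0, 1)
lo = np.array([[-1.0, 0.5], [-1.0, 1.5], [-1.0, 0.5]])
hi = np.array([[ 1.0, 1.5], [ 1.0, 2.5], [ 1.0, 1.5]])
cov, length = coverage_and_length(lo, hi, np.array([0.0, 1.0]))
# cov = 5/6 (one interval misses beta_2 = 1), length = 1.5
```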
Table \ref{table:T&F rate} shows the true and false positive rates of selecting the significant variables using the $95\%$ confidence intervals under multiple, one, no, and misspecified constraints for various dimensions $p$ and sample sizes $n$. The false positive rates are correctly controlled below $5\%$ for all models, even when the constraints are misspecified. However, models with correctly specified linear constraints have higher true positive rates. When the sample size is 500, the true positive rate under the true constraints is greater than $90\%$, the highest among all models considered.
\begin{table}[htbp]
\centering
\caption{True/false positive rates of the significant variables selected by the $95\%$ confidence interval using multiple, one, no and misspecified constraints. $p=50, 100$ and $n=50,100,200,500$ are considered.} \label{table:T&F rate}
\begin{tabular}{ccccccccccccc}
\hline
$n$ & & TP & FP && TP & FP && TP & FP & & TP & FP \\
\cline{1-1} \cline{3-4} \cline{6-7} \cline{9-10} \cline{12-13}
&&\multicolumn{2}{c}{Multi}& & \multicolumn{2}{c}{One} &&\multicolumn{2}{c}{No}&&\multicolumn{2}{c}{Wrong} \\
\cline{1-1} \cline{1-1} \cline{3-4} \cline{6-7} \cline{9-10} \cline{12-13}
\multicolumn{13}{c}{$p=50$}\\
50 & & 0.069 & 0.034 && 0.026 & 0.025 && 0.029 & 0.026 & & 0.054 & 0.036 \\
100 && 0.260 & 0.038 && 0.206 & 0.031 & & 0.141 & 0.034 && 0.299 & 0.038 \\
200 & & 0.569 & 0.026 && 0.549 & 0.025 && 0.411 & 0.030 && 0.546 & 0.037 \\
500 && 0.914 & 0.038 && 0.897 & 0.030 && 0.840 & 0.038 && 0.814 & 0.058 \\
\multicolumn{13}{c}{$p=100$}\\
50 && 0.220 & 0.045 && 0.071 & 0.044 && 0.109 & 0.034 && 0.134 & 0.046 \\
100 &&0.103 & 0.035 && 0.023 & 0.016 && 0.107 & 0.026 && 0.154 & 0.027 \\
200 && 0.431 & 0.030 && 0.389 & 0.025 && 0.283 & 0.029 && 0.481 & 0.032 \\
500 && 0.907 & 0.032 && 0.873 & 0.029 && 0.801 & 0.037 && 0.804 & 0.042 \\
\hline
\end{tabular}
\end{table}
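The selection rule behind Table \ref{table:T&F rate} can be sketched as follows: a variable is selected when its $95\%$ CI excludes zero, and the true/false positive rates are the selection frequencies among the truly nonzero and truly zero coefficients, respectively. The sketch below is illustrative only; the arrays are hypothetical placeholders:

```python
import numpy as np

def tp_fp_rates(lower, upper, beta_true):
    """TP/FP rates of selecting variables whose CI excludes zero."""
    lower, upper = np.asarray(lower), np.asarray(upper)
    selected = (lower > 0) | (upper < 0)   # CI excludes zero
    signal = beta_true != 0
    tp = selected[signal].mean()           # selection rate among truly nonzero
    fp = selected[~signal].mean()          # selection rate among truly zero
    return tp, fp

beta = np.array([2.0, 0.0, -1.5, 0.0])
lo = np.array([0.5, -0.2, -2.0, 0.1])
hi = np.array([3.0,  0.3, -0.5, 0.4])
tp, fp = tp_fp_rates(lo, hi, beta)  # tp = 1.0, fp = 0.5
```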
\section{Discussion}
\label{sec: discussion}
We have considered estimation and inference for generalized linear models with high-dimensional compositional covariates. To account for the nature of compositional data, a group of linear constraints is imposed on the regression coefficients to ensure subcompositional coherence.
With these constraints, the standard GLM Lasso algorithm based on Taylor expansion and coordinate descent does not work due to the non-separable nature of the penalty function. Instead, a generalized accelerated proximal gradient algorithm was developed to estimate the regression coefficients. For statistical inference, a de-biased procedure is proposed to construct valid confidence intervals for the regression coefficients, which can be used for hypothesis testing as well as for identifying species that are associated with the outcome. Application of the method to an analysis of IBD microbiome data identified five bacterial species that are associated with pediatric IBD with high stability. The identified model also showed good predictive performance under cross-validation.
The approach we took in deriving the confidence intervals follows that of \citet{javanmard2014confidence} by first obtaining a debiased estimate of the regression coefficients. Alternatively, one can consider the approach based on post-selection inference for $\ell_1$-penalized likelihood models \citep{Taylor}. However, one would need to modify the methods of \citet{Taylor} to take into account the linear constraints on the regression coefficients. It would be interesting to compare the performance of this alternative approach.
\section*{Appendix}
We provide proofs for the main theorems in the paper.
\begin{lemma}
If Conditions C1 and C2 hold, then for any matrix $A$,
\[
|(\mathbf{I}_p - P_C)A|_{\infty} \leq k_0 |A|_{\infty}.
\]
\end{lemma}
The proof for this lemma is in the appendix of \citet{shi2016}.
\subsection*{Proof of Theorem \ref{thm: consistency}}
\begin{proof}
By the definition of $\hat{{\boldsymbol \beta}}^{n}$ and (\ref{eq:L1}), we have:
\begin{equation} \label{eq: consis}
-\dfrac{1}{n}[Y^\top \widetilde{\mathbf{Z}}\hat{{\boldsymbol \beta}}^n - \sum_{i=1}^nA(\widetilde{Z}_i^\top\hat{{\boldsymbol \beta}}^n)]+\lambda||\hat{{\boldsymbol \beta}}^n||_1 \leq -\dfrac{1}{n}[Y^\top \widetilde{\mathbf{Z}}{\boldsymbol \beta} - \sum_{i=1}^nA(\widetilde{Z}_i^\top{\boldsymbol \beta})]+\lambda||{\boldsymbol \beta}||_1.
\end{equation}
Let $h = \hat{{\boldsymbol \beta}}^n - {\boldsymbol \beta}$, and let $S_{h}$ be the index set of the $s$ largest absolute values of $h$. Rearranging (\ref{eq: consis}), we get:
\begin{equation}
\label{ineq: lambda}
\lambda(\|{\boldsymbol \beta}\|_{1} - \|\hat{{\boldsymbol \beta}}^{n}\|_{1}) \geq -\dfrac{1}{n}[Y^\top \widetilde{\mathbf{Z}} h - \sum_{i=1}^n(A(\widetilde{Z}_i^\top\hat{{\boldsymbol \beta}}^n)-A(\widetilde{Z}_i^\top{\boldsymbol \beta}))].
\end{equation}
Notice that,
\begin{align}
\label{ineq: beta1}
\|{\boldsymbol \beta}\|_{1} - \|\hat{{\boldsymbol \beta}}^{n}\|_{1} =& \|{\boldsymbol \beta}_{supp({\boldsymbol \beta})}\|_{1} - \|\hat{{\boldsymbol \beta}}_{supp({\boldsymbol \beta})}^{n}\|_{1} - \|\hat{{\boldsymbol \beta}}_{supp({\boldsymbol \beta})^{c}}^{n}\| _{1}, \nonumber \\
\leq & \|{\boldsymbol \beta}_{supp({\boldsymbol \beta})}- \hat{{\boldsymbol \beta}}_{supp({\boldsymbol \beta})}^{n}\|_{1}- \|h_{supp({\boldsymbol \beta})^{c}}\|_{1}, \nonumber \\
\leq &\|h_{S_{h}}\|_{1} -\|h_{S^{c}_{h}}\|_{1}.
\end{align}
Furthermore, applying the mean value theorem for each $i$ to $A$ defined in \eqref{eq:exp_fam}, there exists $\widetilde{{\boldsymbol \beta}}_{i}^{0}$ such that $A(\widetilde{Z}_i^\top\hat{{\boldsymbol \beta}}^n)-A(\widetilde{Z}_i^\top{\boldsymbol \beta}) =\mu({\boldsymbol \beta}, \widetilde{Z}_i)\widetilde{Z}_i^\top h + \dfrac{1}{2}v(\widetilde{{\boldsymbol \beta}}_{i}^{0},\widetilde{Z}_i)\left(\widetilde{Z}_i^\top h\right)^2$. Then we have:
\begin{align}
& -\dfrac{1}{n}[Y^\top \widetilde{\mathbf{Z}} h - \sum_{i=1}^n(A(\widetilde{Z}_i^\top\hat{{\boldsymbol \beta}}^n)-A(\widetilde{Z}_i^\top{\boldsymbol \beta}))]\\
& \geq -\dfrac{1}{n}[Y^\top \widetilde{\mathbf{Z}} h - \mu({\boldsymbol \beta}, \widetilde{\mathbf{Z}})^\top \widetilde{\mathbf{Z}} h], \nonumber\\
& = -\dfrac{1}{n}( Y -\mu({\boldsymbol \beta}, \widetilde{\mathbf{Z}}))^\top \widetilde{\mathbf{Z}} h, \nonumber\\
& \geq -\dfrac{1}{n}\|( Y -\mu({\boldsymbol \beta}, \widetilde{\mathbf{Z}}))^\top \widetilde{\mathbf{Z}} \|_{\infty} \cdot \|h\|_{1} = -\dfrac{1}{n}\|( Y -\mu({\boldsymbol \beta}, \widetilde{\mathbf{Z}}))^\top \widetilde{\mathbf{Z}} \|_{\infty} \cdot (\|h_{S_{h}}\|_{1} +\|h_{S^{c}_{h}}\|_{1}). \nonumber
\end{align}
When the event $\| (Y -\mu({\boldsymbol \beta}, \widetilde{\mathbf{Z}}))^\top \widetilde{\mathbf{Z}} \|_{\infty} \leq \dfrac{n\lambda}{\tau}$ holds, we have:
\begin{equation}
\label{ineq: beta2}
\lambda(\|{\boldsymbol \beta}\|_{1} - \|\hat{{\boldsymbol \beta}}^{n}\|_{1}) \geq -\dfrac{1}{n} \cdot \frac{n\lambda}{\tau} \cdot (\|h_{S_{h}}\|_{1} +\|h_{S^{c}_{h}}\|_{1}).
\end{equation}
So by \eqref{ineq: lambda}, \eqref{ineq: beta1} and \eqref{ineq: beta2} we have:
\begin{align*}
\lambda(\|h_{S_{h}}\|_{1} -\|h_{S^{c}_{h}}\|_{1}) \geq \lambda (\|{\boldsymbol \beta}\|_{1} - \|\hat{{\boldsymbol \beta}}^{n}\|_{1}) \geq - \frac{\lambda}{\tau} \cdot (\|h_{S_{h}}\|_{1} +\|h_{S^{c}_{h}}\|_{1}).
\end{align*}
That is,
\begin{equation}
\label{ineq:hc}
\|h_{S^{c}_{h}}\|_{1} \leq \dfrac{\tau+1}{\tau-1}\|h_{S_{h}}\|_{1}.
\end{equation}
Then by the KKT condition of optimization problem \eqref{eq:L1}, we have:
\begin{equation}
\| \widetilde{\mathbf{Z}}^\top (Y -\mu(\hat{{\boldsymbol \beta}}^{n}, \widetilde{\mathbf{Z}})) + C\bm{\eta}\|_{\infty} \leq n\lambda,
\end{equation}
for some $\bm{\eta} \in {\mathbb R}^{r}$. Then by Lemma 1,
\begin{align}
\| (\mathbf{I}_p- P_C)\left(\widetilde{\mathbf{Z}}^\top (Y -\mu(\hat{{\boldsymbol \beta}}^{n}, \widetilde{\mathbf{Z}})) + C\bm{\eta}\right)\|_{\infty} & \leq k_{0} \| \widetilde{\mathbf{Z}}^\top (Y -\mu(\hat{{\boldsymbol \beta}}^{n}, \widetilde{\mathbf{Z}})) + C\bm{\eta}\|_{\infty} \leq k_0 n \lambda.
\end{align}
Then, as
\begin{align*}
(\mathbf{I}_p- P_C)(\widetilde{\mathbf{Z}}^\top (Y -\mu(\hat{{\boldsymbol \beta}}^{n}, \widetilde{\mathbf{Z}})) + C\bm{\eta}) & = (\mathbf{I}_p- P_C)\widetilde{\mathbf{Z}}^\top (Y -\mu(\hat{{\boldsymbol \beta}}^{n}, \widetilde{\mathbf{Z}})) + (\mathbf{I}_p- P_C)C\bm{\eta}, \\
& = \widetilde{\mathbf{Z}}^\top (Y -\mu(\hat{{\boldsymbol \beta}}^{n}, \widetilde{\mathbf{Z}})),
\end{align*}
together with the assumption that $\| (Y -\mu({\boldsymbol \beta}, \widetilde{\mathbf{Z}}))^\top \widetilde{\mathbf{Z}} \|_{\infty} \leq \dfrac{n\lambda}{\tau}$, we have:
\begin{align}
\|\widetilde{\mathbf{Z}}^\top(\mu(\hat{{\boldsymbol \beta}}^{n}, \widetilde{\mathbf{Z}}) - \mu({\boldsymbol \beta}, \widetilde{\mathbf{Z}}))\|_{\infty}& \leq \|\widetilde{\mathbf{Z}}^\top (Y -\mu(\hat{{\boldsymbol \beta}}^{n}, \widetilde{\mathbf{Z}})) \|_{\infty}+\|\widetilde{\mathbf{Z}}^\top (Y -\mu({\boldsymbol \beta}, \widetilde{\mathbf{Z}})) \|_{\infty} \leq k_0 n \lambda + \dfrac{n\lambda}{\tau}. \nonumber
\end{align}
As $\|\widetilde{\mathbf{Z}}^\top(\mu(\hat{{\boldsymbol \beta}}^{n}, \widetilde{\mathbf{Z}}) - \mu({\boldsymbol \beta}, \widetilde{\mathbf{Z}}))\|_{\infty} = \|\widetilde{\mathbf{Z}}^\top \mathbf{V}({\boldsymbol \beta}^{0},\widetilde{\mathbf{Z}}) \widetilde{\mathbf{Z}} h \|_{\infty}$, we get
\[\|\widetilde{\mathbf{Z}}^\top \mathbf{V}({\boldsymbol \beta}^{0},\widetilde{\mathbf{Z}}) \widetilde{\mathbf{Z}} h \|_{\infty} \leq k_0 n \lambda + \dfrac{n\lambda}{\tau} . \]
Since $\mathbf{V}({\boldsymbol \beta}^{0},\widetilde{\mathbf{Z}})$ is a diagonal matrix with all diagonal elements positive, define $\widetilde{\mathbf{Z}}_{v} = \mathbf{V}^{\frac{1}{2}}({\boldsymbol \beta}^{0},\widetilde{\mathbf{Z}})\widetilde{\mathbf{Z}} $, where $\mathbf{V}^{\frac{1}{2}}({\boldsymbol \beta}^{0},\widetilde{\mathbf{Z}}) = {\rm diag}\{(v({\boldsymbol \beta}^{0}, Z_1))^{\frac{1}{2}},\dots,(v({\boldsymbol \beta}^{0}, Z_n))^{\frac{1}{2}}\}$, so that $\widetilde{\mathbf{Z}}_{v}^\top\widetilde{\mathbf{Z}}_{v} = \widetilde{\mathbf{Z}}^\top \mathbf{V}({\boldsymbol \beta}^{0},\widetilde{\mathbf{Z}}) \widetilde{\mathbf{Z}}$.
Using Lemma 5.1 in \citet{cai2013compressed}, we have:
\begin{align*}
|\langle \widetilde{\mathbf{Z}}_{v} h_{S_{h}}, \widetilde{\mathbf{Z}}_{v} h_{S^{c}_{h}} \rangle| &\leq \theta_{s,s}(\widetilde{\mathbf{Z}}_{v}) \|h_{S_{h}}\|_2 \cdot \max(\|h_{S^{c}_{h}}\|_{\infty}, \|h_{S^{c}_{h}}\|_{1}/s)\sqrt{s}, \\
& \leq \sqrt{s}\theta_{s,s}(\widetilde{\mathbf{Z}}_{v})\|h_{S_{h}}\|_2 \cdot \frac{\tau +1}{\tau -1} \|h_{S_{h}}\|_{1} /s, \\
& \leq \dfrac{\tau +1}{\tau -1}\theta_{s,s}(\widetilde{\mathbf{Z}}_{v})\|h_{S_{h}}\|^{2}_2.
\end{align*}
Then,
\begin{align}
\label{ineq:hSh}
(k_0 n \lambda + \dfrac{n\lambda}{\tau})\|h_{S_{h}}\|_1 & \geq \|\widetilde{\mathbf{Z}}^\top \mathbf{V}({\boldsymbol \beta}^{0},\widetilde{\mathbf{Z}}) \widetilde{\mathbf{Z}} h \|_{\infty} \|h_{S_{h}}\|_1 \geq \langle \widetilde{\mathbf{Z}}_{v}^\top \widetilde{\mathbf{Z}}_{v}h, h_{S_{h}}\rangle, \nonumber\\
& = \langle \widetilde{\mathbf{Z}}_{v} h_{S_{h}},\widetilde{\mathbf{Z}}_{v} h_{S_{h}}\rangle +\langle \widetilde{\mathbf{Z}}_{v} h_{S_{h}},\widetilde{\mathbf{Z}}_{v} h_{S^{c}_{h}}\rangle, \nonumber\\
& \geq \|\widetilde{\mathbf{Z}}_{v} h_{S_{h}}\|_{2}^{2} - \dfrac{\tau +1}{\tau -1}\theta_{s,s}(\widetilde{\mathbf{Z}}_{v})\|h_{S_{h}}\|^{2}_2, \nonumber\\
& \geq \left( \delta_{2s}^{-}(\widetilde{\mathbf{Z}}_{v}) - \dfrac{\tau +1}{\tau -1}\theta_{s,s}(\widetilde{\mathbf{Z}}_{v})\right)\|h_{S_{h}}\|^{2}_2, \nonumber\\
& \geq \left(\dfrac{3\tau-1}{2(\tau-1)}\delta_{2s}^{-}(\widetilde{\mathbf{Z}}_{v}) -\dfrac{\tau+1}{2(\tau-1)}\delta_{2s}^{+}(\widetilde{\mathbf{Z}}_{v}) \right)\|h_{S_{h}}\|^{2}_1 /s.
\end{align}
So from \eqref{ineq:hSh} we have:
\begin{align}
\label{ineq:hSh2}
\|h_{S_{h}}\|_1 & \leq \dfrac{s\left(k_0 n \lambda + \dfrac{n\lambda}{\tau}\right)}{\left(\dfrac{3\tau-1}{2(\tau-1)}\delta_{2s}^{-}(\widetilde{\mathbf{Z}}_{v}) -\dfrac{\tau+1}{2(\tau-1)}\delta_{2s}^{+}(\widetilde{\mathbf{Z}}_{v}) \right)}, \nonumber \\
& \leq s\dfrac{k_0 n \lambda + \dfrac{n\lambda}{\tau}}{2n\tau\phi_{0}/(\tau -1)}.
\end{align}
Combining \eqref{ineq:hc} and \eqref{ineq:hSh2}, we have:
\[
\|\hat{{\boldsymbol \beta}}^{n} - {\boldsymbol \beta}\|_1 = \|h_{S_{h}}\|_1+\|h_{S^{c}_{h}}\|_1 \leq \dfrac{2\tau}{\tau -1}\|h_{S_{h}}\|_{1} \leq \dfrac{s\lambda(k_0+1/\tau)}{\phi_{0}} .
\]
Taking $\lambda = \tau \tilde{c}\sqrt{(\log p)/n}$, we have:
\begin{align*}
{\mathbb{P}}\left(\|\hat{{\boldsymbol \beta}}^{n} - {\boldsymbol \beta}\| _1 \leq \dfrac{s\lambda(k_0+1/\tau)}{\phi_{0}} \right) & \geq 1- {\mathbb{P}}\left( \| (Y -\mu({\boldsymbol \beta}, \widetilde{\mathbf{Z}}))^\top \widetilde{\mathbf{Z}} \|_{\infty} > \dfrac{n\lambda}{\tau}\right) \\
& \geq 1- \sum_{i=1}^{p} {\mathbb{P}}\left( | ((Y -\mu({\boldsymbol \beta}, \widetilde{\mathbf{Z}}))^\top \widetilde{\mathbf{Z}} )_{i}| > \dfrac{n\lambda}{\tau}\right) \\
& \geq 1 -2 \sum_{i=1}^{p}\exp \left( - \dfrac{(\sqrt{n} \lambda / \tau)^{2}}{2K^{2}} \right)\geq 1 -2p^{1 - \tilde{c}^{2}/(2K^2)}.
\end{align*}
\end{proof}
\subsection*{Proof of Lemma \ref{thm: feasible set}}
\begin{proof}
We first provide a bound for $\bm{\Omega}_{{\boldsymbol \beta}}{\boldsymbol \Sigma} -(\mathbf{I}_p-P_C)$.
Notice that:
\begin{align*}
\bm{\Omega}_{{\boldsymbol \beta}}{\boldsymbol \Sigma} -(\mathbf{I}_p-P_C) &= \dfrac{1}{n}\sum_{k=1}^{n}\left( \bm{\Omega}_{{\boldsymbol \beta}} v({\boldsymbol \beta}, \widetilde{Z}_{k})\widetilde{Z}_{k}\widetilde{Z}_{k}^\top - (\mathbf{I}_p-P_C)\right), \\
& = \dfrac{1}{n}\sum_{k=1}^{n}\left( \bm{\Omega}_{{\boldsymbol \beta}}^{1/2}\bm{\Omega}_{{\boldsymbol \beta}}^{1/2} v({\boldsymbol \beta}, \widetilde{Z}_{k})\widetilde{Z}_{k}\widetilde{Z}_{k}^\top\bm{\Omega}^{1/2}_{{\boldsymbol \beta}} {\boldsymbol \Sigma}_{{\boldsymbol \beta}}^{1/2} - (\mathbf{I}_p-P_C)\right).
\end{align*}
The last equality holds as ${\boldsymbol \Sigma}_{{\boldsymbol \beta}}^{1/2}\bm{\Omega}^{1/2}_{{\boldsymbol \beta}}\widetilde{Z}_{k} = (\mathbf{I}_p - P_C)\widetilde{Z}_{k} = \widetilde{Z}_{k}$ for $k =1,2, \ldots,n$. Then notice that ${\mathbb E}\, \bm{\Omega}_{{\boldsymbol \beta}} v({\boldsymbol \beta}, \widetilde{Z}_{k})\widetilde{Z}_{k}\widetilde{Z}_{k}^\top = \bm{\Omega}_{{\boldsymbol \beta}} {\boldsymbol \Sigma}_{{\boldsymbol \beta}} = \mathbf{I}_p - P_C$, so define:
\[
v_{k}^{(ij)} = \bm{\Omega}_{i,\cdot}^{1/2}\bm{\Omega}_{{\boldsymbol \beta}}^{1/2} v({\boldsymbol \beta}, \widetilde{Z}_{k})\widetilde{Z}_{k}\widetilde{Z}_{k}^\top\bm{\Omega}^{1/2}_{{\boldsymbol \beta}} ({\boldsymbol \Sigma}_{{\boldsymbol \beta}})_{\cdot,j}^{1/2} - (\mathbf{I}_p-P_C)_{i,j},
\]
so that ${\mathbb E}\, v_{k}^{(ij)} =0$ for $k =1,2, \ldots,n$ and any $i,j$. Then by the proof of Lemma 6.2 in \citet{javanmard2014confidence}, we have:
\begin{align*}
\|v_{k}^{(ij)}\|_{\psi_{1}} & \leq 2\| \bm{\Omega}_{i,\cdot}^{1/2}\bm{\Omega}_{{\boldsymbol \beta}}^{1/2} v({\boldsymbol \beta}, \widetilde{Z}_{k})\widetilde{Z}_{k}\widetilde{Z}_{k}^\top\bm{\Omega}^{1/2}_{{\boldsymbol \beta}} ({\boldsymbol \Sigma}_{{\boldsymbol \beta}})_{\cdot,j}^{1/2} \|_{\psi_{1}}, \\
& \leq 2v({\boldsymbol \beta}, \widetilde{Z}_{k}) \| \bm{\Omega}_{i,\cdot}^{1/2}\bm{\Omega}_{{\boldsymbol \beta}}^{1/2} \widetilde{Z}_{k}\|_{\psi_{2}} \|({\boldsymbol \Sigma}_{{\boldsymbol \beta}})_{\cdot,j}\bm{\Omega}^{1/2}_{{\boldsymbol \beta}}\widetilde{Z}_{k} \|_{\psi_{2}}, \\
& \leq 2 \|({\boldsymbol \Sigma}_{{\boldsymbol \beta}})_{\cdot,j}\|_{2} \|\bm{\Omega}_{i,\cdot}^{1/2}\|_{2} \cdot \|\bm{\Omega}_{{\boldsymbol \beta}}^{1/2} \widetilde{Z}_{k}\|_{\psi_{2}} \|\bm{\Omega}^{1/2}_{{\boldsymbol \beta}}\widetilde{Z}_{k} \|_{\psi_{2}}, \\
& \leq 2\sqrt{C_{\text{max}}/ C_{\text{min}}}\kappa^2 \equiv \kappa_{1}^{\prime}.
\end{align*}
Then by the Bernstein-type inequality for centered sub-exponential random variables from \citet{buhlmann2011statistics}, we have:
\[
{\mathbb{P}}\left( \dfrac{1}{n} |\sum_{k=1}^{n}v_{k}^{(ij)}| \geq \gamma \right) \leq \exp\left( -\dfrac{n}{6} \min \left\{ \left(\dfrac{\gamma}{e\kappa_{1}^{\prime}}\right)^{2}, \dfrac{\gamma}{e\kappa_{1}^{\prime}}\right\}\right).
\]
Picking $\gamma = c \sqrt{(\log p) / n}$ with $ c \leq e \kappa_{1}^{\prime} \sqrt{n / (\log p)}$, we have:
\begin{equation}
\label{ineq:Bernstein}
{\mathbb{P}}\left( \dfrac{1}{n} |\sum_{k=1}^{n}v_{k}^{(ij)}| \geq c \sqrt{\dfrac{\log p}{ n}} \right) \leq 2p^{ - c^{2}/ (6e^2 \kappa_{1}^{\prime 2} )} =2p^{ - c^{2}C_{\text{min}}/ (24e^2 C_{\text{max}} \kappa^4)}.
\end{equation}
Since \eqref{ineq:Bernstein} holds for all $i,j$, a union bound gives:
\[
{\mathbb{P}} \left( |\bm{\Omega}_{{\boldsymbol \beta}}{\boldsymbol \Sigma} -(\mathbf{I}_p-P_C)|_{\infty} \geq
c \sqrt{(\log p) / n} \right) \leq 2p^{ - c^{2}C_{\text{min}}/ (24e^2 C_{\text{max}} \kappa^4)+2} = 2p^{-c_{1}^{''}}.
\]
Then by the following inequality:
\begin{align*}
&{\mathbb{P}} \left( |\bm{\Omega}_{{\boldsymbol \beta}}\widehat{{\boldsymbol \Sigma}} -(\mathbf{I}_p-P_C)|_{\infty} \geq c \sqrt{(\log p) / n} \right) \\
& \leq {\mathbb{P}} \left( |\bm{\Omega}_{{\boldsymbol \beta}}{\boldsymbol \Sigma} -(\mathbf{I}_p-P_C)|_{\infty} + |\bm{\Omega}_{{\boldsymbol \beta}}({\boldsymbol \Sigma} - \widehat{{\boldsymbol \Sigma}})|_{\infty} \geq c \sqrt{(\log p) / n} \right) \\
& \leq {\mathbb{P}} \left( |\bm{\Omega}_{{\boldsymbol \beta}}{\boldsymbol \Sigma} -(\mathbf{I}_p-P_C)|_{\infty} \geq c \sqrt{(\log p) / n} \right) + {\mathbb{P}} \left( |\bm{\Omega}_{{\boldsymbol \beta}}({\boldsymbol \Sigma} - \widehat{{\boldsymbol \Sigma}})|_{\infty} \geq c \sqrt{(\log p) / n}\right).
\end{align*}
Notice that:
\begin{align*}
\left|\bm{\Omega}_{{\boldsymbol \beta}}({\boldsymbol \Sigma} - \widehat{{\boldsymbol \Sigma}})\right|_{\infty} & = \frac{1}{n} \left| \sum_{k=1}^{n} \left( \bm{\Omega}_{{\boldsymbol \beta}} \left( v({\boldsymbol \beta}, \widetilde{Z}_{k}) - v(\hat{{\boldsymbol \beta}}^n, \widetilde{Z}_{k}) \right) \widetilde{Z}_{k}\widetilde{Z}_{k}^\top \right) \right|_{\infty}\\
& \leq \frac{1}{n} \left| \sum_{k=1}^{n} \left( C\|\hat{{\boldsymbol \beta}}^{n} - {\boldsymbol \beta}\| _1\bm{\Omega}_{{\boldsymbol \beta}} \widetilde{Z}_{k}\widetilde{Z}_{k}^\top \right) \right|_{\infty}.
\end{align*}
As
\[
\frac{1}{n} \sum_{k=1}^{n} \left( \bm{\Omega}_{{\boldsymbol \beta}} \widetilde{Z}_{k}\widetilde{Z}_{k}^\top \right) \rightarrow {\mathbb E}\, \bm{\Omega}_{{\boldsymbol \beta}} \widetilde{Z}_{1}\widetilde{Z}_{1}^\top = \bm{\Omega}_{{\boldsymbol \beta}}\Theta,
\]
together with the result obtained in Theorem \ref{thm: consistency}, we have
\begin{align*}
{\mathbb{P}} \left( |\bm{\Omega}_{{\boldsymbol \beta}}({\boldsymbol \Sigma} - \widehat{{\boldsymbol \Sigma}})|_{\infty} \geq c \sqrt{(\log p) / n}\right) & \leq {\mathbb{P}} \left( \frac{1}{n} \left| \sum_{k=1}^{n} \left( C\|\hat{{\boldsymbol \beta}}^{n} - {\boldsymbol \beta}\| _1\bm{\Omega}_{{\boldsymbol \beta}} \widetilde{Z}_{k}\widetilde{Z}_{k}^\top \right) \right|_{\infty} \geq c \sqrt{(\log p) / n}\right) \\
& \leq 2p^{1 - \hat{c}^{2}/(2K^2)} = 2p^{-c_{2}^{''}},
\end{align*}
where $\hat{c} = \frac{c \phi_{0}}{C|\bm{\Omega}_{{\boldsymbol \beta}}\Theta|_{\infty}s(k_0 \tau +1)}$.
So finally:
\begin{align*}
&{\mathbb{P}} \left( |\bm{\Omega}_{{\boldsymbol \beta}}\widehat{{\boldsymbol \Sigma}} -(\mathbf{I}_p-P_C)|_{\infty} \geq c \sqrt{(\log p) / n} \right) \\
& \leq {\mathbb{P}} \left( |\bm{\Omega}_{{\boldsymbol \beta}}{\boldsymbol \Sigma} -(\mathbf{I}_p-P_C)|_{\infty} \geq c \sqrt{(\log p) / n} \right) + {\mathbb{P}} \left( |\bm{\Omega}_{{\boldsymbol \beta}}({\boldsymbol \Sigma} - \widehat{{\boldsymbol \Sigma}})|_{\infty} \geq c \sqrt{(\log p) / n}\right) \\
& \leq 2p^{-c_{1}^{''}}+2p^{-c_{2}^{''}}.
\end{align*}
\end{proof}
\subsection*{Proof of Theorem \ref{thm:bound on Delta}}
\begin{proof}
As obtained in Lemma \ref{thm: feasible set}, $\bm{\Omega}_{{\boldsymbol \beta}}$ lies in the feasible set with high probability; that is, the event $|M\widehat{{\boldsymbol \Sigma}} -(\mathbf{I}_p-P_C)|_{\infty} \leq c \sqrt{(\log p) / n}$ holds with high probability.
Furthermore,
\begin{align*}
{\mathbb{P}}\left(|(\mathbf{I}_p-P_C) - M\widehat{{\boldsymbol \Sigma}}^{0}|_{\infty} \geq c \sqrt{(\log p) / n}\right) & \leq {\mathbb{P}} \left( |M\widehat{{\boldsymbol \Sigma}} -(\mathbf{I}_p-P_C)|_{\infty} \geq c \sqrt{(\log p) / n} \right) \\
& + {\mathbb{P}} \left( |M(\widehat{{\boldsymbol \Sigma}}^{0}- \widehat{{\boldsymbol \Sigma}})|_{\infty} \geq c \sqrt{(\log p) / n}\right).
\end{align*}
The bound for the first term on the right-hand side follows from Lemma \ref{thm: feasible set}. Applying a similar argument to the second term, and noticing that $\|\hat{{\boldsymbol \beta}}^{0} - {\boldsymbol \beta}\| _1 \leq \|\hat{{\boldsymbol \beta}}^{n} - {\boldsymbol \beta}\| _1$, we obtain ${\mathbb{P}} ( |M(\widehat{{\boldsymbol \Sigma}}^{0}- \widehat{{\boldsymbol \Sigma}})|_{\infty} \geq \allowbreak c \sqrt{(\log p) / n}) \leq 4p^{-c_{2}^{''}}$. So,
\begin{align*}
{\mathbb{P}}\left(|(\mathbf{I}_p-P_C) - M\widehat{{\boldsymbol \Sigma}}^{0}|_{\infty} \geq c \sqrt{(\log p) / n}\right) \leq 2p^{-c_{1}^{''}}+6p^{-c_{2}^{''}}.
\end{align*}
Finally,
\begin{align*}
\|\Delta\|_{\infty} & \leq \sqrt{n} \left|(\mathbf{I}_p-P_C) - \widetilde{M}\widehat{{\boldsymbol \Sigma}}^{0}\right|_{\infty}\|\hat{{\boldsymbol \beta}}^n-{\boldsymbol \beta}\|_{1} \\
& = \sqrt{n} \left|(\mathbf{I}_p-P_C)\left((\mathbf{I}_p-P_C) - \widetilde{M}\widehat{{\boldsymbol \Sigma}}^{0}\right)\right|_{\infty}\|\hat{{\boldsymbol \beta}}^n-{\boldsymbol \beta}\|_{1} \\
& \leq k_{0} \sqrt{n}|(\mathbf{I}_p-P_C) - M\widehat{{\boldsymbol \Sigma}}^{0}|_{\infty}\|\hat{{\boldsymbol \beta}}^n-{\boldsymbol \beta}\|_{1}.
\end{align*}
We have:
\begin{align*}
& {\mathbb{P}}\left( \|\Delta\|_{\infty} > \dfrac{c\tilde{c}k_{0}(k_0\tau +1)}{\phi_0} \cdot \dfrac{s\log p}{\sqrt{n}} \right) \\
& \leq {\mathbb{P}} \left( \|\hat{{\boldsymbol \beta}}^{n} - {\boldsymbol \beta}\| _1 \geq \dfrac{s\lambda(k_0+1/\tau)}{\phi_{0}} = \dfrac{s\tilde{c}(k_{0}\tau+1)\sqrt{(\log p) / n}}{\phi_{0}}\right) \\
& + {\mathbb{P}}\left(|(\mathbf{I}_p-P_C) - M\widehat{{\boldsymbol \Sigma}}^{0}|_{\infty} \geq c \sqrt{(\log p) / n}\right) \\
& \leq 2p^{-c'} +2p^{-c_{1}^{''}}+6p^{-c_{2}^{''}}.
\end{align*}
This completes the proof.
\end{proof}
\label{lastpage}
\end{document} |
\begin{document}
\title[Truncated tube domains with multi-sheeted envelope]
{Truncated tube domains with multi-sheeted envelope}
\author{Suprokash Hazra}
\curraddr{Department of Mathematics\\ Mid Sweden University\\
SE-851 70 Sundsvall\\ Sweden} \email{[email protected], [email protected]}
\author{Egmont Porten}
\keywords{Envelopes of holomorphy, truncated tube domains, special phenomena in complex dimension two.}
\subjclass[2020]{Primary 32D10, 32D26, 32Q02 ; Secondary 32V25, 32E20.}
\begin{abstract}
The present article is concerned with a group of problems raised by J.~Noguchi and M.~Jarnicki/P.~Pflug,
namely whether the envelopes of holomorphy of truncated tube domains are always schlicht, i.e.~subdomains of $\mathbb{C}^n$,
and how to characterise schlichtness if this is not the case.
By way of a counter-example homeomorphic to the $4$-ball, we answer the first question in the negative.
Moreover, we show that the envelope can have arbitrarily many sheets.
The article is concluded by sufficient conditions for schlichtness in complex dimension two.
\end{abstract}
\maketitle
\section{Introduction}
A domain $D\subset\mathbb{C}^n$ is said to be a \emph{tube domain}
if there are domains $X$, $Y\subset\mathbb{R}^n$
such that $D=X+iY$.
In case that one of the domains $X$, $Y$ equals $\mathbb{R}^n$, $D$ is a \emph{Bochner tube domain}.
Otherwise $D$ is often called \emph{truncated} tube domain.
A classical theorem of Bochner \cite{B} shows the envelope of holomorphy of $X+i\mathbb{R}^n$
to be the tube $\mbox{conv}(X)+i\mathbb{R}^n$ over the convex hull $\mbox{conv}(X)$,
meaning in particular that the envelope is schlicht.
Truncated tube domains are much less understood.
While the geometry of pseudoconvex tube domains is still simple
(see \cite{P} and \cite{N} for an alternative approach),
the investigation of the envelope becomes subtle in the nonpseudoconvex case.
Recently, questions about multi-sheetedness of these envelopes were raised
by J.~Noguchi \cite{N} and by M.~Jarnicki and P.~Pflug \cite{JP2}.
More precisely, it is asked in \cite[Rem.4]{JP2} whether these envelopes are always schlicht,
and in \cite[Problem 4.4]{N} whether simply connected or contractible tube domains have schlicht envelopes.
In the present article, we will present counter-examples answering both of the questions in the negative,
namely tube domains homeomorphic to the $4$-ball
whose envelopes have arbitrarily many sheets.
Let us review some terminology and results on envelopes, see \cite{JP1} for exhaustive information.
It is classical that every domain $D\subset\mathbb{C}^n$ admits a
Riemann domain $\pi:\textsf{E}(D)\rightarrow\mathbb{C}^n$
(unique up to isomorphisms of Riemann domains over $\mathbb{C}^n$)
with the following properties:
\begin{itemize}
\item[(i)\phantom{ii}]
There is an embedding $\alpha:D\hookrightarrow\textsf{E}(D)$
with $\pi\circ\alpha=\mbox{id}_D$.
\item[(ii)\phantom{i}] For every $f\in\mathcal{O}(D)$, the lift $f\circ(\pi|_{\alpha(D)})
$ extends holomorphically to $\textsf{E}(D)$.
\item[(iii)] $\textsf{E}(D)$ is Stein.
\end{itemize}
This domain is called the \emph{envelope of holomorphy of $D$}.
We say that $\textsf{E}(D)$ is \emph{$k$-sheeted over $z\in\mathbb{C}^n$}
if $\# \pi^{-1}(z)=k$.
For $U^{\rm open}\subset\mathbb{C}^n$, a sheet lying \emph{uniformly over $U$}
is an open set $V\subset\textsf{E}(D)$ such that $\pi|_V$ is a homeomorphism onto $U$.
The latter notion is mainly of interest when the number of sheets is infinite.
If $\textsf{E}(D)$ is $k$-sheeted over $z\in\mathbb{C}^n$ with $k<\infty$,
then there are $k$ sheets lying uniformly over any sufficiently small
neighbourhood of $z$.
The following is our main result.
\begin{theorem}\label{main}
For every integer $k\geq 2$ there is a domain $X_k\Subset\mathbb{R}^2$
with $\mathcal{C}^\infty$-smooth boundary and a radius $r_k>0$ such that
the envelope of holomorphy of $D_k=X_k+iB_{r_k}$ has at least $k$ sheets
over some point in $\mathbb{C}^2$.
Furthermore, there are a domain $X_\infty\Subset\mathbb{R}^2$ and $r_\infty>0$
such that $\textsf{E}(X_\infty+iB_{r_\infty})$ has infinitely many sheets lying uniformly
over a neighbourhood of a circle embedded in $\mathbb{C}^2$.
The domains $X_k$ and $X_\infty$ can be constructed diffeomorphic to the open $4$-ball.
\end{theorem}
Throughout, $B_r(y_0)$ denotes the round ball $\{y\in\mathbb{R}^n:|y-y_0|<r\}$ in $\mathbb{R}^n$,
in contrast to balls in $\mathbb{C}^n$, denoted by ${\Bbb B}_r(z_0)$.
If $y_0=0$, we abbreviate $B_r=B_r(0)$.
Note however that the exact shape of the domains in $y$-direction is not very important.
Moreover, it is easy to derive corresponding examples of tube domains in $\mathbb{C}^n$, $n\geq 3$.
More interesting is the difference concerning boundary regularity for $X_k$, $k<\infty$, and $X_\infty$.
The authors do not know whether the envelope is always finitely sheeted
if both $X$ and $Y$ are bounded and have smooth boundary.
This may be viewed as a minor variant of the
long-standing open problem raised by B.~Stens\o nes (Problem 3.3.1, \cite{prob})
whether envelopes of bounded domains with smooth boundary are finitely sheeted.
On the other side, one may still look for sufficient conditions ensuring schlichtness.
In recent work, M.~Jarnicki and P.~Pflug \cite{JP2} explicitly determined the envelopes of the model domains
\begin{equation}\label{model}
D_{n,r_1,r_2,r_3}=\{x\in\mathbb{R}^n:r_1<|x|<r_2\}+iB_{r_3}\subset\mathbb{C}^n,
\end{equation}
where $n\geq 2, 0\leq r_1<r_2,\ 0<r_3$, to be schlicht and equal to
\begin{equation}\label{ED}
\textsf{E}(D_{n,r_1,r_2,r_3})=\{|x|<r_2,|y|<r_3,|y|^2<|x|^2-(r_1^2-r_3^2)\}.
\end{equation}
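As a quick illustrative check (not part of any proof), membership in the schlicht envelope (\ref{ED}) can be tested numerically. With $r_1=r_3$ the third condition reduces to $|y|<|x|$, so points lying over the hole of the base, i.e.~with $|x|<r_1$, belong to the envelope as long as $|y|$ is small enough:

```python
import math

def in_envelope(x, y, r1, r2, r3):
    """Test (x, y) against |x| < r2, |y| < r3, |y|^2 < |x|^2 - (r1^2 - r3^2)."""
    nx, ny = math.hypot(*x), math.hypot(*y)
    return nx < r2 and ny < r3 and ny**2 < nx**2 - (r1**2 - r3**2)

# r1 = r3 = 1: x = (0.5, 0) lies over the hole of X = {1 < |x| < 2},
# yet (x, y) belongs to the envelope for small y
print(in_envelope((0.5, 0.0), (0.1, 0.0), 1.0, 2.0, 1.0))  # True
print(in_envelope((0.2, 0.0), (0.5, 0.0), 1.0, 2.0, 1.0))  # False
```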
Motivated by their question about generalizations, we present a general schlichtness theorem for $n=2$.
\begin{theorem}\label{schlicht2d}
Let $X\subset\mathbb{R}^2$ be a convex domain with finitely many holes with strictly convex $\mathcal{C}^2$-boundary,
and let $Y\Subset\mathbb{R}^2$ be a convex domain.
Then the envelope of holomorphy of $D=X+iY$ is schlicht.
\end{theorem}
Strict convexity in the above statement is meant in the sense of nonzero curvature,
see Section 4 for details.
Note that the domains $D_{2,r_1,r_2,r_3}$ satisfy the assumptions with $X=\{r_1<|x|<r_2\}$.
In the general situation, one cannot expect to obtain a description of $\textsf{E}(D)$
as explicit as in (\ref{ED}).
Actually, Lemma \ref{polhull} shows that this is essentially as hard as describing polynomial hulls.
However, Theorem \ref{schlicht2d} permits to derive some qualitative consequences.
Our proof essentially relies on phenomena special to complex dimension two,
and the corresponding question in higher dimension remains open.
The article is organised as follows. In Section 2, we construct the counter-examples mentioned in the main result.
The idea will be to produce a helix with many turns in the envelope of holomorphy
and lying over a circle in $\mathbb{C}^2$.
The question whether this helix can be arranged to be maximal in the sense of universal coverings,
is discussed and answered in Section 3. Moreover, the relation of our construction to limit cycles of laminations will become apparent.
Section 4 finally contains the proof of Theorem \ref{schlicht2d} and the discussion of some structural consequences.\\
{\bf Acknowledgements:} The authors would like to thank Professor Peter Pflug for drawing their attention to his recent article
and for a lot of valuable remarks helping to ameliorate the quality of the article.
They would also like to thank Professor Junjiro Noguchi for his interest in their work.
\section{Proof of the main result}\label{sec2}
The present section is dedicated to the {\bf proof of Theorem \ref{main}}, organised in four major steps.\\
{\bf Step 1: Geometric preparations.}
For a complex line $L=\{az_1+bz_2=0\}\subset\mathbb{C}^2$, $(a,b)\not=(0,0)$,
the logarithm $f_L=\log(az_1+bz_2)$ can be viewed as
a univalent holomorphic function on the universal covering
$\pi_{\rm univ}:Z\rightarrow\mathbb{C}^2\backslash L$
of $\mathbb{C}^2\backslash L$.
Of course, $Z$ equals essentially $Z_{\log}\times\mathbb{C}$ where $Z_{\log}$ is the Riemann surface
of the logarithm in one complex variable.
More precisely, if another complex line $L'$ meets $L$ transversally at the origin,
it is clear that $\pi_{\rm univ}^{-1}(L')$ is the universal covering of $L'\backslash\{0\}$
(and is equivalent to $Z_{\log}$).
Recall that the set $\mathcal T$ of all real planes $\Pi$ intersecting $L$ transversally at $0$
is a topological open $4$-ball.
To see this, we pick one such plane $\Pi_0$ and select real linear coordinates $s_1,s_2,t_1,t_2$
such that $L=\{s_1=s_2=0\}$, $\Pi_0=\{t_1=t_2=0\}$. Then the planes $\Pi\in{\mathcal T}$ are represented as graphs
\[
\left(\begin{array}{c} t_1 \\ t_2\end{array}\right)=A(\Pi)\left(\begin{array}{c} s_1 \\ s_2\end{array}\right),
\]
establishing a 1-1 correspondence between $\Pi\in{\mathcal T}$ and the real $2\times 2$ matrices $A(\Pi)$.
An easy homotopy argument shows
that we can replace $L'$ above by any $\Pi\in{\mathcal T}$.
In particular, $Z$ is still connected, with infinitely many sheets over $\Pi\backslash\{0\}$.
If $\gamma$ is a loop surrounding $0$ once
within $\Pi$, following $\gamma$ one turn takes us to a new sheet
and changes thus the value of $\log(az_1+bz_2)$ by $\pm 2\pi i$.
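For the concrete choice made in Step 2 below ($L=\{z_1=iz_2\}$, i.e.~$(a,b)=(1,-i)$, with $\Pi=\mathbb{R}^2_x$), this monodromy can be checked by hand: along $\gamma(\theta)=(\cos\theta,\sin\theta)$ we have
\[
az_1+bz_2=\cos\theta-i\sin\theta=e^{-i\theta},
\]
so $\log(az_1+bz_2)=-i\theta$ along $\gamma$, and one positive turn changes its value by $-2\pi i$.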
We summarise what we need in the sequel.
\begin{lemma}
Let $L\subset\mathbb{C}^2$ be a complex line and $\Pi\subset\mathbb{C}^2$ a real $2$-plane
which intersect transversally at the origin.
Let $\iota:\mathbb{T}=\{\zeta\in\mathbb{C}:|\zeta|=1\}\hookrightarrow\Pi\backslash\{0\}$ be a continuous embedding
that surrounds $0$ in $\Pi$ precisely once,
$D\subset\mathbb{C}^2$ a simply connected domain containing $\iota(\mathbb{T}\backslash\{1\})$ but not $\iota(\{1\})$,
and $F_L$ a continuous branch of $f_L$ on $D$.
Then the two one-sided limits
$\lim_{\theta\downarrow 0}F_L(\iota(e^{i\theta}))$ and $\lim_{\theta\uparrow 2\pi}F_L(\iota(e^{i\theta}))$
exist and differ by $\pm2\pi i$.
\end{lemma}
In more invariant terms, a general loop in $\mathbb{C}^2\backslash L$
lifts as a loop to $Z$ if and only if it is not linked with $L$, but we will not need this.
\begin{remark}\rm
{\bf a)}
The constructions in this and the next section rely on the initial choice of a pair $(L,\Pi)$
of a complex line $L$ and a real $2$-dimensional subspace $\Pi$ transverse to $L$.
These pairs are the points of a $6$-dimensional manifold $\mathcal{P}$.
We claim that $GL_2(\mathbb{C})$ acts transitively on $\mathcal{P}$,
meaning that all initial choices are equivalent.
Given two pairs $(L_j,\Pi_j)$, $j=1,2$, it is easy to reduce to the case that
$\Pi_1=\Pi_2=\mathbb{R}^2_x$ and
$L_j={\mathbb{C}\left(\!\!\begin{array}{c}1\\a_j+ib_j\end{array}\!\!\right)}$, $b_j\not=0$.
We conclude by observing that (i) the matrices in $GL_2(\mathbb{R})$ fix $\mathbb{R}^2_x$
and (ii) $L_1$ can always be mapped to $L_2$ by a real invertible matrix of the form
$\left(\!\!\begin{array}{cc}1 & 0\\c&d\end{array}\!\!\right)$.
{\bf b)}
The larger manifold $\mathcal{Q}$ of all pairs $(\Pi_1,\Pi_2)$ where the $\Pi_j$
are transverse real $2$-dimensional subspaces of $\mathbb{C}^2$ is $8$-dimensional.
For dimensional reasons
($GL_2(\mathbb{C})$ has real dimension $8$, but real multiples act the same way),
the action of $GL_2(\mathbb{C})$ is not transitive.
Alternatively, one may also observe that pairs of the form $(\Pi,i\Pi)$
form a $4$-dimensional orbit or use
\cite{W}, where it is shown that $\Pi_1\cup\Pi_2$
is sometimes locally polynomially convex and sometimes not.
\end{remark}
{\bf Step 2: A first option for $D_2$.}
In the above setting we can take $\Pi=\mathbb{R}^2_x$
and fix a complex transverse line $L$ accordingly (for example $L=\{z_1=iz_2\}$).
Set $A=\{(x_1,x_2)\in\mathbb{R}^2:1<|x|<2\}$, and define
\begin{equation}\label{R}
R=\max\{r>0:L\cap(A+iB_{r})=\emptyset\}.
\end{equation}
To show that $R$ is well defined, we easily verify that
$\{r>0:L\cap(A+iB_{r})=\emptyset\}$ is a nonempty
interval that is either of the form $(0,r_0]$ or $(0,\infty)$.
The last option is impossible, since any real plane through the origin
different from $i\mathbb{R}^2$ must meet $A+iB_{r}$,
provided $r$ is large enough.
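For the sample line $L=\{z_1=iz_2\}$ this can be made explicit (a side computation with a parametrisation chosen for convenience, not needed later): the points of $L$ are $z=(-t_2+it_1,\,t_1+it_2)$ with $(t_1,t_2)\in\mathbb{R}^2$, so $x=(-t_2,t_1)$, $y=(t_1,t_2)$ and $|x|=|y|$ on $L$. Hence
\[
L\cap(A+iB_r)\not=\emptyset\iff\exists\,t\in\mathbb{R}^2:\ 1<|t|<2\ \mbox{ and }\ |t|<r\iff r>1,
\]
so that $\{r>0:L\cap(A+iB_r)=\emptyset\}=(0,1]$ and $R=1$.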
Working in $\mathbb{R}^2$, we observe that the circle $\{|x-(3/2,-2)|=2\}$
intersects the closure of $A$ in two connected arcs.
Denoting the component passing through $p=(3/2,0)$ by $\gamma_0$,
we set $X=A\backslash\gamma_0$.
Pick $r\in(0,R]$. We claim that $\textsf{E}(X\times iB_r)$ is at least $2$-sheeted over the points
below the hypersurface $H_0=\gamma_0+iB_r$ and close to $(3/2,0)$.
To prove this, we observe that $H_0$ is strictly pseudoconvex.
By the Levi extension theorem, holomorphic functions locally defined on the upper side
$\{|x-(3/2,-2)|>2\}$ extend across $H_0$ to a uniform neighbourhood of $(3/2,0)$.
We can choose a branch $f$ of $f_L$ on the simply connected domain $X+iB_r$.
Our initial remarks on $f_L$ imply that the extension of $f$
through $H_0$ differs by $\pm 2\pi i$ from $f$ on the side $|x-(3/2,-2)|<2$,
and the claim follows. What we have shown so far is already enough to answer
Noguchi's question in the negative.
To finish the construction of $D_2$, we have to smoothen the boundary $X$.
This can for example be done by cutting out from $X$ the component of
\[
\{2-\eta\leq|x-(3/2,-2)|\leq 2\}\cap X, \mbox{ where } 0<\eta\ll 1,
\]
containing $\gamma_0$ and rounding off the four corners.
Note that the choice of $\eta$ sensitively depends on $r$,
as will become clear from the discussion of the Levi extension theorem
before the proof of Lemma \ref{Ginf}.\\
{\bf Step 3: Construction of $D_\infty$.}
To produce envelopes with infinitely many sheets,
we will replace the previous annulus-type domain by a spiral-shaped subdomain of $\{|x|>1\}$
with the circle $S=\{|x|=1\}$ in its limit set. The mapping
\[
\varphi:[1/2,1]\times\mathbb{R}_{\geq 0}\rightarrow\mathbb{R}^2, (s,\theta)\mapsto(1+se^{-\theta})(\cos\theta,\sin\theta)
\]
is an embedding of the semi-infinite strip $[1/2,1]\times\mathbb{R}_{\geq 0}$ into $\{|x|>1\}$ (use that $e^{-2\pi}<1/2$).
The domain $X_\infty=\varphi((1/2,1)\times\mathbb{R}_{>0})$ is squeezed between the straight segment
with endpoints $(3/2,0)$ and $(2,0)$ and the two spirals
\[
\Gamma_{1/2}=\varphi(\{1/2\}\times\mathbb{R}_{\geq 0})\ \mbox{ and }\ \Gamma_{1}=\varphi(\{1\}\times\mathbb{R}_{\geq 0}).
\]
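The injectivity of $\varphi$ reduces to the disjointness of successive turns, which is an elementary estimate: for $s,s'\in[1/2,1]$ and $k\geq 1$,
\[
1+s'e^{-(\theta+2\pi k)}\leq 1+e^{-\theta}e^{-2\pi}<1+\tfrac{1}{2}e^{-\theta}\leq 1+se^{-\theta},
\]
so points of the strip whose angles differ by a positive multiple of $2\pi$ are mapped to distinct radii.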
\vspace*{-0.55cm}
\begin{figure}
\caption{Intersection of $D_{\infty}$ with $\mathbb{R}^2_x$}
\label{fig1}
\end{figure}
Note that $S$ also belongs to $\overline{X}_\infty$, meaning in particular
that the set theoretic boundary $\partial X_\infty$
is not smooth along $S$, see Figure \ref{fig1}.
\begin{lemma}\label{Ginf}
Set $D_r=X_\infty+iB_{r}$ with $r\in(0,R)$. Then there is $\rho>0$ such that $\textsf{E}(D_r)$
contains a Riemann subdomain which has infinitely many sheets over every point of
the thickened annulus $A_{\rho}=\{1-\rho<|x|<1\}+iB_{\rho}$.
\end{lemma}
Before entering the proof of Lemma \ref{Ginf}, some comments are in order.\\
\noindent
{\bf (1)} Let us identify the universal covering of $A_{\rho}$ with
\[
\{1-\rho<s<1\}\times\mathbb{R}_\theta\times B_{\rho}
\]
via the map $(s,\theta,y)\mapsto(s\cos\theta,s\sin\theta,y)$.
Then the proof below will show
that the Riemann subdomain in the lemma can actually be chosen as the lifting of a domain of the form
\begin{equation}\label{lift}
\{1-\rho<s<1\}\times\{\theta>C\}\times B_{\rho},
\end{equation}
with $C>0$ sufficiently large.\\
\noindent
{\bf (2)}
Recall the Levi extension theorem:
\it
Let $H$ be a connected closed real hypersurface of a domain $G\subset\mathbb{C}^2$
such that $G\backslash H$ has two connected components $G^\pm$.
In addition, assume that the Levi form of $H$ is nondegenerate in every point
and that $G^+$ is the pseudoconcave side of $H$.
For $z_0\in H$, there is an $\eta>0$ with the following properties:
For every open set $V$ containing $H$, all functions holomorphic
on $V^+=V\cap G^+$ extend holomorphically to
$V^+\cup \big(B_\eta(z_0)\cap\overline{G^-}\,\big)$.\\
\rm
We need two additional stability properties of Levi extension:
{\bf (i)} Since it mainly depends on Levi curvature, it is stable under $\mathcal{C}^2$-small deformations of $H$.
More concretely, the $\eta$ above can be chosen uniformly for $H$
and all sufficiently $\mathcal{C}^2$-close deformations fixing $z_0$.
{\bf (ii)} The constant $\eta$ does not depend on the size of $G^+$.
If $G$ is replaced by a smaller domain $G_1$ with analogous properties,
we still obtain holomorphic extension to $G_1^+\cup\big(B_\eta(z_0)\cap\overline{G^-}\,\big)$.
These properties can be shown by the method of analytic discs, see \cite{MP1}.\\
{\bf Proof of Lemma \ref{Ginf}:}
The spiral $\Gamma_{1/2}$ accumulates smoothly to $S$.
More precisely, for any interval $(\theta_1,\theta_2)\subset\mathbb{R}_{>0}$ which is \emph{short}
in the sense that $\theta_2-\theta_1<2\pi$, the parametrisations
\[
\varphi_j(\theta)=\varphi(1/2,\theta+2\pi j),\ \theta_1<\theta<\theta_2,\ j=0,1,\ldots,
\]
tend to $(\cos\theta,\sin\theta)$ on $(\theta_1,\theta_2)$ with all their derivatives for $j\rightarrow\infty$.
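Indeed, by $2\pi$-periodicity of the angular factor,
\[
\varphi_j(\theta)=\bigl(1+\tfrac{1}{2}e^{-2\pi j}e^{-\theta}\bigr)(\cos\theta,\sin\theta),
\]
so the difference to $(\cos\theta,\sin\theta)$ is $\tfrac{1}{2}e^{-2\pi j}e^{-\theta}(\cos\theta,\sin\theta)$, whose derivatives of any fixed order on $(\theta_1,\theta_2)$ are bounded by a constant times $e^{-2\pi j}$.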
Fix now $r<R$ as indicated in the assumptions. For $\tau\in[\pi/2,5\pi/2)$,
we look at the short interval
$I_{\tau}=(\tau-\pi/2,\tau+\pi/2)$,
the corresponding halfcircle $S_{\tau}\subset S$
centered at $z_{\tau}=(\cos\tau,\sin\tau)$,
and the strictly pseudoconvex hypersurface $H_{\tau}=S_{\tau}+iB_r$.
For $j=0,1,2,\ldots$, we define the translated intervals
\[
I_{\tau,j}=(\tau+\pi(2j-1/2),\tau+\pi(2j+1/2))
\]
and the curved rectangles
\[
Q_{\tau,j}=\varphi((1/2,1)\times I_{\tau,j})+iB_r.
\]
These tubes
have a pseudoconcave boundary part
\[
H_{\tau,j}=\varphi(\{1/2\}\times I_{\tau,j})+iB_r,
\]
which converges in $\mathcal{C}^2$-sense toward $H_\tau$ for $j\rightarrow\infty$.
Set $z_{\tau,j}=\varphi(1/2,\tau+2\pi j)$.
For $\eta>0$ not too large ($\eta<1/4$ will do),
the disc $B_{3\eta/2}(z_{\tau,j})$ is disconnected by the arc
$\gamma_{\tau,j}=\varphi(\{1/2\}\times I_{\tau,j})$
into two connected components $B^\pm_{3\eta/2}(z_{\tau,j})$.
We choose signs such that $B^-_{3\eta/2}(z_{\tau,j})$
lies on the pseudoconvex side of $H_{\tau,j}$. We now define the enlarged domain
\[
Q_{\tau,j}(\eta)=Q_{\tau,j}\cup\Big(\big(\gamma_{\tau,j}\cap B_{3\eta/2}(z_{\tau,j})\big)+iB_\eta\Big)
\cup\Big(B^-_{3\eta/2}(z_{\tau,j})+iB_\eta\Big).
\]
The stability properties of Levi extension discussed above yield an $\eta=\eta(\tau)>0$
such that for $j$ sufficiently large, say $j\geq J=J(\tau)$, the following holds:
Any function holomorphic on $Q_{\tau,j}$ extends to $Q_{\tau,j}(\eta)$.
Enlarging $J$ if necessary, we may in addition assume that the domain
$Q_{\tau,j}(\eta)$, $j\geq J$, contains
\[
\mathbb{B}^-_\eta(\cos\tau,\sin\tau)=\mathbb{B}_\eta(\cos\tau,\sin\tau)\cap (B_1+i\mathbb{R}^2),
\]
where $\mathbb{B}_\eta(z)$ denotes the round ball in $\mathbb{C}^2$.
By compactness, we can choose $J$ and $\eta$
uniformly for $\tau\in[\pi/2,5\pi/2]$.
Relabelling the domains $Q_{\tau,j}$ as $Q_\sigma$ if $\sigma=\tau+2\pi j$,
we define $Q_\sigma$ for any $\sigma\geq\pi/2$.
Then we obtain for every $f\in\mathcal{O}(D_\infty)$
a $\sigma$-dependent family of germs
$f_\sigma\in\mathcal{O}_{\mathbb{C}^2,(\cos\sigma,\sin\sigma)}$\footnote{
$\mathcal{O}_{\mathbb{C}^2,z}$ denotes the stalk of holomorphic germs at the point $z\in\mathbb{C}^2$.},
$\sigma\geq 2J\pi+\pi/2$, by
\begin{itemize}
\item[\bf (i)\phantom{ii}] restricting $f$ to $f|_{Q_\sigma}$,
\item[\bf (ii)\phantom{i}] extending $f|_{Q_\sigma}$ to $Q_\sigma(\eta)$, and
\item[\bf (iii)] restricting to the germ of the extension at $(\cos\sigma,\sin\sigma)\in Q_\sigma(\eta)$.
\end{itemize}
This yields a continuously parametrised curve
$\sigma\mapsto p(\sigma)\in\textsf{E}(D_\infty)$, $\sigma\geq 2\pi J+\pi/2$, over $S$.
Next we specialise to a branch $\tilde{f}_L$ of $f_L=\log(az_1+bz_2)$ on $D_\infty$,
which exists since $D_\infty$ is simply connected.
The above construction also gives a continuous curve
$\sigma\mapsto q(\sigma)\in Z$ over $S$, $\sigma\geq 2\pi J+\pi/2$,
by mapping the germ of the extension at $(\cos\sigma,\sin\sigma)$
to the corresponding point in the Riemann domain of $\tilde{f}_L$.
With every turn, $q(\sigma)$ changes sheet, and therefore the values
of two consecutive extensions at $(\cos\sigma,\sin\sigma)$ differ by $\pm 2\pi i$.
It follows that $p(\sigma)$ also changes sheet with every turn,
and
Lemma \ref{Ginf} is proved. \qed\\
{\bf Step 4: Construction of $D_k$.}
With $J$ and $\eta$ as chosen in the proof of Lemma \ref{Ginf},
we consider the truncated spiral
\[
X'_k=\varphi((1/2,1)\times(0,2\pi J+2(k+1)\pi)).
\]
Using the arguments above we see that $\textsf{E}(X'_k\times iB_r)$
has the required extension properties.
Smoothness can be obtained by rounding off the corners by moving the boundary outside of $X'_k$,
which results in a slightly larger domain $X_k$.
The construction of $X_k$ and the proof of Theorem \ref{main} are complete. \qed
\begin{remark}\rm
{\bf{(a)}} Since the family of curved rectangles depends continuously on the central angle,
we obtain a subset of $\textsf{E}(D_\infty)$ as parametrised in (\ref{lift}).
A careful examination of the proof of Lemma \ref{Ginf} shows that we can take $C=2J\pi+\pi/2$ and $\rho=\eta/\sqrt{2}$
in (\ref{lift}).
{\bf{(b)}} A straightforward modification of the above proof, relying on the properties of Levi extension,
shows that we could have started from an \emph{arbitrary} tubular thickening
of the central spiral $\varphi(\{3/4\}\times\mathbb{R}_{>0})$. The resulting slight simplification will be used in the next section.
\end{remark}
\section{Universally covered circles}
Let $D\subset\mathbb{C}^n$ be a domain and $\gamma\subset\mathbb{C}^n$ an embedded loop
(a curve homeomorphic to the circle)
with inclusion $\iota_\gamma:\gamma\hookrightarrow\mathbb{C}^n$.
A connected component $\gamma'$ of $\pi_D^{-1}(\gamma)$ is either {\bf (a)} a loop or {\bf (b)} an arc.
In case {\bf (a)}, $\pi_D|_{\gamma'}:\gamma'\rightarrow\gamma$ is a covering
of $\gamma$ by $\gamma'$, whose topology is determined by the number
$k$ of sheets over an arbitrary point of $\gamma$ (here $k$ is a positive integer).
Case {\bf (b)} splits again:
{\bf (b$_1$)}
$\pi_D|_{\gamma'}:\gamma'\rightarrow\gamma$ may be the universal covering of $\gamma$, i.e.~$k=\infty$.
{\bf (b$_2$)}
Otherwise, $\pi_D|_{\gamma'}$ is not a covering,
and the closure of $\gamma'$ in the abstract closure of $\textsf{E}(D)$
contains at least one point in the abstract boundary.
If the case {\bf (b$_1$)} is valid, we say also
that $\gamma'$ \emph{universally covers $\gamma$}
(and that $\gamma$ is \emph{universally covered by $\textsf{E}(D)$}).
Intuitively, this means that $\textsf{E}(D)$ becomes as multi-sheeted as possible above $\gamma$.
Note that the definition may be extended to higher-dimensional
submanifolds of $\mathbb{C}^n$ instead of $\gamma$.
The goal of this section is to strengthen the construction of $D_\infty$
in the proof of Theorem \ref{main}.
\begin{proposition}
Assume that $\Pi=\mathbb{R}^2$ and $L$ are selected as in the construction of $X_2$
and that $R$ is defined as in (\ref{R}).
For every $r\in(0,R)$, there is a bounded domain $X\subset\mathbb{R}^2$ such that the envelope of holomorphy of
$D=X+iB_r$ universally covers a circle $T\subset\mathbb{R}^2\backslash X$.
\end{proposition}
\begin{proof}
Since the idea is close to the proof of Theorem \ref{main}, we will only sketch the construction.
We choose $\Pi=\mathbb{R}^2$, $L$ and $R$ as in the construction of $X_2$.
Instead of working with a spiral accumulating from outside to $\{|x|=1\}$,
we start from a spiral $S_\epsilon$ contained in a thin annulus
\[
A_\epsilon=\{1<|x|<1+\epsilon\}\subset\mathbb{R}^2
\]
and accumulating at both of its boundary circles
($\epsilon$ will be chosen later).
Concretely, $S_\epsilon$ can be taken as the curve parametrised by
\[
\psi_\epsilon:\theta\mapsto\left(1+\epsilon\left(\frac{1}{2}+\frac{\arctan{\theta}}{\pi}\right)\right)(\cos\theta,\sin\theta),\ \theta\in\mathbb{R}.
\]
This spiral accumulates to $T=\{|x|=1\}$ for $\theta\rightarrow-\infty$ and to $\{|x|=1+\epsilon\}$ for $\theta\rightarrow\infty$.
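This accumulation behaviour is a direct consequence of $\lim_{\theta\rightarrow\mp\infty}\arctan\theta=\mp\pi/2$:
\[
\lim_{\theta\rightarrow-\infty}\Bigl(1+\epsilon\Bigl(\frac{1}{2}+\frac{\arctan\theta}{\pi}\Bigr)\Bigr)=1,
\qquad
\lim_{\theta\rightarrow+\infty}\Bigl(1+\epsilon\Bigl(\frac{1}{2}+\frac{\arctan\theta}{\pi}\Bigr)\Bigr)=1+\epsilon.
\]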
The domain $X$ (which also depends on $\epsilon$) is now chosen as a tubular neighbourhood\footnote{
i.e.~a domain $X\subset A_\epsilon$ containing $S_\epsilon$ and admitting a homeomorphism $S_\epsilon\times (-1,1)\rightarrow X$
which restricts to $(x,0)\mapsto x$
along $S_\epsilon\times\{0\}$.}
of $S_\epsilon$ in $A_\epsilon$. Its precise shape is not important, but
we can arrange that $X$ is smoothly bounded as a domain in $A_{\epsilon}$ (but not in $\mathbb{R}^2$).
\vspace*{-0.55cm}
\label{universal}
\begin{figure}
\caption{Intersection of $D$ with $\mathbb{R}^2_x$}
\end{figure}
Fix now $r\in(0,R)$.
Similarly as before, we look at segments
\[
I_\epsilon(\theta)=\psi_\epsilon\big((\theta-\pi/2,\theta+\pi/2)\big)\subset S_\epsilon,\ \theta\in\mathbb{R},
\]
which correspond to a change of angle by $\pi$.
The key observation is that a sufficiently small choice of $\epsilon$ guarantees that for \emph{all} $\theta\in\mathbb{R}$,
holomorphic functions defined near $I_\epsilon(\theta)+iB_r$ extend to a uniform domain
containing the point $p_\theta=(\cos\theta,\sin\theta)$ on the unit circle $T\subset\mathbb{R}^2$.
More precisely, we may even arrange that this domain contains
$L_\epsilon(\theta)=\big(I_\epsilon(\theta)\cup B^-_{2\epsilon}(\psi_\epsilon(\theta))\big)+iB_\epsilon$,
where $B^-_{2\epsilon}(\psi_\epsilon(\theta))$ is the connected component of
$B_{2\epsilon}(\psi_\epsilon(\theta))\backslash I_\epsilon(\theta)$ containing $p_\theta$.
We conclude similarly as before. Looking at a univalent branch on $X+iB_r$ of the logarithm $f_L$ ``around'' $L$,
we see that every turn along $S_\epsilon$ brings us to a new sheet
in the envelope of $D=X+iB_r$.
Formally, this can again be expressed by a mapping which associates to every $\theta\in\mathbb{R}$
the germ at $(\cos\theta,\sin\theta)$ of the holomorphic extension of the function restricted to a
neighbourhood of $I_\epsilonilon(\theta)+iB_r$.
Since the sets $L_\epsilon(\theta)$ move continuously with $\theta$,
$T$ is universally covered by $\textsf{E}(D)$.
\end{proof}
It is crucial for the above construction that the thickness $\epsilon$ of the annulus decreases with $r$.
To explain the reason, let us keep $X$ as in the proof but replace $D$ by $D'=X+iB_{r'}$ with $r'>0$ arbitrary.
If $r'$ is small enough, the lines $L_x$ through the points $x\in T$ and parallel to $L$ are disjoint from
\[
\{z\in\mathbb{C}^2:|x|=1+\epsilon,|y|\leq r'\}.
\]
Hence for $\theta$ sufficiently large, the hypersurface $M=I_\epsilon(\theta)+iB_{r'}$ does not intersect
$L_{(\cos\theta,\sin\theta)}$.
This excludes holomorphic extension from $M$ to a neighbourhood of $(\cos\theta,\sin\theta)$,
and the main argument of the proof breaks down.
\section{Schlichtness in dimension $2$}
In the above examples, multi-sheetedness was derived from Levi extension through pseudoconcave parts of the boundary of $D$.
Moreover, there were always pseudoconvex parts of the boundary over which multi-sheetedness was observed.
The goal of the section is to provide some evidence showing that this interplay is \emph{necessary} for multi-sheetedness in dimension $2$.
Let $C$ be a smoothly embedded circle in $\mathbb{R}^2$. Picking a parametrisation of $C$
by a smooth, $2\pi$-periodic immersion $\gamma:\mathbb{R}\rightarrow\mathbb{R}^2$, the increment of
the angle may be viewed as a $2\pi$-periodic $1$-form $\alpha(\theta)\,d\theta$.
We call $C$ \emph{strictly convex} if $\alpha(\theta)$ does not vanish
(i.e.~there are no inflection points).
It is elementary to verify that this notion is well-defined and that the bounded domain $X_{C,\sf in}$
surrounded by a strictly convex circle $C$ is convex.
By a \emph{convex domain with finitely many strictly convex holes} we mean a domain $X\subset\mathbb{R}^2$
obtained from a convex domain $X_{\sf cvx}$ by excision of finitely many disjoint closed discs of the form
$\overline{X}_{C_j,\sf in}$ where the $C_j$ are strictly convex circles.
We will keep these notations in the following arguments.\\
\noindent
{\bf Proof of Theorem \ref{schlicht2d}:}
The essence of the proof is contained in the treatment of a special case,
which may be viewed as an appropriate deformation of the model domains (\ref{model}):
Assume that $X$ is bounded by two strictly convex circles $C_0,C_1$,
which we number so that $X$ is strictly concave along $C_1$,
and set $D=X+iY$, with $Y$ as in the theorem.
For $K\subset\mathbb{C}^n$ compact, we recall that
\[
\widehat{K}=\{z\in\mathbb{C}^n:|p(z)|\leq\max_{K}|p| \mbox{ for each holomorphic polynomial } p\}
\]
denotes its polynomial hull. We obtain the following description of $\textsf{E}(D)$
in terms of polynomial hulls.
\begin{lemma}\label{polhull}
Let $K_x$ be the closure of the convex domain bounded by $C_1$ and $K=K_x+i\partial Y$.
Then $\textsf{E}(D)=D'\backslash\widehat{K}$, where $D'=(X\cup K_x)+iY$.
In particular, $\widehat{K}$ does not disconnect $D'$.
\end{lemma}
The lemma may be viewed as a variant of a classical result by Stout \cite{Sto}
on extension from parts of strictly pseudoconvex boundaries.
For the reader's convenience, we sketch an independent proof
along the lines of \cite{MP2}, see also \cite{LP} for a related application in dimension $2$.\\
\noindent
{\bf Proof of Lemma \ref{polhull}:}
There is a nonnegative smooth function $\varphi\in\mathcal{C}^\infty(\mathbb{C}^2)$
that vanishes precisely on $\widehat{K}$ and is strictly plurisubharmonic on the complement.
Since $\widehat{K}\subset K_x+i\overline{Y}$, we have $\varphi>0$ on $D$.
After an appropriate deformation of $\varphi$, we can in addition assume
that $\varphi|_{\mathbb{C}^2\backslash\widehat{K}}$ is a good Morse function\footnote{
i.e.~a function whose critical points are isolated nondegenerate quadratic singularities lying on different level sets.}.
Next we deform $C_1+i\mathbb{R}^2$ such that $C_1+i(\mathbb{R}^2\backslash Y)$ is fixed
and $C_1+iY$ is moved into $D$.
Since $C_1+iY$ is a strictly pseudoconvex hypersurface,
a $\mathcal{C}^2$-small deformation of this kind will yield
a strictly pseudoconvex hypersurface $M\subset D$.
Moreover, $M\cup K$ bounds a domain $D_{\sf in}\subset D'$,
which is slightly larger than $\mbox{int}(K_x)+iY$.
Then $M$ disconnects $D'$ into $D_{\sf in}$ and an outer domain $D_{\sf out}$.
Finally we may arrange in addition that $\varphi|_M$ is a good Morse function.
We use the methods from \cite{MP2} to extend (restrictions to $D_{\sf out}$ of)
holomorphic functions on $D$ to the open sets
\[
Q_{h}=D_{\sf out}\cup\{z\in D_{\sf in}:\varphi(z)>h\},\ h>0.
\]
We need the following three elementary properties:
\begin{itemize}
\item[(i)] $Q_h=D_{\sf out}$ for $h\gg 0$,
\item[(ii)] $Q_{h_1}\subset Q_{h_2}$ if $h_1>h_2$,
\item[(iii)] $\bigcup_{h>0}Q_h=D'\backslash\widehat{K}$.
\end{itemize}
The idea is to construct the extension by letting $h$ decrease continuously to $0$.
Assume that the extension to $Q_{h_0}$ is valid for some $h_0>0$.
If $h_0$ is neither a critical value of $\varphi$ nor of $\varphi|_M$,
it is not hard to construct extension to $Q_{h_1}$ for some $h_1<h_0$
by gluing finitely many local Levi extensions through the strictly pseudoconvex
hypersurface $\{\varphi=h_0\}$.
The general case requires a careful study of the local geometry at Morse singularities.
For the details, we refer to \cite{MP2}.
Note however that the constructions in \cite{MP2}, where extension from \emph{non}pseudoconvex boundaries is considered,
have to take transitional multi-sheetedness into account.
As explained in \cite{LP}, these monodromy problems are easier to handle in our setting, thanks to the strict pseudoconvexity of $M$.
After constructing extension to $D'\backslash\widehat{K}$, the proof will be ready
as soon as we have shown that $D'\backslash\widehat{K}$ is pseudoconvex.
This follows from the convexity of $D'$ and the known fact that the polynomial hull $\widehat{K}$ is pseudoconcave
at points of the essential hull $\widehat{K}\backslash K$, see for example \cite{Slo}.
\qed
\begin{remark}\label{n>=3}
\rm
Most of the above proof generalises without change to $n\geq 3$,
giving univalent extension from $D$ to $D'\backslash\widehat{K}$.
At least for the general case of extension from parts of strictly pseudoconvex boundaries,
it is however known that this need not give the entire envelope and that the envelope
may even be multi-sheeted. \qed
\end{remark}
Let us now consider the general case, where we denote by $C_1,\ldots,C_m$
the components of $\partial X$ along which $X$ is strictly concave.
For each $\mu=1,\ldots,m$, we select a similar curve $C_{\mu,1}$ obtained by smoothly deforming $C_\mu$ slightly into $X$.
By Lemma \ref{polhull} applied to the domain $X_\mu$ squeezed between $C_\mu$ and $C_{\mu,1}$,
we have
\[
\textsf{E}(X_\mu+iY)=X'_\mu\backslash\widehat{K_\mu}.
\]
Here $K_\mu=K_{x,\mu}+i\partial Y$ with $K_{x,\mu}$ being the closure of the convex domain bounded by $C_\mu$,
and $X'_\mu=(X_\mu\cup K_{x,\mu})+iY$.
By construction,
\[
\widehat{D}=D\cup\bigcup_{\mu=1}^m \textsf{E}(X_\mu+iY)
\]
embeds into the envelope of $D$.
To conclude the proof, it just remains to show that $\widehat{D}$ is pseudoconvex
(this is the only place where we use the properties of the outer boundary component of $\partial X$)
and must therefore coincide with the envelope of holomorphy of $D$.
Pseudoconvexity follows from the convexity of
\[
D'=(X\cup\bigcup_{\mu=1}^m K_{x,\mu})+iY
\]
and the local pseudoconvexity along the essential hulls $\widehat{K_\mu}\backslash K_\mu$,
used already in the proof of Lemma \ref{polhull}.
The proof of Theorem \ref{schlicht2d} is complete.
\qed
While the assumption on the strict convexity of the holes can probably be weakened,
we cannot completely omit an assumption on their shape.
\begin{example}\label{convexity}\rm
The following example is a variant of our first option of $D_2$ in Section \ref{sec2}.
Choose $\mathbb{P}i=\mathbb{R}_x^2$ and a complex line $L=\{az_1+bz_2=0\}$ like there.
We construct $X'$ by adding to the annulus $A=\{(x_1,x_2)\in\mathbb{R}^2:2<|x|<3\}$
the bridge $\{|x_1|<1/2\}\cap B_3$ from which we cut away a sufficiently thin bent channel
as indicated in Figure \ref{ncvx}
(the width of the channel corresponding to $\eta$ in the construction of $D_2$).
\begin{figure}
\caption{The domain $X'$ in $\mathbb{R}^2$}
\label{ncvx}
\end{figure}
We choose $r$ such that the closure of $D'=X'+iB_r$ does not intersect $L_{-1}$ and $L_1$,
the parallel translates of $L$ passing through the points $(\pm 1,0)$ respectively.
Now the choice of a branch of
\begin{equation}\label{root}
\sqrt{(a(z_1+1)+bz_2)(a(z_1-1)+bz_2)}
\end{equation}
defines a univalent holomorphic function $f$ on $D'$.
If the channel is sufficiently thin, $f$ extends from the upper part of the interrupted bridge to a neighbourhood of the origin.
Since a single turn around one of the branch points of (\ref{root}) changes the value, we have exhibited two sheets over $0$.
Rounding off the eight corners of $X'$ we obtain an example with a nonconvex hole and a nonschlicht envelope of holomorphy.
\qed
\end{example}
It is instructive to relate Lemma \ref{polhull} to the results in \cite{JP2}.
\begin{example}\label{JPexp}\rm
{\bf a)} For $D_{2,r_1,r_2,r_3}$ as in (\ref{model}),
it is enough to find the polynomial hull of the solid torus
$K=\overline{B}_{r_1}+i\partial B_{r_3}$.
By convexity $\widehat{K}$ is contained in the closure of the polydisc tube
$P=\overline{B}_{r_1}+i\overline{B}_{r_3}$.
The function $g=e^{z_1^2+z_2^2}$ has modulus $|g|=e^{|x|^2-|y|^2}$.
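The modulus formula is immediate from $\mathrm{Re}(z_j^2)=x_j^2-y_j^2$:
\[
|g(z)|=e^{\mathrm{Re}(z_1^2+z_2^2)}=e^{(x_1^2-y_1^2)+(x_2^2-y_2^2)}=e^{|x|^2-|y|^2}.
\]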
Thus $\max_{K}|g|=e^{r_1^2-r_3^2}$ is attained on $\partial B_{r_1}+i\partial B_{r_3}$, the boundary torus of $K$,
and we get
\[
\widehat{K}\subset L=\{|x|^2-|y|^2\leq r_1^2-r_3^2\}\cap P.
\]
To show that this is an equality we only have to observe
that $L$ is foliated by the complex curves
$\{z_1^2+z_2^2=c\}$, $\mbox{Re}(c)\leq r_1^2-r_3^2$
(pairs of discs if $\mbox{Re}(c)=\mbox{Im}(c)$ and annuli else),
with boundary in $K$. Equality then follows from the
maximum modulus principle.
Hence we have rederived (\ref{ED}) from Lemma \ref{polhull}.
For $n\geq 3$, our arguments concerning polynomial hulls are still valid.
Thus Remark \ref{n>=3} yields holomorphic extension to the set in (\ref{ED}),
and then we conclude by directly verifying that this set is pseudoconvex
as in \cite{JP2}.
Note however that local pseudoconvexity of the complement of $\widehat{K}$
at points of the essential hull $\widehat{K}\backslash K$ is not true in general for $n\geq 3$.
The hull is still locally $1$-pseudoconcave, but this implies less pseudoconvexity for the complement.
{\bf b)} Looking at the ``breaking point'' $r_1=r_3$ leads us to considering
$$\Delta=\{z=x+iy\in\mathbb{C}^2:|y|^2<|x|^2<1\}.$$
With respect to the standard real euclidean structure of $\mathbb{C}^2=\mathbb{R}^4$,
it is congruent to the Hartogs triangle $\{z=(z_1,z_2)\in\mathbb{C}^2:|z_2|^2<|z_1|^2<1\}$.
However, its function theoretic properties are very different.
For example, since $\overline{\Delta}$ is polynomially convex,
$\Delta$ has no Nebenh\"ulle, in contrast to the Hartogs triangle.
\qed
\end{example}
It is rare that polynomial hulls can be described as explicitly as in Example \ref{JPexp}.
Therefore there is not much hope for more explicit information than in Theorem \ref{schlicht2d}.
It is however possible to deduce qualitative properties like
\begin{corollary}
Let $D$, $K_x$ and $Y$ be as in Lemma \ref{polhull}. Assume moreover that $0\in Y$,
and set $Y_R=R\cdot Y$, $K_R=K_x+i\partial Y_R$, $D_R=X+iY_R$.
Then the following properties hold:
\begin{itemize}
\item[\bf (a)]
For every $R_0\geq 0$ there is $R_1>0$ such that
$\widehat{K}_R\cap\{|y|\leq R_0\}=\emptyset$
if $R\geq R_1$.
\item[\bf (b)] $\textsf{E}(D+i\mathbb{R}^2)=\bigcup_{R>0}\textsf{E}(D_R)$.
\end{itemize}
\end{corollary}
{\bf Proof:}
General arguments actually imply that {\bf (b)} always holds
for a family of subdomains exhausting $D$ in a suitable sense,
provided it is known that all envelopes of the subdomains are schlicht.
Alternatively, it is easy to see that {\bf (a)} and {\bf (b)} are equivalent,
again because of schlichtness of the envelopes.
To verify {\bf (a)}, consider again the Gaussian functions
\[
f_{\zeta}(z)=\exp\big((z_1-\zeta_1)^2+(z_2-\zeta_2)^2\big).
\]
For $R_0$ fixed, a direct verification very similar to what is done in Example \ref{JPexp}
shows that
$\max_{K_R}|f_{\zeta}|<c<1$ holds for all $\zeta =(\zeta_1,\zeta_2)\in K_x+i\overline{B}_{R_0}$,
provided $R$ is sufficiently large.
Since $f_{\zeta}(\zeta)=1$, we obtain {\bf (a)}. \qed
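For the reader's orientation, the estimate behind {\bf (a)} can be sketched as follows (constants are not optimised). Writing $\zeta=\xi+i\eta$ with $\xi\in K_x$, $|\eta|\leq R_0$, and $d=\mathrm{diam}(K_x)$, we have
\[
|f_\zeta(z)|=e^{|x-\xi|^2-|y-\eta|^2}\leq e^{d^2-(|y|-R_0)^2}
\quad\mbox{ for }z=x+iy\in K_R\mbox{ with }|y|\geq R_0,
\]
and since $0\in Y$ forces $\min_{y\in\partial Y_R}|y|\rightarrow\infty$ as $R\rightarrow\infty$, the right-hand side tends to $0$ uniformly in $\zeta$.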
\end{document} |
\begin{document}
\title{Where does a random process hit a fractal barrier?}
\author{Itai Benjamini \hbox{ {\rm and} } Alexander Shamov}
\date{April 2016}
\maketitle
\begin{abstract}
Given a Brownian path $\beta(t)$ on ${\mathbb R}$, starting at $1$, a.s. there is a singular time set $T_{\beta}$, such that the first hitting time of $\beta$ by an independent
Brownian motion, starting at $0$, is in $T_{\beta}$ with probability one.
A couple of problems regarding hitting measure for random processes are presented.
\end{abstract}
\section{Introduction}
The study of harmonic (or hitting) measure for Brownian motion is a well developed subject with dramatic achievements and major problems which are still wide open, see ~\cite{GM}.
In this note we present a couple of problems regarding hitting measure for a wider class of random processes and obtain one result.
When does one-dimensional Brownian motion starting at $0$ hit an independent Brownian motion starting at $1$, which serves as the {\em barrier}?
We show that, conditioning on the barrier, a.s. with respect to the Wiener measure on barriers,
there is a singular time set (which is a function of the barrier only) that a.s. contains the first hitting time of the barrier.
\section{Random processes in the plane}
Let $\gamma$ be an unbounded one-sided curve in the Euclidean plane, and let $\Omega$ be a simply connected open bounded domain in the plane.
Reroot the origin of $\gamma$ at a uniformly chosen point of $\Omega$, and rotate $\gamma$ by an independent uniformly chosen angle around its root.
Look at the hitting point of this random translation and rotation of $\gamma$ on the boundary $\partial \Omega$ of the domain.
For every root in $\Omega$ the hitting point maps the uniform measure on directions $U[0,2\pi]$ to a measure on $\partial \Omega$.
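The hitting-point map can be made concrete in the simplest setting. The sketch below is an illustration only, under the assumption that $\gamma$ is a half-line and $\Omega$ is the unit disk (the function name is ours): it computes, for a given root and direction, the point where the rotated half-line first meets $\partial\Omega$.

```python
import math

def ray_exit_unit_disk(root, theta):
    """First intersection of the ray {root + t*(cos theta, sin theta), t > 0}
    with the unit circle, for a root inside the open unit disk."""
    x0, y0 = root
    dx, dy = math.cos(theta), math.sin(theta)
    # solve |root + t*d|^2 = 1, i.e. t^2 + 2*b*t + c = 0, for the positive root
    b = x0 * dx + y0 * dy
    c = x0 * x0 + y0 * y0 - 1.0   # c < 0 inside the disk, so t > 0 below
    t = -b + math.sqrt(b * b - c)
    return (x0 + t * dx, y0 + t * dy)
```

Pushing forward the uniform angle $\theta \sim U[0,2\pi]$ through this map, for a uniformly chosen root, gives the measure on $\partial\Omega$ that the conjecture below is about.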
\begin{conjecture}
For any $\gamma$ and $\Omega$, for almost every root, the corresponding measure on $\partial \Omega$ has two-dimensional Lebesgue measure $0$.
\end{conjecture}
Moreover,
\begin{question}
For any $\gamma$ and $\Omega$, for almost every root, does the corresponding measure on $\partial \Omega$ have Hausdorff dimension (at most) one?
\end{question}
It is of interest to prove even that the dimension drops below $2$, or, better, below the dimension of $\partial \Omega$ when it is strictly above $1$.
Obtaining the result for a restricted family of curves is also of interest.
If $\gamma$ is a Brownian path then Makarov's theorem \cite{Ma} gives an affirmative answer.
For partial results on this conjecture when $\gamma$ is a straight line see ~\cite{FF}.
\subsection{Simple random walks on discrete fractals}
By Makarov's theorem~\cite{Ma} (and Jones and Wolff~\cite{JW} for general domains) and its adaptation by Lawler~\cite{La} via coupling to simple random walk,
it is known that the dimension of the hitting measure for two-dimensional Brownian motion drops to (at most) $1$.
We therefore suspect that the dimension of the harmonic measure for simple random walk on self-similar planar fractals will also be at most $1$.
Here is a specific formulation.
\subsubsection{Sierpinski gasket}
\begin{figure}
\centering
\includegraphics[width=0.8\textwidth,natwidth=610,natheight=120]{S-gasket3.pdf}
\caption{The first three generations of the Sierpinski gasket graph sequence. \label{fig:S-gasket}}
\end{figure}
Consider a subset $S$ of the vertices in the $n$-th generation of the Sierpinski gasket graph sequence (see Figure~\ref{fig:S-gasket}).
\begin{question}
Show that the entropy of the hitting measure on $S$ for a simple random walk starting at the top vertex is at most $n$.
\end{question}
Note that in the $n$-th generation Sierpinski gasket graph the size of the bottom side is $2^{n-1}+1$, which we believe
realizes the largest entropy possible. (Entropy is in base $2$: $- \sum_i p_i \log_2 p_i$.)
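As a quick sanity check on the scale of the conjectured bound (a sketch only, not a simulation of the walk): the uniform measure on the $2^{n-1}+1$ bottom-side vertices has entropy $\log_2(2^{n-1}+1)$, which is below $n$ for every $n \geq 2$.

```python
import math

def entropy2(p):
    """Base-2 entropy -sum_i p_i log2 p_i of a probability vector."""
    return -sum(x * math.log2(x) for x in p if x > 0)

for n in range(2, 8):
    m = 2 ** (n - 1) + 1              # bottom-side vertices in generation n
    h = entropy2([1.0 / m] * m)       # uniform measure on the bottom side
    assert abs(h - math.log2(m)) < 1e-9
    assert h < n                      # consistent with the conjectured bound
```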
\subsection{Fractional BM}
Recall that the probability that Brownian motion in ${\mathbb R}^2$, starting at $(1,0)$, first hits the negative $x$-axis in $[-\varepsilon, 0]$ behaves like $\varepsilon^{1/2}$
as $\varepsilon$ goes to $0$.
We would like to have a natural statement along the lines that the rougher the process starting at $(1,0)$,
the larger the probability that it will first hit the negative $x$-axis near the tip.
E.g., if the process starting at $(1,0)$ is a two-dimensional fBM with Hurst parameter $H$,
then as $H$ decreases the probability that it hits the $\varepsilon$-tip grows (maybe it is about $\varepsilon^H$?).
One can ask a similar question for the graph of one-dimensional fBM and for SLE curves.
\section{Random process on the line}
\begin{theorem} \label{thm:Singularity}
Let $B$ and $W$ be independent standard Brownian motions on ${\mathbb R}$, and let $\sigma, c > 0$. Define $\tau$ to be the first time when $B$
hits the barrier $c + \sigma W$, i.e.
$$\tau := \inf\{t \mid B_t = c + \sigma W_t\}.$$
Then conditionally on $W$, the distribution of $\tau$ is almost surely singular to the Lebesgue measure.
\end{theorem}
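Before turning to the proof, here is a minimal Euler-scheme simulation of $\tau$ (an illustration only; the step size, horizon and function name are our choices, and the scheme only approximates the true hitting time):

```python
import random

def first_hit(c=1.0, sigma=1.0, dt=1e-4, horizon=10.0, seed=0):
    """Approximate tau = inf{t : B_t = c + sigma * W_t} by an Euler walk.

    Returns the discretized crossing time, or None if the barrier is
    not reached before `horizon`.
    """
    rng = random.Random(seed)
    b = w = t = 0.0
    step = dt ** 0.5                 # standard deviation of one increment
    while t < horizon:
        if b >= c + sigma * w:       # barrier crossed
            return t
        b += rng.gauss(0.0, step)    # increment of B
        w += rng.gauss(0.0, step)    # increment of W
        t += dt
    return None

tau = first_hit()
assert tau is None or 0.0 < tau <= 10.0
```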
In the proof we will make use of the following standard fact from measure theory.
\begin{proposition} \label{prop:Disintegration}
Let $M, N$ be probability measures on $X \times Y$, a product of standard Borel spaces. Consider the disintegration of $M, N$ with
respect to the $X$-variable (i.e. with respect to the canonical projection $X \times Y \to X$). We write it as follows:
$$M(dx, dy) = M_X(dx) M_{Y \mid X}(x, dy)$$
$$N(dx, dy) = N_X(dx) N_{Y \mid X}(x, dy)$$
where $M_X$ (resp. $N_X$) is the pushforward of $M$ (resp $N$) under $X \times Y \to X$, and $M_{Y \mid X}(x)$
(resp. $N_{Y \mid X}(x)$) is the corresponding conditional distribution of $y$ given $x$.
Assume that $M_X$ is equivalent (i.e. mutually absolutely continuous) to $N_X$. Then the following are equivalent:
\begin{enumerate}
\item $M$ is singular to $N$
\item $M_{Y \mid X}(x)$ is singular to $N_{Y \mid X}(x)$ for $M_X$-almost all $x$
\end{enumerate}
\end{proposition}
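A toy discrete instance of Proposition \ref{prop:Disintegration} (two-point spaces; the particular matrices are our choice for illustration): the $X$-marginals coincide, each pair of conditionals is mutually singular, and indeed the joint measures are mutually singular.

```python
# Joint measures on {0,1} x {0,1}; rows index x, columns index y.
M = [[0.5, 0.0],
     [0.5, 0.0]]   # M_{Y|X}(x) sits on y = 0 for both x
N = [[0.0, 0.5],
     [0.0, 0.5]]   # N_{Y|X}(x) sits on y = 1 for both x

def marginal_x(J):
    return [sum(row) for row in J]

# marginals on X agree (hence are equivalent) ...
assert marginal_x(M) == marginal_x(N)
# ... and the joints are mutually singular: their supports are disjoint
assert all(M[i][j] * N[i][j] == 0.0 for i in range(2) for j in range(2))
```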
Another fact we will need is the Bessel(3)-like behavior of the Brownian motion immediately before hitting a constant barrier.
This is an immediate consequence of Williams' Brownian path decomposition theorem (e.g. Theorem VII.4.9 in \cite{RY}).
\begin{proposition} \label{prop:Bessel3}
Consider a Brownian motion $B$ starting from $0$, and let $c>0$. Let $\tau$ be the hitting time $\tau := \inf\{t \mid B_t = c\}$. Then
for any $\varepsilon > 0$ the conditional distribution of $(c - B_{T-s})_{s=0}^{T-\varepsilon}$ conditioned on
$\tau = T > \varepsilon$ is equivalent to that of a Bessel(3) process starting from $0$ restricted to the time interval
$[0,T-\varepsilon]$.
\end{proposition}
\begin{proof}[Proof of Theorem \ref{thm:Singularity}]
Consider the random measure $\delta_\tau$ and $D := \mathbb{E}[\delta_\tau \mid W]$. The latter is exactly the conditional
distribution of $\tau$ given $W$. By Proposition \ref{prop:Disintegration} (applied to $X := \Omega$, $Y := {\mathbb R}$),
almost sure singularity of $D = D(\omega, dt)$ to the (deterministic) Lebesgue measure is equivalent to the singularity of
$\mathbb{P}(d\omega) D(\omega, dt)$ to $\mathbb{P}(d\omega) dt$. On the other hand, the two spaces $X$ and $Y$ in Proposition
\ref{prop:Disintegration} play symmetric roles, so instead one may disintegrate with respect to the $t \in {\mathbb R}$ variable.
More precisely, let $\Pi(t, d\omega)$ (resp. $\pi(t, d\omega)$) be the disintegration of $\mathbb{P}(d\omega) D(\omega, dt)$
(resp. $\mathbb{P}(d\omega) \delta_\tau(\omega, dt)$) with respect to $t$. Then the $\mathbb{P}$-almost sure singularity of $D$ with respect to the
Lebesgue measure is equivalent to the singularity of $\Pi(t)$ with respect to $\mathbb{P}$ for Lebesgue-almost all $t$. On the other hand, the
measures $\mathbb{P}(d\omega) D(\omega, dt)$ and $\mathbb{P}(d\omega) \delta_\tau(\omega, dt)$ agree when restricted to the $\sigma$-algebra
$\sigma(W) \otimes \mathrm{Borel}({\mathbb R})$; therefore, $\Pi(t)$ and $\pi(t)$ agree on $\sigma(W)$ for Lebesgue-almost all $t$. Since $D$ is
measurable with respect to $\sigma(W)$, it is enough to verify that $\pi(t)$ is singular to $\mathbb{P}$ when restricted to $\sigma(W)$.
Using Proposition \ref{prop:Bessel3} we can characterize $\pi(t)$ explicitly, at least up to equivalence. Indeed, the time when $B$
hits $c + \sigma W$ is exactly the time when
$$X := \frac{1}{\sqrt{1 + \sigma^2}} B - \frac{\sigma}{\sqrt{1 + \sigma^2}} W,$$
which is itself a standard Brownian motion under $\mathbb{P}$, hits the constant barrier $\tilde c := \frac{c}{\sqrt{1 + \sigma^2}}$. Thus by
Proposition \ref{prop:Bessel3}, the distribution of $\tilde c - X_{t - \cdot}$ under $\pi(t)$ is (locally) equivalent to Bessel(3).
On the other hand,
$$Y := \frac{\sigma}{\sqrt{1 + \sigma^2}} B + \frac{1}{\sqrt{1 + \sigma^2}} W$$
is $\mathbb{P}$-independent of $X$, and since $\tau$ is measurable with respect to $X$, the independent part $Y$ is not affected by our
change of measure. Thus under $\pi(t)$, $X$ and $Y$ are still independent, and $Y_{t - \cdot}$ remains (locally) equivalent to a
Brownian motion.
In order to prove the singularity result we only need the restriction of our measures to $\sigma(W)$. Since
$$W = -\frac{\sigma}{\sqrt{1+\sigma^2}} X + \frac{1}{\sqrt{1+\sigma^2}} Y,$$
we see that under $\pi(t)$, $W_{t - \cdot}$ is locally equivalent to a combination of a Bessel(3) process and an independent Brownian
motion. Under $\mathbb{P}$, however, it is locally a Brownian motion. Thus the problem reduces to proving that
the local behaviour at time zero of the sum of independent processes
$$U \sim \alpha \cdot \mathrm{BES(3)} + \sqrt{1 - \alpha^2} \cdot \mathrm{BM}$$
is almost surely distinguishable from that of $V \sim \mathrm{BM}$, where $\alpha = -\frac{\sigma}{\sqrt{1+\sigma^2}} < 0$.
This can be achieved by, say, noting that these processes satisfy laws of the iterated logarithm with different almost sure constants.
Namely,
$$\limsup_{s \to 0} \frac{V_s}{\sqrt{2 s \log \log (1/s)}} = 1$$
$$\limsup_{s \to 0} \frac{U_s}{\sqrt{2 s \log \log (1/s)}} \le \sqrt{1 - \alpha^2} < 1$$
\end{proof}
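The change of variables in the proof is a plane rotation, and its two key identities can be checked mechanically. The numerical sketch below (the sampling ranges are arbitrary) verifies that $B$ hits $c+\sigma W$ exactly when $X$ hits $\tilde c = c/\sqrt{1+\sigma^2}$, and that $W = -\frac{\sigma}{\sqrt{1+\sigma^2}}X + \frac{1}{\sqrt{1+\sigma^2}}Y$:

```python
import math
import random

def check_rotation(sigma, B, W, c, tol=1e-12):
    """Verify the two identities behind the change of variables."""
    s = math.sqrt(1.0 + sigma * sigma)
    X = (B - sigma * W) / s
    Y = (sigma * B + W) / s
    # B - (c + sigma*W) and X - c/s differ by the positive factor s
    barrier_ok = abs((X - c / s) - (B - c - sigma * W) / s) < tol
    # W is recovered from (X, Y) as in the proof
    recover_ok = abs(W - (-sigma / s * X + Y / s)) < tol
    return barrier_ok and recover_ok

rng = random.Random(1)
assert all(check_rotation(rng.uniform(0.1, 5.0), rng.gauss(0, 1),
                          rng.gauss(0, 1), rng.uniform(0.1, 3.0))
           for _ in range(100))
```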
\begin{question}
Study this phenomenon for a larger class of barriers, e.g. iterated function systems.
Give sharper bounds on the dimension of the set which a.s. contains the hitting time.
\end{question}
To study this for iterated function systems, we need a uniform bound on the Radon--Nikodym
derivative of the harmonic measure with respect to the uniform measure, at all scales.
Here is a formulation of this problem for {\em random fields}. Consider a function from ${\mathbb R}^d$ to ${\mathbb R}^n$ as a barrier, and look at when a random field indexed by ${\mathbb R}^d$
hits the barrier, where the hitting index is defined, say, as the index with the smallest $L_2$ norm.
\section{Further comments}
\begin{itemize}
\item{\em Bourgain's proof}
Bourgain~\cite{Bo} proved a dimension drop result for Brownian motion in ${\mathbb R}^d$ for any $d$.
Two properties of BM are used in the clever argument, a {\em uniform Harnack inequality}
at all scales and the {\em Markov property}, to get independence between scales.
These two properties hold for a wider set of processes in a larger set of spaces (e.g. Brownian motion on nilpotent groups and on fractals).
Weaker forms of these properties are also sufficient to get some drop.
\item{\em Random walk on graphs}
This note is concerned with harmonic measure in ``small spaces'' of dimension at most two.
See~\cite{BY} for a study of the hitting measure for simple random walk in the presence of a spectral gap:
on highly connected graphs such as expanders, simple random walk mixes fast and it is shown that it hits
the boundary of sets in a rather uniform way.
More involved behavior arises for graphs which are neither polynomial in the diameter nor expanders,
see ~\cite{BY}.
\item{\em Let's play}
Rules: each of the $k \geq 2$ players independently picks a unit-length path (not necessarily a segment) in the Euclidean plane that contains the origin.
Let $S$ be the union of all $k$ paths. Look at the harmonic measure from infinity on $S$.
The winner is the player whose path gets the maximal harmonic measure.
Is choosing a segment from the origin to a random point on the unit circle, independently by each of the players, a {\em Nash equilibrium}?
\end{itemize}
\noindent
\begin{thebibliography}{BKC}
\bibitem{BY}
I. Benjamini and A. Yadin.
Harmonic measure in the presence of a spectral gap.
Annales de l'Institut Henri Poincar\'e 52, 1050--1060, 2016.
\bibitem{Bo}
J. Bourgain. On the Hausdorff dimension of harmonic measure in higher dimension. Invent.
Math. 87, 477--483, 1987.
\bibitem{FF}
K. Falconer and J. Fraser.
The visible part of plane self-similar sets.
Proc. Amer. Math. Soc. 141, 269--278, 2013.
\bibitem{GM}
J. Garnett and D. Marshall. Harmonic measure, volume 2. Cambridge University Press,
2005.
\bibitem{JW}
P. Jones and T. Wolff. Hausdorff dimension of harmonic measures in the plane. Acta
Mathematica 161, 131--144, 1988.
\bibitem{La}
G. Lawler. A discrete analogue of a theorem of Makarov. Combin. Probab. Comput.
2, 181--199, 1993.
\bibitem{Ma}
N. Makarov. On the distortion of boundary sets under conformal mappings. Proc. London
Math. Soc. 3, 369--384, 1985.
\bibitem{RY}
D. Revuz and M. Yor. Continuous martingales and Brownian motion. Springer, 1999.
\end{thebibliography}
\end{document} |
\begin{document}
\title{Characterization of some convex curves\\
on the $3$-sphere}
\author{Em\'ilia Alves}
\maketitle
\begin{abstract}
In this paper we provide a characterization of a class of convex curves on the $3$-sphere.
More precisely, using a theorem that represents a locally convex curve on the $3$-sphere as a pair of curves in $\mathbb S^2$, one
of which is locally convex and the other an immersion, we are able to completely characterize a class of convex curves on the $3$-sphere.
\end{abstract}
\section{Introduction}
A curve $\gamma: [0,1] \rightarrow \mathbb S^n$ of class $C^k$ ($k \geq n$) is called \emph{locally convex} if
\[ \mathrm{det}(\gamma(t),\gamma'(t),\gamma''(t), \cdots, \gamma^{(n)}(t)) > 0 \]
for all $t$.
Therefore, a curve of class $C^k$, for $k \geq 2$, on the $2$-sphere is locally convex if its geodesic curvature is positive at every point.
Analogously, a curve of class $C^k$, for $k \geq 3$, on the $3$-sphere is locally convex if its geodesic torsion is always positive (see proposition~\ref{lccs3} for a proof).
Given a locally convex curve $\gamma:[0,1] \rightarrow \mathbb S^n$, we associate a \emph{Frenet frame curve} $ \mathcal{F_{\gamma}}:[0,1] \rightarrow \mathrm{SO}_{n+1}$ by applying the Gram-Schmidt orthonormalization to the $(n+1)$-vectors $(\gamma(t),\gamma'(t),\dots,\gamma^{(n)}(t))$.
Given $Q \in \mathrm{SO}_{n+1}$, we denote by ${\mathcal{L}\mathbb S^{n}}(Q)$ the set of all locally convex curves $\gamma:[0,1] \rightarrow \mathbb S^n$ such that $\mathcal{F}_\gamma(0)=I$ and $\mathcal{F}_\gamma(1)=Q$, where $I$ denotes the identity matrix.
The study of the spaces of locally convex curves on the $2$-sphere started with J. A. Little in $1970$. In~\cite{Lit70} he proved that the space ${\mathcal{L}\mathbb S^{2}}(I)$ has $3$ connected components: ${\mathcal{L}\mathbb S^{2}}(\mathbf{1}), {\mathcal{L}\mathbb S^{2}}(-\mathbf{1})_c$ and ${\mathcal{L}\mathbb S^{2}}(-\mathbf{1})_n$ (this notation will be clarified below). Here we denoted by ${\mathcal{L}\mathbb S^2}(-\mathbf{1})_n$ the component associated with non-convex curves whereas ${\mathcal{L}\mathbb S^2}(-\mathbf{1})_c$ denotes the component of convex curves~\cite{Fen29}, see Figure~\ref{Little}. Notice that this component is contractible~\cite{Ani98}.
\begin{figure}
\caption{Examples of curves in the components ${\mathcal{L}\mathbb S^{2}}(\mathbf{1})$, ${\mathcal{L}\mathbb S^{2}}(-\mathbf{1})_c$ and ${\mathcal{L}\mathbb S^{2}}(-\mathbf{1})_n$.
\label{Little}}
\end{figure}
For $n \geq 2$, the universal (double) cover of $\mathrm{SO}_{n+1}$ is the spin group $\mathrm{Spin}_{n+1}$; let
$\Pi_{n+1} : \mathrm{Spin}_{n+1} \rightarrow \mathrm{SO}_{n+1} $ be the natural projection.
Let us denote by $\mathbf{1}$ the identity element in $\mathrm{Spin}_{n+1}$, and by $-\mathbf{1}$ the unique non-trivial element in $\mathrm{Spin}_{n+1}$ such that $\Pi_{n+1}(-\mathbf{1})=I$.
Therefore, the Frenet frame curve $\mathcal{F_{\gamma}}:[0,1] \rightarrow \mathrm{SO}_{n+1}$ can be uniquely lifted to a continuous curve $\tilde{\mathcal{F}_{\gamma}}:[0,1] \rightarrow \mathrm{Spin}_{n+1}$ such that $\mathcal{F_{\gamma}}=\Pi_{n+1} \circ \tilde{\mathcal{F}_{\gamma}}$ and $\tilde{\mathcal{F}_{\gamma}}(0)=\mathbf{1}$. Given $z \in \mathrm{Spin}_{n+1}$, we denote by
${\mathcal{L}\mathbb S^{n}}(z)$ the set of curves
$\gamma \in {\mathcal{L}\mathbb S^{n}}(\Pi_{n+1}(z))$
for which $\tilde{\mathcal{F}_{\gamma}}(1)=z$.
Obviously, ${\mathcal{L}\mathbb S^{n}}(\Pi_{n+1}(z)) = {\mathcal{L}\mathbb S^{n}}(z)\sqcup {\mathcal{L}\mathbb S^{n}}(-z).$
Notice that $\mathcal{L}\mathbb S^n(z)$ turns out to be non-empty.
The motivation to study these spaces of curves comes from the realm of ordinary differential equations (ODE), since the space of locally convex curves on the $n$-sphere is deeply related to the study of linear ODEs of order $n+1$; see~\cite{Alv16},~\cite{BST03},~\cite{BST06},~\cite{BST009} and~\cite{SalTom05}. As we already mentioned, J. Little was the precursor of the study of the homotopy type of the spaces of locally convex curves on the $2$-sphere. After him, the study of these spaces on higher-dimensional spheres and also on related spaces (for example, in Euclidean space and in projective space) regained interest in the nineties; here we mention the work of B. Z. Shapiro, M. Z. Shapiro and B. A. Khesin, who determined the number of connected components of those spaces; see~\cite{SS91},~\cite{Sha93},~\cite{KS92} and~\cite{KS99}.
At this time, although the number of connected components of those spaces has
been completely understood, little information on the cohomology or higher homotopy groups was available,
even for $n=2$.
The homotopy type of the spaces of locally convex curves on the $2$-sphere was completely determined with the work~\cite{Sal13} of N. Saldanha, which followed important developments reported in~\cite{Sal09I} and \cite{Sal09II}.
Today this topic continues to attract the attention of several authors working on a variety of problems not just because of its topological richness, but also due to the number of spillovers it has across several disciplines of Mathematics and applications. These include, but are not limited to, symplectic geometry~\cite{Arn95}, control theory~\cite{RS90} and engineering~\cite{Dub57}.
In spite of the attention the topic has received, the homotopy type of the spaces $\mathcal{L}\mathbb S^n(z)$, $n \geq 3$ and $z \in \mathrm{Spin}_{n+1}$, remains an open problem.
Recently, in~\cite{AlvSal19}, the authors presented some partial results about the homotopy type of the spaces of locally convex curves on the $3$-sphere. In this paper we resort to some results obtained in~\cite{AlvSal19} so as to better understand these spaces of curves and refine the analysis of the homotopy type of $\mathcal{L}\mathbb S^3(z)$, $z \in \mathrm{Spin}_{4}$. For this, we revisit some concepts and then state our contributions.
It is well known that $\mathrm{Spin}_{4}$ can be identified with $\mathbb S^3 \times \mathbb S^3$.
So, given $z=(z_l,z_r) \in \mathbb S^3 \times \mathbb S^3$
(where $l$ and $r$ stand for \emph{left} and \emph{right})
we denote by $\mathcal{L}\mathbb S^3(z_l,z_r)$ the space of locally convex curves in $\mathbb S^3$ with the initial and final lifted Frenet frame respectively $(\mathbf{1},\mathbf{1})$ and $(z_l,z_r),$ i.e.,
\[ \mathcal{L}\mathbb S^3(z_l,z_r)=\{\gamma: [0,1] \rightarrow \mathbb S^3 \; | \, \tilde{\mathcal{F}}_\gamma(0)=(\mathbf{1},\mathbf{1}) \; \text{and} \; \tilde{\mathcal{F}}_\gamma(1)=(z_l,z_r) \}. \]
In~\cite{AlvSal19} it was proved that every locally convex curve $\gamma \in \mathcal{L}\mathbb S^3(z_l,z_r)$ can be represented as a pair of curves on the $2$-sphere, $\gamma_l$ and $\gamma_r$, where $\gamma_l$ is locally convex and $\gamma_r$ is merely an immersion. A locally convex curve in $\mathbb S^3$ is rather hard to describe from a geometrical point of view, and the above result allows us to understand such a curve as a pair of curves in $\mathbb S^2$, a situation where geometrical intuition is easier.
If $\gamma \in \mathcal{L}\mathbb S^3(z_l,z_r)$ is such that its left part $\gamma_l \in \mathcal{L}\mathbb S^2(z_l)$ is convex, then $\gamma$ is convex (Proposition 5.7 in~\cite{AlvSal19}), but in general the converse is false.
Yet there are still some spaces in which one can characterize completely the convexity of $\gamma$ by merely considering its left part.
The first space in which we have such a characterization is $\mathcal{L}\mathbb S^3(-\mathbf 1,\mathbf k)$.
\begin{theo}\label{th2}
A curve $\gamma \in \mathcal{L}\mathbb S^3(-\mathbf 1,\mathbf k)$ is convex if and only if its left part $\gamma_l \in \mathcal{L}\mathbb S^2(-\mathbf 1)$ is convex.
\end{theo}
The second space is $\mathcal{L}\mathbb S^3(\mathbf 1,-\mathbf 1)$; however, for this space, we can only give a necessary condition for a curve to be convex, even though we believe that this condition is also sufficient.
\begin{theo}\label{th3}
Assume that $\gamma \in \mathcal{L}\mathbb S^3(\mathbf 1,-\mathbf 1)$ is convex. Then its left part $\gamma_l \in \mathcal{L}\mathbb S^2(\mathbf 1)$ is contained in an open hemisphere and its rotation number is equal to $2$.
\end{theo}
The remainder of this work is structured as follows.
Section~\ref{preliminaries} is devoted to some algebraic preliminaries, fundamental in this work.
In Section~\ref{section3} we present some basic definitions and properties of locally convex curves.
In Subsection~\ref{s33} we define a large space of curves, the space of generic curves, and properly define the Frenet frame curve associated with a locally convex curve and to a generic curve.
Finally in Subsection~\ref{convexcurves} we define globally convex curves, which are of fundamental importance in the study
of locally convex curves and in this work.
In Section~\ref{examples} we will give some examples of locally convex curves on the $3$-sphere that will be fundamental in the proofs of Theorem~\ref{th2} and Theorem~\ref{th3}: for this, we will review some definitions and results that are contained in~\cite{AlvSal19}.
In Section~\ref{sectionproofs} we finally give the proofs of Theorem~\ref{th2} and Theorem~\ref{th3}, which characterize convexity in the spaces $\mathcal{L}\mathbb S^3(-\mathbf 1,\mathbf k)$ and $\mathcal{L}\mathbb S^3(\mathbf 1,-\mathbf 1)$, respectively.
In Subsection~\ref{s64} there is a precise definition of a curve contained in an open hemisphere and its rotation number.
\noindent{\bf Acknowledgements:} The author is grateful to Nicolau C. Saldanha for helpful conversations and
to CAPES, CNPq, FAPERJ and PUC-Rio for the financial support.
\section{Algebraic Preliminaries} \label{preliminaries}
In this section we briefly review some algebraic concepts: first we recall the definition and some properties about the spin groups. Then we review the decomposition of the orthogonal groups $\mathrm{SO}_{n+1}$ and the spin groups $\mathrm{Spin}_{n+1}$ in the (signed) Bruhat cells.
\subsection{Spin Groups}
For $n \geq 2$, the universal (double) cover of $\mathrm{SO}_{n+1}$ is the spin group $\mathrm{Spin}_{n+1}$; let
$\Pi_{n+1} : \mathrm{Spin}_{n+1} \rightarrow \mathrm{SO}_{n+1} $ be the natural projection.
The group $\mathrm{Spin}_{n+1}$ is therefore a simply connected Lie group, which is also compact, and it has the same Lie algebra (and hence the same dimension) as $\mathrm{SO}_{n+1}$.
Let us denote by $\mathbf 1$ the identity element in $\mathrm{Spin}_{n+1}$, and by $-\mathbf{1}$ the unique non-trivial element in $\mathrm{Spin}_{n+1}$ such that $\Pi_{n+1}(-\mathbf 1)=I$.
We will give a brief description of $\mathrm{Spin}_{n+1}$ in the cases $n=2$ and $n=3$ since it will be fundamental in this work. It is well known that $\mathrm{Spin}_3 \simeq \mathbb S^3$ and $\mathrm{Spin}_4 \simeq \mathbb S^3 \times \mathbb S^3$.
First, let us recall the definition of the algebra of quaternions:
\[ \mathbb{H}:=\{a\mathbf 1+b\mathbf i+c\mathbf j+d\mathbf k \; | \; (a,b,c,d) \in \mathbb{R}^4\}, \]
where $\mathbf 1 \in \mathbb{H}$ is the multiplicative unit, and $\mathbf i,\mathbf j,\mathbf k$ satisfy the product rules
$ \mathbf i^2=\mathbf j^2=\mathbf k^2=\mathbf i\mathbf j\mathbf k=-\mathbf 1$.
As a real vector space, $ \mathbb{H}$ is isomorphic to $\mathbb{R}^4$; hence one can define a Euclidean norm on $\mathbb{H}$, and the set of quaternions with unit norm, $U(\mathbb{H})$, can be naturally identified with $\mathbb S^3$. The space of imaginary quaternions (i.e., of real part $0$) is naturally identified with $\mathbb{R}^3$.
The natural projection $\Pi_3 : \mathrm{Spin}_3 \rightarrow \mathrm{SO}_3$ is given by $\Pi_3(z)h = zh\bar{z}$, for any $h \in \mathbb{R}^3$.
The natural projection $\Pi_4 : \mathrm{Spin}_4 \rightarrow \mathrm{SO}_4$ is given by $\Pi_4(z_l,z_r)q = z_lq\bar{z_r}$, for any $q \in \mathbb{R}^4$. For a matrix-notation approach to these objects we refer the reader to~\cite[Subsection $2.1$]{AlvSal19}.
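A minimal quaternion-arithmetic sketch (coordinates ordered $(a,b,c,d)$ for $a\mathbf 1+b\mathbf i+c\mathbf j+d\mathbf k$; an illustration only, not the matrix formalism of~\cite{AlvSal19}) showing that conjugation $h \mapsto zh\bar z$ by a unit quaternion preserves imaginary quaternions and their norm:

```python
import math

def qmul(p, q):
    """Hamilton product of quaternions given as (a, b, c, d) tuples."""
    a, b, c, d = p
    e, f, g, h = q
    return (a*e - b*f - c*g - d*h,
            a*f + b*e + c*h - d*g,
            a*g - b*h + c*e + d*f,
            a*h + b*g - c*f + d*e)

def qconj(q):
    a, b, c, d = q
    return (a, -b, -c, -d)

def rotate(z, h):
    """Pi_3(z) applied to an imaginary quaternion h: z * h * conj(z)."""
    return qmul(qmul(z, h), qconj(z))

theta = 0.3
z = (math.cos(theta), math.sin(theta), 0.0, 0.0)    # unit quaternion
h = (0.0, 0.0, 1.0, 0.0)                            # the imaginary unit j
r = rotate(z, h)
assert abs(r[0]) < 1e-12                            # still imaginary
assert abs(sum(x * x for x in r) - 1.0) < 1e-12     # norm preserved
# conjugation by cos(t) + sin(t) i rotates the (j, k)-plane by angle 2t
assert abs(r[2] - math.cos(2 * theta)) < 1e-12
assert abs(r[3] - math.sin(2 * theta)) < 1e-12
```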
\subsection{Bruhat cells and the Coxeter-Weyl Group} \label{BruhatDecomp}
We denote by $\mathrm{Up}^+_{n+1}$ the group of upper triangular matrices with positive diagonal entries and by $\mathrm{Up}^1_{n+1} \subset \mathrm{Up}^+_{n+1}$ the subgroup of upper triangular matrices with diagonal entries equal to one.
The map $B : \mathrm{Up}^1_{n+1} \times \mathrm{SO}_{n+1} \rightarrow \mathrm{SO}_{n+1}$ defined by
$ B(U,Q)=UQU', $
where $U'$ is the unique matrix in $\mathrm{Up}^+_{n+1}$ such that $UQU' \in \mathrm{SO}_{n+1}$, is called the \emph{Bruhat action}.
The Bruhat action is clearly a group action of $\mathrm{Up}^1_{n+1}$ on $\mathrm{SO}_{n+1}$,
and we call its finitely many orbits the \emph{(signed) Bruhat cells}.
It turns out that two matrices $Q \in \mathrm{SO}_{n+1}$ and $Q' \in \mathrm{SO}_{n+1}$ belong to the same Bruhat cell if and only if there exist $U$ and $U'$ in $\mathrm{Up}^+_{n+1}$ such that $Q'=UQU'$.
In other words, given $Q \in \mathrm{SO}_{n+1}$, the Bruhat cell of $Q$ is the set of matrices $UQU' \in \mathrm{SO}_{n+1}$, where $U$ and $U'$ belong to $\mathrm{Up}^+_{n+1}$. We denote by $\mathrm{Bru}_Q$ the \emph{(signed) Bruhat cell} of $Q \in \mathrm{SO}_{n+1}$.
Let $\mathrm{B}_{n+1} \subset \mathrm{O}_{n+1}$ be the Coxeter--Weyl group of signed permutation matrices,
so $|\mathrm{B}_{n+1}| = 2^{n+1}(n+1)!$.
Let $\mathrm{B}_{n+1}^+ = \mathrm{B}_{n+1} \cap \mathrm{SO}_{n+1}$ be the group of signed permutation matrices of determinant one, so $|\mathrm{B}_{n+1}^+| = 2^{n}(n+1)!$.
Given a signed permutation matrix in $\mathrm{B}_{n+1}^+$, we can drop the signs of its entries and thus define a homomorphism from $\mathrm{B}_{n+1}^+$ to the symmetric group $S_{n+1}$.
Each Bruhat cell contains a unique signed permutation matrix $P \in \mathrm{B}_{n+1}^+$; it follows that any two Bruhat cells associated with two distinct signed permutation matrices are disjoint.
Therefore we have the Bruhat decomposition of $\mathrm{SO}_{n+1}$:
\[ \mathrm{SO}_{n+1}=\bigsqcup_{P \in \mathrm{B}_{n+1}^+}\mathrm{Bru}_P, \]
and there are $2^n(n+1)!$ different Bruhat cells.
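The count $2^n(n+1)!$ can be confirmed by brute force for small $n$ (a sketch; it uses that the determinant of a signed permutation matrix is the sign of the underlying permutation times the product of its entry signs):

```python
import itertools
import math

def det_one_signed_perms(m):
    """Number of m x m signed permutation matrices of determinant one."""
    count = 0
    for perm in itertools.permutations(range(m)):
        # sign of the permutation via its inversion count
        inv = sum(1 for i in range(m) for j in range(i + 1, m)
                  if perm[i] > perm[j])
        perm_sign = -1 if inv % 2 else 1
        for signs in itertools.product((1, -1), repeat=m):
            prod = 1
            for s in signs:
                prod *= s
            if perm_sign * prod == 1:   # determinant equals +1
                count += 1
    return count

for n in (1, 2, 3):
    assert det_one_signed_perms(n + 1) == 2 ** n * math.factorial(n + 1)
```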
Since the group $\mathrm{Up}_{n+1}^1$ is contractible, its Bruhat action on $\mathrm{SO}_{n+1}$ lifts to a Bruhat action on $\mathrm{Spin}_{n+1}$ that, for simplicity, we still denote by $B : \mathrm{Up}_{n+1}^1 \times \mathrm{Spin}_{n+1} \rightarrow \mathrm{Spin}_{n+1}$.
As before, the Bruhat cells on $\mathrm{Spin}_{n+1}$ are the orbits of the Bruhat action.
Let us denote by $\tilde{\mathrm{B}}_{n+1}^+ = \Pi_{n+1}^{-1}(\mathrm{B}_{n+1}^+) \subset \mathrm{Spin}_{n+1}$, so $|\tilde{\mathrm{B}}_{n+1}^+| = 2^{n+1}(n+1)!$. Let $z \in \mathrm{Spin_{n+1}}$; we define the \emph{Bruhat cell} $\mathrm{Bru}_z$ as the connected component of $\Pi_{n+1}^{-1}(\mathrm{Bru}_{\Pi_{n+1}(z)})$ which contains $z$.
Obviously $\Pi_{n+1}^{-1}(\mathrm{Bru}_{\Pi_{n+1}(z)}) = \mathrm{Bru}_z \sqcup \mathrm{Bru}_{-z}$, where each set $\mathrm{Bru}_{z}$, $\mathrm{Bru}_{-z}$ is contractible and non-empty.
Therefore the Bruhat decomposition of $\mathrm{SO}_{n+1}$ can be lifted to the universal cover $\Pi_{n+1}: \mathrm{Spin}_{n+1} \rightarrow \mathrm{SO}_{n+1}$ and we have the Bruhat decomposition for $\mathrm{Spin}_{n+1}$:
\[ \mathrm{Spin}_{n+1}=\bigsqcup_{\tilde{P} \in \tilde{\mathrm{B}}_{n+1}^+}\mathrm{Bru}_{\tilde{P}}, \]
and there are $2^{n+1}(n+1)!$ disjoint Bruhat cells in $\mathrm{Spin}_{n+1}$.
\section{Basic Definitions and Properties} \label{section3}
In this section we gather basic notions and properties on the spaces of curves under analysis.
In what follows, we define a large space of curves (the space of generic curves), then we characterize locally convex curves on the $3$-sphere and finally we define globally convex curves.
\subsection{Frenet frame curves and Generic curves}\label{s33}
Consider a locally convex curve $\gamma:[0,1] \rightarrow \mathbb S^n$.
Applying Gram-Schmidt orthonormalization to the $(n+1)$-vectors $(\gamma(t),\gamma'(t),\dots,\gamma^{(n)}(t))$,
there exist a unique $\mathcal{F}_\gamma(t) \in \mathrm{SO}_{n+1}$ and a unique $R_\gamma(t) \in \mathrm{Up}_{n+1}^+$ such that
\begin{equation}\label{frenet_gram}
(\gamma(t),\gamma'(t),\dots,\gamma^{(n)}(t))=\mathcal{F_{\gamma}}(t) R_\gamma(t),
\end{equation}
where we recall that $\mathrm{Up}_{n+1}^+$ is the space of upper triangular matrices with positive diagonal entries and real coefficients.
The curve $\mathcal{F_{\gamma}}:[0,1] \rightarrow \mathrm{SO}_{n+1}$ defined by~\eqref{frenet_gram} is called the \emph{Frenet frame curve} of the locally convex curve $\gamma:[0,1] \rightarrow \mathbb S^n$.
Notice that the definition of Frenet frame curve associated with a locally convex curve $\gamma$ on the $n$-sphere does not depend on the choice of a positive (or orientation preserving) reparametrization: indeed a computation using the chain rule shows that
\[ (\gamma \circ \phi(t),(\gamma \circ \phi)'(t),\dots,(\gamma \circ \phi)^{(n)}(t))=(\gamma(\tau),\gamma'(\tau),\dots,\gamma^{(n)}(\tau))U, \]
where $\tau=\phi(t)$ is a positive reparametrization (i.e., the sign of $\phi'(t)$ is positive), and where $U$ is an upper triangular matrix whose diagonal is given by $(1,\phi'(t), \dots,(\phi'(t))^n)$. So $U \in \mathrm{Up}_{n+1}^+$, which implies that $\mathcal{F_{\gamma \circ \phi}}(t)=\mathcal{F_{\gamma}}(\tau)$.
We denote by ${\mathcal{L}\mathbb S^{n}}$ the set of all locally convex curves $\gamma : [0,1] \rightarrow \mathbb S^{n}$ such that $\mathcal{F}_\gamma(0)=I$. Obviously, ${\mathcal{L}\mathbb S^{n}}(Q) \subset {\mathcal{L}\mathbb S^{n}}$.
Many authors have already discussed the topological structure of the spaces $\mathcal{L}\mathbb S^n(Q)$.
It is well known that the different topological structures used give spaces which are homotopically equivalent.
Therefore, we will consider that our curves are smooth. We notice, however, that even if juxtaposition of curves jeopardizes smoothness, there is no loss of generality in assuming so.
For more about this discussion we refer the reader to~\cite{SS12},~\cite{Sal13} or~\cite{Alv16}.
Even though we will be mainly interested in locally convex curves, it will
be useful in the sequel to consider a larger space of curves.
A curve $\gamma: [0,1] \rightarrow \mathbb S^n$ of class $C^k$ ($k \geq n$) is called \emph{generic} if
the vectors $\gamma(t), \gamma'(t),\gamma''(t),\dots,\gamma^{(n-1)}(t)$ are linearly independent for all $t \in [0,1]$.
Given $\gamma$ a generic curve on the $n$-sphere we can still define its Frenet frame curve.
In fact, by applying Gram-Schmidt orthonormalization to the linearly independent $n$-vectors $\gamma(t),\gamma'(t),\dots,\gamma^{(n-1)}(t)$ we obtain $n$ orthonormal vectors $u_0(t),u_1(t), \dots, u_{n-1}(t)$ and then, there is a unique vector $u_n(t)$ for which $u_0(t), u_1(t), \dots, u_{n-1}(t), u_n(t)$ is a positive orthonormal basis.
We may thus set
\begin{equation}\label{frenet_gram2}
\mathcal{F_{\gamma}}(t)=(u_0(t),u_1(t), \dots, u_{n-1}(t), u_n(t)) \in \mathrm{SO}_{n+1}
\end{equation}
and make the following more general definition. The curve $\mathcal{F_{\gamma}}:[0,1] \rightarrow \mathrm{SO}_{n+1}$ defined by~\eqref{frenet_gram2} is called the \emph{Frenet frame curve} of the generic curve $\gamma:[0,1] \rightarrow \mathbb S^n$.
Clearly, the latter definition coincides with the former when $\gamma$ is locally convex.
Given $Q \in \mathrm{SO}_{n+1}$, we denote by $\mathcal{G}\mathbb S^n(Q)$ the set of all generic curves $\gamma:[0,1] \rightarrow \mathbb S^{n}$ such that $\mathcal{F_{\gamma}}(0)=I$ and $\mathcal{F_{\gamma}}(1)=Q$. For $z \in \mathrm{Spin}_{n+1}$, we define ${\mathcal{G}\mathbb S^{n}}(z)$ as the subset of ${\mathcal{G}\mathbb S^{n}}(\Pi_{n+1}(z))$ consisting of those curves for which $\tilde{\mathcal{F}}_{\gamma}(1)=z$.
Obviously, $\mathcal{L}\mathbb S^n(Q) \subset \mathcal{G}\mathbb S^n(Q)$ and $\mathcal{L}\mathbb S^n(z) \subset \mathcal{G}\mathbb S^n(z)$.
The homotopy type of the spaces $\mathcal{G}\mathbb S^{n}(z)$, $z \in \mathrm{Spin}_{n+1}$, is well-known. Let us define $\Omega \mathrm{Spin}_{n+1}(z)$ to be the space of all continuous curves $\alpha : [0,1] \rightarrow \mathrm{Spin}_{n+1}$ with $\alpha(0)=\mathbf{1}$ and $\alpha(1)=z$. It is well understood that different values of $z \in \mathrm{Spin}_{n+1}$ do not change the space $\Omega \mathrm{Spin}_{n+1}(z)$ up to homeomorphism; therefore we will usually drop $z$ from the notation and write $\Omega \mathrm{Spin}_{n+1}$ instead of $\Omega \mathrm{Spin}_{n+1}(z)$. It follows from the works of Hirsch and Smale (\cite{Hir59} and \cite{Sma59b}) that the \emph{Frenet frame injection} $\tilde{\mathcal{F}}: \mathcal{G}\mathbb S^{n}(z) \rightarrow \Omega \mathrm{Spin}_{n+1}$ defined by $(\tilde{\mathcal{F}}(\gamma))(t) = \tilde{\mathcal{F}}_\gamma(t)$ is a homotopy equivalence (see Subsection $5.2$ of~\cite{AlvSal19} for more on this).
Let us look at the special case where $\gamma$ is a generic curve on the $2$-sphere, i.e., $\gamma$ is an immersion.
Let us denote by $\mathbf{t}_\gamma(t)$ the unit tangent vector of $\gamma$ at the point $\gamma(t)$, that is
$\mathbf{t}_\gamma(t):=\frac{\gamma'(t)}{||\gamma'(t)||} \in \mathbb S^2,$
and by $\mathbf{n}_\gamma(t)$ the unit normal vector of $\gamma$ at the point $\gamma(t)$, that is
$ \mathbf{n}_\gamma(t):= \gamma(t) \times \mathbf{t}_\gamma(t), $
where $\times$ is the cross product in $\mathbb{R}^3$. We then have
$ \mathcal{F_{\gamma}}(t)=(\gamma(t),\mathbf{t}_\gamma(t),\mathbf{n}_\gamma(t)) \in \mathrm{SO}_3, $
where $\mathbf{t}_\gamma(t)$ is the unit tangent and $\mathbf{n}_\gamma(t)$ the unit normal we defined above.
The geodesic curvature $\kappa_\gamma(t)$ is by definition
$ \kappa_\gamma(t):=\mathbf{t}_\gamma'(t)\cdot\mathbf{n}_\gamma(t), $
where $\cdot$ is the Euclidean inner product. The following proposition gives a geometric characterization of locally convex curves in $\mathbb S^2$.
\begin{proposition}
A generic curve $\gamma:[0,1] \rightarrow \mathbb S^2$ is locally convex if and only if $\kappa_\gamma(t)> 0$ for all $t \in (0,1)$.
\end{proposition}
\begin{proof}
For a proof see Proposition 18 in~\cite{Alv16}.
\end{proof}
Let us now consider a generic curve $\gamma$ on the $3$-sphere and let $e_1,e_2,e_3,e_4$ be the canonical basis of $\mathbb{R}^4$.
It is clear that
$ \mathcal{F}_\gamma(t)e_1=\gamma(t), \quad \mathcal{F}_\gamma(t)e_2=\mathbf{t}_\gamma(t)=\frac{\gamma'(t)}{||\gamma'(t)||}. $
We define the unit normal $\mathbf{n}_\gamma(t)$ and binormal $\mathbf{b}_\gamma(t)$ by the formulas
$ \mathbf{n}_\gamma(t)=\mathcal{F}_\gamma(t)e_3, \, \mathbf{b}_\gamma(t)=\mathcal{F}_\gamma(t)e_4 $
so that
\[ \mathcal{F_{\gamma}}(t)=(\gamma(t),\mathbf{t}_\gamma(t),\mathbf{n}_\gamma(t),\mathbf{b}_\gamma(t)) \in \mathrm{SO}_4. \]
The geodesic curvature $\kappa_\gamma(t)$ is still defined by
$ \kappa_\gamma(t):=\mathbf{t}_\gamma'(t)\cdot\mathbf{n}_\gamma(t), $
but we further define the geodesic torsion $\tau_{\gamma}(t)$ by
$ \tau_\gamma(t):=-\mathbf{b}_\gamma'(t)\cdot\mathbf{n}_\gamma(t). $
Since $\gamma$ is generic, the vectors $\gamma(t),\gamma'(t),\gamma''(t)$ are linearly independent, so the geodesic curvature is never zero. We can then characterize locally convex curves in $\mathbb S^3$.
\begin{proposition} \label{lccs3}
A generic curve $\gamma:[0,1] \rightarrow \mathbb S^3$ is locally convex if and only if $\tau_\gamma(t)> 0$ for all $t \in (0,1)$.
\end{proposition}
\begin{proof}
Without loss of generality, we may assume that $\gamma$ is parametrized by arc-length. Then, as before, $\mathbf{t}_\gamma(t)=\gamma'(t)$ and $\kappa_\gamma(t)=\gamma''(t)\cdot\mathbf{n}_\gamma(t)$. Also
$ \gamma''(t)=-\gamma(t)+\kappa_\gamma(t)\mathbf{n}_\gamma(t) $
and hence
$ \gamma'''(t)=-\gamma'(t)+\kappa_\gamma'(t)\mathbf{n}_\gamma(t)+\kappa_\gamma(t)\mathbf{n}_\gamma'(t). $
Since $\mathbf{b}_\gamma(t)\cdot \mathbf{n}_\gamma(t)=0$, we have
$ \tau_\gamma(t)=-\mathbf{b}_\gamma'(t)\cdot \mathbf{n}_\gamma(t)=\mathbf{b}_\gamma(t)\cdot \mathbf{n}'_\gamma(t). $
One then easily computes
\[ \gamma'''(t)\cdot\gamma(t)=0, \]
\[ \gamma'''(t)\cdot\gamma'(t)=-1-\kappa_\gamma(t)^2, \]
\[ \gamma'''(t)\cdot\mathbf{n}_\gamma(t)=\kappa_\gamma'(t), \]
\[ \gamma'''(t)\cdot\mathbf{b}_\gamma(t)=\kappa_\gamma(t)\tau_\gamma(t). \]
So we have the equality
$ (\gamma(t),\gamma'(t),\gamma''(t),\gamma'''(t))=\mathcal{F}_\gamma(t)R_\gamma(t) $
with
\[
R_\gamma(t)=
\begin{pmatrix}
1 & 0 & -1 & 0 \\
0 & 1 & 0 & -1-\kappa_\gamma(t)^2\\
0 & 0 & \kappa_\gamma(t) & \kappa_\gamma'(t) \\
0 & 0 & 0 & \kappa_\gamma(t)\tau_\gamma(t)
\end{pmatrix}. \]
Since $\mathcal{F}_\gamma(t)$ has determinant $1$ and $\kappa_\gamma(t)$ is never zero, we have
\[ \mathrm{det}(\gamma(t),\gamma'(t),\gamma''(t),\gamma'''(t))=\mathrm{det}R_\gamma(t)=\kappa_\gamma(t)^2\tau_\gamma(t) \]
and this proves the statement.
\end{proof}
We continue collecting preliminary notions and auxiliary results instrumental to our arguments.
Let $ \Gamma : [0,1] \rightarrow \mathrm{SO}_{n+1} $ be a smooth curve. The logarithmic derivative of $\Gamma$ is defined as
$ \Lambda(t)=(\Gamma(t))^{-1}\Gamma'(t) $, that is, $\Gamma'(t)=\Gamma(t)\Lambda(t). $
Notice that $\Lambda(t)$ belongs to the Lie algebra $\mathfrak{so}_{n+1}$ of $\mathrm{SO}_{n+1}$.
Therefore $\Lambda(t)$ is automatically a skew-symmetric matrix for all $t \in [0,1]$.
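As a quick numerical illustration (ours, not part of the original text), one can check the skew-symmetry of the logarithmic derivative on a simple curve of rotations; the helper names below are hypothetical.

```python
import math

def log_derivative(Gamma, t, h=1e-6):
    """Lambda(t) = Gamma(t)^{-1} Gamma'(t), with Gamma'(t) approximated by a
    centered finite difference; for Gamma(t) in SO_n the inverse is the transpose."""
    G, Gp, Gm = Gamma(t), Gamma(t + h), Gamma(t - h)
    n = len(G)
    dG = [[(Gp[i][j] - Gm[i][j]) / (2 * h) for j in range(n)] for i in range(n)]
    # Gamma(t)^T * Gamma'(t)
    return [[sum(G[k][i] * dG[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def rotation(t):
    # a simple curve in SO_2; its logarithmic derivative is [[0, -1], [1, 0]]
    c, s = math.cos(t), math.sin(t)
    return [[c, -s], [s, c]]
```

Here $\Gamma(t)$ is rotation by angle $t$ in $\mathrm{SO}_2$, whose logarithmic derivative is the constant skew-symmetric matrix with entries $\pm 1$ off the diagonal.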
Let $\mathfrak{J} \subset \mathfrak{so}_{n+1}$ be the set of Jacobi matrices, i.e., $\mathfrak{J}$ is the set of tridiagonal skew-symmetric matrices with positive subdiagonal entries, that is, matrices of the form
\[ \begin{pmatrix}
0 & -c_1 & 0 & \ldots & 0 \\
c_1 & 0 & -c_2 & & 0 \\
& \ddots & \ddots & \ddots & \\
0 & & c_{n-1} & 0 & -c_n \\
0 & & 0 & c_n & 0
\end{pmatrix}, \quad c_1>0, \dots, c_n>0. \]
The following definition is designed to characterize the Frenet frame curves associated with locally convex curves. Let us call a curve $\Gamma : [0,1] \rightarrow \mathrm{SO}_{n+1}$ \emph{Jacobian} if its logarithmic derivative $\Lambda(t)=(\Gamma(t))^{-1}\Gamma'(t)$ belongs to $\mathfrak{J}$ for all $t \in [0,1]$.
In~\cite{SS12}, a homeomorphism between the space of locally convex curves $\mathcal{L}\mathbb S^n$ and the space of Jacobian curves starting at the identity was introduced; more precisely:
\begin{proposition}\label{propjacobian}
Let $\Gamma : [0,1] \rightarrow \mathrm{SO}_{n+1}$ be a smooth curve with $\Gamma(0)=I$. Then $\Gamma$ is Jacobian if and only if there exists $\gamma \in \mathcal{L}\mathbb S^n$ such that $\mathcal{F}_\gamma=\Gamma$.
\end{proposition}
Lemma 2.1 in~\cite{SS12} gives this correspondence explicitly:
given a Jacobian curve $\Gamma$ with $\Gamma(0)=I$, the curve $\gamma_\Gamma$ defined by $\gamma_\Gamma(t)=\Gamma(t)e_1$ belongs to $\mathcal{L}\mathbb S^n$; conversely, given $\gamma \in \mathcal{L}\mathbb S^n$, its Frenet frame curve is a Jacobian curve.
Notice that the Frenet frame curve $\mathcal{F}_\gamma$ uniquely determines the curve $\gamma$.
As an illustration, let us describe the Frenet frame curves of locally convex curves on the $3$-sphere, which will be useful in this work.
If $\gamma : [0,1] \rightarrow \mathbb S^3$ is locally convex, then
$ \mathcal{F_{\gamma}}(t)=(\gamma(t),\mathbf{t}_\gamma(t),\mathbf{n}_\gamma(t),\mathbf{b}_\gamma(t)) \in \mathrm{SO}_4 $
and one gets
\begin{equation}\label{logderives3}
\Lambda_\gamma(t)=
\begin{pmatrix}
0 & -||\gamma'(t)|| & 0 & 0 \\
||\gamma'(t)|| & 0 & -||\gamma'(t)||\kappa_\gamma(t) & 0 \\
0 & ||\gamma'(t)||\kappa_\gamma(t) & 0 & -||\gamma'(t)||\tau_\gamma(t) \\
0 & 0 & ||\gamma'(t)||\tau_\gamma(t) & 0
\end{pmatrix}.
\end{equation}
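The determinant identity $\det(\gamma,\gamma',\gamma'',\gamma''')=\kappa_\gamma^2\tau_\gamma$ from the proof of Proposition~\ref{lccs3} can be checked directly for curves with a constant logarithmic derivative of the form~\eqref{logderives3} with $||\gamma'||=1$: writing $\gamma(t)=\Gamma(t)e_1$ with $\Gamma(t)=\exp(t\Lambda)$, we have $\gamma^{(k)}(t)=\Gamma(t)\Lambda^k e_1$ and $\det\Gamma(t)=1$, so the determinant equals $\det[e_1,\Lambda e_1,\Lambda^2 e_1,\Lambda^3 e_1]$. A small Python sketch (our own helper names, not from the paper):

```python
def jacobi_matrix(c1, c2, c3):
    # constant tridiagonal skew-symmetric logarithmic derivative
    return [[0.0, -c1, 0.0, 0.0],
            [c1, 0.0, -c2, 0.0],
            [0.0, c2, 0.0, -c3],
            [0.0, 0.0, c3, 0.0]]

def matvec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def det(M):
    # cofactor expansion along the first row
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * a * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j, a in enumerate(M[0]))

def frenet_determinant(kappa, tau):
    """det(gamma, gamma', gamma'', gamma''') for gamma(t) = exp(t*Lambda)e_1,
    unit speed (c1 = 1): reduces to det[e_1, L e_1, L^2 e_1, L^3 e_1]."""
    L = jacobi_matrix(1.0, kappa, tau)
    cols, v = [], [1.0, 0.0, 0.0, 0.0]
    for _ in range(4):
        cols.append(v)
        v = matvec(L, v)
    return det([[cols[j][i] for j in range(4)] for i in range(4)])
```

For instance, $\kappa=2$, $\tau=3$ gives determinant $12=\kappa^2\tau$, in agreement with the proposition.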
\subsection{Convex curves} \label{convexcurves}
Next we introduce a special class of locally convex curves of fundamental importance in the study of the space of locally convex curves.
Consider a smooth curve $\gamma : [0,1] \rightarrow \mathbb S^n$.
We call $\gamma$ a \emph{globally convex} curve if its image intersects every hyperplane $H \subseteq \mathbb{R}^{n+1}$, counting with multiplicity, in at most $n$ points.
This definition requires us to clarify the notion of multiplicity. First, endpoints of the curve are not counted as intersections. Then, if there exists $t \in (0,1)$ such that $\gamma(t) \in H$, the multiplicity of the intersection point $\gamma(t)$ is the smallest integer $k \geq 1$ such that
$ \gamma^{(j)}(t) \in H, \quad 0 \leq j \leq k-1, \quad \text{and} \quad \gamma^{(k)}(t) \notin H. $
Thus the multiplicity is one if $\gamma(t) \in H$ but $\gamma'(t) \notin H$, it is two if $\gamma(t) \in H$, $\gamma'(t) \in H$ but $\gamma''(t) \notin H$, and so on.
From the definition it is easy to prove that every globally convex curve is locally convex.
For ease of presentation, we refer to globally convex curves as convex curves.
Given any $n \geq 2$ and $z \in \mathrm{Spin}_{n+1}$, recall that $\mathcal{L}\mathbb S^n(z)$ is the space of locally convex curves in $\mathbb S^n$ whose initial lifted Frenet frame is $\mathbf 1$ and whose final lifted Frenet frame is $z$.
Convexity is strongly related to the number of connected components of the spaces $\mathcal{L}\mathbb S^n(z)$, $z \in \mathrm{Spin}_{n+1}$. Let us recall the following result.
\begin{theo}[M. Z. Shapiro, \cite{Sha93}, S. Anisov, \cite{Ani98}]
\label{thmani}
The space $\mathcal{L}\mathbb S^n(z)$ has exactly two connected components if there exist convex curves in $\mathcal{L}\mathbb S^n(z)$, and one otherwise. If $\mathcal{L}\mathbb S^n(z)$ has two connected components, one is made of convex curves, and this component is contractible.
\end{theo}
This result highlights the importance of identifying the existence of convex curves in the spaces of locally convex curves.
It will be critical in what follows.
\section{Examples} \label{examples}
In the recent paper~\cite{AlvSal19} the authors produced a homeomorphism between the space of curves $\gamma \in \mathcal{L}\mathbb S^3(z_l,z_r)$ and the space of pairs of curves $(\gamma_l,\gamma_r) \in \mathcal{L}\mathbb S^2(z_l) \times \mathcal{G}\mathbb S^2(z_r)$ satisfying a compatibility condition; more precisely:
\begin{theo}[Alves and Saldanha, 2019] \label{th1}
There exists a homeomorphism between the space $ \mathcal{L}\mathbb S^3(z_l,z_r)$ and the space of pairs of curves $(\gamma_l,\gamma_r) \mathbf in \mathcal{L}\mathbb S^2(z_l)\times \mathcal{G}\mathbb S^2(z_r)$
satisfying the condition
\begin{equation*}\label{cond}\tag{L}
||\gamma_l'(t)||=||\gamma_r'(t)||, \quad \kappa_{\gamma_l}(t)>|\kappa_{\gamma_r}(t)|, \quad t \in [0,1].
\end{equation*}
\end{theo}
For the proof we refer to \cite[Subsection 4.1]{AlvSal19}.
Theorem~\ref{th1} allows us to represent a locally convex curve in $\mathbb S^3$ as a pair consisting of a locally convex curve in $\mathbb S^2$ and an immersion in $\mathbb S^2$, subject to some compatibility conditions. Hence, to produce examples of locally convex curves in $\mathbb S^3$, it is enough to produce examples of such pairs.
In this section, we want to use this theorem to produce examples in the spaces we are interested in:
namely, $\mathcal{L}\mathbb S^3(-\mathbf 1,\mathbf k)$ and $\mathcal{L}\mathbb S^3(\mathbf 1,-\mathbf 1)$.
These examples will be fundamental in the proof of Theorem~\ref{th2} and Theorem~\ref{th3}.
In the sequel, for the sake of completeness, we recall the notations and main steps developed
in the Subsection 4.2 of~\cite{AlvSal19}.
Let $c$ be a real number such that $0<c \leq 2\pi$.
Consider $\sigma_c : [0,1] \rightarrow \mathbb S^2$ the unique circle of length $c$, that is
$||\sigma_c'(t)||=c$, with initial and final Frenet frame equals to the identity.
Let $\rho \in (0,\pi/2]$ be the radius of curvature and $c=2\pi\sin\rho$; the curve $\sigma_c$ can be given by the following formula
\[ \sigma_c(t)=\cos\rho(\cos\rho,0,\sin\rho)+\sin\rho(\sin\rho\cos(2\pi t), \sin(2\pi t), -\cos\rho\cos(2\pi t)). \]
The geodesic curvature of the curve $\sigma_c$ is given by $\cot(\rho) \in [0,+\infty)$.
Note that for $0<c<2\pi$ the curve $\sigma_c$ is not only locally convex but also convex, whereas for $c=2\pi$ we obtain the meridian curve
\[ \sigma_{2\pi}(t)=(\cos(2\pi t), \sin(2\pi t), 0), \]
which has zero geodesic curvature, so it is merely an immersion.
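The value $\cot\rho$ of the geodesic curvature can be verified directly from the explicit parametrization of $\sigma_c$; the sketch below (ours, with hypothetical helper names) differentiates that formula and evaluates $\kappa=\mathbf{t}'\cdot\mathbf{n}/||\sigma_c'||$, the arc-length-normalized form of the definition above.

```python
import math

def sigma_data(rho, t):
    """Position, unit tangent, derivative of the unit tangent and speed of
    sigma_c, c = 2*pi*sin(rho), computed from the explicit parametrization."""
    c, s = math.cos(rho), math.sin(rho)
    a = 2.0 * math.pi * t
    gamma = (c * c + s * s * math.cos(a), s * math.sin(a), c * s * (1.0 - math.cos(a)))
    speed = 2.0 * math.pi * s                  # ||sigma_c'(t)|| = c, the length
    tang = (-s * math.sin(a), math.cos(a), c * math.sin(a))
    dtang = tuple(2.0 * math.pi * v
                  for v in (-s * math.cos(a), -math.sin(a), c * math.cos(a)))
    return gamma, tang, dtang, speed

def geodesic_curvature(rho, t):
    gamma, tang, dtang, speed = sigma_data(rho, t)
    # n = gamma x t (cross product), then kappa = (t' . n) / ||gamma'||
    n = (gamma[1] * tang[2] - gamma[2] * tang[1],
         gamma[2] * tang[0] - gamma[0] * tang[2],
         gamma[0] * tang[1] - gamma[1] * tang[0])
    return sum(u * v for u, v in zip(dtang, n)) / speed
```

One checks that the result is $\cot\rho$ for every $t$: for example $\sqrt{3}$ when $\rho=\pi/6$ (the case $c=\pi$ used below), and $0$ when $\rho=\pi/2$, i.e. for the meridian $\sigma_{2\pi}$.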
All our examples will be constructed as follows. For the left part of our curves, we will use $\sigma_c$ with $c<2\pi$ and iterate it a certain number of times, and for the right part of curves, we will use $\sigma_{2\pi}$ and iterate it a certain number of times. Since the right part will always have zero geodesic curvature, the only restriction so that this pair of curves defines a locally convex curve in $\mathbb S^3$ is the condition that their length should be equal. However, in order to realize different final lifted Frenet frame, we will have to iterate the curve $\sigma_c$ (on the left) and the curve $\sigma_{2\pi}$ (on the right) a different number of times: the equality of length will be achieved by properly choosing $c$ in each case.
Define the curve $\sigma_c^m$, for $m>0$, as the curve $\sigma_c$ iterated $m$ times, that is, $\sigma_c^m(t)=\sigma_c(mt)$, $t \in [0,1]$.
See Figure~\ref{fig:b} below for an illustration.
\begin{figure}
\caption{The curves $\sigma_c^m$ and $\sigma_{2\pi}^{m/2}$.}
\label{fig:b}
\end{figure}
\begin{example} \label{family1}
Let us give explicit examples in the spaces $\mathcal{L}\mathbb S^3((-\mathbf 1)^m,\mathbf k^m)$, $m \geq 1$. For $m\equiv 1$ or $2$ modulo $4$, this will give examples in the spaces $\mathcal{L}\mathbb S^3(-\mathbf 1,\mathbf k)$ and $\mathcal{L}\mathbb S^3(\mathbf 1,-\mathbf 1)$.
\end{example}
For $m\equiv 1$ or $2$ modulo $4$, we want to define a curve $\gamma_1^m \in \mathcal{L}\mathbb S^3((-\mathbf 1)^m,\mathbf k^m)$
such that its left and right parts are given by
\[ \gamma_{1,l}^m=\sigma_c^m \in \mathcal{L}\mathbb S^2((-\mathbf 1)^m), \quad \gamma_{1,r}^m =\sigma_{2\pi}^{m/2} \in \mathcal{G}\mathbb S^2(\mathbf k^m). \]
To define a pair of curves, we need to choose $0<c<2\pi$ such that
\[ ||(\gamma_{1,l}^{m})'(t)||=||(\sigma_c^{m})'(t)||=cm \]
is equal to
\[ ||(\gamma_{1,r}^{m})'(t)||=||(\sigma_{2\pi}^{m/2})'(t)||=\pi m. \]
It suffices to choose $c=\pi$ so that both curves have length equal to $\pi m$; the geodesic curvature of $\gamma_{1,l}^m=\sigma_\pi^m$ is then constantly equal to $\cot(\pi/6)=\sqrt{3}$. Clearly, the geodesic curvature of $\gamma_{1,r}^m =\sigma_{2\pi}^{m/2}$ is zero.
Let us now find explicitly the curve $\gamma_1^m$. From Theorem~\ref{th1}, we can compute
\[ ||(\gamma_1^{m})'(t)||=\frac{||(\gamma_{1,l}^m)'(t)||(\kappa_{\gamma_{1,l}}(t)-\kappa_{\gamma_{1,r}}(t))}{2}=\frac{m\pi\sqrt{3}}{2} \]
\[ \kappa_{\gamma_1^m}(t)=\frac{2}{\kappa_{\gamma_{1,l}}(t)-\kappa_{\gamma_{1,r}}(t)}=\frac{2}{\sqrt{3}},\]
\[\tau_{\gamma_1^m}(t)=\frac{\kappa_{\gamma_{1,l}}(t)+\kappa_{\gamma_{1,r}}(t)}{\kappa_{\gamma_{1,l}}(t)-\kappa_{\gamma_{1,r}}(t)}=1. \]
Therefore the logarithmic derivative of $\gamma_1^m$ is constant and given by
\[
\Lambda_{\gamma_1^m}=\frac{\pi}{2}
\begin{pmatrix}
0 & -m\sqrt{3} & 0 & 0 \\
m\sqrt{3} & 0 & -2m & 0 \\
0 & 2m & 0 & -m\sqrt{3} \\
0 & 0 & m\sqrt{3} & 0
\end{pmatrix}.
\]
The Jacobian curve $\Gamma_{\gamma_1^m}$ satisfies
\[ \Gamma_{\gamma_1^m}'(t)=\Gamma_{\gamma_1^m}(t)\Lambda_{\gamma_1^m}, \quad \Gamma_{\gamma_1^m}(0)=I \]
and can also be computed explicitly, since it is the exponential of $t\Lambda_{\gamma_1^m}$, that is
\[ \Gamma_{\gamma_1^m}(t)=\exp(t\Lambda_{\gamma_1^m}). \]
The curve $\gamma_1^m$ is then equal to $\Gamma_{\gamma_1^m}e_1$, and we find that
\begin{eqnarray*}
\gamma_1^m(t) & = & \left(\frac{1}{4}\cos\left(\frac{3}{2}t\pi m\right)+\frac{3}{4}\cos\left(\frac{1}{2}t\pi m\right) \right.,\\
& & \frac{\sqrt{3}}{4}\sin\left(\frac{3}{2}t\pi m\right)+\frac{\sqrt{3}}{4}\sin\left(\frac{1}{2}t\pi m\right), \\
& & \frac{\sqrt{3}}{4}\cos\left(\frac{1}{2}t\pi m\right)-\frac{\sqrt{3}}{4}\cos\left(\frac{3}{2}t\pi m\right), \\
& & \left.\frac{3}{4}\sin\left(\frac{1}{2}t\pi m\right)-\frac{1}{4}\sin\left(\frac{3}{2}t\pi m\right)\right).
\end{eqnarray*}
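As a sanity check on this closed form (ours, not from the paper), one can compare it with the first column of $\exp(t\Lambda_{\gamma_1^m})$ computed numerically; the sketch below uses a plain truncated Taylor series for the matrix exponential, which converges quickly for small $m$, and hypothetical helper names.

```python
import math

def expm(A, terms=60):
    """exp(A) for a small square matrix, by truncating its Taylor series."""
    n = len(A)
    result = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    term = [row[:] for row in result]
    for k in range(1, terms):
        term = [[sum(term[i][l] * A[l][j] for l in range(n)) / k for j in range(n)]
                for i in range(n)]
        result = [[result[i][j] + term[i][j] for j in range(n)] for i in range(n)]
    return result

def gamma1_numeric(m, t):
    """gamma_1^m(t) = exp(t * Lambda_{gamma_1^m}) e_1 (first column)."""
    r3 = math.sqrt(3.0)
    L = [[0.0, -m * r3, 0.0, 0.0],
         [m * r3, 0.0, -2.0 * m, 0.0],
         [0.0, 2.0 * m, 0.0, -m * r3],
         [0.0, 0.0, m * r3, 0.0]]
    G = expm([[0.5 * math.pi * t * x for x in row] for row in L])
    return [G[i][0] for i in range(4)]

def gamma1_closed(m, t):
    # the closed-form expression for gamma_1^m(t) displayed above
    a, b = 1.5 * math.pi * m * t, 0.5 * math.pi * m * t
    r3 = math.sqrt(3.0)
    return [0.25 * math.cos(a) + 0.75 * math.cos(b),
            0.25 * r3 * math.sin(a) + 0.25 * r3 * math.sin(b),
            0.25 * r3 * math.cos(b) - 0.25 * r3 * math.cos(a),
            0.75 * math.sin(b) - 0.25 * math.sin(a)]
```

For $m \in \{1,2\}$ and various $t$, the two expressions agree to high precision, and the resulting point lies on $\mathbb S^3$.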
Below we give an illustration in the case $m=5$ (Figure~\ref{fig:g}).
\begin{figure}
\caption{The curve $\gamma_1^5 \in \mathcal{L}\mathbb S^3(-\mathbf 1,\mathbf k)$.}
\label{fig:g}
\end{figure}
\begin{proposition} \label{curve gamma -1,k is convex}
The curve $\gamma_1^1 \in \mathcal{L}\mathbb S^3(-\mathbf 1,\mathbf k)$ is convex.
\end{proposition}
\begin{proof}
We will prove that the curve $\gamma_{1}^1 \in \mathcal{L}\mathbb S^3(-\mathbf 1,\mathbf k)$ defined in Example~\ref{family1} is convex (see Figure~\ref{fig:c}).
\begin{figure}
\caption{The curve $\gamma_1^1 \in \mathcal{L}\mathbb S^3(-\mathbf 1,\mathbf k)$.}
\label{fig:c}
\end{figure}
Up to a reparametrization with constant speed, this curve is the same as the curve $\tilde{\gamma} : [0,\pi/2] \rightarrow \mathbb S^3$ defined by
\begin{eqnarray*}
\tilde{\gamma}(t) & = & \left(\frac{1}{4}\cos\left(3t\right)+\frac{3}{4}\cos\left(t\right)\right., \\
& & \frac{\sqrt{3}}{4}\sin\left(3t \right)+\frac{\sqrt{3}}{4}\sin\left(t\right), \\
& & \frac{\sqrt{3}}{4}\cos\left(t\right)-\frac{\sqrt{3}}{4}\cos\left(3t\right), \\
& & \left.\frac{3}{4}\sin\left(t\right)-\frac{1}{4}\sin\left(3t \right)\right).
\end{eqnarray*}
Note that it is sufficient to prove that $\tilde{\gamma}$ is convex.
Observe that for $t \in [0,\pi/2)$, the first component of $\tilde{\gamma}$ never vanishes, so if we define the central projection
\[ p : (x_1,x_2,x_3,x_4) \in \mathbb{R}^4 \mapsto \left(1,\frac{x_2}{x_1},\frac{x_3}{x_1},\frac{x_4}{x_1}\right), \]
then it is sufficient to prove that the curve $p(\tilde{\gamma})$, defined for $t \in [0,\pi/2)$, is convex.
We compute
\[ p(\tilde{\gamma}(t))=\left(1,\sqrt{3}\tan t,\sqrt{3}(\tan t)^2,(\tan t)^3\right) \]
and hence, if we reparametrize by setting $x=\tan t$, we obtain the curve
\[ x \in [0,+\infty) \mapsto (1,\sqrt{3}x,\sqrt{3}x^2,x^3) \in \mathbb{R}^4. \]
This curve is convex: any hyperplane $a_1x_1+a_2x_2+a_3x_3+a_4x_4=0$ meets it where a nonzero polynomial of degree at most $3$ in $x$ vanishes, hence in at most three points, counting with multiplicity. Therefore our initial curve $\gamma_1^1$ is convex.
\end{proof}
\begin{proposition} \label{curve gamma 1,-1 is convex}
The curve $\gamma_1^2 \in \mathcal{L}\mathbb S^3(\mathbf 1,-\mathbf 1)$ is convex.
\end{proposition}
\begin{proof}
The proof of this assertion is entirely similar to the proof that $\gamma_1^1 \in \mathcal{L}\mathbb S^3(-\mathbf 1,\mathbf k)$ is convex (see Figure~\ref{fig:d}), so we leave it to the reader.
\begin{figure}
\caption{The curve $\gamma_1^2 \in \mathcal{L}\mathbb S^3(\mathbf 1,-\mathbf 1)$.}
\label{fig:d}
\end{figure}
\end{proof}
Note that the curve $\gamma_1^2$ is an example of a curve in the space $\mathcal{L}\mathbb S^3(\mathbf 1,-\mathbf 1)$ that is convex even though its left part $\sigma_\pi^2$ is not convex.
\section{Characterization of convex curves in the spaces $\mathcal{L}\mathbb S^3(-\mathbf 1,\mathbf k)$ and $\mathcal{L}\mathbb S^3(\mathbf 1,-\mathbf 1)$}\label{sectionproofs}
Our nearest goal is to prove Theorem~\ref{th2}, i.e., that a curve $\gamma \in \mathcal{L}\mathbb S^3(-\mathbf 1,\mathbf k)$ is convex if and only if $\gamma_l \in \mathcal{L}\mathbb S^2(-\mathbf 1)$ is convex. We will then prove Theorem~\ref{th3}, i.e., that if $\gamma \in \mathcal{L}\mathbb S^3(\mathbf 1,-\mathbf 1)$ is convex then its left part $\gamma_l \in \mathcal{L}\mathbb S^2(\mathbf 1)$ is contained in an open hemisphere and its rotation number is 2.
\subsection{Proof of Theorem~\ref{th2} }\label{s63}
In this subsection, we give the proof of Theorem~\ref{th2} which characterizes convexity in the spaces $\mathcal{L}\mathbb S^3(-\mathbf 1,\mathbf k)$ by merely considering the left part of the curve.
\begin{proof}[Proof of Theorem~\ref{th2}]
Recall that we want to prove that a curve $\gamma \in \mathcal{L}\mathbb S^3(-\mathbf 1,\mathbf k)$ is convex if and only if its left part $\gamma_l \in \mathcal{L}\mathbb S^2(-\mathbf 1)$ is convex.
It is clear that $\mathcal{L}\mathbb S^2(-\mathbf 1)$ contains convex curves; the curve $\sigma_c$, for $0 < c < 2\pi$, defined in Section~\ref{examples} is convex, since it intersects any hyperplane of $\mathbb{R}^3$ (or equivalently any great circle) in at most two points. Using Theorem~\ref{thmani}, the space $\mathcal{L}\mathbb S^2(-\mathbf{1})$ therefore has $2$ connected components,
\[ \mathcal{L}\mathbb S^2(-\mathbf{1})=\mathcal{L}\mathbb S^2(-\mathbf{1})_c \sqcup \mathcal{L}\mathbb S^2(-\mathbf{1})_n, \]
where $\mathcal{L}\mathbb S^2(-\mathbf{1})_c$ is the component associated with convex curves and $\mathcal{L}\mathbb S^2(-\mathbf{1})_n$ the component associated with non-convex curves.
The space $\mathcal{L}\mathbb S^3(-\mathbf 1,\mathbf k)$ also contains convex curves. Indeed, we proved in Proposition~\ref{curve gamma -1,k is convex} that the curve $\gamma_{1}^1 \in \mathcal{L}\mathbb S^3(-\mathbf 1,\mathbf k)$ is convex.
Therefore, again using Theorem~\ref{thmani}, the space $\mathcal{L}\mathbb S^3(-\mathbf{1},\mathbf k)$ has $2$ connected components,
\[ \mathcal{L}\mathbb S^3(-\mathbf{1},\mathbf k)=\mathcal{L}\mathbb S^3(-\mathbf{1},\mathbf k)_c \sqcup \mathcal{L}\mathbb S^3(-\mathbf{1},\mathbf k)_n, \]
where $\mathcal{L}\mathbb S^3(-\mathbf{1},\mathbf k)_c$ is the component associated with convex curves and $\mathcal{L}\mathbb S^3(-\mathbf{1},\mathbf k)_n$ the component associated with non-convex curves.
Then we can use Theorem~\ref{th1} to define a continuous map
\[ L : \mathcal{L}\mathbb S^3(-\mathbf{1},\mathbf k) \rightarrow \mathcal{L}\mathbb S^2(-\mathbf{1}) \]
by setting $L(\gamma)=\gamma_l$, where $(\gamma_l,\gamma_r)$ is the pair of curves associated with $\gamma$. Since $L$ is continuous and $\mathcal{L}\mathbb S^3(-\mathbf{1},\mathbf k)_c$ is connected, its image by $L$ is also connected. Moreover, we know that $\gamma_1^1 \in \mathcal{L}\mathbb S^3(-\mathbf{1},\mathbf k)_c$ and that $L(\gamma_1^1)=\sigma_\pi \in \mathcal{L}\mathbb S^2(-\mathbf{1})_c$, therefore the image of $\mathcal{L}\mathbb S^3(-\mathbf{1},\mathbf k)_c$ by $L$ intersects $\mathcal{L}\mathbb S^2(-\mathbf{1})_c$; since the latter is connected we must have the inclusion
\[ L\left(\mathcal{L}\mathbb S^3(-\mathbf{1},\mathbf k)_c\right) \subset \mathcal{L}\mathbb S^2(-\mathbf{1})_c. \]
This proves one part of the statement, namely that if $\gamma \mathbf in \mathcal{L}\mathbb S^3(-\mathbf{1},\mathbf k)_c$, then its left part $\gamma_l=L(\gamma) \mathbf in \mathcal{L}\mathbb S^2(-\mathbf{1})_c$. To prove the other part, it is enough to verify that
\[ L\left(\mathcal{L}\mathbb S^3(-\mathbf{1},\mathbf k)_n\right) \subset \mathcal{L}\mathbb S^2(-\mathbf{1})_n. \]
To show this inclusion, using continuity and connectedness arguments as before, it is enough to find one element in $\mathcal{L}\mathbb S^3(-\mathbf{1},\mathbf k)_n$ whose image by $L$ belongs to $\mathcal{L}\mathbb S^2(-\mathbf{1})_n$. We claim that the curve $\gamma_1^5$ from Example~\ref{family1} does the job. To see that $\gamma_1^5 \in \mathcal{L}\mathbb S^3(-\mathbf{1},\mathbf k)_n$, one can easily check that if we define the plane
\[ H=\{(x_1,0,0,x_4) \in \mathbb{R}^4 \; | \; x_1 \in \mathbb{R}, \; x_4 \in \mathbb{R}\} \]
then
\[ \gamma_1^5(t_i) \in H, \quad t_i=\frac{i}{5}, \quad 1 \leq i \leq 4. \]
Hence $\gamma_1^5$ has at least $4$ points of intersection with $H$, which shows that $\gamma_1^5$ is not convex. To conclude, it is clear that $L(\gamma_1^5)=\sigma_\pi^5 \in \mathcal{L}\mathbb S^2(-\mathbf{1})_n$. This proves the desired inclusion and concludes the proof.
\end{proof}
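The vanishing of the second and third coordinates of $\gamma_1^5$ at the times $t_i=i/5$ can also be confirmed numerically from the closed-form expression for $\gamma_1^m$; a short sketch (our helper names):

```python
import math

def gamma15(t):
    # the closed form of gamma_1^m for m = 5
    a, b = 7.5 * math.pi * t, 2.5 * math.pi * t
    r3 = math.sqrt(3.0)
    return (0.25 * math.cos(a) + 0.75 * math.cos(b),
            0.25 * r3 * (math.sin(a) + math.sin(b)),
            0.25 * r3 * (math.cos(b) - math.cos(a)),
            0.75 * math.sin(b) - 0.25 * math.sin(a))

def in_H(p, tol=1e-12):
    # H = {(x1, 0, 0, x4)}: the second and third coordinates vanish
    return abs(p[1]) < tol and abs(p[2]) < tol
```

One finds that $\gamma_1^5(i/5) \in H$ for $1 \leq i \leq 4$, while generic points of the curve lie outside $H$.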
\subsection{Proof of Theorem ~\ref{th3}}\label{s64}
Now we detail the proof of Theorem~\ref{th3}, which gives a necessary condition for a curve in $\mathcal{L}\mathbb S^3(\mathbf 1,-\mathbf 1)$ to be convex in terms of its left part alone.
First we need to recall some basic definitions and properties.
An \emph{open hemisphere} $H$ in $\mathbb S^2$ is a subset of $\mathbb S^2$ of the form
$ H_h=\{x \in \mathbb S^2 \; | \; h\cdot x >0\} $
for some $h \in \mathbb S^2$, and a \emph{closed hemisphere} is the closure $\bar{H}$ of an open hemisphere, that is, it has the form
$ \bar{H}_h=\{x \in \mathbb S^2 \; | \; h\cdot x \geq 0\}. $
We can make the following definition.
A closed curve $\gamma : [0,1] \rightarrow \mathbb S^2$ is \emph{hemispherical} if its image is contained in an open hemisphere of $\mathbb S^2$. It is \emph{borderline hemispherical} if it is contained in a closed hemisphere but not contained in any open hemisphere.
Following~\cite{Zul12}, we define a rotation number for any closed curve $\gamma$ in $\mathbb S^2$ contained in a closed hemisphere (such a curve is either hemispherical or borderline hemispherical). For such a closed curve $\gamma$, there is a distinguished choice of closed hemisphere $\bar{H}_{h_\gamma}$ containing the image of $\gamma$: the point $h_\gamma$ is the barycenter of the set of all $h$ such that $\bar{H}_h$ contains the image of $\gamma$, the latter set being geodesically convex (see~\cite{Zul12} for further details). Let $\Pi_{h_\gamma} : \mathbb S^2 \rightarrow \mathbb{R}^2$ be the stereographic projection from $-h_\gamma$, and $\eta_\gamma= \Pi_{h_\gamma} \circ \gamma$. The curve $\eta_\gamma$ is then a closed curve in the plane $\mathbb{R}^2$, and it is an immersion. The definition of its rotation number $\mathrm{rot}(\eta_\gamma) \in \mathbb{Z}$ is classical: for instance, it can be defined as the degree of the map
\[ t \in \mathbb S^1 \mapsto \frac{\eta_\gamma'(t)}{||\eta_\gamma'(t)||} \in \mathbb S^1. \]
Given a closed curve $\gamma$ contained in a closed hemisphere in $\mathbb S^2$, its rotation number $\mathrm{rot}(\gamma)$ is defined by
$ \mathrm{rot}(\gamma):=-\mathrm{rot}(\eta_\gamma) \in \mathbb{Z}. $
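For concreteness, the rotation number of a closed plane immersion can be computed numerically as the total turning of its tangent vector divided by $2\pi$; the sketch below (ours, with hypothetical names) takes a curve given as a map $[0,1] \rightarrow \mathbb{R}^2$ and accumulates the angle swept by a finite-difference tangent. With the convention above, $\mathrm{rot}(\gamma)$ is minus the value returned for $\eta_\gamma$.

```python
import math

def planar_rotation_number(eta, samples=2000):
    """Degree of t -> eta'(t)/||eta'(t)|| for a closed immersion eta: [0,1] -> R^2,
    obtained by accumulating the turning angle of a finite-difference tangent."""
    h = 1.0 / samples
    def tangent_angle(t):
        (x0, y0), (x1, y1) = eta(t % 1.0), eta((t + h) % 1.0)
        return math.atan2(y1 - y0, x1 - x0)
    total, prev = 0.0, tangent_angle(0.0)
    for i in range(1, samples + 1):
        cur = tangent_angle(i * h)
        d = cur - prev
        # unwrap the increment to its representative in (-pi, pi]
        while d <= -math.pi:
            d += 2.0 * math.pi
        while d > math.pi:
            d -= 2.0 * math.pi
        total += d
        prev = cur
    return round(total / (2.0 * math.pi))
```

For example, a circle traversed twice has rotation number $2$, and reversing the orientation changes the sign.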
The proof of Theorem~\ref{th3} will be based on two lemmas. The first lemma is a well-known property.
For ease of presentation we omit the proof.
\begin{lemma}\label{lem1}
Consider a continuous map $H : [0,1] \rightarrow \mathcal{L}\mathbb S^2(\mathbf 1)$ such that $\gamma_0=H(0)$ is hemispherical with rotation number equal to $2$, while $\gamma_1=H(1)$ does not have this property. Then there exists a time $t>0$ such that $\gamma_t=H(t)$ is borderline hemispherical with rotation number equal to $2$.
\end{lemma}
The next lemma will be proven below.
\begin{lemma}\label{lem2}
Consider the map $L : \mathcal{L}\mathbb S^3(\mathbf 1,-\mathbf 1) \rightarrow \mathcal{L}\mathbb S^2(\mathbf 1)$ given by $L(\gamma)=\gamma_l$, and let $\mathcal{L}\mathbb S^3(\mathbf 1,-\mathbf 1)_c$ be the set of convex curves. Then the image of $\mathcal{L}\mathbb S^3(\mathbf 1,-\mathbf 1)_c$ by $L$ does not contain a borderline hemispherical curve with rotation number equal to $2$.
\end{lemma}
These two lemmas, together with the setup of the problem, yield the proof of Theorem~\ref{th3}.
\begin{proof}[Proof of Theorem~\ref{th3}]
Recall that the map $L : \mathcal{L}\mathbb S^3(\mathbf 1,-\mathbf 1) \rightarrow \mathcal{L}\mathbb S^2(\mathbf 1)$ given by $L(\gamma)=\gamma_l$ is continuous, and that $\mathcal{L}\mathbb S^3(\mathbf 1,-\mathbf 1)$ has exactly two connected components, one of which is made of convex curves, $\mathcal{L}\mathbb S^3(\mathbf 1,-\mathbf 1)_c$, and the other of non-convex curves, $\mathcal{L}\mathbb S^3(\mathbf 1,-\mathbf 1)_n$. We need to prove that the image of $\mathcal{L}\mathbb S^3(\mathbf 1,-\mathbf 1)_c$ by $L$ contains only curves which are hemispherical with rotation number equal to $2$.
First let us prove that this image contains at least one such element. Recall the family of curves $\gamma_{1}^m \in \mathcal{L}\mathbb S^3((-\mathbf 1)^m,\mathbf k^m)$, $m \geq 1$, defined in Example~\ref{family1}. For $m=2$, the curve $\gamma_1^2=(\sigma_\pi^2,\sigma^1_{2\pi}) \in \mathcal{L}\mathbb S^3(\mathbf 1,-\mathbf 1)$ is convex (see Section~\ref{examples}).
Moreover, it is clear that $L(\gamma_1^2)=\sigma_\pi^2$ is hemispherical and has rotation number equal to $2$, and therefore the image of $\mathcal{L}\mathbb S^3(\mathbf 1,-\mathbf 1)_c$ by $L$ contains at least the curve $L(\gamma_1^2)=\sigma_\pi^2$.
To prove that the image of $\mathcal{L}\mathbb S^3(\mathbf 1,-\mathbf 1)_c$ by $L$ contains only curves which are hemispherical with rotation number equal to $2$, we argue by contradiction, and assume that the image of $\mathcal{L}\mathbb S^3(\mathbf 1,-\mathbf 1)_c$ by $L$ contains a curve which is not hemispherical with rotation number equal to $2$. Since $L$ is continuous and $\mathcal{L}\mathbb S^3(\mathbf 1,-\mathbf 1)_c$ is connected, its image by $L$ is connected and thus we can find a homotopy $H : [0,1] \rightarrow L\left(\mathcal{L}\mathbb S^3(\mathbf 1,-\mathbf 1)_c\right) \subset \mathcal{L}\mathbb S^2(\mathbf 1)$ between $H(0)=\sigma_\pi^2$, which is hemispherical with rotation number equal to $2$, and a curve $H(1)$ which does not have this property. Using Lemma~\ref{lem1}, one can find a time $t>0$ such that $H(t) \in L\left(\mathcal{L}\mathbb S^3(\mathbf 1,-\mathbf 1)_c\right)$ is borderline hemispherical with rotation number equal to $2$. But by Lemma~\ref{lem2}, such a curve $H(t)$ cannot belong to $ L\left(\mathcal{L}\mathbb S^3(\mathbf 1,-\mathbf 1)_c\right)$, and so we arrive at a contradiction.
\end{proof}
To conclude, it remains to prove Lemma~\ref{lem2}.
\begin{proof}[Proof of Lemma~\ref{lem2}]
We argue by contradiction, and assume that there exists a curve $\beta \in \mathcal{L}\mathbb S^3(\mathbf 1,-\mathbf 1)_c$ (that is, a convex curve $\beta \in \mathcal{L}\mathbb S^3(\mathbf 1,-\mathbf 1)$) such that its left part $\beta_l$ is a borderline hemispherical curve with rotation number equal to $2$.
First we use our assumption that $\beta$ is convex, which implies that $\mathcal{F}_{\beta}(t)$ belongs to the Bruhat cell of $A^\top$ for all $t \in [0,1]$ (see Proposition $64$ in~\cite{Alv16} or Theorem $3$ in~\cite{GouSal19I}), where
\[ A ^\top=
\begin{pmatrix}
0 & 0 & 0 & -1 \\
0 & 0 & 1 & 0 \\
0 & -1 & 0 & 0 \\
1 & 0 & 0 & 0
\end{pmatrix}.
\]
Therefore, by definition, there exist matrices $U_1(t) \in \mathrm{Up}_4^+$, $U_2(t) \in \mathrm{Up}_4^+$ (recall that $\mathrm{Up}_4^+$ is the group of upper triangular $4 \times 4$ matrices with positive diagonal entries) such that
\[ \mathcal{F}_{\beta}(t)=U_1(t)A^\top U_2(t).\]
This condition can also be written as
\[ \mathcal{F}_{\beta}(t)=A^\top L_1(t)U_2(t), \quad L_1(t):=AU_1(t)A^\top,\]
with $L_1(t) \in \mathrm{Lo}_4^+$, where $\mathrm{Lo}_4^+$ is the group of lower triangular $4 \times 4$ matrices with positive diagonal entries. Such a decomposition is not unique, but there exists a unique decomposition
\begin{equation}\label{dec}
\mathcal{F}_{\beta}(t)={} A ^\top L(t)U(t),
\end{equation}
where $U(t) \in \mathrm{Up}_4^+$. Now $L(t)\in \mathrm{Lo}_4^1$, where $\mathrm{Lo}_4^1$ is the group of lower triangular $4 \times 4$ matrices with diagonal entries equal to one. Using the fact that $\mathcal{F}_{\beta}(t)^{-1}\mathcal{F}_{\beta}'(t)$ belongs to $\mathfrak{J}$ (because $\beta$ is in particular locally convex), it is easy to see, by a simple computation, that the matrix $L(t)$ in~\eqref{dec} is such that $L(t)^{-1}L'(t)$ has positive subdiagonal entries and all other entries are zero. That is, we can write
\begin{equation}\label{L}
L(t)^{-1}L'(t)=
\begin{pmatrix}
0 & 0 & 0 & 0 \\
+ & 0 & 0 & 0 \\
0 & + & 0 & 0 \\
0 & 0 & + & 0
\end{pmatrix}
, \quad t \in [0,1].
\end{equation}
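Concretely, since $A$ is orthogonal, \eqref{dec} says that $L(t)U(t)$ is the unpivoted LU factorization of $A\mathcal{F}_{\beta}(t)$, so $L(t)$ and $U(t)$ can be recovered by Doolittle elimination. The following is a minimal numerical sketch; the function name \texttt{bruhat\_LU} is ours, and we assume the leading minors are nonzero, as they are on the Bruhat cell:

```python
import numpy as np

# the matrix A^T from the text; A is orthogonal, so A F = L U is the
# unpivoted LU factorization of A F whenever F = A^T L U as in (dec)
AT = np.array([[0, 0, 0, -1],
               [0, 0, 1, 0],
               [0, -1, 0, 0],
               [1, 0, 0, 0]], dtype=float)
A = AT.T

def bruhat_LU(F):
    """Recover the unique L (unit lower triangular) and U (upper
    triangular, with positive diagonal on the Bruhat cell) such that
    F = A^T L U, by Doolittle elimination without pivoting."""
    M = A @ F
    n = M.shape[0]
    L, U = np.eye(n), np.zeros_like(M)
    for k in range(n):
        U[k, k:] = M[k, k:] - L[k, :k] @ U[:k, k:]
        L[k + 1:, k] = (M[k + 1:, k] - L[k + 1:, :k] @ U[:k, k]) / U[k, k]
    return L, U
```

Uniqueness of the unpivoted factorization with unit diagonal $L$ is what makes the decomposition \eqref{dec} well defined.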
Then we use our assumption that the left part $\beta_l$ is a borderline hemispherical curve with rotation number equal to $2$.
\begin{figure}
\caption{The curve $\beta_l$.}
\label{fig:l}
\end{figure}
This implies (see Figure~\ref{fig:l}, where the dotted circle represents the equator of the sphere) that there exist times $t_1$ and $t_2$ and reals $\theta_1$ and $\theta_2$ such that
\[ \tilde{\mathcal{F}}_{\beta_l}(t_1)=\exp(\theta_1\mathbf k) \in \mathbb S^3, \quad \tilde{\mathcal{F}}_{\beta_l}(t_2)=\exp(\theta_2\mathbf k) \in \mathbb S^3.\]
Consequently, for $\beta$, we have
\begin{equation}\label{assum}
\begin{cases}
\tilde{\mathcal{F}}_{\beta}(t_1)=(\exp(\theta_1\mathbf k),z_r(t_1)) \in \mathbb S^3 \times \mathbb S^3 \\
\tilde{\mathcal{F}}_{\beta}(t_2)=(\exp(\theta_2\mathbf k),z_r(t_2)) \in \mathbb S^3 \times \mathbb S^3.
\end{cases}
\end{equation}
Following Subsection $4.1$ in~\cite{AlvSal19}, let us denote by $\mathbf k_l$ the matrix in $\mathfrak{so}_4$ that corresponds to the left multiplication by $\mathbf k \in \mathbb{H}$. This matrix is given by
\[ \mathbf k_l=
\begin{pmatrix}
0 & 0 & 0 & -1 \\
0 & 0 & -1 & 0 \\
0 & +1 & 0 & 0 \\
+1 & 0 & 0 & 0
\end{pmatrix}. \]
Recalling that $\Pi_4 : \mathbb S^3 \times \mathbb S^3 \rightarrow \mathrm{SO}_4$ is the canonical projection, it follows from~\eqref{assum} that $\mathcal{F}_{\beta}(t_1)=\Pi_4(\tilde{\mathcal{F}}_{\beta}(t_1))$ and $\mathcal{F}_{\beta}(t_2)=\Pi_4(\tilde{\mathcal{F}}_{\beta}(t_2))$ belong to the subgroup $H$ of matrices in $\mathrm{SO}_4$ that commute with the matrix $\mathbf k_l$. Clearly, this subgroup $H$ consists of matrices of the form
\[
\begin{pmatrix}
q_{11} & q_{12} & -q_{42} & -q_{41} \\
q_{21} & q_{22} & -q_{32} & -q_{31} \\
q_{31} & q_{32} & q_{22} & q_{21} \\
q_{41} & q_{42} & q_{12} & q_{11}
\end{pmatrix} \in \mathrm{SO}_4. \]
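One can check the displayed pattern directly: any matrix of this form commutes with $\mathbf k_l$, independently of the values of the $q_{ij}$ (membership in $\mathrm{SO}_4$ imposes additional constraints on them). A small numerical sketch; the name \texttt{H\_form} and the parameter ordering are ours:

```python
import numpy as np

# the matrix k_l of left multiplication by k, as displayed above
kl = np.array([[0, 0, 0, -1],
               [0, 0, -1, 0],
               [0, 1, 0, 0],
               [1, 0, 0, 0]], dtype=float)

def H_form(q11, q12, q21, q22, q31, q32, q41, q42):
    """A matrix of the displayed pattern; it lies in SO_4 only for
    suitable parameter values, but it commutes with k_l for any values."""
    return np.array([[q11, q12, -q42, -q41],
                     [q21, q22, -q32, -q31],
                     [q31, q32, q22, q21],
                     [q41, q42, q12, q11]])
```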
Using this explicit form of $H$ and the fact that $\mathcal{F}_{\beta}(t_1) \in H$ and $\mathcal{F}_{\beta}(t_2) \in H$, one finds, after a direct computation, that the matrix
\begin{equation}
L(t)=
\begin{pmatrix}
1 & 0 & 0 & 0 \\
l_{21}(t) & 1 & 0 & 0 \\
l_{31}(t) & l_{32}(t) & 1 & 0 \\
l_{41}(t) & l_{42}(t) & l_{43}(t) & 1
\end{pmatrix}
\end{equation}
defined in~\eqref{dec} satisfies, at $t=t_1$ and $t=t_2$, the conditions
\begin{equation}\label{contra}
l_{21}(t_1)=-l_{43}(t_1), \quad l_{21}(t_2)=-l_{43}(t_2).
\end{equation}
But \eqref{contra} is not compatible with~\eqref{L}: indeed, since $L(t)$ has unit diagonal, \eqref{L} gives $l_{21}'(t)>0$ and $l_{43}'(t)>0$ for all $t$, so $l_{21}$ is strictly increasing while $-l_{43}$ is strictly decreasing, and they can agree for at most one value of $t$. This gives the desired contradiction.
\end{proof}
\section{Final Considerations}
The study of the spaces of locally convex curves started in the seventies with the works of Little on the $2$-sphere.
But research on the topological aspects of these spaces of curves on higher-dimensional spheres, as well as on related spaces, is a very productive area; here we mention some other relevant works: \cite{Dub61}, \cite{MS012}, \cite{SZ13}, \cite{SZ15}, \cite{SZ16}, \cite{Sma58}, \cite{Sma59b}, \cite{Sma59}, \cite{Whi37} and \cite{Zhou}.
A very hard and interesting question in this topic is to determine the homotopy type of the spaces of locally convex curves on the $n$-sphere, for $n \geq 3$.
In this section we will give some directions of future research and some conjectures.
In~\cite{SS12}, N. Saldanha and B. Shapiro proved that the spaces $\mathcal{L}\mathbb S^n(Q)$ fall into at most $\lceil \frac{n}{2} \rceil + 1$ equivalence classes up to homeomorphism; they also studied this classification in the double cover $\mathrm{Spin}_{n+1}$. Therefore, one natural question is to determine whether the listed spaces are pairwise non-homeomorphic.
In the case $n=2$, the list says that $\mathcal{L}\mathbb S^2(Q)$ is homeomorphic to one of the two spaces
$\Omega(\mathrm{SO}_{3}), \; \mathcal{L}\mathbb S^2(I)$, and
$\mathcal{L}\mathbb S^2(z)$ is homeomorphic to one of the three spaces
$\Omega \mathbb S^3, \; \mathcal{L}\mathbb S^2(\mathbf 1), \; \mathcal{L}\mathbb S^2(-\mathbf 1).$
In this case, all the listed spaces are pairwise non-homeomorphic.
Moreover, in~\cite{Sal13} the following homotopy equivalences are proved to hold:
\[ \mathcal{L}\mathbb S^2(\mathbf{1}) \approx (\Omega \mathbb S^3) \vee \mathbb S^2 \vee \mathbb S^6 \vee \mathbb S^{10} \vee \cdots, \quad \mathcal{L}\mathbb S^2(-\mathbf{1}) \approx (\Omega \mathbb S^3) \vee \mathbb S^4 \vee \mathbb S^8 \vee \cdots.\]
In the case $n=3$, i.e., in the case of $\mathrm{SO}_{4}$ there are at most 3 equivalence classes, and in the case of $\mathbb S^3 \times \mathbb S^3$ at most 5.
Therefore, $\mathcal{L}\mathbb S^3(Q)$ is homeomorphic to one of these three spaces
$\mathcal{L}\mathbb S^3(-I), \; \Omega(\mathrm{SO}_4), \; \mathcal{L}\mathbb S^3(I)$;
and $\mathcal{L}\mathbb S^3(z)$ is homeomorphic to one of these five spaces
$\mathcal{L}\mathbb S^3(-\mathbf 1,\mathbf 1), \; \mathcal{L}\mathbb S^3(\mathbf 1,-\mathbf 1), \; \Omega(\mathbb S^3 \times \mathbb S^3), \;
\mathcal{L}\mathbb S^3(\mathbf 1,\mathbf 1), \; \mathcal{L}\mathbb S^3(-\mathbf 1,-\mathbf 1) . $
In particular, we have $\mathcal{L}\mathbb S^3(-\mathbf 1,\mathbf k) \simeq \mathcal{L}\mathbb S^3(\mathbf 1,-\mathbf 1)$ and therefore we believe that a stronger version of Lemma~\ref{lem2} is true:
\begin{Conj}
The image of the whole space $\mathcal{L}\mathbb S^3(\mathbf 1,-\mathbf 1)$ by $L$ does not contain a borderline hemispherical curve with rotation number equal to $2$.
\end{Conj}
With this stronger statement it would be easy to see from the proof of Theorem~\ref{th3} that our necessary condition for a curve in $\mathcal{L}\mathbb S^3(\mathbf 1,-\mathbf 1)$ to be convex is also sufficient.
Yet for the moment we are not able to prove this stronger statement.
Furthermore, using some techniques developed in~\cite{GouSal19I},~\cite{GouSal19II} and~\cite{AlvSal19}, we hope to prove the conjecture below, in particular solving the main problem in the case $n=3$; this is joint work with V. Goulart, N. Saldanha and B. Shapiro.
\begin{Conj}
We have the following weak homotopy equivalences:
\begin{align*}
\mathcal{L}\mathbb S^3(+\mathbf 1,+\mathbf 1) &\approx
\Omega(\mathbb S^3 \times \mathbb S^3) \vee \mathbb S^4 \vee \mathbb S^8 \vee \mathbb S^8
\vee \mathbb S^{12} \vee \mathbb S^{12} \vee \mathbb S^{12} \vee \cdots, \\
\mathcal{L}\mathbb S^3(-\mathbf 1,-\mathbf 1) &\approx
\Omega(\mathbb S^3 \times \mathbb S^3) \vee \mathbb S^2 \vee \mathbb S^6 \vee \mathbb S^6
\vee \mathbb S^{10} \vee \mathbb S^{10} \vee \mathbb S^{10} \vee \cdots, \\
\mathcal{L}\mathbb S^3(+\mathbf 1,-\mathbf 1) &\approx
\Omega(\mathbb S^3 \times \mathbb S^3) \vee \mathbb S^0 \vee \mathbb S^4 \vee \mathbb S^4
\vee \mathbb S^{8} \vee \mathbb S^{8} \vee \mathbb S^{8} \vee \cdots, \\
\mathcal{L}\mathbb S^3(-\mathbf 1,+\mathbf 1) &\approx
\Omega(\mathbb S^3 \times \mathbb S^3) \vee \mathbb S^2 \vee \mathbb S^6 \vee \mathbb S^6
\vee \mathbb S^{10} \vee \mathbb S^{10} \vee \mathbb S^{10} \vee \cdots.
\end{align*}
Each bouquet above includes one copy of $\mathbb S^k$,
two copies of $\mathbb S^{k+4}$, \dots, $j+1$ copies of $\mathbb S^{k+4j}$, \dots
\end{Conj}
\parindent=0pt
\parskip=0pt
\obeylines
Em\'ilia Alves
[email protected]
Instituto de Matem\'atica e Estat\'istica
Universidade Federal Fluminense
Rua Professor Marcos Waldemar de Freitas Reis s/n
24210-201 Niter\'oi, RJ, Brazil
\end{document} |
\begin{document}
\title{Unitary Invariants and Classification of Four-Qubit States via Negativity Fonts}
\author{S. Shelly Sharma}
\email{[email protected]}
\affiliation{Departamento de F\'{\i}sica, Universidade Estadual de Londrina, Londrina
86051-990, PR Brazil }
\author{N. K. Sharma}
\email{[email protected]}
\affiliation{Departamento de Matem\'{a}tica, Universidade Estadual de Londrina, Londrina
86051-990 PR, Brazil }
\begin{abstract}
Local unitary invariance and the notion of negativity fonts are used as the
principal tools to construct four qubit invariants of degree 8, 12, and 24. A
degree 8 polynomial invariant that is non-zero on pure four qubit states with
four-body quantum correlations and zero on all other states is identified.
Classification of four qubit states into seven major classes, using criteria
based on the nature of correlations, is discussed.
\end{abstract}
\maketitle
To detect and quantify entanglement of composite quantum systems is a
challenge taken up with great zeal by theorists and experimentalists alike. On
the way, from the elegant bipartite separability criterion of Peres
\cite{pere96} up to classification schemes for four qubit states
\cite{vers02,vers03,miya03,miya04,ging02,lama07,li09,bors10,vieh11,shar12},
several useful entanglement measures and invariants have been found
\cite{woot98,coff00,dur00,wong01,meye02,vida02,luqu03,oste05,luqu06,shar101,shar102,oste06,heyd04,leva051,leva052,leva06,chte07,doko09,elts12}
. Two qubit entanglement is quantified by concurrence \cite{hill97}, which
for a pure state is equal to global negativity \cite{zycz98,vida02}.
Entanglement of a three qubit state due to three-body quantum correlations is
quantified by three tangle \cite{coff00}. For the most general three qubit
state, the difference of squared global negativity and three tangle is a
measure of two qubit correlations and satisfies CKW inequality \cite{coff00}.
A natural question is, which polynomial function of the coefficients
quantifies entanglement due to four-body correlations? Can we write an
invariant analogous to global negativity for two qubits and three tangle for
three qubits to quantify four-body correlations?
Invariant theory describes invariant properties of homogeneous polynomials
under general linear transformations. If we write an $N-$qubit state in multilinear
form, we can find the set of invariants of the form in terms of state
coefficients $a_{i_{1}i_{2}...i_{N}}$ by using standard methods, as has been
done in \cite{miya03,miya04,luqu03}. One may then investigate the properties
of all invariants in the set. Our general aim, however, is to construct those
polynomial invariants that quantify entanglement due to $K-$body correlations
in an $N-$qubit $\left( N\geq K\right) $ pure state. This is done by
constructing $N-$qubit invariants from multivariate forms with $\left(
K-1\right) -$qubit invariants as coefficients instead of $a_{i_{1}
i_{2}...i_{N}}$.\ In particular, the invariant that quantifies entanglement
due to $N-$body correlations is obtained from a biform having as coefficients
the $N-1$ qubit invariants. The term $N-$body correlations refers, strictly,
to correlations of the type present in an $N-$qubit GHZ state. The advantage
of our approach \cite{shar101,shar102} is twofold. Firstly, we can choose to
construct invariants that contain information about entanglement of a part of
the system. Secondly, since the form of $N-$qubit invariants is directly
linked to the underlying structure of the composite system state, it can throw
light on the suitability of a given state for a specific information
processing task. Local unitary invariance and the notion of negativity fonts
are used as the principal tools to identify $K-$qubit invariants in an
$N-$qubit state. Negativity fonts are the elementary units of entanglement in
a quantum superposition state. Determinants of negativity fonts are linked to
matrices obtained from state operator through selective partial transposition
\cite{shar07,shar08}. In this article, we obtain analytical expressions for
polynomial invariants of degree $8$, $12$, and $24$ for $N=4$ states. One of
the four qubit invariants is found to be nonzero on states with four-body
quantum correlations and zero on separable states as well as on states with
entanglement due to two and three body correlations. It is analogous to the three
qubit invariant used to define the three tangle \cite{woot98}, and can likewise be
used to construct an entanglement monotone to quantify four-body correlations.
To obtain four qubit invariants that quantify four qubit quantum correlations,
we follow a sequence of steps as given below:
1. Identify two qubit invariants for a given pair in a three qubit state.
2. Obtain a quadratic equation with two qubit invariants for a given pair of
qubits as coefficients. The discriminant of the form is the three qubit invariant
written in terms of two qubit invariants.
3. Identify two qubit invariants in a four qubit state. Select three qubits
and write three qubit invariants for these in a four qubit state. We identify
five invariants, including two invariants analogous to ones known for a three
qubit state.
4. A local unitary on the fourth qubit yields transformation equations for three
qubit invariants. Proper unitaries can reduce the number of three qubit
invariants in the set to four. The process of finding such local unitaries
yields a quartic equation from which four qubit invariants are obtained. Since
the invariants in a larger Hilbert space are written in terms of relevant
invariants in subspaces, it is possible to differentiate the invariants that
quantify three-body quantum correlations from those that quantify four-body
quantum correlations.
In principle, the process can be carried on to a higher number of qubits.
Polynomial invariants introduced by Luque and Thibon \cite{luqu03} were given a
geometrical meaning in the work of Levay \cite{leva06}. We point out the
relation of our four qubit invariants with invariants in \cite{luqu03} and
\cite{leva06}.
Polynomial invariants that identify the nature of correlations in a state are
useful for applying the classification criteria proposed in \cite{shar12} to four
qubit states. Two multi qubit pure states are equivalent under stochastic
local operations and classical communication (SLOCC) \cite{dur00} if one can
be obtained from the other with some probability using only local operations
and classical communication amongst different parties. SLOCC equivalence is
the central point in four qubit state classification into nine families in
\cite{vers02}. Borsten et al. \cite{bors10} have invoked the black-hole--qubit
correspondence to derive the classification of four-qubit entanglement.
However, it has been found that the number of four qubit SLOCC\ entanglement
classes is much larger \cite{li09}. The main result of Lamata et al.
\cite{lama07} is that each of the eight genuinely inequivalent entanglement
classes contains a continuous range of strictly non-equivalent states,
although with similar structure. O. Viehmann et al. \cite{vieh11} select a set
of generators for the $\mathrm{SL}(2,\mathbb{C})^{\otimes 4}$-invariant polynomials or tangles and
classify the eight families of ref. \cite{lama07} using tangle patterns. In
our classification scheme using correlation based criterion \cite{shar12},
multipartite states within the same class have the same type of correlations but
may have a different number and type of negativity fonts in the canonical state
(not all the states need be SLOCC equivalent). In section IV, we calculate the
relevant invariants for SLOCC\ families \cite{vers02}\ and re-classify the
states on the basis of number and nature of negativity fonts with non-zero
determinants. The polynomial invariants used to classify the states in our
scheme quantify correlations generated by distinct interaction types.
Intuitively, this information should be extremely useful to quantum state
engineering. Negativity font analysis can be a helpful tool to optimize the
subsystem interactions to tailor the invariant dynamics for a specific quantum
information processing task. A minor point that will be discussed relates to
the controversy regarding the family L$_{ab_{4}}$, which is pointed out in ref.
\cite{chte07} to be a subclass of L$_{abc_{3}}$ with $a=c$, while in
\cite{li09} it has been shown that L$_{ab_{4}}$ and L$_{abc_{3}}$ belong to
distinct SLOCC classes.
\section{Negativity Fonts and two qubit invariants}
In this section, we briefly review the concepts of global partial transpose
\cite{pere96}, global negativity \cite{zycz98,vida02}, $K-$way partial
transpose \cite{shar09} and $K-$way negativity fonts \cite{shar101,shar102}.
We also identify those two qubit invariants which determine the entanglement
of a pair of qubits in a three qubit state.
A general N-qubit pure state may be written as
\begin{equation}
\left\vert \Psi^{A_{1},A_{2},...A_{N}}\right\rangle =\sum_{i_{1}i_{2}...i_{N}
}a_{i_{1}i_{2}...i_{N}}\left\vert i_{1}i_{2}...i_{N}\right\rangle ,
\label{nstate}
\end{equation}
where $\left\vert i_{1}i_{2}...i_{N}\right\rangle $ are the basis vectors
spanning $2^{N}$ dimensional Hilbert space, and $A_{p}$ is the location of
qubit $p$. The coefficients $a_{i_{1}i_{2}...i_{N}}$ are complex numbers. The
local basis states of a single qubit are labelled by $i_{m}=0$ and $1,$ where
$m=1,...,N$. The global partial transpose of an $N$ qubit state $\widehat
{\rho}=\left\vert \Psi^{A_{1},A_{2},...A_{N}}\right\rangle \left\langle
\Psi^{A_{1},A_{2},...A_{N}}\right\vert $ with respect to qubit at location $p$
is constructed from the matrix elements of $\widehat{\rho}$ through
\begin{equation}
\left\langle i_{1}i_{2}...i_{N}\right\vert \widehat{\rho}_{G}^{T_{A_{p}}
}\left\vert j_{1}j_{2}...j_{N}\right\rangle =\left\langle i_{1}i_{2}
...i_{p-1}j_{p}i_{p+1}...i_{N}\right\vert \widehat{\rho}\left\vert j_{1}
j_{2}...j_{p-1}i_{p}j_{p+1}...j_{N}\right\rangle . \label{ptg}
\end{equation}
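The index rule~\eqref{ptg} amounts to exchanging the tensor axes $i_{p}$ and $j_{p}$ of $\widehat{\rho}$. The following is a minimal numerical sketch (the function name is ours; qubit $1$ is taken as the most significant bit):

```python
import numpy as np

def partial_transpose(rho, N, p):
    """Global partial transpose of an N-qubit state operator with respect
    to the qubit at location p (1-indexed), implemented by swapping the
    row index i_p with the column index j_p."""
    T = rho.reshape((2,) * (2 * N))          # axes i_1..i_N, j_1..j_N
    return T.swapaxes(p - 1, N + p - 1).reshape(2 ** N, 2 ** N)

# example: the Bell state (|00> + |11>)/sqrt(2)
psi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
rho = np.outer(psi, psi)
eigs = np.linalg.eigvalsh(partial_transpose(rho, 2, 1))
```

For the Bell state the partial transpose has one negative eigenvalue $-1/2$, so the negativity $\Vert\rho^{T_{A_1}}\Vert_1-1$ equals $1$.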
To construct a $K-$way partial transpose \cite{shar09}, every matrix element
$\left\langle i_{1}i_{2}...i_{N}\right\vert \widehat{\rho}\left\vert
j_{1}j_{2}...j_{N}\right\rangle $ is labelled by a number $K=\sum
\limits_{m=1}^{N}(1-\delta_{i_{m},j_{m}}),$ where $\delta_{i_{m},j_{m}}=1$ for
$i_{m}=j_{m}$, and $\delta_{i_{m},j_{m}}=0$ for $i_{m}\neq j_{m}$. Matrix
elements of state operator with a given $K$ represent $K-$way coherences
present in the state. Local operations on a quantum superposition transform
$K-$way coherences to $K\pm1$ way coherences. The $K-$way partial transpose
of\ $\widehat{\rho}$ with respect to subsystem $p$ for $K>2$ is obtained by
selective transposition such that
\begin{align}
\left\langle i_{1}i_{2}...i_{N}\right\vert \widehat{\rho}_{K}^{T_{A_{p}}
}\left\vert j_{1}j_{2}...j_{N}\right\rangle & =\left\langle i_{1}
i_{2}...i_{p-1}j_{p}i_{p+1}...i_{N}\right\vert \widehat{\rho}\left\vert
j_{1}j_{2}...j_{p-1}i_{p}j_{p+1}...j_{N}\right\rangle ,\nonumber\\
\text{if}\quad\sum\limits_{m=1}^{N}(1-\delta_{i_{m},j_{m}}) & =K,\quad
\text{and }\quad\delta_{i_{p},j_{p}}=0 \label{ptk1}
\end{align}
and
\begin{align}
\left\langle i_{1}i_{2}...i_{N}\right\vert \widehat{\rho}_{K}^{T_{A_{p}}
}\left\vert j_{1}j_{2}...j_{N}\right\rangle & =\left\langle i_{1}
i_{2}...i_{N}\right\vert \widehat{\rho}\left\vert j_{1}j_{2}...j_{N}
\right\rangle ,\nonumber\\
\text{if}\quad\sum\limits_{m=1}^{N}(1-\delta_{i_{m},j_{m}}) & \neq K,
\label{ptk2}
\end{align}
while
\begin{align}
\left\langle i_{1}i_{2}...i_{N}\right\vert \widehat{\rho}_{2}^{T_{p}
}\left\vert j_{1}j_{2}...j_{N}\right\rangle & =\left\langle i_{1}
i_{2}...i_{p-1}j_{p}i_{p+1}...i_{N}\right\vert \widehat{\rho}\left\vert
j_{1}j_{2}...j_{p-1}i_{p}j_{p+1}...j_{N}\right\rangle ,\nonumber\\
\text{if}\quad\sum\limits_{m=1}^{N}(1-\delta_{i_{m},j_{m}}) & =1\text{ or
}2,\quad\text{and }\quad\delta_{i_{p},j_{p}}=0 \label{pt21}
\end{align}
and
\begin{align}
\left\langle i_{1}i_{2}...i_{N}\right\vert \widehat{\rho}_{2}^{T_{p}
}\left\vert j_{1}j_{2}...j_{N}\right\rangle & =\left\langle i_{1}
i_{2}...i_{N}\right\vert \widehat{\rho}\left\vert j_{1}j_{2}...j_{N}
\right\rangle ,\nonumber\\
\text{if}\quad\sum\limits_{m=1}^{N}(1-\delta_{i_{m},j_{m}}) & \neq1\text{ or
}2. \label{pt22}
\end{align}
One can verify that global partial transpose may be expanded as
\begin{equation}
\widehat{\rho}_{G}^{T_{A_{p}}}=
{\textstyle\sum\limits_{K=2}^{N}}
\widehat{\rho}_{K}^{T_{A_{p}}}-\left( N-2\right) \widehat{\rho}.
\label{decomp}
\end{equation}
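The decomposition~\eqref{decomp} can be verified numerically from the definitions~\eqref{ptk1}--\eqref{pt22}. A sketch for $N=3$ follows; the function name and index conventions (qubit $1$ as the most significant bit) are ours:

```python
import numpy as np

def kway_pt(rho, N, p, K):
    """K-way partial transpose with respect to qubit p (1-indexed):
    transpose the (i_p, j_p) pair only on matrix elements whose label
    sum_m (1 - delta_{i_m j_m}) equals K (for K > 2), or equals 1 or 2
    (for K = 2), leaving all other elements untouched."""
    dim = 2 ** N
    out = np.array(rho, dtype=complex)
    for i in range(dim):
        for j in range(dim):
            ib = [(i >> (N - 1 - m)) & 1 for m in range(N)]
            jb = [(j >> (N - 1 - m)) & 1 for m in range(N)]
            label = sum(x != y for x, y in zip(ib, jb))
            hit = label in (1, 2) if K == 2 else label == K
            if hit and ib[p - 1] != jb[p - 1]:
                ib[p - 1], jb[p - 1] = jb[p - 1], ib[p - 1]
                i2 = sum(b << (N - 1 - m) for m, b in enumerate(ib))
                j2 = sum(b << (N - 1 - m) for m, b in enumerate(jb))
                out[i, j] = rho[i2, j2]
    return out

# a random 3-qubit pure state
rng = np.random.default_rng(3)
psi = rng.normal(size=8) + 1j * rng.normal(size=8)
psi /= np.linalg.norm(psi)
rho = np.outer(psi, psi.conj())
# global partial transpose with respect to qubit 1, via an axis swap
rho_G = rho.reshape((2,) * 6).swapaxes(0, 3).reshape(8, 8)
# right-hand side of the expansion: sum over K = 2..N minus (N-2) rho
rho_sum = kway_pt(rho, 3, 1, 2) + kway_pt(rho, 3, 1, 3) - (3 - 2) * rho
```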
Negativity of $\widehat{\rho}^{T_{A_{p}}}$, defined as $N^{A_{p}}=\left(
\left\Vert \rho^{T_{A_{p}}}\right\Vert _{1}-1\right)$, where $\left\Vert
\widehat{\rho}\right\Vert _{1}$ is the trace norm of $\widehat{\rho}$, arises
due to all possible negativity fonts present in $\widehat{\rho}^{T_{p}}$.
Since $\widehat{\rho}$ is a positive operator, the global negativity depends on
the negativity of $K-$way partially transposed operators with $K\geq2$.
To understand the concept of a negativity font in the context of an $N-$qubit
system, consider the state
\begin{align*}
\left\vert \Phi_{K}^{A_{1}A_{2}...A_{N}}\right\rangle & =a_{i_{1}
i_{2}...i_{N}}\left\vert i_{1}i_{2}...i_{N}\right\rangle +a_{i_{1}
+1,i_{2}...i_{N}}\left\vert i_{1}+1,i_{2}...i_{N}\right\rangle \\
& +a_{j_{1}j_{2}...j_{N}}\left\vert j_{1}j_{2}...j_{N}\right\rangle
+a_{j_{1}+1,j_{2}...j_{N}}\left\vert j_{1}+1,j_{2}...j_{N}\right\rangle ,
\end{align*}
with $K=
{\textstyle\sum\limits_{m=1}^{N}}
\left( 1-\delta_{i_{m}j_{m}}\right) $ and $\delta_{i_{1}j_{1}}=0$.\ The
state $\left\vert \Phi_{K}^{A_{1}A_{2}...A_{N}}\right\rangle $ is the product
of a $K-$qubit GHZ-like state with an $(N-K)-$qubit product state. Let
$\widehat{\sigma}_{K}^{T_{A_{1}}}$ be the $K-$way partial transpose of
$\widehat{\sigma}_{K}=\left\vert \Phi_{K}^{A_{1}A_{2}...A_{N}}\right\rangle
\left\langle \Phi_{K}^{A_{1}A_{2}...A_{N}}\right\vert $ with respect to qubit
$A_{1}$. If $\widehat{\rho}$ is a pure state given by $\widehat{\rho
}=\left\vert \Psi^{A_{1}A_{2}...A_{N}}\right\rangle \left\langle \Psi
^{A_{1}A_{2}...A_{N}}\right\vert $, then $\widehat{\sigma}_{K}^{T_{A_{1}}}$ is
a $4\times4$ sub-matrix of $\widehat{\rho}_{G}^{T_{A_{1}}}$ and $\widehat
{\rho}_{K}^{T_{A_{1}}}$ with negative eigenvalue given by
\[
\lambda^{-}=-\left\vert \det\left[
\begin{array}
[c]{cc}
a_{i_{1}i_{2}...i_{N}} & a_{j_{1}+1,j_{2}...j_{N}}\\
a_{i_{1}+1,i_{2}...i_{N}} & a_{j_{1}j_{2}...j_{N}}
\end{array}
\right] \right\vert .
\]
The matrix $\left[
\begin{array}
[c]{cc}
a_{i_{1}i_{2}...i_{N}} & a_{j_{1}+1,j_{2}...j_{N}}\\
a_{i_{1}+1,i_{2}...i_{N}} & a_{j_{1}j_{2}...j_{N}}
\end{array}
\right] $ is referred to as a $K-$way negativity font \cite{shar101,shar102}.
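For $N=2$ this reduces to the familiar statement that the negative eigenvalue of the partial transpose of a pure two qubit state equals minus the modulus of the determinant of its coefficient matrix, which can be checked directly (a numerical sketch with our own variable names):

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
a /= np.linalg.norm(a)                       # coefficients a_{i1 i2}
rho = np.outer(a.reshape(4), a.reshape(4).conj())
# partial transpose with respect to the first qubit
pt = rho.reshape(2, 2, 2, 2).swapaxes(0, 2).reshape(4, 4)
lam_min = np.linalg.eigvalsh(pt).min()
```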
A symbol used to represent a negativity font must identify the qubits that
appear in the $K-$qubit GHZ-like state. Therefore we split the set of $N$ qubits
with their locations and local basis indices given by $T=\left\{ \left(
A_{1}\right) _{i_{1}}\left( A_{2}\right) _{i_{2}}...\left( A_{N}\right)
_{i_{N}}\right\} ,$ into two subsets, with $S_{1,T}$ containing qubits with
local basis indices satisfying $\delta_{i_{m}j_{m}}=0$ ($i_{m}\neq j_{m}$),
and $S_{2,T}$ having qubits for which $\delta_{i_{m}j_{m}}=1$ ($i_{m}=j_{m}$).
To simplify the notation, we represent by $s_{1,T}$, the sequence of local
basis indices for qubits in $S_{1,T}$. A specific negativity font is therefore
represented by
\[
\nu_{S_{2,T}}^{i_{1}i_{2}...i_{N}}=\left[
\begin{array}
[c]{cc}
a_{i_{1}i_{2}...i_{N}} & a_{j_{1}+1,j_{2}...j_{N}}\\
a_{i_{1}+1,i_{2}...i_{N}} & a_{j_{1}j_{2}...j_{N}}
\end{array}
\right] .
\]
A nonzero determinant $D_{S_{2,T}}^{s_{1,T}}=\det\left( \nu_{S_{2,T}}
^{i_{1}i_{2}...i_{N}}\right) $ ensures that $\widehat{\sigma}_{K}^{T_{A_{1}}
}$ is negative. A measurement on the state of a qubit with index in $S_{1,T}$
reduces $\widehat{\sigma}_{K}$ to a separable state, whereas measuring the
state of a qubit in $S_{2,T}$ does not change the negativity of
$\widehat{\sigma}_{K}^{T_{A_{1}}}$. Elementary negativity fonts that quantify
the negativity of $\rho^{T_{A_{p}}}$ for $p\neq1$ are defined in an analogous
fashion. The determinant of a $K-$way negativity font detects $K-$body quantum
correlations in an $N$ qubit state. For even $K$, proper combinations of
determinants of $K-$way negativity fonts are found to be invariant under the
action of local unitary operations on $K$ qubits \cite{shar102}.
For a two qubit state, the negative eigenvalue of the partial transpose is the
invariant that distinguishes between separable and entangled states.
Global negativity of $\left\vert \Psi^{A_{1}A_{2}}\right\rangle =
{\textstyle\sum}
a_{i_{1}i_{2}}\left\vert i_{1}i_{2}\right\rangle $ is determined by
$I_{2}^{A_{1}A_{2}}=\left\vert a_{00}a_{11}-a_{01}a_{10}\right\vert $, which
is invariant under $U^{A_{1}}\otimes U^{A_{2}}$. Here $U^{A_{i}}$ is a local
unitary operator that acts on qubit $A_{i}$. The subscript on $I_{2}
^{A_{1}A_{2}}$ refers to two-body correlations. A two qubit state therefore
has a single negativity font $\nu^{00}=\left[
\begin{array}
[c]{cc}
a_{00} & a_{01}\\
a_{10} & a_{11}
\end{array}
\right] $. In a general three qubit state,
\[
\left\vert \Psi^{A_{1}A_{2}A_{3}}\right\rangle =
{\textstyle\sum}
a_{i_{1}i_{2}i_{3}}\left\vert i_{1}i_{2}i_{3}\right\rangle ,
\]
the number of two-qubit invariants, for a selected pair of qubits, is three.
For the pair $A_{1}A_{2}$, for example, these are the determinants of $2-$way
negativity fonts defined as
\begin{equation}
D_{\left( A_{3}\right) _{i_{3}}}^{00}=\det\left[
\begin{array}
[c]{cc}
a_{00i_{3}} & a_{01i_{3}}\\
a_{10i_{3}} & a_{11i_{3}}
\end{array}
\right] ,\;i_{3}=0,1,\label{2wayd2}
\end{equation}
and the difference $\left( D^{000}-D^{010}\right) =\left( D^{000}
+D^{001}\right) $, where
\begin{equation}
D^{0i_{2}0}=\det\left[
\begin{array}
[c]{cc}
a_{0i_{2}0} & a_{0,i_{2}+1,1}\\
a_{1i_{2}0} & a_{1,i_{2}+1,1}
\end{array}
\right] ,\;i_{2}=0,1,\label{3wayd2}
\end{equation}
is the determinant of a three-way negativity font.
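The invariance of the moduli of these determinants under local unitaries on the pair $A_{1}A_{2}$ can be checked numerically (a sketch; the helper names are ours):

```python
import numpy as np

rng = np.random.default_rng(1)
a = rng.normal(size=(2, 2, 2)) + 1j * rng.normal(size=(2, 2, 2))
a /= np.linalg.norm(a)                       # a random three qubit state

def D2_font(a, i3):
    """Determinant of the 2-way negativity font of Eq. (2wayd2)."""
    return a[0, 0, i3] * a[1, 1, i3] - a[0, 1, i3] * a[1, 0, i3]

def random_su2(rng):
    """A random SU(2) matrix [[alpha, -conj(beta)], [beta, conj(alpha)]]."""
    x = rng.normal(size=4)
    x /= np.linalg.norm(x)
    return np.array([[x[0] + 1j * x[1], -x[2] + 1j * x[3]],
                     [x[2] + 1j * x[3], x[0] - 1j * x[1]]])

U1, U2 = random_su2(rng), random_su2(rng)
# apply U^{A1} x U^{A2} x I to the state coefficients
b = np.einsum('ip,jq,pqk->ijk', U1, U2, a)
```

Since $b_{\cdot\cdot k}=U_{1}\,a_{\cdot\cdot k}\,U_{2}^{T}$, each font determinant picks up the phase $\det U_{1}\det U_{2}$, leaving its modulus unchanged.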
\section{Three-body correlations and three qubit invariants}
Our method was applied in ref. \cite{shar101} to construct the three-tangle
\cite{coff00} and a degree two four qubit invariant which is a function of
determinants of $4-$way negativity fonts. To clarify the process, we review
the three qubit case and show that by using three qubit invariants one may
classify three qubit entangled states into i) states with both three- and two-body
correlations, ii) states with only three-body correlations, and iii) a set of
states with only two-body correlations. Class (i) states are the most general
states. Class (ii) states with GHZ\ type entanglement have the form
\[
\left\vert \Psi^{A_{1}A_{2}A_{3}}\right\rangle =a_{i_{1}i_{2}i_{3}}\left\vert
i_{1}i_{2}i_{3}\right\rangle +a_{i_{1}+1,i_{2}+1,i_{3}+1}\left\vert
i_{1}+1,i_{2}+1,i_{3}+1\right\rangle ,
\]
and Class (iii) contains W-like entangled states and bi-separable states of
three qubits. First of all, we write down the transformation equation for the two
qubit invariant $D_{\left( A_{3}\right) _{1}}^{00}$ to obtain the invariant
which quantifies three-body correlations. The form of this invariant is later
used to identify three qubit invariants in four qubit states. In the absence
of three-body correlations, modified transformation equations yield three
qubit invariants that quantify two body correlations in a three qubit state.
Under a local unitary $U^{A_{3}}=\frac{1}{\sqrt{1+\left\vert x\right\vert
^{2}}}\left[
\begin{array}
[c]{cc}
1 & -x^{\ast}\\
x & 1
\end{array}
\right] $, $D_{\left( A_{3}\right) _{1}}^{00}$ transforms as
\begin{equation}
\left( D_{\left( A_{3}\right) _{1}}^{00}\right) ^{\prime}=\frac
{1}{1+\left\vert x\right\vert ^{2}}\left( D_{\left( A_{3}\right) _{1}}
^{00}+\left( x\right) ^{2}D_{\left( A_{3}\right) _{0}}^{00}+x\left(
D^{000}+D^{001}\right) \right) ,
\end{equation}
such that
\begin{equation}
\left( N_{A_{3}}^{A_{1}A_{2}}\right) ^{2}=\left\vert D_{\left(
A_{3}\right) _{1}}^{00}\right\vert ^{2}+\left\vert D_{\left( A_{3}\right)
_{0}}^{00}\right\vert ^{2}+2\left\vert \left( \frac{D^{000}+D^{001}}
{2}\right) \right\vert ^{2}\label{na1a2}
\end{equation}
is a three qubit invariant. If the pair of qubits $A_{1}A_{2}$ is entangled
then $N_{A_{3}}^{A_{1}A_{2}}\neq0$. We can verify that global negativity of
$\widehat{\rho}_{G}^{T_{A_{1}}}$ is given by
\begin{equation}
\left( N_{G}^{A_{1}}\right) ^{2}=4\left( N_{A_{3}}^{A_{1}A_{2}}\right)
^{2}+4\left( N_{A_{2}}^{A_{1}A_{3}}\right) ^{2},
\end{equation}
where
\begin{equation}
\left( N_{A_{2}}^{A_{1}A_{3}}\right) ^{2}=\left\vert D_{\left(
A_{2}\right) _{1}}^{00}\right\vert ^{2}+\left\vert D_{\left( A_{2}\right)
_{0}}^{00}\right\vert ^{2}+2\left\vert \left( \frac{D^{000}-D^{001}}
{2}\right) \right\vert ^{2}.
\end{equation}
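These relations can be verified numerically on a random state, using the convention $D^{001}=-D^{010}$ stated above (a sketch with our own variable names):

```python
import numpy as np

rng = np.random.default_rng(2)
a = rng.normal(size=(2, 2, 2)) + 1j * rng.normal(size=(2, 2, 2))
a /= np.linalg.norm(a)

# 2-way font determinants for the pairs A1A2 (A3 fixed) and A1A3 (A2 fixed)
D_A3 = [a[0, 0, k] * a[1, 1, k] - a[0, 1, k] * a[1, 0, k] for k in (0, 1)]
D_A2 = [a[0, k, 0] * a[1, k, 1] - a[0, k, 1] * a[1, k, 0] for k in (0, 1)]
# 3-way font determinants, with the convention D^{001} = -D^{010}
D000 = a[0, 0, 0] * a[1, 1, 1] - a[0, 1, 1] * a[1, 0, 0]
D001 = a[0, 0, 1] * a[1, 1, 0] - a[0, 1, 0] * a[1, 0, 1]

NA3sq = abs(D_A3[0])**2 + abs(D_A3[1])**2 + 2 * abs((D000 + D001) / 2)**2
NA2sq = abs(D_A2[0])**2 + abs(D_A2[1])**2 + 2 * abs((D000 - D001) / 2)**2

# global negativity of the partial transpose with respect to A1
rho = np.outer(a.reshape(8), a.reshape(8).conj())
pt = rho.reshape((2,) * 6).swapaxes(0, 3).reshape(8, 8)
NG = np.abs(np.linalg.eigvalsh(pt)).sum() - 1
```

The check works because the six determinants are exactly the $2\times2$ minors of the $2\times4$ coefficient matrix of the split $A_{1}|A_{2}A_{3}$, and the Cauchy--Binet formula ties the sum of their squared moduli to the squared global negativity.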
The discriminant of $\left( D_{\left( A_{3}\right) _{1}}^{00}\right)
^{\prime}=0$, yields three qubit invariant
\begin{equation}
I_{3}^{A_{1}A_{2}A_{3}}=\left( D^{000}+D^{001}\right) ^{2}-4D_{\left(
A_{3}\right) _{0}}^{00}D_{\left( A_{3}\right) _{1}}^{00}\text{,}
\label{3way}
\end{equation}
which is a polynomial invariant of degree four in coefficients $a_{i_{1}
i_{2}i_{3}}$. The subscript in $I_{3}^{A_{1}A_{2}A_{3}}$ refers to three-body
correlations of the type present in a three qubit GHZ state. The terms
$D^{000}-D^{010}$, $D_{\left( A_{3}\right) _{0}}^{00}$, and $D_{\left(
A_{3}\right) _{1}}^{00}$ vanish on a product state of qubits $A_{1}$ and
$A_{2}$. On the state
\begin{equation}
\left\vert \Psi^{A_{1}A_{2}}\right\rangle \left\vert \Psi^{A_{3}}\right\rangle
=
{\textstyle\sum_{i_{1}i_{2}i_{3}}}
a_{i_{1}i_{2}}\left\vert i_{1}i_{2}\right\rangle \left( b_{0}\left\vert
0\right\rangle +b_{1}\left\vert 1\right\rangle \right) ;\quad\left(
i_{m}=0,1\right) ,
\end{equation}
with $D^{00}\neq0$, we have $D_{\left( A_{3}\right) _{0}}^{00}=\left(
b_{0}\right) ^{2}D^{00}$, $D_{\left( A_{3}\right) _{1}}^{00}=\left(
b_{1}\right) ^{2}D^{00}$, and $D^{000}=D^{001}=b_{0}b_{1}D^{00}$, so that
$I_{3}^{A_{1}A_{2}A_{3}}=0$. The modulus of $I_{3}^{A_{1}A_{2}A_{3}}$ quantifies
the entanglement of qubits $A_{1}A_{2}A_{3}$ due to three body correlations.
Three tangle \cite{coff00}, $\tau_{3}=4\left\vert I_{3}^{A_{1}A_{2}A_{3}
}\right\vert $, is a well known entanglement monotone.
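As a consistency check, the invariant~\eqref{3way} reproduces the known values of the three tangle on the GHZ and W states, $\tau_{3}=1$ and $\tau_{3}=0$ respectively (a numerical sketch; the convention $D^{001}=-D^{010}$ is used, and the function name is ours):

```python
import numpy as np

def I3(a):
    """The three qubit invariant of Eq. (3way) from the coefficient
    tensor a[i1, i2, i3], with D^{001} = -D^{010} as in the text."""
    D000 = a[0, 0, 0] * a[1, 1, 1] - a[0, 1, 1] * a[1, 0, 0]
    D001 = a[0, 0, 1] * a[1, 1, 0] - a[0, 1, 0] * a[1, 0, 1]
    D0 = a[0, 0, 0] * a[1, 1, 0] - a[0, 1, 0] * a[1, 0, 0]   # D^{00}_{(A3)_0}
    D1 = a[0, 0, 1] * a[1, 1, 1] - a[0, 1, 1] * a[1, 0, 1]   # D^{00}_{(A3)_1}
    return (D000 + D001)**2 - 4 * D0 * D1

ghz = np.zeros((2, 2, 2))
ghz[0, 0, 0] = ghz[1, 1, 1] = 1 / np.sqrt(2)
w = np.zeros((2, 2, 2))
w[0, 0, 1] = w[0, 1, 0] = w[1, 0, 0] = 1 / np.sqrt(3)
```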
For a general three qubit state with $I_{3}^{A_{1}A_{2}A_{3}}=0$,
the determinants of two-way fonts transform as
\[
\left( D_{\left( A_{3}\right) _{0}}^{00}\right) ^{\prime}=\frac
{1}{1+\left\vert x\right\vert ^{2}}\left( x^{\ast}\sqrt{D_{\left(
A_{3}\right) _{1}}^{00}}-\sqrt{D_{\left( A_{3}\right) _{0}}^{00}}\right)
^{2},
\]
\[
\left( D_{\left( A_{3}\right) _{1}}^{00}\right) ^{\prime}=\frac
{1}{1+\left\vert x\right\vert ^{2}}\left( x\sqrt{D_{\left( A_{3}\right)
_{0}}^{00}}+\sqrt{D_{\left( A_{3}\right) _{1}}^{00}}\right) ^{2},
\]
therefore
\begin{equation}
N_{A_{3}}^{A_{1}A_{2}}=\left\vert \left( D_{\left( A_{3}\right) _{0}}
^{00}\right) ^{\prime}\right\vert +\left\vert \left( D_{\left(
A_{3}\right) _{1}}^{00}\right) ^{\prime}\right\vert =\left\vert D_{\left(
A_{3}\right) _{0}}^{00}\right\vert +\left\vert D_{\left( A_{3}\right) _{1}
}^{00}\right\vert ,
\end{equation}
is a three qubit invariant. In other words, if $I_{3}^{A_{1}A_{2}A_{3}}=0$, then
$N_{A_{3}}^{A_{1}A_{2}}$ quantifies two body correlations of the pair
$A_{1}A_{2}$. One can verify that $\left\vert D_{\left( A_{m}\right) _{0}
}^{00}\right\vert +\left\vert D_{\left( A_{m}\right) _{1}}^{00}\right\vert $
($m=1,2,3$), are three qubit invariants in this case. The sum of product
invariants
\begin{align}
I_{2}^{A_{1}A_{2}A_{3}} & =3
{\textstyle\sum\limits_{\substack{i,j=1\\(i<j)}}^{3}}
\left( \left\vert D_{\left( A_{i}\right) _{0}}^{00}\right\vert +\left\vert
D_{\left( A_{i}\right) _{1}}^{00}\right\vert \right) \left( \left\vert
D_{\left( A_{j}\right) _{0}}^{00}\right\vert +\left\vert D_{\left(
A_{j}\right) _{1}}^{00}\right\vert \right) ,\nonumber\\
& =3\left( N_{A_{1}}^{A_{2}A_{3}}N_{A_{2}}^{A_{1}A_{3}}+N_{A_{1}}
^{A_{2}A_{3}}N_{A_{3}}^{A_{1}A_{2}}+N_{A_{2}}^{A_{1}A_{3}}N_{A_{3}}
^{A_{1}A_{2}}\right) \label{i23qubit}
\end{align}
detects W-like tripartite entanglement. It is zero on bi-separable states, for
which only one of the three $N_{A_{m}}^{A_{i}A_{j}}=\left\vert D_{\left(
A_{m}\right) _{0}}^{00}\right\vert +\left\vert D_{\left( A_{m}\right) _{1}
}^{00}\right\vert $ ($i\neq j\neq m$) is non-zero, and equals one on a three
qubit W-state. Major classes of three qubit states are uniquely defined by the
values of the polynomial invariants $4\left\vert I_{3}^{A_{1}A_{2}A_{3}
}\right\vert $, $\left( N_{G}^{A_{1}}\right) ^{2}-4\left\vert I_{3}
^{A_{1}A_{2}A_{3}}\right\vert $, and $I_{2}^{A_{1}A_{2}A_{3}}$.
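These three qubit quantities are easy to evaluate directly from the amplitudes. The following sketch (Python/NumPy, purely as illustration; the index conventions chosen for the individual determinants are our own assumption and may differ from the paper's by overall signs, which drop out of the moduli used here) reproduces $\tau_{3}=1$ on the GHZ state and $\tau_{3}=0$, $I_{2}^{A_{1}A_{2}A_{3}}=1$ on the W state:

```python
import numpy as np

def two_way_dets(a):
    """Determinants D^{00}_{(A3)_{i3}} of the two-way negativity fonts of
    the pair A1A2, with qubit A3 fixed in |i3>; a is a 2x2x2 tensor."""
    return [a[0,0,i3]*a[1,1,i3] - a[0,1,i3]*a[1,0,i3] for i3 in (0,1)]

def three_way_dets(a):
    """Determinants of the two three-way negativity fonts
    (index conventions assumed here)."""
    return [a[0,i2,0]*a[1,(i2+1)%2,1] - a[0,(i2+1)%2,1]*a[1,i2,0]
            for i2 in (0,1)]

def I3(a):
    """I_3 = (D^{000}+D^{001})^2 - 4 D_{(A3)_0} D_{(A3)_1}."""
    d0, d1 = two_way_dets(a)
    return sum(three_way_dets(a))**2 - 4*d0*d1

def tau3(a):
    """Three tangle, tau_3 = 4 |I_3|."""
    return 4*abs(I3(a))

def N_pair(a, m):
    """N = |D_{(A_m)_0}| + |D_{(A_m)_1}|: two-body correlations of the
    remaining pair, with qubit A_m (0-based axis) fixed."""
    return sum(abs(d) for d in two_way_dets(np.moveaxis(a, m, 2)))

def I2(a):
    """I_2 = 3 * sum_{i<j} N_{A_i} N_{A_j}  (Eq. (i23qubit))."""
    n = [N_pair(a, m) for m in range(3)]
    return 3*(n[0]*n[1] + n[0]*n[2] + n[1]*n[2])
```

On the W state all two-way moduli equal $1/3$, so $I_{2}=3\cdot3\cdot(1/3)^{2}=1$, as stated above.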
\section{Four-body correlations and four-qubit invariants}
Four qubit states live in the Hilbert space $C^{2}\otimes C^{2}\otimes
C^{2}\otimes C^{2}$, with a distinct subspace for each set of three qubits. If
there were no four-body correlations, the three qubit invariants $\left(
I_{3}^{A_{i}A_{j}A_{k}}\right) _{\left( A_{l}\right) _{i_{l}}}$
$(i_{l}=0,1)$ would determine the entanglement of a four qubit state. In
general, additional three qubit invariants that also depend on four-way
negativity fonts exist. For a selected set of three qubits, the three qubit
invariants constitute a five dimensional space and are easily found from the
action of a local unitary on the fourth qubit. To write down the
transformation equations for three qubit invariants, we first identify the
two qubit invariants.
In the most general four qubit state
\begin{equation}
\left\vert \Psi^{A_{1}A_{2}A_{3}A_{4}}\right\rangle =
{\textstyle\sum_{i_{1}i_{2}i_{3}i_{4}}}
a_{i_{1}i_{2}i_{3}i_{4}}\left\vert i_{1}i_{2}i_{3}i_{4}\right\rangle
;\quad\left( i_{m}=0,1\right) , \label{4qubitstate}
\end{equation}
when the state of qubit $A_{1}$ is transposed, we are looking at the
entanglement of qubit $A_{1}$ with the rest of the system. Qubit $A_{1}$ may
have pairwise entanglement with qubits $A_{2}$, $A_{3}$, or $A_{4}$. For a
given pair, there are four two-way two qubit invariants (the remaining pair of
qubits being in state $\left\vert 00\right\rangle $, $\left\vert
10\right\rangle $, $\left\vert 01\right\rangle $, or $\left\vert
11\right\rangle $). For example, the determinants of two-way negativity fonts
for the pair $A_{1}A_{2}$, written as
\begin{equation}
D_{\left( A_{3}\right) _{i_{3}}\left( A_{4}\right) _{i_{4}}}^{00}
=\det\left[
\begin{array}
[c]{cc}
a_{00i_{3}i_{4}} & a_{01i_{3}i_{4}}\\
a_{10i_{3}i_{4}} & a_{11i_{3}i_{4}}
\end{array}
\right] \text{, \ }\left( i_{3},i_{4}=0,1\right) ,
\end{equation}
are invariant with respect to unitaries on qubits $A_{1}$ and $A_{2}$.
Three-way coherences generate the two qubit invariants $D_{\left(
A_{4}\right) _{i_{4}}}^{000}-D_{\left( A_{4}\right) _{i_{4}}}^{010}$
$\left( i_{4}=0,1\right) $ and $D_{\left( A_{3}\right) _{i_{3}}}^{000}
-D_{\left( A_{3}\right) _{i_{3}}}^{010}$ $\left( i_{3}=0,1\right) $ for
the pair $A_{1}A_{2}$. Here the determinants of three-way fonts for
\{$A_{1}A_{2}A_{3}$\} and \{$A_{1}A_{2}A_{4}$\}, respectively, are defined as
\begin{equation}
D_{\left( A_{4}\right) _{i_{4}}}^{0i_{2}0}=\det\left[
\begin{array}
[c]{cc}
a_{0,i_{2},0,i_{4}} & a_{0,i_{2}+1,1,i_{4}}\\
a_{1,i_{2},0,i_{4}} & a_{1,i_{2}+1,1,i_{4}}
\end{array}
\right] ,\text{ \ }\left( i_{2},i_{4}=0,1\right) ,
\end{equation}
and
\begin{equation}
D_{\left( A_{3}\right) _{i_{3}}}^{0i_{2}0}=\det\left[
\begin{array}
[c]{cc}
a_{0,i_{2},i_{3},0} & a_{0,i_{2}+1,i_{3},1}\\
a_{1,i_{2},i_{3},0} & a_{1,i_{2}+1,i_{3},1}
\end{array}
\right] ,\text{ \ }\left( i_{2},i_{3}=0,1\right) .
\end{equation}
If four-way negativity fonts are present, then additional $A_{1}A_{2}$
invariants, $D^{0000}-D^{0100}$ and $D^{0001}-D^{0101}$, are to be
considered. The determinants of four-way negativity fonts are given by
\begin{equation}
D^{0i_{2}0i_{4}}=\det\left[
\begin{array}
[c]{cc}
a_{0,i_{2},0,i_{4}} & a_{0,i_{2}+1,1,i_{4}+1}\\
a_{1,i_{2},0,i_{4}} & a_{1,i_{2}+1,1,i_{4}+1}
\end{array}
\right] ,\text{ \ }\left( i_{2},i_{4}=0,1\right) .
\end{equation}
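The font determinants above are simple $2\times2$ minors of the amplitude tensor. A minimal numerical sketch (Python/NumPy; the function names and the labelling of fonts by pairs of fixed indices are our own assumptions) tabulates them and checks the claimed invariance of the two-way determinants for the pair $A_{1}A_{2}$ under determinant-one unitaries acting on qubits $A_{1}$ and $A_{2}$:

```python
import numpy as np

def D2(a, i3, i4):
    """Two-way font determinant D^{00}_{(A3)_{i3}(A4)_{i4}} of pair A1A2."""
    return a[0,0,i3,i4]*a[1,1,i3,i4] - a[0,1,i3,i4]*a[1,0,i3,i4]

def D3_A4(a, i2, i4):
    """Three-way font determinant for {A1,A2,A3}, qubit A4 fixed in |i4>."""
    j = (i2+1) % 2
    return a[0,i2,0,i4]*a[1,j,1,i4] - a[0,j,1,i4]*a[1,i2,0,i4]

def D3_A3(a, i2, i3):
    """Three-way font determinant for {A1,A2,A4}, qubit A3 fixed in |i3>."""
    j = (i2+1) % 2
    return a[0,i2,i3,0]*a[1,j,i3,1] - a[0,j,i3,1]*a[1,i2,i3,0]

def D4(a, i2, i4):
    """Four-way font determinant, labelled here by the pair (i2, i4)."""
    j2, j4 = (i2+1) % 2, (i4+1) % 2
    return a[0,i2,0,i4]*a[1,j2,1,j4] - a[0,j2,1,j4]*a[1,i2,0,i4]

def random_su2(rng):
    """Random SU(2) matrix: a unitary from a QR decomposition,
    rescaled so that its determinant is one."""
    q, _ = np.linalg.qr(rng.normal(size=(2,2)) + 1j*rng.normal(size=(2,2)))
    return q / np.sqrt(np.linalg.det(q))

rng = np.random.default_rng(1)
a = rng.normal(size=(2,2,2,2)) + 1j*rng.normal(size=(2,2,2,2))
a /= np.linalg.norm(a)
U, V = random_su2(rng), random_su2(rng)
b = np.einsum('pi,qj,ijkl->pqkl', U, V, a)  # act on qubits A1 and A2
# each 2x2 block transforms as M -> U M V^T, so its determinant picks up
# det(U) det(V) = 1 and is unchanged
err = max(abs(D2(b,i3,i4) - D2(a,i3,i4)) for i3 in (0,1) for i4 in (0,1))
```

The same check applied to a single three-way or four-way font would fail for unitaries on the remaining qubits; only specific combinations of those fonts are invariant, as developed below.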
The degree two four qubit invariant
\begin{equation}
I_{4}=\left( D^{0000}+D^{0011}-D^{0010}-D^{0001}\right) , \label{i4}
\end{equation}
obtained in \cite{shar101} is the same as the degree two invariant $H$ of ref.
\cite{luqu03}. The entanglement monotone $\tau_{4}=4\left\vert I_{4}
\right\vert $ was called the four-tangle in analogy with the three tangle
\cite{coff00}. In \cite{shar102} our method was successfully applied to derive
degree two $N$-qubit invariants for even $N$ and degree four invariants for
odd $N$ in terms of determinants of negativity fonts. It was also shown that
one may use the method to construct $N$-qubit invariants to detect $M$-qubit
correlations ($M\leq N$) in an $N$-qubit state. As an example, we reported the
degree four invariants $J_{4}^{\left( A_{1}A_{2}\right) }$, $J_{4}^{\left(
A_{1}A_{3}\right) }$, and $J_{4}^{\left( A_{1}A_{4}\right) }$ in ref.
\cite{shar102} and found that $\left( I_{4}\right) ^{2}=\frac{1}{3}\left(
J_{4}^{\left( A_{1}A_{2}\right) }+J_{4}^{\left( A_{1}A_{3}\right) }
+J_{4}^{\left( A_{1}A_{4}\right) }\right) $.
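With the labelling of four-way fonts by the pair $(i_{2},i_{4})$ assumed here, $I_{4}$ expands into the pairing $\sum_{i_{2}i_{3}i_{4}}(-1)^{i_{2}+i_{3}+i_{4}}a_{0i_{2}i_{3}i_{4}}a_{1\bar{\imath}_{2}\bar{\imath}_{3}\bar{\imath}_{4}}$, which is unchanged by determinant-one local operations on every qubit. A hedged numerical sketch of that invariance:

```python
import numpy as np

def D4(a, i2, i4):
    """Four-way font determinant, labelled by (i2, i4) (our convention)."""
    j2, j4 = (i2+1) % 2, (i4+1) % 2
    return a[0,i2,0,i4]*a[1,j2,1,j4] - a[0,j2,1,j4]*a[1,i2,0,i4]

def I4(a):
    """I_4 = D^{0000} + D^{0011} - D^{0010} - D^{0001}  (Eq. (i4))."""
    return D4(a,0,0) + D4(a,1,1) - D4(a,1,0) - D4(a,0,1)

def random_su2(rng):
    """Random unitary with determinant one."""
    q, _ = np.linalg.qr(rng.normal(size=(2,2)) + 1j*rng.normal(size=(2,2)))
    return q / np.sqrt(np.linalg.det(q))

rng = np.random.default_rng(2)
a = rng.normal(size=(2,2,2,2)) + 1j*rng.normal(size=(2,2,2,2))
a /= np.linalg.norm(a)
Us = [random_su2(rng) for _ in range(4)]
b = np.einsum('pi,qj,rk,sl,ijkl->pqrs', *Us, a)  # local SU(2) on each qubit

ghz = np.zeros((2,2,2,2))
ghz[0,0,0,0] = ghz[1,1,1,1] = 2**-0.5  # only the font D^{0000} survives
```

On the GHZ state only the single four-way font $a_{0000}a_{1111}=1/2$ contributes, so $I_{4}=1/2$.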
Presently, we focus on the set $A_{1}A_{2}A_{3}$ of three qubits in the state
$\left\vert \Psi^{A_{1}A_{2}A_{3}A_{4}}\right\rangle $ (Eq.
(\ref{4qubitstate})), viewed as
\begin{equation}
\left\vert \Psi^{A_{1}A_{2}A_{3}A_{4}}\right\rangle =\left\vert \Psi_{\left(
A_{4}\right) _{0}}^{A_{1}A_{2}A_{3}}\right\rangle \left\vert 0\right\rangle
+\left\vert \Psi_{\left( A_{4}\right) _{1}}^{A_{1}A_{2}A_{3}}\right\rangle
\left\vert 1\right\rangle ,
\end{equation}
where
\[
\left\vert \Psi_{\left( A_{4}\right) _{i_{4}}}^{A_{1}A_{2}A_{3}
}\right\rangle =
{\textstyle\sum_{i_{1}i_{2}i_{3}}}
a_{i_{1}i_{2}i_{3}i_{4}}\left\vert i_{1}i_{2}i_{3}\right\rangle
;\quad\left( i_{4}=0,1\right) .
\]
Three qubit invariants
\begin{equation}
\left( I_{3}^{A_{1}A_{2}A_{3}}\right) _{\left( A_{4}\right) _{i_{4}}
}=\left( D_{_{\left( A_{4}\right) _{i_{4}}}}^{000}+D_{_{\left(
A_{4}\right) _{i_{4}}}}^{001}\right) ^{2}-4D_{\left( A_{3}\right)
_{0}\left( A_{4}\right) _{i_{4}}}^{00}D_{\left( A_{3}\right) _{1}\left(
A_{4}\right) _{i_{4}}}^{00};\quad i_{4}=0,1,
\end{equation}
quantify GHZ-state-like three-way correlations in the three qubit subspace
$C^{2}\otimes C^{2}\otimes C^{2}$. Continuing the search for a four qubit
invariant that detects four qubit correlations, we examine the transformation
of the three qubit invariant $\left( I_{3}^{A_{1}A_{2}A_{3}}\right) _{\left(
A_{4}\right) _{1}}$ under U$^{A_{4}}=\frac{1}{\sqrt{1+\left\vert y\right\vert
^{2}}}\left[
\begin{array}
[c]{cc}
1 & -y^{\ast}\\
y & 1
\end{array}
\right] $. The resulting transformation equation is
\begin{align}
& \left( I_{3}^{A_{1}A_{2}A_{3}}\right) _{\left( A_{4}\right) _{1}
}^{\prime}=\frac{1}{\left( 1+\left\vert y\right\vert ^{2}\right) ^{2}
}\left[ y^{4}\left( I_{3}^{A_{1}A_{2}A_{3}}\right) _{\left( A_{4}\right)
_{0}}+4y^{3}P_{\left( A_{4}\right) _{0}}^{A_{1}A_{2}A_{3}}\right.
\nonumber\\
& \left. +6y^{2}T_{A_{4}}^{A_{1}A_{2}A_{3}}+4yP_{\left( A_{4}\right) _{1}
}^{A_{1}A_{2}A_{3}}+\left( I_{3}^{A_{1}A_{2}A_{3}}\right) _{\left(
A_{4}\right) _{1}}\right] ,\label{i3}
\end{align}
where
\begin{align}
T_{A_{4}}^{A_{1}A_{2}A_{3}} & =\frac{1}{6}\left( D^{0000}+D^{0011}
+D^{0010}+D^{0001}\right) ^{2}\nonumber\\
& -\frac{2}{3}\left( D_{\left( A_{3}\right) _{0}}^{000}+D_{\left(
A_{3}\right) _{0}}^{001}\right) \left( D_{\left( A_{3}\right) _{1}}
^{000}+D_{\left( A_{3}\right) _{1}}^{001}\right) \nonumber\\
& +\frac{1}{3}\left( D_{_{\left( A_{4}\right) _{0}}}^{000}+D_{_{\left(
A_{4}\right) _{0}}}^{001}\right) \left( D_{_{\left( A_{4}\right) _{1}}
}^{000}+D_{_{\left( A_{4}\right) _{1}}}^{001}\right) \nonumber\\
& -\frac{2}{3}\left( D_{\left( A_{3}\right) _{0}\left( A_{4}\right)
_{0}}^{00}D_{\left( A_{3}\right) _{1}\left( A_{4}\right) _{1}}
^{00}+D_{\left( A_{3}\right) _{0}\left( A_{4}\right) _{1}}^{00}D_{\left(
A_{3}\right) _{1}\left( A_{4}\right) _{0}}^{00}\right) ,
\end{align}
and
\begin{align}
P_{\left( A_{4}\right) _{i_{4}}}^{A_{1}A_{2}A_{3}} & =\frac{1}{2}\left(
D_{_{\left( A_{4}\right) _{i_{4}}}}^{000}+D_{_{\left( A_{4}\right)
_{i_{4}}}}^{001}\right) \left( D^{0000}+D^{0011}+D^{0010}+D^{0001}\right)
\nonumber\\
& -\left( D_{\left( A_{3}\right) _{1}\left( A_{4}\right) _{i_{4}}}
^{00}\left( D_{\left( A_{3}\right) _{0}}^{000}+D_{\left( A_{3}\right)
_{0}}^{001}\right) +D_{\left( A_{3}\right) _{0}\left( A_{4}\right)
_{i_{4}}}^{00}\left( D_{\left( A_{3}\right) _{1}}^{000}+D_{\left(
A_{3}\right) _{1}}^{001}\right) \right) .
\end{align}
The discriminant of a quartic equation, $y^{4}a-4by^{3}+6y^{2}c-4dy+f=0$, in
the variable $y$ is $\Delta=S^{3}-27T^{2}$, where $S=3c^{2}-4bd+af$ and
$T=acf-ad^{2}-b^{2}f+2bcd-c^{3}$ (the cubic invariant) are polynomial
invariants. When a selected U$^{A_{4}}$ results in $\left( \left(
I_{3}^{A_{1}A_{2}A_{3}}\right) _{\left( A_{4}\right) _{1}}\right)
^{\prime}=0$ (Eq. (\ref{i3})), the associated polynomial invariant is
\begin{align}
I_{(4,8)}^{A_{1}A_{2}A_{3}A_{4}} & =3\left( T_{A_{4}}^{A_{1}A_{2}A_{3}
}\right) ^{2}-4P_{\left( A_{4}\right) _{0}}^{A_{1}A_{2}A_{3}}P_{\left(
A_{4}\right) _{1}}^{A_{1}A_{2}A_{3}}\nonumber\\
& +\left( I_{3}^{A_{1}A_{2}A_{3}}\right) _{\left( A_{4}\right) _{0}
}\left( I_{3}^{A_{1}A_{2}A_{3}}\right) _{\left( A_{4}\right) _{1}
},\label{i48}
\end{align}
which is a four qubit invariant of degree eight expressed in terms of three
qubit invariants for $A_{1}A_{2}A_{3}$. In order to distinguish it from the
degree $2$ invariant $I_{4}$, the degree of the invariant has been added to
the subscript. By construction, the four qubit invariant
$I_{(4,8)}^{A_{1}A_{2}A_{3}A_{4}}$ is a combination of three qubit
($A_{1}A_{2}A_{3}$) invariants. It is easily verified that on a state which is
a product of $\left\vert \Psi^{A_{1}A_{2}A_{3}}\right\rangle =
{\textstyle\sum_{i_{1}i_{2}i_{3}}}
a_{i_{1}i_{2}i_{3}}\left\vert i_{1}i_{2}i_{3}\right\rangle $ with
$I_{3}^{A_{1}A_{2}A_{3}}\neq0$ and $\left\vert \Psi^{A_{4}}\right\rangle
=d_{0}\left\vert 0\right\rangle +d_{1}\left\vert 1\right\rangle ,$ we obtain
\begin{equation}
\left( I_{3}^{A_{1}A_{2}A_{3}}\right) _{\left( A_{4}\right) _{i_{4}}
}=d_{i_{4}}^{4}I_{3}^{A_{1}A_{2}A_{3}},\quad T_{A_{4}}^{A_{1}A_{2}A_{3}
}=d_{0}^{2}d_{1}^{2}I_{3}^{A_{1}A_{2}A_{3}},\quad P_{\left( A_{4}\right)
_{i_{4}}}^{A_{1}A_{2}A_{3}}=d_{0}d_{1}d_{i_{4}}^{2}I_{3}^{A_{1}A_{2}A_{3}},
\end{equation}
leading to $I_{(4,8)}^{A_{1}A_{2}A_{3}A_{4}}=0$. Likewise, $I_{(4,8)}
^{A_{1}A_{2}A_{3}A_{4}}$ vanishes on the product state $\left\vert \Psi
^{A_{1}A_{2}}\right\rangle \left\vert \Psi^{A_{3}A_{4}}\right\rangle $, where
$\left\vert \Psi^{A_{1}A_{2}}\right\rangle =
{\textstyle\sum_{i_{1}i_{2}}}
a_{i_{1}i_{2}}\left\vert i_{1}i_{2}\right\rangle $ and $\left\vert \Psi
^{A_{3}A_{4}}\right\rangle =
{\textstyle\sum_{i_{3}i_{4}}}
b_{i_{3}i_{4}}\left\vert i_{3}i_{4}\right\rangle $. In addition,
$I_{(4,8)}^{A_{1}A_{2}A_{3}A_{4}}=0$ on a four qubit W-like state and on all
entangled states with only three- and two-body correlations, as seen in
Section IV.
The cubic invariant associated with Eq. (\ref{i3}) is
\begin{equation}
J^{A_{1}A_{2}A_{3}A_{4}}=\det\left[
\begin{array}
[c]{ccc}
\left( I_{3}^{A_{1}A_{2}A_{3}}\right) _{\left( A_{4}\right) _{1}} &
P_{\left( A_{4}\right) _{1}}^{A_{1}A_{2}A_{3}} & T_{A_{4}}^{A_{1}A_{2}A_{3}
}\\
P_{\left( A_{4}\right) _{1}}^{A_{1}A_{2}A_{3}} & T_{A_{4}}^{A_{1}A_{2}A_{3}}
& P_{\left( A_{4}\right) _{0}}^{A_{1}A_{2}A_{3}}\\
T_{A_{4}}^{A_{1}A_{2}A_{3}} & P_{\left( A_{4}\right) _{0}}^{A_{1}A_{2}A_{3}}
& \left( I_{3}^{A_{1}A_{2}A_{3}}\right) _{\left( A_{4}\right) _{0}}
\end{array}
\right] ,
\end{equation}
while the discriminant reads as
\begin{equation}
\Delta=\left( I_{\left( 4,8\right) }^{A_{1}A_{2}A_{3}A_{4}}\right)
^{3}-27\left( J^{A_{1}A_{2}A_{3}A_{4}}\right) ^{2}\text{.}
\end{equation}
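With the identifications $a=\left( I_{3}^{A_{1}A_{2}A_{3}}\right) _{\left( A_{4}\right) _{0}}$, $b=-P_{\left( A_{4}\right) _{0}}^{A_{1}A_{2}A_{3}}$, $c=T_{A_{4}}^{A_{1}A_{2}A_{3}}$, $d=-P_{\left( A_{4}\right) _{1}}^{A_{1}A_{2}A_{3}}$, and $f=\left( I_{3}^{A_{1}A_{2}A_{3}}\right) _{\left( A_{4}\right) _{1}}$ read off from Eq. (\ref{i3}), the determinant $J^{A_{1}A_{2}A_{3}A_{4}}$ is precisely the cubic invariant $acf-ad^{2}-b^{2}f+2bcd-c^{3}$ of the quartic, and $S=I_{(4,8)}^{A_{1}A_{2}A_{3}A_{4}}$. A quick numerical confirmation of this algebra with random stand-in values (hypothetical numbers, not amplitudes of any particular state):

```python
import numpy as np

rng = np.random.default_rng(4)
I30, P0, T, P1, I31 = rng.normal(size=5)   # stand-ins for the invariants

# J as the 3x3 determinant printed above
M = np.array([[I31, P1, T],
              [P1,  T,  P0],
              [T,   P0, I30]])
J = np.linalg.det(M)

# the cubic invariant of the quartic y^4 a - 4b y^3 + 6c y^2 - 4d y + f
a, b, c, d, f = I30, -P0, T, -P1, I31
T_cubic = a*c*f - a*d**2 - b**2*f + 2*b*c*d - c**3

S = 3*c**2 - 4*b*d + a*f           # equals I_{(4,8)}
Delta = S**3 - 27*J**2             # the discriminant
```

The two sign flips in $b$ and $d$ cancel in every term of $S$ and $T_{\text{cubic}}$ that contains them an even number of times, which is why the matrix can be written with $P_{\left( A_{4}\right) _{0}}$ and $P_{\left( A_{4}\right) _{1}}$ directly.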
Since there are four ways in which a given set of three qubits may be
selected, $\Delta$ can be expressed in terms of different sets of three qubit
invariants. In addition, Eq. (\ref{i3}) also leads to
\begin{align}
\left( N_{A_{4}}^{A_{1}A_{2}A_{3}}\right) ^{2} & =\left\vert \left(
I_{3}^{A_{1}A_{2}A_{3}}\right) _{\left( A_{4}\right) _{0}}\right\vert
^{2}+\left\vert \left( I_{3}^{A_{1}A_{2}A_{3}}\right) _{\left(
A_{4}\right) _{1}}\right\vert ^{2}\nonumber\\
& +6\left\vert T_{A_{4}}^{A_{1}A_{2}A_{3}}\right\vert ^{2}+4\left\vert
P_{\left( A_{4}\right) _{0}}^{A_{1}A_{2}A_{3}}\right\vert ^{2}+4\left\vert
P_{\left( A_{4}\right) _{1}}^{A_{1}A_{2}A_{3}}\right\vert ^{2},
\end{align}
which is a four qubit invariant analogous to $\left( N_{A_{3}}^{A_{1}A_{2}
}\right) ^{2}$ (Eq. (\ref{na1a2})) for three qubit states. In general, one
can construct an invariant $N_{A_{l}}^{A_{i}A_{j}A_{k}}$ ($i\neq j\neq k\neq
l$) for a selected three qubit subsystem $A_{i}A_{j}A_{k}$ of a four qubit
state. In analogy with the global negativity, one may define a four qubit
invariant of degree four,
\begin{equation}
\left( N_{(4,4)}^{A_{1}}\right) ^{2}=16\left( N_{A_{4}}^{A_{1}A_{2}A_{3}
}\right) ^{2}+16\left( N_{A_{3}}^{A_{1}A_{2}A_{4}}\right) ^{2}+16\left(
N_{A_{2}}^{A_{1}A_{3}A_{4}}\right) ^{2}, \label{na1_48}
\end{equation}
which detects bipartite entanglement of qubit $A_{1}$ with the subsystem
$A_{2}A_{3}A_{4}$ due to three- and four-body quantum correlations. If
$I_{(4,8)}^{A_{1}A_{2}A_{3}A_{4}}=0$, but at least two of the $N_{A_{l}
}^{A_{i}A_{j}A_{k}}$ are finite, then four-partite entanglement can be due to
three- and two-body correlations. In this case the invariant that detects
entanglement may be defined as
\begin{align}
N_{(4,8)}^{A_{1}A_{2}A_{3}A_{4}} & =16N_{A_{1}}^{A_{2}A_{3}A_{4}}N_{A_{2}
}^{A_{1}A_{3}A_{4}}+16\left( N_{A_{1}}^{A_{2}A_{3}A_{4}}+N_{A_{2}}
^{A_{1}A_{3}A_{4}}\right) N_{A_{3}}^{A_{1}A_{2}A_{4}}\nonumber\\
& +16\left( N_{A_{1}}^{A_{2}A_{3}A_{4}}+N_{A_{2}}^{A_{1}A_{3}A_{4}}
+N_{A_{3}}^{A_{1}A_{2}A_{4}}\right) N_{A_{4}}^{A_{1}A_{2}A_{3}}.
\end{align}
On the other hand, if we have a state on which all $N_{A_{l}}^{A_{i}A_{j}
A_{k}}$ are zero, then the quantities $I_{A_{r}A_{s}}^{A_{p}A_{q}}=\sum
_{i_{r}i_{s}}\left\vert D_{\left( A_{r}\right) _{i_{r}}\left( A_{s}\right)
_{i_{s}}}^{00}\right\vert $ ($p\neq q\neq r\neq s=1$ to $4$) turn out to be
four qubit invariants. A different class of entangled states is obtained if
only one of the $N_{A_{l}}^{A_{i}A_{j}A_{k}}$ is non-zero along with a finite
$I_{A_{r}A_{s}}^{A_{p}A_{l}}$. In Section II we noted that $I_{2}^{A_{1}
A_{2}A_{3}}$ (Eq. (\ref{i23qubit})) detects W-like entanglement of the three
qubits $A_{1}A_{2}A_{3}$. Likewise, when $I_{(4,8)}^{A_{1}A_{2}A_{3}A_{4}
}=N_{(4,8)}^{A_{1}A_{2}A_{3}A_{4}}=0$, the invariant
\begin{align}
I_{(2,6)}^{A_{1}A_{2}A_{3}A_{4}} & =\frac{3}{2}\left( I_{2}^{A_{1}
A_{2}A_{3}}\right) \left( I_{A_{2}A_{3}}^{A_{1}A_{4}}+I_{A_{1}A_{3}}
^{A_{2}A_{4}}+I_{A_{1}A_{2}}^{A_{3}A_{4}}\right) \nonumber\\
& +\frac{3}{2}\left( I_{2}^{A_{1}A_{2}A_{4}}\right) \left( I_{A_{1}A_{4}
}^{A_{2}A_{3}}+I_{A_{1}A_{3}}^{A_{3}A_{4}}\right) +\frac{3}{2}\left(
I_{2}^{A_{1}A_{3}A_{4}}\right) I_{A_{1}A_{3}}^{A_{2}A_{4}} \label{i26}
\end{align}
detects W-like four qubit entanglement. Here $\left( I_{2}^{A_{p}A_{q}A_{r}
}\right) _{A_{s}}=3I_{A_{r}A_{s}}^{A_{p}A_{q}}I_{A_{q}A_{s}}^{A_{p}A_{r}}$
($p\neq q\neq r\neq s=1$ to $4$) is the invariant that detects W-like
entanglement of qubits $A_{p}A_{q}A_{r}$ in a four qubit state.
In ref. \cite{leva06} four qubit invariants have been obtained in terms of
coefficients having geometrical significance. A comparison of Eq. (56) of ref.
\cite{leva06} with our Eq. (\ref{i3}) indicates that their set of invariants
($I_{1}$, $I_{2}$, $I_{3}$, $I_{4}$) may be expressed in terms of our three
qubit invariants, though they are not exactly the same. A method equivalent to
the method of Schl\"{a}fli \cite{gelf94} has been used by Luque and Thibon
\cite{luqu03} to arrive at their Eq. (22),
\[
R(t)=c_{0}t_{0}^{4}+4c_{1}t_{0}^{3}t_{1}+6c_{2}t_{0}^{2}t_{1}
^{2}+4c_{3}t_{0}t_{1}^{3}+c_{4}t_{1}^{4}.
\]
Higher degree invariants are then expressed in terms of the $c_{i}$
coefficients, and computer algebra relates these to the basic four qubit
invariants. Since for $t_{0}=1$ the expression for $R(t)$ has the same form as
Eq. (\ref{i3}), a direct correspondence can be established between the $c_{i}$
coefficients and our three qubit invariants. Such a comparison establishes a
neat connection of our invariants with the projective geometry approach and
with concepts of classical invariant theory.
\section{Invariants and classification of four-qubit states}
The decomposition of the global partial transpose $\widehat{\rho}_{G}
^{T_{A_{p}}}$ of a four qubit state $\left\vert \Psi^{A_{1}A_{2}A_{3}A_{4}
}\right\rangle $ with respect to qubit $A_{p}$ in terms of $K$-way partially
transposed operators (Eq. (\ref{decomp})) reads as
\begin{equation}
\widehat{\rho}_{G}^{T_{A_{p}}}=\sum\limits_{K=2}^{4}\widehat{\rho}
_{K}^{T_{A_{p}}}-2\widehat{\rho}.\label{3n}
\end{equation}
When a state has only $K$-way coherences, we have $\widehat{\rho}
_{G}^{T_{A_{p}}}=\widehat{\rho}_{K}^{T_{A_{p}}}$ for a selected set of $K$
qubits. For a given qubit, the number of $K$-way negativity fonts in a $K$-way
partially transposed matrix varies from $0$ to $4$. Local unitary operations
can be used to annihilate negativity fonts, that is, to obtain a state for
which the determinants of selected negativity fonts are zero. The process
leads to a canonical state, which is a state written in terms of the minimum
number of local basis product states \cite{acin00}. In ref. \cite{shar12}, we
proposed a classification scheme in which an entanglement class is
characterized by the minimal set of $K$-way $\left( 2\leq K\leq4\right) $
partially transposed matrices present in the expansion of the global partial
transpose of the canonical state. The seven possible ways in which the global
partial transpose (GPT) of a four qubit canonical state may be decomposed
correspond to seven major entanglement classes, namely:
I. $\left( \widehat{\rho}_{c}\right) _{G}^{T_{A_{p}}}=\sum\limits_{K=2}
^{4}\left( \widehat{\rho}_{c}\right) _{K}^{T_{A_{p}}}-2\widehat{\rho}_{c}$;
II. $\left( \widehat{\rho}_{c}\right) _{G}^{T_{A_{p}}}=\left(
\widehat{\rho}_{c}\right) _{4}^{T_{A_{p}}}+\left( \widehat{\rho}_{c}\right)
_{3}^{T_{A_{p}}}-\widehat{\rho}_{c}$;
III. $\left( \widehat{\rho}_{c}\right) _{G}^{T_{A_{p}}}=\left(
\widehat{\rho}_{c}\right) _{4}^{T_{A_{p}}}+\left( \widehat{\rho}_{c}\right)
_{2}^{T_{A_{p}}}-\widehat{\rho}_{c}$;
IV. $\left( \widehat{\rho}_{c}\right) _{G}^{T_{A_{p}}}=\left(
\widehat{\rho}_{c}\right) _{4}^{T_{A_{p}}}$;
V. $\left( \widehat{\rho}_{c}\right) _{G}^{T_{A_{p}}}=\left( \widehat{\rho
}_{c}\right) _{3}^{T_{A_{p}}}+\left( \widehat{\rho}_{c}\right)
_{2}^{T_{A_{p}}}-\widehat{\rho}_{c}$;
VI. $\left( \widehat{\rho}_{c}\right) _{G}^{T_{A_{p}}}=\left(
\widehat{\rho}_{c}\right) _{3}^{T_{A_{p}}}$; and
VII. $\left( \widehat{\rho}_{c}\right) _{G}^{T_{A_{p}}}=\left(
\widehat{\rho}_{c}\right) _{2}^{T_{A_{p}}}$.
Of these, six classes contain states with four-partite entanglement, while
class VI with $\widehat{\rho}_{G}^{T_{A_{p}}}=\left( \widehat{\rho}
_{3}^{T_{A_{p}}}\right) _{c}$ has only three qubit entanglement. Each major
class contains sub-classes depending on
the number and type of negativity fonts in the global partial transpose of the
canonical state. Table \ref{t1} lists the decomposition of $\left( \rho
_{c}\right) _{G}^{T_{A_{p}}}$, the invariants $I_{\left( 4,8\right) }
^{A_{1}A_{2}A_{3}A_{4}}$, $D_{A_{4}}^{A_{1}A_{2}A_{3}}$, $\Delta$, and
$N_{K-\text{way}}$ ($K=2,3,4$), in the canonical state, for different classes
of four qubit entangled states. Here $D_{A_{4}}^{A_{1}A_{2}A_{3}}=\left(
N_{A_{4}}^{A_{1}A_{2}A_{3}}\right) ^{2}-2\left\vert I_{(4,8)}^{A_{1}
A_{2}A_{3}A_{4}}\right\vert $ is a measure of residual three-way correlations
between qubits $A_{1}A_{2}A_{3}$, and $N_{K-\text{way}}$ ($K=2,3,4$) is the
number of $K$-way negativity fonts in a state. \begin{table}[ptb]
\caption{Decomposition of $\left( \rho_{c}\right) _{G}^{T_{A_{p}}}$,
invariants $I_{\left( 4,8\right) }^{A_{1}A_{2}A_{3}A_{4}}$, $D_{A_{4}
}^{A_{1}A_{2}A_{3}}$, $\Delta$, and $N_{K-\text{way}}$ ($K=$2,3,4) in
canonical state, for seven classes of four qubit entangled states}
\begin{tabular}
[c]{||c||c||c||c||c||c||c||c||}\hline\hline
Class & Decomposition of $\left( \rho_{c}\right) _{G}^{T_{A_{p}}}$ &
$I_{\left( 4,8\right) }^{A_{1}A_{2}A_{3}A_{4}}$ & $D_{A_{4}}^{A_{1}
A_{2}A_{3}}$ & $\Delta$ & $N_{2-\text{way}}$ & $N_{3-\text{way}}$ & $N_{4-\text{way}}
$\\\hline\hline
I & $\sum\limits_{K=2}^{4}\left( \widehat{\rho}_{c}\right) _{K}^{T_{A_{p}}
}-2\widehat{\rho}_{c}$ & $\neq0$ & $\neq0$ & $\neq0$ & $\geq1$ & $\geq1$ &
$\geq1$\\\hline\hline
II & $\left( \rho_{c}\right) _{4}^{T_{A_{p}}}+\left( \rho_{c}\right)
_{3}^{T_{A_{p}}}-\widehat{\rho}_{c}$ & $\neq0$ & $\neq0$ & $0$ & $0$ & $\geq1$
& $\geq1$\\\hline\hline
III & $\left( \rho_{c}\right) _{4}^{T_{A_{p}}}+\left( \rho_{c}\right)
_{2}^{T_{A_{p}}}-\widehat{\rho}_{c}$ & $\neq0$ & $0$ & $\neq0$ & $\geq1$ & $0$
& $\geq1$\\\hline\hline
IV & $\left( \rho_{c}\right) _{4}^{T_{A_{p}}}$ & $\neq0$ & $0$ & $0$ & $0$ &
$0$ & $1$\\\hline\hline
V & $\left( \rho_{c}\right) _{3}^{T_{A_{p}}}+\left( \rho_{c}\right)
_{2}^{T_{A_{p}}}-\widehat{\rho}_{c}$ & $0$ & $\neq0$ & $0$ & $\geq1$ & $\geq1$
& $0$\\\hline\hline
VI & $\left( \rho_{c}\right) _{3}^{T_{A_{p}}}$ & $0$ & $\neq0$ & $0$ & $0$ &
$1$ & $0$\\\hline\hline
VII & $\left( \rho_{c}\right) _{2}^{T_{A_{p}}}$ & $0$ & $0$ & $0$ & $\geq1$
& $0$ & $0$\\\hline\hline
\end{tabular}
\label{t1}
\end{table}
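Table \ref{t1} amounts to a decision table. A direct transcription (a sketch only; the helper `classify`, its tolerance argument, and the font counts `n2`, `n3`, `n4` for the canonical state are our own hypothetical interface) makes the class boundaries explicit:

```python
def classify(I48, D, Delta, n2, n3, n4, tol=1e-12):
    """Assign the major class I-VII from the invariant signature of
    Table t1. n2, n3, n4 count the K-way negativity fonts of the
    canonical state (n3 and n4 are kept for fidelity to the table;
    only n2 is needed to split classes V and VI)."""
    nz = lambda x: abs(x) > tol          # "non-zero" up to tolerance
    if nz(I48) and nz(D) and nz(Delta):
        return 'I'
    if nz(I48) and nz(D):                # Delta = 0
        return 'II'
    if nz(I48) and nz(Delta):            # D = 0
        return 'III'
    if nz(I48):                          # D = Delta = 0
        return 'IV'
    if nz(D):                            # I48 = 0
        return 'V' if n2 >= 1 else 'VI'
    return 'VII'                         # only two-way fonts remain
```

The assumption here is that the three invariants and the font counts have already been evaluated for the canonical state, as in the rows of the table.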
A four qubit state with a single four-way negativity font,
\[
\left\vert \Psi_{ab}\right\rangle =a\left( \left\vert 0000\right\rangle
+\left\vert 1111\right\rangle \right) +b\left( \left\vert 1101\right\rangle
+\left\vert 1110\right\rangle +\left\vert 0011\right\rangle \right) \text{,}
\]
is an example of class I states. Three qubit invariants for the state are
$\left( I_{3}^{A_{1}A_{2}A_{3}}\right) _{\left( A_{4}\right) _{0}}
=a^{2}b^{2}$, $P_{\left( A_{4}\right) _{0}}^{A_{1}A_{2}A_{3}}=\frac{1}
{2}a^{3}b$, $\left( I_{3}^{A_{1}A_{2}A_{3}}\right) _{\left( A_{4}\right)
_{1}}=b^{4}$, $P_{\left( A_{4}\right) _{1}}^{A_{1}A_{2}A_{3}}=-\frac{1}
{2}a^{2}b^{2}$, and $T_{A_{4}}^{A_{1}A_{2}A_{3}}=\frac{1}{6}\left(
a^{4}-2ab^{3}\right) $. Four qubit invariants are found to be $I_{(4,8)}
^{A_{1}A_{2}A_{3}A_{4}}=\frac{1}{12}\left( a^{4}+4ab^{3}\right) ^{2}$,
$D_{A_{4}}^{A_{1}A_{2}A_{3}}\neq0$, and $\Delta\neq0$. A representative of
class II states with $\widehat{\rho}_{G}^{T_{A_{p}}}=\widehat{\rho}
_{4}^{T_{A_{p}}}+\widehat{\rho}_{3}^{T_{A_{p}}}-\widehat{\rho}$ is
$\left\vert \Psi_{a}\right\rangle =a\left( \left\vert 0000\right\rangle
+\left\vert 1111\right\rangle \right) +\left\vert 1110\right\rangle $. The
state is SLOCC equivalent to the GHZ state; however, it deserves a distinct
status since on removal of qubit $A_{4}$ it retains residual three-way
coherences.
Invariants for class III states
\begin{align}
G_{abcd} & =\frac{a+d}{2}\left( \left\vert 0000\right\rangle +\left\vert
1111\right\rangle \right) +\frac{a-d}{2}\left( \left\vert 1100\right\rangle
+\left\vert 0011\right\rangle \right) \nonumber\\
& +\frac{b+c}{2}\left( \left\vert 1010\right\rangle +\left\vert
0101\right\rangle \right) +\frac{b-c}{2}\left( \left\vert 0110\right\rangle
+\left\vert 1001\right\rangle \right) ,
\end{align}
\begin{align}
L_{abc_{2}} & =\frac{a+b}{2}\left( \left\vert 0000\right\rangle +\left\vert
1111\right\rangle \right) +\frac{a-b}{2}\left( \left\vert 1100\right\rangle
+\left\vert 0011\right\rangle \right) \nonumber\\
& +c\left( \left\vert 1010\right\rangle +\left\vert 0101\right\rangle
\right) +\left\vert 0110\right\rangle ,
\end{align}
\begin{equation}
L_{a_{2}b_{2}}=a\left( \left\vert 0000\right\rangle +\left\vert
1111\right\rangle \right) +b\left( \left\vert 0101\right\rangle +\left\vert
1010\right\rangle \right) +\left( \left\vert 0110\right\rangle +\left\vert
0011\right\rangle \right) ,
\end{equation}
and
\begin{equation}
L_{a_{2}0_{3\oplus\widetilde{1}}}=a\left( \left\vert 0000\right\rangle
+\left\vert 1111\right\rangle \right) +\left( \left\vert 0101\right\rangle
+\left\vert 0110\right\rangle +\left\vert 0011\right\rangle \right) ,
\end{equation}
of ref. \cite{vers02}, with $\left( \rho_{c}\right) _{G}^{T_{A_{p}}}=\left(
\rho_{c}\right) _{4}^{T_{A_{1}}}+\left( \rho_{c}\right) _{2}^{T_{A_{1}}
}-\widehat{\rho}_{c}$, are listed in Table \ref{t2}. All three-way coherences
are convertible to two-way coherences; as such, the three-way negativity fonts
have zero determinants. Four qubit entanglement occurs due to four-way and
two-way coherences. For all these states, the invariants $P_{\left(
A_{4}\right) _{0}}^{A_{1}A_{2}A_{3}}$ and $P_{\left( A_{4}\right) _{1}
}^{A_{1}A_{2}A_{3}}$ are identically zero. In Table \ref{t2}, for states in
the family $G_{abcd}$, the three qubit invariants used for the set
$A_{1}A_{2}A_{3}$ are
\[
T_{A_{4}}^{A_{1}A_{2}A_{3}}=\frac{1}{6}\left( A-2B\right) ,\quad\left(
I_{3}^{A_{1}A_{2}A_{3}}\right) _{\left( A_{4}\right) _{0}}=\left(
I_{3}^{A_{1}A_{2}A_{3}}\right) _{\left( A_{4}\right) _{1}}=B,
\]
where
\begin{equation}
A=\left( a^{2}-b^{2}\right) \left( d^{2}-c^{2}\right) ,B=\frac{1}
{4}\left( a^{2}-d^{2}\right) \left( b^{2}-c^{2}\right) . \label{AB}
\end{equation}
For states $G_{ab00}$ and $G_{00cd}$, $\Delta=0$. For states $L_{abc_{2}}$,
with $T_{A_{4}}^{A_{1}A_{2}A_{3}}=\frac{1}{6}\left( a^{2}-c^{2}\right)
\left( b^{2}-c^{2}\right) $ and $\left( I_{3}^{A_{1}A_{2}A_{3}}\right)
_{\left( A_{4}\right) _{0}}=c\left( a^{2}-b^{2}\right) $, the value
$\left( I_{3}^{A_{1}A_{2}A_{3}}\right) _{\left( A_{4}\right) _{1}}=0$
results in $\Delta=0$. A comparison of the states $L_{abc_{2}}$ with $a=c$ and
$L_{ab_{3}}$ shows that the states are not SLOCC equivalent \cite{li09}
because the numbers of negativity fonts are not equal. However, since four
qubit correlations are null $\left( I_{(4,8)}^{A_{1}A_{2}A_{3}A_{4}
}=0\right) $ for $L_{abc_{2}}$ with $a=c$ as well as for $L_{ab_{3}}$, these
are subclasses of the same major class in the correlation-type based
classification, partially supporting the result of \cite{chte07}.
The families of states $L_{ab_{3}}$ and $L_{a_{4}}$ of ref. \cite{vers02} have
a similar global partial transpose composition. The value of the degree two
invariant is $I_{4}=\frac{3a^{2}+b^{2}}{2}$ for $L_{ab_{3}}$ and
$I_{4}=2a^{2}$ for $L_{a_{4}}$, indicating that four-way coherences are
present. However, for the set of qubits $A_{1}A_{2}A_{3}$, the only non-zero
three qubit invariant is $\left( I_{3}^{A_{1}A_{2}A_{3}}\right) _{\left(
A_{4}\right) _{1}}=\frac{a^{2}-b^{2}}{2}$ for $L_{ab_{3}}$ and $\left(
I_{3}^{A_{1}A_{2}A_{3}}\right) _{\left( A_{4}\right) _{1}}=-4a^{2}$ for
$L_{a_{4}}$. A finite $I_{4}$ but zero $I_{(4,8)}^{A_{1}A_{2}A_{3}A_{4}}$
indicates that the superposition contains a product of two qubit entangled
states. Four-partite entanglement may, in this case, be detected by products
such as $\left( N_{A_{4}}^{A_{1}A_{2}A_{3}}\right) \left( N_{A_{3}}^{A_{1}
A_{2}A_{4}}\right) $.
The states in the families $L_{a_{2}b_{2}}$ and $L_{a_{2}0_{3\oplus
\widetilde{1}}}$ \cite{vers02} have $\left( N_{A_{4}}^{A_{1}A_{2}A_{3}
}\right) ^{2}=2\left\vert I_{(4,8)}^{A_{1}A_{2}A_{3}A_{4}}\right\vert $. The
states in $L_{a_{2}b_{2}}$ and $L_{a_{2}0_{3\oplus\widetilde{1}}}$ differ from
each other in the number of two-way negativity fonts with non-zero
determinants. The only non-zero three tangle for the states $G_{abba}$ is
$T_{A_{4}}^{A_{1}A_{2}A_{3}}$.\begin{table}[ptb]
\caption{Invariants for class III states $G_{abcd}$, $L_{abc_{2}}$,
$L_{a_{2}b_{2}}$ and $L_{a_{2}0_{3\oplus\widetilde{1}}}$ with $\left(
\rho_{c}\right) _{G}^{T_{A_{p}}}=\left( \rho_{c}\right) _{4}^{T_{A_{1}}
}+\left( \rho_{c}\right) _{2}^{T_{A_{1}}}-\widehat{\rho}_{c}$. A and B in
column II are as defined in Eq. (\ref{AB}).}
\label{t2}
\begin{tabular}
[c]{||c||c||c||c||c||}\hline\hline
Invariant$\backslash$Class & $G_{abcd}$ & $L_{abc_{2}}$ & $L_{a_{2}b_{2}}$ &
$L_{a_{2}0_{3\oplus\widetilde{1}}}$\\\hline\hline
$\left( N_{A_{4}}^{A_{1}A_{2}A_{3}}\right) ^{2}$ & $\frac{1}{6}\left\vert
A-2B\right\vert ^{2}+2\left\vert B\right\vert ^{2}$ & $
\begin{array}
[c]{c}
\frac{1}{6}\left\vert \left( a^{2}-c^{2}\right) \left( b^{2}-c^{2}\right)
\right\vert ^{2}\\
+\left\vert c\left( a^{2}-b^{2}\right) \right\vert ^{2}
\end{array}
$ & $\frac{1}{6}\left\vert \left( a^{2}-b^{2}\right) ^{4}\right\vert $ &
$\frac{1}{6}\left\vert a^{8}\right\vert $\\\hline\hline
$I_{(4,8)}^{A_{1}A_{2}A_{3}A_{4}}$ & $\left( \frac{1}{12}\left( A-2B\right)
^{2}+B^{2}\right) $ & $\frac{1}{12}\left( a^{2}-c^{2}\right) ^{2}\left(
b^{2}-c^{2}\right) ^{2}$ & $\frac{1}{12}\left( a^{2}-b^{2}\right) ^{4}$ &
$\frac{1}{12}a^{8}$\\\hline\hline
$D_{A_{4}}^{A_{1}A_{2}A_{3}}$ & $\neq0$ & $\left\vert c\left( a^{2}
-b^{2}\right) \right\vert ^{2}$ & $0$ & $0$\\\hline\hline
$\Delta$ & $\neq0$ & $0$ & $0$ & $0$\\\hline\hline
\end{tabular}
\end{table}The states $G_{a00a}$ and $G_{0bb0}$, with $\left( \rho
_{c}\right) _{G}^{T_{A_{p}}}=\left( \rho_{c}\right) _{4}^{T_{A_{p}}}$,
belong to class IV in the classification scheme based on correlation type. For
these states the only non-zero three tangle is $T_{A_{4}}^{A_{1}A_{2}A_{3}}$;
therefore $I_{(4,8)}^{A_{1}A_{2}A_{3}A_{4}}\neq0$, $\Delta=0$, and $D_{A_{4}
}^{A_{1}A_{2}A_{3}}=0$.
The global partial transpose has the composition $\left( \rho_{c}\right)
_{G}^{T_{A_{1}}}=\left( \rho_{c}\right) _{3}^{T_{A_{1}}}+\left( \rho
_{c}\right) _{2}^{T_{A_{1}}}-\widehat{\rho}_{c}$ for the class V states
$L_{0_{7\oplus\overline{1}}}$ and $L_{0_{5\oplus\overline{3}}}$. In both cases
$I_{(4,8)}^{A_{1}A_{2}A_{3}A_{4}}=0$, while the product $\left( N_{A_{4}
}^{A_{1}A_{2}A_{3}}\right) \left( N_{A_{3}}^{A_{1}A_{2}A_{4}}\right) \neq
0$. The two states differ in the number of two-way negativity fonts with
non-zero determinants. The only non-zero invariant for the class VI state
$L_{0_{3\oplus\overline{1}}0_{3\oplus\overline{1}}}$ \cite{vers02} with
$\left( \rho_{c}\right) _{G}^{T_{A_{1}}}=\left( \rho_{c}\right)
_{3}^{T_{A_{1}}}$ is $\left( I_{3}^{A_{2}A_{3}A_{4}}\right) _{\left(
A_{1}\right) _{0}}$. The state has only three qubit entanglement. Class VII
with $\left( \rho_{c}\right) _{G}^{T_{A_{p}}}=\left( \rho_{c}\right)
_{2}^{T_{A_{p}}}$ contains four qubit states with W-type entanglement,
represented by $L_{a=0b_{3}=0}$, and separable states with entangled qubit
pairs, for example $G_{aaaa}$.
The polynomial invariant $I_{(4,8)}^{A_{1}A_{2}A_{3}A_{4}}$ is non-zero on the
states $\left\vert \Psi_{ab}\right\rangle $, $G_{abcd}$, $L_{abc_{2}}$,
$L_{a_{2}b_{2}}$, $L_{a_{2}0_{3\oplus\widetilde{1}}}$, $G_{a00a}$, and
$G_{0bb0}$, and vanishes on the states $L_{ab_{3}}$, $L_{a_{4}}$,
$L_{0_{7\oplus\overline{1}}}$, $L_{0_{5\oplus\overline{3}}}$, $L_{0_{3\oplus
\overline{1}}0_{3\oplus\overline{1}}}$, and $G_{aaaa}$. We define an
entanglement monotone to quantify four qubit correlations as
to quantify four qubit correlations as
\begin{equation}
\tau_{(4,8)}=4\left\vert \left( 12I_{(4,8)}^{A_{1}A_{2}A_{3}A_{4}}\right)
^{\frac{1}{2}}\right\vert , \label{tau48}
\end{equation}
which is one on states with maximal entanglement due to four-body
correlations, finite on all states with entanglement due to four-body
correlations, and zero otherwise. The subscript $(4,8)$ is carried over from
$I_{(4,8)}^{A_{1}A_{2}A_{3}A_{4}}$. One can verify that on the four qubit GHZ
state
\[
\left\vert GHZ\right\rangle =\frac{1}{\sqrt{2}}\left( \left\vert
0000\right\rangle +\left\vert 1111\right\rangle \right)
\]
as well as cluster states \cite{brie01,raus01}
\[
\left\vert C_{1}\right\rangle =\frac{1}{2}\left( \left\vert 0000\right\rangle
+\left\vert 1100\right\rangle +\left\vert 0011\right\rangle -\left\vert
1111\right\rangle \right)
\]
\[
\left\vert C_{2}\right\rangle =\frac{1}{2}\left( \left\vert 0000\right\rangle
+\left\vert 0110\right\rangle +\left\vert 1001\right\rangle -\left\vert
1111\right\rangle \right) ,
\]
\[
\left\vert C_{3}\right\rangle =\frac{1}{2}\left( \left\vert 0000\right\rangle
+\left\vert 1010\right\rangle +\left\vert 0101\right\rangle -\left\vert
1111\right\rangle \right) ,
\]
$\tau_{(4,8)}=1$ and $\left( N_{A_{4}}^{A_{1}A_{2}A_{3}}\right)
^{2}=2\left\vert I_{(4,8)}^{A_{1}A_{2}A_{3}A_{4}}\right\vert $. So what is
different about cluster states? We recall the invariants $J^{A_{i}A_{j}}$ from
\cite{shar102}, which detect the entanglement of a selected pair of qubits,
$A_{i}A_{j}$, in a four qubit state. For a GHZ state $J^{A_{i}A_{j}}=\frac{1}{4}$
for all pairs $i\neq j$ ($i,j=1$ to $4$), while for a cluster state the
$J^{A_{i}A_{j}}$ do not all have the same value \cite{shar102}.
In canonical form, GHZ has a single four-way negativity font, while a cluster
state has two four-way negativity fonts in addition to two-way negativity
fonts (state reduction does not destroy all the coherences).
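As a numerical sanity check on the amplitudes listed above, the following Python sketch (an illustration, not part of the original analysis) builds the GHZ and cluster states and verifies that each is normalized and that every single-qubit reduced density matrix is maximally mixed, a standard necessary condition for maximal multipartite entanglement.

```python
import numpy as np

def ket(bits):
    """Four-qubit computational basis state as a length-16 vector."""
    v = np.zeros(16)
    v[int(bits, 2)] = 1.0
    return v

# States exactly as written in the text.
GHZ = (ket("0000") + ket("1111")) / np.sqrt(2)
C1 = (ket("0000") + ket("1100") + ket("0011") - ket("1111")) / 2
C2 = (ket("0000") + ket("0110") + ket("1001") - ket("1111")) / 2
C3 = (ket("0000") + ket("1010") + ket("0101") - ket("1111")) / 2

def single_qubit_rdm(psi, q):
    """Reduced density matrix of qubit q (0-indexed) of a 4-qubit pure state."""
    a = psi.reshape(2, 2, 2, 2)
    a = np.moveaxis(a, q, 0).reshape(2, 8)   # qubit q first, rest flattened
    return a @ a.conj().T

for psi in (GHZ, C1, C2, C3):
    assert abs(np.linalg.norm(psi) - 1) < 1e-12        # normalized
    for q in range(4):
        rho = single_qubit_rdm(psi, q)
        assert np.allclose(rho, np.eye(2) / 2)          # maximally mixed
```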
Another state proposed through a numerical search in ref. \cite{brow06} to be
a maximally entangled state is
\[
\left\vert \Phi\right\rangle =\frac{1}{2}\left( \left\vert 0000\right\rangle
+\left\vert 1101\right\rangle \right) +\frac{1}{\sqrt{8}}\left( \left\vert
1011\right\rangle +\left\vert 0011\right\rangle +\left\vert 0110\right\rangle
-\left\vert 1110\right\rangle \right) .
\]
However, on this state
\[
T_{A_{4}}^{A_{1}A_{2}A_{3}}=\left( I_{3}^{A_{1}A_{2}A_{3}}\right) _{\left(
A_{4}\right) _{0}}=\left( I_{3}^{A_{1}A_{2}A_{3}}\right) _{\left(
A_{4}\right) _{1}}=\frac{1}{32},
\]
\[
\left( P_{3}^{A_{1}A_{2}A_{3}}\right) _{\left( A_{4}\right) _{0}}=\left(
P_{3}^{A_{1}A_{2}A_{3}}\right) _{\left( A_{4}\right) _{1}}=0,
\]
therefore $I_{(4,8)}^{A_{1}A_{2}A_{3}A_{4}}=\frac{1}{256}$ and $\tau
_{(4,8)}=\sqrt{\frac{3}{4}}$. On the two-excitation four qubit Dicke state
\[
\left\vert \Psi_{D}\right\rangle =\frac{1}{\sqrt{6}}\left( \left\vert
0011\right\rangle +\left\vert 1100\right\rangle +\left\vert 0101\right\rangle
+\left\vert 1010\right\rangle +\left\vert 1001\right\rangle +\left\vert
0110\right\rangle \right) ,
\]
we have $\tau_{(4,8)}=\frac{5}{9}$, while it is zero on the four qubit W-state
\[
\left\vert W\right\rangle =\frac{1}{2}\left( \left\vert 0000\right\rangle
+\left\vert 1100\right\rangle +\left\vert 1010\right\rangle +\left\vert
1001\right\rangle \right) .
\]
The four tangle $\tau_{4}$ also vanishes on $W$-like states of four qubits;
however, it fails to vanish on a product of two qubit entangled states. Contrary
to $\tau_{(4,8)}$, a nonzero $\tau_{4}$ does not ensure four-partite
entanglement. On the four qubit state
\begin{align}
\left\vert HS\right\rangle & =\frac{1}{\sqrt{6}}\left( \left\vert
0011\right\rangle +\left\vert 1100\right\rangle +\exp\left( \frac{i2\pi}
{3}\right) \left( \left\vert 1010\right\rangle +\left\vert 0101\right\rangle
\right) \right) \nonumber\\
& +\frac{1}{\sqrt{6}}\exp\left( \frac{i4\pi}{3}\right) \left( \left\vert
1001\right\rangle +\left\vert 0110\right\rangle \right) ,
\end{align}
conjectured to have maximal entanglement in ref. \cite{higu00}, we have
$D_{\left( A_{3}\right) _{0}\left( A_{4}\right) _{1}}^{00}=D_{\left(
A_{3}\right) _{1}\left( A_{4}\right) _{0}}^{00}=\frac{1}{6}$, and for the
$4$-way negativity fonts $D^{0011}=\frac{1}{6}$, $D^{0001}=\frac{1}{12}\left(
1-i\sqrt{3}\right)$, and $D^{0010}=\frac{1}{12}\left( 1+i\sqrt{3}\right)$.
Therefore
\[
T_{A_{4}}^{A_{1}A_{2}A_{3}}=\left( I_{3}^{A_{1}A_{2}A_{3}}\right) _{\left(
A_{4}\right) _{0}}=\left( I_{3}^{A_{1}A_{2}A_{3}}\right) _{\left(
A_{4}\right) _{1}}=0,
\]
\[
\left( P_{3}^{A_{1}A_{2}A_{3}}\right) _{\left( A_{4}\right) _{0}}=\left(
P_{3}^{A_{1}A_{2}A_{3}}\right) _{\left( A_{4}\right) _{1}}=0,
\]
leading to $\tau_{(4,8)}=0$. However, the invariant $I_{(2,6)}^{A_{1}
A_{2}A_{3}A_{4}}$ (Eq. (\ref{i26})) equals $1$ on $\left\vert HS\right\rangle $
and takes the value $\frac{27}{64}$ on the four qubit $\left\vert W\right\rangle $
state. This reflects the fact that a measurement on the state of a qubit in
$\left\vert HS\right\rangle $ always leaves the three remaining qubits in a
three qubit W-state, whereas a similar measurement on a $\left\vert
W\right\rangle $ state yields a mixture of a three qubit W-state and a state
with the three qubits in a separable state.
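The claim that a local measurement on $\left\vert HS\right\rangle$ always leaves the remaining qubits in a three qubit W-state can be checked numerically. The sketch below (illustrative only; it measures the fourth qubit in the computational basis) verifies that both post-measurement branches carry exactly three equal-magnitude amplitudes, i.e. a W-type state up to local phases.

```python
import numpy as np

# |HS> amplitudes: 1, omega, omega^2 (omega = exp(2i*pi/3)), each over sqrt(6)
w = np.exp(2j * np.pi / 3)
psi = np.zeros(16, dtype=complex)
for bits, amp in [("0011", 1), ("1100", 1), ("1010", w), ("0101", w),
                  ("1001", w**2), ("0110", w**2)]:
    psi[int(bits, 2)] = amp / np.sqrt(6)

a = psi.reshape(2, 2, 2, 2)
for outcome in (0, 1):
    branch = a[:, :, :, outcome].ravel()        # project qubit 4 on |outcome>
    branch = branch / np.linalg.norm(branch)    # post-measurement 3-qubit state
    mags = np.abs(branch)
    nonzero = mags[mags > 1e-12]
    # three equal-magnitude amplitudes: a W-type state up to local phases
    assert len(nonzero) == 3
    assert np.allclose(nonzero, 1 / np.sqrt(3))
```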
The choice of $I_{(4,8)}^{A_{1}A_{2}A_{3}A_{4}}$ to quantify four qubit
correlations is also supported by the conclusions of \cite{endr06}, where,
for a selected set of four qubit states, the generator $S$ of ref. \cite{luqu03}
has been shown to have the same parameter dependence as optimized Bell-type
inequalities and as a combination of global negativity and two-qubit concurrences.
To summarize, degree 8, 12 and 24 four qubit invariants, expressed in terms of
three qubit invariants, have been obtained. One can continue the process to a
higher number of qubits. Commonly, multivariate forms in the state
coefficients $a_{i_{1}i_{2}...i_{N}}$ are used to obtain polynomial invariants
for qubit systems. Our strategy is to write multivariate forms with relevant
$K$-qubit invariants as coefficients. The advantage of this technique is that
relevant invariants in a larger Hilbert space are easily related to invariants
in subspaces, and thus to the structure of the quantum state at hand.
The construction of polynomial invariants for states other than the most general
state is a great help in the classification of states, and our method can easily
be applied to determine the invariants for any given state. The entanglement
monotone that quantifies four qubit correlations can be used to quantify
correlations in pure and mixed (via convex roof extension) four qubit states.
This work is financially supported by CNPq Brazil and FAEP UEL Brazil.
\end{document}
\begin{document}
\title{{\shortname}}
\begin{abstract}
Transformers have been actively studied for time-series forecasting in recent years. While often showing promising results in various scenarios, traditional Transformers are not designed to fully exploit the characteristics of time-series data and thus suffer some fundamental limitations, e.g., they are generally not decomposable or interpretable, and are neither effective nor efficient for long-term forecasting. In this paper, we propose {ETSformer}, a novel time-series Transformer architecture, which exploits the principle of exponential smoothing in improving Transformers for time-series forecasting. In particular, inspired by the classical exponential smoothing methods in time-series forecasting, we propose the novel exponential smoothing attention (ESA) and frequency attention (FA) to replace the self-attention mechanism in vanilla Transformers, thus improving both accuracy and efficiency. Based on these, we redesign the Transformer architecture with modular decomposition blocks such that it can learn to decompose the time-series data into interpretable time-series components such as level, growth and seasonality. Extensive experiments on various time-series benchmarks validate the efficacy and advantages of the proposed method. Code is available at \url{https://github.com/salesforce/ETSformer}.
\end{abstract}
\section{Introduction}
Transformer models have achieved great success in the fields of NLP \cite{vaswani2017attention, Devlin2019BERT} and CV \cite{carion2020end, dosovitskiy2021an} in recent times. The success is widely attributed to its self-attention mechanism which is able to explicitly model both short and long range dependencies adaptively via the pairwise query-key interaction. Owing to their powerful capability to model sequential data, Transformer-based architectures \cite{LI2019EnhancingTL,Wu2020AdversarialST,zhou2021informer, wu2021autoformer,zerveas2021transformer} have been actively explored for the time-series forecasting, especially for the more challenging Long Sequence Time-series Forecasting (LSTF) task. While showing promising results, it is still quite challenging to extract salient temporal patterns and thus make accurate long-term forecasts for large-scale data. This is because time-series data is usually noisy and non-stationary. Without incorporating appropriate knowledge about time-series structures \cite{assimakopoulos2000theta, theodosiou2011forecasting, hyndman2008forecasting}, it is prone to learning the spurious dependencies and lacks interpretability.
Moreover, the use of content-based, dot-product attention in Transformers is not effective in detecting essential temporal dependencies for two reasons.
(1) Firstly, time-series data is usually assumed to be generated by a conditional distribution over past observations, with the dependence between observations weakening over time \cite{mohri2010stability, kuznetsov2015learning}. Therefore, neighboring data points have similar values, and recent tokens should be given a higher weight\footnote{An assumption further supported by the success of classical exponential smoothing methods and ARIMA model selection methods tending to select small lags.} when measuring their similarity \cite{hyndman2008forecasting, hyndman2008automatic}. This indicates that attention measured by a relative time lag is more effective than that measured by the similarity of the content when modeling time-series.
(2) Secondly, many real world time-series display strong seasonality -- patterns in time-series which repeat with a fixed period. Automatically extracting seasonal patterns has been proven critical for the success of forecasting \cite{cleveland1976decomposition, cleveland1990stl, woo2022cost}. However, the vanilla attention mechanism is unlikely to learn these required periodic dependencies without any in-built prior structure.
\input{figures/decomposition}
To address these limitations, we propose {ETSformer}, an effective and efficient Transformer architecture for time-series forecasting, inspired by exponential smoothing methods \cite{hyndman2008forecasting} and illustrated in \cref{fig:decomposition}. First of all, {ETSformer} incorporates inductive biases of time-series structures by performing a layer-wise level, growth, and seasonal decomposition. By leveraging the high capacities of deep architectures and an effective residual learning scheme, {ETSformer} is able to extract a series of latent growth and seasonal patterns and model their complex dependencies.
Secondly, {ETSformer} introduces a novel Exponential Smoothing Attention (ESA) and Frequency Attention (FA) to replace vanilla attention. In particular, ESA constructs attention scores based on the relative time lag to the query, and achieves \(\mathcal{O}(L \log L)\) complexity for the length-$L$ lookback window and demonstrates powerful capability in modeling the growth component. FA leverages the Fourier transformation to extract the dominating seasonal patterns by selecting the Fourier bases with the \(K\) largest amplitudes in frequency domain, and also achieves \(\mathcal{O}(L \log L)\) complexity. Finally, the predicted forecast is a composition of level, trend, and seasonal components, which makes it human interpretable. We conduct extensive empirical analysis and show that {ETSformer} achieves state-of-the-art performance by outperforming competing approaches over 6 real world datasets on both the multivariate and univariate settings, and also visualize the time-series components to verify its interpretability.
\section{Related Work}
\textbf{Transformer based deep forecasting.} Inspired by the success of Transformers in CV and NLP, Transformer-based time-series forecasting models have been actively studied recently. LogTrans \cite{LI2019EnhancingTL} introduces local context to Transformer models via causal convolutions in the query-key projection layer, and propose the LogSparse attention to reduce complexity to \(\mathcal{O}(L \log L)\). Informer \cite{zhou2021informer} extends the Transformer by proposing the ProbSparse attention and distillation operation to achieve \(\mathcal{O}(L \log L)\) complexity. AST \cite{Wu2020AdversarialST} leverages a sparse normalization transform, \(\alpha\mhyphen\mathrm{entmax}\), to implement a sparse attention layer. It further incorporates an adversarial loss to mitigate the adverse effect of error accumulation in inference. Similar to our work that incorporates prior knowledge of time-series structure, Autoformer \cite{wu2021autoformer} introduces the Auto-Correlation attention mechanism which focuses on sub-series based similarity and is able to extract periodic patterns.
Yet, their implementation of series decomposition, which performs de-trending via a simple moving average over the input signal without any learnable parameters, relies on a simplifying assumption that is insufficient to appropriately model complex trend patterns. {ETSformer}, on the other hand, decomposes the series by de-seasonalization, as seasonal patterns are more identifiable and easier to detect \cite{de2011forecasting}. Furthermore, the Auto-Correlation mechanism fails to attend to information from the local context (i.e. the forecast at \(t+1\) is not dependent on \(t, t-1\), etc.) and does not separate the trend component into level and growth components, both of which are crucial for modeling trend patterns. Lastly, similar to previous work, their approach is highly reliant on manually designed dynamic time-dependent covariates (e.g. month-of-year, day-of-week), while {ETSformer} is able to automatically learn and extract seasonal patterns from the time-series signal directly.
\textbf{Attention Mechanism.}
The self-attention mechanism in Transformer models has recently received much attention, and its necessity has been investigated in attempts to introduce more flexibility and reduce computational cost. Synthesizer \cite{tay2021synthesizer} empirically studies the importance of dot-product interactions, and shows that randomly initialized, learnable attention mechanisms with or without token-token dependencies can achieve competitive performance with vanilla self-attention on various NLP tasks. \cite{you2020hard} utilizes an unparameterized Gaussian distribution to replace the original attention scores, concluding that the attention distribution should focus on a certain local window, and can achieve comparable performance. \cite{raganato2020fixed} replaces attention with fixed, non-learnable positional patterns, obtaining competitive performance on NMT tasks. \cite{lee2021fnet} replaces self-attention with a non-learnable Fourier transform and verifies it to be an effective mixing mechanism. While our proposed ESA shares the spirit of designing attention mechanisms that do not depend on pair-wise query-key interactions, our work is inspired by exploiting the characteristics of time-series and is an early attempt to utilize prior knowledge of time-series structure for tackling forecasting tasks.
\section{Preliminaries and Background}
\textbf{Problem Formulation} Let \({\bm{x}}_t \in \mathbb{R}^m\) denote an observation of a multivariate time-series at time step \(t\). Given a lookback window \({\bm{X}}_{t-L:t} = [{\bm{x}}_{t-L}, \ldots, {\bm{x}}_{t-1}]\), we consider the task of predicting future values over a horizon, \({\bm{X}}_{t:t+H} = [{\bm{x}}_t, \ldots, {\bm{x}}_{t+H-1}]\). We denote \(\hat{{\bm{X}}}_{t:t+H}\) as the point forecast of \({\bm{X}}_{t:t+H}\). Thus, the goal is to learn a forecasting function \(\hat{{\bm{X}}}_{t:t+H} = f ({\bm{X}}_{t-L:t})\) by minimizing some loss function \(\mathcal{L}: \mathbb{R}^{H \times m} \times \mathbb{R}^{H \times m} \to \mathbb{R}\).
\textbf{Exponential Smoothing} We instantiate exponential smoothing methods \cite{hyndman2008forecasting} in the univariate forecasting setting. They assume that time-series can be decomposed into seasonal and trend components, and trend can be further decomposed into level and growth components. Specifically, a commonly used model is the additive Holt-Winters' method \cite{holt2004forecasting, winters1960forecasting}, which can be formulated as:
\begin{align}
\label{eq:es}
\text{Level}&: e_{t} = \alpha(x_t - s_{t-p}) + (1-\alpha)(e_{t-1}+b_{t-1})\nonumber\\
\text{Growth}&: b_{t} = \beta(e_{t} - e_{t-1}) + (1-\beta)b_{t-1}\nonumber\\
\text{Seasonal}&: s_t = \gamma(x_t - e_t) + (1-\gamma)s_{t-p}\nonumber\\
\text{Forecasting}&:\hat{x}_{t+h|t} = e_t + hb_t +s_{t+h - p}
\end{align}
where $p$ is the period of seasonality, and $\hat{x}_{t+h|t}$ is the $h$-steps ahead forecast. The above equations state that the $h$-steps ahead forecast is composed of the last estimated level $e_t$, incremented by $h$ times the last growth factor $b_t$, plus the last available seasonal factor $s_{t+h-p}$. Specifically, the level smoothing equation is formulated as a weighted average of the seasonally adjusted observation $(x_t- s_{t-p})$ and the non-seasonal forecast, obtained by summing the previous level and growth $(e_{t-1} + b_{t-1})$. The growth smoothing equation is a weighted average between the successive difference of the (de-seasonalized) level, $(e_t - e_{t-1})$, and the previous growth, $b_{t-1}$. Finally, the seasonal smoothing equation is a weighted average between the difference of the observation and the (de-seasonalized) level, $(x_t - e_t)$, and the previous seasonal index $s_{t-p}$. The weighted averages in these three equations are controlled by the smoothing parameters $\alpha$, $\beta$, and $\gamma$, respectively.
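The recursions above can be sketched directly. The following minimal Python implementation (helper names are illustrative; seasonal factors are stored circularly by index modulo $p$) updates level, growth, and seasonal factors and forms the $h$-steps ahead forecast.

```python
import numpy as np

def holt_winters_additive(x, p, alpha, beta, gamma, e0, b0, s0):
    """Additive Holt-Winters recursions (level / growth / seasonal).

    s0 holds the p initial seasonal factors; s[t % p] plays the role of
    s_{t-p} before the update at step t and of s_t after it.
    Returns the final level, growth, and list of seasonal factors.
    """
    e, b, s = e0, b0, list(s0)
    for t, xt in enumerate(x):
        e_prev = e
        e = alpha * (xt - s[t % p]) + (1 - alpha) * (e_prev + b)   # level
        b = beta * (e - e_prev) + (1 - beta) * b                   # growth
        s[t % p] = gamma * (xt - e) + (1 - gamma) * s[t % p]       # seasonal
    return e, b, s

def forecast(e, b, s, p, T, h):
    """h-steps-ahead forecast: last level + h * growth + seasonal factor."""
    return e + h * b + s[(T + h - 1) % p]

# On a constant series with matching initialization the forecast is exact.
e, b, s = holt_winters_additive([5.0] * 20, 4, 0.3, 0.1, 0.2, 5.0, 0.0, [0.0] * 4)
assert abs(forecast(e, b, s, 4, 20, 3) - 5.0) < 1e-9
```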
A widely used modification of the additive Holt-Winters' method is to allow the damping of trends, which has been proved to produce robust multi-step forecasts \cite{svetunkov2016complex, mckenzie2010damped}. The forecast with damping trend can be rewritten as:
\begin{align}
\label{eq:damping}
\hat{x}_{t+h|t} = e_t + (\phi+\phi^2+\dots+\phi^h)b_t +s_{t+h - p},
\end{align}
where the growth is damped by a factor of $\phi$. If $\phi=1$, it degenerates to the vanilla forecast. For $0<\phi<1$, as $h \rightarrow \infty$ this growth component approaches an asymptote given by $\phi b_t/(1- \phi)$.
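The damped-trend behavior can be verified numerically: the partial geometric sum matches its closed form $\phi(1-\phi^h)/(1-\phi)$, approaches the stated asymptote $\phi b_t/(1-\phi)$, and degenerates to $h b_t$ at $\phi = 1$. A small sketch:

```python
import numpy as np

def damped_growth(phi, b, h):
    """Damped growth contribution (phi + phi^2 + ... + phi^h) * b."""
    return sum(phi**i for i in range(1, h + 1)) * b

phi, b = 0.9, 1.5
# closed form of the geometric sum
assert np.isclose(damped_growth(phi, b, 10), phi * (1 - phi**10) / (1 - phi) * b)
# as h grows, the contribution approaches the asymptote phi*b/(1-phi)
assert abs(damped_growth(phi, b, 500) - phi * b / (1 - phi)) < 1e-9
# phi = 1 recovers the undamped linear trend h*b
assert damped_growth(1.0, b, 7) == 7 * b
```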
\section{{ETSformer}}
\input{figures/overall_archi}
In this section, we redesign the classical Transformer architecture into an exponential smoothing inspired encoder-decoder architecture specialized for tackling the time-series forecasting problem.
Our architecture design methodology relies on three key principles:
(1) the architecture leverages the stacking of multiple layers to progressively extract a series of level, growth, and seasonal representations from the intermediate latent residual;
(2) following the spirit of exponential smoothing, we extract the salient seasonal patterns while modeling level and growth components by assigning higher weight to recent observations;
(3) the final forecast is a composition of level, growth, and seasonal components making it human interpretable.
We now expound how our {ETSformer} architecture encompasses these principles.
\subsection{Overall Architecture}
\cref{fig:overall-architecture} illustrates the overall encoder-decoder architecture of {ETSformer}. At each layer, the encoder is designed to iteratively extract growth and seasonal latent components from the lookback window.
The level is then extracted in a similar fashion to classical level smoothing in \cref{eq:es}.
These extracted components are then fed to the decoder to further generate the final $H$-step ahead forecast via a composition of level, growth, and seasonal forecasts, which is defined:
\vskip -0.2in
\begin{small}
\begin{align}
\label{eq:forecast}
\hat{{\bm{X}}}_{t:t+H} = {\bm{E}}_{t:t+H} + \mathrm{Linear} \Big( \sum^N_{n=1}(\trend{t:t+H}{n}
+ \season{t:t+H}{n})\Big),
\end{align}
\end{small}
\vskip -0.1in
where \({\bm{E}}_{t:t+H} \in \mathbb{R}^{H \times m}\), and \(\trend{t:t+H}{n}, \season{t:t+H}{n} \in \mathbb{R}^{H \times d}\) represent the level forecasts, and the growth and seasonal latent representations of each time step in the forecast horizon, respectively. The superscript represents the stack index, for a total of \(N\) encoder stacks. Note that \(\mathrm{Linear}(\cdot): \mathbb{R}^d \to \mathbb{R}^m\) operates element-wise along each time step, projecting the extracted growth and seasonal representations from latent to observation space.
\subsubsection{Input Embedding}
Raw signals from the lookback window are mapped to latent space via the input embedding module, defined by \({\bm{Z}}_{t-L:t}^{(0)} = \level{t-L:t}{0} = \mathrm{Conv}({\bm{X}}_{t-L:t}),\)
where \(\mathrm{Conv}\) is a temporal convolutional filter with kernel size 3, input channel \(m\) and output channel \(d\). In contrast to prior work \cite{LI2019EnhancingTL, Wu2020AdversarialST, wu2021autoformer,zhou2021informer}, the inputs of {ETSformer} do not rely on any other manually designed dynamic time-dependent covariates (e.g. month-of-year, day-of-week) for both the lookback window and forecast horizon. This is because the proposed Frequency Attention module (details in \cref{subsubsec:fa}) is able to automatically uncover these seasonal patterns, which renders it more applicable for challenging scenarios without these discriminative covariates and reduces the need for feature engineering.
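A minimal numpy stand-in for this embedding (random weights; the real module is a learned convolution) illustrates the shape contract of a 'same'-padded kernel-3 temporal convolution mapping $(L, m)$ to $(L, d)$:

```python
import numpy as np

def conv_embed(X, W):
    """'Same'-padded temporal convolution with kernel size 3: (L, m) -> (L, d).

    W has shape (3, m, d); a hypothetical stand-in for the input embedding.
    """
    L, m = X.shape
    Xp = np.pad(X, ((1, 1), (0, 0)))           # pad one step on each side
    out = np.zeros((L, W.shape[2]))
    for k in range(3):                          # sum over the three kernel taps
        out += Xp[k:k + L] @ W[k]
    return out

rng = np.random.default_rng(0)
L, m, d = 96, 7, 32
Z = conv_embed(rng.standard_normal((L, m)), rng.standard_normal((3, m, d)))
assert Z.shape == (L, d)
```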
\subsubsection{Encoder}
The encoder focuses on extracting a series of latent growth and seasonality representations in a cascaded manner from the lookback window.
To achieve this goal, traditional methods rely on the assumption of additive or multiplicative seasonality, which has limited capability to express complex patterns beyond these assumptions. Inspired by \cite{oreshkin2019n, he2016deep}, we leverage residual learning to build an expressive, deep architecture that characterizes the complex intrinsic patterns. Each layer can be interpreted as sequentially analyzing the input signals. The extracted growth and seasonal signals are then removed from the residual and undergo a nonlinear transformation before moving to the next layer. Each encoder layer takes as input the residual from the previous encoder layer \({\bm{Z}}_{t-L:t}^{(n-1)}\) and emits \({\bm{Z}}_{t-L:t}^{(n)}, \trend{t-L:t}{n}, \season{t-L:t}{n}\), the residual, latent growth, and seasonal representations for the lookback window, via the Multi-Head Exponential Smoothing Attention (MH-ESA) and Frequency Attention (FA) modules (detailed description in \cref{subsec:esa-and-fa}). The following equations formalize the overall pipeline in each encoder layer; for ease of exposition, we use the notation \(\coloneqq\) for a variable update.
\noindent\begin{minipage}[t]{.5\linewidth}
\begin{align*}
\textrm{Seasonal:} \quad
\season{t-L:t}{n} & = \mathrm{FA}_{t-L:t}({\bm{Z}}_{t-L:t}^{(n-1)}) \\
{\bm{Z}}_{t-L:t}^{(n-1)} & \coloneqq {\bm{Z}}_{t-L:t}^{(n-1)} - \season{t-L:t}{n}
\end{align*}
\end{minipage}
\begin{minipage}[t]{.5\linewidth}
\begin{align*}
\textrm{Growth:} \quad
\trend{t-L:t}{n} & = \mathrm{MH}\mhyphen\mathrm{ESA}({\bm{Z}}_{t-L:t}^{(n-1)}) \\
{\bm{Z}}_{t-L:t}^{(n-1)} & \coloneqq \mathrm{LN}({\bm{Z}}_{t-L:t}^{(n-1)} - \trend{t-L:t}{n}) \\
{\bm{Z}}_{t-L:t}^{(n)} & = \mathrm{LN}({\bm{Z}}_{t-L:t}^{(n-1)} + \mathrm{FF}({\bm{Z}}_{t-L:t}^{(n-1)}))
\end{align*}
\end{minipage}
\(\mathrm{LN}\) is layer normalization \cite{ba2016layer}, \(\mathrm{FF}(x) = \mathrm{Linear}(\sigma(\mathrm{Linear}(x)))\) is a position-wise feedforward network \cite{vaswani2017attention} and $\sigma(\cdot)$ is the sigmoid function.
\textbf{Level Module} Given the latent growth and seasonal representations from each layer, we extract the level at each time step $t$ in the lookback window in a similar way as the level smoothing equation in \cref{eq:es}. Formally, the adjusted level is a weighted average of the current (de-seasonalized) level and the level-growth forecast from the previous time step $t-1$. It can be formulated as:
\begin{align*}
\level{t}{n} = {\bm{\alpha}} * \Big( \level{t}{n-1} - \mathrm{Linear}(\season{t}{n}) \Big)
+ (1 - {\bm{\alpha}}) * \Big( \level{t-1}{n} + \mathrm{Linear}(\trend{t-1}{n}) \Big),
\end{align*}
where \({\bm{\alpha}} \in \mathbb{R}^m\) is a learnable smoothing parameter, \(*\) denotes element-wise multiplication, and \(\mathrm{Linear}(\cdot): \mathbb{R}^d \to \mathbb{R}^m\) maps representations to observation space. Finally, the extracted level in the last layer $\level{t-L:t}{N}$ can be regarded as the corresponding level for the lookback window.
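The level smoothing recurrence above can be sketched as follows; the two linear projection matrices and the initial condition at the first time step are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def level_smoothing(prev_level, season, growth, alpha, Ws, Wg):
    """Per-layer level extraction: a weighted average of the de-seasonalized
    level from the previous layer and the level-plus-growth forecast from the
    previous time step. Ws, Wg are hypothetical linear maps from d to m."""
    L = prev_level.shape[0]
    level = np.empty_like(prev_level)
    level[0] = prev_level[0] - season[0] @ Ws   # assumed: no previous step at t=0
    for t in range(1, L):
        deseason = prev_level[t] - season[t] @ Ws
        carry = level[t - 1] + growth[t - 1] @ Wg
        level[t] = alpha * deseason + (1 - alpha) * carry
    return level

rng = np.random.default_rng(1)
L, m, d = 48, 7, 16
lev = level_smoothing(rng.standard_normal((L, m)), rng.standard_normal((L, d)),
                      rng.standard_normal((L, d)), rng.uniform(0, 1, m),
                      rng.standard_normal((d, m)), rng.standard_normal((d, m)))
assert lev.shape == (L, m)
```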
We show in \cref{subapp:level-esa} that this recurrent exponential smoothing equation can also be evaluated efficiently using the \(\mathrm{ESA}\) algorithm (\cref{alg:efficient-esa}) with an auxiliary term.
\subsubsection{Decoder}
The decoder is tasked with generating the $H$-step ahead forecasts. As shown in \cref{eq:forecast}, the final forecast is a composition of level forecasts ${\bm{E}}_{t:t+H}$, growth representations $\trend{t:t+H}{n}$ and seasonal representations $\season{t:t+H}{n}$ in the forecast horizon. It comprises \(N\) Growth + Seasonal (G+S) Stacks, and a Level Stack. The G+S Stack consists of the Growth Damping (GD) and FA blocks, which leverage \(\trend{t}{n}\), \(\season{t-L:t}{n}\) to predict \(\trend{t:t+H}{n}\), \(\season{t:t+H}{n}\), respectively.
\noindent\begin{minipage}[t]{.5\textwidth}
\begin{align*}
\text{Growth:}\quad\trend{t:t+H}{n} = \mathrm{TD}(\trend{t}{n})
\end{align*}
\end{minipage}
\begin{minipage}[t]{.5\textwidth}
\begin{align*}
\text{Seasonal:}\quad\season{t:t+H}{n} = \mathrm{FA}_{t:t+H}(\season{t-L:t}{n})
\end{align*}
\end{minipage}
To obtain the level in the forecast horizon, the Level Stack repeats the level in the last time step $t$ along the forecast horizon. It can be defined as ${\bm{E}}_{t:t+H} = \mathrm{Repeat}_H(\level{t}{N}) = [\level{t}{N}, \ldots, \level{t}{N}]$, with $\mathrm{Repeat}_H(\cdot): \mathbb{R}^{1 \times m} \to \mathbb{R}^{H\times m}$.
\textbf{Growth Damping} To obtain the growth representation in the forecast horizon, we follow the idea of trend damping in \cref{eq:damping} to make robust multi-step forecasts.
Thus, the trend representations can be formulated as:
\begin{align*}
\mathrm{TD}(\trend{t}{n})_j & = \sum_{i=1}^{j} \gamma^i \trend{t}{n}, \\
\mathrm{TD}(\trend{t}{n}) & = [\mathrm{TD}(\trend{t}{n})_1, \ldots, \mathrm{TD}(\trend{t}{n})_{H}],
\end{align*}
where \(0 < \gamma < 1\) is the damping parameter which is learnable, and in practice, we apply a multi-head version of trend damping by making use of \(n_h\) damping parameters. Similar to the implementation for level forecast in the Level Stack, we only use the last trend representation in the lookback window $\trend{t}{n}$ to forecast the trend representation in the forecast horizon.
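The trend damping operation is a scaled partial geometric sum, which can be vectorized with a cumulative sum; a minimal single-head numpy sketch:

```python
import numpy as np

def growth_damping(g_last, gamma, H):
    """Damped growth representations over the horizon: the j-th output is
    (gamma + gamma^2 + ... + gamma^j) * g_last, vectorized via cumsum."""
    powers = np.cumsum(gamma ** np.arange(1, H + 1))   # partial geometric sums
    return powers[:, None] * g_last[None, :]

g = np.array([1.0, -2.0, 0.5])
out = growth_damping(g, 0.8, 4)
assert out.shape == (4, 3)
assert np.allclose(out[0], 0.8 * g)
assert np.allclose(out[1], (0.8 + 0.64) * g)
```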
\subsection{Exponential Smoothing Attention and Frequency Attention Mechanism}
\label{subsec:esa-and-fa}
\input{figures/attention_mechanism}
Considering the ineffectiveness of existing attention mechanisms in handling time-series data, we develop the Exponential Smoothing Attention (ESA) and Frequency Attention (FA) mechanisms to extract latent growth and seasonal representations. ESA is a non-adaptive, learnable attention scheme with an inductive bias to attend more strongly to recent observations by following an exponential decay, while FA is a non-learnable attention scheme, that leverages Fourier transformation to select dominating seasonal patterns. A comparison between existing work and our proposed ESA and FA is illustrated in \cref{fig:attention-mechanisms}.
\subsubsection{Exponential Smoothing Attention}
Vanilla self-attention can be regarded as a weighted combination of an input sequence, where the weights are normalized alignment scores measuring the similarity between input contents \cite{tsai2019transformer}.
Inspired by the exponential smoothing in \cref{eq:es}, we aim to assign a higher weight to recent observations. It can be regarded as a novel form of attention whose weights are computed by the relative time lag, rather than input content.
Thus, the ESA mechanism can be defined as \(\mathrm{ESA}: \mathbb{R}^{L \times d} \to \mathbb{R}^{L \times d}\), where \(\mathrm{ESA}({\bm{V}})_t \in \mathbb{R}^d\) denotes the \(t\)-th row of the output matrix, representing the token corresponding to the \(t\)-th time step. Its exponential smoothing formula can be further written as:
\begin{align*}
\mathrm{ESA}({\bm{V}})_t &= \alpha{\bm{V}}_{t} + (1-\alpha)\mathrm{ESA}({\bm{V}})_{t-1} = \sum_{j=0}^{t-1} \alpha (1 - \alpha)^j {\bm{V}}_{t-j} + (1 - \alpha)^t {\bm{v}}_0,
\end{align*}
where \(0 < \alpha < 1\) and \({\bm{v}}_0\) are learnable parameters known as the smoothing parameter and initial state respectively.
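The equivalence between the recursive definition and the unrolled sum can be checked directly; a minimal numpy sketch:

```python
import numpy as np

def esa_recursive(V, alpha, v0):
    """ESA(V)_t = alpha*V_t + (1-alpha)*ESA(V)_{t-1}, seeded with v0."""
    out = np.empty_like(V)
    prev = v0
    for t in range(len(V)):
        prev = alpha * V[t] + (1 - alpha) * prev
        out[t] = prev
    return out

def esa_closed_form(V, alpha, v0):
    """Unrolled form: sum_j alpha*(1-alpha)^j V_{t-j} + (1-alpha)^t v0."""
    L = len(V)
    out = np.empty_like(V)
    for t in range(1, L + 1):
        acc = sum(alpha * (1 - alpha)**j * V[t - 1 - j] for j in range(t))
        out[t - 1] = acc + (1 - alpha)**t * v0
    return out

rng = np.random.default_rng(2)
V, v0 = rng.standard_normal((64, 8)), rng.standard_normal(8)
assert np.allclose(esa_recursive(V, 0.3, v0), esa_closed_form(V, 0.3, v0))
```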
\textbf{Efficient $\mathrm{ESA}$ algorithm}
The straightforward implementation of the ESA mechanism by constructing the attention matrix, \({\bm{A}}_{\mathrm{ES}}\) and performing a matrix multiplication with the input sequence (detailed algorithm in \cref{subapp:esa-implementation}) results in an \(\mathcal{O}(L^2)\) computational complexity.
\begin{align*}
\mathrm{ESA}({\bm{V}}) =
\begin{bmatrix}
\mathrm{ESA}({\bm{V}})_1 \\
\vdots \\
\mathrm{ESA}({\bm{V}})_L
\end{bmatrix}
= {\bm{A}}_{\mathrm{ES}} \cdot
\begin{bmatrix}
{\bm{v}}_0^T \\
{\bm{V}}
\end{bmatrix}.
\end{align*}
Yet, we are able to achieve an efficient algorithm by exploiting the unique structure of the exponential smoothing attention matrix, \({\bm{A}}_{\mathrm{ES}}\), which is illustrated in \cref{subapp:esa-matrix}. Each row of the attention matrix can be regarded as iteratively right shifting with padding (ignoring the first column). Thus, a matrix-vector multiplication can be computed with a cross-correlation operation, which in turn has an efficient fast Fourier transform implementation \cite{Mathieu2014FastTO}. The full algorithm is described in \cref{alg:efficient-esa}, \cref{subapp:esa-algo}, achieving an \(\mathcal{O}(L \log L)\) complexity.
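The reduction of this decaying-weight sum to a causal convolution, and hence to an $\mathcal{O}(L \log L)$ FFT computation, can be sketched as follows. This is a simplified stand-in for the paper's algorithm, verified against the naive $\mathcal{O}(L^2)$ form:

```python
import numpy as np

def esa_naive(V, alpha, v0):
    """O(L^2): explicit exponentially decaying attention weights."""
    L = len(V)
    out = np.zeros_like(V)
    for t in range(1, L + 1):
        for j in range(t):
            out[t - 1] += alpha * (1 - alpha)**j * V[t - 1 - j]
        out[t - 1] += (1 - alpha)**t * v0
    return out

def esa_fft(V, alpha, v0):
    """O(L log L): the decaying-weight sum is a causal convolution of V with
    the kernel alpha*(1-alpha)^j, computed here via zero-padded FFTs."""
    L = len(V)
    kernel = alpha * (1 - alpha) ** np.arange(L)
    n = 2 * L                                   # zero-pad to avoid wrap-around
    conv = np.fft.irfft(np.fft.rfft(V, n, axis=0) *
                        np.fft.rfft(kernel, n)[:, None], n, axis=0)[:L]
    init = (1 - alpha) ** np.arange(1, L + 1)   # contribution of the state v0
    return conv + init[:, None] * v0[None, :]

rng = np.random.default_rng(3)
V, v0 = rng.standard_normal((128, 4)), rng.standard_normal(4)
assert np.allclose(esa_naive(V, 0.25, v0), esa_fft(V, 0.25, v0))
```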
\textbf{Multi-Head Exponential Smoothing Attention (MH-ESA)} We use \(\mathrm{ESA}\) as a basic building block, and develop the Multi-Head Exponential Smoothing Attention to extract latent growth representations. Formally, we obtain the growth representations by taking the successive difference of the residuals.
\begin{align*}
\tilde{{\bm{Z}}}_{t-L:t}^{(n)} & = \mathrm{Linear}({\bm{Z}}_{t-L:t}^{(n-1)}), \\
\trend{t-L:t}{n} & = \mathrm{MH}\mhyphen\mathrm{ESA}(\tilde{{\bm{Z}}}_{t-L:t}^{(n)} - [\tilde{{\bm{Z}}}_{t-L:t-1}^{(n)}, {\bm{v}}_0^{(n)}]), \\
\trend{t-L:t}{n} & \coloneqq \mathrm{Linear}(\trend{t-L:t}{n}),
\end{align*}
where \(\mathrm{MH}\mhyphen\mathrm{ESA}\) is a multi-head version of \(\mathrm{ESA}\) and \({\bm{v}}_0^{(n)}\) is the initial state of the ESA mechanism.
\subsubsection{Frequency Attention}
\label{subsubsec:fa}
The goal of identifying and extracting seasonal patterns from the lookback window is twofold. Firstly, it can be used to perform de-seasonalization on the input signals such that downstream components are able to focus on modeling the level and growth information. Secondly, we are able to extrapolate the seasonal patterns to build representations for the forecast horizon.
The main challenge is to automatically identify seasonal patterns. Fortunately, the use of power spectral density estimation for periodicity detection has been well studied \cite{vlachos2005periodicity}. Inspired by these methods, we leverage the discrete Fourier transform (DFT, details in \cref{app:dft}) to develop the FA mechanism to extract dominant seasonal patterns.
Specifically, FA first decomposes input signals into their Fourier bases via a DFT along the temporal dimension, \(\mathcal{F}({\bm{Z}}_{t-L:t}^{(n-1)}) \in \mathbb{C}^{F \times d}\) where \(F = \lfloor L/2 \rfloor + 1\), and selects the bases with the \(K\) largest amplitudes. An inverse DFT is then applied to obtain the seasonal pattern in the time domain.
Formally, this is given by the following equations:
\begin{gather}
{\bm{\Phi}}_{k,i} = \phi \Big ( \mathcal{F}({\bm{Z}}_{t-L:t}^{(n-1)})_{k,i} \Big ),\quad
{\bm{A}}_{k,i} = \Big | \mathcal{F}({\bm{Z}}_{t-L:t}^{(n-1)})_{k,i} \Big |, \nonumber \\
\kappa_i^{(1)}, \ldots, \kappa_i^{(K)} = \mathop{\arg\topk}_{k \in \{2, \ldots, F\}} \nobreakspace \Big \{ {\bm{A}}_{k,i} \Big \}, \nonumber \\
\season{j,i}{n} = \sum_{k=1}^K {\bm{A}}_{\kappa_i^{(k)},i} \Big [ \cos (2 \pi f_{\kappa_i^{(k)}} j + {\bm{\Phi}}_{\kappa_i^{(k)},i}) + \cos(2 \pi \bar{f}_{\kappa_i^{(k)}} j + \bar{{\bm{\Phi}}}_{\kappa_i^{(k)},i}) \Big ], \label{eq:seasonal-extrapolate}
\end{gather}
where \({\bm{\Phi}}_{k,i}, {\bm{A}}_{k,i}\) are the phase and amplitude of the \(k\)-th frequency for the \(i\)-th dimension, \(\mathop{\arg\topk}\) returns the arguments of the top \(K\) amplitudes, \(K\) is a hyperparameter, \(f_k\) is the Fourier frequency of the corresponding index, and \(\bar{f}_k, \bar{{\bm{\Phi}}}_{k,i}\) are the Fourier frequency and phase of the corresponding conjugates.
Finally, the latent seasonal representation of the $i$-th dimension for the lookback window is formulated as \(\season{t-L:t, i}{n} = [\season{t-L,i}{n}, \ldots, \season{t-1,i}{n}]\).
For the forecast horizon, the FA module extrapolates beyond the lookback window via, \(\season{t:t+H, i}{n} = [\season{t,i}{n}, \ldots, \season{t+H-1,i}{n}]\).
Since \(K\) is a hyperparameter typically set to a small value, the complexity of the FA mechanism is similarly \(\mathcal{O}(L \log L)\).
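The frequency-selection and extrapolation steps above can be sketched with NumPy's real FFT (a minimal illustration under our own conventions, not the paper's implementation; the function name and the \(2/L\) normalization that folds each conjugate pair into a single cosine are assumptions):

```python
import numpy as np

def frequency_attention(x, K, horizon):
    """Keep the K largest-amplitude Fourier bases of a lookback window x of
    shape (L, d), then evaluate the resulting sum of cosines on both the
    lookback steps and `horizon` future steps (extrapolation)."""
    L, d = x.shape
    coeffs = np.fft.rfft(x, axis=0)      # floor(L/2)+1 coefficients per dim
    freqs = np.fft.rfftfreq(L)           # Fourier frequencies f_k
    amp, phase = np.abs(coeffs), np.angle(coeffs)
    t = np.arange(L + horizon)
    season = np.zeros((L + horizon, d))
    for i in range(d):
        top = 1 + np.argsort(amp[1:, i])[-K:]   # skip k=0 (the DC term)
        for k in top:
            # 2/L: each retained rfft bin stands for a conjugate pair
            season[:, i] += (2.0 / L) * amp[k, i] * np.cos(
                2 * np.pi * freqs[k] * t + phase[k, i])
    return season[:L], season[L:]
```

On a pure cosine whose period divides \(L\), a single selected basis reproduces the signal exactly on the horizon.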
\section{Experiments}
\label{sec:experiments}
This section presents extensive empirical evaluations on the LSTF task over 6 real-world datasets (ETT, ECL, Exchange, Traffic, Weather, and ILI), coming from a variety of application areas (details in \cref{app:data}), for both multivariate and univariate settings.
This is followed by an ablation study of the various contributing components, and interpretability experiments of our proposed model. An additional analysis of computational efficiency is deferred to \cref{app:efficiency} for space.
For the main benchmark, datasets are split into train, validation, and test sets chronologically, following a 60/20/20 split for the ETT datasets and 70/10/20 split for other datasets. Inputs are zero-mean normalized and we use MSE and MAE as evaluation metrics. Further details on implementation and hyperparameters can be found in \cref{app:implementation}.
\subsection{Results}
\input{tables/multivar_results}
For the multivariate benchmark, baselines include recently proposed time-series/efficient Transformers -- Autoformer, Informer, LogTrans, and Reformer \cite{Kitaev2020Reformer}, and RNN variants -- LSTnet \cite{lai2018modeling}, and LSTM \cite{hochreiter1997long}. Univariate baselines further include N-BEATS \cite{oreshkin2019n}, DeepAR \cite{salinas2020deepar}, ARIMA, Prophet \cite{taylor2018forecasting}, and AutoETS \cite{bhatnagar2021merlion}. We obtain baseline results from the following papers: \cite{wu2021autoformer, zhou2021informer}, and further run AutoETS from the Merlion library \cite{bhatnagar2021merlion}.
\cref{tab:multivar-results} summarizes the results of {ETSformer} against top-performing baselines on a selection of datasets for the multivariate setting; results for the univariate setting appear in \cref{tab:univar-results} in \cref{app:univar-results} for space. Results for {ETSformer} are averaged over three runs (standard deviations in \cref{app:sd}).
Overall, {ETSformer} achieves state-of-the-art performance, attaining the best performance (across all datasets/settings, based on MSE) on 35 out of 40 settings for the multivariate case, and 17 out of 23 for the univariate case. Notably, on Exchange, a dataset with no obvious periodic patterns, {ETSformer} demonstrates an average (over forecast horizons) improvement of 39.8\% over the best performing baseline, evidencing its strong trend forecasting capabilities. We highlight that for cases where {ETSformer} does not achieve the best performance, it is still highly competitive: based on MSE, it is within the top 2 performing methods for 40 out of 40 settings in the multivariate benchmark, and 21 out of 23 settings in the univariate case.
\subsection{Ablation Study}
\input{tables/component_ablation}
We study the contribution of each major component of which the final forecast is composed: level, growth, and seasonality.
\cref{tab:component-ablation} first presents the performance of the full model, and subsequently, the performance of the resulting model by removing each component.
We observe that the composition of level, growth, and season provides the most accurate forecasts across a variety of application areas, and removing any one component results in a deterioration. In particular, estimation of the level of the time-series is critical. We also analyze the case where MH-ESA is replaced with vanilla multi-head attention, and observe that our trend attention formulation is indeed more effective.
\input{figures/interpretability}
\subsection{Interpretability}
{ETSformer} generates forecasts based on a composition of interpretable time-series components. This means we can visualize each component individually, and understand how seasonality and trend affect the forecasts. We showcase this ability in \cref{fig:interpretability} on both synthetic and real-world data. Experiments with synthetic data are crucial in this case, since we are not able to obtain the ground-truth decomposition from real-world data. {ETSformer} is first trained on the synthetic dataset (details in \cref{app:synthetic}) with clear (nonlinear) trend and seasonality patterns which we can control. Given a lookback window (without noise), we visualize the forecast, as well as the decomposed trend and seasonal forecasts.
{ETSformer} successfully forecasts interpretable level, trend (level + growth), and seasonal components, as observed in the trend and seasonality components closely tracking the ground truth patterns. Despite obtaining a low MSE, the competing decomposition based approach, Autoformer, struggles to disambiguate between trend and seasonality.
\vspace{-0.1in}
\section{Conclusion}
\label{sec:conclusion}
Inspired by the classical exponential smoothing methods and emerging Transformer approaches for time-series forecasting, we proposed {ETSformer}, a novel Transformer-based architecture for time-series forecasting which learns level, growth, and seasonal latent representations and their complex dependencies. {ETSformer} leverages the novel Exponential Smoothing Attention and Frequency Attention mechanisms, which are more effective at modeling time-series than the vanilla self-attention mechanism, while achieving \(\mathcal{O}(L \log L)\) complexity, where \(L\) is the length of the lookback window.
Our extensive empirical evaluation shows that {ETSformer} achieves state-of-the-art performance, beating competing baselines in 35 out of 40 and 17 out of 23 settings for multivariate and univariate forecasting respectively.
Future directions include incorporating additional covariates, such as holiday indicators and other dummy variables, to capture holiday effects which cannot be modeled by the FA mechanism.
\appendix
\section{Exponential Smoothing Attention}
\label{app:esa-algorithm}
\subsection{Exponential Smoothing Attention Matrix}
\label{subapp:esa-matrix}
\begin{gather*}
{\bm{A}}_{\mathrm{ES}} = \begin{bmatrix}
(1-\alpha)^1 & \alpha & 0 & 0 & \ldots & 0 \\
(1-\alpha)^2 & \alpha (1-\alpha) & \alpha & 0 & \ldots & 0 \\
(1-\alpha)^3 & \alpha (1-\alpha)^2 & \alpha (1-\alpha) & \alpha & \ldots & 0 \\
\vdots & \vdots & \vdots & \vdots & \ddots & \vdots \\
(1 - \alpha)^L & \alpha (1-\alpha)^{L-1} & \ldots & \alpha (1-\alpha)^j & \ldots & \alpha
\end{bmatrix}
\end{gather*}
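For concreteness, the matrix can be built and checked numerically (our sketch, not the paper's code; the extra first column carries the weight \((1-\alpha)^t\) on the initial state, and every row sums to \(1\) since \((1-\alpha)^t + \alpha\sum_{j=0}^{t-1}(1-\alpha)^j = 1\)):

```python
import numpy as np

def es_attention_matrix(alpha, L):
    """Construct the L x (L+1) exponential smoothing attention matrix shown
    above: row t weights the initial state by (1-alpha)^t and input j
    (j = 0..t-1) by alpha*(1-alpha)^(t-1-j), i.e. a causal, lower-triangular
    weighting that decays exponentially with lag."""
    A = np.zeros((L, L + 1))
    for t in range(1, L + 1):
        A[t - 1, 0] = (1 - alpha) ** t
        for j in range(t):
            A[t - 1, j + 1] = alpha * (1 - alpha) ** (t - 1 - j)
    return A
```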
\subsection{Efficient Exponential Smoothing Attention Algorithm}
\label{subapp:esa-algo}
\input{algorithms/efficient_esa}
\subsection{Level Smoothing via Exponential Smoothing Attention}
\label{subapp:level-esa}
\begin{align*}
\level{t}{n} & = \alpha * ( \level{t}{n-1} - \season{t}{n} ) + (1 - \alpha) * ( \level{t-1}{n} + \trend{t-1}{n} ) \\
& = \alpha * (\level{t}{n-1} - \season{t}{n}) + (1 - \alpha) * \trend{t-1}{n} \\
& \hspace{3em} + (1 - \alpha) * [\alpha * (\level{t-1}{n-1} - \season{t-1}{n}) + (1 - \alpha) * (\level{t-2}{n} + \trend{t-2}{n}) ] \\
& = \alpha * (\level{t}{n-1} - \season{t}{n}) + \alpha * (1 - \alpha) * (\level{t-1}{n-1} - \season{t-1}{n}) \\
& \hspace{3em} + (1 - \alpha) * \trend{t-1}{n} + (1 - \alpha )^2 * \trend{t-2}{n} \\
& \hspace{3em} + (1 - \alpha)^2 [\alpha * (\level{t-2}{n-1} - \season{t-2}{n}) + (1 - \alpha) * (\level{t-3}{n} + \trend{t-3}{n})] \\
& \vdotswithin{=} \\
& = (1 - \alpha)^t (\level{0}{n} - \season{0}{n}) + \sum_{j=0}^{t-1} \alpha * (1 - \alpha)^j * ( \level{t-j}{n-1} - \season{t-j}{n}) + \sum_{k=1}^t (1 - \alpha)^k * \trend{t-k}{n} \\
& = \mathrm{ESA}(\level{t-L:t}{n-1} - \season{t-L:t}{n}) + \sum_{k=1}^t (1 - \alpha)^k * \trend{t-k}{n}
\end{align*}
Based on the above expansion of the level equation, we observe that \(\level{t}{n}\) can be computed as a sum of two terms, the first of which is an \(\mathrm{ESA}\) term; finally, we note that the second term can also be calculated using the conv1d\_fft algorithm, resulting in a fast implementation of level smoothing.
\subsection{Further Details on ESA Implementation}
\label{subapp:esa-implementation}
\begin{minipage}[t]{0.49\textwidth}
\input{algorithms/naive_esa}
\end{minipage}
\begin{minipage}[t]{0.49\textwidth}
\input{algorithms/conv1d_fft}
\end{minipage}
\cref{alg:naive-esa} describes the naive implementation for ESA by first constructing the exponential smoothing attention matrix, \({\bm{A}}_{\mathrm{ES}}\), and performing the full matrix-vector multiplication.
Efficient \(\mathrm{ESA}\) relies on \cref{alg:conv-fft} to achieve \(\mathcal{O}(L \log L)\) complexity by speeding up the matrix-vector multiplication. Due to the lower-triangular Toeplitz structure of \({\bm{A}}_{\mathrm{ES}}\) (ignoring the first column), performing a matrix-vector multiplication with it is equivalent to performing a convolution with its last row. \cref{alg:conv-fft} describes the pseudocode for fast convolutions using fast Fourier transforms.
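The equivalence can be checked directly (our NumPy sketch, not the paper's code: `conv1d_fft` zero-pads to length \(2L\) to avoid circular wrap-around, and `esa_weights` is the last row of \({\bm{A}}_{\mathrm{ES}}\) without the initial-state column, indexed by lag):

```python
import numpy as np

def conv1d_fft(v, kernel):
    """Causal convolution of v with kernel via FFT in O(L log L):
    out[t] = sum_{j <= t} kernel[t - j] * v[j]."""
    L = len(v)
    n = 2 * L  # padding >= 2L - 1 prevents circular wrap-around
    out = np.fft.irfft(np.fft.rfft(v, n) * np.fft.rfft(kernel, n), n)
    return out[:L]

def esa_weights(alpha, L):
    # weight on the input k steps in the past: alpha * (1 - alpha)^k
    return alpha * (1 - alpha) ** np.arange(L)
```

The result matches the naive lower-triangular Toeplitz matrix-vector product exactly.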
\section{Discrete Fourier Transform}
\label{app:dft}
The DFT of a sequence with regular intervals, \({\bm{x}} = ({x}_0, {x}_1, \ldots, {x}_{N-1})\) is a sequence of complex numbers,
\[ {c}_k = \sum_{n=0}^{N-1} {x}_n \cdot \exp (- i 2 \pi k n / N),\]
for \(k=0, 1, \ldots, N-1\), where \(c_k\) are known as the Fourier coefficients of their respective Fourier frequencies.
Due to the conjugate symmetry of the DFT for real-valued signals, we simply consider the first \(\lfloor N/2 \rfloor + 1\) Fourier coefficients and thus denote the DFT as \(\mathcal{F}: \mathbb{R}^N \to {\mathbb{C}}^{\lfloor N/2 \rfloor +1}\).
The DFT maps a signal to the frequency domain, where each Fourier coefficient can be uniquely represented by the amplitude, \(|{c}_k|\), and the phase, \(\phi({c}_k)\),
\begin{align*}
|{c}_k| & = \sqrt{\mathfrak{R}\{{c}_k\}^2 + \mathfrak{I}\{{c}_k\}^2}
&
\phi({c}_k) & = \tan^{-1} \bigg( \frac{\mathfrak{I}\{{c}_k\}}{\mathfrak{R}\{{c}_k\}} \bigg)
\end{align*}
where \(\mathfrak{R}\{{c}_k\}\) and \(\mathfrak{I}\{{c}_k\}\) are the real and imaginary components of \({c}_k\) respectively.
Finally, the inverse DFT maps the frequency domain representation back to the time domain,
\[{x}_n = \mathcal{F}^{-1}({\bm{c}})_n = \frac{1}{N} \sum_{k=0}^{N-1} c_k \cdot \exp(i 2 \pi k n / N).\]
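These identities can be checked numerically (our sketch using NumPy's `rfft`, which follows the same conventions; note that in practice the phase is computed with the quadrant-aware two-argument arctangent, as `np.angle` does):

```python
import numpy as np

# Each coefficient is recovered from its amplitude and phase, and the
# inverse transform maps the frequency-domain representation back exactly.
x = np.array([0.5, 1.0, -0.25, 2.0, 0.0, -1.5])
c = np.fft.rfft(x)                   # first floor(N/2)+1 coefficients
amp, phase = np.abs(c), np.angle(c)  # |c_k| and phi(c_k)
assert np.allclose(c, amp * np.exp(1j * phase))
assert np.allclose(np.fft.irfft(c, len(x)), x)
```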
\section{Implementation Details}
\label{app:implementation}
\subsection{Hyperparameters}
For all experiments, we use the same hyperparameters for the encoder layers, decoder stacks, model dimensions, feedforward layer dimensions, number of heads in multi-head exponential smoothing attention, and kernel size for input embedding, as listed in \cref{tab:hyperparams}. We perform hyperparameter tuning via a grid search over the number of frequencies \(K\), lookback window size, and learning rate, selecting the settings which perform best on the validation set based on MSE (on results averaged over three runs). The search ranges are reported in \cref{tab:hyperparams}; the lookback window size search range was set to the horizon sizes of the respective datasets.
\input{tables/hyperparams}
\subsection{Optimization}
We use the Adam optimizer \cite{kingma2015adam} with \(\beta_1 = 0.9\), \(\beta_2 = 0.999\), and \(\epsilon = 10^{-8}\), and a batch size of 32.
We schedule the learning rate with linear warmup over 3 epochs, and cosine annealing thereafter, for a total of 15 training epochs for all datasets. The minimum learning rate is set to \(10^{-30}\). For smoothing and damping parameters, we set the learning rate to be 100 times larger and do not use learning rate scheduling. Training was done on an Nvidia A100 GPU.
\subsection{Regularization}
We apply two forms of regularization during the training phase.
\paragraph{Data Augmentations}
We utilize a composition of three data augmentations, applied in the following order: scale, shift, and jitter, each activating with a probability of 0.5.
\begin{enumerate}
\item Scale -- The time-series is scaled by a single random scalar value, obtained by sampling $\epsilon \sim \mathcal{N}(0, 0.2)$, and each time step becomes $\tilde{x}_t = \epsilon x_t$.
\item Shift -- The time-series is shifted by a single random scalar value, obtained by sampling $\epsilon \sim \mathcal{N}(0, 0.2)$, and each time step becomes $\tilde{x}_t = x_t + \epsilon$.
\item Jitter -- I.I.D. Gaussian noise is added to each time step, sampled as $\epsilon_t \sim \mathcal{N}(0, 0.2)$, so that each time step becomes $\tilde{x}_t = x_t + \epsilon_t$.
\end{enumerate}
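A sketch of this augmentation pipeline (our illustration; the function name, the `rng` argument, and the defaults are assumptions, with the scale/shift scalars following the \(\mathcal{N}(0, 0.2)\) sampling described above):

```python
import numpy as np

def augment(x, rng, p=0.5, sigma=0.2):
    """Apply scale, shift, and jitter in order, each with probability p."""
    x = x.copy()
    if rng.random() < p:        # scale: one random scalar for the series
        x *= rng.normal(0.0, sigma)
    if rng.random() < p:        # shift: one random scalar for the series
        x += rng.normal(0.0, sigma)
    if rng.random() < p:        # jitter: i.i.d. noise per time step
        x += rng.normal(0.0, sigma, size=x.shape)
    return x
```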
\paragraph{Dropout}
We apply dropout \cite{srivastava2014dropout} with a rate of \(p=0.2\) across the model. Dropout is applied on the outputs of the Input Embedding, Frequency Self-Attention and Multi-Head ES Attention blocks, in the Feedforward block (after activation and before normalization), on the attention weights, as well as damping weights.
\section{Datasets}
\label{app:data}
\textbf{ETT}\footnote{\url{https://github.com/zhouhaoyi/ETDataset}}
Electricity Transformer Temperature \cite{zhou2021informer} is a multivariate time-series dataset comprising load and oil temperature data recorded every 15 minutes from electricity transformers. ETT consists of two variants, ETTm and ETTh, where ETTh is the hourly-aggregated version of ETTm, the original 15-minute-level dataset.
\textbf{ECL}\footnote{\url{https://archive.ics.uci.edu/ml/datasets/ElectricityLoadDiagrams20112014}}
Electricity Consuming Load measures the electricity consumption of 321 clients over two years. The original dataset was collected at the 15 minute level, but is pre-processed into an hourly level dataset.
\textbf{Exchange}\footnote{\url{https://github.com/laiguokun/multivariate-time-series-data}}
Exchange \cite{lai2018modeling} tracks the daily exchange rates of eight countries (Australia, United Kingdom, Canada, Switzerland, China, Japan, New Zealand, and Singapore) from 1990 to 2016.
\textbf{Traffic}\footnote{\url{https://pems.dot.ca.gov/}}
Traffic is an hourly dataset from the California Department of Transportation describing road occupancy rates in San Francisco Bay area freeways.
\textbf{Weather}\footnote{\url{https://www.bgc-jena.mpg.de/wetter/}}
Weather measures 21 meteorological indicators like air temperature, humidity, etc., every 10 minutes for the year of 2020.
\textbf{ILI}\footnote{\url{https://gis.cdc.gov/grasp/fluview/fluportaldashboard.html}}
Influenza-like Illness records the ratio of patients seen with ILI and the total number of patients on a weekly basis, obtained by the Centers for Disease Control and Prevention of the United States between 2002 and 2021.
\section{Synthetic Dataset}
\label{app:synthetic}
The synthetic dataset is constructed as a combination of trend and seasonal components.
Each instance in the dataset has a lookback window length of 192 and forecast horizon length of 48.
The trend pattern follows a nonlinear, saturating pattern, \(b(t) = \frac{1}{1 + \exp(\beta_0(t-\beta_1))}\), where \(\beta_0=-0.2, \beta_1=192\).
The seasonal pattern follows a complex periodic pattern formed by a sum of sinusoids. Concretely, \(s(t) = A_1 \cos(2 \pi f_1 t) + A_2 \cos(2 \pi f_2 t)\), where \(f_1 = 1/10, f_2 = 1/13\) are the frequencies and \(A_1 = A_2 = 0.15\) are the amplitudes.
During the training phase, we add a noise component consisting of i.i.d. Gaussian noise with standard deviation 0.05.
Finally, the \(i\)-th instance of the dataset is \(x_i = [x_i(1), x_i(2), \ldots, x_i(192+48)]\), where \(x_i(t) = b(t) + s(t + i) + \epsilon\).
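The construction can be sketched as follows (our illustration; the function name and the noise switch are assumptions, with parameters taken from the description above):

```python
import numpy as np

def synthetic_instance(i, rng=None, noise_std=0.0):
    """Generate one instance: logistic (saturating) trend plus two cosines,
    evaluated over 192 lookback + 48 horizon steps, with optional i.i.d.
    Gaussian noise (std 0.05 during training)."""
    beta0, beta1 = -0.2, 192
    f1, f2, A1, A2 = 1 / 10, 1 / 13, 0.15, 0.15
    t = np.arange(1, 192 + 48 + 1)
    trend = 1.0 / (1.0 + np.exp(beta0 * (t - beta1)))
    season = (A1 * np.cos(2 * np.pi * f1 * (t + i))
              + A2 * np.cos(2 * np.pi * f2 * (t + i)))
    eps = rng.normal(0.0, noise_std, size=t.shape) if noise_std else 0.0
    return trend + season + eps
```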
\section{Univariate Forecasting Benchmark}
\label{app:univar-results}
\input{tables/univar_results}
\section{{ETSformer} Standard Deviation}
\label{app:sd}
\input{tables/sd}
\section{Computational Efficiency}
\label{app:efficiency}
\input{figures/efficiency}
In this section, our goal is to compare {ETSformer}'s computational efficiency with that of competing Transformer-based approaches. As visualized in \cref{fig:efficiency}, {ETSformer} maintains competitive efficiency with competing quasi-linear complexity Transformers, while obtaining state-of-the-art performance.
Furthermore, due to {ETSformer}'s unique decoder architecture which relies on its Trend Damping and Frequency Attention modules rather than output embeddings, {ETSformer} maintains superior efficiency as forecast horizon increases.
\end{document} |
\begin{document}
\title[Superquadratic trace functions]{Klein's trace inequality and superquadratic trace functions}
\author[M. Kian \MakeLowercase{and} M.W. Alomari]{Mohsen Kian \MakeLowercase{and} Mohammad W. Alomari}
\address{Mohsen Kian: Department of Mathematics, University of Bojnord, P.O. Box
1339, Bojnord 94531, Iran}
\email{[email protected]}
\address{Mohammad W. Alomari: Department of Mathematics, Faculty of Science and Information Technology, Irbid National University, P.O. Box 2600, Irbid, P.C. 21110, Jordan.}
\email{[email protected]}
\subjclass[2010]{Primary: 47A56, 15A45; Secondary: 15A18, 15A42.}
\keywords{Klein's trace inequality, superquadratic trace function, majorization}
\begin{abstract}
We show that if $f$ is a non-negative superquadratic function, then $A\mapsto\mathrm{Tr} f(A)$ is a superquadratic function on the matrix algebra. In particular,
\begin{align*}
\mathrm{Tr}\; f\left( {\frac{{A + B}}{2}} \right) +\mathrm{Tr}\; f\left(\left| {\frac{{A - B}}{2}}\right|\right) \leq \frac{{\mathrm{Tr}\; {f\left( A \right)} + \mathrm{Tr}\; {f\left( B \right)} }}{2}
\end{align*}
holds for all positive matrices $A,B$.
In addition, we present a Klein inequality for superquadratic functions as
$$
\mathrm{Tr}[f(A)-f(B)-(A-B)f'(B)]\geq \mathrm{Tr}[f(|A-B|)]
$$
for all positive matrices $A,B$.
In particular, this gives an improvement of Klein's inequality for non-negative convex functions.
As a consequence, some variants of the Jensen trace inequality for superquadratic functions are presented.
\end{abstract}
\maketitle
\section{Introduction and Preliminaries}
In the study of quantum mechanical systems, there are many famous concepts related to the trace function $A\mapsto\mathrm{Tr}(A)$. The well-known relative entropy of a density matrix $\rho$ (a positive matrix of trace one) with respect to another density matrix $\sigma$ is defined by
$$S(\rho|\sigma)=\mathrm{Tr}(\rho\log\rho)-\mathrm{Tr}(\rho\log\sigma).$$
More generally, for a proper (continuous) real function $f$, the study of the mapping $A\mapsto\mathrm{Tr}(f(A))$ is important.
The main subject of this paper is to study this mapping for a class of real functions, the superquadratic functions. It is known that if $f: \mathbb{R} \to \mathbb{R}$ is a continuous convex (monotone increasing) function, then the trace function $A\mapsto \mathrm{Tr}\;\left(f\left(A\right)\right)$ is a convex (monotone increasing) function, see \cite{HJ,EHL}. In Section 2, we present this result for superquadratic functions.
For all Hermitian $n\times n$ matrices $A$ and $B$ and all differentiable convex functions $f: \mathbb{R} \to \mathbb{R}$ with derivative $f^{\prime}$, the well-known Klein inequality reads as
\begin{align}
\label{eq4.2}\mathrm{Tr}\;\left[ { {f\left( A \right) - f\left( B \right) - \left( {A - B} \right)f'\left( B \right)}} \right]\ge0.
\end{align}
With $f(t) = t \log t$ $(t>0)$, this gives
\begin{align*}
S\left( {A|B} \right) = \mathrm{Tr}\; A\left( {\log A - \log B} \right) \ge \mathrm{Tr}\;\left( {A - B} \right)
\end{align*}
for positive matrices $A,B$. If $A$ and $B$ are density matrices, then $S\left(A|B\right)\ge0$. This is a classical application of the Klein inequality; see \cite{Ca,PZ}. For a collection of trace inequalities the reader can refer to \cite{CaLi, FL,FKY, Hi,ShAb,Ya} and references therein.
In Section 3, we present a Klein trace inequality for superquadratic functions. We show that our result improves previous results in the case of non-negative functions. In addition, some applications of our results present counterparts to some known trace inequalities. We give some examples to clarify our results.
\bigskip
Let $\mathscr{B}\left( \mathscr{H}\right) $ be the $C^*$-algebra
of all bounded linear operators defined on a complex Hilbert space
$\left( \mathscr{H};\left\langle \cdot ,\cdot \right\rangle
\right)$ with the identity operator $I$. When $\dim \mathscr{H}=n$, we identify $\mathscr{B}\left( \mathscr{H}\right)$
with the algebra $\mathbb{M}_{n}$ of $n$-by-$n$ complex
matrices. We denote by $\mathbb{H}_n$ the real subspace of Hermitian matrices and by $\mathbb{M}^{+}_{n}$ the cone of positive (semidefinite) matrices. The identity matrix of any size will be denoted by $I$.
Every Hermitian matrix $A\in\mathbb{H}_n$ enjoys the spectral decomposition $A=\sum_{j=1}^{n}\lambda_j P_j$, where the $\lambda_j$ are eigenvalues of $A$ and the $P_j$ are projection matrices with $\sum_{j=1}^{n} P_j=I$. If $f$ is a continuous real function defined on the set of eigenvalues of $A$, then $f(A)$ is the matrix defined via the spectral decomposition by $f(A)=\sum_{j=1}^{n}f(\lambda_j) P_j$. The eigenvalues of $f(A)$ are just the $f(\lambda_j)$. Moreover, if $U$ is a unitary matrix, then $f(U^*AU)=U^*f(A)U$.
For $A=[a_{ij}]\in \mathbb{M}_{n}$, the canonical trace of $A$ is denoted by $\mathrm{Tr} A$ and defined to be $\sum_{i=1}^{n}a_{ii}$. The canonical trace is a unitarily invariant mapping, that is, $\mathrm{Tr}\, UAU^*=\mathrm{Tr}\, A$ for every unitary matrix $U$. So, when $\lambda_1,\ldots,\lambda_n$ are eigenvalues of $A$ and $\left\{\mathbf{u}_1,\ldots,\mathbf{u}_n\right\}$ is an orthonormal set of corresponding eigenvectors in $\mathbb{C}^n$, then
$$\mathrm{Tr}\, A=\sum_{j=1}^n{\lambda_j}(A)=\sum_{j=1}^n{\left\langle A\mathbf{u}_j,\mathbf{u}_j\right\rangle}\quad\mbox{and}\quad \mathrm{Tr}\, f(A)=\sum_{j=1}^nf({\lambda_j}(A))=\sum_{j=1}^nf({\left\langle A\mathbf{u}_j,\mathbf{u}_j\right\rangle}).$$
If $\mathscr{H}$ is a separable Hilbert space with an orthonormal basis $\{e_i\}_i$, an operator $A\in\mathscr{B}\left( \mathscr{H}\right)$ is said to be a trace class operator if
\begin{align*}
\left\| A \right\|_1 = \sum\limits_i {\left\langle {\left( {A^* A} \right)^{1/2} e_i ,e_i } \right\rangle }
\end{align*}
is finite. In this case, the trace of $A$ is defined by $\mathrm{Tr}\left( A \right) = \sum\limits_i {\left\langle {Ae_i ,e_i } \right\rangle }$
and is independent of the choice of the orthonormal basis. When $\mathscr{H}$ is finite-dimensional, every operator is trace class and this definition of the trace of $A$ coincides with the definition of the trace of a matrix. \\
For a vector $\mathbf{x}=(x_1,\ldots,x_n)$ in $\mathbb{R}^n$, let $\mathbf{x}^\downarrow$ and $\mathbf{x}^\uparrow$ denote the vectors obtained by rearranging the entries of $\mathbf{x}$ in decreasing and increasing order, respectively, i.e., $x_1^\downarrow\geq\ldots\geq x_n^\downarrow$ and $x_1^\uparrow\leq\ldots\leq x_n^\uparrow$.
A vector $\mathbf{x}\in\mathbb{R}^n$ is said to be weakly majorized by $\mathbf{y}\in\mathbb{R}^n$, denoted by $\mathbf{x}\prec_w \mathbf{y}$, if $\sum_{j=1}^{k}x_j^\downarrow\leq \sum_{j=1}^{k}y_j^\downarrow$ holds for every $k=1,\ldots,n$. If in addition $\sum_{j=1}^{n}x_j^\downarrow= \sum_{j=1}^{n}y_j^\downarrow$, then $\mathbf{x}$ is said to be majorized by $\mathbf{y}$, denoted by $\mathbf{x}\prec \mathbf{y}$. The trace of a vector $\mathbf{x}\in\mathbb{R}^n$ is defined to be the sum of its entries and is denoted, using the same notation as for a matrix, by $\mathrm{Tr}\ \mathbf{x}$.
A matrix $P=[p_{ij}]\in\mathbb{M}_n$ is said to be doubly stochastic if all of its entries are non-negative and
$$\sum_{i=1}^{n}p_{ij}=1\quad\mbox{for all $j$}\qquad\mbox{and}\qquad \sum_{j=1}^{n}p_{ij}=1\quad\mbox{for all $i$}.$$
For all $\mathbf{x},\mathbf{y}\in\mathbb{R}^n$ it is well known that $\mathbf{x}\prec\mathbf{y}$ if and only if there exists a doubly stochastic matrix $P$ such that $\mathbf{x}=P\mathbf{y}$, see \cite[Theorem II.1.10]{bh}. More results concerning majorization can be found in \cite{bh,HJ}.
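As a small numeric illustration of this characterization (our example, with a hypothetical helper `majorizes`): the averaging matrix with all entries $1/n$ is doubly stochastic, so the constant vector with the same total is majorized by the original vector:

```python
import numpy as np

def majorizes(y, x):
    """Return True when x is majorized by y: partial sums of the
    decreasingly sorted entries of x never exceed those of y, and the
    totals agree."""
    xs, ys = np.sort(x)[::-1], np.sort(y)[::-1]
    return bool(np.all(np.cumsum(xs) <= np.cumsum(ys) + 1e-12)
                and np.isclose(xs.sum(), ys.sum()))

y = np.array([1.0, 2.0, 3.0])
P = np.full((3, 3), 1.0 / 3.0)   # doubly stochastic: rows and columns sum to 1
x = P @ y                        # the constant vector (2, 2, 2)
```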
\bigskip
A function $f:J\subseteq{\mathbb{R}}\to \mathbb{R}$ is called convex if
\begin{align}
f\left( \alpha t +\left(1-\alpha\right)s \right)\le \alpha f\left(
{t} \right)+ \left(1-\alpha\right) f\left( {s}
\right),\label{eq1.1}
\end{align}
for all points $ s,t \in J$ and all $\alpha\in [0,1]$. If $-f$
is convex then we say that $f$ is concave. Moreover, if $f$ is
both convex and concave, then $f$ is said to be affine.
Geometrically, for all $x,y\in J$ with $x \le t \le y$, the point $\left(t,f\left(t\right)\right)$ on the graph of $f$ lies on or
below the chord joining the endpoints $\left(x,f\left(x\right)\right)$ and $\left(y,f\left(y\right)\right)$. In symbols, we write
\begin{align*}
f\left(t\right)\le \frac{f\left( y \right) - f\left( x \right)
}{y-x} \left( {t-x} \right)+ f\left( x \right)
\end{align*}
for any $x \le t \le y$ and $x,y\in J$.
Equivalently, given a function $f : J\to \mathbb{R}$, we say that
$f$ admits a support line at $s \in J $ if there exists a $\lambda
\in \mathbb{R}$ such that
\begin{align}
f\left( t \right) \ge f\left( s \right) + \lambda \left( {t - s}
\right) \label{eq1.2}
\end{align}
for all $t\in J$.
The set of all such $\lambda$ is called the subdifferential of $f$
at $s$ and is denoted by $\partial f(s)$. Indeed, the
subdifferential gives the slopes of the supporting lines for
the graph of $f$, so that if $f$ is convex, then $\partial f(s) \ne
\emptyset$ at all interior points of its domain.
From this point of view, Abramovich et al. \cite{SJS} extended the
above idea to what they called superquadratic functions. Namely,
a function $f:[0,\infty)\to \mathbb{R}$ is called superquadratic
provided that for all $s\ge0$ there exists a constant $C_s\in
\mathbb{R}$ such that
\begin{align}\label{eq1.3}
f\left( t \right) \ge f\left( s \right) + C_s \left( {t - s}
\right) + f\left( {\left| {t - s} \right|} \right)
\end{align}
for all $t\ge0$. A function $f$ is called subquadratic if $-f$ is
superquadratic. Thus, for a superquadratic function we require
that $f$ lies above its tangent line plus a translation of $f$
itself. If $f$ is differentiable and satisfies $f(0) = f^{\prime}(0) = 0$, then one can easily see
that the constant $C_s$ in the definition is necessarily $f^{\prime}(s)$, see \cite{AJS}.
Prima facie, superquadraticity appears to be stronger than
convexity, but if $f$ takes negative values then it
may be considered weaker. On the other hand, non-negative subquadratic functions need not be concave; in other words, there exist subquadratic functions which are convex. This fact helps us first to improve some results for convex functions and second to present some counterpart
results concerning convex functions.
Some known examples of superquadratic functions are power functions. For every $p\geq2$, the function $f(t)=t^p$ is superquadratic as well as convex. If $1\leq p\leq 2$, then $f(t)=-t^p$ is superquadratic and concave. To see more examples of superquadratic and subquadratic functions and their properties, the reader can refer to \cite{AJS,SJS,SIP,MA,BPV}. Among others, Abramovich et al. \cite{SJS} proved that the inequality
\begin{align}\label{eq.SJS}
f\left( {\int {\varphi \,d\mu } } \right) \le \int {f\left(
{\varphi \left( s \right)} \right)-f\left( {\left| {\varphi \left(
s \right) - \int {\varphi \,d\mu } } \right|} \right)d\mu \left( s
\right)}
\end{align}
holds for all probability measures $\mu$ and all nonnegative, $\mu$-integrable functions $\varphi$ if and only if $f$ is superquadratic.
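As a quick sanity check of the definition of superquadraticity (our verification, not a result from the references), the function $f(t)=t^2$ is superquadratic with $C_s=2s$, and in fact attains equality in the defining inequality:
\begin{align*}
f(s) + C_s(t-s) + f(|t-s|) &= s^2 + 2s(t-s) + (t-s)^2\\
&= s^2 + 2st - 2s^2 + t^2 - 2st + s^2 = t^2 = f(t).
\end{align*}
Thus $t^2$ marks the borderline case among the power functions.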
As a matrix extension of \eqref{eq.SJS}, Kian \cite{K} showed that if $f:[0,\infty)\to\mathbb{R}$ is a continuous superquadratic function, then
\begin{align}\label{qk}
f(\langle A\mathbf{u},\mathbf{u}\rangle)\leq \langle f(A)\mathbf{u},\mathbf{u}\rangle-\langle f(|A-\langle A\mathbf{u},\mathbf{u}\rangle|)\mathbf{u},\mathbf{u}\rangle
\end{align}
holds for every positive matrix $A\in\mathbb{M}_n^+$ and every unit vector $\mathbf{u}\in\mathbb{C}^n$. More generally, it has been shown in \cite{KS} that if $\Phi:\mathbb{M}_n\to\mathbb{M}_m$ is a unital positive linear map, then
\begin{align}\label{qkd}
f(\langle \Phi(A)\mathbf{u},\mathbf{u}\rangle)\leq \langle \Phi(f(A))\mathbf{u},\mathbf{u}\rangle-\left\langle \Phi(f(|A-\langle \Phi(A)\mathbf{u},\mathbf{u}\rangle|))\mathbf{u},
\mathbf{u}\right\rangle
\end{align}
holds for every positive matrix $A\in\mathbb{M}_n^+$ and every unit vector $\mathbf{u}\in\mathbb{C}^m$.
\section{Superquadratic trace functions}
It is known that if $f: \mathbb{R} \to \mathbb{R}$ is a continuous convex function, then the trace function $A\mapsto \mathrm{Tr}\left[f\left(A\right)\right]$ is convex on $\mathbb{M}_n$. In this section, we establish an analogue of this fact for superquadratic functions. We need some lemmas.
If $\mathbf{x}=(x_1,\cdots,x_n)\in\mathbb{R}^n$ is a vector and $f:\mathbb{R}\to\mathbb{R}$ is a real function, we denote the vector $\left(f\left(x_1\right),\cdots,f\left(x_n\right)\right)$ by $f(\mathbf{x})$.
\begin{lemma}\label{lm1}\cite{bh}
For $\mathbf{x},\mathbf{y}\in\mathbb{R}^n$:\\
\rm{ (i)}\ \ If $\mathbf{x}\prec \mathbf{y}$, then $|\mathbf{x}|\prec |\mathbf{y}|$, where $|\mathbf{x}|=\left(\left|x_1\right|,\cdots,\left|x_n\right|\right)$.\\
\rm{(ii)}\ \ $\mathbf{x}\prec \mathbf{y}$ if and only if $\mathrm{Tr} f(\mathbf{x})\leq \mathrm{Tr} f(\mathbf{y})$ for every convex function $f$.
\end{lemma}
\begin{lemma}\label{lm2}
Assume that $\mathbf{x},\mathbf{y}\in\mathbb{R}_+^n$ and $f:\left[0,\infty\right)\to \mathbb{R}$ is a superquadratic function. If $\mathbf{x}\prec \mathbf{y}$, then there exists a doubly stochastic matrix $P$ such that $\mathrm{Tr} f(\mathbf{x})\leq \mathrm{Tr} f(\mathbf{y})-\mathrm{Tr} PF$, where $F=\left[f\left(\left|x_i-y_j\right|\right)\right]$.
\end{lemma}
\begin{proof}
For $\mathbf{x},\mathbf{y}\in\mathbb{R}_+^n$ with $\mathbf{x}\prec \mathbf{y}$, there exists a doubly stochastic matrix $P=[p_{ij}]$ such that $\mathbf{x}=P\mathbf{y}$. Therefore $x_i=\sum_{j=1}^{n}p_{ij}y_j$ for every $i=1,\cdots,n$, where $\sum_{j=1}^{n}p_{ij}=1$. If $f$ is a superquadratic function, then from \eqref{eq.SJS} we conclude that the inequality
\begin{align}
f(x_i)=f\left(\sum_{j=1}^{n}p_{ij}y_j\right)\leq\sum_{j=1}^{n}p_{ij}f(y_j)-\sum_{j=1}^{n}p_{ij}
f\left(\left|y_j-x_i\right|\right)
\end{align}
holds for every $i=1,\cdots,n$. Summing over $i$, we obtain
$$\mathrm{Tr} f(\mathbf{x})\leq \mathrm{Tr} f(\mathbf{y})-\sum_{i,j=1}^{n}p_{ij} f\left(\left|y_j-x_i\right|\right).$$
If we put $F=\left[f\left(\left|x_i-y_j\right|\right)\right]$, then $\sum_{i,j=1}^{n}p_{ij} f\left(\left|y_j-x_i\right|\right)=\mathrm{Tr} PF$. This completes the proof.
\end{proof}
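The inequality of Lemma \ref{lm2} can be checked numerically. The sketch below (an illustration, not part of the formal development) takes $f(t)=t^3$, builds a doubly stochastic $P$ as a convex combination of permutation matrices, sets $\mathbf{x}=P\mathbf{y}$ so that $\mathbf{x}\prec\mathbf{y}$, and evaluates both sides of the lemma.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
f = lambda t: t**3  # superquadratic on [0, infinity)

# Doubly stochastic P: a convex combination of permutation matrices
perms = [(0, 1, 2, 3), (1, 0, 3, 2), (3, 2, 1, 0)]
weights = [0.5, 0.3, 0.2]
P = sum(w * np.eye(n)[list(p)] for w, p in zip(weights, perms))

y = rng.uniform(0.0, 5.0, size=n)
x = P @ y                                  # x = Py, hence x is majorized by y

F = f(np.abs(x[:, None] - y[None, :]))     # F[i, j] = f(|x_i - y_j|)
correction = (P * F).sum()                 # sum_{i,j} p_ij f(|x_i - y_j|)
lhs = f(x).sum()                           # Tr f(x)
rhs = f(y).sum() - correction              # Tr f(y) minus the correction term
```

Since each $x_i$ is the $p_{ij}$-weighted average of the $y_j$, the superquadratic inequality \eqref{eq.SJS} applies row by row, so `lhs <= rhs` holds exactly.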
\begin{lemma}\cite{SJS}\label{lm44}
\label{lemma1}Let $f:[0,\infty)\to\mathbb{R}$ be a superquadratic function. Then
\begin{enumerate}
\item $f\left(0\right)\le 0$;
\item If $f$ is differentiable and $f(0)=f^{\prime}(0)=0$, then $C_s=f^{\prime}(s)$ in \eqref{eq1.3} for all $s\ge0$;
\item If $f$ is non-negative, then $f(0)=f^{\prime}(0)=0$ and $f$ is convex and increasing.
\end{enumerate}
\end{lemma}
\begin{theorem} \label{lemma3}
Let $f:\left[0,\infty\right)\to \mathbb{R}$ be a continuous superquadratic function. If $f$ is non-negative, then the mapping $A\mapsto \mathrm{Tr} \left[f\left(A\right)\right]$ is a superquadratic function on $\mathbb{M}_n^+$. More generally, the inequality
\begin{align}\label{eq2.1}
\mathrm{Tr}\; f\left( {\frac{{A + B}}{2}} \right) +\mathrm{Tr}\; f\left(\left| {\frac{{A - B}}{2}}\right|\right) \leq \frac{{\mathrm{Tr}\; {f\left( A \right)} + \mathrm{Tr}\; {f\left( B \right)} }}{2}-\mathrm{Tr}[PG+QF]
\end{align}
holds for some doubly stochastic matrices $P=[p_{ij}]$ and $Q=[q_{ij}]$, in which $$G=\left[f\left(\frac{1}{2}\left|\left|\lambda_i\right|-\left|\mu_j-\nu_j\right|\right|\right)\right]
\quad\mbox{and}\quad F=\left[f\left(\frac{1}{2}\left|\xi_i-\mu_j-\nu_j\right|\right)\right],$$
where $\lambda_i,\xi_i,\mu_i$ and $\nu_i$ are eigenvalues of $A-B$, $A+B$, $A$ and $B$, respectively.
\end{theorem}
\begin{proof}
For a Hermitian matrix $X$, let $\lambda^\downarrow(X)$ and $\lambda^\uparrow(X)$ denote the vectors of eigenvalues of $X$ arranged in decreasing order and increasing order, respectively. Recall that \cite{bh} if $A,B$ are Hermitian matrices, then
\begin{align}\label{j}
\lambda^\downarrow(A)- \lambda^\downarrow(B)\prec\lambda^\downarrow(A-B)\prec \lambda^\downarrow(A)- \lambda^\uparrow(B)
\end{align}
and
\begin{align}\label{jj}
\lambda^\downarrow(A)+ \lambda^\uparrow(B)\prec\lambda^\downarrow(A+B)\prec \lambda^\downarrow(A)+ \lambda^\downarrow(B).
\end{align}
From \eqref{j} we have
\begin{align}\label{jg}
\lambda^\uparrow(B-A)\prec \lambda^\downarrow(B)-\lambda^\downarrow(A)
\end{align}
and, in view of Lemma \ref{lm1}, this gives
\begin{align}\label{jgg}
\left|\lambda^\downarrow(A-B)\right|\prec \left|\lambda^\downarrow(B)-\lambda^\downarrow(A)\right|.
\end{align}
Assume that $\mu_j$ and $\nu_j$ \ $(j=1,\cdots,n)$ are the eigenvalues of $A$ and $B$, respectively, arranged in decreasing order. If $f$ is superquadratic, then it follows from \eqref{jgg} and Lemma \ref{lm2} that
\begin{align*}
\mathrm{Tr} f(|A-B|)&=\sum_{j=1}^{n}f\left(|\lambda_j(A-B)|\right)= \mathrm{Tr}\ f\left(\left|\lambda^\downarrow(A-B)\right|\right)\nonumber\\
&\leq \mathrm{Tr}\ f\left(\left|\lambda^\downarrow(B)-\lambda^\downarrow(A)\right|\right)-\mathrm{Tr} PG\qquad(\mbox{by Lemma \ref{lm2}})\nonumber\\
&=\sum_{j=1}^{n} f\left(\left|\mu_j-\nu_j\right|\right)-\mathrm{Tr} PG,
\end{align*}
for some doubly stochastic matrix $P=[p_{ij}]$, in which $G=\left[f\left(\left|\left|\lambda_i\right|-\left|\mu_j-\nu_j\right|\right|\right)\right]$ and the $\lambda_i$'s are eigenvalues of $A-B$. This implies that for every $\alpha\geq0$, the inequality
\begin{align}\label{j2}
\mathrm{Tr} f(\alpha|A-B|)\leq \sum_{j=1}^{n} f\left(\alpha\left|\mu_j-\nu_j\right|\right)-\mathrm{Tr} P_\alpha G_\alpha
\end{align}
holds for some doubly stochastic matrix $P_\alpha$, in which $G_\alpha=\left[f\left(\alpha\left|\left|\lambda_i\right|-\left|\mu_j-\nu_j\right|\right|\right)\right]$ and the $\lambda_i$'s are eigenvalues of $A-B$. Now suppose that $\alpha\in[0,1]$. Another use of Lemma \ref{lm2}, together with \eqref{jj}, gives
\begin{align}\label{jjg}
\mathrm{Tr}\ f\left(\lambda^\downarrow(\alpha A+(1-\alpha)B)\right) \leq \mathrm{Tr}\ f\left(\alpha\lambda^\downarrow(A)+ (1-\alpha)\lambda^\downarrow(B)\right)-\mathrm{Tr}\,QF
\end{align}
for some doubly stochastic matrix $Q$, where $F=\left[f\left(\left|\xi_i-\alpha\mu_j-(1-\alpha)\nu_j\right|\right)\right]$ and the $\xi_i$'s are eigenvalues of $\alpha A+(1-\alpha)B$. Therefore
{\small\begin{align*}
&\mathrm{Tr} f(\alpha A+(1-\alpha)B) \\
&=\sum_{j=1}^{n}f\left(\lambda_j^\downarrow(\alpha A+(1-\alpha)B)\right)\\
&\leq\sum_{j=1}^{n}f\left(\alpha\mu_j+ (1-\alpha)\nu_j\right)-\mathrm{Tr}\,QF\qquad\qquad(\mbox{by \eqref{jjg}})\\
&\leq \sum_{j=1}^{n}\left\{\alpha f(\mu_j)+(1-\alpha)f(\nu_j)-\alpha f\left((1-\alpha)\left|\mu_j-\nu_j\right|\right) -(1-\alpha)f\left(\alpha\left|\mu_j-\nu_j\right|\right)\right\}-\mathrm{Tr}\,QF\\
&\hspace{6cm}(\mbox{since $f$ is superquadratic})\\
&=\alpha \mathrm{Tr} f(A)+(1-\alpha)\mathrm{Tr} f(B) -\alpha\sum_{j=1}^{n}f\left((1-\alpha)\left|\mu_j-\nu_j\right|\right)-(1-\alpha)\sum_{j=1}^{n}
f\left(\alpha\left|\mu_j-\nu_j\right|\right)-\mathrm{Tr}\,QF\\
&\leq\alpha \mathrm{Tr} f(A)+(1-\alpha)\mathrm{Tr} f(B)\\
&\qquad-\alpha\mathrm{Tr} f\left((1-\alpha)\left|A-B\right|\right)-(1-\alpha)\mathrm{Tr} f\left(\alpha\left|A-B\right|\right)-\mathrm{Tr}[(1-\alpha)P_\alpha G_\alpha+\alpha P_{1-\alpha} G_{1-\alpha}+QF],
\end{align*}}
where the last inequality follows from \eqref{j2}. In particular, with $\alpha=1/2$ this gives
\begin{align*}
\mathrm{Tr}\; f\left( {\frac{{A + B}}{2}} \right) +\mathrm{Tr}\; f\left(\left| {\frac{{A - B}}{2}}\right|\right) \leq \frac{{\mathrm{Tr}\; {f\left( A \right)} + \mathrm{Tr}\; {f\left( B \right)} }}{2}-\mathrm{Tr}[PG+QF]
\end{align*}
for some doubly stochastic matrices $P=[p_{ij}]$ and $Q=[q_{ij}]$, in which
{\small\begin{align*}
G=\left[f\left(\frac{1}{2}\left|\left|\lambda_i\right|-\left|\mu_j-\nu_j\right|\right|\right)\right]
\quad\mbox{and}\quad F=\left[f\left(\frac{1}{2}\left|\xi_i-\mu_j-\nu_j\right|\right)\right],
\end{align*}}
where $\lambda_i$ and $\xi_i$ are eigenvalues of $A-B$ and $A+B$, respectively. Equivalently,
{\small\begin{align*}
&\mathrm{Tr}\; f\left( {\frac{{A + B}}{2}} \right) +\mathrm{Tr}\; f\left(\left| {\frac{{A - B}}{2}}\right|\right) \\ &\qquad \leq \frac{{\mathrm{Tr}\; {f\left( A \right)} + \mathrm{Tr}\; {f\left( B \right)} }}{2} -\sum_{i,j=1}^{n}\left(p_{ij}
f\left(\frac{1}{2}\left|\left|\lambda_i\right|-\left|\mu_j-\nu_j\right|\right|\right)+
q_{ij}f\left(\frac{1}{2}\left|\xi_i-\mu_j-\nu_j\right|\right)\right),
\end{align*}}
from which we conclude that if $f$ is non-negative, then $A\mapsto \mathrm{Tr}\; f\left(A\right)$ is a superquadratic function. This completes the proof.
\end{proof}
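For non-negative superquadratic $f$, dropping the nonpositive correction term $-\mathrm{Tr}[PG+QF]$ in \eqref{eq2.1} leaves the midpoint inequality $\mathrm{Tr} f\big(\tfrac{A+B}{2}\big)+\mathrm{Tr} f\big(\big|\tfrac{A-B}{2}\big|\big)\leq \tfrac12\big(\mathrm{Tr} f(A)+\mathrm{Tr} f(B)\big)$. The following numerical sanity check (illustrative only) tests this with $f(t)=t^4$ and random positive semidefinite matrices.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5

def mat_fun(f, A):
    """Apply a scalar function to a Hermitian matrix spectrally."""
    w, V = np.linalg.eigh(A)
    return (V * f(w)) @ V.conj().T

def rand_psd(n):
    X = rng.normal(size=(n, n))
    return X @ X.T

f = lambda t: t**4        # nonnegative superquadratic
A, B = rand_psd(n), rand_psd(n)

abs_half = mat_fun(lambda t: np.abs(t) / 2, A - B)          # |A - B| / 2
lhs = np.trace(mat_fun(f, (A + B) / 2)) + np.trace(mat_fun(f, abs_half))
rhs = (np.trace(mat_fun(f, A)) + np.trace(mat_fun(f, B))) / 2
```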
In 2003, Hansen and Pedersen \cite{HP} proved a trace version of the Jensen inequality. They showed that if $f:J\subseteq\mathbb{R}\to\mathbb{R}$ is a continuous convex function, then
\begin{align}\label{HPT}
\mathrm{Tr}\;\left[{f\left( {\sum\limits_{i = 1}^k {C_i^* A_i C_i } }
\right)}\right] \le \mathrm{Tr}\;\left[{\sum\limits_{i = 1}^k {C_i^*
f\left( {A_i } \right)C_i}}\right]
\end{align}
for every $k$-tuple of Hermitian matrices $(A_1,\cdots, A_k)$ in $\mathbb{M}_n$ with spectra contained in $J$ and every $k$-tuple $\left(C_1, \cdots, C_k\right)$ of matrices with $\sum_{i=1}^k{C_i^*C_i}=I$.
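A quick numerical illustration of \eqref{HPT} (not part of the paper's argument): take $f(t)=t^2$, $k=2$, and $C_1=\cos(\theta)Q_1$, $C_2=\sin(\theta)Q_2$ with $Q_1,Q_2$ orthogonal, so that $C_1^*C_1+C_2^*C_2=I$ automatically.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4

def mat_fun(f, A):
    """Apply a scalar function to a Hermitian matrix spectrally."""
    w, V = np.linalg.eigh(A)
    return (V * f(w)) @ V.conj().T

def rand_herm(n):
    X = rng.normal(size=(n, n))
    return (X + X.T) / 2

f = lambda t: t**2  # convex on all of R

# Scaled orthogonal matrices: C1^T C1 + C2^T C2 = cos^2 I + sin^2 I = I
Q1, _ = np.linalg.qr(rng.normal(size=(n, n)))
Q2, _ = np.linalg.qr(rng.normal(size=(n, n)))
theta = 0.3
C1, C2 = np.cos(theta) * Q1, np.sin(theta) * Q2

A1, A2 = rand_herm(n), rand_herm(n)
S = C1.T @ A1 @ C1 + C2.T @ A2 @ C2
lhs = np.trace(mat_fun(f, S))
rhs = np.trace(C1.T @ mat_fun(f, A1) @ C1 + C2.T @ mat_fun(f, A2) @ C2)
```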
In the rest of this section, using the concept of superquadratic functions and Theorem \ref{lemma3}, we present variants of \eqref{HPT} for superquadratic functions, which in particular give refinements of the Hansen--Pedersen trace inequality \eqref{HPT} in the case of non-negative functions. Besides our results concerning \eqref{HPT}, we state the following conjecture.
\textbf{Conjecture.}\
If $f:[0,\infty)\to\mathbb{R}$ is a continuous superquadratic function, then
{\small\begin{align}\label{HPT-s}
\mathrm{Tr}\;\left[{f\left( {\sum\limits_{i = 1}^k {C_i^* A_i C_i } }
\right)}\right] \le \mathrm{Tr}\;\left[{\sum\limits_{i = 1}^k {C_i^*
f\left( {A_i } \right)C_i}}\right]-\mathrm{Tr}\;\left[\sum\limits_{i = 1}^kC_i^*f\left(\left|A_i-\mathrm{Tr}\; \left[\sum\limits_{i = 1}^kC_i^*A_iC_i\right]\right|\right)C_i\right]
\end{align}}
for every $k$-tuple of positive matrices $(A_1,\cdots, A_k)$ in $\mathbb{M}_n^+$ and every $k$-tuple $\left(C_1, \cdots, C_k\right)$ of matrices with $\sum_{i=1}^k{C_i^*C_i}=I$.
We now use Theorem \ref{lemma3} to present the first variant of \eqref{HPT} for superquadratic functions.
\begin{corollary}\label{pr3}
Assume that $f:[0,\infty)\to[0,\infty)$ is a continuous function. If $f$ is superquadratic, then
\begin{align}\label{JTS}
\mathrm{Tr}f\left(C^*AC\right)+ \mathrm{Tr} f\left(DAD\right)\leq \mathrm{Tr}\left[C^*f(A)C+Df(A)D \right]-
\mathrm{Tr}\left[f\left( \left|DAC\right|\right)+ f\left(\left|C^*AD\right|\right)\right]
\end{align}
for every positive matrix $A\in\mathbb{M}_n^+$ and every isometry $C$, where $D=\sqrt{I-CC^*}$ (note that $D$ is Hermitian).
\end{corollary}
\begin{proof}
To prove \eqref{JTS}, we apply Theorem \ref{lemma3} and then employ an argument similar to that of \cite[Theorem 1.9]{FMPS}. Assume that $A,B\in\mathbb{M}_n^+$. If $C\in\mathbb{M}_n$ and $C^*C=I$, then the block matrices $U=\left[\begin{array}{cc}
C & D\\ 0 & -C^*
\end{array}\right]$ and $V=\left[\begin{array}{cc}
C & -D\\ 0 & C^*
\end{array}\right]$ are unitary matrices in $\mathbb{M}_{2n}$, provided that $D=(I-CC^*)^{1/2}$.
With $\tilde{A}=\left[\begin{array}{cc}
A & 0\\ 0&B
\end{array}\right]$ we compute
\begin{align}\label{qn1}
\frac{U^*\tilde{A}U +V^*\tilde{A}V}{2}=\left(C^*AC\right)\oplus \left(DAD+CBC^*\right)
\end{align}
and
\begin{align}\label{qn2}
\left|\frac{U^*\tilde{A}U -V^*\tilde{A}V}{2}\right|=\left|DAC\right|\oplus \left|C^*AD\right|.
\end{align}
Now we use Theorem \ref{lemma3} to write
\begin{align*}
&\mathrm{Tr}f\left(C^*AC\right)+ \mathrm{Tr} f\left(DAD+CBC^*\right)\\
&= \mathrm{Tr} f\left(\frac{U^*\tilde{A}U +V^*\tilde{A}V}{2}\right)\qquad\qquad\qquad(\mbox{by \eqref{qn1}})\\
&\leq \mathrm{Tr}\ \frac{f\left(U^*\tilde{A}U\right)+f\left(V^*\tilde{A}V\right)}{2}-\mathrm{Tr}\ f\left(\left|\frac{U^*\tilde{A}U -V^*\tilde{A}V}{2}\right|\right)\qquad(\mbox{by Theorem \ref{lemma3}})\\
&=\mathrm{Tr}\ \frac{U^*f(\tilde{A})U + V^*f(\tilde{A})V}{2}-\mathrm{Tr}\ f\left(\left|\frac{U^*\tilde{A}U -V^*\tilde{A}V}{2}\right|\right)\\
&=\mathrm{Tr}\left[C^*f(A)C+Df(A)D +Cf(B)C^*\right]-
\mathrm{Tr}\left[f\left( \left|DAC\right|\right)+ f\left(\left|C^*AD\right|\right)\right],
\end{align*}
where the last equality follows from \eqref{qn1} and \eqref{qn2}. Putting $B=0$ and noting that $f(0)\leq0$, we obtain the desired inequality.
\end{proof}
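Inequality \eqref{JTS} can be tested numerically with a rectangular isometry $C$ (an $n\times k$ matrix with $C^*C=I_k$), for which $D=(I-CC^*)^{1/2}$ is a nontrivial Hermitian matrix; here $f(t)=t^3$ and $\mathrm{Tr} f(|X|)$ is evaluated through the singular values of $X$. This is an illustrative check only.

```python
import numpy as np

rng = np.random.default_rng(3)
n, k = 4, 2

def mat_fun(f, A):
    """Apply a scalar function to a Hermitian matrix spectrally."""
    w, V = np.linalg.eigh(A)
    return (V * f(w)) @ V.conj().T

f = lambda t: t**3  # nonnegative superquadratic with f(0) = 0

Q, _ = np.linalg.qr(rng.normal(size=(n, n)))
C = Q[:, :k]                                   # isometry: C^T C = I_k
# D = (I - CC^*)^{1/2}; clip guards tiny negative rounding errors
D = mat_fun(lambda t: np.sqrt(np.clip(t, 0.0, None)), np.eye(n) - C @ C.T)

X = rng.normal(size=(n, n))
A = X @ X.T                                    # positive semidefinite

def tr_f_abs(M):
    """Tr f(|M|): sum f over the singular values of M."""
    return f(np.linalg.svd(M, compute_uv=False)).sum()

lhs = np.trace(mat_fun(f, C.T @ A @ C)) + np.trace(mat_fun(f, D @ A @ D))
rhs = (np.trace(C.T @ mat_fun(f, A) @ C) + np.trace(D @ mat_fun(f, A) @ D)
       - tr_f_abs(D @ A @ C) - tr_f_abs(C.T @ A @ D))
```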
We remark that a non-negative superquadratic function $f$ is convex and satisfies $f(0)=0$. If $C^*C=I$, then with $D=\sqrt{I-CC^*}$ we have $D^*D=I-CC^*\leq I$. It follows from \eqref{HPT} that
\begin{align*}
\mathrm{Tr}f\left(C^*AC\right)+ \mathrm{Tr} f\left(D^*AD\right)\leq \mathrm{Tr}\ C^*f(A)C + \mathrm{Tr}\ D^*f(A)D.
\end{align*}
Therefore Corollary \ref{pr3} gives a refinement of \eqref{HPT} when $f$ is a non-negative superquadratic function.
\bigskip
To present the second variant of \eqref{HPT}, we give the following version of \eqref{qk} and \eqref{qkd}. The proof is similar to those of \cite[Theorem 2.1]{K} and \cite[Theorem 2.3]{KS}; we include it for the sake of completeness.
\begin{lemma}\label{lm5}
Let $f:[0,\infty)\to\mathbb{R}$ be a continuous superquadratic function and let $\phi:\mathbb{M}_n\to\mathbb{M}_m$ be a unital positive linear map. If $\tau$ is a state on $\mathbb{M}_m$, then
\begin{align*}
f(\tau(\phi(A)))\leq \tau (\phi(f(A)))-\tau(\phi(f(|A-\tau(\phi(A))|)))
\end{align*}
for every positive matrix $A$.
\end{lemma}
\begin{proof}
If $A$ is a positive matrix, then applying the functional calculus to \eqref{eq1.3} with $t=A$ and then applying the positive linear functional $\tau$ gives the inequality
\begin{align*}
\tau(f\left( A \right)) \ge f\left( s \right) + C_s \left( {\tau(A) - s}
\right) + \tau(f\left( {\left| {A - s} \right|} \right))
\end{align*}
for every $s\geq0$. Put $s=\tau(A)$ to obtain
\begin{align}\label{qkn}
\tau(f\left( A \right)) \ge f\left( \tau(A)\right) + \tau(f\left( {\left| {A - \tau(A)} \right|} \right)).
\end{align}
Now assume that $\phi:\mathbb{M}_n\to\mathbb{M}_m$ is a unital positive linear map. If $\tau$ is a state on $\mathbb{M}_m$, then the mapping $\psi_\tau:\mathbb{M}_n\to\mathbb{C}$ defined by $\psi_\tau(X)=\tau(\phi(X))$ is a state on $\mathbb{M}_n$. Applying \eqref{qkn} to $\psi_\tau$ gives the desired inequality.
\end{proof}
The normalized trace $\tau(X)=\frac{1}{m}\mathrm{Tr}\,(X)$ is a state on $\mathbb{M}_m$, so Lemma \ref{lm5} yields the following result.
\begin{proposition}
Let $f:[0,\infty)\to\mathbb{R}$ be a continuous superquadratic function. If $\phi:\mathbb{M}_n\to\mathbb{M}_m$ is a unital positive linear map, then
\begin{align*}
f\left(\frac{1}{m}\mathrm{Tr}\, \phi(A)\right)\leq \frac{1}{m}\mathrm{Tr}\, \left[\phi(f(A))-\phi\left(f\left(\left|A-\frac{1}{m}\mathrm{Tr}\, \phi(A)\right|\right)\right)\right]
\end{align*}
for every positive matrix $A\in\mathbb{M}_n^+$.
\end{proposition}
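As an illustrative special case of the proposition, take $\phi$ to be the identity map (so $m=n$) and $f(t)=t^3$; the inequality then reduces to the scalar superquadratic Jensen inequality applied to the eigenvalue distribution of $A$.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 5

def mat_fun(f, A):
    """Apply a scalar function to a Hermitian matrix spectrally."""
    w, V = np.linalg.eigh(A)
    return (V * f(w)) @ V.conj().T

f = lambda t: t**3              # superquadratic on [0, infinity)
X = rng.normal(size=(n, n))
A = X @ X.T                     # positive matrix
m = np.trace(A) / n             # (1/n) Tr phi(A) with phi = identity

lhs = f(m)
rhs = (np.trace(mat_fun(f, A))
       - np.trace(mat_fun(lambda t: f(np.abs(t)), A - m * np.eye(n)))) / n
```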
In the next result, we present another variant of the Hansen--Pedersen trace inequality \eqref{HPT} for superquadratic functions. We need a well-known fact from matrix analysis.
\begin{lemma}\label{lm7}\cite{bh}
If $A\in\mathbb{H}_n$ is a Hermitian matrix, then
\begin{align}\label{eq2.6}
\sum_{j=1}^{k}\lambda_j(A)=\max \sum_{j=1}^{k}\langle A\mathbf{u}_j,\mathbf{u}_j\rangle,\quad (k=1,\cdots,n)
\end{align}
where the maximum is taken over all choices of orthonormal sets of vectors $\{\mathbf{u}_1,\cdots,\mathbf{u}_k\}$.
\end{lemma}
\begin{proposition}\label{thk11}
Let $f:[0,\infty)\to\mathbb{R}$ be a continuous superquadratic function. If $\phi:\mathbb{M}_n\to\mathbb{M}_m$ is a unital positive linear map, then
\begin{align*}
\mathrm{Tr}f\left(\phi(A)\right)\leq \mathrm{Tr}\ \phi(f(A))
-\min\left\{\sum_{j=1}^{n}\left\langle \phi\left(f\left(\left|A-\langle \phi(A)\mathbf{u}_j,\mathbf{u}_j\rangle\right|\right)\right) \mathbf{u}_j,\mathbf{u}_j\right\rangle\right\},
\end{align*}
for every positive matrix $A\in\mathbb{M}^+_n$, where the minimum is taken over all choices of orthonormal systems of vectors $\{\mathbf{u}_1,\cdots,\mathbf{u}_n\}$.
\end{proposition}
\begin{proof}
Assume that $\lambda_1,\ldots,\lambda_n$ are the eigenvalues of $\phi(A)$ and that $\{\mathbf{u}_1,\cdots, \mathbf{u}_n\}$ is an orthonormal system of corresponding eigenvectors of $\phi(A)$. Then
\begin{align*}
\mathrm{Tr}f\left(\phi(A)\right)&=\sum_{j=1}^{n}f\left(\lambda_j(\phi(A))\right)\\
&=\sum_{j=1}^{n}f\left(\left\langle \phi(A) \mathbf{u}_j,\mathbf{u}_j\right\rangle\right)\\
&\leq \sum_{j=1}^{n}\left[\left\langle \phi(f(A)) \mathbf{u}_j,\mathbf{u}_j\right\rangle-\left\langle \phi\left(f\left(\left|A-\langle \phi(A)\mathbf{u}_j,\mathbf{u}_j\rangle\right|\right)\right) \mathbf{u}_j,\mathbf{u}_j\right\rangle\right]\qquad(\mbox{by \eqref{qkd}})\\
&\leq \mathrm{Tr}\ \phi(f(A)) -\sum_{j=1}^{n}\left\langle \phi\left(f\left(\left|A-\langle \phi(A)\mathbf{u}_j,\mathbf{u}_j\rangle\right|\right)\right) \mathbf{u}_j,\mathbf{u}_j\right\rangle,
\end{align*}
in which the last inequality follows from Lemma \ref{lm7}. This completes the proof.
\end{proof}
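Proposition \ref{thk11} can be probed numerically by evaluating the correction term at the eigenvector basis of $\phi(A)$, which upper-bounds the minimum. The sketch below (illustrative only) uses the pinching $\phi(X)=\mathrm{diag}(X)$, a unital positive linear map, and $f(t)=t^3$; the eigenvectors of the diagonal matrix $\phi(A)$ are the standard basis vectors.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 5

def mat_fun(f, A):
    """Apply a scalar function to a Hermitian matrix spectrally."""
    w, V = np.linalg.eigh(A)
    return (V * f(w)) @ V.conj().T

f = lambda t: t**3                      # superquadratic on [0, infinity)
phi = lambda X: np.diag(np.diag(X))     # pinching: unital positive linear map

X = rng.normal(size=(n, n))
A = X @ X.T                             # positive matrix
d = np.diag(A)                          # <phi(A) e_j, e_j> = A_jj

lhs = f(d).sum()                        # Tr f(phi(A)); phi(A) is diagonal
corr = sum(phi(mat_fun(lambda t: f(np.abs(t)), A - d[j] * np.eye(n)))[j, j]
           for j in range(n))           # correction at the eigenvector basis
rhs = np.trace(phi(mat_fun(f, A))) - corr
```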
\section{Klein inequality}
In this section, we present a Klein trace inequality for superquadratic functions. In particular, we show that if $f$ is non-negative, then a refinement of the Klein inequality \eqref{eq4.2} holds. The next lemma can be found in \cite{bh}.
\begin{lemma}\label{lm3}\cite{bh}
If $X,Y\in\mathbb{M}_n$ are Hermitian matrices, then the inequality
\begin{align}\label{eq2.12}
\mathrm{Tr}\, XY\leq \langle \lambda^\downarrow(X),\lambda^\downarrow(Y)\rangle
\end{align}
holds.
\end{lemma}
The main result of this section is the following Klein inequality for superquadratic functions.
\begin{theorem}[Klein's inequality for superquadratic functions]\label{thm1}
Assume that $f:[0,\infty)\to\mathbb{R}$ is a differentiable superquadratic function with $f(0)=f'(0)=0$. Then
{\small\begin{align}\label{eq2.110}
\mathrm{Tr}[f(A)-f(B)-(A-B)f'(B)]\geq \min\left\{\sum_{j=1}^{n}f(|x_j-y_j|);\ x_j\in\sigma(A),\ y_j\in\sigma(B)\right\}
\end{align}}
for all $A,B\in\mathbb{M}_n^+$, in which $\sigma(A)$ is the set of eigenvalues of $A$.
In particular, if $f$ is non-negative, then
\begin{align}\label{eq2.10}
\mathrm{Tr}[f(A)-f(B)-(A-B)f'(B)]\geq \mathrm{Tr}f(|A-B|)
\end{align}
for all $A,B\in\mathbb{M}_n^+$.
\end{theorem}
\begin{proof}
First we prove \eqref{eq2.10}. Suppose that $\lambda_j$ and $\mu_j$ \ $(j=1,\cdots,n)$ are the eigenvalues of $A$ and $B$, respectively, arranged in decreasing order. If $f$ is non-negative, then $f'$ is monotone increasing by Lemma \ref{lm44}, and so $f'(\mu_j)$\ $(j=1,\cdots,n)$ are the eigenvalues of $f'(B)$, arranged in decreasing order. Hence
\begin{align*}
\mathrm{Tr}(A-B)f'(B)&=\mathrm{Tr}\ Af'(B)-\mathrm{Tr}\ Bf'(B)\\
&=\mathrm{Tr}\ Af'(B)-\sum_{j=1}^{n}\mu_jf'(\mu_j)\\
&\leq \sum_{j=1}^{n}\lambda_jf'(\mu_j)-\sum_{j=1}^{n}\mu_jf'(\mu_j)\qquad(\mbox{by \eqref{eq2.12}})\\
&=\sum_{j=1}^{n}(\lambda_j-\mu_j)f'(\mu_j).
\end{align*}
Moreover, it follows from the proof of Theorem \ref{lemma3} that
\begin{align}
\mathrm{Tr} f(|A-B|)\leq \sum_{j=1}^{n}f\left(\left|\lambda_j-\mu_j\right|\right).
\end{align}
Note that if a superquadratic function $f$ is differentiable on $(0,\infty)$ and $f(0)=f'(0)=0$, then Lemma \ref{lemma1} implies that
\begin{align*}
f(t)\geq f(s)+f'(s)(t-s)+f(|t-s|)
\end{align*}
for all $s,t\geq0$. This gives
\begin{align*}
f(\lambda_j)\geq f(\mu_j)+f'(\mu_j)(\lambda_j-\mu_j)+f(|\lambda_j-\mu_j|)\qquad (j=1,\cdots,n)
\end{align*}
and so
\begin{align}\label{eq2.13}
\sum_{j=1}^{n} f(\lambda_j)\geq \sum_{j=1}^{n} f(\mu_j)+ \sum_{j=1}^{n}f'(\mu_j)(\lambda_j-\mu_j)+ \sum_{j=1}^{n}f(|\lambda_j-\mu_j|),
\end{align}
which proves \eqref{eq2.10}. In the general case, when $f$ is not assumed to be non-negative, we suppose that $\lambda_j$ \ $(j=1,\cdots,n)$ are the eigenvalues of $A$ arranged in decreasing order and $\mu_j$ \ $(j=1,\cdots,n)$ are the eigenvalues of $B$, arranged so that $f'(\mu_1)\geq\cdots \geq f'(\mu_n)$. The same argument as in the first part of the proof then guarantees the inequality $\mathrm{Tr}(A-B)f'(B)\leq \sum_{j=1}^{n}(\lambda_j-\mu_j)f'(\mu_j)$. It follows from \eqref{eq2.13} that
\begin{align*}
\mathrm{Tr} f(A)\geq \mathrm{Tr} f(B) + \mathrm{Tr}(A-B)f'(B)+\sum_{j=1}^{n}f(|\lambda_j-\mu_j|),
\end{align*}
from which we get \eqref{eq2.110}.
\end{proof}
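A numerical check of \eqref{eq2.10} (illustrative only) with the non-negative superquadratic function $f(t)=t^3$, for which $f(0)=f'(0)=0$ and $f'(t)=3t^2$:

```python
import numpy as np

rng = np.random.default_rng(6)
n = 5

def mat_fun(f, A):
    """Apply a scalar function to a Hermitian matrix spectrally."""
    w, V = np.linalg.eigh(A)
    return (V * f(w)) @ V.conj().T

f  = lambda t: t**3          # nonnegative superquadratic, f(0) = f'(0) = 0
df = lambda t: 3 * t**2      # its derivative

def rand_psd(n):
    X = rng.normal(size=(n, n))
    return X @ X.T

A, B = rand_psd(n), rand_psd(n)
lhs = np.trace(mat_fun(f, A) - mat_fun(f, B) - (A - B) @ mat_fun(df, B))
rhs = np.trace(mat_fun(lambda t: f(np.abs(t)), A - B))   # Tr f(|A - B|)
```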
When the superquadratic function $f$ is non-negative, Theorem~\ref{thm1} gives a refinement of Klein's inequality \eqref{eq4.2} for convex functions. Indeed, if $f\geq0$, then
{\small \begin{align*}
0\leq \mathrm{Tr}[f(A)-f(B)-(A-B)f'(B)-f(|A-B|)]\leq\mathrm{Tr}[f(A)-f(B)-(A-B)f'(B)].
\end{align*}}
\begin{example}\label{ex1}
The function $f(t)=t^p$ is superquadratic for every $p\geq2$. Theorem~\ref{thm1} gives
\begin{align*}
0\leq \mathrm{Tr}[A^p-B^p-p(A-B)B^{p-1}-|A-B|^p]\leq\mathrm{Tr}[A^p-B^p-p(A-B)B^{p-1}]
\end{align*}
for all $A,B\in\mathbb{M}_n^+$ and every $p\geq2$.
As a simple example, assume that $p=3$ and consider the positive matrices
\begin{align*}
A=\left[\begin{array}{cc}
2 & 1\\ 1&2
\end{array}\right]\quad\mbox{and}\quad B=\left[\begin{array}{cc}
2 & 0\\ 0&0
\end{array}\right].
\end{align*}
Then
\begin{align*}
\mathrm{Tr}[A^p-B^p-p(A-B)B^{p-1}]=20 \quad\mbox{and}\quad \mathrm{Tr} |A-B|^p=10\sqrt{2}\simeq 14.14.
\end{align*}
\end{example}
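The two traces of Example \ref{ex1} can be reproduced exactly: the eigenvalues of $A-B=\left[\begin{smallmatrix}0&1\\1&2\end{smallmatrix}\right]$ are $1\pm\sqrt2$, so the first trace equals $20$ and the second equals $(1+\sqrt2)^3+(\sqrt2-1)^3=10\sqrt2\approx14.14$.

```python
import numpy as np

def mat_fun(f, A):
    """Apply a scalar function to a Hermitian matrix spectrally."""
    w, V = np.linalg.eigh(A)
    return (V * f(w)) @ V.conj().T

p = 3
A = np.array([[2., 1.], [1., 2.]])
B = np.array([[2., 0.], [0., 0.]])

klein = np.trace(mat_fun(lambda t: t**p, A)
                 - mat_fun(lambda t: t**p, B)
                 - p * (A - B) @ mat_fun(lambda t: t**(p - 1), B))
remainder = np.trace(mat_fun(lambda t: np.abs(t)**p, A - B))  # Tr |A - B|^p
```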
On the other hand, if $f\geq0$ is a convex function and $-f$ is a superquadratic function, then Theorem \ref{thm1} provides an upper bound in Klein's inequality. Applying Theorem \ref{thm1} to the superquadratic function $-f$, we obtain
\begin{align}\label{re22}
\mathrm{Tr}[f(A)-f(B)-(A-B)f'(B)]\leq \max\left\{\sum_{j=1}^{n}f(|x_j-y_j|);\ x_j\in\sigma(A),\ y_j\in\sigma(B)\right\}
\end{align}
for all $A,B\in\mathbb{M}_n^+$, while the left-hand side is nonnegative by Klein's inequality for the convex function $f$.
\begin{example}
If $1\leq p\leq2$, then the function $f(t)=t^p$ is convex and $-f(t)=-t^p$ is superquadratic. It follows from \eqref{re22} that
\begin{align*}
\mathrm{Tr}[A^p-B^p-p(A-B)B^{p-1}]\leq \max\left\{\sum_{j=1}^{n}|x_j-y_j|^p;\ x_j\in\sigma(A),\ y_j\in\sigma(B)\right\}
\end{align*}
for all $A,B\in\mathbb{M}_n^+$ and every $1\leq p\leq2$.
For a simple example, let $p=3/2$ and consider the two matrices of Example \ref{ex1}. Then
\begin{align*}
\mathrm{Tr}[A^p-B^p-p(A-B)B^{p-1}]\simeq3.37 \quad\mbox{and}\quad \max\left\{\sum_{j=1}^{n}|x_j-y_j|^p;\ x_j\in\sigma(A),\ y_j\in\sigma(B)\right\}\simeq6.20.
\end{align*}
\end{example}
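The numbers in this example can also be reproduced: the trace equals $3\sqrt3+1-2\sqrt2\approx3.37$, and the bound, computed here as the maximum over pairings of the eigenvalues of $A$ with those of $B$, equals $3^{3/2}+1\approx6.20$ (an illustrative check only).

```python
import numpy as np
from itertools import permutations

def mat_fun(f, A):
    """Apply a scalar function to a Hermitian matrix spectrally."""
    w, V = np.linalg.eigh(A)
    return (V * f(w)) @ V.conj().T

p = 1.5
A = np.array([[2., 1.], [1., 2.]])
B = np.array([[2., 0.], [0., 0.]])

klein = np.trace(mat_fun(lambda t: t**p, A)
                 - mat_fun(lambda t: t**p, B)
                 - p * (A - B) @ mat_fun(lambda t: t**(p - 1), B))

xs, ys = np.linalg.eigvalsh(A), np.linalg.eigvalsh(B)
# Maximum of sum_j |x_j - y_j|^p over pairings of the two spectra
bound = max(sum(abs(x - y) ** p for x, y in zip(xs, perm))
            for perm in permutations(ys))
```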
\bigskip
If $f$ is a continuous convex function, then
$f(\langle A\mathbf{u},\mathbf{u}\rangle)\leq\langle f(A)\mathbf{u},\mathbf{u}\rangle$
for every unit vector $\mathbf{u}\in\mathbb{C}^n$; see \cite{FMPS}. If $\{\mathbf{u}_1,\cdots,\mathbf{u}_n\}$ is an orthonormal basis of $\mathbb{C}^n$, then
\begin{align*}
\sum_{j=1}^{n}f(\langle A\mathbf{u}_j,\mathbf{u}_j\rangle)&\leq \sum_{j=1}^{n} \langle f(A)\mathbf{u}_j,\mathbf{u}_j\rangle\\
&\leq \sum_{j=1}^{n} \lambda_j(f(A))\qquad(\mbox{by \eqref{eq2.6}})\\
&=\mathrm{Tr} f(A).
\end{align*}
In other words,
\begin{align}\label{eq2.7}
\sum_{j=1}^{n}f(\langle A\mathbf{u}_j,\mathbf{u}_j\rangle) \leq \mathrm{Tr} f(A)
\end{align}
for every orthonormal basis $\{\mathbf{u}_1,\cdots,\mathbf{u}_n\}$ of $\mathbb{C}^n$. Inequality \eqref{eq2.7} is known as the \textit{Peierls inequality}. Equality holds in \eqref{eq2.7} when the $\mathbf{u}_i$'s are eigenvectors of $A$.
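A numerical illustration of the Peierls inequality \eqref{eq2.7} with $f(t)=t^2$, a random Hermitian $A$, and a random orthonormal basis taken from a QR factorization; equality is recovered for the eigenvector basis.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 5

def mat_fun(f, A):
    """Apply a scalar function to a Hermitian matrix spectrally."""
    w, V = np.linalg.eigh(A)
    return (V * f(w)) @ V.conj().T

f = lambda t: t**2   # convex
X = rng.normal(size=(n, n))
A = (X + X.T) / 2    # random Hermitian matrix

U, _ = np.linalg.qr(rng.normal(size=(n, n)))   # columns: random orthonormal basis
diag_vals = np.einsum('ji,jk,ki->i', U, A, U)  # <A u_i, u_i> for each column u_i
lhs = f(diag_vals).sum()
rhs = np.trace(mat_fun(f, A))                  # Tr f(A)
```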
We present a variant of the Peierls inequality in the case when $f$ is a superquadratic function. It gives in particular a refinement of the Peierls inequality if $f$ is non-negative.
\begin{proposition}
\label{prp1} Assume that $f$ is a superquadratic function. If $A\in\mathbb{M}^+_n$, then
\begin{align}\label{eq2.8}
\sum_{j=1}^{n}f(\langle A\mathbf{u}_j,\mathbf{u}_j\rangle)+\sum_{j=1}^{n}\langle f(|A-\langle A\mathbf{u}_j,\mathbf{u}_j\rangle|)\mathbf{u}_j,\mathbf{u}_j\rangle \leq \mathrm{Tr} f(A)
\end{align}
for every orthonormal basis $\{\mathbf{u}_1,\dots,\mathbf{u}_n\}$ of $\mathbb{C}^n$. Equality holds if $f$ is non-negative and the $\mathbf{u}_i$ are eigenvectors of $A$.
\end{proposition}
\begin{proof}
Let $f$ be a superquadratic function. Applying Jensen's operator inequality \eqref{qk} and then \eqref{eq2.6} gives \eqref{eq2.8}.
If $f$ is non-negative, then
\begin{align*}
\sum_{j=1}^{n}f(\langle A\mathbf{u}_j,\mathbf{u}_j\rangle)\leq \sum_{j=1}^{n}f(\langle A\mathbf{u}_j,\mathbf{u}_j\rangle)+\sum_{j=1}^{n}\langle f(|A-\langle A\mathbf{u}_j,\mathbf{u}_j\rangle|)\mathbf{u}_j,\mathbf{u}_j\rangle \leq \mathrm{Tr} f(A).
\end{align*}
Hence, choosing the $\mathbf{u}_i$ to be eigenvectors of $A$ gives equality in \eqref{eq2.7} and so in \eqref{eq2.8}.
\end{proof}
\begin{thebibliography}{99}
\bibitem{SIP} S. Abramovich, S. Iveli\'{c} and J. Pe\v{c}ari\'{c}, \textit{Improvement of Jensen-Steffensen's inequality for superquadratic functions}, Banach J. Math. Anal. {\bf 4} (1) (2010), 146--158.
\bibitem{AJS} S. Abramovich, G. Jameson and G. Sinnamon, \textit{Inequalities for averages of convex and superquadratic functions}, J. Inequal. Pure Appl. Math. {\bf 5} (4) (2004), Article 91.
\bibitem{SJS} S. Abramovich, G. Jameson and G. Sinnamon, \textit{Refining Jensen's inequality}, Bull. Math. Soc. Sci. Math. Roumanie {\bf 47} (2004), 3--14.
\bibitem{MA} M. W. Alomari, \textit{Operator Popoviciu's inequality for superquadratic and convex functions of selfadjoint operators in Hilbert spaces}, Adv. Pure Appl. Math., accepted.
\bibitem{BPV} S. Bani\'{c}, J. Pe\v{c}ari\'{c} and S. Varo\v{s}anec, \textit{Superquadratic functions and refinements of some classical inequalities}, J. Korean Math. Soc. {\bf 45} (2) (2008), 513--525.
\bibitem{bh} R. Bhatia, \textit{Matrix Analysis}, Springer-Verlag, New York, 1997.
\bibitem{Ca} E. A. Carlen, \textit{Trace inequalities and quantum entropy: an introductory course}, Contemporary Mathematics {\bf 529} (2010), DOI:10.1090/conm/529/10428.
\bibitem{CaLi} E. A. Carlen and E. H. Lieb, \textit{A Minkowski type trace inequality and strong subadditivity of quantum entropy II: convexity and concavity}, Lett. Math. Phys. {\bf 83} (2008), 107--126.
\bibitem{FL} S. Furuichi and M. Lin, \textit{A matrix trace inequality and its application}, Linear Algebra Appl. {\bf 433} (2010), 1324--1328.
\bibitem{FKY} S. Furuichi, K. Kuriyama and K. Yanagi, \textit{Trace inequalities for products of matrices}, Linear Algebra Appl. {\bf 430} (2009), 2271--2276.
\bibitem{FMPS} T. Furuta, J. Mi\'{c}i\'{c}, J. Pe\v{c}ari\'{c} and Y. Seo, \textit{Mond-Pe\v{c}ari\'{c} Method in Operator Inequalities: Inequalities for Bounded Self-adjoint Operators on a Hilbert Space}, Element, Zagreb, 2005.
\bibitem{HP} F. Hansen and G. K. Pedersen, \textit{Jensen's operator inequality}, Bull. London Math. Soc. {\bf 35} (2003), 553--564.
\bibitem{Hi} F. Hiai, \textit{Concavity of certain matrix trace functions}, Taiwanese J. Math. {\bf 5} (3) (2001), 535--554.
\bibitem{HJ} R. A. Horn and C. R. Johnson, \textit{Matrix Analysis}, 2nd ed., Cambridge University Press, 2013.
\bibitem{K} M. Kian, \textit{Operator Jensen inequality for superquadratic functions}, Linear Algebra Appl. {\bf 456} (2014), 82--87.
\bibitem{KS} M. Kian and S. S. Dragomir, \textit{Inequalities involving superquadratic functions and operators}, Mediterr. J. Math. {\bf 11} (4) (2014), 1205--1214.
\bibitem{EHL} E. H. Lieb, \textit{Convex trace functions and the Wigner--Yanase--Dyson conjecture}, Advances in Math. {\bf 11} (1973), 267--288.
\bibitem{PZ} D. Petz, \textit{A survey of certain trace inequalities}, in Functional Analysis and Operator Theory, Banach Center Publications, Vol. 30, Institute of Mathematics, Polish Academy of Sciences, Warszawa, 1994.
\bibitem{ShAb} Kh. Shebrawi and H. Albadawi, \textit{Trace inequalities for matrices}, Bull. Aust. Math. Soc. {\bf 87} (2013), 139--148.
\bibitem{Ya} X. M. Yang, \textit{A matrix trace inequality}, J. Math. Anal. Appl. {\bf 263} (2001), 327--331.
\end{thebibliography}
\end{document}
\begin{document}
\title[The Erd\H os conjecture for primitive sets]{The Erd\H os conjecture for primitive sets}
\author{Jared Duker Lichtman}
\address{Department of Mathematics, Dartmouth College, Hanover, NH 03755}
\email{[email protected]}
\email{[email protected]}
\author{Carl Pomerance}
\address{Department of Mathematics, Dartmouth College, Hanover, NH 03755}
\email{[email protected]}
\subjclass[2010]{Primary 11B83; Secondary 11A05, 11N05}
\date{June 30, 2018.}
\keywords{primitive set, primitive sequence, Mertens' product formula}
\begin{abstract}
A subset of the integers larger than 1 is {\it primitive} if no member divides another.
Erd\H os proved in 1935 that the sum of $1/(a\log a)$ for $a$ running over a primitive set $A$
is universally bounded over all choices for $A$. In 1988 he asked if this universal bound is
attained for the set of prime numbers. In this paper we make some progress on several fronts,
and show a connection to certain prime number ``races" such as the race between $\pi(x)$
and $\textnormal{li}(x)$.
\end{abstract}
\maketitle
\section{Introduction}
A set of positive integers $>1$ is called {\bf primitive} if no element divides any other (for convenience, we exclude the singleton set $\{1\}$). There are
a number of interesting and sometimes unexpected theorems about primitive sets.
After Besicovitch \cite{besicovitch}, we know that the upper asymptotic density of a primitive set can be arbitrarily close to $1/2$, whereas the lower asymptotic density is always $0$. Using the fact that
if a primitive set has a finite reciprocal sum, then
the set of multiples of members of the set has an asymptotic density, Erd\H os gave an elementary proof that the
set of nondeficient numbers (i.e., $\sigma(n)/n\ge2$, where $\sigma$ is the sum-of-divisors
function) has an asymptotic density. Though the reciprocal sum of a primitive set can
possibly diverge, Erd\H os \cite{erdos35} showed that for a primitive set $A$,
$$
\sum_{a\in A}\frac1{a\log a}<\infty.
$$
In fact, the proof shows that these sums are uniformly bounded as $A$ varies over
primitive sets.
Some years later in a 1988 seminar in Limoges, Erd\H os suggested that in fact we always have
\begin{equation}
\label{eq:conj}
f(A):=\sum_{a\in A}\frac1{a\log a}\le\sum_{p\in\mathcal{P}}\frac1{p\log p},
\end{equation}
where $\mathcal{P}$ is the set of prime numbers. The assertion \eqref{eq:conj} is now known
as the Erd\H os conjecture for primitive sets.
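For readers who wish to experiment numerically, the two notions above are easy to code. The following Python sketch is our own illustration (not part of the paper): it tests primitivity of a finite set and evaluates $f(A)=\sum_{a\in A}1/(a\log a)$.

```python
from math import log

def is_primitive(A):
    """True if no element of the finite set A (integers > 1) divides another."""
    A = sorted(set(A))
    return all(b % a != 0 for i, a in enumerate(A) for b in A[i + 1:])

def f(A):
    """The Erdos sum f(A) = sum_{a in A} 1/(a*log(a))."""
    return sum(1.0 / (a * log(a)) for a in A)

# examples: the primes {2,3,5} form a primitive set, {3,9} does not,
# and {6,10,15} is primitive even though its elements share prime factors
assert is_primitive([2, 3, 5]) and not is_primitive([3, 9])
assert is_primitive([6, 10, 15])
```
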
In 1991, Zhang \cite{zhang1} proved the
Erd\H os conjecture for primitive sets $A$ with no member having more than 4 prime factors
(counted with multiplicity).
After Cohen \cite{cohen}, we have
\begin{equation}
\label{eq:cohen}
C: = \sum_{p\in \mathcal{P}}\frac1{p\log p} = 1.63661632336\ldots\,,
\end{equation}
the sum over primes in \eqref{eq:conj}.
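The slow convergence of this sum can be seen numerically. In the Python sketch below (our illustration; the sieve limit $10^6$ is chosen arbitrarily), the partial sum over $p<10^6$ still falls short of $C$ by roughly $1/\log x$ with $x=10^6$:

```python
from math import log

def primes_up_to(n):
    """Simple sieve of Eratosthenes."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0] = sieve[1] = 0
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = bytearray(len(range(i * i, n + 1, i)))
    return [i for i in range(2, n + 1) if sieve[i]]

C = 1.63661632336  # Cohen's value for the full sum
partial = sum(1.0 / (p * log(p)) for p in primes_up_to(10**6))
# the tail over p >= x behaves like 1/log x, so partial is still well below C
assert partial < C
```
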
Using the original Erd\H os argument in \cite{erdos35}, Erd\H os and Zhang showed that
$f(A)<2.886$ for a primitive set $A$, which was later improved by Robin to $2.77$. These unpublished
estimates are reported in Erd\H os--Zhang \cite{ez} who used another method to show that
$f(A)<1.84$. Shortly after, Clark \cite{clark} claimed that
$f(A)\le e^\gamma=1.781072\dots$\,. However, his brief argument appears to be
incomplete.
Our principal results are the following.
\begin{theorem}\label{thm:egamma}
For any primitive set $A$ we have $f(A) < e^\gamma$.
\end{theorem}
\begin{theorem}
\label{thm:no8s}
For any primitive set $A$ with no element divisible by $8$, we have $f(A)<C+2.37\times10^{-7}$.
\end{theorem}
Say a prime $p$ is
{\bf Erd\H os strong} if for any primitive set $A$ with the property that each element of $A$ has least prime
factor $p$, we have $f(A)\le 1/(p\log p)$.
We conjecture that every prime is Erd\H os strong. Note that
the Erd\H os conjecture \eqref{eq:conj} would immediately follow, though it is not clear that the Erd\H os conjecture implies our conjecture. Just proving our conjecture for the case of $p=2$ would give the inequality in
Theorem \ref{thm:no8s} for all primitive sets $A$.
Currently the best we can do for a primitive set $A$ of even numbers is that
$f(A)<e^\gamma/2$, see Proposition \ref{lem:erdos} below.
For part of the next result, we assume the Riemann hypothesis (RH) and the Linear Independence hypothesis (LI), which asserts that the sequence of numbers $\gamma_n>0$ such that $\zeta(\tfrac{1}{2}+i\gamma_n)=0$ is linearly independent over ${\mathbb Q}$.
\begin{theorem}
\label{thm:race}
Unconditionally, all of the odd
primes among the first $10^8$ primes are Erd\H os strong.
Assuming RH and LI,
the Erd\H os strong primes have relative lower logarithmic
density $>0.995$.
\end{theorem}
The proof depends strongly on a recent result of Lamzouri \cite{lamz} who was interested in
the ``Mertens race" between $\prod_{p\le x}(1-1/p)$ and $1/(e^\gamma \log x)$.
For a primitive set $A$, let $\mathcal{P}(A)$ denote the support of $A$, i.e., the
set of prime numbers that divide some member of $A$. It is clear that the
Erd\H os conjecture \eqref{eq:conj} is equivalent to the same assertion where
the prime sum is over $\mathcal{P}(A)$.
\begin{theorem}
\label{thm:support}
If $A$ is a primitive set with $\mathcal{P}(A)\subset[3,\exp(10^6)]$, then
$$
f(A)\le\sum_{p\in\mathcal{P}(A)}\frac{1}{p\log p}.
$$
\end{theorem}
If some primitive set $A$ of odd numbers exists with $f(A)>\sum_{p\in\mathcal{P}(A)}1/(p\log p)$,
Theorem \ref{thm:support} suggests that it will be very difficult indeed to give a concrete example!
For a positive integer $n$, let $\Omega(n)$ denote the number of prime factors of $n$
counted with multiplicity. Let ${\mathbb N}_k$ denote the set of integers $n$ with $\Omega(n)=k$.
Zhang \cite{zhang2} proved a result that implies $f({\mathbb N}_k)< f({\mathbb N}_1)$ for each $k\ge2$, so that the Erd\H os conjecture holds for the primitive sets ${\mathbb N}_k$. More recently, Banks and Martin
\cite{bm} conjectured that $f({\mathbb N}_1)>f({\mathbb N}_2)>f({\mathbb N}_3)>\cdots$\,. The inequality
$f({\mathbb N}_2)>f({\mathbb N}_3)$ was just established by Bayless, Kinlaw, and Klyve \cite{BKK}.
We prove the following result.
\begin{theorem}
\label{thm:Nk}
There is a positive constant $c$ such that $f({\mathbb N}_k)\ge c$ for all $k$.
\end{theorem}
We let the letters $p,q,r$ represent primes. In addition, we let $p_n$ represent the
$n$th prime. For an integer $a>1$, we let $P(a)$ and $p(a)$ denote the largest and smallest prime factors of $a$. Modifying the notation introduced in \cite{ez}, for a primitive set $A$ let
\begin{align*}
A_p & = \{a\in A: p(a)\ge p\},\\
A'_p & = \{a\in A: p(a) = p\},\\
A''_p & = \{a/p : a\in A'_p\}.
\end{align*}
We let $f(a)=1/(a\log a)$ and so $f(A)=\sum_{a\in A}f(a)$.
In this language, Zhang's full result \cite{zhang2} states that $f(({\mathbb N}_k)'_p)\le f(p)$ for all primes $p$, $k\ge1$.
We also let
$$
g(a)=\frac1a\prod_{p<P(a)}\left(1-\frac1p\right),\quad h(a)=\frac1{a\log P(a)},
$$
with
$ g(A)=\sum_{a\in A}g(a)$ and $h(A)=\sum_{a\in A}h(a)$.
\section{The Erd\H os approach}
In this section we will prove Theorem \ref{thm:egamma}.
We begin with an argument inspired by the original 1935 paper of Erd\H os \cite{erdos35}.
\begin{proposition}\label{lem:erdos}
For any primitive set $A$, if $q\notin A$ then
$$f(A'_q) < e^\gamma g(q) = \frac{e^\gamma}{q}\prod_{p<q}\bigg(1-\frac{1}{p}\bigg).$$
\end{proposition}
\begin{proof}
For each $a\in A'_q$, let $S_a = \{ba : p(b) \ge P(a)\}$. Note that $S_a$ has asymptotic density $g(a)$. Since $A'_q$ is primitive, we see that the sets $S_a$ are pairwise disjoint. Further, the union of the sets $S_a$ is contained in the set of all natural numbers $m$ with $p(m) = q$, which has asymptotic density $g(q)$. Thus, the sum of densities for each $S_a$ is dominated by $g(q)$, that is,
\begin{align}\label{eq:g(A)}
g(A'_q)=\sum_{a\in A'_q}g(a) \le g(q).
\end{align}
By Theorem 7 in \cite{RS1}, we have for $x\ge285$,
\begin{equation}
\label{eq:mert}
\prod_{p\le x}\left(1-\frac1p\right)>\frac{1}{e^\gamma\log(2x)},
\end{equation}
which may be extended to all $x\ge 1$ by a calculation. Thus, since each $a \in A'_q$ is composite,
$$g(a)=\frac{1}{a}\prod_{p<P(a)}\bigg(1-\frac{1}{p}\bigg) > \frac{e^{-\gamma}}{ a\log\big(2P(a)\big)} > \frac{e^{-\gamma}}{ a\log a} = e^{-\gamma}f(a).$$
Hence by \eqref{eq:g(A)},
\begin{equation*}
f(A'_q)/e^\gamma < g(A'_q) \le g(q).
\end{equation*}
\end{proof}
\begin{remark}
Let $\sigma$ denote the sum-of-divisors function and let $A$ be the set of $n$ with
$\sigma(n)/n\ge2$ and $\sigma(d)/d<2$ for all proper divisors $d$ of $n$, the set of
primitive nondeficient numbers. Then an appropriate analog of
$g(A)$ gives the density of nondeficient numbers, recently shown in \cite{mits1} to lie in the tight interval
$(0.2476171,\,0.2476475)$.
In \cite{JDLpnd}, an analog of
Proposition \ref{lem:erdos} is a key ingredient for sharp bounds on the reciprocal sum of the
primitive nondeficient numbers.
\end{remark}
\begin{remark}
We have $g(\mathcal P)=1$. It is easy to see by induction over primes $r$ that
\begin{equation*}
\sum_{p\le r}g(p)=\sum_{p\le r}\frac1p\prod_{q<p}\left(1-\frac1q\right)=1-\prod_{p\le r}\left(1-\frac1p\right).
\end{equation*}
Letting $r\to\infty$ we get that $g(\mathcal P)=1$. There is also a holistic way of seeing this.
Since $g(p)$ is the density of the set of integers with least prime factor $p$,
it would make sense that
$g(\mathcal P)$ is the density of the set of integers which have a least prime factor, which is 1.
To make this rigorous, one notes that the density of the set of integers whose least prime factor
is $>y$ tends to 0 as $y\to\infty$.
As a consequence of $g(\mathcal P)=1$, we have
\begin{equation}
\label{eq:ident}
\sum_{p>2}g(p)=\frac12,
\end{equation}
an identity we will find to be useful.
\end{remark}
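The telescoping identity above is exact and can be confirmed in rational arithmetic. In this sketch (our illustration; the bound $r=97$ is arbitrary), the running sum of $g(p)$ and the running product of $(1-1/q)$ are updated together:

```python
from fractions import Fraction

PRIMES = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47,
          53, 59, 61, 67, 71, 73, 79, 83, 89, 97]

total = Fraction(0)   # running sum of g(p) over p <= r
prod = Fraction(1)    # running product of (1 - 1/q) over q <= current p
for p in PRIMES:
    total += Fraction(1, p) * prod          # g(p) = (1/p) prod_{q<p}(1 - 1/q)
    prod *= Fraction(p - 1, p)

# sum_{p <= r} g(p) = 1 - prod_{p <= r}(1 - 1/p), exactly
assert total == 1 - prod
# hence sum_{2 < p <= r} g(p) = 1/2 - prod_{p <= r}(1 - 1/p), approaching 1/2
assert total - Fraction(1, 2) == Fraction(1, 2) - prod
```
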
For a primitive set $A$, let
$$A^k = \{a: 2^k\| a\in A\}, \qquad B^k = \{a/2^k: a\in A^k\}.$$
The next result will help us prove Theorem \ref{thm:egamma}.
\begin{lemma}\label{lem:egamma}
For a primitive set $A$, let $k\ge1$ be such that $2^k\notin A$. Then we have
$$f(A^{k}) < \frac{e^\gamma}{2^k}\sum_{p\notin A\atop p>2}g(p).$$
\end{lemma}
\begin{proof}
If $2^kp\notin A$ for a prime $p>2$, then $(B^k)'_p$ is a primitive set of odd composite numbers,
so by Proposition \ref{lem:erdos}, $f((B^k)'_p) < e^\gamma g(p)$.
Now if $2^kp\in A$ for some odd prime $p$, then $(B^{k})'_p=\{p\}$ and note $p\notin A$ by primitivity. We have $f(2^kp) < 2^{-k}e^\gamma g(p)$ since
$$
\frac1{2^kp\log(2^kp)}\le\frac1{2^kp\log(2p)}<\frac{e^\gamma}{2^k}g(p),
$$
which follows from \eqref{eq:mert}. Hence combining the two cases,
\begin{align*}
f(A^k)=\sum_{p\notin A\atop p>2}f(2^k{\cdot}(B^k)'_p)& \le \sum_{p\in B^k,p\notin A\atop p>2}f(2^kp) + 2^{-k}\sum_{p\notin B^k,p\notin A\atop p>2}f((B^k)'_p)\\
& < \frac{e^\gamma}{2^k}\sum_{p\notin A\atop p>2}g(p).
\end{align*}
\end{proof}
With Lemma \ref{lem:egamma} in hand, we prove $f(A)<e^\gamma$.
\begin{proof}[Proof of Theorem \ref{thm:egamma}]
From Erd\H os--Zhang \cite{ez}, we have that $f(A_3)<0.92$. If $2\in A$, then $A'_2=\{2\}$, so that
$f(A)=f(A_3)+f(A'_2)<0.92+1/(2\log2)<e^\gamma$. Hence we may assume that $2\notin A$.
If $A$ contains every odd prime, then $A'_2$ consists of at most one power
of 2, and the calculation just concluded shows we may assume this is not the case.
Hence there is at least one odd prime $p_0\notin A$.
By Proposition \ref{lem:erdos}, we have
\begin{align}\label{eq:A}
f(A) &= \sum_pf(A'_p)= \sum_{p\in A}f(p) + \sum_{p\notin A} f(A'_p) < \sum_{p\in A}f(p) + e^\gamma\sum_{p\notin A\atop p>2} g(p) + f(A'_2).
\end{align}
First suppose $A$ contains no powers of $2$. Then by Lemma \ref{lem:egamma},
\begin{align*}
f(A'_2) = \sum_{k\ge1}f(A^{k}) < \sum_{k\ge1}\frac{e^\gamma}{2^k}\sum_{p\notin A\atop p>2}g(p) = e^\gamma\sum_{p\notin A\atop p>2}g(p).
\end{align*}
Substituting into \eqref{eq:A}, we conclude, using \eqref{eq:ident},
\begin{align}
\label{eq:fA}
f(A) & < \sum_{p\in A}f(p) + 2e^\gamma\sum_{p\notin A\atop p>2}g(p) \le 2e^\gamma\sum_{p>2}g(p) = e^\gamma.
\end{align}
For the last inequality we used that for every prime $p$,
\begin{equation}
\label{eq:fgineq2}
\frac{f(p)}{e^\gamma g(p)}<1.082,
\end{equation}
which follows after a short calculation using \cite[Theorem 7]{RS1}.
Now if $2^K\in A$ for some positive integer $K$, then $K$ is unique and $K\ge 2$.
Also $A^K=\{2^K\}$ and
$A^k=\emptyset$ for all $k>K$, so again by Lemma \ref{lem:egamma},
\begin{align*}
f(A'_2) = \sum_{k=1}^Kf(A^{k}) = f(2^K) + \sum_{k=1}^{K-1}\frac{e^\gamma}{2^k}\sum_{p\notin A\atop p>2}g(p) = f(2^K) + (1-2^{1-K})e^\gamma\sum_{p\notin A\atop p>2}g(p).
\end{align*}
Substituting into \eqref{eq:A} gives
\begin{align}
\label{eq:fAK}
f(A) < \sum_{p\in A}f(p) + f(2^K) + (2-2^{1-K})e^\gamma\sum_{p\notin A\atop p>2}g(p) & \le f(2^K) + (2-2^{-1})e^\gamma\sum_{p>2}g(p)\nonumber\\
& \le f(2^2) + (1-2^{-2})e^\gamma < e^\gamma,
\end{align}
using $K\ge2$, the identity \eqref{eq:ident}, inequality \eqref{eq:fgineq2}, and $f(2^2)< 2^{-2} e^\gamma$. This completes the proof.
\end{proof}
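Inequality \eqref{eq:fgineq2} is easy to spot-check numerically. In the Python sketch below (our illustration; the search limit $10^4$ is arbitrary), the largest ratio $f(p)/(e^\gamma g(p))$, about $1.0820$, occurs at $p=7$:

```python
from math import exp, log

GAMMA = 0.5772156649015329  # Euler's constant

def primes_up_to(n):
    sieve = bytearray([1]) * (n + 1)
    sieve[0] = sieve[1] = 0
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = bytearray(len(range(i * i, n + 1, i)))
    return [i for i in range(2, n + 1) if sieve[i]]

worst, prod = 0.0, 1.0   # prod = prod_{r<p} (1 - 1/r)
for p in primes_up_to(10**4):
    # f(p)/(e^gamma g(p)) = 1/(log(p) * e^gamma * prod_{r<p}(1 - 1/r))
    worst = max(worst, 1.0 / (log(p) * exp(GAMMA) * prod))
    prod *= 1.0 - 1.0 / p

assert worst < 1.082
```
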
\section{Mertens primes}
In this section we will prove Theorems \ref{thm:race} and \ref{thm:support}.
Note that by Mertens' theorem,
$$
\prod_{p<x}\left(1-\frac1p\right)\sim\frac1{e^\gamma\log x},\quad x\to\infty,
$$
where $\gamma$ is Euler's constant.
We say a prime $q$ is {\bf Mertens} if
\begin{equation}
\label{eq:help}
e^\gamma\prod_{p<q}\Big(1-\frac{1}{p}\Big) \le \frac{1}{\log q},
\end{equation}
and let $\mathcal P^{\textrm{Mert}}$ denote the set of Mertens primes. We are interested in Mertens primes because of the following consequence of Proposition \ref{lem:erdos}, which shows that every Mertens prime is Erd\H os strong.
\begin{corollary}\label{cor:mert}
Let $A$ be a primitive set.
If $q\in \mathcal{P}^{\rm Mert}$, then $f(A'_q)\le f(q)$. Hence if $A'_q \subset \{q\}$ for all $q\notin \mathcal{P}^{\rm Mert}$, then $A$ satisfies the Erd\H os conjecture.
\end{corollary}
\begin{proof}
By Proposition \ref{lem:erdos} we have $f(A'_q)\le\max\{e^\gamma g(q),f(q)\}$. If $q\in\mathcal{P}^{\textrm{Mert}}$, then
$$e^\gamma g(q) = \frac{e^\gamma}{q}\prod_{p<q}\bigg(1-\frac{1}{p}\bigg)\le \frac{1}{q\log q} =f(q),$$
so $f(A'_q)\le f(q)$.
\end{proof}
Now, one would hope that the Mertens inequality \eqref{eq:help}
holds for all primes $q$. However, \eqref{eq:help} fails for $q=2$ since $e^\gamma > 1/\log 2$. We have computed that $q$ is indeed a Mertens prime for all $2<q\le p_{10^8} = 2{,}038{,}074{,}743$, thus proving the unconditional part of Theorem \ref{thm:race}.
\subsection{Proof of Theorem \ref{thm:race}}
To complete the proof, we use a result of Lamzouri \cite{lamz} relating the Mertens inequality to the race between $\pi(x)$
and $\textnormal{li}(x)$, studied by Rubinstein and Sarnak \cite{RubSarn}. Under the assumption of RH and LI,
he proved that the set $\mathcal N$ of real numbers $x$ satisfying
\begin{align*}
e^\gamma \prod_{p\le x}\bigg(1-\frac{1}{p}\bigg) > \frac{1}{\log x},
\end{align*}
has logarithmic density $\delta(\mathcal{N})$ equal to the logarithmic density of numbers $x$ with $\pi(x)>\textnormal{li}(x)$,
and in particular
\begin{align}
\delta(\mathcal N) = \lim_{x\to\infty}\frac{1}{\log x}\int_{t\in \mathcal N\cap[2,x]}\frac{dt}{t} = 0.00000026\ldots\,.
\end{align}
We note that if a prime $p=p_n\in \mathcal N$, then for $p'=p_{n+1}$ we have $[p,p')\subset\mathcal N$ because the prime product on the left-hand side is constant on $[p,p')$, while $1/\log x$ is decreasing for $x\in [p,p')$.
The set of primes $\mathcal Q$ in $\mathcal N$ is precisely the set of non-Mertens primes, so $\mathcal Q=\mathcal P\setminus\mathcal P^{\textrm{Mert}}$. From the above observation, we may leverage knowledge of the continuous logarithmic density $\delta(\mathcal N)$ to obtain an upper bound on the relative (upper) logarithmic density of non-Mertens primes
\begin{align}
\label{eq:dens}
\bar\delta(\mathcal Q) := \limsup_{x\to \infty}\frac{1}{\log x}\sum_{p\le x\atop p\in \mathcal Q}\frac{\log p}{p}.
\end{align}
From the above observation, we have
\begin{align*}
\delta(\mathcal N) \ge \limsup_{x\to\infty}\frac{1}{\log x}\sum_{p\le x\atop p\in \mathcal Q}\int_{p}^{p'}\frac{dt}{t} & = \limsup_{x\to\infty}\frac{1}{\log x}\sum_{p\le x\atop p\in \mathcal Q}\log(p'/p).
\end{align*}
Then letting $d_p=p'-p$ be the gap between consecutive primes, we have
\begin{align*}
\delta(\mathcal N) \ge \limsup_{x\to\infty}\frac{1}{\log x}\sum_{p\le x\atop p\in \mathcal Q}\frac{d_p}{p},
\end{align*}
since $\sum\log(p'/p) = \sum d_p/p + O(1)$. The average gap is roughly $\log p$, so we may consider the primes for which $d_p < \epsilon \log p$, for a small positive constant $\epsilon$ to be determined.
We claim
\begin{align}\label{eq:RVclaim}
\limsup_{x\to\infty}\frac1{\log x}\sum_{\substack{p\le x\\ d_p < \epsilon \log p}}\frac{\log p}{p} \ \le \ 16\epsilon,
\end{align}
from which it follows
\begin{align*}
\bar\delta(\mathcal Q) & = \limsup_{x\to\infty}\frac{1}{\log x}\sum_{p\le x\atop p\in \mathcal Q}\frac{\log p}{p} \le \limsup_{x\to\infty}\frac{1}{\log x}\Big(\sum_{\substack{p\le x\\ p\in \mathcal Q\\d_p \ge \epsilon \log p}}\frac{d_p/\epsilon}{p} + \sum_{\substack{p\le x\\ d_p < \epsilon \log p}}\frac{\log p}{p}\Big)\\
& \le \delta(\mathcal N)/\epsilon + 16\epsilon.
\end{align*}
Hence to prove Theorem \ref{thm:race} it suffices to prove \eqref{eq:RVclaim}, since taking $\epsilon = \sqrt{\delta(\mathcal N)}/4$ gives
\begin{align}
\bar\delta(\mathcal Q) < 8\sqrt{\delta(\mathcal N)} < 4.2\times10^{-3}.
\end{align}
By Riesel-Vaughan \cite[Lemma 5]{RV}, the number of primes $p$ up to $x$ with $p+d$ also prime is at most
\begin{align*}
\sum_{p\le x\atop p+d\textrm{ prime}}1 \le \frac{8c_2x}{\log^2 x}\prod_{p\mid d\atop p>2}\frac{p-1}{p-2},
\end{align*}
where $c_2$ is the twin-prime constant $2\prod_{p>2}p(p-2)/(p-1)^2=1.3203\ldots$.
Denote the prime product by $F(d) = \prod_{p\mid d\atop p>2}\frac{p-1}{p-2}$, and consider the multiplicative function $H(d) = \sum_{u\mid d}\mu(u)F(d/u)$. We have $H(2^k)=0$ for all $k\ge1$, and for $p>2$ we have $H(p)=F(p)-1$, and $H(p^k)=0$ if $k\ge2$. Thus,
\begin{align*}
\sum_{d\le y} F(d) & = \sum_{d\le y}\sum_{u\mid d}H(u) = \sum_{u\le y}H(u)\sum_{d\le y/u}1\le y\sum_{u\le y}\frac{H(u)}{u} \le y\prod_{p>2}\Big(1 + \frac{H(p)}{p}\Big)\\
& = y\prod_{p>2}\Big(1 + \frac{(p-1)/(p-2)-1}{p}\Big) = y\prod_{p>2}\Big(1 + \frac{1}{p(p-2)}\Big).
\end{align*}
Noting that $c_2':=\prod_{p>2}(1 + 1/[p(p-2)])=2/c_2$, we have
\begin{align*}
\sum_{\substack{p\le x\\ d_p < \epsilon \log p}}1\le \sum_{d\le \epsilon\log x}\sum_{p\le x\atop p+d\textrm{ prime}}1 \le \frac{8c_2x}{\log^2 x}\sum_{d\le \epsilon\log x}F(d) \le \epsilon\frac{8c_2c_2'x}{\log x} = \epsilon\frac{16x}{\log x}.
\end{align*}
Thus, \eqref{eq:RVclaim} now follows by partial summation, and the proof is complete.
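The identity $c_2c_2'=2$ used above is exact, since $p(p-2)+1=(p-1)^2$ makes the Euler factors of $c_2$ and $c_2'$ cancel termwise. A quick check in exact rational arithmetic (our illustration):

```python
from fractions import Fraction

# factor of c_2 at p times factor of c_2' at p equals 1 exactly,
# because p(p-2) + 1 = (p-1)^2
for p in [3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47]:
    c2_factor = Fraction(p * (p - 2), (p - 1) ** 2)
    c2p_factor = 1 + Fraction(1, p * (p - 2))
    assert c2_factor * c2p_factor == 1
```
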
\begin{remark}
The concept of relative upper logarithmic density of the set of non-Mertens primes in
\eqref{eq:dens} can be replaced in the theorem with
$$
\bar\delta_0(\mathcal Q):=\limsup_{x\to\infty}\frac1{\log\log x}\sum_{\substack{p\le x\\p\in \mathcal Q}}\frac1p.
$$
Indeed, $\bar\delta_0(\mathcal Q)\le\bar\delta(\mathcal Q)$ follows from the identity
$$
\sum_{\substack{p\le x\\p\in \mathcal Q}}\frac1p=\frac1{\log x}\sum_{\substack{p\le x\\p\in \mathcal Q}}\frac{\log p}p
+\int_2^x\frac1{t(\log t)^2}\sum_{\substack{p\le t\\p\in \mathcal Q}}\frac{\log p}p\,dt.
$$
\end{remark}
\begin{remark}
\label{rmk:martin}
Greg Martin has indicated to us that one should be able to prove (under RH and LI) that the relative logarithmic density
of $\mathcal Q$ exists and is equal to the logarithmic density of $\mathcal N$. The idea is as follows. Partition
the positive reals into intervals of the form $[y,y+y^{1/3})$. Let $E_1$ be the union of those intervals
$[y,y+y^{1/3})$ where the sign of $e^\gamma\prod_{p\le x}(1-1/p)-1/\log x$ is not constant and let
$E_2$ be the union of those intervals $[y,y+y^{1/3})$ which do not have $\sim y^{1/3}/\log y$ primes
as $y\to\infty$. Then the logarithmic density of $E_1\cup E_2$ can be shown to be 0, from which the assertion
follows.
\end{remark}
\subsection{Proof of Theorem \ref{thm:support}}
We now use some numerical estimates of Dusart \cite{dusart} to prove Theorem \ref{thm:support}.
We say a pair of primes $p\le q$ is a {\bf Mertens pair} if
$$
\prod_{p\le r<q}\left(1-\frac1r\right)>\frac{\log p}{\log pq}.
$$
We claim that every pair of primes $p,q$ with $2<p\le q<e^{10^6}$ is a Mertens pair.
Assume this and let $A$ be a primitive set supported on the odd primes to $e^{10^6}$.
By \eqref{eq:g(A)}, if $p\notin A$, we have
\begin{align*}
\frac1{p}&\ge\sum_{a\in A'_p}\frac1a\prod_{p\le r <P(a)}\left(1-\frac1{r}\right)
>\sum_{a\in A'_p}\frac{\log p}{a\log(p\,P(a))}\\
&\ge\sum_{a\in A'_p}\frac{\log p}{a\log a}=f(A'_p)\log p.
\end{align*}
Dividing by $\log p$ we obtain $f(A'_p)\le f(p)$, which also holds if $p\in A$. Thus, the claim about Mertens pairs implies the theorem.
To prove the claim, first note that if $p$ is
a Mertens prime, then $p,q$ is a Mertens pair for all primes $q\ge p$. Indeed, we have
$$
\prod_{p\le r<q}\left(1-\frac1r\right)=\prod_{r<p}\left(1-\frac1r\right)^{-1}\prod_{r<q}\left(1-\frac1r\right)
>e^\gamma \log p\prod_{r<q}\left(1-\frac1r\right).
$$
By \eqref{eq:mert}, this last product exceeds $e^{-\gamma}/\log(2q)>e^{-\gamma}/\log(pq)$,
and using this in the above display shows that $p,q$ is indeed a Mertens pair. Since
all of the odd primes up to $p_{10^8}$ are Mertens, to complete the proof of our assertion,
it suffices to consider the case when $p>p_{10^8}$. Define $E_p$ via the
equation
$$
\prod_{r<p}\left(1-\frac1r\right)=\frac{1+E_p}{e^\gamma \log p}.
$$
Using \cite[Theorem 5.9]{dusart}, we have for $p>2{,}278{,}382$,
\begin{equation}
\label{eq:D}
|E_p|\le .2/(\log p)^3.
\end{equation}
A routine calculation
shows that if $p\le q<e^{4.999(\log p)^4}$, then
$$
\prod_{p\le r<q}\left(1-\frac1r\right)=\frac{\log p}{\log q}\cdot\frac{1+E_q}{1+E_p}
>\frac{\log p}{\log pq}.
$$
It remains to note that $4.999(\log p_{10^8})^4 >1{,}055{,}356$.
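The Mertens-pair condition is also cheap to test directly for small primes. In this Python sketch (our illustration; the bound $10^3$ is arbitrary and well inside the proven range $q<e^{10^6}$), prefix products make each pair check constant time:

```python
from math import log

def primes_up_to(n):
    sieve = bytearray([1]) * (n + 1)
    sieve[0] = sieve[1] = 0
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = bytearray(len(range(i * i, n + 1, i)))
    return [i for i in range(2, n + 1) if sieve[i]]

ps = primes_up_to(1000)
prefix = [1.0]               # prefix[i] = prod over primes r < ps[i] of (1 - 1/r)
for p in ps:
    prefix.append(prefix[-1] * (1.0 - 1.0 / p))

def is_mertens_pair(i, j):
    """For p = ps[i] <= q = ps[j], test prod_{p<=r<q}(1-1/r) > log p / log(pq)."""
    p, q = ps[i], ps[j]
    return prefix[j] / prefix[i] > log(p) / log(p * q)

# every pair of odd primes p <= q up to 1000 is a Mertens pair
assert all(is_mertens_pair(i, j)
           for i in range(1, len(ps)) for j in range(i, len(ps)))
```
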
It seems interesting to record the principle that we used in the proof.
\begin{corollary}
\label{cor:mertens}
If $A$ is a primitive set such that $p(a),P(a)$ is a Mertens pair for each $a\in A$, then
$f(A)\le f(\mathcal{P}(A))$.
\end{corollary}
\begin{remark}
\label{rmk:ford}
Kevin Ford has noted to us the remarkable similarity between the concept of Mertens primes in this
paper and the numbers
$$
\gamma_n=\left(\gamma+\sum_{k\le n}\frac{\log p_k}{p_k-1}\right)\prod_{k\le n}\left(1-\frac1{p_k}\right)
$$
discussed in Diamond--Ford \cite{DF}. In particular, while it may not be obvious from the definition, the analysis in
\cite{DF} on whether the sequence $\gamma_1,\gamma_2,\dots$ is monotone
is quite similar to the analysis in \cite{lamz} on the Mertens inequality. Though the numerical evidence
seems to indicate we always have $\gamma_{n+1}<\gamma_n$, this is disproved in \cite{DF}, and it is
indicated there that the first time this fails may be near $1.9\cdot10^{215}$. This may also be near where the
first odd non-Mertens prime exists. If this is the case, and under assumption of RH, it may be that
every pair of primes $p\le q$ is a Mertens pair when $p>2$ and $q < \exp(10^{100})$.
\end{remark}
\section{Odd primitive sets}
In this section we prove Theorem \ref{thm:no8s} and establish a curious result on parity for primitive sets.
Let
$$
\epsilon_0=\sum_{\substack{p>2\\p\notin\mathcal{P}^{\rm Mert}}}\left(e^\gamma g(p)-f(p)\right).
$$
\begin{lemma}
\label{lem:eps}
We have $0\le\epsilon_0<2.37\times10^{-7}$.
\end{lemma}
\begin{proof}
By the definition of $\mathcal{P}^{\rm Mert}$, the summands in the definition of $\epsilon_0$ are nonnegative,
so that $\epsilon_0\ge0$. If $p>2$ is not Mertens, then $p>p_{10^8}>2\times10^9$, so that \eqref{eq:D} shows that
\begin{equation}
\label{eq:nonM}
e^\gamma g(p)-f(p)<\frac1{5p(\log p)^4}.
\end{equation}
By \cite[Proposition 5.16]{dusart}, we have
$$
p_n >n(\log n+\log\log n-1 +(\log\log n-2.1)/\log n),\quad n\ge 2.
$$
Using this we find that
$$
\sum_{n>10^8}\frac1{5p_n(\log p_n)^4}<2.37\times10^{-7},
$$
which with \eqref{eq:nonM} completes the proof.
\end{proof}
\begin{remark}
Clearly, a smaller bound for $\epsilon_0$ would follow by raising the search limit for Mertens primes.
Another small improvement could be made using the estimate in \cite{axler} for $p_n$.
It follows from the ideas in Remark \ref{rmk:martin} that $\epsilon_0>0$. Further, it may be provable
from the ideas in
Remark \ref{rmk:ford} that $\epsilon_0<10^{-100}$ if the Riemann Hypothesis holds.
\end{remark}
We have the following result.
\begin{theorem}
\label{thm:odd}
For any odd primitive set $A$, we have
\begin{align}
\label{eq:odd}
f(A) \le f(\mathcal{P}(A))+\epsilon_0.
\end{align}
\end{theorem}
\begin{proof}
Assume that $A$ is an odd primitive set. We have
$$
f(A)=\sum_{p\in\mathcal{P}(A)}f(A'_p)\le\sum_{p\in\mathcal{P}(A)\cap\mathcal{P}^{\rm Mert}}f(p)+\sum_{p\in\mathcal{P}(A)\setminus\mathcal{P}^{\rm Mert}}e^\gamma g(p)
\le\epsilon_0+\sum_{p\in\mathcal{P}(A)}f(p)
$$
by the definition of $\epsilon_0$. This completes the proof.
\end{proof}
This theorem yields the following corollary.
\begin{corollary}\label{cor:8}
If $A$ is a primitive set containing no multiple of $8$, then \eqref{eq:odd} holds.
\end{corollary}
\begin{proof}
We have seen the corollary in the case that $A$ is odd.
Next, suppose that $A$ contains an even number, but no multiple of 4.
If $2\in A$, the result follows by applying Theorem \ref{thm:odd} to $A\setminus\{2\}$, so assume
$2\notin A$. Then $A''_2$ is an odd primitive set and $f(A'_2)\le f(A''_2)/2$.
We have by the odd case that
\begin{equation}
\label{eq:2mod4}
f(A)=f(A_3)+f(A'_2)< f(\mathcal{P}(A_3))+\epsilon_0+\frac12\left(f(\mathcal{P}(A''_2))+\epsilon_0\right).
\end{equation}
Since
$$
\frac12f(\mathcal{P}(A''_2))\le\frac12f(\mathcal{P}\setminus\{2\})<0.4577
$$
and $f(2)=0.7213\dots$, \eqref{eq:2mod4} and Lemma \ref{lem:eps} imply that $f(A)<f(\mathcal{P}(A))$,
which is stronger than required. The case when $A$ contains a multiple of 4
but no multiple of 8 follows in a similar fashion.
\end{proof}
Since a cube-free number cannot be divisible by 8, \eqref{eq:odd} holds for all primitive sets $A$ of cube-free numbers. Also, the proof of Corollary \ref{cor:8} can be adapted to show that \eqref{eq:odd} holds
for all primitive sets $A$ containing no number that is 4~(mod~8).
We close out this section with a curious result about those primitive sets $A$ where \eqref{eq:odd}
does not hold.
Namely, the Erd\H os conjecture must then hold for the set of odd members of $A$.
Put another way, \eqref{eq:odd} holds for any primitive set $A$ for
which the Erd\H os conjecture for the odd members of $A$ {\it fails}.
\begin{theorem}
\label{thm:curious}
If $A$ is a primitive set with $f(A)> f(\mathcal{P}(A))+\epsilon_0$, then $f(A_3)<f(\mathcal{P}(A_3))$.
\end{theorem}
\begin{proof}[Proof (Sketch)]
Without loss of generality, we may include in $A$ all primes not in $\mathcal{P}(A)$, and so assume that $\mathcal{P}(A)=\mathcal{P}$
and $f(A)>C+\epsilon_0$.
By Theorem \ref{thm:odd} we may assume that $A$ is not odd, and by Corollary \ref{cor:8}
we may assume that $2\notin A$. By the proof of Theorem \ref{thm:egamma} (see \eqref{eq:fA} and
\eqref{eq:fAK}), if $3\in A$, we have
$$
f(A)<f(3)+\frac23e^\gamma<C,
$$
a contradiction, so we may assume that $3\notin A$. We now apply the method of proof of Theorem \ref{thm:egamma}
to $A_3$, where powers of 3 replace powers of 2. This leads to
$$
f(A_3)<\frac12e^\gamma<C-f(2)=f(\mathcal{P}(A_3)).
$$
This completes the argument.
\end{proof}
\section{Zhang primes and the Banks--Martin conjecture}
Note that
$$
\sum_{p\ge x}\frac1{p\log p}\sim\frac1{\log x},\quad x\to\infty.
$$
In Erd\H os--Zhang \cite{ez} and in Zhang \cite{zhang2}, numerical approximations
to this asymptotic relation are exploited.
Say a prime $q$ is {\bf Zhang} if
$$\sum_{p\ge q}\frac{1}{p\log p} \le \frac{1}{\log q}.$$
Let $\mathcal P^{\textrm{Zh}}$ denote the set of Zhang primes. We are interested in Zhang primes because of the following result.
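The Zhang condition is easy to test numerically once one grants an approximate value of $C=\sum_p 1/(p\log p)$, since the tail $\sum_{p\ge q}1/(p\log p)$ equals $C$ minus a finite partial sum. The following sketch does exactly that; the numerical value of $C$ and the cutoff $1000$ are assumptions of the sketch, not part of the text.

```python
import math

def primes_up_to(n):
    """Sieve of Eratosthenes."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0] = sieve[1] = 0
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = bytearray(len(sieve[i * i::i]))
    return [i for i, is_p in enumerate(sieve) if is_p]

# Assumed numerical approximation of C = sum_p 1/(p log p);
# the text itself only uses C < 1.637.
C = 1.6366163

PRIMES = primes_up_to(1000)

def is_zhang(q):
    """Test sum_{p >= q} 1/(p log p) <= 1/log q for a prime q <= 1000,
    writing the tail as C minus the partial sum over primes p < q."""
    partial = sum(1 / (p * math.log(p)) for p in PRIMES if p < q)
    return C - partial <= 1 / math.log(q)
```

In agreement with the discussion below, `is_zhang` reports that $2$ and $3$ fail the condition while the next several primes satisfy it, with small margins (about $0.003$ at $q=11$).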
\begin{theorem}\label{thm:zhang}
If $\mathcal P(A'_p)\subset \mathcal P^{\textrm{Zh}}$, then $f(A'_p) \le f(p)$.
Hence the Erd\H os conjecture holds for all primitive sets $A$ supported on $\mathcal P^{\textrm{Zh}}$.
\end{theorem}
\begin{proof}
As in \cite{ez} it suffices to prove the theorem in the case that $A$ is a finite set. By $d^\circ(A)$ we mean
the maximal value of $\Omega(a)$ for $a\in A$.
We proceed by induction on $d^\circ(A_p')$. If $d^\circ(A'_p)\le 1$, then $f(A_p') \le f(p)$. If $d^\circ(A'_p)> 1$, then $f(A_p')\le f(A_p'')/p$. The primitive set $B:=A_p''$ satisfies $f(B)=f(B_p)=\sum_{q\ge p}f(B_q')$. Since $d^\circ(B_q') \le d^\circ(B) < d^\circ(A'_p)$, by induction we have $f(B_q') \le f(q)$. Thus, since $p$ is Zhang,
$$f(A_p'') = f(B)=\sum_{q\ge p}f(B_q')\le \sum_{q\ge p}\frac{1}{q\log q} \le \frac{1}{\log p},$$
from which we obtain $f(A_p')\le f(A_p'')/p \le 1/(p\log p)$. This completes the proof.
\end{proof}
From this one might hope that all primes are Zhang. However, the prime 2 is not Zhang since $C> 1/\log 2$,
and the prime 3 is not Zhang since $C-1/(2\log2)>1/\log3$. Nevertheless, as with Mertens primes, it is true that the remaining
primes up to $p_{10^8}$ are Zhang. Indeed, starting from \eqref{eq:cohen}, we
computed that
\begin{align}
\sum_{p\ge q}\frac1{p\log p} = C - \sum_{p < q}\frac{1}{p\log p} \le \frac{1}{\log q}\qquad\textrm{for all } 3< q \le p_{10^8}.
\end{align}
The computation stopped at $10^8$ for convenience, and one could likely extend this further with some patience. It seems likely that there is also a ``race'' between $\sum_{p\ge q}1/(p\log p)$
and $1/\log q$, as with Mertens primes, and that a large logarithmic density of primes $q$
are Zhang, with a small logarithmic density of primes failing to be Zhang.
A related conjecture due to Banks and Martin \cite{bm} is the chain of inequalities,
\begin{align*}
\sum_{p}\frac{1}{p\log p} > \sum_{p\le q}\frac{1}{pq\log pq} > \sum_{p\le q\le r}\frac{1}{pqr\log pqr} > \cdots,
\end{align*}
succinctly written as $f({\mathbb N}_k) > f({\mathbb N}_{k+1})$ for all $k\ge1$, where
${\mathbb N}_k = \{n: \Omega(n) = k\}$. As mentioned in the introduction, we know only that
$f({\mathbb N}_1)> f({\mathbb N}_k)$ for all $k\ge2$ and $f({\mathbb N}_2)>f({\mathbb N}_3)$.
More generally, for a subset $Q$ of primes, let ${\mathbb N}_k(Q)$ denote the subset of ${\mathbb N}_k$ supported on $Q$. A result of Zhang \cite{zhang2} implies that $f({\mathbb N}_1(Q)) > f({\mathbb N}_{k}(Q))$ for all $k>1$, while Banks and Martin showed that $f({\mathbb N}_k(Q)) > f({\mathbb N}_{k+1}(Q))$ if $\sum_{p\in Q}1/p$ is not too large.
We prove a similar result in the case where $Q$ is a subset of the Zhang primes and we
replace $f({\mathbb N}_k(Q))$ with $h({\mathbb N}_k(Q))$. Recall $h(A) = \sum_{a\in A}1/(a\log P(a))$.
\begin{proposition}
For all $k\ge1$ and $Q\subset \mathcal P^{\textrm{Zh}}$,
we have $h({\mathbb N}_k(Q)) \ge h({\mathbb N}_{k+1}(Q))$.
\end{proposition}
\begin{proof}
Since every prime in $Q$ is a Zhang prime, we have
\begin{align*}
h({\mathbb N}_{k+1}(Q)) & = \sum_{\substack{q_1\le \cdots\le q_{k+1}\\ q_i\in Q}}\frac{1}{q_1\cdots q_k q_{k+1}\log q_{k+1}} \\
& = \sum_{\substack{q_1\le \cdots\le q_{k}\\ q_i\in Q}}\frac{1}{q_1\cdots q_{k}}\sum_{q_{k+1}\ge q_{k}}\frac{1}{q_{k+1}\log q_{k+1}}\\
& \le \sum_{\substack{q_1\le \cdots\le q_{k}\\ q_i\in Q}}\frac{1}{q_1\cdots q_{k} \log q_k} = h({\mathbb N}_{k}(Q)).
\end{align*}
This completes the proof.
\end{proof}
It is interesting that if we do not in some way restrict the primes used, the analogue
of the Banks--Martin conjecture for the function $h$ fails. In particular, we have
$$
h({\mathbb N}_2)>\sum_{m\le 10^4}\frac1{p_m}\sum_{n\ge m}\frac1{p_n\log p_n}
=\sum_{m\le 10^4}\frac1{p_m}\left(C-\sum_{k<m}\frac1{p_k\log p_k}\right)
>1.638,
$$
while $h({\mathbb N}_1)=C<1.637$.
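The truncated double sum in the display can be reproduced by a short computation. The sketch below assumes the approximation $C\approx 1.6366163$ and the fact $p_{10^4}=104729$, neither of which is stated in the text.

```python
import math

def primes_up_to(n):
    """Sieve of Eratosthenes."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0] = sieve[1] = 0
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = bytearray(len(sieve[i * i::i]))
    return [i for i, is_p in enumerate(sieve) if is_p]

C = 1.6366163                            # assumed value of sum_p 1/(p log p)
primes = primes_up_to(104729)[:10 ** 4]  # p_1, ..., p_{10^4}

partial = 0.0  # running value of sum_{k < m} 1/(p_k log p_k)
lower = 0.0    # the truncated double sum bounding h(N_2) from below
for p in primes:
    lower += (C - partial) / p   # term (1/p_m)(C - partial sum at m)
    partial += 1 / (p * math.log(p))
```

The value of `lower` should reproduce the displayed bound exceeding $1.638$, and in particular exceeding $h(\mathbb{N}_1)=C$.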
It is also interesting that the analogue of the Banks--Martin conjecture for the function $g$ is
false since
$$
1=g({\mathbb N}_1)=g({\mathbb N}_2)=g({\mathbb N}_3)=\cdots\,.
$$
We have already shown in \eqref{eq:g(A)} that
$g(A'_q)\le g(q)$ for any primitive set $A$ and prime $q$, so the analogue for $g$ of the strong Erd\H os conjecture holds.
\subsection{Proof of Theorem \ref{thm:Nk}.}
We now return to the function $f$ and prove Theorem \ref{thm:Nk}.
We may assume that $k$ is large. Let $m=\lfloor\sqrt{k}\rfloor$ and let
$B(n)=e^{e^n}$. We have
\begin{align*}
f({\mathbb N}_k)&=\sum_{\Omega(a)=k}\frac1{a\log a} >\sum_{\substack{\Omega(a)=k\\
e^{e^{k}}<a\le e^{e^{k+m}}}}\frac1{a\log a}\\
&=\sum_{j\le m}\sum_{\substack{\Omega(a)=k\\B({k+j-1})<a\le
B(k+j)}}\frac1{a\log a} >
\sum_{j\le m}\frac1{\log
B({k+j})}\sum_{\substack{\Omega(a)=k\\B(k+j-1)<a\le B({k+j})}}
\frac1a.
\end{align*}
Thus it suffices to show that there is a positive constant $c$ such that for $j\le m$ we have
\begin{equation}
\label{eq:ss}
\sum_{\substack{\Omega(a)=k\\B({k+j-1})<a\le B({k+j})}}\frac1a
\ge c\frac{\log B({k+j})}m=c\frac{e^{k+j}}m,
\end{equation}
so that the theorem will follow.
Let $N_k(x)$ denote the number of members of ${\mathbb N}_k$ in $[1,x]$.
We use the Sathe--Selberg theorem, see \cite[Theorem 7.19]{MV},
from which we have that uniformly for $B({k})< x\le B({k+m})$, as $k\to\infty$,
$$
N_k(x)\sim \frac x{k!}\frac{(\log\log x)^k}{\log x}.
$$
This result also follows from Erd\H os \cite{erdos48}.
We have
\begin{align*}
\sum_{\substack{\Omega(a)=k\\B({k+j-1})<a\le B({k+j})}}\frac1a
&>\int_{B({k+j-1})}^{B({k+j})}\frac{N_k(x)-N_k(B({k+j-1}))}{x^2}\,dx\\
&\gg \int_{2B({k+j-1})}^{B({k+j})}\frac{N_k(x)}{x^2}\,dx.
\end{align*}
Thus,
\begin{align*}
\sum_{\substack{\Omega(a)=k\\B({k+j-1})<a\le B({k+j} )}}\frac1a
&\gg \frac{(\log\log
B({k+j-1}))^k}{k!}\int_{2B({k+j-1})}^{B({k+j})}\frac{dx}{x\log x}\\
&=\frac{(k+j-1)^k}{k!}\left(\log\log B({k+j})-\log\log(2B({k+j-1}))\right)\\
&\gg\frac{(k+j-1)^k}{k!}\gg\frac{e^{k+j}}{\sqrt{k}},
\end{align*}
the last estimate following from Stirling's formula. This proves
\eqref{eq:ss},
and so the theorem.
The sets ${\mathbb N}_k$ and Theorem \ref{thm:Nk} give us the following result.
\begin{corollary}
\label{cor:largex}
We have that
$$
\limsup_{x\to\infty}\{f(A):A\subset[x,\infty),\,A \textnormal{ primitive}\}>0.
$$
\end{corollary}
\section*{Acknowledgments}
We thank Greg Martin for the content of Remark \ref{rmk:martin} and Kevin Ford for the content of
Remark \ref{rmk:ford}.
We thank Paul Kinlaw and Zhenxiang Zhang for some helpful comments.
\end{document} |
\begin{document}
\title{Minimal Codes From Characteristic Functions Not Satisfying The Ashikhmin-Barg Condition}
\abstract{A \textit{minimal code} is a linear code where the only instance that a codeword has its support contained in the support of another codeword is when the codewords are scalar multiples of each other. Ashikhmin and Barg gave a sufficient condition for a code to be minimal, which led to much interest in constructing minimal codes that do not satisfy their condition. We consider a particular family of codes $\mathcal C_f$ when $f$ is the indicator function of a set of points, and prove a sufficient condition for $\mathcal C_f$ to be minimal and not satisfy Ashikhmin and Barg's condition based on certain geometric properties of the support of $f$. We give a lower bound on the size of a set of points satisfying these geometric properties and show that the bound is tight.}
\section{Introduction}
Linear codes have found applications in areas far beyond error-correction. For example, an $(S,T)$ \textit{secret-sharing scheme} is a collection of $S$ ``shares" of a $q$-ary secret such that knowledge of any $T$ shares determines the secret, but knowledge of $T-1$ or fewer shares gives no information. Massey showed that certain codewords in the dual of a linear code, called \textit{minimal codewords}, can be used to construct a secret sharing scheme \cite{M}. However, determining the set of minimal codewords in a linear code is a hard problem in general, which galvanized the search for codes where every codeword is minimal, referred to as \textit{minimal codes}.
In 1998, Ashikhmin and Barg gave a sufficient condition for a code to be minimal based on the ratio of the maximum and minimum weight of the code \cite{AB}.
\begin{lemma}
A $q$-ary linear $[n,k]$ code $\mathcal C$ is minimal if
\[ \frac{w_{\max}}{w_{\min}} < \frac{q}{q-1}, \]
where $w_{\min}$ and $w_{\max}$ are the minimum and maximum weights of $\mathcal C$, respectively.
\end{lemma}
For many years following this result all known examples of minimal codes satisfied Ashikhmin and Barg's condition, until 2017 when Chang and Hyun \cite{CH} constructed a minimal binary code from a simplicial complex that did not. As for finding a family of $q$-ary minimal codes not satisfying Ashikhmin and Barg's condition for $q$ an odd prime power, a particular code that has received much attention is the code $\mathcal C_f$, which we will now define. Let $f: \mathbb{F}_q^n \rightarrow \mathbb{F}_q$ be an arbitrary but fixed function. We define $\mathcal C_f$ to be the $\mathbb{F}_q$-subspace of $\mathbb{F}_q^{q^n-1}$ spanned by vectors of the form
\begin{equation*}
c(u,v):= (uf(x)+v \cdot x)_{x \in \mathbb{F}_q^n \setminus \{0\}}
\end{equation*}
where $u \in \mathbb{F}_q$, $v \in \mathbb{F}_q^n$, and $v\cdot x$ is the usual dot product. We record below the basic parameters of the code, which are well known.
\begin{lemma}
If $f:\mathbb{F}_q^n \rightarrow \mathbb{F}_q$ is not linear and $f(a) \neq 0$ for some $a \in \mathbb{F}_q^n \setminus \{0 \}$, then $\mathcal C_f$ is a $[q^n-1, n+1]$ linear code.
\end{lemma}
In \cite{DHZ}, Ding et al.\ gave necessary and sufficient conditions for the code $\mathcal C_f$ to be minimal when $q=2$ based on the Walsh-Hadamard transform of $f$, which is defined to be the function $\hat{f}(x):= \sum_{v \in \mathbb{F}_2^n}(-1)^{f(v)+v\cdot x}$.
\begin{lemma}\label{dingminimal}
If $f:\mathbb{F}_2^n \rightarrow \mathbb{F}_2$ is not linear and $f(a) \neq 0$ for some $a \in \mathbb{F}_2^n \setminus \{0 \}$ then the binary code $\mathcal C_f$ is minimal if and only if $\hat{f}(x) + \hat{f}(y) \neq 2^n$ and $\hat{f}(x)-\hat{f}(y) \neq 2^n$ for every pair of distinct vectors $x, y \in \mathbb{F}_2^n$.
\end{lemma}
A boolean function $f: \mathbb{F}_2^n \rightarrow \mathbb{F}_2$ with $n$ a positive even integer is said to be \textit{bent} if $|\hat{f}(x)|=2^{n/2}$ for all $x \in \mathbb{F}_2^n$. Using Lemma~\ref{dingminimal}, it is easy to see that the binary code $\mathcal C_f$ is minimal when $f$ is a bent function.
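As an illustration, the criterion of Lemma~\ref{dingminimal} can be checked by brute force for a small bent function. The sketch below uses the standard example $f(x)=x_1x_2+x_3x_4$ on $\mathbb{F}_2^4$; the choice of function is ours, not taken from the text.

```python
from itertools import product

def walsh_hadamard(f, n):
    """Return {x: f_hat(x)} with f_hat(x) = sum_v (-1)^(f(v) + v.x)."""
    vecs = list(product((0, 1), repeat=n))
    dot = lambda u, v: sum(a * b for a, b in zip(u, v)) % 2
    return {x: sum((-1) ** ((f(v) + dot(v, x)) % 2) for v in vecs)
            for x in vecs}

n = 4
f = lambda v: (v[0] * v[1] + v[2] * v[3]) % 2  # a standard bent function

fhat = walsh_hadamard(f, n)
bent = all(abs(w) == 2 ** (n // 2) for w in fhat.values())

# Criterion of the lemma: C_f is minimal iff no distinct x, y satisfy
# f_hat(x) + f_hat(y) = 2^n or f_hat(x) - f_hat(y) = 2^n.
minimal = all(fhat[x] + fhat[y] != 2 ** n and fhat[x] - fhat[y] != 2 ** n
              for x in fhat for y in fhat if x != y)
```

Here every transform value is $\pm 4$, so sums and differences of pairs lie in $\{-8,0,8\}$ and the criterion is satisfied.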
Bonini et al.\ studied the code $\mathcal C_f$ for arbitrary prime powers $q$ and functions $f$, and showed that if the zero set of $f$ satisfies certain geometric properties then $\mathcal C_f$ is a minimal code \cite{BB}. We continue this line of work by considering the code $\mathcal C_f$ when $f$ is the indicator function of a set, and give sufficient conditions for $\mathcal C_f$ to be minimal and not satisfy the condition of Ashikhmin and Barg in terms of the geometric properties of the support of $f$. We give a tight lower bound on the size of sets satisfying our geometric conditions, and give an explicit example of a set meeting the lower bound. In section 2 we lay out the notation used, and in section 3 we present the main results.
\section{Notation}
\begin{definition}
A \textit{linear} $[n,k]$ \textit{code} is a $k$-dimensional subspace $\mathcal C$ of $\mathbb{F}_q^n$.
\end{definition}
\begin{definition}
A codeword $c \in \mathcal C$ is said to be \textit{minimal} if whenever $\textnormal{supp}(c) \subseteq \textnormal{supp}(c^\prime)$ for some codeword $c^\prime \in \mathcal C$, we have $c = \lambda c^\prime$ for some $\lambda \in \mathbb{F}_q^\times$. A linear code $\mathcal C$ is \textit{minimal} if all codewords of $\mathcal C$ are minimal.
\end{definition}
We summarize some of the notation used in the paper:
\begin{itemize}
\item $e_i$ will denote the $i^{th}$ standard basis vector.
\item For $v \in \mathbb{F}_q^n$, we will let $H(v)$ denote the set $\{u \in \mathbb{F}_q^n : v \cdot u =0 \}$.
\item For a function $f : \mathbb{F}_q^n \rightarrow \mathbb{F}_q$, we will let $V(f)$ denote the set of zeros of $f$, $\{ u \in \mathbb{F}_q^n : f(u)=0 \}$.
\item If $U$ is any set, we will let $U^*:=U \setminus \{0 \}$, and $\overline{U}$ will denote the complement of $U$.
\item By a \textit{hyperplane}, we will mean an $(n-1)$-dimensional subspace of $\mathbb{F}_q^n$.
\item By an \textit{affine hyperplane}, we will mean a coset of an $(n-1)$-dimensional subspace of $\mathbb{F}_q^n$. Note that by our convention, a hyperplane is also an affine hyperplane.
\end{itemize}
\section{The Main Results}
The next theorem is our main result.
\begin{theorem}\label{main}
Let $q$ be an arbitrary prime power, and let $S \subseteq \mathbb{F}_q^n \setminus \{0\}$ be a set of points such that
\begin{enumerate}
\item $S$ is not contained in any affine hyperplane,
\item $S$ meets every affine hyperplane,
\item $|S| < q^{n-2}(q-1)$.
\end{enumerate}
Then $\mathcal C_f$ with $f$ the indicator function of $S$ is a minimal code that does not satisfy the Ashikhmin-Barg condition.
\end{theorem}
\begin{proof}
Suppose that $\textnormal{supp}( c(u^\prime,v^\prime)) \subseteq \textnormal{supp}(c(u,v))$ for some codewords $c(u,v), c(u^\prime, v^\prime)$ of $\mathcal C_f$. Equivalently,
\begin{equation}\label{assump}
V(uf(x)+v \cdot x)^* \subseteq V(u^\prime f(x) + v^\prime \cdot x)^*
\end{equation}
We proceed by cases to show that $c(u^\prime,v^\prime)=\lambda c(u,v)$ for some $\lambda \in \mathbb{F}_q^\times$.
\textbf{Case 1:} If $v=0$, then equation~\ref{assump} implies $V(f)^* \subseteq H(v^\prime)^*$, so that $\overline{H(v^\prime)^*} \subseteq S$, contradicting that $|S| < q^{n-2}(q-1)$.
\textbf{Case 2:} If $v^\prime =0$ then $V(uf(x)+v\cdot x)^* \subseteq V(f)^*$. From the partition
\begin{equation}\label{partition}
V(uf(x) + v\cdot x)^* = (V(f)^* \cap H(v)^*) \cup ( S \cap \{v \cdot x = -u \})
\end{equation}
it follows that $S$ does not meet the affine hyperplane $\{v \cdot x =-u \}$, a contradiction.
\textbf{Case 3:} If $v, v^\prime \neq 0$, then from equation~\ref{assump} and the partition of equation~\ref{partition} we have
\begin{equation}
V(f)^* \cap H(v)^* \subseteq V(f)^* \cap H(v^\prime)^* \subseteq H(v^\prime)^*
\end{equation}
Here $|H(v)^* \cap V(f)^*| = |H(v)^* \setminus S| \geq q^{n-1}-1 - |S| \geq q^{n-2}$, while $|H(v)^* \cap H(w)^*| = q^{n-2}-1$ for any hyperplane $H(w) \neq H(v)$; hence $H(v)^* \cap V(f)^*$ is not contained in a hyperplane other than $H(v)$, i.e. $H(v)=H(v^\prime)$. Thus we have $v^\prime = \lambda v$ for some $\lambda \in \mathbb{F}_q^\times$. Since $S$ meets every affine hyperplane, we can choose some $y$ in $S \cap \{v\cdot x = -u \}$. The choice of $y$ gives $u=-v\cdot y$, and equation~\ref{assump} implies $u^\prime=-v^\prime \cdot y = -\lambda v \cdot y$, so that $u^\prime = \lambda u$. We therefore conclude $c(u^\prime, v^\prime) = \lambda c(u,v)$ in this case, as was required.
\textbf{Case 4:} If $u=0$ and $H(v) \neq H(v^\prime)$, then equation~\ref{assump} reads as $H(v)^* \subseteq V(u^\prime f(x)+v^\prime \cdot x)^*$. Using the partition of equation~\ref{partition} and the assumption that $S$ is not contained in any affine hyperplane, we have
\begin{equation*}
\begin{split}
q^{n-1}-1 &= |H(v)^*| \\
&= |V(f)^* \cap H(v^\prime)^* \cap H(v)^*| + |S \cap \{ v^\prime \cdot x =-u^\prime \} \cap H(v)^* | \\
& \leq |H(v)^* \cap H(v^\prime)^*| + |S \cap \{ v^\prime \cdot x =-u^\prime \} | \\
& \leq q^{n-2}-1 + q^{n-2}(q-1) - 1 \\
&= q^{n-1} - 2\\
\end{split}
\end{equation*}
This contradiction shows that $H(v)=H(v^\prime)$, so that $v^\prime = \lambda v$ for some $\lambda \in \mathbb{F}_q^\times$. But the containment $H(v)^* \subseteq V(u^\prime f(x) + \lambda v \cdot x)^*$ implies that $H(v)^* \subseteq V(f)^*$, or equivalently $S \subseteq \overline{H(v)^*}$. This contradicts that $S$ meets the hyperplane $H(v)$.
\textbf{Case 5:} If $u^\prime =0$, then we have $V(uf(x)+v\cdot x)^* \subseteq H(v^\prime)^*$, and the argument of Case 3 applies.
Lastly we check that the code $\mathcal C_f$ does not satisfy the Ashikhmin-Barg condition. The maximum weight is at least the weight of $c(0,v)$ for any $v \neq 0$, which is the number of nonzero points outside $H(v)$, namely $q^{n-1}(q-1) \geq q^{n-1}$, and the minimum weight is at most the weight of $c(1,0)$, which is $|S|$. Thus:
\begin{equation*}
\frac{w_{\max}}{w_{\min}} \geq \frac{q^{n-1}}{|S|} > \frac{q}{q-1}
\end{equation*}
\end{proof}
When $q=2$ the conditions of Theorem~\ref{main} simplify considerably, which we record in the following corollary.
\begin{corollary}\label{mainbinary}
Let $S \subseteq \mathbb{F}_2^n \setminus \{0 \}$ be a set of points such that
\begin{enumerate}
\item $S$ is not contained in any hyperplane,
\item $S$ meets every hyperplane,
\item $|S| < 2^{n-2}$
\end{enumerate}
Then the binary code $\mathcal C_f$ with $f$ the indicator function of $S$ is a minimal code not satisfying the Ashikhmin-Barg condition.
\end{corollary}
\begin{proof}
It suffices to check that when $q=2$, and $S$ is a set of points that is not contained in a hyperplane and meets every hyperplane, then $S$ is not contained in an affine hyperplane, and meets every affine hyperplane.
To see that $S$ is not contained in an affine hyperplane, suppose that $H$ is an affine hyperplane not containing the origin, and that $S \subseteq H$. Since $q=2$, then $\overline{H}$ is a hyperplane, so that $S$ does not meet the hyperplane $\overline{H}$, a contradiction.
Similarly, if $S$ does not meet the affine hyperplane $H$ not containing the origin, then $S$ is contained in $\overline{H}$, which is a hyperplane. Therefore $S$ meets every affine hyperplane, so that $S$ satisfies the conditions of Theorem~\ref{main}.
\end{proof}
\begin{example}
Assume that $n \geq 6$ is an even positive integer, and let $q=2$. A \textit{partial spread} of order $s$ is a set of $n/2$-dimensional subspaces $\{U_1,...,U_s\}$ of $\mathbb{F}_2^n$ such that $U_i \cap U_j = \{ 0 \}$ for all $1 \leq i < j \leq s$. It is easy to see that a partial spread has order at most $2^{n/2}+1$.
In \cite{DHZ}, Ding et al.\ showed that if $1 \leq s \leq 2^{n/2}+1$ and $s \notin \{1, 2^{n/2}, 2^{n/2}+1 \}$, then $\mathcal C_f$ with $f$ the indicator function of the set $S = \cup_{i=1}^s U_i^*$ is a minimal code. Moreover, they showed that if, in addition, we have $s \leq 2^{\frac{n}{2}-2}$ then $\mathcal C_f$ does not satisfy Ashikhmin and Barg's condition. They proved this by computing the Walsh-Hadamard transform of $f$ and then applying Lemma~\ref{dingminimal}, but we can alternatively check that the set $S$ satisfies the conditions of Corollary~\ref{mainbinary}.
Since $s \geq 2$, the set $S$ clearly spans $\mathbb{F}_2^n$, and the assumption that $n \geq 6$ means that $\dim(U_i) \geq 3$, so $S$ meets every hyperplane. Finally, we have in general that $|S|=s(2^{n/2}-1)$, so if we assume that $s \leq 2^{\frac{n}{2}-2}$ then an easy computation shows that $|S| \leq 2^{n-2}-2^{\frac{n}{2}-2}< 2^{n-2}$. Therefore $S$ indeed satisfies the conditions of Corollary~\ref{mainbinary}.
\end{example}
\begin{example}
Let $n \geq 7$ and $2 \leq k \leq \lfloor \frac{n-3}{2} \rfloor$. Let $S$ be the set of vectors of $\mathbb{F}_2^n$ with weight at most $k$. In \cite{DHZ}, Ding et al.\ showed that $\mathcal C_f$ with $f$ the indicator function of $S$ is a minimal $[2^n-1, n+1, \sum_{i=1}^k {n \choose i}]$ binary code, and moreover that $\mathcal C_f$ does not satisfy the Ashikhmin-Barg condition if and only if
\begin{equation}\label{dingbd}
1 + 2 \sum_{i=1}^k {n \choose i} \leq 2^{n-1} + {n-1 \choose k}
\end{equation}
We alternatively check that the set $S$ satisfies the conditions of Corollary~\ref{mainbinary}. Since $S$ contains the standard basis vectors, $S$ is clearly not contained in any hyperplane. Given any hyperplane $H(v)$, at least one of the vectors $e_1$, $e_2$, or $e_1+e_2$ is an element of $H(v)$, and each of these vectors is also an element of $S$. Therefore $S$ also meets every hyperplane. In general the size of $S$ is $\sum_{i=1}^k {n \choose i}$, so to apply Corollary~\ref{mainbinary} we lastly need to impose the restriction that $\sum_{i=1}^k {n \choose i} < 2^{n-2}$. We note that this is equivalent to the inequality
\begin{equation}
1+ 2 \sum_{i=1}^k {n \choose i} \leq 2^{n-1}
\end{equation}
which is a more restrictive condition than the inequality given in Equation~\ref{dingbd}.
\end{example}
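For small parameters the conclusions of Corollary~\ref{mainbinary} can be confirmed by exhaustive computation. The sketch below builds the binary code $\mathcal C_f$ for the weight-at-most-$k$ set with $n=7$, $k=2$ (our choice of parameters) as support bitmasks, and checks minimality and the failure of the Ashikhmin-Barg condition directly.

```python
from itertools import product

n, k = 7, 2
points = [x for x in product((0, 1), repeat=n) if any(x)]  # F_2^n \ {0}

f = lambda x: 1 if sum(x) <= k else 0        # indicator of S
dot = lambda u, v: sum(a * b for a, b in zip(u, v)) % 2

# Each codeword c(u, v) encoded as an integer bitmask over the 2^n - 1
# coordinates; over F_2 a codeword coincides with its support.
codewords = set()
for u in (0, 1):
    for v in product((0, 1), repeat=n):
        mask = 0
        for i, x in enumerate(points):
            if (u * f(x) + dot(v, x)) % 2:
                mask |= 1 << i
        codewords.add(mask)
codewords.discard(0)

# Minimality over F_2: supp(c) contained in supp(d) must force c = d.
minimal = all(not (c != d and c & ~d == 0)
              for c in codewords for d in codewords)

weights = [bin(c).count("1") for c in codewords]
ratio = max(weights) / min(weights)  # >= 2 = q/(q-1): AB condition fails
```

The code has $2^{n+1}-1$ nonzero codewords, consistent with dimension $n+1$, and the weight ratio is at least $2$, so the Ashikhmin-Barg bound indeed fails.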
We lastly give a tight lower bound on the size of a set of points satisfying the conditions of Theorem~\ref{main}. The following lemma was first proved by Jameson \cite{J}. There are many known proofs of the result; for a survey on them we refer the reader to \cite{B}.
\begin{lemma}\label{bound}
If $S$ is a set of points in $\mathbb{F}_q^n$ meeting every affine hyperplane then $|S| \geq n(q-1)+1$.
\end{lemma}
The lower bound of Lemma~\ref{bound} clearly gives a lower bound on the size of a set of points satisfying the conditions of Theorem~\ref{main}. However, it is not obvious that this bound should be tight since the set of points in Theorem~\ref{main} does not contain the origin.
\begin{theorem}\label{boundmain}
Let $q$ be an arbitrary prime power. If $S \subseteq \mathbb{F}_q^n \setminus \{0\}$ is a set of points such that
\begin{enumerate}
\item $S$ is not contained in any affine hyperplane,
\item $S$ meets every affine hyperplane,
\item $|S| < q^{n-2}(q-1)$,
\end{enumerate}
then $|S| \geq n(q-1)+1$. Moreover, this lower bound is tight.
\end{theorem}
\begin{proof}
The lower bound is immediate from Lemma~\ref{bound}. To show that it is tight, consider the set of points
\begin{equation*}
S: =\{ a+ \lambda e_i : \lambda \in \mathbb{F}_q, 1 \leq i \leq n \}
\end{equation*}
where $a \in \mathbb{F}_q^n \setminus \{0 \}$ is any point not equal to $\lambda e_i$ for any $\lambda \in \mathbb{F}_q$, $1 \leq i \leq n$. By our choice of $a$, the origin is not an element of $S$.
Clearly $S$ is not contained in an affine hyperplane, and $|S|=n(q-1)+1 <q^{n-2}(q-1)$.
Lastly, if $u_1X_1+u_2X_2+\cdots+u_nX_n=\alpha$ is the equation of an affine hyperplane, then it is easily checked that $a+\lambda e_i$ is a point on the affine hyperplane, where $i$ is chosen such that $u_i \neq 0$, and $\lambda$ is chosen to be the element $\lambda = \frac{1}{u_i}(\alpha- u \cdot a)$.
\end{proof}
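For a prime $q$ the construction in the proof can be verified exhaustively with plain mod-$q$ arithmetic. The following sketch checks the case $q=2$, $n=5$ with $a=e_1+e_2$; the parameter choices are ours.

```python
from itertools import product

q, n = 2, 5
vectors = list(product(range(q), repeat=n))

add = lambda u, v: tuple((a + b) % q for a, b in zip(u, v))
e = [tuple(int(j == i) for j in range(n)) for i in range(n)]
a = add(e[0], e[1])  # a point that is not a scalar multiple of any e_i

# S = {a + lam * e_i : lam in F_q, 1 <= i <= n}
S = {add(a, tuple((lam * c) % q for c in e[i]))
     for lam in range(q) for i in range(n)}

# All affine hyperplanes {x : u.x = alpha} with u != 0.
planes = [{x for x in vectors
           if sum(ui * xi for ui, xi in zip(u, x)) % q == alpha}
          for u in vectors if any(u) for alpha in range(q)]

meets_all = all(S & H for H in planes)      # condition 2
in_none = all(not S <= H for H in planes)   # condition 1
size_ok = len(S) == n * (q - 1) + 1 and len(S) < q ** (n - 2) * (q - 1)
```

All three conditions of the theorem hold, $0 \notin S$, and $|S| = n(q-1)+1 = 6$ meets the lower bound.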
The sufficient conditions given in Theorem~\ref{main} and Corollary~\ref{mainbinary} give geometric conditions for the code $\mathcal C_f$ to be minimal and not satisfy the Ashikhmin-Barg condition when $q$ is any prime power and $f$ is an indicator function. Moreover, since the minimum weight of $\mathcal C_f$ is at most $|S|$, then the tight lower bound on the size of a set of points satisfying these conditions given in Theorem~\ref{boundmain} shows that it is possible for $\mathcal C_f$ to additionally have small minimum weight.
\newcommand{\Addresses}{{
\footnotesize
Julien Sorci, \textsc{Department of Mathematics, University of Florida,
P. O. Box 118105, Gainesville FL 32611, USA}\par\nopagebreak
\textit{E-mail address}: \texttt{[email protected]}
}}
\Addresses
\end{document} |
\begin{document}
\newtheorem{definition}{Definition}[section]
\newtheorem{defn}[definition]{Definition}
\newtheorem{lem}[definition]{Lemma}
\newtheorem{prop}[definition]{Proposition}
\newtheorem{thm}[definition]{Theorem}
\newtheorem{cor}[definition]{Corollary}
\newtheorem{cors}[definition]{Corollaries}
\newtheorem{example}[definition]{Example}
\newtheorem{examples}[definition]{Examples}
\newtheorem{rems}[definition]{Remarks}
\newtheorem{rem}[definition]{Remark}
\newtheorem{notations}[definition]{Notations}
\theoremstyle{remark}
\newtheorem{dgram}[definition]{Diagram}
\newtheorem{fact}[definition]{Fact}
\newtheorem{illust}[definition]{Illustration}
\theoremstyle{definition}
\newtheorem{question}[definition]{Question}
\newtheorem{conj}[definition]{Conjecture}
\title{\textbf{SS-Injective Modules and Rings}}
\author{\textbf{Adel Salim Tayyah} \\ Department of Mathematics, College of Computer Science and \\Information Technology, Al-Qadisiyah University, Al-Qadisiyah, Iraq \\Email: [email protected] \\ \\\textbf{Akeel Ramadan Mehdi}\\ Department of Mathematics, College of Education, \\Al-Qadisiyah University, P. O. Box 88, Al-Qadisiyah, Iraq \\Email: akeel\[email protected]}
\date{\today}
\maketitle
\begin{abstract} We introduce and investigate ss-injectivity as a generalization of both soc-injectivity and small injectivity. A module $M$ is said to be ss-$N$-injective (where $N$ is a module) if every $R$-homomorphism from a semisimple small submodule of $N$ into $M$ extends to $N$. A module $M$ is said to be ss-injective (resp. strongly ss-injective) if $M$ is ss-$R$-injective (resp. ss-$N$-injective for every right $R$-module $N$).
Some characterizations and properties of (strongly) ss-injective modules and rings are given. Some results of Amin, Yousif and Zeyada on soc-injectivity are extended to ss-injectivity. Also, we provide some new characterizations of universally mininjective rings, quasi-Frobenius rings, Artinian rings and semisimple rings.
\end{abstract}
$\vphantom{}$
\noindent \textbf{Key words and phrases:} Small injective rings (modules); soc-injective rings (modules); SS-Injective rings (modules); Perfect rings; quasi-Frobenius rings.
$\vphantom{}$
\noindent \textbf{2010 Mathematics Subject Classification:} Primary: 16D50, 16D60, 16D80; Secondary: 16P20, 16P40, 16L60.
$\vphantom{}$
\noindent \textbf{$\ast$} The results of this paper will be part of an MSc thesis of the first author, under the supervision of the second author at the University of Al-Qadisiyah.
$\vphantom{}$
\section{Introduction}
Throughout this paper, $R$ is an associative ring with identity, and all modules are unitary $R$-modules. For a right $R$-module $M$, we write soc$(M)$, $J(M)$, $Z(M)$, $Z_{2}(M)$, $E(M)$ and End$(M)$ for the socle, the Jacobson radical, the singular submodule, the second singular submodule, the injective hull and the endomorphism ring of $M$, respectively. Also, we use $S_{r}$, $S_{\ell}$, $Z_{r}$, $Z_{\ell}$, $Z_{2}^{r}$ and $J$ to indicate the right socle, the left socle, the right singular ideal, the left singular ideal, the right second singular ideal, and the Jacobson radical of $R$, respectively. For a submodule $N$ of $M$, we write $N\subseteq^{ess}M$, $N\ll M$, $N\subseteq^{\oplus}M$, and $N\subseteq^{max}M$ to indicate that $N$ is an essential submodule, a small submodule, a direct summand, and a maximal submodule of $M$, respectively. If $X$ is a subset of
a right $R$-module $M$, the right (resp. left) annihilator of $X$ in $R$ is denoted by $r_{R}(X)$ (resp. $l_{R}(X)$). If $M=R$, we write $r_{R}(X)=r(X)$ and
$l_{R}(X)=l(X)$.
Let $M$ and $N$ be right $R$-modules. Then $M$ is called soc-$N$-injective if every $R$-homomorphism from soc$(N)$ into $M$ extends to $N$.
A right $R$-module $M$ is called soc-injective if $M$ is soc-$R$-injective. A right $R$-module $M$ is called strongly soc-injective if $M$ is soc-$N$-injective for every right $R$-module $N$ \cite{2AmYoZe05}.
Recall that a right $R$-module $M$ is called mininjective \cite{14NiYo97} (resp. small injective \cite{19ThQu09}, principally small injective \cite{20Xia11}) if every $R$-homomorphism from any simple (resp. small, principally small) right ideal to $M$ extends to $R$. A ring $R$ is called right mininjective (resp. small injective, principally small injective) if it is mininjective (resp. small injective, principally small injective) as a right $R$-module. A ring $R$ is called right Kasch if every simple right $R$-module embeds in $R$ (see, for example, \cite{15NiYu03}). Recall that a ring $R$ is called semilocal if $R/J$ is semisimple \cite{11Lom99}. Also, a ring $R$ is said to be right perfect if every right $R$-module has a projective cover. Recall that a ring $R$ is said to be quasi-Frobenius (or $QF$) if it is right (or left) artinian and right (or left) self-injective; or equivalently, every injective right $R$-module is projective.
In this paper, we introduce and investigate the notions of ss-injective and strongly ss-injective modules and rings. Examples are given to show that the (strong) ss-injectivity is distinct from that of mininjectivity, principally small injectivity, small injectivity, simple J-injectivity, and (strong) soc-injectivity. Some characterizations and properties of (strongly) ss-injective modules and rings are given.
W. K. Nicholson and M. F. Yousif in \cite{14NiYo97} introduced the notion of a universally mininjective ring: a ring $R$ is called right universally mininjective if $S_{r}\cap J=0$. In Section 2, we show that $R$ is a right universally mininjective ring if and only if every simple right $R$-module is ss-injective. We also prove that if $M$ is a projective right $R$-module, then every quotient of an ss-$M$-injective right $R$-module is ss-$M$-injective
if and only if every sum of two ss-$M$-injective submodules of a right $R$-module is ss-$M$-injective if and only if Soc$(M)\cap J(M)$ is projective. Also, some results are given in terms of ss-injective modules. For example, if every simple singular right $R$-module is ss-injective, then $S_{r}$ is projective and $r(a)\subseteq^{\oplus}R_{R}$ for all $a\in S_{r}\cap J$; and if $M$ is a finitely generated right $R$-module, then Soc$(M)\cap J(M)$ is finitely generated if and only if every direct sum of ss-$M$-injective right $R$-modules is ss-$M$-injective if and only if every direct sum of $\mathbb{N}$ copies of an ss-$M$-injective right $R$-module is ss-$M$-injective.
In Section 3, we show that a right $R$-module $M$ is strongly ss-injective if and only if, for every small submodule $A$ of a right $R$-module $N$, every $R$-homomorphism $\alpha:A\longrightarrow M$ with $\alpha(A)$ semisimple extends to $N$. In particular, $R$ is semiprimitive if every simple right $R$-module is strongly ss-injective, but not conversely. We also prove that if $R$ is a right perfect ring, then a right $R$-module $M$ is strongly
soc-injective if and only if $M$ is strongly ss-injective. A results (\cite[Theorem 3.6 and Proposition 3.7]{2AmYoZe05}) are extended. We prove that a ring $R$ is right artinian if and only if every direct sum of strongly ss-injective right $R$-modules is injective, and $R$
is $QF$ ring if and only if every strongly ss-injective right $R$-module is projective.
In Section 4, we extend the results (\cite[Proposition 4.6 and Theorem 4.12]{2AmYoZe05}) from a soc-injective ring to an ss-injective
ring (see Proposition~\ref{Proposition:(4.14)} and Corollary~\ref{Corollary:(4.18)}).
In Section 5, we show that a ring $R$ is $QF$ if and only if $R$ is strongly ss-injective and right noetherian with essential right socle if and only if $R$ is strongly ss-injective, $l(J^{2})$ is countable generated left ideal, $S_{r}\mathcal{S}ubseteq^{ess}R_{R}$, and the chain $r(x_{1})\mathcal{S}ubseteq r(x_{2}x_{1})\mathcal{S}ubseteq...\mathcal{S}ubseteq r(x_{n}x_{n-1}...x_{1})\mathcal{S}ubseteq...$ terminates for every infinite sequence $x_{1},x_{2},...$ in $R$ (see Theorem~\ref{Theorem:(5.10)} and Theorem~\ref{Theorem:(5.12)}). Finally, we prove that a ring $R$ is $QF$ if and only if $R$ is strongly left and right ss-injective, left Kasch, and $J$ is left $t$-nilpotent (see Theorem~\ref{Theorem:(5.16)}), extending a result of I. Amin, M. Yousif and N. Zeyada \cite[Proposition 5.8]{2AmYoZe05} on strongly soc-injective rings.
General background materials can be found in \cite{3AnFu74}, \cite{9Kas82} and \cite{10Lam99}.
\section{SS-Injective Modules}
\begin{defn}\label{Definition:(2.1)(a)}
Let $N$ be a right $R$-module. A right $R$-module $M$ is said to be
ss-$N$-injective if, for any semisimple small submodule $K$ of $N$,
any right $R$-homomorphism $f:K\longrightarrow M$ extends to $N$.
A module $M$ is said to be ss-\textit{quasi}-injective if $M$ is
ss-$M$-injective. $M$ is said to be ss-injective if $M$ is
ss-$R$-injective. A ring $R$ is said to be right ss-injective if the
right $R$-module $R_{R}$ is ss-injective.
\end{defn}
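The extension property in Definition~\ref{Definition:(2.1)(a)} can be displayed as a commutative diagram; this is only a restatement of the definition, included for orientation:
\[
\xymatrix{
K \,\ar@{^{(}->}[r]^{i} \ar[d]_{f} & N \ar@{-->}[dl]^{\exists\,\tilde{f}} \\
M &
}
\]
\noindent where $K$ is a semisimple small submodule of $N$, $i$ is the inclusion map, and $\tilde{f}$ is the required extension, i.e. $\tilde{f}\circ i=f$.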
\begin{defn}\label{Definition:(2.1(b))}
A right $R$-module $M$ is said to be strongly ss-injective if $M$ is ss-$N$-injective for every right $R$-module $N$. A ring $R$ is said to be strongly right ss-injective if the right $R$-module $R_{R}$ is strongly ss-injective.
\end{defn}
\begin{example}\label{example:(2.2)}
\noindent \emph{(1) Every soc-injective module is ss-injective, but
not conversely (see Example~\ref{Example:(5.8)}).}
\noindent\emph{(2) Every small injective module is ss-injective, but
not conversely (see Example~\ref{Example:(5.6)}).}
\noindent\emph{(3) Every $\mathbb{Z}$-module is ss-injective. In fact, if $M$ is a $\mathbb{Z}$-module, then $M$ is small injective (by \cite[Theorem 2.8]{19ThQu09}) and hence it is ss-injective.}
\noindent\emph{(4) The two classes of principally small injective rings and ss-injective
rings are different (see \cite[Example 5.2]{15NiYu03}, Example~\ref{Example:(4.4)} and Example~\ref{Example:(5.6)}).}
\noindent\emph{(5) Every strongly soc-injective module is strongly ss-injective,
but not conversely (see Example~\ref{Example:(5.8)}).}
\noindent\emph{(6) Every strongly ss-injective module is ss-injective, but not conversely
(see Example~\ref{Example:(5.7)}).}
\end{example}
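Item (3) can also be seen directly, without invoking small injectivity; the following short argument is ours and uses only standard facts about $\mathbb{Z}$:
\[
S_{r}=\mathrm{soc}(\mathbb{Z}_{\mathbb{Z}})=0,\qquad\text{hence}\qquad S_{r}\cap J(\mathbb{Z})=0,
\]
\noindent since every nonzero ideal of $\mathbb{Z}$ has the form $n\mathbb{Z}$ with $n\geq1$ and properly contains $2n\mathbb{Z}$, so $\mathbb{Z}$ has no minimal right ideals. Thus the only semisimple small right ideal of $\mathbb{Z}$ is $0$, and the extension condition of Definition~\ref{Definition:(2.1)(a)} holds vacuously for every $\mathbb{Z}$-module.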
\begin{thm}\label{Theorem:(2.3)} The following statements hold:

\noindent (1) Let $N$ be a right $R$-module and let $\left\{ M_{i}:i\in I\right\}$ be a family of right $R$-modules. Then the direct product $\prod_{i\in I}M_{i}$ is ss-$N$-injective if and only if $M_{i}$ is ss-$N$-injective for all $i\in I$.

\noindent (2) Let $M$, $N$ and $K$ be right $R$-modules with $K\subseteq N$. If $M$ is ss-$N$-injective, then $M$ is ss-$K$-injective.

\noindent (3) Let $M$, $N$ and $K$ be right $R$-modules with $M\cong N$. If $M$ is ss-$K$-injective, then $N$ is ss-$K$-injective.

\noindent (4) Let $M$, $N$ and $K$ be right $R$-modules with $K\cong N$. If $M$ is ss-$K$-injective, then $M$ is ss-$N$-injective.

\noindent (5) Let $M$ and $K$ be right $R$-modules and let $N$ be a direct summand of $M$. If $M$ is ss-$K$-injective, then $N$ is ss-$K$-injective.
\end{thm}
\begin{proof} Clear.
\end{proof}
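The proof of (1), for instance, is the usual componentwise argument via the universal property of the direct product; a routine sketch, with $\pi_{i}$ denoting the canonical projections: if each $M_{i}$ is ss-$N$-injective and $f:K\longrightarrow\prod_{i\in I}M_{i}$ is given on a semisimple small submodule $K$ of $N$, then each component $\pi_{i}\circ f$ extends to some $g_{i}:N\longrightarrow M_{i}$, and these assemble to the required extension
\[
g:N\longrightarrow\prod_{i\in I}M_{i},\qquad g(n)=(g_{i}(n))_{i\in I},\qquad g|_{K}=f.
\]
\noindent Conversely, if the product is ss-$N$-injective, an extension of a map $f:K\longrightarrow M_{i}$ is obtained by extending $f$ followed by the canonical inclusion into the product, and then composing the resulting extension with $\pi_{i}$.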
\begin{cor}\label{Corollary:(2.4)}
\noindent(1) If $N$ is a right $R$-module, then a finite direct sum of ss-$N$-injective modules is again ss-$N$-injective. Moreover, a finite direct sum of ss-injective (resp. strongly ss-injective) modules is again ss-injective (resp. strongly ss-injective).
\noindent(2) A direct summand of an ss-quasi-injective (resp. ss-injective,
strongly ss-injective) module is again ss-quasi-injective
(resp. ss-injective, strongly ss-injective).
\end{cor}
\begin{proof}
(1) By taking the index set $I$ to be finite and applying Theorem~\ref{Theorem:(2.3)}(1).
(2) This follows from Theorem~\ref{Theorem:(2.3)}(5).
\end{proof}
\begin{lem}\label{Lemma:(2.5)}
Every ss-injective right $R$-module is right mininjective.
\end{lem}
\begin{proof}
Let $I$ be a simple right ideal of $R$. By \cite[Lemma 3.8]{16Pas04}, either $I$ is nilpotent or $I$ is a direct summand of $R$. If $I$ is nilpotent, then $I\subseteq J$ by \cite[Corollary 6.2.8]{6Bla11}, and hence $I$ is a semisimple small right ideal of $R$. Thus every ss-injective right $R$-module is right mininjective.
\end{proof}
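For orientation, the implications recorded in Example~\ref{example:(2.2)} and Lemma~\ref{Lemma:(2.5)} can be collected in a single diagram; by the examples cited in Example~\ref{example:(2.2)}, the one-way arrows are not reversible in general:
\[
\xymatrix@C=2.5em@R=1.5em{
\text{strongly soc-injective} \ar@{=>}[r] & \text{strongly ss-injective} \ar@{=>}[d] & \\
\text{soc-injective} \ar@{=>}[r] & \text{ss-injective} \ar@{=>}[d] & \text{small injective} \ar@{=>}[l] \\
& \text{mininjective} &
}
\]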
It is easy to prove the following proposition.
\begin{prop}\label{Proposition:(2.6)} Let $N$ be a right $R$-module. If $J(N)$ is a small submodule of $N$, then a right $R$-module $M$ is ss-$N$-injective if and only if any $R$-homomorphism $f:\mathrm{soc}(N)\cap J(N)\longrightarrow M$ extends to $N$.
\end{prop}
\begin{prop}\label{Proposition:(2.8)}
Let $N$ be a right $R$-module and $\left\{ A_{i}:i=1,2,\ldots,n\right\}$ be a family of finitely generated right $R$-modules. Then $N$ is ss-$\bigoplus_{i=1}^{n}A_{i}$-injective if and only if $N$ is ss-$A_{i}$-injective for all $i=1,2,\ldots,n$.
\end{prop}
\begin{proof}
($\Rightarrow$) This follows from Theorem~\ref{Theorem:(2.3)}((2),(4)).

($\Leftarrow$) By \cite[Proposition (I.4.1) and Proposition (I.1.2)]{5BiKeNe82} we have $\mathrm{soc}(\bigoplus_{i=1}^{n}A_{i})\cap J(\bigoplus_{i=1}^{n}A_{i})=(\mathrm{soc}\cap J)(\bigoplus_{i=1}^{n}A_{i})=\bigoplus_{i=1}^{n}(\mathrm{soc}\cap J)(A_{i})=\bigoplus_{i=1}^{n}(\mathrm{soc}(A_{i})\cap J(A_{i}))$.

For $j=1,2,\ldots,n$, consider the following diagram:
\[
\xymatrix{
K_{j}=\mathrm{soc}(A_{j})\cap J(A_{j}) \,\ar[d]_{i_{K_{j}}}\ar@{^{(}->}[r]^{\qquad\qquad i_{2}} & A_{j} \ar[d]^{i_{A_{j}}} \\
\bigoplus_{i=1}^{n}(\mathrm{soc}(A_{i})\cap J(A_{i})) \,\ar[d]_{f}\ar@{^{(}->}[r]^{\qquad\qquad i_{1}} & \bigoplus_{i=1}^{n}A_{i}\\
N
}
\]
\noindent where $i_{1}$, $i_{2}$ are inclusion maps and $i_{K_{j}}$, $i_{A_{j}}$ are injection maps. By hypothesis, there exists an $R$-homomorphism $h_{j}:A_{j}\longrightarrow N$ such that $h_{j}\circ i_{2}=f\circ i_{K_{j}}$;
also there exists exactly one homomorphism $h:\bigoplus_{i=1}^{n}A_{i}\longrightarrow N$
satisfying $h_{j}=h\circ i_{A_{j}}$ by \cite[Theorem 4.1.6(2)]{9Kas82}. Thus $f\circ i_{K_{j}}=h_{j}\circ i_{2}=h\circ i_{A_{j}}\circ i_{2}=h\circ i_{1}\circ i_{K_{j}}$ for all $j=1,2,\ldots,n$. Let $(a_{1},a_{2},\ldots,a_{n})\in\bigoplus_{i=1}^{n}(\mathrm{soc}(A_{i})\cap J(A_{i}))$;
then $a_{j}\in\mathrm{soc}(A_{j})\cap J(A_{j})$ for all $j=1,2,\ldots,n$,
and $f(a_{1},a_{2},\ldots,a_{n})=f(i_{K_{1}}(a_{1}))+f(i_{K_{2}}(a_{2}))+\ldots+f(i_{K_{n}}(a_{n}))=(h\circ i_{1})(a_{1},a_{2},\ldots,a_{n})$. Thus $f=h\circ i_{1}$ and the proof is complete.
\begin{cor}\label{Corollary:(2.10)}
\noindent (1) Let $M$ be a right $R$-module and let $1=e_{1}+e_{2}+\ldots+e_{n}$ in $R$, where the $e_{i}$ are orthogonal idempotents. Then $M$ is ss-injective if and only if $M$ is ss-$e_{i}R$-injective for every $i=1,2,\ldots,n$.

\noindent (2) Let $e$ and $f$ be idempotents of $R$. If $eR\cong fR$ and $M$ is ss-$eR$-injective, then $M$ is ss-$fR$-injective.
\end{cor}
\begin{proof}
\noindent (1) From \cite[Corollary 7.3]{3AnFu74}, we have $R=\bigoplus_{i=1}^{n}e_{i}R$; thus it follows from Proposition~\ref{Proposition:(2.8)} that $M$ is ss-injective if and only if $M$ is ss-$e_{i}R$-injective for all $1\leq i\leq n$.
(2) This follows from Theorem~\ref{Theorem:(2.3)}(4).
\end{proof}
\begin{prop}\label{Proposition:(2.9)} A right $R$-module $M$ is ss-injective if and only if $M$ is ss-$P$-injective, for every finitely generated projective right $R$-module $P$.
\end{prop}
\begin{proof} ($\Rightarrow$) Let $M$ be an ss-injective $R$-module. It follows from Proposition~\ref{Proposition:(2.8)} that $M$ is ss-$R^{n}$-injective for any $n\in\mathbb{Z}^{+}$. Let $P$ be a finitely generated projective $R$-module; by \cite[Corollary 5.5]{1AdWe92}, $P$ is a direct summand of a module isomorphic to $R^{m}$ for some $m\in\mathbb{Z}^{+}$. Since $M$ is ss-$R^{m}$-injective, $M$ is ss-$P$-injective by Theorem~\ref{Theorem:(2.3)}((2),(4)).

($\Leftarrow$) By the fact that $R$ is projective.
\end{proof}
\begin{prop}\label{Proposition:(2.11)} The following statements are equivalent for a right $R$-module $M$.
\noindent(1) Every right $R$-module is ss-$M$-injective.
\noindent(2) Every simple submodule of $M$ is ss-$M$-injective.
\noindent(3) \emph{soc}$(M)\cap J(M)=0$.
\end{prop}
\begin{proof}
(1) $\Rightarrow$ (2) and (3) $\Rightarrow$ (1) are obvious.
(2) $\Rightarrow$ (3) Assume that soc$(M)\cap J(M)\neq 0$; then soc$(M)\cap J(M)=\underset{i\in I}{\bigoplus}x_{i}R$, where $x_{i}R$ is a simple small submodule of $M$ for each $i\in I$. Therefore, $x_{i}R$ is ss-$M$-injective for each $i\in I$ by hypothesis. For any $i\in I$, the inclusion map from $x_{i}R$ to $M$ splits, so $x_{i}R$ is a direct summand of $M$. Since $x_{i}R$ is a small submodule of $M$, we get $x_{i}R=0$ and hence $x_{i}=0$ for all $i\in I$, a contradiction.
\end{proof}
\begin{lem}\label{Lemma:(2.13)} Let $M$ be an ss-quasi-injective right $R$-module and $S=\emph{End}(M_{R})$. Then the following statements hold:

\noindent(1) $l_{M}r_{R}(m)=Sm$ for all $m\in$ \emph{soc}$(M)\cap J(M)$.

\noindent(2) If $r_{R}(m)\subseteq r_{R}(n)$, where $m\in$ \emph{soc}$(M)\cap J(M)$ and $n\in M$, then $Sn\subseteq Sm$.

\noindent(3) $l_{S}(mR\cap r_{M}(\alpha))=l_{S}(m)+S\alpha$, where $m\in$ \emph{soc}$(M)\cap J(M)$, $\alpha\in S$.

\noindent(4) If $kR$ is a simple submodule of $M$ with $k\in J(M)$, then $Sk$ is a simple left $S$-module. Moreover, \emph{soc}$(M)\cap J(M)\subseteq$ \emph{soc}$(_{S}M)$.

\noindent(5) \emph{soc}$(M)\cap J(M)\subseteq r_{M}(J($$_{S}S))$.

\noindent(6) $l_{S}(A\cap B)=l_{S}(A)+l_{S}(B)$ for all semisimple small submodules $A$ and $B$ of $M$.
\end{lem}
\begin{proof}
(1) Let $n\in l_{M}r_{R}(m)$; then $r_{R}(m)\subseteq r_{R}(n)$.
Now let $\gamma:mR\longrightarrow M$ be given by $\gamma(mr)=nr$;
then $\gamma$ is a well-defined $R$-homomorphism. By hypothesis, there exists an endomorphism $\beta$ of $M$
such that $\beta_{|mR}=\gamma$. Therefore, $n=\gamma(m)=\beta(m)\in Sm$,
that is, $l_{M}r_{R}(m)\subseteq Sm$. The reverse inclusion is clear.
(2) Let $n\in M$ and $m\in$ soc$(M)\cap J(M)$ with $r_{R}(m)\subseteq r_{R}(n)$. Then $n\in l_{M}r_{R}(m)$, and by (1) we have $n\in Sm$, as desired.
(3) If $f\in l_{S}(m)+S\alpha$, then $f=f_{1}+f_{2}$
with $f_{1}(m)=0$ and $f_{2}=g\alpha$ for some $g\in S$.
For all $n\in mR\cap r_{M}(\alpha)$, we have $n=mr$ for some $r\in R$ and $\alpha(n)=0$.
Since $f_{1}(n)=f_{1}(mr)=f_{1}(m)r=0$ and $f_{2}(n)=g(\alpha(n))=g(0)=0$,
we get $f\in l_{S}(mR\cap r_{M}(\alpha))$, and this implies that $l_{S}(m)+S\alpha\subseteq l_{S}(mR\cap r_{M}(\alpha))$.
Now we prove the other inclusion. Let $g\in l_{S}(mR\cap r_{M}(\alpha))$. If
$r\in r_{R}(\alpha(m))$, then $\alpha(mr)=0$, so $mr\in mR\cap r_{M}(\alpha)$,
which yields $r_{R}(\alpha(m))\subseteq r_{R}(g(m))$. Since $m\in$ soc$(M)\cap J(M)$,
we have $\alpha(m)\in$ soc$(M)\cap J(M)$. By (2), $g(m)=\gamma\alpha(m)$
for some $\gamma\in S$. Therefore, $g-\gamma\alpha\in l_{S}(m)$,
which gives $g\in l_{S}(m)+S\alpha$. Thus $l_{S}(mR\cap r_{M}(\alpha))=l_{S}(m)+S\alpha$.
(4) To prove that $Sk$ is a simple left $S$-module,
we need only show that $Sk$ is generated by each of its nonzero elements.
If $0\neq\alpha(k)\in Sk$, then $\alpha:kR\longrightarrow\alpha(kR)$
is an $R$-isomorphism.
Since $\alpha\in S$, we have $\alpha(kR)\ll M$. Since $M$
is ss-quasi-injective, $\alpha^{-1}:\alpha(kR)\longrightarrow kR$ has an extension
$\beta\in S$, and hence $\beta(\alpha(k))=\alpha^{-1}(\alpha(k))=k$,
so $k\in S\alpha k$, which leads to $Sk=S\alpha k$. Therefore $Sk$
is a simple left $S$-module, and this leads to soc$(M)\cap J(M)\subseteq$ soc$(_{S}M)$.
(5) If $mR$ is a simple small submodule of $M$,
then $m\neq0$. We claim that $\alpha(m)=0$ for all $\alpha\in J(S)$,
so that $mR\subseteq r_{M}(J(S))$. Otherwise, $\alpha(m)\neq0$ for
some $\alpha\in J(S)$, and then $\alpha:mR\longrightarrow\alpha(mR)$
is an $R$-isomorphism.
We now show that $r_{R}(\alpha(m))=r_{R}(m)$. If $r\in r_{R}(m)$,
then $\alpha(m)r=\alpha(mr)=\alpha(0)=0$, which gives $r_{R}(m)\subseteq r_{R}(\alpha(m))$.
For the other inclusion, if $r\in r_{R}(\alpha(m))$, then $\alpha(mr)=0$,
that is, $mr\in$ ker$(\alpha)=0$, so $r\in r_{R}(m)$. Hence $r_{R}(\alpha(m))=r_{R}(m)$.
Since $m,\alpha(m)\in$ soc$(M)\cap J(M)$, we get $S\alpha m=Sm$ by (2),
and this implies that $m=\beta\alpha(m)$ for some $\beta\in S$,
so $(1-\beta\alpha)(m)=0$. Since $\alpha\in J(S)$, the element
$\beta\alpha$ is quasi-regular by \cite[Theorem 15.3]{3AnFu74}. Thus $1-\beta\alpha$
is invertible and hence $m=0$, a contradiction. This shows
that soc$(M)\cap J(M)\subseteq r_{M}(J(S))$.
(6) Let $\alpha\in l_{S}(A\cap B)$ and define
$f:A+B\longrightarrow M$ by $f(a+b)=\alpha(a)$ for all
$a\in A$ and $b\in B$ ($f$ is well defined because $\alpha(A\cap B)=0$). Since $A+B$ is a semisimple small submodule of $M$ and $M$ is ss-quasi-injective,
there exists $\beta\in S$ such that $f(a+b)=\beta(a+b)$. Thus
$\beta(a+b)=\alpha(a)$; taking $b=0$ gives $(\alpha-\beta)(a)=0$ for all $a\in A$, which yields
$\alpha-\beta\in l_{S}(A)$, and taking $a=0$ gives $\beta\in l_{S}(B)$. Therefore, $\alpha=(\alpha-\beta)+\beta\in l_{S}(A)+l_{S}(B)$,
and this implies that $l_{S}(A\cap B)\subseteq l_{S}(A)+l_{S}(B)$.
The other inclusion is trivial and the proof is complete.
\end{proof}
\begin{rem}\label{Remark:(2.14)} \emph{Let $M$ be a right $R$-module, then $D(S)=\{
\alpha\in S=$ End$(M)\mid r_{M}(\alpha)\cap mR\neq 0$ for each $
0\neq m\in $ soc$(M)\cap J(M)\}$ is a left ideal in $S$.}
\end{rem}
\begin{proof}
This is obvious.
\end{proof}
\begin{prop}\label{Proposition:(2.15)} Let $M$ be an ss-quasi-injective right $R$-module. Then for every $\alpha\notin D(S)$ there exists $\gamma\in S$ such that $r_{M}(\alpha)\varsubsetneqq r_{M}(\alpha-\alpha\gamma\alpha)$.
\end{prop}
\begin{proof}
Let $\alpha\notin D(S)$. Then we can find $0\neq m\in$ soc$(M)\cap J(M)$ such that $r_{M}(\alpha)\cap mR=0$.
Clearly, $r_{R}(\alpha(m))=r_{R}(m)$, so $Sm=S\alpha m$ by Lemma~\ref{Lemma:(2.13)}(2). Thus $m=\gamma\alpha m$ for some $\gamma\in S$, and this
implies that $(\alpha-\alpha\gamma\alpha)m=0$. Therefore, $m\in r_{M}(\alpha-\alpha\gamma\alpha)$ but $m\notin r_{M}(\alpha)$, and hence the inclusion is strict.
\end{proof}
\begin{prop}\label{Proposition:(2.16)}
Let $M$ be an ss-quasi-injective right $R$-module. Then the set $\{\alpha\in S=\emph{End}(M)\mid 1-\beta\alpha$ is a monomorphism for all
$\beta\in S\}$ is contained in $D(S)$. Moreover, $J($$_{S}S)\subseteq D(S)$.
\end{prop}
\begin{proof}
Let $\alpha\notin D(S)$; then there exists $0\neq m\in$ soc$(M)\cap J(M)$
such that $r_{M}(\alpha)\cap mR=0$. If $r\in r_{R}(\alpha(m))$,
then $\alpha(mr)=0$ and so $mr\in r_{M}(\alpha)$. Since $r_{M}(\alpha)\cap mR=0$,
we get $r\in r_{R}(m)$ and hence $r_{R}(\alpha(m))\subseteq r_{R}(m)$,
so $Sm\subseteq S\alpha m$ by Lemma~\ref{Lemma:(2.13)}(2). Therefore, $m\in$ ker$(1-\gamma\alpha)$
for some $\gamma\in S$. Since $m\neq0$, $1-\gamma\alpha$ is
not a monomorphism, and hence the inclusion holds. Now, if $\alpha\in J($$_{S}S)$,
then $\beta\alpha$ is a quasi-regular element by \cite[Theorem 15.3]{3AnFu74},
and hence $1-\beta\alpha$ is an isomorphism for all $\beta\in S$, which
completes the proof.
\end{proof}
\begin{thm}\label{Theorem:(2.17)}\textup{(ss-Baer's condition)} The following statements are equivalent for a right $R$-module $M$.

\noindent (1) $M$ is an ss-injective right $R$-module.

\noindent (2) If $S_{r}\cap J=A\oplus B$ and $\alpha:A\longrightarrow M$
is an $R$-homomorphism, then there exists $m\in M$ such that $\alpha(a)=ma$ for all $a\in A$
and $mB=0$.
\end{thm}
\begin{proof}
(1)$\Rightarrow$(2) Define $\gamma:S_{r}\cap J\longrightarrow M$ by $\gamma(a+b)=\alpha(a)$ for all $a\in A$, $b\in B$. By hypothesis,
there is a right $R$-homomorphism $\beta:R\longrightarrow M$ extending $\gamma$, so if $m=\beta(1)$, then $\alpha(a)=\gamma(a)=\beta(a)=\beta(1)a=ma$ for all $a\in A$. Moreover, $mb=\beta(b)=\gamma(b)=\alpha(0)=0$
for all $b\in B$, so $mB=0$.

(2)$\Rightarrow$(1) Let $\alpha:I\longrightarrow M$
be any right $R$-homomorphism, where $I$ is any semisimple small right ideal of $R$. Since $I$ is a direct summand of the semisimple module $S_{r}\cap J$, by (2) there exists $m\in M$ such that $\alpha(a)=ma$
for all $a\in I$. Define $\beta:R_{R}\longrightarrow M$ by $\beta(r)=mr$
for all $r\in R$; then $\beta$ extends $\alpha$.
\end{proof}
A ring $R$ is called right universally mininjective if it satisfies the condition $S_{r}\cap J=0$ (see for example \cite{14NiYo97}). In the next results, we give new characterizations of universally mininjective rings in terms of ss-injectivity and soc-injectivity.
\begin{cor}\label{Corollary:(2.18)} The following are equivalent for a ring $R$.
\noindent (1) $R$ is right universally mininjective.
\noindent (2) $R$ is right mininjective and every quotient of a soc-injective right
$R$-module is soc-injective.
\noindent (3) $R$ is right mininjective and every quotient of an injective right $R$-module is soc-injective.
\noindent (4) $R$ is right mininjective and every semisimple submodule of a projective
right $R$-module is projective.
\noindent (5) Every right $R$-module is ss-injective.
\noindent (6) Every simple right ideal is ss-injective.
\end{cor}
\begin{proof}
(1)$\Leftrightarrow$(2)$\Leftrightarrow$(3)$\Leftrightarrow$(4) By \cite[Lemma 5.1]{14NiYo97} and \cite[Corollary 2.9]{2AmYoZe05}.
(1)$\Leftrightarrow$(5)$\Leftrightarrow$(6) By Proposition~\ref{Proposition:(2.11)}.
\end{proof}
\begin{thm}\label{Theorem:(2.20)} Let $M$ be a projective right $R$-module. Then the following statements are equivalent.
\noindent (1) Every quotient of an ss-$M$-injective right $R$-module is ss-$M$-injective.
\noindent (2) Every quotient of a \emph{soc}-$M$-injective right $R$-module is ss-$M$-injective.
\noindent (3) Every quotient of an injective right $R$-module is ss-$M$-injective.
\noindent (4) Every sum of two ss-$M$-injective submodules of a right $R$-module is ss-$M$-injective.
\noindent (5) Every sum of two \emph{soc}-$M$-injective submodules of
a right $R$-module is ss-$M$-injective.
\noindent (6) Every sum of two injective submodules of a right $R$-module
is ss-$M$-injective.
\noindent (7) Every semisimple small submodule of $M$ is projective.
\noindent (8) Every simple small submodule of $M$ is projective.
\noindent (9) $\emph{soc}(M)\cap J(M)$ is projective.
\end{thm}
\begin{proof} (1)$\Rightarrow$(2)$\Rightarrow$(3), (4)$\Rightarrow$(5)$\Rightarrow$(6)
and (9)$\Rightarrow$(7)$\Rightarrow$(8) are obvious.
(8)$\Rightarrow$(9) soc$(M)\cap J(M)$ is a direct sum of simple submodules of $M$, and every simple submodule contained in $J(M)$ is small in $M$; hence each of these simple submodules is projective by (8), and therefore their direct sum soc$(M)\cap J(M)$ is projective.
(3)$\Rightarrow$(7) Consider the following diagram:
\[
\xymatrix{
0\,\ar[r] &K \,\ar[d]_{f}\ar@{^{(}->}[r]^{i} \ar[r] & M \\
E \,\ar[r]^{h}&N \, \ar[r] &0
}
\]
\noindent where $E$ and $N$ are right $R$-modules, $K$ is a semisimple small submodule of $M$, $h$ is a right $R$-epimorphism
and $f$ is a right $R$-homomorphism. We may assume that $E$ is injective (see, e.g., \cite[Proposition 5.2.10]{6Bla11}). Since $N$
is ss-$M$-injective by (3), $f$ can be extended to an $R$-homomorphism $g:M\longrightarrow N$. By the projectivity of $M$,
$g$ can be lifted to an $R$-homomorphism $\tilde{g}:M\longrightarrow E$ such that $h\circ\tilde{g}=g$.
Let $\tilde{f}:K\longrightarrow E$ be the restriction of $\tilde{g}$
to $K$. Clearly, $h\circ\tilde{f}=f$, and this implies that $K$ is projective.
(7)$\Rightarrow$(1) Let $N$ and $L$ be right $R$-modules, where $h:N\longrightarrow L$ is an $R$-epimorphism and $N$ is ss-$M$-injective. Let $K$ be any semisimple small submodule of $M$, with inclusion map $i:K\longrightarrow M$, and let $f:K\longrightarrow L$ be any $R$-homomorphism. By hypothesis $K$ is projective, so $f$ can be lifted to an $R$-homomorphism $g:K\longrightarrow N$ such that $h\circ g=f$. Since $N$ is ss-$M$-injective, there exists an $R$-homomorphism
$\tilde{g}:M\longrightarrow N$ such that $\tilde{g}\circ i=g$. Put $\beta=h\circ\tilde{g}:M\longrightarrow L$. Then $\beta\circ i=h\circ\tilde{g}\circ i=h\circ g=f$. Hence $L$ is an ss-$M$-injective right $R$-module.
(1)$\Rightarrow$(4) Let $N_{1}$ and $N_{2}$ be two ss-$M$-injective submodules of a right $R$-module
$N$. Then $N_{1}+N_{2}$ is a homomorphic image of the direct sum $N_{1}\oplus N_{2}$. Since $N_{1}\oplus N_{2}$ is ss-$M$-injective, $N_{1}+N_{2}$ is ss-$M$-injective by hypothesis.
(6)$\Rightarrow$(3) Let $E$ be an injective right $R$-module with submodule $N$. Let $Q=E\oplus E$, $K=\{(n,n)\mid n\in N\}$, $\bar{Q}=Q/K$, $H_{1}=\{y+K\in\bar{Q}\mid y\in E\oplus 0\}$ and $H_{2}=\{y+K\in\bar{Q}\mid y\in 0\oplus E\}$. Then $\bar{Q}=H_{1}+H_{2}$. Since $(E\oplus 0)\cap K=0$ and $(0\oplus E)\cap K=0$, we have $E\cong H_{i}$ for $i=1,2$. Since $H_{1}\cap H_{2}=\{y+K\in\bar{Q}\mid y\in N\oplus0\}=\{y+K\in\bar{Q}\mid y\in 0\oplus N\}$, we have $H_{1}\cap H_{2}\cong N$ via $y\mapsto y+K$ for $y\in N\oplus0$.
By hypothesis, $\bar{Q}$ is ss-$M$-injective. Since $H_{1}$ is injective, $\bar{Q}=H_{1}\oplus A$ for some submodule $A$ of $\bar{Q}$, so $A\cong(H_{1}+H_{2})/H_{1}\cong H_{2}/(H_{1}\cap H_{2})\cong E/N$. By Theorem~\ref{Theorem:(2.3)}(5), $E/N$ is ss-$M$-injective.
\end{proof}
\begin{cor}\label{Corollary:(2.21)} The following statements are equivalent.
\noindent (1) Every quotient of an ss-injective right $R$-module is ss-injective.
\noindent (2) Every quotient of a soc-injective right $R$-module is ss-injective.
\noindent (3) Every quotient of a small injective right $R$-module is ss-injective.
\noindent (4) Every quotient of an injective right $R$-module is ss-injective.
\noindent (5) Every sum of two ss-injective submodules of any right $R$-module is ss-injective.
\noindent (6) Every sum of two soc-injective submodules of any right $R$-module is ss-injective.
\noindent (7) Every sum of two small injective submodules of any right $R$-module is ss-injective.
\noindent (8) Every sum of two injective submodules of any right $R$-module is ss-injective.
\noindent (9) Every semisimple small submodule of any projective right $R$-module is projective.
\noindent (10) Every semisimple small submodule of any finitely generated projective right $R$-module is projective.
\noindent (11) Every semisimple small submodule of $\mathit{R}_{R}$ is projective.
\noindent (12) Every simple small submodule of $\mathit{R}_{R}$ is projective.
\noindent (13) $\mathit{S}_{r}\cap J$ is
projective.
\noindent (14) $S_{r}$ is projective.
\end{cor}
\begin{proof}
The equivalence of (1), (2), (4), (5), (6), (8), (11), (12) and (13) is from Theorem~\ref{Theorem:(2.20)}.
(1)$\Rightarrow$(3)$\Rightarrow$(4), (5)$\Rightarrow$(7)$\Rightarrow$(8) and (9)$\Rightarrow$(10)$\Rightarrow$(13) are clear.
(14)$\Rightarrow$(9) By \cite[Corollary 2.9]{2AmYoZe05}.

(13)$\Rightarrow$(14) Write $S_{r}=(S_{r}\cap J)\oplus A$, where $A=\bigoplus_{i\in I}S_{i}$ and each $S_{i}$ is a simple right ideal that is a direct summand of $R_{R}$. Thus $A$ is projective; since $S_{r}\cap J$ is projective by (13), it follows
that $S_{r}$ is projective.
\end{proof}
\begin{thm}\label{Theorem:(2.22)}
If every simple singular right $R$-module is ss-injective, then $r(a)\subseteq^{\oplus}R_{R}$ for every $a\in S_{r}\cap J$ and $S_{r}$ is projective.
\end{thm}
\begin{proof} Let $a\in S_{r}\cap J$ and let $A=RaR+r(a)$. Then there exists a right ideal $B$ of $R$ such that $A\oplus B\subseteq^{ess}R_{R}$. Suppose that $A\oplus B\neq R_{R}$; then we can choose $I\subseteq^{max}R_{R}$ such that $A\oplus B\subseteq I$, and so $I\subseteq^{ess}R_{R}$. By hypothesis, $R/I$ is ss-injective. Consider the map $\alpha:aR\longrightarrow R/I$ given by $\alpha(ar)=r+I$, which is a well-defined $R$-homomorphism. Then there exists $c\in R$ such that $1+I=ca+I$, and hence $1-ca\in I$. But $ca\in RaR\subseteq I$, which leads to $1\in I$, a contradiction.

Thus $A\oplus B=R$ and hence $RaR+(r(a)\oplus B)=R$. Since $RaR\ll R_{R}$, we get $r(a)\oplus B=R$, so $r(a)\subseteq^{\oplus}R_{R}$. Put $r(a)=(1-e)R$ for some
$e^{2}=e\in R$; it follows that $ax=aex$ for all $x\in R$, and hence $aR=aeR$. Let $\gamma:eR\longrightarrow aeR$ be defined by $\gamma(er)=aer$ for all $r\in R$. Then $\gamma$ is a well-defined $R$-epimorphism. Clearly, ker$(\gamma)=eR\cap r(a)=0$, hence $\gamma$ is an isomorphism and so $aR$ is projective. Since $S_{r}\cap J$ is a direct sum of simple small right ideals, $S_{r}\cap J$ is projective, and it follows from Corollary~\ref{Corollary:(2.21)} that $S_{r}$ is projective.
\end{proof}
\begin{cor}\label{Corollary:(2.23)} The following statements are equivalent for a ring $R$.
\noindent (1) $R$ is right mininjective and every simple singular right $R$-module is ss-injective.
\noindent (2) $R$ is right universally mininjective.
\end{cor}
\begin{proof} By Theorem~\ref{Theorem:(2.22)} and \cite[Lemma 5.1]{14NiYo97}.
\end{proof}
Recall that a ring $R$ is called zero insertive if $aRb=0$ for all $a,b\in R$ with $ab=0$ (see \cite{19ThQu09}). Note that if $R$
is a zero insertive ring, then $RaR+r(a)\subseteq^{ess}R_{R}$ for every $a\in R$ (see \cite[Lemma 2.11]{19ThQu09}).
\begin{prop}\label{Proposition:(2.24)} Let $R$ be a zero insertive ring. If every simple singular right $R$-module is ss-injective, then $R$ is right universally mininjective.
\end{prop}
\begin{proof} Let $a\in S_{r}\cap J$. We claim that $RaR+r(a)=R$; granting the claim, since $RaR\ll R_{R}$ we get $r(a)=R$, so $a=0$, and this means that
$S_{r}\cap J=0$. To prove the claim, suppose $RaR+r(a)\subsetneqq R$; then there exists a maximal right ideal $I$ of $R$ such that $RaR+r(a)\subseteq I$. Since $I\subseteq^{ess}R_{R}$, $R/I$ is ss-injective by hypothesis. Consider $\alpha:aR\longrightarrow R/I$ given by $\alpha(ar)=r+I$ for all $r\in R$, which is a well-defined $R$-homomorphism. Then $1+I=ca+I$ for some $c\in R$. Since $ca\in RaR\subseteq I$, we get $1\in I$, which contradicts the maximality of $I$. Hence $RaR+r(a)=R$, and this completes the proof.
\end{proof}
\begin{thm}\label{Theorem:(2.25)}
If $M$ is a finitely generated right $R$-module, then the following statements are equivalent.
\noindent (1) \emph{soc}$(M)\cap J(M)$ is a Noetherian $R$-module.
\noindent(2) \emph{soc}$(M)\cap J(M)$ is finitely generated.
\noindent (3) Any direct sum of ss-$M$-injective right $R$-modules is ss-$M$-injective.
\noindent (4) Any direct sum of \emph{soc}-$M$-injective right $R$-modules is ss-$M$-injective.
\noindent(5) Any direct sum of injective right $R$-modules is ss-$M$-injective.
\noindent (6) $\mathit{K}^{(S)}$ is ss-$M$-injective for every injective right $R$-module
$K$ and for any index set $S$.
\noindent (7) $\mathit{K}^{(\mathbb{N})}$ is ss-$M$-injective for every injective right $R$-module $K$.
\end{thm}
\begin{proof}
\textcolor{black}{(1)$\Rightarrow$(2) and (3)$\Rightarrow$(4)$\Rightarrow$(5)$\Rightarrow$(6)$\Rightarrow$(7)
Clear.}
\textcolor{black}{(2)$\Rightarrow$(3) Let $\mathit{E}=\underset{{\mathcal{S}criptscriptstyle i\in I}}{\bigoplus}M_{i}$
be a direct sum of ss-}\textit{\textcolor{black}{M}}\textcolor{black}{-injective
right }\textit{\textcolor{black}{R}}\textcolor{black}{-modules and
$\mathit{f}:N\longrightarrow E$ be a right }\textit{\textcolor{black}{R}}\textcolor{black}{-homomorphism,
where }\textit{\textcolor{black}{N}}\textcolor{black}{{} is a semisimple
small submodule of }\textit{\textcolor{black}{M}}\textcolor{black}{.
Since soc$(M)\cap J(M)$ is finitely generated, thus}\textit{\textcolor{black}{{}
N}}\textcolor{black}{{} is finitely generated and hence $\mathit{f}(N)\mathcal{S}ubseteq\underset{{\mathcal{S}criptscriptstyle j\in I_{1}}}{\bigoplus}M_{j}$,
for some finite subset }\textit{\textcolor{black}{$I_{1}$}}\textcolor{black}{{}
of }\textit{\textcolor{black}{I}}\textcolor{black}{. Since a finite
direct sums of ss-}\textit{\textcolor{black}{M}}\textcolor{black}{-injective
right }\textit{\textcolor{black}{R}}\textcolor{black}{-modules is
ss-}\textit{\textcolor{black}{M}}\textcolor{black}{-injective, thus
$\underset{{\mathcal{S}criptscriptstyle j\in I_{1}}}{\bigoplus}M_{j}$ is ss-}\textit{\textcolor{black}{M}}\textcolor{black}{-injective
and hence }\textit{\textcolor{black}{f}}\textcolor{black}{{} can be
extended to an }\textit{\textcolor{black}{R}}\textcolor{black}{-homomorphism
$\mathit{g}:M\longrightarrow E$. Thus }\textit{\textcolor{black}{E}}\textcolor{black}{{}
is ss-}\textit{\textcolor{black}{M}}\textcolor{black}{-injective. }
(7)$\Rightarrow$(1) Let $N_{1}\subseteq N_{2}\subseteq\cdots$ be a chain of submodules of soc$(M)\cap J(M)$. For each $i\geq1$, let $E_{i}=E(M/N_{i})$, $E=\bigoplus_{i=1}^{\infty}E_{i}$ and $M_{i}=\prod_{j=1}^{\infty}E_{j}=E_{i}\oplus(\prod_{j=1,\,j\neq i}^{\infty}E_{j})$; then $M_{i}$ is injective. By hypothesis, $\bigoplus_{i=1}^{\infty}M_{i}=(\bigoplus_{i=1}^{\infty}E_{i})\oplus(\bigoplus_{i=1}^{\infty}\prod_{j=1,\,j\neq i}^{\infty}E_{j})$ is ss-$M$-injective, so it follows from Theorem~\ref{Theorem:(2.3)}(5) that $E$ itself is ss-$M$-injective. Define $f:U=\bigcup_{i=1}^{\infty}N_{i}\longrightarrow E$ by $f(m)=(m+N_{i})_{i}$. It is clear that $f$ is a well-defined $R$-homomorphism. Since $M$ is finitely generated, soc$(M)\cap J(M)$ is a semisimple small submodule of $M$ and hence $\bigcup_{i=1}^{\infty}N_{i}$ is a semisimple small submodule of $M$, so $f$ can be extended to a right $R$-homomorphism $g:M\longrightarrow E$. Since $M$ is finitely generated, we have $g(M)\subseteq\bigoplus_{i=1}^{n}E(M/N_{i})$ for some $n$ and hence $f(\bigcup_{i=1}^{\infty}N_{i})\subseteq\bigoplus_{i=1}^{n}E(M/N_{i})$. Since $\pi_{i}f(x)=\pi_{i}(x+N_{j})_{j\geq1}=x+N_{i}$ for all $x\in U$ and $i\geq1$, where $\pi_{i}:\bigoplus_{j\geq1}E(M/N_{j})\longrightarrow E(M/N_{i})$ is the projection map, we get $\pi_{i}f(U)=U/N_{i}$ for all $i\geq1$. Since $f(U)\subseteq\bigoplus_{i=1}^{n}E(M/N_{i})$, we have $U/N_{i}=\pi_{i}f(U)=0$ for all $i\geq n+1$, so $U=N_{i}$ for all $i\geq n+1$ and hence the chain $N_{1}\subseteq N_{2}\subseteq\cdots$ terminates at $N_{n+1}$. Thus soc$(M)\cap J(M)$ is a Noetherian $R$-module.
\end{proof}
\begin{cor}\label{Corollary:(2.26)} If $N$ is a finitely generated right $R$-module, then the following statements are equivalent.
\noindent (1) \emph{soc}$(N)\cap J(N)$ is finitely generated.
\noindent (2) $\mathit{M}^{(S)}$ is ss-$N$-injective for every \emph{soc}-$N$-injective right $R$-module $M$
and for any index set $S$.
\noindent (3) $\mathit{M}^{(S)}$ is ss-$N$-injective for every ss-$N$-injective right $R$-module $M$
and for any index set $S$.
\noindent (4) $\mathit{M}^{(\mathbb{N})}$
is ss-$N$-injective for every \emph{soc}-$N$-injective right $R$-module $M$.
\noindent (5) $\mathit{M}^{(\mathbb{N})}$ is ss-$N$-injective for
every ss-$N$-injective
right $R$-module $M$.
\end{cor}
\begin{proof}
By Theorem~\ref{Theorem:(2.25)}.
\end{proof}
\begin{cor}\label{Corollary:(2.27)} The following statements are equivalent.
\noindent (1) $\mathit{S}_{r}\cap J$ is finitely generated.
\noindent (2) Any direct sum of ss-injective right $R$-modules is ss-injective.
\noindent (3) Any direct sum of soc-injective right $R$-modules is ss-injective.
\noindent (4) Any direct sum of small injective right $R$-modules is ss-injective.
\noindent (5) Any direct sum of injective right $R$-modules is ss-injective.
\noindent (6) $\mathit{M}^{(S)}$ is ss-injective for every injective right $R$-module $M$ and for any index set $S$.
\noindent (7) $\mathit{M}^{(S)}$ is ss-injective for every soc-injective right $R$-module $M$ and for any index set $S$.
\noindent (8) $\mathit{M}^{(S)}$ is ss-injective for every small injective right $R$-module $M$ and for any index set $S$.
\noindent (9) $\mathit{M}^{(S)}$ is ss-injective for every ss-injective right $R$-module $M$ and for any index set $S$.
\noindent (10) $\mathit{M}^{(\mathbb{N})}$ is ss-injective for every injective right $R$-module $M$.
\noindent (11) $\mathit{M}^{(\mathbb{N})}$ is ss-injective for every \emph{soc}-injective right $R$-module $M$.
\noindent (12) $\mathit{M}^{(\mathbb{N})}$ is ss-injective for every small injective right $R$-module $M$.
\noindent (13) $\mathit{M}^{(\mathbb{N})}$ is ss-injective for every ss-injective right $R$-module $M$.
\end{cor}
\begin{proof}
By applying Theorem~\ref{Theorem:(2.25)} and Corollary~\ref{Corollary:(2.26)}.
\end{proof}
\begin{rem}\label{Remark:(2.28)} \emph{Let $M$ be a right $R$-module. Write $r_{u}(N)=\{a\in S_{r}\cap J \mid Na=0\}$ and $l_{M}(K)=\{m\in M \mid mK=0\}$, where $N\subseteq M$ and $K\subseteq S_{r}\cap J$. Clearly, $r_{u}(N)\subseteq(S_{r}\cap J)_{R}$ and $l_{M}(K)\subseteq{}_{S}M$, where $S=End(M_{R})$, and we have the following:}
\noindent (1) $N\subseteq l_{M}r_{u}(N)$ \emph{for all} $N\subseteq M$.
\noindent (2) $K\subseteq r_{u}l_{M}(K)$ \emph{for all} $K\subseteq S_{r}\cap J$.
\noindent (3) $r_{u}l_{M}r_{u}(N)=r_{u}(N)$ \emph{for all} $N\subseteq M$.
\noindent (4) $l_{M}r_{u}l_{M}(K)=l_{M}(K)$ \emph{for all} $K\subseteq S_{r}\cap J$.
\end{rem}
\begin{proof}
This is clear.
\end{proof}
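Parts (3) and (4) of the remark, stated above as clear, follow formally from (1) and (2); a sketch for (3) runs as follows (part (4) is dual):

```latex
\begin{align*}
N \subseteq l_{M}r_{u}(N)
  &\;\Longrightarrow\; r_{u}\bigl(l_{M}r_{u}(N)\bigr) \subseteq r_{u}(N)
  && \text{by (1), since } r_{u} \text{ reverses inclusions,}\\
r_{u}(N) &\subseteq r_{u}l_{M}\bigl(r_{u}(N)\bigr)
  && \text{by (2) applied to } K = r_{u}(N).
\end{align*}
% Together these two inclusions give r_u l_M r_u(N) = r_u(N).
```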
\begin{lem}\label{Lemma:(2.29)} The following statements are equivalent for a right $R$-module $M$:
\noindent (1) $R$ satisfies the ACC for right ideals of the form $r_{u}(N)$, where $N\subseteq M$.
\noindent (2) The DCC holds for submodules of the form $l_{M}(K)$, where $K\subseteq S_{r}\cap J$.
\noindent (3) For each semisimple small right ideal $I$ there exists a finitely generated right ideal $K\subseteq I$ such that $l_{M}(I)=l_{M}(K)$.
\end{lem}
\begin{proof} (1)$\Leftrightarrow$(2) Clear.

(2)$\Rightarrow$(3) Consider $\Omega=\{l_{M}(A) \mid A$ is a finitely generated right ideal and $A\subseteq I\}$, which is nonempty because $M\in\Omega$. By (2), there is a finitely generated right ideal $K$ of $R$ contained in $I$ such that $l_{M}(K)$ is minimal in $\Omega$. Put $B=K+xR$, where $x\in I$. Then $B$ is a finitely generated right ideal contained in $I$ and $l_{M}(B)\subseteq l_{M}(K)$. But since $l_{M}(K)$ is minimal in $\Omega$, we get $l_{M}(B)=l_{M}(K)$, which yields $l_{M}(K)x=0$ for all $x\in I$. Therefore $l_{M}(K)I=0$ and hence $l_{M}(K)\subseteq l_{M}(I)$. But $l_{M}(I)\subseteq l_{M}(K)$, so $l_{M}(I)=l_{M}(K)$.

(3)$\Rightarrow$(1) Suppose that $r_{u}(M_{1})\subseteq r_{u}(M_{2})\subseteq\cdots\subseteq r_{u}(M_{n})\subseteq\cdots$, where $M_{i}\subseteq M$ for each $i$. Put $D_{i}=l_{M}r_{u}(M_{i})$ for each $i$, and $I=\bigcup_{i=1}^{\infty}r_{u}(M_{i})$; then $I\subseteq S_{r}\cap J$. By hypothesis, there exists a finitely generated right ideal $K$ of $R$ contained in $I$ such that $l_{M}(I)=l_{M}(K)$. Since $K$ is finitely generated, there exists $t\in\mathbb{N}$ such that $K\subseteq r_{u}(M_{n})$ for all $n\geq t$, that is, $l_{M}(K)\supseteq l_{M}r_{u}(M_{n})=D_{n}$ for all $n\geq t$. Since $l_{M}(K)=l_{M}(I)=l_{M}(\bigcup_{i=1}^{\infty}r_{u}(M_{i}))=\bigcap_{i=1}^{\infty}l_{M}r_{u}(M_{i})=\bigcap_{i=1}^{\infty}D_{i}\subseteq D_{n}$, we get $l_{M}(K)=D_{n}$ for all $n\geq t$. Since $D_{n}=l_{M}r_{u}(M_{n})$, we have $r_{u}(M_{n})=r_{u}l_{M}r_{u}(M_{n})=r_{u}(D_{n})=r_{u}l_{M}(K)$ for all $n\geq t$. Thus $r_{u}(M_{n})=r_{u}(M_{t})$ for all $n\geq t$, and hence (3) implies (1), which completes the proof.
\end{proof}
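The equivalence (1)$\Leftrightarrow$(2), recorded above as clear, rests on the correspondence of Remark~\ref{Remark:(2.28)}; a sketch:

```latex
% By Remark 2.28(3),(4), the assignments
\[
  r_{u}(N)\ \longmapsto\ l_{M}r_{u}(N),
  \qquad
  l_{M}(K)\ \longmapsto\ r_{u}l_{M}(K)
\]
% are mutually inverse, inclusion-reversing bijections between
% { r_u(N) : N \subseteq M } and { l_M(K) : K \subseteq S_r \cap J }.
% Hence an ascending chain r_u(N_1) \subseteq r_u(N_2) \subseteq ...
% corresponds to the descending chain l_M r_u(N_1) \supseteq
% l_M r_u(N_2) \supseteq ..., and conversely, so the ACC in (1)
% holds exactly when the DCC in (2) holds.
```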
The first part of the following proposition follows directly from Corollary~\ref{Corollary:(2.27)}, but we prove it in a different way.
\begin{prop}\label{Proposition:(2.30)} Let $E$ be an ss-injective right $R$-module. Then $E^{(\mathbb{N})}$ is ss-injective if and only if $R$
satisfies the ACC for right ideals of the form $r_{u}(N)$, where $N\subseteq E$.
\end{prop}
\begin{proof} ($\Rightarrow$) Suppose that there is a strictly ascending chain $r_{u}(N_{1})\subsetneqq r_{u}(N_{2})\subsetneqq\cdots\subsetneqq r_{u}(N_{m})\subsetneqq\cdots$, where $N_{i}\subseteq E$. Then $l_{E}r_{u}(N_{1})\supsetneqq l_{E}r_{u}(N_{2})\supsetneqq\cdots\supsetneqq l_{E}r_{u}(N_{m})\supsetneqq\cdots$, so for each $i\geq1$ we can find $t_{i}\in l_{E}r_{u}(N_{i})\setminus l_{E}r_{u}(N_{i+1})$ and $a_{i+1}\in r_{u}(N_{i+1})$ such that $t_{i}a_{i+1}\neq0$. Let $L=\bigcup_{i=1}^{\infty}r_{u}(N_{i})$; then for every $\ell\in L$ there exists $m_{\ell}\geq1$ such that $\ell\in r_{u}(N_{i})$ for all $i\geq m_{\ell}$, which implies that $t_{i}\ell=0$ for all $i\geq m_{\ell}$. Put $\bar{t}=(t_{i})_{i}$; then $\bar{t}\ell\in E^{(\mathbb{N})}$ for every $\ell\in L$. The map $\alpha_{\bar{t}}:L\longrightarrow E^{(\mathbb{N})}$ given by $\alpha_{\bar{t}}(\ell)=\bar{t}\ell$ is a well-defined $R$-homomorphism. Since $L$ is a semisimple small right ideal, $\alpha_{\bar{t}}$ extends by hypothesis to $\gamma:R\longrightarrow E^{(\mathbb{N})}$, and hence $\alpha_{\bar{t}}(\ell)=\bar{t}\ell=\gamma(\ell)=\gamma(1)\ell$. Since $\gamma(1)\in E^{(\mathbb{N})}$, there exists $k\geq1$ such that $t_{i}\ell=0$ for all $i\geq k$ and all $\ell\in L$, which contradicts $t_{k}a_{k+1}\neq0$.

($\Leftarrow$) Let $\alpha:I\longrightarrow E^{(\mathbb{N})}$ be an $R$-homomorphism, where $I$ is a semisimple small right ideal. By Lemma~\ref{Lemma:(2.29)} there is a finitely generated right ideal $K\subseteq I$ such that $l_{E}(I)=l_{E}(K)$. Since $E^{\mathbb{N}}$ is ss-injective, $\alpha$ is given by left multiplication by some $a\in E^{\mathbb{N}}$. Write $K=\bigoplus_{i=1}^{m}r_{i}R$, so that $\alpha(r_{i})=ar_{i}\in E^{(\mathbb{N})}$ for $i=1,2,\ldots,m$. Thus there exists $\tilde{a}\in E^{(\mathbb{N})}$ such that $a_{n}r_{i}=\tilde{a}_{n}r_{i}$ for all $n\in\mathbb{N}$ and $i=1,2,\ldots,m$, where $a_{n}$ is the $n$th coordinate of $a$. Since $K$ is generated by $\{r_{1},r_{2},\ldots,r_{m}\}$, we get $ar=\tilde{a}r$ for all $r\in K$. Therefore $a_{n}-\tilde{a}_{n}\in l_{E}(K)=l_{E}(I)$ for all $n\in\mathbb{N}$, which gives $a_{n}r=\tilde{a}_{n}r$ for all $r\in I$ and $n\in\mathbb{N}$, so $ar=\tilde{a}r$ for all $r\in I$. Thus $\alpha(r)=\tilde{a}r$ for all $r\in I$, and this means that $E^{(\mathbb{N})}$ is ss-injective.
\end{proof}
\begin{thm}\label{Theorem:(2.31)} The following statements are equivalent for a ring $R$:
\noindent (1) $S_{r}\cap J$ is finitely generated.
\noindent (2) $\bigoplus_{i=1}^{\infty}E(M_{i})$ is an ss-injective right $R$-module for all simple right $R$-modules $M_{i}$, $i\geq1$.
\end{thm}
\begin{proof} (1)$\Rightarrow$(2) By Corollary~\ref{Corollary:(2.27)}.

(2)$\Rightarrow$(1) Let $I_{1}\subsetneqq I_{2}\subsetneqq\cdots$ be a properly ascending chain of semisimple small right ideals of $R$. Clearly, $I=\bigcup_{i=1}^{\infty}I_{i}\subseteq S_{r}\cap J$. For every $i\geq1$ there exists $a_{i}\in I$ with $a_{i}\notin I_{i}$; choose $N_{i}/I_{i}\subseteq^{max}(a_{i}R+I_{i})/I_{i}$, so that $K_{i}=(a_{i}R+I_{i})/N_{i}$ is a simple right $R$-module. Define $\alpha_{i}:(a_{i}R+I_{i})/I_{i}\longrightarrow(a_{i}R+I_{i})/N_{i}$ by $\alpha_{i}(x+I_{i})=x+N_{i}$, which is a right $R$-epimorphism. Let $E(K_{i})$ be the injective hull of $K_{i}$ and $i_{i}:K_{i}\rightarrow E(K_{i})$ the inclusion map. By injectivity of $E(K_{i})$, there exists $\beta_{i}:I/I_{i}\longrightarrow E(K_{i})$ extending $i_{i}\alpha_{i}$. Since $a_{i}\notin N_{i}$, we get $\beta_{i}(a_{i}+I_{i})=i_{i}(\alpha_{i}(a_{i}+I_{i}))=a_{i}+N_{i}\neq0$ for each $i\geq1$. If $b\in I$, then there exists $n_{b}\geq1$ such that $b\in I_{i}$ for all $i\geq n_{b}$ and hence $\beta_{i}(b+I_{i})=0$ for all $i\geq n_{b}$. Thus we can define $\gamma:I\longrightarrow\bigoplus_{i=1}^{\infty}E(K_{i})$ by $\gamma(b)=(\beta_{i}(b+I_{i}))_{i}$. By hypothesis there exists $\tilde{\gamma}:R\longrightarrow\bigoplus_{i=1}^{\infty}E(K_{i})$ such that $\tilde{\gamma}_{|I}=\gamma$. Put $\tilde{\gamma}(1)=(c_{i})_{i}$; then there exists $n\geq1$ with $c_{i}=0$ for all $i\geq n$. Since $(\beta_{i}(b+I_{i}))_{i}=\gamma(b)=\tilde{\gamma}(b)=\tilde{\gamma}(1)b=(c_{i}b)_{i}$ for all $b\in I$, we have $\beta_{i}(b+I_{i})=c_{i}b$ for all $i\geq1$, so $\beta_{i}(b+I_{i})=0$ for all $i\geq n$ and all $b\in I$, which contradicts $\beta_{n}(a_{n}+I_{n})\neq0$. Hence (2) implies (1).
\end{proof}
\section{Strongly SS-Injective Modules}
\begin{prop}\label{Proposition:(3.1)} The following statements are equivalent.
\noindent (1) $M$ is a strongly ss-injective right $R$-module.
\noindent (2) Every $R$-homomorphism $\alpha:A\longrightarrow M$ extends to $N$ for every right $R$-module $N$, where $A\ll N$ and $\alpha(A)$ is a semisimple submodule of $M$.\end{prop}
\begin{proof}
(2)$\Rightarrow$(1) Clear.

(1)$\Rightarrow$(2) Let $A$ be a small submodule of $N$, and let $\alpha:A\longrightarrow M$ be an $R$-homomorphism such that $\alpha(A)$ is a semisimple submodule of $M$. If $B=\ker(\alpha)$, then $\alpha$ induces an embedding $\tilde{\alpha}:A/B\longrightarrow M$ defined by $\tilde{\alpha}(a+B)=\alpha(a)$ for all $a\in A$; it is well defined because $a_{1}+B=a_{2}+B$ gives $a_{1}-a_{2}\in B$, so $\alpha(a_{1})=\alpha(a_{2})$, that is, $\tilde{\alpha}(a_{1}+B)=\tilde{\alpha}(a_{2}+B)$. Since $M$ is strongly ss-injective and $A/B$ is semisimple and small in $N/B$, $\tilde{\alpha}$ extends to an $R$-homomorphism $\gamma:N/B\longrightarrow M$. If $\pi:N\longrightarrow N/B$ is the canonical map, then the $R$-homomorphism $\beta=\gamma\circ\pi:N\longrightarrow M$ is an extension of $\alpha$: for $a\in A$ we have $\beta(a)=\gamma\circ\pi(a)=\gamma(a+B)=\tilde{\alpha}(a+B)=\alpha(a)$, as desired.\end{proof}
\begin{cor}\label{Corollary:(3.2)} \noindent (1) Let $M$ be a semisimple right $R$-module. If $M$ is strongly ss-injective, then $M$ is small injective.
\noindent (2) If every simple right $R$-module is strongly ss-injective, then $R$ is semiprimitive. \end{cor}
\begin{proof}
(1) By Proposition~\ref{Proposition:(3.1)}.
(2) By (1) and applying \cite[Theorem 2.8]{19ThQu09}.\end{proof}
\begin{rem}\label{Remark:(3.3)} \emph{The converse of Corollary~\ref{Corollary:(3.2)} is not true (see Example~\ref{Example:(3.8)})}.\end{rem}
\begin{thm}\label{Theorem:(3.4)} If $M$ is a strongly ss-injective (or just ss-$E(M)$-injective) right $R$-module,
then for every semisimple small submodule $A$ of $M$ there is an injective submodule $E_{A}$ of $M$ such that $M=E_{A}\oplus T_{A}$, where $T_{A}$ is a submodule of $M$ with $T_{A}\cap A=0$. Moreover, if $A\neq0$, then $E_{A}$ can be chosen so that $A\leq^{ess}E_{A}$. \end{thm}
\begin{proof}
Let $A$ be a semisimple small submodule of $M$. If $A=0$, we are done by taking $E_{A}=0$ and $T_{A}=M$. Suppose that $A\neq0$, let $i_{1}$, $i_{2}$ and $i_{3}$ be the inclusion maps, and let $D_{A}=E(A)$ be the injective hull of $A$ in $E(M)$. Since $M$ is strongly ss-injective, $M$ is ss-$E(M)$-injective. Since $A$ is a semisimple small submodule of $M$, it follows from \cite[Lemma 5.1.3(a)]{9Kas82} that $A$ is a semisimple small submodule of $E(M)$, and hence there exists an $R$-homomorphism $\alpha:E(M)\longrightarrow M$ such that $\alpha i_{2}i_{1}=i_{3}$. Put $\beta=\alpha i_{2}$; then $\beta:D_{A}\longrightarrow M$ is an extension of $i_{3}$. Since $A\leq^{ess}D_{A}$, $\beta$ is a monomorphism. Put $E_{A}=\beta(D_{A})$. Since $E_{A}$ is an injective submodule of $M$, we have $M=E_{A}\oplus T_{A}$ for some submodule $T_{A}$ of $M$. Since $\beta(A)=A$, we get $A\subseteq\beta(D_{A})=E_{A}$, and this means that $T_{A}\cap A=0$. Moreover, the corestriction $\tilde{\beta}:D_{A}\longrightarrow E_{A}$ of $\beta$ is an isomorphism. Since $A\leq^{ess}D_{A}$, we get $\tilde{\beta}(A)\leq^{ess}E_{A}$. But $\tilde{\beta}(A)=\beta(A)=A$, so $A\leq^{ess}E_{A}$.\end{proof}
\begin{cor}\label{Corollary:(3.5)} If $M$ is a right $R$-module with a semisimple small submodule $A$ such that $A\leq^{ess}M$, then the following conditions are equivalent.
\noindent (1) $M$ is injective.
\noindent (2) $M$ is strongly ss-injective.
\noindent (3) $M$ is ss-$E(M)$-injective. \end{cor}
\begin{proof}
(1)$\Rightarrow$(2) and (2)$\Rightarrow$(3) are obvious.

(3)$\Rightarrow$(1) By Theorem~\ref{Theorem:(3.4)}, we can write $M=E_{A}\oplus T_{A}$, where $E_{A}$ is injective and $T_{A}\cap A=0$. Since $A\leq^{ess}M$, we get $T_{A}=0$ and hence $M=E_{A}$. Therefore, $M$ is an injective $R$-module.\end{proof}
\begin{example}\label{Example:(3.6)}
\emph{$\mathbb{Z}_{4}$ as a $\mathbb{Z}$-module is not strongly ss-injective. In particular, $\mathbb{Z}_{4}$ is not ss-$\mathbb{Z}_{2^{\infty}}$-injective.}\end{example}
\begin{proof}
Assume that $\mathbb{Z}_{4}$ is a strongly ss-injective $\mathbb{Z}$-module, and let $A=\langle 2\rangle=\{0,2\}$. Clearly, $A$ is a semisimple small and essential submodule of $\mathbb{Z}_{4}$ as a $\mathbb{Z}$-module. By Corollary~\ref{Corollary:(3.5)}, $\mathbb{Z}_{4}$ is then an injective $\mathbb{Z}$-module, a contradiction. Thus $\mathbb{Z}_{4}$ as a $\mathbb{Z}$-module is not strongly ss-injective. Since $E(\mathbb{Z}_{2^{2}})=\mathbb{Z}_{2^{\infty}}$ as $\mathbb{Z}$-modules, $\mathbb{Z}_{4}$ is not ss-$\mathbb{Z}_{2^{\infty}}$-injective by Corollary~\ref{Corollary:(3.5)}.\end{proof}
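The properties of $A=\langle 2\rangle$ used in this argument can be checked directly, since the only submodules of $\mathbb{Z}_{4}$ are $0\subset\langle 2\rangle\subset\mathbb{Z}_{4}$:

```latex
\begin{itemize}
\item \emph{Semisimple:} $\langle 2\rangle\cong\mathbb{Z}_{2}$ is a simple
      $\mathbb{Z}$-module.
\item \emph{Small:} if $\langle 2\rangle+N=\mathbb{Z}_{4}$ for a submodule
      $N$, then $N\not\subseteq\langle 2\rangle$, which forces
      $N=\mathbb{Z}_{4}$; hence $\langle 2\rangle\ll\mathbb{Z}_{4}$.
\item \emph{Essential:} every nonzero submodule of $\mathbb{Z}_{4}$
      contains $\langle 2\rangle$, so $\langle 2\rangle\cap N\neq 0$
      whenever $N\neq 0$.
\end{itemize}
```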
\begin{cor}\label{Corollary:(3.7)} Let $M$ be a right $R$-module such that soc$(M)\cap J(M)$ is a small submodule of $M$ (in particular, if $M$
is finitely generated). If $M$ is strongly ss-injective, then $M=E\oplus T$, where $E$ is injective and $T\cap$ soc$(M)\cap J(M)=0$. Moreover, if soc$(M)\cap J(M)\neq0$, then $E$ can be chosen so that soc$(M)\cap J(M)\leq^{ess}E$. \end{cor}
\begin{proof}
By taking $A=$ soc$(M)\cap J(M)$ and applying Theorem~\ref{Theorem:(3.4)}.
\end{proof}
The following example shows that the converses of Theorem~\ref{Theorem:(3.4)} and Corollary~\ref{Corollary:(3.7)} do not hold.
\begin{example}\label{Example:(3.8)}
\emph{Let $M=\mathbb{Z}_{6}$ as a $\mathbb{Z}$-module. Since $J(M)=0$ and soc$(M)=M$, we have soc$(M)\cap J(M)=0$, so we can write $M=0\oplus M$ with $M\cap($soc$(M)\cap J(M))=0$. Let $N=\mathbb{Z}_{8}$ as a $\mathbb{Z}$-module; then $J(N)=\langle\bar{2}\rangle$ and soc$(N)=\langle\bar{4}\rangle$. Define $\gamma:$ soc$(N)\cap J(N)\longrightarrow M$ by $\gamma(\bar{4})=\bar{3}$; then $\gamma$ is a $\mathbb{Z}$-homomorphism. Assume that $M$ is strongly ss-injective; then $M$ is ss-$N$-injective, so there exists a $\mathbb{Z}$-homomorphism $\beta:N\longrightarrow M$ such that $\beta\circ i=\gamma$, where $i$ is the inclusion map from soc$(N)\cap J(N)$ to $N$. Since $\beta(J(N))\subseteq J(M)$, we get $\bar{3}=\gamma(\bar{4})=\beta(\bar{4})\in\beta(J(N))\subseteq J(M)=0$, a contradiction. So $M$ is not a strongly ss-injective $\mathbb{Z}$-module.}\end{example}
\begin{cor}\label{Corollary:(3.9)} The following statements are equivalent:
\noindent (1) \emph{soc}$(M)\cap J(M)=0$ for every right $R$-module $M$.
\noindent (2) Every right $R$-module is strongly ss-injective.
\noindent (3) Every simple right $R$-module is strongly ss-injective.
\end{cor}
\begin{proof}
By Proposition~\ref{Proposition:(2.11)}.
\end{proof}
\subparagraph*{\textmd{Recall that a ring $R$ is called a right \textit{V}-ring (\textit{GV}-ring, \textit{SI}-ring, respectively) if every simple (simple singular, singular, respectively) right $R$-module is injective. A right $R$-module $M$ is called strongly s-injective if every $R$-homomorphism from $K$ to $M$ extends to $N$ for every right $R$-module $N$, where $K\subseteq Z(N)$ (see \cite{22Zey14}). A submodule $K$ of a right $R$-module $M$ is called \textit{t}-essential in $M$ (written $K\subseteq^{tes}M$) if for every submodule $L$ of $M$, $K\cap L\subseteq Z_{2}(M)$ implies that $L\subseteq Z_{2}(M)$; $M$ is said to be \textit{t}-semisimple if for every submodule $A$ of $M$ there exists a direct summand $B$ of $M$ such that $B\subseteq^{tes}A$ (see \cite{4AsHaTo13}). In the following results we give some relations between ss-injectivity and other injectivities, and we provide many new characterizations of \textit{V}-rings, \textit{GV}-rings, \textit{SI}-rings and \textit{QF} rings.}}
\begin{lem}\label{Lemma:(3.10)} Let $M/N$ be a semisimple right $R$-module and $C$ any right $R$-module. Then every homomorphism from a submodule (resp. a semisimple submodule) $A$ of $M$ to $C$ extends to a homomorphism from $M$ to $C$ if and only if every homomorphism from a submodule (resp. a semisimple submodule) $B$ of $N$ to $C$ extends to a homomorphism from $M$ to $C$.\end{lem}
\begin{proof}
($\Rightarrow$) is obtained directly.

($\Leftarrow$) Let $f$ be a right $R$-homomorphism from a submodule $A$ of $M$ to $C$. Since $M/N$ is semisimple, there exists a submodule $L$ of $M$ such that $A+L=M$ and $A\cap L\leq N$ (see \cite[Proposition 2.1]{11Lom99}). Thus there exists a right $R$-homomorphism $g:M\longrightarrow C$ such that $g(x)=f(x)$ for all $x\in A\cap L$. Define $h:M\longrightarrow C$ by $h(x)=f(a)+g(\ell)$ for $x=a+\ell$ with $a\in A$ and $\ell\in L$. Then $h$ is well defined: if $a_{1}+\ell_{1}=a_{2}+\ell_{2}$ with $a_{i}\in A$, $\ell_{i}\in L$, $i=1,2$, then $a_{1}-a_{2}=\ell_{2}-\ell_{1}\in A\cap L$, so $f(a_{1}-a_{2})=g(\ell_{2}-\ell_{1})$, which gives $h(a_{1}+\ell_{1})=h(a_{2}+\ell_{2})$. Thus $h$ is a well-defined $R$-homomorphism extending $f$.\end{proof}
\begin{cor}\label{Corollary:(3.11)} For right $R$-modules $M$ and $N$, the following hold:
\noindent (1) If $M$ is finitely generated and $M/J(M)$ is a semisimple right $R$-module, then $N$ is \emph{soc}-$M$-injective if and only if $N$ is ss-$M$-injective.
\noindent (2) If $M/$\emph{soc}$(M)$ is a semisimple right $R$-module, then $N$ is \emph{soc}-$M$-injective if and only if $N$ is $M$-injective.
\noindent (3) If $R/S_{r}$ is a semisimple right $R$-module, then $N$ is \emph{soc}-injective if and only if $N$ is injective.
\noindent (4) If $R/S_{r}$ is a semisimple right $R$-module, then $N$ is ss-injective if and only if $N$ is small injective. \end{cor}
\begin{proof}
(1) ($\Rightarrow$) Clear.

($\Leftarrow$) Since $N$ is ss-$M$-injective, every homomorphism from a semisimple small submodule of $M$ to $N$ extends to $M$. Since $M$ is finitely generated, $J(M)\ll M$ and hence every homomorphism from any semisimple submodule of $J(M)$ to $N$ extends to $M$. Since $M/J(M)$ is semisimple, every homomorphism from any semisimple submodule of $M$ to $N$ extends to $M$ by Lemma~\ref{Lemma:(3.10)}. Therefore $N$ is a soc-$M$-injective right $R$-module.

(2) ($\Rightarrow$) Since $N$ is soc-$M$-injective, every homomorphism from any submodule of soc$(M)$ to $N$ extends to $M$. Since $M/$soc$(M)$ is semisimple, Lemma~\ref{Lemma:(3.10)} implies that every homomorphism from any submodule of $M$ to $N$ extends to $M$. Hence $N$ is $M$-injective.

($\Leftarrow$) Clear.

(3) By (2).

(4) Since $R/S_{r}$ is a semisimple right $R$-module, $J(R/S_{r})=0$. By \cite[Theorem 9.1.4(b)]{9Kas82}, we have $J\subseteq S_{r}$ and hence $J=J\cap S_{r}$. Thus $N$ is ss-injective if and only if $N$ is small injective.\end{proof}
\begin{cor}\label{Corollary:(3.12)} Let $R$ be a semilocal ring. Then $S_{r}\cap J$ is finitely generated if and only if $S_{r}$ is finitely generated. \end{cor}
\begin{proof}
Suppose that $S_{r}\cap J$ is finitely generated. By Corollary~\ref{Corollary:(2.27)}, every direct sum of soc-injective right $R$-modules is ss-injective. Thus it follows from Corollary~\ref{Corollary:(3.11)}(1) and \cite[Corollary 2.11]{2AmYoZe05} that $S_{r}$ is finitely generated. The converse is clear.\end{proof}
\begin{thm}\label{Theorem:(3.13)} If $R$ is a right perfect ring, then a right $R$-module $M$ is strongly soc-injective if and only if $M$ is strongly ss-injective.\end{thm}
\begin{proof}
($\Rightarrow$) Clear.

($\Leftarrow$) Let $R$ be a right perfect ring and $M$ a strongly ss-injective right $R$-module. By \cite[Theorem 3.8]{11Lom99}, $R$ is a semilocal ring, and hence by \cite[Theorem 3.5]{11Lom99} every right $R$-module $N$ is semilocal, so $N/J(N)$ is a semisimple right $R$-module. Since $R$ is a right perfect ring, the Jacobson radical of every right $R$-module is small by \cite[Theorems 4.3 and 4.4, p.~69]{7ChDiMa05}. Thus $N/J(N)$ is semisimple and $J(N)\ll N$ for every $N\in$ Mod-$R$. Since $M$ is strongly ss-injective, every homomorphism from a semisimple small submodule of $N$ to $M$ extends to $N$, for every $N\in$ Mod-$R$; in particular, every homomorphism from any semisimple submodule of $J(N)$ to $M$ extends to $N$, for every $N\in$ Mod-$R$. Since $N/J(N)$ is a semisimple right $R$-module for every $N\in$ Mod-$R$, Lemma~\ref{Lemma:(3.10)} implies that every homomorphism from any semisimple submodule of $N$ to $M$ extends to $N$, for every $N\in$ Mod-$R$, and hence $M$ is strongly soc-injective.\end{proof}
\begin{cor}\label{Corollary:(3.14)} A ring $R$ is a $QF$ ring if and only if every strongly ss-injective right $R$-module is projective. \end{cor}
\begin{proof}
($\Rightarrow$) If $R$ is a $QF$ ring, then $R$ is a right perfect ring, so by Theorem~\ref{Theorem:(3.13)} and \cite[Proposition 3.7]{2AmYoZe05} every strongly ss-injective right $R$-module is projective.

($\Leftarrow$) By hypothesis, every injective right $R$-module is projective, and hence $R$ is a $QF$ ring (see for instance \cite[Proposition 12.5.13]{6Bla11}).
\end{proof}
\begin{thm}\label{Theorem:(3.15)} The following statements are equivalent for a ring $R$.
\noindent (1) Every direct sum of strongly ss-injective right $R$-modules is injective.
\noindent (2) Every direct sum of strongly soc-injective right $R$-modules is injective.
\noindent (3) $R$ is right artinian. \end{thm}
\begin{proof}
(1)$\Rightarrow$(2) Clear.

(2)$\Rightarrow$(3) Since every direct sum of strongly soc-injective right $R$-modules is injective, $R$ is right noetherian and right semiartinian by \cite[Theorem 3.3 and Theorem 3.6]{2AmYoZe05}, so it follows from \cite[Proposition 5.2, p.189]{18Ste75} that $R$ is right artinian.

(3)$\Rightarrow$(1) By hypothesis, $R$ is right perfect and right noetherian. It follows from Theorem~\ref{Theorem:(3.13)} and \cite[Theorem 3.3]{2AmYoZe05} that every direct sum of strongly ss-injective right $R$-modules is strongly soc-injective. Since $R$ is right semiartinian, \cite[Theorem 3.6]{2AmYoZe05} implies that every direct sum of strongly ss-injective right $R$-modules is injective.
\end{proof}
\begin{thm}\label{Theorem:(3.16)} If $R$ is right $t$-semisimple, then a right $R$-module $M$ is injective if and only if $M$ is strongly s-injective. \end{thm}
\begin{proof}
($\Rightarrow$) Obvious.

($\Leftarrow$) Since $M$ is strongly s-injective, $Z_{2}(M)$ is injective by \cite[Proposition 3, p.27]{22Zey14}. Thus every homomorphism $f:K\longrightarrow M$, where $K\subseteq Z_{2}^{r}$, extends to $R$ by \cite[Lemma 1, p.26]{22Zey14}. Since $R$ is right $t$-semisimple, $R/Z_{2}^{r}$ is semisimple as a right $R$-module (see \cite[Theorem 2.3]{4AsHaTo13}). So by applying Lemma~\ref{Lemma:(3.10)}, we conclude that $M$ is injective.
\end{proof}
\begin{cor}\label{Corollary:(3.17)} The following statements are equivalent for a ring $R$.
\noindent (1) $R$ is right $SI$ and right $t$-semisimple.
\noindent (2) $R$ is semisimple. \end{cor}
\begin{proof}
(1)$\Rightarrow$(2) Since $R$ is a right $SI$ ring, every right $R$-module is strongly s-injective by \cite[Theorem 1, p.29]{22Zey14}. By Theorem~\ref{Theorem:(3.16)}, every right $R$-module is injective, and hence $R$ is a semisimple ring.

(2)$\Rightarrow$(1) Clear.
\end{proof}
\begin{cor}\label{Corollary:(3.18)} If $R$ is a right $t$-semisimple ring, then $R$ is a right $V$-ring if and only if $R$ is a right $GV$-ring. \end{cor}
\begin{proof}
($\Rightarrow$) Clear.

($\Leftarrow$) By \cite[Proposition 5, p.28]{22Zey14} and Theorem~\ref{Theorem:(3.16)}.
\end{proof}
\begin{cor}\label{Corollary:(3.19)} If $R$ is a right $t$-semisimple ring, then $R/S_{r}$ is a noetherian right $R$-module if and only if $R$ is right noetherian. \end{cor}
\begin{proof}
If $R/S_{r}$ is a noetherian right $R$-module, then every direct sum of injective right $R$-modules is strongly s-injective by \cite[Proposition 6]{22Zey14}. Since $R$ is right $t$-semisimple, it follows from Theorem~\ref{Theorem:(3.16)} that every direct sum of injective right $R$-modules is injective, and hence $R$ is right noetherian. The converse is clear.
\end{proof}
\section{SS-Injective Rings}
\subparagraph*{\textmd{We recall that the dual of a right $R$-module $M$ is $M^{d}=\mathrm{Hom}_{R}(M,R_{R})$; clearly $M^{d}$ is a left $R$-module.}}
\begin{prop}\label{Proposition:(4.1)} The following statements are equivalent for a ring $R$.
\noindent (1) $R$ is a right ss-injective ring.
\noindent (2) If $K$ is a semisimple right $R$-module, $P$ and $Q$ are finitely generated projective right $R$-modules,
$\beta:K\longrightarrow P$ is an $R$-monomorphism with $\beta(K)\ll P$ and $f:K\longrightarrow Q$ is an $R$-homomorphism,
then $f$ can be extended to an $R$-homomorphism $h:P\longrightarrow Q$.
\noindent (3) If $M$ is a semisimple right $R$-module and $f$ is a nonzero monomorphism from $M$ to $R_{R}$ with $f(M)\ll R_{R}$, then $M^{d}=Rf$.
\end{prop}
\begin{proof}
(2)$\Rightarrow$(1) Clear.

(1)$\Rightarrow$(2) Since $Q$ is finitely generated, there is an $R$-epimorphism $\alpha_{1}:R^{n}\longrightarrow Q$ for some $n\in\mathbb{Z}^{+}$. Since $Q$ is projective, there is an $R$-homomorphism $\alpha_{2}:Q\longrightarrow R^{n}$ such that $\alpha_{1}\alpha_{2}=I_{Q}$. Define $\tilde{\beta}:K\longrightarrow\beta(K)$ by $\tilde{\beta}(a)=\beta(a)$ for all $a\in K$, and let $i:\beta(K)\longrightarrow P$ be the inclusion map. Since $R$ is a right ss-injective ring (by hypothesis), it follows from Proposition~\ref{Proposition:(2.8)} and Corollary~\ref{Corollary:(2.4)}(1) that $R^{n}$ is a right ss-$P$-injective $R$-module. So there exists an $R$-homomorphism $h:P\longrightarrow R^{n}$ such that $hi=\alpha_{2}f\tilde{\beta}^{-1}$. Put $g=\alpha_{1}h:P\longrightarrow Q$. Thus $gi=(\alpha_{1}h)i=\alpha_{1}(\alpha_{2}f\tilde{\beta}^{-1})=f\tilde{\beta}^{-1}$ and hence $(g\beta)(a)=g(i(\beta(a)))=(f\tilde{\beta}^{-1})(\beta(a))=f(a)$ for all $a\in K$. Therefore, there is an $R$-homomorphism $g:P\longrightarrow Q$ such that $g\beta=f$.

(1)$\Rightarrow$(3) Let $g\in M^{d}$; then $gf^{-1}:f(M)\longrightarrow R_{R}$. Since $f(M)$ is a semisimple small right ideal of $R$ and $R$ is a right ss-injective ring (by hypothesis), $gf^{-1}$ is left multiplication by some $a\in R$. Therefore, $g=af$ and hence $M^{d}=Rf$.

(3)$\Rightarrow$(1) Let $f:K\longrightarrow R$ be a right $R$-homomorphism, where $K$ is a semisimple small right ideal of $R$, and let $i:K\longrightarrow R$ be the inclusion map. By (3) we have $K^{d}=Ri$, so $f=ci$ in $K^{d}$ for some $c\in R$. Thus there is $c\in R$ such that $f(a)=ca$ for all $a\in K$, and this implies that $R$ is a right ss-injective ring.
\end{proof}
\begin{example}\label{Example:(4.2)}
\noindent\emph{(1) Every universally mininjective ring is ss-injective, but
not conversely (see Example~\ref{Example:(5.7)}).}
\noindent\emph{(2) The two classes of universally mininjective rings and soc-injective
rings are different (see Example~\ref{Example:(5.7)} and Example~\ref{Example:(5.8)}).}\end{example}
\begin{cor}\label{Corollary:(4.3)} Let $R$ be a right ss-injective ring. Then:
\noindent (1) $R$ is a right mininjective ring.
\noindent (2) $lr(a)=Ra$, for all $a\in S_{r}\cap J$.
\noindent (3) $r(a)\subseteq r(b)$, $a\in S_{r}\cap J$, $b\in R$ implies $Rb\subseteq Ra$.
\noindent (4) $l(bR\cap r(a))=l(b)+Ra$, for all $a\in S_{r}\cap J$, $b\in R$.
\noindent (5) $l(K_{1}\cap K_{2})=l(K_{1})+l(K_{2})$, for all semisimple small right ideals $K_{1}$ and $K_{2}$ of $R$.\end{cor}
\begin{proof}
(1) By Lemma~\ref{Lemma:(2.5)}.

(2), (3), (4) and (5) are obtained by Lemma~\ref{Lemma:(2.13)}.
\end{proof}
\subparagraph*{\textmd{The following is an example of a right mininjective ring which is not right ss-injective.}}
\begin{example}\label{Example:(4.4)}
\emph{(The Bj\"{o}rk Example \cite[Example 2.5]{15NiYu03}). Let $F$ be a field and let $a\mapsto\bar{a}$ be an isomorphism $F\longrightarrow\bar{F}\subseteq F$, where the subfield $\bar{F}\neq F$. Let $R$ denote the left vector space on basis $\{1, t\}$, and make $R$ into an $F$-algebra by defining $t^{2}=0$ and $ta=\bar{a}t$ for all $a\in F$. By \cite[Example 2.5]{15NiYu03}, $R$ is a right mininjective local ring. It is mentioned in \cite[Example 4.15]{2AmYoZe05} that $R$ is not right soc-injective. Since $R$ is a local ring, it follows from Corollary~\ref{Corollary:(3.11)}(1) that $R$ is not a right ss-injective ring.}
\end{example}
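For concreteness, the stated relations $t^{2}=0$ and $ta=\bar{a}t$ completely determine the multiplication on $R=F\oplus Ft$; the following routine computation (a verification of the construction above, not taken from the cited sources) records it:
\[
(a+bt)(c+dt)=ac+adt+b\bar{c}t+b\bar{d}t^{2}=ac+(ad+b\bar{c})t,
\]
for all $a,b,c,d\in F$. In particular, $(bt)(dt)=0$, so $Ft$ is a nilpotent ideal with $R/Ft\cong F$; this is why $R$ is local with $J=Ft$.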
\begin{thm}\label{Theorem:(4.5)} Let $R$ be a right ss-injective ring. Then:
\noindent (1) $S_{r}\cap J\subseteq Z_{r}$.
\noindent (2) If the ascending chain $r(a_{1})\subseteq r(a_{2}a_{1})\subseteq\dots$ terminates for any sequence $a_{1},a_{2},\dots$ in $Z_{r}\cap S_{r}$, then $S_{r}\cap J$ is right t-nilpotent and $S_{r}\cap J=Z_{r}\cap S_{r}$. \end{thm}
\begin{proof}
(1) Let $a\in S_{r}\cap J$ and suppose $bR\cap r(a)=0$ for some $b\in R$. By Corollary~\ref{Corollary:(4.3)}(4), $l(b)+Ra=l(bR\cap r(a))=l(0)=R$, so $l(b)=R$ because $a\in J$, which implies that $b=0$. Thus $r(a)\subseteq^{ess}R_{R}$ and hence $S_{r}\cap J\subseteq Z_{r}$.

(2) For any sequence $x_{1},x_{2},\dots$ in $Z_{r}\cap S_{r}$, we have $r(x_{1})\subseteq r(x_{2}x_{1})\subseteq\dots$. By hypothesis, there exists $m\in\mathbb{N}$ such that $r(x_{m}\dots x_{2}x_{1})=r(x_{m+1}x_{m}\dots x_{2}x_{1})$. If $x_{m}\dots x_{2}x_{1}\neq0$, then $(x_{m}\dots x_{2}x_{1})R\cap r(x_{m+1})\neq0$, and hence $0\neq x_{m}\dots x_{2}x_{1}r\in r(x_{m+1})$ for some $r\in R$. Thus $x_{m+1}x_{m}\dots x_{2}x_{1}r=0$, and this implies that $x_{m}\dots x_{2}x_{1}r=0$, a contradiction. Thus $Z_{r}\cap S_{r}$ is right t-nilpotent, so $Z_{r}\cap S_{r}\subseteq J$. Therefore, $S_{r}\cap J=Z_{r}\cap S_{r}$ by (1).
\end{proof}
\begin{prop}\label{Proposition:(4.6)} Let $R$ be a right ss-injective ring. Then:
\noindent (1) If $Ra$ is a simple left ideal of $R$,
then soc$(aR)\cap J(aR)$ is zero or simple.
\noindent (2) $rl(S_{r}\cap J)=S_{r}\cap J$ if and only if $rl(K)=K$ for all semisimple small right ideals $K$ of $R$.\end{prop}
\begin{proof}
(1) Suppose that soc$(aR)\cap J(aR)$ is nonzero. Let $x_{1}R$ and $x_{2}R$ be any simple small right ideals of $R$ with $x_{i}\in aR$, $i=1,2$. If $x_{1}R\cap x_{2}R=0$, then by Corollary~\ref{Corollary:(4.3)}(5), $l(x_{1})+l(x_{2})=R$. Since $x_{i}\in aR$, we have $x_{i}=ar_{i}$ for some $r_{i}\in R$, $i=1,2$, that is, $l(a)\subseteq l(ar_{i})=l(x_{i})$, $i=1,2$. Since $Ra$ is simple, $l(a)\subseteq^{max}R$, that is, $l(x_{1})=l(x_{2})=l(a)$. Therefore, $l(a)=R$ and hence $a=0$, which contradicts the minimality of $Ra$. Thus soc$(aR)\cap J(aR)$ is simple.

(2) Suppose that $rl(S_{r}\cap J)=S_{r}\cap J$ and let $K$ be a semisimple small right ideal of $R$; trivially we have $K\subseteq rl(K)$. If $K\cap xR=0$ for some $x\in rl(K)$, then by Corollary~\ref{Corollary:(4.3)}(5), $l(K\cap xR)=l(K)+l(xR)=R$, since $x\in rl(K)\subseteq rl(S_{r}\cap J)=S_{r}\cap J$. If $y\in l(K)$, then $yx=0$, that is, $y(xr)=0$ for all $r\in R$, and hence $l(K)\subseteq l(xR)$. Thus $l(xR)=R$, so $x=0$ and this means that $K\subseteq^{ess}rl(K)$. Since $K\subseteq^{ess}rl(K)\subseteq rl(S_{r}\cap J)=S_{r}\cap J$, it follows that $K=rl(K)$. The converse is trivial.
\end{proof}
\begin{lem}\label{Lemma:(4.7)} The following statements are equivalent.
\noindent (1) $rl(K)=K$, for all semisimple small
right ideals $K$ of $R$.
\noindent (2) $r(l(K)\cap Ra)=K+r(a)$, for all semisimple small right ideals $K$
of $R$ and all $a\in R$. \end{lem}
\begin{proof}
(1)$\Rightarrow$(2) Clearly, $K+r(a)\subseteq r(l(K)\cap Ra)$ by \cite[Proposition 2.16]{3AnFu74}. Now, let $x\in r(l(K)\cap Ra)$ and $y\in l(aK)$. Then $yaK=0$, so $ya\in l(K)\cap Ra$ and hence $y\in l(ax)$. Thus $l(aK)\subseteq l(ax)$, and so $ax\in rl(ax)\subseteq rl(aK)=aK$, since $aK$ is a semisimple small right ideal of $R$. Hence $ax=ak$ for some $k\in K$, and so $(x-k)\in r(a)$. This leads to $x\in K+r(a)$, that is, $r(l(K)\cap Ra)=K+r(a)$.

(2)$\Rightarrow$(1) By taking $a=1$.
\end{proof}
\subparagraph*{\textmd{Recall that a right ideal $I$ of $R$ is said to lie over a summand of $R_{R}$ if there exists a direct decomposition $R_{R}=A_{R}\oplus B_{R}$ with $A\subseteq I$ and $B\cap I\ll R_{R}$ (see \cite{13Nich76}); by the modular law this leads to $I=A\oplus(B\cap I)$.}}
\begin{lem}\label{Lemma:(4.8)} Let $K$ be an $m$-generated semisimple right ideal that lies over a summand of $R_{R}$. If $R$ is right ss-injective, then every homomorphism from $K$ to $R_{R}$ can be extended to an endomorphism of $R_{R}$. \end{lem}
\begin{proof}
Let $\alpha:K\longrightarrow R$ be a right $R$-homomorphism. By hypothesis, $K=eR\oplus B$ for some $e^{2}=e\in R$, where $B$ is an $m$-generated semisimple small right ideal of $R$. Now, we prove that $K=eR\oplus(1-e)B$. Clearly, $eR+(1-e)B$ is a direct sum. Let $x\in K$; then $x=a+b$ for some $a\in eR$, $b\in B$, so we can write $x=a+eb+(1-e)b$, and this implies that $x\in eR\oplus(1-e)B$. Conversely, let $x\in eR\oplus(1-e)B$. Then $x=a+(1-e)b$ for some $a\in eR$, $b\in B$, and we obtain $x=a+(1-e)b=(a-eb)+b\in eR\oplus B$. It is obvious that $(1-e)B$ is an $m$-generated semisimple small right ideal. Since $R$ is right ss-injective, there exists $\gamma\in End(R_{R})$ such that $\gamma_{|(1-e)B}=\alpha_{|(1-e)B}$. Define $\beta:R_{R}\longrightarrow R_{R}$ by $\beta(x)=\alpha(ex)+\gamma((1-e)x)$ for all $x\in R$, which is a well defined $R$-homomorphism. If $x\in K$, then $x=a+b$ where $a\in eR$ and $b\in(1-e)B$, so $\beta(x)=\alpha(ex)+\gamma((1-e)x)=\alpha(a)+\gamma(b)=\alpha(a)+\alpha(b)=\alpha(x)$, which shows that $\beta$ is an extension of $\alpha$.
\end{proof}
\begin{cor}\label{Corollary:(4.9)} Let $R$ be a semiregular ring (or just every finitely generated semisimple right ideal lies over a summand of $R_{R}$). If $R$ is a right ss-injective ring, then every $R$-homomorphism from a finitely generated semisimple right ideal to $R$ extends to $R$.\end{cor}
\begin{proof}
By \cite[Theorem 2.9]{13Nich76} and Lemma~\ref{Lemma:(4.8)}.
\end{proof}
\begin{cor}\label{Corollary:(4.10)} Let $S_{r}$ be finitely generated and lie over a summand of $R_{R}$. Then $R$ is a right ss-injective ring if and only if $R$ is right soc-injective.
\end{cor}
\subparagraph*{\textmd{Recall that a ring $R$ is called right minannihilator if every simple right ideal $K$ of $R$ is an annihilator; equivalently, if $rl(K)=K$ (see \cite{14NiYo97}).}}
\begin{lem}\label{Lemma:(4.11)} A ring $R$ is a right minannihilator if and only if $rl(K)=K$ for any simple small right ideal $K$ of $R$.
\end{lem}
\begin{lem}\label{Lemma:(4.12)} A ring $R$ is a left minannihilator if and only if $lr(K)=K$ for any simple small left ideal $K$ of $R$.
\end{lem}
\begin{cor}\label{Corollary:(4.13)} Let $R$ be a right ss-injective ring. Then the following hold:
\noindent (1) If $rl(S_{r}\cap J)=S_{r}\cap J$, then $R$ is a right minannihilator ring.
\noindent (2) If $S_{\ell}\subseteq S_{r}$, then:
\noindent i) $S_{\ell}=S_{r}$.
\noindent ii) $R$ is a left minannihilator ring. \end{cor}
\begin{proof}
(1) Let $aR$ be a simple small right ideal of $R$; then $rl(a)=aR$ by Proposition~\ref{Proposition:(4.6)}(2). Therefore, $R$ is a right minannihilator ring.

(2) i) Since $R$ is a right ss-injective ring, it is right mininjective, and it follows from \cite[Proposition 1.14 (4)]{14NiYo97} that $S_{\ell}=S_{r}$.

ii) If $Ra$ is a simple small left ideal of $R$, then $lr(a)=Ra$ by Corollary~\ref{Corollary:(4.3)}(2), and hence $R$ is a left minannihilator ring.
\end{proof}
\begin{prop}\label{Proposition:(4.14)} The following statements are equivalent for a right ss-injective ring $R$.
\noindent (1) $S_{\ell}\subseteq S_{r}$.
\noindent (2) $S_{\ell}=S_{r}$.
\noindent (3) $R$ is a left mininjective ring. \end{prop}
\begin{proof}
(1)$\Rightarrow$(2) By Corollary~\ref{Corollary:(4.13)}(2)(i).

(2)$\Rightarrow$(3) By Corollary~\ref{Corollary:(4.13)}(2) and \cite[Corollary 2.34]{15NiYu03}, we need only show that $R$ is a right minannihilator ring. Let $aR$ be a simple small right ideal; then $Ra$ is a simple small left ideal by \cite[Theorem 1.14]{14NiYo97}. Let $0\neq x\in rl(aR)$; then $l(a)\subseteq l(x)$. Since $l(a)\leq^{max}R$, we get $l(a)=l(x)$ and hence $Rx$ is a simple left ideal, that is, $x\in S_{r}$. Now, if $Rx=Re$ for some $e=e^{2}\in R$, then $e=rx$ for some $0\neq r\in R$. Since $(e-1)e=0$, we have $(e-1)rx=0$, that is, $(e-1)ra=0$, and this implies that $ra\in eR$. Thus $raR\subseteq eR$, but $eR$ is a semisimple right ideal, so $raR\subseteq^{\oplus}R$ and hence $ra=0$. Therefore, $rx=0$, that is, $e=0$, a contradiction. Thus $x\in J$ and hence $x\in S_{r}\cap J$. Therefore, $aR\subseteq rl(aR)\subseteq S_{r}\cap J$. Now, let $aR\cap yR=0$ for some $y\in rl(aR)$; then $l(aR)+l(yR)=l(aR\cap yR)=R$. Since $y\in rl(aR)$, we have $l(aR)\subseteq l(yR)$ and hence $l(yR)=R$, that is, $y=0$. Therefore, $aR\subseteq^{ess}rl(aR)$, so $aR=rl(aR)$ as desired.

(3)$\Rightarrow$(1) Follows from \cite[Corollary 2.34]{15NiYu03}.
\end{proof}
\subparagraph*{\textmd{Recall that a ring $R$ is said to be right minfull if it is semiperfect, right mininjective and soc$(eR)\neq0$ for each local idempotent $e\in R$ (see \cite{15NiYu03}). A ring $R$ is called right min-$PF$ if it is semiperfect and right mininjective with $S_{r}\subseteq^{ess}R_{R}$, and $lr(K)=K$ for every simple left ideal $K\subseteq Re$ with $e\in R$ a local idempotent (see \cite{15NiYu03}).}}
\begin{cor}\label{Corollary:(4.18)} Let $R$ be a semiperfect, right ss-injective ring with $S_{r}\subseteq^{ess}R_{R}$. Then $R$ is a right minfull ring and the following statements hold:
\noindent (1) Every simple right ideal of $R$ is essential in a summand.
\noindent (2) soc$(eR)$ is simple and essential in $eR$ for every local idempotent $e\in R$. Moreover, $R$ is right finitely cogenerated.
\noindent (3) For every semisimple right ideal $I$ of $R$, there exists $e=e^{2}\in R$ such that $I\subseteq^{ess}rl(I)\subseteq^{ess}eR$.
\noindent (4) $S_{r}\subseteq S_{\ell}\subseteq rl(S_{r})$.
\noindent (5) If $I$ is a semisimple right ideal of $R$ and $aR$ is a simple right ideal of $R$ with $I\cap aR=0$, then $rl(I\oplus aR)=rl(I)\oplus rl(aR)$.
\noindent (6) $rl(\bigoplus_{i=1}^{n}a_{i}R)=\bigoplus_{i=1}^{n}rl(a_{i}R)$, where $\bigoplus_{i=1}^{n}a_{i}R$ is a direct sum of simple right ideals.
\noindent (7) The following statements are equivalent.
\noindent (a) $S_{r}=rl(S_{r})$.
\noindent (b) $K=rl(K)$ for every semisimple right ideal $K$ of $R$.
\noindent (c) $kR=rl(kR)$ for every simple right ideal $kR$ of $R$.
\noindent (d) $S_{r}=S_{\ell}$.
\noindent (e) \emph{soc}$(Re)$ is simple for every local idempotent $e\in R$.
\noindent (f) \emph{soc}$(Re)=S_{r}e$ for every local idempotent $e\in R$.
\noindent (g) $R$ is left mininjective.
\noindent (h) $L=lr(L)$ for every semisimple left ideal $L$ of $R$.
\noindent (i) $R$ is a left minfull ring.
\noindent (j) $S_{r}\cap J=rl(S_{r}\cap J)$.
\noindent (k) $K=rl(K)$ for every semisimple small right ideal $K$ of $R$.
\noindent (l) $L=lr(L)$ for every semisimple small left ideal $L$ of $R$.
\noindent (8) If $R$ satisfies any condition of (7), then $r(S_{\ell}\cap J)\subseteq^{ess}R_{R}$. \end{cor}
\begin{proof}
(1), (2), (3), (4), (5) and (6) are obtained by Corollary~\ref{Corollary:(3.11)} and \cite[Theorem 4.12]{2AmYoZe05}.

(7) The equivalence of (a), (b), (c), (d), (e), (f), (g), (h) and (i) follows from Corollary~\ref{Corollary:(3.11)} and \cite[Theorem 4.12]{2AmYoZe05}.

(b)$\Rightarrow$(j) Clear.

(j)$\Leftrightarrow$(k) By Proposition~\ref{Proposition:(4.6)}(2).

(k)$\Rightarrow$(c) By Corollary~\ref{Corollary:(4.13)}(1).

(h)$\Rightarrow$(l) Clear.

(l)$\Rightarrow$(d) Let $Ra$ be a simple left ideal of $R$. By hypothesis, $lr(A)=A$ for any simple small left ideal $A$ of $R$. By Lemma~\ref{Lemma:(4.12)}, $lr(A)=A$ for any simple left ideal $A$ of $R$, and hence $lr(Ra)=Ra$. Thus $R$ is a right min-PF ring, and it follows from \cite[Theorem 3.14]{14NiYo97} that $S_{r}=S_{\ell}$.

(8) Let $K$ be a right ideal of $R$ such that $r(S_{\ell}\cap J)\cap K=0$. Then $Kr(S_{\ell}\cap J)=0$ and we have $K\subseteq lr(S_{\ell}\cap J)=S_{\ell}\cap J=S_{r}\cap J$. Now, $r((S_{\ell}\cap J)+l(K))=r(S_{\ell}\cap J)\cap K=0$. Since $R$ is left Kasch, $(S_{\ell}\cap J)+l(K)=R$ by \cite[Corollary 8.28(5)]{10Lam99}. Thus $l(K)=R$ and hence $K=0$, so $r(S_{\ell}\cap J)\subseteq^{ess}R_{R}$.
\end{proof}
\subparagraph*{\textmd{Recall that a right $R$-module $M$ is called almost-injective if $M=E\oplus K$, where $E$ is injective and $K$ has zero radical (see \cite{23ZeHuAm11}). Upon reflecting on \cite[Theorem 2.12]{23ZeHuAm11}, we found that it is not always true; the reason is that the homomorphism $h:(L+J)/J\longrightarrow K$ in part (3)$\Rightarrow$(1) of the proof of Theorem 2.12 in \cite{23ZeHuAm11} is not well defined, as the following example shows.}}
\begin{example}\label{Example:(4.19)}
\emph{Referring to the proof of part (3)$\Rightarrow$(1) in \cite[Theorem 2.12]{23ZeHuAm11}, consider $R=\mathbb{Z}_{8}$ and $M=K=<\bar{4}>=\left\{ \bar{0},\bar{4}\right\}$. Thus $M=E\oplus K$, where $E=0$ is a trivial injective $R$-module and $J(K)=0$. Let $f:L\longrightarrow K$ be the identity map, where $L=K$. Then the map $h:(L+J)/J\longrightarrow K$ given by $h(\ell+J)=f(\ell)$ is not well defined, because $J=\bar{4}+J$ but $h(J)=f(\bar{0})=\bar{0}\neq\bar{4}=f(\bar{4})=h(\bar{4}+J)$.}
\end{example}
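To make the failure explicit (a routine verification, not part of \cite{23ZeHuAm11}): $\mathbb{Z}_{8}$ is local with $J=J(\mathbb{Z}_{8})=2\mathbb{Z}_{8}=\{\bar{0},\bar{2},\bar{4},\bar{6}\}$, so $\bar{4}\in J$ and therefore
\[
\bar{0}+J=\bar{4}+J\quad\text{in }(L+J)/J,\qquad\text{while}\qquad f(\bar{0})=\bar{0}\neq\bar{4}=f(\bar{4}),
\]
so any map $h$ satisfying $h(\ell+J)=f(\ell)$ would have to take two different values on the same coset.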
\subparagraph*{\textmd{The following example shows that there is a contradiction in \cite[Theorem 2.12]{23ZeHuAm11}.}}
\begin{example}\label{Example:(4.20)}
\emph{Assume that $R$ is a right artinian ring that is not semisimple (such rings exist; for example, $\mathbb{Z}_{8}$ has this property). Now, let $M$ be a simple right $R$-module; then $M$ is almost-injective. Clearly, $R$ is semilocal (see \cite[Theorem 9.2.2]{9Kas82}), so $M$ would be injective by \cite[Theorem 2.12]{23ZeHuAm11}. Therefore, $R$ would be a $V$-ring and hence a right semisimple ring, a contradiction. In other words, since $\mathbb{Z}_{8}$ is a semilocal ring and $<\bar{4}>=\left\{ \bar{0},\bar{4}\right\}$ is almost-injective as a $\mathbb{Z}_{8}$-module, $<\bar{4}>$ would be injective by \cite[Theorem 2.12]{23ZeHuAm11}. Thus $<\bar{4}>\subseteq^{\oplus}\mathbb{Z}_{8}$, a contradiction.}
\end{example}
\begin{thm}\label{Theorem:(4.21)} The following statements are equivalent for a ring $R$.
\noindent (1) $R$ is semiprimitive and every almost-injective right $R$-module is quasi-continuous.
\noindent (2) $R$ is a right ss-injective and right minannihilator ring, $J$ is right artinian, and every almost-injective right $R$-module is quasi-continuous.
\noindent (3) $R$ is a semisimple ring. \end{thm}
\begin{proof}
(1)$\Rightarrow$(2) and (3)$\Rightarrow$(1) are clear.

(2)$\Rightarrow$(3) Let $M$ be a right $R$-module with zero radical. If $N$ is an arbitrary nonzero submodule of $M$, then $N\oplus M$ is quasi-continuous, and by \cite[Corollary 2.14]{12MoMu90}, $N$ is $M$-injective. Thus $N\leq^{\oplus}M$ and hence $M$ is semisimple. In particular, $R/J$ is a semisimple $R$-module, and hence $R/J$ is artinian by \cite[Theorem 9.2.2(b)]{9Kas82}, so $R$ is a semilocal ring. Since $J$ is right artinian, $R$ is right artinian. So it follows from Corollary~\ref{Corollary:(4.18)}(7) that $R$ is right and left mininjective. Thus \cite[Corollary 4.8]{14NiYo97} implies that $R$ is a $QF$ ring. By hypothesis, $R\oplus(R/J)$ is quasi-continuous (since $R$ is self-injective), so again by \cite[Corollary 2.14]{12MoMu90} we have that $R/J$ is injective. Since $R$ is a $QF$ ring, $R/J$ is projective (see \cite[Theorem 13.6.1]{9Kas82}). Thus the canonical map $\pi:R\longrightarrow R/J$ splits and hence $J\leq^{\oplus}R$, that is, $J=0$. Therefore $R$ is semisimple.
\end{proof}
\section{Strongly SS-Injective Rings}
\begin{prop}\label{Proposition:(5.1)} A ring $R$ is strongly right ss-injective if and only if every finitely generated projective right $R$-module is strongly ss-injective. \end{prop}
\begin{proof}
Since a finite direct sum of strongly ss-injective modules is strongly ss-injective, every finitely generated free right $R$-module is strongly ss-injective. But a direct summand of a strongly ss-injective module is strongly ss-injective. Therefore, every finitely generated projective right $R$-module is strongly ss-injective. The converse is clear.
\end{proof}
\subparagraph*{\textmd{A ring $R$
is called a right Ikeda-Nakayama ring if $l(A\cap B)=l(A)+l(B)$ for
all right ideals $A$ and $B$ of $R$
(see \cite[p.148]{15NiYu03}). In the next proposition, strong ss-injectivity
gives a new version of Ikeda-Nakayama rings.}}
\begin{prop}\label{Proposition:(5.2)} Let $R$ be a strongly right ss-injective ring, then $l(A\cap B)=l(A)+l(B)$ for
all semisimple small right ideals $A$ and all right ideals $B$ of $R$.\end{prop}
\begin{proof}
Let $x\in l(A\cap B)$ and define $\alpha:A+B\longrightarrow R_{R}$
by $\alpha(a+b)=xa$ for all $a\in A$ and $b\in B$. The map $\alpha$
is well defined, because if $a_{1}+b_{1}=a_{2}+b_{2}$, then $a_{1}-a_{2}=b_{2}-b_{1}\in A\cap B$,
so $x(a_{1}-a_{2})=0$ and hence $\alpha(a_{1}+b_{1})=\alpha(a_{2}+b_{2})$.
The map $\alpha$ induces an $R$-homomorphism
$\tilde{\alpha}:(A+B)/B\longrightarrow R_{R}$ given by $\tilde{\alpha}(a+B)=xa$
for all $a\in A$. Since $(A+B)/B\subseteq \mathrm{soc}(R/B)\cap J(R/B)$ and
$R$ is strongly right ss-injective, $\tilde{\alpha}$ extends to an $R$-homomorphism
$\gamma:R/B\longrightarrow R_{R}$. If $\gamma(1+B)=y$ for some $y\in R$,
then $y(a+b)=xa$ for all $a\in A$ and $b\in B$. In particular,
$ya=xa$ for all $a\in A$ and $yb=0$ for all $b\in B$. Hence $x=(x-y)+y\in l(A)+l(B)$.
Therefore, $l(A\cap B)\subseteq l(A)+l(B)$. Since the converse inclusion
always holds, the proof is complete.
\end{proof}
\subparagraph*{\textmd{Recall that a ring $R$
is said to be right simple $J$-injective
if, for any small right ideal $I$
and any $R$-homomorphism
$\alpha:I\longrightarrow R_{R}$ with simple image, $\alpha$ is given by left multiplication by
some $c\in R$ (see \cite{21YoZh04}).}}
\begin{cor}\label{Corollary:(5.3)} Every strongly right ss-injective ring is right simple $J$-injective. \end{cor}
\begin{proof} By Proposition~\ref{Proposition:(3.1)}.\end{proof}
\begin{rem}\label{Remark:(5.4)}\emph{ The converse of Corollary~\ref{Corollary:(5.3)} is not true (see Example~\ref{Example:(5.7)})}.\end{rem}
\begin{prop}\label{Proposition:(5.5)} Let $R$ be a right Kasch and strongly right ss-injective ring. Then:
\noindent (1) $rl(K)=K$ for every small right ideal $K$. Moreover, $R$ is right minannihilator.
\noindent (2) If $R$ is left Kasch, then $r(J)\subseteq^{ess}R_{R}$. \end{prop}
\begin{proof}
(1) By Corollary~\ref{Corollary:(5.3)} and \cite[Lemma 2.4]{21YoZh04}.

(2) Let $K$ be a right ideal of $R$
with $r(J)\cap K=0$. Then $Kr(J)=0$ and we obtain $K\subseteq lr(J)=J$,
because $R$ is left Kasch.
By (1), we have $r(J+l(K))=r(J)\cap K=0$ and this means that $J+l(K)=R$
(since $R$ is left Kasch).
Thus $K=0$ and hence $r(J)\subseteq^{ess}R_{R}$.
\end{proof}
\subparagraph*{\textmd{The following examples show that the classes of strongly
ss-injective rings, soc-injective rings, and small injective rings
are distinct.}}
\begin{example}\label{Example:(5.6)}
\emph{Let $R=\mathbb{Z}_{(p)}=\{\frac{m}{n}\mid p \nmid n\}$ be the localization of $\mathbb{Z}$ at the prime
$p$. Then $R$ is a commutative
local ring with zero socle, but $R$ is not principally small injective
(see \cite[Example 4]{20Xia11}). Since $S_{r}=0$, $R$
is a strongly soc-injective ring and hence $R$
is a strongly ss-injective ring.}
\end{example}
\begin{example}\label{Example:(5.7)}
\emph{Let $R=\left\{ \begin{array}{cc}
\left(\begin{array}{cc}
n & x\\
0 & n\end{array}\right)\mid & n\in\mathbb{Z}, \, x\in\mathbb{Z}_{2}\end{array}\right\} $. Thus $R$ is a commutative ring, $J=S_{r}=\left\{ \begin{array}{cc}
\left(\begin{array}{cc}
0 & x\\
0 & 0\end{array}\right)\mid & x\in\mathbb{Z}_{2}\end{array}\right\} $ and $R$ is small injective
(see \cite[Example(i)]{19ThQu09}). Let $A=J$ and
}
\emph{\noindent $B=\left\{ \begin{array}{cc}
\left(\begin{array}{cc}
2n & 0\\
0 & 2n\end{array}\right)\mid & n\in\mathbb{Z}\end{array}\right\} $, then $l(A)=\left\{ \begin{array}{cc}
\left(\begin{array}{cc}
2n & y\\
0 & 2n\end{array}\right)\mid & n\in\mathbb{Z},\,y\in\mathbb{Z}_{2}\end{array}\right\} $ and
}
\noindent\emph{ $l(B)=\left\{ \begin{array}{cc}
\left(\begin{array}{cc}
0 & y\\
0 & 0\end{array}\right)\mid & y\in\mathbb{Z}_{2}\end{array}\right\} $. Thus $l(A)+l(B)=\left\{ \begin{array}{cc}
\left(\begin{array}{cc}
2n & y\\
0 & 2n\end{array}\right)\mid & n\in\mathbb{Z},\, y\in\mathbb{Z}_{2}\end{array}\right\} $.}
\noindent\emph{ Since $A\cap B=0$, we have $l(A\cap B)=R$, and this implies that $l(A)+l(B)\neq l(A\cap B)$.
Therefore $R$ is not strongly ss-injective and not strongly soc-injective, by Proposition~\ref{Proposition:(5.2)}.}
\end{example}
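The annihilator computations in Example \ref{Example:(5.7)} can be checked mechanically. The following finite sanity check (our illustration, not part of the paper) models $R$ as pairs $(n,x)\in\mathbb{Z}\times\mathbb{Z}_{2}$, i.e.\ the matrices $\left(\begin{smallmatrix}n & x\\ 0 & n\end{smallmatrix}\right)$, with product $(n,x)(m,y)=(nm,\;ny+mx \bmod 2)$, and tests annihilator membership for a range of $n$.

```python
# Finite sanity check of Example 5.7: R = { (n, x) : n in Z, x in Z_2 }.
# A = J is generated by (0, 1) and B by (2, 0); we test the claimed
# descriptions of l(A) and l(B) for |n| <= 10.

def mul(r, s):
    (n, x), (m, y) = r, s
    return (n * m, (n * y + m * x) % 2)

def annihilates(r, gen):
    return mul(r, gen) == (0, 0)

RANGE = [(n, x) for n in range(-10, 11) for x in (0, 1)]

# l(A) is claimed to be { (2n, y) }: first entry even.
for r in RANGE:
    assert annihilates(r, (0, 1)) == (r[0] % 2 == 0)

# l(B) is claimed to be { (0, y) }: first entry zero.
for r in RANGE:
    assert annihilates(r, (2, 0)) == (r[0] == 0)

# Every element of l(A) + l(B) has even first entry, while
# l(A ∩ B) = l(0) = R contains, e.g., (1, 0); hence the two sides differ.
print("Example 5.7 annihilator check passed")
```

Since $A\cap B=0$ forces $l(A\cap B)=R$, the check above confirms that $l(A)+l(B)$ is a proper subset, as claimed in the example.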
\begin{example}\label{Example:(5.8)}
\emph{Let $F=\mathbb{Z}_{2}$ be the field of two elements,
$F_{i}=F$ for $i=1,2,3,\dots$, $Q=\prod_{i=1}^{\infty}F_{i}$
and $S=\bigoplus_{i=1}^{\infty}F_{i}$.
If $R$ is the subring
of $Q$ generated by $1$
and $S$, then $R$
is a von Neumann regular ring (see \cite[Example (1), p.28]{22Zey14}). Since
$R$ is commutative,
every simple $R$-module
is injective by \cite[Corollary 3.73]{10Lam99}. Thus $R$
is a $V$-ring and hence
$J(N)=0$ for every right $R$-module
$N$. It follows from
Corollary~\ref{Corollary:(3.9)} that every $R$-module
is strongly ss-injective. In particular, $R$
is a strongly ss-injective ring. But $R$
is not soc-injective (see \cite[Example (1)]{22Zey14}).}
\end{example}
\begin{example}\label{Example:(5.9)}
\emph{Let $R=\mathbb{Z}_{2}[x_{1},x_{2},\dots]$, where $\mathbb{Z}_{2}$
is the field of two elements, $x_{i}^{3}=0$ for all $i$, $x_{i}x_{j}=0$
for all $i\neq j$ and $x_{i}^{2}=x_{j}^{2}\neq0$ for all $i$ and $j$.
If $m=x_{i}^{2}$, then $R$
is a commutative, semiprimary, local, soc-injective ring with $J=\mathrm{span}\{m,x_{1},x_{2},\dots\}$,
and $R$ has simple
essential socle $J^{2}=\mathbb{Z}_{2}m$ (see \cite[Example 5.7]{2AmYoZe05}).
It follows from \cite[Example 5.7]{2AmYoZe05} that the $R$-homomorphism
$\gamma:J\longrightarrow R$ given by $\gamma(a)=a^{2}$
for all $a\in J$ has simple image but cannot be extended to $R$.
Thus $R$ is not simple
$J$-injective and not
small injective, so it follows from Corollary~\ref{Corollary:(5.3)} that $R$
is not strongly ss-injective.}
\end{example}
\subparagraph*{\textmd{Recall that $R$
is said to be a right minsymmetric ring if, whenever $aR$ is a simple right ideal,
$Ra$ is a simple left ideal (see \cite{14NiYo97}). Every right mininjective
ring is right minsymmetric by \cite[Theorem 1.14]{14NiYo97}.}}
\begin{thm}\label{Theorem:(5.10)} A ring $R$ is QF if and only if $R$ is a strongly right
ss-injective and right noetherian ring with $S_{r}\subseteq^{ess}R_{R}$.\end{thm}
\begin{proof}
($\Rightarrow$) This is clear.

($\Leftarrow$) By Corollary~\ref{Corollary:(4.3)}(1), $R$
is right minsymmetric. It follows from \cite[Lemma 2.2]{19ThQu09} that $R$
is right perfect. Thus $R$
is strongly right soc-injective, by Theorem~\ref{Theorem:(3.13)}. Since $S_{r}\subseteq^{ess}R_{R}$,
it follows from \cite[Corollary 3.2]{2AmYoZe05} that $R$
is self-injective and hence $R$
is $QF$.\end{proof}
\begin{cor}\label{Corollary:(5.11)} For a ring $R$ the following statements are true.
\noindent (1) $R$ is semisimple if and only if $S_{r}\subseteq^{ess}R_{R}$
and every semisimple right $R$-module
is strongly soc-injective.
\noindent (2) $R$ is QF if and only if $R$ is
strongly right ss-injective, semiperfect
with essential right socle and $R/S_{r}$ is noetherian as right $R$-module. \end{cor}
\begin{proof}
(1) Suppose that $S_{r}\subseteq^{ess}R_{R}$ and
every semisimple right $R$-module
is strongly soc-injective, then $R$
is a right noetherian right V-ring by \cite[Proposition 3.12]{2AmYoZe05}, so
it follows from Corollary~\ref{Corollary:(3.9)} that $R$
is strongly right ss-injective. Thus $R$
is QF by Theorem~\ref{Theorem:(5.10)}.
But $J=0$, so $R$ is
semisimple. The converse is clear.
(2) By \cite[Theorem 2.9]{14NiYo97}, $J=Z_{r}$. Since
$R/Z_{2}^{r}$ is a homomorphic image of $R/Z_{r}$ and $R$
is a semilocal ring, $R$
is right $t$-semisimple.
By Corollary~\ref{Corollary:(3.19)}, $R$
is right noetherian, so it follows from Theorem~\ref{Theorem:(5.10)} that $R$
is $QF$. The converse
is clear.\end{proof}
\begin{thm}\label{Theorem:(5.12)} A ring $R$ is $QF$ if and only if $R$ is strongly right
ss-injective, $l(J^{2})$ is a countably generated left ideal, $S_{r}\subseteq^{ess}R_{R}$,
and the chain $r(x_{1})\subseteq r(x_{2}x_{1})\subseteq\cdots\subseteq r(x_{n}x_{n-1}\cdots x_{1})\subseteq\cdots$
terminates for every infinite sequence $x_{1},x_{2},\dots$ in $R$.\end{thm}
\begin{proof}
($\Rightarrow$) Clear.

($\Leftarrow$) By \cite[Lemma 2.2]{19ThQu09}, $R$
is right perfect. Since $S_{r}\subseteq^{ess}R_{R}$, $R$
is right Kasch (by \cite[Theorem 3.7]{14NiYo97}). Since $R$
is strongly right ss-injective, $R$
is right simple $J$-injective,
by Corollary~\ref{Corollary:(5.3)}. Now, by Proposition~\ref{Proposition:(5.5)}(1) we have $rl(S_{r}\cap J)=S_{r}\cap J$,
so it follows from Corollary~\ref{Corollary:(4.18)}(7) that $S_{r}=S_{\ell}$. By \cite[Lemma 3.36]{15NiYu03}, $S_{2}^{r}=l(J^{2})$. The result now follows from
\cite[Theorem 2.18]{21YoZh04}.\end{proof}
\begin{rem}\label{Remark:(5.13)}
\emph{The condition $S_{r}\subseteq^{ess}R_{R}$ in Theorem~\ref{Theorem:(5.10)}
and Theorem~\ref{Theorem:(5.12)} cannot be dropped: for example, $\mathbb{Z}$ is a
strongly ss-injective noetherian ring but not $QF$.}\end{rem}
The following two results are extensions of Proposition 5.8 in \cite{2AmYoZe05}.
\begin{cor}\label{Corollary:(5.15)} The following statements are equivalent.
\noindent (1) $R$ is a $QF$ ring.
\noindent (2) $R$ is a left perfect, strongly left and right ss-injective ring. \end{cor}
\begin{proof}
By Corollary~\ref{Corollary:(5.3)} and \cite[Corollary 2.12]{21YoZh04}.\end{proof}
\begin{thm}\label{Theorem:(5.16)} The following statements are equivalent:
\noindent (1) $R$ is a $QF$ ring.
\noindent (2) $R$ is a strongly left and right ss-injective, right Kasch and $J$ is left $t$-nilpotent.
\noindent (3) $R$ is a strongly left and right ss-injective, left Kasch and $J$ is left $t$-nilpotent. \end{thm}
\begin{proof}
(1)$\Rightarrow$(2) and (1)$\Rightarrow$(3) are clear.

(3)$\Rightarrow$(1) Suppose that $xR$ is a simple
right ideal. Then either $rl(x)=xR\subseteq^{\oplus}R_{R}$ or $x\in J$.
If $x\in J$, then $rl(x)=xR$ (since $R$
is right minannihilator), so it follows from Theorem~\ref{Theorem:(3.4)} that $rl(x)\subseteq^{ess}E\subseteq^{\oplus}R_{R}$.
Therefore, $rl(x)$ is essential in a direct summand of $R_{R}$ for
every simple right ideal $xR$. Let $K$
be a maximal left ideal of $R$.
Since $R$ is left Kasch,
$r(K)\neq0$ by \cite[Corollary 8.28]{10Lam99}. Choose $0\neq y\in r(K)$;
then $K\subseteq l(y)$ and we conclude that $K=l(y)$. Since $Ry\cong R/l(y)$,
$Ry$ is a simple left ideal. But $R$
is a left mininjective ring, so $yR$ is a simple right ideal by \cite[Theorem 1.14]{14NiYo97}, and this implies that $r(K)\subseteq^{ess}eR$ for
some $e^{2}=e\in R$ (since $r(K)=rl(y)$). Thus $R$
is semiperfect by \cite[Lemma 4.1]{15NiYu03} and hence $R$
is left perfect (since $J$
is left $t$-nilpotent),
so it follows from Corollary~\ref{Corollary:(5.15)} that $R$
is $QF$.

(2)$\Rightarrow$(1) Similar to the proof of (3)$\Rightarrow$(1).\end{proof}
\begin{thm}\label{Theorem:(5.17)} The ring $R$ is $QF$ if and only if $R$ is strongly left and right ss-injective, left and right Kasch, and the chain $l(a_{1})\subseteq l(a_{1}a_{2})\subseteq l(a_{1}a_{2}a_{3})\subseteq\cdots$ terminates for every $a_{1},a_{2},\dots\in Z_{\ell}$. \end{thm}
\begin{proof}
($\Rightarrow$) Clear.

($\Leftarrow$) By Proposition~\ref{Proposition:(5.5)}, $l(J)$ is essential
in $_{R}R$; thus $J\subseteq Z_{\ell}$. Let $a_{1},a_{2},\dots\in J$.
We have $l(a_{1})\subseteq l(a_{1}a_{2})\subseteq l(a_{1}a_{2}a_{3})\subseteq\cdots$,
so by hypothesis there exists $k\in\mathbb{N}$ such that $l(a_{1}\cdots a_{k})=l(a_{1}\cdots a_{k}a_{k+1})$.
Suppose that $a_{1}\cdots a_{k}\neq0$; then $R(a_{1}\cdots a_{k})\cap l(a_{k+1})\neq0$
(since $l(a_{k+1})$ is essential in $_{R}R$). Thus $ra_{1}\cdots a_{k}\neq0$
and $ra_{1}\cdots a_{k}a_{k+1}=0$ for some $r\in R$, a contradiction.
Therefore $a_{1}\cdots a_{k}=0$ and hence $J$
is left $t$-nilpotent,
so it follows from Theorem~\ref{Theorem:(5.16)} that $R$
is $QF$.\end{proof}
\begin{cor}\label{Corollary:(5.18)} The ring $R$ is $QF$ if and only if $R$ is strongly left and right ss-injective with essential right socle, and the chain $r(a_{1})\subseteq r(a_{2}a_{1})\subseteq r(a_{3}a_{2}a_{1})\subseteq\cdots$
terminates for every infinite sequence $a_{1},a_{2},...$ in $R$.\end{cor}
\begin{proof}
By \cite[Lemma 2.2]{19ThQu09} and Corollary~\ref{Corollary:(5.15)}.
\end{proof}
\end{document} |
\begin{document}
\title[Hilbert functions of general intersections]
{On Hilbert functions of\\ general intersections of ideals}
\author{Giulio Caviglia}
\address{
Department of Mathematics,
Purdue University,
West Lafayette,
IN 47901, USA.
}
\email{[email protected]}
\author{Satoshi Murai}
\address{
Satoshi Murai,
Department of Mathematical Science,
Faculty of Science,
Yamaguchi University,
1677-1 Yoshida, Yamaguchi 753-8512, Japan.
}
\email{[email protected]}
\thanks{The work of the first author was supported by a grant from the
Simons Foundation (209661 to G. C.).
The work of the second author was supported by KAKENHI 22740018.
}
\subjclass[2010]{Primary 13P10, 13C12, Secondary 13A02}
\maketitle
\begin{abstract}
Let $I$ and $J$ be homogeneous ideals in a standard graded polynomial ring.
We study upper bounds of the Hilbert function of the intersection of $I$ and $g(J)$,
where $g$ is a general change of coordinates.
Our main result gives a generalization of Green's hyperplane section theorem.
\end{abstract}
\section{Introduction}
Hilbert functions of graded $K$-algebras
are important invariants studied in several areas of mathematics.
In the theory of Hilbert functions,
one of the most useful tools is Green's hyperplane section theorem,
which gives a sharp upper bound for the Hilbert function of $R/hR$, where $R$ is a standard graded $K$-algebra and $h$ is a general linear form, in terms of the Hilbert function of $R$.
This result of Green has been extended to the case of general homogeneous polynomials
by Herzog and Popescu \cite{HP} and Gasharov \cite{Ga}.
In this paper, we study a further generalization of these theorems.
Let $K$ be an infinite field
and $S=K[x_1,\dots,x_n]$ a standard graded polynomial ring.
Recall that the \textit{Hilbert function} $H(M,-) : \mathbb{Z} \to \mathbb{Z}$ of a finitely generated graded $S$-module $M$
is the numerical function defined by
$$H(M,d)=\dim_K M_d,$$
where $M_d$ is the graded component of $M$ of degree $d$.
A set $W$ of monomials of $S$ is said to be \textit{lex}
if, for all monomials $u,v \in S$ of the same degree,
$u \in W$ and $v>_{\mathrm{lex}} u$ imply $v \in W$,
where $>_{\mathrm{lex}}$ is the lexicographic order induced by the ordering $x_1> \cdots > x_n$.
We say that a monomial ideal $I \subset S$ is a \textit{lex ideal}
if the set of monomials in $I$ is lex.
The classical Macaulay's theorem \cite{Ma} guarantees that,
for any homogeneous ideal $I \subset S$,
there exists a unique lex ideal, denoted by $I^{{\mathrm{lex}}}$, with the same Hilbert function as $I$.
Green's hyperplane section theorem \cite{Gr} states
\begin{theorem}[Green's hyperplane section theorem]
\label{green}
Let $I \subset S$ be a homogeneous ideal.
For a general linear form $h \in S_1$,
$$H(I \cap (h),d) \leq H(I^{\mathrm{lex}} \cap (x_n),d) \ \ \mbox{for all } d \geq 0.$$
\end{theorem}
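The lex sets underlying Macaulay's theorem and Theorem \ref{green} are easy to enumerate. The sketch below (our illustration; the function names are ours) lists the degree-$d$ monomials of $K[x_1,\dots,x_n]$ as exponent vectors, sorted by the lexicographic order induced by $x_1>\cdots>x_n$, and extracts the lex segment of a prescribed size; a lex ideal is exactly an ideal whose monomials form such a segment in every degree.

```python
# Enumerate degree-d monomials of K[x_1, ..., x_n] as exponent vectors
# and return the lex segment of length k: the k largest monomials in the
# lexicographic order induced by x_1 > x_2 > ... > x_n.

from itertools import combinations_with_replacement

def monomials(n, d):
    """All exponent vectors (a_1, ..., a_n) with a_1 + ... + a_n = d."""
    out = []
    for combo in combinations_with_replacement(range(n), d):
        a = [0] * n
        for i in combo:
            a[i] += 1
        out.append(tuple(a))
    return out

def lex_segment(n, d, k):
    # Comparing exponent vectors componentwise from x_1 realizes >_lex.
    return sorted(monomials(n, d), reverse=True)[:k]

# In K[x_1, x_2, x_3], the lex set of 3 monomials of degree 2 is
# {x_1^2, x_1 x_2, x_1 x_3}:
assert lex_segment(3, 2, 3) == [(2, 0, 0), (1, 1, 0), (1, 0, 1)]
```

Macaulay's theorem says that prescribing $H(I,d)$ determines, via these segments, the monomials of $I^{\mathrm{lex}}$ in each degree.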
Green's hyperplane section theorem is known to be useful to prove several important results on Hilbert functions such as Macaulay's theorem \cite{Ma} and Gotzmann's persistence theorem \cite{Go}, see \cite{Gr}.
Herzog and Popescu \cite{HP} (in characteristic $0$) and Gasharov \cite{Ga} (in positive characteristic) generalized Green's hyperplane section theorem in the following form.
\begin{theorem}[Herzog--Popescu, Gasharov]
\label{hpg}
Let $I \subset S$ be a homogeneous ideal.
For a general homogeneous polynomial $h \in S$ of degree $a$,
$$H(I \cap (h),d) \leq H(I^{\mathrm{lex}} \cap(x_n^a),d) \ \ \mbox{for all } d \geq 0.$$
\end{theorem}
We study a generalization of Theorems \ref{green} and \ref{hpg}.
Let $>_{\mathrm{{oplex}}}$ be the lexicographic order on $S$ induced by the ordering $x_n> \cdots > x_1$.
A set $W$ of monomials of $S$ is said to be \textit{opposite lex}
if, for all monomials $u,v \in S$ of the same degree,
$u \in W$ and $v>_{\mathrm{{oplex}}} u$ imply $v \in W$.
Also, we say that a monomial ideal $I \subset S$ is an \textit{opposite lex ideal}
if the set of monomials in $I$ is opposite lex.
For a homogeneous ideal $I \subset S$,
let $I^{\mathrm{{oplex}}}$ be the opposite lex ideal with the same Hilbert function as $I$
and let $\ensuremath{\mathrm{Gin}}_\sigma(I)$ be the generic initial ideal (\cite[\S 15.9]{Ei}) of $I$
with respect to a term order $>_\sigma$.
In Section 3 we will prove the following
\begin{theorem}
\label{intersection} Suppose $\mathrm{char}(K)=0$.
Let $I\subset S$ and $J \subset S$ be homogeneous ideals such that $\ensuremath{\mathrm{Gin}}_{\mathrm{lex}}(J)$ is lex.
For a general change of coordinates $g$ of $S$,
$$H(I \cap g(J),d) \leq H(I^{\mathrm{lex}} \cap J^{\mathrm{{oplex}}} ,d)
\ \ \mbox{for all } d\geq 0.$$
\end{theorem}
Theorems \ref{green} and \ref{hpg}, assuming that the characteristic is zero, are special cases of the above theorem when $J$ is principal.
Note that Theorem \ref{intersection} is sharp, since equality holds if $I$ is lex and $J$ is opposite lex (Remark \ref{rem1}). Note also that if $\ensuremath{\mathrm{Gin}}_\sigma(J)$ is lex for some term order $>_\sigma$, then $\ensuremath{\mathrm{Gin}}_{\mathrm{lex}}(J)$ must be lex as well (\cite[Corollary 1.6]{Co1}).
Unfortunately, the assumption on $J$, as well as the assumption on the characteristic of $K$, in Theorem \ref{intersection}
are essential (see Remark \ref{example}).
However, we prove the following result for the product of ideals.
\begin{theorem}
\label{product}
Suppose $\mathrm{char}(K)=0$.
Let $I\subset S$ and $J \subset S$ be homogeneous ideals.
For a general change of coordinates $g$ of $S$,
$$H(I g(J),d) \geq H(I^{\mathrm{lex}} J^{\mathrm{{oplex}}} ,d)
\ \ \mbox{for all } d\geq 0.$$
\end{theorem}
Inspired by Theorems \ref{intersection} and \ref{product},
we suggest the following conjecture.
\begin{conjecture}
\label{conj} Suppose $\mathrm{char}(K)=0.$
Let $I\subset S$ and $J \subset S$ be homogeneous ideals such that $\ensuremath{\mathrm{Gin}}_{\mathrm{lex}}(J)$ is lex.
For a general change of coordinates $g$ of $S$,
\[
\dim_K \ensuremath{\mathrm{Tor}}_i(S/I,S/g(J))_d \leq \dim_K \ensuremath{\mathrm{Tor}}_i(S/I^{\mathrm{lex}},S/J^{\mathrm{{oplex}}})_d
\ \ \mbox{for all } d\geq 0.
\]
\end{conjecture}
Theorems \ref{intersection} and \ref{product}
show that the conjecture is true if $i=0$ or $i=1.$
The conjecture is also known to be true when $J$ is generated by linear forms by the result of Conca \cite[Theorem 4.2]{Co}. Theorem \ref{2.5}, which we prove later, also provides some evidence supporting the above inequality.
\section{Dimension of $\ensuremath{\mathrm{Tor}}$ and general change of coordinates}
Let ${GL}_n(K)$ be the general linear group of invertible $n \times n$ matrices over $K$.
Throughout the paper, we identify each element $h=(a_{ij}) \in {GL}_n(K)$
with the change of coordinates defined by $h(x_i)=\sum_{j=1}^n a_{ji}x_j$
for all $i$.
We say that a property (P) holds for a general $g \in {GL}_n(K)$
if there is a non-empty Zariski open subset $U \subset {GL}_n(K)$
such that (P) holds for all $g \in U$.
We first prove that, for two homogeneous ideals $I \subset S$ and $J \subset S$,
the Hilbert function of $I \cap g(J)$ and that of $I g(J)$ are well defined for a general $g \in {GL}_n (K)$, i.e.\ there exists a non-empty Zariski open subset of ${GL}_n(K)$ on which the Hilbert function of $I \cap g(J)$ and that of $I g(J)$ are constant.
\begin{lemma}
\label{2-0}
Let $I \subset S$ and $J \subset S$ be homogeneous ideals.
For a general change of coordinates $g \in {GL}_n(K)$,
the functions $H(I \cap g(J),-)$
and $H(I g(J),-)$ are well defined.
\end{lemma}
\begin{proof}
We prove the statement for $I \cap g(J)$
(the proof for $Ig(J)$ is similar).
It is enough to prove the same statement for $I+g(J)$.
We prove that $\ensuremath{\mathrm{in}}_{\mathrm{lex}}(I+g(J))$ is constant for a general $g \in {GL}_n(K)$.
Let $t_{kl}$, where $1 \leq k,l \leq n$, be indeterminates,
$\tilde K=K(t_{kl}: 1 \leq k,l \leq n)$ the field of fractions of $K[t_{kl}: 1 \leq k,l \leq n]$
and $A=\tilde K [x_1,\dots,x_n]$.
Let $\rho: S \to A$ be the ring map induced by $\rho(x_k)= \sum_{l=1}^n t_{lk} x_l$ for $k=1,2,\dots,n$,
and $\tilde L= I A + \rho(J)A \subset A$.
Let $L \subset S$ be the monomial ideal with the same monomial generators
as $\ensuremath{\mathrm{in}}_{\mathrm{lex}}(\tilde L)$.
We prove $\ensuremath{\mathrm{in}}_{\mathrm{lex}}(I+g(J))=L$ for a general $g \in {GL}_n(K)$.
Let $f_1,\dots,f_s$ be generators of $I$
and $g_1,\dots,g_t$ those of $J$.
Then the polynomials $f_1,\dots,f_s,\rho(g_1),\dots,\rho(g_t)$ are generators of $\tilde L$.
By the Buchberger algorithm,
one can compute a Gr\"obner basis of $\tilde L$ from
$f_1,\dots,f_s,\rho(g_1),\dots,\rho(g_t)$ in finitely many steps.
Consider all elements $h_1,\dots,h_m \in K(t_{kl}:1 \leq k,l \leq n)$
which appear as coefficients of polynomials (including numerators and denominators of rational functions)
in the process of computing a Gr\"obner basis of $\tilde L$ by the Buchberger algorithm.
Consider a non-empty Zariski open subset $U \subset {GL}_n(K)$
such that $h_i(g) \in K \setminus \{0\}$ for any $g \in U$,
where $h_i(g)$ is an element obtained from $h_i$ by substituting $t_{kl}$ with entries of $g$.
By construction
$\ensuremath{\mathrm{in}}_{\mathrm{lex}}(I+g(J))=L$
for every $g \in U$.
\end{proof}
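The conclusion of Lemma \ref{2-0} can be observed computationally: for two sufficiently generic changes of coordinates, $\ensuremath{\mathrm{in}}_{\mathrm{lex}}(I+g(J))$ has the same monomial generators. The sketch below (our illustration; the specific ideals and the "generic" integer matrices are ad hoc choices, and it assumes SymPy is available) compares the sets of lex leading monomials of reduced Gr\"obner bases.

```python
# For I = (x^2) and J = (y^2) in Q[x, y], apply two "random enough"
# changes of coordinates g(y) = c*x + d*y to J and compare the sets of
# lex leading monomials of I + g(J); for general g they coincide.

from sympy import symbols, groebner, LM

x, y = symbols('x y')

def lex_leading_monomials(c, d):
    """Lex leading monomials of the reduced Groebner basis of (x^2, (c*x+d*y)^2)."""
    G = groebner([x**2, (c*x + d*y)**2], x, y, order='lex')
    return {LM(p, x, y, order='lex') for p in G.exprs}

lm1 = lex_leading_monomials(2, 3)
lm2 = lex_leading_monomials(5, 7)
assert lm1 == lm2   # the initial ideal is constant for general g
```

The open set $U$ of the lemma corresponds here to the coordinate changes avoiding the vanishing of finitely many polynomial conditions in $c$ and $d$.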
\begin{remark}\label{ConstantHF} The method used to prove the above lemma can be easily generalized to a number of situations.
For instance, for a general $g \in {GL}_n(K)$ and a finitely generated graded $S$-module $M$,
the Hilbert function of $\ensuremath{\mathrm{Tor}}_i(M,S/g(J))$ is well defined for every $i$. Let
$\mathbb F: 0 \stackrel{\varphi_{p+1}}{\longrightarrow}
\mathbb F_p \stackrel{\varphi_p}{\longrightarrow}
\cdots
\longrightarrow
\mathbb F_1 \stackrel{\varphi_1}{\longrightarrow}
\mathbb F_0
\stackrel{\varphi_0}{\longrightarrow}0$
be a graded free resolution of $M.$ Given a change of coordinates $g$, one first notes that for every $i=0,\dots,p$, the Hilbert function $H(\ensuremath{\mathrm{Tor}}_i(M,S/g(J)),-)$ is equal to the difference between the Hilbert function of $\rm{Ker}(\pi_{i-1} \circ \varphi_i)$ and the one of $\varphi_{i+1}(F_{i+1}) + F_i \otimes_S g(J)$ where $\pi_{i-1}: F_{i-1} \rightarrow F_{i-1} \otimes_S S/g(J)$ is the canonical projection.
Hence we have
\begin{align}\label{H-TOR}
\nonumber H(\ensuremath{\mathrm{Tor}}_i & (M,S/g(J)),-)= \\
&H(F_i, -) -H(\varphi_i(F_i)+ g(J) F_{i-1},-)
+ H(g(J) F_{i-1},-)\\
\nonumber &- H(\varphi_{i+1}(F_{i+1}) + g(J) F_i,-).
\end{align}
Clearly $H(F_i,-)$ and $H(g(J) F_{i-1},-)$ do not depend on $g.$
Thus it is enough to show that, for a general $g$, the Hilbert functions of $\varphi_i(F_i)+g(J) F_{i-1}$ are well defined for all $i=0,\dots,p+1.$ This can be seen as in Lemma \ref{2-0}.
\end{remark}
Next, we present two lemmas which will allow us to reduce the proofs of the theorems in the third section to combinatorial considerations regarding Borel-fixed ideals.
The first lemma is probably well known to experts,
but we include its proof for the sake of the exposition.
The ideas used in Lemma \ref{lemma2} are similar to those of \cite[Lemma 2.1]{Ca1}; they rely on the construction of a flat family and on the use of the structure theorem for finitely generated modules over principal ideal domains.
\begin{lemma}
\label{lemma1}
Let $M$ be a finitely generated graded $S$-module
and $J \subset S$ a homogeneous ideal.
For a general change of coordinates $g \in {GL}_n(K)$ we have that
$\dim_K \ensuremath{\mathrm{Tor}}_i(M,S/g(J))_j \leq \dim_K \ensuremath{\mathrm{Tor}}_i(M,S/J)_j$ for all $i$ and for all $j.$
\end{lemma}
\begin{proof}
Let $\mathbb F$ be a resolution of $M$, as in Remark \ref{ConstantHF}. Fix $i$ with $0\leq i \leq p+1$ and notice that, by equation \eqref{H-TOR}, it is sufficient to show that $H(\varphi_i(F_i)+g(J) F_{i-1},-)\geq
H(\varphi_i(F_i)+JF_{i-1},-)$. We fix a degree $d$ and consider the monomial basis of $(F_{i-1})_d$.
Given a change of coordinates $h=(a_{kl}) \in {GL}_n(K)$, we present the vector space $V_d=(\varphi_i(F_i)+h(J)F_{i-1})_d$ with respect to this basis. The dimension of $V_d$ equals the rank of a matrix whose entries are polynomials in the $a_{kl}$'s with coefficients in $K$. Such a rank is maximal when the change of coordinates $h$ is general.
\end{proof}
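The rank argument closing the proof can be made concrete in a toy case (ours, not the authors'): with $J=(x^2)$ and $h(x)=x+ay$ in two variables, the degree-$2$ component of $x^2\,K + h(J)$ is spanned by the coefficient vectors of $x^2$ and $(x+ay)^2=x^2+2axy+a^2y^2$ in the monomial basis $\{x^2,xy,y^2\}$, and its dimension, a matrix rank with polynomial entries in $a$, is maximal for general $a$ and drops only on the closed subset $a=0$.

```python
# Rank over Q of a small coefficient matrix depending on a parameter a,
# illustrating that the rank (hence the Hilbert function) is maximal
# for a general change of coordinates.

from fractions import Fraction

def rank(rows):
    """Gaussian elimination over Q; returns the rank."""
    rows = [list(map(Fraction, r)) for r in rows]
    rk, col = 0, 0
    while rk < len(rows) and col < len(rows[0]):
        piv = next((i for i in range(rk, len(rows)) if rows[i][col] != 0), None)
        if piv is None:
            col += 1
            continue
        rows[rk], rows[piv] = rows[piv], rows[rk]
        for i in range(len(rows)):
            if i != rk and rows[i][col] != 0:
                f = rows[i][col] / rows[rk][col]
                rows[i] = [u - f * v for u, v in zip(rows[i], rows[rk])]
        rk += 1
        col += 1
    return rk

def dim_degree2(a):
    # Coefficient vectors of x^2 and (x + a*y)^2 in the basis (x^2, xy, y^2).
    return rank([[1, 0, 0], [1, 2 * a, a * a]])

assert dim_degree2(0) == 1                             # special coordinates
assert all(dim_degree2(a) == 2 for a in (1, 2, -3))    # general a: maximal rank
```

The locus where the rank drops is cut out by the vanishing of the $2\times 2$ minors, mirroring the Zariski open set in the proof.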
For a vector $\mathbf w=(w_1,\ldots,w_n) \in \ensuremath{\mathbb{Z}}_{\geq 0}^n$,
let $\ensuremath{\mathrm{in}}_\mathbf w (I)$ be the initial ideal of a homogeneous ideal $I$ with respect to the
weight order $>_\mathbf w$
(see \cite[p.\ 345]{Ei}).
Let $T$ be a new indeterminate and
$R=S[T]$.
For $\mathbf a=(a_1,\dots,a_n) \in \ensuremath{\mathbb{Z}}_{\geq 0}^n$,
let $x^{\mathbf a}=x_1^{a_1} x_2^{a_2} \cdots x_n^{a_n}$
and $(\mathbf a, \mathbf w)= a_1w_1 + \cdots + a_n w_n$.
For a polynomial $f= \sum_{\mathbf a \in \ensuremath{\mathbb{Z}}_{\geq 0}^n} c_{\mathbf a} x^{\mathbf a}$,
where $c_{\mathbf a} \in K$,
let $b= \max \{ (\mathbf a,\mathbf w) : c_{\mathbf a} \ne 0\}$
and
$$\tilde f = T^b \left(\sum_{\mathbf a \in \ensuremath{\mathbb{Z}}_{\geq 0}^n} T^{-(\mathbf a,\mathbf w)}c_{\mathbf a} x^{\mathbf a}\right) \in R.$$
Note that $\tilde f$ can be written as $\tilde f=\ensuremath{\mathrm{in}}_\mathbf w(f) + T g$ where $g \in R$.
For an ideal $I \subset S$,
let $\tilde I =(\tilde f :f \in I) \subset R$.
For $\lambda \in K \setminus\{0\}$,
let $D_{\lambda,\mathbf w}$ be the diagonal change of coordinates defined by $D_{\lambda,\mathbf w}(x_i)=\lambda^{-w_i} x_i$.
From the definition, we have
$$R/\big(\tilde I +(T)\big) \cong S/ \ensuremath{\mathrm{in}}_\mathbf w(I)$$
and
$$R/\big(\tilde I +(T-\lambda)\big) \cong S/D_{\lambda,\mathbf w}(I)$$
where $\lambda \in K \setminus \{0\}$.
Moreover $(T-\lambda)$ is a non-zero divisor of $R/\tilde I$ for any $\lambda \in K$.
See \cite[\S 15.8]{Ei}.
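The homogenization $\tilde f$ and the initial form $\ensuremath{\mathrm{in}}_\mathbf w(f)$ can be computed directly from the definition. The following sketch (our illustration; polynomials are represented as dictionaries from exponent vectors to coefficients) implements $f \mapsto \tilde f$ and recovers $\ensuremath{\mathrm{in}}_\mathbf w(f)$ as the part of $\tilde f$ with $T$-exponent zero, matching the identity $\tilde f=\ensuremath{\mathrm{in}}_\mathbf w(f)+Tg$.

```python
# Represent f = sum_a c_a x^a as {a: c_a}.  For a weight vector w, set
# b = max (a, w) over the support; then
#   f~ = sum_a c_a T^(b - (a, w)) x^a,
# so the terms of maximal weight (the initial form in_w(f)) get T-degree 0.

def weight(a, w):
    return sum(ai * wi for ai, wi in zip(a, w))

def homogenize(f, w):
    """Return f~ as a dict {(T-exponent, a): c_a}."""
    b = max(weight(a, w) for a in f)
    return {(b - weight(a, w), a): c for a, c in f.items()}

def initial_form(f, w):
    """in_w(f): the terms of f~ with T-exponent 0."""
    return {a: c for (t, a), c in homogenize(f, w).items() if t == 0}

# f = x^2 y + x y with w = (1, 2): the weights are 4 and 3, so
# in_w(f) = x^2 y and f~ = x^2 y + T x y.
f = {(2, 1): 1, (1, 1): 1}
assert initial_form(f, (1, 2)) == {(2, 1): 1}
assert homogenize(f, (1, 2)) == {(0, (2, 1)): 1, (1, (1, 1)): 1}
```

Setting $T=0$ in $\tilde f$ leaves exactly the $T$-exponent-zero terms, which is the isomorphism $R/(\tilde I+(T))\cong S/\ensuremath{\mathrm{in}}_\mathbf w(I)$ in miniature.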
\begin{lemma}
\label{lemma2}
Fix an integer $j$.
Let $\mathbf w \in \ensuremath{\mathbb{Z}}_{\geq 0}^n$,
$M$ a finitely generated graded $S$-module and $J \subset S$ a homogeneous ideal.
For a general $\lambda \in K$, one has
\[
\dim_K \ensuremath{\mathrm{Tor}}_i \big(M,S/\ensuremath{\mathrm{in}}_\mathbf w(J)\big)_j\geq \dim_K \ensuremath{\mathrm{Tor}}
_i\big(M, S/D_{\lambda,\mathbf w}(J)\big)_j
\ \mbox{ for all $i$.}
\]
\end{lemma}
\begin{proof}
Consider the ideal $\tilde {J} \subset R$ defined as above.
Let $\tilde M = M \otimes_S R$ and
$T_i=\ensuremath{\mathrm{Tor}}_i^{R}(\tilde M,R/\tilde{J})$.
By the structure theorem for
modules over a PID (see \cite[p.\ 149]{La}),
we have
$$(T_i)_j\cong K[T]^{a_{ij}}
\bigoplus A_{ij}$$
as a finitely generated $K[T]$-module,
where $a_{ij} \in \ensuremath{\mathbb{Z}}_{\geq 0}$ and where $A_{ij}$ is the torsion submodule.
Moreover $A_{ij}$ is a module of the form
$$A_{ij}\cong \bigoplus_{h=1}^{b_{ij}} K [T]/(P^{i,j}_{h}),$$
where $P^{i,j}_h$ is a non-zero polynomial in $K[T]$.
Set $l_{\lambda}=T-\lambda$.
Consider the exact sequence
\begin {eqnarray}
\label{aa}
\begin{CD} 0 @>>> R/\tilde{J} @>\cdot l_{\lambda}>>
R/\tilde{J} @>>> R/\big((l_{\lambda})+\tilde{J} \big) @>>> 0.
\end{CD}
\end {eqnarray}
By considering the long exact sequence induced by $\ensuremath{\mathrm{Tor}}^R_i(\tilde M,-),$ we have the following exact sequence
\begin{equation}\label{bo} 0\longrightarrow T_i/l_{\lambda} T_i \longrightarrow
\ensuremath{\mathrm{Tor}}_i^{R}\big(\tilde M,R/\big((l_{\lambda})+\tilde{J}\big)\big) \longrightarrow
K_{i-1} \longrightarrow 0,
\end{equation}
where $K_{i-1}$ is the kernel of the map $T_{i-1} \xrightarrow{\cdot l_{\lambda}} T_{i-1}$.
Since $l_{\lambda}$ is a regular element for $R$ and $\tilde M$,
the middle term in (\ref{bo}) is isomorphic to
\begin{eqnarray*}
\ensuremath{\mathrm{Tor}}_i^{R/(l_\lambda)} \big(\tilde M /l_\lambda \tilde M, R/\big((l_{\lambda})+\tilde J \big)\big)
=\left\{
\begin{array}{lll}
\ensuremath{\mathrm{Tor}}_i^S \big(M,S/\ensuremath{\mathrm{in}}_\mathbf w(J)\big), & \mbox{ if } \lambda=0,\\
\ensuremath{\mathrm{Tor}}_i^S \big(M,S/D_{\lambda,\mathbf w}(J)\big), & \mbox{ if } \lambda\ne0
\end{array}
\right.
\end{eqnarray*}
(see \cite[p.\ 140]{Mat}).
By taking the graded component of degree $j$ in (\ref{bo}),
we obtain
\begin{eqnarray}
\label{banngou}
\begin{array}{lll}
\dim_K
\ensuremath{\mathrm{Tor}}_i^{S}\big(M,S/\ensuremath{\mathrm{in}}_\mathbf w (J) \big)_j &=& a_{ij} +
\# \{P^{ij}_h : P^{i,j}_h(0)=0\}\\
&& + \# \{P^{i-1,j}_h : P^{i-1,j}_h(0)=0\},
\end{array}
\end{eqnarray}
where $\# X$ denotes the cardinality of a finite set $X$,
and
\begin{eqnarray}
\label{yon}
\dim_K
\ensuremath{\mathrm{Tor}}_i^{S}\big(M,S/D_{\lambda,\mathbf w}(J) \big)_j &=& a_{ij}
\end{eqnarray}
for a general $\lambda \in K$.
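In more detail (a routine verification using the $K[T]$-module decomposition above, spelled out here for convenience): reducing $(T_i)_j$ modulo $l_0=T$ gives the counts in (\ref{banngou}), while for a general $\lambda$ one obtains (\ref{yon}).

```latex
% Reduction of (T_i)_j modulo l_0 = T:
\[
\dim_K \big(T_i/l_{0}\,T_i\big)_j = a_{ij} + \#\{h : P^{i,j}_h(0)=0\},
\qquad
\dim_K (K_{i-1})_j = \#\{h : P^{i-1,j}_h(0)=0\},
\]
% because K[T]/(T,P) is nonzero exactly when P(0) = 0, and multiplication by T
% on K[T]/(P) has a one-dimensional kernel exactly when T divides P.
% For a general lambda one has P^{i,j}_h(lambda) \ne 0 for every h, so
% l_lambda acts injectively on the torsion part A_{ij}; hence K_{i-1} = 0 and
\[
\dim_K \big(T_i/l_{\lambda}\,T_i\big)_j = a_{ij}.
\]
```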
This proves the desired inequality.
\end{proof}
\begin{corollary}
\label{add}
With the same notation as in Lemma \ref{lemma2},
for a general $\lambda \in K$,
\[
\dim_K \ensuremath{\mathrm{Tor}}_i \big(M,\ensuremath{\mathrm{in}}_\mathbf w(J)\big)_j \geq \dim_K \ensuremath{\mathrm{Tor}}_i \big(M, D_{\lambda,\mathbf w}(J) \big)_j
\mbox{ for all }i.
\]
\end{corollary}
\begin{proof}
For any homogeneous ideal $I \subset S$,
by considering the long exact sequence induced by $\ensuremath{\mathrm{Tor}}_i(M,-)$ from the short exact sequence
$0 \longrightarrow I \longrightarrow S \longrightarrow S/I \longrightarrow 0$ we have
$$\ensuremath{\mathrm{Tor}}_i(M,I) \cong \ensuremath{\mathrm{Tor}}_{i+1}(M,S/I)
\mbox{ for }i \geq 1$$
and
$$\dim_K \ensuremath{\mathrm{Tor}}_0(M,I)_j = \dim_K \ensuremath{\mathrm{Tor}}_1(M,S/I)_j + \dim_K M_j - \dim_K \ensuremath{\mathrm{Tor}}_0(M,S/I)_j.$$
Thus by Lemma \ref{lemma2} it is enough to prove that
\begin{eqnarray*}
&&\dim_K \ensuremath{\mathrm{Tor}}_1\big(M,S/\ensuremath{\mathrm{in}}_\mathbf w(J)\big)_j -\dim_K \ensuremath{\mathrm{Tor}}_1\big(M,S/D_{\lambda,\mathbf w}(J)\big)_j\\
&&\geq \dim_K \ensuremath{\mathrm{Tor}}_0\big(M,S/\ensuremath{\mathrm{in}}_\mathbf w(J)\big)_j -\dim_K \ensuremath{\mathrm{Tor}}_0\big(M,S/D_{\lambda,\mathbf w}(J)\big)_j.
\end{eqnarray*}
This inequality follows from (\ref{banngou}) and (\ref{yon}).
\end{proof}
\begin{proposition}
\label{2.3}
Fix an integer $j$.
Let $I \subset S$ and $J \subset S$ be homogeneous ideals.
Let $\mathbf w,\mathbf w' \in \ensuremath{\mathbb{Z}}_{\geq 0}^n$.
For a general change of coordinates $g \in {GL}_n(K)$,
\begin{itemize}
\item[(i)]
$\dim_K \ensuremath{\mathrm{Tor}}_i(S/I,S/g(J))_j
\leq \dim_K \ensuremath{\mathrm{Tor}}_i (S/\ensuremath{\mathrm{in}}_{\mathbf w}(I), S/{\ensuremath{\mathrm{in}}_{\mathbf w'}}(J))_j
\ \mbox{ for all }i.$
\item[(ii)]
$\dim_K \ensuremath{\mathrm{Tor}}_i(I,S/g(J))_j
\leq \dim_K \ensuremath{\mathrm{Tor}}_i (\ensuremath{\mathrm{in}}_{\mathbf w}(I), S/{\ensuremath{\mathrm{in}}_{\mathbf w'}}(J))_j
\ \mbox{ for all }i.$
\end{itemize}
\end{proposition}
\begin{proof}
We prove (ii) (the proof for (i) is similar).
By Lemmas \ref{lemma1} and \ref{lemma2} and Corollary \ref{add}, we have
\begin{eqnarray*}
\dim_K \ensuremath{\mathrm{Tor}}_i \big(\ensuremath{\mathrm{in}}_{\mathbf w}(I), S/\ensuremath{\mathrm{in}}_{\mathbf w'}(J)\big)_j
&\geq& \dim_K \ensuremath{\mathrm{Tor}}_i \big(D_{\lambda_1,\mathbf w}(I), S/D_{\lambda_2,\mathbf w'}(J)\big)_j \\
&=& \dim_K \ensuremath{\mathrm{Tor}}_i \big(I, S/D^{-1}_{\lambda_1,\mathbf w} \big(D_{\lambda_2,\mathbf w'}(J)\big)\big)_j\\
&\geq& \dim_K \ensuremath{\mathrm{Tor}}_i\big(I,S/g(J)\big)_j,
\end{eqnarray*}
as desired,
where $\lambda_1,\lambda_2$ are general elements in $K$.
\end{proof}
\begin{remark}
Let $\mathbf w'=(1,1,\dots,1)$ and note that the composite of two general changes of coordinates is still general. By replacing $J$ by $h(J)$ for a general change of coordinates $h,$ from Proposition \ref{2.3}(i) it follows that
\[
\dim_K \ensuremath{\mathrm{Tor}}_i(S/I,S/h(J))_j \leq \dim_K \ensuremath{\mathrm{Tor}}_i\big(S/\ensuremath{\mathrm{in}}_{>_{\sigma}}(I),S/h(J)\big)_j
\]
for any term order $>_\sigma$.
The above fact gives, as a special case, an affirmative answer to \cite[Question 6.1]{Co}.
This was originally proved in the thesis of the first author \cite{Ca2}.
We mention it here because there seems to be no published article that includes a proof of this fact.
\end{remark}
\begin{theorem}
\label{2.5}
Fix an integer $j$.
Let $I \subset S$ and $J \subset S$ be homogeneous ideals.
For a general change of coordinates $g \in {GL}_n(K)$,
\begin{itemize}
\item[(i)] $\dim_K \ensuremath{\mathrm{Tor}}_i(S/I,S/g(J))_j \leq \dim_K \ensuremath{\mathrm{Tor}}_i(S/\ensuremath{\mathrm{Gin}}_{\mathrm{lex}}(I),S/\ensuremath{\mathrm{Gin}}_{\mathrm{{oplex}}} (J))_j
\ \ \mbox{for all }i.$
\item[(ii)] $\dim_K \ensuremath{\mathrm{Tor}}_i(I,S/g(J))_j \leq \dim_K \ensuremath{\mathrm{Tor}}_i(\ensuremath{\mathrm{Gin}}_{\mathrm{lex}}(I),S/\ensuremath{\mathrm{Gin}}_{\mathrm{{oplex}}} (J))_j
\ \ \mbox{for all }i.$
\end{itemize}
\end{theorem}
\begin{proof}
Without loss of generality,
we may assume $\ensuremath{\mathrm{in}}_{\mathrm{lex}}(I)=\ensuremath{\mathrm{Gin}}_{\mathrm{lex}}(I)$ and that $\ensuremath{\mathrm{in}}_{\mathrm{{oplex}}}(J)=\ensuremath{\mathrm{Gin}}_{\mathrm{{oplex}}}(J)$.
It follows from \cite[Proposition 15.16]{Ei} that there are vectors $\mathbf w, \mathbf w' \in \ensuremath{\mathbb{Z}}_{\geq 0}^n$ such that
$\ensuremath{\mathrm{in}}_\mathbf w(I)=\ensuremath{\mathrm{in}}_{\mathrm{lex}}(I)$ and $\ensuremath{\mathrm{in}}_{\mathbf w'}(g(J))=\ensuremath{\mathrm{Gin}}_{\mathrm{{oplex}}}(J)$.
Then the desired inequality follows from Proposition \ref{2.3}.
\end{proof}
Since $\ensuremath{\mathrm{Tor}}_0(S/I,S/J)\cong S/(I+J)$ and $\ensuremath{\mathrm{Tor}}_0(I,S/J)\cong I/IJ$,
we have the next corollary.
\begin{corollary}
\label{2.6}
Let $I \subset S$ and $J \subset S$ be homogeneous ideals.
For a general change of coordinates $g \in {GL}_n(K)$,
\begin{itemize}
\item[(i)] $H(I \cap g(J) ,d) \leq H(\ensuremath{\mathrm{Gin}}_{\mathrm{lex}} (I)\cap \ensuremath{\mathrm{Gin}}_{\mathrm{{oplex}}}(J),d)$ for all $d \geq 0$.
\item[(ii)] $H(Ig(J),d) \geq H(\ensuremath{\mathrm{Gin}}_{\mathrm{lex}}(I)\ensuremath{\mathrm{Gin}}_{\mathrm{{oplex}}}(J),d)$ for all $d \geq 0$.
\end{itemize}
\end{corollary}
We conclude this section with a result regarding the Krull dimension of certain Tor modules.
We show how Theorem \ref{2.5} can be used to give a quick proof of Proposition \ref{MiSp}, which is a special case (for the variety $X=\mathbb{P}^{n-1}$ and the algebraic group ${SL}_n$) of the main theorem of \cite{MS}.
Recall that generic initial ideals are \textit{Borel-fixed}, that is, they are fixed under the action of the Borel subgroup of ${GL}_n(K)$ consisting of all the upper triangular invertible matrices. In particular, for an ideal $I$ of $S$ and an upper triangular matrix $b\in {GL}_n(K)$ one has $b(\ensuremath{\mathrm{Gin}}_{\mathrm{lex}}(I))= \ensuremath{\mathrm{Gin}}_{\mathrm{lex}}(I).$ Similarly, if we denote by $op$ the change of coordinates of
$S$ which sends $x_i$ to $x_{n+1-i}$ for all $i=1,\dots,n,$ we have that $b( op (\ensuremath{\mathrm{Gin}}_{\mathrm{{oplex}}}(I)))= op (\ensuremath{\mathrm{Gin}}_{\mathrm{{oplex}}}(I)).$
We call \textit{opposite Borel-fixed} an ideal $J$ of $S$ such that $op(J)$ is Borel-fixed (see
\cite[\S 15.9]{Ei} for more details on the combinatorial properties of Borel-fixed ideals).
It is easy to see that if $J$ is Borel-fixed, then so is $(x_1,\dots,x_i)+J$ for every $i=1,\dots,n.$ Furthermore, if $j$ is the integer $\min \{i : x_i\not \in J \}$, then $J:x_j$ is also Borel-fixed; in this case $J$ has a minimal generator divisible by $x_j$ or $J=(x_1,\dots,x_{j-1}).$ Analogous statements hold for opposite Borel-fixed ideals.
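For instance (a small illustrative example, stated in characteristic $0$ so that Borel-fixed means $0$-Borel):

```latex
% In S = K[x_1,x_2,x_3], the ideal J = (x_1^2, x_1 x_2) is Borel-fixed:
% replacing x_2 by x_1 in the monomial x_1 x_2 gives x_1^2, which lies in J.
% Here min { i : x_i not in J } = 1, J has a minimal generator divisible by
% x_1, and the associated ideals are again Borel-fixed:
\[
J=(x_1^{2},\,x_1x_2)\subset K[x_1,x_2,x_3],
\qquad
J:x_1=(x_1,x_2),
\qquad
(x_1)+J=(x_1).
\]
```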
Let $I$ and $J$ be ideals generated by linear forms. If we assume that $I$ is Borel-fixed and that $J$ is opposite Borel-fixed, then there exist $1\leq i,j \leq n $ such that $I=(x_1,\dots,x_i)$ and $J=(x_j,\dots,x_n).$ An easy computation shows that the Krull dimension of $\ensuremath{\mathrm{Tor}}_h(S/I,S/J)$ is always zero when $h>0.$
More generally one has
\begin{proposition}[Miller--Speyer]\label{MiSp} Let $I$ and $J$ be two homogeneous ideals of $S.$ For a general change of coordinates $g$, the Krull dimension of $\ensuremath{\mathrm{Tor}}_i(S/I,S/g(J))$ is zero for all $i>0.$
\end{proposition}
\begin{proof} When $I$ or $J$ is equal to $(0)$ or to $S$, the result is obvious. Recall that a finitely generated graded module $M$ has Krull dimension zero if and only if $M_d=0$ for all $d$ sufficiently large. By virtue of Theorem \ref{2.5} it is enough to show that $\ensuremath{\mathrm{Tor}}_i(S/I,S/J)$ has Krull dimension zero whenever $I$ is Borel-fixed, $J$ is opposite Borel-fixed and $i>0.$ By contradiction, let the pair $I,J$ be a maximal counterexample (with respect to point-wise inclusion). By the above discussion, and by applying $op$ if necessary, we can assume that
$I$ has a minimal generator of degree greater than 1. Let $j=\min \{h : x_h\not \in I \}$ and notice that both
$(I:x_j)$ and $(I+(x_j))$ strictly contain $I.$ For every $i>0$ the short exact sequence $ 0 \rightarrow S/(I:x_j) \rightarrow S/I \rightarrow S/(I+(x_j)) \rightarrow 0$ induces the exact sequence
\[
\ensuremath{\mathrm{Tor}}_i(S/(I:x_j),S/J)
\rightarrow
\ensuremath{\mathrm{Tor}}_i(S/I,S/J)
\rightarrow
\ensuremath{\mathrm{Tor}}_i(S/(I+(x_j)),S/J).
\]
By the maximality of $I,J$, the first and the last term have Krull dimension zero. Hence the middle term must have dimension zero as well, contradicting our assumption.
\end{proof}
\section{General intersections and general products}
In this section, we prove Theorems \ref{intersection} and \ref{product}. Throughout the rest of the paper we assume $\mathrm{char}(K)=0.$
A monomial ideal $I \subset S$ is said to be \textit{$0$-Borel} (or \textit{strongly stable})
if, for every monomial $u x_j \in I$ and for every $1 \leq i <j$ one has $ux_i \in I$.
Note that $0$-Borel ideals are precisely all the possible Borel-fixed ideals in characteristic $0$.
In general, the Borel-fixed property depends on the characteristic of the field and we refer the reader to \cite[\S 15.9]{Ei} for the details.
A set $W$ of monomials in $S$ is said to be \textit{$0$-Borel} if the ideal they generate is $0$-Borel, or equivalently if for every monomial $u x_j \in W$ and for every $1 \leq i <j$ one has $ux_i \in W$. Similarly, we say that a monomial ideal $J \subset S$ is \textit{opposite $0$-Borel} if for every monomial $ux_j \in J$ and for every $j < i \leq n$ one has $ux_i \in J$.
Let $>_{\mathrm{{rev}}}$ be the reverse lexicographic order induced by the ordering $x_1 > \cdots >x_n$.
We recall the following result \cite[Lemma 3.2]{Mu}.
\begin{lemma}
\label{3-1}
Let $V=\{v_1,\dots,v_s\} \subset S_d$ be a $0$-Borel set of monomials
and $W =\{w_1,\dots,w_s\} \subset S_d$ the lex set of monomials (that is, the $s$ largest monomials of degree $d$ in the lexicographic order),
where $v_1 \geq_{{\mathrm{{rev}}}} \cdots \geq_{{\mathrm{{rev}}}} v_s$ and
$w_1 \geq_{{\mathrm{{rev}}}} \cdots \geq _{{\mathrm{{rev}}}} w_s$.
Then $v_i \geq_{{\mathrm{{rev}}}} w_i$ for all $i=1,2,\dots,s$.
\end{lemma}
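To see Lemma \ref{3-1} at work on a small (illustrative) example, take $S=K[x_1,x_2,x_3]$, $d=2$ and $s=3$:

```latex
\[
V=\{x_1^{2},\,x_1x_2,\,x_2^{2}\}\quad(\text{$0$-Borel}),
\qquad
W=\{x_1^{2},\,x_1x_2,\,x_1x_3\}\quad(\text{lex set}).
\]
% In the revlex order x_1^2 > x_1x_2 > x_2^2 > x_1x_3 > x_2x_3 > x_3^2,
% so after sorting: v_1 = w_1 = x_1^2,  v_2 = w_2 = x_1x_2,  and
% v_3 = x_2^2 >_rev x_1x_3 = w_3, in accordance with the lemma.
```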
Since generic initial ideals with respect to $>_{\mathrm{lex}}$ are $0$-Borel,
the next lemma and Corollary \ref{2.6}(i)
prove Theorem \ref{intersection}.
\begin{lemma}
\label{3-2}
Let $I \subset S$ be a $0$-Borel ideal and $P \subset S$ an opposite lex ideal.
Then $\dim_K(I\cap P)_d \leq \dim_K (I^{\mathrm{lex}} \cap P)_d$
for all $d\geq 0$.
\end{lemma}
\begin{proof}
Fix a degree $d$.
Let $V,W$ and $Q$ be the sets of monomials of degree $d$ in $I$, $I^{\mathrm{lex}}$ and $P$ respectively.
It is enough to prove that $\# (V \cap Q) \leq \# (W \cap Q)$.
Observe that $Q$ is the set of the smallest $\#Q$ monomials in $S_d$ with respect to $>_{\mathrm{{rev}}}$.
Let $m=\max_{>_{\mathrm{{rev}}}} Q$.
Then by Lemma \ref{3-1}
$$\# (V \cap Q) = \# \{ v \in V: v \leq_{{\mathrm{{rev}}}} m\}
\leq \# \{ w \in W: w \leq_{{\mathrm{{rev}}}} m\} = \# (W \cap Q),$$
as desired.
\end{proof}
Next, we consider products of ideals.
For a monomial $u \in S$, let $\max u$ (respectively, $\min u$) be the maximal (respectively,
minimal) integer $i$ such that $x_i$ divides $u$, where we set $\max 1 = 1$ and
$\min 1 = n$. For a monomial ideal $I \subset S$, let $I_{(\leq k)}$ be the $K$-vector space spanned by
all monomials $u \in I$ with $\max u \leq k$.
\begin{lemma}
\label{3-4}
Let $I \subset S$ be a $0$-Borel ideal and $P \subset S$ an opposite $0$-Borel ideal.
Let $G(P)=\{u_1,\dots,u_s\}$ be the set of the minimal monomial generators of $P$.
As a $K$-vector space,
$IP$ is the direct sum
$$
IP=\bigoplus_{i=1}^s (I_{(\leq \min u_i)})u_i.
$$
\end{lemma}
\begin{proof}
It is enough to prove that,
for any monomial $w \in IP$, there is a unique expression $w=f(w)g(w)$
with $f(w) \in I$ and $g(w) \in P$
satisfying
\begin{itemize}
\item[(a)] $\max f(w) \leq \min g(w)$.
\item[(b)] $g(w) \in G(P)$.
\end{itemize}
Given any expression $w=fg$ such that $f \in I$ and $g \in P$,
since $I$ is $0$-Borel and $P$ is opposite $0$-Borel,
if $\max f > \min g$ then we may replace $f$ by $f \frac{x_{\min g}} {x_{\max f}} \in I$
and replace $g$ by $g \frac{x_{\max f}} {x_{\min g}} \in P$.
This fact shows that there is an expression satisfying (a) and (b).
Suppose that the expressions $w=f(w)g(w)$ and $w=f'(w)g'(w)$ satisfy conditions (a) and (b).
Then, by (a), $g(w)$ divides $g'(w)$ or $g'(w)$ divides $g(w)$.
Since $g(w)$ and $g'(w)$ are generators of $P$,
$g(w)=g'(w)$.
Hence the expression is unique.
\end{proof}
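A small (illustrative) instance of this decomposition:

```latex
% In S = K[x_1,x_2,x_3], let I = (x_1,x_2) (0-Borel) and P = (x_2x_3, x_3^2)
% (opposite 0-Borel), so G(P) = { x_2x_3, x_3^2 } with min x_2x_3 = 2 and
% min x_3^2 = 3.  The lemma gives, as K-vector spaces,
\[
IP \;=\; I_{(\leq 2)}\,x_2x_3 \;\oplus\; I_{(\leq 3)}\,x_3^{2},
\]
% where I_{(<=2)} is spanned by the monomials of I involving only x_1, x_2,
% while I_{(<=3)} = I.  For instance the monomial x_2^2 x_3^2 factors
% uniquely as (x_2^2)(x_3^2), with max x_2^2 = 2 <= 3 = min x_3^2; the
% alternative factorization (x_2 x_3)(x_2 x_3) violates condition (a).
```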
\begin{lemma}
\label{3-5}
Let $I \subset S$ be a $0$-Borel ideal and $P \subset S$ an opposite $0$-Borel ideal.
Then $\dim_K(IP)_d \geq \dim_K (I^{{\mathrm{lex}}}P)_d$
for all $d\geq 0$.
\end{lemma}
\begin{proof}
Lemma \ref{3-1} shows that $\dim_K \big(I_{(\leq k)}\big)_d \geq \dim_K \big(I^{\mathrm{lex}}_{(\leq k)}\big)_d$
for all $k$ and $d \geq 0$.
Then the statement follows from Lemma \ref{3-4}.
\end{proof}
Finally we prove Theorem \ref{product}.
\begin{proof}[Proof of Theorem \ref{product}]
Let $I'=\ensuremath{\mathrm{Gin}}_{\mathrm{lex}}(I)$ and $J'=\ensuremath{\mathrm{Gin}}_{\mathrm{{oplex}}}(J)$.
Since $I'$ is $0$-Borel and $J'$ is opposite $0$-Borel,
by Corollary \ref{2.6}(ii) and Lemma \ref{3-5} (together with its opposite analogue),
$$H(Ig(J),d) \geq H(I'J',d) \geq H(I^{{\mathrm{lex}}} J',d) \geq H(I^{\mathrm{lex}} J^{\mathrm{{oplex}}},d)$$
for all $d \geq 0$.
\end{proof}
\begin{remark}
\label{rem1}
Theorems \ref{intersection} and \ref{product}
are sharp.
Let $I \subset S$ be a Borel-fixed ideal and $J \subset S$ an ideal such that $h(J)=J$
for every lower triangular matrix $h \in {GL}_n(K)$.
For a general $g \in {GL}_n(K)$,
we have the LU decomposition $g=bh$
where $h \in {GL}_n(K)$ is a lower triangular matrix and $b \in {GL}_n(K)$ is an upper triangular matrix.
Then as $K$-vector spaces
$$I \cap g(J) \cong b^{-1}(I) \cap h(J)= I\cap J
\mbox{ and }
I g(J) \cong b^{-1}(I) h(J)= I J.$$
Thus if $I$ is lex and $J$ is opposite lex then
$H(I\cap g(J),d)=H(I\cap J,d)$ and
$H(Ig(J),d)=H(I J,d)$ for all $d\geq 0$.
\end{remark}
\begin{remark}\label{example}
The assumption on $\ensuremath{\mathrm{Gin}}_{\mathrm{lex}}(J)$ in Theorem \ref{intersection} is necessary.
Let $I=(x_1^3,x_1^2x_2,x_1x_2^2,x_2^3) \subset K[x_1,x_2,x_3]$
and $J=(x_3^3,x_3^2x_2,x_3x_2^2,x_2^3)\subset K[x_1,x_2,x_3]$.
Then the set of monomials of degree $3$ in $I^{\mathrm{lex}}$ is
$\{x_1^3,x_1^2x_2,x_1^2x_3,x_1x_2^2\}$
and that of $J^{\mathrm{{oplex}}}$ is
$\{x_3^3,x_3^2x_2,x_3^2x_1,x_3x_2^2\}$.
Hence $H(I^{\mathrm{lex}}\cap J^{\mathrm{{oplex}}},3)=0$.
On the other hand, as we see in Remark \ref{rem1},
$H(I\cap g(J),3)=H(I\cap J,3)=1$. Similarly, the assumption on the characteristic of $K$ is needed, as one can easily see by considering $\mathrm{char}(K)=p>0$, $I=(x_1^p,x_2^p)\subset K[x_1,x_2]$ and $J=(x_2^p).$ In this case we have
$H(I^{\mathrm{lex}}\cap J^{\mathrm{{oplex}}},p)=0$, while $H(I\cap g(J),p)=H(g^{-1}(I)\cap J,p)=1$ since $I$ is fixed under any change of coordinates.
\end{remark}
Since $\ensuremath{\mathrm{Tor}}_0(S/I,S/J) \cong S/(I+J)$
and $\ensuremath{\mathrm{Tor}}_1(S/I,S/J) \cong (I\cap J)/ IJ$
for all homogeneous ideals $I \subset S$ and $J \subset S$,
Theorems \ref{intersection} and \ref{product} show the next statement.
\begin{remark}
\label{cor}
Conjecture \ref{conj} is true if $i=0$ or $i=1.$
\end{remark}
\end{document} |
\begin{document}
\begin{abstract}
From the mid-1970s, Eberhard Kirchberg undertook a remarkably extensive study of exactness for $C^*$-algebras, whose applications spread to many branches of analysis.
In this review we focus on the case of groupoid $C^*$-algebras, for which the notion of exactness needs to be better understood. In particular, some versions of exactness play an important role in the study of the weak containment problem (WCP), that is, whether the coincidence of the full and reduced groupoid $C^*$-algebras implies the amenability of the groupoid.
\end{abstract}
\title{Amenability, exactness and weak containment property for groupoids }
\section*{Introduction}
The study of exactness for $C^*$-algebras was initiated by Kirchberg as early as the mid-1970s. In the short note \cite{Ki77Proc} he announced several results whose proofs were published, along with many other major contributions, in the 1990s \cite{Ki91, Ki93, Ki94}.
An unexpected link was found at the end of the 1990s between the Novikov higher signatures conjecture for a discrete group $\Gamma$ and the exactness of its reduced $C^*$-algebra $C^*_{r}(\Gamma)$. This follows from the discovery by Higson and Roe \cite{HR} that a finitely generated group $\Gamma$ has the so-called Yu's property A \cite{Yu} if and only if $\Gamma$ admits an amenable action on a compact space $X$ (then we say that the group is amenable at infinity). Such a group satisfies the above mentioned Novikov conjecture \cite{Yu, HR, H00} and moreover $C^*_{r}(\Gamma)$ is exact since it is a sub-$C^*$-algebra of the nuclear crossed product $C(X)\rtimes \Gamma$. At the same time extensive studies of amenable actions (and amenable groupoids) \cite{AD-R} and of exactness of reduced group $C^*$-algebras \cite{KW99} had just appeared. This series of results was finally crowned by the proof of the fact that if $C_{r}^*(\Gamma)$ is exact then $\Gamma$ admits an amenable action on a compact space \cite{GK, Oza, AD02}. All this raised a renewed interest in various notions of exactness for locally compact groups.
It turned out to be potentially interesting to extend these notions to the case of locally compact groupoids. A first attempt was presented in \cite{AD00}. A detailed presentation was made available in \cite{AD16}. Nowadays, some versions of groupoid exactness appear to be essential in the study of the weak containment problem (WCP).
The purpose of this paper is to describe some of the history of this subject. It is organized as follows.
In Section 1, after pointing out the fact, due to Kirchberg \cite{Ki93}, that many full group $C^*$-algebras are not exact, we focus on exactness of reduced group $C^*$-algebras, compared with two other notions of exactness, namely KW-exactness, introduced by Kirchberg and Wassermann in \cite{KW99}, and amenability at infinity. The three notions are equivalent for discrete groups and we describe what is known in general for locally compact groups.
In Section 2 we recall some facts about measured and locally compact groupoids and their operator algebras and Section 3 provides a short summary about amenable groupoids. In Section 4 we introduce the notion of amenability at infinity for a locally compact groupoid. Here, compact spaces have to be replaced by locally compact spaces that are fibred on the space of units of the groupoid in such a way that the projection is a proper map. When the groupoid ${\mathcal G}$ is \'etale, it has a universal fibrewise compactification $\beta_r{\mathcal G}$, called its Stone-\v Cech fibrewise compactification. Then, amenability at infinity of ${\mathcal G}$ is equivalent to the fact that the canonical action of ${\mathcal G}$ on $\beta_r{\mathcal G}$ is amenable, and can be expressed in terms of positive type kernels, exactly as for groups.
In Section 5 we describe the relations between the various notions of exactness that are defined for \'etale groupoids. They are equivalent when we assume the inner amenability of the groupoid. Inner amenability is well understood in the group case, in particular all discrete groups are inner amenable in our sense, but this notion remains mysterious for groupoids. In this section we also introduce a weak notion of exactness for groupoids, that we call inner exactness. It is automatically fulfilled for transitive groupoids and in particular for all locally compact groups, but it has proven useful in other contexts.
Since 2014, many remarkable results, crucially involving exactness, have been obtained in the study of the (WCP) for groupoids. We review them in Section 6. Finally,
in the last section we recap some open problems.
\section{Exact group $C^*$-algebras}
Unlike the functor $(\cdot)\otimes_{\max} A$ (maximal tensor product with the $C^*$-algebra $A$), the minimal tensor product functor $(\cdot)\otimes A$ is not necessarily exact, that is, given a short exact sequence of $C^*$-algebras $0 \rightarrow I \rightarrow B \rightarrow B/I \rightarrow 0,$
the sequence
$$0 \rightarrow I\otimes A \rightarrow B\otimes A \rightarrow (B/I) \otimes A\rightarrow 0$$
is not always exact in the middle. When the functor $(\cdot)\otimes A$ is exact, one says that the $C^*$-algebra $A$ is {\it exact}. This notion, so named in the pioneering paper \cite{Ki77Proc}, has been the subject of Kirchberg's major contributions from the end of the 1980s. As early as 1976, Simon Wassermann showed in \cite{Wass76} that the full $C^*$-algebra
$C^*({\mathbb{F}}_2)$ of the free group ${\mathbb{F}}_2$ on two generators is not exact. In fact, when $\Gamma$ is a finitely generated residually finite group, the full group $C^*$-algebra $C^*(\Gamma)$ is exact if and only if the group $\Gamma$ is amenable\footnote{To the author's knowledge, there is so far no example of a non-amenable group $G$ such that $C^*(G)$ is exact.} (see \cite{Ki77Proc}, and \cite[Proposition 7.1]{Ki93} for a more general result).
This is in sharp contrast with the behaviour of the reduced group $C^*$-algebras. For instance, let $G$ be a locally compact group having a closed amenable subgroup $P$ with $G/P$ compact and let $H$ be any closed subgroup of $G$. Then the full crossed product $C^*$-algebras $C(G/P)\rtimes H$ and $C_0(H\setminus G)\rtimes P$ are Morita equivalent \cite{Rie76} and $C_0(H\setminus G)\rtimes P$ is nuclear since $P$ is amenable. It follows that $C(G/P)\rtimes H$ is nuclear and that the reduced crossed product $C^*$-algebra $C(G/P)\rtimes_r H$ is nuclear too. But the reduced group $C^*$-algebra $C^*_{r}(H)$ embeds into $C(G/P)\rtimes_r H$ since $G/P$ is compact and therefore $C^*_{r}(H)$ is an exact $C^*$-algebra. This well-known argument applies for instance to closed subgroups of almost connected groups.
A locally compact group is said to be $C^*$-{\em exact}\footnote{This differs from the terminology used in \cite{Ki77Proc} where a group was called $C^*$-exact if its full $C^*$-algebra was exact.} if its reduced group $C^*$-algebra is exact. Most familiar groups are known to be $C^*$-exact. The first examples of discrete groups that are not $C^*$-exact are Gromov monsters \cite{Gro}. Osajda has given other examples \cite{Osa}, and he even built residually finite groups that are not $C^*$-exact \cite{Osa18}.
Clearly, an easy way to show that the reduced group $C^*$-algebra $C^*_{r}(G)$ of a locally compact group $G$ is exact is to exhibit a continuous action of $G$ on a compact space $X$ such that the reduced crossed product $C(X)\rtimes_r G$ is nuclear. When this property is fulfilled with $G$ a discrete group, it follows that the $G$-action on $X$ is (topologically) amenable \cite[Theorem 4.5]{AD87}, \cite[Theorem 5.8]{AD02}. This notion plays an important role in the study of exactness of reduced group $C^*$-algebras. It will be recalled in the subsequent sections in the more general context of groupoids. It is called {\it amenability at infinity} \cite[Definition 5.2.1]{AD-R} (or boundary amenability).
When a locally compact group $G$ has an amenable action on a compact space, it has a property stronger than $C^*$-exactness, that was introduced by Kirchberg and Wassermann \cite{KW99}, and is now often called KW-exactness (see for instance \cite[Definition 5.1.9]{BO}).
\begin{defn}\label{KWexact:group} A locally compact group $G$ is KW-{\it exact} if the functor $A \mapsto A\rtimes_r G$ is exact, that is, for every short exact sequence of $G$-$C^*$-algebras $0 \rightarrow I \rightarrow A \rightarrow A/I \rightarrow 0$, the sequence
$$0 \rightarrow I \rtimes_r G \rightarrow A\rtimes_rG \rightarrow(A/I)\rtimes_r G \rightarrow 0$$
is also exact.
\end{defn}
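For example (a standard fact, not taken from the text above): every amenable locally compact group is KW-exact, since amenability makes the reduced and maximal crossed products coincide, and the maximal crossed product functor always preserves short exact sequences.

```latex
\[
B\rtimes_r G \;=\; B\rtimes_{\max} G
\qquad\text{for every $G$-$C^*$-algebra $B$, when $G$ is amenable,}
\]
% so exactness of A -> A x_r G follows from exactness of A -> A x_max G.
```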
The theorem below, which presents the currently known relations between the different definitions of exactness for a locally compact group, involves in particular the notion of {\em inner amenability}. Following \cite[page 84]{Pat88}, we say that a locally compact group $G$ is {\it inner amenable} if there exists an inner invariant mean on $L^\infty(G)$, that is, a state $m$ such that $m(sfs^{-1}) = m(f)$ for every $f\in L^\infty(G)$ and $s\in G$, where $(sfs^{-1})(y) = f(s^{-1}ys)$. This is a quite weak notion (that should in fact deserve the name of weak inner amenability), since for instance every discrete group is inner amenable in this sense (whereas Effros \cite{Effros} excludes the trivial inner invariant mean in his definition). Note that a locally compact group $G$ is amenable if and only if $G$ is inner amenable and $C^*_{r}(G)$ is nuclear \cite{LP}.
The importance of inner amenability when studying the relations between properties of groups and their $C^*$-algebras has also been highlighted by Kirchberg in \cite[\S 7]{Ki93} where inner amenability is called Property (Z).
\begin{thm}\label{equivexact:group} Let $G$ be a locally compact group and consider the following conditions:
\begin{itemize}
\item[(1)] $G$ has an amenable action on a compact space;
\item[(2)] $G$ is KW-exact;
\item[(3)] $G$ is $C^*$-exact.
\end{itemize}
Then (1) $\Leftrightarrow$ (2) $\Rightarrow$ (3), and the three conditions are equivalent when $G$ is an inner amenable group or when $C^*_{r}(G)$ has a tracial state.
\end{thm}
That (1) $\Rightarrow$ (2) $\Rightarrow$ (3) is easy (see for instance \cite[Theorem 7.2]{AD02}). When $G$ is a discrete group the equivalence between (2) and (3) is proved in \cite[Theorem 5.2]{KW99} and the fact that (3) implies (1) is proved in \cite{Oza}. That (2) implies (1) for any locally compact group is proved in \cite[Theorem 5.6]{BCL}, \cite[Proposition 2.5]{OS20}. The fact that (3) implies (1) in the case of an inner amenable locally compact group $G$ was treated in \cite[Theorem 7.3]{AD02}, where we used a property of $G$ that we called Property (W). Subsequently, it was proved in \cite{CT} that this Property (W) is the same as inner amenability. The fact that (3) implies (2) when $C^*_{r}(G)$ has a tracial state is proved in \cite{Man}. Let us point out that this latter property is equivalent to the existence of an open amenable normal subgroup in $G$, as shown in \cite{KR}.
Whether (3) implies (2) for an arbitrary locally compact group is still open. Note that if KW-exactness and $C^*$-exactness are equivalent for all unimodular totally disconnected second countable groups, then they are equivalent for all locally compact second countable groups \cite{CZ}.
\section{Background on groupoids}
We assume that the reader is familiar with the basic definitions about groupoids. We use the terminology and the notation of \cite{AD-R}. The unit space of a groupoid ${\mathcal G}$ is denoted by ${\mathcal G}^{(0)}$ and is often renamed as $X$. We implicitly identify ${\mathcal G}^{(0)}$ with a subset of ${\mathcal G}$. The structure of ${\mathcal G}$ is defined by the range and source maps $r,s\colon {\mathcal G}\to{\mathcal G}^{(0)}$, the inverse map $\gamma \mapsto \gamma^{-1}$ from ${\mathcal G}$ to ${\mathcal G}$ and the multiplication map $(\gamma,\gamma') \mapsto \gamma\gamma'$ from
${\mathcal G}^{(2)} = \{(\gamma,\gamma')\in {\mathcal G}\times {\mathcal G} : s(\gamma) = r(\gamma')\}$ to ${\mathcal G}$. For $x\in {\mathcal G}^{(0)}$ we set ${\mathcal G}^x = r^{-1}(x)$, ${\mathcal G}_x = s^{-1}(x)$ and ${\mathcal G}(x) = {\mathcal G}^x \cap {\mathcal G}_x$. Given $E\subset {\mathcal G}^{(0)}$, we write ${\mathcal G}(E) = r^{-1}(E)\cap s^{-1}(E)$.
One important example is given by the left action of a group $G$ on a set $X$. The corresponding {\em semidirect product groupoid} ${\mathcal G} = X\rtimes G$ is $X\times G$ as a set. Its unit space is $X$, and the range and source maps are given respectively by $r(x,g) = x$ and $s(x,g)= g^{-1}x$. The product is given by $(x,g)(g^{-1}x,h) = (x,gh)$, and the inverse by $(x,g)^{-1} = (g^{-1}x, g^{-1})$. Equivalence relations on $X$ are also an interesting family of examples. If ${\mathcal R}\subset X\times X$ is an equivalence relation, it is viewed as a groupoid with $X$ as set of units, $r(x,y) = x$ and $s(x,y) = y$ as range and source maps respectively. The product is given by $(x,y)(y,z) = (x,z)$ and the inverse by $(x,y)^{-1} = (y,x)$.
\subsection{Measured groupoids} A {\em Borel groupoid} ${\mathcal G}$ is a groupoid endowed with a Borel structure such that the range, source, inverse and product maps are Borel, where ${\mathcal G}^{(2)}$ has the Borel structure induced by ${\mathcal G}\times {\mathcal G}$ and ${\mathcal G}^{(0)}$ has the Borel structure induced by ${\mathcal G}$. A {\em Borel Haar system} $\lambda$ on ${\mathcal G}$ is a family $(\lambda^x)_{x\in {\mathcal G}^{(0)}}$ of measures on the fibres ${\mathcal G}^x$, which is Borel (in the sense that for every non-negative Borel function $f$ on ${\mathcal G}$ the function $x\mapsto \lambda(f)(x) =\int f{\,\mathrm d}\lambda^x$ is Borel), left invariant (in the sense that $\gamma\lambda^{s(\gamma)} = \lambda^{r(\gamma)}$ for all $\gamma\in{\mathcal G}$), and proper (in the sense that there exists a non-negative Borel function $f$ on ${\mathcal G}$ such that $ \lambda(f)(x) = 1$ for all $x\in {\mathcal G}^{(0)}$). Given a measure $\mu$ on ${\mathcal G}^{(0)}$, one can integrate the measures $\lambda^x$ with respect to $\mu$ to get a measure $\mu\circ\lambda$ on ${\mathcal G}$. The measure $\mu$ is quasi-invariant with respect to the Haar system if the inverse map preserves the $(\mu\circ\lambda)$-negligible sets. A {\em measured groupoid} is a triple $({\mathcal G},\lambda,\mu)$ satisfying the above properties. All measure spaces are assumed to be standard and the measures are $\sigma$-finite.
\begin{exs}\label{exs:mesgroupoid} (a) {\it Semidirect product measured groupoids.} Let $G$ be a second countable locally compact group with a left Haar measure $\lambda$, and $X$ a standard Borel space. A Borel left action of $G$ on $X$ is a left action such that the map $(x,s)\mapsto sx$ from $X\times G$ to $X$ is Borel. Then ${\mathcal G}=X\rtimes G$ is a Borel groupoid with a canonical Haar system, also denoted by $\lambda$. Indeed, identifying ${\mathcal G}^x$ with $G$, we take $\lambda^x = \lambda$. Let $\mu$ be a measure on $X$. Then $\mu\circ\lambda = \mu\otimes\lambda$. Moreover $\mu$ is quasi-invariant with respect to the $G$-action if and only if $({\mathcal G},\lambda,\mu)$ is a measured groupoid.
(b) {\em Discrete measured equivalence relations.} Let ${\mathcal R}$ be an equivalence relation on a standard Borel space $X$ which has countable equivalence classes and such that ${\mathcal R}$ is a Borel subset of $X\times X$.
This groupoid has a canonical Haar system: $\lambda^x$ is the counting measure on the equivalence class of $x$, identified with ${\mathcal R}^x$. A measure $\mu$ on $X$ is quasi-invariant if for every Borel subset $A\subset X$ with $\mu(A) = 0$, the saturation of $A$ with respect to ${\mathcal R}$ has measure $0$. Then $({\mathcal R},\mu)$ is a measured groupoid, called a {\em discrete measured equivalence relation}.
\end{exs}
\subsection{Topological groupoids}
A {\it locally compact groupoid} is a groupoid ${\mathcal G}$ equipped with a locally compact\footnote{By convention a locally compact space will be Hausdorff.} topology such that the structure maps are continuous, where ${\mathcal G}^{(2)}$ has the topology induced by ${\mathcal G}\times{\mathcal G}$ and ${\mathcal G}^{(0)}$ has the topology induced by ${\mathcal G}$. A {\em continuous Haar system} is a family $\lambda=(\lambda^x)_{x\in {\mathcal G}^{(0)}}$ of measures on ${\mathcal G}$ such that $\lambda^x$ has exactly ${\mathcal G}^x$ as support for every $x\in {\mathcal G}^{(0)}$, which is left invariant and continuous in the sense that for every $f\in {\mathcal C}_c({\mathcal G})$ (the space of continuous complex valued functions with compact support on ${\mathcal G}$) the function $x\mapsto \lambda(f)(x)= \int f{\,\mathrm d}\lambda^x$ is continuous. Note that the existence of a continuous Haar system implies that the range (and therefore the source) map is open \cite[Chap. I, Proposition 2.4]{Ren_book}.
\begin{exs}\label{exs:groupoids} (a) {\it Semidirect products.} Let us consider a locally compact group $G$ with Haar measure $\lambda$ acting continuously to the left on a locally compact space $X$. Then ${\mathcal G} = X\rtimes G$ is a locally compact groupoid and the Haar system defined in Example \ref{exs:mesgroupoid} (a) is continuous.
(b) {\it Group bundle groupoids.} A group bundle groupoid is a locally compact groupoid such that the range and source maps are equal and open. By \cite[Lemma 1.3]{Ren91}, one can choose, for $x\in {\mathcal G}^{(0)}$, a left Haar measure $\lambda^x$ on the group ${\mathcal G}^x ={\mathcal G}_x$ in such a way that $(\lambda^x)_{x\in X}$ forms a Haar system on ${\mathcal G}$. An explicit example will be given in Section \ref{sec:HLS}.
(c) {\it \'Etale groupoids.} A locally compact groupoid is called {\it \'etale} when its range (and therefore its source) map is a local homeomorphism from ${\mathcal G}$ into ${\mathcal G}^{(0)}$. Then ${\mathcal G}^x$ and ${\mathcal G}_x$ are discrete and ${\mathcal G}^{(0)}$ is open in ${\mathcal G}$. Moreover the family of counting measures $\lambda^x$ on ${\mathcal G}^x$ forms a Haar system (see \cite[Chap. I, Proposition 2.8]{Ren_book}). This will implicitly be our choice of Haar system. Groupoids associated with actions of discrete groups are \'etale.
\end{exs}
\subsection{Groupoid operator algebras} For the representation theory of measured groupoids we refer to \cite[\S 6.1]{AD-R}. The von Neumann algebra $VN({\mathcal G},\lambda,\mu)$ associated to such a groupoid is defined by its left regular representation. For a semidirect product $(X\rtimes G, \mu)$ it is the von Neumann crossed product $L^\infty(X,\mu)\rtimes G$. For a discrete measured equivalence relation $({\mathcal R},\mu)$, it is the von Neumann algebra defined in \cite{FMII}.
We will now focus on the operator algebras associated with a locally compact groupoid\footnote{Throughout this text a locally compact groupoid will be implicitly endowed with a Haar system $\lambda$ which, for the examples given in Examples \ref{exs:groupoids}, will be the Haar system described there.} ${\mathcal G}$. We set $X = {\mathcal G}^{(0)}$. The space ${\mathcal C}_c({\mathcal G})$ of continuous functions with compact support on ${\mathcal G}$ is an involutive algebra with respect to the following operations for $f,g\in {\mathcal C}_c({\mathcal G})$:
\begin{align*}
(f*g)(\gamma) &= \int f(\gamma_1)g(\gamma_{1}^{-1}\gamma) {\,\mathrm d}\lambda^{r(\gamma)}(\gamma_1)\\
f^*(\gamma) & =\overline{f(\gamma^{-1})}.
\end{align*}
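These operations can be tried out on a toy example (ours, not from the sources above): for the pair groupoid $X\times X$ on a finite set $X$ with counting measures as Haar system, the convolution product is exactly matrix multiplication and the involution is the conjugate transpose, so ${\mathcal C}_c(X\times X)\cong M_{|X|}({\mathbb{C}})$.

```python
# Sketch (illustrative): the convolution *-algebra of the pair groupoid
# X x X on X = {0,1,2}, with counting measures as Haar system.  An arrow
# (x, y) has r = x and s = y; (f*g)(x, z) = sum_y f(x, y) g(y, z), so the
# algebra is the 3x3 matrix algebra, and f^* is the conjugate transpose.
import itertools
import random

X = range(3)
arrows = list(itertools.product(X, X))

def conv(f, g):                     # convolution = matrix multiplication
    return {(x, z): sum(f[(x, y)] * g[(y, z)] for y in X) for (x, z) in arrows}

def star(f):                        # f^*(gamma) = conjugate of f(gamma^{-1})
    return {(x, y): f[(y, x)].conjugate() for (x, y) in arrows}

def close(f, g, tol=1e-9):
    return all(abs(f[e] - g[e]) < tol for e in arrows)

random.seed(0)
def rand_elt():
    return {e: complex(random.random(), random.random()) for e in arrows}

f, g, h = rand_elt(), rand_elt(), rand_elt()
assert close(conv(conv(f, g), h), conv(f, conv(g, h)))   # associativity
assert close(star(conv(f, g)), conv(star(g), star(f)))   # (f*g)^* = g^* * f^*
assert close(star(star(f)), f)                           # involution
```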
We define a norm on ${\mathcal C}_c({\mathcal G})$ by
$$\norm{f}_I = \max \set{\sup_{x\in X} \int \abs{f(\gamma)}{\,\mathrm d}\lambda^x(\gamma), \,\,\sup_{x\in X} \int \abs{f(\gamma^{-1})}{\,\mathrm d}\lambda^x(\gamma)}.$$
The {\it full $C^*$-algebra $C^*({\mathcal G})$ of the groupoid} ${\mathcal G}$ is the enveloping $C^*$-algebra of the Banach $*$-algebra obtained by completion of ${\mathcal C}_c({\mathcal G})$ with respect to the norm $\norm{\cdot}_I$.
In order to define the reduced $C^*$-algebra of ${\mathcal G}$ we need the notion of (right) Hilbert $C^*$-module ${\mathcal H}$ over a $C^*$-algebra $A$ (or Hilbert $A$-module) for which we refer to \cite{Lance_book}. We shall denote by ${\mathcal B}_A({\mathcal H})$ the $C^*$-algebra of $A$-linear adjointable maps from ${\mathcal H}$ into itself.
Let ${\mathcal E}$ be the Hilbert $C^*$-module\footnote{When ${\mathcal G}$ is \'etale, we shall use the notation $\ell^2_{{\mathcal C}_0(X)}({\mathcal G})$ rather than $L^2_{{\mathcal C}_0(X)}({\mathcal G},\lambda)$.} $L^2_{{\mathcal C}_0(X)}({\mathcal G},\lambda)$ over ${\mathcal C}_0(X)$ (the algebra of continuous functions on $X$ vanishing at infinity) obtained by completion of ${\mathcal C}_c({\mathcal G})$ with respect to the ${\mathcal C}_0(X)$-valued inner product
$$\scal{\xi,\eta}(x) = \int_{{\mathcal G}^x} \overline{\xi(\gamma)}\eta(\gamma){\,\mathrm d}\lambda^x(\gamma).$$
The ${\mathcal C}_0(X)$-module structure is given by
$$(\xi f)(\gamma) = \xi(\gamma)f\circ r(\gamma).$$
Let us observe that $L^2_{{\mathcal C}_0(X)}({\mathcal G},\lambda)$ is the space of continuous sections vanishing at infinity of a continuous field of Hilbert spaces with fibre $L^2({\mathcal G}^x,\lambda^x)$ at $x\in X$.
We let ${\mathcal C}_c({\mathcal G})$ act on ${\mathcal E}$ by the formula
$$(\Lambda(f)\xi)(\gamma) = \int f(\gamma^{-1}\gamma_1) \xi(\gamma_1) {\,\mathrm d}\lambda^{r(\gamma)}(\gamma_1).$$
Then, $\Lambda$ extends to a representation of $C^*({\mathcal G})$ in the Hilbert ${\mathcal C}_0(X)$-module ${\mathcal E}$, called the {\it regular representation of} $({\mathcal G},\lambda)$. Its range is denoted by $C^*_{r}({\mathcal G})$ and called the {\it reduced $C^*$-algebra}\footnote{Very often, the Hilbert ${\mathcal C}_0(X)$-module $L^2_{{\mathcal C}_0(X)}({\mathcal G},\lambda^{-1})$ is considered in order to define the reduced $C^*$-algebra (see for instance \cite{KS02,KS04}). We pass from this setting to ours (which we think to be more convenient for our purpose) by considering the isomorphism $U: L^2_{{\mathcal C}_0(X)}({\mathcal G},\lambda^{-1})\to L^2_{{\mathcal C}_0(X)}({\mathcal G},\lambda)$ such that $(U\xi)(\gamma) = \xi(\gamma^{-1})$.} {\it of the groupoid} ${\mathcal G}$. Note that $\Lambda(C^*({\mathcal G}))$ acts fibrewise on the corresponding continuous field of Hilbert spaces with fibres $L^2({\mathcal G}^x,\lambda^x)$ by the formula
$$(\Lambda_x(f)\xi)(\gamma) = \int_{{\mathcal G}^x} f(\gamma^{-1}\gamma_1) \xi(\gamma_1) {\,\mathrm d}\lambda^x(\gamma_1)$$
for $f\in {\mathcal C}_c({\mathcal G})$ and $\xi\in L^2({\mathcal G}^x,\lambda^x)$. Moreover, we have
$\norm{\Lambda(f)} = \sup_{x\in X} \norm{\Lambda_x(f)}$.
For a semidirect product groupoid ${\mathcal G} = X\rtimes G$ as in Example \ref{exs:groupoids} (a) we get the usual crossed products $C^*({\mathcal G}) = C_0(X)\rtimes G$ and $C^*_{r}({\mathcal G}) = C_0(X)\rtimes_r G$.
\section{Amenable groupoids}
\subsection{Amenability of measured groupoids} The existence of actions of non-amenable groups exhibiting behaviours reminiscent of amenability had already been observed in the 1970s by several authors, among them Vershik \cite{Ver} for the boundary action of $PSL(2,{\mathbb{Z}})$. The original definition of an amenable action in the measured setting is due to Zimmer \cite[Definition 1.4]{Zi3}. It was expressed in terms of an involved fixed point property. Later \cite{Zi1} it was reformulated in terms of invariant means: an action of a discrete group $\Gamma$ on a measured space $(X,\mu)$, with $\mu$ being quasi-invariant, is amenable if there exists a norm one projection $m: L^\infty(X\rtimes \Gamma, \mu\circ\lambda)\to L^\infty(X,\mu)$ such that $s.(m(f)) = m(s.f)$ for all $f\in L^\infty(X\rtimes \Gamma, \mu\circ\lambda)$ and $s\in \Gamma$, where $(s.f)(x,t) = f(s^{-1}x,s^{-1}t)$ and $(s.m(f))(x) = m(f)(s^{-1}x)$. In \cite{AEG}, this characterization was extended to the case of any second countable locally compact group. It also holds in the case of discrete measured equivalence relations. Clearly, the right framework that unifies this notion of amenability is that of measured groupoids.
\begin{defn}\label{amen:measgroupoid}\cite[Definition 3.2.8]{AD-R} A measured groupoid $({\mathcal G},\lambda,\mu)$ is said to be {\it amenable} if there exists a norm one projection $m: L^\infty({\mathcal G},\mu\circ\lambda) \to L^\infty({\mathcal G}^{(0)},\mu)$ such that $m(\psi*f) = \psi* m(f)$ for every $f\in L^\infty({\mathcal G},\mu\circ\lambda)$ and every Borel function $\psi$ on ${\mathcal G}$ such that $\sup_{x\in {\mathcal G}^{(0)}}\lambda^x(\abs{\psi})<\infty$.
\end{defn}
Recall that $(\psi*f)(\gamma) = \int \psi(\eta)f(\eta^{-1}\gamma) {\,\mathrm d} \lambda^{r(\gamma)}(\eta)$ for $f\in L^\infty({\mathcal G},\mu\circ\lambda)$ and that we have $(\psi*f)(x) = \int \psi(\eta)f(\eta^{-1}x) {\,\mathrm d} \lambda^{x}(\eta)$ for $f\in L^\infty(X,\mu)$.
The first definition of amenability for a measured groupoid is due to Renault \cite[Chap. II, \S 3]{Ren_book}. It was expressed in different terms: as a generalisation of the classical Day condition or equivalently as generalisations of the Reiter condition or of the Godement condition for groups.
\begin{thm}\label{thm:caractAmenMeas}\cite[Proposition 3.2.14]{AD-R} Let $({\mathcal G},\lambda,\mu)$ be a measured groupoid. We endow ${\mathcal G}$ with the measure $\mu\circ\lambda$. The following conditions are equivalent:
\begin{itemize}
\item[(i)] $({\mathcal G},\lambda,\mu)$ is amenable;
\item[(ii)] $[$Weak Day condition$]$ There exists a sequence $(g_n)$ of non-negative Borel functions on ${\mathcal G}$ such that $\lambda(g_n) =1$ and $\lim_n f*g_n - (\lambda(f)\circ r )g_n = 0$ in the weak topology of $L^1({\mathcal G})$ for all $f\in L^1({\mathcal G})$;
\item[(iii)] $[$Weak Reiter condition$]$ There exists a sequence $(g_n)$ of non-negative Borel functions on ${\mathcal G}$ such that $\lambda(g_n) =1$ and $\lim_n \int\abs{g_n(\gamma^{-1}\gamma_1) - g_n(\gamma_1)}{\,\mathrm d} \lambda^{r(\gamma)}(\gamma_1) = 0$ in the weak*-topology of $L^\infty({\mathcal G})$;
\item[(iv)] $[$Weak Godement condition$]$ There exists a sequence $(\xi_n)$ of Borel functions on ${\mathcal G}$ such that $\lambda(\abs{\xi_n}^2) = 1$ for all $n$ and $\lim_n \int\overline{\xi_n(\gamma_1)}\xi_n(\gamma^{-1}\gamma_1) {\,\mathrm d}\lambda^{r(\gamma)}(\gamma_1) = 1$ in the weak*-topology of $L^\infty({\mathcal G})$.
\end{itemize}
\end{thm}
\subsection{Amenability of locally compact groupoids} The (topological) amenability\footnote{From now on, amenability will implicitly mean topological amenability.} of a locally compact groupoid ${\mathcal G}$ was introduced by Renault in \cite{Ren_book}. In \cite{AD-R} it is defined as follows.
\begin{defn} \cite[Definition 2.2.1]{AD-R} We say that a locally compact groupoid ${\mathcal G}$ is {\em amenable} if there exists a net (or a sequence when ${\mathcal G}$ is $\sigma$-compact) $(m_i)$, where $m_i = (m_i^{x})_{x\in {\mathcal G}^{(0)}}$ is a family of probability measures $m_i^{x}$ on ${\mathcal G}^x$, continuous in the sense that $x\mapsto m_i^{x}(f)$ is continuous for every $f\in {\mathcal C}_c({\mathcal G})$, and such that $\lim_i\norm{\gamma m_i^{s(\gamma)} - m_i^{r(\gamma)}}_1 = 0$ uniformly on every compact subset of ${\mathcal G}$.
\end{defn}
This notion has many equivalent definitions:
\begin{thm}\label{thm:caractAmen}\cite[Proposition 2.2.13]{AD-R} Let ${\mathcal G}$ be a $\sigma$-compact locally compact groupoid. The following conditions are equivalent:
\begin{itemize}
\item[(i)] ${\mathcal G}$ is amenable;
\item[(ii)] $[$Reiter condition$]$ There exists a sequence $(g_n)$ in ${\mathcal C}_c({\mathcal G})^+$ such that $\lim_n\lambda(g_n) = 1$ uniformly on every compact subset of ${\mathcal G}^{(0)}$ and $\lim_n \int\abs{g_n(\gamma^{-1}\gamma_1) - g_n(\gamma_1)}{\,\mathrm d} \lambda^{r(\gamma)}(\gamma_1) = 0$ uniformly on every compact subset of ${\mathcal G}$;
\item[(iii)] There exists a sequence $(h_n)$ of continuous positive definite functions with compact support on ${\mathcal G}$ whose restrictions to ${\mathcal G}^{(0)}$ are bounded by $1$ and such that $\lim_n h_n = 1$ uniformly on every compact subset of ${\mathcal G}$;
\item[(iv)] $[$Godement condition$]$ There exists a sequence $(\xi_n)$ in ${\mathcal C}_c({\mathcal G})$ such that $\lambda(\abs{\xi_n}^2) \leq 1$ for all $n$ and $\lim_n \int\overline{\xi_n(\gamma_1)}\xi_n(\gamma^{-1}\gamma_1) {\,\mathrm d}\lambda^{r(\gamma)}(\gamma_1) = 1$ uniformly on every compact subset of ${\mathcal G}$.
\end{itemize}
\end{thm}
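As a sanity check (our illustration; the group and the sequence below are assumptions, not taken from the cited results), the Reiter condition of (ii) can be tested numerically for the amenable group ${\mathbb{Z}}$, viewed as a groupoid with a single unit, with $g_n$ the normalized indicator function of $[-n,n]$.

```python
# Numerical sketch (illustrative): the Reiter condition for G = Z, a
# groupoid with one unit, taking g_n = (2n+1)^{-1} 1_{[-n, n]}.  Then
# sum_k |g_n(k - t) - g_n(k)| = 2|t|/(2n+1) -> 0 for every fixed t.

def reiter_defect(n, t):
    g = {k: 1.0 / (2 * n + 1) for k in range(-n, n + 1)}
    ks = range(-n - abs(t), n + abs(t) + 1)      # covers both supports
    return sum(abs(g.get(k - t, 0.0) - g.get(k, 0.0)) for k in ks)

assert abs(reiter_defect(10, 3) - 6 / 21) < 1e-12
assert reiter_defect(1000, 3) < 0.01             # the defect tends to 0
```

In the groupoid picture the uniformity on compact subsets of ${\mathcal G}$ reduces here to uniformity over finitely many translations $t$.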
Recall that a function $h$ on ${\mathcal G}$ is {\em positive definite} or of {\em positive type} if for every $x\in {\mathcal G}^{(0)}$, $n\in {\mathbb{N}}$ and $\gamma_1,\dots, \gamma_n \in {\mathcal G}^x$, the $n \times n$ matrix $[h(\gamma_i^{-1}\gamma_j)]$ is non-negative. For instance, given $\xi$ on ${\mathcal G}$ such that $\lambda(\abs{\xi}^2)$ is bounded on ${\mathcal G}^{(0)}$, the function $\gamma \mapsto \int\overline{\xi(\gamma_1)}\xi(\gamma^{-1}\gamma_1) {\,\mathrm d}\lambda^{r(\gamma)}(\gamma_1)$ is positive definite.
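For the reader's convenience, here is the standard verification of the last claim (a routine computation, not spelled out in the sources cited above). With $x = r(\gamma_i) = r(\gamma_j)$, the left invariance $\gamma_i\lambda^{s(\gamma_i)} = \lambda^{x}$ gives
$$h(\gamma_i^{-1}\gamma_j) = \int \overline{\xi(\gamma_1)}\,\xi(\gamma_j^{-1}\gamma_i\gamma_1)\,{\,\mathrm d}\lambda^{s(\gamma_i)}(\gamma_1) = \int_{{\mathcal G}^x} \overline{\xi(\gamma_i^{-1}\eta)}\,\xi(\gamma_j^{-1}\eta)\,{\,\mathrm d}\lambda^{x}(\eta),$$
so that
$$\sum_{i,j=1}^n \overline{\alpha_i}\alpha_j\, h(\gamma_i^{-1}\gamma_j) = \int_{{\mathcal G}^x} \Big|\sum_{i=1}^n \alpha_i\,\xi(\gamma_i^{-1}\eta)\Big|^2 {\,\mathrm d}\lambda^{x}(\eta) \geq 0.$$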
\begin{rems}\label{rem:Godement} (a) In \cite{AD-R} it is assumed that ${\mathcal G}$ is second countable, but the proof of the above theorem holds as well when ${\mathcal G}$ is $\sigma$-compact. This observation will be useful later when working with the groupoid $\beta_r{\mathcal G} \rtimes {\mathcal G}$.
(b) In the above characterizations, the boundedness conditions on the sequences $(h_n)$ and $(\xi_n)$ are not necessary (see \cite[Proposition 2.2.13]{AD-R}).
\end{rems}
\begin{defn}\cite[Chap. II, Definition 3.6]{Ren_book},\cite[Definition 3.3.1]{AD-R} One says that a second countable locally compact groupoid with Haar system $({\mathcal G},\lambda)$ is {\em measurewise amenable} if for every quasi-invariant measure $\mu$ on ${\mathcal G}^{(0)}$ the measured groupoid $({\mathcal G},\lambda,\mu)$ is amenable.
\end{defn}
Topological amenability is closely related to measurewise amenability. It is not hard to see for instance that the Reiter condition of Theorem \ref{thm:caractAmen} implies the weak Reiter condition of Theorem \ref{thm:caractAmenMeas} for every quasi-invariant measure $\mu$. Therefore topological amenability implies measurewise amenability. It is a long-standing open question whether the converse is true. This has been proved for \'etale groupoids \cite[Corollary 3.3.8]{AD-R} and recently for locally compact second countable semidirect product groupoids \cite[Corollary 3.29]{BEW20}.
\begin{rem} Let us consider the case of a locally compact semidirect product groupoid\footnote{For these groupoids the $\sigma$-compactness assumption is not needed \cite[Proposition 2.5]{AD02}.} ${\mathcal G}= X\rtimes G$. Then, topological amenability is for instance spelled out as the existence of a net $(m_i)$ of weak*-continuous maps $m_i: x\mapsto m_i^{x}$ from $X$ into the space of probability measures on $G$, such that $\lim_i \norm{gm_i^{x} - m_i^{gx}}_1 = 0$ uniformly on every compact subset of $X\times G$. In this case we also say that {\em the $G$-action on $X$ is amenable}.
We set $A = {\mathcal C}_0(X)$ and for every $f\in {\mathcal C}_c(X\times G)$ we set $\tilde f(s)(x) = f(x,s)$. Then $\tilde f$ is in the space ${\mathcal C}_c(G,A)$ of continuous functions with compact support from $G$ into $A$. It is also an element of the Hilbert $A$-module $L^2(G,A)$ given as the completion of ${\mathcal C}_c(G,A)$ with respect to the $A$-valued inner product $\scal{\xi,\eta} = \int_G \xi(s)^*\eta(s) {\,\mathrm d}\lambda(s)$. Finally, for $\xi\in {\mathcal C}_c(G,A)$, we set $(\tilde\alpha_t\xi)(s)(x) = \xi(t^{-1}s)(t^{-1}x)$ and we denote by the same symbol the continuous extension of $\tilde\alpha_t$ to $L^2(G,A)$. If $h_i(\gamma)= \int \overline{\xi_i(\gamma_1)}\xi_i(\gamma^{-1}\gamma_1) {\,\mathrm d}\lambda^{r(\gamma)}(\gamma_1)$ with $\xi_i\in {\mathcal C}_c(X\times G)$, we have
$$\tilde h_i(t)(x) = \int_G \overline{\xi_i(x,s)} \xi_i(t^{-1}x,t^{-1}s){\,\mathrm d}\lambda(s) = \scal{\tilde\xi_i,\tilde\alpha_t(\tilde\xi_i)}(x).$$
It follows that the Godement condition characterizing the amenability of $X\rtimes G$ may be interpreted as the existence of a bounded net $(\eta_i)$ in $L^2(G,A)$ such that $\scal{\eta_i, \tilde\alpha_t(\eta_i)} \to 1$ uniformly on compact subsets of $G$ in the strict topology of $A$.
The first attempt to define an amenable action of a group on a non-commutative $C^*$-algebra $A$ was made in \cite{AD87}. The solution was not satisfactory, since it was limited to discrete groups and involved the bidual of $A$. Since the end of the 2010s, renewed interest in the subject has led to major advances \cite{BC, BEW, BEW20, OS20} and resulted in very nice equivalent definitions of amenability. One of these definitions is the following extension of the commutative case described above, called the approximation property (AP), first introduced in \cite{Exe, ENg} in the setting of Fell bundles over locally compact groups. An action $\alpha: G\curvearrowright A$ of a locally compact group on a $C^*$-algebra $A$ has the {\em approximation property} (AP) if there exists a bounded net (or sequence in separable cases) $(\eta_i)$ in ${\mathcal C}_c(G,A) \subset L^2(G,A)$ such that $\scal{\eta_i, a \tilde\alpha_t(\eta_i)} \to a$ in norm, uniformly on compact subsets of $G$, for every $a\in A$. Here one sets again $\tilde\alpha_t(\eta)(s) = \alpha_t(\eta(t^{-1}s))$ for $\eta\in {\mathcal C}_c(G,A)$, $s,t\in G$. For interesting properties of amenable actions of locally compact groups on $C^*$-algebras we refer to \cite{BEW20, OS20}.
\end{rem}
\section{Amenable at infinity groupoids}
As already mentioned, the property for a locally compact group $G$ of being KW-exact is equivalent to the existence of an amenable $G$-action on a compact space. In order to try to extend this fact to a locally compact groupoid ${\mathcal G}$ we need some preparation.
\subsection{First definitions} Let $X$ be a locally compact space. A {\it fibre space} over $X$ is a pair $(Y,p)$ where $Y$ is a locally compact space and $p$ is a continuous surjective map from $Y$ onto $X$. For $x\in X$ we denote by $Y^x$ the {\it fibre} $p^{-1}(x)$. We say that $(Y,p)$ is {\em fibrewise compact} if the map $p$ is proper, in the sense that $p^{-1}(K)$ is compact for every compact subset $K$ of $X$. Note that this property is stronger than requiring each fibre to be compact.
Let $(Y_i,p_i)$, $i=1,2$, be two fibre spaces over $X$. We denote by $Y_{1}\,_{p_1}\!\!*_{p_2} Y_2$ (or $Y_1*Y_2$ when there is no ambiguity) the {\it fibred product}\index{fibred product}
$\set{(y_1,y_2)\in Y_1\times Y_2: p_1(y_1) = p_2(y_2)}$
equipped with the topology induced by the product topology. We say that a continuous map $\varphi: Y_1\to Y_2$ is a {\it morphism of fibre spaces} if $p_2\circ \varphi = p_1$.
\begin{defn}\label{def:Gspace} Let ${\mathcal G}$ be a locally compact groupoid. A {\it left} ${\mathcal G}$-{\it space}\index{${\mathcal G}$-space} is a fibre space $(Y,p)$ over $X = {\mathcal G}^{(0)}$, equipped with a continuous map $(\gamma, y) \mapsto \gamma y$ from ${\mathcal G}\,_s\!*_pY$ into $Y$, satisfying the following conditions:
\begin{itemize}
\item $p(\gamma y) = r(\gamma)$ for $(\gamma, y) \in {\mathcal G}\,_s\!*_pY$, and $p(y)y = y$ for $y\in Y$;
\item if $(\gamma_1, y) \in {\mathcal G}\,_s\!*_pY$ and $(\gamma_2,\gamma_1)\in {\mathcal G}^{(2)}$, then $(\gamma_2\gamma_1)y = \gamma_2(\gamma_1 y)$.
\end{itemize}
\end{defn}
Given such a ${\mathcal G}$-space $(Y,p)$, we associate to it a groupoid $Y\rtimes {\mathcal G}$, called the semidirect product groupoid of $Y$ by ${\mathcal G}$. It is defined as in the case of group actions except that as a topological space it is the fibred product $Y\!_p\!*_r {\mathcal G}$ over $X = {\mathcal G}^{(0)}$. Although $p$ is not assumed to be an open map, the range map $(y,\gamma) \mapsto y$ from $Y\rtimes {\mathcal G}$ onto $Y$ is open since the range map $r:\gamma \mapsto r(\gamma)$ is open. Moreover, if ${\mathcal G}$ has a Haar system $(\lambda^x)_{x\in X}$, then $Y\rtimes {\mathcal G}$ has the canonical Haar system $y\mapsto \delta_y\times \lambda^{p(y)}$ (identified with $\lambda^{p(y)}$ on ${\mathcal G}^{p(y)}$) (see \cite[Proposition 1.4]{AD16}). Note that $Y\rtimes {\mathcal G}$ is an \'etale groupoid when ${\mathcal G}$ is \'etale.
We say that the {\em ${\mathcal G}$-space $(Y,p)$ is amenable} if the semidirect product groupoid $Y\rtimes {\mathcal G}$ is amenable. Note that if ${\mathcal G}$ is an amenable groupoid, every ${\mathcal G}$-space is amenable \cite[Corollary 2.2.10]{AD-R}.
There is a subtlety about the definition of amenability at infinity which leads us to introduce two notions. We do not know whether they are equivalent in general.
\begin{defn}\label{def:ameninf} Let ${\mathcal G}$ be a locally compact groupoid and let $X= {\mathcal G}^{(0)}$. We say that
\begin{itemize}
\item[(i)] ${\mathcal G}$ is {\em strongly amenable at infinity} if there exists an amenable fibrewise compact ${\mathcal G}$-space $(Y,p)$ with a continuous section $\sigma : X\to Y$ of $p$;
\item[(ii)] ${\mathcal G}$ is {\em amenable at infinity} if there exists an amenable fibrewise compact ${\mathcal G}$-space.
\end{itemize}
\end{defn}
\begin{exs}\label{rem:invGE} (a) Every locally compact amenable groupoid ${\mathcal G}$ is strongly amenable at infinity since the left action of ${\mathcal G}$ on its unit space is amenable.
(b) It is easily seen that the semidirect product groupoid ${\mathcal G} = X\rtimes G$ relative to an action of a KW-exact (hence amenable at infinity) locally compact group $G$ on a locally compact space $X$ is strongly amenable at infinity \cite[Proposition 4.3]{AD16}. This is also true for partial actions\footnote{For the definition see \cite[\S I.2, \S I.5]{Exel_Book}.} of exact discrete groups \cite[Proposition 4.23]{AD16}.
\end{exs}
It is useful to have a criterion of amenability at infinity which does not involve $Y$ but only ${\mathcal G}$. Before proceeding further we need to introduce some notation and definitions. We set ${\mathcal G}*_r {\mathcal G} = \set{(\gamma,\gamma_1) \in {\mathcal G}\times {\mathcal G} : r(\gamma) = r(\gamma_1)}$. A subset of ${\mathcal G}*_r {\mathcal G}$ will be called a {\it tube} if its image under the map $(\gamma,\gamma_1) \mapsto \gamma^{-1}\gamma_1$ is relatively compact in ${\mathcal G}$. We denote by ${\mathcal C}_t({\mathcal G}*_r{\mathcal G})$ the space of continuous bounded functions on ${\mathcal G}*_r{\mathcal G}$ with support in a tube.
We say that a function $k: {\mathcal G} *_r {\mathcal G} \to {\mathbb{C}}$ is a {\it positive definite kernel} if for every $x\in X$, $n \in {\mathbb{N}}$ and $\gamma_1,\dots,\gamma_n \in {\mathcal G}^x$, the matrix $[k(\gamma_i,\gamma_j)]$ is non-negative, that is
$$\sum_{i,j=1}^n \overline{\alpha_i}\alpha_j k(\gamma_i,\gamma_j) \geq 0$$
for $\alpha_1,\dots,\alpha_n \in {\mathbb{C}}$.
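Let us also record the elementary link with positive definite functions (immediate from the definitions): if $h$ is a positive definite function on ${\mathcal G}$, then $k(\gamma_1,\gamma_2) = h(\gamma_1^{-1}\gamma_2)$ defines a positive definite kernel on ${\mathcal G}*_r{\mathcal G}$, since for $\gamma_1,\dots,\gamma_n\in{\mathcal G}^x$ the matrix $[k(\gamma_i,\gamma_j)] = [h(\gamma_i^{-1}\gamma_j)]$ is non-negative by definition. Moreover, $k$ is supported in a tube as soon as $h$ has compact support, because the image of the support of $k$ under $(\gamma_1,\gamma_2)\mapsto \gamma_1^{-1}\gamma_2$ is contained in the support of $h$.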
In the case of groups (for which amenability at infinity coincides with strong amenability at infinity) let us recall the following result:
\begin{thm}\label{caractinfgroup} A (second countable) locally compact group $G$ is amenable at infinity if and only if there exists a net $(k_i)$ of continuous positive definite kernels $k_i: G\times G \to {\mathbb{C}}$ with support in tubes such that $\lim_i k_i = 1$ uniformly on tubes.
\end{thm}
When $G$ is any discrete group this is proved in \cite{Oza} and when $G$ is a locally compact second countable group this is proved in \cite[Theorem 2.3, Corollary 2.9]{DL} which improves \cite[Proposition 3.5]{AD02}. One important ingredient in the proof of the above theorem is the use of a universal compact $G$-space, namely the Stone-\v Cech compactification $\beta G$ of $G$ if $G$ is discrete and an appropriate variant of it in general.
\subsection{Fibrewise compactifications of ${\mathcal G}$-spaces} In order to extend Theorem \ref{caractinfgroup} to the case of groupoids we first need some information about fibrewise compactifications of fibre spaces.
\begin{defn}\label{def:fibComp}
A {\it fibrewise compactification} of a fibre space $(Y,p)$ over a locally compact space $X$ is a triple $(Z, \varphi, q)$
where $Z$ is a locally compact space, $q: Z \rightarrow X$ is a continuous
{\it proper} map and $\varphi : Y \rightarrow Z$ is a homeomorphism onto an open
dense subset of $Z$ such that $p = q \circ \varphi$.
\end{defn}
We denote by ${\mathcal C}_0(Y,p)$ the $C^*$-algebra of continuous bounded functions $g$ on $Y$ such that for every $\varepsilon >0$ there exists a compact subset $K$ of $X$ satisfying $\abs{g(y)} \leq \varepsilon$ if $y \notin p^{-1}(K)$. We denote by $\beta_pY$ the Gelfand spectrum of ${\mathcal C}_0(Y,p)$. The inclusion $f\mapsto f\circ p$ from ${\mathcal C}_0(X)$ into ${\mathcal C}_0(Y,p)$ defines a surjection $p_\beta$ from $\beta_pY$ onto $X$. It is easily checked that $(\beta_pY,p_\beta)$ is fibrewise compact. We call it the
{\em Stone-\v Cech fibrewise compactification of} $(Y,p)$. Note that when $X$ is compact, then ${\mathcal C}_0(Y,p)$ is the $C^*$-algebra of continuous bounded functions on $Y$ and $\beta_pY$ is the usual Stone-\v Cech compactification $\beta Y$ of $Y$.
We observe that even if $p: Y\to X$ is open, its extension $p_\beta : \beta_pY\to X$ is not always open. Consider for instance $Y =( [0,1]\times \set{0}) \sqcup (]1/2, 1]\times \set{1}) \subset {\mathbb{R}}^2$ and let $p$ be the first projection onto $X= [0,1]$. Then $\beta_p Y = \beta Y =( [0,1]\times \set{0}) \sqcup (\beta]1/2, 1]\times \set{1})$. The fibres of $\beta_p Y$ are the same as those of $Y$ except $(\beta_p Y)^{1/2} = (\set{1/2}\times \set{0})\sqcup \big((\beta]1/2,1]\setminus ]1/2,1])\times\set{1}\big)$. Then $\beta_pY \setminus ([0,1]\times \set{0})$ is open, while its image by $p_\beta$ is $[1/2,1]$, which is not open in $X$.
The next proposition shows that $(\beta_p Y, p_\beta)$ is the solution of a universal problem.
\begin{prop}\label{prop:universal}\cite[Proposition A.4]{AD16} Let $(Y,p)$ and $(Y_1,p_1)$ be two fibre spaces over $X$, where $(Y_1,p_1)$ is fibrewise compact. Let $\varphi_1: (Y,p) \to (Y_1,p_1)$ be a morphism. There exists a unique continuous map $\Phi_1 : \beta_p Y \to Y_1$ which extends $\varphi_1$. Moreover, $\Phi_1$ is proper and is a morphism of fibre spaces, that is, $p_\beta = p_1\circ \Phi_1$.
\end{prop}
We assume now that $(Y,p)$ is a ${\mathcal G}$-space. A {\it ${\mathcal G}$-equivariant fibrewise compactification} of the ${\mathcal G}$-space $(Y,p)$ is a fibrewise compactification $(Z,\varphi, q)$ of $(Y,p)$ such that $(Z,q)$ is a ${\mathcal G}$-space satisfying $\varphi(\gamma y) = \gamma\varphi(y)$ for every $(\gamma, y)\in {\mathcal G}\,_s\!*_pY$.
We need to extend the ${\mathcal G}$-action on $(Y,p)$ to a continuous ${\mathcal G}$-action on $(\beta_p Y,p_\beta)$. Even in the case of a non-discrete group action $G\curvearrowright Y$ this is not possible in general: we have to replace $\beta Y$ by the spectrum of the $C^*$-algebra of bounded left-uniformly continuous functions on $G$ \cite{AD02}.
In the groupoid case it is more complicated, and {\bf we will limit ourselves to the case of \'etale groupoids}.
\begin{prop}\label{prop:max_min} \cite[Proposition 2.5]{AD16} Let $(Y,p)$ be a ${\mathcal G}$-space, where ${\mathcal G}$ is an \'etale groupoid. The structure of ${\mathcal G}$-space of $(Y,p)$ extends in a unique way to the Stone-\v Cech fibrewise compactification $(\beta_p Y, p_\beta)$ and makes it a ${\mathcal G}$-equivariant fibrewise compactification.
\end{prop}
\begin{prop}\label{prop:universal1}\cite[Proposition 2.6]{AD16} Let ${\mathcal G}$ be an \'etale groupoid and $(Y,p)$, $(Y_1,p_1)$ be two ${\mathcal G}$-spaces. We assume that $(Y_1,p_1)$ is fibrewise compact. Let $\varphi_1: (Y,p) \to (Y_1,p_1)$ be a ${\mathcal G}$-equivariant morphism. The unique continuous map $\Phi_1 : \beta_p Y \to Y_1$ which extends $\varphi_1$ is ${\mathcal G}$-equivariant.
\end{prop}
\subsection{Amenability at infinity for \'etale groupoids} We view the fibre space $r:{\mathcal G} \to {\mathcal G}^{(0)}$ in an obvious way as a left ${\mathcal G}$-space. Its ${\mathcal G}$-equivariant fibrewise compactification $(\beta_r {\mathcal G}, r_\beta)$ will play an important role in the sequel because of the following observation.
\begin{prop}\label{prop:SC} An \'etale groupoid ${\mathcal G}$ is strongly amenable at infinity if and only if the Stone-\v Cech fibrewise compactification $(\beta_r {\mathcal G}, r_\beta)$ is an amenable ${\mathcal G}$-space.
\end{prop}
\begin{proof} In one direction, we note that the inclusions ${\mathcal G}^{(0)}\subset {\mathcal G}\subset \beta_r {\mathcal G}$ provide a continuous section for $r_\beta$, and therefore the amenability of the ${\mathcal G}$-space $\beta_r{\mathcal G}$ implies the strong amenability at infinity of ${\mathcal G}$. Conversely, assume that $(Y,p,\sigma)$ satisfies the conditions of Definition \ref{def:ameninf}. We define a continuous ${\mathcal G}$-equivariant morphism $\varphi : ({\mathcal G},r)\to (Y,p)$ by
$$\varphi(\gamma) = \gamma\, \sigma\circ s(\gamma).$$
Then, by Proposition \ref{prop:universal1}, $\varphi$ extends in a unique way to a continuous ${\mathcal G}$-equivariant morphism $\Phi$ from $(\beta_r{\mathcal G}, r_\beta)$ into $(Y,p)$.
Note that ${\mathbb{P}}hi(\beta_r{\mathcal G})$ is a closed ${\mathcal G}$-invariant subset of $Y$. Now, it follows from \cite[Proposition 2.2.9 (i)]{AD-R} that ${\mathcal G}\curvearrowright \beta_r{\mathcal G}$ is amenable, since ${\mathcal G}\curvearrowright {\mathbb{P}}hi(\beta_r{\mathcal G})$ is amenable.
\end{proof}
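Let us sketch, for the reader's convenience, why the map $\varphi$ used in the proof above is ${\mathcal G}$-equivariant: for a composable pair $(\gamma_1,\gamma)$ we have $s(\gamma_1\gamma) = s(\gamma)$, and therefore
$$\varphi(\gamma_1\gamma) = \gamma_1\gamma\, \sigma\circ s(\gamma_1\gamma) = \gamma_1\big(\gamma\, \sigma\circ s(\gamma)\big) = \gamma_1\varphi(\gamma).$$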
The space $\beta_r{\mathcal G}$ has the serious drawback that it is not second countable in most cases; however, it is $\sigma$-compact when ${\mathcal G}$ is second countable. On the other hand, it has the advantage of being intrinsic. Moreover, it is possible to build a {\it second countable} amenable fibrewise compact ${\mathcal G}$-space out of any amenable fibrewise compact ${\mathcal G}$-space, when ${\mathcal G}$ is second countable and \'etale \cite[Lemma 4.9]{AD16}.
\begin{thm}\label{prop:amen_inf} Let ${\mathcal G}$ be a second countable \'etale groupoid. The following conditions are equi\-valent:
\begin{itemize}
\item[(i)] ${\mathcal G}$ is strongly amenable at infinity;
\item[(ii)] there exists a sequence $(k_n)$ of bounded positive definite continuous
kernels on ${\mathcal G} *_r {\mathcal G}$ supported in tubes such that
\begin{itemize}
\item[(a)] for every $n$, the restriction of $k_n$ to the diagonal of ${\mathcal G} *_r {\mathcal G}$ is uniformly bounded by $1$;
\item[(b)] $\lim_n k_n = 1$ uniformly on tubes.
\end{itemize}
\end{itemize}
\end{thm}
\begin{proof} By Theorem \ref{thm:caractAmen}, the groupoid $\beta_r{\mathcal G}\rtimes {\mathcal G}$ is amenable if and only if there exists a net $(h_n)$ of continuous positive definite functions in ${\mathcal C}_c(\beta_r{\mathcal G}\rtimes {\mathcal G})$, whose restrictions to the set of units are bounded by $1$, such that
$\lim_n h_n = 1$ uniformly on every compact subset of $\beta_r{\mathcal G}\rtimes {\mathcal G}$.
For $(\gamma_1,\gamma_2)\in {\mathcal G}*_r{\mathcal G}$ we set $k_n(\gamma_1,\gamma_2) = h_n(\gamma_1^{-1}, \gamma_1^{-1}\gamma_2)$. Then we check that $k_n$ is a positive definite kernel bounded by $1$ on the diagonal, supported in a tube, and that $\lim_n k_n = 1$ uniformly on tubes.
The converse is proved similarly (see \cite[Theorem 4.13, Theorem 4.15]{AD16} for details).
\end{proof}
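For instance, the bound on the diagonal required in (ii) (a) is immediate from the formula for $k_n$ in the proof above: for $\gamma\in{\mathcal G}$,
$$k_n(\gamma,\gamma) = h_n(\gamma^{-1}, \gamma^{-1}\gamma) = h_n\big(\gamma^{-1}, s(\gamma)\big) \leq 1,$$
since $(\gamma^{-1}, s(\gamma))$ is a unit of the groupoid $\beta_r{\mathcal G}\rtimes {\mathcal G}$, where the $h_n$ are bounded by $1$.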
As observed in \cite[Remark 3.4, Remark 4.16]{AD16} it suffices in (ii) (a) above to require that each $k_n$ is bounded.
\section{About exactness for groupoids}
\subsection{Equivalence of several definitions of exactness for \'etale groupoids}
\begin{defn}
Let ${\mathcal G}$ be a locally compact groupoid. We say that ${\mathcal G}$ is {\it KW-exact} if for every ${\mathcal G}$-equivariant exact sequence
$0\to I \to A \to B \to 0$
of ${\mathcal G}$-$C^*$-algebras, the corresponding sequence
$$0 \to C^*_{r}({\mathcal G},I) \to C^*_{r}({\mathcal G},A) \to C^*_{r}({\mathcal G},B)\to 0$$
of reduced crossed products is exact.
We say that ${\mathcal G}$ is {\it $C^*$-exact} if $C^*_{r}({\mathcal G})$ is an exact $C^*$-algebra.
\end{defn}
For the definition of actions of locally compact groupoids on $C^*$-algebras and the construction of the corresponding crossed products we refer for instance to \cite{KS04} or \cite[\S 6.2]{AD16}. As in the case of groups, we easily see that amenability at infinity implies KW-exactness, which in turn implies $C^*$-exactness (see for instance \cite[Theorem 7.2]{AD02} for groups and \cite[\S 7]{AD16} for groupoids). The main problem is to see whether $C^*$-exactness of an \'etale groupoid implies its amenability at infinity, as is the case for discrete groups.
In this section we will adapt to the \'etale groupoid case our proof of the fact that an inner amenable locally compact $C^*$-exact group is amenable at infinity \cite[Theorem 7.3]{AD02}. We first need to define inner amenability for groupoids.
\begin{defn} Let ${\mathcal G}$ be a locally compact groupoid. Following \cite[Definition 2.1]{Roe}, we say that a closed subset $A$ of ${\mathcal G} \times {\mathcal G}$ is {\it proper} if for every compact subset $K$ of ${\mathcal G}$, the sets $(K\times {\mathcal G}) \cap A$ and $({\mathcal G} \times K) \cap A$ are compact. We say that a function $f: {\mathcal G}\times {\mathcal G} \to {\mathbb{C}}$ is {\it properly supported} if its support is proper.
\end{defn}
Given a groupoid ${\mathcal G}$, let us observe that the product ${\mathcal G}\times {\mathcal G}$ has an obvious structure of groupoid, with $X\times X$ as set of units, where $X= {\mathcal G}^{(0)}$. Observe that a map $f:{\mathcal G}\times {\mathcal G}\to {\mathbb{C}}$
is positive definite if and only if, given an integer $n$, $(x,y)\in X\times X$ and $\gamma_1,\dots, \gamma_n\in {\mathcal G}^x$, $\eta_1,\dots,\eta_n\in {\mathcal G}^y$, the matrix $[f(\gamma_i^{-1}\gamma_j, \eta_i^{-1}\eta_j)]_{i,j}$ is non-negative.
\begin{defn}\label{def:wia} We say that a locally compact groupoid ${\mathcal G}$ is {\it inner amenable}\index{inner amenable l. c. groupoid} if for every compact subset $K$ of ${\mathcal G}$ and for every $\varepsilon >0$
there exists a continuous positive definite function $f$ on the product groupoid ${\mathcal G}\times {\mathcal G}$, properly supported, such that $f(x,y)\leq 1$ for all $x,y\in {\mathcal G}^{(0)}$ and such that $|f(\gamma,\gamma) - 1| < \varepsilon$ for all $\gamma \in K$.
\end{defn}
This terminology is justified by the fact that for a locally compact group the above property is equivalent to the notion of inner amenability introduced in Section 1. That this property for groups implies inner amenability is proved in \cite{CT}; the reverse is almost immediate \cite[Proposition 4.6]{AD02}.
Every amenable locally compact groupoid ${\mathcal G}$ is inner amenable since the groupoid ${\mathcal G}\times {\mathcal G}$ is amenable and therefore Theorem \ref{thm:caractAmen} applies to this groupoid. Every closed subgroupoid of an inner amenable groupoid is inner amenable \cite[Corollary 5.6]{AD16}.
Every semidirect product groupoid $X\rtimes G$ is inner amenable as soon as $G$ is an inner amenable locally compact group \cite[Corollary 5.9]{AD16}. We do not know whether every \'etale groupoid is inner amenable.
\begin{thm}\label{thm:equiv} Let ${\mathcal G}$ be a second countable inner amenable \'etale groupoid. Then the following conditions are equivalent:
\begin{itemize}
\item[(1)] ${\mathcal G}$ is strongly amenable at infinity.
\item[(2)] ${\mathcal G}$ is amenable at infinity.
\item[(3)] $\beta_r {\mathcal G}\rtimes {\mathcal G}$ is nuclear.
\item[(4)] $\beta_r {\mathcal G}\rtimes {\mathcal G}$ is exact.
\item[(5)] ${\mathcal G}$ is KW-exact.
\item[(6)] $C^*_{r}({\mathcal G})$ is exact.
\end{itemize}
\end{thm}
The following implications are immediate or already known:
$$\xymatrix{(1) \ar@{=>}[r] \ar@{=>}[d] & (2)\ar@{=>}[r] &(5)\ar@{=>}[d]\\
(3) \ar@{=>}[r] & (4) \ar@{=>}[r] & (6)} $$
The implication (1) $\Rightarrow$ (3) is proved in \cite[Corollary 6.2.14]{AD-R} for second countable locally compact groupoids, but this result extends to the groupoid $\beta_r {\mathcal G}\rtimes {\mathcal G}$ when ${\mathcal G}$ is second countable locally compact and \'etale (see \cite[Proposition 7.2]{AD16}).
It remains to show that (6) implies (1). We give below an idea of the proof which is detailed in \cite[\S 8]{AD16}.
$\blacktriangleright$ The first step is to extend Kirchberg's characterization of exact $C^*$-algebras as being nuclearly embeddable into some
${\mathcal B}(H)$ as follows.
\begin{lem}\cite[Lemma 8.1]{AD16}\label{lem:Kirch} Let $A$, $B$ be two separable $C^*$-algebras, where $B$ is nuclear. Let ${\mathcal E}$ be a countably generated Hilbert $C^*$-module over $B$. Let $\iota : A \to {\mathcal B}_B({\mathcal E})$ be an embedding of $C^*$-algebras. Then $A$ is exact if and only if $\iota$ is nuclear.
\end{lem}
The two main ingredients of the proof of this lemma are the Kasparov absorption theorem and the Kasparov-Voiculescu theorem \cite[Theorem 2, Theorem 6]{Kasp} that allow us to reduce the situation to the case of Hilbert spaces.
$\blacktriangleright$ The second step is the following approximation theorem. Recall that a completely positive contraction $\Phi:A\to B$ between two $C^*$-algebras is {\it factorable} if there exists an integer $n$ and completely positive contractions $\psi: A \to M_{n}({\mathbb{C}})$, $\varphi : M_{n}({\mathbb{C}}) \to B$ such that $\Phi = \varphi \circ \psi$. A map $\Psi :C^*_{r}({\mathcal G}) \to B$ is said to have a {\it compact support} if there exists a compact subset $K$ of ${\mathcal G}$ such that $\Psi(f) = 0$ for every $f \in {\mathcal C}_c({\mathcal G})$ with $(\hbox{Supp}\, f) \cap K = \emptyset$.
\begin{thm}\cite[Corollary 8.4]{AD16}\label{cor:approx} Let $B$ be a $C^*$-algebra and let $\Phi : C^{*}_{r}({\mathcal G}) \to B$ be a nuclear completely positive contraction.
Then for every $\varepsilon > 0$ and every $a_1,\dots,a_k \in C^{*}_{r}({\mathcal G})$
there exists a factorable completely positive contraction $\Psi : C^{*}_{r}({\mathcal G}) \to B$, with compact support,
such that $$\| \Psi(a_i) - \Phi(a_i) \| \leq \varepsilon \,\,\, \hbox{for}\,\,\, i = 1,\dots,k.$$
\end{thm}
$\blacktriangleright$ Finally we need the following result due to Jean Renault (private communication). Given $f:{\mathcal G}\times {\mathcal G} \to {\mathbb{C}}$, we set $f_\gamma(\gamma') = f(\gamma, \gamma')$.
\begin{lem}\cite[Lemma 8.5]{AD16} Let ${\mathcal G}$ be a locally compact groupoid.
\begin{itemize}
\item[(a)] Let $f\in {\mathcal C}_c({\mathcal G})$ be a continuous positive definite function. Then, $f$ viewed as an element of $C^*_{r}({\mathcal G})$ is a positive element.
\item[(b)] Let $f:{\mathcal G}\times {\mathcal G} \to {\mathbb{C}}$ be a properly supported positive definite function. Then $\gamma\mapsto f_\gamma$ is a continuous positive definite function from ${\mathcal G}$ into $C^*_{r}({\mathcal G})$.
\end{itemize}
\end{lem}
$\blacktriangleright$ We can now proceed to the proof of (6) $\Rightarrow$ (1) in Theorem \ref{thm:equiv}.
\begin{proof}[Proof of (6) $\Rightarrow$ (1)]
We fix a compact subset $K$ of ${\mathcal G}$ and $\varepsilon >0$. We want to find a continuous bounded positive definite kernel $k\in {\mathcal C}_t({\mathcal G}*_r{\mathcal G})$ such that $k(\gamma,\gamma) \leq 1$ for all $\gamma\in {\mathcal G}$ and $\abs{k(\gamma,\gamma_1) - 1} \leq \varepsilon$ whenever $\gamma^{-1}\gamma_1 \in K$ (see Theorem \ref{prop:amen_inf}).
We set ${\mathcal E} = \ell^2_{{\mathcal C}_0(X)}({\mathcal G})$ with $X = {\mathcal G}^{(0)}$. Recall that $\lambda^x$ is the counting measure on ${\mathcal G}^x$. We first choose a bounded, continuous positive definite function $f$ on ${\mathcal G}\times {\mathcal G}$, properly supported,
such that $|f(\gamma,\gamma) - 1| \leq \varepsilon/2$ for $\gamma \in K$ and $f(x,y)\leq 1$ for $(x,y)\in X\times X$. By Lemma \ref{lem:Kirch} the regular representation $\Lambda$ is nuclear. Then, using Theorem \ref{cor:approx}, we find a compactly supported completely positive contraction $\Phi: C^*_{r}({\mathcal G}) \to {\mathcal B}_{{\mathcal C}_0(X)}({\mathcal E})$ such that\footnote{We write $f_\gamma$ instead of $\Lambda(f_\gamma)$
for simplicity of notation.}
$$\norm{\Phi(f_\gamma) - f_\gamma} \leq \varepsilon/2$$
for $\gamma \in K$.
We also choose a continuous function $\xi : X \to [0,1]$ with compact support such that $\xi(x) = 1$ if $x\in s(K)\cup r(K)$.
Let $(\gamma,\gamma_1)\in {\mathcal G}*_r{\mathcal G}$. We choose an open bisection $S$ such that $\gamma\in S$ and a continuous function $\varphi: X\to [0,1]$, with compact support in $r(S)$, such that $\varphi (x) = 1$ on a neighborhood of $r(\gamma)$. We denote by $\xi_\varphi$ the continuous function on ${\mathcal G}$ with compact support (and thus $\xi_\varphi \in {\mathcal E}$) such that
$$\xi_\varphi(\gamma') = 0 \,\,\hbox{if}\,\, \gamma'\notin S,\quad \xi_\varphi(\gamma') = \varphi\circ r(\gamma')\,\xi\circ s(\gamma') \,\,\hbox{if}\,\,\gamma'\in S.$$
Note that $\norm{\xi_\varphi}_{\mathcal E} \leq 1$.
We define $\xi_{\varphi_1}$ similarly with respect to $\gamma_1$.
Then we set
\begin{align*}
k(\gamma,\gamma_1) &= \langle \xi_\varphi,
\Phi(f_{\gamma^{-1}\gamma_{1}})\xi_{\varphi_1}\rangle (r(\gamma))\\
&= \xi\circ s(\gamma)\big(\Phi(f_{\gamma^{-1}\gamma_{1}})\xi_{\varphi_1}\big)(\gamma).
\end{align*}
We observe that $k(\gamma,\gamma_1)$ does not depend on the choices of $S,\varphi, S_1,\varphi_1$.
Since $\gamma \mapsto f_\gamma$ is a continuous positive definite function from ${\mathcal G}$ into $C^*_{r}({\mathcal G})$ and since $\Phi$ is completely positive, we see that $k$ is a continuous and positive definite kernel. Moreover, there is a compact subset $K_1$ of ${\mathcal G}$ such that $\Phi(f_\gamma) = 0$ when $\gamma\notin K_1$, because $\Phi$ is compactly supported, and $f$ is properly supported. It follows that $k$ is supported in a tube.
We fix $(\gamma,\gamma_1)\in {\mathcal G}*_r{\mathcal G}$ such that $\gamma^{-1}\gamma_1 \in K$. Then we have
$$\abs{k(\gamma,\gamma_1) -1}\leq \varepsilon/2 + \abs{ \langle \xi_\varphi,f_{\gamma^{-1}\gamma_{1}}\xi_{\varphi_1}
\rangle(r(\gamma)) -1},$$
and
$$
\langle \xi_\varphi, f_{\gamma^{-1}\gamma_{1}}\xi_{\varphi_1}\rangle(r(\gamma))
= \xi\circ s(\gamma)\,\xi\circ s(\gamma_1)\,f(\gamma^{-1}\gamma_{1},\gamma^{-1}\gamma_{1}).
$$
Observe that $s(\gamma)\in r(K)$ and $s(\gamma_1)\in s(K)$ and therefore $ \xi\circ s(\gamma) = 1= \xi\circ s(\gamma_1)$.
It follows that
$$\abs{k(\gamma,\gamma_1) -1}\leq \varepsilon/2 + \abs{f(\gamma^{-1}\gamma_{1},\gamma^{-1}\gamma_{1})-1}\leq \varepsilon.$$
To end the proof it remains to check that $k$ is a bounded kernel. Since this kernel is positive definite, it suffices to show that $\gamma\mapsto k(\gamma,\gamma)$ is bounded on ${\mathcal G}$. We have
$$k(\gamma,\gamma) = \langle \xi_\varphi,\Phi(f_{s(\gamma)})\xi_{\varphi}\rangle (r(\gamma)) \leq \norm{\Phi(f_{s(\gamma)})}.$$
Our claim follows, since $\Phi(f_{s(\gamma)})= 0$ when $s(\gamma)\notin K_1\cap X$ and $x\mapsto f_x$ is continuous from the compact set $K_1\cap X$ into $C_r^{*}({\mathcal G})$.
\end{proof}
\begin{rem}\label{rem:partial} Let $\alpha : \Gamma\curvearrowright X$ be an action of a discrete group on a locally compact space $X$. Since the groupoid $X\rtimes \Gamma$ is inner amenable, Theorem \ref{thm:equiv} applies. Therefore $C_0(X)\rtimes_r \Gamma$ is exact if and only if the groupoid $X\rtimes \Gamma$ is KW-exact.
More generally, this holds for any partial action such that the domains of the partial homeomorphisms $\alpha_t$ are closed (in addition to being open). Indeed, it is not difficult to show that the groupoid $X\rtimes \Gamma$ is inner amenable (directly, or using the fact that such partial actions admit a Hausdorff globalisation \cite[Proposition 5.7]{Exel_Book}).
For general partial actions of $\Gamma$ the situation is not clear. We do not know whether $X\rtimes \Gamma$ is inner amenable in this case. If
$\Gamma$ is exact, the semidirect product groupoid $X\rtimes \Gamma$ is strongly amenable at infinity \cite[Proposition 4.23]{AD16} and therefore $C^*_{r}(X\rtimes \Gamma)$ is exact. This had been previously shown in \cite[Corollary 2.2]{AEK} by using Fell bundles.
\end{rem}
\subsection{Inner exactness} We now introduce a very weak notion of exactness. Let us first recall some facts. Let ${\mathcal G}$ be a locally compact groupoid. Recall that a subset $E$ of $X= {\mathcal G}^{(0)}$ is said to be {\it invariant} if $s(\gamma)\in E$ if and only if $r(\gamma)\in E$.
Let $F$ be a closed invariant subset of $X$ and set $U = X\setminus F$. It is well-known that the inclusion
$\iota : {\mathcal C}_c({\mathcal G}(U) )\to {\mathcal C}_c({\mathcal G})$ extends to an injective homomorphism from $C^*({\mathcal G}(U))$ into $C^*({\mathcal G})$ and from
$C^*_{r}({\mathcal G}(U))$ into $C^*_{r}({\mathcal G})$. Similarly, the restriction map $\pi: {\mathcal C}_c({\mathcal G}) \to {\mathcal C}_c({\mathcal G}(F))$ extends to a surjective homomorphism from $C^*({\mathcal G})$ onto $C^*({\mathcal G}(F))$ and from
$C^*_{r}({\mathcal G})$ onto $C^*_{r}({\mathcal G}(F))$. Moreover the sequence
$$0 \rightarrow C^*({\mathcal G}(U)) \rightarrow C^*({\mathcal G}) \rightarrow C^*({\mathcal G}(F)) \rightarrow 0$$
is exact. For these facts, we refer to \cite[page 102]{Ren_book}, or to \cite[Proposition 2.4.2]{Ram} for a detailed proof. On the other hand, the corresponding sequence
\begin{equation}\label{eq:ie}
0 \rightarrow C^*_{r}({\mathcal G}(U)) \rightarrow C^*_{r}({\mathcal G}) \rightarrow C^*_{r}({\mathcal G}(F)) \rightarrow 0
\end{equation}
with respect to the reduced groupoid $C^*$-algebras is not always exact, as shown in \cite[Remark 4.10]{Ren91} (see also Proposition \ref{prop:HLS} below).
\begin{defn}\label{def:inamen} A locally compact groupoid such that the sequence \eqref{eq:ie} is exact for every closed invariant subset $F$ of $X$ is called KW-{\it inner exact}, or simply {\it inner exact}.\index{inner exact groupoid}
\end{defn}
We will see that the class of inner exact groupoids plays a role in the study of the (WCP). It is also interesting in itself and now plays a role in other contexts (see for instance \cite{BL}, \cite{BCS}, \cite{BEM}). This class is quite large. It includes all locally compact groups and more generally the groupoids that act with dense orbits on their space of units. This class is stable under equivalence of groupoids \cite[Theorem 6.1]{Lal17}. Of course, KW-exact groupoids are inner exact.
\subsection{The case of group bundle groupoids} We first need to recall some definitions.
\begin{defn}\label{def:bundle_C*}\cite[Definition 1.1]{KW95} A {\it field} (or bundle)
{\it of $C^*$-algebras over a locally compact space} $X$ is a triple ${\mathcal A} = (A, \set{\pi_x: A \to A_x}_{x\in X},X)$ where $A$, $A_x$ are $C^*$-algebras, and where $\pi_x$ is a surjective $*$-homomorphism such that
\begin{itemize}
\item[(i)] $\set{\pi_x: x\in X}$ is faithful, that is, $\norm{a} = \sup_{x\in X}\norm{\pi_x(a)}$ for every $a\in A$;
\item[(ii)] for $f\in {\mathcal C}_0(X)$ and $a\in A$, there is an element $fa\in A$ such that $\pi_x(fa) = f(x)\pi_x(a)$ for $x\in X$;
\item[(iii)] the inclusion of ${\mathcal C}_0(X)$ into the center of the multiplier algebra of $A$ is non-degenerate.
\end{itemize}
We say that the field is (usc) {\it upper semi-continuous} (resp. (lsc) {\it lower semi-continuous}) if the function $x\mapsto \norm{\pi_x(a)}$ is upper semi-continuous (resp. lower semi-continuous) for every $a\in A$.
If for each $a\in A$, the function $x\mapsto \norm{\pi_x(a)}$ is in ${\mathcal C}_0(X)$, we will say that ${\mathcal A}$ is a {\it continuous field of $C^*$-algebras}\footnote{In \cite{KW95}, this is called a continuous bundle of $C^*$-algebras.}.
\end{defn}
Recall that a ${\mathcal C}_0(X)$-algebra $A$ is a $C^*$-algebra equipped with a non-degenerate homomorphism from ${\mathcal C}_0(X)$ into the multiplier algebra of $A$ (see \cite[Appendix C.1]{Will}).
For $x\in X$ we denote by ${\mathcal C}_x(X)$ the subalgebra of ${\mathcal C}_0(X)$ of functions that vanish at $x$. Note that a ${\mathcal C}_0(X)$-algebra $A$ gives rise to an usc field of $C^*$-algebras with fibres $A_x = A/{\mathcal C}_x(X)A$ (see \cite[Proposition 1.2]{Rie} or \cite[Appendix C.2]{Will}). We will use the following characterization of usc fields of $C^*$-algebras.
\begin{lem}\cite[Lemma 2.3]{KW95}, \cite[Lemma 9.4]{AD16}\label{lem:usc} Let ${\mathcal A}$ be a field of $C^*$-algebras on a locally compact space $X$. The function $x\mapsto \norm{\pi_x(a)}$ is upper semi-continuous at $x_0$ for every $a\in A$ if and only if $\ker \pi_{x_0}= {\mathcal C}_{x_0}(X)A$.
\end{lem}
We apply this fact to the reduced $C^*$-algebra of a groupoid group bundle ${\mathcal G}$ as defined in Example \ref{exs:groupoids} (b). The structure of ${\mathcal C}_0(X)$-algebra of the $C^*$-algebra $C^*_{r}({\mathcal G})$ is defined by $(f h)(\gamma) = f\circ r(\gamma)\, h(\gamma)$ for $f\in {\mathcal C}_0(X)$ and $h\in {\mathcal C}_c({\mathcal G})$ (see \cite[Lemma 2.2.4]{Ram}, \cite[\S 5]{LR}). We set $U_x = X \setminus \set{x}$. Then we have $C^*_{r}({\mathcal G}(U_x)) = {\mathcal C}_x(X) C^*_{r}({\mathcal G})$. We get that $C^*_{r}({\mathcal G})$ is an usc field of $C^*$-algebras over $X$ with fibre $C^*_{r}({\mathcal G})/{\mathcal C}_x(X) C^*_{r}({\mathcal G})= C^*_{r}({\mathcal G})/C^*_{r}({\mathcal G}(U_x)) $ at $x$.
On the other hand, $(C^*_{r}({\mathcal G}), \set{\pi_x : C^*_{r}({\mathcal G})\to C^*_{r}({\mathcal G}(x))})$ is lower semi-continuous (see
\cite[Th\'eor\`eme 2.4.6]{Ram} or \cite[Theorem 5.5]{LR}). Then it follows from Lemma \ref{lem:usc} that the function $x\mapsto \norm{\pi_x(a)}$ is continuous at $x_0$ for every $a\in C^*_{r}({\mathcal G})$ if and only if the following sequence is exact:
$$0\rightarrow C^*_{r}({\mathcal G}(U_{x_0})) \rightarrow C^*_{r}({\mathcal G}) \stackrel{\pi_{x_0}}{\rightarrow} C^*_{r}({\mathcal G}(x_0)) \rightarrow 0.$$
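Indeed, continuity at $x_0$ amounts to upper semi-continuity there, since lower semi-continuity holds automatically; by Lemma \ref{lem:usc} this means
$$\ker \pi_{x_0} = {\mathcal C}_{x_0}(X)\, C^*_{r}({\mathcal G}) = C^*_{r}({\mathcal G}(U_{x_0})),$$
which is precisely the exactness of the above sequence at its middle term.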
\begin{prop}\label{prop:innexact} Let ${\mathcal G}$ be a group bundle groupoid on $X$. The following conditions are equi\-valent:
\begin{itemize}
\item[(i)] ${\mathcal G}$ is inner exact;
\item[(ii)] for every $x\in X$ the following sequence is exact:
$$
0 \rightarrow C^*_{r}({\mathcal G}(X\setminus \set{x})) \rightarrow C^*_{r}({\mathcal G}) \rightarrow C^*_{r}({\mathcal G}(x)) \rightarrow 0.
$$
\item[(iii)] $C^*_{r}({\mathcal G})$ is a continuous field of $C^*$-algebras over $X$ with fibres $C^*_{r}({\mathcal G}(x)) $.
\end{itemize}
\end{prop}
\begin{proof} (i) $\Rightarrow$ (ii) is obvious and (ii) $\Rightarrow$ (iii) is a particular case of the previous observation. Assume that (iii) holds true and, given an invariant closed subset $F$ of $X$, let us show that the following sequence is exact:
$$
0 \rightarrow C^*_{r}({\mathcal G}(X\setminus F)) \rightarrow C^*_{r}({\mathcal G}) \rightarrow C^*_{r}({\mathcal G}(F)) \rightarrow 0.
$$
Let $a\in C^*_{r}({\mathcal G})$ be such that $\pi_x(a) = 0$ for every $x\in F$. Let $\varepsilon >0$ be given. Then $K = \set{x\in X, \norm{\pi_x(a)} \geq \varepsilon}$ is a compact subset of $X$ with $K\cap F = \emptyset$. Take $f\in {\mathcal C}_0(X)$, $f: X\to [0,1]$, with $f(x) = 1$ for $x\in K$ and $f(x) = 0$ for $x\in F$. We have $\norm{a-fa}\leq \varepsilon$ and $fa\in C^*_{r}({\mathcal G}(X\setminus F))$. Therefore $a\in C^*_{r}({\mathcal G}(X\setminus F))$.
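Here the estimate on $a-fa$ is a direct consequence of the faithfulness condition (i) in Definition \ref{def:bundle_C*}:
$$\norm{a-fa} = \sup_{x\in X}\,\norm{\pi_x(a-fa)} = \sup_{x\in X}\,(1-f(x))\norm{\pi_x(a)} \leq \varepsilon,$$
since $1-f(x) = 0$ when $x\in K$, while $\norm{\pi_x(a)} < \varepsilon$ when $x\notin K$.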
\end{proof}
\subsection{The case of HLS groupoids} \label{sec:HLS}
The following class of \'etale group bundle groupoids (that we call HLS-groupoids)\index{HLS-groupoid} was introduced by Higson, Lafforgue and Skandalis \cite{HLS},
in order to provide examples of groupoids for which the Baum-Connes conjecture fails. We consider an infinite discrete group $\Gamma$ and a decreasing sequence $(N_k)_{k\in {\mathbb{N}}}$ of normal subgroups of $\Gamma$ of finite index. We set $\Gamma_\infty = \Gamma$, and $\Gamma_k = \Gamma/N_k$, and we denote by $q_k : \Gamma \to \Gamma_k$ the quotient homomorphism for $k$ in the Alexandroff compactification ${\mathbb{N}}^+$ of ${\mathbb{N}}$. Let ${\mathcal G}$ be the quotient of $ {\mathbb{N}}^+ \times\Gamma$ with respect to the equivalence relation
$$(k,t)\sim (l,u) \,\,\,\hbox{if} \,\,\, k=l \,\,\,\hbox{and}\,\,\, q_k(t) = q_k(u).$$
Then ${\mathcal G}$ is the bundle of groups $k\mapsto \Gamma_k$ over ${\mathbb{N}}^+$. The range and source maps are given by $r([k,t]) = s([k,t]) = k$, where $[k,t] = (k,q_k(t))$ is the equivalence class of $(k,t)$. We endow ${\mathcal G}$ with the quotient topology. Then ${\mathcal G}$ is Hausdorff (and obviously an \'etale groupoid) if and only if for every $s\not= 1$ there exists $k_0$ such that $s\notin N_k$ for $k\geq k_0$ (hence, $\Gamma$ is residually finite). We keep this assumption. Such examples are provided by taking $\Gamma = \hbox{SL}_n({\mathbb{Z}})$ and $\Gamma_k = \hbox{SL}_n({\mathbb{Z}}/k{\mathbb{Z}})$, for $k\geq 2$.
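Concretely, the groupoid operations of ${\mathcal G}$ are just the fibrewise group operations: two classes $[k,t]$ and $[l,u]$ are composable if and only if $k = l$, in which case
$$[k,t]\,[k,u] = [k,tu], \qquad [k,t]^{-1} = [k,t^{-1}].$$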
For these HLS groupoids, the exactness of $C^*_{r}({\mathcal G})$ is a very strong condition which suffices to imply the amenability of $\Gamma$ as shown by Willett in \cite{Wil15}.
\begin{prop}\label{prop:HLS} Let us keep the above notation. We assume that $\Gamma$ is finitely generated. Then the following conditions are equivalent:
\begin{itemize}
\item[(1)] $\Gamma$ is amenable; $(2)$ ${\mathcal G}$ is amenable;
\item[(3)] ${\mathcal G}$ is KW-exact; $(4)$ ${\mathcal G}$ is inner exact;
\item[(5)] the sequence
$0 \longrightarrow C^*_{r}({\mathcal G}({\mathbb{N}}))\longrightarrow C^*_{r}({\mathcal G}) \longrightarrow C^*_{r}({\mathcal G}(\infty)) \longrightarrow 0$
is exact ((5') $C^*_{r}({\mathcal G})$ is a continuous field of $C^*$-algebras with fibres $C^*_{r}({\mathcal G}(x))$);
\item[(6)] $C^*_{r}({\mathcal G})$ is nuclear; $(7)$ $C^*_{r}({\mathcal G})$ is exact.
\end{itemize}
\end{prop}
\begin{proof} The equivalence between (1) and (2) follows for instance from \cite[Lemma 2.4]{Wil15}. That (2) $\Rightarrow$ (3) $\Rightarrow$ (4) $\Rightarrow$ (5) is obvious and by Proposition \ref{prop:innexact} we have (5) $\Rightarrow$ (5'). Let us prove that (5') $\Rightarrow$ (1). Assume by contradiction that $\Gamma$ is not amenable. We fix a symmetric probability measure $\mu$ on $\Gamma$ with a finite support that generates $\Gamma$ and we choose $n_0$ such that the restriction of $q_n$ to the support of $\mu$ is injective for $n\geq n_0$. We take $a\in{\mathcal C}_c({\mathcal G})\subset C^*_{r}({\mathcal G})$ such that $a(\gamma) = 0$ except for $\gamma = (n,q_n(s))$ with $n\geq n_0$ and $s\in \hbox{Supp}(\mu)$, where $a(\gamma) = \mu(s)$. Then $\pi_n(a) = 0$ if $n<n_0$ and $\pi_n(a) = \lambda_{\Gamma_n}(\mu) \in C_{r}^*(\Gamma_n)= C^*_{r}({\mathcal G}(n))$ if $n\geq n_0$, where $\lambda_{\Gamma_n}$ is the quasi-regular representation of $\Gamma$ in $\ell^2(\Gamma_n)$. By Kesten's result \cite{Kes1, Kes} on spectral radii relative to symmetric random walks, we have $\norm{\lambda_{\Gamma_n}(\mu)}_{C_{r}^*(\Gamma_n)} =1$ for ${\mathbb{N}} \ni n\geq n_0$ and $\norm{\lambda_{\Gamma_\infty}(\mu)}_{C_{r}^*(\Gamma_\infty)} < 1$ since $\Gamma$ is not amenable. It follows that $C^*_{r}({\mathcal G})$ is not a continuous field of $C^*$-algebras with fibres $C^*_{r}({\mathcal G}(n))$ on ${\mathbb{N}}^+$, a contradiction.
We know that (2) $\Rightarrow$ (6) $\Rightarrow$ (7). For the fact that (7) $\Rightarrow$ (1) see \cite[Lemma 3.2]{Wil15}.
\end{proof}
Given a group bundle groupoid it may happen that ${\mathcal G}(x)$ is a $C^*$-exact group for every $x\in {\mathcal G}^{(0)}$ whereas ${\mathcal G}$ is not $C^*$-exact. Indeed if ${\mathcal G}$ is an HLS groupoid associated with a group $\Gamma$ that has Kazhdan's property (T), then the sequence
$$0 \longrightarrow C^*_{r}({\mathcal G}({\mathbb{N}}))\longrightarrow C^*_{r}({\mathcal G}) \longrightarrow C^*_{r}({\mathcal G}(\infty)) \longrightarrow 0$$
is not exact (it is not even exact in $K$-theory!) \cite{HLS}. As an example we can take the exact group $\Gamma = SL(3,{\mathbb{Z}})$. The previous proposition shows that $C^*_{r}({\mathcal G})$ is not exact. Willett has given an even more surprising example with $\Gamma = {\mathbb{F}}_2$, the free group with two generators (see below).
\section{Weak containment, exactness and amenability}
\begin{defn}\label{def:weakamen} Let ${\mathcal G}$ be a locally compact groupoid. We say that ${\mathcal G}$ has the {\it weak containment property}\index{weak containment property}, (WCP)\index{(WCP)} in short, if the canonical surjection from its full $C^*$-algebra $C^*({\mathcal G})$ onto its reduced $C^*$-algebra $C^*_{r}({\mathcal G})$ is injective, {\it i.e.}, the two completions $C^*({\mathcal G})$ and $C^*_{r}({\mathcal G})$ of ${\mathcal C}_c({\mathcal G})$ are the same.
\end{defn}
A very useful theorem of Hulanicki \cite{Hul, Hul66} asserts that a locally compact group $G$ has the (WCP) if and only if it is amenable. While it has long been known that every amenable locally compact groupoid has the (WCP) \cite{Ren91, AD-R}, whether the converse holds was a long-standing open problem (see \cite[Remark 6.1.9]{AD-R}). Remarkably, in 2015 Willett \cite{Wil15} published a very nice example showing that a group bundle groupoid may have the (WCP) without being amenable. His example is an HLS groupoid built from a well chosen sequence $(N_k)$ of finite index normal subgroups of the free group with two generators ${\mathbb{F}}_2$. Therefore the groupoid version of Hulanicki's theorem is not true in general. However there are many positive results, all of which involve an additional exactness assumption.
A first result in this direction is due to Buneci \cite{Bun}. She proved that a second countable locally compact transitive groupoid ${\mathcal G}$ having the (WCP) is measurewise amenable. The (topological) amenability of ${\mathcal G}$ can also be proved by observing that it is preserved under equivalence of groupoids \cite[Theorem 2.2.17]{AD-R}, as well as the (WCP) \cite[Theorem 17]{SW12}, and using the fact that ${\mathcal G}$ is equivalent to any of its isotropy group by transitivity \cite{MRW}.
It is only in 2014 that a second result appeared, linking amenability and the (WCP):
\begin{thm}\label{Mat}\cite{Mat} Let $\Gamma$ be a discrete group acting by homeomorphisms on a compact space. Then the semidirect product groupoid is amenable if and only if it has the (WCP) and $\Gamma$ is exact.
\end{thm}
Note that the exactness of $\Gamma$ is equivalent to the strong amenability at infinity of the groupoid $X\rtimes \Gamma$ since $X$ is compact and $\Gamma$ is discrete (see \cite[Proposition 4.3 (i)]{AD16}). Recently, the above theorem has been extended by Kranz as follows.
\begin{thm}\label{Kranz} \cite{Kra} Let ${\mathcal G}$ be an \'etale groupoid. Then ${\mathcal G}$ is amenable if and only if it has the (WCP) and is strongly amenable at infinity.
\end{thm}
Assuming that ${\mathcal G}$ has the (WCP) and is strongly amenable at infinity,
Kranz's strategy to prove that ${\mathcal G}$ is amenable is the same as that of \cite{Mat}, but with additional technical difficulties. It consists in showing that the canonical inclusion of $C^*_{r}({\mathcal G})$ into its bidual $C^*_{r}({\mathcal G})^{**}$ is nuclear. Then by \cite[Proposition 2.3.8]{BO} one sees that $C^*_{r}({\mathcal G})$ is nuclear and by \cite[Theorem 5.6.18]{BO} it follows that the groupoid ${\mathcal G}$ is amenable. The delicate step, which requires in a crucial way that ${\mathcal G}$ is \'etale, is to show the existence of a completely positive map $\phi: C^*_{r}(\beta_r{\mathcal G} \rtimes {\mathcal G}) \to C^*_{r}({\mathcal G})^{**}$ whose restriction to $C^*_{r}({\mathcal G})$ is the inclusion of $C^*_{r}({\mathcal G})$ into its bidual. Since $C^*_{r}(\beta_r{\mathcal G} \rtimes {\mathcal G})$ is nuclear (see \cite[Proposition 7.2]{AD16}), this inclusion is nuclear.
By a different method the following extension of Theorem \ref{Mat} was obtained in \cite{BEW20}. Note that, unlike the case where $X$ is compact, it is not true in general that $G$ is KW-exact when $X\rtimes G$ is amenable.
\begin{thm} \cite[Theorem 5.15]{BEW20} Let $G\curvearrowright X$ be a continuous action of a locally compact group $G$ on a locally compact space $X$. We assume that $G$ is KW-exact and that $X\rtimes G$ has the (WCP). Then the groupoid $X\rtimes G$ is amenable.
\end{thm}
It is interesting to note that the behaviour is different for group actions on non-commutative $C^*$-algebras. For instance, in \cite[Proposition 5.25]{BEW20} a surprising example was constructed of a non-amenable action with the (WCP) of the exact group $G= PSL(2,{\mathbb{C}})$ on the $C^*$-algebra of compact operators. It would be interesting to construct such an example with an exact discrete group.
For group bundle groupoids we have the following easy result.
\begin{prop}\label{prop:WCPbundle} Let ${\mathcal G}$ be a second countable locally compact group bundle groupoid over a locally compact space $X$. Then ${\mathcal G}$ is amenable if and only if it has the (WCP) and is inner exact.
\end{prop}
\begin{proof} Assume that ${\mathcal G}$ has the (WCP) and is inner exact and let $x\in X$. We set $U_x = X\setminus \{x\}$. In the commutative diagram
$$\xymatrix{
0\ar[r] &C^*({\mathcal G}(U_x)) \ar[d] \ar[r] & C^*({\mathcal G})\ar[d]^{\lambda} \ar[r] &C^* ({\mathcal G}(x))
\ar[r]\ar[d]^{\lambda_{{\mathcal G}(x)}} &0\\
0\ar[r] & C^*_{r}({\mathcal G}(U_x)) \ar[r] & C^*_{r}({\mathcal G})\ar[r]^{\pi_x}& C^*_r ({\mathcal G}(x))
\ar[r] &0}$$
both sequences are exact and $\lambda$ is injective. Chasing through the diagram we see that $\lambda_{{\mathcal G}(x)}$ is injective ({\it i.e.}, the group ${\mathcal G}(x)$ is amenable). This ends the proof since, by \cite[Theorem 3.5]{Ren13}, the group bundle groupoid ${\mathcal G}$ is amenable if and only if ${\mathcal G}(x)$ is amenable for every $x\in X$.
\end{proof}
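For the reader's convenience, the chase can be made explicit. The only extra ingredient, which we state as an assumption here, is that each vertical map, being the canonical map from a full onto a reduced $C^*$-algebra, is surjective:

```latex
% Expanded diagram chase (a sketch; not spelled out in the original proof).
Let $a\in C^*({\mathcal G}(x))$ satisfy $\lambda_{{\mathcal G}(x)}(a)=0$.
By exactness of the top row, pick $b\in C^*({\mathcal G})$ mapping onto $a$.
Commutativity of the right square gives
$\pi_x(\lambda(b))=\lambda_{{\mathcal G}(x)}(a)=0$,
so exactness of the bottom row yields $\lambda(b)\in C^*_{r}({\mathcal G}(U_x))$.
Since $C^*({\mathcal G}(U_x))\to C^*_{r}({\mathcal G}(U_x))$ is onto,
$\lambda(b)=\lambda(c)$ for some $c\in C^*({\mathcal G}(U_x))$, and the
injectivity of $\lambda$ forces $b=c\in C^*({\mathcal G}(U_x))$; hence $a=0$.
```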
The cases of transitive groupoids and of group bundle groupoids are included in the following result of B\"onicke. His nice elementary proof is reproduced in \cite[Theorem 10.5]{AD16}.
\begin{thm}\cite{Bon}\label{thm:bon} Let ${\mathcal G}$ be a second countable locally compact groupoid such that the orbit space ${\mathcal G}\setminus {\mathcal G}^{(0)}$ equipped with the quotient topology is $T_0$. Then the following conditions are equivalent:
\begin{itemize}
\item[(i)] ${\mathcal G}$ is amenable;
\item[(ii)] ${\mathcal G}$ has the (WCP) and is inner exact.
\end{itemize}
\end{thm}
\section{Open questions}
\subsection{About amenability at infinity and inner amenability}
\begin{itemize}
\item[(1)] The notion of strong amenability at infinity has proven to be more useful than amenability at infinity. But are the two notions equivalent? Note that by Theorem \ref{thm:equiv}
this is true for every second countable inner amenable \'etale groupoid.
\item[(2)] It would be interesting to understand better the notion of inner amenability for locally compact groupoids. Is it invariant under equivalence of groupoids? Are there \'etale groupoids that are not inner amenable? In particular, if $G$ is a discrete group acting partially on a locally compact space $X$, is it true that the corresponding partial transformation groupoid is inner amenable? This is true when the domains of the partial homeomorphisms are both open and closed, but what happens in general? It would also be interesting to study the case of HLS groupoids.
\end{itemize}
\subsection{About exactness for groups}
\begin{itemize}
\item[(3)] Let us denote by $[InnAmen]$ the class of locally compact inner amenable groups and by $[Tr]$ the class of groups whose reduced $C^*$-algebra has a tracial state. A group $G$ in either of these two classes such that $C^*_r(G)$ is nuclear is amenable \cite{AD02}, \cite{Ng}. Almost connected groups are amenable at infinity \cite[Theorem 6.8]{KW99bis}, \cite[Proposition 3.3]{AD02}, and their full $C^*$-algebras are nuclear. So they are in $[InnAmen]$ or in $[Tr]$ if and only if they are amenable. The latter observation also applies to groups of type I.
We denote by $[{\mathcal C}]$ the class of locally compact groups for which $C^*$-exactness is equivalent to KW-exactness. It contains $[InnAmen]$ and $[Tr]$. Almost connected groups are in $[{\mathcal C}]$ since they are KW-exact. The case of groups of type I is not clear. Of course they are $C^*$-exact but are they KW-exact? In support of this question we point out that it is conjectured that every second countable locally compact group of type I has a cocompact amenable subgroup \cite{CKM}, a property which implies amenability at infinity \cite[Proposition 5.2.5]{AD-R}.
It would be interesting to find more examples in the class $[{\mathcal C}]$. It seems difficult to find examples not in $[{\mathcal C}]$. Note that this class is preserved by extensions
$$0\to N\to G\to G/N\to 0$$
where $N$ is amenable.
Indeed since $N$ is amenable, $C^*_{r}(G/N)$ is a quotient of $C^*_{r}(G)$ (see the proof of Lemma 3.5 in \cite{CZ}). Assume that $G$ is $C^*$-exact and that $G/N\in [{\mathcal C}]$. Then the group $G/N$ is $C^*$-exact and therefore KW-exact. It follows that $G$ is KW-exact since the class of KW-exact groups is preserved under extension \cite[Proposition 5.1]{KW99bis}.
\item[(4)] There are examples of non-inner amenable groups in $[Tr]$ (see \cite[Remark 2.6 (ii)]{FSW}, \cite[Example 4.15]{Man}). But are there inner amenable groups which are not in $[Tr]$? Note that the subclass $[IN]$ of $[InnAmen]$ is contained in $[Tr]$ \cite[Theorem 2.1]{FSW}. Let us recall that a locally compact group $G$ is in $[IN]$ if its identity $e$ has a compact neighborhood invariant under conjugation. By \cite[Proposition 4.2]{Tay}, this is equivalent to the existence of a normal tracial state on the von Neumann algebra $L(G)$ of $G$. Since $C^*_{r}(G)$ is weakly dense in $L(G)$ the conclusion follows.
Let us observe that the existence of a locally compact group in $[InnAmen]\setminus [Tr]$ is equivalent to the existence of a totally disconnected locally compact group in $[InnAmen]\setminus [Tr]$. Indeed let $G$ be a locally compact group in this set. Let $G_0$ be the connected component of the identity. Then $G_0$ is inner amenable as well as $G/G_0$ (see \cite[Corollary 3.3]{CT} and \cite[Proposition 6.2]{LP}). Since a connected inner amenable group is amenable (by \cite[Theorem 5.8]{AD02}), we see that $G_0$ is amenable. It follows that $C^*_{r}(G/G_0)$ is a quotient of $C^*_{r}(G)$ and therefore $G/G_0$ is not in $[Tr]$.
As a consequence of this observation we are left with the following problem: does there exist a totally disconnected locally compact inner amenable group without open normal amenable subgroups?
\end{itemize}
\subsection{About exactness for groupoids}
\begin{itemize}
\item[(5)] In \cite{KW95}, Kirchberg and Wassermann have constructed examples of continuous fields of exact $C^*$-algebras on a locally compact space, whose $C^*$-algebra of continuous sections vanishing at infinity is not exact. Are there examples of \'etale group bundle groupoids ${\mathcal G}$, whose reduced $C^*$-algebra is not exact whereas
$$(C^*_{r}({\mathcal G}), \{\pi_x : C^*_{r}({\mathcal G}) \to C^*_{r}({\mathcal G}(x))\}_{x\in {\mathcal G}^{(0)}},{\mathcal G}^{(0)})$$
is a continuous field of exact $C^*$-algebras (compare with Proposition \ref{prop:HLS})?
A similar question is asked in \cite[Question 3]{Lal17}: if ${\mathcal G}$ is an inner exact locally compact group bundle groupoid, whose fibres are KW-exact groups, is it true that ${\mathcal G}$ is KW-exact?
\item[(6)]Let ${\mathcal G}$ be a locally compact groupoid. We have
\begin{center}
Amenability at infinity $\xLongrightarrow{\text{(1)}}$ KW-exactness $\xLongrightarrow{\text{(2)}}$ $C^*$-exactness.
\end{center}
Let us recap what is known about the reversed arrows and what is still open.
When ${\mathcal G}$ is an \'etale inner amenable groupoid, the above three notions of exactness are equivalent (Theorem \ref{thm:equiv}). Does this fact extend to arbitrary inner amenable locally compact groupoids? Without the assumption of inner amenability, nothing is known.
As already said, it seems difficult to find an example of a locally compact group which is $C^*$-exact but not KW-exact. Could it be easier to find an example in the context of locally compact groupoids?
If $G$ is a KW-exact locally compact group, every semidirect product groupoid (relative to a global or to a partial action) is strongly amenable at infinity \cite[Proposition 4.3, Proposition 4.23]{AD16}, and therefore KW-exact. Does KW-exactness of $X\rtimes G$ imply that $X\rtimes G$ is amenable at infinity in general? Note that the notion of exactness for a semidirect product groupoid $X\rtimes \Gamma$, where $\Gamma$ is a discrete group is not ambiguous by Theorem \ref{thm:equiv}.
\end{itemize}
\subsection{(WCP) vs amenability}
\begin{itemize}
\item[(7)] Are there examples of inner exact groupoids ${\mathcal G}$ which have the (WCP) without being amenable? By \cite{Bon} one should look for examples for which the orbit space ${\mathcal G}\setminus {\mathcal G}^{(0)}$ is not $T_0$.
\item[(8)] We have seen that if an \'etale locally compact groupoid ${\mathcal G}$ is assumed to be strongly amenable at infinity, the (WCP) implies its amenability (Theorem \ref{Kranz}). Is it true in general, or at least for a semidirect product groupoid $X\rtimes G$ with $X$ and $G$ locally compact? Recall that this holds when $G$ is KW-exact \cite[Theorem 5.15]{BEW20}, a property stronger than strong amenability at infinity.
\item[(9)] Let $G$ be a discrete group and let $X = \partial G = \beta G\setminus G$ be its boundary equipped with the natural action of $G$. The weak containment property for $\partial G \rtimes G$ implies that this groupoid is amenable. Indeed the (WCP) implies that the sequence
$$0\longrightarrow C^*_{r}(G\rtimes G) \longrightarrow C^*_{r}( \beta G\rtimes G)\longrightarrow C^*_{r}(\partial G\rtimes G)\longrightarrow 0$$
is exact. Roe and Willett have proven in \cite{RW} that this exactness property implies that $G$ is exact. It follows that $G\curvearrowright \beta G$ is amenable and therefore $G\curvearrowright \partial G$ is amenable too. Can we replace $\partial G$ by $\beta G$, that is, if $G\curvearrowright \beta G$ has the (WCP) can we deduce that $G\curvearrowright \beta G$ is amenable? This is asked in \cite[Remark 4.10]{BEW20bis}.
\end{itemize}
\noindent{\em Acknowledgements}. I thank Julian Kranz, Jean Renault and the referee for useful remarks and suggestions.
\noindent {\bf Addendum.} A construction due to Suzuki \cite{Suz} gives an example of a totally disconnected locally compact inner amenable group without open normal amenable subgroups (or equivalently without tracial states), thus answering the question posed in point 7.2.(4) above.
Suzuki considers a sequence $(\Gamma_n, F_n)$ of pairs of discrete groups, where $F_n$ is a finite group acting on $\Gamma_n$ in such a way that the reduced $C^*$-algebra of the semidirect product $\Gamma_n \rtimes F_n$ is simple. Let us set $\Gamma = \bigoplus_{i=1}^{\infty}\Gamma_i$, $K= \prod_{i=1}^\infty F_i$, and $G=\Gamma\rtimes K$, where $K$ acts on $\Gamma$ component-wise. Then Suzuki shows that $C^*_{r}(G)$ is simple and that the Plancherel weight is the unique lsc semifinite tracial weight on $C^*_{r}(G)$. Since $G$ is not discrete this weight is not finite and therefore $G$ is not in $[Tr]$ (see also \cite[Remark 2.5]{FSW}).
Let us show now that $G\in [InnAmen]$. We set $G_n = (\bigoplus_{i=1}^n \Gamma_i)\rtimes K$. Then $(G_n)$ is an increasing sequence of open subgroups of $G$ with $\bigcup_{n=1}^\infty G_n = G$. Since $G_n$ contains an open compact normal subgroup, namely $\prod_{i= n+1}^\infty F_i$, we see that there exists an inner invariant mean on $L^\infty(G_n)$ and therefore a mean $m_n$ on $L^\infty(G)$ which is invariant by conjugation under the elements of $G_n$. Any cluster point of the sequence $(m_n)$ in $L^\infty(G)^*$ gives an inner invariant mean on $L^\infty(G)$.
\end{document} |
\begin{document}
\title{Iterated dynamical maps in an ion trap.}
\author{M. Duncan$^1$, J. Links$^1$, and G. J. Milburn$^2$.\\ \hspace{1cm}}
\address{$^1$School of Physical Sciences,\\
$^2$The Centre for Quantum Computer Technology, \\ The University of Queensland,QLD 4072 Australia.}
\begin{abstract}
Iterated dynamical maps offer an ideal setting to investigate quantum dynamical bifurcations and are well adapted to few-qubit quantum computer realisations. We show that a single trapped ion, subject to periodic impulsive forces, exhibits a rich structure of dynamical bifurcations derived from the Jahn-Teller Hamiltonian flow model. We show that the entanglement between the oscillator and electronic degrees of freedom reflects the underlying dynamical bifurcation in a Floquet eigenstate.
\end{abstract}
\maketitle
Iterated area-preserving maps are better adapted to physical implementations of quantum information processing than Hamiltonian flows. In fact one of the first quantum algorithms, the Grover search algorithm\cite{Grover}, may be regarded as a quantum description of an area-preserving map. Typically, a quantum algorithm consists of a sequence of elementary unitary operations on one or more two-level systems. In the Grover search algorithm a simple product of unitary operators is iterated. While it is possible to simulate an arbitrary Hamiltonian flow as a quantum circuit, area-preserving maps are simulated more directly as an iterated sequence of unitary gates. Originally introduced by Poincar\'{e}, iterated maps have formed the core of studies in quantum chaos for many decades\cite{haake}. The model reported here is a simple example of a non trivial area-preserving map, based on the Jahn-Teller model\cite{jt} and is naturally adapted to an ion trap implementation of quantum information processing.
The Jahn-Teller model describes a class of systems in which one or more particle coordinates
are coupled to a two level system\cite{Eng72}. The $E\otimes\epsilon$ Jahn-Teller model is a minimal description in which two
harmonic oscillator coordinates are coupled to a two level system. We may write the Jahn-Teller $E\otimes\epsilon$ Hamiltonian as
\begin{equation}
H_0 ={\widetilde{\Delta}} s_z+ \frac{1}{2m}(p^2_x +p^2_y) + \frac{m \widetilde{\omega}^2}{2} (q^2_x +q^2_y)+\lambda_xq_x s_x+\lambda_y q_y s_y
\label{hamiltonian}
\end{equation}
where $\{q_x,q_y,p_x,p_y\}$ are the conjugate position and momentum operators in the harmonic oscillator space, and $\{s_x,s_y,s_z\}$ are $su(2)$ operators acting on the states of the two-level internal degree of freedom.
The classical description of this model is a Hamiltonian flow in the phase-space of the Cartesian product of the phase plane of the oscillators $\mathbb{R}^2$ and the spherical phase space of the two level system $S^2$, known as the Bloch sphere. The quantum model exhibits the {\em conical intersection} that has assumed importance in various biophysical models of vision\cite{CI}. It is well known that the $E\otimes\epsilon$ Jahn-Teller model displays a classical bifurcation of fixed points as the coupling strength is varied and that the quantum analog displays a maximum
in the entanglement at the same value of the coupling strength\cite{LevMut01,hines} as the classical fixed point bifurcation. At this point there is a morphological change in the nature of the ground state. This is the few body analogue of a quantum phase transition in a true many body system.
We do not consider the Hamiltonian flow model, but rather a related model, which
we will call a {\em kicked} $E\otimes\epsilon$ model that is better adapted to an ion trap realisation. Instead of a time-independent Hamiltonian,
we will consider a strongly time dependent version of the Hamiltonian. We
assume that
$$\lambda_{x,y}\mapsto \lambda_{x,y}\sum_n\delta(t-n\tau_{x,y}).$$ A Hamiltonian which varies periodically in time is most naturally described in terms of the Floquet operator, the unitary operator that maps the dynamics over one period.
In each period the two kicking interactions are applied instantaneously and consecutively, after which the system evolves for a time $\tau$ under the Hamiltonian flow associated with the confining potential.
The unitary operator which discretely maps states over each period is the Floquet operator (setting $\hbar=1$)
\begin{align}
U&=\exp(-iH_0\tau)\exp(-i\lambda H_{x})\exp(-i\lambda H_{y}),
\nonumber\\
H_0 &={\widetilde{\Delta}} s_z+ \frac{1}{2m}(p^2_x +p^2_y) + \frac{m \widetilde{\omega}^2}{2} (q^2_x +q^2_y),\nonumber\\
H_{\chi}&=q_\chi s_\chi,\qquad \chi=x,\,y
\label{floquet}
\end{align}
The unitary map (\ref{floquet}) is the kicked equivalent of the $E \otimes \epsilon$ Jahn--Teller model. We will fix the frequency of the harmonic potential, $\widetilde{\omega}$, the internal splitting energy, $\widetilde{\Delta}$, the periodicity of the kick $\tau$ and adopt units such that $m=\widetilde{\omega}^{-1}$. This leaves the kick coupling $\lambda$ as the tunable parameter.
For each system observable $A$ evolving under a Hamiltonian $H$ the time evolution is
${d\langle A\rangle}/{dt}=i\langle[H,A]\rangle$.
From this expression, we can derive operator differential equations for the evolution of all seven operators under each of $H_0,\,H_{x},\,H_{y}$. According to Ehrenfest's theorem \cite{Sakurai}, in the classical limit expectation values of operators approach classical variables. Hence the differential operator equations become a set of classical equations in this limit. Integrating these equations over one period $\tau$ gives three maps which we then compose into a single area-preserving map on the classical phase space
$M:\mathbb{R}^4\times \mathcal{S}^2\to \mathbb{R}^4\times\mathcal{S}^2$
where $\mathcal{S}^2$ denotes the Bloch sphere.
The operator time evolution equations for $q_x,\,q_y,\,p_x,\,p_y,\,s_x,\,s_y,\,s_z$ can be determined under each of $\exp(-iH_0)$, $\exp(-iH_x)$, $\exp(-iH_y)$. Taking these to be classical differential equations, in each case we can integrate these to obtain three maps for the corresponding classical variables. Composing these three maps yields
\begin{widetext}
\small
\begin{align}
q_x &\mapsto \bigl(p_x-\lambda\{s_x \cos(\lambda q_y)+s_z \sin(\lambda q_y)\}\bigr)\sin\omega+q_x\cos\omega,
\qquad q_y \mapsto \bigl(p_y-\lambda s_y\bigr)\sin\omega+q_y\cos\omega, \label{map1} \\
p_x &\mapsto \bigl(p_x-\lambda\{s_x \cos(\lambda q_y)+s_z \sin(\lambda q_y)\}\bigr)\cos\omega-q_x\sin\omega,
\qquad
p_y \mapsto \bigl(p_y-\lambda s_y\bigr)\cos\omega-q_y\sin\omega\\
s_x &\mapsto \bigl(\cos\Delta\cos(q_y\lambda)-\sin\Delta\sin(q_x\lambda)\sin(q_y\lambda)\bigr)s_x-\sin\Delta\cos(q_x\lambda)s_y\nonumber\\
&\mbox{} \ \ \ \ \ \ \ \ \ \ +\bigl(\cos\Delta\sin(q_y\lambda)+\sin\Delta\sin(q_x\lambda)\cos(q_y\lambda)\bigr)s_z\\
s_y &\mapsto \bigl(\cos\Delta\sin(q_x\lambda)\sin(q_y\lambda)+ \sin\Delta\cos(q_y\lambda)\bigr)s_x\nonumber\\
&\mbox{} \ \ \ \ \ \ \ \ \ \ +\cos\Delta\cos(q_x\lambda)s_y+\bigl(-\cos\Delta\sin(q_x\lambda)\cos(q_y\lambda)+\sin\Delta\sin(q_y\lambda)\bigr)s_z\\
s_z &\mapsto -\sin(q_y\lambda)\cos(q_x\lambda)s_x+\sin(q_x\lambda)s_y+\cos(q_y
\lambda)\cos(q_x\lambda)s_z
\label{map2}
\end{align}
\normalsize
\end{widetext}
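As a sanity check (not part of the paper), the composed map can be iterated directly; the sketch below transcribes Eqs. (\ref{map1})--(\ref{map2}) literally, with every right-hand side evaluated at the pre-map values, which is our reading of the composition.

```python
import math

def kicked_jt_map(state, lam, omega, delta):
    """One period of the kicked E (x) epsilon Jahn-Teller map,
    Eqs. (map1)-(map2), with old values on all right-hand sides."""
    qx, qy, px, py, sx, sy, sz = state
    # Impulsive kicks shift the momenta before the harmonic rotation.
    Px = px - lam * (sx * math.cos(lam * qy) + sz * math.sin(lam * qy))
    Py = py - lam * sy
    co, so = math.cos(omega), math.sin(omega)
    qx1, px1 = Px * so + qx * co, Px * co - qx * so
    qy1, py1 = Py * so + qy * co, Py * co - qy * so
    # Spin update: a composition of rotations, so |s| is conserved.
    cD, sD = math.cos(delta), math.sin(delta)
    cx, sxk = math.cos(lam * qx), math.sin(lam * qx)
    cy, syk = math.cos(lam * qy), math.sin(lam * qy)
    sx1 = (cD * cy - sD * sxk * syk) * sx - sD * cx * sy + (cD * syk + sD * sxk * cy) * sz
    sy1 = (cD * sxk * syk + sD * cy) * sx + cD * cx * sy + (sD * syk - cD * sxk * cy) * sz
    sz1 = -syk * cx * sx + sxk * sy + cy * cx * sz
    return (qx1, qy1, px1, py1, sx1, sy1, sz1)
```

At $\lambda=0$ the oscillator part is a pure rotation by $\omega$ and the spin precesses about $z$ by $\Delta$; the trivial fixed point (\ref{fp}) is reproduced exactly, and $s_x^2+s_y^2+s_z^2$ is conserved along any orbit.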
Our first step is to solve for fixed points of the map by calculating solutions to
$M(x^{*})=x^{*}$.
The fixed points correspond to periodic orbits of the system in phase space.
There are trivial fixed points at the equilibrium of the harmonic oscillator where the pseudo-spin is either aligned or anti-aligned with the $z$-axis. These fixed points are the only ones present when $\lambda=0$, one of which is stable,
\begin{align}
s_z=-1/2,\,\,s_x=s_y=
q_x=q_y=p_x=p_y=0,
\label{fp}
\end{align}
while the other ($s_z=1/2$, all other co-ordinates equal to zero) is unstable. As the kicking coupling $\lambda$ is increased a bifurcation of the stable fixed point occurs when
\begin{eqnarray}
\lambda_b^2=\frac{8\tan(\omega/2)}{\cot(\Delta/2)\pm 1}
\label{bif}
\end{eqnarray}
where $\omega=\widetilde{\omega}\tau, \Delta=\widetilde{\Delta}\tau$ are dimensionless variables. Depending on the values of $\omega,\,\Delta$, the above equation can admit either $0$, $1$, or $2$ solutions. The smallest value of $\lambda$ for which a solution exists corresponds to a pitchfork bifurcation where the fixed point (\ref{fp}) becomes a saddle point and two new stable fixed points emerge, as illustrated in Fig. \ref{levelsfig}. When there is a second solution to (\ref{bif}) a second pitchfork bifurcation occurs at the origin, with the saddle point becoming a local maximum and two new saddle points emerging.
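The number of admissible bifurcation couplings can be read off Eq. (\ref{bif}) numerically (a sketch using the dimensionless $\omega$ and $\Delta$; a branch only counts when the right-hand side is positive):

```python
import math

def bifurcation_couplings(omega, delta):
    """Positive real solutions lambda_b of
    lambda_b^2 = 8 tan(omega/2) / (cot(delta/2) +/- 1), Eq. (bif)."""
    roots = []
    for sign in (+1.0, -1.0):
        den = 1.0 / math.tan(delta / 2.0) + sign
        if abs(den) < 1e-12:
            continue  # this branch gives no finite solution
        lb2 = 8.0 * math.tan(omega / 2.0) / den
        if lb2 > 0.0:
            roots.append(math.sqrt(lb2))
    return sorted(roots)
```

For instance $\omega=0.5$, $\Delta=0.4$ gives two couplings (two successive pitchfork bifurcations), $\Delta=\pi/2$ leaves a single one, and $\tan(\omega/2)<0$ none.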
For $\lambda=0$ we can associate the stable fixed point in phase space with the ground state of the quantum system. As $\lambda$ is varied we can (numerically) track this eigenstate of the Floquet operator (\ref{floquet}), which we will refer to as the Pseudo Ground State (PGS), denoted $|\psi_g\rangle$. Analogously we define the first Pseudo Excited State (PES), denoted $|\psi_e\rangle$.
Below we demonstrate how the bifurcations in the phase space signal
distinctive qualitative changes in the PGS, providing insights into a one-body analogue of quantum phase transitions which occur in many-body systems.
For the oscillator space we choose the basis states which are simultaneous eigenstates of the number operator $N=p^2_x+p^2_y +q^2_x +q^2_y$ and the angular momentum operator $L_z=q_xp_y-q_y p_x$, which we label as $|N,l\rangle, \, N\geq l \geq -N$. The basis states $|+\rangle,\,|-\rangle$ of the internal two-level system are eigenstates of $s_z$ with eigenvalues $\pm 1/2$. The unitary map (\ref{floquet}) is invariant under the parity transformation $\Pi= -i\exp(i\pi J_z)$ where $J_z=L_z+s_z$ is the total angular momentum operator. Since $\Pi^2=I$ the Hilbert space splits into
two parity classes given by
\begin{align*}
O&=\{ |2k,l\rangle|-\rangle,|2k+1,l\rangle|+\rangle \,:\,k\in\mathbb{N}\},\\
E&=\{ |2k,l\rangle|+\rangle,|2k+1,l\rangle|-\rangle \,:\,k\in\mathbb{N}\}.
\end{align*}
As the state space of the system is infinite-dimensional, numerical diagonalisation requires a choice of truncation $N_t$. We take $N_t=18$ giving the dimension of the Hilbert space to be 342, which is expected to give reliable results in the weak coupling limit since for $\lambda=0$ the ground state is $|0,0\rangle |-\rangle\in\, O$. As $\lambda$ is incrementally increased, we numerically solve for the PGS (in the subspace $O$) with an adaptive procedure which requires that the increment $\delta\lambda$ is chosen such that
$$1-\left|\langle \psi_g(\lambda)|\psi_g(\lambda+\delta\lambda)\rangle\right| < 0.01.$$
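The adaptive tracking can be sketched generically (a sketch with a hypothetical two-level stand-in `ground_state_2x2` for the actual Floquet eigensolver, and taking the acceptance criterion to be that successive states nearly overlap):

```python
import math

def ground_state_2x2(lam):
    """Toy stand-in for the eigensolver: ground state of H = sigma_z + lam*sigma_x,
    written with a phase that varies continuously in lam."""
    e = math.sqrt(1.0 + lam * lam)
    v = (lam, -(1.0 + e))
    n = math.hypot(*v)
    return (v[0] / n, v[1] / n)

def overlap(u, v):
    return abs(u[0] * v[0] + u[1] * v[1])

def track(lam_max, dlam=0.2, tol=0.01):
    """Follow one eigenstate as lambda grows, halving the increment until
    1 - |<psi(l)|psi(l + dl)>| < tol, as in the adaptive procedure above."""
    lam, psi, path = 0.0, ground_state_2x2(0.0), [0.0]
    while lam_max - lam > 1e-12:
        step = min(dlam, lam_max - lam)
        while True:
            cand = ground_state_2x2(lam + step)
            if 1.0 - overlap(psi, cand) < tol:
                break
            step /= 2.0  # state changed too fast: refine the increment
        lam, psi = lam + step, cand
        path.append(lam)
    return path, psi
```

The same loop applies verbatim with the truncated Floquet diagonalisation in place of the toy solver.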
\begin{figure}
\caption{{\bf Fixed point bifurcation of the classical system.}}
\label{levelsfig}
\end{figure}
Husimi functions \cite{Husimi} allow for the representation of quantum states in Hilbert space as a density in a classical phase space, thus providing means for comparison between classical and quantum systems.
They are defined in terms of coherent states
\begin{equation}
|\alpha_x,\alpha_y\rangle=e^{-|\alpha_x|^2/2}e^{-|\alpha_y|^2/2}\sum_{n_x,n_y=0}^{\infty} \frac{\alpha_x^{n_x}\alpha_y^{n_y}}{\sqrt{n_x!n_y!}}|n_x\rangle|n_y\rangle
\label{cs}
\end{equation}
where $\alpha_\chi=(q_\chi+ip_\chi)/\sqrt{2}$ and $|n_\chi\rangle$ are number eigenstates of
$N_\chi=p^2_\chi+q^2_\chi$.
The explicit form of the Husimi function is
\begin{equation*}
\text{H}(\psi,\alpha_x,\alpha_y)=\text{Tr}(\text{Tr}_s(|\psi\rangle\langle\psi|)|\alpha_x,\alpha_y\rangle\langle
\alpha_x,\alpha_y|)
\end{equation*}
which exists in the four-dimensional oscillator phase space.
Above, $\text{Tr}_s$ is the trace over the two-level subsystem.
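For intuition, a single-mode sketch (not the paper's two-mode computation): the Husimi function of an odd coherent-state superposition analogous to Eq. (\ref{approx1}) shows two peaks of height about $1/2$ at $\pm\beta$, with the midpoint suppressed exactly by parity. Real $\beta$ and a truncated Fock expansion of Eq. (\ref{cs}) are assumed.

```python
import math

def coherent(alpha, nmax):
    """Truncated Fock coefficients of |alpha> (real alpha), one mode."""
    norm = math.exp(-alpha * alpha / 2.0)
    return [norm * alpha ** n / math.sqrt(math.factorial(n)) for n in range(nmax)]

def husimi(psi, alpha):
    """H(psi, alpha) = |<alpha|psi>|^2 for a real single-mode state vector."""
    a = coherent(alpha, len(psi))
    return sum(x * y for x, y in zip(a, psi)) ** 2

beta, nmax = 2.0, 40
cat = [x - y for x, y in zip(coherent(beta, nmax), coherent(-beta, nmax))]
nrm = math.sqrt(sum(c * c for c in cat))
cat = [c / nrm for c in cat]   # odd 'cat' superposition, normalised
```

Here `husimi(cat, beta)` is close to $1/2$ while `husimi(cat, 0.0)` vanishes.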
\par In Fig. \ref{husimifig} the position space cross section of the Husimi function for the PGS is shown as a function of the coupling $\lambda$. A signature of the pitchfork bifurcation of the trivial fixed point is apparent. A state that is highly localised at the origin of the oscillator space splits into branches associated with each of the classical stable fixed points after the bifurcation.
For $\lambda< \lambda_b$ we expect the wavefunction of the PGS to be localised at the stable fixed point in the classical picture; i.e. the PGS is approximately
\begin{equation*}
|\psi_g\rangle\approx |0,0\rangle|-\rangle
\end{equation*}
where $|0,0\rangle$ corresponds to the harmonic oscillator ground state and $|-\rangle$ is the eigenstate of $s_z$ with eigenvalue $-1/2$. After the first bifurcation we expect the PGS to be in a linear combination of two states localised at each of the two new stable fixed points in the classical picture. The parity invariance of the Floquet operator determines the phase relationship between these two states, with the result that the PGS is approximately
\begin{equation}
|\psi_g\rangle\approx\frac{1}{\sqrt{2}}\left(|\alpha_x,\alpha_y\rangle|\hat{n}\rangle-|-\alpha_x,-\alpha_y\rangle|\hat{n}'\rangle
\right)
\label{approx1}
\end{equation}
and the first PES is approximately
\begin{equation}
|\psi_e\rangle\approx \frac{1}{\sqrt{2}}\left(|\alpha_x,\alpha_y\rangle|\hat{n}\rangle+|-\alpha_x,-\alpha_y\rangle|\hat{n}'\rangle
\right).
\label{approx2}
\end{equation}
The state $|\hat{n}\rangle= \cos(\theta/2)|+\rangle+e^{i\phi}\sin(\theta/2)|-\rangle$ is determined by the pseudo-spin component
$\hat{n}= \sin(\theta)\cos(\phi){\mathbf i} + \sin(\theta)\sin(\phi){\mathbf j} + \cos(\theta){\mathbf k}$ of the classical fixed point, and $\hat{n}'$ is the parity transformed pseudo-spin component obtained from $\phi \rightarrow \phi+\pi $.
By taking even and odd combinations of the states (\ref{approx1},\ref{approx2}), we can test this approximation for $\lambda>\lambda_b$. It is expected that the two combinations give states localised near each of the classical fixed points respectively (although with some discrepancy due to mixing of the oscillator reduced density matrix, which is absent in the approximation of Eq.~(\ref{approx2})). The Husimi function of the even combination $(|\psi_g\rangle+|\psi_e\rangle)/\sqrt{2}$ of the numerically determined PGS and first PES is shown in Fig. \ref{husimifig}(b), confirming that localisation occurs.
\begin{figure}
\caption{{\bf Husimi function cross sections.}}
\label{husimifig}
\end{figure}
\begin{figure}
\caption{{\bf Entanglement measures.}}
\label{entanglementfig}
\end{figure}
As the PGS is a pure state the entanglement between two subsystems $A$ and $B$ can be quantified by the von Neumann entropy of the reduced density matrix
\begin{equation*}
S(\rho_A)=-Tr(\rho_A\log(\rho_A))
\end{equation*}
where $\rho_A \equiv Tr_B(\rho)$ is the reduced density matrix obtained by tracing out the complementary subsystem $B$.
In this manner the entanglement between the two-level system and the oscillator degrees of freedom, or between an oscillator degree of freedom and the rest of the system, can be calculated from the numerically determined PGS.
To calculate the entanglement between the two oscillator degrees of freedom we use the reduced density matrix
$
\rho_O=Tr_s(|\psi\rangle\langle\psi|)$,
which is generally a mixed state. In this instance the entanglement may be quantified through the logarithmic negativity \cite{Vidal2002}
\begin{equation*}
E_{N}(\rho_O)\equiv \log\parallel\rho_O^{T_A}\parallel_1
\end{equation*}
where the norm is the trace norm $\parallel A\parallel_1=Tr\sqrt{A^{\dagger}A}$ and $\rho^{T_A}$ denotes the partial transpose over subsystem $A$.
The model of this paper might be realised in a cylindrical Paul trap. To ensure that the motion in the $x$-$y$ plane is well confined, the secular frequencies in this plane, $\omega_{x,y}$, should be larger than the frequency in the $z$-direction. Using the stability diagram for this kind of trap (see figure 1 in \cite{Leibfried}) we can operate at trap parameters such that $\omega_{x,y}/\omega_z \approx 10$ without significant micromotion in the $x$-$y$ plane. Although this requires operation close to the stability edge, it should be achievable. To couple different components of the Pauli matrices to the vibrational degrees of freedom we can use the Raman scheme for state-dependent displacements introduced by Monroe et al.\ \cite{Monroe}. Consider first Raman beams directed along the $x$-axis of the trap; the same arguments hold for the $y$-axis Raman pulses. The effective interaction Hamiltonian for the Raman pulses is $H_R=\chi q_x\sigma_z$, where $\sigma_z=|e\rangle \langle e|-|g\rangle\langle g|$ and $|e\rangle,|g\rangle$ are the excited and ground states of the relevant two-level electronic transition. In other words we have a pseudo-spin realisation with $|+\rangle=|e\rangle,\ |-\rangle=|g\rangle$. If we `sandwich' this Raman pulse between two laser pulses tuned to the carrier transition with an appropriate phase choice, the displacement can be made to depend on any component of the Pauli matrices. For example the pulse sequence
\begin{equation}
e^{-i\pi/4\sigma_y} e^{-i\chi q_x\sigma_z} e^{i\pi/4\sigma_y}=e^{-i\chi q_x\sigma_x}
\end{equation}
will achieve the desired result for the $x$ kick.
We are now in a position to consider the experimental signatures of our results. As a specific example, we will focus on detecting the entanglement implicit in the state just beyond the first bifurcation, approximately given in Eq.~(\ref{approx2}). This can be reached adiabatically, starting in the state $|0,0\rangle|-\rangle$ and using a sequence of pulses with gradually increasing coupling strength. At the end of the sequence the probability to detect the atom in the excited state is simply
\begin{equation}
P(+)=\cos^2(\theta/2)\left (1- e^{-2\alpha_x^2-2\alpha_y^2}\right )
\end{equation}
where $\theta, \alpha_x, \alpha_y$ are determined by the classical fixed points. Sampling this distribution thus indicates the support on the classical fixed points. Readout of the excited state is easily done in ion trap realisations using a cycling transition\cite{Leibfried}. The experimental verification of the entanglement that results after the bifurcation is therefore quite achievable with current technology.
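The displayed probability can be checked directly against the approximate state (a sketch under stated assumptions: truncated Fock spaces, real $\alpha_x,\alpha_y$, and the minus-sign combination of Eq. (\ref{approx1}) with its $1/\sqrt{2}$ prefactor):

```python
import math
import numpy as np

def coherent(alpha, nmax):
    """Truncated Fock expansion of |alpha> for real alpha."""
    v = np.array([alpha ** n / math.sqrt(math.factorial(n)) for n in range(nmax)],
                 dtype=complex)
    return math.exp(-alpha * alpha / 2.0) * v

def p_plus(ax, ay, theta, phi, nmax=25):
    """P(+) for (1/sqrt2)(|ax,ay>|n> - |-ax,-ay>|n'>), n' = (theta, phi + pi)."""
    up = np.array([1.0, 0.0], dtype=complex)
    dn = np.array([0.0, 1.0], dtype=complex)
    spin_n = math.cos(theta / 2) * up + np.exp(1j * phi) * math.sin(theta / 2) * dn
    spin_np = math.cos(theta / 2) * up + np.exp(1j * (phi + math.pi)) * math.sin(theta / 2) * dn
    b1 = np.kron(np.kron(coherent(ax, nmax), coherent(ay, nmax)), spin_n)
    b2 = np.kron(np.kron(coherent(-ax, nmax), coherent(-ay, nmax)), spin_np)
    psi = (b1 - b2) / math.sqrt(2)
    amp = psi.reshape(-1, 2)[:, 0]          # spin component along |+>
    return float(np.real(np.vdot(amp, amp)))
```

With, e.g., $\alpha_x=1.2$, $\alpha_y=0.8$, $\theta=0.9$, the numerical value agrees with $\cos^2(\theta/2)\bigl(1-e^{-2\alpha_x^2-2\alpha_y^2}\bigr)$ to truncation accuracy.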
\acknowledgements
This work was supported by the Australian Research Council and the European Union IP QAP.
\begin{references}
\bibitem{Grover}L. K. Grover, Phys. Rev. Lett. {\bf 79}, 325 (1997).
\bibitem{haake}Fritz Haake, {\em Quantum Signatures of Chaos}, (Springer-Verlag, New York, 1992).
\bibitem{jt}H. A. Jahn and E. Teller, Proc. R. Soc. London A {\bf 161}, 220 (1937).
\bibitem{Eng72}R. Englman, {\em The Jahn-Teller Effect in Molecules and Crystals}, (John Wiley \& Sons, New York, 1972).
\bibitem{CI}A. Warshel and W. W. Parson, Quart. Rev. Biophys. 34, 563 (2001).
\bibitem{LevMut01}G.Levine and V.N.Muthukumar, Phys Rev B, {\bf 63}, 245112 (2001).
\bibitem{hines}A. P. Hines, C. M. Dawson, R. H. McKenzie, and G. J. Milburn Phys. Rev. A 70, 022303 (2004).
\bibitem{Sakurai}
J.~J. Sakurai.
\newblock {\em Modern Quantum Mechanics}.
\newblock Addison Wesley, Reading, 1994.
\bibitem{Husimi}
K.~Husimi.
\newblock \textit{Some formal properties of the density matrix.}
\newblock {Proc. Phys. Math. Soc. Japan} \textbf{22}, 264 (1940).
\bibitem{Vidal2002}
G.~Vidal and R.~F. Werner.
\newblock \textit{Computable measure of entanglement.}
\newblock {Phys. Rev. A} \textbf{65}, 032314 (2002).
\bibitem{Leibfried}D. Leibfried, R. Blatt, C. Monroe, and D. Wineland, Rev. Mod. Phys. {\bf 75}, 281 (2003).
\bibitem{Monroe}C. Monroe, D. M. Meekhof, B. E. King, and D. J. Wineland, Science {\bf 272}, 1131 (1996).
\end{references}
\end{document} |
\begin{document}
\title{\textbf{Asymptotic expansion for solutions of the Navier-Stokes equations with non-potential body forces}}
\begin{center}
$^{1}$Department of Mathematics and Statistics, Texas Tech University\\
Box 41042, Lubbock, TX 79409-1042, U.S.A.\\
Email address: \texttt{[email protected]}
$^{2}$Mathematics Department, Tulane University\\
6823 St. Charles Ave, New Orleans, LA 70118, U.S.A.\\
Email address: \texttt{[email protected]}
\bvec end{center}
\begin{abstract}
We study the long-time behavior of spatially periodic solutions of the Navier-Stokes equations in the three-dimensional space.
The body force is assumed to possess an asymptotic expansion or, resp., finite asymptotic approximation, in Sobolev-Gevrey spaces, as time tends to infinity, in terms of polynomial and decaying exponential functions of time.
We establish an asymptotic expansion, or resp., finite asymptotic approximation, of the same type for the Leray-Hopf weak solutions.
This extends previous results, obtained in the case of potential forces, to the non-potential case, in which the body force may have different levels of regularity and asymptotic approximation.
This expansion or approximation, in fact, reveals precisely how the structure of the force influences the asymptotic behavior of the solutions.
\end{abstract}
\section{Introduction}\label{intro}
We study the Navier-Stokes equations (NSE) for a viscous, incompressible fluid in the three-dimensional space $\ensuremath{\mathbb R}^3$.
Let $\bvec x\in \ensuremath{\mathbb R}^3$ and $t\in\ensuremath{\mathbb R}$ denote the space and time variables, respectively.
Let the (kinematic) viscosity be denoted by $\nu>0$, the velocity vector field by $\bvec u(\bvec x,t)\in\ensuremath{\mathbb R}^3$, the pressure by $p(\bvec x,t)\in\ensuremath{\mathbb R}$, and the body force by $\mathbf f(\bvec x,t)\in\ensuremath{\mathbb R}^3$. The NSE which describe the fluid's dynamics are given by
\begin{align}\label{nse}
\begin{split}
&\frac{\partial \bvec u}{\partial t} + (\bvec u\cdot\nabla)\bvec u -\nu\Delta \bvec u = -\nabla p+\mathbf f \quad\text{on }\ensuremath{\mathbb R}^3\times(0,\infty),\\
&\textrm{div } \bvec u = 0 \quad\text{on }\ensuremath{\mathbb R}^3\times(0,\infty).
\end{split}
\end{align}
The initial condition is
\begin{equation}\label{ini}
\bvec u(\bvec x,0) = \bvec u^0(\bvec x),
\end{equation}
where $\bvec u^0(\bvec x)$ is a given divergence-free vector field.
In this paper, we focus on solutions $\bvec u(\bvec x, t)$ and $p(\bvec x, t)$ which are $L$-periodic for some $L>0$.
Here, a function $g(\bvec x)$ is $L$-periodic if
\begin{equation*}
g(\bvec x+L\bvec e_j)=g(\bvec x)\quad \textrm{for all}\quad \bvec x\in \ensuremath{\mathbb R}^3,\ j=1,2,3,
\end{equation*}
where $\{\bvec e_1,\bvec e_2,\bvec e_3\}$ is the standard basis of $\ensuremath{\mathbb R}^3$.
By a suitable Galilean transformation, we may also assume that $\bvec u(\bvec x,t)$, for all $t\ge 0$, has zero average over the domain $\Omega=(-L/2, L/2)^3$.
A function $g(\bvec x)$ is said to have zero average over $\Omega$ if
\begin{equation} \label{Zacond}
\int_\Omega g(\bvec x)d\bvec x=0.
\end{equation}
By rescaling the spatial and time variables, we assume throughout, without loss of generality, that $L=2\pi$ and $\nu =1$.
\begin{notation} We will use the following standard notation.
\begin{enumerate}[label={\rm (\alph*)}]
\item In studying dynamical systems in infinite dimensional spaces, we denote, regarding \eqref{nse} and \eqref{ini}, $u(t)=\bvec u(\cdot,t)$, $f(t)=\mathbf f(\cdot,t)$, and $u^0=\bvec u^0(\cdot)$.
\item For non-negative functions $h(t)$ and $g(t)$, we write
\begin{equation*}
h(t)=\mathcal{O}(g(t))\ \text{as $t\to\infty$}\quad \text{if there exist $T,C>0$ such that }\ h(t)\leq Cg(t),\ \forall t>T.
\end{equation*}
\item The Sobolev spaces on $\Omega$ are denoted by $H^m(\Omega)$ for $m=0,1,2,\ldots$, each consisting of functions on $\Omega$ whose distributional derivatives up to order $m$ belong to $L^2(\Omega)$.
\end{enumerate}
\end{notation}
The type of asymptotic expansion that we study here is defined, in a general setting, as follows.
\begin{definition}\label{expanddef}
Let $X$ be a real vector space.
{\rm (a)} An $X$-valued polynomial is a function $t\in \ensuremath{\mathbb R}\mapsto \sum_{n=0}^d a_n t^n$,
for some $d\ge 0$, and $a_n$'s belonging to $X$.
{\rm (b)} When $(X,\|\cdot\|)$ is a normed space, a function $g(t)$ from $(0,\infty)$ to $X$ is said to have the asymptotic expansion
\begin{equation} \label{gexpand}
g(t) \sim \sum_{n=1}^\infty g_n(t)e^{-nt} \text{ in } X,
\end{equation}
where $g_n(t)$'s are $X$-valued polynomials, if for all $N\geq1$, there exists $\varepsilon_N>0$ such that
\begin{equation} \label{grem}
\Big\|g(t)- \sum_{n=1}^N g_n(t)e^{-nt}\Big\|=\mathcal O(e^{-(N+\varepsilon_N)t})\ \text{as }t\to\infty.
\end{equation}
\end{definition}
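For a simple illustration of this definition, take $X=\ensuremath{\mathbb R}$ with the absolute value norm and $g(t)=1/(e^t-1)$. Summing the geometric series gives
\begin{equation*}
g(t)=\frac{e^{-t}}{1-e^{-t}}=\sum_{n=1}^\infty e^{-nt},
\quad\text{so}\quad
g(t)-\sum_{n=1}^N e^{-nt}=\frac{e^{-(N+1)t}}{1-e^{-t}}=\mathcal O(e^{-(N+\varepsilon)t})
\end{equation*}
for any $\varepsilon\in(0,1)$. Hence $g$ admits the expansion \eqref{gexpand} with the constant polynomials $g_n(t)\equiv 1$.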
This article aims at studying the asymptotic behavior of the solution $\bvec u(\bvec x,t)$ as $t\to\infty$ for a certain class of forces $\mathbf f(\bvec x,t)$.
The case when $\mathbf f$ is a potential force, i.e., $\mathbf f(\bvec x,t)=-\nabla \phi(\bvec x,t)$, for some scalar function $\phi$, has been well-studied. For instance, it is well-known that any Leray-Hopf weak solution becomes regular eventually and decays in the $H^1(\Omega)$-norm exponentially.
For more precise asymptotic behavior, Dyer and Edmunds \cite{DE1968} proved that a non-trivial, regular solution is also bounded below by an exponential function.
Foias and Saut \cite{FS84a} then proved that in bounded or periodic domains, a non-trivial, regular solution decays exponentially at an exact rate which is an eigenvalue of the Stokes operator.
Furthermore, they established in \cite{FS87} that for such a solution $u(t)$, the following asymptotic expansion holds, in the sense of Definition \ref{expanddef}, in Sobolev spaces $H^m(\Omega)^3$, for all $m\ge 0$:
\begin{equation} \label{expand}
u(t) \sim \sum_{n=1}^\infty q_n(t)e^{-nt},
\end{equation}
where $q_n(t)$'s are unique polynomials in $t$
with trigonometric polynomial values.
Recently, in \cite{HM1}, it was shown by the authors that the expansion in fact holds in Gevrey spaces.
More precisely, for any $\sigma>0$ and $m\in\ensuremath{\mathbb N}$, there exists an $\varepsilon_N>0$, for each integer $N>0$, such that
\begin{equation}\label{vHM}
\Big\|e^{\sigma (-\Delta)^{1/2}} \Big(u(t)- \sum_{n=1}^N q_n(t)e^{-nt}\Big)\Big\|_{H^m(\Omega)^3}=\mathcal O\big(e^{-(N+\varepsilon_N)t}\big)\textrm{ as }t\to\infty.
\end{equation}
Note that the (Gevrey) norm in estimate \eqref{vHM} is much stronger than the standard Sobolev norm in $H^m(\Omega)$.
More importantly, the simplified approach in \cite{HM1} allows the proof to be applied to wider classes of equations; that approach will be adopted in this paper.
Regarding the case of potential forces, the interested reader is referred to \cite{FS83,FS84a,FS84b,FS87,FS91} for deeper studies on the asymptotic expansion, its associated normalization map, and invariant nonlinear manifolds; for the associated (Poincar\'e-Dulac) normal form, see \cite{FHOZ1,FHOZ2,FHS1}; for its applications to statistical solutions of the NSE, decaying turbulence, and analysis of helicity, see \cite{FHN1,FHN2}; for a result in the whole space $\mathbb R^3$, see \cite{KukaDecay2011}.
The main goal of this paper is to establish \eqref{expand} when $\mathbf f$ is \textit{not} a potential function.
To understand the result without calling for technical details, we state it here as a `meta theorem'; the rigorous version will be provided by Theorem \ref{mainthm}, whose proof will then be presented in section \ref{pfsec}.
\begin{theorem}[Meta Theorem]\label{meta}
Assume that the body force has an asymptotic expansion
\begin{equation}\label{fexpand}
f(t)\sim \sum_{n=1}^\infty f_n(t)e^{-nt},
\end{equation}
in some appropriate functional space.
Then any Leray-Hopf weak solution $u(t)$ of \eqref{nse} and \eqref{ini} admits an asymptotic expansion of the form \eqref{expand} in the same space.
\end{theorem}
On the other hand, when $f(t)$ satisfies a finitary version of \eqref{fexpand}, we obtain a corresponding finite asymptotic approximation result in Theorem \ref{finitetheo}.
More specifically, if the right-hand side of \eqref{fexpand} is a finite sum, then we show that the corresponding solution, $u(t)$, admits a finite sum approximation of the same type analogous to \eqref{expand}.
Theorem \ref{finitetheo} is also suitable for the case when the force is not smooth, but belongs only to a pre-specified Sobolev class.
Theorem \ref{meta} develops one of the possible avenues of extending the Foias-Saut theory \cite{FS87} for the NSE to the non-potential force case. The finite sum version, Theorem \ref{finitetheo}, which is a new feature only for the non-potential force case, allows asymptotic analysis of the solution even when the force has restricted regularity and only limited information on the force's asymptotic behavior is known. Our analysis, moreover, explicitly indicates how each term in the expansion of the force integrates into the expansion of the solution.
We emphasize that the assumptions that we do impose on the force are very natural. This point is not so obvious because if one directly adopts the argument of Foias-Saut in \cite{FS87}, one would then be led to impose conditions on \textit{time-derivatives of all orders} on the force. We, nevertheless, are able to avoid these conditions and ultimately establish the claimed expansion by applying the refined approach, in the case of periodic domains, initiated in \cite{HM1}.
We also remark that we obtain a welcome technical improvement over \cite{HM1} in obtaining the expansion in Sobolev spaces, when Gevrey regularity is not available; it requires a more elaborate bootstrapping process which is carried out in Part II of the proof of Proposition \ref{theo23}.
The results obtained in this paper are the first steps of the larger program of understanding the relation between the asymptotic expansion and the external body force. On the other hand, in spite of assuming the force to have regular modes of decay, further understanding of the solution's resulting expansion and its consequences will shed insights into the nonlinear structure of the NSE, and decaying turbulence theory.
It is worth mentioning at this point that the global existence of regular solutions and the uniqueness of the global weak solutions of the 3D NSE remain outstanding open problems.
However, our result details the asymptotic behavior of any Leray-Hopf weak solution, regardless of the resolution to either of these issues.
Moreover, to the best of the authors' knowledge, there have been no numerical studies made to determine the polynomials $q_n(t)$ in \eqref{expand} as of yet. By extending the expansion to accommodate non-potential forces, our result should facilitate the formulation and testing of possible numerical algorithms for their computation. Indeed, one can attempt to compute the expansion of \emph{explicit} solutions, particularly, those for which the nonlinear term in the NSE does not vanish. These solutions are quite easy to generate when certain, specific forces are used at the onset, but harder to find
when the projected force in the functional equation \eqref{fctnse} is \emph{given and fixed}, albeit zero, as in the case of potential forces (see \eqref{potential}).
For the structure of this paper, Section \ref{bkgmain} lays the necessary background for formulating the result.
The main theorems are Theorems \ref{mainthm} and \ref{finitetheo}, while Corollary \ref{Vcor} emphasizes the scenario of finite-dimensional polynomial coefficients in \eqref{fexpand}, and the construction of the corresponding polynomials appearing in \eqref{expand} as solutions to finite-dimensional, linear ordinary differential equations.
Section \ref{Gdecay} contains explicit estimates that ultimately furnish ``eventual'' exponential decay for the weak solutions in Sobolev and Gevrey spaces; see Propositions \ref{theo22} and \ref{theo23}. They are used crucially in section \ref{pfsec}, which is devoted to the proofs of our main results.
\section{Background and main results}\label{bkgmain}
The space $L^2(\Omega)^3$ of square (Lebesgue) integrable vector fields on $\Omega$ is a Hilbert space with the standard inner product $\inprod{\cdot,\cdot}$ and norm $|\cdot|$ defined by
$$
\inprod{u,v}=\int_\Omega \bvec u(\bvec x)\cdot \bvec v(\bvec x) d\bvec x
\quad\text{and}\quad |u|
=\inprod{u,u}^{1/2}\quad\text{for } u=\bvec u(\cdot),\ v=\bvec v(\cdot).$$
We note that $|\cdot|$ is also used to denote the absolute value, modulus, and, more generally, the Euclidean norm in $\ensuremath{\mathbb R}^n$ and $\ensuremath{\mathbb C}^n$, for $n\in\ensuremath{\mathbb N}$. Nonetheless, its meaning will be made clear by the context.
Let $\mathcal{V}$ be the set of all $L$-periodic trigonometric polynomial vector fields which are divergence-free and have zero average over $\Omega$.
Define
$$H, \text{ resp. } V\ =\text{ closure of }\mathcal{V} \text{ in }
L^2(\Omega)^3, \text{ resp. } H^1(\Omega)^3.$$
We use the following embeddings and identification
$$V\subset H=H'\subset V',$$
where each space is dense in the next one, and the embeddings are compact.
Let $\mathcal{P}$ denote the orthogonal (Leray) projection in $L^2(\Omega)^3$ onto $H$. Explicitly,
\begin{align*}
\mathcal{P}\Big(\sum_{\bvec k\ne \mathbf 0}
\widehat{\bvec u}(\bvec k)e^{i\bvec k\cdot \bvec x}\Big) = \sum_{\bvec k\ne \mathbf 0} \Big\{\widehat{\bvec u}(\bvec k)-\Big(\widehat{\bvec u}(\bvec k)\cdot \frac{\bvec k}{|\bvec k|}\Big)\frac{\bvec k}{|\bvec k|}\Big\}e^{i\bvec k\cdot \bvec x} .
\end{align*}
We define the Stokes operator $A:V\to V'$ by
\begin{equation*}
\inprod{A\bvec u,\bvec v}_{V',V}=
\doubleinprod{\bvec u,\bvec v}
\stackrel{\rm def}{=} \sum_{i=1}^3 \inprod{ \frac{\partial \bvec u}{\partial x_i} , \frac{\partial \bvec v}{\partial x_i} },\quad \text{for all } \bvec u,\bvec v\in V.
\end{equation*}
As an unbounded operator on $H$, the operator $A$ has the domain $\mathcal D(A)=V\cap H^2(\Omega)^3$, and
\begin{equation*} A\bvec u = - \mathcal{P}\Delta \bvec u=-\Delta \bvec u\in H, \quad \textrm{for all}\quad \bvec u\in\mathcal D(A).
\end{equation*}
The last identity is due to the periodic boundary conditions.
It is known that the spectrum of the Stokes operator $A$ is $\sigma(A)=\{\lambda_j:j\in \ensuremath{\mathbb N}\}$,
where $\lambda_j$ is strictly increasing in $j$, and each $\lambda_j$ is an eigenvalue of $A$.
In fact, for each $j\in\ensuremath{\mathbb N}$, $\lambda_j=|\bvec k|^2$ for some $\bvec k\in\ensuremath{\mathbb Z}^3\setminus \{\mathbf 0\}$. Note that since $\sigma(A)\subset \ensuremath{\mathbb N}$ and $\lambda_1=1$, the additive semigroup generated by $\sigma(A)$ is equal to $\ensuremath{\mathbb N}$.
For $\alpha,\sigma \in \ensuremath{\mathbb R}$ and $\bvec u=\sum_{\bvec k\ne \mathbf 0}
\widehat{\bvec u}(\bvec k)e^{i\bvec k\cdot \bvec x}$, define
$$A^\alpha \bvec u=\sum_{\bvec k\ne \mathbf 0} |\bvec k|^{2\alpha} \widehat{\bvec u}(\bvec k)e^{i\bvec k\cdot
\bvec x},$$
$$A^\alpha e^{\sigma A^{1/2}} \bvec u=\sum_{\bvec k\ne \mathbf 0} |\bvec k|^{2\alpha}e^{\sigma
|\bvec k|} \widehat{\bvec u}(\bvec k)e^{i\bvec k\cdot
\bvec x}.$$
We then define the Gevrey spaces by
\begin{equation*}
G_{\alpha,\sigma}=\mathcal D(A^\alpha e^{\sigma A^{1/2}} )\stackrel{\rm def}{=} \{ \bvec u\in H: |\bvec u|_{\alpha,\sigma}\stackrel{\rm def}{=} |A^\alpha
e^{\sigma A^{1/2}}\bvec u|<\infty\},
\end{equation*}
and the domain of the fractional operator $A^\alpha$ by
\begin{equation*}
\mathcal D(A^\alpha)=G_{\alpha,0}=\{ \bvec u\in H: |A^\alpha \bvec u|=|\bvec u|_{\alpha,0}<\infty\}.
\end{equation*}
Thanks to the zero-average condition \eqref{Zacond}, the norm $|A^{m/2}\bvec u|$ is equivalent to $\|\bvec u\|_{H^m(\Omega)^3}$ on the space $\mathcal D(A^{m/2})$ for $m=0,1,2,\ldots$
Note that $\mathcal D(A^0)=H$, $\mathcal D(A^{1/2})=V$, and $\|\bvec u\|\stackrel{\rm def}{=} |\nabla \bvec u|$ is equal to $|A^{1/2}\bvec u|$ for $\bvec u\in V$.
Also, the spaces $G_{\alpha,\sigma}$ are decreasing in $\alpha$ and $\sigma$.
Denote for $\sigma\in\ensuremath{\mathbb R}$ the space
\begin{equation*}
E^{\infty,\sigma}=\bigcap_{\alpha\ge 0} G_{\alpha,\sigma}=\bigcap_{m\in\ensuremath{\mathbb N} } G_{m,\sigma}.
\end{equation*}
We will say that an asymptotic expansion \eqref{gexpand} holds in $E^{\infty,\sigma}$ if it holds in $G_{\alpha,\sigma}$ for all $\alpha\ge 0$.
Let us also denote by $\mathcal{P}^{\alpha,\sigma}$ the space of $G_{\alpha,\sigma}$-valued polynomials in case $\alpha\in \ensuremath{\mathbb R}$, and
the space of $E^{\infty,\sigma}$-valued polynomials in case $\alpha=\infty$.
We define the bilinear mapping $B:V\times V\to V'$, which is associated with
the nonlinear term in the NSE, by
\begin{equation*}
\inprod{B(\bvec u,\bvec v),\bvec w}_{V',V}=b(\bvec u,\bvec v,\bvec w)\stackrel{\rm def}{=} \int_\Omega ((\bvec u\cdot \nabla) \bvec v)\cdot \bvec w\, d\bvec x, \quad \textrm{for all}\quad \bvec u,\bvec v,\bvec w\in V.
\end{equation*}
In particular,
\begin{equation*}
B(\bvec u,\bvec v)=\mathcal{P}((\bvec u\cdot \nabla) \bvec v), \quad \textrm{for all}\quad \bvec u,\bvec v\in\mathcal D(A).
\end{equation*}
More precisely, for $\bvec u=\sum_{\bvec k\ne \mathbf 0}
\widehat{\bvec u}(\bvec k)e^{i\bvec k\cdot \bvec x}$ and $\bvec v=\sum_{\bvec k\ne \mathbf 0}
\widehat{\bvec v}(\bvec k)e^{i\bvec k\cdot \bvec x}$,
\begin{equation*}
B(\bvec u,\bvec v)
= \sum_{\bvec k\ne \mathbf 0} \Big\{\widehat{\bvec b}(\bvec k)-\Big(\widehat{\bvec b}(\bvec k)\cdot \frac{\bvec k}{|\bvec k|}\Big)\frac{\bvec k}{|\bvec k|} \Big\}e^{i\bvec k\cdot
\bvec x},
\quad\text{where }\widehat{\bvec b}(\bvec k)=\sum_{\bvec m+\bvec l=\bvec k}
i (\widehat{\bvec u}(\bvec m)\cdot \bvec l)\widehat{\bvec v}(\bvec l).
\end{equation*}
It is clear that
\begin{equation}\label{BVV}
B(\mathcal V,\mathcal V)\subset \mathcal V.
\end{equation}
By applying the Leray projection $\mathcal{P}$ to \eqref{nse} and \eqref{ini},
we rewrite the initial value problem for the NSE in the functional form as
\begin{equation}\label{fctnse}
\frac{du(t)}{dt} + Au(t) +B(u(t),u(t))=\mathcal P f(t) \quad \text{ in } V' \text{ on } (0,\infty),
\end{equation}
with the initial data
\begin{equation}\label{uzero}
u(0)=u^0\in H.
\end{equation}
(See e.g. \cite{LadyFlowbook69,CFbook,TemamAMSbook} for more details.)
Because of the projection $\mathcal P$ on the right-hand side of \eqref{fctnse}, it is convenient to assume, without loss of generality, that the force belongs to $H$. Then we have
$$\mathcal P f(t)=f(t)\text{ in \eqref{fctnse}}.$$
When $\mathbf f(\bvec x,t)$ is a potential force, the Helmholtz-Leray decomposition gives
\begin{equation} \label{potential}
\mathcal P f(t)\equiv 0 \text{ in the functional equation \eqref{fctnse}}.
\end{equation}
In dealing with weak solutions of \eqref{fctnse}, we follow the presentation in \cite{FMRTbook} and use the results there.
\begin{definition}\label{lhdef}
Let $f\in L^2_{\rm loc}([0,\infty),H)$.
A \emph{Leray-Hopf weak solution} $u(t)$ of \eqref{fctnse} is a mapping from $[0,\infty)$ to $H$ such that
\begin{equation}\label{lh:wksol}
u\in C([0,\infty),H_{\rm w})\cap L^2_{\rm loc}([0,\infty),V),\quad u'\in L^{4/3}_{\rm loc}([0,\infty),V'),
\end{equation}
and satisfies
\begin{equation}\label{varform}
\frac{d}{dt} \inprod{u(t),v}+\doubleinprod{u(t),v}+b(u(t),u(t),v)=\inprod{f(t),v}
\end{equation}
in the distribution sense in $(0,\infty)$, for all $v\in V$, and the energy inequality
\begin{equation}\label{Lenergy}
\frac12|u(t)|^2+\int_{t_0}^t \|u(\tau)\|^2d\tau\le \frac12|u(t_0)|^2+\int_{t_0}^t \langle f(\tau),u(\tau)\rangle d\tau
\end{equation}
holds for $t_0=0$ and almost all $t_0\in(0,\infty)$, and all $t\ge t_0$.
We will say that a Leray-Hopf weak solution $u(t)$ is \emph{regular} if $u\in C([0,\infty),V)$.
\end{definition}
Above, $H_{\rm w}$ is the topological vector space $H$ with the weak topology.
This definition of the Leray-Hopf weak solutions with the choice of the energy inequality \eqref{Lenergy} is, in fact, equivalent to the definition of weak solutions used in \cite[Chapter II, section 7]{FMRTbook}; see e.g. Remark 1(e) of \cite{FRT2010} for the explanation.
\begin{assumption}
It is assumed throughout the paper that the function $f(t)$ belongs to $L^\infty_{\rm loc}([0,\infty),H)$.
\end{assumption}
This assumption serves to guarantee the existence of the Leray-Hopf weak solutions for any $u^0\in H$, see e.g. \cite{FMRTbook}. We note that, later in the paper, we will further impose that the force, $f(t)$, decays in time, for instance, in Sobolev norms.
Regarding the problem of finding asymptotic expansions for solutions of the NSE, it is natural, at this stage, to study the class of functions $f(t)$ whose asymptotic behavior is similar to that of $u(t)$ in \eqref{expand}.
We specify the condition on $f(t)$ more precisely in the next theorem, which is our first main result.
\begin{theorem}[Asymptotic expansion] \label{mainthm}
Assume that there exist a number $\sigma_0\geq0$ and polynomials $f_n\in\mathcal{P}^{\infty,\sigma_0}$, for all $n\ge 1$, such that $f(t)$ has the asymptotic expansion
\begin{equation}\label{forcexpand}
f(t)\sim \sum_{n=1}^\infty f_n(t)e^{-nt}\quad \text{in }E^{\infty,\sigma_0} .
\end{equation}
Let $u(t)$ be a Leray-Hopf weak solution of \eqref{fctnse} and \eqref{uzero}.
Then there exist polynomials $q_n\in\mathcal{P}^{\infty,\sigma_0}$, for all $n\ge 1$, such that $u(t)$ has the asymptotic expansion
\begin{equation}\label{uexpand}
u(t)\sim \sum_{n=1}^\infty q_n(t) e^{-nt}\quad \text{in }E^{\infty,\sigma_0} .
\end{equation}
Moreover, the mappings
\begin{equation}\label{uF}
u_n(t)\stackrel{\rm def}{=} q_n(t) e^{-nt} \quad\text{and}\quad F_n(t)\stackrel{\rm def}{=} f_n(t)e^{-nt}
\end{equation}
satisfy the following ordinary differential equations in the space $E^{\infty,\sigma_0}$
\begin{equation}\label{unODE}
\frac{d}{dt} u_n(t) + Au_n(t) +\sum_{\stackrel{k,m\ge 1}{k+m=n}}B(u_k(t),u_m(t))= F_n(t),\quad t\in\ensuremath{\mathbb R},
\end{equation}
for all $n\ge 1$.
\end{theorem}
Regarding equation \eqref{unODE}, when $n=1$, the sum on its left-hand side is empty, hence the equation reads as
\begin{equation}\label{u1ODE}
\frac{d}{dt} u_1(t) + Au_1(t) = F_1(t).
\end{equation}
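To see how the polynomial $q_1$ arises from \eqref{u1ODE}, one may project onto the eigenspace of $A$ associated with an eigenvalue $\lambda\in\sigma(A)$. Writing the projected equation for $u_1(t)=q_1(t)e^{-t}$ gives
\begin{equation*}
q_1'(t)+(\lambda-1)q_1(t)=f_1(t)
\end{equation*}
on that eigenspace. When $\lambda=1$, $q_1$ is an antiderivative of $f_1$, so its degree exceeds that of $f_1$ by one (resonance); when $\lambda>1$, there is a unique polynomial solution of the same degree as $f_1$.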
\begin{remark}\label{betterem}
Observe that since the expansion \eqref{forcexpand} is an infinite sum, it immediately implies the following remainder estimate:
\begin{align*}
\Big|f(t)-\sum_{n=1}^N f_n(t)e^{-nt}\Big|_{\alpha,\sigma_0}
&\le \Big|f_{N+1}(t)e^{-(N+1)t}\Big|_{\alpha,\sigma_0}+\Big|f(t)-\sum_{n=1}^{N+1} f_n(t)e^{-nt}\Big|_{\alpha,\sigma_0}\\
&=\mathcal O(e^{-(N+\varepsilon)t})+\mathcal O(e^{-(N+1+\delta_{N+1,\alpha})t}),
\end{align*}
which holds for all $N\geq1$, $\alpha\ge 0$, $\varepsilon\in(0,1)$ and some $\delta_{N+1,\alpha}\in(0,1)$. Therefore, we have for all $N\geq1$, $\alpha\ge0$ that
\begin{equation}\label{fep}
\Big|f(t)-\sum_{n=1}^N f_n(t)e^{-nt}\Big|_{\alpha,\sigma_0}
=\mathcal O(e^{-(N+\varepsilon)t}) \quad\text{as }t\to\infty,\quad \forall\varepsilon\in(0,1).
\end{equation}
Similarly, the expansion \eqref{uexpand} implies for any $N\ge 1$ and $\alpha\ge0$ that
\begin{equation}\label{uqa}
\Big |u(t)-\sum_{n=1}^N q_n(t) e^{-nt}\Big |_{\alpha,\sigma_0} =\mathcal O(e^{-(N+\varepsilon)t} )\quad\text{as } t\to\infty,\quad \forall\varepsilon\in(0,1).
\end{equation}
(In fact, the same argument applies to the general expansion \eqref{gexpand}, so that $\varepsilon_N$ in \eqref{grem} can be taken arbitrarily in $(0,1)$.)
\end{remark}
\begin{remark}\label{mainrmk} The following additional remarks on Theorem \ref{mainthm} are in order.
\begin{enumerate}[label={\rm (\alph*)}]
\item We do not require the time derivatives $d^m f/dt^m$, \emph{for all} $m\in\ensuremath{\mathbb N}$, to have any kind of expansion.
Indeed, this rather stringent requirement would have been imposed if one adapted Foias-Saut's original proof.
Our relaxation of this condition is owed to the higher regularity of the solutions for large time, in the particular case of periodic domains, either in Sobolev or Gevrey spaces.
In the case of Sobolev spaces ($\sigma_0=0$), it requires a bootstrapping scheme to gradually increase the regularity to any needed level.
In the case of the Gevrey spaces ($\sigma_0>0$), the effect is immediate, hence the proof is shorter.
(For the related Gevrey norm techniques, see \cite{FT-Gevrey, HM1, OliverTiti2000} and references therein.)
\item The equations \eqref{unODE} determine the polynomials $q_n(t)$'s and indicate the interactions on all scales between the body force and the nonlinear terms in the NSE.
Even though the solution $u(t)$ decays to zero, such interactions are complicated.
\item Condition \eqref{forcexpand} is easily satisfied for any finite sum
\begin{equation*}
f(t)=\sum_{n=1}^N f_n(t)e^{-nt},
\end{equation*}
for some fixed $N\ge 1$, and polynomials $f_n(t)$'s belonging to $\mathcal V$. Even in this case, the result in Theorem \ref{mainthm} is new and the expansion \eqref{uexpand} for $u(t)$ can still be an infinite sum.
\item If the expansion \eqref{forcexpand} for $f(t)$ holds in $G_{0,\sigma_1}$ for some $\sigma_1>0$, then it holds in $E^{\infty,\sigma_0}$ for any $\sigma_0\in (0,\sigma_1)$, and hence, by Theorem \ref{mainthm}, the solution $u(t)$ admits the expansion \eqref{uexpand} in $E^{\infty,\sigma_0}$ for any $\sigma_0\in (0,\sigma_1)$.
\item The equations \eqref{unODE} in fact are {\it linear} systems of ordinary differential equations (ODEs) in infinite-dimensional spaces. They form an integrable system in the sense that they can be recursively solved by the variation of constants formula. Moreover, each solution $u_n(t)$ of the form \eqref{uF} is uniquely determined provided that $R_n u_n(0)=\xi_n\in R_nH$ is given.
\end{enumerate}
\end{remark}
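For instance, granted the preceding terms $u_1,\ldots,u_{n-1}$, the variation of constants formula yields, for any $t_0\in\ensuremath{\mathbb R}$,
\begin{equation*}
u_n(t)=e^{-(t-t_0)A}u_n(t_0)+\int_{t_0}^t e^{-(t-s)A}\Big(F_n(s)-\sum_{\stackrel{k,m\ge 1}{k+m=n}}B(u_k(s),u_m(s))\Big)\,ds,
\end{equation*}
so that the system \eqref{unODE} can indeed be solved recursively in $n$.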
In the following simple scenario, the ODEs \eqref{unODE} are finite-dimensional systems, which makes them more accessible for deeper study and, in particular, for numerical computations.
\begin{corollary}\label{Vcor}
If all $f_n(t)$'s in Theorem \ref{mainthm} are $\mathcal V$-valued polynomials,
then so are the polynomials $q_n(t)$'s in the expansion \eqref{uexpand}, and consequently, the equations \eqref{unODE} are systems of linear ODEs in finite-dimensional spaces.
\end{corollary}
Next is the paper's second main result, which deals with the case when $f(t)$ does not possess an asymptotic expansion \eqref{forcexpand}, but rather a \emph{finite asymptotic approximation}. It is then proved that the weak solution also admits a finite asymptotic approximation of the same type.
\begin{theorem}[Finite asymptotic approximation]\label{finitetheo}
Suppose there exist an integer $N_*\ge1$, real numbers $\sigma_0\ge 0$, $\mu_*\ge \alpha_*\ge N_*/2$, and, for any $1\le n\le N_*$, numbers $\delta_n\in(0,1)$ and polynomials $f_n\in\mathcal{P}^{\mu_n,\sigma_0}$, such that
\begin{equation} \label{ffinite}
\Big|f(t)-\sum_{n=1}^N f_n(t)e^{-nt}\Big|_{\alpha_N,\sigma_0}=\mathcal O(e^{-(N+\delta_N)t}) \quad\text{as }t\to\infty,
\end{equation}
for $1\le N\le N_*$, where
\begin{equation*}
\mu_n=\mu_*-(n-1)/2,\quad \alpha_n=\alpha_*-(n-1)/2.
\end{equation*}
Let $u(t)$ be a Leray-Hopf weak solution of \eqref{fctnse} and \eqref{uzero}.
\begin{enumerate}[label={\rm{(\roman*)} }]
\item Then there exist polynomials $q_n\in\mathcal{P}^{\mu_n+1,\sigma_0}$, for $1\le n\le N_*$,
such that one has for $1\le N\le N_*$ that
\begin{equation} \label{ufinite}
\Big |u(t)-\sum_{n=1}^N q_n(t)e^{-nt}\Big|_{\alpha_N,\sigma_0}=\mathcal O(e^{-(N+\varepsilon)t})\quad\text{as }t\to\infty,\quad\forall\varepsilon\in(0,\delta_N^*),
\end{equation}
where
$ \delta_N^*=\min\{\delta_1,\delta_2,\ldots,\delta_N\}$.
Moreover, the ODEs \eqref{unODE} hold in the corresponding space $G_{\mu_n,\sigma_0}$ for $1\le n\le N_*$.
\item In particular, if all $f_n(t)$'s belong to $\mathcal V$, resp., $E^{\infty,\sigma_0}$, then so do all $q_n(t)$'s, and the ODEs \eqref{unODE} hold in $\mathcal V$, resp., $E^{\infty,\sigma_0}$.
\end{enumerate}
\end{theorem}
Regarding the finite approximation \eqref{ffinite}, in addition to its sum having only finitely many terms ($N\le N_*$), the force $f(t)$ now has much less regularity compared to the expansion \eqref{forcexpand}. This, therefore, determines the regularity of the solution $u(t)$ and of its asymptotic approximation \eqref{ufinite}. Such dependence was worked out in detail in the above theorem.
It is worth noticing that Theorem \ref{finitetheo} is stronger than Theorem \ref{mainthm}. Nevertheless, the proof of Theorem \ref{finitetheo}, in the current presentation, is adapted from that of Theorem \ref{mainthm}.
\section{Exponential decay in Gevrey and Sobolev spaces}\label{Gdecay}
This section prepares for the proofs in section \ref{pfsec}. Particularly, we will derive the exponential decay for weak solutions in both Gevrey and Sobolev spaces.
First, we have a few basic inequalities concerning the Sobolev and Gevrey norms.
It is elementary to see that
\begin{equation*}
\max_{x\ge 0} (x^{2\alpha}e^{-\sigma x})=\Big(\frac{2\alpha}{e\sigma }\Big)^{2\alpha} \text{ for any } \sigma,\alpha>0.
\end{equation*}
Applying this inequality, one easily obtains
\begin{equation}\label{als}
|A^\alpha u|=|(A^\alpha e^{-\sigma A^{1/2}})e^{\sigma A^{1/2}}u| \le \Big(\frac{2\alpha}{e\sigma}\Big)^{2\alpha} |e^{\sigma A^{1/2}}u|\quad \forall \alpha,\sigma>0.
\end{equation}
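Indeed, a quick calculus check confirms this: the derivative of $x\mapsto x^{2\alpha}e^{-\sigma x}$ vanishes on $(0,\infty)$ only at $x=2\alpha/\sigma$, where
\begin{equation*}
x^{2\alpha}e^{-\sigma x}\Big|_{x=2\alpha/\sigma}=\Big(\frac{2\alpha}{\sigma}\Big)^{2\alpha}e^{-2\alpha}=\Big(\frac{2\alpha}{e\sigma}\Big)^{2\alpha};
\end{equation*}
estimate \eqref{als} then follows by applying this bound mode by mode with $x=|\bvec k|$.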
We recall a key estimate for the Gevrey norm of the bilinear form (cf. Lemma 2.1 in \cite{HM1}), which is based mainly on the work of Foias and Temam \cite{FT-Gevrey}.
\begin{lemma}
\label{nonLem}
Let $\sigma\ge 0$ and $\alpha\ge 1/2$. There exists an absolute constant $K>1$, independent of $\alpha,\sigma$, such that
\begin{equation}\label{AalphaB}
|B(u,v)|_{\alpha ,\sigma }\le K^\alpha |u|_{\alpha +1/2,\sigma } \, |v|_{\alpha +1/2,\sigma}\quad \forall u,v\in G_{\alpha+1/2,\sigma}.
\end{equation}
\end{lemma}
Although inequality \eqref{AalphaB} is not sharp, it is very convenient for our calculations below and will be sufficient for our purposes.
As a consequence of Lemma \ref{nonLem}, we have
\begin{equation}\label{BGG}
B(G_{\alpha+1/2,\sigma},G_{\alpha+1/2,\sigma})\subset G_{\alpha,\sigma}\quad\text{for } \alpha\ge 1/2,\ \sigma\ge 0,
\end{equation}
\begin{equation}\label{BEE}
B(E^{\infty,\sigma},E^{\infty,\sigma})\subset E^{\infty,\sigma}\quad\text{for }\sigma\ge 0.
\end{equation}
Next, we prove a small data result, which establishes the global existence of the solution in Gevrey spaces and its exponential decay as time goes to infinity.
\begin{proposition} \label{theo22}
Let $\delta\in (0,1)$, $\lambda\in(1-\delta,1]$, and $\sigma\ge 0$, $\alpha\ge1/2$.
Define the positive numbers $C_0=C_0(\alpha,\delta)$ and $C_1=C_1(\alpha,\delta,\lambda)$ by
\begin{equation*}
\begin{cases}
C_0=\frac\delta{6K^\alpha},\
C_1=\frac2{\sqrt3}\,\sqrt{\delta(\lambda-1+\delta)}\, C_0&\text{if }\sigma>0,\\
C_0=\frac\delta{4K^\alpha},\
C_1=\sqrt{2}\,\sqrt{\delta(\lambda-1+\delta)}\, C_0&\text{if }\sigma=0.
\end{cases}
\end{equation*}
Suppose
\begin{equation}\label{usmall}
|A^\alpha u^0|\le C_0,
\end{equation}
and
\begin{equation}\label{fta}
|f(t)|_{\alpha-1/2,\sigma}\le C_1e^{-\lambda t},\quad\forall t\ge0.
\end{equation}
Then there exists a unique solution $u(t)$ of \eqref{fctnse} and \eqref{uzero} that satisfies
$$u\in C([0,\infty),\mathcal D(A^\alpha))$$ and
\begin{equation}\label{uest}
|u(t)|_{\alpha,\sigma}\le \sqrt{2}C_0e^{-(1-\delta)t},\quad \forall t\ge t_*,
\end{equation}
where $t_*=6\sigma/\delta$.
Moreover, one has for all $t\ge t_*$ that
\begin{equation}\label{intAa}
\int_t^{t+1} |u(\tau)|_{\alpha+1/2,\sigma}^2d\tau
\le \frac{3C_0^2}{2(1-\delta)}e^{-2(1-\delta)t}.
\end{equation}
\end{proposition}
\begin{proof}
While the estimates below are formal, they can be justified by performing them at the level of the Galerkin approximation and then passing to the limit. The estimates will hold for the unique, regular solution $u(t)$.
\textbf{Part I: case $\sigma>0$.} Let $\varphi(t)$ be a function in $C^\infty(\ensuremath{\mathbb R})$ such that
$$\varphi((-\infty,0])=\{0\},\quad
\varphi([0,t_*])=[0,\sigma],\quad
\varphi([t_*,\infty))=\{\sigma\},$$
and
$$0< \varphi'(t)< 2\sigma/t_*=\delta/3\quad \text{for all }t\in(0,t_*).$$
From equation \eqref{fctnse}, we have
\begin{equation}\label{daeu}
\frac{d}{dt} (A^{\alpha}e^{\varphi(t) A^{1/2}}u(t)) =A^{\alpha}e^{\varphi(t) A^{1/2}}(-Au-B(u,u)+f)
+ \varphi'(t)A^{1/2}A^{\alpha}e^{\varphi(t) A^{1/2}} u.
\end{equation}
Taking the inner product of equation \eqref{daeu} with $A^{\alpha}e^{\varphi(t) A^{1/2}}u(t)$ gives
\begin{align*}
&\frac12\frac{d}{dt} |u|_{\alpha,\varphi(t)}^2 + |A^{1/2}u|_{\alpha,\varphi(t)}^2
=\varphi'(t)\langle A^{2\alpha+1/2}e^{2\varphi(t) A^{1/2}}u,u\rangle\\
&\quad -\langle A^{\alpha}e^{\varphi(t) A^{1/2}}B(u,u),A^{\alpha}e^{\varphi(t) A^{1/2}}u\rangle+ \langle A^{\alpha-1/2}e^{\varphi(t) A^{1/2}}f,A^{\alpha+1/2}e^{\varphi(t) A^{1/2}}u\rangle.
\end{align*}
Applying the Cauchy-Schwarz inequality, and then Lemma \ref{nonLem} to the second term on the right-hand side, we obtain
\begin{align*}
&\frac12\frac{d}{dt} |u|_{\alpha,\varphi(t)}^2 + |A^{1/2}u|_{\alpha,\varphi(t)}^2\\
& \le \varphi'(t) |u|_{\alpha+1/2,\varphi(t)}^2+K^\alpha |A^{1/2}u|_{\alpha,\varphi(t)}^2 |u|_{\alpha,\varphi(t)}
+ |f(t)|_{\alpha-1/2,\varphi(t)}|u|_{\alpha+1/2,\varphi(t)} \\
&\le \frac\delta3 |u|_{\alpha+1/2,\varphi(t)}^2+K^\alpha |A^{1/2}u|_{\alpha,\varphi(t)}^2 |u|_{\alpha,\varphi(t)}
+ \frac3{4\delta}|f(t)|_{\alpha-1/2,\varphi(t)}^2 +\frac\delta3|u|_{\alpha+1/2,\varphi(t)}^2.
\end{align*}
This implies
\begin{equation}\label{s1}
\frac12\frac{d}{dt} |u|_{\alpha,\varphi(t)}^2 + \Big(1-\frac{2\delta}3 -K^\alpha |u|_{\alpha,\varphi(t)}\Big)|A^{1/2}u|_{\alpha,\varphi(t)}^2 \le \frac3{4\delta}|f(t)|_{\alpha-1/2,\sigma}^2.
\end{equation}
Let $T\in(0,\infty)$. Note that $|u(0)|_{\alpha,\varphi(0)}=|A^\alpha u^0|<2C_0$. Assume that
\begin{equation}\label{uT}
|u(t)|_{\alpha,\varphi(t)}\le 2C_0,\quad \forall t\in[0,T).
\end{equation}
Then for $t\in (0,T)$, we have from \eqref{s1} and \eqref{fta} that
\begin{equation}\label{s2}
\frac{d}{dt} |u|_{\alpha,\varphi(t)}^2 + 2(1-\delta)|A^{1/2}u|_{\alpha,\varphi(t)}^2 \le \frac3{2\delta}|f(t)|_{\alpha-1/2,\sigma}^2\leq \frac{3C_1^2}{2\delta} e^{-2\lambda t}.
\end{equation}
Applying Gronwall's inequality to \eqref{s2} yields for all $t\in(0,T)$ that
\begin{align*}
|u(t)|_{\alpha,\varphi(t)}^2
&\le e^{-2(1-\delta)t}|u^0|_{\alpha,0}^2+\frac{3C_1^2}{2\delta}e^{-2(1-\delta)t}\int_0^t e^{2(1-\delta)\tau} \cdot e^{-2\lambda\tau} d\tau\\
&\le e^{-2(1-\delta)t}|u^0|_{\alpha,0}^2+\frac{3C_1^2}{4\delta (\lambda-1+\delta)} e^{-2(1-\delta)t}\\
&= \Big(|u^0|_{\alpha,0}^2 + C_0^2\Big)e^{-2(1-\delta)t}.
\end{align*}
Combining this with condition \eqref{usmall} for the initial data, we obtain
\begin{equation*}
|u(t)|_{\alpha,\varphi(t)}^2
\le 2C_0^2e^{-2(1-\delta)t} ,
\end{equation*}
which gives
\begin{equation}\label{s4}
|u(t)|_{\alpha,\varphi(t)}
\le \sqrt{2} C_0 e^{-(1-\delta)t}, \quad \forall t\in(0,T).
\end{equation}
In particular, letting $t\to T^-$ in \eqref{s4} yields
\begin{equation*}
\lim_{t\to T^-}|u(t)|_{\alpha,\varphi(t)}
\le \sqrt{2}C_0<2C_0.
\end{equation*}
By the standard contradiction argument, the inequality in \eqref{uT} holds for all $t>0$, and consequently, so does \eqref{s4}. Since $\varphi(t)=\sigma$ for $t\ge t_*$, the desired estimate \eqref{uest} follows from \eqref{s4}.
For $t\ge t_*$, integrating \eqref{s2} from $t$ to $t+1$ gives
\begin{align*}
& 2(1-\delta)\int_t^{t+1} |A^{1/2}u(\tau)|_{\alpha,\sigma}^2d\tau
\le |u(t)|_{\alpha,\sigma}^2+\frac{3C_1^2}{2\delta}\int_t^{t+1}e^{-2\lambda \tau}d\tau\\
&\le 2C_0^2 e^{-2(1-\delta)t}+\frac{3C_1^2}{4\delta\lambda}e^{-2\lambda t}
\le C_0^2 e^{-2(1-\delta)t}\Big(2+\frac{\lambda-1+\delta}{\lambda}\Big)\\
&\le 3C_0^2 e^{-2(1-\delta)t}.
\end{align*}
Then estimate \eqref{intAa} follows.
\textbf{Part II: case $\sigma=0$.} The proof is similar to Part I, without the use of the function $\varphi(t)$.
Here, we perform the necessary calculations.
First, using Sobolev norms, we have
\begin{equation}\label{dtAa}
\frac12\frac{d}{dt} |A^\alpha u|^2+\Big(1-\frac\delta2-K^\alpha|A^\alpha u|\Big)|A^{\alpha+1/2}u|^2\le \frac1{2\delta}|A^{\alpha-1/2}f|^2.
\end{equation}
As long as $|A^\alpha u(t)|\le 2C_0=\delta/(2K^\alpha)$ in $[0,T)$ for some $T\in(0,\infty]$, we have
for $t\in(0,T)$ that
\begin{align*}
|A^\alpha u(t)|^2
&\le |A^\alpha u^0|^2 e^{-2(1-\delta)t} + \frac{C_1^2 e^{-2(1-\delta)t}}{\delta}
\int_0^t e^{-2(\lambda-1+\delta)\tau}d\tau\\
&\le \Big(C_0^2 +\frac{C_1^2}{2\delta(\lambda-1+\delta)}\Big) e^{-2(1-\delta)t}= 2C_0^2 e^{-2(1-\delta)t}.
\end{align*}
This implies $T=\infty$, and then also proves \eqref{uest}.
Now, using \eqref{uest} for the second term on the left-hand side of \eqref{dtAa}, and then integrating in time, gives
\begin{align*}
2(1-\delta)\int_t^{t+1} |A^{\alpha+1/2}u|^2d\tau
&\le |A^\alpha u(t)|^2+ \frac1\delta\int_t^{t+1}|A^{\alpha-1/2}f|^2d\tau\\
&\le 2C_0^2 e^{-2(1-\delta)t} + \frac{C_1^2 e^{-2\lambda t}}{2\delta\lambda}
\le 3C_0^2 e^{-2(1-\delta)t}.
\end{align*}
This implies \eqref{intAa}, and the proof is complete.
\end{proof}
\begin{remark}
We point out that Proposition \ref{theo22} is essentially an application of the Leray-Hopf energy inequality and the well-known fact (cf. \cite{DoeringTiti}) that having control on the growth of the rate of global energy dissipation, $\epsilon:=\sup_{t\geq0}\epsilon(t)$, where $\epsilon(t):=\nu\lVert{\nabla u(t)}\rVert_{L^2}^2$, can sustain exponential decay in the spectrum of the corresponding solution. The need for proving Proposition \ref{theo22} comes from the fact that we require detailed information on the decay rates of the solution and the particular effect on them from the body force, which is not immediately available from \cite{DoeringTiti}, where the case of a non-potential force is not treated.
\end{remark}
Considering decaying forces, we assume for the moment, up to Proposition \ref{theo23}, that there are numbers $M_*,\kappa_0>0$ such that
\begin{equation}\label{fkappa}
|f(t)|\le M_* e^{-(1+\kappa_0)t/2},\quad \forall t\ge 0.
\end{equation}
We recall estimate (A.39) of \cite[Chap. II]{FMRTbook} for Leray-Hopf weak solutions (under the Basic Assumption):
\begin{equation*}
|u(t)|^2\le e^{-t}|u_0|^2 + e^{-t}\int_0^t e^\tau |f(\tau)|^2d\tau,\quad \forall t>0.
\end{equation*}
It then follows from \eqref{fkappa} that
\begin{equation}\label{uenerM}
|u(t)|^2\le e^{-t}(|u_0|^2 + M_*^2/\kappa_0),\quad \forall t>0.
\end{equation}
By applying the Cauchy-Schwarz, Cauchy, and Poincar\'e inequalities to the last term on the right-hand side of \eqref{Lenergy}, upon simplifying we obtain
\begin{equation*}
|u(t)|^2+\int_{t_0}^t \|u(\tau)\|^2d\tau\le |u(t_0)|^2+\int_{t_0}^t |f(\tau)|^2\ d\tau,
\end{equation*}
for $t_0=0$ and almost all $t_0\in(0,\infty)$, and all $t\ge t_0$.
Using \eqref{uenerM} for $|u(t_0)|^2$ and, again, \eqref{fkappa} yields
\begin{equation}\label{uVest}
\int_{t_0}^{t_0+1}\|u(\tau)\|^2d\tau
\le e^{-t_0}\Big(|u_0|^2 + \frac{M_*^2}{\kappa_0}\Big) + \frac1{1+\kappa_0}M_*^2 e^{-(1+\kappa_0)t_0}
\le e^{-t_0}\Big(|u_0|^2 + \frac{2 M_*^2}{\kappa_0}\Big).
\end{equation}
For any $t\ge 0$, let $\{t_n\}_{n=1}^\infty$ be a sequence in $(0,\infty)$ converging to $t$ such that \eqref{uVest} holds for $t_0=t_n$. Then letting $n\to\infty$ gives
\begin{equation}\label{tt1}
\int_t^{t+1}\|u(\tau)\|^2d\tau
\le e^{-t}\Big(|u_0|^2 + \frac{2 M_*^2}{\kappa_0}\Big).
\end{equation}
\begin{proposition}\label{theo23}
Assume \eqref{fkappa} and, additionally, that there are $\sigma\ge 0$, $\alpha\ge 1/2$ and $\lambda_0\in(0,1)$ such that
\begin{equation}\label{falphaonly}
|f(t)|_{\alpha,\sigma}=\mathcal O(e^{-\lambda_0t})\quad\text{as }t\to\infty.
\end{equation}
Let $u(t)$ be a Leray-Hopf weak solution of \eqref{fctnse}.
Then for any $\delta\in (1-\lambda_0,1)$, there exists $T_*>0$
such that $u(t)$ is a regular solution of \eqref{fctnse} on $[T_*,\infty)$, and one has for all $t\ge 0$ that
\begin{equation}\label{preus0}
|u(T_*+t)|_{\alpha+1/2,\sigma} \le K^{-\alpha-1/2}e^{-(1-\delta)t},
\end{equation}
\begin{equation}\label{preBlt}
|B(u(T_*+t),u(T_*+t))|_{\alpha,\sigma}\le K^{-\alpha-1}e^{-2(1-\delta)t},
\end{equation}
where $K$ is the constant in Lemma \ref{nonLem}.
\end{proposition}
\begin{proof}
First, we note that \eqref{preBlt} is a direct consequence of \eqref{preus0}. Indeed, applying Lemma \ref{nonLem} together with \eqref{preus0}, we have for $t\ge 0$,
\begin{equation*}
|B(u(T_*+t),u(T_*+t))|_{\alpha,\sigma}\le K^\alpha |u(T_*+t)|_{\alpha+1/2,\sigma}^2\le K^\alpha \Big(\frac{e^{-(1-\delta)t}}{K^{\alpha+1/2}}\Big)^2,
\end{equation*}
which yields \eqref{preBlt}.
We focus now on proving \eqref{preus0}.
Define
\begin{equation*}
\lambda=\frac{1-\delta+\lambda_0}{2}\in(1-\delta,\lambda_0).
\end{equation*}
We consider the cases $\sigma>0$ and $\sigma=0$ separately.
\textbf{(i) Case $\sigma>0$.}
\textit{Step 1.}
By \eqref{tt1} and \eqref{falphaonly}, there exists $t_0>0$ such that
\begin{equation*}
|A^{1/2}u(t_0)|<C_0(1/2,\delta),
\end{equation*}
\begin{equation*}
|f(t_0+t)|_{0,\sigma}\le C_1(1/2,\delta,\lambda)e^{-\lambda t},\quad \forall t\ge 0.
\end{equation*}
Applying Proposition \ref{theo22} to $u(t_0+\cdot)$ and $f(t_0+\cdot)$, with $\alpha=1/2$, results in
\begin{equation*}
|u(t_0+t)|_{1/2,\sigma}\le \sqrt 2 C_0(1/2,\delta) e^{-(1-\delta)t}\le K^{-1/2}e^{-(1-\delta)t},\quad \forall t\ge t_*=6\sigma/\delta.
\end{equation*}
Then by \eqref{als}, we have for all $t\ge t_*$ that
\begin{align}
|A^{\alpha+1/2} u(t_0+t)|
&\le \Big(\frac{2\alpha+1}{e\sigma}\Big)^{2\alpha+1} |e^{\sigma A^{1/2}} u(t_0+t)|
\le \Big(\frac{2\alpha+1}{e\sigma}\Big)^{2\alpha+1} |u(t_0+t)|_{1/2,\sigma} \notag \\
&\le \Big(\frac{2\alpha+1}{e\sigma}\Big)^{2\alpha+1} K^{-1/2} e^{-(1-\delta)t}.\label{Aaut0}
\end{align}
\textit{Step 2.}
From \eqref{Aaut0} and \eqref{falphaonly}, we deduce that there is a sufficiently large $T>t_0+t_*$ so that
\begin{equation*}
|A^{\alpha+1/2} u(T)|\le C_0(\alpha+1/2,\delta),
\end{equation*}
\begin{equation*}
|f(T+t)|_{\alpha,\sigma}\le C_1(\alpha+1/2,\delta,\lambda)e^{-\lambda t} \quad \forall t\ge 0.
\end{equation*}
Applying Proposition \ref{theo22} again, to $u(T+\cdot)$ with $\alpha:=\alpha+1/2$, we obtain that there is $T_*>T+t_*$ such that
\begin{equation*}
|u(T_*+t)|_{\alpha+1/2,\sigma}\le \sqrt 2 C_0(\alpha+1/2,\delta)e^{-(1-\delta)t}\le \frac1{K^{\alpha+1/2}}e^{-(1-\delta)t}\quad \forall t\ge 0.
\end{equation*}
This yields \eqref{preus0} and completes the proof of Case (i).
\textbf{(ii) Case $\sigma=0$.}
We will apply Proposition \ref{theo22} recursively to gain the exponential decay of $u(t)$ in higher Sobolev norms.
For $j\in \ensuremath{\mathbb N}$, suppose
\begin{equation}\label{intAj}
\lim_{t\to\infty}\int_t^{t+1}|A^{j/2}u(\tau)|^2d\tau=0,
\end{equation}
and
\begin{equation}\label{fj}
|A^{(j-1)/2}f(t)|=\mathcal O(e^{-\lambda_0 t})\quad\text{as }t\to\infty.
\end{equation}
Then there is $T>0$ so that
\begin{equation*}
|A^{j/2} u(T)|\le C_0(j/2,\delta),
\end{equation*}
\begin{equation*}
|A^{j/2-1/2}f(T+t)|\le C_1(j/2,\delta,\lambda)e^{-\lambda t} \quad \forall t\ge 0.
\end{equation*}
Applying Proposition \ref{theo22} to $u(T+\cdot)$ with $\alpha:=j/2$ and $\sigma:=0$, we obtain
\begin{equation*}
|A^{j/2}u(T+t)|\le \sqrt 2 C_0(j/2,\delta)e^{-(1-\delta)t}\le \frac1{K^{j/2}}e^{-(1-\delta)t}\quad \forall t\geq0,
\end{equation*}
and
\begin{equation}\label{intAj1}
\int_t^{t+1}|A^{(j+1)/2}u(\tau)|^2d\tau=\mathcal O(e^{-2(1-\delta)t})\text{ as }t\to\infty.
\end{equation}
Note, by \eqref{tt1}, that \eqref{intAj} holds true for $j=1$.
Let $m\in\ensuremath{\mathbb N}\cup\{0\}$ be given such that
\begin{equation} \label{mal}
\alpha \le m/2<\alpha+1/2.
\end{equation}
Since $\alpha\ge1/2$, condition \eqref{mal} gives $m\ge1$. Also, observe that \eqref{mal} implies $(m-1)/2<\alpha$. Hence, by \eqref{falphaonly}, condition \eqref{fj} is satisfied for $j=1,2,\ldots,m$.
Now we repeat the argument from \eqref{intAj} to \eqref{intAj1} for $j=1,2,\ldots,m$.
In particular, when $j=m$ we obtain from \eqref{intAj1} that
\begin{equation*}
\int_t^{t+1}|A^{(m+1)/2}u(\tau)|^2d\tau=\mathcal O(e^{-2(1-\delta)t}) \text{ as }t\to\infty.
\end{equation*}
Since $\alpha\le m/2$, this yields
\begin{equation}\label{intAm}
\int_t^{t+1}|A^{\alpha+1/2}u(\tau)|^2d\tau=\mathcal O(e^{-2(1-\delta)t}) \text{ as }t\to\infty.
\end{equation}
Using \eqref{intAm} in place of \eqref{Aaut0}, we can proceed as in Step 2 of part (i) and obtain \eqref{preus0}. The proof is complete.
\end{proof}
\section{Proofs of main results}\label{pfsec}
We will use the following elementary identities: for $\beta>0$, integer $d\ge 0$, and any $t\in \ensuremath{\mathbb R}$,
\begin{equation}\label{id1}
\int_{-\infty}^t \tau^d e^{\beta \tau}\ d\tau=\frac{e^{\beta t}}{\beta}\sum_{n=0}^{d}\frac{(-1)^{d-n} d!}{n!\beta^{d-n}}t^n,
\end{equation}
\begin{equation}\label{id2}
\int_t^\infty \tau^d e^{-\beta \tau}\ d\tau=\frac{e^{-\beta t}}{\beta}\sum_{n=0}^{d}\frac{d!}{n!\beta^{d-n}}t^n.
\end{equation}
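As a quick sanity check, both identities can be verified symbolically. The following Python/SymPy sketch is illustrative only; the values $\beta=3/2$ and $d=3$ are arbitrary sample choices:

```python
import sympy as sp

t, tau = sp.symbols('t tau', real=True)
beta = sp.Rational(3, 2)   # any beta > 0; a sample value for the check
d = 3                      # a sample polynomial degree

# First identity: integral of tau^d e^{beta tau} from -infinity to t
lhs1 = sp.integrate(tau**d * sp.exp(beta * tau), (tau, -sp.oo, t))
rhs1 = sp.exp(beta * t) / beta * sum(
    (-1)**(d - n) * sp.factorial(d) / (sp.factorial(n) * beta**(d - n)) * t**n
    for n in range(d + 1))
assert sp.simplify(lhs1 - rhs1) == 0

# Second identity: integral of tau^d e^{-beta tau} from t to infinity
lhs2 = sp.integrate(tau**d * sp.exp(-beta * tau), (tau, t, sp.oo))
rhs2 = sp.exp(-beta * t) / beta * sum(
    sp.factorial(d) / (sp.factorial(n) * beta**(d - n)) * t**n
    for n in range(d + 1))
assert sp.simplify(lhs2 - rhs2) == 0
```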
The next lemma is a building block in the construction of the polynomials $q_n(t)$. It summarizes and reformulates the facts
used in \cite{FS87} and \cite[Lemma 3.2]{FS91}; see also \cite{HM1}.
\begin{definition}
Let $X$ be a Banach space with dual $X'$. Let $u(t)$ and $g(t)$ be functions in $L^1_{\rm loc}([0,\infty),X)$.
We say $g(t)$ is the $X$-valued distribution derivative of $u(t)$, and denote $g=u'$, if
\begin{equation}\label{disprimet}
\frac{d}{dt} \inprod{u(t),v}=\inprod{g(t),v}\text{ in the distribution sense on $(0,\infty)$},\quad \forall v\in X',
\end{equation}
where $\inprod{\cdotp,\cdotp}$ in \eqref{disprimet} denotes the usual duality pairing between elements of $X$ and $X'$.
\end{definition}
\begin{lemma}\label{polylem}
Let $(X,\|\cdot\|)$ be a Banach space. Suppose $y(t)$ is a function in $C([0,\infty),X)$ that solves the ODE
\begin{equation*}
y'(t)+ \beta y(t) =p(t)+g(t)
\end{equation*}
in the $X$-valued distribution sense on $(0,\infty)$.
Here, $\beta\in \ensuremath{\mathbb R}$ is a fixed constant, $p(t)$ is an $X$-valued polynomial in $t$, and $g\in L^1_{\rm loc}([0,\infty),X)$ satisfies
\begin{equation*}
\|g(t)\|\le Me^{-\delta t} \quad \forall t\ge 0, \quad \text{for some } M,\delta>0.
\end{equation*}
Define $q(t)$ for $t\in \ensuremath{\mathbb R}$ by
\begin{equation}\label{qdef}
q(t)=
\begin{cases}
e^{-\beta t}\int_{-\infty}^t e^{\beta\tau }p(\tau) d\tau&\text{if }\beta >0,\\
y(0) +\int_0^\infty g(\tau)d\tau + \int_0^t p(\tau)d\tau &\text{if }\beta =0,\\
-e^{-\beta t}\int_t^\infty e^{\beta\tau }p(\tau) d\tau&\text{if }\beta <0.
\end{cases}
\end{equation}
Then $q(t)$ is an $X$-valued polynomial of degree at most $\deg(p)+1$ that satisfies
\begin{equation}\label{pode2}
q'(t)+\beta q(t) = p(t),\quad t\in \ensuremath{\mathbb R},
\end{equation}
and the following estimates hold:
\begin{enumerate}[label={\rm (\roman*)}]
\item
If $\beta>0$ then
\begin{equation}\label{g1b1}
\|y(t)-q(t)\|\le \Big(\|y(0)-q(0)\| + \frac{M}{|\beta-\delta|}\Big)e^{-\min\{\delta,\beta\} t}, \quad t\ge 0,\text{ for } \beta\ne \delta,
\end{equation}
and
\begin{equation}\label{g1b1equal}
\|y(t)-q(t)\|\le (\|y(0)-q(0)\| + M t )e^{-\delta t}, \quad t\ge 0,\text{ for } \beta=\delta.
\end{equation}
\item If $\beta=0$ then
\begin{equation}\label{g1b2}
\|y(t)-q(t)\|\le \frac{M}{\delta}e^{-\delta t},\quad t\ge 0.
\end{equation}
\item If $\beta<0$ and
\begin{equation}\label{yexpdec}
\lim_{t\to\infty} (e^{\beta t}y(t))=0,
\end{equation}
then
\begin{equation}\label{g1b3}
\|y(t)-q(t)\|\le \frac{M}{|\beta|+\delta}e^{-\delta t}, \quad t\ge 0.
\end{equation}
\end{enumerate}
\end{lemma}
\begin{proof}
The fact that $q(t)$ is a polynomial in $t$ follows from the identities \eqref{id1} and \eqref{id2}.
The equation \eqref{pode2} results directly from the definition \eqref{qdef} of $q(t)$. It remains to prove estimates \eqref{g1b1}, \eqref{g1b2} and \eqref{g1b3}.
Let $z(t)=y(t)-q(t)$; then
\begin{equation*}
z'(t)+\beta z(t)=g(t)\text{ in the $X$-valued distribution sense on } (0,\infty).
\end{equation*}
Multiplying this equation by $e^{\beta t}$ yields
\begin{equation}\label{eaz}
(e^{\beta t} z(t))'=e^{\beta t} g(t) \text{ in the $X$-valued distribution sense on } (0,\infty).
\end{equation}
For $t_0\ge 0$, it follows from \eqref{eaz} and \cite[Ch. III, Lemma 1.1]{TemamAMSbook}
that
\begin{equation}\label{prevoc}
e^{\beta t} z(t)=\xi+\int_{t_0}^t e^{\beta \tau}g(\tau)d\tau,
\end{equation}
for some $\xi\in X$ and almost all $t\in(t_0,\infty)$.
Since $e^{\beta t} z(t)$ is continuous on $[0,\infty)$ and $e^{\beta t} g(t)\in L^1_{\rm loc}([0,\infty))$, we have $\xi=e^{\beta t_0} z(t_0)$, and equation \eqref{prevoc} holds for all $t\ge t_0$. Hence, we obtain the standard variation of constants formula
\begin{equation}\label{voc}
z(t)=e^{-\beta(t-t_0)} z(t_0)+e^{-\beta t} \int_{t_0}^t e^{\beta\tau}g(\tau)d\tau \quad \forall t\ge t_0.
\end{equation}
(i) Case $\beta>0$.
Setting $t_0=0$ in \eqref{voc}, we estimate
\begin{align*}
\|z(t)\|
&\le e^{-\beta t}\|z(0)\|+ e^{-\beta t} \int_0^t e^{\beta \tau}\|g(\tau)\| d\tau
\le e^{-\beta t}\|z(0)\| + e^{-\beta t} \int_0^t e^{\beta\tau} Me^{-\delta\tau} d\tau.
\end{align*}
Since the last term is $M(e^{-\delta t}-e^{-\beta t})/(\beta-\delta)$ if $\beta\ne \delta$, and is $M t e^{-\delta t}$ if $\beta=\delta$, we easily obtain \eqref{g1b1} and \eqref{g1b1equal}.
(ii) Case $\beta=0$. Note from \eqref{qdef} that $z(0)=y(0)-q(0)=-\int_0^\infty g(\tau)d\tau$.
Letting $t_0=0$ in \eqref{prevoc} gives
\begin{equation*}
z(t)=z(0) +\int_0^t g(\tau)d\tau = - \int_t^\infty g(\tau)d\tau.
\end{equation*}
Hence
\begin{equation*}
\|z(t)\|\le \int_t^\infty \|g(\tau)\|d\tau \le \int_t^\infty Me^{-\delta \tau}d\tau=\frac{M}\delta e^{-\delta t},
\end{equation*}
which proves \eqref{g1b2}.
(iii) Case $\beta<0$. By \eqref{yexpdec} and the fact that $q(t)$ is a polynomial, we have $e^{\beta t}z(t)\to 0$ as $t\to\infty$.
Then setting $t_0=t$ and letting the upper limit of integration tend to infinity in \eqref{prevoc} yield
\begin{equation*}
z(t)=-e^{-\beta t} \int_t^\infty e^{\beta\tau} g(\tau) d\tau.
\end{equation*}
It follows that
\begin{equation*}
\|z(t)\|\le e^{-\beta t} \int_t^\infty M e^{(\beta-\delta)\tau} d\tau = e^{-\beta t} \frac{Me^{(\beta-\delta)t}}{-\beta+\delta}
=\frac{M}{|\beta|+\delta}e^{-\delta t},
\end{equation*}
which proves \eqref{g1b3}.
\end{proof}
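A scalar toy instance of the $\beta>0$ case of Lemma \ref{polylem} can be worked out explicitly. The Python/SymPy sketch below is illustrative only; the choices $\beta=2$, $p(t)=1+3t$, and $g(t)=e^{-t}$ (so $M=\delta=1$) are arbitrary sample data. It builds $q$ from the first branch of \eqref{qdef}, checks \eqref{pode2}, and confirms that $y-q$ decays:

```python
import sympy as sp

t, tau = sp.symbols('t tau', real=True)
beta = 2                 # beta > 0 branch of the lemma
p = 1 + 3 * t            # a sample polynomial forcing term
g = sp.exp(-t)           # sample perturbation with M = 1, delta = 1

# q(t) from the beta > 0 branch of the definition
q = sp.simplify(sp.exp(-beta * t)
                * sp.integrate(sp.exp(beta * tau) * p.subs(t, tau), (tau, -sp.oo, t)))

# q is a polynomial solving q' + beta*q = p
assert sp.simplify(q.diff(t) + beta * q - p) == 0

# Solve y' + beta*y = p + g with y(0) = 0 and check that y - q decays to 0
yf = sp.Function('y')
y = sp.dsolve(sp.Eq(yf(t).diff(t) + beta * yf(t), p + g), yf(t),
              ics={yf(0): 0}).rhs
assert sp.limit(y - q, t, sp.oo) == 0
```

Here one finds $q(t)=\tfrac32 t-\tfrac14$ and $y(t)-q(t)=e^{-t}-\tfrac34 e^{-2t}$, which decays at the rate $e^{-\min\{\delta,\beta\}t}=e^{-t}$, as predicted by \eqref{g1b1}.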
The remainder of this paper is devoted to the proofs of the main results, and will use the following notation.
\begin{notation} If $n\in\sigma (A)$, we define $R_n$ to be the orthogonal projection in $H$ onto the eigenspace of $A$ corresponding to $n$. In case $n\notin \sigma (A)$, set $R_n=0$.
For $n\in \ensuremath{\mathbb N}$, define $P_n=R_1+R_2+ \cdots +R_n$. Note that each vector space $P_nH$ is finite dimensional.
\end{notation}
\subsection{Proof of Theorem \ref{mainthm}}
We start by obtaining some additional properties of the force $f(t)$ and the solution $u(t)$ which we will make use of later.
By the expansion \eqref{forcexpand} of $f(t)$ in $E^{\infty,\sigma_0}$, for each $N\in\ensuremath{\mathbb N}$ and $\alpha\geq0$, there exists a number $\delta_{N,\alpha}\in(0,1)$ such that
\begin{equation}\label{Fcond}
\Big|f(t)-\sum_{n=1}^N f_n(t)e^{-nt}\Big|_{\alpha,\sigma_0}=\mathcal O(e^{-(N+\delta_{N,\alpha})t})\quad \text{as }t\to\infty.
\end{equation}
Observe that we have the following immediate consequences:
\begin{enumerate}[label={\rm (\alph*)}]
\item The relation \eqref{Fcond} implies, for each $\alpha\ge 0$, that $f(t)$ belongs to $G_{\alpha,\sigma_0}$ for $t$ large.
\item Note that when $N=1$, the function $f(t)$ itself satisfies
\begin{equation*}
|f(t)- f_1(t) e^{-t}|_{\alpha,\sigma_0}=\mathcal O(e^{-(1+\delta_{1,\alpha}) t} ).
\end{equation*}
Since $f_1(t)$ is a polynomial, it follows that
\begin{equation}\label{fsure}
|f(t)|_{\alpha,\sigma_0}=\mathcal O(e^{-\lambda t}),\quad \forall \lambda\in (0,1),\quad\forall \alpha\ge 0.
\end{equation}
Consequently, for any $\varepsilon>0$, $\alpha\ge 0$, and $\lambda\in(0,1)$, applying \eqref{fsure} with $(\lambda+1)/2$ in place of $\lambda$, it follows that there is $T>0$ such that
\begin{equation}\label{fdecay}
|f(T+t)|_{\alpha,\sigma_0}\le \varepsilon e^{-\lambda t}\quad \forall t\ge 0.
\end{equation}
\item Combining \eqref{fsure} for $\alpha=0$ with the Basic Assumption, we may assume, without loss of generality, that for each $\lambda\in (0,1)$,
\begin{equation}\label{fh}
|f(t)|\le M_\lambda e^{-\lambda t},\quad \forall t\ge 0, \text{ for some }M_\lambda>0.
\end{equation}
\end{enumerate}
For the solution $u(t)$, we summarize the key estimates of section \ref{Gdecay} in the following.
\textbf{Claim.} For any $\alpha\geq0$ and $\delta\in (0,1)$, there exists $T_*>0$
such that $u(t)$ is a regular solution on $[T_*,\infty)$, and one has for $t\ge 0$ that
\begin{equation}\label{us0}
|u(T_*+t)|_{\alpha+1/2,\sigma_0} \le e^{-(1-\delta) t},
\end{equation}
\begin{equation}\label{Blt}
|B(u(T_*+t),u(T_*+t))|_{\alpha,\sigma_0}\le e^{-2(1-\delta) t}.
\end{equation}
\textit{Proof of Claim.}
We apply Proposition \ref{theo23}. By \eqref{fdecay}, we have that \eqref{falphaonly} holds for $\sigma=\sigma_0$, any $\alpha\ge 1/2$ and any $\lambda_0\in(0,1)$.
Also, \eqref{fkappa} is satisfied because of \eqref{fh}.
Therefore \eqref{preus0} and \eqref{preBlt} hold for $\sigma=\sigma_0$, any $\alpha\ge 1/2$ and any $\delta\in(0,1)$. Since $K>1$, these directly yield \eqref{us0} and \eqref{Blt}.
Returning to the main proof, it suffices to prove that there exist polynomials $q_n$, $n\ge 1$, such that for each $N\ge 1$ the following properties ($\mathcal H1$), ($\mathcal H2$), and ($\mathcal H3$) hold true:
\begin{enumerate}
\item[($\mathcal H1$)] $q_N\in \mathcal P^{\infty,\sigma_0}$.
\item[($\mathcal H2$)] For $\alpha\ge 1/2$,
\begin{equation}\label{remdelta}
\Big |u(t)-\sum_{n=1}^N q_n(t) e^{-nt}\Big |_{\alpha,\sigma_0} =\mathcal O(e^{-(N+\varepsilon)t} )\quad\text{as } t\to\infty,\quad\forall\varepsilon\in(0,\delta_{N,\alpha}^*),
\end{equation}
where the numbers $\delta_{n,\alpha}^*$, for $\alpha\ge 1/2$, are defined recursively by
\begin{equation*}
\delta_{n,\alpha}^*=\begin{cases}
\delta_{1,\alpha},&\text{for }n=1,\\
\min\{\delta_{n,\alpha},\delta^*_{n-1,\alpha+1/2}\},&\text{for }n\ge 2.
\end{cases}
\end{equation*}
\item[($\mathcal H3$)] The ODE \eqref{unODE} holds in $E^{\infty,\sigma_0}$ for $n=N$.
\end{enumerate}
We prove these statements by constructing the polynomials $q_N(t)$ recursively.
\noindent {\bf Base case: $N=1$.}
Let $k\geq1$. By taking $v\in R_kH$ in the weak formulation \eqref{varform}, we have
\begin{equation}\label{rku}
\frac{d}{dt} R_ku + k R_k u = R_k (f(t) - B(u(t),u(t)))
\end{equation}
in the $R_kH$-valued distribution sense on $(0,\infty)$.
(Since $R_kH$ is finite dimensional, the ``$R_kH$-valued distribution sense'' is simply the same as the ``distribution sense''.)
Let $w_0(t)=e^t u(t)$ and $w_{0,k}(t)=R_k w_0(t)$. By virtue of the $H_{\rm w}$-continuity of $u(t)$ (see \eqref{lh:wksol}), we have $w_{0,k}\in C([0,\infty),R_kH)$. It follows from \eqref{rku} that
\begin{equation}\label{wj}
\frac{d}{dt} w_{0,k} + (k-1) w_{0,k} = R_k f_1 + R_k H_0(t),
\end{equation}
where
\begin{equation}\label{H0def}
H_0(t)=e^t (f(t)-F_1(t) - B(u(t),u(t))),
\end{equation}
and $F_1$ is defined in \eqref{uF}. Note that $R_k f_1(t)$ is an $R_kH$-valued polynomial in $t$.
Let $\alpha\ge 1/2$ be fixed.
Using \eqref{Fcond} for $N=1$ and applying \eqref{Blt} with $\delta=(1-\delta_{1,\alpha})/4$, there are $T_0>0$ and $D_0\ge 1$ such that for $t\ge 0$,
\begin{equation}\label{ch1}
e^t |f(T_0+t)-F_1(T_0+t)|_{\alpha,\sigma_0}\le D_0e^{-\delta_{1,\alpha} t},
\end{equation}
\begin{equation}\label{ch2}
e^t |B(u(T_0+t),u(T_0+t))|_{\alpha,\sigma_0}\le e^{(2\delta-1)t}=e^{-(1+\delta_{1,\alpha})t/2}\le e^{-\delta_{1,\alpha} t}.
\end{equation}
Then, setting $D_1=e^{T_0}(D_0+1)$, we have
\begin{equation}\label{ch3}
|H_0(T_0+t)|_{\alpha,\sigma_0} \le D_1e^{-\delta_{1,\alpha} t},\quad\forall t\ge0.
\end{equation}
We will now identify the components of the desired polynomial $q_1(t)$ belonging to each eigenspace $R_kH$.
\noindent\textit{\underline{Case: $k=1$}.}
Applying Lemma \ref{polylem}(ii) to equation \eqref{wj} with $X=R_1H$, $\|\cdot\|=|\cdot|_{\alpha,\sigma_0}$, $\beta=0$,
$$y(t)=w_{0,1}(T_0+t),\quad p(t)=R_1f_1(T_0+t), \quad g(t)=R_1H_0(T_0+t),$$
we infer that there is an $R_1H$-valued polynomial $q_{1,1}(t)$ such that for any $t\ge 0$
\begin{equation*}
|w_{0,1}(T_0+t)-q_{1,1}(t)|_{\alpha,\sigma_0} \le \frac{D_1}{\delta_{1,\alpha}}e^{-\delta_{1,\alpha} t},
\end{equation*}
and thus
\begin{equation}\label{q11}
|R_1 w_0(t)-q_{1,1}(t-T_0)|_{\alpha,\sigma_0} \le \frac{D_1e^{\delta_{1,\alpha} T_0}}{\delta_{1,\alpha}} e^{-\delta_{1,\alpha} t},\quad\forall t\geq T_0.
\end{equation}
In fact,
\begin{equation}\label{q11def}
q_{1,1}(t)= \xi_1+\int_0^t R_1f_1(\tau+T_0)d\tau\quad \text{for some } \xi_1\in R_1H.
\end{equation}
\noindent\textit{\underline{Case: $k\geq2$}.} We apply Lemma \ref{polylem}(i) to equation \eqref{wj} with
$$y(t)=w_{0,k}(T_0+t),\quad p(t)=R_kf_1(T_0+t),\quad g(t)=R_kH_0(T_0+t),$$
where $\beta=k-1>\delta_{1,\alpha}$, and the norm $\|\cdot\|$ is $|\cdot|_{\alpha,\sigma_0}$ on the space $X=R_kH$.
In particular, there is an $R_kH$-valued polynomial $q_{1,k}(t)$ such that for any $t\ge 0$
\begin{equation*}
|w_{0,k}(T_0+t)- q_{1,k}(t)|_{\alpha,\sigma_0} \le e^{-\delta_{1,\alpha} t}\Big(|w_{0,k}(T_0)|_{\alpha,\sigma_0} +|q_{1,k}(0)|_{\alpha,\sigma_0} + \frac {D_1}{k-1-\delta_{1,\alpha}}\Big),
\end{equation*}
which implies for all $t\ge T_0$ that
\begin{equation}\label{q1j}
|w_{0,k}(t)- q_{1,k}(t-T_0)|_{\alpha,\sigma_0} \le e^{-\delta_{1,\alpha} (t-T_0)}\Big(|w_{0,k}(T_0)|_{\alpha,\sigma_0} +|q_{1,k}(0)|_{\alpha,\sigma_0} + \frac {D_1}{k-1-\delta_{1,\alpha}}\Big).
\end{equation}
In fact, by the $\beta>0$ branch of \eqref{qdef}, for $k\ge 2$,
\begin{equation}\label{q1k}
q_{1,k}(t)=e^{(1-k)t}\int_{-\infty}^t e^{(k-1)\tau}R_kf_1(T_0+\tau)d\tau.
\end{equation}
\noindent{\bf Polynomial $q_1(t)$.}
Define
\begin{equation}\label{q1def}
q_1(t)=\sum_{k=1}^\infty q_{1,k}(t-T_0),\quad t\in\ensuremath{\mathbb R}.
\end{equation}
We next prove that $q_1\in\mathcal{P}^{\infty,\sigma_0}$. Write
\begin{equation*}
f_1(t+T_0)=\sum_{d=0}^m a_d t^d,\quad \text{for some }a_d\in E^{\infty,\sigma_0}.
\end{equation*}
Clearly, by \eqref{q11def}, $R_1q_1(t+T_0)=q_{1,1}(t)$ is a $\mathcal V$-valued polynomial, and hence,
\begin{equation}\label{R1q1poly}
\text{the mapping }t\mapsto R_1q_1(t+T_0) \text{ belongs to } \mathcal{P}^{\infty,\sigma_0}.
\end{equation}
We consider the remaining part $(I-R_1)q_1(t+T_0)$.
Using the integral formula \eqref{id1},
\begin{align*}
(I-R_1)q_1(t+T_0)
&=\sum_{k=2}^\infty q_{1,k}(t)
=\sum_{k=2}^\infty e^{(1-k)t} \int_{-\infty}^t e^{(k-1)\tau}\Big(\sum_{d=0}^m R_k a_d \tau^d\Big) d\tau\\
&=\sum_{k=2}^\infty \frac{1}{k-1}\sum_{d=0}^m \sum_{n=0}^{d}\frac{(-1)^{d-n} d!}{n!(k-1)^{d-n}}t^n R_k a_d\\
&=\sum_{k=2}^\infty \frac{1}{k-1}\sum_{n=0}^m\Big(\sum_{d=n}^{m}\frac{(-1)^{d-n} d!}{n!(k-1)^{d-n}}R_k a_d\Big)t^n.
\end{align*}
Thus,
\begin{equation}\label{IRq1}
(I-R_1)q_1(t+T_0)=\sum_{n=0}^m b_nt^n,
\end{equation}
where
the coefficient $b_n$, for $0\le n\le m$, is
\begin{align*}
b_n=\sum_{k=2}^\infty \frac{1}{k-1}\Big(\sum_{d=n}^{m}\frac{(-1)^{d-n} d!}{n!(k-1)^{d-n}}R_k a_d\Big).
\end{align*}
For any $\mu\ge 0$, we have
\begin{align*}
|b_n|_{\mu+1,\sigma_0}^2=|Ab_n|_{\mu,\sigma_0}^2
&=\sum_{k=2}^\infty \Big|\frac{1}{k-1}\sum_{d=n}^m \frac{(-1)^{d-n} d!}{n!(k-1)^{d-n}} k \, R_k a_d\Big |_{\mu,\sigma_0}^2\\
&\le \sum_{k=2}^\infty \frac{k^2}{(k-1)^2} \Big(\sum_{d=n}^m \frac{ m!}{n!} |R_k a_d|_{\mu,\sigma_0}\Big)^2\\
&\le 4\sum_{k=2}^\infty \frac{ (m!)^2 (m-n+1)^2 }{(n!)^2} \sum_{d=n}^m |R_ka_d|_{\mu,\sigma_0}^2.
\end{align*}
(Above, we used $k/(k-1)\le 2$ and the Cauchy-Schwarz inequality for the last estimate.)
Thus,
\begin{equation}\label{Abn}
|b_n|_{\mu+1,\sigma_0}^2
\le \frac{ 4(m!)^2 (m-n+1)^2 }{(n!)^2} \sum_{d=n}^m |(I-R_1)a_d|_{\mu,\sigma_0}^2<\infty.
\end{equation}
Therefore, $b_n\in E^{\infty,\sigma_0}$ for all $0\le n\le m$,
and, by \eqref{IRq1}, the mapping $t\mapsto (I-R_1)q_1(t+T_0)$ belongs to $\mathcal{P}^{\infty,\sigma_0}$.
This, together with \eqref{R1q1poly}, implies that $t\mapsto q_1(t+T_0)$ belongs to $\mathcal{P}^{\infty,\sigma_0}$ and, ultimately, that $t\mapsto q_1(t)$ belongs to $\mathcal{P}^{\infty,\sigma_0}$, as desired.
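As a finite-dimensional illustration of the mode-wise computation above, the following Python/SymPy sketch uses hypothetical scalar amplitudes standing in for $R_ka_d$ (with sample eigenvalues $k=2,\dots,5$ and degree $m=2$, neither taken from the paper). It assembles each $q_{1,k}$ from the coefficient formula derived via \eqref{id1} and verifies the mode-wise ODE $q_{1,k}'+(k-1)q_{1,k}=R_kf_1(T_0+\cdot)$:

```python
import sympy as sp

t = sp.symbols('t', real=True)
m = 2   # sample degree of f_1

# hypothetical scalar amplitudes standing in for R_k a_d, k = 2..5, d = 0..m
a = {(k, d): sp.Rational(1, k + d + 1) for k in range(2, 6) for d in range(m + 1)}

for k in range(2, 6):
    p_k = sum(a[(k, d)] * t**d for d in range(m + 1))  # plays the role of R_k f_1(T_0 + t)
    # coefficient formula from the integral identity, as in the computation of b_n
    q1k = sum(sp.Rational(1, k - 1)
              * sum((-1)**(d - n) * sp.factorial(d)
                    / (sp.factorial(n) * (k - 1)**(d - n)) * t**n
                    for n in range(d + 1))
              * a[(k, d)]
              for d in range(m + 1))
    # each mode solves q' + (k-1) q = p_k
    assert sp.simplify(q1k.diff(t) + (k - 1) * q1k - p_k) == 0
```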
\noindent{\bf Remainder estimate.} We now estimate $|u(t)-q_1(t)e^{-t}|_{\alpha,\sigma_0}$.
Firstly, inequality \eqref{q11} yields
\begin{equation}\label{R1wq1}
|R_1(w_0(t)-q_1(t))|_{\alpha,\sigma_0}=\mathcal O( e^{-\delta_{1,\alpha} t}).
\end{equation}
Secondly, we have from \eqref{q1j} that
\begin{align*}
&\sum_{k= 2}^\infty |R_k(w_0(t)-q_1(t))|_{\alpha,\sigma_0}^2 \\
&\le 3 e^{2\delta_{1,\alpha} T_0} e^{-2\delta_{1,\alpha} t}\sum_{k=2}^\infty\Big (|w_{0,k}(T_0)|_{\alpha,\sigma_0}^2 +|R_kq_1(T_0)|_{\alpha,\sigma_0}^2+\frac {D_1^2}{(k-1-\delta_{1,\alpha})^2}\Big)\\
&\le D_2^2 e^{-2\delta_{1,\alpha} t}
\end{align*}
for all $t\ge T_0$, where
\begin{equation*}
D_2^2 =3 e^{2\delta_{1,\alpha} T_0}\Big\{|(I-R_1)w_0(T_0)|_{\alpha,\sigma_0}^2 + |(I-R_1)q_1(T_0)|_{\alpha,\sigma_0}^2+D_1^2\sum_{k=2}^\infty \frac {1 }{(k-1-\delta_{1,\alpha})^2}\Big\}<\infty.
\end{equation*}
This implies
\begin{equation}\label{wq1}
|(I-R_1)(w_0(t)-q_1(t))|_{\alpha,\sigma_0}\le D_2 e^{-\delta_{1,\alpha} t},\quad \forall t\ge T_0.
\end{equation}
Combining \eqref{R1wq1} with \eqref{wq1} gives
\begin{equation*}
|w_0(t)-q_1 (t)|_{\alpha,\sigma_0}= \mathcal O (e^{-\delta_{1,\alpha} t}),
\end{equation*}
and consequently,
\begin{equation}\label{rem1}
|u(t)-q_1 (t) e^{-t}|_{\alpha,\sigma_0}= \mathcal O (e^{-(1+\delta_{1,\alpha})t}).
\end{equation}
Note that the polynomial $q_1(t)$, given by \eqref{q1def}, is independent of $\alpha$. Hence the same $q_1$ satisfies \eqref{rem1} for all $\alpha\ge 1/2$, which proves \eqref{remdelta}, and hence ($\mathcal H2$), for $N=1$.
\noindent{\bf Establishing the ODE \eqref{u1ODE}.}
By \eqref{pode2} in Lemma \ref{polylem}, the polynomial $q_1(t)$ satisfies
\begin{equation}\label{pode3}
\frac{d}{dt} R_k q_1(T_0+t)+(k-1)R_kq_1(T_0+t)=R_kf_1(T_0+t),\quad \forall k\ge 1,\quad \forall t\in\ensuremath{\mathbb R}.
\end{equation}
For each $\mu\ge 0$, both $Aq_1(T_0+t)$ and $f_1(T_0+t)$ belong to $G_{\mu,\sigma_0}$. Hence,
we can sum over $k$ in \eqref{pode3} and obtain
\begin{equation*}
\frac{d}{dt} q_1(t)+(A-1)q_1(t)=f_1(t) \quad\text{ in } G_{\mu,\sigma_0},\quad \forall t\in\ensuremath{\mathbb R},
\end{equation*}
which implies that the differential equation \eqref{u1ODE} holds in $E^{\infty,\sigma_0}$.
Therefore, $q_1$ satisfies ($\mathcal H1$), ($\mathcal H2$), and ($\mathcal H3$) for $N=1$.
\noindent{\bf Recursive step.} Let $N\ge1$. Suppose that there already exist $q_1,q_2,\ldots,q_N\in \mathcal P^{\infty,\sigma_0}$ that satisfy ($\mathcal H2$),
and the ODE \eqref{unODE} holds in $E^{\infty,\sigma_0}$ for each $n=1,2,\ldots,N$.
We will construct a polynomial $q_{N+1}(t)$ that satisfies ($\mathcal H1$), ($\mathcal H2$), and ($\mathcal H3$) with $N+1$ replacing $N$.
Let $\alpha\ge 1/2$ be given and let $\varepsilon_*$ be arbitrary in $(0,\delta_{N+1,\alpha}^*)$.
Define
$$\bar u_N=\sum_{n=1}^N u_n\quad\text{and}\quad v_N=u-\bar u_N.$$
Assumption ($\mathcal H2$) particularly yields
\begin{equation}\label{vNrate}
| v_N(t)|_{\alpha+1/2,\sigma_0}=\mathcal O(e^{-(N+\varepsilon)t}),\quad \forall \varepsilon\in(0,\delta_{N,\alpha+1/2}^*).
\end{equation}
Subtracting \eqref{unODE} for $n=1,2,\ldots,N$ from \eqref{fctnse}, we have
\begin{equation}\label{uminusuN}
\frac d{dt}v_N+Av_N +B(u,u)- \sum_{m+j\le N} B(u_m,u_j)=f-\sum_{n=1}^N F_n,
\bvec end{equation}
where the functions $F_n$ are defined in \eqref{uF}.
We reformulate equation \eqref{uminusuN} as
\begin{equation}\label{vNeq}
\frac d{dt}v_N+Av_N +\sum_{m+j=N+1} B(u_m,u_j)=F_{N+1}+h_N,
\bvec end{equation}
where
\begin{align*}
h_N&=-B(u,u)+\sum_{\stackrel{1\le m,j\le N}{m+j\le N+1}} B(u_m,u_j)+f-\sum_{n=1}^{N+1} F_n\\
&=-\Big\{B(u,u)-B(\bar u_N,\bar u_N)\Big\}- \Big\{B(\bar u_N,\bar u_N)-\sum_{\stackrel{1\le m,j\le N}{m+j\le N+1}} B(u_m,u_j)\Big\}+\Big\{f-\sum_{n=1}^{N+1} F_n\Big\}.
\end{align*}
With this way of grouping, we rewrite $h_N$ as
\begin{equation}
\label{hdef}
h_N=-B(v_N,u)-B(\bar u_N,v_N)-\sum_{\stackrel{1\le m,j\le N}{m+j\ge N+2}} B(u_m,u_j)+\tilde F_{N+1},
\end{equation}
where
\begin{equation*}
\tilde F_{N+1}(t)=f(t)-\sum_{n=1}^{N+1} F_n(t).
\end{equation*}
Note that, in the case $N=1$, neither the term
$$ \sum_{m+j\le N} B(u_m,u_j)\ \text{in \eqref{uminusuN} nor the term}\ \sum_{\stackrel{1\le m,j\le N}{m+j\ge N+2}} B(u_m,u_j)\ \text{in \eqref{hdef} appears.}$$
\noindent{\bf Estimate of $h_N(t)$.}
By \eqref{Fcond} and Remark \ref{betterem}, we have
\begin{equation} \label{Ftil}
|\tilde F_{N+1}(t)|_{\alpha,\sigma_0}=\mathcal O(e^{-(N+1+\delta_{N+1,\alpha})t})
=\mathcal O(e^{-(N+1+\varepsilon_*)t}).
\end{equation}
It is also obvious that
\begin{equation} \label{Bmj}
\sum_{\stackrel{1\le m,j\le N}{m+j\ge N+2}} |B(u_m,u_j)|_{\alpha,\sigma_0}
= \sum_{\stackrel{1\le m,j\le N}{m+j\ge N+2}} e^{-(m+j)t}|B(q_m,q_j)|_{\alpha,\sigma_0}
=\mathcal O(e^{-(N+1+\varepsilon_*)t}).
\end{equation}
Take $\varepsilon\in(\varepsilon_*,\delta_{N+1,\alpha}^*)\subset (0,\delta_{N,\alpha+1/2}^*)$ in \eqref{vNrate}, and set $\delta=\varepsilon-\varepsilon_*\in(0,1)$.
Then we have from the definition of $u_n(t)$ and \eqref{us0} that
\begin{equation} \label{ubu}
|\bar u_N(t)|_{\alpha+1/2,\sigma_0},|u(t)|_{\alpha+1/2,\sigma_0}=\mathcal O(e^{-(1-\delta)t}).
\end{equation}
By Lemma \ref{nonLem} and estimates \eqref{vNrate}, \eqref{ubu}, it follows that
\begin{equation} \label{BvNu}
|B(v_N,u)|_{\alpha,\sigma_0},|B(\bar u_N,v_N)|_{\alpha,\sigma_0}=\mathcal O(e^{-(N+\varepsilon+1-\delta)t})=\mathcal O(e^{-(N+1+\varepsilon_*)t}).
\end{equation}
Therefore, by \eqref{hdef}, \eqref{Ftil}, \eqref{Bmj} and \eqref{BvNu},
\begin{equation}\label{hNo}
|h_N(t)|_{\alpha,\sigma_0}=\mathcal O(e^{-(N+1+\varepsilon_*)t}).
\end{equation}
{\flushleft{\bf Construction of $q_{N+1}(t)$.}}
Using the weak formulation of \eqref{vNeq}, which is similar to \eqref{varform}, and then taking the test function $v$ in $R_kH$, we obtain
\begin{equation}\label{vNeq1}
\frac d{dt}R_kv_N+kR_k v_N +\sum_{m+j=N+1} R_k B(u_m,u_j)=R_k F_{N+1}+ R_k h_N \text{ on } (0,\infty).
\end{equation}
Let $w_{N}(t)=e^{(N+1)t} v_N(t)$ and $w_{N,k}(t)=R_k w_N(t)$.
Using \eqref{vNeq1}, we write the ODE for $w_{N,k}$ as
\begin{equation}\label{wNeq}
\frac d{dt}w_{N,k}+(k-(N+1))w_{N,k}=\Big(R_k{f}_{N+1}-\sum_{m+j=N+1} R_k B(q_m,q_j)\Big)+ H_{N,k},
\end{equation}
with
$H_{N,k}(t)= e^{(N+1)t}R_k h_N(t)$.
Note from \eqref{hNo} that
\begin{equation*}
|H_{N,k}(t)|_{\alpha,\sigma_0}=\mathcal O(e^{-\varepsilon_* t}).
\end{equation*}
Then there exist $T_N>0$ and $D_3>0$ such that
\begin{equation}\label{HNkD3}
|H_{N,k}(T_N+t)|_{\alpha,\sigma_0}\le D_3 e^{-\varepsilon_* t},\quad \forall t\geq0.
\end{equation}
By the first property in \eqref{lh:wksol}, each $w_{N,k}(t)$ is continuous from $[0,\infty)$ to $R_kH$.
We apply Lemma \ref{polylem} to equation \eqref{wNeq} on $(T_N,\infty)$ with space $X=R_kH$, norm $\|\cdot\|=|\cdot|_{\alpha,\sigma_0}$,
solution $y(t)=w_{N,k}(T_N+t)$, constant $\beta=k-(N+1)$, polynomial
\begin{align*}
p(t)&=R_k{f}_{N+1}(T_N+t)-\sum_{m+j=N+1} R_k B(q_m(T_N+t),q_j(T_N+t)),
\end{align*}
and function $g(t)=H_{N,k}(T_N+t)$, which satisfies the estimate \eqref{HNkD3}.
\noindent\textit{\underline{Case $k=N+1$}.}
Since $\beta=0$, Lemma \ref{polylem}(ii) implies that there is a polynomial $q_{N+1,N+1}(t)$ valued in $R_{N+1}H$ such that
\begin{equation*}
|w_{N,N+1}(T_N+t)- q_{N+1,N+1}(t)|_{\alpha,\sigma_0}= \mathcal O (e^{-\varepsilon_* t}).
\end{equation*}
Thus,
\begin{equation} \label{form0}
|R_{N+1}w_{N}(t)- q_{N+1,N+1}(t-T_N)|_{\alpha,\sigma_0}= \mathcal O (e^{-\varepsilon_* t}).
\end{equation}
\noindent\textit{\underline{Case $k\le N$}.} Note that $\beta<0$, and by \eqref{vNrate} we have
$$\lim_{t\to\infty}( e^{\beta t}y(t) ) = \lim_{t\to\infty}e^{\beta (t-T_N)} w_{N,k}(t)= e^{-\beta T_N}\lim_{t\to\infty}e^{kt}R_kv_N(t)=0.$$
Then, by applying Lemma \ref{polylem}(iii), there is a polynomial $q_{N+1,k}(t)$ valued in $R_kH$ such that
\begin{equation*}
|w_{N,k}(T_N+t)- q_{N+1,k}(t)|_{\alpha,\sigma_0}=\mathcal O(e^{-\varepsilon_* t}),
\end{equation*}
hence,
\begin{equation}\label{form1}
|R_k w_{N}(t)- q_{N+1,k}(t-T_N)|_{\alpha,\sigma_0}=\mathcal O(e^{-\varepsilon_* t}).
\end{equation}
\noindent\textit{\underline{Case $k\ge N+2$}.}
Similarly, $\beta=k-N-1>\varepsilon_*$ and, by using Lemma \ref{polylem}(i), there is a polynomial $q_{N+1,k}(t)$ valued in $R_kH$ such that for $t\ge 0$,
\begin{equation*}
|w_{N,k}(T_N+t)
- q_{N+1,k}(t)|_{\alpha,\sigma_0}
\le \Big(|R_k v_N(T_N)|_{\alpha,\sigma_0} + |q_{N+1,k}(0)|_{\alpha,\sigma_0} + \frac{D_3}{k-(N+1)-\varepsilon_*}\Big) e^{-\varepsilon_* t}.
\end{equation*}
Thus
\begin{multline}\label{form2}
|R_kw_{N}(t)
- q_{N+1,k}(t-T_N)|_{\alpha,\sigma_0} \\
\le e^{\varepsilon_* T_N} \Big(|R_k v_N(T_N)|_{\alpha,\sigma_0} + |q_{N+1,k}(0)|_{\alpha,\sigma_0} + \frac{D_3}{k-(N+1)-\varepsilon_*}\Big) e^{-\varepsilon_* t},
\quad\forall t\ge T_N.
\end{multline}
We define $$q_{N+1}(t)=\sum_{k=1}^\infty q_{N+1,k}(t-T_N), \quad t\in \ensuremath{\mathbb R}.$$
It follows from \eqref{forcexpand} that $f_{N+1}\in \mathcal{P}^{\infty,\sigma_0}$, and from the recursive assumptions that
$q_m, q_j\in \mathcal{P}^{\infty,\sigma_0}$ for $1\le m,j\le N$; then by \eqref{BEE}, we have $f_{N+1}-\sum_{m+j=N+1} B(q_m,q_j)\in \mathcal{P}^{\infty,\sigma_0}$. Arguing as in the proof that $q_1\in\mathcal{P}^{\infty,\sigma_0}$, we obtain $q_{N+1}\in\mathcal{P}^{\infty,\sigma_0}$.
\noindent{\bf Estimate of $v_{N+1}(t)$.}
From \eqref{form0} and \eqref{form1}, we immediately have
\begin{equation}\label{PNw}
|P_{N+1}(w_N(t)-q_{N+1}(t))|_{\alpha,\sigma_0} = \mathcal O(e^{-\varepsilon_* t}).
\end{equation}
Squaring \eqref{form2} and summing over $k\ge N+2$, we obtain for $t\ge T_N$ that
\begin{align*}
&|(I-P_{N+1})(w_N(t)-q_{N+1}(t))|_{\alpha,\sigma_0}^2 = \sum_{k=N+2}^\infty |R_k (w_N(t)-q_{N+1}(t))|_{\alpha,\sigma_0}^2 \\
&\le 3 e^{2\varepsilon_* T_N} \Big(\sum_{k=N+2}^\infty|R_k v_N(T_N)|_{\alpha,\sigma_0}^2 + \sum_{k=N+2}^\infty|R_kq_{N+1}(T_N)|_{\alpha,\sigma_0}^2 \\
&\quad + \sum_{k=N+2}^\infty\frac{D_3^2}{(k-(N+1)-\varepsilon_*)^2}\Big)e^{-2\varepsilon_* t}.
\end{align*}
Since the last three sums are convergent, we deduce
\begin{equation}\label{QNw}
|(I-P_{N+1})(w_N(t)-q_{N+1}(t))|_{\alpha,\sigma_0} = \mathcal O(e^{-\varepsilon_* t}).
\end{equation}
From \eqref{PNw} and \eqref{QNw}, we have
\begin{equation}\label{wNqN1}
|w_N(t)-q_{N+1}(t)|_{\alpha,\sigma_0}= \mathcal O(e^{-\varepsilon_*t}).
\end{equation}
Therefore,
\begin{equation} \label{vN1ep}
|v_{N+1}(t)|_{\alpha,\sigma_0}
=|v_N(t)-e^{-(N+1)t} q_{N+1}(t)|_{\alpha,\sigma_0}= \mathcal O(e^{-(N+1+\varepsilon_*)t}).
\end{equation}
Thanks to \eqref{wNqN1}, the polynomial $q_{N+1}(t)$ is independent of $\alpha$ and $\varepsilon_*$. Therefore, \eqref{vN1ep} holds for any $\alpha\ge1/2$ and $\varepsilon_*\in(0,\delta_{N+1}^*)$, which proves ($\mathcal H2$) with $N+1$ replacing $N$.
\noindent{\bf Establishing the ODE \eqref{unODE} for $n=N+1$.}
By our construction of the polynomials $q_{N+1,k}(t)$ above, and
by \eqref{pode2} in Lemma \ref{polylem}, the polynomial $q_{N+1}(t)$ satisfies
\begin{align*}
&\frac{d}{dt} R_k q_{N+1}(T_N+t)+(k-(N+1))R_kq_{N+1}(T_N+t) \\
&\quad =R_kf_{N+1}(T_N+t)- \sum_{m+j=N+1} R_k B(q_m(T_N+t),q_j(T_N+t)),
\quad \forall k\ge 1,\quad \forall t\in\ensuremath{\mathbb R}.
\end{align*}
This yields, for each $k\ge 1$,
\begin{equation}\label{Rkueq}
\frac{d}{dt} R_k u_{N+1}(t)+AR_ku_{N+1}(t) + \sum_{m+j=N+1} R_k B(u_m(t),u_j(t)) =R_kF_{N+1}(t),\quad \forall t\in\ensuremath{\mathbb R}.
\end{equation}
For any $\mu\ge 0$, since $Au_{N+1}(t)$, $\sum_{m+j=N+1} B(u_m(t),u_j(t))$, and $F_{N+1}(t)$ are each a $G_{\mu,\sigma_0}$-valued polynomial, summing equation \eqref{Rkueq} over $k$ gives
\begin{equation*}
\frac{d}{dt} u_{N+1}+Au_{N+1}+ \sum_{m+j=N+1} B(u_m,u_j)=F_{N+1} \quad \text{in } G_{\mu,\sigma_0}.
\end{equation*}
Thus, the ODE \eqref{unODE} holds in $E^{\infty,\sigma_0}$ for $n=N+1$.
We have established the existence of the desired polynomial, $q_{N+1}(t)$, which completes the recursive step, and hence, the proof of Theorem \ref{mainthm}.
\qed
\begin{remark}
By using the extra information \eqref{fep} in Remark \ref{betterem}, we can directly prove the remainder estimate \eqref{remdelta} for any $\varepsilon\in(0,1)$, which, in fact, is expected by \eqref{uqa}.
Nonetheless, the above proof with specific $\varepsilon\in(0,\delta^*_{N,\alpha})$ is more flexible and will be easily adapted in section \ref{sec43} below to serve the proof of Theorem \ref{finitetheo}.
\end{remark}
\subsection{Proof of Corollary \ref{Vcor}}\label{sec42}
We follow the proof of Theorem \ref{mainthm}. Since $f_1\in \mathcal V$, there is $N_1\ge 1$ such that $f_1\in P_{N_1}H$. As a consequence, we see from \eqref{q1k} that $q_{1,k}=0$ for $k>N_1$.
Hence, $q_1(t)=\sum_{k=1}^{N_1} q_{1,k}(t-T_0)$ is a polynomial in $P_{N_1}H$.
For the recursive step, the functions $f_{N+1}$, $q_m$ and $q_j$ ($1\le m,j\le N$), in this case, are $\mathcal{V}$-valued polynomials.
Hence, by \eqref{BVV}, so is $f_{N+1}-\sum_{m+j=N+1} B(q_m,q_j)$. It follows that there are at most finitely many $k$ for which $q_{N+1,k}$ is nonzero. Since each $q_{N+1,k}$ is a $\mathcal{V}$-valued polynomial, $q_{N+1}$, as a finite sum of such, is a $\mathcal{V}$-valued polynomial.\qed
\subsection{Proof of Theorem \ref{finitetheo}}\label{sec43}
We follow the proof of Theorem \ref{mainthm} closely and make necessary modifications.
We prove part (i), while the proof of part (ii) is entirely similar to that in Theorem \ref{mainthm} and thus omitted.
The same notation $u_n(t)$, $F_n(t)$, $v_n(t)$, $\bar u_n(t)$ as in Theorem \ref{mainthm} is used here.
By \eqref{ffinite} with $N=1$,
\begin{equation}\label{ff1}
e^t |f(t)-F_1(t)|_{\alpha_*,\sigma_0}=\mathcal O(e^{-\delta_{1,\alpha_*} t}).
\end{equation}
Also,
\begin{equation}\label{ffirst}
|f(t)|_{\alpha_*,\sigma_0}\le |f_1(t)|_{\alpha_*,\sigma_0}e^{-t}+|f(t)-f_1(t)e^{-t}|_{\alpha_*,\sigma_0}=\mathcal O(e^{-\lambda t}),\quad \forall \lambda\in(0,1).
\end{equation}
Using \eqref{ffirst} and applying Proposition \ref{theo23}, we have
\begin{equation} \label{newua}
|u(t)|_{\alpha_*+1/2,\sigma_0}= \mathcal O(e^{-(1-\delta) t}),\quad \forall \delta\in(0,1),
\end{equation}
\begin{equation}\label{newBu}
|B(u(t),u(t))|_{\alpha_*,\sigma_0}=\mathcal O(e^{-2(1-\delta) t}),\quad \forall \delta\in(0,1).
\end{equation}
\noindent{\bf Base case: $N=1$.}
Let $\alpha=\alpha_*$ and $\mu=\mu_*$.
In estimating $H_0(t)$ defined by \eqref{H0def}, the estimate \eqref{ch1}, resp.\ \eqref{ch2}, comes from \eqref{ff1}, resp.\ \eqref{newBu}.
Hence we obtain the bound \eqref{ch3} for $|H_0(T_0+t)|_{\alpha,\sigma_0}$.
Then the existence and definition of $q_1(t)$ are the same as in \eqref{q11def}, \eqref{q1k} and \eqref{q1def}.
Since $f_1\in \mathcal P^{\mu,\sigma_0}$, the same proof gives
$q_1\in\mathcal P^{\mu+1,\sigma_0}$; see \eqref{R1q1poly}, \eqref{IRq1} and \eqref{Abn}.
The remainder estimate \eqref{rem1} still holds true, which, for the current value $\alpha=\alpha_*$, proves \eqref{ufinite} for $N=1$.
Also, the ODE \eqref{u1ODE} holds in $G_{\mu,\sigma_0}$ (for the current $\mu=\mu_*$).
If $N_*=1$, then the proof is finished here. We consider $N_*\ge 2$ now.
\noindent{\bf Recursive step.} Let $1\le N\le N_*-1$.
Assume there already exist $q_n\in \mathcal P^{\mu_n,\sigma_0}$ for $1\le n\le N$, such that
\begin{equation}\label{farate0}
| v_N(t)|_{\alpha_N,\sigma_0}=\mathcal O(e^{-(N+\varepsilon)t}),\quad\forall \varepsilon\in(0,\delta_N^*),
\end{equation}
and \eqref{unODE} holds in $G_{\mu_N,\sigma_0}$ for $n=1,2,3,\ldots,N$.
Let
\begin{equation}\label{almu}
\alpha=\alpha_{N+1}=\alpha_N-1/2\ge 1/2, \quad \mu=\mu_{N+1}=\mu_N-1/2\ge \alpha\ge 1/2.
\end{equation}
Note for $n=1,2,\ldots,N$ that $\mu_n\ge \alpha_n\ge 1/2$ and both $\mu_n$, $\alpha_n$ are decreasing;
hence,
\begin{equation} \label{uqn}
u_n(t),q_n(t)\in G_{\mu_n,\sigma_0} \subset G_{\mu_N,\sigma_0}=G_{\mu+1/2,\sigma_0} \subset G_{\alpha_N,\sigma_0}=G_{\alpha+1/2,\sigma_0},\quad\forall t\in\ensuremath{\mathbb R}.
\end{equation}
Rewrite \eqref{farate0} as
\begin{equation}\label{favNrate}
| v_N(t)|_{\alpha+1/2,\sigma_0}=\mathcal O(e^{-(N+\varepsilon)t}),\quad \forall \varepsilon\in(0,\delta_N^*).
\end{equation}
We now construct a polynomial $q_{N+1}\in\mathcal P^{\mu+1,\sigma_0}$ such that
\eqref{ufinite} holds true with $N+1$ replacing $N$, and the ODE \eqref{unODE}, with $n=N+1$, holds in $G_{\mu_{N+1},\sigma_0}=G_{\mu,\sigma_0}$.
We proceed with the construction of $q_{N+1}(t)$ as in the proof of Theorem \ref{mainthm}, using the specific values of $\alpha$ and $\mu$ in \eqref{almu}.
Note that equation \eqref{vNeq} for $v_N$ still holds in the weak sense as in Definition \ref{lhdef}.
$\bullet$ We check the estimate \eqref{hNo} for the function $h_N(t)$ defined by \eqref{hdef}.
Let $\varepsilon_*\in(0,\delta^*_{N+1})$.
By \eqref{ffinite} with $N+1$ replacing $N$, we have
\begin{equation} \label{faFtil}
|\tilde F_{N+1}(t)|_{\alpha,\sigma_0}=\mathcal O(e^{-(N+1+\delta_{N+1})t})
=\mathcal O(e^{-(N+1+\varepsilon_*)t}).
\end{equation}
Thanks to \eqref{uqn} and Lemma \ref{nonLem}, estimate \eqref{Bmj} stays the same as
\begin{equation} \label{faBmj}
\sum_{\stackrel{1\le m,j\le N}{m+j\ge N+2}} |B(u_m,u_j)|_{\alpha,\sigma_0}
=\mathcal O(e^{-(N+1+\varepsilon_*)t}).
\end{equation}
Again, take $\varepsilon\in(\varepsilon_*,\delta^*_{N+1})\subset (0,\delta^*_N)$ in \eqref{favNrate} and $\delta=\varepsilon-\varepsilon_*\in(0,1)$.
Then we have
\begin{equation} \label{faubu}
|\bar u_N(t)|_{\alpha+1/2,\sigma_0}=|\bar u_N(t)|_{\alpha_N,\sigma_0}=\mathcal O(e^{-(1-\delta)t}),
\end{equation}
and by \eqref{newua}
\begin{equation*}
|u(t)|_{\alpha+1/2,\sigma_0}\le |u(t)|_{\alpha_*,\sigma_0}=\mathcal O(e^{-(1-\delta)t}).
\end{equation*}
By Lemma \ref{nonLem} and estimates \eqref{favNrate}, \eqref{faubu}, it follows that
$$|B(v_N,u)|_{\alpha,\sigma_0},|B(\bar u_N,v_N)|_{\alpha,\sigma_0}=\mathcal O(e^{-(N+\varepsilon+1-\delta)t})=\mathcal O(e^{-(N+1+\varepsilon_*)t}).$$
Therefore, by \eqref{faFtil}, \eqref{faBmj} and \eqref{faubu}, we again obtain \eqref{hNo}.
$\bullet$ The same construction of $q_{N+1}(t)$ now goes through. Indeed, since $f_{N+1}\in \mathcal P^{\mu,\sigma_0}$, by \eqref{uqn} and the fact that $q_m, q_j\in \mathcal P^{\mu+1/2,\sigma_0}$ for $1\le m,j\le N$, it then follows from \eqref{BGG} that
$$f_{N+1}-\sum_{m+j=N+1} B(q_m,q_j)\in \mathcal P^{\mu,\sigma_0}.$$
The same proof as for the case of $q_1$ then yields that $q_{N+1}\in\mathcal P^{\mu+1,\sigma_0}$.
$\bullet$
For the estimate of $v_{N+1}(t)$, the same arguments yield
\begin{equation*}
|v_{N+1}(t)|_{\alpha,\sigma_0}
=|v_N(t)-e^{-(N+1)t} q_{N+1}(t)|_{\alpha,\sigma_0}= \mathcal O(e^{-(N+1+\varepsilon_*)t}).
\end{equation*}
Since this holds for any $\varepsilon_*\in(0,\delta_{N+1}^*)$, the remainder estimate \eqref{ufinite} holds true with $N+1$ replacing $N$.
$\bullet$ As for the ODE \eqref{unODE} with $n=N+1$, the proof is unchanged from that of Theorem \ref{mainthm}, noting that the ODE now holds in the corresponding space $G_{\mu,\sigma_0}$.
We have proved that the polynomial $q_{N+1}$ has the desired properties. This completes the recursive step, and hence, the proof of Theorem \ref{finitetheo}.\qed
\section*{}
\noindent\textbf{\large Acknowledgements.}
The authors would like to thank Peter Constantin for prompting the question of extending the Foias-Saut theory to the case of non-potential forces. The authors are also grateful to Ciprian Foias for his encouragement towards this work and helpful discussions,
as well as to Animikh Biswas for stimulating discussions. L.H. gratefully acknowledges the support for his research by the NSF grant DMS-1412796.
\end{document}
\begin{document}
\title{The conjugacy problem in automaton groups \\ is not solvable}
\author{Zoran {\v{S}}uni\'c}
\address{Dept. of Mathematics, Texas A\&M Univ. MS-3368, College Station, TX 77843-3368, USA}
\author{Enric Ventura}
\address{Dept. Mat. Apl. III, Universitat Polit\`ecnica de Catalunya, Manresa, Barcelona, Catalunya}
\begin{abstract}
(Free-abelian)-by-free, self-similar groups generated by finite self-similar sets of tree automorphisms and having
unsolvable conjugacy problem are constructed. Along the way, finitely generated, orbit undecidable, free subgroups of $\mathsf{GL}_d({\mathbb Z})$, for $d \geqslant 6$, and $\operatorname{Aut}(F_d)$, for $d \geqslant 5$, are constructed as well.
\end{abstract}
\keywords{automaton groups; (free abelian)-by-free groups; conjugacy problem; orbit decidability}
\def\subjclassname{\textup{2010} Mathematics Subject Classification}
\subjclass[2010]{20E08; 20F10}
\maketitle
\section{Introduction}
The goal of this paper is to prove the following result.
\begin{theorem}\label{t:main}
There exist automaton groups with unsolvable conjugacy problem.
\end{theorem}
The question on solvability of the conjugacy problem was raised in 2000 for the class of self-similar groups
generated by finite self-similar sets (automaton groups) by Grigorchuk, Nekrashevych and Sushchanski{\u\i}~\cite{grigorchuk-n-s:automata}. Note that the word problem is solvable for all groups in this class by a rather straightforward algorithm running in exponential time. Moreover, for an important subclass consisting of finitely generated, contracting groups the word problem is solvable in polynomial time. Given that our examples contain free nonabelian subgroups and that contracting groups, as well as the groups $\mathsf{Pol}(n)$, $n \geqslant 0$, do not contain such subgroups (see~\cite{nekrashevych:free-subgroups} and~\cite{sidki:pol}), the following question remains open.
\begin{question}\label{q:open}
Is the conjugacy problem solvable in
(i) all finitely generated, contracting, self-similar groups?
(ii) the class of automaton groups in $\mathsf{Pol}(n)$, for $n \geqslant 0$?
\end{question}
There are many positive results on the solvability of the conjugacy problem in
automaton groups close to the first Grigorchuk group~\cite{grigorchuk:burnside} and the Gupta-Sidki examples~\cite{gupta-s:burnside}. The conjugacy problem was solved for the first Grigorchuk group independently by Leonov~\cite{leonov:conjugacy} and Rozhkov~\cite{rozhkov:conjugacy}, and for the Gupta-Sidki examples by Wilson and Zaleskii~\cite{wilson-z:conjugacy}. Grigorchuk and Wilson~\cite{grigorchuk-w:conjugacy} showed that the problem is solvable in all subgroups of finite index in the first Grigorchuk group. In fact, the results in~\cite{leonov:conjugacy,wilson-z:conjugacy,grigorchuk-w:conjugacy} apply to certain classes of groups that include the well known examples we explicitly mentioned. In a recent work Bondarenko, Bondarenko, Sidki and Zapata~\cite{bondarenko-b-s-z:conjugacy} showed that the conjugacy problem is solvable in $\mathsf{Pol}(0)$. Lysenok, Myasnikov, and Ushakov provided the first, and so far the only, significant result on the complexity of the conjugacy problem in automaton groups by providing a polynomial time solution for the first Grigorchuk group~\cite{lysenok-m-u:grigorchuk-cp}.
The strategy for our proof of Theorem~\ref{t:main} is as follows. First, we observe the following consequence of a result by Bogopolski, Martino and Ventura~\cite{bogopolski-m-v:cp}.
\begin{proposition}\label{gamma}
Let $H$ be a finitely generated group, and $\Gamma$ a finitely generated subgroup of $\operatorname{Aut}(H)$. If $\Gamma \leqslant
\operatorname{Aut}(H)$ is orbit undecidable, then $H\rtimes \Gamma$ has unsolvable conjugacy problem.
\end{proposition}
Since, for $d\geqslant 4$, examples of finitely generated orbit undecidable subgroups $\Gamma$ of $\mathsf{GL}_d({\mathbb Z})$ are provided in~\cite{bogopolski-m-v:cp}, we obtain the existence of groups of the form $\mathbb{Z}^d \rtimes \Gamma$ with unsolvable conjugacy problem. Finally, using techniques of Brunner and Sidki~\cite{brunner-s:glnz}, we prove the following result, which implies Theorem~\ref{t:main}.
\begin{theorem}\label{cor}
Let $\Gamma$ be an arbitrary finitely generated subgroup of $\mathsf{GL}_d({\mathbb Z})$. Then, $\mathbb{Z}^d \rtimes \Gamma$ is an automaton group.
\end{theorem}
The examples of finitely generated orbit undecidable subgroups of $\mathsf{GL}_d({\mathbb Z})$, for $d\geqslant 4$, given in~\cite{bogopolski-m-v:cp} are based on Mikhailova's construction and are not finitely presented. By modifying the construction in~\cite{bogopolski-m-v:cp}, at the cost of increasing the dimension by 2, we obtain finitely generated, orbit undecidable, free subgroups of $\mathsf{GL}_d({\mathbb Z})$, for $d\geqslant 6$. Note that, by~\cite[Proposition 6.9]{bogopolski-m-v:cp} and the Tits Alternative~\cite{tits:alternative}, every orbit undecidable subgroup $\Gamma$ of $\mathsf{GL}_d({\mathbb Z})$ contains free nonabelian subgroups. By using the same technique (see Proposition~\ref{p:general-free}) we also construct finitely generated, orbit undecidable, free subgroups of $\operatorname{Aut}(F_d)$, for $d \geqslant 5$, answering Question~6 raised in~\cite{bogopolski-m-v:cp}.
\begin{proposition}\label{p:free-orbit-undecidable}
(a) For $d\geqslant 6$, the group $\mathsf{GL}_d({\mathbb Z})$ contains finitely generated, orbit undecidable, free subgroups.
(b) For $d\geqslant 5$, the group $\operatorname{Aut}(F_d)$ contains finitely generated, orbit undecidable, free subgroups.
\end{proposition}
This allows us to deduce the following strengthened version of
Theorem~\ref{t:main}.
\begin{theorem}\label{gros}
For every $d\geqslant 6$, there exists a finitely presented group $G$ simultaneously satisfying the following three
conditions:
i) $G$ is an automaton group,
ii) $G$ is ${\mathbb Z}^d$-by-(f.g.-free) (in fact, $G={\mathbb Z}^d \rtimes_\phi F_m$, with injective action $\phi$),
iii) $G$ has unsolvable conjugacy problem.
\end{theorem}
\section{Orbit undecidability}\label{oud}
The main result in~\cite{bogopolski-m-v:cp} can be stated in the following way.
\begin{theorem}[Bogopolski, Martino, Ventura~\cite{bogopolski-m-v:cp}]\label{bmv}
Let $G=H\rtimes F$ be a semidirect product (with $F$, $H$, and so $G$, finitely generated) such that
(i) the conjugacy problem is solvable in $F$,
(ii) for every $f \in F$, $\langle f \rangle$ has finite index in the centralizer $C_F(f)$ and there is
an algorithm that, given $f$, calculates coset representatives for $\langle f \rangle$ in $C_F(f)$,
(iii) the twisted conjugacy problem is solvable in $H$.
\noindent
Then the following are equivalent:
(a) the conjugacy problem in $G$ is solvable,
(b) the conjugacy problem in $G$ restricted to $H$ is solvable,
(c) the action group $\{\lambda_g \mid g \in G\} \leqslant \operatorname{Aut}(H)$ is orbit decidable, where $\lambda_g$ denotes the
right conjugation by $g$, restricted to $H$. \qed
\end{theorem}
The \emph{conjugacy problem in $G$ restricted to $H$} asks if, given two elements $u$ and $v$ in $H$, there exists an
element $g$ in $G$ such that $u^g=v$. The \emph{orbit problem} for a subgroup $\Gamma$ of $\operatorname{Aut}(H)$ asks if, given $u$ and $v$ in $H$, there is an automorphism $\gamma$ in $\Gamma$ such that $\gamma(u)$ is conjugate to $v$ in $H$; we say that $\Gamma$ is \emph{orbit decidable (resp.\ undecidable)} if the orbit problem for $\Gamma$ is solvable (resp.\ unsolvable). Finally, the \emph{twisted conjugacy problem} for a group $H$ asks if, given an automorphism $\varphi \in
\operatorname{Aut}(H)$ and two elements $u,v\in H$, there is $x\in H$ such that $v=\varphi(x)^{-1}ux$.
The implications $(a) \Rightarrow (b) \Leftrightarrow (c)$ in Theorem~\ref{bmv} are clear from the definitions, and do not require most of the hypotheses (as indicated in~\cite{bogopolski-m-v:cp}, the only relevant implication is $(c)
\Rightarrow (a)$). Proposition~\ref{gamma}, which is needed for our purposes, is an obvious corollary.
\begin{proposition}\label{p:general-free}
Let $G$ be a group and $H$ and $K$ be subgroups of $G$ such that
(i) $G = \langle H, K \rangle$,
(ii) the free group $F_2$ of rank 2 is a subgroup of $\operatorname{Aut}(K)$,
(iii) there exists a finitely generated orbit undecidable subgroup $\Gamma \leqslant \operatorname{Aut}(H)$,
(iv) every pair of automorphisms $\alpha \in \operatorname{Aut}(H)$ and $\beta \in \operatorname{Aut}(K)$ has a (necessarily unique) common extension to an automorphism of $G$, and
(v) two elements of $H$ are conjugate in $G$ if and only if they are conjugate in $H$.
Then, $\operatorname{Aut}(G)$ contains finitely generated, orbit undecidable, free subgroups.
\end{proposition}
\begin{proof}
Let $\Gamma = \langle g_1,\ldots,g_m \rangle$ be an orbit undecidable subgroup of $\operatorname{Aut}(H)$ and $F=\langle f_1,\dots,f_m \rangle$ be a free subgroup of rank $m$ of $\operatorname{Aut}(K)$. For $i=1,\dots,m$, let $s_i$ be the common extension of $g_i$ and $f_i$ to an automorphism of $G$ and let $\Gamma' = \langle s_1,\dots,s_m \rangle \leqslant \operatorname{Aut}(G)$. Since $F$ is free of rank $m$, so is $\Gamma'$.
Moreover, $\Gamma'$ is an orbit undecidable subgroup of $\operatorname{Aut}(G)$. Indeed, for $u,v \in H$,
\begin{align*}
(\exists \gamma' \in \Gamma')(\exists t' \in G)~\gamma'(u)=v^{t'}
&\iff (\exists \gamma \in \Gamma)(\exists t' \in G)~\gamma(u)=v^{t'} \\
&\iff (\exists \gamma \in \Gamma)(\exists t \in H)~\gamma(u)=v^{t}.
\end{align*}
The second equivalence follows from (v), since $\gamma(u), v \in H$. The first comes from the construction, since, for every group word $w(x_1,\dots,x_m)$, the automorphisms $\gamma'=w(s_1,\dots,s_m) \in \Gamma'$ and $\gamma=w(g_1,\dots,g_m) \in \Gamma$ agree on $H$. Therefore, the orbit problem for the instance $u,\, v\in H$ with respect to $\Gamma \leqslant \operatorname{Aut}(H)$ is equivalent to the orbit problem for the instance $u,v\in H\leqslant G$ with respect to $\Gamma' \leqslant \operatorname{Aut}(G)$, showing that an algorithm that would solve the orbit problem for $\Gamma'$ could be used to solve the orbit problem for $\Gamma$ as well. Thus $\Gamma'$ is orbit undecidable.
\end{proof}
\begin{proof}[Proof of Proposition~\ref{p:free-orbit-undecidable}]
(a) For $d \geqslant 6$, let $G={\mathbb Z}^d$, $H={\mathbb Z}^{d-2}$, $K={\mathbb Z}^2$, and $G=H \oplus K$. All requirements of Proposition~\ref{p:general-free} are satisfied. In particular, (iii) holds by~\cite[Proposition~7.5]{bogopolski-m-v:cp}, and (v) holds since conjugacy is the same as equality in both $H$ and $G$.
(b) For $d \geqslant 5$, let $G=F_d$, $H=F_{d-2}$, $K=F_2$, and $G=H \ast K$. All requirements of Proposition~\ref{p:general-free} are satisfied. In particular, (iii) holds by~\cite[Subsection~7.2]{bogopolski-m-v:cp}, and (v) holds since the free factor $H$ is malnormal in $G$.
\end{proof}
\section{Self-similar groups and automaton groups}\label{ssg}
Let $X$ be a finite alphabet on $k$ letters. The set $X^*$ of words over $X$ has the structure of a rooted $k$-ary tree in which the empty word is the root and each vertex $u$ has $k$ children, namely the vertices $ux$, for $x$ in $X$. Every tree automorphism fixes the root and permutes the words of the same length (constituting the levels of the rooted tree) while preserving the tree structure.
Let $g$ be a tree automorphism. The action of $g$ on $X^*$ can be decomposed as follows. There is a permutation $\pi_g$ of $X$, called the \emph{root permutation} of $g$, determined by the permutation that $g$ induces on the subtrees below the root (the action of $g$ on the first letter in every word), and tree automorphisms $g|_x$, for $x$ in $X$, called the \emph{sections} of $g$, determined by the action of $g$ within these subtrees (the action of $g$ on the rest of the word behind the first letter). Both the root permutation and the sections are uniquely determined by the equality
\begin{equation}\label{e:gxw}
g(xw) = \pi_g(x)g|_x(w),
\end{equation}
for $x$ in $X$ and $w$ in $X^*$.
A group or a set of tree automorphisms is \emph{self-similar} if it contains all sections of all of its
elements. A \emph{finite automaton} is a finite self-similar set. A group $G({\mathcal A})$ of tree automorphisms generated by a finite self-similar set ${\mathcal A}$ is itself self-similar and it is called an \emph{automaton group} (realized or generated by the automaton ${\mathcal A}$). The elements of the automaton are often referred to as \emph{states} of the automaton and the automaton is said to operate on the alphabet $X$.
The \emph{boundary} of the tree $X^*$ is the set $X^\omega$ of right infinite words $x_1x_2x_3\cdots$. The tree structure induces a metric on $X^\omega$ inducing the Cantor set topology. The metric is given by $d(u,v) = \frac{1}{2^{|u \wedge v|}}$, for $u \neq v$, where $|u \wedge v|$ denotes the length of the longest common prefix of $u$ and $v$. The group of isometries of the boundary $X^\omega$ and the group of tree automorphism of $X^*$ are canonically isomorphic. Every isometry induces a tree automorphism by restricting the action on finite prefixes, and every tree automorphism induces an isometry on the boundary through an obvious limiting process. The decomposition formula~(\ref{e:gxw}) for the action of tree automorphisms is valid for boundary isometries as well ($w$ is any right infinite word in this case).
\section{Automaton groups with unsolvable conjugacy problem}\label{feina}
Let ${\mathcal M}=\{ M_1,\ldots,M_m \}$ be a set of integer $d\times d$ matrices with non-zero determinants. Let $n\geqslant 2$ be relatively prime to all of these determinants (thus, each $M_i$ is invertible over the ring ${\mathbb Z}_n$ of $n$-adic integers). For an integer matrix $M$ and an arbitrary vector $\mathbf{v}$ with integer coordinates, consider the
invertible affine transformation $M_\mathbf{v} \colon {\mathbb Z}_n^d \to {\mathbb Z}_n^d$ given by $M_\mathbf{v}(\mathbf{u})=\mathbf{v}+M\mathbf{u}$,
and let
\[
G_{{\mathcal M}, n} =\langle \{M_\mathbf{v} \mid M \in {\mathcal M}, \ \mathbf{v} \in {\mathbb Z}^d\} \rangle
\]
be the subgroup of ${\mathcal A}ff_d({\mathbb Z}_n)$ generated by all the transformations of the form $M_\mathbf{v}$, for $M\in {\mathcal M}$ and
$\mathbf{v}\in {\mathbb Z}^d$. Denote by $\tau_\mathbf{v}$ the translation ${\mathbb Z}_n^d \to {\mathbb Z}_n^d$, $\mathbf{u} \mapsto
\mathbf{v}+\mathbf{u}$, and by $\mathbf{e}_i$ the $i$-th standard basis vector. Since $M_{\mathbf{v}} =\tau_{\mathbf{v}} M_{\mathbf{0}}$, we have
\begin{equation}\label{e:generators}
G_{{\mathcal M},n} =\langle \{M_\mathbf{0} \mid M \in {\mathcal M}\} \cup \{\tau_{\mathbf{e}_i} \mid i=1,\ldots,d\} \rangle \leqslant {\mathcal A}ff_d({\mathbb Z}_n).
\end{equation}
\begin{lemma}\label{inv}
If all matrices in ${\mathcal M}$ are invertible over ${\mathbb Z}$, then $G_{{\mathcal M},n} \cong {\mathbb Z}^d \rtimes \Gamma$,
where $\Gamma=\langle {\mathcal M} \rangle \leqslant \mathsf{GL}_d({\mathbb Z})$; in particular, $G_{{\mathcal M},n}$ does not depend on $n$.
\end{lemma}
\begin{proof}
If $M$ is an invertible matrix over ${\mathbb Z}$, and $v\in {\mathbb Z}^d$, then $M_{\mathbf{v}}\in {\mathcal A}ff_d({\mathbb Z}_n)$ restricts
to a bijective affine transformation $M_{\mathbf{v}}\in {\mathcal A}ff_d({\mathbb Z})$. Hence, we can view $G_{{\mathcal M},n}$ as a
subgroup of ${\mathcal A}ff_d({\mathbb Z})$ and, in particular, it is independent of $n$; let us denote it by $G_{{\mathcal M}}$.
Clearly, the subgroup of translations $T=\langle \tau_{\mathbf{e}_1}, \ldots, \tau_{\mathbf{e}_d}\rangle$ of $G_{{\mathcal M}}$ is free abelian of rank $d$, $T\simeq {\mathbb Z}^d$. Since each of the transformations $M_\mathbf{0}$, for $M \in {\mathcal M}$, acts on ${\mathbb Z}^d$ by multiplication by $M$, the subgroup $\langle M_\mathbf{0} \mid M \in {\mathcal M} \rangle$ of $G_{\mathcal M}$ is isomorphic to $\Gamma$ and may be safely identified with it. The subgroups $T$ and $\Gamma$ intersect trivially, since every nontrivial element of $T$ moves the zero vector in ${\mathbb Z}^d$, while no element of $\Gamma$ does. For $M \in {\mathcal M} \cup {\mathcal M}^{-1}$ (where ${\mathcal M}^{-1}$ is the set of integer matrices inverse to the matrices in ${\mathcal M}$), $j=1,\ldots,d$, and $\mathbf{u} \in {\mathbb Z}^d$,
\begin{align*}
M_\mathbf{0} \tau_{\mathbf{e}_j} (M_\mathbf{0})^{-1}(\mathbf{u}) &= M_\mathbf{0} \tau_{\mathbf{e}_j} (M^{-1}\mathbf{u}) = M_\mathbf{0}(\mathbf{e}_j + M^{-1}\mathbf{u}) = M\mathbf{e}_j + \mathbf{u} \\
&= \tau_{\mathbf{e}_1}^{m_{1,j}}\tau_{\mathbf{e}_2}^{m_{2,j}}\cdots \tau_{\mathbf{e}_d}^{m_{d,j}}(\mathbf{u}),
\end{align*}
where $m_{i,j}$ is the $(i,j)$-entry of $M$. Therefore,
for $M \in {\mathcal M} \cup {\mathcal M}^{-1}$ and $j=1,\dots,d$,
\begin{equation}\label{e:relation}
M_\mathbf{0} \tau_{\mathbf{e}_j} (M_\mathbf{0})^{-1} = \tau_{\mathbf{e}_1}^{m_{1,j}}\tau_{\mathbf{e}_2}^{m_{2,j}}\cdots
\tau_{\mathbf{e}_d}^{m_{d,j}}.
\end{equation}
It follows that the subgroup $T\cong {\mathbb Z}^d$ is normal in $G_{\mathcal M}$ and $G_{\mathcal M} \cong {\mathbb Z}^d \rtimes \Gamma$.
\end{proof}
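The conjugation relation~(\ref{e:relation}) is easy to sanity-check numerically. The following sketch (illustrative, with a sample matrix in $\mathsf{GL}_2({\mathbb Z})$) verifies that $M_\mathbf{0}\tau_{\mathbf{e}_j}(M_\mathbf{0})^{-1}$ is the translation by the $j$-th column of $M$:

```python
def matvec(M, u):
    return [sum(M[i][k] * u[k] for k in range(len(u))) for i in range(len(M))]

# sample matrix in GL_2(Z): det = 1, so the inverse is integral
M    = [[1, 1], [1, 2]]
Minv = [[2, -1], [-1, 1]]
d = 2

for j in range(d):
    e_j = [1 if i == j else 0 for i in range(d)]
    col_j = [M[i][j] for i in range(d)]              # j-th column of M
    for u in ([0, 0], [3, -5], [-2, 7]):
        # left side: (M_0 . tau_{e_j} . M_0^{-1})(u) = M(e_j + M^{-1} u)
        lhs = matvec(M, [a + b for a, b in zip(e_j, matvec(Minv, u))])
        # right side: translation of u by the j-th column of M
        rhs = [c + ui for c, ui in zip(col_j, u)]
        assert lhs == rhs
```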
\begin{remark}
The equality~(\ref{e:relation}) is correct (over ${\mathbb Z}_n$) for any integer matrix with non-zero determinant relatively prime to $n$. When ${\mathcal M}=\{M\}$ consists of a single $d\times d$ integer matrix $M=(m_{i,j})$ of infinite order and determinant $k\neq 0$ relatively prime to $n$, the multiplication by $M$ embeds ${\mathbb Z}^d$ into an index $|k|$ subgroup of ${\mathbb Z}^d$ and $G_{{\mathcal M},n}$ is the ascending HNN extension of ${\mathbb Z}^d$ by a single stable letter (see~\cite{bartholdi-s:bs}), i.e.,
\[
G_{{\mathcal M},n} \cong \langle \ a_1,\ldots,a_d, t \mid [a_i,a_j]=1,~ta_jt^{-1} = a_1^{m_{1,j}}\cdots
a_d^{m_{d,j}},~\mbox{for}~1\leqslant i,j\leqslant d \ \rangle.
\]
\end{remark}
The goal now is to show that the groups $G_{{\mathcal M},n}$ constructed in this way can all be realized by finite automata and are, therefore, automaton groups.
The elements of the ring ${\mathbb Z}_n$ may be (uniquely) represented as right infinite words over the alphabet $Y_n=
\{0,\ldots,n-1\}$, through the correspondence
$$
y_1y_2y_3 \cdots \quad \longleftrightarrow \quad y_1 + y_2\cdot n + y_3\cdot n^2 + \cdots,
$$
while the elements of the free $d$-dimensional module ${\mathbb Z}_n^d$, viewed as column vectors, may be (uniquely)
represented as right infinite words over the alphabet $X_n =Y_n^d=\{ (y_1,\ldots,y_d)^T \mid y_i \in Y_n, \
i=1,\ldots,d\}$ consisting of column vectors with entries in $Y_n$. Note that $|Y_n |=n$ and $|X_n |=n^d$.
For a vector $\mathbf{v}$ with integer coordinates define $\mbox{Mod}(\mathbf{v})$ and $\mbox{Div}(\mathbf{v})$ to be the vectors whose
coordinates are the remainders and the quotients, respectively, obtained by dividing the coordinates of $\mathbf{v}$ by
$n$, i.e., the unique integer vectors satisfying $\mathbf{v}=\mbox{Mod}(\mathbf{v})+n\mbox{Div}(\mathbf{v})$, with $\mbox{Mod}(\mathbf{v})\in X_n$.
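In code, $\mbox{Mod}$ and $\mbox{Div}$ are simply the coordinatewise remainder and floor quotient. A short sketch (with hypothetical helper names mirroring the text):

```python
# v = Mod(v) + n*Div(v), with each coordinate of Mod(v) in {0, ..., n-1}
def Mod(v, n):
    return [c % n for c in v]       # Python's % returns values in {0,...,n-1} for n > 0

def Div(v, n):
    return [c // n for c in v]      # floor division, consistent with % above

v, n = [7, -3, 0], 5
assert [m + n * q for m, q in zip(Mod(v, n), Div(v, n))] == v
assert Mod(v, n) == [2, 2, 0]       # note: -3 = 2 + 5*(-1)
```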
\begin{lemma}
For every vector $\mathbf{v}$ with integer coordinates, and every element $\mathbf{x}_1\mathbf{x}_2\mathbf{x}_3\ldots$ in the free module ${\mathbb Z}_n^d$ (where $\mathbf{x}_1,\mathbf{x}_2,\mathbf{x}_3,\ldots$ are symbols in $X_n$),
\begin{equation}\label{e:Mv}
M_\mathbf{v}(\mathbf{x}_1\mathbf{x}_2\mathbf{x}_3\cdots) = \mbox{Mod}(\mathbf{v}+M\mathbf{x}_1) +
nM_{\mbox{Div}(\mathbf{v}+M\mathbf{x}_1)}(\mathbf{x}_2\mathbf{x}_3\mathbf{x}_4\cdots).
\end{equation}
\end{lemma}
\begin{proof}
Indeed,
\begin{align*}
M_\mathbf{v}(\mathbf{x}_1\mathbf{x}_2\mathbf{x}_3\cdots)
&= \mathbf{v}+M\mathbf{x}_1\mathbf{x}_2\mathbf{x}_3\cdots = \mathbf{v}+M(\mathbf{x}_1 + n(\mathbf{x}_2\mathbf{x}_3\mathbf{x}_4\cdots)) \\
&= \mathbf{v}+M\mathbf{x}_1 + nM\mathbf{x}_2\mathbf{x}_3\mathbf{x}_4\cdots \\
&= \mbox{Mod}(\mathbf{v}+M\mathbf{x}_1) + n\mbox{Div}(\mathbf{v}+M\mathbf{x}_1) + nM\mathbf{x}_2\mathbf{x}_3\mathbf{x}_4\cdots \\
&= \mbox{Mod}(\mathbf{v}+M\mathbf{x}_1) + n(\mbox{Div}(\mathbf{v}+M\mathbf{x}_1) + M\mathbf{x}_2\mathbf{x}_3\mathbf{x}_4\cdots) \\
&= \mbox{Mod}(\mathbf{v}+M\mathbf{x}_1) + nM_{\mbox{Div}(\mathbf{v} + M\mathbf{x}_1)}(\mathbf{x}_2\mathbf{x}_3\mathbf{x}_4\cdots).
\end{align*}
\end{proof}
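The recursion~(\ref{e:Mv}) can be checked numerically by unfolding it on a finite prefix and comparing with the base-$n$ digits of $\mathbf{v}+M\mathbf{u}$ computed directly. A sketch (illustrative; it assumes the input word has an all-zero tail and nonnegative data, so that the output digits are ordinary base-$n$ digits):

```python
def matvec(M, u):
    return [sum(M[i][k] * u[k] for k in range(len(u))) for i in range(len(M))]

def act(M, v, word, n, depth):
    """First `depth` letters of M_v applied to the word (letters are digit vectors)."""
    if depth == 0:
        return []
    x = word[0] if word else [0] * len(v)
    t = [vi + wi for vi, wi in zip(v, matvec(M, x))]
    mod = [c % n for c in t]
    div = [c // n for c in t]
    return [mod] + act(M, div, word[1:], n, depth - 1)   # the recursion of the lemma

def digits(k, n, depth):   # base-n digits of a nonnegative integer, least significant first
    return [(k // n ** i) % n for i in range(depth)]

M, n = [[2, 1], [0, 3]], 5
v = [4, 1]
word = [[3, 2], [1, 0], [2, 4]]          # finite prefix; the tail is all zeros
vals = [sum(x[i] * n ** j for j, x in enumerate(word)) for i in range(2)]
out = [v[i] + sum(M[i][k] * vals[k] for k in range(2)) for i in range(2)]
depth = 6
got = act(M, v, word, n, depth)
# per-coordinate digit streams of the output agree with the base-n digits of v + M*u
assert [[g[i] for g in got] for i in range(2)] == [digits(out[i], n, depth) for i in range(2)]
```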
Let $||M||$ be the maximal absolute row sum norm of $M$, i.e. $||M||=\max_i \sum_{j=1}^d |m_{i,j}|$, where $m_{i,j}$ is
the $(i,j)$-entry of $M$. Define $V_M$ to be the set of integer vectors $\mathbf{v}$ whose coordinates all lie between $-||M||$ and $||M||-1$, inclusive. Note that $V_M$ is finite and contains $(2||M||)^d$ vectors.
\begin{definition}
For an integer matrix $M$, define an automaton ${\mathcal A}_{M,n}$ operating on the alphabet $X_n$ as follows: the set of
states is $S_{M,n}=\{m_{\mathbf{v}} \mid \mathbf{v} \in V_M \}$, and the root permutations and the sections are, for $\mathbf{x}$ in $X_n$, defined by
\begin{equation}\label{e:mv-root-section}
m_\mathbf{v} (\mathbf{x}) = \mbox{Mod}(\mathbf{v}+M\mathbf{x}) \qquad \text{and} \qquad
m_\mathbf{v}|_{\mathbf{x}} = m_{\mbox{Div}(\mathbf{v}+M\mathbf{x})}.
\end{equation}
\end{definition}
The automaton ${\mathcal A}_{M,n}$ is well defined (it is easy to show that, for $\mathbf{v}\in V_M$ and $\mathbf{x}\in X_n$, the entries of the vector $\mathbf{v}+M\mathbf{x}$ are bounded between $-||M||n$ and $||M||n-1$, and hence $\mbox{Div}(\mathbf{v}+M\mathbf{x}) \in V_M$).
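This closure claim can be confirmed by brute force for a sample matrix; the sketch below (ours, not part of the paper) checks both the stated bound on $\mathbf{v}+M\mathbf{x}$ and that $\mbox{Div}(\mathbf{v}+M\mathbf{x})$ lands back in $V_M$:

```python
from itertools import product

M, n = [[2, -1], [1, 3]], 5
d = len(M)
norm = max(sum(abs(e) for e in row) for row in M)   # ||M|| = max absolute row sum
V_M = list(product(range(-norm, norm), repeat=d))   # coordinates in [-||M||, ||M||-1]
V_set = set(V_M)
X_n = list(product(range(n), repeat=d))             # letters: digit vectors in {0,...,n-1}^d

for v, x in product(V_M, X_n):
    t = [v[i] + sum(M[i][k] * x[k] for k in range(d)) for i in range(d)]
    assert all(-norm * n <= c <= norm * n - 1 for c in t)   # bound on v + M x
    assert tuple(c // n for c in t) in V_set                # Div(v + M x) stays in V_M
```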
\begin{lemma}\label{l:mvMv}
For every state $m_{\mathbf{v}}$ of the automaton ${\mathcal A}_{M,n}$, and every element
$\mathbf{u}=\mathbf{x}_1\mathbf{x}_2\mathbf{x}_3\cdots$ of the free module ${\mathbb Z}_n^d$ (i.e. every right infinite word over $X_n$),
\[ m_{\mathbf{v}}(\mathbf{u}) = M_\mathbf{v} (\mathbf{u}). \]
\end{lemma}
\begin{proof}
Follows directly from the definition of the root permutations and the sections of $m_{\mathbf{v}}$ in~(\ref{e:mv-root-section}) and equality~(\ref{e:Mv}) describing the action of $M_{\mathbf{v}}$.
\end{proof}
\begin{definition}
Let ${\mathcal A}_{{\mathcal M},n}$ be the automaton operating on the alphabet $X_n$ and having $2^d\sum_{i=1}^m ||M_i||^d$ states obtained
by taking the (disjoint) union of the automata ${\mathcal A}_{M_1,n},\ldots,{\mathcal A}_{M_m,n}$.
\end{definition}
\begin{proposition}\label{p:GM}
The group $G_{{\mathcal M},n}$ can be realized by a finite automaton acting on an alphabet of size $n^d$
and having no more than $2^d\sum_{i=1}^m ||M_i||^d$ states, where $||M_i||$ is the maximum absolute row sum norm of
$M_i$, for $i=1,\ldots,m$.
\end{proposition}
\begin{proof}
The automaton ${\mathcal A}_{{\mathcal M},n}$ satisfies the required conditions and generates precisely the group $G_{{\mathcal M},n}$. This follows directly from~(\ref{e:generators}) and Lemma~\ref{l:mvMv}, once it is observed that ${\mathcal A}_{{\mathcal M},n}$ has enough
states to generate $G_{{\mathcal M},n}$. However, this is clear, since each of the automata ${\mathcal A}_{M,n}$, for $M\in {\mathcal M}$,
has at least $d+1$ states, $m_\mathbf{0}, m_{-\mathbf{e}_1},\ldots,m_{-\mathbf{e}_d}$, and
$m_\mathbf{0}(m_{-\mathbf{e}_j})^{-1}=\tau_{\mathbf{e}_j}$, for $j=1,\ldots,d$.
\end{proof}
Theorem~\ref{cor} is an immediate corollary of Lemma~\ref{inv} and Proposition~\ref{p:GM}.
\begin{proof}[Proof of Theorem~\ref{gros}]
Let $d\geqslant 6$ and let $F$ be an orbit undecidable, free subgroup of rank $m$ of $\mathsf{GL}_d({\mathbb Z})$ (such a group exists by Proposition~\ref{p:free-orbit-undecidable}). Let ${\mathcal M}=\{M_1,\ldots,M_m\}$ be a set of invertible integer $d\times d$ matrices generating $F=\langle {\mathcal M} \rangle$. Fix $n\geqslant 2$ and consider the group $G=G_{{\mathcal M},n}$. By
Proposition~\ref{p:GM}, $G$ is generated by the finite automaton ${\mathcal A}_{{\mathcal M},n}$, so it is an automaton group. By
Lemma~\ref{inv}, $G$ does not depend on $n$ and is in fact isomorphic to ${\mathbb Z}^d \rtimes F$ (since all matrices in ${\mathcal M}$ are invertible over ${\mathbb Z}$); so, it is a ${\mathbb Z}^d$-by-free group. Finally, by Proposition~\ref{gamma}, $G=G_{{\mathcal M},n}$ has unsolvable conjugacy problem.
\end{proof}
Theorem~\ref{t:main} is an immediate corollary of Theorem~\ref{gros}.
\subsection*{Acknowledgments} The authors express their gratitude to CRM at Universit\'e de Montr\'eal and the Organizers of the Thematic Semester in Geometric, Combinatorial, and Computational Group Theory for their hospitality and support in the fall of 2010, when this research was conducted. The first named author was partially supported by the NSF under DMS-0805932 and DMS-1105520. The second one was partially supported by the MEC (Spain) and the EFRD (EC) through projects number MTM2008-01550 and PR2010-0321.
\def$'${$'$}
\end{document} |
\begin{document}
\title{Characterization of depolarizing channels using two-photon interference}
\begin{abstract}
Depolarization is one of the most important sources of error that the quantum channel can introduce in a quantum communication link. Even though standard quantum process tomography can, in theory, be applied to characterize this effect, in most real-world implementations depolarization cannot be distinguished from time-varying unitary transformations, especially when the time scales involved are much shorter than the detectors' response time. In this paper, we introduce a method for distinguishing true depolarization from fast polarization rotations by employing Hong-Ou-Mandel interference. It is shown that the results are independent of the timing resolution of the photodetectors.
\end{abstract}
\section{Introduction}
\label{intro}
In the vast majority of quantum optical communication schemes, qubits are encoded in pure states of a single degree of freedom of photons, such as the polarization state or the time-bin \cite{rubenok2013real,da2013proof}. This allows for coherent superpositions of states, which are at the core of quantum key distribution (QKD) and many other quantum communication applications \cite{gisin2002quantum}. However, the quantum channel, i.e. the propagation medium that connects the transmitter (Alice) and the receiver (Bob), can introduce decoherence by coupling the qubits to other degrees of freedom, usually called the ``environment''; this process is known as a depolarizing channel \cite{king2003capacity,daffer2004depolarizing}.
Characterization of a depolarizing channel can be performed by employing Quantum Process Tomography (QPT) \cite{mohseni2008quantum}. In its standard form, QPT comprises a series of Quantum State Tomography (QST) procedures on $d^2$ linearly independent input states, where $d$ is the dimension of the Hilbert space. In turn, QST relies on a series of projective measurements performed on the channel output states \cite{leonhardt1995quantum,altepeter2005photonic}. In practical implementations, the measurements are performed by single-photon detectors (SPD), which have a certain uncertainty (timing jitter) in the time of arrival of the photons. This means that if, for example, the quantum channel couples the polarization and time-bin degrees of freedom, and both the coherence time of the optical pulses and the time delay introduced between orthogonal states by the depolarizing channel are smaller than the timing jitter of the detectors, then the measurement will automatically ``trace over'' the time-bin degree of freedom and it will not be possible to correctly perform the QST.
It is a known fact in quantum physics that there is no way of distinguishing between quantum states represented by the same density operator. Since the reduced density operator resulting from the depolarizing channel can also be described as an incoherent mixture of pure states, this means that QPT is not able to distinguish between a depolarizing channel and a time-varying unitary operation which randomly rotates the input states in a time scale smaller than the SPD jitter. This is a well-characterized effect in optical fibers, where polarization scramblers are employed to mimic the effect of depolarization \cite{yao2002devices}.
In this work, we propose a method for accessing the time degree of freedom that cannot be directly resolved by the detectors. The main idea consists of performing joint measurements on pairs of photons, each taken at a different time, via Hong-Ou-Mandel (HOM) interference. In other words, by exploiting photon bunching, which is intrinsically dependent on the indistinguishability of the photonic states, the method probes said indistinguishability and overcomes the limitations of QPT. Whenever a depolarizing channel is present, both photons are in the same quantum state and will therefore interfere and produce bunching; on the other hand, if the channel is replaced by a time-varying unitary operation, the two photons will generally not be in the same polarization state, and bunching will not take place.
The structure of the paper is as follows. In section 2, the problem of distinguishing between a depolarizing channel and a fast time-varying unitary transformation is introduced. In section 3, a mathematical model of the distinction method is presented, where the single-photon case is considered for simplicity. In Section 4, a practical example of the method employing coherent states is discussed and simulation results are presented. Section 5 concludes the paper.
\section{The Problem of Characterizing Depolarization}
\label{sec:2}
The depolarizing channel is mathematically described by a completely positive, trace-preserving linear map $\mathcal{E}: \mathcal{B}[\mathcal{H}] \rightarrow\mathcal{B}[\mathcal{H}] $, where $\mathcal{H}$ is the Hilbert space corresponding to the qubits and $\mathcal{B}[X]$ is the space of (bounded) linear operators in $X$, given by:
\begin{equation}
\label{eq1}
\mathcal{E}(\rho) = (1-p)\rho + p\frac{1}{2}I
\end{equation}
where $\rho$ is the density operator describing the input state \cite{daffer2004depolarizing}.
A straightforward interpretation of eq. \ref{eq1} tells us that the input state $\rho$ has a probability $p$ of being replaced by the completely mixed state and a probability $1-p$ of not being affected by the channel. Clearly, the probability $p$ is related to the output degree of polarization (DOP); assuming the input is always a pure polarization state, then $(1-p)$ corresponds to the output's DOP. Note that eq. \ref{eq1}, though correct, does not describe how depolarization takes place. In fact, there are several ways of obtaining a depolarizing channel; indeed, the Principle of Optical Equivalence, an important theorem due to Stokes, states that different incoherent superpositions of wavefields of the same frequency can result in a beam with the same Stokes parameters \cite{brosseau1998fundamentals}. Quantum theory states the same result in a slightly different way: the density operator is a complete description of the quantum state, even though it can be written as an (incoherent) sum of other density operators in a non-unique way \cite{aperes}.
Two different approaches will now be presented. The first one introduces ``true'' depolarization, i.e. incoherence between two orthogonal polarization states, whereas the second does not actually cause depolarization but is indistinguishable from the first.
\subsection{Depolarization by time-polarization entanglement}
\label{subsec1}
In this example, the depolarization process is described by a partial trace over the extended Hilbert space comprised of the photon's polarization state and the environment, corresponding in this example to the time-of-arrival degree of freedom:
\begin{equation}
\label{eq2}
\mathcal{E}(\rho) = \textnormal{Tr}_t \left[ U_{\theta,\phi}(\tau) \rho \otimes\ket{0}_t\prescript{}{t}{\bra{0}} U_{\theta,\phi}(\tau)^\dagger \right]
\end{equation}
where $\ket{0}_t \in \mathcal{H}_t$ is the input state's time-bin and $U_{\theta,\phi}(\tau)$ is a unitary operator acting on the joint Hilbert space $\mathcal{H}\otimes \mathcal{H}_t$, defined by:
\begin{equation}
\label{eq3}
\begin{aligned}
U_{\theta,\phi}(\tau)\ket{\theta,\phi}\ket{0}_t = & e^{i\omega\tau}\ket{\theta,\phi}\ket{\tau}_t \\
U_{\theta,\phi}(\tau)\ket{\theta,\phi}^\perp \ket{0}_t = & \ket{\theta,\phi}^\perp\ket{0}_t
\end{aligned}
\end{equation}
where $\omega$ is the optical frequency, $\ket{\theta,\phi}$ is a generic polarization eigenstate parametrized by the angles $\theta$ and $\phi$ on the Poincar{\'e} sphere, and $\ket{\psi}^\perp$ is the state orthogonal to $\ket{\psi}$. Eq. \ref{eq3} can be easily interpreted as follows: the unitary transformation $U_{\theta,\phi}(\tau)$ introduces a differential group delay (DGD) of $\tau$ between its two orthogonal eigenstates. This is exactly what happens, for example, in a polarization-maintaining (PM) optical fiber. For $\omega\tau \ll 1$, eq. \ref{eq3} reduces to the effect of a wave plate or polarization controller.
If we assume a pure state $\ket{\phi_{in}} = (\ket{\theta,\phi}+ e^{i\delta}\ket{\theta,\phi}^\perp )/\sqrt{2}$ in the channel input, for an arbitrary phase difference $\delta$, the output DOP, and therefore the probability $p$ in eq. \ref{eq1}, is completely determined by the relationship between the DGD $\tau$ and the photon's coherence time $\tau_c$. In order to see this, we evaluate eq. \ref{eq2} using the unitary operator of eq. \ref{eq3} and the input state $\rho = \ket{\phi_{in}}\bra{\phi_{in}}$. A straightforward calculation yields:
\begin{equation}
\label{eq4}
\mathcal{E}(\rho) = \frac{1}{2}\begin{bmatrix}
1 & e^{-i\delta '}\prescript{}{t}{\braket{0|\tau}_t} \\
e^{i\delta '}\prescript{}{t}{\braket{0|\tau}_t} & 1 \\
\end{bmatrix}
\end{equation}
where $\delta ' = \delta - \omega\tau$, and the matrix representation is with respect to the $\{\ket{\theta,\phi}, \ket{\theta,\phi}^\perp\}$ basis. From the reduced density operator, we can now calculate the degree of polarization in the output:
\begin{equation}
\label{eq5}
DOP = \sqrt{1-4\det\left[\mathcal{E}(\rho)\right]} = \left|\prescript{}{t}{\braket{0|\tau}_t} \right|
\end{equation}
which clearly shows that the DGD determines the off-diagonal elements of $\mathcal{E}(\rho)$. The exact relationship will depend, of course, on the photon's temporal coherence. For instance, assuming a Gaussian profile on the photon's temporal wavepackets, and defining the coherence time as the standard deviation of the wavepacket, we have:
\begin{equation}
\label{eq6}
\prescript{}{t}{\braket{0|\tau}_t} = \frac{1}{\tau_c\sqrt{\pi}}\int_{-\infty}^{+\infty}e^{-\tfrac{1}{2}[t^2/\tau_c^2]} e^{-\tfrac{1}{2}[(t-\tau)^2/\tau_c^2]}dt =e^{-\tfrac{1}{4}(\tau/\tau_c)^2},
\end{equation}
and, thus:
\begin{equation}
DOP = e^{-\tfrac{1}{4}(\tau/\tau_c)^2}
\end{equation}
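The closed form above can be cross-checked by evaluating the overlap integral of eq. \ref{eq6} numerically. A sketch (ours, assuming the Gaussian wavepackets of the text; the midpoint rule converges very quickly for smooth, rapidly decaying integrands):

```python
import math

def overlap(tau, tau_c, steps=4000, span=10.0):
    """Midpoint-rule evaluation of the normalized Gaussian overlap integral."""
    lo = -span * tau_c + tau / 2            # integration window centered on tau/2
    hi = span * tau_c + tau / 2
    dt = (hi - lo) / steps
    s = 0.0
    for i in range(steps):
        t = lo + (i + 0.5) * dt
        s += math.exp(-0.5 * (t / tau_c) ** 2) * math.exp(-0.5 * ((t - tau) / tau_c) ** 2)
    return s * dt / (tau_c * math.sqrt(math.pi))

# agrees with the closed form exp(-(tau/tau_c)^2 / 4)
for tau, tau_c in [(0.0, 1.0), (1.0, 1.0), (2.0, 0.7)]:
    assert abs(overlap(tau, tau_c) - math.exp(-0.25 * (tau / tau_c) ** 2)) < 1e-6
```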
\begin{figure}
\caption{Degree of polarization (DOP) as a function of the time-of-arrival mismatch $\tau$ of the wavepackets, assuming Gaussian temporal profiles. The red region denotes a high DOP, whereas the blue area indicates low DOP, i.e. high depolarization introduced by the channel.}
\label{fig:1}
\end{figure}
As can be seen from eq. \ref{eq6}, depolarization is achieved whenever $\tau \gg \tau_c$. This can be better visualized in Fig. \ref{fig:1}, for different values of $\tau_c$.
It is important to stress that the reduced density operator of eq. \ref{eq4} actually refers to each one of the photons that go through the quantum channel. All photons are, therefore, identical. Due to the polarization-time coupling, there is no ``polarization state'' associated with any of the photons. This important remark is what defines this kind of channel as a ``true'' depolarizing channel.
\subsection{Pseudo-depolarization by incoherent mixing of polarized photons}
\label{subsec2}
Consider now a quantum channel that introduces a random unitary transformation on the input polarization state, i.e. a random rotation on the Poincar{\'e} sphere around the same axis parametrized by the angles $(\theta,\phi)$ as in the previous example. Let $\alpha$ be the rotation angle and $\ket{\phi_{in}} = (\ket{\theta,\phi}+ e^{i\delta}\ket{\theta,\phi}^\perp )/\sqrt{2}$ as before, for any arbitrary phase difference $\delta$. In the $\{\ket{\theta,\phi}, \ket{\theta,\phi}^\perp\}$ basis, the unitary operator that represents the quantum channel is simply given by:
\begin{equation}
\label{eq7}
U(\alpha) = \begin{bmatrix}
1 & 0\\
0 & e^{i\alpha} \\
\end{bmatrix}
\end{equation}
where $\alpha$ is a random variable. Let us assume that it can take only two different values: $-\alpha_0$ and $+\alpha_0$, with equal probability. Then we have, for $\rho = \ket{\phi_{in}}\bra{\phi_{in}}$:
\begin{equation}
\label{eq8}
\begin{aligned}
\mathcal{E}'(\rho) = & \tfrac{1}{2}U(\alpha_0)\ket{\phi_{in}}\bra{\phi_{in}}U^\dagger(\alpha_0) + \tfrac{1}{2}U(-\alpha_0)\ket{\phi_{in}}\bra{\phi_{in}}U^\dagger(-\alpha_0) \\
= & \dfrac{1}{2}\begin{bmatrix}
1 & e^{-i\delta}\cos(\alpha_0)\\
e^{i\delta}\cos(\alpha_0) & 1 \\
\end{bmatrix}
\end{aligned}
\end{equation}
And, similarly to eq. \ref{eq5}, we obtain the degree of polarization
\begin{equation}
\label{eq9}
DOP' = \sqrt{1-4\det\left[\mathcal{E}'(\rho)\right]} = \left|\cos(\alpha_0)\right|
\end{equation}
which can assume any value between 0 and 1 as expected. The similarities between the density operators given by eqs. \ref{eq4} and \ref{eq8} are self-evident: the role of the DGD in the first example is now played by the rotation angle $\alpha_0$. However, the individual photons are completely polarized in this case, and they are not all identical to each other, which are fundamental differences from the previous example.
\section{Theoretical Model}
\label{sec:3}
As previously discussed, two quantum states described by the same density operator are indistinguishable from each other, which means that standard Quantum Process Tomography (QPT) techniques are not applicable for distinguishing the quantum state of eq. \ref{eq4} from the one in eq. \ref{eq8}. To overcome this, the measurements must access the total Hilbert space $\mathcal{H}\otimes \mathcal{H}_t$ and not solely the polarization Hilbert space $\mathcal{H}$.
A first and trivial solution is using photodetectors that can resolve the time delay $\tau$ and apply standard QPT. However, depending on the photon coherence time $\tau_c$, this may not be practically feasible. In the case of very short optical pulses, in the time scale of tens of picoseconds or lower, the timing jitter of single-photon detectors (SPD) is not small enough \cite{you2013jitter}, such that the partial trace operation of eq. \ref{eq2} will be inherently performed.
We now present a new solution that is completely independent of the SPD timing jitter; it employs Hong-Ou-Mandel (HOM) interference and is shown in Fig. \ref{fig:2}. We assume that the optical fields at the input/output of the quantum channel consist of pulsed single photons (i.e. Fock states), with period $\Delta T \gg \tau_c$. An optical switch (OS), with switching frequency $1/\Delta T$, is coupled to the output of the quantum channel. An optical delay (OD), matched to the pulse period $\Delta T$, is inserted in one of the OS outputs, such that two consecutive photons arrive at exactly the same time in modes $a$ and $b$ of the beamsplitter (BS).
\begin{figure}
\caption{Setup for distinguishing true depolarization from an incoherent mixture of pure states employing HOM interference. OS: optical switch; OD: optical delay; BS: beamsplitter; D: single-photon detector}
\label{fig:2}
\end{figure}
There are two possible situations for the quantum states at modes $a$ and $b$, corresponding to the quantum channels discussed previously. In the first case (see sect. \ref{subsec1}), both photons in $a$ and $b$ will be in the same quantum state. Let $\ket{0,0}$ represent the two-mode vacuum state and $\hat{k}^\dagger$ be the bosonic creation operator in the spatio-temporal mode defined by $\tau$ and $\tau_c$ in beamsplitter mode $k$. The beamsplitter action can be represented by the pair of unitary operations:
\begin{equation}
\begin{aligned}
\label{eq10}
\hat{a}^\dagger \xrightarrow{\text{BS}} & \tfrac{1}{\sqrt{2}}(\hat{d}^\dagger-i\hat{c}^\dagger) \\
\hat{b}^\dagger \xrightarrow{\text{BS}} & \tfrac{1}{\sqrt{2}}(\hat{c}^\dagger-i\hat{d}^\dagger) \\
\end{aligned}
\end{equation}
If the initial state is given by $\hat{a}^\dagger \hat{b}^\dagger\ket{0,0}$, i.e. one single photon in each input, a simple calculation using eq. \ref{eq10} yields the well-known photon-bunching result of HOM interference:
\begin{equation}
\label{eq11}
\hat{a}^\dagger \hat{b}^\dagger\ket{0,0} \xrightarrow{\text{BS}} -\tfrac{i}{2}\left(\hat{c}^{\dagger 2} + \hat{d}^{\dagger 2}\right)\ket{0,0}
\end{equation}
Therefore, if coincidence measurements are taken between single-photon detectors D1 and D2, no coincidences will be found, i.e. the coincidence rate will be zero. HOM interference does not require the quantum states to have definite polarization states; the only requirement for the bunching in eq. \ref{eq11} is that both states are indistinguishable, including their arrival times at the BS, which is guaranteed by construction.
On the other hand, if the quantum channel is defined by eq. \ref{eq7} (see sect. \ref{subsec2}), photon bunching will not necessarily take place. Without loss of generality, we can assign the horizontal-vertical basis $\{\ket{H},\ket{V}\}$ to the eigenstates of the quantum channel (e.g., by introducing a fixed unitary transformation before the optical switch). In case the two photons are distinct from each other, the input state at the BS is given by:
\begin{equation}
\label{eq12}
\ket{\text{in}} = \tfrac{1}{\sqrt{2}}\left(\hat{a}^\dagger_{H} + e^{i\alpha_0}\hat{a}^\dagger_{V}\right)\otimes \tfrac{1}{\sqrt{2}}\left(\hat{b}^\dagger_{H} + e^{-i\alpha_0}\hat{b}^\dagger_{V}\right)\ket{0,0}
\end{equation}
where the subscripts in the creation operators indicate the polarization state of each mode. Using eq. \ref{eq10}, a straightforward but somewhat lengthy calculation shows that the state after the BS is given by:
\begin{equation}
\label{eq13}
\begin{aligned}
\ket{\text{out}} = & -i[\tfrac{1}{4}\left(\hat{c}_H^{\dagger 2} + \hat{c}_V^{\dagger 2} + \hat{d}_H^{\dagger 2} + \hat{d}_V^{\dagger 2}\right) - \\ & \tfrac{1}{2}\sin\alpha_0\left(\hat{c}_H^\dagger\hat{d}_V^\dagger - \hat{c}_V^\dagger\hat{d}_H^\dagger\right) + \tfrac{1}{2}\cos\alpha_0\left(\hat{c}_H^\dagger\hat{c}_V^\dagger+ \hat{d}_H^\dagger\hat{d}_V^\dagger\right)] \ket{0,0}
\end{aligned}
\end{equation}
The conditional probability of a coincidence detection given that the two photons are in the joint state described by eq. \ref{eq12} is, therefore, given by:
\begin{equation}
\label{eq14}
P_{coinc} = \tfrac{1}{2}\sin^2\alpha_0
\end{equation}
It is clear that, when $\alpha_0 = 0$, bunching always occurs. However, for $\alpha_0 \neq 0$, there is a nonzero probability of coincidence detections between detectors D1 and D2, which is an indication of the presence of time-varying unitary transformations in the quantum channel. In the case of a completely mixed state ($\alpha_0 = \pi/2$), we have a 50\% probability of coincidence, as expected.
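The conditional coincidence probability of eq. \ref{eq14} can be verified independently by pushing the input state of eq. \ref{eq12} through the beamsplitter transformation of eq. \ref{eq10} term by term. A sketch (ours; amplitudes are accumulated per unordered pair of commuting creation operators):

```python
import cmath, math
from collections import defaultdict

def bs(op):
    """Beamsplitter action of eq. (10): op = (spatial mode, polarization)."""
    mode, pol = op
    if mode == 'a':   # a_dag -> (d_dag - i c_dag)/sqrt(2)
        return {('d', pol): 1 / math.sqrt(2), ('c', pol): -1j / math.sqrt(2)}
    return {('c', pol): 1 / math.sqrt(2), ('d', pol): -1j / math.sqrt(2)}

def p_coinc(alpha0):
    """Coincidence probability for the two-photon input state of eq. (12)."""
    in_a = {('a', 'H'): 1 / math.sqrt(2),
            ('a', 'V'): cmath.exp(1j * alpha0) / math.sqrt(2)}
    in_b = {('b', 'H'): 1 / math.sqrt(2),
            ('b', 'V'): cmath.exp(-1j * alpha0) / math.sqrt(2)}
    out = defaultdict(complex)   # amplitude per unordered pair of creation operators
    for op1, c1 in in_a.items():
        for op2, c2 in in_b.items():
            for o1, d1 in bs(op1).items():
                for o2, d2 in bs(op2).items():
                    out[tuple(sorted((o1, o2)))] += c1 * c2 * d1 * d2
    prob = 0.0
    for (o1, o2), amp in out.items():
        weight = 2 if o1 == o2 else 1            # since c_dag^2 |0> has norm sqrt(2)
        if o1[0] != o2[0]:                       # one photon in c, one photon in d
            prob += weight * abs(amp) ** 2
    return prob

for a in (0.0, 0.3, math.pi / 2):
    assert abs(p_coinc(a) - 0.5 * math.sin(a) ** 2) < 1e-12
```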
If we now take into account all possible combinations, including the cases where the two photons are in the same quantum state, as well as the detection efficiencies of the single-photon detectors and the insertion loss of the optical switch, the probability of generating a coincidence is given by:
\begin{equation}
\label{eq15}
P_{coinc} = \tfrac{1}{4}\eta_{\text{os}}^2\eta_1\eta_2\sin^2\alpha_0
\end{equation}
where $\eta_{\text{os}}$, $\eta_1$ and $\eta_2$ correspond, respectively, to the transmission coefficient of the optical switch and the quantum efficiency of detectors D1 and D2.
Given that a nonzero coincidence probability is only obtained whenever the input states are different from each other, it is possible, using the scheme of Fig. \ref{fig:2}, to probabilistically distinguish a depolarizing channel from a random polarization rotation channel with a single measurement. If the measurement results in a coincidence count, then one can conclude, with certainty, that the channel corresponds to the second kind, whereas the absence of a coincidence count does not result in any information gain, i.e., gives an inconclusive result. If the experiment is repeated a large number of times, and all parameters in eq. \ref{eq15} are known, one can not only distinguish between the two kinds of channels but also estimate the value of $\alpha_0$.
\section{Practical Scenario -- Discrimination Process with Coherent States}
\label{sec:4}
It should be clear from the reasoning of the previous section that, if single photons are transmitted through the apparatus of Fig. \ref{fig:2}, a coincidence event heralds that the channel's transformation $\mathcal{E}(\rho)$ is a time-varying unitary operation (eq. \ref{eq8}). Fock states are, however, impractical for real-life implementations; one could employ Spontaneous Parametric Down-Conversion (SPDC) schemes in order to generate good approximations of pairs of Fock states \cite{gisin2002quantum,Bra_czyk_2010}. A still more practical approach is to employ weak coherent states.
Of course, if weak coherent states are employed instead of SPDC-generated photon pairs, the mere presence of a nonzero coincidence rate is no indication whatsoever of time-varying unitary operations. This is because, for coherent states, multi-photon emission produces coincidences even for identical input states, which bounds the achievable two-photon interference visibility \cite{ou2007multi}. Therefore, instead of simply measuring the coincidence rate between detectors D1 and D2, one now needs to determine the Hong-Ou-Mandel (HOM) visibility of the two-photon interference that takes place in the beamsplitter. It can be shown that, for a mean photon number $\mu \ll 1$, the HOM visibilities for the depolarizing channel of eq. \ref{eq2} and the time-varying unitary rotation of eq. \ref{eq8} are given, respectively, by \cite{moschandreou2018experimental}:
\begin{equation}
\label{eq16}
V_{\text{HOM}}^{\text{depol}} = \tfrac{1}{2}
\end{equation}
\begin{equation}
\label{eq17}
V_{\text{HOM}}^{\text{unit}} = \tfrac{1}{2}\cos^2\alpha_0
\end{equation}
where, for simplicity, imperfections such as detector dark counts and dead times are initially not taken into consideration. Fig. \ref{fig:3} depicts the HOM visibility values as a function of $\mu$ and $\alpha_0$. One can observe the diminishing visibility as the mean photon number per pulse is increased \cite{amaral2018complementarity}.
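These $\mu \ll 1$ limiting expressions can be compared numerically. The sketch below is our own (illustrative function names) and deliberately ignores the $\mu$-dependence of the visibility shown in Fig. \ref{fig:3}:

```python
import math

def v_depol():
    """eq. (16): expected HOM visibility for the depolarizing channel."""
    return 0.5

def v_unit(alpha0):
    """eq. (17): expected HOM visibility for the time-varying unitary rotation."""
    return 0.5 * math.cos(alpha0)**2

def margin(alpha0):
    """Discrimination margin between the two expected visibilities:
    v_depol - v_unit = (1/2) * sin(alpha0)**2."""
    return v_depol() - v_unit(alpha0)

print(v_unit(0.0))           # 0.5: indistinguishable from the depolarizing channel
print(margin(math.pi / 2))   # 0.5: maximal margin
```

In particular, `margin(alpha0)` is exactly the quantity that must exceed the measurement uncertainty for a conclusive discrimination, as discussed below.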
\begin{figure}
\caption{Simulation results for Hong-Ou-Mandel visibility as a function of the mean number of photons $\mu$ and the rotation angle $\alpha_0$.}
\label{fig:3}
\end{figure}
Carrying the discussion of the single-photon case over to the coherent-state case, it is clear that the two transformations can only be distinguished if the measured visibility differs from the maximum expected visibility of $0.5$; such a result indicates that the channel is introducing a random polarization rotation with $\alpha_0 \neq 0$. If $\alpha_0 = 0$, the cumulative result will be the maximum expected visibility, which is the same result as for the depolarizing channel, i.e., the two effects cannot be differentiated.
Note that, in order to accurately discriminate the transformation imposed by the channel, a cumulative measurement must be performed such that the uncertainty in the measurement results is smaller than the margin of discrimination; this margin is given by the difference between the expected visibilities which, in turn, is a function of $\alpha_0$ and of the mean number of photons $\mu$ of the incoming weak coherent states. For example, if the HOM visibility is measured as $V=\mathcal{V}+\epsilon$, with $\epsilon$ the uncertainty associated with the measurement, no reliable information can be extracted if $\epsilon > V^{\textrm{max}}-\mathcal{V}$, where $V^{\textrm{max}}$ is the maximum expected visibility. The uncertainty can be decreased at the expense of increased acquisition times; clearly, the lower the mean number of photons, the greater the acquisition time required to reach a given uncertainty value.
To clarify this discussion, Fig. \ref{fig:4} depicts three distinct measurements that yield conclusive and inconclusive results. The point at $V=0.5$ is inconclusive irrespective of the associated uncertainty, because it can be the result of either transformation: a random polarization rotation with $\alpha_0=0$ (i.e., a fixed unitary transformation) also achieves maximum visibility. The point at $V\approx0.25$ is inconclusive for a different reason: the uncertainty associated with the measurement is still too large to allow distinguishing between the two classes of transformations, and more measurements are required to reduce it. The last point, at $V\approx0.1$, is the only one that yields a conclusive result, since its uncertainty is small enough to guarantee that only a random polarization rotation could produce such a value.
\begin{figure}
\caption{Exemplification of three different possible measurements and their associated results. A conclusive result is obtained whenever the measured HOM visibility is lower than the maximum visibility for the corresponding mean number of photons.}
\label{fig:4}
\end{figure}
\section{Conclusion}
The differentiation between the transformations imposed by a depolarizing channel and a pseudo-depolarizing channel has been studied in depth, and a method to achieve this distinction in practice was presented for both the single-photon and the weak-coherent-state case. In the single-photon case, which can be approximated in practice by SPDC sources, a nonzero coincidence rate suffices to identify the presence of a time-varying unitary transformation. If weak coherent states are employed, however, a cumulative measurement is necessary to ensure that the measurement uncertainty is smaller than the difference between the expected and the maximum visibility. This poses a practical trade-off: on one hand, the greater the number of accumulated coincidence and single counts, the smaller the uncertainty; on the other hand, the lower the mean number of photons, the closer the measured visibility will be to its maximum value, and the exponentially longer the required acquisition time.
\end{document} |
\begin{document}
\begin{abstract}
We give a bijective proof of the MacMahon-type equidistribution over the group of
signed even permutations $C_2 \wr A_n$ that was stated in~[Bernstein. Electron. J. Combin. 11 (2004) 83]. This is
done by generalizing the bijection that was introduced in the bijective
proof of the equidistribution
over the alternating group $A_n$ in~[Bernstein and Regev. S{\'e}m. Lothar. Combin. 53 (2005) B53b].
\end{abstract}
\title{A $\maj$-$\inv$ bijection for $C_2\wr A_n$}
\section{Introduction}
In~\cite{macmahon:indices} MacMahon proved that two \emph{permutation statistics\/}, namely
the \emph{length\/} (or \emph{inversion number\/})
and the \emph{major index\/}, are equidistributed
over the symmetric group $S_n$ for every $n>0$ (see also~\cite{macmahon:combinatory}). The question of finding a bijective proof of
this remarkable fact arose naturally. That open problem was finally solved by Foata~\cite{foata:netto},
who gave a canonical bijection on $S_n$, for each $n$, that maps one statistic to the other.
In~\cite{foSch:major}, Foata and Sch\"utzenberger proved a refinement by \emph{inverse descent classes\/} of MacMahon's theorem. The theorem has received many additional refinements and generalizations, including~\cite{carlitz:qBernoulli, carlitz:qEulerian, garsia:permutation, reiner:signed, krattenthaler:major, adin:flag, regev:wreath, regev:qstatistics, stanley:sign}.
In~\cite{adin:hyperoctahedral}, Adin, Brenti and Roichman gave an analogue of MacMahon's theorem for
the group of signed permutations $B_n = C_2 \wr S_n$. A refinement of that result by inverse descent classes appeared in~\cite{adin:equiHyperoct},
and a bijective proof was given in~\cite{foata:signedI}. These results are the ``signed'' analogues of
MacMahon's theorem, its refinement by Foata and Sch\"utzenberger and Foata's bijection, respectively.
The MacMahon equidistribution does not hold when the $S_n$ statistics are restricted to the alternating subgroups $A_n \subset S_n$. However, in~\cite{regev:alternating}, Regev and Roichman defined the $\ell_A$ (\emph{$A$-length\/}),
$\rmaj_{A_n}$ (\emph{alternating reverse major index}\/) and $\del_A$ (\emph{$A$-delent number\/})
statistics on $A_n$, and proved the following refined analogue of MacMahon's theorem:
\begin{thm}[{see~\cite[Theorem~6.1(2)]{regev:alternating}}]
For every $n>0$,
\begin{multline*}
\sum_{w \in A_{n+1}} q^{\ell_A(w)} t^{\del_A(w)} = \sum_{w \in
A_{n+1}} q^{\rmaj_{A_{n+1}}(w)} t^{\del_A(w)} \\
= (1+2qt)(1+q+2q^2 t)\cdots(1+q+\dots+q^{n-2}+2q^{n-1}t) .
\end{multline*}
\end{thm}
A bijective proof was later given in~\cite{bernstein:foataForAn} in the form of a mapping $\Psi:A_{n+1}\to A_{n+1}$ with the following properties.
\begin{thm}[{see~\cite[Theorem~5.8]{bernstein:foataForAn}}]\label{PR:Psi}
\begin{enumerate}
\item
The mapping $\Psi$ is a bijection of $A_{n+1}$ onto itself.
\item
For every $v\in A_{n+1}$, $\rmaj_{A_{n+1}}(v) = \ell_A(\Psi(v))$.
\item
For every $v\in A_{n+1}$, $\del_A(v)=\del_A(\Psi(v))$.
\end{enumerate}
\end{thm}
A ``signed'' analogue of the equidistribution over $A_n$ was given in~\cite{bernstein:macmahon} by defining the
$\ell_L$ (\emph{$L$-length\/}) and $\nrmaj_{L_n}$ (\emph{negative alternating reverse major index\/}) statistics on the
group of signed even permutations $L_n = C_2 \wr A_n \subset B_n$ and proving the following.
\begin{prop}[see~{\cite[Proposition~4.1]{bernstein:macmahon}}]\label{PR:1}
For every $B \subseteq [n+1]$
\begin{eqnarray*}
\sum_{\{\, \pi \in
L_{n+1} \mid \Neg(\pi^{-1})\subseteq B\,\} }q^{\nrmaj_{L_{n+1}}(\pi)} = \sum_{\{\, \pi \in
L_{n+1} \mid \Neg(\pi^{-1})\subseteq B\,\} }q^{\ell_L(\pi)} \\
= \prod_{i \in
B}(1+q^i)\prod_{i=1}^{n-1}(1+q+\dots+q^{i-1}+2q^i) ,
\end{eqnarray*}
where $\Neg(\pi^{-1}) = \{\, -\pi(i) \mid 1 \le i \le n+1,\;\pi(i)<0 \,\}$.
\end{prop}
The main result in this note is a bijective proof of Proposition~\ref{PR:1}. It is accomplished by defining a mapping $\Theta : L_{n+1} \to L_{n+1}$ for every $n>0$ and proving the following theorem.
\begin{thm}[see Theorem~\ref{TH:main}]
The mapping $\Theta$ is a bijection of $L_{n+1}$ onto itself, and for every $\pi \in L_{n+1}$,
$\nrmaj_{L_{n+1}}(\pi) = \ell_L(\Theta(\pi))$ and $\Neg(\pi^{-1}) = \Neg(\Theta(\pi)^{-1})$.
\end{thm}
The rest of this note is organized as follows: in Section~\ref{SEC:bg} we introduce some definitions and notations and give
necessary background. In Section~\ref{SEC:decomp} we review the definition of the bijection $\Psi$ and the Main Lemma of~\cite{bernstein:macmahon}, which gives a unique decomposition of elements of $L_n$. In Section~\ref{SEC:main} we define the bijection $\Theta$ and prove the main result.
\section{Background and notation}\label{SEC:bg}
\subsection{Notation}
For an integer $a\ge 0$, let $[a]=\{1,2,\dots,a\}$ (where
$[0]=\emptyset$). Let $C_k$ be the cyclic group of order $k$, let $S_n$ be the symmetric group acting on $1,\dots,n$, and let $A_n \subset S_n$ denote the alternating group.
\subsection{The symmetric group}
Recall that $S_n$ is a Coxeter
group of type $A$, its Coxeter generators being the adjacent
transpositions $\{\,s_i\,\}_{i=1}^{n-1}$ where $s_i:=(i,i+1)$. The
defining relations are the Moore-Coxeter relations:
\[
\begin{split}
s_i^2 = 1 &\quad (1\le i \le n-1),\\
(s_i s_{i+1})^3 = 1 &\quad (1 \le i < n-1),\\
(s_i s_j)^2 = 1 &\quad (|i-j|>1).
\end{split}
\]
For every $j>0$, let
\[
R^S_j = \{ 1,\,s_j,\, s_j s_{j-1},\,\dots,\,s_j s_{j-1} \cdots s_1 \} \subseteq S_{j+1} .
\]
Recall the following fact.
\begin{thm}[{see~\cite[pp.~61--62]{goldschmidt:characters}}]\label{THM:SCanRep}
Let $w \in S_n$. Then there exist unique elements $w_j \in R_j^S$, $1 \le j \le n-1$, such that $w = w_1 \cdots w_{n-1}$. Call this presentation {\em the $S$-canonical presentation of $w$}.
\end{thm}
\subsection{The hyperoctahedral group}
\emph{The hyperoctahedral group\/} $B_n := C_2 \wr
S_n$ is the group of all bijections $\sigma$ of $\{\pm 1,\pm 2,\dots,\pm
n\}$ to itself satisfying $\sigma(-i)=-\sigma(i)$, with function composition as the group operation. It is also known as the group
of \emph{signed permutations\/}.
For $\sigma\in B_n$, we shall use \emph{window notation\/}, writing $\sigma=[\sigma_1,\dots,\sigma_n]$ to
mean that $\sigma(i)=\sigma_i$ for $i\in [n]$,
and let $\Neg(\sigma) := \{\,i\in[n] \mid \sigma(i)<0 \,\}$.
$B_n$ is a Coxeter
group of type $B$, generated by $s_1,\dots,s_{n-1}$ together with
an exceptional generator $s_0:=[-1,2,3,\dots,n]$
(see~\cite[Section~8.1]{bjorner:combinatorics}). In addition to the above relations between $s_1,\dots,s_{n-1}$, we have: $s_0^2 = 1$,
$(s_0 s_1)^4 = 1$, and $s_0 s_i = s_i s_0$ for all $1<i<n$.
\subsection{The alternating group}
Let $a_i := s_1 s_{i+1}$, $1 \le i \le n-1$. Then the set $A = \{\,a_i\,\}_{i=1}^{n-1}$ generates the alternating group $A_{n+1}$. This
generating set comes from~\cite{mitsuhashi:alternating}, where it is shown that the
generators satisfy the relations
\[
\begin{split}
a_1^3 = 1,&\\
a_i^2 = 1 &\quad (1 < i \le n-1),\\
(a_i a_{i+1})^3 = 1 &\quad (1 \le i < n-1),\\
(a_i a_j)^2 = 1 &\quad (|i-j|>1)
\end{split}
\]
(see~\cite[Proposition~2.5]{mitsuhashi:alternating}).
For every $j>0$, let
\[
R_j^A = \{1,\,a_j,\,a_j a_{j-1},\,\dots,\,a_j \cdots a_2,\,a_j \cdots a_2 a_1,\,a_j \cdots a_2 a_1^{-1}\} \subseteq A_{j+2}
\]
(for example, $R_3^A = \{1, a_3, a_3 a_2, a_3 a_2 a_1, a_3 a_2 a_1^{-1}\}$). One has the following
\begin{thm}[{see~\cite[Theorem~3.4]{regev:alternating}}]\label{THM:ACanRep}
Let $v \in A_{n+1}$. Then there exist unique elements $v_j \in R_j^A$, $1 \le j \le n-1$, such that $v = v_1 \cdots v_{n-1}$. Call this presentation {\em the $A$-canonical presentation of $v$}.
\end{thm}
\subsection{The group of signed even permutations}\label{SEC:L}
Our main result concerns the group $L_n:=C_2
\wr A_n$. It is the subgroup of $B_n$ of index 2 containing the
\emph{signed even permutations\/}.
For a more detailed discussion of $L_n$, see~\cite[Section~3]{bernstein:macmahon}.
\subsection{$B_n$, $A_{n+1}$ and $L_{n+1}$ statistics}
Let $r=x_1 x_2\dots x_m$ be an $m$-letter word on a linearly-ordered alphabet $X$. The \emph{inversion
number\/} of $r$ is defined as \[\inv(r):=\#\{\,1\le i<j \le m \mid
x_i>x_j\,\} ,\] its \emph{descent set\/} is defined as
\[
\Des(r) := \{\,1 \le i < m \mid x_i>x_{i+1}\,\} ,
\] and its \emph{descent number\/} as
\[
\des(r) := \abs{\Des(r)} .
\]
For example, taking $X=\mathbb{Z}$ with the usual order on the integers, if $r = 3,-4,2,1,5,-6$, then $\inv(r) = 9$, $\Des(r) = \{1,\,3,\,5\}$ and $\des(r) = 3$.
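As an illustrative aid (not part of the original note; the function names are ours), these word statistics can be computed directly from their definitions:

```python
def inv(word):
    """Inversion number: number of pairs i < j with word[i] > word[j]."""
    return sum(1 for i in range(len(word)) for j in range(i + 1, len(word))
               if word[i] > word[j])

def Des(word):
    """Descent set, with positions indexed from 1 as in the text."""
    return {i + 1 for i in range(len(word) - 1) if word[i] > word[i + 1]}

# The example word r = 3, -4, 2, 1, 5, -6
r = [3, -4, 2, 1, 5, -6]
print(inv(r), sorted(Des(r)), len(Des(r)))   # 9 [1, 3, 5] 3
```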
It is well known that if $w\in S_n$ then $\inv(w) = \ell_S(w)$, where $\ell_S(w)$ is the
\emph{length\/} of $w$ with respect to the Coxeter generators of $S_n$, and that
$\Des(w) = \Des_S(w):= \{\,1 \le i < n \mid \ell_S(w s_i) < \ell_S(w)\,\}$, which is the descent set of $w$ in the Coxeter sense.
Define the \emph{$B$-length\/} of $\sigma \in B_n$ in
the usual way, i.e., $\ell_B(\sigma)$ is the length of $\sigma$
with respect to the Coxeter generators of $B_n$.
The $B$-length can be computed in a combinatorial way as
\[
\ell_B(\sigma) = \inv(\sigma) + \sum_{i \in \Neg(\sigma^{-1})} i
\]
(see, for example,~{\cite[Section~8.1]{bjorner:combinatorics}}).
Given $\sigma\in B_n$, the \emph{$B$-delent number\/} of $\sigma$, $\del_B(\sigma)$, is defined as the number of left-to-right minima in $\sigma$, namely
\[
\del_B(\sigma) := \#\{\,2\le j \le n \mid \text{$\sigma(i)>\sigma(j)$ for all $1 \le i < j$} \,\} .
\]
For example, the left-to-right minima of $\sigma=[5,\,-1,\,2,\,-3,\,4]$ are $\{2,\,4\}$, so
$\del_B(\sigma)=2$.
The \emph{$A$-length} statistic on $A_{n+1}$ was defined in~\cite{regev:alternating} as the length of the $A$-canonical presentation. Given $v \in A_{n+1}$, $\ell_A(v)$ can be computed directly as
\begin{equation}\label{EQ:Alen}
\ell_A(v) = \ell_S(v)-\del_S(v) = \inv(v)-\del_B(v)
\end{equation}
(see~\cite[Proposition~4.4]{regev:alternating}).
\begin{defn}[{see~\cite[Definition~3.15]{bernstein:macmahon}}]\label{DEF:Llen}
Let $\sigma \in B_n$. Define the \emph{$L$-length of $\sigma$\/} by
\[
\ell_L(\sigma) = \ell_B(\sigma)-\del_B(\sigma) = \inv(\sigma)-\del_B(\sigma)+\sum_{i\in\Neg(\sigma^{-1})}i .
\]
\end{defn}
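The combinatorial formulas quoted above for $\ell_B$, $\del_B$ and $\ell_L$ translate into a short sketch (our own; function names are illustrative):

```python
def inv(w):
    """Inversion number of the window [w(1), ..., w(n)]."""
    return sum(1 for i in range(len(w)) for j in range(i + 1, len(w))
               if w[i] > w[j])

def neg_inv(sigma):
    """Neg(sigma^{-1}) = { -sigma(i) : sigma(i) < 0 }."""
    return {-x for x in sigma if x < 0}

def del_B(sigma):
    """B-delent number: number of positions j >= 2 carrying a left-to-right minimum."""
    return sum(1 for j in range(1, len(sigma)) if sigma[j] < min(sigma[:j]))

def ell_B(sigma):
    """B-length: inv(sigma) plus the sum over Neg(sigma^{-1})."""
    return inv(sigma) + sum(neg_inv(sigma))

def ell_L(sigma):
    """L-length of Definition 3.15: ell_B(sigma) - del_B(sigma)."""
    return ell_B(sigma) - del_B(sigma)

# The running example sigma = [5, -1, 2, -3, 4]
s = [5, -1, 2, -3, 4]
print(del_B(s), ell_B(s), ell_L(s))   # 2 10 8
```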
Given $\pi \in L_{n+1}$, let
\[
\Des_A(\pi) := \{\, 1 \le i \le n-1 \mid \ell_L(\pi a_i) \le \ell_L(\pi) \,\} ,
\]
\[
\rmaj_{L_{n+1}}(\pi) := \sum_{i \in \Des_A(\pi)} (n-i) ,
\]
and
\[
\nrmaj_{L_{n+1}}(\pi) := \rmaj_{L_{n+1}}(\pi) + \sum_{i\in \Neg(\pi^{-1})}i .
\]
For example, if $\pi=[5,-1,2,-3,4]$ then $\Des_A(\pi) = \{1,2\}$,
$\rmaj_{L_5}(\pi)=5$, and $\nrmaj_{L_5}(\pi)=5+1+3=9$.
\begin{rem}\label{RE:coincide}
Restricted to $A_{n+1}$, the $\rmaj_{L_{n+1}}$ statistic coincides with the $\rmaj_{A_{n+1}}$ statistic as defined in~\cite{regev:alternating} and used in Theorem~\ref{PR:Psi}.
\end{rem}
\section{The bijection $\Psi$ and the decomposition lemma}\label{SEC:decomp}
\subsection{The {F}oata bijection}
The {\em second fundamental transformation on words\/} $\Phi$ was introduced in~\cite{foata:netto} (for a full description, see~\cite[Section~10.6]{lothaire:words}). It is defined on any finite word $r=x_1 x_2 \dots x_m$ whose letters $x_1,\dots,x_m$ belong to a totally ordered alphabet. Instead of the original recursive definition, we give the algorithmic description of $\Phi$ from~\cite{foSch:major}.
\begin{algo}[$\Phi$]\label{ALGO:Phi}
Let $r=x_1 x_2 \dots x_m$ ;
1. Let $i:=1$, $r'_i := x_1$ ;
2. If $i=m$, let $\Phi(r):=r'_i$ and stop; else continue;
3. If the last letter of $r'_i$ is less than or equal to (respectively greater than) $x_{i+1}$, cut $r'_i$ after every letter less than or equal to (respectively greater than) $x_{i+1}$ ;
4. In each compartment of $r'_i$ determined by the previous cuts,
move the last letter in the compartment to the beginning of it;
let $t'_i$ be the word obtained after all those moves; put
$r'_{i+1} := t'_i \, x_{i+1}$ ; replace $i$ by $i+1$ and go to
step 2.
\end{algo}
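The steps of Algorithm \ref{ALGO:Phi} can be transcribed into a short Python sketch (our own; not from the sources cited above):

```python
def foata(word):
    """Foata's second fundamental transformation (Algorithm Phi, sketched).

    Sends the major index to the inversion number, inv(foata(r)) = maj(r),
    and preserves the last letter of the word.
    """
    if not word:
        return []
    out = [word[0]]                       # step 1: r'_1 = x_1
    for x in word[1:]:                    # one pass of steps 2-4 per letter
        small = out[-1] <= x              # which cutting rule applies
        result, block = [], []
        for y in out:
            block.append(y)
            # cut after every letter <= x (respectively > x)
            if (y <= x) == small:
                # move the compartment's last letter to its front
                result.extend([block[-1]] + block[:-1])
                block = []
        # the last letter of `out` always closes a compartment,
        # so `block` is empty here
        out = result + [x]
    return out

print(foata([2, 3, 5, 1, 4]))   # [2, 3, 1, 5, 4]
```

The printed value agrees with the worked computation of $\Phi$ in the example of Section~\ref{SEC:main}.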
\subsection{The covering map $f$ and its local inverses $g_u$}
Recall the $S$- and $A$-canonical presentations from Theorems~\ref{THM:SCanRep} and~\ref{THM:ACanRep}.
The following {\em covering map\/} $f$, which plays an important role in the construction of the
bijection $\Psi$, relates between $S_n$ and $A_{n+1}$ by canonical presentations.
\begin{defn}[{see~\cite[Definition~5.1]{regev:alternating}}]
Define $f:R_j^A \to R_j^S$ by
\begin{enumerate}
\item
$f(a_j a_{j-1}\cdots a_\ell) = s_j s_{j-1}\cdots s_\ell$ if $\ell\ge
2$, and
\item
$f(a_j\cdots a_1) = f(a_j\cdots a_1^{-1}) = s_j\cdots s_1$.
\end{enumerate}
Now extend $f:A_{n+1} \to S_n$ as follows:
let $v\in A_{n+1}$, $v=v_1 \cdots v_{n-1}$ its $A$-canonical presentation, then
\[
f(v) := f(v_1)\cdots f(v_{n-1}),
\]
which is clearly the $S$-canonical presentation of $f(v)$.
\end{defn}
In other words, given $v\in A_{n+1}$ in canonical presentation $v=a_{i_1}^{\epsilon_1} a_{i_2}^{\epsilon_2} \cdots a_{i_r}^{\epsilon_r}$, we obtain $f(v)$ simply by replacing each $a$ by an $s$ (and deleting the exponents): $f(v) = s_{i_1} s_{i_2} \cdots s_{i_r}$.
The following maps serve as ``local inverses'' of $f$.
\begin{defn}
For $u \in A_{n+1}$ with $A$-canonical presentation $u= u_1 u_2 \cdots u_{n-1}$,
define $g_u:R_j^S \to R_j^A$ by
\[
g_u(s_j s_{j-1}\cdots s_\ell) = a_j a_{j-1} \cdots a_\ell
\quad \text{if \;$\ell\ge 2$,\; and} \quad
g_u(s_j s_{j-1}\cdots s_1) = u_j.
\]
Now extend $g_u:S_n \to A_{n+1}$ as follows:
let $w\in S_n$, $w=w_1 \cdots w_{n-1}$ its $S$-canonical presentation, then
\[
g_u(w) := g_u(w_1)\cdots g_u(w_{n-1}),
\]
which is clearly the $A$-canonical presentation of $g_u(w)$.
\end{defn}
\subsection{The bijection $\Psi$}
Let $w= x_1 x_2 \dots x_m$ be an $m$-letter word on some alphabet $X$. Denote
the {\em reverse\/} of $w$ by $\mathbf{r}(w):=x_m x_{m-1} \dots x_1$, and let $\overleftarrow\Phi := \mathbf{r} \Phi \mathbf{r}$, the
{\em right-to-left Foata transformation}.
\begin{defn}\label{DEF:Psi}
Define $\Psi:A_{n+1} \to A_{n+1}$ by $\Psi(v) =
g_v(\overleftarrow\Phi(f(v)))$ .
\end{defn}
That is, the image of $v$ under $\Psi$ is obtained by applying
$\overleftarrow\Phi$ to $f(v)$ in $S_n$, then using $g_v$ as an ``inverse''
of $f$ in order to ``lift'' the result back to $A_{n+1}$.
Some of the key properties of $\Psi$ are given in Theorem~\ref{PR:Psi}.
\subsection{The decomposition lemma}
\begin{defn}
Let $r=x_1\dots x_m$ be an $m$-letter word on a linearly-ordered alphabet $X$. Define $\sort(r)$ to be the non-decreasing word with the letters of $r$.
\end{defn}
For example, taking $X=\mathbb{Z}$ with the usual order on the integers,\\
$\sort(-4,\,2,\,3,\,-5,\,1,\,2) = -5,\,-4,\,1,\,2,\,2,\,3$.
\begin{defn}
For $\pi \in L_{n+1}$, define $s(\pi) \in L_{n+1}$ by
\[
s(\pi) = \begin{cases}
\sort(\pi), &\text{if $\sum_{i \in \Neg(\pi^{-1})} i$ is even};\\
\sort(\pi) s_1, &\text{otherwise}.
\end{cases}
\]
\end{defn}
The following lemma gives a unique decomposition of every element in $L_n$ into a descent-free factor and a signless
even factor.
\begin{lem}\label{LE:main}
For every $\pi \in L_{n+1}$, the only $\sigma \in L_{n+1}$ such that $\sigma^{-1}\pi \in A_{n+1}$ and $\des_A(\sigma)=0$
is $\sigma = s(\pi)$. Moreover, $\sigma = s(\pi)$ and $u = \sigma^{-1}\pi$ satisfy $\Des_A(u)=\Des_A(\pi)$, $\inv(u)-\del_B(u) = \inv(\pi)-\del_B(\pi)$, and $\Neg(\pi^{-1}) = \Neg(\sigma^{-1})$.
\end{lem}
See~\cite[Lemma~4.6]{bernstein:macmahon} for the proof.
\begin{cor}\label{COR:uniq}
If $\sigma \in L_{n+1}$ and $\des_A(\sigma) = 0$, then for every $u \in A_{n+1}$, $s(\sigma u) = \sigma$.
\end{cor}
\section{The main result}\label{SEC:main}
\begin{defn}
Define $\Theta:L_{n+1}\to L_{n+1}$ for each $n>0$ by
\[
\Theta(\pi) = s(\pi) \Psi( s(\pi)^{-1} \pi ) .
\]
\end{defn}
\begin{thm}\label{TH:main}
The mapping $\Theta$ is a bijection of $L_{n+1}$ onto itself, and for every $\pi \in L_{n+1}$,
$\nrmaj_{L_{n+1}}(\pi) = \ell_L(\Theta(\pi))$ and $\Neg(\pi^{-1}) = \Neg(\Theta(\pi)^{-1})$.
\end{thm}
\begin{exmp}
As an example, let $\pi = [3,\,-6,\,-4,\,5,\,2,\,-1] \in L_6$. We have $\Des_A(\pi) = \{1,\,3,\,4\}$
and therefore $\nrmaj_{L_6}(\pi) = 4+2+1+6+4+1 = 18$. Since $\sum_{i\in\Neg(\pi^{-1})}i = 11$ is odd,
we have $\sigma:=s(\pi)=\sort(\pi)s_1=[-4,\,-6,\,-1,\,2,\,3,\,5]$ and $u:=\sigma^{-1}\pi = [5,\,2,\,1,\,6,\,4,\,3]$. One can verify that the $A$-canonical presentation of $u$ is
$u = (1)(a_2)(a_3 a_2 a_1^{-1})(a_4 a_3)$, so $f(u)=(1)(s_2)(s_3 s_2 s_1)(s_4 s_3) = [4,\,1,\,5,\,3,\,2]$. Next
we compute $\overleftarrow\Phi(f(u))$ as follows: $r:=\mathbf{r}(f(u)) = [2,\,3,\,5,\,1,\,4]$. Applying Algorithm~\ref{ALGO:Phi}
to $r$ we get
\[
\begin{aligned}
r'_1 &= 2 \mid\\
r'_2 &= 2 \mid 3 \mid\\
r'_3 &= 2 \mid 3 \mid 5 \mid\\
r'_4 &= 2 \mid 3 \mid 5\;\;\;1 \mid\\
\Phi(r) = r'_5 &= 2\;\;\;3\;\;\;1\;\;\;5\;\;\;4 \quad,
\end{aligned}
\]
so $v:=\overleftarrow\Phi(f(u)) = [4,\,5,\,1,\,3,\,2]$, whose $S$-canonical presentation is\\ $v=(1)(s_2)(s_3 s_2 s_1)(s_4 s_3 s_2)$. Therefore $\Psi(u)=g_u(v) = (1)(a_2)(a_3 a_2 a_1^{-1})(a_4 a_3 a_2) = [2,\,5,\,6,\,1,\,4,\,3]$.
Finally, $\Theta(\pi) = \sigma\Psi(u) = [-6,\,3,\,5,\,-4,\,2,\,-1]$, and indeed $\ell_L(\Theta(\pi)) =
7-0+11 = 18 = \nrmaj_{L_6}(\pi)$.
\end{exmp}
\begin{proof}[Proof of Theorem~\ref{TH:main}]
The bijectivity of $\Theta$ follows from the bijectivity of $\Psi$ together with Corollary~\ref{COR:uniq}.
Let $\pi \in L_{n+1}$, $\sigma = s(\pi)$ and $u = \sigma^{-1}\pi$. By Definition~\ref{DEF:Llen},
\[
\ell_L(\Theta(\pi)) = \ell_L(\sigma \Psi(u)) = \inv(\sigma\Psi(u))-\del_B(\sigma\Psi(u))+\sum_{i\in\Neg((\sigma\Psi(u))^{-1})} i .
\]
By Corollary~\ref{COR:uniq} and Lemma~\ref{LE:main},
\[
\inv(\sigma\Psi(u))-\del_B(\sigma\Psi(u)) = \inv(\Psi(u))-\del_B(\Psi(u))
\]
and
\[
\Neg((\sigma\Psi(u))^{-1}) = \Neg(\sigma^{-1}) = \Neg(\pi^{-1}) ,
\]
so
\[
\ell_L(\Theta(\pi)) = \inv(\Psi(u))-\del_B(\Psi(u)) + \sum_{i \in \Neg(\pi^{-1})} i .
\]
By identity~\eqref{EQ:Alen} and Theorem~\ref{PR:Psi},
\[
\inv(\Psi(u))-\del_B(\Psi(u)) = \ell_A(\Psi(u)) = \rmaj_{A_{n+1}}(u) = \sum_{i \in \Des_A(u)} i .
\]
Again by Lemma~\ref{LE:main}, $\Des_A(u) = \Des_A(\pi)$, whence by Remark~\ref{RE:coincide}, $\rmaj_{A_{n+1}}(u) = \rmaj_{L_{n+1}}(\pi)$.
Thus
\[
\ell_L(\Theta(\pi)) = \rmaj_{L_{n+1}}(\pi)+\sum_{i \in \Neg(\pi^{-1})} i = \nrmaj_{L_{n+1}}(\pi) . \qedhere
\]
\end{proof}
\end{document} |
\begin{document}
\title[Duality and Smoothing]{Duality of holomorphic function spaces and smoothing
properties of
the Bergman projection}
\author{A.-K. Herbig, J. D. McNeal, \& E. J. Straube}
\subjclass[2000]{32A36, 32A25, 32C37}
\thanks{Research supported in part by Austrian Science Fund FWF grants Y377 and V187N13
(Herbig), National Science Foundation grants DMS--0752826 (McNeal) and DMS--0758534
(Straube), and by the Erwin Schr\"{o}dinger International Institute for
Mathematical Physics.}
\address{Department of Mathematics, \newline University of Vienna, Vienna, Austria}
\email{[email protected]}
\address{Department of Mathematics, \newline Ohio State University, Columbus, Ohio, USA}
\email{[email protected]}
\address{Department of Mathematics, \newline Texas A\&M University, College Station, Texas, USA}
\email{[email protected]}
\begin{abstract}
Let $\Omega\subset\subset\mathbb{C}^{n}$ be a domain with smooth boundary, whose
Bergman projection $B$ maps the Sobolev space $H^{k_{1}}(\Omega)$ (continuously) into
$H^{k_{2}}(\Omega)$. We establish two smoothing results: (i) the full Sobolev norm
$\|Bf\|_{k_{2}}$ is controlled by $L^2$ derivatives of $f$ taken along \textit{a single,
distinguished direction} (of order $\leq k_{1}$), and (ii) the projection of a
\textit{conjugate holomorphic} function in $L^{2}(\Omega)$ is automatically in
$H^{k_{2}}(\Omega)$. There are obvious corollaries when $B$ is globally regular.
\end{abstract}
\maketitle
\section{{Introduction}}\label{intro}
In this paper, we consider the range of the Bergman projection, $B$, associated to
a smoothly bounded domain $\Omega\subset\subset\mathbb{C}^n$ through the following
question: what functions defined on $\Omega$ does $B$ map to elements of
$C^{\infty}(\overline{\Omega})$? Our particular interest is in finding families of
functions ${\mathcal F}$, ${\mathcal
F}\not\subset C^{\infty}(\overline{\Omega})$, such that $B\left({\mathcal F}\right)\subset
C^{\infty}(\overline{\Omega})$.
We present two partial answers to this general question, both under the hypothesis that
$B$ satisfies Condition R of Bell--Ligocka \cite{BellLigocka}, which says that
$B\left(C^{\infty}(\overline{\Omega})\right) \subseteq C^{\infty}(\overline{\Omega})$.
In the first, we show that an $L^2$ function $f$ which has square-integrable derivatives
of all orders only in a {\it single, distinguished direction} is
necessarily mapped by $B$ to a function in $C^{\infty}(\overline{\Omega})$.
The direction is distinguished by being both tangential to $b\Omega$, the boundary of $\Omega$, and not contained in
the complex tangent space to $b\Omega$ (the last property is called \textit{complex transversal}, for short).
Note that no smoothness of $f$ is assumed except in this one direction, yet $Bf$ is smooth up to the
boundary in all directions. This `partial smoothing' property of $B$ has a different
character than traditionally studied mapping properties of $B$,
which concentrate on whether $B$ preserves various Banach space structures, and was
discovered recently by the first two authors \cite{HerMcN10}. Our first result here
generalizes the main theorem in \cite{HerMcN10}, which established partial smoothing of
$B$ under the hypothesis that the $\bar\partial$-Neumann operator on $\Omega$ is compact.
The second result in this paper says that all {\it conjugate holomorphic} functions in
$L^2(\Omega)$ are mapped to $C^{\infty}(\overline{\Omega})$ by $B$. This result differs
from the first as functions in $\overline{A^0}(\Omega)$, where ${A^0}(\Omega)$ denotes the
space of square-integrable holomorphic functions on $\Omega$, need have no derivatives in
$L^2(\Omega)$.
We now state our results more precisely. $H^{k}(\Omega)$ denotes the standard
$L^{2}$ Sobolev space of order $k$, and $H_{T}^{k}(\Omega)$ denotes the Sobolev space of
order $k$ involving only derivatives in direction $T$, see Definition \ref{D:TSobolev}.
\begin{theorem}\label{T:main}
Let $\Omega\subset\subset\mathbb{C}^{n}$ be a smoothly bounded domain and let $T$
be a smooth vector field on $\overline{\Omega}$ that is tangential and complex
transversal at the boundary. Suppose that there exists a pair $(k_{1}, k_{2}) \in
\mathbb{N} \times \mathbb{N}$ and a constant $C_{1}>0$ such that
\begin{align} \label{main1}
\left\|Bf\right\|_{k_{2}}\leq
C_{1}\left\|f\right\|_{k_{1}}\qquad\hskip .2 cm\forall\hskip .2 cm f\in H^{k_{1}}(\Omega).
\end{align}
Then there exists a
constant $C_{2}>0$, such that
\begin{align}\label{main2}
\|Bf\|_{k_{2}}\leq C_{2}\|f\|_{k_{1},T}\qquad\hskip .2 cm\forall\hskip .2 cm f\in
H_{T}^{k_{1}}(\Omega) .
\end{align}
\end{theorem}
Note that in \eqref{main1}, automatically $k_{1} \geq k_{2}$. We emphasize that
\eqref{main2} is a genuine estimate: if the
right hand side is finite, i.e., $f$ has $T$ derivatives up to order $k_{1}$ in
$L^{2}(\Omega)$, then $Bf \in H^{k_{2}}(\Omega)$, and the estimate holds.
Condition R is equivalent to the statement that for each $k_{2} \in \mathbb{N}$, there
exists $k_{1} \in \mathbb{N}$ such that $B: H^{k_{1}}(\Omega) \rightarrow
H^{k_{2}}(\Omega)$. Combined with the Sobolev Lemma to the effect that
$\cap_{k=0}^{\infty}H^{k}(\Omega) =
C^{\infty}(\overline{\Omega})$, Theorem \ref{T:main} implies in particular the
$C^{\infty}(\overline{\Omega})$ result described in the second paragraph above:
\begin{corollary}\label{C:main}
Assume $\Omega$ and $T$ are as in Theorem \ref{T:main}, and that Condition R holds for
$\Omega$. Then $B$ maps $H_{T}^{\infty}:=\cap_{k=0}^{\infty}H_{T}^{k}(\Omega)$ (continuously) to
$C^{\infty}(\overline{\Omega})$.
\end{corollary}
The method we use to prove Theorem \ref{T:main} is rather general and applicable in
other situations, e.g., to other operators connected to elliptic PDEs and to other spaces
of holomorphic functions.
The proof consists of two distinct steps. The
first step is to show that when $B$ satisfies the regularity condition \eqref{main1}, the
$H^{k_{2}}(\Omega)$ norm of a holomorphic function $g$ can be estimated by pairing $g$,
in the ordinary $L^2$ inner product, against holomorphic functions contained in the unit ball of
$H^{-k_{1}}(\Omega)$ (see Proposition \ref{P:pieceofduality} below). This is a
special fact about holomorphic functions (but it can also be established for solutions to
other homogeneous elliptic systems, see \cite{Bell82b, Straube84, Ligocka86} and Appendix
B in \cite{Boas87}). This special type of duality for holomorphic functions originates in
the work of Bell \cite{Bell81a, Bell82a}, and was subsequently extended and refined in
\cite{Bell82b, BellBoas84, Komatsu84, Straube84, Barrett95}. For our purposes, duality is
used to reduce the problem of estimating $\|Bf\|_{k_{2}}$ significantly. In order to
illustrate how this works, assume for simplicity that $k_{1}=k_{2}=1$. Then, for some constant $c>0$,
\begin{align}\label{E:Bf1norm}
\|Bf\|_1&\leq c \sup\left\{\left|(Bf,h)\right|: h\in A^{1}(\Omega),\hskip .2 cm\|h\|_{-1}\leq 1
\right\}\notag\\
&=c\sup\left\{\left|(f,h)\right|: h\in A^{1}(\Omega),\hskip .2 cm\|h\|_{-1}\leq 1
\right\}.
\end{align}
The key point is that the self-adjointness of $B$ in the ordinary $L^2$ inner product,
which yields the last equality, `eliminates' $B$ from the right-hand side of
\eqref{E:Bf1norm}. This obviates the need to study the commutator of $B$ with differential
operators in order to bound the left-hand side of \eqref{E:Bf1norm}; estimating such
commutators is a difficult problem in general since $B$ is an abstractly given integral
operator. (This effect also occurs when one can use vector fields with holomorphic
coefficients to control Sobolev norms of $Bf$, see \cite{Barrett86}.) A similar use of
duality seems applicable to other self-adjoint operators, and also to $B$ itself in other
scales of Banach spaces besides $H^k(\Omega)$.
Let $T$ be the vector field from Theorem \ref{T:main}. Once \eqref{E:Bf1norm} is in hand,
the second step (still assuming $k_{1}=k_{2}=1$) consists of replacing $h$, on the
right-hand side of \eqref{E:Bf1norm}, by
the sum of the $\overline{T}$ derivative of a function, ${\mathcal H}_1$, and of another
function, ${\mathcal H}_2$, both of whose $L^2$ norms are uniformly bounded (when
$\|h\|_{-1} \leq 1$). Since $T$ is tangential, it follows that
\begin{align*}
(f,h)=(f,\overline{T}{\mathcal H}_1+{\mathcal H}_2)
&=(\overline{T}^{*}f, {\mathcal H}_1)+(f,{\mathcal H}_2) \\
&\leq C\left( \|Tf\| +\|f\|\right),
\end{align*}
where the last inequality follows after
noticing that the $L^{2}$ adjoint $\overline{T}^{*}$ differs from $-T$
by a term bounded in $L^{2}$. In this very simple way,
the full Sobolev norm of $Bf$ of order $1$ is controlled by the $L^2$ norm of the derivative of $f$
in the special direction $T$, provided ${\mathcal H}_1, {\mathcal H}_2$ exist.
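The claim about $\overline{T}^{*}$ is elementary; as a sketch, write
$T=\sum_{j}c_{j}\,\partial/\partial x_{j}$ in the underlying real coordinates, with
$c_{j}\in C^{\infty}(\overline{\Omega})$. Tangentiality of $T$ makes the boundary terms
in the integration by parts vanish, so for $u$, $v$ smooth,
\begin{equation*}
\left(u,\overline{T}v\right)=\int_{\Omega}u\,T\overline{v}
=-\int_{\Omega}\left(Tu\right)\overline{v}
-\int_{\Omega}\Bigl(\sum_{j}\frac{\partial c_{j}}{\partial x_{j}}\Bigr)u\,\overline{v}\,;
\end{equation*}
that is, $\overline{T}^{*}=-T-\sum_{j}\partial c_{j}/\partial x_{j}$, and the
zeroth-order term is bounded on $\overline{\Omega}$.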
Proving the existence of ${\mathcal H}_1$ and ${\mathcal H}_2$, and similar functions for
higher powers of $\overline{T}$, is conceptually simple, but, for higher powers,
somewhat technical. This fact accounts for much of the length of this paper.
To simplify matters, suppose temporarily that $T$ is of the form
$T = T_{1} + L$, where both $T_{1}$
and $L$ are tangential at the boundary, $T_{1}$ is real, and $L$ is of type $(1,0)$. Note
that $T_{1}$ is complex transversal because $T$ is; consequently, if $J$ denotes the
complex structure map on $\mathbb{C}^{n}$, $JT_{1}$ is transversal to the boundary of
$\Omega$. Let now $h\in A^{1}(\Omega)$ with $\|h\|_{-1}\leq 1$ be given. The
goal is to write $h=\overline{T}{\mathcal H}_1+ {\mathcal H}_2$ where $\|{\mathcal
H}_1\|_{L^2}, \|{\mathcal H}_2\|_{L^2}$ are bounded by constants depending only on
$\Omega$ and $T$. To this end, consider first
$\mathfrak{A}\circ(JT_{1}h)$, where $\mathfrak{A}$ is the operator of
`anti-differentiation along the direction $JT_{1}$'. Then
\begin{equation}\label{Intro:1}
h(z)= \mathfrak{A}\circ JT_{1}h (z) +h(q),
\end{equation}
where $q=q(z)$ varies in a fixed compact subset of $\Omega$. Because $h$ is holomorphic,
the contributions given by $h(q)$ are easily shown to be bounded in $L^2$ and are folded into the function $\mathcal{H}_2$. Furthermore, the Cauchy--Riemann
equations show that
$JT_{1}(h)= -i\,T_{1}(h)$. Thus
\begin{align}\label{Intro:2}
\mathfrak{A}\circ JT_{1}(h)&= \mathfrak{A}\circ (-i)\, T_{1}h\notag \\
&= -i\left(\mathfrak{A}\circ \overline{T}(h) - \mathfrak{A}\left(\overline{L}
h\right)\right) \; .
\end{align}
But $\overline{L}h$ vanishes since $h$ is holomorphic. As a last step, we commute
$\overline{T}$ with $\mathfrak{A}$ in \eqref{Intro:2}:
\begin{equation}\label{Intro:3}
\mathfrak{A}\circ \overline{T}(h)=\overline{T}\circ\mathfrak{A}(h)+\left[ \mathfrak{A},
\overline{T}\right](h).
\end{equation}
The commutator in \eqref{Intro:3} is straightforward to analyze since both $\mathfrak{A}$
and $\overline{T}$ are explicit operators; this term forms the final component of the
function $\mathcal{H}_2$. The term $-i\,\mathfrak{A}(h)$ is the function $\mathcal{H}_1$.
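Collecting \eqref{Intro:1}, \eqref{Intro:2}, and \eqref{Intro:3}, and using
$\overline{L}h=0$, the decomposition can be summarized schematically as
\begin{equation*}
h \;=\; \overline{T}\,\underbrace{\left(-i\,\mathfrak{A}(h)\right)}_{=\,\mathcal{H}_1}
\;+\;\underbrace{\left(-i\left[\mathfrak{A},\overline{T}\right](h)+h(q)\right)}_{=\,\mathcal{H}_2}.
\end{equation*}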
The needed inequality $\|\mathfrak{A}(h)\|\leq C \|h\|_{-1}$ is a consequence of
Hardy's inequality, which says that $\mathfrak{A}$ gains a factor
of the boundary distance $d(z)$, together with the fact that such a factor gains a
derivative in the case of holomorphic functions:
\begin{align}\label{Hardy-hol}
\|\mathfrak{A}(h)\|\leq c_{1}\|d(z)h\| \leq c_{2}\|h\|_{-1}\; .
\end{align}
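We also indicate why $JT_{1}(h)=-i\,T_{1}(h)$ for holomorphic $h$. This is a sketch,
using the sign convention $JP=-iP$, $J\overline{P}=i\overline{P}$ for $P$ of type
$(1,0)$, consistent with \eqref{Intro:2}: since $T_{1}$ is real, write
$T_{1}=P+\overline{P}$ with $P$ of type $(1,0)$. The Cauchy--Riemann equations give
$\overline{P}h=0$, so
\begin{equation*}
JT_{1}(h)=\left(JP+J\overline{P}\right)(h)=-iP(h)+i\overline{P}(h)=-iP(h)=-i\,T_{1}(h).
\end{equation*}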
Coming to our second result, we start with an observation about the Bergman projection on
the unit disk. Modulo constants, conjugate holomorphic functions on the unit disc are
orthogonal to the Bergman space. Equivalently: their Bergman projections are constant
functions. Of course, this fails on general (even planar) domains. Our second result says
that nevertheless, if the Bergman projection satisfies a regularity estimate such as
\eqref{main1}, projections of conjugate holomorphic functions are still as good as
projections of functions smooth up to the boundary: they belong to $H^{k_{2}}(\Omega)$. In
particular, if Condition R holds, these projections are themselves smooth up to the
boundary.
\begin{theorem}\label{T:holconjsmoothing}
Let $\Omega\subset\subset\mathbb{C}^{n}$ be a smoothly bounded domain.
Suppose that the Bergman projection $B$ on $\Omega$ satisfies the regularity
condition
\eqref{main1}. Then there is a constant $C > 0$ such that
\begin{align}\label{E:holconjsmoothingfinite}
\|Bf\|_{k_{2}} \leq C\|f\|
\end{align}
for all conjugate holomorphic functions $f$ in $L^{2}(\Omega)$.
\end{theorem}
In contrast to Theorem \ref{T:main}, smoothing here takes place in all directions. But
just as with Theorem \ref{T:main}, there is an immediate corollary when Condition R
holds.
\begin{corollary}\label{holconjsmoothingcondR}
Let $\Omega$ be as in Theorem \ref{T:holconjsmoothing} and assume that Condition R holds.
Then for every $k \in \mathbb{N}$ there is a constant $C_{k} > 0$ such that
\begin{align}\label{E:holconjsmoothinginfinite}
\|Bf\|_{k} \leq C_{k}\|f\|
\end{align}
for $f$ conjugate holomorphic in $L^{2}(\Omega)$. In particular, $Bf$ is smooth up to the
boundary.
\end{corollary}
The proof of Theorem \ref{T:holconjsmoothing}, say again for $k_{1}=k_{2}=1$, also starts
with \eqref{E:Bf1norm}. But now, estimating $|(f,h)|$ means we are estimating
the (absolute value of) the integral of a {\it holomorphic} function (namely
$\overline{f}h$) over $\Omega$. Such an integral can be dominated by
$\|\overline{f}h\|_{-m}$, for any $m \in\mathbb{N}$ (\cite{Bell82b, BellBoas84, Komatsu84,
Straube84}). Finally, the equivalence, for holomorphic functions, of membership in a
negative Sobolev space and polynomial boundedness in the reciprocal of the boundary
distance (\cite{Bell82b, Straube84}) gives the estimate $\|\overline{f}h\|_{-m} \leq
C_{m,k} \|f\|\,\|h\|_{-k}$ for $m$ big enough (relative to $k$). We remark that the last
two steps are valid for functions in the kernel of more general elliptic operators
(systems), see \cite{LionsMagenes} and \cite{Roitberg},
respectively, for the relevant results. However, the first step after invoking
\eqref{E:Bf1norm} may fail: the product of two harmonic functions need not be
harmonic (e.g., $u(x,y)=x$ is harmonic, while $\Delta(u^{2})=2\neq 0$).
Regularity properties like \eqref{main1} are known to hold on a large class of domains.
When the domain $\Omega$ is pseudoconvex, these properties are essentially
equivalent to corresponding regularity properties of the $\overline{\partial}$-Neumann
operator. For example when the domain is of finite
type, or when it admits a defining function that is plurisubharmonic at boundary points
(in particular, when the domain is convex), \eqref{main1} holds for the pair $(k,k)$ for
all $k \in \mathbb{N}$. Nonpseudoconvex domains on which regularity estimates
for the Bergman projection hold include Reinhardt domains, complete Hartogs domains in
$\mathbb{C}^{2}$, and domains with `approximate symmetries'. For these results and for
further information on the $L^{2}$ Sobolev regularity theory of the Bergman projection
and of the $\overline{\partial}$-Neumann operator, we refer the reader to
\cite{ChenShaw01, BoasStraube99, Straube10} and their references. \cite{BoasStraube99}
also contains a discussion of the connection between the regularity theory of the Bergman
projection and duality of holomorphic function spaces.
The paper is laid out as follows. In Section \ref{S:2} we define the notions
needed for
Theorem \ref{T:main}, collect some standard definitions, and give a new proof of (a
portion of) Bell's duality theorem on holomorphic functions in Sobolev spaces. In Section
\ref{S:3}, we state the anti-differentiation result, Proposition \ref{P:antiderivative},
then give a proof of Theorem \ref{T:main} modulo a proof of Proposition
\ref{P:antiderivative}. Section \ref{S:anti} is devoted to a proof of the
anti-differentiation result, broken down into several subsections. In these subsections
we develop in some detail the algebra of operators associated to anti-differentiation
along integral curves of transverse vector fields, since these results are needed for our
proof of Proposition \ref{P:antiderivative}; they may also prove useful elsewhere. In
Section \ref{S:5} we prove Theorem
\ref{T:holconjsmoothing}.
\section{{Function spaces, duality, and complex transversality}}\label{S:2}
Throughout the paper, $\Omega$ will denote a domain with smooth boundary
$b\Omega$, contained in
$\mathbb{C}^{n}$. The standard $L^2$ inner product and norm on functions defined on
$\Omega$ will be denoted
$$(f,g)=\int_\Omega f\, \bar g \qquad\text{and}\qquad \|f\|=\left(\int_\Omega |f|^2\right)^{1/2},$$
where the integrals are taken with respect to the Euclidean volume element.
If $k$ is a positive integer, we let $H^{k}(\Omega)$ denote the usual Sobolev space of complex-valued functions whose norm $\|.\|_k$
is induced by the inner product
$$(f,g)_k=\sum_{|\alpha|\leq k}\left(D^\alpha f, D^\alpha g\right),$$
for $\alpha$ a multi-index and $D^\alpha$ denoting differentiation of order $\alpha$.
Let $C^{\infty}_{0}(\Omega)$ denote the set of smooth
functions with compact support in $\Omega$ and $C^{\infty}(\overline{\Omega})$ the set of
functions smooth up to $b\Omega$. As is well-known,
$C^{\infty}(\overline{\Omega})$ is dense in $H^{k}(\Omega)$.
We let $H_{0}^{k}(\Omega)$ be the closure of $C^{\infty}_{0}(\Omega)$
in $H^{k}(\Omega)$.
The dual space of $H_{0}^{k}(\Omega)$ will be denoted $H^{-k}(\Omega)$. Because
$C^{\infty}_{0}(\Omega)$ is dense in $H^{k}_{0}(\Omega)$, $H^{-k}(\Omega)$ imbeds
naturally into the space of distributions $\mathcal{D}^{\prime}(\Omega) =
\left(C^{\infty}_{0}(\Omega)\right)^{*}$ on $\Omega$, and $L \in
\mathcal{D}^{\prime}(\Omega)$ belongs to $H^{-k}(\Omega)$ precisely when
\begin{equation}\label{E:dualnorm}
\|L\|_{-k}=\sup\left\{\left|\left\langle L,\phi\right\rangle\right|:
\phi\in C^{\infty}_{0}(\Omega), \|\phi\|_{k} \leq 1\right\}
\end{equation}
is finite. Here $\left\langle L,\phi\right\rangle$ denotes the action of the distribution
$L$ on the test function $\phi$. We shall only need to compute \eqref{E:dualnorm} on
certain $L\in L^2(\Omega)$, in which case $\left\langle L,\phi\right\rangle=(L,\bar\phi)$.
The subspace of $H^{k}(\Omega)$ consisting of holomorphic functions will be denoted
$A^{k}(\Omega)$. For consistency of notation, we let $A^{0}(\Omega)$ denote the Bergman
space: $A^{0}(\Omega)= L^2(\Omega)\cap{\mathcal O}(\Omega)$, where ${\mathcal O}(\Omega)$
denotes the space of holomorphic functions on
$\Omega$. For functions $f$ in $A^{-k}(\Omega)$, the norm $\| f\|_{-k}$ is comparable to
a weighted $L^2$ norm of $f$, with weight equal to the corresponding positive power of
the distance to $b\Omega$ (\cite{Ligocka86, Boas87}). If $r$ is a smooth
defining function for $\Omega$ -- i.e., $\Omega=\{r<0\}$ and $dr\neq 0$ when $r=0$ -- we
shall use the following version of (half of) this fact:
for $k\in\mathbb{N}$ there exists a constant $\beta_{k}>0$ such that
\begin{equation}\label{E:eqnorms2}
\|(-r)^{k}h\|\leq\beta_{k}\|h\|_{-k}\qquad\hskip .2 cm\forall\hskip .2 cm h\in A^{0}(\Omega).
\end{equation}
The Bergman projection is the orthogonal projection of $L^2(\Omega)$ onto
$A^{0}(\Omega)$. We say that $B$ satisfies {\it Condition R} (see \cite{BellLigocka}) if
it maps $C^{\infty}(\overline{\Omega})$ to itself (by continuity in $L^{2}(\Omega)$ and
the closed graph theorem for Fr\'{e}chet spaces, this map is then automatically
continuous). This property is often also referred to as {\it global regularity}. An
equivalent formulation is: for each $k_{2} \in \mathbb{N}$ there exists $k_{1} \in
\mathbb{N}$ such that $B$ maps $H^{k_{1}}(\Omega)$ (continuously) into
$H^{k_{2}}(\Omega)$. The special case where \eqref{main1} holds with $k_{1}=k_{2}$ is
usually referred to as {\it exact regularity} at level $k=k_{1}=k_{2}$. We remark that,
rather intriguingly, no instance is known where $B$ satisfies Condition R, but is not
exactly regular at all levels.
As mentioned in the introduction, a crucial fact for our proof of Theorem \ref{T:main}
is that regularity of the Bergman projection as in \eqref{main1} implies that the Sobolev
norm of a function $f\in A^{k_{2}}(\Omega)$ is controlled by pairing $f$ with a family of
holomorphic functions in the {\it ordinary} $L^2$ inner product. Duality results of this
kind are known, \cite{Bell81a, Bell82a, Bell82b, Straube84, BellBoas84, Komatsu84,
Barrett95}, but they are formulated for exact regularity, rather than \eqref{main1}, or
for Condition R (i.e., assuming \eqref{main1} holds for all $k_2\in\mathbb{N}$). In these
situations, a clean cut formulation is
possible: for example, $A^{k}(\Omega)$ and $A^{-k}_{cl}(\Omega)$, the closure of
$A^{0}(\Omega)$ in $A^{-k}(\Omega)$, are mutually dual, under a natural extension of the
$L^{2}$ pairing, when $B$ is exactly regular at level $k$. When there is a loss of
derivatives, that is when $k_{2} < k_{1}$ in \eqref{main1}, this duality has a somewhat
less striking formulation. For this reason, we only state below what we need
and give, for the reader's convenience, a straightforward proof (which seems to be new
even for the case $k_{1}=k_{2}$).
\begin{proposition}\label{P:pieceofduality}
Let $\Omega\subset\mathbb{C}^{n}$ be a domain with smooth boundary. Suppose that
\eqref{main1} holds. Then there exists a constant $c>0$ such that
\begin{align}\label{1stdual}
\|f\|_{k_{2}}\leq c\sup\left\{\left|\left(f, h\right) \right|: h\in
A^{k_{2}}(\Omega),
\|h\|_{-k_{1}}\leq 1\right \}
\end{align}
holds for all $f\in A^{0}(\Omega)$. The constant $c$ depends on $C_{1}$ from \eqref{main1},
$n$ and $k_{2}$.
\end{proposition}
\begin{remark}\label{dualitygenuine}
It is part of \eqref{1stdual} that if the right hand side is finite, then so is the left
hand side. That is, \eqref{1stdual} is a genuine estimate, as opposed to an \emph{a
priori} estimate.
\end{remark}
\begin{proof}
Let $f\in A^{0}(\Omega) \subset C^{\infty}(\Omega)$ and let $\alpha$ be a fixed
multi-index with $|\alpha|\leq k_{2}$. Then $D^{\alpha}f \in L^{2}(\Omega)$ if and only
if
\begin{align*}
\sup\left\{\left|\left( D^{\alpha}f,g \right)
\right|: g\in C^{\infty}_{0}(\Omega),\hskip .2 cm \|g\|\leq 1 \right\} < \infty \;.
\end{align*}
Moreover, in this case, $\|D^{\alpha}f\|$ is given by this supremum. Because $g\in
C_{0}^{\infty}(\Omega)$ and $f\in C^{\infty}(\Omega)$, integration by parts yields
\begin{align*}
\left| \left( D^{\alpha}f,g\right)\right|=\left|(-1)^{|\alpha|}
\left( f, D^\alpha g\right)\right|
=\left|\left(Bf, D^\alpha g\right)\right|
=\left|\left(f, BD^\alpha g\right)\right|.
\end{align*}
Here the second equality uses that $Bf=f$ (since $f\in A^{0}(\Omega)$), and the third the
self-adjointness of $B$ in the $L^{2}$ inner product. Thus,
\begin{align}\label{E:Dalphafestimate}
\left\|D^{\alpha}f\right\|=\sup\left\{\left|\left( f,BD^{\alpha}g \right) \right|:
g\in C^{\infty}_{0}(\Omega),\hskip .2 cm \|g\|\leq 1 \right\}.
\end{align}
Note that \eqref{main1} implies that $BD^{\alpha}g \in A^{k_{2}}(\Omega)$.
The aim now is to see that \eqref{main1} forces $BD^{\alpha}g$ to be
uniformly bounded in $H^{-k_{1}}(\Omega)$. For that, let $\varphi \in C^{\infty}_{0}(\Omega)$ with
$\|\varphi\|_{k_{1}} \leq 1$. Integration by parts and Cauchy--Schwarz give
\begin{align}\label{est1}
\left |\left(\varphi, BD^{\alpha}g\right)\right| = \left|\left(D^{\alpha}B\varphi,
g\right)\right| \leq\|B\varphi\|_{k_{2}}\|g\| \leq
C_{1}\|\varphi\|_{k_{1}}\|g\|
\leq C_{1} \; ,
\end{align}
where \eqref{main1} was used again. Thus, \eqref{E:dualnorm} shows $\|BD^{\alpha}g\|_{-k_{1}}
\leq C_{1}$. Returning to \eqref{E:Dalphafestimate}, we now obtain that
\begin{align*}
\left\|D^{\alpha}f\right\|&=C_{1}\sup\left\{\left|\left(
f,BD^{\alpha}\frac{g}{C_{1}} \right) \right|:
g\in C^{\infty}_{0}(\Omega),\hskip .2 cm \|g\|\leq 1 \right\}\\
&\leq C_{1}\sup\left\{\left|\left( f,h\right) \right|:
h\in A^{k_{2}}(\Omega),\hskip .2 cm \|h\|_{-k_{1}}\leq 1 \right\}.
\end{align*}
Summing over $|\alpha|\leq k_{2}$ gives \eqref{1stdual} with $c=\sum_{j=0}^{k_{2}} {2n \choose j}C_{1}$.
\end{proof}
\begin{remark}\label{dualityRemark}
Proposition \ref{P:pieceofduality} implies that when \eqref{main1} holds, then
\begin{align}\label{E:Bfknorm}
\left\|Bf\right\|_{k_{2}}&\leq c\sup\left\{\left|(Bf,h)\right|: h\in
A^{k_{2}}(\Omega),
\hskip .2 cm\|h\|_{-k_{1}}\leq 1
\right\}\notag\\
&=c\sup\left\{\left|(f,h)\right|: h\in A^{k_{2}}(\Omega),\hskip .2 cm\|h\|_{-k_{1}}\leq 1
\right\}
\end{align}
for all $f\in H^{k_{1}}(\Omega)$. It is \eqref{E:Bfknorm} that will be used in the proof
of Theorem \ref{T:main} (compare \eqref{E:Bf1norm} above).
\end{remark}
Let $\Omega=\{\rho<0\}\subset\subset\mathbb{C}^n$ be a smoothly bounded domain, with
$\rho$ a smooth defining function, and denote by
$L_{n}:=\sum_{j=1}^{n}(\partial \rho/\partial \overline{z_{j}})(\partial /\partial
z_{j})$ the complex normal field of type $(1,0)$. Set $T_{0}= i(L_{n}-\overline{L_{n}})$.
Then $T_{0}$ is real and tangential to $b\Omega$, and it spans
(over $\mathbb{C}$) the orthogonal complement of $T^{1,0}(b\Omega) \oplus
T^{0,1}(b\Omega)$ in $T(b\Omega) \otimes \mathbb{C}$. In particular, a vector field $T$,
with coefficients in $C^{\infty}(\overline{\Omega})$, that is tangential to $b\Omega$
can be written as
\begin{equation*}
T = aT_{0} + Y_{1}+ \overline{Y_{2}},
\end{equation*}
where $a$ is a smooth complex-valued function, and
$Y_{1}$, $Y_{2}$ are of type $(1,0)$ and tangential to $b\Omega$. The vector field $T$
is called {\it complex transversal} if it is transversal to $T^{1,0}(b\Omega) \oplus
T^{0,1}(b\Omega)$, i.e., if $a$ is nowhere vanishing on $b\Omega$. Writing
$\overline{Y_{2}}$ as $a\left((1/a)\overline{Y_{2}}+(1/\overline{a})Y_{2}\right) -
(a/\overline{a})Y_{2}$, shows that
near $b\Omega$, $T$ can be written in the form
\begin{equation}\label{T-rep}
T = a\left(T_{1} + L\right),
\end{equation}
where $T_{1}$ is real and complex transversal, $L$ is of type $(1,0)$, and both $T_{1}$ and $L$ are tangential to
$b\Omega$.
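For the record, we spell out the bookkeeping behind \eqref{T-rep} (valid near
$b\Omega$, where $a$ is non-vanishing): with
\begin{equation*}
T_{1}:=T_{0}+\frac{1}{\overline{a}}\,Y_{2}+\overline{\left(\frac{1}{\overline{a}}\,Y_{2}\right)}
\qquad\text{and}\qquad
L:=\frac{1}{a}\,Y_{1}-\frac{1}{\overline{a}}\,Y_{2},
\end{equation*}
$T_{1}$ is real, $L$ is of type $(1,0)$, both are tangential, and
\begin{equation*}
a\left(T_{1}+L\right)
=aT_{0}+\frac{a}{\overline{a}}\,Y_{2}+\overline{Y_{2}}+Y_{1}-\frac{a}{\overline{a}}\,Y_{2}
=aT_{0}+Y_{1}+\overline{Y_{2}}=T.
\end{equation*}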
The Hilbert spaces $H_{T}^{k}(\Omega)$ that occur in Theorem \ref{T:main} are the Sobolev
spaces associated with differentiation in the direction $T$ alone; here $T^{k}$ denotes
$k$-fold differentiation with respect to $T$:
$T^{k}(f)=T\left(T\left(\dots\left(Tf\right)\dots\right)\right)$.
\begin{definition}\label{D:TSobolev}
For $k\in\mathbb{N}$, set
$$H_{T}^{k}(\Omega)=\left\{f\in L^{2}(\Omega): T^{j}f \in L^{2}(\Omega),\;j\in\{1,\dots,k\}\right\},$$
where $T^j f$ is taken in the sense of distributions.
\end{definition}
For fixed $k$, $H_{T}^{k}(\Omega)$ is a Hilbert space with respect to the
inner product
\begin{align*}
(f,g)_{k,T}:=\sum_{j=0}^{k}\left(T^{j}f,T^{j}g\right)\qquad\hskip .2 cm\forall\hskip .2 cm f,g\in
H_{T}^{k}(\Omega),
\end{align*}
and $C^{\infty}(\overline{\Omega})$ is dense in $H_{T}^{k}(\Omega)$ with respect to the norm
$$\|f\|_{k,T}^{2}=\sum_{j=0}^{k}\|T^{j}f\|^{2}$$
induced by this inner product.
The completeness of $H_{T}^{k}(\Omega)$ is proved in the same way as for ordinary Sobolev
spaces; density of $C^{\infty}(\overline{\Omega})$ is a standard application of
Friedrichs' Lemma (see for example \cite{ChenShaw01}, Lemma D.1 and Corollary D.2). The spaces
$H_{T}^{k}(\Omega)$, $k\geq 1$, depend on the choice of the tangential, complex transversal vector field $T$; cf. Section 5 in
\cite{HerMcN10}. However, if $b$ is a smooth, non-vanishing, complex-valued function,
then $H_{T}^{k}(\Omega)$ and $H_{bT}^{k}(\Omega)$ are equal,
and an inductive argument gives, e.g., the somewhat rough estimate
\begin{align}\label{E:TbTchange}
\left\|(bT)^{k}f\right\|\leq b_{k}\|f\|_{k,T}\qquad\text{for}\hskip .2 cm
b_{k}=
\max_{\overline{\Omega}, 0\leq \ell \leq k}\left\{\left|T^{\ell}b \right|^{k} ,1\right\}.
\end{align}
This implies in particular that Theorem \ref{T:main} holds for $T$ if and only if it holds
for $bT$. Moreover, \eqref{E:TbTchange} indicates how the
constant in \eqref{main2} changes (although there are other quantities that determine the
constant in \eqref{main2}).
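To illustrate the inductive argument behind \eqref{E:TbTchange} in the first nontrivial
case $k=2$ (a sketch): by the Leibniz rule,
\begin{equation*}
(bT)^{2}f=bT\left(bTf\right)=b\,(Tb)\,(Tf)+b^{2}\,T^{2}f,
\end{equation*}
which gives \eqref{E:TbTchange} for $k=2$ after bounding $|b\,Tb|$ and $|b|^{2}$ by
$b_{2}$ (and comparing the resulting $\ell^{1}$ and $\ell^{2}$ sums).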
The Fr\'{e}chet space
$H_{T}^{\infty}(\Omega) =
\cap_{k=0}^{\infty}H_{T}^{k}(\Omega)$ is equipped with the (metrizable) topology induced by the
family of norms $\{\|.\|_{k,T} : k \in \mathbb{N}\}$ (see e.g. \cite{Rudin91}). It
inherits completeness from the $H_{T}^{k}(\Omega)$. For a class of examples of the fact that
$ C^{\infty}(\overline{\Omega})\subsetneq H_{T}^{\infty}(\Omega)$, see Section 5
in \cite{HerMcN10}.
\section{{Proof of Theorem \ref{T:main}}}\label{S:3}
As indicated in the introduction, the second step in proving Theorem \ref{T:main} relies on a representation of a
holomorphic function $h$ in terms of $\overline{T}^{j}$-derivatives, $j\leq k$, of functions whose $L^{2}$ norms are
controlled by $\|h\|_{-k}$ for $k \in \mathbb{N}$.
\begin{proposition}\label{P:antiderivative}
Let $\Omega\subset\subset\mathbb{C}^{n}$ be a smoothly bounded domain and let $T$ be
a vector field which is tangential to $b\Omega$ and complex transversal. Let
$k\in\mathbb{N}$.
There exists an open neighborhood $U$ of $b\Omega$, a function $\zeta\in
C^{\infty}_{0}(\overline{\Omega}\cap U)$ which equals $1$ near $b\Omega$, and constants
$C_{j,k}>0$ for $j\in\{0,\dots,k\}$, such that for all $h\in A^{0}(\Omega)$ there exist
functions $\mathcal{H}_{j}^{k} \in H^{k}(\Omega) \cap C^{\infty}(\Omega)$,
$j\in\{0,\dots,k\}$, on $\Omega$
satisfying
\begin{itemize}
\item[(i)] $\zeta h=\sum_{j=0}^{k}\overline{T}^{j}\mathcal{H}_{j}^{k}$ on $\Omega$,
\item[(ii)] $\|\mathcal{H}_{j}^{k}\|\leq C_{j,k}\|h\|_{-k}$ for all $j\in\{0,\dots,k\}$.
\end{itemize}
\end{proposition}
\begin{remark}
For simplicity, we have only stated what we need in Proposition \ref{P:antiderivative}.
As usual in such situations, membership in $H^{k}(\Omega)$ actually comes with a norm
estimate that can be worked out with a little additional care. In fact, the arguments in
\S\ref{SS:antiderivative} can be refined to show that for some constant $C_{s,k}>0$,
$\|\mathcal{H}^{k}_{j}(h)\|_{s} \leq C_{s,k} \|h\|_{s-k}$, for $s \in \mathbb{R}$.
\end{remark}
The proof of Proposition \ref{P:antiderivative} will be given in Section \ref{S:anti}.
Assuming it is true, we now give the proof of Theorem \ref{T:main}.
We will use the notation $A\lesssim B$ to denote the existence of a constant $C$
independent of $f$, but allowed to depend on $\Omega$ and $T$, such that $A\leq C\cdot B$.
\begin{proof}[Proof of Theorem \ref{T:main}]
Assume first
$f\in C^\infty\left(\overline{\Omega}\right)$. Then $Bf \in A^{k_{2}}(\Omega)$, by
\eqref{main1}.
Inequality \eqref{E:Bfknorm} states that
\begin{align}\label{E:Bfknorm1}
\left\|Bf\right\|_{k_{2}}\leq c\sup\left\{\left|(f,h)\right|: h\in
A^{k_{2}}(\Omega),\hskip .2 cm\|h\|_{-k_{1}}\leq 1 \right\}
\end{align}
holds for some constant $c>0$ independent of $f$.
We use Proposition \ref{P:antiderivative} for $k=k_{1}$ to estimate the right-hand side of
\eqref{E:Bfknorm1} as follows.
For $T$ as in Theorem \ref{T:main}, choose the neighborhood $U$ of $b\Omega$
and the cut-off function $\zeta$ described in Proposition \ref{P:antiderivative}.
Using the partition of unity $\{\zeta,1-\zeta\}$ yields
\begin{align}\label{A}
\left|\left(f,h\right)\right|&\leq\left|\left(f,\zeta h\right)\right|+\left|
\left(f,(1-\zeta)h\right)\right|\\
&=\Bigl|\bigl(f,
\sum_{j=0}^{k_{1}}\overline{T}^{j}\mathcal{H}_{j}^{k_{1}}\bigr)\Bigr|+\left|
\left(f,(1-\zeta)h\right)\right|. \notag
\end{align}
As $(1-\zeta)$ is identically zero near $b\Omega$, it follows from the
Cauchy--Schwarz inequality, \eqref{E:eqnorms2}, and $\|h\|_{-k_{1}}\leq 1$ that
\begin{align}\label{B}
\left|\left(f,(1-\zeta)h\right)\right|\leq \widetilde{\beta}_{k}\|f\|\cdot\|h\|_{-k_{1}}\leq
\widetilde{\beta}_{k}\|f\|,
\end{align}
where the constant $\widetilde{\beta}_{k}>0$ depends on $\beta_{k}$ (see \eqref{E:eqnorms2}) and on $\zeta$.
Since $T$ is tangential, integration by
parts (justified since $\mathcal{H}_{j}^{k_{1}} \in H^{k_{1}}(\Omega)$) gives
\begin{align*}
\left|\left( f, \overline{T}^{j}\mathcal{H}_{j}^{k_{1}}\right)\right|= \left|
\left(Tf, \overline{T}^{j-1}\mathcal{H}_{j}^{k_{1}}\right)+\left(g_{0}f,
\overline{T}^{j-1}\mathcal{H}_{j}^{k_{1}}\right)\right|,
\end{align*}
where $g_{0}$, depending on the first order derivatives of the coefficients of $T$, is a smooth function on $\overline{\Omega}$.
Continuing this procedure yields
\begin{align*}
\left|\left(f, \overline{T}^{j}\mathcal{H}_{j}^{k_{1}}\right)\right|\leq
\sum_{\ell=0}^{j}\left\|g_{\ell}T^{\ell}f\right\|\cdot\|\mathcal{H}_{j}^{k_{1}}\|
\end{align*}
for functions $g_{\ell}\in C^{\infty}(\overline{\Omega})$, $\ell\in\{0,\dots,j-1\}$ and
$g_{j}=1$. By part (ii) of Proposition \ref{P:antiderivative} it now follows that
\begin{align*}
\left|\left( f, \overline{T}^{j}\mathcal{H}_{j}^{k_{1}}\right)\right|\leq C_{j,k}
\sum_{\ell=0}^{j}\left\|g_{\ell}T^{\ell}f\right\|.
\end{align*}
Summing over $j\leq k_{1}$ gives, in view of \eqref{E:Bfknorm1}, \eqref{A}, and
\eqref{B} that
\begin{align}\label{Cinfty}
\left\|Bf\right\|_{k_{2}}\leq C\|f\|_{k_{1},T}\hskip .2 cm\qquad\forall\hskip .2 cm f \in
C^{\infty}(\overline{\Omega}).
\end{align}
To remove the smoothness assumption on $f$, approximate $f$ with respect to
$\|.\|_{k_{1},T}$ by functions in $C^{\infty}(\overline{\Omega})$. Then invoke
\eqref{Cinfty} and the continuity of $B$ in $L^{2}(\Omega)$ to obtain \eqref{main2}; for
more details see Lemma 4.2 in \cite{HerMcN10}.
\end{proof}
\section{{Anti-differentiation along a transverse direction}}\label{S:anti}
\subsection{General set-up}\label{SS:setup}
In this and the next two sections we work in the real setting of $\mathbb{R}^{n}$. Let
$\Omega$ be a bounded domain with smooth boundary. Denote by
$\mathcal{N}=\sum_{j=1}^{n}\mathcal{N}_{j}\frac{\partial}{\partial x_{j}}$ a vector
field with smooth coefficients in a neighborhood of $\Omega$. Suppose that $\mathcal{N}$
is transversal to $b\Omega$. Then there exists an open, bounded neighborhood
$V\subset\mathbb{R}^{n}$ of $b\Omega$ on which $\mathcal{N}$ is non-vanishing. Moreover,
for each $x\in V$ there exist a scalar $\tau_{x}>0$ and a smooth integral curve
$\varphi_{x}: (-\tau_{x},\tau_{x})\longrightarrow\mathbb{R}^{n}$ satisfying
$\varphi_{x}(0)=x$ and
\begin{align*}
\frac{\partial\varphi_{x}}{\partial t}(t)=\Bigl\langle
\mathcal{N}_{1}\left(\varphi_{x}(t)\right),
\dots,\mathcal{N}_{n}\left(\varphi_{x}(t)\right)\Bigr\rangle
\end{align*}
for all $t\in(-\tau_{x},\tau_{x})$. Because the curve $\varphi_{x}$ intersects $b\Omega$
transversally, there exists a scalar $\tau_{0}>0$ such that $\varphi_{p}$ is defined on
$(-\tau_{0},\tau_{0})$ for all $p\in b\Omega$. After possibly rescaling $\mathcal{N}$, it
may be assumed that $\tau_{0}=2$.
For each $x\in V$ define $t_{x}$ to be the (unique) scalar for which $\varphi(t_{x},x)\in
b\Omega$, and note that $|t_{x}|=\mathcal{O}(d_{b\Omega}(x))$. It may be assumed that
$t_{x}>0$ for all $x\in V\cap\Omega$. Set $U:=\{x\in V : |t_{x}|<1\}$. Then $U$ is an open neighborhood
of $b\Omega$. Moreover, the flow map, $\varphi(t,x):=\varphi_{x}(t)$, is
a smooth map on $(-1,1)\times U$ satisfying $\varphi(0,x)=x$ as well as
\begin{align}\label{D:integralcurve}
\frac{\partial\varphi_{\ell}}{\partial
t}(t,x)=\mathcal{N}_{\ell}\left(\varphi(t,x)\right)
\qquad\hskip .2 cm\forall\hskip .2 cm\ell\in\{1,\dots,n\}.
\end{align}
We can now make precise the anti-differentiation operator $\mathfrak{A}$ from Section
\ref{intro}. Set $C^{\infty}_{\overline{U}}(\Omega) := \{f \in
C^{\infty}(\Omega)\,|\,f \equiv 0 \;\text{on}\; \Omega \setminus \overline{U}\}$. We then
define $\mathfrak{A}$ as an operator from $C^{\infty}_{\overline{U}}(\Omega)$ to itself:
\begin{align*}
\mathfrak{A}[g](x)=\left\{\begin{array}{cl}
\int_{-1}^{0}(g\circ\varphi)(s,x)\,ds
&\qquad\text{if}\hskip .2 cm x\in\Omega\cap U,\\
0 &\qquad\text{if}\hskip .2 cm x \in \Omega \setminus U.
\end{array} \right.
\end{align*}
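In the model situation where $\mathcal{N}=\partial/\partial x_{n}$ (a simplifying
assumption, for illustration only), the flow is $\varphi(s,x)=x+s\,e_{n}$, and
$\mathfrak{A}$ is literally anti-differentiation in the $x_{n}$ variable: writing
$x=(x',x_{n})$, for $x\in\Omega\cap U$,
\begin{equation*}
\mathfrak{A}[g](x)=\int_{-1}^{0}g(x',x_{n}+s)\,ds=\int_{x_{n}-1}^{x_{n}}g(x',u)\,du.
\end{equation*}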
$\mathfrak{A}$ belongs to a class of operators denoted by $\mathcal{A}^{1}_{0,0}$ below;
see Definition \ref{def1}, where the mapping properties are also discussed.
$\mathfrak{A}$ inverts $\mathcal{N}$ in the following sense.
\begin{lemma}\label{L:FTC}
Let $g \in C^{\infty}_{\overline{U}}(\Omega)$. Then
\begin{align}\label{E:FTC}
g(x)=\mathfrak{A}[\mathcal{N}g](x)\hskip .2 cm, \hskip .2 cm x\in\Omega.
\end{align}
\end{lemma}
\begin{proof}
When $x \in \Omega \setminus U$, both sides of \eqref{E:FTC} equal zero. When $x \in
\Omega \cap U$, then $g\left(\varphi(-1,x)\right)=0$ because $\varphi(-1,x) \in
\Omega \setminus U$. Note that \eqref{D:integralcurve} implies
$$\left(\left(\mathcal{N}g\right)\circ\varphi\right)(s,x) =
\frac{\partial}{\partial s}\left(\left(g\circ\varphi\right)(s,x)\right),$$ hence
an application of the Fundamental Theorem of
Calculus completes the proof of \eqref{E:FTC}.
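Explicitly, for $x\in\Omega\cap U$, since $\varphi(0,x)=x$ and $g(\varphi(-1,x))=0$,
\begin{equation*}
\mathfrak{A}[\mathcal{N}g](x)
=\int_{-1}^{0}\frac{\partial}{\partial s}\left(\left(g\circ\varphi\right)(s,x)\right)ds
=g\left(\varphi(0,x)\right)-g\left(\varphi(-1,x)\right)=g(x).
\end{equation*}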
\end{proof}
\subsection{The spaces $\mathcal{A}_{*,*}^{*}$ and their $L^{2}$ mapping
properties.}\label{SSspaces}
In order to organize the proof of Proposition \ref{P:antiderivative}, we now define
several spaces of operators related to, but more general than $\mathfrak{A}$. We group
the operators both according to their form as well as according to their mapping
properties.
\begin{definition}\label{def1}
(1) Denote by $\mathcal{A}_{\mu,0}^{1}$, $\mu\in\mathbb{N}_{0}$, the space of
operators acting on $C^{\infty}_{\overline{U}}(\Omega)$ which are of the form
\begin{align}\label{basicHardy}
A[g](x)=\left\{\begin{array}{cl}
\int_{-1}^{0}s^{\mu}\gamma_{A}(s,x)
\cdot(g\circ\varphi)(s, x)\,ds &\qquad\text{if}\hskip .2 cm x\in \Omega\cap U,\\
\\
0&\qquad\text{if}\hskip .2 cm x\in \Omega\setminus U
\end{array} \right.
\end{align}
for some $\gamma_{A}\in C^{\infty}([-1,0]\times \overline{U})$. Then $A[g] \in
C^{\infty}_{\overline{U}}(\Omega)$.
\noindent (2) Denote by $\mathcal{A}_{\mu,\nu}^{1}$, $\mu\in\mathbb{N}_{0}$,
$\nu\in\mathbb{N}$, the space of operators on $C^{\infty}_{\overline{U}}(\Omega)$
spanned by operators of the form $A_{\mu}\circ D^{\beta}$ for
$A_{\mu}\in\mathcal{A}_{\mu,0}^{1}$ and $|\beta|\leq\nu$.
\end{definition}
These operators have mapping properties in weighted $L^{2}$ spaces that will be very
useful for our purposes. We introduce the following definition:
\begin{definition}\label{D:Sjkspace}
An operator $A$ on $C^{\infty}_{\overline{U}}(\Omega)$ is said to belong to
$\mathcal{S}_{\nu}^{k}$ if there exists a constant $C>0$ such that
\begin{align}\label{E:Sjkspace}
\left\|t_{x}^{\ell}\cdot A[g]\right\|\leq C\sum_{|\beta|\leq
\nu}\left\|t_{x}^{\ell+k}\cdot
D^{\beta}g\right\|\qquad\forall\hskip .2 cm \ell\in\mathbb{N}_{0},\hskip .2 cm g\in
C^{\infty}_{\overline{U}}(\Omega).
\end{align}
Here, $C$ does not depend on $g$ or
$\ell$. Note that because $t_{x}$ is defined on $\Omega \cap \overline{U}$ and both $g$
and $A[g]$ vanish on $\Omega \setminus \overline{U}$, both sides of \eqref{E:Sjkspace} are
well-defined.
\end{definition}
The following lemma is the key to the mapping properties of the anti-derivative
operator $\mathfrak{A}$ and its generalizations; in particular, it makes precise the
notion of `gaining' a factor of the boundary distance. As mentioned in Section
\ref{intro}, it is a consequence of one of a group of inequalities due to Hardy
(\cite{HLP}, sections 9.8, 9.9).
\begin{lemma}\label{P:Hardy}
For given $\mu\in\mathbb{N}_{0}$, define
the operator $B_{\mu}$ on $C^{\infty}_{\overline{U}}(\Omega)$ by
\begin{align}\label{B-mu}
B_{\mu}[g](x)=
\left\{\begin{array}{cl}
\int_{-1}^{0}(t_{\varphi(s,x)})^{\mu}\cdot\left|g(\varphi(s,x)) \right|\,
ds &\qquad\text{if}\hskip .2 cm x\in \Omega\cap U \\
0&\qquad\text{if}\hskip .2 cm x \in \Omega \setminus U.
\end{array}\right.
\end{align}
Then $B_{\mu}\in\mathcal{S}_{0}^{\mu+1}$.
\end{lemma}
\begin{proof}
It is convenient in this proof (but not later on) to rewrite $B_{\mu}$ using coordinates
$(\tau, p) \in [0,2]\times b\Omega$ on $\overline{\Omega}\cap\overline{V}$ given by
$(\tau,p) \rightarrow \varphi(-\tau,p)$. Expressing $B_{\mu}$ in these coordinates and
changing variables gives
\begin{align}\label{B-special}
B_{\mu}[g](\tau,p)= \int_{\tau}^{\tau +1}\sigma^{\mu}\left|g(\sigma,p)\right|\,d\sigma
\;\;,\;\; (\tau,p)\sim x = \varphi(-\tau,p) \in \Omega\cap U.
\end{align}
Estimate \eqref{E:Sjkspace} now follows from the following inequality of
Hardy's (see \cite{HLP}, Theorem 330, section 9.9) for $f \geq 0$:
\begin{align}\label{Hardy}
\int_{0}^{\infty}\left(x^{r}\int_{x}^{\infty}f(t)\,dt\right)^{2}\,dx \leq
\frac{4}{(2r+1)^{2}}\int_{0}^{\infty}\left(t^{r+1}f(t)\right)^{2}\,dt \; , \; r >
-\frac{1}{2}.
\end{align}
Indeed, to verify \eqref{E:Sjkspace} for $B_{\mu}$, it suffices to observe that for $x \sim (\tau,p)
\in \Omega\cap U$
\begin{align}
\left|\tau^{\ell}B_{\mu}[g](\tau,p)\right| \leq
\tau^{\ell}\int_{\tau}^{2}\sigma^{\mu}\left|g(\sigma,p)\right|\,d\sigma,
\end{align}
then to apply \eqref{Hardy} with $f(t)=t^{\mu}\left|g(t,p)\right|\;,\; 0\leq t\leq 2$, and
$f(t)=0\;,\;t>2$, $r = \ell$, for $p \in b\Omega$ fixed, and lastly, to integrate over
$b\Omega$. We also use the equivalence, with a constant that depends only on $\Omega$ and
$\mathcal{N}$, of the volume elements $dV$ and $d\tau\times d_{b\Omega}$ on
$\Omega\cap\overline{V}$, where $dV$ and $d_{b\Omega}$ denote the Euclidean volume
elements on $\mathbb{R}^{n}$ and $b\Omega$, respectively. Also note that for $r=\ell \geq
0$, the constant on the right hand side of \eqref{Hardy} is less than or equal to $4$.
Replacing it by $4$ yields a constant in \eqref{E:Sjkspace} that does not
depend on $\ell$.
\end{proof}
If $A$ is of the form $A=A_{\mu}\circ D^{\beta}$, then the inequality $|s| \leq |s|+t_{x}=
t_{\varphi(s,x)}$ and the boundedness of $|\gamma_{A}|$ on $[-1,0] \times
\overline{U}$ imply
\begin{align}\label{weighted}
\|t_{x}^{\ell}A[g]\| = \|t_{x}^{\ell}A_{\mu}[D^{\beta}g]\| \lesssim
\|t_{x}^{\ell}B_{\mu}[D^{\beta}g]\| \lesssim \|t_{x}^{\ell + \mu + 1}D^{\beta}g\|,
\end{align}
where the last inequality in \eqref{weighted} follows from Lemma \ref{P:Hardy}. This proves:
\begin{lemma}\label{C:A1estimate}
$\mathcal{A}_{\mu,\nu}^{1}\subset\mathcal{S}_{\nu}^{\mu+1}$, $\mu\,,\,\nu \in
\mathbb{N}_{0}$.
\end{lemma}
We will also need notation for compositions of operators:
\begin{definition}
\noindent(1) For a multi-index
$\alpha=(\alpha_{1},\dots,\alpha_{\ell})\in\mathbb{N}_{0}^{\ell}$,
$\ell\in\mathbb{N}$; define
\begin{align*}
\mathcal{A}_{\alpha,0}^{\ell}=\left\langle A_{1}\circ\dots\circ A_{\ell} :
A_{j}\in\mathcal{A}_{\alpha_{j},0}^{1}\right\rangle \; .
\end{align*}
\noindent(2) Denote by $\mathcal{A}_{\alpha,\nu}^{\ell}$,
$\alpha\in\mathbb{N}_{0}^{\ell}$,
$\nu\in\mathbb{N}$ and $\ell\in\mathbb{N}$, the space of operators on
$C^{\infty}_{\overline{U}}(\Omega)$ spanned by operators of the form
$A_{\alpha}^{\ell}\circ D^{\beta}$ for $A_{\alpha}^{\ell}\in\mathcal{A}_{\alpha,0}^{\ell}$
and $|\beta|\leq\nu$.
\end{definition}
With Lemma \ref{C:A1estimate} above, we have the following weighted mapping properties
of these operators:
\begin{lemma}\label{C:Aellestimate}
$\mathcal{A}_{\alpha,\nu}^{\ell}\subset\mathcal{S}_{\nu}^{\ell+|\alpha|}$, where
$|\alpha|=\sum_{j=1}^{\ell}\alpha_{j}$.
\end{lemma}
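The proof amounts to iterating Lemma \ref{C:A1estimate}. For illustration (a sketch, not spelled out in the sequel), in the case $\ell=2$, $\nu=0$, write $A=A_{1}\circ A_{2}$ with $A_{j}\in\mathcal{A}^{1}_{\alpha_{j},0}\subset\mathcal{S}^{\alpha_{j}+1}_{0}$; applying Lemma \ref{C:A1estimate} twice gives
\begin{align*}
\left\|t_{x}^{\ell}\cdot (A_{1}\circ A_{2})[g]\right\|
\lesssim\left\|t_{x}^{\ell+\alpha_{1}+1}\cdot A_{2}[g]\right\|
\lesssim\left\|t_{x}^{\ell+\alpha_{1}+\alpha_{2}+2}\cdot g\right\|
=\left\|t_{x}^{\ell+|\alpha|+2}\cdot g\right\|.
\end{align*}
The point is that estimate \eqref{E:Sjkspace} holds uniformly in the weight exponent $\ell$, so it may be applied a second time with $\ell$ replaced by $\ell+\alpha_{1}+1$.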
\subsection{On the algebra of commutators of $\mathcal{A}_{*,*}^{*}$ and differential
operators}\label{SScommutators}
Throughout this subsection, $X = \sum_{j=1}^{n}X_{j}\frac{\partial}{\partial x_{j}}$
denotes a vector field with smooth coefficients on (a neighborhood of)
$\overline{\Omega}$. We collect a series of elementary lemmas that give control over
various commutators involving the operators introduced in the previous subsection and $X$
or $D^{\beta}$. The notation $+$, or $\sum$, will be used to indicate sums of operators in
the indicated spaces.
\begin{lemma}\label{P:basiccommutator}
Let $A\in\mathcal{A}_{\mu,0}^{1}$ for some $\mu\in\mathbb{N}_{0}$. Then
\begin{align}\label{E:basiccommutator}
[A,X]\in \mathcal{A}_{\mu,0}^{1} + \mathcal{A}_{\mu+1,1}^{1}.
\end{align}
\end{lemma}
\begin{proof}
Let $g\in C^{\infty}_{\overline{U}}(\Omega)$. Then
\begin{align}\label{E:XAterm}
(X\circ A)[g](x)
=\int_{-1}^{0}s^{\mu}\Bigl(\left(X_{x}(\gamma_{A})\right)\cdot
(g\circ\varphi)\Bigr)&(s,x)\;ds\\
+\int_{-1}^{0}s^{\mu}&\gamma_{A}(s,x)\cdot
X_{x}\left((g\circ\varphi)(s,x)\right)\;ds.
\notag
\end{align}
Let $A_{0}$ be the operator in $\mathcal{A}_{\mu,0}^{1}$ such that the first term on
the right hand side of \eqref{E:XAterm} equals $-A_{0}[g](x)$. Then it follows that
\begin{align}\label{commAX}
[A,X][g](x)=A_{0}[g](x)+\int_{-1}^{0}s^{\mu}\gamma_{A}(s,
x)\Bigl((Xg)\circ
\varphi-X_{x}(g\circ\varphi)\Bigr)(s,x) \;ds.
\end{align}
$A_{0}$ belongs to $\mathcal{A}^{1}_{\mu,0}$. That the term on the right
hand side of \eqref{commAX} given by the integral belongs to $\mathcal{A}^{1}_{\mu+1,1}$
can be seen as follows. Because $\varphi(0,x) \equiv x$, $\partial \varphi_{j}/\partial
x_{k} = \delta_{j,k} + O(s)$. Therefore, the chain rule shows that
$\Bigl((Xg)\circ\varphi-X_{x}(g\circ\varphi)\Bigr)(s,x)$ is a sum of terms each of which
is of the form $s\gamma_{\beta}(s,x)D^{\beta}g(\varphi(s,x))$, where $\gamma_{\beta}$ is smooth (depending on first derivatives of $X\circ\varphi$ and second derivatives of $\varphi$) and
$|\beta|=1$.
\end{proof}
\begin{lemma}\label{L:basiccommutator1}
Let
$A_{\mu,\nu}\in\mathcal{A}_{\mu,\nu}^{1}$ for some $\mu\in\mathbb{N}_{0}$,
$\nu\in\mathbb{N}$. Then
\begin{align}\label{E:basiccommutator1}
[A_{\mu,\nu},X]\in\mathcal{A}_{\mu,\nu}^{1} + \mathcal{A}_{\mu+1,\nu+1}^{1}.
\end{align}
\end{lemma}
\begin{proof}
By definition of $\mathcal{A}_{\mu,\nu}^{1}$, $A_{\mu,\nu}$ can be written as a linear
combination of operators of the form $A_{\mu}\circ D^{\beta}$, where
$A_{\mu}\in\mathcal{A}_{\mu,0}^{1}$ and $|\beta|\leq\nu$. It suffices to show
\eqref{E:basiccommutator1} for the latter operators. For that, note that
\begin{align*}
[A_{\mu}\circ D^{\beta},X]&=A_{\mu}\circ D^{\beta}\circ X-X\circ A_{\mu}\circ D^{\beta}\\
&=A_{\mu}\circ[D^{\beta},X]+A_{\mu}\circ X\circ D^{\beta}-X\circ A_{\mu}\circ D^{\beta}\\
&=A_{\mu}\circ[D^{\beta},X]+[A_{\mu},X]\circ D^{\beta}.
\end{align*}
Since $[D^{\beta},X]$ is a differential operator of order $|\beta|-1$,
$A_{\mu}\circ[D^{\beta},X]$
belongs to $\mathcal{A}_{\mu,|\beta|-1}^{1}$. Furthermore, \eqref{E:basiccommutator}
yields that $[A_{\mu},X]\circ
D^{\beta}\in\mathcal{A}_{\mu,|\beta|}^{1} + \mathcal{A}_{\mu+1,|\beta|+1}^{1}$.
\end{proof}
Lemma \ref{P:basiccommutator} extends to iterated commutators as follows. Set
$C_{X}^{\nu}(A)=\left[C_{X}^{\nu-1}(A),X\right]$ with $C_{X}^{1}(A)=[A,X]$. That is,
$C_{X}^{\nu}(A)$ equals the $\nu$-fold iterated commutator $[\cdots[[A,X],X],\cdots,X]$.
\begin{lemma}\label{C:basiccommutator1}
Let $A$ be given as in Lemma \ref{P:basiccommutator}.
Then
$C_{X}^{\nu}(A)\in \sum_{j=0}^{\nu}\mathcal{A}_{\mu+j,j}^{1}$.
\end{lemma}
\begin{proof}
The proof is done via induction on $\nu$. Note first that the case $\nu=1$
is Lemma
\ref{P:basiccommutator}. Next suppose that
$C_{X}^{\nu-1}(A)\in\sum_{j=0}^{\nu-1}\mathcal{A}_{\mu+j,j}^{1}$ holds for some
$\nu\in\mathbb{N}$. It follows from Lemma
\ref{L:basiccommutator1} that $[A_{\mu+j,j},X]\in\mathcal{A}_{\mu+j,j}^{1} +
\mathcal{A}_{\mu+j+1,j+1}^{1}$ for $A_{\mu+j,j}\in\mathcal{A}_{\mu+j,j}^{1}$, which
completes the proof.
\end{proof}
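For illustration, in the case $\nu=2$ the induction unfolds as follows: Lemma \ref{P:basiccommutator} gives $[A,X]\in\mathcal{A}_{\mu,0}^{1}+\mathcal{A}_{\mu+1,1}^{1}$, and applying Lemmas \ref{P:basiccommutator} and \ref{L:basiccommutator1} to the two summands yields
\begin{align*}
C_{X}^{2}(A)=\bigl[[A,X],X\bigr]\in
\left(\mathcal{A}_{\mu,0}^{1}+\mathcal{A}_{\mu+1,1}^{1}\right)
+\left(\mathcal{A}_{\mu+1,1}^{1}+\mathcal{A}_{\mu+2,2}^{1}\right)
=\sum_{j=0}^{2}\mathcal{A}_{\mu+j,j}^{1}.
\end{align*}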
We next consider commutators with higher order derivatives.
\begin{lemma}\label{L:basiccommutator2}
If $A\in\mathcal{A}_{\mu,0}^{1}$ for some $\mu \in \mathbb{N}_{0}$, then
$\left[A,D^{\beta}\right]\in \mathcal{A}_{\mu,|\beta|-1}^{1}
+ \mathcal{A}_{\mu+1,|\beta|}^{1}$.
\end{lemma}
\begin{proof}
The proof is done via induction on $|\beta|$. The case $|\beta|=1$ follows
from Lemma \ref{P:basiccommutator}. Suppose now that
$\left[A,D^{\sigma}\right]\in\mathcal{A}_{\kappa,\nu-2}^{1}+\mathcal{A}_{\kappa+1,\nu-1}^{1}$
holds for every multi-index $\sigma$ of length $\nu-1$, for some $\nu\in\mathbb{N}$,
$\nu\geq 2$, and for
all $\kappa\in\mathbb{N}_{0}$. Let $\beta$ be a multi-index of length $\nu$. Then
$D^{\beta}$ may be written as $D^{\beta-\sigma}\circ D^{\sigma}$ for some
multi-index $\sigma$ of length $\nu-1$. It then follows that
\begin{align*}
[A,D^{\beta}]&=[A, D^{\beta- \sigma}]\circ D^{\sigma} + D^{\beta -
\sigma}\circ A\circ D^{\sigma} - D^{\beta - \sigma}\circ D^{\sigma}\circ A\\
&=[A, D^{\beta - \sigma}]\circ D^{\sigma} + D^{\beta - \sigma}\circ [A,
D^{\sigma}]\,.
\end{align*}
It follows from Lemma \ref{P:basiccommutator} and the fact that $|\beta-\sigma|=1$ that
$\left[A,D^{\beta - \sigma}\right]\circ
D^{\sigma}\in\mathcal{A}_{\mu,|\beta|-1}^{1} +
\mathcal{A}_{\mu+1,|\beta|}^{1}$. The induction hypothesis furnishes two operators
$A_{1} \in \mathcal{A}^{1}_{\mu,\nu -2}$ and $A_{2} \in \mathcal{A}^{1}_{\mu +1,\nu -1}$
so that $\left[A,D^{\sigma}\right] = A_{1}+A_{2}$. Then
\begin{align*}
D^{\beta - \sigma}\circ \left[A,D^{\sigma}\right] &= D^{\beta -
\sigma}\circ (A_{1}+A_{2}) \\
&= A_{1}\circ D^{\beta - \sigma} + A_{2}\circ D^{\beta - \sigma} +
[D^{\beta - \sigma},A_{1}] + [D^{\beta - \sigma},A_{2}]\; .
\end{align*}
The four terms on the right hand side are in, respectively, $\mathcal{A}^{1}_{\mu,\nu
-1}$, $\mathcal{A}^{1}_{\mu+1,\nu}$, $\mathcal{A}^{1}_{\mu,\nu-2} +
\mathcal{A}^{1}_{\mu+1,\nu-1}$, and $\mathcal{A}^{1}_{\mu+1,\nu-1} +
\mathcal{A}^{1}_{\mu+2,\nu}$. We have used Lemma \ref{L:basiccommutator1} for the last two
terms ($|\beta - \sigma|=1$!). Taking into account the (trivial) inclusions
$\mathcal{A}^{1}_{\mu,\nu-2} \subset \mathcal{A}^{1}_{\mu,\nu-1}$,
$\mathcal{A}^{1}_{\mu+1,\nu-1} \subset\mathcal{A}^{1}_{\mu,\nu-1}$, and
$\mathcal{A}^{1}_{\mu+2,\nu} \subset \mathcal{A}^{1}_{\mu+1,\nu}$, shows that $D^{\beta -
\sigma}\circ \left[A,D^{\sigma}\right] \in \mathcal{A}^{1}_{\mu,\nu-1} +
\mathcal{A}^{1}_{\mu+1,\nu}$. This completes the induction.
\end{proof}
For compositions of operators, Lemma \ref{L:basiccommutator2} takes on the following form.
\begin{lemma}\label{C:basiccommutator3}
If $A_{\alpha,\nu}\in\mathcal{A}_{\alpha,\nu}^{\ell}$ for some multi-index
$\alpha\in\mathbb{N}_{0}^{\ell}$, $\nu\,,\,\ell\in\mathbb{N}$, then
\begin{align}\label{E:basiccommutator3}
[A_{\alpha,\nu},D^{\beta}]\in\mathcal{A}_{\alpha,\nu+|\beta|-1}^{\ell}
+\sum_{j=1}^{\ell}\mathcal{A}_{\alpha+e_{j},\nu+|\beta|}^{\ell},
\end{align}
where $e_{j}$ is the standard $j$-th unit vector.
\end{lemma}
\begin{proof}
We first consider the case $\nu=0$; in this case, the proof is by induction on $\ell$. The
case $\ell=1$ is Lemma \ref{L:basiccommutator2}. The induction step is analogous to that
in
the proof of Lemma \ref{L:basiccommutator2}, with the multi-index $\alpha$ now playing the
role of $\beta$ there. We leave the details to the reader.
When $\nu>0$, $A_{\alpha,\nu}$ is a linear combination of operators of the form
$A_{\alpha,0}\circ D^{\gamma}$ with $|\gamma| \leq \nu$. Thus $[A_{\alpha,\nu},
D^{\beta}]$ is a linear combination of terms of the form
\begin{align*}
[A_{\alpha,0}\circ D^{\gamma}, D^{\beta}] =
A_{\alpha,0}\circ D^{\gamma}\circ D^{\beta} - D^{\beta}\circ A_{\alpha,0}\circ D^{\gamma}
= [A_{\alpha,0}, D^{\beta}]\circ D^{\gamma}
\end{align*}
because $D^{\gamma}\circ D^{\beta} = D^{\beta}\circ D^{\gamma}$.
\eqref{E:basiccommutator3} now follows from the case $\nu=0$ (already shown) applied to
$[A_{\alpha,0}, D^{\beta}]$.
\end{proof}
\subsection{Proof of Proposition \ref{P:antiderivative}}\label{SS:antiderivative}
Let $\Omega$ and $T$ be as in Proposition \ref{P:antiderivative}. Near $b\Omega$,
$T=a(T_{1}+L)$, with both $T_{1}$ and $L$ tangential, $T_{1}$ real, $L$ of type $(1,0)$,
and $a$ a smooth function that does not vanish near $b\Omega$ (see \eqref{T-rep}). It is
easy to see that the conclusion of Proposition \ref{P:antiderivative} holds for $T$ if and
only if it holds for $T_{1}+L$ (however, the $\mathcal{H}_{j}^{k}$'s in (i) change and so
do the constants $C_{j,k}$ in (ii), see also the short discussion surrounding
\eqref{E:TbTchange}). Therefore, it may be assumed that $T=T_{1}+L$, with $T_{1}$ and $L$
as above. Set $\mathcal{N} := JT_{1}$. Because $T_{1}$ is complex transversal,
$\mathcal{N}$ is transversal to $b\Omega$, and so the general set-up of
\S\ref{SS:setup} applies.
The proof of Proposition \ref{P:antiderivative} will be achieved in two steps. The first
step consists in replacing $\mathcal{N}$ in Lemma \ref{L:FTC} by $i\overline{T}$, adding
the necessary correction, and then iterating the result. The key for the proof of
Proposition \ref{P:antiderivative} is that when applied to a holomorphic function, the
correction term is benign, as a result of the Cauchy--Riemann equations, see
\eqref{holbenign} below.
\begin{lemma}\label{L:zetahformula}
Let $h\in C^{\infty}(\Omega)$ and let $k\in\mathbb{N}$. Let $\zeta\in
C^{\infty}_{0}(\overline{\Omega}\cap U)$ be a non-negative function which
is identically $1$ in a neighborhood of $b\Omega$ contained in $U$. Then
\begin{align}\label{E:zetahformula}
\left(\zeta h\right)(x)=i^{k}\left(\mathfrak{A}\circ\overline{T}\right)^{k}[\zeta h]+
\sum_{j=0}^{k-1}i^{j}\left(\mathfrak{A}\circ\overline{T}
\right)^{j}\circ\mathfrak{A}\circ(\mathcal{N}-i\overline{T})[\zeta h].
\end{align}
\end{lemma}
\begin{proof}
The proof is done by induction on $k$. Suppose first that $k=1$. Then Lemma \ref{L:FTC}
yields
\begin{align}\label{E:ATk=1}
\zeta h=\mathfrak{A}[\mathcal{N}(\zeta h)]&=\mathfrak{A}[i\overline{T}(\zeta h)]+
\mathfrak{A}[(\mathcal{N}-i\overline{T})(\zeta h)]\notag\\
&=i\left(\mathfrak{A}\circ\overline{T}\right)[\zeta h]+\mathfrak{A}\circ
(\mathcal{N}-i\overline{T})[\zeta h] \; .
\end{align}
For the induction step suppose that
\begin{align}\label{E:ATkstep}
\zeta h=i^{k-1}\left(\mathfrak{A}\circ\overline{T}\right)^{k-1}[\zeta h]+
\sum_{j=0}^{k-2}i^{j}\left(\mathfrak{A}\circ\overline{T} \right)^{j}\circ\mathfrak{A}
\circ(\mathcal{N}-i\overline{T})[\zeta h]
\end{align}
holds. Using identity \eqref{E:ATk=1} to replace $\zeta h$ in the first term
of the right hand side of \eqref{E:ATkstep} gives the result.
\end{proof}
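For orientation, the case $k=2$ of \eqref{E:zetahformula} reads, after substituting \eqref{E:ATk=1} into its own first term:
\begin{align*}
\zeta h = -\left(\mathfrak{A}\circ\overline{T}\right)^{2}[\zeta h]
+ \mathfrak{A}\circ(\mathcal{N}-i\overline{T})[\zeta h]
+ i\left(\mathfrak{A}\circ\overline{T}\right)\circ\mathfrak{A}\circ(\mathcal{N}-i\overline{T})[\zeta h].
\end{align*}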
In the second step, we will write the powers
$\left(\mathfrak{A}\circ\overline{T}\right)^{\ell}$ in terms of compositions of the form
$\left(\overline{T}\right)^{m}\circ \widetilde{\mathfrak{A}}$ with good control of $\widetilde{\mathfrak{A}}$. This is accomplished in
the following lemma; its proof relies heavily on the machinery of
\S\ref{SSspaces} and \S\ref{SScommutators}.
\begin{lemma}\label{L:ATellstuff}
Let $A\in\mathcal{A}_{0,0}^{1}$, and let $X$ be a vector field with smooth coefficients on
$\overline{\Omega}$. Then, for any $\ell\in\mathbb{N}$, there exist operators
$G_{m}^{\ell}$, $m\in\{0,\dots,\ell\}$, which belong to
$\sum_{\left\{\nu\leq\ell-m, |\alpha|-\nu\geq 0
\right\}}\mathcal{A}_{\alpha,\nu}^{\ell}$
such that
\begin{align*}
\left(A\circ X \right)^{\ell}=\sum_{m=0}^{\ell}X^{m}\circ G_{m}^{\ell}.
\end{align*}
\end{lemma}
\begin{proof}
The proof is again by induction on $\ell$. For $\ell=1$, commuting $A$ past $X$ yields
\begin{align*}
A\circ X=X\circ A+\left[A,X\right]=:X\circ G_{1}^{1}+G^{1}_{0}.
\end{align*}
It follows from the definition of $A$ that $G_{1}^{1}=A\in\mathcal{A}_{0,0}^{1}$.
Furthermore, Lemma \ref{P:basiccommutator} gives that $G_{0}^{1}$ belongs to
$\mathcal{A}_{0,0}^{1} + \mathcal{A}_{1,1}^{1}$.
For the induction step suppose that
\begin{align*}
\left(A\circ X \right)^{\ell-1}=\sum_{m=0}^{\ell-1}X^{m}\circ G_{m}^{\ell-1}
\end{align*}
holds for some $G_{m}^{\ell-1}\in \sum_{\left\{\nu\leq\ell-1-m, |\alpha|-\nu\geq 0
\right\}}\mathcal{A}_{\alpha,\nu}^{\ell-1}$.
Then
\begin{align}\label{E:ATellstuff}
\left(A\circ X \right)^{\ell}&=\sum_{m=0}^{\ell-1}A\circ X^{m+1}\circ G_{m}^{\ell-1}
=\sum_{m=1}^{\ell}A\circ X^{m}\circ G_{m-1}^{\ell-1}\notag\\
&=\sum_{m=1}^{\ell}\left(X^{m}\circ A\circ
G_{m-1}^{\ell-1}+\left[A,X^{m}\right]\circ
G_{m-1}^{\ell-1}
\right).
\end{align}
Since $A\in\mathcal{A}_{0,0}^{1}$, it follows that
\begin{align*}
A\circ G_{m-1}^{\ell-1}\in\sum_{\left\{\nu\leq\ell-m,
|\alpha|-\nu\geq 0 \right\}} \mathcal{A}_{\alpha,\nu}^{\ell}\,.
\end{align*}
To deal with the second term on the right-hand side of \eqref{E:ATellstuff}, we use the
formula
\begin{align}\label{commexpand}
\left[A,X^{m} \right]=
\sum_{j=0}^{m-1}{m \choose j} \,X^{j}\circ C_{X}^{m-j}(A),
\end{align}
where the iterated commutators $C_{*}^{*}$ are defined just before Lemma
\ref{C:basiccommutator1}. (\eqref{commexpand} is purely algebraic and is easily proved by
induction on $m$; alternatively, see \cite{DerridjTartakoff76}, Lemma 2 or
\cite{Straube10}, formula (3.54).) What must be shown then, is that
\begin{align*}
C_{X}^{m-j}(A)\circ G_{m-1}^{\ell-1}
\in \sum_{\left\{\nu\leq\ell-j, |\alpha|-\nu\geq 0
\right\}}\mathcal{A}_{\alpha,\nu}^{\ell}\; .
\end{align*}
For that, recall first that Lemma \ref{C:basiccommutator1} says that
$C_{X}^{m-j}(A)\in\sum_{k=0}^{m-j}\mathcal{A}_{k,k}^{1}$. Furthermore, it follows from
Lemma \ref{C:basiccommutator3} that for $|\beta|\leq k$
\begin{align*}
D^{\beta}\circ G_{m-1}^{\ell-1}=G_{m-1}^{\ell-1}\circ D^{\beta}+\left[D^{\beta},
G_{m-1}^{\ell-1}\right]
\in\sum_{\left\{\nu\leq\ell-m, |\alpha|-\nu\geq
0 \right\}}\mathcal{A}_{\alpha,\nu+k}^{\ell-1}\;.
\end{align*}
It then follows that $C_{X}^{m-j}(A)\circ G_{m-1}^{\ell-1}$ is contained in
\begin{align*}
\sum_{\left\{\nu\leq\ell-m, |\alpha|-\nu\geq
0\,,\,0\leq k\leq m-j \right\}}\mathcal{A}^{\ell}_{(k,\alpha),\nu+k} \; .
\end{align*}
Set $\widetilde{\nu}=\nu+k$, then $\nu\leq\ell-m$ implies that $\widetilde{\nu}\leq
\ell-j$, since $k\leq m-j$. Moreover, setting $\widetilde{\alpha}=(k,\alpha)$ yields
$|\widetilde{\alpha}|-\widetilde{\nu}=|\alpha|-\nu\geq 0$. Hence
\begin{align*}
C_{X}^{m-j}(A)\circ G_{m-1}^{\ell-1}\in
\sum_{\left\{\widetilde{\nu}\leq\ell-j, |\widetilde{\alpha}|-\widetilde{\nu}\geq 0
\right\}}\mathcal{A}^{\ell}_{\widetilde{\alpha},\widetilde{\nu}} \;,
\end{align*}
which completes the proof.
\end{proof}
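As a quick sanity check of \eqref{commexpand}, the case $m=2$ can be verified by hand:
\begin{align*}
[A,X^{2}]=[A,X]\circ X+X\circ[A,X]
=2\,X\circ C_{X}^{1}(A)+C_{X}^{2}(A)
={2\choose 1}X\circ C_{X}^{1}(A)+{2\choose 0}C_{X}^{2}(A),
\end{align*}
where the middle equality uses $[A,X]\circ X=X\circ[A,X]+\bigl[[A,X],X\bigr]$.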
We are now ready to prove Proposition \ref{P:antiderivative}. This amounts to combining
Lemmas \ref{L:zetahformula} and \ref{L:ATellstuff}, to obtain a representation (i) with
$\mathcal{H}^{k}_{j} \in C^{\infty}(\Omega)$; this works for $h \in C^{\infty}(\Omega)$.
The final step then consists in obtaining the required estimates (ii) and membership
in $H^{k}(\Omega)$ when $h$ is holomorphic.
\begin{proof}[Proof of Proposition \ref{P:antiderivative}]
For $h$ and $\zeta$ given as in Lemma \ref{L:zetahformula}, use Lemmas
\ref{L:zetahformula} and \ref{L:ATellstuff} (with $\mathfrak{A}$ and $\overline{T}$ in
place of $A$ and $X$, respectively) in
\eqref{E:zetahformula} to obtain that
$$\zeta h=\sum_{m=0}^{k}\overline{T}^{m}\mathcal{H}_{m}^{k},$$
where
\begin{align}\label{H-km1}
\mathcal{H}_{k}^{k} := G_{k}^{k}[i^{k}\zeta h],
\end{align}
and
\begin{align}\label{H-km2}
\mathcal{H}_{m}^{k} := \left(i^{k}G_{m}^{k}+\sum_{j=m}^{k-1}i^{j}G_{m}^{j}
\circ\mathfrak{A}\circ(\mathcal{N}-i\overline{T}) \right)[\zeta h]\;,\; 0 \leq m \leq k-1.
\end{align}
Because $h \in C^{\infty}(\Omega)$, so are the $\mathcal{H}^{k}_{m}$, $0\leq m \leq k$.
This establishes the representation (i) in Proposition \ref{P:antiderivative}, except for
membership in $H^{k}(\Omega)$.
We now prove the estimates (ii). By Lemmas \ref{L:ATellstuff} and \ref{C:Aellestimate},
it follows that
\begin{align*}
\left\|\mathcal{H}_{k}^{k}\right\|\lesssim\left\|t_{x}^{k}\cdot \zeta h\right\|\lesssim\|h\|_{-k},
\end{align*}
where the second step follows from \eqref{E:eqnorms2}; we use here that $h$ in
Proposition \ref{P:antiderivative} is holomorphic. By analogous reasoning, we have
\begin{align}\label{G-mk}
\|G_{m}^{k}[\zeta h]\| \lesssim \sum_{|\beta| \leq
|\alpha|}\|t_{x}^{k+|\alpha|}D^{\beta}h\| \lesssim \|D^{\beta}h\|_{-k-|\alpha|} \lesssim
\|h\|_{-k},
\end{align}
where $\alpha$ is such that $G_{m}^{k} \in \mathcal{A}^{k}_{\alpha,\nu}$ for some $\nu
\leq |\alpha|$. Note that $D^{\beta}h$ is also holomorphic, so
that \eqref{E:eqnorms2} applies. The last inequality holds because $|\beta| \leq
|\alpha|$.
To obtain the claimed estimates for the remaining terms in $\mathcal{H}_{m}^{k}$, first
note that because $G^{j}_{m} \in\sum_{\nu\leq j-m,|\alpha|-\nu\geq
0}\mathcal{A}^{j}_{\alpha,\nu}$, $G^{j}_{m}\circ\mathfrak{A}$ is a sum of terms of the
form
\begin{align*}
A^{1}_{\alpha_{1}}\circ A^{1}_{\alpha_{2}}\circ\cdots\circ A^{1}_{\alpha_{j}}\circ
D^{\gamma}\circ\mathfrak{A} = A^{1}_{\alpha_{1}}\circ\cdots\circ
A^{1}_{\alpha_{j}}\circ\mathfrak{A}\circ D^{\gamma} + A^{1}_{\alpha_{1}}\circ\cdots\circ
A^{1}_{\alpha_{j}}\circ\left[D^{\gamma},\mathfrak{A}\right],
\end{align*}
where $|\gamma| \leq \nu \leq j-m \leq j \leq k-1$. Because
$\left[D^{\gamma},\mathfrak{A}\right] \in \mathcal{A}^{1}_{0,|\gamma|-1} +
\mathcal{A}^{1}_{1,|\gamma|}$ (Lemma \ref{L:basiccommutator2}), it follows, in view of
Lemma \ref{C:A1estimate}, that
\begin{align}\label{G-jm}
\left\|G^{j}_{m}\circ\mathfrak{A}\left(\mathcal{N} - i\overline{T}\right)[\zeta h]\right\|
\; \lesssim \; \left\|\left(\mathcal{N} - i\overline{T}\right)[\zeta h]\right\|_{k-1}.
\end{align}
The Cauchy--Riemann equations for $h$ yield
\begin{align}\label{E:YNpass}
\mathcal{N}h=iT_{1}h=i\overline{T_{1}}h = i\overline{T}h
\end{align}
since $T_{1}=\overline{T_{1}}$ and $\overline{L}h=0$. Consequently,
\begin{align}\label{holbenign}
(\mathcal{N}-i\overline{T})[\zeta
h]=\left((\mathcal{N}-i\overline{T})[\zeta]\right)h.
\end{align}
Since $\left((\mathcal{N}-i\overline{T})[\zeta]\right)$ has compact support in
$\Omega\cap U$ that does not depend on $h$, it follows (for example from
\eqref{E:eqnorms2} by using that factors of $r$ are bounded away from zero on this
support, so that introducing them will at most `increase' the norm) that there exists a
constant $C_{k}$ such that
\begin{align}\label{holweakstrong}
\left\| (\mathcal{N}-i\overline{T})[\zeta h]\right\|_{k-1}
= \left\|
\left( (\mathcal{N}-i\overline{T})[\zeta]\right) h\right\|_{k-1}
\leq C_{k}\|h\|_{-k} \; .
\end{align}
Combining \eqref{G-mk}, \eqref{G-jm}, and \eqref{holweakstrong} gives that
\begin{align*}
\left\|\mathcal{H}_{m}^{k} \right\|\lesssim \|h\|_{-k}
\end{align*}
for all $m\in\{0,\dots,k-1\}$. This concludes the proof of (ii).
It remains to see that $\mathcal{H}^{k}_{m} \in H^{k}(\Omega)$. First note that because
$(\mathcal{N}-i\overline{T})[\zeta h] \in C^{\infty}_{0}(\Omega)$, it
follows easily from the form of the operators $G^{j}_{m}$ that $G^{j}_{m}\circ
\mathfrak{A}\circ (\mathcal{N}-i\overline{T})[\zeta h] \in C^{\infty}(\overline{\Omega})
\subset H^{k}(\Omega)$, $m \leq j \leq k-1$. The remaining contributions in
\eqref{H-km1} and \eqref{H-km2} that need to be checked are of the form $G^{k}_{m}[\zeta
h]$, $0 \leq m \leq k$. That is, we need to show that $D^{\beta}G^{k}_{m}[\zeta h] \in
L^{2}(\Omega)$ for $|\beta| \leq k$. But $D^{\beta}G^{k}_{m} = G^{k}_{m}D^{\beta} -
\left[G^{k}_{m}, D^{\beta}\right]$. The argument is now analogous to the
discussion above. For example, in view of Lemmas
\ref{L:ATellstuff}, \ref{C:basiccommutator3}, and \ref{C:Aellestimate}, the commutator
$\left[G^{k}_{m}, D^{\beta}\right]$ is a sum of terms in
$\mathcal{S}^{k+|\alpha|}_{\nu+|\beta|-1} + \mathcal{S}^{k+|\alpha|+1}_{\nu+|\beta|}
\subseteq \mathcal{S}^{k+|\alpha|}_{\nu+|\beta|}$, with $\nu\leq k-m$ and $|\alpha|\geq
\nu$. Therefore, arguing as in \eqref{G-mk}, we see that the contribution of each of these
terms to $\left\|\left[G^{k}_{m}, D^{\beta}\right](\zeta h)\right\|$ is dominated by
$\sum_{|\gamma|\leq \nu+|\beta|}\|D^{\gamma}h\|_{-k-|\alpha|}$. Because $|\gamma|\leq
\nu+|\beta|\leq |\alpha|+|\beta|\leq |\alpha|+k$, all these terms are indeed dominated
by $\|h\|$.
The argument for $G^{k}_{m}D^{\beta}$ is similar. This concludes the proof of Proposition
\ref{P:antiderivative}.
\end{proof}
\section{Proof of Theorem \ref{T:holconjsmoothing}}\label{S:5}
\begin{proof}[Proof of Theorem \ref{T:holconjsmoothing}]
As in the proof of Theorem \ref{T:main}, we invoke duality via Proposition
\ref{P:pieceofduality} and Remark \ref{dualityRemark}:
it suffices to show that
\begin{align}
\left|\int_{\Omega}f\overline{g} \right|\leq C \|f\|\cdot\|g\|_{-k_{1}}
\end{align}
for $f\in \overline{A^{0}}$ and $g\in A^{0}(\Omega)$. We now use that for
holomorphic functions, membership in $A^{-m}(\Omega)$ for some $m$ is equivalent to having
a blow up rate near the boundary of at most a power of $1/d_{b\Omega}(z)$, where
$d_{b\Omega}(z)$ is the boundary distance function. We have the estimates
\begin{align}\label{E:Bell}
C_{m}^{1}\|h\|_{-m -2n-2}\leq\sup_{z\in\Omega}|h(z)|\cdot
d_{b\Omega}(z)^{m+2n}
\leq C_{m}^{2}\|h\|_{-m};
\end{align}
in both estimates, if the right-hand side is finite, then so is the left hand side
(and the estimate holds; that is, the estimates are genuine estimates as opposed to \emph{a
priori} estimates). The inequalities \eqref{E:Bell} are essentially Lemma 2 in
\cite{Bell82b}, except that the norm in the leftmost term there is the
$(-m-4n)$-norm. The stronger version given here is in \cite{Straube84}, Theorems 1.1
and 1.3; see in particular the proof of the implication (iii) $\Rightarrow$ (iv) in
Theorem 1.1.
Applying the second inequality in \eqref{E:Bell} to $\overline{f}$ (for $m=0$) and to $g$ (for $m=k_1$)
yields
\begin{equation*}
\sup_{z\in\Omega}|f(z)|\cdot d_{b\Omega}(z)^{2n}\leq C_{0}^{2}\,\|f\| \quad\text{ and }\quad
\sup_{z\in\Omega}|g(z)|\cdot d_{b\Omega}(z)^{k_1+2n}\leq C_{k_1}^{2}\,\|g\|_{-k_1}.
\end{equation*}
Multiply these inequalities, then apply the first inequality in \eqref{E:Bell} to
$\overline{f}g$ (which is holomorphic). The conclusion is
that $\overline{f}g$, hence $f\overline{g}$, belongs to $H^{-k_{1}-4n-2}(\Omega)$, and
\begin{align}\label{E:barfgestimate}
\left\|\overline{f} g\right\|_{-k_{1}-4n-2}\leq C_{k_{1}}\|f\|\cdot\|g\|_{-k_{1}}.
\end{align}
On the other hand, the following estimate also holds:
\begin{align}\label{E:fbargestimate}
\left|\int_{\Omega}f\overline{g} \right|=\left|\int_{\Omega}\overline{f}g
\right|\leq\widetilde{C}_{k_{1}}\left\|\overline{f}g
\right\|_{-k_{1}-4n-2}\|1\|_{k_{1}+4n+2}.
\end{align}
The inequality in \eqref{E:fbargestimate} is Proposition 1.9 in \cite{Straube84} which
gives this estimate for the pairing of a harmonic function in some $H^{-m}(\Omega)$ with
an arbitrary function in $C^{\infty}(\overline{\Omega})$. Note that since both $f$ and $g$
are in $L^{2}(\Omega)$, $\overline{f}g$ is integrable, and the integral denoted by
$\widetilde{\int}$ in \cite{Straube84} coincides with the ordinary integral over $\Omega$.
Combining \eqref{E:barfgestimate} and \eqref{E:fbargestimate} completes the proof of
Theorem \ref{T:holconjsmoothing}.
\end{proof}
\begin{remark}\label{duality}
When Condition R holds, one can extend $B$ by duality to a projection $\widetilde{B}$ from
the dual $\left(C^{\infty}(\overline{\Omega})\right)^{*}$ into
$\cup_{k=1}^{\infty}A^{-k}(\Omega)$. This was observed in \cite{Kerzman72}, in a note
added in proof (where the idea is attributed to Nirenberg and Tr\`{e}ves). The conclusion
of Theorem \ref{T:holconjsmoothing} remains true for this extended projection: when
$f$ is in $\overline{\cup_{k=1}^{\infty}A^{-k}(\Omega)}$, then $\widetilde{B}f$ is smooth
up to the boundary (see \cite{Straube84}, Theorem 3.4, for the `canonical' inclusion
$\overline{\cup_{k=1}^{\infty}A^{-k}(\Omega)} \hookrightarrow
\left(C^{\infty}(\overline{\Omega})\right)^{*}$). A similar discussion applies when
\eqref{main1} holds.
\end{remark}
\vskip .5cm
\end{document} |
\begin{document}
\title[Bohr's phenomenon on a regular condenser in the complex plane]{Bohr's phenomenon on a regular condenser in the complex plane}
\date{\today}
\author[P. Lassère]{Lassère Patrice}
\email{[email protected]}
\address{Lassère Patrice : Institut de Mathématiques, UMR CNRS 5580,
Universit\'e Paul Sabatier,
118 route de Narbonne, 31062 TOULOUSE, FRANCE}
\author[E. Mazzilli]{Mazzilli Emmanuel}
\email{[email protected]}
\address{Université Lille 1, 59655 Cedex, VILLENEUVE D'ASCQ, FRANCE.}
\keywords{Functions of a complex variable, Inequalities, Schauder basis.}
\subjclass{Primary 30B10, 30A10.}
\begin{abstract}
We prove the following generalisation of Bohr's theorem : let $K\subset\mathbb C$ be a continuum, $(F_{K,n})_{n\geq 0}$ its Faber polynomials, and $\Omega_R$ the level sets of the Green function of $\bar{\mathbb C}\setminus K$ with singularity at infinity ; then there exists $R_0$ such that for any $f=\sum_n a_n F_{K,n}\in\mathscr O(\Omega_{R_0})$, the condition $f( \Omega_{R_0})\subset D(0,1)$ implies $\sum_n\left\vert a_n \right\vert\cdot\Vert F_{K,n}\Vert_K<1$.
\end{abstract}
\maketitle
\section{Introduction}
The well-known theorem of Bohr \cite{bohr} states that for any function $f(z)=\sum_{n\geq 0}\,a_n z^n$ holomorphic on the unit disc $\mathbb D$ :
$$\left(\ \left \vert\sum_{n\geq 0}\, a_n z^n\right\vert<1,\ \forall\,z\in\mathbb D\ \right)\ \implies\
\left( \sum_{n\geq 0}\, \left\vert a_n z^n\right\vert<1,\ \forall\,z\in D(0,1/3)\ \right)$$
and the constant $1/3$ is optimal.
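The optimality of $1/3$ is classically seen via the M\"obius transformations (a standard computation, recalled here for convenience) :
$$f_a(z)=\frac{a-z}{1-az}=a-(1-a^{2})\sum_{n\geq 1}a^{n-1}z^{n},\qquad 0<a<1,$$
which map $\mathbb D$ into itself ; for $\vert z\vert=r$ the sum of the moduli of the terms equals $a+(1-a^{2})\frac{r}{1-ar}$, which exceeds $1$ precisely when $r>\frac{1}{1+2a}$, and $\frac{1}{1+2a}\to\frac{1}{3}$ as $a\to 1^{-}$.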
Our goal in this work is to study Bohr's theorem in the following context.
Let $K\subset\mathbb C$ be a compact set in the complex plane. What are the open sets $\Omega$ containing $K$ such that the space $\mathscr O(\Omega)$ admits a topological basis\footnote{For all $f\in\mathscr O(\Omega)$ there exists a unique sequence $(a_n)_n$ of complex numbers such that $f=\sum_{n\geq 0} a_n\varphi_n$ in the usual topology of compact convergence on $\mathscr O(\Omega)$.} $(\varphi_n)_n$ which satisfies, for every holomorphic function $f=\sum_{n\geq 0} a_n\varphi_n\in\mathscr O(\Omega)$ :
$$\left(\ \left \vert\sum_{n\geq 0}\, a_n \varphi_n(z)\right\vert<1,\ \forall\,z\in\Omega\ \right)\ \implies\
\left( \sum_{n\geq 0}\, \left\vert a_n \right\vert\cdot\Vert\varphi_n\Vert_K<1\ \right)\ ?$$
In this case we say that the family $(K, \Omega, (\varphi_n)_{n\geq 0})$ satisfies \textbf{Bohr's property} or that \textbf{Bohr's phenomenon} is observed.
\noindent \textbf{Some examples : } $\bullet$ The family $(\overline{D(0,1/3)}, D(0,1), (z^n)_{n\geq 0})$ satisfies Bohr's property (this is Bohr's classical theorem).
\noindent $\bullet$ Note that the family $(\overline{D(0,1/3)}, D(0,1), ((3z)^n)_{n\geq 0})$ also satisfies Bohr's property. This example will play a special role in what follows, since $((3z)^n)_{n\geq 0}$ is the Faber polynomial basis associated with the compact set $\overline{D(0,1/3)}$.
\noindent $\bullet$ On the other hand, the family $(\overline{D(0,2/3)}, D(0,1), (z^n)_{n\geq 0})$ does not satisfy Bohr's property (by the optimality of the constant $1/3$ in Bohr's theorem).
As a starting point, for a given compact set $K$ we must choose a ``good'' open neighborhood $\Omega$, one for which $\mathscr O(\Omega)$ admits a ``nice'' basis $(\varphi_n)_n$. ``Nice'' here means that there are good local estimates for the $\varphi_n$ on $\Omega$, but not only that : unlike other well-known theorems for power series on the disc \cite{lasserenguyen}, Bohr's theorem does not extend to every basis. For example, as pointed out by Aizenberg \cite{AAD}, it is necessary that one of the elements of the basis be a constant function.
We want to focus on the following situation :
\begin{defn} Let $K$ be a compact set in $\mathbb C$ containing at least two points. $K$ is a continuum if $\overline{\mathbb C}\setminus K$ is simply connected.
\end{defn}
When $K$ is a continuum it can be associated with the sequence $(F_{K,n})_n$ of its Faber polynomials. In more detail, let $\Phi\ :\ \overline{\mathbb C}\setminus K\to \overline{\mathbb C}\setminus{\overline {\mathbb D}}$ be the unique conformal mapping satisfying
$$\Phi(\infty)=\infty,\quad \Phi'(\infty)=\gamma>0.$$
Then $\Phi$ admits a Laurent expansion near the point at infinity of the form:
$$\Phi(z)=\gamma z+\gamma_0+\dfrac{\gamma_1}{z}+\dots+\dfrac{\gamma_k}{z^k}+\dots$$
and then for $n\in\mathbb N$ :
$$\begin{aligned}\Phi^n(z)&=\left( \gamma z+\gamma_0+\dfrac{\gamma_1}{z}+\dots+\dfrac{\gamma_k}{z^k}+\dots \right)^n\\
&=\underbrace{\gamma^n z^n+a_{n-1}^{(n)}z^{n-1}+\dots+a_{1}^{(n)}z+a_{0}^{(n)}}_{ F_{K,n}(z)}+\underbrace{ \dfrac{b_{1}^{(n)}}{z}+\dfrac{b_{2}^{(n)}}{z^2}+\dots+\dfrac{b_{k}^{(n)}}{z^k}+\dots}_{ E_{K,n}(z)}
\end{aligned}$$
$F_{K,n}$ is the polynomial part of the Laurent expansion at infinity of $\Phi^n$. The sequence $(F_{K,n})_n$ is a common basis for the spaces $\mathscr O(K),\ \mathscr O(\Omega_R), (R>1)$, where\footnote{\samepage $\Omega_R$ is also the level set of the Siciak-Zaharjuta extremal function $\Phi_K(z):=\sup\{\vert p(z)\vert^{1/{\text{deg}}(p)}\}$ where the supremum is taken over all complex polynomials $p$ such that $\Vert p\Vert_K\leq 1$. $\Phi_K$ is also related to the classical Green function for $\bar{\mathbb C}\setminus K$ with pole at infinity $g_K\ :\ \mathbb C\setminus K\to ]0,+\infty[$ by the equality $\log\Phi_K=g_K$ on $\mathbb C\setminus K$. Recall that $g_K$ is the unique positive harmonic function on $\mathbb C\setminus K$ such that $\lim_{z\to\infty} \left( g_K(z)-\log\vert z\vert\right) $ exists and is finite and $\lim_{z\to w}g_K(z)=0,\ \forall\,w\in\partial(\mathbb C\setminus K)$. } $\Omega_R:=\{ z\in\mathbb C\ :\ \vert \Phi(z)\vert<R\}\cup K$. This polynomial basis exhibits remarkable properties (the relevant reference is the work of P.~K.~Suetin \cite{suetin}) similar to those of the Taylor basis $(z^n)_n$ on the discs $D(0,R)$. In particular, the level sets $\Omega_R$ are the convergence domains of the series $\sum_{n\geq 0} a_n F_{K,n}$, and for any compact $L\subset \overline{\mathbb C}\setminus K$ we have
$$\lim_{n\to\infty} \Vert F_{K,n}\Vert_L^{1/n}= \Vert \Phi\Vert_L. $$
This formula is the one variable version of a more general formula (see \cite{nguyen}).
In this work, we show (Theorem 3.1) that for every continuum $K$ there exists an $R_0>1$ such that for any $R\geq R_0$ the family
$(K,\Omega_R, (F_{K,n})_{n\geq 0})$ satisfies Bohr's property.
We start by studying the case of the elliptic condensator (i.e. $K=[-1,1]$), which was considered in a different form by Kaptanoglu and Sadik in an interesting study \cite{kap} that motivated this article (see Remark 2.4).
\noindent \textbf{Acknowledgement.} We thank the anonymous referee for useful suggestions that significantly improved the paper.
\section{An example: the ``elliptic'' condensator $K=[-1,1]$}
Let us examine in this section the particular case $K:=[-1,1]$. This is a ``fundamental'' example, because it is one of the very few cases (see \cite{suetin}, \cite{he} for circular lunes) where the explicit form of the conformal map $\Phi \ :\ \Omega:=\overline{\mathbb{C}}\setminus K\to \{\vert w\vert >1\}$ allows us to obtain more precise estimates for the Faber polynomials of $K$ (see \cite{suetin}).
\noindent Here, $\Phi^{-1}(w)={1\over 2}(w+{w}^{-1})$ is the Zhukovskii function, and the Faber polynomials $(F_{K,n})_n$ form a common basis for the spaces $\mathscr O(\Omega_R)$, $(R>1)$, where the boundary $\partial\Omega_R=\Phi^{-1}(\{\vert w \vert=R\})$ of the level set $\Omega_R$ is given by the equation:
$$2z=R e^{i\theta}+R^{-1} e^{-i\theta}.$$
These are ellipses with foci $1$ and $-1$ and eccentricity $\varepsilon= \frac{2R}{1+R^2}$.
We observe that, in the target coordinate ``$w$'', the polynomials $F_{K,n}$ take a much more convenient form for computation than in the source coordinate ``$z$''.
Indeed, $\Phi$ has a simple pole at infinity, which implies that
$\Phi^n+{1/ \Phi^n}$ and $\Phi^n$ have the same principal part. We observe also that
$\Phi(z)=z+\sqrt{z^2-1},$
which implies
${1/\Phi(z)}=z-\sqrt{z^2-1}$. From these last identities we can deduce\footnote{We can also deduce (see \cite{suetin}, pp. 36-37) that if $K=[-1,1]$, then the Faber polynomials are the Tchebyshev polynomials of the first kind (up to a constant $2$ if $n\geq 1$) : $F_{K,0}(z)=T_0(z),\ F_{K,n}(z)=2T_n(z),\ (n\geq 1)$ where $T_n(x)=\cos(n{ \text{arccos}} x)$.} that
${1/\Phi^n}+\Phi^n$ extends as a polynomial on $\Bbb{C}$. This is $F_{K,n}$ and if we write $F_{K,n}$ in the target coordinates ``$w$'', we get :
$$F_{K,n}(w)=w^n+ w^{-n}.$$
This important equality will allow us to write any function $f(z)=\sum_n a_n F_{K,n}(z)$, $z\in \Omega_R$, holomorphic on $\Omega_R$, in the form
$$f(z)=f(\Phi^{-1}(w))=\sum_n a_n F_{K,n}(\Phi^{-1}(w))=\sum_n a_n \left( w^n+w^{-n}\right),\quad 1<\vert w\vert <R,$$
and we shall often use this device from now on.
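As a quick numerical sanity check (an illustrative aside, not part of the text; the sample points are arbitrary), the identity $F_{K,n}=w^n+w^{-n}$ in the target coordinate can be compared with the Chebyshev relation $F_{K,n}=2T_n$ $(n\geq 1)$ recalled in the footnote above:

```python
import cmath

def faber_via_w(n, w):
    # F_{K,n} at z = (w + 1/w)/2, written in the target coordinate w
    return w**n + w**(-n)

def faber_via_chebyshev(n, z):
    # 2*T_n(z), computed as 2*cos(n*arccos z); valid for complex z as well
    return 2 * cmath.cos(n * cmath.acos(z))

# compare at a few points w with |w| > 1
for w in [1.5 + 0.2j, 2.0 - 1.0j, 0.3 + 1.4j]:
    z = (w + 1 / w) / 2  # the Zhukovskii map Phi^{-1}
    for n in range(1, 6):
        assert abs(faber_via_w(n, w) - faber_via_chebyshev(n, z)) < 1e-9
```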
Now let us look at Bohr's phenomenon for the elliptic condensator
$(K:=[-1,1], \Omega_R, (F_{K,n})_{n\geq 0})$ when $R>R_0$
is large enough.
The next proposition is, in our particular case,
the analogue of Carath\'eodory's inequality.
\begin{prop} Let
$f(w)=a_0+\sum_{1}^{\infty}a_n(w^n+ w^{-n})\in \mathscr O(\{1<\vert w\vert <R\})$. Suppose that
$\texttt{re}({f})>0$. Then:
$$\vert a_n\vert\leq {2\texttt{re}(a_0)\over
R^n- R^{-n}},\quad\forall\,n>0. $$
\end{prop}
\noindent\textbf{Proof : } Let $1<r<R$, then for all $n>0$ we have
$$\begin{aligned}&a_n r^{-n}&=&{1\over
2\pi}\int_{0}^{2\pi}e^{in\theta}f(r e^{i\theta})d\theta,\\
&\overline a_nr^n&=&{1\over
2\pi}\int_{0}^{2\pi}e^{in\theta}\bar f(r e^{i\theta})d\theta.
\end{aligned}$$
which easily gives (remember that $\texttt{re}(f)>0$) :
$$\vert a_n\vert \cdot\left(r^n-r^{-n}\right)\leq\left\vert {a_nr^{-n}}+\bar a_nr^n\right\vert\leq {1\over
\pi}\int_{0}^{2\pi}\texttt{re}(f(r e^{i\theta}))d\theta=2\texttt{re}({a_0}),$$
and the expected result follows by letting $r$ tend to $R$.
$\blacksquare$
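Proposition 2.1 can be checked numerically on a concrete example (an illustrative sketch; the function, radii and quadrature size below are ad hoc choices): take $f(w)=1+0.2(w+w^{-1})$ on $\{1<\vert w\vert<3\}$, for which $\texttt{re}(f)\geq 1-0.2(3+1/3)>0$.

```python
import cmath
from math import pi

R, a0, a1 = 3.0, 1.0, 0.2
f = lambda w: a0 + a1 * (w + 1 / w)  # re(f) >= 1 - 0.2*(R + 1/R) > 0 on 1 < |w| < R

# recover a_1 * r^{-1} via the integral formula from the proof
# (left Riemann sum: exact up to rounding for trigonometric polynomials)
r, N = 2.0, 4096
s = sum(cmath.exp(1j * 2 * pi * k / N) * f(r * cmath.exp(1j * 2 * pi * k / N))
        for k in range(N)) / N
assert abs(s - a1 / r) < 1e-9

# Caratheodory-type bound of Proposition 2.1: |a_1| <= 2 re(a_0)/(R - 1/R)
assert abs(a1) <= 2 * a0 / (R - 1 / R)
```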
\begin{lem} Let $f=a_0+\sum_{n=1}^{\infty}a_n(w^n+w^{-n}) \in \mathscr O(\{1<\vert w\vert <R\})$. Suppose that $\vert
f\vert <1$ and $a_0>0$, then\footnote{ Note that $\vert f\vert<1$ implies $a_0<1$.} we have :
$$\vert a_n\vert\leq {2(1-a_0)\over R^n-R^{-n}}.$$
\end{lem}
\noindent\textbf{Proof : } This is classical: let $g=1-f$; then
$\texttt{re}({g})>0$ on $\{1<\vert w\vert <R\}$ and, by Proposition 2.1 applied to $g$ (whose coefficients for $n\geq 1$ are $-a_n$) :
$$\vert a_n\vert\leq {2(1-a_0)\over R^n-R^{-n}}.$$
$\blacksquare$
\begin{prop} For all $R\geq R_0=5.1284...$ the family $(K:=[-1,1], \Omega_R, (F_{K,n})_{n\geq 0})$ satisfies Bohr's phenomenon ($\partial\Omega_{R_0}$ is the ellipse with eccentricity $\varepsilon_0=0.3757...$). \end{prop}
\noindent\textbf{Proof : } Let $f=a_0+\sum_{1}^{\infty}a_nF_{K,n}\in\mathscr O(\Omega_R)$ and suppose that $\vert f\vert <1$ on
$\Omega_{R}$. In the coordinate ``$w$'' :
$f(w)=a_0+\sum_{1}^{\infty} a_n (w^n+ w^{-n})$ on $\{1<\vert
w\vert <R\}$, and up to a rotation (which changes nothing, by symmetry) we can suppose that
$a_0\geq 0$. Then, by Lemma 2.2 :
$$\begin{aligned}a_0+\sum\vert a_n\vert\cdot\Vert
F_{K,n}\Vert_{K}&\leq
a_0+2(1-a_0)\sum_{n=1}^{\infty}{r^n+{r^{-n}}\over
R^n-{R^{-n}}},\quad (1<r<R)\\
&\leq a_0+(1-a_0)\sum_{n=1}^{\infty}{4R^n\over R^{2n}-1},
\end{aligned}$$
the second inequality being obtained by letting $r$ tend to $1$ (the left-hand side does not depend on $r$).
This gives
$$a_0+\sum_{n=1}^{\infty}\vert a_n\vert\cdot\Vert
F_{K,n}\Vert_{K}<1$$
if
$$\varphi(R):=\sum_{1}^{\infty}{4R^n\over R^{2n}-1}<1.$$
But $\varphi$ is strictly decreasing on $]1,\infty[$, with $\lim_{R\to 1^+}\varphi(R)=+\infty$ and $\lim_{R\to+\infty}\varphi(R)=0$; therefore there exists a unique
$R_0>1$ such that $\varphi(R_0)=1$. Mathematica gives $R_0=5.1284...$, corresponding to an eccentricity of $\varepsilon_0=0.3757...$; hence $(K:=[-1,1], \Omega_R, (F_{K,n})_{n\geq 0})$ satisfies Bohr's phenomenon for all $R\geq R_0$.
$\blacksquare$
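As an aside (an illustrative numerical sketch, not part of the proof), the value of $R_0$ can be reproduced without Mathematica by truncating the series defining $\varphi$ and bisecting; the truncation tolerance and bisection bracket below are ad hoc choices.

```python
def phi(R, tol=1e-14):
    # truncation of phi(R) = sum_{n>=1} 4 R^n / (R^{2n} - 1), for R > 1
    total, n, term = 0.0, 1, 1.0
    while term > tol:
        term = 4 * R**n / (R**(2 * n) - 1)
        total += term
        n += 1
    return total

# phi is strictly decreasing on (1, oo); bisect phi(R) = 1 on an ad hoc bracket
lo, hi = 1.5, 10.0
for _ in range(100):
    mid = (lo + hi) / 2
    if phi(mid) > 1:
        lo = mid
    else:
        hi = mid
R0 = (lo + hi) / 2
eps0 = 2 * R0 / (1 + R0**2)  # eccentricity of the ellipse bounding Omega_{R0}
assert abs(R0 - 5.1284) < 5e-3 and abs(eps0 - 0.3757) < 5e-4
```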
\begin{rem} Using theorem 7 in \cite{kap}, we can deduce a weaker version of Proposition 2.3 with $R_0=5.1573...$ and $\varepsilon_0=0.3738...$; thus Proposition 2.3 is a slightly stronger version of theorem 7 in \cite{kap}. In another work \cite{lasseremanu} we compute exactly the infimum of the $R_0$ satisfying Proposition 2.3, i.e. what we call the Bohr radius of $K=[-1,1]$ in Theorem 3.1.
\end{rem}
\section{Bohr's phenomenon on an arbitrary Green condensator}
\subsection{Estimates of Faber polynomials on a Green condensator} In this paragraph we recall classical inequalities (see \cite{suetin}) for the Faber polynomials of $K$ that we will use in paragraph 3.2.
Let $K\subset\mathbb C$ be a continuum and $(F_{K,n})_{n\geq 0}$ its Faber polynomials. Recall that $\Phi^n(z)=F_{K,n}(z)+E_{K,n}(z)$, where $E_{K,n}$ is the meromorphic part of the Laurent expansion of $\Phi^n$ in a neighborhood of infinity. If $\Omega_r$, $(r>1)$, is the level set $\{ z\in\mathbb C\ :\ \vert\Phi(z)\vert<r\}$, then we have the following integral formulas for the Faber polynomials (see Suetin, \cite{suetin}, p.~42) :
$$\forall\,z\in\Omega_r\ :\quad F_{K,n}(z)={1\over 2\pi i}\int_{\partial\Omega_r}{\Phi^n(t)\over t-z}dt,\ \ \leqno{(1)}$$
$$\forall\,z\in\mathbb C\setminus \overline{\Omega_r}\ :\quad E_{K,n}(z)=-{1\over 2\pi i}\int_{\partial\Omega_r}{\Phi^n(t)\over t-z}dt.\ \ \leqno{(2)}$$
Formula (2) leads to the following estimates, for all $1<r<R$ :
$$\forall\,z\in\mathbb C\setminus {\Omega_R}\ :\quad\vert E_{K,n}(z)\vert\leq \int_{\partial\Omega_r}\left\vert{\Phi^n(t)\over t-z}\right\vert\cdot \vert
dt\vert\leq {r^n\texttt{lg}(\partial\Omega_r)\over \texttt{dist}(z,\partial\Omega_r)},\ \leqno{(3)}$$
($\texttt{lg}(\partial\Omega_r)$ is the Euclidean length of $\partial\Omega_r$ and
$\texttt{dist}(z,\partial\Omega_r)$ is the Euclidean distance from $z$ to $\partial\Omega_r$), and
$$\forall\,z\in\partial\Omega_R\ :\ \quad \vert F_{K,n}(z)\vert\leq R^n\left(1+{r^n\over R^n}\cdot {\texttt{lg}(\partial\Omega_r)\over \texttt{dist}(z,\partial\Omega_r)}\right)
,\ \ \leqno{(4)}$$
for all $1<r<R$. If $R$ is large enough, precisely if $${r^n\over
R^n}\cdot{\texttt{lg}(\partial\Omega_r)\over \texttt{dist}(z,\partial\Omega_r)}<1,$$
then for all $n>0$ we have :
$$\forall\,z\in\partial\Omega_R\ :\ \quad\vert F_{K,n}(z)\vert\geq R^n\left(1-{r^n\over R^n}\cdot{\texttt{lg}(\partial\Omega_r)\over
\texttt{dist}(z,\partial\Omega_r)}\right)>0.$$
From formula (1) we deduce the following estimate, for all $r>1$ and $z\in K$ :
$$\vert F_{K,n}(z)\vert\leq \int_{\partial\Omega_{r}}\left\vert{\Phi^n(t) \over t-z}\right\vert\cdot\vert
dt\vert\leq r^n{\texttt{lg}(\partial\Omega_{r})\over \texttt{dist}(z,\partial\Omega_{r})}.\ \leqno{(5)}$$
If, moreover, the compact $K$ is the closure of a domain bounded by a real analytic Jordan curve, then Carath\'eodory's theorem ensures that $\Phi$ extends as a biholomorphism to a neighborhood of $\partial K$, say up to $\partial\Omega_{r_0}$, where $r_0<1$. From this we get, for all $R>r_0$ :
$$\forall\,z\in\mathbb C\setminus\Omega_R\ :\ \quad\vert E_{K,n}(z)\vert\leq \int_{\partial\Omega_{r_0}}\left\vert{\Phi^n(t)\over t-z}\right\vert\cdot\vert
dt\vert\leq {r_{0}^n\texttt{lg}(\partial\Omega_{r_0})\over \texttt{dist}(z,\partial\Omega_{r_0})},$$
and hence the estimates
$$\begin{aligned} R^n\left(1-{r_{0}^n\over R^n}\cdot{\texttt{lg}(\partial\Omega_{r_0})\over \texttt{dist}(z,\partial\Omega_{r_0})}\right)&\leq\vert F_{K,n}(z)\vert\\
&\leq R^n\left(1+{r_{0}^n\over R^n}\cdot
{\texttt{lg}(\partial\Omega_{r_0})\over \texttt{dist}(z,\partial\Omega_{r_0})}\right),
\end{aligned}$$
for all $z\in\mathbb C\setminus\Omega_R,\ r_0<R$.
\subsection{Bohr's phenomenon on a Green condensator} In this paragraph we extend Proposition 2.3 to every continuum $K$ in the complex plane. Precisely:
\begin{theo}
For every continuum $K\subset\mathbb C$ there exists a constant $R_K>1$ such that for all $R>R_K$ the family $(K,\Omega_R,(F_{K, n})_{n\geq 0})$ satisfies Bohr's phenomenon; the infimum $R_0$ of such $R$ will be called the \textbf{Bohr radius} of $K$.
\end{theo}
For example, the Bohr radius of a closed disc $K=\overline{D(a,r)}$ is $3$, by Bohr's classic theorem, and in \cite{lasseremanu} we compute the exact value of $R_0$ when $K=[-1,1]$.
Before proving Theorem 3.1, some intermediate results are necessary. Let $K$ be a continuum, $(F_{K,n})_{n\geq 0}$ its sequence of Faber polynomials and $z_0\in \partial K$. Consider the family $(\varphi_{n})_{n\geq 0}$ where $\varphi_0\equiv 1$ and
$\varphi_n=F_{K,n}-F_{K,n}(z_0)\ (n\geq 1)$. Clearly $(\varphi_n)_{n\geq 0}$ is again a basis of the spaces $\mathscr O(\Omega_R)$ for all $R>1$, and we have
\begin{theo} The family $( K, \Omega_R, (\varphi_n)_{n\geq 0})$ enjoys Bohr's property for $R$ large enough. That is to say, there exists $R>1$ such that every holomorphic function $f=\sum_n a_n\varphi_n\in\mathscr O(\Omega_R) $ with values in $\mathbb D$ satisfies
$$\sum_{n\geq 0} \vert a_n\vert\cdot\Vert \varphi_n\Vert_K=\vert f(z_0)\vert+\sum_{n\geq 1} \vert a_n\vert\cdot\Vert \varphi_n\Vert_K <1.$$
\end{theo}
\noindent \textbf{Proof : } Let $R_0>1$. We can suppose without loss of generality that $z_0=0$. Because $\varphi_n(0)=0$ for all $n>0$, we can apply
theorem 3.3 in \cite{AAD} on the open set $\Omega_{R_0}$. This implies that there exist a disc
$D(0,\rho_0)$, with $\rho_0$ small enough, and a compact $K_1\subset\Omega_{R_0}$ such that :
$$\vert f(0)\vert
+\sum_{n\geq 1}\vert a_n\vert\cdot \Vert
\varphi_n\Vert_{D(0,\rho_0)}\leq \Vert f\Vert_{K_1},$$
for any function $f=\sum_n\, a_n\varphi_n\in\mathscr O(\Omega_{R_0})$.
Now choose $\rho_1>0$ such that $K_1\subset D(0,\rho_1)$. We have :
$$\vert f(0)\vert
+\sum_{n\geq 1}\vert a_n\vert\cdot \Vert
\varphi_n\Vert_{D(0,\rho_0)}\leq \Vert f\Vert_{D(0,\rho_1)},\leqno{(6)}$$
for all $f=\sum_n a_n\varphi_n\in\mathscr O(\Omega_R)$, where $R$ is chosen large enough so that $D(0,\rho_1)\subset \Omega_R$.
Let $f\in \mathscr O(\Omega_R)$ be such that $\Vert f\Vert_{\Omega_R}\leq 1$; the invariant form of Schwarz's lemma (\cite{goluzin}, chapter 8) gives the following estimate on any disc
$D(0,\rho)\subset \Omega_R$ ($\rho\geq \rho_1$) :
$$\Vert f\Vert_{D(0,\rho_1)}\leq {\rho_1 \rho^{-1}+\vert
f(0)\vert\over 1+\vert f(0)\vert \rho_1 \rho^{-1}}.\leqno{(7)}$$
For $f=f(0)+\sum_{n\geq 1}\, a_n\varphi_n\in\mathscr O(\Omega_R)$
we want to dominate the quantity
$\vert f(0)\vert+\sum_{n\geq 1}\, \vert a_n\vert\cdot\Vert\varphi_n\Vert_{K}$; write
$$\sum_{n\geq 1}\vert
a_n\vert\cdot\Vert\varphi_n\Vert_{K}=\sum_{n\geq 1}\vert
a_n\vert\cdot\Vert\varphi_n\Vert_{D(0,\rho_0)}\times
\dfrac{\Vert \varphi_n \Vert_{K}}{ \Vert \varphi_n \Vert_{D(0,\rho_0)}}.\ \
\leqno{(8)}$$
Let $L$ be a disc contained in $D(0,\rho_0)\setminus K$; then
$$\lim_{n\to\infty} \Vert\varphi_n\Vert_L^{1/n} =R^{\alpha_L}$$
where\footnote{$\omega$ is the extremal function associated with the pair $(K, \Omega_R)$.} $\alpha_L:=\max_{z\in L} \omega(z, K,\Omega_R)$. This is in fact true for every compact $L\subset \Omega_R\setminus K$, and it is an immediate corollary of a result of Nguyen Thanh Van (\cite{nguyen}, page 228; see also \cite{nguyenzeriahi}, \cite{zeriahi} for ``pluricomplex versions''). At this point it is not difficult to deduce
$$\forall\,\varepsilon>0,\ \exists\, C_\varepsilon>0\ :\ \Vert\varphi_n\Vert_K\leq C_\varepsilon R^{n\varepsilon},\ \forall\,n\in\mathbb N,$$
and
$$\exists\, C>0\ :\ \Vert\varphi_n\Vert_L\geq C \cdot R^{n\frac{\alpha_L}{2}},\ \forall\,n\in\mathbb N.$$
It remains to choose $\varepsilon>0$ small enough so that $R^\varepsilon< R^{\frac{\alpha_L}{2}}$. Since $\Vert\varphi_n\Vert_{D(0,\rho_0)}\geq \Vert\varphi_n\Vert_L$, such a choice ensures
$$0\leq \lim_{n\to+\infty}\dfrac{\Vert \varphi_n \Vert_{K}}{ \Vert \varphi_n \Vert_{D(0,\rho_0)}}\leq \lim_{n\to+\infty} \dfrac{C_\varepsilon}{C}\left( R^{\varepsilon-\frac{\alpha_L}{2}}\right)^n=0.$$
So the sequence $\left(\frac{\Vert \varphi_n \Vert_{K}}{ \Vert \varphi_n \Vert_{D(0,\rho_0)}}\right)_n$ is bounded : by (8) there exists $C>0$ such that
$$\sum_{n\geq 1}
\vert a_n\vert \cdot \Vert\varphi_n\Vert_{K}\leq C\sum_{n\geq 1}\vert
a_n\vert\cdot\Vert\varphi_n\Vert_{D(0,\rho_0)},$$
which gives us, together with (6), the estimate : $$\sum_{n\geq
1} \vert a_n\vert \cdot \Vert\varphi_n\Vert_{K}\leq
C\big(\Vert f \Vert_{D(0,\rho_1)}-\vert f(0)\vert\big).$$
Finally, by the invariant Schwarz lemma $(7)$ :
$$\sum_{n\geq
1} \vert a_n\vert \cdot \Vert\varphi_n\Vert_{K}\leq C\left( {\rho_1 \rho^{-1}+\vert f(0)\vert\over 1+\vert f(0)\vert
\rho_1 \rho^{-1}}-\vert f(0)\vert\right)= C\rho_1 \rho^{-1}\left({1-\vert f(0)\vert^2\over 1+\vert f(0)\vert
\rho_1 \rho^{-1}} \right)$$ which leads us to the main estimate
$$\sum_{n\geq
1} \vert a_n\vert \cdot \Vert\varphi_n\Vert_{K}\leq 2C\rho_1 \rho^{-1}(1-\vert f(0)\vert).$$
To conclude, choose $\rho$ large enough so that $2C\rho_1 \rho^{-1}< 1$. Then, for any $R>1$ such that $D(0,\rho)\subset \Omega_R$ and any
$f=f(0)+\sum_{n\geq
1}a_n\varphi_n\in \mathscr O(\Omega_R)$ with $f(\Omega_R)\subset \mathbb D$, we have :
$$\vert f(0)\vert + \sum_{n\geq
1} \vert a_n\vert\cdot\Vert\varphi_n\Vert_{K}< 1.$$
$\blacksquare$
We must now return to the basis $(F_{K,n})_n$ :
\begin{lem} Assume that the family $(K,\Omega,(\varphi_n)_{n\geq 0})$ satisfies Bohr's property, where $\varphi_0\equiv 1$. Let $\widetilde K\subset K$ be another compact, $(\varepsilon_n)_{n\geq 1}$ a complex sequence, and suppose that there exists a constant $0<C<1$ such that
$$\begin{cases} \sup_{z\in \widetilde K}\vert \varphi_n(z)-\varepsilon_n\vert \leq C\cdot \Vert\varphi_n\Vert_K,\quad\forall\,n\in\mathbb N,&\qquad (9) \\
\vert\varepsilon_n\vert\leq (1-C)\cdot \Vert\varphi_n\Vert_K,\quad\forall\,n\in\mathbb N.&\qquad (10)
\end{cases}$$
Then the family $( \widetilde K,\Omega, (\widetilde\varphi_n)_{n\geq 0})$ satisfies Bohr's property with $\widetilde\varphi_0\equiv 1,\ \widetilde\varphi_n:=\varphi_n-\varepsilon_n$.
\end{lem}
\noindent\textbf{Proof : } Let $f=a_0+\sum_{n\geq 1}a_n\varphi_n=a_0+\sum_{n\geq 1}a_n\varepsilon_n+\sum_{n\geq 1}a_n(\varphi_n-\varepsilon_n)\in\mathscr O(\Omega)$
and suppose that $\vert f\vert\leq 1$ on $\Omega$. We have to prove that
$$\left\vert a_0+\sum_{n\geq 1}a_n\varepsilon_n\right\vert+\sum_{n\geq 1}\vert a_n\vert\cdot\Vert \varphi_n-\varepsilon_n\Vert_{\widetilde K}\leq 1.$$
But :
$$\begin{aligned}
\left\vert a_0+\sum_{n\geq 1}a_n\varepsilon_n\right\vert&+\sum_{n\geq 1}\vert a_n\vert\cdot\Vert \varphi_n-\varepsilon_n\Vert_{\widetilde K}
\\
&\leq \left\vert a_0+\sum_{n\geq 1}a_n\varepsilon_n\right\vert + C\cdot \sum_{n\geq 1}\vert a_n\vert\cdot\Vert \varphi_n\Vert_K \\
&\leq \vert a_0\vert +\sum_{n\geq 1}\vert a_n\vert\cdot\vert \varepsilon_n\vert+ C\cdot \sum_{n\geq 1}\vert a_n\vert\cdot\Vert \varphi_n\Vert_K\\
&\leq \vert a_0\vert +(1-C)\sum_{n\geq 1}\vert a_n\vert\cdot\Vert \varphi_n\Vert_K+ C\cdot \sum_{n\geq 1}\vert a_n\vert\cdot\Vert \varphi_n\Vert_K\\
&\leq \vert a_0\vert +\sum_{n\geq 1}\vert a_n\vert\cdot\Vert \varphi_n\Vert_K\leq 1,
\end{aligned}$$
the last inequality coming from Bohr's property of the family $(K,\Omega,(\varphi_n)_{n\geq 0})$.
$\blacksquare$
\noindent\textbf{Proof of Theorem 3.1 : } Now let $K$ be a continuum, $\Omega_R,\ (R>1)$, a level set of the Green function of $K$, and fix $\widetilde K=\overline{\Omega_R }$. If $a\in\partial\Omega_R$, there exists (by Theorem 3.2) an $R'>R$ such that the family $\left(\overline{\Omega_R }, \Omega_{R'}, (1, F_{\widetilde K, n}-F_{\widetilde K, n}(a))_{n\geq 0}\right)$ satisfies Bohr's property. Then for any function
$$f=a_0+\sum_{n\geq 1} a_n ( F_{\widetilde K, n}-F_{\widetilde K, n}(a)) \in \mathscr O(\Omega_{R'}),$$
such that $\vert f\vert \leq 1$ on $\Omega_{R'}$, we have $$\vert a_0\vert+ \sum_{n\geq 1} \vert a_n\vert\cdot \Vert F_{\widetilde K, n}-F_{\widetilde K, n}(a)\Vert_{\overline{\Omega_R }}\leq 1.$$
\noindent But (\cite{suetin}, page 35) :
$F_{\widetilde K, n}(z)=R^{-n}F_{K, n}(z)$, so
$$\begin{aligned} f(z)&=a_0+\sum_{n\geq 1} a_n \left( F_{\widetilde K, n}(z)-F_{\widetilde K, n}(a)\right)\\
&= a_0+\sum_{n\geq 1} a_n R^{-n}\left( F_{ K, n}(z)-F_{ K, n}(a)\right).
\end{aligned}$$
Because $R>1$, this immediately implies that the basis $(1, F_{K, n}-F_{K, n}(a))_{n\geq 0}$ satisfies Bohr's property on $(\overline{\Omega_R }, \Omega_{R'})$. If we apply Lemma 3.3 (with the $K$ and $\widetilde K$ of the lemma taken to be $\overline{\Omega_R}$ and the continuum $K$, respectively) with $\varphi_n=F_{K, n}-F_{K, n}(a),\ a\in\partial \Omega_R$, and $-\varepsilon_n=F_{K, n}(a)$, the inequalities (9) and (10) become :
$$\begin{aligned}
&(9')\qquad& \sup_{z\in K}\vert F_{K, n}(z)\vert \leq C\cdot \sup_{z\in\overline{\Omega_R}}\vert F_{K, n}(z)-F_{K, n}(a)\vert, \\
&(10')\qquad& \vert F_{K, n}(a)\vert \leq (1-C)\cdot \sup_{z\in\overline{\Omega_R}}\vert F_{K, n}(z)-F_{K, n}(a)\vert.
\end{aligned}$$
(where $C\in\,]0,1[$ is a constant). For all $n\in\mathbb N$, choose $a_n\in\partial\Omega_R$ such that $\Phi(a_n)=\theta_n\Phi(a)$, where $\theta_n$ is an $n$-th root of $-1$ (remember that $\Phi(\partial\Omega_R)=C(0,R)$). So
$$F_{K, n}(a_n)=\Phi^n(a_n)-E_{K, n}(a_n)=-\Phi(a)^n-E_{K, n}(a_n)$$
and
$$F_{K, n}(a_n)-F_{K, n}(a)=-2\Phi(a)^n+\left[ E_{K, n}(a)-E_{K, n}(a_n)\right ].$$
But because of inequality (3) in paragraph 3.1, taking $r=1+\varepsilon_0$ :
$$\left\vert E_{K, n}(a)-E_{K, n}(a_n)\right\vert \leq 2(1+\varepsilon_0)^n\dfrac{\texttt{lg}(\partial\Omega_{1+\varepsilon_0})}{\texttt{dist}(\partial\Omega_{1+\varepsilon_0},\partial\Omega_{R})},$$
for all $n\in\mathbb N$ and $R>1+\varepsilon_0$. Consequently :
$$\begin{aligned} \sup_{z\in\overline{\Omega_R}}\vert F_{K, n}(z)-F_{K, n}(a)\vert &\geq \vert F_{K, n}(a_n)-F_{K, n}(a)\vert \\
&\geq 2R^n\left[1-\left(\dfrac{1+\varepsilon_0}{R}\right)^n\cdot\dfrac{\texttt{lg} (\partial\Omega_{1+\varepsilon_0})}{\texttt{dist}(\partial\Omega_{1+\varepsilon_0},\partial\Omega_{R})} \right]
\end{aligned}$$
for all $n\in\mathbb N$ and $R>1+\varepsilon_0$. So, as long as we choose $R$ large enough, say $R>R_0$, we can suppose that
$$ \sup_{z\in\overline{\Omega_R}}\vert F_{K, n}(z)-F_{K, n}(a)\vert \geq \dfrac{3}{2}R^n,\quad \forall\,n\in\mathbb N,\ R>R_0.\leqno{(11)}$$
Because of (4) :
$$\vert F_{K,n}(a)\vert \leq R^n\left[ 1+\left(\dfrac{1+\varepsilon_0}{R}\right)^n\cdot\dfrac{\texttt{lg} (\partial\Omega_{1+\varepsilon_0})}{\texttt{dist}(\partial\Omega_{1+\varepsilon_0},\partial\Omega_{R})} \right]$$
for all $n\in\mathbb N$, $R>1+\varepsilon_0$. Since the term between the brackets satisfies :
$$\begin{aligned}1&\leq 1+\left(\dfrac{1+\varepsilon_0}{R}\right)^n\cdot\dfrac{\texttt{lg} (\partial\Omega_{1+\varepsilon_0})}{\texttt{dist}(\partial\Omega_{1+\varepsilon_0},\partial\Omega_{R})}\\
&\leq 1+\left(\dfrac{1+\varepsilon_0}{R_1}\right)\cdot\dfrac{\texttt{lg} (\partial\Omega_{1+\varepsilon_0})}{\texttt{dist}(\partial\Omega_{1+\varepsilon_0},\partial\Omega_{R_1})}\underset{R_1\to\infty}{\longrightarrow} 1
\end{aligned}$$
for all $R>R_1>1+\varepsilon_0$, it is less than $5/4$ for all $n\in\mathbb N$ and $R>R_1$, where $R_1$ is chosen large enough; i.e.
$$\vert F_{K,n}(a)\vert \leq \dfrac{5}{4}\cdot R^n,\quad \forall\,n\in\mathbb N,\ R>R_1.$$
It follows from (11) that
$$\vert F_{K,n}(a)\vert \leq \dfrac{5}{6}\cdot \dfrac{3}{2}\cdot R^n \leq \dfrac{5}{6}\sup_{z\in\overline{\Omega_R}}\vert F_{K, n}(z)-F_{K, n}(a)\vert,\ \forall\,n\in\mathbb N,\ R>R_2:=\max\{ R_0, R_1\}.$$
So we have proved inequality (10') with $C=1/6$.
Finally, again because of (4) :
$$\begin{aligned}
\qquad \sup_{z\in K}\vert F_{K, n}(z)\vert &\leq \sup_{z\in\overline{\Omega_{1+2\varepsilon_0}}}\vert F_{K, n}(z)\vert \\
&\leq (1+2\varepsilon_0)^n\cdot \left[ 1+\left(\dfrac{1+\varepsilon_0}{1+2\varepsilon_0}\right)^n\cdot\dfrac{\texttt{lg} (\partial\Omega_{1+\varepsilon_0})}{\texttt{dist}(\partial\Omega_{1+2\varepsilon_0},\partial\Omega_{1+\varepsilon_0})} \right]\\
&\leq A (1+2\varepsilon_0)^n,\quad\forall\,n\in\mathbb N
\end{aligned}$$
where $A$ is a constant strictly larger than $1$. With $A>1$ fixed, it is easy to deduce that there exists $R_3>1$ such that for any $R>R_3$ :
$$\sup_{z\in K}\vert F_{K, n}(z)\vert \leq A (1+2\varepsilon_0)^n \leq \dfrac{R^n}{4},\quad \forall\, n\in\mathbb N.$$
So because of (11)
$$\sup_{z\in K}\vert F_{K, n}(z)\vert \leq \dfrac{1}{6}\cdot\dfrac{3}{2}R^n \leq \dfrac{1}{6} \sup_{z\in\overline{\Omega_R}}\vert F_{K, n}(z)-F_{K, n}(a)\vert$$
for all $n\in\mathbb N$ and $R > \max\{ R_3, R_2\}$. This is formula (9') with $C=1/6$, so we can apply Lemma 3.3 and deduce that the family $(K,\Omega_R, (F_{K, n})_{n\geq 0})$ satisfies Bohr's phenomenon for all $R$ large enough: Theorem 3.1 is proved.
$\blacksquare$
\end{document} |
\begin{document}
\title{Constructing totally $p$-adic numbers of small height}
\author{S. Checcoli}
\address{S.~Checcoli, Institut Fourier, Universit\'e Grenoble Alpes, 100 rue des Math\'ematiques, 38610 Gi\`eres, France}
\email{[email protected]}
\author{A. Fehm}
\address{A.~Fehm, Institut f\"ur Algebra, Fakult\"at Mathematik, Technische Universit\"at Dresden, 01062 Dresden, Germany}
\email{[email protected]}
\begin{abstract}
Bombieri and Zannier gave an effective construction of algebraic numbers of small height inside the maximal Galois extension of the rationals which is totally split at a given finite set of prime numbers. They proved, in particular, an explicit upper bound for the lim inf of the height of elements in such fields.
We generalize their result in an effective way to maximal Galois extensions of number fields with given local behaviour at finitely many places.
\end{abstract}
\maketitle
\section{Introduction}
\noindent
Let $h$ denote the absolute logarithmic Weil height on
the field $\overline{\mathbb{Q}}$ of algebraic numbers.
We are interested in explicit height bounds for elements of $\overline{\mathbb{Q}}$ with special local behaviour at a finite set of primes.
The first result in this context is due to Schinzel \cite{Sch} who proved a height lower bound for elements in the field of totally real algebraic numbers $\mathbb{Q}^{\rm tr}$, the maximal Galois extension of $\mathbb{Q}$ in which the infinite prime splits totally. More precisely, he showed that every $\alpha\in \mathbb{Q}^{\rm tr}$ has either $h(\alpha)=0$ or \[h(\alpha)\geq \frac{1}{2}\log\left(\frac{1+\sqrt{5}}{2}\right).\]
Explicit upper and lower bounds for the limit inferior of the height of algebraic integers in $\mathbb{Q}^{\rm tr}$ are given in \cite{Smy80,Smy81,Fla96}.
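As a concrete illustration (an aside, not part of the original discussion): recall the standard formula $h(\alpha)=\frac{1}{d}\log M(f)$ for an algebraic number $\alpha$ of degree $d$ with minimal polynomial $f\in\mathbb{Z}[x]$, where $M(f)$ is the Mahler measure of $f$. The golden ratio $\frac{1+\sqrt{5}}{2}\in\mathbb{Q}^{\rm tr}$, with minimal polynomial $x^2-x-1$, attains Schinzel's bound:

```python
from math import log, sqrt

# minimal polynomial of the golden ratio: x^2 - x - 1, roots (1 +- sqrt 5)/2
r1, r2 = (1 + sqrt(5)) / 2, (1 - sqrt(5)) / 2
mahler = max(1.0, abs(r1)) * max(1.0, abs(r2))  # leading coefficient is 1
h_golden = log(mahler) / 2                      # h(alpha) = (1/d) * log M(f)
schinzel = 0.5 * log((1 + sqrt(5)) / 2)         # Schinzel's lower bound
assert abs(h_golden - schinzel) < 1e-12
```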
In \cite{BZ} Bombieri and Zannier
investigate the analogous problem for the $p$-adic numbers.
More precisely, in \cite[Theorem 2]{BZ} they prove the following:
\begin{theorem}[Bombieri--Zannier]\label{BZ_lower}
Let $p_1,\dots,p_n$ be distinct prime numbers,
for each $i$ let $E_i$ be a finite Galois extension of $\mathbb{Q}_{p_i}$,
and let $L$ be the maximal Galois extension of $\mathbb{Q}$ contained in all $E_i$.
Denote by $e_i$ and $f_i$ the ramification index and inertia degree of $E_i/\mathbb{Q}_{p_i}$.
Then
$$
\liminf_{\alpha\in L} h(\alpha) \;\geq\; \frac{1}{2}\cdot \sum_{i=1}^n\frac{\log(p_i)}{e_i(p_i^{f_i}+1)}.
$$
\end{theorem}
In the special case $E_i=\mathbb{Q}_{p_i}$, Bombieri and Zannier in \cite[Example 2]{BZ} show that the lower bound in Theorem \ref{BZ_lower} is almost optimal. More precisely:
\begin{theorem}[Bombieri--Zannier]\label{BZ_upper}
Let $p_1,\dots,p_n$ be prime numbers and let $L$ be the maximal Galois extension
of $\mathbb{Q}$ contained in all $\mathbb{Q}_{p_i}$. Then
$$
\liminf_{\alpha\in L} h(\alpha) \;\leq\; \sum_{i=1}^n\frac{\log(p_i)}{p_i-1}.
$$
\end{theorem}
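To get a feeling for the size of this bound, here is a small illustrative computation (the choice of primes is arbitrary):

```python
from math import log

def bz_upper_bound(primes):
    # sum of log(p)/(p-1) over the given primes
    return sum(log(p) / (p - 1) for p in primes)

# e.g. for the maximal Galois extension of Q inside Q_2 and Q_3,
# the liminf of the height is at most log(2)/1 + log(3)/2
bound = bz_upper_bound([2, 3])
assert abs(bound - (log(2) + log(3) / 2)) < 1e-12
```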
Other proofs, refinements and generalizations were given in
\cite{Fil, Pot, FiliPetsche, FP, PS19}.
See also \cite{Smy07} for a general survey on the height of algebraic numbers.
In Remark \ref{rem-Fili} we will discuss in detail the contribution \cite{Fil} and how it compares to our work.
The goal of this note is to generalize in an effective way the upper bound Theorem \ref{BZ_upper}
to general $E_i$,
and to further replace the base field $\mathbb{Q}$ by an arbitrary number field. Our main result is the following:
\begin{theorem}\label{mainthm}
Let $K$ be a number field and let $\mathfrak{p}_1,\dots,\mathfrak{p}_n$ be distinct prime ideals of the ring of integers $\mathcal{O}_K$ of $K$.
For each $i$,
let $E_i$ be a finite Galois extension of the completion $F_i$ of $K$ at $\mathfrak{p}_i$.
Denote by $e_i$ and $f_i$ the ramification index and the relative inertia degree of $E_i/F_i$
and write $q_i=|\mathcal{O}_K/\mathfrak{p}_i|=p_i^{{f(\mathfrak{p}_i|p_i)}}$.
Then for the maximal Galois extension $L$ of $K$ contained in all $E_i$,
\[
\liminf_{\alpha\in L} h(\alpha) \;\leq\; \sum_{i=1}^n \frac{f(\mathfrak{p}_i|p_i)}{[K:\mathbb{Q}]}\cdot\frac{\log(p_i)}{e_i(q_i^{f_i}-1)}.
\]
More precisely, let
\[
C=\max\left\{[K:\mathbb{Q}], |\Delta_K|, \max_i (e_i f_i), \max_i q_i^{f_i}\right\}
\]
where $\Delta_K$ is the absolute discriminant of $K$.
Then for every $0<\epsilon<1$ there exist infinitely many $\alpha\in\mathcal{O}_L$ of height
\begin{equation}\label{eqn:thm1}
h(\alpha)\leq \sum_{i=1}^n \frac{f(\mathfrak{p}_i|p_i)}{[K:\mathbb{Q}]}\cdot\frac{\log(p_i)}{e_i(q_i^{f_i}-1)}+13 nC^{2n+2}\frac{\log \left([K(\alpha):K]\right)}{[K(\alpha):K]} +\begin{cases}0,&n=1\\n\epsilon,&n>1\end{cases}.
\end{equation}
Namely, for every $\rho\geq 3C^n$ there exists such $\alpha$ of degree
\begin{equation}\label{eqn:thm2}
\rho\leq [K(\alpha):K] \leq \begin{cases}C\rho,&n=1\\\rho^{\frac{(4 \log C)^{n+1}}{\log^n(1+\epsilon)}},&n>1\end{cases}.
\end{equation}
\end{theorem}
Note that in the special case $K=\mathbb{Q}$ and $E_i=\mathbb{Q}_{p_i}$
we reobtain Theorem \ref{BZ_upper},
except that Theorem \ref{mainthm} appears stronger in that the
result is effective and the $\liminf$ can be taken over algebraic integers.
However,
an inspection of the proof of Bombieri and Zannier
shows that it is effective as well and does in fact produce algebraic integers.
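The bookkeeping behind this reduction can be sketched as follows (an illustrative aside; the tuple encoding of the local data is ours): in the special case $K=\mathbb{Q}$, $E_i=\mathbb{Q}_{p_i}$ one has $f(\mathfrak{p}_i|p_i)=e_i=f_i=1$ and $q_i=p_i$, so the leading term of \eqref{eqn:thm1} collapses to $\sum_i \log(p_i)/(p_i-1)$.

```python
from math import log

def main_bound(data, deg_K=1):
    """Leading term of the height bound: sum over i of
    f(p_i|p_i)/[K:Q] * log(p_i) / (e_i * (q_i^{f_i} - 1)).
    data: list of tuples (p, f_abs, e_i, f_i, q_i)."""
    return sum(f_abs / deg_K * log(p) / (e * (q**f - 1))
               for (p, f_abs, e, f, q) in data)

primes = [2, 3, 5]
# special case K = Q, E_i = Q_{p_i}: f(p_i|p_i) = e_i = f_i = 1 and q_i = p_i
data = [(p, 1, 1, 1, p) for p in primes]
bz = sum(log(p) / (p - 1) for p in primes)  # Bombieri-Zannier upper bound
assert abs(main_bound(data) - bz) < 1e-12
```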
\begin{remark}\label{rem-Fili}
Theorem \ref{mainthm} provides an effective version of a result of Fili \cite[Theorem 1.2]{Fil}.
The bound in that result seems to differ from ours by the factor $e(\mathfrak{p}_i|p_i)$,
and \cite[Theorem 1.1]{Fil} (and similarly \cite[Theorem 9]{FiliPetsche}) also states a variant of Theorem~\ref{BZ_lower}
which contradicts our Theorem \ref{mainthm},
but
according to Paul Fili (personal communication)
this is merely an error in normalization in \cite{Fil} and \cite{FiliPetsche}
that became apparent when comparing to our result,
and the $e_v$ in the denominator of Theorems 1.1, 1.2, and Conjecture 1 of \cite{Fil} (and similarly in the statements of \cite{FiliPetsche})
should have been the absolute instead of the relative ramification index.
When this correction is made,
the lower bound of \cite[Theorem 1.1]{Fil} agrees with the one in \cite[Theorem 13]{FP},
and the upper bound of \cite[Theorem 1.2]{Fil} agrees with the one in Theorem \ref{mainthm}.
In any case, Fili's proof of \cite[Theorem 1.2]{Fil} uses capacity theory on analytic Berkovich spaces and does not provide explicit bounds on the degree and the height of a sequence of integral elements in the $\liminf$. Instead, our effective proof is more elementary and is inspired by Bombieri and Zannier's effective proof of Theorem \ref{BZ_upper}.
To the best of our knowledge,
Theorem \ref{mainthm} is the only result currently available that gives a bound
on the height in terms of the degree of such a sequence of $\alpha$,
except for the case where $K=\mathbb{Q}$ and $E_i=\mathbb{Q}_{p_i}$ for all $i$,
where such a bound can be deduced from \cite{BZ}.
We also remark that our use of \cite[Theorem 1.2]{Fil} in \cite{CF} is limited to the cases where \cite[Theorem 1.2]{Fil} agrees with Theorem \ref{mainthm}.
\end{remark}
The paper is organised as follows.
In Section \ref{prel} we collect all the preliminary results needed to prove Theorem \ref{mainthm}, namely:
a consequence of Dirichlet's theorem on simultaneous approximation (Proposition \ref{Dir}),
a bound for the size of representatives in quotient rings of rings of integers (Proposition \ref{lem:small_rep}),
a variant of Hensel's lemma (Proposition \ref{val-BZ}),
a bound for the height of a root of a polynomial defined over a number field in terms of its coefficients (Proposition \ref{H-min-root}),
and a construction of special Galois invariant sets of representatives of residue rings of local fields (Proposition \ref{A_i}).
The proof of Theorem \ref{mainthm} is carried out in Section \ref{sec:main}. We briefly sketch it here for clarity.
Following Bombieri and Zannier's strategy, given $\rho\geq 3C^n$ we construct a monic irreducible polynomial $g\in\mathcal{O}_K[X]$ such that
\begin{enumerate}[(i)]
\item\label{d-g} its degree is upper and lower bounded in terms of $\rho$ as the degree of $\alpha$ in Theorem~\ref{mainthm},
\item\label{c-g} the complex absolute value of all conjugates of its coefficients is sufficiently small, and
\item\label{r-g} all its roots are contained in all $E_i$.
\end{enumerate}
In Bombieri and Zannier's proof of Theorem \ref{BZ_upper}, \eqref{d-g} and \eqref{c-g} were achieved by using the Chinese Remainder Theorem to deform the polynomial $\prod_{i=1}^{\rho} (X-i)$ into an irreducible polynomial of the same degree with coefficients small enough to give the desired bound for the height of the roots. Then a variant of Hensel's lemma was applied to show that the roots of the constructed polynomial are still in $\mathbb{Q}_{p_i}$ for each $i$.
In our generalisation,
the degree of the polynomial is carefully chosen
to obtain \eqref{d-g}
in Section \ref{sec:degree}
via Proposition \ref{Dir} (which is needed only if $n>1$; avoiding it leads to the better bounds in the case $n=1$).
The polynomial $g$ satisfying \eqref{c-g} is then constructed in Section \ref{constr-G}:
We start with polynomials $\prod_{\alpha\in \tilde A_i} (X-\alpha)$, where now $\tilde A_i\subseteq \mathcal{O}_{E_i}$ is a set constructed using Proposition \ref{A_i}.
These polynomials are then merged into an irreducible polynomial $g$ by applying the Chinese Remainder Theorem and Proposition \ref{lem:small_rep} to bound the size of its coefficients.
Property \eqref{r-g} is verified in Section \ref{roots-g}, using Proposition \ref{val-BZ}. Finally, Proposition \ref{H-min-root} is applied to show that $g$ has a root $\alpha$ of height bounded from above as desired.
\section{Notation and preliminaries}\label{prel}
\noindent
We fix some notation. If $K$ is a number field or a non-archimedean local field we let $\mathcal{O}_K$ denote the ring of integers of $K$.
For an ideal $\mathfrak{a}$ of $\mathcal{O}_K$
we denote by $N(\mathfrak{a})=|\mathcal{O}_K/\mathfrak{a}|$ its norm.
For a nonzero prime ideal $\mathfrak{p}$ of $\mathcal{O}_K$,
we denote by $v_\mathfrak{p}$ the discrete valuation on $K$
with valuation ring $(\mathcal{O}_K)_{\mathfrak{p}}$
normalized such that $v_\mathfrak{p}(K^\times)=\mathbb{Z}$.
If $L/K$ is an extension of number fields and $\mathfrak{P}$ is a prime ideal of $\mathcal{O}_L$
lying above a prime ideal $\mathfrak{p}$ of $\mathcal{O}_K$
we denote by $e(\mathfrak{P}|\mathfrak{p})$ and $f(\mathfrak{P}|\mathfrak{p})$
the ramification index and the inertia degree.
For an extension $E/F$ of non-archimedean local fields we denote
the ramification index and the inertia degree also by $e(E/F)$ and $f(E/F)$.
\subsection{Auxiliary results}
In this section we collect the preliminary results we need to prove Theorem \ref{mainthm}. These results are independent of one another; we list them in the order in which they appear in the proof of Theorem \ref{mainthm}.
\begin{proposition}\label{Dir}
Let $x_1,\dots,x_n$ be integers greater than $1$.
For every $\rho\geq 3$ and $0<\epsilon<1$ there exist positive integers $r,k_1,\dots,k_n$ such that $r\geq \rho$ and, for all $i$, $r\leq x_i^{k_i}\leq (1+\epsilon)r$ and
\[k_i\leq \frac{2^{2n+1}\log^n(\max_jx_j) \log(\rho)}{ \log(x_i) \log^n(1+\epsilon)}.\]
\end{proposition}
\begin{proof}
Assume without loss of generality that $x_1=\max_ix_i$.
Let $\alpha_i=2\log(\rho)/\log(x_i)$ and $Q=\lceil 2\log(x_1)/\log(1+\epsilon)\rceil$.
By the simultaneous Dirichlet approximation theorem \cite[Chapter II, Section 1, Theorem 1A]{Schm}
there exist positive integers $q,k_1,\dots,k_n$ with $1\leq q< Q^n$ such that $|q\alpha_i-k_i|\leq Q^{-1}$ for all $i$, and thus $|2\log(\rho) q-\log(x_i^{k_i})|\leq \log(1+\epsilon)/2$.
Letting $r=\min_i x_i^{k_i}$, one has $\log(r)\geq 2\log(\rho) q-1\geq \log(\rho)$ for $q\geq 1$ and $\rho\geq 3$. In addition, for all $i$, $0\leq\log(x_i^{k_i})-\log(r)\leq\log(1+\epsilon)$,
hence $r\leq x_i^{k_i}\leq (1+\epsilon)r$. Finally $k_i\leq q \alpha_i+1 \leq 2 q \alpha_i\leq 2\log(\rho)Q^n\log(x_i)^{-1}$, and substituting the bound for $Q$ we obtain the desired estimate.
\end{proof}
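As a numerical illustration (not part of the proof), the argument above can be run by brute force: search for $q<Q^n$ with every $q\alpha_i$ within $1/Q$ of a positive integer, exactly as Dirichlet's theorem guarantees. The sketch below, with the sample parameters $x=(2,3)$, $\rho=10$ and $\epsilon=1/2$, is our own choice and assumes nothing beyond the statement of the proposition.

```python
import math

def dirichlet_exponents(xs, rho, eps):
    # Mirror the proof: alpha_i = 2 log(rho)/log(x_i), Q = ceil(2 log(max x)/log(1+eps)).
    alphas = [2 * math.log(rho) / math.log(x) for x in xs]
    Q = math.ceil(2 * math.log(max(xs)) / math.log(1 + eps))
    n = len(xs)
    for q in range(1, Q**n):
        ks = [round(q * a) for a in alphas]
        # Dirichlet's simultaneous approximation theorem guarantees a hit.
        if all(k > 0 and abs(q * a - k) <= 1 / Q for a, k in zip(alphas, ks)):
            return min(x**k for x, k in zip(xs, ks)), ks
    raise AssertionError("unreachable by Dirichlet's theorem")

# Sample parameters: x = (2, 3), rho = 10, eps = 1/2.
r, ks = dirichlet_exponents([2, 3], 10, 0.5)
```

The returned data satisfy $r\geq\rho$ and $r\leq x_i^{k_i}\leq(1+\epsilon)r$ as in the proposition; with $\epsilon=1/2$ the latter is the integer inequality $2x_i^{k_i}\leq 3r$.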
The next proposition provides small representatives for quotient rings of rings of integers, with bounds on all their archimedean absolute values.
\begin{proposition}\label{lem:small_rep}
Let $K$ be a number field of degree $m=[K:\mathbb{Q}]$.
Given a nonzero ideal $\mathfrak{a}$ of $\mathcal{O}_K$, there exists a set of representatives $A$ of $\mathcal{O}_K/\mathfrak{a}$ such that, for every $a\in A$ and every $\sigma\in{\rm Hom}(K,\mathbb{C})$, one has
\[|\sigma(a)|\leq \delta_K N(\mathfrak{a})^{1/m}\]
where $\delta_K=m^{\frac{3}{2}}{2}^{\frac{m(m-1)}{2}}\sqrt{|\Delta_K|}$.
\end{proposition}
\begin{proof}
This is an immediate consequence of well-known results on lattice reduction. For instance, \cite[Proposition 15]{BFH} gives
for every $\alpha\in\mathcal{O}_K$, an element $a\in\mathcal{O}_K$ with $\alpha-a\in\mathfrak{a}$
such that \[\sqrt{\sum_{\sigma}|\sigma(a)|^2}\leq m^{\frac{3}{2}}{\ell}^{\frac{m(m-1)}{2}}\sqrt{|\Delta_K|} N(\mathfrak{a})^{1/m}\]
where $\ell$ depends on certain parameters $\eta\in(1/2,1)$, $\delta\in (\eta^2,1)$ and $\theta>0$ coming from applying a variant of the LLL-reduction algorithm as in \cite[Theorem 5.4]{Cha} and \cite[Theorem 7]{NSV} (see also \cite[\S 4, p.~595]{BFH}) to a $\mathbb{Z}$-basis of $\mathfrak{a}$. In particular, choosing $\eta=2/3$, $\delta=7/9$ and $\theta=(\sqrt{19}-4)/3$, we have $\ell=2$, which gives the claimed upper bound for $|\sigma(a)|$.
\end{proof}
The following proposition is a variant of Hensel's lemma.
\begin{proposition}\label{val-BZ}
Let $E$ be a finite extension of $\mathbb{Q}_p$, $\mathfrak{P}$ the maximal ideal of $\mathcal{O}_E$ and $v=v_\mathfrak{P}$.
Let $f\in E[X]$ and $x_0\in E$.
Assume there exist $a,b\in\mathbb{Z}$ such that
\begin{enumerate}[(i)]
\item\label{ci} $v(f(x_0))>a+b$,
\item\label{cii} $v(f'(x_0))\leq a$,
\item\label{ciii}
$v(f^{(\nu)}(x_0)/\nu!)\geq a-(\nu-1)b$ for every $\nu\geq 2$.\end{enumerate}
Then there exists $x\in E$ with $f(x)=0$ and $v(x-x_0)>b$.
\end{proposition}
\begin{proof}
This can be proved exactly as in the special case $E=\mathbb{Q}_p$ of \cite[Lemma 1]{BZ}.
Alternatively, one can reduce this to one of the standard forms of Hensel's lemma as follows.
Let $\beta\in E$ with $v(\beta)=b$.
Then $g(X):=(\beta f'(x_0))^{-1}f(\beta X+x_0)$
is in $\mathcal{O}_E[X]$ by \eqref{ci}--\eqref{ciii}
and has a simple zero $X=0$ modulo $\mathfrak{P}$ by \eqref{ci} and \eqref{cii},
hence by Hensel's lemma $g$ has a zero $x'\in\mathfrak{P}$,
and $x=\beta x'+x_0$ is then the desired zero of $f$.
\end{proof}
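For the classical case $a=b=0$ (ordinary Hensel lifting), the zero $x$ can be computed by Newton iteration modulo $p^N$. The following sketch is our own illustration, not part of the statement above; it uses the example $f(X)=X^2-2$ over $\mathbb{Q}_7$ with the approximate root $x_0=3$.

```python
def hensel_lift(f, fprime, x0, p, N):
    # Newton iteration x -> x - f(x)/f'(x) modulo p**N; this realises the
    # case a = b = 0: v(f(x0)) > 0 and v(f'(x0)) = 0, so f'(x) stays a unit.
    modulus = p**N
    x = x0 % modulus
    for _ in range(N):  # p-adic precision at least doubles each step
        x = (x - f(x) * pow(fprime(x), -1, modulus)) % modulus
    return x

# Lift the square root of 2 from x0 = 3 (since 3**2 = 9 = 2 mod 7) to mod 7**8.
root = hensel_lift(lambda x: x * x - 2, lambda x: 2 * x, 3, 7, 8)
```

The condition $v(x-x_0)>b$ appears here as $x\equiv x_0 \pmod 7$, which the iteration preserves.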
The final proposition in this subsection gives a bound for the height of the roots of a polynomial with small algebraic coefficients.
\begin{proposition}\label{H-min-root}
Let $K$ be a number field and
let $f(X)=X^m+a_{m-1}X^{m-1}+\ldots +a_0\in \mathcal{O}_K[X]$.
If $B\geq 1$ with $|\sigma(a_i)|<B$ for every $i$ and every $\sigma\in{\rm Hom}(K,\mathbb{C})$,
then $f$ has a root $\alpha$ with
\[
h(\alpha)\leq \frac{\log(B\sqrt{m+1})}{m}.
\]
\end{proposition}
\begin{proof}
Let $M_K=M_K^0\cup M_K^{\infty}$ be the set of (finite and infinite) places of $K$ and let $d=[K:\mathbb{Q}]$.
For a place $v\in M_K$, denote by $d_v=[K_v:\mathbb{Q}_v]$ the local degree.
Let
\[
\hat{h}(f)=\log\left(\prod_{v\in M_K}M_v(f)^{d_v/d}\right)
\]
where if $v$ is non-archimedean $M_v(f)=\max_i(|a_i|_v)$, while if $v$ is archimedean and corresponds to the embedding $\sigma\in{\rm Hom}(K,\mathbb{C})$, $M_v(f)$ is the Mahler measure $M(\sigma(f))$ of the polynomial $\sigma(f)$.
By \cite[Appendix A, Section A.2, p.~210]{Zan} we have that, if $\alpha_1,\ldots, \alpha_m\in \overline{\mathbb{Q}}$ are the roots of $f$ (with multiplicities), then $\hat{h}(f)=\sum_{i=1}^m h(\alpha_i)$
where $h$ denotes the usual logarithmic Weil height.
Thus, if $\alpha$ is a root of $f$ of minimal height,
then
\begin{equation}\label{h-root}
h(\alpha)\leq \frac{\hat{h}(f)}{m}.
\end{equation}
Let $\sigma_1,\ldots,\sigma_r$ and $\tau_1,\overline{\tau_1},\ldots, \tau_s,\overline{\tau_s}$ be, respectively, the real and pairwise conjugate complex embeddings of $K$ in $\mathbb{C}$, so that $d=r+2s$.
As $f$ has coefficients in $\mathcal{O}_K$,
$M_v(f)\leq1$ if $v$ is non-archimedean and we have that
\begin{align*}
\hat{h}(f)&\leq\log\left(\prod_{v\in M_K^{\infty}}M_v(f)^{d_v/d}\right)=\log\left(\prod_{i=1}^r M(\sigma_i(f))^{1/d}\cdot \prod_{j=1}^s M(\tau_j(f))^{2/d}\right)\\
&=\log\left(\prod_{\sigma\in{\rm Hom}(K,\mathbb{C})}M(\sigma(f))^{1/d}\right).
\end{align*}
By \cite[Section 3.2.2, formula (3.7)]{Zan} and by our hypothesis on $B$, we have that $M(\sigma(f))\leq B\sqrt{m+1}$ for all $\sigma\in{\rm Hom}(K,\mathbb{C})$, thus $\hat{h}(f)\leq \log(B\sqrt{m+1})$ and, plugging this bound into \eqref{h-root}, we conclude.
\end{proof}
\subsection{Representatives of residue rings of local fields}\label{sec-aux-p}
This subsection contains the technical key result needed to construct the local polynomials in the proof of Theorem \ref{mainthm}.
Let $E/F$ be a Galois extension of non-archimedean local fields with Galois group $G$.
{Let $\mathfrak{p}$ be the maximal ideal of $\mathcal{O}_F$,}
$\mathfrak{P}$ be the maximal ideal of $\mathcal{O}_E$
and for $k\in\mathbb{N}$ denote by
$\pi_k:\mathcal{O}_E\rightarrow\mathcal{O}_E/\mathfrak{P}^k$ the residue map.
It is known that one can always find a $G$-invariant set of representatives of the residue field $\mathcal{O}_E/\mathfrak{P}$,
e.g.~the Teichm\"uller representatives.
As long as the ramification of $E/F$ is tame,
one can also find $G$-invariant sets of representatives of each
residue ring $\mathcal{O}_E/\mathfrak{P}^k$,
but if the ramification is wild,
this is not necessarily so.
We will therefore work with the following substitute
for such a $G$-invariant set of representatives:
\begin{proposition}\label{A_i}
Let $E/F$ be a Galois extension of non-archimedean local fields
and define $G,\mathfrak{p},\mathfrak{P},\pi_k$ as above.
Let $d$ be a multiple of $|G|$.
There exists a constant $c$ such that for every $k$ there is $A\subseteq\mathcal{O}_E$
such that
\begin{enumerate}
\item $A$ is $G$-invariant,
\item all orbits of $A$ have length $|G|$,
\item $\pi_k|_{A}:A\rightarrow\mathcal{O}_E/\mathfrak{P}^k$ is $d$-to-1 and onto, and
\item $\pi_{k+c}|_{A}$ is injective.
\end{enumerate}
Moreover, if $F$ is a $p$-adic field, one can choose
\[
c\leq e(\mathfrak{P}|\mathfrak{p})\left(d+|G|+\frac{e(\mathfrak{p}|p)}{p-1}+1\right).
\]
\end{proposition}
\begin{proof}
Note that $G$ naturally acts on $\mathcal{O}_E$ and on $\mathcal{O}_E/\mathfrak{P}^k$, and that $\pi_k$ is $G$-equivariant.
Fix some primitive element $\alpha\in\mathcal{O}_E^\times$ of $E/F$
and a uniformizer $\theta\in\mathcal{O}_F$ of $v_\mathfrak{p}$,
and
let
\begin{eqnarray*}
e&=&e(\mathfrak{P}|\mathfrak{p}),\\
c_0&=&\max_{1\neq\sigma\in G} v_\mathfrak{P}(\alpha-\sigma\alpha) \quad(\mbox{with } c_0=0\mbox{ if }G=1), \\
c_1&=&\lceil|G|+c_0/e\rceil, \mbox{ and }\\
c &=& e(d+c_1).
\end{eqnarray*}
Let $k\in\mathbb{N}$ be given.
The desired set $A$ is obtained by applying the following Claim
in the case $X=\mathcal{O}_E/\mathfrak{P}^k$:
\begin{claim}
For every $G$-invariant subset
$X\subseteq\mathcal{O}_E/\mathfrak{P}^k$
there exists a $G$-invariant subset $A\subseteq\mathcal{O}_E$
with all orbits of length $|G|$ such that
$\pi_{k+c}|_{A}$ is injective
and $\pi_k|_{A}$ is $d$-to-$1$ onto $X$.
\end{claim}
We prove the Claim by induction on $|X|$:
If $X=\emptyset$, $A=\emptyset$ satisfies the claim.
If $X\neq\emptyset$ take $x\in X$ and let $X'=X\setminus Gx$,
where $Gx$ denotes the orbit of $x$ under $G$.
By the induction hypothesis there exists
$A'\subseteq\mathcal{O}_E$ satisfying the claim for $X'$.
Choose $a\in\pi_k^{-1}(x)$
and let $k_0=\lceil\frac{k}{e}\rceil$.
Then
$$
n_0:=\min\{n\geq0: v_\mathfrak{P}(a-\sigma a)\neq e(k_0+n)+v_\mathfrak{P}(\alpha-\sigma\alpha)\;\forall 1\neq\sigma\in G\} < |G|,
$$
as $v_\mathfrak{P}(a-\sigma a)-v_\mathfrak{P}(\alpha-\sigma\alpha)$ attains fewer than $|G|$ distinct values.
Thus for $1\neq\sigma\in G$,
\begin{eqnarray*}
v_\mathfrak{P}( (a+\theta^{n_0+k_0}\alpha)-\sigma(a+\theta^{n_0+k_0}\alpha) )
&=& \min\{v_\mathfrak{P}(a-\sigma a),e(n_0+k_0)+v_\mathfrak{P}(\alpha-\sigma\alpha)\}\\
&\leq&e(n_0+k_0)+c_0\\&<& k+ec_1,
\end{eqnarray*}
so if we replace $a$ by $a+\theta^{n_0+k_0}\alpha$,
we can assume without loss of generality that
$\pi_{k+ec_1}$ is injective on $Ga$
and that $|Ga|=|G|$.
If we now let
$$
A=A'\cup\{\sigma(a)+\theta^{k_0+c_1+j}:\sigma\in G,0\leq j<d/|G_x|\}
$$
where $G_x$ is the stabilizer of $x$, then $\pi_k|_{A}$ is $d$-to-$1$ onto $X=X'\cup Gx$ and $A$ is $G$-invariant with all orbits of length $|G|$.
As
$$
k+ec_1\leq e(k_0+c_1+j)<k+c,
$$
we have that $\pi_{k+c}|_{A}$ is injective.
Now, if $F$ is a $p$-adic field and if we choose $\alpha\in \mathcal{O}_E$ to be also a generator of $\mathcal{O}_E$ as an $\mathcal{O}_F$-algebra, then by \cite[Chap.~IV, Ex.~3(c)]{Serre} one has the explicit bound $c_{0}\leq e(\mathfrak{P}|p)/(p-1)$, which implies the stated bound for $c$.
\end{proof}
\begin{remark}
Note that if (4) holds for some $c,k,A$, then
it also holds for $c',k,A$ for any $c'\geq c$.
\end{remark}
\section{Proof of Theorem \ref{mainthm}}
\label{sec:main}
\noindent
Using the notation of Theorem \ref{mainthm}, for every $1\leq i\leq n$, let
$\mathfrak{P}_i$ be the maximal ideal of $\mathcal{O}_{E_i}$,
$v_i$ the extension of $v_{\mathfrak{P}_i}$ to an algebraic closure of $E_i$,
$G_i={\rm Gal}(E_i/F_i)$ and
$d=\prod_{i=1}^n|G_i|$.
Let $C$ be the constant from Theorem \ref{mainthm},
note that $C\geq 2$,
and let $c=4C^{n+1}$.
Fix an integer $\rho\geq 3C^n$
and note that $\rho/d\geq 3$ since $d\leq C^n$.
If $n=1$ let $\epsilon=0$, otherwise
fix $0<\epsilon<1$.
\subsection{Choosing the right degree}
\label{sec:degree}
If $n>1$
we apply Proposition \ref{Dir} to $x_i={q_i}^{f_i}$
to obtain positive integers $r\geq \rho/d$ and $k_1,\ldots,k_n$ such that for every $i$,
\begin{enumerate}[(i)]
\item \label{bound-r}
$r\leq q_i^{f_i k_i} \leq (1+\epsilon) r$ and
\item\label{b-ki} $k_i\leq 2^{2(n+1)}(\log C)^{n}\frac{\log(\rho/d)}{\log^n(1+\epsilon)}$,
\end{enumerate}
where we used that, for every $i$, $\log 2 \leq \log(x_i)=\log({q_i^{f_i}})\leq \log C$.
It follows that
\begin{equation*}\label{b-r} \log(\rho/d)\leq \log(r)\leq (4\log C)^{n+1}\frac{\log(\rho/d)}{\log^n(1+\epsilon)}.
\end{equation*}
If $n=1$ we
instead set $r=q_1^{f_1k_1}$, where $k_1=\lceil\log(\rho/d)/\log(q_1^{f_1})\rceil$, so that (\ref{bound-r}) holds with $\epsilon=0$, and
\begin{equation*}\label{b-r-rho}
\log(\rho/d)\leq\log(r)\leq \log(\rho/d)+\log(q_1^{f_1}).
\end{equation*}
Using that $(4\log C)^{n+1}\geq\log^n(1+\epsilon)$, we conclude
\begin{equation}\label{eqn:deg}
\rho \leq dr \leq \begin{cases} C\rho,&n=1\\
\rho^{\frac{(4\log C)^{n+1}}{\log^n(1+\epsilon)}},&n>1 \end{cases}
\end{equation}
\subsection{Construction of the polynomial $g$}\label{constr-G}
We first want to prove the following:
\begin{claim}\label{claim}
For every $i$, there exists a polynomial $g_i\in \mathcal{O}_{K}[X]$ of degree $dr$ whose set of roots ${A_i}$ satisfies
\begin{enumerate}[(a)]
\item\label{aux1} $A_i\subseteq E_i$,
\item\label{aux2} $v_i({\alpha}-{\beta})< k_i+c$ for all ${\alpha},{\beta}\in{A_i}$ with ${\alpha}\neq{\beta}$, and
\item\label{aux3} $v_i({g_i}'(\alpha))\leq d\left(\frac{q_i^{f_i k_i}-1}{q_i^{f_i}-1}+ c\right)$ for every ${\alpha}\in{A_i}$.
\end{enumerate}
\end{claim}
\begin{proof}[Proof of the claim]
As
$e(\mathfrak{P}_i|\mathfrak{p}_i)(d+|G_i|+\frac{e(\mathfrak{p}_i|p_i)}{p_i-1}+1)\leq C(C^n+C+C+1)\leq 4C^{n+1}=c$,
by Proposition \ref{A_i} there is
a $G_i$-invariant set $A_i'\subseteq\mathcal{O}_{E_i}$ with all orbits of length $|G_i|$ such that
${A_i'}\rightarrow\mathcal{O}_{E_i}/\mathfrak{P}_i^{k_i}$ is $d$-to-$1$
and ${A_i'}\rightarrow\mathcal{O}_{E_i}/\mathfrak{P}_i^{k_i+c}$
is injective.
As $|{A_i'}|=d q_i^{f_i k_i}$, $|G_i|$ divides $d$, and $r\leq q_i^{f_i k_i}$,
there exists a $G_i$-invariant subset ${\tilde A_i}\subseteq{A_i'}$
with $|\tilde{A_i}|=d r$.
Let
$$
\tilde{g_i}=\prod_{\alpha\in \tilde{A_i}}(X-\alpha)\in\mathcal{O}_{F_i}[X].
$$
We first prove that conditions \eqref{aux1}-\eqref{aux3} hold for $\tilde{g_i}$ and the set $\tilde{A_i}$, instead of $g_i$ and $A_i$.
Note that $\tilde{g_i}\in\mathcal{O}_{F_i}[X]$ is monic of degree $dr$ and that condition \eqref{aux1} holds for $\tilde{A_i}$ by construction. Moreover, as the map $\tilde{A_i}\rightarrow\mathcal{O}_{E_i}/\mathfrak{P}_i^{k_i+c}$ is injective, we have that condition \eqref{aux2} is also satisfied for $\tilde{A_i}$.
As for condition \eqref{aux3},
note that the valuation $v_{\mathfrak{P}_i}$ on $\mathcal{O}_{E_i}$ induces
a map $\bar{v}:(\mathcal{O}_{E_i}/\mathfrak{P}_i^{k_i})\setminus\{0\}\rightarrow\{0,\dots,k_i-1\}$ such that
${v}_{\mathfrak{P}_i}(\gamma)=\bar{v}(\pi_{k_i}(\gamma))$ for all $\gamma\in\mathcal{O}_{E_i}\setminus\mathfrak{P}_i^{k_i}$, where
$\pi_{k_i}$ denotes the residue map $\mathcal{O}_{E_i}\rightarrow\mathcal{O}_{E_i}/\mathfrak{P}_i^{k_i}$.
Now
\begin{eqnarray*}
v_{\mathfrak{P}_i}(\tilde{g_i}'(\alpha)) &=& \sum_{\alpha\neq\beta\in \tilde{A_i}}v_{\mathfrak{P}_i}(\alpha-\beta)\\&\leq&
\sum_{\alpha\neq\beta\in A_i'}v_{\mathfrak{P}_i}(\alpha-\beta)\\
&=&\sum_{\stackrel{\alpha\neq\beta\in A_i'}{\pi_{k_i}(\alpha)=\pi_{k_i}(\beta)}}v_{\mathfrak{P}_i}(\alpha-\beta)+\sum_{\stackrel{\alpha\neq\beta\in A_i'}{\pi_{k_i}(\alpha)\neq\pi_{k_i}(\beta)}}v_{\mathfrak{P}_i}(\alpha-\beta)\\
&<&(d-1)\cdot(k_i+c)+d\cdot\sum_{ {0\neq a\in\mathcal{O}_{E_i}/\mathfrak{P}_i^{k_i}}}\bar{v}(a)
\end{eqnarray*}
and
\begin{eqnarray*}
\sum_{{0\neq a\in\mathcal{O}_{E_i}/\mathfrak{P}_i^{k_i}}} \bar{v}(a)
&=& \sum_{j=0}^{k_i-1}|\{a:\bar{v}(a)=j\}|\cdot j
\quad=\quad\sum_{j=0}^{k_i-1}\sum_{l=1}^j|\{a:\bar{v}(a)=j\}|\\
&=&\sum_{l=1}^{k_i-1}\sum_{j=l}^{k_i-1}|\{a:\bar{v}(a)=j\}|
\quad=\quad\sum_{l=1}^{k_i-1}|\{a:\bar{v}(a)\geq l\}|\\
&=&\sum_{l=1}^{k_i-1}(q_i^{f_i(k_i-l)}-1)
\quad=\quad \sum_{l=0}^{k_i-1}q_i^{f_i l} - k_i
\quad=\quad \frac{1-q_i^{f_i k_i}}{1-q_i^{f_i}}-k_i
\end{eqnarray*}
and plugging this into the previous inequality gives condition (\ref{aux3}) for $\tilde{g}_i$.
As $\mathcal{O}_K$ is dense in $\mathcal{O}_{F_i}$ with respect to $v_i$,
we obtain a monic polynomial ${g_i}\in\mathcal{O}_K[X]$
of degree $dr$
arbitrarily close to $\tilde{g_i}$.
Let ${A_i}$ be the set of roots of ${g_i}$.
By the continuity of roots \cite[Theorem 2.4.7]{EP} we can achieve that the roots of ${g_i}$ are arbitrarily close to the roots of $\tilde{g_i}$, in particular that conditions \eqref{aux2} and \eqref{aux3} are satisfied by $g_i$ and $A_i$.
Moreover, by Krasner's lemma \cite[Ch.II, \S 2, Proposition 4]{Lang}, we can in addition achieve condition \eqref{aux1}, completing the proof of the claim.
\end{proof}
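As a quick sanity check (independent of the proof), consider the simplest case $E_i=\mathbb{Q}_p$, so that $q_i^{f_i}=p$ and $\mathcal{O}_{E_i}/\mathfrak{P}_i^{k}=\mathbb{Z}/p^k\mathbb{Z}$. The valuation-sum identity used above then reads $\sum_{0\neq a\in\mathbb{Z}/p^k\mathbb{Z}}\bar v(a)=\frac{p^k-1}{p-1}-k$, which can be verified directly; the sketch and the sample parameters are our own.

```python
def vbar(a, p):
    # p-adic valuation of a representative 1 <= a < p**k (so vbar(a) < k).
    v = 0
    while a % p == 0:
        a //= p
        v += 1
    return v

def valuation_sum(p, k):
    # Left-hand side: sum of vbar over the nonzero residues modulo p**k.
    return sum(vbar(a, p) for a in range(1, p**k))

p, k = 3, 5
lhs = valuation_sum(p, k)
rhs = (p**k - 1) // (p - 1) - k  # right-hand side of the identity
```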
Now, let $p_0$ be the smallest prime number not in the set $\{p_1,\ldots,p_n\}$ and let $\mathfrak{p}_0$ be a prime ideal of $\mathcal{O}_K$ above $p_0$. Fix
a monic polynomial $g_0\in\mathcal{O}_K[X]$ of degree $dr$
whose reduction modulo $\mathfrak{p}_0$ is irreducible.
Let
\begin{equation}\label{defmi}
m_i=\frac{d}{e_i}\left(\frac{q_i^{f_i k_i}-1}{q_i^{f_i}-1}+k_i+2c\right)
\end{equation}
and
$$
\mathfrak{a}=\mathfrak{p}_0\mathfrak{p}_1^{m_1}\cdots\mathfrak{p}_n^{m_n}.
$$
By the Chinese Remainder Theorem and Proposition \ref{lem:small_rep}
there exists a monic polynomial $g\in\mathcal{O}_K[X]$ such that
\begin{enumerate}
\item\label{degg_dr} $\deg g=dr$,
\item\label{cond_p0} $g\equiv g_0\mbox{ mod }\mathfrak{p}_0[X]$,
\item\label{cond_equiv} $g\equiv {g_i}\mbox{ mod }\mathfrak{p}_i^{m_i}[X]$ for $i=1,\dots,n$, and
\item\label{cond_coeff} $|\sigma(a)|\leq\delta_K N(\mathfrak{a})^{1/[K:\mathbb{Q}]}$
for every coefficient $a$ of $g$ and every $\sigma\in{\rm Hom}(K,\mathbb{C})$,
\end{enumerate}
where $\delta_K=[K:\mathbb{Q}]^{\frac{3}{2}}{2}^{\frac{[K:\mathbb{Q}]([K:\mathbb{Q}]-1)}{2}}\sqrt{|\Delta_K|}$.
Note that (\ref{cond_p0}) implies that $g$ is irreducible.
In particular, we get from (\ref{degg_dr}) and (\ref{eqn:deg})
that every root $\alpha$ of $g$ satisfies
the degree bound
(\ref{eqn:thm2}) of Theorem \ref{mainthm}.
\subsection{The roots of $g$ are in $E_i$ for every $i$.}\label{roots-g}
We claim that the conditions of Proposition \ref{val-BZ} hold for the field $E_i$, the polynomial $g$ and
$x_0=\alpha\in {A_i}$ (which lies in $E_i$ by condition \eqref{aux1}) by setting
$a=v_i({g_i}'(\alpha))$ and $b=k_i+c-1$.
Indeed, by \eqref{aux3}
\begin{eqnarray}\label{aem}
a=v_i({g_i}'(\alpha))\leq d\cdot\frac{q_i^{f_i k_i}-1}{q_i^{f_i}-1}+dc<e_im_i-b,
\end{eqnarray}
and, writing $g-{g_i}= t_i$ with $t_i\in\mathfrak{p}_i^{m_i}[X]$ by (\ref{cond_equiv}) of Section \ref{constr-G}, we have $g(\alpha)=t_i(\alpha)$ and therefore $v_i(g(\alpha))\geq e_im_i>a+b$,
so condition \eqref{ci} holds.
Similarly for condition \eqref{cii}, we have $g'(\alpha)=t_i'(\alpha)+{g_i}'(\alpha)$
and since \[v_i(t_i'(\alpha))\geq e_im_i> v_i({g_i}'(\alpha)),\] we conclude that
$v_i(g'(\alpha))=v_i({g_i}'(\alpha))=a$.
Now for $\nu\geq 2$ write
$$
{g_i}^{(\nu)}(\alpha) = \nu! {g_i}'(\alpha)\sum_{\substack{B\subseteq {A_i}\\|B|=\nu-1\\ \alpha\notin B}}\prod_{\beta\in B}(\alpha-\beta)^{-1}.
$$
Thus
$$
v_i({{g_i}^{(\nu)}(\alpha)}/{\nu!})\geq a-(\nu-1)\max_{\beta\neq\alpha}v_i(\alpha-\beta) {\geq} a-(\nu-1)b
$$
where the last inequality holds by {(\ref{aux2})}.
Moreover, using (\ref{aem}), we get
\[v_i({t_i^{(\nu)}(\alpha)}/{\nu!})\geq e_im_i-v_i(\nu!)\geq a+b-\frac{e(\mathfrak{P}_i|p_i)\nu}{p_i-1}\geq a-(\nu-1)b\]
where the last inequality holds since
$b\geq c\geq \frac{e(\mathfrak{P}_i|p_i)}{p_i-1}$.
Thus
\[v_i({g^{(\nu)}(\alpha)}/{\nu!})\geq \min\left\{v_i({{g_i}^{(\nu)}(\alpha)}/{\nu!}), v_i({t_i^{(\nu)}(\alpha)}/{\nu!})\right\}\geq a-(\nu-1)b\]
fulfilling condition \eqref{ciii}.
So Proposition \ref{val-BZ} gives ${\alpha}'\in E_i$ with $g({\alpha}')=0$ and $v_i(\alpha'-\alpha)>b$.
As $v_i(\alpha-\beta)\leq b$ for all $\beta\in {A_i}\setminus\{\alpha\}$
by (\ref{aux2}),
we conclude that the corresponding roots satisfy ${\alpha}'\neq{\beta}'$ whenever $\alpha\neq\beta$.
Hence $g$ has precisely $|{A_i}|=dr$ many roots in $E_i$.
As this holds for every $i$, the polynomial
$g$ splits completely in the maximal Galois extension $L$ of $K$ that is contained in all $E_i$. Moreover, as $g\in \mathcal{O}_K[X]$ is monic, all roots of $g$ are actually in $\mathcal{O}_L$.
\subsection{Bounding the height of the roots of $g$}\label{b-root}
From condition (\ref{cond_coeff}) of Section \ref{constr-G}, for every coefficient $a$ of $g$ and every $\sigma\in{\rm Hom}(K,\mathbb{C})$, we have
$$
|\sigma(a)|\leq B:=\delta_K N(\mathfrak{p}_0)^{1/[K:\mathbb{Q}]}\cdot\prod_{i=1}^n N(\mathfrak{p}_i)^{m_i/[K:\mathbb{Q}]}.
$$
By Proposition \ref{H-min-root}, $g$ has a root $\alpha\in\mathcal{O}_L$ with $h(\alpha)$ bounded by
\begin{equation}\label{bo-h}
\frac{\log(B\sqrt{\deg g+1})}{\deg g} \leq
\frac{\log\left(\delta_K N(\mathfrak{p}_0)^{1/[K:\mathbb{Q}]}\sqrt{\deg g+1}\right)}{\deg g}+\sum_{i=1}^n \frac{m_i}{\deg g}\cdot\frac{\log (q_i)}{[K:\mathbb{Q}]}.
\end{equation}
By the definition of $m_i$ in (\ref{defmi}) and recalling $\deg g=dr$ from \eqref{degg_dr}, we have
\begin{eqnarray}\label{eq-mi}
\frac{m_i}{\deg g} &=&\frac{1}{e_i}\cdot\frac{1}{q_i^{f_i}-1}\cdot\frac{q_i^{f_i k_i}-1}{r}+\frac{k_i}{e_i r}+\frac{2c}{e_i r}.
\end{eqnarray}
Condition \eqref{bound-r} of Section \ref{sec:degree} implies that
\[
\frac{q_i^{f_i k_i}-1}{r}\leq 1+\epsilon.
\]
Moreover,
\[
\frac{k_i}{e_i r}=\frac{\log(q_i^{f_i k_i})}{e_i r f_i \log(q_i)}\leq \frac{2d}{e_i f_i \log(q_i)}\cdot \frac{\log(\deg g)}{\deg g}\leq 3 C^n \frac{\log(\deg g)}{\deg g}.
\]
Finally,
$$
\frac{2c}{e_i r}\leq \frac{8d C^{n+1}}{e_i \deg g}\leq \frac{8C^{2n+1}}{\deg g}.
$$
Therefore, substituting in \eqref{eq-mi}, and recalling that $C\geq 2$ and $\rho\geq 3$, we have
\[
\frac{m_i}{\deg g} \leq \frac{1}{e_i(q_i^{f_i}-1)}+\frac{\epsilon}{e_i(q_i^{f_i}-1)}+11 C^{2n+1}\frac{\log(\deg g)}{\deg g}.
\]
Thus the second summand in \eqref{bo-h} can be bounded as
\begin{align*}
\sum_{i=1}^n \frac{m_i}{\deg g}\cdot\frac{\log(q_i)}{[K:\mathbb{Q}]}\leq \sum_{i=1}^n& \frac{f(\mathfrak{p}_i|p_i)}{[K:\mathbb{Q}]}\cdot\frac{\log(p_i)}{e_i(q_i^{f_i}-1)}+n\epsilon+11nC^{2n+2}\frac{\log(\deg g)}{\deg g}.
\end{align*}
As for the first summand in \eqref{bo-h}, note that $N(\mathfrak{p}_0)^{1/[K:\mathbb{Q}]}\leq p_0$ where $p_0$ is the smallest prime number not in the set $\{p_1,\ldots,p_n\}$, which, by Bertrand's postulate, can be bounded by $p_0<2\max_i p_i\leq 2C$.
Moreover,
\[
\delta_K=[K:\mathbb{Q}]^{\frac{3}{2}}{2}^{\frac{[K:\mathbb{Q}]([K:\mathbb{Q}]-1)}{2}}\sqrt{|\Delta_K|}\leq C^2 \cdot 2^{\frac{C(C-1)}{2}}
\]
and thus, as $C\geq 2$,
\begin{eqnarray*}
\frac{\log\left(\delta_K N(\mathfrak{p}_0)^{1/[K:\mathbb{Q}]}\sqrt{\deg g+1}\right)}{\deg g}
&\leq& \frac{\log\left(C^3 2^{\frac{C^2-C+2}{2}}\sqrt{\deg g+1}\right)}{\deg g}\leq 2nC^{2n+1}\frac{\log(\deg g)}{\deg g}.
\end{eqnarray*}
Therefore, from \eqref{bo-h} we get
\[
h(\alpha)\leq \sum_{i=1}^n \frac{f(\mathfrak{p}_i|p_i)}{[K:\mathbb{Q}]}\cdot\frac{\log(p_i)}{e_i(q_i^{f_i}-1)}+n\epsilon+13 nC^{2n+2}\frac{\log(\deg g)}{\deg g},
\]
so $\alpha$ satisfies the height bound (\ref{eqn:thm1}) of
Theorem \ref{mainthm}
(recalling that $\epsilon=0$ if $n=1$).
\section*{Acknowledgments}
\noindent The authors thank
Lukas Pottmeyer for pointing out the results in \cite{Fil,FiliPetsche,FP},
Paul Fili for the exchange regarding \cite{Fil},
and Philip Dittmann for helpful discussions on $p$-adic fields
as well as for suggesting the short proof of Proposition \ref{val-BZ}.
The first author's work has been funded by the ANR project Gardio 14-CE25-0015.
\end{document} |
\begin{document}
\title[Graphs determined by their generalized spectrum]{New families of graphs determined by their generalized spectrum
}
\author{Fenjin Liu, Johannes Siemons, Wei Wang}
\address{F. Liu: School of Science, Chang'an University, Xi'an, P.R. China, 710046\newline
School of Mathematics and Statistics, Xi'an Jiaotong University, Xi'an, P.R. China, 710049\newline
School of Mathematics, University of East Anglia, Norwich, Norfolk, NR4 7TJ, UK.}
\email{[email protected]}
\address{J. Siemons: School of Mathematics, University of East Anglia, Norwich, NR4 7TJ, UK}
\email{[email protected]}
\address{W. Wang: School of Mathematics and Statistics, Xi'an Jiaotong University, Xi'an, P.R. China, 710049}
\email{wang\[email protected]}
\maketitle
{\sc Abstract:}\, We construct infinite families of graphs that are determined by their generalized spectrum. This construction is based on new formulae for the determinant of the walk matrix of a graph. The graphs constructed here all satisfy an extremal divisibility condition for the determinant of their walk matrix.\footnote{{\sc Keywords:}\, Graph spectrum, Walk matrix, Graphs determined by generalized spectrum\\
{\sc Mathematics Subject Classification:} 05C50, 15A18}
\section{ Introduction}
Let $G$ be a graph on the vertex set $V$ with adjacency matrix $A$ and let $\overline{G}$ be its complement, with adjacency matrix $\overline{A}.$ Then the {\it spectrum} ${\rm spec}(G)$ of $G$ is the set of all eigenvalues of $A,$ with corresponding multiplicities. It has been a longstanding problem to characterize graphs that are determined by their spectrum, that is to say: if ${\rm spec}(H)={\rm spec}(G)$ for some graph $H,$ does it follow that $H$ is isomorphic to $G?$ If this is the case then $G$ is {\it determined by its spectrum,} or a DS graph for short. In~\cite{Dam-Wga,Haemers-AlmostDS,Godsil-AlmostDS} it is conjectured that almost all graphs are DS; more recent surveys can be found in \cite{Dam-Dos}.
A variant of this problem concerns the {\it generalized spectrum} of $G$ which is given by the pair $\big({\rm spec}(G),\,{\rm spec}(\overline G)\big).$ In this situation $G$ is said to be {\it determined by its generalized spectrum,} or a DGS graph for short, if $\big({\rm spec}(H),\,{\rm spec}(\overline H)\big)=\big({\rm spec}(G),\,{\rm spec}(\overline G)\big)$ implies that $H$ is isomorphic to $G.$ For the most recent results on DGS graphs see \cite{LHMao-NewMethodDS} and \cite{Wang-ASimpleCriterion} in particular. To date only a few families of graphs are known to be DS or DGS. This includes some almost complete graphs \cite{Camara-Almost}, pineapple graphs \cite{Hatice-Pineapple} and kite graphs \cite{Hatice-Kite}. These are all known to be DS, and in particular DGS. In addition all rose graphs are determined by their Laplacian spectrum except for two specific examples, see~\cite{He-Rose}.
In this paper we construct infinite sequences of DGS graphs from certain small starter graphs. This construction involves the {\it walk matrix} of a graph and recent results in~\cite{Wang-ASimpleCriterion}. Let $n$ be the number of vertices of $G$ and let $e$ be the column vector of height $n$ with all entries equal to $1.$ Then the {\it walk matrix} of $G$ is the $n\times n$ matrix
\begin{equation*}
W=\big[e,\,Ae,\,A^2e,\cdots, A^{n-1}e \big]
\end{equation*}
formed by the column vectors $A^{i}e.$ In this paper we investigate the determinant of the walk matrix of $\overline{G}$ and of graphs obtained by joins and unions of $G$ with a single new vertex.
We prove that the determinants of the walk matrices of $G$ and of $\overline{G}$ agree up to sign, see Theorem \ref{thm-detbarW}. Next we consider a graph $G$ together with a new vertex $w.$ In this situation the union $G\cup w$ and the join $G\vee w$ can be formed, see Section 2. In Theorem \ref{thm-detWGcupv}
we obtain formulae for the determinants of their walk matrices in terms of $\det(A)$, $\det(\overline{A})$ and $\det(W)$.
In Wang \cite{Wang-GeneralizedRevisited} it is shown that $2^{\lfloor\frac{n}{2}\rfloor}$ divides $\det(W)$. Furthermore, if
\begin{equation*}
\mathcal{C}:\quad \frac{\det(W)}{2^{\lfloor\frac{n}{2}\rfloor}}\text{\,\, is odd and square-free }
\end{equation*}
then $G$ is determined by its generalized spectrum \cite{Wang-ASimpleCriterion}. This shows that the divisor $2^{\lfloor\frac{n}{2}\rfloor}$ plays a special role for the determinant of the walk matrix; in some sense
$\mathcal{C}$ is an extremal divisibility condition, see also Section 6.2 in \cite{LHMao-NewMethodDS}.
In Section~3 we give the construction of infinite families of graphs in which $\mathcal{C}$ holds; this is Theorems~\ref{thm-detWni} and~\ref{thm-detWn2to-n-above2}. The construction provides in particular infinite families of graphs which are determined by their generalized spectrum. While there are methods for constructing graphs with the same generalized spectrum (e.g.\ GM-Switching), very few methods are known for building DGS graphs. The results in this paper therefore contribute to our understanding
of DGS graphs; they also give a partial answer to a problem posed in \cite{LHMao-NewMethodDS}.
All graphs in this paper are finite, simple and undirected. Our notation follows the standard texts, for instance~\cite{Cvetkovic-IntroductionGraphSpec,Godsil-AlgebraicGraphTheory}. The vertex set of the graph $G$ is denoted $V.$
The \emph{characteristic polynomial} $P(x)$ of $G$ is the characteristic polynomial of $A,$ thus $P(x)=\det(x I-A),$ and the \emph{eigenvalues} of $G$ or $A$ are the roots of $P(x).$
When it is necessary to refer to a particular graph $H$ we denote its vertices, adjacency or walk matrix by $V(H),$ $A(H)$ or $W(H),$ etc.
\section{The determinant of the walk matrix}
We begin by giving a formula for the determinant of the walk matrix of a graph and its complement.
\begin{theorem}\label{thm-detbarW}
Let $G$ be a graph on $n$ vertices with walk matrix $W.$ Then the walk matrix
of its complement satisfies
\begin{equation*}
\det(\overline{W})=(-1)^{\frac{n(n-1)}{2}}\det(W)\,\,.\end{equation*}
\end{theorem}
\begin{proof}
Let $A$ and $\overline{A}$ be the adjacency matrix of $G$ and $\overline{G}$ respectively. We show that for each $k$ the $k$-th column of $\overline{W}$ can be expressed as a linear combination of the first $k$ columns of $W$. This is true for the first and second columns of $\overline{W}$ since
\begin{equation*}
\overline{A}e=(J-I-A)e=(n-1)e-Ae,
\end{equation*}
where $J$ is the all-one matrix and $I$ is the identity matrix.
So we assume that the claim holds for some $k,$ that is, there exist numbers $c_0,c_1,\ldots,c_{k-1}\in \mathbb{R}$ such that
\begin{equation*}
\overline{A}^{k-1}e=\sum_{i=0}^{k-1}c_iA^ie.
\end{equation*}
Since $JA^ie=(e^TA^ie)e$, we have
\begin{equation}\label{eq-detWCACk}
\begin{split}
\overline{A}^{k}e&=\overline{A}(\overline{A}^{k-1}e)\\
&=(J-I-A)(\sum_{i=0}^{k-1}c_iA^ie)\\
&=[\sum_{i=0}^{k-1}c_i(e^TA^ie)-c_0]e-\sum_{i=1}^{k-1}(c_{i-1}+c_i)A^ie-c_{k-1}A^ke.
\end{split}
\end{equation}
This proves the claim for all $k$ by induction. Moreover, for each $k=1,2,\ldots$, the coefficient of the vector $A^ke$ in the expansion \eqref{eq-detWCACk} of $\overline{A}^{k}e$ is $-c_{k-1}=(-1)^k$. Substituting \eqref{eq-detWCACk} into $\overline{W}$ and removing the lower-order terms by elementary column operations gives
\begin{equation*}
\det(\overline{W})=\det \big[e,\,-Ae,\,A^2e, -A^{3}e,\cdots,\, (-1)^{n-1}A^{n-1}e \big]
\end{equation*}
and so the result follows from the multilinearity of the determinant.
\end{proof}
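As a numerical sanity check (our addition, not part of the original argument), the following Python sketch builds the walk matrix $W=[e,Ae,\ldots,A^{n-1}e]$ of a random graph and of its complement with NumPy and compares the two determinants as in Theorem~\ref{thm-detbarW}.

```python
import numpy as np

def walk_matrix(A):
    """W = [e, Ae, A^2 e, ..., A^{n-1} e] for the all-ones vector e."""
    n = A.shape[0]
    e = np.ones(n)
    return np.column_stack([np.linalg.matrix_power(A, k) @ e for k in range(n)])

rng = np.random.default_rng(0)
n = 6
U = np.triu(rng.integers(0, 2, size=(n, n)), 1)
A = U + U.T                              # random simple graph on n vertices
Abar = np.ones((n, n)) - np.eye(n) - A   # complement adjacency: J - I - A

dW = round(np.linalg.det(walk_matrix(A)))
dWbar = round(np.linalg.det(walk_matrix(Abar)))
assert dWbar == (-1) ** (n * (n - 1) // 2) * dW
```

The assertion holds for every graph, so the seed and size above are arbitrary choices.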
By the same reasoning we have the following result for the leading principal submatrices of the walk matrix.
\begin{corollary}
For $k=1,2,\ldots,n$ let $W_k$ (resp. $\overline{W}_k$) denote the $k$-th leading principal submatrix of the walk matrix $W$ (resp. $\overline{W}$). Then
\begin{equation*}
\det(\overline{W}_k)=(-1)^{\frac{k(k-1)}{2}}\det(W_k)\,\,. \end{equation*}
\end{corollary}
The graph $G$ is {\it controllable} if its walk matrix $W(G)$ is invertible. This property can also be characterized by the main eigenvalues and main eigenvectors of the graph, see~\cite{Godsil-Controllable}. The relevance of controllability becomes clear from the recent work of O'Rourke and Touri~\cite{ORourke-AlmostAllControllable}, who proved Godsil's conjecture~\cite{Godsil-Controllable} that asymptotically almost all graphs are controllable.
The theorem above also implies the following well-known fact concerning controllable graphs:
\begin{corollary}\cite{Godsil-Controllable}
A graph is controllable if and only if its complement is controllable.
\end{corollary}
Let $G$ be a graph and let $w$ be a new vertex, $w\not\in V(G).$ Then the {\it union} of $G$ and the singleton graph $\{w\}$, denoted by $G\cup w$, is the graph obtained from $G$ by adding $w$ as an isolated vertex. The {\it join} of $G$ and $\{w\}$, denoted by $G\vee w$, is the graph obtained from $G$ by adding the vertex $w$ and making it adjacent to all vertices of $G$. For these graph operations we have the following result.
\begin{theorem}\label{thm-detWGcupv}
Let $G$ be a graph with adjacency matrix $A$ and walk matrix $W$. Then we have\\[-25pt]
\begin{enumerate}
\item[$(i)$] $\det(W(G\cup w))=\pm\det(A)\det(W)$ and
\item [$(ii)$] $\det(W(G\vee w))=\pm\det(\overline{A})\det(W)$.
\end{enumerate}
\end{theorem}
The sign only depends on the position of the new vertex in the vertex ordering. For instance, if $w,v_{1},\ldots,v_{n}$ are the vertices of $G\cup w$, in this order, then $\det(W(G\cup w))=\det(A)\det(W)$ and
$\det(W(G\vee w))=\det(\overline{A})\det(W)$.
\begin{proof}
Let $W'=\big[Ae,\,A^2e,\,\cdots, A^{n-1}e,\,A^ne \big]=AW$. Then, taking the new vertex $w$ as the first vertex of $G\cup w,$ we have
\begin{equation*}
W(G\cup w)=\begin{bmatrix}
1&0_{n\times 1}\\
1_{n\times 1}&W'
\end{bmatrix}.
\end{equation*}
Expanding the determinant of $W(G\cup w)$ along the first row gives
\begin{equation*}
\begin{split}
\det(W(G\cup w))&=\det(W')=\det(AW)=\det(A)\det(W).
\end{split}
\end{equation*}
If the new vertex $w$ is instead placed at an even position in the vertex ordering, then $\det(W(G\cup w))=-\det(A)\det(W)$.
This proves the first part of the theorem.
For the second part, first note that $G\vee w= \overline{\overline{G}\cup w}$. Now use Theorem~\ref{thm-detbarW}\, to obtain
\begin{equation*}
\begin{split}
\det(W(G\vee w))&=\det (W(\overline{\overline{G}\cup w}))\\
&=(-1)^{\frac{n(n+1)}{2}}\det (W(\overline{G}\cup w))\\
&=\pm\det(\overline{A})\det(\overline{W})\\
&=\pm\det(\overline{A})\det(W).
\end{split}
\end{equation*}
This completes the proof.
\end{proof}
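To illustrate Theorem~\ref{thm-detWGcupv}, the following sketch (our addition; absolute values are compared since the theorem determines the products only up to sign) checks both identities on a random graph.

```python
import numpy as np

def walk_det(A):
    n = A.shape[0]
    e = np.ones(n)
    W = np.column_stack([np.linalg.matrix_power(A, k) @ e for k in range(n)])
    return round(np.linalg.det(W))

def add_isolated(A):    # G u w: append an isolated vertex
    n = A.shape[0]
    B = np.zeros((n + 1, n + 1))
    B[:n, :n] = A
    return B

def add_dominating(A):  # G v w: append a vertex adjacent to all others
    n = A.shape[0]
    B = np.ones((n + 1, n + 1)) - np.eye(n + 1)
    B[:n, :n] = A
    return B

rng = np.random.default_rng(1)
n = 6
U = np.triu(rng.integers(0, 2, (n, n)), 1)
A = U + U.T
Abar = np.ones((n, n)) - np.eye(n) - A

assert abs(walk_det(add_isolated(A))) == abs(round(np.linalg.det(A))) * abs(walk_det(A))
assert abs(walk_det(add_dominating(A))) == abs(round(np.linalg.det(Abar))) * abs(walk_det(A))
```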
\begin{remark}\label{rem-controllable}
{\rm A graph is {\it singular} if its adjacency matrix is singular, see \cite{Sciriha-Singular, S-Z}. The theorem has an interesting consequence for singular graphs: If $G$ is controllable then $G\cup w$ is controllable if and only if $G$ is not singular. Similarly, if $G$ is controllable then $G\vee w$ is controllable if and only if $\overline{G}$ is not singular.}
\end{remark}
\section{Constructing DGS graphs}
In this section we construct families of DGS graphs by using the union and join operations.
We begin with the Coefficient Theorem of Sachs, which relates the coefficients of the characteristic polynomial of a graph to its structure. An \emph{elementary graph} is a graph in which each component is $K_2$ or a cycle.
\begin{theorem}[Sachs Coefficients Theorem, see e.g.~\cite{Cvetkovic-IntroductionGraphSpec}]\label{thm-Sach}
Let $G$ be a graph on $n$ vertices with characteristic polynomial $P(x)=x^n+c_1x^{n-1}+\cdots+c_{n-1}x+c_n$. Denote by $\mathcal{H}_i$ the set of all elementary subgraphs of $G$ with $i$ vertices. For $H$ in $\mathcal{H}_i$ let $p(H)$ denote the number of components of $H$ and $c(H)$ the number of cycles in $H$. Then
\begin{equation*}
c_i=\sum_{H\in\mathcal{H}_i}(-1)^{p(H)}2^{c(H)},\mbox{\qquad for all $i=1,\ldots,n$.}
\end{equation*}
\end{theorem}
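As an illustration (our addition), the following brute-force Python check enumerates the elementary subgraphs of the $5$-cycle directly, as subsets of edges whose components are single edges or cycles, and confirms the Sachs formula against the numerically computed characteristic polynomial.

```python
import numpy as np
from itertools import combinations

n = 5
edges = [(i, (i + 1) % n) for i in range(n)]  # the 5-cycle C5
A = np.zeros((n, n), dtype=int)
for u, v in edges:
    A[u, v] = A[v, u] = 1

def components(edge_set):
    # group the edges of edge_set into connected components (union-find)
    parent = {}
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for u, v in edge_set:
        parent.setdefault(u, u)
        parent.setdefault(v, v)
        parent[find(u)] = find(v)
    comps = {}
    for u, v in edge_set:
        comps.setdefault(find(u), []).append((u, v))
    return list(comps.values())

def sachs_coefficient(i):
    total = 0
    for k in range(1, len(edges) + 1):
        for sub in combinations(edges, k):
            verts = {x for e in sub for x in e}
            if len(verts) != i:
                continue
            deg = {v: 0 for v in verts}
            for u, v in sub:
                deg[u] += 1
                deg[v] += 1
            if any(d > 2 for d in deg.values()):
                continue
            comps = components(sub)
            # each component must be a single edge (K2) or a cycle
            if not all(len(c) == 1 or
                       all(deg[x] == 2 for e in c for x in e) for c in comps):
                continue
            cycles = sum(1 for c in comps if len(c) > 1)
            total += (-1) ** len(comps) * 2 ** cycles
    return total

charpoly = np.poly(A)  # coefficients [1, c_1, ..., c_n]
for i in range(1, n + 1):
    assert round(charpoly[i]) == sachs_coefficient(i)
```

For $C_5$ this reproduces $P(x)=x^5-5x^3+5x-2$: the five edges give $c_2=-5$, the five pairs of disjoint edges give $c_4=5$, and the cycle itself gives $c_5=-2$.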
The following theorem due to Wang~\cite{Wang-ASimpleCriterion} characterizes certain DGS graphs by an arithmetic property of the determinant of their walk matrix.
\begin{theorem}[Wang~\cite{Wang-ASimpleCriterion}]\label{thm-Wang-ASimpleCriterion}
Let $G$ be a graph on $n$ vertices with $n\geq 6$ and walk matrix $W.$ Then $2^{\lfloor{\frac{n}{2}}\rfloor}$ divides $\det(W)$. Furthermore, if $2^{-\lfloor{\frac{n}{2}}\rfloor}\det(W)$ is odd and square-free then $G$ is determined by its generalized spectrum.
\end{theorem}
We are now able to state our next result.
\begin{theorem}\label{thm-detWni}
Let $G_0,\,G_{1},\,G_{2},\,\ldots$ be a sequence of graphs which satisfy the following conditions
\begin{equation*}
G_{i}=\begin{cases}
G_{i-1}\cup w_{i}\quad(\mbox{if $i\ge1$ is odd});\\
G_{i-1}\vee w_{i}\quad(\mbox{if $i\ge1$ is even}).
\end{cases}
\end{equation*}
Denote $n_{0}:=|V(G_{0})|$, $a:=|\det(A(G_{0}))|,$ \,$b:=|\det(W(G_{0}))|$ and $p:=|\det(A(\overline{G}_1))|$. Then\begin{equation}\label{eq-dewWGi}|\det(W(G_{i}))|=a^{\lceil\frac{i}{2}\rceil}bp^{\lfloor\frac{i}{2}\rfloor} \quad \text{for all $i\geq 1$.}
\end{equation}
\end{theorem}
\begin{proof}
By Theorem \ref{thm-detWGcupv} (i) and (ii) we have
\begin{equation*}
\begin{split}
|\det(W(G_1))|&=|\det(W(G_0\cup w_1))|=|\det(A(G_0))\det(W(G_0))|=ab,
\end{split}
\end{equation*}
\begin{equation*}
\begin{split}
|\det(W(G_2))|&=|\det(W(G_1\vee w_2))|=|\det(A(\overline{G}_1))\det(W(G_1))|=abp.
\end{split}
\end{equation*}
Thus the result holds for $i=1,2$. Since the determinant of the adjacency matrix equals, up to sign, the constant term of the characteristic polynomial, Theorem \ref{thm-Sach} links it to the elementary spanning subgraphs. We show by induction that $|\det(A(G_{2i}))|=a$ and $|\det(A(\overline{G}_{2i+1}))|=p$ for every integer $i\ge 1$. In $G_{2(i+1)}$ the vertex $w_{2i+1}$ is adjacent only to $w_{2i+2}$, so every elementary spanning subgraph of $G_{2(i+1)}$ must have $K_2=w_{2i+2}w_{2i+1}$ as a component. Hence there is a bijection between the sets of elementary spanning subgraphs $\mathcal{H}_{\substack{n_{0}+2i}}(G_{2i})$ and $\mathcal{H}_{\substack{n_{0}+2(i +1)}}(G_{2(i+1)})$, i.e., there is a bijection
\begin{equation}\label{eq-constantbijection}
\begin{split}
&f:\mathcal{H}_{\substack{n_{0}+2i}}(G_{2i})\leftrightarrow \mathcal{H}_{\substack{n_{0}+2(i+1)}}(G_{2(i+1)}) \, \,\text{with} \\
&f(H(G_{2i}))=H(G_{2i})\cup w_{2i+2}w_{2i+1}.
\end{split}
\end{equation}
Note that the extra component $K_2=w_{2i+2}w_{2i+1}$ only changes the sign of the contribution to the constant coefficient of the characteristic polynomial. Therefore each pair of elementary spanning subgraphs $H(G_{2i})$ and $H(G_{2i})\cup w_{2i+2}w_{2i+1}$ contributes terms of opposite sign to the constant terms of $P_{\substack{G_{2i}}}(x)$ and $P_{\substack{G_{2(i+1)}}}(x)$, respectively. Hence $|\det(A(G_{2(i+1)}))|=|\det(A(G_{2i}))|=a$. Analogously we have $|\det(A(\overline{G}_{2i+1}))|=p$. Equation \eqref{eq-dewWGi} now follows by applying Theorem \ref{thm-detWGcupv} repeatedly.
\end{proof}
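The determinant formula \eqref{eq-dewWGi} can be tested exactly with SymPy integer arithmetic. The sketch below (our addition) draws a random controllable starter $G_0$ on six vertices, builds $G_1,G_2,\ldots$ by alternately appending an isolated and a dominating vertex, and checks $|\det(W(G_i))|=a^{\lceil i/2\rceil}bp^{\lfloor i/2\rfloor}$.

```python
import random
from math import ceil, floor
import sympy as sp

def walk_det(A):
    # exact determinant of W = [e, Ae, ..., A^{n-1}e]
    n = A.shape[0]
    cols, v = [], sp.ones(n, 1)
    for _ in range(n):
        cols.append(v)
        v = A * v
    return sp.Matrix.hstack(*cols).det()

def comp(A):
    n = A.shape[0]
    return sp.ones(n, n) - sp.eye(n) - A

def union_vertex(A):   # G u w
    n = A.shape[0]
    B = sp.zeros(n + 1, n + 1)
    B[:n, :n] = A
    return B

def join_vertex(A):    # G v w
    n = A.shape[0]
    B = sp.ones(n + 1, n + 1) - sp.eye(n + 1)
    B[:n, :n] = A
    return B

random.seed(4)
while True:            # random controllable starter G0 on 6 vertices
    A0 = sp.zeros(6, 6)
    for i in range(6):
        for j in range(i + 1, 6):
            A0[i, j] = A0[j, i] = random.randint(0, 1)
    if walk_det(A0) != 0:
        break

a, b = abs(A0.det()), abs(walk_det(A0))
G = union_vertex(A0)                  # G1 = G0 u w1
p = abs(comp(G).det())                # p = |det A(complement of G1)|
for i in range(1, 6):
    assert abs(walk_det(G)) == a ** ceil(i / 2) * b * p ** floor(i / 2)
    G = join_vertex(G) if (i + 1) % 2 == 0 else union_vertex(G)
```

Exact rational arithmetic is used deliberately: the walk-matrix entries grow quickly, so floating-point determinants would lose precision for larger $i$.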
From this result we obtain infinite sequences of graphs that are determined by their generalized spectrum:
\begin{theorem}\label{thm-detWn2to-n-above2}
Let $G_0,\,G_{1},\,G_{2},\,\ldots$ be as in Theorem \ref{thm-detWni} and denote $|V(G_{0})|$ by $n_{0}.$ Suppose that $\{a,p\}=\{1,2\}$ and that $b\cdot2^{-\lfloor\frac{n_0}{2}\rfloor}$ is an odd square-free integer. Then the graphs $G_{i}$ are determined by their generalized spectrum, for all $i\geq 1$.
\end{theorem}
\begin{proof}
It is easy to see that $2^{-\lfloor\frac{n_{0}+i}{2}\rfloor}\cdot \det(W(G_i))$ is odd and square-free when $\{a,p\}=\{1,2\}$ and $b\cdot {2^{-\lfloor\frac{ n_0}{2}\rfloor}}$ is an odd square-free integer. Thus our result follows from Theorem~\ref{thm-Wang-ASimpleCriterion} and Theorem~\ref{thm-detWni}.
\end{proof}
We mention several remarks and open problems.
\begin{remark}\label{prop C} {\rm In the introduction we discussed the importance of the property ${\mathcal C}$ as an extremal divisibility condition for the walk matrix of a graph. All the graphs $G_{i}$ in Theorem~\ref{thm-detWn2to-n-above2}\, now satisfy the condition ${\mathcal C}.$}\end{remark}
\begin{remark}\label{rem-odd-square-free} {\rm There are indeed many graphs $G_{0}$ that are suitable starters for such sequences. Among the $112$ connected graphs on six vertices there are $8$ controllable graphs: those labelled $59$ and $77$ in~\cite{Cvetkovic-A-Table-V6}, with $|\det(W)|=3\cdot 2^3$, and those labelled $46, 60, 67, 85, 87, 98$, with $|\det(W)|=2^3.$ These graphs are therefore controllable and have property $\mathcal{C}$.
According to Theorems~\ref{thm-Wang-ASimpleCriterion}, \ref{thm-detWni} and \ref{thm-detWn2to-n-above2}, we obtain infinite series of graphs based on $6$ of these $8$ graphs as an initial $G_0$, namely the graphs labelled $59, 77, 67, 85, 87, 98$. In particular, all graphs in these series have property $\mathcal{C}$ and hence are DGS graphs.} \end{remark}
\begin{remark}\label{rem-2tonabove2}
{\rm Mao et al.~\cite{LHMao-NewMethodDS} stated the problem of characterizing graphs with $|\det(W)|= 2^{\lfloor \frac{n}{2}\rfloor}$. This condition can be viewed
as a strengthening of condition $\mathcal{C}$
and is worth investigation independently.
By Theorems~\ref{thm-detWni} and~\ref{thm-detWn2to-n-above2} we can construct infinite families of such graphs based on the controllable graphs labelled $67, 85, 87, 98$ in~\cite{Cvetkovic-A-Table-V6}.}
\end{remark}
\begin{remark}{\rm
Theorems \ref{thm-detbarW},~\ref{thm-Wang-ASimpleCriterion} and~\ref{thm-detWn2to-n-above2} imply that the complement of any $G_i$ constructed in Theorem \ref{thm-detWni} is also DGS, for all $i\ge 0.$}
\end{remark}
\begin{figure}
\caption{Constructing a family of DGS graphs from $G_0\cong\#67$}
\label{Fig1}
\end{figure}
\begin{remark}\label{ex-DGS}{\rm
Let $G_0$ be the graph labelled $67$ in~\cite{Cvetkovic-A-Table-V6}, see Fig.~\ref{Fig1}. We show schematically the first five DGS graphs obtained from Theorem~\ref{thm-detWni}. It is easy to verify that these graphs satisfy $|\det(W)|=2^{\lfloor\frac{n}{2}\rfloor}$.}
\end{remark}
{\bf Open Problems.} We conclude with two open problems:\\
(i)\,\,Find other constructions of graphs satisfying property $\mathcal{C}$.\\
(ii)\,Determine the generalized spectrum of graphs with property $\mathcal{C}$ and classify such graphs.
\end{document} |
\begin{document}
\title[B\'enard convection]{Local well-posedness for the B\'enard convection without surface tension}
\author{Yunrui Zheng}
\address{Beijing International Center for Mathematical Research, Peking University, 100871, P. R. China}
\email{[email protected]}
\keywords{B\'enard convection, Boussinesq approximation, energy method}
\begin{abstract}
We consider the B\'enard convection in a three-dimensional domain bounded below by a fixed flat bottom and above by a free moving surface. The domain is horizontally periodic. The fluid dynamics are governed by the Boussinesq approximation, and the effect of surface tension is neglected on the free surface. We develop a local well-posedness theory for the equations in the general case within the framework of the nonlinear energy method.
\end{abstract}
\maketitle
\section{Introduction}
\subsection{Formulation of the problem}
In this paper, we consider the B\'enard convection in a shallow horizontal layer of a fluid heated from below evolving in a moving domain
\begin{eqnarray*}
\Om(t)=\left\{y\in\Sigma\times\mathbb{R}\mid-1<y_3<\eta(y_1,y_2,t)\right\}.
\end{eqnarray*}
Here we assume that $\Sigma=(L_1\mathbb{T})\times(L_2\mathbb{T})$ for $\mathbb{T}=\mathbb{R}/\mathbb{Z}$ the usual $1$-torus and $L_1$, $L_2>0$ the periodicity lengths.
Assuming the Boussinesq approximation \cite{Chand}, we obtain the basic hydrodynamic equations governing B\'enard convection as
\begin{eqnarray*}
\pa_t u+u\cdot\nabla u+\frac{1}{\rho_0}\nabla p&=&\nu\Delta u+g\alpha \theta\mathbf{e}_{y_3},\quad \text{in}\ \Om(t),\\
\pa_t \theta+u\cdot\nabla \theta&=&\kappa\Delta \theta,\quad \text{in}\ \Om(t),\\
u\mid_{t=0}&=&u_0(y_1,y_2,y_3),\quad \theta\mid_{t=0}=\theta_0(y_1,y_2,y_3).
\end{eqnarray*}
Here, $u=(u_1,u_2,u_3)$ is the velocity field of the fluid, satisfying $\mathop{\rm div}\nolimits u=0$; $p$ is the pressure, $g>0$ the strength of gravity, $\nu>0$ the kinematic viscosity, $\alpha$ the thermal expansion coefficient, $\mathbf{e}_{y_3}=(0,0,1)$ the unit upward vector, $\theta$ the temperature field of the fluid, $\kappa$ the thermal diffusivity coefficient, and $\rho_0$ the density at the temperature $T_0$. Notice that we have shifted the actual pressure $\bar{p}$ by $p=\bar{p}+gy_3-p_{atm}$, where $p_{atm}$ is the constant atmospheric pressure.
The boundary conditions are
\begin{eqnarray*}
\pa_t\eta+u^\prime\cdot\nabla\eta-u_3&=&0,\quad \text{on}\ \{y_3=\eta(t,y_1,y_2)\},\\
(pI-\nu\mathbb{D}(u))n&=&g\eta n+\sigma Hn+(\mathbf{t}\cdot\nabla)\sigma\mathbf{t},\quad \text{on}\ \{y_3=\eta(t,y_1,y_2)\},\\
n\cdot\nabla \theta+Bi \theta&=&-1,\quad \text{on}\ \{y_3=\eta(t,y_1,y_2)\},\\
u\mid_{y_3=-1}&=&0,\quad \theta\mid_{y_3=-1}=0.
\end{eqnarray*}
Here, $u^\prime=(u_1,u_2)$, $I$ is the $3\times3$ identity matrix, $\mathbb{D}(u)_{ij}=\pa_iu_j+\pa_ju_i$ is the symmetric gradient of $u$, $\mathscr{N}=(-\pa_1\eta, -\pa_2\eta, 1)$ is the upward normal vector of the free surface $\{y_3=\eta\}$, $n=\mathscr{N}/|\mathscr{N}|$ with $|\mathscr{N}|=\sqrt{(\pa_1\eta)^2+(\pa_2\eta)^2+1}$ is the unit upward normal, $\mathbf{t}$ is the unit tangential vector of the free surface, $Bi\ge0$ is the Biot number, and $H$ is the mean curvature of the free surface. For simplicity, we only consider the case without surface tension in this paper, i.e. $\sigma=0$.
We will always assume the natural condition that there exists a positive number $\delta_0$ such that $1+\eta_0\ge\delta_0>0$ on $\Sigma$, which means that the initial free surface is strictly separated from the bottom. Without loss of generality, we may assume that $\rho_0=\nu=\kappa=\alpha=g=Bi=1$. That is, we will consider the equations
\begin{eqnarray}\label{equ:BC}
\left\{
\begin{aligned}
\pa_tu+u\cdot\nabla u+\nabla p-\Delta u-\theta e_{y_3}&=0& \quad \text{in}\quad \Om(t), \\
\mathop{\rm div}\nolimits u&=0& \quad \text{in}\quad \Om(t),\\
\pa_t\theta+u\cdot\nabla \theta-\Delta \theta&=0& \quad \text{in}\quad \Om(t),\\
(pI-\mathbb{D}u)n&=\eta n& \quad \text{on}\quad \{y_3=\eta(t,y_1,y_2)\},\\
\nabla \theta\cdot n+\theta &=-1&\quad\text{on}\quad\{y_3=\eta(t,y_1,y_2)\},\\
u=0,\quad \theta&=0& \quad\text{on}\quad\{y_3=-1\},\\
u\mid_{t=0}=u_0, \quad \theta\mid_{t=0}&=\theta_0&\quad \text{in}\quad \Om(0),\\
\pa_t\eta+u_1\pa_1\eta+u_2\pa_2\eta&=u_3& \quad\text{on}\quad\{y_3=\eta(t,y_1,y_2)\},\\
\eta\mid_{t=0}&=\eta_0&\quad\text{on}\quad \Sigma.
\end{aligned}
\right.
\end{eqnarray}
The discussion of the fourth equation in \eqref{equ:BC} may be found in \cite{WL}. The eighth equation in \eqref{equ:BC} states that the free surface is advected with the fluid.
\subsection{Previous results}
Traditionally, the B\'enard convection problem has been studied either with a fixed upper boundary or with a free surface under surface tension.
For the problem with surface tension, the existence and decay of global-in-time solutions of the B\'enard convection problem with a free surface was proved in $L^2$-based spaces: by T. Iohara in the two-dimensional setting, and by T. Nishida and Y. Teramoto in the three-dimensional setting. Both works utilized the framework of \cite{Beale2} in Lagrangian coordinates.
\subsection{Geometrical formulation}
In the absence of surface tension, we will solve this problem in Eulerian coordinates. First, we straighten the time-dependent domain $\Om(t)$ to a time-independent domain $\Om$. The idea was introduced by J. T. Beale in Section 5 of \cite{Beale2}. In \cite{GT1}, \cite{GT2} and \cite{GT3}, Y. Guo and I. Tice proved local and global existence results for the incompressible Navier--Stokes equations with a deformable surface using this idea; there, the surface function $\eta$ is assumed small in suitable norms, i.e. the free surface is a small perturbation of the plane $\{y_3=0\}$. In order to study the free boundary problem of the incompressible Navier--Stokes equations with a general surface function $\eta$, L. Wu introduced the $\varepsilon$-Poisson integral method in \cite{LW}. In this paper, we will use the flattening transformation introduced by L. Wu. We define $\bar{\eta}^\varepsilon$ by
\[
\bar{\eta}^\varepsilon=\mathscr{P}^\varepsilon\eta=\thinspace\text{the parametrized harmonic extension of}\thinspace \eta.
\]
The definition of $\mathscr{P}^\varepsilon\eta$ in the periodic case can be found in Section 1.3.1 of \cite{LW}.
We introduce the mapping $\Phi^\varepsilon$ from $\Om$ to $\Om(t)$ as
\begin{equation}\label{map:phi}
\Phi^\varepsilon :(x_1,x_2,x_3)\mapsto (x_1,x_2,x_3+(1+x_3)\bar{\eta}^\varepsilon)=(y_1,y_2,y_3),
\end{equation}
and its Jacobian matrix
\[
\nabla\Phi^\varepsilon=\left(\begin{array}{ccc}
1 & 0 & 0\\
0 & 1 & 0\\
A^\varepsilon & B^\varepsilon & J^\varepsilon
\end{array}\right)
\]
and the transform matrix
\[
\mathscr{A}^\varepsilon=((\nabla\Phi^\varepsilon)^{-1})^\top=\left(\begin{array}{ccc}
1 & 0 & -A^\varepsilon K^\varepsilon \\
0 & 1 & -B^\varepsilon K^\varepsilon \\
0 & 0 & K^\varepsilon
\end{array}\right)
\]
where
\begin{equation}\label{equ:components}
A^\varepsilon=(1+x_3)\pa_1\bar{\eta}^{\varepsilon},\thinspace
B^\varepsilon=(1+x_3)\pa_2\bar{\eta}^{\varepsilon},\thinspace
J^\varepsilon=1+\bar{\eta}^\varepsilon+(1+x_3)\pa_3\bar{\eta}^\varepsilon,\thinspace
K^\varepsilon=1/J^\varepsilon.
\end{equation}
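The algebra behind \eqref{equ:components} can be checked symbolically. The following SymPy sketch (our addition; the function name \texttt{etabar} stands for the extension $\bar{\eta}^\varepsilon$) verifies that the Jacobian of $\Phi^\varepsilon$ and the transform matrix $\mathscr{A}^\varepsilon$ have the stated entries.

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
f = sp.Function('etabar')(x1, x2, x3)  # stands for \bar{\eta}^\varepsilon

Phi = sp.Matrix([x1, x2, x3 + (1 + x3) * f])
Jac = Phi.jacobian([x1, x2, x3])

A = (1 + x3) * sp.diff(f, x1)
B = (1 + x3) * sp.diff(f, x2)
J = 1 + f + (1 + x3) * sp.diff(f, x3)
K = 1 / J

# the Jacobian matrix of Phi has rows (1,0,0), (0,1,0), (A,B,J)
assert sp.simplify(Jac - sp.Matrix([[1, 0, 0], [0, 1, 0], [A, B, J]])) == sp.zeros(3, 3)

# the transform matrix is the inverse transpose of the Jacobian
Acal = Jac.inv().T
expected = sp.Matrix([[1, 0, -A * K], [0, 1, -B * K], [0, 0, K]])
assert sp.simplify(Acal - expected) == sp.zeros(3, 3)
```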
According to Theorem 2.7 in \cite{LW} and the assumption that $1+\eta_0>\delta_0>0$, there exists a $\delta>0$ such that $J^\varepsilon(0)>\delta>0$ for sufficiently small $\varepsilon$ depending on $\|\eta_0\|_{H^{5/2}}$. This implies that $\Phi^\varepsilon(0)$ is a homeomorphism. Furthermore, $\Phi^\varepsilon(0)$ is a $C^1$ diffeomorphism by Lemmas 2.5 and 2.6 in \cite{LW}. For simplicity, in the following we write $\bar{\eta}$ instead of $\bar{\eta}^\varepsilon$, and similarly for $\mathscr{A}$, $\Phi$, $A$, $B$, $J$ and $K$.
Then, we define some transformed operators. The differential operators $\nabla_{\mathscr{A}}$, $\mathop{\rm div}\nolimits_{\mathscr{A}}$ and $\Delta_{\mathscr{A}}$ are defined as follows.
\begin{align*}
&(\nabla_{\mathscr{A}}f)_i=\mathscr{A}_{ij}\pa_jf,\\
&\mathop{\rm div}\nolimits_{\mathscr{A}} u=\mathscr{A}_{ij}\pa_ju_i,\\
&\Delta_{\mathscr{A}}f=\nabla_{\mathscr{A}}\cdot\nabla_{\mathscr{A}}f.
\end{align*}
The symmetric $\mathscr{A}$-gradient $\mathbb{D}_{\mathscr{A}}$ is defined as $(\mathbb{D}_{\mathscr{A}}u)_{ij}=\mathscr{A}_{ik}\pa_ku_j+\mathscr{A}_{jk}\pa_ku_i$. And we write the stress tensor as $S_{\mathscr{A}}(p,u)=pI-\mathbb{D}_{\mathscr{A}}u$, where $I$ is the $3\times3$ identity matrix. Then we note that $\mathop{\rm div}\nolimits_{\mathscr{A}}S_{\mathscr{A}}(p,u)=\nabla_{\mathscr{A}}p-\Delta_{\mathscr{A}}u$ for vector fields satisfying $\mathop{\rm div}\nolimits_{\mathscr{A}}u=0$. We have also written $\mathscr{N}=(-\pa_1\eta,-\pa_2\eta,1)$ for the nonunit normal to $\{y_3=\eta(y_1,y_2,t)\}$.
Then the original equations \eqref{equ:BC} become
\begin{eqnarray}\label{equ:NBC}
\left\{
\begin{aligned}
&\pa_tu-\pa_t\bar{\eta}(1+x_3)K\pa_3u+u\cdot\nabla_{\mathscr{A}}u-\Delta_{\mathscr{A}}u+\nabla_{\mathscr{A}}p-\theta \nabla_{\mathscr{A}}y_3=0 &\quad\text{in}\quad\Om \\
&\nabla_{\mathscr{A}}\cdot u=0&\quad\text{in}\quad\Om \\
&\pa_t\theta-\pa_t\bar{\eta}(1+x_3)K\pa_3\theta+u\cdot\nabla_{\mathscr{A}}\theta-\Delta_{\mathscr{A}}\theta=0&\quad\text{in}\quad\Om \\
&(p I-\mathbb{D}_{\mathscr{A}}u)\mathscr{N}=\eta\mathscr{N}&\quad\text{on}\quad \Sigma\\
&\nabla_{\mathscr{A}}\theta\cdot\mathscr{N}+\theta\left|\mathscr{N}\right|=-\left|\mathscr{N}\right|&\quad\text{on}\quad \Sigma\\
&u=0,\quad\theta=0& \quad\text{on}\quad \Sigma_b\\
&u(x,0)=u_0, \quad \theta(x,0)=\theta_0&\quad \text{in}\quad \Om\\
&\pa_t\eta+u_1\pa_1\eta+u_2\pa_2\eta=u_3&\quad\text{on}\quad \Sigma\\
&\eta(x^\prime,0)=\eta_0(x^\prime)&\quad\text{on}\quad \Sigma
\end{aligned}
\right.
\end{eqnarray}
where $e_3=(0, 0, 1)$, and we can split \eqref{equ:NBC} into an equation governing the B\'enard convection and a transport equation, i.e.
\begin{eqnarray}\label{equ:nonlinear BC}
\left\{
\begin{aligned}
&\pa_tu-\pa_t\bar{\eta}(1+x_3)K\pa_3u+u\cdot\nabla_{\mathscr{A}}u-\Delta_{\mathscr{A}}u+\nabla_{\mathscr{A}}p-\theta \nabla_{\mathscr{A}}y_3=0& \quad\text{in}\quad\Om\\
&\nabla_{\mathscr{A}}\cdot u=0&\quad\text{in}\quad\Om\\
&\pa_t\theta-\pa_t\bar{\eta}(1+x_3)K\pa_3\theta+u\cdot\nabla_{\mathscr{A}}\theta-\Delta_{\mathscr{A}}\theta=0&\quad\text{in}\quad\Om\\
& (p I-\mathbb{D}_{\mathscr{A}}u)\mathscr{N}=\eta\mathscr{N}&\quad\text{on}\quad \Sigma\\
&\nabla_{\mathscr{A}}\theta\cdot\mathscr{N}+\theta\left|\mathscr{N}\right|=-\left|\mathscr{N}\right|&\quad\text{on}\quad \Sigma\\
&u=0,\quad\theta=0& \quad\text{on}\quad \Sigma_b\\
&u(x,0)=u_0, \quad \theta(x,0)=\theta_0&\quad \text{in}\quad \Om\\
\end{aligned}
\right.
\end{eqnarray}
and
\begin{eqnarray}
\left\{
\begin{aligned}
&\pa_t\eta+u_1\pa_1\eta+u_2\pa_2\eta=u_3&\quad\text{on}\quad \Sigma\\
&\eta(x^\prime,0)=\eta_0(x^\prime)&\quad\text{on}\quad \Sigma
\end{aligned}
\right.
\end{eqnarray}
Clearly, all the quantities in these two above systems are related to $\eta$.
\subsection{Main theorem}
The main result of this paper is the local well-posedness of the B\'enard convection. Before stating our result, we need to mention the issue of compatibility conditions for the initial data $(u_0,\theta_0,\eta_0)$. We will study regularity up to $N$ temporal derivatives, for an integer $N\ge2$. This requires us to use $u_0$, $\theta_0$ and $\eta_0$ to construct the initial data $\pa_t^ju(0)$, $\pa_t^j\theta(0)$ and $\pa_t^j\eta(0)$ for $j=1,\ldots,N$ and $\pa_t^jp(0)$ for $j=0,\ldots,N-1$. These data must satisfy various compatibility conditions, which we describe in detail in Section 5.1, so we will not state them here.
Now for stating our result, we need to explain the notation for spaces and norms. When we write $\|\pa_t^ju\|_{H^k}$, $\|\pa_t^j\theta\|_{H^k}$ and $\|\pa_t^jp\|_{H^k}$, we always mean that the space is $H^k(\Om)$, and when we write $\|\pa_t^j\eta\|_{H^s}$, we always mean that the space is $H^s(\Sigma)$, where $H^k(\Om)$ and $H^s(\Sigma)$ are usual Sobolev spaces for $k, s\ge0$.
\begin{theorem}\label{thm:main}
Let $N\ge2$ be an integer. Assume that $\eta_0+1\ge\delta>0$, and that the initial data $(u_0,\theta_0,\eta_0)$ satisfies
\[
\mathscr{E}_0:=\|u_0\|_{H^{2N}}^2+\|\theta_0\|_{H^{2N}}^2+\|\eta_0\|_{H^{2N+1/2}}^2<\infty,
\]
as well as the $N$-th compatibility conditions \eqref{cond:compatibility N}. Then there exists a $0<T_0<1$ such that for any $0<T<T_0$, there exists a solution $(u,p,\theta,\eta)$ to \eqref{equ:NBC} on the interval $[0,T]$ that achieves the initial data. The solution obeys the estimate
\begin{eqnarray} \label{est:main est}
\begin{aligned}
&\sum_{j=0}^N\left(\sup_{0\le t\le T}\|\pa_t^ju\|_{H^{2N-2j}}^2+\|\pa_t^ju\|_{L^2H^{2N-2j+1}}^2\right)+\|\pa_t^{N+1}u\|_{(\mathscr{X}_T)^\ast}\\
&\quad+\sum_{j=0}^{N-1}\left(\sup_{0\le t\le T}\|\pa_t^jp\|_{H^{2N-2j-1}}^2+\|\pa_t^jp\|_{L^2H^{2N-2j}}^2\right)\\
&\quad+\sum_{j=0}^N\left(\sup_{0\le t\le T}\|\pa_t^j\theta\|_{H^{2N-2j}}^2+\|\pa_t^j\theta\|_{L^2H^{2N-2j+1}}^2\right)+\|\pa_t^{N+1}\theta\|_{(\mathscr{H}^1_T)^\ast}\\
&\quad+\Bigg(\sup_{0\le t\le T}\|\eta\|_{H^{2N+1/2}(\Sigma)}^2+\sum_{j=1}^N\sup_{0\le t\le T}\|\pa_t^j\eta\|_{H^{2N-2j+3/2}}^2\\
&\quad+\sum_{j=2}^{N+1}\|\pa_t^j\eta\|_{L^2H^{2N-2j+5/2}}^2\Bigg)\\
&\le C(\Om_0,\delta)P(\mathscr{E}_0),
\end{aligned}
\end{eqnarray}
where $C(\Om_0,\delta)>0$ depends on the initial domain $\Om_0$ and $\delta$, $P(\cdot)$ is a polynomial satisfying $P(0)=0$, and the temporal norm $L^2$ is computed on $[0,T]$. The solution is unique among functions that achieve the initial data and for which the left-hand side of \eqref{est:main est} is finite. Moreover, $\eta$ is such that the mapping $\Phi(\cdot,t)$ defined by \eqref{map:phi} is a $C^{2N-1}$ diffeomorphism for each $t\in [0,T]$.
\end{theorem}
\begin{remark}
The space $\mathscr{X}_T$ is defined in section $2$ of \cite{GT1}.
\end{remark}
\begin{remark}
Since the mapping $\Phi(\cdot,t)$ is a $C^{2N-1}$ diffeomorphism, we may change coordinates to produce solutions to \eqref{equ:BC}.
\end{remark}
\subsection{Notation and terminology}
Now, we mention some definitions, notation and conventions that we will use throughout this paper.
\begin{enumerate}[1.]
\item Constants. The constant $C>0$ will denote a universal constant that depends only on the parameters of the problem, $N$ and $\Om$, but not on the data. It is allowed to change from line to line. We will write $C=C(z)$ to indicate that the constant $C$ depends on $z$, and we will write $a\lesssim b$ to mean that $a\le C b$ for a universal constant $C>0$.\\
\item Polynomials. We will write $P(\cdot)$ to denote polynomials in one variable and they may change from one inequality or equality to another.\\
\item Norms. We will write $H^k$ for $H^k(\Om)$ for $k\ge0$, and $H^s(\Sigma)$ with $s\in\mathbb{R}$ for the usual Sobolev spaces; in particular $H^0=L^2$. The exception is that we will use $L^2([0,T];H^k)$ (or $L^2([0,T];H^s(\Sigma))$) to denote the space of square-integrable functions in time with values in $H^k$ (or $H^s(\Sigma)$).
Sometimes we will write $\|\cdot\|_{k}$ instead of $\|\cdot\|_{H^k(\Om)}$ or $\|\cdot\|_{H^k(\Sigma)}$. We assume that functions live in their natural spaces: for example, $u$, $p$, $\theta$ and $\bar{\eta}$ live on $\Om$, while $\eta$ lives on $\Sigma$. So we may write $\|\cdot\|_{H^k}$ for the norms of $u$, $p$, $\theta$ and $\bar{\eta}$ on $\Om$, and $\|\cdot\|_{H^s}$ for norms of $\eta$ on $\Sigma$.
\end{enumerate}
\subsection{Plan of the paper}
In Section 2, we develop the machinery of time-dependent function spaces based on \cite{GT1}. In Section 3, we establish elliptic estimates for the linear steady version of \eqref{equ:linear BC}.
In Section 4, we will study the local existence theory of the following linear problem for $(u,p,\theta)$, where we regard $\eta$ (and hence $\mathscr{A}$, $\mathscr{N}$, etc.) as given:
\begin{eqnarray} \label{equ:linear BC}
\left\{
\begin{aligned}
&\pa_t u-\Delta_{\mathscr{A}}u+\nabla_{\mathscr{A}}p-\theta \nabla_{\mathscr{A}}y_3=F^1& \quad\text{in}\quad\Om,\\
&\nabla_{\mathscr{A}}\cdot u=0&\quad\text{in}\quad\Om,\\
&\pa_t\theta-\Delta_{\mathscr{A}}\theta=F^3&\quad\text{in}\quad\Om,\\
& (p I-\mathbb{D}_{\mathscr{A}}u)\mathscr{N}=F^4&\quad\text{on}\quad \Sigma,\\
&\nabla_{\mathscr{A}}\theta\cdot\mathscr{N}+\theta\left|\mathscr{N}\right|=F^5&\quad\text{on}\quad \Sigma,\\
&u=0,\quad\theta=0& \quad\text{on}\quad \Sigma_b,
\end{aligned}
\right.
\end{eqnarray}
subject to the initial conditions $u(0)=u_0$ and $\theta(0)=\theta_0$, via a time-dependent Galerkin method. In Section 5, we construct the initial data and estimate the forcing terms. In Section 6, we construct solutions to \eqref{equ:NBC} by iteration and contraction, and complete the proof of Theorem \ref{thm:main}.
\section{Functional setting}
\subsection{Function spaces}
Throughout this paper, we utilize the function spaces defined by Guo and Tice in Section 2 of \cite{GT1}. The only modification is the definition of the space $\mathscr{H}^1(t)$: for the vector-valued space $\mathscr{H}^1(t)$ the definition is the same as in \cite{GT1}, while the scalar-valued space $\mathscr{H}^1(t)$ is defined as follows.
\[
\mathscr{H}^1(t):=\left\{\theta \mid \|\theta\|_{\mathscr{H}^1}<\infty,\ \theta|_{\Sigma_b}=0\right\}
\]
with the norm $\|\theta\|_{\mathscr{H}^1}:=\left(\theta,\theta\right)_{\mathscr{H}^1}^{1/2}$, where the inner product $\left(\cdot,\cdot\right)_{\mathscr{H}^1}$ is defined as
\[
\left(\theta,\phi\right)_{\mathscr{H}^1}:=\int_{\Om}\left(\nabla_{\mathscr{A}(t)}\theta\cdot\nabla_{\mathscr{A}(t)}\phi\right)J(t).
\]
The following lemma implies that this space $\mathscr{H}^1$ is equivalent to the usual Sobolev space $H^1$.
\begin{lemma}\label{lem:theta H0 H1}
Suppose that $0<\varepsilon_0<1$ and $\|\eta-\eta_0\|_{H^{5/2}(\Sigma)}<\varepsilon_0$. Then it holds that
\begin{equation}\label{est:theta H0}
\|\theta\|_{H^0}^2\lesssim\int_\Om J|\theta|^2\lesssim\left(1+\|\eta_0\|_{H^{5/2}(\Sigma)}\right)\|\theta\|_{H^0}^2,
\end{equation}
\begin{equation}\label{est:theta H1}
\f{1}{\left(1+\|\eta_0\|_{H^{5/2}(\Sigma)}\right)^3}\|\theta\|_{H^1(\Om)}^2\lesssim\int_\Om J|\nabla_{\mathscr{A}}\theta|^2\lesssim\left(1+\|\eta_0\|_{H^{5/2}(\Sigma)}\right)^3\|\theta\|_{H^1(\Om)}^2.
\end{equation}
\end{lemma}
\begin{proof}
From the Poincar\'e inequality, we know that $\|\theta\|_{H^1}$ is equivalent to $\|\nabla\theta\|_{H^0}$. So in the following, we will use $\|\theta\|_{H^1}$ instead of $\|\nabla\theta\|_{H^0}$.
From the assumption and the Sobolev inequalities, we may derive that
\[
\delta\lesssim\|J\|_{L^\infty}\lesssim1+\|\bar{\eta}\|_{L^\infty}+\|\nabla\bar{\eta}\|_{L^\infty}\lesssim 1+\|\eta\|_{H^{5/2}}\lesssim 1+\|\eta_0\|_{H^{5/2}},
\]
and
\begin{align*}
\|\mathscr{A}\|_{L^\infty}&\lesssim\max\{1, \|AK\|_{L^\infty}^2, \|BK\|_{L^\infty}^2, \|K\|_{L^\infty}^2\}\\
&\lesssim1+(1+\|\nabla\bar{\eta}\|_{L^\infty}^2)\|K\|_{L^\infty}^2\lesssim \left(1+\|\eta_0\|_{H^{5/2}}\right)^2.
\end{align*}
Thus \eqref{est:theta H0} is clearly derived from the estimate of $\|J\|_{L^\infty}$ and we have that
\begin{align*}
\int_\Om J|\nabla_{\mathscr{A}}\theta|^2&\lesssim(1+\|\eta_0\|_{H^{5/2}})\int_\Om |\nabla_{\mathscr{A}}\theta|^2\\
&\lesssim(1+\|\eta_0\|_{H^{5/2}})\max\{1, \|AK\|_{L^\infty}^2, \|BK\|_{L^\infty}^2, \|K\|_{L^\infty}^2\}\|\theta\|_{H^1}^2\\
&\lesssim\left(1+\|\eta_0\|_{H^{5/2}}\right)^3\|\theta\|_{H^1}^2.
\end{align*}
Now we have proved the second inequality of \eqref{est:theta H1}.
To prove the first inequality of \eqref{est:theta H1}, we rewrite $\|\theta\|_{\mathscr{H}^1}^2$ as
\[
\int_\Om J|\nabla_{\mathscr{A}}\theta|^2=\int_\Om J|\nabla_{\mathscr{A}_0}\theta|^2+\int_\Om J(\nabla_{\mathscr{A}}\theta+\nabla_{\mathscr{A}_0}\theta)\cdot(\nabla_{\mathscr{A}}\theta-\nabla_{\mathscr{A}_0}\theta).
\]
Here $\mathscr{A}_0$ is defined in terms of $\eta_0$. By the estimate of $\|J\|_{L^\infty}$, we derive that
\begin{align*}
\int_\Om J|\nabla_{\mathscr{A}_0}\theta|^2&\gtrsim \f{1}{1+\|\eta_0\|_{H^{5/2}}}\int_\Om J_0|\nabla_{\mathscr{A}_0}\theta|^2\\
&=\f{1}{1+\|\eta_0\|_{H^{5/2}}}\int_{\Om_0} |\nabla(\theta\circ\Phi(0))|^2\\
&\gtrsim \f{1}{(1+\|\eta_0\|_{H^{5/2}})^3}\|\theta\|_{H^1}^2,
\end{align*}
where in the last inequality we have used Lemma \ref{lem:transport} below, since $\Phi(0)$ is a diffeomorphism. Here $J_0$ is defined in terms of $\eta_0$.
Then, using the estimates of $\|\mathscr{A}\|_{L^\infty}$ and $\|J\|_{L^\infty}$, we have that
\begin{align*}
\left|\int_\Om J(\nabla_{\mathscr{A}}\theta+\nabla_{\mathscr{A}_0}\theta)\cdot(\nabla_{\mathscr{A}}\theta-\nabla_{\mathscr{A}_0}\theta)\right|&\lesssim \|J\|_{L^\infty}\|\mathscr{A}+\mathscr{A}_0\|_{L^\infty}\|\mathscr{A}-\mathscr{A}_0\|_{L^\infty}\|\theta\|_{H^1}^2\\
&\lesssim\varepsilon_0\left(1+\|\eta_0\|_{H^{5/2}}\right)^3\|\theta\|_{H^1}^2.
\end{align*}
Then, taking $\varepsilon_0$ sufficiently small, we may derive that
\begin{align*}
\int_\Om J|\nabla_{\mathscr{A}}\theta|^2&\gtrsim\int_\Om J|\nabla_{\mathscr{A}_0}\theta|^2-\left|\int_\Om J(\nabla_{\mathscr{A}}\theta+\nabla_{\mathscr{A}_0}\theta)\cdot(\nabla_{\mathscr{A}}\theta-\nabla_{\mathscr{A}_0}\theta)\right|\\
&\gtrsim \f{1}{(1+\|\eta_0\|_{H^{5/2}})^3}\|\theta\|_{H^1}^2.
\end{align*}
This is the first inequality of \eqref{est:theta H1}.
\end{proof}
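The proof above relies on the change-of-variables identity $\int_{\Om_0}|\nabla(\theta\circ\Phi(0))|^2=\int_\Om J_0|\nabla_{\mathscr{A}_0}\theta|^2$ to transfer integrals between the reference domain and its image. As a one-dimensional numerical sanity check of the underlying identity $\int_{\Om^\prime} v^2\,dy=\int_\Om (v\circ\Psi)^2 J\,dx$ (illustration only, with arbitrary choices of $\Psi$ and $v$, not the maps of the paper):

```python
import numpy as np

# Numerical check (illustration only, not part of the proof) of the 1-D
# change-of-variables identity behind the transport argument:
#   int_{Om'} v(y)^2 dy = int_{Om} v(Psi(x))^2 J(x) dx,   J = Psi'.
def psi(x):
    # diffeomorphism of [0, 1] onto itself; psi'(x) = 1 + 0.1 cos(2 pi x) > 0
    return x + 0.1 * np.sin(2.0 * np.pi * x) / (2.0 * np.pi)

def jac(x):
    return 1.0 + 0.1 * np.cos(2.0 * np.pi * x)

def v(y):
    return np.cos(3.0 * y) + y ** 2

def trapezoid(f, x):
    # composite trapezoidal rule
    return float(np.sum((f[1:] + f[:-1]) * np.diff(x)) / 2.0)

x = np.linspace(0.0, 1.0, 200001)
lhs = trapezoid(v(x) ** 2, x)                # integral over Om' = [0, 1]
rhs = trapezoid(v(psi(x)) ** 2 * jac(x), x)  # pulled back to Om = [0, 1]
```

The two quadratures agree to quadrature accuracy, reflecting that the pullback by a diffeomorphism with Jacobian weight preserves the integral.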
We define an operator $\mathcal{K}_t$ by $\mathcal{K}_t\theta=K(t)\theta$, where $K(t):=K$ is defined in \eqref{equ:components}. Clearly, $\mathcal{K}_t$ is invertible with $\mathcal{K}_t^{-1}\Theta=K(t)^{-1}\Theta=J(t)\Theta$, where $J(t):=J=1/K$.
\begin{proposition}\label{prop:k}
For each $t\in[0, T]$, $\mathcal{K}_t$ is a bounded linear isomorphism: from $H^k(\Om)$ to $H^k(\Om)$ for $k=0, 1, 2$; from $L^2(\Om)$ to $\mathscr{H}^0(t)$; and from ${}_0H^1(\Om)$ to $\mathscr{H}^1(t)$. In each case, the norms of the operators $\mathcal{K}_t$, $\mathcal{K}_t^{-1}$ are bounded by a polynomial $P(\|\eta(t)\|_{H^{\f72}})$.
The mapping $\mathcal{K}$ defined by $\mathcal{K}\theta(t):=\mathcal{K}_t\theta(t)$ is a bounded linear isomorphism: from $L^2([0, T]; H^k(\Om))$ to $L^2([0, T]; H^k(\Om))$ for $k=0, 1, 2$; from $L^2([0, T]; H^0(\Om))$ to $\mathscr{H}^0_T$; and from $L^2([0, T]; {}_0H^1(\Om))$ to $\mathscr{H}^1_{T}$. In each case, the norms of the operators $\mathcal{K}$ and $\mathcal{K}^{-1}$ are bounded by the polynomial $P(\sup_{0\le t\le T}\|\eta(t)\|_{H^\f72})$.
\end{proposition}
\end{proposition}
\begin{proof}
It is easy to see that for each $t\in[0,T]$,
\begin{equation}
\|\mathcal{K}_t\theta\|_{H^0}\lesssim \|\mathcal{K}_t\|_{C^0}\|\theta\|_{H^0}\lesssim P(\|\eta(t)\|_{H^{\f72}})\|\theta\|_{H^0},
\end{equation}
\begin{equation}
\|\mathcal{K}_t\theta\|_{H^1}\lesssim \|\mathcal{K}_t\|_{C^1}\|\theta\|_{H^1}\lesssim P(\|\eta(t)\|_{H^{\f72}})\|\theta\|_{H^1},
\end{equation}
\begin{equation}
\|\mathcal{K}_t\theta\|_{H^2}\lesssim \|\mathcal{K}_t\|_{C^1}\|\theta\|_{H^2}+\|\mathcal{K}_t\|_{H^2}\|\theta\|_{C^0}\lesssim P(\|\eta(t)\|_{H^{\f72}})\|\theta\|_{H^2}.
\end{equation}
These inequalities imply that $\mathcal{K}_t$ is a bounded operator from $H^k$ to $H^k$ for $k=0,1,2$. Since $\mathcal{K}_t$ is invertible, we have the analogous estimate $\|\mathcal{K}_t^{-1}\Theta\|_{H^k}\lesssim P(\|\eta(t)\|_{H^{\f72}})\|\Theta\|_{H^k}$. Thus $\mathcal{K}_t$ is an isomorphism of $H^k$ to $H^k$ for $k=0,1,2$. With this fact in hand, Lemma \ref{lem:theta H0 H1} implies that $\mathcal{K}_t$ is an isomorphism of $L^2(\Om)$ to $\mathscr{H}^0(t)$ and of ${}_0H^1(\Om)$ to $\mathscr{H}^1(t)$.
The mapping properties of the operator $\mathcal{K}$ on space-time functions may be established in a similar manner.
\end{proof}
\subsection{Pressure as a Lagrange multiplier}
The introduction of the pressure as a Lagrange multiplier has been studied by Guo and Tice in Section 2 of \cite{GT1}, with a modification given by L. Wu in Section 2.2 of \cite{LW}, so we omit the details here.
\section{Elliptic estimates}
\subsection{Preliminaries}
Before studying the linear problem \eqref{equ:linear BC}, we need some elliptic estimates. In order to study the elliptic problem, we transform the equations on the domain $\Om$ into constant-coefficient equations on the domain $\Om^\prime=\Phi(\Om)$, where $\Phi$ is defined by \eqref{map:phi}. The following lemma shows that the mapping $\Phi$ induces an isomorphism between $H^k(\Om^\prime)$ and $H^k(\Om)$. Here the Sobolev spaces are either vector-valued or scalar-valued.
\begin{lemma}\label{lem:transport}
Let $\Psi: \Om\to\Om^\prime$ be a $C^1$ diffeomorphism satisfying $\Psi\in H^{k+1}_{loc}$, $\nabla\Psi-I\in H^k(\Om)$ and the Jacobian $J=\det(\nabla\Psi)>\delta>0$ almost everywhere in $\Om$, for an integer $k\ge3$. If $v\in H^m(\Om^\prime)$, then $v\circ\Psi\in H^m(\Om)$ for $m=0,1, \ldots, k+1$, and
\[
\|v\circ\Psi\|_{H^m(\Om)}\lesssim C\left(\|\nabla\Psi-I\|_{H^k(\Om)}\right)\|v\|_{H^m(\Om^\prime)},
\]
where $C(\|\nabla\Psi-I\|_{H^k(\Om)})$ is a constant depending on $\|\nabla\Psi-I\|_{H^k(\Om)}$. Similarly, for $u\in H^m(\Om)$, we have $u\circ\Psi^{-1}\in H^m(\Om^\prime)$ for $m=0,1, \ldots, k+1$, and
\[
\|u\circ\Psi^{-1}\|_{H^m(\Om^\prime)}\lesssim C\left(\|\nabla\Psi-I\|_{H^k(\Om)}\right)\|u\|_{H^m(\Om)}.
\]
Let $\Sigma^\prime=\Psi(\Sigma)$ be the top boundary of $\Om^\prime$. If $v\in H^{m-\f12}(\Sigma^\prime)$ for $m=1, \ldots, k-1$, then $v\circ\Psi\in H^{m-\f12}(\Sigma)$, and
\[
\|v\circ\Psi\|_{H^{m-\f12}(\Sigma)}\lesssim C\left(\|\nabla\Psi-I\|_{H^k(\Om)}\right)\|v\|_{H^{m-\f12}(\Sigma^\prime)}.
\]
If $u\in H^{m-\f12}(\Sigma)$ for $m=1, \ldots, k-1$, then $u\circ\Psi^{-1}\in H^{m-\f12}(\Sigma^\prime)$ and
\[
\|u\circ\Psi^{-1}\|_{H^{m-\f12}(\Sigma^\prime)}\lesssim C\left(\|\nabla\Psi-I\|_{H^k(\Om)}\right)\|u\|_{H^{m-\f12}(\Sigma)}.
\]
\end{lemma}
\begin{proof}
The proof is the same as that of Lemma 3.1 in \cite{GT1}, due to Y. Guo and I. Tice, so we omit the details here.
\end{proof}
\subsection{The $\mathscr{A}$-stationary convection problem}
In this section, we consider the stationary equations
\begin{eqnarray}\label{equ:SBC}
\left\{
\begin{aligned}
\mathop{\rm div}\nolimits_{\mathscr{A}}S_{\mathscr{A}}(p, u)-\theta \nabla_{\mathscr{A}}y_3&=F^1 \quad\text{in}\quad\Om\\
\mathop{\rm div}\nolimits_{\mathscr{A}}u&=F^2\quad\text{in}\quad\Om\\
-\Delta_{\mathscr{A}}\theta&=F^3\quad\text{in}\quad\Om\\
S_{\mathscr{A}}(p, u)\mathscr{N}&=F^4\quad\text{on}\quad \Sigma\\
\nabla_{\mathscr{A}}\theta\cdot\mathscr{N}+\theta\left|\mathscr{N}\right|&=F^5\quad\text{on}\quad \Sigma\\
u=0,\quad\theta&=0\quad\text{on}\quad \Sigma_b.\\
\end{aligned}
\right.
\end{eqnarray}
Before discussing the regularity of strong solutions to \eqref{equ:SBC}, we need to define weak solutions of \eqref{equ:SBC}. Suppose $F^1\in (\mathscr{H}^1)^\ast$, $F^2\in H^0$, $F^3\in (\mathscr{H}^1)^\ast$, $F^4\in H^{-\f12}(\Sigma)$ and $F^5\in H^{-\f12}(\Sigma)$. Then $(u,p,\theta)$ is called a weak solution of \eqref{equ:SBC} if it satisfies $\nabla_{\mathscr{A}}\cdot u=F^2$,
\begin{equation}\label{equ:weak theta}
\left(\nabla_{\mathscr{A}}\theta,\nabla_{\mathscr{A}}\phi\right)_{\mathscr{H}^0}+\left(\theta\left|\mathscr{N}\right|,\phi\right)_{H^0(\Sigma)}=\left<F^3, \phi\right>_{(\mathscr{H}^1)^\ast}+\left<F^5,\phi\right>_{H^{-\f12}(\Sigma)},
\end{equation}
and
\begin{equation}\label{equ:weak u}
\f12\left(\mathbb{D}_\mathscr{A}u,\mathbb{D}_\mathscr{A}\psi\right)_{\mathscr{H}^0}+\left(p, \nabla_{\mathscr{A}}\psi\right)_{\mathscr{H}^0}-\left(\theta \nabla_{\mathscr{A}}y_3, \psi\right)_{\mathscr{H}^0}=\left<F^1,\psi\right>_{(\mathscr{H}^1)^\ast}-\left<F^4, \psi\right>_{H^{-\f12}(\Sigma)},
\end{equation}
for any $\phi, \psi\in \mathscr{H}^1$.
\begin{lemma}
Suppose $F^1\in (\mathscr{H}^1)^\ast$, $F^2\in \mathscr{H}^0$, $F^3\in (\mathscr{H}^1)^\ast$, $F^4\in H^{-\f12}(\Sigma)$ and $F^5\in H^{-\f12}(\Sigma)$. Then there exists a unique weak solution $(u, p, \theta) \in \mathscr{H}^1\times \mathscr{H}^0 \times \mathscr{H}^1$ to \eqref{equ:SBC}.
\end{lemma}
\begin{proof}
For the Hilbert space $\mathscr{H}^1$ with the inner product $\left(\theta,\phi\right)=\left(\nabla_{\mathscr{A}}\theta, \nabla_{\mathscr{A}}\phi\right)_{\mathscr{H}^0}+\left(\theta\left|\mathscr{N}\right|,\phi\right)_{H^0(\Sigma)}$, we can define a linear functional $\ell\in (\mathscr{H}^1)^\ast$ by
\[
\ell(\phi)=\left<F^3, \phi\right>_{(\mathscr{H}^1)^\ast}+\left<F^5,\phi\right>_{H^{-\f12}(\Sigma)},
\]
for all $\phi\in \mathscr{H}^1$. Then, by the Riesz representation theorem, there exists a unique $\theta\in \mathscr{H}^1$ such that
\[
\left(\nabla_{\mathscr{A}}\theta, \nabla_{\mathscr{A}}\phi\right)_{\mathscr{H}^0}+\left(\theta\left|\mathscr{N}\right|,\phi\right)_{H^0(\Sigma)}=\left<F^3, \phi\right>_{(\mathscr{H}^1)^\ast}+\left<F^5,\phi\right>_{H^{-\f12}(\Sigma)},
\]
for all $\phi\in \mathscr{H}^1$.
By Lemma 2.6 in \cite{GT1}, there exists $\bar{u}\in \mathscr{H}^1$ such that $\mathop{\rm div}\nolimits_{\mathscr{A}}\bar{u}=F^2$. Then we may restrict our test functions to $\psi\in \mathscr{X}$. A direct application of the Riesz representation theorem to the Hilbert space $\mathscr{X}$ with inner product $\left(u,\psi\right)=\left(\mathbb{D}_{\mathscr{A}}u, \mathbb{D}_{\mathscr{A}}\psi\right)_{\mathscr{H}^0}$ provides a unique $w\in \mathscr{X}$ such that
\begin{equation} \label{eq:velocity}
\f12\left(\mathbb{D}_{\mathscr{A}}w, \mathbb{D}_{\mathscr{A}}\psi\right)_{\mathscr{H}^0}=-\f12\left(\mathbb{D}_{\mathscr{A}}\bar{u}, \mathbb{D}_{\mathscr{A}}\psi\right)_{\mathscr{H}^0}+\left(\theta \nabla_{\mathscr{A}}y_3, \psi\right)_{\mathscr{H}^0}+\left<F^1, \psi\right>_{(\mathscr{H}^1)^\ast}-\left<F^4, \psi\right>_{H^{-\f12}(\Sigma)}
\end{equation}
for all $\psi\in \mathscr{X}$. Then $u=w+\bar{u}\in\mathscr{H}^1$ satisfies
\begin{equation}\label{equ:pressureless weak u}
\f12\left(\mathbb{D}_\mathscr{A}u,\mathbb{D}_\mathscr{A}\psi\right)_{\mathscr{H}^0}-\left(\theta \nabla_{\mathscr{A}}y_3, \psi\right)_{\mathscr{H}^0}=\left<F^1,\psi\right>_{(\mathscr{H}^1)^\ast}-\left<F^4, \psi\right>_{H^{-\f12}(\Sigma)},
\end{equation}
together with $\mathop{\rm div}\nolimits_{\mathscr{A}}u=F^2$.
It is easy to see that $u$ is unique. Indeed, suppose that another $\tilde{u}$ satisfies \eqref{equ:pressureless weak u}. Then $\mathop{\rm div}\nolimits_{\mathscr{A}}(u-\tilde{u})=0$ and $\left(\mathbb{D}_\mathscr{A}(u-\tilde{u}),\mathbb{D}_\mathscr{A}\psi\right)_{\mathscr{H}^0}=0$ for any $\psi\in \mathscr{X}$. Taking $\psi=u-\tilde{u}$ and using Korn's inequality, we see that $\|u-\tilde{u}\|_{H^0}=0$, which implies $u=\tilde{u}$.
In order to introduce the pressure $p$, we define $\lam\in (\mathscr{H}^1)^\ast$ as the difference of the left- and right-hand sides of \eqref{eq:velocity}. Then $\lam(\psi)=0$ for all $\psi\in \mathscr{X}$. According to Proposition 2.12 in \cite{LW}, there exists a unique $p\in \mathscr{H}^0$ satisfying $\left(p, \mathop{\rm div}\nolimits_{\mathscr{A}}\psi\right)_{\mathscr{H}^0}=\lam(\psi)$ for all $\psi\in \mathscr{H}^1$.
\end{proof}
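In a finite-dimensional (Galerkin) subspace, the Riesz-representation step used twice in the proof above reduces to solving a symmetric positive-definite linear system. A minimal numerical sketch, with an arbitrary random Gram matrix and functional standing in for the spaces of the lemma:

```python
import numpy as np

# Finite-dimensional analogue of the Riesz representation step:
# given an inner product (u, v) = u^T G v with G symmetric positive
# definite (the Gram matrix of a basis) and a functional ell(v) = b^T v,
# the unique representer w with (w, v) = ell(v) for all v solves G w = b.
rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))
G = M @ M.T + 5.0 * np.eye(5)   # SPD Gram matrix (illustrative)
b = rng.standard_normal(5)      # the functional ell

w = np.linalg.solve(G, b)       # the unique representer
```

Uniqueness corresponds to the invertibility of $G$, i.e. to the positivity (coercivity) of the inner product, just as Korn's inequality provides coercivity on $\mathscr{X}$.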
In the next result, we establish the existence of strong solutions to \eqref{equ:SBC} and present some elliptic estimates.
\begin{lemma}\label{lem:S lower regularity}
Suppose that $\eta\in H^{k+\f12}(\Sigma)$ for $k\ge3$ is such that the mapping $\Phi$ defined in \eqref{map:phi} is a $C^1$ diffeomorphism of $\Om$ onto $\Om^\prime=\Phi(\Om)$. If $F^1\in H^0$, $F^2\in H^1$, $F^3\in H^0$, $F^4\in H^{\f12}(\Sigma)$ and $F^5\in H^{\f12}(\Sigma)$, then the problem \eqref{equ:SBC} admits a unique strong solution $(u, p, \theta)\in H^2(\Om)\times H^1(\Om)\times H^2(\Om)$, i.e. $(u, p, \theta)$ satisfies \eqref{equ:SBC} a.e. in $\Om$ and on $\Sigma$, $\Sigma_b$. Moreover, for $r=2, \ldots, k-1$, we have the estimate
\begin{equation}\label{ineq:lower elliptic}
\begin{aligned}
\|u\|_{H^r}+\|p\|_{H^{r-1}}+\|\theta\|_{H^r}
&\lesssim C(\eta)\Big(\|F^1\|_{H^{r-2}}+\|F^2\|_{H^{r-1}}+\|F^3\|_{H^{r-2}}\\
&\quad+\|F^4\|_{H^{r-\f32}(\Sigma)}+\|F^5\|_{H^{r-\f32}(\Sigma)}\Big),
\end{aligned}
\end{equation}
whenever the right-hand side is finite, where $C(\eta)$ is a constant depending on $\|\eta\|_{H^{k+\f12}(\Sigma)}$.
\end{lemma}
\begin{proof}
First, we consider the problem
\begin{eqnarray*}
\left\{
\begin{aligned}
-\Delta_{\mathscr{A}}\theta&=F^3 \quad \text{in}\quad \Om,\\
\nabla_{\mathscr{A}}\theta\cdot\mathscr{N}+\theta\left|\mathscr{N}\right|&=F^5 \quad \text{on}\quad \Sigma,\\
\theta&=0 \quad \text{on}\quad \Sigma_b.
\end{aligned}
\right.
\end{eqnarray*}
Since the coefficients of this problem are not constant, we transform it to a problem on $\Om^\prime=\Phi(\Om)$ by introducing the unknown $\Theta$ according to $\theta=\Theta\circ\Phi$. Then $\Theta$ should solve the usual problem on $\Om^\prime=\{-1\le y_3\le \eta(y_1, y_2)\}$ with upper boundary $\Sigma^\prime=\{y_3=\eta\}$:
\begin{eqnarray}\label{equ:theta}
\left\{
\begin{aligned}
-\Delta \Theta&=F^3\circ\Phi^{-1}=G^3 &\quad\text{in}\quad\Om^\prime,\\
\nabla \Theta\cdot\mathscr{N}+\Theta\left|\mathscr{N}\right|&= F^5\circ\Phi^{-1}=G^5&\quad\text{on}\quad\Sigma^\prime,\\
\Theta&=0 &\quad\text{on}\quad\Sigma^\prime_b.
\end{aligned}
\right.
\end{eqnarray}
Note that, according to Lemma \ref{lem:transport}, $G^3\in H^0(\Om^\prime)$ and $G^5\in H^{1/2}(\Sigma^\prime)$.
Then we may argue as in Lemma 2.8 of \cite{Beale1} and use Theorem 10.5 of \cite{ADN} to obtain that there exists a unique $\Theta\in H^2(\Om^\prime)$ solving problem \eqref{equ:theta}, with
\[
\|\Theta\|_{H^2(\Om^\prime)}\lesssim C(\eta)(\|G^3\|_{H^0(\Om^\prime)}+\|G^5\|_{H^{\f12}(\Sigma^\prime)}),
\]
where $C(\eta)$ is a constant depending on $\|\eta\|_{H^{k+\f12}}$.
For the $\mathscr{A}$-Stokes equations, we introduce the unknowns $v, q$ by $u=v\circ\Phi$ and $q=p\circ\Phi$. For the usual Stokes problem
\begin{eqnarray}\label{equ:Stokes}
\left\{
\begin{aligned}
\mathop{\rm div}\nolimits S(q, v)-\Theta e_3&=F^1\circ\Phi^{-1}=G^1& \quad\text{in}\quad\Om^\prime\\
\nabla\cdot v&=F^2\circ\Phi^{-1}=G^2&\quad\text{in}\quad\Om^\prime\\
S(q, v)\mathscr{N}&=F^4\circ\Phi^{-1}=G^4&\quad\text{on}\quad \Sigma^\prime\\
v&=0&\quad\text{on}\quad \Sigma^\prime_b,
\end{aligned}
\right.
\end{eqnarray}
we use the same argument as in the proof of Lemma 3.6 in \cite{GT1}, with $G^1+\Theta e_3$ in place of $G^1$. Then there exist unique $v\in H^2(\Om^\prime)$ and $q\in H^1(\Om^\prime)$ solving problem \eqref{equ:Stokes}, with
\[
\|v\|_{H^2(\Om^\prime)}+\|q\|_{H^1(\Om^\prime)}\lesssim C(\eta)\left(\|G^1\|_{H^0(\Om^\prime)}+\|G^2\|_{H^1(\Om^\prime)}+\|G^4\|_{H^{\f12}(\Sigma^\prime)}+\|\Theta\|_{H^0(\Om^\prime)}\right),
\]
where $C(\eta)$ is a constant depending on $\|\eta\|_{H^{k+\f12}}$.
Combining the two estimates above, we have that
\begin{equation}
\begin{aligned}
\|v\|_{H^2(\Om^\prime)}+\|q\|_{H^1(\Om^\prime)}+\|\Theta\|_{H^2(\Om^\prime)}&\lesssim
C(\eta)\Big(\|G^1\|_{H^0(\Om^\prime)}+\|G^2\|_{H^1(\Om^\prime)}\\
&\quad+\|G^3\|_{H^0(\Om^\prime)}+\|G^4\|_{H^{\f12}(\Sigma^\prime)}+\|G^5\|_{H^{\f12}(\Sigma^\prime)}\Big),
\end{aligned}
\end{equation}
where $C(\eta)$ is a constant depending on $\|\eta\|_{H^{k+\f12}}$.
Then we may argue as in Lemma 3.6 of \cite{GT1} to derive that, for $r=2, \ldots, k-1$,
\begin{equation}
\begin{aligned}
&\|v\|_{H^r(\Om^\prime)}+\|q\|_{H^{r-1}(\Om^\prime)}+\|\Theta\|_{H^r(\Om^\prime)}\\
&\lesssim
C(\eta)\Big(\|G^1\|_{H^{r-2}(\Om^\prime)}+\|G^2\|_{H^{r-1}(\Om^\prime)}+\|G^3\|_{H^{r-2}(\Om^\prime)}\\
&\quad+\|G^4\|_{H^{r-\f32}(\Sigma^\prime)}+\|G^5\|_{H^{r-\f32}(\Sigma^\prime)}\Big),
\end{aligned}
\end{equation}
where $C(\eta)$ is a constant depending on $\|\eta\|_{H^{k+\f12}}$.
Now we transform back to $\Om$ with $u=v\circ\Phi$, $p=q\circ\Phi$ and $\theta=\Theta\circ\Phi$. It is readily verified that $(u, p, \theta)$ is a strong solution of \eqref{equ:SBC}. According to Lemma \ref{lem:transport},
\begin{align*}
\|u\|_{H^r}+\|p\|_{H^{r-1}}+\|\theta\|_{H^r}
&\lesssim C(\eta)\Big(\|F^1\|_{H^{r-2}}+\|F^2\|_{H^{r-1}}+\|F^3\|_{H^{r-2}}\\
&\quad+\|F^4\|_{H^{r-\f32}(\Sigma)}+\|F^5\|_{H^{r-\f32}(\Sigma)}\Big),
\end{align*}
whenever the right-hand side is finite, where $C(\eta)$ is a constant depending on $\|\eta\|_{H^{k+\f12}(\Sigma)}$. This is the desired estimate.
\end{proof}
In the next lemma, we verify that the constant in \eqref{ineq:lower elliptic} can be taken to depend only on the initial free surface.
\begin{lemma}\label{lem:initial lower regularity}
Let $k\ge3$ be an integer and suppose that $\eta\in H^{k+\f12}(\Sigma)$ and $\eta_0\in H^{k+\f12}(\Sigma)$. Then there exists a positive number $\varepsilon_0<1$ such that if $\|\eta-\eta_0\|_{H^{k-\f32}}\le \varepsilon_0$, the solution to \eqref{equ:SBC} satisfies
\begin{equation}\label{ineq:initial lower elliptic}
\begin{aligned}
\|u\|_{H^r}+\|p\|_{H^{r-1}}+\|\theta\|_{H^r}
&\lesssim C(\eta_0)\Big(\|F^1\|_{H^{r-2}}+\|F^2\|_{H^{r-1}}+\|F^3\|_{H^{r-2}}\\
&\quad+\|F^4\|_{H^{r-\f32}(\Sigma)}+\|F^5\|_{H^{r-\f32}(\Sigma)}\Big),
\end{aligned}
\end{equation}
for $r=2, \ldots, k-1$, whenever the right-hand side is finite, where $C(\eta_0)$ is a constant depending on $\|\eta_0\|_{H^{k+\f12}}$.
\end{lemma}
\begin{proof}
Here we use the same idea as in Lemma 2.17 of \cite{LW}. We rewrite the equation \eqref{equ:SBC} with its coefficients determined by $\eta_0$, i.e. we view it as a perturbation of \eqref{equ:SBC} in terms of the initial data:
\begin{eqnarray}
\left\{
\begin{aligned}
\mathop{\rm div}\nolimits_{\mathscr{A}_0}S_{\mathscr{A}_0}(p, u)-\theta \nabla_{\mathscr{A}_0}y_{3,0}&=F^1+F^{1,0}& \quad\text{in}\quad\Om\\
\nabla_{\mathscr{A}_0}\cdot u&=F^2+F^{2,0}&\quad\text{in}\quad\Om\\
-\Delta_{\mathscr{A}_0}\theta&=F^3+F^{3,0}&\quad\text{in}\quad\Om\\
S_{\mathscr{A}_0}(p, u)\mathscr{N}_0&=F^4+F^{4,0}&\quad\text{on}\quad \Sigma\\
\nabla_{\mathscr{A}_0}\theta\cdot\mathscr{N}_0+\theta\left|\mathscr{N}_0\right|&=F^5+F^{5,0}&\quad\text{on}\quad \Sigma\\
u=0,\quad\theta&=0&\quad\text{on}\quad \Sigma_b\\
\end{aligned}
\right.
\end{eqnarray}
where
\begin{align*}
F^{1,0}&=\nabla_{\mathscr{A}_0-\mathscr{A}}\cdot S_{\mathscr{A}}(p,u)+\nabla_{\mathscr{A}_0}\cdot S_{\mathscr{A}_0-\mathscr{A}}(p, u)+\theta\nabla_{\mathscr{A}_0-\mathscr{A}}y_3+\theta\nabla_{\mathscr{A}_0}(y_{3,0}-y_3),\\
F^{2,0}&=\mathop{\rm div}\nolimits_{\mathscr{A}_0-\mathscr{A}}u,\\
F^{3,0}&=\nabla_{\mathscr{A}_0-\mathscr{A}}\cdot\nabla_{\mathscr{A}}\theta+\nabla_{\mathscr{A}_0}\cdot\nabla_{\mathscr{A}_0-\mathscr{A}}\theta,\\
F^{4,0}&=S_{\mathscr{A}_0}(p, u)(\mathscr{N}_0-\mathscr{N})+S_{\mathscr{A}_0-\mathscr{A}}(p, u)\mathscr{N},\\
F^{5,0}&=\nabla_{\mathscr{A}_0}\theta\cdot(\mathscr{N}_0-\mathscr{N})+\nabla_{\mathscr{A}_0-\mathscr{A}}\theta\cdot\mathscr{N}+\theta\left(\left|\mathscr{N}_0\right|-\left|\mathscr{N}\right|\right).
\end{align*}
Here $\mathscr{A}_0$, $\mathscr{N}_0$ and $y_{3,0}$ are the quantities $\mathscr{A}$, $\mathscr{N}$ and $y_{3}$ defined in terms of $\eta_0$. By the assumption, we know that $\eta-\eta_0\in H^{k+\f12}(\Sigma)$ and $\|\eta-\eta_0\|_{H^{k-\f32}(\Sigma)}^\ell\le \|\eta-\eta_0\|_{H^{k-\f32}(\Sigma)}<1$ for any positive integer $\ell$.
By a straightforward computation, we may derive that
\begin{align*}
&\|F^{1,0}\|_{H^{r-2}}\le C\left(1+\|\eta_0\|_{H^{k+\f12}}\right)^4\|\eta-\eta_0\|_{H^{k-\f32}}\left(\|u\|_{H^r}+\|p\|_{H^{r-1}}+\|\theta\|_{H^{r-2}}\right),\\
&\|F^{2,0}\|_{H^{r-1}}\le C\Big(1+\|\eta_0\|_{H^{k+\f12}}\Big)^2\|\eta-\eta_0\|_{H^{k-\f32}}\|u\|_{H^r},\\
&\|F^{3,0}\|_{H^{r-2}}\le C\Big(1+\|\eta_0\|_{H^{k+\f12}}\Big)^4\|\eta-\eta_0\|_{H^{k-\f32}}\|\theta\|_{H^r},\\
&\|F^{4,0}\|_{H^{r-\f32}(\Sigma)}\le C\Big(1+\|\eta_0\|_{H^{k+\f12}}\Big)^2\|\eta-\eta_0\|_{H^{k-\f32}}\left(\|u\|_{H^r}+\|p\|_{H^{r-1}}\right),\\
&\|F^{5,0}\|_{H^{r-\f32}(\Sigma)}\le C\Big(1+\|\eta_0\|_{H^{k+\f12}}\Big)^2\|\eta-\eta_0\|_{H^{k-\f32}}\|\theta\|_{H^r},
\end{align*}
for $r=2, \ldots, k-1$.
By Lemma \ref{lem:S lower regularity}, we have the estimate
\begin{align*}
&\|u\|_{H^r}+\|p\|_{H^{r-1}}+\|\theta\|_{H^r}\\
&\lesssim C(\eta_0)\Big(\|F^1+F^{1,0}\|_{H^{r-2}}+\|F^2+F^{2,0}\|_{H^{r-1}}+\|F^3+F^{3,0}\|_{H^{r-2}}\\
&\quad+\|F^4+F^{4,0}\|_{H^{r-\f32}(\Sigma)}+\|F^5+F^{5,0}\|_{H^{r-\f32}(\Sigma)}\Big),
\end{align*}
\end{align*}
where $C(\eta_0)$ is a constant depending on $\|\eta_0\|_{H^{k+\f12}}$. Combining the above estimates, we have
\begin{equation}
\begin{aligned}
&\|u\|_{H^r}+\|p\|_{H^{r-1}}+\|\theta\|_{H^r}\\
&\lesssim C(\eta_0)\Big(\|F^1\|_{H^{r-2}}+\|F^2\|_{H^{r-1}}+\|F^3\|_{H^{r-2}}+\|F^4\|_{H^{r-\f32}(\Sigma)}+\|F^5\|_{H^{r-\f32}(\Sigma)}\Big)\\
&\quad+C(\eta_0)\Big(1+\|\eta_0\|_{H^{k+\f12}}\Big)^4\|\eta-\eta_0\|_{H^{k-\f32}}\left(\|u\|_{H^r}+\|p\|_{H^{r-1}}+\|\theta\|_{H^r}\right),
\end{aligned}
\end{equation}
for $r=2, \ldots, k-1$.
Then, choosing $\|\eta-\eta_0\|_{H^{k-\f32}}$ small enough that the second term on the right-hand side of the above inequality is less than $\f12(\|u\|_{H^r}+\|p\|_{H^{r-1}}+\|\theta\|_{H^r})$, it can be absorbed into the left-hand side, and we have that
\begin{align*}
&\|u\|_{H^r}+\|p\|_{H^{r-1}}+\|\theta\|_{H^r}\\
&\lesssim C(\eta_0)\Big(\|F^1\|_{H^{r-2}}+\|F^2\|_{H^{r-1}}+\|F^3\|_{H^{r-2}}+\|F^4\|_{H^{r-\f32}(\Sigma)}+\|F^5\|_{H^{r-\f32}(\Sigma)}\Big),
\end{align*}
for $r=2, \ldots, k-1$.
\end{proof}
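The absorption step at the end of the proof is an instance of the following elementary principle, recorded here for clarity ($X$, $Z$, $\varepsilon$ are generic nonnegative quantities standing in for the norms above):

```latex
% If X >= 0 satisfies X <= Z + eps * X with 0 <= eps <= 1/2, then
% (1 - eps) X <= Z, and hence X <= 2 Z.
X \le Z + \varepsilon X
\;\Longrightarrow\;
(1-\varepsilon)X \le Z
\;\Longrightarrow\;
X \le \frac{Z}{1-\varepsilon} \le 2Z
\qquad\text{for } 0 \le \varepsilon \le \tfrac12.
```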
Notice that the estimate \eqref{ineq:initial lower elliptic} only reaches order $k-1$, which does not meet our requirement. In the next result, we gain two more orders of regularity via a bootstrap argument, following the idea of \cite{LW}.
\begin{proposition}\label{prop:high regulatrity}
Let $k\ge3$ be an integer. Suppose that $\eta\in H^{k+\f12}(\Sigma)$ and $\eta_0\in H^{k+\f12}(\Sigma)$ satisfy $\|\eta-\eta_0\|_{H^{k+\f12}(\Sigma)}\le\varepsilon_0$. Then the solution to \eqref{equ:SBC} satisfies
\begin{equation}
\begin{aligned}
&\|u\|_{H^r}+\|p\|_{H^{r-1}}+\|\theta\|_{H^r}\\
&\lesssim C(\eta_0)\Big(\|F^1\|_{H^{r-2}}+\|F^2\|_{H^{r-1}}+\|F^3\|_{H^{r-2}}+\|F^4\|_{H^{r-\f32}(\Sigma)}+\|F^5\|_{H^{r-\f32}(\Sigma)}\Big),
\end{aligned}
\end{equation}
for $r=2, \ldots, k+1$, whenever the right-hand side is finite, where $C(\eta_0)$ is a constant depending on $\|\eta_0\|_{H^{k+\f12}(\Sigma)}$.
\end{proposition}
\end{proposition}
\begin{proof}
Here we only consider the cases $r=k$ and $r=k+1$, since the conclusion has already been proved for $r\le k-1$. For $m\in\mathbb{N}$, we define $\eta^m$ by throwing away the high frequencies:
\begin{equation*}
{\hat{\eta}}^m(n)=\left\{
\begin{aligned}
&\hat{\eta}(n),\quad &\text{for}\quad |n|\le m-1,\\
&0, \quad &\text{for}\quad |n|\ge m.
\end{aligned}
\right.
\end{equation*} Then for each $m$, $\eta^m\in H^j(\Sigma)$ for arbitrary $j\ge0$ and $\eta^m\to \eta$ in $H^{k+\f12}(\Sigma)$ as $m\to\infty$.
We consider the problem \eqref{equ:SBC} with $\mathscr{A}$ and $\mathscr{N}$ replaced by $\mathscr{A}^m$ and $\mathscr{N}^m$, and $y_3$ replaced by $y_3^m$. Since $\eta^m\in H^{j}(\Sigma)$ for every $j\ge0$, we may apply Lemma \ref{lem:S lower regularity} to deduce that there exists a unique $(u^m, p^m, \theta^m)$ which solves
\begin{eqnarray}
\left\{
\begin{aligned}
\mathop{\rm div}\nolimits_{\mathscr{A}^m}S_{\mathscr{A}^m}(p^m, u^m)-\theta^m \nabla_{\mathscr{A}^m}y_3^m&=F^1 \quad\text{in}\quad\Om\\
\mathop{\rm div}\nolimits_{\mathscr{A}^m}u^m&=F^2\quad\text{in}\quad\Om\\
-\Delta_{\mathscr{A}^m}\theta^m&=F^3\quad\text{in}\quad\Om\\
S_{\mathscr{A}^m}(p^m, u^m)\mathscr{N}^m&=F^4\quad\text{on}\quad \Sigma\\
\nabla_{\mathscr{A}^m}\theta^m\cdot\mathscr{N}^m+\theta^m\left|\mathscr{N}^m\right|&=F^5\quad\text{on}\quad \Sigma\\
u^m=0,\quad \theta^m&=0\quad\text{on}\quad \Sigma_b\\
\end{aligned}
\right.
\end{eqnarray}
and satisfies
\begin{align*}
\|u^m\|_{H^r}+\|p^m\|_{H^{r-1}}+\|\theta^m\|_{H^r}
&\lesssim C(\|\eta^m\|_{H^{k+\f52}})\Big(\|F^1\|_{H^{r-2}}+\|F^2\|_{H^{r-1}}+\|F^3\|_{H^{r-2}}\\
&\quad+\|F^4\|_{H^{r-\f32}(\Sigma)}+\|F^5\|_{H^{r-\f32}(\Sigma)}\Big)
\end{align*}
for $r=2, \ldots, k+1$.
In the following, we will prove that the constant $C(\|\eta^m\|_{H^{k+\f52}})$ can be replaced by one depending only on $\|\eta^m\|_{H^{k+\f12}}$.
For convenience, we define
\[
\mathscr{Z}=C(\eta_0)P(\eta^m)\Big(\|F^1\|_{H^{r-2}}^2+\|F^2\|_{H^{r-1}}^2
+\|F^3\|_{H^{r-2}}^2+\|F^4\|_{H^{r-\f32}(\Sigma)}^2+\|F^5\|_{H^{r-\f32}(\Sigma)}^2\Big),
\]
where $C(\eta_0)$ is a constant depending on $\|\eta_0\|_{H^{k+\f12}}$ and $P(\eta^m)$ is a polynomial in $\|\eta^m\|_{H^{k+\f12}}$. Then, after the same computation as in the proof of Proposition 2.18 in \cite{LW}, with the only modification that $F$ is replaced by $F^1+\theta^m \nabla_{\mathscr{A}^m}y_3^m$, we have
\[
\|u^m\|_{H^r}+\|p^m\|_{H^{r-1}}\lesssim\mathscr{Z},
\]
for $r=2, \ldots, k+1$. This is because in the above estimate we only need to control the terms $\|\theta^m\|_{H^r}$ for $r=2, \ldots, k-1$, and $\|\theta^m\|_{H^r}\lesssim\mathscr{Z}$ is assured by Lemma \ref{lem:initial lower regularity}.
Then we consider the temperature $\theta^m$. In the bootstrap argument below, we abuse notation and write $\theta$ for $\theta^m$, and similarly $\eta$, $\mathscr{A}$, $\mathscr{N}$ for $\eta^m$, $\mathscr{A}^m$, $\mathscr{N}^m$. We write the equation for $\theta$ explicitly as
\begin{eqnarray}\label{equ:equ T}
\begin{aligned}
&\pa_{11}\theta+\pa_{22}\theta+(1+A^2+B^2)K^2\pa_{33}\theta-2AK\pa_{13}\theta-2BK\pa_{23}\theta\\
&\quad+(AK\pa_3(AK)+BK\pa_3(BK)-\pa_1(AK)-\pa_2(BK)+K\pa_3K)\pa_3\theta=-F^3.
\end{aligned}
\end{eqnarray}
\begin{enumerate}[step 1]
\item $r=k$ case.
By Lemma \ref{lem:initial lower regularity},
\[
\|\theta\|_{H^{k-1}}^2\lesssim C(\eta_0)\Big(\|F^3\|_{H^{k-3}}^2+\|F^5\|_{H^{k-\f52}(\Sigma)}^2\Big)\lesssim\mathscr{Z},
\]
where the constant $C(\eta_0)$ depends only on $\|\eta_0\|_{H^{k+\f12}}$.
For $i=1,2$, since $\pa_i \theta$ satisfies the equation
\begin{eqnarray*}
\left\{
\begin{aligned}
-\Delta_{\mathscr{A}}\pa_i \theta&=\bar{F}^3 \quad \text{in}\quad \Om,\\
\nabla_{\mathscr{A}}\pa_i \theta\cdot\mathscr{N}+\pa_i \theta\left|\mathscr{N}\right|&=\bar{F}^5 \quad \text{on}\quad \Sigma,\\
\pa_i \theta&=0 \quad \text{on}\quad \Sigma_b,
\end{aligned}
\right.
\end{eqnarray*}
where
\begin{align*}
\bar{F}^3&=\pa_i F^3+\mathop{\rm div}\nolimits_{\pa_i\mathscr{A}}\nabla_{\mathscr{A}}\theta+\mathop{\rm div}\nolimits_{\mathscr{A}}\nabla_{\pa_i\mathscr{A}}\theta,\\
\bar{F}^5&=\pa_i F^5-\nabla_{\pa_i\mathscr{A}}\theta\cdot\mathscr{N}-\nabla_{\mathscr{A}}\theta\cdot\pa_i\mathscr{N}-\theta\pa_i\left|\mathscr{N}\right|.
\end{align*}
Applying Lemmas A.1--A.2 in \cite{GT1}, we have
\begin{align*}
&\|\bar{F}^3\|_{H^{k-3}}^2+\|\bar{F}^5\|_{H^{k-\f52}(\Sigma)}^2\\
&\lesssim \|F^3\|_{H^{k-2}}^2+\|F^5\|_{H^{k-\f32}(\Sigma)}^2+P(\eta)\|\theta\|_{H^{k-1}}^2\\
&\lesssim\mathscr{Z}.
\end{align*}
Employing the $(k-1)$-th order elliptic estimate, we have
\[
\|\pa_i \theta\|_{H^{k-1}}^2\lesssim C(\eta_0)\Big(\|\bar{F}^3\|_{H^{k-3}}^2+\|\bar{F}^5\|_{H^{k-\f52}(\Sigma)}^2\Big)\lesssim\mathscr{Z}.
\]
Then, taking the derivative $\pa_3^{k-2}$ on both sides of \eqref{equ:equ T} and solving for the term $(1+A^2+B^2)K^2\pa_3^k\theta$, the estimates of all the other terms in the $H^0$-norm imply that
\[
\|\pa_3^k\theta\|_{H^0}^2\lesssim\mathscr{Z}.
\]
Thus, we have proved that
\[
\|\theta\|_{H^k}^2\lesssim\mathscr{Z}.
\]
\item $r=k+1$ case.
For $i,j=1,2$, since $\pa_{ij}\theta$ satisfies the equation
\begin{eqnarray*}
\left\{
\begin{aligned}
-\Delta_{\mathscr{A}}\pa_{ij} \theta&=\tilde{F}^3 \quad \text{in}\quad \Om,\\
\nabla_{\mathscr{A}}\pa_{ij} \theta\cdot\mathscr{N}+\pa_{ij} \theta\left|\mathscr{N}\right|&=\tilde{F}^5 \quad \text{on}\quad \Sigma,\\
\pa_{ij} \theta&=0 \quad \text{on}\quad \Sigma_b,
\end{aligned}
\right.
\end{eqnarray*}
where
\begin{align*}
\tilde{F}^3&=\pa_{ij}F^3+\mathop{\rm div}\nolimits_{\pa_{ij}\mathscr{A}}\nabla_{\mathscr{A}}\theta
+\mathop{\rm div}\nolimits_{\mathscr{A}}\nabla_{\pa_{ij}\mathscr{A}}\theta+\mathop{\rm div}\nolimits_{\pa_i\mathscr{A}}\nabla_{\pa_j\mathscr{A}}\theta
+\mathop{\rm div}\nolimits_{\pa_j\mathscr{A}}\nabla_{\pa_i\mathscr{A}}\theta\\
&\quad+\mathop{\rm div}\nolimits_{\pa_i\mathscr{A}}\nabla_{\mathscr{A}}\pa_j \theta+\mathop{\rm div}\nolimits_{\pa_j\mathscr{A}}\nabla_{\mathscr{A}}\pa_i \theta+\mathop{\rm div}\nolimits_{\mathscr{A}}\nabla_{\pa_i\mathscr{A}}\pa_j \theta+\mathop{\rm div}\nolimits_{\mathscr{A}}\nabla_{\pa_j\mathscr{A}}\pa_i \theta,\\
\tilde{F}^5&=\pa_{ij}F^5-\nabla_{\mathscr{A}} \theta\cdot\pa_{ij}\mathscr{N}-(\nabla_{\pa_i\mathscr{A}} \theta+\nabla_{\mathscr{A}}\pa_i\theta)\cdot\pa_{j}\mathscr{N}-(\nabla_{\pa_j\mathscr{A}} \theta+\nabla_{\mathscr{A}}\pa_j\theta)\cdot\pa_{i}\mathscr{N}\\
&\quad-(\nabla_{\pa_{ij}\mathscr{A}}\theta+\nabla_{\pa_i\mathscr{A}}\pa_j\theta+\nabla_{\pa_j\mathscr{A}}\pa_i\theta)\cdot\mathscr{N}
-\theta\pa_{ij}\left|\mathscr{N}\right|-\pa_i\theta\pa_j\left|\mathscr{N}\right|-\pa_j\theta\pa_i\left|\mathscr{N}\right|.
\end{align*}
Applying Lemmas A.1--A.2 in \cite{GT1} to the forcing terms, we have
\begin{align*}
&\|\tilde{F}^3\|_{H^{k-3}}^2+\|\tilde{F}^5\|_{H^{k-\f52}(\Sigma)}^2\\
&\lesssim \|F^3\|_{H^{k-1}}^2+\|F^5\|_{H^{k-\f12}(\Sigma)}^2+P(\eta)\|\theta\|_{H^k}^2\\
&\lesssim\mathscr{Z}.
\end{align*}
Then Lemma \ref{lem:initial lower regularity} implies that
\[
\|\pa_{ij}\theta\|_{H^{k-1}}^2\lesssim C(\eta_0)\left(\|\tilde{F}^3\|_{H^{k-3}}^2+\|\tilde{F}^5\|_{H^{k-\f52}(\Sigma)}^2\right)
\lesssim\mathscr{Z}.
\]
Since we have proved the case $r=k$, we take the derivative $\pa_3^{k-2}\pa_i$ on both sides of \eqref{equ:equ T} for $i=1,2$ and solve for the term $(1+A^2+B^2)K^2\pa_3^k\pa_i\theta$. Utilizing the estimates of all the other terms in the $H^0$-norm, we have
\[
\|\pa_3^k\pa_i\theta\|_{H^0}^2\lesssim\mathscr{Z}.
\]
Then, taking the derivative $\pa_3^{k-1}$ on both sides of \eqref{equ:equ T} and solving for the term $(1+A^2+B^2)K^2\pa_3^{k+1}\theta$, by all the estimates above we have
\[
\|\pa_3^{k+1}\theta\|_{H^0}^2\lesssim\mathscr{Z}.
\]
Therefore, we have proved
\[
\|\theta\|_{H^{k+1}}^2\lesssim\mathscr{Z}.
\]
\end{enumerate}
Now we go back to the original notation. According to the convergence of $\eta^m$, we have
\begin{eqnarray}\label{ineq:bound}
\begin{aligned}
&\|u^m\|_{H^r}^2+\|p^m\|_{H^{r-1}}^2+\|\theta^m\|_{H^r}^2\\
&\lesssim C(\eta_0)P(\eta^m)\Big(\|F^1\|_{H^{r-2}}^2+\|F^2\|_{H^{r-1}}^2
+\|F^3\|_{H^{r-2}}^2+\|F^4\|_{H^{r-\f32}(\Sigma)}^2+\|F^5\|_{H^{r-\f32}(\Sigma)}^2\Big)\\
&\lesssim C(\eta_0)P(\eta)\Big(\|F^1\|_{H^{r-2}}^2+\|F^2\|_{H^{r-1}}^2
+\|F^3\|_{H^{r-2}}^2+\|F^4\|_{H^{r-\f32}(\Sigma)}^2+\|F^5\|_{H^{r-\f32}(\Sigma)}^2\Big)\\
&\lesssim C(\eta_0)\Big(\|F^1\|_{H^{r-2}}^2+\|F^2\|_{H^{r-1}}^2
+\|F^3\|_{H^{r-2}}^2+\|F^4\|_{H^{r-\f32}(\Sigma)}^2+\|F^5\|_{H^{r-\f32}(\Sigma)}^2\Big),
\end{aligned}
\end{eqnarray}
for $r=2, \ldots, k+1$, where in the last inequality we have used the assumption $\|\eta-\eta_0\|_{H^{k+\f12}}\le\varepsilon_0$, so that the factor $P(\eta)$ is absorbed into $C(\eta_0)$. Here $C(\eta_0)$ depends only on $\|\eta_0\|_{H^{k+\f12}}$.
The uniform bound \eqref{ineq:bound} implies that the sequence $\{(u^m, p^m, \theta^m)\}$ is uniformly bounded in $H^r\times H^{r-1}\times H^r$, so we can extract a weakly convergent subsequence, still denoted by $\{(u^m, p^m, \theta^m)\}$. That is, $u^m\rightharpoonup u^0$ in $H^r(\Om)$, $p^m\rightharpoonup p^0$ in $H^{r-1}(\Om)$ and $\theta^m\rightharpoonup \theta^0$ in $H^r(\Om)$. Since $\eta^m\rightarrow\eta$ in $H^{k+\f12}(\Sigma)$, we also have that $\mathscr{A}^m\to\mathscr{A}$, $J^m\to J$ in $H^k(\Om)$, and $\mathscr{N}^m\to\mathscr{N}$ in $H^{k-\f12}(\Sigma)$.
After multiplying the equation $\mathop{\rm div}\nolimits_{\mathscr{A}^m}u^m=F^2$ by $wJ^m$ for $w\in C_c^\infty(\Om)$ and integrating by parts, we see that
\begin{align*}
\int_{\Om}F^2wJ^m=\int_{\Om}\mathop{\rm div}\nolimits_{\mathscr{A}^m}(u^m)wJ^m&=-\int_{\Om}u^m\cdot\nabla_{\mathscr{A}^m}wJ^m\\
&\to-\int_{\Om}u^0\cdot\nabla_{\mathscr{A}}wJ=\int_{\Om}\mathop{\rm div}\nolimits_{\mathscr{A}}(u^0)wJ,
\end{align*}
from which we deduce that $\mathop{\rm div}\nolimits_{\mathscr{A}}u^0=F^2$.
Then multiplying the third equation in \eqref{equ:SBC} by $wJ^m$ for $w\in{}_0H^1(\Om)$ and integrating by parts, we have that
\[
\int_{\Om}\nabla_{\mathscr{A}^m}\theta^m\cdot\nabla_{\mathscr{A}^m}wJ^m+\int_{\Sigma}\theta^mw\left|\mathscr{N}^m\right|=\int_{\Om}F^3wJ^m+\int_{\Sigma}F^5w,
\]
which, by passing to the limit $m\to\infty$, reveals that
\[
\int_{\Om}\nabla_{\mathscr{A}}\theta^0\cdot\nabla_{\mathscr{A}}wJ+\int_{\Sigma}\theta^0w\left|\mathscr{N}\right|=\int_{\Om}F^3wJ+\int_{\Sigma}F^5w.
\]
Finally we multiply the first equation in \eqref{equ:SBC} by $wJ^m$ for $w\in{}_0H^1(\Om)$ and integrate by parts to see that
\[
\int_{\Om}\f12\mathbb{D}_{\mathscr{A}^m}u^m:\mathbb{D}_{\mathscr{A}^m}wJ^m-p^mJ^m-\theta^m\nabla_{\mathscr{A}^m}y_3^m\cdot wJ^m=\int_{\Om}F^1\cdot wJ^m-\int_{\Sigma}F^4\cdot w.
\]
Passing to the limit $m\to\infty$, we deduce that
\[
-\int_{\Om}\f12\mathbb{D}_{\mathscr{A}}u^0:\mathbb{D}_{\mathscr{A}} wJ+p^0\mathop{\rm div}\nolimits_{\mathscr{A}}(w)J-\theta^0\nabla_{\mathscr{A}}y_3\cdot wJ=\int_{\Om}F^1\cdot wJ-\int_{\Sigma}F^4\cdot w.
\]
After integrating by parts again, we deduce that $(u^0, p^0, \theta^0)$ satisfies \eqref{equ:SBC}. Since $(u, p, \theta)$ is the unique solution to \eqref{equ:SBC}, we have that $u=u^0$, $p=p^0$ and $\theta=\theta^0$. Then, according to the weak lower semicontinuity and the uniform boundedness of \eqref{ineq:bound}, we have that
\begin{align*}
&\|u\|_{H^r}+\|p\|_{H^{r-1}}+\|\theta\|_{H^r}\\
&\lesssim C(\eta_0)\Big(\|F^1\|_{H^{r-2}}+\|F^2\|_{H^{r-1}}+\|F^3\|_{H^{r-2}}+\|F^4\|_{H^{r-\f32}(\Sigma)}+\|F^5\|_{H^{r-\f32}(\Sigma)}\Big),
\end{align*}
for $r=2, \ldots, k+1$, where $C(\eta_0)$ is a constant depending on $\|\eta_0\|_{H^{k+\f12}(\Sigma)}$.
\end{proof}
\subsection{The $\mathscr{A}$-Poisson problem}
Now we consider the elliptic problem
\begin{eqnarray}\label{equ:poisson}
\left\{
\begin{aligned}
&\Delta_{\mathscr{A}}p=f^1\quad&\text{in}\thinspace\Om,\\
&p=f^2\quad&\text{on}\thinspace\Sigma,\\
&\nabla_{\mathscr{A}}p\cdot\nu=f^3\quad&\text{on}\thinspace\Sigma_b,
\end{aligned}
\right.
\end{eqnarray}
where $\nu$ is the outward-pointing normal on $\Sigma_b$. The elliptic estimates for \eqref{equ:poisson} have been detailed in \cite{GT1} and \cite{LW}, so we omit them here.
\section{Linear estimates}
Now we study the problem \eqref{equ:linear BC}, following the path of \cite{GT1}. First, we will employ two notions of solution: weak and strong.
\subsection{The weak solution}
Suppose that a smooth solution to \eqref{equ:linear BC} exists. Integrating by parts over $\Om$, and in time from $0$ to $T$, we see that
\begin{eqnarray}
\begin{aligned}
\left(\pa_tu, \psi\right)_{L^2\mathscr{H}^0}+\f12\left(u, \psi\right)_{L^2\mathscr{H}^1}-\left(p, \mathop{\rm div}\nolimits_{\mathscr{A}}\psi\right)_{L^2\mathscr{H}^0}-\left(\theta \nabla_{\mathscr{A}}y_3,\psi\right)_{L^2\mathscr{H}^0}\\
=\left(F^1, \psi\right)_{L^2\mathscr{H}^0}
-\left(F^4, \psi\right)_{L^2H^0(\Sigma)},\\
\left(\pa_t\theta, \phi\right)_{L^2\mathscr{H}^0}+\left(\nabla_{\mathscr{A}}\theta, \nabla_{\mathscr{A}}\phi\right)_{L^2\mathscr{H}^0}+\left(\theta\left|\mathscr{N}\right|, \phi\right)_{L^2H^0(\Sigma)}\\
=\left(F^3, \phi\right)_{L^2\mathscr{H}^0}
+\left(F^5, \phi\right)_{L^2H^0(\Sigma)},
\end{aligned}
\end{eqnarray}
for $\phi$, $\psi\in\mathscr{H}^1_T$.
If we were to restrict the test function $\psi$ to $\psi\in\mathscr{X}$, the term $\left(p, \mathop{\rm div}\nolimits_{\mathscr{A}}\psi\right)_{L^2\mathscr{H}^0}$ would vanish. Then we have the pressureless weak formulation:
\begin{eqnarray}
\begin{aligned}
\left(\pa_tu, \psi\right)_{L^2\mathscr{H}^0}+\f12\left(u, \psi\right)_{L^2\mathscr{H}^1}-\left(\theta \nabla_{\mathscr{A}}y_3,\psi\right)_{L^2\mathscr{H}^0}\\
=\left(F^1, \psi\right)_{L^2\mathscr{H}^0}
-\left(F^4, \psi\right)_{L^2H^0(\Sigma)},\\
\left(\pa_t\theta, \phi\right)_{L^2\mathscr{H}^0}+\left(\nabla_{\mathscr{A}}\theta, \nabla_{\mathscr{A}}\phi\right)_{L^2\mathscr{H}^0}+\left(\theta\left|\mathscr{N}\right|, \phi\right)_{L^2H^0(\Sigma)}\\
=\left(F^3, \phi\right)_{L^2\mathscr{H}^0}
+\left(F^5, \phi\right)_{L^2H^0(\Sigma)}.
\end{aligned}
\end{eqnarray}
This leads us to define a weak solution without pressure.
\begin{definition}
Suppose that $u_0\in \mathscr{Y}(0)$, $\theta_0\in H^0(\Om)$, $F^1-F^4\in (\mathscr{X}_T)^\ast$ and $F^3+F^5\in(\mathscr{H}^1_T)^\ast$. If there exists a pair $(u, \theta)$ achieving the initial data $u_0$, $\theta_0$ and satisfying $u\in\mathscr{H}^1_T$, $\theta\in\mathscr{H}^1_T$, $\pa_tu\in (\mathscr{X}_T)^\ast$ and $\pa_t \theta\in (\mathscr{H}^1_T)^\ast$, such that
\begin{eqnarray}\label{equ:lpws}
\begin{aligned}
&\left<\pa_tu, \psi\right>_{(\mathscr{X}_T)^\ast}+\f12\left(u, \psi\right)_{L^2\mathscr{H}^1}-\left(\theta \nabla_{\mathscr{A}}y_3,\psi\right)_{L^2\mathscr{H}^0}=\left(F^1-F^4, \psi\right)_{(\mathscr{X}_T)^\ast},\\
&\left<\pa_t\theta, \phi\right>_{(\mathscr{H}^1_T)^\ast}+\left(\theta,\phi\right)_{L^2\mathscr{H}^1}+\left(\theta\left|\mathscr{N}\right|, \phi\right)_{L^2H^0(\Sigma)}=\left(F^3+F^5, \phi\right)_{(\mathscr{H}^1_T)^\ast},
\end{aligned}
\end{eqnarray}
holds for any $\psi\in\mathscr{X}_T$ and $\phi\in\mathscr{H}^1_T$, we call the pair $(u, \theta)$ a pressureless weak solution.
\end{definition}
Since our aim is to construct solutions with high regularity to \eqref{equ:linear BC}, we will directly construct strong solutions to \eqref{equ:lpws}. It is easy to see that weak solutions arise as a byproduct of the construction of strong solutions to \eqref{equ:linear BC}; hence, we will not study the existence of weak solutions separately.
Now we derive some properties and uniqueness of weak solutions.
\begin{lemma}
Suppose that $u$, $\theta$ are weak solutions of \eqref{equ:lpws}. Then, for almost every $t\in [0,T]$,
\begin{eqnarray}\label{eq:integral}
\begin{aligned}
\f12\|u(t)\|_{\mathscr{H}^0(t)}^2+\f12\int_0^t\|u(s)\|_{\mathscr{H}^1(s)}^2\,\mathrm{d}s=\f12\|u(0)\|_{\mathscr{H}^0(0)}^2+\left(F^1-F^4,u\right)_{(\mathscr{X}_t)^\ast}\\
+\f12\int_0^t\int_{\Om}|u(s)|^2\pa_sJ(s)\,\mathrm{d}s+\int_0^t\int_{\Om}\theta(s)\nabla_{\mathscr{A}}y_3\cdot u(s)\,\mathrm{d}s,\\
\f12\|\theta(t)\|_{\mathscr{H}^0(t)}^2+\int_0^t\|\theta(s)\|_{\mathscr{H}^1(s)}^2\,\mathrm{d}s+\int_0^t\int_{\Sigma}|\theta(s)|^2\left|\mathscr{N}\right|\,\mathrm{d}s=\f12\|\theta(0)\|_{\mathscr{H}^0(0)}^2\\
+\left(F^3+F^5,\theta\right)_{(\mathscr{H}^1_t)^\ast}+\f12\int_0^t\int_{\Om}|\theta(s)|^2\pa_sJ(s)\,\mathrm{d}s.
\end{aligned}
\end{eqnarray}
Also,
\begin{equation}\label{est:weak theta}
\sup_{0\le t\le T}\|\theta(t)\|_{\mathscr{H}^0(t)}^2+\|\theta\|_{\mathscr{H}^1_T}^2\lesssim \exp\left(C_0(\eta)T\right)\left(\|\theta(0)\|_{\mathscr{H}^0(0)}^2+\|F^3+F^5\|_{(\mathscr{H}^1_T)^\ast}^2\right),
\end{equation}
\begin{eqnarray}\label{est:weak u}
\begin{aligned}
\sup_{0\le t\le T}\|u(t)\|_{\mathscr{H}^0(t)}^2+\|u\|_{\mathscr{H}^1_T}^2\lesssim \exp\left(CC_0(\eta)T\right)\Big(\|u(0)\|_{\mathscr{H}^0(0)}^2+\|\theta(0)\|_{\mathscr{H}^0(0)}^2\\
+\|F^1-F^4\|_{(\mathscr{X}_T)^\ast}^2+\|F^3+F^5\|_{(\mathscr{H}^1_T)^\ast}^2\Big),
\end{aligned}
\end{eqnarray}
where $C_0(\eta):=\max\{\sup_{0\le t\le T}\|\pa_tJK\|_{L^\infty}, \sup_{0\le t\le T}\|\nabla_{\mathscr{A}}y_3\|_{L^\infty}\}$.
\end{lemma}
\begin{proof}
The identity \eqref{eq:integral} follows directly from Lemma 2.4 in \cite{GT1} and \eqref{equ:lpws} by using the test functions $\psi=u\chi_{[0,t]}\in \mathscr{X}_T$ and $\phi=\theta\chi_{[0,t]}\in \mathscr{H}^1_T$, where $\chi_{[0,t]}$ is the temporal indicator function of the interval $[0,t]$.
From \eqref{eq:integral}, we can directly derive the inequalities
\begin{equation}\label{est:weak theta1}
\f12\|\theta(t)\|_{\mathscr{H}^0(t)}^2+\|\theta\|_{\mathscr{H}^1_t}^2\le \f12\|\theta(0)\|_{\mathscr{H}^0(0)}^2+\|F^3+F^5\|_{(\mathscr{H}^1_t)^\ast}\|\theta\|_{\mathscr{H}^1_t}+\f12C_0(\eta)\|\theta\|_{\mathscr{H}^0_t}^2,
\end{equation}
\begin{eqnarray}\label{est:weak u1}
\begin{aligned}
\f12\|u(t)\|_{\mathscr{H}^0(t)}^2+\f12\|u\|_{\mathscr{H}^1_t}^2\le \f12\|u(0)\|_{\mathscr{H}^0(0)}^2+\|F^1-F^4\|_{(\mathscr{X}_t)^\ast}\|u\|_{\mathscr{H}^1_t}\\
+\f12C_0(\eta)\|u\|_{\mathscr{H}^0_t}^2+CC_0(\eta)\|\theta\|_{\mathscr{H}^1_t}\|u\|_{\mathscr{H}^1_t},
\end{aligned}
\end{eqnarray}
where, for \eqref{est:weak u1}, we have used the Poincar\'e inequality in Lemma A.14 of \cite{GT1}, and
\[
\|u\|_{\mathscr{H}^k_t}^2=\int_0^t\|u(s)\|_{\mathscr{H}^k(s)}^2\,\mathrm{d}s\quad \text{for}\thinspace k=0,1,
\]
and similarly for $\|\theta\|_{\mathscr{H}^k_t}^2$, $\|F^1-F^4\|_{(\mathscr{X}_t)^\ast}$, $\|F^3+F^5\|_{(\mathscr{H}^1_t)^\ast}$. Inequalities \eqref{est:weak theta1}, \eqref{est:weak u1} and Cauchy's inequality imply that
\begin{eqnarray}\label{ineq:integral}
\begin{aligned}
\f12\|\theta(t)\|_{\mathscr{H}^0(t)}^2+\f34\|\theta\|_{\mathscr{H}^1_t}^2\le \f12\|\theta(0)\|_{\mathscr{H}^0(0)}^2+\|F^3+F^5\|_{(\mathscr{H}^1_t)^\ast}^2+\f12C_0(\eta)\|\theta\|_{\mathscr{H}^0_t}^2,\\
\f12\|u(t)\|_{\mathscr{H}^0(t)}^2+\f18\|u\|_{\mathscr{H}^1_t}^2\le \f12\|u(0)\|_{\mathscr{H}^0(0)}^2+\|F^1-F^4\|_{(\mathscr{X}_t)^\ast}^2
+\f12C_0(\eta)\|u\|_{\mathscr{H}^0_t}^2\\
+CC_0(\eta)\|\theta\|_{\mathscr{H}^1_t}^2.
\end{aligned}
\end{eqnarray}
Then \eqref{est:weak theta} and \eqref{est:weak u} follow from the integral inequality \eqref{ineq:integral} and Gronwall's lemma.
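For the reader's convenience, the Gronwall step for the first inequality in \eqref{ineq:integral} may be sketched as follows; here $y(t):=\|\theta(t)\|_{\mathscr{H}^0(t)}^2$ and $a$ is shorthand, introduced only for this remark, for the data terms $\f12\|\theta(0)\|_{\mathscr{H}^0(0)}^2+\|F^3+F^5\|_{(\mathscr{H}^1_t)^\ast}^2$. Dropping the dissipation term gives
\[
\f12 y(t)\le a+\f12C_0(\eta)\int_0^t y(s)\,\mathrm{d}s
\quad\Longrightarrow\quad
y(t)\le 2a\exp\left(C_0(\eta)t\right),
\]
after which the dissipation term $\f34\|\theta\|_{\mathscr{H}^1_t}^2$ is bounded by inserting this pointwise bound back into \eqref{ineq:integral}; the estimate for $u$ is obtained in the same way.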
\end{proof}
\begin{proposition}
Weak solutions to \eqref{equ:lpws} are unique.
\end{proposition}
\begin{proof}
Suppose that $(u^1,\theta^1)$ and $(u^2,\theta^2)$ are both weak solutions to \eqref{equ:lpws}. Then $(w,\vartheta)$, defined by $w=u^1-u^2$ and $\vartheta=\theta^1-\theta^2$, is a weak solution with $F^1-F^4=0$, $F^3+F^5=0$, $w(0)=u^1(0)-u^2(0)=0$ and $\vartheta(0)=\theta^1(0)-\theta^2(0)=0$. Then the bounds \eqref{est:weak theta} and \eqref{est:weak u} imply that $w=0$ and $\vartheta=0$. Hence, weak solutions to \eqref{equ:lpws} are unique.
\end{proof}
\subsection{The strong solution}
Before we define the strong solution, we need to define an operator $D_t$ as
\begin{equation}\label{def:Dt}
D_tu:=\pa_tu-Ru\quad \text{for}\quad R:=\pa_tMM^{-1},
\end{equation}
with $M=K\nabla\Phi$, where $K$ and $\Phi$ are as defined in \eqref{map:phi} and \eqref{equ:components}. It is easy to see that $D_t$ preserves the $\mathop{\rm div}\nolimits_{\mathscr{A}}$-free condition, since
\[
J\mathop{\rm div}\nolimits_{\mathscr{A}}(D_tv)=J\mathop{\rm div}\nolimits_{\mathscr{A}}(M\pa_t(M^{-1}v))=\mathop{\rm div}\nolimits(\pa_t(M^{-1}v))=\pa_t\mathop{\rm div}\nolimits(M^{-1}v)=\pa_t(J\mathop{\rm div}\nolimits_{\mathscr{A}}v),
\]
where the equality $J\mathop{\rm div}\nolimits_{\mathscr{A}}v=\mathop{\rm div}\nolimits(M^{-1}v)$ can be found on page 299 of \cite{GT1}.
\begin{definition}\label{def:strong solution}
Suppose that the forcing functions satisfy
\begin{eqnarray}\label{cond:force}
\begin{aligned}
F^1&\in L^2([0, T]; H^1(\Om))\cap C^0([0, T]; H^0(\Om)),\\
F^3&\in L^2([0, T]; H^1(\Om))\cap C^0([0, T]; H^0(\Om)),\\
F^4&\in L^2([0, T]; H^{\f32}(\Sigma))\cap C^0([0, T]; H^{\f12}(\Sigma)),\\
\pa_t(F^1-F^4)&\in L^2([0, T]; ({}_0H^1(\Om))^\ast), \quad\pa_t(F^3+F^5)\in L^2([0, T]; ({}_0H^1(\Om))^\ast).
\end{aligned}
\end{eqnarray}
We also assume that $u_0\in H^2\cap \mathscr{X}(0)$ and $\theta_0\in H^2\cap\mathscr{H}^1(0)$. If there exists a triple $(u, p, \theta)$ achieving the initial data $u_0$, $\theta_0$ and satisfying
\begin{eqnarray}\label{equ:strong solution}
\begin{aligned}
&u\in L^2([0, T]; H^3)\cap C^0([0,T];H^2)\cap \mathscr{X}_T, \quad &&\pa_tu\in L^2([0, T]; H^1)\cap C^0([0,T];H^0),\\
&D_tu\in \mathscr{X}_T,\quad\pa_t^2u\in\mathscr{X}_T^\ast, \quad &&p\in L^2([0, T]; H^2)\cap C^0([0,T];H^1),\\
&\theta\in L^2([0, T]; H^3)\cap C^0([0,T];H^2), \quad &&\pa_t\theta\in L^2([0, T]; H^1)\cap C^0([0,T];H^0),\\
&\pa_t^2\theta\in(\mathscr{H}_T^1)^\ast,
\end{aligned}
\end{eqnarray}
such that \eqref{equ:linear BC} is satisfied in the strong sense, then we call $(u, p, \theta)$ a strong solution.
\end{definition}
We now establish the existence and lower-order regularity of strong solutions.
\begin{theorem}\label{thm:lower regularity}
Suppose that the forcing terms and the initial data satisfy the conditions in Definition \ref{def:strong solution}, and that $u_0$, $F^4(0)$ satisfy the compatibility condition
\begin{equation}\label{cond:compatibility}
\Pi_0\left(F^4(0)+\mathbb{D}_{\mathscr{A}_0}u_0\mathscr{N}_0\right)=0,\quad \text{where}\thinspace \mathscr{N}_0=(-\pa_1\eta_0, -\pa_2\eta_0, 1),
\end{equation}
and $\Pi_0$ is an orthogonal projection onto the tangent space of the surface $\{x_3=\eta_0\}$ defined by
\begin{equation}\label{def:projection}
\Pi_0v=v-(v\cdot\mathscr{N}_0)\mathscr{N}_0|\mathscr{N}_0|^{-2}.
\end{equation}
Then there exists a strong solution $(u, p, \theta)$ satisfying \eqref{equ:strong solution}. Moreover,
\begin{eqnarray}\label{inequ:est strong solution}
\begin{aligned}
&\|u\|_{L^\infty H^2}^2+\|u\|_{L^2H^3}^2+\|\pa_tu\|_{L^\infty H^0}^2+\|\pa_tu\|_{L^2H^1}^2+\|\pa_t^2u\|_{(\mathscr{X}_T)^\ast}+\|p\|_{L^\infty H^1}^2+\|p\|_{L^2H^2}^2\\
&\quad+\|\theta\|_{L^\infty H^2}^2+\|\theta\|_{L^2H^3}^2+\|\pa_t\theta\|_{L^\infty H^0}^2+\|\pa_t\theta\|_{L^2H^1}^2+\|\pa_t^2\theta\|_{(\mathscr{H}^1_T)^\ast}\\
&\lesssim P(\|\eta_0\|_{H^{5/2}})\left(1+\mathscr{K}(\eta)\right)\exp\left(C(1+\mathscr{K}(\eta))T\right)\Big(\|u_0\|_{H^2}^2+\|\theta_0\|_{H^2}^2+\|F^1(0)\|_{H^0}^2\\
&\quad+\|F^3(0)\|_{H^0}^2+\|F^4(0)\|_{H^{1/2}(\Sigma)}^2+\|F^1\|_{L^2H^1}^2+\|F^3\|_{L^2H^1}^2+\|F^4\|_{L^2H^{3/2}(\Sigma)}^2\\
&\quad+\|F^5\|_{L^2H^{3/2}(\Sigma)}^2+\|\pa_t(F^1-F^4)\|_{(\mathscr{X}_T)^\ast}^2+\|\pa_t(F^3+F^5)\|_{(\mathscr{H}^1_T)^\ast}^2\Big),
\end{aligned}
\end{eqnarray}
where $C$ is a constant independent of $\eta$ and $\mathscr{K}(\eta)$ is defined as
\begin{equation}
\mathscr{K}(\eta):=\sup_{0\le t\le T}\left(\|\eta\|_{H^{9/2}}^2+\|\pa_t\eta\|_{H^{7/2}}^2+\|\pa_t^2\eta\|_{H^{5/2}}^2\right).
\end{equation}
The initial pressure $p(0)\in H^1(\Om)$ is determined in terms of $u_0$, $\theta_0$, $F^1(0)$, $F^4(0)$ as a weak solution to
\begin{eqnarray}\label{equ:p0}
\left\{
\begin{aligned}
&\mathop{\rm div}\nolimits_{\mathscr{A}_0}\left(\nabla_{\mathscr{A}_0}p(0)-F^1(0)-\theta_0\nabla_{\mathscr{A}_0}y_{3,0}\right)=-\mathop{\rm div}\nolimits_{\mathscr{A}_0}(R(0)u_0)\in H^0(\Om),\\
&p(0)=(F^4(0)+\mathbb{D}_{\mathscr{A}_0}u_0\mathscr{N}_0)\cdot\mathscr{N}_0|\mathscr{N}_0|^{-2}\in H^{1/2}(\Sigma),\\
&\left(\nabla_{\mathscr{A}_0}p(0)-F^1(0)\right)\cdot\nu=\Delta_{\mathscr{A}_0}u_0\cdot\nu\in H^{-1/2}(\Sigma_b),
\end{aligned}
\right.
\end{eqnarray}
where $y_{3,0}$ is defined in terms of $\eta_0$.
Also, $\pa_t\theta(0)$ satisfies
\begin{equation}
\pa_t\theta(0)=\Delta_{\mathscr{A}_0}\theta_0+F^3(0) \in H^0(\Om),
\end{equation}
and $D_tu(0)=\pa_tu(0)-R(0)u_0$ satisfies
\begin{equation}
D_tu(0)=\Delta_{\mathscr{A}_0}u_0-\nabla_{\mathscr{A}_0}p(0)+F^1(0)+\theta_0e_3-R(0)u_0\in\mathscr{Y}(0).
\end{equation}
Moreover, $\pa_t\theta$ satisfies
\begin{eqnarray}\label{equ:pat theta}
\left\{
\begin{aligned}
&\pa_t(\pa_t\theta)-\Delta_{\mathscr{A}}(\pa_t\theta)=\pa_tF^3+G^3\quad&\text{in}\quad\Om,\\
&\nabla_{\mathscr{A}}(\pa_t\theta)\cdot\mathscr{N}+\pa_t\theta\left|\mathscr{N}\right|=\pa_tF^5+G^5\quad&\text{on}\quad\Sigma,\\
&\pa_t\theta=0\quad&\text{on}\quad\Sigma_b,
\end{aligned}
\right.
\end{eqnarray}
and $D_tu$ satisfies
\begin{eqnarray}\label{equ:Dt u}
\left\{
\begin{aligned}
&\pa_t(D_tu)-\Delta_{\mathscr{A}}(D_tu)+\nabla_{\mathscr{A}}(\pa_tp)-D_t(\theta \nabla_{\mathscr{A}}y_3)=D_tF^1+G^1\quad&\text{in}\quad\Om,\\
&\mathop{\rm div}\nolimits_{\mathscr{A}}(D_tu)=0\quad&\text{in}\quad\Om,\\
&S_{\mathscr{A}}(\pa_tp,D_tu)\mathscr{N}=\pa_tF^4+G^4\quad&\text{on}\quad\Sigma,\\
&D_tu=0\quad&\text{on}\quad\Sigma_b,
\end{aligned}
\right.
\end{eqnarray}
in the weak sense of \eqref{equ:lpws}, where $G^1$ is defined by
\[
G^1=-(R+\pa_tJK)\Delta_{\mathscr{A}}u-\pa_tRu+(\pa_tJK+R+R^\top)\nabla_{\mathscr{A}}p+\mathop{\rm div}\nolimits_{\mathscr{A}}(\mathbb{D}_{\mathscr{A}}(Ru)-R\mathbb{D}_{\mathscr{A}}u+\mathbb{D}_{\pa_t\mathscr{A}}u)
\]
($R^\top$ denoting the matrix transpose of $R$), $G^3$ by
\[
G^3=-\pa_tJK\Delta_{\mathscr{A}}\theta+\mathop{\rm div}\nolimits_{\mathscr{A}}(-R\nabla_{\mathscr{A}}\theta+\nabla_{\pa_t\mathscr{A}}\theta),
\]
$G^4$ by
\[
G^4=\mathbb{D}_{\mathscr{A}}(Ru)\mathscr{N}-(pI-\mathbb{D}_{\mathscr{A}}u)\pa_t\mathscr{N}+\mathbb{D}_{\pa_t\mathscr{A}}u\mathscr{N},
\]
and $G^5$ by
\[
G^5=-\nabla_{\mathscr{A}}\theta\cdot\pa_t\mathscr{N}-\nabla_{\pa_t\mathscr{A}}\theta\cdot\mathscr{N}-\theta\pa_t\left|\mathscr{N}\right|.
\]
More precisely, \eqref{equ:pat theta} and \eqref{equ:Dt u} hold in the weak sense of \eqref{equ:lpws} in that
\begin{eqnarray}\label{equ:weak pat theta}
\begin{aligned}
&\left<\pa_t^2\theta,\phi\right>_{(\mathscr{H}^1_T)^\ast}+\left(\pa_t\theta,\phi\right)_{\mathscr{H}^1_T}+\left(\pa_t\theta\left|\mathscr{N}\right|,\phi\right)_{L^2H^0(\Sigma)}\\
&=\left<\pa_t(F^3+F^5),\phi\right>_{(\mathscr{H}^1_T)^\ast}+\left(\pa_tJKF^3,\phi\right)_{\mathscr{H}^0_T}-\left(\pa_tJK\pa_t\theta,\phi\right)_{\mathscr{H}^0_T}\\
&\quad-\int_0^T\int_{\Om}\left(\pa_tJK\nabla_{\mathscr{A}}\theta\cdot\nabla_{\mathscr{A}}\phi+\nabla_{\pa_t\mathscr{A}}\theta\cdot\nabla_{\mathscr{A}}\phi+\nabla_{\mathscr{A}}\theta\cdot\nabla_{\pa_t\mathscr{A}}\phi\right)J
\end{aligned}
\end{eqnarray}
and
\begin{eqnarray}
\begin{aligned}
&\left<\pa_tD_tu,\psi\right>_{(\mathscr{X}_T)^\ast}+\f12\left(\pa_tu,\psi\right)_{\mathscr{H}^1_T}\\
&=\left<\pa_t(F^1-F^4),\psi\right>_{(\mathscr{X}_T)^\ast}+\left(\pa_t(\theta \nabla_{\mathscr{A}}y_3),\psi\right)_{\mathscr{H}^0_T}-\left(\pa_tRu+R\pa_tu,\psi\right)_{\mathscr{H}^0_T}\\
&\quad+\left(\pa_tJKF^1,\psi\right)_{\mathscr{H}^0_T}-\left(\pa_tJK\theta e_3,\psi\right)_{\mathscr{H}^0_T}-\left(\pa_tJK\pa_tu,\psi\right)_{\mathscr{H}^0_T}-\left(p,\mathop{\rm div}\nolimits_{\mathscr{A}}(R\psi)\right)_{\mathscr{H}^0_T}\\
&\quad-\f12\int_0^T\int_\Om\left(\pa_tJK\mathbb{D}_{\mathscr{A}}u:\mathbb{D}_{\mathscr{A}}\psi+\mathbb{D}_{\pa_t\mathscr{A}}u:\mathbb{D}_{\mathscr{A}}\psi+\mathbb{D}_{\mathscr{A}}u:\mathbb{D}_{\pa_t\mathscr{A}}\psi\right)J
\end{aligned}
\end{eqnarray}
for all $\phi\in\mathscr{H}^1_T$, $\psi\in\mathscr{X}_T$.
\end{theorem}
\begin{proof}
Here we will use the Galerkin method, for which we refer to \cite{Evans}.
Step 1. The construction of approximate solutions for $\theta$. Since the scalar-valued space $H^2(\Om)\cap{}_0H^1(\Om)$ is separable, we can choose a countable basis $\{\tilde{w}^j\}_{j=1}^\infty$. Note that this basis is time-independent. Now, we need to construct a time-dependent basis for $H^2\cap\mathscr{H}^1$. We define $\phi^j=\phi^j(t):=K(t)\tilde{w}^j$. According to Proposition \ref{prop:k}, $\phi^j(t)\in H^2(\Om)\cap\mathscr{H}^1(t)$, and $\{\phi^j(t)\}_{j=1}^\infty$ is a basis of $H^2(\Om)\cap\mathscr{H}^1(t)$ for each $t\in [0, T]$. Moreover,
\begin{equation}\label{equ:dt phi}
\pa_t\phi^j(t)=\pa_tK(t)\tilde{w}^j=\pa_tKJK\tilde{w}^j=\pa_tKJ\phi^j(t),
\end{equation}
which allows us to express $\pa_t\phi^j$ in terms of $\phi^j$. For any integer $m\ge1$, we define the finite-dimensional space $\mathscr{H}^1_m(t):=\operatorname{span}\{\phi^1(t), \ldots, \phi^m(t)\}\subset H^2(\Om)\cap\mathscr{H}^1(t)$, and we let $\mathscr{P}^m_t: H^2(\Om)\to \mathscr{H}^1_m(t)$ denote the $H^2(\Om)$-orthogonal projection onto $\mathscr{H}^1_m(t)$. Clearly, if $\theta\in H^2(\Om)\cap\mathscr{H}^1(t)$, then $\mathscr{P}^m_t\theta\to\theta$ as $m\to\infty$.
For each $m\ge1$, we define an approximate solution
\[
\theta^m=d^m_j(t)\phi^j(t), \quad \text{with}\quad d^m_j: [0, T] \to \mathbb{R} \quad\text{for}\quad j=1, \ldots, m,
\]
where, as usual, we use the Einstein summation convention over the repeated index $j$.
We want to choose $d^m_j$ such that
\begin{equation}\label{equ:thetam}
\left(\pa_t\theta^m, \phi\right)_{\mathscr{H}^0}+\left(\theta^m, \phi\right)_{\mathscr{H}^1}+\left(\theta^m\left|\mathscr{N}\right|, \phi\right)_{H^0(\Sigma)}=\left(F^3,
\phi\right)_{\mathscr{H}^0}+\left(F^5, \phi\right)_{H^0(\Sigma)},
\end{equation}
with the initial data $\theta^m(0)=\mathscr{P}^m_0\theta_0\in \mathscr{H}^1_m(0)$, for each $\phi\in\mathscr{H}^1_m(t)$.
Equation \eqref{equ:thetam} is equivalent to the system of ODEs for $d^m_j$:
\begin{eqnarray}\label{equ:ode}
\begin{aligned}
\dot{d}^m_j\left(\phi^j, \phi^k\right)_{\mathscr{H}^0}+d^m_j\left(\left(\pa_tKJ\phi^j,\phi^k\right)_{\mathscr{H}^0}+\left(\phi^j, \phi^k\right)_{\mathscr{H}^1}+\left(\phi^j\left|\mathscr{N}\right|,\phi^k\right)_{H^0(\Sigma)}\right)\\
=\left(F^3,
\phi^k\right)_{\mathscr{H}^0}+\left(F^5, \phi^k\right)_{H^0(\Sigma)}
\end{aligned}
\end{eqnarray}
for $j, k=1, \ldots, m$. The $m\times m$ matrix with $j, k$ entry $\left(\phi^j, \phi^k\right)_{\mathscr{H}^0}$ is invertible, the coefficients of the linear system \eqref{equ:ode} are $C^1([0, T])$, and the forcing terms are $C^0([0, T])$, so the usual well-posedness theory of ODEs guarantees the existence of a unique solution $d^m_j\in C^1([0, T])$ to \eqref{equ:ode} that satisfies the initial data. This provides the desired solution, $\theta^m$, to \eqref{equ:thetam}. Since $F^3$, $F^5$ satisfy \eqref{cond:force}, equation \eqref{equ:ode} may be differentiated in time to see that $d_j^m\in C^{1,1}([0, T])$, which means that $d_j^m$ is twice differentiable almost everywhere in $[0, T]$.
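To make the solvability of \eqref{equ:ode} transparent, it can be written in matrix form; here $\mathcal{M}(t)$, $\mathcal{B}(t)$ and $f(t)$ are shorthand introduced only for this remark, with $\mathcal{M}_{kj}(t)=\left(\phi^j, \phi^k\right)_{\mathscr{H}^0}$, $\mathcal{B}_{kj}(t)$ collecting the three bilinear terms multiplying $d^m_j$, and $f_k(t)$ the forcing terms:
\[
\mathcal{M}(t)\dot{d}^m(t)+\mathcal{B}(t)d^m(t)=f(t),
\quad\text{i.e.}\quad
\dot{d}^m(t)=\mathcal{M}(t)^{-1}\left(f(t)-\mathcal{B}(t)d^m(t)\right),
\]
a linear ODE system with continuous coefficients, to which the standard existence and uniqueness theory applies.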
Step 2. The energy estimates for $\theta^m$. Since $\theta^m(t)\in \mathscr{H}^1_m(t)$, we may take $\phi=\theta^m$ as a test function in \eqref{equ:thetam}. Using the Poincar\'e-type inequalities in Lemma A.14 of \cite{GT1} and the usual trace theory, we have
\begin{align*}
\pa_t\f12\|\theta^m\|_{\mathscr{H}^0}^2+\|\theta^m\|_{\mathscr{H}^1}^2\lesssim (\|F^3\|_{\mathscr{H}^0}+\|F^5\|_{H^{1/2}(\Sigma)})\|\theta^m\|_{\mathscr{H}^1}-\f12\int_{\Om}|\theta^m|^2\pa_tJ.
\end{align*}
Then, applying Cauchy's inequality, we may derive that
\begin{align*}
\pa_t\f12\|\theta^m\|_{\mathscr{H}^0}^2+\f14\|\theta^m\|_{\mathscr{H}^1}^2\lesssim\|F^3\|_{\mathscr{H}^0}^2+\|F^5\|_{H^{1/2}(\Sigma)}^2+C_0(\eta)\f12\|\theta^m\|_{\mathscr{H}^0}^2
\end{align*}
with $C_0(\eta):=1+\sup_{0\le t\le T}\|\pa_tJK\|_{L^\infty}$. Using Lemma 2.9 of \cite{LW}, we have
\begin{eqnarray}\label{equ:initial thetam}
\begin{aligned}
\|\theta^m(0)\|_{\mathscr{H}^0}&\le P(\|\eta_0\|_{H^{5/2}})\|\theta^m(0)\|_{H^0}\le P(\|\eta_0\|_{H^{5/2}})\|\theta^m(0)\|_{H^2}\\
&=P(\|\eta_0\|_{H^{5/2}})\|\mathscr{P}^m_0\theta_0\|_{H^2}\le P(\|\eta_0\|_{H^{5/2}})\|\theta_0\|_{H^2}.
\end{aligned}
\end{eqnarray}
Now, we can utilize Gronwall's lemma to deduce energy estimates for $\theta^m$:
\begin{eqnarray}\label{equ:est thetam}
\begin{aligned}
\sup_{0\le t\le T}\|\theta^m\|_{\mathscr{H}^0}^2&+\|\theta^m\|_{\mathscr{H}^1_T}^2\\
&\lesssim P(\|\eta_0\|_{H^{5/2}})\exp(C_0(\eta)T)(\|\theta_0\|_{H^2}^2+\|F^3\|_{\mathscr{H}^0_T}^2+\|F^5\|_{L^2H^{1/2}(\Sigma)}^2).
\end{aligned}
\end{eqnarray}
Step 3. Estimates for $\pa_t\theta^m(0)$. If $\theta\in H^2(\Om)\cap\mathscr{H}^1(t)$ and $\phi\in\mathscr{H}^1$, then integration by parts reveals that
\begin{equation}\label{equ:theta1}
\left(\theta, \phi\right)_{\mathscr{H}^1}=\int_{\Om}-\Delta_{\mathscr{A}}\theta\phi J+\int_{\Sigma}(\nabla_{\mathscr{A}}\theta\cdot\mathscr{N})\phi=\left(-\Delta_{\mathscr{A}}\theta,\phi\right)_{\mathscr{H}^0}
+\left(\nabla_{\mathscr{A}}\theta\cdot\mathscr{N},\phi\right)_{H^0(\Sigma)}.
\end{equation}
Evaluating \eqref{equ:thetam} at $t=0$ and employing \eqref{equ:theta1}, we have that
\begin{equation}\label{equ:thetam0}
\left(\pa_t\theta^m(0), \phi\right)_{\mathscr{H}^0}=\left(\Delta_{\mathscr{A}_0}\theta^m(0)+F^3(0), \phi\right)_{\mathscr{H}^0},
\end{equation}
for all $\phi\in\mathscr{H}^1_m(0)$.
By virtue of \eqref{equ:dt phi}, we have that
\begin{equation}\label{equ:test theta}
\pa_t\theta^{m}-\pa_tK(t)J(t)\theta^m(t)=\dot{d}^m_j(t)\phi^j(t)\in\mathscr{H}^1_m(t),
\end{equation}
so that $\phi=\pa_t\theta^m(0)-\pa_tK(0)J(0)\theta^m(0)\in\mathscr{H}^1_m(0)$ is an admissible choice of test function. Using this test function in \eqref{equ:thetam0}, we have
\begin{eqnarray}\label{equ:thetam1}
\begin{aligned}
\|\pa_t\theta^m(0)\|_{\mathscr{H}^0}^2&\le \|\pa_tK(0)J(0)\theta^m(0)\|_{\mathscr{H}^0}\|\pa_t\theta^m(0)\|_{\mathscr{H}^0}\\
&+\|\pa_t\theta^m(0)-\pa_tK(0)J(0)\theta^m(0)\|_{\mathscr{H}^0}\|\Delta_{\mathscr{A}_0}\theta^m(0)+F^3(0)\|_{\mathscr{H}^0}.
\end{aligned}
\end{eqnarray}
Then, after using \eqref{equ:initial thetam} and Cauchy's inequality for the right-hand side of \eqref{equ:thetam1}, we have the bound
\begin{equation}\label{equ:est thetam0}
\|\pa_t\theta^m(0)\|_{\mathscr{H}^0}^2\lesssim C_1(\eta)\left(\|\theta_0\|_{H^2}^2+\|F^3(0)\|_{\mathscr{H}^0}^2\right)
\end{equation}
with $C_1(\eta)=P(\|\eta_0\|_{H^{5/2}})\left(1+\|\pa_tK(0)J(0)\|_{L^\infty}^2+\|\mathscr{A}_0\|_{C^1}^2\right)$.
Step 4. Energy estimates for $\pa_t\theta^m$. Now, suppose that $\phi(t)=c_j^m(t)\phi^j$ for $c_j^m\in C^{0,1}([0, T])$, $j=1, \ldots, m$; as in \eqref{equ:test theta}, one shows that $\pa_t\phi-\pa_tK(t)J(t)\phi\in \mathscr{H}^1_m(t)$ as well. Using this $\phi$ in \eqref{equ:thetam}, differentiating the resulting equation in time, and then subtracting \eqref{equ:thetam} with the test function $\pa_t\phi-\pa_tK(t)J(t)\phi$, we find that
\begin{eqnarray}\label{equ:dt thetam1}
\begin{aligned}
&\left<\pa_t^2\theta^m, \phi\right>_{(\mathscr{H}^1)^\ast}+\left(\pa_t\theta^m,\phi\right)_{\mathscr{H}^1}+\left(\pa_t\theta^m\left|\mathscr{N}\right|, \phi\right)_{H^0(\Sigma)}\\
&=\left<\pa_t(F^3+F^5),\phi\right>_{(\mathscr{H}^1)^\ast}+\left(F^3,(\pa_tKJ+\pa_tJK)\phi\right)_{\mathscr{H}^0}+\left(F^5,\pa_tKJ\phi\right)_{H^0(\Sigma)}\\
&\quad-\left(\pa_t\theta^m, (\pa_tKJ+\pa_tJK)\phi\right)_{\mathscr{H}^0}-\left(\theta^m, \pa_tKJ\phi\right)_{\mathscr{H}^1}-\left(\theta^m, \pa_tKJ\phi\right)_{H^0(\Sigma)}\\
&\quad-\int_{\Om}\left(\pa_tJK\nabla_{\mathscr{A}}\theta^m\cdot\nabla_{\mathscr{A}}\phi+\nabla_{\pa_t\mathscr{A}}\theta^m\cdot\nabla_{\mathscr{A}}\phi+\nabla_{\mathscr{A}}\theta^m\cdot\nabla_{\pa_t\mathscr{A}}\phi\right)J.
\end{aligned}
\end{eqnarray}
According to \eqref{equ:test theta} and the fact that $d_j^m(t)$ is twice differentiable almost everywhere, as noted in Step 1, we may use $\phi=\pa_t\theta^m-\pa_tKJ\theta^m$ as a test function in \eqref{equ:dt thetam1}. Utilizing Cauchy's inequality, trace theory and Remark 2.3 in \cite{GT1}, we have that
\begin{eqnarray}\label{equ:dt thetam2}
\begin{aligned}
&\pa_t\left(\f12\|\pa_t\theta^m\|_{\mathscr{H}^0}^2-\left(\pa_t\theta^m, \pa_tKJ\theta^m\right)_{\mathscr{H}^0}\right)+\f14\|\pa_t\theta^m\|_{\mathscr{H}^1}^2\\
&\le C_0(\eta)\left(\f12\|\pa_t\theta^m\|_{\mathscr{H}^0}^2-\left(\pa_t\theta^m, \pa_tKJ\theta^m\right)_{\mathscr{H}^0}\right)+C_2(\eta)\|\theta^m\|_{\mathscr{H}^1}^2\\
&\quad+C\left(\|F^3\|_{\mathscr{H}^0}^2+\|F^5\|_{H^{1/2}(\Sigma)}^2\right)+C\|\pa_t(F^3+F^5)\|_{(\mathscr{H}^1)^\ast}^2
\end{aligned}
\end{eqnarray}
where $C_2(\eta)$ is defined as
\begin{eqnarray*}
C_2(\eta):&=&\sup_{0\le t\le T}\big[1+\|\pa_t(\pa_tKJ)\|_{L^\infty}^2+\|\pa_tKJ\|_{C^1}^2+\|\pa_t\mathscr{A}\|_{L^\infty}^2\\
&&\quad+(1+\|\mathscr{A}\|_{L^\infty}^2)(1+\|\pa_tJ K\|_{L^\infty}^2)\big](1+\|\pa_tKJ\|_{C^1}^2).
\end{eqnarray*}
Then according to Cauchy's inequality and Gronwall's lemma, \eqref{equ:dt thetam2} implies that
\begin{eqnarray}\label{equ:dt thetam3}
\begin{aligned}
&\sup_{0\le t\le T}\|\pa_t\theta^m\|_{\mathscr{H}^0}^2+\|\pa_t\theta^m\|_{\mathscr{H}^1_T}^2\\
&\lesssim \exp(C_0(\eta)T)\Big(\|\pa_t\theta^m(0)\|_{\mathscr{H}^0}^2 +C_1(\eta)\|\theta^m(0)\|_{\mathscr{H}^0}^2+\|F^3\|_{\mathscr{H}^0_T}^2\\
&\quad+\|F^5\|_{L^2H^{1/2}}^2+\|\pa_t(F^3+F^5)\|_{(\mathscr{H}^1_T)^\ast}^2\Big)\\
&\quad+C_2(\eta)\left(\sup_{0\le t\le T}\|\theta^m\|_{\mathscr{H}^0}^2+\int_0^T\exp(C_0(\eta)(T-s))\|\theta^m(s)\|_{\mathscr{H}^1}^2\,\mathrm{d}s\right).
\end{aligned}
\end{eqnarray}
Now, the energy estimates for $\pa_t\theta^m$ are deduced by combining \eqref{equ:dt thetam3} with the estimates \eqref{equ:initial thetam}, \eqref{equ:est thetam} and \eqref{equ:est thetam0}:
\begin{eqnarray}\label{equ:dt thetam4}
\begin{aligned}
&\sup_{0\le t\le T}\|\pa_t\theta^m\|_{\mathscr{H}^0}^2+\|\pa_t\theta^m\|_{\mathscr{H}^1_T}^2\\
&\lesssim \left(C_1(\eta)+C_2(\eta)\right)\exp(C_0(\eta)T)\left(\|\theta^m(0)\|_{\mathscr{H}^0}^2+\|F^3(0)\|_{\mathscr{H}^0}^2\right)\\
&\quad+\exp(C_0(\eta)T)\left[C_2(\eta)\left(\|F^3\|_{\mathscr{H}^0_T}^2+\|F^5\|_{L^2H^{1/2}}^2\right)+\|\pa_t(F^3+F^5)\|_{(\mathscr{H}^1_T)^\ast}^2\right].
\end{aligned}
\end{eqnarray}
Step 5. Improved estimates for $\theta^m$. Using $\phi=\pa_t\theta^m-\pa_tKJ\theta^m\in \mathscr{H}^1_m(t)$ as a test function in \eqref{equ:thetam}, we can improve the energy estimates for $\theta^m$:
\begin{eqnarray}\label{equ:improve thetam}
\begin{aligned}
&\pa_t\f12\left(\|\theta^m\|_{\mathscr{H}^1}^2+\|\theta^m\|_{H^0(\Sigma)}^2\right)+\|\pa_t\theta^m\|_{\mathscr{H}^0}^2\\
&=\left(\pa_t\theta^m,\pa_tKJ\theta^m\right)_{\mathscr{H}^0}+\left(\theta^m,\pa_tKJ\theta^m\right)_{\mathscr{H}^1}+\left(F^3,\pa_t\theta^m-\pa_tKJ\theta^m\right)_{\mathscr{H}^0}\\
&\quad+\left(F^5,\pa_t\theta^m-\pa_tKJ\theta^m\right)_{H^0(\Sigma)}+\int_{\Om}\left(\nabla_{\mathscr{A}}\theta^m\cdot\nabla_{\pa_t\mathscr{A}}\theta^m+\pa_tJK\f{|\nabla_{\mathscr{A}}\theta^m|^2}{2}J\right).
\end{aligned}
\end{eqnarray}
Since we have already controlled $\|\theta^m\|_{\mathscr{H}^1_T}^2$ and $\|\pa_t\theta^m\|_{\mathscr{H}^1_T}^2$, integrating \eqref{equ:improve thetam} in time implies that
\begin{eqnarray}\label{equ:improve thetam1}
\begin{aligned}
&\sup_{0\le t\le T}\|\theta^m\|_{\mathscr{H}^1}^2+\|\pa_t\theta^m\|_{\mathscr{H}^0_T}^2\\
&\lesssim P(\|\eta_0\|_{H^{5/2}}) \left(C_1(\eta)+C_2(\eta)\right)\exp(C_0(\eta)T)\left(\|\theta_0\|_{H^2}^2+\|F^3(0)\|_{\mathscr{H}^0}^2\right)\\
&\quad+P(\|\eta_0\|_{H^{5/2}})\exp(C_0(\eta)T)\Big[C_2(\eta)\left(\|F^3\|_{\mathscr{H}^0_T}^2+\|F^5\|_{L^2H^{1/2}}^2\right)\\
&\quad+\|\pa_t(F^3+F^5)\|_{(\mathscr{H}^1_T)^\ast}^2\Big].
\end{aligned}
\end{eqnarray}
Step 6. Uniform bounds for \eqref{equ:dt thetam4} and \eqref{equ:improve thetam1}. Now, we seek to estimate the constants $C_i(\eta)$, $i=0, 1, 2$, in terms of the quantity $\mathscr{K}(\eta)$. A direct computation combined with Lemma A.10 of \cite{GT1} reveals that
\begin{equation}
C_0(\eta)+C_1(\eta)+C_2(\eta)\le C(1+\mathscr{K}(\eta))
\end{equation}
for a constant $C$ independent of $\eta$.
Step 7. Passing to the limit. According to the energy estimates \eqref{equ:dt thetam4} and \eqref{equ:improve thetam1} and Lemma \ref{lem:theta H0 H1}, we know that the sequence $\{\theta^m\}$ is uniformly bounded in $L^\infty H^1$ and $\{\pa_t\theta^m\}$ is uniformly bounded in $L^\infty H^0\cap L^2H^1$. Then, up to extracting a subsequence, we know that
\[
\theta^m\stackrel{\ast}\rightharpoonup \theta \thinspace \text{weakly-}\ast \thinspace\text{in}\thinspace L^\infty H^1,\thinspace\pa_t\theta^m\stackrel{\ast}\rightharpoonup\pa_t\theta\thinspace\text{in}\thinspace L^\infty H^0,\thinspace \pa_t\theta^m\rightharpoonup\pa_t\theta\thinspace\text{weakly in}\thinspace L^2H^1,
\]
as $m\to\infty$. By lower semicontinuity, the energy estimates reveal that
\[
\|\theta\|_{L^\infty H^1}^2+\|\pa_t\theta\|_{L^\infty H^0}^2+\|\pa_t\theta\|_{L^2H^1}^2
\]
is bounded from above by the right-hand side of \eqref{inequ:est strong solution}.
According to these convergence results, we can integrate \eqref{equ:dt thetam1} temporally from $0$ to $T$ and let $m\to\infty$ to deduce that $\pa_t^2\theta^m\rightharpoonup\pa_t^2\theta$ weakly in $(\mathscr{H}^1_T)^\ast$, where the action of $\pa_t^2\theta$ on an element $\phi\in\mathscr{H}^1_T$ is defined by replacing $\theta^m$ with $\theta$ everywhere in \eqref{equ:dt thetam1}. Passing to the limit in \eqref{equ:dt thetam1}, it is straightforward to show that $\|\pa_t^2\theta\|_{(\mathscr{H}^1_T)^\ast}^2$ is bounded from above by the right-hand side of \eqref{inequ:est strong solution}. This bound shows that $\pa_t\theta\in C^0L^2$.
Step 8. In the limit, \eqref{equ:thetam} implies that for almost every $t$,
\begin{equation}\label{equ:limit theta}
\left(\pa_t\theta,\phi\right)_{\mathscr{H}^0}+\left(\theta,\phi\right)_{\mathscr{H}^1}+\left(\theta\left|\mathscr{N}\right|,\phi\right)_{H^0(\Sigma)}=\left(F^3,\phi\right)_{\mathscr{H}^0}+\left(F^5,\phi\right)_{H^0(\Sigma)}\quad\text{for every}\thinspace\phi\in\mathscr{H}^1.
\end{equation}
For almost every $t\in [0, T]$, $\theta(t)$ is the unique weak solution to the elliptic problem \eqref{equ:SBC} in the sense of \eqref{equ:weak theta}, with $F^3$ replaced by $F^3(t)-\pa_t\theta(t)$ and $F^5$ replaced by $F^5(t)$. Since $F^3(t)-\pa_t\theta(t)\in H^0(\Om)$ and $F^5(t)\in H^{1/2}(\Sigma)$, Lemma \ref{lem:S lower regularity} shows that this elliptic problem admits a unique strong solution, which must coincide with the weak solution. Then applying Proposition \ref{prop:high regulatrity}, we have the bound
\begin{equation}\label{est:bound}
\|\theta(t)\|_{H^r}^2\lesssim C(\eta_0) \left(\|\pa_t\theta(t)\|_{\mathscr{H}^{r-2}}^2+\|F^3(t)\|_{\mathscr{H}^{r-2}}^2+\|F^5(t)\|_{H^{r-3/2}(\Sigma)}^2\right)
\end{equation}
when $r=2,3$. When $r=2$, we take the supremum of \eqref{est:bound} over $t\in [0, T]$, and when $r=3$, we integrate over $[0, T]$; the resulting inequalities imply that $\theta\in L^\infty H^2\cap L^2H^3$ with estimates as in \eqref{inequ:est strong solution}.
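For the reader's convenience, we record the two resulting inequalities explicitly:
\begin{align*}
\|\theta\|_{L^\infty H^2}^2&\lesssim C(\eta_0)\left(\|\pa_t\theta\|_{L^\infty\mathscr{H}^0}^2+\|F^3\|_{L^\infty\mathscr{H}^0}^2+\|F^5\|_{L^\infty H^{1/2}(\Sigma)}^2\right),\\
\|\theta\|_{L^2 H^3}^2&\lesssim C(\eta_0)\left(\|\pa_t\theta\|_{L^2\mathscr{H}^1}^2+\|F^3\|_{L^2\mathscr{H}^1}^2+\|F^5\|_{L^2H^{3/2}(\Sigma)}^2\right).
\end{align*}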
For the linear Navier--Stokes equations, the argument is exactly the same as in \cite{GT1}. We then conclude that $(u, p, \theta)$ is a strong solution of \eqref{equ:linear BC} with the estimates as in \eqref{inequ:est strong solution}.
Step 9. The weak formulations satisfied by $\pa_t\theta$ and $D_tu$. We may integrate \eqref{equ:dt thetam1} in time from $0$ to $T$ and pass to the limit $m\to\infty$. For any $\phi\in\mathscr{H}^1$, we have $\pa_tKJ\phi\in\mathscr{H}^1$, so that we may substitute $\pa_tKJ\phi$ for $\phi$ in \eqref{equ:limit theta}; this yields
\begin{eqnarray}
\begin{aligned}
&\left<\pa_t^2\theta,\phi\right>_{(\mathscr{H}^1_T)^\ast}+\left(\pa_t\theta,\phi\right)_{\mathscr{H}^1_T}+\left(\pa_t\theta\left|\mathscr{N}\right|,\phi\right)_{L^2H^0(\Sigma)}\\
&=\left<\pa_t(F^3+F^5),\phi\right>_{(\mathscr{H}^1_T)^\ast}+\left(\pa_tJKF^3,\phi\right)_{\mathscr{H}^0_T}-\left(\pa_tJK\pa_t\theta,\phi\right)_{\mathscr{H}^0_T}\\
&\quad-\int_0^T\int_{\Om}\left(\pa_tJK\nabla_{\mathscr{A}}\theta\cdot\nabla_{\mathscr{A}}\phi+\nabla_{\pa_t\mathscr{A}}\theta\cdot\nabla_{\mathscr{A}}\phi+\nabla_{\mathscr{A}}\theta\cdot\nabla_{\pa_t\mathscr{A}}\phi\right)J
\end{aligned}
\end{eqnarray}
for all $\phi\in\mathscr{H}^1_T$. This is exactly \eqref{equ:weak pat theta}. To justify that \eqref{equ:weak pat theta} implies \eqref{equ:pat theta}, we integrate by parts to obtain the identity
\begin{eqnarray}
\begin{aligned}
&-\int_0^T\int_{\Om}\left(\pa_tJK\nabla_{\mathscr{A}}\theta\cdot\nabla_{\mathscr{A}}\phi+\nabla_{\pa_t\mathscr{A}}\theta\cdot\nabla_{\mathscr{A}}\phi+\nabla_{\mathscr{A}}\theta\cdot\nabla_{\pa_t\mathscr{A}}\phi\right)J\\
&=-\int_0^T\int_\Om\left(-R\nabla_{\mathscr{A}}\theta+\nabla_{\pa_t\mathscr{A}}\theta\right)\cdot\nabla_{\mathscr{A}}\phi J\\
&=\left(\mathop{\rm div}\nolimits_{\mathscr{A}}(-R\nabla_{\mathscr{A}}\theta+\nabla_{\pa_t\mathscr{A}}\theta),\phi\right)_{\mathscr{H}^0_T}-\left<\nabla_{\mathscr{A}}\theta\cdot\pa_t\mathscr{N}+\nabla_{\pa_t\mathscr{A}}\theta\cdot\mathscr{N},\phi\right>_{L^2H^{-1/2}}.
\end{aligned}
\end{eqnarray}
We may then deduce from \eqref{equ:weak pat theta} that $\pa_t\theta$ is a weak solution of \eqref{equ:pat theta} in the sense of \eqref{equ:lpws} with $\pa_t\theta(0)\in\mathscr{H}^0(0)$. We may then appeal to the computation in \cite{GT1} to deduce that $p(0)$ satisfies the equation \eqref{equ:p0} and that $D_tu$ is a weak solution of \eqref{equ:Dt u} in the sense of \eqref{equ:lpws} with $D_tu(0)\in\mathscr{Y}(0)$.
\end{proof}
\subsection{Higher regularity}
In order to state our higher regularity results for \eqref{equ:linear BC}, we need to construct the initial data and compatibility conditions. First, we define the vector or scalar fields $\mathfrak{E}^{01}$, $\mathfrak{E}^{02}$, $\mathfrak{E}^1$, $\mathfrak{E}^3$ in $\Om$ and $\mathfrak{E}^4$, $\mathfrak{E}^5$ on $\Sigma$ by
\begin{eqnarray}
\begin{aligned}
\mathfrak{E}^{01}(G^1, v, q)&=\Delta_{\mathscr{A}}v-\nabla_{\mathscr{A}}q+G^1-Rv,\\
\mathfrak{E}^{02}(G^3,\Theta)&=\Delta_{\mathscr{A}}\Theta+G^3,\\
\mathfrak{E}^1(v,q)&=-(R+\pa_tJK)\Delta_{\mathscr{A}}v-\pa_tRv+(\pa_tJK+R+R^\top)\nabla_{\mathscr{A}}q\\
&\quad+\mathop{\rm div}\nolimits_{\mathscr{A}}(\mathbb{D}_{\mathscr{A}}(Rv)-R\mathbb{D}_{\mathscr{A}}v+\mathbb{D}_{\pa_t\mathscr{A}}v),\\
\mathfrak{E}^3(\Theta)&=-\pa_tJK\Delta_{\mathscr{A}}\Theta+\mathop{\rm div}\nolimits_{\mathscr{A}}(-R\nabla_{\mathscr{A}}\Theta+\nabla_{\pa_t\mathscr{A}}\Theta),\\
\mathfrak{E}^4(v,q)&=\mathbb{D}_{\mathscr{A}}(Rv)\mathscr{N}-(qI-\mathbb{D}_{\mathscr{A}}v)\pa_t\mathscr{N}+\mathbb{D}_{\pa_t\mathscr{A}}v\mathscr{N},\\
\mathfrak{E}^5(\Theta)&=-\nabla_{\mathscr{A}}\Theta\cdot\pa_t\mathscr{N}-\nabla_{\pa_t\mathscr{A}}\Theta\cdot\mathscr{N}-\Theta\pa_t\left|\mathscr{N}\right|,
\end{aligned}
\end{eqnarray}
and we define functions $\mathfrak{f}^1$ in $\Om$, $\mathfrak{f}^2$ on $\Sigma$ and $\mathfrak{f}^3$ on $\Sigma_b$ by
\begin{eqnarray}
\begin{aligned}
\mathfrak{f}^1(G^1, v)&=\mathop{\rm div}\nolimits_{\mathscr{A}}(G^1-Rv),\\
\mathfrak{f}^2(G^4, v)&=(G^4+\mathbb{D}_{\mathscr{A}}v{\mathscr{N}})\cdot{\mathscr{N}}|{\mathscr{N}}|^{-2},\\
\mathfrak{f}^3(G^1, v)&=(G^1+\Delta_{\mathscr{A}}v)\cdot\nu.
\end{aligned}
\end{eqnarray}
We write $F^{1,0}=F^1+\theta \nabla_{\mathscr{A}}y_3$, $F^{3,0}=F^3$, $F^{4,0}=F^4$ and $F^{5,0}=F^5$. When $F^1$, $F^3$, $F^4$, $F^5$, $u$, $p$, and $\theta$ are sufficiently regular, we can recursively define
\begin{eqnarray} \label{equ:force 1}
\begin{aligned}
F^{1,j}&:=D_tF^{1,j-1}-\pa_t^{j-1}(\theta \nabla_{\mathscr{A}}y_3)+D_t^{j-1}(\theta \nabla_{\mathscr{A}}y_3)+\mathfrak{E}^1(D_t^{j-1}u, \pa_t^{j-1}p)\\
&=D_t^jF^1-\left(\pa_t^{j-1}(\theta \nabla_{\mathscr{A}}y_3)-D_t^{j-1}(\theta \nabla_{\mathscr{A}}y_3)\right)+\sum_{\ell=0}^{j-1}D_t^\ell\mathfrak{E}^1(D_t^{j-\ell-1}u, \pa_t^{j-\ell-1}p),\\
F^{3,j}&:=\pa_tF^{3,j-1}+\mathfrak{E}^3(\pa_t^{j-1}\theta)=\pa_t^jF^3+\sum_{\ell=0}^{j-1}\pa_t^\ell\mathfrak{E}^3(\pa_t^{j-\ell-1}\theta),
\end{aligned}
\end{eqnarray}
in $\Om$ and
\begin{eqnarray}\label{equ:force 2}
\begin{aligned}
F^{4,j}&:=\pa_tF^{4,j-1}+\mathfrak{E}^4(D_t^{j-1}u, \pa_t^{j-1}p)=\pa_t^jF^4+\sum_{\ell=0}^{j-1}\pa_t^\ell\mathfrak{E}^4(D_t^{j-\ell-1}u, \pa_t^{j-\ell-1}p),\\
F^{5,j}&:=\pa_tF^{5,j-1}+\mathfrak{E}^5(\pa_t^{j-1}\theta)=\pa_t^jF^5+\sum_{\ell=0}^{j-1}\pa_t^\ell\mathfrak{E}^5(\pa_t^{j-\ell-1}\theta)
\end{aligned}
\end{eqnarray}
on $\Sigma$, for $j=1, \ldots, N$.
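For orientation, the simplest instance $j=1$ of the temperature recursions reads
\[
F^{3,1}=\pa_tF^3+\mathfrak{E}^3(\theta),\qquad F^{5,1}=\pa_tF^5+\mathfrak{E}^5(\theta),
\]
so the terms $F^{\cdot,j}$ collect the commutators generated when time derivatives fall on the coefficients of the equations.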
Now, we define sums of norms involving $F^1$, $F^3$, $F^4$ and $F^5$:
\begin{eqnarray} \label{def:force F F0}
\begin{aligned}
\mathfrak{F}(F^1,F^3,F^4,F^5)&:=\sum_{j=0}^{N-1}\left(\|\pa_t^jF^1\|_{L^2H^{2N-2j-1}}+\|\pa_t^jF^3\|_{L^2H^{2N-2j-1}}\right)\\
&\quad+\|\pa_t^{N}F^1\|_{L^2({}_0H^1(\Om))^\ast}+\|\pa_t^{N}F^3\|_{L^2({}_0H^1(\Om))^\ast}\\
&\quad+\sum_{j=0}^N\left(\|\pa_t^jF^4\|_{L^2H^{2N-2j-1/2}}+\|\pa_t^jF^5\|_{L^2H^{2N-2j-1/2}}\right)\\
&\quad+\sum_{j=0}^{N-1}\left(\|\pa_t^jF^1\|_{L^\infty H^{2N-2j-2}}+\|\pa_t^jF^3\|_{L^\infty H^{2N-2j-2}}\right)\\
&\quad+\sum_{j=0}^{N-1}\left(\|\pa_t^jF^4\|_{L^\infty H^{2N-2j-3/2}}+\|\pa_t^jF^5\|_{L^\infty H^{2N-2j-3/2}}\right),\\
\mathfrak{F}_0(F^1,F^3,F^4,F^5)&:=\sum_{j=0}^{N-1}\left(\|\pa_t^jF^1(0)\|_{H^{2N-2j-2}}+\|\pa_t^jF^3(0)\|_{H^{2N-2j-2}}\right)\\
&\quad+\sum_{j=0}^{N-1}\left(\|\pa_t^jF^4(0)\|_{H^{2N-2j-3/2}}+\|\pa_t^jF^5(0)\|_{H^{2N-2j-3/2}}\right).
\end{aligned}
\end{eqnarray}
For simplicity, we will write $\mathfrak{F}$ for $\mathfrak{F}(F^1,F^3,F^4,F^5)$ and $\mathfrak{F}_0$ for $\mathfrak{F}_0(F^1,F^3,F^4,F^5)$ throughout the rest of this paper. From Lemma A.4 and Lemma 2.4 of \cite{GT1}, we know that if $\mathfrak{F}<\infty$, then
\begin{align*}
&\pa_t^jF^1\in C^0([0,T];H^{2N-2j-2}(\Om)),\quad \pa_t^jF^3\in C^0([0,T];H^{2N-2j-2}(\Om)),\\
&\pa_t^jF^4\in C^0([0,T];H^{2N-2j-3/2}(\Sigma)),\quad\text{and}\quad \pa_t^jF^5\in C^0([0,T];H^{2N-2j-3/2}(\Sigma))
\end{align*}
for $j=0, \ldots, N-1$. For $\eta$, we define
\begin{eqnarray} \label{def:norm eta}
\begin{aligned}
\mathfrak{D}(\eta)&:=\sum_{j=2}^{N+1}\|\pa_t^j\eta\|_{L^2H^{2N-2j+5/2}}^2,\\
\mathfrak{E}(\eta)&:=\|\eta\|_{L^\infty H^{2N+1/2}(\Sigma)}^2+\sum_{j=1}^{N}\|\pa_t^j\eta\|_{L^\infty H^{2N-2j+3/2}(\Sigma)}^2,\\
\mathfrak{K}(\eta)&:=\mathfrak{D}(\eta)+\mathfrak{E}(\eta),\\
\mathfrak{E}_0(\eta)&:=\|\eta_0\|_{H^{2N+1/2}(\Sigma)}^2+\sum_{j=1}^{N}\|\pa_t^j\eta(0)\|_{H^{2N-2j+3/2}(\Sigma)}^2.
\end{aligned}
\end{eqnarray}
The following lemmas, together with their proofs, are similar to Lemmas 4.5, 4.6, and 4.7 of \cite{GT1}, so we omit the details here.
\begin{lemma}\label{lem:pa tv Dt v}
If $k=0,\ldots, 2N-1$ and $v$, $\Theta$ are sufficiently regular, then
\begin{equation}\label{est:pat v Dt v l2}
\|\pa_tv-D_tv\|_{L^2H^k}^2\lesssim P(\mathfrak{K}(\eta))\|v\|_{L^2H^k}^2,
\end{equation}
\begin{equation}
\|\pa_t(\Theta \nabla_{\mathscr{A}}y_3)-D_t(\Theta \nabla_{\mathscr{A}}y_3)\|_{L^2H^k}^2\lesssim P(\mathfrak{K}(\eta))\|\Theta\|_{L^2H^k}^2,
\end{equation}
and if $k=0,\ldots, 2N-2$, then
\begin{equation}\label{est:pat v Dt v linfty}
\|\pa_tv-D_tv\|_{L^\infty H^k}^2\lesssim P(\mathfrak{K}(\eta))\|v\|_{L^\infty H^k}^2,
\end{equation}
\begin{equation}
\|\pa_t(\Theta \nabla_{\mathscr{A}}y_3)-D_t(\Theta \nabla_{\mathscr{A}}y_3)\|_{L^\infty H^k}^2\lesssim P(\mathfrak{K}(\eta))\|\Theta\|_{L^\infty H^k}^2.
\end{equation}
If $m=1, \ldots, N-1$, $j=1, \ldots, m$, and $v$, $\Theta$ are sufficiently regular, then
\begin{equation}
\|\pa_t^jv-D_t^jv\|_{L^2H^{2m-2j+3}}^2\lesssim P(\mathfrak{K}(\eta))\sum_{\ell=0}^{j-1}\left(\|\pa_t^\ell v\|_{L^2H^{2m-2j+3}}^2+\|\pa_t^\ell v\|_{L^\infty H^{2m-2j+2}}^2\right),
\end{equation}
\begin{equation}\label{est:pa t Dt v j}
\|\pa_t^jv-D_t^jv\|_{L^\infty H^{2m-2j+2}}^2\lesssim P(\mathfrak{K}(\eta))\sum_{\ell=0}^{j-1}\|\pa_t^\ell v\|_{L^\infty H^{2m-2j+2}}^2,
\end{equation}
\begin{equation}
\|\pa_t^j(\Theta \nabla_{\mathscr{A}}y_3)-D_t^j(\Theta \nabla_{\mathscr{A}}y_3)\|_{L^2H^{2m-2j+2}}^2\lesssim P(\mathfrak{K}(\eta))\sum_{\ell=0}^{j-1}\left(\|\pa_t^\ell \Theta\|_{L^2H^{2m-2j+3}}^2+\|\pa_t^\ell \Theta\|_{L^\infty H^{2m-2j+2}}^2\right),
\end{equation}
\begin{equation}
\|\pa_t^j(\Theta \nabla_{\mathscr{A}}y_3)-D_t^j(\Theta \nabla_{\mathscr{A}}y_3)\|_{L^\infty H^{2m-2j+3}}^2\lesssim P(\mathfrak{K}(\eta))\sum_{\ell=0}^{j-1}\|\pa_t^\ell \Theta\|_{L^\infty H^{2m-2j+2}}^2,
\end{equation}
and
\begin{eqnarray}
\begin{aligned}
&\|\pa_tD_t^mv-\pa_t^{m+1}v\|_{L^2H^1}^2+\|\pa_t^2D_t^mv-\pa_t^{m+2}v\|_{(\mathscr{X}_T)^\ast}^2\\
&\lesssim P(\mathfrak{K}(\eta))\left(\|\pa_t^{m+1}v\|_{(\mathscr{X}_T)^\ast}^2+\sum_{\ell=0}^m\left(\|\pa_t^\ell v\|_{L^2H^1}^2+\|\pa_t^\ell v\|_{L^\infty H^2}^2\right)\right).
\end{aligned}
\end{eqnarray}
Also, if $j=0, \ldots, N$ and $v$ is sufficiently regular, then
\begin{equation}\label{equ:initial v j}
\|\pa_t^jv(0)-D_t^jv(0)\|_{H^{2N-2j}}^2\lesssim P(\mathfrak{E}_0(\eta))\sum_{\ell=0}^{j-1}\|\pa_t^\ell v(0)\|_{H^{2N-2j}}^2,
\end{equation}
and
if $j=0, \ldots, N-1$ and $\Theta$ is sufficiently regular, then
\begin{equation}\label{equ:initial theta j}
\|\pa_t^j(\Theta(0)\nabla_{\mathscr{A}_0}y_{3,0})-D_t^j(\Theta(0)\nabla_{\mathscr{A}_0}y_{3,0})\|_{H^{2N-2j-2}}^2\lesssim P(\mathfrak{E}_0(\eta))\sum_{\ell=0}^{j-1}\|\pa_t^\ell \Theta(0)\|_{H^{2N-2j-2}}^2.
\end{equation}
Here all of the $P(\cdot)$ are polynomials, which are allowed to change from line to line.
\end{lemma}
\begin{lemma}\label{lem:force linear}
For $m=1, \ldots, N-1$ and $j=1, \ldots, m$, the following estimates hold whenever the right-hand sides are finite:
\begin{eqnarray}\label{equ:force l2}
\begin{aligned}
&\|F^{1,j}\|_{L^2H^{2m-2j+1}}^2+\|F^{3,j}\|_{L^2H^{2m-2j+1}}^2+\|F^{4,j}\|_{L^2H^{2m-2j+3/2}}^2+\|F^{5,j}\|_{L^2H^{2m-2j+3/2}}^2\\
&\lesssim P(\mathfrak{K}(\eta))\bigg(\mathfrak{F}+\sum_{\ell=0}^{j-1}\left(\|\pa_t^\ell u\|_{L^2H^{2m-2\ell+3}}^2+\|\pa_t^\ell \theta\|_{L^2H^{2m-2\ell+3}}^2\right)\\
&\quad+\sum_{\ell=0}^{j-1}\Big(\|\pa_t^\ell u\|_{L^\infty H^{2m-2\ell+2}}^2+\|\pa_t^\ell \theta\|_{L^\infty H^{2m-2\ell+2}}^2+\|\pa_t^\ell p\|_{L^2H^{2m-2\ell+2}}^2\\
&\quad+\|\pa_t^\ell p\|_{L^\infty H^{2m-2\ell+1}}^2\Big)\bigg),
\end{aligned}
\end{eqnarray}
\begin{eqnarray}\label{equ:force l infity}
\begin{aligned}
&\|F^{1,j}\|_{L^\infty H^{2m-2j}}^2+\|F^{3,j}\|_{L^\infty H^{2m-2j}}^2+\|F^{4,j}\|_{L^\infty H^{2m-2j+1/2}}^2+\|F^{5,j}\|_{L^\infty H^{2m-2j+1/2}}^2\\
&\lesssim P(\mathfrak{K}(\eta))\bigg(\mathfrak{F}+\sum_{\ell=0}^{j-1}\Big(\|\pa_t^\ell u\|_{L^\infty H^{2m-2\ell+2}}^2+\|\pa_t^\ell \theta\|_{L^\infty H^{2m-2\ell+2}}^2\\
&\quad+\|\pa_t^\ell p\|_{L^\infty H^{2m-2\ell+1}}^2\Big)\bigg),
\end{aligned}
\end{eqnarray}
\begin{eqnarray}\label{equ:force dual}
\begin{aligned}
&\|\pa_t(F^{1,m}-F^{4,m})\|_{L^2({}_0H^1(\Om))^\ast}^2+\|\pa_t(F^{3,m}+F^{5,m})\|_{L^2({}_0H^1(\Om))^\ast}^2\\
&\lesssim P(\mathfrak{K}(\eta))\bigg(\mathfrak{F}+\|\pa_t^m u\|_{L^2 H^{2}}^2+\|\pa_t^m \theta\|_{L^2 H^{2}}^2+\|\pa_t^m p\|_{L^2 H^{1}}^2\\
&\quad+\sum_{\ell=0}^{m-1}\Big(\|\pa_t^\ell u\|_{L^\infty H^{2}}^2+\|\pa_t^\ell u\|_{L^2 H^{2}}^3+\|\pa_t^\ell \theta\|_{L^\infty H^{2}}^2+\|\pa_t^\ell \theta\|_{L^2 H^{2}}^3\\
&\quad+\|\pa_t^\ell p\|_{L^\infty H^{1}}^2+\|\pa_t^\ell p\|_{L^2 H^{2}}^2\Big)\bigg).
\end{aligned}
\end{eqnarray}
Similarly, for $j=1, \ldots, N-1$,
\begin{eqnarray}\label{equ:initial force j}
\begin{aligned}
&\|F^{1,j}(0)\|_{H^{2N-2j-2}}^2+\|F^{3,j}(0)\|_{H^{2N-2j-2}}^2+\|F^{4,j}(0)\|_{H^{2N-2j-3/2}}^2+\|F^{5,j}(0)\|_{H^{2N-2j-3/2}}^2\\
&\lesssim P(\mathfrak{E}_0(\eta))\bigg(\mathfrak{F}_0+\|\pa_t^j \theta(0)\|_{H^{2N-2j}}+\sum_{\ell=0}^{j-1}\big(\|\pa_t^\ell u(0)\|_{H^{2N-2\ell}}\\
&\quad+\|\pa_t^\ell \theta(0)\|_{H^{2N-2\ell}}+\|\pa_t^\ell p(0)\|_{H^{2N-2\ell-1}}\big)\bigg).
\end{aligned}
\end{eqnarray}
Here all of the $P(\cdot)$ are polynomials, which are allowed to change from line to line.
\end{lemma}
\begin{lemma}\label{lem:v,q,G}
Suppose that $v$, $q$, $G^1$, $G^3$ are evaluated at $t=0$ and are sufficiently regular for the right-hand sides of the following estimates to make sense. If $j=0, \ldots, N-1$, then
\begin{eqnarray}\label{equ:initial G1 v q}
\begin{aligned}
&\|\mathfrak{E}^{01}(G^1,v,q)\|_{H^{2N-2j-2}}^2\\
&\lesssim P(\mathfrak{E}_0(\eta))\left(\|v\|_{H^{2N-2j}}^2+\|q\|_{H^{2N-2j-1}}^2+\|G^1\|_{H^{2N-2j-2}}^2\right),
\end{aligned}
\end{eqnarray}
\begin{equation}\label{equ:e02}
\|\mathfrak{E}^{02}(G^3,\Theta)\|_{H^{2N-2j-2}}^2\lesssim P(\mathfrak{E}_0(\eta))\left(\|\Theta\|_{H^{2N-2j}}^2+\|G^3\|_{H^{2N-2j-2}}^2\right).
\end{equation}
If $j=0,\ldots,N-2$, then
\begin{eqnarray}\label{equ:initial g1 g4}
\begin{aligned}
&\|\mathfrak{f}^1(G^1,v)\|_{H^{2N-2j-3}}^2+\|\mathfrak{f}^2(G^4,v)\|_{H^{2N-2j-3/2}}^2+\|\mathfrak{f}^3(G^1,v)\|_{H^{2N-2j-5/2}}^2\\
&\lesssim P(\mathfrak{E}_0(\eta))\left(\|G^1\|_{H^{2N-2j-2}}^2+\|G^4\|_{H^{2N-2j-3/2}}^2+\|v\|_{H^{2N-2j}}^2\right).
\end{aligned}
\end{eqnarray}
For $j=N-1$, if $\mathop{\rm div}\nolimits_{\mathscr{A}(0)}v(0)=0$ in $\Om$, then
\begin{equation}
\|\mathfrak{f}^2(G^4,v)\|_{H^{1/2}}^2+\|\mathfrak{f}^3(G^1,v)\|_{H^{-1/2}}^2\lesssim P(\mathfrak{E}_0(\eta))\left(\|G^1\|_{H^{2}}^2+\|G^4\|_{H^{1/2}}^2+\|v\|_{H^{2}}^2\right).
\end{equation}
Here all of the $P(\cdot)$ are polynomials, which are allowed to change from line to line.
\end{lemma}
Now we can construct the initial data and compatibility conditions. We assume that $u_0\in H^{2N}(\Om)$, $\theta_0\in H^{2N}$, $\eta_0\in H^{2N+1/2}(\Sigma)$. Then we will iteratively construct the initial data $D_t^ju(0)$, $\pa_t^j\theta(0)$ for $j=1,\ldots, N$ and $\pa_t^jp(0)$ for $j=1,\ldots, N-1$. First, we denote $F^{1,0}(0)=F^1(0)\in H^{2N-2}$, $F^{3,0}(0)=F^3(0)\in H^{2N-2}$, $F^{4,0}(0)=F^4(0)\in H^{2N-3/2}$, $F^{5,0}(0)=F^5(0)\in H^{2N-3/2}$ and $D_t^0u(0)=u_0\in H^{2N}$, $\pa_t^0\theta(0)=\theta_0\in H^{2N}$. Suppose now that we have constructed $F^{1,\ell}\in H^{2N-2\ell-2}$, $F^{3,\ell}\in H^{2N-2\ell-2}$, $F^{4,\ell}\in H^{2N-2\ell-3/2}$, $F^{5,\ell}\in H^{2N-2\ell-3/2}$, and $D_t^\ell u(0)\in H^{2N-2\ell}$, $\pa_t^\ell\theta(0)\in H^{2N-2\ell}$ for $0\le\ell\le j\le N-2$; we will construct $\pa_t^jp(0)\in H^{2N-2j-1}$ as well as $D_t^{j+1}u(0)\in H^{2N-2j-2}$, $\pa_t^{j+1}\theta(0)\in H^{2N-2j-2}$, $F^{1,j+1}(0)\in H^{2N-2j-4}$, $F^{3,j+1}(0)\in H^{2N-2j-4}$, $F^{4,j+1}(0)\in H^{2N-2j-7/2}$ and $F^{5,j+1}(0)\in H^{2N-2j-7/2}$ as follows.
By virtue of the estimates in Lemma \ref{lem:v,q,G}, we know that
\begin{eqnarray}
\begin{aligned}
f^1&=\mathfrak{f}^1(F^{1,j}(0),D_t^ju(0))\in H^{2N-2j-3},\\
f^2&=\mathfrak{f}^2(F^{4,j}(0),D_t^ju(0))\in H^{2N-2j-3/2},\\
f^3&=\mathfrak{f}^3(F^{1,j}(0),D_t^ju(0))\in H^{2N-2j-5/2}.
\end{aligned}
\end{eqnarray}
This allows us to define $\pa_t^jp(0)$ as the solution to \eqref{equ:poisson}. The choice of $f^1$, $f^2$, $f^3$ implies that $\pa_t^jp(0)\in H^{2N-2j-1}$, according to Proposition 2.15 of \cite{WL}. Now the estimates \eqref{equ:initial force j}, \eqref{equ:initial v j} and
\eqref{equ:initial G1 v q} allow us to define
\begin{align*}
D_t^{j+1}u(0)&:=\mathfrak{E}^{01}\left(F^{1,j}(0)+\pa_t^j(\theta(0) \nabla_{\mathscr{A}_0}y_{3,0}), D_t^ju(0), \pa_t^jp(0)\right)\in H^{2N-2j-2},\\
\pa_t^{j+1}\theta(0)&:=\mathfrak{E}^{02}\left(F^{3,j}(0),\pa_t^j\theta(0)\right)\in H^{2N-2j-2},\\
F^{1,j+1}(0)&:=D_t^jF^{1,j}(0)-\pa_t^j(\theta(0) \nabla_{\mathscr{A}_0}y_{3,0})+D_t^j(\theta(0) \nabla_{\mathscr{A}_0}y_{3,0})\\
&\quad+\mathfrak{E}^1\left(D_t^ju(0),\pa_t^jp(0)\right)\in H^{2N-2j-4},\\
F^{3,j+1}(0)&:=\pa_tF^{3,j}(0)+\mathfrak{E}^3\left(\pa_t^j\theta(0)\right)\in H^{2N-2j-4},\\
F^{4,j+1}(0)&:=\pa_tF^{4,j}(0)+\mathfrak{E}^4\left(D_t^ju(0),\pa_t^jp(0)\right)\in H^{2N-2j-7/2},\\
F^{5,j+1}(0)&:=\pa_tF^{5,j}(0)+\mathfrak{E}^5\left(\pa_t^j\theta(0)\right)\in H^{2N-2j-7/2}.
\end{align*}
Then, from the above analysis, we can iteratively construct all of the desired data except for $D_t^Nu(0)$, $\pa_t^{N-1}p(0)$ and $\pa_t^N\theta(0)$.
By construction, the initial data $D_t^ju(0)$, $\pa_t^jp(0)$ and $\pa_t^j\theta(0)$ are determined in terms of $u_0$, $\theta_0$ as well as $\pa_t^\ell F^1(0)$, $\pa_t^\ell F^3(0)$, $\pa_t^\ell F^4(0)$ and $\pa_t^\ell F^5(0)$ for $\ell=0, \ldots, N-1$. In order to use these in Theorem \ref{thm:lower regularity} and to construct $D_t^Nu(0)$, $\pa_t^{N-1}p(0)$ and $\pa_t^N\theta(0)$, we must enforce compatibility conditions for $j=0,\ldots,N-1$. We say that the $j$-th compatibility condition is satisfied if
\begin{eqnarray}\label{cond:compatibility j}
\left\{
\begin{aligned}
&D_t^ju(0)\in \mathscr{X}(0)\cap H^2(\Om),\\
&\Pi_0\left(F^{4,j}(0)+\mathbb{D}_{\mathscr{A}_0}D_t^ju(0)\mathscr{N}_0\right)=0.
\end{aligned}
\right.
\end{eqnarray}
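In particular, since $F^{4,0}=F^4$ and $D_t^0u(0)=u_0$, the $0$-th compatibility condition is simply
\[
u_0\in \mathscr{X}(0)\cap H^2(\Om)\quad\text{and}\quad \Pi_0\left(F^{4}(0)+\mathbb{D}_{\mathscr{A}_0}u_0\mathscr{N}_0\right)=0.
\]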
The construction of $D_t^ju(0)$ and $\pa_t^jp(0)$ ensures that $D_t^ju(0)\in H^2(\Om)$ and $\mathop{\rm div}\nolimits_{\mathscr{A}_0}(D_t^ju(0))=0$.
In the following, we define $\pa_t^N\theta(0)\in H^0$, $\pa_t^{N-1}p(0)\in H^1$ and $D_t^N u(0)\in H^0$. First, we can define
\[
\pa_t^N\theta(0)=\mathfrak{E}^{02}(F^{3,N-1}(0),\pa_t^{N-1}\theta(0))\in H^0(\Om),
\]
employing \eqref{equ:e02} for the inclusion in $H^0$. Then, using the same analysis as in \cite{GT1}, the data $\pa_t^{N-1}p(0)\in H^1$ can be defined as a weak solution to \eqref{equ:poisson}. Then we define
\[
D_t^Nu(0)=\mathfrak{E}^{01}\left(F^{1,N-1}(0)+\pa_t^{N-1}(\theta(0) \nabla_{\mathscr{A}_0}y_{3,0}), D_t^{N-1}u(0),\pa_t^{N-1}p(0)\right)\in H^0,
\]
employing \eqref{equ:initial G1 v q} and \eqref{equ:initial theta j} for the inclusion in $H^0$. The inclusion $D_t^Nu(0)\in\mathscr{Y}(0)$ is guaranteed by the construction of $\pa_t^{N-1}p(0)$. Combining the inclusions above with the bounds \eqref{equ:initial force j}, \eqref{equ:initial g1 g4},
\eqref{equ:initial G1 v q} and \eqref{equ:e02} implies that
\begin{eqnarray}\label{equ:est initial data j}
\begin{aligned}
&\sum_{j=0}^N\|D_t^ju(0)\|_{H^{2N-2j}}^2+\sum_{j=0}^{N-1}\|\pa_t^jp(0)\|_{H^{2N-2j-1}}^2+\sum_{j=0}^N\|\pa_t^j\theta(0)\|_{H^{2N-2j}}^2\\
&\lesssim P(\mathfrak{E}_0(\eta))\left(\|u_0\|_{H^{2N}}^2+\|\theta_0\|_{H^{2N}}^2+\mathfrak{F}_0\right).
\end{aligned}
\end{eqnarray}
Before stating the result on higher regularity for solutions to \eqref{equ:linear BC}, we define some quantities:
\begin{eqnarray}
\begin{aligned}
\mathfrak{D}(u,p,\theta)&:=\sum_{j=0}^N\left(\|\pa_t^ju\|_{L^2H^{2N-2j+1}}^2+\|\pa_t^j\theta\|_{L^2H^{2N-2j+1}}^2\right)+\|\pa_t^{N+1}u\|_{(\mathscr{X}_T)^\ast}\\
&\quad+\|\pa_t^{N+1}\theta\|_{(\mathscr{H}^1_T)^\ast}+\sum_{j=0}^{N-1}\|\pa_t^jp\|_{L^2H^{2N-2j}},\\
\mathfrak{E}(u,p,\theta)&:=\sum_{j=0}^N\left(\|\pa_t^ju\|_{L^\infty H^{2N-2j}}^2+\|\pa_t^j\theta\|_{L^\infty H^{2N-2j}}^2\right)+\sum_{j=0}^{N-1}\|\pa_t^jp\|_{L^\infty H^{2N-2j-1}},\\
\mathfrak{K}(u,p,\theta)&:=\mathfrak{D}(u,p,\theta)+\mathfrak{E}(u,p,\theta).
\end{aligned}
\end{eqnarray}
\begin{theorem}\label{thm:higher regularity}
Suppose that $u_0\in H^{2N}(\Om)$, $\theta_0\in H^{2N}(\Om)$, $\eta_0\in H^{2N+1/2}(\Sigma)$, and $\mathfrak{F}<\infty$. Let $D_t^ju(0)\in H^{2N-2j}(\Om)$, $\pa_t^j\theta(0)\in H^{2N-2j}(\Om)$ and $\pa_t^jp(0)\in H^{2N-2j-1}(\Om)$ for $j=1, \ldots, N-1$, along with $D_t^Nu(0)\in\mathscr{Y}(0)$ and $\pa_t^N\theta(0)\in H^0$, all be determined in terms of $u_0$, $\theta_0$ and $\pa_t^jF^1(0)$, $\pa_t^jF^3(0)$, $\pa_t^jF^4(0)$, $\pa_t^jF^5(0)$ for $j=0, \ldots, N-1$.
There exists a universal constant $T_0>0$ such that if $0<T\le T_0$, then there exists a unique strong solution $(u,p,\theta)$ on $[0,T]$ such that
\[
\pa_t^ju\in C^0\left([0,T]; H^{2N-2j}(\Om)\right)\cap L^2\left([0,T];H^{2N-2j+1}(\Om)\right)\quad\text{for}\thinspace j=0,\ldots,N,
\]
\[
\pa_t^jp\in C^0\left([0,T]; H^{2N-2j-1}(\Om)\right)\cap L^2\left([0,T];H^{2N-2j}(\Om)\right)\quad\text{for}\thinspace j=0,\ldots,N-1,
\]
\[
\pa_t^j\theta\in C^0\left([0,T]; H^{2N-2j}(\Om)\right)\cap L^2\left([0,T];H^{2N-2j+1}(\Om)\right)\quad\text{for}\thinspace j=0,\ldots,N,
\]
\[
\pa_t^{N+1}u\in(\mathscr{X}_T)^\ast,\quad\text{and}\quad\pa_t^{N+1}\theta\in(\mathscr{H}^1_T)^\ast.
\]
The triple $(D_t^ju,\pa_t^jp, \pa_t^j\theta)$ satisfies
\begin{eqnarray}\label{equ:higher linear BC}
\left\{
\begin{aligned}
&\pa_t(D_t^ju)-\Delta_{\mathscr{A}}(D_t^ju)+\nabla_{\mathscr{A}}(\pa_t^jp)-\pa_t^j(\theta \nabla_{\mathscr{A}}y_3)=F^{1,j}\quad &\text{in}\thinspace\Om,\\
&\mathop{\rm div}\nolimits_{\mathscr{A}}(D_t^ju)=0\quad &\text{in}\thinspace\Om,\\
&\pa_t(\pa_t^j\theta)-\Delta_{\mathscr{A}}(\pa_t^j\theta)=F^{3,j}\quad &\text{in}\thinspace\Om,\\
&S_{\mathscr{A}}(\pa_t^jp,D_t^ju)\mathscr{N}=F^{4,j}\quad &\text{on}\thinspace \Sigma,\\
&\nabla_{\mathscr{A}}(\pa_t^j\theta)\cdot\mathscr{N}+\pa_t^j\theta\left|\mathscr{N}\right|=F^{5,j}\quad &\text{on}\thinspace \Sigma,\\
&D_t^ju=0,\quad \pa_t^j\theta=0\quad &\text{on}\thinspace \Sigma_b,
\end{aligned}
\right.
\end{eqnarray}
in the strong sense with initial data $\left(D_t^ju(0),\pa_t^jp(0),\pa_t^j\theta(0)\right)$ for $j=0,\ldots,N-1$, and in the weak sense with initial data $D_t^{N}u(0)\in\mathscr{Y}(0)$ and $\pa_t^{N}\theta(0)\in H^0$. Here the forcing terms $F^{1,j}$, $F^{3,j}$, $F^{4,j}$ and $F^{5,j}$ are as defined by \eqref{equ:force 1} and \eqref{equ:force 2}. Moreover, the solution satisfies the estimate
\begin{equation} \label{est:higher regularity}
\mathfrak{K}(u,p,\theta)\lesssim P(\mathfrak{E}_0(\eta),\mathfrak{K}(\eta))\exp\left(T P(\mathfrak{E}(\eta))\right)\left(\|u_0\|_{H^{2N}}^2+\|\theta_0\|_{H^{2N}}^2+\mathfrak{F}_0+\mathfrak{F}\right),
\end{equation}
where the implicit constant is independent of $\eta$.
\end{theorem}
\begin{proof}
First, notice that $P(\cdot,\cdot)$ and $P(\cdot)$ throughout this proof are allowed to change from line to line.
Theorem \ref{thm:lower regularity} guarantees the existence of $(u,p,\theta)$ satisfying the inclusions \eqref{equ:strong solution}. The $(D_t^ju,\pa_t^jp,\pa_t^j\theta)$ are solutions of \eqref{equ:higher linear BC} in the strong sense when $j=0$ and in the weak sense when $j=1$. Finally, the estimate \eqref{inequ:est strong solution} holds.
For an integer $m\ge0$, let $\mathbb{P}_m$ denote the proposition asserting the following three statements. First, $(D_t^ju,\pa_t^jp,\pa_t^j\theta)$ are solutions of \eqref{equ:higher linear BC} in the strong sense for $j=0,\ldots,m$ and in the weak sense when $j=m+1$. Second,
\[
\pa_t^ju\in L^\infty H^{2m-2j+2}\cap L^2H^{2m-2j+3},\quad \pa_t^j\theta\in L^\infty H^{2m-2j+2}\cap L^2H^{2m-2j+3}
\]
for $j=0,1,\ldots,m+1$, $\pa_t^{m+2}u\in(\mathscr{X}_T)^\ast$, $\pa_t^{m+2}\theta\in(\mathscr{H}^1_T)^\ast$ and
\[
\pa_t^jp\in L^\infty H^{2m-2j+1}\cap L^2H^{2m-2j+2}
\]
for $j=0,1,\ldots,m$. Third, the estimate
\begin{eqnarray}\label{est:bound pm}
\begin{aligned}
&\sum_{j=0}^{m+1}\left(\|\pa_t^ju\|_{L^\infty H^{2m-2j+2}}^2+\|\pa_t^ju\|_{L^2H^{2m-2j+3}}^2+\|\pa_t^j\theta\|_{L^\infty H^{2m-2j+2}}^2+\|\pa_t^j\theta\|_{L^2H^{2m-2j+3}}^2\right)\\
&\quad+\|\pa_t^{m+2}u\|_{(\mathscr{X}_T)^\ast}^2+\|\pa_t^{m+2}\theta\|_{(\mathscr{H}^1_T)^\ast}^2+\sum_{j=0}^m\left(\|\pa_t^jp\|_{L^\infty H^{2m-2j+1}}^2+\|\pa_t^jp\|_{L^2H^{2m-2j+2}}^2\right)\\
&\lesssim P(\mathfrak{E}_0(\eta),\mathfrak{K}(\eta))\exp\left(T P(\mathfrak{E}(\eta))\right)\left(\|u_0\|_{H^{2N}}^2+\|\theta_0\|_{H^{2N}}^2+\mathfrak{F}_0+\mathfrak{F}\right)
\end{aligned}
\end{eqnarray}
holds.
We will prove that $\mathbb{P}_m$ holds by finite induction. Theorem \ref{thm:lower regularity} implies that $\mathbb{P}_0$ holds. The rest of the proof is divided into two steps.
Step 1. Proving the first assertion. Suppose that $\mathbb{P}_m$ holds for $m=0, \ldots, N-2$.
From \eqref{equ:force l2}--\eqref{equ:force dual} of Lemma \ref{lem:force linear}, we have that
\begin{eqnarray}\label{equ:force m+1 l2}
\begin{aligned}
&\|F^{1,m+1}(v,q)\|_{L^2H^1}^2+\|F^{3,m+1}(\Theta)\|_{L^2H^1}^2+\|F^{4,m+1}(v,q)\|_{L^2H^{3/2}}^2\\
&\quad+\|F^{5,m+1}(\Theta)\|_{L^2H^{3/2}}^2\\
&\lesssim P(\mathfrak{K}(\eta))\bigg(\mathfrak{F}+\sum_{\ell=0}^{m}\left(\|\pa_t^\ell v\|_{L^2H^3}^2+\|\pa_t^\ell \Theta\|_{L^2H^3}^2\right)\\
&\quad+\sum_{\ell=0}^{m}\Big(\|\pa_t^\ell v\|_{L^\infty H^2}^2+\|\pa_t^\ell \Theta\|_{L^\infty H^2}^2+\|\pa_t^\ell q\|_{L^2H^2}^2+\|\pa_t^\ell q\|_{L^\infty H^1}^2\Big)\bigg),
\end{aligned}
\end{eqnarray}
\begin{eqnarray}\label{equ:force m+1 l infty}
\begin{aligned}
&\|F^{1,m+1}(v,q)\|_{L^\infty H^0}^2+\|F^{3,m+1}(\Theta)\|_{L^\infty H^0}^2+\|F^{4,m+1}(v,q)\|_{L^\infty H^{1/2}}^2\\
&\quad+\|F^{5,m+1}(\Theta)\|_{L^\infty H^{1/2}}^2\\
&\lesssim P(\mathfrak{K}(\eta))\bigg(\mathfrak{F}+\sum_{\ell=0}^m\Big(\|\pa_t^\ell v\|_{L^\infty H^2}^2+\|\pa_t^\ell \Theta\|_{L^\infty H^2}^2+\|\pa_t^\ell q\|_{L^\infty H^1}^2\Big)\bigg),
\end{aligned}
\end{eqnarray}
\begin{eqnarray}\label{equ:force m+1 dual}
\begin{aligned}
&\|\pa_t(F^{1,m+1}(v,q)-F^{4,m+1}(v,q))\|_{L^2({}_0H^1(\Om))^\ast}^2\\
&\quad+\|\pa_t(F^{3,m+1}(\Theta)-F^{5,m+1}(\Theta))\|_{L^2({}_0H^1(\Om))^\ast}^2\\
&\lesssim P(\mathfrak{K}(\eta))\bigg(\mathfrak{F}+\|\pa_t^{m+1} v\|_{L^2 H^{2}}^2+\|\pa_t^{m+1} \Theta\|_{L^2 H^{2}}^2+\|\pa_t^{m+1} q\|_{L^2 H^{1}}^2\\
&\quad+\sum_{\ell=0}^m\Big(\|\pa_t^\ell v\|_{L^\infty H^{2}}^2+\|\pa_t^\ell v\|_{L^2 H^{2}}^3+\|\pa_t^\ell \Theta\|_{L^\infty H^{2}}^2+\|\pa_t^\ell \Theta\|_{L^2 H^{2}}^3\\
&\quad+\|\pa_t^\ell q\|_{L^\infty H^{1}}^2+\|\pa_t^\ell q\|_{L^2 H^{2}}^2\Big)\bigg).
\end{aligned}
\end{eqnarray}
Now we will use the iteration method. We let $u^0$ be the extension of the initial data $\pa_t^ju(0)$, $j=1,\ldots,N$, given by Lemma A.5 in \cite{GT1}, which also gives $\theta^0$, the extension of the initial data $\pa_t^j\theta(0)$, $j=1,\ldots,N$; similarly, let $p^0$ be the extension of $\pa_t^jp(0)$, $j=1,\ldots,N-1$, given by Lemma A.6 in \cite{GT1}. By \eqref{equ:est initial data j} and the estimates given in Lemma A.5 and Lemma A.6 of \cite{GT1}, we have
\begin{eqnarray}\label{est:u0 p0 theta0}
\begin{aligned}
&\sum_{j=0}^N\left(\|\pa_t^ju^0\|_{L^2H^{2N-2j+1}}^2+\|\pa_t^ju^0\|_{L^\infty H^{2N-2j}}^2+\|\pa_t^j\theta^0\|_{L^2H^{2N-2j+1}}^2+\|\pa_t^j\theta^0\|_{L^\infty H^{2N-2j}}^2\right)\\
&\quad+\sum_{j=0}^{N-1}\left(\|\pa_t^jp^0\|_{L^2H^{2N-2j}}^2+\|\pa_t^jp^0\|_{L^\infty H^{2N-2j-1}}^2\right)\\
&\lesssim \sum_{j=0}^N\|D_t^ju(0)\|_{H^{2N-2j}}^2+\sum_{j=0}^{N-1}\|\pa_t^jp(0)\|_{H^{2N-2j-1}}^2+\sum_{j=0}^N\|\pa_t^j\theta(0)\|_{H^{2N-2j}}^2\\
&\lesssim P(\mathfrak{E}_0(\eta))\left(\|u_0\|_{H^{2N}}^2+\|\theta_0\|_{H^{2N}}^2+\mathfrak{F}_0\right).
\end{aligned}
\end{eqnarray}
According to \eqref{equ:force m+1 l2}--\eqref{est:u0 p0 theta0}, we may derive that $F^{1,m+1}(u^0,p^0)$, $F^{3,m+1}(\theta^0)$, $F^{4,m+1}(u^0,p^0)$ and $F^{5,m+1}(\theta^0)$ satisfy \eqref{cond:force}. Also, the compatibility condition \eqref{cond:compatibility} with $F^4$ replaced by $F^{4,m+1}(u^0,p^0)$ and $u_0$ replaced by $D_t^{m+1}u(0)$ holds by \eqref{cond:compatibility j}, since $u^0$ and $p^0$ achieve the initial data. Then we can apply Theorem \ref{thm:lower regularity} to find a triple $(v^1,q^1,\Theta^1)$ satisfying the conclusions of the theorem.
For simplicity, we abbreviate \eqref{equ:linear BC} as
$\mathcal{L}(v,q,\Theta)=\mathbb{F}=(F^1,F^3,F^4,F^5)$. Then
\[
\mathcal{L}(v^1,q^1,\Theta^1)=\mathbb{F}^{m+1}:=(F^{1,m+1}(u^0,p^0),F^{3,m+1}(\theta^0),F^{4,m+1}(u^0,p^0),F^{5,m+1}(\theta^0)),
\]
\[
v^1(0)=D_t^{m+1}u(0),\quad q^1(0)=\pa_t^{m+1}p(0),\quad \Theta^1(0)=\pa_t^{m+1}\theta(0).
\]
If we denote the left-hand side of \eqref{inequ:est strong solution} as $\mathfrak{B}(u,p,\theta)$, then we may combine \eqref{inequ:est strong solution}, \eqref{equ:initial force j}, \eqref{equ:force m+1 l2}, \eqref{equ:force m+1 dual} and \eqref{est:u0 p0 theta0} to derive that
\[
\mathfrak{B}(v^1,q^1,\Theta^1)\lesssim P(\mathfrak{E}_0(\eta),\mathfrak{K}(\eta))\exp\left(P(\mathfrak{E}(\eta))T\right)\left(\|u_0\|_{H^{2N}}^2+\|\theta_0\|_{H^{2N}}^2+\mathfrak{F}_0+\mathfrak{F}\right).
\]
Now, suppose that $(v^n,q^n,\Theta^n)$ is given and satisfies $\mathfrak{B}(v^n,q^n,\Theta^n)<\infty$; we then define $(u^n,p^n,\theta^n)$ as the solution of the ODEs
\begin{eqnarray}\label{equ:ode uv}
\left\{
\begin{aligned}
&D_t^{m+1}u^n=v^n,\\
&\pa_t^ju^n(0)=v^n(0) \quad\text{for}\thinspace j=0,\ldots,m,
\end{aligned}
\right.
\end{eqnarray}
\begin{eqnarray}\label{equ:ode pq}
\left\{
\begin{aligned}
&\pa_t^{m+1}p^n=q^n,\\
&\pa_t^jp^n(0)=q^n(0) \quad\text{for}\thinspace j=0,\ldots,m,
\end{aligned}
\right.
\end{eqnarray}
\begin{eqnarray}\label{equ:ode theta Theta}
\left\{
\begin{aligned}
&\pa_t^{m+1}\theta^n=\Theta^n,\\
&\pa_t^j\theta^n(0)=\Theta^n(0) \quad\text{for}\thinspace j=0,\ldots,m.
\end{aligned}
\right.
\end{eqnarray}
From the well-posedness theory of linear ODEs, we know that these ODEs have unique solutions. If we define $\mathfrak{K}(v,q,\Theta)$ by
\begin{align*}
\mathfrak{K}(v,q,\Theta):&=\|\pa_t^{m+1}v\|_{L^2H^2}^2+\|\pa_t^{m+1}q\|_{L^2H^1}^2+\|\pa_t^{m+1}\Theta\|_{L^2H^2}^2+\sum_{\ell=0}^m\Big(\|\pa_t^\ell v\|_{L^2H^3}^2\\
&\quad+\|\pa_t^\ell v\|_{L^\infty H^2}^2+\|\pa_t^\ell \Theta\|_{L^2H^3}^2+\|\pa_t^\ell \Theta\|_{L^\infty H^2}^2+\|\pa_t^\ell q\|_{L^2H^2}^2+\|\pa_t^\ell q\|_{L^\infty H^1}^2\Big),
\end{align*}
then the solutions of \eqref{equ:ode uv}--\eqref{equ:ode theta Theta} satisfy the estimate
\begin{equation}\label{est:un,pn,thetan}
\begin{aligned}
\mathfrak{K}(u^n,p^n,\theta^n)&\lesssim P(T)P(\mathfrak{K}(\eta))\bigg(\sum_{j=0}^m\|\pa_t^ju(0)\|_{H^3}^2+\|\pa_t^jp(0)\|_{H^2}^2\\
&\quad+\|\pa_t^j\theta(0)\|_{H^3}^2+T\mathfrak{B}(v^n,q^n,\Theta^n)\bigg)<\infty,
\end{aligned}
\end{equation}
where $P(T)$ is a polynomial in $T$.
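For instance, the solution of \eqref{equ:ode pq} is given explicitly by Taylor's formula with integral remainder,
\[
p^n(t)=\sum_{j=0}^{m}\frac{t^j}{j!}\,q^n(0)+\int_0^t\frac{(t-s)^m}{m!}\,q^n(s)\,ds,
\]
which makes the polynomial dependence on $T$ in \eqref{est:un,pn,thetan} transparent; \eqref{equ:ode theta Theta} is solved in the same way.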
Applying Theorem \ref{thm:lower regularity} iteratively, we can obtain sequences $\{(v^n,q^n,\Theta^n)\}_{n=1}^\infty$ and $\{(u^n,p^n,\theta^n)\}_{n=1}^\infty$ satisfying \eqref{equ:ode uv}--\eqref{equ:ode theta Theta} and
\begin{eqnarray}\label{equ:iterate n n-1}
\begin{aligned}
\mathcal{L}(v^n,q^n,\Theta^n)&=\mathbb{F}^{m+1}(u^{n-1},p^{n-1},\theta^{n-1}),\\
v^n(0)&=D_t^{m+1}u(0),\quad q^n(0)=\pa_t^{m+1}p(0),\quad \Theta^n(0)=\pa_t^{m+1}\theta(0).
\end{aligned}
\end{eqnarray}
Then
\begin{align*}
\mathcal{L}(v^{n+1}-v^n,q^{n+1}-q^n,\Theta^{n+1}-\Theta^n)&=\mathbb{F}^{m+1}(u^n-u^{n-1},p^n-p^{n-1},\theta^n-\theta^{n-1}),\\
v^{n+1}(0)-v^n(0)=0,\quad &q^{n+1}(0)-q^n(0)=0,\quad \Theta^{n+1}(0)-\Theta^n(0)=0.
\end{align*}
Since the terms involving $F^1$, $F^3$, $F^4$ and $F^5$ are canceled in $\mathbb{F}^{m+1}(u^n-u^{n-1},p^n-p^{n-1},\theta^n-\theta^{n-1})$, we can use \eqref{equ:force m+1 l2} and \eqref{equ:force m+1 dual} to derive that
\begin{align*}
&\|F^{1,m+1}(u^n-u^{n-1},p^n-p^{n-1})\|_{L^2H^1}^2+\|F^{3,m+1}(\theta^n-\theta^{n-1})\|_{L^2H^1}^2\\
&\quad+\|F^{4,m+1}(u^n-u^{n-1},p^n-p^{n-1})\|_{L^2H^{3/2}}^2+\|F^{5,m+1}(\theta^n-\theta^{n-1})\|_{L^2H^{3/2}}^2\\
&\quad+\|\pa_t(F^{1,m+1}(u^n-u^{n-1},p^n-p^{n-1})-F^{4,m+1}(u^n-u^{n-1},p^n-p^{n-1}))\|_{L^2({}_0H^1(\Om))^\ast}^2\\
&\quad+\|\pa_t(F^{3,m+1}(\theta^n-\theta^{n-1})-F^{5,m+1}(\theta^n-\theta^{n-1}))\|_{L^2({}_0H^1(\Om))^\ast}^2\\
&\lesssim P(\mathfrak{K}(\eta))\mathfrak{K}(u^n-u^{n-1},p^n-p^{n-1},\theta^n-\theta^{n-1}).
\end{align*}
Since, for each $n$, $(u^n,p^n,\theta^n)$ achieves the same initial data, arguing as for the ODEs \eqref{equ:ode uv}--\eqref{equ:ode theta Theta}, we have that
\begin{equation}
\mathfrak{K}(u^n-u^{n-1},p^n-p^{n-1},\theta^n-\theta^{n-1})\lesssim P(\mathfrak{K}(\eta))T P(T)\mathfrak{B}(v^n-v^{n-1},q^n-q^{n-1},\Theta^n-\Theta^{n-1}).
\end{equation}
The above two estimates with \eqref{inequ:est strong solution} imply that
\begin{eqnarray}
\begin{aligned}
&\mathfrak{B}(v^{n+1}-v^n,q^{n+1}-q^n,\Theta^{n+1}-\Theta^n)\\
&\lesssim P(\mathfrak{E}_0(\eta),\mathfrak{K}(\eta))\exp\left(P(\mathfrak{E}(\eta))T\right)\\
&\quad\times T P(T)\mathfrak{B}(v^n-v^{n-1},q^n-q^{n-1},\Theta^n-\Theta^{n-1}),
\end{aligned}
\end{eqnarray}
which implies that there exists a universal $T_0>0$ such that if $T\le T_0$, then the sequence $\{(v^n,q^n,\Theta^n)\}_{n=1}^\infty$ converges to $(v,q,\Theta)$ in the norm $\sqrt{\mathfrak{B}(\cdot,\cdot,\cdot)}$, which in turn shows that $\{(u^n,p^n,\theta^n)\}_{n=1}^\infty$ converges to $(u,p,\theta)$ in the norm $\sqrt{\mathfrak{K}(\cdot,\cdot,\cdot)}$.
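Indeed, if $T_0$ is chosen so small that the factor $TP(T)P(\mathfrak{E}_0(\eta),\mathfrak{K}(\eta))\exp\left(P(\mathfrak{E}(\eta))T\right)$, together with the implied constant, is at most $1/2$ for all $T\le T_0$, then iterating the previous estimate gives
\[
\mathfrak{B}(v^{n+1}-v^n,q^{n+1}-q^n,\Theta^{n+1}-\Theta^n)\le 2^{1-n}\,\mathfrak{B}(v^2-v^1,q^2-q^1,\Theta^2-\Theta^1),
\]
so the sequence is Cauchy with respect to this norm.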
By passing to the limit in \eqref{equ:ode uv}--\eqref{equ:ode theta Theta}, we have that $v=D_t^{m+1}u$, $q=\pa_t^{m+1}p$ and $\Theta=\pa_t^{m+1}\theta$. Then, passing to the limit in \eqref{equ:iterate n n-1}, we have that
\[
\mathcal{L}(D_t^{m+1}u,\pa_t^{m+1}p,\pa_t^{m+1}\theta)=\mathbb{F}^{m+1}(u,p,\theta).
\]
Then Theorem \ref{thm:lower regularity} with the assumption of $\mathbb{P}_m$, which provides that $(D_t^{m+1}u,\pa_t^{m+1}p,\pa_t^{m+1}\theta)$ are solutions of \eqref{equ:higher linear BC} in the strong sense for $j=0,\ldots,m$, enables us to deduce the first assertion of $\mathbb{P}_{m+1}$.
Theorem \ref{thm:lower regularity}, together with the estimates \eqref{equ:force l2}, \eqref{equ:force m+1 dual} and \eqref{est:bound pm}, gives us that
\begin{eqnarray}
\begin{aligned}
&\mathfrak{B}(D_t^{m+1}u,\pa_t^{m+1}p,\pa_t^{m+1}\theta)\\
&\lesssim P(\mathfrak{E}_0(\eta),\mathfrak{K}(\eta))\exp\left(P(\mathfrak{E}(\eta))T\right)\big(\|u_0\|_{H^{2N}}^2+\|\theta_0\|_{H^{2N}}^2\\
&\quad+\mathfrak{F}_0+\mathfrak{F}+\|\pa_t^{m+1}u\|_{L^2H^2}^2+\|\pa_t^{m+1}p\|_{L^2H^1}^2+\|\pa_t^{m+1}\theta\|_{L^2H^2}^2\big).
\end{aligned}
\end{eqnarray}
On the other hand, the estimate \eqref{est:pa t Dt v j} implies that
\begin{eqnarray}
\begin{aligned}
&\|\pa_t^{m+1}u\|_{L^2H^2}^2+\|\pa_t^{m+1}p\|_{L^2H^1}^2+\|\pa_t^{m+1}\theta\|_{L^2H^2}^2\\
&\le T\left(\|\pa_t^{m+1}u\|_{L^\infty H^2}^2+\|\pa_t^{m+1}p\|_{L^\infty H^1}^2+\|\pa_t^{m+1}\theta\|_{L^\infty H^2}^2\right)\\
&\lesssim T\left(\|\pa_t^{m+1}u-D_t^{m+1}u\|_{L^\infty H^2}^2+\|D_t^{m+1}u\|_{L^\infty H^2}^2+\|\pa_t^{m+1}p\|_{L^\infty H^1}^2+\|\pa_t^{m+1}\theta\|_{L^\infty H^2}^2\right)\\
&\lesssim T\Big(P(\mathfrak{K}(\eta))\sum_{\ell=0}^m\|\pa_t^\ell u\|_{L^\infty H^2}^2+\mathfrak{B}(D_t^{m+1}u,\pa_t^{m+1}p,\pa_t^{m+1}\theta)\Big)\\
&\lesssim T\Big(P(\mathfrak{E}_0(\eta),\mathfrak{K}(\eta))\exp\left(P(\mathfrak{E}(\eta))T\right)\big(\|u_0\|_{H^{2N}}^2+\|\theta_0\|_{H^{2N}}^2+\mathfrak{F}_0+\mathfrak{F}\big)\\
&\quad+\mathfrak{B}(D_t^{m+1}u,\pa_t^{m+1}p,\pa_t^{m+1}\theta)\Big),
\end{aligned}
\end{eqnarray}
where in the last inequality, we have used \eqref{est:bound pm} again. Combining the above two estimates, we may further restrict the size of the universal constant $T_0>0$ such that if $T\le T_0$, then
\begin{eqnarray}\label{est:B Dtu pa tp pa ttheta}
\begin{aligned}
&\mathfrak{B}(D_t^{m+1}u,\pa_t^{m+1}p,\pa_t^{m+1}\theta)\\
&\lesssim P(\mathfrak{E}_0(\eta),\mathfrak{K}(\eta))\exp\left(P(\mathfrak{E}(\eta))T\right)\big(\|u_0\|_{H^{2N}}^2+\|\theta_0\|_{H^{2N}}^2+\mathfrak{F}_0+\mathfrak{F}\big).
\end{aligned}
\end{eqnarray}
Step 2. Proving the second and third assertions. In the following, the second and third assertions will be derived simultaneously. The estimate \eqref{est:B Dtu pa tp pa ttheta}, together with Lemma \ref{lem:pa tv Dt v} and the estimate \eqref{est:bound pm}, implies that
\begin{eqnarray}
\begin{aligned}
&\|\pa_t^{m+1}u\|_{L^2H^3}^2+\|\pa_t^{m+2}u\|_{L^2H^1}^2+\|\pa_t^{m+3}u\|_{(\mathscr{X}_T)^\ast}^2+\|\pa_t^{m+1}u\|_{L^\infty H^2}^2+\|\pa_t^{m+2}u\|_{L^\infty H^0}^2\\
&\lesssim P(\mathfrak{K}(\eta))\left(\sum_{\ell=0}^{m+2}\|\pa_t^\ell u\|_{L^2H^{2m-2\ell+3}}^2+\|\pa_t^\ell u\|_{L^\infty H^{2m-2\ell+2}}^2\right)\\
&\quad+P(\mathfrak{E}_0(\eta),\mathfrak{K}(\eta))\exp\left(P(\mathfrak{E}(\eta))T\right)\big(\|u_0\|_{H^{2N}}^2+\|\theta_0\|_{H^{2N}}^2+\mathfrak{F}_0+\mathfrak{F}\big)\\
&\lesssim P(\mathfrak{K}(\eta))P(\mathfrak{E}_0(\eta),\mathfrak{K}(\eta))\exp\left(P(\mathfrak{E}(\eta))T\right)\big(\|u_0\|_{H^{2N}}^2+\|\theta_0\|_{H^{2N}}^2+\mathfrak{F}_0+\mathfrak{F}\big)\\
&\quad+P(\mathfrak{E}_0(\eta),\mathfrak{K}(\eta))\exp\left(P(\mathfrak{E}(\eta))T\right)\big(\|u_0\|_{H^{2N}}^2+\|\theta_0\|_{H^{2N}}^2+\mathfrak{F}_0+\mathfrak{F}\big)\\
&\lesssim P(\mathfrak{E}_0(\eta),\mathfrak{K}(\eta))\exp\left(P(\mathfrak{E}(\eta))T\right)\big(\|u_0\|_{H^{2N}}^2+\|\theta_0\|_{H^{2N}}^2+\mathfrak{F}_0+\mathfrak{F}\big).
\end{aligned}
\end{eqnarray}
Thus
\begin{eqnarray}
\begin{aligned}
&\sum_{j=m+1}^{m+2}\left(\|\pa_t^ju\|_{L^2H^{2(m+1)-2j+3}}^2+\|\pa_t^ju\|_{L^\infty H^{2(m+1)-2j+2}}^2\right)+\|\pa_t^{m+3}u\|_{(\mathscr{X}_T)^\ast}^2\\
&\quad+\sum_{j=m+1}^{m+2}\left(\|\pa_t^jp\|_{L^2H^{2(m+1)-2j+2}}^2+\|\pa_t^jp\|_{L^\infty H^{2(m+1)-2j+1}}^2\right)\\
&\quad+\sum_{j=m+1}^{m+2}\left(\|\pa_t^j\theta\|_{L^2H^{2(m+1)-2j+3}}^2+\|\pa_t^j\theta\|_{L^\infty H^{2(m+1)-2j+2}}^2\right)+\|\pa_t^{m+3}\theta\|_{(\mathscr{X}_T)^\ast}^2\\
&\lesssim P(\mathfrak{E}_0(\eta),\mathfrak{K}(\eta))\exp\left(P(\mathfrak{E}(\eta))T\right)\big(\|u_0\|_{H^{2N}}^2+\|\theta_0\|_{H^{2N}}^2+\mathfrak{F}_0+\mathfrak{F}\big).
\end{aligned}
\end{eqnarray}
Thus, in order to derive the second and third assertions of $\mathbb{P}_{m+1}$, it suffices to prove that
\begin{eqnarray}\label{est:bound pm 1}
\begin{aligned}
&\sum_{j=0}^{m}\left(\|\pa_t^ju\|_{L^2H^{2(m+1)-2j+3}}^2+\|\pa_t^jp\|_{L^2H^{2(m+1)-2j+2}}^2+\|\pa_t^j\theta\|_{L^2H^{2(m+1)-2j+3}}^2\right)\\
&\quad+\sum_{j=0}^m\left(\|\pa_t^ju\|_{L^\infty H^{2(m+1)-2j+2}}^2+\|\pa_t^jp\|_{L^\infty H^{2(m+1)-2j+1}}^2+\|\pa_t^j\theta\|_{L^\infty H^{2(m+1)-2j+2}}^2\right)\\
&\lesssim P(\mathfrak{E}_0(\eta),\mathfrak{K}(\eta))\exp\left(P(\mathfrak{E}(\eta))T\right)\big(\|u_0\|_{H^{2N}}^2+\|\theta_0\|_{H^{2N}}^2+\mathfrak{F}_0+\mathfrak{F}\big).
\end{aligned}
\end{eqnarray}
In order to prove this estimate, we will use the elliptic regularity of Proposition \ref{prop:high regulatrity} with $k=2N$ and an iteration argument. As a first step, we need estimates for the forcing terms. Combining \eqref{est:bound pm} with the estimates \eqref{equ:force l2} and \eqref{equ:force l infity} of Lemma \ref{lem:force linear} implies that
\begin{eqnarray}\label{est:sum force j}
\begin{aligned}
&\sum_{j=1}^{m+1}\Big(\|F^{1,j}\|_{L^2H^{2m-2j+1}}^2+\|F^{3,j}\|_{L^2H^{2m-2j+1}}^2+\|F^{4,j}\|_{L^2H^{2m-2j+3/2}}^2\\
&\quad+\|F^{5,j}\|_{L^2H^{2m-2j+3/2}}^2+\|F^{1,j}\|_{L^\infty H^{2m-2j}}^2+\|F^{3,j}\|_{L^\infty H^{2m-2j}}^2\\
&\quad+\|F^{4,j}\|_{L^\infty H^{2m-2j+1/2}}^2+\|F^{5,j}\|_{L^\infty H^{2m-2j+1/2}}^2\Big)\\
&\lesssim P(\mathfrak{K}(\eta))\bigg(\mathfrak{F}+\sum_{\ell=0}^{j-1}\left(\|\pa_t^\ell u\|_{L^2H^{2m-2\ell+3}}^2+\|\pa_t^\ell \theta\|_{L^2H^{2m-2\ell+3}}^2\right)\\
&\quad+\sum_{\ell=0}^{j-1}\Big(\|\pa_t^\ell u\|_{L^\infty H^{2m-2\ell+2}}^2+\|\pa_t^\ell \theta\|_{L^\infty H^{2m-2\ell+2}}^2+\|\pa_t^\ell p\|_{L^2H^{2m-2\ell+2}}^2\\
&\quad+\|\pa_t^\ell p\|_{L^\infty H^{2m-2\ell+1}}^2\Big)\bigg)\\
&\lesssim P(\mathfrak{E}_0(\eta),\mathfrak{K}(\eta))\exp\left(P(\mathfrak{E}(\eta))T\right)\big(\|u_0\|_{H^{2N}}^2+\|\theta_0\|_{H^{2N}}^2+\mathfrak{F}_0+\mathfrak{F}\big).
\end{aligned}
\end{eqnarray}
The estimates \eqref{est:B Dtu pa tp pa ttheta} and \eqref{est:bound pm}, as well as \eqref{est:pat v Dt v l2} and \eqref{est:pat v Dt v linfty} of Lemma \ref{lem:pa tv Dt v}, allow us to deduce that
\begin{eqnarray}\label{est:pat Dtm u}
\begin{aligned}
&\|\pa_tD_t^mu\|_{L^\infty H^2}^2+\|\pa_tD_t^mu\|_{L^2 H^3}^2\\
&\lesssim \|\pa_tD_t^mu-D_t^{m+1}u\|_{L^\infty H^2}^2+\|\pa_tD_t^mu-D_t^{m+1}u\|_{L^2 H^3}^2\\
&\quad+\|D_t^{m+1}u\|_{L^\infty H^2}^2+\|D_t^{m+1}u\|_{L^2 H^3}^2\\
&\lesssim P(\mathfrak{K}(\eta))\left(\|D_t^mu\|_{L^\infty H^2}^2+\|D_t^mu\|_{L^2 H^3}^2\right)+\|D_t^{m+1}u\|_{L^\infty H^2}^2+\|D_t^{m+1}u\|_{L^2 H^3}^2\\
&\lesssim P(\mathfrak{E}_0(\eta),\mathfrak{K}(\eta))\exp\left(P(\mathfrak{E}(\eta))T\right)\big(\|u_0\|_{H^{2N}}^2+\|\theta_0\|_{H^{2N}}^2+\mathfrak{F}_0+\mathfrak{F}\big).
\end{aligned}
\end{eqnarray}
Since \eqref{equ:higher linear BC} is satisfied in the strong sense for $j=m$, for almost every $t\in [0,T]$, $(D_t^mu, \pa_t^mp, \pa_t^m\theta)$ solves the elliptic system \eqref{equ:SBC} with $F^1$ replaced by $F^{1,m}-\pa_tD_t^mu$, $F^2=0$, $F^3$ replaced by $F^{3,m}-\pa_t(\pa_t^m\theta)$, and $F^4$, $F^5$ replaced by $F^{4,m}$, $F^{5,m}$, respectively. We then apply Proposition \ref{prop:high regulatrity} with $r=5$, square the resulting estimate and integrate over $[0,T]$, to deduce that
\begin{eqnarray}
\begin{aligned}
&\|D_t^mu\|_{L^2H^5}^2+\|\pa_t^mp\|_{L^2H^4}^2+\|\pa_t^m\theta\|_{L^2H^5}^2\\
&\lesssim \|F^{1,m}-\pa_tD_t^mu\|_{L^2H^3}^2+\|F^{3,m}-\pa_t(\pa_t^m\theta)\|_{L^2H^3}^2\\
&\quad+\|F^{4,m}\|_{L^2H^{7/2}}^2+\|F^{5,m}\|_{L^2H^{7/2}}^2\\
&\lesssim \|F^{1,m}\|_{L^2H^3}^2+\|\pa_tD_t^mu\|_{L^2H^3}^2+\|F^{3,m}\|_{L^2H^3}^2+\|\pa_t(\pa_t^m\theta)\|_{L^2H^3}^2\\
&\quad+\|F^{4,m}\|_{L^2H^{7/2}}^2+\|F^{5,m}\|_{L^2H^{7/2}}^2\\
&\lesssim P(\mathfrak{E}_0(\eta),\mathfrak{K}(\eta))\exp\left(P(\mathfrak{E}(\eta))T\right)\big(\|u_0\|_{H^{2N}}^2+\|\theta_0\|_{H^{2N}}^2+\mathfrak{F}_0+\mathfrak{F}\big),
\end{aligned}
\end{eqnarray}
where in the last inequality, we have used \eqref{est:B Dtu pa tp pa ttheta}, \eqref{est:sum force j} and \eqref{est:pat Dtm u}. Similarly, Proposition \ref{prop:high regulatrity} with $r=4$ reveals that
\begin{eqnarray}
\begin{aligned}
&\|D_t^mu\|_{L^\infty H^4}^2+\|\pa_t^mp\|_{L^\infty H^3}^2+\|\pa_t^m\theta\|_{L^\infty H^4}^2\\
&\lesssim \|F^{1,m}-\pa_tD_t^mu\|_{L^\infty H^2}^2+\|F^{3,m}-\pa_t(\pa_t^m\theta)\|_{L^\infty H^2}^2\\
&\quad+\|F^{4,m}\|_{L^\infty H^{5/2}}^2+\|F^{5,m}\|_{L^\infty H^{5/2}}^2\\
&\lesssim P(\mathfrak{E}_0(\eta),\mathfrak{K}(\eta))\exp\left(P(\mathfrak{E}(\eta))T\right)\big(\|u_0\|_{H^{2N}}^2+\|\theta_0\|_{H^{2N}}^2+\mathfrak{F}_0+\mathfrak{F}\big).
\end{aligned}
\end{eqnarray}
By iterating this argument to estimate $\pa_t^ju$, $\pa_t^jp$ and $\pa_t^j\theta$ for $j=1,\ldots,m$, together with the above two estimates, we have that
\begin{align*}
&\|\pa_t^m u\|_{L^\infty H^4}^2+\|\pa_t^mu\|_{L^2H^5}^2\\
&\lesssim P(\mathfrak{E}_0(\eta),\mathfrak{K}(\eta))\exp\left(P(\mathfrak{E}(\eta))T\right)\big(\|u_0\|_{H^{2N}}^2+\|\theta_0\|_{H^{2N}}^2+\mathfrak{F}_0+\mathfrak{F}\big).
\end{align*}
Thus, we have that
\begin{eqnarray}\label{est:bound pm j ge1}
\begin{aligned}
&\sum_{j=1}^{m}\left(\|\pa_t^ju\|_{L^2H^{2(m+1)-2j+3}}^2+\|\pa_t^jp\|_{L^2H^{2(m+1)-2j+2}}^2+\|\pa_t^j\theta\|_{L^2H^{2(m+1)-2j+3}}^2\right)\\
&\quad+\sum_{j=1}^m\left(\|\pa_t^ju\|_{L^\infty H^{2(m+1)-2j+2}}^2+\|\pa_t^jp\|_{L^\infty H^{2(m+1)-2j+1}}^2+\|\pa_t^j\theta\|_{L^\infty H^{2(m+1)-2j+2}}^2\right)\\
&\lesssim P(\mathfrak{E}_0(\eta),\mathfrak{K}(\eta))\exp\left(P(\mathfrak{E}(\eta))T\right)\big(\|u_0\|_{H^{2N}}^2+\|\theta_0\|_{H^{2N}}^2+\mathfrak{F}_0+\mathfrak{F}\big).
\end{aligned}
\end{eqnarray}
Then we apply Proposition \ref{prop:high regulatrity} with $r=2(m+1)+3\le 2N+1$, square the resulting estimate and integrate over $[0,T]$ to see that
\begin{eqnarray}\label{est:bound pm j=0 l2}
\begin{aligned}
&\|u\|_{L^2H^{2(m+1)+3}}^2+\|p\|_{L^2H^{2(m+1)+2}}^2+\|\theta\|_{L^2H^{2(m+1)+3}}^2\\
&\lesssim \|F^1-\pa_tu\|_{L^2H^{2(m+1)+1}}^2+\|F^3-\pa_t\theta\|_{L^2H^{2(m+1)+1}}^2\\
&\quad+\|F^4\|_{L^2H^{2(m+1)+3/2}}^2+\|F^5\|_{L^2H^{2(m+1)+3/2}}^2\\
&\lesssim \|F^1\|_{L^2H^{2(m+1)+1}}^2+\|\pa_tu\|_{L^2H^{2(m+1)+1}}^2+\|F^3\|_{L^2H^{2(m+1)+1}}^2+\|\pa_t\theta\|_{L^2H^{2(m+1)+1}}^2\\
&\quad+\|F^4\|_{L^2H^{2(m+1)+3/2}}^2+\|F^5\|_{L^2H^{2(m+1)+3/2}}^2\\
&\lesssim P(\mathfrak{E}_0(\eta),\mathfrak{K}(\eta))\exp\left(P(\mathfrak{E}(\eta))T\right)\big(\|u_0\|_{H^{2N}}^2+\|\theta_0\|_{H^{2N}}^2+\mathfrak{F}_0+\mathfrak{F}\big),
\end{aligned}
\end{eqnarray}
and then again with $r=2(m+1)+2\le 2N$ to see that
\begin{eqnarray}\label{est:bound pm j=0 linfty}
\begin{aligned}
&\|u\|_{L^\infty H^{2(m+1)+2}}^2+\|p\|_{L^\infty H^{2(m+1)+1}}^2+\|\theta\|_{L^\infty H^{2(m+1)+2}}^2\\
&\lesssim \|F^1-\pa_tu\|_{L^\infty H^{2(m+1)}}^2+\|F^3-\pa_t\theta\|_{L^\infty H^{2(m+1)}}^2\\
&\quad+\|F^4\|_{L^\infty H^{2(m+1)+1/2}}^2+\|F^5\|_{L^\infty H^{2(m+1)+1/2}}^2\\
&\lesssim \|F^1\|_{L^\infty H^{2(m+1)}}^2+\|\pa_tu\|_{L^\infty H^{2(m+1)}}^2+\|F^3\|_{L^\infty H^{2(m+1)}}^2+\|\pa_t\theta\|_{L^\infty H^{2(m+1)}}^2\\
&\quad+\|F^4\|_{L^\infty H^{2(m+1)+1/2}}^2+\|F^5\|_{L^\infty H^{2(m+1)+1/2}}^2\\
&\lesssim P(\mathfrak{E}_0(\eta),\mathfrak{K}(\eta))\exp\left(P(\mathfrak{E}(\eta))T\right)\big(\|u_0\|_{H^{2N}}^2+\|\theta_0\|_{H^{2N}}^2+\mathfrak{F}_0+\mathfrak{F}\big).
\end{aligned}
\end{eqnarray}
Thus \eqref{est:bound pm 1} is obtained by summing \eqref{est:bound pm j ge1}--\eqref{est:bound pm j=0 linfty}. This completes the proof.
\end{proof}
\section{Preliminaries for the nonlinear problem}
In order to use linear theory for the problem \eqref{equ:linear BC} to solve the nonlinear problem \eqref{equ:nonlinear BC}, we have to define forcing terms $F^1$, $F^3$, $F^4$, $F^5$ to be used in the linear estimates. Given $u$, $\theta$, $\eta$, we define
\begin{eqnarray}\label{equ:force u p theta}
\begin{aligned}
F^1(u,\theta,\eta)&=\pa_t\bar{\eta}(1+x_3)K\pa_3u-u\cdot\nabla_{\mathscr{A}}u\quad \text{and}\quad F^4(u,\theta,\eta)=\eta\mathscr{N},\\
F^3(u,\theta,\eta)&=\pa_t\bar{\eta}(1+x_3)K\pa_3\theta-u\cdot\nabla_{\mathscr{A}}\theta \quad \text{and}\quad F^5(u,\theta,\eta)=-\left|\mathscr{N}\right|,
\end{aligned}
\end{eqnarray}
where $\mathscr{A}$, $\mathscr{N}$, $K$ are determined as before by $\eta$. Then we define the quantities $\mathfrak{K}_{N}(u,\theta)$ and $\mathfrak{K}_{N}(u)$ as
\begin{eqnarray}\label{equ:N u theta}
\begin{aligned}
\mathfrak{K}_{N}(u,\theta)&=\sum_{j=0}^{N}\Big(\|\pa_t^ju\|_{L^2H^{2N-2j+1}}^2+\|\pa_t^ju\|_{L^\infty H^{2N-2j}}^2\\
&\quad+\|\pa_t^j\theta\|_{L^2H^{2N-2j+1}}^2+\|\pa_t^j\theta\|_{L^\infty H^{2N-2j}}^2\Big),
\end{aligned}
\end{eqnarray}
and
\begin{equation}
\mathfrak{K}_{N}(u)=\sum_{j=0}^{N}\Big(\|\pa_t^ju\|_{L^2H^{2N-2j+1}}^2+\|\pa_t^ju\|_{L^\infty H^{2N-2j}}^2\Big).
\end{equation}
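Note that, directly from these definitions, the two quantities are related by
\[
\mathfrak{K}_{N}(u,\theta)=\mathfrak{K}_{N}(u)+\mathfrak{K}_{N}(\theta),
\]
where $\mathfrak{K}_{N}(\theta)$ denotes the single-argument quantity applied to $\theta$.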
\subsection{Initial data estimates}\label{sec:initial data}
Since $\eta$ is unknown for the full nonlinear problem, and its evolution is coupled to that of $u$, $p$ and $\theta$, we must construct the initial data for this coupled system from $u_0$, $\theta_0$ and $\eta_0$ alone. Here we define some quantities which differ only slightly from those in \cite{GT1}.
\begin{equation}
\mathscr{E}_0:=\|u_0\|_{H^{2N}}^2+\|\theta_0\|_{H^{2N}}^2+\|\eta_0\|_{H^{2N+1/2}}^2,
\end{equation}
and
\begin{equation}
\mathfrak{E}_0(u,p,\theta):=\sum_{j=0}^N\|\pa_t^ju(0)\|_{H^{2N-2j}}^2+\sum_{j=0}^{N-1}\|\pa_t^jp(0)\|_{H^{2N-2j-1}}^2+\sum_{j=0}^N\|\pa_t^j\theta(0)\|_{H^{2N-2j}}^2.
\end{equation}
For $j=0,\ldots,N-1$,
\begin{eqnarray}
\begin{aligned}
&\mathfrak{F}_0^j(F^1(u,p,\theta),F^3(u,p,\theta),F^4(u,p,\theta),F^5(u,p,\theta))\\
&:=\sum_{\ell=0}^j\Big(\|\pa_t^\ell F^1(0)\|_{H^{2N-2\ell-2}}^2+\|\pa_t^\ell F^3(0)\|_{H^{2N-2\ell-2}}^2+\|\pa_t^\ell F^4(0)\|_{H^{2N-2\ell-3/2}}^2\\
&\quad+\|\pa_t^\ell F^5(0)\|_{H^{2N-2\ell-3/2}}^2\Big).
\end{aligned}
\end{eqnarray}
\begin{equation}
\mathfrak{E}_0^0(\eta):=\|\eta_0\|_{H^{2N+1/2}}^2,
\end{equation}
and for $j=1,\ldots,N$,
\begin{equation}
\mathfrak{E}_0^j(\eta):=\|\eta_0\|_{H^{2N+1/2}}^2+\sum_{\ell=1}^j\|\pa_t^\ell\eta(0)\|_{H^{2N-2\ell+3/2}}^2.
\end{equation}
\begin{equation}
\mathfrak{E}_0^0(u,p,\theta):=\|u_0\|_{H^{2N}}^2+\|\theta_0\|_{H^{2N}}^2,
\end{equation}
and for $j=1,\ldots,N$,
\begin{equation}
\mathfrak{E}_0^j(u,p,\theta):=\sum_{\ell=0}^j\|\pa_t^\ell u(0)\|_{H^{2N-2\ell}}^2+\sum_{\ell=0}^{j-1}\|\pa_t^\ell p(0)\|_{H^{2N-2\ell-1}}^2+\sum_{\ell=0}^j\|\pa_t^\ell\theta(0)\|_{H^{2N-2\ell}}^2.
\end{equation}
The following lemma is a minor modification of Lemma 5.2 in \cite{GT1}, so we omit the details of the proof.
\begin{lemma}\label{lem:preliminary}
For $j=0, \ldots,N$,
\begin{equation}\label{est:pa t Dt u}
\|\pa_t^ju(0)-D_t^ju(0)\|_{H^{2N-2j}}^2\le P_j(\mathfrak{E}_0^j(\eta),\mathfrak{E}_0^j(u,p,\theta))
\end{equation}
and
\begin{equation} \label{est:pa t Dt theta}
\|\pa_t^j(\theta(0)\nabla_{\mathscr{A}_0}y_{3,0})-D_t^j(\theta(0)\nabla_{\mathscr{A}_0}y_{3,0})\|_{H^{2N-2j}}^2\le P_j(\mathfrak{E}_0^j(\eta),\mathfrak{E}_0^j(u,p,\theta))
\end{equation}
for $P_j(\cdot,\cdot)$ a polynomial such that $P_j(0,0)=0$.
For $F^1(u,\theta,\eta)$, $F^3(u,\theta,\eta)$, $F^4(u,\theta,\eta)$ and $F^5(u,\theta,\eta)$ defined by \eqref{equ:force u p theta} and $j=0,\ldots,N-1$, we have that
\begin{equation}\label{est:F 0j}
\mathfrak{F}_0^j(F^1(u,p,\theta),F^3(u,p,\theta),F^4(u,p,\theta),F^5(u,p,\theta))\le P_j(\mathfrak{E}_0^{j+1}(\eta),\mathfrak{E}_0^j(u,p,\theta))
\end{equation}
for $P_j(\cdot,\cdot)$ a polynomial such that $P_j(0,0)=0$.
For $j=1,\ldots,N-1$, let $F^{1,j}(0)$, $F^{3,j}(0)$, $F^{4,j}(0)$ and $F^{5,j}(0)$ be determined by \eqref{equ:force 1}, \eqref{equ:force 2} and \eqref{equ:force u p theta}. Then
\begin{eqnarray}\label{est:force 1345}
\begin{aligned}
&\|F^{1,j}(0)\|_{H^{2N-2j-2}}^2+\|F^{3,j}(0)\|_{H^{2N-2j-2}}^2\\
&\quad+\|F^{4,j}(0)\|_{H^{2N-2j-3/2}}^2+\|F^{5,j}(0)\|_{H^{2N-2j-3/2}}^2\\
&\le P_j(\mathfrak{E}_0^{j+1}(\eta),\mathfrak{E}_0^j(u,p,\theta))
\end{aligned}
\end{eqnarray}
for $P_j(\cdot,\cdot)$ a polynomial such that $P_j(0,0)=0$.
For $j=1,\ldots,N-1$,
\begin{equation}
\left\|\sum_{\ell=0}^j{j \choose \ell}\pa_t^\ell\mathscr{N}(0)\cdot\pa_t^{j-\ell}u(0)\right\|_{H^{2N-2j+3/2}}^2\le P_j(\mathfrak{E}_0^j(\eta),\mathfrak{E}_0^j(u,p,\theta))
\end{equation}
for $P_j(\cdot,\cdot)$ a polynomial such that $P_j(0,0)=0$. Also,
\begin{equation}\label{est:pat eta0}
\|u_0\cdot\mathscr{N}_0\|_{H^{2N-1/2}(\Sigma)}^2\le \|u_0\|_{H^{2N}}^2\left(1+\|\eta_0\|_{H^{2N+1/2}}^2\right).
\end{equation}
\end{lemma}
This lemma allows us to construct all of the initial data $\pa_t^ju(0)$, $\pa_t^j\theta(0)$, $\pa_t^j\eta(0)$ for $j=0,\ldots,N$ and $\pa_t^jp(0)$ for $j=0,\ldots,N-1$.
Assume that $\mathscr{E}_0<\infty$. As before, we will iteratively construct the initial data, but this time using Lemma \ref{lem:preliminary}. We define $\pa_t\eta(0)=u_0\cdot\mathscr{N}_0$, where $u_0\in H^{2N-1/2}(\Sigma)$ and $\mathscr{N}_0$ is determined by $\eta_0$. Estimate \eqref{est:pat eta0} implies that $\|\pa_t\eta(0)\|_{H^{2N-1/2}}^2\lesssim P(\mathscr{E}_0)$ for a polynomial $P(\cdot)$ such that $P(0)=0$, and hence that $\mathfrak{E}_0^0(u,p,\theta)+\mathfrak{E}_0^1(\eta)\lesssim P(\mathscr{E}_0)$. Then \eqref{est:F 0j} with $j=0$ implies that
\begin{equation}
\mathfrak{F}_0^0(F^1(u,p,\theta),F^3(u,p,\theta),F^4(u,p,\theta),F^5(u,p,\theta))\le P_0(\mathfrak{E}_0^0(\eta),\mathfrak{E}_0^0(u,p,\theta))\lesssim P(\mathscr{E}_0)
\end{equation}
for a polynomial $P(\cdot)$ such that $P(0)=0$. Note that in these estimates and in the estimates below, the polynomial $P(\cdot)$ of $\mathscr{E}_0$ is allowed to change from line to line, but it always satisfies $P(0)=0$.
In this paragraph, we give the iterative definition of $\pa_t^jp(0)$, $\pa_t^{j+1}u(0)$, $\pa_t^{j+1}\theta(0)$ and $\pa_t^{j+2}\eta(0)$ for $0\le j\le N-2$. Suppose now that $\pa_t^\ell u(0)$, $\pa_t^\ell\theta(0)$ are known for $\ell=0,\ldots,j$, that $\pa_t^\ell \eta(0)$ is known for $\ell=0,\ldots,j+1$, that $\pa_t^\ell p(0)$ is known for $\ell=0, \ldots,j-1$ (with the exception of $p(0)$ when $j=0$) and that
\begin{eqnarray}\label{est:e0j f0j}
\begin{aligned}
&\mathfrak{E}_0^j(u,p,\theta)+\mathfrak{E}_0^{j+1}(\eta)\\
&\quad+\mathfrak{F}_0^j(F^1(u,p,\theta),F^3(u,p,\theta),F^4(u,p,\theta),F^5(u,p,\theta))\\
&\lesssim P(\mathscr{E}_0).
\end{aligned}
\end{eqnarray}
According to \eqref{est:force 1345} and \eqref{est:pa t Dt u}, we know that
\begin{eqnarray}\label{est:f j1234 Dt u}
\begin{aligned}
&\|D_t^ju(0)\|_{H^{2N-2j}}^2+\|F^{1,j}(0)\|_{H^{2N-2j-2}}^2+\|F^{3,j}(0)\|_{H^{2N-2j-2}}^2\\
&\quad+\|F^{4,j}(0)\|_{H^{2N-2j-3/2}}^2+\|F^{5,j}(0)\|_{H^{2N-2j-3/2}}^2\\
&\lesssim P(\mathscr{E}_0).
\end{aligned}
\end{eqnarray}
By virtue of the estimates \eqref{equ:initial g1 g4}, we have
\begin{eqnarray}
\begin{aligned}
&\|\mathfrak{f}^1(F^{1,j}(0),D_t^ju(0))\|_{H^{2N-2j-3}}^2+\|\mathfrak{f}^2(F^{3,j}(0),D_t^ju(0))\|_{H^{2N-2j-3/2}}^2\\
&\quad+\|\mathfrak{f}^3(F^{1,j}(0),D_t^ju(0))\|_{H^{2N-2j-5/2}}^2\\
&\lesssim P(\mathscr{E}_0).
\end{aligned}
\end{eqnarray}
This allows us to define $\pa_t^jp(0)$ as the solution to \eqref{equ:poisson} with $f^1$, $f^2$, $f^3$ replaced by $\mathfrak{f}^1$, $\mathfrak{f}^2$, $\mathfrak{f}^3$. Proposition 2.15 in \cite{LW} with $k=2N$ and $r=2N-2j-1$ implies that
\begin{equation}\label{est:pa t j p}
\|\pa_t^jp(0)\|_{H^{2N-2j-1}}^2\lesssim P(\mathscr{E}_0).
\end{equation}
Now we define
\begin{equation}
\pa_t^{j+1}\theta(0)=\mathfrak{E}^{02}(\pa_t^j\theta(0),F^{3,j}(0)) \in H^{2N-2j-2}.
\end{equation}
Then according to \eqref{est:e0j f0j} and \eqref{est:f j1234 Dt u}, we have that
\begin{equation}
\|\pa_t^{j+1}\theta(0)\|_{H^{2N-2j-2}}^2\lesssim P(\mathscr{E}_0).
\end{equation}
Now the estimates \eqref{equ:initial G1 v q}, \eqref{est:e0j f0j} and \eqref{est:f j1234 Dt u} allow us to define
\begin{equation}
D_t^{j+1}u(0):=\mathfrak{E}^{01}\left(F^{1,j}(0)+\pa_t^j(\theta(0)\nabla_{\mathscr{A}_0}y_{3,0}), D_t^ju(0),\pa_t^jp(0)\right) \in H^{2N-2j-2},
\end{equation}
and then according to \eqref{est:pa t Dt u}, we have
\begin{equation}\label{est:pa t j+1 u}
\|\pa_t^{j+1}u(0)\|_{H^{2N-2j-2}}^2\le P(\mathscr{E}_0).
\end{equation}
Now the estimates \eqref{est:pat eta0}, \eqref{est:e0j f0j} and \eqref{est:pa t j+1 u} allow us to define
\[
\pa_t^{j+2}\eta(0)=\sum_{\ell=0}^{j+1}{j+1 \choose \ell}\pa_t^\ell\mathscr{N}(0)\cdot\pa_t^{j+1-\ell}u(0),
\]
and to derive the estimate
\begin{equation}\label{est:pa t j+2 eta}
\|\pa_t^{j+2}\eta(0)\|_{H^{2N-2j-5/2}}^2\le P(\mathscr{E}_0).
\end{equation}
Thus, \eqref{est:e0j f0j} together with \eqref{est:pa t j p}--\eqref{est:pa t j+2 eta} imply that
\[
\mathfrak{E}_0^{j+1}(u,p,\theta)+\mathfrak{E}_0^{j+2}(\eta)\le P(\mathscr{E}_0),
\]
and then \eqref{est:F 0j} implies that
\[
\mathfrak{F}_0^{j+1}(F^1(u,p,\theta),F^3(u,p,\theta),F^4(u,p,\theta),F^5(u,p,\theta))\le P(\mathscr{E}_0).
\]
Hence we can deduce the estimate
\begin{align*}
&\mathfrak{E}_0^{j+1}(u,p,\theta)+\mathfrak{E}_0^{j+2}(\eta)\\
&\quad+\mathfrak{F}_0^{j+1}(F^1(u,p,\theta),F^3(u,p,\theta),F^4(u,p,\theta),F^5(u,p,\theta))\\
&\le P(\mathscr{E}_0).
\end{align*}
For $j=N-2$, we have
\begin{eqnarray}\label{est:e0 N-1 f0 N-1}
\begin{aligned}
&\mathfrak{E}_0^{N-1}(u,p,\theta)+\mathfrak{E}_0^N(\eta)\\
&\quad+\mathfrak{F}_0^{N-1}(F^1(u,p,\theta),F^3(u,p,\theta),F^4(u,p,\theta),F^5(u,p,\theta))\\
&\le P(\mathscr{E}_0).
\end{aligned}
\end{eqnarray}
Then, we only need to define $\pa_t^{N-1}p(0)$, $\pa_t^N\theta(0)$ and $\pa_t^Nu(0)$. As in the construction after Lemma \ref{lem:v,q,G}, we need compatibility conditions on $u_0$ and $\eta_0$. So far we have constructed $\pa_t^jp(0)$ for $j=0,\ldots,N-2$; $\pa_t^ju(0)$, $\pa_t^j\theta(0)$, $F^{1,j}(0)$, $F^{3,j}(0)$, $F^{4,j}(0)$, $F^{5,j}(0)$ for $j=0,\ldots,N-1$; and $\pa_t^j\eta(0)$ for $j=0,\ldots,N$. We say that $u_0$ and $\eta_0$ satisfy the $N$-th order compatibility conditions if
\begin{eqnarray}\label{cond:compatibility N}
\left\{
\begin{aligned}
&\nabla_{\mathscr{A}_0}\cdot(D_t^ju(0))=0\quad &\text{in}\thinspace\Om,\\
&D_t^ju(0)=0\quad &\text{on}\thinspace\Sigma_b,\\
&\Pi_0\frak left(F^{4,j}(0)+\mathbb{D}_{\mathscr{A}_0}D_t^ju(0)\mathscr{N}_0\right)=0\quad &\text{on}\thinspace\Sigma,
\end{aligned}
\right.
\end{eqnarray}
for $j=0,\ldots,N-1$, where $\Pi_0$ is the projection defined as in \eqref{def:projection} and $D_t$ is the operator defined by \eqref{def:Dt}. Note that if $u_0$ and $\eta_0$ satisfy \eqref{cond:compatibility N}, then the $j$-th compatibility condition \eqref{cond:compatibility j} is satisfied for $j=0,\ldots,N-1$.
Then the construction of $\pa_t^{N-1}p(0)$ is the same as in \cite{GT1}, using the compatibility condition \eqref{cond:compatibility N} and the elliptic theory of $\mathscr{A}$-Poisson equations \eqref{equ:poisson} developed by Y. Guo and I. Tice in \cite{GT1} and by L. Wu in \cite{LW}. Moreover,
\begin{equation}\label{est:pa t N-1 p}
\|\pa_t^{N-1}p(0)\|_{H^1}^2\le P(\mathscr{E}_0).
\end{equation}
Then we set $\pa_t^N\theta(0)=\mathfrak{E}^{02}(\pa_t^{N-1}\theta(0),F^{3,N-1}(0))\in H^0$ due to \eqref{equ:e02} and \eqref{est:force 1345}, and set $D_t^Nu(0)=\mathfrak{E}^{01}(F^{1,N-1}(0)+\pa_t^{N-1}(\theta \nabla_{\mathscr{A}_0}y_{3,0}), D_t^{N-1}u(0),\pa_t^{N-1}p(0))\in H^0$ due to \eqref{equ:initial G1 v q} and Lemma \ref{lem:preliminary}. The fact that $D_t^Nu(0)\in \mathscr{Y}(0)$ is guaranteed by the construction of $\pa_t^{N-1}p(0)$. As before, we have
\begin{equation}\label{est:pa t N u theta}
\|\pa_t^Nu(0)\|_{H^0}^2+\|\pa_t^N\theta(0)\|_{H^0}^2\lesssim P(\mathscr{E}_0).
\end{equation}
This completes the construction of initial data. Then summing the estimates \eqref{est:e0 N-1 f0 N-1}, \eqref{est:pa t N-1 p} and \eqref{est:pa t N u theta}, we directly have the following proposition.
\begin{proposition}\label{prop:high order initial}
Suppose that $u_0$, $\theta_0$ and $\eta_0$ satisfy $\mathscr{E}_0<\infty$. Let the initial data $\pa_t^ju(0)$, $\pa_t^j\theta(0)$, $\pa_t^j\eta(0)$ for $j=0,\ldots,N$ and $\pa_t^jp(0)$ for $j=0,\ldots,N-1$ be given as above. Then
\begin{equation}\label{est:high order initial}
\mathscr{E}_0\le \mathfrak{E}_0(u,p,\theta)+\mathfrak{E}_0(\eta)\lesssim P(\mathscr{E}_0).
\end{equation}
Here $\mathfrak{E}_0(\eta)=\mathfrak{E}_0^N(\eta)$, which is defined in \eqref{def:norm eta}.
\end{proposition}
\subsection{Transport equation}
Here we consider the equation
\begin{eqnarray}\label{equ:transport}
\left\{
\begin{aligned}
&\pa_t\eta+u_1\pa_1\eta+u_2\pa_2\eta=u_3 \quad \text{on}\thinspace \Sigma,\\
&\eta(0)=\eta_0.
\end{aligned}
\right.
\end{eqnarray}
The local well--posedness of \eqref{equ:transport} has been proved by L. Wu in Theorem 2.17 of \cite{LW}. The idea of his proof is similar to that of the proof of Theorem 5.4 in \cite{GT1}. Moreover, L. Wu proved in Lemma 2.18 of \cite{LW} that the difference between $\eta$ and $\eta_0$ over a small time period is also small.
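As a model of the estimates behind these results, if we assume horizontal periodicity so that no boundary terms arise, then multiplying \eqref{equ:transport} by $\eta$ and integrating by parts over $\Sigma$ yields the basic $L^2$ bound
\[
\frac{1}{2}\frac{d}{dt}\|\eta\|_{L^2}^2\le \frac{1}{2}\|\pa_1u_1+\pa_2u_2\|_{L^\infty}\|\eta\|_{L^2}^2+\|u_3\|_{L^2}\|\eta\|_{L^2},
\]
which closes via Gr\"onwall's inequality; the $H^s$ estimates in \cite{LW} follow the same pattern after commuting derivatives through the equation.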
\subsection{Forcing estimates}
In the next section, for the estimates of the full nonlinear problem, we will need some forcing quantities. Besides $\mathfrak{F}$ and $\mathfrak{F}_0$, which were defined in \eqref{def:force F F0}, we define the following quantities:
\begin{align*}
\mathcal{F}:&=\sum_{j=0}^{N-1}\left(\|\pa_t^j F^1\|_{L^2H^{2N-2j-1}}^2+\|\pa_t^j F^3\|_{L^2H^{2N-2j-1}}^2\right)+\|\pa_t^NF^1\|_{L^2H^0}^2+\|\pa_t^NF^3\|_{L^2H^0}^2\\
&\quad+\sum_{j=0}^N\left(\|\pa_t^jF^4\|_{L^\infty H^{2N-2j-1/2}(\Sigma)}^2+\|\pa_t^jF^5\|_{L^\infty H^{2N-2j-1/2}(\Sigma)}^2\right),
\end{align*}
\begin{align*}
\mathcal{H}:&=\sum_{j=0}^{N-1}\left(\|\pa_t^j F^1\|_{L^2H^{2N-2j-1}}^2+\|\pa_t^j F^3\|_{L^2H^{2N-2j-1}}^2\right)\\
&\quad+\sum_{j=0}^{N-1}\left(\|\pa_t^jF^4\|_{L^2 H^{2N-2j-1/2}(\Sigma)}^2+\|\pa_t^jF^5\|_{L^2 H^{2N-2j-1/2}(\Sigma)}^2\right).
\end{align*}
The following theorem is similar to Theorem 2.21 in \cite{LW} with obvious modifications.
\begin{theorem}\label{lem:forcing estimates}
The forcing terms satisfy the estimates
\begin{eqnarray}
\mathfrak{F}&\lesssim& P(\mathfrak{K}(\eta))+P(\mathfrak{K}_N(u,\theta)),\\
\mathfrak{F}_0&\lesssim& P(\mathscr{E}_0),\\
\mathcal{F}&\lesssim& P(\mathfrak{K}(\eta))+P(\mathfrak{K}_N(u,\theta)),\\
\mathcal{H}&\lesssim& T\left(P(\mathfrak{K}(\eta))+P(\mathfrak{K}_N(u,\theta))\right).
\end{eqnarray}
\end{theorem}
\begin{proof}
The proof of this theorem is the same as that of Theorem 2.21 in \cite{LW}, so we omit the details here.
\end{proof}
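We remark that the factor of $T$ in the bound for $\mathcal{H}$ arises from the elementary in-time inequality
\[
\|f\|_{L^2([0,T];H^k)}^2=\int_0^T\|f(t)\|_{H^k}^2\,dt\le T\|f\|_{L^\infty([0,T];H^k)}^2,
\]
so that $\mathcal{H}$ can be made small by taking $T$ small.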
\section{Local well-posedness for the nonlinear problem}
\subsection{Construction of approximate solutions}
In order to solve \eqref{equ:NBC}, we will construct a sequence of approximate solutions $(u^m, p^m,\theta^m, \eta^m)$ and then take the limit $m\to\infty$. First, we construct an initial triple $(u^0,\theta^0,\eta^0)$ as a starting point; then we iteratively define the sequence $(u^m,p^m,\theta^m,\eta^m)$ for $m\ge1$.
Suppose that the initial data $(u_0,\theta_0,\eta_0)$ has given. According to the Lemma A.5 in \cite{GT1}, there exist $u^0$ and $\theta^0$ defined in $\Om\times [0,\infty)$ with $\pa_t^ju^0(0)=\pa_t^ju(0)$, $\pa_t^j\theta^0(0)=\pa_t^j\theta(0)$, for $j=0,\frak ldots,N$, satisfying
\begin{equation}\label{est:sequence u0 theta0}
\mathfrak{K}_N(u^0,\theta^0)\lesssim P(\mathscr{E}_0).
\end{equation}
Then we consider equation \eqref{equ:poisson} with $u$ replaced by $u^0$. By Theorem 2.17 in \cite{LW}, whose hypotheses are satisfied thanks to \eqref{est:high order initial} and \eqref{est:sequence u0 theta0}, there exists an $\eta^0$ defined on $\Om\times [0,T_0)$ which satisfies $\pa_t^j\eta^0(0)=\pa_t^j\eta(0)$ for $j=0,\ldots,N$ as well as
\[
\mathfrak{K}(\eta^0)\lesssim P(\mathscr{E}_0).
\]
Then for any integer $m\ge1$, we formally define $(u^m,p^m,\theta^m,\eta^m)$ on the time interval $[0,T_m)$ as the solution of the system
\begin{eqnarray}\label{equ:iteration equation}
\left\{
\begin{aligned}
&\pa_tu^m-\Delta_{\mathscr{A}^{m-1}}u^m+\nabla_{\mathscr{A}^{m-1}}p^m+\theta^m\nabla_{\mathscr{A}^{m-1}}y_3^{m-1} &\\
&\qquad=\pa_t\bar{\eta}^{m-1}(1+x_3)K^{m-1}\pa_3u^{m-1} -u^{m-1}\cdot\nabla_{\mathscr{A}^{m-1}}u^{m-1}&\text{in}\thinspace\Om,\\
&\mathop{\rm div}\nolimits_{\mathscr{A}^{m-1}}u^m=0&\text{in}\thinspace\Om,\\
&\pa_t\theta^m-\Delta_{\mathscr{A}^{m-1}}\theta^m=\pa_t\bar{\eta}^{m-1}(1+x_3)K^{m-1}\pa_3\theta^{m-1}-u^{m-1}\cdot\nabla_{\mathscr{A}^{m-1}}\theta^{m-1} &\text{in}\thinspace\Om,\\
&S_{\mathscr{A}^{m-1}}(p^m,u^m)\mathscr{N}^{m-1}=\eta^{m-1}\mathscr{N}^{m-1} &\text{on}\thinspace\Sigma,\\
&\nabla_{\mathscr{A}^{m-1}}\theta^m\cdot\mathscr{N}^{m-1}+\theta^m\left|\mathscr{N}^{m-1}\right|=-\left|\mathscr{N}^{m-1}\right| &\text{on}\thinspace\Sigma,\\
&u^m=0,\quad\theta^m=0 &\text{on}\thinspace\Sigma_b,
\end{aligned}
\right.
\end{eqnarray}
and
\begin{equation} \label{equ:iteration theta}
\pa_t\eta^m=u^m\cdot\mathscr{N}^m\quad\text{on}\thinspace\Sigma,
\end{equation}
where $\mathscr{A}^{m-1}$, $\mathscr{N}^{m-1}$, $K^{m-1}$ are determined in terms of $\eta^{m-1}$ and $\mathscr{N}^m$ is determined in terms of $\eta^m$, with the initial data $(u^m(0), \theta^m(0), \eta^m(0))=(u_0,\theta_0,\eta_0)$.
In the following, we prove that these sequences can be defined for every integer $m\ge1$ and that the existence time $T_m$ does not shrink to $0$ as $m\to \infty$. The following theorem is a modified version of Theorem $2.24$ in \cite{LW}, which improves the estimate \eqref{est:higher regularity} by using the energy structure and elliptic estimates.
\begin{theorem}\label{thm:boundedness}
Suppose $J(0)>\delta>0$. Assume that the initial data $(u_0,\theta_0,\eta_0)$ satisfy $\mathscr{E}_0<\infty$ and that $\pa_t^ju(0)$, $\pa_t^j\theta(0)$, $\pa_t^j\eta(0)$, for $j=0,\ldots,N$, are given as above from Proposition \ref{prop:high order initial}. Then there exist a positive constant $\mathscr{Z}<\infty$ and a time $0<\bar{T}<1$, both depending on $\mathscr{E}_0$, such that if $0<T<\bar{T}$, then there exists a sequence $\{(u^m,p^m,\theta^m,\eta^m)\}_{m=0}^\infty$ (for $m=0$ the tuple is understood as $(u^0,\theta^0,\eta^0)$) satisfying the iteration equation \eqref{equ:iteration equation} on the time interval $[0,T)$ and enjoying the following properties:
\begin{enumerate}[1.]
\item The iteration sequence satisfies
\begin{equation}
\mathfrak{K}_N(u^m,\theta^m)+\mathfrak{K}(\eta^m)\le \mathscr{Z}
\end{equation}
for any integer $m\ge0$, where the temporal norm is taken with respect to $[0,T)$.
\item $J^m(t)\ge\delta/2$ with $0\le t\le T$, for any integer $m\ge0$.
\end{enumerate}
\end{theorem}
\begin{proof}
In this proof, we follow the path of the proof of Theorem 2.24 in \cite{LW} and argue by induction. Let us denote the two assertions above, for a given $m$, as the statement $\mathbb{P}_m$.
Step $1$. $\mathbb{P}_0$ case. The only modification here is that the construction of $u^0$ and $\theta^0$ gives $\mathfrak{K}_N(u^0,\theta^0)\lesssim P(\mathscr{E}_0)$. The rest of the proof of this case is the same as the proof of Theorem $2.24$ in \cite{LW}. Hence $\mathbb{P}_0$ holds; that is, $\mathfrak{K}_N(u^0,\theta^0)+\mathfrak{K}(\eta^0)\le \mathscr{Z}$ with the temporal norm taken with respect to $[0,T)$, and $J^0(t)\ge\delta/2$ for $0\le t\le T$.
In the following, we suppose that $\mathbb{P}_{m-1}$ holds for some $m\ge1$, and prove that $\mathbb{P}_m$ also holds.
Step $2$. $\mathbb{P}_m$ case: energy estimates of $\theta^m$ and $u^m$. By Theorem \ref{thm:higher regularity}, the pair $(D_t^Nu^m,\pa_t^Np^m,\pa_t^N\theta^m)$ satisfies the equation
\begin{eqnarray}
\left\{
\begin{aligned}
&\pa_t(D_t^Nu^m)-\Delta_{\mathscr{A}^{m-1}}(D_t^Nu^m)+\nabla_{\mathscr{A}^{m-1}}(\pa_t^Np^m)&\\
&\qquad-\pa_t^N(\theta^m \nabla_{\mathscr{A}^{m-1}}y_3^{m-1})=F^{1,N}\quad &\text{in}\thinspace\Om,\\
&\mathop{\rm div}\nolimits_{\mathscr{A}^{m-1}}(D_t^Nu^m)=0\quad &\text{in}\thinspace\Om,\\
&\pa_t(\pa_t^N\theta^m)-\Delta_{\mathscr{A}^{m-1}}(\pa_t^N\theta^m)=F^{3,N}\quad &\text{in}\thinspace\Om,\\
&S_{\mathscr{A}^{m-1}}(\pa_t^Np^m,D_t^Nu^m)\mathscr{N}^{m-1}=F^{4,N}\quad &\text{on}\thinspace \Sigma,\\
&\nabla_{\mathscr{A}^{m-1}}(\pa_t^N\theta^m)\cdot\mathscr{N}^{m-1}+\pa_t^N\theta^m\left|\mathscr{N}^{m-1}\right|=F^{5,N}\quad &\text{on}\thinspace \Sigma,\\
&D_t^Nu^m=0,\quad \pa_t^N\theta^m=0\quad &\text{on}\thinspace \Sigma_b,
\end{aligned}
\right.
\end{eqnarray}
in the weak sense, where $F^{1,N}$, $F^{3,N}$, $F^{4,N}$ and $F^{5,N}$ are given in terms of $u^m$, $p^m$, $\theta^m$ and $u^{m-1}$, $p^{m-1}$, $\theta^{m-1}$, $\eta^{m-1}$. Then for any test function $\phi\in(\mathscr{H}^1_T)^{m-1}$, where $(\mathscr{H}^1_T)^{m-1}$ denotes the space $\mathscr{H}^1_T$ with $\eta$ replaced by $\eta^{m-1}$, the following holds:
\begin{align*}
\left<\pa_t(\pa_t^N\theta^m), \phi\right>_{\ast}+\left(\pa_t^N\theta^m,\phi\right)_{\mathscr{H}^1_T}+\left(\pa_t^N\theta^m\left|\mathscr{N}^{m-1}\right|,\phi\right)_{L^2H^0(\Sigma)}\\
=\left(F^{3,N},\phi\right)_{\mathscr{H}^0_T}+\left(F^{5,N},\phi\right)_{L^2H^0(\Sigma)}.
\end{align*}
Therefore, taking the test function $\phi=\pa_t^N\theta^m$, we obtain the energy structure
\begin{eqnarray}
\begin{aligned}
&\f12\int_{\Om} J^{m-1}|\pa_t^N\theta^m|^2+\int_0^t\int_{\Om} J^{m-1}|\nabla_{\mathscr{A}^{m-1}}(\pa_t^N\theta^m)|^2+\int_0^t\int_{\Sigma}|\pa_t^N\theta^m|^2\left|\mathscr{N}^{m-1}\right|\\
&=\f12\int_{\Om} J^{m-1}(0)|\pa_t^N\theta^m(0)|^2+\f12\int_0^t\int_{\Om} \pa_tJ^{m-1}|\pa_t^N\theta^m|^2\\
&\quad+\int_0^t\int_{\Om} J^{m-1}F^{3,N}\pa_t^N\theta^m+\int_0^t\int_{\Sigma}F^{5,N}\pa_t^N\theta^m.
\end{aligned}
\end{eqnarray}
By the induction hypothesis, \eqref{est:high order initial}, trace theory and the Cauchy inequality, we have
\begin{eqnarray}
\begin{aligned}
&\|\pa_t^N\theta^m\|_{L^\infty H^0}^2+\|\pa_t^N\theta^m\|_{L^2H^1}^2\\
&\lesssim \sup_{0\le t\le T}\left(\f12\int_{\Om} J^{m-1}|\pa_t^N\theta^m|^2+\int_0^t\int_{\Om} J^{m-1}|\nabla_{\mathscr{A}^{m-1}}(\pa_t^N\theta^m)|^2+\int_0^t\int_{\Sigma}|\pa_t^N\theta^m|^2\right)\\
&\lesssim \f12\int_{\Om} J^{m-1}(0)|\pa_t^N\theta^m(0)|^2+\f12\int_0^T\int_{\Om} \pa_tJ^{m-1}|\pa_t^N\theta^m|^2+\int_0^T\int_{\Om} J^{m-1}F^{3,N}\pa_t^N\theta^m\\
&\quad+\int_0^T\int_{\Sigma}F^{5,N}\pa_t^N\theta^m\\
&\lesssim P(\mathscr{E}_0)+T\mathscr{Z}\|\pa_t^N\theta^m\|_{L^\infty H^0}^2+\sqrt{T}\mathscr{Z}\|F^{3,N}\|_{L^2H^0}\|\pa_t^N\theta^m\|_{L^\infty H^0}\\
&\quad+\sqrt{T}\|F^{5,N}\|_{L^\infty H^{-1/2}(\Sigma)}\|\pa_t^N\theta^m\|_{L^2H^{1/2}(\Sigma)}\\
&\lesssim P(\mathscr{E}_0)+T\mathscr{Z}\|\pa_t^N\theta^m\|_{L^\infty H^0}^2+\sqrt{T}\|F^{3,N}\|_{L^2H^0}^2\\
&\quad+\sqrt{T}\mathscr{Z}^2\|\pa_t^N\theta^m\|_{L^\infty H^0}^2+\sqrt{T}\|F^{5,N}\|_{L^\infty H^{-1/2}(\Sigma)}^2+\sqrt{T}\|\pa_t^N\theta^m\|_{L^2H^{1/2}(\Sigma)}^2
\end{aligned}
\end{eqnarray}
for a polynomial $P$ with $P(0)=0$. Taking $T\le \min\{1/4, 1/(16\mathscr{Z}^4)\}$ and absorbing the extra terms on the right-hand side into the left-hand side implies
\begin{equation}\label{est:energy thetam um}
\|\pa_t^N\theta^m\|_{L^\infty H^0}^2+\|\pa_t^N\theta^m\|_{L^2H^1}^2\lesssim P(\mathscr{E}_0)+\sqrt{T}\|F^{3,N}\|_{L^2H^0}^2+\sqrt{T}\|F^{5,N}\|_{L^\infty H^{-1/2}(\Sigma)}^2.
\end{equation}
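For the reader's convenience, we record the elementary absorption argument used here in schematic form (with $A$ playing the role of $P(\mathscr{E}_0)$, $B$ the role of the forcing norms, and assuming without loss of generality that $\mathscr{Z}\ge1$): if $X\ge0$ satisfies
\[
X\le A+T\mathscr{Z}X+\sqrt{T}\mathscr{Z}^2X+\sqrt{T}B,
\]
then the choice $T\le\min\{1/4,1/(16\mathscr{Z}^4)\}$ gives $\sqrt{T}\mathscr{Z}^2\le1/4$ and $T\mathscr{Z}\le1/16$, so that $T\mathscr{Z}+\sqrt{T}\mathscr{Z}^2\le1/2$ and hence $X\le2A+2\sqrt{T}B$.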
By the induction hypothesis, we have
\begin{align*}
&\|F^{3,N}\|_{L^2H^0}^2\\
&\lesssim P(\mathfrak{K}(\eta^{m-1}))\left(\sum_{j=0}^{N-1}\|\pa_t^ju^m\|_{L^2H^2}^2+\|\pa_t^j\theta^m\|_{L^2H^2}^2\right)+\mathcal{F}\\
&\lesssim P(\mathscr{E}_0+\mathscr{Z})+\mathcal{F},
\end{align*}
\begin{align*}
&\|F^{5,N}\|_{L^\infty H^{-1/2}(\Sigma)}^2\\
&\lesssim P(\mathfrak{K}(\eta^{m-1}))\left(\sum_{j=0}^{N-1}\|\pa_t^ju^m\|_{L^\infty H^2}^2+\|\pa_t^j\theta^m\|_{L^\infty H^2}^2\right)+\mathcal{F}\\
&\lesssim P(\mathscr{E}_0+\mathscr{Z})+\mathcal{F}.
\end{align*}
And the energy estimates for $u^m$ are the same as in the proof of Theorem $2.24$ in \cite{LW}. Therefore, we have
\begin{equation}
\|\pa_t^N u^m\|_{L^2H^1}^2+\|\pa_t^N \theta^m\|_{L^2H^1}^2\lesssim P(\mathscr{E}_0)+\sqrt{T}P(\mathscr{E}_0+\mathscr{Z})+\sqrt{T}\mathcal{F}.
\end{equation}
Step 3. $\mathbb{P}_m$ case: elliptic estimates for $\theta^m$, $u^m$. For $0\le n\le N-1$, the $n$-th order heat equation is
\begin{eqnarray}
\left\{
\begin{aligned}
&\pa_t(\pa_t^n\theta^m)-\Delta_{\mathscr{A}^{m-1}}\pa_t^n\theta^m=F^{3,n}\quad &\text{in}\thinspace \Om,\\
&\nabla_{\mathscr{A}^{m-1}}\pa_t^n\theta^m\cdot\mathscr{N}^{m-1}+\pa_t^n\theta^m\left|\mathscr{N}^{m-1}\right|=F^{5,n}\quad &\text{on}\thinspace \Sigma,\\
&\pa_t^n\theta^m=0\quad &\text{on}\thinspace \Sigma_b.
\end{aligned}
\right.
\end{eqnarray}
The elliptic estimate in the proof of Lemma \ref{lem:S lower regularity} reveals that
\begin{equation} \label{est:elliptic thetam}
\|\pa_t^n\theta^m\|_{L^2H^{2N-2n+1}}^2\lesssim \|F^{3,n}\|_{L^2H^{2N-2n-1}}^2+\|\pa_t^{n+1}\theta^m\|_{L^2H^{2N-2n-1}}^2+\|F^{5,n}\|_{L^2H^{2N-2n-1/2}}^2.
\end{equation}
Arguing as before,
\begin{align*}
&\|F^{3,n}\|_{L^2H^{2N-2n-1}}^2\\
&\lesssim T P(\mathfrak{K}(\eta^{m-1}))\left(\sum_{j=0}^{N-2}\|\pa_t^j\theta^m\|_{L^\infty H^{2N-2j-1}}^2+\|\pa_t^ju^m\|_{L^\infty H^{2N-2j-1}}^2\right)+\mathcal{H}\\
&\lesssim T P(\mathscr{E}_0+\mathscr{Z})+\mathcal{H}.
\end{align*}
\begin{align*}
&\|F^{5,n}\|_{L^2H^{2N-2n-1/2}}^2\\
&\lesssim T P(\mathfrak{K}(\eta^{m-1}))\left(\sum_{j=0}^{N-2}\|\pa_t^j\theta^m\|_{L^\infty H^{2N-2j-1}}^2+\|\pa_t^ju^m\|_{L^\infty H^{2N-2j-1}}^2\right)+\mathcal{H}\\
&\lesssim T P(\mathscr{E}_0+\mathscr{Z})+\mathcal{H}.
\end{align*}
For the term $\|\pa_t^{n+1}\theta^m\|_{L^2H^{2N-2n-1}}^2$, however, we estimate backward from $n=N-1$ to $n=0$. First, when $n=N-1$, this is exactly the energy estimate for $\theta^m$. Then we iteratively use the elliptic estimate \eqref{est:elliptic thetam} from $n=N-2$ down to $n=0$ to control all the terms $\|\pa_t^{n+1}\theta^m\|_{L^2H^{2N-2n-1}}^2$.
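To make the backward iteration explicit, write $X_n:=\|\pa_t^n\theta^m\|_{L^2H^{2N-2n+1}}^2$ for $0\le n\le N$, so that $X_N=\|\pa_t^N\theta^m\|_{L^2H^1}^2$ is exactly the quantity controlled by the energy estimate. Since $2N-2n-1=2N-2(n+1)+1$, the elliptic estimate \eqref{est:elliptic thetam} takes the schematic form
\[
X_n\lesssim X_{n+1}+\|F^{3,n}\|_{L^2H^{2N-2n-1}}^2+\|F^{5,n}\|_{L^2H^{2N-2n-1/2}}^2,\qquad 0\le n\le N-1,
\]
so the bound on $X_N$ propagates downward to every $X_n$ in finitely many steps.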
And the elliptic estimates for $u^m$ are the same as in the proof of Theorem $2.24$ in \cite{LW}. Therefore, we have that
\begin{eqnarray}\label{est:elliptic thetam um}
\begin{aligned}
&\sum_{n=0}^{N-1}\left(\|\pa_t^nu^m\|_{L^2H^{2N-2n+1}}^2+\|\pa_t^n\theta^m\|_{L^2H^{2N-2n+1}}^2\right)\\
&\lesssim P(\mathscr{E}_0)+\sqrt{T}P(1+\mathscr{E}_0+\mathscr{Z})+\sqrt{T}\mathcal{F}+\mathcal{H}.
\end{aligned}
\end{eqnarray}
Step $4$. $\mathbb{P}_m$ case: synthesis of estimates for $u^m$ and $\theta^m$. Combining \eqref{est:energy thetam um}, \eqref{est:elliptic thetam um} and Lemma $2.19$ in \cite{LW}, we deduce that
\begin{equation}
\mathfrak{K}_N(u^m, \theta^m)\lesssim P(\mathscr{E}_0)+\sqrt{T}P(\mathscr{E}_0+\mathscr{Z})+\sqrt{T}\mathcal{F}+\mathcal{H}.
\end{equation}
Then, by the induction hypothesis and the forcing estimates of Theorem \ref{lem:forcing estimates}, we have that
\[
\mathcal{F}\lesssim P(\mathfrak{K}(\eta^{m-1}))+P(\mathfrak{K}_N(u^{m-1}, \theta^{m-1}))\lesssim P(\mathscr{Z}),
\]
\[
\mathcal{H}\lesssim T\left(P(\mathfrak{K}(\eta^{m-1}))+P(\mathfrak{K}_N(u^{m-1}, \theta^{m-1}))\right)\lesssim T P(\mathscr{Z}).
\]
Hence we obtain the estimate
\begin{equation}
\mathfrak{K}_N(u^m, \theta^m)\le C\left( P(\mathscr{E}_0)+\sqrt{T}P(\mathscr{E}_0+\mathscr{Z})\right)
\end{equation}
for some universal constant $C>0$. Taking $\mathscr{Z}\ge 2 C P(\mathscr{E}_0)$ and then taking $T$ sufficiently small, depending on $\mathscr{Z}$, we can achieve $\mathfrak{K}_N(u^m, \theta^m)\le 2 C P(\mathscr{E}_0)\le \mathscr{Z}$.
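Explicitly, fixing $\mathscr{Z}:=2CP(\mathscr{E}_0)$ and then choosing $T$ so small that $\sqrt{T}\,P(\mathscr{E}_0+\mathscr{Z})\le P(\mathscr{E}_0)$, the previous estimate yields
\[
\mathfrak{K}_N(u^m,\theta^m)\le C\left(P(\mathscr{E}_0)+P(\mathscr{E}_0)\right)=2CP(\mathscr{E}_0)\le\mathscr{Z}.
\]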
Step $5$. $\mathbb{P}_m$ case: estimates for $\eta^m$ and $J^m(t)$. These estimates are exactly the same as in the proof of Theorem $2.24$ in \cite{LW}, so we omit the details here.
Thus, we can take $\mathscr{Z}=P(\mathscr{E}_0)$ for some polynomial $P(\cdot)$ and $T$ small enough, depending on $\mathscr{Z}$, to deduce that
\begin{equation}
\mathfrak{K}_N(u^m,\theta^m)\le\mathscr{Z}
\end{equation}
and
\begin{equation}
J^m(t)\ge\delta/2 \quad \text{for}\thinspace t\in [0,T].
\end{equation}
Hence $\mathbb{P}_m$ holds. By induction, $\mathbb{P}_n$ holds for any integer $n\ge0$.
\end{proof}
\begin{theorem}\label{thm:uniform boundedness}
Assume the same conditions as in Theorem \ref{thm:boundedness}. Then
\begin{equation}
\mathfrak{K}(u^m,p^m,\theta^m)+\mathfrak{K}(\eta^m)\lesssim P(\mathscr{E}_0)
\end{equation}
for a polynomial $P(\cdot)$ satisfying $P(0)=0$.
\end{theorem}
\begin{proof}
From the estimates \eqref{est:higher regularity} and \eqref{est:high order initial}, Theorem \ref{lem:forcing estimates}, as well as Theorem $2.17$ in \cite{LW}, we directly have that
\[
\mathfrak{K}(u^m,p^m,\theta^m)+\mathfrak{K}(\eta^m)\lesssim P(\mathscr{E}_0)+P(\mathfrak{K}_N(u^m,\theta^m)+\mathfrak{K}(\eta^m)).
\]
Then, applying Theorem \ref{thm:boundedness}, we have that
\[
\mathfrak{K}(u^m,p^m,\theta^m)+\mathfrak{K}(\eta^m)\lesssim P(\mathscr{E}_0).
\]
\end{proof}
\subsection{Contraction}
According to Theorem \ref{thm:uniform boundedness}, we may extract weakly convergent subsequences from $\{(u^m,p^m,\theta^m,\eta^m)\}_{m=0}^\infty$. Unfortunately, the full sequence cannot be guaranteed to converge to the same limit. In order to obtain the desired solution to \eqref{equ:NBC} by passing to the limit in \eqref{equ:iteration equation} and \eqref{equ:iteration theta}, we need to study its contraction in a suitable norm.
For $T>0$, we define the norms
\begin{eqnarray}
\begin{aligned}
\mathfrak{N}(v,q,\Theta; T)&=\|v\|_{L^\infty H^2}^2+\|v\|_{L^2H^3}^2+\|\pa_tv\|_{L^\infty H^0}^2+\|\pa_tv\|_{L^2H^1}^2+\|q\|_{L^\infty H^1}^2+\|q\|_{L^2H^2}^2\\
&\quad+\|\Theta\|_{L^\infty H^2}^2+\|\Theta\|_{L^2H^3}^2+\|\pa_t\Theta\|_{L^\infty H^0}^2+\|\pa_t\Theta\|_{L^2H^1}^2,\\
\mathfrak{M}(\zeta;T)&=\|\zeta\|_{L^\infty H^{5/2}}^2+\|\pa_t\zeta\|_{L^\infty H^{3/2}}^2+\|\pa_t^2\zeta\|_{L^2H^{1/2}}^2,
\end{aligned}
\end{eqnarray}
where the norm $L^pH^k$ is $L^p([0,T];H^k(\Om))$ in $\mathfrak{N}$, and is $L^p([0,T];H^k(\Sigma))$ in $\mathfrak{M}$.
The next theorem is used not only to prove the contraction of the approximate solutions but also to verify the uniqueness of solutions to \eqref{equ:NBC}. To avoid confusion with $\{(u^m,p^m,\theta^m,\eta^m)\}$, we denote velocities by $v^j$, $w^j$, pressures by $q^j$, temperatures by $\Theta^j$, $\vartheta^j$, and surface functions by $\zeta^j$ for $j=1,2$.
\begin{theorem}\label{thm:contraction}
For $j=1,2$, suppose that $v^j$, $q^j$, $\Theta^j$, $w^j$, $\vartheta^j$ and $\zeta^j$ satisfy the initial conditions $\pa_t^kv^1(0)=\pa_t^kv^2(0)$ and $\pa_t^k\Theta^1(0)=\pa_t^k\Theta^2(0)$ for $k=0,1$, $q^1(0)=q^2(0)$ and $\zeta^1(0)=\zeta^2(0)$, and that the following system holds:
\begin{eqnarray}\label{equ:difference}
\left\{
\begin{aligned}
&\pa_tv^j-\Delta_{\mathscr{A}^j}v^j+\nabla_{\mathscr{A}^j}q^j-\Theta^j\nabla_{\mathscr{A}^j}y_3^j=\pa_t\bar{\zeta}^j(1+x_3)K^j\pa_3w^j\\
&\qquad-w^j\cdot\nabla_{\mathscr{A}^j}w^j\quad &\text{in}\thinspace \Om,\\
&\mathop{\rm div}\nolimits_{\mathscr{A}^j}v^j=0 \quad &\text{in}\thinspace \Om,\\
&\pa_t\Theta^j-\Delta_{\mathscr{A}^j}\Theta^j=\pa_t\bar{\zeta}^j(1+x_3)K^j\pa_3\vartheta^j-w^j\cdot\nabla_{\mathscr{A}^j}\vartheta^j\quad &\text{in}\thinspace \Om,\\
&S_{\mathscr{A}^j}(q^j,v^j)\mathscr{N}^j=\zeta^j\mathscr{N}^j\quad &\text{on}\thinspace \Sigma,\\
&\nabla_{\mathscr{A}^j}\Theta^j\cdot\mathscr{N}^j+\Theta^j\left|\mathscr{N}^j\right|=-\left|\mathscr{N}^j\right|\quad &\text{on}\thinspace \Sigma,\\
&v^j=0,\quad \Theta^j=0\quad &\text{on}\thinspace \Sigma_b,\\
&\pa_t\zeta^j=w^j\cdot\mathscr{N}^j\quad &\text{on}\thinspace \Sigma,
\end{aligned}
\right.
\end{eqnarray}
where $\mathscr{A}^j$, $\mathscr{N}^j$, $K^j$ are determined by $\zeta^j$. Assume that $\mathfrak{K}(v^j,q^j,\Theta^j)$, $\mathfrak{K}(w^j,0,\vartheta^j)$ and $\mathfrak{K}(\zeta^j)$ are bounded by $\mathscr{Z}$.
Then there exists $0<T_1<1$ such that for any $0<T<T_1$ we have
\begin{equation}\label{est:n}
\mathfrak{N}(v^1-v^2,q^1-q^2,\Theta^1-\Theta^2;T)\le \f12\mathfrak{N}(w^1-w^2,0,\vartheta^1-\vartheta^2;T),
\end{equation}
\begin{equation}\label{est:m}
\mathfrak{M}(\zeta^1-\zeta^2;T)\lesssim \mathfrak{N}(w^1-w^2,0,\vartheta^1-\vartheta^2;T).
\end{equation}
\end{theorem}
\begin{proof}
This proof follows the path of the proof of Theorem $6.2$ in \cite{GT1}. First, we define $v=v^1-v^2$, $w=w^1-w^2$, $\Theta=\Theta^1-\Theta^2$, $\vartheta=\vartheta^1-\vartheta^2$ and $q=q^1-q^2$.
Step 1. Energy evolution for the differences. As in the proof of Theorem $6.2$ in \cite{GT1}, we can derive the PDE satisfied by $v$, $q$ and $\Theta$:
\begin{eqnarray}
\left\{
\begin{aligned}
&\pa_tv+\mathop{\rm div}\nolimits_{\mathscr{A}^1}S_{\mathscr{A}^1}(q,v)-\Theta \nabla_{\mathscr{A}^1}y_3^1=\mathop{\rm div}\nolimits_{\mathscr{A}^1}(\mathbb{D}_{(\mathscr{A}^1-\mathscr{A}^2)}v^2)+H^1 \quad &\text{in}\thinspace \Om,\\
&\mathop{\rm div}\nolimits_{\mathscr{A}^1}v=H^2 \quad &\text{in}\thinspace \Om,\\
&\pa_t\Theta-\Delta_{\mathscr{A}^1}\Theta=\mathop{\rm div}\nolimits_{\mathscr{A}^1}(\nabla_{(\mathscr{A}^1-\mathscr{A}^2)}\Theta^2)+H^3 \quad &\text{in}\thinspace \Om,\\
&S_{\mathscr{A}^1}(q,v)\mathscr{N}^1=\mathbb{D}_{(\mathscr{A}^1-\mathscr{A}^2)}v^2\mathscr{N}^1+H^4 \quad &\text{on}\thinspace \Sigma,\\
&\nabla_{\mathscr{A}^1}\Theta\cdot\mathscr{N}^1+\Theta\left|\mathscr{N}^1\right|=-\nabla_{(\mathscr{A}^1-\mathscr{A}^2)}\Theta^2\cdot\mathscr{N}^1+H^5 \quad &\text{on}\thinspace \Sigma,\\
&v=0,\quad \Theta=0 \quad &\text{on}\thinspace \Sigma_b,\\
&v(t=0)=0,\quad \Theta(t=0)=0,
\end{aligned}
\right.
\end{eqnarray}
and the PDE satisfied by $\pa_tv$, $\pa_tq$, $\pa_t\Theta$, obtained by taking a temporal derivative of the above system:
\begin{eqnarray}
\left\{
\begin{aligned}
&\pa_t(\pa_t v)+\mathop{\rm div}\nolimits_{\mathscr{A}^1}S_{\mathscr{A}^1}(\pa_t q,\pa_t v)-\pa_t(\Theta \nabla_{\mathscr{A}^1}y_3^1)&\\
&\qquad=\mathop{\rm div}\nolimits_{\mathscr{A}^1}(\mathbb{D}_{\pa_t(\mathscr{A}^1-\mathscr{A}^2)}v^2)+\tilde{H}^1 \thinspace &\text{in}\thinspace \Om,\\
&\mathop{\rm div}\nolimits_{\mathscr{A}^1}\pa_t v=\tilde{H}^2 \thinspace &\text{in}\thinspace \Om,\\
&\pa_t(\pa_t\Theta)-\Delta_{\mathscr{A}^1}\pa_t\Theta=\mathop{\rm div}\nolimits_{\mathscr{A}^1}(\nabla_{(\pa_t\mathscr{A}^1-\pa_t\mathscr{A}^2)}\Theta^2)+\tilde{H}^3 \thinspace &\text{in}\thinspace \Om,\\
&S_{\mathscr{A}^1}(\pa_t q,\pa_t v)\mathscr{N}^1=\mathbb{D}_{(\pa_t\mathscr{A}^1-\pa_t\mathscr{A}^2)}v^2\mathscr{N}^1+\tilde{H}^4 \thinspace &\text{on}\thinspace \Sigma,\\
&\nabla_{\mathscr{A}^1}\pa_t\Theta\cdot\mathscr{N}^1+\pa_t\Theta\left|\mathscr{N}^1\right|=-\nabla_{\pa_t(\mathscr{A}^1-\mathscr{A}^2)}\Theta^2\cdot\mathscr{N}^1+\tilde{H}^5 \thinspace &\text{on}\thinspace \Sigma,\\
&\pa_tv=0,\quad \pa_t\Theta=0 \thinspace &\text{on}\thinspace \Sigma_b,\\
&\pa_tv(t=0)=0,\quad \pa_t\Theta(t=0)=0,
\end{aligned}
\right.
\end{eqnarray}
where $H^2$, $H^4$, $\tilde{H}^2$ and $\tilde{H}^4$ have been given by Y. Guo and I. Tice in \cite{GT1}, and
\begin{align*}
H^1&=\Theta^2\nabla_{\mathscr{A}^1-\mathscr{A}^2}y_3^1+\Theta^2\nabla_{\mathscr{A}^2}(y_3^1-y_3^2)+\mathop{\rm div}\nolimits_{\mathscr{A}^1-\mathscr{A}^2}(\mathbb{D}_{\mathscr{A}^2}v^2)-\nabla_{\mathscr{A}^1-\mathscr{A}^2}q^2\\
&\quad+\pa_t\bar{\zeta}^1(1+x_3)K^1(\pa_3w^1-\pa_3w^2)+(\pa_t\bar{\zeta}^1-\pa_t\bar{\zeta}^2)(1+x_3)K^1\pa_3w^2\\
&\quad+\pa_t\bar{\zeta}^1(1+x_3)(K^1-K^2)\pa_3w^2-(w^1-w^2)\cdot\nabla_{\mathscr{A}^1}w^1-w^2\cdot\nabla_{\mathscr{A}^1}(w^1-w^2)\\
&\quad-w^2\cdot\nabla_{\mathscr{A}^1-\mathscr{A}^2}w^2,\\
H^3&=\mathop{\rm div}\nolimits_{\mathscr{A}^1-\mathscr{A}^2}(\nabla_{\mathscr{A}^2}\Theta^2)+\pa_t\bar{\zeta}^1(1+x_3)K^1(\pa_3\vartheta^1-\pa_3\vartheta^2)\\
&\quad+(\pa_t\bar{\zeta^1}-\pa_t\bar{\zeta}^2)(1+x_3)K^1\pa_3\vartheta^2+\pa_t\bar{\zeta}^1(K^1-K^2)\pa_3w^2-(w^1-w^2)\cdot\nabla_{\mathscr{A}^1}\vartheta^1\\
&\quad-w^2\cdot\nabla_{\mathscr{A}^1}(\vartheta^1-\vartheta^2)-w^2\cdot\nabla_{\mathscr{A}^1-\mathscr{A}^2}\vartheta^2,\\
H^5&=-\nabla_{\mathscr{A}^2}\Theta^2\cdot(\mathscr{N}^1-\mathscr{N}^2)-\Theta^2\left(\left|\mathscr{N}^1\right|-\left|\mathscr{N}^2\right|\right),\\
\tilde{H}^1&=\pa_tH^1+\mathop{\rm div}\nolimits_{\pa_t\mathscr{A}^1}(\mathbb{D}_{\mathscr{A}^1-\mathscr{A}^2}v^2)+\mathop{\rm div}\nolimits_{\mathscr{A}^1}(\mathbb{D}_{\mathscr{A}^1-\mathscr{A}^2}\pa_tv^2)+\mathop{\rm div}\nolimits_{\pa_t\mathscr{A}^1}(\mathbb{D}_{\mathscr{A}^1}v)\\
&\quad+\mathop{\rm div}\nolimits_{\mathscr{A}^1}(\mathbb{D}_{\pa_t\mathscr{A}^1}v)-\nabla_{\pa_t\mathscr{A}^1}q,\\
\tilde{H}^3&=\pa_tH^3+\mathop{\rm div}\nolimits_{\pa_t\mathscr{A}^1}(\nabla_{(\mathscr{A}^1-\mathscr{A}^2)}\Theta^2)+\mathop{\rm div}\nolimits_{\mathscr{A}^1}(\nabla_{(\mathscr{A}^1-\mathscr{A}^2)}\pa_t\Theta^2)+\mathop{\rm div}\nolimits_{\pa_t\mathscr{A}^1}\nabla_{\mathscr{A}^1}\Theta\\
&\quad+\mathop{\rm div}\nolimits_{\mathscr{A}^1}\nabla_{\pa_t\mathscr{A}^1}\Theta,\\
\tilde{H}^5&=\pa_tH^5-\nabla_{(\mathscr{A}^1-\mathscr{A}^2)}\pa_t\Theta^2\cdot\mathscr{N}^1-\nabla_{(\mathscr{A}^1-\mathscr{A}^2)}\Theta^2\cdot\pa_t\mathscr{N}^1-\nabla_{\mathscr{A}^1}\Theta\cdot\pa_t\mathscr{N}^1\\
&\quad-\nabla_{\pa_t\mathscr{A}^1}\Theta\cdot\mathscr{N}^1-\Theta\pa_t\left|\mathscr{N}^1\right|.
\end{align*}
Then we can deduce the equations
\begin{eqnarray}\label{equ:evolution difference}
\begin{aligned}
&\f12\int_{\Om}|\pa_tv|^2J^1(t)+\f12\int_0^t\int_{\Om}|\mathbb{D}_{\mathscr{A}^1}\pa_tv|^2J^1\\
&=\f12\int_0^t\int_{\Om}|\pa_tv|^2(\pa_tJ^1K^1)J^1+\int_0^t\int_{\Om}\pa_t(\Theta \nabla_{\mathscr{A}^1}y_3^1)\cdot\pa_tv J^1\\
&\quad+\int_0^t\int_{\Om}J^1(\tilde{H}^1\cdot \pa_tv+\tilde{H}^2\pa_tq)\\
&\quad-\f12\int_0^t\int_{\Om}J^1\mathbb{D}_{\pa_t\mathscr{A}^1-\pa_t\mathscr{A}^2}v^2:\mathbb{D}_{\mathscr{A}^1}\pa_t v-\int_0^t\int_{\Sigma}\tilde{H}^3\cdot\pa_tv,\\
&\f12\int_{\Om}|\pa_t\Theta|^2J^1(t)+\int_0^t\int_{\Om}|\nabla_{\mathscr{A}^1}\pa_t\Theta|^2J^1+\int_0^t\int_{\Sigma}|\pa_t\Theta|^2\frak left|\mathscr{N}^1\right|\\
&=\f12\int_0^t\int_{\Om}|\pa_t\Theta|^2(\pa_tJ^1K^1)J^1+\int_0^t\int_{\Om}J^1\tilde{H}^3\cdot \pa_t\Theta\\
&\quad-\int_0^t\int_{\Om}J^1\nabla_{\pa_t\mathscr{A}^1-\pa_t\mathscr{A}^2}\Theta^2\cdot\nabla_{\mathscr{A}^1}\pa_t \Theta+\int_0^t\int_{\Sigma}\tilde{H}^5\cdot\pa_t\Theta.
\end{aligned}
\end{eqnarray}
Step 2. Estimates for the forcing terms. Now we need to estimate the forcing terms that appear on the right-hand sides of \eqref{equ:evolution difference}. Throughout this section, $P(\cdot)$ denotes a polynomial with $P(0)=0$ that is allowed to change from line to line. The estimates for $\|\tilde{H}^1\|_0$, $\|\tilde{H}^2\|_0$, $\|\pa_t\tilde{H}^2\|_0$, $\|\tilde{H}^4\|_{-1/2}$, $\|H^1\|_r$, $\|H^2\|_{r+1}$, $\|H^4\|_{r+1/2}$, $\|\mathop{\rm div}\nolimits_{\mathscr{A}^1}(\mathbb{D}_{(\mathscr{A}^1-\mathscr{A}^2)}v^2)\|_r$ and $\|\mathbb{D}_{(\mathscr{A}^1-\mathscr{A}^2)}v^2\mathscr{N}^1\|_{r+1/2}$ have been carried out by Guo and Tice in \cite{GT1}, so we can use them directly after replacing $\varepsilon$ by $\mathscr{Z}$. By the same method, we can also deduce that
\begin{eqnarray}\label{est:tilde H3}
\begin{aligned}
\|\tilde{H}^3\|_0&\lesssim P(\sqrt{\mathscr{Z}})\big(\|\Theta\|_2+\|\zeta^1-\zeta^2\|_{3/2}+\|\pa_t\zeta^1-\pa_t\zeta^2\|_{1/2}+\|\pa_t^2\zeta^1-\pa_t^2\zeta^2\|_{1/2}\\
&\quad+\|w^1-w^2\|_0+\|\pa_tw^1-\pa_tw^2\|_0+\|\vartheta^1-\vartheta^2\|_1+\|\pa_t\vartheta^1-\pa_t\vartheta^2\|_1\big),
\end{aligned}
\end{eqnarray}
\begin{equation}\label{est:tilde H5}
\|\tilde{H}^5\|_{-1/2}\lesssim P(\sqrt{\mathscr{Z}})\big(\|\zeta^1-\zeta^2\|_{1/2}+\|\pa_t\zeta^1-\pa_t\zeta^2\|_{1/2}+\|\Theta\|_2\big),
\end{equation}
and for $r=0,1$,
\begin{eqnarray}\label{est:bound H3}
\begin{aligned}
\|H^3\|_r &\lesssim P(\sqrt{\mathscr{Z}})\big(\|\zeta^1-\zeta^2\|_{r+1/2}+\|\pa_t\zeta^1-\pa_t\zeta^2\|_{r-1/2}\\
&\quad+\|w^1-w^2\|_r+\|\vartheta^1-\vartheta^2\|_{r+1}\big),
\end{aligned}
\end{eqnarray}
\begin{equation}
\|H^5\|_{r+1/2}\lesssim P(\sqrt{\mathscr{Z}})\|\zeta^1-\zeta^2\|_{r+3/2},
\end{equation}
\begin{equation}
\|\mathop{\rm div}\nolimits_{\mathscr{A}^1}(\nabla_{\mathscr{A}^1-\mathscr{A}^2}\Theta^2)\|_r\lesssim P(\sqrt{\mathscr{Z}})\|\zeta^1-\zeta^2\|_{r+3/2},
\end{equation}
\begin{equation}\label{est:bound difference theta2}
\|\nabla_{\mathscr{A}^1-\mathscr{A}^2}\Theta^2\cdot\mathscr{N}^1\|_{r+1/2}\lesssim P(\sqrt{\mathscr{Z}})\|\zeta^1-\zeta^2\|_{r+3/2}.
\end{equation}
Step 3. Energy estimates for $\pa_tv$ and $\pa_t\Theta$. First, owing to the assumptions and Sobolev embeddings, we obtain that
\begin{equation}\label{est:bound J K}
\|J^1\|_{L^\infty}+\|K^1\|_{L^\infty}\lesssim 1+P(\sqrt{\mathscr{Z}})\quad \text{and}\quad \|\pa_tJ^1\|_{L^\infty}\lesssim P(\sqrt{\mathscr{Z}}).
\end{equation}
The bounds in \eqref{est:bound J K} reveal that
\begin{equation}\label{est:rhs 1 evolution difference}
\f12\int_0^t\int_{\Om}|\pa_t\Theta|^2(\pa_tJ^1K^1)J^1\lesssim P(\sqrt{\mathscr{Z}})\f12\int_0^t\int_{\Om}|\pa_t\Theta|^2J^1.
\end{equation}
In addition, the estimates \eqref{est:tilde H3} and \eqref{est:tilde H5}, together with trace theory and the Poincar\'e inequality, reveal that
\begin{eqnarray}\label{est:rhs 2 evolution difference}
\begin{aligned}
&\int_0^t\int_{\Om}J^1\tilde{H}^3\cdot \pa_t\Theta-\int_0^t\int_{\Om}J^1\nabla_{\pa_t\mathscr{A}^1-\pa_t\mathscr{A}^2}\Theta^2\cdot\nabla_{\mathscr{A}^1}\pa_t \Theta-\int_0^t\int_{\Sigma}\tilde{H}^5\cdot\pa_t\Theta\\
&\le \int_0^t\|J^1\|_{L^\infty}\left(\|J^1\|_{L^\infty}\|\tilde{H}^3\|_0\|\pa_t\Theta\|_0+\|\nabla_{\pa_t\mathscr{A}^1-\pa_t\mathscr{A}^2}\Theta^2\|_0\|\nabla_{\mathscr{A}^1}\pa_t \Theta\|_0\right)\\
&\quad+\int_0^t\|\tilde{H}^5\|_{-1/2}\|\pa_t\Theta\|_{1/2}\\
&\lesssim \int_0^tP(\sqrt{\mathscr{Z}})\sqrt{\mathcal{Z}}\,\|\pa_t\Theta\|_1,
\end{aligned}
\end{eqnarray}
where we have written
\begin{eqnarray}
\begin{aligned}
\mathcal{Z}:&=\|\zeta^1-\zeta^2\|_{3/2}^2+\|\pa_t\zeta^1-\pa_t\zeta^2\|_{1/2}^2+\|\pa_t^2\zeta^1-\pa_t^2\zeta^2\|_{1/2}^2\\
&\quad+\|w^1-w^2\|_1^2+\|\pa_tw^1-\pa_tw^2\|_1^2+\|\vartheta^1-\vartheta^2\|_1^2+\|\pa_t\vartheta^1-\pa_t\vartheta^2\|_1^2\\
&\quad+\|v\|_2^2+\|q\|_1^2+\|\Theta\|_2^2.
\end{aligned}
\end{eqnarray}
Combining \eqref{est:rhs 1 evolution difference}, \eqref{est:rhs 2 evolution difference} and \eqref{equ:evolution difference} with the Poincar\'e inequality of Lemma A.14 in \cite{GT1} and Lemma $2.9$ in \cite{LW}, and utilizing the Cauchy inequality to absorb $\|\pa_t\Theta\|_1$ into the left-hand side, yields
\begin{eqnarray}
\begin{aligned}
&\f12\int_{\Om}|\pa_t\Theta|^2J^1(t)+\f12\int_0^t\|\pa_t\Theta\|_1^2\\
&\le P(\sqrt{\mathscr{Z}})\int_0^t\f12\int_{\Om}|\pa_t\Theta|^2J^1+\int_0^tP(\sqrt{\mathscr{Z}})\mathcal{Z}.
\end{aligned}
\end{eqnarray}
Then Gronwall's lemma and Lemma $2.9$ in \cite{LW} imply that
\begin{equation}
\|\pa_t\Theta\|_{L^\infty H^0}^2+\|\pa_t\Theta\|_{L^2H^1}^2\le \exp\{P(\sqrt{\mathscr{Z}})T\}\int_0^TP(\sqrt{\mathscr{Z}})\mathcal{Z}.
\end{equation}
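Here Gronwall's lemma is used in the standard integral form: if $a\ge0$ and $b$ is nondecreasing, then
\[
y(t)\le a\int_0^ty(s)\,ds+b(t)\quad\Longrightarrow\quad y(t)\le e^{at}\,b(t).
\]
We apply it with $y(t)=\f12\int_{\Om}|\pa_t\Theta|^2J^1(t)$, $a=P(\sqrt{\mathscr{Z}})$ and $b(t)=\int_0^tP(\sqrt{\mathscr{Z}})\mathcal{Z}$, and then use the pointwise bounds on $J^1$ from Lemma $2.9$ in \cite{LW} to pass between $\int_{\Om}|\pa_t\Theta|^2J^1$ and $\|\pa_t\Theta\|_{H^0}^2$.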
The energy estimates for $\pa_tv$ are essentially the same as those of Guo and Tice in \cite{GT1}, so we omit the details. The energy estimates for $\pa_tv$ and $\pa_t\Theta$ then allow us to deduce that
\begin{eqnarray}
\begin{aligned}
&\|\pa_tv\|_{L^\infty H^0}^2+\|\pa_tv\|_{L^2H^1}^2+\|\pa_t\Theta\|_{L^\infty H^0}^2+\|\pa_t\Theta\|_{L^2H^1}^2\\
&\le \exp\{P(\sqrt{\mathscr{Z}})T\}\Bigg[P(\sqrt{\mathscr{Z}})\|q\|_{L^2H^0}^2+C\|\pa_t\zeta^1-\pa_t\zeta^2\|_{L^2H^{-1/2}}^2+\int_0^TP(\sqrt{\mathscr{Z}})\mathcal{Z}\\
&\quad+P(\sqrt{\mathscr{Z}})\|q\|_{L^\infty H^0}^2\bigg(\sum_{j=0}^1\|\pa_t^j\zeta^1-\pa_t^j\zeta^2\|_{L^\infty H^{1/2}}+\|v\|_{L^\infty H^1}\bigg)\\
&\quad+P(\sqrt{\mathscr{Z}})\|q\|_{L^2 H^0}^2\bigg(\sum_{j=0}^2\|\pa_t^j\zeta^1-\pa_t^j\zeta^2\|_{L^2 H^{1/2}}+\|v\|_{L^2 H^1}\bigg)\Bigg],
\end{aligned}
\end{eqnarray}
where the temporal norms $L^\infty$ and $L^2$ are computed over $[0,T]$.
Step 4. Elliptic estimates for $v$, $q$ and $\Theta$. For $r=0,1$, we combine Proposition \ref{prop:high regulatrity} with the estimates \eqref{est:bound H3}--\eqref{est:bound difference theta2}, as well as the bounds on $\|H^1\|_r$, $\|H^2\|_{r+1}$, $\|H^4\|_{r+1/2}$, $\|\mathop{\rm div}\nolimits_{\mathscr{A}^1}(\mathbb{D}_{(\mathscr{A}^1-\mathscr{A}^2)}v^2)\|_r$ and $\|\mathbb{D}_{(\mathscr{A}^1-\mathscr{A}^2)}v^2\mathscr{N}^1\|_{r+1/2}$ established in the proof of Theorem $6.2$ in \cite{GT1}, to deduce that
\begin{eqnarray}
\begin{aligned}
&\|v\|_{r+2}^2+\|q\|_{r+1}^2+\|\Theta\|_{r+2}^2\\
&\lesssim C(\eta_0)\bigg(\|\pa_tv\|_r^2+\|\mathop{\rm div}\nolimits_{\mathscr{A}^1}(\mathbb{D}_{(\mathscr{A}^1-\mathscr{A}^2)}v^2)\|_r^2+\|H^1\|_r^2+\|H^2\|_{r+1}^2+\|\pa_t\Theta\|_r^2\\
&\quad+\|H^3\|_r^2+\|\mathop{\rm div}\nolimits_{\mathscr{A}^1}(\nabla_{\mathscr{A}^1-\mathscr{A}^2}\Theta^2)\|_r^2+\|\mathbb{D}_{(\mathscr{A}^1-\mathscr{A}^2)}v^2\mathscr{N}^1\|_{r+1/2}^2+\|H^4\|_{r+1/2}^2\\
&\quad+\|\nabla_{\mathscr{A}^1-\mathscr{A}^2}\Theta^2\cdot\mathscr{N}^1\|_{r+1/2}^2+\|H^5\|_{r+1/2}^2\bigg)\\
&\lesssim C(\eta_0)\bigg(\|\pa_tv\|_r^2+\|\pa_t\Theta\|_r^2+\|\zeta^1-\zeta^2\|_{r+1/2}^2\\
&\quad+P(\sqrt{\mathscr{Z}})\big(\|\zeta^1-\zeta^2\|_{r+3/2}^2+\|\pa_t\zeta^1-\pa_t\zeta^2\|_{r-1/2}^2\\
&\quad+\|w^1-w^2\|_{r+1}^2+\|\vartheta^1-\vartheta^2\|_{r+1}^2\big)\bigg).
\end{aligned}
\end{eqnarray}
Then we take the supremum in time over $[0,T]$, when $r=0$, to deduce
\begin{eqnarray}
\begin{aligned}
&\|v\|_{L^\infty H^2}^2+\|q\|_{L^\infty H^1}^2+\|\Theta\|_{L^\infty H^2}^2\\
&\lesssim C(\eta_0)\bigg(\|\pa_tv\|_{L^\infty H^0}^2+\|\pa_t\Theta\|_{L^\infty H^0}^2+\|\zeta^1-\zeta^2\|_{L^\infty H^{1/2}}^2\\
&\quad+P(\sqrt{\mathscr{Z}})\big(\|\zeta^1-\zeta^2\|_{L^\infty H^{3/2}}^2+\|\pa_t\zeta^1-\pa_t\zeta^2\|_{L^\infty H^{-1/2}}^2\\
&\quad+\|w^1-w^2\|_{L^\infty H^1}^2+\|\vartheta^1-\vartheta^2\|_{L^\infty H^1}^2\big)\bigg).
\end{aligned}
\end{eqnarray}
Then we integrate over $[0,T]$ when $r=1$ to find
\begin{eqnarray}
\begin{aligned}
&\|v\|_{L^2H^3}^2+\|q\|_{L^2H^2}^2+\|\Theta\|_{L^2H^3}^2\\
&\lesssim C(\eta_0)\bigg(\|\pa_tv\|_{L^2H^1}^2+\|\pa_t\Theta\|_{L^2H^1}^2+\|\zeta^1-\zeta^2\|_{L^2H^{3/2}}^2\\
&\quad+P(\sqrt{\mathscr{Z}})\big(\|\zeta^1-\zeta^2\|_{L^2H^{5/2}}^2+\|\pa_t\zeta^1-\pa_t\zeta^2\|_{L^2H^{1/2}}^2\\
&\quad+\|w^1-w^2\|_{L^2H^2}^2+\|\vartheta^1-\vartheta^2\|_{L^2H^2}^2\big)\bigg).
\end{aligned}
\end{eqnarray}
Step 5. Estimates of $\zeta^1-\zeta^2$ and contraction. With the preparations of the above steps, we can now derive the contraction results. Since this step follows exactly the proof of Theorem $6.2$ in \cite{GT1}, we omit the details here. Hence, we obtain \eqref{est:n} and \eqref{est:m}.
\end{proof}
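Let us sketch, for orientation, how the contraction \eqref{est:n} will be exploited: taking $(v^1,q^1,\Theta^1)=(u^{m+1},p^{m+1},\theta^{m+1})$, $(w^1,\vartheta^1)=(u^m,\theta^m)$, $\zeta^1=\eta^m$ and $(v^2,q^2,\Theta^2)=(u^m,p^m,\theta^m)$, $(w^2,\vartheta^2)=(u^{m-1},\theta^{m-1})$, $\zeta^2=\eta^{m-1}$, the estimate \eqref{est:n} gives
\[
\mathfrak{N}(u^{m+1}-u^m,p^{m+1}-p^m,\theta^{m+1}-\theta^m;T)\le\f12\,\mathfrak{N}(u^m-u^{m-1},0,\theta^m-\theta^{m-1};T),
\]
so that, iterating, the differences of successive iterates decay geometrically; together with \eqref{est:m} for $\eta^m-\eta^{m-1}$, this shows that the whole sequence is Cauchy in these lower-order norms.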
\subsection{Proof of Theorem \ref{thm:main}}
Now we can combine Theorem \ref{thm:uniform boundedness} and Theorem \ref{thm:contraction} to produce a unique strong solution to \eqref{equ:NBC}. Note that Theorem \ref{thm:main} follows directly from the following theorem, which is proved in the same manner as Theorem $6.3$ in \cite{GT1}.
\begin{theorem}
Assume that $u_0$, $\theta_0$, $\eta_0$ satisfy $\mathscr{E}_0<\infty$ and that the initial data $\pa_t^ju(0)$, etc. are constructed in Section \ref{sec:initial data} and satisfy the $N$-th compatibility conditions \eqref{cond:compatibility N}. Then there exists $0<T_0<1$ such that if $0<T\le T_0$, then there exists a solution $(u,p,\theta,\eta)$ to the problem \eqref{equ:NBC} on the time interval $[0,T]$ that achieves the initial data and satisfies
\begin{equation}\label{est:bound K}
\mathfrak{K}(u,p,\theta)+\mathfrak{K}(\eta)\le CP(\mathscr{E}_0),
\end{equation}
for a universal constant $C>0$. The solution is unique among functions that achieve the initial data. Moreover, $\eta$ is such that the mapping $\Phi(\cdot,t)$, defined by \eqref{map:phi}, is a $C^{2N-1}$ diffeomorphism for each $t\in [0, T]$.
\end{theorem}
\begin{proof}
Step 1. The sequences of approximate solutions. From the assumptions, we know that the hypotheses of Theorems \ref{thm:boundedness} and \ref{thm:uniform boundedness} are satisfied. These two theorems allow us to produce a sequence $\{(u^m,p^m,\theta^m,\eta^m)\}_{m=1}^\infty$ of approximate solutions, which achieve the initial data, satisfy the systems \eqref{equ:iteration equation}, and obey the uniform bounds
\begin{equation}\label{est:uniform bound}
\sup_{m\ge1}\left(\mathfrak{K}(u^m,p^m,\theta^m)+\mathfrak{K}(\eta^m)\right)\le CP(\mathscr{E}_0).
\end{equation}
The uniform bounds allow us to take weak and weak-$\ast$ limits, up to the extraction of a subsequence:
\begin{align*}
&\pa_t^ju^m\rightharpoonup \pa_t^ju\quad \text{weakly in}\thinspace L^2([0,T];H^{2N-2j+1}(\Om))\thinspace \text{for}\thinspace j=0,\ldots,N,\\
&\pa_t^{N+1}u^m\rightharpoonup\pa_t^{N+1}u\quad \text{weakly in}\thinspace (\mathscr{X}_T)^\ast,\\
&\pa_t^ju^m\stackrel{\ast}\rightharpoonup\pa_t^ju\quad \text{weakly}-\ast\thinspace\text{in}\thinspace L^\infty([0,T];H^{2N-2j}(\Om))\thinspace \text{for}\thinspace j=0,\ldots,N,\\
&\pa_t^jp^m\rightharpoonup\pa_t^jp\quad \text{weakly in}\thinspace L^2([0,T];H^{2N-2j}(\Om))\thinspace \text{for}\thinspace j=0,\ldots,N,\\
&\pa_t^jp^m\stackrel{\ast}\rightharpoonup \pa_t^jp\quad \text{weakly}-\ast\thinspace\text{in}\thinspace L^\infty([0,T];H^{2N-2j-1}(\Om))\thinspace \text{for}\thinspace j=0,\ldots,N,\\
&\pa_t^j\theta^m\rightharpoonup \pa_t^j\theta\quad \text{weakly in}\thinspace L^2([0,T];H^{2N-2j+1}(\Om))\thinspace \text{for}\thinspace j=0,\ldots,N,\\
&\pa_t^{N+1}\theta^m\rightharpoonup\pa_t^{N+1}\theta\quad \text{weakly in}\thinspace (\mathscr{H}^1_T)^\ast,\\
&\pa_t^j\theta^m\stackrel{\ast}\rightharpoonup\pa_t^j\theta\quad \text{weakly}-\ast\thinspace\text{in}\thinspace L^\infty([0,T];H^{2N-2j}(\Om))\thinspace \text{for}\thinspace j=0,\ldots,N,
\end{align*}
and
\begin{align*}
&\pa_t^j\eta^m\rightharpoonup\pa_t^j\eta \quad \text{weakly in}\thinspace L^2([0,T];H^{2N-2j+5/2}(\Sigma))\thinspace\text{for}\thinspace j=2,\ldots,N+1,\\
&\eta^m\stackrel{\ast}\rightharpoonup\eta\quad \text{weakly}-\ast\thinspace\text{in}\thinspace L^\infty([0,T];H^{2N+1/2}(\Sigma)),\\
&\pa_t^j\eta^m\stackrel{\ast}\rightharpoonup\pa_t^j\eta\quad \text{weakly}-\ast\thinspace\text{in}\thinspace L^\infty([0,T];H^{2N-2j+3/2}(\Sigma))\thinspace \text{for}\thinspace j=1,\ldots,N.
\end{align*}
The set of tuples $(v,q,\Theta,\zeta)$ achieving the initial data, that is, $\pa_t^jv(0)=\pa_t^ju(0)$, $\pa_t^j\Theta(0)=\pa_t^j\theta(0)$, $\pa_t^j\zeta(0)=\pa_t^j\eta(0)$ for $j=0,\ldots,N$ and $\pa_t^jq(0)=\pa_t^jp(0)$ for $j=0,\ldots,N-1$, is closed in the above weak topology by Lemma A.4 in \cite{GT1}. Hence the limit $(u,p,\theta,\eta)$ achieves the initial data, since each $(u^m,p^m,\theta^m,\eta^m)$ belongs to this set.
Step 2. Contraction. For $m\ge1$, we set $v^1=u^{m+2}$, $v^2=u^{m+1}$, $w^1=u^{m+1}$, $w^2=u^m$, $q^1=p^{m+2}$, $q^2=p^{m+1}$, $\Theta^1=\theta^{m+2}$, $\Theta^2=\theta^{m+1}$, $\vartheta^1=\theta^{m+1}$, $\vartheta^2=\theta^m$, $\zeta^1=\eta^{m+1}$, $\zeta^2=\eta^m$. Then, from the construction of the initial data, the initial data of $v^j$, $w^j$, $q^j$, $\Theta^j$, $\vartheta^j$, $\zeta^j$ match the hypotheses of Theorem \ref{thm:contraction}. Because of \eqref{equ:iteration equation}, \eqref{equ:difference} holds. In addition, \eqref{est:uniform bound} holds. Thus, all hypotheses of Theorem \ref{thm:contraction} are satisfied. Then
\begin{eqnarray}\label{est:bound N um pm thetam}
\begin{aligned}
&\mathfrak{N}(u^{m+2}-u^{m+1},p^{m+2}-p^{m+1},\theta^{m+2}-\theta^{m+1};T)\\
&\le\frac12\mathfrak{N}(u^{m+1}-u^m,p^{m+1}-p^m,\theta^{m+1}-\theta^m;T),
\end{aligned}
\end{eqnarray}
\begin{equation}\label{est:bound M etam}
\mathfrak{M}(\eta^{m+1}-\eta^m;T)\lesssim \mathfrak{N}(u^{m+1}-u^m,p^{m+1}-p^m,\theta^{m+1}-\theta^m;T).
\end{equation}
The bound \eqref{est:bound N um pm thetam} implies that the sequence $\{(u^m,p^m,\theta^m)\}_{m=0}^\infty$ is Cauchy in the norm $\sqrt{\mathfrak{N}(\cdot,\cdot,\cdot;T)}$. Thus
\begin{eqnarray}
\left\{
\begin{aligned}
&u^m\to u\quad &\text{in}\thinspace L^\infty\left([0,T];H^2(\Om)\right)\cap L^2\left([0,T];H^3(\Om)\right),\\
&\pa_tu^m\to\pa_tu\quad &\text{in}\thinspace L^\infty\left([0,T];H^0(\Om)\right)\cap L^2\left([0,T];H^1(\Om)\right),\\
&p^m\to p\quad &\text{in}\thinspace L^\infty\left([0,T];H^1(\Om)\right)\cap L^2\left([0,T];H^2(\Om)\right),\\
&\theta^m\to \theta\quad &\text{in}\thinspace L^\infty\left([0,T];H^2(\Om)\right)\cap L^2\left([0,T];H^3(\Om)\right),\\
&\pa_t\theta^m\to\pa_t\theta\quad &\text{in}\thinspace L^\infty\left([0,T];H^0(\Om)\right)\cap L^2\left([0,T];H^1(\Om)\right),
\end{aligned}
\right.
\end{eqnarray}
as $m\to\infty$.
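The passage from the halving bound \eqref{est:bound N um pm thetam} to the Cauchy property is the standard geometric-series argument. As an illustration only (a scalar toy model, not part of the proof), the following Python snippet bounds the tails of a sequence whose consecutive differences decay by the contraction factor:

```python
# Toy scalar model: d_m stands in for the norm of the m-th consecutive
# difference.  The bound N_{m+1} <= (1/2) N_m gives d_{m+1} <= d_m / sqrt(2)
# for d_m = sqrt(N_m), so the tails of sum_m d_m are geometric.
d0 = 1.0
r = 0.5 ** 0.5  # contraction factor for the square roots


def tail_bound(m):
    """Geometric bound on sum_{j >= m} d_j, which dominates |a_{m+p} - a_m|."""
    return d0 * r ** m / (1 - r)


print(tail_bound(0), tail_bound(20))
```

Since the tail bound tends to zero, the partial sums form a Cauchy sequence in the corresponding norm.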
Because of \eqref{est:bound M etam}, we deduce that the sequence $\{\eta^m\}_{m=1}^\infty$ is Cauchy in the norm $\sqrt{\mathfrak{M}(\cdot;T)}$. Thus,
\begin{eqnarray}
\left\{
\begin{aligned}
&\eta^m\to\eta\quad &\text{in}\thinspace L^\infty\left([0,T];H^{5/2}(\Sigma)\right),\\
&\pa_t\eta^m\to\pa_t\eta\quad &\text{in}\thinspace L^\infty\left([0,T];H^{3/2}(\Sigma)\right),\\
&\pa_t^2\eta^m\to\pa_t^2\eta\quad &\text{in}\thinspace L^2\left([0,T];H^{1/2}(\Sigma)\right),
\end{aligned}
\right.
\end{eqnarray}
as $m\to\infty$.
Step 3. Interpolation and passing to the limit. This step proceeds exactly as in the proof of Theorem 6.3 in \cite{GT1}, which gives the existence of solutions and the estimate \eqref{est:bound K}.
Step 4. Uniqueness and diffeomorphism. This step is similar to the proof of Theorem 6.3 in \cite{GT1}.
\end{proof}
\end{document} |
\begin{document}
\begin{abstract}
In this note we use Blanchfield forms to study knots that can be turned into an unknot using a single $\ol{t}_{2k}$ move.
\end{abstract}
\title{Untwisting number and Blanchfield pairings}
\section{Overview}
Let $K\subset S^3$ be a knot and $k\in\mathbb{Z}\setminus\{0\}$.
In this paper by a \emph{$k$--twisting move} we mean a move depicted in Figure~\ref{fig:k_untwist},
that is, a full right $k$--twist on two strands of $K$ going in opposite directions (in \cite{Przy} this move
is called a $\ol{t}_{2k}$--move). We will call a knot \emph{$k$--simple}
if it can be unknotted by a single $k$--untwisting move. A knot is \emph{algebraically $k$--simple} if a single $k$--untwisting move
turns it into a knot with Alexander polynomial $1$.
\begin{figure}
\caption{A $k$--twisting move for $k=2$. Note that the strands in the picture go in different directions.}
\label{fig:k_untwist}
\end{figure}
Our first result gives an obstruction to the untwisting move in terms of the algebraic unknotting number \cite{Fo93,Muk90,Sae99}.
\begin{theorem}\label{cor:unknotting}
Suppose $K$ is an algebraically $k$--simple knot.
If $k$ is odd, then $K$
can be turned into a knot with Alexander polynomial $1$ using at most two crossing changes. If $k$ is even, then at most three crossing
changes are enough to turn $K$ into a knot with Alexander polynomial $1$.
\end{theorem}
Our second result restricts the homology of the double branched cover of an algebraically $k$--simple knot.
\begin{theorem}\label{thm:iscyclic}
Suppose $K$ is an algebraically $k$--simple knot. Denote by $\Sigma(K)$ the double branched cover of $K$. Then $H_1(\Sigma(K);\mathbb{Z})$ is cyclic.
\end{theorem}
Both Theorem~\ref{cor:unknotting} and Theorem~\ref{thm:iscyclic} follow from the following result, which is the main technical result of this paper.
\begin{theorem}\label{thm:main}
Suppose $K$ is an algebraically $k$--simple knot. Then there exists a polynomial $\alpha(t)\in\mathbb{Z}[t,t^{-1}]$ satisfying $\alpha(1)=0$,
$\alpha(t^{-1})=\alpha(t)$, such that
the matrix
\[\begin{pmatrix} \alpha(t) & 1 \\ 1 & -k \end{pmatrix}\]
represents the Blanchfield pairing for $K$.
\end{theorem}
Theorem~\ref{thm:main} can be regarded as a generalization of \cite[Theorem 3.2(b)]{Przy}.
It is possible to generalize the techniques used in this paper to study knots that can be untwisted with several $\ol{t}_{2k}$ moves, possibly with varying
twisting coefficients $k$. This generalization is straightforward; we omit it to keep the paper shorter and more concise.
The proof of Theorem~\ref{thm:main} is given in Section~\ref{sec:proof}.
The proof of Theorem~\ref{cor:unknotting} is given in Section~\ref{sec:applications}.
Section~\ref{sec:linkingforms} contains
the proof of a stronger version of Theorem~\ref{thm:iscyclic}.
\section{Blanchfield pairing}\label{sec:blanchfield}
Let $K\subset S^3$ be a knot and let $M_K$ denote its zero-framed surgery. Denote by $\wt{M}_K$ the universal abelian cover of $M_K$. The chain
complex $C_*(\wt{M}_K;\mathbb{Z})$ admits an action of the deck transformation group and thus has the structure of a $\Lambda$--module, where $\Lambda=\mathbb{Z}[t,t^{-1}]$.
The homology of this complex, regarded
as a $\Lambda$--module, is denoted by $H_*(M_K;\Lambda)$. The module $H_1(M_K;\Lambda)$ is called the \emph{Alexander module} of the knot $K$.
\begin{remark}
Usually the Alexander module is defined using knot complements instead of zero--framed surgeries, but the two definitions are
equivalent; see e.g. \cite{FP}.
\end{remark}
The ring $\Lambda$ carries a natural involution $t\mapsto t^{-1}$.
The Blanchfield pairing of $K$, defined in
\cite{Bl57}, is a sesquilinear symmetric pairing $H_1(M_K;\Lambda)\times H_1(M_K;\Lambda)\to Q/\Lambda$, where $Q$ is the field
of fractions of $\Lambda$. We refer to \cite{FP,Hi12} for a precise and detailed construction of the Blanchfield pairing and to \cite{Con,CFT} for
generalizations.
\begin{definition}\label{def:repres}
We say that an $n\times n$ matrix $A$ with entries in $\Lambda$ \emph{represents} the Blanchfield pairing if $H_1(M_K;\Lambda)\cong \Lambda^n/A\Lambda^n$
as a $\Lambda$--module, under this identification the Blanchfield pairing has the form $(a,b)\mapsto a^TA^{-1}\ol{b}$, and moreover
$A(1)$ is diagonalizable over $\mathbb{Z}$.
\end{definition}
It is known, see \cite{Kear}, that every Blanchfield pairing can be represented by a finite matrix. The minimal size of a matrix representing the Blanchfield
pairing of a knot is denoted by $n(K)$. It is equal to the algebraic unknotting number $u_a(K)$; see \cite{BF3,BF1}.
The invariant $n(K)$ can also be generalized to other coefficient rings $R$. In this paper we restrict to rings $R$ that are subrings of $\mathbb{C}$.
We denote by $n_R(K)$ the minimal size of a matrix over $R[t,t^{-1}]$ representing the
Blanchfield pairing over $R[t,t^{-1}]$.
We have that $n_R(K)\le n_{R'}(K)$
if $R'$ is a subring of $R$.
Often $n_R(K)$ is easier to compute than $n(K)=n_\mathbb{Z}(K)$,
for example, the value of $n_{\mathbb{R}}(K)$ can be calculated from the Tristram--Levine signature \cite{BF2}. One motivation of this paper is to give
a geometric interpretation of $n_R(K)$ for some rings $R$.
\section{Proof of Theorem~\ref{thm:main}}\label{sec:proof}
The main ingredient in the proof of Theorem~\ref{thm:main} is the following.
\begin{theorem}[see \expandafter{\cite[Theorem 2.6]{BF1}}]\label{thm:26}
Suppose $W_K$ is a topological four--manifold such that $\partial W_K=M_K$, $\pi_1(W_K)=\mathbb{Z}$ and the inclusion induced map $H_1(M_K;\mathbb{Z})\to H_1(W_K;\mathbb{Z})$
is an isomorphism. Then $H_2(W_K;\Lambda)$ is free of rank $b_2(W_K)$. Moreover, if $A$ is a matrix over $\Lambda$
representing the twisted intersection form on $H_2(W_K;\Lambda)$ in some basis of $H_2(W_K;\Lambda)$, then $A$ also represents
the Blanchfield pairing on $M_K$.
\end{theorem}
In the light of Theorem~\ref{thm:26}, the
proof of Theorem~\ref{thm:main} consists of constructing an appropriate manifold $W_K$ and applying Theorem~\ref{thm:26}.
The construction begins by noticing that the
twisting move can be realized by a surgery. Namely, we have the following well-known fact.
\begin{proposition}\label{prop:surgerytwist}
A $k$--twisting move can be realized by a $-1/k$ surgery on a knot. That is, if $K_2$ arises from $K_1$ by a $k$--twisting move, then there is a simple closed
circle $C$ disjoint from $K_1$, such that $C$ bounds a smooth disk intersecting $K_2$ at two points with opposite signs and
such that the $-1/k$ surgery on $C$ transforms $K_1$ into $K_2$; see Figure~\ref{fig:twist_surg}.
\end{proposition}
\begin{figure}
\caption{The $1/k$ surgery on the circle in the top picture induces $k$ full left twists of the two strands passing through the circle.}
\label{fig:twist_surg}
\end{figure}
\begin{remark}
The move described in Figure~\ref{fig:twist_surg} is a special case of the Rolfsen twist, see \cite[Figure~5.27]{Stipsicz}.
It can be seen on \cite[Figure 3.12]{Sav} that the surgery with a positive coefficient (i.e. the $1/k$ surgery if $k>0$)
gives rise to a left $k$--twist and the surgery with a
negative coefficient (i.e. the $-1/k$ surgery with $k>0$) gives rise to a right $k$--twist.
\end{remark}
The surgery in Figure~\ref{fig:twist_surg} can be changed into a surgery with integer coefficients as in Figure~\ref{fig:othersurg}
by a `slam-dunk' operation, see \cite[Section 5.3]{Stipsicz}.
\begin{figure}
\caption{Changing a $1/k$ surgery on a circle to a surgery on a two-component link with framings $0$ and $-k$.}
\label{fig:othersurg}
\end{figure}
Suppose $J$ is a knot with Alexander polynomial $1$ and $K$ is a knot resulting from $J$ by applying a full left $k$--twist
(so $J$ is obtained from $K$ by a full right $k$--twist).
Let $M_J$ be the zero--surgery on $J$ and $M_K$ the zero--surgery on $K$. By \cite[Theorem 11.7B]{FreedmanQuinn}
$M_J$ is a boundary of a topological four--manifold that is a homotopy $D^3\times S^1$. Denote this four--manifold by $W_J$.
A full left $k$--twist on $J$ can be realized as a surgery on a two-component link with framings $0$ and $-k$ as in Figure~\ref{fig:othersurg}.
Let $c_0$ and $c_1$ denote the components of this link. The curve $c_0$ has framing $0$ and $c_1$ has framing $-k$. Both $c_0$ and $c_1$ are curves disjoint from $J$,
so we can and will assume that they are separated from a small neighborhood of $J$ in $S^3$. Performing a 0--surgery on $J$ does not affect these curves, therefore
$c_0$ and $c_1$ can also be viewed as curves on $M_J$. Now performing surgery on $c_0$ and $c_1$ produces $M_K$.
The trace of the surgery on $c_0$ and $c_1$ yields a cobordism between $M_J$ and $M_K$. Call this cobordism $W_{JK}$.
Define now
\[W_K=W_J\cup W_{JK}\]
so that $\partial W_K=M_K$.
We have the following fact.
\begin{lemma}\label{lem:cobounds}
We have $\pi_1(W_K)\cong\mathbb{Z}$, $H_1(W_K;\mathbb{Z})\cong \mathbb{Z}$ and the inclusion of $M_K$ to $W_K$ induces an isomorphism on the first homology.
Moreover $H_2(W_K;\mathbb{Z})\cong\mathbb{Z}^2$ and there exist spherical generators of $H_2(W_K;\mathbb{Z})$.
\end{lemma}
\begin{proof}
The homology groups of $W_K$ are calculated using the Mayer-Vietoris sequence. The manifold $W_K$ is obtained
from $W_J$ by adding two-handles along null-homologous curves $c_0$ and $c_1$.
This shows that $H_1(W_K;\mathbb{Z})\cong\mathbb{Z}$ and $H_2(W_K;\mathbb{Z})\cong\mathbb{Z}^2$.
To compute $\pi_1$ we observe that $\pi_1(W_J)\cong\mathbb{Z}$. Hence $c_0$ and $c_1$, being null-homologous,
are also null-homotopic. The van Kampen theorem implies that $\pi_1(W_K)\cong\mathbb{Z}$.
To show that the generators of $H_2(W_K;\mathbb{Z})$ can be chosen to be spherical we again use the fact that $c_0$ and $c_1$ are null-homotopic in $W_J$.
This implies that $c_0$ and $c_1$ bound disks $D_0$ and $D_1$ in $W_J$. The disk $D_1$
can be chosen to be the obvious disk on $M_J$, but $D_0$ is in general only an immersed disk and it cannot lie on $M_J$ (because in general $c_0$ is
not null-homotopic on $M_J$). We can form spheres $\Sigma_0$ and $\Sigma_1$
by adding to $D_0$ and $D_1$ the cores of the two-handles that are attached. It is clear that the homology classes $[\Sigma_0]$ and $[\Sigma_1]$
generate $H_2(W_K;\mathbb{Z})$. Moreover, by construction, $\Sigma_1$ is a smoothly embedded sphere and $\Sigma_0$ can be chosen to intersect $\Sigma_1$
precisely at one point.
Finally, in order to prove that the inclusion induced map $H_1(M_K;\mathbb{Z})\to H_1(W_K;\mathbb{Z})$ is an isomorphism, invert the cobordism $W_{JK}$, that is,
present $W_{JK}$ as $M_K\times[0,1]$ with two two--handles attached. The attaching curves of these handles are homologically trivial (but not necessarily
homotopically trivial, since $\pi_1(M_K)$ can be complicated),
hence the boundary inclusion induces an isomorphism $H_1(M_K;\mathbb{Z})\cong H_1(W_{JK};\mathbb{Z})$. Clearly $H_1(W_{JK};\mathbb{Z})\cong H_1(W_K;\mathbb{Z})$.
\end{proof}
Lemma~\ref{lem:cobounds} gives us two spheres $\Sigma_0,\Sigma_1\subset W_K$,
which are the generators of $H_2(W_K;\mathbb{Z})$. Choose a basepoint $x_0=\Sigma_0\cap\Sigma_1$. This choice allows us to consider $\Sigma_0$
and $\Sigma_1$ as elements of $\pi_2(W_K,x_0)$.
\begin{lemma}\label{lem:generates}
The group $\pi_2(W_K,x_0)$ is freely generated as a $\Lambda=\mathbb{Z}[\pi_1(W_K,x_0)]$--module by the classes of $\Sigma_0$ and $\Sigma_1$. In particular
$\pi_2(W_K,x_0)\cong\Lambda^2$.
\end{lemma}
\begin{proof}
The space $W_K$ is obtained from $W_J$ by attaching two two--handles along the null-homotopic curves $c_0$ and $c_1$. We have $\pi_1(W_J)=\mathbb{Z}$
and $\pi_2(W_J)=0$, since $W_J$ is a homotopy $D^3\times S^1$. The statement follows from \cite[Proposition 3.30]{Ran02}.
\end{proof}
We will use Lemma~\ref{lem:generates} in connection with the following well-known result.
\begin{lemma}\label{lem:lambdaiso}
We have an isomorphism of $\Lambda$--modules $\pi_2(W_K,x_0)\cong \pi_2(\wt{W}_K,\wt{x}_0)\cong H_2(\wt{W}_K;\mathbb{Z})\cong H_2(W_K;\Lambda)$.
\end{lemma}
\begin{proof}
The first isomorphism is induced by the covering map, which is an isomorphism on higher homotopy groups. The second is the Hurewicz isomorphism, because $\wt{W}_K$ is
simply connected. The third isomorphism is the definition of the twisted homology groups.
\end{proof}
In particular,
Lemma~\ref{lem:generates} together with Lemma~\ref{lem:lambdaiso} gives
a simple and independent argument that $H_2(W_K;\Lambda)$ is a free $\Lambda$--module, compare \cite[Lemma 2.7]{BF1}.
\begin{corollary}\label{cor:liftsgenerate}
The (classes of the) lifts of $\Sigma_0$ and $\Sigma_1$ to $\wt{W}_K$ generate $H_2(W_K;\Lambda)$ as a $\Lambda$--module.
\end{corollary}
Let $A(t)$ be a matrix over $\Lambda$ representing the intersection
form on $H_2(W_K;\Lambda)$.
The following result together with Theorem~\ref{thm:26} gives the proof of Theorem~\ref{thm:main} from the introduction.
\begin{theorem}\label{thm:form}
The matrix $A(t)$ has form
\[\begin{pmatrix} \alpha(t) & 1\\ 1 & -k\end{pmatrix},\]
where $\alpha(t)\in \Lambda$ is such that $\alpha(1)=0$ and $\alpha(t^{-1})=\alpha(t)$.
\end{theorem}
\begin{proof}
By Corollary~\ref{cor:liftsgenerate}
the entries of $A(t)$ are twisted intersection indices of $\Sigma_0$ and $\Sigma_1$.
For example, the bottom-right entry of $A(t)$
is equal to the twisted intersection index of $\Sigma_1$ and $\Sigma_1'$, where $\Sigma'_1$ is a small perturbation of $\Sigma_1$
intersecting $\Sigma_1$ in finitely many points.
To compute the twisted intersection index of $\Sigma_1$ and $\Sigma_1'$, choose a basing for $\Sigma_1$ and $\Sigma_1'$,
that is, a path $\gamma$ from $x_0$ to $\Sigma_1$ and a path $\gamma'$ from $x_0$ to $\Sigma_1'$. Let $x,x'$
be the end points of $\gamma$ and $\gamma'$.
For any intersection point $y\in\Sigma_1\cap\Sigma_1'$
we choose a smooth path $\rho_y$ from $x$ to $y$ on $\Sigma_1$ and a path $\rho_y'$ from $x'$ to $y$ on $\Sigma_1'$; see Figure~\ref{fig:paths}.
\begin{figure}
\caption{Notation in the proof of Theorem~\ref{thm:form}.}
\label{fig:paths}
\end{figure}
Let $\theta_y$ be the loop
$(\gamma')^{-1}(\rho_y')^{-1}\rho_y\gamma$. Define $n_y\in\mathbb{Z}$ to be the homology class of $\theta_y$ in $H_1(W_K;\mathbb{Z})\cong\mathbb{Z}$. Finally, let
$\epsilon_y$ be the sign of the intersection point $y$ assigned in the usual way, that is, if $T_y\Sigma_1\oplus T_y\Sigma_1'=T_yW_{K}$ agrees with
the orientation, we set $\epsilon_y=+1$, otherwise we set $\epsilon_y=-1$.
Given these definitions, the twisted intersection index of $\Sigma_1$ and $\Sigma_1'$ is equal to
\begin{equation}\label{eq:twistint}
\sum_{y\in\Sigma_1\cap\Sigma_1'} \epsilon_yt^{n_y}\in \mathbb{Z}[t,t^{-1}].
\end{equation}
In general this sum might depend on the choice of $\rho_y$ and $\rho_y'$. However, if every smooth closed curve on $\Sigma_1$ and on $\Sigma_1'$
is homologically trivial in $W_K$ (in the language of \cite[Section 3.2]{BF3} this means that $\Sigma_1$ and $\Sigma_1'$ are homologically invisible in $W_K$),
the definition does not depend on the paths $\rho_y$ and $\rho_y'$. In the present situation
$\Sigma_1$ and $\Sigma_1'$ are immersed (and even embedded) spheres, so they are homologically invisible, in particular \eqref{eq:twistint}
is a well-defined Laurent polynomial.
As $\Sigma_1$ and $\Sigma_1'$ are embedded spheres, we claim more, namely that $n_y$ does not depend on $y$. In fact, suppose $z$
is another intersection point of $\Sigma_1$ and $\Sigma_1'$.
If $n_z\neq n_y$, then
the curve $\delta=\rho_y\rho_z^{-1}\rho'_z(\rho_y')^{-1}$ is not homologically trivial in $W_K$. As $\Sigma_1'$ is a perturbation of $\Sigma_1$,
the path $\rho'_z(\rho_y')^{-1}$ can be pushed by a homotopy (in $W_K$) to a path $\wt{\rho}$ on $\Sigma_1$ having the same endpoints. Then
$\rho_y\rho_z^{-1}\wt{\rho}$ is a loop homotopically equivalent to $\delta$, but this is a loop on the smoothly embedded sphere $\Sigma_1$. Hence it is
contractible in $W_K$, a contradiction.
This shows that $n_y=n_z$.
We conclude that the twisted intersection index of $\Sigma_1$ and $\Sigma_1'$ is equal to the standard intersection number of $\Sigma_1$ and $\Sigma_1'$
(which is equal to the self-intersection of $\Sigma_1$, that is $-k$) multiplied by $t^{n_y}$. We can choose a basing for
$\Sigma_1'$ in such a way that $n_y=0$.
An analogous, but simpler argument shows that $\Sigma_0\cdot\Sigma_1=\pm 1$. Indeed by construction $\Sigma_0\cap\Sigma_1$ consists of a single point.
It follows that the twisted intersection between $\Sigma_0$ and $\Sigma_1$ is $\pm t^m$ for some $m$. We choose a basing for $\Sigma_0$ in such a way that
$m=0$. We can also choose an orientation of $\Sigma_0$ in such a way that the sign is positive.
\end{proof}
\begin{remark}
There is an alternative calculation of the matrix $A$ using Rolfsen's argument \cite{Rol75}. However, one still has to make some effort to prove
that $A$ represents not only the Alexander module, but also the Blanchfield pairing.
\end{remark}
\section{Proof of Theorem~\ref{cor:unknotting}}\label{sec:applications}
We begin with proving Theorem~\ref{cor:unknotting}. The following corollary deals with the first part of this theorem.
\begin{corollary}\label{thm:unknotting}
Suppose $K$ is algebraically $k$--simple and $k$ is odd. Then $K$ can be turned into a knot
with Alexander polynomial $1$ by at most two crossing changes.
\end{corollary}
\begin{proof}
We have $A(1)=\left(\begin{smallmatrix} 0 & 1 \\ 1 &-k\end{smallmatrix}\right)$. As $k$ is odd, this matrix is diagonalizable over $\mathbb{Z}$.
By
\cite[Theorem 1.1]{BF3} we infer that the algebraic unknotting number of $K$ is at most $2$.
\end{proof}
If $k$ is even, then $A(1)$ is not diagonalizable over $\mathbb{Z}$, but $A(1)\oplus (1)$ is diagonalizable. The block matrix $A(t)\oplus (1)$ is a $3\times 3$
matrix over $\Lambda$ representing the Blanchfield pairing, so the algebraic unknotting number of $K$ is bounded from above by $3$. This shows the second
part of Theorem~\ref{cor:unknotting}.
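The parity dichotomy above can be checked experimentally. The following Python sketch (an illustration, not a proof; it assumes numpy, and the bound on the matrix entries is arbitrary) brute-forces a unimodular congruence $P^TA(1)P$ to a diagonal matrix, finding one for odd $k$ and none for even $k$:

```python
import itertools

import numpy as np


def congruent_diagonal_over_Z(M, bound=3):
    """Search for a unimodular integer P with P^T M P diagonal.

    Brute force over 2x2 integer matrices with entries in [-bound, bound].
    """
    rng = range(-bound, bound + 1)
    for entries in itertools.product(rng, repeat=4):
        P = np.array(entries).reshape(2, 2)
        if abs(round(np.linalg.det(P))) != 1:
            continue  # not unimodular
        D = P.T @ M @ P
        if D[0, 1] == 0 and D[1, 0] == 0:
            return P, D
    return None


for k in (1, 3, 2):
    M = np.array([[0, 1], [1, -k]])
    print(k, congruent_diagonal_over_Z(M))
```

For even $k$ the form $2xy-ky^2$ is even, so every diagonal entry of $P^TMP$ is even while the determinant forces entries $\pm1$; hence the search (correctly) finds nothing.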
We have the following consequence of Theorem~\ref{thm:main}.
\begin{theorem}
Suppose $K$ is algebraically $k$--simple. Let $R_k=\mathbb{Z}[\frac1k]$. Then
$n_{R_k}(K)=1$.
\end{theorem}
\begin{proof}
By Theorem~\ref{thm:main} we know that the Blanchfield pairing
over $\mathbb{Z}$ can be represented by a matrix of the form $\left(\begin{smallmatrix} \alpha(t) & 1 \\ 1 & -k\end{smallmatrix}\right)$. The same matrix represents
the Blanchfield pairing over $R_k$, but over $R_k$ this matrix is congruent to a matrix $\left(\begin{smallmatrix} \wt{\alpha}(t) & 0 \\ 0 & 1\end{smallmatrix}\right)$
for $\wt{\alpha}(t)\in R_k[t,t^{-1}]$.
By \cite[Proposition 1.7.1]{Ran81} (see also \cite[Proposition 3.1]{BF1}) the matrix $(\wt{\alpha}(t))$ also represents the Blanchfield pairing over $R_k[t,t^{-1}]$.
\end{proof}
The following corollary is well known; see \cite{Przy}.
\begin{corollary}
If $K$ is algebraically $k$--simple, then its Alexander polynomial is equal to $\Delta_K(t)=1+k\alpha(t)$, where $\alpha(t)\in\mathbb{Z}[t,t^{-1}]$.
\end{corollary}
\begin{proof}
This follows from Theorem~\ref{thm:main} because if $A(t)$ represents the Blanchfield pairing of a knot $K$, then $\Delta_K(t)=\det A(t)$
up to multiplication by a unit in $\Lambda$.
\end{proof}
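A quick symbolic check of the determinant computation behind this corollary (sympy assumed; illustration only, with $\alpha$ treated as an unspecified function of $t$):

```python
import sympy as sp

t, k = sp.symbols('t k')
alpha = sp.Function('alpha')(t)

# Matrix from Theorem thm:main representing the Blanchfield pairing.
A = sp.Matrix([[alpha, 1], [1, -k]])
det_A = sp.expand(A.det())
print(det_A)  # equals -(1 + k*alpha(t)), a unit multiple of 1 + k*alpha(t)
```

Since $-1$ is a unit in $\Lambda$, the determinant agrees with $1+k\alpha(t)$ up to a unit, as used in the proof.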
\section{Linking forms}\label{sec:linkingforms}
An abstract \emph{linking pairing} is a pair $(H,l)$,
where $H$ is a finite abelian group of odd order and $l$ is a bilinear symmetric pairing $l\colon H\times H\to\mathbb{Q}/\mathbb{Z}$.
As a model example, if $Y$ is a closed three--manifold with $b_1(Y)=0$, there is defined a linking pairing $l(Y)$ on $H=H_1(Y;\mathbb{Z})$. If $Y=\Sigma(K)$
is the double branched cover of a knot $K$, we denote this pairing by $l(K)$. It is known that the linking pairing $l(K)$ is represented by $V+V^T$,
where $V$ is the Seifert matrix for $K$. The meaning of `represented' is explained in the following definition.
\begin{definition}
Let $P$ be an $n\times n$ matrix with integer coefficients and such that $\det P$ is odd. The \emph{linking form represented by $P$} is the pair
$(H(P),l(P))$, where $H(P)=\mathbb{Z}^n/P\mathbb{Z}^n$ and $l(P)$ is the bilinear
form defined by
\begin{align*}
\mathbb{Z}^n/P\mathbb{Z}^n\times \mathbb{Z}^n/P\mathbb{Z}^n&\to \mathbb{Q}/\mathbb{Z}\\
(a,b)&\mapsto a^T P^{-1} b\bmod 1.
\end{align*}
\end{definition}
We have the following relation between the Blanchfield form for $K$ and the linking form $l(K)$.
\begin{proposition}[see \expandafter{\cite[Lemma 3.3]{BF1}}]
If $A$ is a matrix over $\Lambda$ representing the Blanchfield pairing, then $l(A(-1))=2l(K)$.
\end{proposition}
Here $2l(K)$ means the linking pairing with the same underlying group as $l(K)$, but the linking form is multiplied by $2$; compare \cite[Section 3]{BF1}.
We can use this result to obtain the following corollary.
\begin{corollary}\label{cor:linking_form_obstruction}
Suppose $K$ is algebraically $k$--simple. Then the linking form $2l(K)$ is isometric to the linking form
represented by
\begin{equation}\label{eq:asB}
B=\begin{pmatrix} d & 1\\ 1 & -k\end{pmatrix},
\end{equation}
where $d=\alpha(-1)\in\mathbb{Z}$ is such that $-(dk+1)$ is the (signed) determinant of $K$.
\end{corollary}
As in \cite[Section 5.2]{BF1} we can use Corollary~\ref{cor:linking_form_obstruction} to obstruct untwisting number $2$.
From Corollary~\ref{cor:linking_form_obstruction} we immediately recover Theorem~\ref{thm:iscyclic} from the introduction.
\begin{proposition}\label{prop:is_cyclic}
If $K$ is algebraically $k$--simple and $\Sigma(K)$ is the double branched cover, then $H_1(\Sigma(K);\mathbb{Z})$ is cyclic.
\end{proposition}
\begin{remark}
It follows that Wendt's criterion for the unknotting number \cite{We37}, coming from the double branched covers,
does not distinguish between knots that have unknotting number $1$ and knots that
are algebraically $k$--simple for some $k$.
\end{remark}
\begin{proof}[Proof of Proposition~\ref{prop:is_cyclic}]
By Corollary~\ref{cor:linking_form_obstruction} we infer that $H_1(\Sigma(K);\mathbb{Z})\cong \mathbb{Z}^2/B\mathbb{Z}^2$, where $B$ is as
in \eqref{eq:asB}. Subtract from the first column of $B$ the second column multiplied by $d$ to obtain the matrix
$\left(\begin{smallmatrix} 0 & 1 \\ 1+dk & -k\end{smallmatrix}\right)$. Then add to the second row the first one multiplied by $k$. We obtain the matrix
\[B'=\begin{pmatrix} 0 & 1 \\ 1+dk & 0\end{pmatrix}.\]
Row and column operations on matrices do not affect the cokernel, hence $\mathbb{Z}^2/B'\mathbb{Z}^2\cong\mathbb{Z}^2/B\mathbb{Z}^2$. Evidently we have $\mathbb{Z}^2/B'\mathbb{Z}^2\cong\mathbb{Z}/|dk+1|\mathbb{Z}$.
\end{proof}
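The row and column reduction above amounts to computing the Smith normal form of $B$. A small computational check (sympy assumed; the sample values of $d$ and $k$ are arbitrary):

```python
from sympy import Matrix, ZZ
from sympy.matrices.normalforms import smith_normal_form


def coker_invariants(d, k):
    """Invariant factors of Z^2 / B Z^2 for B = [[d, 1], [1, -k]]."""
    B = Matrix([[d, 1], [1, -k]])
    S = smith_normal_form(B, domain=ZZ)
    return [abs(S[i, i]) for i in range(2)]


# The first invariant factor is 1 (the off-diagonal entries of B are 1),
# so the cokernel is cyclic of order |dk + 1|.
for d, k in [(2, 3), (-1, 4), (5, 2)]:
    print(d, k, coker_invariants(d, k))
```

The first invariant factor is always $1$ because the entries of $B$ have greatest common divisor $1$, which is exactly the cyclicity statement of Proposition~\ref{prop:is_cyclic}.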
\end{document} |
\begin{document}
\publicationdetails{20}{2018}{1}{22}{3877}
\title{Computing minimum rainbow and strong rainbow colorings of block graphs}
\begin{abstract}
A path in an edge-colored graph $G$ is \emph{rainbow} if no two edges of it are colored the same. The graph $G$ is \emph{rainbow-connected} if there is a rainbow path between every pair of vertices. If there is a rainbow shortest path between every pair of vertices, the graph $G$ is \emph{strongly rainbow-connected}.
The minimum number of colors needed to make~$G$ rainbow-connected is known as the \emph{rainbow connection number} of~$G$, and is denoted by $\rc(G)$. Similarly, the minimum number of colors needed to make~$G$ strongly rainbow-connected is known as the \emph{strong rainbow connection number} of~$G$, and is denoted by~$\src(G)$.
We prove that for every~$k \geq 3$, deciding whether $\src(G) \leq k$ is $\NP$-complete for split graphs, which form a subclass of chordal graphs. Furthermore, there exists no polynomial-time algorithm for approximating the strong rainbow connection number of an $n$-vertex split graph within a factor of $n^{1/2-\epsilon}$ for any~$\epsilon > 0$ unless~$\P = \NP$.
We then turn our attention to block graphs, which also form a subclass of chordal graphs.
We determine the strong rainbow connection number of block graphs, and show it can be computed in linear time.
Finally, we provide a polynomial-time characterization of bridgeless block graphs with rainbow connection number at most 4.
\end{abstract}
\section{Introduction}
Let $G$ be an edge-colored undirected graph that is simple and finite. A path in $G$ is \emph{rainbow} if no two edges of it are colored the same. The graph $G$ is \emph{rainbow-connected} if there is a rainbow path between every pair of vertices. If there is a rainbow shortest path between every pair of vertices, the graph $G$ is \emph{strongly rainbow-connected}. The minimum number of colors needed to make $G$ rainbow-connected is known as the \emph{rainbow connection number} of $G$ and is denoted by $\rc(G)$. Likewise, the minimum number of colors needed to make $G$ strongly rainbow-connected is known as the \emph{strong rainbow connection number} of $G$ and is denoted by $\src(G)$. Rainbow connectivity was introduced by Chartrand, Johns, McKeon, and Zhang~\cite{Chartrand2008} in 2008. While being a theoretically interesting way of strengthening connectivity, rainbow connectivity also has possible applications in data transfer and networking~\cite{Li2012}. The study of rainbow colorings and several of their variants has recently attracted increasing attention in the research community. For a comprehensive treatment, we refer the reader to the books~\cite{Chartrand2008b, Li2012b}, or the recent survey~\cite{Li2012}.
Denote by $n$ the number of vertices and by $m$ the number of edges of a graph in question. It is easy to verify that $\rc(G) \leq n - 1$; indeed, such an edge-coloring is obtained by coloring the edges of a spanning tree of $G$ in distinct colors. On the other hand, we always need at least as many colors as the length of a longest shortest path in $G$. Thus, an easy lower bound for $\rc(G)$ is given by the diameter of $G$, denoted by $\diam(G)$. That is, we have $\diam(G) \leq \rc(G) \leq n-1$. It also holds that $\rc(G) \leq \src(G)$, since every strongly rainbow-connected graph is also rainbow-connected.
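These bounds are easy to check computationally. The following sketch (plain Python; the helper names and example graphs are our own illustration, not part of the paper) computes $\diam(G)$ by running a breadth-first search from every vertex, and checks the sandwich $\diam(G) \leq \rc(G) \leq n-1$ on a path, where the two bounds coincide.

```python
from collections import deque

def bfs_ecc(adj, s):
    """Eccentricity of s: distance to the farthest vertex, by BFS."""
    dist = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return max(dist.values())

def diameter(adj):
    """Length of a longest shortest path in a connected graph."""
    return max(bfs_ecc(adj, s) for s in adj)

# A path on 4 vertices: rc is sandwiched between diam = 3 and n - 1 = 3,
# so rc(P4) = 3 -- consistent with a tree needing m = n - 1 colors.
path4 = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
d, n = diameter(path4), len(path4)
assert d <= n - 1  # diam(G) <= rc(G) <= n - 1
```

For a tree the two bounds meet only on paths; in general the gap between $\diam(G)$ and $n-1$ can be large.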
For extremal cases, it is easy to see that $\rc(G) = \src(G) = 1$ if and only if $G$ is a complete graph. Similarly, we have that $\rc(G) = \src(G) = m$ if and only if $G$ is a tree (for proofs, see~\cite{Chartrand2008}). Chartrand~\emph{et al.}~\cite{Chartrand2008} also determined the exact rainbow and strong rainbow connection numbers for some structured graph classes, including cycles, wheel graphs, and complete multipartite graphs.
Not surprisingly, determining the rainbow connection numbers is computationally hard. Chakraborty, Fischer, Matsliah, and Yuster~\cite{Chakraborty2009} proved that given a graph $G$, it is $\NP$-complete to decide whether $\rc(G)=2$. Ananth, Nasre, and Sarpatwar~\cite{Ananth2011} further showed that for every $k \geq 3$, deciding whether $\rc(G) \leq k$ is $\NP$-complete. Using different ideas, a proof of hardness for every $k \geq 2$ is also given by Le and Tuza~\cite{Le:tech}. The hardness of computing the strong rainbow connection number was shown by Ananth {\em et al.}~\cite{Ananth2011} as well. In particular, they proved that for every $k \geq 3$, deciding whether $\src(G) \leq k$ is $\NP$-complete, even when $G$ is bipartite. Furthermore, as $\rc(G) = 2$ if and only if $\src(G) = 2$ (for a proof, see~\cite{Chartrand2008}), it follows that deciding whether $\src(G) \leq k$ is $\NP$-complete for every $k \geq 2$.
Because rainbow-connecting graphs optimally is hard in general, there has been interest in approximation algorithms and easier special cases. Basavaraju, Chandran, Rajendraprasad, and Ramaswamy~\cite{Basavaraju2012} presented approximation algorithms for computing the rainbow connection number with factors $(r+3)$ and $(d+3)$ respectively, where $r$ is the radius and $d$ the diameter of the input graph. Chandran and Rajendraprasad~\cite{Chandran2013} proved that there is no polynomial-time algorithm to rainbow-connect graphs with fewer than twice the optimum number of colors, unless $\P = \NP$. Ananth {\em et al.}~\cite{Ananth2011} showed that there is no polynomial-time algorithm for approximating the strong rainbow connection number of an $n$-vertex graph with a factor of $n^{1/2-\epsilon}$ for any $\epsilon > 0$, unless $\NP = \ZPP$.
There is a line of research studying rainbow connection on chordal graphs (see e.g.,~\cite{Basavaraju2012,Chandran2012,Chandran2013,Chandran2014}).
In this regard, it is known to be $\NP$-complete to decide whether $\rc(G) \leq k$ for every $k \geq 2$ even when $G$ is chordal~\cite{Chandran2012,Chandran2014}.
Furthermore, the rainbow connection number of a chordal graph cannot be approximated to a factor less than $5/4$ unless $\P = \NP$~\cite{Chandran2013}.
Motivated by this result, there has been interest in a deeper investigation of the rainbow connection number of subclasses of chordal graphs.
Chandran, Rajendraprasad, and Tesa{\v{r}}~\cite{Chandran2014} showed that for split graphs, the problem of deciding whether $\rc(G) = k$ is $\NP$-complete for $k \in \{2,3\}$, and in $\P$ for all other values of $k$.
Chandran and Rajendraprasad~\cite{Chandran2012} showed split graphs can be rainbow-connected in linear time using at most one more color than the optimum.
In the same paper, the authors also gave an exact linear-time algorithm for rainbow-connecting threshold graphs.
Furthermore, they noted that their result is apparently the first efficient algorithm for optimally rainbow-connecting any non-trivial subclass of graphs.
To the best of our knowledge, the complexity of strongly rainbow-connecting chordal graphs is an open question.
Moreover, we are not aware of any efficient exact algorithms for computing the strong rainbow connection number of a non-trivial subclass of graphs.
\heading{Our results}
We further investigate the rainbow and strong rainbow connection number of subclasses of chordal graphs.
We extend the known hardness results for computing the strong rainbow connection number by showing it is $\NP$-complete to decide whether a given split graph can be strongly rainbow-connected in $k$ colors, where $k \geq 3$.
As a by-product of the proof, we obtain that there exists no polynomial-time algorithm for approximating the strong rainbow connection number of an $n$-vertex split graph with a factor of $n^{1/2-\epsilon}$ for any $\epsilon > 0$ unless $\P = \NP$.
These negative results further motivate the investigation of tractable special cases. Indeed, we determine the strong rainbow connection number of block graphs, and show that any block graph can be strongly rainbow-connected optimally in linear time.
Finally, we turn to the rainbow connection number, and characterize the bridgeless block graphs with rainbow connection number 2, 3, or 4.
\section{Preliminaries}
The graphs considered are connected, simple, and undirected. For graph-theoretic concepts not covered here, we refer the reader to~\citep{Diestel2005}.
Let $G=(V,E)$ be a graph. The \emph{diameter} of $G$, denoted by $\diam(G)$, is the length of a longest shortest path in $G$. The \emph{degree} of a vertex is the number of edges incident to it. The \emph{minimum degree} of $G$, denoted by $\delta(G)$, is the minimum of the degrees of all the vertices in $G$. If $G$ is obvious from the context, we may shorten $\delta(G)$ to $\delta$. Finally, a \emph{dominating set} is a subset $D \subseteq V$ of vertices such that every vertex in $V \setminus D$ is adjacent to at least one vertex in $D$. If $D$ induces a connected subgraph in $G$, we say $D$ is a \emph{connected dominating set}. The minimum size of a (connected) dominating set in $G$, denoted by $\gamma(G)$ (respectively, $\gamma_c(G)$), is known as the \emph{(connected) domination number} of $G$.
A \emph{chord} is an edge joining two non-consecutive vertices in a cycle. A graph is \emph{chordal} if every cycle of length 4 or more has a chord. Equivalently, a graph is chordal if it contains no induced cycle of length~4 or more.
Let us introduce some subclasses of chordal graphs that are most central for this work. A \emph{split graph} is a graph whose vertex set can be partitioned into a clique and an independent set. A \emph{cut vertex} is a vertex whose removal will disconnect the graph. A \emph{biconnected graph} is a connected graph having no cut vertices. In a \emph{block graph}, every maximal biconnected component, known as a \emph{block}, is a clique. In a block graph $G$, different blocks intersect in at most one vertex, which is a cut vertex of $G$. In other words, every edge of $G$ lies in a unique block, and $G$ is the union of its blocks. A particular property of block graphs is that they are \emph{geodetic}, meaning there is exactly one shortest path between every pair of vertices (see e.g.,~\cite{Stemple1968}). Both split graphs and block graphs are chordal.
The concept of separators is central to chordal graphs. A set $S \subseteq V$ disconnects a vertex $a$ from vertex $b$ in a graph $G$ if every path of $G$ between $a$ and $b$ contains a vertex from $S$. A non-empty set $S \subseteq V(G)$ is a \emph{minimal separator} of $G$ if there exists $a$ and $b$ such that $S$ disconnects $a$ from $b$ in $G$, and no proper subset of $S$ disconnects $a$ from $b$ in $G$. If we want to identify the vertices that $S$ disconnects, we may also refer to $S$ as a \emph{minimal $a$-$b$ separator}.
For a more comprehensive treatment on chordal graphs, we refer the reader to~\citep{Blair1993}.
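Blocks, cut vertices, and (in block graphs) minimal separators can all be extracted from a single depth-first search. As a concrete reference point for the notions above, here is a standard lowpoint-based sketch (plain Python, recursive for brevity; the function name and example graph are ours, not from the paper).

```python
import sys
from itertools import count

def blocks_and_cut_vertices(adj):
    """Return (blocks, cut_vertices) of a connected simple graph.
    Each block (maximal biconnected component) is a frozenset of
    vertices; standard Hopcroft-Tarjan lowpoint method."""
    sys.setrecursionlimit(10000)
    disc, low = {}, {}
    timer = count()
    edge_stack = []
    blocks, cuts = [], set()

    def dfs(u, parent):
        disc[u] = low[u] = next(timer)
        children = 0
        for v in adj[u]:
            if v == parent:
                continue
            if v not in disc:
                children += 1
                edge_stack.append((u, v))
                dfs(v, u)
                low[u] = min(low[u], low[v])
                if low[v] >= disc[u]:       # u separates v's subtree
                    if parent is not None or children > 1:
                        cuts.add(u)
                    block = set()
                    while True:             # pop one biconnected component
                        x, y = edge_stack.pop()
                        block.update((x, y))
                        if (x, y) == (u, v):
                            break
                    blocks.append(frozenset(block))
            elif disc[v] < disc[u]:         # back edge
                edge_stack.append((u, v))
                low[u] = min(low[u], disc[v])

    dfs(next(iter(adj)), None)
    return blocks, cuts

# Two triangles sharing vertex 2: a block graph with one cut vertex.
G = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3, 4], 3: [2, 4], 4: [2, 3]}
bs, cv = blocks_and_cut_vertices(G)
assert cv == {2} and len(bs) == 2
```

In a block graph every block returned this way is a clique, and each minimal separator is a single cut vertex.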
\section{Hardness of strongly rainbow-connecting split graphs}
\label{sec:hardness_src}
In this section, we show that deciding whether a split graph can be strongly rainbow-connected with $k \geq 3$ colors is $\NP$-complete. We remark that it follows from the work of Chandran~\emph{et al.}~\cite{Chandran2014} that the problem is $\NP$-complete for $k=2$; however, to the best of our knowledge, the complexity of the problem for $k \geq 3$ has been open even for chordal graphs.
In the \emph{$k$-subset strong rainbow connectivity problem} (\probkSubsetSRC), we are given a graph $G$, a set of pairs $P \subseteq V(G) \times V(G)$, and an integer $k$. The goal is to decide whether $E(G)$ can be colored with $k$ colors such that each pair of vertices in $P$ is connected by a rainbow shortest path. The problem was shown to be $\NP$-complete by Ananth {\em et al.}~\cite{Ananth2011} even when the graph $G$ is a star.
\begin{lemma}[\hspace{1sp}{\cite{Ananth2011}}]
For every $k \geq 3$, the \probkSubsetSRC problem is $\NP$-complete when the graph $G$ is a star.
\end{lemma}
We reduce from this problem, and make use of some ideas of~\cite{Chakraborty2009} in the following. For convenience, we denote by \probkSRC the problem of deciding whether a given a graph $G$ can be strongly rainbow-connected in $k$ colors.
\begin{theorem}
\label{thm_src_chordal_hardness}
For every integer $k \geq 3$, it is $\NP$-complete to decide if $\src(G) \leq k$, where $G$ is a split graph.
\end{theorem}
\begin{proof}
Let $I = (S, P, k)$ be an instance of the \probkSubsetSRC problem, where $S=(V,E)$ is a star, both $p$ and $q$ in each $(p,q) \in P$ are leaves of $S$, and $k \geq 3$ is an integer. We construct an instance $I' = (G')$ of \probkSRC, where $G'=(V',E')$ is a split graph such that $I$ is a YES-instance of \probkSubsetSRC iff $I'$ is a YES-instance of \probkSRC.
\begin{figure}
\caption{A star graph $S$ on the vertex set $\{a,1,2,3\}$.}
\label{fig:reduction_chordal}
\end{figure}
Let $a$ be the central vertex of $S$. For every vertex $v \in V \setminus \{a\}$, we add a new vertex $x_v$, and for every pair of leaves $(u,v) \in (V \times V) \setminus P$, we add a new vertex $x_{(u,v)}$. Formally, we construct $G'=(V',E')$ such that
\begin{itemize}
\item $V' = V \cup \{ x_v \mid v \in V \setminus \{a\}\} \cup \{ x_{(u,v)} \mid (u,v) \in (V \times V) \setminus P\}$,
\item $E' = E \cup E_1 \cup E_2 \cup E_3$,
\item $E_1 = \{ (v,x_v), (a,x_v) \mid v \in V \setminus \{a\} \}$,
\item $E_2 = \{ (u,x_{(u,v)}), (v,x_{(u,v)}), (a,x_{(u,v)}) \mid (u,v) \in (V \times V) \setminus P \}$, and
\item $E_3 = \{ (x,x') \mid x,x' \in V' \setminus V \}$.
\end{itemize}
Let us then verify that $G'$ is a split graph.
Observe the leaves of $S$ form an independent set in $G'$.
The remaining vertices $\{a\} \cup (V' \setminus V)$ form a clique, proving $G'$ is split. Moreover, $a$ is a dominating vertex.
An example illustrating the construction is given in Figure~\ref{fig:reduction_chordal}.
We will now prove $G'$ is strongly rainbow-connected with $k$ colors if and only if $(S, P)$ is $k$-subset strongly rainbow-connected. First, suppose $(S, P)$ is not $k$-subset strongly rainbow-connected; we will show $G'$ is not strongly rainbow-connected with $k$ colors. Observe that for each $(p,q) \in P$, there is a unique shortest path between~$p$ and~$q$ in~$S$. Moreover, the same holds for $G'$. Therefore, any strong rainbow coloring using $k$ colors must make this path rainbow in $G'$. But because the pairs in~$P$ cannot be strongly rainbow-connected with $k$ colors in~$S$, the graph~$G'$ cannot be strongly rainbow-connected with $k$ colors.
Finally, suppose $(S, P)$ is $k$-subset strongly rainbow-connected under some edge-coloring $\chi : E \to \{c_1,\ldots,c_k\}$. We will describe an edge-coloring $\chi'$ given to $G'$ by extending $\chi$. We retain the original coloring on the edges of $S$, that is, $\chi'(e) = \chi(e)$, for every $e \in E$. The rest of the edges are colored as follows:
\begin{itemize}
\item $\chi'(e) = c_1$, for all $e \in E_1$,
\item $\chi'(e) = c_2$, for all $e \in E_3$, and
\item $\chi'(ux_{(u,v)})=c_1$, $\chi'(vx_{(u,v)}) = c_2$, and $\chi'(ax_{(u,v)}) = c_2$ for all $(u,v) \notin P$.
\end{itemize}
It is straightforward to verify $G'$ is indeed strongly rainbow-connected under $\chi'$, completing the proof.~$\square$
\end{proof}
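To make the construction in the proof concrete, the following sketch (plain Python; the vertex encodings, function name, and toy instance are ours) builds $G'$ from a small star instance and checks the split property: the leaves form an independent set while the center and the gadget vertices form a clique.

```python
from itertools import combinations

def build_Gprime(leaves, P):
    """Construct G' from a star with center 'a' and the given leaves,
    following the proof: a gadget vertex x_v per leaf v, a gadget
    x_{(u,v)} per non-pair (u, v), and all gadgets pairwise adjacent."""
    a = 'a'
    nonpairs = [(u, v) for u, v in combinations(leaves, 2)
                if (u, v) not in P and (v, u) not in P]
    gadgets = [('x', v) for v in leaves] + \
              [('x', u, v) for u, v in nonpairs]
    E = {frozenset((a, v)) for v in leaves}                    # star edges
    E |= {frozenset((v, ('x', v))) for v in leaves}            # E_1
    E |= {frozenset((a, x)) for x in gadgets}                  # E_1, E_2
    for u, v in nonpairs:                                      # E_2
        E |= {frozenset((u, ('x', u, v))), frozenset((v, ('x', u, v)))}
    E |= {frozenset(p) for p in combinations(gadgets, 2)}      # E_3
    V = {a, *leaves, *gadgets}
    return V, E, gadgets

leaves, P = [1, 2, 3], {(1, 2)}
V, E, gadgets = build_Gprime(leaves, P)
# Leaves are pairwise non-adjacent; 'a' plus all gadgets form a clique.
assert all(frozenset(p) not in E for p in combinations(leaves, 2))
assert all(frozenset(p) in E for p in combinations(['a'] + gadgets, 2))
```

With three leaves and one pair in $P$ there are two non-pairs, so $G'$ has $3$ gadgets $x_v$, $2$ gadgets $x_{(u,v)}$, and $9$ vertices in total.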
Ananth {\em et al.}~\cite{Ananth2011} reduced the problem of deciding whether a graph $G$ has chromatic number at most~$k$ to \probkSubsetSRC.
Finally, they reduced \probkSubsetSRC to the problem of deciding whether a bipartite graph $G'$ can be strongly rainbow-connected in $k$ colors.
In this final step, the size of $G'$ is quadratic in the input graph~$G$ of the chromatic number instance.
Moreover, since the chromatic number of an $n$-vertex graph cannot be approximated with a factor of $n^{1-\epsilon}$ for any $\epsilon > 0$ unless $\P = \NP$~\cite{Zuckerman2006}, they obtained that $\src(G')$ cannot be approximated with a factor of $n^{1/2-\epsilon}$, under the same complexity-theoretic assumptions.
We apply precisely the same reasoning to the split graph~$G'$ obtained in Theorem~\ref{thm_src_chordal_hardness}: starting from an arbitrary instance $G$ of the chromatic number problem and chaining the reductions, the resulting $G'$ has size quadratic in~$G$.
We obtain the following.
\begin{theorem}
There is no polynomial-time algorithm that approximates the strong rainbow connection number of an $n$-vertex split graph with a factor of $n^{1/2-\epsilon}$ for any $\epsilon > 0$, unless $\P = \NP$.
\end{theorem}
\section{Strongly rainbow-connecting block graphs in linear time}
\label{sec:algorithm}
In this section, we determine exactly the strong rainbow connection number of block graphs.\footnote{We remark that the presentation given here is simpler than the one given in the doctoral thesis of the second author~\cite[Section~5.2]{Lauri-phd}.} Furthermore, we present an exact linear-time algorithm for constructing a strong rainbow coloring using $\src(G)$ colors for a given block graph $G$. If an explicit coloring is not required (i.e., if the value of $\src(G)$ suffices), the algorithm can be further simplified.
Let $B$ be a block in a block graph $G$ whose edges are colored by using colors from the set $R = \{c_1,\ldots,c_r\}$. Then we say that $B$ is \emph{colored} and $B$ is \emph{associated} with each color $c_1,\ldots,c_r$.
In particular, if $B$ is associated with a color $c$ and no other block is associated with $c$, then we say $B$ is \emph{uniquely associated} with color $c$.
Furthermore, any color from $R$ can be used as a representative for the color of $B$. Thus we may say that $B$ has been colored $c_i$ for any $i \in \{ 1,\ldots,r \}$.
\begin{lemma}
\label{lemma_coloring}
Let $G$ be a block graph, let $B$ be a block that is uniquely associated with color $c$, let $(u,v)$ be an edge in $G$ such that $u,v \notin B$, and let $y$ be the minimal $a$-$b$ separator for any $a \in B \setminus \{y\}$ and $b \in \{u,v\}$. If no shortest $y$-$u$ path or shortest $y$-$v$ path contains $(u,v)$, then by coloring $(u,v)$ with the color $c$, any shortest path between $u$ or $v$ and $w \in B$ contains at most one edge of color $c$.
\end{lemma}
\begin{proof}
Any shortest path between $u$ or $v$ and $y$ does not contain the edge $(u,v)$, and does not contain any edges in $B$, so these paths do not have any edges of color $c$. Any shortest path between $y$ and $w$ is just an edge of color $c$.~$\square$
\end{proof}
The algorithm for strongly rainbow-connecting a block graph is presented in Algorithm~\ref{alg:src_block}.
Given a block graph $G$, the algorithm partitions the blocks of $G$ into two sets $\mathcal{V}_{<3}$ and $\mathcal{V}_{\geq 3}$ based on the number of cut vertices contained in each block.
That is, if a block contains less than~3 cut vertices, it is added to $\mathcal{V}_{<3}$.
Otherwise, it is added to $\mathcal{V}_{\geq 3}$.
Then, for each block in $\mathcal{V}_{<3}$, we introduce a new color and use it to color the edges of the block.
At the final step the algorithm goes through every block $B \in \mathcal{V}_{\geq 3}$.
Denote by $C(B)$ the set of cut vertices in $B$.
Fix~3 distinct vertices $b_1$, $b_2$, and $b_3$ in $C(B)$.
Observe that $G \setminus E(B)$ has at least 3 connected components, and that $b_1$, $b_2$, and $b_3$ lie in different connected components.
Remove $E(B)$, and from each of the connected components containing $b_1$, $b_2$, and $b_3$, pick a block in $\mathcal{V}_{<3}$.
The three picked blocks are each associated with a distinct color.
These colors are then used to color the edges of the block $B$.
The algorithm is illustrated in Figure~\ref{fig:src-alg-ex}: note that there are several choices of how the blocks in $\mathcal{V}_{\geq 3}$ are colored in the example, and the illustration shows one possibility.
\begin{figure}
\caption{\textbf{(a)}}
\label{fig:src-alg-ex}
\end{figure}
\begin{algorithm}[t]
\caption{Algorithm for strongly rainbow-coloring a block graph}
\label{alg:src_block}
\begin{algorithmic}[1]
\Require A block graph $G$
\Ensure A strong rainbow coloring of $G$
\State $\mathcal{V}_{<3} := \{ U \mid U \in \mathcal{B}(G) \wedge |C(U)| < 3 \}$ \Comment{Denote by $\mathcal{B}(G)$ the blocks of $G$}
\State $\mathcal{V}_{\geq 3} := \mathcal{B}(G) \setminus \mathcal{V}_{<3}$
\ForAll{$U \in \mathcal{V}_{<3}$}
\State Color edges in $U$ with a fresh distinct color
\EndFor
\ForAll{$B \in \mathcal{V}_{\geq 3}$}
\State Let $b_1$, $b_2$, $b_3$ be distinct cut vertices in $C(B)$
\State Let $C_1$, $C_2$, $C_3$ be the connected components of $G \setminus E(B)$ containing $b_1$, $b_2$, $b_3$, respectively
\State From each of $C_1$, $C_2$, $C_3$, find a block in $\mathcal{V}_{<3}$
\State Let $c_1,c_2,c_3$ be the respective colors associated with the found blocks
\State Color all edges not incident to $b_1$ with color $c_1$
\State Color all edges incident to $b_1$, except $(b_1,b_2)$, with color $c_2$
\State Color the edge $(b_1,b_2)$ with color $c_3$
\EndFor
\end{algorithmic}
\end{algorithm}
The correctness of Algorithm~\ref{alg:src_block} is established by an invariant, which says that we always maintain the property that if the shortest path between two vertices is colored, then it is rainbow. We refer to this property as the \emph{shortest rainbow path property}.
\begin{theorem}
\label{thm_alg_correctness}
At every step, Algorithm~\ref{alg:src_block} maintains the shortest rainbow path property.
\end{theorem}
\begin{proof}
Before the execution of the first loop, nothing is colored so the claim is trivially true. Furthermore, the first loop obviously maintains the property. To see this, consider any shortest path of length $\ell \geq 1$ at any step. The path consists of $\ell$ edges that are in $\ell$ distinct blocks. Since each colored block has received a distinct color, the shortest path is rainbow. This establishes the base step for the correctness of the second loop.
Assume after iteration $i-1$ of the second loop, if the shortest path between any two vertices is colored, then it is rainbow.
We show that this property is maintained after iteration $i$ of the second loop.
Consider any edge $(u,v)$ in $B$ not incident to $b_1$, and let $y \in C_1$ be the minimal $a$-$b$ separator for any $a \in C_1 \setminus \{y\}$ and $b \in \{u,v\}$.
The algorithm states that $(u,v)$ will be colored with color $c_1$, which is uniquely associated with a block in $C_1$.
Because $u$ and $v$ are both at distance 1 from $b_1$, it follows that neither the shortest $y$-$u$ path nor the shortest $y$-$v$ path contains $(u,v)$. Thus, by Lemma~\ref{lemma_coloring}, if the shortest $w$-$u$ path, for $w \in C_1$, is colored, then it is rainbow (the same is true for the shortest $w$-$v$ path). Therefore, by coloring $(u,v)$ with color $c_1$, the shortest rainbow path property is maintained.
Consider any edge $(u,v)$ in $B$ not incident to $b_2$, and let $y \in C_2$ be the minimal $a$-$b$ separator for any $a \in C_2 \setminus \{y\}$ and $b \in \{u,v\}$.
By Lemma~\ref{lemma_coloring}, this edge can be colored with color $c_2$ to maintain the shortest rainbow path property.
Notice that $u$ and $v$ are both at a distance 1 from $b_2$, and because all edges not incident to $b_1$ have been colored, it follows that $b_1$ must be one of these vertices (i.e., either $u=b_1$ or $v=b_1$).
So we conclude that every edge incident to $b_1$, except $(b_1,b_2)$, can be colored with $c_2$ to maintain the shortest rainbow path property.
Now the only uncolored edge in $B$ is the edge $(b_1,b_2)$. Because $b_1$ and $b_2$ are both at distance 1 from $b_3$, Lemma~\ref{lemma_coloring} assures us that by coloring $(b_1,b_2)$ with color $c_3$, the shortest rainbow path property is maintained.~$\square$
\end{proof}
Let us then consider the complexity of Algorithm~\ref{alg:src_block}.
It is an easy observation that lines 1 to 5 take linear time.
Observe that on line 9, we essentially perform reachability queries of the form \emph{given a block $B \in \mathcal{V}_{\geq 3}$, return a block containing less than 3 cut vertices that is reachable from cut vertex $b \in B$ with a path containing no cut vertices of $B$ besides $b$}.
In our context, such a query is performed for each cut vertex $b_1$, $b_2$, and $b_3$.
The naive way of answering such queries is to start a depth-first search (DFS) from each of $b_1$, $b_2$, and $b_3$, and halt when a suitable block is found. However, such an implementation requires $\Omega(d)$ time, where $d$ is the diameter of the input graph $G$. Using elementary techniques, we can preprocess the block graph $G$ in linear time before the execution of line 1 so that such queries can be answered in $O(1)$ time. Thus, the total runtime is linear, as the for-loop on line 6 iterates $O(n)$ times.
\begin{theorem}
Algorithm~\ref{alg:src_block} constructs a strong rainbow coloring in $O(n+m)$ time.
\end{theorem}
As Algorithm~\ref{alg:src_block} is correct and uses $k$ colors where $k$ is the number of blocks containing less than 3 cut vertices, we establish that $\src(G) \leq k$.
In the following, we prove that this is in fact optimal by showing a matching lower bound.
\begin{lemma}
\label{lem:src_lower_bound}
Let $G$ be a block graph, and let $k$ be the number of blocks containing less than 3 cut vertices. Then, $\src(G) \geq k$.
\end{lemma}
\begin{proof}
Let $A$ be a set of $k$ edges in $G$, one from each block containing less than 3 cut vertices, selected as follows.
For each block $B \in \mathcal{B}(G)$, if $|C(B)| = 1$, pick an edge incident to the cut vertex.
On the other hand, if $|C(B)| = 2$, pick the edge connecting the two cut vertices.
We claim that if we are to strongly rainbow-connect $G$, then the edges in $A$ must all receive distinct colors.
Suppose there are 2 edges in $A$ that are of the same color, say $(u,x) \in E(B)$ and $(v,y) \in E(B')$.
Without loss, we may assume that $u$ and $v$ are cut vertices of $B$ and $B'$, respectively, such that $d(u,v)$ is minimized.
As $G$ is geodetic, the shortest $x$-$y$ path is unique, and it contains two edges of the same color.~$\square$
\end{proof}
\noindent By combining the previous lemma with Theorem~\ref{thm_alg_correctness}, we arrive at the following.
\begin{theorem}
\label{thm_src_equals_k}
Let $G$ be a block graph, and let $k$ be the number of blocks containing less than 3 cut vertices. Then, $\src(G) = k$.
\end{theorem}
If an explicit coloring is not required, then it is easy to see that there is a linear-time algorithm for computing $\src(G)$, where $G$ is a block graph. This is obtained by counting the number of blocks containing less than 3 cut vertices.
\begin{corollary}
There is an algorithm such that given a block graph $G$, it computes $\src(G)$ in $O(n+m)$ time.
\end{corollary}
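Assuming the block decomposition is available (for instance from a standard biconnected-components DFS), the counting in the corollary takes only a few lines. The sketch below (plain Python; the helper name and example graphs are ours) uses the fact that in a block graph a vertex is a cut vertex exactly when it lies in at least two blocks.

```python
from collections import Counter

def src_block_graph(blocks):
    """src(G) for a block graph given as its list of blocks, by
    Theorem: src(G) = number of blocks with fewer than 3 cut vertices.
    A vertex is a cut vertex iff it lies in at least two blocks."""
    multiplicity = Counter(v for B in blocks for v in B)
    cut = {v for v, m in multiplicity.items() if m >= 2}
    return sum(1 for B in blocks if len(cut & set(B)) < 3)

# n triangles glued to the vertices of a K_n: every triangle has one
# cut vertex (counted), while the K_n block has n >= 3 (not counted).
n = 5
Kn = list(range(n))
blocks = [Kn] + [[i, (i, 'u'), (i, 'v')] for i in range(n)]
assert src_block_graph(blocks) == n
```

The same family of graphs reappears in Section~5 as an example where $\src(G)$ grows with $n$ while $\rc(G)$ stays constant.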
\section{On the rainbow connection number of block graphs}
\label{sec:block_rc}
In this section, we consider the rainbow connection number of block graphs. As a main result of the section, we prove a polynomial-time characterization of bridgeless block graphs with rainbow connection number at most~4.
Using known results, we begin by observing a tight linear-time computable upper bound on the rainbow connection number of a block graph of minimum degree at least~2. The following result was obtained by Chandran~\emph{et al.}~\cite{Chandran2011}.
\begin{theorem}[\hspace{1sp}{\cite{Chandran2011}}]
For every connected graph $G$ with $\delta(G) \geq 2$,
\begin{equation*}
\rc(G) \leq \gamma_c(G) + 2.
\end{equation*}
\end{theorem}
Further, the connected domination number of block graphs has been determined by Chen and Xing~\cite{Chen2004}.
\begin{theorem}[\hspace{1sp}{\cite{Chen2004}}]
Let $G$ be a connected block graph, let $S$ be the set of cut vertices of~$G$, and~$\ell$ the number of blocks in $G$. Then,
\[
\gamma_c(G) =
\begin{cases}
1 & \text{for}\ \ell = 1, \\
|S| & \text{for}\ \ell \geq 2.
\end{cases}
\]
\end{theorem}
Combining the two previous theorems, we obtain the following.
\begin{theorem}
\label{thm_rc_upper_bound}
Let $G$ be a connected block graph with at least two blocks and $\delta(G) \geq 2$. Then $\rc(G) \leq |S|+2$, where $S$ is the set of cut vertices of $G$. Moreover, this bound is tight.
\end{theorem}
Finally, as the number of cut vertices can be determined in linear time, we remark that the upper bound can also be computed in linear time.
Before proceeding, let us state the following simple but useful lemma.
For this, we recall that a \emph{peripheral vertex} is a vertex of maximum eccentricity, that is, a vertex that is a starting point for some diametral path.
\begin{lemma}
\label{thm_sep_touching_two_pvertices}
Let $G$ be a block graph with at least 3 blocks, and let $x$ and $y$ be two peripheral vertices in distinct blocks. If $G$ has a cut vertex $s$ adjacent to $x$ and $y$, then $\rc(G) > \diam(G)$.
\end{lemma}
\begin{proof}
For the sake of contradiction, assume that $\rc(G) = \diam(G)$.
Let $B_x$ and $B_y$ be the two distinct blocks $x$ and $y$ are in, respectively.
Choose a vertex $z \in B_z$ such that $d(x,z) = \diam(G)$, where $B_z$ is a block different from $B_x$ and $B_y$.
Let $P_{xz}$ be the (unique) shortest $x$-$z$ path.
Since $\rc(G) = \diam(G)$, it must be the case that each edge in $E(P_{xz})$ receives a distinct color in any valid rainbow coloring using $\diam(G)$ colors.
Since $x$ and $z$ are in distinct blocks, and because $s$ is a cut vertex, it is clear that $(x,s) \in E(P_{xz})$.
Without loss, suppose the edge $(x,s)$ was colored with color $c_1$.
Then consider each uncolored edge incident to $s$ in $B_y$.
Notice we must color each such edge with color $c_1$, for otherwise $G$ would not be rainbow-connected.
But now the (unique) shortest path $P_{xy}$ repeats the color $c_1$, and thus $x$ and $y$ are not rainbow-connected.
It follows that $\rc(G) > \diam(G)$.~$\square$
\end{proof}
\noindent Figure~\ref{fig:tight_approx_example} (a) illustrates the previous claim: the block graph $G$ has two peripheral vertices adjacent to a cut vertex $s$. Both the edges $(x,s)$ and $(y,s)$ would have to receive the same color in a rainbow coloring of $G$ using $\diam(G)$ colors, but then there is no way to rainbow-connect $x$ and $y$ without introducing new colors. (Here, Figure~\ref{fig:tight_approx_example} (a) also shows the bound from Theorem~\ref{thm_rc_upper_bound} is tight).
\begin{figure}
\caption{\textbf{(a)}}
\label{fig:tight_approx_example}
\end{figure}
We will then characterize the bridgeless block graphs with rainbow connection number 2, 3, or~4. The following also determines exactly the rainbow connection number of the \emph{windmill graph} $K^{(m)}_n$ ($n > 3$), which consists of $m$ copies of $K_n$ with one vertex in common.
\begin{theorem}
\label{thm_rc_cases}
Let $G$ be a bridgeless block graph. Deciding whether $\rc(G) = k$ is in $\P$ for $k \in \{1,2,3,4\}$.
\end{theorem}
\begin{proof}
As $k \leq 4$, it is enough to consider bridgeless block graphs with diameter $d = \diam(G) \leq 4$. In what follows, we show how such graphs are efficiently and optimally colored.
\begin{itemize}
\item Case $d = 1$. Trivial as $G$ is complete.
\item Case $d = 2$. If $G$ has exactly 2 blocks, it is easy to see that $\rc(G) = 2$. Moreover, if $\rc(G) = 2$, then $G$ must have exactly 2 blocks: suppose this were not the case, i.e., $G$ has at least 3 blocks and $\rc(G) = 2$; by an argument similar to Lemma~\ref{thm_sep_touching_two_pvertices}, this leads to a contradiction. Thus, $\rc(G) = 2$ if and only if $G$ has exactly 2 blocks. When $G$ consists of 3 or more blocks, we will show that $\rc(G) = 3$. Let $\mathcal{B}$ be the set of all blocks of $G$, and let $a$ be the unique central vertex of $G$. For each $B \in \mathcal{B}$, color one edge incident to $a$ with color $c_1$, and every other incident edge with color $c_2$. Finally, color every uncolored edge of $G$ with color $c_3$.
To see $G$ is rainbow-connected, observe there is a rainbow path from any vertex to the central vertex $a$ avoiding a particular color in $\{c_1,c_2,c_3\}$.
\item Case $d = 3$.
The graph $G$ consists of a unique central clique, and at least 2 other blocks.
If $G$ has 3 blocks, then $\rc(G) = \src(G) = 3$.
If $G$ has 4 blocks, there are two cases: either $G$ has a cut vertex adjacent to two peripheral vertices in distinct blocks (then $\rc(G) \geq 4$ by Lemma~\ref{thm_sep_touching_two_pvertices}) or it does not (then $\rc(G) = \src(G) = 3$).
Otherwise, $G$ has at least 5 blocks.
Now, if $G$ has exactly two cut vertices, then by an argument similar to Lemma~\ref{thm_sep_touching_two_pvertices}, we have that $\rc(G) \geq 4$ as there is a cut vertex that is contained in more than two blocks.
We then color every block that is not the central clique with 3 colors exactly as in the case $d = 2$, and color every edge of the central clique with a fresh distinct color, proving that $\rc(G) = 4$.
Finally, suppose $G$ has at least 5 blocks, more than two cut vertices, but every cut vertex is contained in exactly two blocks.
Then by an argument similar to Lemma~\ref{lem:src_lower_bound}, we have that $\rc(G) \geq 4$, and the described coloring proves $\rc(G) = 4$.
\item Case $d = 4$. Let us call the set of blocks which contain the central vertex $a$ the \emph{core} of the graph $G$. The set of blocks not in the core is the \emph{outer layer}. First, suppose the core contains exactly 2 blocks, and the outer layer at most 4 blocks. Furthermore, suppose the condition of Lemma~\ref{thm_sep_touching_two_pvertices} does not hold (otherwise we would have $\rc(G) > 4$ immediately). When the outer layer contains 2 or 3 blocks, we have that $\rc(G) = \src(G) = 4$. Suppose the outer layer contains exactly 4 blocks. First, consider the case where a core block is adjacent to 3 blocks in the outer layer. Because the condition of Lemma~\ref{thm_sep_touching_two_pvertices} does not hold, it must be the case that at least one of the core blocks is not a $K_3$. Clearly, every two vertices $x$ and $y$, such that $d(x,y) = \diam(G)$, have to be connected by a rainbow shortest path. By an argument similar to Lemma~\ref{lem:src_lower_bound}, we have that $\rc(G) > 4$. Otherwise, when a core block is not adjacent to 3 blocks in the outer layer, $\rc(G) = \src(G) = 4$. Now suppose the outer layer has at least 5 blocks. As above, by an argument similar to Lemma~\ref{lem:src_lower_bound}, we have that $\rc(G) > 4$. Finally, suppose the core has 3 or more blocks. We argue that in this case, $\rc(G) = 4$ if and only if the outer layer contains exactly 2 blocks. For the sake of contradiction, suppose $\rc(G) = 4$, and that the outer layer has 3 or more blocks. If the condition of Lemma~\ref{thm_sep_touching_two_pvertices} holds, we have an immediate contradiction. Otherwise, by an argument similar to Lemma~\ref{thm_sep_touching_two_pvertices}, we arrive at a contradiction. When the outer layer contains exactly 2 blocks, we will show $\rc(G) = 4$. Let $B_1$ and $B_2$ be the blocks in the outer layer. We color every edge of $B_1$ with the color $c_1$, and every edge of $B_2$ with the color $c_4$. 
Then color $(b_1,a)$ with $c_2$, and $(a,b_2)$ with $c_3$, where $a$ is the central vertex of $G$, and $b_1$ and $b_2$ are the cut vertices in $B_1$ and $B_2$, respectively. For every block $B_i$ in the core, let $Q_i$ denote the set of edges in $B_i$ incident to $a$. Color the uncolored edges of $Q_i$ with either $c_2$ or $c_3$, such that both colors appear at least once in $Q_i$. Then, color every uncolored edge of the block that contains both $a$ and $b_2$ with the color $c_1$. Every other uncolored edge of $G$ receives the color~$c_4$.
We can now verify that $G$ is indeed rainbow-connected under the given coloring.
\end{itemize}
\end{proof}
It appears plausible, though tedious, to extend the theorem to larger values of $k$ as well. Thus, it is perhaps the case that deciding whether $\rc(G) = k$ is solvable in polynomial time for every fixed $k$ when $G$ is a block graph. However, we conjecture the following.
\begin{conjecture}
Given a block graph $G$, it is $\NP$-hard to rainbow color $G$ optimally.
\end{conjecture}
Put differently, the conjecture says the decision problem is $\NP$-complete when the number of colors $k$ is not fixed but part of the input.
Given that the strong rainbow connection number of a block graph $G$ can be efficiently computed, it is interesting to ask when $\rc(G) = \src(G)$, or if the difference between $\src(G)$ and $\rc(G)$ would always be small. Because $\diam(G) \leq \rc(G)$ for any connected graph $G$, the following is an easy observation.
\begin{corollary}
Let $G$ be a block graph, and let $k$ be the number of blocks containing fewer than 3 cut vertices. If $k = \diam(G)$, then $\rc(G) = \src(G)$.
\end{corollary}
However, the difference between $\src(G)$ and $\rc(G)$ can be made arbitrarily large: attach $n$ triangles to a $K_n$, one to each vertex of the $K_n$ (see Figure~\ref{fig:tight_approx_example} (b) for an illustration). As $n$ increases, the rainbow connection number remains 4 by Theorem~\ref{thm_rc_cases}, while the strong rainbow connection number increases by Theorem~\ref{thm_src_equals_k}. This example also shows the difference between the upper bound of Theorem~\ref{thm_rc_upper_bound} and $\rc(G)$ can be arbitrarily large.
\section{Concluding remarks}
We studied the complexity of computing the rainbow and strong rainbow connection numbers of subclasses of chordal graphs, namely split graphs and block graphs. In particular, Theorem~\ref{thm_src_chordal_hardness} shows the strong rainbow connection number is significantly harder to approximate than the rainbow connection number, even on very restricted graph classes. Indeed, the result should be contrasted with the fact that any split graph can be rainbow colored in linear time using at most one color more than the optimum~\citep{Chandran2012}.
We believe our results for rainbow and strong rainbow coloring block graphs can serve as a starting point for an even more systematic study of strong rainbow coloring of more general graph classes --- a topic which has received rather little attention despite the interest. In fact, the investigation of the strong rainbow connection number has been deemed ``much harder than that of rainbow connection number''~\cite{Li2012}. Given this observation, it is meaningful to consider the strong rainbow connection number of the most general restricted graph classes (e.g., block graphs) for which the computation of the number is not known to be $\NP$-complete.
Finally, to avoid confusion, we note that similar problems have been considered in e.g., \citep{Uchizawa2013,Lauri15}: given an edge-colored graph $G$, decide whether $G$ is (strongly) rainbow-connected. We stress that known hardness results for these problems do not imply hardness results for \emph{finding} rainbow colorings. Indeed, the problems are strictly different.
\label{sec:biblio}
\end{document} |
\begin{document}
\title[Trace identities and almost polynomial growth]{Trace identities and almost polynomial growth}
\author{Antonio Ioppolo}
\address{IMECC, UNICAMP, S\'ergio Buarque de Holanda 651, 13083-859 Campinas, SP, Brazil}
\email{[email protected]}
\thanks{A. Ioppolo was supported by the Fapesp post-doctoral grant number 2018/17464-3}
\author{Plamen Koshlukov}
\address{IMECC, UNICAMP, S\'ergio Buarque de Holanda 651, 13083-859 Campinas, SP, Brazil}
\email{[email protected]}
\thanks{P. Koshlukov was partially supported by CNPq grant No.~302238/2019-0, and by FAPESP grant No.~2018/23690-6}
\author{Daniela La Mattina}
\address{Dipartimento di Matematica e Informatica, Universit\`a degli Studi di Palermo, Via Archirafi 34, 90123, Palermo, Italy}
\email{[email protected]}
\thanks{D. La Mattina was partially supported by GNSAGA-INDAM}
\subjclass[2010]{Primary 16R10, Secondary 16R30, 16R50}
\keywords{Trace algebras, polynomial identities, codimension growth}
\begin{abstract}
In this paper we study algebras with trace and their trace polynomial identities over a field of characteristic 0. We consider two commutative matrix algebras: $D_2$, the algebra of $2\times 2$ diagonal matrices, and $C_2$, the algebra of $2 \times 2$ matrices generated by $e_{11}+e_{22}$ and $e_{12}$. We describe all possible traces on these algebras and study the corresponding trace codimensions. Moreover, we characterize the varieties with trace of polynomial growth generated by a finite dimensional algebra. As a consequence, we see that the growth of a variety with trace is either polynomial or exponential.
\end{abstract}
\maketitle
\section{Introduction}
All algebras we consider in this paper will be associative and over a fixed field $F$ of characteristic 0. Let $F\langle X\rangle$ be the free associative algebra freely generated by the infinite countable set $X=\{x_1, x_2, \ldots\}$ over $F$. One interprets $F\langle X\rangle$ as the $F$-vector space with a basis consisting of 1 and all non-commutative monomials (that is, words) on the alphabet $X$. The multiplication in $F\langle X\rangle$ is defined on the monomials by juxtaposition. Let $A$ be an algebra; it is clear that every function $\varphi\colon X\to A$ can be extended in a unique way to a homomorphism (denoted by the same letter) $\varphi\colon F\langle X\rangle\to A$. A polynomial $f\in F\langle X\rangle$ is a polynomial identity (PI for short) for the algebra $A$ whenever $f$ lies in the kernels of all homomorphisms from $F\langle X\rangle$ to $A$. Equivalently, $f(x_1,\ldots,x_n)$ is a polynomial identity for $A$ whenever $f(a_1,\ldots,a_n)=0$ for any choice of $a_i \in A$. The set of all PI's for a given algebra $A$ forms an ideal in $F\langle X\rangle$ denoted by $\mbox{Id}(A)$ and called the T-ideal of $A$. Clearly $\mbox{Id}(A)$ is closed under endomorphisms. It is not difficult to prove that the converse is also true: if an ideal $I$ in $F\langle X\rangle$ is closed under endomorphisms then $I=\mbox{Id}(A)$ for some (adequate) algebra $A$. One such algebra is the relatively free algebra $F\langle X\rangle/I$.
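For example, every commutative algebra satisfies the identity
\[
[x_1,x_2] = x_1x_2 - x_2x_1,
\]
while, by the classical Amitsur--Levitzki theorem, the matrix algebra $M_n(F)$ satisfies the standard identity
\[
s_{2n}(x_1,\ldots,x_{2n}) = \sum_{\sigma\in S_{2n}} (\mathrm{sgn}\,\sigma)\, x_{\sigma(1)}\cdots x_{\sigma(2n)},
\]
and satisfies no polynomial identity of degree less than $2n$.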
Knowing the polynomial identities satisfied by an algebra $A$ is an important problem in Ring theory. It is also a very difficult one; it was solved completely in very few cases. These include the algebras $F$ (trivial); $M_2(F)$, the full matrix algebra of order 2; $E$, the infinite dimensional Grassmann algebra; $E\otimes E$. If one adds to the above algebras the upper triangular matrices $UT_n(F)$, one will get more or less the complete list of algebras whose identities are known.
The theory developed by A. Kemer in the 1980s (see \cite{Kemer1991book}) solved in the affirmative the long-standing Specht problem: is every T-ideal in the free associative algebra finitely generated as a T-ideal? But the proof given by Kemer is not constructive; it suffices to mention that even the generators of the T-ideal of $M_3(F)$ are not known, and their description seems to be out of reach with the methods in use nowadays.
Thus finding the exact form of the polynomial identities satisfied by a given algebra is practically impossible in the vast majority of important algebras. Hence one is led to study either other types of polynomial identities or other characteristics of the T-ideals. In the former direction it is worth mentioning the study of polynomial identities in algebras graded by a group or a semigroup, in algebras with involution, in algebras with trace and so on. Clearly one has to incorporate the additional structure into the ``new'' polynomial identities. It turned out such identities are sort of ``easier'' to study than the ordinary ones.
We cite as an example the graded identities for the matrix algebras $M_n(F)$ for the natural gradings by the cyclic groups $\mathbb{Z}_n$ and $\mathbb{Z}$: these were described by Vasilovsky in \cite{Vasilovsky1998, Vasilovsky1999}. Gradings on important algebras (for the PI-theory) and the corresponding graded identities have been studied by very many authors (we refer the reader to the monograph \cite{ElduqueKochetov2013} and the references therein). The trace identities for the full matrix algebras were described independently by Razmyslov \cite{Razmyslov1974} and by Procesi \cite{Procesi1976} (see also the paper by Razmyslov \cite{Razmyslov1985} for a generalization to another important class of algebras). It turns out that the ideal of all trace identities for the matrix algebra $M_n(F)$ is generated by a single polynomial: the well-known Hamilton--Cayley polynomial written in terms of the traces of the matrix and its powers (and then linearised). We must note here that, as it often happens, the simplicity of the statement of the theorem due to Razmyslov and Procesi is largely misleading, and that the proofs are quite sophisticated and extensive.
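For instance, in the smallest case $n=2$ the Cayley--Hamilton theorem, combined with the identity $\det(x) = \frac{1}{2}\left(\mbox{tr}(x)^2 - \mbox{tr}(x^2)\right)$, yields the trace identity
\[
x^2 - \mbox{Tr}(x)x + \frac{1}{2}\left(\mbox{Tr}(x)^2 - \mbox{Tr}(x^2)\right) = 0
\]
for $M_2(F)$; its multilinearization (substitute $x_1+x_2$ for $x$ and keep the mixed terms) is
\[
x_1x_2 + x_2x_1 - \mbox{Tr}(x_1)x_2 - \mbox{Tr}(x_2)x_1 + \mbox{Tr}(x_1)\mbox{Tr}(x_2) - \mbox{Tr}(x_1x_2) = 0.
\]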
The free associative algebra is graded by the degrees of its monomials, and also by their multidegrees. Clearly the T-ideals are homogeneous in such gradings; this implies that the relatively free algebras inherit the gradings on $F\langle X\rangle$. One might want to describe the Hilbert (or Poincar\'e) series of the relatively free algebras. This task was achieved also in very few instances.
Studying the relatively free algebras is related to studying varieties of algebras. We recall the corresponding notions and their importance. Let $A$ be an algebra with T-ideal $\mbox{Id}(A)$. The class of all algebras satisfying all polynomial identities from $\mbox{Id}(A)$ (and possibly some more PI's) is called the variety of algebras $\mbox{var}(A)$ generated by $A$. Then the relatively free algebra $F\langle X\rangle/\mbox{Id}(A)$ is the relatively free algebra in $\mbox{var}(A)$ (clearly there might be several algebras that satisfy the same polynomial identities as $A$, and passing to $\mbox{var}(A)$ one may ``forget'' about $A$ and even look for some ``better'' algebra generating the same variety).
One of the most important numerical invariants of a variety (or a T-ideal) is its codimension sequence. Let $P_n$ denote the vector space in $F\langle X\rangle$ consisting of all multilinear polynomials in $x_1$, \dots, $x_n$. One may view $P_n$ as the vector subspace of $F\langle X\rangle$ with a basis consisting of all monomials $x_{\sigma(1)} x_{\sigma(2)}\cdots x_{\sigma(n)}$ where $\sigma\in S_n$, the symmetric group on $\{1,2,\ldots,n\}$. It is well known that whenever the base field is of characteristic 0, each T-ideal $I$ is generated by its multilinear elements, that is by all intersections $I\cap P_n$, $n\ge 1$.
On the other hand $P_n$ is a left module over $S_n$ and it is clear that $P_n\cong FS_n$, the regular $S_n$-module. The T-ideals are invariant under permutations of the variables, hence $I\cap P_n$ is a submodule of $P_n$. In this way one can employ the well developed theory of the representations of the symmetric (and general linear) groups and study the polynomial identities satisfied by an algebra. This approach might have seemed rather promising but in 1972 A. Regev \cite{Regev1972} established a fundamental result showing that if $I\ne 0$ then the intersections $I\cap P_n$ tend to become very large when $n\to\infty$. More precisely, let $A$ be a PI-algebra (i.e., $A$ satisfies a non-trivial identity), let $I=\mbox{Id}(A)$, and set $P_n(A) = P_n/(P_n\cap I)$. Then $P_n(A)$ is also an $S_n$-module, and its dimension $c_n(A)=\dim P_n(A)$ is called the $n$-th codimension of $A$ (or of the variety $\mbox{var}(A)$, or of the relatively free algebra in $\mbox{var}(A)$). Regev's theorem states that if $A$ satisfies an identity of degree $d$ then $c_n(A)\le (d-1)^{2n}$. Since $\dim P_n=n!$ this gives a more precise meaning of the above statement about the size of $P_n\cap I$. Recall that this exponential bound for the codimensions allowed Regev to prove that if $A$ and $B$ are both PI-algebras then their tensor product $A\otimes B$ is also a PI-algebra.
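For example, if $A$ is a unitary commutative algebra then all monomials in $P_n$ coincide modulo $\mbox{Id}(A)$, so $c_n(A) = 1$ for all $n$; for the infinite dimensional Grassmann algebra $E$ one has the well-known formula $c_n(E) = 2^{n-1}$.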
But computing the exact values of the codimensions of a given algebra is also a very difficult task, and $c_n(A)$ is known for very few algebras $A$. This is exactly the same list as above, namely that of the algebras whose identities are known. Hence one is led to study the growth of the codimension sequences. In the eighties Amitsur conjectured that for each PI-algebra $A$, the sequence $(c_n(A))^{1/n}$ converges when $n\to\infty$ and moreover its limit is an integer. This conjecture was dealt with by Giambruno and Zaicev \cite{GiambrunoZaicev1998, GiambrunoZaicev1999} (see also \cite{GiambrunoZaicev2005book}): they answered in the affirmative Amitsur's conjecture. The above limit is called the PI-exponent of a PI-algebra, $\exp(A)$. Giambruno and Zaicev's important result initiated an extensive research concerning the asymptotic behaviour of the codimension sequences of algebras. It is well known (see for example \cite[Chapter 7]{GiambrunoZaicev2005book} and the references therein) that either the codimensions of $A$ are bounded by a polynomial function or grow exponentially. The results of Giambruno and Zaicev also hold for the case of graded algebras (\cite{AljadeffGiambruno2013, AljadeffGiambrunoLaMattina2011, GiambrunoLaMattina2010}), algebras with involution (\cite{GiambrunoMiliesValenti2017}), superalgebras with superinvolution, graded involution or pseudoinvolution (\cite{Ioppolo2018, Ioppolo2020, Santos2017}) and also for large classes of non-associative algebras. It is useful to highlight that there are examples of non-associative algebras such that their PI-exponent exists but is not an integer and also examples where the PI-exponent does not exist at all.
In this paper we study trace polynomial identities. We focus our attention on two commutative subalgebras of $UT_2$, the algebra of $2 \times 2$ upper-triangular matrices over $F$: $D_2$ the algebra of diagonal matrices and $C_2 = \mbox{span}_F \{e_{11}+ e_{22}, e_{12} \}$. In \cite[Theorem 2.1]{Berele1996} A. Berele described the ideal of trace identities for the algebra $D_n$; he proved that it is generated by the commutativity law and by the Hamilton--Cayley polynomial. The asymptotic behaviour of the codimensions of trace identities was studied by A. Regev. In fact he described in \cite{Regev1988} the asymptotics of the ordinary codimensions of the full matrix algebra, and in \cite{Regev1984} he proved that the ordinary and the trace codimensions of the full matrix algebra are asymptotically equal.
Our main goal in this paper is the description of the varieties of trace algebras that are of \textsl{almost polynomial growth}. This means that the codimensions of the given variety are of exponential growth but each proper subvariety is of polynomial growth. The description we obtain is in terms of excluding the algebras $D_2$ and $C_2$ with non-zero traces and the algebra $UT_2$ with the zero trace. As a by-product of the proof we obtain that the codimension growth of the trace identities of a finite dimensional algebra is either polynomially or exponentially bounded.
It is interesting to note that if one considers the algebra $D_n$ of the diagonal $n\times n$ matrices without a trace, it is commutative and non-nilpotent, and hence its codimensions are equal to 1. But when adding a trace then it becomes of exponential growth. Hence in the case of diagonal matrices there cannot be a direct analogue of Regev's theorem mentioned above.
We recall here that similar descriptions for the ordinary codimensions can be found in \cite[Theorem 7.2.7]{GiambrunoZaicev2005book} where it was proved that the only two varieties of (ordinary) algebras of almost polynomial growth are the ones generated by the Grassmann algebra $E$ and by $UT_2$. In the case of algebras with involution, superinvolution, pseudoinvolution or graded by a finite group, a complete list of varieties of algebras of almost polynomial growth was exhibited in \cite{GiambrunoIoppoloLaMattina2016, GiambrunoIoppoloLaMattina2019, GiambrunoMishchenko2001bis, GiambrunoMishchenkoZaicev2001, IoppoloLaMattina2017, IoppoloMartino2018, LaMattina2015, Valenti2011}.
In order to obtain our results we use methods from the theory of trace polynomial identities together with a version of the Wedderburn--Malcev theorem for finite dimensional trace algebras. Here we recall that a trace function on the matrix algebra $M_n(F)$ is just a scalar multiple of the usual matrix trace. In sharp contrast with this there are very many traces on $D_n$ and $C_2$: these algebras are commutative and hence a trace is just a linear function from them into $F$.
\section{Preliminaries}
Throughout this paper $F$ will denote a field of characteristic zero and $A$ a unitary associative $F$-algebra with trace $\mbox{tr}$. We say that $A$ is an algebra with trace if it is endowed with a linear map $\mbox{tr}\colon A \rightarrow F$ such that for all $a,b \in A$ one has
\[
\mbox{tr}(ab) = \mbox{tr}(ba).
\]
In what follows, we shall identify, when it causes no misunderstanding, the element $\alpha \in F$ with $\alpha \cdot 1$, where $1$ is the unit of the algebra.
Accordingly, one can construct $F\langle X\ranglet$, the free algebra with trace on the countable set $X = \{ x_1, x_2, \ldots \}$, where $\mbox{Tr}$ is a formal trace. Let $\mathcal{M}$ denote the set of all monomials in the elements of $X$. Then $F\langle X\ranglet$ is the algebra generated by the free algebra $F\langle X\rangle$ together with the set of central (commuting) indeterminates $\mbox{Tr}(M)$, $M \in \mathcal{M}$, subject to the conditions that $\mbox{Tr}(MN) = \mbox{Tr}(NM)$,
and $\mbox{Tr}(\mbox{Tr}(M)N)=\mbox{Tr}(M)\mbox{Tr}(N)$, for all $M$, $N \in \mathcal{M}$. In other words,
\[
F\langle X\ranglet \cong F\langle X\rangle \otimes F[\mbox{Tr}(M) : M \in \mathcal{M}].
\]
The elements of the free algebra with trace are called trace polynomials.
A trace polynomial $f(x_1, \ldots, x_n, \mbox{Tr}) \in F\langle X\ranglet$ is a trace identity for $A$ if, after substituting the variables $x_i$ with arbitrary elements $a_i \in A$ and $\mbox{Tr}$ with the trace $\mbox{tr}$, we obtain $0$. We denote by $\mbox{Id}^{tr}(A)$ the set of trace identities of $A$, which is a trace $T$-ideal ($T^{tr}$-ideal) of the free algebra with trace, i.e., an ideal invariant under all endomorphisms of $F\langle X\ranglet$.
As in the ordinary case, $\mbox{Id}^{tr}(A)$ is completely determined by its multilinear polynomials.
\begin{Definition}
The vector space of multilinear elements of the free algebra with trace in the first $n$ variables is called the space of multilinear trace polynomials in $x_1$, \dots, $x_n$ and it is denoted by $MT_n$ ($MT$ comes from \textsl{mixed trace}). Its elements are linear combinations of expressions of the type
\[
\mbox{Tr}(x_{i_1} \cdots x_{i_a}) \cdots \mbox{Tr}(x_{j_1} \cdots x_{j_b}) x_{l_1} \cdots x_{l_c}
\]
where $ \left \{ i_1, \ldots, i_a, \ldots, j_1, \ldots, j_b, l_1, \ldots, l_c \right \} = \left \{ 1, \ldots, n \right \} $.
\end{Definition}
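For example, for $n = 2$ the space $MT_2$ has a basis consisting of the six elements
\[
x_1x_2, \quad x_2x_1, \quad \mbox{Tr}(x_1)x_2, \quad \mbox{Tr}(x_2)x_1, \quad \mbox{Tr}(x_1x_2), \quad \mbox{Tr}(x_1)\mbox{Tr}(x_2),
\]
in accordance with the formula $\dim_F MT_n = (n+1)!$ proved below.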
The non-negative integer
\[
c_n^{tr}(A) = \dim_F \dfrac{ MT_n}{MT_n \cap \mbox{Id}^{tr}(A)}
\]
is called the $n$-th trace codimension of $A$.
A prominent role among the elements of $MT_n$ is played by the so-called pure trace polynomials, i.e., polynomials such that all the variables $x_1$, \dots, $x_n$ appear inside a trace.
\begin{Definition}
The vector space of multilinear pure trace polynomials in $x_1$, \dots, $x_n$ is the space
\[
PT_n = \mbox{span}_F \left \{ \mbox{Tr}(x_{i_1} \cdots x_{i_a}) \cdots \mbox{Tr}(x_{j_1} \cdots x_{j_b}) : \left \{ i_1, \ldots, j_b \right \} = \left \{ 1, \ldots, n \right \} \right \}.
\]
\end{Definition}
For a permutation $\sigma \in S_n$ we write (in \cite{Berele1996}, Berele uses $\sigma$ instead of $\sigma^{-1}$)
\[
\sigma^{-1} = \left ( i_1 \cdots i_{r_1} \right ) \left ( j_1 \cdots j_{r_2} \right ) \cdots \left ( l_1 \cdots l_{r_t} \right )
\]
as a product of disjoint cycles, including one-cycles, and assume that $r_1 \geq r_2 \geq \cdots \geq r_t$. In this case we say that $\sigma$ is of cycle type $\lambda = (r_1, \ldots, r_t)$.
We then define the pure trace monomial $ptr_{\sigma} \in PT_n$ as
\[
ptr_{\sigma}(x_1, \ldots, x_n) = \mbox{Tr} \left ( x_{i_1} \cdots x_{i_{r_1}} \right ) \mbox{Tr} \left ( x_{j_1} \cdots x_{j_{r_2}} \right ) \cdots \mbox{Tr} \left ( x_{l_1} \cdots x_{l_{r_t}} \right ).
\]
If $\displaystyle a = \sum_{\sigma \in S_n} \alpha_{\sigma} \sigma \in FS_n$, we also define $\displaystyle ptr_{a}(x_1, \ldots, x_n) = \sum_{\sigma \in S_n} \alpha_{\sigma} ptr_{\sigma}(x_1, \ldots, x_n)$.
It is also useful to introduce the so-called trace monomial $mtr_{\sigma} \in MT_{n-1}$, defined so that
\[
ptr_{\sigma}(x_1, \ldots, x_n) = \mbox{Tr} \left ( mtr_{\sigma}(x_1, \ldots, x_{n-1}) x_n \right ).
\]
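For instance, let $n = 3$. If $\sigma^{-1} = (1\,2\,3)$ then
\[
ptr_{\sigma}(x_1,x_2,x_3) = \mbox{Tr}(x_1x_2x_3) = \mbox{Tr}\bigl((x_1x_2)x_3\bigr), \qquad mtr_{\sigma}(x_1,x_2) = x_1x_2,
\]
while if $\sigma^{-1} = (1\,2)(3)$ then $ptr_{\sigma} = \mbox{Tr}(x_1x_2)\mbox{Tr}(x_3)$ and $mtr_{\sigma}(x_1,x_2) = \mbox{Tr}(x_1x_2) \cdot 1$, using the rule $\mbox{Tr}(\mbox{Tr}(M)N) = \mbox{Tr}(M)\mbox{Tr}(N)$.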
Now let $\varphi\colon FS_n \rightarrow PT_n$ be the map defined by $\varphi(a) = ptr_a(x_1, \ldots, x_n)$. Clearly $\varphi$ is a linear isomorphism, and so $\dim_F PT_{n} = \dim_F FS_{n} = n!$.
The following result is well known, and we include its proof for the sake of completeness.
\begin{Remark}
$\dim_F MT_n = (n+1)!$.
\end{Remark}
\begin{proof}
In order to prove the result we shall construct an isomorphism of vector spaces between $PT_{n+1}$ and $MT_n$. This will complete the proof since $\dim_F PT_{n+1} = (n+1)!$. Let $\psi\colon PT_{n+1} \rightarrow MT_n$ be the linear map defined by the equality
\[
\psi \left ( \mbox{Tr}(x_{i_1} \cdots x_{i_a}) \cdots \mbox{Tr}(x_{j_1} \cdots x_{j_b}) \mbox{Tr}(x_{l_1} \cdots x_{l_c}) \right ) = \mbox{Tr}(x_{i_1} \cdots x_{i_a}) \cdots \mbox{Tr}(x_{j_1} \cdots x_{j_b}) x_{l_1} \cdots x_{l_{c-1}}.
\]
Here we assume, as we may, that $l_c=n+1$. It is easily seen that $\psi$ is a linear isomorphism and we are done.
\end{proof}
\section{Matrix algebras with trace}
In this section we study matrix algebras with trace. Let $M_n(F)$ be the algebra of $ n \times n $ matrices over $F$. One can endow such an algebra with the usual trace on matrices, denoted $t_1$, and defined as
\[
t_1(a) = t_1 \begin{pmatrix} a_{11} & \cdots & a_{1n} \\ \vdots &
\ddots & \vdots \\ a_{n1} & \cdots &
a_{nn} \end{pmatrix} = a_{11} + \cdots + a_{nn} \in F.
\]
We recall that every trace on $M_n(F)$ is proportional to the usual trace $t_1$. The proof of this statement is a well known result of elementary linear algebra; we give it here for the sake of completeness.
\begin{Lemma} \label{traces on matrices}
Let $f\colon M_n(F) \rightarrow F$ be a trace. Then there exists $\alpha \in F$ such that $f = \alpha t_1$.
\end{Lemma}
\begin{proof}
Let $e_{ij}$ denote the matrix units. First we shall prove that $f(e_{ij}) = 0$ whenever $i \neq j$. In fact, since $f(ab) = f(ba)$ for all $a$, $b \in M_n(F)$, we get that
\[
f(e_{ij}) = f(e_{ij} e_{jj}) = f(e_{jj} e_{ij}) = f(0) = 0.
\]
Moreover $f(e_{jj}) = f(e_{11})$, for all $j = 2, \ldots, n$. Indeed $ f(e_{11}) = f(e_{1j} e_{jj} e_{j1}) = f(e_{j1} e_{1j} e_{jj}) = f(e_{jj})$. For any matrix $a \in M_n(F)$, $a = (a_{ij}) = \sum_{i,j} a_{ij} e_{ij}$, we get that
\[
f(a) = f \Bigl( \sum_{i,j} a_{ij} e_{ij} \Bigr) = \sum_{j= 1}^n a_{jj} f(e_{jj}) = f(e_{11}) t_1(a),
\]
and the proof is complete.
\end{proof}
In what follows we shall use the notation $t_\alpha$ to indicate the trace $\alpha t_1$ on $M_n(F)$. Moreover, $M_n^{t_\alpha}$ will denote the algebra of $n \times n$ matrices endowed with the trace $t_\alpha$.
In sharp contrast with the above result, there are very many different traces on the algebra $D_n = D_n(F)$ of $n \times n$ diagonal matrices over $F$.
\begin{Remark}\label{traces_on_Dn}
If $\mbox{tr}$ is a trace on $D_n$ then there exist scalars $\alpha_1$, \dots, $\alpha_n\in F$ such that for each diagonal matrix $a=\mbox{diag}(a_{11},\ldots, a_{nn})\in D_n$ one has $\mbox{tr}(a) = \alpha_1 a_{11}+\cdots+ \alpha_n a_{nn}$.
\end{Remark}
The algebra $D_n$ is commutative, and $D_n\cong F^n$ with component-wise operations. Hence a linear function $\mbox{tr} \colon D_n\to F$ must be of the form stated in the remark. Conversely, for each choice of the scalars $\alpha_i$ one obtains a trace on $D_n$, and we have the statement of the remark.
We shall denote with the symbol $t_{\alpha_1, \ldots, \alpha_n}$ the trace $\mbox{tr}$ on $D_n$ such that, for all $a = \mbox{diag}(a_{11}, \ldots, a_{nn})$, $ \mbox{tr}(a) = \alpha_1 a_{11} + \cdots + \alpha_n a_{nn}$. Moreover, $D_n^{t_{\alpha_1, \ldots, \alpha_n}}$ will indicate the algebra $D_n$ endowed with the trace $t_{\alpha_1, \ldots, \alpha_n}$.
Let $(A, t)$ and $(B, t')$ be two algebras with trace. A homomorphism (isomorphism) of algebras $\varphi\colon A \rightarrow B$ is said to be a homomorphism (isomorphism) of algebras with trace if $\varphi(t(a)) = t'(\varphi(a))$, for any $a \in A$.
We have the following remark.
\begin{Remark} \label{isomorphic Dn}
Let $S_n$ be the symmetric group of order $n$ on the set $\{ 1,2, \ldots, n \}$. For all $\sigma \in S_n$, the algebras $D_n^{t_{\alpha_1, \ldots, \alpha_n}}$ and $D_n^{t_{\alpha_{\sigma(1)}, \ldots, \alpha_{\sigma(n)}}}$ are isomorphic, as algebras with trace.
\end{Remark}
\begin{proof}
We need only to observe that the linear map $\varphi \colon D_n^{t_{\alpha_1, \ldots, \alpha_n}} \rightarrow D_n^{t_{\alpha_{\sigma(1)}, \ldots, \alpha_{\sigma(n)}}}$, defined by $ \varphi(e_{ii}) = e_{\sigma(i) \sigma(i)}$, for all $i = 1, \ldots, n$, is an isomorphism of algebras with trace.
\end{proof}
Recall that a trace function $\mbox{tr}$ on an algebra $A$ is said to be degenerate if there exists a non-zero element $a \in A$ such that
\[
\mbox{tr}(ab) = 0
\]
for every $b\in A$. This means that the bilinear form $f(x,y) = \mbox{tr}(xy)$ is degenerate on $A$.
In the following lemma we describe the non-degenerate traces on $D_n$.
\begin{Lemma}\label{non_degenerate_traces_on_Dn}
Let $D_n^{t_{\alpha_1, \ldots, \alpha_n}}$ be the algebra of $n \times n$ diagonal matrices endowed with the trace
$t_{\alpha_1, \ldots, \alpha_n}$. Such a trace is non-degenerate if and only if all the scalars $\alpha_i$ are non-zero.
\end{Lemma}
\begin{proof}
Let $t_{\alpha_1, \ldots, \alpha_n}$ be non-degenerate and suppose that there exists $i$ such that $\alpha_i = 0$. Consider the matrix unit $e_{ii}$. We reach a contradiction since, for any element $\mbox{diag}(a_{11},\ldots, a_{nn}) \in D_n$, we get
$$
t_{\alpha_1, \ldots, \alpha_n}(e_{ii} \mbox{diag}(a_{11},\ldots, a_{nn})) = t_{\alpha_1, \ldots, \alpha_n}(a_{ii} e_{ii}) = \alpha_i a_{ii} = 0,
$$
so $e_{ii}$ witnesses that the trace is degenerate.
In order to prove the opposite direction, let us assume that all the scalars $\alpha_i$ are non-zero. Suppose, by contradiction, that the trace $t_{\alpha_1, \ldots, \alpha_n}$ is degenerate. Hence there exists a non-zero element $a = \mbox{diag}(a_{11},\ldots, a_{nn}) \in D_n$ such that $t_{\alpha_1, \ldots, \alpha_n}(ab) = 0$, for any $b \in D_n$. In particular, let $b = e_{ii}$, for $i=1, \ldots, n$. We have that
$$
t_{\alpha_1, \ldots, \alpha_n}( a e_{ii}) = t_{\alpha_1, \ldots, \alpha_n}(\mbox{diag}(a_{11},\ldots, a_{nn}) e_{ii}) = t_{\alpha_1, \ldots, \alpha_n}(a_{ii} e_{ii} ) = \alpha_i a_{ii} = 0.
$$
Since $\alpha_i \neq 0$, for all $i = 1, \ldots, n$, we get that $a_{ii} = 0$ and so $a = 0$, a contradiction.
\end{proof}
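For example, on $D_2$ the trace $t_{1,0}$ is degenerate: $t_{1,0}(e_{22}\,b) = 0$ for every $b \in D_2$. On the other hand, the restriction $t_{1,1}$ of the usual matrix trace, and more generally every $t_{\alpha_1, \ldots, \alpha_n}$ with all $\alpha_i \neq 0$, is non-degenerate.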
\section{The algebras $D_2^{t_{\alpha, \beta}}$}
In this section we deal with the algebra $D_2$ of $2 \times 2$ diagonal matrices over the field $F$. In accordance with the results of Section 3, we can define on $D_2$, up to isomorphism, only the following trace functions:
\begin{enumerate}
\item[1.] $t_{\alpha, 0}$, for any $\alpha \in F $,
\item[2.] $t_{\alpha, \alpha}$, for any non-zero $\alpha \in F $,
\item[3.] $t_{\alpha, \beta }$, for any distinct non-zero $\alpha, \beta \in F $.
\end{enumerate}
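Indeed, writing a generic trace as $t_{\alpha,\beta}$, Remark \ref{isomorphic Dn} allows us to swap the two diagonal coefficients, so that
\[
D_2^{t_{0,\beta}} \cong D_2^{t_{\beta,0}},
\]
and hence any trace with a zero coefficient falls into the first case, while the remaining traces split according to whether the two non-zero coefficients coincide (second case) or not (third case).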
In the first part of this section our goal is to find the generators of the trace $T$-ideals of the identities of the algebra $D_2$ endowed with all possible trace functions.
Let us start with the case of $D_2^{t_{\alpha,0}}$. Recall that, if $\alpha = 0$, then $D_2^{t_{0,0}}$ is the algebra $D_2$ with zero trace. So $\mbox{Id}^{tr}(D_2^{t_{0,0}})$ is generated by the commutator $[x_1, x_2]$ and by $\mbox{Tr}(x_1)$, and $c_n^{tr}(D_2^{t_{0,0}}) = c_n(D_2^{t_{0,0}}) = 1$.
For $\alpha \neq 0$, we have the following result.
\begin{Theorem} \label{identities and codimensions of D2 t alpha 0}
Let $\alpha \in F \setminus \{ 0 \}$. The trace $T$-ideal $\mbox{Id}^{tr}(D_2^{t_{\alpha, 0}})$ is generated, as a trace $T$-ideal, by the polynomials:
\begin{itemize}
\item[•] $ f_1 = [x_1,x_2]$,
\item[•] $f_2 = \mbox{Tr}(x_1) \mbox{Tr}(x_2) - \alpha \mbox{Tr}(x_1 x_2)$.
\end{itemize}
Moreover
\[
c_n^{tr}(D_2^{t_{\alpha, 0}}) = 2^{n}.
\]
\end{Theorem}
\begin{proof}
It is clear that $T = \langle f_1, f_2 \rangle_{T^{tr}} \subseteq \mbox{Id}^{tr}(D_2^{t_{\alpha, 0}})$.
We need to prove the opposite inclusion. Let $f \in MT_n$ be a multilinear trace polynomial of degree $n$. It is clear that $f$ can be written (mod $T$) as a linear combination of the polynomials
\begin{equation} \label{non identities of D2 t alpha 0}
\mbox{Tr}(x_{i_1} \cdots x_{i_k}) x_{j_1} \cdots x_{j_{n-k}},
\end{equation}
where $ \left \{ i_1, \ldots, i_k, j_1, \ldots, j_{n-k} \right \} = \left \{ 1, \ldots, n \right \}$, $i_1 < \cdots < i_k$ and $j_1 < \cdots < j_{n-k}$.
Our goal is to show that the polynomials in \eqref{non identities of D2 t alpha 0} are linearly independent modulo $ \mbox{Id}^{tr}(D_2^{t_{\alpha, 0}})$. To this end, let $g = g(x_1, \ldots, x_n, \mbox{Tr})$ be a linear combination of the above polynomials which is a trace identity:
\[
g(x_1, \ldots, x_n, \mbox{Tr}) = \sum_{I,J} a_{I,J} \mbox{Tr}(x_{i_1} \cdots x_{i_k}) x_{j_1} \cdots x_{j_{n-k}},
\]
where $I = \{ x_{i_1}, \ldots, x_{i_k} \}$, $J = \{ x_{j_1}, \ldots, x_{j_{n-k}} \}$, and $i_1 < \cdots < i_k$, $j_1 < \cdots < j_{n-k}$.
We claim that $g$ is actually the zero polynomial. Suppose that, for some fixed $I = \{ x_{i_1}, \ldots, x_{i_k} \}$ and $J = \{ x_{j_1}, \ldots, x_{j_{n-k}} \}$, one has that $a_{I,J} \neq 0$. We consider the following evaluation:
\[
x_{i_1} = \cdots = x_{i_k} = e_{11}, \ \ \ \ \ \ \ \
x_{j_1} = \cdots = x_{j_{n-k}} = e_{22}, \ \ \ \ \ \ \ \
\mbox{Tr} = t_{\alpha, 0}.
\]
It follows that $g(e_{11}, \ldots, e_{11}, e_{22}, \ldots, e_{22}, t_{\alpha,0}) = a_{I,J} \alpha e_{22} = 0$. Hence $a_{I,J} = 0$, a contradiction. The claim is proved and so
\[
\mbox{Id}^{tr}(D_2^{t_{\alpha, 0}}) = T.
\]
Finally, in order to compute the $n$-th trace codimension of our algebra, we only have to count the elements in \eqref{non identities of D2 t alpha 0}. For fixed $k$, there are exactly $\binom{n}{k}$ elements of the type $\mbox{Tr}(x_{i_1} \cdots x_{i_k}) x_{j_1} \cdots x_{j_{n-k}}$, $i_1 < \cdots < i_k$, $j_1 < \cdots < j_{n-k}$. Hence the polynomials in \eqref{non identities of D2 t alpha 0} number exactly $ \sum_{k=0}^n \binom{n}{k} = 2^n$ and the proof is complete.
\end{proof}
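To see directly that $f_2$ vanishes on $D_2^{t_{\alpha,0}}$, take $a = \mbox{diag}(a_{11}, a_{22})$ and $b = \mbox{diag}(b_{11}, b_{22})$; then $ab = \mbox{diag}(a_{11}b_{11}, a_{22}b_{22})$ and
\[
t_{\alpha,0}(a)\, t_{\alpha,0}(b) = \alpha^2 a_{11} b_{11} = \alpha\, t_{\alpha,0}(ab).
\]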
Now, we consider $D_2^{t_{\alpha,\alpha}}$. Recall that, for any $\begin{pmatrix}
a & 0 \\
0 & b
\end{pmatrix} \in D_2$, we have that $
t_{\alpha, \alpha} \begin{pmatrix}
a & 0 \\
0 & b
\end{pmatrix} = \alpha (a + b).$
\begin{Theorem} \label{identities of D2 talpha alpha}
Let $\alpha \in F \setminus \{ 0 \}$. The trace $T$-ideal $\mbox{Id}^{tr}(D_2^{t_{\alpha, \alpha}})$ is generated, as a trace $T$-ideal, by the polynomials:
\begin{itemize}
\item[•] $ f_1 = [x_1,x_2]$,
\item[•] $ f_3 = \alpha^2 x_1x_2 + \alpha^2 x_2 x_1 + \mbox{Tr}(x_1)\mbox{Tr}(x_2) - \alpha \mbox{Tr}(x_1)x_2 - \alpha \mbox{Tr}(x_2)x_1 - \alpha \mbox{Tr}(x_1 x_2) $.
\end{itemize}
Moreover
\[
c_n^{tr}(D_2^{t_{\alpha, \alpha}})= 2^n.
\]
\end{Theorem}
\begin{proof}
In case $\alpha=1$, Berele (\cite[Theorem $2.1$]{Berele1996}) proved that $\mbox{Id}^{tr}(D_2^{t_{\alpha, \alpha}}) = \langle f_1, f_3 \rangle_{T^{tr}}$. The proof when $\alpha\ne 1$ follows word for word the one given by Berele in \cite{Berele1996}.
In order to find the trace codimensions, we remark that the trace polynomials
\begin{equation} \label{non identities of D2 with trace}
\mbox{Tr}(x_{i_1} \cdots x_{i_k}) x_{j_1} \cdots x_{j_{n-k}}, \ \ \ \ \ \ \ \ \ \ \left \{ i_1, \ldots, i_k, j_1, \ldots, j_{n-k} \right \} = \left \{ 1, \ldots, n \right \}, \ \ i_1 < \cdots < i_k, \ \ j_1 < \cdots < j_{n-k},
\end{equation}
form a basis of $MT_n$ modulo $MT_n \cap \mbox{Id}^{tr}(D_2^{t_{\alpha, \alpha}})$. Hence their number, which is the $n$-th trace codimension of $D_2^{t_{\alpha, \alpha}}$, is $\sum_{k=0}^n \binom{n}{k} = 2^n$ and the proof is complete.
\end{proof}
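The identity $f_3$ can likewise be checked mechanically. The sketch below (again with diagonal matrices encoded as pairs, an encoding of our own choosing) evaluates $f_3$ on $D_2$ with the trace $t_{\alpha,\alpha}$, adding the pure trace terms to both diagonal entries since a scalar acts as a multiple of the identity:

```python
from fractions import Fraction as Q
import random

def mul(x, y):
    return (x[0] * y[0], x[1] * y[1])   # diagonal matrices multiply componentwise

def f3(x1, x2, a):
    """f_3 evaluated on D_2 with the trace t_{a,a}; pure trace terms act as
    scalar multiples of the identity, i.e. they are added to both entries."""
    t = lambda x: a * (x[0] + x[1])     # t_{a,a}(diag(p, q)) = a*(p + q)
    scalar = t(x1) * t(x2) - a * t(mul(x1, x2))
    return tuple(2 * a * a * x1[i] * x2[i]
                 - a * t(x1) * x2[i] - a * t(x2) * x1[i] + scalar
                 for i in (0, 1))

random.seed(1)
rnd = lambda: (Q(random.randint(-9, 9)), Q(random.randint(-9, 9)))
# f_3 vanishes identically on D_2^{t_{a,a}}; sample it at random rational points.
assert all(f3(rnd(), rnd(), Q(3)) == (0, 0) for _ in range(200))
```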
\begin{Remark}
\label{differentvarieties}
Here we observe a curious fact. It follows from Theorems \ref{identities and codimensions of D2 t alpha 0} and \ref{identities of D2 talpha alpha} that the relatively free algebras in the varieties of algebras with trace generated by $D_2^{t_{\alpha,0}}$ and by $D_2^{t_{\alpha,\alpha}}$ are quite similar. In fact the multilinear components of degree $n$ in these two relatively free algebras are isomorphic. But this is an isomorphism of vector spaces which cannot be extended to an isomorphism of the corresponding algebras. It can be easily seen that neither of these two varieties is a subvariety of the other as soon as $\alpha\ne 0$.
\begin{enumerate}
\item
The trace identity $f_2=\mbox{Tr}(x_1)\mbox{Tr}(x_2)-\alpha \mbox{Tr}(x_1x_2)$ does not hold for the algebra $D_2^{t_{\alpha,\alpha}}$. Evaluating it on the ``generic'' diagonal matrices $d_1=\mbox{diag}(a_1,b_1)$ and $d_2=\mbox{diag}(a_2,b_2)$ gives $\alpha^2(a_1b_2 + a_2b_1)$, which does not vanish on $D_2^{t_{\alpha,\alpha}}$.
\item
Likewise $D_2^{t_{\alpha,0}}$ does not satisfy the trace identity $f_3$. Once again, substituting the generic matrices $d_1$ and $d_2$ for $x_1$ and $x_2$ in $f_3$, we get a diagonal matrix with $0$ at position $(1,1)$ and the non-zero entry $\alpha^2(2b_1b_2 -a_1b_2-a_2b_1)$ at position $(2,2)$.
\end{enumerate}
These non-inclusions are established in a more general form in Lemmas~\ref{D2 delta 0 no T equ}, \ref{D2 gamma gamma no T equ}, and \ref{D2 alfa beta no T equ}.
\end{Remark}
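The two evaluations in the remark can be reproduced numerically. The sketch below fixes sample rational values for the generic entries (the values themselves are arbitrary) and checks that $f_2$ under $t_{\alpha,\alpha}$ yields $\alpha^2(a_1b_2+a_2b_1)$, while $f_3$ under $t_{\alpha,0}$ yields $\mbox{diag}(0,\,\alpha^2(2b_1b_2-a_1b_2-a_2b_1))$:

```python
from fractions import Fraction as Q

def mul(x, y):
    return (x[0] * y[0], x[1] * y[1])   # diagonal matrices as pairs

a = Q(2)
d1, d2 = (Q(3), Q(5)), (Q(7), Q(11))    # sample values for diag(a_i, b_i)

# f_2 = Tr(x1)Tr(x2) - a*Tr(x1 x2), evaluated with the trace t_{a,a}:
t_aa = lambda x: a * (x[0] + x[1])
f2_val = t_aa(d1) * t_aa(d2) - a * t_aa(mul(d1, d2))
assert f2_val == a * a * (d1[0] * d2[1] + d2[0] * d1[1])   # alpha^2 (a1 b2 + a2 b1)

# f_3 (with coefficient a), evaluated with the trace t_{a,0}:
t_a0 = lambda x: a * x[0]
scalar = t_a0(d1) * t_a0(d2) - a * t_a0(mul(d1, d2))
f3_val = tuple(2 * a * a * d1[i] * d2[i]
               - a * t_a0(d1) * d2[i] - a * t_a0(d2) * d1[i] + scalar
               for i in (0, 1))
assert f3_val[0] == 0
assert f3_val[1] == a * a * (2 * d1[1] * d2[1] - d1[0] * d2[1] - d2[0] * d1[1])
```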
Finally we consider the trace algebra $D_2^{t_{\alpha,\beta}}$. Recall that, for any $\begin{pmatrix}
a & 0 \\
0 & b
\end{pmatrix} \in D_2$, we have that
$
t_{\alpha, \beta} \begin{pmatrix}
a & 0 \\
0 & b
\end{pmatrix} = \alpha a + \beta b.
$
\begin{Theorem} \label{identities and codimensions of D2 talpha beta}
Let $\alpha$, $\beta \in F \setminus \{ 0 \}$, $\alpha \neq \beta$. As a trace $T$-ideal, $\mbox{Id}^{tr}(D_2^{t_{\alpha, \beta}})$ is generated by the polynomials:
\begin{itemize}
\item[•] $ f_1 = [x_1,x_2]$,
\item[•] $f_4 = -x_1 \mbox{Tr}(x_2) \mbox{Tr}(x_3) + (\alpha + \beta) x_1 \mbox{Tr}(x_2 x_3) + x_3 \mbox{Tr}(x_1) \mbox{Tr}(x_2) - (\alpha + \beta) x_3 \mbox{Tr}(x_1 x_2) - \mbox{Tr}(x_1) \mbox{Tr}(x_2 x_3) + \mbox{Tr}(x_3)\mbox{Tr}(x_1 x_2)$,
\item[•] $f_5 = \mbox{Tr}(x_1) \mbox{Tr}(x_2) \mbox{Tr}(x_3) - (\alpha \beta^2 + \alpha^2 \beta) x_1 x_2 x_3 + \alpha \beta x_1 x_2 \mbox{Tr}(x_3) + \alpha \beta x_1 x_3 \mbox{Tr}(x_2) + \alpha \beta x_2 x_3 \mbox{Tr}(x_1) - (\alpha + \beta) x_1 \mbox{Tr}(x_2) \mbox{Tr}(x_3) + (\alpha^2 + \alpha \beta + \beta^2) x_1 \mbox{Tr}(x_2 x_3) - \alpha \beta x_2 \mbox{Tr}(x_1 x_3) - \alpha \beta x_3 \mbox{Tr}(x_1 x_2) + \alpha \beta \mbox{Tr}(x_1 x_2 x_3) - (\alpha + \beta) \mbox{Tr}(x_1) \mbox{Tr}(x_2 x_3)$.
\end{itemize}
Moreover
\[
c_n^{tr}(D_2^{t_{\alpha, \beta}}) = 2^{n+1}-n-1.
\]
\end{Theorem}
\begin{proof}
Write $I = \langle f_1, f_4, f_5 \rangle_{T^{tr}}$. An immediate (but tedious) verification shows that $ I \subseteq \mbox{Id}^{tr}(D_2^{t_{\alpha, \beta}})$. In order to obtain the opposite inclusion, first we shall prove that the polynomials
\begin{equation} \label{non identities of D2 alfa beta 1}
x_{i_1} \cdots x_{i_{k}} \mbox{Tr}(x_{h_1} \cdots x_{h_{n-k}}), \ \ \ \ \
x_{i_1} \cdots x_{i_{k}} \mbox{Tr}(x_{j_1} \cdots x_{j_{s-1}}) \mbox{Tr}(x_{j_s}),
\end{equation}
where $ i_1 < \cdots < i_k$, $ h_1 < \cdots < h_{n-k}$ and $ j_1 < \cdots < j_{s-1} < j_s$, span $ MT_n $, modulo $ MT_n \cap I $, for every $n \geq 1$.
In order to achieve this goal we argue by induction. Let $f \in MT_n$ be a multilinear trace polynomial of degree $n$. Hence it is a linear combination of polynomials of the type
\[
x_{i_1} \cdots x_{i_a} \mbox{Tr}(x_{j_1} \cdots x_{j_b}) \cdots \mbox{Tr}(x_{l_1} \cdots x_{l_c})
\]
where $ \left \{ i_1, \ldots, i_a, j_1, \ldots, j_b, \ldots, l_1, \ldots, l_c \right \} = \left \{ 1, \ldots, n \right \} $.
Because of the identity $f_5 \equiv 0$, we can eliminate all products of three or more traces. So we may consider only monomials with either no trace, or with one, or with two traces. Clearly the identity $f_1$ implies that we may assume all of these monomials ordered, outside and also inside each trace.
In the case of monomials with two traces, we now show how to reduce one of these traces to the trace of a monomial of length $1$ (that is, a variable). To this end, in $f_4$ take $x_1$ as a letter, and $x_2$, $x_3$ as monomials. The last term of $f_4$ is the ``undesirable'' one, and $f_4$ writes it as a combination of monomials with either one trace, or two traces one of which is the trace of a letter (namely $x_1$). The only problem is the first term of $f_4$, but it has a letter outside the traces. Since the total degree of the monomials inside the traces in the first summand of $f_4$ is less than the initial one, we can apply induction.
Suppose now we have a linear combination of monomials with either no traces (at most one of these), or just one trace (with the variables outside the trace ordered, as well as those inside the trace), or two traces. In the latter case we may assume that the variables outside the traces are ordered, that those in the first trace are ordered (increasingly) as well, and moreover that the second trace is the trace of a variable, not of a longer monomial. We are thus concerned with the monomials having two traces. We use $f_4$ once again, in fact its last two summands, to exchange variables between the two traces. We then get monomials with one trace, or with two traces but of lower degree inside the traces, and as above we continue by induction.
In conclusion, we can suppose that in the case of two traces, the variables are ordered in the following way:
\[
x_{i_1} \cdots x_{i_a} \mbox{Tr}(x_{j_1}\cdots x_{j_b}) \mbox{Tr}(x_j)
\]
where $i_1<\cdots<i_a$ and $j_1<\cdots<j_b<j$.
We next show that the polynomials in \eqref{non identities of D2 alfa beta 1} are linearly independent modulo $ \mbox{Id}^{tr}(D_2^{t_{\alpha, \beta}})$.
Let us take generic diagonal matrices $X_i=\mbox{diag}(a_i, b_i)$, that is, we consider $a_i$ and $b_i$ as commuting independent variables. If the monomials under consideration are not linearly independent, some non-trivial linear combination of them vanishes. Evaluate such a linear combination on the generic diagonal matrices $X_i$, and order the monomials in the $a_i$ and $b_i$ appearing at positions $(1,1)$ and $(2,2)$ of the resulting matrix as follows.
In the first coordinate (that is, position $(1,1)$ of the matrices) we set $a_1<\cdots<a_n<b_1<\cdots<b_n$; in the second coordinate (position $(2,2)$ of the matrices in $D_2$) we set $b_1<\cdots<b_n<a_1<\cdots<a_n$. Then we extend these orders lexicographically to all monomials in $F[a_i, b_i]$. These are in fact two orders, one for position $(1,1)$ and another for position $(2,2)$ of the diagonal matrices.
In order to simplify the notation, let us assume that the largest monomial in the first coordinate is $a_1 \cdots a_k b_{k+1} \cdots b_n$. If $k=n$ then there is no trace at all. The case $k=n-1$ is also clear: we have only one trace and it is $\mbox{Tr}(x_n)$. So take $k\le n-2$. Such a monomial can come from either $M = x_1\cdots x_k \mbox{Tr}(x_{k+1}\cdots x_n)$ or from $N = x_1\cdots x_k \mbox{Tr}(x_{k+1}\cdots x_{n-1})\mbox{Tr}(x_n)$. Clearly in the second coordinate the largest monomial will be $b_1\cdots b_k a_{k+1}\cdots a_n$, and it comes from the above two elements only.
Now suppose that a linear combination of monomials vanishes on the generic matrices $X_i$. Then the largest monomials must cancel, and so there exist scalars $p$, $q\in F$ such that the largest monomials in $pM+qN$ cancel. After computing the traces, this means that $p\beta + q\beta^2 = 0$ and $p\alpha+q\alpha^2=0$. Viewing $p$ and $q$ as the unknowns of a $2\times 2$ linear system, the determinant of the system is $\alpha\beta(\alpha-\beta)$. Since $\alpha$ and $\beta$ are both non-zero and since $\alpha\ne \beta$, we get $p=q=0$; hence the coefficients of $M$ and $N$ in the linear combination vanish.
In conclusion, the polynomials in \eqref{non identities of D2 alfa beta 1} are linearly independent modulo $ \mbox{Id}^{tr}(D_2^{t_{\alpha, \beta}})$.
Since $ MT_n \cap \mbox{Id}^{tr}(D_2^{t_{\alpha, \beta}}) \supseteq MT_n \cap I $, they form a basis of $ MT_n$ modulo $MT_n \cap \mbox{Id}^{tr}(D_2^{t_{\alpha, \beta}}) $ and $ \mbox{Id}^{tr}(D_2^{t_{\alpha, \beta}}) = I $.
Finally, in order to compute the codimension sequence of our algebra, we only need to observe the following facts. There is only one monomial with no trace at all, and exactly $n$ monomials where $n-1$ letters are outside the traces (and the remaining one is inside a trace). Then we have $2{n\choose s}$ elements, for each $s$ with $2 \leq s \leq n$, of the type
\[
x_{i_1}\cdots x_{i_k} \mbox{Tr}(x_{j_1}\cdots x_{j_s}) \mbox{\ \ \ \ \ or \ \ \ \ \ } x_{i_1}\cdots x_{i_k} \mbox{Tr}(x_{j_1}\cdots x_{j_{s-1}}) \mbox{Tr}(x_{j_s}), \qquad k+s=n.
\]
In conclusion we get that
$$
c_n^{tr}(D_2^{t_{\alpha, \beta}}) = \binom{n}{0} + \binom{n}{1} + 2\sum_{s = 2}^{n} \binom{n}{s} = 2^{n+1}-n-1.$$
\end{proof}
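Both the identity $f_4$ and the closed form for the codimension sequence lend themselves to a mechanical sanity check. The sketch below (with our pair encoding of diagonal matrices) samples $f_4$ at random rational points and verifies $\binom{n}{0} + \binom{n}{1} + 2\sum_{s=2}^{n}\binom{n}{s} = 2^{n+1}-n-1$ for small $n$:

```python
from fractions import Fraction as Q
from math import comb
import random

def mul(x, y):
    return (x[0] * y[0], x[1] * y[1])   # diagonal matrices as pairs

def f4(x1, x2, x3, a, b):
    """f_4 evaluated on D_2 with the trace t_{a,b}; the pure trace terms
    -Tr(x1)Tr(x2 x3) + Tr(x3)Tr(x1 x2) are added to both diagonal entries."""
    t = lambda x: a * x[0] + b * x[1]
    scalar = -t(x1) * t(mul(x2, x3)) + t(x3) * t(mul(x1, x2))
    return tuple(-x1[i] * t(x2) * t(x3) + (a + b) * x1[i] * t(mul(x2, x3))
                 + x3[i] * t(x1) * t(x2) - (a + b) * x3[i] * t(mul(x1, x2))
                 + scalar
                 for i in (0, 1))

random.seed(2)
rnd = lambda: (Q(random.randint(-9, 9)), Q(random.randint(-9, 9)))
assert all(f4(rnd(), rnd(), rnd(), Q(2), Q(5)) == (0, 0) for _ in range(200))

# The codimension count: 1 + n + 2*sum_{s>=2} C(n, s) = 2^{n+1} - n - 1.
for n in range(1, 20):
    assert 1 + n + 2 * sum(comb(n, s) for s in range(2, n + 1)) == 2 ** (n + 1) - n - 1
```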
Given a variety $\mathcal{V}$ of algebras with trace, the growth of $\mathcal{V}$ is the growth of the sequence of trace codimensions of any algebra $A$ generating $\mathcal{V}$, i.e., $\mathcal{V} = \mbox{var}^{tr}(A) $. We say that $\mathcal{V}$ has almost polynomial growth if it grows exponentially but any proper subvariety has polynomial growth.
In the following theorem we prove that the algebras $D_2^{t_{\alpha, \alpha}}$ generate varieties of almost polynomial growth.
\begin{Theorem} \label{D2 alfa APG}
The algebras $D_2^{t_{\alpha, \alpha}}$, $\alpha \in F \setminus \{ 0 \}$, generate varieties of almost polynomial growth.
\end{Theorem}
\begin{proof}
By Theorem \ref{identities of D2 talpha alpha}, the variety generated by $D_2^{t_{\alpha, \alpha}}$ has exponential growth.
We are left to prove that any proper subvariety of $\mbox{var}^{tr}(D_2^{t_{\alpha, \alpha}})$ has polynomial growth. Let $\mbox{var}^{tr}(A) \subsetneq \mbox{var}^{tr}(D_2^{t_{\alpha, \alpha}})$. Then there exists a multilinear trace polynomial $f$ of degree $n$ which is a trace identity for $A$ but not for $D_2^{t_{\alpha, \alpha}}$. We can write $f$ as
\begin{equation}
f = \sum_{k=0}^n \sum_{I} \alpha_{k,I,J} \mbox{Tr}(x_{i_1} \cdots x_{i_k}) x_{j_1} \cdots x_{j_{n-k}} + h
\end{equation}
where $h \in \mbox{Id}^{tr}(D_2^{t_{\alpha, \alpha}})$, $I = \{ i_1, \ldots, i_k \}$, $J = \{ j_1, \ldots, j_{n-k} \}$, $i_1 < \cdots < i_k$ and $j_1 < \cdots < j_{n-k}$.
Let $M$ be the largest $k$ such that $\alpha_{k,I,J} \neq 0$. There may exist several monomials in $f$ with that same $k$; among these we choose the one whose trace part $x_{i_1} \cdots x_{i_k}$ is least (in the usual lexicographic order on the monomials in $x_1$, \dots, $x_n$ induced by $x_1<\cdots<x_n$).
Now consider a monomial $g$ of degree $n' > n+M$ of the type
\[
g = \mbox{Tr}(x_{l_1} \cdots x_{l_a}) x_{k_1} \cdots x_{k_{n'-a}}
\]
with $a > 2M$ and $n'-a > n-M$. We split the monomial $x_{l_1} \cdots x_{l_a}$ inside the trace into $M$ monomials $y_1 = x_{l_1} \cdots x_{l_{a_1}}$, \dots, $y_M = x_{l_{a_{M-1}+1}} \cdots x_{l_{a}}$, each one with $\lfloor \frac{a}{M} \rfloor$ or $\lceil \frac{a}{M} \rceil$ variables. We also let $y_{M+1} = x_{k_1}$, \dots, $y_{n'-a+M} = x_{k_{n'-a}}$.
Now, because of $f(y_1, \ldots, y_n) \equiv 0$, we can write $g \pmod{\mbox{Id}^{tr}(A)}$ as a linear combination of monomials having either fewer than $M$ variables $y_i$ inside the trace, or $M$ variables $y_i$ inside the trace, but at least one of these variables is not among $y_1$, \dots, $y_M$. Passing back to $x_1$, \dots, $x_{n'}$ we see that $g$ is a linear combination of monomials with less than $a$ variables inside the trace. If $a > 2M$ is still satisfied for some of these monomials (for the new value of $a$) we repeat the procedure and so on.
Thus after several such steps we shall write $g$ as a linear combination of monomials with at most $2M$ variables inside the traces. It follows that, for $n$ large enough,
\[
c_n^{tr}(A) \leq \sum_{k = 0}^{2M} \binom{n}{k} \approx bn^{2M}
\]
where $b$ is a constant. Hence $\mbox{var}^{tr}(A)$ has polynomial growth and the proof is complete.
\end{proof}
With a similar proof we obtain also the following result.
\begin{Theorem} \label{D2 alfa 0 APG}
The algebras $D_2^{t_{\alpha, 0}}$, $\alpha \in F \setminus \{ 0 \}$, generate varieties of almost polynomial growth.
\end{Theorem}
We conclude this section by proving some results showing that the algebras $D_2^{t_{\alpha, \beta}}$, $D_2^{t_{\gamma, \gamma}}$ and $D_2^{t_{\delta, 0}}$ are not $T^{tr}$-equivalent. Recall that two algebras with trace $A$ and $B$ are $T^{tr}$-equivalent, written $A \sim_{T^{tr}} B$, in case $\mbox{Id}^{tr}(A) = \mbox{Id}^{tr}(B)$.
\begin{Lemma} \label{D2 delta 0 no T equ}
Let $\alpha$, $\beta, \gamma, \delta, \epsilon \in F \setminus \{0 \}$, $\alpha \neq \beta$, $\delta \neq \epsilon$. Then
\begin{itemize}
\item[1.] $\mbox{Id}^{tr}(D_2^{t_{\delta, 0}}) \not \subset \mbox{Id}^{tr}(D_2^{t_{\alpha, \beta}})$.
\item[2.] $ \mbox{Id}^{tr}(D_2^{t_{\delta, 0}}) \not \subset \mbox{Id}^{tr}(D_2^{t_{\gamma,\gamma}})$.
\item[3.] $ \mbox{Id}^{tr}(D_2^{t_{\delta, 0}}) \not \subset \mbox{Id}^{tr}(D_2^{t_{\epsilon, 0}})$.
\end{itemize}
\end{Lemma}
\begin{proof}
Let us consider the polynomial
\[
f_2 = \mbox{Tr}(x_1)\mbox{Tr}(x_2) - \delta \mbox{Tr}(x_1 x_2).
\]
We have seen in Theorem \ref{identities and codimensions of D2 t alpha 0} that $f_2$ is a trace identity of $D_2^{t_{\delta, 0}}$. In order to complete the proof we need only show that $f_2$ does not vanish on the algebras $D_2^{t_{\alpha, \beta}}$, $D_2^{t_{\gamma, \gamma}}$ and $D_2^{t_{\epsilon, 0}}$. By considering the evaluation $x_1 = e_{11}$ and $x_2 = e_{22}$, we obtain that $f_2(e_{11}, e_{22}, t_{\alpha, \beta}) = \alpha \beta (e_{11}+ e_{22}) \neq 0$ and $f_2(e_{11}, e_{22}, t_{\gamma, \gamma}) =\gamma^2 (e_{11} + e_{22}) \neq 0$. Hence $f_2$ is not a trace identity of $D_2^{t_{\alpha, \beta}}$ and $D_2^{t_{\gamma, \gamma}}$ and we are done in the first two cases. Finally, evaluating $x_1 = x_2 = e_{11}$, we get $f_2(e_{11}, e_{11}, t_{\epsilon, 0}) = \epsilon (\epsilon - \delta) (e_{11} + e_{22}) \neq 0$ and the proof is complete.
\end{proof}
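The three evaluations of $f_2$ used in the proof can be replayed numerically. The sketch below (pair encoding of diagonal matrices, sample nonzero rational parameters) checks the scalar values $\alpha\beta$, $\gamma^2$ and $\epsilon(\epsilon-\delta)$:

```python
from fractions import Fraction as Q

e11, e22 = (Q(1), Q(0)), (Q(0), Q(1))      # diagonal matrix units as pairs
def mul(x, y):
    return (x[0] * y[0], x[1] * y[1])

def f2_scalar(x1, x2, delta, ta, tb):
    """f_2 = Tr(x1)Tr(x2) - delta*Tr(x1 x2), traces taken with t_{ta,tb}."""
    t = lambda x: ta * x[0] + tb * x[1]
    return t(x1) * t(x2) - delta * t(mul(x1, x2))

delta, alpha, beta, gamma, eps = Q(2), Q(3), Q(5), Q(7), Q(11)
assert f2_scalar(e11, e22, delta, alpha, beta) == alpha * beta       # nonzero
assert f2_scalar(e11, e22, delta, gamma, gamma) == gamma * gamma     # nonzero
assert f2_scalar(e11, e11, delta, eps, Q(0)) == eps * (eps - delta)  # nonzero
```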
\begin{Lemma} \label{D2 gamma gamma no T equ}
Let $\alpha$, $\beta, \gamma, \delta, \kappa \in F \setminus \{0 \}$, $\alpha \neq \beta$, $\gamma \neq \kappa$. Then
\begin{itemize}
\item[1.] $\mbox{Id}^{tr}(D_2^{t_{\gamma, \gamma}}) \not \subset \mbox{Id}^{tr}(D_2^{t_{\alpha, \beta}})$.
\item[2.] $ \mbox{Id}^{tr}(D_2^{t_{\gamma, \gamma}}) \not \subset \mbox{Id}^{tr}(D_2^{t_{\kappa,\kappa}})$.
\item[3.] $ \mbox{Id}^{tr}(D_2^{t_{\gamma, \gamma}}) \not \subset \mbox{Id}^{tr}(D_2^{t_{\delta, 0}})$.
\end{itemize}
\end{Lemma}
\begin{proof}
Let us consider the polynomial
$$
f_3 = \gamma^2 x_1x_2 + \gamma^2 x_2 x_1 + \mbox{Tr}(x_1)\mbox{Tr}(x_2) - \gamma \mbox{Tr}(x_1)x_2 - \gamma \mbox{Tr}(x_2)x_1 - \gamma \mbox{Tr}(x_1 x_2).
$$
We have seen in Theorem \ref{identities of D2 talpha alpha} that $f_3$ is a trace identity of $D_2^{t_{\gamma, \gamma}}$. By considering the evaluation $x_1 = e_{11}$ and $x_2 = e_{22}$, we obtain that $f_3(e_{11}, e_{22}, t_{\alpha, \beta}) = \beta (\alpha - \gamma) e_{11} + \alpha (\beta - \gamma) e_{22} \neq 0$, $f_3(e_{11}, e_{22}, t_{\kappa, \kappa}) =\kappa(\kappa- \gamma) (e_{11} + e_{22}) \neq 0$ and $f_3(e_{11}, e_{22}, t_{\delta, 0}) = -\gamma \delta e_{22} \neq 0$. Hence $f_3$ is not a trace identity of $D_2^{t_{\alpha, \beta}}$, $D_2^{t_{\kappa, \kappa}}$ and $D_2^{t_{\delta, 0}}$ and the proof is complete.
\end{proof}
\begin{Lemma} \label{D2 alfa beta no T equ}
Let $\alpha$, $\beta, \gamma, \delta, \eta, \mu \in F \setminus \{0 \}$, $\alpha \neq \beta$, $\eta \neq \mu$, $\{ \alpha, \beta \} \neq \{ \eta, \mu \}$. Then
\begin{itemize}
\item[1.] $\mbox{Id}^{tr}(D_2^{t_{\alpha, \beta}}) \not \subset \mbox{Id}^{tr}(D_2^{t_{\eta, \mu}})$.
\item[2.] $\mbox{Id}^{tr}(D_2^{t_{\alpha, \beta}}) \not \subset \mbox{Id}^{tr}(D_2^{t_{\gamma,\gamma}})$.
\item[3.] $\mbox{Id}^{tr}(D_2^{t_{\alpha, \beta}}) \not \subset \mbox{Id}^{tr}(D_2^{t_{\delta, 0}})$.
\end{itemize}
\end{Lemma}
\begin{proof}
By Theorem \ref{identities and codimensions of D2 talpha beta} we know that the polynomials $f_4$ and $f_5$ are trace identities of $D_2^{t_{\alpha, \beta}}$. In order to complete the proof we shall show that such polynomials do not vanish on the algebras $D_2^{t_{\eta, \mu}}$, $D_2^{t_{\gamma, \gamma}}$ and $D_2^{t_{\delta, 0}}$.
\begin{itemize}
\item[1.]
We have to consider two different cases. If $\alpha + \beta \neq \eta + \mu$, then $f_4(e_{11}, e_{22}, e_{22}, t_{\eta, \mu}) = \mu (\alpha + \beta - \eta - \mu) e_{11} \neq 0$ and we are done in this case. Now, let us suppose that $\alpha + \beta = \eta + \mu$. In this case, for some $\lambda \in F$, we obtain that $f_5(e_{11}, e_{22}, e_{22}, t_{\eta, \mu}) = \lambda e_{11} + \eta (\beta - \mu) (\alpha - \mu) e_{22}$ is non-zero since the hypothesis $\{ \eta, \mu \} \neq \{ \alpha, \beta \}$ implies that $\beta \neq \mu$ and $\alpha \neq \mu$.
\item[2.]
The proof is the same as that of item 1, with $\eta = \mu = \gamma$.
\item[3.] The evaluation $x_1 = x_2 = e_{22}$, $x_3 = e_{11}$ gives $f_5(e_{22}, e_{22}, e_{11}, t_{\delta, 0}) = \alpha \beta \delta e_{22} \neq 0$.
\end{itemize}
\end{proof}
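The evaluation of $f_5$ in item 3 is easy to get wrong by hand, so a mechanical check is worthwhile. The sketch below transcribes $f_5$ from Theorem \ref{identities and codimensions of D2 talpha beta} term by term (our pair encoding of diagonal matrices; pure trace terms act on both entries) and confirms $f_5(e_{22}, e_{22}, e_{11}, t_{\delta,0}) = \alpha\beta\delta\, e_{22}$:

```python
from fractions import Fraction as Q

def mul(x, y):
    return (x[0] * y[0], x[1] * y[1])   # diagonal matrices as pairs

def f5(x1, x2, x3, a, b, t):
    """f_5 (with coefficients a = alpha, b = beta) evaluated on D_2, where t
    is the trace function; pure trace terms act as scalar * identity."""
    p12, p13, p23 = mul(x1, x2), mul(x1, x3), mul(x2, x3)
    p123 = mul(p12, x3)
    scalar = (t(x1) * t(x2) * t(x3) + a * b * t(p123)
              - (a + b) * t(x1) * t(p23))
    return tuple(-(a * b * b + a * a * b) * p123[i]
                 + a * b * (p12[i] * t(x3) + p13[i] * t(x2) + p23[i] * t(x1))
                 - (a + b) * x1[i] * t(x2) * t(x3)
                 + (a * a + a * b + b * b) * x1[i] * t(p23)
                 - a * b * (x2[i] * t(p13) + x3[i] * t(p12))
                 + scalar
                 for i in (0, 1))

alpha, beta, delta = Q(3), Q(5), Q(2)
t_delta0 = lambda x: delta * x[0]       # t_{delta,0}(diag(p, q)) = delta * p
e11, e22 = (Q(1), Q(0)), (Q(0), Q(1))
# Item 3 of the lemma: f_5(e22, e22, e11) = alpha*beta*delta * e22 under t_{delta,0}.
assert f5(e22, e22, e11, alpha, beta, t_delta0) == (0, alpha * beta * delta)
```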
\section{The algebras $C_2^{t_{\alpha, \beta}}$}
In this section we focus our attention on the $F$-algebra
$$
C_2 = \left \{ \begin{pmatrix}
a & b \\
0 & a
\end{pmatrix} : a,b \in F \right \}.
$$
Since $C_2$ is commutative, every trace on $C_2$ is just a linear map $C_2 \rightarrow F$. Hence, if $\mbox{tr}$ is a trace on $C_2$, then there exist $\alpha, \beta \in F$ such that
$$
\mbox{tr} \left ( \begin{pmatrix}
a & b \\
0 & a
\end{pmatrix} \right ) = \alpha a + \beta b.
$$
We denote such a trace by $t_{\alpha, \beta}$. Moreover, $C_2^{t_{\alpha, \beta}}$ indicates the algebra $C_2$ endowed with the trace $t_{\alpha, \beta}$.
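For concrete experiments, an element $a\,1 + b\,e_{12}$ of $C_2$ can be stored as the pair $(a,b)$; the product then reflects $e_{12}^2 = 0$. The sketch below (this encoding is our own convention, not notation from the text) checks commutativity and the nilpotency of $e_{12}$:

```python
from fractions import Fraction as Q

def cmul(x, y):
    """Product in C_2: a*I + b*e12 stored as (a, b); uses e12^2 = 0."""
    return (x[0] * y[0], x[0] * y[1] + x[1] * y[0])

def t(x, alpha, beta):
    """t_{alpha,beta}(a*I + b*e12) = alpha*a + beta*b."""
    return alpha * x[0] + beta * x[1]

x, y = (Q(2), Q(3)), (Q(5), Q(7))
assert cmul(x, y) == cmul(y, x)               # C_2 is commutative
e12 = (Q(0), Q(1))
assert cmul(e12, e12) == (Q(0), Q(0))         # e_{12}^2 = 0
assert t(x, Q(4), Q(1)) == Q(11)              # 4*2 + 1*3
```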
\begin{Lemma} \label{identities of C2 t alfa 0}
Let $\alpha \in F$. Then $C_2^{t_{\alpha, 0}}$ satisfies the following trace identities of degree $2:$
\begin{itemize}
\item[1.] $[x_1, x_2] \equiv 0$.
\item[2.] $\mbox{Tr}(x_1) \mbox{Tr}(x_2) - \alpha \mbox{Tr}(x_1 x_2) \equiv 0$.
\item[3.] $\mbox{Tr}(x_1) \mbox{Tr}(x_2) - \alpha \mbox{Tr}(x_1) x_2 - \alpha \mbox{Tr}(x_2) x_1 + \alpha^2 x_1 x_2 \equiv 0$.
\end{itemize}
\end{Lemma}
\begin{proof}
The result follows by an immediate verification.
\end{proof}
If $\alpha = 0$ then $C_2^{t_{0,0}}$ is a commutative algebra with zero trace and $c_n^{tr}(C_2^{t_{0,0}}) = 1$, for all $n \geq 1$. In case $\alpha \neq 0$, by putting together Lemma \ref{identities of C2 t alfa 0} and Theorem \ref{identities and codimensions of D2 t alpha 0} we get that $\mbox{var}^{tr}(C_2^{t_{\alpha,0}}) \subsetneq \mbox{var}^{tr}(D_2^{t_{\alpha,0}}) $. Hence, by Theorem \ref{D2 alfa 0 APG}, $C_2^{t_{\alpha,0}}$ generates a variety of polynomial growth.
\begin{Remark} \label{C alpha, beta equivalent to C alpha beta'}
Let $\alpha, \beta, \beta' \in F$ with $\beta, \beta' \neq 0$. The algebras $C_2^{t_{\alpha, \beta}}$ and $C_2^{t_{\alpha, \beta'}}$ are isomorphic, as algebras with trace.
\end{Remark}
\begin{proof}
We need only observe that the linear map $\varphi \colon C_2^{t_{\alpha, \beta}} \rightarrow C_2^{t_{\alpha, \beta'}}$, defined by
$$
\varphi \left ( \begin{pmatrix}
a & b \\
0 & a
\end{pmatrix} \right ) = \begin{pmatrix}
a & \beta \beta'^{-1} b \\
0 & a
\end{pmatrix},
$$
is an isomorphism of algebras with trace.
\end{proof}
With a straightforward computation we get the following result.
\begin{Lemma} \label{identity of C alfa}
Let $\alpha \in F$. Then:
\begin{itemize}
\item[1.] $C_2^{t_{\alpha,1}}$ does not satisfy any multilinear trace identity of degree $2$ which is not a consequence
of $[x_1, x_2] \equiv 0$.
\item[2.] $C_2^{t_{\alpha,1}}$ satisfies the following trace identity of degree $3:$
$$
f_\alpha = \alpha x_1 x_2 x_3 + \mbox{Tr}(x_1 x_2) x_3 + \mbox{Tr}(x_1 x_3) x_2 + \mbox{Tr}(x_2 x_3) x_1 - \mbox{Tr}(x_1) x_2 x_3 - \mbox{Tr}(x_2) x_1 x_3 - \mbox{Tr}(x_3) x_1 x_2 - \mbox{Tr}(x_1 x_2 x_3).
$$
\end{itemize}
\end{Lemma}
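The degree-$3$ identity $f_\alpha$ of item 2 can be checked mechanically as well. The sketch below evaluates $f_\alpha$ on $C_2^{t_{\alpha,1}}$ (pairs $(a,b)$ standing for $a\,1+b\,e_{12}$, our encoding) at random rational points; the pure trace term $-\mbox{Tr}(x_1x_2x_3)$ contributes only to the identity component:

```python
from fractions import Fraction as Q
import random

def cmul(x, y):                      # C_2: (a, b) stands for a*I + b*e12
    return (x[0] * y[0], x[0] * y[1] + x[1] * y[0])

def f_alpha(x1, x2, x3, a):
    """f_alpha evaluated on C_2 with the trace t_{a,1}."""
    t = lambda x: a * x[0] + x[1]
    p12, p13, p23 = cmul(x1, x2), cmul(x1, x3), cmul(x2, x3)
    p123 = cmul(p12, x3)
    out = [a * p123[i]
           + t(p12) * x3[i] + t(p13) * x2[i] + t(p23) * x1[i]
           - t(x1) * p23[i] - t(x2) * p13[i] - t(x3) * p12[i]
           for i in (0, 1)]
    out[0] -= t(p123)                # the scalar term hits only the identity part
    return tuple(out)

random.seed(3)
rnd = lambda: (Q(random.randint(-9, 9)), Q(random.randint(-9, 9)))
assert all(f_alpha(rnd(), rnd(), rnd(), Q(4)) == (0, 0) for _ in range(200))
```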
Next we shall prove that, for any $\alpha \in F$, the algebra $C_2^{t_{\alpha,1}}$ generates a variety of exponential growth.
\begin{Theorem} \label{C alfa has exp growth}
For any $\alpha \in F$, the algebra $C_2^{t_{\alpha,1}}$ generates a variety of exponential growth.
\end{Theorem}
\begin{proof}
Let us consider the following set of trace monomials of degree $n$:
\begin{equation} \label{monomials}
\mbox{Tr}(x_{i_1}) \cdots \mbox{Tr}(x_{i_k}) x_{j_1} \cdots x_{j_{n-k}},
\end{equation}
where $\{ i_1, \ldots, i_k, j_1, \ldots, j_{n-k} \} = \{ 1, \ldots, n \}$, $i_1 < \cdots < i_k$, $j_1 < \cdots < j_{n-k}$, $k = 0, \ldots, n$.
The number of elements in \eqref{monomials} is exactly $
\sum_{k = 0}^n \binom{n}{k} = 2^n$. So, in order to prove the theorem we shall show that the monomials in \eqref{monomials} are linearly independent, modulo $\mbox{Id}^{tr}(C_2^{t_{\alpha,1}})$. To this end, let $g \in \mbox{Id}^{tr}(C_2^{t_{\alpha,1}})$ be a linear combination of the above elements:
$$
g(x_1, \ldots, x_n) = \sum_{I,J} a_{I,J} \mbox{Tr}(x_{i_1}) \cdots \mbox{Tr}(x_{i_k}) x_{j_1} \cdots x_{j_{n-k}},
$$
where $k= 0, \ldots, n$, $I = \{ x_{i_1}, \ldots, x_{i_k} \}$, $J = \{ x_{j_1}, \ldots, x_{j_{n-k}} \}$ and $i_1 < \cdots < i_k, \ j_1 < \cdots < j_{n-k}$.
We claim that $g$ is actually the zero polynomial. Let $k$ be the largest integer such that $a_{I,J} \neq 0$ for some $I$ with $|I| = k$, and fix such $I = \{ x_{i_1}, \ldots, x_{i_k} \}$ and $J = \{ x_{j_1}, \ldots, x_{j_{n-k}} \}$. By making the evaluation $x_{i_1} = \cdots = x_{i_k} = e_{12}$ and $x_{j_1} = \cdots = x_{j_{n-k}} = e_{11} + e_{22}$, we get $g = a_{I,J} (e_{11} + e_{22}) + \gamma e_{12} = 0$, for some $\gamma \in F$. This implies $ a_{I,J} = 0$, a contradiction.
\end{proof}
We conclude this section with the following results comparing trace $T$-ideals.
\begin{Lemma} \label{C alfa no T equ C beta}
Let $\alpha, \beta \in F$ be two distinct elements. Then $\mbox{Id}^{tr}(C_2^{t_{\alpha,1}}) \not \subset \mbox{Id}^{tr}(C_2^{t_{\beta,1}}) $.
\end{Lemma}
\begin{proof}
Let us consider the polynomial
$$
f_\alpha = \alpha x_1 x_2 x_3 + \mbox{Tr}(x_1 x_2) x_3 + \mbox{Tr}(x_1 x_3) x_2 + \mbox{Tr}(x_2 x_3) x_1 - \mbox{Tr}(x_1) x_2 x_3 - \mbox{Tr}(x_2) x_1 x_3 - \mbox{Tr}(x_3) x_1 x_2 - \mbox{Tr}(x_1 x_2 x_3).
$$
We have seen in Lemma \ref{identity of C alfa} that $f_\alpha$ is a trace identity of $C_2^{t_{\alpha,1}}$. In order to complete the proof we need only show that such a polynomial does not vanish on $C_2^{t_{\beta,1}}$. By considering the evaluation $x_1 = x_2 = x_3 = e_{11} + e_{22} \in C_2^{t_{\beta,1}} $, we get
\[
f_\alpha(e_{11} + e_{22}, e_{11} + e_{22}, e_{11} + e_{22}, t_{\beta,1}) = (\alpha - \beta) (e_{11} + e_{22}).
\]
Since $\alpha \neq \beta $, $f_\alpha$ does not vanish on $C_2^{t_{\beta,1}}$ and we are done.
\end{proof}
\begin{Lemma} \label{D2 alpha 0, D2 alfa alfa no T equ C beta}
Let $\alpha, \beta, \gamma, \delta \in F \setminus \{0 \}$, $\epsilon \in F$, $\alpha \neq \beta$. Then
\begin{itemize}
\item[1.] $\mbox{Id}^{tr}(D_2^{t_{\delta,0}}) \not \subset \mbox{Id}^{tr}(C_2^{t_{\epsilon,1}}) $,
\item[2.] $\mbox{Id}^{tr}(D_2^{t_{\gamma,\gamma}}) \not \subset \mbox{Id}^{tr}(C_2^{t_{\epsilon,1}}) $,
\item[3.] $\mbox{Id}^{tr}(D_2^{t_{\alpha,\beta}}) \not \subset \mbox{Id}^{tr}(C_2^{t_{\epsilon,1}}) $.
\end{itemize}
\end{Lemma}
\begin{proof}
By Theorems \ref{identities and codimensions of D2 t alpha 0} and \ref{identities of D2 talpha alpha}, we know that the algebras $D_2^{t_{\delta,0}}$ and $D_2^{t_{\gamma,\gamma}}$ satisfy trace identities of degree $2$ which are not a consequence of $[x_1, x_2] \equiv 0$. This does not happen for the algebra $C_2^{t_{\epsilon,1}}$ (see the first item of Lemma \ref{identity of C alfa}) and so the proof of the first two items is complete.
In order to prove the last item, let us consider the polynomial $f_5$ of Theorem \ref{identities and codimensions of D2 talpha beta}, which is a trace identity of $D_2^{t_{\alpha, \beta}}$. Such a polynomial does not vanish on $C_2^{t_{\epsilon,1}}$. In fact, by considering the evaluation $x_1 = x_2 = x_3 = e_{12} \in C_2^{t_{\epsilon,1}} $, we get
\[
f_5(e_{12}, e_{12}, e_{12}, t_{\epsilon,1}) = e_{11} + e_{22} - (\alpha + \beta)e_{12} \neq 0.
\]
\end{proof}
\begin{Lemma} \label{C alfa no equ D2 beta gamma}
Let $\alpha, \beta, \gamma \in F$, $\alpha \neq 0$. Then $\mbox{Id}^{tr}(C_2^{t_{\gamma,1}}) \not \subset \mbox{Id}^{tr}(D_2^{t_{\alpha, \beta}}) $.
\end{Lemma}
\begin{proof}
Let us consider the polynomial $f_\gamma$ of Lemma \ref{identity of C alfa}, which is a trace identity of $C_2^{t_{\gamma,1}}$. We shall show that it does not vanish on $D_2^{t_{\alpha, \beta}}$. By considering the evaluation $x_1 = x_2 = e_{11}$ and $x_3 = e_{22}$, we get
\[
f_\gamma(e_{11}, e_{11}, e_{22}, t_{\alpha, \beta}) = \alpha e_{22} - \beta e_{11}.
\]
Since $\alpha \neq 0$, $f_\gamma$ does not vanish on $D_2^{t_{\alpha, \beta}}$ and we are done.
In particular, in case $\beta = \alpha$ we get that $\mbox{Id}^{tr}(C_2^{t_{\gamma,1}}) \not \subset \mbox{Id}^{tr}(D_2^{t_{\alpha, \alpha}}) $, and in case $\beta = 0$, $\mbox{Id}^{tr}(C_2^{t_{\gamma,1}}) \not \subset \mbox{Id}^{tr}(D_2^{t_{\alpha, 0}}) $.
\end{proof}
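The evaluation of $f_\gamma$ used in this lemma can also be replayed numerically. The sketch below (diagonal matrices as pairs, sample parameters) confirms $f_\gamma(e_{11}, e_{11}, e_{22}, t_{\alpha,\beta}) = \alpha e_{22} - \beta e_{11}$, independently of $\gamma$ since the cubic term vanishes at this point:

```python
from fractions import Fraction as Q

def mul(x, y):                       # D_2: diagonal matrices as pairs
    return (x[0] * y[0], x[1] * y[1])

def f_gamma(x1, x2, x3, g, t):
    """f_gamma evaluated on D_2 with a trace function t; the pure trace term
    -Tr(x1 x2 x3) acts as scalar * identity, i.e. on both entries."""
    p12, p13, p23 = mul(x1, x2), mul(x1, x3), mul(x2, x3)
    p123 = mul(p12, x3)
    return tuple(g * p123[i]
                 + t(p12) * x3[i] + t(p13) * x2[i] + t(p23) * x1[i]
                 - t(x1) * p23[i] - t(x2) * p13[i] - t(x3) * p12[i]
                 - t(p123)
                 for i in (0, 1))

alpha, beta, gamma = Q(3), Q(5), Q(7)
t_ab = lambda x: alpha * x[0] + beta * x[1]
e11, e22 = (Q(1), Q(0)), (Q(0), Q(1))
# The evaluation used in the lemma: f_gamma(e11, e11, e22) = alpha*e22 - beta*e11.
assert f_gamma(e11, e11, e22, gamma, t_ab) == (-beta, alpha)
```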
\section{Algebras with trace of polynomial growth}
We start this section by describing a version of the Wedderburn-Malcev theorem for finite dimensional algebras with trace.
First we recall some definitions. Let $A$ be a unitary algebra with trace $\mbox{tr}$. A subset (subalgebra, ideal) $ S \subseteq A$ is a trace-subset (subalgebra, ideal) of $A$ if it is stable under the trace; in other words for all $ s \in S $, one has $ \mbox{tr}(s) \in S $.
\begin{Definition}
Let $A$ be an algebra with trace. $A$ is called a trace-simple algebra if
\begin{enumerate}
\item[1.] $A^2 \neq 0$,
\item[2.] $A$ has no non-trivial trace-ideals.
\end{enumerate}
\end{Definition}
\begin{Remark} \label{simple implies trace simple}
Let $A$ be an algebra with trace $\mbox{tr}$.
\begin{enumerate}
\item[1.] If $A$ is simple (as an algebra) then $A$ is trace-simple.
\item[2.] If $I$ is a proper trace-ideal of $A$ then the trace vanishes on $I$.
\end{enumerate}
\end{Remark}
\begin{proof}
The first item is obvious. For the second one, suppose that there exists $a\in I$ such that $\mbox{tr}(a)=\alpha\ne 0$. Then $\alpha \in F$ is invertible. Moreover, since $I$ is a trace-ideal, it contains $\alpha = \alpha 1_A$, hence $1_A$, and so $I=A$, a contradiction.
Notice that the second item of the remark also holds for one-sided ideals.
\end{proof}
In the following result we give a version of the Wedderburn--Malcev theorem for finite dimensional algebras with trace.
\begin{Theorem}\label{WM}
Let $A$ be a finite dimensional unitary algebra with trace $\mbox{tr}$ over an algebraically closed field $F$ of characteristic $0$. Then there exists a semisimple trace-subalgebra $B$ such that
\[
A=B+J(A) = B_1 \oplus \cdots \oplus B_k + J(A)
\]
where $J = J(A)$ is the Jacobson radical of $A$ and $B_1$, \dots, $B_k$ are simple algebras.
\end{Theorem}
\begin{proof}
By the Wedderburn--Malcev theorem for the ordinary case (see for example \cite[Theorem $3.4.3$]{GiambrunoZaicev2005book}), we can write $A$ as a direct sum of vector spaces
\[
A = B + J = B_1 \oplus \cdots \oplus B_k + J
\]
where $B$ is a maximal semisimple subalgebra of $A$, $J= J(A)$ is the Jacobson radical of $A$, and $B_i$ are simple algebras, $i = 1$, \dots, $k$. By the Theorems of Wedderburn and Wedderburn--Artin on simple and semisimple algebras (see for instance \cite[Theorems 1.4.4, 2.1.6]{Herstein1968book}), and since $F$ is algebraically closed, we have that
\[
B = B_1 \oplus \cdots \oplus B_k = M_{n_1}(F) \oplus \cdots \oplus M_{n_k}(F).
\]
Here $M_{n_i}(F)$ is the simple algebra of $n_i \times n_i$ matrices, $i = 1$, \dots, $k$. Clearly $B$ is a trace-subalgebra since $1_A \in B$. Moreover, by considering the restriction of the trace $\mbox{tr}$ on $B$ it is easy to see that there exist $\alpha_i \in F$ such that
\[
\mbox{tr}(a_1, \ldots, a_k) = \sum_{i = 1}^{k} t_{\alpha_i}(a_i)
\]
where $a_i \in M_{n_i}(F)$, $t_{\alpha_i} = \alpha_i t_1^i$, and $t_1^i$ is the ordinary trace on the matrix algebra $M_{n_i}(F)$.
\end{proof}
In order to prove the main result of this paper we need the following lemmas.
\begin{Lemma} \label{C alfa in F+J}
Let $A = B + J$ be a finite dimensional algebra with trace $\mbox{tr}$. If there exists $j \in J$ such that $\mbox{tr}(j) \neq 0$ then $C_2^{t_{\alpha,1}} \in \mbox{var}^{tr}(A)$, for some $\alpha \in F$.
\end{Lemma}
\begin{proof}
Let us consider the trace subalgebra $B'$ of $A$ generated by $1$, $j$ over $F$ and let $I$ be the ideal of $B'$ generated by $j^n$, where $n$ is the least integer such that $\mbox{tr}(j^n) = \mbox{tr}(j^{n+1}) = \cdots = 0$. Then the quotient algebra $\bar{B} = B'/I$ is an algebra with trace $t$ defined as $t(a+I) = \mbox{tr}(a)$, for any $a \in B'$. Obviously $\bar{B} = \mbox{span} \{ \bar{1} = 1+I, \bar{j} = j+I, \ldots, \bar{j}^{n-1} = j^{n-1}+I\}$. Let $\alpha = \mbox{tr}(1)$ and $\beta = \mbox{tr}({j}^{n-1}) \neq 0$.
We claim that $C_2^{t_{\alpha, \beta}} \in \mbox{var}^{tr}(\bar{B})$. Let $\varphi\colon C_2^{t_{\alpha, \beta}} \to \bar{B} $ be the linear map defined by $\varphi(e_{11}+e_{22}) = \bar{1}$ and $\varphi(e_{12}) = \bar{j}^{n-1}$. It is easy to check that $\varphi$ is an injective homomorphism of algebras with trace. Hence $C_2^{t_{\alpha, \beta}}$ is isomorphic to a trace subalgebra of $\bar{B}$ and the claim is proved.
Since, by Remark \ref{C alpha, beta equivalent to C alpha beta'}, $C_2^{t_{\alpha, \beta}} \cong C_2^{t_{\alpha, 1}}$, it follows that $C_2^{t_{\alpha,1}} \in \mbox{var}^{tr}(A)$ and the proof is complete.
\end{proof}
\begin{Lemma} \label{D2 alfa alfa in Mn alfa}
For any $\alpha \in F$, the algebra $D_2^{t_{\alpha, \alpha}}$ belongs to the variety generated by $M_n^{t_\alpha}$.
\end{Lemma}
\begin{proof}
Let us recall that we denote by $M_n^{t_\alpha}$ the algebra of the $n\times n$ matrices endowed with the trace $t_\alpha$; this is the usual trace multiplied by the scalar $\alpha\in F$.
Since $D_2^{t_{\alpha,\alpha}} \subseteq M_2^{t_\alpha}$ as algebras with (the same) trace, it follows that $D_2^{t_{\alpha,\alpha}}$ satisfies all trace identities of $M_2^{t_\alpha}$ (and some additional ones). Therefore $D_2^{t_{\alpha,\alpha}} \in \mbox{var}^{tr}(M_2^{t_\alpha})$. In order to complete the proof we just need to show that $M_2^{t_{\alpha}} \in \mbox{var}^{tr}(M_n^{t_\alpha})$. To this end, let $f \in \mbox{Id}^{tr}(M_n^{t_\alpha})$ be a multilinear trace identity of degree $m$ and suppose, by contradiction, that
there exist elementary matrices $e_{i_1 j_1}$, \dots, $e_{i_m j_m}$ in $M_2^{t_\alpha}$ such that $f(e_{i_1 j_1}, \ldots, e_{i_m j_m}) = \sum \alpha_{i, j} e_{i j} \neq 0$. Notice that, if we denote by $e_{ij}'$ the corresponding elementary matrices in $M_n^{t_\alpha}$, then $f(e_{i_1 j_1}', \ldots, e_{i_m j_m}') = \sum \alpha_{i, j} e_{i j} + \sum_{i=3}^n \beta_{ii} e_{ii} \neq 0$ (the extra diagonal terms $\beta_{ii} e_{ii}$ come from the pure trace summands of $f$, which act as scalar multiples of the identity), a contradiction.
\end{proof}
In order to prove the main result of this paper we also have to consider the algebra $UT_2$ of $ 2 \times 2$ upper-triangular matrices endowed with zero trace. In the following theorem we collect some results concerning this algebra.
\begin{Theorem} \label{UT2}
Let $UT_2$ be the algebra of $ 2 \times 2$ upper-triangular matrices endowed with zero trace.
\begin{itemize}
\item[1.] The trace $T$-ideal $\mbox{Id}^{tr}(UT_2)$ is generated by $[x_1, x_2] [x_3, x_4]$ and $\mbox{Tr}(x)$.
\item[2.] $UT_2$ generates a variety of almost polynomial growth.
\item[3.] $ \mbox{Id}^{tr}(UT_2) \nsubseteq \mbox{Id}^{tr}(A)$, where $A \in \{ D_2^{t_{\alpha, \beta}}, D_2^{t_{\gamma,\gamma}}, D_2^{t_{\delta,0}}, C_2^{t_{\epsilon,1}} \}$, $\alpha$, $\beta$, $\gamma$, $\delta \in F \setminus \{ 0 \}$, $\alpha \ne \beta$, $\epsilon \in F$.
\item[4.] $ \mbox{Id}^{tr}(A) \nsubseteq \mbox{Id}^{tr}(UT_2)$, where $A \in \{ D_2^{t_{\alpha, \beta}}, D_2^{t_{\gamma,\gamma}}, D_2^{t_{\delta,0}}, C_2^{t_{\epsilon,1}} \}$, $\alpha$, $\beta$, $\gamma$, $\delta \in F \setminus \{ 0 \}$, $\alpha \ne \beta$, $\epsilon \in F$.
\end{itemize}
\end{Theorem}
\begin{proof}
The first two items follow directly from the ordinary case (see, for instance, \cite[Chapters 4 and 7]{GiambrunoZaicev2005book}).
For item (3) it is sufficient to observe that $\mbox{Tr}(x) \equiv 0$ is a trace identity of $UT_2$ but such a polynomial does not vanish on $A$, for any $A \in \{ D_2^{t_{\alpha, \beta}}, D_2^{t_{\gamma,\gamma}}, D_2^{t_{\delta,0}}, C_2^{t_{\epsilon,1}} \}$. Finally, since the algebras $D_2^{t_{\alpha, \beta}}, D_2^{t_{\gamma,\gamma}}, D_2^{t_{\delta,0}}, C_2^{t_{\epsilon,1}}$ are commutative and $UT_2$ is not, we get item (4), and the proof is complete.
\end{proof}
Now we are in a position to prove the following theorem characterizing the varieties of unitary algebras with trace which are generated by finite dimensional algebras, and have polynomial growth of their codimensions.
\begin{Theorem} \label{characterization}
Let $A$ be a finite dimensional unitary algebra with trace $\mbox{tr}$ over a field $F$ of characteristic zero. Then the sequence $c_n^{tr}(A)$, $n=1$, 2, \dots, is polynomially bounded if and only if $ D_2^{t_{\alpha, \beta}}$, $D_2^{t_{\gamma,\gamma}}$, $D_2^{t_{\delta,0}}$, $C_2^{t_{\epsilon,1}}, UT_2 \notin \mbox{var}^{tr}(A)$, for any choice of $\alpha$, $\beta$, $\gamma$, $\delta \in F \setminus \{ 0 \}$, $\alpha \ne \beta$, $\epsilon \in F$.
\end{Theorem}
\begin{proof}
By Theorems \ref{identities and codimensions of D2 t alpha 0}, \ref{identities of D2 talpha alpha}, \ref{identities and codimensions of D2 talpha beta}, \ref{C alfa has exp growth}, \ref{UT2}, the algebras $D_2^{t_{\alpha, \beta}}$, $D_2^{t_{\gamma,\gamma}}$, $D_2^{t_{\delta,0}}$, $C_2^{t_{\epsilon,1}}$ and $UT_2$ generate varieties of exponential growth. Hence, if $c_n^{tr}(A)$ is polynomially bounded, then $ D_2^{t_{\alpha, \beta}}$, $D_2^{t_{\gamma,\gamma}}$, $ D_2^{t_{\delta,0}}$, $C_2^{t_{\epsilon,1}}, UT_2 \notin \mbox{var}^{tr}(A)$, for any $\alpha$, $\beta$, $\gamma$, $\delta \in F \setminus \{ 0 \}$, $\alpha \ne\beta$, $\epsilon \in F$.
Conversely suppose that $ D_2^{t_{\alpha, \beta}}$, $D_2^{t_{\gamma,\gamma}}$, $ D_2^{t_{\delta, 0}}$, $C_2^{t_{\epsilon,1}}, UT_2 \notin \mbox{var}^{tr}(A)$, for any $\alpha$, $\beta$, $\gamma$, $ \delta \in F \setminus \{ 0 \}$, $\alpha \ne\beta$, $\epsilon \in F$. Since we are dealing with codimensions, and these do not change under extensions of the base field, we may assume that the field $F$ is algebraically closed. By Theorem \ref{WM}, we get that
\[
A = M_{n_1}(F) \oplus \cdots \oplus M_{n_k}(F) + J, \ \ k \geq 1,
\]
and there exist constants $\alpha_i$ such that, for $a_i \in M_{n_i}(F)$, we have
\[
tr(a_1 + \cdots + a_k) = \sum_{i = 1}^{k} t_{\alpha_i}(a_i).
\]
Since $D_2^{t_{\gamma,\gamma}} \notin \mbox{var}^{tr}(A)$, for any $\gamma \in F \setminus \{ 0 \}$, and since, by Lemma \ref{D2 alfa alfa in Mn alfa}, we have that, for $n\ge 2$, $D_2^{t_{\gamma,\gamma}} \in \mbox{var}^{tr}(M^{t_\gamma}_n) \subseteq \mbox{var}^{tr}(A) $, we get that $n_i = 1$, for every $i = 1$, \dots, $k$. Hence
\[
A=A_1\oplus \cdots\oplus A_k + J
\]
where for every $i=1$, \dots, $k$, $A_i\cong F$ and the trace on it is $t_{\alpha_i}$.
Since, for any $\alpha \in F$, $C_2^{t_{\alpha,1}} \not \in \mbox{var}^{tr}(A)$, by Lemma \ref{C alfa in F+J} we must have that the trace vanishes on $J$.
Now, if the trace on $A_i$ is zero for every $i = 1$, \dots, $k$, then, since $UT_2 \not \in \mbox{var}^{tr}(A)$, for any $i \neq j$ we must have $A_i J A_j = 0$. Hence, for $n \geq 1$, $c_n^{tr}(A) = c_n(A)$ is polynomially bounded (see, for instance, \cite[Chapter 7]{GiambrunoZaicev2005book}) and we are done in this case.
Hence, we may assume that there exists $i$ such that the trace on $A_i$ is $t_{\alpha_i}$, with $\alpha_i \neq 0$.
Let $F_\alpha$ denote the field $F$ endowed with the trace $t_\alpha$. We claim that $F_\alpha \oplus F_\beta$ is isomorphic to $D_2^{t_{\alpha, \beta}}$ if $\alpha \neq \beta$ (notice that $\beta$ could be zero) and to $D_2^{t_{\alpha, \alpha}}$ otherwise. Here we shall denote by $t$ the trace map on $F_{\alpha} \oplus F_{\beta} $ defined as $t((a,b)) = t_{\alpha}(a) + t_{\beta}(b)$, for all $(a,b) \in F_\alpha \oplus F_\beta$. In order to prove the claim, let us consider the linear map $\varphi\colon D_2 \rightarrow F_\alpha \oplus F_\beta $ such that
\[
\varphi \begin{pmatrix} 1 & 0 \\ 0 &
0 \end{pmatrix} = (1,0) \ \ \ \ \ \ \ \ \ \mbox{and} \ \ \ \ \ \ \ \ \ \varphi \begin{pmatrix} 0 & 0 \\ 0 &
1 \end{pmatrix} = (0,1).
\]
It is easily seen that $\varphi$ is an isomorphism of algebras.
Now, if $\alpha \neq \beta$, we have that
\[
\varphi \left ( t_{\alpha, \beta} \begin{pmatrix} 1 & 0 \\ 0 &
0 \end{pmatrix} \right ) = \varphi(\alpha) = (\alpha, \alpha) = t(1,0) = t \left ( \varphi \begin{pmatrix} 1 & 0 \\ 0 &
0 \end{pmatrix} \right ),
\]
\[
\varphi \left ( t_{\alpha, \beta} \begin{pmatrix} 0 & 0 \\ 0 &
1 \end{pmatrix} \right ) = \varphi(\beta) = (\beta, \beta) = t(0,1) = t \left ( \varphi \begin{pmatrix} 0 & 0 \\ 0 &
1 \end{pmatrix} \right ),
\]
and so $\varphi$ is an isomorphism of algebras with trace between $D_2^{t_{\alpha, \beta}}$ and $F_\alpha \oplus F_\beta$. In the same way, if $\alpha = \beta$, we get a trace isomorphism between $D_2^{t_{\alpha,\alpha}}$ and $F_\alpha \oplus F_\alpha$.
Hence, since $D_2^{t_{\alpha, \beta}}, D_2^{t_{\gamma,\gamma}}$, $ D_2^{t_{\delta,0}} \notin \mbox{var}^{tr}(A)$, it follows that
\[
A = B + J
\]
where $B \cong F$ and for all $a = b+j \in A$, $tr(a) = tr(b+j) = \alpha b$, with $\alpha \neq 0$.
In order to complete the proof we need to show that $B + J$ has polynomially bounded trace codimensions.
Notice that the following polynomials are trace identities of $B + J$:
\begin{enumerate}
\item[1.] $\alpha \mbox{Tr}(x_1 x_2) - \mbox{Tr}(x_1) \mbox{Tr}(x_2) \equiv 0$,
\item[2.] $ \left ( \mbox{Tr}(x_1) - \alpha x_1 \right )\cdots \left ( \mbox{Tr}(x_{q+1}) - \alpha x_{q+1} \right )\equiv 0$, where $J^q \neq 0$ and $J^{q+1} = 0$.
\end{enumerate}
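Both trace identities can be checked directly from the description of the trace on $A = B + J$. Under any evaluation $x_i \mapsto b_i + j_i$, with $b_i \in F$ and $j_i \in J$, we have $\mbox{tr}(b_i + j_i) = \alpha b_i$ and, since the trace vanishes on $J$,
\[
\alpha \, \mbox{tr}\bigl( (b_1+j_1)(b_2+j_2) \bigr) = \alpha \, \mbox{tr}\bigl( b_1 b_2 + b_1 j_2 + b_2 j_1 + j_1 j_2 \bigr) = \alpha^2 b_1 b_2 = \mbox{tr}(b_1+j_1) \, \mbox{tr}(b_2+j_2),
\]
which gives the first identity. For the second one, each factor evaluates to $\mbox{tr}(b_i+j_i) - \alpha (b_i+j_i) = -\alpha j_i \in J$, so any product of $q+1$ such factors lies in $J^{q+1} = 0$.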
Modulo the first trace identity, every multilinear trace polynomial in the variables $x_1$, \dots, $x_n$ is a linear combination of expressions of the type
\[
\mbox{Tr}(x_{i_1}) \cdots \mbox{Tr}(x_{i_a}) x_{j_1} \cdots x_{j_b}
\]
where $ \left \{ i_1, \ldots, i_a, j_1, \ldots, j_b \right \} = \left \{ 1, \ldots, n \right \} $ and $i_1 < \cdots < i_a $. The second identity implies that we may suppose $a\le q$: indeed, if $a\ge q+1$, then by expanding the second trace identity one can rewrite the product of $q+1$ traces as a linear combination of elements with fewer traces.
We can further reduce the form of the latter polynomials. As we consider algebras with unit we can rewrite each monomial $x_{j_1} \cdots x_{j_b}$ as a linear combination of elements of the type
\[
x_{p_1} \cdots x_{p_c} \left [ x_{l_1}, \ldots , x_{l_{k}} \right ] \cdots \left [ x_{m_1}, \ldots , x_{m_{h}} \right ].
\]
Here the commutators that appear are left normed, that is $[u,v] = uv-vu$, $[u,v,w] = [[u,v],w]$ and so on. Moreover
\[
\left \{ i_1, \ldots, i_a, p_1, \ldots, p_c, l_1, \ldots, l_k, \ldots, m_1, \ldots, m_h \right \} = \left \{ 1, \ldots, n \right \}.
\]
By applying the Poincar\'e--Birkhoff--Witt theorem, we can assume further that $p_1 < \cdots < p_c $, and that the commutators are ordered. Therefore we can write each trace polynomial as a linear combination of elements of the type
\[
\mbox{Tr}(x_{i_1}) \cdots \mbox{Tr}(x_{i_a}) x_{p_1} \cdots x_{p_c} [ x_{l_1}, \ldots , x_{l_{k_1}} ] \cdots [ x_{m_1}, \ldots , x_{m_{k_h}} ],
\]
where $ \left \{ i_1, \ldots, i_a, p_1, \ldots, p_c, l_1, \ldots, l_{k_1}, \ldots, m_1, \ldots, m_{k_h} \right \} = \left \{ 1, \ldots, n \right \} $ and $i_1 < \cdots < i_a $, $p_1 < \cdots < p_c $, $a \leq q$. The algebra $B$ in the decomposition $A=B+J$ is commutative. Hence each commutator vanishes when evaluated on elements of $B$ only. Thus in order to get a non-zero element we must substitute in a commutator at least one element from $J$. Since $J^{q+1} = 0$, a product of $q+1$ commutators is a trace identity. Therefore we also get the restriction
$ K := k_1 + \cdots + k_h \leq q$. In this way
\begin{eqnarray*}
c_n^{tr}(A) &\le & \sum_{a=0}^q \binom{n}{a} \left ( \sum_{K = k_1 + \cdots + k_h = 0}^q \binom{n-a}{K} \binom{K}{k_1, \ldots, k_h } k_1! \cdots k_h! \right ) \\
&=&\sum_{a=0}^q \dfrac{n(n-1) \cdots (n-a+1)}{a!}\left ( \sum_{K = k_1 + \cdots + k_h=0}^q \dfrac{(n-a)!}{(n-a-K)!} \right ) \\
& \approx & cn^{2q}
\end{eqnarray*}
where $c$ is a constant. Hence $A = B + J$ has polynomial growth and the proof is complete.
\end{proof}
As an immediate consequence, we get the following result.
\begin{Corollary} If $A$ is a finite dimensional unitary algebra with trace, then the sequence $c^{tr}_n(A)$, $n=1$, 2, \dots, is either polynomially bounded or grows exponentially.
\end{Corollary}
Now we prove the following corollary.
\begin{Corollary} Let $\alpha$, $\beta \in F \setminus \{ 0 \}$, $\alpha \neq \beta$. Any proper subvariety of $\mbox{var}^{tr}(D_2^{t_{\alpha, \beta}})$, generated by a finite dimensional algebra with trace, has polynomial growth.
\end{Corollary}
\begin{proof}
Let $\mathcal{V} = \mbox{var}^{tr}(A) \subsetneq \mbox{var}^{tr}(D_2^{t_{\alpha, \beta}})$ where $A$ is a finite dimensional algebra with trace. As a consequence of Lemmas \ref{D2 alfa beta no T equ}, \ref{D2 alpha 0, D2 alfa alfa no T equ C beta} (item $3.$) and Theorem \ref{UT2}, we get that $UT_2$, $D_2^{t_{\alpha', \beta'}}$, $D_2^{t_{\gamma, \gamma}}$, $D_2^{t_{\delta,0}}$, $ C_2^{t_{\epsilon,1}} \not \in \mbox{var}^{tr}(A)$, for any $\alpha'$, $\beta'$, $\gamma$, $\delta \in F \setminus \{ 0 \}$, $\alpha' \ne\beta'$, $\epsilon \in F$. Hence Theorem \ref{characterization} applies and the proof is complete.
\end{proof}
With a similar approach we obtain the following result.
\begin{Corollary} For any $\epsilon \in F$, any proper subvariety of $\mbox{var}^{tr}(C_2^{t_{\epsilon,1}})$, generated by a finite dimensional algebra with trace, has polynomial growth.
\end{Corollary}
According to the previous results, with an abuse of terminology, we may say that $D_2^{t_{\alpha, \beta}}$ and $C_2^{t_{\epsilon,1}}$ generate varieties of almost polynomial growth. As a consequence we state the following corollary.
\begin{Corollary} The algebras $UT_2$, $D_2^{t_{\alpha, \beta}}$, $D_2^{t_{\gamma,\gamma}}$, $D_2^{t_{\delta,0}}$ and $C_2^{t_{\epsilon,1}}$, $\alpha, \beta, \gamma, \delta \in F \setminus \{ 0 \}$, $\alpha \neq \beta$, $\epsilon \in F$, are the only finite dimensional algebras with trace generating varieties of almost polynomial growth.
\end{Corollary}
\textbf{Acknowledgements}
We thank the Referee for the helpful comments. Remark~\ref{differentvarieties} was added in order to answer a question raised by the Referee.
\end{document} |
\begin{document}
\title{The non-repetitive colorings of grids}
\author{Tianyi Tao \\
School of Mathematical Sciences \\
Fudan University \\
{\tt [email protected]}}
\date{\today}
\maketitle
\begin{abstract}
A vertex coloring of a graph $G$ is non-repetitive if $G$ contains no path for which
the first half of the path is assigned the same sequence of colors as the second half.
Thue showed that the path $P_n$ is non-repetitively 3-colorable. We prove that the grid $P_n\square P_n$ is non-repetitively 12-colorable and extend this method to higher-dimensional settings.
\end{abstract}
\section{Introduction}
A vertex coloring of a graph $G$ is non-repetitive if $G$ contains no path for which
the first half of the path is assigned the same sequence of colors as the second half. We call the minimal number of colors needed the Thue number of $G$ and denote it by $\pi(G)$.
Thue \cite{1} proved that $\pi(P_n)=3$ for a path on any number $n$ of vertices; in other words, there is a sequence of any length composed of the three symbols $a$, $b$ and $c$ in which no two adjacent blocks are identical. We will use the first few terms of this sequence in the later parts of the paper.
A block of symbols is called a palindrome if it is equal to its reflection. For instance, $aa$, $aba$ and $abcba$ are palindromes. It is easy to see that a non-repetitive sequence composed of three symbols with length greater than 5 must contain a palindrome. In the sequence above, we can insert a fourth symbol $d$ between consecutive blocks of length 2.
It is easy to check that this construction yields non-repetitive and palindrome-free sequences of arbitrary length.
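This construction is easy to experiment with. The following Python sketch (ours; the function names are not from the paper) generates a prefix of a square-free ternary word by iterating a morphism of Thue, inserts the fourth symbol $d$ between consecutive blocks of length 2, and provides checks for repeated adjacent blocks and for palindromes.

```python
def squarefree_prefix(n):
    """Prefix of a non-repetitive (square-free) word over {a,b,c},
    obtained by iterating the morphism a->abc, b->ac, c->b from 'a'."""
    morph = {"a": "abc", "b": "ac", "c": "b"}
    w = "a"
    while len(w) < n:
        w = "".join(morph[s] for s in w)
    return w[:n]

def insert_d(w):
    """Insert the fourth symbol d between consecutive blocks of length 2."""
    return "d".join(w[i:i + 2] for i in range(0, len(w), 2))

def has_square(w):
    """Does w contain two identical adjacent blocks?"""
    n = len(w)
    return any(w[i:i + l] == w[i + l:i + 2 * l]
               for i in range(n) for l in range(1, (n - i) // 2 + 1))

def has_palindrome(w, min_len):
    """Does w contain a palindromic factor of length at least min_len?"""
    n = len(w)
    return any(w[i:j] == w[i:j][::-1]
               for i in range(n) for j in range(i + min_len, n + 1))
```

On prefixes of a few hundred symbols this confirms the two claims in the text: the ternary word is square-free yet contains palindromes, while the word with $d$ inserted is both square-free and palindrome-free.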
Now let's review the definitions of Cartesian product, tensor product, and strong product. Let $G$ and $H$ be graphs. The Cartesian product of $G$ and $H$, denoted by $G\square H$, is the graph with vertex set $V(G)\times V(H)$, where $(g,h)$ and $(g',h')$ are adjacent if $g=g',hh'\in E(H)$ or $h=h',gg'\in E(G)$. The tensor product of $G$ and $H$, denoted by $G\times H$, is the graph with vertex set $V(G)\times V(H)$, where $(g,h)$ and $(g',h')$ are adjacent if $gg'\in E(G)$ and $hh'\in E(H)$. The strong product of $G$ and $H$, denoted by $G\boxtimes H$, is the union of $G\square H$ and $G\times H$.
Now let us discuss the non-repetitive colorings of the grid $P_n\square P_n$ for any fixed $n$. Using a 4-symbol sequence, we can give the product construction \cite{2} in Figure 1, which shows that $\pi(P_n\square P_n)\le 16.$
It should be mentioned that if the four horizontal or vertical symbols are replaced by three, the resulting coloring may no longer be non-repetitive due to the appearance of palindromes. In colorings obtained by products, repetitive paths such as the red path in Figure 2 are hard to avoid. This topic will be discussed in depth in the last section.
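The product colouring itself can be tested mechanically. The sketch below (our own illustration, not taken from \cite{2}) colours the vertex $(i,j)$ of an $n\times n$ grid by the pair of symbols at positions $i$ and $j$ of a 4-symbol non-repetitive palindrome-free sequence, and then searches, by depth-first enumeration, for a repetitively coloured simple path on at most a given number of vertices.

```python
def four_symbol_sequence(n):
    """A non-repetitive palindrome-free word over {a,b,c,d}: iterate the
    square-free morphism a->abc, b->ac, c->b, then insert d between
    consecutive blocks of length 2, as described in the text."""
    morph = {"a": "abc", "b": "ac", "c": "b"}
    w = "a"
    while len(w) < n:
        w = "".join(morph[s] for s in w)
    return "d".join(w[i:i + 2] for i in range(0, n, 2))[:n]

def repetitive(seq):
    """Is the first half of the colour sequence equal to the second half?"""
    k = len(seq) // 2
    return len(seq) % 2 == 0 and seq[:k] == seq[k:]

def grid_is_nonrepetitive(n, max_len):
    """Product-colour the n x n grid and search every simple path on at
    most max_len vertices for a repetitively coloured one."""
    h = four_symbol_sequence(n)
    color = {(i, j): (h[i], h[j]) for i in range(n) for j in range(n)}

    def dfs(path, seq):
        if repetitive(seq):
            return True                     # found a repetitive path
        if len(path) == max_len:
            return False
        i, j = path[-1]
        return any(dfs(path + [q], seq + [color[q]])
                   for q in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                   if q in color and q not in path)

    return not any(dfs([p], [color[p]]) for p in color)
```

For instance, this check finds no repetitively coloured simple path on up to 8 vertices in the product colouring of the $6\times 6$ grid.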
This paper gives another construction, shows that $\pi(P_n\square P_n)\le12$, and extends this construction to higher-dimensional situations.
\section{Two-dimensional case}
There is an important observation below.
We give the following coloring Figure 3 for $P_n\square P_n$: if the distance from a vertex to the upper left corner is $i$, let its color be the $(i+1)$-th bit in the sequence (II). In such coloring, the color sequence of any path in the grid will correspond to the color sequence of a walk in (II).
A lazy walk in a graph $G$ is a walk in the pseudograph obtained from $G$ by adding a loop at each vertex.
\begin{theorem}
If $W=\{v_1,\cdots,v_k,v_{k+1},\cdots, v_{2k}\}$ is a repetitive lazy walk in (II), which means that $v_i$ and $v_{k+i}$ have the same color for all $1\le i\le k$, then $v_i=v_{k+i}$ for all $1\le i\le k$. In other words, any repetitive lazy walk is composed of two identical lazy walks.
\end{theorem}
\begin{proof}
For $1\le i\le k-1$, if $v_i$ and $v_{i+1}$ are in the same location, then $v_{k+i}$ and $v_{k+i+1}$ are also in the same location (perhaps a different one). So we may assume that $W$ is a walk without loss of generality. Walks and lazy walks are discussed in detail in \cite{3}.
We discuss the shape of the walk in two cases.
If $v_1$ to $v_2$ and $v_{k+1}$ to $v_{k+2}$ are in the same direction, because there is no palindrome in the sequence (II), $v_2$ to $v_3$ and $v_{k+2}$ to $v_{k+3}$ are in the same direction. Continuing this process, we can see that the first half and the second half of $W$ are equal in the sense of translation. If $v_1$ and $v_{k+1}$ are in different positions in the sequence (II), then it will conflict with the property of no repetitive block in (II).
If $v_1$ to $v_2$ and $v_{k+1}$ to $v_{k+2}$ are in the opposite direction, in the same way, we can show that the first half and the second half of $W$ are equal in the sense of symmetry. Whether $v_1$ and $v_{k+1}$ are in the same position or not, it will conflict with the property of no palindrome in (II).
To sum up, there is only one possibility for the composition of $W$, namely that $W$ is composed of two identical walks.
\end{proof}
\begin{theorem}
$P_n\square P_n$ is non-repetitively 12-colorable for all $n$.
\end{theorem}
\begin{proof}
Construct the following coloring of grid in Figure 4.
We divide the vertices on the grid into two types according to the parity of the distance to the upper left corner. Each type of vertex is divided into the disjoint union of vertices on parallel lines, and the vertices on each line are colored with the same color according to the order in (II), where $x, y, z, w$ are four new colors with the same status as $a, b, c, d$. It should be noted that in this construction, two lines of different types are not parallel.
In Figure 4, there are many repetitive paths, such as $a\to z\to a\to z$ in the red box. Our goal is to show that we can destroy all the repetitive paths in Figure 4 by destroying repetitive paths of length 4 like the one above.
Let $P=\{v_1,\cdots,v_k,v_{k+1},\cdots, v_{2k}\}$ be a repetitive path in Figure 4; then $k$ must be even. Without loss of generality, assume the color of $v_1$ is $a$ and the color of $v_2$ is $z$. Let $P_1=\{v_1,v_3,\cdots,v_{k-1},v_{k+1},\cdots, v_{2k-1}\}$ and $P_2=\{v_2,v_4,\cdots,v_{k},v_{k+2},\cdots, v_{2k}\}$; then $P_1$ and $P_2$ are repetitive lazy walks in their respective 4-color sequences. Applying \textbf{Theorem 1} to $P_1$ and $P_2$, $v_{k+1}$ must be on line $L_1$ and $v_{k+2}$ must be on line $L_2$ in Figure 5.
The locations of $v_{k+1}$ and $v_{k+2}$ are uniquely determined, that is, $v_{k+1}$ is exactly the vertex on the left of $v_2$, and $v_{k+2}$ is exactly the vertex on the left of $v_1$.
We divide $x, y, z, w$ into $x_1, x_2, y_1, y_2, z_1, z_2, w_1, w_2$ by parity in Figure 6, then all repetitive paths are destroyed. As a result, the grid is non-repetitively 12-colorable.
\end{proof}
\section{High-dimensional case}
Let $P$ be an infinite path. Denote the Cartesian product, strong product and tensor product of $n$ copies of $P$ by $(\square P)^n$, $(\boxtimes P)^n$ and $(\times P)^n$, respectively.
By constructing non-repetitive palindrome-free sequences on each coordinate and multiplying them (Figure 1 shows the two-dimensional case), we know that $\pi((\boxtimes P)^n)\le 4^n$ \cite{2}. Now we consider $(\times P)^n$ first.
For $n=2$, $P\times P$ has 2 components, each of which is isomorphic to $P\square P$, so $\pi(P\times P)=\pi(P\square P)\le 12$. For $n\ge3$, $(\times P)^n$ has $2^{n-1}$ components, each of which is different from $(\square P)^n$. Notice that $(\times P)^n$ is $2^n$-regular but $(\square P)^n$ is $2n$-regular.
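These degree counts are easy to verify on a finite window of each product. The following sketch (ours) computes the degree of the central vertex of a product of $n$ paths directly from the three adjacency rules.

```python
from itertools import product

def degree_at_center(n, size, adjacent):
    """Degree of the central vertex in a product of n copies of P_size,
    where `adjacent` encodes the product's adjacency rule."""
    c = (size // 2,) * n
    return sum(adjacent(c, v)
               for v in product(range(size), repeat=n) if v != c)

def cartesian(u, v):
    """Cartesian product: coordinates differ by 1 in exactly one place."""
    d = [abs(a - b) for a, b in zip(u, v)]
    return d.count(1) == 1 and d.count(0) == len(d) - 1

def tensor(u, v):
    """Tensor product: coordinates differ by 1 in every place."""
    return all(abs(a - b) == 1 for a, b in zip(u, v))

def strong(u, v):
    """Strong product: the union of the two adjacency rules above."""
    return u != v and all(abs(a - b) <= 1 for a, b in zip(u, v))
```

An interior vertex of $(\times P)^n$ has degree $2^n$, an interior vertex of $(\square P)^n$ has degree $2n$, and for $n=3$ the strong product gives $3^3-1=26$.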
\begin{theorem}
In $(\boxtimes P)^n$ with the non-repetitive $4^n$-coloring above, any repetitive lazy walk is composed of two identical lazy walks.
\end{theorem}
\begin{proof}
Let $W=\{v_1,\cdots,v_k,v_{k+1},\cdots, v_{2k}\}$ be a repetitive lazy walk. If $v_1$ and $v_{k+1}$ are in the same location, then since the coloring is palindrome-free, $v_i$ and $v_{k+i}$ are in the same location for all $1\le i\le k$. If $v_1$ and $v_{k+1}$ are not in the same location, then one of the coordinates of these two vertices must be different. Looking at the color on this coordinate, the contradiction is given by \textbf{Theorem 1}.
\end{proof}
\begin{theorem}
$(\times P)^n$ is non-repetitively $(4^{n-1}+4\cdot2^{n-1})$-colorable.
\end{theorem}
\begin{proof}
Consider one of the components of $(\times P)^n$. From the geometric viewpoint in $n$-dimensional Euclidean space, the graph is divided into a series of parallel $(n-1)$-dimensional hyperplanes that intersect the graph. Let the equations of these hyperplanes be $x_n=\lambda$, $\lambda\in\mathbb{Z}$.
For odd $\lambda$, let the vertices on each hyperplane share the same color, taken consecutively from (II). For example, all vertices satisfying $x_n=1$ get the color $a$, all vertices satisfying $x_n=3$ get the color $b$, all vertices satisfying $x_n=5$ get the color $d$, all vertices satisfying $x_n=7$ get the color $c$, all vertices satisfying $x_n=9$ get the color $a$, and so on.
For even $\lambda$, the vertices on each hyperplane share the same set of $4^{n-1}$ colors; that is, all vertices on the line $x_1=i_1,\cdots,x_{n-1}=i_{n-1}$ share the same color. Figure 7 is a schematic diagram for $n=3$, whose dashed lines are not edges of the graph.
Let $P=\{v_1,\cdots,v_k,v_{k+1},\cdots,v_{2k}\}$ be a repetitive path in $(\times P)^n$ with the coloring above, and let $P_1$ and $P_2$ be the walks formed by the odd-indexed and even-indexed vertices of $P$, respectively. Without loss of generality, we assume that $v_1$ is on a hyperplane with odd coefficient $\lambda$, so $P_1$ is a repetitive lazy walk in a path with a 4-colored non-repetitive palindrome-free coloring, and $P_2$ is a repetitive lazy walk in $(\boxtimes P)^{n-1}$ with the $4^{n-1}$-coloring of \textbf{Theorem 3}. As a result, $v_1$ and $v_{k+1}$ are in the same hyperplane (they have the same $n$-th coordinate), and $v_2$ and $v_{k+2}$ are on the same line (they have the same $1,\cdots,(n-1)$-th coordinates).
On the hyperplanes with odd coefficient $\lambda$, by dividing each color into $2^{n-1}$ colors, all repetitive paths can be destroyed (Figure 8). This shows that $\pi((\times P)^n)\le 4^{n-1}+4\cdot2^{n-1}$.
\end{proof}
\begin{theorem}
$(\square P)^n$ is non-repetitively $(4^{n-1}+4n)$-colorable.
\end{theorem}
\begin{proof}
Similar to the proof of \textbf{Theorem 4}, the only difference is that this time we take the hyperplane sequence given by the equations $\sum\limits_{j=1}^n x_j=\lambda$, $\lambda\in\mathbb{Z}$.
Let $P=\{v_1,\cdots,v_k,v_{k+1},\cdots,v_{2k}\}$ be a path in $(\square P)^n$ which starts at a hyperplane with odd coefficient $\lambda$. Similarly, let $P_1$ and $P_2$ be the walks formed by the odd-indexed and even-indexed vertices of $P$.
For odd $\lambda$, let the vertices on each hyperplane be the same color, which is taken from (II) consecutively.
For even $\lambda$, we want all the hyperplanes to share one coloring. In this situation, $P_2$ is a lazy walk in the graph obtained by filling the entire $(n-1)$-dimensional Euclidean space with infinitely many regular $(n-1)$-dimensional simplexes. Since it is a subgraph of $(\boxtimes P)^{n-1}$, we can give it the $4^{n-1}$-coloring in order to use \textbf{Theorem 3}. Figure 9 is a schematic diagram for $n=3$.
If $u$ is a vertex of $(\square P)^n$ in the hyperplane $\sum\limits_{j=1}^n x_j=\lambda_0$, then the neighbours of $u$ in the hyperplane $\sum\limits_{j=1}^n x_j=\lambda_0+1$ are exactly the $n$ vertices of an $(n-1)$-dimensional simplex.
Note that the chromatic number of the graph obtained by filling the entire $(n-1)$-dimensional Euclidean space with infinitely many regular $(n-1)$-dimensional simplexes is exactly $n$, so we can destroy all repetitive paths by dividing each color into $n$ colors on the hyperplanes with odd coefficient $\lambda$ (Figure 10). This shows that $\pi((\square P)^n)\le 4^{n-1}+4n$.
\end{proof}
Unfortunately, \textbf{Theorem 5} only gives a good bound when $n$ is small (for example, $n=3$). For large $n$, we know $\pi((\square P)^n)=O(n^2)$ by the Lov\'asz local lemma \cite{4}\cite{5}. Bounding the non-repetitive chromatic number of the graph obtained by filling $n$-dimensional space with $n$-dimensional simplexes may be a way to improve \textbf{Theorem 5}.
\section{From the perspective of stroll-nonrepetitive coloring}
A walk $\{v_1,\cdots,v_k,v_{k+1},\cdots,v_{2k}\}$ in a graph is boring if $v_i=v_{k+i}$ for all $i$, and a coloring is walk-nonrepetitive if every repetitively colored walk is boring. For a graph $G$, the walk-nonrepetitive chromatic number $\sigma(G)$ is the minimum number of colors in a walk-nonrepetitive coloring of $G$. \textbf{Theorem 3} means that $\sigma((\boxtimes P)^n)\le 4^n$.
A stroll in a graph $G$ is a walk $\{v_1,\cdots,v_k,v_{k+1},\cdots,v_{2k}\}$ such that $v_i\not=v_{k+i}$ for all $i$. A coloring of $G$ is stroll-nonrepetitive if no stroll is repetitively colored.
For a graph $G$, the stroll-nonrepetitive chromatic number $\rho(G)$ is the minimum number
of colors in a stroll-nonrepetitive coloring of $G$.
Obviously we have $\pi(G)\le\rho(G)\le\sigma(G)$ for any graph $G$.
By \textbf{Lemma 2.16} of \cite{3}, for any graphs $G$, $H$, we have $\pi(G\boxtimes H)\le\rho(G\boxtimes H)\le\rho(G)\cdot\sigma(H)$. Let $G$ and $H$ be paths; then $\pi(P\square P)\le\pi(P\boxtimes P)\le 4\rho(P)$. If $\rho(P)\le3$ held, then $\pi(P\square P)\le12$ would be immediate and our work would be unnecessary.
However, \cite{3} asked a question in the preprint on arXiv.org before publication: whether $\rho(P)\le3$ for every path $P$. Pascal Ochem first disproved this conjecture, showing that a long path is not stroll-nonrepetitively 3-colorable. We give a simple proof of this result below.
\begin{theorem}
$P_n$ is not stroll-nonrepetitively 3-colorable for large $n$.
\end{theorem}
\begin{proof}
We know that a non-repetitive sequence composed of three symbols $a,b,c$ always contains palindromes when the length is large enough. Through simple observation, we know that if there is a 3-palindrome such as $aba$ in the sequence, it must be the middle three members of a 5-palindrome $cabac$. Let's consider the following question: What is the possible structure of the intersection between two 5-palindromes?
First of all, note that if we know the two symbols after a certain 5-palindrome, then we can determine the position of the next 5-palindrome. For example, the two symbols after $cabac$ can be $ba$, $bc$ or $ab$; they determine the three following ways in which the next 5-palindrome appears.
Form\ding{172}: $cabacba$, the next two symbols must be $bc$. So it is $cabacbabc$, whose two adjacent 5-palindromes share only one symbol.
Form\ding{173}: $cabacbc$, the next symbol must be $a$. So it is $cabacbca$, whose two adjacent 5-palindromes share two symbols.
Form\ding{174}: $cabacab$, whose two adjacent 5-palindromes share three symbols.
Note that Form\ding{174} contains a repetitive stroll $c\to a\to b\to a\to c\to a\to b\to a$. Thus, if we have a stroll-nonrepetitive coloring for path, the Form\ding{174} can not appear.
Now we prove that Form\ding{173} cannot appear consecutively. If it did, we would get the symbol sequence $cabacbcabac$. If the next symbol of the sequence is $b$, there will be a repetitive path. If the next symbol of the sequence is $a$, there will be a Form\ding{174} appearing later.
Form\ding{172} cannot appear 3 consecutive times; if it did, we would get the symbol sequence $cabacbabcabacbabc$, which contains a repetitive path.
For these reasons, in a sufficiently long symbol sequence there will always be a structure in which Form\ding{172} is followed by Form\ding{173}, which is followed by Form\ding{172}. Then we get the symbol sequence $cabacbabcacbacab$, which contains the repetitive stroll $a\to c\to b\to a\to b\to c\to a\to c\to b\to a\to b\to c$. So $\rho(P_n)=4$ for large $n$.
\end{proof}
$\rho(P)=4$ means that it is impossible to reduce $\pi(P\boxtimes P)$ below 16 by using the product coloring method. From another perspective, when studying non-repetitive chromatic numbers of graphs, the two structures of Cartesian product and strong product may require completely different kinds of coloring methods.
\end{document} |
\begin{document}
\setcounter{chapter}{12}
\chapter{Graph colouring algorithms}
\centerline{\sc Thore Husfeldt}
\noindent
\begin{minipage}{7cm}
\begin{tabbing}
1. \= Introduction\\
2. \> Greedy colouring\\
3. \> Local augmentation\\
4. \> Recursion\\
5. \> Subgraph expansion\\
6. \> Vector colouring\\
7. \> Reductions\\
References
\end{tabbing}
\end{minipage}
\begin{quote}\small \it
This chapter presents an introduction to graph colouring algorithms.\footnote{
Appears as Thore Husfeldt, \emph{Graph colouring algorithms},
Chapter XIII of \emph{Topics in Chromatic Graph Theory}, L. W. Beineke and Robin J. Wilson (eds.),
Encyclopedia of Mathematics and its Applications,
Cambridge University Press, ISBN 978-1-107-03350-4, 2015, pp. 277--303.
}
The focus is on vertex-colouring algorithms that work for general
classes of graphs with worst-case performance guarantees in a
sequential model of computation.
The presentation aims to demonstrate the breadth of available
techniques and is organized by algorithmic paradigm.
\end{quote}
\section{Introduction}
A straightforward algorithm for finding a vertex-colouring of a graph
is to search systematically among all mappings from the set of
vertices to the set of colours, a technique often called
\emph{exhaustive} or \emph{brute force}:
\begin{algor}{X}{Exhaustive search}{Given an integer $q\geq 1$ and a
graph $G$ with vertex set $V$, this algorithm finds a vertex-colouring using $q$ colours
if one exists.}
\item[X1]
[Main loop] For each mapping $f\colon V\rightarrow
\{1,2,\ldots,q\}$, do Step X2.
\item[X2] [Check $f$] If every edge $vw$ satisfies $f(v)\neq
f(w)$, terminate with $f$ as the result. \qed
\end{algor}
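As a concrete illustration, Algorithm X might be transcribed into Python as follows (a sketch of ours; the graph is given by its vertex and edge lists).

```python
from itertools import product

def exhaustive_colouring(vertices, edges, q):
    """Algorithm X: try every mapping f from the vertices to {1, ..., q}
    and return the first proper colouring found, or None if none exists."""
    for values in product(range(1, q + 1), repeat=len(vertices)):
        f = dict(zip(vertices, values))           # Step X1: next mapping
        if all(f[v] != f[w] for v, w in edges):   # Step X2: check f
            return f
    return None
```

On a triangle, the call with $q=2$ exhausts all $2^3$ mappings and returns None, while $q=3$ succeeds; the $O(q^n(n+m))$ running time discussed below is visible in the two nested iterations.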
This algorithm has few redeeming qualities, other than its being
correct.
We consider it here because it serves as an opportunity to make
explicit the framework in which we present more interesting
algorithms.
\subsection*{Model of computation}
If $G$ has $n$ vertices and $m$ edges, then the number of operations
used by Algorithm X can be asymptotically bounded by $O(q^n(n+m))$,
which we call the \emph{running time} of the algorithm.
To make such a claim, we tacitly assume a computational model that
includes primitive operations, such as iterating over all mappings
from one finite set $A$ to another finite set $B$ in time
$O(|B|^{|A|})$ (Step X1), or iterating over all edges in time $O(n+m)$
(Step X2).
For instance, we assume that the input graph is represented by an
array of sequences indexed by vertices; the sequence stored at vertex
$v$ contains the neighbouring vertices $N(v)$; see Fig.~\ref{fig: cap}.
\begin{figure}
\caption{A graph represented by an array of sequences of neighbouring vertices.}
\label{fig: cap}
\end{figure}
This representation allows us to iterate over the neighbours of a
vertex in time $O(\deg v)$.
(An alternative representation, such as an incidence or adjacency
matrix, would not allow this.)
Note that detecting whether two graphs are isomorphic is \emph{not} a
primitive operation.
The convention of expressing computational resources using asymptotic
notation is consistent with our somewhat cavalier attitude towards the
details of our computational model.
Our assumptions are consistent with the behaviour of a modern computer
in a high-level programming language.
Nevertheless, we will explain our algorithms in plain English.
\subsection*{Worst-case asymptotic analysis}
Note that we could have fixed the colouring of a specific vertex $v$
as $f(v)=0$, reducing Algorithm X's running time to
$O(q^{n-1}(n+m))$.
A moment's thought shows that this reasoning can then be extended to
cliques of size $r\geq 1$: search through all $\binom{n}{r}$ induced
subgraphs until a clique of size $r$ is found, arbitrarily map these
vertices to $\{1,2,\ldots,r\}$ and then let Algorithm X colour the
remaining vertices.
This reduces the running time to $O(q^{n-\omega(G)} n^{\omega(G)}(n+
m))$, where $\omega(G)$ is the clique number.
This may be quite useful for some graphs.
Another observation is that in the best case, the running time is
$O(n+m)$.
However, we will normally not pursue this kind of argument.
Instead, we are maximally pessimistic about the input and the
algorithm's underspecified choices.
In other words, we understand running times as worst-case performance
guarantees, rather than `typical' running times or average running
times over some distribution.
Sometimes we may even say that Algorithm X requires time
$q^n\operatorname{poly}(n)$, where we leave the polynomial factor unspecified in
order to signal the perfunctory attention we extend to these issues.
\subsection*{Overview and notation}
Straightforward variants of Algorithm X can be used to solve some
other graph colouring problems.
For instance, to find a list-colouring, we restrict the range of
values for each $f(v)$ to a given list; to find an edge-colouring,
we iterate over all mappings $f\colon
E\rightarrow\{1,2,\ldots,q\}$.
Another modification is to count the number of colourings instead of
finding just one.
These extensions provide baseline algorithms for list-colouring,
edge-colouring, the chromatic polynomial, the chromatic index, and so
forth.
However, for purposes of exposition, we present algorithms in their
\emph{least} general form, emphasizing the algorithmic idea rather
than its (sometimes quite pedestrian) generalizations.
The algorithms are organized by algorithmic technique rather than
problem type, graph class, optimality criterion, or computational
complexity.
These sections are largely independent and can be read in any order,
except perhaps for Algorithm G in Section~\ref{sec: greedy}.
The final section takes a step back and relates
the various colouring problems to each other.
\section{Greedy colouring}
\label{sec: greedy}
The following algorithm, sometimes called the \emph{greedy} or
\emph{sequential} algorithm, considers the vertices one by one and uses the
first available colour.
\begin{algor}{G}{Greedy vertex-colouring}{ Given a graph $G$ with
maximum degree $\Delta$ and an ordering $v_1,v_2,\ldots,v_n$ of
its vertices, this algorithm finds a vertex-colouring with
$\max_{i} |\{\, j< i \colon v_jv_i \in E\,\}|+1 \leq \Delta + 1$
colours.}
\item[G1] [Initialize] Set $i=0$.
\item[G2] [Next vertex]
Increment $i$.
If $i=n+1$, terminate with $f$ as the result.
\item[G3]
[Find the colours on $N(v_i)$] Compute the set
$C=\{\,f(v_j)\colon j<i \text{ and } v_jv_i\in E\,\}$ of colours already assigned to the
neighbours of $v_i$.
\item[G4] [Assign the smallest available colour to $v_i$]
For increasing $c=1,2,\dots$, check whether $c\in C$. If not, set
$f(v_i)= c$ and return to Step G2. \qed
\end{algor}
For the number of colours, it is clear that in Step G4, the value of
$c$ is at most $|C|+1$, and $|C|$ is bounded by the number of neighbours of $v_i$
among $v_1,v_2, \ldots,v_{i-1}$.
In particular, Algorithm G establishes that $\chi(G)\leq \Delta(G)+1$.
For the running time, note that both Steps G3 and G4 take at most
$O(1+\deg v_i)$ operations.
Summing over all $i$, the total time spent in Steps G3 and G4 is
asymptotically bounded by $n+(\deg v_1+\deg v_2+\cdots+\deg v_n) = n+2m$.
Thus, Algorithm G takes time $O(n+m)$.
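As a concrete illustration, here is a minimal Python sketch of Algorithm G, assuming the graph is given as a dictionary \texttt{adj} mapping each vertex to a list of its neighbours (the names are this sketch's own, not part of the chapter):

```python
def greedy_colouring(adj, order):
    """Algorithm G: colour the vertices in the given order, assigning to
    each vertex the smallest colour not used on its coloured neighbours.
    `adj` maps each vertex to a list of its neighbours."""
    f = {}
    for v in order:
        used = {f[w] for w in adj[v] if w in f}  # colours on earlier neighbours
        c = 1
        while c in used:                         # smallest available colour
            c += 1
        f[v] = c
    return f
```

On the path with vertices $1,2,3$ taken in that order, this produces the 2-colouring $f(1)=f(3)=1$, $f(2)=2$; in general the number of colours never exceeds $\Delta+1$.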
\paragraph{Optimal ordering}
The size of the colouring computed by Algorithm G depends heavily on
the vertex ordering.
Its worst-case behaviour is poor.
For instance, it spends $\frac{1}{2}n$ colours on the 2-colourable \emph{crown
graph} shown in Fig.~\ref{fig: crown graph}.
\begin{figure}
\caption{}\label{fig: crown graph}
\end{figure}
On the other hand, for every graph there exists an ordering for which
Algorithm G uses an optimal number of colours; indeed, any ordering
that satisfies $f(v_i)\leq f(v_{i+1})$ for an optimal colouring
$f$ has this property.
Since there are $n!$ different orderings, this observation is
algorithmically quite useless.
An ordering is \emph{perfect} for a graph if, for every induced
subgraph, Algorithm G results in an optimal colouring; triangulated
graphs and comparability graphs always admit such an ordering, as
shown by Chvátal~\cite{chvatal}.
\subsection*{Randomness}
Algorithm G performs quite well on random graphs, whatever the vertex
ordering.
For almost all $n$-vertex graphs, it uses $n/(\log n- 3\log\log n)$
colours, which is roughly twice the optimum value (see \cite{GMcD}).
This suggests the following randomized algorithm. For a graph $G$,
choose a vertex ordering at random and then execute Algorithm G.
For many problems, it is a sound algorithmic design strategy
to trade good average-case behaviour for good (expected) worst-case
behaviour in this way.
However, for Algorithm G the result is quite poor:
for every
$\epsilon>0$ there exist graphs with chromatic number $n^\epsilon$
for which the randomized algorithm uses $\Omega(n/\log n)$ colours
with high probability, as shown by Ku\v{c}era \cite{Kucera}.
\subsection*{Other orderings}
In the \emph{largest-first} vertex-degree ordering introduced by Welsh and
Powell \cite{WP}, the vertices are ordered such that $\deg v_1 \geq
\deg v_2 \geq\cdots\geq \deg v_n$.
This establishes the bound $\chi(G) \leq 1+\max_i\min \{\deg
v_i,i-1\}$, which is sometimes better than $1+\Delta$, such as in
Fig.~\ref{fig: C5vee1}.
\begin{figure}
\caption{}\label{fig: C5vee1}
\end{figure}
Closely related in spirit is Matula's \emph{smallest-last} ordering
\cite{M}, given as follows: choose as the last vertex $v_n$ a vertex
of minimum degree in $G$, and proceed recursively with $G - v_n$, see Fig.~\ref{fig: orderings}.
With this ordering, the size of the resulting colouring is bounded
by the Szekeres--Wilf bound \cite{SW}, \[\chi(G)\leq
\operatorname{dgn}(G)+1\,,\] where the \emph{degeneracy}
$\operatorname{dgn}(G)$ is the maximum over all subgraphs $H$ of $G$
of the minimum degree $\delta(H)$.
This ordering optimally colours crown graphs and many other classes of
graphs, and uses six colours on any planar graph.
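The smallest-last ordering can be sketched in Python as follows (an illustration, with \texttt{adj} again an adjacency-list dictionary; the degeneracy falls out as a by-product):

```python
def smallest_last_order(adj):
    """Matula's smallest-last ordering: repeatedly remove a vertex of
    minimum degree in the remaining graph; reversing the removal order
    gives the ordering.  Also returns the degeneracy dgn(G)."""
    deg = {v: len(adj[v]) for v in adj}
    order, dgn = [], 0
    while deg:
        v = min(deg, key=deg.get)      # vertex of minimum remaining degree
        dgn = max(dgn, deg[v])         # Szekeres--Wilf: track the maximum
        order.append(v)
        del deg[v]
        for w in adj[v]:
            if w in deg:
                deg[w] -= 1
    order.reverse()
    return order, dgn
```

Running Algorithm G on the returned ordering uses at most $\operatorname{dgn}(G)+1$ colours.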
\begin{figure}
\caption{}\label{fig: orderings}
\end{figure}
Other orderings are dynamic in the sense that the ordering is
determined during the execution of the algorithm, rather than in advance.
For example, Brélaz \cite{Brel} suggests choosing the next vertex from among those adjacent
to the largest number of different colours.
Many other orderings have been investigated (see the surveys of Kosowski and Manuszewski~\cite{KM} and Maffray~\cite{maffray}).
Many of them perform quite well on instances that one may encounter
`in practice', but attempts at formalizing what this means are
quixotic.
\subsection*{2-colourable graphs}
Of particular interest are those vertex orderings in which every
vertex $v_i$ is adjacent to some vertex $v_j$ with $j<i$.
Such orderings can be computed in time $O(m+n)$ using basic graph-traversal algorithms.
This algorithm is sufficiently important to be made explicit.
\begin{algor}{B}{Bipartition}{
Given a connected graph $G$, this algorithm finds a 2-colouring if one exists.
Otherwise, it outputs an odd cycle.}
\item[B1]
[Initialize]
Let $f(v_1)=1$ and let $Q$ (the `queue') be an empty sequence.
For each neighbour $w$ of $v_1$, set $p(w)=v_1$ (the `parent' of
$w$) and add $w$ to $Q$.
\item[B2]
[Next vertex] If $Q$ is empty, go to Step B3.
Otherwise, remove the first vertex $v$ from $Q$ and
set $f(v)$ to the colour not already assigned to $p(v)$.
For each neighbour $w$ of $v$, if $w$ is not yet coloured and
does not belong to $Q$, then set $p(w)=v$ and add $w$ to the end of $Q$.
Repeat Step B2.
\item[B3]
[Verify 2-colouring] Iterate over all edges to verify that $f(v)\neq
f(w)$ for every edge $vw$.
If so, terminate with $f$ as the result.
\item[B4]
[Construct odd cycle]
Let $vw$ be an edge with $f(v)=f(w)$ and
let $u$ be the nearest common ancestor of $v$ and $w$ in the
tree defined by $p$.
Output the path $w,p(w),p(p(w)),\ldots,u$, followed by the reversal of the
path $v,p(v),p(p(v)),\allowbreak\ldots,\allowbreak u$, followed by the edge $vw$.
\qed
\end{algor}
Fig.~\ref{fig: Algorithm B} shows an execution of Algorithm B finding
a 2-colouring.
\begin{figure}
\caption{}\label{fig: Algorithm B}
\end{figure}
Algorithm B is an example of a `certifying' algorithm: an algorithm
that produces a witness to certify its correctness, in this case an
odd cycle if the graph is not 2-colourable.
To see that the cycle constructed in Step~B4 has odd length, note that on the
two paths $w,p(w),p(p(w)),\ldots, u$ and $v,p(v),p(p(v)),\ldots,u$, each vertex
has a different colour from its predecessor.
Since the respective endpoints of both paths have the same colour,
they must contain the same number of edges modulo 2.
In particular, their total length is even.
With the additional edge $vw$, the length of the resulting cycle is odd.
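The following Python sketch illustrates Algorithm B, under the assumption that the graph \texttt{adj} is connected; it returns either a 2-colouring or an odd cycle as its certificate:

```python
from collections import deque

def two_colour(adj, root):
    """Algorithm B: breadth-first 2-colouring of a connected graph.
    Returns ('colouring', f) if the graph is bipartite, otherwise
    ('odd cycle', C) where C is a list of vertices forming an odd cycle."""
    f, parent = {root: 1}, {root: None}
    Q = deque([root])
    while Q:                                  # Steps B1--B2
        v = Q.popleft()
        for w in adj[v]:
            if w not in f:
                f[w] = 3 - f[v]               # the colour not used on the parent
                parent[w] = v
                Q.append(w)

    def to_root(x):                           # path x, p(x), p(p(x)), ..., root
        p = []
        while x is not None:
            p.append(x)
            x = parent[x]
        return p

    for v in adj:                             # Step B3: verify
        for w in adj[v]:
            if f[v] == f[w]:                  # Step B4: build an odd cycle
                pv, pw = to_root(v)[::-1], to_root(w)[::-1]   # root ... v / w
                i = 0                          # longest common prefix = path to u
                while i < min(len(pv), len(pw)) and pv[i] == pw[i]:
                    i += 1
                # cycle: u down to v, the edge vw, then w back up towards u
                return 'odd cycle', pv[i - 1:] + pw[:i - 1:-1]
    return 'colouring', f
```

Since the two tree paths have the same depth parity, the returned vertex list always has odd length, as argued above.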
The order in which the vertices are considered by Algorithm B depends on
the first-in first-out behaviour of the queue $Q$.
The resulting ordering is called \emph{breadth-first}.
An important variant uses a last-in first-out `stack' instead of
a queue; the resulting ordering is called
\emph{depth-first}.
Fig.~\ref{fig: dfs} shows the resulting behaviour
on the graph from Fig.~\ref{fig: Algorithm B}.
\begin{figure}
\caption{}\label{fig: dfs}
\end{figure}
Algorithm B works also for the list-colouring problem, provided that
for each vertex $v$, the available list of colours $L(v)$ has size at
most 2.
This observation leads to a simple, randomized, exponential-time
algorithm for 3-colouring due to Beigel and Eppstein \cite{BE}.
\begin{algor}{P}{Palette restriction}{Given a graph, this algorithm finds a 3-colouring if one exists.}
\item[P1] [Forbid one colour at each vertex] For each vertex $v$, select a list $L(v)$ of colours
available at $v$ uniformly and independently at random from the three
lists $\{1,2\}$, $\{2,3\}$, and $\{1,3\}$.
\item[P2] [Attempt 2-colouring] Try to solve the list-colouring instance given by $L$ using
Algorithm B, setting $f(v_1)=\min L(v_1)$ in Step B1.
If successful, terminate with the resulting colouring.
Otherwise, return to Step P1. \qed
\end{algor}
To analyse the running time, consider a 3-colouring $f$.
For each vertex $v$, colour $f(v)$ belongs to $L(v)$ with probability
$\frac{2}{3}$.
Thus, with probability at least $(\frac{2}{3})^n$, the list colouring instance
constructed in step P1 has a solution.
It follows that the expected number of repetitions is $(\frac{3}{2})^n$, each of which
takes polynomial time.
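Algorithm P can be sketched in Python as follows. In this sketch the list-colouring step is solved by unit propagation instead of Algorithm B (an assumption of the sketch: with lists of size 2, assigning a vertex either forces a neighbour whose list contains that colour or permanently satisfies the constraint, so trying both colours for a seed vertex and propagating is exhaustive):

```python
import random

def list2_colour(adj, L):
    """Solve a list-colouring instance in which every list has size 2.
    Assigning colour c to v removes c from the list of any neighbour w
    with c in L[w], forcing w; all other constraints are satisfied."""
    f = {}
    for seed in adj:
        if seed in f:
            continue
        solved = False
        for c in L[seed]:                       # try both colours for the seed
            trial, stack, ok = {seed: c}, [seed], True
            while stack and ok:
                v = stack.pop()
                for w in adj[v]:
                    cw = trial.get(w, f.get(w))
                    if cw is not None:          # already coloured: check edge
                        ok = cw != trial[v]
                        if not ok:
                            break
                    elif trial[v] in L[w]:      # w is forced to its other colour
                        trial[w] = next(d for d in L[w] if d != trial[v])
                        stack.append(w)
            if ok:
                f.update(trial)
                solved = True
                break
        if not solved:
            return None
    return f

def three_colour(adj, tries=1000):
    """Algorithm P: restrict each vertex to a random 2-element palette
    and attempt the resulting list-colouring."""
    pairs = [(1, 2), (2, 3), (1, 3)]
    for _ in range(tries):
        L = {v: random.choice(pairs) for v in adj}   # Step P1
        f = list2_colour(adj, L)                     # Step P2
        if f is not None:
            return f
    return None
```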
\subsection*{Wigderson's algorithm}
Algorithms~B and G appear together in Wigderson's algorithm~\cite{W}:
\begin{algor}{W}{Wigderson's algorithm}
{ Given a $3$-chromatic graph $G$, this algorithm finds a vertex-colouring with
$O(\surd n)$ colours.}
\item[W1]
[Initialize] Let $c=1$.
\item[W2]
[$\Delta(G)\geq \lceil\surd n\rceil$] Consider a vertex $v$ in $G$ with
$\deg v\geq \lceil \surd n\rceil$; if no such vertex exists,
go to Step W3.
Use Algorithm B to 2-colour the neighbourhood $G[N(v)]$ with
colours $c$ and $c+1$.
Remove $N(v)$ from $G$ and increase $c$ by $\chi(G[N(v)])$.
Repeat Step W2.
\item[W3] [$\Delta(G)<\lceil\surd n\rceil$] Use Algorithm G to colour the
remaining vertices with the colours $c,c+1,\ldots, c+\lceil \surd
n\rceil$. \qed
\end{algor}
Fig.~\ref{fig: Algorithm W} shows an execution of Algorithm W finding
a 5-colouring of the 16-vertex instance from Fig.~\ref{fig:
cap}.
\begin{figure}
\caption{}\label{fig: Algorithm W}
\end{figure}
The running time is clearly bounded by $O(n+m)$.
To analyse the number of colours, we first need to verify Step W2.
Since $G$ is 3-colourable, so is the subgraph induced by
$N(v)\cup\{v\}$.
Now, if $G[N(v)]$ required 3 colours, then $G[N(v)\cup \{v\}]$
would require 4, a contradiction;
so $G[N(v)]$ is 2-colourable and therefore Step W2 is correct.
Note that Step~W2 can be run at most $O(\surd n)$ times, each using at most
two colours.
Step W3 expends another $\lceil\surd n\rceil$ colours according to
Algorithm G.
Algorithm W naturally extends to graphs with $\chi(G) >3$.
In this case, Step W2 calls Algorithm W recursively to colour
$(\chi(G)-1)$-colourable neighbourhoods.
The resulting algorithm uses $O(n^{1-1/(\chi(G)-1)})$ colours.
\section{Recursion}
Recursion is a fundamental algorithmic design technique.
The idea is to reduce a problem to one or more simpler instances of
the same problem.
\subsection*{Contraction}
The oldest recursive construction for graph colouring expresses
the chromatic polynomial $P(G,q)$ and the chromatic number $\chi(G)$ in terms of edge-contractions:
For non-adjacent vertices $v$, $w$ and integer $q=0,1,\ldots,n$,
\begin{align*}
P(G,q) & = P(G+vw,q) + P (G/vw,q)\,,\\
\chi(G)& = \min\{ \chi(G+vw) , \chi(G/vw)\} \,,
\end{align*}
see Chapter 3, Section 2.1.
These `addition--contraction' recurrences immediately imply a
recursive algorithm.
For instance,
\begin{align*}
P(\vcenter{\hbox{\begin{tikzpicture}[scale=.2]
\node (0) [v] at (0,1) {};
\node (1) [v] at (0,-1) {};
\node (2) [v] at (2,1) {};
\node (4) [v] at (2,-1) {};
\draw (0)--(1)--(2)--(0);
\draw (1)--(4)--(0);
\end{tikzpicture}}},q
)
&=
P(\vcenter{\hbox{\begin{tikzpicture}[scale=.2]
\node (0) [v] at (0,1) {};
\node (1) [v] at (0,-1) {};
\node (2) [v] at (2,1) {};
\node (4) [v] at (2,-1) {};
\draw (0)--(1)--(2)--(0);
\draw (1)--(4)--(0);
\draw (2)--(4);
\end{tikzpicture}}},q
)
+
P(\vcenter{\hbox{\begin{tikzpicture}[scale=.25]
\node (0) [v] at (0,1) {};
\node (1) [v] at (0,-1) {};
\node (2) [v] at (1,0) {};
\draw (0)--(1)--(2)--(0);
\end{tikzpicture}}},q
)\\
&= P(K_4,q)+P(K_3,q) = q(q-1)(q-2)\bigl((q-3)+1\bigr) = q(q-1)(q-2)^2 \,.
\end{align*}
Note that the graphs at the end of the recursion are complete.
For sparse graphs, it is more useful to express the same idea as a
`deletion--contraction' recurrence, which deletes and contracts edges until
the graph is empty:
\[ P(G,q) = P(G-e,q) - P (G/e,q)\qquad(e\in E)\,.\]
Many other graph
problems beside colouring can be expressed by a deletion--contraction
recurrence.
The most general graph invariant that can be defined in this fashion is the Tutte polynomial (see
\cite{BHKK-tutte} and \cite{HPR} for its algorithmic aspects).
The algorithm implied by these recursions is sometimes called \emph{Zykov's
algorithm} \cite{Z}.
Here is the deletion--contraction version.
\begin{algor}{C}{Contraction}{Given a graph $G$, this algorithm returns
the sequence of coefficients
$(a_0,a_1,\ldots,a_n)$ of the chromatic polynomial
$P(G,q)=\sum_{i=0}^n a_iq^i$.}
\item[C1] [Base] If $G$ has no edges then return the coefficients
$(0,0,\ldots,0,1)$, corresponding to the polynomial $P(G,q)=
q^n$.
\item[C2] [Recursion] Pick an edge $e$ and construct the graphs $G'=G/e$ and
$G''=G-e$.
Call Algorithm C recursively to compute $P(G',q)$ and $P(G'',q)$ as
sequences of coefficients $(a_0',a_1',\ldots, a_n')$ and
$(a_0'',a_1'',\ldots, a_n'')$.
Return $(a_0'' - a_0',a_1''-a_1',\ldots, a_n'' - a_n')$, corresponding to the
polynomial $P(G-e,q)-P(G/e,q)$. \qed
\end{algor}
To analyse the running time, let $T(r)$ be the number of
executions of Step~C1 for graphs with $n$ vertices and $m$ edges, where
$r=n+m$.
The two graphs constructed in Step~C2 have size $n-1+m-1=r-2$ and
$n+m-1=r-1$, respectively, so $T$ satisfies $T(r)=T(r-1)+T(r-2)$.
This is a well-known recurrence with solution $T(r) =O(\varphi^r)$, where
$\varphi=\frac{1}{2}(1+\surd 5)$ is the golden ratio.
Thus, Algorithm C requires $\varphi^{n+m}\operatorname{poly}(n)=O(1.619^{n+m})$ time.
A similar analysis for the algorithm implied by the addition--contraction
recursion gives $\varphi^{n+\overline{m}}\operatorname{poly}(n)$, where
$\overline{m}=\binom{n}{2}-m$ is the number of edges in the complement
of $G$.
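A direct Python sketch of Algorithm C, for simple graphs, with a polynomial represented as its coefficient list $(a_0,a_1,\ldots)$ (the function and variable names are this sketch's own):

```python
def chromatic_poly(vertices, edges):
    """Algorithm C: coefficients (a_0, a_1, ...) of P(G,q), computed by
    the deletion--contraction recurrence P(G,q) = P(G-e,q) - P(G/e,q)."""
    if not edges:
        return [0] * len(vertices) + [1]      # edgeless graph: P(G,q) = q^n
    u, v = edges[0]
    # G - e: simply drop the edge
    deleted = edges[1:]
    # G / e: replace v by u everywhere, discarding loops and parallel edges
    merged = {tuple(sorted((u if x == v else x, u if y == v else y)))
              for (x, y) in edges[1:]}
    contracted = [e for e in merged if e[0] != e[1]]
    p1 = chromatic_poly(vertices, deleted)
    p2 = chromatic_poly([x for x in vertices if x != v], contracted)
    p2 = p2 + [0] * (len(p1) - len(p2))       # pad to equal degree
    return [a - b for a, b in zip(p1, p2)]
```

For the triangle this returns $(0,2,-3,1)$, i.e. $P(K_3,q)=q^3-3q^2+2q=q(q-1)(q-2)$.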
These worst-case bounds are often very pessimistic.
They do not take into account that recurrences can be stopped as soon
as the graph is a tree (or some other easily recognized graph whose
chromatic polynomial is known as a closed formula), or that $P$
factorizes over connected components.
Moreover, we can use graph isomorphism heuristics and tabulation to
avoid some unnecessary recomputation of isomorphic subproblems (see \cite{HPR}).
Thus, Algorithm C is a more useful algorithm than its exponential
running time may indicate.
\subsection*{Vertex partitions and dynamic programming}
We turn to a different recurrence, which expresses
$\chi(G)$ in terms of induced subgraphs of $G$.
By taking $S$ to be a colour class of an optimal colouring of $G$, we
observe that every graph has an independent set of vertices $S$ for
which \( \chi(G) = 1+\chi(G-S)\).
Thus, we have
\begin{equation}\label{eq: dc}
\chi(G) = 1 + \min \chi(G- S)\,,
\end{equation} where the minimum is taken over all non-empty
independent sets $S$ in $G$.
The recursive algorithm implied by \eqref{eq: dc} is too slow to be of
interest.
We expedite it using the fundamental algorithmic idea of \emph{dynamic
programming}.
The central observation is that the subproblems $\chi(G-S)$ for
various vertex-subsets $S$ appearing in \eqref{eq: dc} are computed
over and over again.
It thus makes sense to store these $2^n$ values in a table when they
are first computed.
Subsequent evaluations can then be handled by consulting the table.
We express the resulting algorithm in a bottom-up fashion:
\begin{algor}{D}{Dynamic programming}{
Given a graph $G$, this algorithm computes a table $T$ with $T(W)=\chi(G[W])$, for each
$W\subseteq V$.}
\item[D1] [Initialize] Construct a table with (initially undefined)
entries $T(W)$ for each $W\subseteq V$.
Set $T(\emptyset)=0$.
\item[D2] [Main loop]
List all vertex-subsets $W_1,W_2,\ldots,W_{2^n}\subseteq V$ in
non-decreasing order of their size.
Do Step D3 for $W=W_2,W_3,\ldots, W_{2^n}$, then terminate.
\item[D3] [Determine $T(W)$]
Set $T(W) = 1 + \min T(W\setminus S)$, where the minimum is taken
over all non-empty independent sets $S$ in $G[W]$.
\qed
\end{algor}
The ordering of subsets in the main loop D2 ensures that each set is
handled before any of its supersets.
In particular, all values $T(W\setminus S)$ needed in Step~D3 will have
been previously computed, so the algorithm is well defined.
The minimization in Step~D3 is implemented by iterating over all
$2^{|W|}$ subsets of $W$.
Thus, the total running time of Algorithm~D is within a polynomial
factor of
\begin{equation}
\label{eq: D time}
\sum_{W\subseteq V} 2^{|W|} = \sum_{k=0}^n \binom{n}{k} 2^k = 3^n\,.
\end{equation}
This rather straightforward application of dynamic programming already
provides the non-trivial insight that the chromatic number can be
computed in time exponential in the number of vertices, rather than depending
exponentially on $m$, $\chi(G)$, or a superlinear function of $n$.
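With vertex subsets encoded as bitmasks, Algorithm D fits in a few lines of Python (a sketch for small $n$; the inner loop enumerates the subsets of $W$ with the standard $S=(S-1)\mathbin{\&}W$ trick):

```python
def chromatic_number(n, edges):
    """Algorithm D: T[W] = chi(G[W]) for every subset W of {0,...,n-1},
    via T[W] = 1 + min T[W \ S] over non-empty independent sets S."""
    nbr = [0] * n
    for u, v in edges:
        nbr[u] |= 1 << v
        nbr[v] |= 1 << u

    def independent(S):
        rest = S
        while rest:
            b = rest & -rest                # lowest set bit
            if nbr[b.bit_length() - 1] & S: # a neighbour inside S?
                return False
            rest ^= b
        return True

    T = [0] * (1 << n)
    for W in range(1, 1 << n):              # supersets come after subsets
        best = n
        S = W
        while S:                            # all non-empty subsets S of W
            if independent(S):
                best = min(best, 1 + T[W ^ S])
            S = (S - 1) & W
        T[W] = best
    return T[(1 << n) - 1]
```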
\subsection*{Maximal independent sets}
To pursue this idea a little further we notice that $S$ in \eqref{eq:
dc} can be assumed to be a \emph{maximal} independent set -- that is,
not a proper subset of another independent set.
To see this, let $f$ be an optimal colouring and consider the
colour class $S=f^{-1}(1)$.
If $S$ is not maximal, then repeatedly pick a vertex $v$ that is not
adjacent to $S$, and set $f(v)=1$.
By considering the disjoint union of $\frac{1}{3}k$ triangles, we see
that there exist $k$-vertex graphs with $3^{k/3}$ maximal
independent sets.
It is known that this is also an upper bound, and that the maximal
independent sets can be enumerated within a polynomial factor of that
bound (see \cite{BronKerbosch}, \cite{MM} and \cite{TTT}).
We therefore have the following result:
\begin{thm}\label{thm: bk} The maximal independent sets of a graph on $k$ vertices
can be listed in time $O(3^{k/3})$ and polynomial space.
\end{thm}
We can apply this idea to Algorithm~D.
The minimization in Step~D3 now takes the following form:
\begin{description}
\item[D3$\mathbf '$] [Determine $T(W)$]
Set $T(W) = 1 + \min T(W\!\setminus\! S)$, where the minimum is taken
over all maximal independent sets $S$ in $G[W]$.
\end{description}
Using Theorem~\ref{thm: bk} with $k=|W|$ for the minimization in Step~D3$'$,
the total running time of Algorithm~D comes within a polynomial factor
of
\[
\sum_{k=0}^n \binom{n}{k} 3^{k/3} = (1+3^{1/3})^n =O(2.443^n) \,.\]
For many years, this was the fastest known algorithm for the chromatic number.
\subsection*{3-colouring}
Of particular interest is the 3-colouring case.
Here, it makes more sense to let the outer loop iterate over all
maximal independent sets and check whether the complement is bipartite.
\begin{algor}{L}{Lawler's algorithm}{
Given a graph $G$, this algorithm finds a $3$-colouring if one exists.}
\item[L1] [Main loop] For each maximal independent set $S$ of $G$,
do Step~L2.
\item[L2]
[Try $f(S)=3$] Use Algorithm~B to find a colouring $f\colon
V\setminus S\rightarrow\{1,2\}$ of $G-S$ if one exists.
In that case, extend $f$ to all of $V$ by setting $f(v)=3$
for each $v\in S$, and terminate with $f$ as the result.
\qed
\end{algor}
The running time of Algorithm L is dominated by the number of
executions of L2, which, according to Theorem~\ref{thm: bk}, is
$3^{n/3}$.
Thus, Algorithm L decides 3-colourability in time $3^{n/3}\operatorname{poly}(n)=O(1.4423^n)$ and
polynomial space.
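Algorithm L can be sketched in Python by pairing a Bron--Kerbosch-style enumeration of maximal independent sets with a bipartiteness check (an illustration; the enumeration here omits pivoting, so it does not achieve the bound of Theorem~\ref{thm: bk}):

```python
def three_colourable(n, edges):
    """Algorithm L: for each maximal independent set S of G,
    test whether G - S is bipartite."""
    nbr = [0] * n
    for u, v in edges:
        nbr[u] |= 1 << v
        nbr[v] |= 1 << u
    full = (1 << n) - 1

    def mis(I, cand, excl):
        # maximal independent sets extending I; `cand` may still be added,
        # `excl` has already been tried (Bron--Kerbosch without pivoting)
        if cand == 0 and excl == 0:
            yield I
        while cand:
            b = cand & -cand
            i = b.bit_length() - 1
            yield from mis(I | b, cand & ~b & ~nbr[i], excl & ~b & ~nbr[i])
            cand &= ~b
            excl |= b

    def bipartite(mask):
        colour = {}
        for s in range(n):
            if (mask >> s) & 1 and s not in colour:
                colour[s] = 0
                stack = [s]
                while stack:
                    v = stack.pop()
                    for w in range(n):
                        if (nbr[v] >> w) & 1 and (mask >> w) & 1:
                            if w not in colour:
                                colour[w] = 1 - colour[v]
                                stack.append(w)
                            elif colour[w] == colour[v]:
                                return False
        return True

    return any(bipartite(full & ~S) for S in mis(0, full, 0))
```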
The use of maximal independent sets goes back to Christofides
\cite{Christofides}, while Algorithms D and L are due to Lawler
\cite{Law}.
A series of improvements to these ideas have further reduced these
running times.
At the time of writing, the best-known time bound for 3-colouring is
$O(1.329^n)$ by Beigel and Eppstein \cite{BE}.
\section{Subgraph expansion}
The \emph{Whitney expansion} \cite{Whitney} of the chromatic
polynomial is
\[ P(G,q) = \sum_{A\subseteq E} (-1)^{|A|} q^{k(A)}\,;\] see Chapter~3,
Section~2 for a proof.
It expresses the chromatic polynomial as an alternating sum of terms,
each of which depends on the number of connected components $k(A)$ of
the edge-subset $A\subseteq E$.
Determining $k(A)$ is a well-studied algorithmic graph problem, which
can be solved in time $O(n+m)$ (for example, by depth-first search).
Thus, the Whitney expansion can be evaluated in time $O(2^m(n+m))$.
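For illustration, the Whitney expansion can be evaluated directly in Python, with $k(A)$ computed by union--find (a sketch, practical only for very small $m$):

```python
from itertools import combinations

def chromatic_poly_value(n, edges, q):
    """Evaluate P(G,q) = sum over A subseteq E of (-1)^|A| * q^k(A),
    where k(A) is the number of components of the spanning subgraph (V, A)."""
    total = 0
    for r in range(len(edges) + 1):
        for A in combinations(edges, r):
            parent = list(range(n))

            def find(x):
                while parent[x] != x:
                    parent[x] = parent[parent[x]]   # path halving
                    x = parent[x]
                return x

            k = n
            for u, v in A:
                ru, rv = find(u), find(v)
                if ru != rv:
                    parent[ru] = rv
                    k -= 1                          # merging loses a component
            total += (-1) ** r * q ** k
    return total
```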
A more recent expression (see \cite{BH08}) provides an expansion over
\emph{induced} subgraphs:
\begin{thm}
For $W\subseteq V$, let $g(W)$ be the number of non-empty
independent sets in $G[W]$.
Then $G$ can be $q$-coloured if and only if
\begin{equation}\label{eq: ie c}
\sum_{W\subseteq V}
(-1)^{|V\setminus W|} \bigl(g(W)\bigr)^q> 0\,.
\end{equation}
\end{thm}
\begin{proof}
For each $W\subseteq V$, the term $\bigl(g(W)\bigr)^q$ counts the
number of ways of selecting $q$ non-empty independent sets $S_1,S_2,\ldots,
S_q$, where $S_i\subseteq W$.
For $U\subseteq V$, let $h(U)$ be the number of ways of selecting
$q$ non-empty independent sets whose union is $U$.
Then $(g(W))^q = \sum_{U\subseteq W} h(U)$, so
\begin{align*} \sum_{W\subseteq V} (-1)^{|V\setminus W|} \bigl(g(W)\bigr)^q &=
\sum_{W\subseteq V} (-1)^{|V\setminus W|} \sum_{U\subseteq W} h(U)\\ &=
\sum_{U\subseteq V} h(U) \sum_{W\supseteq U} (-1)^{|V\setminus
W|} = h(V)\,.
\end{align*}
For the last step, note that the inner sum (over $W$, with
$U\subseteq W\subseteq V$) vanishes except when $U=V$, because there
are as many odd-sized as even-sized sets sandwiched between
different sets, by the principle of inclusion--exclusion.
If $h(V)$ is non-zero, then there exist independent sets
$S_1,S_2,\ldots,S_q$ whose union is $V$. These sets correspond to a
colouring: associate a colour with the vertices in each set,
breaking ties arbitrarily.
\end{proof}
For each $W\subseteq V$, we can compute the value $g(W)$ in time
$O(2^{|W|}m)$ by constructing each non-empty subset of $W$ and testing
it for independence.
Thus, the total running time for evaluating \eqref{eq: ie c} is
within a polynomial factor of $3^n$, just as in the analysis
\eqref{eq: D time} for Algorithm D; however, the space requirement
here is only polynomial.
We can further reduce the running time to $O(2.247^n)$ by using
dedicated algorithms for evaluating $g(W)$ from the literature
(see \cite{BHK}).
If exponential space is available, we can do even better.
To that end, we first introduce a recurrence for $g$.
\begin{thm}\label{thm: g}
Let $W\subseteq V$ be non-empty. We have $g(\emptyset) = 0$ and, for every $v\in W$,
\begin{equation}\label{eq: g}
g(W)= g(W\setminus \{v\}) +
g(W\setminus N[v])+1\,.
\end{equation}
\end{thm}
\begin{proof}
Fix $v\in W$.
The non-empty independent sets $S\subseteq W$ can be partitioned
into two classes with $v\notin S$ and $v\in S$.
In the first case, $S$ is a non-empty independent set with
$S\subseteq W\setminus\{v\}$ and thus accounted for by the first
term of \eqref{eq: g}.
Consider the second case.
Since $S$ contains $v$ and is independent, it contains no vertex
from $N(v)$.
Thus, $S$ is a non-empty independent set with $\{v\}\subseteq
S\subseteq W\setminus N(v)$.
The number of such sets is the same as the number of (not
necessarily non-empty) independent sets $S'$ with $S'\subseteq
W\setminus N[v]$, because of the bijective mapping $S\mapsto S'$
where $S'=S\setminus\{v\}$.
By induction, the number of such sets is $g(W\setminus N[v])+1$,
where the `$+1$' term accounts for the empty set.
\end{proof}
This leads to the following algorithm, due to Björklund \emph{et al.} \cite{BHK}:
\begin{algor}{I}{Inclusion--exclusion}{
Given a graph $G$ and an integer $q\geq 1$, this algorithm determines whether $G$ can be
$q$-coloured.}
\item[I1] [Tabulate $g$] Set $g(\emptyset) = 0$.
For each non-empty subset $W\subseteq V$ in inclusion order, pick
$v\in W$ and set
\( g(W)= g(W\setminus \{v\}) +
g(W\setminus N[v])+1\).
\item[I2] [Evaluate \eqref{eq: ie c}]
If $\sum_{W\subseteq V} (-1)^{|V\setminus W|} \bigl(g(W)\bigr)^q> 0$ output
`yes', otherwise `no'.
\qed
\end{algor}
Both Steps I1 and I2 take time $2^n\operatorname{poly}(n)$, and the algorithm
requires a table with $2^n$ entries.
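A bitmask sketch of Algorithm I in Python (the table \texttt{g} is indexed by subsets of $\{0,\ldots,n-1\}$; iterating $W$ in increasing integer order is a valid inclusion order, since every submask of $W$ is a smaller integer):

```python
def q_colourable(n, edges, q):
    """Algorithm I: tabulate g(W), the number of non-empty independent
    sets in G[W], via g(W) = g(W - {v}) + g(W - N[v]) + 1, then test
    whether the alternating sum is positive."""
    nbr = [0] * n
    for u, v in edges:
        nbr[u] |= 1 << v
        nbr[v] |= 1 << u
    g = [0] * (1 << n)
    for W in range(1, 1 << n):                  # Step I1
        b = W & -W                              # pick a vertex v of W
        closed = b | nbr[b.bit_length() - 1]    # closed neighbourhood N[v]
        g[W] = g[W & ~b] + g[W & ~closed] + 1
    total = sum((-1) ** (n - bin(W).count('1')) * g[W] ** q
                for W in range(1 << n))         # Step I2
    return total > 0
```

For the triangle, $g(V)=3$ and the sum evaluates to $0$ for $q=2$ and $6$ for $q=3$, matching the $3!=6$ proper 3-colourings.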
Fig.~\ref{fig: Algorithm I} shows the computations of Algorithm~I on a
small graph for $q=2$ and $q=3$, with $a_q(W)= (-1)^{|V\setminus W|} \bigl(g(W)\bigr)^q$.
The sum of the entries in column $a_2$ is $0$, so there is no 2-colouring.
The sum of the entries in column $a_3$ is $18$, so a 3-colouring exists.
\begin{figure}
\caption{}\label{fig: Algorithm I}
\end{figure}
With slight modifications, Algorithm~I can be made to work
for other colouring problems such as the chromatic polynomial and
list-colouring, also in time and space $2^n\operatorname{poly}(n)$ (see \cite{BHK});
currently, this is the fastest known algorithm for these problems.
For the chromatic polynomial, the space requirement can be reduced to
$O(1.292^n)$, while maintaining the $2^n\operatorname{poly}(n)$ running time (see
\cite{BHKK-lin}).
\section{Local augmentation}
Sometimes, a non-optimal colouring can be improved by a local change
that recolours some vertices.
This general idea is the basis of many local search heuristics and
also several central theorems.
\subsection*{Kempe changes}
An important example, for edge-colouring, establishes Vizing's
theorem, $\Delta(G)\leq \chi'(G)\leq\Delta(G)+1$.
Chapter 5 gives a modern and more general presentation of the
underlying idea, and our focus in the present chapter is to make the
algorithm explicit.
A colour is \emph{free} at $v$ if it does not appear on an edge at $v$.
(We consider an edge-colouring with $\Delta(G)+1$ colours, so every
vertex has at least one free colour.)
A (Vizing) \emph{fan} around $v$ is a maximal set of edges $vw_0,vw_1,\ldots,vw_r$,
where $vw_0$ is not yet coloured and the other edges are coloured as
follows.
For $j=0,1,\ldots,r$, no colour is free at both $v$ and $w_j$.
For $j=1,2,\ldots,r$, the $j$th fan edge $vw_j$ has colour $j$ and the
colours appearing around $w_j$ include $1,2,\ldots,j$ but not $j+1$ (see Fig.~\ref{fig: fan}(a)).
Such a fan allows a recolouring by moving colours
as follows: remove the colour from $vw_j$ and set
$f(vw_0)=1,\allowbreak f(vw_1)=2,\allowbreak\ldots,\allowbreak f(vw_{j-1})=j$.
This is called \emph{downshifting from $j$} (see Fig.~\ref{fig: fan}(b)).
\begin{figure}
\caption{}\label{fig: fan}
\end{figure}
\begin{algor}{V}{Vizing's algorithm}{Given a graph $G$, this algorithm finds an edge
colouring with at most $\Delta(G)+1$ colours in time $O(nm)$.}
\item[V1]
[Initialize]
Order the edges arbitrarily $e_1,e_2,\ldots,e_m$.
Let $i=0$.
\item[V2]
[Extend colouring to next edge]
Increment $i$. If $i=m+1$ then terminate. Otherwise, let $vw=e_i$.
\item[V3] [Easy case]
If a colour $c$ is free at both $v$ and $w$, then set
$f(vw)=c$ and return to Step~V2.
\item[V4]
[Find $w_0$ and $w_1$]
Let $w_0=w$.
Pick a free colour at $w_0$ and call it $1$.
Let $vw_1$ be the edge incident with $v$ coloured $1$.
(Such an edge exists because $1$ is not also free at $v$.)
\item[V5]
[Find $w_2$]
Pick a free colour at $w_1$ and call it $2$.
If $2$ is also free at $v$, then set $f(vw_0)=1$, $f(vw_1)=2$, and
return to Step~V2.
Otherwise, let $vw_2$ be the edge incident with $v$ coloured $2$. Set $r=2$.
\item[V6]
[Extend fan to $w_{r+1}$]
Pick a free colour at $w_r$ and call it $r+1$.
If $r+1$ is also free at $v$ then downshift from $r$, recolour
$f(vw_r)=r+1$ and return to Step~V2.
Otherwise, let $vw_{r+1}$ be the edge incident with $v$ coloured $r+1$.
If each colour $1,2,\ldots,r$ appears around $w_{r+1}$, then increment
$r$ and repeat Step~V6.
\item[V7]
[Build a $\{0,j\}$-path from $w_j$ or from $w_{r+1}$]
Let $j\in\{1,2,\ldots, r\}$ be a free colour at $w_{r+1}$
and let $0$ be a colour free at $v$ and different from $j$.
Construct two maximal $\{0,j\}$-coloured paths $P_j$ and $P_{r+1}$
from $w_j$ and $w_{r+1}$, respectively, by following edges of
alternating colours $0,j,0,j,\ldots$ (see Fig.~\ref{fig: fan}(c)).
(The paths cannot both end in $v$.)
Let $k=j$ or $r+1$ so that $P_k$ does not end in $v$.
\item[V8] [Flip colours on $P_k$] Recolour the edges on $P_k$ by exchanging $0$
and $j$. Downshift from $k$, recolour $f(vw_k)=0$, and return to Step~V2.\qed
\end{algor}
To see that this algorithm is correct, one needs to check that
the recolourings in Steps V6 and V8 are legal.
A careful analysis is given by Misra and Gries \cite{MiGr}.
For the running time, first note that Step V6 is repeated at most
$\deg v$ times, so the algorithm eventually has to leave that step.
The most time-consuming step is Step~V7; a $\{0,j\}$-path can be
constructed in time $O(n)$ if for each vertex we maintain a table of
incident edges indexed by colour.
Thus the total running time of Algorithm~V is $O(mn)$.
Another example from this class of algorithms appears in the proof of
Brooks's theorem (see Chapter~2 and \cite{Brooks}), which relies on an algorithm that
follows Algorithm G but attempts to re-colour the vertices of
bichromatic components whenever a fresh colour is about to be
introduced.
\subsection*{Random changes}
There are many other graph colouring algorithms that fall under the
umbrella of local transformations.
Of particular interest are local search algorithms that recolour
individual vertices at random.
This idea defines a random process on the set of colourings called the
\emph{Glauber} or \emph{Metropolis} dynamics, or the natural Markov chain Monte
Carlo method.
The aim here is not merely to find a colouring (since
$q>4\Delta$, this would be easily done by Algorithm G), but to find a
colouring that is uniformly distributed among all $q$-colourings.
\begin{algor}{M}{Metropolis}{Given a graph $G$ with maximum degree
$\Delta$ and a $q$-colouring $f_0$ for $q> 4\Delta$, this algorithm finds a
uniform random $q$-colouring $f_T$ in polynomial time.}
\item[M1]
[Outer loop]
Set $T=\lceil qn\ln 2n/ (q-4\Delta)\rceil$. Do Step M2
for $t=1,2,\ldots, T$, then terminate.
\item[M2]
[Recolour a random vertex]
Pick a vertex $v\in V$ and a colour $c\in \{1,2,\ldots,q\}$ uniformly at
random.
Set $f_t=f_{t-1}$.
If $c$ does not appear among $v$'s neighbours, then set $f_t(v)=c$.
\qed
\end{algor}
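As a concrete illustration, here is a minimal Python sketch of Algorithm~M; the adjacency-list representation and the function name are our own choices, not part of the algorithm's specification.

```python
import math
import random

def metropolis_colouring(adj, q, f0, seed=0):
    """Algorithm M: starting from a proper q-colouring f0, attempt
    T = ceil(q*n*ln(2n)/(q - 4*Delta)) random single-vertex recolourings."""
    rng = random.Random(seed)
    n = len(adj)
    delta = max(len(nb) for nb in adj)
    assert q > 4 * delta, "the analysis requires q > 4*Delta"
    T = math.ceil(q * n * math.log(2 * n) / (q - 4 * delta))
    f = dict(f0)
    for _ in range(T):
        v = rng.randrange(n)                # random vertex
        c = rng.randrange(1, q + 1)         # random colour
        if all(f[w] != c for w in adj[v]):  # accept only legal moves
            f[v] = c
    return f
```

Since a move is accepted only when the chosen colour is absent from the neighbourhood, a proper initial colouring stays proper throughout the run.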
An initial colouring $f_0$ can be provided in polynomial time because $q>
\Delta+1$ -- for example, by Algorithm G.
To see that the choice of initial colouring $f_0$ has no influence on
the result $f_T$, we consider two different initial colourings $f_0$ and
$f_0'$ and execute Algorithm~M on both, using the same random choices
for $v$ and $c$ in each step.
Let $d_t=|\{\, v\in V \colon f_t(v)\neq f_t'(v)\,\}|$ be the number
of \emph{disagreeing} vertices after $t$ executions of Step M2.
Each step can change the colour of only a single vertex, so
$d_t-d_{t-1}$ equals $1$, $0$, or $-1$.
We have $d_t= d_{t-1}+1$ only if $f_{t-1}(v)=f_{t-1}'(v)$ but
$f_t(v)\neq f_t'(v)$, so exactly one of the two processes rejects the
colour change.
In particular, $v$ must have a (disagreeing) neighbour $w$ with
$c=f_{t-1}(w)\neq f_{t-1}'(w)$ or $f_{t-1}(w)\neq f_{t-1}'(w)=c$.
There are $d_{t-1}$ choices for $w$ and therefore $2\Delta
d_{t-1}$ choices for $c$ and $v$.
Similarly, we have $d_t=d_{t-1} -1$ only if $f_{t-1}(v)\neq
f_{t-1}'(v)$ and $c$ does not appear in $v$'s neighbourhood in either
$f_{t-1}$ or $f_{t-1}'$.
There are at least $(q-2\Delta)d_{t-1} $ such choices for $c$ and $v$.
Thus, the expected value of $d_t$ can be bounded as follows:
\begin{equation*}\mathbf E[d_t]\leq
\mathbf E[d_{t-1}] + \frac{2\Delta \mathbf E[d_{t-1}]}{qn} -
\frac{(q-2\Delta)\mathbf E[d_{t-1}]}{qn} =
\mathbf E [d_{t-1}]\biggl(1-\frac{q-4\Delta}{qn}\biggr)\,.\end{equation*} Iterating this
argument and using $d_0\leq n$, we have
\begin{equation*}
\mathbf E[d_T]\leq n\biggl(1-\frac{q-4\Delta}{qn}\biggr)^T \leq
n\exp\biggl(-\frac{T(q-4\Delta)}{qn}\biggr) \leq n\exp(-\ln 2n)
=\textstyle\frac{1}{2}\,.
\end{equation*} By Markov's inequality, and because $d_T$ is a
non-negative integer, we conclude that \[\Pr(f_T=f_T') = \Pr(d_T=0) =
1-\Pr(d_T\geq 1)\geq 1-\mathbf E[d_T] \geq \textstyle\frac{1}{2}\,.\]
We content ourselves with this argument, which shows that the
process is `sufficiently random' in the sense of being memoryless.
Informally, we can convince ourselves that $f_T$ is uniformly
distributed because we can assume that $f_0'$ in the above argument
was sampled according to such a distribution.
This intuition can be formalized using standard coupling arguments for
Markov chains; our calculations above show that the `mixing time' of
Algorithm~M is $O(n\log n)$.
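The coupling argument above can also be tested empirically: run two copies of Algorithm~M from different starting colourings, using shared randomness, and count the disagreeing vertices at the end. The following sketch (function name ours) does exactly that; by the analysis, each run of the pair coincides with probability at least $\frac12$.

```python
import math
import random

def coupled_disagreements(adj, q, f0, g0, seed=0):
    """Run two copies of Algorithm M with the SAME random choices of
    vertex v and colour c in every step; return d_T, the number of
    vertices on which the two final colourings disagree."""
    rng = random.Random(seed)
    n = len(adj)
    delta = max(len(nb) for nb in adj)
    T = math.ceil(q * n * math.log(2 * n) / (q - 4 * delta))
    f, g = dict(f0), dict(g0)
    for _ in range(T):
        v = rng.randrange(n)
        c = rng.randrange(1, q + 1)
        if all(f[w] != c for w in adj[v]):
            f[v] = c
        if all(g[w] != c for w in adj[v]):
            g[v] = c
    return sum(f[v] != g[v] for v in range(n))
```

Note that once $d_t=0$ the two chains evolve identically forever, so disagreement can only be lost, never regained, after coalescence.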
Algorithm~M and its variants have been well studied, and the
analysis can be much improved (see the survey of Frieze and Vigoda \cite{FV}).
Randomized local search has wide appeal across disciplines, including
simulations in statistical physics and heuristic methods in
combinatorial optimization.
\section{Vector colouring}
\label{sec: SDP}
We now turn to a variant of vertex-colouring that is particularly
interesting from an algorithmic point of view.
\subsection*{Vector chromatic number}
Let $S^{d-1} = \{\,\mathbf x \in \mathbb R^d \colon \lVert\mathbf
x\rVert = 1\,\}$.
A \emph{vector $q$-colouring} in $d\leq n$ dimensions is a mapping
$x\colon V\rightarrow S^{d-1}$ from the vertex-set to the set of
$d$-dimensional unit vectors for which neighbouring vectors are `far
apart', in the sense that their scalar product satisfies
\[ \langle x(v), x(w)\rangle
\leq -\frac{1}{q-1}\,, \qquad \text{for $vw\in E$}.
\]
The smallest such number $q$ is called the \emph{vector chromatic number}
$\vec\chi(G)$, which need not be an integer.
For instance, the vertices of the 3-chromatic cycle graph $C_5$ can be laid out
on the unit circle
in the form of a pentagram
$\vcenter{\hbox{\begin{tikzpicture}
[every node/.style={fill,draw, circle, inner sep = .5pt},scale=.25]
\foreach \i in {0,1,2,3,4} \node (\i) at (90+72*\i:1) {};
\draw (0)--(2)--(4)--(1)--(3)--(0);
\end{tikzpicture}}} $.
Then the angle between vectors corresponding to neighbouring vertices
is $\frac{4}{5}\pi$, corresponding to the scalar product $-1/(\surd
5-1)$, so $\vec\chi(C_5)\leq \surd 5<3$.
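This pentagram embedding is easy to verify numerically. The following snippet (our own illustration) places vertex $i$ at angle $4\pi i/5$ on the unit circle, so that consecutive vertices of $C_5$ are $\frac45\pi$ apart, and checks the scalar products:

```python
import math

def pentagram_vectors():
    """Unit vectors for C_5: consecutive vertices sit 4*pi/5 apart."""
    return [(math.cos(4 * math.pi * i / 5), math.sin(4 * math.pi * i / 5))
            for i in range(5)]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))
```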
\begin{thm}\label{thm: sandwich} If $G$ has clique number $\omega(G)$,
then $\omega(G)\leq \vec\chi(G)\leq \chi(G)$.
\end{thm}
\begin{proof}
For the first inequality, let $W$ be a clique in $G$ of size
$r=\omega(G)$ and consider a vector $q$-colouring $x$ of $G$.
Let $\mathbf y= \sum_{v\in W} x(v)$.
Then \[ 0 \leq \langle \mathbf y, \mathbf y \rangle \leq r\cdot
1+r(r-1) \cdot \biggl(-\frac{1}{q-1}\biggr)\,,
\]
which implies that $r\leq q$.
For the second inequality, place the vertices belonging to each
colour class at the corners of a $(q-1)$-dimensional simplex.
To be specific, let $f\colon V\rightarrow\{1,2,\ldots,q\}$ be an
optimal $q$-colouring and
define $x(v)=(x_1,x_2,\ldots, x_n)$ by
\begin{equation*}
x_i=
\begin{cases}
\bigl((q-1)/q\bigr)^{1/2}\,, & \text{if $i=f(v)$}\,;\\
- \bigl(q(q-1)\bigr)^{-1/2}\,, & \text{if $i\neq f(v)$ and $i\leq q$}\,;\\
0\,, & \text{if $i> q$}\,.
\end{cases}
\end{equation*}
Then we have
\[\langle x(v), x(v)\rangle = \frac{q-1}{q} +
\frac{q-1}{q(q-1)} = 1\,,\]
and for $v$ and $w$ with $f(v)\neq f(w)$ we
have
\[\langle x(v),x(w)\rangle = 2 \biggl(\frac{q-1}{q}\biggr)^{1/2}
\Bigl(- \bigl(q(q-1)\bigr)^{-1/2}\Bigr) + \frac{q-2}{q(q-1)}
= -\frac{2}{q} + \frac{q-2}{q(q-1)}
=-\frac{1}{q-1}\,.\]
Thus, $x$ is a vector $q$-colouring, so
$\vec\chi(G)$ is at most $q$.
\end{proof}
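The simplex construction in the proof can be checked directly. This short sketch (names ours) builds the vector for a vertex of a given colour and verifies the unit norms and pairwise scalar products:

```python
import math

def simplex_vector(colour, q):
    """Vector for a vertex with f(v) = colour (1..q), as in the proof:
    one large coordinate at position f(v), small ones elsewhere."""
    big = math.sqrt((q - 1) / q)
    small = -1 / math.sqrt(q * (q - 1))
    return [big if i == colour else small for i in range(1, q + 1)]
```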
What makes vector colourings interesting from the algorithmic point of
view is that they can be found in polynomial time, at least
approximately, using algorithms based on semidefinite programming.
The details behind those constructions lie far outside the scope of
this chapter (see Gärtner and Matoušek \cite{GM}).
\begin{thm}\label{thm: sdp} Given a graph $G$ with
$\vec\chi(G)=q$, a vector $(q+\epsilon)$-colouring of $G$ can be
found in
time polynomial in $n$ and $\log (1/\epsilon)$.
\end{thm}
For a graph with $\omega(G)=\chi(G)$, Theorem~\ref{thm: sandwich}
shows that the vector chromatic number equals the chromatic number.
In particular, it is an integer, and can be determined in polynomial
time using Theorem~\ref{thm: sdp} with $\epsilon <\frac{1}{2}$.
This shows that the chromatic numbers of perfect graphs can be
determined in polynomial time.
The theory behind this result counts as one of the highlights of
combinatorial optimization (see Grötschel, Lovász and
Schrijver \cite{GLS}).
How does the vector chromatic number behave for general graphs?
For $q=2$, the vectors have to point in exactly opposite directions.
In particular, there can be only two vectors for each connected
component, so vector 2-colouring is equivalent to 2-colouring.
But already for $q=3$, the situation becomes more interesting, since
there exist vector 3-colourable graphs that are not 3-colourable.
For instance, the Grötzsch graph, the smallest triangle-free graph
with chromatic number 4, admits the vector 3-colouring shown in
Fig.~\ref{fig: grotzsch clean} as an embedding on the unit sphere.
\begin{figure}
\caption{A vector $3$-colouring of the Grötzsch graph, drawn as an
embedding on the unit sphere.}
\label{fig: grotzsch clean}
\end{figure}
More complicated constructions (that we cannot visualize) show that
there exist vector 3-colourable graphs with chromatic number at least
$n^{0.157}$ (see \cite{FLS} and \cite{KMS}).
\subsection*{Randomized rounding}
Even though the gap between $\vec\chi$ and $\chi$ can be large for
graphs in general, vector colouring turns out to be a useful starting
point for (standard) colouring.
The next algorithm, due to Karger, Motwani and Sudan \cite{KMS},
translates a vector colouring into a (standard) vertex-colouring using random
hyperplanes.
\begin{algor}{R}{Randomized rounding of vector colouring}{
Given a $3$-chromatic graph $G$ with maximum degree $\Delta$,
this algorithm finds a $q$-colouring in polynomial time, where the expected
number of colours is $\mathbf
E[q]=O(\Delta^{0.631} \log n)$.}
\item[R1] [Vector colour] Set $\epsilon = 2\cdot 10^{-5}$ and
compute a vector $(3+\epsilon)$-colouring $x$ of $G$ using
semidefinite programming.
Let $\alpha \geq \arccos(-1/(2+\epsilon))$ be the minimum
angle in radians between the vectors of neighbouring vertices.
\item[R2] [Round] Set
\[
r=\lceil\log_{\pi/(\pi-\alpha)} (2\Delta)\rceil
\]
and construct $r$ random hyperplanes $H_1,H_2,\ldots, H_r$ in
$\mathbb R^n$.
For each vertex $v$, let $f(v)$ be the binary number
$b_rb_{r-1}\cdots b_1$, where $b_i=1$ if and only if $x(v)$ is on the positive
side of the $i$th hyperplane $H_i$.
\item[R3] [Handle monochromatic edges recursively] Iterate over all edges to
find the set of monochromatic edges $M=\{\, vw\in E\colon
f(v)=f(w)\,\}$.
Recolour these vertices by running Algorithm R recursively on
$G[M]$, with fresh colours. \qed
\end{algor}
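Steps R2 and R3 can be sketched as follows; this is a simplified illustration with our own function names, in which a `random hyperplane' is drawn by sampling a Gaussian normal vector.

```python
import math
import random

def round_vectors(x, edges, delta, alpha, seed=1):
    """One rounding level of Algorithm R: cut the unit vectors x[v]
    with r random hyperplanes; the resulting r-bit pattern is the
    colour f(v).  Returns the colouring and the monochromatic edges."""
    rng = random.Random(seed)
    r = math.ceil(math.log(2 * delta, math.pi / (math.pi - alpha)))
    d = len(next(iter(x.values())))
    normals = [[rng.gauss(0, 1) for _ in range(d)] for _ in range(r)]
    f = {}
    for v, vec in x.items():
        bits = 0
        for i, h in enumerate(normals):
            if sum(a * b for a, b in zip(h, vec)) > 0:
                bits |= 1 << i              # positive side of H_i
        f[v] = bits
    mono = [(u, w) for (u, w) in edges if f[u] == f[w]]
    return f, mono
```

In the full algorithm the monochromatic edges returned here would be recoloured by a recursive call with fresh colours.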
Figure~\ref{fig: grotzsch} illustrates the behaviour of Algorithm R on
the vector $3$-colouring of the Grötzsch graph from Fig.~\ref{fig:
grotzsch clean}.
Two hyperplanes separate the vertices into four parts.
The resulting vertex-colouring with colours from $\{0,1\}^2$ is shown
to the right.
In this example, the set $M$ of monochromatic edges determined in Step
R3 contains only the single edge $v_{10}v_{11}$, drawn bold in the figure.
\begin{figure}
\caption{Algorithm R on the vector $3$-colouring of the Grötzsch graph:
two random hyperplanes separate the vertices into four parts, giving a
vertex-colouring with colours from $\{0,1\}^2$; the monochromatic edge
$v_{10}v_{11}$ is drawn bold.}
\label{fig: grotzsch}
\end{figure}
Algorithm R runs in polynomial time, because
Theorem~\ref{thm: sdp} ensures that Step~R1 can be performed in
polynomial time.
We proceed to analyze the size of the final colouring.
Step R2 uses the colours $\{0,1,\ldots, 2^r-1\}$, so the number of
colours used in each Step R2 is
\begin{equation}\label{eq: colsperround} 2^r \leq
(2\Delta)^{1/\log_2 (\pi/(\pi-\alpha) )} < (2\Delta)^{0.631}\,.
\end{equation}
What is more difficult is to bound the total number of recursive
invocations.
To this end, we need to understand how fast the instance size,
determined by the size of $M$ in Step R3, shrinks.
Let $e$ be an edge whose endpoints received the vector colours $\mathbf
x$ and $\mathbf y$.
Elementary geometrical considerations establish the following result.
\begin{thm}\label{thm: angles}
Let $\mathbf x , \mathbf y \in \mathbb R^d$ with angle $\varphi$ (in radians).
A random hyperplane in $\mathbb R^d$ fails to separate $\mathbf x$ and
$\mathbf y$ with probability $1-\varphi/\pi$.
\end{thm}
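Theorem~\ref{thm: angles} is easy to test by simulation. The following Monte Carlo sketch (ours) estimates the separation probability for two unit vectors in the plane:

```python
import math
import random

def separation_rate(phi, trials=200000, seed=5):
    """Estimate the probability that a uniformly random hyperplane
    through the origin separates two unit vectors at angle phi."""
    rng = random.Random(seed)
    x = (1.0, 0.0)
    y = (math.cos(phi), math.sin(phi))
    separated = 0
    for _ in range(trials):
        h = (rng.gauss(0, 1), rng.gauss(0, 1))  # random normal direction
        sx = h[0] * x[0] + h[1] * x[1]
        sy = h[0] * y[0] + h[1] * y[1]
        if (sx > 0) != (sy > 0):                # opposite sides
            separated += 1
    return separated / trials
```

For $\varphi=\frac{2\pi}{3}$ the estimate should be close to $\varphi/\pi=\frac23$.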
The angle between the vectors $\mathbf x$ and $\mathbf y$ is at least $\alpha$.
(To gain some intuition for this, if we ignore the error term $\epsilon$,
Theorem~\ref{thm: angles} shows that $\mathbf x$ and $\mathbf y$ end up on the same
side of a random hyperplane with probability at most $1-\alpha/\pi =
1-\arccos(-\tfrac{1}{2})/\pi = 1-\tfrac{2}{3} = \tfrac{1}{3}$.)
The edge $e$ is monochromatic if all $r$ independent random hyperplanes
fail to separate $\mathbf x$ and $\mathbf y$ in Step~R2.
Thus,
\begin{equation*}
\Pr(e\in M) \leq (1-\alpha/\pi)^r = (\pi/(\pi-\alpha))^{-r} \leq 1/(2\Delta)\,.
\end{equation*} By
linearity of expectation, the expected size of $M$ is
\begin{equation*}
\mathbf E [|M|]=\sum_{e\in E}\Pr(e\in M)
\leq m/(2\Delta)\leq \textstyle\frac{1}{4}n\,.
\end{equation*}
Since each edge has two vertices, the expected number of vertices in
the recursive instance $G[M]$ is at most $\frac{1}{2}n$, and
in general, for $i>2$, the expected number of vertices $n_i$ in the
$i$th instance satisfies $n_i\leq \frac{1}{2}n_{i-1}$.
In particular, $n_t\leq 1$ after $t=O(\log n)$ rounds, at which point
the algorithm terminates.
With the bound \eqref{eq: colsperround} on the number of colours used
per round, we conclude that the total number of colours used is
$O(\Delta^{0.631} \log n)$ in expectation.
In terms of $\Delta$, Algorithm R is much better than the
bound of $\Delta+1$ guaranteed by Algorithm G.
For an expression in terms of $n$, we are tempted to bound $\Delta$ by
$O(n)$, but that just shows that the number of colours is $O(n^{0.631}
\log n)$, which is worse than the $O(\surd n)$ colours from
Algorithm~W.
Instead, we employ a hybrid approach.
Run Steps W1 and W2 as long as the maximum degree of the graph $G$ is
larger than some threshold $d$, and
then colour the remaining graph using Algorithm R.
The number of colours used by the combined algorithm is of the order of
\( (2n/d) + (2d)^{0.631}\log n\), which is minimized around $d =
n^{1/1.631}$ with value $O(n^{0.387})$.
Variants of Algorithm R for general $q$-colouring and with
intricate rounding schemes have been investigated further (see
Langberg's survey \cite{Langberg}).
The current best polynomial-time algorithm for colouring a
3-chromatic graph based on vector colouring uses $O(n^{0.208})$
colours, due to Chlamtac \cite{C}.
\section{Reductions}
\label{sec: reductions}
The algorithms in this chapter are summarized in Table~\ref{tab: Algorithms}.
\begin{table}[h!t]\small
\begin{tabular}{@{}l@{}llll@{}}\toprule
\multicolumn{2}{@{}l}{Algorithm} & Time & Problem\\ \midrule
{\bf B} & Bipartition &$O(n+m)$ & 2-colouring \\
{\bf C} & Contraction & $O(1.619^{n+m})$ & $P(G,q)$ \\
{\bf D} & Dynamic programming & $3^n\operatorname{poly}(n)$ & $\chi(G)$ \\
{\bf G} & Greedy & $O(n+m)$ & $(\Delta(G)+1)$-colouring \\
{\bf I} & Inclusion--exclusion & $2^n\operatorname{poly} (n)$ & $\chi(G)$ \\
{\bf L} & Lawler's algorithm & $O(1.443^n)$ & 3-colouring \\
{\bf M\,\,} & Metropolis dynamics & $\operatorname{poly}(n)$ & random $q$-colouring ($q>4\Delta$) \\
{\bf P} & Palette restriction & $1.5^n\operatorname{poly}(n)$ & 3-colouring\\
{\bf R} & Rounded vector colouring & $\operatorname{poly}(n)$ & $O(\Delta^{0.631} \log
n)$-colouring for $\chi(G)=3$\\
{\bf V} & Vizing's algorithm & $O(mn)$ & edge ($\Delta(G)+1$)-colouring \\
{\bf W} & Wigderson's algorithm & $O(n+m)$ & $O(\surd n)$-colouring for $\chi(G)=3$ \\
{\bf X} & Exhaustive search & $q^n\operatorname{poly}(n)$ & $P(G,q)$ \\\bottomrule
\end{tabular}
\caption{\label{tab: Algorithms}
Algorithms discussed in this survey}
\end{table}
Not only do these algorithms achieve different running times and
quality guarantees, they also differ in which specific problem they
consider.
Let us now be more precise about the variants of the graph colouring
problem:
\begin{description}
\item[{\it Decision}] Given a graph $G$ and an integer $q$, decide whether $G$
can be $q$-coloured.
\item[{\it Chromatic number}] Given a graph $G$, compute the chromatic
number $\chi(G)$.
\item[{\it Construction}] Given a graph $G$ and an integer $q$, construct a
$q$-colouring of $G$.
\item[{\it Counting}] Given a graph $G$ and an integer $q$, compute
the number $P(G,q)$ of $q$-colourings of $G$.
\item[{\it Sampling}] Given a graph $G$ and an integer $q$, construct a random
$q$-colouring of $G$.
\item[{\it Chromatic polynomial}] Given a graph $G$, compute the chromatic
polynomial -- that is, the coefficients of the integer polynomial $q\mapsto
P(G,q)$.
\end{description}
Some of these problems are related by fairly straightforward
reductions.
For example, the decision problem is easily solved using the
chromatic number by comparing $q$ with $\chi(G)$; conversely,
$\chi(G)$ can be determined by solving the decision problem for
$q=1,2,\ldots,n$.
It is also clear that if we can construct a $q$-colouring, then we can
decide that one exists.
What is perhaps less clear is the other direction.
This is seen by a self-reduction that follows the contraction algorithm,
Algorithm~C.
\noindent {\bf Reduction C}
(\emph{Constructing a colouring using a decision algorithm}). Suppose that we have an algorithm
that decides whether a given graph $G$ can be $q$-coloured.
If $G=K_n$ and $n\leq q$, give each vertex its own colour and
terminate.
Otherwise, select two non-adjacent vertices $v$ and $w$ in $G$.
If $G+vw$ cannot be $q$-coloured, then every $q$-colouring $f$ of $G$
must have $f(v)=f(w)$.
Thus we can identify $v$ and $w$ and recursively find a $q$-colouring
for $G/vw$.
Otherwise, there exists a $q$-colouring of $G$ with $f(v)\neq f(w)$, so
we recursively find a colouring for $G+vw$.
\qed
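Reduction C translates directly into code. In the sketch below (our own naming; the oracle is played by exhaustive search, Algorithm~X, so it is only practical for tiny graphs), each round either adds the edge $vw$ or contracts it, exactly as described:

```python
from itertools import product

def brute_force_decider(vertices, edges, q):
    """Exhaustive-search decision oracle (Algorithm X) for small graphs."""
    vs = sorted(vertices)
    for assignment in product(range(q), repeat=len(vs)):
        f = dict(zip(vs, assignment))
        if all(f[u] != f[v] for u, v in edges):
            return True
    return False

def colouring_from_decider(vertices, edges, q, decides):
    """Reduction C: build a q-colouring of a q-colourable graph using
    only a decision oracle decides(vertices, edges, q)."""
    verts = set(vertices)
    edges = {frozenset(e) for e in edges}
    group = {v: {v} for v in verts}     # original vertices merged into v
    while True:
        nonadj = sorted((v, w) for v in verts for w in verts
                        if v < w and frozenset((v, w)) not in edges)
        if not nonadj:
            break                       # the quotient graph is complete
        v, w = nonadj[0]
        trial = [tuple(e) for e in edges | {frozenset((v, w))}]
        if decides(verts, trial, q):
            edges.add(frozenset((v, w)))   # some colouring has f(v) != f(w)
        else:
            # every q-colouring has f(v) = f(w): contract w into v
            edges = {frozenset(v if x == w else x for x in e) for e in edges}
            edges = {e for e in edges if len(e) == 2}
            group[v] |= group.pop(w)
            verts.remove(w)
    assert len(verts) <= q              # complete graph on <= q vertices
    colour = {}
    for c, rep in enumerate(sorted(verts), start=1):
        for u in group[rep]:
            colour[u] = c
    return colour
```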
Some of our algorithms work only for a specific fixed $q$, such as
Algorithm B for 2-colourability or Algorithm L for 3-colourability.
Clearly, they both reduce to the decision problem where $q$ is part
of the input.
But what about the other direction?
The answer turns out to depend strongly on $q$:
the decision problem reduces to 3-colourability, but not to
2-colourability.
\noindent {\bf Reduction L}
(\emph{$q$-colouring using $3$-colouring}).
Given a graph $G= (V,E)$ and an integer $q$, this reduction constructs a graph $H$ that
is 3-colourable with colours $\{0,1,2\}$ if and only if $G$ is
$q$-colourable with colours $\{1,2,\ldots, q\}$.
First, to fix some colour names, the graph $H$ contains a triangle with
the vertices $0,1,2$.
We assume that vertex $i$ has colour $i$, for $i=0,1,2$.
For each vertex $v\in V$, the graph $H$ contains $2q$ vertices
$v_1,v_2,\ldots,v_q$ and $v_1',v_2',\ldots,v_q'$.
Our intuition is that the $v_i$s act as indicators for a colour in $G$
in the following sense: if $v_i$ has colour $1$ in $H$ then $v$
has colour $i$ in $G$.
The vertices are arranged as in Fig.~\ref{fig: reduction L}(a); the
right-most vertex is 1 or 2, depending on the parity of $q$.
\begin{figure}
\caption{Reduction L: (a) the indicator vertices $v_1,v_2,\ldots,v_q$ and
$v_1',v_2',\ldots,v_q'$ for a vertex $v$; (b) the `fresh' triangle joining
$v_i$ and $w_i$ for an edge $vw$.}
\label{fig: reduction L}
\end{figure}
The vertices $v_1,v_2,\ldots,v_q$ are all adjacent to 2, and so must be
coloured 0 or 1.
Moreover, at least one of them must be coloured 1, since
otherwise, the colours for $v_1',v_2',\ldots,v_q'$ are forced to
alternate as $1,2,1,\ldots$, conflicting with the colour of the right-most
vertex.
Now consider an edge $vw$ in $G$.
Let $v_1,v_2,\ldots,v_q$ and $w_1,w_2,\ldots, w_q$ be the
corresponding `indicator' vertices in $H$.
For each colour $i=1,2,\ldots, q$, the vertices $v_i$ and $w_i$ are
connected by a `fresh' triangle as shown in Fig.~\ref{fig: reduction L}(b).
This ensures that $v_i$ and $w_i$ cannot both be 1.
In other words, $v$ and $w$ cannot have received the same colour.
\qed
The above reduction, essentially due to Lovász \cite{Lov},
can easily be extended to a larger, fixed $q> 3$, because $G$ is
$q$-colourable if and only if $G$ with an added `apex' vertex
adjacent to all other vertices is $(q+1)$-colourable.
For instance, 4-colourability is not easier than 3-colourability for
general graphs.
Thus, all $q$-colouring problems for $q\geq 3$ are (in some sense)
equally difficult.
This is consistent with the fact that the case $q=2$ admits a very fast
algorithm (Algorithm~B), whereas none of the others does.
Many constructions have been published that show the computational
difficulty of colouring for restricted classes of graphs.
We will sketch an interesting example due to Stockmeyer \cite{S}: the
restriction of the case $q=3$ to planar graphs.
Consider the subgraph in Fig.~\ref{fig: planar}(a), called a
\emph{planarity gadget}.
\begin{figure}
\caption{(a) The planarity gadget; (b) replacing an edge crossing by the
gadget.}
\label{fig: planar}
\end{figure}
One can check that this subgraph has the property that every
3-colouring $f$ satisfies $f(\mathrm E)=f(\mathrm W)$ and
$f(\mathrm N)=f(\mathrm S)$.
Moreover, every partial assignment $f$ to $\{\mathrm N, \mathrm S,
\mathrm E, \mathrm W\}$ that satisfies $f(\mathrm E)=f(\mathrm
W)$ and $f(\mathrm N)=f(\mathrm S)$ can be extended to a
3-colouring of the entire subgraph.
The gadget is used to transform a given (non-planar) graph $G$ as
follows.
Draw $G$ in the plane and, for each edge $vw$, replace each of its
crossings with another edge by the planarity gadget.
The outer vertices of neighbouring gadgets are identified, and $v$ is
identified with $\mathrm W$ in its neighbouring gadget (see
Fig.~\ref{fig: planar}(b)).
The resulting graph is planar, and it can be checked that it is
3-colourable if and only if $G$ is 3-colourable.
Thus, the restriction to planar instances does not make
3-colourability computationally easier.
Unlike the case for non-planar graphs, this construction cannot be generalized
to larger $q>3$, since the decision problem for planar graphs and
every $q\geq 4$ has answer `yes' because of the four-colour theorem.
\subsection*{Computational complexity}
The field of computational complexity relates algorithmic problems
from various domains to one another in order to establish a notion of
computational difficulty.
The chromatic number problem was one of the first to be analysed in
this fashion.
The following reduction, essentially from the seminal paper of Karp
\cite{Karp}, shows that computing the chromatic number is `hard for
the complexity class NP' by reducing from the NP-hard
satisfiability problem for Boolean formulas in conjunctive normal form (CNF).
This implies that every other problem in the class NP reduces to the
chromatic number problem.
The input to \emph{CNF-Satisfiability} is a Boolean formula consisting of $s$
clauses $C_1,C_2,\ldots,C_s$.
Each clause $C_j$ consists of a disjunction
$C_j=(l_{j1}\vee l_{j2}\vee \cdots\vee l_{jk})$ of literals.
Every literal is a variable $x_1,x_2,\ldots,x_r$ or its negation
$\overline{x}_1,\overline{x}_2,\ldots, \overline{x}_r$.
The problem is to find an assignment of the variables to `true' and
`false' that makes all clauses true.
\noindent {\bf Reduction K}
(\emph{Satisfiability using chromatic number}).
Given an instance $C_1,C_2,\allowbreak\ldots,\allowbreak C_s$ of CNF-Satisfiability over the
variables $x_1,x_2,\ldots, x_r$, this reduction constructs a graph $G$ on $3r+s+1$
vertices such that $G$ can be coloured with $r+1$ colours if and only if
the instance is satisfiable.
The graph $G$ contains a complete subgraph on $r+1$ vertices
$\{0,1,\ldots,r\}$.
In any colouring, these vertices receive different colours, say
$f(i)=i$.
The intuition is that the colour $0$ represents `false', while the
other colours represent `true'.
For each variable $x_i$ $(1\leq i\leq r)$ the graph contains two
adjacent `literal' vertices $v_i$ and $\overline{v}_i$, both adjacent
to all `true colour' vertices $\{1,2,\ldots,r\}$ except $i$.
Thus, one of the two vertices $v_i,\overline{v}_i$ must be assigned
the `true' colour $i$, and the other must be coloured $0$.
The construction is completed with `clause' vertices $w_j$, one for
each clause $C_j$ $(1\leq j\leq s)$.
Let $x_{i_1}, x_{i_2}, \ldots, x_{i_k}$ be the variables appearing
(positively or negatively) in $C_j$.
Then $w_j$ is adjacent to $\{0,1,\ldots, r\}\setminus
\{i_1,i_2,\ldots, i_k\}$.
This ensures that only the `true' colours $\{i_1,i_2,\ldots, i_k\}$ are
available at $w_j$.
Furthermore, if $x_i$ appears positively in $C_j$, then $w_j$ is
adjacent to $\overline{v}_i$; if $x_i$ appears negated in $C_j$, then
$w_j$ is adjacent to $v_i$.
Figure~\ref{fig: K} shows the reduction for a small instance
consisting of just the clause $C_1=(x_1\vee \overline{x}_2
\vee\overline{x}_3)$ and a valid colouring corresponding to the
assignment $x_1=x_3=\text{true}$, $x_2=\text{false}$; the edges of the
clique on $\{0,1,2,3\}$ are not shown.
\begin{figure}
\caption{Reduction K for the single clause $C_1=(x_1\vee \overline{x}_2
\vee\overline{x}_3)$, with a colouring corresponding to the assignment
$x_1=x_3=\text{true}$, $x_2=\text{false}$; the edges of the clique on
$\{0,1,2,3\}$ are not shown.}
\label{fig: K}
\end{figure}
Thus, the only colours available to $w_j$ are those chosen by its
literals.
\qed
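Reduction K is simple to implement and to test on small formulas. In the sketch below (naming conventions ours), a literal is a signed integer, so $-i$ denotes $\overline{x}_i$, and colourability of the resulting graph is checked by backtracking:

```python
def reduction_k(clauses, r):
    """Build the graph of Reduction K for a CNF formula over x_1..x_r;
    it is (r+1)-colourable iff the formula is satisfiable.
    Literal i means x_i, literal -i means NOT x_i."""
    V = list(range(r + 1))                        # clique vertices 0..r
    V += [('v', i) for i in range(1, r + 1)]      # positive literals
    V += [('nv', i) for i in range(1, r + 1)]     # negated literals
    V += [('w', j) for j in range(len(clauses))]  # clause vertices
    E = {frozenset((a, b)) for a in range(r + 1)
         for b in range(r + 1) if a < b}
    for i in range(1, r + 1):
        E.add(frozenset((('v', i), ('nv', i))))
        for c in range(1, r + 1):
            if c != i:                            # 'true' colours except i
                E.add(frozenset((('v', i), c)))
                E.add(frozenset((('nv', i), c)))
    for j, clause in enumerate(clauses):
        vars_in = {abs(l) for l in clause}
        for c in range(r + 1):
            if c not in vars_in:
                E.add(frozenset((('w', j), c)))
        for l in clause:
            E.add(frozenset((('w', j), ('nv', l) if l > 0 else ('v', -l))))
    return V, [tuple(e) for e in E]

def colourable(V, E, q):
    """Backtracking q-colourability test for small graphs."""
    adj = {v: set() for v in V}
    for a, b in E:
        adj[a].add(b)
        adj[b].add(a)
    order = sorted(V, key=lambda v: -len(adj[v]))
    f = {}
    def extend(k):
        if k == len(order):
            return True
        v = order[k]
        for c in range(q):
            if all(f.get(w) != c for w in adj[v]):
                f[v] = c
                if extend(k + 1):
                    return True
                del f[v]
        return False
    return extend(0)
```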
\subsection*{Edge-colouring}
A mapping $f\colon E\rightarrow\{1,2,\ldots, q\}$ is an edge-colouring
of $G$ if and only if it is a vertex-colouring of the line graph
$L(G)$ of $G$.
In particular, every vertex-colouring algorithm can be used as an edge-colouring algorithm by running it on $L(G)$.
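As a sketch of this reduction (names ours), running the greedy algorithm on $L(G)$ amounts to giving each edge the smallest colour not used by an incident, previously coloured edge; since an edge is incident to at most $2\Delta-2$ others, at most $2\Delta-1$ colours are ever needed.

```python
def greedy_edge_colouring(edges):
    """Greedy vertex colouring of the line graph L(G): two edges are
    adjacent in L(G) iff they share an endpoint in G."""
    colour = {}
    for e in edges:
        used = {colour[f] for f in colour if set(e) & set(f)}
        c = 1
        while c in used:      # smallest colour absent from incident edges
            c += 1
        colour[e] = c
    return colour
```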
For instance, Algorithm~I computes the chromatic index in time
$2^{m}\operatorname{poly}(n)$, which is the fastest currently known algorithm.
Similarly, Algorithm G finds an edge-colouring with $(2\Delta-1)$ colours, but this
is worse than Algorithm V.
In fact, since $\Delta\leq \chi'(G)\leq \Delta +1$, Algorithm V
determines the chromatic index within an additive error of 1.
However, deciding which of the two candidate values for $\chi'(G)$ is
correct is an NP-hard problem, as shown by Holyer \cite{Hol} for
$\Delta(G)=3$ and by Leven and Galil \cite{LG} for $\Delta(G)>3$.
\subsection*{Approximating the chromatic number}
Algorithm V shows that the chromatic index can be very well
approximated.
In contrast, approximating the chromatic \emph{number} is much harder.
In particular, it is NP-hard to 4-colour a 3-chromatic graph
(see \cite{GuKh}).
This rules out an approximate vertex-colouring algorithm with a
performance guarantee as good as Algorithm V, but is far from explaining
why the considerable machinery behind, say, Algorithm R results only
in a colouring of size $n^c$ for 3-chromatic graphs.
The best currently known exponent is $c=0.204$ (see \cite{KaTh}).
For sufficiently large fixed $q$, it is NP-hard to find an
$\exp(\Omega(q^{1/3}))$-colouring for a $q$-colourable graph.
If $q$ is not fixed, even stronger hardness results are known.
We saw in Section~\ref{sec: SDP} that the polynomial-time computable
function $\vec\chi(G)$ is a lower bound on $\chi(G)$, even though the
gap can sometimes be large, say $\chi(G) \geq n^{0.157}\vec\chi(G)$
for some graphs.
Can we guarantee a corresponding upper bound for $\vec\chi$?
If not, maybe there is some other polynomial-time
computable function $g$ so that we can guarantee, for example,
$g(G)\leq \chi(G)\leq n^{0.999} g(G)$?
The answer turns out to be `no' under standard complexity-theoretic
assumptions: For every $\epsilon>0$, it is NP-hard to approximate
$\chi(G)$ within a factor $n^{1-\epsilon}$, as shown by Zuckerman \cite{Zuck}.
\subsection*{Counting}
The problem of counting the $q$-colourings is solved by evaluating
$P(G,q)$.
Conversely, because the chromatic polynomial has degree $n$, it can be
recovered by Lagrange interpolation from the values of the
counting problem at $q=0,1,\ldots, n$.
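This interpolation is straightforward to carry out exactly over the rationals. The sketch below (names ours) counts colourings by brute force for a tiny graph and recovers the coefficients of $P(G,q)$ by Lagrange interpolation:

```python
from fractions import Fraction
from itertools import product

def count_colourings(n, edges, q):
    """P(G, q) by exhaustive search (Algorithm X); fine for tiny n."""
    return sum(1 for f in product(range(q), repeat=n)
               if all(f[u] != f[v] for u, v in edges))

def mul_linear(poly, a):
    """Multiply a polynomial (ascending coefficients) by (x - a)."""
    out = [Fraction(0)] * (len(poly) + 1)
    for k, c in enumerate(poly):
        out[k + 1] += c
        out[k] -= a * c
    return out

def chromatic_polynomial(n, edges):
    """Coefficients c_0..c_n of P(G, q), interpolated from the values
    P(G, 0), P(G, 1), ..., P(G, n)."""
    xs = list(range(n + 1))
    ys = [Fraction(count_colourings(n, edges, q)) for q in xs]
    coeffs = [Fraction(0)] * (n + 1)
    for i, xi in enumerate(xs):
        basis, denom = [Fraction(1)], Fraction(1)
        for xj in xs:
            if xj != xi:
                basis = mul_linear(basis, xj)   # Lagrange basis for node xi
                denom *= xi - xj
        for k in range(n + 1):
            coeffs[k] += ys[i] * basis[k] / denom
    return coeffs
```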
Moreover, note that $\chi(G)\leq q$ if and only if $P(G,q)>0$, so it is NP-hard
to count the number of $q$-colourings simply because the decision
problem is known to be hard.
In fact, the counting problem is hard for Valiant's counting class
$\#$P.
On the other hand, an important result in counting complexity
\cite{JVV} relates the estimation of the size of a finite set to the
problem of uniformly sampling from it.
In particular, a uniform sampler such as Algorithm~M serves as a
`fully polynomial randomized approximation scheme' (FPRAS) for the
number of colourings.
Thus, provided that $q>4\Delta$, Algorithm~M can be used to compute a
value $g(G)$ for which $(1-\epsilon)g(G)\leq P(G,q) \leq (1+\epsilon)g(G)$
with high probability in time polynomial in $n$ and $1/\epsilon$ for
any $\epsilon>0$.
Much better bounds on $q$ are known (see the survey of Frieze and Vigoda \cite{FV}).
Without some bound on $q$, such an FPRAS is unlikely to exist because,
with $\epsilon = \frac{1}{2}$, it would constitute a randomized
algorithm for the decision problem and would therefore imply that all
of NP can be solved in randomized polynomial time.
\subsection*{Conclusion}
Together, the algorithms and reductions presented in this survey give
a picture of the computational aspects of graph colouring.
For instance, 2-colouring admits a polynomial-time algorithm, while
3-colouring does not (unless P${}={}$NP).
In the planar case, 4-colouring is trivial, but 3-colouring is not.
An almost optimal edge-colouring can be found in polynomial time, but
vertex-colouring is very difficult to approximate.
If $q$ is sufficiently large compared to $\Delta(G)$ then the set of
colourings can be sampled and approximately counted, but not counted
exactly.
Finally, even the computationally hard colouring problems admit
techniques that are much better than our initial Algorithm~X.
None of these insights is obvious from the definition of graph
colouring, so the algorithmic perspective on chromatic graph theory
has proved to be a fertile source of questions with interesting
answers.
\end{document}
\begin{document}
\title{On cubic Kummer towers of Garcia, Stichtenoth and Thomas type}
\begin{abstract}
In this paper we initiate the study of the class of cubic Kummer type towers considered by Garcia, Stichtenoth and Thomas in 1997 by classifying the asymptotically good towers in this class.
\end{abstract}
\maketitle
\section{Introduction}
\label{intro}
The importance of asymptotically good recursive towers in coding theory and other branches of information theory is well known (see, for instance, \cite{NX01}). Among the class of recursive towers there is an important one, namely the class of Kummer type towers, which are recursively defined by equations of the form $y^m=f(x)$ for some suitable exponent $m$ and rational function $f(x)\in K(x)$. A particular case was studied by Garcia, Stichtenoth and Thomas in \cite{GST97}, where a Kummer tower over a finite field $\mathbb{F}_q$ with $q\equiv 1\bmod m$ is recursively defined by an equation of the form
\begin{equation}\label{kummergst97}
y^m=x^df(x)\,,
\end{equation}
where $f(x)$ is a polynomial of degree $m-d$ such that $f(0)\neq 0$ and $\gcd(d,m)=1$. The authors showed that these towers have positive splitting rate and that, assuming the existence of a subset $S_0$ of $\mathbb{F}_q$ with certain properties, their good asymptotic behavior can be deduced together with a concrete nontrivial lower bound for their limit. Later Lenstra showed in \cite{Le02} that, in the case of an equation of the form \eqref{kummergst97} over a prime field, there is no such set $S_0$ satisfying the above conditions of Garcia, Stichtenoth and Thomas. Because of Lenstra's result it seems reasonable to expect that many Kummer towers defined by equations of the form \eqref{kummergst97} have infinite genus. However, to the best of our knowledge, no examples of such towers appear in the literature.
The aim of this paper is to classify the asymptotically good Kummer type towers considered by Garcia, Stichtenoth and Thomas in \cite{GST97}, recursively defined by an equation of the form
\begin{equation}\label{cubicgst}
y^3=xf(x)\,,
\end{equation}
over a finite field $\mathbb{F}_q$ where $q\equiv 1\bmod 3$ and $f(t)\in\mathbb{F}_q[t]$ is a monic quadratic polynomial. It was shown in \cite{GST97} that there are choices of the polynomial $f$ giving good asymptotic behavior and even optimal behavior. For instance, if $f(x)=x^2+x+1$ then the equation \eqref{cubicgst} defines an optimal tower over $\mathbb{F}_4$, the finite field with four elements (see \cite[Example 2.3]{GST97}). It is worth pointing out that the quadratic case (i.e. an equation of the form $y^2=x(x+a)$ with $0\neq a\in \mathbb{F}_q$) is already included in the extensive computational search of good quadratic tame towers performed in \cite{MaWu05}.
The organization of the paper is as follows. In Section \ref{notanddef} we give the basic definitions and establish the notation to be used throughout the paper. In Section \ref{genus} we give an overview of the main ideas, in the general setting of towers of function fields over a perfect field $K$, used to prove the infiniteness of the genus of a tower. In Section \ref{pyramid} we prove some criteria involving the basic function field associated to a tower to check the infiniteness of its genus. Finally, in Section \ref{examples} we prove our main result (Theorem \ref{teoexe2}), where we show that asymptotically good towers defined by an equation of the form \eqref{kummergst97}
\[y^3=x(x^2+bx+c)\,,\]
with $b,c \in \mathbb{F}_q$ and $q\equiv 1 \bmod 3$, fall into three mutually disjoint classes according to the way the quadratic polynomial $x^2+bx+c$ splits into linear factors over $\mathbb{F}_q$. From this result many examples of non-skew recursive Kummer towers with positive splitting rate and infinite genus can be given. We would like to point out that very few examples exhibiting this phenomenon are known. An example of a non-skew Kummer tower (but not of the form \eqref{kummergst97}) with infinite genus over a prime field $\mathbb{F}_p$ was given in \cite{MaWu05} but, as we will show at the end of Section \ref{genus}, there is a mistake in the argument used by the authors. There are also examples of non-skew Kummer towers with bad asymptotic behavior over some non-prime finite fields, given by Hasegawa in \cite{Ha05}, but those Kummer towers have zero splitting rate.
\section{Notation and Definitions}\label{notanddef}
In this work we shall be concerned with \emph{towers} of function fields, meaning a sequence $\mathcal{F}=\{F_i\}_{i=0}^{\infty}$ of function fields over a field $K$ where for each index $i\geq 0$ the field $F_i$ is a proper subfield of $F_{i+1}$, the field extension $F_{i+1}/F_i$ is finite and separable, and $K$ is the full field of constants of each field $F_i$ (i.e. $K$ is algebraically closed in each $F_i$). If the genus $g(F_i)\rightarrow \infty$ as $i\rightarrow \infty$ we shall say that $\mathcal{F}$ is a {\em tower in the sense of Garcia and Stichtenoth}.
Following \cite{Stichbook09} (see also \cite{GS07}), one way of constructing towers of function fields over $K$ is by giving a bivariate polynomial
\[H\in K[X,Y]\,,\]
and a transcendental element $x_0$ over $K$. In this situation a tower $\mathcal{F}=\{F_i\}_{i=0}^{\infty}$ of function fields over $K$ is defined as
\begin{enumerate}[(i)]
\item $F_0=K(x_0)$, and
\item $F_{i+1}=F_i(x_{i+1})$ where $H(x_i,x_{i+1})=0$ for $i\geq 0$.
\end{enumerate}
A suitable choice of the bivariate polynomial $H$ must be made in order to have towers. When the choice of $H$ satisfies all the required conditions we shall say that the tower $\mathcal{F}$ constructed in this way is a {\em recursive tower} of function fields over $K$. Note that for a recursive tower $\mathcal{F}=\{F_i\}_{i=0}^{\infty}$ of function fields over $K$ we have that
$$F_i=K(x_0,\ldots,x_i)\qquad \text{for }i\geq 0,$$
where $\{x_i\}_{i=0}^{\infty}$ is a sequence of transcendental elements over $K$.
Associated to a recursive tower $\mathcal{F}=\{F_i\}_{i=0}^{\infty}$ of function fields $F_i$ over $K$ we have the so called {\em basic function field} $K(x,y)$ where $x$ is transcendental over $K$ and $H(x,y)=0$.
For the sake of simplicity we shall say from now on that $H$ defines the tower $\mathcal{F}$ or, equivalently, that tower $\mathcal{F}$ is recursively defined by the equation $H(x,y)=0$.
A tower $\mathcal{F}=\{F_i\}_{i=0}^{\infty}$ of function fields over a perfect field $K$ of positive characteristic is called \emph{tame} if the ramification index $e(Q|P)$ of any place $Q$ of $F_{i+1}$ lying above a place $P$ of $F_i$ is relatively prime to the characteristic of $K$ for all $i\geq 0$. Otherwise the tower $\mathcal{F}$ is called \emph{wild}.
The set of places of a function field $F$ over $K$ will be denoted by $\mathbb{P}(F)$.
The following definitions are important when dealing with the asymptotic behavior of a tower. Let $\mathcal{F}=\{F_i\}_{i=0}^{\infty}$ be a tower of function fields
over a finite field $\mathbb{F}_q$ with $q$ elements.
The {\em splitting rate} $\nu(\mathcal{F})$ and the {\em genus} $\gamma(\mathcal{F})$ of $\mathcal{F}$ over $F_0$ are defined, respectively, as
$$\nu(\mathcal{F}):=\lim_{i\rightarrow \infty}\frac{N(F_i)}{[F_i:F_0]}\,, \qquad\gamma(\mathcal{F}):=\lim_{i\rightarrow \infty}\frac{g(F_i)}{[F_i:F_0]}\,.$$
If $g(F_i)\geq 2$
for $i\geq i_0\geq 0,$ the {\em limit} $\lambda(\mathcal{F})$ of $\mathcal{F}$ is defined as $$\lambda(\mathcal{F}):=\lim_{i\rightarrow \infty}\frac{N(F_i)}{g(F_i)}\,.$$
It can be seen that all the above limits exist and that $\lambda(\mathcal{F})\geq 0$ (see \cite[Chapter 7]{Stichbook09}).
Note that the definition of the genus of $\mathcal{F}$ makes sense also in the case of a tower $\mathcal{F}$ of function fields over a perfect field $K$.
We shall say that a tower $\mathcal{F}=\{F_i\}_{i=0}^{\infty}$ of function fields over $\mathbb{F}_q$ is {\em asymptotically good} if $\nu(\mathcal{F})>0$ and $\gamma(\mathcal{F})<\infty$. If either $\nu(\mathcal{F})=0$ or $\gamma(\mathcal{F})=\infty$ we shall say that $\mathcal{F}$ is {\em asymptotically bad}.
From the well-known Hurwitz genus formula
(see \cite[Theorem 3.4.13]{Stichbook09}) we see that the condition $g(F_i)\geq 2$ for $i\geq i_0$ in the definition of $\lambda(\mathcal{F})$ implies that $g(F_i)\rightarrow \infty$ as $i\rightarrow \infty$.
Hence, when we speak of the limit of a tower of function fields we are actually speaking of the limit of a tower in the sense of Garcia and Stichtenoth (see \cite[Section 7.2]{Stichbook09}).
It is easy to check that in the case of a tower $\mathcal{F}$ we have that $\mathcal{F}$ is asymptotically good if and only if $\lambda(\mathcal{F})>0$. Therefore a tower $\mathcal{F}$ is asymptotically bad if and only if $\lambda(\mathcal{F})=0$.
\section{The genus of a tower}\label{genus}
As we mentioned in the introduction, a simple and useful condition implying that $H \in \mathbb{F}_q[x,y]$ does not give rise to an asymptotically good recursive tower $\mathcal{F}$ of function fields over $\mathbb{F}_q$ is that $\deg_xH\neq\deg_yH$. With this situation in mind
we shall say that a recursive tower $\mathcal{F}=\{F_i\}_{i=0}^{\infty}$ of function fields over a perfect field $K$ defined by a polynomial $H \in K[x,y]$ is {\em non skew} if $\deg_xH=\deg_yH$. In the skew case (i.e. $\deg_xH\neq \deg_yH$) we might have that $[F_{i+1}:F_i]\geq 2$ for all $i\geq 0$ and
even that $g(F_i)\rightarrow \infty$ as $i\rightarrow\infty$ but, nevertheless, $\mathcal{F}$ will be asymptotically bad. What happens is that if $\deg_yH>\deg_xH$ then
the splitting rate $\nu(\mathcal{F})$ is zero (this situation makes sense in the case $K=\mathbb{F}_q)$ and if $\deg_xH>\deg_yH$ the genus $\gamma(\mathcal{F})$ is infinite (see \cite{GS07} for details). Therefore the study of good asymptotic behavior in the case of recursive towers must be focused on non skew towers. Since the splitting rate of recursive towers defined by an equation of the form \eqref{kummergst97} is positive, their good asymptotic behavior is determined by their genus.
From now on $K$ will denote a perfect field and we recall that $K$ is assumed to be the full field of constants of each function field $F_i$ of any given tower $\mathcal{F}$ over $K$. We recall a well-known formula for the genus of a tower $\mathcal{F}=\{F_i\}_{i=0}^{\infty}$ in terms of a subtower $\mathcal{F}'=\{F_{s_i}\}_{i=1}^\infty$, namely
\begin{equation}\label{ecu1paper3}
\gamma(\mathcal{F})=
\lim_{i \rightarrow \infty}\frac {g(F_{s_i})}{[F_{s_i}:F_0]}=g(F_0)-1+\frac 1 2 \sum_{i=1}^\infty
\frac{\deg{\operatorname{Diff}(F_{s_{i+1}}/F_{s_i})}}{[F_{s_{i+1}}:F_0]}.\end{equation}
\begin{rem}\label{remarkdivisor}
Suppose now that there exist positive
functions $c_1(t)$ and $c_2(t)$, defined for $t\geq 0$, and a divisor $B_i\in \mathcal{D}(F_i)$ such that for each $i\geq 1$
\begin{enumerate}[\text{Condition} (a):]
\item $\deg B_i\geq c_1(i)[F_i:F_0]$ and\label{thm3.2-a}
\item $\sum\limits_{P\in \operatorname{supp}(B_i)}\sum\limits_{Q|P}d(Q|P)\deg Q\geq c_2(i)[F_{{i+1}}\colon F_i]\deg{B_i}\,,$ \label{thm3.2-b}
\end{enumerate}
where the inner sum runs over all places $Q$ of $F_{i+1}$ lying above $P$, then it is easy to see from \eqref{ecu1paper3} that if the series
\begin{equation}\label{thm3.2-c}
\sum_{i=1}^{\infty}c_1(i)c_2(i)
\end{equation} is divergent then $\gamma(\mathcal{F})=\infty$.
\end{rem}
With the same hypotheses as in Remark~\ref{remarkdivisor}, if in addition $\mathcal{F}=\{F_i\}_{i=0}^{\infty}$ is non skew and recursively defined by the equation $H(x,y)=0$ such that $H(x,y)$, as a polynomial with coefficients in $K(y)$, is irreducible in $K(y)[x]$ then condition \eqref{thm3.2-a}
can be replaced by the following
\begin{enumerate}[(a')]
\item $\deg{B_{j}}\geq c_1(j)\cdot \deg(b(x_{j}))^{j}$ where $b\in K(T)$ is a rational function and $(b(x_{j}))^{j}$ denotes either the pole divisor or the zero divisor of $b(x_{j})$ in $F_{j}$,\label{thm3.2-a'}
\end{enumerate}
and the same result holds, i.e., $\gamma(\mathcal{F})=\infty$. These are the usual ways of proving the infiniteness of the genus of a recursive tower $\mathcal{F}$.
In particular the existence of a divisor as in Remark \ref{remarkdivisor} can be proved by showing that sufficiently many places of $F_i$ are ramified in $F_{i+1}$, in the sense that the number $r_i=\#(R_i)$, where
\[R_i=\{P\in\mathbb{P}(F_i)\,:\,\text{$P$ is ramified in $F_{i+1}$}\}\,,\] satisfies the estimate
\[r_i\geq c_i[F_{i+1}:F_0]\,,\]
where $c_i>0$ for $i\geq 1$ and the series
$\sum_{i=1}^{\infty}c_i$
is divergent. It is easily seen that the divisor of $F_i$
\[B_i=\sum_{P\in R_i}P\,,\]
satisfies the conditions \eqref{thm3.2-a} and \eqref{thm3.2-b} of Remark \ref{remarkdivisor}
with $c_1(i)= c_i[F_{i+1}:F_i]$ and $c_2(i)=[F_{i+1}:F_i]^{-1}$.
We recall now a standard result from the theory of constant field extensions (see \cite[Theorem 3.6.3]{Stichbook09}): let $\mathcal{F}=\{F_i\}_{i=0}^{\infty}$ be a tower of function fields over $K$. By considering the constant field extensions $\bar{F_i}=F_i\cdot K'$ where $K'$ is an algebraic closure of $K$, we have the so called constant field extension tower $\bar{\mathcal{F}}=\{\bar{F_i}\}_{i=0}^{\infty}$ of function fields over $K'$ and
\[\gamma(\mathcal{F})=\gamma(\bar{\mathcal{F}})\,.\]
Now we can prove the following result which will be useful later.
\begin{pro}\label{propmajwulf} Let $\mathcal{F}=\{F_i\}_{i=0}^{\infty}$ be a tower of function fields over $K$. Suppose that either each extension $F_{i+1}/F_i$ is Galois or that there exists a constant $M$ such that $[F_{i+1}:F_i]\leq M$ for $i\geq 0$. In order to have infinite genus it suffices to find, for infinitely many indices $i\geq 1$, a place $P_i$ of $F_0$ unramified in $F_i$ and such that each place of $F_i$ lying above $P_i$ is ramified in $F_{i+1}$.
In particular, suppose that the tower $\mathcal{F}=\{F_i\}_{i=0}^{\infty}$ is a non skew recursive tower defined by a suitable polynomial $H\in K[x,y]$. Let $\{x_i\}_{i=0}^{\infty}$ be a sequence of transcendental elements over $K$ such that $F_{i+1}=F_i(x_{i+1})$ where $H(x_i,x_{i+1})=0$. Then $\gamma(\mathcal{F})=\infty$ if
\begin{enumerate}[(i)]
\item $H$, as a polynomial with coefficients in $K(y)$, is irreducible in $K(y)[x]$.
\item There exists an index $k\geq 0$ such that for infinitely many indices $i\geq 0$ there is a place $P_i$ of $K(x_{i-k},\ldots,x_i)$ which is unramified in $F_i$ and each place of $F_i$ lying above $P_i$ is ramified in $F_{i+1}$.
\end{enumerate}
\end{pro}
\begin{proof}
We may assume that $K$ is algebraically closed since, by passing to the constant field tower $\bar{\mathcal{F}}=\{\bar{F_i}\}_{i=0}^{\infty}$ with $\bar{F_i}=F_i\cdot K'$ where $K'$ is an algebraic closure of $K$, we have $\gamma(\mathcal{F})=\gamma(\bar{\mathcal{F}})$. In this situation we have that for each $i\geq 0$ the place $P_i$ of $F_0$ splits completely in $F_i$ and each place $Q$ of $F_i$ lying above $P_i$ ramifies in $F_{i+1}$. Now consider the following sets
\[R_i=\{P\in\mathbb{P}(F_i)\,:\,\text{$P$ is ramified in $F_{i+1}$}\}\,,\]
and
\[A_i=\{Q\in\mathbb{P}(F_{i+1})\,:\,\text{$Q$ lies over some $P\in R_i$}\}\,.\]
and set $r_i=\#(R_i)$. Let $B_i$ be a divisor of $F_i$ defined as
\[B_i=\sum_{P\in R_i}P\,.\]
Then $\deg B_i \geq r_i\geq [F_i:F_0]$, because every place $Q$ of $F_i$ lying above $P_i$ is in $R_i$ and $P_i$ splits completely in $F_i$, so that condition \eqref{thm3.2-a} of Remark \ref{remarkdivisor} holds with $c_1(i)=1$.
Now suppose that each extension $F_{i+1}/F_i$ is Galois. Then $A_i$ is the set of all places of $F_{i+1}$ lying above a place of $R_i$. Therefore
\begin{align*}
\sum_{P\in \operatorname{supp}(B_i)}\underset{Q|P}{\sum_{Q\in\mathbb{P}(F_{i+1})}}d(Q|P)\deg Q & \geq \sum_{P\in R_i}\sum_{Q\in A_i}d(Q|P)\deg Q\\ & \geq \frac{1}{2}\sum_{P\in R_i}\sum_{Q\in A_i}e(Q|P)f(Q|P)\deg P\\
&=\frac{1}{2}[F_{i+1}:F_i]\, \sum_{P\in R_i}\deg P\\
&\geq \frac{1}{2}[F_{i+1}:F_i]\,\deg B_i \,.
\end{align*}
Then condition \eqref{thm3.2-b} of Remark \ref{remarkdivisor} holds with $c_2(i)=1/2$ and the series $\sum_{i=1}^{\infty}c_1(i)c_2(i)$ is divergent. Hence $\gamma(\mathcal{F})=\infty$.
In the case that $[F_{i+1}:F_i]\leq M$ for $i\geq 0$, by taking $c_2(i)=M^{-1}$
we arrive at the same conclusion.
Finally suppose that the tower $\mathcal{F}=\{F_i\}_{i=0}^{\infty}$ is non skew and recursive. Since $\mathcal{F}$ is non skew and $(i)$ holds, we have that $[F_i:F_0]=m^i=[F_i:K(x_i)]$ where $m=\deg_yH=\deg_xH$. Now we proceed with the same divisor $B_i$ as defined above using $(ii)$. We have that
\[\deg B_i\geq [F_i:K(x_{i-k},\ldots,x_i)]=m^{-k}[F_i:K(x_i)]=m^{-k}[F_i:F_0]\,,\]
so that by taking $c_1(i)=m^{-k}$ and $c_2(i)=m^{-k-1}$ we have the desired conclusion.
\end{proof}
An example of the situation described in the second part of Proposition \ref{propmajwulf} for $k=0$ was given in Lemma 3.2 of \cite{MaWu05} and applied to the non skew Kummer tower
\[y^3=1-\left(\frac{x-1}{x+1}\right)^3\,,\]
over $\mathbb{F}_p$ with $p\equiv 1,7\bmod{12}$. Unfortunately there is a mistake in the proof, as we now show. The basic function field associated to that tower is $\mathbb{F}_p(x,y)$ and both extensions $\mathbb{F}_p(x,y)/\mathbb{F}_p(x)$ and $\mathbb{F}_p(x,y)/\mathbb{F}_p(y)$ are Galois. The key part of the argument is the claim that $-3^{-1}$ is not a square in $\mathbb{F}_p$ with $p\equiv 1,7\bmod{12}$. With this we would have that the polynomial $x^2+3^{-1}$ is irreducible in $\mathbb{F}_p[x]$ and then it would define the place $P_{x^2+3^{-1}}$ of $\mathbb{F}_p(x)$ which is not only totally ramified in $\mathbb{F}_p(x,y)$ (by the theory of Kummer extensions) but also of degree $2$, which is crucial for their argument. From these facts the authors deduce that the above equation defines a tower in the sense of Garcia and Stichtenoth with infinite genus. But any such prime is congruent to $1$ modulo $3$, and $-3^{-1}$ is a square in $\mathbb{F}_p$ for $p\equiv 1\bmod 3$, as can easily be seen using the quadratic reciprocity law. Thus the polynomial $x^2+3^{-1}$ is not irreducible in $\mathbb{F}_p[x]$, so it does not define a place of $\mathbb{F}_p(x)$.
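The observation above can be verified directly with Euler's criterion ($a$ is a nonzero square modulo an odd prime $p$ if and only if $a^{(p-1)/2}\equiv 1\bmod p$). The following short computation is only an illustrative check of ours, sweeping the primes $p\equiv 1,7\bmod{12}$ below $500$.

```python
# Euler's criterion: a is a nonzero square mod an odd prime p iff a^((p-1)/2) == 1 (mod p).
def is_square_mod(a, p):
    return pow(a % p, (p - 1) // 2, p) == 1

def primes_up_to(n):
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = [False] * len(sieve[i * i::i])
    return [i for i, b in enumerate(sieve) if b]

for p in primes_up_to(500):
    if p % 12 in (1, 7):                  # these primes satisfy p == 1 (mod 3)
        inv3 = pow(3, -1, p)              # 3^{-1} mod p (3-argument pow, Python >= 3.8)
        assert p % 3 == 1
        assert is_square_mod(-inv3, p)    # so x^2 + 3^{-1} is reducible mod p
```

Since $-3^{-1}=-3\cdot(3^{-1})^2$, it is a square exactly when $-3$ is, which holds for every $p\equiv 1\bmod 3$; the loop confirms this for each prime in range.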
\section{Climbing the pyramid}\label{pyramid}
In this section and the next one we shall use the following convention: a place defined by a monic irreducible polynomial $f\in K[x]$ in a rational function field $K(x)$ will be denoted by $P_{f(x)}$. A slight modification of the arguments given in Lemma 3.2 of \cite{MaWu05} allows us to prove the following useful criterion for infinite genus in the case of recursive towers, and we include the proof for the sake of completeness. The main difficulty in applying Lemma 3.2 of \cite{MaWu05} is that it requires both extensions $K(x,y)/K(x)$ and $K(x,y)/K(y)$ to be Galois, which is unusual or simply hard to prove. Getting rid of the condition that $K(x,y)/K(y)$ be Galois is the key ingredient in proving the main result of the next section.
\begin{pro}\label{examplemawu} Let $\mathcal{F}=\{F_i\}_{i=0}^{\infty}$ be a non skew recursive tower of function fields over $K$ defined by a polynomial $H\in K[x,y]$ with the same degree $m$ in both variables. Let $K(x,y)$ be the basic function field associated to $\mathcal{F}$ and consider the set
$$N=\{\deg{R} : R\in \mathbb{P}(K(y)) \text{ and $R$ is ramified in $K(x,y)$}\} \,. $$
Let $d\in \mathbb{N}$ be such that $\gcd(d,m)=1$ and $n\not\equiv 0\bmod d$ for all $n\in N$.
Suppose that there is a place $P$ of $K(x)$ with the following properties:
\newcounter{saveenum}
\begin{enumerate}[(a)]
\item $\deg{P}=d$ and\label{item-a-mawu}
\item $P$ is ramified in $K(x,y)$.\label{item-b-mawu}
\setcounter{saveenum}{\value{enumi}}
\end{enumerate}
Then $\gamma(\mathcal{F})=\infty$ if $K(x,y)/K(x)$ is a Galois extension and $H$, as a polynomial with coefficients in $K(y)$, is irreducible in $K(y)[x]$.
\end{pro}
\begin{proof}
Consider a sequence $\{x_i\}_{i=0}^{\infty}$ of transcendental elements over $K$ such that
\[F_0=K(x_0)\quad\text{and}\quad F_{i+1}=F_i(x_{i+1})\,,\]
where $H(x_i,x_{i+1})=0$ for $i\geq 0$. Let $i\geq 1$. By the above assumptions there is a place $P_i$ of $K(x_i)$ ramified in the extension $K(x_i, x_{i+1})/K(x_i)$ with $\deg{P_i}=d$. Let $Q$ be a place of $F_i$ lying above $P_i$. Let $P_0, P_1, \ldots, P_i$ be the restrictions of $Q$ to $K(x_0), K(x_1),\ldots, K(x_i)$ respectively and let $P'_j$ be a place of $K(x_{j-1},x_j)$ lying above $P_j$ for $j=1,\ldots, i$ (see Figure \ref{figu5.9} below).
\begin{figure}
\caption{Ramification of $P_0, P_1,\ldots, P_i$ in the pyramid.}\label{figu5.9}
\end{figure}
By hypothesis we have that $e(P'_i|P_i)=1$. On the other hand
\begin{equation}\label{inertiadegree}
f(P_j'|P_j)\deg{P_j}=\deg{P_j'}=f(P_j'|P_{j-1})\deg{P_{j-1}}\,,
\end{equation}
for $1\leq j\leq i$, where $f(P_j'|P_j)$ and $f(P_j'|P_{j-1})$ are the respective inertia degrees. Since $d=\deg P_i$ and $\gcd(d,m)=1$, from \eqref{inertiadegree} for $j=i$ we must have that $d$ is a divisor of $\deg P_{i-1}$; otherwise there would be a prime factor of $d$ dividing $m$, because $K(x_{i-1},x_i)/K(x_{i-1})$ is Galois and in this case $f(P_i'|P_{i-1})$ is a divisor of $m$.
Continuing in this way using \eqref{inertiadegree} we see that $d$ is a divisor of $\deg P_j$ for $j=1,\ldots, i$ and this implies, by hypothesis, that each place $P_j$ is unramified in the extension $K(x_{j-1},x_j)/K(x_j)$ for $j=1,\ldots, i$.
We have now a ramification situation as in Figure \ref{figu5.9} below. By Abhyankar's Lemma (see \cite[Theorem 3.9.1]{Stichbook09}) it follows that $e(Q|P_i)=1$. Now let $Q'$ be a place of $F_{i+1}$ lying above $Q$ and let $P_{i+1}'$ be the restriction of $Q'$ to $K(x_i,x_{i+1})$. Then $P_{i+1}'$ lies above $P_i$ and $e(P_{i+1}'|P_i)=e>1$ because $P_i$ is ramified in $K(x_i,x_{i+1})$ and the extension $K(x_i,x_{i+1})/K(x_i)$ is Galois. Once again, by Abhyankar's Lemma, we have that $e(Q'|Q)=e(P'_{i+1}|P_i)>1$. Then we are in the conditions $(i)$ and $(ii)$ of Proposition \ref{propmajwulf} with $k=0$ and thus $\gamma(\mathcal{F})=\infty$.
\end{proof}
\begin{rem}\label{remark-thm4-2}
Note that if we have a ramification situation as in Figure \ref{figu5.9} above and $P_i$ is totally ramified in $K(x_i,x_{i+1})$ for all $i\geq 0$ then $Q$ is totally ramified in $F_{i+1}$ for all $i\geq 0$, because $e=[K(x_i,x_{i+1}):K(x_i)]=[F_{i+1}:F_i]$. Therefore if a recursive sequence $\mathcal{F}$ of function fields is defined by a polynomial $H(x,y)$ which is separable in the second variable, and for each $i\geq 0$ we have a ramification situation as in Figure \ref{figu5.9} with $P_i$ totally ramified in $K(x_i,x_{i+1})$, then $K$ is the full field of constants of each $F_i$, so that $\mathcal{F}$ is, in fact, a tower.
\end{rem}
\section{Classification of asymptotically good cubic towers of Garcia, Stichtenoth and Thomas type}\label{examples}
We prove now our main result. As we said in the introduction, Garcia, Stichtenoth and Thomas introduced in \cite{GST97} an interesting class of Kummer type towers over a finite field $\mathbb{F}_q$ with $q\equiv 1\bmod m$ defined by an equation of the form
\begin{equation}
y^m=x^df(x)\,,
\end{equation}
where $f(x)$ is a polynomial of degree $m-d$ such that $f(0)\neq 0$ and $\gcd(d,m)=1$. These Kummer type towers have positive splitting rate but over prime fields Lenstra \cite{Le02} showed that they fail to satisfy a well-known criterion for finite ramification locus given in \cite{GST97} which is the main tool in proving the finiteness of their genus. In this context the next result is important in the study of the cubic case of these Kummer type towers.
\begin{thm}\label{teoexe2}
Let $p$ be a prime number and let $q=p^r$ with $r\in \mathbb{N}$ such that $q\equiv 1 \bmod 3$. Let $f(t)=t^2+bt+c \in \mathbb{F}_q[t]$ be a polynomial such that $t=0$ is not a double root. Let $\mathcal{F}$ be a Kummer type tower over $\mathbb{F}_q$ recursively defined by the equation
\begin{equation}\label{badkummerexample}
y^3=xf(x)\,.
\end{equation}
If $\mathcal{F}$ is asymptotically good then the polynomial $f$ splits into linear factors over $\mathbb{F}_q$. This implies that any asymptotically good tower recursively defined by \eqref{badkummerexample} is of one and only one of the following three types:
\begin{enumerate}[Type 1.]
\item Recursively defined by $y^3=x(x+\alpha)(x+\beta)$ with nonzero $\alpha\neq\beta\in\mathbb{F}_q$. \label{a-teoexe2}
\item Recursively defined by $y^3=x^2(x+\alpha)$ with nonzero $\alpha\in\mathbb{F}_q$. \label{b-teoexe2}
\item Recursively defined by $y^3=x(x+\alpha)^2$ with nonzero $\alpha\in\mathbb{F}_q$. \label{c-teoexe2}
\end{enumerate}
\end{thm}
\begin{proof}
On the contrary, suppose that the polynomial $f$ is irreducible over $\mathbb{F}_q$. Let us consider the basic function field $F=\mathbb{F}_q(x,y)$. Since the polynomial $f(x)$ is irreducible in $\mathbb{F}_q[x]$, the place $P_{f(x)}$ of $\mathbb{F}_q(x)$ associated to $f(x)$ is of degree $2$ and, by the general theory of Kummer extensions (see \cite[Proposition 6.3.1]{Stichbook09}), $P_{f(x)}$ is totally ramified in $F$. In fact it is easy to see that the genus of $F$ is one and
\[\operatorname{Diff} (F/\mathbb{F}_q(x))=2Q_1+2Q_2\,,\]
where $Q_1$ is the only place of $F$ lying above $P_x$ (the zero of $x$ in $\mathbb{F}_q(x)$) and $Q_2$ is the only place of $F$ lying above $P_{f(x)}$. Also $Q_1$ is of degree $1$ and $Q_2$ is of degree $2$.
The extension $F/\mathbb{F}_q(y)$ is of degree $3$ because the polynomial
\[\phi(t)=tf(t)-y^3\in \mathbb{F}_q(y)[t]\,,\]
is the minimal polynomial of $x$ over $\mathbb{F}_q(y)$; otherwise $\phi(t)$ would have a root $z\neq y$ in $\mathbb{F}_q(y)$ and this would imply that $y$ is algebraic over $\mathbb{F}_q$, a contradiction. Clearly the extension $F/\mathbb{F}_q(y)$ is tame.
By choosing the place $P_{f(x)}$ of $\mathbb{F}_q(x)$ we see that conditions \eqref{item-a-mawu} and \eqref{item-b-mawu} of Proposition~\ref{examplemawu} hold with $d=2$, so it remains to prove that the integers in the set
$$N=\{\deg{R} : R\in \mathbb{P}(\mathbb{F}_q(y)) \text{ and $R$ is ramified in $F$}\} \,, $$
are odd integers. We shall use the following notation: for $z\in F$ the symbols $(z)_F$, $(z)_0^F$ and $(z)_{\infty}^F$ denote the principal divisor, the zero divisor and the pole divisor of $z$ in $F$ respectively. Using the well known expression of the different divisor in terms of differentials (see Chapter $4$ of \cite{Stichbook09}) we have that
\begin{align}\label{diffequality}
\begin{split}
\operatorname{Diff} (F/\mathbb{F}_q(y)) &= 2(y)_{\infty}^F+(dy)_F\\
&= 2(y)_{\infty}^F+\left(\frac{f(x) + x f'(x)}{3y^2}\right)_F+(dx)_F\\
&=2(y)_{\infty}^F+\left(\frac{(x-\beta_1)(x-\beta_2)}{y^2}\right)_F-2(x)_{\infty}^F+\operatorname{Diff} (F/\mathbb{F}_q(x))\\
& =2(y)_{\infty}^F+\left(\frac{(x-\beta_1)(x-\beta_2)}{y^2}\right)_F-2(x)_{\infty}^F+2Q_1+2Q_2\,.
\end{split}
\end{align}
We show now that $(y)_{\infty}^F=(x)_{\infty}^F$. Let $Q\in \operatorname{supp} (y)_{\infty}^F$ and let $S=Q\cap \mathcalathbb{F}_q(x)$. Then
\[3\nu_Q(y)=e(Q|S)(\nu_S(x)+\nu_S(f(x)))\,.\]
Since $\nu_Q(y)<0$ we must have that $S=P_{\infty}^x$, the pole of $x$ in $\mathbb{F}_q(x)$. Hence $\nu_Q(y)=-e(Q|P_{\infty}^x)=-1$ because by Kummer theory (see \cite[Proposition 6.3.1]{Stichbook09}) $P_{\infty}^x$ is unramified in $F$. Then
\[-3=\nu_Q(y^3)=\nu_Q(x)+\nu_Q(f(x))\,,\]
and this implies that $\nu_Q(x)<0$. Therefore $-3=3\nu_Q(x)$ and we have $\nu_Q(x)=-1$, which says that $Q\in \operatorname{supp} (x)_{\infty}^F$ and $\nu_Q((y)_{\infty}^F)=\nu_Q((x)_{\infty}^F)$.
Conversely, let $Q\in \operatorname{supp} (x)_{\infty}^F$. Since $\nu_Q(x)<0$ we have
\[3\nu_Q(y)=\nu_Q(x)+\nu_Q(f(x))=3\nu_Q(x)\,,\]
so that $\nu_Q(y)=\nu_Q(x)<0$. If $S=Q\cap \mathbb{F}_q(x)$ then
\[3\nu_Q(y)=e(Q|S)(\nu_S(x)+\nu_S(f(x)))\,,\]
and we must have again that $S=P_{\infty}^x$. This implies that $\nu_Q(y)=-e(Q|P_{\infty}^x)=-1$. Therefore $Q\in \operatorname{supp} (y)_{\infty}^F$ and $\nu_Q((x)_{\infty}^F)=\nu_Q((y)_{\infty}^F)$. Hence $(y)_{\infty}^F=(x)_{\infty}^F$ as claimed.
From \eqref{diffequality} we have now that
\begin{align}\label{diffequalityreduced}
\begin{split}
\operatorname{Diff} (F/\mathbb{F}_q(y))&=\left(\frac{(x-\beta_1)(x-\beta_2)}{y^2}\right)_F+2Q_1+2Q_2\\&=(z)_0^F-(z)_\infty^F+2Q_1+2Q_2\,,
\end{split}
\end{align}
where $z=(x-\beta_1)(x-\beta_2)y^{-2}$.
Let $Q$ be a place of $F$ in the support of $(z)_0^F$. Then $\nu_Q(z)>0$ and thus one of the following two cases can occur:
\begin{enumerate}[(i)]
\item $\nu_Q(x-\beta_i)>0$ for $i=1$ or $i=2$. In either case $Q$ lies above the rational place $P_{x-\beta_i}$ of $\mathbb{F}_q(x)$. Since $F/\mathbb{F}_q(x)$ is a Galois extension of degree $3$ and $\deg Q=f(Q|P)\deg P_{x-\beta_i}$, we have that either $\deg Q =1$ or $\deg Q =3$.
\item $\nu_Q(y)<0$. Let $S=Q\cap \mathbb{F}_q(x)$. We have $$3\nu_Q(y)=e(Q|S)(\nu_S(x)+\nu_S(f(x)))\,.$$ Since $\nu_S(x)\geq 0$ leads to a contradiction we must have $\nu_S(x)<0$ and thus $S=P_\infty^x$. The same argument used in (i) above shows that either $\deg Q =1$ or $\deg Q =3$.
\end{enumerate}
Now let $Q$ be a place of $F$ in the support of $(z)_\infty^F$. Then $\nu_Q(z)<0$ and thus one of the following two cases can occur:
\begin{enumerate}[(a)]
\item $\nu_Q(x-\beta_i)<0$ for $i=1$ or $i=2$. In either case $\nu_Q(x)<0$, so that $Q$ lies above the place $P_\infty^x$ of $\mathbb{F}_q(x)$ and the same argument given in (i) above shows that either $\deg Q =1$ or $\deg Q =3$.
\item $\nu_Q(y)>0$. Let $S=Q\cap \mathbb{F}_q(x)$. We have
\begin{equation}\label{case2}
3\nu_Q(y)=e(Q|S)(\nu_S(x)+\nu_S(f(x)))\,.
\end{equation}
Since $\nu_S(x)< 0$ leads to a contradiction we must have that $\nu_S(x)\geq 0$. If $\nu_S(x)> 0$ then $S=P_x$ and so $Q=Q_1$. If $\nu_S(x)=0$ then we must have that $\nu_S(f(x))>0$ because the left hand side of \eqref{case2} is positive. Therefore if $\nu_S(x)=0$ then $S=P_{f(x)}$ and thus $Q=Q_2$.
\end{enumerate}
On the other hand $\nu_{Q_i}(y)=1$ for $i=1,2$, as is easy to see from the definition of each $Q_i$. Then $\nu_{Q_i}(z)=-2\nu_{Q_i}(y)=-2$ so that, in fact, the divisor $-2Q_1-2Q_2$ is part of the divisor $(z)_F$. This implies that both places $Q_1$ and $Q_2$ are not in the support of $\operatorname{Diff} (F/\mathbb{F}_q(y))$. From the cases (i), (ii) and (a) above we conclude that every place in the support of $\operatorname{Diff} (F/\mathbb{F}_q(y))$ is of degree $1$ or $3$. Therefore no place of even degree of $\mathbb{F}_q(y)$ can ramify in $F$, as we claimed. In this way we see that all the conditions of Proposition \ref{examplemawu} hold, so that the equation
\[y^3=xf(x)\,,\]
defines a Kummer tower $\mathcal{F}$ over $\mathbb{F}_q$ with infinite genus if $f(x)$ is irreducible over $\mathbb{F}_q$ and this proves the theorem.
\end{proof}
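The trichotomy established by the theorem above can be made concrete over prime fields by brute-force root counting of $f(t)=t^2+bt+c$. The helper below is a hypothetical illustration of ours (the function name and examples are not from the paper), assuming a prime $p\equiv 1\bmod 3$ and $f$ with $t=0$ not a double root.

```python
def classify(b, c, p):
    """Classify y^3 = x(x^2 + b t + c)-type towers over F_p by the roots of f(t) = t^2 + b t + c."""
    roots = [t for t in range(p) if (t * t + b * t + c) % p == 0]
    if not roots:
        return "irreducible"   # by the theorem, the tower then has infinite genus
    if len(roots) == 1:        # double root (nonzero by hypothesis): f = (t + alpha)^2
        return "Type 3"
    return "Type 2" if 0 in roots else "Type 1"

p = 7                                  # p == 1 (mod 3)
print(classify(0, 1, p))               # t^2 + 1 has no root mod 7 -> irreducible
print(classify(3, 2, p))               # (t+1)(t+2): two distinct nonzero roots -> Type 1
print(classify(1, 0, p))               # t(t+1): 0 is a root -> Type 2
print(classify(2, 1, p))               # (t+1)^2: double nonzero root -> Type 3
```

The three return values "Type 1", "Type 2", "Type 3" match the three defining equations $y^3=x(x+\alpha)(x+\beta)$, $y^3=x^2(x+\alpha)$ and $y^3=x(x+\alpha)^2$ of the classification.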
\end{document} |
\begin{document}
\title{New constraint qualifications for
mathematical programs with equilibrium
constraints via variational analysis}
\author{Helmut Gfrerer\thanks{Institute of Computational Mathematics, Johannes Kepler University Linz, A-4040 Linz, Austria, e-mail:
[email protected].} \and Jane J. Ye\thanks{Department of Mathematics
and Statistics, University of Victoria, Victoria, B.C., Canada V8W 2Y2, e-mail: [email protected].}}
\date{}
\maketitle
\begin{abstract}
In this paper, we study the mathematical program with equilibrium constraints (MPEC) formulated as a mathematical program with a parametric generalized equation involving the regular normal cone. Compared with the usual way of formulating MPEC through a KKT condition, this formulation has the advantage that it does not involve extra multipliers as new variables, and it usually requires weaker assumptions on the problem data. Using the so-called first order sufficient condition for metric subregularity, we derive verifiable sufficient conditions for the metric subregularity of the involved set-valued mapping, or equivalently the calmness of the perturbed generalized equation mapping.
\vskip 10 true pt
\noindent {\bf Key words}: mathematical programs with equilibrium constraints, constraint qualification, metric subregularity, calmness.
\vskip 10 true pt
\noindent {\bf AMS subject classification}: 49J53, 90C30, 90C33, 90C46.
\end{abstract}
\section{Introduction}
A mathematical program with equilibrium constraints (MPEC) usually refers to an optimization problem in which the essential constraints are defined by a parametric variational inequality or complementarity system.
Since many equilibrium phenomena that arise from engineering and economics are characterized by either an optimization problem or a variational inequality, this justifies the name mathematical program with equilibrium constraints (\cite{Luo-Pang-Ralph, Out-Koc-Zowe}).
During the last two decades, more and more applications for MPECs have been found and there has been much progress made in both theories and algorithms for solving MPECs.
For easy discussion, consider the following mathematical program with a variational inequality constraint
\begin{eqnarray}
\mbox{(MPVIC)}\qquad\min_{(x,y)\in C} &&F(x,y)\nonumber\\
\mbox{s.t.}&& \langle \varphi(x,y), y'-y\rangle\geq 0 \quad \forall y'\in \Gamma, \label{VI}
\end{eqnarray}
where $C\subset \mathbb{R}^n\times\mathbb{R}^m$, $\Gamma:=\{y\in \mathbb{R}^m | g(y)\leq 0\}$, $F:\mathbb{R}^n\times\mathbb{R}^m\to\mathbb{R}$, $\varphi:\mathbb{R}^n\times \mathbb{R}^m\to\mathbb{R}^m$, $g:\mathbb{R}^m\to\mathbb{R}^q$ are sufficiently smooth. If the set $\Gamma$ is convex, then MPVIC can be equivalently written as a mathematical program with a generalized equation constraint
\begin{eqnarray*}
\mbox{(MPGE)}\qquad\min_{(x,y)\in C} &&F(x,y)\\
\mbox{s.t.}&& 0\in \varphi(x,y)+ {N}_\Gamma(y),
\end{eqnarray*}
where $N_\Gamma(y)$ is the normal cone to set $\Gamma$ at $y$ in the sense of convex analysis. If $g$ is affine or certain constraint qualification such as the Slater condition holds for the constraint $g(y)\leq 0$, then it is known that
$ {N}_\Gamma(y)=\nabla g(y)^T N_{\mathbb{R}_-^q}(g(y)).$
Consequently
\begin{equation}\label{GE-equiv}
0\in \varphi(x,y) +{N}_\Gamma(y) \Leftrightarrow \exists \lambda: 0\in \left (\varphi(x,y)+\nabla g(y)^T\lambda, g(y)\right )+N_{\mathbb{R}^m \times \mathbb{R}_+^q}(y, \lambda),
\end{equation}
where $\lambda$ is referred to as a multiplier.
This suggests considering the mathematical program with a complementarity constraint
\begin{eqnarray*}
\mbox{(MPCC)}\qquad\min_{(x,y)\in C, \lambda\in \mathbb{R}^q} &&F(x,y)\\
\mbox{s.t.}&& 0\in \left ( \varphi(x,y)+\nabla g(y)^T\lambda, g(y)\right )+N_{\mathbb{R}^m \times \mathbb{R}_+^q}(y, \lambda).
\end{eqnarray*}
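As a toy illustration of the equivalence \eqref{GE-equiv}, in the simplest case $g(y)=y$ (so $\Gamma=\{y\le 0\}$ and $q=m=1$) the generalized equation $0\in \varphi(x,y)+N_\Gamma(y)$ reduces to the complementarity system $\varphi(x,y)+\lambda=0$, $\lambda\ge 0$, $y\le 0$, $\lambda y=0$, and the two characterizations can be compared pointwise. The choice $\varphi(x,y)=x+y$ below is an arbitrary smooth example of ours, not taken from the paper.

```python
# Toy check of (GE-equiv) with g(y) = y, Gamma = {y <= 0}, phi(x, y) = x + y (arbitrary).
def phi(x, y):
    return x + y

def ge_holds(x, y):
    # 0 in phi + N_Gamma(y):  N_Gamma(y) = {0} if y < 0,  [0, inf) if y == 0,  empty if y > 0
    if y > 0:
        return False
    if y < 0:
        return phi(x, y) == 0
    return phi(x, y) <= 0          # y == 0

def kkt_holds(x, y):
    # exists lambda >= 0 with phi + lambda = 0, y <= 0 and lambda * y = 0
    lam = -phi(x, y)
    return y <= 0 and lam >= 0 and lam * y == 0

grid = [i / 2 for i in range(-6, 7)]
assert all(ge_holds(x, y) == kkt_holds(x, y) for x in grid for y in grid)
```

Here the multiplier is uniquely determined ($\lambda=-\varphi(x,y)$ when $y=0$), which is exactly the benign situation discussed next; the subtleties arise when several multipliers satisfy the right-hand side of \eqref{GE-equiv}.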
In the case where the equivalence (\ref{GE-equiv}) holds with a unique multiplier $\lambda$ for each $y$, (MPGE) and (MPCC) are obviously equivalent, while in the case where the multipliers are not unique the two problems are not necessarily equivalent if local optimal solutions are considered (see Dempe and Dutta \cite{Dam-Dut} in the context of bilevel programs). More precisely, it may be possible that for a local solution $(\bar x, \bar y, \bar \lambda)$ of (MPCC), the pair $(\bar x, \bar y)$ is not a local solution of (MPGE). This is a serious drawback of using the MPCC reformulation, since a numerical method computing a stationary point for (MPCC) may not have anything to do with the solution to the original MPEC. This shows that whenever possible, one should consider solving problem (MPGE) instead of problem (MPCC). Another fact we want to mention is that
in many equilibrium problems, the constraint set $\Gamma$ or the function $g$ may not be convex. In this case, if $y$ solves the variational inequality (\ref{VI}), then $y'=y$ is a global minimizer of the optimization problem
$\displaystyle \min_{y'} ~\langle \varphi(x,y), y' \rangle \
\mbox{ s.t. } y'\in \Gamma,$
and hence by Fermat's rule (see, e.g., \cite[Theorem 10.1]{RoWe98}) it is a solution of the generalized equation
\begin{equation}\label{ge}
0\in \varphi(x,y)+ \widehat{N}_\Gamma(y),\end{equation}
where $\widehat{N}_\Gamma(y)$ is the regular normal cone to $\Gamma$ at $y$ (see Definition \ref{normalcone}). In the nonconvex case, by replacing the original variational inequality constraint (\ref{VI}) by the generalized equation (\ref{ge}), the feasible region is enlarged and the resulting MPGE may not be equivalent to the original MPVIC. However, if the solution $(\bar x,\bar y)$ of MPGE is feasible for the original MPVIC, then it must be a solution of the original MPVIC; see \cite{Bouza} for this approach in the context of bilevel programs.
Based on the above discussion, in this paper we consider MPECs in the form
\begin{eqnarray*}
\mbox{(MPEC)}\qquad\min
&&F(x,y)\\
\mbox{s.t.}&& 0\in \varphi(x,y)+ \widehat{N}_\Gamma(y),
\\&&G(x,y)\leq 0,
\end{eqnarray*}
where $\Gamma$ is possibly non-convex and $G:\mathbb{R}^n\times\mathbb{R}^m\to\mathbb{R}^p$ is smooth.
Besides the issue of equivalent problem formulations, one has to consider constraint qualifications as well. This task is of particular importance for deriving optimality conditions. For the problem (MPCC) there exist many constraint qualifications from the MPEC literature ensuring Mordukhovich (M-)stationarity of locally optimal solutions. The weakest of these constraint qualifications appears to be MPEC-GCQ (Guignard constraint qualification) as introduced by Flegel and Kanzow \cite{FleKan05}; see \cite{FleKanOut07} for a proof of M-stationarity of locally optimal solutions under MPEC-GCQ. For the problem (MPEC) it was shown by Ye and Ye \cite{YeYe} that calmness of the perturbation mapping associated with the constraints of (MPEC) (called pseudo upper-Lipschitz continuity in \cite{YeYe}) guarantees M-stationarity of solutions. The two formulations (MPEC) and (MPCC) were compared in terms of calmness in \cite{Ad-Hen-Out}. The authors pointed out there that, very often, the calmness condition related to (MPEC) is satisfied at some $(\bar x, \bar y)$ while the one for (MPCC) is not fulfilled at $(\bar x,\bar y,\lambda)$ for certain multipliers $\lambda$. In particular, \cite[Example 6]{Ad-Hen-Out} shows that the constraint for (MPEC) may satisfy the calmness condition at $(\bar x,\bar y, 0)$ while the one for the corresponding (MPCC) does not satisfy the calmness condition at $(\bar x,\bar y, \lambda, 0)$ for any multiplier $\lambda$. In this paper we first show that if the multipliers are not unique, then the MPEC Mangasarian-Fromovitz constraint qualification (MFCQ) never holds for problem (MPCC). Then we present an example for which MPEC-GCQ is violated at $(\bar x,\bar y, \lambda, 0)$ for any multiplier $\lambda$ while calmness holds for the corresponding (MPEC) at $(\bar x,\bar y,0)$. Note that, in contrast to
\cite[Example 6]{Ad-Hen-Out}, $\Gamma$ in our example is even convex. However, very little is known about how to verify calmness for (MPEC) when the multiplier $\lambda$ is not unique. When $\varphi$, $g$ and $G$ are affine, calmness follows directly from Robinson's result on polyhedral multifunctions \cite{Robinson81}. Another approach is to verify calmness by showing the stronger Aubin property (also called pseudo Lipschitz continuity or the Lipschitz-like property) via the so-called Mordukhovich criterion, cf. \cite{Mor}. However, the Mordukhovich criterion involves the limiting coderivative of $\widehat N_\Gamma(\cdot)$, which is very difficult to compute in the case of nonunique $\lambda$; see \cite{GfrOut14b}.
The main goal of this paper is to derive an easily verifiable criterion for the so-called metric subregularity constraint qualification (MSCQ); see Definition \ref{DefMetrSubregCQ}, which is equivalent to calmness. Our sufficient condition for MSCQ involves only first-order derivatives of $\varphi$ and $G$ and derivatives up to second order of $g$ at $(\bar x,\bar y)$, and is therefore efficiently checkable. Our approach is mainly based on the sufficient conditions for metric subregularity recently developed in \cite{Gfr11,Gfr13a,Gfr14b,GfrKl16} and on some implications of metric subregularity which can be found in \cite{GfrMo16,GfrOut15}. A special feature is that the constraint qualification imposed on both the lower-level system $g(y)\leq0$ and the upper-level system $G(x,y)\leq0$ is only MSCQ, which is much weaker than the MFCQ usually required.
We organize our paper as follows. Section 2 contains basic definitions and preliminary results. In Section 3 we discuss the difficulties involved in formulating MPECs as (MPCC). Section 4 gives new verifiable sufficient conditions for MSCQ.
The following notation will be used throughout the paper. We denote by ${\cal B}_{\mathbb{R}^q}$ the closed unit ball in $\mathbb{R}^q$; when no confusion arises we denote it by ${\cal B}$. By ${\cal B}(\bar z; r)$ we denote the closed ball centered at $\bar z$ with radius $r$. ${\cal S}_{\mathbb{R}^q}$ is the unit sphere in $\mathbb{R}^q$. For a matrix
$A$, we denote by $A^T$ its transpose. The inner product of two vectors $x, y$ is denoted by
$x^T y$ or $\langle x,y\rangle$, and by $x\perp y$ we mean $\langle x, y\rangle =0$. For $\Omega \subset \mathbb{R}^d$ and $z \in \mathbb{R}^d$, we denote by $\dist{z, \Omega}$ the distance from $z$ to the set $\Omega$. The polar cone of a set $\Omega$ is
$\Omega^\circ=\{x\,\vert\, x^Tv\leq 0 \ \forall v\in \Omega\}$ and $\Omega^\perp$ denotes the orthogonal complement of $\Omega$. For a set $\Omega$, we denote by ${\rm conv\,} \Omega$ and ${\rm cl\,}\Omega$ the convex hull
and the closure of $\Omega$, respectively. For a differentiable mapping $P:\mathbb R^d\rightarrow \mathbb R^s$, we denote by $\nabla P(z)$ the Jacobian matrix of $P$ at $z$ if $s>1$ and the gradient vector if $s=1$. For a function $f:\mathbb{R}^d \rightarrow \mathbb{R}$, we denote by $\nabla^2 f(\bar z)$ the Hessian matrix of $f$ at $\bar z$. For a set-valued mapping $M:\mathbb{R}^d\rightrightarrows\mathbb{R}^s$, we denote its graph by ${\rm gph\,} M:=\{(z,w)\,\vert\, w\in M(z)\}$. Finally, $o:\mathbb{R}_+\rightarrow \mathbb{R}$ denotes a function with the property that $o(\lambda)/\lambda\rightarrow 0$ as $\lambda\downarrow 0$.
\section{{Basic definitions} and preliminary results}
In this section we gather background material and preliminary results from variational analysis that will be needed in the paper. The reader may find more details in the monographs \cite{Clarke,Mor,RoWe98} and in the papers we refer to.
\begin{definition}
Given a set
$\Omega\subset\mathbb R^d$ and a point $\bar z\in\Omega$,
the (Bouligand-Severi) {\em tangent/contingent cone} to $\Omega$
at $\bar z$ is a closed cone defined by
\begin{equation*}\label{normalcone}
T_\Omega(\bar z)
:=\Big\{u\in\mathbb R^d\;\Big|\;\exists\,t_k\downarrow
0,\;u_k\to u\;\mbox{ with }\;\bar z+t_k u_k\in\Omega ~\forall ~ k\Big\}.
\end{equation*}
The (Fr\'{e}chet) {\em regular normal cone} and the (Mordukhovich) {\em limiting/basic normal cone} to $\Omega$ at $\bar
z\in\Omega$ are defined by
\begin{eqnarray}
&& \widehat N_\Omega(\bar
z):=\Big\{v^\ast\in\mathbb{R}^d\;\Big|\;\limsup_{z\stackrel{\Omega}{\to}\bar
z}\frac{\langle v^\ast,z-\bar z\rangle}{\|z-\bar z\|}\le
0\Big\} \nonumber \\
\mbox{and } &&
N_\Omega(\bar z):=\left \{z^\ast \,\vert\, \exists z_{k}\stackrel{\Omega}{\to}\bar z \mbox{ and } z^\ast_k\rightarrow z^\ast \mbox{ such that } z^\ast_{k}\in \widehat{N}_{\Omega}(z_k) \ \forall k \right \}
\nonumber
\end{eqnarray}
respectively.
\end{definition}
\noindent Note that $\widehat N_\Omega(\bar z)=(T_\Omega(\bar z))^\circ$ and, when the set $\Omega$ is convex, the tangent/contingent cone and the regular/limiting normal cones reduce to the classical tangent cone and normal cone of convex analysis, respectively.
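A concrete nonconvex example (ours, included only for illustration) showing how these cones can differ is the coordinate cross in the plane:

```latex
% \Omega=(\mathbb{R}\times\{0\})\cup(\{0\}\times\mathbb{R})\subset\mathbb{R}^2, \bar z=(0,0):
T_\Omega(\bar z)=\Omega,\qquad
\widehat N_\Omega(\bar z)=(T_\Omega(\bar z))^\circ=\{(0,0)\},\qquad
N_\Omega(\bar z)=\Omega.
% The limiting cone collects regular normals from nearby points:
% \widehat N_\Omega(t,0)=\{0\}\times\mathbb{R} and \widehat N_\Omega(0,t)=\mathbb{R}\times\{0\}
% for t\neq 0, so N_\Omega(\bar z)\supsetneq\widehat N_\Omega(\bar z),
% in contrast with the convex case where the two normal cones coincide.
```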
It is easy to see that $u\in T_\Omega(\bar z)$ if and only if $\liminf_{t\downarrow 0} t^{-1}\dist{\bar z+tu, \Omega}=0$.
Recall that a set $\Omega$ is said to be geometrically derivable at a point $\bar z\in \Omega$ if the tangent cone coincides with the derivable cone at $\bar z$, i.e., $u\in T_\Omega(\bar z)$ if and only if $\lim_{t\downarrow 0} t^{-1}\dist{\bar z+tu, \Omega}=0$; see e.g. \cite{RoWe98}.
From the definitions of the various tangent cones, it is easy to see that if a set $\Omega$ is Clarke regular in the sense of \cite[Definition 2.4.6]{Clarke}, then it must be geometrically derivable, while the converse is in general false. The following proposition therefore improves the rule of tangents to product sets given in \cite[Proposition 6.41]{RoWe98}. The proof is omitted since it follows from the definitions of the tangent cone and derivability.
\begin{proposition}[Rule of Tangents to Product Sets]\label{productset} Let $\Omega=\Omega_1\times \Omega_2$ with $\Omega_1\subset \mathbb{R}^{d_1}$, $\Omega_2\subset \mathbb{R}^{d_2}$ closed. Then at any $\bar z=(\bar z_1,\bar z_2)$ with $\bar z_1\in \Omega_1, \bar z_2\in \Omega_2$, one has
$$T_\Omega(\bar z )\subset T_{\Omega_1}(\bar z_1 )\times T_{\Omega_2}(\bar z_2 ).$$
Furthermore, equality holds if at least one of the sets $\Omega_1, \Omega_2$ is geometrically derivable.
\end{proposition}
The following directional version of the limiting normal cone was introduced in \cite{Gfr13a}.
\begin{definition}Given a set
$\Omega\subset\mathbb R^d$, a point $\bar{z}\in \Omega$
and a direction $w\in \mathbb{R}^{d}$, the {\em limiting normal cone} to $\Omega$ in direction $w$ at $\bar{z}$ is defined by
\[
N_{\Omega}(\bar{z}; w):=\left \{z^{*} \,\vert\, \exists t_{k}\downarrow 0, w_{k}\rightarrow w, z^{*}_{k}\rightarrow z^{*}: z^{*}_{k}\in \widehat{N}_{\Omega}(\bar{z}+ t_{k}w_{k}) \ \forall k \right \}.
\]
\end{definition}
By definition it is easy to see that $N_{\Omega}(\bar{z}; 0)=N_{\Omega}(\bar{z})$ and $N_{\Omega}(\bar{z}; u)=\emptyset$ if $u\not \in T_{\Omega}(\bar{z})$.
Further by \cite[Lemma 2.1]{Gfr14b}, if $\Omega$ is a union of finitely many closed convex sets, then one has the following relationship between the limiting normal cone and its directional version.
\begin{proposition}\cite[Lemma 2.1]{Gfr14b}\label{relationship}
Let $\Omega\subset \mathbb{R}^d$ be a union of finitely many closed convex sets, $\bar z\in \Omega, u\in \mathbb{R}^d$. Then
$N_{\Omega}(\bar{z}; u)\subset N_{\Omega}(\bar{z})\cap \{u\}^\perp $
and the equality holds if
the set $\Omega$ is convex and $u\in T_\Omega(\bar z)$.
\end{proposition}
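For a quick illustration of the directional cone (our toy computation, not taken from the cited works), take the coordinate cross $\Omega=(\mathbb{R}\times\{0\})\cup(\{0\}\times\mathbb{R})$, a union of two closed convex sets to which Proposition \ref{relationship} applies:

```latex
% \Omega=(\mathbb{R}\times\{0\})\cup(\{0\}\times\mathbb{R}), \bar z=(0,0).
% Along u=(1,0) only the horizontal branch is reachable, so
N_\Omega(\bar z;(1,0))=\{0\}\times\mathbb{R}
= N_\Omega(\bar z)\cap\{(1,0)\}^\perp,
% i.e. the inclusion of Proposition \ref{relationship} holds here with
% equality even though \Omega is nonconvex, while for
% u=(1,1)\notin T_\Omega(\bar z) one gets N_\Omega(\bar z;(1,1))=\emptyset.
```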
Next we consider constraint qualifications for a constraint system of the form
\begin{equation}
\label{EqGenOptProbl}
z\in\Omega:=\{z\,\vert\, P(z)\in D\},
\end{equation}
where $P:\mathbb{R}^d\to\mathbb{R}^s$ and $D\subset\mathbb{R}^s$ is closed.
\begin{definition}[cf. \cite{FleKanOut07}]Let $\bar z\in \Omega$, where $\Omega$ is defined as in (\ref{EqGenOptProbl}) with $P$ smooth, and let $T_\Omega^{\rm lin}(\bar z)$ be the linearized cone of $\Omega$ at $\bar z$ defined by
\begin{equation}
T_\Omega^{\rm lin}(\bar z)=\{w|\nabla P(\bar z) w\in T_D (P(\bar z))\}.
\label{lcone}
\end{equation}
We say that the {\em generalized Abadie constraint qualification} (GACQ) and the {\em generalized Guignard constraint qualification} (GGCQ) hold at $\bar z$, if
\[T_\Omega(\bar z)=T_\Omega^{\rm lin}(\bar z) \mbox{ and } {(T_\Omega(\bar z))^\circ}=(T_\Omega^{\rm lin}(\bar z))^\circ\]
respectively.
\end{definition}
It is obvious that GACQ implies GGCQ, which is regarded as the weakest constraint qualification. In the case of a standard nonlinear program, GACQ and GGCQ reduce to the standard definitions of the Abadie and Guignard constraint qualifications, respectively. Under GGCQ, any local optimal solution to
a disjunctive problem, i.e., an optimization problem where the constraint has the form (\ref{EqGenOptProbl}) with the set $D$ equal to a union of finitely many polyhedral convex sets, must be M-stationary (see e.g. \cite[Theorem 7]{FleKanOut07}).
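The gap between the two conditions can be seen on a toy system of this disjunctive type (our example): take $P(z)=z_1z_2$ and $D=\{0\}$, so that $\Omega$ is the coordinate cross:

```latex
% P(z)=z_1 z_2, D=\{0\}, \Omega=\{z\in\mathbb{R}^2 \mid z_1 z_2=0\}, \bar z=(0,0).
% Since \nabla P(\bar z)=(0,0) and T_D(0)=\{0\}, the linearized cone is all of \mathbb{R}^2:
T_\Omega^{\rm lin}(\bar z)=\mathbb{R}^2\supsetneq
T_\Omega(\bar z)=(\mathbb{R}\times\{0\})\cup(\{0\}\times\mathbb{R}),
% so GACQ fails, but both polars collapse to the origin,
(T_\Omega(\bar z))^\circ=\{0\}=(T_\Omega^{\rm lin}(\bar z))^\circ,
% so GGCQ holds: GGCQ is strictly weaker than GACQ.
```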
GACQ and GGCQ are weak constraint qualifications, but they are usually difficult to verify. Hence we are interested in constraint qualifications that are effectively verifiable, and yet not too strong. The following notion of metric subregularity is the base of the constraint qualification which plays a central role in this paper.
\begin{definition}
Let $M:\mathbb{R}^d\rightrightarrows\mathbb{R}^s$ be a set-valued mapping and let $(\bar z,\bar w)\in{\rm gph\,} M$. We say that $M$ is {\em metrically subregular} at $(\bar z,\bar w)$ if there exist a neighborhood $W$ of $\bar z$ and a constant $\kappa>0$ such that
\begin{equation}\label{EqMetrSubReg}\dist{z,M^{-1}(\bar w)}\leq\kappa\dist{\bar w,M(z)}\ \; \forall z\in W.
\end{equation}
\end{definition}
The metric subregularity property was introduced in \cite{Ioffe79} for single-valued maps under the terminology ``regularity at a point'' and the name of ``metric subregularity'' was suggested in \cite{DoRo04}.
Note that metric subregularity at $(\bar z,0)\in{\rm gph\,} M$ is also referred to as the existence of a local error bound at $\bar z$.
It is easy to see that $M$ is metrically subregular at $(\bar z,\bar w)$ if and only if its inverse set-valued map $M^{-1}$ is calm at $(\bar w,\bar z)\in {\rm gph\,} M^{-1}$, i.e., there exist a neighborhood $W$ of $\bar z$, a neighborhood $V$ of $\bar w$ and a constant $\kappa>0$ such that
$$M^{-1}(w)\cap W\subset M^{-1}(\bar w) +\kappa \|w-\bar w\| {\cal B} \quad \forall w\in V. $$
While the term calmness of a set-valued map was first coined in \cite{RoWe98}, the property was introduced as pseudo upper-Lipschitz continuity in \cite{YeYe}, taking into account that it is weaker than both the pseudo Lipschitz continuity of Aubin \cite{Aubin} and the upper Lipschitz continuity of Robinson \cite{Robinson75,Robinson76}.
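To build intuition for this property, here is a one-dimensional illustration of ours for a single-valued map $M(z)=\{P(z)\}$ with $\bar w=0$:

```latex
% P(z)=z: \dist{z,M^{-1}(0)}=|z|=\dist{0,M(z)}, so M is metrically
%   subregular (with \kappa=1) at (0,0).
% P(z)=z^2: \dist{z,M^{-1}(0)}=|z| while \dist{0,M(z)}=z^2, and
%   |z|\le\kappa z^2 fails for all z with 0<|z|<1/\kappa, so metric
%   subregularity fails at (0,0): the residual z^2 vanishes faster
%   than the distance to the solution set.
```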
More general constraints can be easily written in the form $P(z)\in D$.
For instance,
a set
$\Omega=\{z\,\vert\,
P_1(z)\in D_1, 0\in P_2(z)+Q(z)\}$
where $P_i:\mathbb{R}^{d}\to\mathbb{R}^{s_i}$, $i=1,2$, and $Q:\mathbb{R}^{d}\rightrightarrows\mathbb{R}^{s_2}$ is a set-valued map can also be written as
\[\Omega=\{z\,\vert\, P(z)\in D\}\;\mbox{ with }\; P(z):=\left(\begin{array}{c}P_1(z)\\(z,-P_2(z))\end{array}\right),\ D:=D_1\times
{\rm gph\,} Q.\]
We now show that for both representations of $\Omega$ the metric subregularity properties of the maps describing the constraints are equivalent.
\begin{proposition}\label{PropEquGrSubReg}Let $P_i:\mathbb{R}^{d}\to\mathbb{R}^{s_i}$, $i=1,2$, let $D_1\subset \mathbb{R}^{s_1}$ be closed and let $Q:\mathbb{R}^{d}\rightrightarrows\mathbb{R}^{s_2}$ be a set-valued map with a closed graph. Further assume that $P_1$ and $P_2$ are Lipschitz near $\bar z$. Then the set-valued map
\[M_1(z):=\left(\begin{array}
{c}P_1(z)-D_1\\
P_2(z)+Q(z)
\end{array}\right)\]
is metrically subregular at $(\bar z,(0,0))$ if and only if the set-valued map
\[M_2(z):=\left(\begin{array}
{c}P_1(z)\\(z,-P_2(z))
\end{array}\right)-D_1\times{\rm gph\,} Q\]
is metrically subregular at $(\bar z,(0,0,0))$.
\end{proposition}
\begin{proof}
Assume without loss of generality that the image space $\mathbb{R}^{s_1}\times\mathbb{R}^{s_2}$ of $M_1$ is equipped with the norm $\norm{(y_1,y_2)}=\norm{y_1}+\norm{y_2}$, whereas we use the norm $\norm{(y_1,z,y_2)}=\norm{y_1}+\norm{z}+\norm{y_2}$ for the image space $\mathbb{R}^{s_1}\times\mathbb{R}^d\times\mathbb{R}^{s_2}$ of $M_2$. If $M_2$ is metrically subregular at $(\bar z,(0,0,0))$, then there are a neighborhood $W$ of $\bar z$ and a constant $\kappa$ such that for all $z\in W$ we have
\begin{eqnarray*}\dist{z,\Omega}&\leq& \kappa \distb{(0,0,0), M_2(z)}\\
&=&\kappa\big(\dist{P_1(z),D_1}+\inf\{\norm{z-\tilde z}+\norm{-P_2(z)-\tilde y}\,\vert\, (\tilde z,\tilde y)\in{\rm gph\,} Q\}\big)\\
&\leq & \kappa\big(\dist{P_1(z),D_1}+\inf\{\norm{-P_2(z)-\tilde y}\,\vert\, \tilde y\in Q(z)\}\big)=\kappa \distb{(0,0),M_1(z)} ,
\end{eqnarray*}
which shows metric subregularity of $M_1$. Now assume that $M_1$ is metrically subregular at $(\bar z,(0,0))$ and hence we can find a radius $r>0$ and a real $\kappa$ such that
\[\dist{z,\Omega}\leq\kappa \distb{(0,0),M_1(z)}\ \forall z\in {\cal B}(\bar z;r).\]
Further assume that $P_1, P_2$ are Lipschitz with modulus $L$ on ${\cal B}(\bar z;r)$, and consider $z\in {\cal B}(\bar z;r/(2+L))$. Since ${\rm gph\,} Q$ is closed, there is some $(\tilde z,\tilde y)\in {\rm gph\,} Q$ with
\[\norm{z-\tilde z}+\norm{-P_2(z)-\tilde y}={\distb{(z,-P_2(z)),{\rm gph\,} Q}}.\]
Then
\[\norm{z-\tilde z}\leq \distb{(z,-P_2(z)),{\rm gph\,} Q}\leq \norm{z-\bar z}+\norm{-P_2(z)+P_2(\bar z)}\leq (1+L)\norm{z-\bar z}\]
implying $\norm{\bar z-\tilde z}\leq \norm{\bar z-z}+\norm{z-\tilde z}\leq (2+L)\norm{z-\bar z}\leq r$ and
\begin{eqnarray*}\dist{\tilde z,\Omega}&\leq& \kappa \distb{(0,0),M_1(\tilde z)}
=\kappa\Big(\dist{P_1(\tilde z),D_1}+\distb{-P_2(\tilde z),Q(\tilde z)}\Big)\\
&\leq& \kappa\big(\dist{P_1(\tilde z),D_1}+\norm{-P_2(\tilde z)-\tilde y}\big)\\
&\leq &\kappa\big(2L\norm{z-\tilde z}+\dist{P_1(z),D_1}+\norm{-P_2(z)-\tilde y}\big).
\end{eqnarray*}
Taking into account $\dist{z,\Omega}\leq\dist{\tilde z,\Omega}+\norm{z-\tilde z}$ we arrive at
\begin{eqnarray*}\dist{z,\Omega}&\leq&\kappa\max\{2L+\frac 1\kappa,1\}\big(\dist{P_1(z),D_1}+\norm{z-\tilde z}+\norm{-P_2(z)-\tilde y}\big)\\
&=&\kappa\max\{2L+\frac 1\kappa,1\}\distb{(0,0,0),M_2(z)},
\end{eqnarray*}
establishing metric subregularity of $M_2$ at $(\bar z,(0,0,0))$.
\end{proof}
Since metric subregularity of the set-valued map $M(z):=P(z)-D$ at $(\bar z,0)$ implies that GACQ holds at $\bar z$ (see e.g. \cite[Proposition 1]{HenOut05}), it can serve as a constraint qualification.
Following \cite[Definition 3.2]{GfrMo15a}, we define it as a constraint qualification below.
\begin{definition}[\bf metric subregularity constraint qualification]\label{DefMetrSubregCQ} Let $P(\bar z)\in D$.
We say that the {\sc metric subregularity constraint qualification (MSCQ)} holds at $\bar z$ for the system $P(z)\in D$ if the set-valued map $M(z):=P(z)-D$ is metrically subregular at $(\bar z,0)$, or equivalently if the perturbed set-valued map $M^{-1}(w):=\{z\,\vert\, w\in P(z)-D\}$ is calm at $(0, \bar z)$.
\end{definition}
There exist several sufficient conditions for MSCQ in the literature. Here are the two most frequently used ones. The first case is when the linear CQ holds, i.e., when $P$ is affine and $D$ is a union of finitely many polyhedral convex sets.
The second case is when the no nonzero abnormal multiplier constraint qualification (NNAMCQ) holds at $\bar z$ (see e.g., \cite{Ye}):
\begin{equation}\label{NNAMCQ}
\nabla P(\bar z)^T\lambda=0,\;\lambda\in N_D(P(\bar z))\quad\Longrightarrow\quad\lambda=0.\end{equation}
It is known that NNAMCQ is equivalent to MFCQ in the case of standard nonlinear programming. Condition (\ref{NNAMCQ}) appears under different terminologies in the literature; e.g., while it is called NNAMCQ in \cite{Ye}, it is referred to as the generalized MFCQ (GMFCQ) in \cite{FleKanOut07}.
The linear CQ and NNAMCQ may still be too strong to hold for some problems. Recently some new constraint qualifications for standard nonlinear programs have been introduced in the literature that are stronger than MSCQ but weaker than the linear CQ and/or NNAMCQ; see e.g. \cite{AndHaeSchSilMP,AndHaeSchSilSIAM}. These CQs include the relaxed constant positive linear dependence condition (RCPLD) (see \cite[Theorem 4.2]{GuoZhangLin}), the constant rank of the subspace component condition (CRSC) (see \cite[Corollary 4.1]{GuoZhangLin}) and quasinormality (see \cite[Theorem 5.2]{guoyezhang-infinite}).
In this paper we will use the following sufficient conditions.
\begin{theorem}\label{ThSuffCondMS}Let $\bar z \in \Omega$ where $\Omega $ is defined as in (\ref{EqGenOptProbl}).
MSCQ holds at $\bar z$ if one of the following conditions is fulfilled:
\begin{itemize}
\item First-order sufficient condition for metric subregularity (FOSCMS) for the system $P(z)\in D$ with $P$ smooth, cf. \cite[Corollary 1]{GfrKl16}: for every $0\not=w\in T_\Omega^{\rm lin}(\bar z)$ one has
\[\nabla P(\bar z)^T\lambda=0,\;\lambda\in N_D(P(\bar z);\nabla P(\bar z)w)\quad\Longrightarrow\quad\lambda=0.\]
\item Second-order sufficient condition for metric subregularity (SOSCMS) for the inequality system $P(z)\in \mathbb{R}^s_-$ with $P$ twice
Fr\'echet differentiable at $\bar z$, cf. \cite[Theorem 6.1]{Gfr11}: For every $0\not=w\in T_\Omega^{\rm lin}(\bar z)$ one has
\[\nabla P(\bar z)^T\lambda=0,\;\lambda\in N_{\mathbb{R}^s_-}(P(\bar z)),\; w^T\nabla^2(\lambda^TP)(\bar z)w\geq 0\quad\Longrightarrow\quad\lambda=0.\]
\end{itemize}
\end{theorem}
In the case $T_\Omega^{\rm lin}(\bar z)=\{0\}$, FOSCMS is automatically satisfied. By the definition of the linearized cone (\ref{lcone}), $T_\Omega^{\rm lin}(\bar z)=\{0\}$ means that
\[\nabla P(\bar z)w=\xi, \quad \xi \in T_D(P(\bar z))\;\Longrightarrow\;w=0.\]
By the graphical derivative criterion for strong metric subregularity \cite[Theorem 4E.1]{DoRo14}, this is equivalent to saying that the set-valued map $M(z)=P(z)-D$ is strongly metrically subregular (or equivalently its inverse is isolated calm) at $(\bar z,0)$.
When the set $D$ is convex, by the relationship between the limiting normal cone and its directional version in Proposition \ref{relationship},
$$N_D(P(\bar z);\nabla P(\bar z)w)=N_D(P(\bar z) )\cap \{\nabla P(\bar z)w\}^\perp.$$
Consequently, in the case where $ T_\Omega^{\rm lin}(\bar z ) \not =\{0\}$ and $D$ is convex, FOSCMS reduces to NNAMCQ. Indeed, suppose that $\nabla P(\bar z)^T\lambda=0$ and $\lambda \in N_D(P(\bar z))$. Then $\lambda^T (\nabla P(\bar z)w)=0$ and hence $\lambda \in N_D(P(\bar z);\nabla P(\bar z)w)$, so FOSCMS yields $\lambda=0$.
Hence for convex $D$,
FOSCMS is equivalent to saying that either the strong metric subregularity or the NNAMCQ (\ref{NNAMCQ}) holds at $(\bar z,0)$. In the case of an inequality system $P(z)\leq 0$ and $ T_\Omega^{\rm lin}(\bar z ) \not =\{0\}$,
SOSCMS is obviously weaker than NNAMCQ.
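The following toy inequality system (our illustration, with hypothetical data) separates the two conditions: NNAMCQ fails while SOSCMS, and hence MSCQ, holds.

```latex
% P(z)=-z_1^2-z_2^2, D=\mathbb{R}_-, so \Omega=\mathbb{R}^2, \bar z=(0,0).
% NNAMCQ fails: \nabla P(\bar z)=0, so every \lambda\geq 0 is abnormal.
% SOSCMS holds: T_\Omega^{\rm lin}(\bar z)=\mathbb{R}^2 and for every
% 0\neq w and \lambda\geq 0,
w^T\nabla^2(\lambda P)(\bar z)w=-2\lambda\norm{w}^2\geq 0
\;\Longrightarrow\;\lambda=0,
% consistent with MSCQ, which here holds trivially since \Omega=\mathbb{R}^2.
```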
In many situations, the constraint system $P(z)\in D$ can be split into two parts such that one part can easily be verified to satisfy MSCQ. For example,
\begin{equation}\label{EqConstrSplit}P(z)=(P_1(z),P_2(z))\in D=D_1\times D_2\end{equation}
where $P_i:\mathbb{R}^d\to\mathbb{R}^{s_i}$ are smooth and $D_i\subset \mathbb{R}^{s_i}$, $i=1,2$, are closed, and for one part, say $P_2(z)\in D_2$, it is known in advance that the map $P_2(\cdot)-D_2$ is metrically subregular at $(\bar z,0)$. In this case the following theorem is useful.
\begin{theorem}
\label{ThSuffCondMSSplit}Let $P(\bar z)\in D$ with $P$ smooth and $D$ closed,
and assume that $P$ and $D$ can be written in the form \eqref{EqConstrSplit} such that the set-valued map $P_2(z)-D_2$ is metrically subregular at $(\bar z,0)$. Further assume that for every $0\not=w\in T_\Omega^{\rm lin}(\bar z)$ one has
\[\nabla P_1(\bar z)^T\lambda^1+\nabla P_2(\bar z)^T\lambda^2=0,\;\lambda^i\in N_{D_i}(P_i(\bar z);\nabla P_i(\bar z)w)\;i=1,2\;\Longrightarrow\;\lambda^1=0.\]
Then MSCQ holds at $\bar z$ for the system $P(z)\in D$.
\end{theorem}
\begin{proof}
Let the set-valued maps $M$, $M_i$ ($i=1,2$) be given by $M(z):=P(z)-D$ and $M_i(z):=P_i(z)-D_i$ ($i=1,2$), respectively. Since $P_1$ is assumed to be smooth, it is also Lipschitz near $\bar z$ and thus $M_1$ has the Aubin property around $(\bar z,0)$. Consider any direction $0\not=w\in T_\Omega^{\rm lin}(\bar z)$. By \cite[Definition 2(3.)]{Gfr13a} the limit set critical for directional metric regularity ${\rm Cr}_{\mathbb{R}^{s_1}}M((\bar z,0); w)$ with respect to $w$ and $\mathbb{R}^{s_1}$ at $(\bar z, 0)$ is defined as the collection of all elements $(v,z^\ast)\in\mathbb{R}^s\times\mathbb{R}^d$ such that there are sequences $t_k\searrow 0$, $(w_k,v_k,z_k^\ast)\to (w,v,z^\ast)$, $\lambda_k\in{\cal S}_{\mathbb{R}^s}$ and a real $\beta>0$ such that $(z_k^\ast,\lambda_k)\in\widehat N_{{\rm gph\,} M}(\bar z+t_kw_k, t_kv_k)$ and $\norm{\lambda_k^1}\geq\beta$ hold for all $k$, where $\lambda_k=(\lambda_k^1,\lambda_k^2)\in\mathbb{R}^{s_1}\times\mathbb{R}^{s_2}$. We claim that $(0,0)\not\in {\rm Cr}_{\mathbb{R}^{s_1}}M((\bar z,0); w)$. Assume on the contrary that $(0,0)\in {\rm Cr}_{\mathbb{R}^{s_1}}M((\bar z,0); w)$ and consider the corresponding sequences $(t_k,w_k,v_k,z_k^\ast,\lambda_k)$. The sequence $\lambda_k$ is bounded, and by passing to a subsequence we can assume that $\lambda_k$ converges to some $\lambda=(\lambda^1,\lambda^2)$ satisfying $\norm{\lambda^1}\geq \beta>0$. Since $(z_k^\ast,\lambda_k)\in \widehat N_{{\rm gph\,} M}(\bar z+t_kw_k, t_kv_k)$, it follows from \cite[Exercise 6.7]{RoWe98} that
$-\lambda_k\in\widehat N_D(P(\bar z+t_kw_k)-t_kv_k)$ and $z_k^\ast=-\nabla P(\bar z+t_kw_k)^T\lambda_k$, implying $-\lambda\in N_D(P(\bar z);\nabla P(\bar z)w)$ and $\nabla P(\bar z)^T(-\lambda)=\nabla P_1(\bar z)^T(-\lambda^1)+\nabla P_2(\bar z)^T(-\lambda^2)=0$. From \cite[Lemma 1]{GfrKl16} we also conclude $-\lambda^i\in N_{D_i}(P_i(\bar z);\nabla P_i(\bar z)w)$, resulting in a contradiction to the assumption of the theorem. Hence our claim $(0,0)\not\in {\rm Cr}_{\mathbb{R}^{s_1}}M((\bar z,0); w)$ holds true, and by \cite[Lemmas 2, 3, Theorem 6]{Gfr13a} it follows that $M$ is metrically subregular in direction $w$ at $(\bar z,0)$, where directional metric subregularity is defined in \cite[Definition 1]{Gfr13a}. Since by definition $M$ is metrically subregular in every direction $w\not \in T_\Omega^{\rm lin}(\bar z)$, we conclude from \cite[Lemma 2.7]{Gfr14b} that $M$ is metrically subregular at $(\bar z,0)$.
\end{proof}
We now discuss some consequences of MSCQ. First we have the following change of coordinate formula for normal cones.
\begin{proposition}\label{PropInclNormalcone}
Let $\bar z\in \Omega:=\{z| P(z)\in D\}$ with $P$ smooth and $D$ closed.
Then
\begin{equation}\label{EqInclRegNormalCone}
\widehat N_\Omega(\bar z)\supset \nabla P(\bar z)^T\widehat N_D(P(\bar z)).
\end{equation}
Further, if MSCQ holds at $\bar z$ for the system $P(z)\in D$, then
\begin{equation}\label{EqInclLimNormalCone}
\widehat N_\Omega(\bar z)\subset N_\Omega(\bar z)\subset \nabla P(\bar z)^T N_D(P(\bar z)).
\end{equation}
In particular if MSCQ holds at $\bar z$ for the system $P(z)\in D$ with convex $D$, then
\begin{equation} \label{EqInclNormalCone} \widehat N_\Omega(\bar z)= N_\Omega(\bar z)=\nabla P(\bar z)^T N_D(P(\bar z)).\end{equation}
\end{proposition}
\begin{proof}
The inclusion \eqref{EqInclRegNormalCone} follows from \cite[Theorem 6.14]{RoWe98}.
The first inclusion in (\ref{EqInclLimNormalCone}) follows immediately from the definitions of the regular/limiting normal cones, whereas the second one follows from \cite[Theorem 4.1]{HenJouOut02}. When $D$ is convex, the regular normal cone coincides with the limiting normal cone, and hence (\ref{EqInclNormalCone}) follows by combining (\ref{EqInclRegNormalCone}) and (\ref{EqInclLimNormalCone}).
\end{proof}
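The assumption MSCQ in \eqref{EqInclLimNormalCone} cannot be dropped, as the following standard one-dimensional example (worked out by us for illustration) shows:

```latex
% P(z)=z^2, D=\mathbb{R}_-, so \Omega=\{0\}, \bar z=0. MSCQ fails, since
% \dist{z,\Omega}=|z| cannot be bounded by \kappa\dist{z^2,\mathbb{R}_-}=\kappa z^2
% near 0. Accordingly, the upper estimate breaks down:
N_\Omega(\bar z)=\mathbb{R}\not\subset
\nabla P(\bar z)^T N_D(P(\bar z))=0\cdot\mathbb{R}_+=\{0\}.
```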
In the case where $D =\mathbb{R}^{s_1}_-\times\{0\}^{s_2}$, it is well known in nonlinear programming theory that MFCQ, or equivalently NNAMCQ, is a necessary and sufficient condition for the compactness of the set of Lagrange multipliers. In the case where $D\not =\mathbb{R}^{s_1}_-\times\{0\}^{s_2}$, NNAMCQ also implies boundedness of the multipliers. However, MSCQ is weaker than NNAMCQ, and hence the set of Lagrange multipliers may be unbounded if MSCQ holds but NNAMCQ fails. Nevertheless, Theorem \ref{ThBMP} shows that under MSCQ one can extract a uniformly compact subset of the multipliers.
\begin{definition}[cf. \cite{GfrMo16}]
Let $\bar z\in \Omega:=\{z| P(z)\in D\}$ with $P$ smooth and $D$ closed. We say that the {\em bounded multiplier property} (BMP) holds at $\bar z$ for the system $P(z)\in D$, if there is some modulus $\kappa\geq 0$ and some neighborhood $W$ of $\bar z$ such that for every $z\in W\cap \Omega$ and every $z^\ast\in N_{\Omega}(z)$ there is some $\lambda\in \kappa\norm{z^\ast}
{\cal B}_{\mathbb{R}^s}\cap N_D(P(z))$ satisfying
\[z^\ast=\nabla P(z)^T\lambda.\]
\end{definition}
The following theorem gives a sharper upper estimate for the normal cone than (\ref{EqInclLimNormalCone}).
\begin{theorem}\label{ThBMP} Let $\bar z\in \Omega:=\{z\,\vert\, P(z)\in D\}$ and assume that MSCQ holds at the point $\bar z$ for the system $P(z)\in D$.
Let $W$ denote an open neighborhood of $\bar z$ and let $\kappa\geq 0$ be a real such that
$$\dist{z, \Omega} \leq \kappa \dist{P(z), D} \quad \forall z\in W.$$ Then
\[N_{\Omega}(z)\subset \left \{z^\ast\in\mathbb{R}^d\,\vert\, \exists \lambda\in \kappa\norm{z^\ast} {\cal B}_{\mathbb{R}^s}\cap N_D(P(z))\;\mbox{with}\; z^\ast=\nabla P(z)^T\lambda \right \}\quad \forall z\in W.\]
In particular BMP holds at $\bar z$ for the system $P(z)\in D$.
\end{theorem}
\begin{proof} Under the assumption, the set-valued map $M(z):=P(z)-D$ is metrically subregular at $(\bar z,0)$. The definition of metric subregularity justifies the existence of the open neighborhood $W$ and the number $\kappa$ in the assumption. Hence
for each $z\in M^{-1}(0)\cap W=\Omega\cap W$ the map $M$ is also metrically subregular at $(z,0)$, and by applying \cite[Proposition 4.1]{GfrOut15} we obtain
\[N_\Omega(z)=N_{M^{-1}(0)}(z;0)\subset\{z^\ast\,\vert\, \exists \lambda\in \kappa\norm{z^\ast}{\cal B}_{\mathbb{R}^s}: (z^\ast,\lambda)\in N_{{\rm gph\,} M}((z,0);(0,0))\}.\]
It follows from \cite[Exercise 6.7]{RoWe98} that $$N_{{\rm gph\,} M}((z,0);(0,0))=N_{{\rm gph\,} M}((z,0))= \{(z^\ast,\lambda)\,\vert\, -\lambda\in N_D(P(z)), z^\ast=\nabla P(z)^T(-\lambda)\}.$$ Hence the assertion follows.
\end{proof}
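A simple example of ours illustrates BMP in the presence of an unbounded multiplier set:

```latex
% d=1: P(y)=(y,\,-y^2), D=\mathbb{R}^2_-, so \Omega=\mathbb{R}_-, \bar y=0.
% MSCQ holds with \kappa=1, since \dist{y,\Omega}=\max\{y,0\}=\dist{P(y),D}.
% For z^\ast\in N_\Omega(0)=\mathbb{R}_+, since \nabla P(0)^T\lambda=\lambda_1,
% the full multiplier set
\{\lambda\in N_D(P(0))=\mathbb{R}^2_+ \mid z^\ast=\lambda_1\}
=\{(z^\ast,\lambda_2)\mid \lambda_2\geq 0\}
% is unbounded, yet the selection \lambda=(z^\ast,0) satisfies
% \norm{\lambda}\leq\kappa\norm{z^\ast}, exactly as Theorem \ref{ThBMP} predicts.
```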
\section{Failure of MPCC-tailored constraint qualifications for problem (MPCC)}
In this section we discuss the difficulties with MPCC-tailored constraint qualifications for problem (MPCC) by considering the constraint system of problem (MPCC) in the following form
$$
\widetilde \Omega :=\left \{(x,y,\lambda): \begin{array}{l}
0=h(x,y, \lambda):=\varphi(x,y)+\nabla g(y)^T\lambda,\\
0\geq g(y) \perp -\lambda \leq 0,\\
G(x,y)\leq 0\end{array}\right \},
$$
where $\varphi:\mathbb{R}^n\times \mathbb{R}^m\to\mathbb{R}^m$ and $G:\mathbb{R}^n\times\mathbb{R}^m\to \mathbb{R}^p$ are continuously differentiable and $g:\mathbb{R}^m\to\mathbb{R}^q$ is twice continuously differentiable.
Given a triple $(\bar x,\bar y,\bar\lambda)\in \widetilde \Omega $ we define the following index sets of active constraints:
\begin{eqnarray*}
&&{\cal I}_g:={\cal I}_g(\bar y,\bar\lambda):=\{i{\in \{1,\ldots,q\}}\,\vert\, g_i(\bar y)=0, \bar\lambda_i>0\},\\
&&{\cal I}_\lambda:={\cal I}_\lambda(\bar y,\bar\lambda):=\{i{\in \{1,\ldots,q\}}\,\vert\, g_i(\bar y)<0, \bar\lambda_i=0\},\\
&&{\cal I}_0:={\cal I}_0(\bar y,\bar\lambda):=\{i{\in \{1,\ldots,q\}}\,\vert\, g_i(\bar y)=0, \bar\lambda_i=0\},\\
&&{\cal I}_G:={\cal I}_G(\bar x,\bar y):=\{i{\in \{1,\ldots,p\}}\,\vert\, G_i(\bar x,\bar y)=0\}.
\end{eqnarray*}
\begin{definition}[\cite{ScheelScholtes}]
We say that MPCC-MFCQ holds at $(\bar x,\bar y,\bar\lambda)$ if the gradient vectors
\begin{equation}\label{MPEC-MFCQ} \nabla h_i(\bar x, \bar y, \bar\lambda), i=1,\dots, m, \ (0, \nabla g_i(\bar y) ,0), i\in {\cal I}_g\cup {\cal I}_0, \ (0,0,e_i) , i\in {\cal I}_\lambda\cup {\cal I}_0,
\end{equation}
where $e_i$ denotes the unit vector with the $i$-th component equal to $1$, are linearly independent and there exists a vector $(d_x,d_y,d_\lambda)\in\mathbb{R}^n\times\mathbb{R}^m\times\mathbb{R}^q$ orthogonal to the vectors in (\ref{MPEC-MFCQ}) and such that
$$\nabla G_i(\bar x,\bar y)(d_x,d_y)<0, i\in {\cal I}_G.$$
We say that {MPCC-LICQ} holds at $(\bar x,\bar y,\bar\lambda)$ if the gradient vectors
$$\nabla h_i(\bar x, \bar y, \bar\lambda), i=1, \dots, m, \ (0,\nabla g_i(\bar y) ,0), i\in {\cal I}_g\cup {\cal I}_0, \ (0,0,e_i) , i\in {\cal I}_\lambda\cup {\cal I}_0 ,\ (\nabla G_i (\bar x,\bar y),0), i\in {\cal I}_G
$$
are linearly independent.
\end{definition}
{MPCC-MFCQ} implies that for every partition $(\beta_1,\beta_2)$ of ${\cal I}_0$ the branch
\begin{equation}\label{branch}
\left \{ \begin{array}{l}
\varphi(x,y)+\nabla g(y)^T\lambda=0,\\
g_i(y)=0,\ \lambda_i\geq 0,\ i\in {\cal I}_g,\quad g_i(y) \leq 0,\ \lambda_i=0,\ i\in {\cal I}_\lambda,\\
g_i(y)=0,\ \lambda_i\geq 0,\ i\in\beta_1,\quad g_i(y)\leq 0,\ \lambda_i=0,\ i\in\beta_2,\\
G(x,y)\leq 0
\end{array} \right.
\end{equation} satisfies MFCQ at $(\bar x,\bar y,\bar\lambda)$.
We now show that {MPCC}-MFCQ never {holds} for (MPCC) if the lower level program has more than one multiplier.
\begin{proposition}\label{Prop5}
Let $(\bar x,\bar y,\bar\lambda)\in \widetilde \Omega$ and assume that there exists a second multiplier $\hat\lambda\not=\bar\lambda$ such that $(\bar x,\bar y,\hat \lambda)\in \widetilde \Omega$. Then for every partition $(\beta_1,\beta_2)$ of ${\cal I}_0$ the branch (\ref{branch})
does not fulfill MFCQ at $(\bar x,\bar y,\bar\lambda)$.
\end{proposition}
\begin{proof}
Since $\nabla g(\bar y)^T(\hat\lambda-\bar\lambda)=0$, {$(\hat\lambda-\bar\lambda)_i\geq 0$, $i\in {{\cal I}_\lambda\cup \beta_2}$ and $\hat\lambda-\bar\lambda\not =0$,} the assertion follows immediately.
\end{proof}
Since {MPCC}-LICQ is stronger than {MPCC}-MFCQ, we immediately obtain the following corollary.
\begin{corollary}\label{Cor1}Let $(\bar x,\bar y,\bar\lambda)\in \widetilde \Omega$ and assume that there exists a second multiplier $\hat\lambda\not=\bar\lambda$ such that $(\bar x,\bar y,\hat \lambda)\in \widetilde \Omega$. Then {MPCC}-LICQ fails at $(\bar x,\bar y,\bar\lambda)$.
\end{corollary}
{It is worth noting that our
{result in Proposition \ref{Prop5} is} only valid under the assumption that $g(y)$ is independent of $x$. In the case of bilevel programming where the
lower level problem has a constraint dependent of the upper level variable, an example
given in \cite[Example 4.10]{Mehlitz-Wachsmuth} shows that if the multiplier is not unique, then the corresponding MPCC-MFCQ may hold at some of the multipliers and fail to hold at others.}
\begin{definition}[see e.g. \cite{FleKanOut07}] Let $(\bar x,\bar y,\bar\lambda)$ be feasible for (MPCC). We say {MPCC-ACQ}
and
{MPCC-GCQ} hold if
$$T_{\widetilde \Omega }(\bar x,\bar y,\bar\lambda)=T_{\rm MPCC}^{\rm lin}(\bar x,\bar y,\bar\lambda) \mbox{ and } \widehat N_{\widetilde \Omega}(\bar x,\bar y,\bar\lambda)=(T_{\rm MPCC}^{\rm lin}(\bar x,\bar y,\bar\lambda))^\circ$$
respectively, where
\begin{eqnarray*}\lefteqn{T_{\rm MPCC}^{\rm lin}(\bar x,\bar y,\bar\lambda)}\\
&:=&\left \{(u,v,\mu)\in\mathbb{R}^n\times\mathbb{R}^m\times\mathbb{R}^q\,\vert\,
\begin{array}{l}\nabla_{x} \varphi(\bar x,\bar y)u+\big(\nabla_y \varphi(\bar x,\bar y)+\nabla^2(\bar\lambda^Tg)(\bar y)\big)v +\nabla g(\bar y)^T\mu=0,\\
\nabla g_i(\bar y)v=0,i\in {\cal I}_g,\;\mu_i=0,i\in {\cal I}_\lambda,\\
\nabla g_i(\bar y)v\leq 0,\mu_i\geq 0,\mu_i\nabla g_i(\bar y)v=0, i\in {\cal I}_0,\\
\nabla G_i(\bar x,\bar y)(u,v)\leq 0,i\in {\cal I}_G
\end{array}\right \}\end{eqnarray*}
is the MPEC linearized cone at $(\bar x,\bar y,\bar\lambda)$.
\end{definition}
\noindent Note that {MPCC}-ACQ and {MPCC}-GCQ are the GACQ and GGCQ, respectively, for the equivalent formulation of the set $\widetilde \Omega$ in the form $P(z)\in D$ with $D$ involving the complementarity set
$$D_{cc}:=\{(a,b)\in \mathbb{R}_-^q\times \mathbb{R}_-^q\,\vert\, a^Tb=0\}.$$
{MPCC}-MFCQ implies {MPCC}-ACQ (cf. \cite{FleKan05}), and from the definitions it is easy to see that {MPCC}-ACQ is stronger than {MPCC}-GCQ. Under {MPCC}-GCQ, it is known that a local optimal solution of (MPCC) must be an M-stationary point (\cite[Theorem 14]{FleKanOut07}). Although {MPCC}-GCQ is weaker than most other {MPCC-tailored} constraint qualifications,
the following example shows that the constraint qualification {MPCC}-GCQ can still be violated when the multiplier for the lower level is not unique. In contrast to \cite[Example 6]{Ad-Hen-Out}, all the constraints are convex.
\begin{example}\label{Ex1}
Consider MPEC
\begin{eqnarray}
\min_{x,y} && F(x,y):=x_1-\frac{3}{2} y_1 + x_2-\frac 32 y_2- y_3 \nonumber \\
\mbox{s.t.} && 0\in \varphi(x,y)+N_\Gamma( y), \label{EXGE}\\
&& G_1(x,y)=G_1(x):=-x_1-2x_2\leq 0,\nonumber \\
&& G_2(x,y)=G_2(x):=-2x_1-x_2\leq 0,\nonumber
\end{eqnarray}
where
$$\varphi(x,y):=\left(\begin{array}{c}
y_1-x_1\\
y_2-x_2\\
-1\end{array}\right), \quad \Gamma:=\left\{y\in \mathbb{R}^3\,\big|\,
g_1(y):=y_3+\frac12 y_1^2\leq 0,\;g_2(y):=y_3+\frac12 y_2^2\leq 0 \right\}.$$
Let $\bar x=(0,0)$, $\bar y=(0,0,0)$.
The lower level inequality system $g(y)\leq 0$ is convex and satisfies the Slater condition, and therefore $y$ is a solution to the parametric generalized equation (\ref{EXGE}) if and only if $y'=y$ is a global minimizer of the optimization problem:
$\displaystyle \min_{y'} ~\langle \varphii(x,y), y' \rangle \
\mbox{ s.t. } y'\in \Gamma,$ and if and only if there is a multiplier $\lambda$ fulfilling KKT-conditions
\begin{eqnarray}
\label{EqKKT} &&\left(\begin{array}{c}
y_1-x_1+\lambda_1y_1\\
y_2-x_2+\lambda_2y_2\\
-1+\lambda_1+\lambda_2\end{array}\right)=\left(\begin{array}{c}
0\\0\\0
\end{array}\right),\\
\nonumber && 0\geq y_3+\frac12 y_1^2\perp -\lambda_1\leq 0,\\
\nonumber &&0\geq y_3+\frac12 y_2^2\perp-\lambda_2\leq 0.
\end{eqnarray}
{Let ${\cal F}:=\{x\,\vert\, G_1(x)\leq 0, G_2(x)\leq 0\}$. Then ${\cal F}={\cal F}_1\cup {\cal F}_2\cup{\cal F}_3$ where
\begin{eqnarray*}
{\cal F}_1&:=&\left \{(x_1,x_2)\in \mathbb{R}^2 \,\vert\, 2\vert x_2\vert\leq x_1 \right \},\\
{\cal F}_2&:=&\left\{(x_1,x_2)\in \mathbb{R}^2\,\vert\, \frac {x_1}2\leq x_2\leq 2x_1\right\},\\
{\cal F}_3&:=&\left\{(x_1,x_2)\in \mathbb{R}^2\,\vert\, 2\vert x_1\vert\leq x_2\right\}.
\end{eqnarray*}
Straightforward calculations yield that for each $x\in {\cal F}$ there exists a unique solution $y(x)$, which is given by
\[y(x)=\begin{cases}(\frac{x_1}2, x_2,-\frac 18 x_1^2)& \mbox{if $x\in{\cal F}_1$,}\\
( \frac{x_1+x_2}3, \frac{x_1+x_2}3,-\frac1{18}(x_1+x_2)^2)& \mbox{if $x\in{\cal F}_2$,}\\
( x_1, \frac{x_2}2,-\frac 18 x_2^2)& \mbox{if $x\in{\cal F}_3$.}
\end{cases}\]}
Further, at $\bar x=(0,0)$ we have $y(\bar x)=(0,0,0)$ and the set of multipliers is
$${ \Lambda:=}\{\lambda\in \mathbb{R}_+^2| \lambda_1+\lambda_2=1\},$$
while for all {$x\not=(0,0)$} the gradients of the lower level constraints active at $y(x)$ are linearly independent and the unique multiplier is given by
\begin{equation}\label{lambda}\lambda(x)=\begin{cases}(1,0)& \mbox{if $x\in{\cal F}_1$,}\\
( \frac{2x_1-x_2}{x_1+x_2}, \frac{2x_2-x_1}{x_1+x_2})&\mbox{if $x\in{\cal F}_2$,}\\
( 0,1)& \mbox{if $x\in{\cal F}_3$.}
\end{cases}\end{equation}
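For the reader who wishes to check these formulas, the following self-contained Python sketch verifies that $(y(x),\lambda(x))$ satisfies the KKT system \eqref{EqKKT} at one sample point in each of the regions ${\cal F}_1,{\cal F}_2,{\cal F}_3$; the helper names and sample points are our illustrative choices, not part of the example.

```python
# Numerical sanity check of the closed-form solution map y(x) and the
# multiplier map lambda(x) of Example 1 against the KKT system (EqKKT).
# Helper names and sample points are illustrative choices.

def y_of_x(x1, x2, region):
    if region == 1:                      # x in F1: 2|x2| <= x1
        return (x1 / 2, x2, -x1 ** 2 / 8)
    if region == 2:                      # x in F2: x1/2 <= x2 <= 2*x1
        s = x1 + x2
        return (s / 3, s / 3, -s ** 2 / 18)
    return (x1, x2 / 2, -x2 ** 2 / 8)    # x in F3: 2|x1| <= x2

def lam_of_x(x1, x2, region):
    if region == 1:
        return (1.0, 0.0)
    if region == 2:
        s = x1 + x2
        return ((2 * x1 - x2) / s, (2 * x2 - x1) / s)
    return (0.0, 1.0)

def kkt_residual(x1, x2, y, lam):
    """Max violation of stationarity, feasibility and complementarity."""
    y1, y2, y3 = y
    l1, l2 = lam
    g1 = y3 + 0.5 * y1 ** 2
    g2 = y3 + 0.5 * y2 ** 2
    terms = (y1 - x1 + l1 * y1,             # stationarity
             y2 - x2 + l2 * y2,
             -1 + l1 + l2,
             l1 * g1, l2 * g2,              # complementarity
             max(g1, 0.0), max(g2, 0.0),    # primal feasibility
             max(-l1, 0.0), max(-l2, 0.0))  # dual feasibility
    return max(abs(t) for t in terms)

for (x1, x2), r in [((1.0, 0.2), 1), ((1.0, 1.0), 2), ((0.2, 1.0), 3)]:
    assert kkt_residual(x1, x2, y_of_x(x1, x2, r), lam_of_x(x1, x2, r)) < 1e-12
```

The residual is zero (up to rounding) in all three regions, confirming the case distinction above.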
Since
\[F(x,y(x))=\begin{cases}
\frac 14 x_1-\frac 12 x_2+\frac 18 x_1^2&\mbox{if $x\in{\cal F}_1$},\\
\frac1{18}(x_1+x_2)^2&\mbox{if $x\in{\cal F}_2$},\\
\frac 14 x_2-\frac 12 x_1+\frac 18 x_2^2&\mbox{if $x\in{\cal F}_3$},\\
\end{cases}\]
{and ${\cal F}={\cal F}_1\cup {\cal F}_2\cup{\cal F}_3$,} we see that $(\bar x,\bar y)$ is a globally optimal solution of the MPEC.
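Global optimality can also be checked numerically: since each branch of $F(x,y(x))$ is nonnegative, a grid scan over the feasible region suffices. A minimal sketch (the grid resolution and the region test are our choices):

```python
# Grid check that the reduced objective F(x, y(x)) of Example 1 is
# nonnegative on F = F1 u F2 u F3 and vanishes at xbar = (0, 0).

def region(x1, x2):
    if x1 >= 2 * abs(x2):
        return 1                          # F1
    if x2 >= 2 * abs(x1):
        return 3                          # F3
    if x1 > 0 and x1 / 2 <= x2 <= 2 * x1:
        return 2                          # F2
    return None                           # infeasible point

def reduced_F(x1, x2):
    r = region(x1, x2)
    if r == 1:
        return 0.25 * x1 - 0.5 * x2 + 0.125 * x1 ** 2
    if r == 2:
        return (x1 + x2) ** 2 / 18
    if r == 3:
        return 0.25 * x2 - 0.5 * x1 + 0.125 * x2 ** 2
    return None

n = 80
vals = [reduced_F(-2 + 4 * i / n, -2 + 4 * j / n)
        for i in range(n + 1) for j in range(n + 1)]
feasible_vals = [v for v in vals if v is not None]
assert min(feasible_vals) >= -1e-12      # F(x, y(x)) >= 0 on the grid
assert reduced_F(0.0, 0.0) == 0.0        # value at xbar
```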
The original problem is equivalent to the following MPCC:
\begin{eqnarray*}
\min_{x,y,\lambda}&& x_1-\frac 32 y_1 + x_2-\frac 32 y_2- y_3\\
\mbox{s.t.}&&\mbox{$x,y,\lambda$ {fulfill \eqref{EqKKT},}}\\
&&-2x_1-x_2\leq 0,\\
&&-x_1-2x_2\leq 0.
\end{eqnarray*}
The feasible region of this problem is
\begin{eqnarray*}
\widetilde \Omega=\bigcup_{\bar x\not=x\in {\cal F}}\{(x,y(x),\lambda(x))\}
{\cup(\{(\bar x,\bar y)\}\times \Lambda)}.
\end{eqnarray*}
Any $(\bar x,\bar y, \lambda)$ where $\lambda \in {\Lambda}$ is a globally optimal solution. However
it is easy to verify that unless $\lambda_1=\lambda_2=0.5$, the point $(\bar x, \bar y, \lambda)$ is not even a weakly stationary point, implying by \cite[Theorem 7]{FleKanOut07} that {MPCC}-GCQ and consequently {MPCC}-ACQ fail to hold. Now consider $\lambda=(0.5,0.5)$.
The MPEC linearized cone $T^{\rm lin}_{\rm MPCC}(\bar x,\bar y,\lambda)$ is the collection of all $(u,v,\mu)$ such that
\begin{equation}\label{KKTeqn}
\left(\begin{array}{c}
1.5v_1-u_1\\
1.5v_2-u_2\\
\mu_1+\mu_2\end{array}\right)=\left(\begin{array}{c}
0\\
0\\
0\end{array}\right),\quad
\begin{array}{l}v_3=0,\\
-2u_1-u_2\leq 0,\; -u_1-2u_2\leq 0.\end{array}
\end{equation}
Next we compute the actual tangent cone $T_{\widetilde \Omega}(\bar x,\bar y,\lambda)$. Consider sequences $t_k\downarrow 0$, $(u^k,v^k,\mu^k)\to (u,v,\mu)$ such that $(\bar x,\bar y,\lambda)+t_k(u^k,v^k,\mu^k)\in\widetilde\Omega$. If $u^k\not=0$ for infinitely many $k$, then {$\bar x+t_k u^k\not =0$} and hence $(\bar y+t_kv^k,\lambda+t_k\mu^k)=(y(\bar x+t_ku^k),\lambda(\bar x+t_ku^k))$ for those $k$. Since $\lambda=(0.5,0.5)$, it follows from (\ref{lambda}) that $\bar x+t_ku^k\in{\cal F}_2$ for infinitely many $k$, implying, by passing to a subsequence if necessary,
\[v=\lim_{k\to \infty}\frac{y(\bar x+t_ku^k)-\bar y}{t_k}=\frac 13(u_1+u_2,u_1+u_2,0)\]
and
\begin{eqnarray*}\mu&=&\lim_{k\to\infty}\frac{\lambda(\bar x+t_ku^k)-\lambda}{t_k}=
\lim_{k\to\infty}\frac{(\frac{2u_1^k-u_2^k}{u_1^k+u_2^k}, \frac{2u_2^k-u_1^k}{u_1^k+u_2^k})-(0.5,0.5)}{t_k}\\
&=&\lim_{k\to\infty}1.5\frac{(\frac{u_1^k-u_2^k}{u_1^k+u_2^k}, \frac{u_2^k-u_1^k}{u_1^k+u_2^k})}{t_k}.\end{eqnarray*}
Hence $v_1=v_2=\frac{1}{3}(u_1+u_2),\ v_3=0$ {and $\mu_1+\mu_2=0$.}
{Also from (\ref{KKTeqn}), we have $u_1=u_2$ since $v_1=v_2$ and the tangent cone $T_{\widetilde \Omega}(\bar x,\bar y,\lambda)$ is always a subset of the MPEC linearized cone $T^{\rm lin}_{\rm MPCC}(\bar x,\bar y,\lambda)$ (see e.g. \cite[Lemma 3.2]{FleKan05}).}
Further, since $\bar x+t_ku^k\in{\cal F}_2$, we must have $u_1\geq 0$.
If $u^k=0$ for all but finitely many $k$, then we have $v^k=0$ and $\lambda+t_k \mu^k\in\Lambda$ implying $\mu_1+\mu_2=0$. Putting all together, we obtain that
the actual tangent cone $T_{\widetilde\Omega}(\bar x,\bar y,\lambda)$ to the feasible set is the collection of all $(u,v,\mu)$ satisfying
\begin{eqnarray*}
&&u_1=u_2\geq 0, v_1=v_2=\frac 23 u_1,\\
&&v_3=0, \mu_1+\mu_2=0.
\end{eqnarray*}
Now it is easy to see that $T_{\widetilde\Omega}(\bar x,\bar y,\lambda)\not=T^{\rm lin}_{\rm MPCC}(\bar x,\bar y,\lambda)$. Moreover since both $T_{\widetilde\Omega}(\bar x,\bar y,\lambda)$ and $T^{\rm lin}_{\rm MPCC}(\bar x,\bar y,\lambda)$ are convex polyhedral {sets}, one also has
$(T_{\widetilde\Omega}(\bar x,\bar y,\lambda))^\circ\not=(T^{\rm lin}_{\rm MPCC}(\bar x,\bar y,\lambda))^\circ$
and thus {MPCC}-GCQ does not hold for $\lambda=(0.5,0.5)$ either.
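The tangent-cone computation can be illustrated with difference quotients along feasible arcs. The following sketch (step sizes and directions are our choices) confirms that along $x(t)=t(1,1)\in{\cal F}_2$ the quotients reproduce $v_1=v_2=\frac 23 u_1$ and $\mu=0$, while along $x(t)=t(1,0)\in{\cal F}_1$ the multiplier quotient is unbounded, so directions with $u_1\neq u_2$ cannot be tangent at $\lambda=(0.5,0.5)$.

```python
# Difference-quotient illustration of the tangent cone of Example 1 at
# (xbar, ybar, lambda) = ((0,0), (0,0,0), (0.5, 0.5)).

lam_bar = (0.5, 0.5)

def y_F2(x1, x2):                 # solution map on F2
    s = x1 + x2
    return (s / 3, s / 3, -s ** 2 / 18)

def lam_F2(x1, x2):               # multiplier map on F2
    s = x1 + x2
    return ((2 * x1 - x2) / s, (2 * x2 - x1) / s)

# Along x(t) = t*(1,1): u = (1,1), and the difference quotients give
# v = (2/3, 2/3, 0) and mu = (0, 0), i.e. v1 = v2 = (2/3)*u1.
for t in (1e-2, 1e-4, 1e-6):
    v = tuple(c / t for c in y_F2(t, t))
    mu = tuple((l - lb) / t for l, lb in zip(lam_F2(t, t), lam_bar))
    assert abs(v[0] - 2 / 3) < 1e-12 and abs(v[1] - 2 / 3) < 1e-12
    assert abs(v[2]) < t          # v3 = -2t/9 -> 0
    assert mu == (0.0, 0.0)

# Along x(t) = t*(1,0) in F1 we have lambda(x(t)) = (1, 0), so the
# quotient (lambda(x(t)) - lam_bar)/t blows up as t -> 0.
quotients = [abs((1.0 - lam_bar[0]) / t) for t in (1e-2, 1e-4, 1e-6)]
assert quotients[0] < quotients[1] < quotients[2]
```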
\end{example}
\section{Sufficient condition for MSCQ}
As discussed in the introduction and Section 3, there are many difficulties involved in reformulating an MPEC as (MPCC). In this section, we turn our attention to problem (MPEC)
with the constraint system defined in the following form
\begin{equation}\label{Omega}
\Omega:=\left \{(x,y): \begin{array}{l}
0\in \varphi(x,y)+ \widehat{N}_\Gamma(y),\\
G(x,y)\leq 0\end{array}\right \},
\end{equation}
where
$\Gamma:=\{y\in \mathbb{R}^m | g(y)\leq 0\}$,
$\varphi:\mathbb{R}^n\times \mathbb{R}^m\to\mathbb{R}^m$ and $G:\mathbb{R}^n\times\mathbb{R}^m\to \mathbb{R}^p$
are continuously differentiable and $g:\mathbb{R}^m\to\mathbb{R}^q$ is twice continuously differentiable.
Let $(\bar x,\bar y)$ be a feasible solution of problem (MPEC).
We assume that MSCQ is fulfilled for the constraint $g(y)\leq 0$ at $\bar y$. Then by definition MSCQ also holds for all points $y\in\Gamma$ near $\bar y$ and by Proposition \ref{PropInclNormalcone} the following equations hold for such $y$:
\[N_\Gamma(y)=\widehat N_\Gamma(y)=\nabla g(y)^T N_{\mathbb{R}^q_-}(g(y)),\]
where $N_{\mathbb{R}^q_-}(g(y))=\{\lambda\in\mathbb{R}^q_+\,\vert\, \lambda_i=0, i\not\in {\cal I}(y)\}$ {and ${\cal I}(y):=\{i\in\{1,\ldots,q\}\,\vert\, g_i(y)=0\}$ is the index set of active inequality constraints.}
For the sake of simplicity we do not include equality constraints in either the upper or the lower level constraints. We are using MSCQ as the basic constraint qualification for both the upper and the lower level constraints and this allows us to write an equality constraint $h(x)=0$ equivalently as two inequality constraints $h(x)\leq0,\ -h(x)\leq 0$ without affecting MSCQ.
In the case where $\Gamma$ is convex, MSCQ is proposed in \cite{YeYe} as a constraint qualification for the M-stationary condition. Two types of sufficient conditions were given for MSCQ. One is the case when all involved functions are affine and the other is when metric regularity holds. In this section
by making use of FOSCMS for the split system in Theorem \ref{ThSuffCondMSSplit}, we derive a new sufficient condition for MSCQ for the constraint system (\ref{Omega}).
Applying the new constraint qualification to the problem in Example \ref{Ex1}, we show that, in contrast to the MPCC reformulation, under which even the weakest constraint qualification {MPCC}-GCQ fails at $(\bar x, \bar y, \lambda)$ for all multipliers $\lambda$, MSCQ holds at $(\bar x, \bar y)$ for the original formulation.
In order to apply FOSCMS in Theorem \ref{ThSuffCondMSSplit}, we need to calculate the linearized cone $T_\Omega^{\rm lin} (\bar z)$ and consequently the tangent cone $T_{{\rm gph}\widehat{N}_\Gamma}(\bar y, -\varphi(\bar x, \bar y))$. We now perform this task. First we introduce some notation.
Given vectors $y\in\Gamma$, $y^\ast\in\mathbb{R}^m$, consider the {\em set of multipliers}
\begin{eqnarray}\label{Lambda}
\Lambda(y,y^\ast):=\big\{\lambda \in\mathbb{R}^q_+\big|\;\nabla g(y)^T\lambda=y^\ast, \lambda_i=0, i\not\in {\cal I}(y)
\big\}.
\end{eqnarray}
{For a multiplier $\lambda$, the corresponding collection of {\em strict complementarity indexes} is denoted by}
\begin{eqnarray}\label{EqI+}
I^+(\lambda):=\big\{i\in \{1,\ldots,q\}\big|\;\lambda_i>0\big\}\;\mbox{ for }\;\lambda=(\lambda_1,\ldots,\lambda_q)\in\mathbb{R}^q_+.
\end{eqnarray}
Denote by ${\cal E}(y,y^\ast)$ the collection of all the {\em extreme points} of the closed and convex set of multipliers $\Lambda(y,y^\ast)$ and recall that $\lambda\in\Lambda(y,y^\ast)$ belongs to ${\cal E}(y,y^\ast)$ if and only if the family of gradients $\{\nabla g_i(y)\,\vert\, i\in I^+(\lambda)\}$ is linearly independent. Further ${\cal E}(y,y^\ast)\ne\emptyset$ if and only if $\Lambda(y,y^\ast)\ne\emptyset$.
To proceed further, recall the notion of the {\em critical cone} to $\Gamma$ at $(y,y^\ast)\in{\rm gph\,}
\widehat{N}_\Gamma$ given by
$K(y,y^\ast):=T_\Gamma(y)\cap\{y^\ast\}^\perp$
and define the {\em multiplier set in a direction} $v\in K(y,y^\ast)$ by
\begin{eqnarray}\label{EqDir-mult}
\Lambda(y,y^\ast;v):=\mathop{\rm arg\,max}\limits\big\{v^T\nabla^2(\lambda^Tg)(y)v\,\vert\, \lambda\in\Lambda(y,y^\ast)\big\}.
\end{eqnarray}
Note that $\Lambda(y,y^\ast;v)$ is the solution set of a linear optimization problem and therefore $\Lambda(y,y^\ast;v)\cap {\cal E}(y,y^\ast)\not=\emptyset$ whenever $\Lambda(y,y^\ast;v)\not=\emptyset$. Further we denote the corresponding optimal function value by
\begin{eqnarray}\label{EqDir-multval}
\theta(y,y^\ast;v):=\max\big\{v^T\nabla^2(\lambda^Tg)(y)v\,\vert\, \lambda\in\Lambda(y,y^\ast)\big\}.
\end{eqnarray}
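For the data of Example \ref{Ex1} at $\bar y=(0,0,0)$ and $y^\ast=(0,0,1)$ one has $\Lambda(\bar y,y^\ast)=\{\lambda\geq 0 : \lambda_1+\lambda_2=1\}$ and $v^T\nabla^2(\lambda^Tg)(\bar y)v=\lambda_1v_1^2+\lambda_2v_2^2$, so the linear program defining $\Lambda(\bar y,y^\ast;v)$ attains its maximum at a vertex of the simplex, illustrating that $\Lambda(y,y^\ast;v)$ always meets ${\cal E}(y,y^\ast)$. A minimal Python sketch of this computation (all helper names are ours):

```python
# Directional multiplier set Lambda(ybar, y*; v) of (EqDir-mult) for
# Example 1 at ybar = (0,0,0), y* = (0,0,1): Lambda(ybar, y*) is the
# simplex {l >= 0 : l1 + l2 = 1}, the objective l1*v1^2 + l2*v2^2 is
# linear in l, so the maximum is attained at a vertex (extreme point).

def directional_multipliers(v1, v2):
    """Vertex solutions of max l1*v1^2 + l2*v2^2 over the simplex."""
    a, b = v1 ** 2, v2 ** 2       # objective values at (1,0) and (0,1)
    if a > b:
        return [(1.0, 0.0)]
    if b > a:
        return [(0.0, 1.0)]
    return [(1.0, 0.0), (0.0, 1.0)]  # tie: the whole segment is optimal

def theta(v1, v2):
    """Optimal value theta(ybar, y*; v) of (EqDir-multval)."""
    return max(v1 ** 2, v2 ** 2)

assert directional_multipliers(2.0, 1.0) == [(1.0, 0.0)]
assert directional_multipliers(1.0, 3.0) == [(0.0, 1.0)]
assert theta(2.0, 1.0) == 4.0
```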
The critical cone to $\Gamma$ has the following two expressions.
\begin{proposition}(see e.g. \cite[Proposition 4.3]{GfrMo15a})\label{criticalcone}
Suppose that MSCQ holds for the {system} $g(y) \in \mathbb{R}_-^q$ at $y$. Then the critical cone to $\Gamma$ at $(y,y^\ast)\in{\rm gph\,}\widehat N_\Gamma$ is a convex polyhedron that can be explicitly expressed as
$$K(y,y^*)=\{v| \nabla g(y)v \in T_{\mathbb{R}^q_-}(g(y)), v^T y^*=0\}.$$
Moreover for any $\lambda \in \Lambda(y,y^\ast)$,
$$K(y,y^*)=\left \{v\,\Big\vert\, \nabla g_i(y)v \left \{\begin{array}{ll}
=0 &\mbox{ if } \lambda_i>0\\
\leq 0 &\mbox{ if } \lambda_i=0 \end{array} \right.,\ i\in {\cal I}(y) \right \}.$$
\end{proposition}
Based on the expression for the critical cone, it is easy to see that the normal cone to the critical cone has the following expression.
\begin{lemma}\cite[Lemma 1]{{GfrOut14}} \label{lemma1} Assume MSCQ holds at $y$ for
the system $g(y)\in\mathbb{R}^q_-$. Let $v\in K(y,y^*), \lambda \in \Lambda(y, y^*)$. Then
$${N}_{K(y,y^*)}(v)=\{\nabla g(y)^T\mu|\mu^T\nabla g(y)v=0, \mu\in
T_{N_{\mathbb{R}_-^q}(g(y))}(\lambda)\}.$$
\end{lemma}
We are now ready to calculate the tangent cone to the graph of $\widehat N_\Gamma$. This result will be needed in the sufficient condition for MSCQ and it is also of an independent interest.
The first equation in the formula \eqref{EqTanConeGrNormalCone} was first shown in \cite[Theorem~1]{GfrOut14} under the extra assumption that the metric regularity holds locally uniformly except for $\bar y$, whereas in \cite{ChiHi16} this extra assumption was removed.
\begin{theorem}\label{ThTanConeGrNormalCone}
Given $\bar y\in\Gamma$, assume that MSCQ holds at $\bar y$ for
the system $g(y)\in\mathbb{R}^q_-$. Then there is a real $\kappa>0$ and a neighborhood $V$ of $\bar y$ such that
for any $y\in\Gamma\cap V$ and any $y^\ast\in\widehat N_\Gamma(y)$
the tangent cone to the graph of $\widehat N_\Gamma$ at $(y, y^*)$ can be calculated by
\begin{eqnarray}\label{EqTanConeGrNormalCone}
\lefteqn{T_{{\rm gph\,} \widehat N_\Gamma}(y,y^\ast)}\\
\nonumber
&=&\big\{(v,v^\ast)\in\mathbb{R}^{2m}\big|\;\exists\,\lambda\in\Lambda(y,y^\ast;v)\;\mbox{ with }\;
v^\ast\in\nabla^2(\lambda^Tg)(y)v+N_{K(y,y^\ast)}(v)\big\}\\
\nonumber&=&\big\{(v,v^\ast)\in\mathbb{R}^{2m}\big|\;\exists\,\lambda\in\Lambda(y,y^\ast;v)\cap \kappa\norm{y^\ast} {\cal B}_{\mathbb{R}^q}\;\mbox{ with }\;
v^\ast\in\nabla^2(\lambda^Tg)(y)v+N_{K(y,y^\ast)}(v)\big\},
\end{eqnarray}
where the critical cone $K(y,y^\ast)$ and the normal cone $N_{K(y,y^\ast)}(v)$ can be calculated as in Proposition \ref{criticalcone} and Lemma \ref{lemma1} respectively, and
the set ${\rm gph\,} \widehat N_\Gamma$ is geometrically derivable at $(y, y^*)$.
\end{theorem}
\begin{proof} Since MSCQ holds at $\bar y$ for
the system $g(y)\in\mathbb{R}^q_-$, we can find an open neighborhood $V$ of $\bar y$ and a real $\kappa>0$ such that
\begin{equation}
\dist{y,\Gamma}\leq \kappa\dist{g(y),\mathbb{R}^q_-}\ \forall y\in V, \label{errorb} \end{equation}
which means that MSCQ holds at every $y\in\Gamma\cap V$. Therefore $K(y,y^\ast)$ and
$N_{K(y,y^\ast)}(v)$ can be calculated as in Proposition \ref{criticalcone} and Lemma \ref{lemma1} respectively. By the proof of the first part of \cite[Theorem~1]{GfrOut14} we obtain that for every $y^\ast\in\widehat N_\Gamma(y)$,
\begin{eqnarray*}
\nonumber\lefteqn{\big\{(v,v^\ast)\in\mathbb{R}^{2m}\big|\;\exists\,\lambda\in\Lambda(y,y^\ast;v)\cap \kappa\norm{y^\ast} {\cal B}_{\mathbb{R}^q}\;\mbox{ with }\;
v^\ast\in\nabla^2(\lambda^Tg)(y)v+ N_{K(y,y^\ast)}(v)\big\}}\\
\label{EqInclAux1}&\subset &\big\{(v,v^\ast)\in\mathbb{R}^{2m}\big|\;\exists\,\lambda\in\Lambda(y,y^\ast;v)\;\mbox{ with }\;
v^\ast\in\nabla^2(\lambda^Tg)(y)v+ N_{K(y,y^\ast)}(v)\big\}\qquad\\
\label{EqInclAux2}& \subset & \big\{(v,v^\ast)\in\mathbb{R}^{2m}\big| \lim_{t\downarrow 0}t^{-1}\distb{(y+tv,y^\ast+tv^\ast), {\rm gph\,} \widehat N_\Gamma}=0\}\\
\label{EqInclAux3}&\subset & T_{{\rm gph\,} \widehat N_\Gamma}(y,y^\ast).
\end{eqnarray*}
We now show the reversed inclusion
\begin{eqnarray}
\lefteqn{T_{{\rm gph\,} \widehat N_\Gamma}(y,y^\ast)}\label{reverse}\\
&\subset&\big\{(v,v^\ast)\in\mathbb{R}^{2m}\big|\;\exists\,\lambda\in\Lambda(y,y^\ast;v)\cap \kappa\norm{y^\ast} {\cal B}_{\mathbb{R}^q}\;\mbox{ with }\;
v^\ast\in\nabla^2(\lambda^Tg)(y)v+N_{K(y,y^\ast)}(v)\big\}.\nonumber
\end{eqnarray}
Although the proof technique is essentially the same as \cite[Theorem~1]{GfrOut14}, for completeness we provide the detailed proof.
Consider $y\in \Gamma\cap V$, $y^\ast\in \widehat N_\Gamma(y)$ and
let $(v,v^*)\in T_{{\rm gph\,} \widehat N_\Gamma}(y,y^\ast)$. Then by definition of the tangent cone, there exist sequences $t_k\downarrow 0$, $v_k\rightarrow v$, $v_k^* \rightarrow v^*$ such that
$ y_k^*:={y^*}+t_kv_k^* \in \widehat N_\Gamma(y_k)$, where $y_k:=y+t_kv_k$. By passing to a subsequence if necessary we can assume that $y_k\in V$ $\forall k$ and that there is some index set $\widetilde{{\cal I}}\subset {\cal I}(y)$ such that ${\cal I}(y_k)=\widetilde{{\cal I}}$ holds for all $k$. For every $i\in {{\cal I}(y)}$ we have
\begin{equation}\label{tylor}
g_i(y_k)=g_i(y)+t_k\nabla g_i(y)v_k +o(t_k)=t_k\nabla g_i(y)v_k +o(t_k)\left \{\begin{array}{ll}
=0 &\mbox{ if } i\in \widetilde{{\cal I}},\\
\leq 0 & \mbox{ if } i\in {\cal I}(y) \setminus \widetilde{{\cal I}}.\end{array}
\right.
\end{equation}
Dividing by $t_k$ and passing to the limit we obtain
\begin{equation}\label{(21)}
\nabla g_i(y)v \left \{\begin{array}{ll}
=0 &\mbox{ if } i\in \widetilde{{\cal I}},\\
\leq 0 & \mbox{ if } i\in {\cal I}(y) \setminus \widetilde{{\cal I}},\end{array}
\right.
\end{equation}
which means $v\in T_\Gamma^{\rm lin}(y)$. Since MSCQ holds at every $y\in\Gamma\cap V$, we have that the GACQ holds at $y$ as well and hence $v\in T_\Gamma(y)$.
{Since (\ref{errorb}) holds and $y_k\in V$, $y_k^*\in \widehat N_\Gamma(y_k)=N_\Gamma(y_k)$,} by Theorem \ref{ThBMP}
there exists for each $k\in\mathbb{N}$ a multiplier $\lambda^k\in\Lambda(y_k,y_k^\ast)\cap\kappa\norm{y_k^\ast} {\cal B}_{\mathbb{R}^q}$. Since the sequence $(y_k^\ast)$ converges, there exists $c_1\geq 0$ such that $\|\lambda^k\|\leq c_1$ for all $k$.
Let
\begin{equation}\label{Psi}
\Psi_{\widetilde{{\cal I}}}(y^*):=\{\lambda\in \mathbb{R}^q|\nabla g(y)^T\lambda =y^*, \lambda_i \geq 0, i \in \widetilde{{\cal I}}, \lambda_i=0, i \not \in \widetilde{{\cal I}}\}.
\end{equation}
By Hoffman's Lemma there is some constant $\beta$ such that for every $y^*\in \mathbb{R}^m$ with
$\Psi_{\widetilde{{\cal I}}}(y^*)\not =\emptyset$ one has
\begin{equation}\dist{\lambda, \Psi_{\widetilde{{\cal I}}}(y^*)}\leq \beta\Big(\|\nabla g(y)^T\lambda -y^*\|+\sum_{i\in \widetilde{{\cal I}}}\max \{-\lambda_i, 0 \}+\sum_{i\not \in \widetilde{{\cal I}}}|\lambda_i|\Big) \quad \forall \lambda\in \mathbb{R}^q.\label{hoffman}
\end{equation}
Since
$$\nabla g(y)^T\lambda^k-y^*=t_kv_k^*+(\nabla g(y)-\nabla g(y_k))^T \lambda^k$$
and $\|\nabla g(y)-\nabla g(y_k)\|\leq c_2\|y_k-y\|=c_2 t_k \|v_k\|$ for some $c_2\geq 0$, by (\ref{hoffman}) we can find for each $k$ some
$\widetilde{\lambda}^k \in \Psi_{\widetilde{{\cal I}}}(y^*)\subset \Lambda (y,y^*)$ with
$\| \widetilde{\lambda}^k-\lambda^k\| \leq \beta t_k(\|v_k^*{\|}+c_1c_2\|v_k\|)$.
Taking $\mu^k:=(\lambda^k-\widetilde{\lambda}^k)/t_k$ we have that $(\mu^k)$ is uniformly bounded. By passing to a subsequence if necessary we may assume
that $(\lambda^k)$ and $(\mu^k)$ are convergent to some $\lambda\in \Lambda(y,y^*)\cap \kappa \norm{y^\ast} {\cal B}_{\mathbb{R}^q}$,
and some $\mu$ respectively. Obviously the sequence $(\tilde \lambda^k)$ converges to $\lambda$ as well.
Since $\lambda_i^k=\widetilde{\lambda}^k_i=0, i \not \in \widetilde{{\cal I}}$, {by virtue of (\ref{(21)})} we have ${\mu^k}^T \nabla g(y) v=0 \ \forall k$ implying
\begin{equation} \label{mu}
\mu\in (\nabla g(y) v)^\perp .
\end{equation}
Taking into account
{${\lambda^k}^T g(y_k)=0$} and (\ref{tylor}), we obtain
$$0=\lim_{k\rightarrow \infty} \frac{{{\lambda^k}^T }g(y_k)}{t_k}=\lim_{k\rightarrow \infty}{{\lambda^k}^T} \nabla g(y)v_k ={y^*}^Tv. $$ Therefore combining the above with $v\in T_\Gamma(y)$ we have
\begin{equation}\label{criticalc}
v\in K(y, y^*).
\end{equation}
Further we have for all $\lambda' \in \Lambda(y,y^\ast)$, since $\widetilde{\lambda}^k\in \Lambda(y,y^\ast)$,
\begin{eqnarray*}
0 & \leq & ({\widetilde{\lambda}^k}-\lambda')^T g(y_k) =({\widetilde{\lambda}^k}-\lambda')^T( g(y)+t_k\nabla g(y) v_k+\frac{1}{2} t_k^2 v_k^T \nabla^2 g(y) v_k +o(t_k^2))\\
&=& ({\widetilde{\lambda}^k}-\lambda')^T (\frac{1}{2} t_k^2 v_k^T \nabla^2 g(y) v_k +o(t_k^2)).
\end{eqnarray*}
Dividing by $t_k^2$ and passing to the limit we obtain
$v^T \nabla^2\big((\lambda-\lambda')^Tg\big)(y) v\geq 0 \quad \forall \lambda' \in \Lambda(y,y^\ast)$
and hence $\lambda\in \Lambda(y,y^*;v)$.
Since $$y_k^*=\nabla g(y)^T \widetilde \lambda^k+t_k v_k^*=\nabla g(y_k)^T \lambda^k,$$
we obtain
\begin{eqnarray*}
v^*&=& \lim_{k\rightarrow \infty} v_k^* =\lim_{k\rightarrow \infty}\frac{\nabla g(y_k)^T \lambda^k-\nabla g(y)^T \widetilde \lambda^k}{t_k}\\
&=&\lim_{k\rightarrow \infty}\frac{(\nabla g(y_k)-\nabla g(y))^T {\lambda}^k+\nabla g({y})^T (\lambda^k-\widetilde \lambda^k)}{t_k}\\
&=& \nabla^2 ({{\lambda}}^Tg)(y)v+\nabla g(y)^T \mu.
\end{eqnarray*}
If $\mu \in T_{{N}_{\mathbb{R}_-^q}(g(y))}(\lambda)$,
since (\ref{mu}) holds, by using Lemma \ref{lemma1} we have $\nabla g(y)^T \mu\in {{N}}_{K(y,y^*)}(v)$ and hence the inclusion (\ref{reverse})
is proved.
Otherwise, by taking into account
\[T_{{N}_{\mathbb{R}_-^q}(g(y))}(\lambda)=\{\mu\in \mathbb{R} ^q\,\vert\, \mu_i\geq 0 \mbox{ if }\lambda_i=0\}\] and $\mu_i=0$, $i\not\in \widetilde{\cal I}$,
the set $J:=\{ i\in \widetilde{\cal I} \,\vert\, \lambda_i=0,\ \mu_i <0\}$ is nonempty.
Since $\mu^k$ converges to $\mu$, we can choose some index $\bar{k}$ such that
$\mu^{\bar{k}}_i =(\lambda_i^{\bar{k}}-\widetilde\lambda_i^{\bar{k}})/ t_{\bar{k}}\leq \mu_i/2 \ \forall i \in J$. Set $\widetilde{\mu}:=\mu+2(
\widetilde \lambda^{\bar{k}}-\lambda)/t_{\bar{k}}$.
Then for all $i$ with $\lambda_i=0$ we have $\widetilde{\mu}_i\geq \mu_i$ and for all $i\in J$ we have
$$ \widetilde{\mu}_i=\mu_i +2(
\widetilde\lambda_i^{\bar{k}}-\lambda_i)/t_{\bar{k}}\geq \mu_i +2(
\widetilde\lambda_i^{\bar{k}}-\lambda_i^{\bar{k}})/t_{\bar{k}}=\mu_i-2\mu_i^{\bar{k}}\geq 0$$
and therefore $\widetilde{\mu} \in T_{N_{\mathbb{R}_-^q}(g(y))} (\lambda)$.
Observing that $\nabla g(y)^T\widetilde{\mu}=\nabla g(y)^T{\mu}$ because of $\lambda, \widetilde\lambda^{\bar k}\in \Lambda (y, y^*)$ and taking into account Lemma \ref{lemma1} we have $\nabla g(y)^T \widetilde{\mu}\in {N}_{K(y,y^*)}(v)$ and hence the inclusion (\ref{reverse})
is proved. This finishes the proof of the theorem.
\end{proof}
Since the regular normal cone is the polar of the tangent cone, the following characterization of the regular normal cone of ${\rm gph\,}\widehat N_\Gamma$ follows from the formula for the tangent cone in Theorem \ref{ThTanConeGrNormalCone}.
\begin{corollary}\label{CorSecOrd} Assume that MSCQ is satisfied for the system $g(y)\leq0$ at $\bar y\in\Gamma$. Then there is a neighborhood $V$ of $\bar y$ such that for every $(y,y^\ast)\in{\rm gph\,}\widehat N_\Gamma$ with $y\in V$ the following assertion holds: given any pair $(w^\ast,w)\in \widehat N_{{\rm gph\,}\widehat N_\Gamma}(y,y^\ast)$ we have $w\in K(y,y^\ast)$ and
\begin{equation}\label{EqBasicIneqTiltStab}
\skalp{w^\ast,w}+ w^T\nabla^2(\lambda^Tg)(y)w \leq 0\;\mbox{ whenever }\;\lambda\in\Lambda(y,y^\ast;w).
\end{equation}
\end{corollary}
\begin{proof}
Choose $V$ such that \eqref{EqTanConeGrNormalCone} holds true for every $y\in \Gamma\cap V$ and consider any $(y,y^\ast)\in{\rm gph\,}\widehat N_\Gamma$ with $y\in V$ and $(w^\ast,w)\in \widehat N_{{\rm gph\,}\widehat N_\Gamma}(y,y^\ast)$. By the definition of the regular normal cone we have $\widehat N_{{\rm gph\,}\widehat N_\Gamma}(y,y^\ast)=\big( T_{{\rm gph\,}\widehat N_\Gamma}(y,y^\ast)\big)^\circ$ and, since $\{0\}\times N_{K(y,y^\ast)}(0)\subset T_{{\rm gph\,}\widehat N_\Gamma}(y,y^\ast)$, we obtain
\[\skalp{w^\ast,0}+\skalp{w,v^\ast}\leq 0 \ \forall v^\ast \in N_{K(y,y^\ast)}(0)=K(y,y^\ast)^\circ,\]
implying $w\in {\rm cl\,}{\rm conv\,} K(y,y^\ast)=K(y,y^\ast)$. By \eqref{EqTanConeGrNormalCone} we have $(w, \nabla^2(\lambda^Tg)(y)w)\in T_{{\rm gph\,}\widehat N_\Gamma}(y,y^\ast)$ for every $\lambda\in\Lambda(y,y^\ast;w)$ and therefore the claimed inequality
\[\skalp{w^\ast,w}+\skalp{w, \nabla^2(\lambda^Tg)(y)w}=\skalp{w^\ast,w}+ w^T\nabla^2(\lambda^Tg)(y)w \leq 0\]
follows.
\end{proof}
The following result will be needed in the proof of Theorem \ref{ThSuffCondMS_FOGE}.
\begin{lemma}\label{LemBndSecOrdMult}Given $\bar y\in\Gamma$, assume that MSCQ holds at $\bar y$. Then there is a real $\kappa'>0$ such that for any $y\in\Gamma$ sufficiently close to $\bar y$, any normal vector $y^\ast\in\widehat N_\Gamma(y)$ and any critical direction $v\in K(y,y^\ast)$ one has
\begin{equation}\label{EqBndSecOrdMult}\Lambda(y,y^\ast;v)\cap{\cal E}(y,y^\ast)\cap \kappa'\norm{y^\ast} {\cal B}_{\mathbb{R}^q}\not=\emptyset.\end{equation}
\end{lemma}
\begin{proof}
Let $\kappa>0$ be chosen according to Theorem \ref{ThTanConeGrNormalCone} and consider $y\in\Gamma$ sufficiently close to $\bar y$ such that MSCQ holds at $y$ and \eqref{EqTanConeGrNormalCone} is valid for every $y^\ast\in \widehat N_\Gamma(y)$. Consider $y^\ast\in \widehat N_\Gamma(y)$ and a critical direction $v\in K(y,y^\ast)$. By \cite[Proposition 4.3]{GfrMo15a} we have $\Lambda(y,y^\ast;v)\not=\emptyset$ and, taking any $\lambda\in \Lambda(y,y^\ast;v)$, we obtain from Theorem \ref{ThTanConeGrNormalCone} that $(v,v^\ast)\in T_{{\rm gph\,} \widehat N_\Gamma}(y,y^\ast)$ with $v^\ast=\nabla^2(\lambda^Tg)(y)v$. Applying Theorem \ref{ThTanConeGrNormalCone} once more, we see that $v^\ast\in \nabla^2(\tilde\lambda^Tg)(y)v+N_{K(y,y^\ast)}(v)$ with $\tilde\lambda\in\Lambda(y,y^\ast;v)\cap \kappa\norm{y^\ast} {\cal B}_{\mathbb{R}^q}$, showing that $\Lambda(y,y^\ast;v)\cap \kappa\norm{y^\ast}{\cal B}_{\mathbb{R}^q}\not=\emptyset$. Next consider a solution $\bar\lambda$ of the linear optimization problem
\[\min\sum_{i=1}^q\lambda_i\quad\mbox{subject to }\lambda\in \Lambda(y,y^\ast;v).\]
We can choose $\bar\lambda$ as an extreme point of the polyhedron $\Lambda(y,y^\ast;v)$ implying $\bar\lambda\in {\cal E}(y,y^\ast)$.
Since $\Lambda(y,y^\ast;v)\subset \mathbb{R}^q_+$, we obtain
$$\norm{\bar\lambda}\leq \sum_{i=1}^q\vert\bar\lambda_i\vert=\sum_{i=1}^q \bar \lambda_i\leq \sum_{i=1}^q\tilde\lambda_i\leq \sqrt{q}\norm{\tilde\lambda}\leq\sqrt{q}\kappa\norm{y^\ast},$$ and hence \eqref{EqBndSecOrdMult} follows with $\kappa'=\kappa\sqrt{q}$.
\end{proof}
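The multiplier selection used in this proof (an extreme point of the multiplier set with minimal $\ell_1$-norm) can be reproduced numerically. The Python sketch below enumerates basic feasible solutions of $\{\lambda\geq 0 : \nabla g(y)^T\lambda = y^\ast\}$ by brute force; it ignores the additional directional restriction defining $\Lambda(y,y^\ast;v)$, the helper name is our own, and the two-constraint data are made up for illustration only.

```python
import itertools

import numpy as np


def extreme_point_multiplier(G, ystar, tol=1e-9):
    """Return a minimal-l1-norm extreme point of {lam >= 0 : G.T @ lam = ystar}.

    G has one row per active constraint gradient grad g_i(y).  Brute-force
    enumeration of candidate supports; only sensible for tiny systems.
    """
    q = G.shape[0]
    best = None
    supports = itertools.chain.from_iterable(
        itertools.combinations(range(q), r) for r in range(1, q + 1))
    for support in supports:
        A = G[list(support)].T                     # columns grad g_i, i in support
        sol, *_ = np.linalg.lstsq(A, ystar, rcond=None)
        lam = np.zeros(q)
        lam[list(support)] = sol
        feasible = (lam >= -tol).all() and np.allclose(G.T @ lam, ystar, atol=1e-8)
        if feasible and (best is None or lam.sum() < best.sum() - tol):
            best = lam
    return best


# illustrative data: two active constraints with identical gradients (0, 0, 1)
G = np.array([[0.0, 0.0, 1.0],
              [0.0, 0.0, 1.0]])
ystar = np.array([0.0, 0.0, 1.0])
lam = extreme_point_multiplier(G, ystar)
```

For this data the multiplier set is the simplex $\lambda_1+\lambda_2=1$, $\lambda\geq 0$, so the routine returns one of its two vertices.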
We are now in a position to state a verifiable sufficient condition for MSCQ to hold for problem (MPEC).
\begin{theorem}\label{ThSuffCondMS_FOGE}
Given $(\bar x,\bar y)\in \Omega$ defined as in (\ref{Omega}), assume that MSCQ holds both for the lower level problem constraints $g(y)\leq 0$ at $\bar y$ and for the upper level constraints $G(x,y)\leq0$ at
$(\bar x,\bar y)$. Further assume that
\begin{equation}\label{EqNonDegG}
\nabla_x G(\bar x,\bar y)^T\eta=0,\ \eta\in N_{\mathbb{R}^p_-}(G(\bar x,\bar y))\quad \Longrightarrow\quad \nabla_y G(\bar x,\bar y)^T\eta=0
\end{equation}
and assume that
there do not exist $(u,v)\not=0$, $\lambda\in\Lambda(\bar y,-\varphii(\bar x,\bar y);v)\cap {\cal E}(\bar y,-\varphii(\bar x,\bar y))$, $\eta\in\mathbb{R}^p_+$ and $w\not=0$ satisfying
\begin{eqnarray}
\label{EqSuffMS1}&&\nabla G(\bar x,\bar y)
(u,v)\in T_{\mathbb{R}^p_-}(G(\bar x,\bar y)),\; \\
\label{EqSuffMS2}
&&(v,-\nabla_{x}\varphii(\bar x,\bar y)u-\nabla_{y}\varphii(\bar x,\bar y)v)\in T_{{\rm gph\,} \widehat N_\Gamma}(\bar y,-\varphii(\bar x,\bar y)),\\
\label{EqSuffMS3}&&-\nabla_{x}\varphii(\bar x,\bar y)^Tw+\nabla_{x} G(\bar x,\bar y)^T\eta=0,\;\eta\in N_{\mathbb{R}^p_-}(G(\bar x,\bar y)),\; \eta^T\nabla G(\bar x,\bar y)(u,v)=0,\\
\label{EqSuffMS4}&&\nabla g_i(\bar y)w=0,\ i\in I^+(\lambda),\; w^T\left (\nabla_{y}\varphii(\bar x,\bar y)+\nabla^2(\lambda^Tg)(\bar y)\right )w-\eta^T\nabla_y G(\bar x,\bar y)w\leq 0,\qquad
\end{eqnarray}
where the tangent cone $ T_{{\rm gph\,} \widehat N_\Gamma}(\bar y,-\varphii(\bar x,\bar y))$ can be calculated as in Theorem \ref{ThTanConeGrNormalCone}.
Then the multifunction $M_{\rm MPEC}$ defined by
\begin{equation}\label{MPEC}
M_{\rm MPEC}{(x,y)}:=\vek{\varphii(x,y)+\widehat N_\Gamma(y)\\G(x,y)-\mathbb{R}^p_-}
\end{equation} is metrically subregular at $\big((\bar x,\bar y),0\big)$.
\end{theorem}
\begin{proof}By
Proposition \ref{PropEquGrSubReg}, it suffices to show that the multifunction $P(x,y)-D$ with $P$ and $D$ given by
$$ P(x,y):=\left(\begin{array}{c}y,-\varphii(x,y)\\G(x,y)\end{array}\right) \mbox{ and } D:={\rm gph\,} \widehat N_\Gamma\times\mathbb{R}^p_-$$ is metrically subregular at $\big((\bar x,\bar y),0\big)$.
We now invoke Theorem \ref{ThSuffCondMSSplit} with
$$P_1(x,y):=(y,-\varphii(x,y)),\ P_2(x,y):=G(x,y),\ D_1:={\rm gph\,} \widehat N_\Gamma,\ D_2:=\mathbb{R}^p_-.$$ By assumption, $P_2(x,y)-D_2$ is metrically subregular at $\big((\bar x,\bar y),0\big)$. Assume to the contrary that $P(\cdot,\cdot)-D$ is not metrically subregular at $\big((\bar x,\bar y),0\big)$. Then by Theorem \ref{ThSuffCondMSSplit}, there exist $0\not=z=(u,v) \in T^{\rm lin}_\Omega(\bar x,\bar y)$ and a directional limiting normal $z^\ast=(w^\ast,w,\eta)\in\mathbb{R}^m\times\mathbb{R}^m\times\mathbb{R}^p$ such that $\nabla P(\bar x,\bar y)^Tz^\ast=0$, $(w^\ast,w)\in N_{{\rm gph\,} \widehat N_\Gamma}(P_1(\bar x,\bar y); \nabla P_1(\bar x,\bar y)z)$, $\eta\in N_{\mathbb{R}^p_-}\big(G(\bar x,\bar y);\nabla G(\bar x,\bar y)(u,v)\big)$ and $(w^\ast,w)\not=0$.
Hence
\begin{equation}\label{w*eqn}
0=\nabla P(\bar x,\bar y)^Tz^\ast=\left(\begin{array}{c}-\nabla_{x}\varphii(\bar x,\bar y)^Tw+\nabla_{x}G(\bar x,\bar y)^T\eta\\
w^\ast-\nabla_{y}\varphii(\bar x,\bar y)^Tw+\nabla_y G(\bar x,\bar y)^T\eta
\end{array}\right).\end{equation}
Since $z=(u,v) \in T^{\rm lin}_\Omega(\bar x,\bar y)$, by the rule of tangents to product sets from Proposition \ref{productset} we obtain
\[\nabla P(\bar x,\bar y)z=\left(\begin{array}{c}(v,-\nabla_{x}\varphii(\bar x,\bar y)u-\nabla_{y}\varphii(\bar x,\bar y)v)\\
\nabla G(\bar x,\bar y)(u,v)\end{array}\right)\in T_{{\rm gph\,} \widehat N_\Gamma}(\bar y,\bar y^\ast)\times T_{\mathbb{R}^p_-}\big(G(\bar x,\bar y)\big),\]
where $\bar y^\ast:=-\varphii(\bar x,\bar y)$.
It follows that $
(v,-\nabla_{x}\varphii(\bar x,\bar y)u-\nabla_{y}\varphii(\bar x,\bar y)v)\in T_{{\rm gph\,} \widehat N_\Gamma}(\bar y,\bar y^\ast)$ and consequently by Theorem \ref{ThTanConeGrNormalCone} we have $v\in K(\bar y,\bar y^\ast)$.
Further we deduce from Proposition \ref{relationship} that
\[\eta\in N_{\mathbb{R}^p_-}(G(\bar x,\bar y)), \ \eta^T\nabla G(\bar x,\bar y)(u,v)=0.\]
So far we have shown that $u,v,\eta,w$ fulfill \eqref{EqSuffMS1}-\eqref{EqSuffMS3}. Further we have $w\not=0$: if $w=0$, then by virtue of \eqref{EqNonDegG} and \eqref{w*eqn} we would obtain $\nabla_xG(\bar x,\bar y)^T\eta=0$, $\nabla_yG(\bar x,\bar y)^T\eta=0$ and consequently $w^\ast=0$, contradicting $(w^\ast,w)\not=0$. If we can show the existence of $\lambda\in\Lambda(\bar y,\bar y^\ast;v)\cap {\cal E}(\bar y,\bar y^\ast)$ such that \eqref{EqSuffMS4} holds, then
we obtain the desired contradiction to our assumptions, which completes the proof.
Since $(w^\ast,w)\in N_{{\rm gph\,} \widehat N_\Gamma}(P_1(\bar x,\bar y); \nabla P_1(\bar x,\bar y)z)$, by
the definition of the directional limiting normal cone, there are sequences $t_k\downarrow 0$, $d_k=(v_k,v_k^\ast)\in\mathbb{R}^m\times\mathbb{R}^m$ and $(w_k^\ast,w_k)\in\mathbb{R}^m\times\mathbb{R}^m$ satisfying $(w_k^\ast,w_k)\in\widehat N_{{\rm gph\,} \widehat N_\Gamma}(P_1(\bar x,\bar y)+t_kd_k)$ $\forall k$ and $(d_k,w_k^\ast,w_k)\to(\nabla P_1(\bar x,\bar y)z,w^\ast,w)$. That is, $(y_k,y_k^\ast):=(\bar y,\bar y^\ast)+t_k(v_k,v_k^\ast)\in {\rm gph\,} \widehat N_\Gamma$,
$(w_k^\ast,w_k)\in \widehat N_{{\rm gph\,} \widehat N_\Gamma}(y_k,y_k^\ast)$
and $(v_k, v_k^\ast)\to (v, -\nabla_{x}\varphii(\bar x,\bar y)u-\nabla_{y}\varphii(\bar x,\bar y)v)$.
By passing to a subsequence if necessary, we can assume that MSCQ holds for $g(y)\leq 0$ at $y_k$ for all $k$ and by invoking Corollary \ref{CorSecOrd} we obtain $w_k\in K(y_k,y_k^\ast)$, and
\begin{equation}\label{EqSecOrdAux1}{w_k^\ast}^Tw_k+w_k^T\nabla^2(\lambda^T g)(y_k)w_k\leq 0 \mbox{ whenever }\lambda\in\Lambda(y_k,y_k^\ast;w_k).
\end{equation}
By Lemma \ref{LemBndSecOrdMult} we can find a uniformly bounded sequence $\lambda^k\in\Lambda(y_k,y_k^\ast;w_k)\cap {\cal E}(y_k,y_k^\ast)$. In particular, as in the proof of Lemma \ref{LemBndSecOrdMult}, we can choose $\lambda^k$ as an optimal solution of the linear optimization problem
\begin{equation}\label{EqMinL1Norm}\min\sum_{i=1}^q\lambda_i\;\mbox{ subject to }\;\lambda\in \Lambda(y_k,y_k^\ast;w_k).
\end{equation}
By passing once more to a subsequence if necessary, we can assume that $\lambda^k$ converges to $\bar\lambda$, and we easily conclude $\bar\lambda\in\Lambda(\bar y,\bar y^\ast)$ and
${w^\ast}^Tw+w^T\nabla^2(\bar\lambda^T g)(\bar y)w\leq 0$, which together with $w^\ast-\nabla_{y}\varphii(\bar x,\bar y)^Tw+\nabla_y G(\bar x,\bar y)^T\eta=0$ (see (\ref{w*eqn})) results in
\begin{equation}
\label{EqSecOrdAux2}w^T\left (\nabla_{y}\varphii(\bar x,\bar y)+\nabla^2(\bar\lambda^Tg)(\bar y)\right )w-\eta^T\nabla_y G(\bar x,\bar y)w\leq 0.
\end{equation}
Further, we can assume that $I^+(\bar\lambda)\subset I^+(\lambda^k)$ and therefore, because of $\lambda^k\in N_{\mathbb{R}^q_-}(g(y_k))$, $\bar\lambda^Tg(y_k)={\lambda^k}^Tg(y_k)=0$. Hence for every $\lambda\in\Lambda(\bar y,\bar y^\ast)$ we obtain
\begin{eqnarray*}
0&\geq& (\lambda-\bar\lambda)^Tg(y_k)\\
&=&(\lambda-\bar\lambda)^Tg(\bar y)+\nabla((\lambda-\bar\lambda)^Tg)(\bar y)(y_k-\bar y)\\
&&\quad+\frac 12 (y_k-\bar y)^T\nabla^2((\lambda-\bar\lambda)^Tg)(\bar y)(y_k-\bar y) +o(\norm{y_k-\bar y}^2)\\
&=&\frac{t_k^2}2 v_k^T\nabla^2((\lambda-\bar\lambda)^Tg)(\bar y)v_k+o(t_k^2\norm{v_k}^2).
\end{eqnarray*}
Dividing by $t_k^2/2$ and passing to the limit yields $0\geq v^T\nabla^2((\lambda-\bar\lambda)^Tg)(\bar y)v$ and thus $\bar\lambda\in\Lambda(\bar y,\bar y^\ast;v)$. Since $w_k\in K(y_k,y_k^\ast)$, by Proposition \ref{criticalcone} we have $\nabla g_i(y_k)w_k=0$, $i\in I^+(\lambda^k)$, from which $\nabla g_i(\bar y)w=0$, $i\in I^+(\bar\lambda)$ follows.
It is known that the polyhedron $\Lambda(\bar y, \bar y^\ast)$ can be represented as the sum of the convex hull of its extreme points ${\cal E}(\bar y,\bar y^\ast)$ and its recession cone ${\cal R}:=\{\lambda \in N_{\mathbb{R}^q_-}(g(\bar y))\,\vert\, \nabla g(\bar y)^T\lambda=0 \}$. We show by contradiction that $\bar\lambda\in{\rm conv\,} {\cal E}(\bar y,\bar y^\ast)$. Assume on the contrary that $\bar\lambda\not\in {\rm conv\,} {\cal E}(\bar y,\bar y^\ast)$. Then $\bar\lambda$ has the representation $\bar\lambda=\lambda^c+\lambda^r$ with $\lambda^c\in {\rm conv\,} {\cal E}(\bar y,\bar y^\ast)$ and $\lambda^r\not=0$ belonging to the recession cone ${\cal R}$, i.e.
\begin{equation}\label{recessioncone}
\lambda^r\in N_{\mathbb{R}^q_-}(g(\bar y)),\ \nabla g(\bar y)^T\lambda^r=0.\end{equation}
Since $\lambda^k\in\Lambda(y_k,y_k^\ast;w_k)$, it is a solution to the linear program:
\begin{eqnarray*}
\max_{\lambda\geq 0} && w_k^T\nabla^2(\lambda^Tg)(y_k)(w_k)\\
s.t. && \nabla g(y_k)^T\lambda=y_k^*\\
&& \lambda^Tg(y_k)=0.
\end{eqnarray*}
By duality theory of linear programming, for each $k$ there is some $r_k\in\mathbb{R}^m$ verifying
\[\nabla g_i(y_k)r_k+w_k^T\nabla^2 g_i(y_k)w_k\leq 0,\ \lambda^k_i(\nabla g_i(y_k)r_k+w_k^T\nabla^2 g_i(y_k)w_k)=0,\ i\in {\cal I}(y_k).\]
Since $\Lambda(y_k,y_k^\ast;w_k)=\{\lambda\in\Lambda(y_k,y_k^\ast)\,\vert\, w_k^T\nabla^2(\lambda^Tg)({y_k})w_k\geq\theta(y_k,y_k^\ast;w_k)\}$ and $\lambda^k$ solves \eqref{EqMinL1Norm}, again by duality theory of linear programming we can find for each $k$ some $s_k\in\mathbb{R}^m$ and $\beta_k\in\mathbb{R}_+$ such that
\[\nabla g_i(y_k)s_k+\beta_k w_k^T\nabla^2 g_i(y_k)w_k\leq 1,\ \lambda^k_i(\nabla g_i(y_k)s_k+\beta_k w_k^T\nabla^2 g_i(y_k)w_k-1)=0,\ i\in {\cal I}(y_k).\]
Next we define for every $k$ the elements $\tilde \lambda^k\in\mathbb{R}^q_+$, $\xi_k^\ast\in\mathbb{R}^m$ by
\begin{eqnarray}
&& \tilde\lambda^k_i:=\begin{cases}
\lambda^r_i&\mbox{if $i\in I^+(\lambda^r)$,}\\
\frac 1k&\mbox{if $i\in I^+(\lambda^k)\setminus I^+(\lambda^r)$,}\\
0&\mbox{else,}\quad
\end{cases} \nonumber\\
&& \xi_k^\ast:=\nabla g(y_k)^T\tilde\lambda^k.\label{xi}
\end{eqnarray}
Since $I^+(\lambda^r)\subset I^+(\bar\lambda)\subset I^+(\lambda^k)$, we obtain $I^+(\tilde\lambda^k)=I^+(\lambda^k)$, $\tilde\lambda^k\in N_{\mathbb{R}^q_-}(g(y_k))$ and $\xi_k^\ast\in N_\Gamma(y_k)$.
Thus $w_k\in K(y_k,\xi_k^\ast)$ by Proposition \ref{criticalcone} and
\[\nabla g_i(y_k)r_k+w_k^T\nabla^2 g_i(y_k)w_k\leq 0,\ \tilde\lambda^k_i(\nabla g_i(y_k)r_k+w_k^T\nabla^2 g_i(y_k)w_k)=0,\ i\in {\cal I}(y_k)\]
implying $\tilde\lambda^k\in\Lambda(y_k,\xi_k^\ast;w_k)$ by duality theory of linear programming. Moreover, because of $I^+(\tilde\lambda^k)=I^+(\lambda^k)$ we also have
\[\nabla g_i(y_k)s_k+\beta_k w_k^T\nabla^2 g_i(y_k)w_k\leq 1,\ \tilde\lambda^k_i(\nabla g_i(y_k)s_k+\beta_k w_k^T\nabla^2 g_i(y_k)w_k-1)=0,\ i\in {\cal I}(y_k),\]
implying that $\tilde\lambda^k$ is a solution of the linear program
\[\min\sum_{i=1}^q\lambda_i\;\mbox{ subject to }\;\lambda\in \Lambda(y_k,\xi_k^\ast;w_k),\]
and, together with $\Lambda(y_k,\xi_k^\ast;w_k)\subset \mathbb{R}^q_+$,
\[\min\{\norm{\lambda}\,\vert\, \lambda\in \Lambda(y_k,\xi_k^\ast;w_k)\}\geq \frac 1{\sqrt{q}}\min\{\sum_{i=1}^q\lambda_i\,\vert\, \lambda\in \Lambda(y_k,\xi_k^\ast;w_k)\}{\geq}\frac{\sum_{i=1}^q\lambda_i^r}{\sqrt{q}}=:\beta>0.\]
Taking into account that $\lim_{k\to\infty}\tilde\lambda^k=\lambda^r$ and (\ref{recessioncone}), (\ref{xi}), we conclude $\lim_{k\to\infty}\norm{\xi_k^\ast}=0$, showing that for every real $\kappa'$ we have
\[\Lambda(y_k,\xi_k^\ast;w_k)\cap {\cal E}(y_k,\xi_k^\ast)\cap\kappa'\norm{\xi_k^\ast} {\cal B}_{\mathbb{R}^q}\subset \Lambda(y_k,\xi_k^\ast;w_k)\cap \kappa'\norm{\xi_k^\ast} {\cal B}_{\mathbb{R}^q}=\emptyset\]
for all $k$ sufficiently large, contradicting the statement of Lemma \ref{LemBndSecOrdMult}. Hence $\bar\lambda\in {\rm conv\,}{\cal E}(\bar y,\bar y^\ast)$ and thus $\bar\lambda$ admits a representation as convex combination
\[\bar\lambda=\sum_{j=1}^N\alpha_j{\hat\lambda^j}\;\mbox{ with }\; \sum_{j=1}^N \alpha_j=1,\;0<\alpha_j\leq 1,\; {\hat\lambda^j}\in{\cal E}(\bar y,\bar y^\ast),\;j=1,\ldots,N.\]
Since $\bar\lambda\in \Lambda(\bar y,\bar y^\ast;v)$ we have $\theta(\bar y,\bar y^\ast;v)=v^T\nabla^2(\bar\lambda^Tg)(\bar y)v=\sum_{j=1}^N \alpha_j v^T\nabla^2({\hat\lambda{}^j}^Tg)(\bar y)v$ implying, together with $v^T\nabla^2({\hat\lambda{}^j}^T g)(\bar y)v\leq \theta(\bar y,\bar y^\ast;v)$, that $v^T\nabla^2({{\hat\lambda{}^j}}^Tg)(\bar y)v= \theta(\bar y,\bar y^\ast;v)$ and consequently ${\hat\lambda^j}\in \Lambda(\bar y,\bar y^\ast;v)$. It follows from (\ref{EqSecOrdAux2}) that
\begin{eqnarray*}\lefteqn{\sum_{j=1}^N\alpha_j\left( w^T\big (\nabla_{y}\varphii(\bar x,\bar y)+\nabla^2({\hat\lambda{}^j}^Tg)(\bar y)\big )w-\eta^T\nabla_y G(\bar x,\bar y)w\right)}\\
&&=w^T\left (\nabla_{y}\varphii(\bar x,\bar y)+\nabla^2(\bar\lambda^Tg)(\bar y)\right )w-\eta^T\nabla_y G(\bar x,\bar y)w\leq 0\end{eqnarray*}
and hence there exists some index $\bar j$ with
$$w^T\left (\nabla_{y}\varphii(\bar x,\bar y)+\nabla^2({\hat\lambda{}^{\bar j}}^Tg)(\bar y)\right )w-\eta^T\nabla_y G(\bar x,\bar y)w\leq 0.$$
Further, by Proposition \ref{criticalcone} we have $\nabla g_i(\bar y)w=0$ $\forall i\in I^+(\bar\lambda)\supset I^+({\hat\lambda^{\bar j}})$ and we see that \eqref{EqSuffMS4} is fulfilled with $\lambda={\hat\lambda^{\bar j}}$.
\end{proof}
\if{
By virtue of \cite[Theorem 3.2]{YeYe}, we have the following necessary optimality condition for (MPEC).
\begin{proposition}
Let $(\bar x,\bar y)\in \Omega$ defined as in (\ref{Omega}) be a local optimal solution of problem (MPEC) where all functions $F, \varphii, G$ are continuously differentiable and $g$ is twice continuously differentiable. Suppose all constraint qualifications in Theorem \ref{ThSuffCondMS_FOGE} hold at $(\bar x,\bar y)$. Then there are multipliers $(\mu, \nu)\in \mathbb{R}^m\times \mathbb{R}^p$ and $\varrho\in \mathbb{R}^p$ which solve the system
\begin{eqnarray*}
&& 0=\nabla_x F(\bar x, \bar y)-\nabla_x \varphii(\bar x,\bar y)^T \nu +\nabla_x G(\bar x,\bar y)^T \varrho,\\
&& 0=\nabla_y F(\bar x, \bar y)-\nabla_y\varphii(\bar x,\bar y)^T \nu +\mu+\nabla_y G(\bar x,\bar y)^T \varrho,\\
&& (\mu, \nu) \in N_{gph\widehat{N}_\Gamma}(\bar y, -\varphii(\bar x,\bar y)),\\
&&\varrho\in N_{\mathbb{R}^p_-}(G(\bar x,\bar y)).
\end{eqnarray*}
\end{proposition}}\fi
\begin{example}[Example \ref{Ex1} revisited] \label{Ex1GE}
Instead of reformulating the MPEC as an MPCC, we consider it in the original form (MPEC).
Since MFCQ is fulfilled at $\bar y$ for the constraints $g(y)\leq 0$ of the lower level problem and the gradients of the upper level constraints $G(x,y)\leq 0$ are linearly independent, MSCQ holds for both constraint systems. Condition \eqref{EqNonDegG} is obviously fulfilled due to $\nabla_y G(x,y)=0$. Setting $\bar y^\ast:=-\varphii(\bar x,\bar y)=(0,0,1)$, as in Example \ref{Ex1} we obtain
\[ \Lambda(\bar y,\bar y^\ast)=\{(\lambda_1,\lambda_2)\in\mathbb{R}^2_+\,\vert\, \lambda_1+\lambda_2=1\}.\]
Since $\nabla g_1(\bar y)=\nabla g_2(\bar y)=(0,0,1)$ and for every $\lambda\in\Lambda(\bar y,\bar y^\ast)$ either $\lambda_1>0$ or $\lambda_2>0$, we deduce
\[W(\lambda):=\{w\in\mathbb{R}^3\,\vert\, \nabla g_i(\bar y)w=0,\;i\in I^+(\lambda)\}={\mathbb{R}^2}\times\{0\}\ \ \forall \lambda\in\Lambda(\bar y,\bar y^\ast).\]
Since
\[w^T\left (\nabla_{y}\varphii(\bar x,\bar y)+\nabla^2(\lambda^Tg)(\bar y)\right )w-\eta^T\nabla_y G(\bar x,\bar y)w=(1+\lambda_1)w_1^2+(1+\lambda_2)w_2^2\geq 0,\]
there cannot exist $0 \not=w \in W(\lambda)$ and $\lambda\in\Lambda(\bar y,\bar y^\ast)$ fulfilling \eqref{EqSuffMS4}. Hence by virtue of Theorem \ref{ThSuffCondMS_FOGE}, MSCQ holds at $(\bar x, \bar y)$.
\end{example}
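The decisive sign condition in the example above can also be sanity-checked numerically. The short Python sketch below samples multipliers on the simplex and directions $w=(w_1,w_2,0)$ and evaluates the quadratic form $(1+\lambda_1)w_1^2+(1+\lambda_2)w_2^2$; random sampling of course only illustrates, and does not prove, its positivity for $w\not=0$.

```python
import numpy as np

rng = np.random.default_rng(seed=1)
quad_values = []
for _ in range(1000):
    lam1 = rng.uniform(0.0, 1.0)          # multiplier on the simplex:
    lam2 = 1.0 - lam1                     # lambda_1 + lambda_2 = 1, lambda >= 0
    w = rng.uniform(-1.0, 1.0, size=2)    # direction w = (w_1, w_2, 0) in W(lambda)
    if np.hypot(w[0], w[1]) < 1e-6:       # skip (near-)zero directions
        continue
    quad_values.append((1.0 + lam1) * w[0] ** 2 + (1.0 + lam2) * w[1] ** 2)

# strict positivity means the inequality in (EqSuffMS4) cannot hold with w != 0
all_positive = all(v > 0.0 for v in quad_values)
```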
\end{document} |
\begin{document}
\title{A multi-level spectral deferred correction method\thanks{Robert Speck and Daniel Ruprecht acknowledge support by Swiss National Science Foundation grant 145271 under the lead agency agreement through the project "ExaSolvers" within the Priority Programme 1648 "Software for Exascale Computing" of the Deutsche Forschungsgemeinschaft. Matthias Bolten acknowledges support from DFG through the project "ExaStencils" within SPPEXA. Daniel Ruprecht and Matthew Emmett also thankfully acknowledge support by grant SNF-147597. Matthew Emmett and Michael Minion were supported by the Applied Mathematics Program of the DOE Office of Advanced Scientific Computing Research of the U.S. Department of Energy under contract DE-AC02-05CH11231. Michael Minion was also supported by the U.S. National Science Foundation grant DMS-1217080.
}}
\author{Robert Speck \and Daniel Ruprecht \and Matthew Emmett \and Michael Minion \and Matthias Bolten \and Rolf Krause
}
\authorrunning{R. Speck, D. Ruprecht, M. Emmett, M. Minion, M. Bolten, R. Krause}
\institute{R. Speck \at
J\"ulich Supercomputing Centre, Forschungszentrum J\"ulich, Germany and Institute of Computational Science, Universit{\`a} della Svizzera italiana, Lugano, Switzerland.\\
\email{[email protected]}
\and
D. Ruprecht \at
Institute of Computational Science, Universit{\`a} della Svizzera italiana, Lugano, Switzerland.\\
\email{[email protected]}
\and
M. Emmett \at
Center for Computational Sciences and Engineering, Lawrence Berkeley National Laboratory, USA.\\
\email{[email protected]}
\and
M. Minion \at
Institute for Computational and Mathematical Engineering, Stanford University, USA.\\
\email{[email protected]}
\and
M. Bolten \at
Department of Mathematics, Bergische Universit\"at Wuppertal, Germany.\\
\email{[email protected]}
\and
R. Krause\at
Institute of Computational Science, Universit{\`a} della Svizzera italiana, Lugano, Switzerland.\\
\email{[email protected]}
}
\date{Received: date / Accepted: date}
\maketitle
\begin{abstract}
The spectral deferred correction (SDC) method is an iterative scheme for computing a
higher-order collocation solution to an ODE by performing a series
of correction sweeps using a low-order timestepping method.
This paper examines a variation of SDC for the temporal integration of PDEs called
multi-level spectral deferred corrections (MLSDC), where sweeps are performed on a hierarchy of levels
and an FAS correction term, as in nonlinear multigrid methods, couples solutions on different levels.
Three different strategies to reduce the computational cost of correction
sweeps on the coarser levels are examined:
reducing the degrees of freedom, reducing the order of the spatial discretization,
and reducing the accuracy when solving linear systems arising in implicit temporal integration.
Several numerical examples demonstrate the effect of multi-level coarsening
on the convergence and cost of SDC integration.
In particular, MLSDC can provide significant savings in compute time compared to SDC for a three-dimensional problem.
\keywords{spectral deferred corrections \and multi-level spectral deferred corrections \and FAS correction \and PFASST}
\subclass{65M55 \and 65M70 \and 65Y05}
\end{abstract}
\section{Introduction}
The numerical approximation of initial value ordinary differential equations is a fundamental problem in computational science, and many integration methods for problems of different character have been developed \cite{ascherPetzold,HairerI,HairerII}.
Among different solution strategies, this paper focuses on a class of iterative methods called Spectral Deferred Corrections (SDC) \cite{dutt2000spectral}, which is a variant of the defect and deferred correction methods developed in the 1960s \cite{bohmerHemkerStetter:1984,pereyra:1968,pereyra:1967,pereyra:1966,stetter:1974,zadunaisky:1964}.
In SDC methods, high-order temporal approximations are computed over a time\-step by discretizing and approximating a series of correction equations on intermediate substeps.
These corrections are applied iteratively to a provisional solution computed on the substeps, with each iteration -- or {\it sweep} -- improving the solution and raising the formal order of accuracy of the method, see e.g.~\cite{ChristliebEtAl2011_CMS,ChristliebEtAl2010_MoC,ShuEtAl2007}.
The correction equations are cast in the form of a Picard integral equation containing an explicitly calculated term corresponding to the temporal integration of the function values from the previous iteration.
Substeps in SDC methods are chosen to correspond to Gaussian quadrature nodes, and hence the integrals can be stably computed to a very high order of accuracy.
One attractive feature of SDC methods is that the numerical method used to approximate the correction equations can be low-order (even first-order) accurate, while the solution after many iterations can in principle be of arbitrarily high order of accuracy.
This has been exploited to create SDC methods that allow the governing equations to be split into two or more pieces that can be treated either implicitly or explicitly and/or with different timesteps, see e.g.~\cite{bourlioux2003high,bouzarthMinion:2011,layton2004conservative,minion2003semi}.
For high-order SDC methods constructed from low-order propagators, the provisional solution and the solution after the first few correction iterations are of lower-order compared to the final solution.
Hence it is possible to reduce the computational work done on these early iterations by reducing the number of substeps (i.e. quadrature nodes) since higher-order integrals are not yet necessary.
In \cite{laytonMinion:2005,minion2003semi}, the number of substeps used in initial iterations of SDC methods is appropriately reduced to match the accuracy of the solution, and the methods there are referred to as {\it ladder methods}.
Ladder methods progress from a low-order coarse solution to a high-order fine solution by performing one or more SDC sweeps on the coarse level and then using an interpolated (in time and possibly space) version of the solution as the provisional solution for the next correction sweep.
In both \cite{laytonMinion:2005,minion2003semi} the authors conclude that the reduction in work obtained by using ladder methods is essentially offset by a corresponding decrease in accuracy, making ladder methods no more computationally efficient than non-ladder SDC methods.
On the other hand, in \cite{Layton2009}, SDC methods for a method of lines discretizations of PDEs are explored wherein the ladder strategy allows both spatial and temporal coarsening as well as the use of lower-order spatial discretizations in initial iterations.
The numerical results in \cite{Layton2009} indicate that adding spatial coarsening to SDC methods for PDEs can increase the overall efficiency of the timestepping scheme, although this evidence is based only on numerical experiments using simple test cases.
This paper significantly extends the idea of using spatial coarsening in SDC when solving PDEs.
A general multi-level strategy is analyzed wherein correction sweeps are applied to different levels as in the V-cycles of multigrid methods (e.g. \cite{brandt:1977,briggs}).
A similar strategy is used in the parallel full approximation scheme in space and time (PFASST), see~\cite{EmmettMinion2012,Minion2010} and also~\cite{SpeckEtAl2012}, to enable concurrency in time by iterating on multiple timesteps simultaneously.
As in nonlinear multigrid methods, multi-level SDC applies an FAS-type correction to enhance the accuracy of the solution on coarse levels.
Therefore, some of the fine sweeps required by a single-level SDC algorithm can be replaced by coarse sweeps, which are relatively cheaper when spatial coarsening strategies are used.
The paper introduces MLSDC and discusses three such spatial coarsening strategies: (1) reducing the number of degrees of freedom, (2) reducing the order of the discretization and (3) reducing the accuracy of implicit solves.
To enable the use of high-order compact stencils for spatial operators, several modifications to SDC and MLSDC are presented that incorporate a weighting matrix.
It is shown for example problems in one and two dimensions that the number of MLSDC iterations required to converge to the collocation solution can be fewer than for SDC, even when the problem is poorly resolved in space.
Furthermore, results from a three-dimensional benchmark problem demonstrate that MLSDC can significantly reduce time-to-solution compared to single-level SDC.
\section{Multi-level spectral deferred corrections} \label{sec:MLSDC}
The details of the MLSDC schemes are presented in this section.
The original SDC method is first reviewed in \S\ref{subsec:sdc}, while MLSDC along with a brief review of FAS corrections, the incorporation of weighting matrices and a discussion of different coarsening strategies is presented in \S\ref{subsec:mlsdc}.
\subsection{Spectral deferred corrections}\label{subsec:sdc}
SDC methods for ODEs were first introduced in \cite{dutt2000spectral}, and were subsequently refined and extended e.g. in~\cite{hansen2006convergence,huang2006accelerating,minion2003semi,minion2004semi}.
SDC methods iteratively compute the solution to the collocation equation by approximating a series of correction equations at spectral quadrature nodes using low-order substepping methods.
The derivation of SDC starts from the Picard integral form of a generic IVP given by
\begin{equation}
\label{eq:picard}
u(t) = u_{0} + \int_{0}^t f\bigl(u(s), s\bigr) \mathrm{d} s
\end{equation}
where $t \in [0,T]$, $u_0,u(t) \in {\mathbb{R}}^N$, and $f: {\mathbb{R}}^N \times {\mathbb{R}} \rightarrow {\mathbb{R}}^N$.
We now focus on a single timestep $[T_n, T_{n+1}]$, which is divided into substeps by defining
a set of quadrature nodes on the interval. Here we consider Lobatto quadrature and denote
$M+1$ nodes ${\bm{t}} := (t_m)_{m=0,\ldots,M}$ such that
$T_n=t_0 < t_{1} < \ldots < t_{M} = T_{n+1}$.
We now denote the collocation polynomial on $[T_{n}, T_{n+1}]$ by $u_p(t)$ and
write $U_j = u_p(t_j) \approx u(t_j)$.
In order to derive equations for the intermediate solutions $U_j$, we define quadrature weights
\begin{equation}
\label{eq:quad_weights}
q_{m,j} := \frac{1}{\Delta t} \int_{T_{n}}^{t_{m}} l_{j}(s) \ ds, \ m=0,\ldots,M, \ j=0,\ldots,M
\end{equation}
where $(l_{j})_{j=0,\ldots,M}$ are the Lagrange polynomials defined by the nodes ${\bm{t}}$, and $\Delta t = T_{n+1}-T_n$.
Inserting $u_p(t)$ into~\eqref{eq:picard} and noting that the quadrature with weights defined in~\eqref{eq:quad_weights} integrates the polynomial $u_p(t)$ exactly, we obtain
\begin{equation}
\label{eq:disc_coll}
U_m = u_0 + \Delta t \sum_{j=0}^{M} q_{m,j} f(U_{j}, t_j), \ m=0, \ldots, M.
\end{equation}
For a more compact notation, we now define the {\it integration matrix} ${\bm{q}}$ to be the $(M+1) \times (M+1)$
matrix consisting of entries $q_{m,j}$. Note that because we use Gauss-Lobatto nodes, the first row of ${\bm{q}}$ is all zeros. Next, we denote
\begin{equation}
{\bm{U}} := \left[ U_0, \ldots, U_M \right]^T,\nonumber
\end{equation}
and
\begin{equation}
{\bm{F}}({\bm{U}}) := \left[ F_0, \ldots, F_M \right]^T := \left[ f(U_0, t_0), \ldots, f(U_M, t_M) \right]^T. \nonumber
\end{equation}
In order to multiply the integration matrix ${\bm{q}}$ with the vector of the right-hand side values, we define ${\bm{Q}} := {\bm{q}}\otimes \bm{I}_N$ where $\bm{I}_N\in\mathbb{R}^{N\times N}$ is the identity matrix and $\otimes$ is the Kronecker product.
With these definitions, the set of equations in~\eqref{eq:disc_coll} can be written more compactly as
\begin{equation}
\label{eq:compact}
{\bm{U}} = {\bm{U}}_0 + \Delta t\, {\bm{Q}}\, {\bm{F}}({\bm{U}})
\end{equation}
where ${\bm{U}}_0 := \left[U_0, \ldots, U_0\right]^T$ contains the initial value $U_0$ copied to each node.
Eq.~\eqref{eq:compact} is an implicit equation for the unknowns in ${\bm{U}}$, and is also referred to
as the collocation formulation.
Because we use Gauss-Lobatto nodes, the value $U_M$ readily approximates the solution $u(T_{n+1})$.
Here, we consider ODEs that can be split into stiff ($f^I$) and non-stiff ($f^E$) pieces so that
\begin{equation}
f(u(t),t) = f^E\bigl(u(t), t\bigr) + f^I\bigl(u(t), t\bigr).\nonumber
\end{equation}
SDC iterations begin by spreading the initial condition $U_0$ to each of the collocation nodes so that the provisional solution ${\bm{U}}^0$ is given by ${\bm{U}}^0 = [U_0, \cdots, U_0]$.
We define by
\begin{equation}
s_{m,j} := \frac{1}{\Delta t} \int_{t_{m-1}}^{t_m} l_{j}(s) \ ds, \ m=1, \ldots, M\nonumber
\end{equation}
the quadrature weights for node-to-node integration, approximating integrals over $[t_{m-1}, t_{m}]$, and denote by ${\bm{s}}$ the $M \times (M+1)$ matrix consisting of the entries $s_{m,j}$.
Note that ${\bm{s}}$ can be easily constructed from the integration matrix ${\bm{q}}$.
Furthermore, we denote as before ${\bm{S}}:={\bm{s}}\otimes\bm{I}_N$.
Then, the semi-implicit update equation corresponding to the forward/backward Euler substepping method for computing ${\bm{U}}^{k+1}$ is given by
\begin{multline}
\label{eq:imexsdc}
U^{k+1}_{m+1} = U^{k+1}_m
+ \Delta t_m
\bigl[ f^E(U^{k+1}_{m}, t_{m}) - f^E(U^k_{m}, t_{m}) \bigr] \\
+ \Delta t_m
\bigl[ f^I(U^{k+1}_{m+1}, t_{m+1}) - f^I(U^k_{m+1}, t_{m+1}) \bigr]
+ \Delta t\, S^{k}_{m}
\end{multline}
where $S^{k}_{m}$ is the $m^{\rm th}$ row of ${\bm{S}} {\bm{F}}({\bm{U}}^k)$ and $\Delta t_{m} := t_{m+1} - t_{m}$.
The process of solving \eqref{eq:imexsdc} at each node is referred to as an \emph{SDC sweep} or an \emph{SDC iteration} (see Algorithm~\ref{alg:sdcsweep}).
SDC with a fixed number $k$ of iterations and first-order sweeps is formally of order $O(\Delta t^k)$, up to the accuracy
of the underlying integration rule~\cite{ChristliebEtAl2009,ShuEtAl2007}.
When SDC iterations converge, the scheme becomes equivalent to the collocation scheme determined by the quadrature nodes, and hence is of order $2M$ with $M+1$ Lobatto nodes.
\begin{algorithm}[t]
\algorithmfootnote{The FAS correction, denoted by ${\bm{\tau}}$, is included here to elucidate how FAS corrections derived in \S\ref{subsec:mlsdc} are incorporated into an SDC sweep -- for plain, single-level SDC algorithms the FAS correction ${\bm{\tau}}$ would be zero.}
\SetKwComment{Comment}{\# }{}
\SetCommentSty{textit}
\DontPrintSemicolon
\KwData{Initial $U_0$, function evaluations ${\bm{F}}({\bm{U}}^k)$ from the previous iteration, and (optionally) FAS corrections ${\bm{\tau}}$.}
\KwResult{Solution ${\bm{U}}^{k+1}$ and function evaluations ${\bm{F}}({\bm{U}}^{k+1})$.}
\BlankLine
\Comment{Compute integrals}
\For{$m=0 \ldots M-1$}{
$S^{k}_{m} \longleftarrow \Delta t \sum_{j=0}^M s_{m,j} (F^{E,k}_{j} + F^{I,k}_j)$
}
\BlankLine
\Comment{Set initial condition and compute function evaluation}
$t \longleftarrow t_0$; $U^{k+1}_0 \longleftarrow U_0$ \;
$F^{E,k+1}_0 \longleftarrow f^E(U_0, t)$ \;
$F^{I,k+1}_0 \longleftarrow f^I(U_0, t)$ \;
\BlankLine
\Comment{Forward/backward Euler substepping for correction}
\For{$m=0 \ldots M-1$}{
$t \longleftarrow t + \Delta t_m$ \;
${\rm RHS} \longleftarrow U^{k+1}_{m} + \Delta t_m \bigl( F^{E,k+1}_{m} - F^{E,k}_{m} - F^{I,k}_{m+1} \bigr) + S^k_{m} + \tau_m$ \;
$U^{k+1}_{m+1} \longleftarrow {\rm Solve}\bigl( U - \Delta t_m f^I(U, t) = {\rm RHS} \bigr)$ for $U$ \;
$F^{E,k+1}_{m+1} \longleftarrow f^E(U^{k+1}_{m+1}, t)$ \;
$F^{I,k+1}_{m+1} \longleftarrow f^I(U^{k+1}_{m+1}, t)$ \;
}
\caption{IMEX SDC sweep algorithm.}
\label{alg:sdcsweep}
\end{algorithm}
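To make the data flow of the sweep explicit, the following Python sketch implements one IMEX sweep for the update equation above. It is our own minimal illustration: the names \texttt{f\_exp}, \texttt{f\_imp} and \texttt{solve\_imp} are placeholders, and the implicit solve is assumed to be provided by the caller.

```python
import numpy as np

def imex_sdc_sweep(U, FE, FI, t, s, dt, f_exp, f_imp, solve_imp, tau=None):
    """One IMEX SDC sweep. U, FE, FI have shape (M+1, N): the previous
    iterate and its split function values at the nodes t; s is the (M, M+1)
    node-to-node quadrature matrix; solve_imp(rhs, dtm, t) solves
    U - dtm * f_imp(U, t) = rhs for U; tau holds optional FAS corrections."""
    M = len(t) - 1
    if tau is None:
        tau = np.zeros((M, U.shape[1]))
    S = dt * s @ (FE + FI)  # node-to-node integrals of the previous iterate
    Un, FEn, FIn = U.copy(), FE.copy(), FI.copy()
    FEn[0], FIn[0] = f_exp(Un[0], t[0]), f_imp(Un[0], t[0])
    for m in range(M):
        dtm = t[m + 1] - t[m]
        rhs = Un[m] + dtm * (FEn[m] - FE[m] - FI[m + 1]) + S[m] + tau[m]
        Un[m + 1] = solve_imp(rhs, dtm, t[m + 1])  # backward Euler type solve
        FEn[m + 1] = f_exp(Un[m + 1], t[m + 1])
        FIn[m + 1] = f_imp(Un[m + 1], t[m + 1])
    return Un, FEn, FIn
```

At the fixed point of this iteration the correction terms cancel, leaving $U_{m+1} = U_m + \Delta t\, S_m$, which is exactly the node-to-node form of the collocation problem.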
It has been shown \cite{huang2006accelerating,laytonMinion:2005}
that in certain situations (particularly
stiff equations) the convergence of SDC iterates can
slow down considerably for large values of $\Delta t$. For a fixed
number of iterations, this lack of convergence is characterized
by order reduction. Hence in this study, to allow for a reasonable comparison of SDC and MLSDC, we perform iterations until a specified convergence criterion is met.
Convergence is monitored by computing the SDC residual
\begin{equation}
\label{eq:residual}
{\bm{r}}^k = {\bm{U}}_0 + \Delta t {\bm{Q}} {\bm{F}}({\bm{U}}^k) - {\bm{U}}^k,
\end{equation}
and the iteration is terminated when the norm of the residual drops
below a prescribed tolerance.
Similarly, if SDC or MLSDC is used to solve the collocation problem up to some fixed tolerance, one also observes a significant increase in the number of iterations required.
Accelerating the convergence of SDC for stiff problems has been studied in e.g.~\cite{HuangEtAl2006,Weiser2013}.
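In code, checking this convergence criterion amounts to evaluating the residual of the collocation problem; a minimal Python sketch (ours, assuming the iterate is stored node-wise) reads:

```python
import numpy as np

def sdc_residual(U, F, q, dt):
    # Residual r^k = U_0 + dt * q F(U^k) - U^k at the nodes 1..M, assuming
    # U has shape (M+1, N) with U[0] the initial value and F = f(U) node-wise.
    return U[0] + dt * q @ F - U[1:]
```

The iteration is stopped once, e.g., \texttt{np.max(np.abs(r))} falls below the prescribed tolerance.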
\subsection{Multi-level spectral deferred corrections}\label{subsec:mlsdc}
In multi-level SDC (MLSDC), SDC sweeps are performed on a hierarchy of discretizations
or \emph{levels} to solve the collocation equation~\eqref{eq:compact}.
This section presents the details of the MLSDC iterations
for a generic set of levels, and in \S\ref{sec:mlsdc_spatial_coarsening},
three different coarsening strategies are explored.
For the following, we define levels $\ell=1 \ldots L$, where $\ell = 1$ is
the discretization that is to be solved (referred to generically
as the {\it fine} level), and subsequent
levels $\ell=2 \ldots L$ are defined by successive coarsening
of a type to be specified later.
\subsubsection{FAS correction}
Solutions on different MLSDC levels are coupled in the same manner as used in the full approximation scheme (FAS) for nonlinear multigrid methods (see e.g. \cite{brandt:1977}).
The FAS correction for coarse SDC iterations
is determined by considering SDC as an iterative method for solving
the collocation formulation~\eqref{eq:compact}, where the
operators $A_\ell$ are given by $A_\ell({\bm{U}}_\ell) \equiv
{\bm{U}}_\ell - \Delta t {\bm{Q}}_\ell {\bm{F}}_\ell({\bm{U}}_\ell)$.
Note that the approximations $A_\ell$ of the operator $A$ can differ substantially between levels, as will be discussed in \S\ref{sec:mlsdc_spatial_coarsening}.
Furthermore, we assume that suitable restriction (denoted by $R$) and interpolation operators between levels are available, see \S\ref{subsubsec:transfer}.
The FAS correction for coarse-grid sweeps is then given by
\begin{equation}
\label{eq:tau_sdc}
\bm{\tau}_{\ell+1} = A_{\ell+1}(R {\bm{U}}_{\ell}) - R A_\ell({\bm{U}}_\ell) = \Delta t \bigl( R {\bm{Q}}_{\ell} {\bm{F}}_{\ell}({\bm{U}}_\ell)
- {\bm{Q}}_{\ell+1} {\bm{F}}_{\ell+1} (R {\bm{U}}_\ell)\bigr).
\end{equation}
In particular, if the fine residual is zero (i.e., ${\bm{U}}_{\ell} \equiv
{\bm{U}}_{0,\ell} + \Delta t {\bm{Q}}_{\ell} {\bm{F}}_{\ell}({\bm{U}}_\ell)$) the FAS-corrected
coarse equation becomes
\begin{eqnarray}
{\bm{U}}_{\ell+1}- \Delta t {\bm{Q}}_{\ell+1} {\bm{F}}_{\ell+1}({\bm{U}}_{\ell+1})
& = & R{\bm{U}}_{0,\ell} + \Delta t \bigl( R {\bm{Q}}_{\ell} {\bm{F}}_{\ell}({\bm{U}}_\ell)
- {\bm{Q}}_{\ell+1} {\bm{F}}_{\ell+1}(R{\bm{U}}_\ell) \bigr) \nonumber\\
& = & R {\bm{U}}_{\ell} - \Delta t {\bm{Q}}_{\ell+1} {\bm{F}}_{\ell+1}(R{\bm{U}}_\ell)\nonumber
\end{eqnarray}
so that the coarse solution is the restriction of the fine solution. Note that for multi-level schemes, FAS corrections from finer levels need to be restricted and incorporated into coarser levels as well, i.e.~if on level $\ell$ the equation is already corrected by $\bm{\tau}_\ell$ with
\begin{equation}
A_\ell({\bm{U}}_\ell) = {\bm{U}}_\ell - \Delta t {\bm{Q}}_\ell {\bm{F}}_\ell({\bm{U}}_\ell) - \bm{\tau}_\ell,\nonumber
\end{equation}
the correction $\bm{\tau}_{\ell+1}$ for level $\ell+1$ is then given by
\begin{equation}
\bm{\tau}_{\ell+1} = A_{\ell+1}(R {\bm{U}}_{\ell}) - R A_\ell({\bm{U}}_\ell) = \Delta t \bigl( R {\bm{Q}}_{\ell} {\bm{F}}_{\ell}({\bm{U}}_\ell)
- {\bm{Q}}_{\ell+1} {\bm{F}}_{\ell+1} (R {\bm{U}}_\ell)\bigr) + R\bm{\tau}_\ell.\nonumber
\end{equation}
Coarse levels thus include the FAS corrections of all finer levels.
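Assuming for simplicity that both levels use the same collocation nodes, so that the restriction $R$ acts only on the spatial degrees of freedom, the FAS correction including the recursive term can be sketched in Python as follows (our illustration; the callables are placeholders):

```python
import numpy as np

def fas_correction(Uf, Ff, Qf, Qc, restrict, f_coarse, dt, tau_fine=None):
    # tau_{l+1} = dt * (R Qf F_l(U_l) - Qc F_{l+1}(R U_l)) + R tau_l,
    # where restrict implements R and f_coarse evaluates F_{l+1} node-wise.
    Uc = restrict(Uf)
    tau = dt * (restrict(Qf @ Ff) - Qc @ f_coarse(Uc))
    if tau_fine is not None:
        tau += restrict(tau_fine)
    return tau
```

If the two levels are identical, the correction vanishes, consistent with the coarse solution being the restriction of the fine solution in that case.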
\subsubsection{The MLSDC algorithm}
The MLSDC scheme introduced here proceeds as follows. The initial
condition $U_0$ and its function evaluation are spread to each of the
collocation nodes on the finest level so that the first provisional
solution ${\bm{U}}^0_1$ is given by
\begin{equation}
{\bm{U}}^0_1 = [ U_0, \ldots, U_0 ].\nonumber
\end{equation}
A single MLSDC iteration then consists of the following steps:
\begin{enumerate}
\item Perform one fine SDC sweep using the values ${\bm{U}}^{k}_1$ and
${\bm{F}}_1({\bm{U}}^{k}_1)$. This will yield provisional updated values
${\bm{U}}^{k+1}_1$ and ${\bm{F}}_1({\bm{U}}^{k+1}_1)$.
\item Sweep from fine to coarse: for each $\ell=2\ldots L$:
\begin{enumerate}
\item Restrict the fine values ${\bm{U}}^{k+1}_{\ell-1}$ to the coarse
values ${\bm{U}}_{\ell}^{k}$ and compute ${\bm{F}}_{\ell}({\bm{U}}_{\ell}^{k})$.
\item Compute the FAS correction
${\bm{\tau}}^k_{\ell}$ using ${\bm{F}}_{\ell-1}({\bm{U}}^{k+1}_{\ell-1})$, ${\bm{F}}_{\ell}({\bm{U}}^{k}_\ell)$,
and ${\bm{\tau}}^k_{\ell-1}$ (if available).
\item Perform $n_{\ell}$ SDC sweeps with the values on level $\ell$
beginning with $\ULvec{\ell}{k}$, ${\bm{F}}_{\ell}(\ULvec{\ell}{k})$ and the FAS correction
${\bm{\tau}}^k_{\ell}$. This will yield new values $\ULvec{\ell}{k+1}$
and ${\bm{F}}_{\ell}(\ULvec{\ell}{k+1})$.
\end{enumerate}
\item Sweep from coarse to fine: for each $\ell=L-1\ldots 1$:
\begin{enumerate}
\item Interpolate the coarse-grid correction $\ULvec{\ell+1}{k+1} - R \ULvec{\ell}{k+1}$ and
add it to $\ULvec{\ell}{k+1}$. Recompute the new values
${\bm{F}}_{\ell}(\ULvec{\ell}{k+1})$.
\item If $\ell > 1$, perform $n_{\ell}$ SDC sweeps beginning with
values $\ULvec{\ell}{k+1}$, ${\bm{F}}_{\ell}(\ULvec{\ell}{k+1})$ and the FAS
correction ${\bm{\tau}}^k_{\ell}$. This will once again yield new
values $\ULvec{\ell}{k+1}$ and ${\bm{F}}_{\ell}(\ULvec{\ell}{k+1})$.
\end{enumerate}
\end{enumerate}
Note that when interpolating from coarse to fine levels the correction
${\bm{U}}^{k+1}_{\ell+1} - R {\bm{U}}^k_{\ell+1}$ is interpolated and
subsequently added to ${\bm{U}}^{k+1}_{\ell}$ instead of simply
overwriting the fine values with interpolated coarse values. Also
note that instead of interpolating solution values
${\bm{U}}^{k+1}_{\ell+1}$ to ${\bm{U}}^{k+1}_\ell$ and immediately
re-evaluating the function values ${\bm{F}}_\ell({\bm{U}}^{k+1}_\ell)$, the
change in the function values can be interpolated as well.
Doing so reduces the cost of
the interpolation step, but possibly at the cost of
increasing the number of MLSDC iterations required to reach
convergence. Since no significant increase could be observed during our tests,
we skip the re-evaluation of the right-hand side and use interpolation of the
coarse function values throughout this work.
The above is summarized by Algorithm~\ref{alg:mlsdc}.
\begin{algorithm}[t]
\SetKwComment{Comment}{\# }{}
\SetCommentSty{textit}
\DontPrintSemicolon
\KwData{Initial $\UL{1}{k}{0}$ and function evaluations $\FLvec{1}{k}$ from the previous iteration on the fine level.}
\KwResult{Solution $\ULvec{\ell}{k+1}$ and function evaluations $\FLvec{\ell}{k+1}$ on all levels.}
\BlankLine
\Comment{Perform fine sweep and check convergence criteria}
$\ULvec{1}{k+1}$, $\FLvec{1}{k+1} \longleftarrow$ SDCSweep$\bigl(\ULvec{1}{k},\,\FLvec{1}{k}\bigr)$ \;
\If{ fine level has converged }{
return \;
}
\BlankLine
\Comment{Cycle from fine to coarse}
\For{$\ell=1 \ldots L-1$}{
\Comment{Restrict, re-evaluate, and save restriction (used later during interpolation)}
\For{$m = 0 \ldots M$}{
$\UL{\ell+1}{k}{m} \longleftarrow$ Restrict$\bigl( \UL{\ell}{k+1}{m} \bigr)$ \;
$\FL{\ell+1}{k}{m} \longleftarrow$ FEval$\bigl(\UL{\ell+1}{k}{m} \bigr)$ \;
$\ULtmp{\ell+1}{k}{m} \longleftarrow \UL{\ell+1}{k}{m}$\;
}
\Comment{Compute FAS correction and sweep}
$\BLvec{\ell+1}{k} \longleftarrow$ FAS$\bigl(\FLvec{\ell}{k+1},\, \FLvec{\ell+1}{k},\, \BLvec{\ell}{k}\bigr)$ \;
$\ULvec{\ell+1}{k+1}$, $\FLvec{\ell+1}{k+1} \longleftarrow$ SDCSweep$\bigl(\ULvec{\ell+1}{k},\,\FLvec{\ell+1}{k},\, \BLvec{\ell+1}{k} \bigr)$ \;
}
\BlankLine
\Comment{Cycle from coarse to fine}
\For{$\ell=L-1 \ldots 2$}{
\Comment{Interpolate coarse correction and re-evaluate}
\For{$m = 0 \ldots M$}{
$\UL{\ell}{k+1}{m} \longleftarrow \UL{\ell}{k+1}{m} +$ Interpolate$\bigl(\UL{\ell+1}{k+1}{m} - \ULtmp{\ell+1}{k}{m} \bigr)$ \;
$\FL{\ell}{k+1}{m} \longleftarrow$ FEval$\bigl(\UL{\ell}{k+1}{m}\bigr)$\;
}
$\ULvec{\ell}{k+1}$, $\FLvec{\ell}{k+1} \longleftarrow$ SDCSweep$\bigl(\ULvec{\ell}{k+1},\,\FLvec{\ell}{k+1},\, \BLvec{\ell}{k} \bigr)$ \;
}
\BlankLine
\Comment{Return to finest level before next iteration}
\For{$m = 0 \ldots M$}{
$\UL{1}{k+1}{m} \longleftarrow \UL{1}{k+1}{m} +$ Interpolate$\bigl(\UL{2}{k+1}{m} - \ULtmp{2}{k}{m} \bigr)$ \;
$\FL{1}{k+1}{m} \longleftarrow$ FEval$\bigl(\UL{1}{k+1}{m}\bigr)$\;
}
\BlankLine
\caption{MLSDC iteration for $L$ levels.}
\label{alg:mlsdc}
\end{algorithm}
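The plumbing of one iteration in the two-level case ($L=2$) can be condensed into a few lines of Python. The sketch below is our structural illustration only: all per-level operations (sweeps, coarse function evaluation, transfer, FAS) are passed in as placeholder callables.

```python
import numpy as np

def mlsdc_iteration(Uf, Ff, sweep_f, sweep_c, feval_c, restrict, interpolate, fas):
    # One two-level MLSDC V-cycle: fine sweep, restriction with FAS
    # correction, coarse sweep, then interpolation of the coarse correction.
    Uf, Ff = sweep_f(Uf, Ff, tau=None)      # step 1: fine sweep
    Uc = restrict(Uf)                       # step 2a: restrict
    Fc = feval_c(Uc)
    Uc_saved = Uc.copy()
    tau = fas(Uf, Ff, Uc, Fc)               # step 2b: FAS correction
    Uc, Fc = sweep_c(Uc, Fc, tau=tau)       # step 2c: coarse sweep
    Uf = Uf + interpolate(Uc - Uc_saved)    # step 3: coarse-grid correction
    # (a full implementation would now re-evaluate or interpolate the fine
    #  function values Ff before the next iteration)
    return Uf, Ff
```

Note that, as in the algorithm above, the interpolated quantity is the coarse correction relative to the saved restriction, not the coarse solution itself.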
\subsubsection{Semi-implicit MLSDC with compact stencils}\label{subsec:imex_linear}
In order to achieve higher-order accuracy with finite difference discretizations in space, the use of Mehr\-stellen discretizations is a common technique, especially when using multigrid methods~\cite{trottenberg_multigrid:_2000}. While the straightforward use of larger stencils leads to larger matrix bandwidths and higher communication costs during parallel runs, \emph{high-order compact} schemes allow for high-order accuracy with stencils of minimal extent~\cite{spotz_high-order_1996}. The compact stencil for a given discretization is obtained by approximating the leading-order error term by a finite difference approximation of the right-hand side, resulting in a weighting matrix.
Discretizing e.g.~the heat equation $\pdesol{u}_t = \nabla^2 \pdesol{u}$ in space\footnote{We adopt here and in the upcoming examples the following notation: Solutions of PDEs are
denoted with an underline, e.g. $\pdesol{u}$, and depend continuously on
one or more spatial variables and a time variable. Discretizing a
PDE in space by the method of lines results in an IVP with dimension $N$ equal to the
degrees of freedom of the spatial discretization. The solution of
such an IVP is a vector-valued function denoted by a lower case letter, e.g.~$\odesol{u}$, and
depends continuously on time. The numerical approximation of $\odesol{u}$
at some point in time $t_{m}$ is
denoted by a capital letter, e.g.~$\disc{U}_{m}^{k}$, where $k$ corresponds to the iteration number.}
yields
\begin{align}
Wu_t = Au\nonumber
\end{align}
with system matrix $A$ and weighting matrix $W$. Formally, the discrete Laplacian is given by $\inv{W}A$. Using this approach, a fourth-order approximation of the Laplacian can be achieved using only nearest neighbors (three-point stencil in 1D, nine-point stencil in 2D, 19-point stencil in 3D).
For further reading on compact schemes we refer to~\cite{lele_compact_1992,spotz_high-order_1996,trottenberg_multigrid:_2000}.
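As an example of such a scheme, consider the classical fourth-order compact discretization of $u_{xx}$ in 1D, where $W$ carries the three-point weighting stencil $(1, 10, 1)/12$ and $A$ the standard second-difference stencil $(1, -2, 1)/h^2$. The Python sketch below is our illustration for a periodic grid; the stencils used in the paper's experiments may differ in detail.

```python
import numpy as np

def compact_laplacian_1d(N, h):
    # Fourth-order compact discretization of u_xx on a periodic grid:
    # W u_xx = A u, so the discrete Laplacian is W^{-1} A, with
    # W = tridiag(1, 10, 1)/12 and A = tridiag(1, -2, 1)/h^2.
    idx = np.arange(N)
    A = np.zeros((N, N))
    W = np.zeros((N, N))
    A[idx, idx] = -2.0 / h**2
    A[idx, (idx + 1) % N] = 1.0 / h**2
    A[idx, (idx - 1) % N] = 1.0 / h**2
    W[idx, idx] = 10.0 / 12.0
    W[idx, (idx + 1) % N] = 1.0 / 12.0
    W[idx, (idx - 1) % N] = 1.0 / 12.0
    return W, A
```

Applying $\inv{W}A$ to a smooth function converges at fourth order even though both stencils only touch nearest neighbors.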
The presence of a weighting matrix requires some modifications to MLSDC. We start with the semi-implicit SDC update equation~\eqref{eq:imexsdc} given by
\begin{multline}
\label{eq:imexsdc_ii}
U^{k+1}_{m+1} = U^{k+1}_m
+ \Delta t_m
\bigl[ f^E(U^{k+1}_{m}, t_{m}) - f^E(U^k_{m}, t_{m}) \bigr] \\
+ \Delta t_m
\bigl[ f^I(U^{k+1}_{m+1}, t_{m+1}) - f^I(U^k_{m+1}, t_{m+1}) \bigr]
+ \Delta t\, S^{k}_m.
\end{multline}
Next, we assume a linear, autonomous implicit part $f^I(U,t) = f^I(U)= \inv{W}AU$ for a spatial vector $U$ with sparse matrices $W$ and $A$ stemming from the discretization of the Laplacian with compact stencils. Furthermore, we define
\begin{equation}
\tilde{f}^I(U) = AU\nonumber
\end{equation}
so that
\begin{equation}
\label{eq:f_tilde_def}
\tilde{f}^I(U) = Wf^I(U).
\end{equation}
With these definitions \eqref{eq:imexsdc_ii} becomes
\begin{multline}
\left(I-\Delta t_m\, \inv{W}A\right)U^{k+1}_{m+1} = U^{k+1}_m
+ \Delta t_m
\bigl[ f^E(U^{k+1}_{m}, t_{m}) - f^E(U^k_{m}, t_{m}) \bigr] \nonumber\\ - \Delta t_m\, \inv{W}AU^k_{m+1} + \Delta t\, S^{k}_m.\nonumber
\end{multline}
Since the operator $\left(I-\Delta t_m\, \inv{W}A\right)$ is not sparse, we
avoid computing with it by multiplying the equation above by $W$, so that
\begin{multline}
\label{eq:linear_imex_sdc_W}
\left(W-\Delta t_m\, A\right)U^{k+1}_{m+1} = WU^{k+1}_m
+ \Delta t_m
W\bigl[ f^E(U^{k+1}_{m}, t_{m}) - f^E(U^k_{m}, t_{m}) \bigr] \\ - \Delta t_m\, \tilde{f}^I(U^k_{m+1}) + \Delta t\, \tilde{S}^{k}_m
\end{multline}
where $\tilde{S}^{k}_m$ now represents the $m^{\rm th}$ row of ${\bm{S}} \tilde{{\bm{F}}}({\bm{U}}^k)$, using $Wf^E(U^k_m,t_{m})$ and $\tilde{f}^I(U^k_m)$ instead of $f^E(U^k_m,t_{m})$ and $f^I(U^k_m)$ as integrands, that is $\tilde{S}^{k}_m = \sum_{j=0}^M s_{m,j} \bigl( W f^E(U^k_j,t_{j}) + \tilde{f}^I(U^k_j) \bigr)$.
While this equation avoids the inversion of $W$, the computation of the residual does not. By equation~\eqref{eq:residual}, the $m^{\rm th}$ component of the residual at iteration $k$ reads either
\begin{equation}
r^k_m = U_0 + \Delta t \left({\bm{Q}} {\bm{F}}({\bm{U}}^k) \right)_m- U^k_m,\nonumber
\mathrm{e}nd{equation}
or, after multiplication with $W$,
\begin{equation}
Wr^k_m = WU_0 + \Delta t \left( {\bm{Q}} \tilde{{\bm{F}}}({\bm{U}}^{k})\right)_m - WU_m^k.\nonumber
\mathrm{e}nd{equation}
Both equations require the solution of a linear system with matrix $W$, either to compute the components of ${\bm{F}}({\bm{U}}^k)$ from~\eqref{eq:f_tilde_def} or to retrieve $r^k_m$ from $W r^k_m$.
Note that the subscript $m$ here denotes the $m^{\rm th}$ column.
That is, we either need to obtain $r^k_m$ from $Wr^k_m$ (in case $Wf^E$ is stored during the SDC sweep) or $f^I$ from $\tilde{f}^I$ (in case $f^E$ is stored). In either case, solving a linear system with the weighting matrix is inevitable for the computation of the formally correct residual.
Furthermore, evaluating \eqref{eq:tau_sdc} for the FAS correction also requires the explicit use of $f^E$ and $f^I = \inv{W}\tilde{f}^I$ to compute $R {\bm{Q}}_{\ell} {\bm{F}}_{\ell}({\bm{U}}_{\ell})$.
Moreover, from \eqref{eq:linear_imex_sdc_W} we note that weighted SDC sweeps on a coarse level $\ell+1$ require the computation of $W_{\ell+1}\tau_{\ell+1,m}$ on all coarse nodes ${\bm t}_{\ell+1}$ so that ${\bm{Q}}_{\ell+1} {\bm{F}}_{\ell+1}(R{\bm{U}}_{\ell})$ can be replaced by ${\bm{Q}}_{\ell+1} \tilde{{\bm{F}}}_{\ell+1}(R {\bm{U}}_{\ell})$.
For spatial discretizations in which both parts $f^E$ and $f^I$ of the right-hand side make use of weighting matrices $W^E$ and $W^I$, or e.g.~for finite element discretizations with a mass matrix, similar modifications to the MLSDC scheme must be made.
The investigation of MLSDC for finite element discretizations is left for future work.
\subsubsection{Coarsening strategies}\label{sec:mlsdc_spatial_coarsening}
The goal in MLSDC methods is to reduce the total cost of the method by performing
SDC sweeps on coarsened levels at reduced computational cost. In this section
we describe the three types of spatial coarsening used in the numerical examples:
\begin{enumerate}
\itemsep1em
\item {\sc Reduced resolution in space}: Use fewer degrees of freedom for the spatial representation (nodes, cells, points, particles, etc.) on the coarse levels.
This directly translates into significant computational savings for evaluations of $f$, particularly for 3D problems.
This approach requires spatial interpolation and restriction operators to transfer the solution between levels.
\item {\sc Reduced order in space}: Use a spatial discretization on the coarse levels that is of reduced order.
Lower-order finite difference stencils, for example, are typically cheaper to evaluate than higher-order ones, see~\cite{RuprechtKrause2012} for an application of this strategy for the time-parallel Parareal method.
\item {\sc Reduced implicit solve in space:} Use only a few iterations of a spatial solver in every substep, if an implicit or implicit-explicit method is used in the SDC sweeps.
By not solving the linear or nonlinear system in each SDC substep to full accuracy, savings in execution time can be achieved.
\end{enumerate}
We note that a fourth possibility not pursued here is to use a
simplified physical representation of the problem on coarse levels.
This approach requires a detailed understanding of the
problem to derive suitable coarse level models and appropriate coarsening
and interpolation operators.
Similar ideas have been studied for Parareal in~\cite{DaiEtAl2013_ESAIM,HautWingate2013}.
The spatial coarsening strategies outlined above can significantly
reduce the cost of a coarse level SDC substep, but do not affect the
number of substeps used. In principle, it is also possible to
reduce the number of quadrature nodes on coarser levels as in the ladder
schemes mentioned in the introduction.
In this paper, no such temporal coarsening is applied and we
focus on the application of spatial coarsening strategies which leads
to a large reduction of the runtime for coarse level sweeps.
\subsubsection{Transfer operators}\label{subsubsec:transfer}
In order to apply Strategy 1 and reduce the number of spatial degrees of freedom, transfer operators between different levels are required.
In the tests presented here, which are based on finite difference discretizations on simple Cartesian meshes, the spatial degrees of freedom are aligned, so that simple injection can be used for restriction.
We have observed that the order of the spatial interpolation used has a strong impact on the convergence of MLSDC.
While interpolation with global information transfer, e.g.~using spectral methods, does not influence the convergence properties of MLSDC, local Lagrangian interpolation for finite difference stencils has to be applied with care.
In numerical experiments not documented here, MLSDC with simple linear interpolation required twice as many iterations as MLSDC with fifth-order spatial interpolation.
Further, low resolutions in space combined with low-order interpolation led to significant degradation of the convergence speed of MLSDC, while high spatial resolutions were much less sensitive.
Throughout the paper, Strategy 1 is applied with third-order Lagrangian interpolation, which has proven sufficient in all cases studied here.
We note that the transfer operators would be different if e.g.~finite elements were used and operators between element spaces of different order and/or on different meshes would be required.
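For aligned Cartesian grids with factor-two coarsening, these operators are short. The Python sketch below is our illustration: it uses injection for restriction and, for brevity, linear instead of the third-order Lagrangian interpolation employed in the experiments.

```python
import numpy as np

def restrict_inject(u_fine):
    # Injection: the coarse points coincide with every other fine point.
    return u_fine[::2]

def interpolate_linear(u_coarse):
    # Factor-two linear interpolation on a periodic grid: coarse values are
    # copied, and the new fine points are midpoint averages of their neighbors.
    n = len(u_coarse)
    u_fine = np.empty(2 * n)
    u_fine[::2] = u_coarse
    u_fine[1::2] = 0.5 * (u_coarse + np.roll(u_coarse, -1))
    return u_fine
```

By construction, restriction after interpolation recovers the coarse values exactly.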
\subsubsection{Stability of SDC and MLSDC}
Stability domains for SDC are presented in e.g.~\cite{dutt2000spectral}.
The stability of semi-implicit SDC is addressed in~\cite{minion2003semi} and the issue of order reduction for stiff problems is discussed.
Split SDC methods are further analyzed theoretically and numerically in~\cite{HagstromZhou2006}.
A stability analysis for MLSDC is complicated by the fact that it would need to consider the effects of the different spatial coarsening strategies laid out in \S\ref{sec:mlsdc_spatial_coarsening}.
Therefore, it cannot simply use Dahlquist's test equation but has to resort to well-defined PDE examples in order to assess stability.
Hence, the stability results presented here for MLSDC are experimental; the development of a theory for the convergence properties of MLSDC is ongoing work.
In all examples presented below, however, the stability properties of SDC and MLSDC appeared to be comparable, although a comprehensive analysis is left for future work.
\section{Numerical Examples}\label{sec:num_examples}
In this section we investigate the performance of MLSDC for four numerical examples.
First, in order to demonstrate that the FAS correction in MLSDC does not break down for hyperbolic problems per se, performance for the 1D wave equation is studied in \S\ref{subsec:wave_eq}.
To investigate performance for a nonlinear problem, MLSDC is then applied to the 1D viscous Burgers' equation in \S\ref{subsec:visc_burg}.
A detailed investigation of different error components is given and we verify that the FAS corrections allow the solutions on coarse levels to converge to the accuracy determined by the discretization on the \emph{finest} level.
The 2D Navier-Stokes equations in vorticity-velocity form are solved in \S\ref{subsec:shear_layer}, showing again a reduction of the number of required iterations by MLSDC, although using a coarsened spatial resolution is found to have a negative impact on convergence if the fine level is already under-resolved.
In \S\ref{subsec:visc_burger3d}, a {\sc fortran} implementation of MLSDC is applied to the three-dimensional Burgers' equation and it is demonstrated that the reduction in fine level sweeps translates into a significant reduction of computing time.
Throughout all examples, we make use of a linear geometric multigrid solver~\cite{chow2006,trottenberg_multigrid:_2000} with JOR relaxation in 3D and SOR relaxation in 1D and 2D as smoothers, to solve the linear problems in the implicit part as well as to solve the linear system with the weighting matrix for the residual and the FAS correction.
The parallel implementation of the multigrid solver used for the last example is described in~\cite{bolten:2014}.
In the examples below, we compare the number of sweeps on the fine and most expensive level required by SDC or MLSDC to converge up to a set tolerance.
For SDC, which sweeps only on the fine level, this number is identical to the number of iterations.
For MLSDC, each iteration consists of one cycle through the level hierarchy, starting from the finest level, going up to the coarsest and then down again, with one SDC sweep on each level on the way up and down, cf. Algorithm~\ref{alg:mlsdc}.
Except for the last iteration, the final fine sweep is also the first fine sweep of the next iteration, so that for MLSDC the number of fine sweeps is equal to the number of iterations plus one.
Note that a factor of two coarsening in the spatial resolution in each dimension yields a factor of eight reduction in degrees of freedom in three dimensions, which makes coarse level sweeps significantly less expensive.
\subsection{Wave equation}\label{subsec:wave_eq}
For spatial multigrid, the FAS formalism is mostly derived and analyzed for stationary elliptic or parabolic problems, although there are examples of applications to hyperbolic problems as well~\cite{Alam2006,south1977}.
Here, as a first test, we investigate the performance of MLSDC for a simple 1D wave equation to verify that the FAS procedure as used in MLSDC does not break down for a hyperbolic problem per se.
The problem considered here, with the wave equation written as a first order system, reads
\begin{align}
u_{t}(x,t) + v_{x}(x,t) &= 0 \nonumber \\
v_{t}(x,t) + u_{x}(x,t) &= 0 \nonumber
\end{align}
on $x \in [0,1]$ with periodic boundary conditions and
\begin{equation}
u(x,0) = \exp\left( -\frac{1}{2} \left( \frac{x - 0.5}{0.1} \right)^{2} \right), \quad v(x,0) = 0\nonumber
\end{equation}
for $0 \leq t \leq T$. For the spatial derivatives, centered differences of $4^{\rm th}$ order with $128$ points are used on the fine level and of $2^{\rm nd}$ order with $64$ points on the coarse.
Both SDC and MLSDC perform $40$ timesteps of length $\Delta t = 0.025$ to integrate up to $T=1.0$ and iterations on each step are performed until $\left\| {\bm{r}}^k \right\|_{\infty} \leq 5 \times 10^{-8}$.
The average number of fine level sweeps over all steps for SDC and MLSDC is shown in Table~\ref{tab:wave_eq} for three different values of $M$.
In all cases, MLSDC leads to savings in terms of required fine level sweeps.
We note that for a fine level spatial resolution of only $64$ points, using spatial coarsening has a significant negative effect on the performance of MLSDC (not documented here). This suggests that for a problem which is spatially under-resolved on the finest level, further coarsening the spatial resolution within MLSDC might hurt performance; see also~\S\ref{subsec:shear_layer}.
\begin{table}[th]
\centering
\begin{tabular}{|c|c|c|} \hline
$M$ & SDC & MLSDC(1,2) \\ \hline
$3$ & 18.5 & 11.1 \\
$5$ & 17.6 & 10.6 \\
$7$ & 14.3 & 8.2 \\ \hline
\mathrm{e}nd{tabular}
\caption{Average number of fine level sweeps over all time-steps of SDC and MLSDC for the wave equation example to reach a residual of $\left\| {\bm{r}}^{k} \right\|_{\infty} \leq 5 \times 10^{-8}$. The numbers in parentheses after MLSDC indicate the coarsening strategies used, see~\S\ref{sec:mlsdc_spatial_coarsening}.}\label{tab:wave_eq}
\end{table}
\subsection{1D viscous Burgers' equation}\label{subsec:visc_burg}
In this section we investigate the effect of coarsening in MLSDC
by considering the nonlinear viscous Burgers' equation
\begin{align}
\pdesol{u}_{t} + \pdesol{u}\cdot\pdesol{u}_{x} &= \nu \pdesol{u}_{xx}, \ x \in [-1,1], \ t \in [0,t_{\rm end}] \nonumber \\
\pdesol{u}(x,0) &= u^{0}(x) \label{eq:pde} \\
\pdesol{u}(-1,t) &= \pdesol{u}(1,t), \nonumber
\end{align}
with $\nu > 0$ and initial condition
\begin{equation}
u^{0}(x) = \exp\left( -\frac{x^2}{\sigma^2} \right), \quad \sigma = 0.1\nonumber
\end{equation}
corresponding to a Gaussian peak strongly localized around $x=0$.
We denote the evaluation of the continuous function $\pdesol{u}$ on a given spatial mesh with points $(x_{i})_{i=1,\ldots,N}$ with a subscript $N$, so that
\begin{equation}
\pdesol{u}_{N}(t) := \left( \pdesol{u}(x_{i}, t) \right)_{i=1,\ldots,N} \in \mathbb{R}^{N}.\nonumber
\mathrm{e}nd{equation}
Discretization of~\eqref{eq:pde} in space then yields an initial value problem
\begin{align}
\label{eq:ivp}
\odesol{u}_{t}(t) &= f_{N}(\odesol{u}(t)), \quad \odesol{u}(t) \in \mathbb{R}^{N}, \quad t \in [0,t_{\rm end}] \nonumber \\
\odesol{u}(0) &= \pdesol{u}^{0}_{N}
\end{align}
with solution $\odesol{u}$. Finally, we denote by $\disc{u}_{N, M, \Delta t,k} \in \mathbb{R}^{N}$ the result of solving~\eqref{eq:ivp} with $k$ iterations of MLSDC using a timestep of $\Delta t$, $M$ substeps (or $M+1$ Lobatto collocation nodes), and an $N$-point spatial mesh on the finest level over one time step.
Two runs are performed here, solving~\eqref{eq:pde} with $\nu=1.0$ and $\nu=0.1$ with a single MLSDC timestep $t_{\rm end} = \Delta t = 0.01$.
MLSDC with two levels and $7$ Gauss-Lobatto collocation nodes is used, with a spatial mesh of $N = 256$ points on the fine level and $N=128$ on the coarse level (Strategy 1).
The advective term is discretized using a $5^{\rm th}$-order WENO finite difference method~\cite{JiangShu1996} on the fine level and a simple $1^{\rm st}$-order upwind scheme on the coarse level.
For the Laplacian, a $4^{\rm th}$-order compact stencil is used on the fine level and a $2^{\rm nd}$-order stencil is used on the coarse level (Strategy 2).
The advective term is treated explicitly while the diffusion term is treated implicitly.
The resulting linear system is solved using a linear multigrid solver with a tolerance of $5\times10^{-14}$ on the fine level but solved only approximately using a single V-cycle on the coarse level (Strategy 3).
A fixed number of $K=80$ MLSDC iterations is performed here without setting a tolerance for the MLSDC residual.
In order to assess the different error components, a reference PDE solution $\pdesol{u}_{N}(\Delta t)$ is computed with a single-level SDC scheme on a mesh with $N=1024$ points using $M+1=9$ collocation nodes and $\Delta t = 10^{-4}$.
An ODE solution $\odesol{u}(\Delta t)$ is computed by running single-level SDC using $M+1=9$, $\Delta t = 10^{-4}$ and the same spatial discretization as on the fine level of the MLSDC run.
Finally, the collocation solution $\odesol{u}^{\rm coll}(\Delta t)$ is computed by performing $100$ iterations of single-level SDC with $M+1=7$ and again the same spatial discretization as the MLSDC fine level.
Reference ODE and collocation solutions are computed for the coarse level using the same parameters and the MLSDC coarse level spatial discretization.
\subsubsection{Error components in MLSDC}\label{subsubsec:error_comp}
The relative error of the fully discrete MLSDC solution to the analytical solution $\pdesol{u}$ of the PDE~\eqref{eq:pde} after a single timestep of length $\Delta t$ is given by
\begin{equation}
\label{eq:pde_error}
\varepsilon^{\rm PDE} := \frac{ \left\| \pdesol{u}_{N}(\Delta t) - \disc{u}_{N, M, \Delta t,k} \right\| }{ \left\| \pdesol{u}_{N}(\Delta t) \right\| },
\end{equation}
where $\left\| \cdot \right\|$ denotes some norm on $\mathbb{R}^{N}$.
All errors are hereafter reported using the maximum norm $\left\| \cdot \right\|_{\infty}$.
The error $\varepsilon^{\rm PDE}$ includes contributions from three sources:
\begin{align}
\epsilon_{N} &:= \frac{ \left\| \pdesol{u}_{N}(\Delta t) - \odesol{u}(\Delta t) \right\|}{\left\| \pdesol{u}_{N}(\Delta t) \right\|} \approx \text{(i) -- relative spatial error},\nonumber \\
\epsilon_{\Delta t} &:= \frac{ \left\| \odesol{u}(\Delta t) - \odesol{u}^{\rm coll}(\Delta t) \right\| }{ \left\| \pdesol{u}_{N}(\Delta t) \right\| } \approx \text{(ii) -- relative temporal error},\nonumber \\
\varepsilon^{\rm coll} &:= \frac{ \left\| \odesol{u}^{\rm coll}(\Delta t) - \disc{u}_{N,M,\Delta t,k} \right\| }{ \left\| \pdesol{u}_{N}(\Delta t) \right\|} \approx \text{(iii) -- iteration error},\nonumber
\end{align}
with $\odesol{u}^{\rm coll}$ denoting the exact solution of the collocation equation~\eqref{eq:compact}.
Here, (i) is the spatial discretization error; (ii) is the temporal discretization error, i.e. the error from replacing the analytical Picard formulation~\eqref{eq:picard} with the discrete collocation problem~\eqref{eq:compact}; and (iii) is the error from solving the collocation equation only approximately with the MLSDC iteration.
The PDE error~\eqref{eq:pde_error} can be estimated using the triangle inequality according to
\begin{equation}
\varepsilon^{\rm PDE} \leq \epsilon_{N} + \epsilon_{\Delta t} + \varepsilon^{\rm coll}.\nonumber
\end{equation}
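For concreteness, this decomposition can be evaluated directly from solution vectors; the sketch below (with hypothetical array arguments standing in for the reference, ODE, collocation, and MLSDC solutions) computes the three components and the total PDE error in the maximum norm:

```python
import numpy as np

def error_components(u_pde, u_ode, u_coll, u_mlsdc):
    """Relative errors in the maximum norm, all normalized by the
    reference PDE solution: spatial (eps_N), temporal (eps_dt),
    iteration (eps_coll), and total PDE error (eps_pde)."""
    ref = np.linalg.norm(u_pde, ord=np.inf)
    eps_N = np.linalg.norm(u_pde - u_ode, ord=np.inf) / ref
    eps_dt = np.linalg.norm(u_ode - u_coll, ord=np.inf) / ref
    eps_coll = np.linalg.norm(u_coll - u_mlsdc, ord=np.inf) / ref
    eps_pde = np.linalg.norm(u_pde - u_mlsdc, ord=np.inf) / ref
    return eps_N, eps_dt, eps_coll, eps_pde
```

By construction, the triangle inequality `eps_pde <= eps_N + eps_dt + eps_coll` always holds for these quantities.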
In addition to the PDE error, we define the error between the MLSDC solution and the analytical solution of the semi-discrete ODE~\eqref{eq:ivp} as
\begin{equation}
\label{eq:temp_error}
\varepsilon^{\rm ODE} := \frac{ \left\| \odesol{u}(\Delta t) - \disc{u}_{N,M,\Delta t,k} \right\| }{ \left\| \pdesol{u}_{N}(\Delta t) \right\| }
\leq \epsilon_{\Delta t} + \varepsilon^{\rm coll}.
\end{equation}
Note that $\varepsilon^{\rm ODE}$ contains contributions from (ii) and (iii); once the MLSDC iteration has converged, error~\eqref{eq:temp_error} reduces to the error arising from replacing the exact Picard integral~\eqref{eq:picard} by the collocation formula~\eqref{eq:compact}.
The three error components of MLSDC, $\varepsilon^{\rm PDE}$, $\varepsilon^{\rm ODE}$ and $\varepsilon^{\rm coll}$, are expected to saturate at different levels as $k \to \infty$ according to
\begin{align}
\varepsilon^{\rm PDE} &\to \max\{ \epsilon_{N}, \epsilon_{\Delta t} \},\nonumber \\
\varepsilon^{\rm ODE} &\to \epsilon_{\Delta t}, \text{ and }\nonumber \\
\varepsilon^{\rm coll} &\to 0.\nonumber
\end{align}
The crucial point here is that, due to the FAS correction included in MLSDC, we expect $\varepsilon^{\rm PDE}$, $\varepsilon^{\rm ODE}$ and $\varepsilon^{\rm coll}$ on \emph{all} levels to saturate at values of $\epsilon_{N}$ and $\epsilon_{\Delta t}$ determined by the discretization used on the \emph{finest} level.
That is, the FAS correction should allow MLSDC to represent the solution on all coarse levels to the same accuracy as on the finest level.
This is verified in \S\ref{subsubsec:all_level_conv}.
\subsubsection{Convergence of MLSDC on all levels}\label{subsubsec:all_level_conv}
Figure~\ref{fig:burgers} shows the three error components $\varepsilon^{\rm PDE}$ (green squares), $\varepsilon^{\rm ODE}$ (blue diamonds) and $\varepsilon^{\rm coll}$ (red circles) for $\nu=0.1$ (upper) and $\nu = 1.0$ (lower) plotted against the iteration number $k$.
The errors on the fine level are shown on the left in Figures~\ref{fig:burgers_a} and~\ref{fig:burgers_c}, while errors on the coarse mesh are shown on the right.
Furthermore, the estimated spatial discretization error $\epsilon_{N}$ (dashed) and temporal discretization error $\epsilon_{\Delta t}$ (dash-dotted) are indicated by black lines.
For $\nu=0.1$, we note that the PDE error $\varepsilon^{\rm PDE}$ on the fine level (Figure~\ref{fig:burgers_a}) saturates -- as expected -- at a level determined by the spatial discretization error $\epsilon_{N}$, and the ODE error $\varepsilon^{\rm ODE}$ saturates at the level of the temporal discretization error $\epsilon_{\Delta t}$. The collocation error $\varepsilon^{\rm coll}$ saturates near machine accuracy.
Increasing the viscosity to $\nu=1.0$, the spatial error remains at about $10^{-7}$ on the fine level but the time discretization error significantly increases compared to $\nu=0.1$.
Thus in Figure~\ref{fig:burgers_c}, both the PDE and the ODE error saturate at the value indicated by $\epsilon_{\Delta t}$.
Once again, the collocation error goes down to machine accuracy, although the rate of convergence is somewhat slower compared to $\nu=0.1$.
On the coarse level (Figures~\ref{fig:burgers_b} and~\ref{fig:burgers_d}), the estimated spatial error $\epsilon_{N}$ is noticeably higher because the values of $N$ are smaller and the order of the spatial discretization is lower.
However, as expected, the coarse level error of MLSDC saturates at values determined by the accuracy of the \emph{finest} level. The saturation levels of $\varepsilon^{\rm PDE}$ and $\varepsilon^{\rm ODE}$ are identical in the left and right figures, despite the difference in $\epsilon_{N}$ and $\epsilon_{\Delta t}$.
This demonstrates that the FAS correction in MLSDC allows the solutions on coarse levels to obtain the accuracy of the finest level as long as sufficiently many iterations are performed.
\begin{figure}[th]
\centering
\subfloat[Errors on fine level for $\nu = 0.1$.\label{fig:burgers_a}]{\includegraphics[width=0.48\textwidth,type=pdf,ext=.pdf,read=.pdf]{error_01_f_vc}}
\subfloat[Errors on coarse level for $\nu = 0.1$.\label{fig:burgers_b}]{\includegraphics[width=0.48\textwidth,type=pdf,ext=.pdf,read=.pdf]{error_01_c_vc}}\newline \centering
\subfloat[Errors on fine level for $\nu = 1.0$.\label{fig:burgers_c}]{\includegraphics[width=0.48\textwidth,type=pdf,ext=.pdf,read=.pdf]{error_1_f_vc}}
\subfloat[Errors on coarse level for $\nu=1.0$.\label{fig:burgers_d}]{\includegraphics[width=0.48\textwidth,type=pdf,ext=.pdf,read=.pdf]{error_1_c_vc}}
\caption{Errors on fine and coarse level of MLSDC vs. iteration count.
The dashed line indicates the spatial error $\epsilon_{N}$ while the dot-dashed line indicates the temporal error $\epsilon_{\Delta t}$.
The red circles indicate the difference $\varepsilon^{\rm coll}$ between MLSDC and the collocation solution,
the blue diamonds indicate the difference $\varepsilon^{\rm ODE}$ between MLSDC and the ODE solution and the green squares indicate the difference $\varepsilon^{\rm PDE}$ between MLSDC and the PDE solution.
In (c) and (d), $\varepsilon^{\rm ODE}$ is nearly identical to $\varepsilon^{\rm PDE}$.
Note how the FAS correction in MLSDC allows the coarse level to attain the same accuracy as the fine level solution:
the saturation limits on the fine and coarse mesh are identical.}
\label{fig:burgers}
\end{figure}
\subsubsection{Required iterations}
Table~\ref{tab:burger_it} shows the number of fine level sweeps required by SDC and MLSDC to reduce the infinity norm of the residual ${\bm{r}}^k$, see~\eqref{eq:residual}, below $10^{-5}$.
For both setups, $\nu=0.1$ as well as $\nu=1.0$, MLSDC reduces the number of required fine sweeps compared to single-level SDC.
In turn, however, MLSDC adds some overhead from coarse level sweeps.
If these are cheap enough, the reduced iteration number will result in reduced computing time, cf. \S\ref{subsec:visc_burger3d}.
\begin{table}[t]
\centering
\begin{tabular}{|c|c|c|c|}\hline
\multicolumn{2}{|c|}{$\nu=0.1$} & \multicolumn{2}{|c|}{$\nu=1.0$} \\ \hline
Method & \# Fine sweeps & Method & \# Fine sweeps \\ \hline
SDC & 4 & SDC & 12 \\
MLSDC & 3 & MLSDC & 7 \\ \hline
\end{tabular}
\caption{Number of fine level sweeps required to reach a residual of $\left\| {\bm{r}}^{k} \right\|_{\infty} \leq 10^{-5}$ for SDC and multi-level SDC for Burgers' equation with $\nu=0.1$ and $\nu=1.0$.}\label{tab:burger_it}
\end{table}
\subsubsection{Stopping criteria} Note that the overall PDE error of the solution is not reduced further by additional iterations once $\varepsilon^{\rm coll} \leq \max\{ \epsilon_{N}, \epsilon_{\Delta t} \}$.
In Figures~\ref{fig:burgers_a}--\ref{fig:burgers_d}, this corresponds to the point where the line with red circles (iteration error) drops below the dot-dashed line (indicating $\epsilon_{\Delta t}$) or the dashed line (indicating $\epsilon_{N}$).
The MLSDC solution, however, continues to converge to the collocation solution.
In a scenario where the PDE error is the main criterion for the quality of a solution, iterating beyond this point no longer improves the solution.
This suggests adaptively setting the tolerance for the residual of the MLSDC iteration in accordance with error estimators for $\epsilon_{N}$ and $\epsilon_{\Delta t}$ to avoid unnecessary further iterations.
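Such an adaptive stopping rule can be sketched as follows: assuming estimators for $\epsilon_{N}$ and $\epsilon_{\Delta t}$ are available, the iteration stops once the residual drops below the larger of the two. The sweep function and estimator values here are placeholders, not part of the paper's implementation.

```python
def sweep_until_saturation(sweep, r0, eps_N_est, eps_dt_est, k_max=100):
    """Perform (ML)SDC-like sweeps until the residual falls below the
    estimated discretization error; further sweeps would only reduce
    the iteration error, not the overall PDE error."""
    tol = max(eps_N_est, eps_dt_est)
    r = r0
    for k in range(1, k_max + 1):
        r = sweep(r)        # one sweep, returning the new residual norm
        if r <= tol:
            return k, r
    return k_max, r
```

With a sweep that halves the residual each iteration and estimators $10^{-3}$ and $10^{-4}$, the loop stops as soon as $2^{-k} \leq 10^{-3}$.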
\subsection{Shear layer instability}\label{subsec:shear_layer}
In this example, we study the behavior of MLSDC in the case where the exact solution is not
well resolved.
We consider a shear layer instability in a 2D doubly periodic domain governed by the vorticity-velocity formulation of the 2D Navier-Stokes equations given by
\begin{align}
\pdesol{\omega}_t + \pdesol{u}\cdot\nabla\pdesol{\omega} = \nu\nabla^{2}\pdesol{\omega}\nonumber
\end{align}
with velocity $\pdesol{u}\in\mathbb{R}^2\times[0,\infty)$, vorticity $\pdesol{\omega} = \nabla\times\pdesol{u}\in\mathbb{R}\times[0,\infty)$ and viscosity $\nu\in\mathbb{R}^{+}$. We consider the spatial domain $[0,1]^2$ with periodic boundary conditions in all directions and the initial conditions
\begin{align}
\pdesol{u}_1^0(x,y) &= -1.0 + \tanh(\rho(0.5 - y)) + \tanh(\rho(y - 0.25))\nonumber\\
\pdesol{u}_2^0(x,y) &= -\delta \sin(2\pi(x + 0.25)).\nonumber
\end{align}
These initial conditions correspond to two horizontal shear layers, of ``thickness'' $\rho = 50$, at $y = 0.75$ and $y = 0.25$, with a disturbance of magnitude $\delta = 0.05$ in the vertical velocity $\pdesol{u}_2$.
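The initial velocity field can be set up in a few lines. The sketch below evaluates the formulas above on a cell-centered mesh; the resolution and the cell-centered placement of the points are our assumptions for illustration (the runs in this section use a $128\times128$ fine mesh).

```python
import numpy as np

n = 128                                   # illustrative mesh resolution
x = (np.arange(n) + 0.5) / n              # assumed cell-centered points in [0,1]
X, Y = np.meshgrid(x, x, indexing="ij")
rho, delta = 50.0, 0.05                   # layer thickness and perturbation size

# Two horizontal shear layers with a sinusoidal vertical disturbance
u1 = -1.0 + np.tanh(rho * (0.5 - Y)) + np.tanh(rho * (Y - 0.25))
u2 = -delta * np.sin(2.0 * np.pi * (X + 0.25))
```

The horizontal velocity $u_1$ transitions between $-1$ and $+1$ across the layers, while $|u_2|$ stays bounded by $\delta$.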
As in \S\ref{subsec:visc_burg}, the system is split into implicit/explicit parts according to
\begin{align}
\pdesol{\omega}_t &= f^E(\pdesol{\omega}) + f^I(\pdesol{\omega})\nonumber
\end{align}
where
\begin{align}
f^E(\pdesol{\omega}) &= -\pdesol{u}\cdot\nabla\pdesol{\omega}\nonumber\\
f^I(\pdesol{\omega}) &= \nu\nabla^{2}\pdesol{\omega}.\nonumber
\end{align}
While the implicit term $f^I$ is discretized and solved as before, we apply a streamfunction approach for the explicit term $f^E$: for periodic boundary conditions, the divergence-free velocity can be written as $\pdesol{u} = \nabla\times\pdesol{\psi}$ for a streamfunction $\pdesol{\psi}$. Thus,
\begin{align}
\pdesol{\omega} = \nabla\times(\nabla\times\pdesol{\psi}) = -\nabla^{2}\pdesol{\psi}.\nonumber
\end{align}
We refer to~\cite{chorin_mathematical_1990} for more details.
To compute $f^E_{p,N}(\odesol{\omega})$ with order-$p$ operators on an $N\times N$ mesh, we therefore solve the Poisson problem
\begin{align}
-\nabla^{2}\pdesol{\psi} = \pdesol{\omega}\nonumber
\end{align}
for $\pdesol{\psi}$ using the linear multigrid method described previously, calculate the discretized version of $\pdesol{u} = \nabla\times\pdesol{\psi}$ and finally compute the discretization of $\pdesol{u}\cdot\nabla\pdesol{\omega}$, both with order-$p$ operators.
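A compact way to sketch this sequence of steps on the doubly periodic domain is given below. For brevity the Poisson solve and the derivatives are done spectrally with FFTs instead of the multigrid solver and finite difference operators used in the paper, and the function name is hypothetical.

```python
import numpy as np

def velocity_from_vorticity(omega):
    """Given vorticity omega on an n x n doubly periodic mesh of [0,1]^2,
    solve -lap(psi) = omega for the streamfunction psi and return
    u = curl(psi) = (d psi/dy, -d psi/dx)."""
    n = omega.shape[0]
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=1.0 / n)   # integer wavenumbers times 2*pi
    kx, ky = np.meshgrid(k, k, indexing="ij")
    k2 = kx**2 + ky**2
    k2[0, 0] = 1.0                                   # guard the mean mode
    psi_hat = np.fft.fft2(omega) / k2                # -lap in Fourier space is k2
    psi_hat[0, 0] = 0.0                              # fix psi to have zero mean
    u1 = np.real(np.fft.ifft2(1j * ky * psi_hat))    # d psi / dy
    u2 = np.real(np.fft.ifft2(-1j * kx * psi_hat))   # -d psi / dx
    return u1, u2
```

For a smooth single-mode vorticity field this reproduces the exact velocity to machine precision, which makes it a convenient consistency check for any Poisson-based implementation of this step.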
Two levels with $M+1=9$ collocation nodes are used with a $128 \times 128$ point spatial mesh and a $4^{\rm th}$-order stencil on the fine level.
Different combinations of coarsening are tested (the numbers in parentheses correspond to the strategies as listed in \S\ref{sec:mlsdc_spatial_coarsening}):
\begin{enumerate}
\item MLSDC(1,2) uses a coarsened $64 \times 64$ point mesh on the coarse level and second-order stencils.
\item MLSDC(1,2,3(1)) is as MLSDC(1,2) but additionally solves the implicit linear systems in the coarse SDC sweep only approximately with a single V-cycle.
\item MLSDC(1,2,3(2)) is as MLSDC(1,2,3(1)) but with two V-cycles.
\item MLSDC(2,3(1)) also uses a $128 \times 128$ point mesh on the coarse level, but with second-order stencils and approximate linear solves using a single V-cycle.
\end{enumerate}
The simulation computes $256$ timesteps of MLSDC up to a final time $t=1.0$.
As reference, a classical SDC solution is computed using $1024$ timesteps with $M+1=13$ collocation nodes and the fine level spatial discretization.
Both SDC and MLSDC iterate until the residual satisfies $\left\| {\bm{r}}^k \right\|_{\infty} \leq 10^{-12}$.
\subsubsection{Vorticity field on all levels}
Figure~\ref{fig:sli_field} shows the vorticity field at the end of the simulation on the fine and the coarse level.
The relative maximum error $\varepsilon^{\rm ODE}$ at time $t=1$ is approximately $10^{-12}$, which corresponds to the residual thresholds used for all runs in this example.
We note that simply running SDC with the coarse level spatial discretization from MLSDC(1,2) gives completely unsatisfactory results (not shown): spurious vortices exist in addition to the two correct vortices and strong spurious oscillations are present in the vorticity field.
In contrast, the coarse level solution from MLSDC shown in Figure~\ref{fig:sli_field_c} looks reasonable, again because of the FAS correction.
\begin{figure}[t!]
\centering
\subfloat[MLSDC, fine level: $128\times 128$, $p=4$\label{fig:sli_field_b}]{\includegraphics[width=0.48\textwidth,type=pdf,ext=.pdf,read=.pdf]{mlsdc_NV16384_N512_M9_nl3_qt1_te2_dt0-0039062_np6_tc0_sc3_la1E-04_level01_vfield}}
\subfloat[MLSDC, coarse level: $64\times 64$, $p=2$\label{fig:sli_field_c}]{\includegraphics[width=0.48\textwidth,type=pdf,ext=.pdf,read=.pdf]{mlsdc_NV16384_N512_M9_nl3_qt1_te2_dt0-0039062_np6_tc0_sc3_la1E-04_level02_vfield}}
\caption{Vorticity of the solution of the shear layer instability at $t=1.0$ on the fine level (left) and coarse level (right) using MLSDC(1,2,3(1)).}
\label{fig:sli_field}
\end{figure}
\subsubsection{Required iterations}
Table~\ref{tab:vortex_it} shows the average number of fine level sweeps over all timesteps required by SDC and MLSDC to converge.
The configurations MLSDC(1,2), MLSDC(1,2,3(1)) and MLSDC(1,2,3(2)) do not reduce the number of sweeps, but instead lead to a small increase.
Avoiding a coarsened spatial mesh in MLSDC(2,3(1)), however, saves a small number of fine sweeps compared to SDC.
Note that here, in contrast to the example presented in \S\ref{subsec:visc_burger3d}, Strategy 1 has a significant negative impact on the performance of MLSDC.
This illustrates that coarsening in MLSDC cannot be used in the same way for every problem: a careful adaptation of the employed strategies to the problem at hand is necessary.
\begin{table}[th]
\centering
\begin{tabular}{|c|c|} \hline
Method & \# Fine sweeps on average \\ \hline
SDC & 6.46 \\
MLSDC(1,2) & 6.64 \\
MLSDC(1,2,3(1)) & 6.62 \\
MLSDC(1,2,3(2)) & 6.64 \\
MLSDC(2,3(1)) & 5.26 \\ \hline
\end{tabular}
\caption{Average number of fine level sweeps required to converge for SDC and MLSDC for the shear layer instability. The numbers indicate the different coarsening strategies.}\label{tab:vortex_it}
\end{table}
\subsection{Three-dimensional viscous Burgers' equation}\label{subsec:visc_burger3d}
To demonstrate that MLSDC can not only reduce iterations but also runtime, we consider viscous Burgers' equation in three dimensions
\begin{equation}
\pdesol{u}_{t}(\textbf{x},t) + \pdesol{u}(\textbf{x},t) \cdot \nabla \pdesol{u}(\textbf{x},t)= \nu \nabla^2 \pdesol{u}(\textbf{x},t), \quad \textbf{x} \in [0,1]^{3}, \quad 0 \leq t \leq 1\nonumber
\end{equation}
with $\textbf{x} = (x,y,z)$, initial value
\begin{equation}
\pdesol{u}(\textbf{x}, 0) = \exp\left( -\frac{(x-0.5)^2 + (y-0.5)^2 + (z-0.5)^2}{\sigma^2} \right), \quad \sigma=0.1,\nonumber
\end{equation}
homogeneous Dirichlet boundary conditions and diffusion coefficients $\nu=0.1$ and $\nu=1.0$.
The problem is solved using a {\sc fortran} implementation of MLSDC combined with a {\sc c} implementation of a parallel multigrid solver (PMG) in space~\cite{bolten:2014}.
A single timestep of length $\Delta t = 0.01$ is performed with MLSDC, corresponding to CFL numbers from the diffusive term on the fine level, that is
\begin{equation}
C_{\rm diff} := \frac{\nu \Delta t}{\Delta x^{2}},\nonumber
\end{equation}
of about $C_{\rm diff}=66$ (for $\nu=0.1$) and $C_{\rm diff} = 656$ (for $\nu=1.0$).
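These values follow directly from the definition; the quick check below assumes a mesh width of $\Delta x = 1/256$ for the $255^{3}$-point fine mesh with Dirichlet boundaries (an assumption about how the mesh width is defined here).

```python
def diffusive_cfl(nu, dt, dx):
    """Diffusive CFL number C_diff = nu * dt / dx**2."""
    return nu * dt / dx ** 2

dx = 1.0 / 256             # assumed mesh width for 255^3 interior points
c1 = diffusive_cfl(0.1, 0.01, dx)   # about 66
c2 = diffusive_cfl(1.0, 0.01, dx)   # about 656
```

With these assumptions the two runs give $C_{\rm diff} \approx 65.5$ and $\approx 655.4$, consistent with the quoted values of about $66$ and $656$.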
The diffusion term is integrated implicitly using PMG to solve the corresponding linear system and the advection term
is treated explicitly.
Simulations are run on $512$ cores on the IBM BlueGene/Q JUQUEEN at the J\"ulich Supercomputing Centre.
MLSDC is run with $M+1=3$, $M+1=5$ and $M+1=7$ Gauss-Lobatto nodes with a tolerance for the residual of $10^{-5}$.
Two MLSDC levels are used with all three types of coarsening applied:
\begin{enumerate}
\item The fine level uses a $255^{3}$ point mesh and the coarse level $127^{3}$.
\item A $4^{\rm th}$-order compact difference stencil for the Laplacian and a $5^{\rm th}$-order WENO scheme~\cite{JiangShu1996} for the advection term are used on the fine level; a $2^{\rm nd}$-order stencil for the Laplacian and a $1^{\rm st}$-order upwind scheme for advection are used on the coarse level.
\item The accuracy of the implicit solve on the coarse level is varied by fixing the number of V-cycles of PMG on this level.
\end{enumerate}
Three runs are performed, each with a different number of V-cycles on the coarse level.
In the first run, the coarse level linear systems are solved to full accuracy, whereas
the second and third runs use one and two V-cycles of PMG on the coarse level, respectively, instead of solving to full accuracy. These cases are
referred to as MLSDC(1,2), MLSDC(1,2,3(1)), and MLSDC(1,2,3(2)).
On the fine level, implicit systems are always solved to full accuracy (the PMG multigrid iteration continues until it reaches a tolerance of $10^{-12}$ or stalls).
\paragraph{Required iterations and runtimes.}
Table~\ref{tab:burger3d_eq} shows both the required fine level sweeps for SDC and MLSDC as well as the total runtimes in seconds for $\nu = 0.1$ and $\nu = 1.0$ for three different values of $M$.
In all cases, MLSDC(1,2) and MLSDC(1,2,3(2)) significantly reduce the number of fine sweeps required for convergence in comparison to single-level SDC, typically by about a factor of two.
These savings in fine level sweeps translate into runtime savings on the order of $30$--$40\%$.
For $3$ and $5$ quadrature nodes, there is no negative impact in terms of additional fine sweeps by using a reduced implicit solve on the coarse level and MLSDC(1,2,3(2)) is therefore faster than MLSDC(1,2).
However, since coarse level V-cycles are very cheap due to spatial coarsening, the additional savings in runtime are small.
For $7$ quadrature nodes, using a reduced implicit solve on the coarse level in MLSDC(1,2,3(2)) comes at the price of an additional MLSDC iteration and therefore, MLSDC(1,2) is the fastest variant in this case.
Using only a single V-cycle for implicit solves on the coarse grid in MLSDC(1,2,3(1)) results in a modest to significant increase in the number of required MLSDC iterations compared to MLSDC(1,2,3(2)) in almost all cases. The only exception is the run with $3$ nodes and $\nu=0.1$.
Therefore, MLSDC(1,2,3(1)) is typically significantly slower than MLSDC(1,2) or MLSDC(1,2,3(2)) and not recommended for use in three dimensions.
For $7$ quadrature nodes, using only a single V-cycle leads to a dramatic increase in the number of required fine sweeps and MLSDC becomes much slower than single level SDC, indicating that the inaccurate coarse level has a negative impact on convergence.
\begin{table}[t]
\centering
{\centering \bf $M+1 = 3$ Gauss-Lobatto nodes
\linebreak}
\begin{tabular}{|c|c|c|} \hline
\multicolumn{3}{|c|}{$\nu=0.1$} \\ \hline
Method & F-Sweeps & Runtime (sec) \\ \hline
SDC & 9 & 39.4 \\
MLSDC(1,2) & 4 & 26.2 \\
MLSDC(1,2,3(2)) & 4 & 25.6 \\
MLSDC(1,2,3(1)) & 5 & 29.7 \\ \hline
\end{tabular}
\begin{tabular}{|c|c|c|} \hline
\multicolumn{3}{|c|}{$\nu=1.0$} \\ \hline
Method & F-Sweeps & Runtime (sec) \\ \hline
SDC & 16 & 74.1 \\
MLSDC(1,2) & 8 & 49.1 \\
MLSDC(1,2,3(2)) & 8 & 47.0 \\
MLSDC(1,2,3(1)) & 8 & 46.7 \\ \hline
\end{tabular}
{\centering \bf $M+1 = 5$ Gauss-Lobatto nodes
\linebreak}
\begin{tabular}{|c|c|c|} \hline
\multicolumn{3}{|c|}{$\nu=0.1$} \\ \hline
Method & F-Sweeps & Runtime (sec) \\ \hline
SDC & 7 & 59.5 \\
MLSDC(1,2) & 3 & 40.8 \\
MLSDC(1,2,3(2)) & 3 & 39.8 \\
MLSDC(1,2,3(1)) & 8 & 79.7 \\ \hline
\end{tabular}
\begin{tabular}{|c|c|c|} \hline
\multicolumn{3}{|c|}{$\nu=1.0$} \\ \hline
Method & F-Sweeps & Runtime (sec) \\ \hline
SDC & 18 & 162.7 \\
MLSDC(1,2) & 9 & 105.6 \\
MLSDC(1,2,3(2)) & 9 & 101.5 \\
MLSDC(1,2,3(1)) & 14 & 142.8 \\ \hline
\end{tabular}
{\centering \bf $M+1 = 7$ Gauss-Lobatto nodes
\linebreak}
\begin{tabular}{|c|c|c|} \hline
\multicolumn{3}{|c|}{$\nu=0.1$} \\ \hline
Method & F-Sweeps & Runtime (sec) \\ \hline
SDC & 5 & 82.4 \\
MLSDC(1,2) & 2 & 46.1 \\
MLSDC(1,2,3(2)) & 3 & 57.2 \\
MLSDC(1,2,3(1)) & 11 & 147.2 \\ \hline
\end{tabular}
\begin{tabular}{|c|c|c|} \hline
\multicolumn{3}{|c|}{$\nu=1.0$} \\ \hline
Method & F-Sweeps & Runtime (sec) \\ \hline
SDC & 17 & 224.7 \\
MLSDC(1,2) & 8 & 139.5 \\
MLSDC(1,2,3(2)) & 9 & 148.1 \\
MLSDC(1,2,3(1)) & 44 & 560.4 \\ \hline
\end{tabular}
\caption{Number of required fine level sweeps and resulting runtimes in seconds by SDC and MLSDC for 3D viscous Burgers' equation. The numbers in parentheses after MLSDC indicate the employed coarsening strategies, see~\S\ref{sec:mlsdc_spatial_coarsening}. Reduced implicit solves are indicated by $3(n)$ where $n$ indicates the fixed number of multigrid V-cycles. Otherwise, PMG iterates until a residual of $10^{-12}$ is reached or the iteration stalls. The tolerance for the SDC/MLSDC iteration is $10^{-5}$.}\label{tab:burger3d_eq}
\end{table}
\section{Discussion} \label{sec:outro}
The paper analyzes the multi-level spectral deferred correction method (MLSDC), an extension of the original single-level spectral deferred correction (SDC) method as well as of ladder SDC methods.
In contrast to SDC, MLSDC performs correction sweeps in time on a hierarchy of discretization levels, similar to V-cycles in classical multigrid.
An FAS correction is used to increase the accuracy on coarse levels.
The paper also presents a new procedure to incorporate weighting matrices arising in higher-order
compact finite difference stencils into the SDC method.
The advantage of MLSDC is that it shifts computational work from the fine level to coarse levels, thereby reducing the number of fine SDC sweeps and, therefore, the time-to-solution.
For MLSDC to be efficient, a reduced representation of the problem on the coarse levels has to be used in order to make coarse level sweeps cheap in terms of computing time.
Three strategies are investigated numerically, namely (1) using fewer degrees of freedom,
(2) reducing the order of the discretization, and (3) reducing the accuracy of the linear solver in implicit substeps on the coarse level.
Numerical results are presented for the wave equation, viscous Burgers' equation in 1D and 3D and for the 2D Navier-Stokes equation in vorticity-velocity formulation.
It is demonstrated that because of the FAS correction, the solutions on all levels converge up to the accuracy determined by the discretization on the finest level.
More significantly, in all four examples, MLSDC can reduce the number of fine level sweeps required to converge compared to single level SDC.
For the 3D example this translates directly into significantly reduced computing times in comparison to single-level SDC.
One potential continuation of this work is to investigate reducing the accuracy of
implicit solves on the fine level in MLSDC as well. In~\cite{SpeckEtAl2014_DDM2013}, so called
{\it inexact} spectral deferred corrections (ISDC) methods are considered, where implicit solves at
each SDC node are replaced by a small number of multigrid V-cycles. As with MLSDC, the reduced cost of
implicit solves is somewhat offset by an increase in the number of SDC iterations required for
convergence. Nevertheless, numerical results in \cite{SpeckEtAl2014_DDM2013} demonstrate an overall reduction of
cost for ISDC methods versus SDC for certain test cases. The optimal
combination of coarsening and reducing V-cycles for SDC methods using multigrid for implicit solves
appears to be problem-dependent, and an analysis of this topic is in preparation.
The MLSDC algorithm has also been applied to Adaptive Mesh Refinement (AMR) methods popular in finite-volume methods for conservative systems.
In the AMR + MLSDC algorithm, each AMR level is associated with its own MLSDC level, resulting in a hierarchy of hybrid space/time discretizations with increasing space/time resolution. When a new (high resolution) level is added to the AMR hierarchy, a new MLSDC level is created.
The resulting scheme differs from traditional sub-cycling AMR time-stepping schemes in a few notable aspects: fine level sub-cycling is achieved through increased temporal resolution of the MLSDC nodes; flux corrections across coarse/fine AMR grid boundaries are naturally incorporated into the MLSDC FAS correction; fine AMR ghost cells eventually become high-order accurate through the iterative nature of MLSDC V-cycling; and finally, the cost of implicit solves on all levels decreases with each MLSDC V-cycle as initial guesses improve.
Preliminary results suggest that the AMR+MLSDC algorithm can be successfully applied to the compressible Navier-Stokes equations with stiff chemistry for the direct numerical simulation of combustion problems.
A detailed description of the AMR+MLSDC algorithm with applications is currently in preparation.
Finally, the impact and performance of the coarsening strategies presented here are also of relevance to the parallel full approximation scheme in space and time (PFASST)~\cite{EmmettMinion2012_DDM,EmmettMinion2012,Minion2010,SpeckEtAl2012} algorithm, which is a time-parallel scheme for ODEs and PDEs.
Like MLSDC, PFASST employs a hierarchy of levels but
performs SDC sweeps on multiple time intervals concurrently with
corrections to initial conditions being communicated forward in time during the iterations.
Parallel efficiency in PFASST can be achieved because fine SDC sweeps are done in parallel while
sweeps on the coarsest level are in essence done serially.
In the PFASST algorithm, there is a trade-off between decreasing the cost on coarse levels
to improve parallel efficiency and retaining good accuracy on the coarse level to minimize the
number of parallel iterations required to converge.
In~\cite{EmmettMinion2012} it was shown that, for mesh-based PDE discretizations, using a spatial mesh with fewer points on the coarse level
in conjunction with a reduced number of quadrature nodes, led to a method with significant parallel speed up.
Incorporating the additional coarsening strategies presented here for MLSDC into PFASST would
further reduce the cost of coarse levels, but it is unclear how this might translate into
an increase in the number of parallel PFASST iterations required.
\begin{thebibliography}{10}
\providecommand{\url}[1]{{#1}}
\providecommand{\urlprefix}{URL }
\expandafter\ifx\csname urlstyle\endcsname\relax
\providecommand{\doi}[1]{DOI~\discretionary{}{}{}#1}\else
\providecommand{\doi}{DOI~\discretionary{}{}{}\begingroup
\urlstyle{rm}\Url}\fi
\bibitem{Alam2006}
Alam, J.M., Kevlahan, N.K.R., Vasilyev, O.V.: Simultaneous space--time adaptive
wavelet solution of nonlinear parabolic differential equations.
\newblock Journal of Computational Physics \textbf{214}(2), 829--857 (2006).
\bibitem{ascherPetzold}
Ascher, U.M., Petzold, L.R.: Computer Methods for Ordinary Differential
Equations and Differential-Algebraic Equations.
\newblock SIAM, Philadelphia, PA (2000)
\bibitem{bohmerHemkerStetter:1984}
B\"ohmer, K., Hemker, P., Stetter, H.J.: The defect correction approach.
\newblock In: K.~B\"ohmer, H.J. Stetter (eds.) Defect Correction Methods.
Theory and Applications, pp. 1--32. Springer-Verlag (1984)
\bibitem{bolten:2014}
Bolten, M.: Evaluation of a multigrid solver for 3-level {Toeplitz} and
circulant matrices on {Blue Gene/Q}.
\newblock In: K.~Binder, G.~M\"unster, M.~Kremer (eds.) NIC Symposium 2014, pp.
345--352. John von Neumann Institute for Computing (2014).
\newblock (to appear)
\bibitem{bourlioux2003high}
Bourlioux, A., Layton, A.T., Minion, M.L.: High-order multi-implicit spectral
deferred correction methods for problems of reactive flow.
\newblock Journal of Computational Physics \textbf{189}(2), 651--675 (2003)
\bibitem{bouzarthMinion:2011}
Bouzarth, E.L., Minion, M.L.: A multirate time integrator for regularized
stokeslets.
\newblock Journal of Computational Physics \textbf{229}(11), 4208--4224 (2010)
\bibitem{brandt:1977}
Brandt, A.: Multi-level adaptive solutions to boundary-value problems.
\newblock Math. Comp. \textbf{31}(138), 333--390 (1977)
\bibitem{briggs}
Briggs, W.L.: A Multigrid Tutorial.
\newblock SIAM, Philadelphia, PA (1987)
\bibitem{chorin_mathematical_1990}
Chorin, A.J., Marsden, J.E.: A mathematical introduction to fluid mechanics,
2nd edn.
\newblock Springer-Verlag (1990)
\bibitem{chow2006}
Chow, E., Falgout, R.D., Hu, J.J., Tuminaro, R.S., Yang, U.M.: A survey of
parallelization techniques for multigrid solvers.
\newblock In: Parallel Processing for Scientific Computing, SIAM Series of
Software, Environements and Tools. SIAM (2006)
\bibitem{ChristliebEtAl2011_CMS}
Christlieb, A., Morton, M., Ong, B., Qiu, J.M.: Semi-implicit integral deferred
correction constructed with additive {R}unge--{K}utta methods.
\newblock Communications in Mathematical Science \textbf{9}(3), 879--902
(2011).
\bibitem{ChristliebEtAl2009}
Christlieb, A., Ong, B., Qiu, J.M.: Comments on high-order integrators embedded
within integral deferred correction methods.
\newblock Communications in Applied Mathematics and Computational Science
\textbf{4}(1), 27--56 (2009).
\bibitem{ChristliebEtAl2010_MoC}
Christlieb, A., Ong, B.W., Qiu, J.M.: Integral deferred correction methods
constructed with high order {R}unge-{K}utta integrators.
\newblock Mathematics of Computation \textbf{79}, 761--783 (2010).
\bibitem{DaiEtAl2013_ESAIM}
Dai, X., Le~Bris, C., Legoll, F., Maday, Y.: Symmetric parareal algorithms for
hamiltonian systems.
\newblock ESAIM: Mathematical Modelling and Numerical Analysis \textbf{47},
717--742 (2013).
\bibitem{pereyra:1968}
Daniel, J.W., Pereyra, V., Schumaker, L.L.: Iterated deferred corrections for
initial value problems.
\newblock Acta Cient. Venezolana \textbf{19}, 128--135 (1968)
\bibitem{dutt2000spectral}
Dutt, A., Greengard, L., Rokhlin, V.: Spectral deferred correction methods for
ordinary differential equations.
\newblock BIT Numerical Mathematics \textbf{40}(2), 241--266 (2000)
\bibitem{EmmettMinion2012_DDM}
Emmett, M., Minion, M.L.: Efficient implementation of a multi-level parallel in
time algorithm.
\newblock In: Proceedings of the 21st International Conference on Domain
Decomposition Methods, Lecture Notes in Computational Science and Engineering
(2012).
\newblock (In press)
\bibitem{EmmettMinion2012}
Emmett, M., Minion, M.L.: Toward an efficient parallel in time method for
partial differential equations.
\newblock Communications in Applied Mathematics and Computational Science
\textbf{7}, 105--132 (2012).
\bibitem{HagstromZhou2006}
Hagstrom, T., Zhou, R.: On the spectral deferred correction of splitting
methods for initial value problems.
\newblock Communications in Applied Mathematics and Computational Science
\textbf{1}(1), 169--205 (2006).
\bibitem{HairerI}
Hairer, E., Norsett, S.P., Wanner, G.: Solving Ordinary Differential Equations
I, Nonstiff Problems.
\newblock Springer-Verlag, Berlin (1987)
\bibitem{HairerII}
Hairer, E., Wanner, G.: Solving Ordinary Differential Equations {II}, Stiff and
Differential-Algebraic Problems.
\newblock Springer-Verlag, Berlin (1991)
\bibitem{hansen2006convergence}
Hansen, A.C., Strain, J.: Convergence theory for spectral deferred correction.
\newblock Preprint (2006)
\bibitem{HautWingate2013}
Haut, T., Wingate, B.: An asymptotic parallel-in-time method for highly
oscillatory {PDE}s.
\newblock SIAM Journal on Scientific Computing (2014).
\newblock In press
\bibitem{huang2006accelerating}
Huang, J., Jia, J., Minion, M.: Accelerating the convergence of spectral
deferred correction methods.
\newblock Journal of Computational Physics \textbf{214}(2), 633--656 (2006)
\bibitem{HuangEtAl2006}
Huang, J., Jia, J., Minion, M.: Accelerating the convergence of spectral
deferred correction methods.
\newblock Journal of Computational Physics \textbf{214}(2), 633 -- 656 (2006).
\bibitem{Hunter2007}
Hunter, J.D.: Matplotlib: A 2{D} graphics environment.
\newblock Computing In Science \& Engineering \textbf{9}(3), 90--95 (2007)
\bibitem{JiangShu1996}
Jiang, G.S., Shu, C.W.: Efficient implementation of weighted {ENO} schemes.
\newblock Journal of Computational Physics \textbf{126}, 202--228 (1996)
\bibitem{Layton2009}
Layton, A.T.: On the efficiency of spectral deferred correction methods for
time-dependent partial differential equations.
\newblock Applied Numerical Mathematics \textbf{59}(7), 1629 -- 1643 (2009).
\bibitem{layton2004conservative}
Layton, A.T., Minion, M.L.: Conservative multi-implicit spectral deferred
correction methods for reacting gas dynamics.
\newblock Journal of Computational Physics \textbf{194}(2), 697--715 (2004)
\bibitem{laytonMinion:2005}
Layton, A.T., Minion, M.L.: Implications of the choice of quadrature nodes for
{P}icard integral deferred corrections methods for ordinary differential
equations.
\newblock BIT Numerical Mathematics \textbf{45}, 341--373 (2005)
\bibitem{lele_compact_1992}
Lele, S.K.: Compact finite difference schemes with spectral-like resolution.
\newblock Journal of Computational Physics \textbf{103}(1), 16--42 (1992)
\bibitem{minion2003semi}
Minion, M.L.: Semi-implicit spectral deferred correction methods for ordinary
differential equations.
\newblock Communications in Mathematical Sciences \textbf{1}(3), 471--500
(2003)
\bibitem{minion2004semi}
Minion, M.L.: Semi-implicit projection methods for incompressible flow based on
spectral deferred corrections.
\newblock Applied numerical mathematics \textbf{48}(3), 369--387 (2004)
\bibitem{Minion2010}
Minion, M.L.: A hybrid parareal spectral deferred corrections method.
\newblock Communications in Applied Mathematics and Computational Science
\textbf{5}(2), 265--301 (2010).
\bibitem{pereyra:1967}
Pereyra, V.: Iterated deferred corrections for nonlinear operator equations.
\newblock Numerische Mathematik \textbf{10}, 316--323 (1967)
\bibitem{pereyra:1966}
Pereyra, V.: On improving an approximate solution of a functional equation by
deferred corrections.
\newblock Numerische Mathematik \textbf{8}, 376--391 (1966)
\bibitem{RuprechtKrause2012}
Ruprecht, D., Krause, R.: Explicit parallel-in-time integration of a linear
acoustic-advection system.
\newblock Computers \& Fluids \textbf{59}(0), 72 -- 83 (2012).
\bibitem{south1977}
South, J.C., Brandt, A.: Application of a multi-level grid method to transonic
flow calculations.
\newblock In: Transonic flow problems in turbomachinery, pp. 180--206.
Hemisphere (1977)
\bibitem{SpeckEtAl2012}
Speck, R., Ruprecht, D., Krause, R., Emmett, M., Minion, M., Winkel, M.,
Gibbon, P.: A massively space-time parallel {N}-body solver.
\newblock In: Proceedings of the International Conference on High Performance
Computing, Networking, Storage and Analysis, SC '12, pp. 92:1--92:11. IEEE
Computer Society Press, Los Alamitos, CA, USA (2012).
\bibitem{SpeckEtAl2014_DDM2013}
Speck, R., Ruprecht, D., Minion, M., Emmett, M., Krause, R.: Inexact spectral
deferred corrections using single-cycle multigrid (2014).
\newblock ArXiv:1401.7824 [math.NA]
\bibitem{spotz_high-order_1996}
Spotz, W.F., Carey, G.F.: A high-order compact formulation for the {3D} poisson
equation.
\newblock Numerical Methods for Partial Differential Equations \textbf{12}(2),
235--243 (1996)
\bibitem{stetter:1974}
Stetter, H.J.: Economical global error estimation.
\newblock In: R.A. Willoughby (ed.) Stiff Differential Systems, pp. 245--258
(1974)
\bibitem{trottenberg_multigrid:_2000}
Trottenberg, U., Oosterlee, C.W.: Multigrid: Basics, Parallelism and
Adaptivity.
\newblock Academic Press (2000)
\bibitem{Weiser2013}
Weiser, M.: Faster {SDC} convergence on non-equidistant grids with {DIRK}
sweeps (2013).
\newblock {ZIB} Report 13--30
\bibitem{ShuEtAl2007}
Xia, Y., Xu, Y., Shu, C.W.: Efficient time discretization for local
discontinuous {G}alerkin methods.
\newblock Discrete and Continuous Dynamical Systems -- Series B \textbf{8}(3),
677 -- 693 (2007).
\bibitem{zadunaisky:1964}
Zadunaisky, P.: A method for the estimation of errors propagated in the
numerical solution of a system of ordinary differential equations.
\newblock In: G.~Contopoulus (ed.) The Theory of Orbits in the Solar System and
in Stellar Systems. Proceedings of International Astronomical Union,
Symposium 25 (1964)
\end{thebibliography}
\end{document}
\begin{document}
\title{The Farrell-Jones Conjecture for the solvable Baumslag-Solitar groups}
\author{Tom Farrell and Xiaolei Wu}
\begin{abstract}
In this paper, we
prove the Farrell-Jones Conjecture for the solvable Baumslag-Solitar groups with coefficients in an additive category. We also extend our results to groups of the form $\mathbb{Z}[\frac{1}{p}] \rtimes C$, where $p$ is a prime number and $C$ is any virtually cyclic group.
\end{abstract}
\footnotetext{Date: November, 2012}
\footnotetext{2010 Mathematics Subject Classification: 18F25,19A31,19B28.}
\keywords{Baumslag-Solitar group, Farrell-Jones conjecture, K-theory of group rings, L-theory of group rings, flow space.}
\maketitle
\section{Introduction}
Farrell and Linnell proved in \cite{FL} that if the fibered isomorphism conjecture is true for all nearly crystallographic groups, then it is true for all virtually solvable groups. However, they were not able to verify the fibered isomorphism conjecture for all nearly crystallographic groups. In particular, they pointed out that the fibered isomorphism conjecture has not been verified for the group ${\mathbb{Z} {[{\frac{1}{2}} ]}} \rtimes_\alpha \mathbb{Z}$, where $\alpha$ is multiplication by $2$. Note this group is isomorphic to the Baumslag-Solitar group $BS(1,2)$. Recall that the Baumslag-Solitar group $BS(m,n)$ is defined by $\langle a,b~|~ba^mb^{-1} = a^n \rangle$ and all the solvable ones are isomorphic to $BS(1,d)$. Note that $BS(m,n) \cong BS(n,m) \cong BS(-m,-n)$. Using new technology developed by Bartels, L\"uck and Reich in \cite{BL1}, \cite{BL2}, \cite{BLR}, \cite{BLR2}, we prove the following result.
\begin{thm} \label{mth}
The K-theoretic and L-theoretic Farrell-Jones Conjecture is true for all solvable Baumslag-Solitar groups with coefficients in an additive category.
\end{thm}
Note the truth of the Farrell-Jones conjecture with coefficients in an additive category implies the fibered isomorphism conjecture. For more information about the Farrell-Jones conjecture and its fibered version, see for example \cite{FJ}. For the precise formulation and discussion of the Farrell-Jones conjecture
with coefficients in an additive category, see for example \cite{BFL}, \cite{BLR2}. For our convenience, we will first prove the Farrell-Jones conjecture for $BS(1,d)$, where $d >1$. The same proof applies to the $d < -1$ case. Note that $BS(1,1)$ and $BS(1,-1)$ are the fundamental groups of the torus and Klein bottle respectively, and the Farrell-Jones conjecture is known for those groups. The authors want to point out here that our current method cannot be applied to all Baumslag-Solitar groups. For example, we do not know whether the Farrell-Jones conjecture is true for the group $BS(2,3)$. For convenience we will denote the group $BS(1,d)$ as ${\mathbb{Z} {[{\frac{1}{d}} ]}} \rtimes_\alpha \mathbb{Z}$ for the rest of the paper.
\begin{rem}
C. Wegner generalized the method in this paper and proved the Farrell-Jones conjecture for all virtually solvable groups in \cite{W2}. Using Wegner's result, we proved the Farrell-Jones conjecture for all Baumslag-Solitar groups in \cite{FW3}. Independently, G. Gandini, S. Meinert and H. R\"uping proved the Farrell-Jones conjecture for the fundamental group of any graph of abelian groups in \cite{GMR}, which includes all Baumslag-Solitar groups.
\end{rem}
Our strategy is to show that ${\mathbb{Z} {[{\frac{1}{d}} ]}} \rtimes_\alpha \mathbb{Z}$ is in fact a Farrell-Hsiang group, defined by Bartels and L\"uck in \cite{BL1}. The main difficulty is to find a suitable flow space, ours is a horizontal subspace of Bartels and L\"uck's flow space in \cite{BL2}. The horizontal flow space allows us to exploit some negative curvature present in the solvable Baumslag-Solitar groups. Once our flow space is defined, we prove in Lemma \ref{l3} that it has a lot of good properties, which implies that it has a
$\mathcal{VC}yc$-cover by results of Bartels, L\"uck and Reich in \cite{BLR}. It should be mentioned that our results rely heavily on the existence of such a cover. In the proof of our main theorem, we also use Proposition 5.3 of Bartels, L\"uck and Reich from \cite{BLR2}. In the last section, we extend our results to any group of the form $\mathbb{Z} {[{\frac{1}{p}} ]} \rtimes C$, where $p$ is any prime number and $C$ is any virtually cyclic group.
In this paper, without further assumption, when we write ${\mathbb{Z} {[{\frac{1}{d}} ]}} \rtimes_\alpha \mathbb{Z}$, $\alpha$ is always multiplication by $d$, and we will assume $d$ is a fixed integer bigger than $1$. When we write Farrell-Jones conjecture, we mean the Farrell-Jones conjecture with coefficients in an additive category. Let $G$ be a (discrete) group acting on a space $X$. We say the action is proper if for any $x \in X$ there is an open neighborhood $U$ of $x$ such that $\{g \in G ~|~gU \bigcap U \neq {\emptyset}\}$ is finite.
\textbf{Acknowledgements.} This research was in part supported by the National Science Foundation. We would like to thank Arthur Bartels, Robert Bieri, Zhi Qi and Adrian Vasiu for helpful discussions. The second author also wants to take this opportunity to thank his advisor, Tom Farrell, for introducing him to this wonderful field. We also thank the referees for many suggestions on how to improve the readability of this paper and for pointing out many typos in the previous version of the paper.
\section{A Model for $E({\mathbb{Z} {[{\frac{1}{d}} ]}} \rtimes_\alpha \mathbb{Z})$} \label{model}
In this section, we give a model for $E({\mathbb{Z} {[{\frac{1}{d}} ]}} \rtimes_\alpha \mathbb{Z})$, a contractible space
with free, proper and discontinuous $\Gamma$ action, where $\Gamma$ is the group ${\mathbb{Z} {[{\frac{1}{d}} ]}} \rtimes_\alpha \mathbb{Z}$. We also put a metric on it, such that $\Gamma$ acts isometrically on
$E({\mathbb{Z} {[{\frac{1}{d}} ]}} \rtimes_\alpha \mathbb{Z})$. Most of the material in this section has been well studied before; see for example \cite{ECH}, Section 7.4, and \cite{FM}.
Let $X$ be $S^1 \times [0,1] / (z,0) \thicksim (z^d, 1)$. Its fundamental group is $\langle x,k ~|~ kxk^{-1} = x^d\rangle$, which is isomorphic to our group ${\mathbb{Z} {[{\frac{1}{d}} ]}} \rtimes_\alpha \mathbb{Z}$. Since its universal cover is contractible, it is a model for $E({\mathbb{Z} {[{\frac{1}{d}} ]}} \rtimes_\alpha \mathbb{Z})$.
Let $T_d$ be the oriented infinite, regular, $(d+1)$-valent tree with edge length $1$ (see Figure \ref{tree}). At each vertex there are $d$ incoming and $1$ outgoing edges. There is a natural clockwise (from left to right) order on the $d$ incoming edges when we embed $T_d$ in the upper half plane with a specified horizontal line as shown in Figure \ref{tree}. We will denote the specified horizontal line by $L_0$; $L_0$ is the line $\ldots P_{-2}P_{-1}P_{0}P_{1}P_{2}\ldots$ in Figure \ref{tree}. It is not too hard to show that the universal cover of $X$ is $T_d \times \mathbb{R}$. However, it is not easy to figure out how $\Gamma$ acts on $T_d \times \mathbb{R}$. So what we do here is give an action of $\Gamma$ on $T_d \times \mathbb{R}$, and show that ${(T_d \times \mathbb{R})} /\Gamma$ is homeomorphic to $X$.
\begin{figure}
\caption{$T_2$ : Oriented Infinite Regular 3-valent Tree}
\end{figure}
We first assume $d$ is a prime number. We can embed $\Gamma$ into $GL_2(\mathbb{Q})$ by mapping $(x,k) \in {\mathbb{Z} {[{\frac{1}{d}} ]}} \rtimes_\alpha \mathbb{Z}$ to $\left(
\begin{array}{cc}
d^{-k} & x \\
0 & 1 \\
\end{array}\right)$. There is a natural action of $GL_2(\mathbb{Q})$ on this tree. We explain briefly how $\Gamma$ acts on it. For more information, see for example \cite{Se}, pages 69--78. The action of $\Gamma$ on $T_d$ when $d$ is not a prime can be induced from the case when $d$ is a prime; details will be put in the Appendix.
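As a quick consistency check (not part of the paper's argument), the embedding above can be verified with exact rational arithmetic. The multiplication law $(x_1,k_1)(x_2,k_2) = (x_1 + d^{-k_1}x_2,\, k_1+k_2)$ used below is the one induced by the matrix embedding, and all helper names are ours:

```python
from fractions import Fraction

def to_matrix(x, k, d):
    """Embed (x, k) in Z[1/d] x| Z as the matrix [[d^-k, x], [0, 1]] (stored as a 4-tuple)."""
    return (Fraction(d) ** (-k), Fraction(x), Fraction(0), Fraction(1))

def mat_mul(A, B):
    # ordinary 2x2 matrix product
    a, b, c, e = A
    f, g, h, i = B
    return (a*f + b*h, a*g + b*i, c*f + e*h, c*g + e*i)

def group_mul(p, q, d):
    """(x1,k1)(x2,k2) = (x1 + d^{-k1} x2, k1 + k2), the law induced by the embedding."""
    (x1, k1), (x2, k2) = p, q
    return (x1 + Fraction(d) ** (-k1) * x2, k1 + k2)

# the embedding is a homomorphism: multiply in the group, then embed,
# or embed first and multiply matrices -- same answer
d = 2
p, q = (Fraction(3, 8), 2), (Fraction(5, 4), -1)
assert mat_mul(to_matrix(*p, d), to_matrix(*q, d)) == to_matrix(*group_mul(p, q, d), d)
```

Using `Fraction` keeps all entries in $\mathbb{Z}[\frac{1}{d}]$ exact, so the check is not subject to floating-point error.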
Note that $\Gamma$ is generated by $\left(
\begin{array}{cc}
d^{-1} & 0 \\
0 & 1 \\
\end{array}\right)$ and $\left(
\begin{array}{cc}
1 & d^{-m} \\
0 & 1 \\
\end{array}\right)$, so we only need to show how these elements act on $T_d$. Choose the vertex $P_0$ in $L_0$ as the basepoint. Denote the infinite point along $L_0$ towards the positive direction as $\omega$; cf. Figure \ref{tree}. For any $z \in T_d$, there is a unique geodesic starting from $z$ and moving towards $\omega$; denote it as $[z, \omega)$. Define the action of $\left(
\begin{array}{cc}
d^{-1} & 0 \\
0 & 1 \\
\end{array}\right)$ on the tree by translation along $L_0$, moving along the line $L_0$ towards the positive direction by $1$ unit. For example, in the case $d = 2$ as in Figure \ref{tree}, $\left(
\begin{array}{cc}
d^{-1} & 0 \\
0 & 1 \\
\end{array}\right)$ acts on the tree by translation along $L_0$, moving right along the line $L_0$ by $1$ unit. Hence $P_n$ will be translated to $P_{n+1}$, and every vertex or edge growing on the tree with root $P_n$ will also be translated to the corresponding vertex or edge growing from root $P_{n+1}$. We first give some notation and terminology before we explain how $\beta_m = {\left(
\begin{array}{cc}
1 & d^{-m} \\
0 & 1 \\
\end{array}\right)}$ acts on $T_d$. We define a Busemann function $f_d: T_d \rightarrow \mathbb{R}$ by mapping the line $L_0$ isometrically to $\mathbb{R}$ with $P_0$ mapped to $0$ and $\omega$ to $-\infty$ (orientation reversed); for an arbitrary point $Q$, define $f_d(Q) = f_d(P) + d(P,Q)$, where $P$ is the closest point in the line $L_0$ to $Q$. For example, in the case $d = 2$ as in Figure \ref{tree}, we map $L_0$ to the real line $\mathbb{R}$ with $f_d(P_n) = -n$. Then for example $f_d(R_{1}) = 1$, since the closest point to $R_1$ is $P_1$ and $d(R_1,P_1) = 2$. Let $H_n = f_d^{-1}(n)$, which we call a horosphere in $T_d$ with center $\omega$. Likewise, let $B_n = f_d^{-1}((-\infty,n])$ be the corresponding horoball; cf. [\cite{BGS}, p.~23] for this terminology. If $z \in H_n$, let $T_d(z)$ be the subtree in $T_d$ rooted at $z$ and growing outside $B_n$. Finally, for each $z \in H_n$ and $l \in {\mathbb{Z}}^+$, let
$$ S(z,l) ~=~\{z' \in T_d(z) ~|~ d(z,z') =l\}.$$
\begin{rem}\label{bus}
These definitions of $f_d$, $H_n$, $B_n$ and $S(z,l)$ for the tree $T_d$ are valid for any positive integer $d$ (not just for primes).
\end{rem}
With the terminology above, we describe the key features of the $\beta_m$ action on $T_d$ as follows (keep in mind $\beta_m^d = \beta_{m-1}$ and $\left(
\begin{array}{cc}
d^{-1} & 0 \\
0 & 1 \\
\end{array}\right)$ conjugates $\beta_m$ to $\beta_{m+1}$):
\begin{itemize}
\item[(i)] $\beta_m$ fixes each point in $B_{-m}$;
\item[(ii)] For each $z \in H_{-m}$ and $l \in {\mathbb{Z}}^+$, $\beta_m$ leaves $S(z,l)$ invariant and cyclically permutes its members. In particular, for each $z \in H_{-m}$, if we label elements in $S(z,1)$ by $(1,2,\cdots, d)$ with order preserved, then $\beta_m$ maps $(1,2,\cdots, d-1,d)$ to $(2,3,\cdots,d,1)$.
\end{itemize}
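The two relations invoked above ($\beta_m^d = \beta_{m-1}$, and conjugation by $\mathrm{diag}(d^{-1},1)$ sending $\beta_m$ to $\beta_{m+1}$) can also be checked directly on the matrices; a small sketch with exact rational arithmetic (the upper-triangular encoding and helper names are ours):

```python
from fractions import Fraction

def mat(a, b):
    # the matrix [[a, b], [0, 1]] stored as the pair (a, b)
    return (Fraction(a), Fraction(b))

def mul(A, B):
    # [[a,b],[0,1]] @ [[c,e],[0,1]] = [[a*c, a*e + b], [0,1]]
    (a, b), (c, e) = A, B
    return (a * c, a * e + b)

def inv(A):
    (a, b) = A
    return (1 / a, -b / a)

def beta(m, d):
    return mat(1, Fraction(d) ** (-m))

d, t = 3, mat(Fraction(1, 3), 0)   # t = diag(d^{-1}, 1) acting by translation along L_0
for m in range(-2, 3):
    # beta_m^d = beta_{m-1}
    P = mat(1, 0)
    for _ in range(d):
        P = mul(P, beta(m, d))
    assert P == beta(m - 1, d)
    # t beta_m t^{-1} = beta_{m+1}
    assert mul(mul(t, beta(m, d)), inv(t)) == beta(m + 1, d)
```
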
Note that $P_n$ has isotropy $\{ \left(
\begin{array}{cc}
1 & d^{-n}s \\
0 & 1 \\
\end{array}\right)~|~ s \in \mathbb{Z}\}$. One of the good things about this action is that it fixes the infinite point $\omega$. We also define the action of $\Gamma$ on $\mathbb{R}$ by ${\left(
\begin{array}{cc}
d^{-k} & x \\
0 & 1 \\
\end{array}\right)} ~w = {d^{-k}w + x}$, for $w \in \mathbb{R}$.
With these preparations, we define the action of $\Gamma$ on $T_d \times \mathbb{R}$ to be the diagonal action, i.e. $g (z,w) = (gz,gw)$, for $g \in \Gamma, ~(z,w) \in T_d \times \mathbb{R}$. It is not hard to check that $\Gamma$ acts freely, properly and discontinuously on
$T_d \times \mathbb{R}$. A fundamental domain is $P_0P_1\times [0,1]$, and ${(T_d \times \mathbb{R})}/\Gamma$ is homeomorphic to our previous space $X$. So $T_d \times \mathbb{R}$ is a model for $E({\mathbb{Z} {[{\frac{1}{d}} ]}} \rtimes_\alpha \mathbb{Z})$.
\begin{rem}{\label{act}}
Let $G_d$ be the group~ $\{{\left(
\begin{array}{cc}
d^n \cdot \frac{s_1}{s_2} & b \\
0 & 1 \\
\end{array}\right)} ~|~ s_1,~s_2~ \mathrm{are~nonzero~integers~coprime~to}~ d,~ b \in \mathbb{Q}\}$. Note $\Gamma$ is a subgroup of $G_d$ and the action of $\Gamma$ on $T_d$ can be extended to $G_d$ for any prime $d > 1$ (see \cite{Se}, Chapter II, Section 1.3); the case when $d$ is not a prime can be induced from the case $d$ is a prime and will be explained in the Appendix. From now on we no longer assume that $d$ is a prime.
On the other hand, our definition of the $\Gamma$ action on $\mathbb{R}$ can also be easily extended to $G_d$; in fact, for any $w \in \mathbb{R}$, ${\left(
\begin{array}{cc}
d^n \cdot \frac{s_1}{s_2} & b \\
0 & 1 \\
\end{array}\right)}$ acts on $\mathbb{R}$ by ${\left(
\begin{array}{cc}
d^n \cdot \frac{s_1}{s_2} & b \\
0 & 1 \\
\end{array}\right)} ~w = {d^n \cdot \frac{s_1}{s_2} \cdot w + b}$. Hence we can define an action of $G_d$ on $T_d \times \mathbb{R}$ by the diagonal action for any $d > 1$. Furthermore, $\Gamma$ as a subgroup of $G_d$ acts on $T_d \times \mathbb{R}$ and ${(T_d \times \mathbb{R})}/\Gamma$ is homeomorphic to our previous space $X$. Note that this action is the ``boundary'' of the natural (isometric) diagonal action of $G_d$ on $T_d \times \mathds{H}^2$ in the upper half plane model for $\mathds{H}^2$, where we identify $\mathbb{R}$ in $T_d \times \mathbb{R}$ with the $x$-axis of $\mathbb{R}^2$. Moreover, elements of the form ${\left(\begin{array}{cc}
s & 0 \\
0 & 1 \\
\end{array}\right)}$ fix the line $L_0$ in $T_d$, where $s$ is a positive integer coprime to $d$, though they will not fix the whole tree $T_d$ in general; for more details about the case $d$ is a prime see \cite{Se}, Chapter II, Section 1.3, p.~77, Stabilizers of straight paths: Cardan subgroups. One important observation is that the action of $g = {\left(\begin{array}{cc}
s & 0 \\
0 & 1 \\
\end{array}\right)}$ on $T_d$ does not change the value of the Busemann function; i.e., $f_d(g(z)) = f_d(z)$ for every $z \in T_d$.
\end{rem}
We now define a metric on $T_d \times \mathbb{R}$ so that $\Gamma$ acts isometrically on it. Note first that both the tree $T_d$ and the real line $\mathbb{R}$ already have canonical metrics. Now we define the metric on $T_d \times \mathbb{R}$ to be the warped product of $T_d$ and $\mathbb{R}$ with respect to the warping function $d^{-f_d}$, where $f_d$ is the Busemann function we defined before. $T_d \times \mathbb{R}$ is a metric space under this metric (see for example \cite{CH}, Proposition 3.1). If we restrict to the two-dimensional subspace $L_0 \times \mathbb{R}$, the metric is
$$dz^2 + (d^{-f_d(z)})^2\,dw^2, \qquad \mbox{where } z \in L_0,~ w \in \mathbb{R}.$$
Simple calculations show that this is a space with constant curvature $-(\ln d)^2$. Hence $T_d \times \mathbb{R}$ is constructed by gluing together infinitely many copies of hyperbolic planes with curvature $-(\ln d)^2$ along $[z,\omega) \times \mathbb{R}$, where $z$ can be any vertex in $T_d$. In fact, each line $L$ in $T_d$ going towards $\omega$ determines a hyperbolic plane $H_L = L \times \mathbb{R}$. If $z \in T_d$ is the first point where $L$ and $L'$ meet going towards $\omega$, then $H_L$ and $H_{L'}$ are glued together along $[z,\omega)\times \mathbb{R}$. Note that if we identify $L \times \mathbb{R}$ with the Poincar\'e disk model (with curvature $-(\ln d)^2$), then $z \times \mathbb{R}$ will be mapped to a horosphere. Correspondingly, $[z,\omega)\times \mathbb{R}$ will be mapped to the region outside the horoball (the disk bounded by the horosphere) in the Poincar\'e disk, which is not convex. Hence $T_d \times \mathbb{R}$ might not be a CAT(0) space; in fact, our group is not a CAT(0) group since it contains a non-finitely generated abelian subgroup; cf. Corollary 7.6 in \cite{BH}, p.~247.
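The constant-curvature computation behind the claim above is the classical one for metrics of the form $dz^2 + g(z)^2\,dw^2$, whose Gaussian curvature is $K = -g''(z)/g(z)$; a brief sketch (using that $f_d$ restricted to $L_0$ is affine with slope $\pm 1$, so $f_d'' = 0$):

```latex
g(z) = d^{-f_d(z)} = e^{-(\ln d)\, f_d(z)}, \qquad
g''(z) = (\ln d)^2 \big(f_d'(z)\big)^2\, g(z) = (\ln d)^2\, g(z),
```

hence $K = -g''(z)/g(z) = -(\ln d)^2$, as asserted.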
We are going to need the following lemmas in the future.
\begin{lem}{\label{dd}}
For any two points $(z_1,w_1),(z_2,w_2) \in T_d \times \mathbb{R}$,
$$d_{T_d \times \mathbb{R}}((z_1,w_1),(z_2,w_2)) \geq d_{T_d \times \mathbb{R}}((z_1,w_2),(z_2,w_2)) = d_T(z_1,z_2) $$
\end{lem}
\noindent{\bf Proof}\quad By Lemma 3.1 in \cite{CH}, we have $d_{T_d \times \mathbb{R}}((z_1,w_1),(z_2,w_2)) \geq d_T(z_1,z_2)$. By Lemma 3.2 in \cite{CH}, $d_{T_d \times \mathbb{R}}((z_1,w_2),(z_2,w_2)) = d_T(z_1,z_2)$.
$\square$
\begin{rem}
It is not always true that $d_{T_d \times \mathbb{R}}((z_1,w_1),(z_2,w_2)) \geq |w_1 - w_2|$.
\end{rem}
\begin{cor}{\label{ddc}}
Let $z_1,z_2 \in T_d$ and $w_1,w_2 \in \mathbb{R}$. Then the following inequality holds:
$$ d_{T_d \times \mathbb{R}}((z_1,w_1),(z_2,w_2)) \geq \frac{1}{2} d_{T_d \times \mathbb{R}}((z_1,w_1),(z_1,w_2)) $$
\end{cor}
\noindent{\bf Proof}\quad By the triangle inequality on metric spaces, we have \\
$\hspace*{5mm} d_{T_d \times \mathbb{R}}((z_1,w_1),(z_2,w_2)) + d_{T_d \times \mathbb{R}}((z_2,w_2),(z_1,w_2)) \geq d_{T_d \times \mathbb{R}}((z_1,w_1),(z_1,w_2))$.\\
By Lemma \ref{dd}, $d_{T_d \times \mathbb{R}}((z_1,w_1),(z_2,w_2)) \geq d_T(z_1,z_2) = d_{T_d \times \mathbb{R}}((z_2,w_2),(z_1,w_2)) $, therefore $ 2 d_{T_d \times \mathbb{R}}((z_1,w_1),(z_2,w_2)) \geq d_{T_d \times \mathbb{R}}((z_1,w_1),(z_1,w_2))$.
$\square$
\begin{lem}{\label{led}}
Let $z_0$ be a fixed point in $T_d$ and let $w_1,w_2,w_3$ be three different points in $\mathbb{R}$ such that $|w_1 - w_2| < |w_1 - w_3|$. Then
$$d_{T_d \times \mathbb{R}}((z_0,w_1),(z_0,w_2)) < d_{T_d \times \mathbb{R}}((z_0,w_1),(z_0,w_3))$$
\end{lem}
\noindent{\bf Proof}\quad It is not hard to see that we can arrange the geodesics connecting $(z_0,w_1)$ and $(z_0,w_i)$, $i = 2,3$, to lie within a single hyperbolic plane. Hence we only need to prove the lemma in the hyperbolic plane $L_0 \times \mathbb{R}$. We can map $L_0 \times \mathbb{R}$ to the Poincar\'e disk model with $(z_0,w_1)$ as the center; $\{z_0\} \times \mathbb{R}$ will be mapped to a horosphere. Now our lemma follows easily.
$\square$
\begin{lem}\label{vl}
Let $z_0$ be a fixed point in $T_d$ and let $w_1,w_2$ be two fixed points in $\mathbb{R}$. Denote the distance $d_{T_d \times \mathbb{R}}((z_0, \frac{w_1}{n}),(z_0, \frac{w_2}{n}))$ by $D_n$. Then for any given integer $n>0$,
$$\sinh{(\frac{\ln d} {2}D_1)} = n ~\sinh{(\frac{\ln d} {2} D_n )} $$
Therefore,
$$D_n = \frac{2}{\ln d} \mathrm{arcsinh}\Big(\frac{1}{n} \sinh{(\frac{\ln d} {2}D_1)}\Big)$$
In particular,
$$\lim_{n \rightarrow \infty} D_n = 0.$$
\end{lem}
\noindent{\bf Proof}\quad
Denote the induced inner metric on $\{z_0\} \times \mathbb{R}$ as $d$, then $d(w_1,w_2) = n ~d(\frac{w_1}{n},\frac{w_2}{n})$. Note if we use the Poincar\'e disk model, $\{z_0\} \times \mathbb{R}$ will be mapped to a horosphere, hence for any two points $w_1, w_2 \in \{z_0\} \times \mathbb{R}$,
$$d(w_1,w_2) = \frac{2}{\ln d} \sinh{(\frac{\ln d} {2} d_{T_d \times \mathbb{R}}((z_0, w_1),(z_0, w_2)))}$$
see for example \cite{EH}, Theorem 4.6. Now apply this to both sides of the equation $d(w_1,w_2) = n ~d(\frac{w_1}{n},\frac{w_2}{n})$, and our lemma follows.
$\square$
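Purely as a numerical illustration of the lemma (not part of the proof), the closed form for $D_n$ can be evaluated to confirm the strict decay and the limit; the function name and test values below are ours:

```python
import math

def D(n, D1, d):
    """D_n from the lemma: D_n = (2/ln d) * arcsinh( sinh((ln d / 2) * D1) / n )."""
    a = math.log(d) / 2
    return math.asinh(math.sinh(a * D1) / n) / a

d, D1 = 2, 5.0
vals = [D(n, D1, d) for n in range(1, 2001)]
assert abs(vals[0] - D1) < 1e-12                    # n = 1 recovers D_1
assert all(x > y for x, y in zip(vals, vals[1:]))   # D_n is strictly decreasing
assert vals[-1] < 0.01                              # D_n -> 0 as n grows
# the defining identity sinh((ln d / 2) D_1) = n sinh((ln d / 2) D_n)
a = math.log(d) / 2
assert abs(math.sinh(a * D1) - 1000 * math.sinh(a * D(1000, D1, d))) < 1e-9
```
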
\section{Flow Space for $ T_d \times {\mathbb{R}}$}
In this section we define a flow space for $E({\mathbb{Z} {[{\frac{1}{d}} ]}} \rtimes_\alpha \mathbb{Z})$,
which is a horizontal subspace of the flow space defined in Bartels and L\"uck's paper \cite{BL2}. Then we prove that it
has lots of good properties, which guarantees that it has a long thin cover.
We first introduce Bartels and L\"uck's flow space starting with the notion of generalized geodesic.
\begin{defn}
Let $X$ be a metric space. A continuous map $c : \mathbb{R} \rightarrow X$ is called a \textbf{generalized geodesic} if
there are $c_{-}, c_{+} \in \bar{\mathbb{R}} := \mathbb{R} \coprod \{-\infty, \infty \}$ satisfying
$$ c_- \leq c_+, \quad c_- \neq \infty, \quad c_+ \neq -\infty $$
such that $c$ is locally constant on the complement of the interval $I_c := (c_-, c_+)$ and
restricts to an isometry on $I_c$.
\end{defn}
\begin{defn}
Let $(X, d_X)$ be a metric space. Let \textbf{$FS(X)$} be the set of all generalized geodesics in
$X$. We define a metric on $FS(X)$ by
$$ d_{FS(X)}(c,d) := \int_{\mathbb{R}} \frac{d_X(c(t), d(t))}{2e^{|t|}} dt $$
\end{defn}
The flow on $FS(X)$ is defined by
$$ \Phi: FS(X) \times \mathbb{R} \rightarrow FS(X) $$
where $ {\Phi}_{\tau} (c)(t) = c(t + \tau)$ for $ \tau \in \mathbb{R}$, $c \in FS(X)$ and $t \in \mathbb{R}$.
\begin{lem}{\label{inq}}
The map $\Phi$ is a continuous flow and, for $c,d \in FS(X)$ and $\tau \in \mathbb{R}$,
the following inequality holds:
$$ e^{-|\tau|} d_{FS(X)}(c,d) \leq d_{FS(X)}(\Phi_{\tau}(c),\Phi_{\tau}(d)) \leq e^{|\tau|} d_{FS(X)}(c,d) $$
\end{lem}
\noindent{\bf Proof}\quad A more general version is proved in \cite{BL2}, Lemma 1.3.
$\square$
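To illustrate (not prove) the inequality, one can approximate $d_{FS(X)}$ numerically in the simplest case $X = \mathbb{R}$, where a generalized geodesic is constant outside an interval and unit-speed on it. The encoding, the quadrature scheme, and the tolerance factor below are our own choices:

```python
import math

def gg(c_minus, c_plus, v0, sign=1):
    """A generalized geodesic on X = R: constant for t <= c_minus,
    unit-speed (slope `sign`) on (c_minus, c_plus), constant afterwards."""
    def c(t):
        s = min(max(t, c_minus), c_plus)
        return v0 + sign * (s - c_minus)
    return c

def d_fs(c, e, lo=-25.0, hi=25.0, n=4001):
    # trapezoidal quadrature of  int |c(t) - e(t)| / (2 e^{|t|}) dt
    h = (hi - lo) / (n - 1)
    f = [abs(c(lo + i*h) - e(lo + i*h)) / (2 * math.exp(abs(lo + i*h)))
         for i in range(n)]
    return h * (sum(f) - 0.5 * (f[0] + f[-1]))

def shift(c, tau):
    # the flow: Phi_tau(c)(t) = c(t + tau)
    return lambda t: c(t + tau)

c = gg(-1.0, 3.0, 0.5, +1)
e = gg(0.0, 2.0, -1.0, -1)
for tau in (-2.0, -0.5, 1.0, 3.0):
    base = d_fs(c, e)
    moved = d_fs(shift(c, tau), shift(e, tau))
    slack = 1.001  # tolerance for quadrature/truncation error
    assert math.exp(-abs(tau)) * base <= moved * slack
    assert moved <= math.exp(abs(tau)) * base * slack
```

The exponential weight $e^{-|t|}$ makes the truncation of the integral to $[-25, 25]$ harmless here, since generalized geodesics on $\mathbb{R}$ are eventually constant.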
Note that the isometry group of $(X, d_X)$ acts canonically on $FS(X)$. Recall that a map is proper if the inverse image of every compact subset is compact. Bartels and L\"uck also proved the following for the flow space $FS(X)$ in \cite{BL2}, Propositions 1.9 and 1.11.
\begin{prop} \label{ac}
If $(X,d_X)$ is a proper metric space, then $(FS(X),d_{FS(X)})$ is a proper metric space, in particular it is a complete metric space. Furthermore, if a group $\Gamma$ acts isometrically and properly on $(X, d_X)$, then $\Gamma$ also acts on $(FS(X),d_{FS(X)})$ isometrically and properly. In addition, if $\Gamma$ acts cocompactly on $X$, then $\Gamma$ acts cocompactly on $FS(X)$.
\end{prop}
Now we define our flow space by
$$HFS(T_d \times\mathbb {R}) := FS(T_d)\times \mathbb {R}$$
where $T_d$ has its natural metric with edge length $1$. Since $\Gamma$ has an action on both $FS(T_d)$ and $\mathbb{R}$,
$\Gamma$ will have a diagonal action on $FS(T_d)\times \mathbb {R}$ also. One can think of $HFS(T_d \times\mathbb {R})$ as the horizontal subspace of $FS(T_d \times\mathbb {R})$. In fact, there is a natural embedding of $HFS(T_d \times\mathbb {R})$ (as a topological space with product topology) into $FS(T_d \times \mathbb {R})$ defined as follows:
for a generalized geodesic $c$ on $T_d$, and $w \in \mathbb{R}$, we define a generalized geodesic on $T_d \times \mathbb{R}$, which maps $t \in \mathbb{R}$ to $(c(t),w) \in T_d \times \mathbb {R}$. $HFS(T_d \times\mathbb {R})$ will inherit a metric from this embedding.
For the rest of this section, let $X = T_d \times \mathbb {R}$.
\begin{lem}\label{prop}
The flow space $HFS(T_d \times\mathbb {R})$ is a proper metric space, in particular a complete metric space.
\end{lem}
\noindent{\bf Proof}\quad
In order to prove $HFS(T_d \times\mathbb {R})$ is a proper metric space, we need to show every closed ball $B_r(c) ~= ~\{c' ~|~ d_{HFS(T_d \times\mathbb {R})}(c,c') \leq r\}$ in $HFS(T_d \times\mathbb {R})$ is compact. Let $\{c_i\}$ be a Cauchy sequence in the closed ball $B_r(c)$; we need to show it converges to a point in $B_r(c)$. Since the space $FS(T_d \times \mathbb {R})$ is proper, and hence complete, we can assume $\{c_i\}$ converges to a point $c_0$ in $FS(T_d \times \mathbb {R})$. We only need to show $c_0 \in HFS(T_d \times\mathbb {R})$. Denote the projection map from $T_d \times \mathbb{R}$ to $T_d$ as $q_1$ and from $T_d \times \mathbb{R}$ to $\mathbb{R}$ as $q_2$, so that $c_i(t) = (q_1(c_i(t)),q_2(c_i(t)))$. Suppose $c_0 \notin HFS(T_d \times\mathbb {R})$; then $q_2(c_0(t))$ is not a constant map. Choose a big enough closed interval $I$ in $\mathbb{R}$ such that $q_2(c_0(t))$ restricted to $I$ is not constant; we can assume the maximum value is $A_1$ and the minimum is $A_2$, where $A_1 > A_2$. Let $\delta = A_1 - A_2 >0$ and $I_1 = \{t \in I~|~ q_2(c_0(t)) > A_1 - \frac{\delta}{4}\}$, and correspondingly $I_2 = \{t \in I~|~ q_2(c_0(t)) < A_2 + \frac{\delta}{4}\}$. Note $I_1$ and $I_2$ are nonempty sets of positive measure. Now for any given $c_i$, if $q_2(c_i) \geq \frac{A_1 + A_2}{2}$, then
$$d_{FS(X)}(c_0, c_i) = \int_{\mathbb{R}} \frac{d_{T_d \times \mathbb{R}}(c_0(t), c_i(t))}{2e^{|t|}} dt \hspace*{69mm}$$
$$\geq \int_{I_2} \frac{d_{T_d \times \mathbb{R}}(c_0(t), c_i(t))}{2e^{|t|}} dt \hspace*{46mm}$$
$$= \int_{I_2} \frac{d_{T_d \times \mathbb{R}}((q_1(c_0(t)),q_2(c_0(t))), (q_1(c_i(t)),q_2(c_i(t))))}{2e^{|t|}} dt $$
$$\hspace*{13mm} \geq \int_{I_2} \frac{\frac{1}{2}d_{T_d \times \mathbb{R}}(c_0(t), (q_1(c_0(t)), q_2(c_i(t))))}{2e^{|t|}} dt\hspace*{5mm}(by~ Corollary~ \ref{ddc})$$
$$\hspace*{13mm} \geq \int_{I_2} \frac{\frac{1}{2}d_{T_d \times \mathbb{R}}(c_0(t), (q_1(c_0(t)), \frac{A_1 +A_2}{2}))}{2e^{|t|}} dt > 0\hspace*{5mm}(by~ Lemma~ \ref{led})$$
The last integral is independent of $c_i$; denote its value as $\epsilon_1$. For the same reason, if $q_2(c_i) < \frac{A_1 + A_2}{2}$, there exists $\epsilon_2 > 0$ such that $d_{FS(X)}(c_0, c_i) \geq \epsilon_2$. Let $\epsilon = \min(\epsilon_1,\epsilon_2) > 0$, then $d_{FS(X)}(c_0, c_i) \geq \epsilon >0 $.
Hence the sequence $\{c_i\}$ can never converge to $c_0$, a contradiction.
$\square$
\begin{rem}
The proof in fact shows that the embedding $HFS(X) \subset FS(X)$ is a closed $\Gamma$-equivariant embedding.
\end{rem}
We define now the flow
$$ \Phi: HFS(T_d \times\mathbb{R}) \times \mathbb{R} \rightarrow HFS(T_d \times\mathbb{R}) $$
by $ {\Phi}_{\tau} ((c,w))(t) = (c(t + \tau),w)$ for $c \in FS(T_d)$ and $ \tau,w,t \in \mathbb{R}$. Note $\Phi$ is a $\Gamma$-equivariant flow.
~\\
\begin{lem} {\label{l3}}
The flow space $HFS(X)$ has the following properties:
\begin{itemize}
\item[(i)] $\Gamma$ acts properly and cocompactly on $HFS(X)$.
\item [(ii)] Given $C > 0$, there are only finitely many $\Gamma$ orbits of periodic flow curves with period less than $C$ (but bigger than $0$).
\item [(iii)] Let $HFS(X)^{\mathbb{R}}$ denote the $\mathbb{R}$-fixed point set, i.e., the set of points $c \in HFS(X)$ for which ${\Phi}_{\tau} (c) = c$ for all $\tau \in \mathbb{R}$, then $HFS(X) - HFS(X)^{\mathbb{R}}$ is locally connected.
\item[(iv)] If we put
$$ k_\Gamma := \sup\{\, |H| ~:~ H \text{ a subgroup of } \Gamma \text{ of finite order } |H| \,\}; $$
$$ d_{HFS(X)} := \dim(HFS(X) - {HFS(X)}^{\mathbb{R}}); \hspace*{37mm}$$
then $k_\Gamma$ and $d_{HFS(X)}$ are finite.
\end{itemize}
\end{lem}
\noindent{\bf Proof}\quad Those properties are essentially implied by results in \cite{BL2}, section 1.
\begin{itemize}
\item[(i)] The $\Gamma$-action on $HFS(X)$ is proper since $HFS(X)$ $\Gamma$-equivariantly embeds into $FS(X)$ and the $\Gamma$-action on $FS(X)$ is proper by Proposition \ref{ac}. By Lemma 1.10 in \cite{BL2}, for any $w_0 \in \mathbb{R}$, the evaluation map $FS(T_d) \rightarrow T_d$ defined by $c \mapsto c(w_0)$ is proper. Hence $FS(T_d) \times \mathbb{R} \rightarrow T_d \times \mathbb{R}$ is proper. This induces a map $(FS(T_d) \times \mathbb{R})/{\Gamma} \rightarrow (T_d \times \mathbb{R})/{\Gamma}$. Since $(T_d \times \mathbb{R})/{\Gamma}$ is compact, $(FS(T_d) \times \mathbb{R})/{\Gamma}$ is compact as well; one can prove this by choosing a compact fundamental domain in $T_d \times \mathbb{R}$ and using the fact that the map $FS(T_d) \times \mathbb{R} \rightarrow T_d \times \mathbb{R}$ is proper. Hence $\Gamma$ acts cocompactly on $HFS(X)$.
\item [(ii)] Note that periodic orbits in $HFS(T_d\times \mathbb{R})$ are periodic orbits in $FS(T_d\times \mathbb{R})$ which move horizontally (i.e., along the tree direction, with the $\mathbb{R}$-coordinate fixed). Note also that the embedding of $HFS(T_d\times \mathbb{R})$ into $FS(T_d\times \mathbb{R})$ is $\Gamma$-equivariant, and there are only finitely many nonzero horizontal periodic geodesics on $(T_d\times \mathbb{R})/{\Gamma}$ of period less than $C$. In fact $(T_d\times\mathbb{R})/{\Gamma} = S^1 \times [0,1] / (z,0) \thicksim (z^d, 1)$ (see Section \ref{model}), and horizontal periodic geodesics of period $m$ on it correspond to solutions of the equation $d^m x ~\equiv~ x~ (~mod~ 1~)$, where $x \in \mathbb{R}/\mathbb{Z}$. For any positive integer $m$ this equation has finitely many solutions; in fact the solutions are $x = \frac{k}{d^m- 1}$, $k \in \mathbb{Z}$, $x\in\mathbb{R}/\mathbb{Z}$, so for fixed $m$ the number of solutions is $d^m- 1$. Hence there are only finitely many nonzero horizontal periodic orbits on $HFS((T_d\times\mathbb{R})/{\Gamma})$ with period less than $C$, and the claim in (ii) follows.
\item [(iii)] This is because $T_d$ is a tree, hence a CAT(0) space. In Bartels and L\"uck's paper \cite{BL2}, Proposition 2.10, they proved that for any CAT(0) space $A$, $FS(A) - FS(A)^{\mathbb{R}}$ is locally connected. Our flow space is $HFS(X) = FS(T_d) \times \mathbb{R}$. Since the flow on our flow space acts only on the first factor, $HFS(X)^{\mathbb{R}} = FS(T_d)^{\mathbb{R}} \times \mathbb{R}$. So $HFS(X) - HFS(X)^{\mathbb{R}}$ is locally connected as well.
\item[(iv)] Since $\Gamma$ is a torsion-free group, $k_\Gamma = 1$. Since any two points are connected by a unique geodesic in the tree $T_d$, it is not hard to see that the flow space $FS(T_d)$ has dimension less than $5$. Therefore our flow space $HFS(X) = FS(T_d) \times \mathbb{R}$ has finite dimension, and hence $d_{HFS(X)}$ is finite. One can also consult Bartels and L\"uck's paper \cite{BL2}, where more general results for CAT(0) spaces are proved.
\end{itemize}
$\square$
\begin{rem} \label{remfl}
We define an embedding $\Psi: T_d \times \mathbb{R} \rightarrow FS(T_d) \times \mathbb{R}$ by $(z,w) \mapsto (c_z,w)$, where $c_z$
is the unique generalized geodesic which sends $(-\infty, 0)$ to $z$, and $[0,\infty)$ isometrically to the geodesic $[z,\omega)$. Recall $[z,\omega)$ is the unique geodesic connecting $z$ and the specified infinity point $\omega$ of $T_d$ defined in Section \ref{model}. Also, we can flow this embedding by flowing its image in $HFS(X)$; define $\Psi_{\tau}(z,w) = \Phi_{\tau}(\Psi(z,w))$. It is easy to see that $\Psi_{\tau}$ is a $\Gamma$-equivariant map since $\omega$ is fixed under the group action.
\end{rem}
We will need the following lemma later.
\begin{lem} \label{lpar}
Let $z_0$ be a fixed point in $T_d$, let $w_1,w_2$ be two fixed points in $\mathbb{R}$, and set $P_n = (z_0,\frac{w_1}{n})$, $Q_n = (z_0,\frac{w_2}{n})$, with $d_X(P_1,Q_1) < D$. Then for any $\epsilon > 0$, there exists a number $\bar{N}$, depending only on $\epsilon$, $D$ and $d$, such that for any $n > \bar{N}$
$$d_X(P_n,Q_n) < \frac{\epsilon}{4}$$
and
$$ d_{HFS(X)}(\Psi(P_n),\Psi(Q_n)) \leq \epsilon$$
\end{lem}
\noindent{\bf Proof}\quad Choose $T= \max\{1,\ln {\frac{4}{\epsilon}}\}$. Since $\Psi(P_n)(T)$ and $\Psi(Q_n)(T)$ have the same $T_d$ coordinate, by Lemma \ref{vl} we can choose a big enough integer $\bar{N}$ such that for any $n > \bar{N}$, $d_X(\Psi(P_n)(T),\Psi(Q_n)(T)) < \frac{\epsilon}{4}$; note $\bar{N}$ depends only on $\epsilon$, $D$ and $d$. By the same lemma we may also assume $d_X(P_n,Q_n) < \frac{\epsilon}{4}$ for $n > \bar{N}$, which is the first claim. Using the definition of generalized geodesics, we have $d_{X}( \Psi(P_n)(t), \Psi(P_n)(T)) = t-T$ and $d_{X}(\Psi(Q_n)(t), \Psi(Q_n)(T)) = t-T$ for $t \geq T$. Hence for any $t \geq T$, by the triangle inequality we have the following
$$d_{X}(\Psi(P_n)(t),\Psi(Q_n)(t))\hspace*{88mm} $$
$$\leq d_{X}( \Psi(P_n)(t), \Psi(P_n)(T)) + d_{X}(\Psi(P_n)(T),\Psi(Q_n)(T)) + d_{X}(\Psi(Q_n)(T), \Psi(Q_n)(t)) $$
$$\leq \frac{\epsilon}{4} + 2(t-T) \hspace*{103mm}$$
On the other hand, the metric defined on $T_d \times \mathbb{R}$ is expanding in the $\mathbb{R}$ direction when moving towards $\omega$. Hence for any $0\leq t \leq T$, we have
$$d_{X}(\Psi(P_n)(t),\Psi(Q_n)(t)) \leq d_{X}(\Psi(P_n)(T),\Psi(Q_n)(T)) < \frac{\epsilon}{4}$$
And for $t \leq 0$, $\Psi(P_n)(t) = \Psi(P_n)(0) = P_n$, $\Psi(Q_n)(t) = \Psi(Q_n)(0) = Q_n$, hence
$$d_{X}(\Psi(P_n)(t),\Psi(Q_n)(t)) = d_{X}(P_n,Q_n) < \frac{\epsilon}{4}.$$
Therefore, for any $n>\bar{N}$
$$d_{HFS(X)}(\Psi(P_n),\Psi(Q_n)) = \int_{\mathbb{R}} \frac{d_X(\Psi(P_n)(t),\Psi(Q_n)(t))}{2e^{|t|}} dt \hspace*{30mm} $$
$$ = \int_{(-\infty,0]} \frac{d_X(\Psi(P_n)(t),\Psi(Q_n)(t))}{2e^{|t|}} dt ~~+~~ \int_{[0,T]} \frac{d_X(\Psi(P_n)(t),\Psi(Q_n)(t))}{2e^{|t|}} dt$$
$$\hspace*{20mm}+ \int_{[T, \infty)} \frac{d_X(\Psi(P_n)(t),\Psi(Q_n)(t))}{2e^{|t|}} dt$$
$$ \leq \int_{(-\infty,0]} \frac{\frac{\epsilon}{4}}{2e^{|t|}} dt \hspace*{2mm}+ \hspace*{2mm}\int_{[0,T]} \frac{\frac{\epsilon}{4}}{2e^{|t|}} dt \hspace*{2mm}+\hspace*{2mm} \int_{[T,\infty)} \frac{\frac{\epsilon}{4} + 2(t-T)}{2e^{|t|}} dt \hspace*{20mm}$$
$$= \frac{\epsilon}{4} + e^{-T} \leq \frac{\epsilon}{4} + \frac{\epsilon}{4} = \frac{\epsilon}{2}\hspace*{75mm}$$
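For the reader's convenience, the three elementary integrals behind the last equality evaluate as follows:
$$\int_{(-\infty,0]} \frac{dt}{2e^{|t|}} = \frac{1}{2}, \qquad \int_{[0,T]} \frac{dt}{2e^{|t|}} = \frac{1-e^{-T}}{2}, \qquad \int_{[T,\infty)} \frac{2(t-T)}{2e^{|t|}}\, dt = e^{-T},$$
so the three terms sum to $\frac{\epsilon}{4}\big(\frac{1}{2} + \frac{1-e^{-T}}{2} + \frac{e^{-T}}{2}\big) + e^{-T} = \frac{\epsilon}{4} + e^{-T}$, and $e^{-T} \leq \frac{\epsilon}{4}$ by the choice $T \geq \ln\frac{4}{\epsilon}$.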
This proves the lemma.
$\square$
Because of the properties proved in Lemma \ref{l3}, Theorem 1.4 in \cite{BLR} yields a long thin cover for $HFS(X)$; i.e., the following result holds.
\begin{prop}\label{ltc}
There exists a natural number $N$, depending only on $k_\Gamma$, $d_{HFS(X)}$ and the action of $\Gamma$ on an arbitrary neighborhood of ${HFS(X)}^{\mathbb{R}}$, such that for every $\lambda > 0$ there is a $\mathcal{VC}yc$-cover $\mathcal{U}$ of $HFS(X)$ with the following properties:
\begin{itemize}
\item[(i)] dim $\mathcal{U} \leq N$;
\item [(ii)] For every $x \in HFS(X)$ there exists $U_x \in \mathcal{U}$ such that
$$ \Phi_{[-\lambda, \lambda]}(x) := \{ \Phi_{\tau}(x) ~|~ \tau \in [-\lambda, \lambda] \} \subseteq U_x; $$
\item [(iii)] ${\Gamma} \setminus {\mathcal{U}}$ is finite.
\end{itemize}
where $\mathcal{VC}yc$ denotes the collection of virtually cyclic subgroups of a group.
\end{prop}
Recall that the dimension of a cover $\mathcal{U}$ is defined to be the greatest $N$ such that there exist $N+1$ elements of $\mathcal{U}$ with nonempty intersection. In general, for a collection of subgroups $\mathcal{F}$, we define an $\mathcal{F}$-cover as follows.
\begin{defn}
Let $G$ be a group and $Z$ a $G$-space. Let $\mathcal{F}$ be a collection of subgroups of $G$. An open cover $\mathcal{U}$ of $Z$ is called an \textbf{$\mathcal{F}$-cover} if the following three conditions are satisfied.\\
(i) For $g\in G$ and $U \in \mathcal{U}$ we have either $g(U) = U$ or $g(U) \cap U = \emptyset$;\\
(ii) For $g\in G$ and $U \in \mathcal{U}$, we have $g(U) \in \mathcal{U}$;\\
(iii) For $U \in \mathcal{U}$ the subgroup $G_U := \{ g \in G~|~ g(U) = U \}$ is a member of $\mathcal{F}$.
\end{defn}
For a subset $A$ of a metric space $Z$ and $\delta >0$, $A^{\delta}$ denotes the set of all points $z \in Z$ for which $d(z,A) < \delta$. Combining Lemma \ref{inq} with the fact that $\Gamma$ acts cocompactly on ${FS(T_d) \times \mathbb{R}}$ (Lemma \ref{l3}, (i)), Proposition \ref{ltc} can be improved to the following.
\begin{prop}\label{lte}
There exists a natural number $N$, depending only on $k_\Gamma$, $d_{HFS(X)}$ and the action of $\Gamma$ on an arbitrary neighborhood of ${HFS(X)}^{\mathbb{R}}$, such that for every $\lambda > 0$ there is a $\mathcal{VC}yc$-cover $\mathcal{U}$ of $HFS(X)$ with the following properties:
\begin{itemize}
\item[(i)] dim $\mathcal{U} \leq N$;
\item [(ii)] There exists a $\delta >0$, depending on $\lambda$, such that for every $x \in HFS(X)$ there exists $U_x \in \mathcal{U}$ such that
$$ (\Phi_{[-\lambda, \lambda]}(x))^{\delta} ~\subseteq~ U_x; $$
\item [(iii)] ${\Gamma} \setminus {\mathcal{U}}$ is finite.
\end{itemize}
\end{prop}
\noindent{\bf Proof}\quad A simple modification of the argument in \cite{BLR}, Section 1.3, pages 1804--1805, yields the result. In their proof they used a lemma (Lemma 7.2), which is replaced by Lemma \ref{inq} in our case.
$\square$
\section {Hyper-elementary subgroups of ${\mathbb{Z}}_{q^s} \rtimes_\alpha {\mathbb{Z}}_{t_s}$} \label{gts}
In this section we study the hyper-elementary subgroups of ${\mathbb{Z}}_{q^s} \rtimes_\alpha {\mathbb{Z}}_{t_s}$, where $d$ is an integer with $|d| >1$, $q$ is a prime number greater than $|d| +1$, $s$ is a positive integer, $\alpha$ is multiplication by $d$, and $t_s$ is the order of $d$ in the group of units of the ring ${\mathbb{Z}}_{q^s}$. The reason we study this group is that ${\mathbb{Z}}_{q^s} \rtimes_\alpha {\mathbb{Z}}_{t_s}$ can be realized as a quotient of ${\mathbb{Z} {[{\frac{1}{d}} ]}} \rtimes_d \mathbb{Z}$. In fact, note first that $(q^s {\mathbb{Z} {[{\frac{1}{d}} ]}}) \rtimes_d \{0\}$ is a normal subgroup of ${\mathbb{Z} {[{\frac{1}{d}} ]}} \rtimes_d \mathbb{Z}$, and the quotient by it is ${\mathbb{Z}}_{q^s} \rtimes_d {\mathbb{Z}}$. Now $1 \in \mathbb{Z}$ acts on ${\mathbb{Z}}_{q^s}$ as multiplication by $d$, which has order $t_s$ when considered as an element of $Aut({\mathbb{Z}}_{q^s})$. Hence we can further map ${\mathbb{Z}}_{q^s} \rtimes_d {\mathbb{Z}}$ to ${\mathbb{Z}}_{q^s} \rtimes_d {\mathbb{Z}}_{t_s}$.
We denote the group of units of a ring $R$ by $U(R)$, hence $t_s$ is the order of $d$ in $U({\mathbb{Z}}_{q^s})$.
\begin{defn}
A \textbf{hyper-elementary group} H is an extension of a $p$-group by a cyclic group of order $n$, where $p$ is a prime number and $(n, p) = 1$; in other words,
there exists a short exact sequence
$$ 1 \rightarrow C_n \rightarrow H \rightarrow G_p \rightarrow 1 $$
where $C_n$ is a cyclic group of order $n$ and $G_p$ is a $p$-group with $(n, p) = 1$.
\end{defn}
Note first that the order of the group of units $U({\mathbb{Z}}_{q})$
is $q - 1$, while the order of $U({\mathbb{Z}}_{q^s})$ is $q^s - q^{s - 1} = q^{s - 1}(q - 1)$.
\begin{lem} \label{l1}
If $d^t \equiv 1 ~(~mod~ q~)$, then $d^{tq^{s-1}} \equiv 1 ~(~mod~ q^s~)$.
\end{lem}
\noindent{\bf Proof}\quad We prove this by induction. For $s = 1$ this is automatically true. Now assume it is true for $k$; we prove it for $k + 1$.
By hypothesis, $d^{tq^{k-1}} = mq^k + 1$, where $m$ is an integer. So $d^{tq^k} = (mq^k + 1)^q$. Expanding the right-hand side, it is easy to see that
$d^{tq^{k}} \equiv 1 ~(~mod~ q^{k +1}~)$.
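Explicitly, the binomial expansion gives
$$(mq^k + 1)^q = 1 + \binom{q}{1}mq^k + \sum_{j=2}^{q}\binom{q}{j}m^j q^{jk} \equiv 1 ~(~mod~ q^{k+1}~),$$
since $\binom{q}{1}mq^k = mq^{k+1}$ and $jk \geq k+1$ for $j \geq 2$, $k \geq 1$.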
$\square$
\\
If we denote the order of $d$ in $U({\mathbb{Z}}_{q})$ by $t_1$ and the order of $d$ in $U({\mathbb{Z}}_{q^s})$ by $t_s$, then $t_1 ~|~ q - 1$ and $t_s ~|~ q^{s - 1}(q - 1)$. Let $t_s = {m_s}q^{k_s}$, where $(m_s, q) = 1$. Then the previous lemma says that $m_s ~|~ t_1$ and $k_s \leq s - 1$. In the following lemma we prove $m_s = t_1$.
\begin{lem}\label{l2}
Assume $t_1$ is the order of $d$ in $U({\mathbb{Z}}_{q})$, $t_s = {m_s}q^{k_s}$ is the order of $d$ in $U({\mathbb{Z}}_{q^s})$, where $(m_s, q) = 1$, then $m_s = t_1$.
\end{lem}
\noindent{\bf Proof}\quad We prove it by contradiction. Assume $m_s \neq t_1$ and denote $d^{m_s} = a$; then $a$ is not congruent to $1$ modulo $q$, since $m_s$ is a proper divisor of $t_1$. But since the order of $d$ in
$U({\mathbb{Z}}_{q^s})$ is ${m_s}q^{k_s}$, we have $d^{{m_s}q^{k_s}} \equiv a^{q^{k_s}} \equiv 1 ~(~mod ~q^s~)$. This means $a^{q^{k_s}} \equiv 1 ~(~mod ~q~)$, which is not true: in fact, by Fermat's theorem, $a^{q^{ k_s}} \equiv a^{q^{k_s - 1}} \equiv \ldots \equiv a ~(~mod ~q~)$.
$\square$
We list some basic formulas we are going to use frequently:\\
(1) $(a,b)(a',b') = (a+{d^{-b}}a', b+b')$;\\
(2) ${(a,b)}^n = ((1 + d^{-b} + \ldots + d^{-(n - 1)b})a,nb)$;\\
(3) ${(a,b)}^{-1} = (-{d^b}a, -b)$;\\
(4) $(x,y)(a,b){(x,y)}^{-1} = ((1 - d^{-b})x + {d^{-y}}a,b)$;\\
where $(a,b),(a',b'),(x,y) \in {\mathbb{Z}}_{q^s} \rtimes_\alpha {\mathbb{Z}}_{t_s}$, $n$ is a positive integer and $d^{-b}$ is the inverse element of $d^b$ in the unit group $U({\mathbb{Z}}_{q^s})$, etc.
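As a sample computation, formula (4) follows directly from (1) and (3):
$$(x,y)(a,b){(x,y)}^{-1} = (x + d^{-y}a,\, y+b)(-d^{y}x,\, -y) = (x + d^{-y}a - d^{-(y+b)}d^{y}x,\, b) = ((1-d^{-b})x + d^{-y}a,\, b).$$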
\begin{lem} \label{conj}
For any $(a,b) \in {\mathbb{Z}}_{q^s} \rtimes_\alpha {\mathbb{Z}}_{t_s}$, if $t_1 \nmid b$, then $(a,b)$ can be conjugated to $(0,b)$.
\end{lem}
\noindent{\bf Proof}\quad
Note first that if $t_1 \nmid b$, then $d^{-b} \not\equiv 1 ~(~mod~ q~)$, hence $1 - d^{-b}$ is a unit in ${\mathbb{Z}}_{q^s}$. Since $(x,y)(a,b){(x,y)}^{-1} = ((1 - d^{-b})x + {d^{-y}}a,b)$, and $1 - d^{-b}$ is a unit in ${\mathbb{Z}}_{q^s}$, we can find $x,y$ such that $(1 - d^{-b})x + {d^{-y}}a ~=~0$, which means we can conjugate $(a,b)$ to $(0,b)$. In fact, we can always take $y = 0$.
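Explicitly, taking $y = 0$ and $x = -(1-d^{-b})^{-1}a$, formula (4) gives
$$(x,0)(a,b){(x,0)}^{-1} = ((1-d^{-b})x + a,\, b) = (0,b).$$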
$\square$
\begin{thm}\label{thm3}
For $s>q$ and $q$ a prime number bigger than $|d|+1$, every hyper-elementary subgroup H of ${\mathbb{Z}}_{q^s} \rtimes_\alpha {\mathbb{Z}}_{t_s}$ can be conjugated into a subgroup of one of the following three types: \\
\hspace*{5mm} type 1:\hspace*{2mm} ${\mathbb{Z}}_{q^s} \rtimes_\alpha {\mathbb{Z}}_{q^{k_s}}$;\\
\hspace*{5mm} type 2:\hspace*{2mm} ${\mathbb{Z}}_{q^s} \rtimes_\alpha {\mathbb{Z}}_{t_1}$;\\
\hspace*{5mm} type 3:\hspace*{2mm} $\{0\}\rtimes_\alpha{\mathbb{Z}}_{t_s}$.\\
where $\alpha$ is multiplication by $d$, $t_1$ is the order of $d$ in the group of units $U({\mathbb{Z}}_q)$, $t_s$ is the order of $d$ in the group of units $U({\mathbb{Z}}_{q^s})$, and $t_s = t_1 q^{k_s}$.
\end{thm}
\begin{rem}
The homomorphism $\alpha: {\mathbb{Z}}_{q^{k_s}} \rightarrow Aut({\mathbb{Z}}_{q^s})$ in ${\mathbb{Z}}_{q^s} \rtimes_\alpha {\mathbb{Z}}_{q^{k_s}}$ is the restriction of the homomorphism $\alpha: {\mathbb{Z}}_{t_s} \rightarrow Aut({\mathbb{Z}}_{q^s})$ to the subgroup
${\mathbb{Z}}_{q^{k_s}} \subseteq {\mathbb{Z}}_{t_s}$. There is a similar remark for ${\mathbb{Z}}_{q^s} \rtimes_\alpha {\mathbb{Z}}_{t_1}$.
\end{rem}
\noindent{\bf Proof}\quad Denote the cyclic part of the hyper-elementary subgroup H by C. If $s > q$, then $q^s > |d|^{q-1}$, which implies $k_s \geq 1$.
First, if C is trivial, then H is just a $p$-group. By Sylow's theorem, any $p$-group can be conjugated into a maximal $p$-subgroup. Note that the order of ${\mathbb{Z}}_{q^s} \rtimes_\alpha {\mathbb{Z}}_{t_s}$ is $q^s \cdot t_s= q^s\cdot {t_1}q^{k_s} = t_1q^{s+k_s}$, where $t_1$ and $q$ are coprime; hence any $p$-group in ${\mathbb{Z}}_{q^s} \rtimes_\alpha {\mathbb{Z}}_{t_s}$ can be conjugated into ${\mathbb{Z}}_{q^s} \rtimes_\alpha {\mathbb{Z}}_{q^{k_s}}$ (type 1), or into $\{0\}\rtimes_\alpha{\mathbb{Z}}_{t_1}$, which is a subgroup of $\{0\}\rtimes_\alpha{\mathbb{Z}}_{t_s}$ (type 3).
Now assume C is non-trivial, say C is generated by $(a,b) \in {\mathbb{Z}}_{q^s} \rtimes_\alpha {\mathbb{Z}}_{t_s}$.
If $b = 0$, then C has order a power of $q$; since the order of the $p$-group part of H is coprime to $q$ and divides $t_1$, H is a subgroup of ${\mathbb{Z}}_{q^s} \rtimes_\alpha {\mathbb{Z}}_{t_1}$ (type 2). So for the rest of the proof we will assume that $b \neq 0$.
If $a = 0$, C lies in $\{0\} \rtimes_\alpha {\mathbb{Z}}_{t_s}$. If the $p$-group part of H is trivial, then we are in type 3. Otherwise, in order for C to be a normal subgroup, for any $(x,y) \in H$ the element $(x,y)(0,b){(x,y)}^{-1} = ((1 - d^{-b})x, b)$ has to lie in C as well, which means $(1 - d^{-b})x$ has to be equal to zero in ${\mathbb{Z}}_{q^s}$. Note first that this means C lies in the center of H. If $x = 0$ for all $(x,y)$, then H is a subgroup of $\{0\} \rtimes_\alpha {\mathbb{Z}}_{t_s}$ (type 3). If $x \neq 0$ for some $(x,y)$ which lies in the $p$-group of H, then at least $(1 - d^{-b}) \equiv 0 ~(~mod~ q~)$ and $x \equiv 0 ~(~mod~ q~)$. Hence $d^{-b} \equiv 1 ~(~mod~ q~)$, therefore $t_1~|~b$. So the order of the cyclic group is a power of $q$, while the order of H's $p$-group has to be coprime to $q$. By Sylow's theorem, H's $p$-group can be conjugated to a subgroup of $\{0\}\rtimes_\alpha{\mathbb{Z}}_{t_1}$, hence it is actually a cyclic subgroup. This implies that the hyper-elementary subgroup H is abelian, since C is in the center of H. Combined with the fact that the orders of the cyclic group and the $p$-group are coprime, H is again a cyclic group, say generated by $(a',b')$, where $b'$ is not zero. The case $a' = 0$ is contained in type 3. For the case $a' \neq 0$: if $t_1~\nmid~b'$, by Lemma \ref{conj}, $(a',b')$ can be conjugated to $(0,b')$ (type 3); when $t_1 ~|~ b'$, H will lie in ${\mathbb{Z}}_{q^s} \rtimes_\alpha {\mathbb{Z}}_{q^{k_s}}$ (type 1).
Now suppose both $a$ and $b$ are nonzero. If $t_1$ does not divide $b$, then by Lemma \ref{conj} we can conjugate $(a,b)$ to $(0,b)$, which is included in the $a = 0$ case. When $t_1~|~b$, then $d^{-b} \equiv 1 ~(~mod~ q~)$. And $b$ lies in ${\mathbb{Z}}_{q^{k_s}}$ ($\subseteq {\mathbb{Z}}_{t_s}$), hence the order of C is a power of $q$, since $(a,b)$ generates C in ${\mathbb{Z}}_{q^s} \rtimes_\alpha {\mathbb{Z}}_{t_s}$. Let the order of $b$ in ${\mathbb{Z}}_{t_s}$ be $q^r$. Since H's $p$-group has order coprime to $q$, again by Sylow's theorem it can be conjugated to a subgroup of $\{0\}\rtimes_\alpha{\mathbb{Z}}_{t_1}$. Therefore we can assume the generator of H's $p$-group to be $(0,y)$. If $y \neq 0$, then $t_1$ does not divide $y$, so $d^{y}$ is not congruent to $1$ modulo $q$. On the other hand, since $(0,y)(a,b){(0,y)}^{-1} = ({d^{-y}}a,b)$, the element $({d^{-y}}a,b)$ has to lie in C, which means $({d^{-y}}a,b) = {(a,b)}^n$ for some $n$. Recall from formula (2) that ${(a,b)}^n = ((1 + d^{-b} + \ldots + d^{-(n - 1)b})a,nb)$. In order for $nb = b$, which is the same as $(n-1)b = 0$, the order $q^r$ of $b$ has to divide $n - 1$. For such $n$, $\Sigma = 1 + d^{-b} + \ldots + d^{-(n - 1)b}$ is congruent to $1$ modulo $q$; note $d^{-b} \equiv 1 ~(~mod~ q~)$. Consequently, $d^{-y} - \Sigma$ is a unit in ${\mathbb{Z}}_{q^{s}}$ and $(d^{-y} - \Sigma)a = 0 \in {\mathbb{Z}}_{q^{s}}$, which is a contradiction since $a \neq 0$. Therefore $y$ has to be $0$ and H's $p$-group has to be trivial, hence H is a subgroup of ${\mathbb{Z}}_{q^s} \rtimes_\alpha {\mathbb{Z}}_{q^{k_s}}$ (type 1).
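Spelling out the congruence for $\Sigma$: since $d^{-b} \equiv 1 ~(~mod~ q~)$ and $q ~|~ n-1$ (as $q^r ~|~ n-1$ with $r \geq 1$),
$$\Sigma = 1 + d^{-b} + \ldots + d^{-(n-1)b} \equiv \underbrace{1 + 1 + \ldots + 1}_{n~\text{terms}} \equiv n \equiv 1 ~(~mod~ q~).$$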
$\square$
\begin{cor}
For any $n > 1$, $q$ a prime number greater than $|d|^n$ and $s > q$, each hyper-elementary subgroup
of ${\mathbb{Z}}_{q^s} \rtimes_\alpha {\mathbb{Z}}_{t_s}$ has index greater than $n$. Recall $\alpha$ is multiplication by $d$ and $t_s$ is the order of $d$ in the group of units $U({\mathbb{Z}}_{q^s})$.
\end{cor}
\begin{cor} \label{c2}
For any $n > 1$, $q$ a prime number greater than $|d|^n$ and $s > q$, each hyper-elementary subgroup
of ${\mathbb{Z}}_{q^s} \rtimes_\alpha {\mathbb{Z}}_{t_s}$ is conjugate to a subgroup H, such that one of the following is true:
\\
(1) the index $[{\mathbb{Z}}_{t_s} : \pi(H)] \geq n$, where $\pi : {\mathbb{Z}}_{q^s} \rtimes_\alpha {\mathbb{Z}}_{t_s} \rightarrow {\mathbb{Z}}_{t_s}$ is the natural epimorphism.\\
(2) H is a subgroup of $\{0\}\rtimes_\alpha{\mathbb{Z}}_{t_s}$, and $q^s \geq n$.
\end{cor}
\noindent{\bf Proof}\quad The index is $t_1$ for subgroups of type 1, $q^{k_s}$ for subgroups of type 2, and $q^s$ for subgroups of type 3 in Theorem \ref{thm3}. If $q > |d|^n$, then the order of $d$ in $U({\mathbb{Z}}_{q})$
is greater than $n$. Also $s > q$, so $k_s \geq 1$. Hence the order of $d$ in $U({\mathbb{Z}}_{q^s})$ is $t_s = {t_1}q^{k_s}$, where $t_1 > n$, $q > n$, and $k_s \geq 1$. Now by Theorem \ref{thm3} the two corollaries follow easily.
$\square$
\section{Proof of the main Theorem}
In this section we prove our main theorem. Our strategy is to prove that ${\mathbb{Z} {[{\frac{1}{d}} ]}} \rtimes_\alpha \mathbb{Z}$ is in fact a Farrell-Hsiang group, as defined by Bartels and L\"uck in \cite{BL1}. Recall that $d$ is a positive integer greater than $1$ and $\alpha$ is multiplication by $d$.
\begin{defn}
Let $\mathcal{F}$ be a family of subgroups of the finitely generated group G. We call G a \textbf{Farrell-Hsiang Group} with respect to the family $\mathcal{F}$ if the following holds for a fixed word metric $d_G$:\\
~~ There exists a fixed natural number $N$ such that for every natural number $n$ there is a surjective homomorphism $\Delta_n :G \rightarrow F_n$ with $F_n$ a finite group such that the following condition is satisfied. For every hyper-elementary subgroup H of $F_n$ we set $\bar{H} := \Delta^{-1}_n (H)$ and require that there exist a simplicial complex $E_H$ of dimension at most $N$ with a cell-preserving simplicial $\bar{H}$-action whose stabilizers belong to $\mathcal{F}$, and an $\bar{H}$-equivariant map $f_H: G \rightarrow E_H$ such that $d_G(g_0,g_1) < n$ implies $d^1_{E_H}(f_H(g_0),f_H(g_1)) < \frac{1}{n}$ for all $g_0,g_1 \in G$, where $d^1_{E_H}$ is the $l^1$-metric on $E_H$.
\end{defn}
\begin{rem}
As pointed out in \cite{BFL}, Remark 1.15, in order to check that a group G is a Farrell-Hsiang group, it suffices to check these conditions for one hyper-elementary subgroup in every conjugacy class of such subgroups of $F_n$.
\end{rem}
With this definition, they proved the following theorem:
\begin{thm}
Let G be a Farrell-Hsiang group with respect to the family $\mathcal{F}$. Then G satisfies the K-theoretic and L-theoretic Farrell-Jones Conjecture
with respect to the family $\mathcal{F}$.
\end{thm}
Since both the K-theoretic and L-theoretic Farrell-Jones conjectures have been verified for abelian groups with respect to the family of virtually cyclic subgroups, by the transitivity principle (see for example \cite{BFL}, Theorem 1.11), in order to prove our main theorem we only need to prove it with respect to the family of abelian subgroups. \\
\textbf{Claim}: The group $\Gamma = {\mathbb{Z} {[{\frac{1}{d}} ]}} \rtimes_\alpha \mathbb{Z}$ is a Farrell-Hsiang group with respect to the family of abelian subgroups.\\
Choose $N > 0$ to be the number which appears in Proposition \ref{lte}; note $N$ is independent of $\lambda$. First, there is a quotient map ${\mathbb{Z} {[{\frac{1}{d}} ]}} \rtimes_\alpha \mathbb{Z} \rightarrow {\mathbb{Z}}_{q^s} \rtimes_\alpha {\mathbb{Z}}_{t_s}$, where $t_s$ is the order of $d$ in the group of units of ${\mathbb{Z}}_{q^s}$. Let $F_n$ be ${\mathbb{Z}}_{q^s} \rtimes_\alpha {\mathbb{Z}}_{t_s}$, where we choose $q$ to be a prime number bigger than $d^n$ and $s >q$ (hence Corollary \ref{c2} holds; we will adjust $s$ further later to get more control). Let $\Delta_n$ be the quotient map. Also let H be a hyper-elementary subgroup of $F_n$ and $\bar{H} = \Delta^{-1}_n (H)$. For convenience, the metric we put on $\Gamma$ is the one inherited from the orbit embedding $\eta: \Gamma \rightarrow T_d \times \mathbb{R}$, where $\eta(g) = gx_0$, $g \in \Gamma$ and $x_0 = (P_0,0) \in T_d \times \mathbb{R}$ is the base point; denote this metric on $\Gamma$ by $d_{\Gamma}$. This metric is quasi-isometric to any word metric by the Milnor-\v{S}varc lemma (see \cite{BH}, Proposition 8.19, p. 140), hence it is good enough for our purpose.
By Corollary \ref{c2}, for any $n > 0$, $q$ a prime number greater than $d^{4n^2}$ and $s > q$, each hyper-elementary subgroup
of ${\mathbb{Z}}_{q^s} \rtimes_\alpha {\mathbb{Z}}_{t_s}$ is conjugate to a subgroup H such that one of the following is true:
\\
Case (1): $[{\mathbb{Z}}_{t_s} : \pi(H)] \geq 4n^2$, where $\pi : {\mathbb{Z}}_{q^s} \rtimes_\alpha {\mathbb{Z}}_{t_s} \rightarrow {\mathbb{Z}}_{t_s}$ is the natural epimorphism.\\
Case (2): H is a subgroup of $\{0\}\rtimes_\alpha{\mathbb{Z}}_{t_s}$.\\
We proceed under the assumption that case (1) holds; i.e., we assume $[{\mathbb{Z}}_{t_s} : \pi(H)] = m \geq 4n^2$. Choose $E_H$ to be the real line $\mathbb{R}$ (dimension 1), with the simplicial structure having $m\mathbb{Z}$ as vertex set. There is a $\Gamma$-action on $\mathbb{R}$ given by $(x,k)y = y + k$ for $(x,k) \in {\mathbb{Z} {[{\frac{1}{d}} ]}} \rtimes_\alpha \mathbb{Z}$, $y \in \mathbb{R}$. Note $\Gamma$ does not act simplicially on $\mathbb{R}$, but $\bar{H}$ does. The stabilizer of a vertex or an edge is $\bar{H} \cap ({\mathbb{Z} {[{\frac{1}{d}} ]}} \rtimes_\alpha \{0\})$, hence an abelian group. Define $f_H: \Gamma \rightarrow E_H$ by mapping $(x,k) \in {\mathbb{Z} {[{\frac{1}{d}} ]}} \rtimes_\alpha \mathbb{Z}$ to $k \in \mathbb{R}$; then $f_H$ is an $\bar{H}$-equivariant map. We need to prove that for $g,h \in \Gamma$, if $d_\Gamma(g,h) < n$, then $d^1_{E_H} (f_H(g),f_H(h)) < \frac{1}{n}$.
Let $g = (x_1,k_1)$ and $h = (x_2, k_2)$; then $f_H(g) = k_1$ and $f_H(h) = k_2$, hence $d(f_H(g),f_H(h)) = |k_1 - k_2|$. On the other hand, $d_\Gamma(g,h)= d(g(P_0,0), h(P_0,0)) = d(h^{-1}g(P_0,0),(P_0,0))$ (here $P_0$ is the base point defined in Section \ref{model}). Note $h^{-1}g = (d^{k_2}(x_1 - x_2), k_1 - k_2)$, while $h^{-1}g$ acts on $(P_0, 0)$ as the matrix
$$\left(
\begin{array}{cc}
d^{-(k_1 - k_2)} & d^{k_2}(x_1 - x_2) \\
0 & 1 \\
\end{array}\right) = \left(
\begin{array}{cc}
1 & d^{k_2}(x_1 - x_2) \\
0 & 1 \\
\end{array}\right) \left(
\begin{array}{cc}
d^{-(k_1 - k_2)} & 0 \\
0 & 1 \\
\end{array}\right).$$
By checking the action on $P_0$, one sees that $h^{-1}g$ moves $P_0$ at least distance $|k_1 - k_2|$ away. By Lemma \ref{dd}, $|k_1 - k_2| \leq d(h^{-1}g(P_0,0),(P_0,0)) = d_\Gamma(g,h)$. So $d(f_H(g),f_H(h)) \leq d_\Gamma(g,h)< n$. One can now easily check that $d^1_{E_H} (f_H(g),f_H(h)) < \frac{1}{n}$, since the simplicial structure we put on $\mathbb{R}$ has edge length $m$ with $m \geq 4n^2$. Hence we have completed the proof in this case.\\
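The final estimate can be made explicit: a shift of size $\delta$ inside an edge of length $m$ changes each of the two barycentric coordinates by $\delta/m$, so (under this standard description of the $l^1$-metric on the simplicial line)
$$d^1_{E_H}(f_H(g),f_H(h)) \leq \frac{2|k_1 - k_2|}{m} < \frac{2n}{m} \leq \frac{2n}{4n^2} = \frac{1}{2n} < \frac{1}{n}.$$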
Now we proceed to Case (2).\label{marker} In this case H is a subgroup of $\{0\}\rtimes_\alpha{\mathbb{Z}}_{t_s}$. Denote $\{0\}\rtimes_\alpha{\mathbb{Z}}_{t_s}$ by $K$; then $\bar{K} = \Delta_n^{-1}(K) = ({q^s ~\mathbb{Z} {[{\frac{1}{d}} ]}}) \rtimes_\alpha \mathbb{Z}$ and $\bar{H} \subseteq \bar{K}$. Define a monomorphism $\varphi: \Gamma \rightarrow \Gamma$ by \\*
$\hspace*{20mm} \varphi ((x,k)) =({q^s}x,k)$, for $(x,k) \in {\mathbb{Z} {[{\frac{1}{d}} ]}} \rtimes_\alpha \mathbb{Z} $. \\*
Note that its image is $\bar{K}$; hence ${\varphi}^{-1}:\bar{K} \rightarrow \Gamma $ is well defined. Note also that $\varphi$ can be considered as conjugation by $\left(
\begin{array}{cc}
q^s & 0 \\
0 & 1 \\
\end{array}\right)$ since $\left(
\begin{array}{cc}
q^s & 0 \\
0 & 1 \\
\end{array}\right) \left(
\begin{array}{cc}
d^{-k} & x \\
0 & 1 \\
\end{array}\right) \left(
\begin{array}{cc}
q^{-s} & 0 \\
0 & 1 \\
\end{array}\right) = \left(
\begin{array}{cc}
d^{-k} & q^s x \\
0 & 1 \\
\end{array}\right) $. Define now a self-homeomorphism $F_{q^s}: T_d \times \mathbb{R} \rightarrow T_d \times \mathbb{R}$ by \\*
$\hspace*{20mm} F_{q^s} (z,w) = \left(
\begin{array}{cc}
q^s & 0 \\
0 & 1 \\
\end{array}\right) (z,w)$, for $(z,w) \in T_d\times \mathbb{R}$\\*
where the action of $\left(
\begin{array}{cc}
q^s & 0 \\
0 & 1 \\
\end{array}\right)$ on $(z,w) \in T_d\times \mathbb{R}$ is the diagonal action explained in Section \ref{model}, Remark \ref{act}. In fact, $\left(
\begin{array}{cc}
q^s & 0 \\
0 & 1 \\
\end{array}\right)$ acts on the tree $T_d$ as an isometry which fixes the line $L_0$, while it acts on $\mathbb{R}$ by $\left(
\begin{array}{cc}
q^s & 0 \\
0 & 1 \\
\end{array}\right)~w = {q^s}w$ for $w \in \mathbb{R}$. It is easy to see that $F_{q^s}$ is $\varphi$-semi-equivariant since $(x,k) \in \Gamma$ acts on $T_d \times \mathbb{R}$ as the matrix $\left(
\begin{array}{cc}
d^{-k} & x \\
0 & 1 \\
\end{array}\right)$ and $\left(
\begin{array}{cc}
q^s & 0 \\
0 & 1 \\
\end{array}\right) \left(
\begin{array}{cc}
d^{-k} & x \\
0 & 1 \\
\end{array}\right) = \left(
\begin{array}{cc}
d^{-k} & {q^s}x \\
0 & 1 \\
\end{array}\right)\left(
\begin{array}{cc}
q^s & 0 \\
0 & 1 \\
\end{array}\right)$. Recall $F_{q^s}$ is $\varphi$-semi-equivariant if $F_{q^s} (g(z,w))$ =
$\varphi(g) F_{q^s}((z,w))$ for every $g \in \Gamma$.
Now choose $\lambda$ to be $n$; then by Proposition \ref{lte}, we have a $\mathcal{VC}yc$-cover $\mathcal{U}$ of dimension at most $N$ of $HFS(T_d \times\mathbb{R})$ satisfying the following: there exists $\delta >0$, which depends on $\lambda$, such that for any point $x \in HFS(T_d \times\mathbb{R})$ there exists $U_x \in \mathcal{U}$ with $(\Phi_{[-n, n]}(x))^{\delta} ~\subseteq~ U_x$. We need to define an $\bar{H}$-equivariant map from $T_d \times \mathbb{R}$ to $HFS(T_d \times\mathbb{R})$. Using this map we can pull the $\mathcal{VC}yc$-cover $\mathcal{U}$ on $HFS(T_d \times\mathbb{R})$ back to $T_d \times \mathbb{R}$ and get a $\mathcal{VC}yc$-cover on $T_d \times \mathbb{R}$. Consider the following diagram\\*
$\hspace*{10mm}\begin{CD}
\Gamma @> {\eta} >> T_d \times \mathbb{R} @>{F_{q^s}^{-1}}>> T_d \times \mathbb{R} @> {\Psi_\tau}>> HFS(T_d \times\mathbb{R}) \\
\end{CD}$ \\*
~\\*
where $\eta$ is the inclusion map given by the orbit of the base point, and $\Psi_\tau = \Phi_{\tau} \circ \Psi$ as defined in Remark \ref{remfl}. In order to guarantee that the composition is $\bar{H}$-equivariant, we change the action of $\bar{H}$ on the second $T_d \times \mathbb{R}$ (i.e., the image of $F_{q^s}^{-1}$) by defining $ \bar{h} \bullet (z,w) = {\varphi}^{-1} (\bar{h})(z,w)$, for any $\bar{h} \in \bar{H}$, $(z,w) \in T_d\times \mathbb{R}$. We also define an $\bar{H}$-action on $HFS(T_d \times\mathbb{R})$ by the same method, composing the action of $\Gamma$ with ${\varphi}^{-1}$. Now using the composite ${\Psi_\tau} \circ F_{q^s}^{-1}$, we pull back the cover on $HFS(T_d \times\mathbb{R})$ to $T_d \times \mathbb{R}$ and denote the nerve of this cover by $E(\lambda)$. It is a simplicial $\bar{H}$-complex of dimension at most $N$ whose stabilizers belong to $\mathcal{VC}yc$; they are either trivial or infinite cyclic. Also there is a canonical map from $T_d \times \mathbb{R}$ to $E(\lambda)$ (see for example \cite{BLR2}, Section 4.1); denote this map by $\hat{f}_H$. Now for suitable choices of $\tau$ and $q^s$, $f_H = \hat{f}_H \circ \eta$ will be the map used to make $\Gamma$ a Farrell-Hsiang group with respect to the family of abelian subgroups. In fact we choose $\tau = \ln {n} - \ln{\delta} +n$, and rechoose $s$ so that $s>q$ and $q^s >\bar{N}$, where $\bar{N}$ is determined by Lemma \ref{lpar} with $\epsilon = \frac{\delta^2}{2ne^n}$, $D = 2n$. Note that $d$ is fixed once our group $\Gamma$ is fixed.
Now we begin to prove that with these choices $f_H$ indeed works. Let $x_0,x_1 \in T_d \times \mathbb{R}$ with $d(x_0,x_1) < n$, and let $x_2 =(q_1(x_0), q_2(x_1))$; recall that $q_1$, $q_2$ are the projections from $T_d\times \mathbb{R}$ to $T_d$ and $\mathbb{R}$ respectively. By Lemma \ref{dd} and Corollary \ref{ddc}, we have $d(x_2,x_1) < d(x_0,x_1) < n$ and $d(x_2,x_0) < 2d(x_0,x_1) < 2n$. Applying $F_{q^s}^{-1}$, by Lemma \ref{lpar} and our choice of $\epsilon$ and $D$, $d_{FS(X)}(\Psi_0(F_{q^s}^{-1}(x_0)),\Psi_0(F_{q^s}^{-1}(x_2))) < \epsilon = \frac{\delta^2}{2n{e^{n}}}$. On the other hand, $d(x_2,x_1) = d(F_{q^s}^{-1}(x_2),F_{q^s}^{-1}(x_1))$, which by Lemma \ref{dd} is the distance in the tree $T_d$ between $q_1(x_2)$ and $q_1(x_1)$, since $x_2$ and $x_1$ have the same $\mathbb{R}$-coordinate. For convenience we will denote $F_{q^s}^{-1}(x_i)$ by $\bar{x}_i$, $i = 0,1,2$; then $d(\bar{x}_1,\bar{x}_2) < n$ and $d_{FS(X)}(\Psi_0(\bar{x}_0),\Psi_0(\bar{x}_2)) < \epsilon$.
\begin{figure}
\caption{Two subcases}
\label{f2}
\end{figure}
As shown in Figure \ref{f2}, there are two subcases to consider, depending on the positions of $\bar{x}_1$ and $\bar{x}_2$. Since $\tau = \ln{n} - \ln{\delta} +n$, by Lemma \ref{inq},
$$d_{FS(X)}(\Psi_{\tau}({\bar{x}}_0), \Psi_{\tau}(\bar{x}_2)) \leq e^{|\tau|} d_{FS(X)}(\Psi_0(\bar{x}_0),\Psi_0(\bar{x}_2)) < \frac{{ne^{n}}}{\delta} \cdot \frac{\delta^2}{2n{e^{n}}} = \frac{\delta}{2}.$$
In \textbf{subcase 1}, we first assume $\bar{x}_1$ is ahead of $\bar{x}_2$ as in Figure \ref{f2}. Let $\hat{d} = d(\bar{x}_1, \bar{x}_2)$. By Proposition \ref{lte} there exists an element of the cover $\mathcal{U}$, say $U$, such that $(\Psi_{[\tau - n ,\tau+n]}(\bar{x}_2))^\delta \subseteq U$; hence $\Psi_{\tau +\hat{d}}(\bar{x}_2) \in U$, since $\hat{d} < n$. On the other hand, as generalized geodesics, $\Psi_{\tau + \hat{d}}(\bar{x}_2)$ and $\Psi_{\tau}(\bar{x}_1)$ coincide for $t > -\tau$. Indeed, $\Psi_{\tau + \hat{d}}(\bar{x}_2)$ maps $(-\infty, -\tau - \hat{d}]$ to the point $\bar{x}_2$ and $[-\tau -\hat{d}, \infty)$ isometrically to the geodesic starting at the point $\bar{x}_2$ and ending at the point at infinity $\omega$, while $\Psi_{\tau}(\bar{x}_1)$ maps $(-\infty, -\tau]$ to $\bar{x}_1$ and $[-\tau,\infty)$ isometrically to the geodesic starting at the point $\bar{x}_1$ and ending at the point at infinity $\omega$. Hence
\begin{align*}
d_{FS(X)}(\Psi_{\tau +\hat{d}}(\bar{x}_2),\Psi_\tau(\bar{x}_1)) &= \int_{\mathbb{R}} \frac{d(\Psi_{\tau +\hat{d}}(\bar{x}_2)(t),\Psi_\tau(\bar{x}_1)(t))}{2e^{|t|}}\, dt\\
&= \int_{-\infty}^{-\tau-\hat{d}} \frac{d(\Psi_{\tau +\hat{d}}(\bar{x}_2)(t),\Psi_\tau(\bar{x}_1)(t))}{2e^{|t|}}\, dt + \int_{-\tau - \hat{d}}^{-\tau} \frac{d(\Psi_{\tau +\hat{d}}(\bar{x}_2)(t),\Psi_\tau(\bar{x}_1)(t))}{2e^{|t|}}\, dt\\
&\quad + \int_{-\tau}^{\infty} \frac{d(\Psi_{\tau +\hat{d}}(\bar{x}_2)(t),\Psi_\tau(\bar{x}_1)(t))}{2e^{|t|}}\, dt\\
&< \int_{-\infty}^{-\tau-\hat{d}}\frac{d(\bar{x}_2, \bar{x}_1 )}{2e^{-t}}\, dt + \int_{-\tau-\hat{d}}^{-\tau} \frac{d(\bar{x}_2, \bar{x}_1)}{2e^{-t}}\, dt + 0\\
&= \int_{-\infty}^{-\tau-\hat{d}}\frac{\hat{d}}{2e^{-t}}\, dt + \int_{-\tau-\hat{d}}^{-\tau} \frac{\hat{d}}{2e^{-t}}\, dt\\
&= \frac{1}{2}\hat{d}\, e^{-\tau} \leq \frac{n}{2e^{\tau}} = \frac{n}{2e^{\ln{n} - \ln{\delta} +n}} = \frac{\delta}{2{e^{n}}} < \frac{\delta}{2}.
\end{align*}
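For clarity, the two surviving integrals combine over $(-\infty,-\tau]$ and evaluate in closed form, since $1/e^{-t} = e^{t}$ on that interval:
\[
\int_{-\infty}^{-\tau-\hat{d}} \frac{\hat{d}}{2}\,e^{t}\,dt \;+\; \int_{-\tau-\hat{d}}^{-\tau} \frac{\hat{d}}{2}\,e^{t}\,dt \;=\; \frac{\hat{d}}{2}\int_{-\infty}^{-\tau} e^{t}\,dt \;=\; \frac{\hat{d}}{2}\,e^{-\tau}.
\]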
Therefore, we have proven that $\Psi_{\tau}(F_{q^s}^{-1}(x_0)),\Psi_{\tau}(F_{q^s}^{-1}(x_1)) \in (\Psi_{[\tau-n ,\tau+n]}(\bar{x}_2))^\delta \subseteq U$. If instead $\bar{x}_2$ is ahead of $\bar{x}_1$, then the same calculation shows that
$$d_{FS(X)}(\Psi_{\tau +\hat{d}}(\bar{x}_1),\Psi_\tau(\bar{x}_2)) < \frac{\delta}{2} $$
Consequently
$$d_{FS(X)}(\Psi_{\tau +\hat{d}}(\bar{x}_1),\Psi_\tau(\bar{x}_0)) \leq d_{FS(X)}(\Psi_{\tau +\hat{d}}(\bar{x}_1),\Psi_\tau(\bar{x}_2)) + d_{FS(X)}(\Psi_{\tau}({\bar{x}}_0), \Psi_{\tau}(\bar{x}_2)) < \frac{\delta}{2} + \frac{\delta}{2} = \delta.$$
Therefore, $\Psi_{\tau}(F_{q^s}^{-1}(x_0)),\Psi_{\tau}(F_{q^s}^{-1}(x_1)) \in (\Psi_{[\tau-n ,\tau+n]}(\bar{x}_1))^\delta $, which is contained in some member of the cover $\mathcal{U}$ by Proposition \ref{lte}. Hence if we pull back the cover $\mathcal{U}$ via $\Psi_{\tau}\circ F_{q^s}^{-1}$, we get a cover $\mathcal{V}$ of $T_d \times \mathbb{R}$, and $x_0$ and $x_1$ lie in the same member of $\mathcal{V}$.
For \textbf{subcase 2}, note that $\Psi(\bar{x}_1)$ and $\Psi(\bar{x}_2)$, as generalized geodesics in $T_d \times \mathbb{R}$, will meet; suppose $y \in T_d \times \mathbb{R}$ is the first point where they meet. Let $d_i = d(\bar{x}_i,y)$ for $i=1,2$; note $d_1 + d_2 < n$. We can assume $d_1 \geq d_2$, and let $\hat{d} = d_1 - d_2$, so $\hat{d} < n$. Now we are almost in the same situation as \textbf{subcase 1}. The generalized geodesics $\Psi_{\tau}(\bar{x}_1)$ and $\Psi_{\tau - \hat{d}}(\bar{x}_2)$ both reach $y$ at time $-\tau + d_1$, hence coincide for $t > -\tau + d_1$, and the same calculation shows that $d_{FS(X)}(\Psi_{\tau}(\bar{x}_1),\Psi_{\tau - \hat{d}}(\bar{x}_2)) < \frac{n}{2e^{\tau - d_1}} = \frac{n}{{2\frac{{ne^{n}}}{\delta}} e^{-d_1}} < \frac{\delta}{2}$, since $d_1 < n$. Therefore, \\
since $\hat{d} < n$, we have $\Psi_{\tau - \hat{d}}(\bar{x}_2) \in \Psi_{[\tau-n ,\tau+n]}(\bar{x}_2)$, so $\Psi_{\tau}(F_{q^s}^{-1}(x_1)) \in (\Psi_{[\tau-n ,\tau+n]}(\bar{x}_2))^\delta$; combined with $d_{FS(X)}(\Psi_{\tau}({\bar{x}}_0), \Psi_{\tau}(\bar{x}_2)) < \frac{\delta}{2}$, this gives $\Psi_{\tau}(F_{q^s}^{-1}(x_0)),\Psi_{\tau}(F_{q^s}^{-1}(x_1)) \in (\Psi_{[\tau-n ,\tau+n]}(\bar{x}_2))^\delta \subseteq U$ for some element $U$ of the cover $\mathcal{U}$, by Proposition \ref{lte}. Consequently, if we pull back the cover $\mathcal{U}$, $x_0$ and $x_1$ lie in the same member of the new cover $\mathcal{V}$ of $T_d \times \mathbb{R}$.\\
So far, we have proved that there exists a $\mathcal{VC}yc$-cover $\mathcal{V}$ of $T_d \times \mathbb{R}$ such that, given any $x_0, x_1 \in T_d \times \mathbb{R}$ with $d(x_0,x_1) < n$, there exists a member of this cover containing both $x_0$ and $x_1$. In fact, our proof shows that any ball of radius less than $\frac{n}{2}$ lies in some element of the cover. Hence we can apply the following lemma from \cite{BLR2}, Proposition 5.3, page 47. \label{marker2}
\begin{lem}\label{cont}
Let $X = (X,d)$ be a metric space and $\beta \geq 1$. Suppose $\mathcal{U}$ is an open cover of $X$ of dimension less than or equal to $N$ with the following property:
\begin{center}
\emph{For every $x\in X$, there exists $U \in \mathcal{U}$ such that the $\beta$-ball around $x$ lies in $U$.}
\end{center}
If we denote the nerve of the cover by $N(\mathcal{U})$, then the canonical map $\rho: X \rightarrow N(\mathcal{U}) $ has the following contracting property: if $d(x,y) \leq \frac{\beta}{4N}$, then
$$d^1(\rho(x), \rho(y)) \leq \frac{16N^2}{\beta} d(x,y).$$
\end{lem}
By Proposition \ref{lte}, the dimension of the cover is at most a fixed number $N$ in our situation. For any $g_0,g_1 \in \Gamma$ such that $d(g_0,g_1) < n$, choose $\beta = 16N^2 n^2$; i.e., in the arguments of Case (2) from page \pageref{marker} to page \pageref{marker2} before Lemma \ref{cont}, replace $n$ by $\bar{n} = 32N^2 n^2$, so that any ball of radius $16N^2 n^2$ lies in some element of the cover. Then $d(g_0,g_1) < n < 4Nn^2= \frac{\beta}{4N}$, so by Lemma \ref{cont}, $d^1(f_H(g_0), f_H(g_1)) \leq \frac{16N^2}{\beta}\, d(g_0,g_1) = \frac{16N^2}{16 N^2 n^2}\, d(g_0,g_1) \leq \frac{1}{n^2}\, n = \frac{1}{n}$. This finishes the proof.
\section{Further results} \label{fur}
In this section we extend our results on the Farrell-Jones conjecture to more general groups. As in Section \ref{gts}, we denote the unit group of a ring $R$ by $U(R)$.
Let $\hat{\Gamma}$ be the following matrix subgroup of $GL_2(\mathbb{Z}[\frac{1}{p}])$, where $p$ is a prime number:
$$\hat{\Gamma} = \left\{ \left(
\begin{array}{cc}
\pm p^{-k} & x \\
0 & 1 \\
\end{array}\right) ~\middle|~ k \in \mathbb{Z} ~\text{and}~ x \in \mathbb{Z}[\tfrac{1}{p}] \right\}.$$
Note that the Baumslag-Solitar group $\Gamma = {\mathbb{Z} {[{\frac{1}{p}} ]}} \rtimes \mathbb{Z}$ we studied before is a normal subgroup of index $2$ in $\hat{\Gamma}$,
and that $\hat{\Gamma} = Aff(\mathbb{Z}[\frac{1}{p}])$; i.e., $\hat{\Gamma}$ is the full affine group of the ring $\mathbb{Z}[\frac{1}{p}]$. More precisely, $\hat{\Gamma}$ is isomorphic to $ {\mathbb{Z} {[{\frac{1}{p}} ]}} \rtimes U(\mathbb{Z} {[{\frac{1}{p}} ]}) $, where $U(\mathbb{Z} {[{\frac{1}{p}} ]}) = \{ \pm p^{-k} ~|~ k \in \mathbb{Z}\} \cong \mathbb{Z} \oplus \mathbb{Z}_2$. Note that virtually cyclic groups are virtually abelian, and the Farrell-Jones conjecture is true for virtually abelian groups. Hence, by the transitivity principle, if we can prove that $\hat{\Gamma}$ is a Farrell-Hsiang group with respect to the family of virtually abelian subgroups, then the Farrell-Jones conjecture is true for $\hat{\Gamma}$. By checking our proof for $\Gamma$, one sees that the only part we need to take care of is Section \ref{gts}.
For $q$ a prime number much bigger than $p$, define $U_{q^s}$ to be the subgroup of $U(\mathbb{Z}_{q^s})$ generated by $-1$ and $p$. Since $q$ is an odd prime, $U(\mathbb{Z}_{q^s})$ is a cyclic group (see, for example, \cite{KR}, Chapter 4); hence $U_{q^s}$ is also cyclic. Denote its generator by $d \in U(\mathbb{Z}_{q^s})$ and its order by $t'_s$. Then $t'_s = t_s$ or $2t_s$, where $t_s = {t_1} q^{k_s}$ is the order of $p$ in $U(\mathbb{Z}_{q^s})$ as defined in Section \ref{gts}. Hence $t'_1$, the order of $d$ in $U(\mathbb{Z}_{q})$, equals either $t_1$ or $2t_1$, and $t'_s = {t'_1} q^{k_s}$. Note that $U_{q^s}$, as a subgroup of $U(\mathbb{Z}_{q^s})$, has a canonical action on $\mathbb{Z}_{q^s}$ via multiplication; hence the semidirect product $\mathbb{Z}_{q^s} \rtimes U_{q^s}$ is well defined, and there is a quotient homomorphism $ \Delta_n : {\mathbb{Z} {[{\frac{1}{p}} ]}} \rtimes U(\mathbb{Z} {[{\frac{1}{p}} ]}) \rightarrow \mathbb{Z}_{q^s} \rtimes U_{q^s}$.
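As a concrete sanity check of the relation $t'_1 \in \{t_1, 2t_1\}$, one can compute both orders directly. The sketch below uses small illustrative parameters $p = 2$, $q = 7$ chosen only for readability (the argument in the text of course requires $q$ much larger than $p$):

```python
def order(a, m):
    # multiplicative order of a modulo m (a assumed coprime to m)
    k, x = 1, a % m
    while x != 1:
        x = (x * a) % m
        k += 1
    return k

def subgroup_generated(gens, m):
    # the subgroup of U(Z_m) generated by gens, as a set of residues
    s, frontier = {1}, [1]
    while frontier:
        x = frontier.pop()
        for g in gens:
            y = (x * g) % m
            if y not in s:
                s.add(y)
                frontier.append(y)
    return s

p, q = 2, 7
t1 = order(p, q)                                    # order of p in U(Z_q)
t1_prime = len(subgroup_generated([q - 1, p], q))   # |U_q| = |<-1, p>|
print(t1, t1_prime)  # here t1 = 3 and t1_prime = 6 = 2*t1
```

For $q = 7$ the element $-1$ is not a power of $p = 2$, so $t'_1 = 2t_1$; for other primes the two orders can coincide, matching the dichotomy stated above.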
\begin{lem}\label{gtr}
For any given integer $n >0$, let $q$ be a prime greater than $p^n$ and $s > q$. Then every hyper-elementary subgroup $H$
of $\mathbb{Z}_{q^s} \rtimes U_{q^s} = \mathbb{Z}_{q^s} \rtimes_\alpha {\mathbb{Z}}_{t'_s}$ is conjugate to a subgroup $\bar{H}$ such that one of the following is true:
\\
(1) the index $[{\mathbb{Z}}_{t'_s} : \pi'(\bar{H})] \geq n$, where $\pi' : {\mathbb{Z}}_{q^s} \rtimes_\alpha {\mathbb{Z}}_{t'_s} \rightarrow {\mathbb{Z}}_{t'_s}$ is the natural epimorphism;\\
(2) $\bar{H}$ is a subgroup of $\{0\}\rtimes_\alpha{\mathbb{Z}}_{t'_s}$, and $q^s \geq n$.
\end{lem}
\begin{rem}
If we write $\mathbb{Z}_{q^s} \rtimes U_{q^s} $ as $ \mathbb{Z}_{q^s} \rtimes_\alpha {\mathbb{Z}}_{t'_s}$, then $\alpha$ is multiplication by $d$, where $d \in U(\mathbb{Z}_{q^s})$ generates $U_{q^s}$.
\end{rem}
\noindent{\bf Proof}\quad $\mathbb{Z}_{t_1}$ is a subgroup of index $1$ or $2$ in $\mathbb{Z}_{t'_1}$, according to whether $t'_1 = t_1$ or $t'_1 = 2 t_1$. Likewise, $\mathbb{Z}_{t_s}$ is a subgroup of index $1$ or $2$ in $\mathbb{Z}_{t'_s}$, again according to whether $t'_1 = t_1$ or $t'_1 = 2 t_1$.
Let $H' = H \cap (\mathbb{Z}_{q^s} \rtimes {\mathbb{Z}}_{t_s})$; then $H'$ is a subgroup of index $1$ or $2$ in $H$. If the index is $1$, then we are done by Corollary \ref{c2}. Hence we can assume the index is $2$; note that $H'$ is also a hyper-elementary group in this case. Therefore, by Corollary \ref{c2}, either
\begin{itemize}
\item [(1')] $[{\mathbb{Z}}_{t_s} : \pi'(H')] \geq n$, or
\item[(2')] $H'$ can be conjugate to a subgroup of $\{0\}\rtimes_\alpha {\mathbb{Z}}_{t_s} \subseteq \mathbb{Z}_{q^s} \rtimes _\alpha {\mathbb{Z}}_{t_s}\subseteq \mathbb{Z}_{q^s} \rtimes_\alpha {\mathbb{Z}}_{t'_s}$.
\end{itemize}
It is clear that (1') implies (1). Therefore we assume (2') occurs, and hence (after this conjugation) we may also assume that $H'$ is a subgroup of $\{0\}\rtimes_\alpha{\mathbb{Z}}_{t_s}$. Therefore $\ker(\pi'|_H)$ is either $\{0\}$ or $\mathbb{Z}_2$. But the order of $\ker(\pi'|_H)$ has to divide $q^s$, so $\ker(\pi'|_H) = \mathbb{Z}_2$ is impossible. Hence $\pi' :H \rightarrow \hat{H} := \pi'(H) \subseteq {\mathbb{Z}}_{t'_s} $ is an isomorphism, so $H$ is a cyclic group with generator, say, $(a,b) \in \mathbb{Z}_{q^s} \rtimes_\alpha {\mathbb{Z}}_{t'_s}$. Now Lemma \ref{conj} implies that if $t'_1 \nmid b$, then $H$ can be conjugated to a subgroup of $\{0\}\rtimes_\alpha{\mathbb{Z}}_{t'_s}$. If $t'_1 ~|~ b$, then the order of $b$ divides $q^{k_s}$. But the order of $b$ is the same as $|H|$, and $2~|~ |H|$, which is a contradiction.
$\square$
With this lemma in hand, one can now check that our methods for proving the Farrell-Jones conjecture for $\Gamma$ extend to $\hat{\Gamma}$.
\begin{prop} \label{aff}
The K- and L-theoretic Farrell-Jones conjecture is true for $\hat{\Gamma} = Aff(\mathbb{Z} {[{\frac{1}{p}} ]})$.
\end{prop}
\begin{cor}
The K- and L-theoretic Farrell-Jones conjecture is true for every semi-direct product $\mathbb{Z} {[{\frac{1}{p}} ]} \rtimes C$, where $p$ is any prime number and $C$ is any virtually cyclic group.
\end{cor}
In order to prove this corollary we need the following result from \cite{BFL}, Theorem 1.7, which generalizes \cite{FJ}, Proposition 2.2.
\begin{lem} \label{exp}
Let $1 \rightarrow K \rightarrow G \xrightarrow{\psi} Q \rightarrow 1$ be an exact sequence of groups. Suppose that the group $Q$ and, for every virtually cyclic subgroup $V \subseteq Q$, the group $\psi^{-1}(V)$ satisfy the K-theoretic Farrell-Jones conjecture. Then $G$ satisfies the K-theoretic Farrell-Jones conjecture. The same is true for the L-theoretic Farrell-Jones conjecture.
\end{lem}~\\*
\noindent{\bf Proof of the Corollary}\quad The group $C$ acts on $\mathbb{Z} {[{\frac{1}{p}} ]}$ by some representation (homomorphism)\\*
$\hspace*{25mm} \varphi : C \rightarrow Aut(\mathbb{Z} {[{\frac{1}{p}} ]}) = U(\mathbb{Z} {[{\frac{1}{p}} ]}) ~=~\{\pm p^{-k}~|~ k \in \mathbb{Z} \}.$\\*
Hence there is a canonical homomorphism \\*
$\hspace*{25mm} \bar{\varphi} : \mathbb{Z} {[{\frac{1}{p}} ]} \rtimes_{\varphi} C \rightarrow \mathbb{Z} {[{\frac{1}{p}} ]} \rtimes U(\mathbb{Z} {[{\frac{1}{p}} ]}) $\\*
whose kernel $K$ is a subgroup of $C$ and therefore also virtually cyclic. Consider the following exact sequence\\*
$\hspace*{20mm}\begin{CD}
1 @>>> K @>>> \mathbb{Z} {[{\frac{1}{p}} ]} \rtimes_{\varphi} C @>\bar{\varphi}>> I @>>> 1 \\ \end{CD}$ \\*
~\\*
where $I = \mathrm{image}(\bar{\varphi})$. To prove the corollary, we apply Lemma \ref{exp} to this exact sequence. First, by Theorem 1.8 of \cite{BFL}, the group $I$ satisfies the K- and L-theoretic Farrell-Jones conjecture, since its overgroup $\mathbb{Z} {[{\frac{1}{p}} ]} \rtimes U(\mathbb{Z}{[{\frac{1}{p}} ]}) $ does by Proposition \ref{aff}. Therefore, by Lemma \ref{exp}, it remains to show that, for each virtually cyclic subgroup $V$ of $I$, the group ${\bar{\varphi}}^{-1}(V)$ also satisfies the K- and L-theoretic Farrell-Jones conjecture. But this follows from Theorem 0.1 of \cite{BFL}, since ${\bar{\varphi}}^{-1}(V)$ is a virtually poly-$\mathbb{Z}$ group: indeed, ${\bar{\varphi}}^{-1}(V)$ is an extension of $V$ by $K$, which are both virtually poly-$\mathbb{Z}$ groups, hence by Lemma 4.2 (v) in \cite{BFL}, ${\bar{\varphi}}^{-1}(V)$ is a virtually poly-$\mathbb{Z}$ group.
$\square$
\begin{rem}
The groups $\mathbb{Z} {[{\frac{1}{p}} ]} \rtimes C$ (where the image of $\varphi$ is infinite) form an interesting subclass of the nearly crystallographic groups defined in \cite{FL}, page 309. The importance of the class of nearly crystallographic groups lies in Theorem 1.2 of \cite{FL}, where it is shown that the fibered isomorphism conjecture for the stable topological pseudo-isotopy functor holds for all virtually solvable groups provided it holds for the much smaller class consisting of all nearly crystallographic groups. And the truth of the fibered isomorphism conjecture for a torsion-free group $G$ implies that $Wh(G) = 0$.
\end{rem}
\begin{center}
{\bf APPENDIX}
\end{center}
\appendix
Let $G_d$ be the group
$$G_d = \left\{ \left(
\begin{array}{cc}
d^n \cdot \frac{s_1}{s_2} & b \\
0 & 1 \\
\end{array}\right) ~\middle|~ n \in \mathbb{Z},~ s_1, s_2 ~\text{nonzero integers coprime to}~ d,~ b \in \mathbb{Q} \right\},$$
and let $T_d$ be the oriented infinite, regular, $(d+1)$-valent tree with edge length $1$ as in Section \ref{model}. In this appendix we explain how the group $G_d$ acts on the tree $T_d$, where $d$ is a positive integer.
If $d$ is a prime, the action is well known, as we explained in Section \ref{model}. For more information, see for example \cite{Se}, Chapter II, Section 1.
We now assume $d$ is a power of a prime, say $d = d_1^j$ for some prime $d_1$. Since $G_{d_1^j}$ is a subgroup of $G_{d_1}$, the group $G_{d_1^j}$ also acts on $T_{d_1}$. The following definition of Stallings' folding is taken from \cite{BMF}.
\begin{defn}
Let $T$ be a $G$-tree (recall that this means, in particular, that there are no inversions).
Consider two edges $e_1$ and $e_2$ in $T$ that are incident to a common vertex $v$, and let
$\phi : e_1 \rightarrow e_2$ denote the linear homeomorphism fixing $v$. Then define an equivalence
relation ``$\sim$'' on $T$ as the smallest equivalence relation such that:\\
(i) $x \sim \phi(x)$ for all $x \in e_1$, and\\
(ii) if $x \sim y$ and $g \in G$, then $g(x) \sim g(y)$.\\
The quotient space $T/{\sim}$ is a simplicial tree with a natural simplicial action of $G$. It
might happen that $G$ acts with inversions on $T/{\sim}$, in which case we introduce
a new equivalence class of vertices to obtain a $G$-tree. Call the quotient map
$T \rightarrow T/{\sim}$ a fold.
\end{defn}
The key point is that after the folding, $G$ has an induced action on the new tree.
Now we take the tree to be $T_p$, the oriented infinite regular $(p+1)$-valent tree we defined before; cf. Figure \ref{tree} in Section \ref{model} for $T_2$. We first explain how the action of $G_{p^2}$ on $T_p$ induces an action on the tree $T_{p^2}$. Let $P_0P_1$ be an edge in the specified horizontal line $L_0$ of the tree $T_p$ such that $f_p(P_0) = 0$ and $f_p(P_1) = -1$; see Section \ref{model} for the terminology here. The matrix ${\left(
\begin{array}{cc}
1 & d^{-1} \\
0 & 1 \\
\end{array}\right)} $ fixes $P_{1}$ and acts on the $p$ edges going towards $P_1$ by cyclic permutation. We will denote the image of $P_0$ under this matrix by $Q_1$. The folding map we are going to use is $\phi: P_0P_1 \rightarrow Q_1P_1$, which fixes $P_1$. Since the matrix acts as a cyclic permutation on the $p$ edges going towards $P_1$, all $p$ of these edges are identified after the folding. Therefore, after the folding there will be $p^2$ edges going towards $P_0$. Moreover, $G_{p^2}$ is generated by matrices of the following three forms:
\begin{itemize}
\item
type I: ${\left(
\begin{array}{cc}
p^{2n} & 0 \\
0 & 1 \\
\end{array}\right)} $, where $n \in \mathbb{Z}$;
\item
type II: ${\left(
\begin{array}{cc}
\frac{s_1}{s_2} & 0 \\
0 & 1 \\
\end{array}\right)} $, where $s_1,s_2$ are positive integers that are coprime to $p$;
\item
type III: ${\left(
\begin{array}{cc}
1 & b \\
0 & 1 \\
\end{array}\right)} $, where $b \in \mathbb{Q}$.
\end{itemize}
Matrices of type I act as translations on $T_p$, changing the Busemann function by an even number, while matrices of type II and III leave the Busemann function unchanged. One sees now that the resulting tree is almost $T_{p^2}$, except that it has some 2-valent vertices. We delete these 2-valent vertices, and the group $G_{p^2}$ has an induced action on the new tree, which is $T_{p^2}$. Figure \ref{rep} shows the resulting tree $T_2/{\sim}$, which is homeomorphic to $T_4$.
\begin{figure}
\caption{$T_2$ after Stallings' folding}
\label{rep}
\end{figure}
If we delete all the $2$-valent vertices in the resulting tree, it will be exactly $T_{4}$; in Figure \ref{rep}, for example, the vertices $P_{2k+1}$ will be deleted.
One can further use this to get the action of $G_{p^l}$ on $T_{p^l}$ from the action of $G_{p^l}$ on $T_{p}$ by applying Stallings' folding $l-1$ times.
If $d$ is not a power of a prime, let $d_1^{j_1} d_2^{j_2} \ldots d_m^{j_m}$ be its prime factorization. Note that $G_d$ is a subgroup of $G_{d_l^{j_l}}$ for $1 \leq l \leq m$; hence it acts on each $T_{d_l^{j_l}}$, and therefore it acts diagonally on the product space $Y_d = T_{d_1^{j_1}} \times T_{d_2^{j_2}} \times \ldots \times T_{d_m^{j_m}}$. Now consider the ``diagonal subspace''
$$\{y = (y_1,y_2,\ldots, y_{m})\in Y_{d}~ |~ f_{{d_l}^{j_l}}(y_l) = f_{{d_{l'}}^{j_{l'}}}(y_{l'}) ~\text{for any}~ 1\leq l, l'\leq m \},$$
where $f_{{d_l}^{j_l}}$ is the Busemann function on the corresponding $T_{{d_l}^{j_l}}$ we defined before; see Remark \ref{bus}. The diagonal subspace is invariant under $G_d$; hence $G_d$ has an induced action on it. It is not too hard to show that this subspace is homeomorphic to the tree $T_d$ we defined before; we will identify the two. Therefore we have an induced action of $G_d$ on $T_d$. There is a natural Busemann function on $T_d$ defined by $f_{d}(y) = f_{d_1^{j_1}}(y_1)$, which is the same as the one defined before; see Remark \ref{bus}.
~\\
Tom Farrell\\
DEPARTMENT OF MATHEMATICS, SUNY BINGHAMTON, NY 13902, U.S.A.\\
E-mail address: [email protected]\\
Xiaolei Wu\\
DEPARTMENT OF MATHEMATICS, SUNY BINGHAMTON, NY 13902, U.S.A.\\
E-mail address: [email protected]
\end{document} |
\begin{document}
\title{Long distance co-propagation of quantum key distribution and terabit classical optical data channels}
\author{Liu-Jun Wang}
\affiliation{Hefei National Laboratory for Physical Sciences at Microscale and Department of Modern Physics, University of Science and Technology of China, Hefei, Anhui 230026, China\\
}
\affiliation{CAS Center for Excellence and Synergetic Innovation Center in Quantum Information and Quantum Physics, University of Science and Technology of China, Hefei, Anhui 230026, China}
\author{Kai-Heng Zou}
\affiliation{State Key Laboratory of Advanced Optical Communication Systems and Networks, Peking University, Beijing 100871, China}
\author{Wei Sun}
\author{Yingqiu Mao}
\affiliation{Hefei National Laboratory for Physical Sciences at Microscale and Department of Modern Physics, University of Science and Technology of China, Hefei, Anhui 230026, China\\
}
\affiliation{CAS Center for Excellence and Synergetic Innovation Center in Quantum Information and Quantum Physics, University of Science and Technology of China, Hefei, Anhui 230026, China}
\author{Yi-Xiao Zhu}
\affiliation{State Key Laboratory of Advanced Optical Communication Systems and Networks, Peking University, Beijing 100871, China}
\author{Hua-Lei Yin}
\affiliation{Hefei National Laboratory for Physical Sciences at Microscale and Department of Modern Physics, University of Science and Technology of China, Hefei, Anhui 230026, China\\
}
\affiliation{CAS Center for Excellence and Synergetic Innovation Center in Quantum Information and Quantum Physics, University of Science and Technology of China, Hefei, Anhui 230026, China}
\author{Qing Chen}
\author{Yong Zhao}
\affiliation{QuantumCTek Co., Ltd., Hefei, Anhui 230088, China}
\author{Fan Zhang}
\email{[email protected]}
\affiliation{State Key Laboratory of Advanced Optical Communication Systems and Networks, Peking University, Beijing 100871, China}
\author{Teng-Yun Chen}
\email{[email protected]}
\author{Jian-Wei Pan}
\email{[email protected]}
\affiliation{Hefei National Laboratory for Physical Sciences at Microscale and Department of Modern Physics, University of Science and Technology of China, Hefei, Anhui 230026, China\\
}
\affiliation{CAS Center for Excellence and Synergetic Innovation Center in Quantum Information and Quantum Physics, University of Science and Technology of China, Hefei, Anhui 230026, China}
\begin{abstract}
Quantum key distribution (QKD) generates symmetric keys between two remote parties and guarantees that the keys are not accessible to any third party. Wavelength division multiplexing (WDM) of QKD and classical optical communications over the existing fibre optics infrastructure is highly desirable in order to reduce the cost of QKD applications. However, quantum signals are extremely weak and thus easily affected by spontaneous Raman scattering from the intense classical light. Here, by means of wavelength selection together with spectral and temporal filtering, we realize the multiplexing and long distance co-propagation of QKD and a Terabit classical coherent optical communication system over up to 80 km. The data capacity is two orders of magnitude larger than in previous results. Our demonstration verifies the feasibility of QKD and classical communication sharing the resources of backbone fibre links, thereby taking the utility of QKD a great step forward.
\end{abstract}
\maketitle
\section{Introduction}
Quantum key distribution (QKD) \cite{Bennett84,EKERT91,Gisin02} supplies information-theoretic security \cite{Scarani09} based on the principles of quantum mechanics. Since its introduction in 1984 \cite{Bennett84}, QKD has undergone dramatic progress from point-to-point experiments \cite{Takesue07,Stucki09,Dixon10,Liu10,Wang12,Tanaka12,korzh2015provably} to QKD network implementations \cite{Peev09,Chen10,Sasaki11,Froehlich13}. Due to their extremely low intensity, conventional QKD signals require an exclusive fibre link dedicated to their transmission. To avoid the high cost of laying extra fibre, the integration of QKD with conventional telecom fibre channels is of great importance. One popular solution is to multiplex QKD with classical optical channels through wavelength division multiplexing (WDM), which was first realized by Townsend in 1997 \cite{Townsend97} and further extended to optical access and metropolitan networks with moderate classical bit rates of gigabits per second \cite{Chapuran09,Choi11,Choi10,Peters09,lancho2009qkd,mora2012simultaneous,Aleksic13,
Ciurana14,aleksic2015perspectives}. So far, QKD has not been multiplexed into high data capacity optical backbone links. In this paper, we demonstrate for the first time that QKD can be deployed in Terabit classical optical communication environments over a long distance fibre link of up to 80 km, which shows the feasibility of integrating QKD with the classical telecom backbone infrastructure.
Classical backbone links are characterized by long distances and high throughputs. For instance, a typical span distance is 80 km, and the communication capacity of one fibre link reaches the magnitude of Terabits per second (Tbps). Unfortunately, the highest experimental and field-trial record of classical data channel bandwidth co-propagating with QKD is 40 Gigabits per second (Gbps) \cite{Patel14,choi2014field}. In fact, from the simulation results of Patel \textit{et al}. \cite{Patel14}, when the quantum signal wavelength is located in the C-band (1530 - 1565 nm), the maximum achievable bandwidth of the data channels was predicted to be 140 Gbps. This is because, as the bandwidth increases, the classical light launch power also increases, resulting in stronger spontaneous Raman scattering noise and linear crosstalk induced by the classical light, which are the main obstacles to the WDM integration of QKD and classical optical data channels.
Besides, in previous experiments the classical communication generally used on-off keying (OOK) modulation, in which the intensity is modulated and detected directly by photodiodes. With this kind of modulation, a bit rate of 1 Gbps typically corresponds to a launch power of 0 dBm (1 mW). As the bit rate is basically proportional to the launch power, 1 Tbps OOK data communication would require 30 dBm of classical light, which would result in unacceptably severe Raman scattering noise. Fortunately, Tbps classical data channels are currently implemented by coherent optical communication combined with M-ary quadrature amplitude modulation (QAM) formats. By using 16-QAM and 64-QAM in our experiment, the classical optical power is about 10 dBm at the Tbps level, which makes QKD multiplexing possible. We note that high-order QAM requires a higher optical signal to noise ratio (OSNR) than OOK modulation. A low launch power leads to a worse OSNR, while a high launch power results in severe fibre nonlinear distortions that deteriorate the signal quality. Therefore, for a specific transmission distance, there exists an optimum launch power as a trade-off to balance the influence of noise and nonlinear interference.
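The power figures quoted here follow from simple decibel arithmetic; a minimal sketch is given below (the linear rate-to-power scaling for OOK is the rule of thumb stated in the text, not an exact law):

```python
import math

def dbm_to_mw(p_dbm):
    # convert a power level in dBm to milliwatts
    return 10 ** (p_dbm / 10)

def mw_to_dbm(p_mw):
    # convert a power in milliwatts to dBm
    return 10 * math.log10(p_mw)

# Rule of thumb from the text: 1 Gbps OOK ~ 0 dBm (1 mW),
# with launch power scaling roughly linearly in the bit rate.
def ook_launch_dbm(bit_rate_gbps):
    return mw_to_dbm(bit_rate_gbps * dbm_to_mw(0.0))

print(ook_launch_dbm(1))     # 0 dBm at 1 Gbps
print(ook_launch_dbm(1000))  # 30 dBm at 1 Tbps
```

The 20 dB gap between 30 dBm (hypothetical 1 Tbps OOK) and the roughly 10 dBm actually used with 16-/64-QAM coherent channels is what makes the multiplexing feasible.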
There are several points to consider when multiplexing QKD with such high capacity classical optical communication. Firstly, we need to suppress the in-band noise that has the same wavelength as our quantum signals, which comes from the background fluorescence of the classical light source and the amplified spontaneous emission noise generated by erbium-doped fibre amplifiers (EDFAs). Secondly, we require a high degree of isolation to reduce the out-of-band noise, which corresponds to the probability of classical light being detected by the single-photon detectors at the QKD receiver site. These two kinds of noise are proportional to the incident light power and are generally referred to as linear crosstalk. In fact, the main challenge of WDM comes from the spontaneous Raman scattering of the intense classical light \cite{auyeung1978spontaneous,Subacius05,xavier2009scattering,da2014impact}.
Here, we show that through wavelength selection and sharp optical filtering, the multiplexing of QKD and Terabit classical data channels can be successfully achieved. Such coexistence leverages the existing backbone fibre cables, offering large cost savings over deploying dedicated quantum links.
\section{Raman noise and secure key rates at 1550.12 \lowercase{nm} and 1310 \lowercase{nm}}
In order to quantify the impact of Raman scattering on QKD, we first need to determine the classical signal wavelength $\lambda_c$ and the quantum signal wavelength $\lambda_q$. Commercial dense WDM (DWDM) technology usually uses the C-band, which has relatively low fibre loss. Note that EDFAs are generally necessary to compensate for the fibre link attenuation. In contrast, the quantum signals cannot be amplified in principle because of the no-cloning theorem \cite{wootters1982single,dieks1982communication}. Therefore, in previous point-to-point WDM experiments, $\lambda_c$ was usually chosen in the C-band, while $\lambda_q$ was located either in the C-band because of its low fibre loss \cite{xia2006band,Eraerds10,Patel12,walenta2014fast,wang2015experimental}, or in the O-band (1260 - 1360 nm) because of its low Raman noise \cite{runser2005demonstration,Nweke05,Chapuran09,Choi11}. Hence, we need to consider the two factors together to determine the appropriate quantum signal wavelength in different classical optical communication environments.
We measure the forward Raman scattering noise, which propagates in the same direction as the incident light, at both 1550.12 nm and 1310 nm using an InGaAs avalanche photodiode (APD) based single-photon detector, operating at 1.25 GHz with a 180-ps full width at half maximum (FWHM) gate width. Figure~\ref{fig:spectrum1} shows the count rate of the Raman noise generated by a continuous wave laser source tuned from 1530 nm to 1570 nm and launched with a power of 6 dBm. We note that in the $\lambda_q$=1550.12 nm configuration, the QKD receiver used a 20-GHz fibre Bragg grating (FBG) to filter the Raman noise, which induces an extra loss of 3.2 dB, while in the $\lambda_q$=1310 nm configuration, a bandpass filter with a center wavelength of 1310.0 nm, a passband width of 100 GHz, and an insertion loss of 0.5 dB was used. The Raman noise at 1550.12 nm strongly depends on the incident light wavelength: the count rate is 440.4 kilo counts per second (kcps) on average between 1550.12 $\pm$ 3 nm, and about two times higher beyond 1550.12 $\pm$ 10 nm. Moreover, Fig.~\ref{fig:spectrum1} shows that the intensity of the anti-Stokes scattering is slightly weaker than that of the Stokes scattering. Meanwhile, the averaged noise count rate at 1310 nm is 6.2 kcps, and it decreases slightly with increasing classical signal wavelength. We can see that although the received bandwidth at 1550.12 nm is 1/5 of that at 1310 nm, the Raman noise at 1550.12 nm is approximately two orders of magnitude higher than that at 1310 nm. Nevertheless, the typical fibre attenuation at 1310 nm is 0.33 dB/km, which is larger than the loss of 0.2 dB/km at 1550.12 nm.
\begin{figure}
\caption{Raman noise at 1550.12 nm (black dots) and 1310 nm (red dots). The forward Raman noises are measured in kilo counts per second (kcps) as a function of classical light wavelength, in 13.6 km standard single-mode fibre at room temperature. The classical launch power is 6 dBm.}
\label{fig:spectrum1}
\end{figure}
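The trade-off between the two candidate wavelengths can be made concrete with a quick calculation (a rough sketch using the attenuation and noise figures quoted above; the 50 km span length is illustrative):

```python
# Compare signal transmittance and Raman background for the two candidate
# quantum wavelengths over an illustrative 50 km span, using the attenuation
# and measured noise figures quoted in the text.
span_km = 50.0
loss_1550, loss_1310 = 0.20, 0.33        # fibre attenuation, dB/km
raman_1550, raman_1310 = 440.4e3, 6.2e3  # measured Raman noise, counts/s

extra_loss_db = (loss_1310 - loss_1550) * span_km  # 6.5 dB penalty at 1310 nm
signal_ratio = 10 ** (-extra_loss_db / 10)  # ~0.22: relative surviving signal
noise_ratio = raman_1310 / raman_1550       # ~0.014: relative Raman background
```

So moving the quantum channel to 1310 nm costs roughly a factor of 4.5 in signal but buys roughly a factor of 70 in noise, which is why 1310 nm wins once the classical launch power, and hence the Raman noise, becomes large.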
To compare the secure key rates of the two quantum signal wavelengths, we consider a scenario in which QKD co-propagates with classical channels over a 50 km fibre and simulate the key rate as a function of the classical launch power, as shown in Fig.~\ref{fig:KeyRateCompare}. The Raman scattering coefficient is obtained from the measured Raman noise data in Fig.~\ref{fig:spectrum1}, and the key rate simulation follows the decoy-state method \cite{Ma05}. One can see that the key rate for $\lambda_q$=1550.12 nm is higher than that for 1310 nm when the classical launch power is below -0.76 dBm: at low launch power the Raman noise is small in both cases, so 1550.12 nm benefits from its lower fibre loss.
\begin{figure}
\caption{Secure key rate comparison between 1550.12 nm and 1310 nm. The secure key rates were calculated at 50 km fibre length as a function of classical launch power. The blue region indicates where 1550.12 nm offers a higher key rate than 1310 nm, while the green region indicates the opposite; in particular, 1310 nm is the more suitable quantum signal wavelength in a Tbps environment, where the classical launch power is around 10 dBm.}
\label{fig:KeyRateCompare}
\end{figure}
As the optical launch power increases, the Raman noise degrades the 1550.12-nm quantum signals much more severely than the 1310-nm ones, causing a rapid decline in the 1550.12-nm key rate. At a launch power of -0.76 dBm the key rates of the two wavelengths coincide, and beyond this point the 1310-nm quantum signals benefit from the low level of Raman noise at 1310 nm. Moreover, when the power exceeds 2.0 dBm, 1550.12 nm cannot generate any secure keys, while 1310 nm still performs well up to a power level of about 10 dBm, which corresponds to the Tbps level of classical communication. Similar results are obtained for the QKD counter-propagation scenario. Consequently, we choose 1310 nm as the quantum signal wavelength in our experiment. This choice not only allows higher isolation against linear crosstalk using low-cost coarse wavelength division multiplexers (CWDMs), but also avoids the nonlinear four-wave mixing (FWM) effects that arise when multiple C-band classical channels are used and that may produce additional noise in a 1550.12-nm quantum channel \cite{Peters09,Eraerds10}.
\section{Co-propagation of QKD and four 64-QAM classical channels}
Figure~\ref{fig:setup} shows the experimental setup. The classical communication comprises multiple DWDM channels within the C-band at wavelengths $\lambda_1$, $\lambda_2$, $\dots$, $\lambda_{2n-1}$, $\lambda_{2n}$. Our QKD system employs the polarization-encoding BB84 protocol \cite{Bennett84} together with the decoy-state method against photon-number-splitting (PNS) attacks \cite{Hwang03,Lo05,Wang05,Ma05}. Clock synchronization between the QKD transmitter and receiver (referred to as Alice and Bob) is achieved with 100 kHz optical pulses at a wavelength of 1570 nm. The classical, quantum, and synchronization channels are multiplexed and de-multiplexed using CWDMs for transmission over a single standard single-mode fibre. The CWDMs provide about 83-dB suppression of the in-band noise in the multiplexing and $>$180 dB isolation between the classical and quantum channels in the de-multiplexing, which is sufficient to reduce the linear crosstalk to a negligible level. Before the detection of the quantum signals, we use a custom-made 1310-nm bandpass filter with a bandwidth of 100 GHz to diminish the Raman noise to about 1/24 of that which passes through the de-multiplexing CWDMs. The single-photon detectors also effectively reduce the Raman noise in the time domain through their narrow gate widths.
\begin{figure*}
\caption{Multiplexing schematic of QKD and Tbps data channels. (a) Classical transmitter. To simulate a real communication environment, we build two sets of transmitters. The odd channels are combined by a beam splitter (BS) and enter modulator 1, while the even channels are combined and enter modulator 2; the odd and even channels are then combined in an interleaved fashion. The two in-phase and quadrature modulators (IQM) are driven by electrical signals with different data sequences, ensuring that adjacent channels carry independent data. After an emulator of polarization division multiplexing, we use an EDFA to amplify and control the power of the classical light entering the fibre link. (b) Classical receiver. At the receiver site, an EDFA first amplifies the classical signals, and a tunable C-band bandpass filter (BPF1) then selects the channel to be detected. In a polarization and phase diversity coherent receiver, the signal light and a local oscillator (LO) laser of approximately the same frequency are passed onto a polarization splitter and mixed in an optical hybrid, after which balanced detection (BD) with paired photodiodes accurately extracts the signal amplitude and phase information. (c) Quantum transmitter. Four non-orthogonal states are generated through two polarizing beam splitters (PBS) and a polarization controller (PC), and the incident power of the quantum states is adjusted by a variable optical attenuator (VOA). (d) Quantum receiver. We use a 100-GHz bandpass filter (BPF2) at 1310 nm to effectively suppress the Raman noise. The quantum signals are detected in two conjugate bases using InGaAs avalanche photodiode (APD) based single-photon detectors, and the states of polarization are controlled with automatic feedback.}
\label{fig:setup}
\end{figure*}
In the first set of experiments, the classical optical communication system consists of 4 channels modulated in the 64-QAM format. The channel spacing is 50 GHz, with wavelengths ranging from 1549.1 to 1550.3 nm (the optical spectrum is shown in Supplementary Fig. 1). The bit rate of each channel is 336 Gbps, so the total gross data capacity is 1.344 Tbps. The co- and counter-propagating WDM layouts each induce a total loss of about 1.6 dB on the classical channels (see Supplementary Fig. 2 and Fig. 3). Figure~\ref{fig:optimalPower} shows the measured classical bit error rate (BER) and Raman noise as functions of the classical launch power after 50 km of standard single-mode fibre (SSMF) transmission, with QKD co-propagating with the classical channels in a WDM configuration. The classical BER is slightly higher with QKD than without, owing to the additional attenuation induced by the QKD multiplexing. The BER reaches its minimum at a 4-dBm launch power; as the power increases further, nonlinear distortions degrade the signal quality and the BER rises. In addition, as shown in Fig.~\ref{fig:optimalPower}, the Raman noise generated by the classical signal at 1310 nm is proportional to the incident light power, indicating that the spontaneous Raman scattering is a linear effect.
\begin{figure}
\caption{The classical bit error rate (BER) and forward Raman noise (measured at 1310 nm) as functions of classical launch power at 50 km.}
\label{fig:optimalPower}
\end{figure}
Figure~\ref{fig:KeyRatevsLength}(a) shows the Raman noise and the classical BER measured at different fibre distances in the WDM environment. Both the forward and the backward Raman noise are measured at a 4-dBm launch power. As the transmission distance increases, the forward Raman noise first increases and then decreases, while the backward Raman noise grows gradually until saturation. The backward noise count is much higher than the forward one, which is consistent with theoretical calculations. It should be noted that classical communication usually adopts forward error correction (FEC), which can correct a pre-FEC BER of 0.45\% or 2.4\% to a level of $10^{-15}$ or less using hard- or soft-decision FEC with 7\% or 20\% overhead, respectively \cite{chang2010forward,chang2011fpga}.
Since Tbps communication is generally deployed in optical trunk links, we demonstrate the co-propagation of QKD and the four classical data channels at longer distances. Figure~\ref{fig:KeyRatevsLength}(b) shows the QKD secure key rate and quantum bit error rate (QBER) in this scenario. The QKD secure key rate after 50 km transmission is 18.7 kbps, and the classical launch power is kept at 4 dBm from 50 km to 70 km with the BER below 2.4\% (see Supplementary Fig. 4). The maximum distance achieved is 80 km, with a fibre loss of 27.1 dB at 1310 nm, a secure key rate of 1.2 kbps, and a QBER of 3.1\%. For 80 km co-propagation, we have to increase the optical power of the classical channels to 8 dBm to keep their BER below 2.4\% (2.14\% in the experimental measurement, see Supplementary Fig. 5). Accounting for the soft-decision FEC with 20\% redundancy and the frame overhead, the net bit rate of the classical communication is 1.07 Tbps. In the counter-propagating case, QKD suffers from much stronger backward Raman scattering: from Fig.~\ref{fig:KeyRatevsLength}(a) one can see that the backward Raman noise count at 50 km is 3.2 times the forward noise, resulting in a QBER of 1.98\% and a key rate of 17.7 kbps. The maximum distance achieved in the counter-propagating case is 70 km, with a QBER of 2.62\% and a key rate of 3.7 kbps, where we increased the classical launch power to 5 dBm with a measured BER of 2.18\% ($<$2.4\%); the net bit rate of the classical channels is still 1.07 Tbps.
\begin{figure}
\caption{Classical BER and QKD performance with WDM. (a) Measured (symbols) and simulated (solid lines) forward and backward Raman noise as a function of fibre length, and the measured classical BER (red dots) with WDM. (b) Measured and simulated QKD secure key rate (green dots and line) and QBER (blue squares and line), with quantum signals co-propagating with 4 classical channels.}
\label{fig:KeyRatevsLength}
\end{figure}
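The net classical rate quoted above follows from simple overhead accounting (a sketch; the 20\% figure lumps the soft-decision FEC redundancy and the frame overhead together, as in the text):

```python
# Net classical data rate after removing FEC redundancy and frame overhead.
channels, per_channel_gbps = 4, 336
gross_tbps = channels * per_channel_gbps / 1000  # 1.344 Tbps gross
overhead = 0.20                                  # SD-FEC redundancy + framing
net_tbps = gross_tbps * (1 - overhead)           # ~1.08 Tbps net
```

This reproduces the quoted net rate of 1.07 Tbps to within rounding.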
\section{Co-propagation of QKD and 32 16-QAM classical channels}
In the second set of experiments, we build a classical optical communication system consisting of 32 channels modulated in the 16-QAM format. The channel spacing is 100 GHz, with wavelengths ranging from 1535.7 to 1559.7 nm; the optical spectrum is shown in Fig.~\ref{fig:spectrum32channels}. The bit rate of each channel is 224 Gbps, so the total gross capacity amounts to 7.168 Tbps. The WDM layouts introduce about 2-dB loss to the classical channels (see Supplementary Fig. 6 and Fig. 7). We successfully implement the WDM of QKD and classical communication at different fibre distances for both the co- and counter-propagating cases. Table~\ref{tab:32channels} lists the measured results at 50 km and at the maximum achievable distance.
\begin{figure}
\caption{The spectrum of the 32 classical data channels and the QKD clock synchronization channel, measured back-to-back (BTB) and after transmission over 80 km of standard single-mode fibre. One can see the pronounced amplified spontaneous emission (ASE) generated by the erbium-doped fibre amplifier (EDFA), which should be suppressed in advance to reduce the crosstalk.}
\label{fig:spectrum32channels}
\end{figure}
\begin{table*}[]
\centering
\caption{The results of multiplexing QKD and 32 data channels.}
\label{tab:32channels}
\begin{ruledtabular}
\begin{tabular}{clcccd}
Direction & \multicolumn{1}{c}{Distance (km)} & BER & Throughput (Tbps) & QBER & \multicolumn{1}{c}{\textrm{Key Rate (kbps)}} \\
\hline
\multirow{2}{50pt}{Co-propagating} & \quad 50 & 0.14\% & 6.38 & 1.48\% & 14.8 \\
& \quad 80 (max) & 0.77\% & 5.69 & 4.24\% & 1.0 \\
\hline
\multirow{2}{50pt}{Counter-propagating} & \quad 50 & 0.14\% & 6.38 & 2.08\% & 8.7 \\
& \quad 60 (max) & 0.15\% & 6.38 & 3.24\% & 4.3 \\
\end{tabular}
\end{ruledtabular}
\end{table*}
The measured optimal launch power for 50 km transmission is around 11 dBm (see Supplementary Fig. 8). The classical BER stays below 0.45\% for fibre distances up to 70 km (see Supplementary Fig. 9), so error correction can be performed with 7\% overhead, and the effective throughput of the classical channels reaches 6.38 Tbps, an improvement of two orders of magnitude over previous results \cite{Patel14,choi2014field}. We achieve maximum transmission distances of 80 km and 60 km in the co- and counter-propagating cases, respectively.
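For the 32-channel system, the combined overhead implied by the reported figures can be back-computed (arithmetic on the numbers above; the split between FEC redundancy and framing is not specified in the text):

```python
# Back-compute the combined FEC-plus-framing overhead implied by the
# reported 32-channel throughput figures.
channels, per_channel_gbps = 32, 224
gross_tbps = channels * per_channel_gbps / 1000  # 7.168 Tbps gross
net_tbps = 6.38                                  # reported effective throughput
overhead = 1 - net_tbps / gross_tbps             # ~0.11 combined overhead
```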
\section{Conclusion}
In our experiments, the WDM optical arrangements follow the principle of guaranteeing sufficient isolation against linear crosstalk while using as few filters as possible, so as to reduce optical loss and cost. WDM filters generally have three ports: a common port, a pass port, and a reflect port. We find that the pass ports have much higher isolation than the reflect ports. For instance, the pass ports of 1550-nm (1310-nm) CWDMs provide about 83-dB (90-dB) isolation for light at 1310 nm (1550 nm), while the reflect ports provide only about 20-dB isolation at the filter centre wavelength. Therefore, we use one 1550-nm CWDM to suppress the in-band noise, and two cascaded 1310-nm CWDMs to suppress the out-of-band noise. The second set of experiments has a similar arrangement, except that the 1550-nm CWDMs are replaced by 1550-nm filter-based wavelength division multiplexers (FWDMs), which have a wider passband that accommodates all 32 channels.
For wavelength division multiplexing, the main challenges of suppressing linear crosstalk and reducing Raman noise are independent of the implemented QKD protocol and encoding format. Therefore, although we adopt the BB84 protocol with polarization encoding in our experiment, the wavelength division multiplexing principle and methods we propose apply equally to other QKD protocols, such as differential-phase-shift QKD and measurement-device-independent QKD, and to other encoding formats, such as phase and time-bin encoding. Furthermore, our methods may also be used for continuous-variable QKD or other kinds of quantum communication co-propagating with classical data channels over optical fibre.
The secure key rate and transmission distance are two important parameters of QKD, and they depend on the performance of the single-photon detectors and on the parameter estimation process. In our experiment we use semiconductor APD based detectors, whereas superconducting nanowire single-photon detectors (SNSPDs) currently offer better performance, with detection efficiencies of $>$70\% and dark count rates of $<$100 counts per second. If the detectors of our QKD system were upgraded to SNSPDs, the secure key rates and transmission distances would therefore improve drastically. In addition, the finite key length we use to estimate parameters is $1 \times 10^6$; increasing the statistical block length would yield tighter parameter estimation and, in turn, a higher secure key rate and a longer transmission distance.
In conclusion, we analyze the suitable wavelength for QKD transmission when multiplexed with C-band classical optical communication, and find that, compared with 1550.12 nm, 1310-nm quantum signals are better suited to a Tbps classical data transmission environment with about 10-dBm launch power. Under this wavelength allocation, we achieve sufficient crosstalk isolation against the classical channels using low-cost CWDMs. In addition, we reduce the Raman noise through 100-GHz passband filters and single-photon detectors with 180-ps gate widths. Consequently, we demonstrate the wavelength division multiplexing of QKD with 16-QAM/64-QAM coherent optical communication, with a maximum throughput of 6.38 Tbps and a maximum transmission distance of 80 km, which is a typical span distance in classical communications. We note that although the secure key rate at 80 km is relatively low, the key rate at 50 km is still sufficient for voice and text encryption using a one-time pad, and key distribution over longer distances can be realized using SNSPDs or trusted relays. Owing to its high capacity, coherent optical communication is expected to become mainstream and to be applied in metropolitan and access networks, so that QKD can be deployed in more classical optical communication environments and provide highly secure applications at low cost.
\begin{acknowledgments}
This work has been supported by the Science and Technological Fund of Anhui Province for Outstanding Youth (No. 1508085J02), the National Natural Science Foundation of China (No. 61475004) and the Chinese Academy of Sciences (No. XDA04030213). We thank Dan Wang for discussions.
\end{acknowledgments}
\appendix
\section{Classical communication subsystem}
In our experiments, the classical communication subsystem conveys multichannel WDM optical signals with digital Nyquist pulse shaping. High-order modulation formats such as 16-/64-QAM are adopted. We build two sets of transmitters and carry out transmission experiments of Terabit Nyquist polarization division multiplexing (PDM) 16/64-QAM signals (see Supplementary Methods). The arbitrary waveform generators (Keysight M8195A), operating at 56 GSample/s with 2-point DAC up-sampling, generate baseband signals of 28 Gbaud. Digital root-raised-cosine (RRC) filters with a roll-off factor of 0.1 are used for the Nyquist pulse shaping. We digitize and record the received data with a real-time oscilloscope (Keysight DSA-X 96204Q) for offline digital signal processing and signal quality evaluation. In the data frame of the PDM Nyquist pulse shaping signal, the preamble of each polarization consists of synchronization and training sequences with a total length of 4696 symbols, followed by 102400 data symbols. Two pilot symbols are inserted into every 512 data symbols for carrier phase recovery. The data frame and the DSP diagrams of the transmitter and receiver are detailed in Supplementary Fig. 11 and Fig. 12.
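The digital RRC pulse shaping can be sketched as follows (an illustrative implementation of the standard root-raised-cosine impulse response with the roll-off factor 0.1 used in the experiment; the tap count and oversampling factor here are arbitrary choices, not the experimental values):

```python
import math

def rrc_impulse(t, beta):
    """Root-raised-cosine impulse response; t in symbol periods."""
    if abs(t) < 1e-12:  # peak at t = 0
        return 1.0 + beta * (4.0 / math.pi - 1.0)
    if abs(abs(t) - 1.0 / (4.0 * beta)) < 1e-12:  # removable singularity
        return (beta / math.sqrt(2.0)) * (
            (1 + 2.0 / math.pi) * math.sin(math.pi / (4.0 * beta))
            + (1 - 2.0 / math.pi) * math.cos(math.pi / (4.0 * beta)))
    num = (math.sin(math.pi * t * (1 - beta))
           + 4 * beta * t * math.cos(math.pi * t * (1 + beta)))
    den = math.pi * t * (1 - (4 * beta * t) ** 2)
    return num / den

beta, sps, span = 0.1, 2, 16   # roll-off, samples/symbol, span in symbols
taps = [rrc_impulse((k - span * sps // 2) / sps, beta)
        for k in range(span * sps + 1)]
```

Applying this filter at both the transmitter and the receiver gives the overall raised-cosine (Nyquist) response with zero inter-symbol interference at the symbol instants.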
The bit error rate is used for signal quality evaluation. For each measurement, we record a total of $\sim10^6$ data symbols; that is, we evaluate $\sim4 \times 10^6$ received bits for the 16-QAM signal and $\sim6 \times 10^6$ received bits for the 64-QAM signal. To determine the BER, we count errors by comparing the decoded symbols with the known bit sequence.
In our experiment, two raw BER criteria of $4.5 \times 10^{-3}$ and $2.4 \times 10^{-2}$ are adopted; these are the respective thresholds for error-free transmission when second-generation hard-decision FEC with 7\% overhead \cite{chang2010forward} or soft-decision FEC with 20\% overhead \cite{chang2011fpga} is used. If the FEC works properly, the corrected output BER is below $1 \times 10^{-15}$, which can be considered error-free transmission.
\section{Quantum key distribution subsystem}
Our QKD system operates at 625 MHz using polarization encoding. Alice encodes the information using weak coherent laser sources, and Bob detects the signals with InGaAs APD based single-photon detectors. The detectors work in gated mode with a detection efficiency of 10\% and a dark count rate of $1 \times 10^{-6}$ per clock cycle. To reduce the probability of afterpulsing, we set the dead time of the detectors to 200 ns. In addition, we implement the decoy-state method to protect the system from potential photon-number-splitting attacks. Alice launches signal states, decoy states, and vacuum states with a probability ratio of 6:1:1, and the average photon numbers of the signal and decoy states are 0.6 and 0.2, respectively. We note that the vacuum state is also a kind of decoy state.
In the simulation, we estimate the single-photon parameters and calculate the QBER and secure key rate following the decoy state approach \cite{Lo05getting,Ma05}. The quantum bit error rate (QBER) is given by
\begin{eqnarray}
E_\mu=\frac{1}{Q_\mu}\left[\frac{1}{2}Y_0 +e_{opt}(1-Y_0)(1-e^{-\eta \mu})\right]
\end{eqnarray}
where $Q_{\mu}$ and $Y_0$ are the probabilities of a detection event given that Alice emits a signal state and a vacuum state, respectively, $e_{opt}$ is the probability that a photon hits the wrong detector due to the finite polarization contrast, which is about 0.5\% for our system, and $\eta$ is the overall transmittance, including the fibre loss, the 3-dB loss of Bob's optical components, and the 10\% detection efficiency of the single-photon detectors. In our experiment, $Y_0$ contains two kinds of noise: the dark counts and afterpulsing of the detectors, and the spontaneous Raman scattering from the classical light. Meanwhile, the secure key rate per clock cycle is given by
\begin{eqnarray}
R= q\{ -Q_\mu f H_2(E_\mu) + Q_1[1-H_2(e_1)]+ Q_0\}
\end{eqnarray}
where $q$ is the probability that Alice emits a signal state and that Alice and Bob choose the same basis, $f$ is the error correction inefficiency, which is about 1.25, $e_1$ is the estimated error rate of the single-photon states, and $Q_1$ and $Q_0$ are the fractions of Bob's detection events that are due to the single-photon and vacuum components of the signal states, respectively. $H_2(x)=-x\log_2(x)-(1-x)\log_2(1-x)$ is the binary entropy function. The data block size used to estimate the parameters is $1 \times 10^6$, and we consider statistical fluctuations of 5 standard deviations. In the experiment, we implement the entire QKD postprocessing \cite{fung2010practical} in hardware, including message authentication with pre-shared symmetric keys, error correction with a cascade algorithm \cite{brassard1994secret}, error verification with a cyclic redundancy check (CRC), and privacy amplification with a Toeplitz matrix \cite{bennett1995generalized}.
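The two formulas above can be evaluated numerically (a sketch using the parameter values stated in this appendix; the fibre length, the background yield $Y_0$, and the standard asymptotic decoy-state estimates for $Q_1$, $Q_0$, and $e_1$ are illustrative assumptions, not the measured system values):

```python
import math

def h2(x):
    """Binary entropy function H_2(x)."""
    if x <= 0.0 or x >= 1.0:
        return 0.0
    return -x * math.log2(x) - (1 - x) * math.log2(1 - x)

# Parameters from this appendix; Y0 and the 50 km span are assumptions.
mu, e_opt, f_ec = 0.6, 0.005, 1.25
Y0 = 2e-5                        # background yield (dark counts + Raman noise)
loss_db = 0.33 * 50 + 3 + 10     # fibre (50 km) + Bob's optics + 10% det. eff.
eta = 10 ** (-loss_db / 10)      # overall transmittance

Q_mu = Y0 + 1 - math.exp(-eta * mu)
E_mu = (0.5 * Y0 + e_opt * (1 - Y0) * (1 - math.exp(-eta * mu))) / Q_mu

# Standard asymptotic single-photon / vacuum estimates (decoy-state method).
Y1 = eta + Y0
Q1 = Y1 * mu * math.exp(-mu)
Q0 = Y0 * math.exp(-mu)
e1 = (0.5 * Y0 + e_opt * eta) / Y1

q = 0.5 * 6 / 8                  # basis sifting x signal-state probability
R = q * (-Q_mu * f_ec * h2(E_mu) + Q1 * (1 - h2(e1)) + Q0)
```

Multiplying $R$ by the 625 MHz clock rate gives a key rate of the right order of magnitude; the exact value depends strongly on the assumed $Y_0$.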
\end{document} |
\begin{document}
\title[Non-associative Ore Extensions]{Non-associative Ore Extensions}
\author{Patrik Nystedt}
\address{University West,
Department of Engineering Science,
SE-46186 Trollh\"{a}ttan, Sweden}
\author{Johan \"{O}inert}
\address{Blekinge Institute of Technology,
Department of Mathematics and Natural Sciences,
SE-37179 Karlskrona, Sweden}
\author{Johan Richter}
\address{M\"{a}lardalen University,
Academy of Education, Culture and Communication, \\
Box 883, SE-72123 V\"{a}ster\aa s, Sweden}
\email{[email protected]; [email protected]; [email protected]}
\subjclass[2010]{17D99, 17A36, 17A99, 16S36, 16W70, 16U70}
\keywords{non-associative Ore extension, simple, outer derivation.}
\begin{abstract}
We introduce non-associative Ore extensions, $S = R[X ; \sigma , \delta]$,
for any non-associa\-tive unital ring $R$ and any additive maps
$\sigma,\delta : R \rightarrow R$ satisfying $\sigma(1)=1$ and $\delta(1)=0$.
In the special case when $\delta$ is
either left or right $R_{\delta}$-linear,
where $R_{\delta} = \ker(\delta)$, and $R$ is $\delta$-simple, i.e.
$\{ 0 \}$ and $R$ are the only $\delta$-invariant ideals of $R$,
we determine the ideal structure of the non-associative differential polynomial ring
$D = R[X ; \id_R , \delta]$. Namely, in that case, we show that
all ideals of $D$ are generated by monic polynomials in the
center $Z(D)$ of $D$. We also show that $Z(D) = R_{\delta}[p]$ for
a monic $p \in R_{\delta}[X]$, unique up to addition
of elements from $Z(R)_{\delta}$.
Thereby, we generalize classical results by Amitsur on
differential polynomial rings defined by derivations on associative and simple rings.
Furthermore, we use the ideal structure of $D$ to show that $D$ is simple if and only
if $R$ is $\delta$-simple and $Z(D)$ equals the field $R_{\delta} \cap Z(R)$.
This provides us with a non-associative generalization of a result
by \"{O}inert, Richter, and Silvestrov.
This result is in turn used to show a non-associative version
of a classical result by Jordan concer\-ning simplicity
of $D$ in the cases when the characteristic of
the field $R_{\delta} \cap Z(R)$ is either zero
or a prime.
We use our findings to show simplicity results for
both non-associative versions of Weyl algebras
and non-associative differential polynomial rings
defined by monoid/group actions on compact Hausdorff spaces.
\end{abstract}
\maketitle
\pagestyle{headings}
\section{Introduction}
In 1933 Ore \cite{ore1933} introduced a version
of non-commutative polynomial rings,
nowadays called Ore extensions, that have
become one of the most useful constructions
in ring theory. The Ore extensions
play an important role when
investigating cyclic algebras, enveloping rings of solvable Lie algebras,
and various types of graded rings
such as group rings and crossed products,
see e.g. \cite{cohn1977}, \cite{jacobson1999},
\cite{mcconell1988} and \cite{rowen1988}.
They are also a natural source of
examples and counter-examples in ring theory,
see e.g. \cite{bergman1964} and \cite{cohn1961}.
Furthermore, various special cases
of Ore extensions are used as
tools in diverse analytical settings,
such as differential-, pseudo-differential and
fractional differential operator rings
\cite{goodearl1983} and
$q$-Heisenberg algebras \cite{hellstromsilvestrov2000}.
Let us recall the definition of an (associative) Ore extension.
Let $S$ be a unital ring.
Take $x \in S$ and let $R$ be a subring of $S$
containing $1$, the multiplicative identity element of $S$.
\begin{defn}\label{defore}
The pair $(S,x)$ is called an {\it Ore extension} of $R$
if the following axioms hold:
\begin{itemize}
\item[(O1)] $S$ is a free left $R$-module with basis
$\{ 1,x,x^2,\ldots \}$;
\item[(O2)] $xR \subseteq R + Rx$;
\item[(O3)] $S$ is associative.
\end{itemize}
If (O2) is replaced by
\begin{itemize}
\item[(O2)$'$] $[x,R] \subseteq R$;
\end{itemize}
then $(S,x)$ is called a {\it differential polynomial ring over} $R$.
\end{defn}
Recall that $[x,R]$ denotes the set of finite sums of elements
of the form $[x,r]=xr-rx$, for $r \in R$.
To construct Ore extensions, one considers
generalized polynomial rings $R[X ; \sigma , \delta]$ over an associative ring $R$,
where $\sigma$ is a ring endomorphism of $R$,
respecting 1, and $\delta$ is a $\sigma$-derivation of $R$, i.e.
an additive map $R \rightarrow R$ satisfying
$\delta(ab) = \sigma(a) \delta(b) + \delta(a) b$, for $a,b \in R$.
Let $\mathbb{N}$ denote the set of non-negative integers. As an additive group, $R[X ; \sigma , \delta]$ is equal to the usual polynomial ring $R[X]$.
The ring structure on $R[X ; \sigma , \delta]$
is defined on monomials by
\begin{equation}\label{productmonomials}
a X^m \cdot b X^n = \sum_{i \in \mathbb{N}} a \pi_i^m(b) X^{i+n},
\end{equation}
for $a , b \in R$ and $m,n \in \mathbb{N}$,
where $\pi_i^m$ denotes the sum of all the
${m \choose i}$ possible compositions of $i$
copies of $\sigma$ and $m-i$ copies of $\delta$ in arbitrary order
(see equation (11) in \cite{ore1933}).
Here we make the convention that
$\pi_i^m(b) = 0$, for $i,m \in \mathbb{N}$
such that $i > m$.
The product \eqref{productmonomials} makes the pair
$(R[X ; \sigma , \delta],X)$ an Ore extension of $R$.
In fact, (O1) and (O2) are immediate and (O3)
can be shown in several different ways, see e.g.
\cite[Proposition 7.1]{bergman1978},
\cite{nystedt2013},
\cite{richter2014},
\cite[Proposition 1.6.15]{rowen1988},
or Proposition \ref{newproof} in the present article for yet another proof.
This class of generalized polynomial rings provides us with
all Ore extensions of $R$.
Indeed, given an Ore extension $(S,x)$ of $R$,
define the maps $\sigma : R \rightarrow R$ and
$\delta : R \rightarrow R$ by the relations
$x a = \delta(a) + \sigma(a) x$, for $a \in R$.
Then it follows that
$\sigma$ is a ring endomorphism of $R$,
respecting 1, $\delta$ is a $\sigma$-derivation of $R$
and there is a unique well defined
ring isomorphism $f : S \rightarrow R[X;\sigma,\delta]$ subject to
the relations $f(x) = X$ and $f|_R = \id_R$.
If $(S,x)$ is a differential polynomial ring over $R$,
then $\sigma = \id_R$ and $\delta$ is a derivation on $R$.
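A small computational sanity check of the product \eqref{productmonomials} in the differential case $\sigma = \id_R$, where $\pi_i^m(b) = {m \choose i}\,\delta^{m-i}(b)$: taking $R = \mathbb{Q}[Y]$ and $\delta = d/dY$ should reproduce the first Weyl algebra relation $XY - YX = 1$. The sketch below (illustrative only, not part of the formal development) represents elements of $R$ as coefficient lists and elements of $D = R[X; \id_R, \delta]$ as maps from $X$-degrees to such lists:

```python
from math import comb

# R = Q[Y]; a polynomial a(Y) is a list of coefficients [a0, a1, ...].
def p_add(a, b):
    n = max(len(a), len(b))
    return [(a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0)
            for i in range(n)]

def p_mul(a, b):
    out = [0] * (len(a) + len(b) - 1) if a and b else []
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def delta(a):  # the derivation d/dY on R
    return [i * a[i] for i in range(1, len(a))]

# An element of D is a dict {X-degree: coefficient in R}.
def d_mul(f, g):
    out = {}
    for m, a in f.items():
        for n, b in g.items():
            # a X^m * b X^n = sum_i C(m,i) a delta^{m-i}(b) X^{i+n}
            for i in range(m + 1):
                c = b
                for _ in range(m - i):
                    c = delta(c)
                if not c:
                    continue
                term = p_mul([comb(m, i)], p_mul(a, c))
                out[i + n] = p_add(out.get(i + n, []), term)
    return {k: v for k, v in out.items() if any(v)}

X = {1: [1]}     # the element X of D
Y = {0: [0, 1]}  # the element Y of R, embedded in D
XY, YX = d_mul(X, Y), d_mul(Y, X)
# XY = YX + 1, i.e. [X, Y] = 1, the first Weyl algebra relation
```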
Many different properties of associative Ore extensions, such as
when they are integral domains, principal domains,
prime or noetherian have been studied by numerous authors
(see e.g. \cite{cozzens1975} or \cite{mcconell1988} for surveys).
Here we focus on the property of simplicity of
differential polynomial rings $D = R[X ; \id_R , \delta]$.
Recall that $\delta$ is called {\it inner}
if there is $a \in R$ such that
$\delta(r) = ar - ra$, for $r \in R$.
In that case we write $\delta = \delta_a$.
If $\delta$ is not inner, then $\delta$ is called {\it outer}.
We let the {\it characteristic} of a
ring $R$ be denoted by ${\rm char}(R)$.
In an early article by Jacobson \cite{jacobson1937}
it is shown that if $\delta$ is outer and $R$ is a division ring
with ${\rm char}(R)=0$, then $D$ is simple.
The case of positive characteristic is more complicated
and $D$ may contain non-trivial ideals.
In fact, Amitsur \cite{amitsur1950} has shown that
if $R$ is a division ring with ${\rm char}(R)= p > 0$,
then every ideal of $D$ is generated by a polynomial, all of whose
monomials have degrees which are multiples of $p$.
A few years later Amitsur \cite{amitsur1957} generalized this
result to the case of simple $R$. To describe this generalization
we need to introduce some more notation.
Let $T$ be a subring of $D$.
Let $Z(T)$ denote the {\it center} of $T$, i.e. the set of all elements in $T$
that commute with every element of $T$.
If $T$ is a subring of $R$, then put $T_{\delta} = T \cap \ker(\delta)$.
Note that if $R$ is simple, then $Z(R)$ is a field.
Therefore, in that case, ${\rm char}(R) = {\rm char}(Z(R))$ and hence
${\rm char}(R)$ is either zero or a prime $p > 0$.
\begin{thm}[Amitsur \cite{amitsur1957}]\label{amitsurtheorem}
Suppose that $R$ is a simple associative ring and let
$\delta$ be a derivation on $R$.
If we put $D = R[X ; \id_R , \delta]$, then the following assertions hold:
\begin{itemize}
\item[(a)] Every ideal of $D$ is generated by a unique monic polynomial in $Z(D)$;
\item[(b)] There is a monic $b \in R_{\delta}[X]$, unique up to addition of
elements from $Z(R)_{\delta}$, such that
$Z(D) = Z(R)_{\delta}[b]$;
\item[(c)] If ${\rm char}(R)=0$ and $b \neq 1$, then there is $c \in R_{\delta}$
such that $b = c + X$. In that case, $\delta = \delta_c$;
\item[(d)] If ${\rm char}(R) = p > 0$ and $b \neq 1$, then there is $c \in R_{\delta}$
and $b_0,\ldots,b_n \in Z(R)_{\delta}$, with $b_n=1$, such that
$b = c + \sum_{i=0}^n b_i X^{p^i}$.
In that case, $\sum_{i=0}^n b_i \delta^{p^i} = \delta_c$.
\end{itemize}
\end{thm}
The condition that $R$ is simple in the above theorem is not
necessary for simplicity of $D = R[X ; \id_R , \delta]$.
Consider e.g. the well known example of the first Weyl algebra
where $R = K[Y]$, $K$ is a field with ${\rm char}(K)=0$ and
$\delta$ is the usual derivative on $R$
(for more details, see e.g. \cite[Example 1.6.32]{rowen1988}).
However, $\delta$-simplicity of $R$ is always a necessary
condition for simplicity of $D$
(see \cite[Lemma 4.1.3(i)]{jordan1975} or Proposition \ref{sigmadeltasimple}).
Recall that an ideal $I$ of $R$ is called $\delta$-invariant
if $\delta(I) \subseteq I$.
The ring $R$ is called $\delta$-simple if $\{ 0 \}$ and $R$
are the only $\delta$-invariant ideals of $R$.
Note that if $R$ is $\delta$-simple, then
the ring $Z(R)_{\delta}$ is always a field.
Therefore, in that case, ${\rm char}(R) = {\rm char}(Z(R)_{\delta})$ and hence
${\rm char}(R)$ is either zero or a prime $p > 0$.
Jordan \cite{jordan1975} (and Cozzens and Faith \cite{cozzens1975}
in a special case) has shown the following result.
\begin{thm}[Jordan \cite{jordan1975}]\label{jordantheorem}
Suppose that $R$ is a $\delta$-simple associative ring and let
$\delta$ be a derivation on $R$.
If we put $D = R[X ; \id_R , \delta]$, then the following assertions hold:
\begin{itemize}
\item[(a)] If ${\rm char}(R)=0$, then $D$ is simple if and only if
$\delta$ is outer;
\item[(b)] If ${\rm char}(R) = p > 0$, then $D$ is simple if and only if
no derivation of the form $\sum_{i=0}^n b_i \delta^{p^i}$,
$b_i \in Z(R)_{\delta}$, and $b_n=1$, is an inner derivation
induced by an element in $R_{\delta}$.
\end{itemize}
\end{thm}
In the case when $R$ is commutative, Cozzens and Faith \cite{cozzens1975}
(for integral domains $R$ of prime characteristic)
and Goodearl and Warfield \cite{goodearl1982} (in the general case) have shown that
$R[X ; \id_R, \delta]$ is simple
if and only if $R$ is $\delta$-simple and $R$ is
infinite-dimensional as a vector space over $R_{\delta}$.
If one has a family of commuting derivations,
then one can form a differential polynomial ring in several variables.
The articles \cite{malm1988}, \cite{posner1960} and \cite{voskoglou1985}
consider the question of when such rings are simple. In the preprint \cite{nystedtoinertrichter}, the authors of the present article study when non-associative differential polynomial rings in several variables are simple.
In the simplicity results mentioned above, a distinction
is often made between the cases when the characteristic of $R$
is zero or the characteristic of $R$ is prime.
Special attention is also often paid to the case when $R$ is commutative.
However, in \cite{oinert2013} \"{O}inert, Richter and Silvestrov have shown
the following simplicity result that holds
for all associative differential polynomial rings regardless of characteristic.
\begin{thm}[\"{O}inert, Richter and Silvestrov \cite{oinert2013}]\label{richtersilvestrovoinert}
If $R$ is associative and $\delta : R \rightarrow R$
is a derivation, then
$D = R[X ; \id_R , \delta]$ is simple
if and only if $R$ is $\delta$-simple and $Z(D)$ is a field.
\end{thm}
In this article, we address the question of what it should
mean for a pair $(S,x)$ to be a non-associative Ore extension of $R$
and when the resulting rings are simple.
It seems to the authors of the present article
that this question has not previously been analysed in the literature.
Let us briefly describe the train of reasoning that led the authors to
their definition of such objects.
The product \eqref{productmonomials}
equips the set $R[X ; \sigma , \delta]$ of generalized polynomials over
any non-associative ring $R$
with a well-defined non-associative ring structure
for any additive maps
$\sigma : R \rightarrow R$ and $\delta : R \rightarrow R$
satisfying $\sigma(1)=1$ and $\delta(1)=0$.
We wish to adapt the axioms (O1), (O2) and (O3)
to the non-associative situation so that
the resulting collection of non-associative rings
coincides with this family of generalized polynomial rings.
It turns out that this
happens precisely when $x$
belongs to the right and middle nucleus of $S$.
To be more precise, let $S$ be a non-associative ring; by this we mean that $S$ is an additive abelian group equipped with a multiplication
which is distributive with respect to addition and which has a multiplicative identity $1$.
We suggest the following.
\begin{defn}\label{defnonore}
The pair $(S,x)$ is called a {\it non-associative Ore extension} of $R$ if
the following axioms hold:
\begin{itemize}
\item[(N1)] $S$ is a free left $R$-module with basis
$\{ 1,x,x^2,\ldots \}$;
\item[(N2)] $xR \subseteq R + Rx$;
\item[(N3)] $(S,S,x) = (S,x,S) = \{ 0 \}$.
\end{itemize}
If (N2) is replaced by
\begin{itemize}
\item[(N2)$'$] $[x,R] \subseteq R$;
\end{itemize}
then $(S,x)$ is called a {\it non-associative differential polynomial ring over} $R$.
\end{defn}
For non-empty subsets
$A$, $B$ and $C$ of $S$,
we let $(A,B,C)$ denote the set of finite sums of elements
of the form $(a,b,c) = (ab)c - a(bc)$, for $a \in A$,
$b \in B$ and $c \in C$.
Note that from (N3) it follows that the
element $x$ is power associative, so that the
symbols $x^i$, for $i \in \mathbb{N}$, are well defined.
Here is an outline of this article.
In Section \ref{nonassociativeringtheory},
we gather some well known facts from non-associative ring
and module theory that we need in the sequel.
In particular, we state our conventions concerning
modules over non-associative rings and
what a basis
should mean
in that situation.
In Section \ref{oreextensions}, we show that
there is a bijection between the set of non-associative Ore extensions of $R$
and the set of generalized polynomial rings $R[X ; \sigma , \delta]$ over $R$,
where $\sigma$ and $\delta$ are additive maps $R \rightarrow R$
such that $\sigma(1)=1$ and $\delta(1)=0$.
If $T$ is a subset of $R$, then we put
$T_{\delta}^{\sigma} = \{ a \in T \mid \sigma(a)=a \ \mbox{and} \ \delta(a)=0 \}$,
$T_{\delta} = T_{\delta}^{\id_R}$ and $T^{\sigma} = T_{0}^{\sigma}$.
In Section \ref{oreextensions}, we introduce
the class of {\it strong} non-associative Ore extensions
(see Definition \ref{defstrong}).
These correspond to generalized polynomial rings
$R[X ; \sigma , \delta]$,
where $\sigma$ is what we call a {\it fixed point homomorphism} of $R$
and $\delta$ is what we call a $\sigma$-{\it kernel derivation} of $R$.
By this we mean that
$\sigma$ and $\delta$ are maps $R \rightarrow R$ satisfying
$\sigma(1)=1$, $\delta(1)=0$ and both of them are right $R_{\delta}^{\sigma}$-linear
or both of them are left $R_{\delta}^{\sigma}$-linear.
Clearly, every classical derivation is a $\sigma$-kernel derivation with $\sigma=\id_R$ and every classical homomorphism is a fixed point homomorphism. In general, a $\sigma$-kernel derivation with $\sigma=\id_R$ will simply be called a \emph{kernel derivation}.
In Section \ref{simplicity}, we introduce $\sigma$-$\delta$-simplicity
for rings $R$, where $\sigma$ and $\delta$ are additive maps $R \rightarrow R$
such that $\sigma(1)=1$ and $\delta(1)=0$ (see Definition \ref{defsigmadeltasimple}).
We show that $\sigma$-$\delta$-simplicity of $R$ is a necessary
condition for simplicity of non-associative Ore extensions
$R[X ; \sigma , \delta]$ (see Proposition \ref{sigmadeltasimple}).
We also show that if $R$ is $\sigma$-$\delta$-simple,
then $Z(R)_{\delta}^{\sigma}$
is a field (see Proposition \ref{R-sigmadelta-simple-Z-field}).
Thus, in that case, we get that ${\rm char}(R) = {\rm char}( Z(R)_{\delta}^{\sigma} )$
and hence that ${\rm char}(R)$ is either zero or a prime $p > 0$.
In Section \ref{simplicity}, we prove the following
non-associative generalization of Theorems \ref{amitsurtheorem},
\ref{jordantheorem} and \ref{richtersilvestrovoinert}.
\begin{thm}\label{maintheorem}
Suppose that $R$ is a non-associative ring and that
$\delta$ is a kernel derivation on $R$.
If we put $D = R[X ; \id_R , \delta]$, then the following assertions hold:
\begin{itemize}
\item[(a)] If $R$ is $\delta$-simple, then
every ideal of $D$ is generated by a unique monic polynomial in $Z(D)$;
\item[(b)] If $R$ is $\delta$-simple,
then there is a monic $b \in R_{\delta}[X]$, unique up to addition
of elements from $Z(R)_{\delta}$, such that $Z(D) = Z(R)_{\delta}[b]$;
\item[(c)] $D$ is simple if and only if
$R$ is $\delta$-simple and $Z(D)$ is a field.
In that case, $Z(D) = Z(R)_{\delta}$ and $b=1$;
\item[(d)] If $R$ is $\delta$-simple,
$\delta$ is a derivation on $R$ and
${\rm char}(R)=0$,
then either $b=1$ or there is $c \in R_{\delta}$ such that $b = c + X$.
In the latter case, $\delta = \delta_c$;
\item[(e)] If $R$ is $\delta$-simple,
$\delta$ is a derivation on $R$ and
${\rm char}(R)=p>0$,
then either $b=1$ or there is $c \in R_{\delta}$ and
$b_0,\ldots,b_n \in Z(R)_{\delta}$, with $b_n=1$,
such that $b = c + \sum_{i=0}^n b_i X^{p^i}$.
In the latter case,
$\sum_{i=0}^n b_i \delta^{p^i} = \delta_c$.
\end{itemize}
\end{thm}
In Section \ref{sectionweyl},
we introduce non-associative versions of
the first Weyl algebra
(see Definition \ref{definitionweyl}) and we show that they are often
simple regardless of the characteristic (see Theorem \ref{theoremweyl}).
In Section \ref{sectiondynamics},
we introduce a special class of $\sigma$-kernel derivations
induced by ring automorphisms
(see Definition \ref{definitionkernel}).
This yields simplicity results for a differential
polynomial ring analogue of the quantum plane
(see Theorem \ref{theoremquantumtorus}) and
for differential polynomial rings defined by monoid/group actions
on compact Hausdorff spaces (see Theorem~\ref{NYtheoremdynamics} and Theorem~\ref{theoremdynamics}).
In Section \ref{sectionassociative},
we show that if the coefficients are associative,
then we can often obtain simplicity of the differential polynomial ring
just from the assumption that the map $\delta$ is not a derivation.
\section{Preliminaries from Non-associative Ring Theory}\label{nonassociativeringtheory}
In this section, we recall some notions from non-associative
ring theory that we need in subsequent sections.
Although the results stated in this
section are presumably rather well known, we have, for the convenience of the reader,
nevertheless chosen to include proofs of these statements.
Throughout this section,
$R$ denotes a non-associative ring.
By this we mean that $R$ is an additive abelian group in which a multiplication
is defined, satisfying left and right distributivity.
We always assume that $R$ is unital and that the multiplicative identity
of $R$ is denoted by $1$.
The term ``non-associative'' should be interpreted
as ``not necessarily associative''.
Therefore all associative rings are non-associative.
If a ring is not associative,
we will use the term ``not associative ring''.
By a {\it left module} over $R$ we mean an additive group $M$
equipped with a biadditive map
$R \times M \ni (r,m) \mapsto rm \in M$.
In that case, we say that a subset $B$ of $M$ is a basis
if for every $m \in M$, there are unique $r_b \in R$, for $b \in B$,
such that $r_b = 0$ for all but finitely many $b \in B$,
and $m = \sum_{b \in B} r_b b$.
{\it Right modules} over $R$ and bases are defined in an analogous manner.
Recall that the \emph{commutator} $[\cdot,\cdot] : R \times R \rightarrow R$
and the \emph{associator} $(\cdot,\cdot,\cdot) : R \times R \times R \rightarrow R$
are defined by $[r,s]=rs-sr$ and
$(r,s,t) = (rs)t - r(st)$ for all $r,s,t \in R$, respectively.
The \emph{commuter} of $R$, denoted by $C(R)$,
is the subset of $R$ consisting
of elements $r \in R$ such that $[r,s]=0$
for all $s \in R$.
The \emph{left}, \emph{middle} and \emph{right nucleus} of $R$,
denoted by $N_l(R)$, $N_m(R)$ and $N_r(R)$, respectively, are defined by
$N_l(R) = \{ r \in R \mid (r,s,t) = 0, \ \mbox{for} \ s,t \in R\}$,
$N_m(R) = \{ s \in R \mid (r,s,t) = 0, \ \mbox{for} \ r,t \in R\}$, and
$N_r(R) = \{ t \in R \mid (r,s,t) = 0, \ \mbox{for} \ r,s \in R\}$.
The \emph{nucleus} of $R$, denoted by $N(R)$,
is defined to be equal to $N_l(R) \cap N_m(R) \cap N_r(R)$.
From the so-called \emph{associator identity}
$u(r,s,t) + (u,r,s)t + (u,rs,t) = (ur,s,t) + (u,r,st)$,
which holds for all $u,r,s,t \in R$, it follows that
all of the subsets $N_l(R)$, $N_m(R)$, $N_r(R)$ and $N(R)$
are associative subrings of $R$.
The \emph{center} of $R$, denoted by $Z(R)$, is defined to be equal to the
intersection $N(R) \cap C(R)$.
It follows immediately that $Z(R)$ is an associative, unital
and commutative subring of $R$.
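Since the associator identity is a formal consequence of biadditivity alone, it holds in any algebra given by arbitrary structure constants, associative or not. The following randomized check is our own illustration (the dimension, structure constants and helper names are arbitrary choices; the algebra is in general neither associative nor unital):

```python
import itertools
import random

random.seed(0)
DIM = 3
# Random structure constants: e_i * e_j = sum_k C[i][j][k] e_k.
C = [[[random.randint(-3, 3) for _ in range(DIM)]
      for _ in range(DIM)] for _ in range(DIM)]

def mul(a, b):
    """Bilinear extension of the structure constants to vectors."""
    out = [0] * DIM
    for i, j in itertools.product(range(DIM), repeat=2):
        for k in range(DIM):
            out[k] += a[i] * b[j] * C[i][j][k]
    return out

def add(a, b):
    return [x + y for x, y in zip(a, b)]

def assoc(r, s, t):
    """The associator (r, s, t) = (rs)t - r(st)."""
    return [x - y for x, y in zip(mul(mul(r, s), t), mul(r, mul(s, t)))]

def rand_vec():
    return [random.randint(-5, 5) for _ in range(DIM)]

# Check u(r,s,t) + (u,r,s)t + (u,rs,t) = (ur,s,t) + (u,r,st).
for _ in range(100):
    u, r, s, t = (rand_vec() for _ in range(4))
    lhs = add(add(mul(u, assoc(r, s, t)), mul(assoc(u, r, s), t)),
              assoc(u, mul(r, s), t))
    rhs = add(assoc(mul(u, r), s, t), assoc(u, r, mul(s, t)))
    assert lhs == rhs
print("associator identity verified")
```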
\begin{prop}\label{intersection}
The following three equalities hold:
\begin{align}
Z(R) &= C(R) \cap N_l(R) \cap N_m(R); \label{FIRST}\\
Z(R) &= C(R) \cap N_l(R) \cap N_r(R); \label{SECOND}\\
Z(R) &= C(R) \cap N_m(R) \cap N_r(R). \label{THIRD}
\end{align}
\end{prop}
\begin{proof}
We only show \eqref{FIRST}. The equalities \eqref{SECOND} and \eqref{THIRD} are shown
in a similar way and are therefore left to the reader.
It is clear that $Z(R) \subseteq C(R) \cap N_l(R) \cap N_m(R)$.
Now we show the reversed inclusion.
Take $r \in C(R) \cap N_l(R) \cap N_m(R)$.
We need to show that $r \in N_r(R)$.
Take $s,t \in R$.
We wish to show that $(s,t,r)=0$, i.e. $(st)r = s(tr)$.
Using that $r\in C(R) \cap N_l(R) \cap N_m(R)$
we get $(st)r = r(st) = (rs)t = (sr)t = s(rt) = s(tr)$.
\end{proof}
\begin{prop}\label{centerInvClosed}
If $r \in Z(R)$ and $s \in R$ satisfy $rs = 1$,
then $s \in Z(R)$.
\end{prop}
\begin{proof}
Let $r\in Z(R)$ and suppose that $rs=1$.
First we show that $s \in C(R)$.
To this end, take $u \in R$.
Then $su = (su)1 = (su)(rs) = (r(su))s=
((rs)u)s = (1u)s = us$ and hence $s \in C(R)$.
By Proposition \ref{intersection}, we are done if we can show
$s \in N_l(R) \cap N_m(R)$. To this end, take $v \in R$.
Then $s(uv) = s((1u)v)= s(((rs) u)v) =
(rs) ( (su) v ) = 1( (su)v ) = (su)v$
which shows that $s \in N_l(R)$.
We also see that
$(us)v = (us)(1v) = (us) ( (rs) v ) =
( u (rs) ) (sv) = (u1)(sv) = u(sv)$
which shows that $s \in N_m(R)$.
\end{proof}
\begin{prop}\label{centerfield}
If $R$ is simple, then $Z(R)$ is a field.
\end{prop}
\begin{proof}
We already know that $Z(R)$ is a unital commutative ring.
What is left to show is that every non-zero element of $Z(R)$
has a multiplicative inverse in $Z(R)$.
To this end, take a non-zero $r \in Z(R)$.
Then $Rr$ is a non-zero ideal of $R$.
Since $R$ is simple, this implies that $R = Rr$.
In particular, we get that there is $s \in R$
such that $1 = sr$. By Proposition \ref{centerInvClosed},
we get that $s \in Z(R)$ and we are done.
\end{proof}
\section{Non-associative Ore extensions}\label{oreextensions}
In this section, we show that
there is a bijection between the set of (strong) non-associative Ore extensions of $R$
and the set of generalized polynomial rings $R[X ; \sigma , \delta]$ over $R$,
where $\sigma$ (is a fixed point homomorphism) and
$\delta$ (is a $\sigma$-kernel derivation) are
additive maps $R \rightarrow R$
such that $\sigma(1)=1$ and $\delta(1)=0$
(see Proposition \ref{sufficientpolynomial} and Proposition \ref{necessarypolynomial}).
We also show that if
$S = R[X ; \sigma , \delta]$ is a generalized polynomial ring, then
$S$ is associative if and only if
$R$ is associative, $\sigma$ is a ring endomorphism and
$\delta$ is a $\sigma$-derivation
(see Proposition \ref{newproof}).
Throughout this section, $R$ denotes a non-associative ring.
\begin{defn}\label{definitionpolynomials}
By the set $R[X]$ of {\it formal polynomials} over $R$ we mean the collection
of functions $f : \mathbb{N} \rightarrow R$ with the
property that $f(n)=0$ for all but finitely many $n \in \mathbb{N}$.
If $f,g \in R[X]$ and $r,s \in R$, then we define $rf + sg \in R[X]$
by the relation $(rf + sg)(n) = rf(n) + sg(n)$,
for $n \in \mathbb{N}$.
If we for each $n \in \mathbb{N}$, let $X^n \in R[X]$ be defined
by $X^n(m) = 1$, if $m=n$, and $X^n(m)=0$, if $m \neq n$,
then $R[X]$ is a free left $R$-module with
$B = \{ X^n \}_{n \in \mathbb{N} }$ as a basis.
In fact, for each $f \in R[X]$, we have that
$f = \sum_{n \in \mathbb{N}} f(n) X^n$.
By the degree of $f$, denoted by $\deg(f)$, we mean
the supremum of $\{ -\infty \} \cup \{ n \in \mathbb{N} \mid f(n) \neq 0 \}$.
If $f \neq 0$, then we call $f( \deg(f) )$ the {\it leading coefficient of} $f$.
If the leading coefficient of $f$ is 1, then we say that $f$ is {\it monic}.
\end{defn}
\begin{defn}
Let $\sigma : R \rightarrow R$ and $\delta : R \rightarrow R$
be additive maps such that $\sigma(1)=1$ and $\delta(1)=0$.
By the generalized polynomial ring
$R[X ; \sigma , \delta]$ over $R$ defined by $\sigma$ and $\delta$
we mean the set $R[X]$ of formal
polynomials over $R$ equipped with the product defined
on monomials by the relation \eqref{productmonomials}.
We will often identify each $r \in R$
with $rX^0$. It is clear that $R[X ; \sigma , \delta]$ is a
non-associative ring with $1 = X^0$.
It is also clear that $X$ is power associative
so that $X^n$, for $n > 0$, is in fact equal to the product of $X$ with itself $n$ times.
\end{defn}
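For $\sigma = \id_R$, iterating the relation $Xb = \delta(b) + bX$ shows that the product \eqref{productmonomials} is governed by the rule $X^m b = \sum_{i=0}^m {m \choose i} \delta^{m-i}(b) X^i$. As a concrete sanity check on the definition, the sketch below implements $R[X ; \id_R , d/dY]$ for $R = \mathbb{Q}[Y]$ and verifies the Weyl relation $XY - YX = 1$ from the introduction; the dictionary encoding and helper names are our own illustration, not code from any cited source:

```python
from math import comb

# A polynomial in Y is a dict {exponent: coefficient}; an element of
# D = Q[Y][X; id, d/dY] is a dict {X-exponent: polynomial in Y}.

def p_add(p, q):
    out = dict(p)
    for k, c in q.items():
        out[k] = out.get(k, 0) + c
    return {k: c for k, c in out.items() if c != 0}

def p_mul(p, q):
    out = {}
    for i, a in p.items():
        for j, b in q.items():
            out[i + j] = out.get(i + j, 0) + a * b
    return {k: c for k, c in out.items() if c != 0}

def p_delta(p):
    """The derivative d/dY on Q[Y]."""
    return {k - 1: k * c for k, c in p.items() if k >= 1 and c != 0}

def d_mul(f, g):
    """Product in D via X^m b = sum_i C(m,i) delta^{m-i}(b) X^i (sigma = id)."""
    out = {}
    for m, a in f.items():
        for n, b in g.items():
            for i in range(m + 1):
                db = b
                for _ in range(m - i):
                    db = p_delta(db)
                term = p_mul(p_mul(a, db), {0: comb(m, i)})
                out[i + n] = p_add(out.get(i + n, {}), term)
    return {k: p for k, p in out.items() if p}

def d_sub(f, g):
    out = dict(f)
    for k, p in g.items():
        out[k] = p_add(out.get(k, {}), {e: -c for e, c in p.items()})
    return {k: p for k, p in out.items() if p}

X = {1: {0: 1}}      # the indeterminate X
Y = {0: {1: 1}}      # the coefficient Y in R = Q[Y]
one = {0: {0: 1}}

assert d_sub(d_mul(X, Y), d_mul(Y, X)) == one          # XY - YX = 1
assert d_mul(X, d_mul(X, X)) == d_mul(d_mul(X, X), X)  # X is power associative
print("Weyl relation XY - YX = 1 verified")
```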
\begin{defn}\label{defstrong}
Suppose that $(S,x)$ is a non-associative Ore extension of $R$.
Put $R_x = \{ a \in R \mid ax = xa \}$.
We say that $(S,x)$ is {\it strong} if at least one of the following axioms holds:
\begin{itemize}
\item[(N4)] $(x,R,R_x) = \{ 0 \}$;
\item[(N5)] $(x,R_x,R) = \{ 0 \}$.
\end{itemize}
In that case we call $R_x$ {\it the ring of constants of $R$}.
If $(S,x)$ is a non-associative differential polynomial ring,
then we say that it is strong if it is strong as a
non-associative Ore extension.
\end{defn}
The usage of the term ``ring'' in Definition \ref{defstrong}
is justified by the next result.
\begin{prop}
If $(S,x)$ is a strong non-associative Ore extension of $R$,
then $R_x$ is a subring of $R$.
\end{prop}
\begin{proof}
It is clear that $R_x$ is an additive subgroup of $R$ containing 1.
Now we show that $R_x$ is multiplicatively closed.
Take $a,b \in R_x$. Then
$(ab)x \stackrel{(N3)}{=} a(bx) \stackrel{[ b \in R_x ]}{=}
a(xb) \stackrel{(N3)}{=} (ax)b \stackrel{[a \in R_x ]}{=}
(xa)b = x(ab)$. The last equality holds since $(S,x)$ is strong.
Therefore $ab \in R_x$.
\end{proof}
\begin{prop}\label{sufficientpolynomial}
Every generalized polynomial ring $S = R[X ; \sigma , \delta]$ over $R$
(with $\sigma$ a fixed point homomorphism and
$\delta$ a $\sigma$-kernel derivation)
is a (strong) non-associa\-tive Ore extension of $R$ with $x = X$.
\end{prop}
\begin{proof}
We first show the ``non-strong'' statement.
From Definition \ref{definitionpolynomials}, we know that
$S$ is free as a left $R$-module with $B$ as a basis.
Therefore (N1) holds.
Also $xR = 1X \cdot RX^0 = \delta(R) + \sigma(R)X =
\delta(R) + \sigma(R)x \subseteq R + Rx$.
Therefore (N2) holds.
Now we show (N3).
Suppose that $a,b \in R$ and $m,n \in \mathbb{N}$.
Then we get that
$(aX^m \cdot bX^n) \cdot X =
\sum_{i \in \mathbb{N}} a \pi_i^m(b) X^{i+n} \cdot X =
\sum_{i \in \mathbb{N}} a \pi_i^m(b) X^{i+n+1} =a
X^m \cdot (bX^{n+1}) = aX^m \cdot (bX^n \cdot X).$
Next we get that
$(aX^m \cdot X) \cdot bX^n =
aX^{m+1} \cdot bX^n =
\sum_{i \in \mathbb{N}} a \pi_i^{m+1}(b) X^{i+n} =
\sum_{i \in \mathbb{N}} a \pi_i^m(\delta(b)) X^{i+n} +
\sum_{i \in \mathbb{N}} a \pi_{i-1}^m(\sigma(b)) X^{i+n} =
aX^m \cdot ( \delta(b) X^n + \sigma(b)X^{n+1} ) =
aX^m \cdot (X \cdot bX^n).$
Now we show the ``strong'' statement.
Note that $R_X = R_{\delta}^{\sigma}$.
Suppose first that both $\sigma$ and $\delta$
are right $R_{\delta}^{\sigma}$-linear.
We show (N4).
To this end, take $a \in R$ and $b \in R_X$. Then
$ (X \cdot a) \cdot b =
(\delta(a) + \sigma(a)X) \cdot b
\stackrel{(N3)}{=}
\delta(a) b + \sigma(a) (X b) = [b \in R_{\delta}^{\sigma} ]=
\delta(a)b + \sigma(a) (bX)
\stackrel{(N3)}{=}
\delta(a)b + (\sigma(a) b)X.$
Since $\sigma$ and $\delta$
are right $R_{\delta}^{\sigma}$-linear, we get that
$ (X \cdot a) \cdot b = \delta(ab) + \sigma(ab)X = X \cdot (ab)$.
Suppose now that both $\sigma$ and $\delta$
are left $R_{\delta}^{\sigma}$-linear.
We show (N5).
To this end, take $a \in R_X$ and $b \in R$. Then
$( X \cdot a ) \cdot b = [a \in R_{\delta}^{\sigma}] = (a \cdot X) \cdot b
\stackrel{(N3)}{=}
a \cdot (X b) = a \cdot (\delta(b) + \sigma(b)X) =
a \delta(b) + a \sigma(b)X.$
Since $\sigma$ and $\delta$
are left $R_{\delta}^{\sigma}$-linear, we get that
$( X \cdot a) \cdot b = \delta(ab) + \sigma(ab)X = X \cdot (ab).$
\end{proof}
\begin{prop}\label{necessarypolynomial}
Every non-associative Ore extension of $R$ is isomorphic
to a generalized polynomial ring $R[X ; \sigma, \delta]$.
If the non-associative Ore extension is strong, then $\sigma$ is a fixed point homomorphism and $\delta$ is a $\sigma$-kernel derivation.
\end{prop}
\begin{proof}
We first show the ``non-strong'' statement.
Suppose that $S$ is a non-associative Ore extension of $R$
defined by the element $x \in S$. Take $a,b \in R$.
By (N1) and (N2), we get that $xa = \delta(a) + \sigma(a)x$,
for some unique $\delta(a),\sigma(a) \in R$.
Hence this defines functions $\sigma : R \rightarrow R$ and
$\delta : R \rightarrow R$.
By distributivity of $S$, we get the relation
$x(a+b) = xa + xb$ which implies that
$\sigma(a+b) = \sigma(a) + \sigma(b)$ and
$\delta(a + b) = \delta(a) + \delta(b)$.
From the relation $x1 = x$ we get that $\sigma(1)=1$ and $\delta(1)=0$.
Define $f : S \rightarrow R[X ; \sigma, \delta]$ by the additive extension
of the relations $f( a x^m ) = a X^m$, for $a \in R$ and $m \in \mathbb{N}$.
Then clearly $f$ is an isomorphism of additive groups.
What is left to show is that $f$ respects multiplication.
Take $a,b \in R$ and $m,n \in \mathbb{N}$.
We claim that $(a x^m)(b x^n) = \sum_{i \in \mathbb{N}} a \pi_i^m(b) x^{i+n}$.
If we assume that the claim holds, then
$f( (a x^m)(b x^n) ) =
f( \sum_{i \in \mathbb{N}} a \pi_i^m(b) x^{i+n} ) =
\sum_{i \in \mathbb{N}} a \pi_i^m(b) X^{i+n} =
(a X^m) \cdot (b X^n) = f( ax^m ) \cdot f(b x^n)$.
Now we prove the claim by induction over $m$.
First we show the base case $m=0$.
By (N3) we get that $x \in N_r(S)$.
Therefore $x^n \in N_r(S)$ and hence we get that
$(a x^0)(b x^n) = a (b x^n) = (a b)x^n = a \pi_0^0(b) x^n =
\sum_{i \in \mathbb{N}} a \pi_i^0(b) x^{i+n}$.
Next we show the induction step.
Suppose that the claim holds for some $m \in \mathbb{N}$.
By (N3), we get that $x \in N_m(S) \cap N_r(S)$.
Therefore all powers of $x$ also belong to $N_m(S) \cap N_r(S)$
and hence we get that
$(a x^{m+1}) (b x^n) = (a( x^m x)) (b x^n) =
((a x^m)x) (b x^n) = (a x^m) ( x (b x^n) )
= (a x^m) ( (xb) x^n ) = (a x^m) ( ( \delta(b) + \sigma(b) x ) x^n ) =
(a x^m) ( \delta(b) x^n + \sigma(b) x^{n+1} )
= (a x^m)( \delta(b) x^n ) + (a x^m)( \sigma(b) x^{n+1} ).$
By the induction hypothesis the last expression equals
$\sum_{i \in \mathbb{N}} a \pi_i^m( \delta(b) ) x^{i+n} +
\sum_{i \in \mathbb{N}} a \pi_i^m( \sigma(b) ) x^{i+n+1} =
\sum_{i \in \mathbb{N}} a \pi_i^m( \delta(b) ) x^{i+n} +
\sum_{i \in \mathbb{N}} a \pi_{i-1}^m( \sigma(b) ) x^{i+n} =
\sum_{i \in \mathbb{N}} a [ \pi_i^m( \delta(b) ) + \pi_{i-1}^m( \sigma(b) ) ] x^{i+n} =
\sum_{i \in \mathbb{N}} a \pi_i^{m+1}( b ) x^{i+n}. $
This proves the induction step.
Now we show the ``strong'' statement.
To this end, take $a \in R_x$ and $b \in R$.
Suppose first that (N5) holds.
Then $x(ab) = (xa)b$. Thus, since $a \in R_x$, we get that
$\delta(ab) + \sigma(ab)x = (ax)b
\stackrel{(N3)}{=}
a(xb) =
a (\delta(b) + \sigma(b)x) = a\delta(b) + a(\sigma(b)x)
\stackrel{(N3)}{=}
a \delta(b) + (a \sigma(b))x$.
Hence by (N1), we get that $\delta(ab)=a\delta(b)$ and $\sigma(ab)=a\sigma(b)$.
Suppose now that (N4) holds.
Then $x(ba) = (xb)a$. Thus
$\delta(ba) + \sigma(ba)x = (\delta(b) + \sigma(b)x)a =
\delta(b)a + (\sigma(b)x)a
\stackrel{(N3)}{=}
\delta(b)a + \sigma(b)(xa) =
[a \in R_x] = \delta(b)a + \sigma(b)(ax)
\stackrel{(N3)}{=}
\delta(b)a + (\sigma(b)a)x$.
Hence, by (N1), we get that $\delta(ba)=\delta(b)a$ and $\sigma(ba) = \sigma(b)a$.
Thus, in either case, $\sigma$ is a fixed point homomorphism of $R$
and $\delta$ is a $\sigma$-kernel derivation of $R$.
\end{proof}
For use in later sections, we now note
that the axioms (N4) and (N5) of Definition \ref{defstrong}
can be replaced by
seemingly
stronger statements.
\begin{prop}\label{generalaxioms}
Let $(S,x)$ be a non-associative Ore extension of $R$.
\begin{itemize}
\item[(a)] The axiom {\rm (N4)} holds if and only if
$(\mathbb{Z}[x] , S , R_x[x]) = \{ 0 \}$ holds.
\item[(b)] The axiom {\rm (N5)} holds if and only if
$(\mathbb{Z}[x] , R_x[x] , S) = \{ 0 \}$ holds.
\end{itemize}
\end{prop}
\begin{proof}
Since the ``if'' statements are trivial,
we only show the ``only if'' statements.
To this end, take $a \in R_x$, $b \in R$
and $m,n,p \in \mathbb{N}$.
(a) We need to show that $(x^n , b x^m , ax^p) = 0$.
Since $x \in N_m(S) \cap N_r(S)$ and $a \in R_x$
it is enough to show this relation for $m=p=0$.
Since (N4) holds, we get, from the proof of Proposition \ref{necessarypolynomial}, that
$(x^n b)a = \sum_{i \in \mathbb{N}} \pi_i^n(b) x^i a =
\sum_{i \in \mathbb{N}} \pi_i^n(b) a x^i =
\sum_{i \in \mathbb{N}} \pi_i^n(ba) x^i = x^n(ba).$
(b) We need to show that $(x^n , a x^p , b x^m) = 0$.
Since $x \in N_m(S) \cap N_r(S)$ and $a \in R_x$
it is enough to show this relation for $m=p=0$.
Since (N5) holds, we get, from the proof of Proposition \ref{necessarypolynomial}, that
$(x^n a)b = (a x^n)b = a (x^n b) =
\sum_{i \in \mathbb{N}} a \pi_i^n(b) x^i =
\sum_{i \in \mathbb{N}} \pi_i^n(ab) x^i = x^n(ab).$
\end{proof}
\begin{prop}\label{newproof}
If $S = R[X ; \sigma , \delta]$ is a generalized polynomial ring, then
\begin{itemize}
\item[(a)] $R \subseteq N_l(S)$ if and only if $R$ is associative;
\item[(b)] $X \in N_l(S)$ if and only if
$\sigma$ is a ring endomorphism and
$\delta$ is a $\sigma$-derivation;
\item[(c)] $S$ is associative if and only if
$R$ is associative, $\sigma$ is a ring endomorphism and
$\delta$ is a $\sigma$-derivation.
\end{itemize}
\end{prop}
\begin{proof}
(a) The ``only if'' statement is clear.
Now we show the ``if'' statement.
Suppose that $R$ is associative.
Take $a,b,c \in R$ and $m,n \in \mathbb{N}$.
We wish to show that
\begin{equation}\label{associativeabc}
(a , bX^m , cX^n)=0.
\end{equation}
Since $X \in N_r(S)$, we get that
$(a , bX^m , cX^n) = (a , bX^m , c)X^n$.
Thus it is enough to prove \eqref{associativeabc} for $n=0$.
Since $X \in N_m(S) \cap N_r(S)$ we get that
$(a , bX^m , c) = (a , b , X^m c) =
\sum_{i \in \mathbb{N}} (a , b , \pi_i^m(c) X^i) =
\sum_{i \in \mathbb{N}} (a , b , \pi_i^m(c) ) X^i = 0$,
using that $R$ is associative.
(b) First we show the ``only if'' statement.
Suppose that $X \in N_l(S)$. Take $a,b \in R$.
From the equality $X(ab) = (Xa)b$ we get that
$\delta(ab) + \sigma(ab)X =
(\delta(a) + \sigma(a)X)b
\stackrel{(N3)}{=}
\delta(a)b + \sigma(a) (Xb) =
\delta(a)b + \sigma(a)( \delta(b) + \sigma(b)X )
\stackrel{(N3)}{=}
\delta(a)b + \sigma(a)\delta(b) + (\sigma(a)\sigma(b))X$.
Hence, by (N1), we get that
$\sigma$ is a homomorphism and $\delta$ is a $\sigma$-derivation.
Now we show the ``if'' statement.
Suppose that $\sigma$ is a homomorphism and
that
$\delta$ is a $\sigma$-derivation.
From the calculation in the proof of the ``only if''
statement it follows that $(X,R,R) = \{ 0 \}$.
From the same type of reasoning that we used in the proof
of the ``if'' statement in (a), we therefore get that
$(X,S,S) \subseteq \sum_{i \in \mathbb{N}} (X,R,R)X^i = \{0\}$.
(c) The ``only if'' statement follows directly from (a) and (b).
Now we show the ``if'' statement.
Suppose that $R$ is associative, $\sigma$ is a ring endomorphism and
that
$\delta$ is a $\sigma$-derivation.
Take $a \in R$ and $m \in \mathbb{N}$.
From (a) and (b) we get that $a,X \in N_l(S)$.
Since $N_l(S)$ is multiplicatively closed we get that
$aX^m \in N_l(S)$. Since $N_l(S)$ is closed under addition,
we get that $S \subseteq N_l(S)$
and thus $S$ is associative.
\end{proof}
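The situation of Proposition \ref{newproof}(b) can be illustrated by the $q$-difference operator: on $R = \mathbb{Q}[Y]$, let $\sigma(f)(Y) = f(qY)$ and let $\delta$ be the $q$-derivative $\delta(f)(Y) = (f(qY)-f(Y))/(qY-Y)$, a $\sigma$-derivation with the convention $\delta(ab) = \delta(a)b + \sigma(a)\delta(b)$ used in the proof above. The sketch below is our own numerical check of this $\sigma$-Leibniz rule (the choice $q = 2$ and all helper names are ours):

```python
import random
from fractions import Fraction

random.seed(1)
q = Fraction(2)  # any q other than 0 and 1 works here

def p_add(p, r):
    out = dict(p)
    for k, c in r.items():
        out[k] = out.get(k, 0) + c
    return {k: c for k, c in out.items() if c != 0}

def p_mul(p, r):
    out = {}
    for i, a in p.items():
        for j, b in r.items():
            out[i + j] = out.get(i + j, 0) + a * b
    return {k: c for k, c in out.items() if c != 0}

def sigma(p):
    """f(Y) -> f(qY): a ring endomorphism of Q[Y] with sigma(1) = 1."""
    return {k: c * q**k for k, c in p.items()}

def delta(p):
    """The q-derivative: delta(Y^k) = [k]_q Y^(k-1), [k]_q = (q^k-1)/(q-1)."""
    return {k - 1: c * (q**k - 1) / (q - 1)
            for k, c in p.items() if k >= 1 and c != 0}

def rand_poly():
    return {k: c for k in range(4)
            if (c := Fraction(random.randint(-4, 4))) != 0}

for _ in range(50):
    a, b = rand_poly(), rand_poly()
    lhs = delta(p_mul(a, b))
    rhs = p_add(p_mul(delta(a), b), p_mul(sigma(a), delta(b)))
    assert lhs == rhs
print("delta(ab) = delta(a)b + sigma(a)delta(b) verified")
```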
\begin{prop}\label{rightbasis}
If $S = R[X ; \sigma , \delta]$ is a generalized polynomial ring
with $\sigma$ bijective, then $B = \{ X^n \}_{n \in \mathbb{N}}$
is a basis for $S$ as a right $R$-module.
\end{prop}
\begin{proof}
First we show that $B$ is a right $R$-linearly independent set.
We will show that for each $n \in \mathbb{N}$, the set
$B_n := \{ X^i \}_{i=0}^n$ is right $R$-linearly independent.
We will prove this by induction over $n$.
Base case: $n=0$. It is clear that $\{ 1 \}$ is right
$R$-linearly independent.
Induction step: suppose that $B_n$ is right $R$-linearly independent
for some $n \in \mathbb{N}$.
Suppose that $a_i \in R$, for $i \in \{1 ,\ldots , n+1\}$,
are chosen so that $\sum_{i=0}^{n+1} X^i a_i = 0$.
Then $0 = \sigma^{n+1}(a_{n+1}) X^{n+1} + \mbox{[lower terms]}$.
Since $B_{n+1}$ is left $R$-linearly independent,
we get that $\sigma^{n+1}(a_{n+1}) = 0$.
Since $\sigma$ is injective, we get that $a_{n+1}=0$.
Thus $\sum_{i=0}^{n} X^i a_i = 0$.
By the induction hypothesis, we get that $a_i = 0$,
for $i \in \{0,\ldots,n\}$.
Next we show that $B$ spans $S$ as a right $R$-module.
For each $n \in \mathbb{N}$, let $S_n$ (or $T_n$) denote
the left (or right) $R$-span of $B_n$.
We will show that for each $n \in \mathbb{N}$, the
relation $S_n = T_n$ holds.
We will prove this by induction over $n$.
Base case: $n=0$. It is clear that $S_0 = R = T_0$.
Induction step: suppose that $S_n = T_n$ for some
$n \in \mathbb{N}$.
Take $a = \sum_{i=0}^{n+1} a_i X^i \in S_{n+1}$.
Since $\sigma$ is surjective, we can pick $r \in R$
such that $\sigma^{n+1}(r) = a_{n+1}$.
This implies that $a - X^{n+1} r \in S_n$.
By the induction hypothesis this implies that
$a - X^{n+1} r \in T_n$. Thus $a \in T_n + X^{n+1}r \subseteq T_{n+1}$.
Thus $S_{n+1} \subseteq T_{n+1}$.
Since the inclusion $S_{n+1} \supseteq T_{n+1}$ trivially holds,
the induction step is complete.
\end{proof}
Explicit formulas for how elements of generalized polynomial
rings can be expressed as right $R$-linear combinations of
elements from $B$
can be worked out exactly as in the classical case
(see e.g. the formulas right after Theorem 7 in
Ore's classical article \cite{ore1933}).
In this article, we only need the following special case of these relations.
\begin{prop}\label{rightformula}
Suppose that $S = R[X ; \id_R , \delta]$ is a non-associative
differential polynomial ring.
If $r \in R$ and $n \in \mathbb{N}$, then
$r X^n = \sum_{i=0}^n (-1)^i {n \choose i} X^{n-i} \delta^i(r)$.
\end{prop}
\begin{proof}
We will show this by induction over $n$.
Base case: $n=0$. This is clear since $r X^0 = r = X^0 r$.
Induction step: suppose that
$r X^n = \sum_{i=0}^n (-1)^i {n \choose i} X^{n-i} \delta^i(r)$
for some $n \in \mathbb{N}$.
Then, since $X \in N_m(S) \cap N_r(S)$, we get that
$r X^{n+1} = r X^n X =
\sum_{i=0}^n (-1)^i {n \choose i} X^{n-i} \delta^i(r) X =
\sum_{i=0}^n (-1)^i {n \choose i} X^{n-i} ( X \delta^i(r) - \delta^{i+1}(r) ) =
\sum_{i=0}^n \left[ (-1)^i {n \choose i} X^{n+1-i} \delta^i(r) +
(-1)^{i+1} {n \choose i} X^{n-i} \delta^{i+1}(r) \right] =
[ {n+1 \choose i} = {n \choose i} + {n \choose i-1} ]=
\sum_{i=0}^{n+1} (-1)^i {n+1 \choose i} X^{n+1-i} \delta^i(r).$
\end{proof}
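Proposition \ref{rightformula} can also be checked numerically in the first Weyl algebra, where elements of $R[X ; \id_R , d/dY]$, $R = \mathbb{Q}[Y]$, act as differential operators on $R$ ($X$ acting as $d/dY$ and $r \in R$ as multiplication by $r$). The following test harness is our own illustration, comparing both sides of the formula as operators on random polynomials:

```python
import random
from math import comb

random.seed(2)

def p_add(p, q):
    out = dict(p)
    for k, c in q.items():
        out[k] = out.get(k, 0) + c
    return {k: c for k, c in out.items() if c != 0}

def p_mul(p, q):
    out = {}
    for i, a in p.items():
        for j, b in q.items():
            out[i + j] = out.get(i + j, 0) + a * b
    return {k: c for k, c in out.items() if c != 0}

def delta(p):
    """d/dY on Q[Y], with polynomials encoded as {exponent: coefficient}."""
    return {k - 1: k * c for k, c in p.items() if k >= 1 and c != 0}

def lhs(r, n, p):
    """(r X^n) applied to p, i.e. r * (d/dY)^n(p)."""
    for _ in range(n):
        p = delta(p)
    return p_mul(r, p)

def rhs(r, n, p):
    """(sum_i (-1)^i C(n,i) X^{n-i} delta^i(r)) applied to p."""
    out = {}
    for i in range(n + 1):
        dr = r
        for _ in range(i):
            dr = delta(dr)
        term = p_mul(dr, p)
        for _ in range(n - i):
            term = delta(term)
        sign = (-1) ** i * comb(n, i)
        out = p_add(out, {k: sign * c for k, c in term.items()})
    return out

def rand_poly():
    return {k: c for k in range(5) if (c := random.randint(-4, 4)) != 0}

for n in range(5):
    for _ in range(20):
        r, p = rand_poly(), rand_poly()
        assert lhs(r, n, p) == rhs(r, n, p)
print("right-coefficient formula verified for n <= 4")
```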
\section{Ideal Structure}\label{simplicity}
The aim of this section is to prove Theorem \ref{maintheorem}.
To this end, we first show
a series of results concerning
simplicity and the center.
Throughout this section, $R$ denotes a non-associative ring
and $\sigma$ and $\delta$ are additive maps $R \rightarrow R$
satisfying $\sigma(1)=1$ and $\delta(1)=0$.
Furthermore, we let $S = R[X ; \sigma , \delta]$ denote a
non-associative Ore extension of $R$.
\begin{defn}\label{defsigmadeltasimple}
An ideal $I$ of $R$ is said to be
\emph{$\sigma$-$\delta$-invariant}
if $\sigma(I) \subseteq I$ and $\delta(I) \subseteq I$.
If $\{ 0 \}$ and $R$ are the only $\sigma$-$\delta$-invariant ideals
of $R$, then
$R$ is said to be \emph{$\sigma$-$\delta$-simple}.
\end{defn}
\begin{prop}\label{sigmadeltasimple}
If $S$ is simple, then $R$ is $\sigma$-$\delta$-simple.
\end{prop}
\begin{proof}
Take a non-zero $\sigma$-$\delta$-invariant ideal $J$ of $R$.
We wish to show that $J = R$.
Let $I = \oplus_{i \in \mathbb{N}} J X^i$.
Since $J$ is a right ideal of $R$ it follows that $I$ is a right ideal of $S$.
Using that $J$ is $\sigma$-$\delta$-invariant it follows that $I$ is a left ideal of $S$.
Since $J$ is non-zero it follows that $I$ is non-zero.
By simplicity of $S$, we get that $I=S$ and thus $J = R$.
\end{proof}
\begin{prop}\label{R-sigmadelta-simple-Z-field}
Suppose that $R$ is $\sigma$-$\delta$-simple.
If $\sigma$ is a fixed point homomorphism and $\delta$
is a $\sigma$-kernel derivation, then $Z(R)_{\delta}^{\sigma}$ is a field.
\end{prop}
\begin{proof}
Put $T = Z(R)_{\delta}^{\sigma}$.
We already know that $Z(R)$ is an associative commutative unital ring.
Suppose that $\sigma$ and $\delta$ are right $R_{\delta}^{\sigma}$-linear.
Take $a,b \in T$.
We have
$\sigma(ab) = \sigma(a)b = ab$ and
$\delta(ab) = \delta(a)b = 0b = 0$.
Thus $ab \in T$.
Since it is clear that $1 \in T$ and that
$T$ is additively closed, it follows that
$T$ is an associative commutative unital ring.
What remains to show is that every non-zero element of $T$
has a multiplicative inverse.
To this end, take a non-zero $a \in T$.
Then $Ra$ is a non-zero ideal of $R$ with
$\sigma(Ra) = \sigma(R)a \subseteq Ra$ and
$\delta(Ra) = \delta(R)a \subseteq Ra$.
Hence $Ra$ is $\sigma$-$\delta$-invariant.
By $\sigma$-$\delta$-simplicity of $R$,
we get that $Ra = R$. Thus, there is $b \in R$
such that $ab = 1$.
By Proposition \ref{centerInvClosed}, we get that $b \in Z(R)$.
Now we show that $b \in R_{\delta}^{\sigma}$.
Indeed,
$\sigma(b)=\sigma(b)1=\sigma(b)ab=\sigma(ba)b=\sigma(1)b=1b=b$
and
$\delta(b) = \delta(b)1 = \delta(b)ab = \delta(ba)b =
\delta(1)b = 0b = 0$.
This shows that $b\in T$.
The left $R_{\delta}^{\sigma}$-linear case is treated analogously.
\end{proof}
\begin{prop}\label{commutativecondition}
If $a \in R_{\delta}^{\sigma}[X]$ commutes with
every element of $R$, then $a \in C(S)$.
\end{prop}
\begin{proof}
First we show, by induction, that
$[a,X^n]=0$ for every $n \in \mathbb{N}$.
The base case $n=0$ follows immediately
since $[a,X^0] = [a , 1] = 0$.
Now we show the induction step.
Suppose that $[a,X^n]=0$ for some $n \in \mathbb{N}$. Then
$[a,X^{n+1}] =
a X^{n+1} - X^{n+1} a =
a ( X X^n) - (X X^n) a =
(a X) X^n - X (X^n a) =
(a X) X^n - X (a X^n) =
(a X) X^n - (X a) X^n =
[a,X]X^n = 0$,
since it follows from $a \in R_{\delta}^{\sigma}[X]$ that $[a,X]=0$.
Now take $b \in R$ and $n \in \mathbb{N}$. Then
$[a,bX^n] =
a(bX^n) - (bX^n)a =
(ab)X^n - b(X^n a) =
(ab)X^n - b(a X^n) =
(ab)X^n - (ba)X^n =
[a,b]X^n = 0$.
\end{proof}
\begin{prop}\label{associativecondition}
Suppose that $S$ is a strong non-associative Ore extension of $R$.
If $a \in R_{\delta}^{\sigma}[X]$
commutes with every element of $R$, and
associates with all elements of $R$, then $a \in Z(S)$.
\end{prop}
\begin{proof}
By Proposition~\ref{commutativecondition} we conclude that $a\in C(S)$.
Since $Z(S) = C(S) \cap N(S)$,
we need to show that $a \in N(S)$.
First we show that $a \in N_l(S)$.
Take $n,p \in \mathbb{N}$.
Since $(a,R,R) = \{ 0 \}$ and $X \in N_m(S) \cap N_r(S)$, we get that
$(a, RX^n , RX^p) = (a , RX^n , R)X^p = (a , R , X^n R)X^p \subseteq
\sum_{i \in \mathbb{N}} (a , R , \pi_i^n(R) X^i) X^p \subseteq
\sum_{i = 0}^n (a , R , R) X^{i+p} = \{ 0 \}.$
By Proposition \ref{intersection}, we are done
if we can show that $a \in N_m(S)$ or $a \in N_r(S)$.
Case 1: (N4) holds.
We show that $a \in N_r(S)$.
We wish to show that, for all $b,c \in R$,
\begin{equation}\label{rightzero}
(bX^n , cX^p , a) = 0.
\end{equation}
Since $X \in N_m(S) \cap N_r(S)$ and $a \in C(S)$, we get that
$( (bX^n) (cX^p) ) a = ( ((bX^n) c) X^p) a = ( (bX^n)c ) (X^p a) =
( (bX^n ) c ) (a X^p) = (((bX^n)c) a)X^p = (( b (X^n c) ) a) X^p$
and, by Proposition \ref{generalaxioms}(a), we get that
$bX^n ( (cX^p) a ) = bX^n ( c (X^p a) ) = bX^n ( c ( a X^p ) ) =
bX^n ( (ca) X^p ) = (bX^n (ca) ) X^p
= ( b ( X^n (ca))) X^p = ( b ( (X^n c) a ) ) X^p.$
This shows \eqref{rightzero}.
Case 2: (N5) holds.
We show that $a \in N_m(S)$.
We wish to show that, for all $b,c \in R$,
\begin{equation}\label{middlezero}
(bX^n , a , cX^p) = 0.
\end{equation}
Since $X \in N_r(S)$, we only need to show \eqref{middlezero} for $p=0$.
Since $X \in N_m(S) \cap N_r(S)$, $a \in C(S)$
and $a$ associates with all elements of $R$, we get that
$((bX^n)a)c = (b (X^n a) )c = (b (a X^n))c =
( (ba)X^n ) c = (ba)(X^n c)
= \sum_{i \in \mathbb{N}} (ba) \pi_i^n(c) X^i =
\sum_{i \in \mathbb{N}} b (a \pi_i^n(c) ) X^i.$
On the other hand, since $a \in C(S)$, $X \in N_m(S) \cap N_r(S)$
and Proposition \ref{generalaxioms}(b) holds, we get that
$bX^n (a c) = b (X^n (ac)) = b ( (X^n a) c) =
b ( (a X^n) c) = b ( a (X^n c) ) =
\sum_{i \in \mathbb{N}} b ( a \pi_i^n(c) ) X^i.$
This shows \eqref{middlezero}.
\end{proof}
\begin{cor}\label{corcenter}
If $\delta$ is a kernel derivation on $R$ and we put $D = R[X ; \id_R , \delta]$, then
$Z(D)$ is the set of all $a \in D$ such that
(i) $a$ commutes with $X$, and
(ii) $a$ commutes with all elements of $R$, and
(iii) $a$ associates with all elements of $R$.
\end{cor}
\begin{prop}\label{regular}
Let $\sigma$ be injective and suppose that $a,b \in S=R[X;\sigma, \delta]$ are elements such that $ab=ba=1$. If the leading coefficient of $a$ is a regular element of $R$, then $a,b \in R$.
\end{prop}
\begin{proof}
Suppose that $a = \sum_{i=0}^n a_i X^i$ and $b = \sum_{i=0}^m b_i X^i$, where $a_n \neq 0$ and $b_m \neq 0$.
Comparing coefficients of $X^{n+m}$ in the relation
$ab = 1$ we get that $a_n \sigma^n(b_m) = 0$
if $m + n > 0$.
Since $a_n$ is regular, we therefore get that
$\sigma^n(b_m)=0$ whenever $m+n > 0$.
By injectivity of $\sigma$, we get $b_m=0$ if $m > 0$.
Comparing coefficients of degree $n$ in the relation $ba = b_0 a = 1$
we get that $b_0 a_n = 0$ if $n > 0$. Since $b_0 = b \neq 0$
and $a_n$ is regular, we get that $n=0$.
Hence $m=n=0$ and $a,b \in R$.
\end{proof}
\begin{prop}\label{sumdegrees}
If $a,b \in S$, then
$\deg(ab) \leq \deg(a) + \deg(b)$.
Moreover, if $b$ is monic or $a$ is monic and $\sigma$ is injective,
then equality holds.
\end{prop}
\begin{proof}
Suppose that $\deg(a)=m$ and $\deg(b)=n$.
Let $a_m$ and $b_n$ denote the leading coefficients
of $a$ and $b$ respectively.
Then $ab = a_m \sigma^m(b_n) X^{m+n} + [\mbox{lower terms}]$.
So $\deg(ab) \leq m+n = \deg(a) + \deg(b)$.
Equality holds if and only if $a_m \sigma^m(b_n) \neq 0$.
This holds in particular if $b_n=1$ or if $a_m=1$ and $\sigma$ is injective.
\end{proof}
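As an illustration (not part of the main development), we record a small example showing that the inequality in the preceding proposition can be strict.
\begin{rem}
The inequality in Proposition \ref{sumdegrees} can be strict when $R$ has zero divisors. For instance, if $R = \mathbb{Z}/4\mathbb{Z}$, $\sigma = \id_R$ and $\delta = 0$, then for $a = b = 2X$ we get $ab = 4X^2 = 0$, so that $\deg(ab) = \deg(0) < 2 = \deg(a) + \deg(b)$.
\end{rem}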
Next we show that, in some cases, there is a Euclidean division algorithm for $S$.
\begin{prop}\label{euclideanalgorithm}
If $a,b \in S$ where $b$ is monic, then $a = qb + r$
for suitable $q,r \in S$ such that either $r=0$ or $\deg(r) < \deg(b)$.
\end{prop}
\begin{proof}
We follow closely the proof in \cite[p. 94]{rowen1988} for the associative case.
Without loss of generality, we may assume that $a\neq 0$.
Suppose that $\deg(a)=m$ and $\deg(b)=n$.
Let $a_m$ denote the leading coefficient of $a$.
Case 1: $m < n$. Then we can put $q=0$ and $r=a$.
Case 2: $m \geq n$.
Put $c = a - (a_m X^{m-n}) b$. Then $\deg(c) < \deg(a)$.
By induction there are $q',r' \in S$
with $c = q'b + r'$ and $r' = 0$ or $\deg(r') < n$.
This implies that
$a =
(a_m X^{m-n}) b + c =
(a_m X^{m-n}) b + q'b + r' =
(a_m X^{m-n} + q')b + r'.$
So we can put $q = a_m X^{m-n} + q'$ and $r=r'$.
\end{proof}
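For illustration, we carry out one division step of the algorithm in a concrete differential polynomial ring.
\begin{rem}
As an illustration of Proposition \ref{euclideanalgorithm}, consider the differential polynomial ring $S = \mathbb{C}[Y][X ; \id_{\mathbb{C}[Y]} , \delta]$, where $\delta$ is the standard derivation on $\mathbb{C}[Y]$, and divide $a = YX^2$ by the monic $b = X^2 + Y$. In the notation of the proof, $c = a - (Y X^{0}) b = YX^2 - Y(X^2 + Y) = -Y^2$, which already has degree less than $\deg(b)$. Hence $a = Yb - Y^2$, i.e. $q = Y$ and $r = -Y^2$.
\end{rem}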
\subsection*{Proof of Theorem \ref{maintheorem}}
\subsubsection*{Proof of {\rm (a)}}
Let $I$ be an ideal of $D$.
Suppose that $m$ is the minimal degree of non-zero
elements of $I$.
Put
$J = \{ r \in R \mid \exists r_0,r_1, \ldots, r_{m-1} \in R :
rX^m + r_{m-1} X^{m-1} + \ldots + r_0 \in I \}.$
It is clear that $J$ is an ideal of $R$.
From the fact that $XI - IX \subseteq I$ it follows that
$J$ is $\delta$-invariant.
Since $R$ is $\delta$-simple and $J$ is non-zero,
we can conclude that $J = R$.
In particular, $1 \in J$.
Therefore there is a monic $a \in I$ of degree $m$.
Now we show that $a \in Z(D)$.
To this end, we check (i), (ii) and (iii) of Corollary \ref{corcenter}.
Since $a \in D_{\delta}$, (i) holds.
Now we check (ii). Take $r \in R$.
Since $a$ is monic the leading coefficient
of $[a,r]$ is $[1,r] = 0$.
Thus $\deg([a,r]) < m$ which, since $[a,r] \in I$,
implies that $[a,r]=0$, by minimality of $m$.
Now we check (iii).
Take $r,s \in R$.
Since $a$ is monic and the leading coefficients of all the polynomials
$(a,r,s)$, $(r,a,s)$ and $(r,s,a)$ equal zero, all
of them have degree less than $m$.
By minimality of $m$ and the fact that all of these polynomials
belong to $I$, we get that they are zero.
Thus (iii) holds.
Next we show that $I = Da$.
The inclusion $I \supseteq Da$ is clear.
Now we show the reversed inclusion.
Take a non-zero $c \in I$.
Since $\deg(c) \geq \deg(a)$, we can use
Proposition \ref{euclideanalgorithm} to conclude that
$c = qa + r$, for some $q,r \in D$ with $r = 0$ or $\deg(r) < \deg(a)$.
But then $r = c - qa \in I$, which, by minimality of $m$,
implies that $r = 0$. Therefore $c = qa \in Da$.
Hence $I \subseteq Da$.
Finally we show uniqueness of $a$.
Suppose that $d \in D$ is monic and $I = Dd$.
From the relations $a \in Dd$
and $d \in Da$ we get, respectively from Proposition \ref{sumdegrees}, that
$\deg(a) \geq \deg(d)$ and $\deg(d) \geq \deg(a)$,
which together imply that $\deg(a) = \deg(d)$.
Since $a$ and $d$ are monic,
we get that $\deg(a-d) < m$, which, by $a-d \in I$ and
minimality of $m$, implies that $a=d$.
\subsubsection*{Proof of {\rm (b)}}
Case 1: $Z(D)$ only contains polynomials of degree zero.
Then $Z(D) \subseteq Z(R)_{\delta}$. But since $Z(R)_{\delta} \subseteq Z(D)$
we get that $Z(D) = Z(R)_{\delta}$ and we can choose $b=1$.
Case 2: $Z(D)$ contains polynomials of degree greater than zero.
Let $n$ denote the least degree of non-constant polynomials in $Z(D)$.
Take $b \in Z(D)$ such that $\deg(b)=n$.
Now we show that we may choose $b$ to be monic.
Since $I = Db$ is an ideal of $D$, by (a),
we may choose a monic $f \in I \cap Z(D)_{\delta}$
such that $I = Df$. But then $b = cf$ for some $c \in D$.
Since $f$ is monic we get that $n = \deg(b) = \deg(c) + \deg(f)$
which implies that $\deg(f) \leq n$. By minimality of $n$
we get that $\deg(f) = n$ and we may choose $b$ to be the monic $f$.
Now take $g \in Z(D)$ of degree $m$.
We will show by induction over the degree of $g$
that $g \in Z(R)_{\delta}[b]$.
Base case: $m=0$, i.e. $g$ is constant.
Then $g \in R \cap Z(D) = Z(R)_{\delta} \subseteq Z(R)_{\delta}[b]$.
Induction step: suppose that $m > 0$ and that we have shown the claim
for all $m' < m$.
Since $b$ is monic, we can write
$g = hb + k$ for some $h,k \in S$ with $\deg(k) < \deg(b)$.
Note that, since $b$ is monic, we get that $\deg(h) < \deg(g)$.
We claim that $h,k \in Z(D)$.
If we assume that the claim holds, then, by the induction
hypothesis, we are done.
Now we show the claim.
To this end, we will check (i),(ii) and (iii) in Corollary \ref{corcenter}.
First we check (i).
Note that $0 = [X,g] = [X,h]b + [X,k]$.
Seeking a contradiction, suppose that $[X,h] \neq 0$.
Since $b$ is monic and $\deg([X,k]) \leq \deg(k)$,
we get the contradiction
$-\infty = \deg(0) = \deg([X,g])= \deg( [X,h]b + [X,k] ) \geq n$.
Therefore $[X,h]=0$ and hence $[X,k]=0$.
In other words $h,k \in R_{\delta}[X]$.
Now we show (ii). Take $r \in R$.
Note that
$0 = [r,g] = [r,h]b + [r,k]$.
Seeking a contradiction, suppose that $[r,h] \neq 0$.
Since $b$ is monic and $\deg([r,k]) \leq \deg(k)$,
we get the contradiction
$-\infty = \deg(0) = \deg([r,g]) = \deg( [r,h]b + [r,k] ) \geq n$.
Therefore $[r,h]=0$ and hence $[r,k]=0$.
Finally, we show (iii). Take $r,s \in R$.
Let $\alpha(\cdot)$ denote either of the maps
$(\cdot,r,s)$, $(r,\cdot,s)$ or $(r,s,\cdot)$. Then
$0 = \alpha(g) = \alpha(h)b + \alpha(k)$.
Seeking a contradiction, suppose that $\alpha(h) \neq 0$.
Since $b$ is monic and $\deg(\alpha(k)) \leq \deg(k)$,
we get the contradiction
$-\infty = \deg(0) = \deg(\alpha(g)) = \deg( \alpha(h)b + \alpha(k) ) \geq n$.
Therefore $\alpha(h)=0$ and hence $\alpha(k)=0$.
This completes the induction step.
Now we show uniqueness of $b$ up to addition by an element from $Z(R)_{\delta}$.
Case 1: $Z(D)$ only contains polynomials of degree zero.
Then there is only one monic polynomial in $Z(D)$, namely $b=1$.
Case 2: $Z(D)$ contains polynomials of degree greater than zero
i.e. $n > 0$.
Suppose that there is another monic $b' \in R_{\delta}[X]$
such that $Z(D) = Z(R)_{\delta}[b']$.
Then there is a polynomial $p \in Z(R)_{\delta}[X]$
such that $b = p(b')$. Hence $n = \deg(b) = \deg(p(b')) \geq \deg(b')$.
By minimality of $n$, we get that $\deg(b')=n$.
But then $b-b'$ is a polynomial in $Z(D)$
of degree less than $n$, which, by minimality of $n$,
implies that $b - b' \in Z(R)_{\delta}$.
\subsubsection*{Proof of {\rm (c)}}
First we show the ``only if'' statement.
Suppose that $D$ is simple.
By Proposition \ref{sigmadeltasimple}, we get that
$R$ is $\delta$-simple.
By Proposition \ref{centerfield}, we get that $Z(D)$ is a field.
Next we show the ``if'' statement.
Suppose that $R$ is $\delta$-simple and that $Z(D)$ is a field.
Let $I$ be a non-zero ideal of $D$.
By (a) and Proposition \ref{regular}, this implies that
the polynomial in $Z(D)$ corresponding to $I$ is $1$.
This implies that $I = D$.
By (b) and Proposition \ref{regular},
the ring $Z(R)_{\delta}[b]$ is a field precisely when $b=1$.
\subsubsection*{Proofs of {\rm (d)} and {\rm (e)}}
By Proposition \ref{rightbasis}, we can write $b = \sum_{i=0}^n b_i X^i$,
where $b_i \in R$, for $i\in \{0,\ldots,n\}$, with $b_n = 1$.
Since $b \in Z(D)$, we get, in particular, that $Xb = bX$.
This implies that $\delta(b_i)=0$, for $i\in\{0,\ldots,n\}$.
Therefore $b = \sum_{i=0}^n X^i b_i$.
For every $j \in \{ 1,\ldots,n \}$ define the polynomial
$c_j = \sum_{i=j}^n X^{i-j} {i \choose j} b_i$.
We claim that each $c_j \in Z(D)$.
If we assume that the claim holds, then, by minimality of $n$,
we get that $b_j = c_j \in Z(R)_{\delta}$ and that
${i \choose j} b_i = 0$ whenever $1 \leq j < i \leq n$.
In the case when the characteristic of $Z(R)_{\delta}$ is zero,
we therefore get that $b=1$ or $b = b_0 + X$.
The relation $br=rb$ now gives us that
$\delta = \delta_{b_0}$.
Now suppose that the characteristic of $Z(R)_{\delta}$ is a prime $p$.
Fix $i \in \{ 1,\ldots,n \}$ such that $b_i$ is non-zero.
Then ${i \choose j} = 0$ when $1 \leq j < i$.
By Lucas' Theorem (see e.g. \cite{fine1947})
this implies that $i$ must be a power of $p$.
Choose the largest $q \in \mathbb{N}$ such that $p^q \leq n$.
For each $i \in \{0,\ldots,q\}$ put $c_i = b_{p^i}$.
Also put $c = b_0$.
Then $b = c + \sum_{i = 0}^q c_i X^{p^i}$.
The relation $br=rb$ now gives us that
$\delta_c + \sum_{i=0}^q c_i \delta^{p^i} = 0$.
Now we show the claim.
To this end, we will check
conditions (i), (ii) and (iii) of Corollary \ref{corcenter}.
Since $\delta(b_i)=0$ we know that (i) holds.
Now we show (ii).
Take $r \in R$.
First note that since $br = rb$, we can use Proposition \ref{rightformula}
to conclude that
\begin{equation}\label{notethat}
b_v r = \sum_{i=v}^n (-1)^{i-v} {i \choose i-v} \delta^{i-v}(r) b_i
\end{equation}
for each $v \in \{ 0,\ldots,n \}$. Thus,
\begin{align*}
r c_j &= \sum_{i=j}^n r \left( X^{i-j} {i \choose j} b_i \right) \stackrel{[X \in N_m(D)]}{=}
\sum_{i=j}^n \left( r X^{i-j} \right) {i \choose j} b_i \\
&= \sum_{i=j}^n \left( \sum_{k=0}^{i-j} X^{i-j-k} (-1)^k {i-j \choose k}
\delta^k(r) \right) {i \choose j} b_i \\
&\stackrel{[X \in N_l(D)]}{=} \sum_{i=j}^n \sum_{k=0}^{i-j} X^{i-j-k} (-1)^k {i-j \choose k}
\delta^k(r) {i \choose j} b_i \\
&\stackrel{[v = i-k]}{=} \sum_{v=j}^n \sum_{i=v}^n X^{v-j} {i \choose j}{i-j \choose i-v}
(-1)^{i-v} \delta^{i-v}(r) b_i \\
&= \sum_{v=j}^n \sum_{i=v}^n X^{v-j} {v \choose j}{i \choose i-v}
(-1)^{i-v} \delta^{i-v}(r) b_i \stackrel{[{\rm Eq.} \ \eqref{notethat}]}{=}
\sum_{v=j}^n X^{v-j} {v \choose j} b_v r = c_j r.
\end{align*}
Finally, we show (iii).
Take $r,s \in R$.
From the relations $(r,s,b)=0$ and $(b,r,s)=0$
it follows that $(r,s,b_i)=(b_i,r,s)=0$.
Hence we get that
$(r,s,c_j)=(c_j,r,s)=0$.
Thus $c_j \in N_r(R) \cap N_l(R)$.
Since $c_j \in C(R)$, we now automatically get that
$(r,c_j,s) = (r c_j)s - r(c_j s) =
(c_j r) s - r (s c_j) = c_j (rs) - (rs) c_j = 0$.
Hence $c_j \in N_m(R)$.
$\qed$
\begin{rem}
Our proof of Theorem \ref{maintheorem}(d)(e) follows closely
the proof of Amitsur \cite[Theorems 3 and 4]{amitsur1957}
from the associative situation. We also remark that Amitsur's proof
is much simpler in characteristic $p>0$
than the proofs given later by Jordan \cite[Theorem 4.1.6]{jordan1975}
in the $\delta$-simple situation, although, as we show,
Amitsur's original proof can be adapted to this situation.
\end{rem}
\section{Non-associative Weyl Algebras}\label{sectionweyl}
In this section, we show that there are lots of natural examples
of non-associative diffe\-rential polynomial rings.
To this end, we introduce non-associative versions of the first Weyl algebra
(see Definition \ref{definitionweyl}) and we show that they are often
simple regardless of the characteristic (see Theorem \ref{theoremweyl}).
Throughout this section, $T$ denotes a non-associative ring
and $T[Y]$ denotes the polynomial ring over
the indeterminate $Y$. In other words $T[Y] = T[Y ; \id_T , 0]$
as a generalized polynomial ring.
\begin{defn}\label{definitionweyl}
If $\delta : T[Y] \rightarrow T[Y]$
is a $T$-linear map such that $\delta(1)=0$,
then the non-associative differential polynomial ring
$T[Y] [X ; \id_{T[Y]} , \delta]$ is called a
{\it non-associative Weyl algebra}.
\end{defn}
\begin{rem}
A non-associative Weyl algebra is a generalization of the classical (associative) first Weyl algebra, hence the name.
Recall that the first Weyl algebra,
$A_1(\mathbb{C})=\mathbb{C}\langle X,Y\rangle / (XY-YX-1)$
may be regarded as a differential polynomial ring
$\mathbb{C}[Y][X;\id_{\mathbb{C}[Y]},\delta]$,
where $\delta : \mathbb{C}[Y] \to \mathbb{C}[Y]$
is the standard derivation on $\mathbb{C}[Y]$.
\end{rem}
\begin{thm}\label{theoremweyl}
If $T$ is simple and, for each positive $n \in \mathbb{N}$, there is a non-zero $k_n \in Z(T)$
such that $\delta(Y^n) = k_n Y^{n-1}$, then
the non-associative Weyl algebra $T[Y] [X ; \id_{T[Y]} , \delta]$ is simple.
\end{thm}
\begin{proof}
Put $R = T[Y]$ and $S = R[X ; \id_R , \delta]$.
First we show that $R$ is $\delta$-simple.
Let $I$ be a non-zero $\delta$-invariant ideal of $R$.
Take a non-zero $a \in I$. Suppose that the degree of $a$ is $n$.
From the definition of $\delta$ it follows that $\delta^n(a)$
is a non-zero element of $I$ of degree zero.
This means that $I \cap T$ is non-zero.
By simplicity of $T$ it follows that $I \cap T = T$.
In particular $1 \in T = I \cap T \subseteq I$.
Hence $I = R$.
It is clear that $\delta$ is a kernel derivation.
Therefore, by Theorem \ref{maintheorem}(b)(c) we are done if we can show that
every non-zero monic $b \in R_{\delta}[X] \cap Z(S)$ is of degree zero.
It is clear that $R_{\delta} = T$.
Therefore $b \in T[X] \cap Z(S)$.
Seeking a contradiction, suppose that the degree of $b$ is $n > 0$.
Put $b = X^n + c X^{n-1} + [\mbox{lower terms}]$.
From $b \in Z(S)$ it follows that $c \in Z(T)$.
Take $r \in R$. Then
$0 = br - rb = (\delta(r) + cr - r c)X^{n-1} + [\mbox{lower terms}] = [c \in Z(T)] =
\delta(r) X^{n-1} + [\mbox{lower terms}]$.
Thus $\delta(r) = 0$, for all $r \in R$, which is a contradiction
since e.g.
$\delta(Y) = k_1 \neq 0$.
\end{proof}
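As an illustration, we note a family of maps, beyond the classical derivation treated in the corollary below, to which the theorem applies.
\begin{rem}
The hypotheses of Theorem \ref{theoremweyl} are, for instance, satisfied by a Jackson-type $q$-derivative: if $q \in Z(T) \setminus \{0,1\}$ is not a root of unity and $\delta$ is the $T$-linear map determined by $\delta(Y^n) = [n]_q Y^{n-1}$, where $[n]_q = 1 + q + \ldots + q^{n-1}$, then $k_n = [n]_q$ satisfies $k_n(q-1) = q^n - 1 \neq 0$, so each $k_n$ is a non-zero element of $Z(T)$, and hence the corresponding non-associative Weyl algebra is simple.
\end{rem}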
\begin{cor}
If $T$ is simple and $\delta$ is the classical derivative on $T[Y]$, then
the non-associative Weyl algebra $T[Y] [X ; \id_{T[Y]} , \delta]$ is simple
if and only if ${\rm char}(T)=0$.
\end{cor}
\begin{proof}
The ``if'' statement follows immediately from Theorem \ref{theoremweyl}
where $k_n = n$, for $n > 0$.
Now we show the ``only if'' statement.
Suppose that ${\rm char}(T)=p>0$.
Then $Y^p \in Z(T[Y] [X ; \id_{T[Y]} , \delta])$.
In particular, from Proposition \ref{regular}, we get that
$Z(T[Y] [X ; \id_{T[Y]} , \delta])$ is not a field.
By Theorem \ref{maintheorem}(c) we get that
$T[Y] [X ; \id_{T[Y]} , \delta]$ is not simple.
As an alternative proof, it is easy to see that
the proper non-zero ideal of
$T[Y]$ generated by $Y^p$
is $\delta$-invariant.
Thus $T[Y]$ is not $\delta$-simple.
By Theorem \ref{maintheorem}(c), we get that
$T[Y] [X ; \id_{T[Y]} , \delta]$ is not simple.
\end{proof}
\section{Kernel Derivations Defined by Automorphisms}\label{sectiondynamics}
In this section, we show
simplicity results for a differential
polynomial ring version of the quantum plane
(see Theorem \ref{theoremquantumtorus}) and
for differential polynomial rings defined by actions
on compact Hausdorff spaces (see Theorem \ref{theoremdynamics}).
To this end, we introduce a class of $\sigma$-kernel derivations
defined by ring morphisms
(see Definition \ref{definitionkernel}).
Throughout this section, $R$ denotes a non-associative ring.
\begin{prop}\label{definitionkernel}
If $\alpha : R \rightarrow R$ is a ring morphism,
then the map $\delta_{\alpha} : R \rightarrow R$
defined by $\delta_{\alpha}(r) = \alpha(r) - r$, for $r \in R$,
is a left and right $R_{\delta_{\alpha}}^{\identity_R}$-linear $\alpha$-kernel derivation.
Moreover, an ideal $I$ of $R$ is $\delta_{\alpha}$-invariant
if and only if it is $\alpha$-invariant.
\end{prop}
\begin{proof}
It follows immediately that $\delta_{\alpha}(1)=0$
and that $\delta_{\alpha}$ is additive.
Now we will show that $\delta_{\alpha}$ in fact is $R_{\delta_{\alpha}}^{\identity_R}$-linear
both from the left and the right.
In particular, $\delta_{\alpha}$ is an $\alpha$-kernel derivation.
Take $r \in R$ and $s \in \ker(\delta_\alpha)$.
Then $\delta_{\alpha}(rs) =
\alpha(rs) - rs =
\alpha(r)\alpha(s) - rs =
\alpha(r)s - rs =
(\alpha(r) - r)s =
\delta_{\alpha}(r)s$.
In the same way we get that $\delta_{\alpha}(sr) = s \delta_{\alpha}(r)$.
The last statement is clear since if $a \in I$,
then $\delta_\alpha(a) \in I$ if and only if $\alpha(a) - a \in I$.
\end{proof}
\begin{rem}\label{remarkderivation}
The $\alpha$-kernel derivation $\delta_{\alpha}$ from Proposition \ref{definitionkernel}
is seldom a derivation. In fact, suppose that $\delta_{\alpha}$ is a derivation.
Take $r,s \in R$. Then the relation
$\delta_{\alpha}(rs) = \delta_{\alpha}(r)s + r \delta_{\alpha}(s)$
may be rewritten as
$\delta_{\alpha}(r) \delta_{\alpha}(s) = 0$.
So in particular, we get that $\delta_{\alpha}(r)^2 = 0$.
Hence, if $R$ is a reduced ring, i.e. a ring
with no non-zero
nilpotent elements, then
$\delta_{\alpha}$ is a derivation if and only if $\alpha = \id_R$.
Thus, $\delta_\alpha$ would have to be the zero map.
\end{rem}
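For concreteness, we add an illustrative example of the preceding proposition and remark.
\begin{rem}
A concrete illustration of Proposition \ref{definitionkernel} and Remark \ref{remarkderivation}: let $R = \mathbb{C}[Y]$ and let $\alpha : R \rightarrow R$ be the shift automorphism defined by $\alpha(Y) = Y + 1$. Then $\delta_{\alpha}$ is the forward difference operator, $\delta_{\alpha}(f)(Y) = f(Y+1) - f(Y)$, which is an $\alpha$-kernel derivation but not a derivation; indeed, $\delta_{\alpha}(Y^2) = 2Y + 1$, whereas $\delta_{\alpha}(Y)Y + Y\delta_{\alpha}(Y) = 2Y$. This is in accordance with Remark \ref{remarkderivation}, since $R$ is reduced and $\alpha \neq \id_R$.
\end{rem}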
Let $T$ be a simple non-associative ring and suppose that $q \in Z(T) \setminus \{ 0 \}$.
Let $T[Y]$ denote the polynomial ring in the indeterminate $Y$ over $T$.
Define a ring automorphism $\alpha_q : T[Y] \rightarrow T[Y]$
by the $T$-algebra extension of the relation $\alpha_q(Y) = qY$.
By Proposition \ref{definitionkernel}, $\alpha_q$ in turn
defines an $\alpha_q$-kernel derivation $\delta_{\alpha_q} : T[Y] \rightarrow T[Y]$.
It is not hard to show, using the computation in Remark \ref{remarkderivation},
that $\delta_{\alpha_q}$ is a classical derivation if and only if
$q = 1$, in which case $\delta_{\alpha_q} = 0$.
\begin{prop}\label{proprootunity}
If $T$ is simple, then $T[Y]$ is
$\delta_{\alpha_q}$-simple if and only if
$q$ is not a root of unity.
\end{prop}
\begin{proof}
Put $R = T[Y]$.
First we show the ``only if'' statement.
Suppose that $q$ is a root of unity.
Take a non-zero $n \in \mathbb{N}$ with $q^n = 1$.
Then the ideal of $R$ generated by $Y^n$ is $\alpha_q$-invariant.
Thus, $R$ is not $\alpha_q$-simple.
By Proposition \ref{definitionkernel} we get that $R$ is not $\delta_{\alpha_q}$-simple.
Now we show the ``if'' statement.
Suppose that $q$ is not a root of unity.
Take a non-zero $\delta_{\alpha_q}$-invariant ideal $I$ of $R$.
We wish to show that $I = R$.
By Proposition \ref{definitionkernel} $I$ is $\alpha_q$-invariant.
Take a non-zero $a \in I$ of least degree $m$.
Seeking a contradiction, suppose that $m > 0$.
Write $a = \sum_{i=0}^m a_i Y^i$, for some
$a_i \in T$, for $i \in \{0,\ldots,m\}$.
Then $\alpha_q(a) - q^m a$ is a non-zero element of $I$
of degree less than $m$. This contradicts the minimality of $m$.
Thus $m=0$ and thus $a \in I \cap T$.
Since $T$ is simple, we get that the ideal $J$ of $T$
generated by $a$ equals $T$. In particular, we get that
$I \supseteq T \ni 1$. Thus $I = R$.
\end{proof}
\begin{thm}\label{theoremquantumtorus}
If $T$ is simple and ${\rm char}(T)=0$,
then the non-associative differential polynomial ring
$D = T[Y][X ; \id_{T[Y]} , \delta_{\alpha_q}]$ is simple if and only
if $q$ is not a root of unity.
In that case, $Z(D) = Z(T)$.
\end{thm}
\begin{proof}
The ``only if'' statement follows from Theorem \ref{maintheorem}(c)
and Proposition \ref{proprootunity}.
Now we show the ``if'' statement.
Put $R = T[Y]$ and $\delta = \delta_{\alpha_q}$.
Suppose that $q$ is not a root of unity.
By Proposition \ref{proprootunity},
we get that $R$ is $\delta$-simple.
By Theorem \ref{maintheorem}(c), we are done if we can show that $Z(D)$ is a field.
To this end, we first note that, by Theorem \ref{maintheorem}(b),
there is a unique monic $b \in Z(D)$ of least degree $n$.
Seeking a contradiction, suppose that $n > 0$.
Then $b = \sum_{i=0}^n b_i X^i$,
for some
$b_i \in R_{\delta}$.
But since $q$ is not a root of unity,
it follows that $R_{\delta} = T$.
Thus $b \in T[X]$.
From the fact that $b \in Z(D)$, we get that
$bt = tb$, for $t \in T$, which in turn implies that
$b_i \in Z(T)$, for $i\in\{0,\ldots,n\}$.
By looking at the degree $n-1$ coefficient in the relations
$br = rb$, for $r \in R$, we get that $\alpha_q = \id_R$,
which contradicts the fact that $q \neq 1$.
Thus $n=0$ and it follows that $b=1$.
By Theorem \ref{maintheorem}(b), we get that
$Z(D) = Z(R)_{\delta}[1] = Z(T)$.
\end{proof}
\begin{rem}
Given a field $\mathbb{F}$ and $q\in \mathbb{F} \setminus\{0\}$, we may define the so called \emph{quantum plane} (see e.g. \cite[Chapter IV]{Kassel}) as $\mathbb{F}_q[X,Y] = \mathbb{F}\langle X,Y\rangle/(YX-qXY)$.
The quantum plane is an associative algebra and it can be realized as a classical Ore extension.
Indeed, if we define $\sigma : \mathbb{F}[X] \to \mathbb{F}[X]$ by $\sigma(X)=qX$, then the quantum plane $\mathbb{F}_q[X,Y]$
is isomorphic to the Ore extension $\mathbb{F}[X][Y ; \sigma , 0]$, in which $YX = \sigma(X)Y = qXY$.
While the quantum plane can be seen as a $q$-deformation of the ordinary plane,
the non-associative Ore extension $D = T[Y][X ; \id_{T[Y]} , \delta_{\alpha_q}]$
that we study in Theorem \ref{theoremquantumtorus}
can be seen as a non-associative deformation of the plane.
\end{rem}
There are several ways to associate an (associative) algebra to
a dynamical system $(G,X)$, where $G$ is a group acting on a topological space $X$.
By associating a skew group algebra (see \cite{oinert2014}) or a crossed product $C^*$-algebra (see \cite{Power}) to the dynamical system,
it is possible to encode the dynamical system into the algebra
in such a way that
dynamical features (faithfulness, freeness, minimality etc) of the dynamical system
correspond to algebraical properties of the algebra.
We shall now show how to
associate a
non-associative differential polynomial ring
to a dynamical system
and exhibit a correspondence between minimality of the dynamical system
and simplicity of the non-associative ring.
For the rest of this section, let $K$ denote any of the real algebras
$\mathbb{R}$ (real numbers), $\mathbb{C}$ (complex numbers),
$\mathbb{H}$ (Hamilton's quaternions),
$\mathbb{O}$ (Graves' octonions),
$\mathbb{S}$ (sedenions), etc.
obtained by iterating the classical Cayley-Dickson doubling
procedure of the real numbers (for more details concerning this
construction, see e.g. \cite{baez2002}).
It is well known that $K$ is then a reduced ring.
Also, apart from the cases when $K$ equals $\mathbb{R}$,
$\mathbb{C}$ or $\mathbb{H}$, $K$ is not associative.
Furthermore, there is an $\mathbb{R}$-linear involution
$\overline{\cdot} : K \rightarrow K$ and a norm
$| \cdot | : K \rightarrow \mathbb{R}_{\geq 0}$
satisfying $k \overline{k} = |k|^2$, for $k \in K$.
For the rest of this section, let $Y$ be a compact Hausdorff space
and let $g : Y \to Y$ be a continuous map.
A closed subspace $Z$ of $Y$ is called $g$-invariant
if $g(Z) \subseteq Z$.
The action of $g$ on $Y$ is called {\it minimal}
if $\emptyset$ and $Y$ are the only $g$-invariant subspaces of $Y$.
By abuse of notation, we let $C(Y)$ denote the ring of continuous functions $Y \rightarrow K$.
Since $K$ is reduced, we get that $C(Y)$ is also reduced.
The continuous map $g : Y \rightarrow Y$ defines a ring homomorphism
$\sigma(g) : C(Y) \rightarrow C(Y)$, where
$\sigma(g)(f) = f \circ g$, for $f \in C(Y)$.
By Proposition \ref{definitionkernel}, $\sigma(g)$ in turn
defines a $\sigma(g)$-kernel derivation $\delta_{\sigma(g)} : C(Y) \rightarrow C(Y)$.
Note that, by Remark \ref{remarkderivation}, $\delta_{\sigma(g)}$
is a classical derivation if and only if $g = \id_Y$.
\begin{prop}\label{NyPropMinimal}
If the action of $g$ on $Y$ is minimal,
then the ring $C(Y)$ is $\delta_{\sigma(g)}$-simple.
\end{prop}
\begin{proof}
Suppose that $Y$ is $g$-minimal.
We show that $C(Y)$ is $\delta_{\sigma(g)}$-simple.
Suppose that $I$ is a non-zero $\delta_{\sigma(g)}$-invariant ideal of $C(Y)$.
For a subset $J$ of $I$ define $N_J = \cap_{f \in J} f^{-1}(0)$.
Since $I$ is $\sigma(g)$-invariant it follows that $N_I$ is $g$-invariant.
It is clear that $N_J$ is closed.
Since $I$ is non-zero it follows that $N_I$ is a proper subset of $Y$.
By $g$-minimality of $Y$, we get that $N_I$ is empty.
By compactness of $Y$ we get that there is some finite subset $J$ of $I$
such that $N_J$ is empty.
Define $h \in I$ by $h = \sum_{f \in J} f \overline{f} = \sum_{f \in J} |f|^2$.
Since $N_J$ is empty we get that $h(y) \neq 0$ for all $y \in Y$.
Therefore $I$ contains the invertible element $h$ and hence
$I = C(Y)$.
\end{proof}
\begin{thm}\label{NYtheoremdynamics}
The non-associative differential polynomial ring
$D = C(Y)[X ; \id_{C(Y)} , \delta_{\sigma(g)}]$
is simple if the action of $g$ on $Y$ is minimal
and the topology on $Y$ is non-discrete.
In that case, $Z(D) = \mathbb{C}$, if $K = \mathbb{C}$, and
$Z(D) = \mathbb{R}$, otherwise.
\end{thm}
\begin{proof}
Put $R = C(Y)$.
Suppose that the action of $g$ on $Y$ is minimal.
By Proposition \ref{NyPropMinimal} we get that $R$ is $\delta_{\sigma(g)}$-simple.
By Theorem \ref{maintheorem}(c) we are done if we can show that $Z(D)$ is a field.
To this end, we first note that, by Theorem \ref{maintheorem}(b),
there is a unique monic $b \in Z(D)$ (up to addition of elements of $Z(R)_\delta$, which are of degree $0$) of least degree $n$.
Seeking a contradiction, suppose that $n > 0$.
Then $b = \sum_{i=0}^n b_i X^i$,
for some
$b_i \in R_{\delta_{\sigma(g)}}$.
Take $i \in \{ 0,\ldots,n \}$
and $k_i \in b_i(Y)$.
Since
$b_i \in R_{\delta_{\sigma(g)}}$,
we get that
the set $b_i^{-1}(k_i)$ is non-empty, closed and $g$-invariant.
By $g$-minimality of $Y$, we get that $Y = b_i^{-1}(k_i)$,
i.e. $b_i$ is the constant function $k_i$.
Thus $b = \sum_{i=0}^n k_i X^i$.
From the fact that $b \in Z(D)$, we get that
$bk = kb$, for $k \in K$, which in turn implies that
$b_i \in Z(K)$, for $i\in \{0,\ldots,n\}$.
By looking at the degree $n-1$ coefficient in the relations
$br = rb$, for $r \in R$, we get that $\sigma(g) = \id_R$.
Since the topology on $Y$ is non-discrete,
this contradicts $g$-minimality of $Y$.
Thus $n=0$ and it follows that $b=1$.
Thus, by Theorem \ref{maintheorem}(b), we get that
$Z(D) = Z(R)_{\delta_{\sigma(g)}}[1] = Z(K)$.
It is well known that $Z(K)=\mathbb{R}$ for all $K$
except $K = \mathbb{C}$.
\end{proof}
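We add a classical example illustrating the theorem; the minimality claim below is a well-known fact from topological dynamics.
\begin{rem}
A classical example satisfying the hypotheses of Theorem \ref{NYtheoremdynamics} is an irrational rotation of the circle: let $Y = S^1$ and let $g : Y \rightarrow Y$ be the rotation by an angle $2\pi\theta$ with $\theta$ irrational. The action of $g$ on $Y$ is well known to be minimal, and the topology on $Y$ is non-discrete, so the non-associative differential polynomial ring $C(Y)[X ; \id_{C(Y)} , \delta_{\sigma(g)}]$ is simple for every choice of $K$.
\end{rem}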
\begin{prop}\label{propminimal}
Suppose that $g : Y \to Y$ is a homeomorphism.
The ring $C(Y)$ is $\delta_{\sigma(g)}$-simple
if and only if the action of $g$ on $Y$ is minimal.
\end{prop}
\begin{proof}
The ``if'' statement follows from Proposition \ref{NyPropMinimal}.
Now we show the ``only if'' statement.
Suppose that $C(Y)$ is $\delta_{\sigma(g)}$-simple.
We show that $Y$ is $g$-minimal.
Suppose that $Z$ is a closed $g$-invariant subset of $Y$
with $Z \subsetneq Y$.
We wish to show that $Z = \emptyset$.
To this end, let $I_Z$ denote the set of continuous functions
$Y \rightarrow K$ that vanish outside $Z$.
It is clear that $I_Z$ is an ideal of $C(Y)$.
It is also clear that $I_Z \subsetneq C(Y)$
since all non-zero constant maps belong to $C(Y) \setminus I_Z$.
Now we show that $I_Z$ is $\delta_{\sigma(g)}$-invariant.
Take $f \in I_Z$ and $x \in Y \setminus Z$.
Then $\delta_{\sigma(g)}(f)(x) = \sigma(g)(f)(x) - f(x) = [f \in I_Z \Rightarrow f(x)=0] =
\sigma(g)(f)(x) = f(g(x))=0$. The last equality follows since $g(x) \in Y\setminus Z$.
Now we prove this. Seeking a contradiction, suppose that $g(x) \in Z$.
Then, by the $g$-invariance of $Z$, we get $g^{-1}(Z)=Z$,
and $x = g^{-1}(g(x)) \in Z$, which is a contradiction.
By $\delta_{\sigma(g)}$-simplicity of $C(Y)$ this implies that
$I_Z = \{ 0 \}$. Since $Y$ is compact, it is completely regular.
Therefore, we get that $Z = \emptyset$.
\end{proof}
\begin{thm}\label{theoremdynamics}
Suppose that $g : Y \to Y$ is a homeomorphism.
The non-associative differential polynomial ring
$D = C(Y)[X ; \id_{C(Y)} , \delta_{\sigma(g)}]$
is simple if and only if the action of $g$ on $Y$ is minimal
and the topology on $Y$ is non-discrete.
In that case, $Z(D) = \mathbb{C}$, if $K = \mathbb{C}$, and
$Z(D) = \mathbb{R}$, otherwise.
\end{thm}
\begin{proof}
Put $R = C(Y)$.
The ``if'' statement follows from Theorem \ref{NYtheoremdynamics}.
Now we show the ``only if'' statement.
Suppose that $D$ is simple.
By Theorem \ref{maintheorem} and Proposition \ref{propminimal},
it follows that the action of $g$ on $Y$ is minimal.
Seeking a contradiction, suppose that the topology on $Y$ is discrete.
Since the topology is Hausdorff it follows that $Y$ is a one-element set.
Thus $D$ equals the polynomial ring $K[X]$ which is not simple.
\end{proof}
\section{Associative Coefficients}\label{sectionassociative}
In this section,
we show that if the ring of coefficients is associative,
then we can often obtain simplicity of the differential polynomial ring
just from the assumption that the map $\delta$ is not a derivation.
\begin{thm}\label{theoremassociative}
Suppose that $D = R[X ; \id_R , \delta]$ is a non-associative
differential polynomial ring such that $R$ is associative
and all positive integers are regular in $R$.
If $R$ is $\delta$-simple but $\delta$ is not a derivation,
then $D$ is simple.
\end{thm}
\begin{proof}
Let $I$ be a non-zero ideal of $D$.
We wish to show that $I=D$.
Pick a non-zero element $b \in I$ of least degree $n$.
Let $b=\sum_{i=0}^n c_i X^i$, for some $c_0,\ldots,c_n \in R$.
By mimicking the proof of Theorem \ref{maintheorem}(a),
we may assume that $c_n=1$.
Seeking a contradiction, suppose that $n > 0$.
We claim that $(b,d,e)=0$, for all $d,e \in R$.
If we assume that the claim holds, then
by extracting the terms of degree $n-1$ from the
relation $(b,d,e)=0$ we get that
$n d\delta(e) +n\delta(d)e+ (c_{n-1}d)e-n\delta(de)-c_{n-1}(de) = 0$.
But since $R$ is associative and $n$ is regular,
this implies that
$d \delta(e) + \delta(d)e = \delta(de)$
which contradicts the fact that $\delta$ is not a derivation.
Thus $n=0$ and hence $1 = b \in I$ which in turn implies that $I=D$.
Now we show the claim.
The degree $n$ part of $(b,d,e)$ equals
$(c_n d)e - c_n (de) = (1 \cdot d)e - 1 \cdot (de) = 0$.
Thus, since $(b,d,e) \in I$, we get that $(b,d,e)=0$,
from the minimality of $n$.
\end{proof}
\begin{rem}
In the cases when $T$ is associative, i.e.
when $K=\mathbb{R}$, $K = \mathbb{C}$ or $K = \mathbb{H}$,
Theorem \ref{theoremassociative} can be used to simplify the proofs
of Theorem \ref{theoremquantumtorus} and Theorem \ref{theoremdynamics}.
\end{rem}
\end{document} |
\begin{document}
\title{Long-term cybersecurity applications enabled by quantum networks}
\author{Nicholas A. Peters,\authormark{1,*} Muneer Alshowkan,\authormark{1} Joseph C. Chapman,\authormark{1} Raphael C. Pooser,\authormark{1} Nageswara S. V. Rao,\authormark{2} and Raymond T. Newell \authormark{3}}
\address{\authormark{1} Quantum Information Science Section, Oak Ridge National Laboratory, Oak Ridge, TN 37831\\
\authormark{2}Advanced Computing Methods for Engineered Systems Section, Oak Ridge National Laboratory, Oak Ridge, TN 37831\\
\authormark{3}MPA-Quantum, Los Alamos National Laboratory, Los Alamos, NM 87545}
\email{\authormark{*}[email protected]}
\begin{abstract}
If continental-scale quantum networks are realized, they will provide the resources needed to fulfill the potential for dramatic advances in cybersecurity through quantum-enabled cryptography applications. We describe recent progress and where the US is headed, and argue for going one step further: jointly developing quantum and conventional cryptography methods for deployment along the quantum backbone infrastructure.
\end{abstract}
\textit{Challenge.}
Fault-tolerant universal quantum computing has the potential to make the majority of current cryptographic protocols obsolete. This is colloquially known as ``the quantum computing threat.'' As a result, crucial cryptographic functions that provide confidentiality, integrity, and availability of the communications which underpin global infrastructures are potentially at risk. This risk extends to the classical communications systems required to operate a quantum computer or quantum network; control signals and human-readable data are classical, and are therefore just as vulnerable as any other IT system to eavesdropping, spoofing, or other cyber attack. As we develop and deploy quantum networks, it will be essential to include security in their design from the beginning, rather than as a retrofit at the end.
There are two lines of defense against a cryptography-breaking quantum computer: classical cryptography systems which derive their strength from math problems which remain hard even with a quantum computer (often called Post-Quantum Cryptography, PQC); and quantum cryptographic systems (QCS) which derive their strength from the fundamental laws of physics. The first category, PQC, is under active development worldwide, including an ongoing competition sponsored by NIST to select and standardize quantum-safe cryptosystems~\cite{NIST2016}. On the other hand, QCS require development of separate hardware for their deployment; examples include quantum key distribution (QKD), which is the most mature QCS protocol, quantum digital signatures (QDS), and quantum secret sharing (QSS).
As a result of new hardware development for QCS, there are multiple research challenges that need to be addressed for QCS to realize its full potential. As QCS is a hardware-based solution, it is currently very expensive. For example, most discrete-variable QCS systems (e.g., encoding in polarization or time bin) utilize direct single photon detection (DD) with costly single-photon detectors (at least in the context of most modern telecommunications fiber networks). Additionally, DD-QCS can be severely limited by Raman scattering of classical light used to carry data~\cite{peters2009}. Continuous variable (CV) approaches (encoding in amplitude and phase) utilize homodyne detection which is more cost effective, relatively immune to Raman scattering~\cite{Qi_2010}, and highly efficient during room-temperature operation. As a result, integration of DD-QCS into optical networks is challenging without very strict limitations on conventional data signals carried in the same fiber. In contrast, CV-QKD can be deployed with multiple optical channels carrying commercial levels of data. However, the DD-QCS is much more mature than CV-QCS, and for example, additional assumptions are frequently made for CV-QKD security about the detection process. In either case, most QCS systems are still expensive laboratory experiments or in bulky rack-mounted boxes with limited ruggedness for deployment. Moreover, it is an important open research question as to how to best securely implement and certify QCS, for example, so that side channels do not leak unintended information.
In addition, QCS assumes that an authenticated classical communications channel is available for the classical post-processing that follows the quantum transmission in the protocol. Much of the current cryptographic infrastructure is public (asymmetric) key based whereas QKD delivers (private) symmetric keys, so QKD is not a drop-in replacement for current infrastructure. While QDS and QSS are multi-party protocols, they are not drop-in replacements for existing cryptography methods either. As a result, how to authenticate the classical conventional channel and how to best utilize QCS in existing infrastructures remain research challenges.
\textit{Opportunity.}
PQC is conventional cryptography thought to be secure against significant quantum computers and is hoped to be secure against foreseeable technology developments for at least several decades. On the other hand, QCS could potentially enable security for much longer time scales since the security is dependent on physics, which is technology agnostic, instead of on computational difficulty like PQC.
This is an attractive feature for securing critical infrastructure, which could include science networks as well as energy systems, which are often expected to last for decades or longer.
Furthermore, QCS networks can benefit from using satellite-based quantum networks because satellites are difficult to access and can be monitored to ensure they remain physically secure, at least as secure as any ground station and probably more secure. The trade-off is that ground stations are required to communicate with the satellite; however, rugged portable ground stations are being commercialized. Satellite links could also enable longer-distance QCS links sooner than waiting for quantum repeaters to be developed~\cite{Zhang:18}, even benefiting from satellites in geostationary orbit that allow for more continuous key generation~\cite{PhysRevApplied.18.044027}. As a concrete example, the development of small rugged high-performance QKD satellites serving as ``trusted nodes'' would provide almost immediate practical benefit by distributing usable cryptographic keys to interested users on a global scale.
Moreover, measurement-device-independent and device-independent implementations~\cite{PhysRevLett.108.130503,PhysRevLett.98.230501} significantly reduce the security requirements and assumptions. These more advanced protocols are designed to be secure even if certain parts of the hardware cannot be located in a secure location. There have been demonstrations of measurement-device-independent QKD. Given the further increased security of these measurement-device-independent implementations over standard QCS (which already has advantages over other cryptography protocols relying on assumed computational difficulty), they could enable more flexibility in how QCS systems are deployed.
\textit{Assessment.}
In contrast to much of the rest of the world, in recent years QCS research has not been a major focus in the US, where the focus is on PQC. The National Institute of Standards and Technology (NIST) is working to standardize PQC to counter the quantum computing threat \cite{NIST2016}. It does not seem possible to standardize QCS and related technologies using the same process as PQC, as they are rooted in fundamentally different ideas. In addition, QCS is relatively immature compared to conventional cryptography. Nevertheless, for QKD in particular, there are natural metrics, such as secure key rate, that are commonly used to compare performance between different implementations. Additionally, device-independent and measurement-device-independent QCS protocols provide security verification that can be certified via loophole-free Bell tests and Bell state measurements, respectively~\cite{PhysRevLett.108.130503,PhysRevLett.98.230501,Bierhorst2018}.
\textit{Timeliness or maturity.}
Despite the research challenges, the promise of long-term security independent of computational capability has caused QCS to be the focus of numerous academic research and corporate development programs globally. And while it is relatively immature compared to conventional cryptography, it is already a commercialized nearer-term application enabled by quantum networks. Even though there are several commercial offerings, much research and development must be done to close the gaps we describe. This research would hopefully make products more secure, enable their certification, enable longer communications distances, and increase their secure key rates. When fully mature, QCS protocols could enable long-term security of future quantum networking infrastructures.
\end{document} |
\begin{document}
\title[Evolutionary behavior in a two-locus system]
{Evolutionary behavior in a two-locus system}
\author{A. M. Diyorov, U. A. Rozikov}
\address{A. \ M. \ Diyorov \\ The Samarkand branch of TUIT, Samarkand, Uzbekistan.}
\email{[email protected]}
\address{ U.A. Rozikov$^{a,b,c}$\begin{itemize}
\item[$^a$] V.I.Romanovskiy Institute of Mathematics, 9, Universitet str., 100174, Tashkent, Uzbekistan;
\item[$^b$] AKFA University,
264, Milliy Bog street, Yangiobod QFY, Barkamol MFY,
Kibray district, 111221, Tashkent region, Uzbekistan;
\item[$^c$] National University of Uzbekistan, 4, Universitet str., 100174, Tashkent, Uzbekistan.
\end{itemize}}
\email{[email protected]}
\begin{abstract}
In this short note we study a dynamical system generated by a two-parametric quadratic operator mapping the 3-dimensional simplex to itself. This is an evolution operator of the frequencies of gametes in a two-locus system. We find the set of all fixed points (a continuum) and show that each fixed point is non-hyperbolic. We completely describe the set of all limit points of the dynamical system. Namely, for any initial point (taken from the 3-dimensional simplex) we find an invariant set containing the initial point and a unique fixed point of the operator, such that the trajectory of the initial point converges to this fixed point.
\end{abstract}
\maketitle
{\bf Mathematics Subject Classifications (2010).} 37N25, 92D10.
{\bf Key words.} loci, gamete, dynamical system, fixed point, trajectory, limit point.
\section{Introduction}
In this paper, following \cite[page 68]{E}, we define an evolution operator of a population assuming viability selection, random mating and discrete non-overlapping generations. Consider two loci $A$ (with alleles $A_1$, $A_2$) and $B$ (with alleles $B_1$, $B_2$). Then we have four gametes: $A_1B_1$, $A_1B_2$, $A_2B_1$, and $A_2B_2$. Denote the frequencies of these gametes by $x$, $y$, $u$, and $v$ respectively.
Thus the vector $(x, y, u, v)$ can be considered as a state of the system, and therefore one takes it as a probability distribution on the set of gametes, i.e. as an element of the 3-dimensional simplex $S^3$.
Recall that the $(m-1)$-dimensional simplex is defined as
$$ S^{m-1}=\{x=(x_1,\dots,x_m)\in \mathbb R^m: x_i\geq 0,\
\sum^m_{i=1}x_i=1 \}.$$
Following \cite[Section 2.10]{E} we define the frequencies $(x', y', u', v')$ in the next generation as
\begin{equation}\label{yq}
W: \begin{array}{llll}
x'=x+a\cdot (yu-xv)\\[2mm]
y'=y-a\cdot (yu-xv)\\[2mm]
u'=u-b\cdot (yu-xv)\\[2mm]
v'=v+b\cdot (yu-xv),
\end{array}\end{equation}
where $a,b\in [0,1]$.
It is easy to see that this quadratic operator, $W$, maps $S^3$ to itself. Indeed, we have $x'+y'+u'+v'=1$ and each coordinate is non-negative; for example, we check it for $y'$:
$$ y'=y-a\cdot (yu-xv)=y(1-au)+axv\geq y(1-au)\geq 0,$$
where these inequalities follow from the conditions $x,y,u,v,a\in [0,1]$, which give $0\leq au\leq 1$.
The operator (\ref{yq}), for any initial point (state) $t_0=(x_0, y_0, u_0, v_0)\in S^3$, defines its trajectory
$\{t_n=(x_n, y_n, u_n, v_n)\}_{n=0}^\infty$ as
$$t_{n}=(x_{n}, y_{n}, u_{n}, v_{n})=W^{n}(t_0), \quad n=0,1,2,\dots$$
Here $W^n$ is the $n$-fold composition of $W$ with itself:
$$W^n(\cdot)=\underbrace{W(W(W\dots (W}_{n \,{\rm times}}(\cdot)))\dots).$$
{\bf The main problem} in the theory of dynamical systems (see \cite{De}) is to study the sequence $\{t_n\}_{n=0}^\infty$ for each initial point $t_0\in S^3$.
In general, if a dynamical system is generated by a nonlinear operator then a complete solution of the main problem may be very difficult. But in this short note we completely solve this main problem for the nonlinear operator (\ref{yq}).
\begin{rk}
Using $1=x+y+u+v$ (on $S^3$) one can rewrite operator (\ref{yq}) as
\begin{equation}\label{yqs}
W: \begin{array}{llll}
x'=x^2+xy+xu+(1-a)xv+ayu\\[2mm]
y'=xy+y^2+(1-a)yu+axv+yv\\[2mm]
u'=xu+(1-b)yu+u^2+bxv+uv\\[2mm]
v'=(1-b)xv+yv+byu+uv+v^2.
\end{array}\end{equation}
Note that the operator (\ref{yqs}) is in the form of a quadratic stochastic operator (QSO), i.e.,
$V: S^{m-1}\to S^{m-1}$ defined by
$$V: x_k'=\sum_{i,j=1}^m P_{ij,k}x_ix_j,$$
where $P_{ij,k}\geq 0$, $\sum_k P_{ij,k}=1$.
QSOs have not been studied in full generality, but some large classes of QSOs have been studied (see for example \cite{GMR}, \cite{L}, \cite{Rpd}, \cite{RS}, \cite{RZ}, \cite{RZh} and the references therein). The operator (\ref{yq}), however, has not been studied yet.
\end{rk}
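The equivalence of (\ref{yq}) and (\ref{yqs}) on the simplex is easy to confirm numerically. The following sketch (our illustration, with hypothetical function names; not part of the paper) evaluates both forms at random points of $S^3$:

```python
import random

def W_yq(t, a, b):
    # operator (yq): t = (x, y, u, v) on the simplex S^3
    x, y, u, v = t
    D = y*u - x*v
    return (x + a*D, y - a*D, u - b*D, v + b*D)

def W_yqs(t, a, b):
    # operator (yqs): the quadratic stochastic operator form
    x, y, u, v = t
    return (x*x + x*y + x*u + (1-a)*x*v + a*y*u,
            x*y + y*y + (1-a)*y*u + a*x*v + y*v,
            x*u + (1-b)*y*u + u*u + b*x*v + u*v,
            (1-b)*x*v + y*v + b*y*u + u*v + v*v)

random.seed(0)
for _ in range(100):
    p = [random.random() for _ in range(4)]
    s = sum(p)
    t = tuple(c / s for c in p)          # a random point of S^3
    a, b = random.random(), random.random()
    assert all(abs(p1 - p2) < 1e-12
               for p1, p2 in zip(W_yq(t, a, b), W_yqs(t, a, b)))
print("operators (yq) and (yqs) agree on the simplex")
```

Note that the two forms agree only on the simplex, where $x+y+u+v=1$; off the simplex they are different maps.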
\section{The set of limit points}
\begin{rk} The case $a=b=0$ is trivial, so we will not consider it.\end{rk}
Recall that a point $t\in S^3$ is called a fixed point of $W: S^3\to S^3$ if $W(t)=t$.
Denote the set of all fixed
points by Fix$(W)$.
It is easy to see that for any $a, b\in [0,1]$ with $a+b\ne 0$ the set of all fixed points of (\ref{yq}) is
$${\rm Fix}(W)=\{t=(x,y,u,v)\in S^3: yu-xv=0\}.$$
This is a continuum of fixed points.
The main problem is completely solved in the following result:
\begin{thm}\label{tm} For any initial point $(x_0, y_0, u_0, v_0)\in S^3$ the following assertions hold:
\begin{itemize}
\item[1.] If $(x_0+y_0)(u_0+v_0)=0$ then $(x_0, y_0, u_0, v_0)$ is a fixed point.
\item[2.] If $(x_0+y_0)(u_0+v_0)\ne 0$ then the trajectory has the following limit:
$$ \lim_{n\to\infty}(x_{n}, y_n, u_{n}, v_n)=$$
$$\left(A(x_0, u_0)(x_0+y_0), A(y_0, v_0)(x_0+y_0), A(x_0, u_0)(u_0+v_0), A(y_0, v_0)(u_0+v_0)\right)\in {\rm Fix}(W),
$$
where
$$A(x,u)={bx+au\over (u_0+v_0)a+(x_0+y_0) b}.$$
\end{itemize}
\end{thm}
\begin{proof}
We note that for each $\alpha\in [0,1]$ the following set is invariant:
$$X_{\alpha}=\{t=(x,y,u,v)\in S^3: x+y=\alpha, \ \ u+v=1-\alpha\},$$
i.e., $W(X_\alpha)\subset X_\alpha$.
Note also that
$$S^3=\bigcup_{\alpha\in [0,1]} X_\alpha.$$
Part 1 of the theorem follows in the cases $\alpha=0$ and $\alpha=1$. Indeed, for $\alpha=0$ we have
$$
X_{0}=\{t=(0,0,u,v)\in S^3: u+v=1\},$$
and in the case $\alpha=1$ we get
$$
X_{1}=\{t=(x,y,0,0)\in S^3: x+y=1\}.$$
Note that in both cases the restriction of the operator to the corresponding set is the identity operator, i.e., all points of the set are fixed points.
Now, to prove part 2, we consider the case $\alpha\in (0,1)$.
Since $X_\alpha$ is invariant, it suffices to study the limit points of the operator on the sets $X_\alpha$, for each $\alpha\in (0,1)$ separately.
To do this, we restrict the operator $W$ to the invariant set $X_\alpha$ (i.e., replace $y=\alpha-x$, $v=1-\alpha-u$):
\begin{equation}\label{ya}
W_\alpha: \begin{array}{ll}
x'=(1-a+a\alpha)x+a\alpha u\\[2mm]
u'=(1-\alpha)bx+(1-b\alpha)u,
\end{array}\end{equation}
where $a,b \in [0,1]$, $\alpha\in (0,1)$, $x\in [0,\alpha]$, $u\in [0, 1-\alpha]$.
It is easy to find the set of all fixed points:
$${\rm Fix}(W_{\alpha})=\{(x,u)\in [0,\alpha]\times [0,1-\alpha]: (1-\alpha)x-\alpha u=0\}.$$
The operator $W_\alpha$ is a linear operator given by the matrix
\begin{equation}\label{m}
M_\alpha=\left( \begin{array}{cc}
1-a+a\alpha &a\alpha \\[2mm]
(1-\alpha)b &1-b\alpha
\end{array}\right).
\end{equation}
The eigenvalues of this linear operator are
\begin{equation}\label{ev} \lambda_1=1, \ \ \lambda_2=1-(1-\alpha)a-\alpha b.
\end{equation}
For any $a,b \in [0,1]$ with $a+b\ne 0$ and $\alpha\in (0,1)$ we have $0< (1-\alpha)a+\alpha b< 1$; therefore, $0<\lambda_2 < 1.$
By (\ref{ya}) we define the trajectory of an initial point $(x_0, u_0)$ as
$$(x_{n+1}, u_{n+1})=M_\alpha (x_n, u_n)^T, \ \ n\geq 0.$$
Thus
\begin{equation}\label{sh}
(x_{n}, u_{n})=M_\alpha^n\, (x_0, u_0)^T, \ \ n\geq 1.
\end{equation}
Therefore we need to find $M_\alpha^n$. To compute it we use the Cayley-Hamilton theorem\footnote{https://www.freemathhelp.com/forum/threads/formula-for-matrix-raised-to-power-n.55028/}
to obtain the following formula:
$$
M_\alpha^n={\lambda_2 \, \lambda^n_1- \lambda_1 \, \lambda^n_2\over \lambda_2-\lambda_1}\cdot I_2+ {\lambda^n_2- \lambda^n_1\over \lambda_2-\lambda_1}\cdot M_\alpha,
$$
where $I_2$ is the $2\times 2$ identity matrix and $\lambda_1, \lambda_2$ are the eigenvalues defined in (\ref{ev}).
Using this formula and (\ref{ev}) we get the following limit:
$$
\lim_{n\to \infty}M_\alpha^n={\lambda_2 \over \lambda_2-\lambda_1}\cdot I_2- {1\over \lambda_2-\lambda_1}\cdot M_\alpha={1\over (1-\alpha)a+\alpha b}\cdot \left( \begin{array}{cc}
\alpha b &\alpha a \\[2mm]
(1-\alpha)b &(1-\alpha)a
\end{array}\right).
$$
Using this limit, for any initial point $(x_0, u_0)\in [0,\alpha]\times [0,1-\alpha]$ we get
\begin{equation}\label{s}
\lim_{n\to\infty}(x_{n}, u_{n})=\lim_{n\to\infty}M_\alpha^n\, (x_0, u_0)^T={bx_0+au_0\over (1-\alpha)a+\alpha b}\cdot(\alpha, 1-\alpha)\in {\rm Fix}(W_\alpha).
\end{equation}
By (\ref{s}) we obtain
\begin{lemma} For any initial point $(x_0, y_0, u_0, v_0)\in S^3\setminus (X_0\cup X_1)$ there exists $\alpha\in (0,1)$ such that $(x_0, y_0, u_0, v_0)\in X_\alpha$ and the trajectory of this initial point (under the operator $W$ defined in (\ref{yq})) has the following limit:
$$ \lim_{n\to\infty}(x_{n}, y_n, u_{n}, v_n)=$$
$$\left(A(x_0, u_0)\alpha, A(y_0, v_0)\alpha, A(x_0, u_0)(1-\alpha), A(y_0, v_0)(1-\alpha)\right)\in {\rm Fix}(W),
$$
where
$$A(x,u)={bx+au\over (1-\alpha)a+\alpha b}.$$
\end{lemma}
In this lemma we note that $\alpha=x_0+y_0$ and $1-\alpha=u_0+v_0$; therefore, part 2 of the theorem follows, where the limit point of the trajectory of each initial point is given as a function of the initial point only. The theorem is proved.
\end{proof}
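The limit formula of Theorem \ref{tm} can also be verified numerically. The sketch below (our illustration, not part of the paper) iterates $W$ from an interior point of $S^3$ and compares the trajectory with the predicted fixed point, also confirming that $y_nu_n - x_nv_n \to 0$:

```python
# Numerical check of the limit in Theorem (illustrative sketch):
# iterate W and compare with the predicted fixed point.
a, b = 0.7, 0.3
t = (0.1, 0.4, 0.2, 0.3)            # initial point in S^3

def W(t):
    x, y, u, v = t
    D = y*u - x*v
    return (x + a*D, y - a*D, u - b*D, v + b*D)

x0, y0, u0, v0 = t
alpha = x0 + y0
A = lambda x, u: (b*x + a*u) / ((u0 + v0)*a + (x0 + y0)*b)
predicted = (A(x0, u0)*alpha, A(y0, v0)*alpha,
             A(x0, u0)*(1 - alpha), A(y0, v0)*(1 - alpha))

for _ in range(2000):
    t = W(t)

assert all(abs(p - q) < 1e-9 for p, q in zip(t, predicted))
x, y, u, v = t
assert abs(y*u - x*v) < 1e-9        # linkage equilibrium: yu - xv -> 0
print("trajectory converges to the predicted fixed point")
```

The convergence rate is governed by $\lambda_2 = 1-(1-\alpha)a-\alpha b$; for the parameters above $\lambda_2 = 0.5$, so $2000$ iterations are far more than enough.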
\section{Biological interpretations}
The results of Theorem \ref{tm} have the following biological interpretations.
Let $t=(x_0, y_0, u_0, v_0)\in S^{3}$ be an initial state (the probability distribution on the set $\{A_1B_1, A_1B_2, A_2B_1, A_2B_2\}$ of gametes).
Theorem \ref{tm} says that, as a rule, the population tends to an equilibrium state with the passage of time.
Part 1 of Theorem \ref{tm} means that if at the initial time we had only two gametes then the (initial) state remains unchanged.
Part 2 means that, depending on the initial state, the future of the population is stable: the gametes survive with probabilities
$$ A(x_0, u_0)(x_0+y_0), \ A(y_0, v_0)(x_0+y_0), \ A(x_0, u_0)(u_0+v_0), \ A(y_0, v_0)(u_0+v_0),$$
respectively.
From the existence of the limit point of any trajectory and from the explicit form of ${\rm Fix}(W)$ it follows that
$$\lim_{n\to \infty}(y_nu_n-x_nv_n)=0.$$
Biologically, this property means (\cite[page 69]{E}) that the
population asymptotically approaches a state of linkage equilibrium with respect to the two loci.
\section*{Acknowledgements}
Rozikov thanks Institut des Hautes \'Etudes Scientifiques (IHES), Bures-sur-Yvette, France for support of his visit to IHES.
The work was partially supported by a grant from the IMU-CDC.
\begin{thebibliography}{999}
\bibitem{De} R.L. Devaney, \textit{An introduction to chaotic dynamical systems}, Westview Press, 2003.
\bibitem{E} W.J. Ewens, \textit{Mathematical population genetics}. Mathematical biology, Springer, 2004.
\bibitem{GMR} R.N. Ganikhodzhaev, F.M. Mukhamedov, U.A. Rozikov, \textit{Quadratic stochastic operators and processes: results and open problems}. Inf. Dim. Anal. Quant. Prob. Rel. Fields. 2011. V.14, No.2, p.279-335.
\bibitem{L} Yu.I. Lyubich, \textit{Mathematical structures in population genetics}.
Biomathematics, {\bf 22}, Springer-Verlag, 1992.
\bibitem{Rpd} U.A. Rozikov, \textit{Population dynamics: algebraic and probabilistic approach}. World Sci. Publ. Singapore. 2020, 460 pp.
\bibitem{RS} U.A. Rozikov, N.B. Shamsiddinov, \textit{On non-Volterra quadratic stochastic operators generated by a product measure}. Stoch. Anal. Appl. 2009, V.27, No.2, p.353-362.
\bibitem{RZ} U.A. Rozikov, A. Zada, \textit{On $\ell$-Volterra quadratic stochastic operators}. Inter. Journal Biomath. 2010, V. 3, No. 2, p. 143--159.
\bibitem{RZh} U.A. Rozikov, U.U. Zhamilov, \textit{On dynamics of strictly non-Volterra quadratic operators on two-dimensional simplex}. Sbornik: Math. 2009, V. 200, No.9, p.1339-1351.
\end{thebibliography}
\end{document}
\begin{document}
\title{Computing invariants of cubic surfaces}
\begin{abstract}
We report on the computation of invariants,
covariants, and contravariants of cubic surfaces.
All algorithms are implemented in the computer
algebra system {\tt magma}.
\end{abstract}
\section{Introduction}
Given two hypersurfaces of the same degree in projective space over an algebraically closed
field, one may ask for the existence of an automorphism of the projective space that maps one
of the hypersurfaces to the other.
It turns out that if the hypersurfaces are stable~\cite[Def.~1.7]{MFK} in the sense of geometric invariant
theory such an isomorphism exists
if and only if all the invariants of the hypersurfaces coincide~\cite{Mu}.
Aside from cubic curves in $\mathop{\text{\bf P}}\nolimits^2$ and quartic surfaces in $\mathop{\text{\bf P}}\nolimits^3$, an isomorphism between
smooth hypersurfaces of degree $d \geq 3$ always extends to an automorphism of the
ambient projective space~\cite[Th.~2]{MM}.
Thus, the invariants may be used to test abstract isomorphy.
If the base field is not algebraically closed, two varieties
with equal invariants can differ by a twist.
A necessary condition for the existence of a non-trivial twist
is that the variety has a non-trivial automorphism.
In this article, we focus on the case of cubic surfaces.
For them, it was proven by Clebsch~\cite{Cl} that the ring
of invariants of even weight is generated by five invariants of
degrees 8, 16, 24, 32, and 40. Later, Salmon~\cite{Sa3}
worked out explicit formulas for these invariants based on the
pentahedral representation of the cubic surface,
introduced by Sylvester~\cite{Sy}. Using modern computer algebra, it is possible
to compute the pentahedral representation of a given
cubic surface and to deduce the invariants from this~\cite{EJ1}.
We describe a different approach to compute the Clebsch-Salmon
invariants, linear covariants, and some contravariants of cubic
surfaces, based on the Clebsch transfer principle. Using this, we also
compute an invariant of degree 100~\cite[Sec.~9.4.5]{Do} and odd weight that vanishes
if and only if the cubic surface has a non-trivial automorphism.
The square of this invariant is a polynomial expression in Clebsch's
invariants.
This answers the question of isomorphy for all
stable cubic surfaces over algebraically closed fields
and for all surfaces over non-closed fields, for which the
degree 100 invariant does not vanish.
All algorithms are implemented in the computer algebra
system {\tt magma}~\cite{BCP}.
\section{The Clebsch-Salmon invariants}
\begin{dfn} \label{def_inv}
Let $K$ be a field of characteristic zero and $K[X_1,\ldots,X_n]^{(d)}$ the $K$-vector space of all
homogeneous forms of degree $d$.
Further, we fix the left group action
\begin{eqnarray*}
\mathop{\text{\rm GL}}\nolimits_n(K) \times K[X_1,\ldots,X_n] \rightarrow K[X_1,\ldots,X_n],\quad (M,f) \mapsto M \cdot f,
\end{eqnarray*}
with $(M \cdot f)(X_1,\ldots,X_n) := f((X_1,\ldots,X_n) \, M)$.
Finally, on the polynomial ring $K[Y_1,\ldots,Y_n]$, we choose the action
\begin{eqnarray*}
\mathop{\text{\rm GL}}\nolimits_n(K) \times K[Y_1,\ldots,Y_n] \rightarrow K[Y_1,\ldots,Y_n], \quad
(M,f) \mapsto M \cdot f,
\end{eqnarray*}
given by $(M \cdot f)(Y_1,\ldots,Y_n) := f((Y_1,\ldots,Y_n) \left(M^{-1}\right)^\top)$.
\begin{enumerate}
\item
An {\em invariant $I$ of degree $D$ and weight $w$} is a map $K[X_1,\ldots,X_n]^{(d)} \rightarrow K$ that may be
given by a homogeneous polynomial of degree $D$ in the coefficients of $f$ and satisfies
$$
I(M \cdot f) = \det(M)^w \cdot I(f),
$$
for all $M \in \mathop{\text{\rm GL}}\nolimits_n(K)$ and all forms $f \in K[X_1,\ldots,X_n]^{(d)}$.
\item
A {\em covariant $C$ of degree $D$, order $p$, and weight $w$} is a map
$$
K[X_1,\ldots,X_n]^{(d)} \rightarrow K[X_1,\ldots,X_n]^{(p)}
$$
such that each coefficient of $C(f)$ is a homogeneous degree $D$ polynomial
in the coefficients of $f$ and that satisfies
$$
C(M \cdot f) = \det(M)^w \cdot M \cdot (C(f)),
$$
for all $M \in \mathop{\text{\rm GL}}\nolimits_n(K)$ and all forms $f \in K[X_1,\ldots,X_n]^{(d)}$.
\item
A {\em contravariant $c$ of degree $D$, order $p$, and weight $w$} is a map
$$
K[X_1,\ldots,X_n]^{(d)} \rightarrow K[Y_1,\ldots,Y_n]^{(p)}
$$
such that each coefficient of $c(f)$ is a homogeneous degree $D$ polynomial
in the coefficients of $f$ and that satisfies
$$
c(M \cdot f) = \det(M)^w \cdot M \cdot c(f),
$$
for all $M \in \mathop{\text{\rm GL}}\nolimits_n(K)$ and all forms $f \in K[X_1,\ldots,X_n]^{(d)}$. Note that the right hand
side uses the action on $K[Y_1,\ldots,Y_n]$.
\end{enumerate}
\end{dfn}
\begin{rems}
\begin{enumerate}
\item
The set of all invariants is a commutative ring and an algebra over the base field.
\item
The set of all covariants (resp. contravariants) is a commutative ring and a module over the ring of invariants.
\item
Geometrically, the vanishing locus of $f$ or a covariant $C(f)$ is a subset of the projective space whereas
the vanishing locus of a contravariant $c(f)$ is a subset of the dual projective space. Replacing the matrix by
the transpose inverse matrix gives the action on the dual space in a naive way.
\end{enumerate}
\end{rems}
\begin{exas}
\begin{enumerate}
\item
The discriminant of binary forms of degree $d$ is an invariant of degree $2d - 2$ and weight
$d(d-1)$~\cite[Chap.~2]{Ol}.
\item
Let $f$ be a form of degree $d > 2$ in $n$ variables. Then the {\it Hessian}~$H$
defined by
$$
H(f) := \det \left(\frac{\partial^2 f}{\partial X_i \, \partial X_j} \right)_{i,j =1,\ldots,n}
$$
is a covariant of degree $n$, order $(d-2) n$, and weight $2$.
\item
Let a smooth plane curve $V \subset \mathop{\text{\bf P}}\nolimits^2$ be given by a ternary form $f$ of degree $d$.
Mapping $f$ to the form that defines the dual curve~\cite[Sec.~1.2.2]{Do} of $V$ is an example of a contravariant
of degree $2d - 2$ and order $d(d-1)$.
\end{enumerate}
\end{exas}
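The Hessian covariant from the second example above is easy to compute symbolically. The following sketch (our illustration, using sympy; not part of the paper) takes the Hesse pencil cubic $X_1^3+X_2^3+X_3^3-3mX_1X_2X_3$ and confirms that its Hessian is again a ternary cubic, i.e. has order $(d-2)n = 3$:

```python
import sympy as sp

X = sp.symbols('X1 X2 X3')
m = sp.symbols('m')
# a ternary cubic: the Hesse pencil member X1^3 + X2^3 + X3^3 - 3 m X1 X2 X3
f = X[0]**3 + X[1]**3 + X[2]**3 - 3*m*X[0]*X[1]*X[2]

# Hessian covariant: determinant of the matrix of second partials
H = sp.Matrix(3, 3, lambda i, j: sp.diff(f, X[i], X[j])).det()
H = sp.expand(H)

# H is again a ternary cubic, of order (d-2)*n = 3; in fact it lies
# in the Hesse pencil again:
# H = -54 m^2 (X1^3 + X2^3 + X3^3) + (216 - 54 m^3) X1 X2 X3
assert sp.Poly(H, *X).total_degree() == 3
assert sp.expand(H + 54*m**2*(X[0]**3 + X[1]**3 + X[2]**3)
                   - (216 - 54*m**3)*X[0]*X[1]*X[2]) == 0
print("Hessian of the Hesse cubic lies in the Hesse pencil")
```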
\subsection*{Salmon's formulas}
A cubic surface given by a system of equations of the shape
$$
a_0 X_0^3 + a_1 X_1^3 +a_2 X_2^3 +a_3 X_3^3 +a_4 X_4^3 = 0
, \quad X_0 + X_1 + X_2 + X_3 + X_4 = 0
$$
is said to be in {\it pentahedral form}. The coefficients $a_0,\ldots,a_4$ are called the
pentahedral coefficients of the surface. The cubic surfaces that admit a pentahedral form
constitute a Zariski open subset of the Hilbert scheme of all cubic surfaces. Thus, it suffices to give
the invariants for these surfaces.
For this, we denote by $\sigma_1,\ldots,\sigma_5$ the elementary symmetric functions
of the pentahedral coefficients. Then the Clebsch-Salmon invariants
(as mentioned in the introduction)
of the cubic surface are given by~\cite[\S~467]{Sa3},
\begin{eqnarray*}
I_8 = \sigma_4^2 - 4 \sigma_3 \sigma_5, \quad
I_{16} = \sigma_1 \sigma_5^3, \quad
I_{24} = \sigma_4 \sigma_5^4, \quad
I_{32} = \sigma_2 \sigma_5^6, \quad
I_{40} = \sigma_5^8\, .
\end{eqnarray*}
Further, Salmon lists four linear covariants of degrees 11, 19, 27, and 43~\cite[\S~468]{Sa3}
\begin{align*}
L_{11} &= \sigma_5^2 \sum_{i=0}^4 a_i x_i, &
L_{19} &= \sigma_5^4 \sum_{i=0}^4 \frac{1}{a_i} x_i, \\
L_{27} &= \sigma_5^5 \sum_{i=0}^4 a_i^2 x_i, &
L_{43} &= \sigma_5^8 \sum_{i=0}^4 a_i^3 x_i \,.
\end{align*}
Finally, the $4 \times 4$-determinant of the matrix formed by the coefficients of these
linear covariants of a cubic surface in $\mathop{\text{\bf P}}\nolimits^3$ is an invariant $I_{100}$ of degree 100.
It vanishes if and only if the surface has Eckardt points or equivalently a non-trivial
automorphism group~\cite[Sec.~9.4.5, Table~9.6]{Do}. The square of $I_{100}$ can be expressed in terms of the other invariants above.
For a modern view on these invariants, we refer to~\cite[Sec.~9.4.5]{Do}.
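These formulas are straightforward to evaluate mechanically. The following Python sketch (our illustration; the paper's own computations use {\tt magma}, and the helper names are ours) computes the five invariants from given pentahedral coefficients:

```python
# Sketch: Clebsch-Salmon invariants of a cubic surface in pentahedral form,
# evaluated from the pentahedral coefficients a_0, ..., a_4 via the
# elementary symmetric functions sigma_1, ..., sigma_5 (Salmon's formulas).
from itertools import combinations
from math import prod

def elem_sym(k, xs):
    # k-th elementary symmetric function of the sequence xs
    return sum(prod(c) for c in combinations(xs, k))

def clebsch_salmon(a):
    # a = (a_0, ..., a_4): pentahedral coefficients
    s = {k: elem_sym(k, a) for k in range(1, 6)}
    I8  = s[4]**2 - 4 * s[3] * s[5]
    I16 = s[1] * s[5]**3
    I24 = s[4] * s[5]**4
    I32 = s[2] * s[5]**6
    I40 = s[5]**8
    return [I8, I16, I24, I32, I40]
```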
\section{Transvection}
One classical approach to writing down invariants is to
use transvection (called {\it \"Uberschiebung} in German).
This is part of the so-called symbolic method~\cite[Chap.~8,~\S2]{We},~\cite[App.~B.2]{Hu}.
We illustrate it in the case of ternary forms.
\begin{dfn}
Let $K[X_1,\ldots,X_n,Y_1,\ldots,Y_n,Z_1,\ldots,Z_n]$ be the polynomial ring in $3 n$
variables. For $i,j,k \in \{1,\ldots,n\}$, we denote by $(i\, j\, k)$ the differential operator
\begin{eqnarray*}
(i\, j\, k) :=
\det
\left(
\begin{array}{ccc}
\frac{\partial}{\partial X_i} &
\frac{\partial}{\partial X_j} &
\frac{\partial}{\partial X_k} \\
\frac{\partial}{\partial Y_i} &
\frac{\partial}{\partial Y_j} &
\frac{\partial}{\partial Y_k} \\
\frac{\partial}{\partial Z_i} &
\frac{\partial}{\partial Z_j} &
\frac{\partial}{\partial Z_k} \\
\end{array}
\right)
\, .
\end{eqnarray*}
\end{dfn}
\begin{exa}
Using this notation, the {\em Aronhold invariants} $S$ and $T$
of the ternary cubic form $f$ are given by
\begin{align*}
S(f) &:=
(1\, 2 \, 3) (2 \, 3 \, 4) (3 \, 4 \, 1) (4 \, 1 \, 2)
f(X_1,Y_1,Z_1) \cdots f(X_4,Y_4,Z_4), \\
T(f) &:=
(1 \, 2 \, 3) (1 \, 2 \, 4) (2 \, 3 \, 5) (3 \, 1 \, 6)
(4 \, 5 \, 6)^2 f(X_1,Y_1,Z_1) \cdots f(X_6,Y_6,Z_6)\, .
\end{align*}
The first one is of degree and weight~$4$, the second one is of degree and weight~$6$.
Using $S$ and $T$, one can write down the discriminant of a
ternary cubic as $\Delta := S^3 - 6 T^2$. The discriminant vanishes if
and only if the corresponding cubic curve is singular.
See~\cite[Sec.~V]{Sa2} for a historical and~\cite[Sec.~3.4.1]{Do}
for modern references concerning invariants of ternary
cubic forms.
\end{exa}
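The transvection recipe for $S$ can be evaluated mechanically. The following Python sketch (our illustration; all helper names are ours, and normalization constants are ignored) exploits that the operator $(1\,2\,3)(2\,3\,4)(3\,4\,1)(4\,1\,2)$ has pure order $12$ while the product $f(X_1,Y_1,Z_1)\cdots f(X_4,Y_4,Z_4)$ is homogeneous of degree $12$, so applying the operator reduces to pairing coefficient dictionaries. As a plausibility check, $S$ vanishes on the Fermat cubic $X^3+Y^3+Z^3$, which is an equianharmonic curve.

```python
# Sketch: the Aronhold invariant S of a ternary cubic f via transvection
# (normalization constants ignored).  Polynomials are dicts
# {exponent tuple: coeff} in the 12 variables X_1,Y_1,Z_1,...,X_4,Y_4,Z_4.
from collections import defaultdict
from math import factorial

NV = 12  # four copies of (X, Y, Z)

def pmul(p, q):
    r = defaultdict(int)
    for e1, c1 in p.items():
        for e2, c2 in q.items():
            r[tuple(u + v for u, v in zip(e1, e2))] += c1 * c2
    return {e: c for e, c in r.items() if c != 0}

def place(f3, i):
    # embed a ternary form f3 = {(a, b, c): coeff} as f(X_i, Y_i, Z_i)
    out = {}
    for (a, b, c), v in f3.items():
        e = [0] * NV
        e[3 * i], e[3 * i + 1], e[3 * i + 2] = a, b, c
        out[tuple(e)] = v
    return out

def det_op(i, j, k):
    # the operator (i j k), expanded as a polynomial in the 12 commuting
    # symbols d/dX_i, d/dY_i, d/dZ_i (copies are 0-based here)
    cols, op = (i, j, k), defaultdict(int)
    perms = [((0, 1, 2), 1), ((1, 2, 0), 1), ((2, 0, 1), 1),
             ((0, 2, 1), -1), ((2, 1, 0), -1), ((1, 0, 2), -1)]
    for perm, sign in perms:
        e = [0] * NV
        for row, col in enumerate(perm):
            e[3 * cols[col] + row] += 1  # row 0: d/dX, 1: d/dY, 2: d/dZ
        op[tuple(e)] += sign
    return dict(op)

def prod_fact(e):
    m = 1
    for k in e:
        m *= factorial(k)
    return m

def aronhold_S(f3):
    # product of the four copies of f; it is homogeneous of degree 12
    P = place(f3, 0)
    for i in (1, 2, 3):
        P = pmul(P, place(f3, i))
    op = det_op(0, 1, 2)
    for ijk in ((1, 2, 3), (2, 3, 0), (3, 0, 1)):
        op = pmul(op, det_op(*ijk))
    # d^e/dX^e applied to X^e gives prod(e_i!); all cross terms vanish
    return sum(c * P[e] * prod_fact(e) for e, c in op.items() if e in P)
```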
\begin{rem}
One can use the transvection to write down invariants of quaternary forms, as well.
For example, if $f$ is a quartic form in four variables then
$$
(1\, 2\, 3\, 4)^4 f(X_1,Y_1,Z_1,W_1) \cdots f(X_4,Y_4,Z_4,W_4)
$$
is an invariant of degree 4. Here, $(1\, 2\, 3 \, 4)$ denotes the
differential operator
$$
(1\, 2\, 3 \, 4) :=
\det
\left(
\begin{array}{cccc}
\frac{\partial}{\partial X_1} &
\frac{\partial}{\partial X_2} &
\frac{\partial}{\partial X_3} &
\frac{\partial}{\partial X_4} \\
\frac{\partial}{\partial Y_1} &
\frac{\partial}{\partial Y_2} &
\frac{\partial}{\partial Y_3} &
\frac{\partial}{\partial Y_4} \\
\frac{\partial}{\partial Z_1} &
\frac{\partial}{\partial Z_2} &
\frac{\partial}{\partial Z_3} &
\frac{\partial}{\partial Z_4} \\
\frac{\partial}{\partial W_1} &
\frac{\partial}{\partial W_2} &
\frac{\partial}{\partial W_3} &
\frac{\partial}{\partial W_4}
\end{array}
\right)\, .
$$
For a quaternary cubic form, one can apply this
to its Hessian to get an invariant of degree 16.
However, a direct evaluation of such formulas for
forms in four variables is too slow in practice. The reason is that both the
differential operators and the product
$f(X_1,Y_1,Z_1,W_1) \cdots f(X_4,Y_4,Z_4,W_4)$
usually have many terms.
\end{rem}
\section{The Clebsch transfer principle}
We refer to~\cite[Sec.~3.4.2]{Do} for a detailed and modern description of
the Clebsch transfer principle. The basic idea is to
compute a contravariant of a form of degree $d$ in $n$
variables out of an invariant of a form of degree $d$
in $(n-1)$ variables.
\begin{dfn}
\begin{enumerate}
\item
We consider the vector space $V = K^n$ and choose the volume form given by the
determinant. We have the following isomorphism
$$
\Phi \colon \Lambda^{n-1} V \rightarrow V^*,\quad
v_1 \wedge \dots \wedge v_{n-1} \mapsto
(v \mapsto \det (v,v_1,\ldots,v_{n-1}))\,.
$$
\item
Let $I$ be a degree $D$, weight $w$ invariant on $K[U_1,\ldots,U_{n-1}]^{(d)}$.
Then the {\it Clebsch transfer} of $I$ is the
contravariant $\tilde{I}$ of degree $D$ and order $w$
$$
\tilde{I} \colon K[X_1,\ldots,X_{n}]^{(d)} \rightarrow K[Y_1,\ldots,Y_n]^{(w)},
$$
given by
$$
\tilde{I}(f) \colon (K^n)^* \rightarrow K,\quad
l \mapsto I(f(U_1 v_1 + \cdots + U_{n-1} v_{n-1}))\, .
$$
Here, $v_1,\ldots,v_{n-1}$ are given by
$v_1 \wedge \ldots \wedge v_{n-1} = \Phi^{-1}(l)$.
Note that $\tilde{I}(f)$, as defined, is indeed a polynomial mapping
and homogeneous of degree~$w$.
\end{enumerate}
\end{dfn}
\begin{exa}
Denote by $S$ and $T$ the invariants of ternary cubic forms,
introduced above.
Then $\tilde{S}$ is a degree 4, order 4 contravariant of quaternary cubic forms.
Further, $\tilde{T}$ is a contravariant of degree 6 and order 6.
The discriminant of a cubic curve is given by $\Delta = S^3 - 6 T^2$. It vanishes if and
only if the cubic curve is singular.
Thus, the dual surface of the smooth cubic surface $V(f)$
is given by $\tilde{\Delta}(f) = \tilde{S}(f)^3 - 6 \tilde{T}(f)^2 = 0$.
\end{exa}
\begin{rem}
By definition, the dual surface of a smooth surface $V(f) \subset \mathop{\text{\bf P}}\nolimits^3$ is the set of all
tangent hyperplanes of $V(f)$. A plane $P \in (\mathop{\text{\bf P}}\nolimits^3)^*$ is tangent if and only if the
intersection $V(f) \cap P$ is singular. Thus, $P$ is a point on the dual surface if and only if
$\tilde{\Delta}(f)(P) = 0$. Here, $\Delta$ is the discriminant of ternary forms of the same
degree as $f$.
\end{rem}
\begin{rem}
For a given cubic form $f \in K[X,Y,Z,W]$, we compute $\tilde{S}(f)$
by interpolation as follows:
\begin{enumerate}
\item
Choose 35 vectors $p_1,\ldots,p_{35} \in \left(K^4\right)^*$ in general position.
\item
Compute $\Phi^{-1}(p_i)$, for $i = 1,\ldots,35$.
\item
Compute $s_i := S(f(U_1 v_1 + U_2 v_2 + U_3 v_3))$, for $v_1 \wedge v_2 \wedge v_3 = \Phi^{-1}(p_i)$
and all $i = 1,\ldots,35$.
\item
Compute the degree $4$ form $\tilde{S}(f)$ by interpolating the arguments $p_i$ and the values $s_i$.
\end{enumerate}
We can compute $\tilde{T}(f)$ in the same way. The only modification necessary is to
increase the number of vectors, as the space of sextic forms is of dimension 84.
\end{rem}
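The interpolation in step (4) is ordinary linear algebra: the $35$ values determine the $35$ coefficients of a quaternary quartic uniquely when the points are in general position. A Python sketch of this step alone (our illustration; the evaluation of $S$ itself is omitted, and random integer points stand in for points in general position):

```python
# Sketch of the interpolation step only: recovering a quaternary quartic form
# from its values at 35 points, by exact linear algebra over the rationals.
from fractions import Fraction
from itertools import product

# the 35 monomial exponents of degree 4 in 4 variables
MONOMIALS = [e for e in product(range(5), repeat=4) if sum(e) == 4]

def mono_val(p, e):
    m = 1
    for x, k in zip(p, e):
        m *= x**k
    return m

def evaluate(coeffs, p):
    return sum(c * mono_val(p, e) for c, e in zip(coeffs, MONOMIALS))

def interpolate(points, values):
    # solve the 35 x 35 linear system A c = v exactly (Gauss-Jordan);
    # raises StopIteration if the points fail to be in general position
    A = [[Fraction(mono_val(p, e)) for e in MONOMIALS] for p in points]
    v = [Fraction(x) for x in values]
    n = len(A)
    for i in range(n):
        piv = next(r for r in range(i, n) if A[r][i] != 0)
        A[i], A[piv] = A[piv], A[i]
        v[i], v[piv] = v[piv], v[i]
        inv = 1 / A[i][i]
        A[i] = [x * inv for x in A[i]]
        v[i] *= inv
        for r in range(n):
            if r != i and A[r][i]:
                f = A[r][i]
                A[r] = [x - f * y for x, y in zip(A[r], A[i])]
                v[r] -= f * v[i]
    return v
```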
\section{Action of contravariants on covariants and vice versa}
\begin{enumerate}
\item
Recall that the rings $K[X_1,\ldots, X_n]$ and $K[Y_1,\ldots, Y_n]$ are equipped with
$\mathop{\text{\rm GL}}\nolimits_n(K)$-actions, as introduced in Definition~\ref{def_inv}.
\item
The ring of differential operators
$$K\left[\frac{\partial }{\partial X_1},\ldots,\frac{\partial }{\partial X_n}\right]$$
acts on $K[X_1,\ldots,X_n]$.
\item
The $\mathop{\text{\rm GL}}\nolimits_n(K)$-action on $\calD$ given by
$$
M \cdot \left(\frac{\partial }{\partial v} \right) :=
\frac{\partial }{\partial (v \cdot M^{-1})} \mbox{ for all } v \in K^n
$$
results in the equality
$$
M \cdot \left(\frac{\partial f}{\partial v}\right) =
\left( M \cdot \frac{\partial}{\partial v} \right) \left(M \cdot f \right),
$$
for all $f \in K[X_1,\ldots,X_n]$ and all $v \in K^n$.
\item
The map
$$
\psi
\colon K[Y_1,\ldots,Y_n] \rightarrow
K\left[\frac{\partial }{\partial X_1},\ldots,\frac{\partial }{\partial X_n}\right], \quad
Y_i \mapsto \frac{\partial }{\partial X_i}
$$
is an isomorphism of rings. Further, for each $M \in \mathop{\text{\rm GL}}\nolimits_n(K)$,
we have the following commutative diagram
\begin{eqnarray*}
\diagram
K[Y_1,\ldots,Y_n] \rrto^{\psi~~~} \dto_M & &
\calD
\dto^{M} \\
K[Y_1,\ldots,Y_n] \rrto^{\psi~~~} & &
\calD
\enddiagram_{\displaystyle .}
\end{eqnarray*}
In other words, $\psi$ is an isomorphism of $\mathop{\text{\rm GL}}\nolimits_n(K)$-modules.
\item
Let $C$ be a covariant and $c$ a contravariant on $K[X_1,\ldots,X_n]^{(d)}$.
Denote the order of $C$ by $P$ and the order of $c$ by $p$.
For $P \geq p$, we define
\begin{eqnarray*}
&c \vdash C \colon K[X_1,\ldots,X_n]^{(d)} \rightarrow K[X_1,\ldots,X_n]^{(P-p)},\quad
f \mapsto \psi(c(f)) \left(C(f)\right)\, .
\end{eqnarray*}
The notation $\vdash$ follows~\cite[p.~304]{Hu}.
\item
Assume that $c \vdash C$ is not zero.
If $p < P$ then $c \vdash C$ is a covariant of order $P - p$.
If $p = P$ then $c \vdash C$ is an invariant.
In both cases, the degree of $c \vdash C$ is the sum of the degrees of $c$ and $C$.
\item
Similarly to $\psi$, one can introduce a map
$$\widehat{\psi} \colon K[X_1,\ldots,X_n] \rightarrow
K\left[\frac{\partial }{\partial Y_1},\ldots,\frac{\partial }{\partial Y_n} \right],\quad
X_i \mapsto \frac{\partial }{\partial Y_i}\,.
$$
As above, $\widehat{\psi}$ is an isomorphism of rings and $\mathop{\text{\rm GL}}\nolimits_n(K)$-modules.
Let $C$ be a covariant and $c$ a contravariant
on $K[X_1,\ldots,X_n]^{(d)}$.
We define $C \vdash c$ by
$$
(C \vdash c)(f) := \widehat{\psi}(C(f)) \left(c(f)\right)\, .
$$
\item
Assume that $C \vdash c$ is not zero.
If $p > P$ then $C \vdash c$ is a contravariant of order $p - P$.
If $p = P$ then $C \vdash c$ is an invariant.
In both cases, the degree of $C \vdash c$ is the sum of the degrees of $C$ and $c$.
\end{enumerate}
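At the level of two fixed polynomials $c(f)$ and $C(f)$, the pairing $c \vdash C$ amounts to letting each monomial $Y^{e}$ of $\psi(c(f))$ act as the mixed partial $\partial^{e}/\partial X^{e}$. A Python/sympy sketch (our illustration; the function name is ours):

```python
# Sketch: the pairing c |- C for two fixed polynomials c(f) in Y and C(f)
# in X: each monomial Y^e of c(f) acts on C(f) as the mixed partial d^e/dX^e.
import sympy as sp

X = sp.symbols('x0 x1 x2')
Y = sp.symbols('y0 y1 y2')

def vdash(c_val, C_val):
    # psi(c_val) applied to C_val
    result = sp.Integer(0)
    for term in sp.Add.make_args(sp.expand(c_val)):
        coeff = term.subs({yi: 1 for yi in Y})
        part = C_val
        for yi, xi in zip(Y, X):
            k = sp.degree(term, yi)
            if k:
                part = sp.diff(part, xi, k)
        result += coeff * part
    return sp.expand(result)
```

If $c(f)$ has order $p$ and $C(f)$ has order $P \ge p$, the result has order $P - p$; for $p = P$ it is a constant, matching the degree count above.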
\section{Explicit invariants of cubic surfaces}
\begin{rems}
\begin{enumerate}
\item
It is well known that the ring of invariants of quaternary
cubic forms is generated by the six invariants of degrees
8, 16, 24, 32, 40, and 100~\cite[Sec.~9.4.5]{Do}.
The first five generators are primary invariants~\cite[Def.~2.4.6]{DK}.
Thus, the vector spaces of all
invariants of degrees 8, 16, 24, 32 and 40 are of
dimensions 1, 2, 3, 5, and 7.
In general, these dimensions are encoded in the
Molien series, which can be computed efficiently using
character theory~\cite[Ch.~4.6]{DK}.
\item
In the lucky case that one is able to write down a basis
of the vector space of all invariants of a given degree $d$,
one can find an expression of a given invariant of degree $d$
by linear algebra. This requires that the invariant is known
for sufficiently many surfaces.
For cubic surfaces, this is provided by the pentahedral equation.
\item
Applying the methods above, we can write down many invariants for quaternary cubic forms.
We start with the form $f$, its Hessian covariant $H(f)$, and the contravariant $\tilde{S}(f)$.
Then we apply known covariants to contravariants and vice versa. Further, one can
multiply two covariants or contravariants to get a new one.
For efficiency, it is useful to keep the orders of the covariants and contravariants as small
as possible. This way, they will not consist of too many terms.
\end{enumerate}
\end{rems}
\begin{prop}
Let $f$ be a quaternary cubic form. With
\begin{align*}
C_{4,0,4} &:= \tilde{S}(f),
& C_{4,4} &:= H(f), \\
C_{6,2} &:= C_{4,0,4} \vdash f^2,
& C_{9,3} &:= C_{4,0,4} \vdash (f \cdot C_{4,4}), \\
C_{10,0,2} &:= C_{6,2} \vdash C_{4,0,4},
& C_{11,1a} &:= C_{10,0,2} \vdash f, \\
C_{13,0,1} &:= C_{9,3} \vdash C_{4,0,4},
& C_{14,2} &:= C_{10,0,2} \vdash C_{4,4}, \\
C_{14,2a} &:= C_{13,0,1} \vdash f,
& C_{19,1a} &:= C_{13,0,1} \vdash C_{6,2},
\end{align*}
the following expressions
\begin{align*}
I_8 &:= \frac{1}{2^{11} \cdot 3^9} C_{4,0,4} \vdash C_{4,4},\\
I_{16} &:= \frac{1}{2^{30} \cdot 3^{22}} C_{6,2} \vdash C_{10,0,2}, \\
I_{24} &:= \frac{1}{2^{41} \cdot 3^{33}} C_{10,0,2} \vdash C_{14,2}, \\
I_{32a} &:= C_{10,0,2} \vdash C_{11,1a}^2, \\
I_{32} &:= \frac{2}{5}(I_{16}^2 - \frac{1}{2^{60} \cdot 3^{44}} \cdot I_{32a}), \\
I_{40a} &:= C_{4,0,4} \vdash (C_{11,1a}^2 \cdot C_{14,2}), \\
I_{40} &:= \frac{-1}{100} \cdot I_8 \cdot I_{32} - \frac{1}{50} \cdot I_{16} \cdot I_{24}
- \frac{1}{2^{72} \cdot 3^{53} \cdot 5^2} I_{40a},
\end{align*}
give the Clebsch-Salmon invariants $I_8,\ I_{16},\ I_{24},\ I_{32},$ and $I_{40}$.
Further, with
\begin{align*}
C_{11,1} :=& \frac{1}{2^{20} 3^{15}} C_{11,1a}, \\
C_{19,1} :=& \frac{1}{2^{33} \cdot 3^{24} \cdot 5} (C_{19,1a} + 2^{32} \cdot 3^{24} \cdot I_8 \cdot C_{11,1a}), \\
C_{27,1a} :=& \frac{1}{2^{42} 3^{33}} C_{13,0,1} \vdash C_{14,2a}, \\
C_{27,1} :=& I_{16} \cdot C_{11,1} + \frac{1}{200}(C_{27,1a} - 2 \cdot I_8^2 \cdot C_{11,1} - 10 \cdot I_8 \cdot C_{19,1}), \\
C_{43,1a} :=& \frac{1}{2^{68} \cdot 3^{53}} C_{13,0,1} \vdash ( C_{13,0,1} \vdash (C_{13,0,1} \vdash C_{4,4})), \\
C_{43,1} :=& \frac{-1}{1000} C_{43,1a} - \frac{1}{200} \cdot I_8^2 \cdot C_{27,1} + I_{16} \cdot C_{27,1} \\
& + \frac{1}{1000} \cdot I_8^3 \cdot C_{19,1} -\frac{1}{10} \cdot I_8 \cdot I_{16} \cdot C_{19,1} - I_{24} \cdot C_{19,1} \\
& + \frac{1}{200} \cdot I_8^2 \cdot I_{16} \cdot C_{11,1} + \frac{3}{20} \cdot I_8 \cdot I_{24} \cdot C_{11,1},
\end{align*}
$C_{11,1},\ C_{19,1},\ C_{27,1},$ and $C_{43,1}$ are Salmon's linear covariants.
Here, we use the first index to indicate the degree of an invariant, covariant, or contravariant. The second
index is the order of a covariant, whereas the third index is the order of a contravariant.
Finally, we can compute $I_{100}$ as the determinant of the 4 linear covariants.
\end{prop}
\begin{proof}
The following {\tt magma} script shows in approximately one second of CPU time that the algorithm
as described above coincides with Salmon's formulas for the pentahedral family, as the last
two comparisons result in {\tt true}.
\begin{verbatim}
r5 := PolynomialRing(Integers(),5);
ff5<a,b,c,d,e> := FunctionField(Rationals(),5);
r4<x,y,z,w> := PolynomialRing(ff5,4);
lfl := [x,y,z,w,-x-y-z-w];
col := [ff5.i : i in [1..5]];
f := a*x^3 + b*y^3 + c*z^3 + d*w^3 + e*(-x-y-z-w)^3;
sy_f := [ElementarySymmetricPolynomial(r5,i) : i in [1..5]];
sigma := [Evaluate(sf,col) : sf in sy_f];
I_8 := sigma[4]^2 - 4 *sigma[3] * sigma[5];
I_16 := sigma[1] * sigma[5]^3;
I_24 := sigma[4] * sigma[5]^4;
I_32 := sigma[2] * sigma[5]^6;
I_40 := sigma[5]^8;
L_11 := sigma[5]^2 * &+[ col[i] * lfl[i] : i in [1..5]];
L_19 := sigma[5]^4 * &+[ 1/col[i] * lfl[i] : i in [1..5]];
L_27 := sigma[5]^5 * &+[ col[i]^2 * lfl[i] : i in [1..5]];
L_43 := sigma[5]^8 * &+[ col[i]^3 * lfl[i] : i in [1..5]];
inv := ClebschSalmonInvariants(f);
cov := LinearCovariantsOfCubicSurface(f);
inv eq [I_8, I_16, I_24, I_32, I_40];
cov eq [L_11, L_19, L_27, L_43];
\end{verbatim}
\end{proof}
\section{Performance test}
Computing the Clebsch-Salmon invariants, following the approach above, for 100
cubic surfaces chosen at random
with two digit integer coefficients takes about 3 seconds of CPU time.
Most of the time is used for the direct evaluation of the invariant $S$ of ternary cubics by transvection.
Note that
computing the contravariant $\tilde{S}$ by interpolation requires 35 evaluations of the invariant $S$ of
a ternary cubic.
Computing both contravariants $\tilde{S}$ and $\tilde{T}$ and the dual surface takes about 18 seconds of
CPU time for the same 100 randomly chosen surfaces.
For comparison, the computation of the pentahedral form by inspecting the singular points of
the Hessian takes about 10 seconds per example~\cite[Sec.~5.11]{EJ1}.
All computations are done on one core of an Intel i5-2400 processor running at 3.1GHz.
\end{document} |
\begin{document}
\title[$C_{n}$-moves and the difference of Jones polynomials for links]{$C_{n}$-moves and the difference of Jones polynomials for links}
\author{Ryo Nikkuni}
\address{Department of Mathematics, School of Arts and Sciences, Tokyo Woman's Christian University, 2-6-1 Zempukuji, Suginami-ku, Tokyo 167-8585, Japan}
\email{[email protected]}
\thanks{The author was supported by JSPS KAKENHI Grant Number 15K04881.}
\subjclass{Primary 57M25}
\date{}
\keywords{Jones polynomial, Vassiliev invariant, $C_{n}$-move}
\begin{abstract}
The Jones polynomial $V_{L}(t)$ for an oriented link $L$ is a one-variable Laurent polynomial link invariant discovered by Jones. For any integer $n\ge 3$, we show that: (1) the difference of Jones polynomials for two oriented links which are $C_{n}$-equivalent is divisible by $\left(t-1\right)^{n}\left(t^{2}+t+1\right)\left(t^{2}+1\right)$, and (2) there exists a pair of oriented knots which are $C_{n}$-equivalent such that the difference of the Jones polynomials for them equals $\left(t-1\right)^{n}\left(t^{2}+t+1\right)\left(t^{2}+1\right)$.
\end{abstract}
\maketitle
\section{Introduction}
The {\it Jones polynomial} $V_{L}(t)\in {\mathbb Z}\left[t^{\pm 1/2}\right]$ is an integral Laurent polynomial link invariant for an oriented link $L$ defined by the following formulae:
\begin{eqnarray*}
V_{O}(t) &=&1,\\
t^{-1}V_{L_{+}}(t)-tV_{L_{-}}(t) &=& \left(t^{\frac{1}{2}}-t^{-\frac{1}{2}}\right)V_{L_{0}}(t),
\end{eqnarray*}
where $O$ denotes the trivial knot and $L_{+}$, $L_{-}$ and $L_{0}$ are oriented links which are identical except inside the depicted regions as illustrated in Fig. \ref{skein} \cite{Jones87}. The triple of oriented links $\left(L_{+},L_{-},L_{0}\right)$ is called a {\it skein triple}. Jones also showed the following property of the Jones polynomials for oriented knots.
\begin{Theorem}\label{jw}{\rm (Jones \cite[Proposition 12.5]{Jones87})}
For any two oriented knots $J$ and $K$, $V_{J}(t)-V_{K}(t)$ is divisible by $\left(t-1\right)^{2}\left(t^{2}+t+1\right)$.
\end{Theorem}
On the basis of Theorem \ref{jw}, for an oriented knot $K$, Jones called the polynomial $W_{K}(t) = \left\{1-V_{K}(t)\right\} / \left(t-1\right)^{2}\left(t^{2}+t+1\right)$ a {\it simplified polynomial} and made a table of the simplified polynomials for knots up to $10$ crossings \cite{Jones87}. In particular, if $K$ is the right-handed trefoil knot then $W_{K}(t) = 1$. So the polynomial $\left(t-1\right)^{2}\left(t^{2}+t+1\right)$ is maximal as a divisor of the difference of Jones polynomials of any pair of oriented knots.
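This can be verified symbolically. In the sketch below (our illustration), the value $V(t) = -t^{4}+t^{3}+t$ for the right-handed trefoil is taken as a known standard input, not derived here:

```python
# Check: with the Jones polynomial V(t) = -t^4 + t^3 + t of the right-handed
# trefoil (an assumed standard input), the simplified polynomial
# W(t) = (1 - V(t)) / ((t-1)^2 (t^2+t+1)) equals 1.
import sympy as sp

t = sp.symbols('t')
V_trefoil = -t**4 + t**3 + t
W = sp.cancel((1 - V_trefoil) / ((t - 1)**2 * (t**2 + t + 1)))
```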
\begin{figure}
\caption{Skein triple $\left(L_{+},L_{-},L_{0}\right)$}
\label{skein}
\end{figure}
Our purpose in this paper is to examine the difference of Jones polynomials for two oriented links which are {\it $C_{n}$-equivalent}, where a $C_{n}$-equivalence is an equivalence relation on oriented links introduced by Habiro \cite{Habiro94} and Gusarov \cite{Gusarov00} independently as follows. For a positive integer $n$, a {\it $C_{n}$-move} is a local move on oriented links as illustrated in Fig. \ref{Cnmove} if $n\ge 2$, and a $C_{1}$-move is a crossing change. Two oriented links are said to be {\it $C_{n}$-equivalent} if they are transformed into each other by $C_{n}$-moves and ambient isotopies. By the definition of a $C_{n}$-move, it is easy to see that a $C_{n}$-equivalence implies a $C_{n-1}$-equivalence. Note that a $C_{2}$-move equals a {\it delta move} introduced by Matveev \cite{Matveev87} and Murakami-Nakanishi \cite{MN89} independently as illustrated in Fig. \ref{C2C3} (1), and a $C_{3}$-move equals a {\it clasp-pass move} introduced by Habiro \cite{Habiro93} as illustrated in Fig. \ref{C2C3} (2). A $C_{n}$-move is closely related to the {\it Vassiliev invariants} of oriented links \cite{Vass90}, \cite{BL93}, \cite{Natan95}, \cite{Stanford96}. It is known that if two oriented links are $C_{n}$-equivalent then they have the same Vassiliev invariants of order $\le n-1$, and, in the special case of oriented knots, the converse is also true \cite{Habiro00}, \cite{Gusarov00}.
\begin{figure}
\caption{$C_{n}$-move}
\label{Cnmove}
\end{figure}
\begin{figure}
\caption{(1) Delta move, (2) Clasp-pass move, (3) Pass move}
\label{C2C3}
\end{figure}
Now let us generalize Theorem \ref{jw} to oriented links which are $C_{n}$-equivalent.
\begin{Theorem}\label{divisible}
\begin{enumerate}
\item If two oriented links $L$ and $M$ are $C_{2}$-equivalent, then $V_{L}(t)-V_{M}(t)$ is divisible by $\left(t-1\right)^{2}\left(t^{2}+t+1\right)$.
\item For any integer $n\ge 3$, if two oriented links $L$ and $M$ are $C_{n}$-equivalent, then $V_{L}(t)-V_{M}(t)$ is divisible by $\left(t-1\right)^{n}\left(t^{2}+t+1\right)\left(t^{2}+1\right)$.
\end{enumerate}
\end{Theorem}
We remark that Theorem \ref{divisible} (1) was also observed in \cite[Theorem 2]{Ganzell14} for oriented knots by using the Kauffman bracket. Since any two oriented knots are $C_{2}$-equivalent \cite{MN89}, Theorem \ref{jw} is deduced from Theorem \ref{divisible} (1).
In the case of $n\ge 3$, we show the maximality of $\left(t-1\right)^{n}\left(t^{2}+t+1\right)\left(t^{2}+1\right)$ as a divisor of the difference of Jones polynomials for oriented links which are $C_{n}$-equivalent as follows. Let $J_{n}$ and $K_{n}$ be two oriented knots as illustrated in Fig. \ref{Cnex}. Note that $J_{n}$ and $K_{n}$ are transformed into each other by a single $C_{n}$-move, see Fig. \ref{Cnex2}. Then we have the following.
\begin{Theorem}\label{main_jones}
\begin{eqnarray*}
V_{J_{n}}(t) - V_{K_{n}}(t)
= (-1)^{n+1}(t-1)^{n}\left(t^{2}+t+1\right)\left(t^{2}+1\right).
\end{eqnarray*}
\end{Theorem}
\begin{figure}
\caption{Oriented knots $J_{n}$ and $K_{n}$}
\label{Cnex}
\end{figure}
\begin{figure}
\caption{$J_{n}$ and $K_{n}$ are transformed into each other by a single $C_{n}$-move}
\label{Cnex2}
\end{figure}
In section $2$, we prove Theorem \ref{divisible} and give its applications to the study of the difference of Vassiliev invariants of order $\le n$ for two oriented links which are $C_{n}$-equivalent. In section $3$, we prove Theorem \ref{main_jones} without knowing $V_{J_{n}}(t)$ and $V_{K_{n}}(t)$ individually by applying Kanenobu's formula for the difference of Jones polynomials for two oriented knots which are transformed into each other by a single $C_{n}$-move (Lemma \ref{lem1}) and a $C_{n}$-move which does not change the knot type (Lemma \ref{lem2}).
\section{Proof of Theorem \ref{divisible}}
We recall the following results about the special values of the Jones polynomial. Here an $r$-component oriented link $L$ is said to be {\it proper} if ${\rm lk}\left(K,L\setminus K\right)\equiv 0\pmod{2}$ for each component $K$ of $L$, where ${\rm lk}$ denotes the linking number, and the {\it Arf invariant} is a link invariant introduced in \cite{Robertello} defined for only proper links.
\begin{Lemma}\label{vlem} Let $L$ be an $r$-component oriented link. Then the following holds.
\begin{enumerate}
\item {\rm (\cite[(12.1)]{Jones87})} $V_{L}(1) = \left(-2\right)^{r-1}$.
\item {\rm (\cite[(12.4)]{Jones87})} $\displaystyle V_{L}\left(e^{{2\pi \sqrt{-1}}/3}\right) = \left(-1\right)^{r-1}$.
\item {\rm (Murakami \cite{Murakami86})} $V_{L}(\sqrt{-1}) = \left(\sqrt{-2}\right)^{r-1}\cdot (-1)^{{\rm Arf}(L)}$ if $L$ is proper, and $0$ if $L$ is nonproper, where ${\rm Arf}$ denotes the {\it Arf invariant}.
\end{enumerate}
\end{Lemma}
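These special values can be checked on an example. In the sketch below (our illustration), the Jones polynomial $V(t) = -t^{4}+t^{3}+t$ of the right-handed trefoil and its Arf invariant $1$ are taken as known standard inputs:

```python
# Check Lemma vlem on the right-handed trefoil K (r = 1, proper, Arf(K) = 1):
# V_K(1) = (-2)^0 = 1, V_K(e^{2 pi i/3}) = 1, V_K(sqrt(-1)) = (-1)^{Arf} = -1.
# The value V_K(t) = -t^4 + t^3 + t is an assumed standard input.
import sympy as sp

t = sp.symbols('t')
V = -t**4 + t**3 + t

omega = sp.Rational(-1, 2) + sp.sqrt(3) * sp.I / 2   # a primitive cube root of 1
v_at_1 = V.subs(t, 1)
v_at_omega = sp.expand(V.subs(t, omega))
v_at_i = sp.expand(V.subs(t, sp.I))
```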
For an oriented link $L$, we denote the $l$-th derivative at $1$ of the Jones polynomial $V_{L}(t)$ by $V_{L}^{(l)}(1)$. It is known that $V_{L}^{(l)}(1)$ is a Vassiliev invariant of order $\le l$ \cite{KN98}. Then we have the following.
\begin{Lemma}\label{mainlem1}
Let $L$ and $M$ be two oriented $r$-component links and $n$ an integer with $n\ge 2$. Then $V_{L}^{(l)}(1) = V_{M}^{(l)}(1)$ for $l=1,2,\ldots, n-1$ if and only if $V_{L}(t)-V_{M}(t)$ is divisible by $\left(t-1\right)^{n}\left(t^{2}+t+1\right)$.
\end{Lemma}
\begin{proof}
The `if' part is clear because $\left(t-1\right)^{n}$ divides $V_{L}(t)-V_{M}(t)$. We show the `only if' part by induction on $n$. Assume that $n=2$. By Lemma \ref{vlem} (1) and (2), there exists a polynomial $g(t)\in {\mathbb Z}\left[t^{\pm 1/2}\right]$ such that
\begin{eqnarray}\label{d1}
V_{L}(t) - V_{M}(t) = \left(t^{3}-1\right)g(t).
\end{eqnarray}
Then by differentiating both sides in (\ref{d1}), we have
\begin{eqnarray}\label{d2}
V_{L}^{(1)}(t) - V_{M}^{(1)}(t) = 3t^{2}g(t) + \left(t^{3}-1\right)g^{(1)}(t).
\end{eqnarray}
Thus by the assumption and (\ref{d2}), we have $g(1)=0$. This implies that $V_{L}(t)-V_{M}(t)$ is divisible by $\left(t-1\right)^{2}\left(t^{2}+t+1\right)$. Next assume that $n\ge 3$ and $V_{L}^{(l)}(1) = V_{M}^{(l)}(1)$ for $l=1,2,\ldots, n-1$. By the induction hypothesis, it follows that there exists a polynomial $h(t)\in{\mathbb Z}\left[t^{\pm 1/2}\right]$ such that
\begin{eqnarray}\label{d3}
V_{L}(t) - V_{M}(t) = \left(t-1\right)^{n-1}\left(t^{2}+t+1\right)h(t).
\end{eqnarray}
Let us denote $\left(t^{2}+t+1\right)h(t)$ by $\tilde{h}(t)$. Then by (\ref{d3}) and the assumption, we have
\begin{eqnarray}\label{d4}
0 = V_{L}^{(n-1)}(1) - V_{M}^{(n-1)}(1) = (n-1)!\ \tilde{h}(1).
\end{eqnarray}
Thus we have $0 = \tilde{h}(1) = 3h(1)$, namely $h(1) = 0$. This implies that $t-1$ divides $h(t)$, therefore we have the desired conclusion.
\end{proof}
\begin{Remark}
{\rm For an $r$-component oriented link $L$, it is known that
\begin{eqnarray*}
V_{L}^{(1)}(1) = -3(-2)^{r-2}{\rm Lk}(L)
\end{eqnarray*}
if $r\ge 2$ and $0$ if $r=1$, where ${\rm Lk}$ denotes the total linking number, that is the summation of all pairwise linking numbers of $L$ \cite[(12.2)]{Jones87}. Thus Lemma \ref{mainlem1} implies Theorem \ref{jw} in case $r=1$, and ${\rm Lk}(L) = {\rm Lk}(M)$ if and only if $V_{L}(t)-V_{M}(t)$ is divisible by $\left(t-1\right)^{2}\left(t^{2}+t+1\right)$ in case $r\ge 2$.
}
\end{Remark}
\begin{Lemma}\label{mainlem2}
For an integer $n\ge 3$, if two oriented links $L$ and $M$ are $C_{n}$-equivalent, then $V_{L}(t)-V_{M}(t)$ is divisible by $t^{2} + 1$.
\end{Lemma}
\begin{proof}
Let $L$ and $M$ be two $r$-component oriented links which are $C_{n}$-equivalent. Then $L$ and $M$ are $C_{3}$-equivalent. Note that a $C_{3}$-move, that is, a clasp-pass move, can be realized by a single {\it pass move} \cite{Kauffman83} as illustrated in Fig. \ref{C2C3} (3), and a pass move does not change the Arf invariant of a proper link \cite[Appendix]{MN89}. If $L$ is proper, then $M$ is also proper because $L$ and $M$ are also $C_{2}$-equivalent and a $C_{2}$-move does not change the pairwise linking numbers. Then by Lemma \ref{vlem} (3), we have $V_{L}\left(\sqrt{-1}\right) = V_{M}\left(\sqrt{-1}\right)$. If $L$ is nonproper, then $M$ is also nonproper and by Lemma \ref{vlem} (3), we have $V_{L}\left(\sqrt{-1}\right) = 0 = V_{M}\left(\sqrt{-1}\right)$.
\end{proof}
\begin{proof}[Proof of Theorem \ref{divisible}]
As we mentioned before, if two oriented $r$-component links $L$ and $M$ are $C_{n}$-equivalent then $V_{L}^{(l)}(1) = V_{M}^{(l)}(1)$ for $l=1,2,\ldots, n-1$. By combining this fact with Lemma \ref{mainlem1}, we have (1) in case $n=2$, and by combining this fact with Lemma \ref{mainlem1} and Lemma \ref{mainlem2}, we have (2).
\end{proof}
As an application, we give alternative short proofs for two theorems shown by H. A. Miyazawa. Note that these theorems were proved by a fairly combinatorial argument, that is, by carefully making up a list of oriented $C_{n}$-moves and checking the congruence in each case. First, we show the following as a direct consequence of Theorem \ref{divisible} (2).
\begin{Theorem}\label{mv1}{\rm (H. A. Miyazawa \cite[Theorem 1.5]{Miyazawa00})}
For an integer $n\ge 3$, if two oriented links $L$ and $M$ are $C_{n}$-equivalent, then it follows that
\begin{eqnarray*}
V_{L}^{(n)}(1) \equiv V_{M}^{(n)}(1) \pmod{6\cdot n!}.
\end{eqnarray*}
\end{Theorem}
\begin{proof}
Assume that two oriented links $L$ and $M$ are $C_{n}$-equivalent. Then by Theorem \ref{divisible} (2), there exists a polynomial $f(t)\in {\mathbb Z}\left[t^{\pm 1/2}\right]$ such that
\begin{eqnarray}\label{mv3}
V_{L}(t) - V_{M}(t) = \left(t-1\right)^{n}\left(t^{2}+t+1\right)\left(t^{2}+1\right)f(t).
\end{eqnarray}
Let us denote $\left(t^{2}+t+1\right)\left(t^{2}+1\right)f(t)$ by $\tilde{f}(t)$. Then by (\ref{mv3}), we have
\begin{eqnarray*}
V_{L}^{(n)}(1) - V_{M}^{(n)}(1) = n! \cdot \tilde{f}(1) = n! \cdot 6 f(1).
\end{eqnarray*}
Thus we have the result.
\end{proof}
Miyazawa also showed the best possibility of Theorem \ref{mv1} by exhibiting two pairs of oriented knots which are $C_{n}$-equivalent such that the differences of the $n$-th derivatives at $1$ of their Jones polynomials do not equal $6\cdot n!$, but the greatest common divisor of these differences is $6\cdot n!$. The best possibility of Theorem \ref{mv1} may also be given by the two oriented knots $J_{n}$ and $K_{n}$ in Theorem \ref{main_jones}, whose difference of the $n$-th derivatives at $1$ of the Jones polynomials equals $6\cdot n!$ up to sign. Such an example was also observed by Horiuchi \cite{Horiuchi}.
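Indeed, the $n$-th derivative at $t=1$ of $\left(t-1\right)^{n}\left(t^{2}+t+1\right)\left(t^{2}+1\right)$ equals $6\cdot n!$, which can be checked symbolically (our illustration):

```python
# Check: the n-th derivative at t = 1 of (t-1)^n (t^2+t+1) (t^2+1) equals
# 6 * n!; only the term with all n derivatives falling on (t-1)^n survives
# at t = 1, giving n! * (1^2+1+1) * (1^2+1) = 6 * n!.
import sympy as sp
from math import factorial

t = sp.symbols('t')

def nth_derivative_at_1(n):
    d = (t - 1)**n * (t**2 + t + 1) * (t**2 + 1)
    return sp.diff(d, t, n).subs(t, 1)
```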
On the other hand, the {\it Conway polynomial} $\nabla_{L}(z)\in {\mathbb Z}\left[z\right]$ is an integral polynomial link invariant for an oriented link $L$ defined by the following formulae:
\begin{eqnarray*}
\nabla_{O}(z) &=&1,\\
\nabla_{L_{+}}(z) - \nabla_{L_{-}}(z) &=& z\nabla_{L_{0}}(z),
\end{eqnarray*}
where $\left(L_{+},L_{-},L_{0}\right)$ is a skein triple in Fig. \ref{skein} \cite{Conway}.
Note that
\begin{eqnarray}\label{det}
V_{L}(-1) = \nabla_{L}\left(-2\sqrt{-1}\right)
\end{eqnarray}
and the absolute value of $V_{L}(-1)$ is known as the determinant of $L$. We denote the coefficient of $z^{l}$ in $\nabla_{L}(z)$ by $a_{l}(L)$. Then it is known that the Conway polynomial of an $r$-component oriented link $L$ is of the following form
\begin{eqnarray}\label{conway}
\nabla_{L}(z) = \sum_{i\ge 0}a_{r+2i-1}(L) z^{r+2i-1}.
\end{eqnarray}
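As a concrete instance of (\ref{det}), take the right-handed trefoil, whose standard polynomials $V(t) = -t^{4}+t^{3}+t$ and $\nabla(z) = z^{2}+1$ are assumed inputs in the sketch below (our illustration):

```python
# Check V_K(-1) = nabla_K(-2 sqrt(-1)) for the right-handed trefoil, whose
# standard polynomials V(t) = -t^4 + t^3 + t and nabla(z) = z^2 + 1 are
# assumed inputs; both sides give -3, and |V_K(-1)| = 3 is the determinant.
import sympy as sp

t, z = sp.symbols('t z')
V = -t**4 + t**3 + t
conway = z**2 + 1

lhs = V.subs(t, -1)
rhs = sp.expand(conway.subs(z, -2 * sp.I))
```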
It is known that $a_{l}(L)$ is a Vassiliev invariant of order $\le l$ \cite{Natan95}. Thus if two oriented links $L$ and $M$ are $C_{n}$-equivalent, then $a_{l}(L) = a_{l}(M)$ for $l\le n-1$. In the case of $l=n$, Miyazawa showed the following. Note that in the case of oriented knots, this had been obtained by Ohyama-Ogushi \cite{OO90}.
\begin{Theorem}\label{mv2}{\rm (H. A. Miyazawa \cite[Theorem 1.3]{Miyazawa00})}
For an integer $n\ge 3$, if two oriented links $L$ and $M$ are $C_{n}$-equivalent, then it follows that
\begin{eqnarray*}
a_{n}\left(L\right) \equiv a_{n}\left(M\right) \pmod{2}.
\end{eqnarray*}
\end{Theorem}
\begin{proof}
Let $L$ and $M$ be two $r$-component oriented links which are $C_{n}$-equivalent. If $n\equiv r\pmod{2}$, then by (\ref{conway}) we have $a_{n}(L) = a_{n}(M) = 0$. Assume that $n\not\equiv r\pmod{2}$. Then by Theorem \ref{divisible} (2) and (\ref{det}), there exists a polynomial $W(t)\in {\mathbb Z}\left[t^{\pm 1/2}\right]$ such that
\begin{eqnarray*}
(-1)^{n} \cdot 2^{n+1}\cdot W(-1) &=& V_{L}(-1) - V_{M}(-1) \\
&=& \nabla_{L}\left(-2\sqrt{-1}\right) - \nabla_{M}\left(-2\sqrt{-1}\right) \\
&=& \sum_{i\ge 1}\left\{a_{n+2i-2}(L) - a_{n+2i-2}(M)\right\}\cdot \left(-2\sqrt{-1}\right)^{n+2i-2}.
\end{eqnarray*}
This implies
\begin{eqnarray*}
0 \equiv \left\{a_{n}(L) - a_{n}(M)\right\}\cdot 2^{n}\pmod{2^{n+1}}
\end{eqnarray*}
and therefore $a_{n}(L) - a_{n}(M)$ must be even.
\end{proof}
Miyazawa showed that Theorem \ref{mv2} is also best possible. Furthermore, Ohyama-Yamada proved that for an integer $n\ge 2$, if two oriented knots $J$ and $K$ are transformed into each other by a single $C_{2n}$-move then $a_{2n}(J) - a_{2n}(K) = 0,\ \pm 2$ \cite[Theorem 1.3]{OY08}.
\section{Proof of Theorem \ref{main_jones}}
We show three lemmas needed to prove Theorem \ref{main_jones}. The first lemma is Kanenobu's formula for the difference of Jones polynomials for two oriented links which are transformed into each other by a single $C_{n}$-move. Let $L$ and $M$ be two oriented links which are transformed into each other by a single $C_{n}$-move as illustrated in Fig. \ref{Cnmove}. Let $c_{j1},c_{j2}\ (j=2,3,\ldots,n)$ and $c_{1}$ be crossings of $L$ as illustrated in Fig. \ref{Cnmove2}. We denote the sign of $c_{1}$ by $\varepsilon_{1}$ and the sign of $c_{j1}$ by $\varepsilon_{j}$ $(j=2,3,\ldots,n)$. Let $L\left[\delta_{2},\delta_{3},\ldots,\delta_{n}\right]$ be the link obtained from $L$ by smoothing the crossing $c_{1}$, smoothing the crossing $c_{j1}$ if $\delta_{j} = 1$, and changing the crossing $c_{j1}$ and smoothing the crossing $c_{j2}$ if $\delta_{j} = -1$ $(j=2,3,\ldots,n)$. Then the following formula holds.
\begin{Lemma}\label{lem1} {\rm (Kanenobu \cite[(4.10)]{Kanenobu04})}
\begin{eqnarray*}
&& V_{L}(t) - V_{M}(t) \\
&=&
\left(\prod_{i=1}^{n}\varepsilon_{i}\right)t^{\sum_{i=1}^{n}\varepsilon_{i}-\frac{n}{2}}\left(t-1\right)^{n}
\sum_{\delta_{2},\delta_{3},\ldots,\delta_{n}=\pm 1}\left(\prod_{j=2}^{n}\delta_{j}\right)V_{L\left[\delta_{2},\delta_{3},\ldots,\delta_{n}\right]}(t).
\end{eqnarray*}
\end{Lemma}
\begin{figure}
\caption{Crossings $c_{1}$ and $c_{j1},c_{j2}$ $(j=2,3,\ldots,n)$ of $L$}
\label{Cnmove2}
\end{figure}
Next we show the second lemma. For an integer $n\ge 3$, let $L'_{n}$ and $M'_{n}$ be two links as illustrated in Fig. \ref{Cnambi}, where $T$ is an arbitrary $2$-string tangle which is the same for both links. Note that $L'_{n}$ and $M'_{n}$ are transformed into each other by a single $C_{n}$-move. Then we have the following.
\begin{Lemma}\label{lem2}
$L'_{n}$ and $M'_{n}$ are ambient isotopic.
\end{Lemma}
\begin{proof}
See Fig. \ref{Cnambi2}.
\end{proof}
\begin{figure}
\caption{Two links $L'_{n}$ and $M'_{n}$}
\label{Cnambi}
\end{figure}
\begin{figure}
\caption{$L'_{n}$ and $M'_{n}$ are ambient isotopic}
\label{Cnambi2}
\end{figure}
Lemma \ref{lem2} gives a new example of a $C_{n}$-move which does not change the knot type. Such an example was first discovered by Ohyama-Tsukamoto \cite{OT99}.
The third lemma is a calculation of the Jones polynomial of the $\left(2,-m\right)$-torus knot or link $N_{m}$ for a non-negative integer $m$, as illustrated in Fig. \ref{2mtorus} (1). Note that such a calculation is already known; see \cite[p.~37]{Kauffman3} and \cite[Lemma 2.1]{Landvoy98}, for example. However, we state a formula and give a proof for the reader's convenience.
\begin{Lemma}\label{lem3}
\begin{eqnarray*}
V_{N_{m}}(t) =
\left(-t^{-\frac{1}{2}}\right)^{m}\left(-t^{\frac{1}{2}}-t^{-\frac{1}{2}}\right)+ \frac{\left(-t^{\frac{1-3m}{2}}\right)\left\{1-\left(-t\right)^{m}\right\}}{1+t}.
\end{eqnarray*}
\end{Lemma}
\begin{proof}
Note that $N_{0}$ is the trivial $2$-component link and $N_{1}$ is the trivial knot, so the formula can be checked directly for $m=0,1$. Assume that $m\ge 2$. Then we easily obtain the skein triple $\left(N_{m-2},N_{m},N_{m-1}\right)$, and therefore we have
\begin{eqnarray}\label{v1}
t^{-1}V_{N_{m-2}}(t) - tV_{N_{m}}(t)
= \left(t^{\frac{1}{2}}-t^{-\frac{1}{2}}\right)V_{N_{m-1}}(t).
\end{eqnarray}
By (\ref{v1}), we have
\begin{eqnarray}\label{v2}
V_{N_{m}}(t) + t^{-\frac{1}{2}}V_{N_{m-1}}(t)
&=& t^{-\frac{3}{2}}\left\{V_{N_{m-1}}(t) + t^{-\frac{1}{2}}V_{N_{m-2}}(t)\right\}\\
&=& \left(t^{-\frac{3}{2}}\right)^{m-1}\left\{V_{N_{1}}(t) + t^{-\frac{1}{2}}V_{N_{0}}(t)\right\}\nonumber\\
&=& -t^{\frac{1-3m}{2}}. \nonumber
\end{eqnarray}
Then by (\ref{v2}), we have
\begin{eqnarray*}
V_{N_{m}}(t) &=& -t^{-\frac{1}{2}}V_{N_{m-1}}(t) -t^{\frac{1-3m}{2}} \\
&=& \left(-t^{-\frac{1}{2}}\right)^{m}V_{N_{0}}(t) + \sum_{i=1}^{m}\left(-t^{\frac{1-3m}{2}}\right)(-t)^{i-1}\\
&=& \left(-t^{-\frac{1}{2}}\right)^{m}\left(-t^{\frac{1}{2}}-t^{-\frac{1}{2}}\right) + \frac{\left(-t^{\frac{1-3m}{2}}\right)\left\{1-\left(-t\right)^{m}\right\}}{1+t}.
\end{eqnarray*}
\end{proof}
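As a quick consistency check (our addition, not part of the paper's argument), the closed form of Lemma \ref{lem3} can be verified symbolically against the initial values and the skein recursion (\ref{v1}). The following sketch assumes Python with the sympy library is available:

```python
import sympy as sp

t = sp.symbols('t', positive=True)

def V_N(m):
    # Closed form for V_{N_m}(t) from Lemma 3.
    return ((-t**sp.Rational(-1, 2))**m * (-sp.sqrt(t) - 1/sp.sqrt(t))
            + (-t**sp.Rational(1 - 3*m, 2)) * (1 - (-t)**m) / (1 + t))

# Initial values: N_0 is the trivial 2-component link, N_1 is the trivial knot.
assert sp.simplify(V_N(0) - (-sp.sqrt(t) - 1/sp.sqrt(t))) == 0
assert sp.simplify(V_N(1) - 1) == 0

# Skein relation (v1): t^{-1} V_{N_{m-2}} - t V_{N_m} = (t^{1/2} - t^{-1/2}) V_{N_{m-1}}.
for m in range(2, 8):
    lhs = V_N(m - 2)/t - t*V_N(m)
    rhs = (sp.sqrt(t) - 1/sp.sqrt(t))*V_N(m - 1)
    assert sp.simplify(lhs - rhs) == 0
```

The check also confirms, for instance, that $V_{N_{2}}(t)=-t^{-\frac 12}-t^{-\frac 52}$, the Jones polynomial of the Hopf link with linking number $-1$.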
\begin{proof}[Proof of Theorem \ref{main_jones}]
First we consider the cases $n=3$ and $n=4$. If $n=3$, then by a calculation (with the help of \cite{kodamaknot}) we have
\begin{eqnarray*}
V_{J_{3}}(t) &=& t^{-1}-2+4t-4t^{2}+5t^{3}-5t^{4}+3t^{5}-2t^{6}+t^{7}, \\
V_{K_{3}}(t) &=& t^{-1}-1+2t-2t^{2}+2t^{3}-2t^{4}+t^{5}.
\end{eqnarray*}
Then we have
\begin{eqnarray*}
V_{J_{3}}(t) - V_{K_{3}}(t) = (t-1)^{3}\left(t^{2}+t+1\right)\left(t^{2}+1\right).
\end{eqnarray*}
If $n=4$, by a calculation we have
\begin{eqnarray*}
V_{J_{4}}(t) &=& -t^{-2}+4t^{-1}-8+13t-15t^{2}+17t^{3}-16t^{4}+12t^{5}-8t^{6}+4t^{7}-t^{8}, \\
V_{K_{4}}(t) &=& -t^{-2}+4t^{-1}-7+10t-11t^{2}+12t^{3}-10t^{4}+7t^{5}-4t^{6}+t^{7}.
\end{eqnarray*}
Then we have
\begin{eqnarray*}
V_{J_{4}}(t) - V_{K_{4}}(t) = -(t-1)^{4}\left(t^{2}+t+1\right)\left(t^{2}+1\right).
\end{eqnarray*}
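Both factorizations can be confirmed by direct expansion; the following Python snippet (our addition, using the sympy library) verifies them from the displayed Jones polynomials:

```python
import sympy as sp

t = sp.symbols('t')

# Jones polynomials for n = 3, 4 as displayed above.
V_J3 = 1/t - 2 + 4*t - 4*t**2 + 5*t**3 - 5*t**4 + 3*t**5 - 2*t**6 + t**7
V_K3 = 1/t - 1 + 2*t - 2*t**2 + 2*t**3 - 2*t**4 + t**5
V_J4 = (-1/t**2 + 4/t - 8 + 13*t - 15*t**2 + 17*t**3 - 16*t**4
        + 12*t**5 - 8*t**6 + 4*t**7 - t**8)
V_K4 = (-1/t**2 + 4/t - 7 + 10*t - 11*t**2 + 12*t**3 - 10*t**4
        + 7*t**5 - 4*t**6 + t**7)

common = (t**2 + t + 1)*(t**2 + 1)
assert sp.expand(V_J3 - V_K3 - (t - 1)**3 * common) == 0
assert sp.expand(V_J4 - V_K4 + (t - 1)**4 * common) == 0
```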
From now on, we assume that $n\ge 5$. Since $\varepsilon_{i} = 1$ for any $i$, we have
\begin{eqnarray}\label{j1}
V_{J_{n}}(t) - V_{K_{n}}(t)
&=&
t^{\frac{n}{2}}\left(t-1\right)^{n}
\sum_{\delta_{2},\ldots,\delta_{n}=\pm 1}\left(\prod_{j=2}^{n}\delta_{j}\right)V_{J_{n}\left[\delta_{2},\ldots,\delta_{n}\right]}(t).
\end{eqnarray}
If $\delta_{2} = -1$, we can see that ${J_{n}}\left[-1,\delta_{3},\ldots,\delta_{n-1},1\right]$ and ${J_{n}}\left[-1,\delta_{3},\ldots,\delta_{n-1},-1\right]$ are ambient isotopic, see Fig. \ref{Cnex3}. Thus by (\ref{j1}), we have
\begin{eqnarray}\label{j2}
V_{J_{n}}(t) - V_{K_{n}}(t)
&=&
t^{\frac{n}{2}}\left(t-1\right)^{n}
\sum_{\delta_{3},\ldots,\delta_{n}=\pm 1}\left(\prod_{j=3}^{n}\delta_{j}\right)V_{J_{n}\left[1,\delta_{3},\ldots,\delta_{n}\right]}(t).
\end{eqnarray}
\begin{figure}
\caption{${J_{n}}\left[-1,\delta_{3},\ldots,\delta_{n-1},1\right]$ and ${J_{n}}\left[-1,\delta_{3},\ldots,\delta_{n-1},-1\right]$ are ambient isotopic}
\label{Cnex3}
\end{figure}
Let $k$ be an integer satisfying $3\le k\le n-2$. Note that $n-k+1$ then satisfies $3\le n-k+1\le n-2$. Then we can see that $J_{n}\left[1,\ldots,1,-1,\delta_{k+1},\ldots,\delta_{n}\right]$ is ambient isotopic to $L'_{n-k+1}\left[\delta_{k+1},\ldots,\delta_{n}\right]$ for some $2$-string tangle $T_{k}$, see Fig. \ref{Cnex4}, where $L'_{n-k+1}$ and $M'_{n-k+1}$ are the corresponding links as illustrated in Fig. \ref{Cnambi}. Then by Lemma \ref{lem2}, $L'_{n-k+1}$ and $M'_{n-k+1}$ are ambient isotopic, and therefore
\begin{eqnarray}\label{j3}
&& \sum_{\delta_{k+1},\ldots,\delta_{n}=\pm 1}\left(\prod_{j=k+1}^{n}\delta_{j}\right)V_{J_{n}\left[1,\ldots,1,-1,\delta_{k+1},\ldots,\delta_{n}\right]}(t)\\
&=& \left\{V_{L'_{n-k+1}}(t) - V_{M'_{n-k+1}}(t)\right\} \Big/ \left(-\prod_{i=k+1}^{n}\varepsilon_{i}\right)t^{-1+\sum_{i=k+1}^{n}\varepsilon_{i}-\frac{1}{2}(n-k)}\left(t-1\right)^{n-k} \nonumber\\
&=& 0. \nonumber
\end{eqnarray}
Thus by (\ref{j2}) and (\ref{j3}), we have
\begin{eqnarray}\label{j4}
V_{J_{n}}(t) - V_{K_{n}}(t) &=&
t^{\frac{n}{2}}\left(t-1\right)^{n}
\sum_{\delta_{n-1},\delta_{n}=\pm 1}\delta_{n-1}\delta_{n}
V_{J_{n}\left[1,\ldots,1,\delta_{n-1},\delta_{n}\right]}(t).
\end{eqnarray}
\begin{figure}
\caption{$J_{n}\left[1,\ldots,1,-1,\delta_{k+1},\ldots,\delta_{n}\right]$ is ambient isotopic to $L'_{n-k+1}\left[\delta_{k+1},\ldots,\delta_{n}\right]$}
\label{Cnex4}
\end{figure}
It is easy to see that $J_{n}\left[1,\ldots,1,-1,-1\right]$ is ambient isotopic to $N_{n-3}$, that $J_{n}\left[1,\ldots,1,-1,1\right]$ is ambient isotopic to the split union of $N_{n-4}$ and the trivial knot, and that $J_{n}\left[1,\ldots,1,-1\right]$ is ambient isotopic to the connected sum of $N_{n-3}$, the Hopf link with linking number $1$ and the Hopf link with linking number $-1$. Thus we have
\begin{eqnarray}
V_{J_{n}\left[1,\ldots,1,-1,-1\right]}(t) &=& V_{N_{n-3}}(t), \label{j5}\\
V_{J_{n}\left[1,\ldots,1,-1,1\right]}(t) &=& \left(-t^{\frac{1}{2}}-t^{-\frac{1}{2}}\right)V_{N_{n-4}}(t), \label{j6} \\
V_{J_{n}\left[1,\ldots,1,-1\right]}(t) &=& \left(-t^{\frac{5}{2}}-t^{\frac{1}{2}}\right)\left(-t^{-\frac{5}{2}}-t^{-\frac{1}{2}}\right)V_{N_{n-3}}(t). \label{j7}
\end{eqnarray}
Further, $J_{n}\left[1,\ldots,1\right]$ is ambient isotopic to the oriented link as illustrated in Fig. \ref{2mtorus} (2), where $m=n-4$. We obtain the skein triple $\left(N_{n-3},J_{n}\left[1,\ldots,1\right],N_{n-4}\right)$ by changing and smoothing the marked crossing in Fig. \ref{2mtorus}. Thus we have
\begin{eqnarray}\label{j8}
V_{J_{n}\left[1,\ldots,1\right]}(t)
= t^{-2}V_{N_{n-3}}(t) - t^{-1}\left(t^{\frac{1}{2}}-t^{-\frac{1}{2}}\right)V_{N_{n-4}}(t).
\end{eqnarray}
By combining (\ref{j4}), (\ref{j5}), (\ref{j6}), (\ref{j7}) and (\ref{j8}), we have
\begin{eqnarray}\label{j9}
&& V_{J_{n}}(t) - V_{K_{n}}(t) \\
&=&
t^{\frac{n}{2}}\left(t-1\right)^{n}
\left\{
\left(-1-t^{2}\right)V_{N_{n-3}}(t) + \left(t^{\frac{1}{2}}+t^{-\frac{3}{2}}\right)V_{N_{n-4}}(t)
\right\} \nonumber \\
&=& t^{\frac{n}{2}}\left(t-1\right)^{n}\left(1+t^{2}\right)
\left\{
-V_{N_{n-3}}(t) + t^{-\frac{3}{2}}V_{N_{n-4}}(t)
\right\}. \nonumber
\end{eqnarray}
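The bracket in (\ref{j9}) is obtained by substituting (\ref{j5})--(\ref{j8}) into the signed sum in (\ref{j4}); since this step is routine but error-prone, the following Python/sympy sketch (our addition, not part of the paper) verifies it, with $A$ and $B$ standing for $V_{N_{n-3}}(t)$ and $V_{N_{n-4}}(t)$:

```python
import sympy as sp

t, A, B = sp.symbols('t A B', positive=True)  # A = V_{N_{n-3}}(t), B = V_{N_{n-4}}(t)
s = sp.sqrt(t)

j5 = A                                    # V at [1,...,1,-1,-1]
j6 = (-s - 1/s)*B                         # V at [1,...,1,-1,1]
j7 = (-t**2*s - s)*(-1/(t**2*s) - 1/s)*A  # V at [1,...,1,-1] (delta_n = -1)
j8 = A/t**2 - (s - 1/s)*B/t               # V at [1,...,1]

# Signed sum over (delta_{n-1}, delta_n): ++ gives j8, +- gives j7, -+ gives j6, -- gives j5.
bracket = j8 - j7 - j6 + j5
target = (-1 - t**2)*A + (s + t**sp.Rational(-3, 2))*B
assert sp.simplify(bracket - target) == 0
```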
Here, by Lemma \ref{lem3}, we also have
\begin{eqnarray}\label{j10}
&& -V_{N_{n-3}}(t) + t^{-\frac{3}{2}}V_{N_{n-4}}(t)\\
&=& -\left(-t^{-\frac{1}{2}}\right)^{n-3}\left(-t^{\frac{1}{2}}-t^{-\frac{1}{2}}\right)- \frac{\left(-t^{\frac{10-3n}{2}}\right)\left\{1-\left(-t\right)^{n-3}\right\}}{1+t} \nonumber \\
&& + t^{-\frac{3}{2}}\left(-t^{-\frac{1}{2}}\right)^{n-4}\left(-t^{\frac{1}{2}}-t^{-\frac{1}{2}}\right) + \frac{\left(-t^{\frac{10-3n}{2}}\right)\left\{1-\left(-t\right)^{n-4}\right\}}{1+t} \nonumber \\
&=& (-1)^{n-2}t^{\frac{3-n}{2}}\left(-t^{\frac{1}{2}}-t^{-\frac{1}{2}}\right)
+ (-1)^{n-4}t^{\frac{1-n}{2}}\left(-t^{\frac{1}{2}}-t^{-\frac{1}{2}}\right)
+ (-1)^{n-4}t^{\frac{2-n}{2}}\nonumber \\
&=& (-1)^{n+1}\left(t^{\frac{4-n}{2}}+t^{\frac{2-n}{2}}+t^{-\frac{n}{2}}\right) \nonumber \\
&=& (-1)^{n+1}t^{-\frac{n}{2}}\left(t^{2}+t+1\right). \nonumber
\end{eqnarray}
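The simplification (\ref{j10}) can also be checked symbolically from the closed form of Lemma \ref{lem3}; a short Python/sympy sketch (our addition, not part of the paper):

```python
import sympy as sp

t = sp.symbols('t', positive=True)

def V_N(m):
    # Closed form for V_{N_m}(t) from Lemma 3.
    return ((-t**sp.Rational(-1, 2))**m * (-sp.sqrt(t) - 1/sp.sqrt(t))
            + (-t**sp.Rational(1 - 3*m, 2)) * (1 - (-t)**m) / (1 + t))

# Identity (j10): -V_{N_{n-3}} + t^{-3/2} V_{N_{n-4}} = (-1)^{n+1} t^{-n/2}(t^2 + t + 1).
for n in range(5, 11):
    lhs = -V_N(n - 3) + t**sp.Rational(-3, 2) * V_N(n - 4)
    rhs = (-1)**(n + 1) * t**sp.Rational(-n, 2) * (t**2 + t + 1)
    assert sp.simplify(lhs - rhs) == 0
```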
By (\ref{j9}) and (\ref{j10}), we have the desired conclusion.
\end{proof}
\begin{figure}
\caption{(1) The $(2,-m)$-torus knot or link $N_{m}$. (2) The oriented link $J_{n}\left[1,\ldots,1\right]$, where $m=n-4$}
\label{2mtorus}
\end{figure}
\section*{Acknowledgment}
The author is grateful to Professor Yoshiyuki Ohyama for informing him of Horiuchi's unpublished note \cite{Horiuchi}.
\end{document} |
\begin{document}
\title[Low-lying zeros of cubic Dedekind zeta functions]{Omega results for cubic field counts via lower-order terms in the one-level density}
\author{Peter J. Cho, Daniel Fiorilli, Yoonbok Lee and Anders S\"odergren}
\date{\today}
\address{Department of Mathematical Sciences, Ulsan National Institute of Science and Technology, Ulsan, Korea}
\email{[email protected]}
\address{CNRS, Universit\'e Paris-Saclay, Laboratoire de math\'ematiques d'Orsay, 91405, Orsay, France}
\email{[email protected]}
\address{Department of Mathematics, Research Institute of Basic Sciences, Incheon National University, Korea}
\email{[email protected], [email protected]}
\address{Department of Mathematical Sciences, Chalmers University of Technology and the University of Gothenburg, SE-412 96 Gothenburg, Sweden}
\email{[email protected]}
\thanks{Peter J. Cho is supported by the NRF grant funded by the Korea government (MSIT) (No. 2019R1F1A1062599) and the Basic Science Research Program (2020R1A4A1016649). Yoonbok Lee is supported by the NRF grant funded by the Korea government (MSIT) (No. 2019R1F1A1050795). Daniel Fiorilli was supported at the University of Ottawa by an NSERC discovery grant. Anders S\"odergren was supported by a grant from the Swedish Research Council (grant 2016-03759).}
\maketitle
\begin{abstract}
In this paper we obtain a precise formula for the $1$-level density of $L$-functions attached to non-Galois cubic Dedekind zeta functions. We find a secondary term which is unique to this context, in the sense that no lower-order term of this shape has appeared in previously studied families. The presence of this new term allows us to deduce an omega result for cubic field counting functions, under the assumption of the Generalized Riemann Hypothesis. We also investigate the associated $L$-functions Ratios Conjecture, and find that it does not predict this new lower-order term. Taking into account the secondary term in Roberts' Conjecture, we refine the Ratios Conjecture to one which captures this new term. Finally, we show that any improvement in the exponent of the error term of the recent Bhargava--Taniguchi--Thorne cubic field counting estimate would imply that the best possible error term in the refined Ratios Conjecture is $O_\varepsilon(X^{-\frac 13+\varepsilon})$. This is in contrast with all previously studied families, in which the expected error in the Ratios Conjecture prediction for the $1$-level density is $O_\varepsilon(X^{-\frac 12+\varepsilon})$.
\end{abstract}
\section{Introduction}
In \cite{KS1,KS2} Katz and Sarnak made a series of fundamental conjectures about statistics of low-lying zeros in families of $L$-functions. Recently, these conjectures have been refined by Sarnak, Shin and Templier~\cite{SaST} for families of parametric $L$-functions. There is a huge body of work on the confirmation of these conjectures for particular test functions in various families, many of which are harmonic (see, e.g., \cite{ILS,Ru,FI,HR,ST}). There are significantly fewer geometric families that have been studied. In this context we mention the work of Miller~\cite{M} and Young~\cite{Y2} on families of elliptic curve $L$-functions, and that of Yang~\cite{Y}, Cho and Kim~\cite{CK1,CK2} and Shankar, S\"odergren and Templier~\cite{SST1} on families of Artin $L$-functions.
In families of Artin $L$-functions, these results are strongly linked with counts of number fields. More precisely, the set of admissible test functions is determined by the quality of the error terms in such counting functions. In this paper we consider the sets
$$\mathcal{F}^\pm(X):=\{K/{\mathbb{Q}} \text{ non-Galois}\,:\, [K:{\mathbb{Q}}]=3,\ 0<\pm D_K < X\},$$
where for each cubic field $K/{\mathbb{Q}}$ of discriminant $D_K$ we include only one of its three isomorphic copies. The first power-saving estimate for the cardinality $N^\pm(X):=|\mathcal{F}^\pm (X)|$
was obtained by Belabas, Bhargava and Pomerance~\cite{BBP}, and was later refined by Bhargava, Shankar and Tsimerman~\cite{BST}, Taniguchi and Thorne~\cite{TT}, and Bhargava, Taniguchi and Thorne~\cite{BTT}. The last three of these estimates take the shape
\begin{equation}
N^\pm(X) = C_1^\pm X + C_2^\pm X^{\frac 56} + O_{\varepsilon}(X^{\theta+\varepsilon})
\label{equation TT}
\end{equation}
for certain explicit values of $ \theta <\frac 56$, implying in particular Roberts' conjecture \cite{Ro}. Here,
$$ C_1^+:=\frac{1}{12\zeta(3)};\hspace{.5cm} C_2^+:=\frac{4\zeta(\frac 13)}{5\Gamma( \frac 23)^3\zeta(\frac 53)}; \hspace{.5cm} C_1^-:=\frac{1}{4\zeta(3)};\hspace{.5cm} C_2^-:=\frac{4\sqrt 3\zeta(\frac 13)}{5\Gamma( \frac 23)^3\zeta(\frac 53)}.$$
The presence of this secondary term is a striking feature of this family, and we are interested in studying its consequences for the distribution of low-lying zeros. More precisely, the estimate~\eqref{equation TT} suggests that one should be able to extract a corresponding lower-order term in various statistics on those zeros.
In addition to~\eqref{equation TT}, we will consider precise estimates involving local conditions, which are of the form
\begin{align}\label{equation TT local 2}
N^\pm_{p} (X,T) : &= \#\{ K \in \mathcal F^\pm(X) : p \text{ has splitting type }T \text{ in } K\} \notag
\\& = A^\pm_p (T ) X +B^\pm_p(T) X^{\frac 56} + O_{\varepsilon}(p^{ \omega}X^{\theta+\varepsilon}),
\end{align}
where $p$ is a given prime, $T$ is a splitting type, and the constants $A^\mathfrak pm_p (T )$ and $B^\mathfrak pm_p (T )$ are defined in Section~\ref{section background}. Here, $\theta$ is the same constant as that in~\eqref{equation TT}, and $\omega\geq 0$. Note in particular that~\eqref{equation TT local 2} implies~\eqref{equation TT} (take $p=2$ in~\eqref{equation TT local 2} and sum over all splitting types $T$).
Perhaps surprisingly, it turns out that the study of low-lying zeros has an application to cubic field counts. More precisely, we were able to obtain the following conditional omega result for $N^\pm_{p} (X,T)$.
\begin{theorem}
\label{theorem omega result counts}
Assume the Generalized Riemann Hypothesis for $\zeta_K(s)$ for each $K\in\mathcal F^{\pm}(X)$. If $\theta,\omega\geq 0$ are admissible values in~\eqref{equation TT local 2}, then $\theta+\omega\geq \frac 12$.
\end{theorem}
As part of this project, we have produced numerical data which suggests that $\theta=\frac 12$ and any $\omega >0$ are admissible values in~\eqref{equation TT local 2} (indicating in particular that the bound $\omega+\theta \geq \frac 12$ in Theorem~\ref{theorem omega result counts} could be best possible). We have made several graphs to support this conjecture in Appendix~\ref{appendix}. As a first example of these results, in Figure~\ref{figure intro} we display a graph of $X^{-\frac 12}(N^+_{5} (X,T)-A^+_5 (T ) X -B^+_5(T) X^{\frac 56} ) $ for the various splitting types $T$, which suggests that $\theta=\frac 12$ is admissible and best possible.
\begin{figure}[h]
\begin{center}
\includegraphics[scale=.55]{combinedTs10e8p5largecaption.jpg}
\end{center}\caption{The normalized error terms $X^{-\frac 12}(N^+_{5} (X,T)-A^+_5 (T ) X -B^+_5(T) X^{\frac 56} ) $ for the splitting types $T= T_1,\dots,T_5$ as described in Section~\ref{section background}.}
\label{figure intro}
\end{figure}
Let us now describe our unconditional result on low-lying zeros. For a cubic field $K$, we will focus on the Dedekind zeta function $\zeta_K(s)$, whose
$1$-level density is defined by
$$ \mathfrak{D}_{\phi}(K) := \sum_{\gamma_{K}} \phi\left( \frac{\log (X/(2 \pi e)^2)}{2\pi} \gamma_K\right).$$
Here, $\phi$ is an even, smooth and rapidly decaying real function for which the Fourier transform
$$\widehat \phi(\xi) :=\int_{\mathbb R} \phi(t) e^{-2\pi i \xi t} dt $$ is compactly supported. Note that $\phi$ can be extended to an entire function through the inverse Fourier transform. Moreover, $X$ is a parameter (approximately equal to $|D_K|$) and $\rho_K=\frac 12+i\gamma_K$ runs through the non-trivial zeros\footnote{The Riemann Hypothesis for $\zeta_K(s)$ implies that $\gamma_K\in \mathbb R$.} of $\zeta_K(s)/\zeta(s)$. In order to understand the distribution of the $\gamma_K$, we will average $\mathfrak{D}_{\phi}(K)$ over the family $\mathcal F^\pm(X)$. Our main technical result is a precise estimation of this average.
\begin{theorem}
\label{theorem main}
Assume that the cubic field count~\eqref{equation TT local 2} holds for some fixed parameters $\frac 12\leq \theta <\frac 56$ and $ \omega\geq0$.
Then, for any real even Schwartz function $\phi$ for which $\sigma:=\sup({\rm supp}(\widehat \phi))< \frac {1-\theta}{\omega+ \frac 12}$, we have the estimate
\begin{multline}\frac 1{N^\pm(X)} \sum_{K \in \F^\pm(X)} \mathfrak{D}_{\phi}(K)=\widehat \phi(0)\Big(1 + \frac{ \log (4 \pi^2 e) }{L} -\frac{C_2^\pm}{5C_1^\pm} \frac{X^{-\frac 16}}{L}
+ \frac{(C_2^\pm)^2 }{5(C_1^\pm)^2 } \frac{X^{-\frac 13}}{L} \Big) \\
+ \frac1{\pi}\int_{-\infty}^{\infty}\phi\Big(\frac{Lr}{2\pi}\Big)\Re\Big(\frac{\Gamma'_{\pm}}{\Gamma_{\pm}}(\tfrac12+ir)\Big)dr -\frac{2}{L}\sum_{p,e}\frac{x_p\log p}{p^{\frac e2}}\widehat\phi\Big(\frac{\log p^e}{L}\Big) (\theta_e+\tfrac 1p)\\ -\frac{2C_2^\pm X^{-\frac 16}}{C_1^\pm L}\Big( 1-\frac{C_2^\pm}{C_1^\pm}X^{-\frac{1}{6}}\Big) \sum_{p,e}\frac{\log p}{p^{\frac e2}}\widehat\phi\Big(\frac{\log p^e}{L}\Big) \beta_e(p) +O_{\varepsilon}(X^{\theta - 1 + \sigma(\omega + \frac12 ) +\varepsilon}) ,
\label{equation main theorem}
\end{multline}
where $\Gamma_+(s):=
\pi^{-s}\Gamma(\frac s2)^2$, $\Gamma_-(s):=
\pi^{-s}\Gamma(\frac s2)\Gamma(\frac {s+1}2)$, $x_p:=(1+\frac 1p+\frac 1{p^2})^{-1}$, $\theta_e$ and $\beta_e(p)$ are defined in~\eqref{definition theta_e} and~\eqref{eqaution definition beta}, respectively, and $L:=\log \big( \frac{X}{(2 \pi e)^2}\big)$.
\end{theorem}
\begin{remark}
In the language of the Katz--Sarnak heuristics, the first and third terms on the right-hand side of~\eqref{equation main theorem} are a manifestation of the symplectic symmetry type of the family $\mathcal{F}^\pm(X)$. More precisely, one can turn~\eqref{equation main theorem} into an expansion in descending powers of $L$ using Lemma~\ref{one-level-2-term} as well as~\cite[Lemma 12.14]{MV}. The first result in this direction is due to Yang~\cite{Y}, who showed that under the condition $\sigma<\frac 1{50}$, we have that
\begin{equation}
\frac 1{N^\pm(X)} \sum_{K \in \F^\pm(X)} \mathfrak{D}_{\phi}(K)
=\widehat \phi(0)-\frac{\phi(0)}2+o_{X\rightarrow \infty}(1).
\label{Yang}
\end{equation}
This last condition was relaxed to $\sigma<\frac 4{41}$ by Cho--Kim~\cite{CK1,CK2}\footnote{In \cite{CK1}, the condition $\sigma < \frac{4}{25}$ should be corrected to $\sigma<\frac 4{41}$.} and Shankar--S\"odergren--Templier~\cite{SST1}, independently, and corresponds to the admissible values $\theta=\frac 79$ and $\omega=\frac {16}9$ in~\eqref{equation TT} and~\eqref{equation TT local 2} (see~\cite{TT}). In the recent paper~\cite{BTT}, Bhargava, Taniguchi and Thorne show that $\theta=\frac 23$ and $\omega=\frac 23$ are admissible, and deduce that~\eqref{Yang} holds as soon as $\sigma<\frac 2{7}$. Theorem~\ref{theorem main} refines these results by obtaining a power saving estimate containing lower-order terms for the left-hand side of~\eqref{Yang}. Note in particular that the fourth term on the right-hand side of~\eqref{equation main theorem} is of order $ X^{\frac{\sigma-1}6+o(1)}$ (see once more Lemma~\ref{one-level-2-term}).
\end{remark}
The Katz--Sarnak heuristics are strongly linked with statistics of eigenvalues of random matrices, and have been successful in predicting the main term in many families. However, this connection does not encompass lower-order terms. The major tool for making predictions in this direction is the $L$-functions Ratios Conjecture of Conrey, Farmer and Zirnbauer~\cite{CFZ}. In particular, these predictions are believed to hold down to an error term of size roughly the inverse of the square root of the size of the family. As an example, consider the unitary family of Dirichlet $L$-functions modulo $q$, in which the Ratios Conjecture's prediction is particularly simple. It is shown in~\cite{G+} that if $ \eta$ is a real even Schwartz function for which $\widehat \eta$ has compact (but arbitrarily large) support, then this conjecture implies the estimate
\begin{multline}
\frac 1{\phi(q)}\sum_{\chi \bmod q}\sum_{\gamma_{\chi}} \eta\Big( \frac{\log q}{2\pi} \gamma_\chi\Big) = \widehat{\eta}(0) \Big( 1-\frac { \log(8\pi e^{\gamma})}{\log q}-\frac{\sum_{p\mid q}\frac{\log p}{p-1}}{\log q}\Big) +\int_0^{\infty}\frac{\widehat{\eta}(0)-\widehat{\eta}(t)}{q^{\frac t2}-q^{-\frac t2}} dt + E(q),
\end{multline}
where $\rho_\chi=\frac 12+i\gamma_\chi$ is running through the non-trivial zeros of $L(s,\chi)$, and $E(q) \ll_{\varepsilon} q^{-\frac 1 2+\varepsilon}$. In~\cite{FM}, it was shown that this bound on $E(q)$ is essentially best possible in general, but can be improved when the support of $\widehat \eta$ is small. This last condition also results in improved error terms in various other families (see, for instance, \cite{MiSymplectic,MiOrthogonal,FPS1,FPS2,DFS}).
Following the Ratios Conjecture recipe, we can obtain a prediction for the average of $\mathfrak{D}_\phi(K)$ over the family $\mathcal{F}^\pm(X)$. The resulting conjecture, however, differs from Theorem~\ref{theorem main} by a term of order $X^{\frac{\sigma-1}6+o(1)} $, which is considerably larger than the expected error term $O_{\varepsilon}(X^{-\frac 12 +\varepsilon})$. We were able to isolate a specific step in the argument which could be improved in order to include this additional contribution. More precisely, modifying Step 4 in~\cite[Section 5.1]{CFZ},
we recover a refined Ratios Conjecture which predicts a term of order $X^{\frac{\sigma-1}6+o(1)}$, in agreement with Theorem~\ref{theorem main}.
\begin{theorem}
\label{theorem RC}
Let $\frac 12\leq \theta<\frac 56$ and $\omega\geq0 $ be such that~\eqref{equation TT local 2} holds. Assume Conjecture~\ref{ratios-thm} on the average of shifts of the logarithmic derivative of $\zeta_K(s)/\zeta(s)$, as well as the Riemann Hypothesis for $\zeta_K(s)$, for all $K\in \mathcal{F}^{\pm}(X)$. Let $\phi$ be a real even Schwartz function such that $ \widehat \phi$ is compactly supported. Then we have the estimate
\begin{multline*}
\frac 1{N^\pm(X)} \sum_{K \in \F^\pm(X)} \sum_{\gamma_K}\phi \Big(\frac{L\gamma_K}{2\pi }\Big)=\widehat \phi(0)\Big(1 + \frac{ \log (4 \pi^2 e) }{L} -\frac{C_2^\pm}{5C_1^\pm} \frac{X^{-\frac 16}}{L}
+ \frac{(C_2^\pm)^2 }{5(C_1^\pm)^2 } \frac{X^{-\frac 13}}{L} \Big) \\+ \frac1{\pi}\int_{-\infty}^{\infty}\phi\Big(\frac{Lr}{2\pi}\Big)\Re\Big(\frac{\Gamma'_{\pm}}{\Gamma_{\pm}}(\tfrac12+ir)\Big)dr -\frac{2}{L}\sum_{p,e}\frac{x_p\log p}{p^{\frac e2}}\widehat\phi\Big(\frac{\log p^e}{L}\Big) (\theta_e+\tfrac 1p) \\-\frac{2C_2^\pm X^{-\frac 16}}{C_1^\pm L}\Big( 1-\frac{C_2^\pm}{C_1^\pm}X^{-\frac{1}{6}}\Big) \sum_{p,e}\frac{\log p}{p^{\frac e2}}\widehat\phi\Big(\frac{\log p^e}{L}\Big) \beta_e(p)+J^\pm(X)
+ O_\varepsilon(X^{\theta-1+\varepsilon}),
\end{multline*}
where $J^\pm(X)$ is defined in \eqref{J X def}.
If $\sigma=\sup ( {\rm supp}(\widehat\phi)) <1$, then we have the estimate
\begin{equation}\label{J X asymp form}
J^\pm(X) = C^{\pm} X^{-\frac 13} \int_{\mathbb R} \Big( \frac{X}{(2\pi e)^2 }\Big)^{\frac{\xi}6} \widehat \phi(\xi) d\xi +O_\varepsilon( X^{\frac{\sigma-1}2+\varepsilon}),
\end{equation}
where $C^{\pm} $ is a nonzero absolute constant which is defined in~\eqref{equation definition C}. Otherwise, we have the identity
\begin{multline}
J^\pm(X)=-\frac{1}{\pi i} \int_{(\frac 15)} \phi \Big(\frac{Ls}{2\pi i}\Big) \Big( 1-\frac{C_2^\pm}{C_1^\pm}X^{-\frac{1}{6}}\Big) X^{-s} \frac{\Gamma_{\pm}(\frac 12-s)}{\Gamma_{\pm}(\frac 12+s)}\zeta(1-2s) \frac{ A_3( -s, s) }{1-s} ds \\
-\frac{1}{\pi i} \int_{(\frac 1{20})} \phi \Big(\frac{Ls}{2\pi i}\Big)\frac{C_2^\pm }{C_1^\pm} X^{-s-\frac 16} \frac{\Gamma_{\pm}(\frac 12-s)}{\Gamma_{\pm}(\frac 12+s)}\zeta(1-2s) \Big\{ \Big( 1-\frac{C_2^\pm}{C_1^\pm}X^{-\frac{1}{6}}\Big) \frac{\zeta(\tfrac 56-s)}{\zeta(\tfrac 56+s)}\frac{ A_4(-s,s)}{ 1- \frac{ 6s}5}
\\+\frac{C_2^\pm}{C_1^\pm}X^{-\frac 16} \frac{A_3( -s, s) }{1-s}
\Big\}ds,
\label{equation definition J(X)}
\end{multline}
where $A_3(-s,s)$ and $A_4(-s,s)$ are defined in~\eqref{A3 prod form} and~\eqref{equation definition A_4}, respectively.
\end{theorem}
\begin{remark}
It is interesting to compare Theorem~\ref{theorem RC} with Theorem~\ref{theorem main}, especially when $\sigma$ is small. Indeed, for $\sigma<1$, the difference between those two evaluations of the $1$-level density is given by
$$ C^\pm X^{-\frac 13}\int_{\mathbb R} \Big( \frac{X}{(2\pi e)^2 }\Big)^{\frac{\xi}6} \widehat \phi(\xi) d\xi+O_\varepsilon\big(X^{\frac{\sigma-1}{2}+\varepsilon}+X^{\theta - 1 + \sigma(\omega + \frac12 ) +\varepsilon}\big).$$
Selecting test functions $\phi$ for which $\widehat \phi \geq 0$ and $\sigma$ is positive but arbitrarily small, this shows that no matter how large $\omega$ is, any admissible $\theta<\frac 23$ in~\eqref{equation TT} and~\eqref{equation TT local 2} would imply that this difference is asymptotic to $C^\pm X^{-\frac 13}\int_{\mathbb R} ( \frac{X}{(2\pi e)^2 })^{\frac{\xi}6} \widehat \phi(\xi) d\xi \gg X^{-\frac 13}$. In fact, Roberts' numerics~\cite{Ro} (see also~\cite{B}), as well as our numerical investigations described in Appendix~\ref{appendix}, indicate that $\theta= \frac 12$ could be admissible in~\eqref{equation TT}
and~\eqref{equation TT local 2}. In other words, in this family, the Ratios Conjecture, as well as our refinement (combined with the assumption of~\eqref{equation TT} and~\eqref{equation TT local 2} for some $\theta<\frac 23$ and $\omega\geq0$), are not sufficient to obtain a prediction with precision $o(X^{-\frac 13})$. This is surprising, since Conrey, Farmer and Zirnbauer have conjectured this error term to be of size $O_\varepsilon(X^{-\frac 12+\varepsilon})$, and this has been confirmed in several important families~\cite{MiSymplectic,MiOrthogonal,FPS1,FPS2,DFS} (for a restricted set of test functions).
\end{remark}
\section*{Acknowledgments}
We would like to thank Frank Thorne for inspiring conversations, and for sharing the preprint \cite{BTT} with us. We also thank Keunyoung Jeong for providing us with preliminary computational results.
The computations in this paper were carried out on a personal computer using pari/gp as well as Belabas' CUBIC program. We thank Bill Allombert for his help regarding the latest development version of pari/gp.
\section{Background}
\label{section background}
Let $K/{\mathbb{Q}}$ be a non-Galois cubic field, and let $\widehat{K}$ be the Galois closure of $K$ over $\mathbb{Q}$. Then, the Dedekind zeta function of the field $K$ has the decomposition
\begin{align*}
\zeta_K(s) = \zeta(s) L(s,\rho,\widehat{K}/\mathbb{Q}),
\end{align*}
where $L(s,\rho,\widehat{K}/\mathbb{Q})$ is the Artin $L$-function associated to the two-dimensional representation $\rho$ of ${\rm Gal}(\widehat K/{\mathbb{Q}}) \simeq S_3$. The strong Artin conjecture is known for such representations; in this particular case we have an explicit underlying cuspidal representation $\tau$ of $GL_2/\mathbb{Q}$ such that $L(s,\rho,\widehat{K}/\mathbb{Q})=L(s,\tau)$. For the sake of completeness, let us describe $\tau$ in more detail. Let $F=\mathbb{Q}[\sqrt{D_K}]$, and let $\chi$ be a non-trivial character of ${\rm Gal}(\widehat{K}/F)\simeq C_3$, considered as a Hecke character of $F$. Then $\tau=\text{Ind}^{\mathbb Q}_{F} \chi$ is a dihedral representation with central character $\chi_{D_K}=\big(\frac {D_K}{\cdot} \big)$. When $D_K <0$, $\tau$ corresponds to a weight one newform of level $\left| D_K \right|$ and nebentypus $\chi_{D_K}$, and when $D_K>0$ it corresponds to a weight zero Maass form (see \cite[Introduction]{DFI}). In both cases, we will denote the corresponding automorphic form by $f_K$, and in particular we have the equality
$$L(s,\rho,\widehat{K}/\mathbb{Q}) = L(s,f_K).$$
We are interested in the analytic properties of $\zeta_K(s)/\zeta(s)=L(s,f_K)$.
We have the functional equation
\begin{equation}
\label{equation functional equation}
\Lambda(s,f_K)=\Lambda(1-s,f_K).
\end{equation}
Here,
$\Lambda(s,f_K):= |D_K|^{\frac s2} \Gamma_{f_K}(s) L(s,f_K)$ is the completed $L$-function, with the gamma factor
$$ \Gamma_{f_K}(s)=\begin{cases} \Gamma_+(s) & \text{ if } D_K>0 \text{ } (\text{that is, } K \text{ has signature } (3,0) ); \\ \Gamma_-(s) & \text{ if } D_K < 0 \text{ } (\text{that is, } K \text{ has signature } (1,1) ) ,
\end{cases}$$
where $\Gamma_+(s):=
\pi^{-s}\Gamma(\frac s2)^2$ and $\Gamma_-(s):=
\pi^{-s}\Gamma(\frac s2)\Gamma(\frac {s+1}2)$.
The coefficients of $L(s,f_K)$ have an explicit description in terms of the splitting type of the prime ideal $(p)\mathcal O_{K}$.
Writing
$$ L(s,f_K) = \sum_{n =1}^\infty \frac{\lambda_K (n)}{n^s},$$
we have that
\begin{center}
\begin{tabular}{c|c|c}
Splitting type & $(p)\mathcal O_{K} $ & $\lambda_K(p^e)$\\
\hline
$T_1$ & $\mathfrak p_1\mathfrak p_2\mathfrak p_3 $ & $ e+1$ \\
$T_2$ & $\mathfrak p_1\mathfrak p_2 $ & $(1+(-1)^e)/2$ \\
$T_3$ & $\mathfrak p_1$ & $\tau_e$ \\
$T_4$ & $\mathfrak p_1^2\mathfrak p_2 $ & $1$ \\
$T_5$ & $\mathfrak p_1^3$ & $0$, \\
\end{tabular}
\end{center}
where
$$ \tau_e:= \begin{cases}
1 &\text{ if } e\equiv 0 \bmod 3; \\
-1 &\text{ if } e\equiv 1 \bmod 3; \\
0 &\text{ if } e\equiv 2 \bmod 3.
\end{cases} $$
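This table can be recovered from the local Euler factors of $L(s,f_K)=\zeta_K(s)/\zeta(s)$: writing $x=p^{-s}$, the local factor of $\zeta_K$ at $p$ is the product of $(1-x^{f(\mathfrak p_i)})^{-1}$ over the primes $\mathfrak p_i$ above $p$, divided by the factor $(1-x)^{-1}$ of $\zeta$. The following Python/sympy check (our addition, not part of the paper; it assumes only these standard Euler factors) confirms the table:

```python
import sympy as sp

x = sp.symbols('x')  # x stands for p^{-s}

# Local Euler factors of L(s, f_K) = zeta_K(s)/zeta(s) by splitting type.
local_factors = {
    'T1': 1/(1 - x)**2,        # p = p1 p2 p3
    'T2': 1/(1 - x**2),        # p = p1 p2 (the second prime has residue degree 2)
    'T3': (1 - x)/(1 - x**3),  # p inert
    'T4': 1/(1 - x),           # p = p1^2 p2
    'T5': sp.Integer(1),       # p = p1^3
}

def tau(e):
    return [1, -1, 0][e % 3]

expected = {
    'T1': lambda e: e + 1,
    'T2': lambda e: (1 + (-1)**e)//2,
    'T3': tau,
    'T4': lambda e: 1,
    'T5': lambda e: 0,
}

E = 9
for T, F in local_factors.items():
    coeffs = sp.series(F, x, 0, E + 1).removeO()
    for e in range(1, E + 1):
        assert coeffs.coeff(x, e) == expected[T](e), (T, e)
```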
Furthermore, we find that the coefficients of the reciprocal
\begin{equation}\label{lemobius}
\frac{1}{L(s,f_K)} = \sum_{n=1}^\infty \frac{ \mu_K (n)}{n^s}
\end{equation}
are given by
\begin{equation*}
\mu_{K}(p^k) =
\begin{cases}
-\lambda_K(p) & \mbox{~if~} k = 1;\\
\Big( \frac{D_K} p\Big) & \mbox{~if~} k = 2;\\
0 & \mbox{~if~} k > 2.
\end{cases}
\end{equation*}
The remaining values of $\lambda_K(n)$ and $\mu_K(n)$ are determined by multiplicativity.
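The values of $\mu_K$ follow by expanding the reciprocal local factors of $L(s,f_K)$; here we also use the standard fact (our annotation, not stated above) that $\big(\frac{D_K}p\big)$ equals $1$, $-1$, $1$ for the types $T_1$, $T_2$, $T_3$ (the sign of the Frobenius element in $S_3$), and $0$ at the ramified types $T_4$, $T_5$. A Python/sympy sketch:

```python
import sympy as sp

x = sp.symbols('x')  # x stands for p^{-s}

# Reciprocal local Euler factors 1/L_p of L(s, f_K) by splitting type.
reciprocal_factors = {
    'T1': (1 - x)**2,          # = 1 - 2x + x^2
    'T2': 1 - x**2,
    'T3': (1 - x**3)/(1 - x),  # = 1 + x + x^2
    'T4': 1 - x,
    'T5': sp.Integer(1),
}

# (lambda_K(p), chi_{D_K}(p)) per type; chi vanishes at ramified primes.
data = {'T1': (2, 1), 'T2': (0, -1), 'T3': (-1, 1), 'T4': (1, 0), 'T5': (0, 0)}

for T, G in reciprocal_factors.items():
    lam, chi = data[T]
    poly = sp.expand(sp.cancel(G))
    assert poly.coeff(x, 1) == -lam  # mu_K(p)   = -lambda_K(p)
    assert poly.coeff(x, 2) == chi   # mu_K(p^2) = (D_K/p)
    assert sp.degree(poly, x) <= 2   # mu_K(p^k) = 0 for k > 2
```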
Finally, the coefficients of the logarithmic derivative
$$ -\frac{L'}{L}(s,f_K)=\sum_{n\geq 1} \frac{\Lambda(n)a_K(n)}{n^s}$$
are given by
\begin{center}
\begin{tabular}{c|c|c}
Splitting type & $(p) $ & $a_K(p^e)$ \\
\hline
$T_1$ & $\mathfrak p_1\mathfrak p_2\mathfrak p_3 $ & $ 2$ \\
$T_2$ & $\mathfrak p_1\mathfrak p_2 $ & $1+(-1)^e$ \\
$T_3$ & $\mathfrak p_1$ & $\eta_e$ \\
$T_4$ & $\mathfrak p_1^2\mathfrak p_2 $ & $1$ \\
$T_5$ & $\mathfrak p_1^3$ & $0$, \\
\end{tabular}
\end{center}
where
$$ \eta_e:= \begin{cases}
2 &\text{ if } e \equiv 0 \bmod 3; \\
-1 &\text{ if } e \equiv \pm 1 \bmod 3.
\end{cases} $$
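The $a_K(p^e)$ table can likewise be checked: writing the local factor as $\prod_i(1-\alpha_i p^{-s})^{-1}$, the logarithmic derivative gives $a_K(p^e)=\sum_i \alpha_i^e$. For an inert prime ($T_3$) the inverse roots are the primitive cube roots of unity, so $\eta_e=2\cos(2\pi e/3)$. A small numerical check (ours, not from the paper):

```python
import cmath

w = cmath.exp(2j * cmath.pi / 3)  # primitive cube root of unity
# Inverse roots alpha_i of the local factor L_p = prod_i (1 - alpha_i x)^{-1};
# then a_K(p^e) = sum_i alpha_i^e.
alphas = {
    'T1': [1, 1],              # 1/(1-x)^2
    'T2': [1, -1],             # 1/(1-x^2)
    'T3': [w, w.conjugate()],  # (1-x)/(1-x^3) = 1/((1-wx)(1-w^2 x))
    'T4': [1],                 # 1/(1-x)
    'T5': [],                  # 1
}

def eta(e):
    return 2 if e % 3 == 0 else -1

for e in range(1, 13):
    a = {t: (sum(al ** e for al in als).real if als else 0.0)
         for t, als in alphas.items()}
    assert abs(a['T1'] - 2) < 1e-9
    assert abs(a['T2'] - (1 + (-1) ** e)) < 1e-9
    assert abs(a['T3'] - eta(e)) < 1e-9
    assert abs(a['T4'] - 1) < 1e-9
    assert abs(a['T5']) < 1e-9
```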
We now describe explicitly the constants $A_p^\pm(T)$ and $B_p^\pm(T)$ that appear in~\eqref{equation TT local 2}. More generally, let $\mathbf{p} = ( p_1 , \ldots , p_J)$ be a vector of primes, and let $\mathbf{k} = ( k_1 ,\ldots , k_J )\in \{1,2,3,4,5\}^J $. (When $J=1$, $\mathbf{p}=(p)$ is a scalar and we will abbreviate by writing $\mathbf{p}=p$, and similarly for $\mathbf{k}$.) We expect that
\begin{align}\label{equation TT local 2 vector form}
N^\pm_{\mathbf{p}} (X,T_\mathbf{k}) : &= \#\{ K \in \mathcal F^\pm(X) : p_j \text{ has splitting type }T_{k_j} \text{ in }K \; (1\leq j\leq J)\} \notag
\\& = A^\pm_\mathbf{p} (T_\mathbf{k} ) X +B^\pm_\mathbf{p}(T_\mathbf{k}) X^{\frac 56} + O_{\varepsilon}((p_1 \cdots p_J )^{ \omega}X^{\theta+\varepsilon}),
\end{align}
for some $\omega \geq0$ and with the same $\theta$ as in \eqref{equation TT}.
Here,
$$ A^\pm_\mathbf{p} ( T_\mathbf{k} ) = C_1^\pm \prod_{j = 1}^J ( x_{p_j} c_{k_j} (p_j ) ),\hspace{1cm} B^\pm_\mathbf{p} ( T_\mathbf{k} ) = C_2^\pm \prod_{j = 1}^J ( y_{p_j} d_{k_j} (p_j ) ) ,$$
$$x_p:=\Big(1+\frac 1p+\frac 1{p^2}\Big)^{-1}, \hspace{2cm} y_p:=\frac{1-p^{-\frac 13}}{(1-p^{-\frac 53})(1+p^{-1})}, $$
and $c_k(p)$ and $d_k(p)$ are defined in the following table:
\begin{center}
\begin{tabular}{c|c|c}
$k$ & $c_k(p)$ & $d_k(p)$ \\
\hline
$1$ & $\frac 16 $ & $\frac{(1+p^{-\frac 13})^3}6$ \\
$2$ & $\frac 12 $ & $\frac{(1+p^{-\frac 13})(1+p^{-\frac 23})}{2} $ \\
$3$ & $\frac 13$ & $\frac{(1+p^{-1})}{3} $ \\
$4$ & $\frac 1{p} $ & $\frac{(1+p^{-\frac 13})^2}{p} $ \\
$5$ & $\frac 1{p^2}$ & $\frac{(1+p^{-\frac 13})}{p^2} .$
\end{tabular}
\end{center}
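As a consistency check (ours, not part of the original text), the densities in both terms of the count should sum to $1$ over the five splitting types, i.e. $\sum_k x_p c_k(p)=\sum_k y_p d_k(p)=1$; this can be verified numerically:

```python
# For each prime p, the splitting-type densities in the main and secondary
# terms should each sum to 1: sum_k x_p c_k(p) = sum_k y_p d_k(p) = 1.
def densities_sum(p):
    t = p ** (-1.0 / 3)
    x_p = 1 / (1 + 1 / p + 1 / p ** 2)
    y_p = (1 - t) / ((1 - t ** 5) * (1 + t ** 3))
    c = [1 / 6, 1 / 2, 1 / 3, 1 / p, 1 / p ** 2]
    d = [(1 + t) ** 3 / 6,
         (1 + t) * (1 + t ** 2) / 2,
         (1 + 1 / p) / 3,
         (1 + t) ** 2 / p,
         (1 + t) / p ** 2]
    return x_p * sum(c), y_p * sum(d)

for p in (2, 3, 5, 101):
    s1, s2 = densities_sum(p)
    assert abs(s1 - 1) < 1e-9 and abs(s2 - 1) < 1e-9
```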
Recently, Bhargava, Taniguchi and Thorne~\cite{BTT} have shown that the values $\theta=\omega=\frac{2}3$ are admissible in~\eqref{equation TT local 2 vector form}.
\section{New lower-order terms in the $1$-level density}
In this section we shall estimate the 1-level density
$$ \frac 1{N^\pm(X)} \sum_{K \in \F^\pm(X)} \mathfrak{D}_{\phi}(K) $$
assuming the cubic field count~\eqref{equation TT local 2} for some fixed parameters $\frac 12\leq \theta <\frac 56$ and $ \omega\geq 0$. Throughout the paper we will use the shorthand
$$ L =\log \Big(\frac X{(2\pi e)^2}\Big). $$
The starting point of this section is the explicit formula.
\begin{lemma}
\label{lemma explicit formula}
Let $\phi$ be a real even Schwartz function whose Fourier transform is compactly supported, and let $K\in \mathcal{F}^\pm (X)$. We have the formula
\begin{equation}\label{explicit formula}\begin{split}
\mathfrak{D}_\phi(K) = \sum_{\gamma_K}\phi\Big(\frac{L\gamma_K}{2\pi}\Big)=& \frac{\widehat\phi(0)}{L } \log{|D_K|} + \frac1{\pi}\int_{-\infty}^{\infty}\phi\Big(\frac{Lr}{2\pi}\Big)\Re\Big(\frac{\Gamma'_{\pm}}{\Gamma_{\pm}}(\tfrac12+ir)\Big)dr \\
& -\frac2{L }\sum_{n=1}^{\infty}\frac{\Lambda(n)}{\sqrt n}\widehat\phi\Big(\frac{\log n}{L}\Big) a_K(n),
\end{split}\end{equation}
where $\rho_K=\frac 12+i\gamma_K$ runs over the non-trivial zeros of $L(s,f_K)$.
\end{lemma}
\begin{proof}
This follows from e.g.~\cite[Proposition 2.1]{RS}, but for the sake of completeness we reproduce the proof here. By Cauchy's integral formula, we have the identity
\begin{multline*}
\sum_{\gamma_K}\phi\Big(\frac{L\gamma_K}{2\pi}\Big)=\frac{1}{2\pi i} \int_{(\frac 32)}\phi\left( \frac{L}{2\pi i} \left( s - \frac{1}{2} \right)\right) \frac{\Lambda'}{\Lambda}(s,f_K)ds \\-\frac{1}{2\pi i} \int_{(-\frac 12)}\phi\left( \frac{L}{2\pi i} \left( s - \frac{1}{2} \right)\right) \frac{\Lambda'}{\Lambda}(s,f_K) ds.
\end{multline*}
These integrals converge since $\phi\left(\frac{L}{2 \pi i}\left( s-\frac{1}{2} \right) \right)$ is rapidly decreasing in vertical strips.
For the second integral, we apply the change of variables $s \rightarrow 1-s$. Then, by the functional equation in the form $ \frac{\Lambda'}{\Lambda}(1-s,f_K)=-\frac{\Lambda'}{\Lambda}(s,f_K)$ and since $\phi(-s)=\phi(s)$, we deduce that
\begin{align*}
\sum_{\gamma_K}\phi\Big(\frac{L\gamma_K}{2\pi}\Big)=\frac{1}{\pi i} \int_{(\frac 32)} \phi\left( \frac{L}{2\pi i} \left( s - \frac{1}{2} \right)\right) \frac{\Lambda'}{\Lambda}(s,f_K)ds.
\end{align*}
Next, we insert the identity $$ \frac{\Lambda'}{\Lambda}(s,f_K)=\frac{1}{2}\log |D_K| + \frac{\Gamma_{f_K}'}{\Gamma_{f_K}}(s) - \sum_{n\geq 1}\frac{\Lambda(n)a_K(n)}{n^s}$$ and separate into three integrals. By shifting the contour of integration to $\Re(s)=\frac12$ in the first two integrals, we obtain the first two terms on the right-hand side of~\eqref{explicit formula}. The third integral is equal to
\begin{align*}
-2\sum_{n\geq 1} \frac{\Lambda(n)a_K(n)}{\sqrt{n}}\frac{1}{2\pi i} \int_{(\frac 32)} \phi\left( \frac{L}{2\pi i} \left( s - \frac{1}{2} \right)\right)n^{-(s-\frac 12)}ds.
\end{align*}
By moving the contour to $\Re(s)=\frac 12$ and applying Fourier inversion, we find the third term on the right-hand side of~\eqref{explicit formula} and the claim follows.
\end{proof}
Our goal is to average \eqref{explicit formula} over $K \in \mathcal{F}^\pm(X)$. We begin with the first term.
\begin{lemma}
\label{lemma average log}
Assume that \eqref{equation TT} holds for some $0\leq\theta<\frac 56$. Then, we have the estimate
$$\frac{1}{ N^\pm(X)} \sum_{K \in \F^\pm(X)} \log{|D_K|} = \log X - 1 -\frac{C_2^\pm}{5C_1^\pm} X^{-\frac 16} + \frac{(C_2^\pm)^2}{ 5 (C_1^\pm)^2 } X^{- \frac13} +O_{\varepsilon}(X^{\theta-1+\varepsilon}+X^{-\frac 12}).$$
\end{lemma}
\begin{proof}
Applying partial summation, we find that
\begin{align*}
\sum_{K \in \F^\pm(X)} \log |D_K| = \int_1^X (\log t) dN^\pm(t) = N^\pm(X)\log X-N^\pm(X)-\frac 15 C_2^\pm X^{\frac 56} +O_{\varepsilon}(X^{\theta+\varepsilon}).
\end{align*}
The claimed estimate follows from applying~\eqref{equation TT}.
\end{proof}
For the second term of~\eqref{explicit formula}, we note that it is constant on $\mathcal{F}^\pm(X)$. We can now concentrate our efforts on the average of the third (and most crucial) term
\begin{equation}
\label{equation definitino I}
I^{\pm}(X;\phi) := -\frac2{LN^\pm(X) }\sum_{p} \sum_{e=1}^\infty \frac{\log p }{p^{e/2}} \widehat\phi\Big(\frac{e \log p}{L}\Big) \sum_{K \in \F^\pm(X)} a_K(p^e ).
\end{equation}
It follows from~\eqref{equation TT local 2} that
\begin{align} \notag
\sum_{K \in \F^\pm(X)} a_K(p^e)&=2N^\pm_p(X,T_1)+(1+(-1)^e)N^\pm_p(X,T_2)+ \eta_e N^\pm_p(X,T_3)+N^\pm_p(X,T_4)\\
&= C_1^\pm X (\theta_e+\tfrac 1p)x_p+ C_2^\pm X^{\frac 56} (1+p^{-\frac 13})(\kappa_e(p)+p^{-1}+p^{-\frac 43})y_p +O_{\varepsilon}(p^{\omega }X^{\theta +\varepsilon}), \label{sum p e 1}
\end{align}
where
\begin{equation}
\theta_e:=\delta_{2\mid e}+\delta_{3\mid e}=\begin{cases}
2 &\text{ if } e\equiv 0 \bmod 6 \\
0 &\text{ if } e\equiv 1 \bmod 6 \\
1 &\text{ if } e\equiv 2 \bmod 6 \\
1 &\text{ if } e\equiv 3 \bmod 6 \\
1 &\text{ if } e\equiv 4 \bmod 6 \\
0 &\text{ if } e\equiv 5 \bmod 6,
\end{cases}
\label{definition theta_e}
\end{equation}
and
\begin{equation}
\kappa_e(p):=(\delta_{2\mid e}+\delta_{3\mid e})(1+p^{-\frac 23}) + (1-\delta_{3\mid e})p^{-\frac 13}=\begin{cases}
2+2p^{-\frac 23} &\text{ if } e\equiv 0 \bmod 6 \\
p^{-\frac 13} &\text{ if } e\equiv 1 \bmod 6 \\
1+p^{-\frac 13}+p^{-\frac 23} &\text{ if } e\equiv 2 \bmod 6 \\
1+p^{-\frac 23} &\text{ if } e\equiv 3 \bmod 6 \\
1+p^{-\frac 13}+p^{-\frac 23} &\text{ if } e\equiv 4 \bmod 6 \\
p^{-\frac 13} &\text{ if } e\equiv 5 \bmod 6.
\end{cases}
\label{definition kappa_e}
\end{equation}
Here, $ \delta_{\mathcal P} $ is equal to $1$ if $ \mathcal P$ is true, and is equal to $0$ otherwise. Note that we have the symmetries $\theta_{-e} = \theta_e$ and $\kappa_{-e}(p) = \kappa_e(p)$. With this notation, we prove the following proposition.
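The passage to \eqref{sum p e 1} rests on the identities $2c_1(p)+(1+(-1)^e)c_2(p)+\eta_e c_3(p)+c_4(p)=\theta_e+\frac 1p$ and $2d_1(p)+(1+(-1)^e)d_2(p)+\eta_e d_3(p)+d_4(p)=(1+p^{-\frac 13})(\kappa_e(p)+p^{-1}+p^{-\frac 43})$. A quick numerical verification (our sketch, not part of the argument):

```python
def coeffs(p, e):
    """Weighted sums of c_k(p) and d_k(p) with the a_K(p^e) weights per type."""
    t = p ** (-1.0 / 3)
    c = [1 / 6, 1 / 2, 1 / 3, 1 / p, 1 / p ** 2]
    d = [(1 + t) ** 3 / 6, (1 + t) * (1 + t ** 2) / 2, (1 + 1 / p) / 3,
         (1 + t) ** 2 / p, (1 + t) / p ** 2]
    eta = 2 if e % 3 == 0 else -1
    w = [2, 1 + (-1) ** e, eta, 1, 0]  # a_K(p^e) for types T_1, ..., T_5
    return (sum(wi * ci for wi, ci in zip(w, c)),
            sum(wi * di for wi, di in zip(w, d)))

def theta(e):
    return (e % 2 == 0) + (e % 3 == 0)

def kappa(p, e):
    t = p ** (-1.0 / 3)
    return ((e % 2 == 0) + (e % 3 == 0)) * (1 + t ** 2) + (e % 3 != 0) * t

for p in (2, 5, 7):
    for e in range(1, 13):
        s_main, s_sec = coeffs(p, e)
        t = p ** (-1.0 / 3)
        assert abs(s_main - (theta(e) + 1 / p)) < 1e-9
        assert abs(s_sec - (1 + t) * (kappa(p, e) + 1 / p + t ** 4)) < 1e-9
```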
\begin{proposition}
\label{proposition prime sum}
Let $\phi$ be a real even Schwartz function for which $\widehat \phi$ has compact support and let $\sigma:=\sup({\rm supp}(\widehat \phi))$. Assume that~\eqref{equation TT local 2} holds for some fixed parameters $0\leq \theta <\frac 56$ and $\omega\geq 0$. Then we have the estimate
\begin{multline*}
I^{\pm}(X;\phi)= -\frac{2}{L}\sum_{p,e}\frac{x_p\log p}{p^{\frac e2}}\widehat\phi\Big(\frac{\log p^e}{L}\Big) (\theta_e+\tfrac 1p) \\
+ \frac{2}{L}\bigg( - \frac{ C_2^\pm }{C_1^\pm }X^{-\frac 16} + \frac{ ( C_2^\pm )^2 }{(C_1^\pm )^2 }X^{-\frac 13} \bigg) \sum_{p,e}\frac{\log p}{p^{\frac e2}}\widehat\phi\Big(\frac{\log p^e}{L}\Big) \beta_e(p) +O_{\varepsilon}(X^{\theta - 1 + \sigma(\omega + \frac12 ) +\varepsilon}+X^{-\frac 12+\frac{\sigma}6}),
\end{multline*}
where
\begin{equation}
\beta_e(p):=y_p (1+p^{-\frac 13})(\kappa_e(p) +p^{-1}+p^{-\frac 43})-x_p(\theta_e+\tfrac 1p).
\label{eqaution definition beta}
\end{equation}
\end{proposition}
\begin{proof}
Applying~\eqref{sum p e 1}, we see that
\begin{multline*}
I^{\pm}(X;\phi) = -\frac{2C_1^\pm X}{LN^\pm(X)}\sum_p \sum_{e=1}^\infty \frac{x_p\log p}{p^{\frac e2}}\widehat\phi\Big(\frac{\log p^e}{L}\Big) (\theta_e+\tfrac 1p)\\
- \frac{2C_2^\pm X^{\frac 56}}{LN^\pm(X)} \sum_p \sum_{e=1}^\infty \frac{y_p(1+p^{-\frac 13})\log p}{p^{\frac e2}}\widehat\phi\Big(\frac{\log p^e}{L}\Big) (\kappa_e(p) +p^{-1}+p^{-\frac 43}) \\
+O_{\varepsilon}\Big(X^{\theta - 1 +\varepsilon}\sum_{\substack{p^e \leq X^\sigma \\ e\geq 1}}p^{\omega -\frac e2}\log p \Big)\\
= -\frac{2 }{L }\bigg( 1 - \frac{ C_2^\pm}{C_1^\pm} X^{- \frac16} + \frac{ (C_2^\pm)^2 }{(C_1^\pm)^2 } X^{- \frac13} \bigg) \sum_p \sum_{e=1}^\infty \frac{x_p\log p}{p^{\frac e2}}\widehat\phi\Big(\frac{\log p^e}{L}\Big) (\theta_e+\tfrac 1p)\\
- \frac{2}{L} \bigg( \frac{C_2^\pm }{C_1^\pm}X^{-\frac 16} - \frac{ (C_2^\pm)^2 }{(C_1^\pm)^2 } X^{- \frac13} \bigg) \sum_p \sum_{e=1}^\infty \frac{y_p(1+p^{-\frac 13})\log p}{p^{\frac e2}}\widehat\phi\Big(\frac{\log p^e}{L}\Big) (\kappa_e(p) +p^{-1}+p^{-\frac 43}) \\ +O_{\varepsilon}\big(X^{\theta - 1 + \sigma(\omega + \frac12 ) +\varepsilon} +X^{-\frac 12+\frac{\sigma}6}\big).
\end{multline*}
Note in particular that the error term $O(X^{-\frac 12+\frac{\sigma}6})$ bounds the size of the contribution of the first omitted term in the expansion of $X^{\frac56}/N^{\pm}(X)$ appearing in the second double sum above. Indeed, this follows since $\kappa_1(p)=p^{-\frac13}$ and
\begin{equation*}
X^{-\frac12}\sum_{p\leq X^{\sigma}}\frac{\log p}{p^{\frac56}}=O\big(X^{-\frac 12+\frac{\sigma}6}\big).
\end{equation*}
The claimed estimate follows.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{theorem main}]
Combine Lemmas~\ref{lemma explicit formula} and~\ref{lemma average log} with Proposition~\ref{proposition prime sum}.
\end{proof}
We shall estimate $ I^{\pm}(X;\phi)$ further, and find asymptotic expansions for the double sums in Proposition~\ref{proposition prime sum}.
\begin{lemma} \label{one-level-2-term}
Let $\phi$ be a real even Schwartz function whose Fourier transform is compactly supported, define $\sigma:=\sup({\rm supp}(\widehat \phi))$, and let $\ell$ be a positive integer. Define
$$ I_1(X;\phi) := \sum_{p}\sum_{e=1}^\infty \frac{x_p\log p}{p^{\frac e2}}\widehat\phi\Big(\frac{\log p^e}{L}\Big) (\theta_e+\tfrac 1p) , \qquad I_2(X;\phi) := \sum_{p}\sum_{e=1}^\infty \frac{\log p}{p^{\frac e2}}\widehat\phi\Big(\frac{\log p^e}{L}\Big) \beta_e(p) .$$
Then, we have the asymptotic expansion
$$
I_1(X;\phi) = \frac{\phi (0)}{4} L + \sum_{n=0 }^\ell \frac{ \widehat{\phi}^{(n)} (0) \nu_1 (n) }{ n!} \frac{ 1 }{L^n } + O_{\ell} \Big( \frac{1 }{L^{\ell+1} } \Big),
$$
where
\begin{multline*} \nu_1 (n ) := \delta_{n=0} + \sum_{p}\sum_{ e \neq 2 } \frac{x_p e^n ( \log p)^{n+1} }{p^{\frac e2}} (\theta_e+\tfrac 1p) + \sum_p \frac{ 2^n ( \log p)^{n+1} }{p} \Big( x_p\Big( 1 + \frac1p\Big) -1 \Big) \\
+ \int_1^\infty \frac{2^n ( \log u)^{n-1} ( \log u - n ) }{u^2} \mathcal{R} (u) du
\end{multline*}
with $ \mathcal{R}(u) := \sum_{ p \leq u } \log p - u $. Moreover, we have the estimate
$$I_2(X;\phi) = L \int_0^\infty \widehat{\phi} ( u) e^{\frac{Lu}6} du + O\Big( X^{\frac \sigma 6} e^{-c_0(\sigma)\sqrt{\log X}}\Big),$$
where $c_0(\sigma)>0$ is a constant. Under RH, we have the more precise expansion
$$I_2(X;\phi) = L \int_0^\infty \widehat{\phi} ( u) e^{\frac{Lu}{6}} du + \sum_{n=0}^\ell \frac{ \widehat{\phi}^{(n)} (0) \nu_2 (n) }{ n!} \frac{ 1 }{L^n } + O_{\ell} \Big( \frac{1 }{L^{\ell+1} } \Big), $$
where
\begin{multline*}
\nu_2 (n ) := \delta_{n=0} + \sum_{p}\sum_{e=2}^\infty \frac{e^n (\log p )^{n+1} \beta_e(p) }{p^{\frac e2}} +
\sum_{p} \frac{(\log p)^{n+1}}{p^{\frac 12}} \Big( \beta_1 (p) - \frac{1}{p^{\frac13}} \Big)\\
+ \int_1^{\infty} \frac{( \log u)^{n-1} ( 5 \log u - 6n) }{6 u^{\frac{11}{6} }} \mathcal{R} (u) du.
\end{multline*}
\end{lemma}
\begin{proof}
We first split the sums as
\begin{equation}\label{prime sum eqn 1}
I_1(X;\phi) = \sum_p \frac{\log p }{p} \widehat{\phi} \Big( \frac{ 2 \log p }{L} \Big) + I'_1(X;\phi), \qquad
I_2(X;\phi) =\sum_p \frac{ \log p }{ p^{\frac56}}\widehat\phi\Big(\frac{\log p }{L}\Big) +I'_2(X;\phi) ,
\end{equation}
where
\begin{equation}\begin{split}\label{prime sum eqn 3}
I'_1(X;\phi) & :=
\sum_{p}\sum_{e \neq 2 } \frac{x_p\log p}{p^{\frac e2}}\widehat\phi\Big(\frac{\log p^e}{L}\Big) (\theta_e+\tfrac 1p) + \sum_p \frac{ \log p }{p} \Big( x_p\Big( 1 + \frac1p\Big) -1 \Big) \widehat{\phi} \Big( \frac{ 2 \log p }{L} \Big), \\
I'_2(X;\phi) & := \sum_{p}\sum_{e=2}^\infty \frac{\log p}{p^{\frac e2}}\widehat\phi\Big(\frac{\log p^e}{L}\Big) \beta_e(p) +
\sum_{p} \frac{\log p}{p^{\frac 12}}\widehat\phi\Big(\frac{\log p }{L}\Big) \Big( \beta_1 (p) - \frac{1}{p^{\frac13}} \Big) .
\end{split}\end{equation}
We may also rewrite the sums in~\eqref{prime sum eqn 1} using partial summation as follows:
\begin{equation}\begin{split}\label{prime sum eqn 2}
\sum_p \frac{\log p }{p} \widehat{\phi} \Big( \frac{ 2 \log p }{L} \Big) & = \int_1^\infty \frac{1 }{u} \widehat{\phi} \Big( \frac{ 2 \log u }{L} \Big)d( u+ \mathcal{R}(u)) \\
& = \frac{ \phi(0) }{4} L + \widehat{\phi}(0) - \int_1^\infty \Big(\frac{-1}{u^2} \widehat{\phi} \Big( \frac{2 \log u }{L} \Big) + \frac{2}{u^2L} \widehat{\phi}' \Big( \frac{2 \log u }{L} \Big) \Big) \mathcal{R} (u) du , \\
\sum_p \frac{\log p }{p^{\frac56} } \widehat{\phi} \Big( \frac{ \log p }{L} \Big) & = L \int_0^\infty \widehat{\phi} ( u) e^{Lu/6} du + \widehat{\phi}(0) \\
& \hspace{2cm}- \int_1^{X^{\sigma}} \Big(\frac{-5}{6 u^2} \widehat{\phi} \Big( \frac{ \log u }{L} \Big) + \frac{ 1 }{u^2L} \widehat{\phi}' \Big( \frac{ \log u }{L} \Big) \Big) u^{\frac16} \mathcal{R} (u) du .
\end{split}\end{equation}
Next, for any $\ell\geq 1$ and $ |t| \leq \sigma $, Taylor's theorem reads
\begin{equation}\label{taylor eqn}
\widehat{\phi} ( t ) = \sum_{n=0}^\ell \frac{ \widehat{\phi}^{(n)} (0) }{ n!} t^n + O_{\ell} ( |t|^{\ell+1}),
\end{equation}
and one has a similar expansion for $\widehat \phi'$. The claimed estimates follow from substituting this expression into~\eqref{prime sum eqn 3} and \eqref{prime sum eqn 2} and evaluating the error term using the prime number theorem $ \mathcal{R}(u)\ll u e^{-c \sqrt{ \log u }} $.
\end{proof}
We end this section by proving Theorem~\ref{theorem omega result counts}.
\begin{proof}[Proof of Theorem~\ref{theorem omega result counts}]
Assume that $\theta,\omega\geq 0$ are admissible values in~\eqref{equation TT local 2}, and are such that $\theta+\omega <\frac 12$. Let $\phi$ be any real even Schwartz function such that $\widehat \phi\geq 0$ and $1< \sup({\rm supp}(\widehat \phi)) < (\frac 56-\theta)/(\frac 13+\omega)$; this is possible thanks to the restriction $\theta+\omega <\frac 12$. Combining Lemmas~\ref{lemma explicit formula} and~\ref{lemma average log} with Proposition~\ref{proposition prime sum}, we obtain the estimate\footnote{This is similar to the proof of Theorem~\ref{theorem main}. However, since we have a different condition on $\theta$ (that is $\theta + \omega < \frac 12$), there is an additional error term in the current estimate.}
\begin{multline}\frac 1{N^\pm(X)} \sum_{K \in \F^\pm(X)} \mathfrak{D}_{\phi}(K)=\widehat \phi(0)\Big(1 + \frac{ \log (4 \pi^2 e) }{L} -\frac{C_2^\pm}{5C_1^\pm} \frac{X^{-\frac 16}}{L}
+ \frac{(C_2^\pm)^2 }{5(C_1^\pm)^2 } \frac{X^{-\frac 13}}{L} \Big) \\
+ \frac1{\pi}\int_{-\infty}^{\infty}\phi\Big(\frac{Lr}{2\pi}\Big)\Re\Big(\frac{\Gamma'_{\pm}}{\Gamma_{\pm}}(\tfrac12+ir)\Big)dr -\frac{2}{L}\sum_{p,e}\frac{x_p\log p}{p^{\frac e2}}\widehat\phi\Big(\frac{\log p^e}{L}\Big) (\theta_e+\tfrac 1p)\\ -\frac{2C_2^\pm X^{-\frac 16}}{C_1^\pm L}\Big( 1-\frac{C_2^\pm}{C_1^\pm}X^{-\frac{1}{6}}\Big) \sum_{p,e}\frac{\log p}{p^{\frac e2}}\widehat\phi\Big(\frac{\log p^e}{L}\Big) \beta_e(p) +O_{\varepsilon}(X^{\theta - 1 + \sigma(\omega + \frac12 ) +\varepsilon}+X^{-\frac 12+\frac \sigma 6}),
\label{equation corollary}
\end{multline}
where $\sigma=\sup({\rm supp}(\widehat \phi))$.
To bound the integral involving the gamma function in~\eqref{equation corollary}, we note that Stirling's formula implies that for $s$ in any fixed vertical strip minus discs centered at the poles of $\Gamma_\pm(s)$, we have the estimate
$$ \Re \left( \frac{ \Gamma'_{\pm} }{ \Gamma_{\pm}} ( s) \right) = \log | s | + O(1).$$
Now, $ \phi(x) \ll |x|^{-2}$, and thus
$$ \frac{1 }{ \pi} \int_{-\infty}^\infty \phi \Big( \frac{ Lr}{ 2 \pi} \Big) \Re \Big( \frac{ \Gamma'_{\pm} }{ \Gamma_{\pm}} ( \tfrac12 + ir ) \Big) dr \ll \int_{-1}^1 \left| \phi \Big( \frac{ Lr}{ 2 \pi} \Big) \right| dr + \int_{ |r|\geq 1 } \frac{ \log (1+|r|)}{ (Lr)^2} dr \ll \frac1L. $$
Moreover, Lemma~\ref{one-level-2-term} implies the estimates
$$- \frac{2}{L}\sum_{p,e}\frac{x_p\log p}{p^{\frac e2}}\widehat\phi\Big(\frac{\log p^e}{L}\Big) (\theta_e+\tfrac 1p) \ll 1$$
and
\begin{multline*}
-\frac{2C_2^\pm X^{-\frac 16}}{C_1^\pm L}\Big( 1-\frac{C_2^\pm}{C_1^\pm}X^{-\frac{1}{6}}\Big) \sum_{p,e}\frac{\log p}{p^{\frac e2}}\widehat\phi\Big(\frac{\log p^e}{L}\Big) \beta_e(p) \\= -\frac{2C_2^\pm X^{-\frac 16}}{C_1^\pm } \int_0^\infty \widehat \phi(u) e^{\frac{Lu}{6}} du +O\big( X^{-\frac { 1}6}+X^{\frac{\sigma -2}{6}} \big),
\end{multline*}
since the Riemann hypothesis for $\zeta_K(s)$ implies the Riemann hypothesis for $\zeta(s)$. Combining these estimates, we deduce that
the right-hand side of~\eqref{equation corollary} is
$$ \leq -C_\varepsilon X^{\frac{\sigma-1}6-\varepsilon}+O_\varepsilon(1+X^{\frac{\sigma-1}6-\delta+\varepsilon}+X^{-\frac 13+\frac \sigma 6}), $$
where $\varepsilon>0$ is arbitrary, $C_\varepsilon$ is a positive constant, and $\delta :=\frac{\sigma-1}6 -( \theta - 1 + \sigma(\omega + \frac12 )) >0 $. However, for small enough $\varepsilon$, this contradicts the bound
$$ \frac 1{N^\pm(X)} \sum_{K \in \F^\pm(X)} \mathfrak{D}_{\phi}(K) = O( \log X), $$
which is a direct consequence of the Riemann Hypothesis for $\zeta_K(s)$ and the Riemann--von Mangoldt formula~\cite[Theorem 5.31]{IK}.
\end{proof}
\section{A refined Ratios Conjecture}
\label{section RC}
The celebrated $L$-functions Ratios Conjecture \cite{CFZ} predicts precise formulas for estimates of averages of ratios of (products of) $L$-functions evaluated at points close to the critical line. The conjecture is presented in the form of a recipe with instructions on how to produce predictions of a certain type in any family of $L$-functions. In order to follow the recipe it is of fundamental importance to have control of counting functions of the type~\eqref{equation TT} and \eqref{equation TT local 2 vector form} related to the family. The connections between counting functions, low-lying zeros and the Ratios Conjecture are central in the present investigation.
The Ratios Conjecture has a large variety of applications. Applications to problems about low-lying zeros first appeared in the work of Conrey and Snaith \cite{CS}, where they study the one-level density of families of quadratic Dirichlet $L$-functions and quadratic twists of a holomorphic modular form. The investigation in \cite{CS} has inspired a large amount of work on low-lying zeros in different families; see, e.g., \cite{MiSymplectic,MiOrthogonal,HKS,FM,DHP,FPS1,FPS3,MS,CP, Wax}.
As part of this project, we went through the steps of the Ratios Conjecture recipe with the goal of estimating the $1$-level density. We noticed that the resulting estimate does not predict certain terms in Theorem~\ref{theorem main}.
To fix this, we modified~\cite[Step 4]{CFZ}, which is the evaluation of the average of the coefficients appearing in the approximation of the expression
\begin{equation} \label{ralphgam}
R(\alpha,\gamma;X):=\frac 1{N^\pm(X)} \sum_{K \in \F^\pm(X)} \frac{L\left(\frac{1}{2}+\alpha,f_K\right)}{L\left(\frac{1}{2}+\gamma,f_K\right)}.
\end{equation}
More precisely, instead of only considering the main term, we kept track of the secondary term in Lemma~\ref{lemma count1}.
We now describe more precisely the steps in the Ratios Conjecture recipe. The first step involves the approximate functional equation for $L(s,f_K)$, which reads
\begin{equation}
L\left(s, f_K\right) = \sum_{n<x} \frac{\lambda_K(n)}{n^s}+|D_K|^{\frac 12-s}\frac{\Gamma_{\pm}(1-s)}{\Gamma_{\pm}(s)} \sum_{n<y}
\frac{\lambda_K(n)}{n^{1-s}}+\:{\rm Error},\label{approxeq}
\end{equation}
where $x,y$ are such that $xy\asymp |D_K| (1+|t|)^2 $ (this is in analogy with~\cite{CS}; see~\cite[Theorem 5.3]{IK} for a description of the approximate functional equation of a general $L$-function). The analysis will be carried out assuming that the error term can be neglected, and that the sums can be completed.
Following \cite{CFZ}, we replace the
numerator of \eqref{ralphgam} with the approximate functional equation \eqref{approxeq} and the
denominator of \eqref{ralphgam} with \eqref{lemobius}. We will need to estimate the first sum in \eqref{approxeq} evaluated at $s=\frac{1}{2}+\alpha$, where $|\Re(\alpha)|$ is sufficiently small. This gives the contribution
\begin{equation} \label{ragone}
R_1(\alpha, \gamma;X):=\frac1{N^\pm(X)} \sum_{K \in \F^\pm(X)} \sum_{h,m} \frac{\lambda_K(m)\mu_{K}(h)}{m^{\frac{1}{2} +
\alpha}h^{\frac{1}{2}+\gamma}}
\end{equation}
to \eqref{ralphgam}. This infinite sum converges absolutely in the region $\Re(\alpha)>\frac 12$ and $\Re(\gamma)>\frac 12$; however, later in this section we will provide an analytic continuation to a wider domain. We will also need to evaluate the contribution of the second sum in \eqref{approxeq}, which is given by
\begin{equation} \label{ragtwo}
R_2(\alpha, \gamma;X):=\frac1{N^\pm(X)} \sum_{K \in \F^\pm(X)} |D_K|^{-\alpha}\frac{\Gamma_{\pm}\left(\frac 12-\alpha \right)}{\Gamma_{\pm}\left(\frac 12+\alpha\right)}\sum_{h,m} \frac{\lambda_K(m)\mu_{K}(h)}{m^{\frac{1}{2}-\alpha}h^{\frac{1}{2}+\gamma}}.
\end{equation}
(Once more, the series converges absolutely for $\Re(\alpha)<-\frac 12$ and $\Re(\gamma)>\frac 12$, but we will later provide an analytic continuation to a wider domain.)
A first step in the understanding of the $R_j(\alpha, \gamma;X)$ will be achieved using the following precise evaluation of the expected value of $\lambda_K(m)\mu_K(h) $.
\begin{lemma}\label{lemma count1}
Let $m,h\in \mathbb N$, and let $\frac 12 \leq \theta<\frac 56$ and $\omega\geq0$ be such that~\eqref{equation TT local 2 vector form} holds. Assume that $h$ is cubefree. We have the estimate
\begin{multline*}
\frac{1}{N^\pm(X)} \sum_{K \in \F^\pm(X)}\lambda_K(m)\mu_K(h) = \prod_{p^e\parallel m, p^s \parallel h}f(e,s,p)x_p\\
+ \Bigg( \prod_{p^e\parallel m, p^s \parallel h}g(e,s,p)y_p- \prod_{p^e\parallel m, p^s \parallel h}f(e,s,p)x_p\Bigg) \frac{C_2^\pm}{C_1^\pm}X^{-\frac{1}{6}}\Big( 1-\frac{C_2^\pm}{C_1^\pm}X^{-\frac{1}{6}}\Big) \\
+O_\varepsilon\bigg( \prod_{ p \mid hm , p^e \parallel m } \big( (2 e + 5)p^{\omega}\big) X^{\theta-1 +\varepsilon}\bigg),
\end{multline*}
where
\begin{align*}
f(e,0,p)&:= \frac{e+1}{6}+\frac{1+(-1)^e}{4}+\frac{\tau_e}{3}+\frac{1}{p}; \\
f(e,1,p)&:= - \frac{e+1}{3}+\frac{\tau_e}{3}- \frac{1}{p} ; \\
f(e,2,p)&:=\frac{e+1}{6}-\frac{1+(-1)^e}{4}+\frac{\tau_e}{3} ; \\
g(e,0,p)&:= \frac{(e+1)(1+p^{-\frac 13})^3}{6} +\frac{(1+(-1)^e)(1+p^{-\frac 13})(1+p^{-\frac 23})}{4} \\
& \quad + \frac{\tau_e(1+p^{-1})}{3} + \frac{(1+p^{-\frac 13})^2}{p}; \\
g(e,1,p)&:= -\frac{(e+1)(1+p^{-\frac 13})^3}{3}+ \frac{\tau_e(1+p^{-1})}{3}-\frac{(1+p^{-\frac 13})^2}{p}; \\
g(e,2,p)&:= \frac{(e+1)(1+p^{-\frac 13})^3}{6} -\frac{(1+(-1)^e)(1+p^{-\frac 13})(1+p^{-\frac 23})}{4} + \frac{\tau_e(1+p^{-1})}{3} .
\end{align*}
\end{lemma}
\begin{proof}
We may write $ m = \prod_{j=1}^J p_j^{e_j}$ and $ h = \prod_{j=1}^J p_j^{s_j }$, where
$ p_1 , \ldots , p_J$ are distinct primes and for each $ j $, $ e_j $ and $ s_j $ are nonnegative integers, but not both zero. Then we see that
\begin{align*}
\sum_{K \in \F^\pm(X)} \lambda_K (m) \mu_K (h) = & \sum_{K \in \F^\pm(X)} \prod_{j=1}^J \bigg( \lambda_K \big( p_j ^{e_j} \big) \mu_K \big( p_j^{s_j} \big) \bigg) = \sum_{\mathbf{k}} \sum_{ \substack{ K \in \mathcal{F}^{\pm}(X) \\ \mathbf{p} :\, \text{type } T_{\mathbf{k}} }} \prod_{j=1}^J \bigg( \lambda_K \big( p_j ^{e_j} \big) \mu_K \big( p_j^{s_j} \big) \bigg) ,
\end{align*}
where $\mathbf{k} = (k_1 , \ldots , k_J ) $ runs over $\{ 1, 2, 3, 4, 5 \}^J $ and $\mathbf{p} = ( p_1 , \ldots , p_J)$. When each $p_j$ has splitting type $T_{k_j}$ in $K$, the values $\lambda_K (p_j^{e_j})$ and $ \mu_K ( p_j^{s_j})$ depend on $p_j$, $k_j $, $e_j$ and $ s_j$. Define
\begin{equation*}\label{def eta}
\eta_{1, p_j}(k_j, e_j ) := \lambda_K (p_j^{e_j}) , \qquad \eta_{2, p_j} (k_j, s_j ) := \mu_K (p_j^{s_j})
\end{equation*}
for each $ j \leq J$ with $p_j $ of splitting type $T_{k_j}$ in $K$, as well as
\begin{equation}\label{def eta vec}
\eta_{1, \mathbf{p}} ( \mathbf{k}, \mathbf{e}) := \prod_{j=1}^J \eta_{1, p_j} (k_j, e_j ), \qquad \eta_{2, \mathbf{p}} ( \mathbf{k}, \mathbf{s}) := \prod_{j=1}^J \eta_{2, p_j} (k_j, s_j ).
\end{equation}
We see that
\begin{align*}
\sum_{K \in \F^\pm(X)} \lambda_K (m) \mu_K (h) & = \sum_{\mathbf{k}} \eta_{1, \mathbf{p}} ( \mathbf{k}, \mathbf{e}) \eta_{2,\mathbf{p}} ( \mathbf{k}, \mathbf{s}) \sum_{ \substack{ K \in \mathcal{F}^{\pm}(X) \\ \mathbf{p} :\, \text{type } T_{\mathbf{k}} }} 1 \\
& = \sum_{\mathbf{k}} \eta_{1, \mathbf{p}} ( \mathbf{k}, \mathbf{e}) \eta_{2,\mathbf{p}} ( \mathbf{k}, \mathbf{s}) N^{\pm}_{\mathbf{p}} ( X , T_{\mathbf{k}}),
\end{align*}
which by~\eqref{equation TT local 2 vector form} is equal to
\begin{align*}
& \sum_{\mathbf{k}} \eta_{1, \mathbf{p}} ( \mathbf{k}, \mathbf{e}) \eta_{2,\mathbf{p}} ( \mathbf{k}, \mathbf{s}) \bigg(C_1^\pm \prod_{j = 1}^J ( x_{p_j} c_{k_j} (p_j ) ) X + C_2^\pm \prod_{j = 1}^J ( y_{p_j} d_{k_j} (p_j ) ) X^{\frac 56} + O_{\varepsilon} \bigg(\prod_{j =1}^J p_j^{\omega}X^{\theta+\varepsilon} \bigg) \bigg) \\
= & C_1^\pm X \bigg( \sum_{\mathbf{k}} \eta_{1, \mathbf{p}} ( \mathbf{k}, \mathbf{e}) \eta_{2,\mathbf{p}} ( \mathbf{k}, \mathbf{s}) \prod_{j = 1}^J ( x_{p_j} c_{k_j} (p_j )) \bigg) + C_2^\pm X^{\frac 56} \bigg( \sum_{\mathbf{k}} \eta_{1, \mathbf{p}} ( \mathbf{k}, \mathbf{e}) \eta_{2,\mathbf{p}} ( \mathbf{k}, \mathbf{s}) \prod_{j = 1}^J ( y_{p_j} d_{k_j} (p_j )) \bigg) \\
& + O_{\varepsilon} \bigg( \sum_{\mathbf{k}} | \eta_{1, \mathbf{p}} ( \mathbf{k}, \mathbf{e}) \eta_{2,\mathbf{p}} ( \mathbf{k}, \mathbf{s}) | \prod_{j =1}^J p_j^{\omega}X^{\theta+\varepsilon} \bigg) .
\end{align*}
We can change the last three $\mathbf{k}$-sums into products by \eqref{def eta vec}. Doing so, we obtain that the above is equal to
\begin{align*}
&C_1^\pm X \prod_{j =1}^J \bigg( x_{p_j} \widetilde f(e_j, s_j, p_j ) \bigg) + C_2^\pm X^{\frac 56} \prod_{j =1}^J \bigg( y_{p_j} \widetilde g(e_j, s_j, p_j ) \bigg) + O_{\varepsilon} \bigg( \prod_{j =1}^J \bigg( p_j^{\omega} (2e_j +5 ) \bigg) X^{\theta+\varepsilon} \bigg) \\
=& C_1^\pm X \prod_{p^e\parallel m, p^s \parallel h}\widetilde f(e,s,p)x_p + C_2^\pm X^{\frac 56} \prod_{p^e\parallel m, p^s \parallel h}\widetilde g(e,s,p)y_p + O_{\varepsilon} \bigg( \prod_{j =1}^J \bigg( p_j^{\omega} ( 2 e_j + 5 ) \bigg) X^{\theta+\varepsilon} \bigg) ,
\end{align*}
where
$$ \widetilde f( e, s, p) := \sum_{k=1}^5 \eta_{1, p } (k , e ) \eta_{2,p } (k , s ) c_{k } (p ) , \qquad \widetilde g(e,s,p) := \sum_{k =1}^5 \eta_{1, p } ( k , e ) \eta_{2,p } ( k , s ) d_{k } (p ) . $$
A straightforward calculation shows that $\widetilde f( e, s, p) = f( e, s, p) $ and $\widetilde g( e, s, p)=g( e, s, p) $ (see the explicit description of the coefficients in Section~\ref{section background}; note that $ \eta_{2,p}(k,0) = 1 $), and the lemma follows.
\end{proof}
We now proceed with the estimation of $R_1(\alpha, \gamma ; X )$. Taking into account the two main terms in Lemma \ref{lemma count1}, we expect that
\begin{equation}
R_1(\alpha, \gamma;X)=R_1^M(\alpha, \gamma)+\frac{C_2^\pm}{C_1^\pm}X^{-\frac 16}\Big( 1-\frac{C_2^\pm}{C_1^\pm}X^{-\frac{1}{6}}\Big) (R_1^S(\alpha, \gamma)-R_1^M(\alpha, \gamma))+\text{Error},
\label{equation expansion R_1}
\end{equation}
where
\begin{align}
\label{equation definition R_1M}
\begin{split}R_1^M(\alpha, \gamma):=&\prod_p \Big( 1 + \sum_{ e\geq 1} \frac{x_p f(e,0,p)}{p^{e(\frac 12+\alpha)}} + \sum_{ e\geq 0} \frac{x_p f(e,1,p)}{p^{e(\frac 12+\alpha) +(\frac 12+\gamma) }} + \sum_{ e\geq 0} \frac{x_p f(e,2,p)}{p^{e(\frac 12+\alpha) + 2(\frac 12+\gamma) }}\Big),\\
R_1^S(\alpha, \gamma):=&\prod_p \Big( 1 + \sum_{ e\geq 1} \frac{y_p g(e,0,p)}{p^{e(\frac 12+\alpha)}} + \sum_{ e\geq 0} \frac{y_p g(e,1,p)}{p^{e(\frac 12+\alpha) +(\frac 12+\gamma) }} + \sum_{ e\geq 0} \frac{y_p g(e,2,p)}{p^{e(\frac 12+\alpha) + 2(\frac 12+\gamma) }}\Big)
\end{split}
\end{align}
for $ \Re ( \alpha) , \Re ( \gamma) > \frac 12$. Since
\begin{multline*}
R_1^M(\alpha, \gamma)
= \prod_p \Bigg( 1 + \frac{1}{p^{1+2\alpha}} - \frac{1}{p^{1+\alpha+\gamma}}\\ + O \Big(\frac{1}{p^{\frac 32 +\Re(\alpha)}}+\frac{1}{p^{\frac 32 +3\Re(\alpha)}}+\frac{1}{p^{\frac 32 +\Re(\gamma)}} + \frac{1}{p^{\frac 32+ \Re ( 2 \alpha + \gamma) }} + \frac{1}{p^{\frac 52 + \Re (3 \alpha+ 2 \gamma) }} \Big) \Bigg),
\end{multline*}
we see that
\begin{equation}
A_3(\alpha, \gamma) := \frac{\zeta(1+\alpha+\gamma)}{\zeta(1+2\alpha) } R_1^M(\alpha, \gamma)
\label{equation definition A_3}
\end{equation}
is analytically continued to the region\footnote{To see this, write $\frac{\zeta(1+\alpha+\gamma)}{\zeta(1+2\alpha) }$ as an Euler product, and expand out the triple product in~\eqref{equation definition A_3}. The resulting expression will converge in the stated region.} $ \Re (\alpha), \Re (\gamma)> - \frac 16 $. Similarly, from the estimates
\begin{align*}
& \sum_{ e\geq 1} \frac{y_p g(e,0,p)}{p^{e(\frac 12+\alpha)}} = \frac 1{p^{\frac 56+\alpha}}+\frac 1{p^{1+2\alpha}} +O\Big( \frac 1{p^{\Re(\alpha) + \frac 32}}+\frac 1{p^{2\Re(\alpha) + \frac 43}}+ \frac 1{p^{3\Re(\alpha) + \frac 32}}\Big),\\
&\sum_{ e\geq 0} \frac{y_p g(e,1,p)}{p^{e(\frac 12+\alpha) +(\frac 12+\gamma) }}=-\frac 1{p^{\frac 56+\gamma}} - \frac 1{p^{1+\alpha+\gamma}}+O\Big(\frac 1{p^{\frac 43+\Re(\alpha+\gamma)}} \Big),\\
&\sum_{ e\geq 0} \frac{y_p g(e,2,p)}{p^{e(\frac 12+\alpha) + 2(\frac 12+\gamma) }} = O \Big( \frac 1{p^{\frac 32+\Re(\alpha+2\gamma)}} \Big) ,
\end{align*}
we deduce that
\begin{equation}
A_4(\alpha,\gamma):= \frac{ \zeta(\tfrac 56+\gamma) \zeta(1+\alpha+\gamma)}{\zeta(\tfrac 56+\alpha) \zeta(1+2\alpha) }R_1^S(\alpha, \gamma)
\label{equation definition A_4}
\end{equation}
is analytic in the region $\Re(\alpha),\Re(\gamma)>-\frac 16$.
Note that by their defining product formulas, we have the bounds
\begin{equation}\label{trivial bounds for A3 and A4}
A_3 ( \alpha, \gamma)=O_\varepsilon (1) ,\quad A_4 ( \alpha, \gamma) = O_\varepsilon (1)
\end{equation}
for $ \Re ( \alpha) , \Re ( \gamma) \geq - \frac16 + \varepsilon > - \frac16$. Using this notation,~\eqref{equation expansion R_1} takes the form
\begin{multline*}
R_1(\alpha,\gamma;X) = \frac{\zeta(1+2\alpha)}{\zeta(1+\alpha+\gamma)} \Big( A_3(\alpha,\gamma)+\frac{C_2^\pm}{C_1^\pm} X^{-\frac 16} \Big( 1-\frac{C_2^\pm}{C_1^\pm}X^{-\frac{1}{6}}\Big) \Big( \frac{\zeta(\tfrac 56+\alpha)}{\zeta(\tfrac 56+\gamma)} A_4(\alpha,\gamma) - A_3(\alpha,\gamma) \Big)\Big)\\ +\text{Error}.
\end{multline*}
The above computation suffices to obtain a conjectural evaluation of the average~\eqref{ragone}. However, our goal is to evaluate the $1$-level density through the average of $\frac{L'}{L} (\frac 12+r ,f_K ) $; it is therefore necessary to also compute the partial derivative $\frac{\partial }{ \partial \alpha}R_1(\alpha, \gamma;X)|_{\alpha=\gamma=r} $. To do so, we need to make sure that the error term stays small after differentiation. This is achieved by applying Cauchy's integral formula for the derivative
$$f'(a)=\frac{1}{2\pi i}\int_{|z-a|=\kappa} \frac{f(z)}{(z-a)^2}dz$$ (valid for all small enough $\kappa>0$), and bounding the integrand using the approximation for $R_1(\alpha,\gamma;X) $ above. As for the main terms, one can differentiate them term by term, and obtain the expected approximation
\begin{multline}
\frac{\partial }{ \partial \alpha}R_1(\alpha, \gamma;X)\Big|_{\alpha=\gamma=r}=A_{3,\alpha}(r,r)+ \frac{\zeta'}{\zeta}(1+2r) A_3(r,r) \\+ \frac{C_2^\pm}{C_1^\pm} X^{-\frac 16}\Big( 1-\frac{C_2^\pm}{C_1^\pm}X^{-\frac{1}{6}}\Big) \Big( A_{4,\alpha}(r,r) +\frac{\zeta'}{\zeta} (\tfrac 56+r) A_4 (r,r) - A_{3,\alpha}(r,r) +\frac{\zeta'}{\zeta}(1+2r) ( A_4(r,r)-A_3(r,r)) \Big) \\
+\text{Error},
\label{equation R_1 calculation}
\end{multline}
where $A_{3,\alpha}(r,r)=\frac{\partial }{ \partial \alpha}A_3(\alpha,\gamma)\Big|_{\alpha=\gamma=r}$ and $A_{4,\alpha}(r,r)=\frac{\partial }{ \partial \alpha}A_4(\alpha,\gamma)\Big|_{\alpha=\gamma=r}.$
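As an aside, Cauchy's formula for the derivative used above is easy to sanity-check numerically; the following sketch is illustrative only and plays no role in the argument (the test function $f(z)=e^z$, the point $a=0.3$, and the discretisation parameters are our own arbitrary choices).

```python
import cmath

def cauchy_derivative(f, a, kappa=0.1, n=2000):
    """Approximate f'(a) = (1/(2*pi*i)) * contour integral of f(z)/(z-a)^2
    over |z - a| = kappa, using the trapezoid rule on the circle
    (spectrally accurate for analytic integrands)."""
    total = 0j
    for k in range(n):
        theta = 2 * cmath.pi * k / n
        z = a + kappa * cmath.exp(1j * theta)
        dz = 1j * kappa * cmath.exp(1j * theta) * (2 * cmath.pi / n)
        total += f(z) / (z - a) ** 2 * dz
    return total / (2j * cmath.pi)

# f(z) = exp(z), so f'(0.3) should come out as exp(0.3)
approx = cauchy_derivative(cmath.exp, 0.3)
```

Since the integrand is analytic in an annulus around the circle, the trapezoid rule converges geometrically in $n$, so the approximation agrees with $e^{0.3}$ to near machine precision.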
Now, from the definition of $f(e,j,p) $ and $g(e,j,p)$ (see Lemma~\ref{lemma count1}) as well as~\eqref{definition theta_e} and~\eqref{definition kappa_e}, we have
\begin{align*}
f(1,0,p)+f(0,1,p) & =g(1,0,p)+g(0,1,p)=0,\\
f(e,0,p)+f(e-1,1,p) +f(e-2,2,p) & =g(e,0,p)+g(e-1,1,p) +g(e-2,2,p)=0, \\
f(e, 0 , p ) - f(e-2 , 2, p ) & = \theta_e + p^{-1} , \\
g(e, 0 , p ) - g(e-2 , 2, p ) & = ( 1 + p^{- \frac13} ) ( \kappa_e ( p ) + p^{-1} + p^{- \frac43} ) .
\end{align*}
By the above identities and the definition~\eqref{equation definition R_1M}, we deduce that
$$R_1^M(r,r)=A_3(r,r)=R_1^S(r,r)=A_4(r,r)=1. $$
It follows that for $\Re(r)>\frac 12$,
\begin{align*}
& R^M_{1,\alpha}(r,r) = \frac{R^M_{1,\alpha}(r,r)}{R^M_1(r,r)} = \frac{ \partial}{\partial \alpha} \log R^M_{1} ( \alpha, \gamma) \bigg|_{ \alpha=\gamma = r}\\
&= \sum_p \left( - \frac{ x_p \log p }{p^{\frac12 + r}}f(1,0,p) - \sum_{ e \geq 2} \frac{ x_p \log p }{ p^{ e( \frac12 + r ) } } \big( f(e,0,p ) - f(e-2,2,p)\big) \right)\\
&\hspace{1cm}+ \sum_p \left( - \sum_{ e \geq 2} \frac{ x_p \log p }{ p^{ e( \frac12 + r ) } } (e-1) \big( f( e,0,p) + f(e-1,1,p) + f( e-2,2,p) \big) \right) \\
& =- \sum_p \sum_{ e \geq 1} \frac{ x_p \log p }{ p^{ e( \frac12 + r ) } } \Big( \theta_e + \frac1p \Big)
\end{align*}
and
\begin{align*}
R^S_{1,\alpha}(r,r)
&= \sum_p \left( - \frac{ y_p \log p }{p^{\frac12 + r}}g(1,0,p) - \sum_{ e \geq 2} \frac{ y_p \log p }{ p^{ e( \frac12 + r ) } } \big( g(e,0,p ) - g(e-2,2,p)\big) \right)\\
&\hspace{1cm}+ \sum_p \left( - \sum_{ e \geq 2} \frac{ y_p \log p }{ p^{ e( \frac12 + r ) } } (e-1) \big( g( e,0,p) + g(e-1,1,p) + g( e-2,2,p) \big) \right) \\
& =- \sum_p \sum_{ e \geq 1} \frac{y_p \log p }{ p^{ e( \frac12 + r ) } } \big( 1 + p^{- \frac13} \big) \big( \kappa_e ( p ) + p^{-1} + p^{- \frac43} \big) \\
& =- \sum_p \sum_{ e \geq 1} \frac{ \log p }{ p^{ e( \frac12 + r ) } } \Big( \beta_e (p)+ x_p \Big( \theta_e+ \frac1p \Big)\Big) ,
\end{align*}
by~\eqref{eqaution definition beta}.
Thus, we have
\begin{align*}
A_{3,\alpha}(r,r)&=R^M_{1,\alpha}(r,r) - \frac{\zeta'}{\zeta}(1+2r) = -\sum_{p,e\geq 1} \Big(\theta_e+\frac 1p\Big) \frac{x_p\log p }{p^{e(\frac 12+r)}}- \frac{\zeta'}{\zeta}(1+2r)
\end{align*}
and
\begin{align}
A_{4,\alpha}(r,r)-A_{3,\alpha}(r,r) = -\sum_{p,e\geq 1} \frac{(\beta_e(p)-p^{-\frac e3}) \log p }{p^{e(\frac 12+r)}},
\end{align}
which are now valid in the extended region $ \Re(r)>0 $. Coming back to~\eqref{equation R_1 calculation}, we deduce that
\begin{align*}\notag
\frac{\partial }{ \partial \alpha}R_1(\alpha, \gamma;X)\Big|_{\alpha=\gamma=r} & =A_{3,\alpha}(r,r)+ \frac{\zeta'}{\zeta}(1+2r) \\&\notag\hspace{1cm}+ \frac{C_2^\pm}{C_1^\pm} X^{-\frac 16}\Big( 1-\frac{C_2^\pm}{C_1^\pm}X^{-\frac{1}{6}}\Big) \Big( A_{4,\alpha}(r,r) - A_{3,\alpha}(r,r) +\frac{\zeta'}{\zeta} (\tfrac 56+r) \Big) +\text{Error}\\
\notag &=-\sum_{p,e\geq 1} \Big(\theta_e+\frac 1p\Big) \frac{x_p\log p }{p^{e(\frac 12+r)}}
-\frac{C_2^\pm}{C_1^\pm} X^{-\frac 16}\Big( 1-\frac{C_2^\pm}{C_1^\pm}X^{-\frac{1}{6}}\Big) \sum_{p,e\geq 1} \frac{(\beta_e(p)-p^{-\frac e3} )\log p }{p^{e(\frac 12+r)}}\\
&\hspace{1cm}+\frac{C_2^\pm}{C_1^\pm} X^{-\frac 16}\Big( 1-\frac{C_2^\pm}{C_1^\pm}X^{-\frac{1}{6}}\Big)\frac{\zeta'}{\zeta} (\tfrac 56+r)
+\text{Error},
\label{equation contribution R_1 to RC}
\end{align*}
where the second equality is valid in the region $\Re(r)>0$.
We now move to $R_2(\alpha, \gamma ;X).$ We recall that
\begin{equation}
R_2(\alpha, \gamma;X)=\frac1{N^\pm(X)} \sum_{K \in \F^\pm(X)} |D_K|^{-\alpha}\frac{\Gamma_{\pm}(\frac 12-\alpha)}{\Gamma_{\pm}(\frac 12+\alpha)} \sum_{h,m} \frac{\lambda_K(m)\mu_{K}(h)}{m^{\frac{1}{2}-\alpha}h^{\frac{1}{2}+\gamma}},
\end{equation}
and the Ratios Conjecture recipe tells us that we should replace $\lambda_K(m)\mu_K(h)$ with its average. However, a calculation involving Lemma~\ref{lemma count1} suggests that the terms $ |D_K|^{-\alpha}$ and $\lambda_K(m)\mu_{K}(h)$ have non-negligible covariance. To take this into account, we replace this step with the following corollary of Lemma~\ref{lemma count1}.
\begin{corollary}
\label{lemma average oscillatory}
Let $m,h\in \mathbb N$, and let $\frac 12 \leq \theta<\frac 56$ and $\omega\geq0$ be such that~\eqref{equation TT local 2 vector form} holds. For $\alpha\in \mathbb{C}$ with $0<\Re(\alpha) < \frac 12$, we have the estimate
\begin{multline*}
\frac{1}{N^\pm(X)} \sum_{K \in \F^\pm(X)} |D_K|^{-\alpha} \lambda_K(m)\mu_K(h)
= \frac{X^{-\alpha}}{1-\alpha} \prod_{p^e\parallel m, p^s \parallel h}f(e,s,p)x_p \\ + X^{-\frac16-\alpha} \Bigg( \frac{1}{1- \frac{6\alpha}5} \prod_{p^e\parallel m, p^s \parallel h}g(e,s,p)y_p- \frac{1}{1-\alpha} \prod_{p^e\parallel m, p^s \parallel h}f(e,s,p)x_p\Bigg) \frac{C_2^\pm}{C_1^\pm} \Big( 1-\frac{C_2^\pm}{C_1^\pm}X^{-\frac{1}{6}}\Big) \\
+O_{\varepsilon}\bigg((1+|\alpha|)\prod_{ p \mid hm, p^e \parallel m } \big( (2 e + 5)p^{\omega}\big)X^{\theta-1-\Re(\alpha)+\varepsilon}\bigg).
\end{multline*}
\end{corollary}
\begin{proof}
This follows from applying Lemma \ref{lemma count1} and (\ref{equation TT}) to the partial-summation identity
\begin{multline*}
\sum_{K \in \F^\pm(X)} |D_K|^{-\alpha} \lambda_K(m)\mu_K(h) = \int_1^X u^{-\alpha} d\bigg( \sum_{K \in \F^\pm(u)} \lambda_K(m)\mu_K(h) \bigg) \\
= X^{-\alpha} \sum_{K \in \F^\pm(X)} \lambda_K(m)\mu_K(h) + \alpha \int_1^X u^{-\alpha -1 } \bigg( \sum_{K \in \F^\pm(u)} \lambda_K(m)\mu_K(h)\bigg) du .
\end{multline*}
\end{proof}
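The identity in this proof is an instance of Abel summation against the counting function $u \mapsto \sum_{K \in \F^\pm(u)} \lambda_K(m)\mu_K(h)$. As an illustrative sketch only, the same identity can be checked numerically for a generic sequence (the data $a_n$ and the exponent $\alpha=0.7$ below are arbitrary choices made to exercise the formula).

```python
import math

def abel_rhs(a, alpha, X):
    """Right-hand side of the partial-summation identity
        sum_{n<=X} a_n n^{-alpha}
          = X^{-alpha} A(X) + alpha * int_1^X u^{-alpha-1} A(u) du,
    with A(u) = sum_{n<=u} a_n.  Since A is a step function, the integral
    is a finite sum of exact pieces over the intervals [n, n+1)."""
    A = 0.0
    integral = 0.0
    for n in range(1, X):
        A += a[n]
        # on [n, n+1): A(u) = A, and
        # int_n^{n+1} u^{-alpha-1} du = (n^{-alpha} - (n+1)^{-alpha}) / alpha
        integral += A * (n ** (-alpha) - (n + 1) ** (-alpha)) / alpha
    A += a[X]
    return X ** (-alpha) * A + alpha * integral

# arbitrary test data, standing in for lambda_K(m) mu_K(h)
a = {n: math.cos(n) for n in range(1, 101)}
lhs = sum(a[n] * n ** (-0.7) for n in range(1, 101))
rhs = abel_rhs(a, 0.7, 100)
```

Both sides agree up to floating-point error, as the two expressions are algebraically identical.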
Applying this corollary, we deduce the following heuristic approximation of $R_2 ( \alpha, \gamma ; X)$:
\begin{equation*}\begin{split}
& \frac{\Gamma_{\pm}(\frac 12-\alpha)}{\Gamma_{\pm}(\frac 12+\alpha)} \sum_{h,m} \frac{1}{m^{\frac{1}{2}-\alpha}h^{\frac{1}{2}+\gamma}} \bigg\{ \frac{X^{-\alpha}}{1-\alpha} \prod_{p^e\parallel m, p^s \parallel h}f(e,s,p)x_p\\
& + X^{-\frac 16-\alpha} \frac{C_2^\pm}{C_1^\pm} \Big( 1-\frac{C_2^\pm}{C_1^\pm}X^{-\frac{1}{6}}\Big) \Big( \frac{1}{1-\frac{6\alpha}5} \prod_{p^e\parallel m, p^s \parallel h}g(e,s,p)y_p- \frac{1}{1-\alpha} \prod_{p^e\parallel m, p^s \parallel h}f(e,s,p)x_p\Big) \bigg\} \\
= & \frac{\Gamma_{\pm}(\frac 12-\alpha)}{\Gamma_{\pm}(\frac 12+\alpha)} \bigg\{ X^{-\alpha} \frac{ R_1^M ( - \alpha, \gamma)}{1-\alpha} + X^{-\frac 16-\alpha}\frac{C_2^\pm}{C_1^\pm} \Big( 1-\frac{C_2^\pm}{C_1^\pm}X^{-\frac{1}{6}}\Big) \Big( \frac{R_1^S ( - \alpha, \gamma) }{1-\frac{6\alpha}5} - \frac{R_1^M ( - \alpha, \gamma)}{1-\alpha} \Big) \bigg\}
\\
= & \frac{\Gamma_{\pm}(\frac 12-\alpha)}{\Gamma_{\pm}(\frac 12+\alpha)} \frac{\zeta(1-2\alpha)}{ \zeta( 1 - \alpha + \gamma) } \bigg\{ X^{-\alpha} \frac{A_3 ( - \alpha, \gamma)}{1-\alpha} \\
& + X^{-\frac 16-\alpha}\frac{C_2^\pm}{C_1^\pm} \Big( 1-\frac{C_2^\pm}{C_1^\pm}X^{-\frac{1}{6}}\Big) \Big( \frac{A_4 ( - \alpha, \gamma)}{1- \frac{6\alpha}5} \frac{ \zeta(\frac 56-\alpha)}{ \zeta( \frac 56+\gamma)} - \frac{A_3 ( - \alpha, \gamma) }{1-\alpha} \Big) \bigg\} .
\end{split}\end{equation*}
If $\Re( r) $ is positive and small enough, then we expect that
\begin{equation*}\begin{split}
\frac{\partial }{ \partial \alpha}R_2(\alpha, \gamma;X)\Big|_{\alpha=\gamma=r} = & - \frac{\Gamma_{\pm}(\frac 12-r)}{\Gamma_{\pm}(\frac 12+r)} \zeta(1-2r) \bigg\{ X^{-r} \frac{A_3 ( - r,r)}{1-r} \\
& + X^{-\frac 16-r} \frac{C_2^\pm}{C_1^\pm} \Big( 1-\frac{C_2^\pm}{C_1^\pm}X^{-\frac{1}{6}}\Big) \Big( \frac{\zeta(\tfrac 56-r)}{\zeta(\tfrac 56+r)} \frac{A_4(-r,r)}{ 1- \frac{6r}5} - \frac{ A_3( -r, r)}{1-r} \Big) \bigg\}+\text{Error} .
\end{split}\end{equation*}
We arrive at the following conjecture.
\begin{conjecture} \label{ratios-thm}
Let $\frac 12 \leq \theta<\frac 56$ and $\omega\geq0$ be such that~\eqref{equation TT local 2 vector form} holds. There exists $0<\delta<\frac 1{6}$ such that for any fixed $\varepsilon>0$ and for $r\in \mathbb C$ with $ \frac 1{L} \ll \Re(r) <\delta $ and
$|r|\leq X^{\frac \varepsilon 2}$,
\begin{multline} \label{2nd conj eqn}
\frac 1{N^\pm(X)} \sum_{K \in \F^\pm(X)} \frac{L'(\frac 12+r,f_K)}{L(\frac 12+r,f_K)}\\
= -\sum_{p,e\geq 1} \Big(\theta_e+\frac 1p\Big) \frac{x_p\log p }{p^{e(\frac 12+r)}}
-\frac{C_2^\pm}{C_1^\pm} X^{-\frac 16} \Big( 1-\frac{C_2^\pm}{C_1^\pm}X^{-\frac{1}{6}}\Big) \sum_{p,e\geq 1} \frac{(\beta_e(p)-p^{-\frac e3}) \log p }{p^{e(\frac 12+r)}} \\
+\frac{C_2^\pm}{C_1^\pm} X^{-\frac 16}\Big( 1-\frac{C_2^\pm}{C_1^\pm}X^{-\frac{1}{6}}\Big) \frac{\zeta'}{\zeta}(\tfrac 56+r)
- X^{-r} \frac{\Gamma_{\pm}(\frac 12-r)}{\Gamma_{\pm}(\frac 12+r)}\zeta(1-2r) \frac{A_3( -r, r)}{1-r} \\
- \frac{C_2^\pm }{C_1^\pm } X^{-r-\frac 16} \Big( 1-\frac{C_2^\pm}{C_1^\pm}X^{-\frac{1}{6}}\Big) \frac{\Gamma_{\pm}(\frac 12-r)}{\Gamma_{\pm}(\frac 12+r)}\zeta(1-2r) \Big( \frac{\zeta(\tfrac 56-r)}{\zeta(\tfrac 56+r)} \frac{A_4(-r,r)}{ 1- \frac{6r}5} - \frac{ A_3( -r, r)}{1-r} \Big) \\
+O_\varepsilon(X^{\theta-1+\varepsilon}).
\end{multline}
Note that the two sums on the right-hand side are absolutely convergent.
\end{conjecture}
Traditionally, when applying the Ratios Conjecture recipe, one has to restrict the real part of the variable $r$ to small enough positive values. For example, in the family of quadratic Dirichlet $L$-functions~\cite{CS,FPS3}, one requires that $\frac1{\log X}\ll\Re(r)<\frac 14$. This ensures that the expression on the right-hand side stays far enough from any pole. In the current situation, we will see that the term involving $X^{-r-\frac 16}$ has a pole at $r=\frac 16$.
\begin{proposition}
\label{proposition RC}
Assume Conjecture~\ref{ratios-thm} and the Riemann Hypothesis for $\zeta_K(s)$ for all $K\in \mathcal{F}^{\pm}(X)$, and let $\phi$ be a real even Schwartz function such that $ \widehat \phi$ is compactly supported. For any constant $ 0< c < \frac16$, we have that
\begin{multline}
\frac 1{N^\pm(X)} \sum_{K \in \F^\pm(X)} \sum_{\gamma_K}\phi \Big(\frac{L\gamma_K}{2\pi }\Big)=\widehat \phi(0)\Big(1+ \frac{\log(4 \pi^2 e)}{L} -\frac{C_2^\pm}{5C_1^\pm} \frac{X^{-\frac 16}}{L}
+ \frac{(C_2^\pm)^2 }{5(C_1^\pm)^2 } \frac{X^{-\frac 13}}{L} \Big)\\+ \frac1{\pi}\int_{-\infty}^{\infty}\phi\left(\frac{Lr}{2\pi}\right)\Re\Big(\frac{\Gamma'_{\pm}}{\Gamma_{\pm}}(\tfrac12+ir)\Big)dr -\frac{2}{L}\sum_{p,e}\frac{x_p\log p}{p^{\frac e2}}\widehat\phi\left(\frac{\log p^e}{L}\right) (\theta_e+\tfrac 1p)\\
-\frac{2C_2^\pm X^{-\frac 16}}{C_1^\pm L}\Big( 1-\frac{C_2^\pm}{C_1^\pm}X^{-\frac{1}{6}}\Big) \sum_{p,e}\frac{\log p}{p^{\frac e2}}\widehat\phi\left(\frac{\log p^e}{L}\right) (\beta_e(p)-p^{-\frac e3})\\
-\frac{1}{\pi i} \int_{(c )} \phi \Big(\frac{Ls}{2\pi i}\Big) \Big\{ -\frac{C_2^\pm}{C_1^\pm} X^{-\frac 16}\Big( 1-\frac{C_2^\pm}{C_1^\pm}X^{-\frac{1}{6}}\Big) \frac{\zeta'}{\zeta}(\tfrac 56+s) + X^{-s} \frac{\Gamma_{\pm}(\frac 12-s)}{\Gamma_{\pm}(\frac 12+s)}\zeta(1-2s) \frac{A_3( -s, s) }{1-s} \\
+\frac{C_2^\pm }{C_1^\pm} X^{-s-\frac 16}\Big( 1-\frac{C_2^\pm}{C_1^\pm}X^{-\frac{1}{6}}\Big) \frac{\Gamma_{\pm}(\frac 12-s)}{\Gamma_{\pm}(\frac 12+s)}\zeta(1-2s) \Big( \frac{\zeta(\tfrac 56-s)}{\zeta(\tfrac 56+s)} \frac{A_4(-s,s)}{ 1- \frac{6s}5} - \frac{ A_3( -s, s)}{1-s} \Big) \Big\}ds
\\ + O_\varepsilon(X^{\theta-1+\varepsilon}).
\label{equation proposition RC}
\end{multline}
\end{proposition}
\begin{proof}
By the residue theorem, we have the identity
\begin{align}
\frac 1{N^\pm(X)} \sum_{K \in \F^\pm(X)} \mathfrak{D}_\phi(K)= \frac{1}{2 \pi i} \left( \int_{(\frac1L)} - \int_{(-\frac1L)} \right) \frac 1{N^\pm(X)} \sum_{K \in \F^\pm(X)} \frac{L'(s+\frac 12,f_K)}{L(s+\frac 12,f_K)}\phi \Big(\frac{Ls}{2\pi i}\Big)ds.
\label{equation residue theorem}
\end{align}
Under Conjecture~\ref{ratios-thm} and well-known arguments (see e.g.~\cite[Section 3.2]{FPS3}), the part of this sum involving the first integral is equal to
\begin{multline*}
-\frac{1}{2\pi i} \int_{(\frac1L)} \phi \Big(\frac{Ls}{2\pi i}\Big) \bigg\{\sum_{p,e\geq 1} \Big(\theta_e+\frac 1p\Big) \frac{x_p\log p }{p^{e(\frac 12+s)}}
+\frac{C_2^\pm}{C_1^\pm} X^{-\frac 16} \Big( 1-\frac{C_2^\pm}{C_1^\pm}X^{-\frac{1}{6}}\Big) \sum_{p,e\geq 1} \frac{(\beta_e(p)-p^{-\frac e3}) \log p }{p^{e(\frac 12+s)}} \\-\frac{C_2^\pm}{C_1^\pm} X^{-\frac 16}\Big( 1-\frac{C_2^\pm}{C_1^\pm}X^{-\frac{1}{6}}\Big) \frac{\zeta'}{\zeta}(\tfrac 56+s) + X^{-s} \frac{\Gamma_{\pm}(\frac 12-s)}{\Gamma_{\pm}(\frac 12+s)}\zeta(1-2s) \frac{A_3( -s, s) }{1-s} \\
+\frac{C_2^\pm }{C_1^\pm}X^{-s-\frac 16}\Big( 1-\frac{C_2^\pm}{C_1^\pm}X^{-\frac{1}{6}}\Big) \frac{\Gamma_{\pm}(\frac 12-s)}{\Gamma_{\pm}(\frac 12+s)}\zeta(1-2s) \Big( \frac{\zeta(\tfrac 56-s)}{\zeta(\tfrac 56+s)} \frac{A_4(-s,s)}{ 1- \frac{6s}5} - \frac{ A_3( -s, s)}{1-s} \Big) \bigg\} ds \\+O_\varepsilon(X^{\theta-1+\varepsilon}),
\end{multline*}
where we used the bounds \eqref{trivial bounds for A3 and A4} and
\begin{align}
\phi\Big(\frac{Ls}{2\pi i}\Big) = \frac { (-1)^\ell} {L^\ell s^\ell }\int_{\mathbb R} e^{ L \Re(s) x} e^{ i L \operatorname{Im}(s) x} \widehat \phi^{(\ell)}(x) dx \ll_\ell \frac{e^{L|\Re(s)| \sup({\rm supp}(\widehat \phi)) } }{L^\ell |s|^\ell }
\label{equation decay phi complex}
\end{align}
for every integer $\ell>0 $, which decays on the line $\Re(s) = \frac 1L$. We may also shift the contour of integration to the line $\Re(s)=c$ with $ 0 < c < \frac16$.
We treat the second integral in~\eqref{equation residue theorem} (over the line $\Re(s)=-\frac{1}{L}$) as follows. By the functional equation \eqref{equation functional equation}, we have
\begin{align*}
- \frac{1}{2 \pi i} & \int_{(-\frac1L)} \frac 1{N^\pm(X)} \sum_{K \in \F^\pm(X)} \frac{L'(s+\frac 12,f_K)}{L(s+\frac 12,f_K)}\phi \Big(\frac{Ls}{2\pi i}\Big)ds \\
= & \frac{1}{2 \pi i} \int_{(\frac1L)} \frac 1{N^\pm(X)} \sum_{K \in \F^\pm(X)} \frac{L'(s+\frac 12,f_K)}{L(s+\frac 12,f_K)}\phi \Big(\frac{Ls}{2\pi i}\Big)ds \\
& + \frac{1}{2 \pi i} \int_{(-\frac1L)} \frac 1{N^\pm(X)} \sum_{K \in \F^\pm(X)} \left( \log |D_K| + \frac{ \Gamma'_{\pm}}{\Gamma_\pm} ( \tfrac12 + s )+ \frac{ \Gamma'_{\pm}}{\Gamma_\pm} ( \tfrac12 - s ) \right)\phi \Big(\frac{Ls}{2\pi i}\Big)ds.
\end{align*}
The first integral on the right-hand side is identically equal to the integral that was just evaluated in the first part of this proof.
As for the second, by shifting the contour to the line $\Re (s)=0$, we find that it equals
\begin{align*}
& \left( \frac 1{N^\pm(X)} \sum_{K \in \F^\pm(X)} \log |D_K| \right) \frac{1}{2 \pi i} \int_{(0)} \phi \Big(\frac{Ls}{2\pi i}\Big)ds + \frac{1}{2 \pi i} \int_{(0)} \left( \frac{ \Gamma'_{\pm}}{\Gamma_\pm} ( \tfrac12 + s )+ \frac{ \Gamma'_{\pm}}{\Gamma_\pm} ( \tfrac12 - s ) \right) \phi \Big(\frac{Ls}{2\pi i}\Big)ds \\
& = \left( \frac 1{N^\pm(X)} \sum_{K \in \F^\pm(X)} \log |D_K| \right) \frac{ \widehat{\phi} (0)}{L} + \frac{1}{ \pi} \int_{- \infty}^\infty \phi \Big(\frac{Lr}{2\pi }\Big) \Re \left( \frac{ \Gamma'_{\pm}}{\Gamma_\pm} ( \tfrac12 + ir ) \right)dr .
\end{align*}
By applying Lemma \ref{lemma average log} to the first term, we find the leading terms on the right-hand side of \eqref{equation proposition RC}.
Finally, by absolute convergence we have the identity
\begin{align*}
\frac{1}{2\pi i} \int_{(c)}\phi \Big(\frac{Ls}{2\pi i}\Big)\sum_{p,e\geq 1} \Big(\theta_e+\frac 1p\Big) \frac{x_p\log p }{p^{e(\frac 12+s)}} ds
&= \sum_{p,e\geq 1} \Big(\theta_e+\frac 1p\Big) \frac{x_p\log p } {p^{\frac e2}} \frac 1{2\pi i} \int_{(c)}\phi \Big(\frac{Ls}{2\pi i}\Big) p^{-es}ds\\
&=\frac{1}{L}\sum_{p,e\geq 1} \Big(\theta_e+\frac 1p\Big) \frac{x_p\log p }{p^{\frac e2}}\widehat \phi\Big( \frac{e\log p}{L} \Big),
\end{align*}
since the contour of the inner integral can be shifted to the line $\Re(s)=0$. The same argument works for the term involving $\beta_e(p)-p^{-\frac e3}$. Hence, the proposition follows.
\end{proof}
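The last step of the proof rests on the identity $\frac 1{2\pi i}\int_{(0)}\phi \big(\frac{Ls}{2\pi i}\big) p^{-es}\,ds=\frac 1L\,\widehat \phi\big( \frac{e\log p}{L} \big)$, obtained by writing $s=it$ and recognising a Fourier transform (with the convention $\widehat\phi(\xi)=\int\phi(x)e^{-2\pi i x\xi}\,dx$). As a purely illustrative numerical check, we may take the Gaussian $\phi(x)=e^{-\pi x^2}$, which is self-dual under this convention; note that its transform is not compactly supported, so this only tests the integral identity, not the hypotheses of the proposition. The parameters $L$, $p$, $e$ below are arbitrary choices.

```python
import math

def contour_side(L, p, e, T=10.0, n=200000):
    """(1/(2*pi)) * int_{-T}^{T} phi(L t / (2*pi)) cos(e t log p) dt
    for phi(x) = exp(-pi x^2).  On Re(s) = 0 the imaginary part of the
    integrand is odd and cancels, leaving this real integral."""
    logp = math.log(p)
    h = 2 * T / n
    total = 0.0
    for k in range(n + 1):
        t = -T + k * h
        w = 0.5 if k in (0, n) else 1.0  # trapezoid-rule endpoint weights
        x = L * t / (2 * math.pi)
        total += w * math.exp(-math.pi * x * x) * math.cos(e * t * logp) * h
    return total / (2 * math.pi)

L, p, e = 5.0, 2, 1
lhs = contour_side(L, p, e)
# The Gaussian is self-dual, so the predicted value is phi(e log p / L) / L.
rhs = math.exp(-math.pi * (e * math.log(p) / L) ** 2) / L
```

The two values agree to high accuracy, matching the substitution $x = Lt/(2\pi)$ carried out in the proof.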
\section{Analytic continuation of $A_3 ( -s, s ) $ and $A_4 ( -s , s )$}
The goal of this section is to prove Theorem \ref{theorem RC}. To do so, we will need to estimate some of the terms in \eqref{equation proposition RC}, namely
\begin{equation}\label{J X def}\begin{split}
J^\pm & (X) := \frac{2C_2^\pm X^{-\frac 16}}{C_1^\pm L}\Big( 1-\frac{C_2^\pm}{C_1^\pm}X^{-\frac{1}{6}}\Big) \sum_{p,e}\frac{\log p}{p^{\frac {5e}{6}}}\widehat\phi\left(\frac{\log p^e}{L}\right) \\
& -\frac{1}{\pi i} \int_{(c )} \phi \Big(\frac{Ls}{2\pi i}\Big) \Big\{ -\frac{C_2^\pm}{C_1^\pm} X^{-\frac 16}\Big( 1-\frac{C_2^\pm}{C_1^\pm}X^{-\frac{1}{6}}\Big) \frac{\zeta'}{\zeta}(\tfrac 56+s) + X^{-s} \frac{\Gamma_{\pm}(\frac 12-s)}{\Gamma_{\pm}(\frac 12+s)}\zeta(1-2s) \frac{ A_3( -s, s) }{1-s}\\
& +\frac{C_2^\pm }{C_1^\pm} X^{-s-\frac 16}\Big( 1-\frac{C_2^\pm}{C_1^\pm}X^{-\frac{1}{6}}\Big) \frac{\Gamma_{\pm}(\frac 12-s)}{\Gamma_{\pm}(\frac 12+s)}\zeta(1-2s) \Big( \frac{\zeta(\tfrac 56-s)}{\zeta(\tfrac 56+s)} \frac{A_4(-s,s)}{ 1- \frac{6s}5} - \frac{ A_3( -s, s)}{1-s} \Big) \Big\}ds,
\end{split}\end{equation}
for $ 0 < c < \frac16 $. The idea is to provide an analytic continuation of the Dirichlet series $A_3(-s,s)$ and $A_4(-s,s)$ to the strip $ 0<\Re(s) < \frac 12$, and to shift the contour of integration to the right.
\begin{lemma}
\label{lemma A_3 analytic continuation}
The product formula
\begin{equation}\label{A3 prod form}
A_3 ( -s, s) = \zeta(3) \zeta( \tfrac32 - 3s ) \prod_p \bigg( 1 - \frac{1}{ p^{\frac32 +s}} + \frac{1}{p^{\frac52 - s}} - \frac{1}{p^{\frac52 -3s}} - \frac{1}{p^{3-4s}} + \frac{1}{p^{\frac92 - 5s}} \bigg)
\end{equation}
provides an analytic continuation of $A_3 (-s, s)$ to $|\Re ( s )| < \frac12$, except for a simple pole at $s= \frac16$ with residue $$-\frac{\zeta(3)}{3\zeta(\frac53)\zeta(2)}. $$
\end{lemma}
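Before giving the proof, we note where the residue comes from: at $s=\frac16$ the terms $\frac1{p^{\frac52-s}}$ and $\frac1{p^{3-4s}}$ in the Euler factor of \eqref{A3 prod form} cancel, and the factor collapses to $(1-p^{-\frac53})(1-p^{-2})$, so the product equals $1/(\zeta(\frac53)\zeta(2))$; combined with $\operatorname{Res}_{s=1/6}\zeta(\frac32-3s)=-\frac13$, this yields the stated residue. A short numerical check of the factor-by-factor simplification (the helper functions below are our own, for illustration only):

```python
def primes_up_to(N):
    """Simple sieve of Eratosthenes."""
    sieve = [True] * (N + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(N ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
    return [q for q in range(2, N + 1) if sieve[q]]

def euler_factor(p, s):
    # the p-factor of A_3(-s,s) / (zeta(3) zeta(3/2 - 3s)) in the product formula
    return (1 - p ** (-(1.5 + s)) + p ** (-(2.5 - s)) - p ** (-(2.5 - 3 * s))
            - p ** (-(3 - 4 * s)) + p ** (-(4.5 - 5 * s)))

# at s = 1/6 each factor should equal (1 - p^{-5/3})(1 - p^{-2})
s = 1 / 6
max_dev = max(abs(euler_factor(p, s) - (1 - p ** (-5 / 3)) * (1 - p ** (-2)))
              for p in primes_up_to(200))
```

The deviation is at the level of floating-point round-off, consistent with the exact algebraic identity.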
\begin{proof}
From~\eqref{equation definition R_1M} and~\eqref{equation definition A_3}, we see that in the region $ | \Re (s) | < \frac 16$,
\begin{align} \notag
A_3 ( -s, s) & = \prod_p \bigg( 1 - \frac1{p^3} \bigg)^{-1} \bigg( 1 - \frac1{p^{1-2s}}\bigg) \\ &\hspace{1.5cm}\times\bigg( 1+ \frac1p + \frac1{p^2} + \sum_{e \geq 1 } \frac{ f(e,0,p)}{p^{e(\frac 12-s)}} + \sum_{ e \geq 0 } \frac{f(e,1,p)}{p^{e(\frac 12-s)+\frac12+s}} + \sum_{e \geq 0 } \frac{f(e,2,p)}{ p^{e(\frac 12-s)+1+2s }} \bigg) \notag \\
& = \zeta(3) \prod_p \bigg( 1 - \frac1{p^{1-2s}}\bigg) \bigg( \frac1{p^2} + \sum_{e \geq 0 } \frac{1}{p^{e(\frac 12-s)}} \bigg( f(e,0,p) + \frac{f(e,1,p)}{p^{\frac 12+s}} + \frac{f(e,2,p)}{ p^{1+2s }}\bigg) \bigg). \label{equation A3 section 5}
\end{align}
The sum over $e\geq 0$ on the right-hand side is equal to
\begin{multline}
\label{equation geometric series}
\frac16 \bigg( 1 - \frac{1}{ p^{\frac 12+s}} \bigg)^2 \sum_{e \geq 0 } (e+1) \frac{1}{p^{e(\frac 12-s)}} + \frac12 \bigg( 1 - \frac{1}{p^{1+2s}} \bigg) \sum_{e \geq 0 } \frac{ 1+(-1)^e}{2} \frac{1}{p^{e(\frac 12-s)}} \\
+ \frac13 \bigg( 1 + \frac{1}{p^{\frac 12+s}}+ \frac{1}{p^{1+2s}} \bigg) \sum_{e \geq 0 }\tau_e \frac{1}{p^{e(\frac 12-s)}} + \frac1p \bigg( 1- \frac{1}{p^{\frac 12+s}}\bigg) \sum_{e \geq 0 } \frac{1}{p^{e(\frac 12-s)}}\\
= \frac16 \cdot \frac{ \Big(1 - \frac{1}{ p^{\frac 12+s}} \Big)^2}{ \Big(1 - \frac{1}{ p^{\frac 12-s}}\Big)^2} + \frac12 \cdot \frac{ 1 - \frac{1}{p^{1+2s}} }{ 1 - \frac{1}{p^{1-2s}} } + \frac13 \cdot \frac{ 1 + \frac{1}{p^{\frac 12+s}}+ \frac{1}{p^{1+2s}}}{ 1 + \frac{1}{p^{\frac 12-s}}+ \frac{1}{p^{1-2s}} } + \frac1p \cdot \frac{ 1- \frac{1}{p^{\frac 12+s}}}{ 1- \frac{1}{p^{\frac 12-s}} }.
\end{multline}
Here, we have used geometric sum identities, e.g.,
\begin{align*}
\sum_{k=0}^\infty \tau_k x^k & = \sum_{k=0}^\infty x^{3k} - \sum_{k=0}^\infty x^{3k+1} = \frac{1-x}{1-x^3 } = \frac{1}{1+x+x^2 } \qquad (|x| < 1).
\end{align*}
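The remaining sums in~\eqref{equation geometric series} are evaluated with the analogous standard identities
\begin{equation*}
\sum_{k=0}^\infty (k+1) x^k = \frac{1}{(1-x)^2}, \qquad \sum_{k=0}^\infty \frac{1+(-1)^k}{2}\, x^k = \frac{1}{1-x^2} \qquad (|x|<1),
\end{equation*}
each applied with $x = p^{-(\frac 12-s)}$.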
Inserting the expression~\eqref{equation geometric series} in~\eqref{equation A3 section 5} and simplifying, we obtain the identity
\begin{equation*}
A_3 ( -s, s) = \zeta(3) \zeta( \tfrac32 - 3s ) \prod_p \bigg( 1 - \frac{1}{ p^{\frac 32+s}} + \frac{1}{p^{\frac 52 - s}} - \frac{1}{p^{\frac 52-3s}} - \frac{1}{p^{3-4s}} + \frac{1}{p^{\frac 92 - 5s}} \bigg)
\end{equation*}
in the region $|\operatorname{Re}(s)|<1/6$. Now, this clearly extends to
$ | \operatorname{Re} ( s ) | < 1/2$ except for a simple pole at $ s= 1/6 $ with residue equal to
$$ -\frac{\zeta(3)}{3} \prod_p \big( 1 - p^{-\frac53} - p^{-2} + p^{-\frac{11}{3}}\big) = - \frac{ \zeta(3)}{3} \frac{ 1}{ \zeta(\frac53) \zeta(2)} , $$
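Indeed, near $s=\frac 16$ we have $\zeta(\tfrac 32-3s) = -\tfrac 13 (s-\tfrac 16)^{-1}+O(1)$, and at $s=\frac 16$ the Euler factor factors as
\begin{equation*}
1 - p^{-\frac 53} - p^{-2} + p^{-\frac{11}{3}} = \big(1-p^{-\frac 53}\big)\big(1-p^{-2}\big),
\end{equation*}
so that the product over $p$ equals $\zeta(\tfrac 53)^{-1}\zeta(2)^{-1}$.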
as desired.
\end{proof}
\begin{lemma}
\label{lemma A_4 analytic continuation}
Assuming RH, the function $A_4(-s,s)$ admits an analytic continuation to the region $|\operatorname{Re}(s)|< \frac 12$, except for a double pole at $s=\frac 16$. Furthermore, for any $0<\varepsilon<\frac 14$ and in the region $|\operatorname{Re}(s)|< \frac 12-\varepsilon$, we have the bound
$$ A_4(-s,s) \ll_\varepsilon (|\operatorname{Im}(s)|+1)^{ \frac23} . $$
\end{lemma}
\begin{proof}
By~\eqref{equation definition A_4} and~\eqref{equation definition R_1M}, for $ | \operatorname{Re} (s) | < \frac16$ we have that
\begin{align*}
A_4 ( -s, s) = & \prod_p \frac{ \Big(1- \frac1{p^{1-2s}}\Big)\Big( 1- \frac1{p^{\frac56-s}}\Big)\Big( 1 - \frac1{p^{\frac13}} \Big) }{ \Big(1- \frac1{p^2}\Big)\Big( 1- \frac1{p^{\frac56+s}}\Big) \Big(1- \frac1{p^{\frac53}}\Big) } \Bigg( \frac1{p^2} \Big( 1+ \frac1{p^{\frac13}}\Big) + \sum_{e \geq 0 } \frac{ g(e,0,p) + \frac{g(e,1,p)}{p^{\frac12+s}} + \frac{g(e,2,p)}{ p^{1+2s }} }{p^{e(\frac 12-s)}} \Bigg),
\end{align*}
since $ y_p^{-1} - g(0,0,p) = \frac1{p^2} \Big( 1+ \frac1{p^{\frac13}}\Big) $. Recalling the definition of $g(e,j,p)$ (see Lemma~\ref{lemma count1}), a straightforward evaluation of the infinite sum over $e\geq 0$ yields the expression
\begin{multline*}
A_4 ( -s, s)
= \zeta(2) \zeta(\tfrac53) \prod_p \frac{ \Big(1- \frac1{p^{1-2s}}\Big)\Big( 1- \frac1{p^{\frac56 -s}}\Big)\Big( 1 - \frac1{p^{\frac13}}\Big) }{ \Big( 1- \frac1{p^{\frac56 +s}}\Big) } \Bigg( \frac{ \Big(1+\frac{1}{p^{ \frac13}} \Big)^3\Big( 1 - \frac{1}{ p^{\frac12+s}} \Big)^2 }{ 6 \Big( 1 - \frac{1}{ p^{\frac12-s}} \Big)^2} \\
+ \frac{ \Big(1+\frac{1}{p^{ \frac13}} \Big)\Big(1+\frac{1}{p^{ \frac23}} \Big) \Big( 1 - \frac{1}{p^{1+2s}}\Big) }{2 \Big( 1 - \frac{1}{p^{1-2s}}\Big)}
+ \frac{ \Big( 1+\frac1p \Big) \Big( 1 + \frac{1}{p^{\frac12 +s}}+ \frac{1}{p^{1+2s}}\Big) }{ 3 \Big( 1 + \frac{1}{p^{\frac12-s}} + \frac{1}{p^{1-2s}} \Big) }
+ \frac{\Big(1+\frac{1}{p^{ \frac13}} \Big)^2 \Big( 1- \frac{1}{p^{\frac12+s}}\Big) }{ p\Big( 1- \frac{1}{p^{\frac12-s}}\Big) } + \frac{ 1+\frac{1}{p^{ \frac13}} }{p^2} \Bigg).
\end{multline*}
Isolating the ``divergent terms'' leads us to the identity
\begin{align*}
A_4 ( -s, s)
= & \zeta(2) \zeta(\tfrac53) \prod_p (D_{4,p,1}(s) + A_{4,p,1}(s) ),
\end{align*}
where
\begin{multline*}
D_{4,p,1}(s):= \frac{ 1- \frac1{p^{\frac56-s}} }{ 1- \frac1{p^{\frac56+s}} } \Bigg( \frac{ \Big(1+ \frac2{p^{\frac13}} - \frac2p \Big) \Big(1 - \frac{1}{ p^{\frac12+s}} \Big)^2 \Big(1 + \frac1{p^{\frac12-s}}\Big) }{ 6\Big( 1 - \frac{1}{ p^{\frac12-s}}\Big)} \\
+ \frac{ 1 - \frac{1}{p^{1+2s}} }{2} + \frac{\Big(1 - \frac1{p^{\frac13}} +\frac1p \Big)\Big( 1 + \frac{1}{p^{\frac12+s}}+ \frac{1}{p^{1+2s}} \Big) \Big(1- \frac1{p^{1-2s}}\Big) }{3\Big( 1 + \frac{1}{p^{\frac12-s}} + \frac{1}{p^{1-2s}} \Big) } + \frac1p \Bigg)
\end{multline*}
and
\begin{align*}
A_{4,p,1}(s) := & \frac{ 1- \frac1{p^{\frac56-s}} }{ 1- \frac1{p^{\frac56+s}} } \Bigg( - \frac{ \Big(1 - \frac{1}{ p^{\frac12+s}} \Big)^2 \Big(1 + \frac1{p^{\frac12-s}}\Big) }{ 6 p^{\frac43} \Big( 1 - \frac{1}{ p^{\frac12-s}}\Big)} - \frac{ 1 - \frac{1}{p^{1+2s}} }{2p^{\frac43}} - \frac{ \Big( 1 + \frac{1}{p^{\frac12+s}}+\frac{1}{p^{1+2s}} \Big) \Big(1- \frac1{p^{1-2s}}\Big) }{3 p^{\frac43} \Big( 1 + \frac{1}{p^{\frac12-s}} + \frac{1}{p^{1-2s}} \Big) } \\
& \quad + \frac{\Big(1+ \frac1{p^{\frac13}} - \frac1{p^{\frac23}}- \frac1p \Big) \Big( 1- \frac{1}{p^{\frac12+s}}\Big)\Big( 1+ \frac{1}{p^{\frac12-s}} \Big) -1 }{p} + \frac1{p^2} \Big( 1- \frac1{p^{\frac23}}\Big) \Big(1- \frac1{p^{1-2s}}\Big) \Bigg).
\end{align*}
The term $A_{4,p,1}(s)$ is ``small'' for $ | \operatorname{Re} (s)|< \frac12$, hence we will concentrate our attention on $D_{4,p,1}(s)$. We see that
$$
D_{4,p,1}(s)
= \frac{ 1- \frac1{p^{\frac56-s}} }{ 1- \frac1{p^{\frac56+s}} } D_{4,p,2}(s)
+ \frac1p + A_{4,p,2}(s) ,
$$
where
$$ D_{4,p,2}(s):= \frac{ \Big(1+ \frac2{p^{\frac13}} \Big) \Big(1 - \frac{1}{ p^{\frac12+s}} \Big)^2 \Big(1 + \frac1{p^{\frac12-s}}\Big) }{ 6\Big( 1 - \frac{1}{ p^{\frac12-s}}\Big)}
+ \frac{ 1 - \frac{1}{p^{1+2s}} }{2} \\ + \frac{\Big(1 - \frac1{p^{\frac13}} \Big)\Big( 1 + \frac{1}{p^{\frac12+s}}+ \frac{1}{p^{1+2s}} \Big) \Big(1- \frac1{p^{1-2s}}\Big) }{3\Big( 1 + \frac{1}{p^{\frac12-s}} + \frac{1}{p^{1-2s}} \Big) } $$
and
\begin{align*}
A_{4,p,2}(s):= \frac{ \Big( 1- \frac1{p^{\frac56-s}} \Big) }{ p \Big( 1- \frac1{p^{\frac56+s}} \Big) } \Bigg( & - \frac{ \Big(1 - \frac{1}{ p^{\frac12+s}} \Big)^2 \Big(1 + \frac1{p^{\frac12-s}}\Big) }{ 3\Big( 1 - \frac{1}{ p^{\frac12 -s}}\Big)}
+ \frac{ \Big( 1 + \frac{1}{p^{\frac12+s}}+ \frac{1}{p^{1+2s}} \Big) \Big(1- \frac1{p^{1-2s}}\Big) }{3\Big( 1 + \frac{1}{p^{\frac12-s}} + \frac{1}{p^{1-2s}} \Big) } +1 \Bigg) - \frac1p,
\end{align*}
which is also ``small''. Taking common denominators and expanding out shows that
$$ D_{4,p,2}(s)=\frac{1}{ 1- \frac{1}{p^{\frac32 -3s}} } \bigg( 1 - \frac1p - \frac{1}{p^{\frac56+s}} + \frac{1}{p^{\frac56-s}} + \frac1{p^{\frac43-2s}} + A_{4,p,3}(s) \bigg),$$
where
$$ A_{4,p,3}(s) := - \frac{1}{p^{\frac32-s}} + \frac{1}{ p^{\frac52-s}} - \frac1{p^{\frac43}} + \frac{1}{p^{\frac{11}6+s}} - \frac1{p^{\frac{11}6 -s}} + \frac1{p^{\frac73} } - \frac{1}{p^{\frac73-2s}} $$
is ``small''. More precisely, for $ | \operatorname{Re} (s) |\leq \frac12 - \varepsilon < \frac12 $ and $ j = 1,2,3$, we have the bound
$ A_{4,p,j } (s) = O_\varepsilon \Big( \frac{1}{p^{1+\varepsilon}}\Big) $. Therefore,
\begin{align}
\label{equation before A4tilde}
A_4 ( -s, s)
= \zeta(2) \zeta(\tfrac53) \zeta( \tfrac32-3s) \widetilde{A}_4 (s) \prod_p \Bigg( \frac{\Big( 1- \frac1{p^{\frac56-s}}\Big) }{ \Big( 1- \frac1{p^{\frac56+s}}\Big) } \bigg( 1 - \frac{1}{p^{\frac56+s}} + \frac{1}{p^{\frac56-s}} + \frac1{p^{\frac43-2s}} \bigg) \Bigg) ,
\end{align}
where
\begin{align*}
\widetilde{A}_4 (s) := & \prod_p \left( 1 + \frac{ \frac1p \bigg( 1- \frac{1}{p^{\frac32 -3s}} - \frac{ 1- p^{-\frac56+s} }{ 1- p^{- \frac56-s} } \bigg) + \frac{ 1- p^{-\frac56+s} }{ 1- p^{- \frac56-s} } A_{4,p,3}(s) + \Big( 1- \frac{1}{p^{\frac32-3s}}\Big)(A_{4,p,2}(s) + A_{4,p,1}(s)) }{ \frac{ 1- p^{-\frac56+s} }{ 1- p^{- \frac56-s} } \Big( 1 - \frac{1}{p^{\frac56+s}} + \frac{1}{p^{\frac56-s}} + \frac1{p^{\frac43-2s}} \Big) } \right)
\end{align*}
is absolutely convergent for $|\operatorname{Re} (s) | < \frac12 $. Hence, the final step is to find a meromorphic continuation for the infinite product on the right-hand side of~\eqref{equation before A4tilde}, which we will denote by $D_3(s)$. However, it is straightforward to show that
\begin{equation}\label{definition D3s}
A_{4,4}(s):= D_3(s) \frac{ \zeta(\tfrac83-4s) \zeta(\tfrac53-2s) \zeta(\tfrac{13}6-3s) } { \zeta(\tfrac43-2s) \zeta(\tfrac{13}6-s)}
\end{equation}
converges absolutely for $|\operatorname{Re}(s)|<\frac 12$. This finishes the proof of the first claim in the lemma.
Finally, the growth estimate
$$ A_4(-s,s)\ll_\varepsilon (|\operatorname{Im}(s)|+1)^{\varepsilon} | \zeta( \tfrac32 - 3s ) \zeta( \tfrac43 - 2s)| \ll_\varepsilon (|\operatorname{Im}(s)|+1)^{ \frac23} $$
follows from~\eqref{equation before A4tilde},~\eqref{definition D3s}, as well as~\cite[Theorems 13.18 and 13.23]{MV} and the functional equation for $\zeta(s)$.
\end{proof}
Now that we have a meromorphic continuation of $A_4(-s,s)$, we will calculate the leading Laurent coefficient at $s=\frac 16$.
\begin{lemma}\label{lemma A4 limit at 16}
We have the formula
$$ \lim_{s\rightarrow \frac 16} (s-\tfrac 16)^2 A_4(-s,s) = \frac16 \frac{ \zeta(2)\zeta(\tfrac 53)}{\zeta( \tfrac43)} \prod_p \Big(1-\frac 1{p^{\frac 23}}\Big)^2 \Big(1- \frac1p \Big)\Big(1+\frac 2{p^{\frac 23}}+\frac 1p+\frac 1{p^{\frac 43}}\Big) . $$
\end{lemma}
\begin{proof}
By Lemma \ref{lemma A_4 analytic continuation}, $A_4 ( -s, s ) $ has a double pole at $ s= \frac16 $. Moreover, by \eqref{equation before A4tilde} and \eqref{definition D3s} we find that $ \frac{ A_4 ( -s, s)}{ \zeta( \frac32 - 3s ) \zeta ( \frac43 - 2s ) } $ has a convergent Euler product in the region $|\operatorname{Re}(s)| < \frac 13$ (this allows us to interchange the order of the limit and the product in the calculation below), so that
\begin{align*}
\lim_{s \to \frac16 } & (s-\tfrac16 )^2 A_4 ( -s , s) = \frac{1}{6} \lim_{s \to \frac16} \frac{ A_4 ( -s, s)}{ \zeta( \frac32 - 3s ) \zeta ( \frac43 - 2s ) } \\
= & \frac{ \zeta(2)\zeta(\tfrac 53) }{6} \prod_p \Big(1-\frac1p \Big) \Big(1-\frac 1{p^{\frac 23}}\Big)^2 \Big(1-\frac 1{p^{\frac 13}}\Big) \Bigg\{ \frac{ \Big(1+\frac 1{p^{\frac 13}}\Big)^3 \Big(1 - \frac{1}{ p^{\frac23}}\Big)^2 }{ 6 \Big( 1 - \frac{1}{ p^{\frac13}}\Big)^2 } \\
& + \frac{ \Big(1+ \frac1{p^{ \frac13}} \Big) \Big( 1+ \frac1{p^{ \frac23}} \Big) \Big( 1 - \frac{1}{p^{\frac43}}\Big) }{2\Big( 1 - \frac{1}{p^{\frac23}}\Big) } + \frac{ \Big(1+ \frac1p \Big)\Big(1 + \frac{1}{p^{\frac23}}+ \frac{1}{p^{\frac43}}\Big) }{3 \Big( 1 + \frac{1}{p^{\frac13}} + \frac{1}{p^{\frac23}} \Big) } + \frac{ \Big( 1 + \frac1{p^{ \frac13}} \Big)^2 \Big( 1- \frac{1}{p^{\frac23}} \Big) }{p\Big( 1- \frac{1}{p^{\frac13}} \Big) } + \frac{ 1+ \frac1{p^{\frac13}} }{p^2} \Bigg\} .
\end{align*}
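Here, the factor $\frac16$ in the first equality comes from the Laurent expansions $\zeta(\tfrac 32-3s) = -\tfrac 13(s-\tfrac 16)^{-1}+O(1)$ and $\zeta(\tfrac 43-2s) = -\tfrac 12(s-\tfrac 16)^{-1}+O(1)$ near $s=\frac 16$, whose product contributes $\tfrac 16(s-\tfrac 16)^{-2}$.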
The claim follows.
\end{proof}
We are now ready to estimate $J^\pm(X)$ when the support of $\widehat \phi$ is small.
\begin{lemma}
\label{lemma transition term small}
Let $\phi$ be a real even Schwartz function such that $\sigma=\sup({\rm supp} (\widehat \phi)) <1$. Let $J^\pm(X)$ be defined by \eqref{J X def}.
Then we have the estimate
$$J^\pm(X) = C^\pm \phi \Big( \frac{L }{12 \pi i}\Big) X^{-\frac 13}+O_{\varepsilon} \Big(X^{\frac {\sigma-1}2+\varepsilon }\Big), $$
where
\begin{equation} \label{equation definition C}
C^\pm := \frac{5}{12} \frac{C_2^\pm }{C_1^\pm} \frac{\Gamma_{\pm}(\frac 13)}{\Gamma_{\pm}(\frac 23)} \frac{ \zeta(\tfrac 23)^2 \zeta(\tfrac 53) \zeta(2) }{ \zeta( \tfrac43)} \prod_p \Big(1-\frac 1{p^{\frac 23}}\Big)^2 \Big(1- \frac1p \Big)\Big(1+\frac 2{p^{\frac 23}}+\frac 1p+\frac 1{p^{\frac 43}}\Big) .
\end{equation}
\end{lemma}
\begin{proof}
We rewrite the integral in $J^\pm(X)$ as
\begin{multline}\label{J X integral eqn1}
\frac{1}{2 \pi i} \int_{(c)} (-2) \phi \Big(\frac{Ls}{2\pi i}\Big)\Big\{ \Big( 1-\frac{C_2^\pm}{C_1^\pm}X^{-\frac{1}{6}}\Big)\Big(-\frac{C_2^\pm}{C_1^\pm} X^{-\frac 16} \frac{\zeta'}{\zeta}(\tfrac 56+s)+ X^{-s} \frac{\Gamma_{\pm}(\frac 12-s)}{\Gamma_{\pm}(\frac 12+s)}\zeta(1-2s) \frac{A_3( -s, s)}{1-s} \Big)\\
+\frac{C_2^\pm }{C_1^\pm} X^{-s-\frac 16} \Big( 1-\frac{C_2^\pm}{C_1^\pm}X^{-\frac{1}{6}}\Big) \frac{\Gamma_{\pm}(\frac 12-s)}{\Gamma_{\pm}(\frac 12+s)} \zeta(1-2s) \frac{\zeta(\tfrac 56-s)}{\zeta(\tfrac 56+s)} \frac{A_4(-s,s)}{ 1- \frac{6s}5}
\\+\Big(\frac{C_2^\pm}{C_1^\pm}\Big)^2 X^{-s-\frac 13} \frac{\Gamma_{\pm}(\frac 12-s)}{\Gamma_{\pm}(\frac 12+s)}\zeta(1-2s) \frac{ A_3( -s, s)}{1-s}
\Big\}ds
\end{multline}
for $ 0 < c < \frac16$. The integrand has a simple pole at $s=\frac 16$ with residue
\begin{align}
& -2\phi \Big( \frac{L }{12 \pi i}\Big) \Big( 1-\frac{C_2^\pm}{C_1^\pm}X^{-\frac{1}{6}}\Big) X^{-\frac 16} \Big(\frac{C_2^\pm}{C_1^\pm} - \frac25 \frac{\Gamma_{\pm}(\frac 13)}{\Gamma_{\pm}(\frac 23)} \frac{ \zeta(\tfrac23 ) \zeta(3)}{ \zeta( \tfrac53 ) \zeta(2)} \Big) \notag \\
& -2 \phi \Big( \frac{L }{12 \pi i}\Big) \frac{C_2^\pm }{C_1^\pm} X^{-\frac 13} \frac{\Gamma_{\pm}(\frac 13)}{\Gamma_{\pm}(\frac 23)} \frac{5\zeta(\tfrac 23)^2}{4} \lim_{s\rightarrow \frac 16} (s-\tfrac 16)^2A_4(-s,s)\label{residue 16}
+O\Big(\phi\Big( \frac{L}{12 \pi i}\Big) X^{-\frac 12} \Big) \\
&= -{C^\pm} \phi \Big( \frac{L }{12 \pi i}\Big) X^{- \frac13} + O( X^{\frac{\sigma}{6}- \frac12}) \notag
\end{align}
by Lemma~\ref{lemma A4 limit at 16} as well as the fact that the first line vanishes. Due to Lemmas~\ref{lemma A_3 analytic continuation} and~\ref{lemma A_4 analytic continuation}, we can shift the contour of integration to the line $\operatorname{Re}(s)=\frac 12 - \frac{\varepsilon}{2}$, at the cost of $-1$ times the residue~\eqref{residue 16}.
We now estimate the shifted integral. The term involving $\frac{\zeta'}{\zeta}(\tfrac 56+s)$ can be evaluated by interchanging sum and integral; we obtain the identity
\begin{equation}\label{equation log der zeta}
\frac{1}{\pi i} \int_{(\frac 12-\frac \varepsilon 2)} \phi \Big(\frac{Ls}{2\pi i}\Big) \frac{\zeta'}{\zeta}(\tfrac 56+s)ds \\
= - \frac 2 L\sum_{p,e}\frac{\log p}{p^{\frac {5e}6}}\widehat\phi\Big(\frac{\log p^e}{L}\Big) .
\end{equation}
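To verify this identity, expand $\frac{\zeta'}{\zeta}(\tfrac 56+s) = -\sum_{p,e} (\log p)\, p^{-e(\frac 56+s)}$ and integrate term by term; shifting each integral to the line $\operatorname{Re}(s)=0$ and substituting $s=\frac{2\pi i t}{L}$ gives, with the convention $\widehat\phi(\xi)=\int_{\mathbb{R}}\phi(t)e^{-2\pi i t\xi}\,dt$,
\begin{equation*}
\frac{1}{\pi i}\int_{(0)} \phi\Big(\frac{Ls}{2\pi i}\Big) p^{-es}\,ds = \frac{2}{L}\int_{-\infty}^{\infty}\phi(t)\, e^{-2\pi i t \frac{\log p^e}{L}}\,dt = \frac{2}{L}\,\widehat\phi\Big(\frac{\log p^e}{L}\Big).
\end{equation*}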
The last step is to bound the remaining terms, which is carried out by combining~\eqref{equation decay phi complex} with Lemmas~\ref{lemma A_3 analytic continuation} and~\ref{lemma A_4 analytic continuation}.
\end{proof}
Finally, we complete the proof of Theorem \ref{theorem RC}.
\begin{proof}[Proof of Theorem \ref{theorem RC}]
Given Proposition~\ref{proposition RC} and Lemma~\ref{lemma transition term small}, the only thing remaining to prove is~\eqref{equation definition J(X)}. Applying~\eqref{J X integral eqn1} with $c=\frac 1{20}$ and splitting the integral into two parts, we obtain the identity
\begin{multline*}
J^\pm (X) = \frac{2C_2^\pm X^{-\frac 16}}{C_1^\pm L}\Big( 1-\frac{C_2^\pm}{C_1^\pm}X^{-\frac{1}{6}}\Big) \sum_{p,e}\frac{\log p}{p^{\frac {5e}{6}}}\widehat\phi\left(\frac{\log p^e}{L}\right) \\
-\frac{1}{\pi i} \int_{(\frac{1}{20})} \phi \Big(\frac{Ls}{2\pi i}\Big)\Big\{ \Big( 1-\frac{C_2^\pm}{C_1^\pm}X^{-\frac{1}{6}}\Big)\Big(-\frac{C_2^\pm}{C_1^\pm} X^{-\frac 16} \frac{\zeta'}{\zeta}(\tfrac 56+s)+ X^{-s} \frac{\Gamma_{\pm}(\frac 12-s)}{\Gamma_{\pm}(\frac 12+s)}\zeta(1-2s) \frac{ A_3( -s, s)}{1-s} \Big) \Big\} ds\\
-\frac{1}{\pi i} \int_{(\frac{1}{20})} \phi \Big(\frac{Ls}{2\pi i}\Big)\Big\{ \frac{C_2^\pm }{C_1^\pm} X^{-s-\frac 16} \Big( 1-\frac{C_2^\pm}{C_1^\pm}X^{-\frac{1}{6}}\Big) \frac{\Gamma_{\pm}(\frac 12-s)}{\Gamma_{\pm}(\frac 12+s)} \zeta(1-2s) \frac{\zeta(\tfrac 56-s)}{\zeta(\tfrac 56+s)} \frac{A_4(-s,s)}{ 1- \frac{6s}5}
\\+\Big(\frac{C_2^\pm}{C_1^\pm}\Big)^2X^{-s-\frac 13} \frac{\Gamma_{\pm}(\frac 12-s)}{\Gamma_{\pm}(\frac 12+s)}\zeta(1-2s) \frac{ A_3( -s, s) }{1-s}
\Big\}ds.
\end{multline*}
By shifting the first integral to the line $\operatorname{Re}(s) = \frac15$ and applying \eqref{equation log der zeta}, we derive \eqref{equation definition J(X)}. Note that the residue at $s=\frac16$ is the first line of~\eqref{residue 16}, which is equal to zero.
\end{proof}
\appendix
\section{Numerical investigations}
\label{appendix}
In this section we present several graphs\footnote{The computations associated to these graphs were done using development version 2.14 of pari/gp (see \url{https://pari.math.u-bordeaux.fr/Events/PARI2022/talks/sources.pdf}), and the full code can be found here: \url{https://github.com/DanielFiorilli/CubicFieldCounts} .} associated to the error term $$ E^+_{p} (X,T) := N^+_{p} (X,T) -A^+_p (T ) X -B^+_p(T) X^{\frac 56}.$$
We recall that we expect a bound of the form $ E^+_{p} (X,T) \ll_{\varepsilon} p^{ \omega}X^{\theta+\varepsilon} $ (see~\eqref{equation TT local 2}). Moreover, from the graphs shown in Figure~\ref{figure intro}, it seems likely that $\theta=\frac 12$ is admissible and best possible.
Now, to test the uniformity in $p$, we consider the function
$$ f_p(X,T):= \max_{1\leq x\leq X} x^{-\frac 12}|E^+_{p} (x,T)|;$$
we then expect a bound of the form $f_p(X,T) \ll_\varepsilon p^{\omega}X^{\theta-\frac 12+\varepsilon}$ with $\theta$ possibly equal to $\frac 12$.
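As an illustration of this statistic, the running maximum $f_p(X,T)$ is straightforward to compute once the field counts are available. The sketch below is a hypothetical Python fragment, not the code used for the computations: the sorted list \texttt{discs} of discriminants (restricted to the prescribed splitting type at $p$) and the constants \texttt{A}, \texttt{B}, standing for $A^+_p(T)$ and $B^+_p(T)$, are placeholder inputs that would in practice be produced by Belabas's program.

```python
# Hypothetical sketch: f_p(X, T) = max_{1 <= x <= X} x^(-1/2) * |E_p^+(x, T)|,
# where N_p^+(x, T) is recovered by counting discriminants <= x in `discs`.
# `discs` (sorted ascending) and the constants A, B (for A_p^+(T), B_p^+(T))
# are placeholder inputs, not part of the paper's actual code.
import bisect

def f_p(discs, A, B, X):
    best = 0.0
    for x in range(1, X + 1):
        N = bisect.bisect_right(discs, x)      # N_p^+(x, T)
        E = N - A * x - B * x ** (5.0 / 6.0)   # error term E_p^+(x, T)
        best = max(best, abs(E) / x ** 0.5)    # running maximum of x^(-1/2)|E|
    return best
```

The fragment only fixes the bookkeeping of the running maximum; the plots below were of course produced from the genuine counts.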
To predict the smallest admissible value of $\omega$, in Figure~\ref{figure T123} we plot $f_{p}(10^4,T_j)$ for $j=1,2,3,$ as a function of $p<10^4$.
\begin{figure}[h]
\begin{center}
\includegraphics[scale=.58]{T1T2T3sunif10e4p10e4.jpg}
\end{center}
\caption{A plot of $(p,f_{p}(10^4,T_j))$ for $p<10^4$ and $j=1,2,3.$
}\label{figure T123}
\end{figure}
From this data, it seems likely that any $\omega >0$ is admissible. Now, one might wonder whether this is still valid in the range $p>X$. To investigate this, in Figure~\ref{figure second T3} we plot the function $f_{p}(10^4,T_3)$ for every $10^4$-th prime up to $10^8$, revealing similar behaviour. Finally, we have also produced similar data associated to the quantity $N^-_{p}(X,T_j)$ with $j=1,2,3$, and the result was comparable to Figure~\ref{figure T123}.
\begin{figure}[h]
\begin{center}
\includegraphics[scale=.55]{T3unif10e4somep10e8.jpg}
\end{center}
\caption{A plot of some of the values of $(p,f_{p}(10^4,T_3))$ for $p<10^8$.
}
\label{figure second T3}
\end{figure}
However, it seems like the splitting type $T_4$ behaves differently; see Figure~\ref{figure T45 first} for a plot of $p\cdot f_p(10^4,T_4)$ for every $p<10^5$.
\begin{figure}[h]
\begin{center}
\includegraphics[scale=.5]{T4unif10e4p10e5normp.jpg}
\end{center}
\caption{A plot of $(p,pf_{p}(10^4,T_4))$ for $p<10^5$.
\label{figure T45 first}
}
\end{figure}
One can see that this graph is eventually essentially constant. This is readily explained by the fact that in the range $p>X$, we have $N^{\pm}_{p} (X,T_4) =0$. Indeed, if $p$ has splitting type $T_4$ in a cubic field $K$ of discriminant at most $X$, then $p$ must divide $D_K$, which implies that $p\leq X$. As a consequence, $pf_p(X,T_4)\asymp X^{\frac 12} $, which is constant as a function of $p$. As for the more interesting range $p\leq X$, it seems like $f_p(X,T_4)\ll_\varepsilon p^{-\frac 12+\varepsilon}X^\varepsilon $ (i.e. for $T=T_4$, the values $\theta=\frac 12$ and any $\omega>-\frac 12$ are admissible in~\eqref{equation TT local 2}). In Figure~\ref{figure T4 second} we test this hypothesis with larger values of $X$ by plotting $p^{\frac 12} \cdot f_p(10^5,T_4)$ for all $p< 10^4$.
\begin{figure}[h]
\begin{center}
\includegraphics[scale=.55]{T4unif10e5p10e4normsqrtp.jpg}
\end{center}
\caption{A plot of $(p,p^{\frac 12}f_{p}(10^5,T_4))$ for $p<10^4$.
}
\label{figure T4 second}
\end{figure}
This seems to confirm that for $T=T_4$, the values $\theta=\frac 12$ and any $\omega>-\frac 12$ are admissible in~\eqref{equation TT local 2}. In other words, it seems like we have $E_p^+(X,T_4)\ll_\varepsilon p^{-\frac 12+\varepsilon} X^{\frac 12+\varepsilon}$, and the sum of the two exponents here is $2\varepsilon$, which is significantly smaller than the sum of exponents in Theorem~\ref{theorem omega result counts} which is $\omega+\theta \geq \frac 12$. Note that this is not contradictory, since in that theorem we are assuming such a bound uniformly for all splitting types, and from the discussion above we expect that $E_p^+(X,T_1)\ll_\varepsilon p^{\varepsilon} X^{\frac 12+\varepsilon}$ is essentially best possible. Finally, we have also produced data for the quantity $N_p^-(X,T_4)$. The result was somewhat similar, but far from identical. We would require more data to make a guess as strong as the one we made for $E_p^+(X,T_4)$.
For the splitting type $T_5$, it seems like the error term is even smaller (probably owing to the fact that these fields are very rare). Indeed, this is what the graph of $p^2\cdot f_p(10^6,T_5)$ for all $p< 10^3$ in Figure~\ref{figure T5} indicates.
\begin{figure}[h]
\begin{center}
\includegraphics[scale=.5]{T5unif10e6p10e3normp2.jpg}
\end{center}
\caption{A plot of $(p,p^{2}f_{p}(10^6,T_5))$ for $p<10^3$.
}
\label{figure T5}
\end{figure}
Again, there are two regimes. Firstly, by~\cite[p.\ 1216]{B}, $p>2$ has splitting type $T_5$ in the cubic field $K$ if and only if $p^2\mid D_K$, hence $N^{\pm}_p(X,T_5)=0$ for $p>X^{\frac 12}$ (that is, $p^2\cdot f_p(X,T_5) \asymp X^{\frac 12}$). As for $p\leq X^{\frac 12}$, Figure~\ref{figure T5} indicates that $f_p(X,T_5) \ll_\varepsilon p^{-1 +\varepsilon}X^{\varepsilon} $ (e.g. for $T=T_5$, the values $\theta=\frac 12$ and any $\omega>-1$ are admissible in~\eqref{equation TT local 2}). Once more, it is interesting to compare this with
Theorem~\ref{theorem omega result counts}, since it seems like $ E^+_p(X,T_5) \ll_\varepsilon p^{-1+\varepsilon} X^{\frac 12+\varepsilon} $, and the sum of the two exponents is now $-\frac 12+2\varepsilon$. We have also produced analogous data associated to the quantity $N_p^-(X,T_5)$. The result was somewhat similar.
Finally, we end this section with a graph (see Figure~\ref{figure E+(X)}) of $$E^+(X):=X^{-\frac 12} \big(N^+_{\rm all}(X)-C_1^+X-C_2^+X^{\frac 56}\big)$$ for $X< 10^{11}$ (which is the limit of Belabas' program\footnote{The program, based on the algorithm in~\cite{B}, can be found here: \url{https://www.math.u-bordeaux.fr/~kbelabas/research/cubic.html}} used for this computation). Here, $N_{\rm all}^+(X)$ counts all cubic fields of discriminant up to $X$, including Galois fields (by Cohn's work~\cite{C}, $N^+_{\rm all}(X) -N^+(X)\sim c X^{\frac 12}$, with $c=0.1585...$). This strongly supports the conjecture that $E^+(X)\ll_\varepsilon X^{\frac 12+\varepsilon}$ and that the exponent $\frac 12$ is best possible. It is also interesting that the graph is always positive, which is reminiscent of Chebyshev's bias (see for instance the graphs in the survey paper~\cite{GM}) in the distribution of primes.
\begin{figure}[h]
\begin{center}
\includegraphics[scale=.38]{NX10e11with5000points.png}
\end{center}
\caption{A plot of $E^+(X) $ for $X<10^{11}$.
}
\label{figure E+(X)}
\end{figure}
Given this numerical evidence, one may summarize this section by stating that in all cases, it seems like we have square-root cancellation. More precisely, the data indicates that the bound
\begin{equation}
N^+_{p} (X,T) -A^+_p (T ) X -B^+_p(T) X^{\frac 56} \ll_\varepsilon (pX)^{\varepsilon} \big(A^+_p (T ) X\big)^{\frac 12}
\label{equation bound montgomery for cubic}
\end{equation}
could hold, at least for almost all $p$ and $X$. This is reminiscent of Montgomery's conjecture~\cite{Mo} for primes in arithmetic progressions, which states that
$$ \sum_{\substack{n\leq x \\ n\equiv a \bmod q} }\Lambda(n)- \frac{x}{\phi(q)} \ll_\varepsilon x^{\varepsilon} \Big(\frac{x}{\phi(q)}\Big)^{\frac 12} \qquad (q\leq x,\qquad (a,q)=1). $$
Precise bounds such as~\eqref{equation bound montgomery for cubic} seem to be far from reach with the current methods; however, we hope to return to such questions in future work.
\begin{thebibliography}{99}
\bibitem[B]{B} K. Belabas, \emph{A fast algorithm to compute cubic fields.} Math. Comp. \textbf{66} (1997), no. 219, 1213--1237.
\bibitem[BBP]{BBP} K. Belabas, M. Bhargava, C. Pomerance, \emph{Error estimates for the Davenport-Heilbronn theorems.} Duke Math. J. \textbf{153} (2010), no. 1, 173--210.
\bibitem[BST]{BST} M. Bhargava, A. Shankar, J. Tsimerman, \emph{On the Davenport-Heilbronn theorems and second order terms.} Invent. Math. \textbf{193} (2013), no. 2, 439--499.
\bibitem[BTT]{BTT} M. Bhargava, T. Taniguchi, F. Thorne, \emph{Improved error estimates for the Davenport--Heilbronn theorems.} Preprint 2021, arXiv:2107.12819.
\bibitem[CK1]{CK1} P. J. Cho, H. H. Kim, \emph{Low lying zeros of Artin $L$-functions.} Math. Z. \textbf{279} (2015), no. 3-4, 669--688.
\bibitem[CK2]{CK2} P. J. Cho, H. H. Kim, \emph{$n$-level densities of Artin $L$-functions.} Int. Math. Res. Not. IMRN 2015, no. 17, 7861--7883.
\bibitem[CP]{CP} P.\ J.\ Cho, J.\ Park, \emph{Dirichlet characters and low-lying zeros of $L$-functions}, J.\ Number Theory \textbf{212} (2020), 203--232.
\bibitem[C]{C} H. Cohn, \emph{The density of abelian cubic fields.} Proc. Amer. Math. Soc. \textbf{5} (1954), 476--477.
\bibitem[CFZ]{CFZ} J.\ B.\ Conrey, D.\ W.\ Farmer, M.\ R.\ Zirnbauer, \emph{Autocorrelation of ratios of $L$-functions.} Commun.\ Number Theory Phys.\ \textbf{2} (2008), no.\ 3, 593--636.
\bibitem[CS]{CS} J.\ B.\ Conrey, N.\ C.\ Snaith, \emph{Applications of the $L$-functions ratios conjectures.} Proc.\ Lond.\ Math.\ Soc.\ (3) \textbf{94} (2007), no.\ 3, 594--646.
\bibitem[DHP]{DHP} C.\ David, D.\ K.\ Huynh, J.\ Parks, \emph{One-level density of families of elliptic curves and the Ratios Conjecture}, Res.\ Number Theory \textbf{1} (2015), Paper No.\ 6, 37 pp.
\bibitem[DFS]{DFS} L.\ Devin, D.\ Fiorilli, A.\ S\"odergren, \emph{Low-lying zeros in families of holomorphic cusp forms: the weight aspect.} Preprint 2019, arXiv:1911.08310.
\bibitem[DFI]{DFI} W.\ Duke, J.\ B.\ Friedlander, H.\ Iwaniec, \emph{The subconvexity problem for Artin $L$-functions.}
Invent.\ Math.\ \textbf{149} (2002), no.\ 3, 489--577.
\bibitem[FM]{FM} D.\ Fiorilli, S.\ J.\ Miller, \emph{Surpassing the ratios conjecture in the $1$-level density of Dirichlet $L$-functions.} Algebra Number Theory \textbf{9} (2015), no.\ 1, 13--52.
\bibitem[FPS1]{FPS1} D.\ Fiorilli, J.\ Parks, A.\ S\"odergren, \emph{Low-lying zeros of elliptic curve $L$-functions: Beyond the Ratios Conjecture.} Math.\ Proc.\ Cambridge Philos.\ Soc.\ \textbf{160} (2016), no.\ 2, 315--351.
\bibitem[FPS2]{FPS2} D.\ Fiorilli, J.\ Parks, A.\ S\"odergren, \emph{Low-lying zeros of quadratic Dirichlet $L$-functions: Lower order terms for extended support.} Compos.\ Math.\ \textbf{153} (2017), no.\ 6, 1196--1216.
\bibitem[FPS3]{FPS3} D.\ Fiorilli, J.\ Parks, A.\ S\"odergren, \emph{Low-lying zeros of quadratic Dirichlet $L$-functions: A transition in the ratios conjecture.} Q.\ J.\ Math.\ \textbf{69} (2018), no.\ 4, 1129--1149.
\bibitem[FI]{FI} \'E.\ Fouvry, H.\ Iwaniec, \emph{Low-lying zeros of dihedral $L$-functions.} Duke Math.\ J.\ \textbf{116} (2003), no.\ 2, 189--217.
\bibitem[G+]{G+} J. Goes, S. Jackson, S. J. Miller, D. Montague, K. Ninsuwan, R. Peckner, T. Pham, \emph{A unitary test of the ratios conjecture.} J.\ Number Theory \textbf{130} (2010), no. 10, 2238--2258.
\bibitem[GM]{GM} A. Granville, G. Martin, \emph{Prime number races.} Amer. Math. Monthly \textbf{113} (2006), no. 1, 1--33.
\bibitem[HR]{HR} C.\ P.\ Hughes, Z.\ Rudnick, \emph{Linear statistics of low-lying zeros of $L$-functions.} Q.\ J.\ Math.\ \textbf{54} (2003), no.\ 3, 309--333.
\bibitem[HKS]{HKS} D.\ K.\ Huynh, J.\ P.\ Keating, N.\ C.\ Snaith, \emph{Lower order terms for the one-level density of elliptic curve $L$-functions}, J.\ Number Theory \textbf{129} (2009), no.\ 12, 2883--2902.
\bibitem[IK]{IK} H.\ Iwaniec, E.\ Kowalski, \emph{Analytic number theory.} American Mathematical Society Colloquium Publications \textbf{53}, American Mathematical Society, Providence, RI, 2004.
\bibitem[ILS]{ILS} H.\ Iwaniec, W.\ Luo, P.\ Sarnak, \emph{Low lying zeros of families of $L$-functions.} Inst.\ Hautes \'Etudes Sci.\ Publ.\ Math.\ \textbf{91} (2000), 55--131.
\bibitem[JR]{JR} J. W. Jones, D. P. Roberts, \emph{A database of number fields.} LMS J. Comput. Math. \textbf{17} (2014), no. 1, 595--618.
\bibitem[KS1]{KS1} N.\ M.\ Katz, P.\ Sarnak, \emph{Zeroes of zeta functions and symmetry.} Bull.\ Amer.\ Math.\ Soc.\ (N.S.) \textbf{36} (1999), no.\ 1, 1--26.
\bibitem[KS2]{KS2} N.\ M.\ Katz, P.\ Sarnak, \emph{Random matrices, Frobenius eigenvalues, and monodromy.}
American Mathematical Society Colloquium Publications \textbf{45}, American Mathematical Society, Providence, RI, 1999.
\bibitem[MS]{MS} A.\ M.\ Mason, N.\ C.\ Snaith, \emph{Orthogonal and symplectic $n$-level densities}, Mem.\ Amer.\ Math.\ Soc.\ \textbf{251} (2018), no.\ 1194.
\bibitem[M1]{M} S. J. Miller, \emph{One- and two-level densities for rational families of elliptic curves: evidence for the underlying group symmetries.} Compos. Math. \textbf{140} (2004), no. 4, 952--992.
\bibitem[M2]{MiSymplectic} S.\ J.\ Miller, \emph{A symplectic test of the $L$-functions ratios conjecture.} Int.\ Math.\ Res.\ Not.\ IMRN 2008, no.\ 3, Art.\ ID rnm146, 36 pp.
\bibitem[M3]{MiOrthogonal} S.\ J.\ Miller, \emph{An orthogonal test of the $L$-functions Ratios Conjecture.} Proc.\ Lond.\ Math.\ Soc.\ (3) \textbf{99} (2009), no.\ 2, 484--520.
\bibitem[Mo]{Mo} H. L. Montgomery, \emph{Primes in arithmetic progressions.} Michigan Math. J. \textbf{17} (1970), 33--39.
\bibitem[MV]{MV} H. L. Montgomery, R. C. Vaughan, \emph{Multiplicative number theory. I. Classical theory.} Cambridge Studies in Advanced Mathematics \textbf{97}, Cambridge University Press, Cambridge, 2007.
\bibitem[Ro]{Ro} D.\ P.\ Roberts, \emph{Density of cubic field discriminants.} Math.\ Comp.\ \textbf{70} (2001), no.\ 236, 1699--1705.
\bibitem[Ru]{Ru} M.\ Rubinstein, \emph{Low-lying zeros of $L$-functions and random matrix theory.} Duke Math.\ J.\ \textbf{109} (2001), no.\ 1, 147--181.
\bibitem[RS]{RS} Z. Rudnick, P. Sarnak, \emph{Zeros of principal $L$-functions and random matrix theory.} Duke Math. J. \textbf{81} (1996), no. 2, 269--322.
\bibitem[SaST]{SaST} P.\ Sarnak, S.\ W.\ Shin, N.\ Templier, \emph{Families of $L$-functions and their symmetry.} Proceedings of Simons Symposia, \emph{Families of Automorphic Forms and the Trace Formula.} Springer-Verlag (2016), 531--578.
\bibitem[ShST]{SST1} A.\ Shankar, A.\ S\"odergren, N.\ Templier, \emph{Sato-Tate equidistribution of certain families of Artin $L$-functions.} Forum Math.\ Sigma \textbf{7} (2019), e23 (62 pages).
\bibitem[ST]{ST} S.\ W.\ Shin, N.\ Templier, \emph{Sato-Tate theorem for families and low-lying zeros of automorphic $L$-functions.} Invent.\ Math.\ \textbf{203} (2016), no.\ 1, 1--177.
\bibitem[TT]{TT} T. Taniguchi, F. Thorne, \emph{Secondary terms in counting functions for cubic fields.} Duke Math. J. \textbf{162} (2013), no. 13, 2451--2508.
\bibitem[W]{Wax} E.\ Waxman, \emph{Lower order terms for the one-level density of a symplectic family of Hecke $L$-functions}, J.\ Number Theory \textbf{221} (2021), 447--483.
\bibitem[Ya]{Y} A.\ Yang, \emph{Distribution problems associated to zeta functions and invariant theory.} Ph.D. Thesis, Princeton University, 2009.
\bibitem[Yo]{Y2} M.\ P.\ Young, \emph{Low-lying zeros of families of elliptic curves.} J.\ Amer.\ Math.\ Soc.\ \textbf{19} (2006), no.\ 1, 205--250.
\end{thebibliography}
\end{document} |
\begin{document}
\title{Quantum Correlations with a Classical Apparatus}
\author{Frederick H. Willeboordse}
\affiliation{
Dept of Physics, The National University of Singapore, Singapore 119260
}
\email{[email protected]}
\homepage[\\Homepage: ]{http://www.willeboordse.ch/science/}
\begin{abstract}
A deterministic, relativistically local and thus classical Bell-type apparatus is reported that violates the Bell-CHSH inequality by introducing a simple local memory element in the detector and by requiring the detector combinations to switch with unequal probabilities. This indicates that the common notion of the fundamental impossibility of a classical-type theory underlying quantum mechanics may need to be re-evaluated.
\end{abstract}
\maketitle
\section{Introduction}
A cornerstone in the understanding of the fundamental nature of quantum mechanics is the Clauser-Horne-Shimony-Holt (CHSH) inequality \cite{CHSH}, which is based on John S. Bell's reexamination \cite{Bell_Physics_1} of the Einstein-Podolsky-Rosen paradox \cite{EPR}. The CHSH inequality concerns possible (anti-)correlations in an experimental setup where a source emits an entangled pair of particles that are then independently detected by a measuring device which has (at least) two possible settings and a binary output. Denoting the settings of the first measuring apparatus as $A_1$ and $A_2$, the settings of the second measuring apparatus as $B_1$ and $B_2$, and the probability that the binary outputs of settings $A_i$ and $B_j$ ($i,j = 1,2$) mismatch as $P(A_i,B_j)$, CHSH showed that $C = P(A_1,B_1) + P(A_1,B_2) + P(A_2,B_1) - P(A_2,B_2) \leq 2$ for what are generally considered to be \textit{all} possible classical, relativistically local, binary-valued systems. Under the same conditions, quantum mechanics predicts the upper bound to be $1+\sqrt{2}$ instead of $2$ (equivalently, $2\sqrt{2}$ instead of $2$ for the correlation form of the inequality). Experiments by Aspect and others subsequently showed clearly that the quantum mechanical value is correct \cite{Aspect_PRL-49}.
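For concreteness, the quantum-mechanical violation of the probability form of the inequality can be checked in a few lines. This is a standard textbook computation, not part of the apparatus described below: the singlet-state mismatch probability $\cos^2((a-b)/2)$ and the analyzer angles are assumptions supplied here for illustration.

```python
import math

def mismatch(a, b):
    # Singlet-state probability that the two binary outcomes differ
    # for analyzer angles a and b (a standard textbook expression).
    return math.cos((a - b) / 2) ** 2

A1, A2 = 0.0, math.pi / 2
B1, B2 = math.pi / 4, -math.pi / 4
C = (mismatch(A1, B1) + mismatch(A1, B2)
     + mismatch(A2, B1) - mismatch(A2, B2))
print(C)  # 1 + sqrt(2), above the classical bound of 2
```

The value obtained, $1+\sqrt{2}$, is the maximal quantum value of the probability form of the inequality (the correlation-form Tsirelson bound is $2\sqrt{2}$).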
The general consensus that arose from the discrepancy between the experimental results and the limits imposed on classical correlations by the CHSH inequality is that it is impossible to construct a classical type theory that can fully account for the observed (anti-) correlations \cite{Mermin_PhysicsToday1985,Hess-Philipp_PNAS-98,Kaszlikowski_DoctorateThesis,Shimony_Stanford}. Consequently, it is believed that quantum mechanics cannot be viewed as the statistical mechanics of an underlying local realistic sub-atomic world and any attempt at describing nature by means of e.g. a cellular automaton or a complex system is futile.
Therefore, one first and foremost needs to get past the hurdle of the CHSH inequality in order to open the door to re-evaluating the idea of realistic descriptions. From a conceptual point of view, getting past this hurdle does not require the actual design of an apparatus or automaton that represents a real physical system: falsification of the consensus interpretation can be achieved by a single counterexample, as long as it can reasonably be argued that the counterexample is genuinely classical.
This paper is organized as follows: Section \ref{sec:apparatus} describes the setup of the apparatus and its components. Section \ref{sec:numexp} gives details of the numerical experiment as well as the internal workings of the components described in section \ref{sec:apparatus}. For the apparatus, the distinction between weak and strong locality is important, and its relevance is outlined in section \ref{sec:weakstrong}. A discussion and the conclusions are presented in section \ref{sec:discussion}.
\section{The Apparatus - Classical Model \label{sec:apparatus}}
The setup numerically simulated below is depicted in Fig.~\ref{apparatus}. It consists of a source $S$, which periodically emits two identical ``eventrons'' in opposite directions, and two detectors $A$ and $B$, each with two settings denoted by the subscripts $1$ and $2$ respectively. The detectors are operated by Alice and Bob, who receive instructions on how to set the detectors (i.e. what setting the detectors should be in) from the experimenter. When an eventron hits a detector, the detector outputs either a one or a zero, creating an event that is recorded by Alice and Bob together with the detector setting. All events are recorded (the dropping of events is not permitted). Alice and Bob are not allowed to communicate with each other, or back to the experimenter or the source, and consequently Alice is not aware of the outcomes recorded by Bob and vice versa. Furthermore, Alice and Bob do not know each other's instructions (i.e. the experimenter does not send the instructions for Bob to Alice or vice versa: Alice and Bob only receive their own instructions), nor are they allowed to have predetermined knowledge of each other's settings (e.g. by means of a published switching schedule).
\begin{figure}\label{apparatus}
\end{figure}
The source is in possession of a binary lookup table $t$ and has a memory-like variable $c$ that acts as a counter and as an index to entries in the lookup table. The variable $c$ is increased by one every time a pair of eventrons is emitted. The lookup table is created by the source by randomly setting its entries to zero or one, with a probability $p_t$ of successive entries being different and without further arrangement of the entries in groups. It is created once at the beginning of an experimental run and is then used throughout the run. Its length is fixed, though the exact value is not essential here, and an index $k$ can be used to access entry $t[k]$ (the table is thus like an array in the context of computer programming). When an index $k$ is incremented and subsequently exceeds the table length, it is set to point to item 2 of the table, as the entry $k-1$ is used in the computation of the output in some cases.
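The table construction can be sketched in a few lines; the table length and seed below are assumed for illustration, only the flip probability $p_t$ is taken from the text.

```python
import random

rng = random.Random(42)
p_t, length = 0.9, 50_000

# Successive entries differ with probability p_t; no grouping of entries.
t = [rng.randint(0, 1)]
for _ in range(length - 1):
    t.append(1 - t[-1] if rng.random() < p_t else t[-1])

# Fraction of successive entries that differ should be close to p_t,
# while zeros and ones each appear with frequency close to 1/2.
flips = sum(t[k] != t[k - 1] for k in range(1, length)) / (length - 1)
ones = sum(t) / length
print(flips, ones)
```

The second statistic anticipates the observation made later that each detector setting, considered individually, emits a one with probability $1/2$.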
The detectors contain local memory-like variables $m_X$ that act as indices to the lookup table, and local functions $f(t,X_i,m_X,c)$ with $X$ indicating either detector $A$ or $B$ and $i$ either setting $1$ or $2$ that determine the binary output based on the value of this internal memory-like variable and the variables carried by the eventron.
The eventrons carry with them the source's lookup table, the value of the source's counter $c$ as well as a set-memory command which the source sets with a probability $p_s$ labeling it randomly as destined for either detector $A$ or detector $B$. When a detector receives a set-memory command destined for it, the variable $m_X$ is set to $m_X = c$.
An experimental run contains the following parts:
\begin{enumerate}
\item Switch on: The source creates the binary lookup table. All the variables of all the components of the apparatus are set to random values, as are the detector settings.
\item During an experimental run: The source periodically emits eventrons. Alice and Bob set the detectors according to the instructions received from the experimenter. They record the detector output and its setting for each event.
\item Switch off: Alice and Bob send the recorded detector outputs and settings to the experimenter who calculates the correlations.
\end{enumerate}
Note that in this setup, it is not necessary to directly synchronize the experimenter's instructions with the source. As none of the eventrons is dropped, he only needs to specify the sequential number of eventrons per setting (e.g. the first four eventrons are measured in setting 1, the next 9 eventrons in setting 2, and so on). However, it would be straightforward to modify the apparatus to allow for dropped eventrons by including a timing mechanism.
Besides the memory-like variables in the detectors, a key difference with conventional setups is that the experimenter sends random detector-setting instructions to Alice and Bob such that certain detector setting combinations occur more frequently than others. However, it should be stressed that the experimenter does not know the outcomes at any stage during a run, and it is argued in the discussion that this neither violates relativistic locality nor makes the apparatus non-classical.
\section{Details of the Numerical Experiment \label{sec:numexp}}
During one experimental run, the following steps are repeatedly carried out:
\newline
\noindent \textbf{At the source}
\begin{enumerate}
\item Increase $c$ by one (if it exceeds the table length, set it to point to item 2 of the table)
\item Create two identical eventrons
\item Copy the value of $c$ to the eventrons
\item Copy the lookup table $t$ to the eventrons
\item With probability $p_s$, set the set-memory command of the eventrons as destined for either detector $A$ or detector $B$ (randomly chosen). Otherwise, set the command to inactive.
\item Launch the eventrons towards the detectors
\end{enumerate}
\noindent \textbf{During flight of the eventrons}
\newline
With probability $p_d$, the experimenter instructs Alice and Bob to set the detectors to $A_i$ and $B_j$ respectively, with $i$ and $j$ randomly chosen, when the previous detector pairing (known only to the experimenter) was either $A_1$,$B_2$ or $A_2$,$B_1$; he does so with probability $\alpha p_s$ when the previous detector pairing was either $A_1$,$B_1$ or $A_2$,$B_2$.
The parameter $\alpha$ is a factor ranging from roughly $0.7$ to $1/p_s$ (here $\alpha = 2$ was used). As $p_d$ and $p_s$ are quite a bit smaller than one, the detector settings do not change for the majority of events.
Although in the numerical simulations conducted, the experimenter's instructions are dispatched during the flight of the eventrons, this is not essential. Given the probabilities $p_d$ and $p_s$, he could make the switching list beforehand. It is only essential that he is ignorant of the detectors' outputs and of the values of the source's variables, and of course that the list is not distributed to the source.
\vspace*{5mm}
\noindent \textbf{At the detectors}
\newline
\noindent When an eventron arrives at the detector, carry out the function corresponding to the detector setting:
\noindent $f(t,X_1,m_X,c)$:
\begin{itemize}
\item
If the set-memory command indicates $X$, set $m_X = c $. Otherwise leave $m_X$ as it is.
\item Lookup the entry $t[m_X]$ and emit this entry.
\item Increase the value of $m_X$ by 1 (if it exceeds the table length, set it to point to item 2 of the table).
\end{itemize}
\noindent $f(t,X_2,m_X,c)$:
\begin{itemize}
\item Lookup the entry $t[c-1]$ and emit this entry.
\end{itemize}
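The source, in-flight, and detector steps above can be condensed into a short simulation sketch. The concrete values of $p_s$, $p_d$, the table length and the random seed are assumptions of mine (the text fixes only $p_t = 0.9$ and $\alpha = 2$), and the function and variable names are illustrative rather than the author's.

```python
import random

def chsh_run(n_events=200_000, p_t=0.9, p_s=0.1, p_d=0.005,
             alpha=2.0, table_len=1000, seed=1):
    rng = random.Random(seed)
    # Source: binary lookup table whose successive entries differ with
    # probability p_t (no further grouping of entries).
    t = [rng.randint(0, 1)]
    for _ in range(table_len - 1):
        t.append(1 - t[-1] if rng.random() < p_t else t[-1])
    c = rng.randrange(1, table_len)                 # source counter
    m = {'A': rng.randrange(table_len), 'B': rng.randrange(table_len)}
    setting = {'A': rng.choice((1, 2)), 'B': rng.choice((1, 2))}
    stats = {(i, j): [0, 0] for i in (1, 2) for j in (1, 2)}

    for _ in range(n_events):
        # At the source: advance the counter (wrapping to item 2 of the
        # table) and, with probability p_s, attach a set-memory command.
        c = c + 1 if c + 1 < table_len else 1
        cmd = rng.choice('AB') if rng.random() < p_s else None

        # During flight: the switching probability depends on the previous
        # pairing (alpha * p_s for equal settings, p_d for unequal ones).
        p_switch = alpha * p_s if setting['A'] == setting['B'] else p_d
        if rng.random() < p_switch:
            setting = {'A': rng.choice((1, 2)), 'B': rng.choice((1, 2))}

        # At the detectors: f(t, X_1, m_X, c) or f(t, X_2, m_X, c).
        out = {}
        for X in 'AB':
            if setting[X] == 1:
                if cmd == X:                        # apply set-memory
                    m[X] = c
                out[X] = t[m[X]]
                m[X] = m[X] + 1 if m[X] + 1 < table_len else 1
            else:
                out[X] = t[c - 1]
        rec = stats[(setting['A'], setting['B'])]
        rec[0] += out['A'] != out['B']
        rec[1] += 1

    P = {k: (mis / tot if tot else 0.0) for k, (mis, tot) in stats.items()}
    return P[(1, 1)] + P[(1, 2)] + P[(2, 1)] - P[(2, 2)]
```

With these (assumed) parameters, runs yield values of $C$ above the classical bound of $2$, in line with the violation reported in the text.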
\noindent
The apparatus can violate the CHSH inequality due to a combination of two factors.
Firstly, the construction of the lookup table ensures that when $c$ and $m_X$ are aligned (i.e. when $c = m_X$), the probability of mismatch between $A_1$ and $B_2$ or between $A_2$ and $B_1$ is equal to $p_t$. As long as $p_s$ is sufficiently larger than $p_d$, this alignment is mostly active for detector pairs $A_1$,$B_2$ or $A_2$,$B_1$. Detector settings $A_2$ and $B_2$ always yield the same binary output, and consequently the probability of mismatch for this combination is zero.
Secondly, for detector combination $A_1$,$B_1$, the alignment is mostly inactive, as the probability that both detectors receive a set-memory command before the detector settings change is rather small. Consequently, for the pairing $A_1$,$B_1$, the respective memories $m_X$ are de-aligned most of the time. As the probability of obtaining a zero or a one equals $1/2$ when randomly picking an entry of the lookup table, the probability of mismatch when looking up two values with the two de-aligned indices equals $1/2$ as well. Consequently, the threshold of $p_t$ for violating the Bell-CHSH inequality is given by $2 p_t + 1/2 = 2$, i.e.\ $p_t = 3/4$. In practice, of course, $A_1$,$B_1$ will sometimes be aligned, and hence $p_t$ needs to be sufficiently larger than $0.75$ in order to compensate for the lost (anti-)correlations.
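In the idealized bookkeeping (perfect alignment for the pairings $A_1$,$B_2$ and $A_2$,$B_1$, complete de-alignment for $A_1$,$B_1$), the reasoning above amounts to

```latex
\begin{eqnarray}
C & = & P(A_1,B_1) + P(A_1,B_2) + P(A_2,B_1) - P(A_2,B_2) \nonumber \\
  & = & \tfrac{1}{2} + p_t + p_t - 0 \;=\; 2 p_t + \tfrac{1}{2}, \nonumber
\end{eqnarray}
```

so that $C > 2$ precisely when $p_t > 3/4$; for $p_t = 0.9$ the idealized value is $C = 2.3$.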
Numerical verification was carried out for $p_t = 0.9$; a histogram of the results is displayed in Fig.~\ref{histogram}.
\begin{figure}\label{histogram}
\end{figure}
It should be noted that when considered individually, each detector setting will emit a one with probability 1/2 as can be expected from the construction of the lookup table.
The question that arises immediately is: ``What is different, so that Bell-CHSH-type proofs do not apply?''
The essential differences are the inclusion of a memory term and the dependence of the detector output on how long the detector has been in a setting (i.e. a time dependence due to the time it takes to receive a set-memory command). These dependences prevent the factorization of the detector probabilities, and consequently Bell's proof is not applicable to the current system. As such, of course, it is not particularly exciting to find that a system not covered by the conditions of a proof does not obey that very proof's results. The point, however, is that conclusions (which by themselves naturally reside outside the proof) are drawn which are not warranted, namely the impossibility of a realistic local theory. It may be argued that the inclusion of a memory term is disallowed, as subsequent detector readings need to be independent. However, physically speaking, it is hard if not impossible to assess what this would exactly mean if the changes were to occur at levels many orders of magnitude below the observed levels and the outcomes were a statistical representation of the changed sub-micro states. For example, the seemingly random output of a single detector is not in conflict with a deterministic underlying process. For practical purposes, even one of the simplest two-state nearest-neighbor cellular automata (rule 30) can provide excellent randomness, even though each subsequent state is completely determined by the previous state \cite{Wolfram_ANKOS}.
While the apparatus described in Fig.~\ref{apparatus} can easily be simulated numerically, as done here, the results apply just as well to a mechanical device of the same design, which can therefore be tested experimentally.
\section{Strong and Weak Locality \label{sec:weakstrong}}
Probably the least elegant part of the proposed approach is the requirement that on average the time the apparatus is set to detector combination $A_1$,$B_1$ must be significantly shorter than the average time that it is set to combinations $A_1$,$B_2$ or $A_2$,$B_1$ (the total number of events per detector combination over the course of the experiment can, however, be identical). From a conceptual point of view one can argue, however, that the settings of the detectors are at the experimenter's discretion. Neither the eventrons nor the detectors are aware of the experimenter's choices, and for the apparatus to violate the CHSH inequality it is furthermore not necessary that the experimenter knows the outputs of the detectors or the values of the variables of the source. Also, successive detector combinations are randomly chosen and do not depend on any of the previous combinations. Therefore, even though the timing of the detector-setting switches can be considered responsible for the additional correlations, it does not break relativistic locality and is fully classical. Indeed, this is not non-local in the sense of Einstein's principle of separability either, as described by Aspect \cite{Aspect_PRD-14} in the following manner: ``the setting of a measuring device at a certain time (event A) does not influence the result obtained with the other measuring device (event B) if the event B is not in the forward light cone of event A (nor does it influence the way in which particles are emitted by a source if the emission event is not in the forward light cone of event A)''. After all, in the described apparatus there is no communication from detector A to B and, if so desired, the detector switching schedule could be generated before the experiment is carried out and dispatched to the detectors in such a way that the source is ignorant of the schedule and that A and B only know their own schedules but not each other's.
Hence, it can be ascertained that information remains within the light cones at all times.
In this context it is important to note that Jarrett has shown that the factorization, which is an essential condition in proofs of Bell-type theorems, can be considered as the conjunction of (relativistic) locality and (statistical) completeness \cite{Jarrett_Nous-18,Ballentine_AmJPhys-55} (see also \cite{Shimony_Stanford}). Consequently, as relativity has extensive experimental support, violation of the Bell theorem implies incompleteness, and quantum mechanics is indeed incomplete in this sense (as Jarrett points out, this terminology should not mistakenly be taken to imply defectiveness). The crux of the matter, however, is the assertion that deterministic theories always satisfy the completeness condition and that consequently deterministic classical systems are governed by Bell-type theorems \cite{Jarrett_Nous-18,Shimony_Stanford}. As the proposed apparatus shows, this assertion is not necessarily valid for {\it all} classical systems if time and (local) memory are incorporated.
This can also be seen as follows. If the detectors A and B have $\lambda$-dependent memories $\kappa_A (\lambda)$ and $\kappa_B(\lambda)$ respectively, the probability of obtaining outcomes $x_A$ and $x_B$ can be expressed as:
\begin{eqnarray}
P ( x_A ,x_B | A,B,\lambda)
= \frac{1}{N_{\kappa_A}N_{\kappa_B}} \sum_{\kappa_A''(\lambda)\kappa_B''(\lambda)} P_S \quad \quad \quad \\
P_S = P(x_A,x_B|A(\kappa_A''(\lambda)) B(\kappa_B''(\lambda)),\lambda)
= \frac{N_{\kappa_A''(\lambda)}N_{\kappa_B''(\lambda)}}{N_{\kappa_A}N_{\kappa_B}} \nonumber
\end{eqnarray}
where the double prime in $\kappa''$ indicates that the sum is to be taken only over those values that yield the outcomes $x_A$ and $x_B$, and $N_v$ is the total number of different values a variable $v$ can attain. As the apparatus is deterministic and relativistically local, the probability $Q_A$ ($Q_B$) of obtaining $x_A$ ($x_B$) on the left (right) hand side while ignoring the right (left) hand side is:
\begin{eqnarray}
Q_A & = & P(x_A | A, \lambda) =
\sum_{\kappa_A'(\lambda)} \sum_{x_B} P(x_A,x_B | A(\kappa_A'(\lambda)),B,\lambda) \nonumber \\
& = & \frac{N_{\kappa_A'(\lambda)}}{N_{\kappa_A}} \\
Q_B & = & P(x_ B| B, \lambda) =
\sum_{\kappa_B'(\lambda)} \sum_{x_A} P(x_A,x_B | A, B(\kappa_B'(\lambda)),\lambda) \nonumber \\
& =& \frac{N_{\kappa_B'(\lambda)}}{N_{\kappa_B}}
\end{eqnarray}
where the single prime in $\kappa'$ indicates that the sum is taken only over those values that yield the outcome $x_A$. It should be noted that the set of values of $\lambda$ for which $\kappa'(\lambda)$ yields $x_A$ will generally be different from the set of values of $\lambda$ for which both $\kappa_A''(\lambda)$ and $\kappa_B''(\lambda)$ yield outcomes $x_A$ and $x_B$ respectively.
Therefore, we obtain
\begin{eqnarray}
P ( x_A ,x_B | A,B,\lambda) & = & \frac{N_{\kappa_A''(\lambda)}N_{\kappa_B''(\lambda)}}{N_{\kappa_A}N_{\kappa_B}} \\
& \neq & \frac{N_{\kappa_A'(\lambda)}}{N_{\kappa_A}} \frac{N_{\kappa_B'(\lambda)}}{N_{\kappa_B}} = Q_A Q_B
\nonumber
\end{eqnarray}
showing that factorization is not necessarily possible and thus that strong locality does not hold.
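A toy numerical illustration of this non-factorization (my own construction, echoing the aligned-memory mechanism rather than reproducing the full apparatus):

```python
import random

rng = random.Random(0)
t = [rng.randint(0, 1) for _ in range(10_000)]

# Both detectors read the shared lookup table at a common, aligned index,
# as enforced by memories with m_A = m_B = c.
pairs = [(t[c], t[c]) for c in range(len(t))]

n = len(pairs)
p11 = sum(1 for xa, xb in pairs if xa == 1 and xb == 1) / n
pa1 = sum(1 for xa, _ in pairs if xa == 1) / n
pb1 = sum(1 for _, xb in pairs if xb == 1) / n

# The joint probability of (1, 1) is about 1/2, while the product of the
# marginals is about 1/4: the joint distribution does not factorize.
print(p11, pa1 * pb1)
```

Here the shared memory plays the role of the $\kappa$'s above: each marginal looks perfectly random, yet the joint distribution cannot be written as the product of the marginals.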
Hence, I believe that the Bell theorem only applies to a subset of all possible classical systems and that completeness is an additional condition that needs to be justified on grounds other than local realism. This of course does not imply that the completeness condition is unreasonable per se or that it is not an accurate reflection of nature. The only implication is that it needs to be motivated independently of local realism and that, hence, local realism by itself does not preclude quantum correlations.
\section{Discussion \label{sec:discussion}}
One may nevertheless wonder whether, in an indirect way, the proposed experimental apparatus doesn't simply set the detectors based on knowledge about the source's output. One could, e.g., imagine a source periodically emitting instructions (1,0,0,1; 0,1,0,1; 0,0,1,1; 1,0,1,0) corresponding to detector settings $A_1,A_2,B_1,B_2$ respectively. If the experimenters then looped through the setting pairs $A_1,B_1 \rightarrow A_1,B_2 \rightarrow A_2,B_1 \rightarrow A_2,B_2$, they would obtain 3 for the (anti-)correlations. However, if the experimenter were one step out of sync, the (anti-)correlation would be 1. Similarly, for two and three steps out of sync, they would obtain 2. Consequently, for random starting points, inevitable when requiring ignorance of the source's variables, the average over many runs would reduce to the maximal classical (anti-)correlation of 2. Here, this is not the case: even when executing the experiment many times with different random starting points for each separate part of the experiment (detectors, source, experimenter's switching decisions), the violation occurs, as is shown in Fig.~\ref{histogram}. Furthermore, in order to obtain the violation, it is not necessary to carefully tune the values of the probabilities $p_s$ and $p_d$. The only requirement is that $p_d$ is sufficiently smaller than $p_s$, as is shown in Fig.~\ref{settings}.
\begin{figure}\label{settings}
\end{figure}
The main objective of this report was a proof of concept, namely that contrary to common belief, the CHSH and Bell inequalities do not exclude the possibility of a realistic theory. The proposed apparatus shows that the statistical independence necessary for the factorization condition in the Bell theorem does not directly follow from either (relativistic) locality or determinism and that it hence needs to be justified independently of these concepts.
It will be interesting to see whether the apparatus can be modified to yield correlations mimicking those found in photon coincidence experiments. The results should provide impetus to efforts to describe nature as a cellular automaton \cite{tHooft_SPIN-2002, Wolfram_ANKOS} or a complex system, and may have an important bearing on the area of secure communication.
\begin{acknowledgments}
I would like to thank Dagomir Kaszlikowski for his extraordinary efforts to explain the intricacies of the Bell theorem to me, Andreas Keil for numerical verification of the results and careful reading of the manuscript, and Michael Revzen, Marek Zukowski, Markus Aspelmeyer and Thomas Osipowicz for fruitful and illuminating discussions.
\end{acknowledgments}
\end{document} |
\begin{document}
\title{Bounding the number of odd paths in planar graphs via convex optimization}
\begin{abstract}
Let $N_{\mathcal{P}}(n,H)$ denote the maximum number of copies of $H$ in an $n$ vertex planar graph.
The problem of bounding this function for various graphs $H$ has been extensively studied since the 70's.
A special case that received a lot of attention recently is when $H$ is the path on $2m+1$ vertices, denoted $P_{2m+1}$.
Our main result in this paper is that
$$
N_{\mathcal{P}}(n,P_{2m+1})=O(m^{-m}n^{m+1})\;.
$$
This improves upon the previously best known bound by a factor $e^{m}$,
which is best possible up to the hidden constant, and makes a significant step towards resolving conjectures
of Ghosh et al.\ and of Cox and Martin. The proof uses graph-theoretic arguments together with (simple) arguments
from the theory of convex optimization.
\end{abstract}
\section{Introduction}
In this paper we study the following extremal problem: given a fixed graph $H$, what is the maximum number of copies of $H$
that can be found in an $n$ vertex planar graph? We denote this maximum by $N_{\mathcal{P}}(n,H)$.
The investigation of this problem was initiated by Hakimi and Schmeichel \cite{HakSch1979} in the 70's.
They considered the case when $H$ is a cycle of length $m$, denoted $C_m$.
They determined $N_{\mathcal{P}}(n,C_3)$ and $N_{\mathcal{P}}(n,C_4)$ exactly, and for general $m\geq 3$ proved that $N_{\mathcal{P}}(n,C_{m})=\Theta(n^{\lfloor{m}/{2}\rfloor})$.
Following this result, Alon and Caro \cite{AloCar1984} determined $N_{\mathcal{P}}(n,K_{2,m})$ exactly for all $m$, where $K_{2,m}$ is the complete $2$-by-$m$ bipartite graph. In a series of works \cite{Epp1993,GyoPauSalTomZam2021,HuyWoo2022,Wor1986}, which culminated with a recent paper of Huynh, Joret and Wood \cite{HuyJorWoo2020}, the asymptotic value of $N_{\mathcal{P}}(n,H)$ was determined up to a constant factor\footnote{This line of research was also generalized to other families of sparse host graphs, e.g.\ graphs that are embeddable in a surface of genus $g$, $d$-degenerate graphs, and more. In fact, the main result of \cite{HuyJorWoo2020} also determines (up to constant factors) the maximum number of copies of a given graph in an $n$ vertex graph which is embeddable in a surface of genus $g$. A recent far reaching generalization of \cite{HuyJorWoo2020} can be found in Liu \cite{Liu2021} where the order of magnitude of the maximum number of copies of a given graph in a `nowhere dense' graph was computed up to constant factors.} (depending on $H$) for every fixed $H$.
The next natural question following the result of \cite{HuyJorWoo2020} is to determine the asymptotic growth of $N_{\mathcal{P}}(n,H)$ up to\footnote{We use the standard notation $o(1)$ to denote a quantity tending to $0$ when $n$ tends to infinity and $H$ is fixed. Similarly, when we write $o(n^k)$ we mean $o(1)\cdot n^k$. } $1+o(1)$, or more ambitiously, to determine its exact value. This line of research was initiated by Gy\H{o}ri, Paulos, Salia, Tompkins and Zamora \cite{GyoPauSalTomZam2019,GyoPauSalTomZam2021}, who showed that for large enough $n$ we have $N_{\mathcal{P}}(n,P_4)=7n^{2}-32n+27$ and $N_{\mathcal{P}}(n,C_5)=2n^2-10n+12$, where $P_m$ denotes the path with $m$ vertices (and $m-1$ edges).
We note that the result of Alon and Caro \cite{AloCar1984} implies that $N_{\mathcal{P}}(n,K_{1,2})=N_{\mathcal{P}}(n,P_3)= n^2+3n-16$.
Addressing the problem of finding the asymptotic value of $N_{\mathcal{P}}(n,P_{m})$ up to $1+o(1)$, Ghosh, Gy\H{o}ri, Martin, Paulos, Salia, Xiao and Zamora \cite{GhoGyoMarPauSalXiaZam2021}, showed that $N_{\mathcal{P}}(n,P_{5})=(1+o(1))n^3$. They also raised the following conjecture\footnote{They also conjectured that the second order term is $O(n^{m})$.}
regarding the asymptotic value of $N_{\mathcal{P}}(n,P_{2m+1})$ for arbitrary $m \geq 2$:
\begin{equation}\label{eq-Conjecture}
N_{\mathcal{P}}(n,P_{2m+1})=(4m^{-m}+o(1))n^{m+1}\;.
\end{equation}
We note that the lower bound in \eqref{eq-Conjecture} is easy. Indeed, start with a cycle of length $2m$, and then replace every second vertex with an independent set consisting of $(n-m)/m$ vertices, each with the same neighborhood as the original vertex it replaced.
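As a quick sanity check on this construction (a brute-force sketch of my own; the helper `count_p5` is illustrative, not from the paper): for $m=2$ the blow-up of $C_4$ is the planar complete bipartite graph $K_{2,n-2}$, and its number of copies of $P_5$ is exactly $(n-2)(n-3)(n-4) = (1-o(1))\,4m^{-m}n^{m+1}$.

```python
from itertools import permutations

def count_p5(n):
    # Blow-up of C_4 for m = 2: vertices 0 and 1 are the two remaining
    # cycle vertices ("hubs"), joined to all n - 2 blown-up vertices;
    # the result is the planar complete bipartite graph K_{2, n-2}.
    def adj(u, v):
        return (u < 2) != (v < 2)
    # Count ordered 5-vertex paths, then halve (each path is read two ways).
    ordered = sum(
        all(adj(p[i], p[i + 1]) for i in range(4))
        for p in permutations(range(n), 5)
    )
    return ordered // 2

n = 14
assert count_p5(n) == (n - 2) * (n - 3) * (n - 4)
```

Every copy of $P_5$ here must alternate between the two sides and use both hubs, which is where the count $(n-2)(n-3)(n-4)$ comes from.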
In a very recent paper, Cox and Martin \cite{CoxMar2021_1} introduced an analytic approach for proving \eqref{eq-Conjecture}.
They showed that
\begin{equation}\label{eqcm}
N_{\mathcal{P}}(n,P_{2m+1})\leq (\rho(m)/2+o(1))n^{m+1}\;,
\end{equation}
where $\rho(m)$ is the solution to a certain convex optimization problem, which we define precisely in Section \ref{section3}.
They further conjectured that
\begin{equation}\label{CoxMartinConj}
\rho(m)\leq 8m^{-m}\;,
\end{equation}
which, if true, implies \eqref{eq-Conjecture}. In the same paper, they verified their conjecture for $m=3$ by showing that $\rho(3)=8/27$, which confirms \eqref{eq-Conjecture} for $m=3$.
Using the same approach they also improved the known asymptotic value of $N_{\mathcal{P}}(n,P_{2m+1})$ by showing that
\[
N_{\mathcal{P}}(n,P_{2m+1})\leq \left(\frac{1}{2\cdot(m-1)!}+o(1)\right)n^{m+1}\;.
\]
Note that this bound is roughly $e^{m}$ larger than the one conjectured in (\ref{eq-Conjecture}).
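This factor can be made precise via Stirling's formula: the ratio between the bound above and the value conjectured in \eqref{eq-Conjecture} is

```latex
\[
\frac{1/(2(m-1)!)}{4m^{-m}} \;=\; \frac{m \cdot m^{m}}{8\, m!}
\;\sim\; \frac{\sqrt{m}\; e^{m}}{8\sqrt{2\pi}}\;,
\]
```

which is $e^{m}$ up to a factor polynomial in $m$.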
Our main result in this paper, Theorem \ref{thm:number of path in planar graphs} below, makes a significant step towards the resolution of the Cox--Martin and Ghosh et al.\ conjectures, by establishing \eqref{CoxMartinConj} up to an absolute constant.
\begin{theorem}\label{thm:number of path in planar graphs}
There is an absolute constant $C$ so that for every fixed $m\geq 2$ and large enough $n$, we have
\[
N_{\mathcal{P}}(n,P_{2m+1})\leq Cm^{-m}n^{m+1}\;.
\]
\end{theorem}
As noted after \eqref{eq-Conjecture}, the above bound is best possible up to the value of $C$. Furthermore, as can be seen in the proof of Theorem \ref{thm:number of path in planar graphs}, the constant $C$ we obtain is $10^4$ (which can certainly be improved).
\subsection{Related work and paper overview}
In addition to studying $N_{\mathcal{P}}(n,P_{2m+1})$, Cox and Martin \cite{CoxMar2021_1} also introduced an analytic method for bounding the maximum number of even cycles in planar graphs.
Similar to the case of odd paths discussed above, they showed that $N_{\mathcal{P}}(n,C_{2m})\leq (\beta(C_m)+o(1)) n^{m}$, where $\beta(C_m)$ is an optimization problem, similar to the one we study in Section \ref{section2}.
They conjectured that $\beta(C_m)=m^{-m}$, a bound which implies by their framework that $N_{\mathcal{P}}(n,C_{2m})\leq (1+o(1)) (n/m)^{m}$.
Observe that the example we mentioned after \eqref{eq-Conjecture} shows that this bound is best possible.
Towards their conjecture, Cox and Martin \cite{CoxMar2021_1} proved that $\beta(C_m)\leq 1/m!$.
Using the ideas in this paper, one can significantly improve this bound. In particular, using Lemma \ref{lem:number of P_m in an n vertex graph} in Section \ref{section2}, it is not hard to show that for some absolute constant $C$, we have
\begin{equation}\label{eqcycles}
\beta(C_m)\leq Cm^{-m}\;.
\end{equation}
In an independent work, Lv, Gy\H{o}ri, He, Salia, Tompkins and Zhu \cite{LvGyoHeSalTomZhu2022} confirmed the conjecture of Cox and Martin by showing that one can in fact obtain $C=1$ in (\ref{eqcycles}). We thus do not include the proof of (\ref{eqcycles}).
We should point out that the reason why studying $N_{\mathcal{P}}(n,P_{2m+1})$ appears to be much harder than $N_{\mathcal{P}}(n,C_{2m})$ is that
as opposed to $\beta(C_m)$, which is an optimization problem involving a single graph, $\rho(m)$ is an optimization problem which involves several multigraphs. To overcome this difficulty
we first study in Section \ref{section2} an optimization problem, denoted $\beta(P_m)$,
which is the analogue of $\beta(C_m)$ for the setting of $P_m$. The main advantage of first studying $\beta(P_m)$ is that it allows us
to employ a weight shifting argument, which does not seem to be applicable to $\rho(m)$.
Our main result in that section is a nearly tight bound for $\beta(P_{m})$.
However, as opposed to the case of $N_{\mathcal{P}}(n,C_{2m})$, a bound for $\beta(P_{m})$ does not immediately translate into a bound for $N_{\mathcal{P}}(n,P_{2m+1})$. Hence, in Section \ref{section3} we present the main novel part of this paper, showing how one can transfer any bound for $\beta(P_{m})$ into a bound for $\rho(m)$, thus proving Theorem \ref{thm:number of path in planar graphs}. To this end we use simple arguments from the theory of convex optimization, which allow us to exploit the fact that $\rho(m)$ is a low degree polynomial.
The key lemmas leading to the proof of Theorem \ref{thm:number of path in planar graphs} are Lemmas \ref{thm:main theorem} and \ref{prop:rho to beta} for which we obtain bounds that are optimal up to constant factors. Moreover, if one can improve these bounds to the optimal conjectured ones, then this will give the conjectured inequality (\ref{eq-Conjecture}). We believe that with more care, it is possible to use the ideas in this paper to improve the bound for $\beta(P_m)$ in Lemma \ref{thm:main theorem} to the conjectured one.
In contrast, because of the complex structure of $\rho(m)$, it seems that in order to improve the bound in Lemma \ref{prop:rho to beta} to the conjectured bound, a new idea is needed.
\section{A variant of $\rho(m)$}
\label{section2}
Our goal in this section is to prove Lemma \ref{thm:main theorem} regarding the optimization problem $\beta(P_m)$.
This lemma will be used in the next section in the proof of Theorem \ref{thm:number of path in planar graphs}.
The proof of Lemma \ref{thm:main theorem} will employ a subtle weight shifting argument.
We first recall several definitions from \cite{CoxMar2021_1}. In what follows we write $[n]$ to denote the set $\{1,\ldots,n\}$ and $K_n$ to denote the complete graph on $[n]$.
\begin{definition}
Let $n>0$ be an integer, and let $\mu$ be a probability measure on the edges of $K_{n}$.
\begin{enumerate}
\item For any $x\in [n]$ we define the weighted degree of $x$ to be
\[
\bar{\mu}(x)=\sum_{y\in [n]\setminus\{x\}}\mu(x,y)\;.
\]
\item For any subgraph $H\subseteq K_n$ we define the weight of $H$ to be
\[
\mu(H)=\prod_{e\in E(H)} \mu(e)\;.
\]
\item For any graph $H$ with no isolated vertices, define
\[
\beta(\mu;H)=\sum_{H'\in \mathbf{C}(H,n)}\mu(H')\;,
\]
where $\mathbf{C}(H,n)$ is the set of all (non-induced and unlabeled) copies of $H$ in $K_n$.
Further, we define
\[
\beta(H)=\sup_{\mu} \beta(\mu;H)\;,
\]
where the supremum is taken over all $n'$ and all probability measures $\mu$ on the edges of $K_{n'}$.
\end{enumerate}
\end{definition}
Intuitively, the function $\beta(\mu;H)$ is the probability of hitting a (non-induced and unlabeled) copy of $H$ if $|E(H)|$ independent edges were chosen according to $\mu$.
\begin{lemma}\label{thm:main theorem}
For any integer $m\geq 2$ we have
\begin{equation*}
\beta(P_{m})\leq \frac{2e^2}{m^{m-2}}\;.
\end{equation*}
\end{lemma}
We remark that this lemma is optimal up to the constant factor $2e^2$. To see this, consider the uniform distribution over the edges of $C_{m}$, which shows that $\beta(P_{m}) \geq 1/m^{m-2}$. It seems reasonable to conjecture that $\beta(P_{m}) = 1/m^{m-2}$.
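The lower bound above can be checked by brute force. The following sketch (our own illustration, not part of the paper; it simply enumerates labeled paths in $K_n$ for tiny $n$) evaluates $\beta(\mu;P_m)$ and confirms that the uniform measure on the edges of $C_m$ attains exactly $1/m^{m-2}$:

```python
from itertools import permutations

def beta_value(mu, m, n):
    """Sum of mu(H') over all unlabeled copies H' of the path P_m in K_n.
    mu maps frozenset edges of K_n (vertices 0..n-1) to weights."""
    total = 0.0
    for seq in permutations(range(n), m):
        if seq[0] > seq[-1]:
            continue  # count each unlabeled path once (its endpoints are distinct)
        w = 1.0
        for a, b in zip(seq, seq[1:]):
            w *= mu.get(frozenset((a, b)), 0.0)
        total += w
    return total

m = 5
# uniform probability measure on the edges of the cycle C_m, viewed inside K_m
mu = {frozenset((i, (i + 1) % m)): 1.0 / m for i in range(m)}
print(beta_value(mu, m, m))  # 1/m^(m-2) = 1/125 = 0.008
```

There are $m$ unlabeled copies of $P_m$ in $C_m$, each of weight $(1/m)^{m-1}$, giving $m\cdot m^{-(m-1)}=m^{-(m-2)}$.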
The key step in the proof of Lemma \ref{thm:main theorem} is Lemma \ref{lem:number of P_m in an n vertex graph} below. To state this lemma, we first need the following definitions.
\begin{definition}
For every $k,\ell \geq 0$ we define $P_{(k,\ell)}$ to be a disjoint union of $P_{k+1}$ and $P_{\ell+1}$.
\end{definition}
From now on, we will not only deal with probability measures but also with bounded measures. Therefore, we will frequently write \emph{measure} to denote a bounded measure. Moreover, for a measure $\mu$ we will denote its total mass by $w(\mu)$.
\begin{definition}\label{defbetastar}
Suppose $\mu$ is a measure on the edges of $K_n$ and $s,t \geq 0$. Define
\[
\beta^*(\mu;P_{(s,t)}) = \sum _{P\in\mathbf{C}^*(P_{(s,t)},n)}\mu(P)\;,
\]
where $\mathbf{C}^*(P_{(s,t)},n)$ is the set of copies of $P_{(s,t)}$ in $K_{n}$ where the path of length $s$ starts with the vertex $n$, and the path of length $t$ starts with the vertex $1$. Further, for every $w>0$ we define
\[
\beta_{w,n}^*(P_{(s,t)})=\sup_{\mu} \beta^*(\mu;P_{(s,t)})\;,
\]
where the supremum is taken over all measures $\mu$ on the edges of $K_{n}$ with $w(\mu)=w$.
\end{definition}
We remark that for any measure $\mu$ on the edges of $K_n$, we have $\beta^*(\mu;P_{(0,0)})=1$. This is because $\mathbf{C}^*(P_{(0,0)},n)$ consists of a single graph, the independent set $I_2=\{1,n\}$, and because $\mu(I_2)=1$. This clearly implies that $\beta_{w,n}^*(P_{(0,0)})=1$ for every $w$ and $n$.
\begin{lemma}\label{lem:number of P_m in an n vertex graph}
For every $0\leq \ell \leq m \leq n$ we have
\[
\beta_{1,n}^*(P_{(\ell,m-\ell)})\leq \frac{1}{m^{m}}\;.
\]
\end{lemma}
\begin{claim}\label{claim_double_path}
Suppose that $t$ is a non-negative integer, $s,n$ are positive integers, and $w\geq 0$. Then, there exists a measure $\mu$ on the edges of $K_n$ with $w(\mu)=w$, satisfying:
\begin{enumerate}
\item $\beta^*(\mu;P_{(s,t)})=\beta^*_{w,n}(P_{(s,t)})$, and
\item for all $q\neq n-1$ we have $\mu(q,n)=0$.
\end{enumerate}
\end{claim}
\begin{proof}
The main idea in the proof is the introduction of the notion of a $w$-useful measure.
We say that a measure $\mu$ on the edges of $K_n$ with $w(\mu)=w$ is \emph{$w$-optimal} if
\[
\beta^*(\mu;P_{(s,t)})=\beta_{w,n}^*(P_{(s,t)})\;.
\]
We further say that $\mu$ is \emph{$w$-useful} if $\mu$ is $w$-optimal and
\[
\max_{k\in[n-1]} \mu(n,k) = \sup_{\eta,k} \eta(n,k)\;,
\]
where the supremum is taken over all $k\in [n-1]$ and all measures $\eta$ which are $w$-optimal. Let us see why such a $w$-useful measure exists.
Note that there is a natural bijection between measures $\mu$ with $w(\mu)=w$, and vectors in the simplex $\Delta=\{x\in \mathbb{R}^{\binom{n}{2}}:x_i \geq 0 ~\mbox{and}~\sum_{i=1}^{\binom{n}{2}} x_i=w\}$. Thus, to show that a $w$-useful measure exists we think of $\mu$ as a vector in $\Delta$.
Recalling that
\[
\beta^*(\mu;P_{(s,t)}) = \sum _{P\in\mathbf{C}^*(P_{(s,t)},n)}\mu(P)= \sum_{P\in\mathbf{C}^*(P_{(s,t)},n)}\prod_{e\in E(P)} \mu(e)\;,
\]
we see that $\beta^*(\mu;P_{(s,t)})$ is an $\binom{n}{2}$-variate polynomial, with variables $\mu(e)$ for all $e\in E(K_{n})$. Under these notations, $w$-optimal measures are maximum points of the polynomial $\beta^*(\mu;P_{(s,t)})$ in $\Delta$. Since $\Delta$ is compact and $\beta^*(\mu;P_{(s,t)})$ is continuous, we deduce that $O_w$, the set of all $w$-optimal measures, is non-empty. Moreover, $O_w$ is a compact set, since it is closed (as the preimage of a closed set under the continuous function $\beta^*(\mu;P_{(s,t)})$) and bounded (as it is contained in $\Delta$).
Setting $f(\mu)=\max_{k\in[n-1]}\mu(n,k)$, we find that $\mu$ is a $w$-useful measure if and only if it is a maximum point of $f$ on $O_w$. Since $O_w$ is compact and $f$ is continuous, a $w$-useful measure exists.
We now prove that the existence of $w$-useful measures implies the claim. Indeed, let $\mu$ be a $w$-useful measure. Assume without loss of generality\footnote{If this is not the case, we can permute the vertices and end up with such a measure.} that $\mu(n-1,n)\geq 0$ is maximal among all $\mu(k,n)$. We claim that $\mu$ is as required. The first condition follows immediately from the fact that any $w$-useful measure is also $w$-optimal.
Assume towards contradiction that the second condition fails, that is, that there exists a $q\neq n-1$ with $\mu(q,n)>0$. We will now show that there is a measure $\mu'$ satisfying $w(\mu')=w$ which will either
contradict the fact that $\mu$ is $w$-optimal or the fact that it is $w$-useful.
We define $\mu'$ as follows:
We first set $\mu'(e)=\mu(e)$ for every edge other than the two edges $\{n-1,n\}$ and $\{q,n\}$.
Define $W_q$ to be the weight (under $\mu$) of all copies of $P_{(s-1,t)}$, not containing $n$, such that the path of length $s-1$ starts with $q$, and the path of length $t$ starts with $1$. Define $W_{n-1}$ analogously. Then, we define
\[
\mu'(n,n-1)=\begin{cases}
\mu(n-1,n)+\mu(q,n) & \text{if } W_{n-1}\geq W_q\;,\\
0 & \text{else}\;,
\end{cases}
\]
and
\[
\mu'(q,n)=\begin{cases}
0 & \text{if } W_{n-1}\geq W_q\;,\\
\mu(q,n)+\mu(n-1,n) & \text{else\;.}
\end{cases}
\]
To see that we indeed get a contradiction, assume first that $W_{n-1}\geq W_q$.
Since a copy of $P_{(s,t)}$ in $C^{*}(P_{(s,t)},n)$ uses at most one of the edges $\{n-1,n\}$ and $\{q,n\}$, decreasing the value of
$\{q,n\}$ by some $\varepsilon$ while increasing that of $\{n-1,n\}$ by the same $\varepsilon$ increases the
total weight of copies of $P_{(s,t)}$ by $\varepsilon(W_{n-1}-W_q)$. We thus infer that
\begin{align*}
\beta^*(\mu';P_{(s,t)}) = \beta^*(\mu;P_{(s,t)})+\mu(q,n)(W_{n-1}-W_q) \geq \beta^*(\mu;P_{(s,t)})\;.
\end{align*}
Since $\mu'(n,n-1)>\mu(n,n-1)$ we see that $\mu'$ witnesses the fact that $\mu$ is not $w$-useful.
If on the other hand $W_{q}>W_{n-1}$, then
\begin{align*}
\beta^*(\mu';P_{(s,t)}) = \beta^*(\mu;P_{(s,t)})+\mu(n-1,n)(W_{q}-W_{n-1})> \beta^*(\mu;P_{(s,t)})=\beta_{w,n}^*(P_{(s,t)})\;,
\end{align*}
so $\mu'$ witnesses the fact that $\mu$ is not $w$-optimal.
\end{proof}
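The exchange step in the proof is linear: moving mass $\varepsilon$ from the edge $\{q,n\}$ to $\{n-1,n\}$ changes $\beta^*$ by exactly $\varepsilon(W_{n-1}-W_q)$. The following numerical sketch (our own illustration, for the special case $s=t=1$ only, with $\beta^*$ computed by brute force) checks this on a random measure:

```python
import random

def beta_star_11(mu, n):
    """beta*(mu; P_{(1,1)}): sum of mu({n,a}) * mu({1,b}) over
    vertex-disjoint edge pairs {n,a}, {1,b} of K_n (vertices 1..n)."""
    total = 0.0
    for a in range(1, n):          # path of length 1 starting at vertex n
        for b in range(2, n + 1):  # path of length 1 starting at vertex 1
            if len({n, a, 1, b}) == 4:  # the two edges must be disjoint
                total += mu.get(frozenset((n, a)), 0.0) * mu.get(frozenset((1, b)), 0.0)
    return total

random.seed(0)
n, q, eps = 6, 2, 0.01
mu = {frozenset((i, j)): random.random()
      for i in range(1, n + 1) for j in range(i + 1, n + 1)}
# W_k: total weight of the relevant paths avoiding vertex n, as in the proof
W = {k: sum(mu[frozenset((1, b))] for b in range(2, n) if b != k)
     for k in (n - 1, q)}
before = beta_star_11(mu, n)
mu[frozenset((q, n))] -= eps      # shift eps of mass from {q, n} ...
mu[frozenset((n - 1, n))] += eps  # ... to {n-1, n}
after = beta_star_11(mu, n)
print(abs((after - before) - eps * (W[n - 1] - W[q])) < 1e-9)  # True
```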
\begin{claim}\label{claim_induction}
Suppose $s,t$ are non-negative integers, $n$ is a positive integer, and $w\geq 0$. Then, there are $w_1,\ldots ,w_s\geq 0$ such that $\sum_{i=1}^{s}w_i\leq w$ and such that
\[
\beta_{w,n}^*(P_{(s,t)})\leq \beta_{w',n-s}^*(P_{(0,t)}) \cdot \prod_{i=1}^{s}w_i \;,
\]
where $w'=w-\sum_{i=1}^{s}w_i$.
\end{claim}
\begin{proof}
First, if $s+t+1\geq n$ then the claim is trivial, as $\mathbf{C}^*(P_{(s,t)},n)=\emptyset$. So we assume for the rest of the proof that $s+t+2\leq n$.
Let $\mu_0$ be a measure on the edges of $K_n$ as guaranteed by Claim \ref{claim_double_path}. Since $\beta_{w,n}^*(P_{(s,t)})=\beta^*(\mu_0;P_{(s,t)})$, it is enough to prove that there are $w_1,\ldots,w_s\geq 0$ such that $\sum_{i=1}^{s}w_i\leq w$ and
\begin{equation}\label{eqbeta1}
\beta^*(\mu_0;P_{(s,t)})\leq \beta_{w',n-s}^*(P_{0,t})\cdot \prod_{i=1}^{s}w_i\;,
\end{equation}
where $w'=w-\sum_{i=1}^{s}w_i$. We define inductively a sequence of reals $w_1,\ldots,w_s\geq 0$ with $\sum_{i=1}^{s}w_i\leq w$, along with measures $\mu_1,\ldots ,\mu_s$ on the edges of $K_{n-1},\ldots ,K_{n-s}$, respectively, such that the following holds for all $1 \leq j \leq s$, where we set $w'_{j}=w-\sum_{i=1}^{j}w_i$:
\begin{enumerate}[label=(\roman*)]
\item\label{1} $w(\mu_j)=w'_{j}$,
\item\label{2} $w_{j}=\mu_{j-1}(n-j+1,n-j)$,
\item\label{3} for all $q\in[n-j-2]$ we have $\mu_j(n-j,q)=0$,
\item\label{4} $\beta^*(\mu_j;P_{(s-j,t)})=\beta_{w'_{j},n-j}^*(P_{(s-j,t)})$, and
\item\label{5} $\beta^*(\mu_{j-1};P_{(s-j+1,t)})\leq w_j \cdot \beta^*(\mu_{j};P_{(s-j,t)})$.
\end{enumerate}
Indeed, assuming $w_1,\ldots, w_j$ and $\mu_1,\ldots,\mu_j$ have already been chosen, we now choose $w_{j+1}$ and $\mu_{j+1}$.
We first set $w_{j+1}=\mu_j(n-j,n-j-1)\geq 0$ so that the second condition holds.
Further, set $\mu'_{j+1}=\mu_{j}|_{n-j-1}$, the restriction of $\mu_{j}$ to the edges of $K_{n-j-1}$.
Observe that by the induction hypothesis on $\mu_{j}$, we have $\mu_{j}(n-j,q)=0$ for all $q\neq n-j-1$.
Hence
\[
w(\mu'_{j+1})=w(\mu_{j})-\sum_k\mu_{j}(n-j,k)=w(\mu_{j})-w_{j+1}=w'_{j+1}\;,
\]
and
\begin{align}\label{eq1-lemma}
\beta^*(\mu_j;P_{(s-j,t)})= w_{j+1}\cdot \beta^*(\mu'_{j+1};P_{(s-j-1,t)})\leq w_{j+1}\cdot \beta_{w'_{j+1},n-j-1}^*(P_{(s-j-1,t)})\;.
\end{align}
Let $\mu_{j+1}$ be the measure given by Claim \ref{claim_double_path} applied with $P_{(s-j-1,t)}$ and total mass $w'_{j+1}$.
We claim that $\mu_{j+1}$ satisfies the inductive properties. The fact that it satisfies the first condition is immediate from its definition.
To see that $\mu_{j+1}$ satisfies the last three conditions, note that by Claim \ref{claim_double_path} the measure $\mu_{j+1}$ satisfies
\begin{equation}\label{eq2-lemma}
\beta^*(\mu_{j+1};P_{(s-j-1,t)})=\beta_{w'_{j+1},n-j-1}^*(P_{(s-j-1,t)})\;,
\end{equation}
and $\mu_{j+1}(n-j-1,q)=0$ for all $q\neq n-j-2$. Finally, combining \eqref{eq1-lemma} and \eqref{eq2-lemma} we obtain
\[
\beta^*(\mu_j;P_{(s-j,t)})\leq w_{j+1}\cdot\beta^*(\mu_{j+1};P_{(s-j-1,t)})\;,
\]
thus verifying the last three properties.
Repeatedly applying property \ref{5} we deduce that
$$
\beta^*(\mu_0;P_{(s,t)}) \leq \beta^*(\mu_s;P_{(0,t)})\cdot \prod_{i=1}^{s}w_i\;.
$$
Since $\beta^*(\mu_s;P_{(0,t)})= \beta^*_{w'_s,n-s}(P_{(0,t)})=\beta^*_{w',n-s}(P_{(0,t)})$ (by property \ref{4} and the definition of $w'$) we have thus proved (\ref{eqbeta1}) and the proof is complete.
\end{proof}
We now use Claim \ref{claim_induction} to prove Lemma \ref{lem:number of P_m in an n vertex graph}.
\begin{proof}[Proof of Lemma \ref{lem:number of P_m in an n vertex graph}]
Claim \ref{claim_induction} applied with $s=\ell,t=m-\ell$ and with $w=1$ asserts that there are $w_1,\ldots ,w_\ell\geq 0$ such that $\sum_{i=1}^{\ell}w_i\leq 1$ and such that
\begin{equation}\label{eq1-main-lemma}
\beta_{1,n}^*(P_{(\ell,m-\ell)})\leq \beta_{w',n-\ell}^*(P_{(0,m-\ell)}) \cdot \prod_{i=1}^{\ell}w_i \;,
\end{equation}
where $w'=1-\sum_{i=1}^{\ell}w_i$.
Clearly, for all integers $s,t,k$ and $w\geq 0$ we have $\beta_{w,k}^*(P_{(s,t)})=\beta_{w,k}^*(P_{(t,s)})$.
Hence, using Claim \ref{claim_induction} with $s=m-\ell,t=0$ and with $w=w'$, we obtain a sequence $w_{\ell+1},\ldots,w_{m}$ of non-negative reals, such that $\sum_{i=\ell+1}^m w_i\leq w'$ and such that
\begin{equation}\label{eq2-main-lemma}
\beta_{w',n-\ell}^*(P_{(0,m-\ell)})=\beta_{w',n-\ell}^*(P_{(m-\ell,0)})\leq \beta_{w'',n-m}^*(P_{(0,0)})\cdot \prod_{i=\ell+1}^{m}w_i =\prod_{i=\ell+1}^m w_i\;,
\end{equation}
where $w''=w'-\sum_{i=\ell+1}^m w_i$, and we used the fact that $\beta_{w'',n-m}^*(P_{(0,0)})=1$ (see the remark after Definition \ref{defbetastar}).
Combining \eqref{eq1-main-lemma} and \eqref{eq2-main-lemma}, we infer that there are $w_1,\ldots,w_m\geq 0$ with $\sum_{i=1}^{m}w_i\leq 1$ such that
\[
\beta_{1,n}^*(P_{(\ell,m-\ell)})\leq \prod_{i=1}^{m}w_i\leq \left(\frac{\sum_{i=1}^{m}w_i}{m}\right)^m\leq \frac{1}{m^m}\;,
\]
where the second inequality is the AM-GM inequality, and the last inequality follows from the properties of the sequence $w_1,\ldots ,w_m$.
\end{proof}
To deduce Lemma \ref{thm:main theorem} from the above claims, we recall a definition and a lemma from Cox and Martin \cite{CoxMar2021_1} which we specialize here to the case of $P_m$.
\begin{definition}
For an integer $n$, we denote by $\Opt(n;P_m)$ the set of all probability measures $\mu$ on the edges of $K_{n}$ satisfying
\[
\beta(\mu; P_m) = \sup_{\eta} \beta(\eta;P_m)\;,
\]
where the supremum is taken over all probability measures $\eta$ on the edges of $K_{n}$.
\end{definition}
\begin{lemma}[Lemma 4.5 in \cite{CoxMar2021_1}]\label{lem:inequalities regarding the masses}
For every $n\geq m\geq 2$ and $\mu \in \Opt(n; P_m)$, we have the following for all $x \in [n]$:
\[
\bar{\mu}(x)\cdot (m-1)\cdot \beta(\mu;P_m)=\sum_{\substack{P\in \mathbf{C}(P_m,n)\\ V(P)\ni x}}\deg_{P}(x)\mu (P)\;.
\]
\end{lemma}
\begin{proof}[Proof of Lemma \ref{thm:main theorem}]
Suppose $n \geq m$ and take any $\mu\in \Opt(n;P_m)$. We will next show that $\beta(\mu;P_m)\leq \frac{2e^2}{m^{m-2}}$, thus completing the proof.
Let $x\in [n]$ be such that $\bar{\mu}(x)\neq 0$. By Lemma \ref{lem:inequalities regarding the masses} we have
\begin{align}\label{eq3}
\bar{\mu}(x)\cdot (m-1)\cdot \beta(\mu;P_m)\leq 2\sum_{\substack{P\in \mathbf{C}(P_m,n)\\ V(P)\ni x}}\mu (P)\;.
\end{align}
Given distinct $s,t\in[n]$ and $0 \leq \ell \leq m-2$ we define $\mathbf{C}^*(s,t,\ell)$ to be the set of all copies of $P_{(\ell,m-\ell-2)}$ in $K_n$, where the path of length $\ell$ starts with $s$ and the path of length $m-\ell-2$ starts with $t$.
We have
\begin{align}
\sum_{\substack{P\in \mathbf{C}(P_m,n)\\ V(P)\ni x}}\mu (P)&= \sum_{y\in [n]\setminus \{x\}}\mu(x,y)\sum_{\ell=0}^{m-2}\sum_{\substack{P \in \mathbf{C}^*(x,y,\ell)}}\mu (P)\nonumber\\
&\leq \sum_{y\in [n]\setminus \{x\}}\mu(x,y)\sum_{\ell=0}^{m-2}\beta^*_{1,n}(P_{(\ell,m-\ell-2)})\nonumber\\
&\leq \frac{m-1}{(m-2)^{m-2}}\sum_{y\in [n]\setminus \{x\}}\mu(x,y)=\frac{\bar{\mu}(x)(m-1)}{(m-2)^{m-2}}\label{eq4}\;,
\end{align}
where the second inequality holds by the definition\footnote{We rely on the fact that although $\beta^*_{w,n}(P_{(s,t)})$ was defined with respect to paths starting at vertices $1$ and $n$, we could have chosen any pair of vertices in $[n]$ (in the above proof we use $x,y$). } of $\beta^*_{1,n}(P_{(s,t)})$, and the third inequality holds by Lemma \ref{lem:number of P_m in an n vertex graph}.
Recalling that $\bar{\mu}(x)>0$ and combining \eqref{eq3} and \eqref{eq4} we infer that
\[
\beta(\mu;P_m)\leq \frac{2}{(m-2)^{m-2}}\leq \frac{2e^2}{m^{m-2}}\;.\qedhere
\]
\end{proof}
\section{Proving the main result}
\label{section3}
We start this section by stating the optimization problem of Cox and Martin \cite{CoxMar2021_1}.
\begin{definition}
Let $n$ be an integer and let $\mu$ be a probability measure on the edges of $K_{n}$. Then, for any integer $m\geq 2$, letting $(n)_m$ be the set of all ordered $m$-tuples of distinct elements from $[n]$, define
\[
\rho (\mu;m)=\sum_{x\in (n)_m}\bar{\mu}(x_1)\left(\prod_{i=1}^{m-1}\mu (x_i,x_{i+1})\right)\bar{\mu}(x_m)\;.
\]
Furthermore, define
\[
\rho_n(m)=\sup_{\mu}\rho(\mu;m)\quad \text{and}\quad \rho(m)=\sup_{n\in \mathbb{N}}\rho_n(m)\;,
\]
where the supremum in the definition of $\rho_n(m)$ is taken over all probability measures $\mu$ on the edges of $K_{n}$.
\end{definition}
Note that if we expand the products in the definition of $\rho(m)$ we see that
$\rho(m)$ is very similar to $\beta(P_{m+2})$. The crucial difference is that in $\rho(m)$ we count the total weight of walks of a very special structure. These walks are formed by first choosing distinct $x_2,\ldots ,x_{m+1}$ to be a copy of $P_{m}$, and then choosing \emph{arbitrary} $x_1\neq x_2$ and $x_{m+2}\neq x_{m+1}$ (so we allow $x_1=x_{m+2}$ and/or $x_1,x_{m+2}\in \{x_2,\ldots ,x_{m+1}\}$). For example, a walk of this type might be $(1,2,1,2)$ or $(1,2,1,3,1)$.
Our main task in this section is to prove the following lemma.
\begin{lemma}\label{prop:rho to beta}
For all integers $n\geq m\geq 2$ we have
\[
\rho_n(m)\leq \frac{1152}{m^2}\cdot \beta(P_{m})\;.
\]
\end{lemma}
The constant $1152$ in the above lemma is clearly not optimal. We did not make any attempt to improve it, as it seems that a new idea is required to obtain the optimal one.
A simple lower bound for $\rho_n(m)$ is $8/m^{m}$, which is achieved by the uniform distribution on the edges of $C_{m}$.
As we mentioned in the previous section, it seems reasonable to conjecture that $\beta(P_{m})=1/m^{m-2}$.
Therefore, a natural conjecture is that in Lemma \ref{prop:rho to beta} the optimal constant is $8$.
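The lower bound $\rho_n(m)\geq 8/m^m$ can be checked directly: under the uniform measure on the edges of $C_m$, each endpoint factor $\bar{\mu}(x)$ equals $2/m$, and there are $2m$ ordered paths on $m$ vertices, giving $2m\cdot(2/m)^2\cdot(1/m)^{m-1}=8/m^m$. A brute-force sketch (our own illustration, not from the paper):

```python
from itertools import permutations

def rho_mu(mu, m, n):
    """rho(mu; m) for a measure mu on the edges of K_n (vertices 0..n-1),
    following the definition: sum over ordered m-tuples of distinct vertices."""
    def wdeg(x):  # weighted degree bar-mu(x)
        return sum(mu.get(frozenset((x, y)), 0.0) for y in range(n) if y != x)
    total = 0.0
    for seq in permutations(range(n), m):
        w = wdeg(seq[0]) * wdeg(seq[-1])
        for a, b in zip(seq, seq[1:]):
            w *= mu.get(frozenset((a, b)), 0.0)
        total += w
    return total

m = 4
# uniform probability measure on the edges of the cycle C_m
mu = {frozenset((i, (i + 1) % m)): 1.0 / m for i in range(m)}
print(rho_mu(mu, m, m))  # 8/m^m = 8/256 = 0.03125
```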
Let us first deduce Theorem \ref{thm:number of path in planar graphs} from Lemmas \ref{thm:main theorem} and \ref{prop:rho to beta}.
\begin{proof}[Proof of Theorem \ref{thm:number of path in planar graphs}]
Lemma 2.3 in Cox and Martin \cite{CoxMar2021_1} asserts that for all $m\geq 2$ we have
\[
N_{\mathcal{P}}(n,P_{2m+1})\leq (\rho(m)/2+o(1)) n^{m+1}\;.
\]
Furthermore, since Lemma \ref{prop:rho to beta} holds for all $n$, we deduce that $\rho(m)\leq \frac{1152}{m^2}\cdot\beta(P_m)$.
Together with Lemma \ref{thm:main theorem}, this gives Theorem \ref{thm:number of path in planar graphs} as then
\begin{align*}
N_{\mathcal{P}}(n,P_{2m+1})&\leq (\rho(m)/2+o(1)) n^{m+1}\leq (576\beta(P_m)m^{-2}+o(1)) n^{m+1}\\
&\leq (10^4m^{-m}+o(1)) n^{m+1}\;.\qedhere
\end{align*}
\end{proof}
Before proving Lemma \ref{prop:rho to beta}, let us recall a special case of the Karush--Kuhn--Tucker (KKT) conditions (see Corollaries 9.6 and 9.10 in \cite{Gul2010}).
\begin{theorem}[Special case of the KKT conditions]\label{thm:KKT}
Let $f\colon \mathbb{R}^n\to \mathbb{R}$ be a continuously differentiable function, and consider the optimization problem
\begin{align*}
\max_{x\in \Delta} f(x)\;, \text{ where } \Delta=\left\{x:\sum_{i=1}^{n} x_i=1\text{ and } x_1,\ldots ,x_n\geq 0\right\}\;.
\end{align*}
If $\mathbf{x}^*$ achieves this maximum, then there is some $\lambda\in \mathbb {R}$ such that, for each $i\in [n]$, either
\[
\mathbf{x}^*_i=0,\quad\text{or}\quad \frac{\partial f}{\partial x_i}(\mathbf{x}^*)=\lambda\;.
\]
\end{theorem}
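As a toy illustration of the theorem's conclusion (our own example, not taken from \cite{Gul2010}): for $f(x)=x_1x_2x_3$ on the probability simplex, the maximizer is the uniform vector, where every coordinate is strictly positive, so the KKT conditions force all partial derivatives to share a common value $\lambda$:

```python
# Gradient of f(x) = x1*x2*x3; at the maximizer on the simplex every
# coordinate is positive, so all partials must equal a common lambda.
def grad_f(x):
    return [x[1] * x[2], x[0] * x[2], x[0] * x[1]]

x_star = [1 / 3, 1 / 3, 1 / 3]   # argmax of x1*x2*x3 on the simplex (by AM-GM)
g = grad_f(x_star)
lam = g[0]                        # common multiplier, here lambda = 1/9
print(all(abs(gi - lam) < 1e-12 for gi in g))  # True
```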
\begin{proof}[Proof of Lemma \ref{prop:rho to beta}]
Let $\mathbf{P^*}$ be the set of walks $(x_1,x_2,\ldots,x_{m+2})$ on $[n]$ constructed as follows: first, choose $(x_2,x_3,\ldots,x_{m+1})$ to be a path (i.e.\ a non-induced and \emph{labeled} copy of $P_{m}$), and then complete the walk by choosing an arbitrary $x_1\neq x_2$ and an arbitrary $x_{m+2}\neq x_{m+1}$.
Further, for any $i\neq j\in [n]$ we let $\mathbf{P^*}(\{i,j\})$ be the set of all walks $(x_1,x_2,\ldots,x_{m+2})\in \mathbf{P^*}$ such that there is $k$ with $\{x_{k},x_{k+1}\}=\{i,j\}$.
Define $f\colon \mathbb{R}^{\binom{[n]}{2}}\to \mathbb{R}$ by
\[
f(\mathbf{x})=\sum_{p\in (n)_m}\left(\sum_{p_0\in [n]\setminus\{p_1\}}\mathbf{x}_{p_0,p_1}\right)\left(\prod_{i=1}^{m-1}\mathbf{x}_{p_i,p_{i+1}}\right)\left(\sum_{p_{m+1}\in [n]\setminus \{p_m\}}\mathbf{x}_{p_m,p_{m+1}}\right)\;.
\]
Suppose $\mu$ is a probability measure on the edges of $K_n$ with $\rho(\mu;m)=\rho_n(m)$. When viewing $\mu$ as a vector in $\mathbb{R}^{\binom{[n]}{2}}$, we have $f(\mu)=\rho(\mu;m)$, and moreover,
\[
f(\mu)=\max_{\mathbf{x}\in \Delta} f(\mathbf{x})\;, \text{ where } \Delta=\left\{\mathbf{x}:\sum_{i=1}^{\binom{n}{2}} \mathbf{x}_i=1\text{ and } \mathbf{x}_1,\ldots ,\mathbf{x}_{\binom{n}{2}}\geq 0\right\}\;.
\]
By the maximality of $\mu$ and by Theorem \ref{thm:KKT} (the KKT conditions), there is a non-negative\footnote{As the polynomial has only positive coefficients, $\lambda$ must be non-negative.} real $\lambda$ such that for all $\{i,j\}\in \binom{[n]}{2}$ we have
\[
\mu(i,j)=0 \quad \text{or}\quad \frac{\partial f(\mathbf{x})}{\partial \mathbf{x}_{i,j}}(\mu)=\lambda\;.
\]
Note that the degree of each variable $\mathbf{x}_{i,j}$ in every monomial of $f(\mathbf{x})$ is at most\footnote{The only case where it is $3$ is when $m=2$ and we consider a walk on one edge three times, e.g.,\ the walk $(1,2,1,2)$.} $3$. Thus, for every $\{i,j\}\in \binom{[n]}{2}$ we have
\begin{align}\label{eq1-rho}
\lambda \cdot \mu(i,j)=\frac{\partial f(\mathbf{x})}{\partial \mathbf{x}_{i,j}}(\mu)\cdot\mu(i,j)\leq 3\sum_{P\in \mathbf{P^*}(\{i,j\})}\mu (P)\;.
\end{align}
We also have the following:
\begin{align}
\lambda &= \sum_{\{i,j\}\in \binom{[n]}{2}}\lambda \cdot \mu(i,j)\nonumber\\
&=\sum_{\{i,j\}\in \binom{[n]}{2}}\frac{\partial f(\mathbf{x})}{\partial \mathbf{x}_{i,j}}(\mu)\cdot \mu(i,j)\nonumber\\
&\geq \sum_{\{i,j\}\in \binom{[n]}{2}}\sum_{P\in \mathbf{P^*}(\{i,j\})}\mu(P)\nonumber\\
&=\sum_{P\in \mathbf{P^*}}\mu(P)\sum_{\{i,j\}\in \binom{[n]}{2}}\mathbbm{1} (\{i,j\}\in E(P))\nonumber\\
&\geq (m-1)\rho(\mu;m)\label{eq2-rho}\;,
\end{align}
where the first equality holds as $\mu$ is a probability measure, the second equality holds by the definition of $\lambda$, and the last inequality holds as there are at least $m-1$ distinct edges in each walk in $\mathbf{P^*}$. Combining \eqref{eq1-rho} and \eqref{eq2-rho} we have the following for all $i\in [n]$:
\begin{align*}
(m-1)\cdot \bar{\mu}(i)\cdot \rho(\mu;m) &= \sum_{j\in [n]\setminus\{i\}}(m-1)\mu(i,j)\rho(\mu;m)\\
&\leq 3 \sum_{j\in [n]\setminus\{i\}} \sum_{P\in \mathbf{P^*}(\{i,j\})}\mu(P)\\
&=3\sum_{P\in \mathbf{P^*}}\mu (P)\sum_{j\in [n]\setminus\{i\}} \mathbbm{1}(\{i,j\}\in E(P))\\
&=3\sum_{P\in \mathbf{P^*}}\deg_P(i)\mu (P)\\
&\leq 12\sum_{P\in \mathbf{P^*}}\mu (P)= 12\cdot \rho(\mu;m)\;,
\end{align*}
where the last inequality follows as for every $P\in \mathbf{P^*}$ and $i\in P$ we have\footnote{An example being the walk $(1,2,1,3,1)$.} $\deg_P(i)\leq 4$.
Dividing both sides by $(m-1)\cdot \rho(\mu;m)$ we obtain that for all $i$ we have $\bar{\mu}(i)\leq \frac{12}{m-1}$. Therefore, as $m\geq 2$ we have
\begin{align*}
\rho (\mu;m)&=\sum_{x\in (n)_m}\bar{\mu}(x_1)\left(\prod_{i=1}^{m-1}\mu (x_i,x_{i+1})\right)\bar{\mu}(x_m)\\
&\leq \frac{144}{(m-1)^2}\sum_{x\in (n)_m}\prod_{i=1}^{m-1}\mu (x_i,x_{i+1})\leq \frac{1152}{m^2}\cdot \beta(P_{m})\;.\qedhere
\end{align*}
\end{proof}
\end{document} |
\begin{document}
\title{Cross-verification and Persuasive Cheap Talk\thanks{We are grateful for the helpful comments of Navin Kartik, Elliot Lipnowski and Joel Sobel. Ludovic Renou gratefully acknowledges the support of the Agence Nationale pour la Recherche under grant ANR CIGNE (ANR-15-CE38-0007-01) and through the ORA Project ``Ambiguity in Dynamic Environments'' (ANR-18-ORAR-0005). Atakan's work on this project was supported by a grant from the European Research Council (ERC 681460, InformativePrices).}}
\author{Alp Atakan\thanks{Koc University and QMUL. } \,\& Mehmet Ekmekci\thanks{ Boston College}
\,\& Ludovic Renou \thanks{QMUL and CEPR}}
\maketitle
\begin{abstract}
We study a cheap-talk game where two experts first choose what information to acquire and then offer advice to a decision-maker whose actions affect the welfare of all. The experts cannot commit to reporting strategies. Yet, we show that the decision-maker's ability to cross-verify the experts' advice acts as a commitment device for the experts. We prove the existence of an equilibrium, where an expert's equilibrium payoff is equal to what he would obtain if he could commit to truthfully revealing his information.
\textbf{Keywords}: Bayesian persuasion, information design, commitment,
cheap talk, multiple experts
\textbf{JEL Classification Numbers}: C73, D82
\end{abstract}
\section{Introduction}
Decision-makers routinely solicit advice from experts who have a vested interest in the decision at hand. Consulting multiple experts may allow a decision-maker to check the veracity of the advice that he receives by comparing one expert's recommendation with another's (``cross-verification''). In this paper, we study how cross-verification affects communication.
Cross-verification's effectiveness depends on the experts' information. If experts have perfectly correlated information, then inconsistent recommendations from experts
definitively indicate untruthful, self-serving advice. Alternatively, if experts have uncorrelated information, then cross-verification cannot detect misleading advice. Thus, if the experts strategically acquire information, their choices will affect the scope for cross-verification. This paper sheds light on this interplay by analyzing a cheap-talk game, where experts independently acquire information before
providing advice to a decision-maker. More precisely, we study the following game: Two experts with
identical preferences, first choose statistical experiments that provide
information about an unknown state of the world, privately observe their experiments' outcomes, and then offer private
reports to the decision-maker. The decision-maker collects
all the reports and chooses an action.
As a benchmark, suppose that an expert could commit to revealing his experiment's outcome truthfully. Following \citet{kamenica2011bayesian}, we call the experiment that this expert would optimally select the expert-optimal experiment. In our model, however, the experts
\emph{cannot} commit. Yet, we show that there exists an equilibrium, where both experts
choose the expert-optimal experiment and truthfully report
the outcomes of their experiments. In equilibrium, the experts optimally select perfectly correlated experiments, which enable cross-verification to be most effective. In turn, cross-verification facilitates truthful communication and allows
the experts to receive their best possible payoff. In other words, cross-verification acts as a commitment device.
The existence of such an equilibrium relies on three essential properties.
First, we assume that the experts are free to choose arbitrarily correlated statistical
experiments (see \citet{green1978two} and \citet{gentzkow2016competition,gentzkow2017bayesian}). In fact, in the equilibrium that we construct, they choose to correlate
their experiments' outcomes perfectly and thus allow the decision-maker
to cross-verify their reports perfectly.
Second, suppose one expert deviates from reporting the experiment's outcome
truthfully, while the other is truthful. In this case, the decision-maker detects
a deviation as the two reports are inconsistent.
However, the decision-maker cannot deduce the deviator's identity.
Third, we show that a \emph{uniform} punishment always exists: there is an action that punishes the experts for deviating from truthful reporting, irrespective of the experts' private information.
The existence of the aforementioned uniform punishment is key to our
equilibrium construction since the decision-maker does not know the
deviator's identity and, therefore, cannot condition the punishment on
the deviator's information. Proving the existence of a uniform punishment is the main technical contribution
of the paper. We stress that the uniform
punishment is relative to the expert-optimal experiment. Arbitrary
experiments do not necessarily admit uniform punishments, and therefore,
cross-verification does not necessarily elicit honest advice when the experts choose arbitrary experiments.
Our main result, described above, also generalizes to situations
where the experts have non-identical preferences, provided that a
uniform punishment continues to exist. In particular, we show that
there is a uniform punishment when the preferences of the second expert
are a convex combination of the preferences of the first expert and
the decision-maker. For example, this is the case in the
quadratic utility example of \citet{crawford1982strategic} when the two experts
have like-biases.
Finally, we also study cross-verification from the decision-maker's perspective. We show that there is an equilibrium where the
decision-maker benefits from cross-verification if the expert-optimal
experiment is informative at some prior belief. The intuition is as follows: The decision-maker benefits from any additional information, and even the expert-optimal experiment provides
valuable information in many circumstances. If the expert-optimal experiment does not provide
useful information to the decision-maker, we appropriately modify
the expert-optimal experiment. The modified experiment offers
valuable information for the decision-maker, and the experts can truthfully
communicate this information in equilibrium. We also establish this result's converse: the decision-maker's unique equilibrium payoff is equal
to his payoff at his prior belief if the expert-optimal experiment is uninformative at every prior belief. In other
words, the decision-maker only benefits from cross-verification in
situations where the experts also benefit.
\textbf{\textit{Related literature.}} This paper is related
to the literature on cheap talk pioneered by \citet{crawford1982strategic};
several papers in this literature study communication with
multiple experts. In particular, \citet{krishna2001asymmetric} focus
on a model where the experts are perfectly informed and show that
there is an equilibrium where the experts truthfully reveal the state
if the experts send messages simultaneously. In contrast, \citet{krishna2001model}
prove that such an equilibrium does not exist if the experts send
messages sequentially. \citet{battaglini2002multiple} shows that the decision-maker can learn a multidimensional state by consulting experts about different dimensions. \citet{ambrus2014almost} find that there are equilibrium outcomes of multi-sender cheap-talk games that are arbitrarily close to full revelation, if the senders imperfectly observe the state and if the state space is large enough.\footnote{Also, see \citet{wolinsky2002eliciting},
and \citet{gilligan-krehbiel} for related work on multi-sender cheap-talk games.} Our work differs from these articles in several respects: Foremost, our main result shows that the experts obtain their commitment
payoff. In contrast, the cheap-talk literature is predominantly interested in full information revelation. In other words, our emphasis is on the experts' perspective while the cheap-talk literature focuses on the decision-maker's perspective. Second, we assume that the experts choose what kind of
information to acquire, while the previous papers typically assume that
the experts perfectly know the state. This is an important distinction
since the experts' information affects the scope for cross-verification.
Third, the literature on cheap talk focuses on agents with single-peaked preferences and frequently assumes that all agents have quadratic utility.
In contrast, we put no restrictions on the utility functions.\footnote{With quadratic utility and like-biased experts, the expert-optimal
information structure coincides with the decision-maker's and entails choosing the perfectly informative experiment.
Therefore, as in \citet{krishna2001asymmetric}, our result also implies that full information revelation
is an equilibrium in this particular case. However, with other utility specifications,
the expert-optimal and decision-maker optimal information structures need not coincide.}
The
survey by \citet{sobel2013giving} also discusses how cross-verification
ensures truth-telling in the context of multi-sender
cheap-talk games. The argument provided in this survey relies on the
existence of an arbitrarily harsh exogenously-given punishment for
deviations from truthful reporting. Instead, we show that a uniform
punishment relative to the optimal experiment always exists.
Our paper is also related to the following works that focus on single-expert cheap-talk games: \citet{lyu2020information} characterizes the equilibrium set in a model where the expert acquires information before providing advice. \citet{lipnowski2020equivalence} shows that an expert can obtain his commitment payoff if the expert's value function is continuous.\footnote{The value function describes the expert's highest expected payoff at a given belief conditional on the decision-maker choosing a best-reply to that belief. Continuity of the value function is a strong assumption. E.g., with two states and two actions, it requires the expert to be indifferent between the two actions whenever the decision-maker is.} Instead, we focus on a model with multiple experts and show that the experts receive their commitment payoff, without making any assumptions on their payoff functions.
Finally, this paper is closely related to the literature on Bayesian persuasion
(\citet{kamenica2011bayesian}). A number of articles, including \citet{au2020competitive},
\citet{gentzkow2016competition,gentzkow2017bayesian}, \citet{koessler2018interactive},
and \citet{li2018bayesian,li2018sequential}, study persuasion with
multiple experts. In all of these papers, the experts can commit to
revealing their information truthfully. In contrast, we assume that the experts' recommendations are cheap-talk, i.e., we require sequential
rationality at every stage of the game. Our result
shows that the experts can achieve their commitment payoff even though
they cannot commit to revealing their information.
For a recent survey of the literature on Bayesian persuasion, we refer to \citet{kamenica2019bayesian}.
\section{The Model}
We study a cheap-talk game between two experts, labelled 1 and 2,
and a decision-maker. The experts provide information about a payoff-relevant
state $\omega\in\Omega$ to the decision-maker, who then chooses
an action $a\in A$. The sets $A$ and $\Omega$ are finite. The
experts have identical preferences. An expert's payoff is $u(a,\omega)$
when the decision-maker chooses action $a$ and the state is $\omega$.
(We relax the identical preferences assumption in the next section.)
The decision-maker's payoff is $v(a,\omega)$. Initially, neither
the experts nor the decision-maker knows the state. The common prior
probability that the state is $\omega$ is $\pi^{\circ}(\omega)$.
We first provide an informal description of the cheap-talk game. The
game has three stages. In the first stage, the two experts simultaneously
choose a statistical experiment. The selected experiments are publicly
observed. In the second stage, each expert privately observes his experiment's outcome and then sends a message to the decision-maker.
In the third stage, the decision-maker observes the experts' messages
and chooses an action.
{}
We now provide a formal description. To model the choice of statistical
experiments, we follow \citet{gentzkow2016competition,gentzkow2017bayesian}.
These authors define a statistical experiment $\sigma$ as a partition
of $\Omega\times[0,1]$ into finitely many (Lebesgue) measurable subsets. A signal $s$ is an element of the partition
$\sigma$, i.e., a measurable subset of $\Omega\times[0,1]$. The
probability of signal $s\in\sigma$ conditional on $\omega$ is the
(Lebesgue) measure of the set $\{x\in[0,1]:(\omega,x)\in s\}$. Throughout,
we omit the dependence on the experiment $\sigma$, and write $\lambda_{s}$
for the probability of the signal $s$ and $\pi_{s}$ for the posterior
probability. We denote the set of experiments that the experts can
choose from by $\Sigma$.
{}
In the first stage, expert $i$ thus chooses an experiment $\sigma_{i}\in\Sigma$.
The chosen experiments $(\sigma_{1},\sigma_{2})$ are publicly observed.
In the second stage, expert $i$ privately observes the realization
$s_{i}\in\sigma_{i}$ and sends a private message $m_{i}\in M_{i}$
to the decision-maker. We assume that the sets of messages are rich
enough to communicate any signal realizations. Finally, the decision-maker
observes the messages $(m_{1},m_{2})$ (but not the realized signals
$(s_{1},s_{2})$) and chooses an action $a$. We denote $\Gamma(\pi^{\circ},u,v)$
the cheap-talk game. Note that different extensive-form games are
consistent with our description. Throughout, we assume that the state
$(\omega,x)\in\Omega\times[0,1]$ is chosen by Nature according to
the probability distribution $\pi^{\circ}\times U[0,1]$ after the
experts have chosen their experiments where $U[0,1]$ denotes the uniform distribution on the unit interval. Thus, we have a proper sub-game
after each choice of statistical experiments $(\sigma_{1},\sigma_{2})$.
{}
A strategy for expert $i$ is a pair $(\sigma_{i},\tau_{i})$, where
$\sigma_{i}\in\Sigma$ and $\tau_{i}(\sigma_{i},\sigma_{j},s_{i})\in\Delta(M_{i})$
for all $(\sigma_{i},\sigma_{j},s_{i})$ with $s_{i}\in\sigma_{i}$.
A strategy for the decision-maker specifies a mixed action $\alpha(\sigma_{i},\sigma_{j},m_{i},m_{j})\in\Delta(A)$
for all $(\sigma_{i},\sigma_{j},m_{i},m_{j})$.\footnote{To ease exposition, we do not explicitly consider randomizations over
the choices of experiments. This does not
affect any of our results.} The solution concept is weak perfect Bayesian equilibrium. We stress
that this requires the beliefs to be consistent with the chosen experiments
$(\sigma_{1},\sigma_{2})$ even if these experiments are off the equilibrium
path.
{}
A few remarks are worth making. First, as in classical cheap-talk games,
neither expert can commit to a reporting strategy.
Second, if the experiments are $(\sigma_{1},\sigma_{2})$, then the
joint probability of $(s_{1},s_{2})\in\sigma_{1}\times\sigma_{2}$
conditional on $\omega$ is the measure of the set $\{x:(\omega,x)\in s_{1}\cap s_{2}\}$.
Thus, if both experts choose the same experiment $\sigma$, then the
probability of $(s,s')\in\sigma\times\sigma$ is zero, whenever $s\neq s'$.
(To see this, note that if $s\neq s'$, then $s\cap s'=\emptyset$
since $\sigma$ is a partition.) In words, if both experts choose
the same experiment, their realized signals are perfectly correlated.
This property will turn out to be crucial.\footnote{Note, however, that we can allow for the experts to choose identical
and independent experiments without affecting our results. To do so,
it suffices to define an experiment as a finite partition of $\Omega\times[0,1]\times[0,1]$,
with $(\omega,x,y)$ distributed according to $\pi^{\circ}\times U([0,1])\times U([0,1])$.
Intuitively, if the experts condition their random observations on
$x$, they are perfectly correlated, while they are independent if
one expert conditions on $x$ and the other on $y$.} An alternative approach is to assume that there is a fixed set of
statistical experiments and to let each expert observe the realization
of the experiment of his choice. This alternative approach also
implies that if the two experts choose to observe the same experiment's realization, their observations are identical. Lastly,
it is usual to model statistical experiments as probability kernels
$\sigma^{*}:\Omega\rightarrow\Delta(S)$, where $S$ is the (finite)
set of signals. The latter formulation naturally implies the former:
for each $\omega$, we can partition $[0,1]$ into $|S|$ non-empty
and disjoint intervals such that the length of the $s$-th interval
is $\sigma^{*}(s|\omega)$ when the state is $\omega$. With a slight
abuse of notation, we identify the probability kernel $\sigma^{*}$
with that particular partition of $\Omega\times[0,1]$.
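This identification is easy to make concrete. The following sketch (the helper name \texttt{partition\_from\_kernel} and the numbers are ours, for illustration only) converts a finite kernel into the corresponding partition by cutting $[0,1]$, state by state, into consecutive intervals:

```python
# Illustrative sketch: map a probability kernel sigma*: Omega -> Delta(S)
# to a partition of Omega x [0,1]. For each state, [0,1] is cut into
# consecutive intervals whose lengths equal the signal probabilities.

def partition_from_kernel(kernel):
    cells = {}
    for omega, probs in kernel.items():
        lo = 0.0
        cells[omega] = {}
        for s, p in probs.items():
            cells[omega][s] = (lo, lo + p)  # interval [lo, lo + p)
            lo += p
    return cells

# Two states, two signals (arbitrary illustrative numbers).
kernel = {"w0": {"s0": 0.7, "s1": 0.3}, "w1": {"s0": 0.2, "s1": 0.8}}
cells = partition_from_kernel(kernel)

# The conditional probability of s given omega is the Lebesgue measure
# (here: the interval length) of {x : (omega, x) in s}.
length = lambda iv: iv[1] - iv[0]
assert abs(length(cells["w1"]["s1"]) - 0.8) < 1e-12
# If both experts use this kernel, they read off the same cell for the
# same draw x, so their signal realizations coincide with probability one.
```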
{}
We focus on \textbf{truthful equilibria}, in which
the two experts choose the same experiment in the first stage and
truthfully report the common signal realization in the second stage.
{}
In what follows, we denote by $v(\alpha,\pi)$
the decision-maker's expected payoff when he chooses the mixed
action $\alpha$ and his belief is $\pi$, and by $BR(\pi):=\{\alpha\in\Delta(A):v(\alpha,\pi)\geq v(\alpha',\pi),\forall\alpha'\in\Delta(A)\}$
the set of decision-maker's best-replies at $\pi$. Similarly,
we write $u(\alpha,\pi)$ for an expert's expected payoff.
\section{The Main Result}
In this section, we show that the ability of the decision-maker to cross-verify information
serves as a commitment device for the experts. More precisely, we show
that there exists an equilibrium of the cheap-talk game, in
which the experts obtain their \emph{commitment value}.
{}
We define the commitment value as the highest payoff an expert can obtain when he commits to truthfully disclose the realized signal, as in games of Bayesian persuasion. Formally, consider the persuasion game, where an expert first chooses
a statistical experiment $\sigma:\Omega\rightarrow\Delta(S)$, \emph{commits}
to truthfully reveal the realized signal $s$ to the decision-maker, who then
makes a decision. \citet{kamenica2011bayesian} prove
that the best equilibrium payoff for the expert in this game is given
by $\text{cav}\;\overline{u}(\pi)$, where $\text{cav}\;\overline{u}$
is the concavification of $\overline{u}$ and $\overline{u}(\pi):=\max_{\alpha\in BR(\pi)}u(\alpha,\pi)$. (See also \citealp{AumannMaschler95}.)
For later reference, we write $(\lambda^{*}_s,\pi_{s}^{*})_{s\in S}$ for an optimal splitting of the prior $\pi^{\circ}$, that is, $\sum_{s\in S}\lambda^{*}_s\overline{u}(\pi_s^{*})=\text{cav}\,\overline{u}(\pi^{\circ})$ and
$\sum_{s\in S}\lambda^{*}_s\pi_{s}^{*}=\pi^{\circ}$. We write $\Pi^{*}$ for $\{\pi_{s}^{*}:s\in S\}$,
$\mathrm{co\,}\Pi^{*}$ for the convex hull of $\Pi^{*}$, and
$\Delta^{*}$ for the set of all probability distributions
over $\Pi^{*}$. The corresponding optimal experiment is denoted $\sigma^*$.
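For intuition, with two states (identifying a belief with the probability of one state) a splitting needs at most two posteriors, so the concavification can be computed by brute force over binary splittings on a grid. A minimal sketch, with a hypothetical piecewise-linear value function $\overline{u}$ chosen purely for illustration:

```python
# Sketch: compute cav u-bar(prior) with two states by optimizing over
# binary splittings (lam, p_lo, p_hi) with lam*p_lo + (1-lam)*p_hi = prior.
# The value function ubar below is hypothetical, for illustration only.

def ubar(p):
    # Convex kink at p = 1/2: the expert gains from extreme beliefs.
    return max(1 - 2 * p, 2 * p - 1)

def cav(prior, n=200):
    grid = [i / n for i in range(n + 1)]
    best = ubar(prior)  # the trivial (uninformative) splitting
    for p_lo in grid:
        for p_hi in grid:
            if p_lo <= prior <= p_hi and p_lo < p_hi:
                # Bayes plausibility pins down the weight on p_lo:
                lam = (p_hi - prior) / (p_hi - p_lo)
                best = max(best, lam * ubar(p_lo) + (1 - lam) * ubar(p_hi))
    return best

# At prior 0.45, splitting into posteriors 0 and 1 is optimal here:
# cav(0.45) = 1.0, strictly above ubar(0.45) = 0.1, so the expert
# benefits from persuasion in this hypothetical example.
assert abs(cav(0.45) - 1.0) < 1e-9
assert abs(ubar(0.45) - 0.1) < 1e-9
```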
\begin{thm}
There exists a truthful equilibrium of the cheap-talk game, where both experts obtain their commitment value
$\text{cav\;}\overline{u}(\pi^{\circ})$.
\end{thm}
Before proving Theorem 1, we explain our result's logic with the help of a simple example. There are two states, $\omega_0$ and $\omega_1$, and four actions, $a_{0},a_{L},a_{R}$ and $a_{1}$. The preferences
are depicted in Figure \ref{ex:mixed strategy punishment}. Throughout the example, probabilities refer to the probability of $\omega_1$. Assume that $\pi^{\circ}=0.45$.
\begin{figure}
\caption{Uniform Punishment and Cross-verification.}
\label{ex:mixed strategy punishment}
\end{figure}
We first note that the optimal experiment $\sigma^*$ consists in splitting the prior into the posteriors $\pi_{s_{0}}^{*}=0.3$ and
$\pi_{s_{1}}^{*}=0.6$.\footnote{We have $\lambda^{*}_{s_{0}}=\lambda^{*}_{s_{1}}=1/2$. The experiment is given by $\sigma^*(s_{0}\vert\omega_{0})=7/11\approx0.64$
and $\sigma^*(s_1|\omega_1)=2/3\approx0.67$.} We also note that $u(a_0,\pi_{s_0}) < u(a_1,\pi_{s_0})$ and $u(a_1,\pi_{s_1}) < u(a_0,\pi_{s_1})$, that is, the experts have an incentive to misreport the realized signals. Thus, if there were a single expert, choosing the experiment $\sigma^*$ and truthfully reporting the realized signal would not be an equilibrium. More generally, no equilibrium would give the expert his commitment value.
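The numbers in the example follow directly from Bayes' rule; the short check below (probabilities again refer to $\omega_1$) recovers the kernel stated in the footnote from the splitting:

```python
# Check the example's splitting: prior 0.45, equal-weight posteriors 0.3, 0.6.
prior = 0.45
lam = {"s0": 0.5, "s1": 0.5}
post = {"s0": 0.3, "s1": 0.6}

# Bayes plausibility: the posteriors average back to the prior.
assert abs(sum(lam[s] * post[s] for s in lam) - prior) < 1e-12

# Bayes' rule gives lam_s * post_s = prior * sigma*(s | w1) and
# lam_s * (1 - post_s) = (1 - prior) * sigma*(s | w0), hence:
sigma_s1_w1 = lam["s1"] * post["s1"] / prior              # = 2/3, about 0.67
sigma_s0_w0 = lam["s0"] * (1 - post["s0"]) / (1 - prior)  # = 7/11, about 0.64
assert abs(sigma_s1_w1 - 2 / 3) < 1e-12
assert abs(sigma_s0_w0 - 7 / 11) < 1e-12
```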
Matters are different if the decision-maker chooses to consult another expert. To see this, suppose that the two experts choose the experiment $\sigma^*$ and truthfully report the outcome of the experiment. The decision-maker then holds belief
0.3 (resp., 0.6) and plays action $a_{0}$ (resp., $a_{1}$) after observing
two matching messages equal to $s_{0}$ (resp., $s_{1}$).
Off the equilibrium path, i.e., when the decision-maker observes
two contradictory messages, assume that he holds belief $\pi_{p}=0.4$ and
plays action $\alpha_{p}\in\Delta(\{a_{L},a_{R}\})=BR(0.4)$.
The key observation is that the mixed strategy $\alpha_{p}\in BR(0.4)$
is a \emph{uniform punishment}, that is, $u(\alpha_p, \pi_{s_0}) < u(a_0,\pi_{s_0})$
and $u(\alpha_p, \pi_{s_1}) < u(a_1,\pi_{s_1})$. (See Figure \ref{ex:mixed strategy punishment}.) In words, regardless of the realized signal, an expert is punished for deviating from truth-telling. All the decision-maker needs to know is that a deviation has occurred, and the presence of the second expert indeed guarantees that deviations are detected. The experts thus benefit from the decision-maker cross-verifying their information. (Naturally, there are other equilibria, where the decision-maker benefits from cross-verification. See the next section.)
We conclude with two additional remarks. First, if the experts choose the perfectly informative experiment,
truthful reporting does not constitute an equilibrium. This is because
the actions that are best for the decision-maker at beliefs $\pi_{s_{0}}=0$ and
$\pi_{s_{1}}=1$ are the worst for the experts at those beliefs. Second, for any two experiments $\sigma_1$ and $\sigma_2$,
there is an equilibrium, where experts 1 and
2 choose experiments $\sigma_{1}$ and $\sigma_{2}$, respectively, and a babbling
equilibrium of the ensuing sub-game is played.
We now turn to the proof of Theorem 1. The proof rests on three essential properties. First, if the two experts choose the same experiment, their signals' realizations are \emph{perfectly correlated}. This is because they observe the same outcome. Second, if the two experts choose the same experiment, the decision-maker \emph{detects} any deviation from truth-telling. This is because the decision-maker receives contradicting messages after any deviation. However, he cannot identify the deviator and, thus, cannot infer the true signal's realization. Therefore, to deter deviations, the decision-maker must be able to punish the two experts simultaneously. The third property is the existence of such a \emph{uniform punishment} whenever the experiment is expert optimal. The following lemma states this property.
\begin{lem}[Uniform punishment]
Let $(\lambda_s^*,\pi_s^*)_{s \in S}$ be an optimal splitting. There exist $\pi_{p}\in\text{co }\Pi^{*}$
and $\alpha_{p}\in BR(\pi_{p})$ such that $u(\alpha_{p},\pi_{s}^{*})\leq\overline{u}(\pi_{s}^{*})$
for all $\pi_{s}^{*}\in\Pi^{*}$.
\end{lem}
Lemma 1 is our main technical contribution. We postpone its proof to the end of this section and now show how to construct an equilibrium of the cheap-talk game with a payoff of $\text{cav\,}\overline{u}(\pi^{\circ})$ to the experts.
\begin{proof}[Proof of Theorem 1.]
Let $(\lambda_s^*,\pi_s^*)_{s \in S}$ be an optimal splitting inducing the payoff $\text{cav\,}\overline{u}(\pi^{\circ})$. Let $\sigma^*$ be the optimal experiment associated with that splitting. Recall that
$\Pi^*:=\{\pi_s^*: s\in S\}$. From Lemma 1, there exist $\pi_{p}\in \mathrm{co\,} \Pi^*$ and $\alpha_{p}\in BR(\pi_{p})$
such that for all $\pi_{s}^{*}\in\Pi^{*}$, $u(\alpha_{p},\pi_{s}^{*})-\overline{u}(\pi_{s}^{*})\leq0$.
We construct a truthful equilibrium as follows. The experts choose the optimal experiment
$\sigma^{*}$ and truthfully report the realized signal. Following the
choice of $\sigma^{*}$, the decision-maker chooses $\alpha\in BR(\pi_{s})$, with
$u(\alpha,\pi_{s})=\overline{u}(\pi_{s})$, when he observes two
identical messages equal to $s$. Alternatively, if the decision-maker receives two
conflicting messages, he chooses $\alpha_{p}$ (sustained by the
belief $\pi_{p}$). Finally, following the choice of any other statistical
experiment, an equilibrium of the continuation game, which exists by finiteness,
is played. It is routine to check that we indeed have an equilibrium.
\end{proof}
We now offer a series of remarks.
\begin{rem}
We have assumed that the two experts share the same preferences. If the preferences of one expert, say the second expert,
are a convex combination of the preferences of the first expert and the decision-maker,
i.e., $\beta u(a,\omega)+(1-\beta)v(a,\omega)$ for
some $\beta\in[0,1]$, then we can still construct a truthful equilibrium, where the first expert continues to obtain his commitment value.
To see this, let $\alpha_{s}$ be such that
$u(\alpha_{s},\pi_{s}^{*})=\overline{u}(\pi_{s}^{*})$ and note that
$v(\alpha_{p},\pi_{s}^{*})\leq v(\alpha_{s},\pi_{s}^{*})=\max_{\tilde{\alpha}}v(\tilde{\alpha},\pi_{s}^{*})$
for all $s$, where $\alpha_{p}$ is the punishment, which exists by Lemma 1. This implies that $\beta u(\alpha_p,\pi_{s}^{*})+(1-\beta)v(\alpha_p,\pi_{s}^{*})\leq\beta u(\alpha_{s},\pi_{s}^{*})+(1-\beta)v(\alpha_{s},\pi_{s}^{*})$
for all $\pi_{s}^{*}\in\Pi^{*}$, i.e., $\alpha_{p}$ is also a uniform
punishment for the second expert. We illustrate this remark with a simple example. As in \citet{crawford1982strategic}, assume that the decision-maker obtains the payoff $-(\alpha-\omega)^2$, when he chooses $\alpha \in [0,1]$ and the state is $\omega$.\footnote{Throughout, we have assumed that the decision-maker has a finite set of actions. Our results extend to the set $A$ being a non-empty compact subset of $\mathbb{R}$ and concave continuous payoff functions. The proof of Lemma 1 only requires a slight modification: we need to invoke duality for convex programming rather than for linear programming.}
The payoffs of the two experts are $-(\alpha-\omega-b)^2$ and $-(\alpha-\omega-\beta b)^2$, with $\beta \in [0,1]$ and $b >0$, respectively. The second expert is (weakly) less biased than the first expert. Observe that, up to a constant, the payoff of the less biased expert is a convex combination of the payoffs of the more biased expert and the decision-maker, that is:
\begin{equation*}
- \left[(1-\beta)(\alpha- \omega)^2 + \beta (\alpha-\omega-b)^2\right] = -(\alpha - \omega - \beta b)^2 -b^2\beta(1-\beta).
\end{equation*} Therefore, there exists an equilibrium, which gives the more biased expert his commitment value.\footnote{In the quadratic example, the payoff $\overline{u}(\pi)$ to the more biased expert is $-(\mathbb{V}_{\pi}[\omega] + b^2)$, with $\mathbb{V}_{\pi}[\omega]$ the variance of $\omega$ with respect to the distribution $\pi$. Since the variance of a real-valued random variable is concave in its distribution, full information disclosure attains the commitment value.}
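For completeness, the identity follows from expanding both sides in $x:=\alpha-\omega$:
\begin{equation*}
-\left[(1-\beta)x^{2}+\beta(x-b)^{2}\right]=-x^{2}+2\beta bx-\beta b^{2}=-(x-\beta b)^{2}-b^{2}\beta(1-\beta).
\end{equation*}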
\end{rem}
\begin{rem}
We have assumed that the choice of experiments is publicly observed.
If the decision-maker does not observe the experiments chosen by the two experts,
but if the experts observe each other's experiment choice, then
again there is a truthful equilibrium, where the optimal experiment $\sigma^{*}$ is chosen
as in Theorem 1. In this equilibrium, the play on the equilibrium path unfolds
as in Theorem 1. If any expert deviates and chooses another experiment
$\sigma \neq \sigma^{*}$, then the two experts send the message $m_{0}$, where $m_0$
is a message that is never sent on the equilibrium path. If the decision-maker
observes two messages that do not match or observes a message equal
to $m_{0}$ from either of the two experts, then he best responds
to his belief $\pi_{p}\in\text{co }\Pi^{*}$ and plays action $\alpha_{p}$.
\end{rem}
\begin{rem} Similarly, if we assume that the experts do not observe each other's choice of experiments, but the decision-maker does, then our result continues to hold. To see this, we construct an equilibrium as follows. In the first stage, the experts choose the optimal experiment. In the second stage, an expert truthfully reports his signal if he has chosen the optimal experiment in the first stage. (The strategies are left unspecified in other contingencies.) If the decision-maker observes the experts choosing the optimal experiment, he follows the same strategy as in our main proof. If the decision-maker observes only one expert choosing the optimal experiment, he plays a best-reply to the message sent by that expert. (The strategies are left unspecified in all other contingencies.) On path, the experts receive their commitment value. If an expert chooses another experiment, the decision-maker observes the deviation, but the other expert does not and therefore continues to truthfully reveal his signal. Hence, the deviation does not change the deviating expert's payoff. \end{rem}
\begin{rem}
We have assumed that the two experts choose experiments simultaneously.
This assumption is again not required for our result.
Suppose instead that one expert, say the first expert, chooses an experiment
$\sigma:\Omega\rightarrow\Delta(S_1\times S_2)$, with expert $i$ privately observing the signal's realization $s_i$.
As before, after observing their signals, the experts send messages to the decision-maker, who then chooses an action.
Yet again, we have a truthful equilibrium, where the equilibrium payoff of the two experts is $\text{cav\,}\overline{u}(\pi^{\circ})$ as in Theorem 1.
In this equilibrium, the first expert chooses the optimal experiment and perfectly correlates the second expert's signal with his own.
\end{rem}
\begin{rem} One may worry that our result relies on the assumption that the two experts send messages simultaneously. To see that it does not, consider a game where in the first stage the two experts independently and simultaneously choose experiments and privately observe signals. In the second stage, the decision-maker consults expert 1 and, after observing expert 1's message, sends a cheap-talk message to expert 2. In the third stage, expert 2 sends a message to the decision-maker, and in the last stage the decision-maker chooses an action. Again, there is an equilibrium that delivers the experts their commitment payoff under this specification. In this equilibrium, both experts choose the expert-optimal experiment, expert 1 truthfully reveals his information, the decision-maker babbles after observing expert 1's message, and expert 2 also truthfully reveals his information. Deviations from equilibrium play are punished by the same mechanism as in our main result.
\end{rem}
\begin{rem} We have assumed weak perfect Bayesian equilibrium as our solution concept.
If we restrict attention to a finite set of experiments, which contains $\sigma^*$, then
we can strengthen the solution
concept to sequential equilibrium. We only need a slight modification of Lemma 1 to guarantee that
the decision-maker believes that the realized signal is either $s$ or $s'$ after observing a
report $(s,s')$. We need to prove the existence of a belief
$\pi_{s,s'}\in\Delta(\{\pi_{s}^{*},\pi_{s'}^{*}\})$ and a mixed action
$\alpha_{s,s'} \in BR(\pi_{s,s'})$ such that $u(\alpha_{s,s'},\pi_{\tilde{s}}^{*})-\overline{u}(\pi_{\tilde{s}}^{*})\leq0$
for all $\tilde{s}\in\{s,s'\}$. A minor adaptation of the proof of Lemma
1 guarantees this result.
\end{rem}
\begin{proof}[Proof of Lemma 1.]
We first establish two intermediate claims, then we use these two
claims to establish the lemma. Let $(\lambda_s^*,\pi_s^*)_{s \in S}$ be an optimal splitting. Recall that
$\Pi^*:=\{\pi_s^*: s\in S\}$ and $\Delta^*$ is the set of all probability distributions over $\Pi^*$.
\textbf{Claim 1:} For any $\lambda\in\Delta^{*}$, $\overline{u}(\sum_{s}\lambda_s\pi_{s}^{*})\leq\sum_{s}\lambda_s\overline{u}(\pi_{s}^{*})$.
\textit{Proof of Claim 1:} Consider the convex hull of the graph of $\overline{u}$,
i.e., $\co\{(\pi,r)\in\Delta(\Omega)\times\mathbb{R}:r=\overline{u}(\pi)\}$.
By construction, the point $(\pi^{\circ},\text{cav}\;\overline{u}(\pi^{\circ}))=(\sum_{s}\lambda^{*}_s\pi_{s}^{*},\sum_{s}\lambda^{*}_s\overline{u}(\pi_{s}^{*}))$
is on the boundary of the convex hull. From the supporting
hyperplane theorem, there exists a hyperplane $h\in\mathbb{R}^{|\Omega|}\times\mathbb{R}$
supporting $\co\{(\pi,r)\in\Delta(\Omega)\times\mathbb{R}:r=\overline{u}(\pi)\}$
at $(\pi^{\circ},\text{cav}\;\overline{u}(\pi^{\circ}))$ such that the graph of $\overline{u}$ lies below $h$. For all $s \in S$, the point
$(\pi_{s}^{*},\overline{u}(\pi_{s}^{*}))$ also lies on the hyperplane $h$.
Consequently, the point $(\sum_s\lambda_s \pi^*_s,\sum_{s}\lambda_s\overline{u}(\pi_{s}^{*}))$
also lies on the hyperplane. Since the graph of $\overline{u}$ lies below $h$, it follows that $\overline{u}(\sum_s\lambda_s \pi^*_s)\leq\sum_{s}\lambda_s\overline{u}(\pi_{s}^{*})$,
as required.
$\blacksquare$
\textbf{Claim 2:} Choose any non-empty subset $B \subset A$ and $\varepsilon>0$. If $\max_{s\in S}[u(\alpha,\pi_{s}^{*})-\overline{u}(\pi_{s}^{*})]\geq\varepsilon$
for each $\alpha\in\Delta(B)$, then there exists $\hat{\lambda}\in\Delta^{*}$
such that
\[
\min_{\alpha\in\Delta(B)}u(\alpha,\sum_{s}\hat{\lambda}_s\pi_{s}^{*}) \geq \sum_{s}\hat{\lambda}_s\overline{u}(\pi_{s}^{*})+\varepsilon.
\]
\textit{Proof of Claim 2:} The claim follows from duality. Consider the following linear program:
\begin{align*}
\min_{\left(x,\alpha\right)\in\mathbb{R}\times\Delta\left(B\right)}x
\end{align*}
subject to: for all $s \in S$,
\begin{equation*}
\sum_{a\in B}\alpha(a)\left[u(a,\pi_{s}^{*})-\overline{u}(\pi_{s}^{*})\right]\leq x.
\end{equation*}
This minimization problem has a solution $\hat{x}$. Our hypothesis implies that $\hat{x}\geq\varepsilon$.
The dual program is given by
\[
\max_{\left(y,\lambda\right)\in\mathbb{R}\times\Delta(\Pi^{*})}y
\]
subject to: for all $a \in B$,
\[ \sum_{s\in S}\lambda_s\left[u(a,\pi_{s}^{*})-\overline{u}(\pi_{s}^{*})\right] \geq y.\]
Since the primal linear program has a solution, the dual program also
has a solution $(\hat{y},\hat{\lambda})$. The absence of a duality gap further
implies that $\hat{y}=\hat{x}\geq\varepsilon$. (See Section 4.2 of \citealp{Luen08}.) Therefore, for all $a \in B$,
\[
u(a,\sum_{s}\hat{\lambda}_s\pi_{s}^{*})=\sum_{s\in S}\hat{\lambda}_s u(a,\pi_{s}^{*})\geq\varepsilon+\sum_{s\in S}\hat{\lambda}_s\overline{u}(\pi_{s}^{*}).
\]
Hence, $u(\alpha,\sum_{s}\hat{\lambda}_s\pi_{s}^{*})\geq\sum_{s}\hat{\lambda}_s\overline{u}(\pi_{s}^{*})+\varepsilon$
for all $\alpha\in\Delta(B)$, as required.
$\blacksquare$
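The duality in Claim 2 is the minimax theorem for the zero-sum game with payoff matrix $D_{s,a}=u(a,\pi_{s}^{*})-\overline{u}(\pi_{s}^{*})$. The sketch below brute-forces the primal and dual values on a grid for a hypothetical $2\times2$ matrix (the numbers are invented for illustration); the two values coincide, as the no-duality-gap step asserts:

```python
# Hypothetical deviation gains D[s][a] = u(a, pi_s*) - ubar(pi_s*),
# rows s in {s0, s1}, columns a in B = {a0, a1}. Invented numbers.
D = [[0.4, -0.1],
     [-0.2, 0.3]]

grid = [i / 1000 for i in range(1001)]

# Primal: min over mixtures alpha in Delta(B) of the worst-case
# (over signals) deviation gain; q is the weight on a0.
primal = min(max(q * row[0] + (1 - q) * row[1] for row in D) for q in grid)

# Dual: max over lambda in Delta* of the smallest (over actions)
# lambda-average deviation gain; l is the weight on s0.
dual = max(min(l * D[0][a] + (1 - l) * D[1][a] for a in range(2)) for l in grid)

# No duality gap (up to grid resolution): both equal the game value 0.1,
# attained at q = 0.4 and l = 0.5.
assert abs(primal - 0.1) < 1e-6 and abs(dual - 0.1) < 1e-6
```

In this instance the common value is strictly positive, so every mixture over $B$ is a profitable deviation at some signal, exactly the hypothesis of Claim 2 with $\varepsilon=0.1$.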
We now use Claims 1 and 2 to complete the proof. Denote by $br(\pi)\subset A$
the decision-maker's set of pure best-replies to belief $\pi$.
By contradiction, assume that there does not exist $\pi_{p}\in \mathrm{co}\, \Pi^{*}$ and $\alpha_{p}\in BR(\pi_{p})$
such that $u(\alpha_{p},\pi_{s}^{*})-\overline{u}(\pi_{s}^{*})\leq 0$
for all $\pi_{s}^{*}\in\Pi^{*}$. Note that $\pi\in \mathrm{co}\, \Pi^{*}$ if
and only if $\pi=\sum_{s}\lambda_{s}\pi_{s}^{*}$ for some $\lambda\in\Delta^{*}$.
Hence, our contradiction hypothesis can be restated as follows: for
each $\lambda\in\Delta^*$, there exists $\varepsilon(\lambda)>0$
such that $\max_{s\in S}[u(\alpha,\pi_{s}^{*})-\overline{u}(\pi_{s}^{*})]\geq\varepsilon(\lambda)$
for each $\alpha\in\Delta(br(\sum_{s}\lambda_{s}\pi_{s}^{*}))=BR(\sum_{s}\lambda_{s}\pi_{s}^{*})$.
Let $\varepsilon:=\min_{\lambda\in\Delta^*}\varepsilon(\lambda)$.
Note that $\varepsilon>0$ because $\varepsilon(\lambda)$ depends only
on the finite set $br(\sum_{s}\lambda_{s}\pi_{s}^{*})$, and there
are finitely many such subsets of $A$.
Define the correspondence $F:\Delta^{*}\rightarrow\Delta^{*}$, with
\[
F(\lambda):=\Big\{\lambda'\in\Delta^*:\min_{\alpha\in BR(\sum_{s}\lambda_{s}\pi_{s}^{*})}\sum_{s}\lambda'_s\Big(u(\alpha,\pi_{s}^{*})-\overline{u}(\pi_{s}^{*})\Big)\geq\varepsilon\Big\}.
\]
We can readily check that this correspondence is convex and
compact valued. We argue below that it is non-empty valued
and lower hemi-continuous. Hence, the correspondence has a fixed point
$\overline{\lambda}\in F(\overline{\lambda})$ by Theorem 15.4 in
\citet{border1990fixed}. Noting that $\sum_{s}\overline{\lambda}(s)u(\alpha,\pi_{s}^{*})=u(\alpha,\sum_{s}\overline{\lambda}(s)\pi_{s}^{*})$,
we find
\[
\min_{\alpha\in BR(\sum_{s}\overline{\lambda}(s)\pi_{s}^{*})}\Big(u(\alpha,\sum_{s}\overline{\lambda}(s)\pi_{s}^{*})-\sum_{s}\overline{\lambda}(s)\overline{u}(\pi_{s}^{*})\Big)\geq\varepsilon
\]
for $\overline{\lambda}\in\Delta^{*}$, contradicting Claim 1 and establishing
the result.
We now show that the correspondence is non-empty valued. Pick any $\lambda\in\Delta^{*}$. The contradiction
hypothesis states that $\max_{s\in S}[u(\alpha,\pi_{s}^{*})-\overline{u}(\pi_{s}^{*})]\geq\varepsilon$
for each $\alpha\in BR(\sum_{s}\lambda_s\pi_{s}^{*})$. Claim
2 then implies that there exists $\hat{\lambda}\in\Delta^{*}$ such that
\[\min_{\alpha\in BR(\sum_{s}\lambda_s\pi_{s}^{*})}\sum_{s}\hat{\lambda}_s(u(\alpha,\pi_{s}^{*})-\overline{u}(\pi_{s}^{*}))\geq\varepsilon,\]
i.e., the correspondence is non-empty valued.
Finally, we prove lower hemi-continuity. Pick an open set $O\subseteq\Delta^{*}$
such that $F(\lambda)\cap O\neq\emptyset$. Since $BR$ is upper hemi-continuous
(by Berge's maximum theorem) and $A$ is finite, there exists a neighborhood
$O'$ of $\lambda$ such that $BR(\sum_{s}\lambda'_s\pi_{s}^{*})\subseteq BR(\sum_{s}\lambda_s\pi_{s}^{*})$
for all $\lambda' \in O'$. Therefore, for all $\lambda' \in O'$,
\[
\min_{\alpha\in BR(\sum_{s}\lambda'_s \pi_{s}^{*})}\sum_{s}\lambda^{''}_s\Big(u(\alpha,\pi_{s}^{*})-\overline{u}(\pi_{s}^{*})\Big)\geq\min_{\alpha\in BR(\sum_{s}\lambda_s \pi_{s}^{*})}\sum_{s}\lambda^{''}_{s}\Big(u(\alpha,\pi_{s}^{*})-\overline{u}(\pi_{s}^{*})\Big)\geq\varepsilon
\]
for any $\lambda^{''} \in F(\lambda)\cap O$ because $BR(\sum_{s}\lambda'_s\pi_{s}^{*})\subseteq BR(\sum_{s}\lambda_s\pi_{s}^{*})$,
i.e., $\lambda^{''}\in F(\lambda')$.
Hence, $F(\lambda')\cap O\neq\emptyset$ for all $\lambda' \in O'$,
which proves the lower hemi-continuity of $F$ (Definition 11.3 in
\citet{border1990fixed}).
\end{proof}
\section{The Decision-maker and Cross-verification}
The previous section showed that the experts benefit from the decision-maker cross-verifying their information.
This section explores whether the decision-maker can also benefit from cross-verification.
We begin with some definitions. Fix a cheap-talk game $\Gamma(\pi^{\circ},u,v)$. We say that
the experts benefit from persuasion if $\text{cav\,}\overline{u}(\pi^{\circ})>\overline{u}(\pi^{\circ})$. Similarly, we say that the decision-maker benefits from cross-verification if there exists an equilibrium of the cheap-talk game, where the decision-maker's payoff exceeds the ex-ante payoff $\max_{a\in A}v(a,\pi^{\circ})$. Notice that if the decision-maker benefits from cross-verification, the experts must reveal some information to the decision-maker.
Define $\hat{A}:=\{a\in A:\exists\pi\in\Delta(\Omega)\text{ s.t. }a\in BR(\pi)\}$
and $\overline{v}(\pi):=\max_{\alpha}v(\alpha,\pi)$ for all $\pi\in\Delta(\Omega)$.
We say that there are \emph{no redundant actions} for the decision-maker if for all non-empty $B\subset\hat{A}$,
there exists $\pi\in\Delta(\Omega)$ such that $\overline{v}(\pi)>\max_{a\in B}v(a,\pi)$.
There are no redundant actions for the experts if there are no two distinct
actions $a$ and $a'$ such that $u(a,\omega)=u(a',\omega)$ for all
$\omega\in\Omega$.
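For concreteness, the experts' non-redundancy condition is easy to check computationally. The following sketch (our illustration, not part of the model) tests whether any two actions have identical state-wise payoffs; the dictionary representation of $u$ is a hypothetical encoding.

```python
def experts_nonredundant(u):
    """u maps each action to the tuple (u(a, w) for w in Omega).
    The experts' no-redundant-actions condition holds iff no two
    distinct actions have identical payoffs in every state."""
    rows = [tuple(payoffs) for payoffs in u.values()]
    return len(rows) == len(set(rows))

# Two actions with distinct payoff profiles: non-redundant
assert experts_nonredundant({"a": (1, 0), "b": (0, 1)})
# A duplicated payoff profile: redundant
assert not experts_nonredundant({"a": (1, 0), "b": (1, 0)})
```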
\begin{rem}
The conditions of non-redundancy are generic. Moreover, the condition of no redundant actions for the decision-maker does not
preclude strictly dominated actions. Two important implications of that condition are as follows: (i) the set $BR^{-1}(a):=\{\pi\in\Delta(\Omega):v(a,\pi)=\overline{v}(\pi)\}$
has full dimension (as a subset of the simplex of dimension $|\Omega|-1$), and (ii) no action other than $a$ is optimal in the
relative interior of $BR^{-1}(a)$, denoted by $\mathrm{int\,}BR^{-1}(a)$.
\end{rem}
Theorem 1 showed that the experts benefit from cross-verification
in games where they benefit from persuasion. The following proposition
further establishes that the decision-maker also benefits from cross-verification in
such games.
\begin{prop}
Assume that there are no redundant actions for the decision-maker in the game
$\Gamma(\pi^{\circ},u,v)$. At almost all priors $\pi^{\circ}$, if the experts benefit from persuasion, then the decision-maker benefits from cross-verification.
\end{prop}
We first illustrate the logic of the proposition with the help of a simple example. There are two states, $\omega_0$ and $\omega_1$, and three actions, $a_0,a_1$ and $a_p$. Throughout the example, probabilities refer to the probability of $\omega_1$. The prior is $\pi^{\circ}=0.45$. The payoffs are illustrated in Figure \ref{Fig:example pure strategy}. The optimal experiment consists in splitting the prior into the posteriors $\pi_{s_{0}}^{*}=0.3$ and $\pi_{s_{1}}^{*}=0.6$. The experts strictly benefit from persuasion. From Theorem 1, there exists a truthful equilibrium, where an expert's payoff is his commitment value. Action $a_p$ is the uniform punishment sustaining the equilibrium. Note that $a_p$ is uniquely optimal at the prior and also optimal at the two posteriors. Consequently, the decision-maker does not benefit from cross-verification at the equilibrium. Yet, we can construct another equilibrium, where the decision-maker benefits from cross-verification. To see this, consider the splitting of the prior into $\pi_{s_0}=0.2$ and $\pi_{s_1}=0.8$. At $\pi_{s_0}$ (resp., $\pi_{s_1}$), the decision-maker plays $a_0$ (resp., $a_1$). To sustain this splitting as an equilibrium, the decision-maker punishes the experts with $a_p$. The decision-maker strictly benefits from this more informative experiment.
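Both splittings in the example must be Bayes-plausible: the posteriors, weighted by the signal probabilities, must average back to the prior. The short sketch below (a hypothetical helper using the numbers from the example) recovers those weights.

```python
def splitting_weights(prior, posteriors):
    """Weights (lam, 1 - lam) on two posteriors p0 < prior < p1 such that
    lam * p0 + (1 - lam) * p1 == prior (Bayes plausibility)."""
    p0, p1 = posteriors
    lam = (p1 - prior) / (p1 - p0)
    return lam, 1.0 - lam

# Experts' optimal splitting: posteriors 0.3 and 0.6 from prior 0.45
l0, l1 = splitting_weights(0.45, (0.3, 0.6))   # approximately (0.5, 0.5)
# More informative splitting that also benefits the decision-maker
m0, m1 = splitting_weights(0.45, (0.2, 0.8))
assert abs(l0 * 0.3 + l1 * 0.6 - 0.45) < 1e-9
assert abs(m0 * 0.2 + m1 * 0.8 - 0.45) < 1e-9
```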
We prove that the logic of the example generalizes to almost all priors. That is, for all priors except for a subset of Lebesgue measure zero, we can always construct an equilibrium of the cheap-talk game, where the decision-maker benefits from cross-verification if the experts benefit from persuasion. More precisely, we prove that the proposition holds at all interior priors at which the decision-maker has at most two best-replies, a generic condition.
The need for non-redundancy is clear. If the decision-maker is indifferent between all his actions, the decision-maker cannot benefit from cross-verification, while the experts can benefit from persuasion. We now turn to the proof.
\begin{figure}
\caption{DM Benefits from Cross-verification.}
\label{fig:example pure strategy DM}
\label{Fig:example pure strategy}
\end{figure}
\begin{proof}[Proof of Proposition 1.]
Consider an optimal splitting $(\lambda^*_s,\pi_s^*)_{s \in S}$ of $\pi^{\circ}$, which induces the value $\cav \overline{u}(\pi^{\circ})$, where $ \cav \overline{u}(\pi^{\circ})> \overline{u}(\pi^{\circ})$. Without loss of generality, assume that $\lambda_{s}^*>0$ for all $s \in S$. Let $\overline{v}(\pi):=\max_{\alpha} v(\alpha,\pi)$ for all $\pi \in \Delta(\Omega)$.
If the decision-maker benefits from the statistical experiment, there is nothing to prove. So, assume that the decision-maker does not benefit from the statistical experiment, i.e., $\sum_{s} \lambda^*_s \overline{v}(\pi^*_s)=\overline{v}(\pi^{\circ})$. We construct another equilibrium at which the decision-maker benefits from cross-verification.
We first claim that for all $a \in BR(\pi^{\circ})$, $a \in BR(\pi)$ for all $\pi \in \co\{\pi^*_s: s \in S\}$. To see this, consider any $a \in BR(\pi^{\circ})$ and observe that
\begin{eqnarray*}
\sum_{s} \lambda^*_s\overline{v}(\pi^*_s)=\overline{v}(\pi^{\circ})=v(a,\pi^{\circ})= v\left(a,\sum_{s}\lambda^*_s \pi_s^*\right) = \sum_{s}\lambda^*_s v(a,\pi_s^*).
\end{eqnarray*}
It follows that
\begin{eqnarray*}
\sum_{s}\underbrace{\lambda^*_s}_{>0} (\underbrace{\overline{v}(\pi^*_s)-v(a,\pi_s^*)}_{\geq 0})=0
\end{eqnarray*}
If there exists $s$ such that $\overline{v}(\pi_s^*) > v(a,\pi^*_s)$, we have a contradiction. Hence, $a \in BR(\pi_s^*)$ for all $s$ and, consequently, $a \in BR(\pi)$ for all $\pi \in \co\{\pi^*_s: s \in S\}$.
From the definition of $\overline{u}$, we have that $u(a,\pi_s^*) \leq \overline{u}(\pi_s^*)$ for all $s$, for all $a \in BR(\pi^{\circ})$, since
$BR(\pi^{\circ}) \subseteq BR(\pi_s^*)$ for all $s$. We now argue that for all $a \in BR(\pi^{\circ})$, there exists $s_{a} \in S$ such that
$u(a,\pi_{s_{a}}^*) < \overline{u}(\pi_{s_{a}}^*)$. Choose any $a \in BR(\pi^{\circ})$. To the contrary, assume that $u(a,\pi_s^*) = \overline{u}(\pi_s^*)$ for all $s$. We then have
\begin{eqnarray*}
\cav \overline{u}(\pi^{\circ})= \sum_{s} \lambda_s^* \overline{u}(\pi_s^*)= \sum_{s} \lambda_s^* u(a,\pi_s^*)
= u(a,\pi^{\circ}) \leq \overline{u}(\pi^{\circ}) \leq \cav \overline{u}(\pi^{\circ}),
\end{eqnarray*}
a contradiction with the experts benefiting from persuasion.
To sum up, we have (i) $BR(\pi^{\circ}) \subseteq BR(\pi)$ for all $\pi \in \co\{\pi^*_s: s \in S\}$, and (ii) for each $a \in BR(\pi^{\circ})$, there exists $s_{a}$ such that $u(a^*_{s_{a}},\pi_{s_{a}}^*) > u(a,\pi_{s_{a}}^*)$ with $a^*_{s_{a}} \in BR(\pi_{s_{a}}^*)$ satisfying $u(a^*_{s_{a}},\pi_{s_{a}}^*)=\overline{u}(\pi_{s_{a}}^*)$.
For each $a \in BR(\pi^{\circ})$, consider the open ball $\mathcal{O} =\{\pi \in \Delta(\Omega): ||\pi-\pi^*_{s_{a}}|| < \varepsilon\}$ such that $u(a,\pi) < u(a^*_{s_{a}},\pi)$ for all $\pi$ in the open ball. Since $u$ is continuous in $\pi$ and $u(a,\pi_{s_{a}}^*) < u(a^*_{s_{a}},\pi_{s_{a}}^*)$, such an open ball exists.
We claim that $\mathcal{O}$ intersects the relative interior of $BR^{-1}(a^*_{s_{a}})$. To see this, note that $\mathcal{O} \cap BR^{-1}(a^*_{s_{a}}) \neq \emptyset$ since $\pi^*_{s_{a}}$ is an element of both $\mathcal{O}$ and $BR^{-1}(a^*_{s_{a}})$. Moreover, it follows from the non-redundancy of $A$ that $\pi^*_{s_{a}}$ is not in the relative interior of $BR^{-1}(a^*_{s_{a}})$ since any $a \in BR(\pi^{\circ})$ is also optimal at $\pi^*_{s_{a}}$. Since the relative interior of $BR^{-1}(a^*_{s_{a}})$ is non-empty, there exists $\pi^{**}$ in the relative interior such that the half-open line segment $[\pi^{**},\pi^*_{s_{a}})$ is contained in the relative interior. (See Theorem 2.1.3 and Lemma 2.1.6 in Hiriart-Urruty and Lemar\'echal.) Therefore, there exists $\overline{\pi}_a$ in the intersection of the relative interior of $BR^{-1}(a^*_{s_{a}})$ and $\mathcal{O}$, i.e., such that $u(a,\overline{\pi}_a)< u(a_{s_{a}}^*,\overline{\pi}_a)=\overline{u}(\overline{\pi}_a)$. Note that $v(a_{s_{a}}^*,\overline{\pi}_a) > v(a,\overline{\pi}_a)$ since $a_{s_{a}}^*$ is uniquely optimal at $\overline{\pi}_a$. In other words, there is an element of $BR(\pi^{\circ})$, namely $a$, which is not an element of $BR(\overline{\pi}_a)$.
The last step consists in showing that there exists $a \in BR(\pi^{\circ})$ and $\underline{\pi}_a \in BR^{-1}(a)$ such that the open segment $(\underline{\pi}_a,\overline{\pi}_a)$ includes $\pi^{\circ}$. Indeed, if such an open segment exists, we have a splitting $(\underline{\pi}_a,\overline{\pi}_a)$ of $\pi^{\circ}$ such that $\overline{u}(\underline{\pi}_a) \geq u(a,\underline{\pi}_a)$, $\overline{u}(\overline{\pi}_a)=u(a_{s_{a}}^*,\overline{\pi}_a) > u(a,\overline{\pi}_a)$. This splitting can be supported as a truthful equilibrium (with $a$ as the punishment at belief $\pi^{\circ}$). Moreover, since $v(a_{s_{a}}^*,\overline{\pi}_a) > v(a,\overline{\pi}_a)$, the decision-maker strictly benefits, the desired conclusion.
Finally, suppose that $\pi^{\circ}$ is in the interior of the simplex. If $BR(\pi^{\circ})=\{a\}$, then $\pi^{\circ}$ is in the relative interior of $BR^{-1}(a)$. Thus, we can trivially find a segment with the required property.
If $BR(\pi^{\circ})=\{a,b\}$ and $s_a=s_b$, then the same arguments apply, since the open segment will intersect either
$BR^{-1}(a)$ or $BR^{-1}(b)$. If $s_a \neq s_b$, choose $\overline{\pi}_{s_a}$ such that $b$ is uniquely optimal at $\overline{\pi}_{s_a}$. Such $\overline{\pi}_{s_a}$ exists since $v(b,\pi^*_{s_a})= \max_{a' \in BR(\pi^*_{s_a})}v(a',\pi^*_{s_a})$ (otherwise we would have $s_a=s_b$). As before, the open segment intersects either
$BR^{-1}(a)$ or $BR^{-1}(b)$. However, it cannot be $BR^{-1}(b)$: if it were, $b$ would be uniquely optimal at $\overline{\pi}_{s_a}$ and optimal at $\pi^{\circ}$ and $\underline{\pi}_{s_a}$, which is impossible since $BR^{-1}(b)$ is convex.
Since the set of interior priors with at most two best-replies is generic, the proof is complete.\end{proof}
Proposition 1 does not generalize to all priors. For a counter-example, consider Figure \ref{fig:prop1}. There are three states, $\omega_0$, $\omega_1$ and $\omega_2$, and two actions, $a$ and $b$. The action $a$ (resp., $b$) is optimal in the left triangle marked ``$a$'' (resp., in the right triangle marked ``$b$''). At the prior $\pi^{\circ}$, the action $a$ is the unique best-reply of the decision-maker. Assume that $u(b,\omega_1) >u(a,\omega_1)$. Thus, if the experts truthfully reveal the state, they benefit from persuasion, while the decision-maker does not.\footnote{If there are two states, Proposition 1 generalizes to all interior priors. In this case, non-redundancy of the decision-maker's payoff implies that the decision-maker has at most two best-replies at each belief, where we know that Proposition 1 holds. In general, however, we do not know whether the proposition generalizes to all interior priors.}
\begin{figure}
\caption{A counter-example}
\label{fig:prop1}
\end{figure}
Proposition 1 proved that the decision-maker benefits from cross-verification
whenever the experts benefit from persuasion. We now show a partial converse, that is,
the decision-maker benefits from cross-verification only when the experts benefit from persuasion.
\begin{prop}
Assume that there are no redundant actions for the experts and the
decision-maker in the game $\Gamma(\pi^{\circ},u,v)$. If $\overline{u}$ is a
concave function, then the decision-maker does not benefit from cross-verification. That is, in all equilibria of $\Gamma(\pi^{\circ},u,v)$,
the decision-maker's payoff is $\overline{v}(\pi^{\circ})$.
\end{prop}
To understand Proposition 2, assume that the experts and the decision-maker have opposing preferences, that is, $u=-v$. In this case, what is best for the decision-maker is worst for the experts, and therefore, $\overline{v}=-\overline{u}$. Moreover, if either of the experts, say expert 1, chooses an uninformative experiment, an expert's payoff is $\overline{u}(\pi^{\circ})$ in all equilibria of the ensuing game. This is because if expert 2's experiment produces two signals $s$ and $s'$ such that the set of best-replies at $\pi_s$ differs from the set of best-replies at $\pi_{s'}$, then expert 2 has an incentive to misreport one of the two signals, if not both. The experts cannot credibly communicate any information. Therefore, no expert can obtain less than $\overline{u}(\pi^{\circ})$ in equilibrium. Experts cannot obtain more than $\overline{u}(\pi^{\circ})$ either. Indeed, for every on-path posterior $\pi$, the decision-maker chooses a best-reply in equilibrium, hence an expert's payoff is minimized at $\pi$, i.e., an expert's payoff is $u^{\min}(\pi):=\min_{a}u(a,\pi)$. The result then follows from the concavity of $u^{\min}$. Proposition 2 does not require opposing preferences; the logic outlined above extends to all games where $\overline{u}$ is concave.
To further illustrate Proposition 2, consider Figure \ref{Fig: no communication example.}.
For the decision-maker to benefit from cross-verification, the experts would need to choose an experiment, which induces the decision-maker to play different actions after receiving different signals. However, we cannot sustain such a choice as an equilibrium. An expert would always have an incentive to misreport the realized signal. This is because any action other than the one chosen by the decision-maker improves an expert's payoff, i.e., there is no uniform punishment.
The need for the non-redundancy of the experts' actions is again clear. If the experts are totally indifferent, they cannot benefit from persuasion but can provide the decision-maker with perfectly informative signals. It remains to prove Proposition 2. We do so through a series of lemmata. The following lemma shows that the conflict of interest between the experts and the decision-maker is maximal when the experts cannot benefit from persuasion; that is, the decision-maker's best-replies at belief $\pi$ minimize the experts' expected payoff. Recall that $\hat{A}$ is the set of actions that are a best response for the decision-maker to some belief.
\begin{figure}
\caption{No Benefit from Cross-verification or Persuasion.}
\label{Fig: no communication example.}
\end{figure}
\begin{lem}
\label{L:conflicteverywhere}For every $\pi\in\Delta(\Omega)$, $BR(\pi)=\arg\min_{\alpha'\in\Delta(\hat{A})}u(\alpha',\pi)$.
\end{lem}
\begin{proof}[Proof of Lemma \ref{L:conflicteverywhere}] We start by proving the following claim.
\begin{claim}
\label{L:conflictinterest}$a\in \hat{A}$ and $\pi\in \mathrm{int\,} BR^{-1}(a)$ implies
$\{a\}=\arg\min_{\alpha'\in\Delta(\hat{A})}u(\alpha',\pi)$.
\end{claim}
\begin{proof}[Proof of Claim \ref{L:conflictinterest}] Fix $a\in \hat{A}$ and $\pi\in \mathrm{int\,} BR^{-1}(a)$. We first argue that there
does not exist $a' \in \hat{A}$ such that $u(a',\pi)<u(a,\pi)$. To the contrary, suppose such $a'$ exists. Pick an arbitrary $\pi'\in \mathrm{int\,} BR^{-1}(a')$. There
exists $\pi''\in \mathrm{int\,} BR^{-1}(a')$ and $\lambda\in(0,1)$ such that
$\pi''=\lambda\pi+(1-\lambda)\pi'$. We obtain
\begin{eqnarray*}
u(a',\pi'') & = & \overline{u}(\pi'') \\
&\geq & \lambda \overline{u}(\pi)+ (1-\lambda) \overline{u}(\pi')\\
& = & \lambda u(a,\pi)+(1-\lambda)u(a',\pi')\\
&> & \lambda u(a',\pi)+(1-\lambda)u(a',\pi')\\
&=& u(a',\pi''),
\end{eqnarray*}
where the first inequality follows from the concavity of $\overline{u}$, the strict inequality from $u(a',\pi)<u(a,\pi)$, and the last equality from the linearity of $u(a',\cdot)$; the chain yields $u(a',\pi'')>u(a',\pi'')$, the desired contradiction.
We now argue that there does not exist $a' \in \hat{A}$ such that $u(a',\pi)=u(a,\pi)$. From the above, for all $\pi_n \in \mathrm{int\,} BR^{-1}(a)$,
$u(a',\pi_n)\geq u(a,\pi_n)$. Consider any convex combination $(\lambda_n,\pi_n)_n$ satisfying $\sum_n \lambda_n \pi_n =\pi$, $\pi_n \in \mathrm{int\,} BR^{-1}(a)$ for all $n$, $\lambda_n>0$ for all $n$, and the $\pi_n$ being linearly independent. Such a convex combination exists since $BR^{-1}(a)$ has full dimension. If $u(a',\pi)=u(a,\pi)$, then
\[ u(a',\pi)= \sum_{n} \lambda_n u(a',\pi_n) \geq \sum_n \lambda_n u(a,\pi_n)=u(a,\pi) = u(a',\pi), \]
i.e., $u(a',\pi_n)=u(a,\pi_n)$ for all $n$. Choosing $|\Omega|$ such linearly independent beliefs (possible since $BR^{-1}(a)$ has full dimension), this forces $u(a',\omega)=u(a,\omega)$ for all $\omega\in\Omega$, a
contradiction with the condition of no redundant actions for the experts. Therefore, for all $a' \neq a$, $u(a',\pi)>u(a,\pi)$, which completes the proof of the claim.
\end{proof}
From Claim \ref{L:conflictinterest}, the statement is true for all $\pi$ such that
$\pi\in \mathrm{int\,} BR^{-1}(a)$ for some $a \in \hat{A}$.
Since $BR$ and $\arg\min_{\alpha'\in \hat{A}}u(\alpha',\pi)$ are upper
hemi-continuous correspondences, which coincide almost everywhere
(in Lebesgue measure), they coincide everywhere.\end{proof}
We now derive an immediate implication of Lemma \ref{L:conflicteverywhere}. We first introduce some additional notation. Recall that following the choice of experiments $(\sigma_1,\sigma_2)$, we have a proper sub-game. We are interested in analyzing the play in these sub-games. To ease notation, we drop the dependence on
$(\sigma_1,\sigma_2)$ and write $\pi(m_{1},m_{2})\in\Delta(\Omega)$ for the decision-maker's belief after observing the messages $(m_1,m_2)$. Similarly, we write $\alpha(m_{1},m_{2})$ for the decision-maker's equilibrium
reply. Notice that $\alpha(m_{1},m_{2})\in \Delta(\hat{A})$ because this action is a best response to belief $\pi(m_{1},m_{2})$. Finally, let $\mathbb{P}$
denote the probability distribution over signals, messages and actions induced by the prior and the strategy profile, conditional on the experiments
$(\sigma_1,\sigma_2)$. At an equilibrium, sequential rationality requires the decision-maker to choose a best-reply to his belief. Fix an equilibrium, an on-path profile of messages $(m_1,m_2)$, and its associated belief $\pi(m_1,m_2)$. Since all best-replies of the decision-maker to $\pi(m_1,m_2)$ minimize the experts' payoffs, no expert should be able to induce the decision-maker to choose an action outside $BR(\pi(m_1,m_2))$ by changing his message to some $m'_1$.
\begin{lem}
\label{independencefrommessages}If $\mathbb{P}(m_{i},m_{j})>0$,
then for all $m_{i}'$, $\alpha(m_{i}',m_{j})\in BR(\pi(m_{i},m_{j}))$.
\end{lem}
\begin{proof}[Proof of Lemma \ref{independencefrommessages}]
Without loss of generality, let $i=1$, $j=2$. The proof is by contradiction. Assume that there exist
$m_{1},m_{1}',m_{2}'$ with $\mathbb{P}(m_1,m_2')>0$ such that $\alpha(m_{1}',m_{2}') \notin BR(\pi(m_{1},m_{2}'))$.
From Lemma \ref{L:conflicteverywhere}, $u(\alpha(m_{1}',m_{2}'),\pi(m_{1},m_{2}'))>u(\alpha(m_{1},m_{2}'),\pi(m_{1},m_{2}'))$.
The equilibrium payoff to expert 1 is
\[
\sum_{(\tilde{m}_{1},\tilde{m}_{2})}\mathbb{P}(\tilde{m}_{1},\tilde{m}_{2})u(\alpha(\tilde{m}_{1},\tilde{m}_{2}),\pi(\tilde{m}_{1},\tilde{m}_{2})).
\]
If expert 1 deviates by always sending the message $m_1'$, his expected payoff is:
\[
\sum_{(\tilde{m}_{1},\tilde{m}_{2})}\mathbb{P}(\tilde{m}_{1},\tilde{m}_{2})u(\alpha(m'_{1},\tilde{m}_{2}),\pi(\tilde{m}_{1},\tilde{m}_{2})).
\]
We now argue that the deviation is profitable, the required contradiction.
From Lemma \ref{L:conflicteverywhere}, we have that $u(\alpha(\tilde{m}_{1},\tilde{m}_{2}),\pi(\tilde{m}_{1},\tilde{m}_{2}))\leq u(\alpha(m'_{1},\tilde{m}_{2}),\pi(\tilde{m}_{1},\tilde{m}_{2}))$
for all $(\tilde{m}_{1},\tilde{m}_{2})$. Moreover, the inequality is strict at $(m_1,m_2')$, where
$\mathbb{P}(m_1,m_2')>0$. Thus, the deviation is profitable.
\end{proof}
The next lemma shows that if either expert chooses an uninformative experiment, then the experts' and the decision-maker's payoffs in the ensuing equilibrium equal their payoffs at the prior belief.
\begin{lem}
\label{L:uninformativeisguaranteed} Let $(\sigma_1,\sigma_2)$ be a profile of experiments.
If either $\sigma_{1}$ or $\sigma_2$ is an uninformative
experiment, then the experts' equilibrium payoff is $\overline{u}(\pi^{\circ})$ and the decision-maker's equilibrium payoff is
$\overline{v}(\pi^{\circ})$ in the ensuing sub-game.
\end{lem}
\begin{proof}[Proof of Lemma \ref{L:uninformativeisguaranteed}]
Without loss of generality, assume that $\sigma_2$ is uninformative. Since the experiments are observed by the decision-maker,
this implies that $\pi(m_1,m_2)$ is independent of $m_2$. (Recall that we require the beliefs to be consistent with the experiments.)
To ease the notation, we drop the dependence on $m_2$.
Together with Lemma \ref{independencefrommessages}, this
implies that for all $(m_1,m_2)$ such that $\mathbb{P}(m_{1},m_{2})>0$, $\alpha(m'_{1},m_{2})\in BR(\pi(m_{1}))$
for all $m'_{1}$. That is, $\alpha(m'_{1},m_{2})$ is a best-reply to all posterior beliefs $\pi(m_1)$. Note that since
$\mathbb{P}(m_{1},m_{2})>0$, the message $m_1$ has strictly positive probability. It follows that $\alpha(m'_{1},m_{2})$ is a best-reply to $\pi^{\circ}$ (as the prior is a convex combination of the posteriors). Since this is true for all $(m'_1,m_2)$, the decision-maker's payoff is $\overline{v}(\pi^{\circ})$.
Finally, since Lemma \ref{L:conflicteverywhere}
states that the experts are indifferent among all best-replies of the decision-maker, an expert's payoff is $\overline{u}(\pi^{\circ})$.\end{proof}
We now conclude the proof.
\begin{lem}\label{eq-payoff}
In any equilibrium of the cheap-talk game, the experts' payoff is $\overline{u}(\pi^{\circ})$, and the decision-maker's payoff is
$\overline{v}(\pi^{\circ})$.
\end{lem}
\begin{proof}[Proof of Lemma \ref{eq-payoff}]
Fix any equilibrium of the cheap-talk game. From
Lemma \ref{L:uninformativeisguaranteed}, the payoff to any expert must be at least $\overline{u}(\pi^{\circ})$. We now argue that it cannot be higher. If $(\sigma^*_1,\sigma^*_2)$ are the experiments chosen at the first stage, then in the ensuing sub-game, an expert's payoff is:
\begin{align*}
\sum_{(m_{1},m_{2})}\mathbb{P}(m_{1},m_{2})u(\alpha(m_{1},m_{2}),\pi(m_{1},m_{2})) & =\sum_{(m_{1},m_{2})}\mathbb{P}(m_{1},m_{2})\min_{a\in A}u(a,\pi(m_{1},m_{2}))\\
& \leq\min_{a\in A}u\left(a,\sum_{(m_{1},m_{2})}\mathbb{P}(m_{1},m_{2})\pi(m_{1},m_{2})\right)\\
& =\min_{a\in A}u\left(a,\pi^{\circ}\right)=\bar{u}(\pi^{\circ}).
\end{align*}
(Recall that $\mathbb{P}$, $\alpha$ and $\pi$ depend on $(\sigma_1^*,\sigma_2^*)$, but to ease notation, we do not explicitly write the dependence.)
Finally, we argue that the decision-maker cannot get a payoff higher than $\overline{v}(\pi^{\circ})$ either. Indeed, for the decision-maker to obtain a higher payoff, there must exist an action
$a\in BR(\pi^{\circ})$ and a message profile $(m_{1},m_{2})$
such that $\mathbb{P}(m_1,m_2)>0$ and $a\notin BR(\pi(m_{1},m_{2}))$. This, however, would imply
that an expert's equilibrium payoff is strictly less than $u(a,\pi^{\circ})$, a contradiction with an expert's equilibrium payoff being equal to
$\bar{u}(\pi^{\circ})=\min_{a'\in \hat{A}}u\left(a',\pi^{\circ}\right)$.
The latter assertion follows from Lemma \ref{independencefrommessages}, which states that
$u(a,\pi(m_{1},m_{2}))>u(\alpha(m_{1},m_{2}),\pi(m_{1},m_{2}))$
and $u(a,\pi(m_{1}',m_{2}'))\geq u(\alpha(m_{1}',m_{2}'),\pi(m_{1}',m_{2}'))$
for all pairs of messages $(m_{1}',m_{2}')$ with $\mathbb{P}(m_1',m_2')>0$.
\end{proof}
\section{Conclusion}
In this paper, we studied the effects of cross-verification on the decision-maker's and experts' payoffs. Clearly, cross-verification is not the sole reason for soliciting advice from multiple experts. Consulting a diverse set of experts with different opinions, specializations, and preferences can provide a decision-maker with insights about the merits of different aspects of an issue. In fact, a decision-maker may be able to perfectly learn a multidimensional state by consulting experts about different dimensions. However, consulting experts that have information about different dimensions of a decision reduces the scope for cross-verification since cross-verification is most effective when experts' information is highly correlated. Moreover, as we demonstrated in this paper, the experts have an incentive to facilitate cross-verification by acquiring correlated information. This points to an interesting tension that can inform future research on committee design.
\end{document} |
\begin{document}
\begin{abstract}
Many security and other real-world situations are dynamic in nature and can be modelled as strictly competitive (or zero-sum) dynamic games.
In these domains, agents perform actions to affect the environment and receive observations -- possibly imperfect -- about the situation and the effects of the opponent's actions.
Moreover, there is no limitation on the total number of actions an agent can perform --- that is, there is no fixed horizon.
These settings can be modelled as partially observable stochastic games (POSGs).
However, solving \textit{general} POSGs is computationally intractable, so we focus on a broad subclass of POSGs called \emph{one-sided POSGs}.
In these games, only one agent has imperfect information while their opponent has full knowledge of the current situation.
We provide a full picture for solving one-sided POSGs: we
(1) give a theoretical analysis of one-sided POSGs and their value functions,
(2) show that a variant of a value-iteration algorithm converges in this setting,
(3) adapt the heuristic search value-iteration algorithm for solving one-sided POSGs,
(4) describe how to use approximate value functions to derive strategies in the game, and
(5) demonstrate that our algorithm can solve one-sided POSGs of non-trivial sizes and analyze the scalability of our algorithm in three different domains: pursuit-evasion, patrolling, and search games.
\end{abstract}
\begin{keyword}
zero-sum partially observable stochastic games \sep one-sided information \sep value iteration \sep heuristic search value iteration
\end{keyword}
\title{Solving Zero-Sum One-Sided Partially Observable Stochastic Games}
\section{Introduction}
Non-cooperative game theory models the interaction of multiple agents in a joint environment.
Rational agents perform actions in the environment to achieve their own, often conflicting goals.
The interaction of agents is typically very complex in real-world dynamic scenarios ---
the agents can perform sequences of multiple actions while only having partial information about the actions of others and the events in the environment.
Finding (approximately) optimal strategies for agents in dynamic environments with imperfect information is a long-standing problem in Artificial Intelligence. Its applications range from recreational games, such as poker~\cite{moravcik2017-deepstack,brown2018-libratus}, to security applications such as patrolling~\cite{Basilico2009,vorobeychik2014-icaps,basilico2016} and pursuit-evasion games~\cite{isler2005-peg,isler2008-os-peg,amigoni2012-peg}.
For tackling this problem, game theory can provide appropriate mathematical models and algorithms for computing (approximate) optimal strategies according to some game-theoretic solution concept.
Among all existing game-theoretic models suitable for modelling dynamic interaction with imperfect information, \emph{partially observable stochastic games (POSGs)} are one of the most general ones.
POSGs model situations where all players have only partial information about the state of the environment, agents perform actions and receive observations, and the length of the interaction among agents is not a priori bounded.
As such, the expressive possibilities of POSGs are broad. In particular, they can model all considered security scenarios as well as recreational games.
Despite having high expressive power, POSGs have limited applications due to the complexity of computing (approximate) optimal strategies.
There are two main reasons for this.
First, imperfect information poses challenges for sequential decision-making even in the single-agent case -- partially observable Markov decision processes (POMDPs).
Theoretical results show that various exact and approximate problems in POMDPs are undecidable~\cite{madani1999-undecidability}.
Focused research effort has yielded several approximate algorithms with convergence guarantees~\cite{smith2004-hsvi,kurniawati2008sarsop} scalable even to large POMDPs~\cite{silver2010monte}.
The main step when solving a POMDP is to reason about \emph{belief states} -- probability distributions over possible states.
Note that an agent can easily deduce a belief state in a POMDP since the environment changes only as a result of the agent's actions or because of the environment's stochasticity (which is known).
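As a concrete illustration (a generic sketch, not tied to any particular model in the paper), the belief-state deduction is the standard Bayes filter $b'(s') \propto \sum_s b(s)\,T(s'\mid s,a)\,O(o\mid s',a)$; all names and numbers below are hypothetical.

```python
def belief_update(b, a, o, T, O):
    """Bayes filter for POMDP beliefs: b'(s') is proportional to
    sum_s b(s) * T(s' | s, a) * O(o | s', a)."""
    unnorm = {s2: O[s2][a][o] * sum(b[s] * T[s][a][s2] for s in b)
              for s2 in b}
    z = sum(unnorm.values())  # probability of observing o after action a
    return {s2: p / z for s2, p in unnorm.items()}

# Toy two-state example (all numbers hypothetical)
T = {"x": {"go": {"x": 0.9, "y": 0.1}}, "y": {"go": {"x": 0.2, "y": 0.8}}}
O = {"x": {"go": {"hot": 0.8, "cold": 0.2}}, "y": {"go": {"hot": 0.3, "cold": 0.7}}}
b1 = belief_update({"x": 0.5, "y": 0.5}, "go", "hot", T, O)
assert abs(sum(b1.values()) - 1.0) < 1e-9
assert b1["x"] > b1["y"]  # observation "hot" is more likely in state x
```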
In POSGs, however, the presence of another agent(s) changing the environment generates another level of complexity.
Suppose all players have partial information about the environment. In that case, each player needs to reason not only about their belief over environment states, but also about opponents' beliefs, their beliefs over beliefs, and so on.
This issue is called the problem with \emph{nested beliefs}~\cite{macdermed2013-magii} and cannot be avoided in general unless we pose additional assumptions on the game model.
This is primarily because, in general POSGs, the choice of the optimal action (strategy) of a player depends on these nested beliefs.
To avoid this issue, we will focus on a subclass of POSGs that does not suffer from the problem of nested beliefs while still being expressive enough to contain many existing real-world games and scenarios.
One such subclass of POSGs consists of two-player concurrent-move games where one player is assumed to have full knowledge of the environment and only the other player has partial information.
In this case, the player with partial information (player 1 from now on) does not have to reconstruct the belief of the opponent (player 2) since player 2 always has full information about the true state of the environment.
Similarly, player 2 can always reconstruct the belief of player 1 by using the full information about the environment, which includes information about the action-history of player 1.
The game is played over stages where both players independently choose their next action (i.e., although player 2 has full knowledge of the current history and state, he does not know the action player 1 is about to play in the current stage).
The current state of the game, together with the joint action of the players, determines the next state and the observation generated for player 1.
We term this class of games \emph{one-sided POSGs}.
While this class of games has appeared before in the literature (e.g., in~\cite{sorin2003-stochastic-incomplete} as \emph{Level-1 stochastic games}, or in~\cite{chatterjee2005semiperfect} as \emph{semiperfect-information} stochastic games\footnote{In this work, however, the authors assumed that the game is turn-taking. In contrast, we consider a more general case where, at each timestep, both players simultaneously choose the next action to be played.}), we are the first to focus on designing a practical algorithm for computing (approximately) optimal strategies.
Despite the seemingly strong assumption of perfect information for player 2, the studied class of one-sided POSGs has broad application possibilities, especially in security.
In particular, this model subsumes patrolling games~\cite{Basilico2009,vorobeychik2014-icaps,basilico2016} or pursuit-evasion games~\cite{isler2005-peg,isler2008-os-peg,amigoni2012-peg}.
In many security-related problems, the defender is protecting an area (or a computer network) against the attacker that wants to attack it (e.g., by intruding into the area or infiltrating the network).
The defender does not have full information about the environment since he does not know which actions the attacker performed (e.g., which hosts in the computer network have been compromised by the attacker).
At the same time, it is difficult for the defender to exactly know what information the attacker has since the attacker can infiltrate the system or use insider information, and can thus have substantial knowledge about the environment.
Hence, as the worst-case assumption, the defender can assume that the attacker has full knowledge about the environment.
From this perspective, one-sided POSGs can be used to compute robust defense strategies.
We restrict to the strictly competitive (or zero-sum) setting. In this case, the defender has a guaranteed expected outcome when using such robust strategies even against attackers with less information.
Finally, we use the standard assumption that payoffs are computed as discounted sums of immediate rewards. However, our approach could be generalized to the non-discounted version to some extent (by proceeding similarly to \cite{horak2018-ijcai}).
Our main contribution is the description of the first practical algorithm for computing an (approximately) optimal solution of two-player zero-sum one-sided POSGs with discounted rewards.\footnote{Parts of this work appeared in conference publications~\cite{horak2017-aaai}. This submission significantly extends the published works by (1) containing all the proofs and all the technical details regarding the algorithm, (2) fully describing the procedure for extracting strategies computed by the algorithm, and (3) presenting new experiments with an improved implementation of the algorithm. Finally, we acknowledge that a modification of the presented algorithm has been provided in \cite{horak2019-ijcai,horak2019-cose}, where a compact representation of the belief space was proposed for a specific cybersecurity domain; it demonstrates that the proposed algorithm can scale even further, however at the cost of losing theoretical guarantees.}
The contribution is threefold: (1) a theoretical contribution proving that our proposed algorithm has guarantees for approximating the value of any one-sided POSG, (2) a description of how to extract strategies from our algorithm and use them to play the game, and (3) an implementation of the algorithm and its experimental evaluation on a set of games.
The theoretical work is a direct extension of the theory behind the single-player case (i.e., POMDPs).
In POMDPs, an optimal strategy in every step depends on the player's belief over environment states and on the outcomes achievable in each state. In other words, we have a \emph{value function} which takes a belief $b$ and returns the optimal expected value that can be achieved under $b$ (by following an optimal strategy in both the current decision point and those encountered afterwards).
\begin{figure}
\caption{Outline of the theoretical results presented in the paper.}
\label{fig:map}
\end{figure}
Figure~\ref{fig:map} visualizes the outline and key results provided in each section of the paper.
After reviewing related work (Section~\ref{sec:rw}) we state relevant technical background for POMDPs~(Section~\ref{sec:pomdps}).
We then formally define one-sided POSG (Section~\ref{sec:osposg:model}) and restate some known results \cite{sorin2003-stochastic-incomplete} regarding the characteristics of the value function (convexity) and show that the value function can be computed using a recursive formula (Section~\ref{sec:osposg:value}).
We then observe that each strategy can be decomposed into the distribution that determines the very next action and the strategy for the remainder of the game and that this structure is mirrored on the level of value functions (Section~\ref{sec:osposg:composing}).
With these tools, we derive a Bellman equation for one-sided POSGs and prove that the iterative application of the corresponding operator $H$ is guaranteed to converge to the optimal value function $V^*$ (Section~\ref{sec:osposg:bellman}).
To get a baseline method of computing $V^*$, we show that the operator $H$ can be computed using a linear program (Section~\ref{sec:osposg:vi}).
To get a method with better scaling properties, we design novel approximate algorithms that aim at approximating $V^*$ (Section~\ref{sec:osposg:hsvi}).
Namely, we follow the heuristic search value iteration algorithm (HSVI)~\cite{smith2004-hsvi,smith2005-hsvi} that uses two functions to approximate the value function, an upper bound function and a lower bound function.
By decreasing the gap between these approximations, the algorithm approximates the optimal expected value for relevant belief points.
We show that a similar approach can also work in one-sided POSGs and that, while the overall idea remains the same, most of the technical parts of the algorithm have to be adapted for one-sided POSGs.
We identify and address these technical challenges in order to formally prove that our HSVI algorithm for one-sided POSG converges to optimal strategies.
As defined, the HSVI algorithm primarily approximates optimal value for a given game.
To extract strategies that reach computed values in expectation, we provide an additional online algorithm (based on ideas from online game-playing algorithms with imperfect information but finite horizon~\cite{moravcik2017-deepstack}) that generates actions from (approximate) optimal strategies according to the computed approximated value functions (Section~\ref{sec:osposg:playing}).
Finally, we experimentally evaluate the proposed algorithm on a set of different games, show scalability for these games, and provide deep insights into the performance for each specific part of the algorithm~(Section~\ref{sec:osposg:experiments}).
We demonstrate that our implementation of the algorithm is capable of solving non-trivial games with as many as 4\,500 states and 120\,000 transitions.
\section{Related Work}\label{sec:rw}
General, domain-independent algorithms for solving\footnote{Or even approximating an optimal solution to a given error.} (subclasses of) partially observable stochastic games with infinite horizon are not commonly studied.
As argued in the introduction, the problem of nested beliefs is one of the reasons.
One way of tackling this issue is by using history-dependent strategies.
One of the few such approaches is the bottom-up dynamic programming for constructing relevant finite-horizon policy trees for individual players while pruning out dominated strategies~\cite{hansen2004dynamic,kumar2009dynamic}.
However, while history-dependent policies can cope with the necessity of considering nested beliefs, the number of strategies is doubly exponential in the horizon of the game (i.e., the number of turns in the game), which greatly limits the scalability and applicability of the algorithm.
We take another approach and restrict to subclasses of POSGs, where the problem of nested beliefs does not appear.
Besides the works focused directly on one-sided POSGs, there are other works that consider specific subclasses of POSGs.
For example, Ghosh et al.~\citeyear{Ghosh2004} study zero-sum POSGs with public actions and observations.
The authors show that the game has a well-defined value and present an algorithm that exploits the transformation of such a model into a game with complete information.
In one-sided POSGs, however, the actions are not publicly observable since the imperfectly informed player lacks information about their opponent's actions.
Compared to existing works studying one-sided POSG~\cite{sorin2003-stochastic-incomplete,chatterjee2005semiperfect}, our work is the first to provide a practical algorithm that can be directly used to solve games of non-trivial sizes.
Our algorithm focuses on the \emph{offline problem} of (approximately) solving a given one-sided POSG.
However, a part of our contribution is the extraction of the strategy that reaches the computed value.
On the other hand, \emph{online algorithms} focus on computing strategies that will be used while playing the game.
For a long time, no online algorithms for dynamic imperfect-information games provided guarantees on the (near-)optimality of the resulting strategies.
While several new algorithms with theoretical guarantees emerged~\cite{lisy2015online,moravcik2017-deepstack,sustr2019monte} in recent years, they only considered limited-horizon games and produced history-dependent strategies.
Using such online algorithms for POSGs is thus only possible with very limited lookahead or when using a heuristic evaluation function.
Our approach is fully domain-independent and avoids considering complete histories and the use of evaluation functions, while nevertheless being able to consider strategies with a horizon of 100 turns or more.
Finally, note that the recent work \cite{sustr2020sound} has shown that online algorithms which seem to be consistent with some Nash equilibrium strategy might fail to be ``sound'' (i.e., there will be a way to exploit them).
Fortunately, our algorithm is provably $\epsilon$-sound in this sense, since (the proof of) \thref{thm:equilibrium} shows that it is always guaranteed to get at least the equilibrium value minus $\epsilon$.
\section{Partially Observable MDPs}
\label{sec:pomdps}
Partially observable Markov decision processes (POMDPs)~\citep{astrom1965-pomdp,sondik1978-pomdp,pineau2003-pbvi,smith2004-hsvi,smith2005-hsvi,spaan2005-perseus,bonet2009-rtdpbel,somani2013-despot} are a standard tool for single-agent decision making in a stochastic environment under uncertainty about the states.
From the perspective of partially observable stochastic games, POMDPs can be seen as a variant of POSG that is only played by a single player.
\begin{definition}[Partially observable Markov decision process]
\thlabel{def:pomdp}
A \emph{partially observable Markov decision process} is a tuple $(S,A,O,T,R)$ where
\begin{compactitem}
\item $S$ is a finite set of states,
\item $A$ is a finite set of actions the agent can use,
\item $O$ is a finite set of observations the agent can observe,
\item $T(o, s' \mid s, a)$ is the probability of transitioning to $s'$ while generating observation $o$ when the current state is $s$ and the agent uses action $a$,
\item $R(s, a)$ is the immediate reward of the agent when using action $a$ in state $s$.
\end{compactitem}
\end{definition}
In POMDPs, the agent starts with a known belief $b^{\mathrm{init}} \in \Delta(S)$ that characterizes the probability $b^{\mathrm{init}}(s)$ that $s$ is the initial state.
The play proceeds similarly as in POSGs, except that there is only one decision-maker involved:
The initial state $s^{(1)}$ is sampled from the distribution $b^{\mathrm{init}}$.
Then, in every stage $t$, the agent decides about the current action $a^{(t)}$ and receives reward $R(s^{(t)}, a^{(t)})$ based on the current state of the environment $s^{(t)}$.
With probability $T(o^{(t)}, s^{(t+1)} \mid s^{(t)}, a^{(t)})$ the system transitions to $s^{(t+1)}$ and the agent receives observation $o^{(t)}$.
The decision process is then repeated.
Although many objectives have been studied in POMDPs, in this section we discuss only infinite-horizon discounted POMDPs, i.e., the objective is to maximize $\sum_{t=1}^\infty \gamma^{t-1} R(s^{(t)}, a^{(t)})$ for a discount factor $\gamma \in (0, 1)$.
A strategy $\sigma: (A_1 O)^* \rightarrow A_1$ in POMDPs is traditionally called a \emph{policy} and assigns a deterministic action to each observed history $\omega \in (A_1 O)^*$ of the agent.\footnote{As usual, we take $X^*$ to denote the set of all finite sequences over $X$. For a set $Y$ of sequences, $YZ$ denotes the set of sequences obtained by concatenating a single element of $Z$ to some sequence from $Y$. (Combining this notation yields, e.g., $axbyc \in (AX)^*A$ for $a,b,c \in A$ and $x,y \in X$.)}
Since the agent is the only decision-maker within the environment, and the probabilistic characterization of the environment is known, the player is able to infer his belief $\mathbb{P}_{b^{\mathrm{init}}}[s^{(t+1)} \mid (a^{(i)} o^{(i)})_{i=1}^t ]$ (i.e., how likely it is to be in a particular state after a sequence of actions and observations $(a^{(i)} o^{(i)})_{i=1}^t$ has been used and observed).
This belief can be defined recursively
\begin{equation}
\tau(b,a,o)(s') = \eta \sum_{s \in S} b(s) \cdot T(o,s' \mid s,a)
\end{equation}
where $\eta$ is a normalizing term, and $\tau(b,a,o) \in \Delta(S)$ is the updated belief of the agent when his current belief was $b$ and he played and observed $(a,o)$.
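The recursive belief update above is straightforward to implement directly. The following is a minimal sketch; the dictionary-based encoding of $T$ and all names are our illustrative assumptions, not part of any referenced implementation:

```python
def belief_update(b, a, o, T):
    """Return tau(b, a, o): the normalized posterior over next states s'.

    b: dict state -> probability; T[(s, a)]: dict (o, s') -> probability.
    """
    unnormalized = {}
    for s, p in b.items():
        if p == 0.0:
            continue
        for (obs, s_next), t in T[(s, a)].items():
            if obs == o:
                unnormalized[s_next] = unnormalized.get(s_next, 0.0) + p * t
    norm = sum(unnormalized.values())  # equals P_b[o | a]; must be positive
    return {s_next: w / norm for s_next, w in unnormalized.items()}

# Tiny two-state example.
T = {
    ("s0", "a"): {("o1", "s0"): 0.7, ("o2", "s1"): 0.3},
    ("s1", "a"): {("o1", "s1"): 0.4, ("o2", "s1"): 0.6},
}
b_next = belief_update({"s0": 0.5, "s1": 0.5}, "a", "o1", T)
```

The normalizing term $\eta$ corresponds to dividing by `norm`, i.e., by the probability of receiving observation $o$ after action $a$.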
It has been shown by \cite{sondik1971-thesis} that the belief of the agent is a sufficient statistic, and POMDPs can therefore be translated into a \emph{belief-space MDP}.
In theory, standard methods for solving MDPs can be applied, and POMDPs can be solved, e.g., by iterating
\begin{equation}
V^{t+1}(b) = [HV^t](b) = \max_{a \in A} \left[ \sum_{s \in S} b(s) \cdot R(s,a) + \gamma \sum_{o \in O} \mathbb{P}_{b}[o \mid a] \cdot V^t(\tau(b,a,o)) \right] \ \text{.} \label{eq:pomdp:bellman}
\end{equation}
Since $H$ is a contraction, the repeated application of Equation~\eqref{eq:pomdp:bellman} converges to the unique convex value function $V^*: \Delta(S) \rightarrow \R$ of the POMDP.
However, since the number of beliefs is infinite, it is impossible to apply this formula to approximate $V^*$ directly.
\paragraph{Exact value iteration}
The value iteration can be, however, rewritten in terms of operations with so-called $\alpha$-vectors~\citep{sondik1978-pomdp}.
An $\alpha$-vector can be seen as a linear function $\alpha: \Delta(S) \rightarrow \R$ characterized by its values $\alpha(s)$ in the vertices $s \in S$ of the belief simplex $\Delta(S)$.
We thus have $\alpha(b) = \sum_{s \in S} b(s) \cdot \alpha(s)$.
Assume that $V^t$ is a piecewise-linear and convex function where $V^t(b) = \max_{\alpha \in \Gamma^t} \alpha(b)$ for a finite set of $\alpha$-vectors $\Gamma^t$.
We can then form a new (finite) set $\Gamma^{t+1}$ of $\alpha$-vectors to represent $V^{t+1}$ from Equation~\eqref{eq:pomdp:bellman} by considering all possible combinations of $\alpha$-vectors from the set $\Gamma^t$:
\begin{multline}
\Gamma^{t+1} = \Big\lbrace \ \alpha : \Delta(S) \to \R \ \Big| \ \alpha(s) = R(s,a) + \gamma \!\!\! \sum_{(o,s') \in O \times S} \!\!\! T(o, s' \,|\, s, a) \, \alpha^o(s') \\
\textnormal{for some } a \in A \textnormal{ and } \alpha^o \in \Gamma^t, \ o \in O \ \Big\rbrace \ \text{.}
\end{multline}
As $|\Gamma^{t+1}| = |A| \cdot |\Gamma^t|^{|O|}$, this exact approach suffers from poor scalability.
Several techniques have been proposed to reduce the size of the sets $\Gamma^t$~\citep{littman1996-thesis,zhang2001speeding}; however, this still does not translate into an efficient algorithm.
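To make the combinatorial blow-up concrete, the backup producing $\Gamma^{t+1}$ can be sketched as follows. This is a plain enumeration without any pruning; the dictionary-based model encoding and all names are illustrative assumptions:

```python
from itertools import product

def exact_backup(Gamma, S, A, O, T, R, gamma):
    """Enumerate all |A| * |Gamma|^|O| candidate alpha-vectors of Gamma^{t+1}.

    Gamma: list of alpha-vectors, each a dict state -> value.
    T[(s, a)]: dict (o, s') -> transition probability; R[(s, a)]: reward.
    """
    new_Gamma = []
    for a in A:
        # fix one alpha^o from Gamma for every observation o
        for choice in product(Gamma, repeat=len(O)):
            alpha_o = dict(zip(O, choice))
            alpha = {s: R[(s, a)] + gamma * sum(
                         T[(s, a)].get((o, s2), 0.0) * alpha_o[o][s2]
                         for o in O for s2 in S)
                     for s in S}
            new_Gamma.append(alpha)
    return new_Gamma

# Tiny model: both states always transition to s0 under the single action.
S, A, O = ["s0", "s1"], ["a"], ["o"]
T = {("s0", "a"): {("o", "s0"): 1.0}, ("s1", "a"): {("o", "s0"): 1.0}}
R = {("s0", "a"): 1.0, ("s1", "a"): 0.0}
Gamma1 = exact_backup([{"s0": 0.0, "s1": 0.0}], S, A, O, T, R, 0.5)
Gamma2 = exact_backup(Gamma1, S, A, O, T, R, 0.5)
```

With $|A| = |O| = 1$ the sets stay small here, but the `product(Gamma, repeat=len(O))` loop makes the $|A| \cdot |\Gamma^t|^{|O|}$ growth explicit.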
In the remainder of this section, we present two scalable algorithms for solving POMDPs that are relevant to this work.
First, we present RTDP-Bel that uses discretized value function and applies Equation~\eqref{eq:pomdp:bellman} directly.
Second, we present heuristic search value iteration (HSVI)~\citep{smith2004-hsvi,smith2005-hsvi} that inspires our methods for solving POSGs.
\paragraph{RTDP-Bel}
The RTDP-Bel algorithm~\citep{bonet1998-rtdpbel} is based on RTDP~\citep{BBS95} and has been originally framed in the context of Goal-POMDPs.
Goal-POMDPs do not discount rewards (i.e., they set $\gamma=1$ in Equation~\eqref{eq:pomdp:bellman}). However, the agent is incentivized to reach the goal state $g$ since his reward for every transition before reaching the goal is negative (i.e., it represents a cost).
The RTDP-Bel also applies to discounted POMDPs as discounting can be modelled within the Goal-POMDP framework as a fixed probability $1-\gamma$ of reaching the goal state during every transition~\citep{bonet2009-rtdpbel}.
RTDP-Bel adapts RTDP to partially observable domains by using a grid-based approximation of $V^*$ and a hash table to store the values, where $V^*(b) \approx \widehat{V}(\lfloor K \cdot b \rfloor)$ for some fixed parameter $K \in \mathbb{N}$.
This approximation, however, loses the theoretical properties of RTDP.
The algorithm need not converge as the values of the discretized value function may oscillate.
Moreover, there is no guarantee that the values stored in the hash-table will provide a bound on the values of $V^*$~\cite[p.~3, last paragraph of Section~3]{bonet2009-rtdpbel}.
Despite the lack of theoretical properties, RTDP-Bel has been shown to perform well in practice.
The RTDP-Bel algorithm performs a sequence of trials (see Algorithm~\ref{alg:rtdp-bel}) that updates the discretized value function $\widehat{V}$.
\begin{algorithm}
\caption{A single trial of the RTDP-Bel algorithm.}\label{alg:rtdp-bel}
\DontPrintSemicolon
\small
$b \gets b^{\mathrm{init}}$; ~$s \sim b$ \;
\While{$b(g) < 1$}{
$Q(b,a) \gets \sum_{s \in S} b(s) R(s,a) + \sum_{o \in O} \mathbb{P}_b[o \mid a] \cdot \widehat{V}(\lfloor K \cdot \tau(b,a,o) \rfloor)$ \;
$a^* \gets \argmax_{a \in A} Q(b,a)$ \;
$\widehat{V}(\lfloor K \cdot b \rfloor) \gets Q(b,a^*)$ \;
$(o,s') \sim T(o,s' \mid s, a^*)$;\ \ \
$b \gets \tau(b, a^*, o)$;\ \ \
$s \gets s'$ \;
}
\end{algorithm}
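The discretized value function $\widehat{V}(\lfloor K \cdot b \rfloor)$ used in the trial above can be realized by hashing an integer grid key. The following is a minimal sketch; the fixed state ordering and the choice of $K$ are illustrative assumptions:

```python
import math

def grid_key(b, states, K):
    """Discretize a belief dict into a hashable key floor(K * b(s))."""
    return tuple(math.floor(K * b.get(s, 0.0)) for s in states)

V_hat = {}  # hash table for the discretized value function

states = ["s0", "s1"]
b1 = {"s0": 0.52, "s1": 0.48}
b2 = {"s0": 0.58, "s1": 0.42}   # falls into the same grid cell for K = 10
V_hat[grid_key(b1, states, 10)] = 3.5
value = V_hat[grid_key(b2, states, 10)]  # both beliefs share one entry
```

Sharing one table entry per cell is exactly what makes the approximation coarse: beliefs in the same cell are forced to the same value, which is why the convergence guarantees of RTDP are lost.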
\paragraph{Heuristic search value iteration (HSVI)}
Heuristic search value iteration~\citep{smith2004-hsvi,smith2005-hsvi} is a representative of a class of point-based methods for solving POMDPs.
Unlike RTDP-Bel, it approximates $V^*$ using piecewise-linear functions.
We illustrate the difference between a grid-based approximation used in RTDP-Bel and a piecewise-linear approximation in Figures~\ref{fig:v-grid} and~\ref{fig:v-pwlc}.
Observe that unlike the grid-based approximation, a piecewise-linear approximation can yield a close approximation of $V^*$ even in regions with a rapid change of value.
\begin{figure}
\caption{RTDP-Bel}
\caption{HSVI (PWLC)}
\caption{HSVI2 (sawtooth)}
\caption{
Comparison of value function approximation schemes
}
\label{fig:v-grid}
\label{fig:v-pwlc}
\label{fig:v-sawtooth}
\end{figure}
In the original version of the \emph{heuristic-search value iteration} algorithm (HSVI)~\citep{smith2004-hsvi}, the algorithm keeps two piecewise-linear and convex (PWLC) functions $\uv$ and $\ov$ to approximate $V^*$ (see Figure~\ref{fig:v-pwlc}) and refines them over time.
The lower bound on the value is represented in the vector-set representation using a finite set of $\alpha$-vectors $\Gamma$, while the upper bound is formed as a lower convex hull of a set of points $\Upsilon=\lbrace (b_i,y_i) \,|\, i=1,\ldots,m \rbrace$ where $b_i \in \Delta(S)$ and $y_i \in \R$.
We then have
\begin{subequations}
\begin{align}
\uv(b) &= \max_{\alpha \in \Gamma} \sum_{s \in S} b(s) \cdot \alpha(s) \\
\ov(b) &= \min \lbrace \textstyle\sum_{i=1}^m \lambda_i y_i \mid \lambda \in \mathbb{R}_{\geq 0}^m: \textstyle\sum_{i=1}^m \lambda_i b_i = b \rbrace \ \text{.} \label{eq:hsvi-ub}
\end{align}
\end{subequations}
Computing $\ov(b)$ according to Equation~\eqref{eq:hsvi-ub} requires solving a linear program.
In the second version of the algorithm (HSVI2,~\citep{smith2005-hsvi}), the PWLC representation of the upper bound has been replaced by a sawtooth-shaped approximation~\citep{hauskrecht2000-value-functions} (see Figure~\ref{fig:v-sawtooth}).
While the sawtooth approximation is less tight with the same set of points, the computation of $\ov(b)$ does not rely on the use of linear programming and can be done in linear time in the size of $\Upsilon$.
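For concreteness, one way to evaluate such a sawtooth bound is sketched below, assuming $\Upsilon$ is split into corner values at the simplex vertices and a set of interior points; this decomposition and all names are our assumptions rather than the exact HSVI2 implementation:

```python
def sawtooth(b, y_corner, points):
    """Evaluate a sawtooth upper bound at belief b in O(|points| * |S|).

    y_corner[s]: upper-bound value at the simplex vertex (pure belief) of s.
    points: list of (b_i, y_i) interior belief-value pairs.
    """
    v0 = sum(b[s] * y_corner[s] for s in b)          # corner interpolation
    best = v0
    for b_i, y_i in points:
        v0_i = sum(b_i[s] * y_corner[s] for s in b_i)
        if y_i >= v0_i:
            continue  # this point does not improve on the corners
        ratio = min(b.get(s, 0.0) / p for s, p in b_i.items() if p > 0)
        best = min(best, v0 + (y_i - v0_i) * ratio)
    return best

y_corner = {"s0": 10.0, "s1": 6.0}
points = [({"s0": 0.5, "s1": 0.5}, 6.0)]   # lies below v0 = 8.0 there
at_point = sawtooth({"s0": 0.5, "s1": 0.5}, y_corner, points)
nearby = sawtooth({"s0": 0.75, "s1": 0.25}, y_corner, points)
```

At the stored belief the bound attains the stored value exactly, and it degrades linearly toward the corner interpolation as the queried belief moves away, which is the characteristic sawtooth shape.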
HSVI2 initializes the lower bound $\uv$ by considering policies of the form `always play action $a$' and constructing one $\alpha$-vector for each action $a \in A$ corresponding to the expected value of playing such a policy.
For the initialization of the upper bound, the fast-informed bound is used~\citep{hauskrecht2000-value-functions}.
The refinement of $\uv$ and $\ov$ is done by adding new elements to the sets $\Gamma$ and $\Upsilon$.
Since the goal of each update is to improve the approximation quality in the selected belief $b$ as much as possible, we refer to them as \emph{point-based updates} (see Algorithm~\ref{alg:pb-update}).
\begin{algorithm}
\SetKwFunction{Update}{$\mathtt{update}$}
\SetKwProg{myproc}{procedure}{}{}
\DontPrintSemicolon
\small
$\alpha^{a,o} \gets \argmax_{\alpha \in \Gamma} \sum_{s' \in S} \tau(b,a,o)(s') \cdot \alpha(s')$ for all $a \in A, o \in O$ \;
$\alpha^a(s) \gets R(s,a) + \gamma \sum_{o,s'} T(o,s' \mid s, a) \cdot \alpha^{a,o}(s')$ for all $s \in S, a \in A$ \;
$\Gamma \gets \Gamma \cup \lbrace \argmax_{\alpha^a} \sum_{s \in S} b(s) \cdot \alpha^a(s) \rbrace$ \;
$\Upsilon \gets \Upsilon \cup \lbrace (b, \max_{a \in A} \left[ \sum_{s \in S} b(s) R(s,a) + \gamma\sum_{o \in O} \mathbb{P}_b[o \mid a] \cdot \ov(\tau(b,a,o)) \right])\rbrace$
\caption{Point-based $\mathtt{update(}b\mathtt{)}$ procedure of $(\uv,\ov)$.}
\label{alg:pb-update}
\end{algorithm}
\begin{algorithm}
\caption{
HSVI2 for discounted POMDPs.
The pseudocode follows the ZMDP implementation and includes $\mathtt{update}$ on line~\ref{alg:hsvi-disc:update1}.
}
\label{alg:hsvi-disc}
\SetKwFunction{Explore}{$\mathtt{explore}$}
\SetKwProg{myproc}{procedure}{}{}
\DontPrintSemicolon
\small
Initialize $\uv$ and $\ov$\;
\lWhile{$\ov(b^{\mathrm{init}}) - \uv(b^{\mathrm{init}}) > \varepsilon$}{
\Explore{$b^{\mathrm{init}},\varepsilon,0$}
}
\myproc{\Explore{$b,\varepsilon,t$}}{
\lIf{$\ov(b) - \uv(b) \leq \varepsilon\gamma^{-t}$}{\Return\label{alg:hsvi-disc:termination}}
$a^* \gets \argmax_{a \in A} \left[ \sum_{s} b(s) \cdot R(s,a) + \gamma\sum_{o \in O} \mathbb{P}_b[o \mid a] \ov(\tau(b,a,o)) \right]$\label{alg:hsvi-disc:action-choice}\;
$\mathtt{update}(b)$ \label{alg:hsvi-disc:update1}\;
$o^* \gets \argmax_{o \in O} \mathbb{P}_b[o \mid a^*] \cdot \mathrm{excess}_{t+1}(\tau(b,a^*,o))$\;
\Explore{$\tau(b,a^*,o^*),\varepsilon,t+1$}\;
$\mathtt{update}(b)$ \;
}
\end{algorithm}
Similarly to RTDP-Bel, HSVI2 selects beliefs where the update should be performed based on the simulated play (selecting actions according to $\ov$).
Unlike RTDP-Bel, however, observations are not selected randomly.
Instead, HSVI2 selects an observation with the highest \emph{weighted excess gap}, i.e. the excess approximation error
\begin{equation}
\mathrm{excess}_{t+1}(\tau(b,a^*,o)) = \ov(\tau(b,a^*,o))-\uv(\tau(b,a^*,o))-\varepsilon\gamma^{-(t+1)} \label{eq:hsvi:excess}
\end{equation}
in $\tau(b,a^*,o)$ weighted by the probability $\mathbb{P}_b[o \mid a^*]$.
This heuristic choice attempts to target beliefs where the update will have the most significant impact on $\ov(b^{\mathrm{init}}) - \uv(b^{\mathrm{init}})$.
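The observation choice on the corresponding line of Algorithm~\ref{alg:hsvi-disc} can be sketched as follows; all callables here are illustrative stand-ins for the quantities in the text:

```python
def select_observation(b, a_star, observations, p_obs, tau, ub, lb,
                       epsilon, gamma, t):
    """Pick o maximizing P_b[o | a*] * excess_{t+1}(tau(b, a*, o))."""
    def weighted_excess(o):
        b_next = tau(b, a_star, o)
        excess = ub(b_next) - lb(b_next) - epsilon * gamma ** (-(t + 1))
        return p_obs(b, a_star, o) * excess
    return max(observations, key=weighted_excess)

# Toy stand-ins: a rare observation ("o1") leads to a poorly approximated
# successor belief, a frequent one ("o2") to an almost-closed gap.
tau = lambda b, a, o: o                 # label the successor belief by o
ub = {"o1": 5.0, "o2": 5.0}.__getitem__
lb = {"o1": 1.0, "o2": 4.0}.__getitem__
p_obs = lambda b, a, o: {"o1": 0.2, "o2": 0.8}[o]
chosen = select_observation(None, None, ["o1", "o2"], p_obs, tau, ub, lb,
                            epsilon=0.1, gamma=0.9, t=0)
```

In this toy case the rare observation wins because its gap of $4.0$ outweighs its low probability, illustrating that the heuristic trades off reachability against approximation error rather than sampling observations randomly.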
The HSVI2 algorithm for discounted-sum POMDPs ($\gamma \in (0,1)$) is shown in Algorithm~\ref{alg:hsvi-disc}.
This algorithm provably converges to an $\varepsilon$-approximation of $V^*(b^{\mathrm{init}})$ using values $\uv(b^{\mathrm{init}})$ and $\ov(b^{\mathrm{init}})$, see~\citep{smith2004-hsvi}.
\section{Game Model: One-Sided Partially Observable Stochastic Games (OS-POSGs)}
\label{sec:osposg:model}
We now define the model of one-sided POSGs and describe strategies for this class of games.
\begin{definition}[one-sided POSGs]
A \emph{one-sided POSG} (or OS-POSG) is a tuple $G=(S,A_1,A_2,O,T,R,\gamma)$ where
\begin{compactitem}
\item $S$ is a finite set of game \emph{states},
\item $A_1$ and $A_2$ are finite sets of \emph{actions} of player~1 and player~2, respectively,
\item $O$ is a finite set of \emph{observations},
\item for every $(s,a_1,a_2) \in S \times A_1 \times A_2$, $T(\cdot \,|\, {s,a_1,a_2}) \in \Delta(O \times S)$ represents the probabilistic transition function,
\item $R: S \times A_1 \times A_2 \rightarrow \R$ is a reward function of player~1,
\item $\gamma \in (0,1)$ is a discount factor.
\end{compactitem}
\end{definition}
The game starts by sampling the initial state $s^{(1)} \sim b^{\mathrm{init}}$ from the \emph{initial belief} $b^{\mathrm{init}}$.
Then the game proceeds for an infinite number of \emph{stages} where the players choose their actions simultaneously and receive feedback from the environment.
At the beginning of the $i$-th stage, the current state $s^{(i)}$ is revealed to player~2, but not to player~1.
Then player~1 selects action $a_1^{(i)} \in A_1$ and player~2 selects action $a_2^{(i)} \in A_2$.
Based on the current state of the game $s^{(i)}$ and the actions $(a_1^{(i)},a_2^{(i)})$ taken by the players, an unobservable reward $R(s^{(i)},a_1^{(i)},a_2^{(i)})$ is assigned\footnote{Note that we consider a zero-sum setting, hence the reward of player~2 is $-R(s^{(i)},a_1^{(i)},a_2^{(i)})$. We do however consider that player~2 focuses on minimizing the reward of player~1 instead of reasoning about the rewards of player~2 directly.} to player~1, and the game transitions to a state $s^{(i+1)}$ while generating observation $o^{(i)}$ with probability $T(o^{(i)},s^{(i+1)} \,|\, s^{(i)},a_1^{(i)},a_2^{(i)})$.
After committing to action $a_2^{(i)}$, player~2 observes the entire outcome of the current stage, including the action $a_1^{(i)}$ taken by player~1 and the observation $o^{(i)}$.
Player~1, on the other hand, knows only his own action $a_1^{(i)}$ and the observation $o^{(i)}$, while the action $a_2^{(i)}$ of player~2 and both the past and new states of the system $s^{(i)}$ and $s^{(i+1)}$ remain unknown to him.
The information asymmetry in the game means that while player~2 can observe the entire course of the game $(s^{(i)} a_1^{(i)} a_2^{(i)} o^{(i)})_{i=1}^t s^{(t+1)} \in (S A_1 A_2 O)^*S$ up to the current decision point at time $t+1$, player~1 only knows his own actions and observations $(a_1^{(i)} o^{(i)})_{i=1}^t \in (A_1 O)^*$.\footnote{Recall that we use the standard notation where $X^* := $ all finite sequences over $X$ (and, if $Y$ is a set of sequences, $YZ$ denotes the set of sequences obtained by appending a single element of $Z$ at the end of some $y\in Y$).}
The players make decisions solely based on this information. Formally, this is captured by the following definition:
\begin{definition}[Behavioral strategy]
\thlabel{def:osposg:behavioral}
Let $G$ be a one-sided POSG.
Mappings $\sigma_1: (A_1 O)^* \rightarrow \Delta(A_1)$ and $\sigma_2: (S A_1 A_2 O)^* S \rightarrow \Delta(A_2)$ are \emph{behavioral strategies} of imperfectly informed player~1 and perfectly informed player~2, respectively.
The sets of all behavioral strategies of player~1 and player~2 are denoted $\Sigma_1$ and $\Sigma_2$, respectively.
\end{definition}
\paragraph{Plays in OS-POSGs}
Players use their behavioral strategies $(\sigma_1,\sigma_2)$ to play the game.
A \emph{play} is an infinite word $(s^{(i)} a_1^{(i)} a_2^{(i)} o^{(i)})_{i=1}^\infty$, while finite prefixes of plays $w=(s^{(i)} a_1^{(i)} a_2^{(i)} o^{(i)})_{i=1}^T s^{(T+1)}$ are called \emph{histories} of length~$T$, and the set of plays having $w$ as a prefix is denoted $\mathsf{Cone}(w)$.
Formally, the \emph{cone} of $w$ is the set of all plays extending $w$,
\begin{equation}
\mathsf{Cone}(w) := \left\lbrace (s^{(i)} a_1^{(i)} a_2^{(i)} o^{(i)})_{i=1}^\infty \in (S A_1 A_2 O)^\omega \mid (s^{(i)} a_1^{(i)} a_2^{(i)} o^{(i)})_{i=1}^\infty \text{ extends } w \right\rbrace \text{.}
\end{equation}
At a decision point at time $t$, players extend a history $(s^{(i)} a_1^{(i)} a_2^{(i)} o^{(i)})_{i=1}^t s^{(t+1)}$ of length $t$ by sampling actions from their strategies $a_1^{(t+1)} \sim \sigma_1((a_1^{(i)} o^{(i)})_{i=1}^t)$ and $a_2^{(t+1)} \sim \sigma_2((s^{(i)} a_1^{(i)} a_2^{(i)} o^{(i)})_{i=1}^t s^{(t+1)})$.
We consider a discounted-sum objective with discount factor $\gamma \in (0,1)$.
The payoff associated with a play $(s^{(i)} a_1^{(i)} a_2^{(i)} o^{(i)})_{i=1}^\infty$ is thus $\Disc^\gamma := \sum_{i=1}^\infty \gamma^{i-1} R(s^{(i)},a_1^{(i)},a_2^{(i)})$.
Player~1 is aiming to maximize this quantity while player~2 is minimizing it.
Apart from reasoning about decision rules of the players for the entire game (i.e., their behavioural strategies $\sigma_1$ and $\sigma_2$), we also consider the strategies they use for a single decision point---or stage---of the game only (i.e., assuming that the course of the previous stages $(s^{(i)} a_1^{(i)} a_2^{(i)} o^{(i)})_{i=1}^t$ is fixed and considered a parameter of the given stage).
\begin{definition}[Stage strategy]
\thlabel{def:osposg:stage-strategy}
Let $G$ be a one-sided POSG.
A \emph{stage strategy} of player~1 is a distribution $\pi_1 \in \Delta(A_1)$ over the actions player~1 can use at the current stage.
A \emph{stage strategy} of player~2 is a mapping $\pi_2: S \rightarrow \Delta(A_2)$ from the possible current states of the game (player~2 observes the true state at the beginning of the current stage) to a distribution over actions of player~2.
The sets of all stage strategies of player~1 and player~2 are denoted $\Pi_1$ and $\Pi_2$, respectively.
\end{definition}
Note that a stage strategy of player~2 is essentially a conditional probability distribution given the current state of the game.
For notational convenience, we use the notation $\pi_2(a_2 \,|\, s)$ instead of $\pi_2(s)(a_2)$ wherever applicable.
\subsection{Subgames}
Recall that both players know past actions of player~1 and all observations player~1 has received. The action-observation history is thus public knowledge.
This allows us to define a notion of \emph{subgames}.
A subgame induced by an action-observation history $\omega$ (or \emph{$\omega$-subgame}) is formed by histories $h$ such that the action-observation history $\omega(h)$ of player~1 in $h$ extends $\omega$ (i.e., $\omega$ is a prefix of $\omega(h)$), written $\omega(h) \sqsupseteq \omega$.
Later in the text, we will specifically reason about subgames that follow directly after the first stage of the game---these correspond to $(a_1, o)$-subgames for some action $a_1$ and observation $o$.
Observe that, once $(a_1,o)$ is played and observed, both players know exactly which \emph{$(a_1,o)$-subgame} they are currently in.
Consequently, reasoning about $(a_1,o)$-subgame can be done without considering any other $(a_1',o')$-subgame.
\subsection{Probability measures}
We now proceed by defining a probability measure on the space of infinite plays in one-sided POSGs.
Assuming that $b \in \Delta(S)$ is the initial belief characterizing the distribution over possible initial states, and players use strategies $(\sigma_1,\sigma_2)$ to play the game from the current situation, we can define the probability distribution over histories (i.e., prefixes of plays) recursively as follows.
\begin{subequations}
\begin{align*}
& \mathbb{P}_{b,\sigma_1,\sigma_2}[s^{(1)}] = b(s^{(1)}) \numberthis \\
& \mathbb{P}_{b,\sigma_1,\sigma_2}[(s^{(i)} a_1^{(i)} a_2^{(i)} o^{(i)})_{i=1}^t s^{(t+1)}] = \mathbb{P}_{b,\sigma_1,\sigma_2}[(s^{(i)} a_1^{(i)} a_2^{(i)} o^{(i)})_{i=1}^{t-1} s^{(t)}] \ \cdot \numberthis \\
& \qquad\qquad\qquad \cdot \sigma_1((a_1^{(i)} o^{(i)})_{i=1}^{t-1}, a_1^{(t)}) \cdot \sigma_2((s^{(i)} a_1^{(i)} a_2^{(i)} o^{(i)})_{i=1}^{t-1} s^{(t)}, a_2^{(t)}) \ \cdot \\
& \qquad\qquad\qquad \cdot T(o^{(t)}, s^{(t+1)} \;|\; s^{(t)}, a_1^{(t)}, a_2^{(t)})
\end{align*}
\end{subequations}
This probability distribution also coincides with a measure $\mu$ defined over the cones, i.e., the sets of plays having $w$ as a prefix:
\begin{equation}
\mu(\mathsf{Cone}(w)) = \mathbb{P}_{b,\sigma_1,\sigma_2}[w]
\end{equation}
The measure $\mu$ uniquely extends to the probability measure $\mathbb{P}_{b,\sigma_1,\sigma_2}[\cdot]$ over infinite plays of the game, which allows us to define the expected utility $\E_{b,\sigma_1,\sigma_2}[\Disc^\gamma]$ of the game when the initial belief of the game is $b$ and strategies $\sigma_1 \in \Sigma_1$ and $\sigma_2 \in \Sigma_2$ are played by player~1 and player~2, respectively.
In a similar manner, we can define a probability measure $\mathbb{P}_{b,\pi_1,\pi_2}[s, a_1, a_2, o, s']$ that predicts events only one step into the future (for \textit{stage} strategies $\pi_1 \in \Pi_1$, $\pi_2 \in \Pi_2$).
For belief $b$ and stage strategies $\pi_1$, $\pi_2$, we consider the probability that a stage starts in state $s \in S$ (sampled from $b$), players select actions $a_1 \sim \pi_1$ and $a_2 \sim \pi_2$, and that this results in a transition to a new state $s' \in S$ while generating an observation $o \in O$:
\begin{equation}
\mathbb{P}_{b,\pi_1,\pi_2}[s, a_1, a_2, o, s'] = b(s) \pi_1(a_1) \pi_2(a_2 \,|\, s) T(o, s' \,|\, s, a_1, a_2) \ \text{.} \label{eq:osposg:stage-prob}
\end{equation}
The probability distribution in Equation~\eqref{eq:osposg:stage-prob} can be marginalized to obtain, e.g., the probability that player~1 plays action $a_1 \in A_1$ and observes $o \in O$,
\begin{align}
\mathbb{P}_{b,\pi_1,\pi_2}[a_1, o] = & \sum_{(s, a_2, s') \in S \times A_2 \times S} \mathbb{P}_{b,\pi_1,\pi_2}[s, a_1, a_2, o, s'] \nonumber \\
= & \sum_{(s, a_2, s') \in S \times A_2 \times S} b(s) \pi_1(a_1) \pi_2(a_2 \,|\, s) T(o, s' \,|\, s, a_1, a_2) \ \text{.}
\end{align}
At the beginning of each stage, the imperfectly informed player~1 selects their action based on their belief about the current state of the game.
For a fixed current stage-strategy $\pi_2$ of player~2, player~1 can derive the distribution over possible states at the beginning of the next stage.
If player~1 starts with a belief $b$, takes an action $a_1 \in A_1$, and observes $o \in O$, his updated belief $b' = \tau(b,a_1,\pi_2,o)$ over states $s' \in S$ is given by $\tau(b,a_1,\pi_2,o)(s') = $
\begin{subequations}\label{eq:osposg:tau}
\begin{align}
&= \mathbb{P}_{b,\pi_1,\pi_2}[s' \,|\, a_1, o] = \sum_{(s,a_2) \in S \times A_2} \mathbb{P}_{b,\pi_1,\pi_2}[s, a_2, s' \,|\, a_1, o] \\
&= \frac{1}{\mathbb{P}_{b,\pi_1,\pi_2}[a_1,o]} \sum_{(s,a_2) \in S \times A_2} \mathbb{P}_{b,\pi_1,\pi_2}[s, a_1, a_2, o, s'] \\
&= \frac{1}{\mathbb{P}_{b,\pi_1,\pi_2}[a_1,o]} \sum_{(s,a_2) \in S \times A_2} b(s) \pi_1(a_1) \pi_2(a_2 \,|\, s) T(o, s' \,|\, s, a_1, a_2) \ \text{.}
\end{align}
\end{subequations}
In Section~\ref{sec:osposg:bellman}, this expression will prove useful for describing the Bellman equation in one-sided POSGs.
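A direct implementation of the update in Equation~\eqref{eq:osposg:tau} can look as follows; note that the factor $\pi_1(a_1)$ cancels with the normalization term and can be omitted. The dictionary-based encoding and all names are illustrative assumptions:

```python
def osposg_belief_update(b, a1, pi2, o, T):
    """Compute tau(b, a1, pi2, o): player 1's posterior over next states s'.

    b: dict state -> probability; pi2: dict state -> {action2: probability};
    T[(s, a1, a2)]: dict (o, s') -> transition probability.
    """
    unnormalized = {}
    for s, p in b.items():
        for a2, q in pi2[s].items():
            for (obs, s_next), t in T[(s, a1, a2)].items():
                if obs == o:
                    unnormalized[s_next] = (
                        unnormalized.get(s_next, 0.0) + p * q * t)
    norm = sum(unnormalized.values())   # proportional to P[a1, o]
    return {s_next: w / norm for s_next, w in unnormalized.items()}

# Toy example: player 2 mixes actions only in state "y".
pi2 = {"x": {"l": 1.0}, "y": {"l": 0.5, "r": 0.5}}
T = {("x", "a", "l"): {("o", "x"): 1.0},
     ("y", "a", "l"): {("o", "y"): 1.0},
     ("y", "a", "r"): {("o2", "y"): 1.0}}
b_next = osposg_belief_update({"x": 0.5, "y": 0.5}, "a", pi2, "o", T)
```

Compared to the POMDP belief update, the only new ingredient is the inner sum over player~2's actions weighted by the stage strategy $\pi_2(a_2 \,|\, s)$.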
\section{Value of One-Sided POSGs}\label{sec:osposg:value}
We now proceed by establishing the value function of one-sided POSGs.
The value function represents the utility player~1 can achieve in each possible initial belief of the game.
First, we define the value of a strategy $\sigma_1 \in \Sigma_1$ of player~1, which assigns a payoff player~1 is guaranteed to get by playing $\sigma_1$ in the game (parameterized by the initial belief of the game).
Based on the value of strategies, we define the optimal value function of the game where player~1 chooses the best strategy for the given initial belief.
\begin{definition}[Value of strategy]\thlabel{def:osposg:strategy-value}
Let $G$ be a one-sided POSG and $\sigma_1 \in \Sigma_1$ be a behavioral strategy of the imperfectly informed player~1.
The \emph{value of strategy} $\sigma_1$, denoted $\val^{\sigma_1}$, is a function mapping each belief $b \in \Delta(S)$ to the expected utility that $\sigma_1$ guarantees against a best-responding player~2 given that the initial belief is $b$:
\begin{equation}
\val^{\sigma_1}(b) = \inf_{\sigma_2 \in \Sigma_2} \E_{b,\sigma_1,\sigma_2}[\Disc^\gamma] \ \text{.}
\end{equation}
\end{definition}
When given an instance of a one-sided POSG with initial belief $b$, player~1 aims for a strategy that yields the best possible expected utility $\val^{\sigma_1}(b)$.
The value player~1 can guarantee in belief $b$ is characterized by the optimal value function $V^*$ of the game.
\begin{definition}[Optimal value function]
\thlabel{def:osposg:v-star}
Let $G$ be a one-sided POSG.
The \emph{optimal value function} $V^*: \Delta(S) \rightarrow \R$ of $G$ represents the supinf value of player~1 for each of the beliefs, i.e.
\begin{equation}
V^*(b) = \sup_{\sigma_1 \in \Sigma_1} \val^{\sigma_1}(b) \ \text{.}
\end{equation}
\end{definition}
Note that according to von Neumann's minimax theorem~\cite{vonneumann1928-minimax} (resp. its generalization~\cite{sion1958-minimax}), every zero-sum POSG with discounted-sum objective $\Disc^\gamma$ is determined in the sense that the lower values (in the $\sup\inf$ sense) and the upper values (in the $\inf\sup$ sense) of the game coincide and represent the value of the game.
Therefore, $V^*(b)$ also represents the value of the game when the initial belief of the game is $b \in \Delta(S)$.
Since the $\Disc^\gamma$ objective is considered (for $0 < \gamma < 1$), the infinite discounted sum of player~1's rewards converges.
As a result, the values of strategies $\val^{\sigma_1}(b)$ and the value of the game $V^*(b)$ can be bounded.
\begin{restatable}{proposition}{ValuesAreBounded}\label{thm:osposg:bounded}
Let $G$ be a one-sided POSG.
Then the payoff $\Disc^\gamma$ of an arbitrary play in $G$ is bounded by the values
\begin{equation}
L = \min_{(s,a_1,a_2)} R(s,a_1,a_2) / (1-\gamma) \qquad U = \max_{(s,a_1,a_2)} R(s,a_1,a_2) / (1-\gamma) \ \text{.}
\end{equation}
It also follows that $L \leq V^*(b) \leq U$ and $L \leq \val^{\sigma_1}(b) \leq U$ holds for every belief $b \in \Delta(S)$ and strategy $\sigma_1 \in \Sigma_1$ of the imperfectly informed player~1.
\end{restatable}
Since the values $L$ and $U$ are uniquely determined by the given one-sided POSG, we will use these symbols in the remainder of the text.
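The bounds $L$ and $U$ of Proposition~\ref{thm:osposg:bounded} are straightforward to compute; a sketch, assuming (as an illustrative convention) that rewards are stored as an array indexed by $(s,a_1,a_2)$:

```python
import numpy as np

def utility_bounds(R, gamma):
    """Bounds L <= Disc^gamma <= U from the proposition above.

    R     : (|S|, |A1|, |A2|) array of player 1's rewards R(s, a1, a2)
    gamma : discount factor with 0 < gamma < 1
    """
    # a play's payoff is a sum of per-stage rewards discounted by gamma^t,
    # so it lies between min(R) * 1/(1-gamma) and max(R) * 1/(1-gamma)
    L = R.min() / (1.0 - gamma)
    U = R.max() / (1.0 - gamma)
    return L, U
```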
We now focus on the discussion of structural properties of solutions of OS-POSGs.
First, we show that the value of an arbitrary strategy $\sigma_1 \in \Sigma_1$ of player~1 is linear in $b \in \Delta(S)$ --- that is, it can be represented as a convex combination of its values in the vertices of the simplex $\Delta (S)$.
In accordance with the notation used in the POMDP literature, we refer to linear functions defined over the $\Delta(S)$ simplex as \emph{$\alpha$-vectors}.
For $s \in S$, we overload the notation by writing $\alpha(s)$ for the value of $\alpha$ at the vertex of the simplex corresponding to $s$. This allows us to write the following for every $b \in \Delta(S)$:
\begin{equation}
\alpha(b) = \sum_{s \in S} \alpha(s) \cdot b(s) \qquad \text{where } \alpha(s) = \alpha(\mathbbm{1}_s),\ \ \mathbbm{1}_s(s') = \begin{cases}
1 & s = s' \\
0 & \text{otherwise}
\end{cases}
\ \text{.}
\end{equation}
The following lemma shows the result we promised earlier:
\begin{lemma}\thlabel{thm:osposg:strategy-value-linear}
Let $G$ be a one-sided POSG and $\sigma_1 \in \Sigma_1$ be an arbitrary behavioral strategy of player~1.
Then the value $\val^{\sigma_1}$ of strategy $\sigma_1$ is a linear function in the belief space $\Delta(S)$.
\end{lemma}
\begin{proof}
According to \thref{def:osposg:strategy-value}, the value $\val^{\sigma_1}$ of strategy $\sigma_1$ is defined as the expected utility of $\sigma_1$ against a best-response strategy $\sigma_2$ of player~2.
However, before having to act, player~2 observes the true initial state $s \sim b$.
Therefore, they will play a best-response strategy $\sigma_2$ against $\sigma_1$ (with expected utility $\val^{\sigma_1}(s)$) given that the initial state is $s$.
Since the probability that the initial state is $s$ is $b(s)$, we have
\begin{equation}
\val^{\sigma_1}(b) = \sum_{s \in S} b(s) \val^{\sigma_1}(s) \ \text{.}
\end{equation}
This shows that $\val^{\sigma_1}$ is a linear function in the belief $b \in \Delta(S)$.
\end{proof}
Since a point-wise supremum of a set of linear functions is convex, \thref{thm:osposg:strategy-value-linear} implies that the optimal value function $V^*$ is convex:
\begin{restatable}{lemma}{ValuesConvex}\label{thm:osposg:vs-convex}
Optimal value function $V^*$ of a one-sided POSG is convex.
\end{restatable}
Unless otherwise specified, we endow any space $\Delta(X)$ over a finite set $X$ with the $\| \cdot \|_1$ metric.
To prepare the ground for the later proof of correctness of our main algorithm (presented in Section~\ref{sec:osposg:hsvi}), we now show that both the value of strategies and the optimal value function $V^*$ are Lipschitz continuous.
(Recall that for $k > 0$ a function $f: \Delta(X) \rightarrow \mathbb{R}$ is $k$-Lipschitz continuous if for every $p, q \in \Delta(X)$ it holds $| f(p) - f(q) | \leq k \cdot \| p - q \|_1$.)
\begin{restatable}{lemma}{LinearFsAreLipschitz}\label{thm:osposg:linbound-lipschitz}
Let $X$ be a finite set and let $f: \Delta(X) \rightarrow [ y_{\mathrm{min}}, y_{\mathrm{max}} ]$ be a linear function.
Then $f$ is $k$-Lipschitz continuous for $k=(y_{\mathrm{max}}-y_{\mathrm{min}})/2$.
\end{restatable}
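Lemma~\ref{thm:osposg:linbound-lipschitz} can be checked empirically. The sketch below samples pairs of beliefs and estimates the worst-case ratio $|f(p)-f(q)| / \|p-q\|_1$ for a linear function given by an $\alpha$-vector; the Dirichlet sampling scheme is an illustrative choice, not part of the lemma.

```python
import numpy as np

def max_lipschitz_ratio(alpha, trials=1000, seed=0):
    """Empirically estimate sup |f(p)-f(q)| / ||p-q||_1 for the linear
    function f(b) = alpha . b over randomly sampled beliefs p, q."""
    rng = np.random.default_rng(seed)
    n = len(alpha)
    worst = 0.0
    for _ in range(trials):
        p, q = rng.dirichlet(np.ones(n)), rng.dirichlet(np.ones(n))
        d = np.abs(p - q).sum()
        if d > 0:
            worst = max(worst, abs(alpha @ (p - q)) / d)
    return worst
```

By the lemma, the returned ratio never exceeds $(\max_s \alpha(s) - \min_s \alpha(s))/2$; for a two-state simplex it equals that bound exactly.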
Lemma~\ref{thm:osposg:linbound-lipschitz} directly implies that both values $\val^{\sigma_1}$ of strategies $\sigma_1$ of the imperfectly informed player~1, as well as the optimal value function $V^*$ are Lipschitz continuous.
\begin{lemma}\thlabel{thm:osposg:strategy-value-lipschitz}
Let $\sigma_1 \in \Sigma_1$ be an arbitrary strategy of the imperfectly informed player~1.
Then $\val^{\sigma_1}$ is $(U-L)/2$-Lipschitz continuous.\footnote{Recall that $L$ and $U$, introduced in Proposition~\ref{thm:osposg:bounded}, are the minimum and maximum possible utilities in the game.}
\end{lemma}
\begin{proof}
Value $\val^{\sigma_1}$ of strategy $\sigma_1$ is linear (\thref{thm:osposg:strategy-value-linear}) and its values are bounded by $L$ and $U$ (Proposition~\ref{thm:osposg:bounded}).
Therefore, according to Lemma~\ref{thm:osposg:linbound-lipschitz}, the function $\val^{\sigma_1}$ is $(U-L)/2$-Lipschitz.
\end{proof}
For notational convenience, we denote this constant as $\delta := (U-L)/2$ in the remainder of the text.
\begin{restatable}{proposition}{ValuesAreLipschitz}\label{thm:osposg:value-lipschitz}
Value function $V^*$ of one-sided POSGs is $\delta$-Lipschitz continuous.
\end{restatable}
\begin{remark}
In the remainder of the text, we will use the term \emph{value function} to refer to an arbitrary function $V: \Delta(S) \rightarrow \R$ that assigns numbers $V(b)$ (estimates of the value achieved under optimal play) to beliefs $b \in \Delta(S)$ of player~1.
\end{remark}
\subsection{Elementary Properties of Convex Functions}
\label{sec:osposg:cvx}
In \thref{thm:osposg:vs-convex}, we have shown that the optimal value function $V^*$ of one-sided POSGs is convex.
In this section, we will explicitly state some of the important properties of convex functions that motivate our approach and are used throughout the rest of the text.
\begin{proposition}\thlabel{thm:osposg:sup-convex}
Let $f: \Delta(S) \rightarrow \mathbb{R}$ be a point-wise supremum of linear functions, i.e.,
\begin{equation}
f(b) = \sup_{\alpha \in \Gamma} \alpha(b) \ , \qquad \Gamma \subseteq \left\lbrace \alpha: \Delta(S) \rightarrow \mathbb{R} \mid \alpha \text{ is linear} \right\rbrace \ \text{.}
\end{equation}
Then $f$ is convex and continuous.
Furthermore, if every $\alpha \in \Gamma$ is $k$-Lipschitz continuous, $f$ is $k$-Lipschitz continuous as well.
\end{proposition}
\begin{proof}
Let $b,b' \in \Delta(S)$ and $\lambda \in [0,1]$ be arbitrary.
We have
\begin{align*}
\lambda f(b) + (1-\lambda) f(b') &= \lambda \sup_{\alpha \in \Gamma} \alpha(b) + (1-\lambda) \sup_{\alpha \in \Gamma} \alpha(b') \\
&= \sup_{\alpha \in \Gamma} \lambda \alpha(b) + \sup_{\alpha \in \Gamma} (1-\lambda) \alpha(b') \\
&\geq \sup_{\alpha \in \Gamma} \ \left[\lambda \alpha(b) + (1-\lambda) \alpha(b') \right] \\
&= \sup_{\alpha \in \Gamma} \alpha(\lambda b + (1-\lambda) b') \\
&= f(\lambda b + (1-\lambda)b') ,
\end{align*}
which shows that $f$ is convex.
We now prove the continuity of $f$.
Since every convex function is continuous on the interior of its domain, it remains to show that $f$ is continuous on the boundary of $\Delta(S)$.
Assume for contradiction that $f$ is not continuous, i.e., there exist $b_0$ on the boundary and $C > 0$ such that $f(b_0) > f(b) + C$ for all $b$ in some neighborhood of $b_0$.
Since $f$ is a pointwise supremum of linear functions, there exists $\alpha \in \Gamma$ such that $\alpha(b_0) > f(b_0) - C/2$.
At the same time, for every $b$ in this neighborhood we have $\alpha(b) \leq f(b) < f(b_0) - C$.
This contradicts the fact that $\alpha$, being linear, is continuous at $b_0$.
Furthermore, suppose that every $\alpha \in \Gamma$ is $k$-Lipschitz continuous and let $b,b'\in \Delta(S)$. We have
\begin{align*}
f(b) &= \sup_{\alpha \in \Gamma} \alpha(b) \\
&\leq \sup_{\alpha \in \Gamma} \left[ \alpha(b') + k \| b - b' \|_1 \right] & \text{(since every $\alpha \in \Gamma$ is $k$-Lipschitz)} \\
&= \left[ \sup_{\alpha \in \Gamma} \alpha(b') \right] + k \| b - b' \|_1 \\
&= f(b') + k \| b - b' \|_1.
\end{align*}
Since the identical argument proves the inequality $f(b') \leq f(b) + k\|b-b'\|_1$, this shows that $f$ is $k$-Lipschitz continuous.
\end{proof}
Recall that we aim to emulate the HSVI algorithm from POMDPs, where the optimal value function $V^*$ is approximated by a series of piecewise linear and convex functions.
One of the common ways to represent these functions is as a point-wise maximum of a finite set of linear functions (typically called \textit{$\alpha$-vectors} in the POMDP context):
\begin{definition}[Piecewise linear and convex function on $\Delta(S)$]
\thlabel{def:osposg:pwlc}
A function $f: \Delta(S) \rightarrow \mathbb{R}$ is said to be \emph{piecewise linear and convex} (PWLC) if it is of the form $f(b) = \max_{\alpha \in \Gamma} \alpha(b)$ (for each $b\in \Delta (S)$) for some finite set $\Gamma \subset \lbrace \alpha: \Delta(S) \rightarrow \mathbb{R} \mid \alpha \text{ is linear} \rbrace$.
\end{definition}
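Evaluating a PWLC function represented by a finite set $\Gamma$ of $\alpha$-vectors then amounts to a single matrix--vector product; a minimal sketch, assuming (as an illustrative convention) that $\Gamma$ is stored row-wise:

```python
import numpy as np

def pwlc_value(Gamma, b):
    """Evaluate f(b) = max_{alpha in Gamma} alpha(b) for a PWLC function.

    Gamma : (k, |S|) array whose rows are alpha-vectors (values at the vertices)
    b     : (|S|,) belief
    """
    values = Gamma @ b  # alpha(b) = sum_s alpha(s) b(s), one entry per alpha-vector
    return float(values.max())
```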
We immediately see that the preceding Proposition~\ref{thm:osposg:sup-convex} applies to any function of this type.
The next result shows that PWLC functions remain unchanged if we replace the set $\Gamma$ by its convex hull:
\begin{restatable}{proposition}{ConvexClosureDoesntIncreaseSupremum}
\label{thm:osposg:cvx-convexification}
Let $\Gamma \subset \lbrace \alpha: \Delta(S) \rightarrow \mathbb{R} \mid \alpha \text{ is linear} \rbrace$ be a set of linear functions.
Then for every $b \in \Delta(S)$ we have
\begin{equation}
\sup_{\alpha \in \Gamma} \alpha(b) = \sup_{\alpha \in \mathsf{Conv}(\Gamma)} \alpha(b) \ \text{.}
\end{equation}
\end{restatable}
In the opposite direction, every convex function can be represented as a supremum over some set of linear functions.
The following proposition shows this using the largest possible set, i.e. $\{ \alpha \leq f \mid \alpha \textnormal{ linear} \}$:
\begin{restatable}{proposition}{ConvexFunctionsAsSupremaOfLinearFs}
\label{thm:osposg:cvx-sup-representable}
Let $f: \Delta(S) \rightarrow \mathbb{R}$ be a convex continuous function.
Then there exists a set $\Gamma$ of linear functions such that $\alpha \leq f$ for every $\alpha \in \Gamma$ and $f(b) = \sup_{\alpha \in \Gamma} \alpha(b)$ for every $b \in \Delta(S)$.
\end{restatable}
\section{Composing Strategies}
\label{sec:osposg:composing}
Every behavioral strategy of the imperfectly informed player~1 can be split into the stage strategy $\pi_1$ used in the first stage of the game and the behavioral strategies used in the rest of the game after reaching an $(a_1,o)$-subgame.
We can also use the inverse principle, called \emph{strategy composition}, to form new strategies by choosing the stage strategy $\pi_1$ for the first stage and then selecting a separate behavioral strategy $\overline{\zeta}=(\zeta_{a_1,o})_{(a_1,o) \in A_1 \times O}$ for each subgame (see Figure~\ref{fig:osposg:composition} for illustration).
\begin{figure}
\caption{Composition of strategies $\zeta$ using a stage strategy $\pi_1$.}
\label{fig:osposg:composition}
\end{figure}
\begin{definition}[Strategy composition]
\thlabel{def:osposg:composite}
Let $G$ be a one-sided POSG and $\pi_1 \in \Pi_1$ a stage strategy of player~1.
Furthermore, let $\overline{\zeta} \in (\Sigma_1)^{A_1 \times O}$ be a vector representing behavioral strategies of player~1 for each $(a_1,o)$-subgame where $a_1 \in A_1$ and $o \in O$.
The \emph{strategy composition} $\mathsf{comp}(\pi_1,\overline{\zeta})$ is a behavioral strategy of player~1 such that
\begin{equation}
\mathsf{comp}(\pi_1,\overline{\zeta})(\omega) = \begin{cases}
\pi_1 & \omega = \emptyset \\
\zeta_{a_1,o}(\omega') & \omega = a_1 o \omega'
\end{cases}
\qquad
\text{ for each } \omega \in (A_1 O )^*
\ \text{.}
\label{eq:osposg:composite}
\end{equation}
\end{definition}
By composing strategies $\overline{\zeta}$ using $\pi_1$, we obtain a new strategy where the probability of playing $a_1$ in the first stage of the game is $\pi_1(a_1)$, and strategy $\zeta_{a_1,o}$ is used after playing action $a_1$ and receiving observation $o$ in the first stage of the game.
Importantly, the newly formed strategy $\mathsf{comp}(\pi_1,\overline{\zeta}) \in \Sigma_1$ is also a behavioral strategy (of imperfectly informed player~1), and therefore the properties of strategies presented in Section~\ref{sec:osposg:value} apply also to $\mathsf{comp}(\pi_1,\overline{\zeta})$.
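The composition rule in Equation~\eqref{eq:osposg:composite} can be sketched directly, representing a behavioral strategy as a map from histories $\omega \in (A_1 O)^*$ (encoded here, as an illustrative choice, as tuples of $(a_1,o)$ pairs) to stage strategies:

```python
def compose(pi1, zeta):
    """Strategy composition comp(pi1, zeta) from the definition above.

    pi1  : stage strategy used at the empty history (e.g. a dict a1 -> prob)
    zeta : dict mapping each pair (a1, o) to a behavioral strategy,
           itself a function from histories to stage strategies
    """
    def sigma(history):
        if not history:                    # omega = emptyset: play pi1
            return pi1
        (a1, o), rest = history[0], history[1:]
        return zeta[(a1, o)](rest)         # omega = a1 o omega': defer to the subgame strategy
    return sigma
```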
As the next result shows, the opposite property also holds --- for each strategy $\sigma_1 \in \Sigma_1$ of player~1, we can find the appropriate $\pi_1$ and $\overline{\zeta}$ such that $\sigma_1 = \mathsf{comp}(\pi_1,\overline{\zeta})$:
\begin{restatable}{proposition}{StrategyDecomposition}
\label{thm:osposg:decomposition}
Every behavioral strategy $\sigma_1 \in \Sigma_1$ of player~1 can be represented as a strategy composition of some stage strategy $\pi_1 \in \Pi_1$ and player~1 behavioral strategies $\zeta_{a_1,o}$.
\end{restatable}
Importantly, we can obtain values $\val^{\mathsf{comp}(\pi_1,\overline{\zeta})}$ of composite strategies without considering the entire strategy $\mathsf{comp}(\pi_1,\overline{\zeta})$.
As the following lemma shows, it suffices to consider only the first stage of the game and the \emph{values} of the strategies $\overline{\zeta} \in (\Sigma_1)^{A_1 \times O}$.
\begin{restatable}{lemma}{StrategyCompositionValue}
\label{thm:osposg:composition}
Let $G$ be a one-sided POSG and $\mathsf{comp}(\pi_1,\overline{\zeta})$ a composite strategy. Then the following holds:
\begin{align}
\val^{\mathsf{comp}(\pi_1,\overline{\zeta})}(s) = \min_{a_2 \in A_2} \E_{a_1 \sim \pi_1,\, (o,s') \sim T(\cdot \,|\, s,a_1,a_2)} \left[ R(s,a_1,a_2) + \gamma \val^{\zeta_{a_1,o}}(s') \right] \nonumber \\
\!\!\!\!\!\!\!\!\!\! = \min_{a_2 \in A_2} \sum_{a_1 \in A_1} \!\! \pi_1(a_1) \left[ R(s,a_1,a_2) + \gamma \!\!\!\!\!\!\! \sum_{(o,s') \in O \times S} \!\!\!\!\!\!\! T(o,s' \,|\, s,a_1,a_2) \val^{\zeta_{a_1,o}}(s') \right] \text{.}
\end{align}
\end{restatable}
The proof relies on the fact that when player~1 takes the action $a_1$, observes $o$, and ends up in $s'$, the strategy $\zeta_{a_1,o}$ guarantees the player gets at least $\val^{\zeta_{a_1,o}}(s')$ utility (in expectation), no matter what player~2 does.
Since the values in the rest of the game are known, it suffices to focus on the best-response strategy of player~2 in the first stage of the game.
\subsection{Generalized Composition}
Lemma~\ref{thm:osposg:composition} suggests that we can use composition of \emph{values} of strategies $\val^{\zeta_{a_1,o}}$ to form values of composite strategies $\val^{\mathsf{comp}(\pi_1,\overline{\zeta})}$.
In this section, we still consider linear functions $\val^{\zeta_{a_1,o}}$, but we relax the assumption that these functions represent values of some specific behavioral strategy.
This allows us to derive a generalized principle of composition and approximate the value function $V^*$ by a supremum of arbitrary linear functions (as opposed to functions $\val^{\sigma_1}$).
Throughout the text, we will use $\linDS$ to denote the set of linear functions on $\Delta(S)$ (i.e., $\alpha$-vectors).
We will also use the term `linear' to refer to functions that satisfy $f( \lambda b + (1-\lambda)b') = \lambda f(b) + (1-\lambda)f(b')$ on $\Delta(S)$.
\begin{definition}[Value composition]
\thlabel{def:osposg:val-composition}
Let $\pi_1 \in \Pi_1$ and $\overline{\alpha} \in (\linDS)^{A_1 \times O}$.
\emph{Value composition} $\mathsf{valcomp}(\pi_1,\overline{\alpha}): \Delta(S) \rightarrow \R$ is a linear function defined by the values in vertices of the $\Delta(S)$ simplex as follows:
\begin{align*}
\mathsf{valcomp}(\pi_1,\overline{\alpha})(s) = & \min_{a_2 \in A_2} \sum_{a_1 \in A_1} \pi_1(a_1) \Big[ R(s,a_1,a_2) \ + \numberthis\label{eq:osposg:valcomp}\\
& \qquad \gamma \sum_{(o,s') \in O \times S} T(o,s' \,|\, s,a_1,a_2) \alpha_{a_1,o}(s') \Big] \ \text{.}
\end{align*}
\end{definition}
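Equation~\eqref{eq:osposg:valcomp} is a finite computation over the vertices of the simplex; a sketch, under the same (hypothetical) array conventions as before:

```python
import numpy as np

def valcomp(pi1, alpha, R, T, gamma):
    """Value composition valcomp(pi1, alpha) evaluated at all simplex vertices.

    pi1   : (|A1|,) stage strategy of player 1
    alpha : (|A1|, |O|, |S|) array, alpha[a1, o] is the alpha-vector alpha_{a1,o}
    R     : (|S|, |A1|, |A2|) rewards R(s, a1, a2)
    T     : (|S|, |A1|, |A2|, |O|, |S|) kernel, T[s, a1, a2, o, s'] = T(o, s'|s, a1, a2)
    Returns a (|S|,) array of values valcomp(pi1, alpha)(s).
    """
    # expected continuation value for each (s, a1, a2): sum over (o, s')
    future = np.einsum('sabot,aot->sab', T, alpha)
    payoff = R + gamma * future                   # (|S|, |A1|, |A2|)
    mixed = np.einsum('a,sab->sb', pi1, payoff)   # average over player 1's action a1
    return mixed.min(axis=1)                      # best response: min over a2
```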
Observe that according to Lemma~\ref{thm:osposg:composition}, $\mathsf{valcomp}(\pi_1,\overline{\alpha}) = \val^{\mathsf{comp}(\pi_1,\overline{\zeta})}$ for $\alpha_{a_1,o} = \val^{\zeta_{a_1,o}}$.
The value composition $\mathsf{valcomp}(\pi_1,\overline{\alpha})$, however, admits arbitrary linear functions $\alpha_{a_1,o}$, not only values $\val^{\zeta_{a_1,o}}$ of some strategies $\zeta_{a_1,o} \in \Sigma_1$.
Moreover, as long as linear functions $\alpha_{a_1,o}$ serve as lower bounds for values of some strategies, so will the corresponding value composition serve as a lower bound for the corresponding composite strategy:
\begin{restatable}{lemma}{GenCompositionValue}
\label{thm:osposg:gen-composition}
Let $\pi_1 \in \Pi_1$ be a stage strategy of player~1 and $\overline{\alpha} \in (\linDS)^{A_1 \times O}$ a~vector of linear functions s.t. for each $\alpha_{a_1,o}$ there exists a strategy $\zeta_{a_1,o} \in \Sigma_1$ with $\val^{\zeta_{a_1,o}} \geq \alpha_{a_1,o}$.
Then there exists a strategy $\sigma_1 \in \Sigma_1$ such that $\sigma_1(\emptyset) = \pi_1$ and $\val^{\sigma_1} \geq \mathsf{valcomp}(\pi_1,\overline{\alpha})$.
\end{restatable}
In the case of values of composite strategies, we know that $\val^{\mathsf{comp}(\pi_1,\overline{\zeta})}$ is a $\delta$-Lipschitz continuous linear function (since $\mathsf{comp}(\pi_1,\overline{\zeta}) \in \Sigma_1$ is a behavioral strategy of player~1 and \thref{thm:osposg:strategy-value-lipschitz} applies).
Additionally, we prove that as long as linear functions $\alpha_{a_1,o}$ are bounded by $L \leq \alpha_{a_1,o}(b) \leq U$ for every belief $b \in \Delta(S)$, and are therefore $\delta$-Lipschitz continuous, the value composition $\mathsf{valcomp}(\pi_1,\overline{\alpha})$ is also $\delta$-Lipschitz.
\begin{restatable}{lemma}{ValcompLipschitz}
\label{thm:osposg:valcomp-lipschitz}
Let $\pi_1 \in \Pi_1$ and $\overline{\alpha} \in (\linDS)^{A_1 \times O}$ such that $L \leq \alpha_{a_1,o}(b) \leq U$ for every $b \in \Delta(S)$.
Then $L \leq \mathsf{valcomp}(\pi_1,\overline{\alpha})(b) \leq U$ for every $b \in \Delta(S)$ and $\mathsf{valcomp}(\pi_1,\overline{\alpha})$ is a $\delta$-Lipschitz continuous function.
\end{restatable}
\section{Bellman Equation for One-Sided POSGs}
\label{sec:osposg:bellman}
In Section~\ref{sec:osposg:value}, we have defined the value function $V^*$ as the supremum over the strategies player~1 can achieve in each of the beliefs (see \thref{def:osposg:v-star}).
However, while this correctly defines the value function, it does not provide a straightforward recipe for obtaining the value $V^*(b)$ for a given belief $b \in \Delta(S)$.
Computing the value for a given belief directly according to \thref{def:osposg:v-star} is as hard as solving the game itself.
In this section, we provide an alternative characterization of the optimal value function $V^*$ inspired by the value iteration methods, e.g., for Markov decision processes (MDPs) and their partially observable variant (POMDPs).
The high-level idea behind these approaches is to start with a coarse approximation $V_0: \Delta(S) \rightarrow \mathbb{R}$ of the value function $V^*$, and then iteratively improve the approximation by applying Bellman's operator $H$, i.e., generate a sequence such that $V_{i+1} = HV_i$.
In our case, the improvement is based on finding a new, previously unknown, strategy that achieves higher values for each of the beliefs by means of value composition principle (\thref{def:osposg:val-composition}).
Throughout this section, we will consider value functions that are represented as a point-wise supremum over a (possibly infinite) set $\Gamma$ of linear functions (called $\alpha$-vectors), i.e.,
\begin{equation}
V(b) = \sup_{\alpha \in \Gamma} \alpha(b) \qquad \text{for } \Gamma \subset \left\lbrace \alpha: \Delta(S) \rightarrow \mathbb{R} \mid \alpha \text{ is linear} \right\rbrace \ \text{.}
\end{equation}
By Proposition~\ref{thm:osposg:cvx-convexification}, we can always assume that the set $\Gamma$ is convex (this comes at no loss of generality).
For more details on this representation of value functions see Section~\ref{sec:osposg:cvx}.
\begin{definition}[Max-composition]
\thlabel{def:osposg:H-valcomp}
Let $V: \Delta(S) \rightarrow \R$ be a convex continuous function and let $\Gamma$ be a convex set of linear functions such that $V(b) = \sup_{\alpha \in \Gamma} \alpha(b)$.
The \emph{max-composition} operator $H$ is defined as
\begin{equation}
[HV](b) = \max_{\pi_1 \in \Pi_1} \sup_{\overline{\alpha} \in \Gamma^{A_1 \times O}} \mathsf{valcomp}(\pi_1,\overline{\alpha})(b) \ \text{.} \label{eq:osposg:max-composition}
\end{equation}
\end{definition}
We will now prove several fundamental properties of the max-composition operator $H$ from \thref{def:osposg:H-valcomp}.
First, we will show that this operator preserves continuity and convexity, allowing us to apply the operator iteratively.
Second, we introduce equivalent formulations of the operator $H$, which represent the solution of $[HV](b)$ in a more traditional form of finding a Nash equilibrium of a stage-game.
These formulations also allow us to show that the behaviour of $H$ is not sensitive to the choice of the set $\Gamma$ used to represent the value function $V$.
Finally, we conclude by showing that the operator $H$ can indeed be used to approximate the optimal value function $V^*$.
Namely, we show that $H$ is a contraction mapping (and thus iterated application converges to a unique fixpoint) and that its fixpoint is the optimal value function $V^*$.
\begin{restatable}{proposition}{HVLipschitzConvex}
\label{thm:osposg:hv-convex}
Let $V: \Delta(S) \rightarrow \R$ be a convex continuous function and let $\Gamma$ be a convex set of linear functions such that $V(b)=\sup_{\alpha \in \Gamma} \alpha(b)$.
Then $HV$ is also convex and continuous.
Furthermore, if $V$ is $\delta$-Lipschitz continuous, the function $HV$ is $\delta$-Lipschitz continuous as well.
\end{restatable}
The proof of this result proceeds by rewriting $HV$ as a supremum over all value compositions and using our earlier observations about the convexity and Lipschitz continuity of such suprema.
We will now prove that the max-composition operator $H$ can be alternatively characterized using max-min and min-max optimization.
Recall that $\tau(b,a_1,\pi_2,o)$ denotes the Bayesian update of belief $b$ given that player~1 played $a_1$ and observed $o$, and player~2 is assumed to follow stage strategy $\pi_2$ in the current round (see Equation~\eqref{eq:osposg:tau}).
\begin{restatable}{theorem}{Bellman}
\label{thm:osposg:bellman}
Let $V: \Delta(S) \rightarrow \R$ be a convex continuous function and let $\Gamma$ be a convex set of linear functions on $\Delta(S)$ such that $V(b) = \sup_{\alpha \in \Gamma} \alpha(b)$ for every belief $b \in \Delta(S)$.
Then the following definitions of operator $H$ are equivalent:
\begin{subequations}
\begin{align}
& [HV](b) = \nonumber \\
& = \max_{\pi_1 \in \Pi_1} \sup_{\overline{\alpha} \in \Gamma^{A_1 \times O}} \mathsf{valcomp}(\pi_1,\overline{\alpha})(b) \label{eq:osposg:equiv-valcomp}\\
&= \! \max_{\pi_1 \in \Pi_1} \!\! \min_{\pi_2 \in \Pi_2} \left[ \E_{b,\pi_1,\pi_2}[R(s,a_1,a_2)] + \gamma \sum_{a_1,o} \mathbb{P}_{b,\pi_1,\pi_2}[a_1,o] \cdot V(\tau(b,a_1,\pi_2,o)) \right] \label{eq:osposg:H-maxmin}\\
&= \! \min_{\pi_2 \in \Pi_2} \!\! \max_{\pi_1 \in \Pi_1} \left[ \E_{b,\pi_1,\pi_2}[R(s,a_1,a_2)] + \gamma \sum_{a_1,o} \mathbb{P}_{b,\pi_1,\pi_2}[a_1,o] \cdot V(\tau(b,a_1,\pi_2,o)) \right] \ \text{.} \label{eq:osposg:H-minmax}
\end{align}
\end{subequations}
\end{restatable}
The proof consists of verifying the assumptions of von Neumann's minimax theorem, which shows the equivalence of \eqref{eq:osposg:H-maxmin} and \eqref{eq:osposg:H-minmax}.
The equivalence of \eqref{eq:osposg:H-maxmin} and \eqref{eq:osposg:equiv-valcomp} can then be shown by reformulating each stage game as a separate zero-sum game and verifying that it satisfies the assumptions of Sion's generalization of the minimax theorem~\cite{sion1958-minimax}.
\begin{corollary}\thlabel{thm:osposg:gamma-independent}
Bellman's operator $H$ does not depend on the convex set $\Gamma$ of linear functions used to represent the convex value function $V$.
\end{corollary}
Since the maximin and minimax values of the game (from equations~\eqref{eq:osposg:H-maxmin} and \eqref{eq:osposg:H-minmax}) coincide, the value $[HV](b)$ corresponds to the Nash equilibrium in the stage game.
We now define this stage game formally.
\begin{definition}[Stage game]
\thlabel{def:osposg:stage-game}
A \emph{stage game} with respect to a convex continuous value function $V: \Delta(S) \rightarrow \R$ and belief $b \in \Delta(S)$ is a two-player zero-sum game with strategy spaces $\Pi_1$ for the maximizing player~1 and $\Pi_2$ for the minimizing player~2, and payoff function
\begin{equation}
u^{V,b}(\pi_1,\pi_2) = \E_{b,\pi_1,\pi_2}[R(s,a_1,a_2)] + \gamma \sum_{a_1,o} \mathbb{P}_{b,\pi_1,\pi_2}[a_1,o] \cdot V(\tau(b,a_1,\pi_2,o)) \ \text{.}
\end{equation}
With a slight abuse of notation, we use $[HV](b)$ to refer both to the max-composition operator (\thref{def:osposg:H-valcomp}) as well as to this stage game.
\end{definition}
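The stage-game payoff $u^{V,b}$ can likewise be sketched for a value function supplied as a callable; the loop over pairs $(a_1,o)$ mirrors the sum in the definition, and the array conventions are again our illustrative ones:

```python
import numpy as np

def stage_payoff(b, pi1, pi2, V, R, T, gamma):
    """Payoff u^{V,b}(pi1, pi2) of the stage game [HV](b).

    b   : (|S|,) belief;  pi1 : (|A1|,) stage strategy of player 1
    pi2 : (|S|, |A2|) stage strategy of player 2, pi2[s, a2] = pi2(a2|s)
    V   : callable mapping a belief (|S|,) to a float (the value function)
    R   : (|S|, |A1|, |A2|) rewards;  T : (|S|, |A1|, |A2|, |O|, |S|) kernel
    """
    # expected immediate reward E_{b,pi1,pi2}[R(s, a1, a2)]
    total = np.einsum('s,a,sc,sac->', b, pi1, pi2, R)
    n_A1, n_O = R.shape[1], T.shape[3]
    for a1 in range(n_A1):
        for o in range(n_O):
            # unnormalized next belief over s' after (a1, o)
            b_next = np.einsum('s,sc,sct->t', b, pi2, T[:, a1, :, o, :])
            p = pi1[a1] * b_next.sum()  # P_{b,pi1,pi2}[a1, o]
            if p > 0:
                total += gamma * p * V(b_next / b_next.sum())
    return float(total)
```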
We will now show that Bellman's operator $H$ is a contraction mapping.
Recall that a mapping $H$ is a contraction if there exists $0 \leq k < 1$ such that $\| HV_1 - HV_2 \| \leq k \| V_1 - V_2 \|$ for all $V_1$, $V_2$.
We consider the metric $\| V_1 - V_2 \|_\infty = \max_{b \in \Delta(S)} | V_1(b) - V_2(b) |$ induced by the $l_\infty$ norm.
First, we focus on a single belief point and identify a criterion which ensures that $| HV_1(b) - HV_2(b) | \leq k | V_1(b) - V_2(b) |$.
While somewhat technical, this criterion will enable us to demonstrate the contractivity of $H$.
Moreover, it will also be useful in Section~\ref{sec:osposg:hsvi:alg} to prove the correctness of the HSVI algorithm proposed therein.
\begin{restatable}{lemma}{ContractivityLemma}
\label{thm:osposg:point-contractivity}
Let $V, W: \Delta(S) \rightarrow \R$ be two convex continuous value functions and $b \in \Delta(S)$ a belief such that $[HV](b) \leq [HW](b)$.
Let $(\pi_1^V,\pi_2^V)$ and $(\pi_1^W,\pi_2^W)$ be Nash equilibrium strategy profiles in stage games $[HV](b)$ and $[HW](b)$, respectively, and $C \geq 0$.
If $W(\tau(b,a_1,\pi_2^V,o)) - V(\tau(b,a_1,\pi_2^V,o)) \leq C$ for every action $a_1 \in \Supp(\pi_1^W)$ of player~1 and every observation $o \in O$ such that $\mathbb{P}_{b,\pi_1^W,\pi_2^V}[o \,|\, a_1] > 0$, then $[HW](b) - [HV](b) \leq \gamma C$.
\end{restatable}
\begin{lemma}\thlabel{thm:osposg:contraction}
Operator $H$ is a contraction on the space of convex continuous functions $V: \Delta(S) \rightarrow \R$ (under the supremum norm), with contraction factor $\gamma$.
\end{lemma}
\begin{proof}
Let $V, W: \Delta(S) \rightarrow \R$ be convex functions such that $\| V - W \|_{\infty} = \max_{b \in \Delta(S)} | V(b) - W(b) | \leq C$.
To prove the contractivity of $H$, it suffices to show that $\| HV - HW \|_{\infty} \leq \gamma C$, i.e., $| [HV](b) - [HW](b) | \leq \gamma C$ for every belief $b \in \Delta(S)$.
Since $| V(b) - W(b) | \leq C$ holds for every belief $b$, Lemma~\ref{thm:osposg:point-contractivity} yields both $HV(b)-HW(b) \leq \gamma C$ and $HW(b)-HV(b) \leq \gamma C$.
\end{proof}
Next, we show that the optimal value function from \thref{def:osposg:v-star} is a fixpoint of Bellman's operator $H$.
Intuitively, this holds because $V^*$ can be represented as a supremum over all possible value functions $\val^{\sigma_1}$, which remains unchanged as we apply the operator $H$ (resp. the value-compositions it consists of).
\begin{restatable}{lemma}{FixpointLemma}
\label{thm:osposg:fixed-point}
The optimal value function $V^*$ satisfies $V^*=HV^*$.
\end{restatable}
\noindent
Together, the two results ensure that $H$ can be applied iteratively to obtain $V^*$:
\begin{theorem}
\thlabel{thm:osposg:unique-fixpoint}
$V^*$ is a unique fixpoint of $H$.
Moreover, for any convex function $V_0$, the sequence $\lbrace V_i \rbrace_{i=0}^\infty$ such that $V_i = HV_{i-1}$ converges to $V^*$.
\end{theorem}
\begin{proof}
By Lemma~\ref{thm:osposg:fixed-point}, $V^*$ is \textit{a} fixpoint of $H$.
By \thref{thm:osposg:contraction}, $H$ is a contraction mapping on the space of convex value functions.
Banach's fixed point theorem~\citep{ciesielski2007-banach} then implies the uniqueness and the ``moreover'' part.
\end{proof}
\section{Exact Value Iteration}
\label{sec:osposg:vi}
In Section~\ref{sec:osposg:bellman}, we have shown that the optimal value function can be approximated by means of composing strategies in the sense of max-composition introduced in \thref{def:osposg:H-valcomp}.
In this section, we provide a linear programming formulation to perform such optimal composition for value functions that are piecewise linear and convex, i.e., can be represented as a point-wise maximum of a finite set $\Gamma$ of linear functions.
Furthermore, we show that as long as the value function $V$ is piecewise linear and convex, $HV$ is also piecewise linear and convex.
This allows for using the same linear program (LP) iteratively to approximate the optimal value function $V^*$ by means of constructing a sequence of piecewise linear and convex value functions $\lbrace V_i \rbrace_{i=1}^\infty$ such that $V_i = HV_{i-1}$.
\subsection{Computing Max-Compositions}
In order to compute $HV$ given a piecewise linear and convex (PWLC) value function $V$, it is essential to solve Equation~\eqref{eq:osposg:max-composition}.
Every PWLC value function can be represented as a point-wise maximum over a finite set of linear functions $\lbrace \alpha_1, \ldots, \alpha_k \rbrace$ (see \thref{def:osposg:pwlc}).
Without loss of generality, we assume that the set $\Gamma$ used to represent the value function $V$ is the convex hull of the aforementioned set:
\begin{equation}
\Gamma \coloneqq \mathsf{Conv}\left( \left\lbrace \alpha_1, \ldots, \alpha_k \right\rbrace \right) = \left\lbrace \sum_{i=1}^k \lambda_i \alpha_i \mid \lambda \in \mathbb{R}^k_{\geq 0}, \| \lambda \|_1 = 1 \right\rbrace \ \text{.} \label{eq:gamma-conv}
\end{equation}
Recall that forming a convex hull of the set of linear functions used to represent $V$ does not affect the values $V$ attains (by Proposition~\ref{thm:osposg:cvx-convexification}).
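The convexification argument can be illustrated with a short sketch (toy numbers of our own choosing, not data from the text): a PWLC value function is evaluated as the pointwise maximum of linear functions, and any convex combination of $\alpha$-vectors is dominated by that maximum, which is why convexifying $\Gamma$ leaves the attained values unchanged.

```python
import numpy as np

# Two alpha-vectors over a 2-state belief simplex (hypothetical values).
alphas = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
b = np.array([0.3, 0.7])

# V(b) = max_i <alpha_i, b>
v = max(float(a @ b) for a in alphas)

# Any convex combination of alpha-vectors is dominated by the maximum,
# so taking the convex hull of the set does not change V.
lam = np.array([0.5, 0.5])
mixed = lam[0] * alphas[0] + lam[1] * alphas[1]
assert float(mixed @ b) <= v
```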
We will now show that when the set $\Gamma$ is represented as in Equation~\eqref{eq:gamma-conv}, linear programming can be used to compute $HV(b)$:
\begin{restatable}{lemma}{LP}
\label{thm:lp}
Let $\Gamma = \mathsf{Conv}\left( \left\lbrace \alpha_1, \ldots, \alpha_k \right\rbrace \right)$ be the convex hull of a finite set of $\alpha$-vectors.
Then $[HV](b)$ coincides with the solution of the following linear program:
\begin{subequations}
\label{eq:osposg:max-composition-lp}
\begin{align*}
\max_{\pi_1,\hat{\lambda},\hat{\alpha},V} \ & \sum_{s \in S} b(s) \cdot V(s) \numberthis\label{eq:osposg:max-composition-lp:objective}\\
\text{s.t.} \ \ \ & V(s) \leq \sum_{a_1 \in A_1} \pi_1(a_1) R(s,a_1,a_2) \ + \gamma \!\!\!\!\!\!\!\!\!\!\!\! \sum_{(a_1,o,s') \in A_1 \times O \times S} \!\!\!\!\!\!\!\!\!\!\!\! T(o,s' \,|\, s,a_1,a_2) \hat{\alpha}^{a_1,o}(s') \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \\
& \hspace{20em} \forall (s,a_2) \in S \times A_2 \numberthis\label{eq:osposg:max-composition-lp:br} \\
& \hat{\alpha}^{a_1,o}(s') = \sum_{i=1}^k \hat{\lambda}_i^{a_1,o} \cdot \alpha_i(s') \hspace{4.5em} \forall (a_1,o,s') \in A_1 \times O \times S \numberthis\label{eq:osposg:max-composition-lp:convexification}\\
& \sum_{i=1}^k \hat{\lambda}_i^{a_1,o} = \pi_1(a_1) \hspace{11.5em} \forall (a_1,o) \in A_1 \times O \numberthis\label{eq:osposg:max-composition-lp:lambda-hat}\\
& \sum_{a_1 \in A_1} \pi_1(a_1) = 1 \numberthis\label{eq:osposg:max-composition-lp:pi-sum}\\
& \pi_1(a_1) \geq 0 \hspace{19em} \forall a_1 \in A_1 \numberthis\label{eq:osposg:max-composition-lp:pi-positive}\\
& \hat{\lambda}_i^{a_1,o} \geq 0 \hspace{11em} \forall (a_1,o) \in A_1 \times O, 1 \leq i \leq k \numberthis\label{eq:osposg:max-composition-lp:last}\\
\end{align*}
\end{subequations}
\end{restatable}
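As a concrete (and heavily simplified) illustration, the linear program~\eqref{eq:osposg:max-composition-lp} can be sketched with off-the-shelf LP software. The sketch below assumes \texttt{scipy} is available and uses hypothetical array layouts of our own choosing; the $\hat{\alpha}$ variables are substituted out using constraint~\eqref{eq:osposg:max-composition-lp:convexification}, so only $\pi_1$, $\hat{\lambda}$, and $V$ remain.

```python
import numpy as np
from scipy.optimize import linprog

def hv_at_belief(R, T, alphas, b, gamma):
    """Sketch of the max-composition LP for [HV](b).

    Hypothetical layouts (not fixed by the text):
      R[s, a1, a2]          immediate reward
      T[s, a1, a2, o, s2]   T(o, s2 | s, a1, a2)
      alphas                k x |S| array of alpha-vectors representing V
    Variables: pi1 (A1), lambda-hat (A1*O*k), V (S).
    """
    S, A1, A2 = R.shape
    O, k = T.shape[3], alphas.shape[0]
    n = A1 + A1 * O * k + S
    idx_pi = lambda a1: a1
    idx_lam = lambda a1, o, i: A1 + (a1 * O + o) * k + i
    idx_v = lambda s: A1 + A1 * O * k + s

    c = np.zeros(n)
    c[A1 + A1 * O * k:] = -b                 # maximize sum_s b(s) V(s)

    A_ub, b_ub = [], []                      # best-response constraints
    for s in range(S):
        for a2 in range(A2):
            row = np.zeros(n)
            row[idx_v(s)] = 1.0
            for a1 in range(A1):
                row[idx_pi(a1)] -= R[s, a1, a2]
                for o in range(O):
                    for i in range(k):
                        row[idx_lam(a1, o, i)] -= gamma * (T[s, a1, a2, o, :] @ alphas[i])
            A_ub.append(row); b_ub.append(0.0)

    A_eq, b_eq = [], []                      # sum_i lambda-hat = pi1(a1)
    for a1 in range(A1):
        for o in range(O):
            row = np.zeros(n)
            for i in range(k):
                row[idx_lam(a1, o, i)] = 1.0
            row[idx_pi(a1)] = -1.0
            A_eq.append(row); b_eq.append(0.0)
    row = np.zeros(n); row[:A1] = 1.0        # sum_{a1} pi1(a1) = 1
    A_eq.append(row); b_eq.append(1.0)

    bounds = [(0, None)] * (A1 + A1 * O * k) + [(None, None)] * S
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    assert res.success
    return -res.fun

# Toy instance: one state, action, and observation; V given by alpha = [2.0],
# gamma = 0.5, so [HV](b) = 1 + 0.5 * 2 = 2.
val = hv_at_belief(np.ones((1, 1, 1)), np.ones((1, 1, 1, 1, 1)),
                   np.array([[2.0]]), np.array([1.0]), gamma=0.5)
```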
Later in the text, we also use the following dual formulation of the linear program~\eqref{eq:osposg:max-composition-lp} (with some minor modifications to improve readability):
\begin{subequations}
\label{eq:osposg:max-composition-dual}
\begin{align}
\min_{V,\pi_2,\hat{\tau}} \ & V \\
\text{s.t.} \ & V \geq \!\!\!\!\!\!\!\! \sum_{(s,a_2) \in S \times A_2} \!\!\!\!\!\!\!\! \pi_2(s \wedge a_2) R(s,a_1,a_2) + \gamma \sum_{o \in O} \hat{V}(a_1,o) \!\!\!\!\!\!\!\! & \forall a_1 \label{eq:osposg:max-composition-dual:br} \\
& \hat{V}(a_1,o) \geq \sum_{s' \in S} \hat{\tau}(b,a_1,\pi_2,o)(s') \cdot \alpha_i(s') & \forall (a_1,o), 1 \leq i \leq k \label{eq:osposg:max-composition-dual:subgame}\\
& \hat{\tau}(b,a_1,\pi_2,o)(s') = \!\!\!\!\!\!\!\!\! \sum_{(s,a_2) \in S \times A_2} \!\!\!\!\!\!\!\!\! T(o,s' \,|\, s,a_1,a_2) \pi_2(s \wedge a_2) \!\!\!\!\!\!\!\!\!\! & \forall (a_1,o,s') \label{eq:osposg:max-composition-dual:tau}\\
& \sum_{a_2 \in A_2} \pi_2(s \wedge a_2) = b(s) & \forall s \label{eq:osposg:max-composition-dual:pi-sum}\\
& \pi_2(s \wedge a_2) \geq 0 & \forall (s, a_2) \label{eq:osposg:max-composition-dual:pi-positive}
\end{align}
\end{subequations}
Here, the stage strategy of player~2 is represented as a joint probability $\pi_2(s \wedge a_2)$ of playing action $a_2 \in A_2$ while being in state $s \in S$ (i.e., $\pi_2(a_2 \,|\, s) = \pi_2(s \wedge a_2) / b(s)$ where applicable).
Player~1 then seeks the best response $a_1 \in A_1$ (constraint~\eqref{eq:osposg:max-composition-dual:br}) that maximizes the sum of the expected immediate reward and the $\gamma$-discounted utility in the $(a_1,o)$-subgames.
The beliefs $\tau(b,a_1,\pi_2,o)$ in the subgames are multiplied by the probability of reaching the $(a_1,o)$-subgame (i.e., there is no division by $\mathbb{P}_{b,a_1,\pi_2}[a_1,o]$ in Equation~\eqref{eq:osposg:max-composition-dual:tau}); consequently, the subgame values $\hat{V}(a_1,o)$ need not be multiplied by $\mathbb{P}_{b,a_1,\pi_2}[a_1,o]$ either.
The value $\hat{V}(a_1,o)$ of an $(a_1,o)$-subgame is the maximum $\max_{\alpha \in \Gamma} \alpha(\tau(b,a_1,\pi_2,o))$, captured by constraints~\eqref{eq:osposg:max-composition-dual:subgame}.
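The unnormalized belief update of constraint~\eqref{eq:osposg:max-composition-dual:tau} can be sketched as follows (a toy example with hypothetical array layouts of our own choosing, not code from this work); its mass equals the reach probability, which is why no division is needed.

```python
import numpy as np

def tau_hat(T, pi2_joint, a1, o):
    """Unnormalized next-stage belief hat-tau(b, a1, pi2, o).

    Hypothetical layouts:
      T[s, a1, a2, o, s2] = T(o, s2 | s, a1, a2)
      pi2_joint[s, a2]    = pi2(s ^ a2), which already encodes the belief b.
    """
    # sum over (s, a2) of T(o, s2 | s, a1, a2) * pi2(s ^ a2)
    return np.einsum('sxt,sx->t', T[:, a1, :, o, :], pi2_joint)

S, A1, A2, O = 2, 1, 2, 2
rng = np.random.default_rng(0)
T = rng.random((S, A1, A2, O, S))
T /= T.sum(axis=(3, 4), keepdims=True)          # normalize over (o, s2)
b = np.array([0.4, 0.6])
pi2_cond = np.array([[0.5, 0.5], [0.2, 0.8]])   # pi2(a2 | s)
pi2_joint = b[:, None] * pi2_cond

# The mass of hat-tau is the probability of the (a1, o) pair; summed over
# all observations it recovers 1, so no renormalization is required.
t1 = tau_hat(T, pi2_joint, a1=0, o=1)
t0 = tau_hat(T, pi2_joint, a1=0, o=0)
```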
\subsection{Value Iteration}
To run a value iteration algorithm that would apply the linear program~\eqref{eq:osposg:max-composition-lp} repeatedly, we require that every $V_i$ in the sequence $\lbrace V_i \rbrace_{i=0}^\infty$, starting from an arbitrary PWLC value function $V_0$, is also piecewise linear and convex.
By \thref{thm:osposg:hv-pwlc} this is always the case.
\begin{lemma}
\label{thm:lp-vertices}
Let $Q$ be the set of vertices of the polytope defined by constraints
\eqref{eq:osposg:max-composition-lp:br}-\eqref{eq:osposg:max-composition-lp:last},
and let $(\pi_1^q,\hat{\alpha}^q)$ be the assignment of the variables $\pi_1$ and $\hat{\alpha}$ corresponding to the vertex $q \in Q$.
Then\footnote{Note that the vectors $\overline{\alpha}^q(a_1,o)$ for $a_1$ with $\pi_1^q(a_1)=0$ do not contribute to $\mathsf{valcomp}(\pi_1^q,\overline{\alpha}^q)$. In parts of the game that are not reached by player~1, we can thus define $\overline{\alpha}^q$ arbitrarily.}
\begin{equation}
[HV](b) = \max_{q \in Q} \mathsf{valcomp}(\pi_1^q, \overline{\alpha}^q) \qquad \text{ for } \overline{\alpha}^q(a_1,o)=\hat{\alpha}^q(a_1,o) / \pi_1^q(a_1) \ \text{.} \label{eq:thm:lp-vertices}
\end{equation}
\end{lemma}
\begin{proof}
Consider the LP~\eqref{eq:osposg:max-composition-lp} which computes the optimal value composition $\mathsf{valcomp}(\pi_1,\overline{\alpha})$ in $[HV](b)$ (see Lemma~\ref{thm:lp}).
The polytope of feasible solutions of the LP defined by the constraints~\eqref{eq:osposg:max-composition-lp:br}--\eqref{eq:osposg:max-composition-lp:last} is independent of the belief $b$ (which only appears in the objective~\eqref{eq:osposg:max-composition-lp:objective}).
Therefore, the set $Q$ of vertices of this polytope is also independent of belief $b \in \Delta(S)$.
The optimal solution of a linear programming problem~\eqref{eq:osposg:max-composition-lp} representing $[HV](b)$ can be found within the vertices $Q$ of the polytope of feasible solutions~\citep{vanderbei2015-lp}.
There is a finite number of vertices $q \in Q$, and each vertex $q \in Q$ corresponds to some assignment of variables defining the value composition $\mathsf{valcomp}(\pi_1^q,\overline{\alpha}^q)$.
Since the set $Q$ of the vertices of the polytope is independent of the belief $b$, we get the desired result.
\end{proof}
\begin{lemma}
\thlabel{thm:osposg:hv-pwlc}
If $V$ is a piecewise linear and convex function, then so is $HV$.
\end{lemma}
\begin{proof}
This lemma is a direct consequence of \thref{thm:lp-vertices}.
Since the number of vertices of the polytope of LP~\eqref{eq:osposg:max-composition-lp} is finite, the pointwise maximization in~\eqref{eq:thm:lp-vertices} defines a PWLC function.
\end{proof}
We can use the above-stated results to iteratively construct a sequence of value functions $\lbrace V_i \rbrace_{i=0}^\infty$ such that $V_0$ is an arbitrary PWLC function and $V_i = HV_{i-1}$.
Namely, we construct $V_i$ by enumerating the vertices of the polytope defined by the linear program~\eqref{eq:osposg:max-composition-lp} and constructing appropriate linear functions $\mathsf{valcomp}(\pi_1^q, \overline{\alpha}^q)$.
By \thref{thm:lp-vertices}, these linear functions form the set of $\alpha$-vectors needed to represent a PWLC (\thref{thm:osposg:hv-pwlc}) function $V_i$.
According to \thref{thm:osposg:unique-fixpoint} this sequence converges to $V^*$:
\begin{corollary}
Starting from an arbitrary PWLC value function $V_0$, a repeated application of the LP \eqref{eq:osposg:max-composition-lp}, as described in \thref{thm:lp-vertices}, converges to $V^*$.
\end{corollary}
A more efficient algorithm can be devised based on, e.g., the linear support algorithm for POMDPs~\citep{cheng1988-pomdp-algorithms}.
Here, the set $\Gamma'$ of linear functions defining $HV$ is constructed incrementally, terminating once it is provably large enough to represent the value function $HV$.
Exact value iteration algorithms for POMDPs are, however, generally considered capable of solving only very small problems.
We cannot, therefore, expect such approaches to perform well on one-sided POSGs, which are more general than POMDPs.
The next section remedies this issue by providing a point-based approach for solving one-sided POSGs.
\section{Heuristic Search Value Iteration for OS-POSGs}
\label{sec:osposg:hsvi}
In this section, we provide a scalable algorithm for solving one-sided POSGs, inspired by the \emph{heuristic search value iteration} (HSVI) algorithm~\citep{smith2004-hsvi,smith2005-hsvi} for approximating value function of POMDPs (summarized in Section~\ref{sec:pomdps}).
Our algorithm approximates the convex optimal value function $V^*$ using a pair of piecewise linear and convex value functions $\uv$ (lower bound on $V^*$) and $\ov$ (upper bound on $V^*$).
These bounds are refined over time and, given the initial belief $b^{\mathrm{init}}$ and the desired precision $\varepsilon > 0$, the algorithm is guaranteed to approximate the value $V^*(b^{\mathrm{init}})$ within $\varepsilon$.
In Section~\ref{sec:osposg:playing}, we show that this process also generates value functions that allow us to extract $\varepsilon$-Nash equilibrium strategies of the game.
We first show the approximation schemes used to represent $\uv$ and $\ov$, and the methods to initialize these bounds (Section~\ref{sec:osposg:hsvi:vf}).
We then discuss the so-called ``point-based updates'' which are used to refine the bounds induced by $\uv$ and $\ov$ (Section~\ref{sec:osposg:hsvi:pb-updates}).
Finally, in Section~\ref{sec:osposg:hsvi:alg}, we describe the algorithm and prove its correctness.
\subsection{Value Function Representations}
\label{sec:osposg:hsvi:vf}
Following the results on POMDPs and the original HSVI algorithm~\citep{hauskrecht2000-value-functions,smith2004-hsvi}, we use two distinct methods to represent upper and lower PWLC bounds on $V^*$.
\paragraph{Lower bound $\uv$}
Similarly as in the previous sections, the lower bound $\uv: \Delta(S) \rightarrow \R$ is represented as a point-wise maximum over a finite set $\Gamma$ of linear functions called $\alpha$-vectors, i.e., $\uv(b) = \max_{\alpha \in \Gamma} \alpha(b)$.
Each $\alpha \in \Gamma$ is a linear function $\alpha: \Delta(S) \rightarrow \R$ represented by its values $\alpha(s)$ in the vertices of the $\Delta(S)$ simplex, i.e., $\alpha(b) = \sum_{s \in S} b(s) \cdot \alpha(s)$.
\paragraph{Upper bound $\ov$}
Upper bound $\ov: \Delta(S) \rightarrow \R$ is represented as a lower convex hull of a set of points $\Upsilon = \lbrace (b_i, y_i) \mid 1 \leq i \leq k \rbrace$.
Each point $(b_i,y_i) \in \Upsilon$ provides an upper bound $y_i$ on the value $V^*(b_i)$ in belief $b_i$, i.e., $y_i \geq V^*(b_i)$.
Since the value function $V^*$ is convex, it holds that
\begin{align}
\left( \forall \lambda \in \R^k_{\geq 0} \ \textnormal{s.t. } \sum_{i=1}^k \lambda_i = 1 \right) \ : \ V^* \left( \sum_{i=1}^k \lambda_i b_i \right) \leq \sum_{i=1}^k \lambda_i \cdot V^*(b_i) \leq \sum_{i=1}^k \lambda_i \cdot y_i .
\end{align}
This fact is used in the first variant of the HSVI algorithm (HSVI1~\citep{smith2004-hsvi}) to obtain the value of the upper bound $V_{\mathrm{HSVI1}}^\Upsilon(b)$ for belief $b$:
A linear program can be used to find coefficients $\lambda \in \R^k_{\geq 0}$ such that $b = \sum_{i=1}^k \lambda_i \cdot b_i$ holds and $\sum_{i=1}^k \lambda_i \cdot y_i$ is minimal:
\begin{equation}
V_{\mathrm{HSVI1}}^\Upsilon(b) = \min \left\lbrace \sum_{i=1}^k \lambda_i y_i \mid \lambda \in \R_{\geq 0}^k: \sum_{i=1}^k \lambda_i = 1 \ \wedge \ \sum_{i=1}^k \lambda_i b_i = b \right\rbrace \ \text{.}
\label{eq:osposg:ub-hsvi}
\end{equation}
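Equation~\eqref{eq:osposg:ub-hsvi} is a small LP and can be sketched directly (assuming \texttt{scipy} is available; the points below are hypothetical toy data):

```python
import numpy as np
from scipy.optimize import linprog

def ub_hsvi1(points, b):
    """Sketch of Eq. (ub-hsvi): minimize sum_i lam_i y_i over the simplex
    subject to sum_i lam_i b_i = b."""
    B = np.array([bi for bi, _ in points]).T      # |S| x k matrix of beliefs
    y = np.array([yi for _, yi in points])        # upper-bound values
    A_eq = np.vstack([B, np.ones((1, len(points)))])
    b_eq = np.concatenate([b, [1.0]])
    res = linprog(c=y, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
    assert res.success
    return res.fun

# Toy: upper bounds 3 and 5 at the two vertices of a 2-state simplex;
# the bound at the uniform belief interpolates to 4.
points = [(np.array([1.0, 0.0]), 3.0), (np.array([0.0, 1.0]), 5.0)]
v = ub_hsvi1(points, np.array([0.5, 0.5]))
```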
In the proof of \thref{thm:osposg:correctness} (which establishes the correctness of the algorithm), we require the bounds $\uv$ and $\ov$ to be $\delta$-Lipschitz continuous.
Since this need not hold for $V_{\mathrm{HSVI1}}^\Upsilon$, we define $\ov$ as a lower $\delta$-Lipschitz envelope of $V_{\mathrm{HSVI1}}^\Upsilon$:
\begin{equation}
\ov(b) = \min_{b' \in \Delta(S)} \left[ V_{\mathrm{HSVI1}}^\Upsilon(b') + \delta \| b - b' \|_1 \right] \ \text{.} \label{eq:osposg:projection}
\end{equation}
This computation can be expressed as the following linear programming problem:
\begin{subequations}
\label{lp:osposg:projection}
\begin{align}
\ov(b) = \min_{\lambda,\Delta,b'} \ & \sum_{i=1}^k \lambda_i y_i + \delta \sum_{s \in S} \Delta_s \\
\text{s.t.} \ & \sum_{i=1}^k \lambda_i b_i(s) = b'(s) & \forall s \in S \\
& \Delta_s \geq b'(s) - b(s) & \forall s \in S \\
& \Delta_s \geq b(s) - b'(s) & \forall s \in S \\
& \sum_{i=1}^k \lambda_i = 1 \\
& \lambda_i \geq 0 & \forall 1 \leq i \leq k
\end{align}
\end{subequations}
Here, we have $\Delta_s=|b'(s) - b(s)|$ (and hence $\sum_{s \in S} \Delta_s = \| b - b' \|_1$).
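The LP~\eqref{lp:osposg:projection} can be sketched as follows (assuming \texttt{scipy}; the variable $b'$ is substituted out as $b' = \sum_i \lambda_i b_i$, and the data are hypothetical):

```python
import numpy as np
from scipy.optimize import linprog

def ub_envelope(points, b, delta):
    """Sketch of the lower delta-Lipschitz envelope LP at belief b,
    with b' eliminated via b' = sum_i lam_i b_i."""
    k, S = len(points), len(b)
    B = np.array([bi for bi, _ in points])        # k x |S|
    y = np.array([yi for _, yi in points])
    c = np.concatenate([y, delta * np.ones(S)])   # variables (lam, Delta)
    # Delta_s >= |sum_i lam_i b_i(s) - b(s)|, as two inequality families
    A_ub = np.block([[B.T, -np.eye(S)], [-B.T, -np.eye(S)]])
    b_ub = np.concatenate([b, -b])
    A_eq = np.concatenate([np.ones(k), np.zeros(S)])[None, :]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=(0, None))
    assert res.success
    return res.fun

# Toy: a single point (b1, y1) = ((1, 0), 3); moving to b = (0.5, 0.5)
# costs delta * ||b - b1||_1 = 2 * 1 = 2 on top of y1 = 3.
v = ub_envelope([(np.array([1.0, 0.0]), 3.0)], np.array([0.5, 0.5]), delta=2.0)
```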
Using the definitions of $\ov$ and $V_{\mathrm{HSVI1}}^\Upsilon$ together with the fact that $V^*$ is $\delta$-Lipschitz continuous and convex, we can prove that the function $\ov$ represents an upper bound on $V^*$:
\begin{restatable}{lemma}{UBIsUpperBound}
\label{thm:osposg:projection}
Let $\Upsilon = \lbrace (b_i,y_i) \,|\, 1 \leq i \leq k \rbrace$ such that $y_i \geq V^*(b_i)$ for every $1 \leq i \leq k$.
Then the value function $\ov$ is $\delta$-Lipschitz continuous and satisfies
\begin{equation*}
V^* \leq \ov \leq V_{\mathrm{HSVI1}}^\Upsilon.
\end{equation*}
\end{restatable}
The dichotomy in representation of value functions $\uv$ and $\ov$ allows for easy refinement of the bounds.
By adding new elements to the set $\Gamma$, the value $\uv(b) = \max_{\alpha \in \Gamma} \alpha(b)$ can only increase---and hence the lower bound $\uv$ gets tighter.
Similarly, by adding new elements to the set of points $\Upsilon$, the solution of linear program~\eqref{lp:osposg:projection} can only decrease and hence the upper bound $\ov$ tightens.
\subsubsection{Initial Bounds}
We now describe our approach to obtaining the initial bounds $\uv$ and $\ov$ on the optimal value function $V^*$ of the game:
\paragraph{Lower bound $\uv$}
We initially set the lower bound to the value $\val^{\sigma_1^{\mathrm{unif}}}$ of the uniform strategy $\sigma_1^{\mathrm{unif}} \in \Sigma_1$ of player~1 (i.e., the strategy that plays every action with probability $1/|A_1|$ in all stages of the game).
Recall that the value $\val^{\sigma_1^{\mathrm{unif}}}$ of the strategy $\sigma_1^{\mathrm{unif}}$ is a linear function (see \thref{thm:osposg:strategy-value-linear}), and hence the initial lower bound $\uv$ is a piecewise linear and convex function represented as a pointwise maximum of the set $\Gamma = \lbrace \val^{\sigma_1^{\mathrm{unif}}} \rbrace$.
\paragraph{Upper bound $\ov$}
We use the solution of a perfect information variant of the game (i.e., where player~1 is assumed to know the entire history of the game, unlike in the original game).
We form a modified game $G'$ which is identical to the OS-POSG $G$ (i.e., has the same states $S$, actions $A_1$ and $A_2$, dynamics $T$ and rewards $R$), except that all information is revealed to player~1 in each step.
$G'$ is a perfect information stochastic game, and we can apply the value iteration algorithm to solve $G'$~\citep{bowling2000-sg}.
The additional information available to player~1 in $G'$ (compared to $G$) can only increase the utility he can achieve.
Hence the value $V^*_s$ of state $s$ of game $G'$ forms an upper bound on the utility player~1 can achieve in $G$ if he knew that the initial state of the game is $s$ (i.e., his belief is $b_s$ where $b_s(s)=1$).
We initially define $\Upsilon$ as the set that contains one point for each state $s \in S$ of the game (i.e., for each vertex of the $\Delta(S)$ simplex),
\begin{equation}
\Upsilon = \lbrace (b_s, V^*_s) \;|\; s \in S \rbrace \qquad\qquad
b_s(s') = \begin{cases}
1 & s = s' \\
0 & \text{otherwise} \ \text{.}
\end{cases}
\label{eq:osposg:upsilon-initial}
\end{equation}
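The initialization can be sketched with a Shapley-style value iteration for $G'$, where each state solves a zero-sum matrix game via a small LP (a sketch assuming \texttt{scipy}, with hypothetical array layouts and transitions $P$ obtained by marginalizing $T$ over observations):

```python
import numpy as np
from scipy.optimize import linprog

def matrix_game_value(M):
    """Value of the zero-sum matrix game M (row player maximizes)."""
    A1, A2 = M.shape                   # variables: (v, pi1)
    c = np.zeros(A1 + 1); c[0] = -1.0  # maximize v
    A_ub = np.hstack([np.ones((A2, 1)), -M.T])   # v <= pi1 . M[:, a2]
    A_eq = np.concatenate([[0.0], np.ones(A1)])[None, :]
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(A2), A_eq=A_eq, b_eq=[1.0],
                  bounds=[(None, None)] + [(0, None)] * A1)
    assert res.success
    return res.x[0]

def perfect_info_values(R, P, gamma, iters=100):
    """Shapley-style value iteration for the perfect-information game G'.
    Hypothetical layouts: R[s, a1, a2], P[s, a1, a2, s2] = P(s2 | s, a1, a2)."""
    V = np.zeros(R.shape[0])
    for _ in range(iters):
        V = np.array([matrix_game_value(R[s] + gamma * P[s] @ V)
                      for s in range(R.shape[0])])
    return V

# Toy: a single matching-pennies state looping to itself; its value is 0.
R = np.array([[[1.0, -1.0], [-1.0, 1.0]]])
P = np.ones((1, 2, 2, 1))
Vstar = perfect_info_values(R, P, gamma=0.5)
Upsilon = [(np.eye(len(Vstar))[s], Vstar[s]) for s in range(len(Vstar))]
```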
\subsection{Point-based Updates}
\label{sec:osposg:hsvi:pb-updates}
Unlike the exact value iteration algorithm (Section~\ref{sec:osposg:vi}) which constructs all $\alpha$-vectors needed to represent $HV$ in each iteration, the HSVI algorithm focuses on a single belief at a time.
Performing a \emph{point-based} update in belief $b \in \Delta(S)$ corresponds to solving the stage-games $[H\uv](b)$ and $[H\ov](b)$ where the values of subsequent stages are represented using value functions $\uv$ and $\ov$, respectively.
\paragraph{Update of lower bound $\uv$}
First, the LP~\eqref{eq:osposg:max-composition-lp} is used to compute the optimal value composition $\mathsf{valcomp}(\pi_1^{\mathrm{LB}},\overline{\alpha}^{\mathrm{LB}})$ in $[H\uv](b)$, i.e.,
\begin{equation}
(\pi_1^{\mathrm{LB}},\overline{\alpha}^{\mathrm{LB}}) \ \ \ \ = \argmax_{\substack{\pi_1 \in \Pi_1 \\ \overline{\alpha} \in \mathsf{Conv}(\Gamma)^{A_1 \times O}}} \mathsf{valcomp}(\pi_1,\overline{\alpha})(b) \ \text{.}
\label{eq:osposg:pbupdate-lb}
\end{equation}
The $\mathsf{valcomp}(\pi_1^{\mathrm{LB}},\overline{\alpha}^{\mathrm{LB}})$ function is a linear function corresponding to a new $\alpha$-vector that forms a lower bound on $V^*$.
This new $\alpha$-vector is used to refine the bound by setting $\Gamma \coloneqq \Gamma \cup \lbrace \mathsf{valcomp}(\pi_1^{\mathrm{LB}},\overline{\alpha}^{\mathrm{LB}}) \rbrace$.
As the following lemma shows, refining the lower bound $\uv$ via a point-based update preserves its desirable properties:
\begin{restatable}{lemma}{LBUpdatesPreserveStuff}
\label{thm:osposg:lb-point}
The lower bound $\uv$ initially satisfies the following conditions, which are subsequently preserved during point-based updates:
\begin{compactenum}[(1)]
\item $\uv$ is $\delta$-Lipschitz continuous.
\item $\uv$ is a lower bound on $V^*$.
\end{compactenum}
\end{restatable}
\paragraph{Update of upper bound $\ov$}
Similarly to the case of the point-based update of the lower bound $\uv$, the update of upper bound is performed by solving the stage game $[H\ov](b)$.
Since $\ov$ is represented by a set of points $\Upsilon$, it is not necessary to compute the optimal value composition.
Instead, we form a refined upper bound $V_{\mathrm{UB}}^{\Upsilon'}$ (which corresponds to $\ov$ after the point-based update is made) by adding a new point $(b, [H\ov](b))$ to the set $\Upsilon'$ representing $V_{\mathrm{UB}}^{\Upsilon'}$, i.e., $\Upsilon' = \Upsilon \cup \lbrace (b, [H\ov](b)) \rbrace$.
We now show that the upper bound $\ov$ has the desired properties, and these properties are retained when applying the point-based update---and hence we can perform point-based updates of $\ov$ repeatedly.
\begin{restatable}{lemma}{UBPreservesStuff}
\label{thm:osposg:ub-point}
The upper bound $\ov$ initially satisfies the following conditions, which are subsequently preserved during point-based updates:
\begin{compactenum}[(1)]
\item $\ov$ is $\delta$-Lipschitz continuous.
\item $\ov$ is an upper bound on $V^*$.
\end{compactenum}
\end{restatable}
The LPs~\eqref{eq:osposg:max-composition-lp} and~\eqref{eq:osposg:max-composition-dual} solve the stage game $[HV](b)$ when the value function $V$ is represented as a maximum over a set of linear functions (i.e., the way lower bound $\uv$ is).
It is, however, possible to adapt the constraints in~\eqref{eq:osposg:max-composition-dual} to solve the $[H\ov](b)$ problem.
We replace constraint~\eqref{eq:osposg:max-composition-dual:subgame} by constraints inspired by the LP~\eqref{lp:osposg:projection} used to compute $\ov(b)$:
\begin{subequations}
\label{eq:osposg:dual-projection}
\begin{align}
& \hat{V}(a_1,o) = \sum_{i=1}^{|\Upsilon|} \lambda^{a_1,o}_i y_i + \delta \sum_{s' \in S} \Delta^{s'}_{a_1,o} & \forall (a_1,o) \in A_1 \times O \\
& \sum_{i=1}^{|\Upsilon|} \lambda^{a_1,o}_i b_i(s') = b'_{a_1,o}(s') & \forall (a_1,o,s') \in A_1 \times O \times S \\
& \Delta^{s'}_{a_1,o} \geq b'_{a_1,o}(s') - \hat{\tau}(b,a_1,\pi_2,o)(s') & \forall (a_1,o,s') \in A_1 \times O \times S \\
& \Delta^{s'}_{a_1,o} \geq \hat{\tau}(b,a_1,\pi_2,o)(s') - b'_{a_1,o}(s') & \forall (a_1,o,s') \in A_1 \times O \times S \\
& \sum_{i=1}^{|\Upsilon|} \lambda^{a_1,o}_i = \sum_{s' \in S} \hat{\tau}(b,a_1,\pi_2,o)(s') & \forall (a_1,o) \in A_1 \times O \label{eq:osposg:dual-projection:lambda-sum}\\
& \lambda^{a_1,o}_i \geq 0 & \forall (a_1,o) \in A_1 \times O, 1 \leq i \leq |\Upsilon|
\end{align}
\end{subequations}
\subsection{The Algorithm}
\label{sec:osposg:hsvi:alg}
We are now ready to present the heuristic search value iteration (HSVI) algorithm for one-sided POSGs (Algorithm~\ref{alg:osposg:hsvi}) and prove its correctness.
The algorithm is similar to the HSVI algorithm for POMDPs~\citep{smith2004-hsvi,smith2005-hsvi}.
First, the bounds $\uv$ and $\ov$ on the optimal value function $V^*$ are initialized (as described in Section~\ref{sec:osposg:hsvi:vf}) on line~\ref{alg:osposg:hsvi:initialization}.
Then, until the desired precision $\ov(b^{\mathrm{init}}) - \uv(b^{\mathrm{init}}) \leq \varepsilon$ is reached, the algorithm performs a sequence of trials using the $\mathtt{Explore}$ procedure, starting from the initial belief $b^{\mathrm{init}}$ (lines~\ref{alg:osposg:hsvi:termination}--\ref{alg:osposg:hsvi:exploration}).
\begin{algorithm}[t]
\caption{HSVI algorithm for one-sided POSGs}
\label{alg:osposg:hsvi}
\KwData{Game $G$, initial belief $b^{\mathrm{init}}$, discount factor $\gamma \in (0,1)$, desired precision $\varepsilon > 0$, neighborhood parameter $D$}
\KwResult{Approximate value functions $\uv$ and $\ov$ satisfying $\ov(b) - \uv(b) \leq \varepsilon$, sets $\Gamma$ and $\Upsilon$ constructed by point-based updates that represent $\uv$ and $\ov$}
\SetKwFunction{Explore}{Explore}
\SetKwProg{myproc}{procedure}{}{}
\DontPrintSemicolon
Initialize $\uv$ and $\ov$ (see Section~\ref{sec:osposg:hsvi:vf}) \label{alg:osposg:hsvi:initialization}\;
\While{$\excess_0(b^{\mathrm{init}}) > 0$}{\label{alg:osposg:hsvi:termination}
\Explore{$b^{\mathrm{init}},0$} \label{alg:osposg:hsvi:exploration}\;
}
\Return{\normalfont $\uv$ and $\ov$, sets $\Gamma$ and $\Upsilon$ that represent $\uv$ and $\ov$}\;
\myproc{\Explore{$b_t,t$}}{\label{alg:osposg:hsvi:explore}
$(\pi_1^{\mathrm{LB}},\pi_2^{\mathrm{LB}}) \gets $ equilibrium strategy profile in $[H\uv](b_t)$ \label{alg:osposg:hsvi:follower-choice}\;
$(\pi_1^{\mathrm{UB}},\pi_2^{\mathrm{UB}}) \gets $ equilibrium strategy profile in $[H\ov](b_t)$ \label{alg:osposg:hsvi:leader-choice}\;
Perform point-based updates of $\uv$ and $\ov$ at belief $b_t$ (see Section~\ref{sec:osposg:hsvi:pb-updates}) \label{alg:osposg:hsvi:pb-update-1} \;
$(a_1^*,o^*) \gets $ select according to forward exploration heuristic \label{alg:osposg:hsvi:fwd-exploration}\;
\If{$\mathbb{P}_{b,\pi_1^{\mathrm{UB}},\pi_2^{\mathrm{LB}}}[a_1^*,o^*] \cdot \excess_{t+1}(\tau(b_t,a_1^*,\pi_2^{\mathrm{LB}},o^*)) > 0$ \label{alg:osposg:hsvi:recursion-termination}}{
\Explore{$\tau(b_t,a_1^*,\pi_2^{\mathrm{LB}},o^*),t+1$} \label{alg:osposg:hsvi:recursion}\;
Perform point-based updates of $\uv$ and $\ov$ at belief $b_t$ (see Section~\ref{sec:osposg:hsvi:pb-updates}) \label{alg:osposg:hsvi:pb-update-2}\;
}
}
\end{algorithm}
The recursive procedure $\mathtt{Explore}$ generates a sequence of beliefs $\lbrace b_i \rbrace_{i=0}^k$ (for some $k \geq 0$) where $b_0 = b^{\mathrm{init}}$ and each belief $b_t$ reached at recursion depth $t$ satisfies $\excess_t(b_t) > 0$ on line~\ref{alg:osposg:hsvi:termination} or~\ref{alg:osposg:hsvi:recursion-termination}.
The algorithm tries to ensure that values of beliefs $b_t$ reached at $t$-th level of recursion (i.e., $t$-th stage of the game) are approximated with sufficient accuracy and the gap between $\ov(b)$ and $\uv(b)$ is at most $\rho(t)$, where $\rho(t)$ is defined by
\begin{equation}
\rho(0) = \varepsilon \qquad\qquad \rho(t+1) = [ \rho(t) - 2 \delta D ] / \gamma \ \text{.} \label{eq:osposg:rho}
\end{equation}
To ensure that the sequence $\rho$ is monotonically increasing and unbounded, we need to select the parameter $D$ from the interval $(0 , (1-\gamma)\varepsilon / 2\delta )$.
When the gap $\ov(b_t) - \uv(b_t)$ at a belief $b_t$ reached at the $t$-th recursion level of $\mathtt{Explore}$ (i.e., in the $(t+1)$-th stage of the game) exceeds the desired width $\rho(t)$, the belief $b_t$ is said to have a positive \emph{excess gap} $\excess_t(b_t)$,
\begin{equation}\label{eq:onesided:excess}
\excess_t(b_t) = \ov(b_t) - \uv(b_t) - \rho(t) \ \text{.}
\end{equation}
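A minimal numeric sketch of Equations~\eqref{eq:osposg:rho} and~\eqref{eq:onesided:excess} (with hypothetical parameter values of our own choosing) illustrates why choosing $D$ inside the stated interval makes $\rho$ monotonically increasing:

```python
# Hypothetical parameters; D chosen inside (0, (1 - gamma) * eps / (2 * delta)).
gamma, delta, eps = 0.9, 1.0, 0.1
D = 0.9 * (1 - gamma) * eps / (2 * delta)

def rho(t):
    r = eps                        # rho(0) = eps
    for _ in range(t):
        r = (r - 2 * delta * D) / gamma
    return r

rhos = [rho(t) for t in range(6)]
assert all(r2 > r1 for r1, r2 in zip(rhos, rhos[1:]))   # monotonically increasing

def excess(ub_b, lb_b, t):
    return ub_b - lb_b - rho(t)    # excess gap at recursion level t

e0 = excess(0.25, 0.10, 0)         # gap 0.15 exceeds rho(0) = 0.1 by 0.05
```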
Note that our definition of excess gap is more strict compared to the original HSVI algorithm for POMDPs, where the $-2 \delta D$ term from Equation~\eqref{eq:osposg:rho} is absent (see Equation~\eqref{eq:hsvi:excess}).
Unlike in POMDPs, which are single-agent, the belief transitions $\tau(b,a_1,\pi_2,o)$ in one-sided POSGs depend also on player~2 (specifically, on her strategy $\pi_2$).
The tighter bounds on the approximation quality allow us to prove the correctness of the proposed algorithm in \thref{thm:osposg:correctness}.
\paragraph{Forward exploration heuristic}
The algorithm uses a heuristic approach to select which belief $\tau(b,a_1,\pi_2^{\mathrm{LB}},o)$ will be considered in the next recursion level of the $\mathtt{Explore}$ procedure, i.e., what action-observation pair $(a_1,o) \in A_1 \times O$ will be chosen by player~1, on line~\ref{alg:osposg:hsvi:fwd-exploration}.
This selection is motivated by Lemma~\ref{thm:osposg:point-contractivity}---in order to ensure that $\excess_t(b_t) \leq 0$ (or more precisely $\excess_t(b_t) \leq -2 \delta D$) at the currently considered belief $b_t$ in $t$-th recursion level, all beliefs $\tau(b_t,a_1,\pi_2^{\mathrm{LB}},o)$ reached with positive probability when playing $\pi_1^{\mathrm{UB}}$ have to satisfy $\excess_{t+1}(\tau(b_t,a_1,\pi_2^{\mathrm{LB}},o)) \leq 0$.
Specifically, we focus on a belief that has the highest \emph{weighted excess gap}.
Inspired by the original HSVI algorithm for POMDPs~\citep{smith2004-hsvi,smith2005-hsvi}, we define the weighted excess gap as the excess gap $\excess_{t+1}(\tau(b_t,a_1,\pi_2^{\mathrm{LB}},o))$ multiplied by the probability that the action-observation pair $(a_1,o)$ that leads to the belief $\tau(b_t,a_1,\pi_2^{\mathrm{LB}},o)$ occurs.
As a result, the next action-observation pair $(a_1^*,o^*)$ for further exploration is selected according to the formula
\begin{equation}
(a_1^*,o^*) = \argmax_{(a_1,o) \in A_1 \times O} \mathbb{P}_{b,\pi_1^{\mathrm{UB}},\pi_2^{\mathrm{LB}}}[a_1,o] \cdot \excess_{t+1}(\tau(b_t,a_1,\pi_2^{\mathrm{LB}},o)) \ \text{.}
\end{equation}
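The selection rule amounts to a single $\arg\max$ over action-observation pairs; a minimal sketch (with hypothetical probabilities and excess gaps stored in plain dictionaries) looks as follows:

```python
# Hypothetical inputs: prob[(a1, o)] = P[a1, o] under (pi1_UB, pi2_LB),
# gap[(a1, o)] = excess_{t+1}(tau(b_t, a1, pi2_LB, o)).
prob = {(0, 0): 0.6, (0, 1): 0.4, (1, 0): 0.9, (1, 1): 0.1}
gap = {(0, 0): 0.02, (0, 1): 0.10, (1, 0): 0.01, (1, 1): 0.50}

# Pick the pair with the highest weighted excess gap.
a1_star, o_star = max(prob, key=lambda ao: prob[ao] * gap[ao])
weighted = prob[(a1_star, o_star)] * gap[(a1_star, o_star)]   # 0.1 * 0.5
```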
We now show formally that if the weighted excess gap of the optimal $(a_1^*,o^*)$ satisfies $\mathbb{P}_{b,\pi_1^{\mathrm{UB}},\pi_2^{\mathrm{LB}}}[a_1^*,o^*] \cdot \excess_{t+1}(\tau(b_t,a_1^*,\pi_2^{\mathrm{LB}},o^*)) \leq 0$, performing the point based update at $b_t$ ensures that $\excess_t(b_t) \leq -2 \delta D$.
\begin{restatable}{lemma}{ExcessContractivity}
\label{thm:osposg:hsvi:excess-contractivity}
Let $b_t$ be a belief encountered at $t$-th recursion level of $\mathtt{Explore}$ procedure and assume that the corresponding action-observation pair $(a_1^*,o^*)$ (from line~\ref{alg:osposg:hsvi:fwd-exploration} of Algorithm~\ref{alg:osposg:hsvi}) satisfies
\begin{equation}
\mathbb{P}_{b,\pi_1^{\mathrm{UB}},\pi_2^{\mathrm{LB}}}[a_1^*,o^*] \cdot \excess_{t+1}(\tau(b_t,a_1^*,\pi_2^{\mathrm{LB}},o^*)) \leq 0 \ \text{.}
\end{equation}
Then $\excess_t(b_t) \leq -2 \delta D$ after performing a point-based update at $b_t$.
Furthermore, all beliefs $b_t' \in \Delta(S)$ such that $\| b_t - b_t' \|_1 \leq D$ satisfy $\excess_t(b_t') \leq 0$.
\end{restatable}
The proof goes by verifying the assumptions of Lemma~\ref{thm:osposg:point-contractivity} (``a criterion for contractivity''), which allows us to bound the difference between $\ov$ and $\uv$ by $\rho(t+1)$. The ``furthermore'' part then follows from $\delta$-Lipschitz continuity of the bounds.
We now use Lemma~\ref{thm:osposg:hsvi:excess-contractivity} (especially its second part) to prove that Algorithm~\ref{alg:osposg:hsvi} terminates with $\ov(b^{\mathrm{init}}) - \uv(b^{\mathrm{init}}) \leq \varepsilon$.
As we mentioned earlier, we can also use the value functions $\uv$ and $\ov$ to play the game and obtain an $\varepsilon$-Nash equilibrium of the game (see Section~\ref{sec:osposg:playing}).
\begin{theorem}\thlabel{thm:osposg:correctness}
For any $\varepsilon > 0$ and $0 < D < (1-\gamma)\varepsilon / 2\delta$, Algorithm~\ref{alg:osposg:hsvi} terminates with $\ov(b^{\mathrm{init}}) - \uv(b^{\mathrm{init}}) \leq \varepsilon$.
\end{theorem}
\begin{proof}
By the choice of parameter $D$, the sequence $\rho(t)$ (for $\rho(0)=\varepsilon$) is monotonically increasing and unbounded, and the difference between value functions $\uv$ and $\ov$ is bounded by $U-L$ (since $L \leq \uv(b) \leq \ov(b) \leq U$ for every belief $b \in \Delta(S)$).
Therefore, there exists $T_{\mathrm{max}}$ such that $\rho(T_{\mathrm{max}}) \geq U - L \geq \ov(b) - \uv(b)$ for every $b \in \Delta(S)$, so the recursive procedure $\mathtt{Explore}$ always terminates.
To prove that the whole algorithm terminates, we reason about sets $\Psi_t \subset \Delta(S)$ of belief points where the trials performed by the $\mathtt{Explore}$ terminated.
Initially, $\Psi_t = \emptyset$ for every $0 \leq t < T_{\mathrm{max}}$.
Whenever the $\mathtt{Explore}$ recursion terminates at recursion level $t$ (i.e., the condition on line~\ref{alg:osposg:hsvi:recursion-termination} does not hold), the belief $b_t$ (which was the last belief considered during the trial) is added into set $\Psi_t$ ($\Psi_t \coloneqq \Psi_t \cup \lbrace b_t \rbrace$).
Recall that since $\Delta(S)$ is compact, it is in particular totally bounded; consequently, any set $\Psi_t \subset \Delta(S)$ whose distinct elements $b,b'$ all satisfy $\| b - b' \|_1 > D$ must be finite.
Since the number of possible termination depths is finite ($0 \leq t \leq T_{\mathrm{max}}$), the algorithm has to terminate unless some $\Psi_t$ is infinite.
To show that the algorithm terminates, it thus remains to show that every two distinct points $b, b' \in \Psi_t$ are more than $D$ apart.
Assume for contradiction that two trials terminated at recursion level $t$ with last beliefs $b_t^{(1)}$ (for the earlier trial) and $b_t^{(2)}$ (for the later trial), and that these beliefs satisfy $\| b_t^{(1)} - b_t^{(2)} \|_1 \leq D$.
When the earlier trial terminated in belief $b_t^{(1)}$, all beliefs reachable from $b_t^{(1)}$ had a non-positive excess gap (otherwise the trial would have continued, since the condition on line~\ref{alg:osposg:hsvi:recursion-termination} would have been satisfied).
By Lemma~\ref{thm:osposg:hsvi:excess-contractivity}, after the point-based update is performed in $b_t^{(1)}$, all beliefs $b_t'$ with $\| b_t^{(1)} - b_t' \|_1 \leq D$ have non-positive excess gap $\excess_t(b_t') \leq 0$.
When $b_t^{(2)}$ was selected for exploration at the $(t-1)$-th level of recursion, the condition on line~\ref{alg:osposg:hsvi:recursion-termination} was met, so $b_t^{(2)}$ must have had a positive excess gap $\excess_t(b_t^{(2)}) > 0$.
This, however, contradicts the fact that all beliefs $b_t'$ with $\| b_t^{(1)} - b_t' \|_1 \leq D$ (including $b_t^{(2)}$) already have a non-positive excess gap.
Now that we know that Algorithm~\ref{alg:osposg:hsvi} always terminates,
note that at least one trial must have terminated in the first level of recursion (unless the Algorithm~\ref{alg:osposg:hsvi} has terminated on line~\ref{alg:osposg:hsvi:termination} with $\excess_0(b^{\mathrm{init}}) \leq 0$ beforehand).
By Lemma~\ref{thm:osposg:hsvi:excess-contractivity}, the update in $b^{\mathrm{init}}$ then renders $\excess_0(b^{\mathrm{init}}) \leq -2\delta D \leq 0$.
We then have that $\ov(b^{\mathrm{init}}) - \uv(b^{\mathrm{init}}) \leq \rho(0) = \varepsilon$ which completes the proof.
\end{proof}
\section{Using Value Function to Play}
\label{sec:osposg:playing}
In the previous section, we have presented an algorithm that can approximate the value $V^*(b^{\mathrm{init}})$ of the game within an arbitrary given precision $\varepsilon > 0$ starting from an arbitrary initial belief $b^{\mathrm{init}}$.
However, in many games, knowing only the game's value is not enough. Indeed, to solve the game, we also need access to strategies that achieve the desired near-optimal performance.
In this section, we show that using the value functions $\uv$ and $\ov$ computed by the HSVI algorithm (Algorithm~\ref{alg:osposg:hsvi}) enables us to obtain $\varepsilon$-Nash equilibrium strategies for both players.
Bellman's equation from Theorem~\ref{thm:osposg:bellman} may suggest that near-optimal strategies can be extracted by employing the lookahead decision rule (as in POMDPs), obtaining the strategies to play in the current stage by computing the Nash equilibrium of the stage games $[H\uv](b)$ and $[H\ov](b)$, respectively.
However, unlike in POMDPs and Markov games of imperfect information, this approach does \emph{not} work in one-sided POSGs because the belief of player~1 does not constitute a sufficient statistic for playing the game.
The reasons for this are similar to those underlying unsafe resolving~\citep{burch2018-thesis,seitz2019-value} in the realm of extensive-form games.
We use the following example to demonstrate the insufficiency of the belief to play the game.
\begin{example}
Consider a \emph{matching pennies} game shown in Figure~\ref{fig:belief-insufficient:nfg}.
This game can be formalized as a one-sided POSG that is shown in Figure~\ref{fig:belief-insufficient:osposg}.
The game starts in the state $s_0$ (i.e., the initial belief is $b^{\mathrm{init}}(s_0)=1$) and player~2 chooses her action $H$ or $T$.
Next, after transitioning to $s_H$ or $s_T$ (based on the decision of player~2), player~1 is \emph{unaware} of the true state of the game (i.e., of the past decision of player~2) and chooses his action $H$ or $T$.
Based on the combination of decisions taken by the players, player~1 gets either $1/\gamma$ or $-1/\gamma$ and the game transitions to the state $s_\infty$ where it stays forever with zero future rewards.
\begin{figure}
\caption{Normal form}
\label{fig:belief-insufficient:nfg}
\caption{OS-POSG representation}
\label{fig:belief-insufficient:osposg}
\caption{Value function $V^*$}
\label{fig:belief-insufficient:value}
\caption{A game where belief is not a sufficient statistic for the imperfectly informed player.}
\label{fig:belief-insufficient}
\end{figure}
To understand the caveats of using belief $b \in \Delta(S)$ to derive the stage strategy to play, let us consider the optimal value function $V^*$ of the OS-POSG representation (Figure~\ref{fig:belief-insufficient:osposg}) of the matching pennies game.
Figure~\ref{fig:belief-insufficient:value} shows the values of $V^*$ over simplex $\Delta(\lbrace s_H, s_T \rbrace)$.
If it is more likely that player~2 played $H$ in the first stage of the game (i.e., that the current state is $s_H$), it is optimal for player~1 to play $H$ in the current stage (with value $\alpha_H$).
Conversely, if it is more likely that the current state is $s_T$, player~1 is better off playing $T$ (with value $\alpha_T$).
The value function $V^*$ is then a point-wise maximum over these two linear functions.
Now, since the uniform mixture between $H$ and $T$ is the Nash equilibrium strategy for both players in matching pennies, player~1 will find himself in a situation where he assumes that the current belief is $\lbrace s_H: 0.5, s_T: 0.5 \rbrace$.
In this belief, any decision of player~1 yields expected reward 0---hence based purely on the belief, player~1 may opt to play, e.g., ``always $T$''.
However, such a strategy is not in equilibrium, and player~2 can exploit it by playing ``always $H$''.
This example illustrates that the belief alone does not provide sufficient information to choose the right strategy $\pi_1$ for the current stage based on Equation~\eqref{eq:osposg:H-maxmin}.
\end{example}
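The example above can be verified numerically. The following sketch parameterizes the belief by $p = b(s_H)$ and assumes (as in the construction above) that the reward $\pm 1/\gamma$, received after one step of discounting by $\gamma$, yields the stage values $\pm 1$ of the matching-pennies matrix; the function names are illustrative.

```python
def alpha_H(p):
    # Value of "always H" for player 1 at belief p = b(s_H):
    # match in s_H (probability p) pays +1, mismatch in s_T pays -1.
    return p * 1 + (1 - p) * (-1)

def alpha_T(p):
    # Value of "always T": match in s_T pays +1, mismatch in s_H pays -1.
    return p * (-1) + (1 - p) * 1

def V_star(p):
    # Optimal value function: point-wise maximum of the two linear pieces.
    return max(alpha_H(p), alpha_T(p))

# At the uniform belief, both actions look equally good to player 1 ...
assert V_star(0.5) == 0.0 and alpha_H(0.5) == alpha_T(0.5) == 0.0

# ... yet committing to "always T" is exploitable: if player 2 switches to
# "always H" (so the true state is s_H with probability 1), player 1 loses.
assert alpha_T(1.0) == -1.0
```

The second assertion is exactly the exploitation described above: the belief $\lbrace s_H: 0.5, s_T: 0.5 \rbrace$ gives no reason to prefer one action, yet the choice matters once the adversary deviates.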
\subsection{Justified Value Functions}
First of all, we define conditions under which it makes sense to use a value function to play a one-sided POSG.
The conditions are similar to \emph{uniform improvability} in, e.g., POMDPs.
Our definitions, however, reflect the fact that we deal with a two-player problem (and we thus introduce the condition for each player separately).
Moreover, we use a stricter condition for player~1 who does \emph{not} have perfect information about the belief---and thus defining the condition based solely on the beliefs is not sufficient.
\begin{definition}[Min-justified value function]
\thlabel{def:osposg:min-justified}
Convex continuous value function $V$ is said to be \emph{min-justified} (or, justified from the perspective of the minimizing player~2) if for every belief $b \in \Delta(S)$ it holds that $[HV](b) \leq V(b)$.
\end{definition}
\begin{definition}[Max-justified value function]
\thlabel{def:osposg:max-justified}
Let $\Gamma$ be a compact set of linear functions, and $V$ be a value function such that $V(b) = \sup_{\alpha \in \Gamma} \alpha(b)$ for every $b$.
$V$ is said to be \emph{max-justified} by $\Gamma$ (or, justified from the perspective of the maximizing player~1) if for every $\alpha \in \Gamma$ there exists $\pi_1 \in \Pi_1$ and $\overline{\alpha} \in \Gamma^{A_1 \times O}$ such that $\mathsf{valcomp}(\pi_1,\overline{\alpha}) \geq \alpha$.
\end{definition}
While the reason for the terminology is not apparent just yet, we will show in Section~\ref{sec:osposg:p1-strategy} that the ``max-justifying'' set $\Gamma$ can be used to construct a strategy $\sigma_1$ of player~1 such that $\val^{\sigma_1}(b) \geq V(b)$ for every $b$.
Similarly, we will show in Section~\ref{sec:osposg:p2-strategy} that if the value function $V$ is min-justified, we can construct a strategy $\sigma_2$ of player~2 that \emph{justifies} the value $V(b)$ for every belief $b \in \Delta(S)$, i.e., we have $\E_{b,\sigma_1,\sigma_2}[\Disc^\gamma] \leq V(b)$ against every strategy $\sigma_1$ of player~1.
As preparation for more substantial proofs that follow, the remainder of this subsection presents several basic properties of min- and max-justified functions.
Recall that no matter how well things go for the maximizing player, the corresponding utility will never get above $U$. Similarly, the minimizing player cannot push the utility below $L$.
Lemma~\ref{thm:osposg:max-justified-bounded} and Lemma~\ref{thm:osposg:min-justified-bounded} prove that max- and min-justified functions obey the same restrictions.
This is in agreement with our intuition that max-justification should guarantee utility of \textit{at least} some value (which therefore cannot be higher than $U$) and min-justification should guarantee utility of \textit{no more than} some value (which therefore cannot be lower than $L$).
\begin{restatable}{lemma}{MinJustified}
\label{thm:osposg:min-justified-bounded}
Let $V$ be a value function that is min-justified.
Then $V(b) \geq L$.
\end{restatable}
\begin{restatable}{lemma}{MaxJustified}
\label{thm:osposg:max-justified-bounded}
Let $V$ be a value function that is max-justified by a set of $\alpha$-vectors $\Gamma$.
Then for every $\alpha \in \Gamma$ we have $\alpha \leq U$.
\end{restatable}
To prepare for showing that the value function $\uv$ produced by Algorithm~\ref{alg:osposg:hsvi} is max-justified by $\mathsf{Conv}(\Gamma)$, we state the following technical lemma:
\begin{restatable}{lemma}{ConvMaxJustified}
\label{thm:osposg:conv-max-justified}
Let $\Gamma$ be a set of linear functions, and $V$ a value function that is max-justified by $\Gamma$.
Then $V$ is also max-justified by $\mathsf{Conv}(\Gamma)$.
\end{restatable}
\subsection{Strategy of Player~1}
\label{sec:osposg:p1-strategy}
In this section, we will show that when the value function $V$ is max-justified by a set of $\alpha$-vectors $\Gamma$, we can implicitly form a strategy $\sigma_1$ of player~1 that achieves utility of at least $V(b^{\mathrm{init}})$ for any given initial belief $b^{\mathrm{init}}$.
We provide an online game-playing algorithm (Algorithm~\ref{alg:osposg:cr}) which implicitly constructs the desired strategy. This algorithm is inspired by the ideas of continual resolving for extensive-form games~\citep{moravcik2017-deepstack}.
While playing the game, Algorithm~\ref{alg:osposg:cr} maintains a lower bound $\rho$ on the values the reconstructed strategy has to achieve.
Inspired by the terminology of continual resolving for extensive-form games, we call this lower-bounding linear function a \emph{gadget}.
The goal of the $\mathtt{Act}(b,\rho)$ method is to reconstruct a strategy $\sigma_1$ of player~1 such that its value satisfies $\val^{\sigma_1} \geq \rho$.
We will now show that the $\mathtt{Act}$ method achieves precisely this.
The reasoning about the current gadget allows us to obtain guarantees on the quality of the reconstructed strategy, even when player~1 does not have an accurate belief because he does not have access to the stage strategies used by the adversary.
\begin{algorithm}
\caption{Continual resolving algorithm for one-sided POSGs}
\label{alg:osposg:cr}
\DontPrintSemicolon
\SetKwBlock{Repeat}{repeat}{}
\SetKwInOut{Input}{input}
\Input{one-sided POSG $G$ \\ a finite set $\Gamma$ of linear functions representing convex value function $V$}
\SetKwFunction{Act}{Act}
\SetKwProg{myproc}{procedure}{}{}
$b \gets b^{\mathrm{init}}$ \label{alg:osposg:cr:binit}\;
$\rho^{\mathrm{init}} \gets \argmax_{\alpha \in \Gamma} \alpha(b^{\mathrm{init}})$ \label{alg:osposg:cr:rho-zero}\;
\Act{$b^{\mathrm{init}}, \rho^{\mathrm{init}}$}
\myproc{\Act{$b, \rho$}}{
$(\pi_1^*,\overline{\alpha}^*) \gets \argmax_{\pi_1,\overline \alpha} \lbrace \mathsf{valcomp}(\pi_1,\overline{\alpha})(b) \mid \pi_1 \in \Pi_1, \overline{\alpha} \in \textsf{Conv}(\Gamma)^{A_1 \times O} \textnormal{ s.t. } \mathsf{valcomp}(\pi_1,\overline{\alpha}) \geq \rho \rbrace$ \label{alg:osposg:cr:resolve}\;
$\pi_2 \gets$ solve $[HV](b)$ to obtain assumed stage strategy of the adversary \;
sample and play $a_1 \sim \pi_1^*$ \;
$o \gets$ observed observation \;
$b' \gets \tau(b, a_1, \pi_2, o)$ \;
\Act{$b', \overline{\alpha}^*_{a_1, o}$} \;
}
\end{algorithm}
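The control flow of Algorithm~\ref{alg:osposg:cr} can be sketched as follows. This is a structural sketch only: the `resolve(b, rho)` callback stands in for the LP on the resolve line (it must return a stage strategy $\pi_1^*$ and gadgets $\overline{\alpha}^*$ whose value composition dominates $\rho$), `solve_stage` stands in for solving $[HV](b)$, and the horizon cutoff replaces the infinite recursion; all names here are illustrative, not part of the algorithm as stated.

```python
import random

def continual_resolving(b_init, rho_init, resolve, solve_stage, tau,
                        sample_obs, horizon):
    """Play `horizon` stages; return the trace of (belief, action) pairs."""
    trace = []
    b, rho = b_init, rho_init
    for _ in range(horizon):
        pi1, alpha = resolve(b, rho)       # reconstruct stage strategy
        pi2 = solve_stage(b)               # assumed adversary stage strategy
        a1 = random.choices(list(pi1), weights=list(pi1.values()))[0]
        o = sample_obs(b, a1, pi2)         # observation actually received
        trace.append((b, a1))
        b = tau(b, a1, pi2, o)             # belief update
        rho = alpha[(a1, o)]               # next gadget: alpha*_{a1,o}
    return trace
```

The key bookkeeping step is the last line of the loop body: the gadget passed to the next stage is the component $\overline{\alpha}^*_{a_1,o}$ selected by the realized action-observation pair, which is what makes the per-stage guarantees compose.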
\begin{restatable}{proposition}{PlOneStrategy}
\label{thm:osposg:p1-strategy}
Let $V$ be a value function that is max-justified by a set of $\alpha$-vectors $\Gamma$.
Let $b^{\mathrm{init}} \in \Delta(S)$ and $\rho^{\mathrm{init}} \in \Gamma$.
By playing according to $\mathtt{Act}(b^{\mathrm{init}},\rho^{\mathrm{init}})$, player~1 implicitly forms a strategy $\sigma_1$ for which $\val^{\sigma_1} \geq \rho^{\mathrm{init}}$.
\end{restatable}
This proposition is proven by constructing a sequence of strategies under which player~1 follows Algorithm~\ref{alg:osposg:cr} for $K$ steps (for $K=0,1,\ldots$).
We provide a lower bound on the value of each of these strategies, and show that the limit of these lower bounds coincides with $\rho^{\mathrm{init}}$, as well as with the lower bound on the value guaranteed by following Algorithm~\ref{alg:osposg:cr} for an \emph{infinite} period of time.
\begin{corollary}
\thlabel{thm:osposg:p1-strategy-guarantee}
Let $V$ be a value function that is max-justified by a compact set $\Gamma$ and let $b^{\mathrm{init}}$ be the initial belief of the game.
Algorithm~\ref{alg:osposg:cr} implicitly constructs a strategy $\sigma_1$ which guarantees that the utility to player~1 will be at least $V(b^{\mathrm{init}})$.
\end{corollary}
\begin{proof}
$\rho^{\mathrm{init}}$ from line~\ref{alg:osposg:cr:rho-zero} of Algorithm~\ref{alg:osposg:cr} has value $\rho^{\mathrm{init}}(b^{\mathrm{init}}) = V(b^{\mathrm{init}})$ in the initial belief $b^{\mathrm{init}}$.
By Proposition~\ref{thm:osposg:p1-strategy}, we can construct a strategy $\sigma_1$ with value $\val^{\sigma_1} \geq \rho^{\mathrm{init}}$.
Hence $\val^{\sigma_1}(b^{\mathrm{init}}) \geq \rho^{\mathrm{init}}(b^{\mathrm{init}}) =V(b^{\mathrm{init}})$.
\end{proof}
\subsection{Strategy of Player~2}
\label{sec:osposg:p2-strategy}
We will now present an analogous algorithm to obtain a strategy for player~2 when the value function $V$ is min-justified.
Recall that the stage strategies $\pi_2$ of player~2 influence the belief of player~1 (Equation~\eqref{eq:osposg:tau}).
Unlike player~1, player~2 knows which stage strategies $\pi_2$ have been used in the past, and she is thus able to infer the current belief of player~1.
As a result, the $\mathtt{Act}$ method of Algorithm~\ref{alg:osposg:cr2} depends on the current belief of player~1, but not on the gadget $\rho$ as it did in Algorithm~\ref{alg:osposg:cr}.
\begin{algorithm}
\caption{Strategy of player~2}
\label{alg:osposg:cr2}
\DontPrintSemicolon
\SetKwBlock{Repeat}{repeat}{}
\SetKwInOut{Input}{input}
\Input{one-sided POSG $G$ \\ convex value function $V$}
\SetKwFunction{Act}{Act}
\SetKwProg{myproc}{procedure}{}{}
\Act{$b^{\mathrm{init}}$}
\myproc{\Act{$b$}}{
$\pi_2^* \gets $ optimal strategy of player~2 in the stage game $[HV](b)$ \label{alg:osposg:cr2:resolve}\;
$s \gets$ currently observed state \;
sample and play $a_2 \sim \pi_2^*(\cdot \,|\, s)$ \;
$(a_1,o) \gets$ action of the adversary and the corresponding observation\;
\Act{$\tau(b, a_1, \pi_2^*, o)$} \;
}
\end{algorithm}
We will now show that if the value function $V$ is min-justified, playing according to Algorithm~\ref{alg:osposg:cr2} guarantees that the utility will be at most\footnote{In other words, this is a performance guarantee for the (minimizing) player 2.} $V(b^{\mathrm{init}})$.
\begin{restatable}{proposition}{PlTwoStrategy}
\label{thm:osposg:p2-strategy}
Let $V$ be a min-justified value function and let $b^{\mathrm{init}}$ be the initial belief of the game.
Algorithm~\ref{alg:osposg:cr2} implicitly constructs a strategy $\sigma_2$ which guarantees that the utility to player~1 will be at most $V(b^{\mathrm{init}})$.
\end{restatable}
The proof of Proposition~\ref{thm:osposg:p2-strategy} is similar to the proof of Proposition~\ref{thm:osposg:p1-strategy}.
We derive an upper bound on the utility player~1 can achieve against player~2 who follows Algorithm~\ref{alg:osposg:cr2} for $K$ steps (for $K=0,1,\ldots$).
We show that the limit of these upper bounds coincides with $V(b^{\mathrm{init}})$ and with the upper bound on the utility player~1 can achieve when player~2 follows Algorithm~\ref{alg:osposg:cr2} for an infinite number of iterations.
\subsection{Using Value Functions \texorpdfstring{$\uv$}{VLB} and \texorpdfstring{$\ov$}{VUB} to Play the Game}
\label{sec:osposg:playing-hsvi}
In Sections~\ref{sec:osposg:p1-strategy} and~\ref{sec:osposg:p2-strategy}, we have shown that we can obtain strategies to play the game when the value functions are max-justified or min-justified, respectively.
In this section, we will show that the heuristic search value iteration algorithm for solving one-sided POSGs (Section~\ref{sec:osposg:hsvi}) generates value functions with these properties.
Namely, at any time, the lower bound $\uv$ is max-justified by the set of $\alpha$-vectors $\mathsf{Conv}(\Gamma)$, and the upper bound $\ov$ is min-justified.
This allows us to derive two important properties of the algorithm.
First, since \thref{thm:osposg:correctness} guarantees that the algorithm terminates with $\ov(b^{\mathrm{init}}) - \uv(b^{\mathrm{init}}) \leq \varepsilon$, we can use the resulting value functions $\uv$ (represented by $\Gamma$) and $\ov$ to obtain $\varepsilon$-Nash equilibrium strategies for both players.
Second, we can also run the algorithm in an anytime fashion and, since the bounds $\uv$ and $\ov$ satisfy these properties at any point in time, use them to extract strategies with performance guarantees.
We will first prove that at any point of time in the execution of Algorithm~\ref{alg:osposg:hsvi}, the lower bound $\uv$ is max-justified by the set $\mathsf{Conv}(\Gamma)$, and the upper bound $\ov$ is a min-justified value function.
To prove this, it suffices to show that the initial lower-bound value function $\uv$ is max-justified by $\mathsf{Conv}(\Gamma)$, that the initial upper-bound value function $\ov$ is min-justified, and that these properties are preserved by any sequence of point-based updates performed on $\uv$ and $\ov$.
With the help of Lemma~\ref{thm:osposg:conv-max-justified}, we can prove that this is true for $\uv$:
\begin{restatable}{lemma}{LBmaxJustByConv}
\label{thm:osposg:max-justified}
Let $\Gamma$ be the set of $\alpha$-vectors that have been generated at any time during the execution of the HSVI algorithm for one-sided POSGs (Algorithm~\ref{alg:osposg:hsvi}).
Then the lower bound $\uv$ is max-justified by the set $\mathsf{Conv}(\Gamma)$.
\end{restatable}
Even though the proof is more complicated, the analogous result holds for $\ov$ as well:
\begin{lemma}
\thlabel{thm:osposg:min-justified}
Let $\ov$ be the upper bound considered at any time of the execution of the HSVI algorithm for one-sided POSGs (Algorithm~\ref{alg:osposg:hsvi}).
Then $\ov$ is min-justified.
\end{lemma}
\begin{proof}
Upper bound $\ov$ is only modified by means of point-based update on lines~\ref{alg:osposg:hsvi:pb-update-1} and~\ref{alg:osposg:hsvi:pb-update-2} of Algorithm~\ref{alg:osposg:hsvi}.
Therefore, it suffices to show that (1) the initial upper bound is min-justified and that (2) the upper bound $V_{\mathrm{UB}}^{\Upsilon'}$ resulting from applying a point-based update on a min-justified upper bound $\ov$ is min-justified as well.
First, let us prove that the initial value function $\ov$ is min-justified.
Initially, $\ov(b)$ is set to the value of a \emph{perfect information} version of the game, where the imperfectly informed player~1 gets to know the initial state of the game.
By removing this information from player~1, the utility player~1 can achieve can only decrease.
It follows that $[H\ov](b) \leq \ov(b)$, so the initial value function $\ov(b)$ is min-justified.
Now, let us consider an upper bound $\ov$ represented by a set $\Upsilon = \lbrace (b_i, y_i) \mid 1 \leq i \leq k \rbrace$ that is considered by Algorithm~\ref{alg:osposg:hsvi}, and let us assume that $\ov$ is min-justified.
Suppose that a point-based update in $b_{k+1}$ is about to be performed.
We show that the function $V_{\mathrm{UB}}^{\Upsilon'}$ resulting from the point-based update in $b_{k+1}$ is min-justified as well.
Recall that $\Upsilon' = \Upsilon \cup \lbrace (b_{k+1}, y_{k+1}) \rbrace$ and $y_{k+1} = [H\ov](b_{k+1})$.
Clearly, since $\Upsilon \subset \Upsilon'$, it holds that $V_{\mathrm{UB}}^{\Upsilon'}(b) \leq \ov(b)$ and $[HV_{\mathrm{UB}}^{\Upsilon'}](b) \leq [H\ov](b)$ for every $b \in \Delta(S)$.
Due to this, and since $\ov$ is assumed to be min-justified, we have $y_i \geq [HV_{\mathrm{UB}}^{\Upsilon'}](b_i)$ for every $1 \leq i \leq k+1$.
We will now prove that $V_{\mathrm{UB}}^{\Upsilon'}$ is min-justified by showing that $[HV_{\mathrm{UB}}^{\Upsilon'}](b) \leq V_{\mathrm{UB}}^{\Upsilon'}(b)$ holds for an arbitrary belief $b \in \Delta(S)$.
Let $\lambda_i$ and $b'$ correspond to the optimal solution of the linear program~\eqref{lp:osposg:projection} for solving $V_{\mathrm{UB}}^{\Upsilon'}(b)$.
We have
\begin{align*}
V_{\mathrm{UB}}^{\Upsilon'}(b) &= \sum_{i=1}^{k+1} \lambda_i y_i + \delta \| b - b' \|_1 && \text{$\lambda_i$, $b'$ optimal in LP~\eqref{lp:osposg:projection} for $V_{\mathrm{UB}}^{\Upsilon'}(b)$} \\
&\geq \sum_{i=1}^{k+1} \lambda_i \cdot [HV_{\mathrm{UB}}^{\Upsilon'}](b_i) + \delta \| b - b' \|_1 && \text{$y_i \geq [HV_{\mathrm{UB}}^{\Upsilon'}](b_i)$ for every $i$} \\
&\geq [HV_{\mathrm{UB}}^{\Upsilon'}](b') + \delta \| b - b' \|_1 && \text{$[HV_{\mathrm{UB}}^{\Upsilon'}]$ is convex, see Proposition~\ref{thm:osposg:hv-convex}} \\
&\geq [HV_{\mathrm{UB}}^{\Upsilon'}](b) && \text{$[HV_{\mathrm{UB}}^{\Upsilon'}]$ is $\delta$-Lipschitz by Proposition~\ref{thm:osposg:hv-convex}} \ \text{.}
\end{align*}
This shows that any point-based update results in a min-justified value function $V_{\mathrm{UB}}^{\Upsilon'}$.
As a result, Algorithm~\ref{alg:osposg:hsvi} only considers upper bounds $\ov$ that are min-justified.
\end{proof}
We are now in a position to show that Algorithm~\ref{alg:osposg:hsvi} produces $\varepsilon$-Nash equilibrium strategies.
\begin{theorem}\thlabel{thm:equilibrium}
In any OS-POSG, applying Algorithms~\ref{alg:osposg:cr} and~\ref{alg:osposg:cr2} to the output of Algorithm~\ref{alg:osposg:hsvi} yields an $\varepsilon$-Nash equilibrium.
\end{theorem}
\begin{proof}
According to \thref{thm:osposg:correctness}, Algorithm~\ref{alg:osposg:hsvi} terminates and the value functions $\uv$ and $\ov$ that result from the execution of the algorithm satisfy $\ov(b^{\mathrm{init}}) - \uv(b^{\mathrm{init}}) \leq \varepsilon$.
Furthermore, we know that lower bound $\uv$ is max-justified by the set $\Gamma$ resulting from the execution of Algorithm~\ref{alg:osposg:hsvi} (\thref{thm:osposg:max-justified}), and the upper bound $\ov$ is min-justified (\thref{thm:osposg:min-justified}).
We can therefore use Algorithm~\ref{alg:osposg:cr} to obtain a strategy for player~1 that achieves utility of at least $\uv(b^{\mathrm{init}})$ for player~1 (\thref{thm:osposg:p1-strategy-guarantee}).
Similarly, we can use Algorithm~\ref{alg:osposg:cr2} to obtain a strategy for player~2 that ensures that the utility of player~1 will be at most $\ov(b^{\mathrm{init}})$ (Proposition~\ref{thm:osposg:p2-strategy}).
It follows that if either player were to deviate from the strategy prescribed by the algorithm, they would not be able to improve their utility by more than $\ov(b^{\mathrm{init}}) - \uv(b^{\mathrm{init}})$.
Since $\ov(b^{\mathrm{init}}) - \uv(b^{\mathrm{init}}) \leq \varepsilon$, these strategies form an $\varepsilon$-Nash equilibrium of the game.
\end{proof}
\section{Experimental evaluation}
\label{sec:osposg:experiments}
In this section, we focus on the experimental evaluation of the heuristic search value iteration algorithm for solving one-sided partially observable stochastic games from Section~\ref{sec:osposg:hsvi}.
We demonstrate the scalability of the algorithm in three security domains.
Rewards in all of the domains have been scaled to the interval $[0,100]$ or $[-100,0]$, respectively, and we report the runtime required to reach $\ov(b^{\mathrm{init}}) - \uv(b^{\mathrm{init}}) \leq 1$.
We first outline the details of our experimental setup.
\subsection{Algorithm Settings}\label{sec:alg:settings}
Compared to the version of the HSVI algorithm presented in Section~\ref{sec:osposg:hsvi}, we adopt several modifications to improve the scalability of the algorithm.
In this section, we describe these modifications and show that the theoretical guarantees of the algorithm still hold.
\paragraph{Pruning the Sets $\Gamma$ and $\Upsilon$}
Each time a point-based update is performed, the size of the sets $\Gamma$ and $\Upsilon$ used to represent value functions $\uv$ and $\ov$ increases.
As new elements are generated, some of the elements in these sets may become unnecessary to accurately represent the bounds $\uv$ and $\ov$.
Since the sizes of the sets $\Gamma$ and $\Upsilon$ have a direct impact on the sizes of the linear programs used throughout the algorithm, removing unnecessary elements from $\Gamma$ and $\Upsilon$ improves performance.
Whenever a new $\alpha$-vector $\mathsf{valcomp}(\pi_1^{\mathrm{LB}}, \overline{\alpha}^{\mathrm{LB}})$ is generated according to Equation~\eqref{eq:osposg:pbupdate-lb}, all dominated elements of the set $\Gamma$ are removed: only those elements $\alpha \in \Gamma$ that dominate $\mathsf{valcomp}(\pi_1^{\mathrm{LB}}, \overline{\alpha}^{\mathrm{LB}})$ in at least one state remain, i.e.,
\begin{align}
\Gamma \coloneqq {} & \left\lbrace \alpha' \in \Gamma \mid \exists s \in S: \alpha'(s) > \mathsf{valcomp}(\pi_1^{\mathrm{LB}}, \overline{\alpha}^{\mathrm{LB}})(s) \right\rbrace \nonumber \\
& \cup \left\lbrace \mathsf{valcomp}(\pi_1^{\mathrm{LB}}, \overline{\alpha}^{\mathrm{LB}}) \right\rbrace \ \text{.} \label{eq:osposg:uv-pruning}
\end{align}
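The pruning rule of Equation~\eqref{eq:osposg:uv-pruning} amounts to a simple point-wise dominance check. In the following sketch, $\alpha$-vectors are represented as dictionaries mapping states to values; the function name is illustrative.

```python
def prune_gamma(gamma, new_alpha):
    """Keep only alpha-vectors dominating new_alpha in >= 1 state, plus new_alpha."""
    kept = [a for a in gamma
            if any(a[s] > new_alpha[s] for s in new_alpha)]
    return kept + [new_alpha]

gamma = [{'s1': 3.0, 's2': 1.0},   # dominates new_alpha in s1 -> kept
         {'s1': 1.0, 's2': 1.0}]   # dominated everywhere -> removed
new_alpha = {'s1': 2.0, 's2': 2.0}
assert prune_gamma(gamma, new_alpha) == [{'s1': 3.0, 's2': 1.0}, new_alpha]
```

Note that a vector is kept as soon as it exceeds the new vector in a single state, which is exactly the condition $\exists s \in S: \alpha'(s) > \mathsf{valcomp}(\pi_1^{\mathrm{LB}}, \overline{\alpha}^{\mathrm{LB}})(s)$.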
For the set $\Upsilon$ used to represent the upper bound $\ov$, we use a batch approach instead of removing dominated elements immediately.
We remove dominated elements every time the size of the set $\Upsilon$ increases by 10\% compared to the size after the last pruning was performed (this is analogous to the pruning technique proposed in~\citep{smith2004-hsvi}).
Algorithm~\ref{alg:osposg:ov-pruning} inspects each point $(b_i,y_i) \in \Upsilon$ and checks whether it is needed to represent the value function $\ov$; if not, the point is removed.
\begin{algorithm}
\caption{Pruning set $\Upsilon$ representing the upper bound $\ov$}
\label{alg:osposg:ov-pruning}
\DontPrintSemicolon
\SetKwInOut{Input}{input}
\Input{Set $\Upsilon$ used to represent $\ov$}
\For{$(b_i,y_i) \in \Upsilon$}{
\lIf{$y_i > \ov(b_i)$}{$\Upsilon \coloneqq \Upsilon \setminus \lbrace (b_i,y_i) \rbrace$}
}
\end{algorithm}
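A sketch of Algorithm~\ref{alg:osposg:ov-pruning} follows. As a simplification, instead of the full LP~\eqref{lp:osposg:projection} (which mixes the points of $\Upsilon$ convexly), the evaluator below uses the looser single-point bound $\min_j \, (y_j + \delta \| b - b_j \|_1)$; this conservative variant removes at most as many points as the exact test, and the LP-based evaluator would slot in unchanged. All names are illustrative.

```python
def ov(upsilon, b, delta):
    # Simplified upper-bound evaluation: best single-point Lipschitz extension.
    return min(y_j + delta * sum(abs(b[s] - b_j[s]) for s in b)
               for b_j, y_j in upsilon)

def prune_upsilon(upsilon, delta):
    # Remove each point whose value is already matched by the remaining points.
    for point in list(upsilon):
        rest = [p for p in upsilon if p is not point]
        b_i, y_i = point
        if rest and y_i > ov(rest, b_i, delta):
            upsilon.remove(point)      # point not needed to represent ov
    return upsilon

upsilon = [({'s1': 1.0, 's2': 0.0}, 5.0),
           ({'s1': 1.0, 's2': 0.0}, 7.0)]   # same belief, looser value
assert prune_upsilon(upsilon, delta=1.0) == [({'s1': 1.0, 's2': 0.0}, 5.0)]
```

The loose point $(b, 7.0)$ is removed because the tighter point at the same belief already yields $\ov(b) = 5.0 < 7.0$.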
Removing elements from sets $\Gamma$ and $\Upsilon$ does not violate the theoretical properties of the algorithm.
First of all, only elements that are not necessary to represent the currently considered bounds are removed; hence the values of the value functions $\uv$ and $\ov$ considered at each step of the algorithm remain unchanged, and the convergence property is retained.
Furthermore, we can still use pruned value functions to extract strategies with guaranteed performance.
Since the resulting upper bound value function $\ov$ is identical to the one obtained without pruning, it is still min-justified. It can thus be used to obtain a strategy of the minimizing player~2 with guaranteed utility at most $\ov(b^{\mathrm{init}})$ (Section~\ref{sec:osposg:p2-strategy}).
Similarly, $\uv$ can be used to obtain a strategy of player~1 (Section~\ref{sec:osposg:p1-strategy}).
Although the resulting set $\Gamma$ of $\alpha$-vectors differs from the set constructed by Algorithm~\ref{alg:osposg:hsvi} without pruning, for every missing element $\alpha'$ there has to exist an element $\alpha$ such that $\alpha \geq \alpha'$ (see Equation~\eqref{eq:osposg:uv-pruning}).
Therefore, we can always replace missing $\alpha$-vectors in value compositions (i.e., in the linear functions $\alpha^{a_1,o}$) without decreasing the value of the resulting value composition, and hence $\uv$ remains max-justified by the set of $\alpha$-vectors $\mathsf{Conv}(\Gamma)$.
\paragraph{Partitioning States and Value Functions}
In many games, even the imperfectly informed player~1 has access to some information about the game.
For example, in the pursuit-evasion games we discuss below, the pursuer \emph{knows} his position---and representing his uncertainty about his position within the belief is unnecessary.
To reduce the dimension of the beliefs, we allow for partitioning states into disjoint sets such that the imperfectly informed player~1 always \emph{knows} in which set he is currently.
Formally, let $S = \bigcup_{i=1}^K S_i$ such that $S_i \cap S_j = \emptyset$ for every $i \neq j$.
Player~1 has to know the initial partition, i.e., $\Supp(b^{\mathrm{init}}) \subseteq S_i$ for some $1 \leq i \leq K$.
Furthermore, he has to be able to infer which partition he is in at any time, i.e., for every belief $b$ over a partition $S_i$ (i.e., $\Supp(b) \subseteq S_i$), every achievable action-observation pair $(a_1,o)$ and every stage strategy $\pi_2 \in \Pi_2$ of player~2, we have $\Supp(\tau(b,a_1,\pi_2,o)) \subseteq S_j$ for some $1 \leq j \leq K$.
We use $T(S_i, a_1, o)$ to denote such $S_j$.
This partitioning allows for reducing the size of LP~\eqref{eq:osposg:max-composition-lp} used to compute stage game solutions.
Namely, the quantification over $s \in S$ can be replaced by $s \in S_i$, where $S_i$ is the current partition.
Furthermore, since also the partition of the next stage has to be known, we can also replace $(a_1,o,s') \in A_1 \times O \times S$ by $(a_1,o,s') \in A_1 \times O \times T(S_i,a_1,o)$.
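The partition map $T(S_i, a_1, o)$ described above can be inferred directly from the transition structure. In the sketch below, transitions are given as a hypothetical dictionary $(s, a_1, a_2) \mapsto \lbrace (o, s') \rbrace$; the partition property requires that all states reachable from a block $S_i$ under $(a_1, o)$, for any action $a_2$ of player~2, lie in a single block $S_j$. All names are illustrative.

```python
def partition_map(blocks, trans, A1, A2, O):
    """Build T[(i, a1, o)] = j, checking the partition property as we go."""
    block_of = {s: i for i, S_i in enumerate(blocks) for s in S_i}
    T = {}
    for i, S_i in enumerate(blocks):
        for a1 in A1:
            for o in O:
                # States reachable from block S_i under (a1, o), any a2.
                reach = {s2 for s in S_i for a2 in A2
                         for (o2, s2) in trans.get((s, a1, a2), ())
                         if o2 == o}
                if not reach:
                    continue
                js = {block_of[s2] for s2 in reach}
                assert len(js) == 1, "player 1 cannot infer the next block"
                T[(i, a1, o)] = js.pop()
    return T
```

This is exactly the precomputation that lets the LP replace the quantification over $A_1 \times O \times S$ by $A_1 \times O \times T(S_i,a_1,o)$.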
\paragraph{Parameters and Hardware}
We initialize the upper and lower bounds using value iteration for stochastic games and for MDPs, respectively.
The upper bound is initialized by solving a perfect-information variant of the game (see Section~\ref{sec:osposg:hsvi:vf}).
The lower bound is computed by fixing the uniform strategy $\sigma_1^{\mathrm{unif}}$ for player~1 and solving the resulting Markov decision process from the perspective of player~2.
We terminate the algorithms when either the change in valuations between iterations of value iteration drops below $0.025$, or a $20$-minute time limit expires.
The initialization time is included in the computation times of the HSVI algorithm.
We use $\varepsilon=1$.
However, similarly to~\cite{smith2004-hsvi}, we adjust $\varepsilon$ in each iteration: the value $\varepsilon_{\mathrm{imm}}$ used in the current iteration is obtained by the formula $\varepsilon_{\mathrm{imm}} = 0.25 + \eta(\ov(b^{\mathrm{init}}) - \uv(b^{\mathrm{init}}) - 0.25)$ with $\eta=0.9$.
We set the parameter $D$ to the largest value such that $\rho(t) \geq 0.25^{-t}$ holds for every $t \geq 0$.
Each experiment has been run on a single core of Intel Xeon Platinum 8160. We have used CPLEX 12.9 to solve the linear programs.
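The adjusted gap target above can be sketched in a few lines. Whenever the current gap exceeds $0.25$, the target $\varepsilon_{\mathrm{imm}}$ sits strictly between $0.25$ and the current gap, so each iteration is asked to shrink the gap by more than a fixed $\varepsilon = 1$ would require early on; the driver loop below, which assumes each iteration just meets its target, is purely illustrative.

```python
def eps_imm(gap, eta=0.9, floor=0.25):
    # Per-iteration gap target: eps_imm = 0.25 + eta * (gap - 0.25).
    return floor + eta * (gap - floor)

gap = 40.0
targets = []
for _ in range(5):
    targets.append(eps_imm(gap))
    gap = targets[-1]          # suppose the iteration just meets the target

# Targets decrease monotonically toward the floor 0.25.
assert targets[0] == 0.25 + 0.9 * (40.0 - 0.25)
assert all(t2 < t1 for t1, t2 in zip(targets, targets[1:]))
```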
\subsection{Experimental Results}
We now turn our attention to the discussion of experimental results.
We introduce the domains used in our experiments and comment on the scalability of the proposed algorithm.
\begin{figure}
\caption{\ }
\caption{\ }
\caption{Pursuit-evasion games: (a) pursuit-evasion game $5 \times N$; (b) computation times to reach gap $\varepsilon = 1$.}
\label{fig:peg-game}
\label{fig:peg-results}
\end{figure}
\paragraph{Pursuit-Evasion Games (inspired by~\citep{chung2011search,isler2008-os-peg})}
In pursuit-evasion games, a team of $K$ centrally controlled pursuers (we consider a team of $K=2$) is trying to locate and capture the evader---who is trying to avoid getting captured.
The game is played on a grid (dimensions $3 \times N$), with the pursuers starting in the top-left corner and the evader in the bottom-right corner -- see Figure~\ref{fig:peg-game}.
In each step, the units move to one of their adjacent locations (i.e., the actions of the evader are $A_2 = \lbrace \mathrm{left}, \mathrm{right}, \mathrm{up}, \mathrm{down} \rbrace$, while the actions available to the team of pursuers are joint actions for all units in the team, $A_1 = (A_2)^K$).
The game ends when one of the units from the team of pursuers enters the same cell as the evader---and the team of pursuers (player~1) then receives a reward of $+100$.
The reward for all other transitions in the game is zero.
The pursuers know the locations of their own units, but the current location of the evader is unknown to them.
The game with $N=3$ was solved in 4.5\,s on average, while the game with $N=7$ took 10\,267\,s to solve to the gap $\varepsilon=1$ -- full results can be found in Figure~\ref{fig:peg-results}.
The game with $N=8$ has not been solved within the 10-hour time limit; a gap of $\ov(b^{\mathrm{init}})-\uv(b^{\mathrm{init}}) = 1.245$ was reached after 10 hours.
The sizes of the games range from 143 states and 2\,671 transitions (for $N=3$) to 3\,171 states and 92\,531 transitions (for $N=8$).
\begin{figure}
\caption{Intrusion-search games: (a) Intrusion-search game with width $W=3$ in configuration 1-1: A denotes the initial position of the attacker and D the positions of the defender's units. T is the attacker's target. (b) Time to reach $\ov(b^{\mathrm{init}}) - \uv(b^{\mathrm{init}}) \leq 1$.}
\label{fig:bpg}
\label{fig:bpg-results}
\end{figure}
\paragraph{Search Games (inspired by~\citep{bosansky2014-jair})}
In search games that model intrusion, the defender patrols checkpoint zones (see Figure~\ref{fig:bpg}; the zones are marked with boxes).
The attacker aims to cross the graph while not being captured by the defender.
She can either wait for one move to conceal her presence (and clean up her traces), or move onward.
Each unit of the defender can move to adjacent nodes within its assigned zone.
The goal of the attacker is to cross the graph to reach node marked by $T$ without encountering any unit of the defender.
If she manages to do so, the defender receives a reward of $-100$.
We consider games with two checkpoint zones with a varying number of nodes in a zone $W$ (i.e. the width of the graph). We use two configurations of the defending forces: (1) one defender in each checkpoint and (2) two defenders in the first checkpoint and one defender in the second checkpoint. We denote these settings as 1-1 and 2-1.
The results are shown in Figure~\ref{fig:bpg-results} (with five runs for each parameterization, the confidence intervals mark the standard error in our graphs).
The largest game ($W=5$ and two defenders in the first zone) has 4\,656 states and 121\,239 transitions and can be solved within 560\,s.
This case highlights that our algorithm can solve even large games.
However, a much smaller game with $W=5$ and configuration 1-1 (964 states and 9\,633 transitions) is more challenging: the coordination problem with just one defender in the first zone is harder, and despite its smaller size, the game takes 640\,s to solve.
\paragraph{Patrolling Games (inspired by~\citep{Basilico2009,vorobeychik2014-icaps})}
In a patrolling game, a patroller (player~1) aims to protect a set of targets $V$.
The targets are represented by vertices of a graph, and the possible movements of the patroller are represented by the edges of the graph.
The attacker observes the movement of the patroller and decides which target $v \in V$ he will attack, or whether he will postpone the decision.
Once the attacker decides to attack a target $v$, the defender has $t_\times$ steps to reach the attacked vertex.
If he fails to do so, he receives a negative reward $-C(v)$ associated with the target $v$---otherwise, he successfully protects the target, and the reward is zero.
The patroller does not know whether and where the attack has already started.
The costs $C(v)$ are scaled so that $\max_{v \in V} C(v) = 100/\gamma^{t_{\times}}$, i.e., the minimum possible payoff for the defender is $-100$.
Following the setting in~\citep{vorobeychik2014-icaps}, we focus on graphs generated from the Erd\H{o}s--R\'enyi model~\citep{newman2010networks} with parameter $p=0.25$ (denoted $ER(0.25)$), with attack times $t_\times \in \lbrace 3, 4 \rbrace$ and the number of vertices $|\mathcal{V}|$ ranging from 7 to 17.
The time to solve even the largest instances ($|\mathcal{V}|=17$) with $t_\times=3$ was $305.5$\,s.
For attack time $t_\times=4$, however, some instances failed to reach the precision $\ov(b_{\mathrm{init}}) - \uv(b_{\mathrm{init}}) \leq 1$ within the time limit of 10 hours.
For the most difficult setting, $|\mathcal{V}|=17$ and $t_\times=4$, the algorithm reached desired precision in $70\%$ of instances.
For the unsolved instances in this setting, the mean gap $\ov(b_{\mathrm{init}}) - \uv(b_{\mathrm{init}})$ after the 10-hour cutoff is nevertheless reasonably small at $3.77\pm0.54$.
The results include games with up to 856 states and 6\,409 transitions.
See Figure~\ref{fig:patrolling-results} for more details.
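As a sanity check of the cost scaling above, the snippet below verifies numerically that a penalty of $100/\gamma^{t_\times}$, discounted by $\gamma^{t_\times}$ (assuming, for illustration, that the penalty is received exactly $t_\times$ steps after the attack starts), yields the stated minimum payoff of $-100$. The concrete values of $\gamma$ and $t_\times$ are arbitrary.

```python
# Numeric sanity check of the cost scaling max C(v) = 100 / gamma^{t_x}.
# Assumption (ours, for illustration): a successful attack's penalty is
# discounted by gamma^{t_x}, so the worst payoff is exactly -100.
gamma, t_x = 0.95, 3                    # arbitrary illustrative values
max_cost = 100 / gamma ** t_x           # scaled cost of the worst target
worst_payoff = -gamma ** t_x * max_cost
assert abs(worst_payoff - (-100)) < 1e-9
```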
\begin{figure}
\caption{Time to reach $\ov(b_{\mathrm{init}}) - \uv(b_{\mathrm{init}}) \leq 1$ in patrolling games.}
\label{fig:patrolling-results}
\end{figure}
\begin{figure}
\caption{Effect of initialization on runtime. The target error is measured as Bellman residual $\| TV - V \|_\infty$ of the value iteration algorithms used to obtain initial bounds.}
\label{fig:residual}
\end{figure}
\subsection{Impact of Initialization on Solution Time}
Recall that we use value iteration algorithms for solving perfect information stochastic games and Markov decision processes, respectively, to initialize upper and lower bounds on $V^*$.
In our experiments, we terminate the algorithms whenever the change in valuation between iterations of value iteration is smaller than $\beta=0.025$.
In Figure~\ref{fig:residual}, we analyze the impact of the choice of $\beta$ on the running time of the algorithm when applied to pursuit evasion games.
Observe that the tighter the initial bounds, the faster the algorithm converges.
In fact, the difference between $\beta=1$ and $\beta=0.025$ is approximately an order of magnitude in run time.
Recall that the bounds $\ov$ and $\uv$ not only serve as bounds on $V^*$, but they are also used to obtain strategies that are considered during the forward exploration phase of the algorithm (see lines~\ref{alg:osposg:hsvi:follower-choice} and~\ref{alg:osposg:hsvi:leader-choice} of Algorithm~\ref{alg:osposg:hsvi}).
We believe that these results indicate that the use of, e.g., domain-dependent initialization of the bounds can greatly improve the run time of the algorithm in complex domains.
\subsection{Performance Analysis}
Based on the algorithm's runtime data, we observed that most of the computation time is split among three tasks: solving the linear programs used to compute $H\ov$ and $H\uv$, and pruning the representations of these bounds.
Together, these three tasks typically took around 85\% of the total runtime (and always at least 70\%), with the remaining time being spent on the computation of initial bounds, the construction of the linear programs, and other smaller tasks.
More specifically, solving $H\ov$ took 30--50\% of the runtime in typical games, reaching as much as 60\% in large pursuit evasion games (e.g., 60.5\% in the $3 \times 7$ pursuit evasion game).
Solving $H\uv$ was faster --- in most games, it took between 10 and 20\% of the total runtime.
Finally, the time required to prune the bounds $\ov$ and $\uv$ also took 10--20\% of the runtime, with the exception of the patrolling games with attack time $t_\times=4$, where it required over 40\% of the total runtime.
\section{Conclusions}
We cover two-player zero-sum partially observable stochastic games (POSGs) with discounted rewards and one-sided observability --- that is, those where the second player has perfect information about the game.
We describe the theoretical properties of the value function in these games and show that algorithms based on value-iteration converge to an optimal value of the game.
We also propose the first approximate algorithm that generalizes the ideas behind point-based algorithms designed for partially observable Markov decision processes (POMDPs) and transfers these techniques to POSGs.
The presented work shows that it is possible to translate selected results from the single-agent setting to zero-sum games.
Moreover, future work could extend the presented results in several ways:
First, as already demonstrated by existing follow-up works~\cite{horak2019-cose}, the scalability of the algorithm can be substantially improved for specific security games.
Second, many heuristics and methods proven useful in the POMDP setting can be translated and evaluated in the game-theoretic setting, further improving the scalability.
Third, generalization beyond the strictly adversarial setting (e.g., by computing a Stackelberg equilibrium) is another key direction supporting the applicability of these game-theoretic models to security.
\appendix
\section{Proofs}\label{sec:app:proofs}
\ValuesAreBounded*
\begin{proof}
The smallest payoff player~1 can hypothetically achieve in any play consists of getting $\underline{r} = \min_{(s,a_1,a_2)} R(s,a_1,a_2)$ in every timestep.
The infinite discounted sum $\sum_{t=1}^\infty \gamma^{t-1} \underline{r}$ converges to $\underline{r}/(1-\gamma)=L$.
Similarly, the maximum payoff is achieved if player~1 obtains $\overline{r} = \max_{(s,a_1,a_2)} R(s,a_1,a_2)$ in every timestep, which yields $U = \overline{r}/(1-\gamma)$.
Expected values of strategies (and therefore also the value of the game) are expectations over the payoffs of individual plays---hence they are bounded by $L$ and $U$ as well.
\end{proof}
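The bound in the proof can also be checked empirically. This illustrative Python snippet (with an arbitrary reward range and discount factor) verifies that truncated discounted sums of rewards stay within $[L,U]$.

```python
import random

# Empirical check (arbitrary parameters) that discounted payoffs lie in
# [L, U] with L = r_min / (1 - gamma) and U = r_max / (1 - gamma).
gamma, r_min, r_max = 0.9, -5.0, 3.0
L, U = r_min / (1 - gamma), r_max / (1 - gamma)

random.seed(0)
for _ in range(100):
    # A random play, truncated to a long finite horizon.
    rewards = [random.uniform(r_min, r_max) for _ in range(500)]
    payoff = sum(gamma ** t * r for t, r in enumerate(rewards))
    assert L <= payoff <= U
```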
\ValuesConvex*
\begin{proof}
Definition~\ref{def:osposg:v-star} defines $V^*$ as the point-wise supremum over linear functions $\val^{\sigma_1}$ (over all strategies $\sigma_1 \in \Sigma_1$ of player~1).
This implies the convexity of $V^*$~\citep[p.81]{boyd2004-convex}.
\end{proof}
\LinearFsAreLipschitz*
\begin{proof}
Let $p,q \in \Delta(X)$ be two arbitrary points in the probability simplex over the finite set $X$.
Since $f$ is a linear function, its value is a convex combination of the values $\alpha(u)$ at the vertices of the simplex corresponding to the elements $u \in X$,
\begin{subequations}
\begin{equation}
f(p) = \sum_{u \in X} \alpha(u) \cdot p(u) \qquad \text{where} \qquad \alpha(u) = f(\mathbbm{1}_u), \ \ \mathbbm{1}_u(v) = \begin{cases}
1 & v = u \\
0 & \text{otherwise}
\end{cases} \ \text{.}
\end{equation}
Without loss of generality, let us assume $f(p) \geq f(q)$.
Now, the difference $| f(p) - f(q) |$ satisfies
\begin{equation}
| f(p) - f(q) | = f(p) - f(q) = \sum_{u \in X} \alpha(u) \cdot [ p(u) - q(u) ] \ \text{.} \label{thm:osposg:linbound-lipschitz:step1}
\end{equation}
Denote $X^+ = \lbrace u \in X \ |\ p(u) - q(u) \geq 0 \rbrace$ and $X^- = \lbrace u \in X \ |\ p(u) - q(u) < 0 \rbrace$.
We can now bound the difference from Equation~\eqref{thm:osposg:linbound-lipschitz:step1} by $| f(p) - f(q) | = $
\begin{align}
&= \sum_{u \in X^+} \alpha(u) \cdot [ p(u) - q(u) ] + \sum_{u \in X^-} \alpha(u) \cdot [ p(u) - q(u) ] \\
&\leq \sum_{u \in X^+} y_{\mathrm{max}} \cdot [ p(u) - q(u) ] + \sum_{u \in X^-} y_{\mathrm{min}} \cdot [ p(u) - q(u) ] \\
&= y_{\mathrm{max}} \sum_{u \in X^+} [ p(u) - q(u) ] + y_{\mathrm{min}} \sum_{u \in X^-} [ p(u) - q(u) ] \ \text{.}
\end{align}
Since both $p$ and $q$ belong to $\Delta(X)$, we have $\| p \|_1 = \| q \|_1 = 1$. Since $p(u),q(u) \geq 0$ are non-negative, we have
\begin{equation*}
\| p \|_1 = \| q \|_1 + \sum_{u \in X^+} [ p(u) - q(u) ] - \sum_{u \in X^-} [ q(u) - p(u) ] . \end{equation*}
It follows that
\begin{equation}\label{eq:osposg:l1-split}
\sum_{u \in X^+} [ p(u) - q(u) ] = -\sum_{u \in X^-} [ p(u) - q(u) ] \text{.}
\end{equation}
From Equation~\eqref{eq:osposg:l1-split}, we further see that both sides of \eqref{eq:osposg:l1-split} are equal to $\| p - q \|_1 / 2 $.
This implies that
\begin{align*}
| f(p) - f(q) | \leq \ & \ y_{\mathrm{max}} \| p - q \|_1/2 + y_{\mathrm{min}} (- \| p - q \|_1/2) \\
= \ & \ (y_{\mathrm{max}} - y_{\mathrm{min}})/2 \cdot \| p - q \|_1
\end{align*}
which completes the proof.
\end{subequations}
\end{proof}
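The Lipschitz constant derived above can be validated numerically. The sketch below samples random pairs of points on the simplex and checks that $|f(p)-f(q)| \leq \frac{y_{\mathrm{max}}-y_{\mathrm{min}}}{2}\|p-q\|_1$ for a random linear $f$; all parameter choices are arbitrary.

```python
import random

# Numeric check of the lemma: a linear f on the simplex Delta(X) is
# (y_max - y_min)/2 - Lipschitz in the L1 norm.
random.seed(1)
n = 5
alpha = [random.uniform(-10, 10) for _ in range(n)]  # f at the vertices
delta = (max(alpha) - min(alpha)) / 2                # claimed constant

def rand_simplex(n):
    """Sample a random point of the probability simplex."""
    w = [random.random() for _ in range(n)]
    s = sum(w)
    return [x / s for x in w]

def f(p):
    return sum(a * x for a, x in zip(alpha, p))

for _ in range(1000):
    p, q = rand_simplex(n), rand_simplex(n)
    l1 = sum(abs(x - y) for x, y in zip(p, q))
    assert abs(f(p) - f(q)) <= delta * l1 + 1e-9
```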
\ValuesAreLipschitz*
\begin{proof}
$V^*$ is defined as a supremum over $\delta$-Lipschitz continuous values $\val^{\sigma_1}$ of strategies $\sigma_1 \in \Sigma_1$ of the imperfectly informed player~1.
Therefore for arbitrary $b, b' \in \Delta(S)$, we have the following
\begin{equation}
V^*(b) = \sup_{\sigma_1 \in \Sigma_1} \val^{\sigma_1}(b) \leq \sup_{\sigma_1 \in \Sigma_1} [ \val^{\sigma_1}(b') + \delta \| b - b' \|_1 ] = V^*(b') + \delta \| b - b' \|_1 \ \text{.}
\end{equation}
\end{proof}
\ConvexClosureDoesntIncreaseSupremum*
\begin{proof}
Clearly, it suffices to prove the inequality $\geq$.
Let $b \in \Delta(S)$ and let $\sum_{i=1}^k \lambda_i \alpha_i$ be an arbitrary\footnote{Recall that according to Carath\'eodory's theorem, it suffices to consider finite convex combinations.} convex combination of linear functions from $\Gamma$ (i.e., we have $\alpha_i \in \Gamma$).
We need to show that $\alpha(b) \geq \sum_{i=1}^k \lambda_i \alpha_i(b)$ holds for some $\alpha \in \Gamma$.
This is straightforward, as can be witnessed by the function $\alpha_{i^*} \in \Gamma$, $i^* := \argmax_i \alpha_i(b)$:
\begin{align*}
\sum_{i=1}^k \lambda_i \alpha_i(b) \leq \sum_{i=1}^k \lambda_i \max_{1 \leq i \leq k} \alpha_i(b) = \max_{1 \leq i \leq k} \alpha_i(b) = \alpha_{i^*}(b) \ \text{.}
\end{align*}
\end{proof}
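The key inequality of the proof, that a convex combination never exceeds the maximum of its constituents, can be illustrated with a trivial numeric example (the coefficients and values below are arbitrary):

```python
# A convex combination of the values alpha_i(b) never exceeds their
# maximum; lam are convex coefficients (non-negative, summing to 1).
lam = [0.2, 0.5, 0.3]
vals = [1.0, -2.0, 4.0]          # alpha_i(b) for a fixed belief b
combo = sum(l * v for l, v in zip(lam, vals))
assert combo <= max(vals)        # the witness alpha_{i*} dominates
```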
\ConvexFunctionsAsSupremaOfLinearFs*
\begin{proof}
Let $\Gamma := \{ \alpha : \Delta(S) \rightarrow \mathbb{R} \textnormal{ linear} \mid \alpha \leq f \}$.
Clearly, the pointwise supremum of $\Gamma$ is no greater than $f$.
It remains to show that $\sup_{\alpha \in \Gamma} \alpha(b_0) \geq f(b_0)$ for each $b_0$.
First, let $b_0$ be an interior point of $\Delta(S)$.
By a standard result of convex analysis, there exists a subgradient of $f$ at $b_0$, that is, a vector $v$ such that $f(b) \geq f(b_0) + v\cdot (b-b_0)$ holds for each $b \in \Delta(S)$.
The function $\alpha (b) := f(b_0) + v\cdot (b-b_0)$ therefore belongs to $\Gamma$ and witnesses that $\sup_{\alpha \in \Gamma} \alpha(b_0) \geq f(b_0)$.
Suppose that $b_0$ lies at the boundary of $\Delta(S)$ and let $\eta$, $\|\eta \|_1=1$, be a direction in which every nearby point $b_\delta := b_0 - \delta \eta$, $\delta \in (0,\Delta]$, lies in the interior of $\Delta(S)$ (for some $\Delta>0$).
Since $f$ is convex, the directional derivatives $f'_\eta(b_\delta) = \lim_{g\to 0_+} \frac{f(b_\delta+g\eta)-f(b_\delta)}{g}$ are non-decreasing as the points $b_\delta$ get closer to $b_0$.
In particular, the linear functions $\alpha_\delta$ found for $b_\delta$ in the previous step (choosing a subgradient $v$ with $v \cdot \eta = f'_\eta(b_\delta)$, which exists since the directional derivative is attained by some subgradient) satisfy
\[
\alpha_\delta(b_0) \geq f(b_\delta) + f'_\eta(b_\delta)\delta \geq f(b_\delta) + f'_\eta(b_\Delta)\delta .
\]
The right-hand side converges to $f(b_0) + f'_\eta(b_\Delta)\cdot 0 = f(b_0)$, which shows that the supremum of $\alpha_\delta(b_0)$ is at least $f(b_0)$. Since $\alpha_\delta \in \Gamma$, this proves the remaining part of the proposition.
\end{proof}
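The construction of the proof can be visualized in the simplest nontrivial case: for a two-element state set, beliefs reduce to a parameter $x \in [0,1]$, and a convex function such as $f(x)=x^2$ (an arbitrary example, not part of the proof) is recovered as the supremum of its tangent lines.

```python
# Two-point simplex: beliefs reduce to x in [0, 1].  The convex function
# f(x) = x^2 is the supremum of its tangents
# alpha_{x0}(x) = f(x0) + f'(x0) * (x - x0), as the proposition asserts.
def f(x):
    return x * x

def tangent(x0, x):
    return x0 * x0 + 2 * x0 * (x - x0)

grid = [i / 100 for i in range(101)]
for x in grid:
    sup_t = max(tangent(x0, x) for x0 in grid)
    assert sup_t <= f(x) + 1e-9      # every tangent lies below f
    assert f(x) - sup_t <= 1e-9      # the supremum attains f(x) at x
```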
\StrategyDecomposition*
\begin{proof}
Let $\sigma_1 \in \Sigma_1$ be an arbitrary behavioral strategy of player~1, and let $\pi_1 = \sigma_1(\emptyset)$ and $\zeta_{a_1,o}(\omega')=\sigma_1(a_1 o \omega')$ for every $(a_1,o) \in A_1 \times O$ and $\omega' \in (A_1 O)^*$.
It can be easily verified that $\mathsf{comp}(\pi_1,\overline{\zeta})$ defined in Definition~\ref{def:osposg:composite} satisfies $\mathsf{comp}(\pi_1,\overline{\zeta})=\sigma_1$.
\end{proof}
\StrategyCompositionValue*
\begin{proof}
Let us evaluate the payoff if player~2 uses $a_2$ in the first stage of the game given that the initial state of the game is $s$.
The expected reward of playing action $a_2$ against $\mathsf{comp}(\pi_1,\overline{\zeta})$ in the first stage is $\sum_{a_1 \in A_1} \pi_1(a_1) R(s,a_1,a_2)$, i.e., the expectation over the actions player~1 can take.
Now, at the beginning of the next stage, player~2 knows everything about the past stage---including action $a_1$ taken by player~1, observation $o$ he received, and the new state of the game $s'$.
Therefore, player~2 knows the strategy $\zeta_{a_1,o}$ player~1 is about to use in the rest of the game.
By definition of $\val^{\zeta_{a_1,o}}$ (Definition~\ref{def:osposg:strategy-value}), the best payoff player~2 can achieve in $(a_1,o)$-subgame is $\val^{\zeta_{a_1,o}}(s')$.
After reaching the subgame, however, one stage has already passed and the rewards originally received at time $t$ are now received at time $t+1$.
As a result, the reward $\val^{\zeta_{a_1,o}}(s')$ gets discounted by $\gamma$.
The probability that the $(a_1,o)$-subgame is reached with the game in state $s'$ is $\pi_1(a_1) T(o,s' \,|\, s,a_1,a_2)$, and summing over all $(a_1,o,s') \in A_1 \times O \times S$ yields the expectation of $\gamma \val^{\zeta_{a_1,o}}(s')$.
Player~2 chooses an action which achieves the minimum payoff, which completes the proof.
\end{proof}
\GenCompositionValue*
\begin{proof}
Let $\overline{\zeta} \in (\Sigma_1)^{A_1 \times O}$ be as in the lemma, and let $\overline{\alpha}^\zeta$ be such that $\alpha_{a_1,o}^\zeta = \val^{\zeta_{a_1,o}}$.
According to the assumption we have $\alpha_{a_1,o}^\zeta \geq \alpha_{a_1,o}$.
Replacing $\alpha_{a_1,o}$ by $\alpha_{a_1,o}^\zeta$ in Equation~\eqref{eq:osposg:valcomp} can only increase the objective value, hence
\begin{equation}
\mathsf{valcomp}(\pi_1,\overline{\alpha})(s) \leq \mathsf{valcomp}(\pi_1,\overline{\alpha}^\zeta)(s) = \val^{\mathsf{comp}(\pi_1,\overline{\zeta})}(s) \ \text{.}
\end{equation}
Composite strategies are behavioral strategies of player~1, hence setting $\sigma_1 = \mathsf{comp}(\pi_1,\overline{\zeta})$ completes the proof.
\end{proof}
\ValcompLipschitz*
\begin{proof}
Since $\mathsf{valcomp}(\pi_1,\overline{\alpha})(b)$ is calculated as a convex combination of the values $\mathsf{valcomp}(\pi_1,\overline{\alpha})(s)$ in the vertices of the $\Delta(S)$ simplex, it suffices to show that
\begin{equation*}
\left( \forall s \in S \right) : L \leq \mathsf{valcomp}(\pi_1,\overline{\alpha})(s) \leq U .
\end{equation*}
Let $a_2^* \in A_2$ be the minimizing action of player~2 in Equation~\eqref{eq:osposg:valcomp}.
It holds that $\underline{r} \leq R(s,a_1,a_2^*) \leq \overline{r}$, where $\underline{r}$ and $\overline{r}$ are the minimum and maximum rewards in the game.
Hence $\underline{r} \leq \sum_{a_1 \in A_1} \pi_1(a_1) R(s,a_1,a_2^*) \leq \overline{r}$.
Similarly, from the assumption of the lemma, we have $L \leq \alpha_{a_1,o}(s') \leq U$ and hence $L \leq \sum_{(a_1,o,s') \in A_1 \times O \times S} \pi_1(a_1) T(o, s' \,|\, s, a_1, a_2^*) \alpha_{a_1,o}(s') \leq U$.
We will now prove that $\mathsf{valcomp}(\pi_1,\overline{\alpha})(s) \leq U$ (the proof of $\mathsf{valcomp}(\pi_1,\overline{\alpha})(s) \geq L$ is analogous):
\begin{align*}
& \mathsf{valcomp}(\pi_1,\overline{\alpha})(s) = \\
& = \sum_{a_1 \in A_1} \pi_1(a_1) R(s,a_1,a_2^*) + \gamma \!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \sum_{(a_1,o,s') \in A_1 \times O \times S} \!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \pi_1(a_1) T(o, s' \,|\, s, a_1, a_2^*) \alpha_{a_1,o}(s') \\
& \leq \ \overline{r} + \gamma U \ = \ \overline{r} + \gamma \frac{\overline{r}}{1-\gamma} \ = \ U \ \text{.}
\end{align*}
The $\delta$-Lipschitz continuity of $\mathsf{valcomp}(\pi_1,\overline{\alpha})$ then follows directly from Lemma~\ref{thm:osposg:linbound-lipschitz}.
\end{proof}
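The algebraic step $\overline{r} + \gamma U = U$ in the displayed chain relies only on $U = \overline{r}/(1-\gamma)$; a one-line numeric check with arbitrary $\gamma$ and $\overline{r}$:

```python
# Check of the final step r_max + gamma * U = U, which uses only
# U = r_max / (1 - gamma); gamma and r_max are arbitrary.
gamma, r_max = 0.8, 7.0
U = r_max / (1 - gamma)
assert abs(r_max + gamma * U - U) < 1e-9
```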
\HVLipschitzConvex*
\begin{proof}
According to Definition~\ref{def:osposg:H-valcomp}, operator $H$ can be rewritten as a supremum over all possible value compositions:
\begin{subequations}
\begin{align}
& [HV](b) = \max_{\pi_1 \in \Pi_1} \sup_{\overline{\alpha} \in \Gamma^{A_1 \times O}} \mathsf{valcomp}(\pi_1, \overline{\alpha})(b) = \!\!\!\!\!\!\!\!\! \sup_{(\pi_1,\overline{\alpha}) \in \Pi_1 \times \Gamma^{A_1 \times O}} \!\!\!\!\!\!\!\!\! \mathsf{valcomp}(\pi_1, \overline{\alpha})(b) \ \text{, and} \\
& [HV](b) = \sup_{\alpha \in \Gamma'} \alpha(b) \qquad \Gamma' = \left\lbrace \mathsf{valcomp}(\pi_1,\overline{\alpha}) \;|\; \pi_1 \in \Pi_1, \overline{\alpha} \in \Gamma^{A_1 \times O} \right\rbrace \ \text{.}
\label{eq:osposg:hv-pointwise}
\end{align}
\end{subequations}
In Equation~\eqref{eq:osposg:hv-pointwise}, $HV$ is represented as a point-wise supremum from a set $\Gamma'$ of linear functions $\mathsf{valcomp}(\pi_1,\overline{\alpha})$, which is a convex continuous function (see \thref{thm:osposg:sup-convex}).
Moreover, in case $V$ is $\delta$-Lipschitz continuous, the set $\Gamma$ representing $V$ can be assumed to contain only $\delta$-Lipschitz continuous linear functions.
According to Lemma~\ref{thm:osposg:valcomp-lipschitz}, $\mathsf{valcomp}(\pi_1,\overline{\alpha})$ is $\delta$-Lipschitz continuous for every $\pi_1 \in \Pi_1$ and $\alpha_{a_1,o} \in \Gamma$.
Hence, $\Gamma'$ contains $\delta$-Lipschitz continuous linear functions only and the point-wise maximum $HV$ over $\Gamma'$ is $\delta$-Lipschitz continuous.
\end{proof}
\Bellman*
\begin{proof}
We first prove the equality of \eqref{eq:osposg:H-maxmin} and \eqref{eq:osposg:H-minmax}.
Let us define a payoff function $u: \Pi_1 \times \Pi_2 \rightarrow \mathbb{R}$ to be the objective of the maximin and minimax optimization in \eqref{eq:osposg:H-maxmin} and \eqref{eq:osposg:H-minmax}.
\begin{subequations}
\begin{equation}
u(\pi_1,\pi_2) = \E_{b,\pi_1,\pi_2}[R(s,a_1,a_2)] + \gamma \sum_{a_1,o} \mathbb{P}_{b,\pi_1,\pi_2}[a_1,o] \cdot V(\tau(b,a_1,\pi_2,o))
\end{equation}
After expanding the expectation $\E_{b,\pi_1,\pi_2}[R(s,a_1,a_2)]$ and expressing $V$ as a supremum over linear functions $\alpha \in \Gamma$, we get
\begin{align*}
u(\pi_1,\pi_2) &= \sum_{s,a_1,a_2} b(s) \pi_1(a_1) \pi_2(a_2 | s) R(s,a_1,a_2) \ + \\
& \qquad + \gamma \sum_{a_1,o} \mathbb{P}_{b,\pi_1,\pi_2}[a_1,o] \cdot \sup_{\alpha \in \Gamma} \sum_{s'} \tau(b,a_1,\pi_2,o)(s') \cdot \alpha(s') \numberthis\\
&= \sum_{s,a_1,a_2} b(s) \pi_1(a_1) \pi_2(a_2 | s) R(s,a_1,a_2) \ + \numberthis\label{eq:osposg:stage-utility}\\
& \qquad + \gamma \sum_{a_1,o} \pi_1(a_1) \cdot \sup_{\alpha \in \Gamma} \sum_{s,a_2,s'} b(s)\pi_2(a_2|s)T(o,s' \,|\, s,a_1,a_2)\alpha(s') \ \text{.}
\end{align*}
\end{subequations}
Note that the term $\mathbb{P}_{b,\pi_1,\pi_2}[a_1,o]$ cancels out after expanding $\tau(b,a_1,\pi_2,o)$ in Equation~\eqref{eq:osposg:stage-utility}.
We now show that von Neumann's minimax theorem~\citep{vonneumann1928-minimax,nikaido1953-minimax} applies to the game with utility function $u$ and strategy spaces $\Pi_1$ and $\Pi_2$ for player~1 and player~2, respectively.
Von Neumann's minimax theorem requires that the strategy spaces $\Pi_1$ and $\Pi_2$ are convex compact sets (which is clearly the case), and that the utility function $u$ (as characterized by Equation~\eqref{eq:osposg:stage-utility}) is continuous, convex in $\Pi_2$ and concave in $\Pi_1$.
We will now prove the latter and show that $u$ is a convex-concave utility function.
Clearly, for every $\pi_2 \in \Pi_2$, the function $u(\cdot, \pi_2): \Pi_1 \rightarrow \R$ (where $\pi_2$ is considered constant) is linear in $\pi_1$, and hence also concave.
The convexity of $u(\pi_1,\cdot): \Pi_2 \rightarrow \R$ (after fixing an arbitrary $\pi_1 \in \Pi_1$) is more involved.
As a weighted sum of convex functions with non-negative coefficients $\pi_1(a_1) \geq 0$ is also convex, it is sufficient to show that $f(\pi_2) = \sup_{\alpha \in \Gamma} \sum_{s,a_2,s'} b(s) \pi_2(a_2|s) T(o,s' \,|\, s,a_1,a_2) \alpha(s')$ is convex.
Observe that for every $\alpha \in \Gamma$, the expression $\sum_{s,a_2,s'} b(s) \pi_2(a_2|s) T(o,s' \,|\, s,a_1,a_2) \alpha(s')$ is linear in $\pi_2$ and, as a result, the supremum over such linear expressions in $\pi_2$ is convex in $\pi_2$ (see \thref{thm:osposg:sup-convex}).
According to von Neumann's minimax theorem $\max_{\pi_1 \in \Pi_1} \min_{\pi_2 \in \Pi_2} u(\pi_1,\pi_2) = \min_{\pi_2 \in \Pi_2} \max_{\pi_1 \in \Pi_1} u(\pi_1,\pi_2)$ which concludes the proof of equality of~\eqref{eq:osposg:H-maxmin} and~\eqref{eq:osposg:H-minmax}.
We now proceed by showing the equality of \eqref{eq:osposg:equiv-valcomp} and \eqref{eq:osposg:H-maxmin}.
By further rearranging Equation~\eqref{eq:osposg:stage-utility}, we get
\begin{align*}
u(\pi_1,\pi_2) &= \sup_{\overline{\alpha} \in \Gamma^{A_1 \times O}} \Big[ \sum_{s,a_1,a_2} b(s) \pi_1(a_1) \pi_2(a_2 | s) R(s,a_1,a_2) \ + \numberthis\label{eq:osposg:stage-utility-supoutside}\\
& \qquad\qquad + \gamma \sum_{a_1,o} \pi_1(a_1) \sum_{s,a_2,s'} b(s)\pi_2(a_2|s)T(o,s' \,|\, s,a_1,a_2)\alpha_{a_1,o}(s') \Big] \ \text{.}
\end{align*}
Let us define a game with strategy spaces $\Gamma^{A_1 \times O}$ and $\Pi_2$ and payoff function $u'_{\pi_1}: \Gamma^{A_1 \times O} \times \Pi_2 \rightarrow \R$ where $u'_{\pi_1}$ is the objective of the supremum in Equation~\eqref{eq:osposg:stage-utility-supoutside} (Equation~\eqref{eq:osposg:uprime-2} is an algebraic simplification of Equation~\eqref{eq:osposg:uprime-1}).
\begin{subequations}
\begin{align*}
u'_{\pi_1}(\overline{\alpha},\pi_2) &= \sum_{s,a_1,a_2} b(s) \pi_1(a_1) \pi_2(a_2 | s) R(s,a_1,a_2) \ + \numberthis\label{eq:osposg:uprime-1}\\
& \qquad\qquad + \gamma \sum_{a_1,o} \pi_1(a_1) \sum_{s,a_2,s'} b(s)\pi_2(a_2|s)T(o,s' \,|\, s,a_1,a_2)\alpha_{a_1,o}(s') \\
&= \sum_s b(s) \sum_{a_2} \pi_2(a_2|s) \sum_{a_1} \pi_1(a_1) \Big[ R(s,a_1,a_2) \ + \numberthis\label{eq:osposg:uprime-2}\\
& \qquad\qquad + \gamma \sum_{o,s'} T(o,s' \,|\, s,a_1,a_2)\alpha_{a_1,o}(s') \Big] \ \text{.}
\end{align*}
\end{subequations}
Plugging~\eqref{eq:osposg:uprime-2} into~\eqref{eq:osposg:stage-utility-supoutside}, we can write
\begin{equation}
\max_{\pi_1 \in \Pi_1} \min_{\pi_2 \in \Pi_2} u(\pi_1,\pi_2) = \max_{\pi_1 \in \Pi_1} \min_{\pi_2 \in \Pi_2} \sup_{\overline{\alpha} \in \Gamma^{A_1 \times O}} u'_{\pi_1}(\overline{\alpha},\pi_2) \ \text{.}
\end{equation}
To prove the equivalence of \eqref{eq:osposg:equiv-valcomp} and \eqref{eq:osposg:H-maxmin}, we need to show that the minimum and supremum can be swapped.
Since $u'_{\pi_1}$ is linear in both $\pi_2$ and $\overline{\alpha}$, $\Pi_2$ is a compact convex set and $\Gamma$ (and therefore also the set of mappings $\overline{\alpha} \in \Gamma^{A_1 \times O}$) is convex, it is possible to apply Sion's minimax theorem~\citep{sion1958-minimax} to get
\begin{equation}
\max_{\pi_1 \in \Pi_1} \min_{\pi_2 \in \Pi_2} \sup_{\overline{\alpha} \in \Gamma^{A_1 \times O}} u'_{\pi_1}(\overline{\alpha},\pi_2)
= \max_{\pi_1 \in \Pi_1} \sup_{\overline{\alpha} \in \Gamma^{A_1 \times O}} \min_{\pi_2 \in \Pi_2} u'_{\pi_1}(\overline{\alpha},\pi_2) \ \text{.}
\end{equation}
As $u'_{\pi_1}$ is linear in $\pi_2$ (for fixed $\pi_1$ and $\overline{\alpha}$), the minimum over $\pi_2$ is attained in pure strategies. Denote by $\hat{\pi}_2: S \rightarrow A_2$ a pure strategy of player~2 assigning action $\hat{\pi}_2(s)$ to be played in state $s$, and by $\hat{\Pi}_2$ the set of all pure strategies of player~2.
We now rewrite $u'_{\pi_1}$ to use pure strategies $\hat{\Pi}_2$ instead of randomized stage strategies $\Pi_2$.
First, in Equation~\eqref{eq:osposg:bellman-1}, we replace the minimization over $\Pi_2$ by minimization over the pure strategies $\hat{\Pi}_2$ and replace the expectation over actions of player~2 by using the deterministic action $\hat{\pi}_2(s)$ where appropriate.
Then, in Equation~\eqref{eq:osposg:bellman-2}, we leverage the fact that, unlike player~1, player~2 knows the state before having to act, and hence he can optimize his actions $\hat{\pi}_2(s)$ independently.
And, finally, in Equation~\eqref{eq:osposg:bellman-3}, we use Definition~\ref{def:osposg:val-composition}.
\begin{subequations}
\begin{align*}
& \max_{\pi_1 \in \Pi_1} \min_{\pi_2 \in \Pi_2} u(\pi_1,\pi_2) = \max_{\pi_1 \in \Pi_1} \sup_{\overline{\alpha} \in \Gamma^{A_1 \times O}} \min_{\pi_2 \in \Pi_2} u'_{\pi_1}(\overline{\alpha},\pi_2) = \\
& \qquad = \max_{\pi_1 \in \Pi_1} \sup_{\overline{\alpha} \in \Gamma^{A_1 \times O}} \min_{\hat{\pi}_2 \in \hat{\Pi}_2} \sum_s b(s) \sum_{a_1} \pi_1(a_1) \Big[ R(s,a_1,\hat{\pi}_2(s)) \ + \numberthis\label{eq:osposg:bellman-1}\\
& \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad + \gamma \sum_{o,s'} T(o,s' \mid s,a_1,\hat{\pi}_2(s))\alpha_{a_1,o}(s') \Big] \\
& \qquad = \max_{\pi_1 \in \Pi_1} \sup_{\overline{\alpha} \in \Gamma^{A_1 \times O}} \sum_s b(s) \min_{\hat{\pi}_2(s) \in A_2} \sum_{a_1} \pi_1(a_1) \Big[ R(s,a_1,\hat{\pi}_2(s)) \ + \numberthis\label{eq:osposg:bellman-2}\\
& \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad + \gamma \sum_{o,s'} T(o,s' \mid s,a_1,\hat{\pi}_2(s))\alpha_{a_1,o}(s') \Big] \\
& \qquad = \max_{\pi_1 \in \Pi_1} \sup_{\overline{\alpha} \in \Gamma^{A_1 \times O}} \sum_s b(s) \cdot \mathsf{valcomp}(\pi_1,\overline{\alpha})(s) \\
& \qquad = \max_{\pi_1 \in \Pi_1} \sup_{\overline{\alpha} \in \Gamma^{A_1 \times O}} \mathsf{valcomp}(\pi_1,\overline{\alpha})(b) \ \text{.} \numberthis\label{eq:osposg:bellman-3}
\end{align*}
\end{subequations}
This concludes the proof of the equality of Equations~\eqref{eq:osposg:equiv-valcomp} and~\eqref{eq:osposg:H-maxmin}.
\end{proof}
\ContractivityLemma*
\begin{proof}
By deviating from the equilibrium strategy profiles in stage games $[HV](b)$ and $[HW](b)$, the players can only worsen their payoffs.
Therefore, we have
\begin{align*}
& u^{V,b}(\pi_1^W,\pi_2^V) \leq u^{V,b}(\pi_1^V,\pi_2^V) = [HV](b) \leq \numberthis\\
& \qquad\qquad\qquad \leq [HW](b) = u^{W,b}(\pi_1^W,\pi_2^W) \leq u^{W,b}(\pi_1^W,\pi_2^V) \ \text{.}
\end{align*}
We can thus bound the difference $[HW](b) - [HV](b)$ by $u^{W,b}(\pi_1^W,\pi_2^V) - u^{V,b}(\pi_1^W,\pi_2^V)$ where, according to Definition~\ref{def:osposg:stage-game},
\begin{align*}
&u^{W,b}(\pi_1^W,\pi_2^V) - u^{V,b}(\pi_1^W,\pi_2^V) = \numberthis\label{eq:thm:osposg:point-contractivity}\\
&\qquad\qquad = \gamma \sum_{a_1,o} \mathbb{P}_{b,\pi_1^W,\pi_2^V}[a_1,o] \cdot [ W(\tau(b,a_1,\pi_2^V,o)) - V(\tau(b,a_1,\pi_2^V,o)) ] \ \text{.}
\end{align*}
Since every $W(\tau(b,a_1,\pi_2^V,o)) - V(\tau(b,a_1,\pi_2^V,o))$ considered in Equation~\eqref{eq:thm:osposg:point-contractivity} with non-zero probability $\mathbb{P}_{b,\pi_1^W,\pi_2^V}[a_1,o]$ is assumed to be bounded by $C$, the expectation over such differences is likewise bounded by $C$.
It follows that $u^{W,b}(\pi_1^W,\pi_2^V) - u^{V,b}(\pi_1^W,\pi_2^V) \leq \gamma C$, and hence we also have $[HW](b) - [HV](b) \leq \gamma C$.
\end{proof}
\FixpointLemma*
\begin{proof}
\begin{subequations}
According to \thref{thm:osposg:gamma-independent}, the Bellman operator does not depend on the set $\Gamma$ used to represent the value function $V^*$.
We will therefore assume that the set $\Gamma$ used to represent $V^*$ is
\begin{equation}
\Gamma = \mathsf{Conv} \lbrace \val^{\sigma_1} \mid \sigma_1 \in \Sigma_1 \rbrace \ \text{.}
\end{equation}
To prove the equivalence of value functions $V^*$ and $HV^*$ we consider that these functions are represented as follows:
\begin{align}
V^*(b) &= \sup_{\alpha \in \Gamma_{V^*}} \alpha(b) & \Gamma_{V^*} &= \left\lbrace \val^{\sigma_1} \mid \sigma_1 \in \Sigma_1 \right\rbrace \\
[HV^*](b) &= \sup_{\alpha \in \Gamma_{HV^*}} \alpha(b) & \Gamma_{HV^*} &= \left\lbrace \mathsf{valcomp}(\pi_1, \overline{\alpha}) \mid \pi_1 \in \Pi_1, \overline{\alpha} \in \Gamma^{A_1 \times O} \right\rbrace \ \text{.} \label{thm:osposg:fixpoint:gamma-hv}
\end{align}
To prove the equivalence of $V^*$ and $HV^*$, it suffices to show that for every $\alpha \in \Gamma_{V^*}$ there exists $\alpha' \in \Gamma_{HV^*}$ such that $\alpha' \geq \alpha$, and vice versa.
First, from Proposition~\ref{thm:osposg:decomposition}, Lemma~\ref{thm:osposg:composition} and Definition~\ref{def:osposg:val-composition}, it follows that every strategy $\sigma_1 \in \Sigma_1$ can be represented as a composite strategy $\mathsf{comp}(\pi_1, \overline{\zeta})$, and we have
\begin{equation}
\val^{\sigma_1} = \val^{\mathsf{comp}(\pi_1,\overline{\zeta})} = \mathsf{valcomp}(\pi_1, \overline{\alpha}^{\overline{\zeta}})
\end{equation}
where $\alpha^{\overline{\zeta}}_{a_1,o} = \val^{\overline{\zeta}_{a_1,o}} \in \Gamma$.
Hence $\val^{\sigma_1} = \mathsf{valcomp}(\pi_1, \overline{\alpha}^{\overline{\zeta}}) \in \Gamma_{HV^*}$.
The opposite direction of the proof, i.e., that for every $\alpha \in \Gamma_{HV^*}$ there exists $\alpha' \in \Gamma_{V^*}$ such that $\alpha' \geq \alpha$, is more involved.
Let $\alpha = \mathsf{valcomp}(\pi_1, \overline{\alpha}) \in \Gamma_{HV^*}$ be arbitrary.
Since $\Gamma = \mathsf{Conv} \lbrace \val^{\sigma_1} \mid \sigma_1 \in \Sigma_1 \rbrace$, each $\alpha_{a_1,o}$ can be written as a convex combination of finitely many elements of $\lbrace \val^{\sigma_1} \mid \sigma_1 \in \Sigma_1 \rbrace$.
\begin{equation}
\alpha_{a_1,o} = \sum_{i=1}^K \lambda_i^{a_1,o} \val^{\sigma_1^{a_1,o,i}} \label{thm:osposg:fixpoint:cvx-combination}
\end{equation}
Let us form a vector of strategies $\overline{\zeta} \in (\Sigma_1)^{A_1 \times O}$ such that each $\zeta_{a_1,o}$ is a convex combination of strategies $\sigma_1^{a_1,o,i}$ using coefficients from Equation~\eqref{thm:osposg:fixpoint:cvx-combination},
\begin{equation}
\zeta_{a_1,o} = \sum_{i=1}^K \lambda_i^{a_1,o} \sigma_1^{a_1,o,i} \ \text{.}
\end{equation}
We can interpret strategy $\zeta_{a_1,o}$ as player~1 first randomly choosing among strategies $\sigma_1^{a_1,o,i}$, and then following the chosen strategy in the rest of the game.
If player~2 knew which strategy $\sigma_1^{a_1,o,i}$ had been chosen, he would be able to achieve utility $\val^{\sigma_1^{a_1,o,i}}$.
However, he has no access to this information, and hence $\val^{\zeta_{a_1,o}} \geq \sum_{i=1}^K \lambda_i^{a_1,o} \val^{\sigma_1^{a_1,o,i}} = \alpha_{a_1,o}$.
Now, since $\val^{\zeta_{a_1,o}} \geq \alpha_{a_1,o}$ for every $(a_1,o) \in A_1 \times O$, we have
\begin{equation}
\alpha' = \val^{\mathsf{comp}(\pi_1,\overline{\zeta})} \geq \mathsf{valcomp}(\pi_1,\overline{\alpha}) = \alpha
\end{equation}
\end{subequations}
which concludes the proof.
\end{proof}
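The key inequality above---the value of a mixture of strategies is at least the mixture of their values, because the minimizing player must commit to a single response against the mixture---can be illustrated on a one-shot zero-sum game. The following sketch is our own illustration; the payoff matrix is made up.

```python
import numpy as np

# Rows: player 1 pure strategies, columns: player 2 actions;
# entries are player 1's payoff (all numbers arbitrary).
U = np.array([[1.0, 4.0, 0.0],
              [3.0, 0.0, 2.0]])

def value(sigma):
    """Utility a fixed player-1 strategy guarantees (player 2 minimizes)."""
    return float(np.min(sigma @ U))

lam = np.array([0.5, 0.5])                      # mixing coefficients lambda_i
pures = np.eye(2)
lhs = value(lam @ pures)                        # value of the mixed strategy
rhs = float(lam @ [value(p) for p in pures])    # mixture of the values
assert lhs >= rhs                               # min of a sum >= sum of mins
```

Here the mixture guarantees strictly more than the mixture of guarantees, since no single column is a best response to both pure strategies.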
\LP*
\begin{proof}
Since the set $\Gamma$ is convex and compact, the dynamic programming operator $H$ can be used:
\begin{subequations}
\begin{align*}
[HV](b) &= \max_{\pi_1 \in \Pi_1} \sup_{\overline{\alpha} \in \Gamma^{A_1 \times O}} \mathsf{valcomp}(\pi_1,\overline{\alpha})(b) \numberthis\\
&= \max_{\pi_1 \in \Pi_1} \max_{\overline{\alpha} \in \Gamma^{A_1 \times O}} \mathsf{valcomp}(\pi_1,\overline{\alpha})(b) \numberthis\label{eq:osposg:max-composition-tolp-1}\\
&= \max_{\pi_1 \in \Pi_1} \max_{\overline{\alpha} \in \Gamma^{A_1 \times O}} \sum_{s \in S} b(s) \cdot \mathsf{valcomp}(\pi_1,\overline{\alpha})(s) \numberthis\label{eq:osposg:max-composition-tolp-2}\\
&= \max_{\pi_1 \in \Pi_1} \max_{\overline{\alpha} \in \Gamma^{A_1 \times O}} \sum_{s \in S} b(s) \cdot \min_{a_2} \Bigg[ \sum_{a_1} \pi_1(a_1) R(s,a_1,a_2) \ + \label{eq:osposg:max-composition-tolp}\numberthis\\
& \qquad\qquad\qquad\qquad\qquad + \gamma \!\!\!\!\!\!\!\!\!\!\!\! \sum_{(a_1,o,s') \in A_1 \times O \times S} \!\!\!\!\!\!\!\!\!\!\!\! T(o,s' \,|\, s,a_1,a_2)\pi_1(a_1)\alpha^{a_1,o}(s') \Bigg] \ \text{.}
\end{align*}
\end{subequations}
Equation~\eqref{eq:osposg:max-composition-tolp-1} follows from the fact that $\mathsf{valcomp}(\pi_1,\overline{\alpha})$ is continuous in $\overline{\alpha}$, and $\Gamma$ is a compact set (and hence also $\Gamma^{A_1 \times O}$ is).
Equation~\eqref{eq:osposg:max-composition-tolp-2} represents the value of the linear function $\mathsf{valcomp}(\pi_1,\overline{\alpha})$ as a convex combination of its values at the vertices of the simplex $\Delta(S)$, and, finally, Equation~\eqref{eq:osposg:max-composition-tolp} rewrites $\mathsf{valcomp}(\pi_1,\overline{\alpha})(s)$ using Definition~\ref{def:osposg:val-composition}.
Equation~\eqref{eq:osposg:max-composition-tolp} can be directly formalized as a mathematical program \eqref{eq:osposg:max-composition-nlp} whose solution is $[HV](b)$.
Indeed, the minimization over $a_2 \in A_2$ can be rewritten as a set of constraints on the variable $V(s)$ (one for each action $a_2 \in A_2$ of player~2) in Equation~\eqref{eq:osposg:max-composition-nlp:br}.
The convex hull of the set $\lbrace \alpha_1, \ldots, \alpha_k \rbrace$ is represented by~\eqref{eq:osposg:max-composition-nlp:convexification}, where the variables $\lambda_i^{a_1,o}$ are the coefficients of the convex combination.
The stage strategy $\pi_1$ is characterized by~\eqref{eq:osposg:max-composition-nlp:pi-sum} and~\eqref{eq:osposg:max-composition-nlp:pi-positive}.
\begin{subequations}
\label{eq:osposg:max-composition-nlp}
\begin{align*}
\max_{\pi_1,\lambda,\overline{\alpha},V} \ & \sum_{s \in S} b(s) \cdot V(s) \numberthis\\
\text{s.t.} \ \ \ & V(s) \leq \sum_{a_1 \in A_1} \pi_1(a_1) R(s,a_1,a_2) \ + & \forall (s,a_2) \in S \times A_2 \numberthis\label{eq:osposg:max-composition-nlp:br}\\[-0.5em]
& \qquad\qquad\qquad + \gamma \!\!\!\!\!\!\!\!\!\!\!\! \sum_{(a_1,o,s') \in A_1 \times O \times S} \!\!\!\!\!\!\!\!\!\!\!\! T(o,s' \,|\, s,a_1,a_2) \pi_1(a_1) \alpha^{a_1,o}(s') \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \\
& \alpha^{a_1,o}(s') = \sum_{i=1}^k \lambda_i^{a_1,o} \cdot \alpha_i(s') & \forall (a_1,o,s') \in A_1 \times O \times S \numberthis\label{eq:osposg:max-composition-nlp:convexification}\\[-0.5em]
& \sum_{i=1}^k \lambda_i^{a_1,o} = 1 & \forall (a_1,o) \in A_1 \times O \numberthis\\
& \sum_{a_1 \in A_1} \pi_1(a_1) = 1 \numberthis\label{eq:osposg:max-composition-nlp:pi-sum}\\
& \pi_1(a_1) \geq 0 & \forall a_1 \in A_1 \numberthis\label{eq:osposg:max-composition-nlp:pi-positive}\\
& \lambda_i^{a_1,o} \geq 0 & \forall (a_1,o) \in A_1 \times O, 1 \leq i \leq k \numberthis\label{eq:osposg:max-composition-nlp:last}\\
\end{align*}
\end{subequations}
This mathematical program is not linear since it contains products of variables $\pi_1(a_1) \cdot \alpha^{a_1,o}(s')$.
It can, however, be linearized by introducing the substitution $\hat{\alpha}^{a_1,o}(s') = \pi_1(a_1) \alpha^{a_1,o}(s')$ and $\hat{\lambda}_i^{a_1,o} = \pi_1(a_1) \lambda_i^{a_1,o}$ to obtain~\eqref{eq:osposg:max-composition-lp}.
\end{proof}
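The substitution that linearizes the program can be sanity-checked numerically: with $\hat{\alpha} = \pi_1 \cdot \alpha$, the bilinear term becomes a linear variable, and the original variables are recoverable whenever $\pi_1(a_1) > 0$. A minimal sketch (all numbers are made up):

```python
# pi1 plays the role of pi_1(a_1); alpha lists alpha^{a_1,o}(s') over states s'.
pi1 = 0.4
alpha = [1.5, -0.25, 0.0]
hat_alpha = [pi1 * a for a in alpha]            # substituted (linear) variables

# The bilinear terms pi_1(a_1) * alpha(s') coincide with hat_alpha(s').
bilinear = [pi1 * a for a in alpha]
assert all(abs(b - h) < 1e-12 for b, h in zip(bilinear, hat_alpha))

# When pi_1(a_1) > 0, dividing back recovers the original alpha-variables.
recovered = [h / pi1 for h in hat_alpha]
assert all(abs(r - a) < 1e-12 for r, a in zip(recovered, alpha))
```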
\UBIsUpperBound*
\begin{proof}
The inequality $\ov \leq V_{\mathrm{HSVI1}}^\Upsilon$ follows directly from Equation~\eqref{eq:osposg:projection} (taking $b' := b$).
Proving $V^*(b) \leq \ov(b)$ is more involved.
Let $b'$ be the minimizer from the definition of $\ov(b)$, i.e., $\ov(b) = V_{\mathrm{HSVI1}}^\Upsilon(b') + \delta \| b - b' \|_1$.
By definition of $V_{\mathrm{HSVI1}}^\Upsilon$, this $b'$ can be represented as a convex combination $\sum_i \lambda_i b_i = b'$ for which $\sum_i \lambda_i y_i = V_{\mathrm{HSVI1}}^\Upsilon(b')$.
We thus have
\begin{equation}
\ov(b) = \sum_{i=1}^k \lambda_i y_i + \delta \left\| b - b' \right\|_1 .
\end{equation}
Our assumptions imply that every pair $(b_i,y_i)$ satisfies $V^*(b_i) \leq y_i$.
Combining this observation with the fact that $V^*$ is convex and $\delta$-Lipschitz continuous (Lemma~\ref{thm:osposg:vs-convex} and Proposition~\ref{thm:osposg:value-lipschitz}), we have
\begin{align*}
& V^*(b) \ \leq \ V^* \left( b' \right) + \delta \left\| b - b' \right\|_1 \ = \ V^* \left( \sum_i \lambda_i b_i \right) + \delta \left\| b - b' \right\|_1 \leq \\
& \leq \sum_{i=1}^k \lambda_i V^*(b_i) + \delta \left\| b - b' \right\|_1 \leq \sum_{i=1}^k \lambda_i y_i + \delta \left\| b - b' \right\|_1 = \ov(b) \ \text{.}
\end{align*}
Finally, let us prove that $\ov$ is $\delta$-Lipschitz continuous.
Let us consider beliefs $b_1, b_2 \in \Delta(S)$.
Without loss of generality, assume that $\ov(b_1) \geq \ov(b_2)$.
Let $b_{\argmin}$ be the minimizer of $\ov(b_2)$, i.e.,
\begin{equation}
b_{\argmin} = \argmin_{b'} [ V_{\mathrm{HSVI1}}^\Upsilon(b') + \delta \| b_2 - b' \|_1 ] \text{.}
\end{equation}
By the triangle inequality, we have
\begin{align*}
& \ov(b_1) = \\
& = \min_{b' \in \Delta(S)} [ V_{\mathrm{HSVI1}}^\Upsilon(b') + \delta \| b_1 - b' \|_1 ] \leq V_{\mathrm{HSVI1}}^\Upsilon(b_{\argmin}) + \delta \| b_1 - b_{\argmin} \|_1 \leq \\
& \leq [ V_{\mathrm{HSVI1}}^\Upsilon(b_{\argmin}) + \delta \| b_2 - b_{\argmin} \|_1 ] + \delta \| b_1 - b_2 \|_1 = \ov(b_2) + \delta \| b_1 - b_2 \|_1
\end{align*}
which completes the proof.
\end{proof}
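The lower $\delta$-Lipschitz envelope used for $\ov$ can also be explored numerically. The sketch below (our own illustration, with an arbitrary made-up $V$ and $\delta$) discretizes the belief simplex over two states, $b = (p, 1-p)$, computes $\ov(b) = \min_{b'} [ V(b') + \delta \| b - b' \|_1 ]$, and checks the two properties established above: the envelope lower-bounds $V$ and is $\delta$-Lipschitz.

```python
import itertools
import numpy as np

delta = 2.0
grid = np.linspace(0.0, 1.0, 101)        # beliefs b = (p, 1 - p)
V = 5.0 * np.abs(np.sin(8.0 * grid))     # an arbitrary, highly oscillating V

def ov(p):
    # For two states, ||b - b'||_1 = |p - q| + |(1-p) - (1-q)| = 2 |p - q|.
    return min(V[i] + delta * 2.0 * abs(p - q) for i, q in enumerate(grid))

env = np.array([ov(p) for p in grid])
assert np.all(env <= V + 1e-12)          # the envelope lower-bounds V
for p, q in itertools.combinations(grid[::10], 2):
    # delta-Lipschitz continuity in the L1 distance (triangle inequality)
    assert abs(ov(p) - ov(q)) <= delta * 2.0 * abs(p - q) + 1e-9
```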
\LBUpdatesPreserveStuff*
\begin{proof}
Initially, the value function $\uv$ satisfies both conditions.
Indeed, the set $\Gamma$ contains only the value $\val^{\sigma_1^{\mathrm{unif}}}$ of the uniform strategy $\sigma_1^{\mathrm{unif}}$, i.e., $\uv(b) = \val^{\sigma_1^{\mathrm{unif}}}(b)$ for every belief $b \in \Delta(S)$.
The function $\val^{\sigma_1^{\mathrm{unif}}}$ is the value of a valid strategy $\sigma_1^{\mathrm{unif}}$ of player~1---hence it is $\delta$-Lipschitz continuous (Lemma~\ref{thm:osposg:strategy-value-lipschitz}) and lower bounds $V^*$.
Assume that every $\alpha$-vector in the set $\Gamma$ is $\delta$-Lipschitz continuous, and that for each $\alpha \in \Gamma$ there exists a strategy $\sigma_1 \in \Sigma_1$ with $\val^{\sigma_1} \geq \alpha$ (both properties hold for the initial $\uv$).
Let $\mathsf{valcomp}(\pi_1^{\mathrm{LB}},\overline{\alpha}^{\mathrm{LB}})$ be the value composition from Equation~\eqref{eq:osposg:pbupdate-lb} obtained when performing the point-based update of $\uv$ by solving $[H\uv](b)$.
We will now show that the refined function $V_{\mathrm{LB}}^{\Gamma'}$ represented by the set $\Gamma' = \Gamma \cup \lbrace \mathsf{valcomp}(\pi_1^{\mathrm{LB}},\overline{\alpha}^{\mathrm{LB}}) \rbrace$ satisfies both properties, and hence any sequence of point-based updates of $\uv$ preserves them.
\begin{compactenum}[(1)]
\item By Lemma~\ref{thm:osposg:valcomp-lipschitz}, $\mathsf{valcomp}(\pi_1^{\mathrm{LB}},\overline{\alpha}^{\mathrm{LB}})$ is $\delta$-Lipschitz continuous (and thus so is the value function $V_{\mathrm{LB}}^{\Gamma'}$ represented by the set $\Gamma' = \Gamma \cup \lbrace \mathsf{valcomp}(\pi_1^{\mathrm{LB}},\overline{\alpha}^{\mathrm{LB}}) \rbrace$).
\item Each $\alpha$-vector in $\Gamma$ forms a lower bound on the value of some strategy of player~1.
Since $\overline{\alpha}^{\mathrm{LB}} \in \Gamma^{A_1 \times O}$, every $\alpha^{\mathrm{LB}}_{a_1,o}$ lower bounds the value of some strategy of player~1.
The fact that $\mathsf{valcomp}(\pi_1^{\mathrm{LB}},\overline{\alpha}^{\mathrm{LB}})$ is also a lower bound follows from Lemma~\ref{thm:osposg:gen-composition}---and hence every $\alpha$-vector from the set $\Gamma' = \Gamma \cup \lbrace \mathsf{valcomp}(\pi_1^{\mathrm{LB}},\overline{\alpha}^{\mathrm{LB}}) \rbrace$ is a lower bound on $V^*$.
Hence also $V_{\mathrm{LB}}^{\Gamma'}(b)=\sup_{\alpha \in \Gamma'} \alpha(b) \leq V^*(b)$.
\end{compactenum}
\end{proof}
\UBPreservesStuff*
\begin{proof}
$\ov$ has been defined as a lower $\delta$-Lipschitz envelope of $V_{\mathrm{HSVI1}}^\Upsilon$, hence it is $\delta$-Lipschitz continuous (Lemma~\ref{thm:osposg:projection}).
We will therefore focus only on the property (2).
Since the upper bound is initialized by a solution of a perfect information variant of the game, we have that $y_i \geq V^*(b_i)$ for every $(b_i,y_i)$ from the initial set $\Upsilon$ (Equation~\eqref{eq:osposg:upsilon-initial}).
Hence, applying Lemma~\ref{thm:osposg:projection}, $\ov$ is an upper bound on $V^*$.
We will now show that if $y_i \geq V^*(b_i)$ holds for every $(b_i,y_i) \in \Upsilon$ (and $\ov$ is thus an upper bound on $V^*$), the application of a point-based update in any belief yields a set $\Upsilon'$ such that $y_i \geq V^*(b_i)$ also holds for every $(b_i,y_i) \in \Upsilon'$---and the resulting value function $V_{\mathrm{UB}}^{\Upsilon'}$ is therefore an upper bound on $V^*$ as well.
Since $\ov \geq V^*$, the utility function of any stage game satisfies $u^{\ov,b}(\pi_1,\pi_2) \geq u^{V^*,b}(\pi_1,\pi_2)$ for every $b \in \Delta(S)$, $\pi_1 \in \Pi_1$ and $\pi_2 \in \Pi_2$.
This implies that $[H\ov](b) \geq [HV^*](b) = V^*(b)$.
We already know that $y_i \geq V^*(b_i)$ holds for $(b_i,y_i) \in \Upsilon$, and now we have $[H\ov](b) \geq V^*(b)$.
Therefore, for every $(b_i,y_i) \in \Upsilon \cup \lbrace (b, [H\ov](b)) \rbrace$, we have $y_i \geq V^*(b_i)$, and applying Lemma~\ref{thm:osposg:projection}, we conclude that the value function $V_{\mathrm{UB}}^{\Upsilon'}$ is an upper bound on $V^*$.
\end{proof}
\ExcessContractivity*
\begin{proof}
Since $\uv \leq V^* \leq \ov$, it holds that $[H\uv](b_t) \leq [H\ov](b_t)$.
Applying Lemma~\ref{thm:osposg:point-contractivity} with $C=\rho(t+1)$ implies that when the beliefs $\tau(b_t,a_1,\pi_2^{\mathrm{LB}},o)$ satisfy
\begin{equation*}
\ov(\tau(b_t,a_1,\pi_2^{\mathrm{LB}},o)) - \uv(\tau(b_t,a_1,\pi_2^{\mathrm{LB}},o)) \leq \rho(t+1),
\end{equation*}
we have $[H\ov](b_t) - [H\uv](b_t) \leq \gamma \rho(t+1)$.
Fortunately, this assumption is satisfied in the considered situation---indeed, otherwise there would be some $(a_1,o) \in A_1 \times O$ with
\begin{align*}
\ov(\tau(b_t,a_1,\pi_2^{\mathrm{LB}},o)) - \uv(\tau(b_t,a_1,\pi_2^{\mathrm{LB}},o)) > \rho(t+1) ,
\end{align*}
i.e., one satisfying $\excess_{t+1}(\tau(b_t,a_1,\pi_2^{\mathrm{LB}},o)) > 0$, for which $\mathbb{P}_{b,\pi_1^{\mathrm{UB}},\pi_2^{\mathrm{LB}}}[a_1,o] > 0$.
This would contradict the assumption \begin{align*}
\mathbb{P}_{b,\pi_1^{\mathrm{UB}},\pi_2^{\mathrm{LB}}}[a_1^*,o^*] \, \cdot \, \excess_{t+1}(\tau(b_t,a_1^*,\pi_2^{\mathrm{LB}},o^*)) \leq 0 .
\end{align*}
Now, according to Equation~\eqref{eq:osposg:rho}, we have $[H\ov](b_t)-[H\uv](b_t) \leq \gamma \rho(t+1) = \rho(t) - 2 \delta D$.
It follows that the excess gap after performing the point-based update in $b_t$ satisfies
\begin{align}
\excess_t(b_t) & = \ov(b_t) - \uv(b_t) - \rho(t) \leq \gamma \rho(t+1) - \rho(t) \nonumber \\
& = [\rho(t) - 2 \delta D] - \rho(t) = -2 \delta D ,
\end{align}
which completes the proof of the first part of the lemma.
Now since the value functions $\uv$ and $\ov$ are $\delta$-Lipschitz continuous (Lemma~\ref{thm:osposg:lb-point} and Lemma~\ref{thm:osposg:ub-point}), the difference $\ov - \uv$ is $2\delta$-Lipschitz continuous.
Thus for every belief $b_t' \in \Delta(S)$ satisfying $\| b_t - b_t' \|_1 \leq D$, we have
\begin{equation}
\ov(b_t') - \uv(b_t') \leq \ov(b_t) - \uv(b_t) + 2\delta \| b_t - b_t' \|_1 \leq \ov(b_t) - \uv(b_t) + 2\delta D \ \text{.}
\end{equation}
Now since $\excess_t(b_t) \leq -2\delta D$, we have $\excess_t(b_t') \leq 0$ which proves the second part of the lemma.
\end{proof}
\MinJustified*
\begin{proof}
Assume for contradiction that $V(b) < L$ for some belief $b \in \Delta(S)$.
We pick $b = \argmin_{b' \in \Delta(S)} V(b')$ and denote $\varepsilon=L-V(b)$.
Now, using the utility $u^{V,b}$ from Definition~\ref{def:osposg:stage-game} and using our choice of $b$, we have
\begin{align*}
u^{V,b}(\pi_1,\pi_2) &= \E_{b,\pi_1,\pi_2}[R(s,a_1,a_2)] + \gamma \sum_{a_1,o} \mathbb{P}_{b,\pi_1,\pi_2}[a_1,o] V(\tau(b,a_1,\pi_2,o)) \\
&\geq \underline{r} + \gamma \sum_{a_1,o} \mathbb{P}_{b,\pi_1,\pi_2}[a_1,o] V(b) = \underline{r} + \gamma V(b) = \underline{r} + \gamma (L - \varepsilon)
\end{align*}
where $\underline{r}$ is the minimum reward in the game.
Since $L = \sum_{t=1}^\infty \gamma^{t-1} \underline{r} = \underline{r} + \sum_{t=2}^\infty \gamma^{t-1} \underline{r} = \underline{r} + \gamma L$, we also have that $u^{V,b}(\pi_1,\pi_2) \geq L - \gamma \varepsilon$.
Therefore it would have to also hold that
$[HV](b) = \max_{\pi_1 \in \Pi_1} \min_{\pi_2 \in \Pi_2} u^{V,b}(\pi_1,\pi_2) \geq L - \gamma \varepsilon > L - \varepsilon = V(b)$
which contradicts that $V$ is min-justified.
\end{proof}
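The geometric-series identity used above can be verified numerically; the following sketch uses arbitrary made-up values for the discount factor $\gamma$ and minimum reward $\underline{r}$.

```python
# L = sum_{t>=1} gamma^{t-1} * r satisfies L = r + gamma * L,
# i.e. L = r / (1 - gamma). Values of gamma and r are arbitrary.
gamma, r = 0.9, -3.0
L_truncated = sum(gamma ** (t - 1) * r for t in range(1, 2000))
L_closed = r / (1.0 - gamma)
assert abs(L_truncated - L_closed) < 1e-6       # truncated series matches
assert abs(L_closed - (r + gamma * L_closed)) < 1e-12   # L = r + gamma * L
```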
\MaxJustified*
\begin{proof}
Let $V$ be max-justified by $\Gamma$ and let us assume for contradiction that there exists $\alpha \in \Gamma$ and $s \in S$ such that $\alpha(s) > U$.
We pick $\alpha$ and $s$ such that $(\alpha,s) = \argmax_{\alpha \in \Gamma, s \in S} \alpha(s)$ and denote $\varepsilon = \alpha(s) - U$.
Using Definition~\ref{def:osposg:val-composition} and our choice of $(\alpha,s)$, we get the following for every $\pi_1 \in \Pi_1$ and $\overline{\alpha} \in \Gamma^{A_1 \times O}$:
\begin{align*}
& \mathsf{valcomp}(\pi_1,\overline{\alpha})(s) =\\
& = \min_{a_2 \in A_2} \sum_{a_1 \in A_1} \pi_1(a_1) \Big[ R(s,a_1,a_2) + \gamma \!\!\! \sum_{o,s' \in O \times S} \!\!\! T(o,s' \mid s,a_1,a_2) \alpha_{a_1,o}(s') \Big] \\
&\leq \min_{a_2 \in A_2} \sum_{a_1 \in A_1} \pi_1(a_1) \Big[ \overline{r} + \gamma \!\!\! \sum_{o,s' \in O \times S} \!\!\! T(o,s' \mid s,a_1,a_2) \alpha(s) \Big] \\
&= \min_{a_2 \in A_2} \left[ \overline{r} + \gamma \alpha(s) \right]
\end{align*}
where $\overline{r} = \max_{(s,a_1,a_2)} R(s,a_1,a_2)$ is the maximum reward in the game.
Since $U = \sum_{t=1}^\infty \gamma^{t-1}\overline{r} = \overline{r} + \sum_{t=2}^\infty \gamma^{t-1}\overline{r} = \overline{r} + \gamma U$, we have the following inequality for every $\pi_1 \in \Pi_1$ and $\overline{\alpha} \in \Gamma^{A_1 \times O}$
\begin{equation}\label{eq:thm:osposg:max-justified-bounded}
\mathsf{valcomp}(\pi_1,\overline{\alpha})(s) \leq \min_{a_2 \in A_2} [\overline{r} + \gamma \alpha(s)] = \overline{r} + \gamma (U+\varepsilon) = U + \gamma \varepsilon < U + \varepsilon = \alpha(s) \ \text{.}
\end{equation}
By Equation~\eqref{eq:thm:osposg:max-justified-bounded}, no value composition can satisfy $\mathsf{valcomp}(\pi_1,\overline{\alpha})(b_s) \geq \alpha(b_s)$ where $b_s(s)=1$ and $b_s(s')=0$ otherwise.
Consequently, no value composition can satisfy $\mathsf{valcomp}(\pi_1,\overline{\alpha})(b) \geq \alpha(b)$ for every belief $b \in \Delta(S)$ as required by Definition~\ref{def:osposg:max-justified}.
This contradicts our assumption and concludes the proof.
\end{proof}
\ConvMaxJustified*
\begin{proof}
Recall that $V$ is max-justified by $\Omega$ if 1) $V(b) = \sup_{\alpha \in \Omega} \alpha(b)$ and 2) for every $\alpha \in \Omega$ there exists $\pi_1 \in \Pi_1$ and $\overline{\alpha} \in \Omega^{A_1 \times O}$ such that $\mathsf{valcomp}(\pi_1,\overline{\alpha}) \geq \alpha$.
Let $V$ be a value function and suppose that the set $\Gamma$ satisfies properties 1) and 2).
We will now verify that these properties hold for $\mathsf{Conv}(\Gamma)$ as well.
By Proposition~\ref{thm:osposg:cvx-convexification}, we have that $\sup_{\alpha \in \mathsf{Conv}(\Gamma)} \alpha(b) = \sup_{\alpha \in \Gamma} \alpha(b)$.
Since the property 1) holds for $\Gamma$ and the value of $V$ remains unchanged, 1) holds for $\mathsf{Conv}(\Gamma)$ as well.
We will now prove 2) by showing that for every $\alpha \in \mathsf{Conv}(\Gamma)$, there exists $\pi_1 \in \Pi_1$ and $\overline{\alpha} \in \mathsf{Conv}(\Gamma)^{A_1 \times O}$ such that $\mathsf{valcomp}(\pi_1,\overline{\alpha}) \geq \alpha$.
First of all, let us write $\alpha \in \mathsf{Conv}(\Gamma)$ as a finite convex combination $\sum_{i=1}^k \lambda_i \alpha^i$ of $\alpha$-vectors $\alpha^i \in \Gamma$.
Using the assumption that $V$ is max-justified by $\Gamma$, we have that for every $\alpha^i$ there exist $\pi_1^i \in \Pi_1$ and $\overline{\alpha}^i \in \Gamma^{A_1 \times O}$ such that $\mathsf{valcomp}(\pi_1^{i},\overline{\alpha}^{i}) \geq \alpha^i$.
Denote $\pi_1(a_1) := \sum_{i=1}^k \lambda_i \pi_1^i(a_1)$ and, whenever $\pi_1(a_1) > 0$, $\alpha_{a_1,o} := \sum_{i=1}^k \lambda_i \pi_1^i(a_1) \alpha^i_{a_1,o} / \pi_1(a_1)$ (if $\pi_1(a_1) = 0$, the term $\pi_1(a_1)\alpha_{a_1,o}(s')$ vanishes and $\alpha_{a_1,o} \in \mathsf{Conv}(\Gamma)$ can be chosen arbitrarily).\footnote{Observe that $\alpha_{a_1,o} \in \mathsf{Conv}(\Gamma)$ since $\pi_1(a_1) = \sum_{i=1}^k \lambda_i \pi_1^i(a_1)$, so the coefficients $\lambda_i \pi_1^i(a_1) / \pi_1(a_1)$ are nonnegative and sum to 1.}
We claim that $\pi_1$ and $\overline{\alpha}$ witness that $V$ is max-justified by $\mathsf{Conv}(\Gamma)$.
Since $\pi_1^i$ and $\overline{\alpha}^i$ were chosen such that $\mathsf{valcomp}(\pi_1^i,\overline{\alpha}^i) \geq \alpha^i$, we have $\sum_{i=1}^k \lambda_i \mathsf{valcomp}(\pi_1^i,\overline{\alpha}^i) \geq \sum_{i=1}^k \lambda_i \alpha^i = \alpha$.
To finish the proof, we show that $\mathsf{valcomp}(\pi_1,\overline{\alpha}) \geq \sum_{i=1}^k \lambda_i \mathsf{valcomp}(\pi_1^i,\overline{\alpha}^i)$.
By Definition~\ref{def:osposg:val-composition}, we have, for every $s \in S$,
\begin{align*}
&\mathsf{valcomp}(\pi_1,\overline{\alpha})(s) = \\
& = \min_{a_2 \in A_2} \Big[ \sum_{a_1 \in A_1} \pi_1(a_1) R(s,a_1,a_2) \\
& \hspace{10em} + \gamma \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \sum_{(a_1,o,s') \in A_1 \times O \times S} \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \pi_1(a_1) T(o,s' \,|\, s,a_1,a_2) \alpha_{a_1,o}(s') \Big] \\
&= \min_{a_2 \in A_2} \Big[ \sum_{i=1}^k \sum_{a_1 \in A_1} \lambda_i \pi_1^i(a_1) R(s,a_1,a_2) + \\
& \hspace{10em} + \gamma \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \sum_{(a_1,o,s') \in A_1 \times O \times S} \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! T(o,s' \,|\, s,a_1,a_2) \sum_{i=1}^k \lambda_i \pi_1^i(a_1) \alpha^i_{a_1,o}(s') \Big] \\
&= \min_{a_2 \in A_2} \sum_{i=1}^k \lambda_i \Big[ \sum_{a_1 \in A_1} \pi_1^i(a_1) R(s,a_1,a_2) \\
& \hspace{10em} + \gamma \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \sum_{(a_1,o,s') \in A_1 \times O \times S} \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \pi_1^i(a_1) T(o,s' \,|\, s,a_1,a_2) \alpha^i_{a_1,o}(s') \Big] \\
&\geq \sum_{i=1}^k \lambda_i \min_{a_2 \in A_2} \Big[ \sum_{a_1 \in A_1} \pi_1^i(a_1) R(s,a_1,a_2) \\
& \hspace{10em} + \gamma \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \sum_{(a_1,o,s') \in A_1 \times O \times S} \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \pi_1^i(a_1) T(o,s' \,|\, s,a_1,a_2) \alpha^i_{a_1,o}(s') \Big] \\
&= \sum_{i=1}^k \lambda_i \mathsf{valcomp}(\pi_1^i, \overline{\alpha}^i)(s) \ \text{.}
\end{align*}
\end{proof}
\PlOneStrategy*
\begin{proof}
Let $b^{\mathrm{init}}$ and $\rho^{\mathrm{init}}$ be as in the proposition and assume that player~1 follows $\mathtt{Act}(b,\rho)$ for the first $K$ stages and then follows the uniformly-random strategy $\sigma_1^{\mathrm{unif}}$.
We denote this strategy as $\sigma_1^{b,\rho,K}$.
To obtain the result, we first consider an arbitrary belief $b \in \Delta(S)$ and gadget $\rho \in \Gamma$.
We use induction on $K$ to prove that the value of $\sigma_1^{b,\rho,K}$ satisfies $\val^{\sigma_1^{b,\rho,K}} \geq \rho - \gamma^K \cdot (U-L)$.
First, assume that $K=0$, i.e., player~1 plays the uniform strategy $\sigma_1^{\mathrm{unif}}$ immediately.
The value of the uniform strategy $\sigma_1^{\mathrm{unif}}$ satisfies $\val^{\sigma_1^{\mathrm{unif}}} \geq L$ (Proposition~\ref{thm:osposg:bounded}), while $\rho \leq U$ (Lemma~\ref{thm:osposg:max-justified-bounded}).
Hence $\val^{\sigma_1^{b,\rho,0}} \geq L \geq L - (U - \rho) = \rho - \gamma^0 (U-L)$.
Let $K \geq 1$ and assume that $\val^{\sigma_1^{b',\rho',K-1}} \geq \rho' - \gamma^{K-1} (U-L)$ for every belief $b' \in \Delta(S)$ and gadget $\rho' \in \Gamma$.
Observe that due to the recursive nature of the $\mathtt{Act}$ method, we can represent the strategy $\sigma_1^{b,\rho,K}$ as a composite strategy $\sigma_1^{b,\rho,K} = \mathsf{comp}(\pi_1^*, \overline{\zeta})$, where $\zeta_{a_1,o} = \sigma_1^{\tau(b,a_1,\pi_2,o), \alpha^*_{a_1,o},K-1}$ and $\pi^*_1$ comes from line~\ref{alg:osposg:cr:resolve} of Algorithm~\ref{alg:osposg:cr}.
(To ensure that $\overline \alpha^*$ and $\pi^*_1$ are correctly defined, the algorithm requires the existence of a value composition satisfying $\mathsf{valcomp}(\pi_1,\overline{\alpha}) \geq \rho$. This requirement holds since $V$ is max-justified by the set $\Gamma$ and $\rho \in \Gamma$.)
Applying Lemma~\ref{thm:osposg:composition}, the induction hypothesis, and Definition~\ref{def:osposg:val-composition} (in this order), we have $\val^{\mathsf{comp}(\pi_1^*,\overline{\zeta})}(s) = $
\begin{align*}
&= \min_{a_2 \in A_2} \sum_{a_1 \in A_1} \pi_1^*(a_1) \Big[ R(s,a_1,a_2) + \gamma \!\!\!\!\!\! \sum_{(o,s') \in O \times S} \!\!\!\!\!\! T(o,s' \mid s, a_1, a_2) \val^{\zeta_{a_1,o}}(s') \Big] \\
&\geq \min_{a_2 \in A_2} \sum_{a_1 \in A_1} \pi_1^*(a_1) \Big[ R(s,a_1,a_2) \ + \\
& \qquad\qquad + \gamma \!\!\!\!\!\! \sum_{(o,s') \in O \times S} \!\!\!\!\!\! T(o,s' \mid s, a_1, a_2) [ \alpha^*_{a_1,o}(s')- \gamma^{K-1}(U-L) ] \Big] \\
&= \min_{a_2 \in A_2} \sum_{a_1 \in A_1} \pi_1^*(a_1) \Big[ R(s,a_1,a_2) \ + \\
& \qquad\qquad + \gamma \!\!\!\!\!\! \sum_{(o,s') \in O \times S} \!\!\!\!\!\! T(o,s' \mid s, a_1, a_2) \alpha^*_{a_1,o}(s') \Big] - \gamma^K (U-L) \\
&= \mathsf{valcomp}(\pi_1^*,\overline{\alpha}^*)(s) - \gamma^K (U-L) \ \text{.}
\end{align*}
We thus have $\val^{\sigma_1^{b,\rho,K}} = \val^{\mathsf{comp}(\pi_1^*,\overline{\zeta})} \geq \mathsf{valcomp}(\pi_1^*,\overline{\alpha}^*) - \gamma^K (U-L)$.
Moreover, according to constraint on line~\ref{alg:osposg:cr:resolve} of Algorithm~\ref{alg:osposg:cr}, we also have $\mathsf{valcomp}(\pi_1^*,\overline{\alpha}^*) \geq \rho$.
As a result, we also have $\val^{\sigma_1^{b,\rho,K}} \geq \rho - \gamma^K (U-L)$. This completes the induction step.
Denote by $\sigma_1$ the strategy where player~1 follows $\mathtt{Act}(b^{\mathrm{init}},\rho^{\mathrm{init}})$ for an \emph{infinite} number of stages (i.e., as $K \rightarrow \infty$).
We then have
\begin{equation*}
\val^{\sigma_1} = \lim_{K \rightarrow \infty} \val^{\sigma_1^{b^{\mathrm{init}},\rho^{\mathrm{init}},K}} \geq \lim_{K \rightarrow \infty} [\rho^{\mathrm{init}} - \gamma^K (U-L)] = \rho^{\mathrm{init}}
\end{equation*}
which completes the proof.
\end{proof}
\PlTwoStrategy*
\begin{proof}
For the purposes of this proof, we will use
\begin{equation*}
\val_2(\sigma_2', b) = \sup_{\sigma_1 \in \Sigma_1} \E_{b,\sigma_1,\sigma_2'}[\Disc^\gamma]
\end{equation*}
to denote the value a strategy $\sigma_2'$ of player~2 guarantees when the belief of player~1 is $b$.
Similarly to the proof of Proposition~\ref{thm:osposg:p1-strategy}, we first consider strategies $\sigma_2^{b,K}$ where player~2 plays according to $\mathtt{Act}(b)$ for $K$ steps and then follows an arbitrary (e.g., uniform) strategy in the rest of the game; we show that $\val_2(\sigma_2^{b,K}, b) \leq V(b) + \gamma^K (U-L)$.
First, let $K=0$ and $b \in \Delta(S)$ be the belief of player~1.
By Proposition~\ref{thm:osposg:bounded}, player~1 cannot achieve higher utility than $U$.
Moreover, $V$ is min-justified, so we have $V(b) \geq L$ by Lemma~\ref{thm:osposg:min-justified-bounded}.
Therefore, player~1 cannot achieve higher utility than $\val_2(\sigma_2^{b,0},b) \leq U \leq U + V(b) - L = V(b) + \gamma^0 (U-L)$ when his belief is $b$.
Now let $K \geq 1$ be arbitrary.
By the induction hypothesis, we have that strategy $\sigma_2^{b',K-1}$ guarantees that the utility is at most $\val_2(\sigma_2^{b',K-1},b') \leq V(b') + \gamma^{K-1} (U-L)$ when the belief of player~1 is $b'$.
Let us evaluate the utility that $\sigma_2^{b,K}$ guarantees against arbitrary strategy $\sigma_1$ of player~1 in belief $b$.
In the first stage of the game, player~2 plays according to $\pi_2^*$ obtained on line~\ref{alg:osposg:cr2:resolve} of Algorithm~\ref{alg:osposg:cr2}, and the expected reward from the first stage is $\E_{b,\sigma_1,\pi_2^*}[R(s,a_1,a_2)]$.
If player~1 plays $a_1$ and observes $o$, he reaches an $(a_1,o)$-subgame where the belief of player~1 is $\tau(b,a_1,\pi_2^*,o)$ and player~2 plays $\sigma_2^{\tau(b,a_1,\pi_2^*,o),K-1}$.
Using the induction hypothesis, we know that player~1 is able to achieve utility of at most $\val_2(\sigma_2^{\tau(b,a_1,\pi_2^*,o),K-1}, \tau(b,a_1,\pi_2^*,o)) \leq V(\tau(b,a_1,\pi_2^*,o)) + \gamma^{K-1} (U-L)$.
This implies that an upper bound on the utility that $\sigma_1$ achieves against $\sigma_2^{b,K}$ (i.e., the strategy corresponding to player~2 following $\mathtt{Act}(b)$ for $K$ stages) is
\begin{align*}
&\E_{b,\sigma_1,\pi_2^*}[R(s,a_1,a_2)] + \gamma \E_{b,\sigma_1,\pi_2^*}[ V(\tau(b,a_1,\pi_2^*,o)) + \gamma^{K-1} (U-L) ] \\
&\quad = \E_{b,\sigma_1,\pi_2^*}[R(s,a_1,a_2)] + \\
&\quad\quad\quad + \gamma \!\!\!\!\!\!\!\! \sum_{(a_1,o) \in A_1 \times O} \!\!\!\!\!\!\!\! \mathbb{P}_{b,\sigma_1,\pi_2^*}[a_1,o] \cdot [ V(\tau(b,a_1,\pi_2^*,o)) + \gamma^{K-1} (U-L) ] \ \text{.}
\end{align*}
By allowing player~1 to maximize over $\sigma_1$, we get an upper bound on the value $\val_2(\sigma_2^{b,K},b)$ that the strategy $\sigma_2^{b,K}$ guarantees when the belief of player~1 is $b$:
\begin{align*}
&\val_2(\sigma_2^{b,K},b) \leq \\
&\quad \leq \sup_{\sigma_1 \in \Sigma_1} \Big[ \E_{b,\sigma_1,\pi_2^*}[R(s,a_1,a_2)] \ + \\
& \qquad\qquad\qquad\qquad + \gamma \!\!\!\!\!\!\!\!\! \sum_{(a_1,o) \in A_1 \times O} \!\!\!\!\!\!\!\!\! \mathbb{P}_{b,\sigma_1,\pi_2^*}[a_1,o] \cdot [ V(\tau(b,a_1,\pi_2^*,o)) + \gamma^{K-1} (U-L) ] \Big] \\
&\quad = \max_{\pi_1 \in \Pi_1} \Big[ \E_{b,\pi_1,\pi_2^*}[R(s,a_1,a_2)] + \\
&\quad\quad\quad + \gamma \!\!\!\!\!\!\!\!\! \sum_{(a_1,o) \in A_1 \times O} \!\!\!\!\!\!\!\!\! \mathbb{P}_{b,\pi_1,\pi_2^*}[a_1,o] \cdot V(\tau(b,a_1,\pi_2^*,o)) \Big] + \gamma^K (U-L) \\
&\quad = \max_{\pi_1 \in \Pi_1} u^{V,b}(\pi_1, \pi_2^*) + \gamma^K (U-L)
\end{align*}
Using the fact that $\pi_2^*$ is the optimal strategy in the stage game $[HV](b)$, the definition of the stage game's value, and the fact that $V$ is min-justified, we get
\begin{align*}
&\max_{\pi_1 \in \Pi_1} u^{V,b}(\pi_1, \pi_2^*) + \gamma^K (U-L) = \min_{\pi_2 \in \Pi_2} \max_{\pi_1 \in \Pi_1} u^{V,b}(\pi_1,\pi_2) + \gamma^K (U-L) \\
&\qquad\qquad = [HV](b) + \gamma^K (U-L) \leq V(b) + \gamma^K (U-L) \ \text{.}
\end{align*}
Hence, the utility player~1 with belief $b$ can achieve against player~2 who follows strategy $\sigma_2^{b,K}$ is at most $V(b) + \gamma^K (U-L)$, and we have $\val_2(\sigma_2^{b,K},b) \leq V(b) + \gamma^K (U-L)$ which completes the induction step.
Now, similarly to the proof of Proposition~\ref{thm:osposg:p1-strategy}, when player~2 follows $\mathtt{Act}(b^{\mathrm{init}})$ for \emph{infinitely} many stages (i.e., plays strategy $\sigma_2$ from the theorem), player~1 is able to achieve utility at most
\begin{equation*}
\val_2(\sigma_2, b^{\mathrm{init}}) = \lim_{K \rightarrow \infty} \val_2(\sigma_2^{b^{\mathrm{init}},K}, b^{\mathrm{init}}) \leq \lim_{K \rightarrow \infty} [ V(b^{\mathrm{init}}) + \gamma^K (U-L) ] = V(b^{\mathrm{init}})
\end{equation*}
which completes the proof.
\end{proof}
\LBmaxJustByConv*
\begin{proof}
Observe that during the execution of Algorithm~\ref{alg:osposg:hsvi} the set $\Gamma$ is modified only by the point-based updates on lines~\ref{alg:osposg:hsvi:pb-update-1} and~\ref{alg:osposg:hsvi:pb-update-2} of Algorithm~\ref{alg:osposg:hsvi}.
To prove the result, it thus suffices to show that (1) the initial lower bound $\uv$ is max-justified by the set $\mathsf{Conv}(\Gamma) = \Gamma = \lbrace \val^{\sigma_1^{\mathrm{unif}}} \rbrace$ and that (2) if $\uv$ is max-justified by $\mathsf{Conv}(\Gamma)$ then any point-based update results in a value function $V_{\mathrm{LB}}^{\Gamma'}$ that is max-justified by the set $\mathsf{Conv}(\Gamma')$.
First, let us show that the initial lower bound $\uv$ is max-justified by the initial set of $\alpha$-vectors $\Gamma=\lbrace \val^{\sigma_1^{\mathrm{unif}}} \rbrace$ (and therefore also by $\mathsf{Conv}(\Gamma) = \Gamma$).
Clearly, $\sigma_1^{\mathrm{unif}} = \mathsf{comp}(\pi_1^{\mathrm{unif}}, \zeta^{\mathrm{unif}})$, i.e., the uniform strategy $\sigma_1^{\mathrm{unif}}$ can be composed from the uniform stage strategy $\pi_1^{\mathrm{unif}}$ for the first stage of the game and the uniform strategy $\zeta^{\mathrm{unif}}_{a_1,o}=\sigma_1^{\mathrm{unif}}$ in every $(a_1,o)$-subgame reached after playing $a_1$ and observing $o$.
Hence, $\val^{\sigma_1^{\mathrm{unif}}}=\mathsf{valcomp}(\pi_1^{\mathrm{unif}}, \overline{\alpha}^{\mathrm{unif}})$ for $\alpha^{\mathrm{unif}}_{a_1,o}=\val^{\sigma_1^{\mathrm{unif}}}$ and the initial $\uv$ is therefore max-justified by the set $\mathsf{Conv}(\Gamma) = \Gamma = \lbrace \val^{\sigma_1^{\mathrm{unif}}} \rbrace$.
Next, consider a lower bound $\uv$ from Algorithm~\ref{alg:osposg:hsvi} and assume that it is max-justified by a set $\mathsf{Conv}(\Gamma)$.
The point-based update constructs a set $\Gamma' = \Gamma \cup \lbrace \mathsf{valcomp}(\pi_1, \overline{\alpha}) \rbrace$ for some $\pi_1 \in \Pi_1$ and $\overline{\alpha} \in \mathsf{Conv}(\Gamma)^{A_1 \times O}$, see Equation~\eqref{eq:osposg:pbupdate-lb}.
Since $\uv$ was max-justified by $\mathsf{Conv}(\Gamma)$, we know that for every $\alpha \in \mathsf{Conv}(\Gamma)$ there exists $\pi_1' \in \Pi_1$, $\overline{\alpha}' \in \mathsf{Conv}(\Gamma)^{A_1 \times O}$ such that $\mathsf{valcomp}(\pi_1',\overline{\alpha}') \geq \alpha$.
The same holds for the newly constructed $\alpha$ vector $\mathsf{valcomp}(\pi_1, \overline{\alpha})$, and $V_{\mathrm{LB}}^{\Gamma'}$ is therefore max-justified by $\mathsf{Conv}(\Gamma) \cup \lbrace \mathsf{valcomp}(\pi_1, \overline{\alpha}) \rbrace$.
By Lemma~\ref{thm:osposg:conv-max-justified}, we also have that $V_{\mathrm{LB}}^{\Gamma'}$ is max-justified by $\mathsf{Conv}(\mathsf{Conv}(\Gamma) \cup \lbrace \mathsf{valcomp}(\pi_1, \overline{\alpha}) \rbrace) = \mathsf{Conv}(\Gamma')$.
Every point-based update thus results in a value function $V_{\mathrm{LB}}^{\Gamma'}$ that is max-justified by $\mathsf{Conv}(\Gamma')$, which completes the proof.
\end{proof}
\end{document} |
\begin{document}
\date{}
\title{Star chromatic index}
\begin{abstract}
The star chromatic index $\chis'(G)$ of a graph $G$ is the minimum number
of colors needed to properly color the edges of the graph so that no path or cycle
of length four is bi-colored. We obtain a near-linear upper bound in terms
of the maximum degree $\Delta=\Delta(G)$. Our best lower bound
on $\chis'$ in terms of $\Delta$ is $2\Delta(1+o(1))$ valid for complete graphs.
We also consider the special case of cubic graphs,
for which we show that the star chromatic index lies between 4 and 7 and
characterize the graphs attaining the lower bound.
The proofs involve a variety of notions from other branches of mathematics
and may therefore be of independent interest.
\end{abstract}
\section{Motivation}
Edge-colorings of graphs have a long tradition. Although the chromatic index of
a graph with maximum degree $\Delta$ is either equal to $\Delta$ or $\Delta+1$
(Vizing \cite{Vizing}), it is hard to decide when one or the other value
occurs. This is a consequence of the fact that distinguishing between graphs
whose chromatic index is $\Delta$ or $\Delta+1$ is NP-hard (Holyer \cite{Holyer}).
This is true even for the special case when $\Delta=3$ (cubic and subcubic graphs).
Two special parameters concerning vertex colorings of graphs under some additional
constraints have received lots of attention. The first kind is that of an
\emph{acyclic coloring} (see \cite{Gr,AMR}), where we ask not only that every color
class is an independent vertex set but also that any two color classes induce an
acyclic subgraph. The second kind is obtained when we request
that any two color classes induce a star forest --- this variant is called
\emph{star coloring} (see \cite{ACKKR,NO} for more details). These types of colorings
give rise to the notions of the \emph{acyclic chromatic number} and the
\emph{star chromatic number} of a graph, respectively.
A \emph{proper $k$-edge-coloring} of a graph $G$ is a mapping
$\varphi: E(G)\to C$, where $C$ is a set (of \emph{colors}) of cardinality $k$,
and for any two adjacent edges $e,f$ of $G$, we have $\varphi(e)\ne \varphi(f)$.
A subgraph $F$ of $G$ is said to be \emph{bi-colored} (under the edge-coloring
$\varphi$) if $|\varphi(E(F))|\le 2$.
A proper $k$-edge-coloring $\varphi$ is an
\emph{acyclic $k$-edge-coloring} if there are no bi-colored cycles in $G$, and
is a \emph{star $k$-edge-coloring} if there are neither bi-colored 4-cycles
nor bi-colored paths of length 4 in $G$ (by length of a path we mean its
number of edges).
The \emph{star chromatic index} of $G$,
denoted by $\chis'(G)$, is the smallest integer $k$ such that $G$ admits
a star $k$-edge-coloring.
Note that the above definition of acyclic/star edge-coloring of a graph $G$
is equivalent to the acyclic/star vertex coloring of the line graph $L(G)$.
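For intuition, the definition can be checked mechanically on very small graphs. The following brute-force sketch (ours, not part of the paper; all names are ours) searches for the smallest $k$ admitting a star $k$-edge-coloring:

```python
# Brute-force star chromatic index of a tiny graph, straight from the
# definition above (our own sketch; exponential in the number of edges).
from itertools import product, permutations

def star_index(n, edge_list, kmax=8):
    """Smallest k such that the graph on vertices 0..n-1 with the given
    edges admits a star k-edge-coloring."""
    E = [frozenset(e) for e in edge_list]
    Eset = set(E)
    # 4-edge paths (5 distinct vertices) and 4-cycles of the graph
    paths = [p for p in permutations(range(n), 5)
             if all(frozenset(p[i:i + 2]) in Eset for i in range(4))]
    cycles = [p for p in permutations(range(n), 4)
              if all(frozenset((p[i], p[(i + 1) % 4])) in Eset for i in range(4))]

    def bicolored(cs):  # a proper coloring is bi-colored iff colors alternate
        return cs[0] == cs[2] and cs[1] == cs[3]

    for k in range(1, kmax + 1):
        for assignment in product(range(k), repeat=len(E)):
            col = dict(zip(E, assignment))
            if any(col[e] == col[f] for i, e in enumerate(E)
                   for f in E[i + 1:] if e & f):
                continue  # not a proper edge-coloring
            if any(bicolored([col[frozenset(p[i:i + 2])] for i in range(4)])
                   for p in paths):
                continue
            if any(bicolored([col[frozenset((p[i], p[(i + 1) % 4]))]
                              for i in range(4)]) for p in cycles):
                continue
            return k
    return None
```

For instance, the search reports $\chis'(K_3)=3$ and $\chis'(K_4)=5$; the latter is consistent with Theorem~\ref{thm:cubic}(b) below, since $K_4$ is cubic but cannot cover the 3-cube (a covering of the connected graph $Q_3$ needs at least $8$ vertices).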
\newcommand\Gd{{\mathcal G}_\Delta}
If one considers the class of graphs $\Gd$ of maximum degree at most $\Delta$,
Brooks' Theorem shows that the usual chromatic number is $O(\Delta)$.
The maximum acyclic chromatic number on $\Gd$ is
$\Omega(\Delta^{4/3}/\log^{1/3}\Delta)$ and $O(\Delta^{4/3})$
(Alon, McDiarmid, and Reed \cite{AMR}).
The maximum star chromatic number on $\Gd$ is
$\Omega(\Delta^{3/2}/\log^{1/2}\Delta)$ and $O(\Delta^{3/2})$
(Fertin, Raspaud, and Reed \cite{FRR}).
In contrast with the aforementioned $\Delta^{4/3}$ behaviour in the class of
all graphs of maximum degree $\Delta$, the acyclic chromatic index is linear
in terms of the maximum degree. Alon et al.\ \cite{AMR} proved
that it is at most $64\Delta$, and Molloy and Reed \cite{MR} improved
the upper bound to $16\Delta$.
One would expect a similar phenomenon
to hold for star edge-colorings. However, the only previous work \cite{Liu-Deng}
just improves the constant in the bound $O(\Delta^{3/2})$ from
vertex coloring.
In this paper we show a near-linear upper bound for the star chromatic index
in terms of the maximum degree (Theorem \ref{thm:upperdelta}).
Additionally, we provide some lower bounds (Theorem \ref{thm:lowerdelta})
and consider the special case of cubic graphs (Theorem \ref{thm:cubic}).
The proofs involve a variety of notions from other branches of mathematics
and are therefore of certain independent interest.
\section{Upper bound for $\chis'(K_n)$}
We shall first treat the special case of complete graphs. The study of
their star chromatic index is motivated by the results presented in
Section \ref{sect:3} since they give rise to general upper bounds on the
star chromatic index.
\begin{theorem}
\label{thm:upperKn}
The star chromatic index of the complete graph\/ $K_n$ satisfies
$$
\chis'(K_n) \le n \cdot \frac{ 2^{ 2\sqrt2(1+o(1)) \sqrt{\log n} } }{(\log n)^{1/4}} \,.
$$
In particular, for every $\eps>0$ there exists a constant $c$ such that
$\chis'(K_n) \le cn^{1+\eps}$ for every $n\ge 1$.
\end{theorem}
\begin{proof}
Let $A$ be an $n$-element set of integers, to be chosen later.
We will assume that the vertices of $K_n$ are exactly the
elements of~$A$, $V(K_n)=A$, and color the edge $ij$ by color $i+j$.
Obviously, this defines a proper edge-coloring.
Suppose that $ijklm$ is a bi-colored path (or bi-colored 4-cycle).
By definition of the coloring we have $i+j = k+l$ and $j+k = l+m$,
implying $i + m = 2k$. Thus, if we ensured that the set $A$ does not contain
any solution to $i+m=2k$ with $i,m \ne k$ we would have found a star edge-coloring of $K_n$.
It is easy to see that such a triple $(i,k,m)$ forms a 3-term arithmetic progression;
luckily, a lot is known about sets without these progressions.
We will use a construction due to Elkin \cite{Elkin} (see also \cite{GreenWolf}
for a shorter exposition) who has improved an earlier result
of Behrend \cite{Behrend}. As shown by Elkin \cite{Elkin}, there is a set
$A \subset \{1, 2, \dots, N\}$ of cardinality at least $c_1 N(\log N)^{1/4}/2^{ 2\sqrt2 \sqrt{\log N} }$
such that $A$ contains no 3-term arithmetic progression.
The defined coloring uses only colors $1, 2, \dots, 2N$ (possibly not all of them),
thus we have shown that $\chis'(K_n) \le 2N$.
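The construction just described can be sketched in code (our own sketch): build a 3-AP-free set greedily (far weaker than the Behrend or Elkin bounds, but enough to illustrate the argument) and verify the star property by brute force.

```python
# Sketch (ours) of the proof's construction: V(K_n) = A for a 3-AP-free
# set A, edge {i, j} colored i + j.
from itertools import permutations

def greedy_ap_free(n):
    """First n nonnegative integers containing no 3-term arithmetic
    progression (greedily chosen: numbers with only digits 0, 1 in base 3)."""
    A, x = [], 0
    while len(A) < n:
        # x is the current maximum, so only progressions a < b < x can arise
        if all(x != 2 * b - a for a in A for b in A if a < b):
            A.append(x)
        x += 1
    return A

def sum_coloring_is_star(A):
    """Check that coloring edge {i, j} of K_A by i + j is a star coloring."""
    col = lambda i, j: i + j
    for p in permutations(A, 5):           # 4-edge paths
        if (col(p[0], p[1]) == col(p[2], p[3])
                and col(p[1], p[2]) == col(p[3], p[4])):
            return False
    for p in permutations(A, 4):           # 4-cycles
        if (col(p[0], p[1]) == col(p[2], p[3])
                and col(p[1], p[2]) == col(p[3], p[0])):
            return False
    return True
```

A bi-colored path would force the 3-term progression $i+m=2k$ excluded by the choice of $A$, and a bi-colored 4-cycle would force two of its vertices to coincide, so the check always succeeds on an AP-free set.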
We still need to get a bound on $N$ in terms of $n$.
In the following, $c_1,c_2,\dots$ are absolute constants.
For every $\eps > 0$ we have
\begin{equation}
n = |A| \ge c_1 N \frac{ (\log N)^{1/4} }{ 2^{ 2\sqrt2 \sqrt{\log N} } }
\ge c_2 N^{1-\eps} \,.
\label{eq:A}
\end{equation}
Since we also have $N \le c_3 n^{1+\eps}$ for every $\eps > 0$, we may plug this in
(\ref{eq:A}) and use the fact that
$(\log N)^{1/4}\, 2^{-2\sqrt2 \sqrt{\log N}}$
is a decreasing function of~$N$ for large $N$ to conclude that
$$
n \ge c_4 N ((1+\eps)\log n)^{1/4}\ 2^{-2\sqrt2 \sqrt{(1+\eps)\log n}}\,.
$$
Thus we get
$\displaystyle N \le c_5 n 2^{2\sqrt2 \sqrt{(1+\eps)\log n}}\,(\log n)^{-1/4}$.
One more round of this `bootstrapping' yields the desired inequality
$$
N \le n \frac{ 2^{ 2\sqrt2(1+o(1)) \sqrt{\log n} } }{(\log n)^{1/4}}\,.
$$
\end{proof}
\paragraph{Remark.} A tempting possibility for modification
is to use a set $A$ (in an arbitrary group) that contains
no 3-term arithmetic progression and $|A+A|$ is small.
Any such set could serve for our construction, with the same proof.
Even more generally, we only need a symmetric function $p:A\times A\to N$,
where $A=\{1,2,\dots,n\}$, such that $p(a,\cdot): A\to N$ is a 1-1 function
for each fixed $a\in A$, $N$ is small, and $p$ does not yield bi-colored
paths (for all $i,j,k,l,m$ we either have $p(i,j)\ne p(k,l)$ or $p(j,k)\ne p(l,m)$).
We have been unable, however, to find a set that would yield
a better bound than that of Theorem \ref{thm:upperKn}.
\section{An upper bound for general graphs}
\label{sect:3}
The purpose of this section is to present a way to find
star edge-coloring of an arbitrary graph~$G$, using
a star edge-coloring of the complete graph $K_n$ with
$n = \Delta(G)+1$.
We will use the concept of \emph{frugal colorings} as defined by
Hind, Molloy and Reed \cite{HMR}.
A proper vertex coloring of a graph is called \emph{$\beta$-frugal} if no more
than $\beta$ vertices
of the same color appear in the neighbourhood of a vertex.
Molloy and Reed \cite{MR,MR2} proved that every graph has an
$O(\log \Delta/\log\log \Delta)$-frugal coloring using $\Delta+1$ colors.
If $\Delta$ is large enough, one may use $50$ for the implicit constant
in the $O(\log \Delta/\log\log \Delta)$ asymptotics.
\begin{theorem} \label{thm:upperdelta}
For every graph $G$ of maximum degree $\Delta$ we have
\begin{equation}
\chis'(G) \le \chis'(K_{\Delta+1}) \cdot
O\Bigl(\frac{\log \Delta}{\log\log \Delta}\Bigr)^2
\label{eq:B}
\end{equation}
and therefore\/ $\chis'(G) \le \Delta \cdot 2^{O(1) \sqrt{\log \Delta}}$.
\end{theorem}
\begin{proof}
Using the above-mentioned result of Molloy and Reed \cite{MR2}, we find
a $\beta$-frugal $(\Delta+1)$-coloring $f$ with
$\beta = O(\log \Delta/\log\log \Delta)$.
We assume the colors used by $f$ are the vertices of $K_{\Delta+1}$,
so that the frugal coloring is $f: V(G) \to V(K_{\Delta+1})$.
Let $c$ be a star edge-coloring of $K_{\Delta+1}$.
A natural attempt is to color the edge $uv$ of $G$ by $c(f(u)f(v))$.
This coloring, however, may not even be proper: if a vertex $v$ has
neighbours $u$ and $w$ of the same color, then the edges $vu$ and $vw$ will
be of the same color. To resolve this, we shall produce another edge-coloring,
with the aim to distinguish these edges; then we will combine the two
colorings.
We define an auxiliary coloring $g$ of $E(G)$ using $2\beta^2$
colors. Let us first set
$$
V_i = \{ v \in V(G) : f(v) = i \}, \qquad i \in V(K_{\Delta+1})
$$
and define the induced subgraphs $G_{ij} = G[V_i \cup V_j]$.
For each pair $\{i,j\}$ we shall define the coloring $g$ on the edges
of $G_{ij}$; in the end this will define $g(e)$ for every
edge $e$ of $G$. Recall that the frugality of $f$ implies
that the maximum degree in $G_{ij}$ is at most $\beta$. Consequently,
the maximum degree in the (distance) square of $L(G_{ij})$ is at most
$2\beta(\beta-1) < 2\beta^2$. Therefore, we can find a coloring
of $E(G_{ij})$ using $2\beta^2$ colors so that no two edges of this graph
at distance 1 or 2 in the line graph receive the same color.
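The coloring used on each $G_{ij}$ is a strong (distance-2) edge-coloring; a minimal greedy sketch (ours, not the paper's) suffices, since every edge conflicts with at most $2\beta(\beta-1) < 2\beta^2$ others in a graph of maximum degree $\beta$:

```python
# Greedy strong edge-coloring (our sketch): edges sharing a vertex, or
# having a common adjacent edge, receive distinct colors.
def strong_edge_coloring(edges):
    """edges: list of 2-tuples of vertices; returns {edge: color}."""
    inc = {}
    for e in edges:
        for v in e:
            inc.setdefault(v, []).append(e)
    color = {}
    for e in edges:
        near = set()
        for v in e:              # edges at line-graph distance 1 ...
            for f in inc[v]:
                for w in f:      # ... and distance 2
                    near.update(inc[w])
        near.discard(e)
        used = {color[f] for f in near if f in color}
        c = 0                    # smallest color avoiding all conflicts
        while c in used:
            c += 1
        color[e] = c
    return color
```

On a 5-cycle, for example, every pair of edges is within line-graph distance 2, so all five edges receive distinct colors.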
Now we can define the desired star edge-coloring of $G$:
we color an edge $uv$ by the pair
$$
h(uv) = (c(f(u)f(v)), g(uv)) \,.
$$
First, we show this coloring is proper. Consider adjacent
(distinct) edges $vu$ and $vw$. If $f(u) \ne f(w)$, then $f(u)f(v)$
and $f(v)f(w)$ are two distinct adjacent edges of $K_{\Delta+1}$,
hence $c$ assigns them distinct colors.
On the other hand, if $f(u)=f(w) = i$ (say), we put $j=f(v)$
and notice that $uv$ and $vw$ are two adjacent edges
of $G_{ij}$, hence the coloring $g$ distinguishes them.
It remains to show that $G$ has no 4-path or cycle colored
with two alternating colors. Let us call such an
object simply a \emph{bad path} (considering $C_4$ as a closed path).
Suppose for a contradiction that the path $uvwxy$ is bad.
By looking at the first coordinate of $h$ we observe
that the $c$-color of the edges of the trail $f(u)f(v)f(w)f(x)f(y)$
assumes either just one value or two alternating ones.
As $c$ is a star edge-coloring of $K_{\Delta+1}$,
this trail cannot be a path (nor a 4-cycle).
A simple case analysis shows that in fact $f(u)=f(w)=f(y)$
and $f(v)=f(x)$. Put $i = f(u)$, $j=f(v)$ and consider
again the $g$ coloring of $G_{ij}$. By construction,
$g(uv) \ne g(wx)$, showing that $uvwxy$ is not a bad
path, a contradiction.
\end{proof}
As we saw in this section, an upper bound on the star chromatic
index of $K_n$ yields a slightly weaker result for general
bounded degree graphs. We wish to note that, if convenient, one
may start with other special graphs in place of $K_n$, in particular with $K_{n,n}$.
It is easy to see that
$$
\chis'(K_{n,n}) \le \chis'(K_n) + n
$$
(if the vertices of $K_{n,n}$ are $a_i, b_i$ ($i=1, \dots, n$)
then we color edges $a_ib_j$ and $a_jb_i$ using the color
of the edge $ij$ in $K_n$, while each edge $a_ib_i$ gets a unique color).
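The parenthetical construction can be made concrete (our sketch; the explicit star 5-edge-coloring of $K_4$ used as input is our own, not from the paper): lift a star edge-coloring of $K_n$ to $K_{n,n}$ and verify the result by brute force for $n=4$.

```python
# Our sketch of the displayed construction for K_{n,n}.
from itertools import permutations

def knn_coloring(n, kn_col):
    """kn_col: {frozenset({i, j}): color} star coloring of K_n.  Vertices of
    K_{n,n} are ('a', i), ('b', i); diagonal edges a_i b_i get fresh colors."""
    col = {}
    for i in range(n):
        for j in range(n):
            e = frozenset({('a', i), ('b', j)})
            col[e] = kn_col[frozenset({i, j})] if i != j else ('diag', i)
    return col

def is_star(col):
    """Brute-force star check of a coloring {frozenset({u, v}): color}."""
    V = sorted({v for e in col for v in e})
    E = list(col)
    if any(col[e] == col[f] for i, e in enumerate(E)
           for f in E[i + 1:] if e & f):
        return False                      # not proper
    def c(u, v):
        return col.get(frozenset({u, v}))
    for p in permutations(V, 5):          # 4-edge paths
        if (all(c(p[i], p[i + 1]) is not None for i in range(4))
                and c(p[0], p[1]) == c(p[2], p[3])
                and c(p[1], p[2]) == c(p[3], p[4])):
            return False
    for p in permutations(V, 4):          # 4-cycles
        cs = [c(p[i], p[(i + 1) % 4]) for i in range(4)]
        if None not in cs and cs[0] == cs[2] and cs[1] == cs[3]:
            return False
    return True

# explicit star 5-edge-coloring of K_4 (two opposite edges share color 1)
k4 = {frozenset(e): c for e, c in
      [((0, 1), 1), ((2, 3), 1), ((0, 2), 2),
       ((1, 3), 3), ((0, 3), 4), ((1, 2), 5)]}
```

Here `knn_coloring(4, k4)` uses $5+4=9$ colors on $K_{4,4}$, matching the bound $\chis'(K_{n,n}) \le \chis'(K_n) + n$.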
On the other hand, a simple recursion yields an estimate
$$
\chis'(K_n) \le \sum_{i=1}^{\ceil{\log_2 n}} 2^{i-1} \chis'(K_{ \ceil{n/2^i}, \ceil{n/2^i}}) \,.
$$
From this it follows that
if $\chis'(K_{n,n})$ is $O(n)$ (or $n (\log n)^{O(1)}$, $n^{1+o(1)}$, respectively)
then $\chis'(K_{n})$ is $O(n \log n)$ (or $n (\log n)^{O(1)}$, $n^{1+o(1)}$, respectively).
\section{A lower bound for $\chis'(K_n)$}
Our best lower bound on $\chis'(K_n)$ is provided below and is linear in terms
of $n$. The upper bound from Theorem \ref{thm:upperKn} is more than
a polylogarithmic factor away from this. So, even the asymptotic behaviour of
$\chis'(K_n)$ remains a mystery.
\begin{theorem} \label{thm:lowerdelta}
The star chromatic index of the complete graph\/ $K_n$ satisfies
$$
\chis'(K_n) \ge 2n(1+o(1)).
$$
\end{theorem}
\begin{proof}
Assume there is a star edge-coloring of $K_n$ using $b$ colors.
Let $a_i$ be the number of edges of color $i$,
let $b_{i,j}$ be the number of 3-edge paths colored $i,j,i$.
We set up a double-counting argument. Note that all sums over
$i$, $j$ are assumed to be over all available colors
(that is, from 1 to $b$). As every edge gets one color, we have
\begin{equation}
\sum_i a_i = \binom n2 \,.
\label{eq:cc}
\end{equation}
Fixing $i$, the edges of color $i$ form a matching $M_i$ with $a_i$ edges; for
every pair of edges of $M_i$, each of the four edges joining an endpoint of one
to an endpoint of the other yields a path counted in some $b_{i,j}$. Consequently,
\begin{equation}
\sum_j b_{i,j} = 4\binom {a_i}2 \,.
\label{eq:aa}
\end{equation}
Finally, we fix color $j$ and observe that each 3-edge
path colored $i,j,i$ (for some~$i$) uses two edges among
the $2a_j \cdot (n-2a_j)$ edges connecting a vertex of $M_j$
to a vertex outside of $M_j$ (an outer endpoint of such a path cannot lie in
$V(M_j)$, since otherwise the path could be extended to a bi-colored 4-path),
and no such connecting edge is used by two of these paths. This leads to
\begin{equation}
\sum_i b_{i,j} \le a_j (n-2a_j) \,.
\label{eq:bb}
\end{equation}
Now we use (\ref{eq:aa}) and (\ref{eq:bb}) to evaluate the double sum
$\sum_{i,j} b_{i,j}$ in two ways, getting
$$
4 \sum_i \binom {a_i}2 \le \sum_j a_j(n-2a_j) \,.
$$
This inequality reduces to
$$
4 \sum_i a_i^2 \le (n+2) \sum_i a_i \,.
$$
By the Cauchy-Schwarz inequality, $(\sum a_i)^2\le b\cdot \sum a_i^2$,
and then using (\ref{eq:cc}), we obtain
$$
4 \binom{n}{2} \le b(n+2).
$$
Therefore, $b\ge 2 n(n-1)/(n+2) = (2+o(1))n$.
\end{proof}
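The counting identities above can be sanity-checked on a concrete coloring (our own star 5-edge-coloring of $K_4$; one color class is the perfect matching $\{01,23\}$, the remaining four are singletons):

```python
# Checking (cc), (aa) and the bound (bb) on an explicit example (ours).
from itertools import permutations
from math import comb

n = 4
col = {frozenset(e): c for e, c in
       [((0, 1), 1), ((2, 3), 1), ((0, 2), 2),
        ((1, 3), 3), ((0, 3), 4), ((1, 2), 5)]}
colors = sorted(set(col.values()))
a = {i: sum(1 for c in col.values() if c == i) for i in colors}

# b[i, j] = number of 3-edge paths colored i, j, i (each undirected path once)
b = {(i, j): 0 for i in colors for j in colors}
for p in permutations(range(n), 4):
    if p[0] < p[3]:                      # pick one orientation per path
        e = [col[frozenset(p[k:k + 2])] for k in range(3)]
        if e[0] == e[2]:
            b[(e[0], e[1])] += 1
```

For this coloring $a_1=2$ and the four paths colored $1,j,1$ realize $\sum_j b_{1,j} = 4 = 4\binom{a_1}{2}$, while for $j=1$ the bound in (\ref{eq:bb}) is $a_1(n-2a_1)=0$ and indeed no path colored $i,1,i$ exists.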
\section{Subcubic graphs}
A regular graph of degree three is said to be \emph{cubic}. A graph of maximum degree at most three is \emph{subcubic}.
A graph $G$ is said to cover a graph $H$ if there is a graph homomorphism from $G$ to $H$ that is locally bijective.
Explicitly, there is a mapping $f:V(G) \to V(H)$ such that whenever $uv$ is an edge of $G$, the image
$f(u) f(v)$ is an edge of $H$, and for each vertex $v \in V(G)$, $f$ is a bijection between
the neighbours of $v$ and the neighbours of $f(v)$.
\begin{theorem}
\label{thm:cubic}
{\rm (a)} If\/ $G$ is a subcubic graph, then $\chis'(G) \le 7$.
{\rm (b)} If\/ $G$ is a simple cubic graph, then $\chis'(G) \ge 4$, and the equality holds if and only if\/ $G$ covers the graph of the $3$-cube.
\end{theorem}
For part (a) of this theorem we will need the following lemma. It seems possible
to use this lemma for other classes of graphs, therefore it might be of certain independent interest.
\begin{lemma}
\label{thm:recursive}
Let $f:E(G) \to \{1, \dots, k\}$ be a $k$-edge-coloring.
{\rm (a)} Let $e$ be an edge of $G$. Suppose that the restriction of $f$ to $E(G)\setminus\{e\}$
is a star edge-coloring of $G - e$ and that $f(e)$ is distinct from $f(e')$
whenever $d(e,e') \le 2$ (that is, either $e, e'$ share a vertex, or a
common adjacent edge).
Then $f$ is a star edge-coloring of $G$.
{\rm (b)} Let $A$ be a set of vertices of $G$, let $B = V(G) \setminus A$,
and let $X$ be the set of edges with one end in $A$ and the other in $B$.
Suppose that
\begin{enumerate}
\setlength{\itemsep}{1pt} \setlength{\parskip}{0pt} \setlength{\parsep}{0pt}
\item (a restriction of) $f$ is a star edge-coloring of\/ $G[A]$;
\item (a restriction of) $f$ is a star edge-coloring of\/ $G[B]$;
\item no edges $e_1, e_2$ in $X$ share a common vertex in $A$ or a common adjacent edge in $G[A]$;
\item for every edge $e \in X$ and every edge $e'$ in $G[B] \cup X$ such that
$d(e,e') \le 2$ we have $f(e) \ne f(e')$
(distance is measured in $G[B] \cup X$, not in $G$);
\item for every edge $e \in X$ and every edge $e'$ in $G[A]$ we have $f(e) \ne f(e')$.
\end{enumerate}
Then $f$ is a star edge-coloring of $G$.
\end{lemma}
\begin{proofof}{of the lemma}
(a) Since $f$ is a star edge-coloring of $G-e$, no 4-path (or 4-cycle) in $G-e$ is bi-colored.
If $P$ is a bi-colored 4-path (4-cycle)
containing $e$, then $P$ contains an edge of the same color as $e$ at distance $\le 2$ from $e$,
a contradiction.
(b) Conditions (3), (4), (5) imply that for every edge $e \in X$ and every edge $e'\in E(G)$,
if $d(e,e')\le 2$, then $f(e) \ne f(e')$. Therefore, we can repeatedly apply part (a), starting
with the graph $G[A] \cup G[B]$ and adding one edge of $X$ at a time.
\end{proofof}
To explain the conditions of part (b) of the above lemma: the point is that
in condition 5 we do not check the distance between $e$ and $e'$. In our applications,
$A$ will be a particular small subgraph of $G$ (such as those in Figure~\ref{fig:auxreduc})
and $B$ the `unknown' rest of the graph. We do not
want to distinguish whether some edges in $X$ share a vertex in $B$. This, however, may create
new 4-paths, hence the particular formulation of this lemma.
\begin{proofof}{of the theorem}
(a) Trying to get a contradiction, let us assume that $G$ is a subcubic
graph with the minimum number of edges for which $\chis'(G)>7$.
We first prove several properties of $G$ (connectivity, absence of various
small subgraphs). This will eventually allow us to construct the desired
7-edge-coloring by decomposing $G$ into a collection of cycles connected by paths of length 1 or~2.
Clearly, \textbf{$G$ is connected.}
\textbf{Suppose that $G$ contains a cut-edge $xy$.} Let $G_x$ and
$G_y$ be the components of $G-xy$ which contain the vertex $x$ and $y$,
respectively. By the minimality of $G$, each of $G_x$ and $G_y$ admits a
star 7-edge-coloring. In $G_x$ there are at most 6 edges that are
incident to a neighbor of $x$. By permuting the colors, we may assume that
color 7 is not used on these edges. Similarly, we may assume that color 7
is not used on the edges in $G_y$ that are incident with neighbors of $y$.
Then we can color the edge $xy$ by using color 7 and obtain a star
7-edge-coloring of $G$ (we use Lemma~\ref{thm:recursive}(a)).
This contradiction shows that \textbf{$G$~is 2-connected.}
If $G$ contained \textbf{a path $wxyz$, where $x$ and $y$ are degree~2 vertices,}
then we could color $G-xy$ by induction, and extend the coloring to a star
7-edge-coloring of $G$ by using Lemma~\ref{thm:recursive}(a). (For the
edge $e=xy$ we use a color that does not appear on the at most six edges incident to $w$ or $z$.)
Thus, \textbf{such a path $wxyz$ does not exist.} In particular, $G$~is not a cycle.
Suppose next that $G$ contains \textbf{a degree~2 vertex $z$ whose neighbors $x$ and $y$ are adjacent.}
We will use Lemma~\ref{thm:recursive}(a) for $e=xz$. By induction we may find
a star edge-coloring of $G-e$, and as there are at most six edges in $G$ at
distance $\le 2$ from $e$, we can extend the coloring to $e$ to satisfy
the condition of the lemma.
So the graph $G$ can be star edge-colored using 7 colors, a contradiction.
This shows that \textbf{the neighbors of a degree~2 vertex cannot be adjacent in $G$.}
Further suppose that $G$ contains \textbf{parallel edges.} Three parallel edges
would constitute the whole (easy to color) graph, so suppose there are
two parallel edges between vertices $u$ and $v$.
Unless $G$ contains a bridge, or $G$ has at most three vertices (and is easy to color),
there are neighbors $u'$ of $u$ and $v'$ of $v$ with $u' \ne v'$.
By induction we can color $G \setminus \{u,v\}$. Next, we extend this coloring to
the edges $uu'$, $vv'$, so that each of them has different color than the $\le 6$
edges at distance $\le 2$ from it. Now we distinguish two cases.
If $uu'$ and $vv'$ have different colors, say $a$ and $b$, then it is enough to use
on the two parallel edges any two distinct colors that are different from $a$ and~$b$.
If $uu'$ and $vv'$ have the same color, then there are at most 5 colors of edges
at distance $\le 2$ from the parallel edges, so we may again use Lemma~\ref{thm:recursive}, part (a).
So, \textbf{$G$ does not contain parallel edges.}
Next we suppose that $G$ contains \textbf{one of the first three graphs in Figure~\ref{fig:auxreduc}}
as a subgraph, where other edges of $G$ attach only
at the vertices denoted by the empty circles, and some of these vertices may be identified.
We use Lemma~\ref{thm:recursive}, part (b). We let $A$ be the set of
vertices of the subgraph in the figure that are denoted by full circles,
so $X$ is the set of the three thick edges.
By induction, $G[B]$ is star 7-edge-colorable. This coloring can be extended
to $G[B] \cup X$ so that color of each edge $e$ in $X$ differs from the color of all
edges at distance $\le 2$ from $e$ (there are at most 6 such edges).
We assume that the colors used on $X$ are in $\{5, 6, 7\}$.
For $G[A]$ we use the coloring as shown in the figure. This satisfies conditions
of Lemma~\ref{thm:recursive}, part (b), and therefore $G$ can be
star 7-edge-colored.
Next suppose that $G$ contains \textbf{the fourth graph in Figure~\ref{fig:auxreduc}}
as a subgraph (again, other edges can only attach at the `empty' vertices,
some of which may be identified).
We use Lemma~\ref{thm:recursive}, part (b) to show that $G-e$ is
7-edge-colored in a particular way that allows us to use
Lemma~\ref{thm:recursive}, part (a) to extend the coloring on $e$.
We let $A$ be the vertices of the pentagon, so that $X$ is the
set of the three thick edges. Note that the conditions of the part (b) are
satisfied for $G-e$, but not for $G$ itself.
By induction there is a star 7-edge-coloring of $G[B]$, and we
again extend it to $X$ so that the edges in $X$ have distinct color
from edges at distance $\le 2$. Observe that there are at most six
edges in $G[B] \cup X$ that are at distance $\le 2$ from $e$,
so there is a color, say $C$, not used on any of those.
We shall reserve $C$ to be used at $e$. First, however,
we apply part~(b) to color the graph~$G-e$. We use the coloring of $G[A]$
shown in the figure, assuming that $C \not\in \{1,2,3\}$ and
that none of the colors $1,2,3$ is used on $X$.
Finally, we use part~(a) to extend the coloring on $G$,
letting the color of $e$ be $C$.
As the last reduction, we show that \textbf{$G$ does not contain a path $wxyz$,
where $w$ and $z$ are degree~2 vertices.}
If $G$ did contain such a path,
we could color $H = G - \{w, x, y, z\}$. Next, we describe how to extend
this coloring to a star 7-edge-coloring of~$G$. We will denote the edges
as in Figure~\ref{fig:pathreduc}; to ease the notation we will use $a$ to
denote both the edge and its color.
We may assume that all vertices among $w, x, y, z$, and their neighbours
are distinct (*), as otherwise $G$ contains one of the previously
handled subgraphs --- those in Figure~\ref{fig:auxreduc}, triangle with a degree~2 vertex,
parallel edges or two adjacent degree~2 vertices (the straightforward checking is left to the reader).
It may, however, happen that, e.g., edges $s$ and $u$ have an edge adjacent
to both of them. This has no effect on the proof; we will simply have, say, $b=e$.
The edges $a$, \dots, $h$ are
part of $H$, so they are colored already. Similarly as in the previous cases,
we choose a color for $s$, $t$, $u$, $v$ so that none of these edges shares a color
with an edge of $G-\{p,q,r\}$ at distance at most~2;
using 7 colors, this is easy to achieve.
Condition~(*) in the previous paragraph
implies that every 3-edge path starting at $w$, $x$, $y$~or~$z$ with an edge~$s$, $t$, $u$~or~$v$
avoids edges $p$, $q$, $r$ --- consequently, no such path has first and last edge of the same color,
and no such path can be part of a bi-colored 4-path or 4-cycle. This greatly reduces the number
of paths and cycles we need to take care of.
Next, we pick a color for $q$ that differs from $c$, $d$, $e$, $f$, $t$, and $u$.
\begin{figure}
\caption{Subgraphs that cannot appear in a minimal counterexample.}
\label{fig:auxreduc}
\end{figure}
\begin{figure}
\caption{Illustration of the proof that a minimal counterexample to Theorem~\ref{thm:cubic}
contains no path $wxyz$ whose ends $w$ and $z$ have degree~2.}
\label{fig:pathreduc}
\end{figure}
Now, we distinguish several cases based on colors of $s$, $q$, and $v$.
We again assume the colors are 1, \dots, 7; up to symmetry we have only the
following cases.
\textbf{Case 1. } $s=1, q=2, v=3$ \\
We only need to avoid bi-colored paths
$aspt$, $bspt$, $sptc$, $sptd$, and the four symmetrical paths
in the right part of the figure.
If $t=1$, we choose $p$ to be different from $1, 2, a, b, c, d$.
If $t \ne 1$, it suffices to make $p$ different from $s, q, t$.
The procedure for $r$ is analogous.
\textbf{Case 2. } $s=1, q=1, v=2$ \\
In this case $t\ne 1$, so we only need to avoid bi-colored paths
$aspq, bspq, spqr, spqu$ and
$urvh, urvg$, $eurv, furv$.
If $u=2$, we make sure that $r$ differs from
$1,2,e,f,g,h$. Otherwise, it suffices to make
$r$ different from $q, u, v$.
Then we choose $p$ to differ from
$a, b, 1, t, u, r$.
\textbf{Case 3. } $s=1, q=2, v=1$ \\
This is handled in exactly the same way as Case 1.
\textbf{Case 4. } $s=1, q=1, v=1$ \\
Now $t, u \ne 1$, so we only need to avoid the paths
$aspq, bspq, spqr, pqrv, qrvg, qrvh$, and
$spqu, tqrv$.
To do this, we only need to ensure that
$p \ne 1, a, b, t, u, r$ and
$r \ne 1, h, g, u, t, p$, which is easily possible.
This finishes the proof of the claim that a minimal counterexample $G$ does not contain
a path $wxyz$, where $w$ and $z$ are degree~2 vertices,
and completes the first part of the proof.
Next we will use the above-derived properties of the supposed minimal
counterexample~$G$ to find its star 7-edge-coloring and thus reach a contradiction.
We will use only the boldface claims from the above part of the proof.
Let $G'$ be the graph obtained from $G$ by suppressing all degree~2
vertices, i.e., replacing each path $xzy$, where $z$ is a degree~2 vertex,
by a single edge $xy$. Clearly, $G'$ is a cubic graph. It is bridgeless
(as $G$ is bridgeless) and contains no parallel edges -- as $G$~contains
no parallel edges, no triangle with a degree~2 vertex and no 4-cycle
with two opposite degree~2 vertices.
By a result of Kaiser and \v{S}krekovski \cite{KS}, $G'$ contains a perfect matching~$M'$
such that $M'$ does not contain all edges of any minimal 3-cut or 4-cut.
Note that each edge in $G'$ corresponds
either to a single edge in $G$ or to a path of length two.
Let $M$ denote the set of edges of $G$ corresponding to an edge of~$M'$.
Our goal is to use four colors (say 4, 5, 6, 7) on $M$,
and three colors (say 1, 2, 3) on the other
edges that form a disjoint union of circuits.
We form an auxiliary graph~$K$, whose vertices are the edges in $M$.
We make two of these edges $e$, $f$ adjacent in~$K$ if
either they form a 2-edge path corresponding to an edge in~$M'$
or there is an edge in $G$ joining an end of $e$ with an end of~$f$.
Observe that $K$ is a graph of maximum degree at most four. Also note that if~$K$
is disconnected, then each component contains a vertex of degree at most
three. By Brooks' Theorem, $K$ is 4-colorable unless it contains a
connected component isomorphic to $K_5$. It is easy to see that the latter
case occurs if and only if $K=K_5$ and $G=G'$.
Let us first consider the case when $K$ is 4-colorable.
In this case we will not need the fact that $M'$ does not contain minimal 3-cuts or 4-cuts.
The 4-coloring of the vertices of $K$ determines a 4-coloring of the edges in $M$ with the
property that every color class is an induced matching in $G$. We shall
show that we can star 3-color the edges in $G-M$ unless $G-M$ contains
a 5-cycle; this case will be treated separately. By extending that 3-edge
coloring to a 7-edge-coloring of $G$ (by using the 4-coloring of edges in
$M$) we obtain a star 7-edge-coloring since none of the four colors used on
the edges in $M$ can give rise to a bi-colored 4-path or a cycle (Lemma~\ref{thm:recursive}(a)).
Thus it suffices to find a star 3-edge-coloring of $G-M$. This is not hard
unless $G-M$ contains a 5-cycle.
Recall that $G-M$ is the union of disjoint cycles and every $k$-cycle, where $k\ne 5$,
admits a star 3-edge-coloring: This is easy if $k\in\{3,4\}$ or if $k$ is
divisible by three. If $k\equiv 1\pmod{3}$ and $k>5$, we can use the colors
in the following order $1232123\cdots 123$. Similarly, if
$k\equiv 2\pmod{3}$ and $k>5$, we can use the colors $12132123\cdots 123$.
Thus, the only problems are the 5-cycles in $G-M$.
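The cyclic patterns just described can be written out and checked explicitly, together with a brute-force confirmation that no 5-cycle admits a star 3-edge-coloring (our helper names):

```python
# Star 3-edge-colorings of k-cycles for k != 5, as in the text (our code).
from itertools import product

def cycle_pattern(k):
    """Cyclic color sequence for the edges of a k-cycle, k != 5."""
    if k == 3:
        return [1, 2, 3]
    if k == 4:
        return [1, 2, 1, 3]
    r = k % 3
    if r == 0:
        return [1, 2, 3] * (k // 3)
    if r == 1:                                   # 1232 123...123
        return [1, 2, 3, 2] + [1, 2, 3] * ((k - 4) // 3)
    return [1, 2, 1, 3, 2] + [1, 2, 3] * ((k - 5) // 3)   # 12132 123...123

def is_star_cycle_coloring(c):
    """Proper, and no four cyclically consecutive edges colored a, b, a, b
    (for k = 4 this window is exactly the bi-colored 4-cycle condition)."""
    k = len(c)
    if any(c[i] == c[(i + 1) % k] for i in range(k)):
        return False
    return not any(c[i] == c[(i + 2) % k] and c[(i + 1) % k] == c[(i + 3) % k]
                   for i in range(k))
```

Exhausting all $3^5$ sequences shows that a 5-cycle indeed has no star 3-edge-coloring, confirming that the 5-cycles are the only obstruction.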
To color them, we shall choose an edge $e=e_C$ in each 5-cycle $C$ and
a color $c=c_C$ that is otherwise used on the edges of $M$.
Then we color $e$ with color $c$ and color the 4-path $C-e$ as $1,2,3,1$.
We pick $c$ and $e$ in such a way that no edge of $M$ at distance
at most 2 from $e$ has color $c$ (we will show below that this is possible).
It is easy to check that this, together with the fact that $K$ is
properly colored, prevents all 4-paths and 4-cycles from being bi-colored
(Lemma~\ref{thm:recursive}(a) again).
So, this finishes the proof of the case when $K$~is 4-colorable---provided
we show how to pick $e$ and $c$ for each 5-cycle $C$.
To do this, we let $F$ be the set of edges of $M$ that are incident with a vertex
of $C$ but not part of $C$.
Further, we let $X$ be the (possibly empty) set of edges of $M$ adjacent
with some edge of $F$. Easily, $|X|$ is the number of 2-edge paths
in $M$ that are adjacent to $C$. (A 2-edge path with both ends at $C$
counts twice; in this case $X$ and $F$ intersect.)
As $G$ contains no 3-edge path with both ends of degree 2, we have $|X| \le 2$.
We distinguish two cases based on the color pattern on edges of $F$.
These cases cover all possibilities up to renaming the colors.
\textbf{Case 1.} Edges of $F$ use in some order colors 4, 4, 5, 6, and 7
(that is, one color appears twice, the other colors once). If $X=\emptyset$, there
are three possible choices for edge $e$: for each color $c$ among 5, 6, and 7
we may choose the edge of $C$ opposite to the edge of $F$ colored $c$.
Edges of $X$ may be at distance 2 to some of these edges of $C$. However,
there are at most two such edges, hence at most two colors are affected.
So, one of colors 5, 6, and 7 is still valid.
\textbf{Case 2.} Edges of $F$ use in some order colors 4, 4, 5, 5, and 6
(that is, two colors twice, one once, one not at all).
In this case, if $X=\emptyset$, all five edges of $C$ can be colored 7.
Each edge of $X$ (if such an edge exists) is at distance 2 from two edges of $C$,
so one edge of $C$ is far from edges of $X$ and we can let this edge be $e$
and $c$ be 7.
Finally, let us consider the case when $K$ does not admit a 4-coloring,
i.e., $K=K_5$. As argued before, this implies that $G=G'$ is a cubic graph
containing precisely 10 vertices. Note that $G-M$ is a 2-regular graph
with no 3-cycles or 4-cycles (due to the choice of $M'$).
Thus $G-M$ is isomorphic either to a 10-cycle, or to the union of two 5-cycles.
If $G-M$ is the union of two
5-cycles, then it is easy to check that $G$ is the Petersen graph, and hence
$\chis'(G)=5$. (A star 5-edge-coloring is easy to find, and the star
4-edge-coloring does not exist as shown in part (b) below.)
The final case is that $G-M$ is a 10-cycle.
Color its chords with colors~1, 2, 3, 4,~5. Then color the first, fourth, and seventh edges
by colors from 1, 2, 3, 4, 5 so that no two edges at distance two share a color.
Finally, color the remaining edges with 6~and~7.
This completes the proof.
(b)
Every proper 3-edge-coloring of a cubic graph has bi-colored cycles (any two
color classes are perfect matchings whose union is a disjoint union of cycles), thus
$\chis'(G)\ge4$.
In Figure~\ref{fig:cube} there is a 4-edge-coloring of the cube $Q_3$.
It is easy to verify that this is indeed a star edge-coloring.
Perhaps the fastest way to see this is to observe that for each $i\ne j$, there is
(a unique) 3-edge path colored $i,j,i$ between the two vertices colored~$j$.
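This verification can be automated. The block below uses an explicit star 4-edge-coloring of $Q_3$ of our own making (the coloring in Figure~\ref{fig:cube} may differ; ours has the property used below that antipodal vertices miss the same color) and checks the star condition by brute force:

```python
# A star 4-edge-coloring of the 3-cube, checked by brute force (our coloring).
from itertools import permutations

# vertices 0..7 read as binary triples; edges join vertices at Hamming distance 1
Q3_COL = {frozenset(e): c for e, c in [
    ((0, 4), 1), ((2, 3), 1), ((5, 7), 1),
    ((0, 1), 2), ((4, 6), 2), ((3, 7), 2),
    ((0, 2), 3), ((1, 5), 3), ((6, 7), 3),
    ((1, 3), 4), ((2, 6), 4), ((4, 5), 4),
]}

def is_star_edge_coloring(col):
    """Brute-force star check of a coloring {frozenset({u, v}): color}."""
    V = sorted({v for e in col for v in e})
    E = list(col)
    if any(col[e] == col[f] for i, e in enumerate(E)
           for f in E[i + 1:] if e & f):
        return False                      # not proper
    def c(u, v):
        return col.get(frozenset({u, v}))
    for p in permutations(V, 5):          # 4-edge paths
        if (all(c(p[i], p[i + 1]) is not None for i in range(4))
                and c(p[0], p[1]) == c(p[2], p[3])
                and c(p[1], p[2]) == c(p[3], p[4])):
            return False
    for p in permutations(V, 4):          # 4-cycles
        cs = [c(p[i], p[(i + 1) % 4]) for i in range(4)]
        if None not in cs and cs[0] == cs[2] and cs[1] == cs[3]:
            return False
    return True
```

One can further check that the color missing at each vertex agrees at antipodal pairs, so that the vertex coloring $f$ of the proof below covers $K_4$.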
Consider now a graph $G$ that covers $Q_3$ and use the covering map to
lift the edge-coloring of~$Q_3$ to an edge-coloring of~$G$.
From the definition of covering projections we see that a path of length $2$ in $G$ is mapped
to a path of length $2$ in $Q_3$. It follows that the defined edge-coloring is
proper. It also follows that a path of length $4$ in $G$ is mapped to a path
of length $4$ in $Q_3$ or to a $4$-cycle in $Q_3$, and a 4-cycle in $G$ is
mapped to a 4-cycle in $Q_3$. It follows that we have a star edge-coloring of $G$.
For the reverse implication, suppose that $G$ has a star 4-edge-coloring $c$.
Let us first define a (vertex) 4-coloring $f$ by letting
$f(v)$ be the (unique) color that is missing on edges incident with $v$.
\textbf{$f$ is a proper coloring.} For a contradiction, suppose that $f(u)=f(v)$
for an edge $uv$ of $G$. Let $u_1$, $u_2$ be the other neighbors of $u$,
and $v_1$, $v_2$ be the other neighbors of $v$.
By symmetry we may assume that $f(u)=f(v)=4$, $c(uv)=3$,
$c(uu_i) = c(vv_i)=i$ (for $i=1,2$).
The bi-chromatic paths $u_i u v v_i$ imply that color $3$ is used neither
on edges incident with $v_1$ nor on those incident with $v_2$. This, however, implies
that there is an edge colored $2$ incident with $v_1$ and
an edge colored $1$ incident with $v_2$, and these create a bi-chromatic
4-edge path (or 4-cycle), a contradiction.
Note that the cases where $uv$ is contained in a triangle
($u_1=v_2$, $u_2=v_1$ or both) are also covered by the above.
\textbf{$f$ is a covering map $G \to K_4$.} Suppose for a contradiction
that there is a vertex $v$ with neighbors $v_1$, $v_2$ such that
$f(v_1) = f(v_2)$. By symmetry we may assume that $f(v)=4$,
$f(v_1)=f(v_2)=3$, $c(vv_i) = i$.
Now $v_1$ must be incident with an edge of color~$2$
and $v_2$ must be incident with an edge of color~$1$,
producing again a bi-chromatic 4-edge path (or cycle).
\textbf{$f$ together with $c$ define a covering $G \to Q_3$.}
Let $i,j,k,l$ denote $1,2,3,4$ in some order. If a vertex $v$ of $G$
has $f(v)=i$ then the $c$-colors of its incident edges are $j$, $k$, $l$
and the same holds for the $f$-colors of its adjacent vertices.
There are exactly two possibilities: either the edges incident with $v$ colored $j,k,l$
lead to vertices colored $k,l,j$ (respectively), or to vertices colored $l,j,k$.
We refer to these two possibilities as the \emph{local color pattern} at $v$.
Observe that in $Q_3$ as depicted in Figure~\ref{fig:cube},
there are for each $i$ two vertices colored $i$ and they use different local
color patterns.
This implies there is a unique vertex mapping $F: V(G) \to V(Q_3)$ such that for each
$v \in V(G)$ the following conditions hold:
\begin{enumerate}
\setlength{\itemsep}{1pt} \setlength{\parskip}{0pt} \setlength{\parsep}{0pt}
\item we have $f(v) = f(F(v))$ (we use $f$ also for the vertex coloring
of $Q_3$), and
\item $v$ and $F(v)$ use the same local color pattern.
\end{enumerate}
To show that $F$ is a covering map, we need to observe that for each $v\in V(G)$,
the three neighbors of $v$ in $G$ map by $F$ to the three neighbors of $F(v)$ in $Q_3$.
As we already know that $f$ is a covering map to $K_4$, it suffices to show that
a neighbor $u$ of $v$ is indeed mapped to the neighbor of $F(v)$ with color $f(u)$
(and not to the other vertex with the same color).
For this we observe that the local color pattern at a vertex $v$ determines the
local color pattern at each neighbor of $v$, in any cubic graph that is
star 4-edge-colored. As this holds both in $G$ and in $Q_3$,
our definition of $F$ yields a covering map, which finishes the proof.
\end{proofof}
\begin{figure}
\caption{Cube $Q_3$ with star edge-coloring by four colors. The vertex labels are
used in the proof of Theorem~\ref{thm:cubic}.}
\label{fig:cube}
\end{figure}
There are cubic graphs whose star chromatic index is equal to 6.
One example is $K_{3,3}$. To see this, let us suppose that we have a star
edge-coloring of $K_{3,3}$, and let $F$ be a color class. If $|F|=3$, then every other
color class contains at most one edge and hence there are at least seven
colors altogether. So, we may assume that every color class contains one or two
edges only. If $F = \{ab,cd\}$ is a color class, then one of the edges $ad$ or $cb$
forms a singleton color class since the second edge in the color class of $ad$
(and the same for $cb$) would need to be the edge of $K_{3,3}$ disjoint from
$a,b,c,d$. This implies that there are at least two singleton color classes.
Hence, the total number of colors is at least 6. Finally, a star 6-edge-coloring
of $K_{3,3}$ is easy to construct, proving that $\chis'(K_{3,3})=6$.
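The counting argument above can also be confirmed mechanically. The following brute-force sketch (an addition for the reader, not part of the original argument; the graph encoding and function names are ours) searches for star edge-colorings of $K_{3,3}$, i.e., proper edge-colorings with no bi-colored path or cycle on 4 edges:

```python
def star_ok(adj, col):
    """col maps frozenset({u,v}) -> color for the edges colored so far.
    Check properness and absence of bi-colored 4-edge paths/cycles."""
    for v in adj:  # proper: edges at a common vertex get distinct colors
        cs = [col[frozenset((v, u))] for u in adj[v] if frozenset((v, u)) in col]
        if len(cs) != len(set(cs)):
            return False
    for v0 in adj:  # enumerate all walks v0..v4 that form a P5 or a C4
        for v1 in adj[v0]:
            for v2 in adj[v1]:
                if v2 == v0:
                    continue
                for v3 in adj[v2]:
                    if v3 in (v0, v1):
                        continue
                    for v4 in adj[v3]:
                        if v4 in (v1, v2):
                            continue  # v4 == v0 is allowed: a 4-cycle
                        es = [frozenset(p) for p in
                              ((v0, v1), (v1, v2), (v2, v3), (v3, v4))]
                        if all(e in col for e in es):
                            c = [col[e] for e in es]
                            if c[0] == c[2] and c[1] == c[3]:
                                return False
    return True

def find_star_coloring(adj, k):
    """Backtracking search for a star k-edge-coloring; None if none exists."""
    edges = sorted({frozenset((u, v)) for u in adj for v in adj[u]}, key=sorted)
    col = {}
    def bt(i):
        if i == len(edges):
            return dict(col)
        top = max(col.values(), default=-1)
        for c in range(min(top + 2, k)):  # break color symmetry
            col[edges[i]] = c
            if star_ok(adj, col):
                res = bt(i + 1)
                if res is not None:
                    return res
            del col[edges[i]]
        return None
    return bt(0)

# K_{3,3}: parts {0,1,2} and {3,4,5}
K33 = {u: [v for v in range(6) if (v < 3) != (u < 3)] for u in range(6)}
```

The exhaustive search with 5 colors fails, while 6 colors succeed, matching $\chis'(K_{3,3})=6$.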
\section{Open problems}
As we saw in Sections~2 and~4, establishing the star
chromatic index is nontrivial even for complete graphs.
We established bounds
$$
(2+o(1))\cdot n \le \chis'(K_n) \le n \cdot \frac{ 2^{ 2\sqrt2(1+o(1)) \sqrt{\log n} } }{(\log n)^{1/4}} \,.
$$
\begin{question}
What is the true order of magnitude of $\chis'(K_n)$?
Is $\chis'(K_n) = O(n)$?
\end{question}
In the previous section we obtained the bound
$\chis'(G) \le 7$ for a subcubic graph $G$.
We also saw that $\chis'(K_{3,3}) = 6$.
A bipartite cubic graph that we thought might require seven colors is the Heawood graph
(the incidence graph of the points and lines of the Fano plane).
However, it turned out that also its star chromatic index is at most 6.
After some additional thought, we propose the following.
\begin{conjecture}
If\/ $G$ is a subcubic graph, then\/ $\chis'(G)\le 6$.
\end{conjecture}
It would be interesting to understand the list version
of star edge-coloring: by an edge $k$-list for a graph $G$
we mean a collection $(L_e)_{e \in E(G)}$ such that
each $L_e$ is a set of size~$k$. We shall say that $G$~is
\emph{$k$-star edge choosable} if for every edge $k$-list~$(L_e)$
there is a star edge-coloring $c$ such that
$c(e) \in L_e$ for every edge $e$.
We let $\mathrm{ch}'_s(G)$ be the minimum $k$ such that
$G$ is $k$-star edge choosable.
All of the results in this paper may have extensions to list colorings.
Let us ask specifically two questions:
\begin{question}
Is it true that $\mathrm{ch}'_s(G) \le 7$ for every subcubic graph~$G$?
(Perhaps even $\le 6$?)
\end{question}
\begin{question}
Is it true that $\mathrm{ch}'_s(G) = \chis'(G)$ for every graph~$G$?
\end{question}
\end{document} |
\begin{document}
\centerline{\bf The First Derivative of Ramanujan's Cubic Continued Fraction}\vskip .10in
\centerline{\bf Nikos Bagis}
\centerline{Department of Informatics}
\centerline{Aristotle University of Thessaloniki, Greece}
\centerline{[email protected]}
\begin{quote}
\begin{abstract}
We give a complete evaluation of the first derivative of Ramanujan's cubic continued fraction using elliptic functions. The elliptic functions are easy to handle and give the results in terms of Gamma functions and radicals from tables.
\end{abstract}
\textbf{Keywords:} Ramanujan's cubic fraction; Jacobian elliptic functions; continued fractions; derivative
\end{quote}
\section{Introduction}
\label{intro}
Ramanujan's cubic continued fraction is (see [3], [7], [8], [9], [11])
\begin{equation}
V(q):=\frac{q^{1/3}}{1+}\frac{q+q^2}{1+}\frac{q^2+q^4}{1+}\frac{q^3+q^6}{1+}\ldots
\end{equation}
Our main result is the evaluation of the first derivative of Ramanujan's cubic fraction. For this we follow a different route from previous works and use the theory of elliptic functions. Our method is to find the complete polynomial equation satisfied by the cubic fraction (a quartic equation, solvable in radicals) in terms only of the inverse elliptic nome $k_r$. Using the derivative of $k_r$, which we evaluate in Section 2 of this article, we then find the desired formula for the first derivative. We begin with some definitions.\\
Let
\begin{equation}
\left(a;q\right)_k=\prod^{k-1}_{n=0}(1-aq^n)
\end{equation}
Then we define
\begin{equation}
f(-q)=(q;q)_\infty
\end{equation}
and
\begin{equation}
\Phi(-q)=(-q;q)_\infty
\end{equation}
Also let
\begin{equation}
K(x)=\int^{\pi/2}_{0} \frac{1}{\sqrt{1-x^2\sin^2(t)}}dt
\end{equation}
be the complete elliptic integral of the first kind.\\
We denote
\begin{equation}
\theta_4(u,q)=\sum^{\infty}_{n=-\infty}(-1)^nq^{n^2}e^{2nui}
\end{equation}
the elliptic theta function of the fourth kind. The following relations also hold (see [16]):
\begin{equation}
\prod^{\infty}_{n=1}(1-q^{2n})^6=\frac{2kk'K(k)^3}{\pi^3q^{1/2}}
\end{equation}
\begin{equation}
q^{1/3}\prod^{\infty}_{n=1}(1+q^n)^8=2^{-4/3}\left(\frac{k}{1-k^2}\right)^{2/3}
\end{equation}
and
\begin{equation}
f(-q)^8=\prod^{\infty}_{n=1}(1-q^n)^8=\frac{2^{8/3}}{\pi^4}q^{-1/3}k^{2/3}(k')^{8/3}K(k)^4
\end{equation}
The variable $k$ is defined from the equation
\begin{equation}
\frac{K(k')}{K(k)}=\sqrt{r}
\end{equation}
where $r$ is positive, $q=e^{-\pi \sqrt{r}}$, and $k'=\sqrt{1-k^2}$. Note also that whenever $r$ is a positive rational, the numbers $k=k_r$ are algebraic.
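For readers who wish to experiment numerically with these definitions, $K(k)$ can be computed from the arithmetic-geometric mean via the classical identity $K(k)=\pi/(2\,M(1,k'))$. The following sketch (our addition, not part of the paper) checks that $k_1=1/\sqrt{2}$ satisfies (10) with $r=1$, together with the classical closed form $K(1/\sqrt{2})=\Gamma(1/4)^2/(4\sqrt{\pi})$:

```python
import math

def agm(a, b, tol=1e-15):
    # arithmetic-geometric mean M(a, b)
    while abs(a - b) > tol:
        a, b = (a + b) / 2.0, math.sqrt(a * b)
    return a

def K(k):
    # complete elliptic integral of the first kind: K(k) = pi / (2 M(1, k'))
    return math.pi / (2.0 * agm(1.0, math.sqrt(1.0 - k * k)))

k1 = 1.0 / math.sqrt(2.0)                       # the singular modulus k_1
ratio = K(math.sqrt(1.0 - k1 * k1)) / K(k1)     # K(k')/K(k), should equal sqrt(1) = 1
K_closed = math.gamma(0.25) ** 2 / (4.0 * math.sqrt(math.pi))  # classical value
```

Both checks agree to machine precision.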
\section{The Derivative $\left\{r,k\right\}$}
\textbf{Lemma 1.}\\
If $\left|t\right|<\pi a/2$ and $q=e^{-\pi a}$ then
\begin{equation}
\sum^{\infty}_{n=1}\frac{\cosh(2tn)}{n\sinh(\pi a n)}=\log(f(-q^2))-\log\left(\theta_4(it,e^{-a\pi})\right)
\end{equation}
\textbf{Proof.}\\
From the Jacobi Triple Product Identity (see [4]) we have
\begin{equation}
\theta_4(z,q)=\prod^{\infty}_{n=0}(1-q^{2n+2})(1-q^{2n-1}e^{2iz})(1-q^{2n-1}e^{-2iz})
\end{equation}
By taking the logarithm of both sides and expanding the logarithm of the individual terms in a power series it is simple to show (11) from (12).
\[
\]
\textbf{Lemma 2.}\\
Let $q=e^{-\pi \sqrt{r}}$ with $r$ real and positive, and set
\begin{equation}
\phi(x)=2\frac{d}{dx}\left(\frac{\partial}{\partial t}\log\left(\vartheta_4\left(\frac{it\pi}{2},e^{-2\pi x}\right)\right)_{t=x}\right)
\end{equation}
then
\begin{equation}
\frac{d(\sqrt{r})}{dk}=\frac{K^{(1)}(k)}{\phi\left(\frac{K(\sqrt{1-k^2})}{K(k)}\right)}=
\frac{K^{(1)}(k)}{\phi\left(\frac{K(k')}{K(k)}\right)}
\end{equation}
where $K^{(1)}(k)$ is the first derivative of $K$.\\
\textbf{Proof.}\\
From Lemma 1 we have
$$2\frac{\partial}{\partial t}\log\left(\vartheta_4\left(\frac{it\pi}{2},e^{-2\pi x}\right)\right)_{t=x}=-\pi\sum^{\infty}_{n=1}\frac{1}{\cosh\left(n\pi x\right)}=\frac{\pi}{2}-K(k_x)$$
then
$$\sqrt{x(k_2)}-\sqrt{x(k_1)}=-\int^{k_2}_{k_1}\frac{K^{(1)}(k)}{\phi\left(\frac{K\left(\sqrt{1-k^2}\right)}{K(k)}\right)}dk$$
Differentiating the above relation with respect to $k$ we get the result.
\[
\]
\textbf{Lemma 3.}\\
Set $q=e^{-\pi \sqrt{r}}$ and
$$\left\{r,k\right\}:=\frac{dr}{dk}=2\frac{K(k')K^{(1)}(k)}{K(k)\phi\left(\frac{K(k')}{K(k)}\right)}$$
Then
\begin{equation}
\left\{r,k\right\}=\frac{\pi\sqrt{r}}{K^2(k_r)k_rk'^2_r}
\end{equation}
\textbf{Proof.}\\
From (9) taking the logarithmic derivative with respect to $k$ and using Lemma 2 we get:
\begin{equation}
\pi\left\{r,k\right\}\left(1-24\sum^{\infty}_{n=1}\frac{nq^n}{1-q^n}\right)=\left(\frac{1-5k^2}{(k-k^3)}+\frac{6K^{(1)}}{K}\right)\frac{4K'}{K}
\end{equation}
But it is known that
\begin{equation}
\sum^{\infty}_{n=1}\frac{nq^n}{1-q^n}=\frac{1}{24}+\frac{K}{6\pi^2}((5-k^2)K-6E)
\end{equation}
Hence
\begin{equation}
\left\{r,k\right\}=\frac{\pi K'}{K^2}\frac{\frac{1-5k^2}{k-k^3}+\frac{6K^{(1)}}{K}}{(k^2-5)K+6E}
\end{equation}
Also $$a(r)=\frac{\pi}{4K^2}+\sqrt{r}-\frac{E\sqrt{r}}{K} ,$$
where $a(r)$ is the elliptic alpha function. Using the above relations we get the result.\\
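Lemma 3 can be checked numerically by inverting (10) directly. The sketch below (our addition; function names are ours) compares (15) at $r=1$ with a finite-difference derivative and with the Gamma-function closed form quoted later in the Evaluations section. Note that numerically $dr/dk$ is negative, since $k_r$ decreases as $r$ grows; the paper works with its magnitude, and that is what we compare:

```python
import math

def K(k):
    # complete elliptic integral of the first kind via the AGM
    a, b = 1.0, math.sqrt(1.0 - k * k)
    while abs(a - b) > 1e-15:
        a, b = (a + b) / 2.0, math.sqrt(a * b)
    return math.pi / (2.0 * a)

def r_of_k(k):
    # from (10): sqrt(r) = K(k')/K(k), hence r = (K(k')/K(k))**2
    return (K(math.sqrt(1.0 - k * k)) / K(k)) ** 2

k = 1.0 / math.sqrt(2.0)   # k_1, i.e. r = 1
h = 1e-6
fd = abs((r_of_k(k + h) - r_of_k(k - h)) / (2.0 * h))   # |dr/dk| numerically
lemma3 = math.pi * math.sqrt(1.0) / (K(k) ** 2 * k * (1.0 - k * k))   # (15)
gamma_form = 8.0 * math.sqrt(2.0) * math.gamma(0.75) ** 4 / math.pi ** 2
```

The three values agree, supporting (15) in magnitude.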
\textbf{Note.}\\
1) The first derivative of $K$ is $$K^{(1)}=\frac{E}{k_r\cdot k'^2_r}-\frac{K}{k_r}$$
where $k=k_r$ and $k'=k'_r=\sqrt{1-k^2_r}$.\\
2) In the same way we can find, from the relation
\begin{equation}
k_{4r}=\frac{1-k'_r}{1+k'_r}
\end{equation}
the degree-2 modular equation for the derivative.\\
Noting first that (the proof is easy)
\begin{equation}
\left\{r,k'_{r}\right\}=\frac{k'_r}{k_{r}}\left\{r,k_r\right\}
\end{equation}
we have
\begin{equation}
\left\{r,k_{4r}\right\}=\frac{k'_r(1+k'_r)^2}{2k_r}\left\{r,k_r\right\}
\end{equation}
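As a quick numerical sanity check of Note 1 (a sketch we add for the reader, not part of the paper), one can compare the stated formula for $K^{(1)}$ against a finite difference at a generic modulus, computing $K$ by the AGM and $E$ by quadrature:

```python
import math

def K(k):
    # complete elliptic integral of the first kind via the AGM
    a, b = 1.0, math.sqrt(1.0 - k * k)
    while abs(a - b) > 1e-15:
        a, b = (a + b) / 2.0, math.sqrt(a * b)
    return math.pi / (2.0 * a)

def E(k, n=2000):
    # complete elliptic integral of the second kind, Simpson's rule
    h = (math.pi / 2.0) / n
    s = 0.0
    for i in range(n + 1):
        w = 1 if i in (0, n) else (4 if i % 2 else 2)
        s += w * math.sqrt(1.0 - (k * math.sin(i * h)) ** 2)
    return s * h / 3.0

k, h = 0.6, 1e-5                                  # a generic modulus
fd = (K(k + h) - K(k - h)) / (2.0 * h)            # numerical K^{(1)}(k)
note1 = E(k) / (k * (1.0 - k * k)) - K(k) / k     # E/(k k'^2) - K/k, as in Note 1
```

The two values agree to high precision.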
\section{The Ramanujan's Cubic Continued Fraction}
Let
\begin{equation} V(q):=\frac{q^{1/3}}{1+}\frac{q+q^2}{1+}\frac{q^2+q^4}{1+}\frac{q^3+q^6}{1+}\ldots
\end{equation}
be Ramanujan's cubic continued fraction. Then the following holds.
\[
\]
\textbf{Lemma 4.}
\begin{equation}
V(q)=\frac{2^{-1/3}(k_{9r})^{1/4}(k'_{r})^{1/6}}{(k_r)^{1/12}(k'_{9r})^{1/2}}
\end{equation}
where the $k_{9r}$ are given by (see [7]):
\begin{equation}
\sqrt{k_rk_{9r}}+\sqrt{k'_rk'_{9r}}=1
\end{equation}
\textbf{Proof.}\\
The proof can be found in [18].
\[
\]
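Both modular relations used here, (19) and (23), can be verified numerically by solving (10) for $k$ by bisection (the ratio $K(k')/K(k)$ is strictly decreasing in $k$ on $(0,1)$). A sketch we add for the reader, with our own function names:

```python
import math

def K(k):
    # complete elliptic integral of the first kind via the AGM
    a, b = 1.0, math.sqrt(1.0 - k * k)
    while abs(a - b) > 1e-15:
        a, b = (a + b) / 2.0, math.sqrt(a * b)
    return math.pi / (2.0 * a)

def k_of_r(r):
    # solve K(k')/K(k) = sqrt(r) for the singular modulus k_r by bisection
    t, lo, hi = math.sqrt(r), 1e-12, 1.0 - 1e-12
    for _ in range(100):
        mid = (lo + hi) / 2.0
        if K(math.sqrt(1.0 - mid * mid)) / K(mid) > t:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

kp = lambda k: math.sqrt(1.0 - k * k)     # complementary modulus
k1, k9 = k_of_r(1.0), k_of_r(9.0)
lhs23 = math.sqrt(k1 * k9) + math.sqrt(kp(k1) * kp(k9))   # left side of (23)
k4_solver = k_of_r(4.0)
k4_landen = (1.0 - kp(k1)) / (1.0 + kp(k1))               # right side of (19) at r = 1
```

The solver reproduces $k_1=1/\sqrt{2}$, relation (23) at $r=1$, and the duplication (19).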
\textbf{Lemma 5.}\\
If $$G(x)=\frac{x}{\sqrt{2\sqrt{x}-3x+2x^{3/2}-2\sqrt{x}\sqrt{1-3\sqrt{x}+4x-3x^{3/2}+x^2}}}$$ and
\begin{equation}
k_r=G(w)
\end{equation}
then
$$k_{9r}=\frac{w}{k_r}$$
and
$$k'_{9r}=\frac{(1-\sqrt{w})^2}{k'_r}$$
\textbf{Proof.}\\
See [18].
\[
\]
\textbf{Theorem 1.}\\
Set $T=\sqrt{1-8V^3(q)}$ then
\begin{equation}
(k_r)^2=\frac{(1-T)(3+T)^3}{(1+T)(3-T)^3}
\end{equation}
\textbf{Proof.}\\
See [18].
\[
\]
Equation (26) is a quartic equation in $T$, solvable in radicals.\\
An example of an evaluation is
\begin{equation}
V(e^{-\pi})=\frac{1}{2}\left(-2-\sqrt{3}+\sqrt{3(3+2\sqrt{3})}\right)
\end{equation}
\[
\]
\textbf{Main Theorem.}\\
Let $q=e^{-\pi\sqrt{r}}$, then
\begin{equation}
V'(q)=\frac{dV(q)}{dq}=\frac{-2\sqrt{r}}{q\pi}\frac{dV}{dr}=\frac{4K^2(k_r)k'^2_r(V(q)+V^4(q))}{3q\pi^2\sqrt{1-8V^3(q)}}
\end{equation}
\textbf{Proof.}\\ Differentiating (26) with respect to $r$ gives
\begin{equation}
\sqrt{\frac{2k_r}{\left\{r,k\right\}}}=\frac{4T(3+T)}{(3-T)^2(1+T)}\sqrt{\frac{dT}{dr}}
\end{equation}
or
\begin{equation}
T_r=\frac{dT}{dr}=\frac{1}{8k_r\left\{r,k\right\}}\frac{(9-T^2)(1-T^2)}{T^2}
\end{equation}
Using the relation $T=\sqrt{1-8V(q)^3}$, we get
$$
\frac{dV(q)}{dr}=-\frac{2}{3}\frac{V(q)+V^4(q)}{k_r \left\{r,k\right\} \sqrt{1-8V^3(q)}}
\eqno{(a)}$$
which is the result.\\ Hence the problem of finding $V(q)$ and $V'(q)$ is completely solvable in radicals when we know $k_r$ and $K(k_r)$ (see [12]), $r\in\bf Q\rm$, $r>0$.
\[
\]
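At $r=1$ (so $\sqrt{r}=1$ and $k_1=1/\sqrt{2}$, $k'^2_1=1/2$), both the evaluation (27) and the Main Theorem can be checked numerically against a truncation of the continued fraction (1). A sketch we add for the reader (not part of the paper; function names are ours):

```python
import math

def V(q, N=60):
    # truncated continued fraction (1), evaluated bottom-up
    tail = 1.0
    for n in range(N, 0, -1):
        tail = 1.0 + (q ** n + q ** (2 * n)) / tail
    return q ** (1.0 / 3.0) / tail

def K(k):
    # complete elliptic integral of the first kind via the AGM
    a, b = 1.0, math.sqrt(1.0 - k * k)
    while abs(a - b) > 1e-15:
        a, b = (a + b) / 2.0, math.sqrt(a * b)
    return math.pi / (2.0 * a)

q = math.exp(-math.pi)         # r = 1
v = V(q)
closed27 = (-2.0 - math.sqrt(3.0)
            + math.sqrt(3.0 * (3.0 + 2.0 * math.sqrt(3.0)))) / 2.0   # eq. (27)
k = 1.0 / math.sqrt(2.0)
T = math.sqrt(1.0 - 8.0 * v ** 3)
main_thm = 4.0 * K(k) ** 2 * 0.5 * (v + v ** 4) / (3.0 * q * math.pi ** 2 * T)
h = 1e-7
fd = (V(q + h) - V(q - h)) / (2.0 * h)   # finite-difference dV/dq
```

The continued fraction reproduces (27), and the finite-difference derivative matches the Main Theorem's closed form.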
We often use the notations $V[r]:=V(e^{-\pi\sqrt{r}})$, $T[r]:=T(e^{-\pi\sqrt{r}})=t$.
\[
\]
\textbf{Proposition 1.}
\begin{equation}
V[4r]=\frac{1-T[r]}{4V[r]}
\end{equation}
\textbf{Proof.}\\
See [9].
\[
\]
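Proposition 1 is also easy to confirm numerically from the continued fraction itself, for several values of $r$. A sketch we add for the reader:

```python
import math

def V(q, N=60):
    # truncated continued fraction (1), evaluated bottom-up
    tail = 1.0
    for n in range(N, 0, -1):
        tail = 1.0 + (q ** n + q ** (2 * n)) / tail
    return q ** (1.0 / 3.0) / tail

errors = []
for r in (1.0, 2.0, 3.0):
    v1 = V(math.exp(-math.pi * math.sqrt(r)))          # V[r]
    v4 = V(math.exp(-2.0 * math.pi * math.sqrt(r)))    # V[4r]
    t = math.sqrt(1.0 - 8.0 * v1 ** 3)                 # T[r]
    errors.append(abs(v4 - (1.0 - t) / (4.0 * v1)))    # residual of (31)
```

The residuals of (31) are at machine-precision level.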
\textbf{Proposition 2.}\\
Set $T'[4r]=u$, $T'[r]=\nu$, then
\begin{equation}
\frac{u}{\nu}=\frac{(1-t)(3+t)}{8\sqrt{t}(1+t)^{5/3}(3-t)^{1/2}}
\end{equation}
\textbf{Proof.}\\
From (19), (20), (21) and (28) we get
\begin{equation}
V'[4r]=\frac{-2\left\{r,k\right\}}{3\frac{1-k'}{1+k'}\frac{k'(1+k')^2}{2k}}\frac{V[4r]+V[4r]^4}{T[r]}
\end{equation}
If we use the duplication formula (31) we get the result.
\[
\]
\textbf{Evaluations}.\\
1) We can now easily calculate the values of $V'(q)$ from (28) using (26). An example of evaluation uses $$k_1=\frac{1}{\sqrt{2}}$$, $$E(k_1)=\frac{4\pi^{3/2}}{\Gamma(-1/4)^2}+\frac{\Gamma(3/4)^2}{2\sqrt{\pi}}$$ and $$K(k_1)=\frac{8\pi^{3/2}}{\Gamma(-1/4)^2}$$
When $r=1$ we get $$\left\{r,k\right\}=\frac{8\sqrt{2}\Gamma(3/4)^4}{\pi^2}$$ Hence
\begin{equation}
V'(e^{-\pi})=-\frac{64 \left(-26-15 \sqrt{3}+10 \sqrt{3+2 \sqrt{3}}+6 \sqrt{9+6 \sqrt{3}}\right)}{\sqrt{45+26 \sqrt{3}-18 \sqrt{3+2 \sqrt{3}}-10 \sqrt{9+6 \sqrt{3}}}}\frac{e^{\pi } \pi}{\Gamma\left(-\frac{1}{4}\right)^4}
\end{equation}
2) We have $$T_1=T(e^{-\pi\sqrt{3}})=-39+22\sqrt{3}-\frac{2\cdot 6^{2/3}(-123+71\sqrt{3})}{\left(-4725+2728\sqrt{3}-\sqrt{4053-2340\sqrt{3}}\right)^{1/3}}+
$$
$$
+2\cdot6^{1/3}\left(-4725+2728\sqrt{3}-\sqrt{4053-2340\sqrt{3}}\right)^{1/3}$$
and $V_1=V(e^{-\pi\sqrt{3}})=\frac{1}{2}\sqrt[3]{1-T_1^2}$.\\ From tables and (15) we obtain:
$$
\left\{3,k_3\right\}=\frac{192 \sqrt{2} \left(-1+\sqrt{3}\right) \pi ^2}{\Gamma\left(\frac{1}{6}\right)^2 \Gamma\left(\frac{1}{3}\right)^2}$$
We find the value of $V'(e^{-\pi\sqrt{3}})$ in terms of the Gamma function and algebraic numbers:
$$V'(e^{-\pi\sqrt{3}})=\frac{4\sqrt{3}e^{\pi\sqrt{3}}}{3\pi}\frac{V_1+V^4_1}{k_3 \left\{3,k_3\right\} \sqrt{1-8 V^3_1}}$$
\[
\]
\centerline{\bf References}\vskip .2in
\noindent
[1]: M.Abramowitz and I.A.Stegun. 'Handbook of Mathematical Functions'. Dover Publications, New York. 1972.
[2]: C. Adiga, T. Kim. 'On a Continued Fraction of Ramanujan'. Tamsui Oxford Journal of Mathematical Sciences 19(1) (2003) 55-56 Alethia University.
[3]: C. Adiga, T. Kim, M.S. Naika and H.S. Madhusudhan. 'On Ramanujan`s Cubic Continued Fraction and Explicit Evaluations of Theta-Functions'. arXiv:math/0502323v1 [math.NT] 15 Feb 2005.
[4]: G.E.Andrews. 'Number Theory'. Dover Publications, New York. 1994.
[5]: B.C.Berndt. 'Ramanujan`s Notebooks Part I'. Springer Verlag, New York (1985).
[6]: B.C.Berndt. 'Ramanujan`s Notebooks Part II'. Springer Verlag, New York (1989).
[7]: B.C.Berndt. 'Ramanujan`s Notebooks Part III'. Springer Verlag, New York (1991).
[8]: Bruce C. Berndt, Heng Huat Chan and Liang-Cheng Zhang. 'Ramanujan`s class invariants and cubic continued fraction'. Acta Arithmetica LXXIII.1 (1995).
[9]: Heng Huat Chan. 'On Ramanujans Cubic Continued Fraction'. Acta Arithmetica. 73 (1995), 343-355.
[10]: I.S. Gradshteyn and I.M. Ryzhik. 'Table of Integrals, Series and Products'. Academic Press (1980).
[11]: Megadahalli Sidda Naika Mahadeva Naika, Mugur Chin
[12]: Habib Muzaffar and Kenneth S. Williams. 'Evaluation of Complete Elliptic Integrals of The First Kind and Singular Moduli'. Taiwanese Journal of Mathematics. Vol. 10, No. 6, pp, 1633-1660, Dec 2006.
[13]: L. Lorentzen and H. Waadeland. 'Continued Fractions with Applications'. Elsevier Science Publishers B.V., North Holland (1992).
[14]: S.H.Son. 'Some integrals of theta functions in Ramanujan's lost notebook'. Proc. Canad. No. Thy Assoc. No.5 (R.Gupta and K.S.Williams, eds.), Amer. Math. Soc., Providence.
[15]: H.S. Wall. 'Analytic Theory of Continued Fractions'. Chelsea Publishing Company, Bronx, N.Y. 1948.
[16]: E.T. Whittaker and G.N. Watson. 'A Course of Modern Analysis'. Cambridge U.P. (1927).
[17]: I.J. Zucker. 'The summation of series of hyperbolic functions'. SIAM J. Math. Anal. 10.192 (1979).
[18]: Nikos Bagis. 'The complete evaluation of Rogers Ramanujan and other continued fractions with elliptic functions'. arXiv:1008.1304v1
\end{document} |
\begin{document}
\title{\Large \bf Testing spooky action at a distance}
\author{\normalsize D. Salart, A. Baas, C. Branciard, N. Gisin, and H. Zbinden\\
\it \small Group of Applied Physics, University of Geneva, 20, Rue de l'Ecole de M\'edecine, CH-1211 Geneva 4, Switzerland}
\date{\small \today}
\maketitle
\begin{multicols}{2}
{\bf In science, one observes correlations and invents theoretical models that describe them. In all sciences, besides quantum physics, all
correlations are described by either of two mechanisms. Either a first event influences a second one by sending some information
encoded in bosons or molecules or other physical carriers, depending on the particular science. Or the correlated events have some common
causes in their common past. Interestingly, quantum physics predicts an entirely different kind of cause for some correlations, named
entanglement. This new kind of cause reveals itself, e.g., in correlations that violate Bell inequalities (hence cannot be described by
common causes) between space-like separated events (hence cannot be described by classical communication). Einstein branded it as {\it spooky
action at a distance}.
A real {\it spooky action at a distance} would require a faster than light influence defined in some hypothetical universally privileged reference frame.
Here we put stringent experimental bounds on the speed of all such hypothetical influences. We performed a Bell test during
more than 24 hours between two villages separated by 18 km and approximately east-west oriented, with the source located precisely in the
middle. We continuously observed 2-photon interferences well above the Bell inequality threshold. Taking advantage of the Earth's rotation,
the configuration of our experiment allowed us to determine, for any hypothetically privileged frame, a lower bound for the speed of this spooky influence.
For instance, if such a privileged reference frame exists and is such that the Earth's speed in this frame is less than $10^{-3}$ that of the
speed of light, then the speed of this spooky influence would have to exceed that of light by at least 4 orders of magnitude.}
According to quantum theory, quantum correlations violating Bell inequalities merely happen, somehow from outside space-time, in the sense
that there is no story in space-time that can describe their occurrence: there is not an event here that somehow influences another distant
event there. Yet, such a description of correlations, radically different from all those found in any other part of science, should be
thoroughly tested. And indeed, many Bell tests have already been published\cite{Asp1}. Recently, both the locality\cite{LocLoopholeAspect,LocLoopholeGeneva,LocLoopholeInnsbrug}
and the detection\cite{DetLoopholeRowe,DetLoopholeMat} loopholes have been closed in several independent experiments. Still, one could imagine
that there is indeed a first event that influences the second one. However, the speed of this hypothetical influence would have to be defined in
some universal privileged reference frame and be larger than the speed of light; hence Einstein condemned it as {\it spooky action at a distance}.
In 1989, Eberhard noticed that the existence of such a hypothetically privileged reference frame could be experimentally tested\cite{Eber}. The idea
is that the speed of this influence, though greater than the speed of light, is finite. Hence, if in the hypothetically privileged frame both
events are simultaneous, then the signal does not arrive on time and no violation of Bell inequalities should be observed. Note that if both
events are simultaneous in a reference frame, then they are also simultaneous with respect to any reference frame moving in a direction
perpendicular to the line joining the two events. Accordingly, Eberhard proposed\cite{Eber2,Scarani} to perform a Bell test over a long distance oriented
east-west during 12 hours. In such a way, if the events are simultaneous in the Earth reference frame, then they are also simultaneous with respect to
all frames moving in the plane perpendicular to the east-west axis and in 12 hours all possible hypothetically privileged frames are scanned.
Bohm's pilot-wave model of quantum mechanics is an example containing an explicit spooky action at a distance\cite{Bohm I}.
As recognized by Bohm, this requires the assumption that there is a universally privileged frame\cite{Bohm II}. In their book\cite{BohmHiley},
Bohm and Hiley also noticed that if the spooky action at a distance propagates at finite speed, then an experiment like the one presented below
could possibly falsify the pilot-wave model. In this book, the authors stress that the existence of a universally privileged frame would not
contradict relativity.
In 2000, some of us already analyzed a Bell experiment along the
lines presented above\cite{Scarani,2000Gisin,2000Zbinden}. However,
the analysis concerned only two hypothetically privileged reference
frames: since that older experiment did not last long enough and was
not oriented east-west, no other reference frame was analyzed. The
first frame was defined by the cosmic background radiation at around
2.7 K. The second frame we analyzed was the "Swiss Alps reference
frame", i.e. not a universal frame, but merely the frame defined by
the massive environment of the experiment. The assumption that the
privileged frame depends on the experiment's environment leads
naturally to question situations where the massive environments on
both sides of the experiment differ, and this was indeed the main
subject of the experiment in 2000\cite{2000Gisin,2000Zbinden}. In
both of these analyses we termed the speed of the hypothetical supra-luminal
influence the {\it speed of quantum information}, to stress that it
is not a classical signal. We shall keep this terminology, but we
like to emphasize that this is only the speed of a hypothetical
influence and that our result casts very serious doubts on its
existence. Still, it is useful to give names to the objects under
study, even when their existence is hypothetical. For views on the
{\it speed of quantum information}, see\cite{Garisto}.
Before presenting our experiment and results, let us clarify the
principle of our measurements and how one can obtain bounds on this
{\it speed of quantum information} in any reference frame.
In an inertial reference frame centered on the Earth, two events $A$
and $B$ (in our experiment, two single-photon detections) occur at
positions $\vec{r}_A$ and $\vec{r}_B$ at times $t_A$ and $t_B$. Let
us consider another inertial reference frame $F$, the hypothetically
privileged frame, relative to which the Earth frame moves at a speed
$\vec{v}$ (see Figure 1). When correlations violating a Bell
inequality are observed, the {\it speed of quantum information}
$V_{QI}$ in frame $F$ that could cause the correlation is lower
bounded by
\begin{equation}
V_{QI} \geq \frac{||\vec{r'}_B - \vec{r'}_A||}{|t'_B - t'_A|}
\end{equation}
where $(\vec{r'}_A, t'_A)$ and $(\vec{r'}_B, t'_B)$ are the
coordinates of events $A$ and $B$ in frame $F$, obtained from
$(\vec{r}_A, t_A)$ and $(\vec{r}_B, t_B)$ after a Lorentz
transformation. After simplification, one gets
\begin{equation}
\left(\frac{V_{QI}}{c}\right)^2 \geq 1+
\frac{(1-\beta^2)(1-\rho^2)}{(\rho+\beta_{\parallel})^2}
\label{eq_VQI}
\end{equation}
where $\beta = \frac{v}{c}$ is the relative speed of the Earth frame
in frame $F$ ($c$ being the speed of light),
$\beta_{\parallel}=\frac{v_{\parallel}}{c}$, with ${v}_{\parallel}$
the component of $\vec{v}$ parallel to the $AB$ axis, and
$\rho=\frac{c~t_{AB}}{r_{AB}}$ quantifies the alignment of the two
events in the Earth frame (with $t_{AB} = t_B-t_A$ and $r_{AB} =
|\vec{r}_B - \vec{r}_A|$). In the following, we will consider
space-like separated events, for which $|\rho| < 1$: the bound
(\ref{eq_VQI}) on $V_{QI}$ will then be larger than $c$. For a given
privileged frame $F$, this bound depends on the orientation of the
$AB$ axis through $\beta_{\parallel}$ and on the alignment $\rho$.
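The simplification leading to (2) can be checked directly by carrying out the Lorentz boost on sample coordinates. The following sketch (our addition, with sample numbers; not the experiment's measured values) boosts two space-like separated Earth-frame events into a frame $F$ in which the Earth moves at $\vec v$, and compares the resulting speed ratio with the closed form (2):

```python
import math

C = 299792458.0   # speed of light, m/s

def vqi_ratio(rA, tA, rB, tB, v):
    """Boost Earth-frame events into F and return ||r'_B - r'_A|| / (c |t'_B - t'_A|)."""
    beta = [vi / C for vi in v]
    b2 = sum(bi * bi for bi in beta)
    g = 1.0 / math.sqrt(1.0 - b2)
    def boost(r, t):
        bdotr = sum(bi * ri for bi, ri in zip(beta, r))
        tp = g * (t + bdotr / C)
        rp = [ri + ((g - 1.0) * bdotr / b2 + g * C * t) * bi
              for ri, bi in zip(r, beta)]
        return rp, tp
    rpA, tpA = boost(rA, tA)
    rpB, tpB = boost(rB, tB)
    d = math.sqrt(sum((a - b) ** 2 for a, b in zip(rpB, rpA)))
    return d / (C * abs(tpB - tpA))

# sample events 18 km apart along the x (AB) axis, and a sample frame velocity
rA, tA = (0.0, 0.0, 0.0), 0.0
rB, tB = (18000.0, 0.0, 0.0), 1.0e-5
v = (1.0e5, 2.0e5, -5.0e4)     # m/s, |v| ~ 1e-3 c

beta = math.sqrt(sum((vi / C) ** 2 for vi in v))
beta_par = v[0] / C                       # component of v along the AB axis
rho = C * (tB - tA) / 18000.0             # alignment parameter, |rho| < 1
closed = math.sqrt(1.0 + (1.0 - beta ** 2) * (1.0 - rho ** 2)
                   / (rho + beta_par) ** 2)
ratio = vqi_ratio(rA, tA, rB, tB, v)
```

The explicit boost and the closed form (2) agree to high precision.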
To obtain a good lower bound for $V_{QI}$, one should upper bound
the term $(\rho+\beta_{\parallel})^2$ by the smallest possible
value, during a period of time $T$ needed to observe a Bell
violation (which, in our experiment, will be the integration time of
a 2-photon interference fringe).
To get an intuition, consider first the simple case where $\rho = 0$
(the two events are perfectly simultaneous in the Earth frame), and
the $AB$ axis is perfectly aligned in the east-west direction. Then,
when the Earth rotates, there will be a moment $t_0$ when the
east-west direction is perpendicular to $\vec v$, i.e.
$\beta_{\parallel}(t_0) = 0$.
\noindent
During a small time interval around
$t_0$, one can bound $|\beta_{\parallel}(t)|$ by a small value, and
thus obtain a high lower bound for $V_{QI}$.
In principle, the alignment $\rho$ could actually be optimized for
each privileged frame that one wishes to test, so as to decrease the
bound that one can put on $(\rho+\beta_{\parallel})^2$ during a time
interval $T$ (and increase the upper term $(1-\rho^2)$ at the same
time). In our experiment, since we want to scan all possible frames,
we will not optimize $\rho$ for each frame; instead, we shall align
the detection events such that $|\rho| \leq \bar{\rho} \ll 1$, where
$\bar{\rho}$ is our experimental precision on the alignment $\rho$.
We shall then use the fact that
$\frac{1-\rho^2}{(\rho+\beta_{\parallel})^2} \geq
\frac{1-\bar{\rho}^2}{(\bar{\rho}+|\beta_{\parallel}|)^2}$, to get
the bound
\begin{equation} \left(\frac{V_{QI}}{c}\right)^2 \geq 1+
\frac{(1-\beta^2)(1-\bar{\rho}^2)}{(\bar{\rho}+|\beta_{\parallel}|)^2}
\label{eq_VQI_bis}
\end{equation}
The problem reduces to bounding $|\beta_{\parallel}|$ directly.
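To illustrate the orders of magnitude in (3), a small sketch (our addition; the numbers below are illustrative only, not the measured values of this experiment): with $\beta=10^{-3}$ and both $\bar\rho$ and $|\beta_\parallel|$ of order $5\times10^{-5}$, the bound already exceeds $c$ by four orders of magnitude.

```python
import math

def vqi_lower_bound(beta, beta_par, rho_bar):
    # right-hand side of (3), returned as V_QI / c
    return math.sqrt(1.0 + (1.0 - beta ** 2) * (1.0 - rho_bar ** 2)
                     / (rho_bar + abs(beta_par)) ** 2)

# illustrative numbers only: Earth speed 1e-3 c in F,
# |beta_par| and the alignment precision both bounded by 5e-5
bound = vqi_lower_bound(1e-3, 5e-5, 5e-5)
```

Roughly, $V_{QI}/c \gtrsim 1/(\bar\rho + |\beta_\parallel|) \approx 10^4$ for these inputs.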
\begin{center}
\includegraphics[width=1.0\linewidth]{Earth5b.png}
\end{center}
{\small {\bf Figure 1 $\mid$ Reference frames.} The Earth frame
moves with respect to a hypothetically privileged reference frame
$F$ at a speed $\vec{v}$. The zenith angle $\chi$ between $\vec{v}$
and the $z$ axis can have values between $0^\circ$ and $180^\circ$.
The $AB$ axis forms an angle $\alpha$ with the equatorial ($xy$) plane. $\omega$ is the angular velocity of the Earth.}\\
In the configuration of our experiment, the $AB$ axis is almost, but
not perfectly, oriented along the east-west direction. Consequently,
the component $\beta_{\parallel}(t)$ has a 24-hour period, and
geometric considerations show that it can be written as (see the
Methods section)
\begin{equation}
\beta_{\parallel}(t) = \beta \cos \chi \sin \alpha + \beta \sin \chi
\cos \alpha \cos \omega t \ ,
\end{equation}
where $\chi$ is the zenith angle of $\vec v$, $\alpha$ is the angle
between the $AB$ axis and the equatorial ($xy$) plane (see Figure
1), and $\omega$ is the angular velocity of the Earth.
As we show in the Methods section, in order to upper bound
$|\beta_{\parallel}|$ during a period of time $T$, one can consider
two cases, depending whether $\vec v$ points close to a pole or not:
\begin{eqnarray}
(i) && C_T \, |\tan \chi| > |\tan \alpha| \label{case_i} \\
(ii) && C_T \, |\tan \chi| \leq |\tan \alpha| \ \label{case_ii}
\end{eqnarray}
with $C_T = \cos^2 \frac{\omega T}{4} \simeq 1$ when $\omega T$ is
small. For each case, there exists a time interval of length $T$,
during which $|\beta_{\parallel}(t)|$ is respectively upper-bounded
by
\begin{eqnarray}
(i) && \!\!\!\!\!\! |\beta| \ \sqrt{\sin^2 \chi
\cos^2 \alpha - \cos^2 \chi \sin^2 \alpha} \ \frac{\omega T}{2} \\
(ii) && \!\!\!\!\!\! |\beta| \ \Big( |\cos \chi \sin \alpha| - |\sin
\chi \cos \alpha| \cos \frac{\omega T}{2} \Big) \label{bound_ii}
\end{eqnarray}
These bounds, together with equation (\ref{eq_VQI_bis}), provide the
desired lower bound for $V_{QI}$.
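In case (i), the bound arises from linearizing (4) around the zero of $\beta_\parallel$. The sketch below (our addition; the parameters are hypothetical, chosen only for illustration) compares the linearized bound with the exact maximum of $|\beta_\parallel(t)|$ over a window of length $T$ centered at that zero; the two agree to within about 0.1% for these values:

```python
import math

# hypothetical illustrative parameters (not the experiment's values)
beta  = 1e-3                      # Earth speed in F, in units of c
chi   = math.radians(80.0)        # zenith angle of v
alpha = math.radians(5.0)         # angle of the AB axis with the equatorial plane
omega = 2.0 * math.pi / 86164.0   # Earth's sidereal angular velocity, rad/s
T     = 360.0                     # fringe period, s

def beta_par(t):
    # equation (4)
    return beta * (math.cos(chi) * math.sin(alpha)
                   + math.sin(chi) * math.cos(alpha) * math.cos(omega * t))

# case (i) holds: C_T tan(chi) > tan(alpha), so beta_par vanishes at some t0
assert math.cos(omega * T / 4.0) ** 2 * math.tan(chi) > math.tan(alpha)
t0 = math.acos(-math.tan(alpha) / math.tan(chi)) / omega

# linearized bound from case (i)
bound = beta * math.sqrt(math.sin(chi) ** 2 * math.cos(alpha) ** 2
                         - math.cos(chi) ** 2 * math.sin(alpha) ** 2) * omega * T / 2.0

# exact maximum of |beta_par| over a window of length T centered at t0
peak = max(abs(beta_par(t0 + (i / 1000.0 - 0.5) * T)) for i in range(1001))
```

This confirms that the linearization error is negligible for windows of a few minutes.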
We now describe our experiment. In essence it is a large Franson interferometer\cite{Franson}. A source situated in our laboratory in down-town
Geneva emits entangled photon pairs using the standard parametric down-conversion process in a nonlinear crystal (here a cw laser pumps a waveguide in a
Periodically Poled Lithium Niobate (PPLN) crystal)\cite{SebEPJD}. Using fiber Bragg gratings and optical circulators, each pair is deterministically split
and one photon is sent via the Swisscom fiber optic network to Satigny, a village west of Geneva, while the other photon is sent to Jussy, another
village east of Geneva. The two receiving stations, located in those two villages, are separated by a direct distance of 18.0 km, see Figure 2. We use
energy-time entanglement, a form of entanglement well suited for quantum communication in standard telecom fibers\cite{Rob}. At each receiving station,
the photons pass through identically unbalanced fiber optic Michelson interferometers. The imbalance ($\approx 25$ cm) is larger than the single-photon
coherence length ($\approx 2.5$ mm), hence avoiding any single-photon interference, but much smaller than the pump laser coherence length ($>20$ m).
Accordingly, when a photon pair is detected simultaneously in Satigny and Jussy, there is no information about which path the photons took in their
interferometer, the long arm or the short arm. But since both photons were also emitted simultaneously, both took the same path: both long or both short.
This indistinguishability leads, as always in quantum physics, to interference between the long-long and short-short paths. Continuously scanning the
phase in one interferometer, at Jussy, while keeping the other one stable, produces a sinusoidal oscillation of the correlation between the photon
detections at Satigny and Jussy.
During each run of the experiment we continuously monitored both the single count rates (as a check of the stability of the entire setup) and the
coincidence count rate. The average coincidence rate was 33 coinc./min. and the number of accidental coincidences 2.5 coinc./min. We are primarily
interested in the coincidence rate as its oscillations follow the scanned phase and should have a visibility large enough to exclude any common
cause explanation. The correlations are thus either due to entanglement, as predicted by quantum physics, or due to some hypothetical
{\it spooky action at a distance} whose speed we wish to lower bound.
The phases are controlled by the temperature of the fiber-based interferometers.
To scan the temperature in one of the interferometers, a voltage ramp is applied to its temperature controller. The temperature
decreases steadily for several hours, and the interferometer is then heated quickly at the end of the ramp. This process was repeated for several days.
The end of the cooling ramp stops the phase scan for several minutes, making it impossible to obtain arbitrarily long measurements of uninterrupted
fringes.
Figure 3 presents a measurement run over 4 hours with a fringe period of $T$=900 s and a continuous sinusoidal fit. This result is remarkable
because the period of the interference fringes remains stable for a very long time, which allows us to fit the entire measurement with a single continuous
sinusoid and obtain a high visibility value.
\end{multicols}
\begin{center}
\includegraphics[width=0.97\linewidth]{Mapwhite.PNG}
\end{center}
{\small {\bf Figure 2 $\mid$ Experimental setup.} The source sends pairs of photons from Geneva to two receiving stations through the Swisscom fiber
optic network. The stations are situated in two villages (Satigny and Jussy) in the Geneva region, at 8.2 and 10.7 km, respectively. The direct
distance between them is 18.0 km. At each receiving station, the photons pass through identically unbalanced Michelson interferometers and are
detected by a single-photon InGaAs APD (id Quantique, id201). The length of the fiber going to Jussy is 17.5 km. The fiber going to Satigny was only
13.4 km long so we added a fiber coil of 4.1 km (represented as a loop) to equalize the length in both fibers. Having fibers with the same length allows
us to satisfy the condition of good alignment ($\rho\ll1$).}
\begin{multicols}{2}
The bound for $V_{QI}$ is higher for shorter fringe periods $T$. To reduce the time $T$, one should increase the rate of the
phase scan (more degrees per unit of time). Unfortunately, with a higher rate, the number of coincidences per minute diminishes, hence, the slope
of the temperature ramp was adjusted so as to obtain a compromise period of $T=360$ seconds.
\begin{center}
\includegraphics[width=1.0\linewidth]{Longfit2.png}
\end{center}
{\small {\bf Figure 3 $\mid$ } Interference fringes with a period $T$=900 s obtained during a 4 hours measurement fitted with a sinusoidal function
yielding a visibility of $V=(87.6\pm 1.1)\%$. If we subtract the accidental coincidences, the visibility climbs to $V_{net}=(94.1\pm 1.0)\%$.}\\
Interference fringes were recorded in many runs usually lasting several hours, up to 15 hours for the longest run. The limitations for the
length of these measurements were the end of the cooling ramp and small instabilities in the setup that produced short interruptions in the scan.
Juxtaposing several of these measurement runs obtained over several weeks, we covered a 24-hour period with interference fringe periods of $T$=360 s
with visibilities well above the threshold ($V=\frac{1}{\sqrt{2}}$) set by the CHSH Bell inequality\cite{CHSH}.
Since long measurements of fringes with short fringe periods $T$ are difficult to fit continuously, we fit the data over a time window corresponding to
one and a half fringes and scanned this time window, as explained in the Supplementary Information. The results are presented in Figure 4.
\begin{center}
\includegraphics[width=1.0\linewidth]{VisFits.png}
\end{center}
{\small {\bf Figure 4 $\mid$ }
Visibility fits for several uninterrupted runs obtained at different times of the day. Together these runs cover
each moment of the day at least twice. Visibility values remain above the threshold (black line, $V=\frac{1}{\sqrt{2}}$) set by
the CHSH Bell inequality at all times.}\\
The violation of the Bell inequality at all times of the day allows
one to calculate the lower bound for the {\it speed of quantum
information} for any reference frame. This bound depends on the
precision of the alignment in the actual experiment, see eq. (3).
Since we wish to have a good alignment ($\rho\ll1$),
the difference in the arrival times of the single-photons ($t_{AB}$)
should be minimized. First, the length of each fiber between the
source and the single-photon detectors was measured. The long fibers
(of several km) were measured using a single-photon Optical Time
Domain Reflectometer ($\nu$-OTDR)\cite{nu-OTDR} and the short fibers
(less than 500 m) were measured with an Optical Frequency Domain
Reflectometer (OFDR)\cite{OFDR}. The fiber on the Satigny side was
found to be shorter by 4.1 km. We added a fiber coil to the short
side (represented as a loop in Figure 2), reducing this difference
to below one centimeter, with an uncertainty of 1 cm, which
corresponds to a time of 49 ps. To avoid controversy over where
exactly the measurement takes place, we adjusted the fiber lengths
from the source to the fiber couplers inside each interferometer and
also to the photodiodes (where the photons are detected). Hence the
configuration is totally symmetric. Next, we considered the
chromatic dispersion in the fibers. Chromatic dispersion adds an
uncertainty in the arrival times, and because the entangled photons
are anticorrelated in energy, their time delays are always opposite to
each other, thus always increasing this uncertainty. Chromatic
dispersion was measured to be 18.2
$\frac{\textrm{ps}}{\textrm{nm}\cdot \textrm{km}}$ using a Chromatic
Dispersion Analyzer\cite{CDA}. For a spectral half width of
$\Delta\lambda = 0.5$ nm and two times the distance of 17.55 km,
this is equivalent to 319 ps of uncertainty. Thus, the overall
uncertainty in the relative lengths of the fibers is $t_{AB}=323$
ps. This, together with the direct distance between the receivers,
$r_{AB}=18.0$ km, allows one to estimate the precision of our
alignment: $|\rho| \leq \bar{\rho} = 5.4\cdot10^{-6}\ll1$.
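The arithmetic of this paragraph can be checked in a few lines. One assumption is ours: the text quotes only the 323 ps total, and combining the 49 ps length uncertainty with the 319 ps dispersion uncertainty in quadrature reproduces it.

```python
# Sanity check of the alignment-precision estimate (values from the text).
# Assumption: length and dispersion uncertainties add in quadrature,
# which reproduces the quoted 323 ps total.
import math

c = 299_792_458.0                 # speed of light, m/s
D = 18.2                          # chromatic dispersion, ps/(nm km)
t_disp = D * 0.5 * (2 * 17.55)    # half width 0.5 nm, 2 x 17.55 km -> ~319 ps
t_len = 49.0                      # 1 cm length uncertainty, ps
t_AB = math.hypot(t_len, t_disp)  # ~323 ps
rho = c * t_AB * 1e-12 / 18.0e3   # direct distance r_AB = 18.0 km
print(round(t_disp), round(t_AB), f"{rho:.1e}")  # 319 323 5.4e-06
```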
Finally, we use equation $(3)$ to calculate a lower bound for
$V_{QI}$. We use the value of $\bar{\rho}$ just calculated, the
period of time $T=360$ s needed to observe a Bell violation
(corresponding to the interference fringe period) and the angle
formed by the axis between the two receiving stations (axis $AB$)
and the east-west direction, $\alpha=5.8^\circ$. The results are
shown in Figures 5a and 5b, for certain hypothetically privileged
frames. In Figure 5a, we scan all possible directions $\chi$,
but set the Earth's relative speed at $\beta=10^{-3}$.
A lower bound for $V_{QI}$ greater than $10\,000$
times the speed of light is found for any such reference frame. The
non-perfect east-west orientation ($\alpha\ne0$) is responsible for
the minimum values of the bound at angles $\chi$ near $0^\circ$ and
$180^\circ$. For smaller Earth speeds, the bound on $V_{QI}$ is even
larger. On the other hand, if $\beta$ is very large, then the
corresponding bound on $V_{QI}$ is less stringent. To illustrate
this, in Figure 5b we set $\chi=90^\circ$, i.e. $\vec v$ in the
equatorial plane, and scan the velocity $\beta$.
Indeed, when $\beta \simeq 1$, the bound drops rapidly.
Recall, however, that for large values of $\beta$ one could, in principle,
optimize the alignment $\rho$ in the experiment, so as to get a better
bound on $V_{QI}$. For small values of $\beta$, our bound is limited by
the inverse of our precision of alignment $\bar{\rho}$.
\end{multicols}
\begin{center}
\includegraphics[width=0.49\linewidth]{Borne-chi.png}
\includegraphics[width=0.49\linewidth]{Borne-beta.png}
\end{center}
{\small {\bf Figure 5 $\mid$ Lower bounds for the {\it speed of
quantum information}. a.} Bound obtained for $\frac{V_{QI}}{c}$ as a
function of the angle $\chi$, when $\beta=10^{-3}$. For angles $\chi
\lesssim \alpha$ or $\chi \gtrsim 180^\circ-\alpha$, the bound is
obtained by considering case $(ii)$ (see text), while for angles
$\alpha \lesssim \chi \lesssim 180^\circ-\alpha$, the bound is
obtained by considering case $(i)$. The bound at $\chi=90^\circ$ is
$V_{QI} \geq 54000 c$. {\bf b.} Bound obtained for
$\frac{V_{QI}}{c}$ as a
function of the speed $\beta$, when $\chi=90^\circ$. When $\beta \rightarrow 0$, our bound on $\frac{V_{QI}}{c} \rightarrow 1/\bar{\rho}$.}\\
\begin{multicols}{2}
In conclusion, we performed a Bell experiment using entangled photons between two villages separated by 18 km and approximately east-west
oriented, with the source located precisely in the middle. The rotation of the Earth allowed us to test all possible hypothetically privileged
frames in a period of 24 hours. Two-photon interference fringes with visibilities well above the threshold set by the Bell inequality
were observed at all times of the day.
From these observations we conclude that the nonlocal correlations observed here and in previous experiments\cite{Asp1} are indeed truly nonlocal.
Indeed, to maintain an explanation based on {\it spooky action at a distance} one would have to assume that the spooky action propagates at speeds
even greater than the bounds obtained in our experiment.
\begin{enumerate}
{\small {\itemsep=-0.1cm
\bibitem{Asp1} Aspect A. Bell's inequality test: more ideal than ever. {\it Nature}, {\bf 398}, 189-190 (1999).
\bibitem{LocLoopholeAspect} Aspect A. {\it et al.} Experimental Realization of Einstein-Podolsky-Rosen-Bohm Gedankenexperiment: A New Violation of Bell's Inequalities. {\it Phys. Rev. Lett.} {\bf 49}, 91-94 (1982).
\bibitem{LocLoopholeGeneva} Tittel W., Brendel J., Zbinden H., and Gisin N. Violation of Bell Inequalities by Photons More Than 10 km Apart. {\it Phys. Rev. Lett.} {\bf 81}, 3563-3566 (1998).
\bibitem{LocLoopholeInnsbrug} Weihs G., Jennewein T., Simon C., Weinfurter H., and Zeilinger A. Violation of Bell's Inequality under Strict Einstein Locality Conditions. {\it Phys. Rev. Lett.} {\bf 81}, 5039-5043 (1998).
\bibitem{DetLoopholeRowe} Rowe M. A. {\it et al.} Experimental violation of a Bell's inequality with efficient detection. {\it Nature} {\bf 409}, 791-794 (2001).
\bibitem{DetLoopholeMat} Matsukevich D. N. {\it et al.} Bell inequality violation with two remote atomic qubits. arXiv:0801.2184.
\bibitem{Eber} Eberhard Ph.H. {\it Quantum theory and pictures of reality.} ed. W. Schommers, 169-216 (Springer 1989).
\bibitem{Eber2} Eberhard Ph.H. private communication.
\bibitem{Scarani} Scarani V. {\it et al.} The speed of quantum information and the preferred frame: analysis of experimental data. {\it Phys. Lett. A} {\bf 276}, 1-7 (2000).
\bibitem{Bohm I} Bohm D. A Suggested Interpretation of the Quantum Theory in Terms of "Hidden" Variables. I {\it Phys. Rev.} {\bf 85}, 166 (1952).
\bibitem{Bohm II} Bohm D. A Suggested Interpretation of the Quantum Theory in Terms of "Hidden" Variables. II {\it Phys. Rev.} {\bf 85}, 180 (1952).
\bibitem{BohmHiley} Bohm D. and Hiley B.J. {\it The Undivided Universe} Routledge, 293 (1993).
\bibitem{2000Gisin} Gisin N. {\it et al.} Optical tests of quantum nonlocality: from EPR-Bell tests towards experiments with moving observers. {\it Annal. Phys.} 9, 831-841 (2000).
\bibitem{2000Zbinden} Zbinden H. {\it et al.} Experimental test of nonlocal quantum correlation in relativistic configurations. {\it Phys. Rev. A} {\bf 63}, 022111/1-10 (2001).
\bibitem{Garisto} Garisto R. What is the speed of quantum information? arXiv:quant-ph/0212078.
\bibitem{Franson} Franson J. D. Bell inequality for position and time. {\it Phys. Rev. Lett.} {\bf 62}, 2205-2208 (1989).
\bibitem{SebEPJD} Tanzilli S. {\it et al.} PPLN waveguide for quantum communication. {\it Eur. Phys. J. D} {\bf 18}, 155-160 (2002).
\bibitem{Rob} Thew R. {\it et al.} Experimental investigation of the robustness of partially entangled qubits over 11 km. {\it Phys. Rev. A} {\bf 66}, 062304/1-5 (2002).
\bibitem{CHSH} Clauser J. F., Horne M. A., Shimony A., and Holt R. A. Proposed Experiment to Test Local Hidden-Variable Theories. {\it Phys. Rev. Lett.} {\bf 23}, 880-884 (1969).
\bibitem{nu-OTDR} Scholder F., Gautier J.-D., Wegm\"uller M., and Gisin N. Long-distance OTDR using photon counting and large detection gates at telecom wavelength. {\it Opt. Comm.} {\bf 213}, 57-61 (2002).
\bibitem{OFDR} Passy R. {\it et al.} Experimental and theoretical investigations of coherent OFDR with semiconductor laser sources. {\it J. Lightwave Tech.} {\bf 12}, 1622-1630 (1994).
\bibitem{CDA} Brendel J., Gisin N., and Zbinden H. Optical Fiber Measurement Conference, OFMC'99, pp 12-17, Eds Ch. Boisrobert and E. Tanguy (Universit\'e de Nantes), Nantes, September 1999.
}}
\end{enumerate}
{\small {\bf Acknowledgements} We acknowledge technical support by J-D. Gautier and C. Barreiro. The access to the telecommunication network
was provided by Swisscom. This work was supported by the Swiss NCCR Quantum Photonics and the EU project QAP. Credit for the image of the Earth
in figure 1 goes to NASA Goddard Space Flight Center Image by Reto St\"ockli.}
\end{multicols}
\end{document}
\begin{document}
\title{Identifying nonclassicality from experimental data using artificial neural networks}
\author{Valentin Gebhart}
\email{[email protected]}
\affiliation{QSTAR, INO-CNR, and LENS, Largo Enrico Fermi 2, I-50125 Firenze, Italy}
\affiliation{Universit\`a degli Studi di Napoli ``Federico II'', Via Cinthia 21, I-80126 Napoli, Italy}
\author{Martin Bohmann}
\email{[email protected]}
\affiliation{Institute for Quantum Optics and Quantum Information - IQOQI Vienna, Austrian Academy of Sciences, Boltzmanngasse 3, 1090 Vienna, Austria}
\author{Karsten Weiher}
\affiliation{Institut f\"ur Physik, Universit\"at Rostock, D-18051 Rostock, Germany}
\author{Nicola Biagi}
\affiliation{Istituto Nazionale di Ottica (CNR-INO), L.go E. Fermi 6, 50125 Florence, Italy}
\affiliation{LENS and Department of Physics $\&$ Astronomy, University of Firenze, 50019 Sesto Fiorentino, Florence, Italy}
\author{Alessandro Zavatta}
\affiliation{Istituto Nazionale di Ottica (CNR-INO), L.go E. Fermi 6, 50125 Florence, Italy}
\affiliation{LENS and Department of Physics $\&$ Astronomy, University of Firenze, 50019 Sesto Fiorentino, Florence, Italy}
\author{Marco Bellini}
\affiliation{Istituto Nazionale di Ottica (CNR-INO), L.go E. Fermi 6, 50125 Florence, Italy}
\affiliation{LENS and Department of Physics $\&$ Astronomy, University of Firenze, 50019 Sesto Fiorentino, Florence, Italy}
\author{Elizabeth Agudelo}
\email{[email protected]}
\affiliation{Institute for Quantum Optics and Quantum Information - IQOQI Vienna, Austrian Academy of Sciences, Boltzmanngasse 3, 1090 Vienna, Austria}
\begin{abstract}
The fast and accessible verification of nonclassical resources is an indispensable step toward a broad utilization of continuous-variable quantum technologies.
Here, we use machine learning methods for the identification of nonclassicality of quantum states of light by processing experimental data obtained via homodyne detection.
For this purpose, we train an artificial neural network to classify classical and nonclassical states from their quadrature-measurement distributions.
We demonstrate that the network is able to correctly identify classical and nonclassical features from real experimental quadrature data for different states of light.
Furthermore, we show that nonclassicality of some states that were not used in the training phase is also recognized.
Circumventing the requirement of the large sample sizes needed to perform homodyne tomography, our approach presents a promising alternative for the identification of nonclassicality for small sample sizes, indicating applicability for fast sorting or direct monitoring of experimental data.
\end{abstract}
\maketitle
\section{Introduction}
Quantum technologies promise various advantages over classical technologies.
By employing different features of quantum systems that are not present in classical systems, one can, e.g., perform more precise measurements, speed up computations, or share information in a more secure way.
These nonclassical properties create possibilities to optimally exploit physical systems for many technological challenges.
Light fields, described as continuous-variable systems, play a key role for the transmission and manipulation of quantum information \cite{braunstein2005}.
Due to their infinite dimensions and an accessible control by means of linear optical elements and homodyne detection, they are widely considered for quantum technological applications.
In the case of single-mode continuous-variable quantum systems, the central quantum resource is nonclassicality \cite{streltsov_2017,sperling_2018}.
Directly related to the negativities \cite{titulaer_1965,mandel_1986} of the Glauber-Sudarshan $P$ representation of the quantum state \cite{glauber_1963,sudarshan_1963}, nonclassicality manifests itself in different observable characteristics such as photon antibunching \cite{Carmichael_1976,Kimble_1976,kimble_1977}, sub-Poissonian photon-number statistics \cite{mandel_1979,Zou_1990}, and quadrature squeezing \cite{Yuen_1976,Walls_1983,Caves_1985, Loudon_1987,Dodonov_2002}, and can be transformed into other quantum resources such as entanglement \cite{vogel_2014,killoran_2016}.
The fundamental nature of nonclassicality is exploited for the investigation of the roots of quantum phenomena and several quantum technological tasks such as, e.g., precision measurements.
Due to its crucial importance for quantum technologies, a fast and reliable identification of nonclassicality from experimental observations of the quantum state represents an unavoidable step toward a practical usage of such a resource for quantum technologies.
In continuous-variable systems, one of the most common measurement methods is homodyne detection \cite{welsch_1999}.
Advanced state tomography techniques based on this type of measurements have been developed \cite{paris_2004,lvovsky_2009}.
However, nonclassicality certification based on homodyne tomography usually requires many different quadrature measurements and involved analysis tools.
A different approach is nonclassicality certification via negativities of reconstructed quasiprobabilities \cite{cahill_1969b} (particularly, the Glauber-Sudarshan P function \cite{Kiesel_2008} and the Wigner function \cite{smithey_1993, Dunn_1995, leibfried_1996, deleglise_2008}).
Methods that involve regularizations of quasiprobabilities have been implemented for the single-mode and multimode scenarios \cite{kiesel_2010, agudelo_2013}, and more recently, phase-space inequalities have been proposed and tested experimentally \cite{bohmann_2020, bohmann_2020b, biagi_2020}.
Finally, a direct nonclassicality estimation without the need for quantum state tomography was proposed in Ref. \cite{mari_2011}, where the nonclassicality of phase-randomized states was classified via semidefinite programming.
In all of the above approaches, to guarantee the detection of nonclassicality with high statistical significance, extensive measurements must be performed (using different measurement settings or sampling different moments), after which advanced postprocessing is required (estimation of pattern functions, reconstruction of quasiprobabilities, and semidefinite programming, among others).
Consequently, these methods are often complex and time consuming.
A direct access to nonclassicality identifiers from unprocessed and finite homodyne-detection data is therefore desirable.
In recent years, the problem of the classification of unstructured and complex data has been increasingly addressed with the help of machine learning (ML) techniques \cite{nielsen_2015}.
In the quantum domain, a wide range of challenges was tackled using various different forms of ML, see, e.g., \cite{hentschel_2010,wiebe_2014,magesan_2015,krenn_2016,van_nieuwenburg_2017,carleo_2017,torlai_2018,lumino_2018,bukov_2018,fosel_2018,canabarro_2019,cimini_2019,nautrup_2019,agresti_2019,gebhart_2020,cimini_2020,tiunov_2020,nolan_2020,dunjko_2018,ahmed_2020} for a review.
ML tools have been applied to the identification of nonclassicality \cite{cimini_2020,gebhart_2020}.
In Ref. \cite{gebhart_2020}, neural networks (NNs) were trained to identify nonclassicality from simulated data of multiplexed click-counting detection schemes, and in Ref. \cite{cimini_2020}, the networks were trained to detect the negativity of the multimode Wigner function using results from multimode homodyne detection measurements.
Also, ML in the form of restricted Boltzmann machines has been used to perform homodyne tomography in Ref. \cite{tiunov_2020}.
In this paper, we use ML techniques to identify nonclassicality of single-mode states based on a finite number of quadrature measurements recorded via balanced homodyne detection.
For this purpose, we employ a dense artificial NN and train it with supervised learning of simulated homodyne detection data from several noisy classical and nonclassical states.
We demonstrate the successful performance of the NN nonclassicality prediction on real experimental data and compare the results with established nonclassicality identification methods.
Furthermore, we test the performance of the network for experimentally generated states which were not used in the training procedure and show that the NN can identify different nonclassical features at once.
We conclude that the ML approach offers an accessible alternative for the classification of single-mode nonclassicality, and, particularly, due to its performance on small sample sizes, the presented approach constitutes a powerful tool for data pre-selecting, sorting, and on-site real-time monitoring of experiments.
Our result represents an approach to training NNs for identifying nonclassicality of single-mode phase-sensitive states, here measured by homodyne detection.
The paper is structured as follows.
In Sec. \ref{sec:2}, we briefly recall the technique of single-mode balanced homodyne detection.
In Sec. \ref{sec:3}, we describe in detail the training of the NN and the resulting nonclassicality identifier.
In Sec. \ref{sec:4}, we apply the NN to experimental homodyne measurement data and then analyze its performance on untrained data in Sec. \ref{sec:5}.
We summarize and conclude in Sec. \ref{sec:6}.
\section{Balanced homodyne measurement and nonclassical states}\label{sec:2}
Any direct experimental investigation of light is based on photodetection.
Depending on which information about the quantum statistics of the measured light is required, different measurement schemes need to be implemented.
For example, photon-counting measurements are not sensitive to the phase of the sensed field.
To get information about the phase, interferometric methods have to be applied.
In these methods, the field is mixed with a reference beam, the so-called local oscillator (LO).
The mixing takes place just before intensity measurements~\cite{welsch_1999, lvovsky_2009}.
The scheme of balanced homodyne detection is shown in Fig. \ref{fig:bhd}.
It consists of the signal field $\hat\rho$, the LO, a 50:50 beam splitter (BS), two proportional photodetectors, and the electronics that subtract and amplify the resulting photocurrents.
Homodyning with an intense coherent LO gives the phase sensitivity necessary to measure the quadrature variances
\cite{Carmichael_1987,Braunstein_1990, Vogel_1993}.
\begin{figure}
\caption{
Homodyne detection scheme.
The signal field $\hat\rho$ and the reference coherent beam (LO) are mixed using a 50:50 beam splitter (BS) before measuring the light intensity.
}
\label{fig:bhd}
\end{figure}
This kind of interferometric approach is necessary for the reconstruction of the quasiprobabilities of bosonic states.
In principle, all normally ordered moments can be determined from this measurement scheme, including the ones which contain different numbers of creation and annihilation operators.
Thus, homodyne detection drastically enlarges our measuring capabilities in a simple way.
The key for the quasiprobability estimation is to perform measurements for a large set of quadrature phases, which ultimately leads to a proper state reconstruction.
Balanced homodyne detection and the subsequent reconstruction of the Wigner function have become a standard measuring technique in quantum systems such as, e.g., quantum light, molecules, and trapped atoms \cite{smithey_1993, Dunn_1995, leibfried_1996, deleglise_2008}.
Although experimentally accessible, phase-space function reconstructions and moment-based nonclassicality criteria require significant amounts of measurement data, computational power, and postprocessing time.
Here, we propose a shortcut to this process.
Using NNs, we can do an on-the-fly nonclassicality identification with few measurements.
\section{Training the NN}\label{sec:3}
\subsection{Setup of the network}
The input vector of the network consists of a normalized histogram (relative frequencies) of homodyne-detection data which is collected along a fixed phase setting.
To generate the histogram from simulated or experimentally generated data (produced from quadrature-measurement outcomes $x$), we bin the data into $160$ equally sized intervals which cover the interval $[ -8,8]$ \cite{Note1}.
Since the histogram is normalized, input vectors constructed from arbitrary numbers of detection events can be used for the same network.
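The preprocessing just described can be sketched as follows (a minimal sketch; the function name and the toy data are ours):

```python
import numpy as np

def quadratures_to_input(x, n_bins=160, x_range=(-8.0, 8.0)):
    """Bin quadrature outcomes into the normalized histogram
    (relative frequencies) used as the network input."""
    counts, _ = np.histogram(x, bins=n_bins, range=x_range)
    return counts / counts.sum()

# Toy example: 16 000 vacuum-like outcomes (Var = 1/4 in this convention)
rng = np.random.default_rng(0)
v = quadratures_to_input(rng.normal(0.0, 0.5, size=16_000))
print(v.shape, round(v.sum(), 6))  # (160,) 1.0
```

Because the histogram is normalized, the same function serves input vectors built from any number of detection events.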
We use a fully connected artificial NN with an input layer of size $160$, an output layer of size $2$ and three hidden layers with sizes $64$, $32$ and $16$.
The hidden layers are activated with the rectified linear unit, and the output layer is activated with a softmax function.
These parameters were chosen for a good performance in discriminating between classical and nonclassical states.
The simulated data consisting of $2\times 10^4$ input vectors per training family (see below) are split into training data ($80\%$) and validation data ($20\%$). The network is trained until the validation error stops decreasing for more than $10$ training cycles.
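The paper does not specify an implementation framework; as a framework-free illustration of the architecture just described ($160$-unit input, dense ReLU hidden layers of $64$, $32$, and $16$ units, and a softmax output of size $2$), the sketch below runs a forward pass with untrained random weights of our own choosing:

```python
import numpy as np

rng = np.random.default_rng(1)
sizes = [160, 64, 32, 16, 2]          # layer widths from the text
weights = [rng.normal(0, 0.1, (m, n)) for m, n in zip(sizes, sizes[1:])]
biases = [np.zeros(n) for n in sizes[1:]]

def forward(v):
    """ReLU hidden layers, softmax output; returns the probability r
    assigned to the 'nonclassical' class."""
    for W, b in zip(weights[:-1], biases[:-1]):
        v = np.maximum(v @ W + b, 0.0)
    z = v @ weights[-1] + biases[-1]
    p = np.exp(z - z.max())
    p /= p.sum()
    return p[1]

r = forward(np.ones(160) / 160)       # a flat dummy histogram
print(0.0 <= r <= 1.0)                # True
```

In practice the weights would of course be obtained by the supervised training described above.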
Considering the experimental data on which we want to test the network's prediction later, we simulate $16000$ detection events to generate each training input vector.
We train the NN with data generated from Fock, squeezed-coherent, and single-photon-added coherent states (SPACS) as states that show nonclassical signatures and with coherent, thermal, and mixtures of coherent states as states showing classical characteristics, see Appendix \ref{sec:appendix_traingdata} for a discussion of this choice.
All families of states used in the training are summarized together with their parameters in Appendix \ref{sec:Appendix}.
To account for realistic (imperfect) scenarios, we chose an overall efficiency of the homodyne measurement of $\eta=0.6$ \cite{biagi_2020}.
Note that the quantum efficiency, which represents external limitations such as channel or detector efficiencies, can equivalently be used to describe noisy quantum states.
Thus, we train the network with data that correspond to the detection of realistic, lossy quantum states.
\subsection{Identification of nonclassicality}
In the training process, we assign the value $0$ to all classical quadrature data and the value $1$ to nonclassical data.
The output of the NN is a value $r$ between $0$ and $1$ that provides a way to discriminate classical and nonclassical data.
A high output value (close to $1$) indicates the nonclassical character of the tested quadrature data.
We choose a threshold value $t$ above which we say that the NN identifies nonclassicality.
As our goal is to faithfully identify nonclassicality, we set $t=0.9$.
This means that, for $r>t=0.9$, we conclude that the NN identifies nonclassicality.
In this way, we might fail to recognize some nonclassical states as such, but we minimize the risk of falsely recognizing classical states as nonclassical ones.
Note that depending on the specific requirements and the choice of trained and studied states, the value of $t$ can be adapted.
In this context, it is important to stress that the result of the NN can only be an indication for nonclassical states; cf. also Ref. \cite{gebhart_2020}.
A certification of nonclassicality requires full analysis including the evaluation of a nonclassicality test (witness) and a proper treatment of errors.
While such an analysis can be rather involved, the proposed NN approach allows one to implement an easy and fast identification of nonclassicality.
Therefore, it provides a useful tool for pre-selecting and sorting of data or the online, in-laboratory monitoring of experiments.
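The decision rule above amounts to a one-line classifier (the function name is ours):

```python
def identifies_nonclassicality(r, t=0.9):
    """Flag data as nonclassical only when the network output r
    exceeds the conservative threshold t = 0.9."""
    return r > t

print(identifies_nonclassicality(0.95), identifies_nonclassicality(0.5))  # True False
```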
\subsection{Performance of the network on trained states}
In Fig. \ref{fig:performance_training}, we show the output $r$ of the network for the different families of training states in their corresponding parameter ranges.
All training families are correctly and consistently recognized to be classical or nonclassical.
This holds for the total parameter regions of the considered states (cf. Appendix \ref{sec:Appendix}), indicating that the training of the NN is successful in the sense that the network learned to correctly classify the states from the training set into classical and nonclassical ones.
\begin{figure}
\caption{
Nonclassicality prediction of the neural network (NN) on the training states [coherent, thermal and mixed coherent states as classical ones; Fock, squeezed-coherent and single-photon-added coherent states (SPACS) as nonclassical ones], each in its corresponding state-parameter domain.
$\alpha$ is the coherent amplitude, $n$ is the number of photons, and $\bar n$ is the mean number of photons.
The gray horizontal line corresponds to the nonclassicality threshold $t=0.9$.
Note that for the squeezed-coherent states, the squeezing parameter $\xi$ is chosen randomly in $\xi\in[0.5,1]$ and is not shown in this plot. For each Fock state, the NN prediction is tested for four different simulations of the quadrature measurements.
For details on the state parameters, see Appendix \ref{sec:Appendix}.
}
\label{fig:performance_training}
\end{figure}
\section{Application to experimental data}\label{sec:4}
Here, we will use the trained NN for the identification of nonclassicality from experimental quadrature data.
We analyze data from two different families of states: single-mode squeezed states and SPACS.
This analysis will demonstrate the strength of the network approach as a fast and easy-to-implement characterization tool for experimental data.
\subsection{Squeezed vacuum states}
The first nonclassical experimental state we consider is a squeezed vacuum state.
The vacuum state is squeezed along the real axis of the coherent plane.
Details on the experimental realization can be found in Ref.~\cite{agudelo_2015}.
In the measurements, the homodyne phase setting is changed continuously within the interval $\phi\in[0,2\pi]$.
The resulting measurement data are then divided into $125$ bins of size $\Delta\phi=2\pi/125$, such that $\sim 16000$ detection events are grouped together to constitute an input vector of the NN.
For our analysis, the amount of squeezing $|\xi_\mathrm{exp}|$ and the quantum efficiency $\eta_\mathrm{exp}$ of the detectors do not have to be known, which highlights the practicability of the NN prediction.
In Fig. \ref{fig:squeezed} (bottom), we show the prediction of the network for the nonclassicality of the squeezed state with respect to the homodyne phase setting together with the variance of the measured quadrature distribution.
Additionally, the quadrature distributions $p(x)$ for $\phi=0$ and $\phi=\pi/2$ (solid) compared with the vacuum quadrature distribution (dashed) are displayed (top).
It is known that nonclassicality in quadrature data can be verified by observing single-mode quadrature squeezing, see, e.g., Ref. \cite{vogel_2006}.
That is, if the quadrature variance $\mathrm{Var}[\hat x(\phi)]$ is below the vacuum noise for some values of $\phi$, $\mathrm{Var}[\hat x(\phi)]<1/4$, nonclassicality is detected.
We see that the domain of nonclassicality classification of the network coincides well with the domain of nonclassicality detection by sub-shot-noise variance.
In short, we confirm that the NN learns the standard nonclassicality classifier of sub-shot-noise variance.
If one is simply interested in the detection of squeezing, measuring the variance of the quadrature distribution remains sufficient.
However, as discussed below, in contrast to a mere variance classifier, the NN can learn how to identify further nonclassicality features.
It is more flexible than the squeezing condition which recognizes only one specific nonclassical feature, and it can be advantageous in scenarios where the underlying quantum state is not known and cannot be captured by a simple variance condition.
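For comparison with the NN output, the sub-shot-noise criterion can be sketched directly on the phase-binned data. This is a minimal illustration, assuming the detection events are stored as (phase, quadrature) pairs; the bin count of $125$ and the vacuum variance of $1/4$ follow the text, while the toy data below merely mimic phase-dependent squeezing.

```python
import numpy as np

def sub_shot_noise_bins(phases, quadratures, n_bins=125, vacuum_var=0.25):
    """Group (phase, quadrature) events into n_bins phase bins over [0, 2*pi)
    and flag each bin whose quadrature variance lies below the vacuum variance."""
    edges = np.linspace(0.0, 2.0 * np.pi, n_bins + 1)
    idx = np.clip(np.digitize(phases, edges) - 1, 0, n_bins - 1)
    variances = np.array([
        np.var(quadratures[idx == b]) if np.any(idx == b) else np.nan
        for b in range(n_bins)
    ])
    return variances, variances < vacuum_var

# Toy data: squeezing (std 0.25 < 0.5) near phi = 0, anti-squeezing near phi = pi/2.
rng = np.random.default_rng(1)
phases = rng.uniform(0.0, 2.0 * np.pi, 200_000)
stds = 0.25 * np.cos(phases) ** 2 + 1.0 * np.sin(phases) ** 2
quadratures = rng.normal(0.0, stds)
variances, flagged = sub_shot_noise_bins(phases, quadratures)
```

Bins around $\phi=0$ are flagged as nonclassical, while those around $\phi=\pi/2$ are not, reproducing the behavior of the yellow variance curve in the figure.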
\begin{figure}
\caption{
Bottom: Nonclassicality prediction of the neural network (NN; teal) and quadrature variance of the corresponding distribution (yellow) for experimental data from squeezed states, dependent on the homodyne phase setting. Shaded regions indicate nonclassicality for the corresponding quantity. Top: For $\phi=0$ and $\phi=\pi/2$, the exact quadrature distribution $p(x)$ is shown (solid), in comparison with the quadrature distribution of the vacuum state (dashed).
}
\label{fig:squeezed}
\end{figure}
\subsection{SPACS}
Let us now analyze the prediction of the network for experimentally generated SPACS, which result from a single application of the photon creation operator to a coherent state. In principle, such states are always nonclassical, independent of the input coherent state; however, they show evident Wigner negativity and resemble single-photon Fock states only for small coherent-state amplitudes, while for intermediate amplitudes they also exhibit quadrature squeezing. Displaying a variety of different quantum features in different parameter regions, SPACS are therefore particularly interesting candidates for testing the performance of the NN.
The experimental data consist of quadrature values, measured via homodyne detection, for the states $\mathcal{N}\hat{a}^\dagger \left | \alpha\right\rangle$ ($\mathcal{N}$ is a normalization constant) with $14$ different values of $\alpha\in \mathbb{R}^+$. To experimentally generate such optical states, we injected the signal mode of a parametric down conversion crystal with coherent states obtained from the 786 nm emission of a Ti:Sa mode-locked laser \cite{zavatta_2005}.
When the same crystal is pumped with an intense ultraviolet beam, obtained by frequency doubling the same laser, the detection of an idler photon heralds the addition of a single photon onto the seed coherent state. In other words, each idler detection event announces the presence of SPACS along the signal mode. Performing heralded homodyne detection on this mode, we measured the quadrature distributions along $11$ different quadrature angles $\phi$ for each value of $\alpha$ \cite{zavatta_2005}. Mode mismatch between the seed coherent states and the pump and LO beams, optical losses, electronic noise, and limited detector quantum efficiency in the homodyne measurement setup are the main causes for a non-unit overall efficiency of $\eta_{\mathrm{exp}}\approx0.6$ in the experiment.
For each state, $15963$ detection events are used to construct the network input vector.
In Fig. \ref{fig:performance_spacs}(a), we show the (binary) prediction of the network for the experimental SPACS data, together with exemplary quadrature distributions $p(x)$ for different combinations of $\alpha$ and $\phi$.
We observe that the ability of the NN to identify nonclassicality depends crucially on the homodyne phase setting.
For $\sin \phi \approx 0$, SPACS are identified as nonclassical in a wide range of $\alpha$; cf. Fig. \ref{fig:performance_spacs}(b) for the detailed NN predictions for this case.
On the other hand, for suboptimal directions, SPACS are rarely recognized as nonclassical by the NN (except for small $\alpha$).
Also, for large $\alpha$, SPACS are generally classified as classical in all directions.
As a comparison, we show the NN prediction for experimental homodyne data generated by coherent states in Fig. \ref{fig:performance_spacs}(c) for the same parameters as used in Fig. \ref{fig:performance_spacs}(b).
The network correctly recognizes coherent states as classical.
The phase-dependent behavior of the NN output for the experimental SPACS can be explained by the fact that, for $\sin \phi \approx 0$, the quadrature distributions differ significantly from the one produced by a coherent state, while for other directions, the corresponding quadrature distributions resemble closely the ones of coherent states \cite{zavatta_2004,zavatta_2005,filippov_2013}.
For small $\alpha<0.5$, SPACS resemble single-photon states and are thus recognized as nonclassical at all quadrature angles [see $p(x)$ for $\alpha=0.32$ in Fig. \ref{fig:performance_spacs}].
On the other hand, for large $\alpha$, the quadrature distribution of the SPACS approaches the one of coherent states also in the optimal direction ($\phi=0$), and therefore, the NN eventually does not indicate nonclassicality anymore. In this regime, it is known that SPACS can be a good approximation of a coherent state of a larger amplitude \cite{jeong_2014}.
The similarity of the SPACS quadrature distribution $p(x)$ for large $\alpha$ and the distribution from a coherent state explains the difficulty for the NN to classify SPACS as nonclassical in this regime.
To summarize, for an optimal homodyne phase setting, SPACS are identified as nonclassical in a wide range of parameters.
This provides a simple test of the nonclassicality of SPACS based directly on quadrature distributions.
As we discuss below, this identification is successful even in a parameter regime where the homodyne distribution does not show sub-shot-noise or similarity to Fock states.
Therefore, the NN prediction proves operational for several different states and nonclassicality features.
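The construction of a fixed-length network input from a variable number of detection events can be sketched as follows. The histogram encoding over the measurement range $|x|\le 8$ (quoted in Appendix \ref{sec:Appendix}) is an assumption made here for illustration; the exact input representation of the trained NN may differ.

```python
import numpy as np

def make_input_vector(quadratures, n_features=64, x_max=8.0):
    """Hypothetical input encoding: a normalized histogram of the recorded
    quadrature values over the measurement range |x| <= x_max.  It maps a
    variable number of detection events (e.g., the 15963 events recorded per
    SPACS setting) to a fixed-length feature vector."""
    hist, _ = np.histogram(quadratures, bins=n_features, range=(-x_max, x_max))
    return hist / hist.sum()

events = np.random.default_rng(0).normal(0.5, 0.5, 15_963)
vec = make_input_vector(events)
```

The normalization makes input vectors from runs with different event counts directly comparable.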
\begin{figure}
\caption{
(a) (bottom) Binary neural network (NN) prediction for experimentally generated single-photon-added coherent states (SPACS; $\mathcal{N}\hat{a}^\dagger\left|\alpha\right\rangle$) for different combinations of $\alpha$ and the homodyne phase $\phi$, together with (top) exemplary quadrature distributions $p(x)$. (b) Detailed NN prediction for $\sin\phi\approx 0$ as a function of $\alpha$. (c) NN prediction for experimental homodyne data from coherent states for the same parameters as in (b).}
\label{fig:performance_spacs}
\end{figure}
\section{Influence of the training set and application to untrained data}
\label{sec:untrained}
In this section, we first discuss the ability of the NN to recognize different features of nonclassicality at the same time. Then, we test its performance to recognize nonclassicality of states that were not seen in the training phase and of measurement data consisting of varying sample sizes.
\subsection{Beyond single-feature recognition}
To get some insights into which features are learned by the NN, we examine the performance in recognizing simulated SPACS of a network trained without SPACS; see Fig. \ref{fig:training_nospacs}.
We observe that a network which is not trained with SPACS recognizes the latter only in specific parameter regions (teal dots).
For $|\alpha|\in[0,0.5]$, SPACS are recognized as nonclassical states due to their similarity to single-photon states.
On the other hand, in the parameter domain $|\alpha|\in[1,2]$, their nonclassicality is recognized because the variance of the quadrature distribution is significantly smaller than the vacuum variance.
Outside these parameter regions, the distribution neither resembles a Fock state nor shows a reduced quadrature variance and is, therefore, not classified as nonclassical.
For $|\alpha|>3$, the variance approaches the vacuum variance, making a correct classification as nonclassical impossible.
In total, we see that the network can identify some SPACS even if they were not part of the training set.
The network effectively identifies similarity to Fock states and sub-shot-noise variances.
This is one example of the general fact that common features can lead to the identification of untrained data.
In comparison, an NN that also used SPACS in its training can only achieve its performance (cf. Fig. \ref{fig:performance_training}) by learning to recognize similarity to SPACS in the regime where they resemble neither Fock nor squeezed states.
Therefore, we conclude that the network is sensitive to different nonclassical features at the same time and, thus, identifies nonclassicality beyond single features.
Hence, a properly trained network can be advantageous, as it can recognize different nonclassical features for which one would otherwise need to implement different test conditions.
This is particularly useful if the nonclassical features of the state to be tested are unknown.
As we have just seen, a state need not be part of the training set to be recognized by the network.
The above analysis also indicates the necessity of training a deep NN for this task, since simple baseline models, such as a sub-shot-noise variance criterion, only provide single-feature recognition.
\begin{figure}
\caption{
Nonclassicality prediction of a network trained without single-photon-added coherent states (SPACS; $r_\mathrm{noSPACS}$; teal dots) for simulated SPACS data, as a function of $|\alpha|$.}
\label{fig:training_nospacs}
\end{figure}
\subsection{Application to untrained data}\label{sec:5}
Now we discuss the performance of the NN on states which are not used in the training.
We apply the network to the family of so-called (odd) cat states
\begin{equation}
\left|\alpha_-\right\rangle=\frac{1}{\sqrt{2-2\exp(-2|\alpha|^2)}}\left(\left|\alpha\right\rangle-\left|-\alpha \right\rangle\right),
\end{equation}
where $\alpha\in\mathbb{R}^+$.
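That the prefactor indeed normalizes the state is a one-line check using the coherent-state overlap $\langle\alpha|{-\alpha}\rangle=e^{-2|\alpha|^2}$ for real $\alpha$:
\[
\langle\alpha_-|\alpha_-\rangle
=\frac{\langle\alpha|\alpha\rangle-\langle\alpha|{-\alpha}\rangle-\langle{-\alpha}|\alpha\rangle+\langle{-\alpha}|{-\alpha}\rangle}{2-2\exp(-2|\alpha|^2)}
=\frac{2-2e^{-2|\alpha|^2}}{2-2e^{-2|\alpha|^2}}=1.
\]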
As all states in this family consist of a coherent superposition of coherent states, they are all nonclassical.
In Fig. \ref{fig:performance_cat1}, we show the $\alpha$-dependent prediction $r$ of the NN for quadrature measurements simulated for $\left|\alpha_-\right\rangle$. We use quadrature angles (a) $\phi=\pi/2$ and (b) $\phi=\pi/4$.
For each subfigure, we additionally display the quadrature distribution $p(x)$ for different values of $\alpha$ (solid) compared with $p(x)$ for the same parameters but using a quantum efficiency $\eta=1$ (dashed).
For both quadrature angles, the network correctly classifies the states as nonclassical in a significant range of $\alpha$.
Thus, this example shows that the NN can identify the nonclassicality also of untrained states.
For larger $\alpha$, cat states are not identified as nonclassical.
This behavior can be explained as follows.
For small $\alpha$, the cat state resembles a single-photon Fock state and can therefore be identified as nonclassical.
For larger $\alpha$ and measured along $\phi=\pi/2$ with unit quantum efficiency $\eta=1$, the quadrature distribution develops a nonclassical interference pattern (a, dashed).
However, for a realistic efficiency $\eta=0.6$, this interference is smoothed away (a, solid) such that the states are eventually classified as classical.
Surprisingly, by choosing a different quadrature angle of, e.g., $\phi=\pi/4$, the cat states are classified as nonclassical in a wider range of $\alpha$ [Fig. \ref{fig:performance_cat1}(b)].
This is because the quadrature distribution still resembles a Fock state in this direction.
Note that the performance of the NN prediction for cat states can be increased by including this family in the training process.
In summary, the NN is able to identify the nonclassicality also for states that were not used in the training process.
However, for optimized performance, it remains advisable to adapt the classes of states and parameter ranges used in the training; see Appendix \ref{sec:appendix_traingdata}.
\begin{figure}
\caption{
Prediction $r$ of the neural network (NN) for simulated quadrature measurements of the cat state $\left|\alpha_-\right\rangle$ as a function of $\alpha$, for (a) $\phi=\pi/2$ and for (b) $\phi=\pi/4$. Above the subplots, we show the quadrature distribution $p(x)$ for different $\alpha$ (solid), in comparison with the quadrature distribution using the same parameters but a quantum efficiency $\eta=1$ (dashed).}
\label{fig:performance_cat1}
\end{figure}
\subsection{Influence of the sample size}
Finally, we want to discuss the prediction of the NN if it is given measurement data with a smaller sample size than that used in the training phase.
In Fig. \ref{fig:samplesize}, we show the NN nonclassicality prediction $r$ for experimental quadrature data of a SPACS (yellow) and a coherent state (teal) for $\alpha=0.32$ and $\phi=0$.
We observe that an NN trained with a sample size of $16000$ (dashed line) can correctly classify these two states from measurement records with as few as $\sim 800$ samples.
Decreasing the sample size even further results in false classification of coherent states as nonclassical and vice versa, which renders the NN prediction unreliable in this regime.
This analysis shows the flexibility of the NN even once it has been trained.
Importantly, the NN can provide conclusive predictions based on comparably very small sample sizes, which opens the possibility of online classification during measurements or fast (pre-)classification of data.
Note that the performance of the NN for small sample sizes can also be improved by training it with the corresponding sample size.
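The sweep over sample sizes can be mimicked by truncating a measurement record before classification. In the following sketch, the classifier is a hypothetical stand-in (a sub-shot-noise variance test) rather than the trained NN; it is used only to illustrate the truncation procedure.

```python
import numpy as np

def prediction_vs_sample_size(quadratures, sizes, classify):
    """Evaluate a classifier on truncated measurement records: for each
    size n, only the first n detection events of the record are used."""
    return [classify(quadratures[:n]) for n in sizes]

# `classify` stands in for the trained NN; here, a variance test (Var < 1/4)
# on a simulated squeezed record (variance 0.09) serves purely as illustration.
rng = np.random.default_rng(7)
squeezed_record = rng.normal(0.0, 0.3, 16_000)
sizes = [100, 800, 4_000, 16_000]
preds = prediction_vs_sample_size(squeezed_record, sizes,
                                  classify=lambda x: np.var(x) < 0.25)
```

For the trained NN, the same truncation scheme would be applied to the experimental record before forming the network input.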
\begin{figure}
\caption{
Prediction $r$ of the neural network (NN) for experimental data from single-photon-added coherent states (SPACS; yellow) and coherent states (teal) for $\alpha=0.32$ and $\phi=0$ as a function of the measurement sample size. The dashed vertical line indicates the sample size $16000$ that was used in the training phase.
}
\label{fig:samplesize}
\end{figure}
\section{Conclusions}\label{sec:6}
In this paper, we introduced an artificial NN-based nonclassicality identifier for single-mode quantum states of light measured with balanced homodyne detection.
We trained the network using simulated homodyne detection data for realistic noisy measurements of different classical and nonclassical states.
We observed that the trained network can correctly classify different classical and nonclassical states, i.e., coherent states, squeezed states, and SPACS, from real experimental data.
Furthermore, the network recognizes certain nonclassical states that were not used in the training phase of the network.
Compared with typical nonclassicality conditions based on homodyne tomography or other more complex nonclassicality tests, the strength of our approach lies in its simple implementation and the fact that only a small amount of data is required.
We would like to emphasize that the NN nonclassicality prediction cannot certify nonclassicality and, if necessary, should be complemented by an error-proof nonclassicality witness.
The ML-based classification offers a fast and accessible method to sort and preselect experimental data, considering that it circumvents the need to first perform homodyne tomography or the calculation of complex test conditions and, as we showed, performs well also on small sample sizes. It is furthermore easy to implement and applicable in multiple experimental settings.
ML has been used before for the detection of quantum effects \cite{gebhart_2020,cimini_2020,tiunov_2020}. In this context, it is important to highlight that the presented approach can detect phase-sensitive nonclassical features, which was not possible with previous results \cite{gebhart_2020}.
Further, the network approach can be used to search interesting experimental parameter regimes, especially if the production rate of detection events is small.
To maximize the accuracy of the NN prediction in experiments, any specific information about possible states and noise (such as phase or amplitude noise) should be included in the training phase.
Finally, note that the presented approach can be generalized to multi-mode scenarios and might be adapted to the identification of entanglement in a similar fashion.
Also, additional ML techniques, such as convolutional layers or regularization, can be considered to optimize the performance of the NN nonclassicality prediction and make it more applicable to untrained data.
\section*{Appendix}
\section{Choice of classical states in the training phase}\label{sec:appendix_traingdata}
Here, we want to emphasize the role of the choice of different classical states in the training data. In the main text, we used, in addition to coherent and thermal states, mixtures of coherent states of the form $\rho_\mathrm{mix}=(\ket{\alpha}\bra{\alpha}+\ket{-\alpha}\bra{-\alpha})/2$, where $\alpha$ is sampled from the same parameter domain as for coherent states.
These states have to be included in the training because, if the NN were trained only on classical states with single-peaked quadrature distributions, it might interpret double- or multi-peak structures as features of nonclassicality.
However, the choice of which classical state to use here is not unique.
For instance, a different classical state that occurs typically in experiments is a phase averaged coherent state, $\rho_\mathrm{av}=\int_0^{2\pi}\mathrm{d}\phi\ket{\alpha e^{i\phi}}\bra{\alpha e^{i\phi}}/2\pi$.
Using $\rho_\mathrm{av}$ instead of $\rho_\mathrm{mix}$ in the training results in a similar performance of the NN as in the main text, with the exception that, for larger $\alpha$, $\rho_\mathrm{mix}$ (and therefore also cat states measured along $\phi=0$) are classified as nonclassical.
This points to an important caveat of the NN classification of nonclassicality:
as mentioned in the main text, the different states used in the training phase must be chosen carefully, given the experimental conditions.
Training with more families of classical states decreases the probability of false identification of nonclassicality for states that were not seen in the training.
At the same time, it makes it harder for the NN to learn nonclassicality features of the corresponding nonclassical training states.
This discussion shows that the NN nonclassicality classification, while representing a simple and fast nonclassicality identification if possible input states are known, is not universal and does not yield a strict nonclassicality certification.
\section{Parameters and probabilities used for the simulation of quadrature measurement data}\label{sec:Appendix}
Here, we specify the state-dependent quadrature probability distributions and the corresponding parameters used in the simulation of quadrature measurement data for the different states in the main text.
In Table \ref{tab:statestatistics}, we list the different states together with the corresponding parameter regions used in the simulations and the quadrature distribution along the quadrature angle $\phi$. Note that we use a vacuum variance of $1/4$.
For the simulation of the training data, we fixed a quadrature angle $\phi=0$.
For thermal and Fock states, this restriction does not influence the distribution, as these states are phase insensitive.
For coherent states with amplitude $\alpha$, the distribution along a nonzero $\phi$ is equivalent to the one of a coherent state with amplitude $\alpha\cos\phi$, measured along a zero quadrature angle.
For squeezed coherent states and SPACS, this choice assures that only quadrature distributions which show nonclassical features are used in the training.
As noted in Ref.~\cite{Note1}, the different parameter limits are chosen such that the probability for an event outside the considered measurement range, $\left|x\right|>8$, is small ($<10^{-6}$).
Note that, for SPACS, we further restrict the parameters ($|\alpha|\leq 3$) to a domain where the network is able to separate them clearly from the classical states. For the simulation of the squeezed states, the squeezing parameter is chosen uniformly in $\xi\in[0.5,1]$.
\begin{table*}[b]
\centering
\begin{tabular}{ccc}
\hline \hline
State & Parameters & Probability $p(x,\phi)$ \\ \hline
coherent & $\alpha\in [-5,5]$ & $\sqrt{\frac{2}{\pi}}\exp[-2(x-\sqrt{\eta}\alpha\cos\phi)^2]$ \\
thermal & $\bar n \in [0,5]$ & $\sqrt{\frac{2}{\pi(1+2\eta\bar n)}}\exp\left[-\frac{2x^2}{1+2\eta\bar n}\right]$ \\
Fock & $n\in\{1,\dots,6\}$ & $\sqrt{\frac{2}{\pi}}\sum_{k=0}^n \binom{n}{k}\frac{\eta^k}{2^k k!}e^{-2 x^2}H_{2k}(\sqrt{2}x)$ \\
squeezed coherent & $\alpha\in [-5,5],\xi\in [0.5,1]$ & $\sqrt{\frac{2}{\pi(1-\eta+e^{2|\xi|}\cos^2\phi+e^{-2|\xi|}\sin^2\phi)}}\exp \left[-\frac{2(x-\sqrt{\eta}\alpha)^2}{1-\eta+e^{2|\xi|}\cos^2\phi+e^{-2|\xi|}\sin^2\phi}\right]$ \\
SPACS & $\alpha\in [-3,3]$ & $\frac{1}{1+\alpha^2}\sqrt{\frac{2}{\pi}}\exp[-2(x-\sqrt{\eta}\alpha\cos\phi)^2]$ \\
& & $\times \left[\eta\left(2x\cos\phi-\frac{2\eta-1}{\sqrt{\eta}}\alpha\right)^2+4\eta x^2\sin^2 \phi +(1-\eta)(1+4\eta\alpha^2\sin^2\phi)\right]$ \cite{zavatta_2005}\\
cat & $\alpha\in [-5,5]$ & $ \sqrt{\frac{2}{\pi}}\frac{1}{2-2e^{-2\alpha^2}}\Big\{ \exp[-2(x-\sqrt{\eta}\alpha\cos\phi)^2]$ \\
& & $- 2e^{-2\alpha^2}\operatorname{Re}[\exp[-2(x+i\sqrt{\eta}\alpha\sin\phi)^2]]+ \exp[-2(x+\sqrt{\eta}\alpha\cos\phi)^2]\Big\}$ \\
\hline \hline
\end{tabular}
\caption{
For each family of states, the used parameter regions and the corresponding quadrature distributions $p(x,\phi)$ are shown, where $x$ is the quadrature value and $\phi$ is the phase in the balanced homodyne detection. $\eta$ is the overall quantum efficiency.
}
\label{tab:statestatistics}
\end{table*}
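As an illustration of how training data can be drawn from Table \ref{tab:statestatistics}, the coherent-state row is a Gaussian with mean $\sqrt{\eta}\alpha\cos\phi$ and variance $1/4$; note that $\phi$ enters only through $\alpha\cos\phi$, consistent with the equivalence stated above. The following sketch (using the sample size of $16000$ from the text and $\eta=0.6$ as in the experiment) is an assumed sampling procedure, not the exact code used for the training data.

```python
import numpy as np

def simulate_coherent_quadratures(alpha, phi, eta, n_samples=16_000, seed=None):
    """Draw quadrature samples from the coherent-state row of the table:
    p(x, phi) = sqrt(2/pi) * exp[-2 (x - sqrt(eta)*alpha*cos(phi))^2],
    i.e., a Gaussian with mean sqrt(eta)*alpha*cos(phi) and std 1/2."""
    rng = np.random.default_rng(seed)
    mean = np.sqrt(eta) * alpha * np.cos(phi)
    return rng.normal(mean, 0.5, n_samples)

x = simulate_coherent_quadratures(alpha=1.0, phi=0.0, eta=0.6, seed=42)
```

The sample mean and variance of such a record recover $\sqrt{\eta}\alpha\cos\phi$ and the vacuum variance $1/4$, respectively.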
\begin{thebibliography}{64}
\makeatletter
\providecommand \@ifxundefined [1]{
\@ifx{#1\undefined}
}
\providecommand \@ifnum [1]{
\ifnum #1\expandafter \@firstoftwo
\else \expandafter \@secondoftwo
\fi
}
\providecommand \@ifx [1]{
\ifx #1\expandafter \@firstoftwo
\else \expandafter \@secondoftwo
\fi
}
\providecommand \natexlab [1]{#1}
\providecommand \enquote [1]{``#1''}
\providecommand \bibnamefont [1]{#1}
\providecommand \bibfnamefont [1]{#1}
\providecommand \citenamefont [1]{#1}
\providecommand \href@noop [0]{\@secondoftwo}
\providecommand \href [0]{\begingroup \@sanitize@url \@href}
\providecommand \@href[1]{\@@startlink{#1}\@@href}
\providecommand \@@href[1]{\endgroup#1\@@endlink}
\providecommand \@sanitize@url [0]{\catcode `\\12\catcode `\$12\catcode
`\&12\catcode `\#12\catcode `\^12\catcode `\_12\catcode `\%12\relax}
\providecommand \@@startlink[1]{}
\providecommand \@@endlink[0]{}
\providecommand \url [0]{\begingroup\@sanitize@url \@url }
\providecommand \@url [1]{\endgroup\@href {#1}{\urlprefix }}
\providecommand \urlprefix [0]{URL }
\providecommand \Eprint [0]{\href }
\providecommand \doibase [0]{http://dx.doi.org/}
\providecommand \selectlanguage [0]{\@gobble}
\providecommand \bibinfo [0]{\@secondoftwo}
\providecommand \bibfield [0]{\@secondoftwo}
\providecommand \translation [1]{[#1]}
\providecommand \BibitemOpen [0]{}
\providecommand \bibitemStop [0]{}
\providecommand \bibitemNoStop [0]{.\EOS\space}
\providecommand \EOS [0]{\spacefactor3000\relax}
\providecommand \BibitemShut [1]{\csname bibitem#1\endcsname}
\let\auto@bib@innerbib\@empty
\bibitem [{\citenamefont {Braunstein}\ and\ \citenamefont {van
Loock}(2005)}]{braunstein2005}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {Samuel~L.}\
\bibnamefont {Braunstein}}\ and\ \bibinfo {author} {\bibfnamefont {Peter}\
\bibnamefont {van Loock}},\ }\bibfield {title} {\enquote {\bibinfo {title}
{Quantum information with continuous variables},}\ }\href {\doibase
10.1103/RevModPhys.77.513} {\bibfield {journal} {\bibinfo {journal} {Rev.
Mod. Phys.}\ }\textbf {\bibinfo {volume} {77}},\ \bibinfo {pages} {513--577}
(\bibinfo {year} {2005})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Streltsov}\ \emph {et~al.}(2017)\citenamefont
{Streltsov}, \citenamefont {Adesso},\ and\ \citenamefont
{Plenio}}]{streltsov_2017}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {Alexander}\
\bibnamefont {Streltsov}}, \bibinfo {author} {\bibfnamefont {Gerardo}\
\bibnamefont {Adesso}}, \ and\ \bibinfo {author} {\bibfnamefont {Martin~B.}\
\bibnamefont {Plenio}},\ }\bibfield {title} {\enquote {\bibinfo {title}
{Colloquium: Quantum coherence as a resource},}\ }\href {\doibase
10.1103/RevModPhys.89.041003} {\bibfield {journal} {\bibinfo {journal}
{Rev. Mod. Phys.}\ }\textbf {\bibinfo {volume} {89}},\ \bibinfo {pages}
{041003} (\bibinfo {year} {2017})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Sperling}\ and\ \citenamefont
{Walmsley}(2018)}]{sperling_2018}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {J.}~\bibnamefont
{Sperling}}\ and\ \bibinfo {author} {\bibfnamefont {I.~A.}\ \bibnamefont
{Walmsley}},\ }\bibfield {title} {\enquote {\bibinfo {title}
{Quasiprobability representation of quantum coherence},}\ }\href {\doibase
10.1103/PhysRevA.97.062327} {\bibfield {journal} {\bibinfo {journal} {Phys.
Rev. A}\ }\textbf {\bibinfo {volume} {97}},\ \bibinfo {pages} {062327}
(\bibinfo {year} {2018})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Titlaer}\ and\ \citenamefont
{Glauber}(1965)}]{titulaer_1965}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {U.~M.}\ \bibnamefont
{Titlaer}}\ and\ \bibinfo {author} {\bibfnamefont {R.~J.}\ \bibnamefont
{Glauber}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Correlation
functions for coherent fields},}\ }\href {\doibase 10.1103/PhysRev.140.B676}
{\bibfield {journal} {\bibinfo {journal} {Phys. Rev.}\ }\textbf {\bibinfo
{volume} {140}},\ \bibinfo {pages} {B676--B682} (\bibinfo {year}
{1965})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Mandel}(1986)}]{mandel_1986}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {L}~\bibnamefont
{Mandel}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Non-classical
states of the electromagnetic field},}\ }\href {\doibase
10.1088/0031-8949/1986/t12/005} {\bibfield {journal} {\bibinfo {journal}
{Phys. Scr.}\ }\textbf {\bibinfo {volume} {T12}},\ \bibinfo {pages} {34--42}
(\bibinfo {year} {1986})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Glauber}(1963)}]{glauber_1963}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {Roy~J.}\ \bibnamefont
{Glauber}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Coherent and
incoherent states of the radiation field},}\ }\href {\doibase
10.1103/PhysRev.131.2766} {\bibfield {journal} {\bibinfo {journal} {Phys.
Rev.}\ }\textbf {\bibinfo {volume} {131}},\ \bibinfo {pages} {2766--2788}
(\bibinfo {year} {1963})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Sudarshan}(1963)}]{sudarshan_1963}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {E.~C.~G.}\
\bibnamefont {Sudarshan}},\ }\bibfield {title} {\enquote {\bibinfo {title}
{Equivalence of semiclassical and quantum mechanical descriptions of
statistical light beams},}\ }\href {\doibase 10.1103/PhysRevLett.10.277}
{\bibfield {journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf
{\bibinfo {volume} {10}},\ \bibinfo {pages} {277--279} (\bibinfo {year}
{1963})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Carmichael}\ and\ \citenamefont
{Walls}(1976)}]{Carmichael_1976}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {H~J}\ \bibnamefont
{Carmichael}}\ and\ \bibinfo {author} {\bibfnamefont {D~F}\ \bibnamefont
{Walls}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Proposal for the
measurement of the resonant stark effect by photon correlation techniques},}\
}\href {\doibase 10.1088/0022-3700/9/4/001} {\bibfield {journal} {\bibinfo
{journal} {J. Phys. B}\ }\textbf {\bibinfo {volume} {9}},\ \bibinfo {pages}
{L43--L46} (\bibinfo {year} {1976})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Kimble}\ and\ \citenamefont
{Mandel}(1976)}]{Kimble_1976}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {H.~J.}\ \bibnamefont
{Kimble}}\ and\ \bibinfo {author} {\bibfnamefont {L.}~\bibnamefont
{Mandel}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Theory of
resonance fluorescence},}\ }\href {\doibase 10.1103/PhysRevA.13.2123}
{\bibfield {journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo
{volume} {13}},\ \bibinfo {pages} {2123--2144} (\bibinfo {year}
{1976})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Kimble}\ \emph {et~al.}(1977)\citenamefont {Kimble},
\citenamefont {Dagenais},\ and\ \citenamefont {Mandel}}]{kimble_1977}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {H.~J.}\ \bibnamefont
{Kimble}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Dagenais}}, \
and\ \bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {Mandel}},\
}\bibfield {title} {\enquote {\bibinfo {title} {Photon antibunching in
resonance fluorescence},}\ }\href {\doibase 10.1103/PhysRevLett.39.691}
{\bibfield {journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf
{\bibinfo {volume} {39}},\ \bibinfo {pages} {691--695} (\bibinfo {year}
{1977})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Mandel}(1979)}]{mandel_1979}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {L.}~\bibnamefont
{Mandel}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Sub-poissonian
photon statistics in resonance fluorescence},}\ }\href {\doibase
10.1364/OL.4.000205} {\bibfield {journal} {\bibinfo {journal} {Opt. Lett.}\
}\textbf {\bibinfo {volume} {4}},\ \bibinfo {pages} {205--207} (\bibinfo
{year} {1979})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Zou}\ and\ \citenamefont {Mandel}(1990)}]{Zou_1990}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {X.~T.}\ \bibnamefont
{Zou}}\ and\ \bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {Mandel}},\
}\bibfield {title} {\enquote {\bibinfo {title} {Photon-antibunching and
sub-poissonian photon statistics},}\ }\href {\doibase
10.1103/PhysRevA.41.475} {\bibfield {journal} {\bibinfo {journal} {Phys.
Rev. A}\ }\textbf {\bibinfo {volume} {41}},\ \bibinfo {pages} {475--476}
(\bibinfo {year} {1990})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Yuen}(1976)}]{Yuen_1976}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {Horace~P.}\
\bibnamefont {Yuen}},\ }\bibfield {title} {\enquote {\bibinfo {title}
{Two-photon coherent states of the radiation field},}\ }\href {\doibase
10.1103/PhysRevA.13.2226} {\bibfield {journal} {\bibinfo {journal} {Phys.
Rev. A}\ }\textbf {\bibinfo {volume} {13}},\ \bibinfo {pages} {2226--2243}
(\bibinfo {year} {1976})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Walls}(1983)}]{Walls_1983}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {D.~F.}\ \bibnamefont
{Walls}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Squeezed states
of light},}\ }\href {\doibase 10.1038/306141a0} {\bibfield {journal}
{\bibinfo {journal} {Nature}\ }\textbf {\bibinfo {volume} {306}},\ \bibinfo
{pages} {141--146} (\bibinfo {year} {1983})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Caves}\ and\ \citenamefont
{Schumaker}(1985)}]{Caves_1985}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {Carlton~M.}\
\bibnamefont {Caves}}\ and\ \bibinfo {author} {\bibfnamefont {Bonny~L.}\
\bibnamefont {Schumaker}},\ }\bibfield {title} {\enquote {\bibinfo {title}
{New formalism for two-photon quantum optics. i. quadrature phases and
squeezed states},}\ }\href {\doibase 10.1103/PhysRevA.31.3068} {\bibfield
{journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume}
{31}},\ \bibinfo {pages} {3068--3092} (\bibinfo {year} {1985})}\BibitemShut
{NoStop}
\bibitem [{\citenamefont {Loudon}\ and\ \citenamefont
{Knight}(1987)}]{Loudon_1987}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {R.}~\bibnamefont
{Loudon}}\ and\ \bibinfo {author} {\bibfnamefont {P.L.}\ \bibnamefont
{Knight}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Squeezed
light},}\ }\href {\doibase 10.1080/09500348714550721} {\bibfield {journal}
{\bibinfo {journal} {J. Mod. Opt.}\ }\textbf {\bibinfo {volume} {34}},\
\bibinfo {pages} {709--759} (\bibinfo {year} {1987})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Dodonov}(2002)}]{Dodonov_2002}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {V~V}\ \bibnamefont
{Dodonov}},\ }\bibfield {title} {\enquote {\bibinfo {title} {`nonclassical'
states in quantum optics: a `squeezed' review of the first 75 years},}\
}\href {\doibase 10.1088/1464-4266/4/1/201} {\bibfield {journal} {\bibinfo
{journal} {J. Opt. B: Quantum Semiclass. Opt.}\ }\textbf {\bibinfo {volume}
{4}},\ \bibinfo {pages} {R1--R33} (\bibinfo {year} {2002})}\BibitemShut
{NoStop}
\bibitem [{\citenamefont {Vogel}\ and\ \citenamefont
{Sperling}(2014)}]{vogel_2014}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {W.}~\bibnamefont
{Vogel}}\ and\ \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont
{Sperling}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Unified
quantification of nonclassicality and entanglement},}\ }\href {\doibase
10.1103/PhysRevA.89.052302} {\bibfield {journal} {\bibinfo {journal} {Phys.
Rev. A}\ }\textbf {\bibinfo {volume} {89}},\ \bibinfo {pages} {052302}
(\bibinfo {year} {2014})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Killoran}\ \emph {et~al.}(2016)\citenamefont
{Killoran}, \citenamefont {Steinhoff},\ and\ \citenamefont
{Plenio}}]{killoran_2016}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {N.}~\bibnamefont
{Killoran}}, \bibinfo {author} {\bibfnamefont {F.~E.~S.}\ \bibnamefont
{Steinhoff}}, \ and\ \bibinfo {author} {\bibfnamefont {M.~B.}\ \bibnamefont
{Plenio}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Converting
nonclassicality into entanglement},}\ }\href {\doibase
10.1103/PhysRevLett.116.080402} {\bibfield {journal} {\bibinfo {journal}
{Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {116}},\ \bibinfo {pages}
{080402} (\bibinfo {year} {2016})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Welsch}\ \emph {et~al.}(1999)\citenamefont {Welsch},
\citenamefont {Vogel},\ and\ \citenamefont {Opatrn\'y}}]{welsch_1999}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {Dirk-Gunnar}\
\bibnamefont {Welsch}}, \bibinfo {author} {\bibfnamefont {Werner}\
\bibnamefont {Vogel}}, \ and\ \bibinfo {author} {\bibfnamefont {Tom\'a\v{s}}\
\bibnamefont {Opatrn\'y}},\ }\href {\doibase
https://doi.org/10.1016/S0079-6638(08)70389-5} {\emph {\bibinfo {title}
{Homodyne Detection and Quantum-State Reconstruction}}},\ edited by\ \bibinfo
{editor} {\bibfnamefont {E.}~\bibnamefont {Wolf}},\ \bibinfo {series}
{Progress in Optics}, Vol.~\bibinfo {volume} {39}\ (\bibinfo {publisher}
{Elsevier},\ \bibinfo {year} {1999})\ pp.\ \bibinfo {pages}
{63--211}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Paris}\ and\ \citenamefont
{\v{R}eh\'{a}\v{c}ek}(2004)}]{paris_2004}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont
{Paris}}\ and\ \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont
{\v{R}eh\'{a}\v{c}ek}},\ }\href@noop {} {\emph {\bibinfo {title} {Quantum
State Estimation}}}\ (\bibinfo {publisher} {Springer},\ \bibinfo {year}
{2004})\BibitemShut {NoStop}
\bibitem [{\citenamefont {Lvovsky}\ and\ \citenamefont
{Raymer}(2009)}]{lvovsky_2009}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {A.~I.}\ \bibnamefont
{Lvovsky}}\ and\ \bibinfo {author} {\bibfnamefont {M.~G.}\ \bibnamefont
{Raymer}},\ }\bibfield {title} {\enquote {\bibinfo {title}
{Continuous-variable optical quantum-state tomography},}\ }\href {\doibase
10.1103/RevModPhys.81.299} {\bibfield {journal} {\bibinfo {journal} {Rev.
Mod. Phys.}\ }\textbf {\bibinfo {volume} {81}},\ \bibinfo {pages} {299--332}
(\bibinfo {year} {2009})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Cahill}\ and\ \citenamefont
{Glauber}(1969)}]{cahill_1969b}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {K.~E.}\ \bibnamefont
{Cahill}}\ and\ \bibinfo {author} {\bibfnamefont {R.~J.}\ \bibnamefont
{Glauber}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Density
operators and quasiprobability distributions},}\ }\href {\doibase
10.1103/PhysRev.177.1882} {\bibfield {journal} {\bibinfo {journal} {Phys.
Rev.}\ }\textbf {\bibinfo {volume} {177}},\ \bibinfo {pages} {1882--1902}
(\bibinfo {year} {1969})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Kiesel}\ \emph {et~al.}(2008)\citenamefont {Kiesel},
\citenamefont {Vogel}, \citenamefont {Parigi}, \citenamefont {Zavatta},\ and\
\citenamefont {Bellini}}]{Kiesel_2008}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {T.}~\bibnamefont
{Kiesel}}, \bibinfo {author} {\bibfnamefont {W.}~\bibnamefont {Vogel}},
\bibinfo {author} {\bibfnamefont {V.}~\bibnamefont {Parigi}}, \bibinfo
{author} {\bibfnamefont {A.}~\bibnamefont {Zavatta}}, \ and\ \bibinfo
{author} {\bibfnamefont {M.}~\bibnamefont {Bellini}},\ }\bibfield {title}
{\enquote {\bibinfo {title} {Experimental determination of a nonclassical
Glauber-Sudarshan $P$ function},}\ }\href {\doibase
10.1103/PhysRevA.78.021804} {\bibfield {journal} {\bibinfo {journal} {Phys.
Rev. A}\ }\textbf {\bibinfo {volume} {78}},\ \bibinfo {pages} {021804(R)}
(\bibinfo {year} {2008})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Smithey}\ \emph {et~al.}(1993)\citenamefont
{Smithey}, \citenamefont {Beck}, \citenamefont {Raymer},\ and\ \citenamefont
{Faridani}}]{smithey_1993}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {D.~T.}\ \bibnamefont
{Smithey}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Beck}},
\bibinfo {author} {\bibfnamefont {M.~G.}\ \bibnamefont {Raymer}}, \ and\
\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Faridani}},\ }\bibfield
{title} {\enquote {\bibinfo {title} {Measurement of the Wigner distribution
and the density matrix of a light mode using optical homodyne tomography:
Application to squeezed states and the vacuum},}\ }\href {\doibase
10.1103/PhysRevLett.70.1244} {\bibfield {journal} {\bibinfo {journal}
{Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {70}},\ \bibinfo {pages}
{1244--1247} (\bibinfo {year} {1993})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Dunn}\ \emph {et~al.}(1995)\citenamefont {Dunn},
\citenamefont {Walmsley},\ and\ \citenamefont {Mukamel}}]{Dunn_1995}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {T.~J.}\ \bibnamefont
{Dunn}}, \bibinfo {author} {\bibfnamefont {I.~A.}\ \bibnamefont {Walmsley}},
\ and\ \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Mukamel}},\
}\bibfield {title} {\enquote {\bibinfo {title} {Experimental determination
of the quantum-mechanical state of a molecular vibrational mode using
fluorescence tomography},}\ }\href {\doibase 10.1103/PhysRevLett.74.884}
{\bibfield {journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf
{\bibinfo {volume} {74}},\ \bibinfo {pages} {884--887} (\bibinfo {year}
{1995})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Leibfried}\ \emph {et~al.}(1996)\citenamefont
{Leibfried}, \citenamefont {Meekhof}, \citenamefont {King}, \citenamefont
{Monroe}, \citenamefont {Itano},\ and\ \citenamefont
{Wineland}}]{leibfried_1996}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {D.}~\bibnamefont
{Leibfried}}, \bibinfo {author} {\bibfnamefont {D.~M.}\ \bibnamefont
{Meekhof}}, \bibinfo {author} {\bibfnamefont {B.~E.}\ \bibnamefont {King}},
\bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Monroe}}, \bibinfo
{author} {\bibfnamefont {W.~M.}\ \bibnamefont {Itano}}, \ and\ \bibinfo
{author} {\bibfnamefont {D.~J.}\ \bibnamefont {Wineland}},\ }\bibfield
{title} {\enquote {\bibinfo {title} {Experimental determination of the
motional quantum state of a trapped atom},}\ }\href {\doibase
10.1103/PhysRevLett.77.4281} {\bibfield {journal} {\bibinfo {journal}
{Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {77}},\ \bibinfo {pages}
{4281--4285} (\bibinfo {year} {1996})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Del{\'e}glise}\ \emph {et~al.}(2008)\citenamefont
{Del{\'e}glise}, \citenamefont {Dotsenko}, \citenamefont {Sayrin},
\citenamefont {Bernu}, \citenamefont {Brune}, \citenamefont {Raimond},\ and\
\citenamefont {Haroche}}]{deleglise_2008}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {Samuel}\ \bibnamefont
{Del{\'e}glise}}, \bibinfo {author} {\bibfnamefont {Igor}\ \bibnamefont
{Dotsenko}}, \bibinfo {author} {\bibfnamefont {Cl{\'e}ment}\ \bibnamefont
{Sayrin}}, \bibinfo {author} {\bibfnamefont {Julien}\ \bibnamefont {Bernu}},
\bibinfo {author} {\bibfnamefont {Michel}\ \bibnamefont {Brune}}, \bibinfo
{author} {\bibfnamefont {Jean-Michel}\ \bibnamefont {Raimond}}, \ and\
\bibinfo {author} {\bibfnamefont {Serge}\ \bibnamefont {Haroche}},\
}\bibfield {title} {\enquote {\bibinfo {title} {Reconstruction of
non-classical cavity field states with snapshots of their decoherence},}\
}\href {https://doi.org/10.1038/nature07288} {\bibfield {journal} {\bibinfo
{journal} {Nature}\ }\textbf {\bibinfo {volume} {455}},\ \bibinfo {pages}
{510} (\bibinfo {year} {2008})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Kiesel}\ and\ \citenamefont
{Vogel}(2010)}]{kiesel_2010}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {T.}~\bibnamefont
{Kiesel}}\ and\ \bibinfo {author} {\bibfnamefont {W.}~\bibnamefont {Vogel}},\
}\bibfield {title} {\enquote {\bibinfo {title} {Nonclassicality filters and
quasiprobabilities},}\ }\href {\doibase 10.1103/PhysRevA.82.032107}
{\bibfield {journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo
{volume} {82}},\ \bibinfo {pages} {032107} (\bibinfo {year}
{2010})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Agudelo}\ \emph {et~al.}(2013)\citenamefont
{Agudelo}, \citenamefont {Sperling},\ and\ \citenamefont
{Vogel}}]{agudelo_2013}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {E.}~\bibnamefont
{Agudelo}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Sperling}}, \
and\ \bibinfo {author} {\bibfnamefont {W.}~\bibnamefont {Vogel}},\ }\bibfield
{title} {\enquote {\bibinfo {title} {Quasiprobabilities for multipartite
quantum correlations of light},}\ }\href {\doibase
10.1103/PhysRevA.87.033811} {\bibfield {journal} {\bibinfo {journal} {Phys.
Rev. A}\ }\textbf {\bibinfo {volume} {87}},\ \bibinfo {pages} {033811}
(\bibinfo {year} {2013})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Bohmann}\ and\ \citenamefont
{Agudelo}(2020)}]{bohmann_2020}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {Martin}\ \bibnamefont
{Bohmann}}\ and\ \bibinfo {author} {\bibfnamefont {Elizabeth}\ \bibnamefont
{Agudelo}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Phase-space
inequalities beyond negativities},}\ }\href {\doibase
10.1103/PhysRevLett.124.133601} {\bibfield {journal} {\bibinfo {journal}
{Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {124}},\ \bibinfo {pages}
{133601} (\bibinfo {year} {2020})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Bohmann}\ \emph {et~al.}(2020)\citenamefont
{Bohmann}, \citenamefont {Agudelo},\ and\ \citenamefont
{Sperling}}]{bohmann_2020b}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {Martin}\ \bibnamefont
{Bohmann}}, \bibinfo {author} {\bibfnamefont {Elizabeth}\ \bibnamefont
{Agudelo}}, \ and\ \bibinfo {author} {\bibfnamefont {Jan}\ \bibnamefont
{Sperling}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Probing
nonclassicality with matrices of phase-space distributions},}\ }\href
{\doibase 10.22331/q-2020-10-15-343} {\bibfield {journal} {\bibinfo
{journal} {{Quantum}}\ }\textbf {\bibinfo {volume} {4}},\ \bibinfo {pages}
{343} (\bibinfo {year} {2020})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Biagi}\ \emph {et~al.}(2021)\citenamefont {Biagi},
\citenamefont {Bohmann}, \citenamefont {Agudelo}, \citenamefont {Bellini},\
and\ \citenamefont {Zavatta}}]{biagi_2020}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {Nicola}\ \bibnamefont
{Biagi}}, \bibinfo {author} {\bibfnamefont {Martin}\ \bibnamefont {Bohmann}},
\bibinfo {author} {\bibfnamefont {Elizabeth}\ \bibnamefont {Agudelo}},
\bibinfo {author} {\bibfnamefont {Marco}\ \bibnamefont {Bellini}}, \ and\
\bibinfo {author} {\bibfnamefont {Alessandro}\ \bibnamefont {Zavatta}},\
}\bibfield {title} {\enquote {\bibinfo {title} {Experimental certification
of nonclassicality via phase-space inequalities},}\ }\href {\doibase
10.1103/PhysRevLett.126.023605} {\bibfield {journal} {\bibinfo {journal}
{Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {126}},\ \bibinfo {pages}
{023605} (\bibinfo {year} {2021})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Mari}\ \emph {et~al.}(2011)\citenamefont {Mari},
\citenamefont {Kieling}, \citenamefont {Nielsen}, \citenamefont {Polzik},\
and\ \citenamefont {Eisert}}]{mari_2011}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont
{Mari}}, \bibinfo {author} {\bibfnamefont {K.}~\bibnamefont {Kieling}},
\bibinfo {author} {\bibfnamefont {B.~M.}\ \bibnamefont {Nielsen}},
\bibinfo {author} {\bibfnamefont {E.~S.}\ \bibnamefont {Polzik}}, \ and\
\bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Eisert}},\ }\bibfield
{title} {\enquote {\bibinfo {title} {Directly estimating nonclassicality},}\
}\href {\doibase 10.1103/PhysRevLett.106.010403} {\bibfield {journal}
{\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {106}},\
\bibinfo {pages} {010403} (\bibinfo {year} {2011})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Nielsen}(2015)}]{nielsen_2015}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {Michael~A.}\
\bibnamefont {Nielsen}},\ }\href@noop {} {\emph {\bibinfo {title} {Neural
networks and deep learning}}}\ (\bibinfo {publisher} {Determination Press,
San Francisco},\ \bibinfo {year} {2015})\BibitemShut {NoStop}
\bibitem [{\citenamefont {Hentschel}\ and\ \citenamefont
{Sanders}(2010)}]{hentschel_2010}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {Alexander}\
\bibnamefont {Hentschel}}\ and\ \bibinfo {author} {\bibfnamefont {Barry~C.}\
\bibnamefont {Sanders}},\ }\bibfield {title} {\enquote {\bibinfo {title}
{Machine learning for precise quantum measurement},}\ }\href {\doibase
10.1103/PhysRevLett.104.063603} {\bibfield {journal} {\bibinfo {journal}
{Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {104}},\ \bibinfo {pages}
{063603} (\bibinfo {year} {2010})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Wiebe}\ \emph {et~al.}(2014)\citenamefont {Wiebe},
\citenamefont {Granade}, \citenamefont {Ferrie},\ and\ \citenamefont
{Cory}}]{wiebe_2014}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {Nathan}\ \bibnamefont
{Wiebe}}, \bibinfo {author} {\bibfnamefont {Christopher}\ \bibnamefont
{Granade}}, \bibinfo {author} {\bibfnamefont {Christopher}\ \bibnamefont
{Ferrie}}, \ and\ \bibinfo {author} {\bibfnamefont {D.~G.}\ \bibnamefont
{Cory}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Hamiltonian
learning and certification using quantum resources},}\ }\href
{http://dx.doi.org/10.1103/PhysRevLett.112.190501} {\bibfield {journal}
{\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {112}},\
\bibinfo {pages} {190501} (\bibinfo {year} {2014})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Magesan}\ \emph {et~al.}(2015)\citenamefont
{Magesan}, \citenamefont {Gambetta}, \citenamefont {C\'orcoles},\ and\
\citenamefont {Chow}}]{magesan_2015}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {Easwar}\ \bibnamefont
{Magesan}}, \bibinfo {author} {\bibfnamefont {Jay~M.}\ \bibnamefont
{Gambetta}}, \bibinfo {author} {\bibfnamefont {A.~D.}\ \bibnamefont
{C\'orcoles}}, \ and\ \bibinfo {author} {\bibfnamefont {Jerry~M.}\
\bibnamefont {Chow}},\ }\bibfield {title} {\enquote {\bibinfo {title}
{Machine learning for discriminating quantum measurement trajectories and
improving readout},}\ }\href {\doibase 10.1103/PhysRevLett.114.200501}
{\bibfield {journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf
{\bibinfo {volume} {114}},\ \bibinfo {pages} {200501} (\bibinfo {year}
{2015})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Krenn}\ \emph {et~al.}(2016)\citenamefont {Krenn},
\citenamefont {Malik}, \citenamefont {Fickler}, \citenamefont {Lapkiewicz},\
and\ \citenamefont {Zeilinger}}]{krenn_2016}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {Mario}\ \bibnamefont
{Krenn}}, \bibinfo {author} {\bibfnamefont {Mehul}\ \bibnamefont {Malik}},
\bibinfo {author} {\bibfnamefont {Robert}\ \bibnamefont {Fickler}}, \bibinfo
{author} {\bibfnamefont {Radek}\ \bibnamefont {Lapkiewicz}}, \ and\ \bibinfo
{author} {\bibfnamefont {Anton}\ \bibnamefont {Zeilinger}},\ }\bibfield
{title} {\enquote {\bibinfo {title} {Automated search for new quantum
experiments},}\ }\href {\doibase 10.1103/PhysRevLett.116.090405} {\bibfield
{journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo
{volume} {116}},\ \bibinfo {pages} {090405} (\bibinfo {year}
{2016})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {van Nieuwenburg}\ \emph {et~al.}(2017)\citenamefont
{van Nieuwenburg}, \citenamefont {Liu},\ and\ \citenamefont
{Huber}}]{van_nieuwenburg_2017}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {Evert P. L.}\
\bibnamefont {van Nieuwenburg}}, \bibinfo {author} {\bibfnamefont {Ye-Hua}\
\bibnamefont {Liu}}, \ and\ \bibinfo {author} {\bibfnamefont {Sebastian D.}\
\bibnamefont {Huber}},\ }\bibfield {title} {\enquote {\bibinfo {title}
{Learning phase transitions by confusion},}\ }\href {\doibase
10.1038/nphys4037} {\bibfield {journal} {\bibinfo {journal} {Nat. Phys.}\
}\textbf {\bibinfo {volume} {13}},\ \bibinfo {pages} {435--439} (\bibinfo
{year} {2017})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Carleo}\ and\ \citenamefont
{Troyer}(2017)}]{carleo_2017}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {Giuseppe}\
\bibnamefont {Carleo}}\ and\ \bibinfo {author} {\bibfnamefont {Matthias}\
\bibnamefont {Troyer}},\ }\bibfield {title} {\enquote {\bibinfo {title}
{Solving the quantum many-body problem with artificial neural networks},}\
}\href {\doibase 10.1126/science.aag2302} {\bibfield {journal} {\bibinfo
{journal} {Science}\ }\textbf {\bibinfo {volume} {355}},\ \bibinfo {pages}
{602--606} (\bibinfo {year} {2017})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Torlai}\ \emph {et~al.}(2018)\citenamefont {Torlai},
\citenamefont {Mazzola}, \citenamefont {Carrasquilla}, \citenamefont
{Troyer}, \citenamefont {Melko},\ and\ \citenamefont {Carleo}}]{torlai_2018}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {Giacomo}\ \bibnamefont
{Torlai}}, \bibinfo {author} {\bibfnamefont {Guglielmo}\ \bibnamefont
{Mazzola}}, \bibinfo {author} {\bibfnamefont {Juan}\ \bibnamefont
{Carrasquilla}}, \bibinfo {author} {\bibfnamefont {Matthias}\ \bibnamefont
{Troyer}}, \bibinfo {author} {\bibfnamefont {Roger}\ \bibnamefont {Melko}}, \
and\ \bibinfo {author} {\bibfnamefont {Giuseppe}\ \bibnamefont {Carleo}},\
}\bibfield {title} {\enquote {\bibinfo {title} {Neural-network quantum state
tomography},}\ }\href {\doibase 10.1038/s41567-018-0048-5} {\bibfield
{journal} {\bibinfo {journal} {Nat. Phys.}\ }\textbf {\bibinfo {volume}
{14}},\ \bibinfo {pages} {447--450} (\bibinfo {year} {2018})}\BibitemShut
{NoStop}
\bibitem [{\citenamefont {Lumino}\ \emph {et~al.}(2018)\citenamefont {Lumino},
\citenamefont {Polino}, \citenamefont {Rab}, \citenamefont {Milani},
\citenamefont {Spagnolo}, \citenamefont {Wiebe},\ and\ \citenamefont
{Sciarrino}}]{lumino_2018}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {Alessandro}\
\bibnamefont {Lumino}}, \bibinfo {author} {\bibfnamefont {Emanuele}\
\bibnamefont {Polino}}, \bibinfo {author} {\bibfnamefont {Adil~S.}\
\bibnamefont {Rab}}, \bibinfo {author} {\bibfnamefont {Giorgio}\ \bibnamefont
{Milani}}, \bibinfo {author} {\bibfnamefont {Nicol\`o}\ \bibnamefont
{Spagnolo}}, \bibinfo {author} {\bibfnamefont {Nathan}\ \bibnamefont
{Wiebe}}, \ and\ \bibinfo {author} {\bibfnamefont {Fabio}\ \bibnamefont
{Sciarrino}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Experimental
phase estimation enhanced by machine learning},}\ }\href {\doibase
10.1103/PhysRevApplied.10.044033} {\bibfield {journal} {\bibinfo {journal}
{Phys. Rev. Applied}\ }\textbf {\bibinfo {volume} {10}},\ \bibinfo {pages}
{044033} (\bibinfo {year} {2018})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Bukov}\ \emph {et~al.}(2018)\citenamefont {Bukov},
\citenamefont {Day}, \citenamefont {Sels}, \citenamefont {Weinberg},
\citenamefont {Polkovnikov},\ and\ \citenamefont {Mehta}}]{bukov_2018}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {Marin}\ \bibnamefont
{Bukov}}, \bibinfo {author} {\bibfnamefont {Alexandre G.~R.}\ \bibnamefont
{Day}}, \bibinfo {author} {\bibfnamefont {Dries}\ \bibnamefont {Sels}},
\bibinfo {author} {\bibfnamefont {Phillip}\ \bibnamefont {Weinberg}},
\bibinfo {author} {\bibfnamefont {Anatoli}\ \bibnamefont {Polkovnikov}}, \
and\ \bibinfo {author} {\bibfnamefont {Pankaj}\ \bibnamefont {Mehta}},\
}\bibfield {title} {\enquote {\bibinfo {title} {Reinforcement learning in
different phases of quantum control},}\ }\href {\doibase
10.1103/PhysRevX.8.031086} {\bibfield {journal} {\bibinfo {journal} {Phys.
Rev. X}\ }\textbf {\bibinfo {volume} {8}},\ \bibinfo {pages} {031086}
(\bibinfo {year} {2018})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {F\"osel}\ \emph {et~al.}(2018)\citenamefont
{F\"osel}, \citenamefont {Tighineanu}, \citenamefont {Weiss},\ and\
\citenamefont {Marquardt}}]{fosel_2018}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {Thomas}\ \bibnamefont
{F\"osel}}, \bibinfo {author} {\bibfnamefont {Petru}\ \bibnamefont
{Tighineanu}}, \bibinfo {author} {\bibfnamefont {Talitha}\ \bibnamefont
{Weiss}}, \ and\ \bibinfo {author} {\bibfnamefont {Florian}\ \bibnamefont
{Marquardt}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Reinforcement
learning with neural networks for quantum feedback},}\ }\href {\doibase
10.1103/PhysRevX.8.031084} {\bibfield {journal} {\bibinfo {journal} {Phys.
Rev. X}\ }\textbf {\bibinfo {volume} {8}},\ \bibinfo {pages} {031084}
(\bibinfo {year} {2018})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Canabarro}\ \emph {et~al.}(2019)\citenamefont
{Canabarro}, \citenamefont {Brito},\ and\ \citenamefont
{Chaves}}]{canabarro_2019}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {Askery}\ \bibnamefont
{Canabarro}}, \bibinfo {author} {\bibfnamefont {Samura\'{\i}}\ \bibnamefont
{Brito}}, \ and\ \bibinfo {author} {\bibfnamefont {Rafael}\ \bibnamefont
{Chaves}},\ }\bibfield {title} {\enquote {\bibinfo {title} {{Machine
Learning Nonlocal Correlations}},}\ }\href {\doibase
10.1103/PhysRevLett.122.200401} {\bibfield {journal} {\bibinfo {journal}
{Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {122}},\ \bibinfo {pages}
{200401} (\bibinfo {year} {2019})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Cimini}\ \emph {et~al.}(2019)\citenamefont {Cimini},
\citenamefont {Gianani}, \citenamefont {Spagnolo}, \citenamefont {Leccese},
\citenamefont {Sciarrino},\ and\ \citenamefont {Barbieri}}]{cimini_2019}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {Valeria}\ \bibnamefont
{Cimini}}, \bibinfo {author} {\bibfnamefont {Ilaria}\ \bibnamefont
{Gianani}}, \bibinfo {author} {\bibfnamefont {Nicol\`o}\ \bibnamefont
{Spagnolo}}, \bibinfo {author} {\bibfnamefont {Fabio}\ \bibnamefont
{Leccese}}, \bibinfo {author} {\bibfnamefont {Fabio}\ \bibnamefont
{Sciarrino}}, \ and\ \bibinfo {author} {\bibfnamefont {Marco}\ \bibnamefont
{Barbieri}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Calibration of
quantum sensors by neural networks},}\ }\href {\doibase
10.1103/PhysRevLett.123.230502} {\bibfield {journal} {\bibinfo {journal}
{Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {123}},\ \bibinfo {pages}
{230502} (\bibinfo {year} {2019})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Poulsen~Nautrup}\ \emph {et~al.}(2019)\citenamefont
{Poulsen~Nautrup}, \citenamefont {Delfosse}, \citenamefont {Dunjko},
\citenamefont {Briegel},\ and\ \citenamefont {Friis}}]{nautrup_2019}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {Hendrik}\ \bibnamefont
{Poulsen~Nautrup}}, \bibinfo {author} {\bibfnamefont {Nicolas}\ \bibnamefont
{Delfosse}}, \bibinfo {author} {\bibfnamefont {Vedran}\ \bibnamefont
{Dunjko}}, \bibinfo {author} {\bibfnamefont {Hans~J.}\ \bibnamefont
{Briegel}}, \ and\ \bibinfo {author} {\bibfnamefont {Nicolai}\ \bibnamefont
{Friis}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Optimizing
{Q}uantum {E}rror {C}orrection {C}odes with {R}einforcement {L}earning},}\
}\href {\doibase 10.22331/q-2019-12-16-215} {\bibfield {journal} {\bibinfo
{journal} {{Quantum}}\ }\textbf {\bibinfo {volume} {3}},\ \bibinfo {pages}
{215} (\bibinfo {year} {2019})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Agresti}\ \emph {et~al.}(2019)\citenamefont
{Agresti}, \citenamefont {Viggianiello}, \citenamefont {Flamini},
\citenamefont {Spagnolo}, \citenamefont {Crespi}, \citenamefont {Osellame},
\citenamefont {Wiebe},\ and\ \citenamefont {Sciarrino}}]{agresti_2019}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {Iris}\ \bibnamefont
{Agresti}}, \bibinfo {author} {\bibfnamefont {Niko}\ \bibnamefont
{Viggianiello}}, \bibinfo {author} {\bibfnamefont {Fulvio}\ \bibnamefont
{Flamini}}, \bibinfo {author} {\bibfnamefont {Nicol\`o}\ \bibnamefont
{Spagnolo}}, \bibinfo {author} {\bibfnamefont {Andrea}\ \bibnamefont
{Crespi}}, \bibinfo {author} {\bibfnamefont {Roberto}\ \bibnamefont
{Osellame}}, \bibinfo {author} {\bibfnamefont {Nathan}\ \bibnamefont
{Wiebe}}, \ and\ \bibinfo {author} {\bibfnamefont {Fabio}\ \bibnamefont
{Sciarrino}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Pattern
recognition techniques for boson sampling validation},}\ }\href {\doibase
10.1103/PhysRevX.9.011013} {\bibfield {journal} {\bibinfo {journal} {Phys.
Rev. X}\ }\textbf {\bibinfo {volume} {9}},\ \bibinfo {pages} {011013}
(\bibinfo {year} {2019})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Gebhart}\ and\ \citenamefont
{Bohmann}(2020)}]{gebhart_2020}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {Valentin}\
\bibnamefont {Gebhart}}\ and\ \bibinfo {author} {\bibfnamefont {Martin}\
\bibnamefont {Bohmann}},\ }\bibfield {title} {\enquote {\bibinfo {title}
{Neural-network approach for identifying nonclassicality from click-counting
data},}\ }\href {\doibase 10.1103/PhysRevResearch.2.023150} {\bibfield
{journal} {\bibinfo {journal} {Phys. Rev. Research}\ }\textbf {\bibinfo
{volume} {2}},\ \bibinfo {pages} {023150} (\bibinfo {year}
{2020})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Cimini}\ \emph {et~al.}(2020)\citenamefont {Cimini},
\citenamefont {Barbieri}, \citenamefont {Treps}, \citenamefont {Walschaers},\
and\ \citenamefont {Parigi}}]{cimini_2020}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {Valeria}\ \bibnamefont
{Cimini}}, \bibinfo {author} {\bibfnamefont {Marco}\ \bibnamefont
{Barbieri}}, \bibinfo {author} {\bibfnamefont {Nicolas}\ \bibnamefont
{Treps}}, \bibinfo {author} {\bibfnamefont {Mattia}\ \bibnamefont
{Walschaers}}, \ and\ \bibinfo {author} {\bibfnamefont {Valentina}\
\bibnamefont {Parigi}},\ }\bibfield {title} {\enquote {\bibinfo {title}
{Neural networks for detecting multimode Wigner negativity},}\ }\href
{\doibase 10.1103/PhysRevLett.125.160504} {\bibfield {journal} {\bibinfo
{journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {125}},\ \bibinfo
{pages} {160504} (\bibinfo {year} {2020})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Tiunov}\ \emph {et~al.}(2020)\citenamefont {Tiunov},
\citenamefont {(Vyborova)}, \citenamefont {Ulanov}, \citenamefont {Lvovsky},\
and\ \citenamefont {Fedorov}}]{tiunov_2020}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {E.~S.}\ \bibnamefont
{Tiunov}}, \bibinfo {author} {\bibfnamefont {V.~V.~Tiunova}\ \bibnamefont
{(Vyborova)}}, \bibinfo {author} {\bibfnamefont {A.~E.}\ \bibnamefont
{Ulanov}}, \bibinfo {author} {\bibfnamefont {A.~I.}\ \bibnamefont {Lvovsky}},
\ and\ \bibinfo {author} {\bibfnamefont {A.~K.}\ \bibnamefont {Fedorov}},\
}\bibfield {title} {\enquote {\bibinfo {title} {Experimental quantum
homodyne tomography via machine learning},}\ }\href {\doibase
10.1364/OPTICA.389482} {\bibfield {journal} {\bibinfo {journal} {Optica}\
}\textbf {\bibinfo {volume} {7}},\ \bibinfo {pages} {448--454} (\bibinfo
{year} {2020})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Nolan}\ \emph {et~al.}(2020)\citenamefont {Nolan},
\citenamefont {Smerzi},\ and\ \citenamefont {Pezz\`{e}}}]{nolan_2020}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {Samuel~P.}\
\bibnamefont {Nolan}}, \bibinfo {author} {\bibfnamefont {Augusto}\
\bibnamefont {Smerzi}}, \ and\ \bibinfo {author} {\bibfnamefont {Luca}\
\bibnamefont {Pezz\`{e}}},\ }\href@noop {} {\enquote {\bibinfo {title} {A
machine learning approach to Bayesian parameter estimation},}\ } (\bibinfo
{year} {2020}),\ \Eprint {http://arxiv.org/abs/arXiv:2006.02369}
{arXiv:2006.02369} \BibitemShut {NoStop}
\bibitem [{\citenamefont {Dunjko}\ and\ \citenamefont
{Briegel}(2018)}]{dunjko_2018}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {Vedran}\ \bibnamefont
{Dunjko}}\ and\ \bibinfo {author} {\bibfnamefont {Hans~J.}\ \bibnamefont
{Briegel}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Machine
learning {\&} artificial intelligence in the quantum domain: a review of
recent progress},}\ }\href {\doibase 10.1088/1361-6633/aab406} {\bibfield
{journal} {\bibinfo {journal} {Rep. Prog. Phys.}\ }\textbf {\bibinfo
{volume} {81}},\ \bibinfo {pages} {074001} (\bibinfo {year}
{2018})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Ahmed}\ \emph {et~al.}(2020)\citenamefont {Ahmed},
\citenamefont {Mu{\~n}oz}, \citenamefont {Nori},\ and\ \citenamefont
{Kockum}}]{ahmed_2020}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont
{Ahmed}}, \bibinfo {author} {\bibfnamefont {C.~S.}\ \bibnamefont
{Mu{\~n}oz}}, \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Nori}}, \
and\ \bibinfo {author} {\bibfnamefont {A.~F.}\ \bibnamefont {Kockum}},\
}\href@noop {} {\enquote {\bibinfo {title} {Classification and reconstruction
of optical quantum states with deep neural networks},}\ } (\bibinfo {year}
{2020}),\ \Eprint {http://arxiv.org/abs/arXiv:2012.02185} {arXiv:2012.02185}
\BibitemShut {NoStop}
\bibitem [{\citenamefont {Carmichael}(1987)}]{Carmichael_1987}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {H.~J.}\ \bibnamefont
{Carmichael}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Spectrum of
squeezing and photocurrent shot noise: a normally ordered treatment},}\
}\href {\doibase 10.1364/JOSAB.4.001588} {\bibfield {journal} {\bibinfo
{journal} {J. Opt. Soc. Am. B}\ }\textbf {\bibinfo {volume} {4}},\ \bibinfo
{pages} {1588--1603} (\bibinfo {year} {1987})}\BibitemShut {NoStop}
\end{thebibliography}
\end{document}
\begin{document}
\title[Lyapunov Functionals and Local Dissipativity] {Lyapunov Functionals and Local
Dissipativity for the Vorticity Equation in $\mathrm{L^{p}}$ and
Besov Spaces}
\author[Utpal Manna]{Utpal Manna*\\ Department of Mathematics, University of Wyoming,\\ Laramie, Wyoming 82071,
USA\\ e-mail: [email protected]\\ \\}
\thanks{* This research is supported by Army Research Office, Probability and
Statistics Program, grant number DODARMY1736}
\author[S.S. Sritharan]{\\ \\S.S. Sritharan*\\Department of Mathematics, University of Wyoming,\\ Laramie, Wyoming
82071, USA\\ e-mail: [email protected]}
\subjclass[2000] {35Q35, 47H06, 76D03, 76D05}
\keywords{Vorticity equation, Lyapunov function, Dissipative
operator, Littlewood-Paley decomposition, Besov Spaces}
\begin{abstract}
In this paper we establish the local Lyapunov property of certain
$\mathrm{L}^{p}$ and Besov norms of the vorticity field. We
resolve, in part, an open problem posed by Tosio Kato for the
three-dimensional Navier-Stokes equation by studying the vorticity
equation. The local dissipativity of the sum of the linear and
nonlinear operators of the vorticity equation is established. One
of the main techniques used here is Littlewood-Paley analysis.
\end{abstract}
\maketitle
\section{Introduction}
Stability and control of a dynamical system are often studied using
Lyapunov functions~\cite{FrKo96, La66, La62}. The local Lyapunov
property we study in this paper can thus be of interest to the
understanding, control and stabilization of turbulent
fields~\cite{Sr98}. This property also sheds some light towards the
research on global Navier-Stokes solutions in super-critical spaces
(for definitions and examples of these spaces see~\cite{Ca95}
and~\cite{CaMe95}). Weak solutions of the Navier-Stokes equation
satisfy the energy inequality which in turn implies that the
$\mathrm{L}^2$-norm of velocity decreases in time~\cite{La69}. This
idea was generalized by Tosio Kato~\cite{Ka90}, who proved that for
every solution of the Navier-Stokes equation in $\mathbb{R}^m$ ($m\geq
3$) there exists a large family of Lyapunov functions, which
decrease monotonically in time provided the solution has small
$\mathrm{L}^m(\mathbb{R}^m)$-norm. More specifically, Kato proved
the local Lyapunov property in the $\mathrm{L}^p$-norm for
$1<p<\infty$ and in the $\mathrm{W}^{s,p}$-norm for $ s>0,\ 2\leq
p<\infty$. He also noted that for any Lyapunov function
$\mathfrak{L}(u)$ and any monotone increasing function $\Phi$,
$\Phi(\mathfrak{L}(u))$ is again a Lyapunov function. Moreover,
Kato proved the local dissipativity of the sum of the linear
and nonlinear operators of the Navier-Stokes equation in the
$\mathrm{L}^p$-norm for $2\leq p<\infty$. However, the local
dissipativity in the $\mathrm{W}^{s,p}$-norm for $s > 0$ has
remained an open problem.
Cannone and Planchon~\cite{CaPl00} proved the Lyapunov property for
the $3$-D Navier-Stokes equation in Besov spaces. In particular they
proved that if $p, q \geq 2$, $\frac{3}{p}+\frac{2}{q}>1$ and as
long as the $\dot{B}_{\infty}^{-1, \infty}$-norm of the velocity is
small, the $\dot{B}_{p}^{-1+3/p, q}$-norm of the velocity decreases in
time.
In~\cite{KoTa01} Koch and Tataru considered the local and global (in
time) well-posedness for the incompressible Navier-Stokes equation
and proved the existence and uniqueness of global mild solution in
$BMO^{-1}$ provided that the initial solution is small enough in
this space. Due to Cannone and Planchon~\cite{CaPl00}, existence of
Lyapunov functions for small $\dot{B}_{\infty}^{-1,
\infty}$-norm is known, but the global solvability of the Navier-Stokes
equation in this space remains an open problem. By the embedding
$BMO^{-1}\subset \dot{B}_{\infty}^{-1, \infty}$, $BMO^{-1}$
is thus the largest space of initial data for which global mild
solution has been shown to exist.
Recently, P. G. Lemari\'{e}-Rieusset~\cite{Lr02} has extended the
result of Cannone and Planchon~\cite{CaPl00} to a larger class of
initial data. He proved that for initial data $u_{0} \in
\dot{B}_{p}^{s,q} \cap BMO^{-1}$ where $s > -1$, $p\geq 2$, $q\geq 1$
and $s + \frac{2}{q} > 0$, there exists
a constant $C_{0} > 0$ independent of $p$ and
$q$, such that if $u$ is a Koch-Tataru solution of the Navier-Stokes
equation satisfying $\sup_{t}\parallel u(t)
\parallel_{\dot{B}_{\infty}^{-1, \infty}} < C_{0}$, then
$ t\rightarrow\parallel u(t)\parallel_{\dot{B}_{p}^{s,q}}$ is a
Lyapunov function.
Local monotonicity of a different type has been used in proving
solvability in unbounded domains for the Navier-Stokes equations in
$2$-D~\cite{MeSr00}, in $3$-D~\cite{BaSr01} and for modified $2$-D
Navier-Stokes with artificial compressibility~\cite{MaMeSr06}. Local
monotonicity has also been useful in Control theory~\cite{BaSr01}.
For extensive theories and applications on dissipative and accretive
operators see Barbu~\cite{Ba76} and Browder~\cite{Br76}.
In this paper, we achieve a partial resolution to the open problems
posed by Kato~\cite{Ka90} for the Navier-Stokes equation by studying
the vorticity equation:
\begin{equation}\label{e1.1}\left\{\begin{aligned}
&\partial_{t} \omega\ - \nu\triangle\omega+ u\cdot\nabla\omega -
\omega\cdot\nabla u = 0,\
\text{in} \ \mathbb{R}^{m}\times \mathbb{R}_{+},\\
&\nabla\cdot\omega =0,\ \text{in} \ \mathbb{R}^{m}\times \mathbb{R}_{+},\\
&\omega(x,0)= \omega_{0}(x), \ x \in \ \mathbb{R}^{m}.
\end{aligned}\right.\end{equation}
To be specific, we have proved that the vorticity equation has a
family of Lyapunov functions in $\mathrm{L}^{p}(\mathbb{R}^m)$ for
$2\leq p<\infty$ and $m \geq 3$ provided that the
$\mathrm{L}^{m}$-norm of the velocity is small enough. We then prove
that the $\dot{B}_{p}^{-1+3/p, q}$-norm of the vorticity is a Lyapunov
function for the $3$-D vorticity equation provided the velocity and the
vorticity are small in the $\dot{B}_{\infty}^{-1, \infty}$-norm and
$\dot{B}_{\infty}^{-2, \infty}$-norm respectively.
We have also proved the dissipativity of the sum of the linear and
nonlinear operators of the vorticity equation \eqref{e1.1} in
$\mathrm{L}^{p}$ for $2\leq p<\infty$, which in part answers the
open problem of Kato for the local dissipativity of the
Navier-Stokes operators in $\mathrm{W}^{1,p}$-norm.
In Sections 2 and 3 we recall some basic facts concerning the
Littlewood-Paley decomposition, homogeneous Besov spaces and the
paraproduct rule. The main results are presented in Section 4.
\section{Some Definitions and Estimates}
\begin{defn}($Duality\ Map$)
The mapping $G: X\rightarrow 2^{X^{\star}}$ is
called the duality mapping of the space $X$ if, for every $x \in X$, $$ G(x) =
\{x^{\star}\in X^{\star};\ \langle x, x^{\star}\rangle =\ \parallel
x \parallel_{X}^{2}\ =\ \parallel x^{\star}
\parallel_{X^{\star}}^{2} \}.$$
\end{defn}
\begin{rem}
The duality map for $\mathrm{L}^{p}$ is given by
$$G(x)= \frac{x\mid
x\mid^{p-2}}{\parallel x\parallel_{p}^{p-2}}.$$
\end{rem}
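The two defining identities of the duality map, $\langle x, G(x)\rangle = \parallel x\parallel_{X}^{2} = \parallel G(x)\parallel_{X^{\star}}^{2}$, can be sanity-checked numerically for the finite-dimensional $\ell^{p}$ analogue of the formula in the remark; the dimension, exponent and random vector below are illustrative choices, not part of the original text.

```python
import numpy as np

def duality_map(x, p):
    """l^p analogue of the duality map G(x) = x|x|^{p-2} / ||x||_p^{p-2}."""
    norm = np.linalg.norm(x, ord=p)
    return x * np.abs(x) ** (p - 2) / norm ** (p - 2)

rng = np.random.default_rng(0)
x = rng.standard_normal(10)        # illustrative vector
p = 4.0
q = p / (p - 1)                    # conjugate exponent, 1/p + 1/q = 1

g = duality_map(x, p)
x_norm = np.linalg.norm(x, ord=p)

# Defining properties: <x, G(x)> = ||x||_p^2  and  ||G(x)||_q = ||x||_p
print(np.dot(x, g) - x_norm ** 2)          # ~ 0
print(np.linalg.norm(g, ord=q) - x_norm)   # ~ 0
```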
\begin{defn}($Dissipative\ Operator$)
An operator $\textit{A}$ is said to be dissipative
if $$\langle \textit{A} x - \textit{A} y, \ G(x-y) \rangle \leq 0,\qquad \ \forall x, y \in \emph{D}(\textit{A}).$$
An operator $\textit{A}$ is said to be accretive if $- \textit{A}$ is
dissipative.\\
See~\cite{Ba76} and~\cite{Br76} for extensive theories and applications on nonlinear
operators in Banach spaces.
\end{defn}
\begin{defn}($Lyapunov\ Function$)
Let $v$ be a solution of the Navier-Stokes equation. Then any
function $\mathfrak{L}(v)(t)$ monotonically decreasing in time is
called a Lyapunov function associated to $v$.
\end{defn}
The most well-known example is certainly provided by the
energy~\cite{La69}
$$ E(v)(t) = \frac{1}{2}\parallel v(t)\parallel_{2}^{2}.$$
The energy equality for the Navier-Stokes equation yields
\begin{align}
\frac{d}{dt}E(t) + \nu\parallel\nabla v(t)\parallel_{2}^{2}\ =
0,\nonumber
\end{align}
which proves that $E(t)$ is a Lyapunov function.
Let us now recall two lemmas due to Kato~\cite{Ka90}.
\begin{lem}
Let $2\leq p <\infty\ \text{and}\ \phi\in \mathrm{W}^{1,p}$. Define
\begin{align}\label{e2.1}
&Q_{p}(\phi) = \int_{\phi(x)\neq
0}\mid\phi(x)\mid^{p-2}\mid\nabla\phi(x)\mid^{2}\,\mathrm{d} x\ \geq 0.
\end{align}
Then
\begin{align}\label{e2.2}
CQ_{p}(\phi) \leq -\langle\mid\phi\mid^{p-2}\phi,
\Delta\phi\rangle \ < \infty,
\end{align}
where $C$ denotes a positive constant.
\end{lem}
\begin{lem}
Let $2\leq p <\infty$ and $\phi\in \mathrm{W}^{1,p}$. Then
\begin{align}\label{e2.3}
\parallel \phi
\parallel_{\frac{mp}{m-2}} \ \leq CQ_{p}(\phi)^\frac{1}{p}.
\end{align}
\end{lem}
\begin{lem}
Let $u$ be the velocity field obtained from $\omega$ via the
Biot-Savart law:
\begin{align}\label{e2.4}
u(x) =
-\frac{\Gamma(m/2+1)}{m(m-2)\pi^{m/2}}\int_{\mathbb{R}^{m}}\frac{(x-y)}{\mid
x-y
\mid^{m}}\ \times \omega(y) \,\mathrm{d} y, \qquad x \in {\mathbb{R}^{m}},
m\geq 3.
\end{align}
\textbf{(a)} Assume that $ 1 < p <\infty $. Then for every
divergence-free vector field $u$ whose gradient is in
$\mathrm{L}^{p}$, there exists a $ C > 0 ,$ depending on $p$, such
that
\begin{align}\label{e2.5}
\parallel\nabla u \parallel_{p}\leq \ \
C\parallel\omega\parallel_{p}.
\end{align}
\textbf{(b)} If $\omega\in \mathrm{L}^{1}(\mathbb{R}^{m}) \cap
\mathrm{L}^{p}(\mathbb{R}^{m})$, $\frac{m}{m-1} \ < \ p \ \leq \
\infty $, then
\begin{align}\label{e2.6}
\parallel u \parallel_{\mathrm{L}^{p}(\mathbb{R}^{m})} \ \leq
C\big(\parallel\omega\parallel_{\mathrm{L}^{1}(\mathbb{R}^{m})} \ +
\
\parallel\omega\parallel_{\mathrm{L}^{p}(\mathbb{R}^{m})}\big).
\end{align}
\end{lem}
\begin{proof}
\textbf{(a)} See Theorem 3.1.1 in~\cite{Ch98}.\\
\textbf{(b)} The proof is due to Ying and Zhang~\cite{LyZp96}, Lemma
3.3.1.
\end{proof}
\section{Littlewood-Paley Decomposition and \\ Besov Spaces}
In this section, we recall some classical results concerning the
homogeneous Besov spaces in terms of the Littlewood-Paley
decomposition. Several related embedding relations and inequalities
will also be given here. For more details the reader is referred to
the
books~\cite{JbLo76},~\cite{Ca04},~\cite{Ch98},~\cite{Pe76},~\cite{Tr83}
for a comprehensive treatment.
\subsection{Littlewood-Paley Decomposition:} Let us start with the Littlewood-Paley decomposition
in $\mathbb{R}^{3}$. To this end, we take an arbitrary function
$\psi$ in the Schwartz class $\mathcal{S}(\mathbb{R}^{3}) $ whose
Fourier transform $\hat{\psi}$ is such that
\begin{eqnarray}
\mathrm{supp}\ \hat{\psi} \subset \{\xi, \frac{1}{2}\leq \mid\xi\mid\leq 2\},
\end{eqnarray}
and
\begin{align} \forall\xi\neq 0, \quad\sum_{j \in\mathbb{Z}}
\hat{\psi}(\frac{\xi}{2^{j}}) = 1.\nonumber
\end{align} Let us define
$\varphi$ by
\begin{align}
\hat{\varphi}(\xi) = 1 - \sum_{j\geq
0}\hat{\psi}(\frac{\xi}{2^{j}}),\nonumber
\end{align}
and hence
\begin{eqnarray}
\mathrm{supp}\ \hat{\varphi} \subset \{\xi, \mid\xi\mid\leq 1\}.
\end{eqnarray}
For $j\in \mathbb{Z}$, we write $\varphi_{j}(x) =
2^{3j}\varphi(2^{j}x)$. We denote by $S_{j}$ and $\triangle_{j}$,
the convolution operators with $\varphi_{j}$ and $\psi_{j}$
respectively. Hence
\begin{align} S_{j}(f) =
f\star\varphi_{j},\nonumber
\end{align}
and
\begin{align}
\triangle_{j}f = \psi_{j}\star f,\quad\text{where}\ \psi_{j}(x) =
2^{3j}\psi(2^{j}x).\nonumber
\end{align}
Then
\begin{align}
S_{j} = \sum_{p<j} \triangle_{p}\quad\text{and}\quad
I = \sum_{j\in \mathbb{Z}}\ \triangle_{j}.\nonumber
\end{align}
The dyadic decomposition
\begin{eqnarray}\label{e3.3}
u = \sum_{j\in \mathbb{Z}}\ \triangle_{j} u,
\end{eqnarray}
is called the homogeneous Littlewood-Paley decomposition of $u$ and
converges only in the quotient space
$\mathcal{S}'/\mathcal{P}$, where $\mathcal{S}'$ is the
space of tempered distributions and $\mathcal{P}$ is the space of
polynomials. Let us now mention the following
quasi-orthogonality properties of the dyadic
decomposition~\cite{Ch98} (Proposition 2.1.1):
\begin{eqnarray}\label{e3.4}
\triangle_{p}\triangle_{q}u = 0 \quad\text{if}\quad\mid p-q \mid
\geq 2,
\end{eqnarray}
\begin{eqnarray}\label{e3.5}
\triangle_{p}(S_{q-2}u\triangle_{q}u) = 0 \quad\text{if}\quad\mid
p-q \mid \geq 4.
\end{eqnarray}
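The reconstruction identity $I = \sum_{j}\triangle_{j}$ and the quasi-orthogonality property \eqref{e3.4} can be illustrated numerically. The sketch below is a crude stand-in, not part of the original text: it works on a one-dimensional periodic grid and replaces the smooth bump $\hat{\psi}$ by sharp indicator cutoffs on dyadic annuli, so that $\triangle_{p}\triangle_{q} = 0$ already for $p\neq q$; the grid size and random data are illustrative.

```python
import numpy as np

N = 256                                    # grid points on the 1-D torus
xi = np.fft.fftfreq(N, d=1.0 / N)          # integer frequencies

def delta_j(f, j):
    """Dyadic block: restrict f to frequencies 2^j <= |xi| < 2^{j+1}.
    Sharp cutoffs are used instead of the smooth bump psi."""
    fh = np.fft.fft(f)
    mask = (np.abs(xi) >= 2 ** j) & (np.abs(xi) < 2 ** (j + 1))
    return np.real(np.fft.ifft(fh * mask))

rng = np.random.default_rng(1)
f = rng.standard_normal(N)
f -= f.mean()                              # remove the zero mode (the "polynomial" part)

blocks = [delta_j(f, j) for j in range(8)] # 2^8 = 256 covers all remaining frequencies
print(np.allclose(sum(blocks), f))         # f = sum_j Delta_j f

# Quasi-orthogonality: with sharp cutoffs Delta_p Delta_q = 0 whenever p != q
print(np.allclose(delta_j(delta_j(f, 3), 5), 0.0))
```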
\subsection{Besov Spaces:}
Let $0<p, q\leq\infty$ and $s\in \mathbb{R}$. Then a tempered
distribution $f$ belongs to the homogeneous Besov space
$\dot{B}_{p}^{s,q}$ if and only if
\begin{eqnarray}\label{e3.6}
\Big(\sum_{j\in\mathbb{Z}}\
2^{jsq}\parallel\triangle_{j}f\parallel_{p}^{q}\Big)^{\frac{1}{q}} <
\infty
\end{eqnarray}
and $f = \sum_{j\in \mathbb{Z}}\triangle_{j}f$ in $
\mathcal{S}'/\mathcal{P}_{m}$, where $\mathcal{P}_{m}$ is the
space of polynomials of degree $\leq m$ and $m = [s-\frac{d}{p}]$,
the integer part of $s-\frac{d}{p}$.
The Besov space is a quasi-Banach space~\cite{RuSi96}. Here we recall
the following standard embedding
rules~\cite{Tr83} (chapter 2.7):\\
If $s_{1} > s_{2}$ and $p_{2}\geq p_{1}\geq 1$ are such that
$s_{1}-\frac{d}{p_{1}} = s_{2}-\frac{d}{p_{2}}$, then
\begin{eqnarray}
\dot{B}_{p_{1}}^{s_{1},q}\hookrightarrow \dot{B}_{p_{2}}^{s_{2},q}.
\end{eqnarray}
Moreover, if $q_{1} < q_{2}$ then
\begin{eqnarray}
\dot{B}_{p}^{s,q_{1}}\hookrightarrow \dot{B}_{p}^{s,q_{2}}.
\end{eqnarray}
The above mentioned embeddings are also valid for inhomogeneous
Besov spaces. For more embedding theorems and their proofs we refer
the readers to~\cite{Pe76} and~\cite{Tr83}.
Next let us recall the following result from Chapter 3 in
Triebel~\cite{Tr83}:
\begin{lem}
Let $1\leq p, q\leq\infty$ and $s<0$. Then for all $f\in
\dot{B}_{p}^{s,q}$ we have
\begin{eqnarray}\label{e3.9}
\Big(\sum_{j\in\mathbb{Z}}\
2^{jsq}\parallel\triangle_{j}f\parallel_{p}^{q}\Big)^{\frac{1}{q}} <
\infty \ \Leftrightarrow \ \Big(\sum_{j\in\mathbb{Z}}\
2^{jsq}\parallel S_{j}f\parallel_{p}^{q}\Big)^{\frac{1}{q}} <
\infty.
\end{eqnarray}
\end{lem}
Now we recall the following versions of Bernstein inequalities
(chapter 3 in~\cite{Lr02}):
\begin{lem}
Let $1\leq p\leq\infty$. Then there exist constants $C_{0},
C_{1}, C_{2} > 0$ such that \\
\textbf{(a)} If $f$ has its frequency in a ball $\mathbb{B}(0,
\lambda)$ $(\mathrm{supp}\ \mathcal{F}(f) \subset \mathbb{B}(0, \lambda))$
then
\begin{eqnarray}\label{e3.10}
\parallel (-\triangle)^{\frac{s}{2}} f\parallel_{p} \ \leq C_{0} \ \lambda^{\mid
s\mid}\parallel f\parallel_{p}.
\end{eqnarray}
\textbf{(b)} If $f$ has its frequency in an annulus $\mathbb{C}(0,
A\lambda, B\lambda)$ \\ $(\mathrm{supp}\ \mathcal{F}(f) \subset \{\xi,
A\lambda \leq \mid\xi\mid \leq B\lambda\} )$ then
\begin{eqnarray}\label{e3.11}
C_{1} \ \lambda^{\mid s\mid}\parallel f\parallel_{p} \ \leq \
\parallel (-\triangle)^{\frac{s}{2}} f\parallel_{p} \ \leq C_{2} \ \lambda^{\mid
s\mid}\parallel f\parallel_{p}.
\end{eqnarray}
\end{lem}
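For $p=2$ the two-sided bound \eqref{e3.11} reduces, by Parseval, to the statement that the Fourier multiplier $\mid\xi\mid^{s}$ is comparable to $\lambda^{\mid s\mid}$ on the annulus. A minimal numerical sketch for $s=1$ on a discrete torus (sharp annulus, illustrative sizes; not part of the original text):

```python
import numpy as np

N = 512
xi = np.fft.fftfreq(N, d=1.0 / N)           # integer frequencies on the torus
lam = 16                                    # annulus radius lambda (illustrative)

rng = np.random.default_rng(2)
# Fourier coefficients supported on the annulus lam <= |xi| <= 2*lam
fh = rng.standard_normal(N) * ((np.abs(xi) >= lam) & (np.abs(xi) <= 2 * lam))
f = np.real(np.fft.ifft(fh))                # f has frequencies only in the annulus

# (-Laplacian)^{1/2} acts as the multiplier |xi|; norms via Parseval
grad_norm = np.linalg.norm(np.abs(xi) * fh)
f_norm = np.linalg.norm(fh)

ratio = grad_norm / f_norm
print(lam <= ratio <= 2 * lam)              # Bernstein: ratio comparable to lambda
```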
Now let us state here the modified Poincar\'{e} type inequality
given by Planchon~\cite{Pl00}.
\begin{lem}
Let $f\in \mathcal{S}$, the Schwartz space, whose Fourier transform
is supported outside the ball $\mathbb{B}(0,1)$. Then for $p\geq 2$,
\begin{eqnarray}\label{e3.12}
\int\mid f\mid^{p}\,\mathrm{d} x\ \leq \ C_{p}\int\mid\nabla f\mid^{2} \mid
f\mid ^{p-2}\,\mathrm{d} x.
\end{eqnarray}
\end{lem}
\subsection{The Paraproduct rule:}
Another important tool in Littlewood-Paley analysis is the
paraproduct operator introduced by J. M. Bony~\cite{Bo81}. The idea
of the paraproduct enables us to define a new product between
distributions which turns out to be continuous in many functional
spaces where the pointwise multiplication does not make sense. This
is a
powerful tool for the analysis of nonlinear partial differential equations. \\
Let $f, g\in \mathcal{S}^{'}$. Then using the formal
Littlewood-Paley decomposition,
\begin{eqnarray}
f = \sum_{j\in \mathbb{Z}} \triangle_{j} f, \qquad g = \sum_{j\in \mathbb{Z}} \triangle_{j}
g.\nonumber
\end{eqnarray}
Hence
\begin{eqnarray}
fg &=& \sum_{j, l}\triangle_{j} f \triangle_{l}
g\nonumber\\
&=& \sum_j\sum_{l<j-2}\triangle_{j} f \triangle_{l}
g + \sum_j\sum_{l>j+2}\triangle_{j} f \triangle_{l}
g + \sum_j\sum_{\mid l-j\mid\leq 2}\triangle_{j} f \triangle_{l}
g \nonumber\\
&=& \sum_j\sum_{l<j-2}\triangle_{j} f \triangle_{l}
g + \sum_l\sum_{j<l-2}\triangle_{j} f \triangle_{l}
g + \sum_j\sum_{\mid l-j\mid\leq 2}\triangle_{j} f \triangle_{l}
g \nonumber\\
&=& \sum_{j}\triangle_{j}f S_{j-2}g + \sum_{j}\triangle_{j}g
S_{j-2}f + \sum_{\mid l-j\mid \leq 2}\triangle_{j}f
\triangle_{l}g\nonumber.
\end{eqnarray}
In other words, the product of two tempered distributions is
decomposed into two homogeneous paraproducts, respectively
\begin{eqnarray}
\dot\pi(f,g) = \sum_{j}\triangle_{j}f S_{j-2}g \qquad \text{and}
\qquad \dot\pi(g,f) = \sum_{j}\triangle_{j}g S_{j-2}f \nonumber,
\end{eqnarray}
plus a remainder
\begin{eqnarray}
R(f,g) = \sum_{\mid l-j\mid \leq 2}\triangle_{j}f
\triangle_{l}g\nonumber.
\end{eqnarray}
$\dot\pi$ is called the homogeneous paraproduct operator, and the
convergence of the above series holds in the quotient space
$\mathcal{S}'/\mathcal{P}$. Finally, using the
quasi-orthogonality properties \eqref{e3.4} and \eqref{e3.5},
and after neglecting some non-diagonal terms for simplicity (the
contributions of these non-diagonal terms are controlled by the
terms which are kept, and dropping them does not affect the
convergence of the paraproducts~\cite{Ca95,
CaMe95}), we obtain
\begin{eqnarray}\label{e3.13}
\triangle_{j}(fg) = \triangle_{j}f S_{j-2}g + \triangle_{j}g
S_{j-2}f + \triangle_{j}\big(\sum_{k\geq j}\triangle_{k}f
\triangle_{k}g\big).
\end{eqnarray}
We refer the reader
to~\cite{Ca95},~\cite{Ch98},~\cite{Me81},~\cite{RuSi96} for
extensive studies on paraproducts.
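Since the Bony splitting above is nothing more than a partition of the index pairs $(j,l)$, it can be checked exactly on a discrete example. The sketch below (illustrative, not part of the original text) uses sharp dyadic cutoffs in place of the smooth decomposition; with mean-zero data the two paraproducts and the remainder reassemble the pointwise product up to rounding error.

```python
import numpy as np

N = 256
xi = np.fft.fftfreq(N, d=1.0 / N)
J = 8                                       # dyadic blocks j = 0..J-1 cover all modes

def delta(f, j):
    """Sharp dyadic block 2^j <= |xi| < 2^{j+1} (crude stand-in for Delta_j)."""
    fh = np.fft.fft(f)
    mask = (np.abs(xi) >= 2 ** j) & (np.abs(xi) < 2 ** (j + 1))
    return np.real(np.fft.ifft(fh * mask))

def S(f, j):
    """Low-frequency cut S_j = sum_{p < j} Delta_p (empty sum is 0)."""
    return sum(delta(f, p) for p in range(max(j, 0)))

rng = np.random.default_rng(3)
f = rng.standard_normal(N); f -= f.mean()
g = rng.standard_normal(N); g -= g.mean()

pi_fg = sum(delta(f, j) * S(g, j - 2) for j in range(J))   # pi(f, g): l <= j-3
pi_gf = sum(delta(g, j) * S(f, j - 2) for j in range(J))   # pi(g, f): j <= l-3
R = sum(delta(f, j) * delta(g, l)
        for j in range(J) for l in range(J) if abs(j - l) <= 2)

print(np.allclose(pi_fg + pi_gf + R, f * g))  # Bony: fg = pi(f,g) + pi(g,f) + R(f,g)
```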
\section{Main Results}
\begin{thm}[Local Lyapunov Property in $\mathrm{L}^p$]
Let $m\geq 3$, $2\leq p <\infty$. Let $\omega$ be the solution of
the vorticity equation \eqref{e1.1} such that
\begin{align}
& u\in \mathrm{C}{([0,T];\mathrm{L}^{m}\cap \mathrm{L}^{p})},
\qquad\nabla u\in \mathrm{L}^{1}_{loc}((0, T); \mathrm{L}^{p}),
\nonumber
\end{align}
and
\begin{align}
&\omega\in \mathrm{C}{([0,T];\mathrm{L}^{m}\cap
\mathrm{L}^{p})},\qquad\nabla\omega\in \mathrm{L}^{1}_{loc}((0, T);
\mathrm{L}^{p}),\quad\text{for} \ 0 < T \leq\infty.\nonumber
\end{align}
Then
\begin{eqnarray}\label{e4.1}
\partial_{t}\parallel\omega(t)\parallel_{p}^{p} &\leq& -C(\nu - K \parallel
u(t)\parallel_{m})Q_{p}(\omega(t)) , \qquad 0 < t < T,
\end{eqnarray}
where $K$ denotes a positive constant depending upon $m$ and $p$.
\end{thm}
This implies that, for a small $\mathrm{L}^m$-norm of the velocity,
$t\rightarrow\parallel\omega(t)\parallel_{\mathrm{L}^p}$ is a
Lyapunov function.
\begin{proof}
Consider,
\begin{eqnarray}
\partial_{t}\parallel\omega\parallel_{p}^{p} \nonumber &=&
\frac{\partial}{\partial t}\int\mid\omega\mid^{p}\,\mathrm{d} x \ = \
\frac{\partial}{\partial t}\int\mid\omega^{2}\mid^{p/2}\,\mathrm{d} x\nonumber
\\ &=& p \int \mid\omega\mid^{p-2}\omega\cdot\frac{\partial \omega}{\partial
t}\,\mathrm{d} x \ = \ p\langle\mid\omega\mid^{p-2}\omega,
\partial_{t}\omega\rangle\nonumber
\\ &=& p\langle\mid\omega\mid^{p-2}\omega,\ \nu\triangle\omega -
u\cdot\nabla\omega + \omega\cdot\nabla u\rangle\nonumber
\\ &=& \nu p \langle\mid\omega\mid^{p-2}\omega, \
\triangle\omega\rangle \ - \ p \langle\mid\omega\mid^{p-2}\omega, \
u\cdot\nabla\omega\rangle\nonumber\\
&&\quad + \ p \langle\mid\omega\mid^{p-2}\omega, \ \omega\cdot\nabla
u\rangle.\label{e4.2}
\end{eqnarray}
Using Lemma 2.5 on the first term of the right hand side, we have
from \eqref{e4.2}
\begin{eqnarray}\label{e4.3}
\partial_{t}\parallel\omega\parallel_{p}^{p}\ &\leq&
-C \nu Q_{p}(\omega) - \ p \langle\mid\omega\mid^{p-2}\omega, \
u\cdot\nabla\omega\rangle \ + \ p \langle\mid\omega\mid^{p-2}\omega,
\ \omega\cdot\nabla u\rangle\nonumber\\
\end{eqnarray}
Now we need to estimate the second and the third terms of the right
hand side of the equation \eqref{e4.3}.
Using the fact that $\nabla\cdot u = 0$, we have
\begin{eqnarray}\label{e4.4}
u\cdot\nabla\omega \ = \ u_{i}\frac{\partial\omega_{j}}{\partial
x_{i}} \ = \ \frac{\partial}{\partial x_{i}}(u_{i}\omega_{j}) \ - \
\omega_{j}\frac{\partial u_{i}}{\partial x_{i}} \ =
\nabla\cdot(u\otimes\omega),
\end{eqnarray}
where $\otimes$ represents the tensor product.\\
Then
\begin{eqnarray}
\mid\langle\mid\omega\mid^{p-2}\omega, \
u\cdot\nabla\omega\rangle\mid &=&
\mid\langle\mid\omega\mid^{p-2}\omega, \
\nabla\cdot(u\otimes\omega)\rangle\mid\nonumber
\\ &=& \mid\langle\nabla(\mid\omega\mid^{p-2}\omega), \
u\otimes\omega\rangle\mid\nonumber
\\ &\leq & \langle\mid\nabla(\mid\omega\mid^{p-2}\omega)\mid, \
\mid u \otimes\omega\mid\rangle.\label{e4.5}
\end{eqnarray}
Notice that $ \mid\nabla(\mid\omega\mid^{p-2}\omega)\mid
\ \leq\ C\mid\omega\mid^{p-2}\mid\nabla\omega\mid.$
Hence using this and H\"{o}lder's inequality in \eqref{e4.5} we have
\begin{eqnarray}
\mid\langle\mid\omega\mid^{p-2}\omega, \
u\cdot\nabla\omega\rangle\mid &\leq&
\langle\mid\omega\mid^{p-2}\mid\nabla\omega\mid, \ \mid u
\otimes\omega\mid\rangle\nonumber\\
&\leq &
\parallel\ \mid\omega\mid^{p-2}\mid\nabla\omega\mid \ \parallel_{q} \
\parallel u\otimes\omega\parallel_{q'},\quad \text{where}\
\frac{1}{q}+\frac{1}{q'}=1.\nonumber\\ \label{e4.6}
\end{eqnarray}
Now
\begin{align} \parallel \
\mid\omega\mid^{p-2}\mid\nabla\omega\mid \
\parallel_{q}^{q}
&=\int \mid\omega\mid^{q(p-2)} \ \mid\nabla\omega\mid^{q}\,\mathrm{d}
x\nonumber
\\&=\int \mid\omega\mid^{q(p-2)/2} \ \big(\ \mid\omega\mid^{p-2} \
\mid\nabla\omega\mid^{2}\ \big)^{q/2}\,\mathrm{d} x.\nonumber
\end{align}
Since $\frac{2-q}{2}+\frac{q}{2}=1$, the H\"{o}lder inequality yields
\begin{align}
&\parallel \ \mid\omega\mid^{p-2}\mid\nabla\omega\mid \
\parallel_{q}^{q}\nonumber
\\&\leq\Big[\int \big(\ \mid\omega\mid^{q(p-2)/2}\
\big)^{2/(2-q)}\,\mathrm{d} x \Big]^\frac{2-q}{2} \ \Big[\int \Big\{\big(\
\mid\omega\mid^{p-2} \ \mid\nabla\omega\mid^{2}\big)^{q/2}
\Big\}^{2/q}\,\mathrm{d} x \Big ]^\frac{q}{2} \nonumber
\\&=\Big[\int\mid\omega\mid^{q(p-2)/(2-q)}\,\mathrm{d} x\Big]^{(2-q)/2} \ \Big[
\int\mid\omega\mid^{p-2}\ \mid\nabla\omega\mid^{2}\,\mathrm{d}
x\Big]^{q/2}\nonumber
\\&=\ \parallel\omega\parallel_{r}^{q(p-2)/2} \ Q_{p}(\omega)^{q/2},
\quad \text{where} \ r = \frac{q(p-2)}{(2-q)}. \nonumber
\end{align}
Hence
\begin{align}\label{e4.7}
\parallel \ \mid\omega\mid^{p-2}\mid\nabla\omega\mid \
\parallel_{q}\ \leq\ \parallel\omega\parallel_{r}^{(p-2)/2} \
Q_{p}(\omega)^{1/2}.
\end{align}
Again by H\"{o}lder,
\begin{align}\label{e4.8}
\parallel u\otimes\omega\parallel_{q'}\ \leq\ C\parallel
u\parallel_{m} \ \parallel\omega\parallel_{r},\quad \text{since} \
\frac{1}{q'}=\frac{1}{m}+\frac{1}{r}.
\end{align}
Now from the relations
\begin{eqnarray}
\frac{1}{q} + \frac{1}{q'} = 1 ,\ r = \frac{q(p-2)}{(2-q)}\
\text{and} \ \frac{1}{q'} = \frac{1}{m} + \frac{1}{r}, \nonumber
\end{eqnarray}
we find that
\begin{eqnarray}\label{e4.9}
\qquad\ r = \frac{mp}{(m-2)}.
\end{eqnarray}
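For completeness, the elimination leading to \eqref{e4.9} runs as follows. From $\frac{1}{q'} = 1-\frac{1}{q}$ and $\frac{1}{r} = \frac{2-q}{q(p-2)}$, the relation $\frac{1}{q'} = \frac{1}{m} + \frac{1}{r}$ becomes
\begin{align*}
\frac{1}{m} \;=\; 1 - \frac{1}{q} - \frac{2-q}{q(p-2)}
\;=\; \frac{q(p-1)-p}{q(p-2)},
\qquad\text{hence}\qquad
q = \frac{mp}{m(p-1)-(p-2)}.
\end{align*}
Writing $D = m(p-1)-(p-2)$, this gives $2-q = \frac{(m-2)(p-2)}{D}$, and therefore
\begin{align*}
r \;=\; \frac{q(p-2)}{2-q}
\;=\; \frac{mp(p-2)}{D}\cdot\frac{D}{(m-2)(p-2)}
\;=\; \frac{mp}{m-2}.
\end{align*}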
Using equations \eqref{e4.7} and \eqref{e4.8} in \eqref{e4.6} we
have
\begin{eqnarray}
\mid\langle\mid\omega\mid^{p-2}\omega, \
u\cdot\nabla\omega\rangle\mid &\leq &
C\parallel\omega\parallel_{r}^{(p-2)/2} \ Q_{p}(\omega)^{1/2}
\parallel u\parallel_{m}\ \parallel\omega\parallel_{r}\nonumber
\\&=& C\parallel u\parallel_{m}\ \parallel\omega\parallel_{r}^{p/2}\
Q_{p}(\omega)^{1/2}.\nonumber
\end{eqnarray}
Applying Lemma 2.6 to the above inequality we obtain
\begin{eqnarray}\label{e4.10}
\mid\langle\mid\omega\mid^{p-2}\omega, \
u\cdot\nabla\omega\rangle\mid &\leq& C\parallel u\parallel_{m}\
Q_{p}(\omega).
\end{eqnarray}
The third term in \eqref{e4.3} can be estimated
using the fact that $\nabla\cdot \omega = 0$, together with the same
techniques used to estimate the second term.
Thus we get
\begin{eqnarray}\label{e4.11}
\mid\langle\mid\omega\mid^{p-2}\omega,\ \omega\cdot\nabla\ u
\rangle\mid &\leq & C\parallel u\parallel_{m}\ Q_{p}(\omega).
\end{eqnarray}
Combining \eqref{e4.10} and \eqref{e4.11} with \eqref{e4.3} we get
the desired result \eqref{e4.1}.
\end{proof}
\begin{thm}[Local Lyapunov Property in Besov Spaces]
Let the initial data $\omega_{0}$ for the $3$-D vorticity equation be
in $\dot{B}_{p}^{s,q}$ where $ s= \frac{3}{p}-1$, $p, q \geq 2$, and
$\frac{3}{p}+\frac{2}{q}>1$. Then there exist small constants
$\varepsilon_{1}> 0$ and $\varepsilon_{2}> 0$ such that if the
velocity field satisfies $\sup_{t}\parallel u(t)
\parallel_{\dot{B}_{\infty}^{-1, \infty}} < \varepsilon_{1}$ and the
vorticity field satisfies $\sup_{t}\parallel \omega
(t)\parallel_{\dot{B}_{\infty}^{-2, \infty}} < \varepsilon_{2}$,
then $ t\rightarrow\parallel\omega(t)\parallel_{\dot{B}_{p}^{s,q}}$
is a Lyapunov function.
\end{thm}
\begin{proof}
Let us set
\begin{align}
F(u,w) = u\cdot\nabla\omega - \omega\cdot\nabla u.\nonumber
\end{align}
Applying the operator $\triangle_{j}$ to equation \eqref{e1.1} we get
\begin{align}\label{e4.12}
\partial_{t}(\triangle_{j}\omega)\ - \nu\triangle(\triangle_{j}\omega)\
+ \triangle_{j}(F(u,w)) = 0.
\end{align}
Now,
\begin{align}
\partial_{t}\parallel\triangle_{j}\omega\parallel_{p}^{p}&=
\frac{\partial}{\partial t}\int\mid\triangle_{j}\omega\mid^{p}\,\mathrm{d} x \
= \ \frac{\partial}{\partial
t}\int\mid(\triangle_{j}\omega)^{2}\mid^{p/2}\,\mathrm{d} x\nonumber
\\ &=p \int \mid\triangle_{j}\omega\mid^{p-2}\triangle_{j}\omega\cdot\frac{\partial(\triangle_{j}\omega)}{\partial
t}\,\mathrm{d} x\nonumber\\
&=p\langle\mid\triangle_{j}\omega\mid^{p-2}\triangle_{j}\omega,
\partial_{t}(\triangle_{j}\omega)\rangle.\nonumber
\end{align}
Hence using \eqref{e4.12} we have from the above equation
\begin{align}
\partial_{t}\parallel\triangle_{j}\omega\parallel_{p}^{p}
&=p\langle\mid\triangle_{j}\omega\mid^{p-2}\triangle_{j}\omega,\
\nu\triangle(\triangle_{j}\omega) - \triangle_{j}(F(u,w))
\rangle\nonumber\\
&=\nu p \langle\mid\triangle_{j}\omega\mid^{p-2}\triangle_{j}\omega,
\ \triangle(\triangle_{j}\omega)\rangle \nonumber\\
&\qquad -p
\langle\mid\triangle_{j}\omega\mid^{p-2}\triangle_{j}\omega,
\triangle_{j}(F(u,w)) \rangle.\nonumber
\end{align}
Applying Lemma 2.5 to the first term on the right-hand side of
the above equation we obtain
\begin{align}
\partial_{t}\parallel\triangle_{j}\omega\parallel_{p}^{p}
&\leq -\nu
p\int\mid\triangle_{j}\omega\mid^{p-2}\mid\nabla\triangle_{j}\omega\mid^{2}dx
\nonumber\\
&\qquad -p
\langle\mid\triangle_{j}\omega\mid^{p-2}\triangle_{j}\omega,
\triangle_{j}(F(u,w)) \rangle.\nonumber
\end{align}
Hence,
\begin{align}
\partial_{t}\parallel\triangle_{j}\omega\parallel_{p}^{p} +
&\nu
p\int\mid\triangle_{j}\omega\mid^{p-2}\mid\nabla\triangle_{j}\omega\mid^{2}\,\mathrm{d}
x\nonumber\\
&\leq -p \int\mid\triangle_{j}\omega\mid^{p-2}\triangle_{j}\omega
\triangle_{j}(F(u,w))\,\mathrm{d} x, \nonumber
\end{align}
which is equivalent to considering the inequality
\begin{align}
\frac{d}{dt}\parallel\triangle_{j}\omega\parallel_{p}^{p} + & \nu
p\int\mid\triangle_{j}\omega\mid^{p-2}\mid\nabla\triangle_{j}\omega\mid^{2}\,\mathrm{d}
x\nonumber\\
&\leq p
\int\mid\triangle_{j}\omega\mid^{p-1}\mid\triangle_{j}(F(u,w))\mid
\,\mathrm{d} x. \nonumber
\end{align}
Using Lemma 3.3 to bound the second term from below, we get
\begin{align}\label{e4.13}
&\frac{d}{dt}\parallel\triangle_{j}\omega\parallel_{p}^{p} + \
\tilde{C}_p\nu p\ 2^{2j}
\parallel\triangle_{j}\omega\parallel_{p}^{p} \ \leq p
\int\mid\triangle_{j}\omega\mid^{p-1}\mid\triangle_{j}(F(u,w))\mid
\,\mathrm{d} x,\\
&\text{where $\tilde{C}_p$ is a positive constant depending on
$p$}.\nonumber
\end{align}
Now,
\begin{align}
\mid\triangle_{j}(F(u,w))\mid\nonumber &=
\mid\triangle_{j}(u\cdot\nabla\omega - \omega\cdot\nabla u)\mid
\nonumber
\\ &\leq\mid\triangle_{j}(u\cdot\nabla\omega)\mid + \mid\triangle_{j}(\omega\cdot\nabla
u)\mid.\nonumber
\end{align}
Moreover
\begin{align}
u\cdot\nabla\omega \ = \ u_{i}\frac{\partial\omega_{j}}{\partial
x_{i}} \ = \ \frac{\partial}{\partial x_{i}}(u_{i}\omega_{j}) \ - \
\omega_{j}\frac{\partial u_{i}}{\partial x_{i}} \ =
\nabla\cdot(u\otimes\omega), \quad \text{since}\ \mathop{\mathrm{Div}} u =
0\nonumber,
\end{align}
and similarly $\omega\cdot\nabla u = \nabla\cdot(\omega\otimes u),$
where $\otimes$ represents the usual tensor product. \\
Since the terms $\nabla\cdot(u\otimes\omega)$ and
$\nabla\cdot(\omega\otimes u)$ behave in similar fashion, we have
from equation \eqref{e4.13}
\begin{align}\label{e4.14}
\frac{d}{dt}\parallel\triangle_{j}\omega\parallel_{p}^{p} + \
\tilde{C}_p\nu p\ 2^{2j}
\parallel\triangle_{j}\omega\parallel_{p}^{p} \ \leq\ 2p
\int\mid\triangle_{j}\omega\mid^{p-1}\mid\triangle_{j}\nabla\cdot(u\otimes\omega)\mid
\/\mathrm{d}\/ x.
\end{align}
Now using the paraproduct rule \eqref{e3.13}, we have
\begin{align}
\triangle_{j}\nabla\cdot(u\otimes\omega)\nonumber &=
\nabla\triangle_{j}(u\otimes\omega)\nonumber
\\ &= \nabla\big(\triangle_{j}u \ S_{j-2}\omega\big) +
\nabla\big(\triangle_{j}\omega \ S_{j-2}u\big) + \nabla\big(
\triangle_{j}\big(\sum_{k\geq j}\triangle_{k}u\
\triangle_{k}\omega\big)\big).\label{e4.15}
\end{align}
Using \eqref{e4.15} in \eqref{e4.14} we obtain,
\begin{align}
&\frac{d}{dt}\parallel\triangle_{j}\omega\parallel_{p}^{p} + \
\tilde{C}_p
\nu p\ 2^{2j} \parallel\triangle_{j}\omega\parallel_{p}^{p}\nonumber\\
&\quad\leq 2p
\int\mid\triangle_{j}\omega\mid^{p-1}\mid\nabla\big(\triangle_{j}u \
S_{j-2}\omega\big)\mid \/\mathrm{d}\/ x \nonumber
\\ &\quad\quad + 2p \int\mid\triangle_{j}\omega\mid^{p-1}\mid\nabla\big(\triangle_{j}\omega
\ S_{j-2}u\big)\mid \/\mathrm{d}\/ x \nonumber
\\ &\quad\quad + 2p \int\mid\triangle_{j}\omega\mid^{p-1}\mid\nabla\big(\triangle_{j}\big(\sum_{k\geq
j}\triangle_{k}u\ \triangle_{k}\omega\big)\big)\mid \/\mathrm{d}\/
x.\label{e4.16}
\end{align}
We need to estimate each of the terms on the right hand side of
\eqref{e4.16} separately.
First consider the term
\begin{align}
\int\mid\triangle_{j}\omega\mid^{p-1}\mid\nabla\big(\triangle_{j}\omega
\ S_{j-2}u\big)\mid \/\mathrm{d}\/ x,\nonumber
\end{align}
and apply H\"{o}lder's Inequality to get,
\begin{align}
\int\mid\triangle_{j}\omega\mid^{p-1}\mid\nabla\big(\triangle_{j}\omega
\ S_{j-2}u\big)\mid \/\mathrm{d}\/ x\leq \
\parallel\triangle_{j}\omega\parallel_{p}^{p-1} \
\parallel\nabla\big(\triangle_{j}\omega
\ S_{j-2}u\big)\parallel_{p}.\nonumber
\end{align}
With the help of Lemma 3.2 we obtain
\begin{align}
&\int\mid\triangle_{j}\omega\mid^{p-1}\mid\nabla\big(\triangle_{j}\omega
\ S_{j-2}u\big)\mid \/\mathrm{d}\/ x\nonumber\\
&\quad\leq \ C_1 \parallel\triangle_{j}\omega\parallel_{p}^{p-1} \
2^{j}
\parallel\triangle_{j}\omega\ S_{j-2}u\parallel_{p}\nonumber\\
&\quad=\ C_1 \parallel\triangle_{j}\omega\parallel_{p}^{p-1} \ 2^{j}
\parallel\big(2^{j}\triangle_{j}\omega\big)\big(2^{-j}S_{j-2}u\big)\parallel_{p}\nonumber
\\ &\quad\leq\ C_1 \parallel\triangle_{j}\omega\parallel_{p}^{p-1} \ 2^{j}
\parallel 2^{j}\triangle_{j}\omega\parallel_{p} \ \sup_{j}\big(2^{-j}\parallel
S_{j-2}u\parallel_{\infty}\big)\nonumber
\\ &\quad=\ C_1\ 2^{2j}\parallel\triangle_{j}\omega\parallel_{p}^{p}\ \sup_{j}\big(2^{-j}\parallel
S_{j-2}u\parallel_{\infty}\big),\label{e4.17}\\
&\text{where $C_1$ is a positive constant.}\nonumber
\end{align}
Now from Lemma 3.1, for $s = -1$
and $p = q = \infty$, we have
\begin{align}
&\qquad 2^{-j}\parallel\triangle_{j}
u\parallel_{\mathrm{L}^{\infty}}\ \in l^{\infty}\ \Leftrightarrow \
2^{-j}\parallel S_{j}
u\parallel_{\mathrm{L}^{\infty}}\ \in l^{\infty}\nonumber\\
&\Rightarrow \ \sup_{j} 2^{-j}\parallel\triangle_{j}
u\parallel_{\infty}\ <\infty\ \Leftrightarrow \ \sup_{j} 2^{-j}\parallel
S_{j} u\parallel_{\infty}\ <\infty\nonumber\\
&\Rightarrow \ \parallel u(x,t)\parallel_{\dot{B}_{\infty}^{-1,
\infty}}\ <\infty\ \Leftrightarrow \ \sup_{j} 2^{-j}\parallel S_{j}
u\parallel_{\infty}\ <\infty.
\end{align}
Then using the conditions assumed in the theorem, we get
\begin{align}
&\qquad\parallel u(x,t)\parallel_{\dot{B}_{\infty}^{-1, \infty}} \
\leq \ \sup_{t}\parallel u(x,t)\parallel_{\dot{B}_{\infty}^{-1,
\infty}}\ \leq\varepsilon_{1},\nonumber\\
&\Rightarrow \ \sup_{j} 2^{-j}\parallel S_{j} u\parallel_{\infty}\
\leq\varepsilon_{1}.
\end{align}
So finally \eqref{e4.17} yields
\begin{align}\label{e4.20}
\int\mid\triangle_{j}\omega\mid^{p-1}\mid\nabla\big(\triangle_{j}\omega
\ S_{j-2}u\big)\mid dx \ \leq C_1\varepsilon_{1}
2^{2j}\parallel\triangle_{j}\omega\parallel_{p}^{p}.
\end{align}
Now let us consider the term:
\begin{align}
\int\mid\triangle_{j}\omega\mid^{p-1}\mid\nabla\big(\triangle_{j}u \
S_{j-2}\omega\big)\mid \/\mathrm{d}\/ x.\nonumber
\end{align}
As before H\"{o}lder's Inequality and Lemma 3.2 yield
\begin{align}
&\int\mid\triangle_{j}\omega\mid^{p-1}\mid\nabla\big(\triangle_{j}u
\ S_{j-2}\omega\big)\mid \/\mathrm{d}\/ x\nonumber\\
&\quad\leq \ \parallel\triangle_{j}\omega\parallel_{p}^{p-1} \
\parallel\nabla\big(\triangle_{j} u
\ S_{j-2}\omega\big)\parallel_{p}\nonumber\\
&\quad\leq \ C_2 \parallel\triangle_{j}\omega\parallel_{p}^{p-1} \
2^{j}
\parallel\triangle_{j} u\ S_{j-2}\omega\parallel_{p}\nonumber\\
&\quad=\ C_2 \parallel\triangle_{j}\omega\parallel_{p}^{p-1} \ 2^{j}
\parallel\big(2^{2j}\triangle_{j} u\big)\big(2^{-2j}S_{j-2}\omega\big)\parallel_{p}\nonumber\\
&\quad\leq\ C_2 \parallel\triangle_{j}\omega\parallel_{p}^{p-1} \
2^{j}
\parallel 2^{2j}\triangle_{j} u\parallel_{p} \ \sup_{j}\big(2^{-2j}\parallel
S_{j-2}\omega\parallel_{\infty}\big),\label{e4.21}\\
&\text{where $C_2$ is a positive constant.}\nonumber
\end{align}
From Lemma 3.2, equation \eqref{e3.11}, we obtain
\begin{align}\label{e4.22}
2^{j}\parallel\triangle_{j} u\parallel_{p}\ \leq \
\parallel\nabla\triangle_{j} u\parallel_{p}.
\end{align}
The above equation and \eqref{e2.5} in Lemma 2.7 yield
\begin{align}\label{e4.24}
\parallel\triangle_{j} u\parallel_{p}\ \leq \ 2^{-j}\parallel
\triangle_{j}\omega\parallel_{p}.
\end{align}
Now applying Lemma 3.1 with $s = -2$ and $p = q = \infty$ and
proceeding as before, we obtain
\begin{align}\label{e4.25}
\sup_{j} 2^{-2j}\parallel S_{j}\omega\parallel_{\infty}\
\leq\varepsilon_{2}.
\end{align}
Using \eqref{e4.24} and \eqref{e4.25} in \eqref{e4.21} we have
\begin{align}\label{e4.26}
\int\mid\triangle_{j}\omega\mid^{p-1}\mid\nabla\big(\triangle_{j}u \
S_{j-2}\omega\big)\mid dx \leq C_2\varepsilon_{2}
2^{2j}\parallel\triangle_{j}\omega\parallel_{p}^{p}.
\end{align}
Next we estimate the last term
\begin{align}
&\int\mid\triangle_{j}\omega\mid^{p-1}\mid\nabla\big(\triangle_{j}\big(\sum_{k\geq
j}\triangle_{k}u\ \triangle_{k}\omega\big)\big)\mid \/\mathrm{d}\/ x\nonumber\\
&\quad\leq \ \parallel\triangle_{j}\omega\parallel_{p}^{p-1}
\parallel\nabla\big(\triangle_{j}\big(\sum_{k\geq j}\triangle_{k}u\
\triangle_{k}\omega\big)\big)\parallel_{p}\nonumber\\
&\quad\leq\ C_3 2^{j}\parallel\triangle_{j}\omega\parallel_{p}^{p-1}
\parallel\triangle_{j}\big(\sum_{k\geq j}\triangle_{k}u\
\triangle_{k}\omega\big)\parallel_{p}\nonumber\\
&\text{where $C_3$ is a positive constant.}\nonumber
\end{align}
Using Young's Inequality as in~\cite{CaMe95}, we have
\begin{align}
&\int\mid\triangle_{j}\omega\mid^{p-1}\mid\nabla\big(\triangle_{j}\big(\sum_{k\geq
j}\triangle_{k}u\ \triangle_{k}\omega\big)\big)\mid \/\mathrm{d}\/
x\nonumber\\&\quad\leq\ C_p
2^{j}\parallel\triangle_{j}\omega\parallel_{p}^{p-1}
\big(\sum_{k\geq j}\parallel\triangle_{k}u\parallel_{p}\parallel
\triangle_{k}\omega\parallel_{p}\big),\label{e4.27}\\
&\text{where $C_p$ is a positive constant depending on $p$.}\nonumber
\end{align}
Now
\begin{align}
\parallel\triangle_{j}\omega\parallel_{p}^{p-1} &=\ 2^{2j}
\big(2^{-2j}\parallel\triangle_{j}\omega\parallel_{p}\big)\
\parallel\triangle_{j}\omega\parallel_{p}^{p-2}\nonumber
\\ &\leq\ 2^{2j} \big(\sup_{j}
2^{-2j}\parallel\triangle_{j}\omega\parallel_{\infty}\big) \
\parallel\triangle_{j}\omega\parallel_{p}^{p-2}\nonumber
\\ &=\ 2^{2j} \parallel\omega(x,t)\parallel_{\dot{B}_{\infty}^{-2,
\infty}}\
\parallel\triangle_{j}\omega\parallel_{p}^{p-2}\nonumber
\\ &\leq\ \varepsilon_{2}\
2^{2j}\parallel\triangle_{j}\omega\parallel_{p}^{p-2}.\label{e4.28}
\end{align}
Using \eqref{e4.24} and \eqref{e4.28} in \eqref{e4.27} we have,
\begin{align}\label{e4.29}
&\int\mid\triangle_{j}\omega\mid^{p-1}\mid\nabla\big(\triangle_{j}\big(\sum_{k\geq
j}\triangle_{k}u\ \triangle_{k}\omega\big)\big)\mid\/\mathrm{d}\/ x\nonumber\\
&\quad\leq \ C_p \varepsilon_{2}\
2^{2j}\parallel\triangle_{j}\omega\parallel_{p}^{p-2}
\big(\sum_{k\geq j}\parallel\triangle_{k}
\omega\parallel_{p}^{2}\big).
\end{align}
Now combining the estimates \eqref{e4.20}, \eqref{e4.26} and
\eqref{e4.29}, and absorbing the constants $C_1, C_2, C_p,
\tilde{C}_p$, we obtain from \eqref{e4.16}
\begin{align}
\frac{d}{dt}\parallel\triangle_{j}\omega\parallel_{p}^{p} + \nu p\
2^{2j}
\parallel\triangle_{j}\omega\parallel_{p}^{p}\ &\leq\ 2p\varepsilon
_{1}\ 2^{2j}\parallel\triangle_{j}\omega\parallel_{p}^{p} +\
2p\varepsilon_{2}\
2^{2j}\parallel\triangle_{j}\omega\parallel_{p}^{p} \nonumber\\
&\quad +\ 2p\varepsilon_{2}\
2^{2j}\parallel\triangle_{j}\omega\parallel_{p}^{p-2}
\big(\sum_{k\geq j}\parallel\triangle_{k}
\omega\parallel_{p}^{2}\big).\nonumber
\end{align}
Simplifying, we get
\begin{align}\label{e4.30}
\frac{d}{dt}\parallel\triangle_{j}\omega\parallel_{p}^{2} +\ p
\big(\nu - 2\varepsilon_{1} -
2\varepsilon_{2}\big)2^{2j}\parallel\triangle_{j}\omega\parallel_{p}^{2}
\ &\leq\ 2p\varepsilon_{2}\ 2^{2j}\big(\sum_{k\geq
j}\parallel\triangle_{k} \omega\parallel_{p}^{2}\big).
\end{align}
The rest of the construction is motivated by~\cite{CaPl00}.
Multiplying both sides of \eqref{e4.30} by
$2^{jqs}\parallel\triangle_{j}\omega\parallel_{p}^{q-2}$, we get
\begin{align}
&\frac{d}{dt}\big(2^{jqs}\parallel\triangle_{j}\omega\parallel_{p}^{q}\big)
+\ p \big(\nu - 2\varepsilon_{1} -
2\varepsilon_{2}\big)2^{j(qs+2)}\parallel\triangle_{j}\omega\parallel_{p}^{q}\nonumber\\
&\quad\leq \ 2p\varepsilon_{2}\
2^{j(qs+2)}\parallel\triangle_{j}\omega\parallel_{p}^{q-2}\big(\sum_{k\geq
j}\parallel\triangle_{k} \omega\parallel_{p}^{2}\big).\nonumber
\end{align}
Let $ s= \frac{3}{p}-1$ with $p, q \geq 2$ and
$\frac{3}{p}+\frac{2}{q}>1$, and set $r = \frac{2}{q} + s$. Then $r > 0$ and $qs +
2 = rq$. \\
Then
\begin{align}
&\frac{d}{dt}\big(2^{jqs}\parallel\triangle_{j}\omega\parallel_{p}^{q}\big)
+ \ p \big(\nu - 2\varepsilon_{1} -
2\varepsilon_{2}\big)2^{rqj}\parallel\triangle_{j}\omega\parallel_{p}^{q}\nonumber\\
&\quad\leq \ 2p\varepsilon_{2}\
2^{(q-2)rj}\parallel\triangle_{j}\omega\parallel_{p}^{q-2}
2^{2rj}\big(\sum_{k\geq j}\parallel\triangle_{k}
\omega\parallel_{p}^{2}\big).\label{e4.31}
\end{align}
Let $ f_{j} = 2^{js}\parallel\triangle_{j}\omega\parallel_{p}$ and
$g_{j} = 2^{jr}\parallel\triangle_{j}\omega\parallel_{p}$. Summing
\eqref{e4.31} over $j$, we have
\begin{align}
\frac{d}{dt}\big(\sum_{j}f_{j}^{q}\big) + \ p \big(\nu -
2\varepsilon_{1} - 2\varepsilon_{2}\big)\sum_{j}g_{j}^{q}&\leq\
2p\varepsilon_{2} \sum_{j}g_{j}^{q-2} \ 2^{2rj}\big(\sum_{k\geq
j}\parallel\triangle_{k}
\omega\parallel_{p}^{2}\big)\nonumber\\
&=\ 2p\varepsilon_{2}\sum_{k=1}^{\infty}\sum_{j=1}^{k}\ g_{j}^{q-2}
\
2^{2rj}\parallel\triangle_{k} \omega\parallel_{p}^{2}\nonumber\\
&=\ 2p\varepsilon_{2}\sum_{k=1}^{\infty}\sum_{j=1}^{k}\ g_{j}^{q-2}
\ 2^{2rj} \ 2^{-2rk}\ g_{k}^{2}.\label{e4.32}
\end{align}
Define $h_{k}$ by $$\sum_{j=1}^{k}\ g_{j}^{q-2} \ 2^{2rj} = 2^{2rk}\
h_{k}^{q-2}.$$ Then
\begin{align}\label{e4.33}
\sum_{k} h_{k}^{q} \ \leq \sum_{j} g_{j}^{q}.
\end{align}
So, with the help of H\"{o}lder's inequality and \eqref{e4.33},
\eqref{e4.32} yields
\begin{align}
&\frac{d}{dt}\big(\sum_{j}f_{j}^{q}\big) + \ p \big(\nu -
2\varepsilon_{1} - 2\varepsilon_{2}\big)\sum_{j}g_{j}^{q}\nonumber\\
&\quad\leq\ 2p\varepsilon_{2}\sum_{k} h_{k}^{q-2}
g_{k}^{2}\nonumber\\
&\quad\leq\
2p\varepsilon_{2}\Big(\sum_{k}\big(h_{k}^{q-2}\big)^{\frac{q}{q-2}}\Big)^{\frac{q-2}{q}}
\
\Big(\sum_{k}\big(g_{k}^{2}\big)^{\frac{q}{2}}\Big)^{\frac{2}{q}},\quad\text{since}\
\ \frac{q-2}{q}+\frac{2}{q} =
1,\nonumber\\
&\quad\leq\ 2p\varepsilon_{2} \big(\sum_{k}
g_{k}^{q}\big)^{\frac{q-2}{q}}\ \big(\sum_{k}
g_{k}^{q}\big)^\frac{2}{q}\nonumber\\
&\quad=\ 2p\varepsilon_{2}\sum_{k} g_{k}^{q}.
\end{align}
Hence,
\begin{align}
\frac{d}{dt}\big(\sum_{j}f_{j}^{q}\big) + \ p \big(\nu -
2\varepsilon_{1} - 4\varepsilon_{2}\big)\sum_{j}g_{j}^{q}\ \leq 0.
\end{align}
Using the definition of Besov spaces in \eqref{e3.6}, we can write
\begin{align}
\frac{d}{dt}\big(\parallel\omega(x,t)\parallel_{\dot{B}_{p}^{s,
q}}^{q}\big) + \ p \big(\nu - 2\varepsilon_{1} -
4\varepsilon_{2}\big)\parallel\omega(x,t)\parallel_{\dot{B}_{p}^{r,
q}}^{q} \ \leq 0.\nonumber
\end{align}
Hence $ t\mapsto\parallel\omega(t)\parallel_{\dot{B}_{p}^{s,q}}$
is a Lyapunov function for small $\varepsilon_{1}$ and
$\varepsilon_{2}$ and comparatively large $\nu$.
\end{proof}
Now let us prove the dissipativity of the sum of the linear and
nonlinear operators of the vorticity equation in
$\mathrm{L}^{m}(\mathbb{R}^{m})$.
We write \eqref{e1.1} in the form $\partial_{t}\omega = \emph{A}
(u,\omega)$, where $\emph{A} : (u, \omega) \mapsto \emph{A} (u,\omega)
= \nu\triangle\omega - u\cdot\nabla\omega + \omega\cdot\nabla u$ is
a nonlinear operator. We know that $ G(\omega) =
\mid\omega\mid^{p-2}\omega$ \ is the duality map from $\mathrm{L}^{p}$
to $\mathrm{L}^{p^{\prime}}$. In Theorem 4.1 we proved that
\begin{align}\label{e4.36}
\langle\emph{A} (u,\omega), G(\omega)\rangle\ \leq\ -C(\nu - K
\parallel u(t)\parallel_{m})Q_{p}(\omega(t)).
\end{align}
Here we will prove a stronger property than \eqref{e4.36}.
\begin{thm}[Local Dissipativity in $\mathrm{L}^p$]
Let \ $m\geq 3$ and $2\leq p <\infty$. If $(\omega -
\tilde{\omega})\in {\mathrm{L}^{1}}(\mathbb{R}^{m}) \cap
{\mathrm{L}^{r}}(\mathbb{R}^{m})$, where $r = \frac{mp}{m-2}$, then
\begin{align}
&\langle\emph{A}(u,\omega) - \emph{A}(v,\tilde{\omega}), G(\omega -
\tilde{\omega})\rangle\nonumber\\
&\quad\leq\ -C\Big(\big(\nu - K (\parallel u
\parallel _{m} +\parallel v \parallel _{m} + \parallel\omega
\parallel_{m}+\parallel \tilde{\omega} \parallel _{m})\big)Q_{p}(\omega -
\tilde{\omega})\nonumber \\
&\quad\quad\ -K(\parallel\omega
\parallel_{m}+\parallel \tilde{\omega} \parallel
_{m})\parallel\omega -
\tilde{\omega}\parallel_{\mathrm{L}^{1}}Q_{p}(\omega -
\tilde{\omega})^{1/p'}\Big),\label{e4.37}
\end{align}
where $\frac{1}{p} + \frac{1}{p'} = 1.$
\end{thm}
Hence, in the light of \eqref{e2.6}, we note that if $\omega$ and
$\tilde{\omega}$ are small in $\mathrm{L}^1\cap\mathrm{L}^m$, then
$$\langle\emph{A}(u,\omega) - \emph{A}(v,\tilde{\omega}), G(\omega -
\tilde{\omega})\rangle\ \leq\ 0,$$ which is a local dissipativity
property for $\emph{A}(\cdot, \cdot)$.
\begin{proof}
It is clear that
\begin{align}
&\langle\emph{A}(u,\omega) - \emph{A}(v,\tilde{\omega}),\ G(\omega -
\tilde{\omega})\rangle\nonumber\\
&\quad= \langle\nu\triangle\omega - u\cdot\nabla\omega +
\omega\cdot\nabla u -\nu\triangle\tilde{\omega} +
v\cdot\nabla\tilde{\omega} - \tilde{\omega}\cdot\nabla v, \ G(\omega
- \tilde{\omega})\rangle\nonumber\\
&\quad= \nu\langle\triangle(\omega-\tilde{\omega}),\ G(\omega -
\tilde{\omega})\rangle - \langle u\cdot\nabla\omega -
v\cdot\nabla\tilde{\omega},\ G(\omega -
\tilde{\omega})\rangle\nonumber\\
&\quad\quad + \langle\omega\cdot\nabla u - \tilde{\omega}\cdot\nabla
v,\ G(\omega - \tilde{\omega})\rangle.\label{e4.38}
\end{align}
According to Lemma 2.5
\begin{align}\label{e4.39}
\nu\langle\triangle(\omega-\tilde{\omega}),\ G(\omega -
\tilde{\omega})\rangle\ \leq\ -C\nu Q_{p}(\omega - \tilde{\omega}).
\end{align}
Now we need to estimate the second and third terms on the right-hand
side of \eqref{e4.38}.\\
Notice that
\begin{align}
&\mid\langle u\cdot\nabla\omega - v\cdot\nabla\tilde{\omega},\
G(\omega - \tilde{\omega})\rangle\mid\nonumber\\
&\quad=\ \mid\langle u\cdot\nabla\omega - v\cdot\nabla\omega +
v\cdot\nabla\omega - v\cdot\nabla\tilde{\omega},\ G(\omega -
\tilde{\omega})\rangle\mid\nonumber\\
&\quad\leq\ \mid\langle (u-v)\cdot\nabla\omega, \ G(\omega -
\tilde{\omega})\rangle\mid\ + \mid\langle v\cdot\nabla(\omega -
\tilde{\omega}),\ G(\omega -
\tilde{\omega})\rangle\mid.\label{e4.40}
\end{align}
Let us denote $\omega^{*} = \omega - \tilde{\omega}$. Then with the
help of \eqref{e4.10} we obtain
\begin{align}\label{e4.41}
\mid\langle v\cdot\nabla(\omega - \tilde{\omega}),\ G(\omega -
\tilde{\omega})\rangle\mid\ &\leq\ C\parallel v\parallel_{m}\
Q_{p}(\omega - \tilde{\omega}).
\end{align}
Since $\mathop{\mathrm{Div}}(u-v) = 0$, we have
\begin{align}
\mid\langle (u-v)\cdot\nabla\omega, \ G(\omega -
\tilde{\omega})\rangle\mid\ &=\
\mid\langle\nabla\cdot((u-v)\otimes\omega), \
G(\omega^{*})\rangle\mid\nonumber\\
&=\ \mid\langle\nabla\cdot((u-v)\otimes\omega),\
\mid\omega^{*}\mid^{p-2}\omega^{*} \rangle\mid\nonumber.
\end{align}
Integrating by parts we get,
\begin{align}
\mid\langle (u-v)\cdot\nabla\omega, \ G(\omega -
\tilde{\omega})\rangle\mid\ &=\ \mid\langle (u-v)\otimes\omega ,\
\nabla(\mid\omega^{*}\mid^{p-2}\omega^{*}) \rangle\mid\nonumber\\
&\leq\langle\mid(u-v)\otimes\omega\mid, \
\mid\nabla(\mid\omega^{*}\mid^{p-2}\omega^{*})\mid\rangle\nonumber\\
&\leq\langle\mid(u-v)\otimes\omega\mid, \
\mid\omega^{*}\mid^{p-2}\mid\nabla\omega^{*}\mid\rangle.\nonumber
\end{align}
Now using H\"{o}lder's inequality we obtain,
\begin{align}\label{e4.42}
\mid\langle (u-v)\cdot\nabla\omega, \ G(\omega -
\tilde{\omega})\rangle\mid\ &\leq\parallel
(u-v)\otimes\omega\parallel_{q'} \
\parallel\ \mid\omega^{*}\mid^{p-2}\mid\nabla\omega^{*}\mid\
\parallel_{q},
\end{align}
where $\frac{1}{q} + \frac{1}{q'} = 1$.
Using \eqref{e4.7} and H\"{o}lder's inequality one more time, we
have
\begin{align}\label{e4.43}
\mid\langle (u-v)\cdot\nabla\omega, \ G(\omega -
\tilde{\omega})\rangle\mid\ &\leq\ C\parallel u-v\parallel_{r} \
\parallel\omega\parallel_{m}\
\parallel\omega^{*}\parallel_{r}^{(p-2)/2} \
Q_{p}(\omega^{*})^{1/2},
\end{align}
where $\frac{1}{q'} = \frac{1}{r} + \frac{1}{m}$ and $r =
\frac{mp}{m-2}$.
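These exponents are consistent. Indeed, since $r = \frac{mp}{m-2}$,
\begin{align}
\frac{1}{q}\ =\ 1 - \frac{1}{q'}\ =\ 1 - \frac{m-2}{mp} - \frac{1}{m}\ =\ \frac{mp-m-p+2}{mp}\ =\ \frac{p-2}{2r} + \frac{1}{2},\nonumber
\end{align}
which is consistent with H\"{o}lder's inequality splitting $\mid\omega^{*}\mid^{p-2}\mid\nabla\omega^{*}\mid$ into the factor $\mid\omega^{*}\mid^{(p-2)/2}$, measured in $\mathrm{L}^{2r/(p-2)}$, and the factor $\mid\omega^{*}\mid^{(p-2)/2}\mid\nabla\omega^{*}\mid$, measured in $\mathrm{L}^{2}$, producing the terms $\parallel\omega^{*}\parallel_{r}^{(p-2)/2}$ and $Q_{p}(\omega^{*})^{1/2}$ in \eqref{e4.43}.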
Notice that if $K$ is the Biot-Savart kernel then $u-v = K*\omega -
K*\tilde{\omega} = K*(\omega - \tilde{\omega}) = K*\omega^{*}$.
Hence, using \eqref{e2.6} of Lemma 2.7, we get from \eqref{e4.43}
\begin{align}
&\mid\langle (u-v)\cdot\nabla\omega, \ G(\omega -
\tilde{\omega})\rangle\mid\nonumber\\
&\quad\leq\ C\
(\parallel\omega^{*}\parallel_{\mathrm{L}^{1}}+\parallel\omega^{*}\parallel_{\mathrm{L}^{r}})\
\parallel\omega\parallel_{m} \
\parallel\omega^{*}\parallel_{r}^{(p-2)/2} \
Q_{p}(\omega^{*})^{1/2}\nonumber\\
&\quad=\ C\parallel\omega^{*}\parallel_{r}^{p/2} \
\parallel\omega\parallel_{m} \ Q_{p}(\omega^{*})^{1/2}\nonumber \\
&\quad\quad+ C\parallel\omega^{*}\parallel_{\mathrm{L}^{1}} \
\parallel\omega^{*}\parallel_{r}^{(p-2)/2} \ \parallel\omega\parallel_{m} \
Q_{p}(\omega^{*})^{1/2}.\label{e4.44}
\end{align}
With the help of Lemma 2.6, equation \eqref{e4.44} yields
\begin{align}
&\mid\langle (u-v)\cdot\nabla\omega, \ G(\omega -
\tilde{\omega})\rangle\mid\nonumber\\
&\quad\leq\ C\parallel\omega\parallel_{m} \ Q_{p}(\omega^{*}) + \
C\parallel\omega^{*}\parallel_{\mathrm{L}^{1}} \
\parallel\omega\parallel_{m} \
Q_{p}(\omega^{*})^{(p-1)/p}\nonumber\\
&\quad=\ C\parallel\omega\parallel_{m} \ Q_{p}(\omega
-\tilde{\omega}) + \ C\parallel\omega -
\tilde{\omega}\parallel_{\mathrm{L}^{1}} \
\parallel\omega\parallel_{m} \ Q_{p}(\omega
-\tilde{\omega})^{1/p'}.\label{e4.45}
\end{align}
Thus substituting the results from \eqref{e4.41} and \eqref{e4.45}
in \eqref{e4.40} we have
\begin{align}
\mid\langle u\cdot\nabla\omega - v\cdot\nabla\tilde{\omega},\
G(\omega - \tilde{\omega})\rangle\mid &\leq\ C\ (\ \parallel v
\parallel_{m} + \parallel\omega\parallel_{m}\ ) \ Q_{p}(\omega
-\tilde{\omega})\nonumber \\ &\quad + \ C\parallel\omega -
\tilde{\omega}\parallel_{\mathrm{L}^{1}} \
\parallel\omega\parallel_{m} \ Q_{p}(\omega
-\tilde{\omega})^{1/p'},\label{e4.46}
\end{align}
where $C$ is a positive constant depending upon $m$ and $p$.
Next we estimate the third term of \eqref{e4.38}. We
notice that
\begin{align}
&\mid\langle\omega\cdot\nabla u - \tilde{\omega}\cdot\nabla v,\
G(\omega - \tilde{\omega})\rangle\mid\nonumber\\
&\quad=\ \mid\langle\omega\cdot\nabla u - \tilde{\omega}\cdot\nabla
u + \tilde{\omega}\cdot\nabla u - \tilde{\omega}\cdot\nabla v, \
G(\omega - \tilde{\omega})\rangle\mid\nonumber\\
&\quad\leq\ \mid\langle (\omega - \tilde{\omega})\cdot\nabla u, \ G(\omega
- \tilde{\omega})\rangle\mid + \mid\langle\tilde{\omega}\cdot\nabla
(u-v),\ G(\omega - \tilde{\omega})\rangle\mid.\label{e4.47}
\end{align}
Here we proceed in a similar way as before to get
\begin{align}\label{e4.48}
\mid\langle (\omega - \tilde{\omega})\cdot\nabla u, \ G(\omega -
\tilde{\omega})\rangle\mid\ \leq\ C\parallel u\parallel_{m} \
Q_{p}(\omega - \tilde{\omega}),
\end{align}
and
\begin{align}
\mid\langle\tilde{\omega}\cdot\nabla (u-v),\ G(\omega -
\tilde{\omega})\rangle\mid\ &\leq\
C\parallel\tilde{\omega}\parallel_{m} \ Q_{p}(\omega
-\tilde{\omega})\nonumber\\
&\quad+\ C\parallel\omega - \tilde{\omega}\parallel_{\mathrm{L}^{1}}
\
\parallel\tilde{\omega}\parallel_{m} \ Q_{p}(\omega -\tilde{\omega})^{1/p'}.
\end{align}
Thus \eqref{e4.47} yields
\begin{align}
\mid\langle\omega\cdot\nabla u - \tilde{\omega}\cdot\nabla v,\
G(\omega - \tilde{\omega})\rangle\mid &\leq\ C\ (\ \parallel u
\parallel_{m} + \parallel\tilde{\omega}\parallel_{m}\ ) \ Q_{p}(\omega
-\tilde{\omega})\nonumber\\
&\quad+\ C\parallel\omega - \tilde{\omega}\parallel_{\mathrm{L}^{1}}
\
\parallel\tilde{\omega}\parallel_{m} \ Q_{p}(\omega
-\tilde{\omega})^{1/p'},\label{e4.50}
\end{align}
where $C$ is a positive constant depending upon $m$ and $p$.
Hence \eqref{e4.39}, \eqref{e4.46} and \eqref{e4.50} yield the
desired result \eqref{e4.37} from \eqref{e4.38}.
\end{proof}
\end{document} |
\begin{document}
\begin{abstract}
Given a partition ${\mathcal V}=(V_1, \ldots,V_m)$ of the vertex set of a graph $G$, an {\em independent transversal} (IT) is an independent set in $G$ that contains one vertex from each $V_i$.
A {\em fractional IT} is a non-negative real valued function on $V(G)$ that represents each part with total weight at least $1$, and belongs as a vector to the convex hull of the incidence vectors of independent sets in the graph.
It is known that if, for every $k$, the domination number of the subgraph of $G$ induced on the union of any $k$ of the parts $V_i$ is at least $k$, then there is a fractional IT. We prove a weighted version of this result. This is a special case of a general conjecture on the weighted version of a duality phenomenon between independence and domination in pairs of graphs.
\end{abstract}
\title{Independence-Domination Duality in Weighted Graphs}
\author{Ron Aharoni}
\address{Department of Mathematics\\
Technion, Haifa\\
Israel 32000}
\email{Ron Aharoni: [email protected]}
\author{Irina Gorelik}
\address{Department of Mathematics\\
Technion, Haifa\\
Israel 32000}
\email{Irina Gorelik: [email protected]}
\maketitle
\begin{section}{Introduction}
\subsection{Domination and collective domination}
All graphs in this paper are assumed to be simple, namely not containing parallel edges or loops. The (open) neighborhood of a vertex $v$ in a graph $G$, denoted by $\tilde{N}(v)=\tilde{N}_G(v)$, is the set of all vertices adjacent to $v$. Given a set $D$ of vertices we write $\tilde{N}(D)$ for $\bigcup_{v \in D}\tilde{N}(v)$. Let $N(D)=N_G(D) = \tilde{N}(D) \cup D$.
A set $D$ is said to be {\em dominating} if $N(D)=V$ and {\em totally dominating} if
$\tilde{N}(D)=V$.
The minimal size of a dominating set is denoted by $\gamma(G)$, and the minimal size of a totally dominating set by
$\tilde{\gamma}(G)$.
There is a {\em collective} version of domination.
Given a system of graphs
${\mathcal G}=(G_1,\dots,G_k)$ on the same vertex set $V$, a system ${\mathcal D}=(D_1,\dots,D_k)$ of subsets of $V$ is said to be {\em collectively dominating} if $\bigcup_{i \le k}N_{G_i}(D_i)=V$. Let $\gamma_\cup({\mathcal G})$ be the minimum of $\sum_{i \le k}|D_i|$ over all collectively dominating systems.
\subsection{Independence and joint independence}
A set of vertices is said to be {\em independent} in $G$ if its elements are pairwise non-adjacent. The complex (closed down hypergraph) of independent sets in $G$ is denoted by ${\mathcal I}(G)$. The {\em independence polytope} of $G$, denoted by
$IP(G)$, is the convex hull of the characteristic vectors of the
sets in ${\mathcal I}(G)$. For a system of graphs
${\mathcal G}=(G_1,\dots,G_k)$ on $V$ the
{\em joint independence number}, $\alpha_\cap({\mathcal G})$, is
$\max\{|I| : I \in \cap_{i \le k}{\mathcal I}(G_i)\}$. The
{\em fractional joint independence number}, $\alpha_\cap^*({\mathcal G})$, is
$\max\{\vec{x}\cdot\vec{1}:~~\vec{x}\in\bigcap_{i\le k} IP(G_i)\}$.\\
We shall mainly deal with the case $k=2$. Let us first observe that it is possible to have $\alpha_\cap^*(G_1,G_2)<\min(\alpha(G_1),\alpha(G_2))$.
\begin{example}
Let $G_1$ be obtained from the complete bipartite graph with respective sides $\{v_1, \ldots,v_6\}$ and $\{u_1,u_2\}$, by the addition of the edges $v_1v_2$, $v_3v_4$ and $u_1u_2$, and let $G_2=\bar{G_1}$. Then $\alpha(G_1)=\alpha(G_2)=4$, while $\alpha_\cap^*(G_1,G_2)=2$, the optimal vector in $IP(G_1) \cap IP(G_2)$ being the constant $\frac{1}{4}$ vector.
\end{example}
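To see that the constant $\frac{1}{4}$ vector lies in $IP(G_1)\cap IP(G_2)$, one can exhibit it explicitly as a convex combination of independent sets:
$$\vec{\tfrac{1}{4}}\ =\ \tfrac{1}{4}\big(\chi_{\{v_1,v_3,v_5,v_6\}}+\chi_{\{v_2,v_4\}}+\chi_{\{u_1\}}+\chi_{\{u_2\}}\big)\ =\ \tfrac{1}{4}\big(\chi_{\{v_1,v_2,u_1,u_2\}}+\chi_{\{v_3,v_4\}}+\chi_{\{v_5\}}+\chi_{\{v_6\}}\big),$$
where the four sets in the first decomposition are independent in $G_1$ and the four sets in the second decomposition are independent in $G_2$.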
A graph $H$ is called a {\em partition graph} if it is the disjoint union of cliques. In a partition graph $\alpha=\gamma$, both being equal to the number of cliques.
The union of two partition graphs is the line graph of a bipartite multigraph, having the set of cliques in one system as one side and the set of cliques in the other system as the other side, each vertex of $V$ giving rise to an edge between the two cliques containing it.
Thus, by K\"{o}nig's famous duality theorem \cite{konig}, we have:
\begin{theorem}\label{konig2}
If $G$ and $H$ are partition graphs on the same vertex set, then $$\alpha_\cap(G,H)=\gamma_\cup(G,H).$$
\end{theorem}
There are graphs in which $\alpha >\gamma$, and thus
equality does not necessarily hold for general pairs $(G,H)$ of graphs, even when $G=H$. On the other hand, since a maximal independent set is dominating, we have $\gamma(G) \le \alpha(G)$ in every graph $G$. But the corresponding inequality for pairs of graphs is not necessarily true, as the following example shows.
\begin{example}\label{noteq}
Let $G=P_4$, namely the path with $3$ edges on $4$ vertices, and let $H$ be its complement. Then $\alpha_\cap(G,H)=1$
and $\gamma_\cup(G,H)=2$, so $\alpha_\cap(G,H)<\gamma_\cup(G,H)$.
\end{example}
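Indeed, label the path $G$ as $v_1v_2v_3v_4$, so that $E(H)=\{v_1v_3, v_1v_4, v_2v_4\}$. Every pair of vertices is then an edge of $G$ or of $H$, so $\alpha_\cap(G,H)=1$. On the other hand, every closed neighborhood in $G$ and in $H$ has at most three vertices, so a single vertex cannot dominate collectively, while $N_G(v_2)\cup N_H(v_2)=\{v_1,v_2,v_3\}\cup\{v_2,v_4\}=V$, showing $\gamma_\cup(G,H)=2$.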
However, as was shown in \cite{abhk}, if $\alpha_\cap$ is replaced by its
fractional version, then the non-trivial inequality in Theorem
\ref{konig2} does hold.
\begin{theorem}\label{inddom}
For any two graphs $G$ and $H$ on the same set of vertices we have $$\alpha_\cap^*(G,H)\geq\gamma_\cup(G,H).$$
\end{theorem}
In Example \ref{noteq} $\vec{\frac{1}{2}}\in IP(G)\cap IP(H)$, and $\alpha_\cap^*(G,H)=2$, so $\alpha_\cap^*(G,H)=\gamma_\cup(G,H)$.
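Explicitly, with the path labeled $v_1v_2v_3v_4$, we have $\vec{\frac{1}{2}}=\frac{1}{2}\big(\chi_{\{v_1,v_3\}}+\chi_{\{v_2,v_4\}}\big)\in IP(G)$ and $\vec{\frac{1}{2}}=\frac{1}{2}\big(\chi_{\{v_1,v_2\}}+\chi_{\{v_3,v_4\}}\big)\in IP(H)$.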
\begin{lemma}\label{fracit}
Let ${\mathcal V}=(V_1, \ldots ,V_m)$ be a system of disjoint sets, let ${\mathcal I}$ be the set of ranges of partial choice functions from ${\mathcal V}$, and let $V=\bigcup_{i \le m} V_i$.
Then
$$\{f: V \to \mathbb{R}^+ \mid \sum_{v \in V_j}f(v)\le 1 \text{~for~ every~} j \le m \}=conv(\{\chi_I \mid I \in {\mathcal I}\}).$$
\end{lemma}
\begin{proof}
Obviously, the right hand side is contained in the left hand side. For the reverse containment, let
$f: V \to \mathbb{R}^+$ be such that $\sum_{v \in V_j}f(v)\le 1$ for every $j \le m$, and assume for contradiction that it can be separated from all functions $\chi_I$, $ I \in {\mathcal I}$, namely that there exists a vector $\vec{u}$ such that $\sum_{v\in V} u(v)f(v)\ge 1$, and $\sum_{v\in I}u(v)=\sum_{v\in V} u(v)\chi_I(v)<1$ for all
$I \in {\mathcal I}$. Since $conv(\{\chi_I \mid I \in {\mathcal I}\})$ is closed down, we may assume that $\vec{u}$ is non-negative. For each $j \le m$ let $v_j$ be such that $u(v_j)$ is maximal over all $v\in V_j$, and let $I=\{v_j \mid j \le m\}$. The fact that $\sum_{v\in I}u(v)< 1$ implies then that
$$\sum_{j \le m}\sum_{v\in V_j}u(v)f(v)\le \sum_{j \le m}u(v_j)\sum_{v \in V_j} f(v)\le \sum_{j \le m}u(v_j)=\sum_{v\in I}u(v)< 1, $$
a contradiction.
\end{proof}
\subsection{Independent transversals}
When one graph in the pair $(G,H)$, say $H$, is a partition graph, the parameters $\alpha_\cap(G,H)$ and $\alpha^*_\cap(G,H)$ can be described using the terminology of so-called {\em independent transversals}.
Given a graph $G$ and a partition ${\mathcal V}=(V_1, \ldots ,V_m)$ of $V(G)$, an independent transversal (IT) is an independent set in $G$ consisting of the choice of one vertex from each set $V_i$. A {\em partial IT} is an independent set representing some $V_i$'s (so, it is the independent range of a partial choice function from ${\mathcal V}$). A function $f : V \to \mathbb{R}^+$ is called a {\em partial fractional IT} if, when viewed as a vector, it belongs to $IP(G)$, and $\sum_{v\in V_j}f(v)\le 1$ for all $j \le m$. If $\sum_{v\in V_j}f(v)=1$ for all $j \le m$ then $f$ is called a {\em fractional IT}. By Lemma \ref{fracit} this means that $f \in IP(H) \cap IP(G)$, namely that $f$ is a fractional jointly independent set, where $H$ is the partition graph whose cliques are the parts of ${\mathcal V}$.
For $I \subseteq [m]$ let $V_I=\bigcup_{i \in I}V_i$.
The following was proved in \cite{penny}:
\begin{theorem}\label{thm:penny}
If $\tilde{\gamma}(G[V_I]) \ge 2|I|-1$ for every $I \subseteq [m]$ then there exists an IT.
\end{theorem}
Theorem \ref{inddom}, applied to the case in which $H$ is a partition graph, yields:
\begin{theorem}\label{fractr}
If $\gamma(G[V_I]) \ge |I|$ for every $I \subseteq [m]$ then there exists a fractional IT.
\end{theorem}
\section{Putting weights on the vertices}
In \cite{abz} a weighted version of Theorem \ref{thm:penny} was proved.
As is often the case with weighted versions, the motivation came from decompositions: weighted results give, by duality, fractional decomposition results. It is conjectured that if $|V_i| \ge 2\Delta(G)$ for all $i$ then there exists a partition of $V(G)$ into $\max_{i \le m}|V_i|$ IT's. The weighted version of Theorem \ref{thm:penny} yielded a fractional version of such a decomposition.
\begin{notation}
Given a real valued function $f$ on a set $S$, and a set $A\subseteq
S$, define $f[A]=\sum_{a\in A}f(a)$. We also write $|f|=f[S]$ and we
call $|f|$ the {\em size} of $f$.
\end{notation}
\begin{definition}
Let $G=(V,E)$ be a graph, and let $w:V\to\mathbb{N}$ be a weight
function on $V$. We say that a function $f:V\to\mathbb{N} $ {\em
$w$-dominates} a set $U$ of vertices, if $f[N(u)]\geq w(u)$ for every $u\in U$. We say that $f$ is {\em $w$-dominating} if it $w$-dominates $V$.
The {\em weighted domination number}
$\gamma^w(G)$ is $\min\{|f| \mid f ~\text{is $w$-dominating}\}$.
\end{definition}
This definition extends to systems of graphs:
\begin{definition}
Let $\mathcal{G}=(G_1,\dots,G_k)$ be a system of graphs on the same
vertex set $V$. Let $w:V\to\mathbb{N}$ be a non-negative weight function on $V$, and let ${\mathcal F}=(f_i:~V\to\mathbb{N}, ~~i \le k)$ be a system of functions. We say that
${\mathcal F}$
$w$-dominates ${\mathcal G}$ if $\sum_{i=1}^k f_i[N_{G_i}(v)]\geq w(v)$ for every $v\in V$. The {\em
weighted collective domination number } is
$$\gamma_\cup^w(\mathcal{G})=\min\{\sum_{i=1}^k
|f_i| : (f_1,\dots,f_k)\ \text{is $w$-dominating}\}.$$
The extension of the independence parameter to the weighted case is also quite natural:
$$(\alpha_\cap^w)^*({\mathcal G})=\max\{\sum_{v \in V}x(v)w(v) \mid \vec{x}\in\bigcap_{i=1}^k
IP(G_i)\}.$$
\end{definition}
The aim of this paper is to study the following possible extension of Theorem \ref{inddom} to the weighted case.
\begin{Conjecture}\label{conj:main}
If $G$ and $H$ are graphs on the same vertex set $V$
then for any weight function $w:V\to\mathbb{N}$ we have
$$(\alpha_\cap^w)^*(G,H)\geq\gamma_\cup^w(G,H).$$
\end{Conjecture}
If $H=G$, then the stronger integral inequality $\alpha_\cap^w(G,G) \geq\gamma_\cup^w(G,G)$ holds, namely:
\begin{lemma}
$\alpha^w(G) \ge \gamma^w(G)$.
\end{lemma}
\begin{proof}
We have to exhibit a $w$-dominating function $f$ and an independent set $I$ with $|f|\leq w[I]$.
Let $V(G)=\{v_1,\dots,v_n\}$. We define a $w$-dominating function $f:V\to \mathbb{N}$ inductively.
Let $f(v_1)=w(v_1)$. Having defined $f(v_1),\dots,f(v_{i-1})$, let $$f(v_i)= \Big[w(v_i)-\sum_{v_j\in N(v_i),\; j<i}f(v_j)\Big]^+,$$ where $[x]^+=\max(x,0)$.
Clearly, $f$ is $w$-dominating.
We next find an independent set $I$ such that $w[I]\geq|f|$.
Let $v_{i_1}$ be the vertex of maximal index in $V_1=\operatorname{supp}(f)$. Since $f(v_{i_1})>0$, we have $f[N(v_{i_1})]=w(v_{i_1})$.
Suppose that we have defined sets of vertices $V_1,V_2,\dots, V_{k-1}$ and vertices $v_{i_1},\dots,v_{i_{k-1}}$ such that $v_{i_j}$ is the vertex of maximal index in $V_j$, where $V_j=V_{j-1}\setminus N(v_{i_{j-1}})$ for every $j=2,\dots,k-1$.
Let $V_k=V_{k-1}\setminus N(v_{i_{k-1}})$ and let $v_{i_k}$ be the vertex of maximal index in $V_k$. By the definition of $f$ we have $\sum_{v_j\in N(v_{i_k}),\; j<i_k}f(v_j)=w(v_{i_k})$, so $\sum_{v_j\in V_k\cap N(v_{i_k})}f(v_j)\leq w(v_{i_k})$. We stop the process when $V_t=\emptyset$ for some $t$. Then $I=\{v_{i_1},\dots,v_{i_{t-1}}\}$ is an independent set that satisfies $w[I]\geq |f|$, as desired.
\end{proof}
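The inductive construction in this proof is effectively a greedy algorithm, and the lemma can be checked by brute force on small instances. The following sketch does so on a toy $4$-cycle with weights of our own choosing; note that it reads domination with closed neighbourhoods, the convention under which the greedy $f$ is indeed $w$-dominating.

```python
from itertools import product, combinations

# Toy instance (our choice): a 4-cycle with integer weights.
V = [0, 1, 2, 3]
E = [(0, 1), (1, 2), (2, 3), (3, 0)]
w = {0: 2, 1: 1, 2: 3, 3: 1}
# Closed neighbourhoods N[v]; the greedy construction below is
# w-dominating when domination is read over closed neighbourhoods.
Ncl = {v: {v} | {u for a, b in E for u in (a, b) if v in (a, b) and u != v}
       for v in V}

def dominates(f):
    return all(sum(f[u] for u in Ncl[v]) >= w[v] for v in V)

def greedy_f(order):
    """The proof's inductive construction:
    f(v_i) = [w(v_i) - weight already placed on earlier neighbours]^+."""
    f = {}
    for v in order:
        covered = sum(f[u] for u in Ncl[v] if u in f)
        f[v] = max(w[v] - covered, 0)
    return f

f = greedy_f(V)
assert dominates(f)

# gamma^w by brute force over small integer functions (values above
# max(w) are never needed), alpha^w over all independent sets.
gamma = min(sum(g) for g in product(range(max(w.values()) + 1), repeat=len(V))
            if dominates(dict(zip(V, g))))
indep = [I for r in range(len(V) + 1) for I in combinations(V, r)
         if not any(a in I and b in I for a, b in E)]
alpha = max(sum(w[v] for v in I) for I in indep)
assert alpha >= gamma            # the lemma's inequality
assert alpha >= sum(f.values())  # some independent set outweighs |f|
```

Here $|f|=5$, matched by the independent set $\{0,2\}$ of weight $5$, while $\gamma^w=3$: the greedy $f$ certifies the lemma's inequality but need not be an optimal dominating function.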
\section{The case of partition graphs}
The main result of this paper is:
\begin{theorem}\label{fractrconj}
Conjecture \ref{conj:main} is true if $H$ is a partition graph. Namely, if $H$ is a partition graph and $G$ is any graph, then
$$(\alpha_\cap^w)^*(G,H)\geq\gamma_\cup^w(G,H).$$
\end{theorem}
Let us first re-formulate the left hand side of the inequality in terms of partitions.
For a partition ${\mathcal V}=(V_1, \ldots, V_m)$ of the vertex set $V$ of a graph $G$, let \begin{equation} \label{ns} {({\nu}^w)}^*(G, {\mathcal V}) =\max\Big\{ \sum_{v \in V}w(v)f(v) \;\Big|\; f~\text{is a fractional partial IT}\Big\}.\end{equation}
By Lemma \ref{fracit} we have:
\begin{lemma}\label{lem:param}
$(\alpha_\cap^w)^*(G,H)={({\nu}^w)}^*(G, {\mathcal V})$.
\end{lemma}
Let us also re-formulate the right hand side using the terminology of partitions.
Given a partition ${\mathcal V}=(V_1, \ldots, V_m)$ of $V(G)$, a pair of non-negative real valued functions $f$ on $V$ and $g$ on $[m]$ is said to be {\em collectively $w$-dominating} if for every vertex $v \in V_i$ we have $g(i)+f[N(v)] \ge w(v)$. Let $\gamma^w(G, {\mathcal V})$ be the minimum of $|g|+|f|$ over all collectively $w$-dominating pairs of functions. In this terminology, $\gamma_\cup^w(G,H)=\gamma^w(G,{\mathcal V})$. In addition, let $\tau^w(G, {\mathcal V})$ be the minimum of $|g|+\frac{|f|}{2}$ over all collectively $w$-dominating pairs of functions.
In \cite{abz} the following weighted version of Theorem \ref{fractr} was proved.
\begin{theorem}\label{thm:abz}
$\nu^w(G,{\mathcal V}) \ge \tau^w(G, {\mathcal V})$.
\end{theorem}
\begin{remark} Note the factor $\frac{1}{2}$ difference between the definitions of $\tau^w(G, {\mathcal V})$ and $\gamma^w(G,{\mathcal V})$. It mirrors the factor $\frac{1}{2}$ difference (manifest in the factor $2$ in ``$2|I|-1$'') between the statements of
Theorems \ref{thm:penny} and \ref{fractr}. The same factor appears in the weighted case: the difference between the integral and fractional versions is the $\frac{1}{2}$ factor hidden in the right hand sides
of Theorems \ref{thm:abz} and of \ref{thm:partition} below. \end{remark}
By Lemma \ref{lem:param} the case of Conjecture \ref{conj:main} in which $H$ is a partition graph is:
\begin{theorem}\label{thm:partition}
${(\nu^w)}^*(G,{\mathcal V}) \ge \gamma^w(G, {\mathcal V})$, where ${\mathcal V}$ is the partition of $V$ into cliques of $H$.
\end{theorem}
\begin{proof}
Note that if $f = \sum_{I \in {\mathcal I}(G)}x_I\chi_I$ then $f[V_j]=\sum_{I\in{\mathcal I}(G)}x_I|I\cap V_j|$, and thus the constraints defining the linear program for ${({\nu}^w)}^*(G, {\mathcal V})$ are $\sum_{I\in{\mathcal I}(G)}x_I|I\cap V_j|\leq 1$ for every $j\le m$ and $\sum_{I\in{\mathcal I}(G)}x_I=1$.
\begin{assertion}
\begin{equation}\label{t}{({\nu}^w)}^*(G,{\mathcal V})=\max\Big\{\sum_{I\in {\mathcal I}(G)}x_I w[I] \;\Big|\; \sum_{I\in{\mathcal I}(G)}x_I|I\cap
V_j|\leq 1 ~\text{for all}~ j,\; \sum_{I\in{\mathcal I}(G)}x_I\leq 1\Big\}\end{equation}
\end{assertion}
\begin{proof}
Denote the right hand side by $t$.
If $f=\sum_{I\in{\mathcal I}(G)}x_I\chi_I$ is an optimal solution of the linear program \eqref{ns} then ${({\nu}^w)}^*(G,{\mathcal V})=\sum_{v\in V}w(v)f(v)=\sum_{I\in {\mathcal I}(G)}x_I w[I]$. Hence ${({\nu}^w)}^*(G,{\mathcal V})\leq t$.
On the other hand, suppose that an optimal solution of the linear program \eqref{t} satisfies $\sum_{I\in{\mathcal I}(G)}x_I=1-\epsilon$ for some $\epsilon>0$. If $t=0$ then $t\leq {({\nu}^w)}^*(G,{\mathcal V})$ holds trivially, so assume $t>0$; then there exists an independent set $I_0$ such that $x_{I_0}>0$. Set $\epsilon'=\min(\epsilon,x_{I_0})$, choose a vertex $v\in I_0$, and define a vector $\vec{x'}$ as follows: let $x'_{I_0}=x_{I_0}-\epsilon'$, increase the coordinates of $I_0\setminus\{v\}$ and of $\{v\}$ by $\epsilon'$ each, and let $x'_I=x_I$ otherwise. Since $|I_0\cap V_j|=|(I_0\setminus\{v\})\cap V_j|+|\{v\}\cap V_j|$ for every $j$, the vector $\vec{x'}$ still satisfies the constraints of \eqref{t}, and since $w[I_0]=w[I_0\setminus\{v\}]+w[\{v\}]$, its objective value is unchanged, while $\sum_{I\in{\mathcal I}(G)}x'_I$ increases by $\epsilon'$. Iterating this step, we obtain an optimal solution of \eqref{t} with $\sum_{I\in{\mathcal I}(G)}x_I=1$, which is also a feasible solution of the linear program \eqref{ns}. Hence $t\leq {({\nu}^w)}^*(G,{\mathcal V})$, proving the desired equality.
\end{proof}
By LP duality
${({\nu}^w)}^*(G, {\mathcal V})$ is the minimum of $\sum_{j=0}^m y_j$ over all non-negative vectors $\vec{y}=(y_0,y_1,\dots,y_m)$ satisfying $y_0+\sum_{j=1}^my_j |I\cap V_j| \geq w[I]$ for all $I\in{\mathcal I}(G)$.
Let $\vec{y}=(y_0,y_1,\dots,y_m)$ be a vector at which the minimum is attained, meaning that $\sum_{j=0}^m y_j={({\nu}^w)}^*(G,{\mathcal V})$, and let $g(j)=\lfloor y_j\rfloor$ for all $j \le m$. We define a new weight function $w_g$ by $w_g(v)=[w(v)-\lfloor y_{j(v)}\rfloor]^+$, where $j(v)$ is the index $j$ for which $v \in V_j$. Let $V'=\{v \mid w_g(v)>0\}$ be the support of $w_g$, and let $G'=G[V']$.
For a number $s$ let $\{s\}$ be the fractional part of $s$, namely $\{s\}=s- \lfloor s \rfloor$.
\begin{assertion}
The vector $(y_0,\{y_1\},\dots,\{y_m\})$ is an optimal solution of the dual of the linear program defining $(\nu^{w_g})^*(G', {\mathcal V})$; namely,
$$y_0+\sum_{j=1}^m \{y_j\}=(\nu^{w_g})^*(G',{\mathcal V}):=$$ $$\max\{\sum_{I\in{\mathcal I}(G')}x_I w_g[I]\mid \;\sum_{I\in{\mathcal I}(G')} x_I\leq1\; \text{and} \; \forall j\;\sum_{I\in{\mathcal I}(G')} x_I |I\cap V_j|\leq 1\}$$
\end{assertion}
\begin{proof}
Denote by $y$ the sum $y_0+\sum_{j=1}^m \{y_j\}$.
For every $v\in V'$ we have $w_g(v)=w(v)-\lfloor y_{j(v)}\rfloor$, so $w_g[I]=w[I]-\sum_{j=1}^m\lfloor y_j\rfloor |I\cap V_j|$ for every $I\in{\mathcal I}(G')\subseteq{\mathcal I}(G)$. Subtracting $\sum_{j=1}^m\lfloor y_j\rfloor|I\cap V_j|$ from both sides of the constraint $y_0+\sum_{j=1}^m y_j|I\cap V_j|\geq w[I]$ yields $y_0+\sum_{j=1}^m\{y_j\} |I\cap V_j|\geq w_g[I]$, proving that $y\geq (\nu^{w_g})^*(G',{\mathcal V})$.
For the reverse inequality, assume for contradiction that there exists a feasible solution $\vec{x}=(x_0,x_1,\dots,x_m)$ of this dual program such that $\sum_{j=0}^m x_j<y$.
Then the vector $\vec{x'}=(x_0,x_1+\lfloor y_1\rfloor,\dots,x_m+\lfloor y_m\rfloor)$ is feasible for the original dual problem and satisfies $x_0+\sum_{j=1}^m \left(x_j+\lfloor y_j\rfloor\right)<\sum_{j=0}^m y_j={({\nu}^w)}^*(G,{\mathcal V})$, contradicting the optimality of $\vec{y}$.
\end{proof}
By the argument proving the first assertion, there is an optimal solution of the primal problem corresponding to the weight function $w_g$ that satisfies $\sum_{I\in\mathcal{I}(G')}x_I=1$; in particular, there exists a set $I$ such that $x_I>0$.
Let $I_0$ be a set of minimal weight in $\operatorname{supp}(\vec x)$. Then \begin{equation}\label{I_0} w_g[I_0]=w_g[I_0]\sum_{I\in{\mathcal I}(G')}x_I\leq \sum_{I\in{\mathcal I}(G')}x_I w_g[I]=(\nu^{w_g})^*(G',{\mathcal V})\end{equation}
Let $h:V'\to\mathbb{N}$ be defined by $h(v)=w_g(v)$ if $v\in I_0$ and $h(v)=0$ otherwise.
\begin{assertion}
The function $h$ is $w_g$-dominating in $G'$.
\end{assertion}
\begin{proof}
Suppose not. Then there exists a vertex $v\in V'$ such that $w_g(v)>h(v)+h[\tilde{N}(v)]=h(v)+w_g[\tilde{N}(v)\cap I_0]$. Clearly, $v\notin I_0$, hence $h(v)=0$. The set $I'=(I_0\setminus \tilde{N}(v))\cup \{v\}$ satisfies
$$w_g[I']=w_g[I_0\setminus \tilde{N}(v)]+w_g(v)>w_g[I_0\setminus \tilde{N}(v)]+w_g[I_0\cap \tilde{N}(v)]=w_g[I_0]$$ Since $w$ is an integral function, $w_g$ is also integral, hence $w_g[I']=w_g[I_0]+k_v$ for some $k_v\geq 1$.
On the other hand, by the definition of the dual program $w_g[I']\leq y_0+\sum_{j=1}^m \{y_j\}|I'\cap V_j|$. In addition, since $x_{I_0}>0$, the complementary slackness conditions state that equality holds in the corresponding constraint in the dual problem, i.e. $w_g[I_0]= y_0+\sum_{j=1}^m \{y_j\}|I_0\cap V_j|$. Hence,
$$k_v=w_g[I']-w_g[I_0]\leq \sum_{j=1}^m \{y_j\}\left(|I'\cap V_j|-|I_0\cap V_j|\right)=\{y_{j(v)}\}-\sum_{u\in N(v)\cap I_0}\{y_{j(u)}\}<1, $$
contradicting $k_v\geq 1$.
\end{proof}
Since $g$ dominates all vertices $v\in V\setminus V'$, the pair $(g,h)$ is $w$-dominating; hence, using \eqref{I_0}, we have $$\gamma^w(G,{\mathcal V})\leq |g|+|h|=|g|+w_g[I_0]\leq |g|+(\nu^{w_g})^*(G',{\mathcal V})$$ $$=\sum_{j=1}^m \lfloor y_j\rfloor+y_0+\sum_{j=1}^m \{y_j\}=\sum_{j=0}^m y_j={({\nu}^w)}^*(G, {\mathcal V}), $$
as desired.
\end{proof}
\end{document} |
\begin{document}
\title{Consistent Re-Calibration of the Discrete-Time Multifactor Vasi\v{c}ek Model}
\begin{abstract}
\noindent The discrete-time multifactor Vasi\v{c}ek model is a tractable Gaussian spot rate model. Typically, two- or three-factor versions allow one to capture the dependence structure between yields with different times to maturity in an appropriate way. In practice, re-calibration of the model to the prevailing market conditions leads to model parameters that change over time. Therefore, the model parameters should be understood as being time-dependent or even stochastic. Following the consistent re-calibration (CRC) approach, we construct models as concatenations of yield curve increments of Hull--White extended multifactor Vasi\v{c}ek models with different parameters. The CRC approach provides attractive tractable models that preserve the no-arbitrage premise. As~a numerical example, we fit Swiss interest rates using CRC multifactor Vasi\v{c}ek models.
\end{abstract}
\section{Introduction}
The tractability of affine models, such as the Vasi\v{c}ek \cite{Vasicek} and the Cox--Ingersoll--Ross \cite{CIR} models, has made them appealing for term structure modeling. Affine term structure models are based on a (multidimensional) factor process, which in turn describes the evolution of the spot rate and the bank account processes. No-arbitrage arguments then provide the corresponding zero-coupon bond prices, yield curves and forward rates. Prices in these models are calculated under an equivalent martingale measure for known static model parameters. However, model parameters typically vary over time as financial market conditions change. They may, for instance, be of a regime switching nature and need to be permanently re-calibrated to the actual financial market conditions. In practice, this re-calibration is done on a regular basis (as new information becomes available). This implies that model parameters are not static and, henceforth, may also be understood as stochastic processes. The re-calibration should preserve the no-arbitrage condition, which provides side constraints in the re-calibration. The aim of this work is to discuss these side constraints with the help of the discrete-time multifactor Vasi\v cek interest rate model, which is a tractable, but also flexible model. We show that re-calibration under the side constraints naturally leads to Heath--Jarrow--Morton \cite{HJM} models with stochastic parameters, which we call consistent re-calibration (CRC) models \cite{Harms}.
These models are attractive in financial applications for several reasons. In risk management and in the current regulatory framework \cite{BIS}, one needs realistic and tractable models of portfolio returns. Our approach provides tractable non-Gaussian models for multi-period returns on bond portfolios. Moreover, stress tests for risk management purposes can be implemented efficiently in our framework by selecting suitable models for the parameter process. While an in-depth market study of the performance of CRC models remains to be done, we provide in this paper some evidence of improved fits.
The paper is organized as follows. In Section~\ref{sec: hwe}, we introduce Hull--White extended discrete-time multifactor Vasi\v cek models, which are the building blocks for CRC in this work. We define CRC of the Hull--White extended multifactor Vasi\v cek model in Section~\ref{sec: crc}. Section~\ref{sec: real world dynamics} specifies the market price of risk assumptions used to model the factor process under the real-world probability measure and the equivalent martingale measure, respectively. In Section~\ref{sec: parameters}, we deal with parameter estimation from market data. In Section~\ref{sec: numerical example}, we fit the model to Swiss interest rate data, and in Section~\ref{sec: conclusion}, we conclude. All proofs are presented in Appendix~\ref{sec: proofs}.
\section[Discrete-Time Multifactor Vasicek Model and Hull--White Extension]{Discrete-Time Multifactor Vasi\v{c}ek Model and Hull--White Extension} \label{sec: hwe}
\subsection{Setup and Notation}
Choose a fixed grid size $\Delta>0$ and consider the discrete-time grid $\{0,\Delta, 2\Delta, 3\Delta, \ldots \} = {\mathbb N}_0\Delta$. For~example, a daily grid corresponds to $\Delta=1/252$ if there are 252 business days per year. Choose a (sufficiently rich) filtered probability space
$(\Omega,{\cal F},{\mathbb F},{\mathbb P}^\ast)$ with discrete-time filtration ${\mathbb F}=({\cal F}(t))_{t\in{\mathbb N}_0}$, where $t\in {\mathbb N}_0$ refers to time point $t\Delta$. Assume that ${\mathbb P}^\ast$ denotes an equivalent martingale measure for a (strictly positive) bank account numeraire $(B(t))_{t\in{\mathbb N}_0}$. $B(t)$ denotes the value at time $t\Delta$ of an investment of one unit of currency at Time $0$ into the bank account (i.e., the risk-free rollover relative to $\Delta$).
We use the following notation. Subscript indices refer to elements of vectors and matrices. Argument indices refer to time points. We denote the $n\times n$ identity matrix by $\mathds{1}\in{\mathbb R}^{n\times n}$. We also introduce the vectors $\boldsymbol 1=\left(1,\ldots,1\right)^\top\in{\mathbb R}^n$ and $\boldsymbol e_1=(1,0,\ldots,0)^\top\in{\mathbb R}^n$.
\subsection[Discrete-Time Multifactor Vasicek Model]{Discrete-Time Multifactor Vasi\v{c}ek Model}
\label{discrete-time one-factor Vasicek model}
We choose $n\in{\mathbb N}$ fixed and introduce the $n$-dimensional ${\mathbb F}$-adapted factor process:
\[
\boldsymbol X=\left(\boldsymbol X(t)\right)_{t\in\mathbb N_0}=\left(X_1(t),\ldots,X_n(t)\right)^\top_{t\in\mathbb N_0},
\]
which generates the spot rate and bank account processes as follows:
\begin{equation}\label{eq: spot rate}
r(t)=\boldsymbol 1^\top\boldsymbol X(t)\quad\text{and}\quad B(t)=\exp\left\{\Delta\sum_{s=0}^{t-1}r(s)\right\},
\end{equation}
where $t\in\mathbb N_0$; empty sums are set equal to zero. The factor process $\boldsymbol X$ is assumed to evolve under ${\mathbb P}^\ast$ according to:
\begin{equation}\label{eq: ARn}
\boldsymbol X(t)=\boldsymbol b+\beta\boldsymbol X(t-1)+\Sigma^{\frac{1}{2}}\boldsymbol\varepsilon^\ast(t),\quad t>0,
\end{equation}
with initial factor $\boldsymbol X(0)\in{\mathbb R}^n$, $\boldsymbol b\in\mathbb R^n$, $\beta\in\mathbb R^{n\times n}$, $\Sigma^{\frac{1}{2}}\in\mathbb R^{n\times n}$ and $(\boldsymbol \varepsilon^\ast(t))_{t\in\mathbb N}=(\varepsilon_1^\ast(t),\ldots,\varepsilon_n^\ast(t))^\top_{t\in\mathbb N}$ being $\mathbb F$-adapted. The following assumptions are in place throughout the paper.
\begin{Assumption}\label{assumption}
We assume that the spectrum of matrix $\beta$ is a subset of $(-1,1)^n$ and that matrix $\Sigma^{\frac{1}{2}}$ is non-singular. Moreover, for each $t\in{\mathbb N}$, we assume that $\boldsymbol\varepsilon^\ast(t)$ is independent of $\mathcal F(t-1)$ under ${\mathbb P}^\ast$ and has standard normal distribution $\boldsymbol \varepsilon^\ast(t)\stackrel{\mathbb P^\ast}{\sim}\mathcal N(\boldsymbol 0, \mathds{1})$.
\end{Assumption}
\begin{Remark*}
In Assumption~\ref{assumption}, the condition on matrix $\beta$ ensures that $\mathds{1}-\beta$ is invertible and that the geometric series generated by $\beta$ converges. The condition on $\Sigma^{\frac{1}{2}}$ ensures that $\Sigma=\Sigma^{\frac{1}{2}}(\Sigma^{\frac{1}{2}})^\top$ is symmetric positive definite. Under Assumption~\ref{assumption}, Equation \eqref{eq: ARn} defines a stationary process; see \cite{TS}, Section 11.3.
\end{Remark*}
The model defined by Equations \eqref{eq: spot rate} and \eqref{eq: ARn} is called the discrete-time multifactor Vasi\v cek model. Under the above model assumptions, we have for $m>t$:
\begin{equation}\label{eq: AR conditional distribution}
\boldsymbol X(m)|\mathcal F(t)\stackrel{\mathbb P^\ast}{\sim}\mathcal N\left(\left(\mathds{1}-\beta\right)^{-1}\left(\mathds{1}-\beta^{m-t}\right)\boldsymbol b+\beta^{m-t}\boldsymbol X(t),\sum_{s=0}^{m-t-1}\beta^s\Sigma(\beta^\top)^s\right).
\end{equation}
\begin{Remark*}
For $m>t$, the conditional distribution of $\boldsymbol X(m)$, given ${\cal F}(t)$, depends only on the value $\boldsymbol X(t)$ at time $t\Delta$ and on lag $m-t$. In other words, the factor process \eqref{eq: ARn} is a time-homogeneous Markov~process.
\end{Remark*}
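The moments in \eqref{eq: AR conditional distribution} can be checked against the one-step recursions $\boldsymbol\mu_{t+1}=\boldsymbol b+\beta\boldsymbol\mu_t$ and $C_{t+1}=\beta C_t\beta^\top+\Sigma$ implied by \eqref{eq: ARn}. A small numerical sketch (the parameter values are illustrative choices of ours, not taken from the paper):

```python
import numpy as np

# Illustrative parameters (our choice): spectrum of beta inside (-1, 1).
n, lag = 2, 5
beta = np.array([[0.6, 0.1],
                 [0.0, 0.7]])
b = np.array([0.01, -0.02])
Shalf = np.array([[0.02, 0.0],
                  [0.005, 0.01]])          # Sigma^{1/2}, non-singular
Sigma = Shalf @ Shalf.T
x = np.array([0.03, 0.01])                 # X(t), the conditioning value

# Iterate the one-step mean/covariance recursions lag times.
mean, cov = x.copy(), np.zeros((n, n))
for _ in range(lag):
    mean = b + beta @ mean
    cov = beta @ cov @ beta.T + Sigma

# Closed forms from the conditional distribution of X(t+lag) given F(t).
I = np.eye(n)
mean_cf = (np.linalg.solve(I - beta, (I - np.linalg.matrix_power(beta, lag)) @ b)
           + np.linalg.matrix_power(beta, lag) @ x)
cov_cf = sum(np.linalg.matrix_power(beta, s) @ Sigma @ np.linalg.matrix_power(beta.T, s)
             for s in range(lag))
assert np.allclose(mean, mean_cf) and np.allclose(cov, cov_cf)
```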
At time $t\Delta$, the price of the zero-coupon bond (ZCB) with maturity date $m \Delta >t\Delta$ with respect to filtration ${\mathbb F}$ and equivalent martingale measure ${\mathbb P}^\ast$ is given by:
\begin{equation*}
P(t,m)={\mathbb E}^\ast \left[\left.\frac{B(t)}{B(m)}\right|{\cal F}(t)\right]
={\mathbb E}^\ast \left[\left. \exp \left\{-\Delta\sum_{s=t}^{m-1}\boldsymbol1^\top\boldsymbol X(s) \right\}\right|{\cal F}(t)\right].
\end{equation*}
For the proof of the following result, see Appendix \ref{sec: proofs}.
\begin{Theorem}\label{theo: ARn prices}
The ZCB prices in the discrete-time multifactor Vasi\v cek Models \eqref{eq: spot rate} and \eqref{eq: ARn} with respect to filtration ${\mathbb F}$ and equivalent martingale measure ${\mathbb P}^\ast$ have an affine term structure:
\[
P(t,m)=e^{A(t,m)-\boldsymbol B(t,m)^\top\boldsymbol X(t)},\quad m>t,
\]
with $A(m-1,m)=0$, $\boldsymbol B(m-1,m)=\boldsymbol 1\Delta$ and for $m-1>t\geq0$:
\[
\begin{aligned}
A(t,m)&=A(t+1,m)-\boldsymbol B(t+1,m)^\top\boldsymbol b+\frac{1}{2}\boldsymbol B(t+1,m)^\top\Sigma\boldsymbol B(t+1,m),\\
\boldsymbol B(t,m)&=\left(\mathds{1}-\beta^\top\right)^{-1}\left(\mathds{1}-(\beta^\top)^{m-t}\right)\boldsymbol 1\Delta.
\end{aligned}
\]
\end{Theorem}
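The coefficients of Theorem \ref{theo: ARn prices} are easy to exercise numerically. The sketch below (illustrative parameters of our own) runs the backward recursion, checks the closed form for $\boldsymbol B(t,m)$ against the equivalent one-step form $\boldsymbol B(t,m)=\boldsymbol 1\Delta+\beta^\top\boldsymbol B(t+1,m)$, and verifies the two-period price against the direct Gaussian expectation $P(t,t+2)=\exp\{-\Delta\boldsymbol 1^\top\boldsymbol x-\Delta\boldsymbol 1^\top(\boldsymbol b+\beta\boldsymbol x)+\frac{\Delta^2}{2}\boldsymbol 1^\top\Sigma\boldsymbol 1\}$:

```python
import numpy as np

# Illustrative parameters (our choice).
n, Delta, M = 2, 1 / 252, 10
beta = np.array([[0.6, 0.1],
                 [0.0, 0.7]])
b = np.array([0.01, -0.02])
Shalf = np.array([[0.02, 0.0],
                  [0.005, 0.01]])
Sigma = Shalf @ Shalf.T
one, I = np.ones(n), np.eye(n)

# Backward recursion of the theorem: A(m-1,m) = 0, B(m-1,m) = 1*Delta.
A = {M - 1: 0.0}
B = {M - 1: one * Delta}
for t in range(M - 2, -1, -1):
    A[t] = A[t + 1] - B[t + 1] @ b + 0.5 * B[t + 1] @ Sigma @ B[t + 1]
    B[t] = Delta * one + beta.T @ B[t + 1]       # one-step form of B(t,m)

# Closed form (1 - beta^T)^{-1} (1 - (beta^T)^{m-t}) 1 Delta.
for t in range(M):
    Bcf = np.linalg.solve(I - beta.T,
                          (I - np.linalg.matrix_power(beta.T, M - t)) @ one) * Delta
    assert np.allclose(B[t], Bcf)

# Two-period price vs. the direct Gaussian expectation.
x = np.array([0.03, 0.01])
t = M - 2
price_affine = np.exp(A[t] - B[t] @ x)
price_direct = np.exp(-Delta * one @ x
                      - Delta * one @ (b + beta @ x)
                      + 0.5 * Delta ** 2 * one @ Sigma @ one)
assert np.isclose(price_affine, price_direct)
```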
In the discrete-time multifactor Vasi\v{c}ek Models \eqref{eq: spot rate} and \eqref{eq: ARn}, the term structure of interest rates (yield curve) takes the following form at time $t\Delta$ for maturity dates $m\Delta>t\Delta$:
\begin{equation}\label{eq: ARn yields}
Y(t,m)=-\frac{1}{(m-t)\Delta}\log P(t,m)
=-\frac{A(t,m)}{(m-t)\Delta}+\frac{\boldsymbol B(t,m)^\top\boldsymbol X(t)}{(m-t)\Delta},
\end{equation}
with the spot rate at time $t\Delta$ given by $Y(t,t+1)=\boldsymbol 1^\top\boldsymbol X(t)=r(t)$.
\subsection[Hull--White Extended Discrete-Time Multifactor Vasicek Model]{Hull--White Extended Discrete-Time Multifactor Vasi\v{c}ek Model}
The possible shapes of the Vasi\v{c}ek yield curve \eqref{eq: ARn yields} are restricted by the choice of the parameters $\boldsymbol b\in{\mathbb R}^n$, $\beta\in{\mathbb R}^{n\times n}$ and $\Sigma\in{\mathbb R}^{n\times n}$. These parameters are not sufficiently flexible to exactly calibrate the model to an arbitrary observed initial yield curve. Therefore, we consider the Hull--White extended version (see \cite{HW1994}) of the discrete-time multifactor Vasi\v{c}ek model. We replace the factor process defined in \eqref{eq: ARn} as follows. For fixed $k\in{\mathbb N}_0$, let $\boldsymbol X^{(k)}$ satisfy:
\begin{equation}\label{eq: ARn+}
\boldsymbol X^{(k)}(t)=\boldsymbol b+\theta(t-k)\boldsymbol e_1+\beta\boldsymbol X^{(k)}(t-1)+\Sigma^{\frac{1}{2}}\boldsymbol\varepsilon^\ast(t),\quad t>k,
\end{equation}
with starting factor $\boldsymbol X^{(k)}(k)\in{\mathbb R}^n$, $\boldsymbol e_1=(1,0,\ldots,0)^\top\in{\mathbb R}^n$ and function $\theta:{\mathbb N}\rightarrow{\mathbb R}$. Model assumption~\eqref{eq: ARn+} corresponds to \eqref{eq: ARn}, where the first component of $\boldsymbol b$ is replaced by the time-dependent coefficient $(b_1+\theta(i))_{i\in{\mathbb N}}$, all other terms remaining unchanged. Without loss of generality, we choose the first component for this replacement. Note that parameter $b_1$ is redundant in this model specification, but for didactical reasons, it is used below. The time-dependent coefficient $\theta$ is called the \emph{Hull--White extension}, and it is used to calibrate the model to a given yield curve at a given time point $k\Delta$. The upper index $^{(k)}$ denotes that time point and corresponds to the time shift we apply to the Hull--White extension $\theta$ in Model~\eqref{eq: ARn+}. The factor process $\boldsymbol X^{(k)}$ generates the spot rate process and the bank account process as in \eqref{eq: spot rate}.
The model defined by (\ref{eq: spot rate}, \ref{eq: ARn+}) is called the Hull--White extended discrete-time multifactor Vasi\v cek model. Under these model assumptions, we have for $m>t\geq k$:
\[
\boldsymbol X^{(k)}(m)|\mathcal F(t)\stackrel{\mathbb P^\ast}{\sim}\mathcal N\left(\sum_{s=0}^{m-t-1}\beta^s\left(\boldsymbol b+\theta(m-s-k)\boldsymbol e_1\right)+\beta^{m-t}\boldsymbol X^{(k)}(t),\sum_{s=0}^{m-t-1}\beta^s\Sigma(\beta^\top)^s\right).
\]
\begin{Remark*}
For $m>t\geq k$, the conditional distribution of $\boldsymbol X^{(k)}(m)$, given ${\cal F}(t)$, depends only on the factor $\boldsymbol X^{(k)}(t)$ at time $t\Delta$. In this case, factor process \eqref{eq: ARn+} is a time-inhomogeneous Markov process. Note that the upper index $^{(k)}$ in the notation is important since the conditional distribution depends explicitly on the lag $m-k$.
\end{Remark*}
\begin{Theorem}\label{theo: ARn+ prices}
The ZCB prices in the Hull--White extended discrete-time multifactor Vasi\v cek model (\ref{eq: spot rate}, \ref{eq: ARn+}) with respect to filtration $\mathbb F$ and equivalent martingale measure ${\mathbb P}^\ast$ have affine term structure:
\[
P^{(k)}(t,m)=e^{A^{(k)}(t,m)-\boldsymbol B(t,m)^\top\boldsymbol X^{(k)}(t)},\quad m>t\geq k,
\]
with $\boldsymbol B(t,m)$ as in Theorem \ref{theo: ARn prices}, $A^{(k)}(m-1,m)=0$ and for $m-1>t\geq k$:
\[
\begin{aligned}
A^{(k)}(t,m)&=A^{(k)}(t+1,m)-\boldsymbol B(t+1,m)^\top\left(\boldsymbol b+\theta(t+1-k)\boldsymbol e_1\right)\\&\quad+\frac{1}{2}\boldsymbol B(t+1,m)^\top\Sigma\boldsymbol B(t+1,m).
\end{aligned}
\]
\end{Theorem}
In the Hull--White extended discrete-time multifactor Vasi\v{c}ek model (\ref{eq: spot rate}, \ref{eq: ARn+}), the yield curve takes the following form at time $t\Delta$ for maturity dates $ m\Delta>t\Delta\geq k\Delta$:
\begin{equation}\label{eq: ARn+ yields}
Y^{(k)}(t,m)=-\frac{1}{(m-t)\Delta}\log P^{(k)}(t,m)
=-\frac{A^{(k)}(t,m)}{(m-t)\Delta}+\frac{\boldsymbol B(t,m)^\top\boldsymbol X^{(k)}(t)}{(m-t)\Delta},
\end{equation}
with spot rate at time $t\Delta$ given by $Y^{(k)}(t,t+1)=\boldsymbol 1^\top\boldsymbol X^{(k)}(t)$.
\begin{Remark*}
Note that the coefficient $\boldsymbol B(t,m)$ in Theorem \ref{theo: ARn+ prices} is not affected by the Hull--White extension $\theta$ and depends solely on $m-t$, whereas the coefficient $A^{(k)}(t,m)$ depends explicitly on the Hull--White extension $\theta$.
\end{Remark*}
\subsection{Calibration of the Hull--White Extended Model}
We consider the term structure model defined by the Hull--White extended factor process $\boldsymbol X^{(k)}$ and calibrate the Hull--White extension $\theta\in{\mathbb R}^{\mathbb N}$ to a given yield curve at time point $k\Delta$. We explicitly introduce the time index $k$ in Model~\eqref{eq: ARn+} because the CRC algorithm is a concatenation of multiple Hull--White extended models, which are calibrated at different time points $k\Delta$, see Section~\ref{sec: crc} below.
Assume that there is a fixed final time to maturity date $M\Delta$ and that we observe at time $k\Delta$ the yield curve $\widehat{\boldsymbol{y}}(k)\in{\mathbb R}^M$ for maturity dates $(k+1)\Delta,\ldots,(k+M)\Delta$. For these maturity dates, the Hull--White extended discrete-time multifactor Vasi\v{c}ek yield curve at time $k\Delta$, given by Theorem \ref{theo: ARn+ prices}, reads as:
\begin{equation*}
\boldsymbol{y}^{(k)}(k)=
\left(
-\frac{1}{i\Delta}A^{(k)}(k,k+i)+\frac{1}{i\Delta}\boldsymbol B(k,k+i)^\top\boldsymbol X^{(k)}(k)\right)^\top_{i=1,\ldots,M} \in {\mathbb R}^M.
\end{equation*}
For given starting factor $\boldsymbol X^{(k)}(k)\in\mathbb R^n$ and parameters $\boldsymbol b\in\mathbb R^n$,
$\beta\in\mathbb R^{n\times n}$ and $\Sigma\in\mathbb R^{n\times n}$, our aim is to choose the Hull--White extension $\theta\in\mathbb R^{\mathbb N}$ such that we get an exact fit at time $k\Delta$ to the yield curve $\widehat{\boldsymbol{y}}(k)$, that is,
\begin{equation}\label{eq: calibration theta}
\boldsymbol{y}^{(k)}(k)=\widehat{\boldsymbol{y}}(k).
\end{equation}
The following theorem provides an equivalent condition to \eqref{eq: calibration theta}, which allows one to calculate the Hull--White extension $\theta\in{\mathbb R}^{\mathbb N}$ explicitly.
\begin{Theorem}\label{theo: calibration}
Denote by $\boldsymbol{y}^{(k)}(k)$ the yield curve at time $k\Delta$ obtained from the Hull--White extended discrete-time multifactor Vasi\v{c}ek Model (\ref{eq: spot rate}, \ref{eq: ARn+}) for given starting factor $\boldsymbol X^{(k)}(k)=\boldsymbol x\in\mathbb R^n$, parameters $\boldsymbol b\in\mathbb R^n$, $\beta\in\mathbb R^{n\times n}$ and $\Sigma\in\mathbb R^{n\times n}$ and Hull--White extension $\theta\in{\mathbb R}^{\mathbb N}$. For given $\boldsymbol y\in{\mathbb R}^M$, identity $\boldsymbol{y}^{(k)}(k)=\boldsymbol y$ holds if and only if the Hull--White extension $\boldsymbol\theta$ fulfills:
\begin{equation}\label{eq: calibration matrix}
\boldsymbol{\theta}={\cal C}(\beta)^{-1} \boldsymbol{z}\left(\boldsymbol b,\beta,\Sigma, \boldsymbol x,\boldsymbol y\right),
\end{equation}
where $\boldsymbol\theta=(\theta_i)_{i=1,\ldots,M-1}^\top\in{\mathbb R}^{M-1}$, ${\cal C}(\beta)=\left({\cal C}_{ij}(\beta)\right)_{i,j=1,\ldots,M-1}\in{\mathbb R}^{(M-1)\times(M-1)}$ and \\$\boldsymbol{z}\left(\boldsymbol b,\beta,\Sigma, \boldsymbol x,\boldsymbol y\right)=\left(z_i\left(\boldsymbol b,\beta,\Sigma, \boldsymbol x,\boldsymbol y\right)\right)_{i=1,\ldots,M-1}^\top\in{\mathbb R}^{M-1}$ are defined by:
\[
\begin{aligned}
\theta_i&=\theta(i),\\
{\cal C}_{ij}(\beta)&=B_1(k+j, k+i+1)~1_{\{ j \le i \}},\\
z_i\left(\boldsymbol b,\beta,\Sigma, \boldsymbol x,\boldsymbol y\right)&=\sum_{s=k+1}^{k+i}\left(\frac{1}{2}\boldsymbol B(s,k+i+1)^\top\Sigma\boldsymbol B(s,k+i+1)-\boldsymbol B(s,k+i+1)^\top\boldsymbol b\right)\\&\qquad\qquad\qquad-\boldsymbol 1^\top\left(\mathds{1}-\beta^{i+1}\right)\left(\mathds{1}-\beta\right)^{-1}\boldsymbol x\Delta+(i+1) y_{i+1}(k)\Delta,
\end{aligned}
\]
with $i,j=1,\ldots,M-1$ and $\boldsymbol B(\cdot,\cdot)=\left(B_1(\cdot,\cdot),\ldots,B_n(\cdot,\cdot)\right)^\top$ given by Theorem \ref{theo: ARn prices}.
\end{Theorem}
Theorem \ref{theo: calibration} shows that the Hull--White extension can be calculated by inverting the \mbox{$(M-1)\times(M-1)$} lower triangular matrix ${\cal C}(\beta)$, whose diagonal entries equal $B_1(k+i,k+i+1)=\Delta>0$, so that the inversion amounts to forward substitution.
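A quick way to see the triangular structure concretely: assemble ${\cal C}(\beta)$ from the closed form of $\boldsymbol B$ for some illustrative parameters (our own choices) and confirm that it is lower triangular with constant diagonal $\Delta$, hence invertible.

```python
import numpy as np

# Illustrative parameters (our choice).
n, Delta, M, k = 2, 1 / 252, 6, 0
beta = np.array([[0.6, 0.1],
                 [0.0, 0.7]])
one, I = np.ones(n), np.eye(n)

def B(t, m):
    """Closed form of B(t,m) from the affine-price theorem."""
    return np.linalg.solve(I - beta.T,
                           (I - np.linalg.matrix_power(beta.T, m - t)) @ one) * Delta

# C_{ij}(beta) = B_1(k+j, k+i+1) for j <= i, zero above the diagonal.
C = np.array([[B(k + j, k + i + 1)[0] if j <= i else 0.0
               for j in range(1, M)] for i in range(1, M)])

assert np.allclose(C, np.tril(C))       # lower triangular
assert np.allclose(np.diag(C), Delta)   # diagonal B_1(k+i, k+i+1) = Delta > 0
```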
\section{Consistent Re-Calibration}\label{sec: crc}
The crucial extension now is the following: we let parameters $\boldsymbol b$, $\beta$ and $\Sigma$ vary over time, and we re-calibrate the Hull--White extension in a consistent way at each time point, that is according to the actual choice of the parameter values using Theorem \ref{theo: calibration}. Below, we show that this naturally leads to a Heath--Jarrow--Morton \cite{HJM} (HJM) approach to term structure modeling.
\subsection{Consistent Re-Calibration Algorithm} \label{Re-calibration algorithm}
Assume that $\left(\boldsymbol b(k)\right)_{k\in\mathbb N_0}$, $(\beta(k))_{k\in {\mathbb N}_0}$ and $(\Sigma(k))_{k\in {\mathbb N}_0}$ are ${\mathbb F}$-adapted parameter processes with $\beta(k)$ and $\Sigma(k)$ satisfying Assumption \ref{assumption}, ${\mathbb P}^\ast$-a.s., for all $k\in{\mathbb N}_0$. Based on these parameter processes, we define the $n$-dimensional ${\mathbb F}$-adapted CRC factor process $\boldsymbol{\mathcal X}$, which evolves according to Steps (i)--(iv) of the CRC algorithm described below. Thus, factor process $\boldsymbol{\mathcal X}$ will define a spot rate model similar to \eqref{eq: spot rate}.
In the CRC algorithm, Steps \ref{subsubsec crc step 1}--\ref{subsubsec crc step 3} below are executed iteratively.
\subsubsection[Initialization k=0]{Initialization $k=0$} \label{subsubsec crc step 1}
Assume that the initial yield curve observation at Time 0 is given by $\widehat{\boldsymbol{y}}(0)\in{\mathbb R}^{M}$. Let $\theta^{(0)}\in{\mathbb R}^{\mathbb N}$ be an ${\cal F}(0)$-measurable Hull--White extension, such that condition \eqref{eq: calibration theta} is satisfied at Time $0$ for initial factor $\boldsymbol{\mathcal X}(0)\in{\mathbb R}^n$ and parameters $\boldsymbol b(0)$, $\beta(0)$ and $\Sigma(0)$. By Theorem \ref{theo: calibration}, the values $\boldsymbol \theta^{(0)}=(\theta^{(0)}(i))_{i=1,\ldots,M-1}\in{\mathbb R}^{M-1}$ are given by:
\begin{equation*}
\boldsymbol{\theta}^{(0)} =
{\cal C}\left(\beta(0)\right)^{-1} \boldsymbol{z}\left(\boldsymbol b(0),\beta(0),\Sigma(0),\boldsymbol{\cal X}(0),\widehat{\boldsymbol{y}}(0)\right).
\end{equation*}
This provides the Hull--White extended Vasi\v{c}ek yield curve $\boldsymbol y^{(0)}(0)$, identically equal to $\widehat{\boldsymbol{y}}(0)$, for given initial factor $\boldsymbol{\cal X}(0)$ and parameters $\boldsymbol b(0)$, $\beta(0)$, $\Sigma(0)$.
\subsubsection[Increments of the Factor Process from k to k+1]{Increments of the Factor Process from $k$ to $k+1$}\label{subsubsec crc step 2}
Assume factor $\boldsymbol{\cal X}(k)$, parameters $\boldsymbol b(k),\beta(k)$ and $\Sigma(k)$ and Hull--White extension $\theta^{(k)}$ are given. Define the Hull--White extended model $\boldsymbol X^{(k)}=(\boldsymbol X^{(k)}(t))_{t\geq k}$ by:
\begin{equation}\label{re-calibration step 1}
\boldsymbol X^{(k)}(t)=\boldsymbol b(k)+\theta^{(k)}(t-k)\boldsymbol e_1+\beta(k)\boldsymbol X^{(k)}(t-1) + \Sigma(k)^{\frac{1}{2}}\boldsymbol\varepsilon^\ast(t),\quad t>k,
\end{equation}
with starting value $\boldsymbol X^{(k)}(k)=\boldsymbol{\cal X}(k)$, ${\cal F}(k)$-measurable parameters $\boldsymbol b(k)$, $\beta(k)$ and $\Sigma(k)$ and Hull--White extension $\theta^{(k)}$. We update the factor process $\boldsymbol{\cal X}$ at time $(k+1)\Delta$ according to the $\boldsymbol X^{(k)}$-dynamics, that is, we set:
\[
\boldsymbol{\cal X}(k+1)=\boldsymbol X^{(k)}(k+1).
\]
This provides the ${\cal F}(k+1)$-measurable yield curve at time $(k+1)\Delta$ for maturity dates \mbox{$ m\Delta>(k+1)\Delta$:}
\begin{equation*}
Y^{(k)}(k+1,m)=-\frac{A^{(k)}(k+1,m)}{(m-(k+1))\Delta}+\frac{\boldsymbol B^{(k)}(k+1,m)^\top\boldsymbol{\cal X}(k+1)}{(m-(k+1))\Delta},
\end{equation*}
with $A^{(k)}(m-1,m)=0$ and $\boldsymbol B^{(k)}(m-1,m)=\Delta\boldsymbol 1$, and recursively for $m-1>t\geq k$:
\[
\begin{aligned}
A^{(k)}(t,m)&=A^{(k)}(t+1,m)-\boldsymbol B^{(k)}(t+1,m)^\top\left(\boldsymbol b(k)+\theta^{(k)}(t+1-k)\boldsymbol e_1\right)\\&\quad+\frac{1}{2}\boldsymbol B^{(k)}(t+1,m)^\top\Sigma(k)\boldsymbol B^{(k)}(t+1,m),\\
\boldsymbol B^{(k)}(t,m)&=\left(\mathds{1}-\beta(k)^\top\right)^{-1}\left(\mathds{1}-(\beta(k)^\top)^{m-t}\right)\boldsymbol 1\Delta.
\end{aligned}
\]
These are exactly the no-arbitrage yields under ${\mathbb P}^\ast$ if the parameters $\boldsymbol b(k)$, $\beta(k)$ and $\Sigma(k)$ and the Hull--White extension $\theta^{(k)}$ remain constant for all $t>k$.
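The backward recursions for $A^{(k)}$ and $\boldsymbol B^{(k)}$ translate directly into code. The following Python sketch (array conventions, the dict layout, and the zero Hull--White extension used in the check are our own illustrative assumptions) computes the coefficients and evaluates the resulting yields:

```python
import numpy as np

def bond_coefficients(b, beta, Sigma, theta, k, m, Delta=1.0):
    """Backward recursions for A^(k)(t, m) and B^(k)(t, m), t = m-1, ..., k.

    theta is indexed so that theta[i] = theta^(k)(i) for i >= 1
    (theta[0] is unused); this indexing is an implementation choice.
    Returns dicts A, B keyed by t.
    """
    n = len(b)
    one = np.ones(n)
    e1 = np.zeros(n); e1[0] = 1.0
    A = {m - 1: 0.0}
    B = {m - 1: Delta * one}
    for t in range(m - 2, k - 1, -1):
        Bn = B[t + 1]
        A[t] = (A[t + 1] - Bn @ (b + theta[t + 1 - k] * e1)
                + 0.5 * Bn @ Sigma @ Bn)
        B[t] = Delta * one + beta.T @ Bn  # equivalent to the closed form
    return A, B

def yield_curve_value(A, B, X, t, m, Delta=1.0):
    """Y^(k)(t, m) = -A^(k)(t, m)/((m - t) Delta) + B^(k)(t, m)^T X/((m - t) Delta)."""
    return (-A[t] + B[t] @ X) / ((m - t) * Delta)
```

The one-step recursion $\boldsymbol B^{(k)}(t,m)=\Delta\boldsymbol 1+\beta(k)^\top\boldsymbol B^{(k)}(t+1,m)$ used above agrees with the closed form $(\mathds{1}-\beta(k)^\top)^{-1}(\mathds{1}-(\beta(k)^\top)^{m-t})\boldsymbol 1\Delta$, which can be verified by induction from $\boldsymbol B^{(k)}(m-1,m)=\Delta\boldsymbol 1$.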
\subsubsection[Parameter Update and Re-Calibration at k+1]{Parameter Update and Re-Calibration at $k+1$}\label{subsubsec crc step 3}
Assume that at time $(k+1)\Delta$, the parameters $(\boldsymbol b(k),\beta(k),\Sigma(k))$ are updated to $(\boldsymbol b(k+1),\beta(k+1),\Sigma(k+1))$. We may think of this parameter update as a consequence of model selection after we observe a new yield curve at time $(k+1)\Delta$. This is discussed in more detail in Section~\ref{sec: parameters} below.
The no-arbitrage yield curve at time $(k+1)\Delta$ from the model with parameters $(\boldsymbol b(k),\beta(k),\Sigma(k))$ and Hull--White extension $\theta^{(k)}$ is given by:
\[
\boldsymbol{y}^{(k)}(k+1)=\left(Y^{(k)}(k+1,k+2),\ldots,Y^{(k)}(k+1,k+1+M)\right)^\top\in{\mathbb R}^M.
\]
The parameter update $\left(\boldsymbol b(k), \beta(k),\Sigma(k)\right) \mapsto \left(\boldsymbol b(k+1), \beta(k+1),\Sigma(k+1)\right)$ requires re-calibration of the Hull--White extension; otherwise, arbitrage is introduced into the model. This re-calibration provides the ${\cal F}(k+1)$-measurable Hull--White extension $\theta^{(k+1)}\in{\mathbb R}^{\mathbb N}$ at time $(k+1)\Delta$. The values $\boldsymbol \theta^{(k+1)}=(\theta^{(k+1)}(i))_{i=1,\ldots,M-1}\in{\mathbb R}^{M-1}$ are given by (see Theorem \ref{theo: calibration}):
\begin{equation}\label{eq: re-calibration step 2}
\boldsymbol\theta^{(k+1)}={\cal C}\left(\beta(k+1)\right)^{-1} \boldsymbol{z}\left(\boldsymbol b(k+1),\beta(k+1),\Sigma(k+1),\boldsymbol{\cal X}(k+1),\boldsymbol{y}^{(k)}(k+1)\right),
\end{equation}
and the resulting yield curve $\boldsymbol{y}^{(k+1)}(k+1)$ under the updated parameters is identically equal to $\boldsymbol{y}^{(k)}(k+1)$. Note that this CRC makes the upper index $(k)$ in the yield curve superfluous, because the Hull--White extension is re-calibrated to the new parameters so that the resulting yield curve remains unchanged. Therefore, we write ${\cal Y}(k,\cdot)$ in the sequel for the CRC yield curve with factor $\boldsymbol{\cal X}(k)$, parameters $\boldsymbol b(k),\beta(k),\Sigma(k)$ and Hull--White extension $\theta^{(k)}$.
(End of algorithm.)
\begin{Remark*}
For the implementation of the above algorithm, we need to consider the following issue. Assume we start the algorithm at time $0$ with initial yield curve $\widehat{\boldsymbol{y}}(0)\in{\mathbb R}^M$. At times $k\Delta$, for $k>0$, calibration of $\boldsymbol\theta^{(k)}\in{\mathbb R}^{M-1}$ requires yields with times to maturity beyond $M\Delta$. Either yields for these times to maturity are observable, or the length of $\boldsymbol{\theta}^{(k)}$ is reduced in every step of the CRC algorithm, or an appropriate extrapolation method beyond the latest available maturity date is applied in every~step.
\end{Remark*}
\subsection{Heath--Jarrow--Morton Representation}
We analyze the yield curve dynamics $({\cal Y}(k,\cdot))_{k\in{\mathbb N}_0}$
obtained by the CRC algorithm of Section~\ref{Re-calibration algorithm}.
Due to re-calibration \eqref{eq: re-calibration step 2},
the yield curve fulfills the following identity for $m>k+1$:
\begin{equation}\label{eq: crc yields}
\begin{aligned}
{\cal Y}(k+1,m)&=-\frac{A^{(k)}(k+1,m)}{(m-(k+1))\Delta}+\frac{\boldsymbol B^{(k)}(k+1,m)^\top\boldsymbol{\cal X}(k+1)}{(m-(k+1))\Delta}\\&=-\frac{A^{(k+1)}(k+1,m)}{(m-(k+1))\Delta}+\frac{\boldsymbol B^{(k+1)}(k+1,m)^\top\boldsymbol{\cal X}(k+1)}{(m-(k+1))\Delta},
\end{aligned}
\end{equation}
where the first line is based on the ${\cal F}(k)$-measurable parameters
$(\boldsymbol b(k),\beta(k),\Sigma(k))$ and Hull--White extension $\theta^{(k)}$, and the second line is based on the ${\cal F}(k+1)$-measurable parameters and Hull--White extension $(\boldsymbol b(k+1),\beta(k+1),\Sigma(k+1),\theta^{(k+1)})$ after CRC Step (iii). Note that in the re-calibration only $(\boldsymbol b(k+1),\beta(k+1),\Sigma(k+1))$ can be chosen exogenously, and the Hull--White extension $\theta^{(k+1)}$ is used for consistency property \eqref{eq: re-calibration step 2}. Our aim is to express ${\cal Y}(k+1,m)$ as a function of $\boldsymbol{\cal X}(k)$ and ${\cal Y}(k,m)$. Using Equations \eqref{re-calibration step 1} and \eqref{eq: crc yields}, we have for $m>k+1$:
\begin{equation} \label{start HJM}
\begin{aligned}
{\cal Y}(k+1,m)&\left(m-(k+1)\right) \Delta=-A^{(k)}(k+1,m)\\&+\boldsymbol B^{(k)}(k+1,m)^\top \left(\boldsymbol b(k)+\theta^{(k)}(1)\boldsymbol e_1+\beta(k)\boldsymbol{\cal X}(k)+\Sigma(k)^{\frac{1}{2}}\boldsymbol\varepsilon^\ast(k+1)
\right).
\end{aligned}
\end{equation}
This provides the following theorem; see Appendix \ref{sec: proofs} for the proof.
\begin{Theorem} \label{theorem HJM view}
Under equivalent martingale measure ${\mathbb P}^\ast$, the yield curve dynamics $({\cal Y}(k,\cdot))_{k\in{\mathbb N}_0}$ obtained by the CRC algorithm of Section~\ref{Re-calibration algorithm} has the following HJM representation for $m>k+1$:
\[
\begin{aligned}
{\cal Y}(k+1,m)(m-(k+1))\Delta &={\cal Y}(k,m)(m-k)\Delta- {\cal Y}(k,k+1)\Delta\\&\quad+\frac{1}{2}\boldsymbol B^{(k)}(k+1,m)^\top\Sigma(k)\boldsymbol B^{(k)}(k+1,m)\\&\quad+\boldsymbol B^{(k)}(k+1,m)^\top\Sigma(k)^{\frac{1}{2}}\boldsymbol\varepsilon^\ast(k+1),
\end{aligned}
\]
with $\boldsymbol B^{(k)}(k+1,m)=\left(\mathds{1}-\beta(k)^\top\right)^{-1}\left(\mathds{1}-(\beta(k)^\top)^{m-k-1}\right)\boldsymbol 1\Delta$.
\end{Theorem}
\begin{KeyObservation}
Observe that in Theorem \ref{theorem HJM view}, a remarkable simplification happens: simulating the CRC algorithm \eqref{re-calibration step 1} and \eqref{eq: re-calibration step 2} to future time points $k\Delta>0$ does not require the calculation of the Hull--White extensions $(\theta^{(k)})_{k\in {\mathbb N}_0}$ according to \eqref{eq: re-calibration step 2}; knowledge of the parameter process $\left(\boldsymbol b(k),\beta(k), \Sigma(k)\right)_{k\in {\mathbb N}_0}$ is sufficient. The Hull--White extensions are fully encoded in the yield curve process $({\cal Y}(k,\cdot))_{k\in\mathbb N_0}$, and we avoid the inversion of the (potentially) high-dimensional matrices $({\cal C}(\beta(k)))_{k\in\mathbb N_0}$.
\end{KeyObservation}
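A minimal sketch of one CRC simulation step based on Theorem \ref{theorem HJM view}, assuming the yield curve is stored as a dict keyed by the maturity index $m$ and that a square root \texttt{Sigma\_half} of $\Sigma(k)$ is supplied (all names and conventions are ours, not the text's):

```python
import numpy as np

def crc_yield_step(Y, k, beta, Sigma_half, eps, Delta=1.0):
    """One CRC step: maps {m: Y(k, m)} to {m: Y(k+1, m)} via the HJM
    representation -- no Hull-White extension is needed."""
    n = beta.shape[0]
    one = np.ones(n)
    Sigma = Sigma_half @ Sigma_half.T
    Y_next = {}
    for m in Y:
        if m <= k + 1:
            continue
        # B^(k)(k+1, m) = (1 - beta^T)^(-1) (1 - (beta^T)^(m-k-1)) 1 Delta
        Bk = Delta * np.linalg.solve(
            np.eye(n) - beta.T,
            (np.eye(n) - np.linalg.matrix_power(beta.T, m - k - 1)) @ one)
        lhs = (Y[m] * (m - k) * Delta - Y[k + 1] * Delta
               + 0.5 * Bk @ Sigma @ Bk + Bk @ Sigma_half @ eps)
        Y_next[m] = lhs / ((m - k - 1) * Delta)
    return Y_next
```

As a quick plausibility check, with vanishing volatility and innovation a flat curve is mapped to a flat curve.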
\begin{FurtherRemarks*}
\begin{itemize}[leftmargin=*,labelsep=4mm]
\item CRC of the multifactor Vasi\v cek spot rate model can be defined directly in the HJM framework assuming a stochastic dynamics for the parameters. However, solely from the HJM representation, one cannot see that the yield curve dynamics is obtained, in our case, by combining well-understood Hull--White extended multifactor Vasi\v cek spot rate models using the CRC algorithm of Section~\ref{sec: crc}; that is, the Hull--White extended multifactor Vasi\v cek model gives an explicit functional form to the HJM representation.
\item The CRC algorithm of Section~\ref{sec: crc} does not rely directly on $(\boldsymbol\varepsilon^\ast(t))_{t\in{\mathbb N}}$ having independent and Gaussian components. The CRC algorithm is feasible as long as explicit formulas for ZCB prices in the Hull--White extended model are available. Therefore, one may replace the Gaussian innovations by other distributional assumptions, such as normal variance mixtures, provided that conditional exponential moments can be calculated under the new innovation assumption. Under non-Gaussian innovations, however, the HJM representation will in general depend on the Hull--White extension $\theta^{(k)}\in{\mathbb R}^{\mathbb N}$.
\item Interpretation of the parameter processes will be given in Section~\ref{sec: parameters}, below.
\end{itemize}
\end{FurtherRemarks*}
\section{Real World Dynamics and Market Price of Risk} \label{sec: real world dynamics}
All previous derivations were done under an equivalent martingale measure ${\mathbb P}^\ast$ for the bank account numeraire. In order to statistically estimate parameters from market data, we need to specify a Girsanov transformation to the real-world measure, which is denoted by $\mathbb P$. We present a specific change of measure, which provides tractable spot rate dynamics under $\mathbb P$. Assume that $(\boldsymbol\lambda(k))_{k\in\mathbb N_0}$ and $(\Lambda(k))_{k\in\mathbb N_0}$ are $\mathbb R^n$- and $\mathbb R^{n\times n}$-valued $\mathbb F$-adapted processes, respectively. Let $(\boldsymbol{\cal X}(k))_{k\in\mathbb N_0}$ be the factor process obtained by the CRC algorithm of Section~\ref{Re-calibration algorithm}. Then, we assume that the $n$-dimensional ${\mathbb F}$-adapted process $(\boldsymbol{\lambda}(k)+\Lambda(k)\boldsymbol{\cal X}(k))_{k\in \mathbb N_0}$ describes the market price of risk dynamics. We define the ${\mathbb P}^\ast$-density process $(\xi(k))_{k\in \mathbb N_0}$ by:
\[
\xi(k)=\exp\left\{-\frac{1}{2}\sum_{s=0}^{k-1}\left\|\boldsymbol\lambda(s)+\Lambda(s)\boldsymbol {\cal X}(s)\right\|_2^2-\sum_{s=0}^{k-1}\left(\boldsymbol\lambda(s)+\Lambda(s)\boldsymbol {\cal X}(s)\right)^\top \boldsymbol\varepsilon^{*}(s+1)\right\},\quad k \in \mathbb N_0.
\]
The real-world probability measure ${\mathbb P}$ is then defined by the Radon--Nikodym derivative:
\begin{equation}\label{transformation}
\left.\frac{d{\mathbb P}}{d{\mathbb P}^\ast}\right|_{\mathcal F(k)}= \xi(k),\quad k \in \mathbb N_0.
\end{equation}
An immediate consequence is that for $k \in \mathbb N_0$:
\begin{equation*}
\boldsymbol\varepsilon(k+1)=\boldsymbol\lambda(k)+
\Lambda(k)\boldsymbol{\cal X}(k)+\boldsymbol\varepsilon^\ast(k+1),
\end{equation*}
has a standard Gaussian distribution under ${\mathbb P}$, conditionally on ${\cal F}(k)$.
This implies that under the real-world measure ${\mathbb P}$, the factor process
$(\boldsymbol{\cal X}(k))_{k\in \mathbb N_0}$ is described by:
\begin{equation}\label{eq: real world dynamics}
\boldsymbol{\cal X}(k+1)=\boldsymbol a(k)+\alpha(k)\boldsymbol{\cal X}(k)+\Sigma(k)^{\frac{1}{2}}\boldsymbol\varepsilon(k+1),
\end{equation}
where we define:
\begin{equation}\label{eq: real world transform}
\boldsymbol a(k)=\boldsymbol b(k)+\theta^{(k)}(1)\boldsymbol e_1-\Sigma(k)^{\frac{1}{2}}\boldsymbol\lambda(k)
\quad \text{and }\quad \alpha(k)=\beta(k)-\Sigma(k)^{\frac{1}{2}}\Lambda(k).
\end{equation}
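The identification \eqref{eq: real world transform} can be verified numerically: substituting $\boldsymbol\varepsilon(k+1)=\boldsymbol\lambda(k)+\Lambda(k)\boldsymbol{\cal X}(k)+\boldsymbol\varepsilon^\ast(k+1)$ into \eqref{eq: real world dynamics} recovers the risk-neutral dynamics. A sketch with illustrative parameter values:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2
b = np.array([0.01, 0.02]); theta1 = 0.005           # b(k) and theta^(k)(1)
beta = np.array([[0.6, 0.1], [0.0, 0.7]])
Sigma_half = np.array([[0.2, 0.0], [0.05, 0.3]])     # a square root of Sigma(k)
lam = np.array([0.1, -0.2]); Lam = 0.1 * np.eye(n)   # market price of risk
X = np.array([0.03, -0.01])
eps_star = rng.standard_normal(n)
e1 = np.array([1.0, 0.0])

# risk-neutral dynamics under P*:
X_star = b + theta1 * e1 + beta @ X + Sigma_half @ eps_star

# real-world dynamics under P with eps = lam + Lam X + eps*:
a = b + theta1 * e1 - Sigma_half @ lam
alpha = beta - Sigma_half @ Lam
eps = lam + Lam @ X + eps_star
X_real = a + alpha @ X + Sigma_half @ eps

assert np.allclose(X_star, X_real)
```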
As in Assumption~\ref{assumption}, we require $\Lambda(k)$ to be such that the spectrum of $\alpha(k)$ is a subset of $(-1,1)^n$. Formula \eqref{eq: real world dynamics} describes the dynamics of the factor process $(\boldsymbol{\cal X}(k))_{k\in \mathbb N_0}$ obtained by the CRC algorithm of Section~\ref{Re-calibration algorithm} under real-world measure ${\mathbb P}$. The following corollary describes the yield curve dynamics obtained by the CRC algorithm under ${\mathbb P}$, in analogy to Theorem \ref{theorem HJM view}.
\begin{Corollary}\label{HJM under the real world measure}
Under real-world measure ${\mathbb P}$ satisfying \eqref{transformation}, the yield curve dynamics $({\cal Y}(k,\cdot))_{k\in{\mathbb N}_0}$ obtained by the CRC algorithm of Section~\ref{Re-calibration algorithm} has the following HJM representation for $m>k+1$:
\[
\begin{aligned}
{\cal Y}(k+1,m)
\left(m-(k+1)\right)\Delta &={\cal Y}(k,m)(m-k)\Delta- {\cal Y}(k,k+1)\Delta\\&\quad+\frac{1}{2}\boldsymbol B^{(k)}(k+1,m)^\top\Sigma(k)\boldsymbol B^{(k)}(k+1,m)\\&\quad-\boldsymbol B^{(k)}(k+1,m)^\top\Sigma(k)^{\frac{1}{2}}\boldsymbol\lambda(k)\\&\quad-\boldsymbol B^{(k)}(k+1,m)^\top\Sigma(k)^{\frac{1}{2}}\Lambda(k)\boldsymbol {\cal X}(k)\\&\quad+\boldsymbol B^{(k)}(k+1,m)^\top\Sigma(k)^{\frac{1}{2}}\boldsymbol\varepsilon(k+1),
\end{aligned}
\]
with $\boldsymbol B^{(k)}(k+1,m)=\left(\mathds{1}-\beta(k)^\top\right)^{-1}\left(\mathds{1}-\left(\beta(k)^\top\right)^{m-k-1}\right)\boldsymbol 1\Delta$.
\end{Corollary}
Compared to Theorem~\ref{theorem HJM view}, there are additional drift terms $-\boldsymbol B^{(k)}(k+1,m)^\top\Sigma(k)^{\frac{1}{2}}\boldsymbol\lambda(k)$ and $-\boldsymbol B^{(k)}(k+1,m)^\top\Sigma(k)^{\frac{1}{2}}\Lambda(k)\boldsymbol{\cal X}(k)$, which
are characterized by the market price of risk parameters $\boldsymbol{\lambda}(k)\in{\mathbb R}^n$ and $\Lambda(k)\in{\mathbb R}^{n\times n}$.
\section{Choice of Parameter Process}\label{sec: parameters}
The yield curve dynamics obtained by the CRC algorithm of Section~\ref{Re-calibration algorithm} require exogenous specification of the parameter process of the multifactor Vasi\v cek Models~\eqref{eq: spot rate} and \eqref{eq: ARn} and the market price of risk process, i.e., we need to model the process:
\begin{equation}\label{eq: parameter process}
\left(\boldsymbol b(t),\beta(t), \Sigma(t), \boldsymbol{\lambda}(t), \Lambda(t)\right)_{t\in {\mathbb N}_0}.
\end{equation}
By Equation \eqref{re-calibration step 1}, the one-step ahead development of the CRC factor process $\boldsymbol{\mathcal{X}}$ under ${\mathbb P}$ reads as:
\begin{equation}\label{eq: factor evolution}
\boldsymbol {\cal X}(t+1)=\boldsymbol b(t)+\theta^{(t)}(1)\boldsymbol e_1-\Sigma(t)^{\frac{1}{2}}\boldsymbol\lambda(t)+\left(\beta(t)-\Sigma(t)^{\frac{1}{2}}\Lambda(t)\right)\boldsymbol {\cal X}(t) + \Sigma(t)^{\frac{1}{2}}\boldsymbol\varepsilon(t+1),
\end{equation}
with ${\cal F}(t)$-measurable parameters $\boldsymbol b(t)$, $\beta(t)$ and $\Sigma(t)$ and Hull--White extension $\theta^{(t)}$. Thus, on the one hand, the factor process $(\boldsymbol {\cal X}(t))_{t\in {\mathbb N}_0}$ evolves according to \eqref{eq: factor evolution}, and on the other hand, the parameters $(\boldsymbol b(t),\beta(t), \Sigma(t), \boldsymbol{\lambda}(t), \Lambda(t))_{t\in {\mathbb N}_0}$ evolve according to the financial market conditions. Note that the process $(\theta^{(t)})_{t\in{\mathbb N}_0}$ of Hull--White extensions is fully determined through CRC by \eqref{eq: re-calibration step 2}. In order to distinguish the evolutions of $(\boldsymbol {\cal X}(t))_{t\in {\mathbb N}_0}$ and $(\boldsymbol b(t),\beta(t), \Sigma(t), \boldsymbol{\lambda}(t), \Lambda(t))_{t\in {\mathbb N}_0}$, we assume that process \eqref{eq: parameter process} changes at a slower pace than the factor process; therefore, the parameters can be assumed to be constant over a short time window.

This assumption motivates the following approach to specifying a model for process \eqref{eq: parameter process}. For each time point $t\Delta$, we fit multifactor Vasi\v cek Models~\eqref{eq: spot rate} and \eqref{eq: ARn} with fixed parameters $\left(\boldsymbol b, \beta, \Sigma, \boldsymbol\lambda,\Lambda\right)$ on observations from a time window $\{t-K+1,\ldots, t\}$ of length $K$. For estimation, we assume that we have yield curve observations $(\widehat{\boldsymbol y}(k))_{k=t-K+1,\ldots, t}=((\widehat{y}_1(k),\ldots,\widehat{y}_M(k)))_{{k=t-K+1,\ldots, t}}$ for times to maturity $ \tau_1\Delta<\ldots< \tau_M\Delta$. Since yield curves are not necessarily observed on a regular time-to-maturity grid, we introduce the indices $\tau_1,\ldots,\tau_M\in{\mathbb N}$ to refer to the available times to maturity. Varying the time of estimation $t\Delta$, we obtain time series for the parameters from historical data. Finally, we fit a stochastic model to these time series.
In the following, we discuss the interpretation of the parameters and present two different estimation procedures. The two procedures are combined to obtain a full specification of the model parameters.
\subsection{Interpretation of Parameters}
\subsubsection{Level and Speed of Mean Reversion}
By Equation \eqref{eq: AR conditional distribution}, we have under ${\mathbb P}^\ast$ for $m>t$:
\[
\begin{aligned}
\mathbb E^{\ast}\left[\boldsymbol X(m)\middle|\mathcal F(t)\right]&=\left(\mathds{1}-\beta\right)^{-1}\left(\mathds{1}-\beta^{m-t}\right)\boldsymbol b+\beta^{m-t}\boldsymbol X(t),\\
\mathbb E^{\ast}\left[r(m)\middle|\mathcal F(t)\right]&=\boldsymbol 1^\top\left(\mathds{1}-\beta\right)^{-1}\left(\mathds{1}-\beta^{m-t}\right)\boldsymbol b+\boldsymbol 1^\top\beta^{m-t}\boldsymbol X(t).
\end{aligned}
\]
Thus, $\beta$ determines the speed at which the factor process $(\boldsymbol X(t))_{t\in {\mathbb N}_0}$ and the spot rate process $(r(t))_{t\in {\mathbb N}_0}$ return to their long-term means:
\[
\lim_{m\to\infty}\mathbb E^{\ast}\left[\boldsymbol X(m)|\mathcal F(t)\right]=\left(\mathds{1}-\beta\right)^{-1}\boldsymbol b\quad\text{and}\quad\lim_{m\to\infty}\mathbb E^{\ast}\left[r(m)|\mathcal F(t)\right]=\boldsymbol 1^\top\left(\mathds{1}-\beta\right)^{-1}\boldsymbol b.
\]
A sensible choice of $(\beta(t))_{t\in{\mathbb N}_0}$ adapts the speed of mean reversion to the prevailing financial market conditions at each time point $t\Delta$.
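As a numerical illustration (the parameter values are ours), iterating the one-step conditional mean recursion $\boldsymbol x\mapsto\boldsymbol b+\beta\boldsymbol x$ of the time-homogeneous model reproduces the closed-form long-term mean $(\mathds{1}-\beta)^{-1}\boldsymbol b$:

```python
import numpy as np

beta = np.array([[0.7, 0.1], [0.0, 0.8]])
b = np.array([0.02, 0.01])

# iterate E*[X(m)|F(t)] = b + beta E*[X(m-1)|F(t)] from an arbitrary start
mean = np.array([0.05, -0.01])
for _ in range(500):
    mean = b + beta @ mean

long_term = np.linalg.solve(np.eye(2) - beta, b)   # (1 - beta)^(-1) b
spot_rate_mean = np.ones(2) @ long_term            # 1^T (1 - beta)^(-1) b
assert np.allclose(mean, long_term)
```

The iteration converges because the spectrum of $\beta$ lies in $(-1,1)^n$, so $\beta^{m-t}\to 0$ as $m\to\infty$.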
\subsubsection{Instantaneous Variance}
By Equation \eqref{eq: AR conditional distribution}, we have under ${\mathbb P}^{\ast}$ for $t>0$:
\[
\mathrm{Cov}^\ast\left[\boldsymbol X(t)\middle|\mathcal F(t-1)\right]=\Sigma,\quad\text{and}\quad\mathrm{Var}^\ast\left[r(t)\middle|\mathcal F(t-1)\right]=\boldsymbol 1^\top\Sigma\boldsymbol 1.
\]
Thus, matrix $\Sigma$ plays the role of the instantaneous covariance matrix of $\boldsymbol X$, and it describes the instantaneous spot rate volatility.
\subsection{State Space Modeling Approach}\label{sec: kalman MLE}
On each time window, we want to use yield curve observations to estimate the parameters of time-homogeneous Vasi\v cek Models~\eqref{eq: spot rate} and \eqref{eq: ARn}. In general, this model is not able to reproduce the yield curve observations exactly. One reason might be that the data are given in the form of parametrized yield curves, and the parametrization might not be compatible with the Vasi\v cek model. For example, this is the case for the widely-used Svensson family \cite{Svensson}. Another reason might be that yield curve observations do not exactly represent risk-free zero-coupon bonds.
The discrepancy between the Vasi\v cek model and the yield curve observations can be accounted for by adding a noise term to the Vasi\v cek yield curves. This defines a state space model with the factor process as the hidden state variable. In this state space model, the parameters of the factor dynamics can be estimated using Kalman filter techniques in conjunction with maximum likelihood estimation (\cite{Wuethrich} Section 3.6.3). This is explained in detail in Sections~\ref{subsubsec: kalman MLE transition}--\ref{subsubsec: kalman MLE likelihood} below.
\subsubsection{Transition System}\label{subsubsec: kalman MLE transition} The evolution of the unobservable process $\boldsymbol X$ under ${\mathbb P}$ is assumed to be given on time window $\{t-K+1,\ldots,t\}$ by:
\[
\boldsymbol X(k)=\boldsymbol a+\alpha \boldsymbol X(k-1)+\Sigma^{\frac{1}{2}}\boldsymbol \varepsilon(k),\quad k\in\{t-K+1,\ldots,t\},
\]
with initial factor $\boldsymbol X(t-K)\in{\mathbb R}^n$ and parameters $\boldsymbol a=\boldsymbol b-\Sigma^{\frac{1}{2}}\boldsymbol\lambda$ and $\alpha=\beta-\Sigma^{\frac{1}{2}}\Lambda$. The initial factor $\boldsymbol X(t-K)$ is updated according to the output of the Kalman filter for the previous time window $\{t-K,\ldots,t-1\}$. The initial factor is set to zero for the first time window available.
\begin{Remark*}
Parameters $(\boldsymbol b,\beta,\Sigma,\boldsymbol\lambda,\Lambda)$ are assumed to be constant over the time window $\{t-K+1,\ldots,t\}$. Thus, we drop the index $k$ compared to Equations \eqref{eq: real world dynamics} and \eqref{eq: real world transform}. For estimation, we assume that the factor process evolves according to the time-homogeneous multifactor Vasi\v cek Models~\eqref{eq: spot rate} and \eqref{eq: ARn} in that time window. The Hull--White extension is calibrated to the yield curve at time $t\Delta$ given the estimated parameter values of the time-homogeneous model.
\end{Remark*}
\subsubsection{Measurement System}
We assume that the observations in the state space model are given by:
\begin{equation}\label{eq: noisy measurement}
\widehat{\boldsymbol Y}(k)=\boldsymbol d+D\boldsymbol X(k)+S^{\frac{1}{2}}\boldsymbol\eta(k),\quad k\in\{t-K,\ldots,t\},
\end{equation}
where:
\[
\begin{aligned}
\widehat{\boldsymbol Y}(k)&=\left(\widehat Y(k,k+\tau_1),\ldots,\widehat Y(k,k+\tau_M)\right)^\top\in{\mathbb R}^M,\\
\boldsymbol d&=\left(-( \tau_1\Delta)^{-1}A(k,k+\tau_1),\ldots,-( \tau_M\Delta)^{-1}A(k,k+\tau_M)\right)^\top\in{\mathbb R}^M,\\
D_{ij}&=( \tau_i\Delta)^{-1}B_j(k,k+\tau_i),\quad1\leq i\leq M,\quad1\leq j\leq n,
\end{aligned}
\]
with $A(\cdot,\cdot)$ and $\boldsymbol B(\cdot,\cdot)=(B_1(\cdot,\cdot),\dots,B_n(\cdot,\cdot))^\top$ given by Theorem \ref{theo: ARn prices} and $M$-dimensional $\mathcal{F}(k)$-measurable noise term $S^{\frac{1}{2}}\boldsymbol\eta(k)$ for non-singular $S^{\frac{1}{2}}\in\mathbb R^{M\times M}$. We assume that $\boldsymbol\eta(k)$ is independent of $\mathcal{F}(k-1)$ and $\boldsymbol\varepsilon(k)$ under ${\mathbb P}$ and that $\boldsymbol \eta(k)\stackrel{\mathbb P}{\sim}\mathcal N(\boldsymbol 0, \mathds{1})$. The error term $S^{\frac12}\boldsymbol{\eta}$ describes the discrepancy between the yield curve observations and the model. For $S=0$, we would obtain a yield curve in \eqref{eq: noisy measurement} that corresponds exactly to the multifactor Vasi\v cek one.
Given the parameter and market price of risk values $(\boldsymbol b, \beta, \Sigma, \boldsymbol\lambda, \Lambda)$, we estimate the factor using the following iterative procedure. For each fixed value of $k\in\{t-K,\ldots,t\}$ and fixed time $t$, we consider the $\sigma$-field \linebreak$\mathcal F^{\widehat{\boldsymbol Y}}(k)=\sigma\left(\widehat{\boldsymbol Y}(s)\:\middle|\:t-K\leq s\leq k\right)$ and describe the estimation procedure in this state space~model.
\subsubsection{Anchoring} Fix initial factor $\boldsymbol X(t-K)=\boldsymbol x(t-K|t-K-1)$, and initialize:
\[
\begin{aligned}
\boldsymbol x(t-K+1|t-K)&=\mathbb E\left[\boldsymbol X(t-K+1)\middle|\mathcal F^{\widehat{\boldsymbol Y}}(t-K)\right]=\boldsymbol a+\alpha\boldsymbol x(t-K|t-K-1),\\
\Sigma(t-K+1|t-K)&=\mathrm{Cov}\left(\boldsymbol X(t-K+1)\middle|\mathcal F^{\widehat{\boldsymbol Y}}(t-K)\right)=\Sigma.
\end{aligned}
\]
\subsubsection{Forecasting the Measurement System} At time $k\in\{t-K+1,\ldots,t\}$, we have:
\[
\begin{aligned}
\boldsymbol y(k|k-1)&=\mathbb E\left[\widehat{\boldsymbol Y}(k)\middle|\mathcal F^{\widehat{\boldsymbol Y}}(k-1)\right]=\boldsymbol d+D\boldsymbol x(k|k-1),\\
F(k)&=\mathrm{Cov}\left(\widehat{\boldsymbol Y}(k)\middle|\mathcal F^{\widehat{\boldsymbol Y}}(k-1)\right)=D\Sigma(k|k-1)D^\top+S,\\
\boldsymbol \zeta(k)&=\widehat{\boldsymbol y}(k)-\boldsymbol y(k|k-1).
\end{aligned}
\]
\subsubsection{Bayesian Inference in the Transition System} The prediction error $\boldsymbol \zeta(k)$ is used to update the unobservable factors.
\[
\begin{aligned}
\boldsymbol x(k|k)&=\mathbb E\left[\boldsymbol X(k)\middle|\mathcal F^{\widehat{\boldsymbol Y}}(k)\right]=\boldsymbol x(k|k-1)+K(k)\boldsymbol\zeta(k),\\
\Sigma(k|k)&=\mathrm{Cov}\left(\boldsymbol X(k)\middle|\mathcal F^{\widehat{\boldsymbol Y}}(k)\right)=\left(\mathds{1}-K(k)D\right)\Sigma(k|k-1),
\end{aligned}
\]
where $K(k)$ denotes the Kalman gain matrix given by:
\[
K(k)=\mathrm{Cov}\left(\boldsymbol X(k)\middle|\mathcal F^{\widehat{\boldsymbol Y}}(k-1)\right)D^\top\mathrm{Cov}\left(\widehat{\boldsymbol Y}(k)\middle|\mathcal F^{\widehat{\boldsymbol Y}}(k-1)\right)^{-1}=\Sigma(k|k-1)D^\top F(k)^{-1}.
\]
\subsubsection{Forecasting the Transition System} For the unobservable factor process, we have the following forecast:
\[
\begin{aligned}
\boldsymbol x(k+1|k)&=\mathbb E\left[\boldsymbol X(k+1)\middle|\mathcal F^{\widehat{\boldsymbol Y}}(k)\right]=\boldsymbol a + \alpha\boldsymbol x(k|k),\\
\Sigma(k+1|k)&=\mathrm{Cov}\left(\boldsymbol X(k+1)\middle|\mathcal F^{\widehat{\boldsymbol Y}}(k)\right)=\alpha\Sigma(k|k)\alpha^\top+\Sigma.
\end{aligned}
\]
\subsubsection{Likelihood Function}\label{subsubsec: kalman MLE likelihood}
The Kalman filter procedure above allows one to infer factors $\boldsymbol X$ given the parameter and market price of risk values. Of course, in this section, we are interested in estimating these values in the first~place. For this purpose, the procedure above can be used in conjunction with maximum likelihood estimation. For the underlying parameters $\boldsymbol\Theta=\left(\boldsymbol b,\beta,\Sigma,\boldsymbol a,\alpha\right)$, we have the following likelihood function given the observations $(\widehat{\boldsymbol y}(k))_{k=t-K+1,\ldots, t}$:
\begin{equation} \label{eq: likelihood}
\mathcal L_t(\boldsymbol\Theta)=\prod_{k=t-K+1}^{t}\frac{\exp\left(-\frac{1}{2}\boldsymbol\zeta(k)^\top F(k)^{-1}\boldsymbol\zeta(k)\right)}{\left(2\pi\right)^{\frac{M}{2}}\det F(k)^{\frac{1}{2}}}.
\end{equation}
The maximum likelihood estimator (MLE) $\widehat{\boldsymbol\Theta}^{\mathrm{MLE}}=(\widehat{\boldsymbol b}^{\mathrm{MLE}},\widehat{\beta}^{\mathrm{MLE}},\widehat{\Sigma}^{\mathrm{MLE}},\widehat{\boldsymbol a}^{\mathrm{MLE}},\widehat{\alpha}^{\mathrm{MLE}})$ is found by maximizing the likelihood function $\mathcal L_t(\boldsymbol\Theta)$ over $\boldsymbol\Theta$, given the data. As in the EM (expectation maximization) algorithm, maximization of the likelihood function is alternated with Kalman filtering until convergence of the estimated parameters $\widehat{\boldsymbol\Theta}^{\mathrm{MLE}}$ is achieved.
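The filter recursions of Sections~\ref{subsubsec: kalman MLE transition}--\ref{subsubsec: kalman MLE likelihood} combine into a short routine returning the log-likelihood $\log\mathcal L_t(\boldsymbol\Theta)$. The following Python sketch uses our own array conventions and is not the implementation from the text:

```python
import numpy as np

def kalman_loglik(y_obs, a, alpha, Sigma, d, D, S, x0):
    """Kalman filter pass over one estimation window.

    y_obs: iterable of observed yield vectors (length M each)
    x0:    anchoring value x(t-K | t-K-1)
    Returns the Gaussian log-likelihood of the observations."""
    n, M = len(a), len(d)
    x_pred = a + alpha @ x0          # anchoring step
    P_pred = Sigma.copy()
    loglik = 0.0
    for y in y_obs:
        # forecast the measurement system
        zeta = y - (d + D @ x_pred)
        F = D @ P_pred @ D.T + S
        loglik -= 0.5 * (zeta @ np.linalg.solve(F, zeta)
                         + np.linalg.slogdet(F)[1] + M * np.log(2 * np.pi))
        # Bayesian update with the Kalman gain
        K = P_pred @ D.T @ np.linalg.inv(F)
        x_filt = x_pred + K @ zeta
        P_filt = (np.eye(n) - K @ D) @ P_pred
        # forecast the transition system
        x_pred = a + alpha @ x_filt
        P_pred = alpha @ P_filt @ alpha.T + Sigma
    return loglik
```

In an outer loop, a numerical optimizer maximizes this function over $\boldsymbol\Theta$, alternating with the filter pass as described above.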
\subsection{Estimation Motivated by Continuous Time Modeling}
\label{calibration real world 2}
\subsubsection{Rescaling the Time Grid}
Assume the factor process $(\boldsymbol X(t))_{t\in{\mathbb N}_0}$ is given under ${\mathbb P}$ by $\boldsymbol X(0)\in {\mathbb R}^n$ and for $t>0$:
\[
\boldsymbol X(t)=\boldsymbol a+\alpha\boldsymbol X(t-1)+\Sigma^{\frac{1}{2}}\boldsymbol\varepsilon(t),
\]
where $\boldsymbol a=\boldsymbol b-\Sigma^{\frac{1}{2}}\boldsymbol\lambda$ and $\alpha=\beta-\Sigma^{\frac{1}{2}}\Lambda$. Furthermore, assume that $\alpha$ is a diagonalizable matrix with $\alpha=TDT^{-1}$ for $T\in{\mathbb R}^{n\times n}$ and diagonal matrix $D\in(-1,1)^{n\times n}$. Then, the transformed process $\boldsymbol Z=(T^{-1}\boldsymbol X(t))_{t\in{\mathbb N}_0}$ evolves according to:
\[
\boldsymbol Z(t)=\boldsymbol{c}+D\boldsymbol{Z}(t-1)+\Psi^{\frac{1}{2}}\boldsymbol\varepsilon(t),\quad t>0,
\]
where $\boldsymbol c=T^{-1}\boldsymbol a$ and $\Psi=T^{-1}\Sigma(T^{-1})^\top$. For $d \in \mathbb N_+$, the $d$-step ahead conditional distribution of $\boldsymbol Z$ under ${\mathbb P}$ is given by:
\[
\boldsymbol Z(t+d)|\mathcal F(t)\stackrel{\mathbb P}{\sim}\mathcal N\left(\boldsymbol\mu+\gamma\boldsymbol{Z}(t),\Gamma\right),\quad t\geq0,
\]
where $\boldsymbol\mu=\left(\mathds{1}-D\right)^{-1}\left(\mathds{1}-D^d\right)\boldsymbol c$, $\gamma=D^d$ and $\Gamma=\sum_{s=0}^{d-1}D^s\Psi D^s$. Suppose we have estimated $\boldsymbol\mu\in{\mathbb R}^n$, the diagonal matrix $\gamma\in(-1,1)^n$ and $\Gamma\in{\mathbb R}^{n\times n}$ on the time grid with size $d\Delta$, for instance, using MLE, as explained in Section~\ref{sec: kalman MLE}. We are interested in recovering the parameters $\boldsymbol c$, $D$ and $\Psi$ of the dynamics on the refined time grid with size $\Delta$ from $\boldsymbol\mu$, $\gamma$ and $\Gamma$.
The diagonal matrix $D$ and vector $\boldsymbol c$ are reconstructed from the diagonal matrix $\gamma$ as follows:
\[
\begin{aligned}
D&=\gamma^{\frac{1}{d}}=\mathds{1}+\frac{1}{d}\log(\gamma)+o\left(\frac{1}{d}\right), \quad \text{as $d\to\infty$},\\
\boldsymbol c&=\left(\mathds 1-\gamma\right)^{-1}\left(\mathds 1-\gamma^{\frac{1}{d}}\right)\boldsymbol{\mu}=\frac{1}{d}\left(\mathds 1-\gamma\right)^{-1}\log\left(\gamma^{-1}\right)\boldsymbol{\mu}+o\left(\frac{1}{d}\right), \quad \text{as $d\to\infty$},
\end{aligned}
\]
where logarithmic and power functions applied to diagonal matrices are defined on their diagonal elements. Note that for $i,j=1,\ldots,n$, we have:
\[
\Gamma_{ij}=\sum_{s=0}^{d-1}\gamma_{ii}^\frac{s}{d}\Psi_{ij}\gamma_{jj}^\frac{s}{d}=\Psi_{ij}\sum_{s=0}^{d-1}\left(\gamma_{ii}^{\frac{1}{d}}\gamma_{jj}^{\frac{1}{d}}\right)^s=\Psi_{ij}\frac{1-\gamma_{ii}\gamma_{jj}}{1-\left(\gamma_{ii}\gamma_{jj}\right)^{\frac{1}{d}}}.
\]
Therefore, we recover $\Psi$ from $\gamma$ and $\Gamma$ as follows:
\[
\Psi=\frac{1}{d}\upsilon+o\left(\frac{1}{d}\right),\quad\text{as $d\to\infty$},
\]
where $\upsilon=(-\Gamma_{ij}\log(\gamma_{ii}\gamma_{jj})(1-\gamma_{ii}\gamma_{jj})^{-1})_{i,j=1,\ldots,n}\in{\mathbb R}^{n\times n}$. Consider for $t>0$ the increments \linebreak$\mathcal D_t\boldsymbol Z=\boldsymbol Z(t)-\boldsymbol Z(t-1)$. From the formulas for $\boldsymbol c$, $D$ and $\Psi$, we observe that the $\mathcal F_{t-1}$-conditional mean of $\mathcal D_t \boldsymbol Z$:
\[
\boldsymbol c+\left(D-\mathds 1\right)\boldsymbol Z(t-1)=-\frac{1}{d}\left(\mathds1-\gamma\right)^{-1}\log(\gamma)\boldsymbol\mu+\frac{1}{d}\log(\gamma)\boldsymbol Z(t-1)+o\left(\frac{1}{d}\right),
\]
and the $\mathcal F_{t-1}$-conditional volatility of $\mathcal D_t \boldsymbol Z$:
\[
\Psi^{\frac{1}{2}}=\sqrt{\frac{1}{d}}\upsilon^{\frac{1}{2}}+o\left(\sqrt{\frac{1}{d}}\right),
\]
live on different scales as $d \to \infty$; in fact, volatility dominates for large $d$. Under ${\mathbb P}$ for $t>0$, we have:
\[
\begin{aligned}
&\mathbb E\left[\mathcal D_t\boldsymbol Z\left(\mathcal D_t\boldsymbol Z\right)^\top\middle|\mathcal F_{t-1}\right]=\mathrm{Cov}\left[\mathcal D_t\boldsymbol Z,\mathcal D_t\boldsymbol Z\middle|\mathcal F_{t-1}\right]+\mathbb E\left[\mathcal D_t\boldsymbol Z\middle|\mathcal F_{t-1}\right]\mathbb E\left[\mathcal D_t\boldsymbol Z\middle|\mathcal F_{t-1}\right]^\top\\&=\mathrm{Cov}\left[\boldsymbol Z(t),\boldsymbol Z(t)\middle|\mathcal F_{t-1}\right]+\left(\mathbb E\left[\boldsymbol Z(t)\middle|\mathcal F_{t-1}\right]-\boldsymbol Z(t-1)\right)\left(\mathbb E\left[\boldsymbol Z(t)\middle|\mathcal F_{t-1}\right]-\boldsymbol Z(t-1)\right)^\top\\&=\Psi+\left(\boldsymbol c+\left(D-\mathds 1\right)\boldsymbol Z(t-1)\right)\left(\boldsymbol c+\left(D-\mathds 1\right)\boldsymbol Z(t-1)\right)^\top.
\end{aligned}
\]
Therefore, setting $\mathcal D_t\boldsymbol X=\boldsymbol X(t)-\boldsymbol X(t-1)$, we obtain as $d\to\infty$:
\begin{equation}\label{eq: small delta}
\begin{aligned}
&\mathbb E\left[\mathcal D_t\boldsymbol X\left(\mathcal D_t\boldsymbol X\right)^\top\middle|\mathcal F_{t-1}\right]=T\mathbb E\left[\mathcal D_t\boldsymbol Z\left(\mathcal D_t\boldsymbol Z\right)^\top\middle|\mathcal F_{t-1}\right]T^\top\\&\quad=T\Psi T^\top+T\left(\boldsymbol c+\left(D-\mathds 1\right)\boldsymbol Z(t-1)\right)\left(\boldsymbol c+\left(D-\mathds 1\right)\boldsymbol Z(t-1)\right)^\top T^\top\\&\quad=\frac{1}{d}T\upsilon T^\top+o\left(\frac{1}{d}\right)=T\Psi T^\top+o\left(\frac{1}{d}\right)=\Sigma+o\left(\frac{1}{d}\right).
\end{aligned}
\end{equation}
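The recovery of $(\boldsymbol c, D, \Psi)$ from the coarse-grid quantities $(\boldsymbol\mu,\gamma,\Gamma)$ can be checked numerically with the exact (pre-asymptotic) formulas derived above; the parameter values below are illustrative:

```python
import numpy as np

d = 12
D = np.diag([0.95, 0.90])                      # fine-grid mean reversion
c = np.array([0.01, 0.02])
Psi = np.array([[0.040, 0.010], [0.010, 0.090]])

# coarse-grid quantities implied by the d-step conditional distribution
gamma = np.linalg.matrix_power(D, d)
mu = np.linalg.solve(np.eye(2) - D, (np.eye(2) - gamma) @ c)
Gamma = sum(np.linalg.matrix_power(D, s) @ Psi @ np.linalg.matrix_power(D, s)
            for s in range(d))

# exact recovery on the refined grid
g = np.diag(gamma)
D_rec = np.diag(g ** (1.0 / d))                          # D = gamma^(1/d)
c_rec = np.linalg.solve(np.eye(2) - gamma, (np.eye(2) - D_rec) @ mu)
gg = np.outer(g, g)                                      # gamma_ii * gamma_jj
Psi_rec = Gamma * (1.0 - gg ** (1.0 / d)) / (1.0 - gg)   # invert the Gamma formula

assert np.allclose(D_rec, D)
assert np.allclose(c_rec, c)
assert np.allclose(Psi_rec, Psi)
```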
\subsubsection{Longitudinal Realized Covariations of Yields}
We consider the yield curve increments within the discrete-time multifactor Vasi\v{c}ek Models~\eqref{eq: spot rate} and \eqref{eq: ARn}. The increments of the yield process $(Y(t,t+\tau))_{t\in\mathbb N_0}$ for fixed time to maturity $\tau\Delta>0$ are given by:
\[
\begin{aligned}
\mathcal D_{t,\tau}Y&=Y\left(t,t+\tau\right)-Y\left(t-1,t-1+\tau\right)\\&=\frac{1}{\tau\Delta}\boldsymbol B(t,t+\tau)^\top\left(\boldsymbol X(t)-\boldsymbol X(t-1)\right)=\frac{1}{\tau\Delta}\boldsymbol B(t,t+\tau)^\top\mathcal D_t\boldsymbol X,
\end{aligned}
\]
where $\mathcal D_t\boldsymbol X|\mathcal F(t-1)\stackrel{\mathbb P}{\sim}\mathcal N\left(\boldsymbol a+\left(\alpha-\mathds{1}\right)\boldsymbol X(t-1),\Sigma\right)$. For times to maturity $\tau_1\Delta,\tau_2\Delta>0$, we get under~${\mathbb P}$:
\[
\mathbb E\left[\mathcal D_{t,\tau_1}Y\mathcal D_{t,\tau_2}Y\middle|{\cal F}_{t-1}\right]=\frac{1}{\tau_1\tau_2\Delta^2}\boldsymbol B(t,t+\tau_1)^\top\mathbb E\left[\mathcal D_t\boldsymbol X\left(\mathcal D_t\boldsymbol X\right)^\top\middle|{\cal F}_{t-1}\right]\boldsymbol B(t,t+\tau_2).
\]
By Equation \eqref{eq: small delta} for small grid size $\Delta$, we estimate the last expression by:
\begin{equation}\label{eq: calibration}
\mathbb E\left[\mathcal D_{t,\tau_1}Y\mathcal D_{t,\tau_2}Y\middle|{\cal F}_{t-1}\right]\approx\frac{1}{\tau_1\tau_2}\boldsymbol 1^\top\left(\mathds{1}-\beta^{\tau_1}\right)\left(\mathds{1}-\beta\right)^{-1}\Sigma\left(\mathds{1}-\beta^\top\right)^{-1}\left(\mathds{1}-\left(\beta^\top\right)^{\tau_2}\right)\boldsymbol 1.
\end{equation}
Formula \eqref{eq: calibration} is interesting for the following reasons:
\begin{itemize}[leftmargin=*,labelsep=6mm]
\item It does not depend on the unobservable factors $\boldsymbol X$.
\item It allows for direct cross-sectional estimation of $\beta$ and $\Sigma$. That is, $\beta$ and $\Sigma$ can directly be estimated from market observations without knowing the market price of risk.
\item It is helpful to determine the number of factors needed to fit the model to market yield curve increments. This can be analyzed by principal component analysis.
\item It can also be interpreted as a small-noise approximation for noisy measurement systems of the form \eqref{eq: noisy measurement}.
\end{itemize}
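For illustration, the model-implied covariation on the right-hand side of \eqref{eq: calibration} is straightforward to evaluate numerically. The parameter values below are hypothetical:

```python
import numpy as np

def model_covariation(beta, Sigma, tau1, tau2):
    """Model-implied covariation for one maturity pair:
    (1/(tau1*tau2)) 1^T (I - beta^tau1)(I - beta)^{-1} Sigma
                        (I - beta^T)^{-1} (I - (beta^T)^tau2) 1,
    with beta, Sigma of shape (n, n) and tau1, tau2 integer grid steps."""
    n = beta.shape[0]
    one, I = np.ones(n), np.eye(n)
    left = (I - np.linalg.matrix_power(beta, tau1)) @ np.linalg.inv(I - beta)
    right = np.linalg.inv(I - beta.T) @ (I - np.linalg.matrix_power(beta.T, tau2))
    return one @ left @ Sigma @ right @ one / (tau1 * tau2)

# hypothetical two-factor example with diagonal beta
beta = np.diag([0.99, 0.90])
Sigma = np.array([[1e-6, 2e-7],
                  [2e-7, 4e-6]])
cov_21_63 = model_covariation(beta, Sigma, 21, 63)
```

For $\tau_1=\tau_2=1$ the matrix factors cancel and the expression reduces to $\boldsymbol 1^\top\Sigma\boldsymbol 1$, a convenient sanity check.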
Let $\widehat{y}_{1}(k)$ and $\widehat{y}_{2}(k)$ be market observations for times to maturity $\tau_{1}\Delta$ and $\tau_{2}\Delta$ at times $k\in\{t-K+1,\ldots,t\}$, as specified in Section~\ref{sec: kalman MLE}. Then, the expectation on the left-hand side of \eqref{eq: calibration} can be estimated by the realized covariation:
\begin{equation}\label{eq: rcov}
\widehat{\rm RCov}(t,\tau_1,\tau_2)=\frac{1}{K} \sum_{k=t-K+1}^t\big(\widehat{y}_{1}(k)-\widehat{y}_{1}(k-1)\big)\big(\widehat{y}_{2}(k)-\widehat{y}_{2}(k-1)\big).
\end{equation}
The quality of this estimator hinges on two crucial assumptions. First, higher order terms in \eqref{eq: small delta} are negligible in comparison to $\Sigma$. Second, the noise term $S^{\frac12}\boldsymbol\eta$ in \eqref{eq: noisy measurement} leads to a negligible distortion in the sense that observations $\widehat{\boldsymbol Y}$ are reliable indicators for the underlying Vasi\v cek yield curves.
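A minimal sketch of estimator \eqref{eq: rcov}, assuming the two observed yield series are given as arrays covering the estimation window:

```python
import numpy as np

def realized_covariation(y1, y2):
    """Realized covariation of two observed yield series over a rolling window:
    the mean of the products of their one-step increments.
    y1, y2: arrays of K+1 consecutive observations."""
    d1 = np.diff(np.asarray(y1, dtype=float))
    d2 = np.diff(np.asarray(y2, dtype=float))
    return d1 @ d2 / d1.size
```

For $\tau_1=\tau_2$ this yields the realized variance of a single series, whose square root is the realized volatility plotted in the figures below.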
\subsubsection[Cross-Sectional Estimation of beta and Sigma]{Cross-Sectional Estimation of $\beta$ and $\Sigma$}
The realized covariation estimator \eqref{eq: rcov} can be used in conjunction with the asymptotic relation \eqref{eq: calibration} to estimate the parameters $\beta$ and $\Sigma$ at time $t\Delta$ in the following way. For given symmetric weights $w_{ij}=w_{ji}\geq0$, we solve the least squares problem:
\begin{equation}\label{eq: co-var estimate}
\begin{aligned}
\left(\widehat{\beta}^{\rm RCov},\widehat{\Sigma}^{\rm RCov}\right)&={\arg\min}_{\beta,\Sigma}\Bigg\{\sum_{i,j=1}^{M}w_{ij}\bigg[\widehat{\rm RCov}(t,\tau_i,\tau_j)\\&\quad-\frac{1}{\tau_i\tau_j}\boldsymbol 1^\top\left(\mathds{1}-\beta^{\tau_i}\right)\left(\mathds{1}-\beta\right)^{-1}\Sigma\left(\mathds{1}-\beta^\top\right)^{-1}\left(\mathds{1}-\left(\beta^\top\right)^{\tau_j}\right)\boldsymbol 1\bigg]^2\Bigg\},
\end{aligned}
\end{equation}
where we optimize over $\beta$ and $\Sigma$ satisfying Assumption~\ref{assumption}.
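A sketch of how \eqref{eq: co-var estimate} could be solved numerically, assuming diagonal $\beta$ (as in the numerical example of Section~\ref{sec: numerical example}) and a Cholesky parametrization of $\Sigma$. The ``realized'' covariations below are synthetic, generated from known parameters rather than market data:

```python
import numpy as np
from scipy.optimize import least_squares

def model_cov(beta, Sigma, t1, t2):
    # model-implied covariation for one maturity pair (RHS of the asymptotic relation)
    n = beta.shape[0]
    one, I = np.ones(n), np.eye(n)
    L = (I - np.linalg.matrix_power(beta, t1)) @ np.linalg.inv(I - beta)
    R = np.linalg.inv(I - beta.T) @ (I - np.linalg.matrix_power(beta.T, t2))
    return one @ L @ Sigma @ R @ one / (t1 * t2)

def unpack(p, n):
    beta = np.diag(np.tanh(p[:n]))        # keeps the diagonal of beta in (-1, 1)
    C = np.zeros((n, n))
    C[np.tril_indices(n)] = p[n:]
    return beta, C @ C.T                  # Sigma = C C^T is positive semi-definite

def residuals(p, taus, rcov, w, n):
    beta, Sigma = unpack(p, n)
    return [np.sqrt(w[i, j]) * (rcov[i, j] - model_cov(beta, Sigma, ti, tj))
            for i, ti in enumerate(taus) for j, tj in enumerate(taus)]

n, taus = 2, [1, 5, 21]
beta_true = np.diag([0.95, 0.80])
Sigma_true = np.array([[2.0, 0.3], [0.3, 1.0]])
rcov = np.array([[model_cov(beta_true, Sigma_true, ti, tj) for tj in taus]
                 for ti in taus])         # synthetic "realized" covariations
w = np.ones((3, 3))

p0 = np.concatenate([np.arctanh([0.9, 0.5]), [1.0, 0.0, 1.0]])
res = least_squares(residuals, p0, args=(taus, rcov, w, n))
beta_hat, Sigma_hat = unpack(res.x, n)
```

The `tanh` and Cholesky transforms build the constraints of Assumption~\ref{assumption} into the parametrization, so an unconstrained solver can be used.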
\subsection{Inference on Market Price of Risk}
Finally, we aim at determining parameters $\boldsymbol\lambda$ and $\Lambda$ of the change of measure specified in Section~\ref{sec: real world dynamics}. For this purpose, we combine MLE estimation (Section~\ref{sec: kalman MLE}) with estimation from realized covariations of yields (Section~\ref{calibration real world 2}). First, we estimate $\beta$ and $\Sigma$ by $\widehat\beta^{\mathrm{RCov}}$ and $\widehat\Sigma^{\mathrm{RCov}}$ as in Section~\ref{calibration real world 2}. Second, we estimate $\boldsymbol a$, $\boldsymbol b$ and $\alpha$ by maximizing the log-likelihood:
\[
\log\mathcal{L}_t\left(\boldsymbol b,\beta,\Sigma,\boldsymbol a,\alpha\right)=-\sum_{k=t-K+1}^{t}\log\left(\det F(k)\right)-\sum_{k=t-K+1}^{t}\boldsymbol\zeta(k)^\top F(k)^{-1}\boldsymbol\zeta(k)+\mathrm{const},
\]
for fixed $\beta$ and $\Sigma$ over $\boldsymbol{b}\in{\mathbb R}^n$, $\boldsymbol{a}\in{\mathbb R}^n$ and $\alpha\in{\mathbb R}^{n\times n}$ with spectrum in $(-1,1)^n$, i.e.,
\begin{equation}\label{eq: mle}
\left(\widehat{\boldsymbol{b}}^{\mathrm{MLE}},\widehat{\boldsymbol{a}}^{\mathrm{MLE}},\widehat{\alpha}^{\rm MLE}\right)={\arg\max}_{\boldsymbol b,\boldsymbol a,\alpha}\:\:\log\mathcal{L}_t\left(\boldsymbol b,\widehat\beta^{\mathrm{RCov}},\widehat\Sigma^{\mathrm{RCov}},\boldsymbol a,\alpha\right).
\end{equation}
The constraint on the matrix $\alpha$ ensures that the factor process is stationary under the real-world measure ${\mathbb P}$. From Equation \eqref{eq: real world transform}, we have $\boldsymbol\lambda=\Sigma^{-\frac{1}{2}}\left(\boldsymbol b-\boldsymbol a\right)$ and $\Lambda=\Sigma^{-\frac{1}{2}}\left(\beta-\alpha\right)$. This motivates the inference of $\boldsymbol\lambda$ by:
\begin{equation}\label{eq: lambda}
\widehat{\boldsymbol\lambda}=\left(\widehat{\Sigma}^{\mathrm{RCov}}\right)^{-\frac{1}{2}}\left(\widehat{\boldsymbol b}^{\mathrm{MLE}}-\widehat{\boldsymbol a}^{\mathrm{MLE}}\right),
\end{equation}
and the inference of $\Lambda$ by:
\begin{equation}\label{eq: lambda mat}
\widehat{\Lambda}=\left(\widehat{\Sigma}^{\mathrm{RCov}}\right)^{-\frac{1}{2}}\left(\widehat{\beta}^{\mathrm{RCov}}-\widehat{\alpha}^{\mathrm{MLE}}\right).
\end{equation}
We stress the importance of estimating as many parameters as possible from the realized covariations of yields before resorting to maximum likelihood estimation. The MLE procedure of Section~\ref{sec: kalman MLE} is computationally intensive and generally does not work well for estimating volatility parameters.
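The final plug-in step combining the two estimation stages is a few lines of linear algebra; this sketch assumes the estimates are already available as NumPy arrays:

```python
# Plug-in evaluation of lambda = Sigma^{-1/2}(b - a) and
# Lambda = Sigma^{-1/2}(beta - alpha) from the two estimation stages.
import numpy as np
from scipy.linalg import sqrtm

def market_price_of_risk(Sigma_hat, b_hat, a_hat, beta_hat, alpha_hat):
    S_inv_half = np.linalg.inv(sqrtm(Sigma_hat))   # Sigma^{-1/2}
    lam = S_inv_half @ (b_hat - a_hat)
    Lam = S_inv_half @ (beta_hat - alpha_hat)
    return lam, Lam
```

Here `Sigma_hat` and `beta_hat` come from the realized-covariation fit and `b_hat`, `a_hat`, `alpha_hat` from the MLE step.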
\section{Numerical Example for Swiss Interest Rates}\label{sec: numerical example}
\subsection{Description and Selection of Data}
We choose $\Delta=1/252$, which corresponds to a daily time grid (assuming that a financial year has 252 business days). For the Swiss currency (CHF), we consider as yield observations the Swiss Average Rate (SAR), the London InterBank Offered
Rate (LIBOR) and the Swiss Confederation Bond (SWCNB). See Figures \ref{fig: data start} and \ref{fig: libor comment}.
\begin{itemize}[leftmargin=*,labelsep=6mm]
\item {\it Short times to maturity.} The SAR is an ongoing volume-weighted average rate calculated by the Swiss National Bank (SNB) based on repo transactions between financial institutions. It is used for short times to maturity of at most three months. For SAR, we have the Over-Night SARON that corresponds to a time to maturity of $\Delta$ (one business day) and the SAR Tomorrow-Next (SARTN) for time to maturity $2 \Delta$ (two business days). The latter is not completely correct, because SARON is a collateral over-night rate and tomorrow-next is a call money rate for receiving money tomorrow, which has to be paid back the next business day. Moreover, we have the SAR for times to maturity of one week (SAR1W), two weeks (SAR2W), one month (SAR1M) and three months (SAR3M); see also \cite{Jordan}.
\item {\it Short to medium times to maturity.} The LIBOR reflects times to maturity, which correspond to one~month (LIBOR1M), three months (LIBOR3M), six months (LIBOR6M) and 12 months (LIBOR12M) in the London interbank market.
\item {\it Medium to long times to maturity.} The SWCNB is based on Swiss government bonds, and it is used for times to maturity, which correspond to two years (SWCNB2Y), three years (SWCNB3Y), four~years (SWCNB4Y), five years (SWCNB5Y), seven years (SWCNB7Y), 10 years (SWCNB10Y), 20 years (SWCNB20Y) and 30 years (SWCNB30Y).
\end{itemize}
These data are available from 8 December 1999, and we set 15 September 2014 to be the last observation date. Of course, SAR, LIBOR and SWCNB do not exactly model risk-free zero-coupon bonds, and these different classes of instruments are not completely consistent, because prices are determined slightly differently for each class. In particular, this can be seen during the 2008--2009 financial crisis. However, these data are in many cases the best approximation to CHF risk-free zero-coupon yields that is available. For the longest times to maturity of SWCNB, one may also raise issues about the liquidity of these instruments, because insurance companies typically run a buy-and-hold strategy for long-term bonds.
In Figures \ref{fig: rcov start}--\ref{fig: rcov end}, we compute the realized volatility $\widehat{\rm RCov}(t,\tau,\tau)^{\frac{1}{2}}$ of yield curves $(\widehat{y}_\tau(k))_{k=t-K+1,\ldots, t}$ for different times to maturity $\tau\Delta$ and window length $K$; see Equation \eqref{eq: rcov}. In~Figures \ref{fig: libor comment} and \ref{fig: rcov end}, we observe that SAR fits SWCNB better than LIBOR after the financial crisis of 2008. For this reason, we decide to drop LIBOR and build daily yield curves from SAR and SWCNB, only. The mismatch between LIBOR, SAR and SWCNB is attributable to differences in liquidity and the credit risk of the underlying instruments.
\subsection{Model Selection}\label{sec: model selection}
In this numerical example, we restrict ourselves to multifactor Vasi\v cek models with $\beta$ and $\alpha$ of diagonal form:
\[
\beta=\mathrm{diag}\left(\beta_{11},\ldots,\beta_{nn}\right),\quad\text{and}\quad\alpha=\mathrm{diag}\left(\alpha_{11},\ldots,\alpha_{nn}\right),
\]
where $-1<\beta_{11},\ldots,\beta_{nn},\alpha_{11},\ldots,\alpha_{nn}<1$. In the following, we explain exactly how to perform the delicate task of parameter estimation in the multifactor Vasi\v cek Models~\eqref{eq: spot rate} and \eqref{eq: ARn} using the procedure explained in Section~\ref{sec: parameters}.
\subsubsection{Discussion of Identification Assumptions} We select short times to maturity (SAR) to estimate the parameters $\boldsymbol b$, $\beta$, $\Sigma$, $\boldsymbol a$ and $\alpha$. This is reasonable because these parameters describe the dynamics of the factor process and, thus, of the spot rate. As we are working on a small (daily) time grid, asymptotic Formulas \eqref{eq: small delta} and \eqref{eq: calibration} are expected to give good approximations. Additionally, it is reasonable to assume that the noise covariance matrix $S$ in data-generating Model~\eqref{eq: noisy measurement} is negligible compared to \eqref{eq: calibration}. Therefore, we can estimate the left-hand side of \eqref{eq: calibration} by the realized covariation of observed yields; see estimator \eqref{eq: rcov}. Then, we determine the Hull--White extension $\theta$ in order to match the prevailing yield curve interpolated from SAR and SWCNB.
\begin{figure}
\caption{Yield rates (lhs): Swiss Average Rate (SAR) and (rhs) London InterBank Offered Rate (LIBOR) from 8 December 1999, until 15 September 2014.}
\label{fig: data start}
\end{figure}
\begin{figure}
\caption{Yield rates: (lhs) Swiss Confederation Bond (SWCNB) and (rhs) a selection of SAR, LIBOR and Swiss Confederation Bond (SWCNB) from 8 December 1999, until 15 September 2014. Note that LIBOR looks rather different from SAR and SWCNB after the financial crisis of 2008.}
\label{fig: libor comment}
\end{figure}
\begin{figure}
\caption{SAR realized volatility $\widehat{\rm RCov}(t,\tau,\tau)^{\frac{1}{2}}$ for different times to maturity $\tau\Delta$ and window lengths $K$.}
\label{fig: rcov start}
\end{figure}
\begin{figure}
\caption{LIBOR realized volatility $\widehat{\rm RCov}(t,\tau,\tau)^{\frac{1}{2}}$ for different times to maturity $\tau\Delta$ and window lengths $K$.}
\end{figure}
\begin{figure}
\caption{SWCNB realized volatility $\widehat{\rm RCov}(t,\tau,\tau)^{\frac{1}{2}}$ for different times to maturity $\tau\Delta$ and window lengths $K$.}
\end{figure}
\begin{figure}
\caption{A selection of SAR, LIBOR and SWCNB realized volatility $\widehat{\rm RCov}(t,\tau,\tau)^{\frac{1}{2}}$ for different times to maturity $\tau\Delta$.}
\label{fig: rcov end}
\end{figure}
\subsubsection{Determination of the Number of Factors} We need to determine the appropriate number of factors $n$. The more factors we use, the better we can fit the model to the data. However, the dimensionality of the estimation problem increases quadratically in the number of factors, and the model may become over-parametrized. Therefore, we look for a trade-off between the accuracy of the model and the number of parameters used. In Figure \ref{fig: number of factors sample dates}, we determine $\beta_{11},\ldots,\beta_{nn}$ and $\Sigma$ by solving optimization~\eqref{eq: co-var estimate} numerically for three observation dates and $n=2,3$. A three-factor model is able to capture rather accurately the dependence on the time to maturity $\tau$. In Figure \ref{fig: number of factors all dates start}, we compare the realized volatility of the numerical solution of \eqref{eq: co-var estimate} to the market realized volatility for all observation dates. We observe that in several periods, the two-factor model is not able to fit the SAR realized volatilities accurately for all times to maturity. The three-factor model achieves an accurate fit for most observation dates. The model exhibits small mismatches in 2001, 2008--2009 and 2011--2012. These are periods characterized by a sharp reduction in interest rates in response to financial crises. In September 2011, following a strong appreciation of the Swiss Franc with respect to the Euro, the SNB pledged to no longer tolerate Euro-Franc exchange rates below the minimum rate of $1.20$, effectively enforcing a currency floor for more than three years. As a consequence of the European sovereign debt crisis and the intervention of the SNB starting from 2011, we have a long period of very low (even negative) interest rates.
\begin{figure}
\caption{SAR realized volatility $\widehat{\mathrm{RCov}}(t,\tau,\tau)^{\frac{1}{2}}$ compared to the model realized volatility obtained from optimization \eqref{eq: co-var estimate} for three observation dates and $n=2,3$.}
\label{fig: number of factors sample dates}
\end{figure}
\subsubsection[Determination of Vasicek Parameters]{Determination of Vasi\v cek Parameters} Considering the results of Figure \ref{fig: number of factors all dates start}, we restrict ourselves from now on to three-factor Vasi\v cek models with parameters $\boldsymbol a,\boldsymbol b\in{\mathbb R}^3$ and:
\[
\begin{aligned}
\beta=\mathrm{diag}\left(\beta_{11},\beta_{22},\beta_{33}\right),\quad\alpha=\mathrm{diag}\left(\alpha_{11},\alpha_{22},\alpha_{33}\right),\quad\Sigma^{\frac{1}{2}}=\begin{pmatrix} \Sigma^{\frac{1}{2}}_{11} & 0 & 0 \\ \Sigma^{\frac{1}{2}}_{21} & \Sigma^{\frac{1}{2}}_{22} & 0 \\ \Sigma^{\frac{1}{2}}_{31} & \Sigma^{\frac{1}{2}}_{32} & \Sigma^{\frac{1}{2}}_{33} \end{pmatrix},
\end{aligned}
\]
where $-1<\beta_{11},\beta_{22},\beta_{33},\alpha_{11},\alpha_{22},\alpha_{33}<1$, $\Sigma^{\frac{1}{2}}_{11},\Sigma^{\frac{1}{2}}_{22},\Sigma^{\frac{1}{2}}_{33}>0$ and $\Sigma^{\frac{1}{2}}_{21},\Sigma^{\frac{1}{2}}_{31},\Sigma^{\frac{1}{2}}_{32}\in{\mathbb R}$.
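The lower-triangular parametrization of $\Sigma^{\frac{1}{2}}$ with positive diagonal automatically yields a valid covariance matrix, as the following minimal sketch (with hypothetical entries) illustrates:

```python
import numpy as np

def sigma_from_root(s11, s21, s22, s31, s32, s33):
    """Builds Sigma from the lower-triangular square root; positive diagonal
    entries make the root non-singular and hence Sigma positive definite."""
    root = np.array([[s11, 0.0, 0.0],
                     [s21, s22, 0.0],
                     [s31, s32, s33]])
    return root @ root.T

# hypothetical entries on a daily time grid
Sigma = sigma_from_root(0.02, 0.005, 0.015, 0.001, 0.004, 0.01)
```

This is the same Cholesky-style device used in the cross-sectional fit: optimizing over the six root entries is unconstrained, yet every candidate $\Sigma$ is admissible.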
\begin{figure}
\caption{$\tau=1$.}
\caption{$\tau=2$.}
\caption{$\tau=5$.}
\caption{$\tau=10$.}
\caption{$\tau=21$.}
\caption{$\tau=63$.}
\caption{SAR realized volatility $\widehat{\mathrm{RCov}}(t,\tau,\tau)^{\frac{1}{2}}$ compared to the fitted two- and three-factor models for all observation dates and the times to maturity $\tau$ indicated in the panels.}
\label{fig: number of factors all dates start}
\end{figure}
\begin{figure}
\caption{Note the values of $\beta_{ij}$ close to one for all observation dates.}
\caption{Note the large spikes in the volatilities and strong correlations among the factors during the European sovereign debt crisis and after the SNB intervention in 2011.}
\caption{Note the considerable difference between $\boldsymbol b$ and $\boldsymbol a$ in 2000--2002 and 2006--2009.}
\caption{Parameter estimation by optimizations \eqref{eq: co-var estimate} and \eqref{eq: mle} for all observation dates.}
\label{fig: parameters start a}
\label{fig: sigma}
\label{fig: parameters start}
\end{figure}
In Figure \ref{fig: parameters start}, we plot the numerical solutions of optimizations \eqref{eq: co-var estimate} and \eqref{eq: mle} for all observation dates. The parameters are reasonable for most of the observation dates. We observe that the estimates of $\beta_{11}$ are close to one for all observation dates. Our values for the speed of mean reversion are reasonable on a daily time grid. Note that $\beta$ scales as $\beta^d$ on a $d$-day time grid; see Section~\ref{calibration real world 2}. The speeds of mean reversion of $X_2$ and $X_3$ are higher than that of $X_1$ for most of the observation dates. We also see that the volatility of $X_1$ is lower than that of $X_2$ and $X_3$. In 2011, we observe large spikes in the factor volatilities. Starting from 2011, we have a period with strong correlations among the factors. From these results, we conclude that the three-factor Vasi\v cek model is reasonable for Swiss interest rates. Particularly challenging for the estimation is the period 2011--2014 of low interest rates following the European sovereign debt crisis and the SNB intervention. In Figure \ref{fig: parameters start a} (rhs), we observe that the difference in the speeds of mean reversion under the risk-neutral and real-world measures is negligible. The difference between $\boldsymbol b$ and $\boldsymbol a$ is considerable in certain time periods. From the estimation results, we conclude that a constant market price of risk is a reasonable assumption and set $\Lambda=0$ from now on.
In Figure \ref{fig: loglikelihood}, we compute the objective function of optimization~\eqref{eq: mle} for $(\boldsymbol b,\beta,\Sigma,\boldsymbol a,\alpha)=(\boldsymbol 0, \widehat{\beta}^{\mathrm{RCov}},\widehat{\Sigma}^{\mathrm{RCov}},\boldsymbol 0, \widehat{\beta}^{\mathrm{RCov}})$ and compare it to the numerical solution $(\widehat{\boldsymbol b}^{\mathrm{MLE}}, \widehat{\beta}^{\mathrm{RCov}},\widehat{\Sigma}^{\mathrm{RCov}},\widehat{\boldsymbol a}^{\mathrm{MLE}}, \widehat{\alpha}^{\mathrm{MLE}})$. We observe that in 2003--2005 and 2010--2014, the parameter configuration $(\boldsymbol 0, \widehat{\beta}^{\mathrm{RCov}},\widehat{\Sigma}^{\mathrm{RCov}},\boldsymbol 0, \widehat{\beta}^{\mathrm{RCov}})$ is nearly optimal. In these periods, we have very low interest rates, and therefore, estimates of $\boldsymbol b$ and $\boldsymbol a$ close to zero are reasonable. Given the estimated parameters, we calibrate the Hull--White extension by Equation \eqref{eq: re-calibration step 2} to the full yield curve interpolated from SAR and SWCNB; see Figure \ref{fig: hwe}. We point out that our fitting method is not a purely statistical procedure; rather, it is a combination of estimation and calibration in accordance with the paradigm of robust calibration, as explained in \cite{Harms}.
\begin{figure}
\caption{Objective function $\log\mathcal{L}_t$ of optimization \eqref{eq: mle} evaluated at $(\boldsymbol 0,\widehat{\beta}^{\mathrm{RCov}},\widehat{\Sigma}^{\mathrm{RCov}},\boldsymbol 0,\widehat{\beta}^{\mathrm{RCov}})$ and at the numerical solution of \eqref{eq: mle} for all observation dates.}
\label{fig: loglikelihood}
\end{figure}
\begin{figure}
\caption{Three-factor Hull--White extended Vasi\v cek yield curve (lhs) and Hull--White extension $\theta$ (rhs) as of 29 September 2006. The parameters are estimated as in Figure \ref{fig: parameters start}.}
\label{fig: hwe}
\end{figure}
\subsubsection[Selection of a Model for the Vasicek Parameters]{Selection of a Model for the Vasi\v cek Parameters} In the following, we use the CRC approach to construct a modification of the Vasi\v cek model with stochastic volatility. We model the process $(\Sigma(t))_{t\in\mathbb N_0}$ by a Heston-like \cite{Heston} approach. We assume deterministic correlations among the factors and stochastic volatility given by:
\[
\begin{pmatrix} \Sigma_{11}(t) \\\Sigma_{22}(t) \\ \Sigma_{33}(t) \end{pmatrix}=\boldsymbol\varphi+\varPhi\begin{pmatrix} \Sigma_{11}(t-1) \\\Sigma_{22}(t-1) \\ \Sigma_{33}(t-1)\end{pmatrix}+\begin{pmatrix} \sqrt{\Sigma_{11}(t-1)} & 0 & 0 \\ 0 & \sqrt{\Sigma_{22}(t-1)} & 0\\ 0 & 0 & \sqrt{\Sigma_{33}(t-1)}\end{pmatrix}\Phi^{\frac{1}{2}}\widetilde{\boldsymbol\varepsilon}(t),
\]
where $\boldsymbol\varphi\in\mathbb R^3_+$, $\varPhi=\mathrm{diag}\left(\varPhi_{11},\varPhi_{22},\varPhi_{33}\right)\in\mathbb R^{3\times 3}$, $\Phi^{\frac{1}{2}}\in\mathbb R^{3\times 3}$ non-singular, and for each $t\in{\mathbb N}$, $\widetilde{\boldsymbol\varepsilon}(t)$ has a standard Gaussian distribution under ${\mathbb P}$, conditionally given $\mathcal F(t-1)$. Moreover, we assume that $\left(\boldsymbol\varepsilon(t),\widetilde{\boldsymbol\varepsilon}(t)\right)$ is multivariate Gaussian under ${\mathbb P}$, conditionally given $\mathcal F(t-1)$. Note that $\boldsymbol\varepsilon(t)$ and $\widetilde{\boldsymbol\varepsilon}(t)$ are allowed to be correlated. The matrix-valued process $(\Sigma(t))_{t\in\mathbb N_0}$ is constructed by combining this stochastic volatility model with fixed correlation coefficients. This model is able to capture the stylized fact that volatility appears to be more noisy in high-volatility periods; see Figure \ref{fig: sigma}.
We use the volatility time series of Figure \ref{fig: sigma} to specify $\boldsymbol\varphi$, $\varPhi$ and $\Phi$. We rewrite the equation for the evolution of the volatility as:
\[
\frac{\Sigma_{ii}(t)}{\sqrt{\Sigma_{ii}(t-1)}}=\frac{\varphi_i}{\sqrt{\Sigma_{ii}(t-1)}}+\varPhi_{ii}\sqrt{\Sigma_{ii}(t-1)}+(\Phi^{\frac{1}{2}}\widetilde{\boldsymbol\varepsilon}(t))_{i},\quad i=1,2,3,
\]
and use least squares regression to estimate $\boldsymbol\varphi$, $\varPhi$ and $\Phi$. From the regression residuals, we estimate the correlations between $\boldsymbol\varepsilon(t)$ and $\widetilde{\boldsymbol\varepsilon}(t)$. Figures \ref{fig: parameter crc start}--\ref{fig: parameter crc end} show the estimates of $\boldsymbol\varphi$, $\varPhi$ and $\Phi$.
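The componentwise regression above can be sketched as follows; the helper fits the level and mean-reversion parameter of a single volatility component by ordinary least squares (an illustration, not the paper's implementation):

```python
import numpy as np

def fit_vol_component(sigma_ii):
    """Least-squares fit of (level, mean reversion) in the rewritten equation
    Sigma_ii(t)/sqrt(Sigma_ii(t-1)) = a/sqrt(Sigma_ii(t-1)) + b*sqrt(Sigma_ii(t-1)) + noise,
    given one volatility time series sigma_ii = (Sigma_ii(t))_t."""
    prev = np.asarray(sigma_ii[:-1], dtype=float)
    curr = np.asarray(sigma_ii[1:], dtype=float)
    y = curr / np.sqrt(prev)
    X = np.column_stack([1.0 / np.sqrt(prev), np.sqrt(prev)])
    (a, b), *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ np.array([a, b])   # residuals feed the correlation estimates
    return a, b, resid
```

Applied to each diagonal series in turn, this recovers the level vector and the diagonal mean-reversion matrix; the stacked residuals then serve to estimate the noise covariance and the correlations with $\boldsymbol\varepsilon(t)$.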
\subsection{Simulation and Back-Testing}
Section~\ref{sec: model selection} provides a full specification of the three-factor Vasi\v cek CRC model under the risk-neutral and real-world probability measures. Various model quantities of interest in applications can then be calculated by simulation.
\begin{figure}
\caption{Estimation of $\varphi_{1},\varphi_{2},\varphi_{3}$ for all observation dates.}
\label{fig: parameter crc start}
\end{figure}
\begin{figure}
\caption{Estimation of $\varPhi_{11},\varPhi_{22},\varPhi_{33}$ for all observation dates.}
\end{figure}
\begin{figure}
\caption{Estimation of correlations $\widetilde{\rho}$ between $\boldsymbol\varepsilon(t)$ and $\widetilde{\boldsymbol\varepsilon}(t)$ for all observation dates.}
\label{fig: parameter crc end}
\end{figure}
\subsubsection{Simulation} The CRC approach has the remarkable property that yield curve increments can be simulated accurately and efficiently using Theorem \ref{theorem HJM view} and Corollary \ref{HJM under the real world measure}. In contrast, spot rate models with stochastic volatility without CRC have serious computational drawbacks. In such models, the calculation of the prevailing yield curve for given state variables requires Monte Carlo simulation. Therefore, the simulation of future yield curves requires nested simulations.
\subsubsection{Back-Testing}\label{subsec:bt} We backtest properties of the monthly returns of a buy and hold portfolio investing equal proportions of wealth in the zero-coupon bonds with times to maturity of 2, 3, 4, 5, 6 and 9 months and 1, 2, 3, 5, 7 and 10 years. We divide the sample into disjoint monthly periods and calculate the monthly return of this portfolio assuming that at the beginning of each period, we invest in the bonds with these times to maturity in equal proportions of wealth. The returns and some summary statistics are shown in Figure \ref{fig: returns}. We observe that the returns are positively skewed, leptokurtic and have heavier tails than the Gaussian distribution. These stylized facts are essential in applications.
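The summary statistics referred to above (mean, standard deviation, skewness, excess kurtosis) can be computed with standard tools; this is an illustrative sketch, not the code used for the figures:

```python
import numpy as np
from scipy.stats import skew, kurtosis

def summary_stats(log_returns):
    """Empirical moments of a sample of portfolio log-returns; positive skewness
    and positive excess kurtosis indicate the non-Gaussian features noted above."""
    r = np.asarray(log_returns, dtype=float)
    return {
        "mean": r.mean(),
        "std": r.std(ddof=1),
        "skewness": skew(r),
        "excess_kurtosis": kurtosis(r),  # 0 in expectation for a Gaussian sample
    }
```

A sample is called leptokurtic when its excess kurtosis is positive, i.e. its tails are heavier than those of a Gaussian with the same variance.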
For each monthly period, we select a three-factor Vasi\v cek model and its CRC counterpart with stochastic volatility. Then, we simulate for each period realizations of the returns of the test portfolio. By construction, the Vasi\v cek model generates Gaussian log-returns and is unable to reproduce the stylized facts of the sample; see Tables \ref{tab: stats1} and \ref{tab: stats2} and Figure \ref{fig: stats}. Increasing the number of factors does not help much, because the log-returns remain Gaussian. On the other hand, CRC of the Vasi\v cek model with stochastic volatility provides additional modeling flexibility. In particular, we can see from the statistics in Table \ref{tab: stats2} and the confidence intervals in Figure \ref{fig: stats} that the model matches the return distribution better than the Vasi\v cek model. As explained in Figure \ref{fig: stats}, statistical tests assuming the independence of disjoint monthly periods show that the difference between the Vasi\v cek model and its CRC counterpart is statistically significant. We conclude that the three-factor CRC Vasi\v cek model is a parsimonious and tractable alternative that provides reasonable results.
\begin{table}[H]
\centering
\includegraphics[width=\textwidth]{./figures/bt_stats_3fac.pdf}
\caption{Statistics computed from simulations of the test portfolio returns for some of the monthly periods in the Vasi\v cek model. For each monthly period, we simulate $10^4$ realizations.}\label{tab: stats1}
\end{table}
\begin{table}[H]
\centering
\includegraphics[width=\textwidth]{./figures/bt_stats_crc_3fac.pdf}
\caption{Statistics computed from the simulations of the test portfolio returns for some of the monthly periods in the consistent re-calibration (CRC) counterpart of the Vasi\v cek model with stochastic volatility. For each monthly period, we simulate $10^4$ realizations.}\label{tab: stats2}
\end{table}
\begin{figure}
\caption{Logarithmic monthly returns of a buy and hold portfolio investing in equal wealth proportions in the zero-coupon bonds with times to maturity of 2, 3, 4, 5, 6 and 9 months and 1, 2, 3, 5, 7 and 10 years. For each monthly period, we calculate the logarithmic return of this portfolio assuming that at the beginning of each period, we are invested in the bonds with these times to maturity in equal proportions of wealth.}
\label{fig: returns}
\end{figure}
\begin{figure}
\caption{Confidence intervals computed from $10^4$ simulations of the test portfolio returns in the Vasi\v cek model and its CRC counterpart with stochastic volatility. For each monthly period, we check if the market return lies in the confidence interval. This is more often the case for the CRC than for the standard Vasi\v cek model. A one-sided binomial test assuming the independence of monthly periods shows that the difference is statistically significant ($p=0.0013$ for the $25\%$ and $p=0.00017$ for the $5\%$ quantiles). The result remains significant if every second month is discarded to account for dependencies ($p\approx 0.01$). This suggests that the CRC Vasi\v cek model is able to match the return distribution better than its counterpart with constant parameters.}
\label{fig: stats}
\end{figure}
\subsubsection{Regulatory Framework}
The type of analysis that was performed in the previous section is an integral component of the present regulatory framework for risk management. In the Basel framework \cite{BIS}, the capital charge for the trading book is based on quantile risk measures. Under the internal model approach (\cite{BIS}, Section 2.VI.D), a bank calculates quantiles for the distribution of possible 10-day losses based on recent market data under the assumption that the trading book portfolio is held fixed over the time period. The approach relies on accurate modeling of the distribution of portfolio returns over holding periods of multiple days. A similar analysis is required by the Basel (\cite{BIS}, Section 2.VI.D) regulatory framework for model validation and stress testing: model validation is performed by backtesting the historical performance of the model, and stress tests are carried out using the same methodology by calibrating the model to historical periods of significant financial stress.
These tasks can be accomplished using the CRC approach by selecting suitable classes of affine models and parameter processes. The approach is fairly general, since there are few restrictions on the parameter processes. In particular, it allows for stochastic volatility and can be used to create realistic non-Gaussian distributions of multi-period bond returns (see Section~\ref{subsec:bt}). Nevertheless, computing these bond return distributions does not require nested simulations. This is crucial for reasons of efficiency. Moreover, the flexibility in the specification of the parameter processes makes the CRC approach well suited for stress testing, because it allows one to freely select and specify stress~scenarios.
\section{Conclusions}\label{sec: conclusion}
\begin{itemize}[leftmargin=*,labelsep=4mm]
\item {\it Flexibility and tractability.}
Consistent re-calibration of the multifactor Vasi\v cek model provides a tractable extension that allows parameters to follow stochastic processes. The additional flexibility can lead to better fits of yield curve dynamics and return distributions, as we demonstrated in our numerical example. Nevertheless, the model remains tractable. In particular, yield curves can be simulated efficiently using Theorem~\ref{theorem HJM view} and Corollary~\ref{HJM under the real world measure}. This allows one to efficiently calculate model quantities of interest in risk management, forecasting and pricing.
\item {\it Model selection.} CRC models are selected from the data in accordance with the robust calibration principle of \cite{Harms}. First, historical parameters, market prices of risk and Hull--White extensions are inferred using a combination of volatility estimation, MLE and calibration to the prevailing yield curve via Formulas (\ref{eq: co-var estimate}--\ref{eq: lambda mat}, \ref{eq: re-calibration step 2}). The only choices in this inference procedure are the number of factors of the Vasi\v cek model and the window length $K$. Then, as a second step, the time series of estimated historical parameters are used to select a model for the parameter evolution. This results in a complete specification of the CRC model under the real world and the pricing measure.
\item {\it Application to modeling of Swiss interest rates.} We fitted a three-factor Vasi\v cek CRC model with stochastic volatility to Swiss interest rate data. The model achieves a reasonably good fit in most time periods. The tractability of CRC allowed us to compute several model quantities by simulation. We looked at the historical performance of a representative buy and hold portfolio of Swiss bonds and concluded that a multifactor Vasi\v cek model is unable to describe the returns of this portfolio accurately. In contrast, the CRC version of the model provides the necessary flexibility for a good fit.
\end{itemize}
\appendix
\section*{Appendix A. Proofs}
\renewcommand{\thesection}{\Alph{section}}
\refstepcounter{section}
\label{sec: proofs}
\renewcommand{\thesection}{}
\begin{proof}[Proof of Theorem \ref{theo: ARn prices}]
We prove Theorem \ref{theo: ARn prices} by induction as in (\cite{Wuethrich}, Theorem 3.16), where zero-coupon bond (ZCB) prices are derived under the assumption that $\beta$ and $\Sigma$ are diagonal matrices. Note that we have the relation $P(m-1,m)=\exp\left(-\boldsymbol 1^\top\boldsymbol X(m-1)\Delta\right)$, which proves the claim for $t=m-1$. Assume that Theorem~\ref{theo: ARn prices} holds for $t+1\in\{2,\ldots,m-1\}$. We verify that it also holds for $t\in\{1,\ldots,m-2\}$. Under the equivalent martingale measure ${\mathbb P}^\ast$, using the tower property of conditional expectation and the induction hypothesis, we have:
\begin{myequation1}
\begin{aligned}
P(t,m)&=\exp\left\{-\boldsymbol 1^\top\boldsymbol X(t)\Delta\right\}\mathbb E^{*}\left[\mathbb E^{*}\left[\exp\left\{-\Delta\sum_{s=t+1}^{m-1}\boldsymbol 1^\top\boldsymbol X(s)\right\}\middle|{\cal F}(t+1)\right]\middle|{\cal F}(t)\right]\\&=\exp\left\{-\boldsymbol 1^\top\boldsymbol X(t)\Delta\right\}\mathbb E^{*}\left[P(t+1,m)\middle|{\cal F}(t)\right]\\&=\exp\left\{-\boldsymbol 1^\top\boldsymbol X(t)\Delta\right\}\mathbb E^{*}\left[\exp\left\{A(t+1,m)-\boldsymbol B(t+1,m)^\top\boldsymbol X(t+1)\right\}\middle|{\cal F}(t)\right]\\&=\exp\left\{-\boldsymbol 1^\top\boldsymbol X(t)\Delta+A(t+1,m)-\boldsymbol B(t+1,m)^\top\left(\boldsymbol b+\beta\boldsymbol X(t)\right)+\frac{1}{2}\boldsymbol B(t+1,m)^\top\Sigma\boldsymbol B(t+1,m)\right\}\\&=
\exp\left\{A(t+1,m)-\boldsymbol B(t+1,m)^\top\boldsymbol b+\frac{1}{2}\boldsymbol B(t+1,m)^\top\Sigma\boldsymbol B(t+1,m)-\left(\boldsymbol B(t+1,m)^\top\beta+\boldsymbol 1^\top\Delta\right)\boldsymbol X(t)\right\}.\nonumber
\end{aligned}
\end{myequation1}
This proves the following recursive formula for $m-1>t\geq0$:
\[
\begin{aligned}
A(t,m)&=A(t+1,m)-\boldsymbol B(t+1,m)^\top\boldsymbol b+\frac{1}{2}\boldsymbol B(t+1,m)^\top\Sigma\boldsymbol B(t+1,m),\\
\boldsymbol B(t,m)&=\beta^\top\boldsymbol B(t+1,m)+\boldsymbol 1\Delta.
\end{aligned}
\]
Finally, note that the recursive formula for $\boldsymbol B(\cdot,\cdot)$ implies:
\[
\boldsymbol B(t,m)=\sum_{s=0}^{m-t-1}\left(\beta^\top\right)^s\boldsymbol 1\Delta=\left(\mathds{1}-\beta^\top\right)^{-1}\left(\mathds{1}-\left(\beta^\top\right)^{m-t}\right)\boldsymbol 1\Delta.
\]
This concludes the proof.
\end{proof}
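The recursion and its closed form can be checked numerically. The following sketch (with hypothetical parameter values; not part of the model calibration) iterates $\boldsymbol B(t,m)=\beta^\top\boldsymbol B(t+1,m)+\boldsymbol 1\Delta$ from $\boldsymbol B(m-1,m)=\boldsymbol 1\Delta$ and compares the result to the closed-form geometric sum at every step:

```python
import numpy as np

# Sanity check (hypothetical parameter values) that the recursion
# B(t,m) = beta^T B(t+1,m) + 1*Delta, with B(m-1,m) = 1*Delta,
# matches the closed form
# B(t,m) = (I - beta^T)^{-1} (I - (beta^T)^{m-t}) 1 * Delta.
n, m, Delta = 3, 10, 0.25
rng = np.random.default_rng(0)
beta = 0.2 * rng.standard_normal((n, n))   # spectral radius < 1 in this sketch
one = np.ones(n)
I = np.eye(n)

B = one * Delta                            # B(m-1, m)
for t in range(m - 2, -1, -1):
    B = beta.T @ B + one * Delta           # recursion step gives B(t, m)
    closed = np.linalg.solve(
        I - beta.T,
        (I - np.linalg.matrix_power(beta.T, m - t)) @ one) * Delta
    assert np.allclose(B, closed)
```

The closed form requires $\mathds{1}-\beta^\top$ to be invertible, which the small random $\beta$ above guarantees.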
\begin{proof}[Proof of Theorem \ref{theo: ARn+ prices}]
The proof proceeds by induction, analogously to the proof of Theorem \ref{theo: ARn prices}.
\end{proof}
\begin{proof}[Proof of Theorem \ref{theo: calibration}]
First, observe that the condition $\boldsymbol{y}^{(k)}(k)=\boldsymbol y$ imposes conditions only on the values $\theta(1),\ldots,\theta(M-1)$. Second, the vector $\boldsymbol\theta$ satisfying this condition can be calculated recursively as follows.
\begin{enumerate}[leftmargin=*,labelsep=3mm]
\item \emph{First component $\theta_1$.} We have $A^{(k)}(k+1,k+2)=0$, $\boldsymbol B(k+1,k+2)=\boldsymbol 1\Delta$ and:
\begin{equation*}
A^{(k)}(k,k+2)=-\boldsymbol 1^\top\boldsymbol b\Delta-\theta_1\Delta+\frac{1}{2}\boldsymbol 1^\top\Sigma\boldsymbol 1\Delta^2,
\end{equation*}
see Theorem \ref{theo: ARn+ prices}.
Solving the last equation for $\theta_1$, we have:
\begin{equation*}
\theta_1
=\frac{1}{2}\boldsymbol 1^\top\Sigma\boldsymbol 1\Delta-\boldsymbol 1^\top\boldsymbol b-A^{(k)}(k,k+2)\Delta^{-1}.
\end{equation*}
From \eqref{eq: ARn+ yields} and the equation for $\boldsymbol B$ in Theorem~\ref{theo: ARn prices}, we obtain:
\begin{equation*}
A^{(k)}(k,k+2)=\boldsymbol 1^\top\left(\mathds{1}-\beta^2\right)\left(\mathds{1}-\beta\right)^{-1}\boldsymbol x\Delta-2 y_2\Delta.
\end{equation*}
This is equivalent to:
\begin{equation}\label{eq: first component}
\theta_1=\frac{1}{2}\boldsymbol 1^\top\Sigma\boldsymbol 1\Delta-\boldsymbol 1^\top\boldsymbol b-\boldsymbol 1^\top\left(\mathds{1}-\beta^2\right)\left(\mathds{1}-\beta\right)^{-1}\boldsymbol x+2y_2.
\end{equation}
\item \emph{Recursion $i\rightarrow i+1$.} Assume we have determined $\theta_1,\ldots,\theta_i$ for some $i\in\{1,\ldots,M-2\}$; we now determine $\theta_{i+1}$. We have $A^{(k)}(k+i+1,k+i+2)=0$, and iterating the recursive formula for $A^{(k)}$ in Theorem~\ref{theo: ARn+ prices} implies:
\[
A^{(k)}(k,k+i+2)=-\sum_{s=k+1}^{k+i+1}\boldsymbol B(s,k+i+2)^\top\left(\boldsymbol b +\theta(s-k)\boldsymbol e_1\right)+\frac{1}{2}\sum_{s=k+1}^{k+i+1}\boldsymbol B(s,k+i+2)^\top\Sigma\boldsymbol B(s,k+i+2).
\]
Solving the last equation for $\theta_{i+1}$ and using $\boldsymbol B(k+i+1,k+i+2)=\boldsymbol 1\Delta$, we have:
\[
\begin{aligned}
\theta_{i+1}=&-\frac{1}{\Delta}A^{(k)}(k,k+i+2)-\frac{1}{\Delta}\sum_{s=k+1}^{k+i}\boldsymbol B(s,k+i+2)^\top\left(\boldsymbol b +\theta(s-k)\boldsymbol e_1\right)-\boldsymbol 1^\top\boldsymbol b\\&\quad+\frac{1}{2\Delta}\sum_{s=k+1}^{k+i+1}\boldsymbol B(s,k+i+2)^\top\Sigma\boldsymbol B(s,k+i+2).
\end{aligned}
\]
From \eqref{eq: ARn+ yields} and the equation for $\boldsymbol B$ in Theorem~\ref{theo: ARn prices}, we obtain:
\[
A^{(k)}(k,k+i+2)=\boldsymbol 1^\top\left(\mathds{1}-\beta^{i+2}\right)\left(\mathds{1}-\beta\right)^{-1}\boldsymbol x\Delta-(i+2)y_{i+2}\Delta.
\]
This is equivalent to:
\begin{equation}\label{eq: recursion}
\begin{aligned}
\theta_{i+1}&=(i+2)y_{i+2}-\boldsymbol 1^\top\left(\mathds{1}-\beta^{i+2}\right)\left(\mathds{1}-\beta\right)^{-1}\boldsymbol x-\frac{1}{\Delta}\sum_{s=k+1}^{k+i}\boldsymbol B(s,k+i+2)^\top\left(\boldsymbol b+\theta_{s-k}\boldsymbol e_1\right)\\&\quad-\boldsymbol 1^\top\boldsymbol b+\frac{1}{2\Delta}\sum_{s=k+1}^{k+i+1}\boldsymbol B(s,k+i+2)^\top\Sigma\boldsymbol B(s,k+i+2)\\&=(i+2)y_{i+2}-\boldsymbol 1^\top\left(\mathds{1}-\beta^{i+2}\right)\left(\mathds{1}-\beta\right)^{-1}\boldsymbol x-\frac{1}{\Delta}\sum_{s=k+1}^{k+i+1}\boldsymbol B(s,k+i+2)^\top\boldsymbol b\\&\quad-\frac{1}{\Delta}\sum_{s=k+1}^{k+i}B_1(s,k+i+2)\theta_{s-k}+\frac{1}{2\Delta}\sum_{s=k+1}^{k+i+1}\boldsymbol B(s,k+i+2)^\top\Sigma\boldsymbol B(s,k+i+2).
\end{aligned}
\end{equation}
\end{enumerate}
This recursion allows one to determine the components of $\boldsymbol\theta$. Note that Equation \eqref{eq: recursion} can be written~as:
\[
\left({\cal C}(\beta)\boldsymbol\theta\right)_{i+1}=z_{i+1}\left(\boldsymbol b,\beta,\Sigma, \boldsymbol x,\boldsymbol y\right),\quad i=1,\ldots,M-2.
\]
Observe that the lower triangular matrix ${\cal C}(\beta)$ is invertible since $\det{\cal C}(\beta)=\Delta^{M-1}>0$. Hence, Equations \eqref{eq: first component} and \eqref{eq: recursion} prove \eqref{eq: calibration matrix}.
\end{proof}
\begin{proof}[Proof of Theorem \ref{theorem HJM view}]
We add and subtract $-A^{(k)}(k,m)+\boldsymbol B^{(k)}(k,m)^\top\boldsymbol{\cal X}(k)$ to the right hand side of Equation \eqref{start HJM} and obtain:
\begin{myequation2}\label{eq: identity1}
\begin{aligned}
{\cal Y}(k+1,m)\left(m-(k+1)\right)\Delta&=A^{(k)}(k,m)-A^{(k)}(k+1,m)-A^{(k)}(k,m)\\&\quad+\boldsymbol B^{(k)}(k,m)^\top\boldsymbol{\cal X}(k)-\boldsymbol B^{(k)}(k,m)^\top\boldsymbol{\cal X}(k)\\&\quad+\boldsymbol B^{(k)}(k+1,m)^\top \left(\boldsymbol b(k)+\theta^{(k)}(1)\boldsymbol e_1+\beta(k)\boldsymbol {\cal X}(k)+\Sigma(k)^{\frac{1}{2}}\boldsymbol\varepsilon^\ast(k+1)
\right).
\end{aligned}
\end{myequation2}
We have the following two identities from Section~\ref{subsubsec crc step 2}:
\begin{myequation1}
\begin{aligned}
-A^{(k)}(k,m)+\boldsymbol B^{(k)}(k,m)^\top\boldsymbol{\cal X}(k)&={\cal Y}(k,m)(m-k)\Delta,\\
A^{(k)}(k,m) -A^{(k)}(k+1,m)&=-\boldsymbol B^{(k)}(k+1,m)^\top\left(\boldsymbol b(k)+\theta^{(k)}(1)\boldsymbol e_1\right)+\frac{1}{2}\boldsymbol B^{(k)}(k+1,m)^\top\Sigma(k)\boldsymbol B^{(k)}(k+1,m).
\end{aligned}
\end{myequation1}
Therefore, the right hand side of \eqref{eq: identity1} can be rewritten as:
\begin{myequation1}
\begin{aligned}
{\cal Y}(k+1,m)
{(m-(k+1))\Delta}
&={\cal Y}(k,m){(m-k)\Delta}+\left(\boldsymbol B^{(k)}(k+1,m)^\top\beta(k)-\boldsymbol B^{(k)}(k,m)^\top\right)\boldsymbol{\cal X}(k)\\&\quad+\frac{1}{2}\boldsymbol B^{(k)}(k+1,m)^\top\Sigma(k)\boldsymbol B^{(k)}(k+1,m)+\boldsymbol B^{(k)}(k+1,m)^\top\Sigma(k)^{\frac{1}{2}}\boldsymbol\varepsilon^\ast(k+1).
\end{aligned}
\end{myequation1}
Observe that:
\[
\begin{aligned}
\boldsymbol B^{(k)}(k+1,m)^\top\beta(k)=\left(\sum_{s=0}^{m-k-2}\left(\beta^\top(k)\right)^s\boldsymbol 1\right)^\top\beta(k)\Delta=\boldsymbol 1^\top\sum_{s=1}^{m-k-1}\beta(k)^{s}\Delta=\boldsymbol B^{(k)}(k,m)^\top-\boldsymbol 1^\top\Delta,
\end{aligned}
\]
and that ${\cal Y}(k,k+1)=\boldsymbol 1^\top\boldsymbol {\cal X}(k)$. This proves the claim.
\end{proof}
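The matrix identity used in the last step can also be verified numerically. This sketch (with a hypothetical $\beta$) builds $\boldsymbol B(t,m)$ from its geometric-sum representation and checks $\boldsymbol B(k+1,m)^\top\beta=\boldsymbol B(k,m)^\top-\boldsymbol 1^\top\Delta$:

```python
import numpy as np

# Numerical check (hypothetical beta) of the identity:
# B(k+1,m)^T beta = B(k,m)^T - 1^T Delta, where
# B(t,m) = sum_{s=0}^{m-t-1} (beta^T)^s 1 * Delta.
n, Delta = 4, 0.5
k, m = 2, 9
rng = np.random.default_rng(1)
beta = 0.3 * rng.standard_normal((n, n))
one = np.ones(n)

def B(t, m):
    return sum(np.linalg.matrix_power(beta.T, s) @ one
               for s in range(m - t)) * Delta

lhs = B(k + 1, m) @ beta          # the row vector B(k+1,m)^T beta
rhs = B(k, m) - one * Delta
assert np.allclose(lhs, rhs)
```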
\end{document} |
\begin{document}
\title{\Large Quantum Simulation of Simple Many-Body Dynamics}
\begin{abstract}
Quantum computers could potentially simulate the dynamics of systems such as polyatomic molecules on a much larger scale than classical computers. We investigate a general quantum computational algorithm that simulates the time evolution of an arbitrary non-relativistic, Coulombic many-body system in three dimensions, considering only spatial degrees of freedom. We use a simple discretized model of Schr\"odinger evolution and discuss detailed constructions of the operators necessary to realize the scheme of Wiesner and Zalka. The algorithm is simulated numerically for small test cases, and its outputs are found to be in good agreement with analytical solutions.
\end{abstract}
\section{Introduction}
Quantum computers could provide an exponential speedup over classical methods of simulating quantum mechanical systems \cite{Feynman}. Wiesner \cite{Wiesner} and Zalka \cite{Zalka} have sketched a basic theory of such simulations that includes discretization techniques and decomposition of time evolution operators. Other approaches in the literature include the quantum lattice gas automaton (QLGA) \cite{Generic, Lattice, Yepez} and various methods of decomposing fermionic Hamiltonians into spin operators via the Jordan-Wigner transformation \cite{Abrams, Ortiz, Ovrum, Lanyon}.
However, work based on the first approach has generally focused on bounds for relevant time or space complexities rather than the explicit form of the requisite operators \cite{Giuliano, Kassal, NC}, as well as on the problem of calculating energy spectra \cite{Aspuru-Guzik, Lanyon}. In contrast, in this note, we expand on Wiesner and Zalka's approach to compute position-dependent wavefunctions. We propose detailed constructions of the operators (quantum gates) for such simulations and numerical tests of the algorithm described.
Given the initial state of some number of particles confined to a finite volume in three-dimensional space, we aim to compute the wavefunction of the system after an arbitrary period of time. Our simplified model uses only the non-relativistic Coulomb Hamiltonian, whose terms include kinetic energies and electric potentials, and neglects spin-orbit effects.
\section{Method} \label{method}
\subsection{Hamiltonian}
The Hamiltonian $\hat{H}$ is a Hermitian operator acting on states in a Hilbert space. For quantum computation, the Hilbert space in question is finite-dimensional. We will consider primarily the Coulomb Hamiltonian $\hat{H} = \hat{T} + \hat{U}$, which consists of terms of the form
\[
\hat{T} = -\sum_i \frac{\hbar^2}{2m_i}\nabla_i^2 \mbox{ and } \hat{U} = \frac{1}{4\pi\epsilon_0} \sum_i\sum_{j > i} \frac{q_i q_j}{\| \mathbf{r}_i - \mathbf{r}_j \|}.
\]
For a molecular system,
\begin{equation}
\label{molecular Hamiltonian}
\hat{H} = \hat{T}_e + \hat{T}_n + \hat{U}_{ee} + \hat{U}_{nn} + \hat{U}_{en}
\end{equation}
where the $\hat{U}$ terms represent various interactions between electrons and nuclei. One of our goals is to make such continuous operators precise in the context of quantum computing: what does the single-particle momentum operator $-i\hbar\nabla$ mean as a finite-dimensional unitary matrix and a quantum gate?
We use the following simulation parameters to determine the dimension of the Hilbert space and the evolution of $\psi$: $T$, the total time over which to simulate; $N_t$, the number of small time increments; $L$, the bound on the dimensions of the system; and $n$, a parameter chosen to reflect our desired precision.
\subsection{Discretization} \label{discretization}
Wiesner \cite{Wiesner} and Zalka \cite{Zalka} have outlined algorithms for simulating quantum many-body systems with the common element of diagonalization using the quantum Fourier transform (QFT). Expanding on their framework, we explicitly construct the operators needed to carry out these simulations.
We first review the common encoding scheme for Cartesian coordinates in three spatial dimensions. Let all particles be confined to a finite volume ($0 \leq x, y, z \leq L$). We divide the intervals along each coordinate axis into $2^n$ subintervals of length $\delta = L/2^n$. The volume is hence partitioned into many subvolumes (position basis states) where each amplitude of the state vector of a particle in the system specifies the probability that it will be found in a subvolume. The system wavefunction is a tensor product of discretized single-particle wavefunctions, or a sum thereof, encoded in a register of qubits. We approximate a continuous single-particle wavefunction $\psi(\mathbf{r}, t)$ by the state vector
\begin{equation}
\label{state vector}
|\psi(\mathbf{r}, t)\rangle = \sum_{i=0}^{2^n-1}\sum_{j=0}^{2^n-1}\sum_{k=0}^{2^n-1} \psi(\mathbf{r}_{ijk}, t)|ijk\rangle,
\end{equation}
normalized by a factor of $(\sum_{i,j,k} |\psi(\mathbf{r}_{ijk}, t)|^2)^{-1/2}$, where
\begin{equation}
\label{r}
\mathbf{r}_{ijk} = (r_x, r_y, r_z) = \delta\left(i+1/2, j+1/2, k+1/2\right).
\end{equation}
The one-dimensional position basis states $|i\rangle$, $|j\rangle$, and $|k\rangle$ range over discrete positions on the $x$, $y$, and $z$ axes, respectively, while $|ijk\rangle$ denotes the tensored state $|i\rangle|j\rangle|k\rangle$. We assign the amplitude associated with each subvolume to be the amplitude of $\psi$ at its center. Each particle requires a register of $n$ qubits for each of the $x$, $y$, and $z$ intervals. The algorithm thus requires $3nN$ total qubits, $N$ being the number of particles.
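As an illustration of this encoding, the following sketch (with hypothetical parameters $n = 3$, $L = 1$) enumerates the subvolume centers of equation \eqref{r} and checks that they tile the box:

```python
import numpy as np

# Illustrative sketch of the position encoding: n qubits per axis give
# 2^n subintervals of width delta = L / 2**n, and basis state |ijk> is
# assigned the subvolume center r_ijk = delta * (i + 1/2, j + 1/2, k + 1/2).
n, L = 3, 1.0
delta = L / 2**n

def r_center(i, j, k):
    return delta * (np.array([i, j, k]) + 0.5)

centers = np.array([r_center(i, j, k)
                    for i in range(2**n)
                    for j in range(2**n)
                    for k in range(2**n)])
assert centers.shape == (2**(3 * n), 3)
# every center lies strictly inside the box [0, L]^3
assert centers.min() == delta / 2 and centers.max() == L - delta / 2
```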
The particles are assumed spinless, but that leaves the question of distinguishability. Our algorithm is fully compatible with simulating indistinguishable fermions if one adds ancillary qubit registers to allow for the initialization of an antisymmetric spatial wavefunction via the algorithm of Abrams and Lloyd \cite{Abrams}. In our notation, this requires $O((3nN)^2)$ operations given $N$ single-particle states. This procedure also suffices (and requires fewer operations) for symmetrization --- for example, when dealing with bosonic nuclei. Our main focus here is the form of the operators, so we will not pursue this point further.
From the Schr\"odinger equation $\hat{H} \psi = i\hbar\partial_t \psi$, we have the time evolution
\begin{equation}
\label{time evolution}
|\psi(\mathbf{r}, t_0 + \epsilon)\rangle = e^{-\frac{i}{\hbar}\hat{H}\epsilon}|\psi(\mathbf{r}, t_0)\rangle.
\end{equation}
For very small time scales $\epsilon$, we can apply the Trotter decomposition \cite{NC} to construct gates for each term of the Hamiltonian separately and apply them sequentially as unitary time evolution operators:
\begin{equation}
\label{time evolution operator}
e^{-\frac{i}{\hbar}\hat{H}\epsilon} = e^{-\frac{i}{\hbar}\hat{U}\epsilon} e^{-\frac{i}{\hbar}\hat{T}\epsilon} + O(\epsilon^2).
\end{equation}
To carry out the algorithm, we choose a time interval $T$ over which to simulate and partition it into $N_t$ small intervals $\epsilon = T/N_t$. At each of the $N_t$ time steps, we apply the operator \eqref{time evolution operator} to the state vector of our system in the position basis.
Na\"ively, one can repeat the entire algorithm many times and make subsequent measurements of every qubit to iteratively determine a probability distribution for the particle configurations, thus allowing one to predict the overall spatial density at the end of the desired time interval. The procedure is identical to that required for the algorithm of Abrams and Lloyd \cite{Abrams} for simulating fermions in the first-quantized representation, namely constructing a histogram of configurations obtained from the repeated measurements.
\subsection{Construction of Kinetic Energy Operators} \label{kinetic}
It is commonly noted \cite{Wiesner, Zalka, Giuliano, Kassal, NC} that implementing the kinetic energy operator by Fourier transforming to the momentum basis and back reduces the time evolution operators to diagonal operators and the QFT. However, the form of the kinetic energy operator in the model of Wiesner and Zalka has not, to the author's knowledge, been previously addressed.
To this end, we determine the form of the momentum operator matrix $\hat{P}$ by analogy with the continuous case. First, consider the $-i\hbar\nabla$ operator for a single particle in one dimension: i.e., along an $x$-interval of length $L$ divided into $2^n$ subintervals of length $\delta$. To obtain approximations for the derivative of the discrete-valued wavefunction $\psi$ at each point $k$ ($0\leq k < 2^n$) corresponding to a position basis state, define
\[
D_k^+ = \frac{\psi(k + 1) - \psi(k)}{\delta}, \hspace{2 mm} D_k^- = \frac{\psi(k) - \psi(k - 1)}{\delta}, \hspace{2 mm} D_k^\textrm{ave} = \frac{\psi(k + 1) - \psi(k - 1)}{2\delta}
\]
so that to the best approximation,
\begin{eqnarray*}
D_x\psi(0) &=& D_0^+ + O(\delta), \\
D_x\psi(2^n - 1) &=& D_{2^n - 1}^- + O(\delta), \mbox{ and} \\
D_x\psi(k) &=& D_k^\textrm{ave} + O(\delta^2) \hspace{2 mm} (0 < k < 2^n - 1).
\end{eqnarray*}
The action of the ``discrete'' derivative operator is thus described by
\[
\sum_{k=0}^{2^n-1}\psi(k)|k\rangle \stackrel{\hat{D}_x}{\longmapsto} D_0^+ |0\rangle + \sum_{k=1}^{2^n-2}D_k^\textrm{ave}|k\rangle + D_{2^n - 1}^- |2^n - 1\rangle,
\]
or in matrix form as
\begin{equation}
\label{derivative matrix}
\hat{D}_x = \frac{1}{2\delta}
\left(\begin{array}{cccccc}
-2 & 2 & 0 & \cdots & 0 & 0 \\
-1 & 0 & 1 & \cdots & 0 & 0 \\
0 & -1 & 0 & \ddots & \vdots & \vdots \\
\vdots & \vdots & \ddots & \ddots & 1 & 0 \\
0 & 0 & \cdots & -1 & 0 & 1 \\
0 & 0 & \cdots & 0 & -2 & 2
\end{array}\right).
\end{equation}
In addition, we require that the matrix $-\frac{i}{\hbar}(\hat{P}^2/2M)\epsilon$ be skew-Hermitian so that its exponential is unitary. Hence $\hat{P}^2$ must be symmetric, and we can make it so by sacrificing the approximations for $\hat{D}_x$ at the endpoints. The corresponding momentum operator is
\begin{equation}
\label{P}
\hat{P} = -i\hbar\hat{D}_x \approx -\frac{i\hbar}{2\delta}
\left(\begin{array}{cccc}
0 & 1 & \cdots & 0 \\
-1 & 0 & \ddots & \vdots \\
\vdots & \ddots & \ddots & 1 \\
0 & \cdots & -1 & 0
\end{array}\right).
\end{equation}
The form of the kinetic energy operator follows immediately, and extending the kinetic energy operator to three dimensions is as simple as applying it to the qubits encoding the $x$-, $y$-, and $z$-coordinates of each particle separately. Since this construction takes place in the position basis, its implementation does not require a QFT. We next discuss two methods of implementing the kinetic energy term of the time evolution operator $e^{-\frac{i}{\hbar}\hat{T}\epsilon}$ as a set of quantum gates.
As a first method, note that
\[
\hat{P}^2 = -\frac{\hbar^2}{4\delta^2}\left(-|0\rangle\langle 0| + \sum_{i=1}^{2^n-2} \hat{P}_i - |2^n-1\rangle\langle 2^n-1|\right)
\]
where $\hat{P}_i = |i-1\rangle\langle i+1| - 2|i\rangle\langle i| + |i+1\rangle\langle i-1|$. Let $\xi = \frac{i\hbar\epsilon}{8M\delta^2}$. The Trotter formula gives
\begin{equation}
\label{KE Trotter}
e^{-\frac{i}{\hbar}\left(\frac{\hat{P}^2}{2M}\right)\epsilon} = e^{-\xi |0\rangle\langle 0|} \left(\prod_i e^{\xi\hat{P}_i}\right) e^{-\xi |2^n-1\rangle\langle 2^n-1|} + O(\xi^2)
\end{equation}
where $e^{-\xi |0\rangle\langle 0|}$ and $e^{-\xi |2^n-1\rangle\langle 2^n-1|}$ are both single-qubit phase rotation operators. Furthermore, since each of the $\hat{P}_i$ is a very sparse matrix, the $e^{\xi\hat{P}_i}$ are block diagonal matrices representing two-qubit operations: $e^{\xi\hat{P}_i} = \operatorname{diag}(I_{i-1}, M_P, I_{2^n-2-i})$ where
\[
M_P = \exp
\left(\begin{array}{ccc}
0 & 0 & \xi \\
0 & -2\xi & 0 \\
\xi & 0 & 0
\end{array}\right)
\]
and $I_m$ denotes the $m\times m$ identity matrix. Hence we have decomposed the kinetic energy operator entirely into controlled one- and two-qubit gates. Since $2^n$ such gates act on each $n$-qubit register representing position basis states along one coordinate axis, $3N2^n$ one- and two-qubit gates are needed to simulate the kinetic time evolution operator of the entire system. This gate complexity is linear in the number of subintervals along each axis, or the ``absolute'' precision $\widetilde{N} = 2^n$.
A second, and more robust, method is to realize that in the continuum limit, $\hat{P}$ will be exactly diagonalized by the QFT, as follows: letting $D$ be the dimension of the Hilbert space, $F$ be the QFT of dimension $D$, and $\alpha = -i\hbar/2\delta$, we can write
\[
\hat{P} = \alpha \sum_{x=0}^{D-2} (|x\rangle\langle x+1| - |x+1\rangle\langle x|).
\]
Applying $F = \frac{1}{\sqrt{D}}\sum_{j,k=0}^{D-1} e^{i2\pi jk/D} |j\rangle\langle k|$ and using $\langle j|k\rangle = \delta_{jk}$ for simplification shows that
\begin{equation}
F\hat{P}F^\dag = \frac{\alpha}{D} \sum_{j=0}^{D-1} \sum_{k=0}^{D-1} (e^{-i2\pi k/D} S_1(j, k) - e^{i2\pi k/D} S_2(j, k)) |j\rangle\langle k|
\end{equation}
where
\[
S_1(j, k) = \sum_{p=0}^{D-2} e^{i2\pi p(j-k)/D} \mbox{ and } S_2(j, k) = \sum_{p=1}^{D-1} e^{i2\pi p(j-k)/D}.
\]
Summing the geometric series gives $S_1(j, k) = -e^{i2\pi(j-k)(D-1)/D}$ and $S_2(j, k) = -1$ if $j\neq k$. Consequently, as $D\rightarrow\infty$, $S_1(j, k) \rightarrow -1$ and the non-diagonal entries approach
\begin{equation}
\frac{\alpha}{D}(e^{i2\pi k/D} - e^{-i2\pi k/D}) = \frac{\alpha}{D}\left(2i\sin\frac{2\pi k}{D}\right) \rightarrow 0.
\end{equation}
If $j = k$, $S_1(j, k) = S_2(j, k) = D - 1$, so that as $D\rightarrow\infty$, the $k^\mathrm{th}$ diagonal entry becomes
\begin{equation}
\label{diagonal entries}
\frac{D-1}{D}\alpha(e^{-i2\pi k/D} - e^{i2\pi k/D}) \rightarrow -2i\alpha\sin\frac{2\pi k}{D} = -\frac{\hbar}{\delta}\sin\frac{2\pi k}{D}.
\end{equation}
Thus $F\hat{P}F^\dag$ is a diagonal matrix with real eigenvalues in the continuum limit, as expected physically. Now that we have explicitly determined these eigenvalues, we can construct the kinetic energy operator from QFTs and a diagonal $\hat{P}$ matrix with entries of the form \eqref{diagonal entries} in the momentum basis. Once again, an unoptimized method of implementing the associated time evolution operators requires only $3N(2^n + O(n^2))$ gates --- taking into account both the QFT and the diagonal component --- which is linear in both the number of particles and the number of subintervals along each coordinate axis.
\subsection{Construction of Potential Energy Operators}
The construction of the potential energy operators is generally straightforward in the position basis. For example, the Coulomb potential operator for a molecular system has the following physical parameters: $N_e, \mathbf{r}_i$ (resp.\ $N_n, \mathbf{R}_i$) for the number and positions of the electrons (resp.\ nuclei) and $Z_i, M_i$ for the atomic numbers and masses of the nuclei. Take $\mathbf{r}_p \equiv \mathbf{r}_{i_p j_p k_p}$ (resp.\ $\mathbf{R}_p \equiv \mathbf{R}_{I_p J_p K_p}$) as shorthand, where $i_p j_p k_p$ (resp.\ $I_p J_p K_p$) is a bit string corresponding to the binary representation of some index describing a position basis state, or subvolume, occupied by the $p^\mathrm{th}$ electron (resp.\ nucleus). Recall that the position vector $\mathbf{r}_{ijk}$ is defined in equation \eqref{r}. Also, let $A[m]$ denote the $m^\mathrm{th}$ diagonal entry of matrix $A$.
$\hat{U}_{ee}$ can be seen as an operator on the state of the entire system of electrons, or as a $2^{3nN_e} \times 2^{3nN_e}$ diagonal matrix whose entries give the total potential energy due to any of the $2^{3nN_e}$ allowed configurations of electrons. If
\[
|m\rangle = \bigotimes_{s=1}^{N_e}|i_s j_s k_s\rangle
\]
represents one of the many possible electron configurations, then
\begin{equation}
\label{U_ee entries}
\hat{U}_{ee}[m] = \frac{e^2}{4\pi\epsilon_0} \sum_{p=1}^{N_e} \sum_{q=p+1}^{N_e} \frac{1}{\| \mathbf{r}_p - \mathbf{r}_q \|}.
\end{equation}
Analogously, $\hat{U}_{nn}$ acts on the state vector of the entire system of nuclei and $\hat{U}_{en}$ on all particles. For those diagonal entries that represent configurations of the system in which multiple particles are found in the same ``box'' of volume (i.e., when $|i_p j_p k_p\rangle = |i_q j_q k_q\rangle$ and hence $\mathbf{r}_p = \mathbf{r}_q$ for some $p, q$), we can set the potential to some approximate large, but finite, value. For example, in the case of two electrons, we can replace the undefined term $e^2/4\pi\epsilon_0 \| \mathbf{r}_p - \mathbf{r}_q \|$ with the approximation $e^2/4\pi\epsilon_0 \delta$.
\subsection{Complexity of Potential Energy Operators}
It is unknown whether nonlocal potential energy operators such as these can be implemented in a polynomial number of resources; Lloyd demonstrated efficiently-scaling implementations for local interactions \cite{Lloyd}. Benenti and Strini \cite{Giuliano} have shown that the time evolution operators corresponding to a harmonic oscillator potential do scale polynomially with the number of qubits, and their argument is easily extended to power-law potentials of the form $V(x) = ax^r$ where $r$ is a positive integer. The difficulty with the Coulomb potential lies in $r$ being negative, which prevents easy factorization of the operators into products of phase-shift gates acting on a fixed number of qubits, even after introducing ancillary qubits. However, it may be possible to implement these nonlocal potential operators in polynomial time if we choose to sacrifice some additional precision in computing the values of our potential energies. Polynomial scaling is important because it is necessary to first calculate the potential energies \eqref{U_ee entries} on a classical computer before implementing the corresponding phase rotations in quantum gates.
First, we can utilize redundancy to implement the potential energy operators more efficiently. As an example, consider the three-qubit phase shift operator
\[
\operatorname{diag}(e^{i\theta_1}, e^{i\theta_2}, e^{i\theta_3}, e^{i\theta_4}, e^{i\theta_3}, e^{i\theta_4}, e^{i\theta_1}, e^{i\theta_2})
\]
with ``redundant'' entries. The na\"ive method of constructing this diagonal operator is to use the first two qubits as controls for phase shift operators on the third qubit, resulting in four gates acting on the third qubit. A more efficient scheme is shown in Figure \ref{efficient}, using the single-qubit gates
\[
\hat{\theta}_{12} =
\left(\begin{array}{cc}
e^{i\theta_1} & 0 \\
0 & e^{i\theta_2}
\end{array}\right)
\mbox{ and } \hat{\theta}_{34} =
\left(\begin{array}{cc}
e^{i\theta_3} & 0 \\
0 & e^{i\theta_4}
\end{array}\right).
\]
Because there are many configurations of a molecular system with the same potential energy (up to a precision), the potential energy operators also have many redundant entries. In fact, let $\hat{U}$ be one of $\hat{U}_{ee}$, $\hat{U}_{nn}$, and $\hat{U}_{en}$, and let the corresponding $N$ be $N_e$, $N_n$, or $N_e + N_n$. Then $\hat{U}[x] = \hat{U}[2^{3nN} - 1 - x]$ for $0\leq x\leq 2^{3nN-1} - 1$, meaning that the potential energy operators are symmetric about the antidiagonal. This is easily seen by noting that the indices of the states $|x\rangle$ and $|2^{3nN} - 1 - x\rangle$ are bitwise complements, meaning that by symmetry, they represent reflected versions of the same molecular configuration. This observation reduces the number of gates needed by a factor of two, and similar optimizations could lead to much more efficient implementations of the potential energy operators.
\begin{figure}
\caption{Efficient implementation of a diagonal operator.}
\label{efficient}
\end{figure}
Second, because we would like to simulate arbitrarily large systems, the following variables are expensive: $N_e$, $N_n$ (in addition to the atomic numbers $Z_i$), and the absolute precision $\widetilde{N}$ where $n = \log_2 \widetilde{N}$. We have already seen that the kinetic energy operators scale linearly with $\widetilde{N}$. The number of distinct values of the potential energy of the system can scale polynomially if we restrict our calculations to some appropriate precision $\Delta U$. Then, despite the exponential number of particle configurations, their potential energies only assume a polynomial number of values, perhaps allowing us to employ redundancy to implement the potential energy operators efficiently.
To see how this might work, let $e' = e^2/4\pi\epsilon_0$. The extreme values of the potential energy of the entire system might go like
\[
U_\mathrm{max} = \frac{e'}{\delta} \left(\frac{N_e(N_e - 1)}{2} + \sum_i \sum_{j > i} Z_i Z_j\right) \mbox{ and } U_\mathrm{min} = -\frac{e'}{\delta} \sum_i Z_i.
\]
We would like to determine some increment $\Delta U$ so that the number of possible potential energies of the system between $U_\mathrm{min}$ and $U_\mathrm{max}$, to a precision of $\Delta U$, is $(U_\mathrm{max} - U_\mathrm{min})/\Delta U$ and becomes polynomial in the ``expensive'' variables. The smallest possible change in potential energy between two different configurations of the particles occurs roughly when two electrons are separated by a large distance and one of them is very slightly displaced. The maximum distance between the two electrons is of order $L$, and the minimum displacement is of order $\delta$. Let the initial $\mathbf{r}$ vector between the electrons have length $L$ and let the final one have length $L'$. The minimum change in distance $L' - L$ occurs when the displacement vector of length $\delta$ makes approximately a right angle with the initial $\mathbf{r}$ vector, so that $L' = \sqrt{L^2 + \delta^2} \approx L + \delta^2/2L$. This assumption is justified because the angle $\theta$ between the initial $\mathbf{r}$ vector and the displacement vector for which $L' = L$ is such that $\cos\theta = \delta/2L \ll 1$, so the angle that minimizes $L' - L$ can be made arbitrarily close to $\pi/2$. Then the minimum change in potential energy due to this small displacement is
\[
\Delta U = e'\left(\frac{1}{L} - \frac{1}{L'}\right) \approx e'\frac{\delta^2}{2L^3},
\]
whence
\begin{equation}
\frac{U_\mathrm{max} - U_\mathrm{min}}{\Delta U} \approx \frac{2L^3}{\delta^3}P(N_e, Z_1, \ldots, Z_{N_n}) = \widetilde{N}^3 \widetilde{P}(N_e, Z_1, \ldots, Z_{N_n})
\end{equation}
where $\widetilde{P}(N_e, Z_1, \ldots, Z_{N_n})$ is some polynomial expression of the arguments with constant factors absorbed into it. Hence it may be possible to construct the potential energy operators in a number of gates polynomial in $N_e$, the $Z_i$, and $\widetilde{N}$ as long as we restrict the precision of the potential energy values to $\Delta U$ and utilize redundancy effectively.
\section{Numerical Tests}
We illustrate the use of these operators in an actual algorithm with two examples.
\subsection{Order of Error: Particle in a Box}
We first numerically demonstrate the accuracy of our approximation for $e^{-\frac{i}{\hbar}\hat{T}\epsilon}$ by using it to determine a time evolution for a particle in a one-dimensional infinite potential well of length $L$, hence dropping the Coulombic term from the Hamiltonian. The goal here is to determine the order of error of our approximations by simulating the simplest possible system. We enforce the boundary conditions by introducing a potential energy operator that takes an arbitrarily large value at the endpoints.
An exact wavefunction beginning in a uniform superposition can be expressed as a linear combination of eigenfunctions by Fourier expansion:
\[
\psi_\textrm{exact}(x, 0) = \frac{1}{\sqrt{L}} \hspace{2 mm} (0 < x < L) \implies \psi_\textrm{exact}(x, t) = \frac{2^{3/2}}{\pi}\sum_{k=1}^\infty \frac{1}{2k-1}\psi_{2k-1}(x)e^{-iE_{2k-1}t/\hbar}
\]
where $\psi_a = \sqrt{2/L}\sin(a\pi x/L)$, with corresponding energies $E_a = a^2\hbar^2\pi^2/2mL^2$. We compare $|\psi_\textrm{exact}(x, t)|^2$ to the probability density of an initial input $|\psi(x, 0)\rangle = \frac{1}{2^{n/2}}\sum_{i=0}^{2^n-1}|i\rangle$ to the quantum algorithm as it evolves under the time evolution operators described in Section \ref{method}. Figure \ref{KE sim} shows qualitatively the probability density of $\psi_\textrm{exact}(x, t)$ versus that of $|\psi(x, t)\rangle$ for several evolution times $T$. In all simulations, 1000 terms are used in the Fourier series to compute $\psi_\textrm{exact}(x, t)$.
\begin{figure}
\caption{Comparison of probability densities for a particle in a box as predicted by the quantum algorithm (top row) and from analytical solutions (bottom row), evolved from a ``flat'' distribution. We use $N_t = 1000$, $n = 10$, and $L = 1$. Time is in atomic units.}
\label{KE sim}
\end{figure}
To estimate the convergence properties of the stated algorithm, we compute the root mean square error of the probability density of $|\psi(x, t)\rangle$ produced by the quantum algorithm versus both spatial and temporal discretization:
\[
\textrm{RMSE} = \frac{1}{2^{n/2}}\left(\sum_{i=0}^{2^n-1}\Big[\big|\langle i|\psi(x, t)\rangle\big|^2 - |\psi_\textrm{exact}(x_i, t)|^2\Big]^2\right)^{1/2}.
\]
The plots of Figure \ref{convergence} indicate that the error scales as a power of $\delta$, while preliminary results do not reveal the order of error in the temporal discretization. We additionally compute the measure of error used by Yepez and Boghosian, defined by $E_\textrm{YB} = 2^{-n/2}(\textrm{RMSE})$ (\cite{Yepez}, eq. 30). The slope of $\log(\textrm{RMSE})$ in the left plot is 0.2547 while that of $\log(E_\textrm{YB})$ is 0.7547, so the respective errors scale as $O(\delta^{0.2547})$ and $O(\delta^{0.7547})$. This convergence in space is not nearly as efficient as with the QLGA representation of Schr\"odinger evolution, which usually converges with fourth-order error ($E_\textrm{YB}$) in $\delta$ \cite{Yepez}. Thus future work might seek to increase the efficiency of our intuitive ``Cartesian'' operator representation. From the right plot, the envelope of the maximum RMSE also appears to decrease slightly sublinearly with decreasing $\epsilon$, while interesting nonlinear patterns appear that suggest the presence of vertical asymptotes and more complicated behavior for most $\epsilon$. Of course, the temporal error in the time evolution operator itself is second-order by equation \eqref{time evolution operator}, although this error can be reduced at the cost of increased gate complexity with a third-order Trotter decomposition.
\begin{figure}
\caption{Left: log-log plot of RMSE of $|\psi(x, t)|^2$ versus spatial interval $\delta$ at fixed $N_t = 1000$ for $n = 1, 2, \ldots, 10$. Both $\log(\textrm{RMSE})$ and $\log(E_\textrm{YB})$ are shown. Right: RMSE versus temporal interval $\epsilon$.}
\label{convergence}
\end{figure}
Several subtle sources of numerical error arise in our simulations. One downside to our construction of the kinetic energy operator is that its endpoint error, as mentioned in Section \ref{kinetic}, propagates with each application of the time evolution operator constructed from the approximate $\hat{P}$ in equation \eqref{P}. However, this error affects at most a fixed small number of basis states in the particle state vectors at each time step. As long as the spatial discretization sufficiently exceeds the temporal discretization, or $N_t\ll 2^n$, it should not noticeably hinder the simulation. Figure \ref{KE sim} shows that even when $N_t\approx 2^n$, one obtains qualitatively excellent results: hence our construction of $\hat{P}$ as diagonal in the momentum basis is indeed robust. Conversely, if $|h\rangle$ differs only slightly from an eigenvector of $\hat{H}$ (as a result of the approximation for $\hat{P}$), then $e^{ic\hat{H}}|h\rangle$ poorly approximates a phase shift of $|h\rangle$ for large $c$, so short absolute simulation times $T$ confer the best numerical stability.
Reasons that the RMSE does not converge exactly to 0 when either of the discretizations tend to 0 include the fixed error in the other variable, endpoint error, error due to a finite boundary potential, and (to a much lesser degree) error from a finite Fourier sum.
\subsection{Simple Molecules}
Though the electronic structure of molecules represents a physically interesting case, the present algorithm is limited in this domain because modeling electrons and nuclei as spinless precludes the simulation of atomic orbitals beyond the 1s.
Nonetheless, the algorithm is simulated in MATLAB to determine what it would predict for the electronic charge density of several molecules, each with at most two spinless ``electrons'' and three nuclei. To emulate the physical setup of a molecular system, the electron wavefunctions are initialized as uniform superpositions over subsets of position basis states via Hadamard transforms, while the much more massive nuclei are initialized to single position basis states. For the range of precision that we consider, antisymmetrization makes little noticeable difference. For simplicity, the numerical implementation uses fully quantum encoding and operators but the classical clamped-nucleus Born-Oppenheimer approximation. We thus include only the terms $\hat{T}_e$, $\hat{U}_{ee}$, and $\hat{U}_{en}$ in the molecular Hamiltonian \eqref{molecular Hamiltonian}. Also, we carry out the simulation in two dimensions both to reduce the space complexity and for easier visualization. A normalized time interval of $T = 1$ is used. The results of the algorithm for four very simple molecules are displayed in Figure \ref{simSemiQM}. These plots show effective cross-sections of three-dimensional orbitals, where each square represents an electron position basis state.
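As a classical emulation of the initialization step just described, applying Hadamard gates to the $k$ lowest-order qubits of $|0\cdots 0\rangle$ yields a uniform superposition over the first $2^k$ position basis states. A minimal Python sketch (illustrative, not the MATLAB implementation):

```python
import numpy as np

H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)  # single-qubit Hadamard

def uniform_superposition(n_qubits, k):
    """Apply H to the k lowest-order qubits of the n_qubits-qubit state
    |0...0>, producing a uniform superposition over the first 2^k
    position basis states."""
    state = np.zeros(2 ** n_qubits)
    state[0] = 1.0
    for q in range(k):  # qubit q is the (n_qubits - 1 - q)-th tensor factor
        op = np.eye(1)
        for j in range(n_qubits):
            op = np.kron(op, H if j == n_qubits - 1 - q else np.eye(2))
        state = op @ state
    return state
```

For example, `uniform_superposition(3, 2)` places amplitude $1/2$ on each of the first four basis states and zero elsewhere, matching the subsets of position states used to initialize the electrons.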
\begin{figure}
\caption{2-D electron densities of H, H$_2^+$, H$_2$, and H$_3^+$, respectively, generated by the quantum simulation algorithm.}
\label{simSemiQM}
\end{figure}
The very limited precision of the classical computer does not provide a detailed picture, but the plots all exhibit symmetry with respect to the nuclei. To simulate H$_2$ and H$_3^+$ efficiently, the precision was decreased by a factor of four. In these latter two plots, the greatest electron density appears to be in the center, where the atomic orbitals overlap. The simulated hydrogen and H$_2^+$ orbitals, though lobed and artificially resembling those of a 3d subshell, most likely reflect numerical errors in the simulation due to discretization. Compare the plots of Figure \ref{simSemiQM} to the classical results of Figure \ref{classical simulations}.
\begin{figure}
\caption{Analytical and classical results for $|\psi|^2$ of H, H$_2^+$, H$_2$, and H$_3^+$ (only the latter two molecules are in the ground state). The first image shows the exact electron density of an H atom corresponding to a 3d orbital in two dimensions, with wavefunction of the form $\psi(r_0, \theta) = \eta r_0^2 e^{-r_0/3}$.}
\label{classical simulations}
\end{figure}
We have not addressed the question of simulating both ground and excited states of systems such as molecules. Adjusting the input qubits to represent an initial configuration with a desired energy might allow simulation of states with arbitrary energies. Alternatively, Zalka's proposed ``energy drain'' method could induce a simulated system to decay into its ground state by coupling it to an external reservoir \cite{Zalka}.
\section{Conclusion} \label{conclusion}
We have constructed the relevant quantum gates for simulating the non-relativistic Coulomb Hamiltonian for spinless many-body quantum systems, building on Wiesner and Zalka's model. We have suggested implementations of these gates and proven the exactness of the kinetic energy operator in the continuum limit. Lastly, we have simulated the algorithm in MATLAB and compared its outputs to analytical solutions, finding good agreement. More numerical tests are needed and forthcoming. In particular, it would be desirable to simulate two particles acting via the Coulomb force in a potential well in 1, 2, and 3 dimensions, and to compare those numerical tests involving interacting particles to classically obtained benchmarks.
Our preliminary investigation of this algorithm, under highly simplifying assumptions, has two main limitations. First, disregarding fermionic spin statistics (the spin component of the wavefunction) means there is no way to enforce the Pauli exclusion principle, which does not currently permit the recovery of realistic atomic and molecular structure. Second, if one chooses to sample the probability density of the many-body wavefunction at the end of the time evolution by making repeated runs of the algorithm and measuring (Section \ref{discretization}), the number of runs necessary to obtain sufficiently good ``resolution'' is unknown. This latter problem is also encountered in the algorithm of Yepez and Boghosian \cite{Yepez}.
It should be possible to incorporate smaller terms into the Hamiltonian with no change in the form of the momentum operator. Doing so lifts many of the simplifications we have made. For example, the Dirac Hamiltonian corrects for both spin and relativistic effects \cite{Shankar}. Two extra qubits are required to represent positive- and negative-energy states as well as spin up or down. An electron wavefunction would thus reside in a Hilbert space $\mathcal{H}_r \otimes \mathcal{H}_s^{\otimes 2}$ where $\mathcal{H}_r$ is $2^{3n}$-dimensional and $\mathcal{H}_s$ is two-dimensional. This results in four-component spinor wavefunctions where each component is itself a position-dependent wavefunction. The Dirac Hamiltonian for an electron in an electromagnetic field is
\[
\hat{H}_D = \gamma^0(mc^2 + \gamma^\mu \pi_\mu c) + \hat{U}
\]
where $\gamma^\mu$ are the gamma matrices and $\vec{\pi} \equiv \vec{P} + e\vec{A}/c$ where $\vec{P}$ and $\vec{A}$ are operator-valued vectors of momentum operators and vector potential operators, respectively, in the $x$, $y$, and $z$ directions. The momentum operators in each of the three spatial dimensions act only on the qubits encoding the electron's position in that dimension. Namely, $\vec{P} = (\hat{P}_x, \hat{P}_y, \hat{P}_z)$ with $\hat{P}_x = \hat{P} \otimes I_{2^{2n}}$, $\hat{P}_y = I_{2^n} \otimes \hat{P} \otimes I_{2^n}$, $\hat{P}_z = I_{2^{2n}} \otimes \hat{P}$, and $\hat{P}$ as defined in equation \eqref{P}. Because the gamma matrices act on the spinor components, we can rewrite these operators in terms of matrices of the correct dimension as
\[
\hat{H}_\textrm{relativistic + spin} = \gamma^0 mc^2 \otimes I_{2^{3n}} + (\gamma^0 \gamma^\mu \otimes I_{2^{3n}})(I_4 \otimes \pi_\mu c) + I_4 \otimes \hat{U}.
\]
Both the position and the velocity of every particle in the system make a nonlocal contribution to the vector potential at every point, so implementing $\vec{A}$ might therefore require additional qubits for keeping track of such velocities. Finally, these corrections we have described account only for electron spin and not nuclear spin.
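The Kronecker-product structure of $\hat{P}_x$, $\hat{P}_y$, and $\hat{P}_z$ can be checked classically. The snippet below (Python, illustrative; a Pauli matrix stands in for the momentum operator of equation \eqref{P}) lifts a single-direction operator into three dimensions; operators lifted into different directions commute by construction, since they act on disjoint tensor factors.

```python
import numpy as np

def lift_to_3d(P, n):
    """Embed a 1-D single-particle operator P (acting on 2^n amplitudes)
    into each spatial direction of a 3n-qubit register:
    P_x = P (x) I (x) I,  P_y = I (x) P (x) I,  P_z = I (x) I (x) P."""
    I = np.eye(2 ** n)
    Px = np.kron(P, np.kron(I, I))
    Py = np.kron(I, np.kron(P, I))
    Pz = np.kron(I, np.kron(I, P))
    return Px, Py, Pz
```

Here the identity factors play the role of $I_{2^n}$ and $I_{2^{2n}}$ in the definitions of $\hat{P}_x$, $\hat{P}_y$, $\hat{P}_z$ above.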
\end{document} |
\begin{document}
\title[Satellite Constructions and Geometric Classification of Brunnian Links]
{Satellite Constructions and Geometric Classification of Brunnian Links}
\author{Sheng Bai}
\address{College of Mathematics and Physics, Beijing University of Chemical Technology, Beijing, 100029, China}
\email{[email protected]}
\author{Jiming Ma}
\address{School of Mathematical Sciences \\Fudan University\\
Shanghai 200433, China} \email{[email protected]}
\subjclass[2010]{57M25, 57M50}
\keywords{Brunnian links, hyperbolic links, satellite links, JSJ-decomposition}
\thanks{Jiming Ma was partially supported by NSFC 11771088.}
\begin{abstract}
We study satellite operations on Brunnian links. Firstly, we find two special satellite operations, both of which can construct infinitely many distinct Brunnian links from almost every Brunnian link. Secondly, we give a geometric classification theorem for Brunnian links, characterize the companionship graph defined by Budney in \cite{RB}, and develop a canonical geometric decomposition for Brunnian links that is simpler than the JSJ-decomposition. The building blocks of Brunnian links then turn out to be Hopf $n$-links, hyperbolic Brunnian links, and hyperbolic Brunnian links in unlink-complements. Thirdly, we define an operation that reduces a Brunnian link in an unlink-complement to a new Brunnian link in $S^3$ and point out some phenomena concerning this operation.
\end{abstract}
\date{June 25, 2020}
\maketitle
\section{Introduction}
The well-known geometric classification theorem for knots, proved by Thurston \cite{Th}, states that every knot type belongs to exactly one of three classes: torus knots, hyperbolic knots, and satellite knots. A generalization of Thurston's classification holds for all links. Here by a \emph{knot} we mean an oriented circle in the 3-sphere up to ambient isotopy. A motivation of this paper is to refine this classification for Brunnian links.
When provided with a link, we may naturally consider its sublinks. The nontrivial sublinks with the fewest components are exactly knots and Brunnian links. In this sense, Brunnian links can be viewed as next in importance to knots for the study of links. However, only limited work has been done on Brunnian links. This is the first of a series of papers on Brunnian links. In this paper we are concerned with satellites of Brunnian links, and the objective is twofold. The first aim is to construct infinitely many new Brunnian links using satellites. The other is to give a deeper geometric decomposition of Brunnian links via satellite patterns. We first present a general definition of Brunnian links.
\begin{defn}\label{def:brunn}
An $n$-component link $L$ in $S^3$ is \emph{Brunnian} if $n>1$, and
(ST): every $(n-1)$-component sublink is trivial;
(NT): $L$ is nontrivial.
\end{defn}
This paper has three main parts. Firstly, satellites of links have been defined and generalized in various ways (for instance \cite{RB,DFL,EN}). In this paper we give new names to two specific operations. The standard orientation of meridian and longitude used below will be described in the next section.
\begin{defn}\label{def:sum0}
Let $L$ and $L'$ be links in $S^3$, and let $C \subset L$ and $C' \subset L'$ be oriented unknotted components. Let $h: S^3 -int N(C') \longrightarrow N(C)$ be a homeomorphism that maps the oriented meridian of $N(C')$ to the oriented longitude of $N(C)$ and the oriented longitude of $N(C')$ to the oriented meridian of $N(C)$, where $N(C)$ is a regular neighborhood of $C$. Then the link $(L-C) \cup h(L'-C')$ is the \emph{satellite-sum} of $L(C)$ and $L'(C')$, denoted $L_{C} \dagger_ {C'} L'$.
\end{defn}
\begin{defn}\label{def:tie0}
Let $L_0 \sqcup L^n$ be a link in $S^3$ where $L^n=\sqcup_{i=1}^n C^i$ is an oriented unlink, and $k_i$, $i=1,...,n$, be nontrivial knot types. Let $h: U_n =S^3 - int N(L^n) \longrightarrow S^3$ be an orientation-preserving embedding so that
\begin{align}
S^3-h(U_n ) \cong \sqcup_{i=1}^n (S^3-N(k_i)), \nonumber
\end{align}
where $h(\partial N(C^i))$ corresponds to $\partial N(k_i)$, and the oriented meridian of $N(C^i)$ maps to the oriented null-homologous curve in $S^3-h(U_n)$ corresponding to the oriented longitude of $N(k_i)$. Then $L=h(L_0 )$ is a \emph{satellite-tie}, denoted $( L_0 , U_n \subset S^3 )\underrightarrow{k_1,...,k_n} L$.
\end{defn}
Satellite-sum and satellite-tie are in general different operations, in that they correspond to unknotted and knotted tori, respectively, in the complements of the resulting links. The \emph{splice} of links defined by Budney \cite{RB} includes both of them, and the notion of splice given by Eisenbud and Neumann \cite{EN} generalizes Budney's notion to homology spheres. Focusing on Brunnian links, we prove the following.
\begin{thm}\label{thm:sum0}
(1) The Brunnian property is preserved by satellite-sum.
(2) If a torus $T$ in $S^3$ splits a Brunnian link $L$, then $T$ is incompressible, and $L$ is decomposed by $T$ as a satellite-sum of two Brunnian links.
\end{thm}
\begin{thm}\label{thm:tie0}
(1) Suppose $L$ is a satellite-tie of $( L_0 , U_n \subset S^3 )$, that is, $( L_0 , U_n \subset S^3 )\underrightarrow{k_1,...,k_n} L$. Then $L_0$ is Brunnian in $U_n$ if and only if $L$ is Brunnian in $S^3$.
(2) Every knotted essential torus in the complement of a Brunnian link bounds the whole link in the solid torus side.
\end{thm}
We emphasize that one can construct infinitely many distinct Brunnian links by satellite-sum from every Brunnian link, and by satellite-tie from almost every Brunnian link.
By Alexander's Theorem \cite{Ha}, every smooth torus bounds at least one solid torus in $S^3$.
\begin{defn}
Fixing an embedding of 3-manifold $M$ in $S^3$, a smooth torus in $M$ is \emph{knotted} if it bounds only one solid torus in $S^3$, and otherwise, \emph{unknotted}.
\end{defn}
\begin{defn}\label{def:prime0}
A Brunnian link is \emph{s-prime} if it is not the Hopf link and there is no essential unknotted torus in its complement space.
\end{defn}
\begin{defn}\label{defn:untied0}
A Brunnian link is \emph{untied} if there is no essential knotted torus in its complement space.
\end{defn}
Secondly, we classify Seifert-fibred Brunnian links and give the following geometric classification theorem of Brunnian links.
\begin{thm} \label{thm:classification0}
If a Brunnian link $L$ is s-prime, untied, and not a $(2, 2n)$-torus link, then $S^3 - L$ has a hyperbolic structure.
\end{thm}
We then further study the geometric decomposition of Brunnian links. In \cite{RB}, Budney describes a formalism for the JSJ decomposition of link complements by the construction of two graphs associated to a link, JSJ-graph and companionship graph. Briefly speaking, the companionship graph is a partially-directed tree with each edge labeled by a JSJ-decomposition torus, and each vertex labeled by a link whose complement is homeomorphic to the corresponding JSJ-piece. The graph with labels discarded is the JSJ-graph. We will characterize the two graphs of Brunnian links.
\begin{defn}
A partially-directed tree is a \emph{planting tree} if all of its vertices and directed edges form a disjoint union of rooted trees and the endpoints of each non-directed edge are roots.
\end{defn}
\begin{thm} \label{thm:jsj0}
The JSJ-graph of a Brunnian link is a planting tree.
\end{thm}
Moreover, we develop a notion of canonical decomposition for Brunnian links, simpler than the JSJ-decomposition.
If a link is a satellite-sum of certain links, we draw the tree structure of the satellite-sum, expressed in terms of the links and the removed components, on a horizontal plane, and then replace the satellite-tie factors by their representations from Definition \ref{def:tie0}, with the arrows drawn upward. We call such a representation a \emph{tree-arrow structure}.
If, further, each factor with no arrow is either a hyperbolic Brunnian link or a $(2, 2n)$-torus link with $|n| \ge 2$, and each link in the satellite pattern is a hyperbolic Brunnian link in an unlink-complement, then we call this representation a \emph{Brunnian tree-arrow structure}.
By a Brunnian link in an unlink-complement, we mean a nontrivial link $L$ in an unlink-complement such that all its proper sublinks are trivial, see also Section 2. See Figure \ref{fig:treearrow} for an example of tree-arrow structure.
The main result in this part is
\begin{thm}\label{th:cano0}
Brunnian tree-arrow structures correspond bijectively to Brunnian links.
\end{thm}
Lastly, we are not satisfied with the occurrence of hyperbolic Brunnian links in unlink-complements as building blocks in the above theorem. So we define an operation, \emph{untie}, to reduce a Brunnian link in an unlink-complement to a Brunnian link in $S^3$. We investigate some subtleties of untying.
On the whole, given a Brunnian link in an unlink-complement, one can always untie it into an untied one and then decompose it into s-prime factors in finitely many steps. Proposition \ref{prop:independent} in Section \ref{sec:gdbl} guarantees that the resulting links are untied and s-prime. Combining this with Theorems \ref{th:cano0} and \ref{thm:classification0}, we can say that $(2, 2n)$-torus links and hyperbolic Brunnian links are the building blocks of the class of Brunnian links, in that they generate all Brunnian links under satellite-sum and satellite-tie.
This paper is organized as follows. In Section 2, we present notation and some useful results. In Sections \ref{sec:sum} and \ref{sec:tie}, we study satellite-sum and satellite-tie of Brunnian links, and prove Theorems \ref{thm:sum0} and \ref{thm:tie0}, respectively. In Section \ref{sec:annuli}, we classify Seifert-fibred Brunnian links and prove Theorem \ref{thm:classification0} by a discussion of essential annuli in link complements. We treat the geometric decomposition of Brunnian links in Section \ref{sec:gdbl}: we characterize JSJ-graphs (Theorem \ref{thm:jsj0}) and companionship graphs, investigate the uniqueness of decompositions by satellite-sum and satellite-tie, and describe the Brunnian tree-arrow structure (Theorem \ref{th:cano0}) by an analysis of essential tori. Finally, in Section \ref{sec:untie} we define and discuss untying.
\textbf{Acknowledgement}:
The authors would like to thank Zhiyun Cheng for helpful discussions at an early stage. The authors also appreciate the very helpful comments of an anonymous referee.
\section{Preliminaries}
In this paper, all objects and maps are smooth. All intersections are compact and transverse. We always consider links in $S^3$ if there is no extra claim. We use $L_i$ to denote a link, $C_i$ to denote an unknot, $N(\cdot)$ to denote a closed regular neighborhood, $D_i$ to denote an embedded disk, and $A_i$ to denote an annulus.
A link is \emph{trivial} if it is the boundary of mutually disjoint disks. Given a link $L$ in $S^3$, the following statements are equivalent: (i) $L$ is trivial in $S^3$; (ii) $L$ is trivial in $R^3$, which is $S^3$ with a point removed; (iii) $L$ is trivial in a certain 3-ball.
Although the proof of the following proposition is trivial, the consequences of this result are of major importance for us.
\begin{prop}\label{prop:onedisk}
\cite{BW} Let $L$ be a link satisfying (ST) in Definition \ref{def:brunn} and $L_0$ be a proper sublink of $L$. Then $L$ is trivial if and only if $L_0$ bounds mutually disjoint disks not intersecting $L-L_0$.
\end{prop}
Recall that for a knot $K$ in $S^3$, there are two canonical isotopy classes of curves in $\partial N(K)$, the \emph{meridian} and the \emph{longitude}. The meridian is the essential curve in $\partial N(K)$ that bounds a disk in $N(K)$. The longitude is the essential curve in $\partial N(K)$ that is null-homologous in $S^3-intN(K)$. We orient the longitude of $N(K)$ consistently with $K$ and orient the meridian so that the longitude and the meridian have linking number $+1$, as in \cite{RB}.
To investigate satellite-tie in Section \ref{sec:tie}, it is necessary to define \emph{Brunnian links in unlink-complements}. In a $3$-manifold, a \emph{link} is a disjoint union of simple closed curves. A link is \emph{trivial} if it is the boundary of mutually disjoint disks. A link in a $3$-manifold is \emph{Brunnian} if it has more than one component, all of its proper sublinks are trivial, and itself is not trivial.
A 3-manifold $U_n$ is an \emph{(n-component) unlink-complement} if it is homeomorphic to $S^3 - int N(L^n)$, where $L^n$ is an $n$-component unlink for $n>0$. For example, if $L \subset S^3$ is a Brunnian link and $L'$ is a proper sublink with more than one component, then $L'$ is a Brunnian link in the unlink-complement $S^3 - int N(L-L')$.
In Section \ref{sec:annuli} we need Thurston's Hyperbolization Theorem. If a compact 3-manifold $M$ with toric boundary is irreducible, $\partial$-irreducible, anannular and atoroidal, then by Thurston's Geometrization for Haken 3-manifolds \cite{Th}, $M$ admits a unique hyperbolic structure up to isometry.
We refer to \cite{Ha} for notions and details on 3-dimensional topology. Most arguments in Section \ref{sec:gdbl} rely on JSJ-Decomposition Theorem (mainly Torus Decomposition Theorem). The commonly used form of this theorem in our paper is: every oriented prime 3-manifold $M$ contains a canonical collection $\mathcal{T}_{JSJ}$ of disjoint essential tori, such that each component of $M-int N(\mathcal{T}_{JSJ})$ is either atoroidal or Seifert-fibred. Moreover, any maximal collection of disjoint essential tori with no parallel pair contains $\mathcal{T}_{JSJ}$ after isotopy.
We call each component of $M- int N(\mathcal{T}_{JSJ})$ a \emph{JSJ-piece}.
\section{Satellite-sum}\label{sec:sum}
\subsection{Definition and examples}
For simplicity of presentation, we always abbreviate satellite-sum to \emph{s-sum}. We call it a sum since it is a satellite of both $L$ and $L'$.
Clearly, the s-sum is symmetric in $L(C)$ and $L'(C')$. The s-sum is also associative for three links, with respect to the removed components. More precisely, an s-sum of multiple links forms a (non-directed) tree structure. Let us illustrate this with an example.
\begin{example}\label{example:sum}
See Figure \ref{fig:sum}. There are 5 Brunnian links in (1). An s-sum of them, expressed in terms of the links and the removed components, forms the tree structure shown in (2). The resulting link is shown in (3).
\end{example}
\begin{figure}\label{fig:sum}
\end{figure}
\begin{example}\label{example:milnor}
Milnor links (denoted $M^n$ when composed of $n$ components). It is observed from Figure \ref{fig:milnor} that, in general, $M^{n+m-2} = {M^n} _{C_r} \dagger _{C_l} M^m$, with respect to the rightmost component $C_r$ of $M^n$ and the leftmost component $C_l$ of $M^m$. Notice that $M^3$ is the Borromean rings. Performing the decomposition in succession, we obtain that $M^n$ is the s-sum of $n-2$ copies of the Borromean rings, and the tree structure is a path with $n-2$ vertices and $n-3$ edges.
\end{example}
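Unrolling the recursion $M^{n+m-2} = {M^n} _{C_r} \dagger _{C_l} M^m$ from the example above (and suppressing parentheses, which is permitted by the associativity of s-sum noted earlier) expresses every Milnor link in terms of the Borromean rings:

```latex
\[
M^{n} \;=\; \underbrace{M^{3}\,{}_{C_r}\!\dagger_{C_l}\, M^{3}\,{}_{C_r}\!\dagger_{C_l}\, \cdots \,{}_{C_r}\!\dagger_{C_l}\, M^{3}}_{n-2\ \text{copies of } M^{3}}, \qquad n \ge 3.
\]
```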
\begin{figure}
\caption{Milnor link: $M^7 = {M^4}_{C_r} \dagger_{C_l} M^5$.}
\label{fig:milnor}
\end{figure}
Note that the Hopf link is the identity for the s-sum operation: the s-sum of any link with the Hopf link is the link itself.
\subsection{The special nature of s-sums of Brunnian links}
Recall from \cite{DR} that a compact set $K$ in a solid torus $V$ is \emph{geometrically essential} in $V$ if every meridian disk of $V$ intersects $K$. The following statements are equivalent: (i) $K$ is geometrically essential in $V$; (ii) there is no 3-ball in $V$ containing $K$; (iii) if $V$ is a standard solid torus in $S^3$, and $C$ is the core of $S^3-V$, then $C$ fails to bound a disk avoiding $K$.
The crucial result in this section is the following proposition.
\begin{prop}\label{prop:sum}
Let $L$ be an unlink, $C$ be an unknot not intersecting $L$, and $L'$ be a link in $N(C)$. If $L \cup L' \subset S^3$ satisfies (ST) in Definition \ref{def:brunn}, then the following statements are equivalent.
(i) $L \cup L'$ is Brunnian.
(ii) $L'$ and $L$ are geometrically essential in $N(C)$ and $S^3 - int N(C)$ respectively.
(iii) $L \cup C$ and $L' \cup C_0$ are Brunnian, where $C_0$ is the core of the solid torus $S^3 - int N(C)$.
\end{prop}
\begin{proof}
Before beginning to prove, we note that the positions of $L$ and $L'$ in the proposition are symmetric.
(i) $\Rightarrow$ (ii). We need only show that $L'$ is geometrically essential in $N(C)$. Assume otherwise. Then there is a meridian disk $D$ of $N(C)$ not intersecting $L'$. In other words, $L'$ lies in a 3-ball subset of $N(C)$. Notice that $L \cup L'$ satisfies (ST). It follows that $L'$ bounds mutually disjoint disks in this 3-ball. According to Proposition \ref{prop:onedisk}, $L \cup L'$ is trivial.
(ii) $\Rightarrow$ (iii). We need only show that $L \cup C$ is Brunnian. First, we show that $L \cup C$ satisfies (ST). Let $C_i , i=1, ... ,n$ be the components of $L$. Since $L$ is trivial, it suffices to prove $\cup_{i=2}^n C_i \cup C$ is trivial without loss of generality.
Since $L \cup L'$ satisfies (ST), $\sqcup_{i=2}^n C_i$ bounds disjoint disks $\sqcup_{i=2}^n D_i$ not intersecting $L'$. We may assume $\cup_{i=2}^n D_i \cap \partial N(C)$ is a disjoint union of circles. Delete all circles inessential on $\partial N(C)$ by surgery, starting from an innermost one, and denote the revised disks still by $D_i$'s.
{\sc Case~1}. $\cup_{i=2}^n D_i \cap \partial N(C)= \emptyset$, namely, $\cup_{i=2}^n D_i \subset S^3 - N(C)$. Then $\cup_{i=2}^n C_i$ is not geometrically essential in $S^3 - intN(C)$. Therefore $C$ also bounds a disk disjoint from $\cup_{i=2}^n D_i$, and thus $\cup_{i=2}^n C_i \cup C$ is trivial.
{\sc Case~2}. $\cup_{i=2}^n D_i \cap \partial N(C) \neq \emptyset$. Then we choose an intersection circle $\tilde{c}$ innermost in some $D_i$, bounding $\tilde{D}$ on $D_i$. It follows that $\tilde{D}$ is a meridian disk of either $N(C)$ or $S^3 - intN(C)$. If $\tilde{D}$ is a meridian disk of $N(C)$, then $L'$ is not geometrically essential in $N(C)$, contradicting (ii). Hence, $\tilde{D}$ is a meridian disk of $S^3 - N(C)$. Then $\sqcup_{i=2}^n C_i$ is contained in a 3-ball, and thus bounds disjoint disks $\sqcup_{i=2}^n \bar{D_i}$ in it since $L \cup L'$ satisfies (ST). Clearly $C$ also bounds a disk disjoint from $\cup_{i=2}^n \bar{D_i}$ and thus $\cup_{i=2}^n C_i \cup C$ is trivial.
Now we show that $L \cup C$ is nontrivial. Assume $C$ bounds a disk disjoint from $L$, then there exists a meridian disk in $S^3 - intN(C)$ disjoint from $L$, contradicting that $L$ is geometrically essential in $S^3 - intN(C)$.
(iii) $\Rightarrow$ (i). Assume for contradiction that $L \cup L'$ is trivial. Then there is a disk $D_1$ bounded by a component of $L$, say $C_1$, disjoint from $(L-C_1) \cup L'$. We may assume $D_1 \cap \partial N(C)$ is a disjoint union of circles. Delete all circles inessential on $\partial N(C)$ by surgery on $D_1$, starting from an innermost one, and denote the revised disk still by $D_1$.
{\sc Case~1}. $D_1 \cap \partial N(C)= \emptyset$. Then $C_1$ bounds the disk $D_1$ disjoint from $(L-C_1) \cup C$, contradicting that $L \cup C$ is Brunnian.
{\sc Case~2}. $D_1 \cap \partial N(C) \neq \emptyset$. Then all the essential circles are parallel on $\partial N(C)$. Choose a circle $\tilde{c}$ innermost in $D_1$, bounding $\tilde{D} \subset D_1$. Then $\tilde{D}$ is a meridian disk of either $N(C)$ or $S^3 - intN(C)$. If $\tilde{D}$ is a meridian disk of $S^3 - intN(C)$, then $C$ bounds a disk disjoint from $L$, contradicting that $L \cup C$ is Brunnian. A similar argument shows that $\tilde{D}$ cannot be a meridian disk of $N(C)$.
\end{proof}
An immediate consequence of this proposition is
\begin{thm}\label{thm:sum}
(1) The Brunnian property is preserved by s-sum.
(2) If a torus $T$ in $S^3$ splits a Brunnian link $L$, then $T$ is incompressible, and $L$ is decomposed by $T$ as an s-sum of two Brunnian links.
\end{thm}
\begin{proof}
(1). Let $L$ and $L'$ be Brunnian links, let $C_i , i=1, ... ,n$, be the components of $L$, and let $C'$ be a component of $L'$. Consider $L_{C_1} \dagger_ {C'} L'$. Applying the implication (iii) $\Rightarrow$ (i) of Proposition \ref{prop:sum} to $L-C_1$ and $L'-C'$, we need only show that $(L-C_1) \cup (L'-C')$ satisfies (ST). Without loss of generality, it suffices to show that $\cup_{i=3}^n C_i \cup (L'-C')$ is trivial.
Since $L$ satisfies (ST), the sublink $\sqcup_{i=3}^n C_i$ is trivial in $S^3 - N(C_1)$. In particular, $\sqcup_{i=3}^n C_i$ bounds mutually disjoint disks in a 3-ball $B \subset S^3 - N(C_1)$. Therefore $L'-C'$ is contained in the 3-ball $S^3 - intB$ and thus trivial in it since $L'$ satisfies (ST).
(2). It follows from Theorem~\ref{thm:tie}(2) that $T$ is unknotted. The conclusion then follows from the implication (i) $\Rightarrow$ (iii) of Proposition \ref{prop:sum}.
\end{proof}
\section{Satellite-tie}\label{sec:tie}
\subsection{Definition and examples}\label{subsection:tiedefandexample}
For simplicity of presentation, we always abbreviate satellite-tie to \emph{s-tie}. We call $( L_0 , U_n \subset S^3 )$ in Definition \ref{def:tie0} an \emph{s-pattern} of $L$.
As an example, consider one of the simplest 2-component Brunnian links $L_0$ in a solid torus as shown in Figure \ref{fig:treborromean} (i). Notice that $U_1$ is a solid torus. The s-tie $( L_0 , U_1 \subset S^3 )\underrightarrow{k} L$, where $k$ is a trefoil, is a Brunnian link, as shown in Figure \ref{fig:treborromean} (ii). Varying the knot type of $k$ gives infinitely many distinct Brunnian links.
\begin{figure}\label{fig:treborromean}
\end{figure}
\subsection{Construction and the special nature of s-ties for Brunnian links}
The key idea of the proof of the main theorem in this section is the following Lemma \ref{lem:tie} (1).
\begin{lem}\label{lem:tie}
Let $K$ be a nontrivial knot, $L^r$ be an $r$-component unlink in $V=N(K)$ and $h$ be a homeomorphism from $U_{r+1}$ to $V - int N(L^r)$ where $U_{r+1}$ is an unlink-complement. Then for a link $L\subset U_{r+1}$,
(1) $L$ is trivial in $U_{r+1}$ if and only if $h(L)$ is trivial in $S^3 - N(L^r)$;
(2) $L$ is Brunnian in $U_{r+1}$ if and only if $h(L)$ is Brunnian in $S^3 - N(L^r)$.
\end{lem}
\begin{proof}
(1). The ``only if'' implication is trivial. For the ``if'' implication, first suppose $h(L) \cup L^r$ is not geometrically essential in $V$. Then $h(L) \cup L^r$ is contained in a 3-ball in $V$, so the triviality of $h(L)$ does not depend on $V$, and the claim follows.
Now suppose that $h(L) \cup L^r$ is geometrically essential in $V$ and $L=\sqcup_{i=1}^n C_i$ bounds mutually disjoint disks $\sqcup_{i=1}^n D_i$ in $S^3 - N(L^r)$. Then $\cup_{i=1}^n D_i$ cannot be contained in $V$, since otherwise $h(L) \cup L^r$ would be trivial in $V$ and hence not geometrically essential in $V$. We may assume $\cup_{i=1}^n D_i \cap \partial V$ is a disjoint union of circles. Eliminate all circles inessential on $\partial V$ by surgery, starting from an innermost one, and denote the revised disks still by $D_i$'s.
Choose an intersection circle $\tilde{c}$ essential on $\partial V$ and innermost on some $D_k$. Then $\tilde{c}$ bounds a disk $\tilde{D}$ on $D_k$. We claim this contradicts the assumptions. In fact, if $\tilde{D} \subset V$, then $h(L) \cup L^r$ is not geometrically essential in $V$. If $\tilde{D} \subset S^3 - intV$ then $K$ would be an unknot.
(2). Clause (1) implies that $L$ is not trivial in $U_{r+1}$ if and only if $h(L)$ is not trivial in $S^3 - N(L^r)$. Applying (1) to all the $(n-1)$-component sublinks of $L$, we obtain that all proper sublinks of $L$ are trivial in $U_{r+1}$ if and only if all proper sublinks of $h(L)$ are trivial in $S^3 - N(L^r)$.
\end{proof}
\begin{thm}\label{thm:tie}
(1) Suppose $( L_0 , U_n \subset S^3 )\underrightarrow{k_1,...,k_n} L$. Then $L_0$ is Brunnian in $U_n$ if and only if $L$ is Brunnian in $S^3$.
(2) For every knotted essential torus in the complement of a Brunnian link, the solid torus it bounds contains the whole link.
\end{thm}
\begin{proof}
(1). Let $h: U_n \longrightarrow S^3$ be the re-embedding in Definition \ref{def:tie0}, and $\sqcup_{r=1}^n H^r = S^3 -inth(U_n)$ be the disjoint union of the knot complement spaces. Set $U_i= S^3 - (\sqcup_{r=1}^i H^r)$ for $i= 0,1,...,n$. Then applying Lemma \ref{lem:tie}(2) for $i= 1,...,n$ in succession leads to the required conclusion.
(2). Given a Brunnian link $L$, suppose $T$ is a knotted essential torus in the complement whose solid torus side $V$ contains only a proper sublink of $L$. This proper sublink is trivial in $S^3$, since $L$ satisfies (ST), and thus trivial in $V$ by Lemma \ref{lem:tie}(1). It follows from Proposition \ref{prop:onedisk} that $L$ is trivial. This is a contradiction.
\end{proof}
The ``only if'' implication of Theorem \ref{thm:tie}(1) gives a general method to construct infinitely many Brunnian links.
Theorem \ref{thm:tie}(2) implies that once there is a knotted essential torus in the complement of a Brunnian link, the link is an s-tie.
To construct infinitely many new Brunnian links, it is necessary to generalize the notion of geometric essentiality to unlink-complements.
\begin{defn}
A compact set $K$ in an unlink-complement $U$ is \emph{geometrically essential} if $\partial U$ is incompressible in $U-K$.
\end{defn}
It is evident that $K$ is geometrically essential in the unlink-complement $U_n =S^3 - int N(\sqcup_{i=1}^n C_i)$ if and only if, for each $i$, $C_i$ fails to bound a disk avoiding $K\cup\bigcup_{j \ne i} C_j$.
For example, if $L \subset S^3$ is a Brunnian link and $L'$ is a proper sublink, then $L-L'$ is geometrically essential in the unlink-complement $S^3 - int N(L')$.
We now see that, given a geometrically essential Brunnian link in an unlink-complement $( L_0 , U_n \subset S^3 )$, one can always construct infinitely many distinct Brunnian links $( L_0 , U_n \subset S^3 )\underrightarrow{k_1,...,k_n} L$ by letting each $k_i$ range over all nontrivial knot types.
\subsection{Brunnian links in unlink-complements}\label{subsec:Bliuc}
We conclude this section with some observations on Brunnian links in unlink-complements. The diversity of Brunnian links in unlink-complements can be demonstrated by a typical example.
\begin{example}\label{example:unlinkcomplement}
The link $L$ in Figure \ref{fig:unlinkcomplement} (i) is Brunnian. Consider the red circles $C_1, C_2, C_3, C_4$ in Figure \ref{fig:unlinkcomplement} (ii). Then $L$ becomes a Brunnian link in the unlink-complements $S^3 - N(C_i)$, $S^3 - N(C_i \sqcup C_j)$, $S^3 - N(C_i \sqcup C_j \sqcup C_k)$, $S^3 - N(C_1 \sqcup C_2 \sqcup C_3 \sqcup C_4)$, and so on. We can even ``twist'' or ``tie a knot'' of any type in the middle part of $C_4$. We therefore obtain infinitely many distinct Brunnian links in unlink-complements from $L$.
\end{example}
\begin{figure}\label{fig:unlinkcomplement}
\end{figure}
Before ending this section we show that one can always construct Brunnian links in infinitely many distinct unlink-complements, given a geometrically essential Brunnian link in an unlink-complement.
\begin{prop}\label{prop:unlinkcomplement}
Let $L$ be a geometrically essential Brunnian link in $U^n= S^3 - intN(L^n)$, where $L^n = \sqcup_{i=1}^n C^i$ is an unlink, $L_1$, ..., $L_n$ be Brunnian links, and $C_i$ be a component of $L_i$ for each $i$. Let $\tilde{L}$ be an s-sum of $L^n $ and $L_1$,..., $L_n$, with respect to all pairs of $C^i$ and $C_i$. Then $L$ is a geometrically essential Brunnian link in the unlink-complement $\tilde{U} = S^3 -intN(\tilde{L})$.
\end{prop}
\begin{proof}
It is easy to check that $\tilde{L}$ is an unlink. Let $U_i$ be the complement space of $L_i$ for each $i$. Then $\tilde{U}$ is obtained by gluing $U^n$ and all the $U_i$'s along their boundaries.
First we show that $L$ is Brunnian in $\tilde{U}$. Since $U^n \subset \tilde{U}$, $L$ satisfies (ST) in $\tilde{U}$. We prove that $L$ is not trivial in $\tilde{U}$ by contradiction. Assume $L$ bounds mutually disjoint disks in $\tilde{U}$. The disks cannot be contained in $U^n$, since $L$ is not trivial in $U^n$; thus some disk intersects $\partial N(L^n)$. We may isotope $N(L^n)$ to be thin enough in $S^3$ that the disks intersect $N(L^n)$ only in its meridian disks. This contradicts the fact that the $L_i$'s are nontrivial.
It remains to show that $L$ is geometrically essential in $\tilde{U}$. Assume otherwise. Without loss of generality, let $C$ be a component of $L_1 -C_1$ bounding a disk in $\tilde{U}$. The disk cannot be isotoped into $U_1$, since $L_1$ is nontrivial. Thus the disk must contain a subdisk $D$ that is a meridian disk of $S^3 - int N(C^1)$. If $D \subset U^n$, then $L$ is not geometrically essential in $U^n$. Otherwise, we may isotope $N(L^n - C^1)$ to be thin enough that $D$ intersects $N(L^n - C^1)$ only in its meridian disks, contradicting that the other $L_i$'s are nontrivial.
\end{proof}
\section{Brunnian links with essential annuli in their complements}\label{sec:annuli}
In this section we will determine Brunnian links in $S^3$ with Seifert-fibered complements, and then prove the geometric classification theorem for Brunnian links.
We call the $(2, 2n)$-torus link the \emph{Hopf $n$-link}, where $n$ is a nonzero integer. Before proceeding further, we note that the Hopf $n$-link has a Seifert-fibered complement. In addition, the Hopf $n$-link with $|n| \ge 2$ is s-prime and untied, because its complement is atoroidal.
\begin{prop}\label{prop:annulus}
The only Brunnian links whose complements contain an essential annulus joining two different boundary components are the Hopf $n$-links.
\end{prop}
\begin{proof}
Let $L=\cup_{i=1}^k C_i$ be a Brunnian link. Set $T_i = \partial N(C_i)$ for $i=1, 2$. Suppose $A \subset S^3 - intN(L)$ is an essential annulus joining essential curves $l_1$ on $T_1$ and $l_2$ on $T_2$.
First, we determine the types of $l_1$ and $l_2$ on the two tori by considering the linking number $n=lk(C_1 ,C_2)$. Suppose $l_1$ is a $(p, q)$-curve on $T_1$ and $l_2$ is a $(p', q')$-curve on $T_2$. Notice that $l_1$ and $l_2$ are isotopic in the complement of $L$. They cannot both be $(0, 1)$-curves, i.e. meridians of $N(C_1)$ and $N(C_2)$, since these meridians lie in different homology classes of $S^3 - int N(L)$. We have
\begin{align*}
lk(l_1, C_1)&=q, & lk(l_2, C_1)&=np';\\
lk(l_1, C_2)&=np, & lk(l_2, C_2)&=q'.
\end{align*}
This implies that
\begin{align*}
q = np', \qquad q' = np.
\end{align*}
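For the reader's convenience, here is a sketch of how these identities arise, under the assumption that a $(p,q)$-curve on $T_i$ represents $p$ times the $0$-framed longitude plus $q$ times the meridian of the corresponding solid torus:
\begin{align*}
lk(l_1, C_1) &= p\, lk(\lambda_1, C_1) + q\, lk(\mu_1, C_1) = p\cdot 0 + q\cdot 1 = q,\\
lk(l_1, C_2) &= p\, lk(C_1, C_2) + q\, lk(\mu_1, C_2) = np + 0 = np,
\end{align*}
where $\lambda_1$ and $\mu_1$ denote the longitude and meridian of $N(C_1)$, and symmetrically for $l_2$. Since $l_1$ and $l_2$ are isotopic in the complement of $L$, their linking numbers with each $C_i$ coincide, which yields $q = np'$ and $np = q'$.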
We discuss the link type of $L$ by cases.
{\sc Case~1}. $|p|=1$. Since $l_1$ is unknotted, so is $l_2$. Thus we have $|p'|=1$ or $|q'|=1$.
{\sc Subcase~1.1}. $|p'|=1$. Then $l_i$ is a $\pm (1, n)$-curve on $T_i$ for $i=1, 2$, and thus $C_1$ is parallel to $C_2$. If $n=0$, then the triviality of $L - C_1$ would imply that $L$ is trivial. So $n\neq 0$ and $L=C_1 \cup C_2$ is the Hopf $n$-link.
{\sc Subcase~1.2}. $|q'|=1$. Then $n=\pm 1$, $l_1$ is a $\pm (1, q)$-curve and $l_2$ is a $\pm (q, 1)$-curve.
{\sc Case~2}. $|q|=1$. Then an argument similar to the one used in Case 1 shows that $n=\pm 1$, $l_1$ is a $\pm (p, 1)$-curve, and $l_2$ is a $\pm (1, p)$-curve.
{\sc Case~3}. $pq=0$. Then either $l_1$ is a $(0, 1)$-curve and $l_2$ is a $(1, 0)$-curve, or vice versa. In either subcase, $L$ is the Hopf link.
{\sc Case~4}. $|p|, |q| >1$. Then $l_1$ is a torus knot. Since $l_2$ is isotopic to $l_1$, by the classification of torus knots, $l_2$ must be either a $\pm(p, q)$-curve or a $\pm(q, p)$-curve on $T_2$. The first subcase is impossible. In the latter subcase, $n=\pm 1$.
In summary, we have proved that $L$ is a Hopf $n$-link unless $n=\pm 1$, $l_1$ is a $(p, q)$-curve, and $l_2$ is a $\pm (q, p)$-curve with $pq \ne 0$.
Now, without loss of generality, suppose $L=C_1 \cup C_2$, $n= 1$, $l_1$ is a $(p, q)$-curve on $T_1$, and $l_2$ is a $(q, p)$-curve on $T_2$ with $pq \ne 0$. We will prove that $L$ is the Hopf link.
Set $V_1 =S^3 - int N(C_1)$. Then $A \cup N(C_2) \subset V_1$. Choose a meridian disk $D$ of $V_1$ such that $\partial D \cap l_1$ consists of $q$ points. Then $D \cap T_2$ is a disjoint union of circles. First, eliminate all circles inessential on $T_2$, starting from an innermost one, by isotopy. The remaining circles, denoted by $\sqcup_{i=1}^r \bar{C_i}$, must be meridians of $N(C_2)$, since they are parallel on $T_2$ and an innermost one on $D$ bounds a meridian disk of $N(C_2)$. Next, consider $D \cap A$, a disjoint union of circles and proper arcs.
We claim that no proper arc has both endpoints on $\partial D$. Assume otherwise. Then such an arc $\alpha$, outermost on $A$, cuts off a disk on $A$. We can isotope $D$ across this disk to eliminate $\alpha$, contradicting that the number of points of $\partial D \cap \partial A$ is minimal.
Therefore, the circles are inessential on $A$ and can be eliminated, starting from an innermost one, by isotopy of $A$. By the same argument as in the previous paragraph, each proper arc with both endpoints on some $\bar{C_i}$ can also be eliminated.
It remains to eliminate each proper arc connecting two different $\bar{C_i}$'s. Start with such an arc $\beta$ outermost on $A$; it cuts off a disk $D_\beta$ on $A$. An isotopy of a neighborhood of $\beta$ in $D$ pushing $\beta$ across $D_\beta$ eliminates $\beta$. We then eliminate the newly created circles inessential on $T_2$ as before. Now $D \cap A$ is a disjoint union of proper arcs, each of which connects $\partial D$ and some $\bar{C_i}$.
Notice that $l_2$ is a $(q, p)$-curve, so on each $\bar{C_i}$ the number of endpoints is at least $|q|$. Hence there is only one $\bar{C_i}$, and thus $C_2$ is a core of $V_1$. It follows that $L$ is the Hopf link.
\end{proof}
\begin{prop}\label{prop:seifert}
The only Brunnian links with Seifert-fibered complements are the Hopf $n$-links.
\end{prop}
\begin{proof}
Let $L$ be a Brunnian link such that $S^3 - int N(L)$ is a Seifert manifold with singular fibers $f_i$, $i=1,...,t$. We only need to find an essential annulus in $S^3 - int N(L)$ joining different boundary components. Let $p: S^3 - N(L \cup \bigcup_{i=1}^t f_i) \rightarrow B$ be the circle bundle. Let $T_i = \partial N(C_i)$ consist of fibers for $i=1, 2$, where the $C_i$'s are components of $L$. Choose a simple arc $\alpha$ in $B$ joining $p(T_1)$ and $p(T_2)$. Then $A=p^{-1} (\alpha)$ is an annulus in $S^3 - int N(L)$ joining essential curves on $T_1$ and $T_2$. Clearly $A$ is essential, for otherwise $C_1$ would be a trivial component.
\end{proof}
We point out that Burde and Murasugi \cite{BM} (also Budney \cite[Section 3]{RB}) classified links in $S^3$ with Seifert-fibered complements. We can also use their result to obtain Proposition \ref{prop:seifert}.
We are now in a position to prove the geometric classification result for Brunnian links.
\begin{thm} \label{thm:classification}
If a Brunnian link $L$ is s-prime, untied, and not a Hopf $n$-link, then $S^3 - L$ has a hyperbolic structure.
\end{thm}
\begin{proof}
The complement space of a Brunnian link is irreducible and $\partial$-irreducible by Proposition \ref{prop:onedisk}. Then the theorem follows immediately from Proposition \ref{prop:annulus} and Thurston's Hyperbolization Theorem.
\end{proof}
The following proposition will be used in the next section.
\begin{prop}\label{prop:annulus2}
Let $L$ be a Brunnian link in an unlink-complement $U$. Then in the complement space $U-intN(L)$, there is no essential annulus joining $\partial U$ and $\partial N(L)$, and thus $U-intN(L)$ is not Seifert-fibered.
\end{prop}
\begin{proof}
This proposition can be proved in the same way as Propositions \ref{prop:annulus} and \ref{prop:seifert}.
\end{proof}
\section{Geometric decomposition of Brunnian links}\label{sec:gdbl}
In this section, we will characterize JSJ-graphs and develop a notion of canonical geometric decomposition for Brunnian links.
\subsection{JSJ-graphs of Brunnian links}\label{subset:JSJ}
We will use the terminology of \cite{RB} in this subsection.
\begin{defn}
A partially-directed tree is a \emph{planting tree} if all of its vertices and directed edges form a disjoint union of rooted trees and the endpoints of each non-directed edge are roots.
\end{defn}
We sketch a 3-dimensional planting tree in Figure \ref{fig:planting}.
\begin{figure}
\caption{A 3-dimensional graph of a planting tree.}
\label{fig:planting}
\end{figure}
\begin{thm} \label{thm:jsj}
The JSJ-graph of a Brunnian link is a planting tree.
\end{thm}
\begin{proof}
This follows easily from Theorem \ref{thm:tie} (2), the fact that every unknotted essential torus splits the link, and that the JSJ-graph of a knot is a rooted tree \cite{RB}.
\end{proof}
Specifically, in the companionship graph of a Brunnian link \cite{RB}:
(1) each leaf incident to a directed edge is labeled by a knot; each vertex that is neither a leaf nor a root is labeled by a KGL \cite{RB};
(2) the directed edges terminating at a vertex together correspond to a trivial sublink of the link labelling that vertex.
Moreover, as will be shown at the end of this section, we have
(3) each root is labeled by a Hopf $n$-link, a hyperbolic Brunnian link, or a hyperbolic Brunnian link in an unlink-complement. More precisely, if the pattern here is $( L_1 , U_l \subset S^3 )$ and $U_l = S^3 - int N(L^l)$, then the root is labeled by $L_1 \cup L^l$.
It is not difficult to verify that companionship graphs satisfying the above properties correspond bijectively to Brunnian links.
\subsection{Decomposition of s-sum and s-tie and their relationship}\label{subset:fine}
We now turn to the decomposition of Brunnian links into s-sums and s-ties.
First consider the s-sum decomposition. It is not surprising that
\begin{prop}\label{prop:sumtree}
Every Brunnian link except the Hopf link can be represented uniquely as an s-sum of s-prime Brunnian links, with respect to the pairs of removed components.
\end{prop}
In other words, the s-sum tree structures, expressed in terms of s-prime Brunnian links, correspond bijectively to Brunnian links other than the Hopf link. We point out that the conclusion does not hold for general links, in view of Seifert-fibered links such as key-chain links \cite{RB}.
We prefer to prove it directly rather than employ the Torus Decomposition Theorem. A direct proof will be given in the Appendix, and a proof via the Torus Decomposition Theorem will be outlined in the next subsection. We will treat Proposition \ref{prop:fine} and Lemma \ref{lem:isotopy} in the same manner.
Next, to explore the uniqueness of s-tie decomposition, it is necessary to note a special class of knotted essential tori in link complements.
\begin{defn}\label{def:fine}
For a link, a knotted essential torus $T_0$ in the link complement is a \emph{fine companion} if there is no knotted essential torus $T$ such that
(i) $T \subset V_0$, where $V_0$ is the solid torus bounded by $T_0$ in $S^3$.
(ii) $T$ bounds a solid torus in $V_0$ and $T$ is not parallel to $T_0$.
\end{defn}
Informally, a fine companion is an innermost knotted essential torus in a link complement. By \cite[Proposition 2.1]{RB}, a disjoint union of fine companions with no parallel pair always bounds an unlink-complement.
\begin{prop}\label{prop:fine}
For a Brunnian link, the maximal collection of fine companions with no isotopic pair is finite, and the tori are disjoint after isotopy.
\end{prop}
We point out that, for a Brunnian link, the maximal collection of knotted essential tori with no isotopic pair may be neither finite nor mutually disjoint after isotopy, in view of the possible occurrence of a Seifert-fibered piece in the JSJ-decomposition. Furthermore, the conclusion does not hold for knots: a connected sum of three prime knots is a counterexample.
Another investigation is the relationship between s-sum and s-tie. In the rest of this paper, for a Brunnian link $L$, we let $\mathcal{T}_{sum} (L)$ denote the maximal collection of disjoint tori in the s-sum decomposition, and $\mathcal{T}_{fine} (L)$ denote the maximal collection of disjoint fine companions with no parallel pair.
\begin{lem}\label{lem:isotopy}
Let $L$ be a Brunnian link, and let $\mathcal{T}_{knotted}$ and $\mathcal{T}_{unknotted}$ be collections of disjoint knotted and unknotted essential tori in the link complement, respectively. Then the tori in $\mathcal{T}_{knotted}$ are disjoint from the tori in $\mathcal{T}_{unknotted}$ after isotopy.
\end{lem}
\begin{prop}\label{prop:independent}
Let $L$ and $L'$ be Brunnian links, $C \subset L$ and $C' \subset L'$ be components, and $h$ be the map in Definition \ref{def:sum0}. Then $\mathcal{T}_{fine} (L_{C} \dagger_ {C'} L')= \mathcal{T}_{fine} (L) \sqcup h(\mathcal{T}_{fine} (L'))$.
\end{prop}
\begin{proof}
It suffices to show that (i) each knotted essential torus in the complement of $L$ is a knotted essential torus in the complement of $L_{C} \dagger_ {C'} L'$; (ii) each knotted essential torus in the complement of $L_{C} \dagger_ {C'} L'$ is inherited from the complement of either $L$ or $L'$.
Set $V=N(C)$, which contains $h(L'-C')$, and $V'=S^3 - int N(C)$, which contains $L-C$. For (i), we need only show that such a torus in $V'$ remains essential in the complement of $L_{C} \dagger_ {C'} L'$. Notice that $h(L'-C')$ is geometrically essential in $V$; a routine argument gives (i). For (ii), by the previous lemma we may assume such a torus in the complement of $L_{C} \dagger_ {C'} L'$ lies in $V'$. Clearly it is essential in the complement of $L$.
\end{proof}
\subsection{Essential tori in the complement of a Brunnian link}\label{subsec:tori}
We will give a detailed analysis of essential tori in the complements of Brunnian links. Given a Brunnian link $L$, rather than consider a single torus, we consider a maximal collection of disjoint essential tori with no parallel pair, say $\mathcal{T}_{maximal}$.
Let $\mathcal{T}_{knotted}$ and $\mathcal{T}_{unknotted}$ be the subsets of knotted and unknotted tori in $\mathcal{T}_{JSJ}$ respectively, and $\mathcal{T'}_{knotted}$ and $\mathcal{T'}_{unknotted}$ be the subsets of knotted and unknotted tori in $\mathcal{T}_{maximal}$ respectively. Set $\mathcal{T}_{canonical} = \mathcal{T}_{sum} (L) \sqcup \mathcal{T}_{fine} (L)$.
We will show the following three facts. It is convenient to consider the JSJ-graph and use the Torus Decomposition Theorem. The main point of our argument is that each JSJ-piece corresponding to a root is atoroidal; this can be verified by Theorem \ref{thm:classification} and Proposition \ref{prop:annulus2}.
(1) $\mathcal{T}_{sum} = \mathcal{T}_{unknotted}=\mathcal{T'}_{unknotted}$. In fact, since each JSJ-piece corresponding to a root is atoroidal, it is easy to show that $\mathcal{T}_{sum} \subset \mathcal{T}_{unknotted}$. By the Torus Decomposition Theorem, we directly have $\mathcal{T}_{unknotted} \subset \mathcal{T'}_{unknotted}$, while Theorem \ref{thm:sum}(2) implies $\mathcal{T'}_{unknotted} \subset \mathcal{T}_{sum}$. This also outlines a proof of Proposition \ref{prop:sumtree}, and then Lemma \ref{lem:isotopy} follows from the Torus Decomposition Theorem.
(2) $\mathcal{T}_{fine}$ labels exactly the directed edges adjacent to roots. In fact, we may choose $\mathcal{T}_{maximal}$ so that it contains $\mathcal{T}_{fine}$. Then, by the Torus Decomposition Theorem, $\mathcal{T}_{knotted} \subset \mathcal{T'}_{knotted}$. By the definition of $\mathcal{T}_{fine}$, the unlink-complement it bounds contains no knotted essential torus. On the other hand, the JSJ-piece corresponding to a root also contains no fine companion, since it is atoroidal. Thus the claim and Proposition \ref{prop:fine} follow.
(3) $\mathcal{T}_{maximal} - \mathcal{T}_{JSJ}$ can only appear in JSJ-pieces homeomorphic to complements of key-chain links. This requires a careful examination of all possible Seifert-fibered JSJ-pieces with the help of \cite{BM} (or \cite[Section 3]{RB}).
\begin{figure}
\caption{The containment relationship of the nine collections of tori.}
\label{fig:jsj}
\end{figure}
\subsection{Tree-arrow structure}
Now we develop a notion of canonical geometric decomposition, simpler than the JSJ-decomposition, for Brunnian links.
\begin{defn}\label{def:cano}
If a link is an s-sum of certain links, we draw the tree structure of the s-sum, expressed in terms of the links and removed components, on a horizontal plane, and then replace the s-tie factors by their representations in Definition \ref{def:tie0}, with the arrows drawn upward. We call such a representation a \emph{tree-arrow structure}.
Moreover, if each factor with no arrow is either a hyperbolic Brunnian link or a Hopf $n$-link with $|n| \ge 2$, and each link in the s-pattern is a hyperbolic Brunnian link in an unlink-complement, then we call this representation a \emph{Brunnian tree-arrow structure}.
\end{defn}
We sketch a 3-dimensional tree-arrow structure in Figure \ref{fig:treearrow}.
\begin{figure}
\caption{A 3-dimensional graph of a tree-arrow structure.}
\label{fig:treearrow}
\end{figure}
We present two equivalent descriptions of Brunnian tree-arrow structure.
Given a Brunnian link $L$ other than the Hopf link, $\mathcal{T}_{sum} (L)$ decomposes it into s-prime factors $L_1, ... ,L_n$. Take the s-pattern of $L_i$ corresponding to $\mathcal{T}_{fine} (L_i)$ whenever $\mathcal{T}_{fine} (L_i) \ne \emptyset$. We then obtain a Brunnian tree-arrow structure, by Theorem \ref{thm:classification}, Proposition \ref{prop:annulus2}, and Thurston's Hyperbolization Theorem. Moreover, $\mathcal{T}_{fine} (L) = \sqcup \mathcal{T}_{fine} (L_i)$.
Combining (1) and (2) of the previous subsection, we see that a Brunnian tree-arrow structure is essentially the companionship graph with each maximal rooted tree replaced by its representative knots.
Conversely, every Brunnian tree-arrow structure gives a Brunnian link, by Theorem \ref{thm:sum} (1) and Theorem \ref{thm:tie} (1). From the discussion in the previous two paragraphs, we may assert:
\begin{thm}\label{th:cano}
Brunnian tree-arrow structures correspond bijectively to Brunnian links.
\end{thm}
\section{Untying Brunnian links}\label{sec:untie}
Hyperbolic Brunnian links in unlink-complements occur as building blocks in the Brunnian tree-arrow structure. However, Example \ref{example:unlinkcomplement} indicates that Brunnian links in unlink-complements may be too diverse in general. This motivates the following operation, which transforms a Brunnian link in an unlink-complement into one in $S^3$.
\begin{defn}\label{def:untie}
Let $L$ be a Brunnian link in $S^3$, and $T$ be a torus in the complement $S^3-L$, which bounds a solid torus $V$ such that $L \subset V$.
We perform \emph{untying $T$ for $L$} by re-embedding $V$ into $S^3$ in the unknotted way via a map $h$ such that the longitude of $V$ is mapped to the meridian of $S^3-int\, h(V)$, and then
type-1: taking $h(L) \sqcup C$ if $h(L)$ is trivial in $S^3$, where $C$ is the core of $S^3-h(V)$;
type-0: taking $h(L)$ if $h(L)$ is nontrivial in $S^3$.
\end{defn}
It is straightforward to show that the resulting link is Brunnian. Since untying an inessential torus gives back the original link, we focus only on essential tori.
Set $\mathcal{T}_{fine} (L)= \{T_1 ,T_2 , ... ,T_l \}$ for a Brunnian link $L$. The basic question is how the order of untying the tori affects the resulting link. Three questions arise:
(1) Is the type of untying invariant under the change of order?
(2) After untying $T_1$, must $T_2$ remain essential?
(3) After untying all fine companions of $L$, is the resulting link untied?
They all have negative answers as will be shown in Examples \ref{example:untie1}, \ref{example:untie2}, and \ref{example:untie3} respectively.
\begin{example}\label{example:untie1}
See Figure \ref{fig:order}. If we untie $T_1$ before $T_2$, then $T_1$ is untied in type-0 and $T_2$ in type-1. On the other hand, if we untie $T_2$ first, then we should untie $T_2$ in type-0.
\end{example}
\begin{figure}
\caption{$(C_1 \sqcup C_2 , U_2 \subset S^3 )$ is an s-pattern.}
\label{fig:order}
\end{figure}
\begin{example}\label{example:untie2}
See Figure \ref{fig:t1t2}. After untying $T_1$, the torus $T_2$ becomes inessential.
\end{example}
\begin{figure}
\caption{$(L_5, U_2 \subset S^3 )$ is an s-pattern. $T_1$ and $T_2$ are the boundaries of regular neighborhoods of the two red circles.}
\label{fig:t1t2}
\end{figure}
\begin{example}\label{example:untie3}
See Figure \ref{fig:tienew}. After untying all fine companions, a new essential knotted torus emerges.
\end{example}
\begin{figure}
\caption{The right figure is an s-pattern $(L, U_1 \subset S^3 )$. The boundary of a regular neighborhood of the red circle is $T$. After untying $T$, i.e. filling in the red circle, the link is still an s-tie.}
\label{fig:tienew}
\end{figure}
Fortunately, we have partially affirmative answers to questions (1) and (3).
\begin{prop}\label{prop:finite}
(1) Let $T_1 ,T_2 , ... ,T_l $ be fine companions of a Brunnian link $L$. Suppose $L_0$ is the resulting link after untying $T_1 ,T_2 , ... ,T_{l-1}$ in order. If we should untie $T_l$ for the original $L$ in type-1, then we should also untie $T_l$ for $L_0$ in type-1.
(2) For a Brunnian link, if at each step we untie an essential knotted torus of the link obtained so far, then the procedure stops after finitely many steps.
\end{prop}
\begin{proof}
(1). Let $V_l$ be the solid torus bounded by $T_l$, and let $U_0$ be the unlink-complement bounded by $T_1 \cup T_2 \cup ... \cup T_{l-1}$. It is equivalent to prove that if the re-embedding $h$ of $V_l$ makes $h(L)$ trivial, then the re-embedding with the same choice of longitude on $T_l$ makes $L_0$ trivial. Untie $T_l$ first. By Theorem \ref{thm:tie} (1), if $h(L)$ is trivial in $S^3$, then it is trivial in $U_0$.
(2). At each step, take all the link complement pieces in the Brunnian tree-arrow structure of the link obtained so far. There are no essential knotted tori in the Seifert pieces, by Proposition \ref{prop:annulus} and Proposition \ref{prop:annulus2}. For each hyperbolic piece, untying either preserves or decreases the hyperbolic volume. The set of volumes of hyperbolic 3-manifolds is a well-ordered subset of $\mathbb{R}$, in fact of order type $\omega^\omega$, see \cite{Th}, so there is no infinite strictly decreasing sequence of volumes. Hence the procedure stops after finitely many steps.
\end{proof}
We finally remark that the main theorem of \cite{MS} implies that, in Definition \ref{def:untie}, the re-embedding making $h(L)$ trivial is unique. Using this theorem inductively, with the help of the s-tie construction, we confirm that Brunnian links in unlink-complements may always be drawn as in Figure \ref{fig:unlinkcomplement}. We omit the details here.
\begin{prop}
For every Brunnian link $L$ in an unlink-complement $U_n$, there exists an embedding $h: U_n \rightarrow S^3$ (actually infinitely many) such that $S^3 - int\, h(U_n)$ is a regular neighborhood of an unlink and $h(L)$ is Brunnian in $S^3$.
\end{prop}
In conclusion, the study of Brunnian links in unlink-complements can be reduced to that of Brunnian links in $S^3$, as we had hoped. However, the following problem is basic but seems challenging.
\begin{problem}
Given a Brunnian link $L$, find or classify all the unlinks not intersecting $L$ such that $L$ is Brunnian in their complements.
\end{problem}
In view of Theorem \ref{thm:classification}, we ask how likely a Brunnian link is to be hyperbolic.
Note that there are different models for random links. For random links via random braids or via random bridge decompositions, a generic link is hyperbolic \cite{IchiharaMa, Ito, Ma2014}. But surprisingly, in the more combinatorial model, that is, from the viewpoint of crossing number, hyperbolic links are not generic \cite{Malyutin2018, Malyutin2019}. Note that Brunnian links are determined by their complements when there are at least three components \cite{MS}. Inspired by \cite{Malyutin2018}, we propose the following problem.
\begin{problem}Let $B(n)$ be the number of Brunnian links with $n$ or fewer crossings, and $B_{h}(n)$ be the number of hyperbolic Brunnian links with $n$ or fewer crossings. Then
$$\limsup_{n \rightarrow \infty} \frac{B_{h}(n)}{B(n)}=a, $$
$$\liminf_{n \rightarrow \infty} \frac{B_{h}(n)}{B(n)}=b.$$
Trivially, $0 \leq b \leq a \leq 1$; we believe that $a=b$. Is $b=0$ or $a <1$?
\end{problem}
\section{Appendix}
In this appendix, we prove Proposition \ref{prop:sumtree}, Proposition \ref{prop:fine}, and Lemma \ref{lem:isotopy}.
\begin{lem}\label{lem: hopf}
Let $L = L^1 {}_{C_1} \dagger_{C_2} L^2$ be a Brunnian link, and suppose $L^i- C_i$ is contained in the solid torus $V_i$ bounded by the decomposing torus $T$, for $i=1, 2$. If neither $L^1$ nor $L^2$ is the Hopf link, then
(i) any meridian disk in $V_i$ intersects $L^i - C_i$ in at least two points, for $i=1, 2$;
(ii) if a component of $L^1 - C_1$ bounds a disk that intersects $T$ in a meridian of $V_2$, then the disk intersects $T$ in at least two meridians of $V_2$.
\end{lem}
\begin{proof}
Note that if a link has two unknotted components, one of which bounds a disk intersecting the other in exactly one point, then the link is the Hopf link.
(i) Without loss of generality, suppose a meridian disk $D$ of $V_1$ intersects $L^1 - C_1$ in only one point, say $p$, and $p$ lies on a component $C \subset L^1$. Then $\partial D$ and $C$ form the Hopf link. So, by the Brunnian property, $L^1 - C_1 =C$. It follows that $T$ is parallel to $\partial N(C)$.
(ii) Assume a component $C \subset L^1 - C_1$ bounds a disk $D$ such that only one of the intersection circles, say $\tilde{C} \subset D \cap T$, is a meridian of $V_2$. Then all the other circles are trivial on $T$. Eliminating all trivial circles by surgery, starting from an innermost one on $T$, we get a new disk $D'$ intersecting the core of $V_2$ in exactly one point. Hence this core and $C$ form the Hopf link, and thus $L^1$ is the Hopf link; this is impossible.
\end{proof}
\begin{proof}[Proof of Proposition \ref{prop:sumtree}]
The proof will be split into two parts, existence and uniqueness.
{\sc Existence}. Given a Brunnian link $L$, an s-sum decomposition with no Hopf link factor is equivalent to a collection of disjoint unknotted essential tori $\{T_1, T_2, ... ,T_n\}$ in $S^3 -L$. Each $T_i$ decomposes $L$ as an s-sum. If some of the solid tori bounded by the other tori are replaced by their cores, $T_i$ still decomposes the resulting Brunnian link as an s-sum. There are no parallel tori in the collection, since the Hopf link is not s-prime. We add these tori into $S^3$ in succession; each step gives an s-sum decomposition.
Now, at a given step, suppose $L$ is decomposed into $L_1 (C_1)$ and $L_2 (C_2)$. If both $L_1$ and $L_2$ have more than two components, we call it a \emph{big step}; otherwise, a \emph{small step}. In each big step, $L_1$ and $L_2$ both have fewer components than $L$, so there are finitely many big steps in total. In each small step, without loss of generality, we may assume $L_2$ has two components and $L_1$ has the same number of components as $L$.
Consider the disks bounded by a component of $L$ and intersecting the other components. Take the least number of intersection points among all such disks. The \emph{total intersection number} of $L$ is the sum of these numbers over all components of $L$. By Lemma \ref{lem: hopf}, the total intersection number of $L_1$ is less than that of $L$, and the same holds when we decompose $L_2$. So there are finitely many small steps between two big steps. It follows that there are finitely many steps in total.
{\sc Uniqueness}. Suppose $\{T_1, T_2, ... ,T_n\}$ and $\{T'_1, T'_2, ... ,T'_l\}$ are two maximal collections of disjoint tori, both decomposing $L$ into s-prime Brunnian links. We consider each $T_i \cap T'_j$, a union of disjoint circles.
First consider circles inessential on both $T_i$ and $T'_j$. Choose an innermost such circle $C_0$ on $T_i$, bounding $D_i$ and $D'_j$ on $T_i$ and $T'_j$ respectively. Then $D_i$ and $D'_j$ form a sphere. One of the 3-balls bounded by this sphere is \emph{empty}, i.e. it does not intersect $L$, since $L$ is not split. So we can isotope $T'_j$, pushing $D'_j$ across $D_i$, to eliminate $C_0$. We may thus assume there is no circle inessential on both $T_i$ and $T'_j$.
Next consider circles inessential in exactly one of $T_i$ and $T'_j$, say $T'_j$. Choose an innermost such circle $C'$, bounding a disk $D'$ on $T'_j$. Then $D'$ is a meridian disk of one of the solid tori bounded by $T_i$, which is impossible by (i) $\Rightarrow$ (ii) in Proposition \ref{prop:sum}. So we may assume the intersection circles are essential on both $T_i$ and $T'_j$, and thus are parallel.
Now the essential parallel circles cut both $T_i$ and $T'_j$ into annuli. Recall that any properly embedded annulus in a solid torus with essential boundary is $\partial$-parallel unless the boundaries are meridians. We may choose a solid torus $V$ bounded by $T_i$ such that the circles are not meridians of $V$. Consider an annulus $A' \subset T'_j$ with boundary on $T_i$, outermost in $V$. Then there is an annulus $A \subset T_i$ with the same boundary, and $A \cup A'$ bounds a compression solid torus $V_1$ for $A'$ in $V$. It follows that $V - V_1$ is also a solid torus.
If $V_1$ is empty, isotope $T'_j$ to delete $\partial A$. Otherwise, denote by $L_1$ the proper sublink in $V_1$, which is geometrically essential in $V_1$. We claim $V - V_1$ is empty. Assume otherwise; then the proper sublink $L_2$ in $V - V_1$ is also geometrically essential. Notice that the sublink in $V$ and the core of $S^3 - V$ form a Brunnian link. Consequently, after deleting $L_1$, the components of $L_2$ bound mutually disjoint disks, contradicting that $L_2$ is geometrically essential in $V - V_1$.
If $V_1$ is knotted, its core being a torus knot, then $L_1$ is already nontrivial in $S^3$, a contradiction.
So $V_1$ is unknotted. This implies that the circles on $T_i$ are either $(1, n)$-curves or $(n, 1)$-curves.
{\sc Case~1}. They are $(1, n)$-curves. Then $T_i - A$ is also parallel to $A'$ in $V - V_1$. Since $V - V_1$ is empty, we can isotope $T_i$ across $A'$ to delete $\partial A'$.
{\sc Case~2}. They are $(n, 1)$-curves. Then we can re-choose $V$ to be the other solid torus bounded by $T_i$ to reduce to the first case.
Finally, we have that $T_1 \cup T_2 \cup \cdots \cup T_n$ and $T'_1 \cup T'_2 \cup \cdots \cup T'_l$ are disjoint. If some $T'_j$ were not parallel to any $T_i$, we could add it to the first collection, contradicting the maximal choice. Thus the tori are pairwise parallel up to reordering, and $n=l$.
\end{proof}
\begin{proof}[Proof of Proposition \ref{prop:fine}]
Suppose there are two collections $\mathcal{T}$ and $\mathcal{T'}$ of mutually disjoint fine companions, neither containing parallel tori. By Haken's finiteness theorem, the number of tori in such collections is bounded.
The remainder of the proof is split into two steps. (1): After isotopy, $\mathcal{T}$ and $\mathcal{T'}$ are disjoint. (2): If the two collections are maximal, then they coincide after isotopy.
(1). Consider, without loss of generality, the intersection of $T_1 \in \mathcal{T}$ and $T'_1 \in \mathcal{T'}$, and denote by $V$ and $V'$ the solid tori bounded by them, respectively. Using the same argument as in the proof of uniqueness in Proposition \ref{prop:sumtree}, we can easily eliminate intersection circles inessential on $T_1$ or $T'_1$.
Consider an annulus $A' \subset T'_1$ in $V$ with boundary on $T_1$, outermost in $V$. Then there is an annulus $A$ on $T_1$ with the same boundary.
{\sc Case~1}. $A'$ is $\partial$-parallel. Then we may assume $A \cup A'$ bounds a compression solid torus $V_1$ for $A'$ in $V$. If $V_1$ is empty, we delete $\partial A'$ as in the proof of Proposition \ref{prop:sumtree}. Suppose $V_1$ is not empty.
First we claim that $V - V_1$ is empty, and thus $L \subset V_1$. Otherwise, the proper sublink in $V - V_1$ would be geometrically essential and thus nontrivial in $S^3$, contradicting that every proper sublink of the Brunnian link $L$ is trivial. Since $L$ is geometrically essential in $V$, it is geometrically essential in $V_1$. Now we discuss the circles $\partial A$ on $T_1$.
{\sc Subcase~1.1}. $\partial A$ are $(0,1)$-curves, namely, meridians of $V$. This is impossible, since if there were a compressing disk for $T'_1$ in $V$, then $T_1$ would not be an essential torus.
{\sc Subcase~1.2}. $\partial A$ are $(1, n)$-curves. Then $T_1 - A$ is also parallel to $A'$ in $V - V_1$. We can isotope $T_1$ across $A'$ to eliminate $\partial A'$.
{\sc Subcase~1.3}. Other cases. Then surgery along $A$ produces, together with $A'$, a new torus in $V$ which is not parallel to $T_1$, contradicting that $T_1$ is fine.
{\sc Case~2}. $A'$ is not $\partial$-parallel. Then $\partial A$ are $(0, 1)$-curves and $A' \cup (T_1 - A)$ bounds a knotted solid torus $\tilde{V}$ in $V$, with the same meridian. So the torus $A' \cup (T_1 - A)$ is essential and thus $L \subset \tilde{V}$. Surgery along $T_1 - A$ produces, together with $A'$, a new torus in $V$ which is not parallel to $T_1$, contradicting that $T_1$ is fine.
In summary, $T_1$ and $T'_1$ can be isotoped to be disjoint. Step by step, we can make the fine companions in the two collections disjoint.
(2). Since the two collections are both maximal, their tori must be pairwise parallel.
\end{proof}
\begin{proof}[Proof of Lemma \ref{lem:isotopy}]
Consider the intersection circles of $T_1 \cup T_2 \cup \cdots \cup T_n$ and $T'_1 \cup T'_2 \cup \cdots \cup T'_l$, the unions of tori in the two collections. Inessential circles can be deleted, owing to the facts that $L$ is not split and that the sublinks in the corresponding solid tori are geometrically essential. Notice that the proof of uniqueness in Proposition \ref{prop:sumtree} did not use that the $T'_i$ are unknotted tori, so the same argument applies here.
\end{proof}
\end{document} |
\begin{document}
\newtheorem{theorem}{Theorem}
\newtheorem{proposition}{Proposition}
\newtheorem{lemma}{Lemma}
\newtheorem{corollary}{Corollary}
\newtheorem{conjecture}{Conjecture}
\numberwithin{equation}{section}
\title{\bf Gaps between Primes in Beatty Sequences}
\author{Roger C. Baker and Liangyi Zhao}
\date{\today}
\maketitle
\begin{abstract}
In this paper, we study the gaps between primes in Beatty sequences following the methods in the recent breakthrough of \cite{May}.
\end{abstract}
2010 Mathematics Subject Classification: 11B05, 11L20, 11N35
\section{Introduction}
Let $p_n$ denote the $n$-th prime and $t$ a natural number with $t \geq 2$. It has long been conjectured that
\[ \liminf_{n \to \infty} \left( p_{n+t-1} - p_n \right) < \infty . \]
This was established recently for $t=2$ by Y. Zhang \cite{Zhang} and shortly after for all $t$ by J. Maynard \cite{May}. Maynard showed that for $N > C(t)$, the interval $[N, 2N)$ contains a set $\mathcal{S}$ of $t$ primes of diameter
\[ D ( \mathcal{S} ) \ll t^3 \exp (4t) , \]
where
\[ D ( \mathcal{S} ) : = \max \{ n : n \in \mathcal{S} \} - \min \{ n : n \in \mathcal{S} \} . \]
In the present paper, we adapt Maynard's method to prove a similar result where $\mathcal{S}$ is contained in a prescribed set $\mathcal{A}$ (see Theorem~\ref{gentheo}). We then work out applications (Theorems \ref{bea1} and \ref{bea2}) to a section of a Beatty sequence, so that
\[ \mathcal{A} = \{ [ \alpha m + \beta ] : m \geq 1 \} \cap [ N, 2N ) . \]
The number $\alpha$ is assumed to be irrational with $\alpha > 1$, while $\beta$ is a given real number. We require an auxiliary result (Theorem~\ref{beabomvino}) for the estimation of errors of the form
\[ \sum_{\mathclap{\substack{N \leq n < N' \\ \gamma n \in I \bmod{1} \\ n \equiv a \bmod{q}}}} \Lambda(n) - \frac{(N'-N) |I|}{\varphi(q)} , \]
where $I$ is an interval of length $|I| < 1$ and $\gamma = \alpha^{-1}$. Theorem~\ref{beabomvino} is of ``Bombieri-Vinogradov type''; for completeness, we include a result of Barban-Davenport-Halberstam type for these errors (Theorem~\ref{beabardavhal}). \newline
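As a concrete illustration (our example, not one singled out in the text): $\alpha = \sqrt{2}$, $\beta = 0$ satisfies the hypothesis of Theorem~\ref{bea1}, since $\gamma = 1/\sqrt{2}$ is a quadratic irrational and hence badly approximable, so that $\| \gamma r \| \gg r^{-1} \gg r^{-3}$.

```latex
% Illustration: the Beatty sequence for alpha = sqrt(2), beta = 0.
\[
  \{ [\sqrt{2}\, m] : m \geq 1 \} = \{ 1,\ 2,\ 4,\ 5,\ 7,\ 8,\ 9,\ 11,\ 12,\ \ldots \},
  \qquad
  \mathcal{A} = \{ [\sqrt{2}\, m] : m \geq 1 \} \cap [N, 2N) .
\]
```

Theorem~\ref{bea1} then produces, for all sufficiently large $N$, a set of $t$ primes of bounded diameter inside this $\mathcal{A}$.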
We note that Chua, Park and Smith \cite{CPS} have already used Maynard's method to prove the existence of infinitely many sets of $k$ primes of diameter at most $C = C(\alpha, k)$ in a Beatty sequence $[\alpha n]$, where $\alpha$ is irrational and of finite type. However, no explicit bound for $C$ is given. \newline
In this paragraph, we introduce some notation to be used throughout this paper. We suppose that $t \in \mathbb{N}$, $N \geq C(t)$ and write $\mathcal{L} = \log N$,
\[ D_0 = \frac{\log \mathcal{L}}{\log \log \mathcal{L}} . \]
Moreover, $(d,e)$ and $[d,e]$ stand for the greatest common divisor and the least common multiple of $d$ and $e$, respectively. $\tau(q)$ and $\tau_k(q)$ are the usual divisor functions. $\| x \|$ is the distance between $x \in \mathbb{R}$ and the nearest integer. Set
\[ P(z) = \prod_{p < z} p \; \mbox{with} \; z \geq 2 \; \; \; \mbox{and} \; \; \; \psi(n,z) = \left\{ \begin{array}{cl} 1 & \mbox{if} \; (n, P(z)) = 1, \\ 0 & \mbox{otherwise}. \end{array} \right. \]
$X (E; n)$ stands for the indicator function of a set $E$ and $\mathbb{P}$ for the set of primes. Let $\varepsilon$ be a positive constant, sufficiently small in terms of $t$. The implied constant in ``$\ll$'', when it appears, may depend on $\varepsilon$ and on $A$ (if $A$ appears in the statement of the result). ``$F \asymp G$'' means both $F \ll G$ and $G \ll F$ hold. As usual, $e(y) = \exp(2\pi i y)$, and $o(1)$ indicates a quantity tending to 0 as $N$ tends to infinity. Furthermore,
\[ \sum_{\chi \bmod{q}} , \; \; \; \; \; \; \sideset{}{'} \sum_{\chi \bmod{q}} , \; \; \; \; \; \; \sideset{}{^{\star}} \sum_{\chi \bmod{q}} \]
denote, respectively, a sum over all Dirichlet characters modulo $q$, a sum over nonprincipal characters modulo $q$ and a sum restricted to primitive characters, other than $\chi = 1$, modulo $q$. We write $\hat{\chi}$ for the primitive character that induces $\chi$. A set $\mathcal{H} = \{ h_1 , \cdots, h_k \}$ of distinct non-negative integers is {\it admissible} if for every prime $p$, there is an integer $a_p$ such that $a_p \not\equiv h \pmod{p}$ for all $h \in \mathcal{H}$. \newline
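To make the definition of admissibility concrete, here is a small worked example (ours, not the paper's):

```latex
The set $\mathcal{H} = \{0, 2, 6\}$ is admissible: for $p = 2$ take $a_2 = 1$
(every element of $\mathcal{H}$ is even); for $p = 3$ take $a_3 = 1$ (the
residues of $\mathcal{H}$ modulo $3$ are $0$ and $2$); and for $p \geq 5$ the
three elements of $\mathcal{H}$ cannot occupy all $p$ residue classes. By
contrast, $\{0, 2, 4\}$ is not admissible, since modulo $3$ its residues are
$0$, $2$, $1$, covering every class.
```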
In Sections 1 and 2, let $\theta$ be a positive constant. Let $\mathcal{A}$ be a subset of $[N, 2N) \cap \mathbb{N}$. Suppose that $Y>0$ and $Y/q_0$ is an approximation to the cardinality of $\mathcal{A}$, $\# \mathcal{A}$. Let $q_0$, $q_1$ be given natural numbers not exceeding $N$ with $(q_1, q_0P(D_0)) =1$ and $\varphi(q_1)=q_1 (1+o(1))$. Suppose that $n \equiv a_0 \pmod{q_0}$ for all $n \in \mathcal{A}$ with $(a_0,q_0)=1$. An admissible set $\mathcal{H}$ is given with
\[ h \equiv 0 \pmod{q_0} \; (h \in \mathcal{H}) \]
and
\begin{equation} \label{divcond}
p | h-h', \; \mbox{with} \; h, h' \in \mathcal{H}, \; h \neq h', p > D_0 \; \mbox{implies} \; p | q_0.
\end{equation}
We now state ``regularity conditions'' on $\mathcal{A}$.
\begin{enumerate}[(I)]
\item We have
\begin{equation} \label{regcond1}
\sum_{\mathclap{\substack{q \leq N^{\theta} \\ (q, q_0q_1)=1}}} \mu^2(q) \tau_{3k}(q) \left| \sum_{n \equiv a_q \bmod{qq_0}} X ( \mathcal{A} ; n) - \frac{Y}{qq_0} \right| \ll \frac{Y}{q_0 \mathcal{L}^{k+\varepsilon}}
\end{equation}
(any $a_q \equiv a_0 \pmod{q_0}$).
\item There are nonnegative functions $\varrho_1, \cdots , \varrho_s$ defined on $[N, 2N)$ (with $s$ a constant, $0 < a \leq s$) such that
\begin{equation} \label{regcond2}
X \left( \mathbb{P}; n \right) \geq \varrho_1(n) + \cdots + \varrho_a(n) - \left( \varrho_{a+1} (n) + \cdots + \varrho_s (n) \right)
\end{equation}
for $n \in [N, 2N)$. There are positive $Y_{g,m}$ ($g=1, \cdots, s$ and $m= 1, \cdots , k$) with
\[ Y_{g,m} = Y \left( b_{g,m} + o(1) \right) \mathcal{L}^{-1} . \]
where the positive constants $b_{g,m}$ satisfy
\begin{equation} \label{bbound}
b_{1,m} + \cdots + b_{a,m} - \left( b_{a+1,m} + \cdots + b_{s,m} \right) \geq b > 0 ,
\end{equation}
for $m=1, \cdots, k$. Moreover, for $m \leq k$, $g \leq s$ and any $a_q \equiv a_0 \pmod{q_0}$ with $(a_q,q)=1$ defined for $q \leq N^{\theta}$, $(q,q_0q_1)=1$, we have
\begin{equation} \label{regcond3}
\sum_{\substack{q \leq N^{\theta} \\ (q,q_0q_1)=1}} \mu^2(q) \tau_{3k} (q) \left| \sum_{n \equiv a_q \bmod{qq_0}} \varrho_g(n) X \left( \left( \mathcal{A} + h_m \right) \cap \mathcal{A} ; n \right) - \frac{Y_{g,m}}{\varphi(q_0q)} \right| \ll \frac{Y}{\varphi(q_0) \mathcal{L}^{k+\varepsilon}} .
\end{equation}
Finally, $\varrho_g(n) =0$ unless $(n, P (N^{\theta/2})) = 1$.
\end{enumerate}
\begin{theorem} \label{gentheo}
Under the above hypotheses on $\mathcal{H}$ and $\mathcal{A}$, there is a set $\mathcal{S}$ of $t$ primes in $\mathcal{A}$ with diameter not exceeding $D(\mathcal{H})$, provided that $k \geq k_0 (t,b,\theta)$ ($k_0$ is defined at the end of this section).
\end{theorem}
In proving Theorem~\ref{bea1}, we shall take $s=a=1$, $q_0=q_1=1$, $\varrho_1(n) = X(\mathbb{P}; n)$. A more complicated example of the inequality \eqref{regcond2}, with $s=5$, occurs in proving Theorem~\ref{bea2}, but again, $q_0=q_1=1$. We shall consider elsewhere a result in which $q_0$, $q_1$ are large. Maynard's Theorem 3.1 in \cite{May2} overlaps with our Theorem~\ref{gentheo}, but neither subsumes the other.
\begin{theorem} \label{bea1}
Let $\alpha >1$, $\gamma= \alpha^{-1}$ and $\beta \in \mathbb{R}$. Suppose that
\begin{equation} \label{alphacond}
\| \gamma r \| \gg r^{-3}
\end{equation}
for all $r \in \mathbb{N}$. Then for any $N > c_1(t, \alpha, \beta)$, there is a set of $t$ primes of the form $[\alpha m + \beta]$ in $[N, 2N)$ having diameter
\[ < C_2 \alpha (\log \alpha + t) \exp (8t) , \]
where $C_2$ is an absolute constant.
\end{theorem}
\begin{theorem} \label{bea2}
Let $\alpha$ be irrational with $\alpha >1$ and $\beta \in \mathbb{R}$. Let $r \geq C_3 (\alpha, \beta)$ and
\[ \left| \frac{1}{\alpha} - \frac{b}{r} \right| < \frac{1}{r^2} , \; b \in \mathbb{N} , \; (b,r) =1 . \]
Let $N = r^2$. There is a set of $t$ primes of the form $[ \alpha n + \beta]$ in $[N, 2N)$ having diameter
\[ < C_4 \alpha \left( \log \alpha + t \right) \exp ( 7.743 t) , \]
where $C_4$ is an absolute constant.
\end{theorem}
Theorem~\ref{bea2} improves Theorem~\ref{bea1} in that $\alpha$ can be any irrational number in $(1,\infty)$ and $7.743 < 8$, but we lose the arbitrary placement of $N$. \newline
Turning our attention to our theorem of Bombieri-Vinogradov type, we write
\[ E (N, N', \gamma, q, a ) = \sup_I \left| \sum_{\substack{ N \leq n < N' \\ \gamma n \in I \bmod{1} \\ n \equiv a \bmod{q}}} \Lambda(n) - \frac{(N'-N)|I|}{\varphi(q)} \right| . \]
Here, $I$ runs over intervals of length $|I| < 1$.
\begin{theorem} \label{beabomvino}
Let $A>0$, $\gamma$ be a real number and $b/r$ a rational approximation to $\gamma$,
\begin{equation} \label{gammaratapprox}
\left| \gamma - \frac{b}{r} \right| \leq \frac{1}{r N^{3/4}} , \; N^{\varepsilon} \leq r \leq N^{3/4}, \; (b,r)=1 .
\end{equation}
Then for $N < N' \leq 2N$ and any $A>0$, we have
\begin{equation} \label{beabomvinoeq}
\sum_{q \leq \min (r , N^{1/4}) N^{-\varepsilon}} \max_{(a,q)=1} E (N, N', \gamma, q, a) \ll N \mathcal{L}^{-A} .
\end{equation}
\end{theorem}
Our Barban-Davenport-Halberstam type result is the following.
\begin{theorem} \label{beabardavhal}
Let $A>0$ and $\gamma$ be an irrational number. Suppose that for each $\eta > 0$ and sufficiently large $r \in \mathbb{N}$, we have
\begin{equation} \label{gammatyperes}
\| \gamma r \| > \exp (- r^{\eta} ) .
\end{equation}
Let $N \mathcal{L}^{-A} \leq R \leq N$. Then for $N < N' \leq 2N$,
\begin{equation} \label{beabardavhaleq}
\sum_{q \leq R} \sum_{\substack{a=1 \\ (a,q)=1}}^q E(N, N', \gamma, q, a)^2 \ll NR \mathcal{L} ( \log \mathcal{L} )^2 .
\end{equation}
\end{theorem}
There are weaker results overlapping with Theorems~\ref{beabomvino} and \ref{beabardavhal} by W. D. Banks and I. E. Shparlinski \cite{BaSh}. \newline
Let $\gamma$ be irrational, $\eta > 0$ and suppose that
\[ \| \gamma r \| \leq \exp \left( - r^{\eta} \right) \]
for infinitely many $r\in \mathbb{N}$. Then \eqref{beabardavhaleq} fails (so Theorem~\ref{beabardavhal} is optimal in this sense). To see this, take $N = \exp (r^{\eta/2})$, $N'=2N$, $R = N \mathcal{L}^{-8/\eta}$. We have, for some $u \in \mathbb{Z}$,
\[ \left| \gamma n - \frac{un}{r} \right| \leq 2N r^{-1} \exp ( - r^{\eta} ) < \frac{1}{4r} , \; (n \leq 2N) . \]
From this, we infer that
\[ \gamma n \not\in \left( \frac{1}{4r}, \frac{3}{4r} \right) \pmod{1} , \; (n \leq 2N) .\]
So
\[ E (N, 2N, \gamma, q,a)^2 \geq \frac{N^2}{4r^2\varphi(q)^2} , \; ( q \leq R, (a,q)=1) . \]
Therefore,
\[ \sum_{q \leq R} \sum_{\substack{a=1 \\ (a,q)=1}}^q E(N, 2N, \gamma, q, a)^2 \geq \frac{N^2}{4r^2} \sum_{q \leq R} \frac{1}{\varphi(q)} > \frac{N^2}{r^2} = NR \mathcal{L}^{4/\eta} . \]
We now turn to the definition of $k_0(t,b,\theta)$. For a smooth function $F$ supported on
\[ \mathcal{R}_k = \left\{ (x_1, \cdots, x_k) \in [0,1]^k : \sum_{i=1}^k x_i \leq 1 \right\} , \]
set
\[ I_k(F) = \int\limits_0^1 \cdots \int\limits_0^1 F(t_1, \cdots, t_k)^2 \mathrm{d} t_1 \cdots \mathrm{d} t_k , \]
and
\[ J^{(m)}_k(F) =\int\limits_0^1 \cdots \int\limits_0^1\left( \int\limits_0^1 F(t_1, \cdots, t_k) \mathrm{d} t_m \right)^2 \mathrm{d} t_1 \cdots \mathrm{d} t_{m-1} \mathrm{d} t_{m+1} \cdots \mathrm{d} t_k \]
for $m=1, \cdots, k$. Let
\[ M_k = \sup_F \frac{\displaystyle{\sum_{m=1}^k J^{(m)}_k (F)}}{I_k(F)} , \]
where the $\sup$ is taken over all functions $F$ specified above and subject to the conditions $I_k(F) \neq 0$ and $J^{(m)}_k(F) \neq 0$ for each $m$. Sharpening a result of Maynard \cite{May}, D. H. J. Polymath \cite{polym} gives the lower bound
\begin{equation} \label{mklowerbound}
M_k \geq \log k + O(1) .
\end{equation}
Now let $k_0 (t,b, \theta)$ be the least integer $k$ for which
\begin{equation} \label{k0defcond}
M_k > \frac{2t-2}{b\theta} .
\end{equation}
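As a rough illustration of the size of $k_0$ (our back-of-envelope computation, with $C$ denoting the unspecified $O(1)$ constant in \eqref{mklowerbound}):

```latex
Writing \eqref{mklowerbound} as $M_k \geq \log k - C$, condition
\eqref{k0defcond} holds as soon as
\[
  \log k - C > \frac{2t-2}{b\theta},
  \qquad \text{i.e.} \qquad
  k > e^{C} \exp\!\left( \frac{2t-2}{b\theta} \right),
\]
so that
\[
  k_0(t, b, \theta) \ll \exp\!\left( \frac{2t-2}{b\theta} \right).
\]
```

For instance, $b = 1$ with $\theta$ close to $1/2$ gives $k_0 \ll \exp(4t)$, which is the source of the exponential-in-$t$ factors in the diameter bounds of Theorems~\ref{bea1} and \ref{bea2}.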
\section{Deduction of Theorem~\ref{gentheo} from Two Propositions}
We first write down some lemmas that we shall need later.
\begin{lemma} \label{GGPYlem}
Let $\kappa$, $A_1$, $A_2$, $L > 0$. Suppose that $\gamma$ is a multiplicative function satisfying
\[ 0 \leq \frac{\gamma(p)}{p} \leq 1-A_1 \]
for all prime $p$ and
\[ -L \leq \sum_{w \leq p \leq z} \frac{\gamma(p) \log p}{p} - \kappa \log \frac{z}{w} \leq A_2 \]
for any $w$ and $z$ with $2 \leq w \leq z$. Let $g$ be the totally multiplicative function defined by
\[ g(p) = \frac{\gamma(p)}{p-\gamma(p)} . \]
Suppose that $G: [0,1] \to \mathbb{R}$ is a piecewise differentiable function with
\[ |G(y)| + |G'(y)| \leq B \]
for $0 \leq y \leq 1$ and
\begin{equation} \label{Sdef}
S = \prod_p \left( 1- \frac{\gamma(p)}{p} \right)^{-1} \left( 1- \frac{1}{p} \right)^{\kappa} .
\end{equation}
Then for $z >1$, we have
\[ \sum_{d < z} \mu(d)^2 g(d) G \left( \frac{\log d}{\log z} \right) = \frac{S (\log z)^{\kappa}}{\Gamma(\kappa)} \int_0^1 t^{\kappa-1} G(t) \mathrm{d} t + O \left( SLB (\log z)^{\kappa-1} \right) . \]
The implied constant above depends on $A_1$, $A_2$, $\kappa$, but is independent of $L$.
\end{lemma}
\begin{proof}
This is \cite[Lemma 4]{GGPY}.
\end{proof}
Throughout this section, we assume that the hypotheses of Theorem~\ref{gentheo} hold. Moreover, we write
\[ W_1 = \prod_{p \leq D_0 \; \mbox{or} \; \; \; p | q_0q_1} p , \; \; \; W_2 = \prod_{\substack{p \leq D_0 \\ p \nmid q_0 }} p , \; \; \; R = N^{\theta/2-\varepsilon} . \]
Recalling the definition of an admissible set, we pick a natural number $\nu_0$ with
\[ (\nu_0 + h_m, W_2) = 1 \; \; \; \; (m=1, \cdots, k) . \]
\begin{lemma} \label{multlem}
Suppose that $\gamma(p) = 1 + O(p^{-1})$ if $p \nmid W_1$ and $\gamma(p) = 0$ if $p | W_1$. Let $\kappa =1$ and let $S$ be as defined in \eqref{Sdef}. We have
\[ S = \frac{\varphi(W_1)}{W_1} \left( 1 + O(D_0^{-1}) \right) . \]
\end{lemma}
\begin{proof}
We have
\[ S = \prod_{p | W_1} \left( 1 - \frac{1}{p} \right) \prod_{p \nmid W_1} \left( 1 - \frac{1}{p} + O \left( \frac{1}{p^2} \right) \right)^{-1} \left( 1- \frac{1}{p} \right) = \frac{\varphi(W_1)}{W_1} \prod_{\substack{p > D_0 \\ p \nmid q_0q_1}} \left( 1+ O(p^{-2}) \right) , \]
from which the statement of the lemma can be readily obtained.
\end{proof}
\begin{lemma} \label{T1T2lem}
Let $H>1$,
\[ T_1 = \sum_{\substack{d \leq R \\ (d,W_1)=1}} \frac{\mu^2(d)}{d} \sum_{a | d} \frac{4^{\omega(a)}}{a} \; \; \mbox{and} \; \; T_2 = \sum_{H < d \leq R} \frac{\mu^2(d)}{d^2} \sum_{a|d} a^{-1/2} .\]
Then, we have
\begin{equation} \label{T1bound}
T_1 \ll \frac{\varphi(W_1)}{W_1} \mathcal{L}
\end{equation}
and
\begin{equation} \label{T2bound}
T_2 \ll H^{-1} .
\end{equation}
\end{lemma}
\begin{proof}
Let $\gamma(p)=0$ if $p | W_1$ and
\[ \gamma(p) = \frac{p^2+4p}{p^2+p+4} \]
if $p \nmid W_1$. Then $g(p)$, as defined in the statement of Lemma~\ref{GGPYlem}, is
\[ g(p) = \frac{1}{p} + \frac{4}{p^2} \]
if $p \nmid W_1$. Therefore, if $d$ is square-free and $(d,W_1)=1$,
\[ \frac{1}{d} \sum_{a|d} \frac{4^{\omega(a)}}{a} = \frac{1}{d} \prod_{p|d} \left( 1 + \frac{4}{p} \right) =g(d) . \]
Otherwise, if $(d,W_1) \neq 1$, then $g(d)=0$. Using Lemma~\ref{GGPYlem} with $G(y)=1$ and Lemma~\ref{multlem}, we have
\[ T_1 = \sum_{d \leq R} \mu^2(d) g(d) G \left( \frac{ \log d}{\log R} \right) = \frac{\varphi(W_1)}{W_1} \left( 1 + O (D_0^{-1} ) \right) \log R + O \left( \frac{\varphi(W_1)}{W_1} L \right) , \]
where we can take
\[ L = \sum_{p | W_1} \frac{\log p}{p} \ll \log D_0 + \log \omega(q_0) \ll \log \mathcal{L} . \]
Combining everything, we get \eqref{T1bound}. \newline
To prove \eqref{T2bound}, we interchange the summations and get
\[ T_2 \leq \sum_{a \leq R} a^{-5/2} \sum_{Ha^{-1} < k \leq Ra^{-1}} k^{-2} \ll \sum_{a \leq R} a^{-3/2} H^{-1} \ll H^{-1} , \]
completing the proof of the lemma.
\end{proof}
\begin{lemma} \label{multfunction}
Let $f_0$, $f_1$ be multiplicative functions with $f_0(p)=f_1(p)+1$. Then for squarefree $d$, $e$,
\[ \frac{1}{f_0 ([d,e])} = \frac{1}{f_0(d) f_0(e)} \sum_{k | d,e} f_1 (k) . \]
\end{lemma}
\begin{proof}
We have
\[ \frac{1}{f_0(d) f_0(e)} \sum_{k | d,e} f_1 (k) = \frac{1}{f_0(d) f_0(e)} \prod_{p | (d,e)} \left( 1+ f_1 (p) \right) = \frac{1}{f_0(d) f_0(e)} \prod_{p | (d,e)} f_0 (p) = \prod_{p | [d,e]} (f_0 (p))^{-1} . \]
The lemma follows from this.
\end{proof}
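A concrete instance of Lemma~\ref{multfunction}, with a worked check we add for illustration; this is the case $f_1 = \varphi$ that is applied coordinatewise later, in the proof of Proposition~\ref{S1prop}:

```latex
Taking $f_1 = \varphi$ forces $f_0(p) = \varphi(p) + 1 = p$, so that for
squarefree $d$, $e$,
\[
  \frac{1}{[d,e]} = \frac{1}{de} \sum_{k \mid (d,e)} \varphi(k) .
\]
Sanity check with $d = 6$, $e = 10$: $[6,10] = 30$, $(6,10) = 2$, and
\[
  \frac{1}{60} \left( \varphi(1) + \varphi(2) \right) = \frac{2}{60} = \frac{1}{30} .
\]
```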
We now prove two propositions that readily yield Theorem~\ref{gentheo} when combined. To state them, we define weights $y_{\mathbf{r}}$ and $\lambda_{\mathbf{r}}$ for tuples
\[ \mathbf{r} = ( r_1, \cdots, r_k ) \in \mathbb{N}^k \]
having the properties
\begin{equation} \label{tupprop}
\left( \prod_{i=1}^k r_i, W_1 \right) =1, \; \mu^2 \left( \prod_{i=1}^k r_i \right) = 1.
\end{equation}
We set $y_{\mathbf{r}} = \lambda_{\mathbf{r}} = 0$ for all other tuples. Let $F$ be a smooth function with $|F| \leq 1$ and the properties given at the end of Section 1. Let
\begin{equation} \label{yrdef}
y_{\mathbf{r}} = F \left( \frac{\log r_1}{\log R} , \cdots , \frac{\log r_k}{\log R} \right) ,
\end{equation}
and
\begin{equation} \label{lamddef}
\lambda_{\mathbf{d}} = \prod_{i=1}^k \mu(d_i) d_i \sum_{\substack{\mathbf{r} \\ d_i | r_i \; \forall i}} \frac{y_{\mathbf{r}}}{\prod_{i=1}^k \varphi(r_i)} .
\end{equation}
We have
\begin{equation} \label{lambbound}
\lambda_{\mathbf{r}} \ll \mathcal{L}^k
\end{equation}
(see (5.9) of \cite{May}). For $n \equiv \nu_0 \pmod{W_2}$, let
\[ w_n = \left( \sum_{d_i | n+h_i \; \forall i} \lambda_{\mathbf{d}} \right)^2 , \]
and $w_n=0$ for all other natural numbers $n$. \newline
\begin{proposition} \label{S1prop}
Let
\[ S_1 = \sum_{N \leq n < 2N} w_n X ( \mathcal{A} ; n) . \]
Then
\[ S_1 = \frac{(1+o(1)) \varphi(W_1)^k Y (\log R)^k I_k (F)}{q_0 W_1^k W_2} . \]
\end{proposition}
\begin{proposition} \label{S2prop}
Let
\[ S_2 (g, m) = \sum_{\substack{N \leq n < 2N \\ n \in \mathcal{A} \cap (\mathcal{A} - h_m)}} w_n \varrho_g(n+h_m) . \]
Then for $ 1 \leq g \leq s$ and $1 \leq m \leq k$,
\[ S_2 (g,m) = \frac{b_{g,m} (1+o(1)) \varphi(W_1)^{k+1} Y (\log R)^{k+1} J^{(m)}_k (F)}{\varphi(q_0) \varphi(W_2) W_1^{k+1} \mathcal{L}} . \]
\end{proposition}
Before proving the above propositions, we shall deduce Theorem~\ref{gentheo} from them.
\begin{proof}[Proof of Theorem~\ref{gentheo}]
Let
\[ Z = \frac{Y\varphi(W_1)^k}{q_0W_1^kW_2} (\log R)^k \]
and
\[ S(N) = \sum_{\substack{N \leq n < 2N \\ n \in \mathcal{A}}} w_n \left( \sum_{m=1}^k X \left( \mathbb{P} \cap \mathcal{A}; n + h_m \right) - (t-1) \right) . \]
Since $w_n \geq 0$, \eqref{regcond2} gives that
\[ S(N) \geq \sum_{m=1}^k \left( \sum_{g=1}^a S_2(g,m) - \sum_{g=a+1}^s S_2 (g,m) \right) - (t-1)S_1 . \]
Using Propositions \ref{S1prop} and \ref{S2prop}, the right-hand side of the above is
\[ \left(1+o(1) \right) Z \left( \sum_{m=1}^k \left( \sum_{g=1}^a b_{g,m} - \sum_{g=a+1}^s b_{g,m} \right) J^{(m)}_k(F) \left( \frac{\theta}{2} - \varepsilon \right) - (t-1) I_k(F) \right) . \]
Here we have used
\[ \frac{\varphi(q_0) \varphi(q_1) \varphi(W_2)}{q_0q_1 W_2} \frac{W_1}{\varphi(W_1)} = 1 \; \; \; \mbox{and} \; \; \; \frac{\varphi(q_1)}{q_1} = 1 + o(1) . \]
Therefore, using \eqref{bbound}, we get
\[ S(N) \geq \left(1+o(1) \right) Z \left( b\sum_{m=1}^k J^{(m)}_k(F) \left( \frac{\theta}{2} - \varepsilon \right) - (t-1) I_k(F) \right) > 0 , \]
for a suitable choice of $F$. The positivity of the above expression is a consequence of \eqref{k0defcond}. Therefore, there must be at least one $n \in \mathcal{A}$ for which
\[ \sum_{m=1}^k X \left( \mathbb{P} \cap \mathcal{A} ; n+ h_m \right) > t-1 . \]
For this $n$, there is a set of $t$ primes $n+h_{m_1}, \cdots , \; n+ h_{m_t}$ in $\mathcal{A}$.
\end{proof}
\section{Proof of Propositions \ref{S1prop} and \ref{S2prop}}
This section is devoted to the proofs of the two propositions.
\begin{proof}[Proof of Proposition~\ref{S1prop}]
We first show that
\begin{equation} \label{S11st}
S_1 = \frac{Y}{q_0W_2} \sum_{\mathbf{r}} \frac{y_{\mathbf{r}}^2}{\prod_{i=1}^k \varphi(r_i)} + O \left( \frac{Y \varphi(W_1)^k \mathcal{L}^k}{q_0W_2W_1^kD_0} \right).
\end{equation}
From the definition of $w_n$, we get
\begin{equation} \label{S1wn}
S_1 = \sum_{\mathbf{d}, \; \mathbf{e}} \lambda_{\mathbf{d}} \lambda_{\mathbf{e}} \sum_{\substack{N \leq n < 2N \\ n \equiv \nu_0 \bmod{W_2} \\ [d_i, e_i ] | n+h_i \; \forall i}} X \left( \mathcal{A}; n \right).
\end{equation}
Recall that $n \equiv a_0 \pmod{q_0}$ for all $n \in \mathcal{A}$. The inner sum of the above takes the form
\[ \sum_{\substack{N \leq n < 2N \\ n \equiv a_q \bmod{qq_0}}} X \left( \mathcal{A}; n \right), \]
where
\[ q = W_2 \prod_{i=1}^k [d_i, e_i] , \]
provided that $W_2$, $[d_1, e_1]$, $\cdots$, $[d_k, e_k]$ are pairwise coprime. The latter restriction reduces to
\begin{equation} \label{coprimcond}
(d_i, e_j) = 1
\end{equation}
for all $i \neq j$, and we exhibit this condition on the summation by writing
\[ \sideset{}{'} \sum_{\mathbf{d}, \; \mathbf{e}} . \]
Outside of $ \sum_{\mathbf{d}, \; \mathbf{e}}^{'}$, the inner sum is empty. To see this, suppose that $p | d_i$, $p|e_j$ with $i \neq j$, then the conditions
\[ [d_i, e_i] | n+h_i , \; \mbox{and} \; [d_j, e_j] | n+h_j \]
imply that $p | h_i-h_j$. This means that either $p \leq D_0$ or $p| q_0$; in either case $p | W_1$, contradicting $p | d_i$ and \eqref{tupprop}. \newline
Counting the number of times a given $q$ can arise, we get
\begin{equation} \label{countq}
S_1 - \frac{Y}{q_0W_2} \sideset{}{'} \sum_{\mathbf{d}, \; \mathbf{e}} \frac{\lambda_{\mathbf{d}} \lambda_{\mathbf{e}}}{ \prod_{i=1}^k [d_i, e_i]} \ll \left( \max_{\mathbf{d}} \left| \lambda_{\mathbf{d}} \right| \right)^2 \sum_{\substack{q \leq R^2 W_2 \\ (q,q_0)=1}} \mu^2(q) \tau_{3k} (q) \left| \sum_{n \equiv a_q \bmod{qq_0}} X \left( \mathcal{A}; n \right) - \frac{Y}{qq_0} \right| .
\end{equation}
Since $R^2 W_2 \leq N^{\theta}$, we can appeal to \eqref{regcond1} and \eqref{lambbound} to majorize the right-hand side of \eqref{countq} by
\[ \ll \frac{Y}{q_0} \mathcal{L}^{2k-(k+\varepsilon)} \ll \frac{\varphi(W_1)^k Y \mathcal{L}^k}{q_0 W_2 W_1^k D_0} . \]
Applying Lemma~\ref{multfunction} with $f_1 = \varphi$, we see that
\[ S_1 = \frac{Y}{q_0W_2} \sum_{\mathbf{u}} \prod_{i=1}^k \varphi(u_i) \sideset{}{'} \sum_{\substack{\mathbf{d}, \; \mathbf{e} \\ u_i | d_i, e_i \; \forall i}} \frac{\lambda_{\mathbf{d}} \lambda_{\mathbf{e}}}{ \prod_{i=1}^k d_i e_i} + O \left( \frac{\varphi(W_1)^k Y \mathcal{L}^k}{q_0 W_2 W_1^k D_0} \right) . \]
Now we follow \cite{May} verbatim to transform this equation into
\begin{equation} \label{maytrans}
S_1 = \frac{Y}{q_0 W_2} \sum_{\mathbf{u}} \prod_{i=1}^k \varphi(u_i) \sideset{}{^*} \sum_{s_{1,2}, \cdots, s_{k,k-1}} \prod_{\substack{1 \leq i, j \leq k \\ i \neq j}} \mu \left( s_{i,j} \right) \sum_{\substack{\mathbf{d}, \; \mathbf{e} \\ u_i | d_i, e_i \; \forall i \\ s_{i,j} | d_i, e_j \; \forall i \neq j}} \frac{\lambda_{\mathbf{d}} \lambda_{\mathbf{e}}}{ \prod_{i=1}^k d_i e_i} + O \left( \frac{\varphi(W_1)^k Y \mathcal{L}^k}{q_0 W_2 W_1^k D_0} \right) .
\end{equation}
Here $\sum^{*}$ indicates that $(s_{i,j}, u_iu_j )=1$ and $(s_{i,j} , s_{i, c}) = 1 = (s_{i,j} , s_{d,j})$, for $c \neq j$, $d \neq i$. Now define
\begin{equation} \label{ajbjdef}
a_j = u_j \prod_{i \neq j} s_{j,i}, \; \; b_j = u_j \prod_{i \neq j} s_{i,j} .
\end{equation}
As in \cite{May}, we recast \eqref{maytrans} as
\begin{equation} \label{maytrans2}
S_1 = \frac{Y}{q_0 W_2} \sum_{\mathbf{u}} \prod_{i=1}^k \frac{\mu(u_i)^2}{\varphi(u_i)} \sideset{}{^*} \sum_{s_{1,2}, \cdots, s_{k,k-1}} \prod_{\substack{1 \leq i, j \leq k \\ i \neq j}} \frac{\mu ( s_{i,j} )}{\varphi(s_{i,j})^2} \, y_{\mathbf{a}} y_{\mathbf{b}} + O \left( \frac{\varphi(W_1)^k Y \mathcal{L}^k}{q_0 W_2 W_1^k D_0} \right) .
\end{equation}
For the non-zero terms on the right-hand side of \eqref{maytrans2}, either $s_{i,j}=1$ or $s_{i,j} > D_0$. The terms of the latter kind (for given $i, j, i \neq j$) contribute
\begin{equation} \label{maytrans2bound}
\ll \frac{Y}{q_0W_2} \left( \sum_{\substack{ u < R \\ (u, W_1)=1}} \frac{\mu(u)^2}{\varphi(u)} \right)^k \left( \sum_{s_{i,j} > D_0} \frac{\mu(s_{i,j})^2}{\varphi(s_{i,j})^2} \right) \left( \sum_{s \geq 1} \frac{\mu(s)^2}{\varphi(s)^2} \right)^{k^2-k-1} = \frac{Y}{q_0W_2} U_1 U_2 U_3 ,
\end{equation}
say. Clearly, $U_3 \ll 1$. Now if $u$ is squarefree, we have
\[ \frac{1}{\varphi(u)} = \frac{1}{u} \prod_{p |u} \left( 1 - \frac{1}{p} \right)^{-1} \ll \frac{1}{u} \sum_{a|u} \frac{1}{a} \]
and
\[ \frac{1}{\varphi(u)^2} \ll \frac{1}{u^2} \prod_{p |u} \left( 1 + \frac{2}{p} \right) = \frac{1}{u^2} \sum_{a|u} \frac{2^{\omega(a)}}{a} \ll \frac{1}{u^2} \sum_{a|u} a^{-1/2}. \]
So \eqref{T1bound} and \eqref{T2bound} give, respectively,
\[ U_1 \ll \left( \frac{\varphi(W_1)}{W_1} \mathcal{L} \right)^k \; \; \; \mbox{and} \; \; \; U_2 \ll \frac{1}{D_0} . \]
Hence, the right-hand side of \eqref{maytrans2bound} is
\[ \ll \frac{\varphi(W_1)^k Y \mathcal{L}^k}{q_0 W_2 W_1^k D_0} \]
and we have \eqref{S11st}. \newline
Now, we shall deduce Proposition~\ref{S1prop} from \eqref{S11st}. Mindful of \eqref{lamddef}, we have
\[ S_1 = \frac{Y}{q_0W_2} \sum_{\substack{ \mathbf{u} \\ (u_l, u_j) =1 \; \forall l \neq j \\ (u_l , W_1) =1 \; \forall l}} \prod_{i=1}^k \frac{\mu(u_i)^2}{\varphi(u_i)} F \left( \frac{\log u_1}{\log R} , \cdots , \frac{\log u_k}{\log R} \right)^2 + O \left( \frac{\varphi(W_1)^k Y \mathcal{L}^k}{q_0 W_2 W_1^k D_0} \right) . \]
Note that the common prime factors of two integers both coprime to $W_1$ are strictly greater than $D_0$. Thus, we may drop the condition $(u_l, u_j)=1$ in the above expression at the cost of an error of size
\[ \ll \frac{Y}{q_0W_2} \sum_{p > D_0} \sum_{\substack{u_1 \cdots u_k < R \\ p | u_l, u_j \\ (u_l, W_1)=1 \; \forall l}} \prod_{i=1}^k \frac{\mu(u_i)^2}{\varphi(u_i)} \ll \frac{Y}{q_0W_2} \sum_{p > D_0} \frac{1}{(p-1)^2} \left( \sum_{\substack{ u < R \\ (u,W_1)=1}} \frac{\mu(u)^2}{\varphi(u)} \right)^k \ll \frac{\varphi(W_1)^k Y \mathcal{L}^k}{q_0 W_2 W_1^k D_0} , \]
by virtue of \eqref{T1bound}. \newline
It remains to evaluate the sum
\begin{equation} \label{S1mt}
\sum_{\substack{\mathbf{u} \\ (u_l, W_1)=1 \; \forall l}} \prod_{i=1}^k \frac{\mu(u_i)^2}{\varphi(u_i)} F \left( \frac{\log u_1}{\log R} , \cdots , \frac{\log u_k}{\log R} \right)^2.
\end{equation}
This requires applying Lemma~\ref{GGPYlem} $k$ times with
\[ \gamma (p) = \left\{ \begin{array}{cl} 0 & p | W_1, \\ 1 & p \nmid W_1 . \end{array} \right. \]
We take $A_1$ and $A_2$ to be suitable constants and
\[ L \ll 1 + \sum_{p | W_1} \frac{\log p}{p} \ll \log \mathcal{L} \]
as noted earlier. In the $j$-th application, we replace the summation over $u_j$ by the integral over $[0,1]$. Ultimately, we express the sum in \eqref{S1mt} in the form
\[ \frac{\varphi(W_1)^k}{W_1^k} \left( \log R \right)^k I_k (F) + O \left( \frac{\varphi(W_1) (\log \mathcal{L}) \mathcal{L}^{k-1}}{W_1^k} \right) \]
and Proposition~\ref{S1prop} follows at once.
\end{proof}
We shall need the following lemma in the proof of Proposition~\ref{S2prop}.
\begin{lemma} \label{rm1ylem}
Let $1 \leq m \leq k$ and suppose that $r_m=1$. Let
\[ y_{\mathbf{r}}^{(m)} = \prod_{i=1}^k \mu(r_i) g(r_i) \sum_{\substack{ \mathbf{d} \\ r_i | d_i \forall i \\ d_m=1}} \frac{\lambda_{\mathbf{d}}}{\prod_{i=1}^k \varphi(d_i)} . \]
Then
\[ y_{\mathbf{r}}^{(m)} = \sum_{a_m} \frac{y_{r_1, \cdots, r_{m-1}, a_m, r_{m+1}, \cdots , r_k}}{\varphi(a_m)} + O \left( \frac{\varphi(W_1) \mathcal{L}}{W_1 D_0} \right). \]
\end{lemma}
\begin{proof}
Following \cite{May} verbatim, we have
\begin{equation} \label{lem5start}
y_{\mathbf{r}}^{(m)} = \prod_{i=1}^k \mu(r_i) g(r_i) \sum_{\substack{\mathbf{a} \\ r_i | a_i \forall i}} \frac{y_{\mathbf{a}}}{\prod_{i=1}^k \varphi(a_i)} \prod_{i\neq m} \frac{\mu(a_i)r_i}{\varphi(a_i)} .
\end{equation}
Fix $j$, $1 \leq j \leq k$. In \eqref{lem5start}, the nonzero terms will have either $a_j = r_j$ or $a_j > D_0r_j$. The contribution from the terms with $a_j \neq r_j$ is
\begin{equation} \label{ajneqrj}
\ll \prod_{i=1}^k g(r_i) r_i \left( \sum_{\substack{a_j > D_0r_j \\ r_j | a_j}} \frac{\mu(a_j)^2}{\varphi(a_j)^2} \right) \left( \sum_{\substack{a_m < R \\ (a_m, W_1)=1}} \frac{\mu(a_m)^2}{\varphi(a_m)} \right) \prod_{\substack{ 1 \leq i \leq k \\ i \neq j, m}} \sum_{r_i | a_i} \frac{\mu(a_i)^2}{\varphi(a_i)^2} .
\end{equation}
Now, as before, from \eqref{T1bound} and \eqref{T2bound},
\[ \sum_{\substack{a_j > D_0r_j \\ r_j | a_j}} \frac{\mu(a_j)^2}{\varphi(a_j)^2} \ll \frac{1}{D_0 \varphi(r_j)^2} , \; \sum_{\substack{a_m < R \\ (a_m, W_1)=1}} \frac{\mu(a_m)^2}{\varphi(a_m)} \ll \frac{\varphi(W_1)}{W_1} \mathcal{L} \]
and
\[ \sum_{r_i | a_i} \frac{\mu(a_i)^2}{\varphi(a_i)^2} \leq \frac{\mu(r_i)^2}{\varphi(r_i)^2} \sum_{d \geq 1} \frac{\mu(d)^2}{\varphi(d)^2} \ll \frac{1}{\varphi(r_i)^2}, \]
majorizing \eqref{ajneqrj} by
\[ \ll \prod_{i=1}^k \frac{g(r_i)r_i}{\varphi(r_i)^2} \frac{\varphi(W_1)}{W_1 D_0} \mathcal{L} \ll \frac{\varphi(W_1) \mathcal{L}}{W_1 D_0} . \]
Hence \eqref{lem5start} becomes
\[ y_{\mathbf{r}}^{(m)} = \prod_{i=1}^k \frac{g(r_i)r_i}{\varphi(r_i)^2} \sum_{a_m} \frac{y_{r_1, \cdots, r_{m-1}, a_m, r_{m+1}, \cdots , r_k}}{\varphi(a_m)} + O \left( \frac{\varphi(W_1) \mathcal{L}}{W_1 D_0} \right) , \]
and the proof is completed by applying Lemma~\ref{multlem}.
\end{proof}
Now we proceed to the proof of Proposition~\ref{S2prop}.
\begin{proof}[Proof of Proposition~\ref{S2prop}]
Let
\[ y_{\max}^{(m)} = \max_{\mathbf{r}} \left| y_{\mathbf{r}}^{(m)} \right|, \]
where $y_{\mathbf{r}}^{(m)}$ is defined in Lemma~\ref{rm1ylem}. We shall first show that
\begin{equation} \label{firstthingprop2}
S_2 (g,m) = \frac{Y_{g,m}}{\varphi(q_0) \varphi(W_2)} \sum_{\mathbf{u}} \frac{\left( y_{\mathbf{u}}^{(m)}\right)^2}{\prod_{i=1}^k g(u_i)} + O \left( \frac{Y \mathcal{L}^{k-2} \varphi(W_1)^{k-1} \left( y_{\max}^{(m)} \right)^2}{\varphi(q_0) \varphi(W_2) W_1^{k-1} D_0} + \frac{Y \mathcal{L}^{k-\varepsilon}}{\varphi(q_0)} \right) .
\end{equation}
From the definition of $w_n$, we have
\begin{equation} \label{S2withWn}
S_2(g,m) = \sum_{\mathbf{d}, \mathbf{e}} \lambda_{\mathbf{d}} \lambda_{\mathbf{e}} \sum_{\substack{n \in \mathcal{A} \cap ( \mathcal{A} - h_m ) \\ N \leq n < 2N , \; n \equiv \nu_0 \bmod{W_2} \\ [d_i, e_i] | n+h_i \; \forall i}} \varrho_g(n+h_m) .
\end{equation}
As in the proof of Proposition~\ref{S1prop}, $\sum_{\mathbf{d}, \mathbf{e}}$ reduces to $\sum_{\mathbf{d}, \mathbf{e}}^{'}$. Let $n' = n+h_m$. Since $n+h_m \equiv a_0 \pmod{q_0}$ for $n \in \mathcal{A}$, the inner sum of \eqref{S2withWn} reduces to
\[ T( \mathbf{d}, \mathbf{e} ) : = \sum_{\substack{ n' \equiv \nu_0 + h_m \bmod{W_2} \\ n' \equiv a_0 \bmod{q_0} \\ n' \equiv h_m-h_i \bmod{[d_i,e_i]} \forall i}} X \left( \mathcal{A} \cap ( \mathcal{A} + h_m ) , n' \right) \varrho_g(n'). \]
Recall that $\varrho_g(n') = 0$ if $n'$ is divisible by a prime divisor of $[d_i,e_i]$. Since one condition of the summation is $[d_m , e_m] | n'$, we have $T(\mathbf{d}, \mathbf{e}) = 0$ unless $d_m=e_m=1$. When $d_m=e_m=1$,
\[ T(\mathbf{d}, \mathbf{e}) = \sum_{n \equiv a_q \bmod{qq_0}} X \left( \mathcal{A} \cap (\mathcal{A}+h_m) , n \right) \varrho_g(n) . \]
Here we have
\[ q = W_2 \prod_{i=1}^k [d_i, e_i], \; (a_q, q)=1, \; a_q \equiv a_0 \pmod{q_0} . \]
For $(a_q,q)=1$, we need $(h_m-h_i, [d_i,e_i])=1$ whenever $m \neq i$, which was noted earlier. \newline
Arguing as in the proof of Proposition~\ref{S1prop}, \eqref{regcond3} now gives
\[ S_2(g,m) = \frac{Y_{g,m}}{\varphi(q_0) \varphi(W_2)} \sideset{}{'} \sum_{\substack{\mathbf{d}, \mathbf{e} \\ d_m=e_m=1}} \frac{\lambda_{\mathbf{d}} \lambda_{\mathbf{e}}}{\prod_{i=1}^k \varphi([d_i,e_i])} + O \left( \frac{Y \mathcal{L}^{k-\varepsilon}}{\varphi(q_0)} \right). \]
With $a_j$ and $b_j$ as in \eqref{ajbjdef}, we follow \cite{May} to obtain
\begin{equation} \label{S2afterMay}
S_2(g,m) = \frac{Y_{g,m}}{\varphi(q_0)\varphi(W_2)} \sum_{\mathbf{u}} \prod_{i=1}^k \frac{\mu(u_i)^2}{g(u_i)} \sideset{}{^*} \sum_{s_{1,2}, \cdots, s_{k,k-1}} \prod_{\substack{1 \leq i,j \leq k \\ i \neq j}} \frac{\mu(s_{i,j})}{g(s_{i,j})^2} y_{\mathbf{a}}^{(m)} y_{\mathbf{b}}^{(m)} + O \left( \frac{Y \mathcal{L}^{k-\varepsilon}}{\varphi(q_0)} \right).
\end{equation}
Here $g$ is the totally multiplicative function with $g(p) = p-2$ for all $p$ and we have used Lemma~\ref{multfunction} with $f_1=g$. \newline
The contribution to the sum in \eqref{S2afterMay} from $s_{i,j} \neq 1$ (for given $i,j$) is
\begin{equation} \label{not1contrib}
\begin{split}
\ll \frac{Y \left( y_{\max}^{(m)} \right)^2}{\varphi(q_0) \varphi(W_2) \mathcal{L}} & \left( \sum_{\substack{ u < R \\ (u, W_1)=1}} \frac{\mu(u)^2}{g(u)} \right)^{k-1} \left( \sum_s \frac{\mu(s)^2}{g(s)^2} \right)^{k(k-1)-1} \left( \sum_{s_{i,j}>D_0} \frac{\mu(s_{i,j})^2}{g(s_{i,j})^2} \right) \\ & = \frac{Y \left( y_{\max}^{(m)} \right)^2}{\varphi(q_0) \varphi(W_2) \mathcal{L}} V_1 V_2 V_3,
\end{split}
\end{equation}
say. Clearly, $V_2 \ll 1$. Using \eqref{T1bound} while mindful of the estimate
\[ \frac{1}{g(s)} \ll \frac{1}{s} \sum_{a|s} \frac{2^{\omega(a)}}{a} \]
yields that
\[ V_1 \ll \left( \frac{\varphi(W_1)}{W_1} \mathcal{L} \right)^{k-1} . \]
From \eqref{T2bound} and the observation that, for $s$ squarefree,
\[ \frac{1}{g^2(s)} \ll \frac{1}{s^2} \sum_{a|s} \frac{4^{\omega(a)}}{a} \ll \frac{1}{s^2} \sum_{a|s} a^{-1/2} , \]
we get that
\[ V_3 \ll D_0^{-1} . \]
Note the bound in \eqref{not1contrib} is
\[ \ll \frac{Y \left( y_{\max}^{(m)} \right)^2 \mathcal{L}^{k-2}}{\varphi(q_0) \varphi(W_2)} \left( \frac{\varphi(W_1)}{W_1} \right)^{k-1} \frac{1}{D_0} , \]
and we have established \eqref{firstthingprop2}. \newline
Now we use Lemma~\ref{rm1ylem} in \eqref{firstthingprop2}, recalling \eqref{yrdef}. When $r_m=1$,
\begin{equation} \label{yrm=1}
\begin{split}
y_{\mathbf{r}}^{(m)} = \sum_{\left( u, W_1 \prod_{i=1}^k r_i \right)=1} \frac{\mu(u)^2}{\varphi(u)} F &\left( \frac{\log r_1}{\log R} , \cdots , \frac{\log r_{m-1}}{\log R} , \frac{\log u}{\log R} , \frac{\log r_{m+1}}{\log R}, \cdots , \frac{\log r_k}{\log R} \right) \\
& + O \left( \frac{\varphi(W_1) \mathcal{L}}{ W_1 D_0} \right) .
\end{split}
\end{equation}
From this, we find that
\[ y_{\max}^{(m)} \ll \frac{\varphi(W_1)}{W_1} \mathcal{L} . \]
We shall apply Lemma~\ref{GGPYlem} to \eqref{yrm=1} with $\kappa=1$,
\[ \gamma(p) = \left\{ \begin{array}{cl} 1, & p \nmid W_1 \prod_{i=1}^k r_i \\ 0 , & \mbox{otherwise}, \end{array} \right. \]
$A_1$, $A_2$ suitably chosen and
\[ L \ll \log \mathcal{L} \]
(similar to the proof of \eqref{T1bound}). Define
\[ F_{\mathbf{r}}^{(m)} = \int\limits_0^1 F \left( \frac{\log r_1}{\log R} , \cdots , \frac{\log r_{m-1}}{\log R} , t_m , \frac{\log r_{m+1}}{\log R}, \cdots , \frac{\log r_k}{\log R} \right) \mathrm{d} t_m . \]
We obtain that
\[ y_{\mathbf{r}}^{(m)} = \log R \frac{\varphi(W_1)}{W_1} \left( \prod_{i=1}^k \frac{\varphi(r_i)}{r_i} \right) F_{\mathbf{r}}^{(m)} + O \left( \frac{\varphi(W_1) \mathcal{L}}{W_1 D_0} \right) . \]
Inserted into \eqref{firstthingprop2}, the above produces a main term
\begin{equation} \label{usefirstthing}
\frac{(\log R)^2 Y_{g,m} \varphi(W_1)^2}{\varphi(q_0) \varphi(W_2) W_1^2} \sum_{\substack{ \mathbf{r} \\ (r_i, W_1)=1 \forall i \\ (r_i, r_j)=1 \forall i \neq j \\ r_m=1}} \prod_{i=1}^k \frac{\varphi(r_i) \mu(r_i)^2}{g(r_i)r_i^2} \left( F_{\mathbf{r}}^{(m)} \right)^2
\end{equation}
and an error term of size
\[ \ll \frac{Y_{g,m}}{\varphi(q_0)\varphi(W_2)} \sum_{\substack{ \mathbf{r} \\ r_m=1}} \frac{\varphi(W_1)^2 \mathcal{L}^2}{W_1^2 D_0 \prod_{i=1}^k g(r_i)} \ll \frac{Y\varphi(W_1)^2 \mathcal{L}^2}{\varphi(q_0)\varphi(W_2) W_1^2 D_0} \left( \sum_{\substack{r < R \\ (r,W_1)=1}} \frac{1}{g(r)} \right)^{k-1} \ll \frac{Y\varphi(W_1)^{k+1} \mathcal{L}^k}{\varphi(q_0)\varphi(W_2) W_1^{k+1} D_0} . \]
Recall that $Y_{g,m} \ll Y \mathcal{L}^{-1}$. Now we remove the condition $(r_i,r_j)=1$ from \eqref{usefirstthing}. As before, this introduces an error of size
\[ \ll \frac{\mathcal{L}^2 Y \varphi(W_1)^2}{\varphi(q_0) \varphi(W_2) W_1^2} \left( \sum_{p > D_0} \frac{\varphi(p)^2}{g(p)^2 p^2} \right) \left( \sum_{\substack{r <R \\ (r,W_1)=1}} \frac{\mu(r)^2 \varphi(r)}{g(r) r} \right)^{k-1} \ll \frac{Y\mathcal{L}^k \varphi(W_1)^{k+1}}{\varphi(q_0)\varphi(W_2) W_1^{k+1} D_0} \]
by an application of Lemma~\ref{T1T2lem}. Combining all our results, we get
\[ S_2(g,m) = \frac{(\log R)^2 Y_{g,m} \varphi(W_1)^2}{\varphi(q_0) \varphi(W_2) W_1^2} \sum_{\substack{ \mathbf{r} \\ (r_i, W_1)=1 \forall i \\ r_m=1}} \prod_{i=1}^k \frac{\varphi(r_i)^2 \mu(r_i)^2}{g(r_i) r_i^2} \left( F_{\mathbf{r}}^{(m)} \right)^2 + O \left( \frac{Y\varphi(W_1)^{k+1} \mathcal{L}^k}{\varphi(q_0) \varphi(W_2) W_1^{k+1} D_0} \right) . \]
The last sum is evaluated by applying Lemma~\ref{GGPYlem} to each summation variable in turn, taking
\[ \gamma(p) = \left\{ \begin{array}{cl} \frac{p^3-2p^2+p}{p^3-p^2-2p+1} , & p \nmid W_1 \\ 0 , & p | W_1 \end{array} \right. \]
to produce the right value of $\gamma(p)/(p-\gamma(p))$. Of course
\[ S = \frac{\varphi(W_1)}{W_1} \left( 1 + O (D_0^{-1}) \right) \]
by Lemma~\ref{multlem}, while $L \ll \log \mathcal{L}$. Our final conclusion is that
\[ S_2(g,m) = \frac{(\log R)^{k+1} Y_{g,m} \varphi(W_1)^{k+1} J_k^{(m)}}{\varphi(q_0) \varphi(W_2) W_1^{k+1}} \left( 1 + o (1) \right) \]
completing the proof.
\end{proof}
\section{Further Lemmas}
Let $\gamma= \alpha^{-1}$. As noted in \cite{BaSh}, the set of $[ \alpha m + \beta ]$ in $[N, 2N)$ may be written as
\[ \{ n \in [N, 2N) : \gamma n \in ( \gamma \beta - \gamma, \beta \gamma] \pmod{1} \} . \]
\begin{lemma} \label{intervallem}
Let $I=(a,b)$ be an interval of length $l$ with $0 < l < 1$ and let $h$ be a natural number satisfying
\[ 0 < -h \gamma < 2 \varepsilon \pmod{1} , \]
where $2 \varepsilon < l$. Let
\[ \mathcal{A} = \{ n \in [N, 2N) : \gamma n \in I \pmod{1} \} . \]
Then
\[ \mathcal{A} \cap (\mathcal{A}+h) = \{ n \in [N+h, 2N) : \gamma n \in J \pmod{1} \} \]
where $J$ is an interval of length $l'$ with
\[ l - 2 \varepsilon < l' < l .\]
\end{lemma}
\begin{proof}
Let $t \equiv -h \gamma \pmod{1}$, $0 < t < 2 \varepsilon$. Clearly $\mathcal{A} \cap ( \mathcal{A}+h)$ consists of the integers in $[N+h, 2N)$ for which
\[ \gamma n \in (a,b) \pmod{1}, \; \gamma n + t \in (a,b) \pmod{1} .\]
The lemma follows with $J=(a,b-t)$.
\end{proof}
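The set identity in this proof can be confirmed with exact rational arithmetic on a toy instance; the values of $\gamma$, $I$, $h$ and $N$ below are illustrative choices, not from the text.

```python
from fractions import Fraction as Fr

def frac(x):
    """Fractional part of a rational x."""
    return x - (x.numerator // x.denominator)

gamma = Fr(1, 7)
a, b = Fr(1, 10), Fr(1, 2)   # I = (a, b), length l = 2/5
h = 6                        # frac(-h*gamma) = 1/7, which is < l
t = frac(-h * gamma)
assert 0 < t < b - a         # hypothesis 0 < t < 2*eps < l

N = 100
A = {n for n in range(N, 2 * N) if a < frac(gamma * n) < b}
# Direct intersection versus the interval J = (a, b - t) of the lemma
lhs = A & {n + h for n in A}
rhs = {n for n in range(N + h, 2 * N) if a < frac(gamma * n) < b - t}
assert lhs == rhs
```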
\begin{lemma} \label{Baklem}
Let $I$ be an interval of length $l$, $0 < l < 1$. Let $x_1$, $\cdots$, $x_N$ be real. Then
\begin{enumerate}[(i)]
\item \label{Baklem1} There exists $z$ such that
\[ \# \left\{ j \leq N : x_j \in z + I \pmod{1} \right\} \geq N l. \]
\item \label{Baklem2} We have (for $a_j \geq 0$, $j=1, \cdots, N$ and $L \geq 1$)
\[ \sum_{\mathclap{\substack{j=1 \\ x_j \in I \bmod{1}}}}^N a_j - l \sum_{j=1}^N a_j \ll L^{-1} \sum_{j=1}^N a_j + \sum_{h=1}^L h^{-1} \left| \sum_{j=1}^N a_j e(h x_j) \right| . \]
\end{enumerate}
\end{lemma}
\begin{proof}
We leave part \eqref{Baklem1} as an exercise; part \eqref{Baklem2} is a slight variant of \cite[Theorem 2.1]{Bak}.
\end{proof}
\begin{lemma} \label{splitdiag}
Let $1 \leq Q \leq N$ and $F$ a nonnegative function defined on Dirichlet characters. Then for some $Q_1$, $1 \leq Q_1 \leq Q$,
\[ \sum_{q \leq Q} \ \sideset{} {'} \sum_{\chi \bmod{q}} F ( \hat{\chi} ) \ll \frac{\mathcal{L} Q}{Q_1} \sum_{Q_1 \leq q_1 < 2Q_1} \ \sideset{} {^{\star}} \sum_{\psi \bmod{q_1}} F(\psi) . \]
\end{lemma}
\begin{proof}
We recall that $\hat{\chi}$ is the primitive character that induces $\chi$, so that $F( \hat{\chi} )$ may be quite different from $F (\chi)$. \newline
The left-hand side of the claimed inequality is
\[ \sum_{q_1 \leq Q} \ \sideset{} {^{\star}} \sum_{\psi \bmod{q_1}} F( \psi) \sum_{\substack{\chi \bmod{q} \\ q \leq Q, \; q_1 | q \\ \psi \; \mathrm{induces} \; \chi}} 1 \leq \sum_{q_1 \leq Q} \ \sideset{} {^{\star}} \sum_{\psi \bmod{q_1}} F ( \psi ) \frac{Q}{q_1} . \]
The lemma follows on applying a splitting-up argument to $q_1$.
\end{proof}
\begin{lemma} \label{BaBalem}
Let $f(j)$ ($j \geq 1$) be a periodic function with period $q$,
\[ S(f, n) = \sum_{j=1}^n f(j) e \left( - \frac{nj}{q} \right) ,\]
$F>0$, and $R \geq 1$. Let $H(y)$ be a real function with $H'(y)$ monotonic and
\[ |H'(y)| \leq Fy^{-1} \]
for $R \leq y \leq 2R$. Then for $J= [R,R']$ with $R < R' \leq 2R$,
\[ \sum_{m \in J} f(m) H(m) - q^{-1} \sum_{1 \leq |n| \leq 2FqR^{-1}} S(f,n) \int\limits_J e \left( \frac{ny}{q} + H(y) \right) \mathrm{d} y \ll \frac{R|S(f,0)|}{qF} + \sum_{|n| \in J'} \frac{|S(f,n)|}{n} , \]
where
\[ J' = [ \min \{2FqR^{-1}, q/2 \} , \max \{ 2FqR^{-1} , q \} + q ] . \]
\end{lemma}
\begin{proof}
This is \cite[Theorem 8]{BaBa}.
\end{proof}
For a finite sequence $\{ a_k : K \leq k < K' \}$, set
\[ \| a \|_2 = \left( \sum_{K \leq k < K'} |a_k|^2 \right)^{1/2} . \]
\begin{lemma}
Let $R \geq 1$, $M \geq 1$, $H \geq 1$. Let $\beta$ be real and
\begin{equation} \label{betadiriapprox}
\left| \beta - \frac{u_1}{r_1} \right| \leq \frac{H}{r_1^2}
\end{equation}
where $r_1 \geq H$ and $(u_1, r_1)=1$. Then for $M_1 \in \mathbb{N}$,
\begin{equation} \label{spacingbound}
\sum_{m=M_1+1}^{M_1+M} \min \left( R, \frac{1}{\| m \beta \|} \right) \ll \left( \frac{HM}{r_1} + 1 \right) \left( R + r_1 \log r_1 \right) .
\end{equation}
If $M < r_1$ and
\[ M \left| \beta - \frac{u_1}{r_1} \right| \leq \frac{1}{2r_1} , \]
then
\begin{equation} \label{spacingbound2}
\sum_{m=1}^M \frac{1}{\| m \beta \|} \ll r_1 \log 2 r_1 .
\end{equation}
\end{lemma}
\begin{proof}
For \eqref{spacingbound}, it suffices to show that a block of $[r_1/H]$ consecutive $m$'s contributes
\[ \ll R + \sum_{l=1}^{r_1} \frac{r_1}{l} . \]
Writing $m = m_0 +j$, $1 \leq j \leq [r_1/H]$,
\[ \left| (m_0+j)\beta - m_0 \beta - \frac{ju_1}{r_1} \right| \leq \frac{jH}{r_1^2} \leq \frac{1}{r_1} , \]
so there are $O(1)$ values of $j$ for which the bound
\[ \| (m_0 + j ) \beta \| \geq \frac{1}{2} \left\| m_0 \beta + \frac{ju_1}{r_1} \right\| \]
fails. Our block estimate follows immediately. \newline
The argument for \eqref{spacingbound2} is similar. In this case,
\[ \left| m\beta - \frac{mu_1}{r_1} \right| \leq \frac{1}{2r_1} , \]
if $ 1 \leq m \leq M$. Therefore, the left-hand side of \eqref{spacingbound2} can be estimated by $\sum_{l=1}^{r_1} r_1/l$.
\end{proof}
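In the rational case $\beta = u_1/r_1$, where the hypothesis of \eqref{spacingbound2} holds trivially, the reciprocal-sum bound can be checked directly; the modulus $r_1 = 101$ and the numerical constant $4$ standing in for the implied constant are illustrative choices.

```python
from math import log

def dist_to_int(num, den):
    """||num/den|| computed exactly as min(r, den - r)/den."""
    r = num % den
    return min(r, den - r) / den

r1, u1 = 101, 37   # r1 prime, so (u1, r1) = 1
M = r1 - 1         # M < r1 as in the lemma

# With beta = u1/r1, ||m*beta|| never vanishes for 1 <= m <= M.
total = sum(1 / dist_to_int(m * u1, r1) for m in range(1, M + 1))

# The sum equals 2*r1*H_{(r1-1)/2} ~ 2*r1*log(r1), consistent with
# the stated bound << r1*log(2*r1); 4 is a comfortable test constant.
assert total <= 4 * r1 * log(2 * r1)
```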
\begin{lemma} \label{doubleaddmultsum}
Let $N < N' \leq 2N$, $MK \asymp N$, $N \geq K \geq M \geq 1$. Suppose that
\begin{equation} \label{modgammacond}
\left| \gamma - \frac{u}{r} \right| \leq \frac{H}{r^2} , \; (u,r)=1, \; H \leq r \leq N .
\end{equation}
Let $(a_m)_{M \leq m < 2M}$, $(b_k)_{K \leq k < 2K}$ be two sequences of complex numbers. Then
\begin{equation} \label{Ssumdef}
S : = \sum_{Q \leq q < 2Q} \sum_{\chi \bmod{q}} \left| \mathop{\sum_{M \leq m < 2M} \sum_{K \leq k < 2K}}_{N \leq mk < N'} a_m b_k \chi (mk) e (\gamma mk) \right|
\end{equation}
satisfies the bound
\[ S \ll \| a \|_2 \| b \|_2 \mathcal{L}^{3/2} D^{1/2} \left( Q^2 M^{1/2} + \frac{Q^{3/2} H^{1/2} N^{1/2}}{r^{1/2}} + Q^{3/2} H^{1/2} K^{1/2} + Q^{3/2} r^{1/2} \right) , \]
where
\[ D = \max_{n < N} \# \{ q \in [Q, 2Q) : q \mid n \}. \]
\end{lemma}
\begin{proof}
Let $S'$ be the sum obtained from $S$ by removing the condition $N \leq mk <N'$. It suffices to prove the same bound, with $\mathcal{L}^{1/2}$ in place of $\mathcal{L}^{3/2}$, for $S'$, since the condition can be restored at the cost of a factor of $\mathcal{L}$. See \cite[Section 3.2]{GH}. \newline
We have
\[ S' \leq \sum_{Q \leq q < 2Q} \sum_{\chi \bmod{q}} \sum_{M \leq m < 2M} |a_m| \left| \sum_{K \leq k < 2K} b_k \chi(k) e(\gamma mk) \right| = \sum_q S_q, \]
say. We may also assume that $b_k = 0$ if $(k,q)>1$. By Cauchy's inequality, and with summations subject to the obvious restrictions on $m$, $k_1$ and $k_2$,
\[ S_q^2 \leq \varphi(q) \| a \|_2^2 \sum_{\chi \bmod{q}} \sum_m \sum_{k_1} \sum_{k_2} b_{k_1} \overline{b}_{k_2} \chi (k_1) \overline{\chi} (k_2) e ( \gamma m (k_1-k_2) ) . \]
Bringing the sum over $\chi$ inside we see that the right-hand side of the above is
\[ \varphi(q)^2 \| a \|_2^2 \sum_{\substack{k_1, k_2 \\ k_1 \equiv k_2 \bmod{q}}} b_{k_1} \overline{b}_{k_2} \sum_m e ( \gamma m (k_1-k_2) ) \leq \varphi(q)^2 \| a \|_2^2 \sum_{k_1} |b_{k_1}|^2 \sum_{k_1 \equiv k_2 \bmod{q}} \left| \sum_m e ( \gamma m (k_1-k_2) ) \right| \]
upon using the parallelogram rule
\[ \left| b_{k_1} b_{k_2} \right| \leq \frac{1}{2} \left( \left| b_{k_1} \right|^2 + \left| b_{k_2} \right|^2 \right) . \]
Now summing the geometric sum over $m$ and then summing over $q$, we see that
\begin{equation} \label{aveSqsq}
\sum_{Q \leq q < 2Q} S_q^2 \ll Q^3 \| a \|_2^2 \| b \|_2^2 M + Q^2 \| a \|_2^2 \| b \|_2^2 \sum_{Q \leq q < 2Q} \sum_{1 \leq l < K/q} \min \left( M, \frac{1}{\| \gamma l q \|} \right) .
\end{equation}
Now we combine the variables $l$ and $q$ and then apply \eqref{spacingbound}, leading to
\begin{equation*}
\begin{split}
\sum_{Q \leq q < 2Q} S_q^2 & \ll Q^3 \| a \|_2^2 \| b \|_2^2 M + Q^2 \| a \|_2^2 \| b \|_2^2 D \left( \frac{HK}{r} + 1 \right) \left( M + r \log r \right) \\
& \ll \| a \|_2^2 \| b \|_2^2 \left( Q^3 M + \mathcal{L} Q^2 D \left( \frac{HN}{r}+ HK +M +r \right) \right).
\end{split}
\end{equation*}
The desired bound for $S'$ follows by another application of Cauchy's inequality.
\end{proof}
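The passage to \eqref{aveSqsq} sums the geometric series over $m$ via the standard bound $\left| \sum_{m=M}^{2M-1} e(\theta m) \right| \leq \min \left( M, \frac{1}{2\|\theta\|} \right)$; a short numerical check (the test values of $\theta$ and $M$ are arbitrary) confirms it.

```python
from cmath import exp
from math import pi

def e(x):
    """e(x) = exp(2*pi*i*x)."""
    return exp(2j * pi * x)

def dist(x):
    """||x||: distance from x to the nearest integer."""
    return abs(x - round(x))

for theta in (0.3, 0.123, 1 / 7, 0.49, 2 ** 0.5 - 1):
    for M in (5, 50, 500):
        s = abs(sum(e(theta * m) for m in range(M, 2 * M)))
        # |sin(pi*M*theta)/sin(pi*theta)| <= 1/(2*||theta||), and the
        # sum is trivially <= M; a tiny tolerance absorbs rounding.
        assert s <= min(M, 1 / (2 * dist(theta))) + 1e-9
```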
\begin{lemma} \label{Ssumestnew}
Under the hypotheses of Lemma~\ref{doubleaddmultsum}, suppose that $4MQ <N$, $b_k =1$ for $K \leq k < 2K$ and $|a_m| \leq 1$ for $M \leq m < 2M$. Define $D$ as in Lemma~\ref{doubleaddmultsum}. Then
\begin{enumerate}[(i)]
\item \label{doubleaddmultsum2} We have
\[ S \ll Q^{3/2} \mathcal{L} D \left( \frac{QMH}{r} + 1 \right) \left( \frac{K}{Q} + r \right) . \]
\item \label{doubleaddmultsum3} If $4MQ < r$ and
\[ 4MQ \left| \gamma - \frac{u}{r} \right| \leq \frac{1}{2r} , \]
then
\[ S \ll \mathcal{L} D Q^{3/2} r . \]
\end{enumerate}
\end{lemma}
\begin{proof}
Let $I_m$ (here and after) denote a subinterval of $[N/m, N'/m)$. We have
\[ S \leq QS^* + S^{**} , \]
where, for a suitably chosen nonprincipal $\chi_q \pmod{q}$,
\[ S^* = \sum_{Q \leq q < 2Q} \sum_{M \leq m < 2M} \left| \sum_{k \in I_m} \chi_q(k) e (\gamma mk) \right| \]
and
\[ S^{**} = \sum_{Q \leq q < 2Q} \sum_{M \leq m < 2M} \left| \sum_{k \in I_m} \chi_0 (k) e (\gamma m k) \right| . \]
To prove part \eqref{doubleaddmultsum2}, it suffices to show that
\[ S^* \ll Q^{1/2} \mathcal{L} D \left( \frac{QMH}{r} + 1 \right) \left( \frac{K}{Q} + r \right) \]
and
\[ S^{**} \ll Q \mathcal{L} D \left( \frac{QMH}{r} + 1 \right) \left( \frac{K}{Q} + 1 \right) . \]
We give the proof for $S^*$; the proof for $S^{**}$ is similar. \newline
Given $q$ and $m$, Lemma~\ref{BaBalem} together with, using the notation from Lemma~\ref{BaBalem},
\[ \left| S (\chi_q, n) \right| \leq \sqrt{q} \]
(see Chapter 9 of \cite{HD}) gives
\begin{equation*}
\begin{split}
\sum_{k \in I_m} \chi_q(k) e (\gamma mk) - \frac{1}{q} \sum_{1 \leq |n| \ll Mq} S(\chi_q, n) & \int\limits_{I_m} e \left( \left( \frac{n}{q} + \gamma m \right) y \right) \mathrm{d} y \\
& \ll q^{-1/2} M^{-1} + q^{1/2} \sum_{1 \leq n \ll Mq} n^{-1} \ll q^{1/2} \mathcal{L} .
\end{split}
\end{equation*}
Therefore
\[ \sum_{k \in I_m} \chi_q(k) e (\gamma mk) \ll q^{1/2} \mathcal{L} + q^{-1/2} \sum_{1 \leq |n| \ll Mq} \min \left( K, \frac{1}{\left| \gamma m - \frac{n}{q} \right|} \right). \]
Summing over $m$ and $q$,
\[ S^* \ll MQ^{3/2} \mathcal{L} + Q^{1/2} \sum_{Q \leq q < 2Q} \sum_{M \leq m < 2M} \sum_{1 \leq |n| \ll Mq} \min \left( \frac{K}{Q}, \frac{1}{\left| \gamma mq - n \right|} \right). \]
The contribution to the right-hand side of the above from $n$'s with $|n-\gamma mq| > 1/2$ is
\[ \ll MQ^{3/2} \mathcal{L} . \]
Now combining the variables $m$, $q$,
\begin{equation} \label{S*est}
S^* \ll MQ^{3/2} \mathcal{L} + Q^{1/2}D \sum_{MQ \leq m' < 4MQ} \min \left( \frac{K}{Q}, \frac{1}{\| \gamma m' \|} \right) .
\end{equation}
We can now deduce the desired bound for $S^*$ by applying \eqref{spacingbound}. \newline
Now for part \eqref{doubleaddmultsum3}, we note that \eqref{spacingbound2} is applicable to the reciprocal sum in \eqref{S*est} with $4MQ$ and $\gamma$ in place of $M$ and $\beta$. Hence
\[ S^* \ll MQ^{3/2} \mathcal{L} + Q^{1/2} D r \log 2r \ll D \mathcal{L} Q^{1/2} r \]
since $4MQ < r$. Similarly $S^{**} \ll D \mathcal{L} Q r$, and part \eqref{doubleaddmultsum3} follows.
\end{proof}
\begin{lemma} \label{SsumtoW}
Suppose that
\[ \left| \gamma - \frac{u}{r} \right| \leq \frac{\mathcal{L}^{A+1}}{r^2} \]
with $(u,r)=1$ and that $r^2 \leq N \leq r^2 \mathcal{L}^{2A+2}$. Then
\begin{enumerate}[(i)]
\item \label{Ssumesta} For $Q < N^{2/7-\varepsilon}$, $N^{4/7} \ll K \ll N^{5/7}$ and any $a_m$, $b_k$ with $|a_m| \leq \tau(m)^B$, $|b_k| \leq \tau(k)^B$, where $B$ is an absolute constant, the sum $S$ in \eqref{Ssumdef} satisfies the bound
\begin{equation} \label{Sest1}
S \ll QN^{1-\varepsilon/4}.
\end{equation}
\item \label{Ssumestb} For $Q \leq N^{2/7-\varepsilon}$, $M \ll N^{4/7}$ and $b_k =1$ for $K \leq k < 2K$, $|a_m| \leq 1$ for $M \leq m < 2M$, the sum $S$ in \eqref{Ssumdef} satisfies \eqref{Sest1}.
\end{enumerate}
\end{lemma}
\begin{proof}
In order to prove \eqref{Ssumesta}, we use Lemma~\ref{doubleaddmultsum}. As $D \ll N^{\varepsilon/15}$,
\[ SQ^{-1} N^{-1+\varepsilon/4} \ll Q^{-1} N^{-1/2+ \varepsilon/3} \left( Q^2 N^{3/14} + Q^{3/2} N^{5/14} \right) \ll N^{-1/2+\varepsilon/2} \left( QN^{3/14} + Q^{1/2} N^{5/14} \right) \ll 1. \]
To prove \eqref{Ssumestb}, we break the situation into two cases. If $K < N^{1-\varepsilon}$, then by \eqref{doubleaddmultsum2} of Lemma~\ref{Ssumestnew}
\[ SQ^{-1} N^{-1+\varepsilon/4} \ll Q^{1/2} N^{-1+\varepsilon/2} \left( N^{1/2} + MQ + \frac{N^{1-\varepsilon}}{Q} \right) \ll N^{1/7-1/2+\varepsilon} + N^{3/7+4/7-1-\varepsilon} + N^{-\varepsilon/2} \ll 1. \]
If $K \geq N^{1-\varepsilon}$, then $M \ll N^{\varepsilon}$ and \eqref{doubleaddmultsum3} of Lemma~\ref{Ssumestnew} is applicable since
\[ 4MQ \left| \gamma - \frac{u}{r} \right| \ll N^{-1+2/7+\varepsilon} . \]
Hence
\[ SQ^{-1} N^{-1+\varepsilon/4} \ll Q^{1/2} N^{-1/2+\varepsilon} \ll 1 , \]
giving the desired majorant.
\end{proof}
\begin{lemma} \label{hdid}
Let $f$ be an arbitrary complex function on $[N,2N)$. Let $N < N' \leq 2N$. The sum
\[ S = \sum_{N \leq n < N'} \Lambda(n) f(n) \]
can be decomposed into $O ( \mathcal{L}^2 )$ sums of the form
\[ \sum_{M<m \leq 2M} a_m \sum_{\substack{K \leq k < 2K \\ N \leq mk < N'}} f(mk) \; \; \; \;\mbox{or} \; \; \; \; \int\limits_N^{N'} \sum_{M \leq m < 2M} a_m \sum_{\substack{k \geq w \\ K \leq k < 2K \\ N \leq mk < N'}} f(mk) \frac{\mathrm{d} w}{w} \]
with $M \leq N^{1/4}$ and $|a_m| \leq 1$, together with $O (\mathcal{L})$ sums of the form
\[ \sum_{M<m \leq 2M} a_m \sum_{\substack{K \leq k < 2K \\ N \leq mk < N'}} b_k f(mk) \]
with $N^{1/2} \leq K \ll N^{3/4}$ and $\| a \|_2 \| b \|_2 \ll N^{1/2} \mathcal{L}^2$.
\end{lemma}
\begin{proof}
This follows from the arguments in \cite[Chapter 24]{HD} by taking $U=V=N^{1/4}$.
\end{proof}
We record a special case of \cite[Lemma 14]{BaWe}. For more background on the ``Harman sieve'', see \cite{GH}.
\begin{lemma} \label{BaWelem}
Let $W(n)$ be a complex function with support in $(N, 2N] \cap \mathbb{Z}$, $|W(n)| \leq N^{1/\varepsilon}$. For $r \in \mathbb{N}$, $z \geq 2$, let
\begin{equation}
S^*(r,z) = \sum_{(n, P(z))=1} W(rn) .
\end{equation}
Suppose that for some constant $c>0$, $0 \leq d \leq 1/2$, and for some $Y > 0$, we have, for any coefficients $a_m$, $b_k$ with $|a_m| \leq 1$, $|b_k| \leq \tau(k)$,
\begin{equation} \label{type1sumest}
\sum_{m \leq 2N^c} a_m \sum_k W(mk) \ll Y,
\end{equation}
and
\begin{equation} \label{type2sumest}
\sum_{N^c \leq m \leq 2N^{c+d}} a_m \sum_k b_k W(mk) \ll Y.
\end{equation}
Let $u_r$ ($r \leq N^c$) be complex numbers with $|u_r| \leq 1$ and $u_r=0$ for $\left( r, P(N^{\varepsilon}) \right) >1$. Then
\[ \sum_{r \leq (2N)^c} u_r S^* \left( r, (2N)^d \right) \ll Y \mathcal{L}^3 . \]
\end{lemma}
The following application of Lemma~\ref{BaWelem} will be used in the proof of Theorem~\ref{bea2}. We take
\begin{equation} \label{Wdef}
W(n) = \sum_{Q \leq q < 2Q} \sum_{\chi \bmod{q}} \eta_{\chi} \chi(n) e( \gamma n)
\end{equation}
for $N \leq n < N'$; otherwise, $W(n)=0$. Here $\eta_{\chi}$ is arbitrary with $| \eta_{\chi}| \leq 1$.
\begin{lemma} \label{S*withweights}
Suppose that
\[ \left| \gamma - \frac{u}{r} \right| \leq \frac{\mathcal{L}^{A+1}}{r^2} , \; (u,r)=1, \; N=r^2 , \; 1 \leq Q \leq N^{2/7-\varepsilon} . \]
Define $S^*(r,z)$ as above with $W$ defined in \eqref{Wdef}. Then
\[ \sum_{r \leq (2N)^{4/7}} u_r S^* \left( r, (2N)^{1/7} \right) \ll N \mathcal{L}^{-A} \]
for every $A>0$, provided that $|u_r| \leq 1$, $u_r=0$ for $\left( r, P(N^{\varepsilon}) \right) > 1$.
\end{lemma}
\begin{proof}
We need to verify \eqref{type1sumest} and \eqref{type2sumest} with $c=4/7$, $d=1/7$ and $Y=N \mathcal{L}^{-A-3}$. This is an application of Lemma~\ref{SsumtoW}.
\end{proof}
We now introduce some subsets of $\mathbb{R}^j$ needed in the proof of Theorem~\ref{bea2}. Write $E_j$ for the set of $j$-tuples $\mathbold{\alpha}_j = ( \alpha_1, \cdots, \alpha_j)$ satisfying
\[ \frac{1}{7} \leq \alpha_j < \alpha_{j-1} < \cdots < \alpha_1 \leq \frac{1}{2} \; \; \mbox{and} \; \; \alpha_1 + \alpha_2 + \cdots + \alpha_{j-1} + 2 \alpha_j \leq 1. \]
A tuple $\mathbold{\alpha}_j$ is said to be {\it good} if some subsum of $\alpha_1+ \cdots + \alpha_j$ is in $[2/7, 3/7] \cup [4/7, 5/7]$ and {\it bad} otherwise. \newline
We use the notation $p_j = (2N)^{\alpha_j}$. For instance, the sum
\[ \sum_{\substack{p_1 p_2 n_3 =k \\ (2N)^{1/7} \leq p_2 < p_1 < (2N)^{1/2}}} \psi( n_3, p_2) \]
will be written as
\[ \sum_{\substack{p_1p_2n_3 = k \\ \mathbold{\alpha}_2 \in E_2}} \psi(n_3, p_2) . \]
\begin{lemma} \label{primesumest}
Let $\gamma$, $u/r$, $N$, $Q$ be as in Lemma~\ref{S*withweights} and $E$ be a subset of $E_j$ defined by a bounded number of inequalities of the form
\begin{equation} \label{ineqgenform}
c_1 \alpha_1 + \cdots + c_j \alpha_j < c_{j+1} \; (\mbox{or} \; \leq c_{j+1}).
\end{equation}
Suppose that all points in $E$ are good and that throughout $E$, $z_j$ is either the function $z_j = (2N)^{\alpha_j}$ or the constant $z_j = (2N)^{1/7}$. Then for arbitrary $\eta_{\chi}$ with $| \eta_{\chi}| \leq 1$,
\[ \sum_{Q \leq q < 2Q} \sum_{\chi \bmod{q}} \eta_{\chi} \sum_{\substack{N \leq p_1 \cdots p_j n_{j+1} < N' \\ \mathbold{\alpha}_j \in E}} \chi(p_1 \cdots p_j n_{j+1}) e ( \gamma p_1 \cdots p_j n_{j+1}) \psi( n_{j+1} , z_j ) \ll N \mathcal{L}^{-A} , \]
for every $A>0$.
\end{lemma}
\begin{proof}
This is a consequence of \eqref{Ssumesta} of Lemma~\ref{SsumtoW}. On grouping a subset of the variables as a product $m=\prod_{i \in \mathcal{S}} p_i$, with $\mathcal{S} \subset \{ 1, \cdots, j \}$, we obtain a sum $S$ of the form appearing in \eqref{Ssumesta} of Lemma~\ref{SsumtoW}, except that a bounded number of inequalities of the form \eqref{ineqgenform} are present. These inequalities may be removed at the cost of a log power, by the mechanism noted earlier. See page 184 of \cite{BaWe} for a few more details of a similar argument. The lemma follows at once.
\end{proof}
\begin{lemma} \label{buchstabdecomp}
Let $D= \{ (\alpha_1, \alpha_2) \in E_2 : (\alpha_1, \alpha_2) \; \mbox{is bad}, \; \alpha_1 + 2 \alpha_2 > 5/7 \}$. Then
\[ X \left( \mathbb{P} ; n \right) - \sum_{\substack{p_1 p_2 n_3 = n \\ \mathbold{\alpha}_2 \in D}} \psi( n_3, p_2) = \varrho_1(n) + \varrho_2(n) + \varrho_3(n) - \varrho_4(n) - \varrho_5(n) . \]
Here
\[ \varrho_1(n) = \psi(n, (2N)^{1/7}) , \; \varrho_4 (n) = \sum_{\substack{p_1 n_2 = n \\ \mathbold{\alpha}_1 \in E_1}} \psi(n_2, (2N)^{1/7}) , \; \varrho_2(n) = \sum_{\substack{p_1p_2n_3=n \\ \mathbold{\alpha}_2 \in E_2 \setminus D}} \psi(n_3 , (2N)^{1/7}) , \]
\[ \varrho_5(n) = \sum_{\substack{p_1p_2p_3n_4=n \\ \mathbold{\alpha}_3 \in E_3 \\ (\alpha_1, \alpha_2) \in E_2 \setminus D}} \psi(n_4, (2N)^{1/7}) \; \; \; \mbox{and} \; \; \; \varrho_3(n) = \sum_{\substack{p_1p_2p_3p_4n_5=n \\ \mathbold{\alpha}_4 \in E_4 \\ (\alpha_1, \alpha_2) \in E_2 \setminus D}} \psi(n_5, p_4) . \]
\end{lemma}
\begin{proof}
We repeatedly use Buchstab's identity in the form
\[ \psi(m,z) = \psi(m,w) - \sum_{\substack{ph=m\\ w \leq p < z}} \psi(h,p) \; \; \; ( 2 \leq w < z) . \]
Thus
\begin{equation} \label{buchstab1}
\begin{split}
X ( \mathbb{P} ; n) & = \psi(n, (2N)^{1/2}) = \psi(n, (2N)^{1/7}) - \sum_{\substack{(2N)^{1/7} \leq p_1 < (2N)^{1/2} \\ p_1n_2=n}} \psi(n_2, p_1) \\
& = \varrho_1(n) - \varrho_4(n) + \sum_{\substack{p_1p_2n_3 =n \\ \mathbold{\alpha}_2 \in E_2}} \psi(n_3,p_2) ,
\end{split}
\end{equation}
\[ X( \mathbb{P} ; n ) - \sum_{\substack{p_1p_2n_3=n \\ \mathbold{\alpha}_2 \in D}} \psi(n_3 , p_2) = \varrho_1(n) - \varrho_4(n) + \sum_{\substack{p_1p_2n_3=n \\ \mathbold{\alpha}_2 \in E_2 \setminus D}} \psi(n_3 , p_2) . \]
Continuing the decomposition of the last sum,
\begin{equation} \label{buchstab2}
\sum_{\substack{p_1p_2n_3=n \\ \mathbold{\alpha}_2 \in E_2 \setminus D}} \psi(n_3 , p_2) = \sum_{\substack{p_1p_2n_3=n \\ \mathbold{\alpha}_2 \in E_2 \setminus D}} \psi( n_3 , (2N)^{1/7} ) - \sum_{\substack{p_1p_2p_3n_4=n \\ \mathbold{\alpha}_3 \in E_3 \\ (\alpha_1, \alpha_2) \in E_2 \setminus D}} \psi( n_4, (2N)^{1/7} ) + \sum_{\substack{p_1 p_2 p_3 p_4 n_5 = n \\ \mathbold{\alpha}_4 \in E_4 \\ (\alpha_1, \alpha_2) \in E_2 \setminus D}} \psi(n_5 , p_4) .
\end{equation}
Combining \eqref{buchstab1} and \eqref{buchstab2}, we complete the proof of the lemma.
\end{proof}
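Reading $\psi(m,z)$ as the indicator that $m$ has no prime factor below $z$ (so $\psi(1,z)=1$), Buchstab's identity used in this proof can be verified exhaustively over small ranges; the ranges and cutoffs below are arbitrary.

```python
def least_prime_factor(n):
    """Smallest prime factor of n, with the convention lpf(1) = infinity."""
    if n == 1:
        return float("inf")
    p = 2
    while p * p <= n:
        if n % p == 0:
            return p
        p += 1
    return n

def psi(m, z):
    """Indicator that m has no prime factor below z."""
    return 1 if least_prime_factor(m) >= z else 0

def primes(lo, hi):
    return [p for p in range(lo, hi) if p > 1 and least_prime_factor(p) == p]

# Buchstab: psi(m, z) = psi(m, w) - sum over m = p*h with w <= p < z
for w, z in ((2, 5), (3, 11), (5, 13)):
    for m in range(1, 500):
        correction = sum(psi(m // p, p)
                         for p in primes(w, z) if m % p == 0)
        assert psi(m, z) == psi(m, w) - correction
```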
\begin{lemma} \label{primechardetect}
Let $\gamma$, $u/r$, $N$ and $Q$ be as in Lemma~\ref{S*withweights} with $\varrho_1$, $\cdots$, $\varrho_5$ as in Lemma~\ref{buchstabdecomp}; we have
\[ \sum_{Q \leq q < 2Q} \sum_{\chi \bmod{q}} \eta_{\chi} \sum_{N \leq n < N'} \varrho_j (n) \chi(n) e(\gamma n) \ll QN \mathcal{L}^{-A} \]
for arbitrary $\eta_{\chi}$ with $|\eta_{\chi}| \leq 1$ and any $A > 0$.
\end{lemma}
\begin{proof}
This follows from Lemmas \ref{S*withweights} and \ref{primesumest} for $j=1,2, 4, 5$ on noting that $\alpha_1 + \alpha_2 + \alpha_3 \leq \alpha_1 + 2 \alpha_2 \leq 5/7$ for $j=5$, so that either $\mathbold{\alpha}_3$ is good or $\alpha_1 + \alpha_2 + \alpha_3 < 4/7$ (similarly for $j=2$). For $j=3$, we need to show that each $\mathbold{\alpha}_4$ counted is good. Suppose that some $\mathbold{\alpha}_4$ is bad. We have $\alpha_1 + \alpha_2 + \alpha_3 + 2 \alpha_4 \leq 1$. Hence $\alpha_1 + \alpha_2 + \alpha_3 \leq 5/7$ from which we infer that $\alpha_1 + \alpha_2 + \alpha_3 < 4/7$. Therefore, $\alpha_1 + \alpha_2 < 3/7$. But we know that $\alpha_1 + \alpha_2 > 2/7$. This makes $\mathbold{\alpha}_4$ good, a contradiction.
\end{proof}
\section{Proof of Theorems \ref{beabomvino} and \ref{beabardavhal}}
\begin{proof}[Proof of Theorem \ref{beabomvino}]
With a suitable choice of $a_q$, $(a_q, q) =1$, we have
\begin{equation*}
\begin{split}
\max_{(a,q)=1} E (N, N', \gamma, q, a) & \leq \sup_I \left| \sum_{\substack{N \leq n < N' \\ \gamma n \in I \bmod{1} \\ n \equiv a_q \bmod{q}}} \Lambda(n) - |I| \sum_{\substack{N \leq n < N' \\ n \equiv a_q \bmod{q}}} \Lambda(n) \right| + \left| \sum_{\substack{N \leq n < N' \\ n \equiv a_q \bmod{q}}} \Lambda(n) - \frac{N'-N}{\varphi(q)} \right| \\
&= T_1(q) + T_2(q),
\end{split}
\end{equation*}
say. In view of the Bombieri-Vinogradov theorem, we need only bound $\sum_q T_1(q)$, which is, applying Lemma~\ref{Baklem},
\[ \ll \sum_{q \leq N^{1/4-\varepsilon}} \mathcal{L}^{-A-1} \sum_{\substack{N \leq n < N' \\ n \equiv a_q \bmod{q}}} \Lambda(n) + \sum_{q \leq \min (r, N^{1/4})N^{-\varepsilon}} \sum_{h \leq \mathcal{L}^{A+1}} \frac{1}{h} \left| \sum_{\substack{N \leq n < N' \\ n \equiv a_q \bmod{q}}} \Lambda(n) e( \gamma nh) \right| . \]
Let $H= \mathcal{L}^{A+1}$. Mindful of the Brun-Titchmarsh inequality, it remains to show that for $1 \leq h \leq H$,
\[ \sum_{q \leq \min (N^{1/4}, r) N^{-\varepsilon}} \left| \sum_{\substack{N \leq n < N' \\ n \equiv a_q \bmod{q}}} \Lambda(n) e ( \gamma nh) \right| \ll N \mathcal{L}^{-A-1} . \]
Reducing $hu/r$ to lowest terms, we need only show that
\[ \sum_{q \leq \min (N^{1/4}, r) N^{-\varepsilon/2}} \eta_q \sum_{\substack{N \leq n < N' \\ n \equiv a_q \bmod{q}}} \Lambda(n) e (\gamma n) \ll N \mathcal{L}^{-A-1} \]
under the modified hypothesis \eqref{modgammacond} on $\gamma$ (with $H= \mathcal{L}^{A+1}$), whenever $| \eta_q | \leq 1$. \newline
Using Lemma~\ref{hdid}, it suffices to show that
\begin{equation} \label{afterhdid}
\sum_{q \leq \min (N^{1/4}, r) N^{-\varepsilon/2}} \eta_q \mathop{\sum_{M \leq m < 2M} \sum_{K \leq k < 2K}}_{\substack{N \leq mk < N' \\ mk \equiv a_q \bmod{q}}} a_m b_k e (\gamma mk) \ll N \mathcal{L}^{-A-3}
\end{equation}
under either of the following sets of conditions.
\begin{enumerate}[(a)]
\item \label{type2sumreq} $ \| a \|_2 \| b \|_2 \ll N^{1/2} \mathcal{L}^2$, $N^{1/2} \leq K \leq N^{3/4}$;
\item \label{type1sumreq} $|a_m| \leq 1$, $b_k =1$ for $k \in I_m \subset [K, 2K)$, $b_k = 0$ otherwise, $M \leq N^{1/4}$.
\end{enumerate}
We use Dirichlet characters to detect the congruence relation in \eqref{afterhdid} and we require the estimate
\[ \sum_{q \leq \min (N^{1/4}, r) N^{-\varepsilon/2}} \frac{\eta_q}{\varphi(q)} \sum_{\chi \bmod{q}} \overline{\chi} (a_q) \mathop{\sum_{M \leq m < 2M} \sum_{K \leq k < 2K}}_{N \leq mk < N'} a_m b_k \chi(mk) e (\gamma mk) \ll N \mathcal{L}^{-A-4} . \]
It suffices to show that
\begin{equation} \label{afterhdid2}
S: = \sum_{Q \leq q < 2Q} \sum_{\chi \bmod{q}} \left| \mathop{\sum_{M \leq m < 2M} \sum_{K \leq k < 2K}}_{N \leq mk < N'} a_m b_k \chi(mk) e(\gamma mk) \right| \ll QN \mathcal{L}^{-A-6}
\end{equation}
for $Q \leq \min (N^{1/4} , r ) N^{-\varepsilon/2}$. \newline
In case \eqref{type2sumreq}, we apply Lemma~\ref{doubleaddmultsum}, which gives
\begin{equation*}
\begin{split}
S & \ll N^{1/2+\varepsilon/6} \left( Q^2 M^{1/2} + \frac{Q^{3/2} N^{1/2}}{r^{1/2}} + Q^{3/2} K^{1/2} + Q^{3/2} r^{1/2} \right) \\
& \ll N^{3/4+\varepsilon/6} Q^2 + \frac{N^{1+\varepsilon/6} Q^{3/2}}{r^{1/2}} + Q^{3/2} N^{7/8+\varepsilon/6} .
\end{split}
\end{equation*}
Each one of these three terms is $\ll QN \mathcal{L}^{-A-6}$ as
\[ N^{3/4+\varepsilon/6} Q^2 ( QN \mathcal{L}^{-A-6} )^{-1} \ll QN^{-1/4+\varepsilon/5} \ll 1, \]
\[ N^{1+\varepsilon/6} Q^{3/2} r^{-1/2} (QN \mathcal{L}^{-A-6})^{-1} \ll Q^{1/2} N^{\varepsilon/4} r^{-1/2} \ll 1 , \]
since $Q \leq r N^{-\varepsilon/2}$, and
\[ N^{7/8+\varepsilon/6} Q^{3/2} (QN \mathcal{L}^{-A-6})^{-1} \ll N^{-1/8+\varepsilon/5} Q^{1/2} \ll 1 . \]
In case \eqref{type1sumreq}, we use Lemma~\ref{Ssumestnew}. Suppose that $K < N^{1-\varepsilon/4}$; \eqref{doubleaddmultsum2} of Lemma~\ref{Ssumestnew} gives
\[ S \ll Q^{3/2} N^{\varepsilon/6} \left( \frac{N}{r} + QM + \frac{K}{Q} + r \right) . \]
Each of the above four terms is $\ll QN \mathcal{L}^{-A-6}$, since
\[ \frac{Q^{3/2}N^{1+\varepsilon/6}}{r} (QN \mathcal{L}^{-A-6})^{-1} \ll Q^{1/2} r^{-1} N^{\varepsilon/5} \ll 1, \]
\[ Q^{5/2} N^{\varepsilon/6} M (QN \mathcal{L}^{-A-6})^{-1} \ll Q^{3/2} N^{-3/4+\varepsilon/5} \ll 1, \]
\[ Q^{1/2} N^{\varepsilon/6} K (QN \mathcal{L}^{-A-6})^{-1} \ll K N^{-1+\varepsilon/4} \ll 1\]
and
\[ Q^{3/2} N^{\varepsilon/6} r (QN \mathcal{L}^{-A-6})^{-1} \ll Q^{1/2} N^{-1/4+\varepsilon/5} \ll 1. \]
Now suppose that $K \geq N^{1-\varepsilon/4}$. Then
\[ 4MQ \ll QN^{\varepsilon/4}, \; \mbox{thus} \; 4MQ < r \]
and
\[ 4MQr \left| \gamma - \frac{u}{r} \right| \ll MQ N^{-3/4}, \; \mbox{hence} \; 4MQr \left| \gamma - \frac{u}{r} \right| \leq \frac{1}{2} .\]
So \eqref{doubleaddmultsum3} of Lemma~\ref{Ssumestnew} gives comfortably:
\[ S \ll N^{\varepsilon} Q^{3/2} r \ll QN \mathcal{L}^{-A-6} , \]
completing the proof.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{beabardavhal}]
We first show that the contribution to the sum in \eqref{beabardavhaleq} from $q \leq \mathcal{L}^{A+1}$ is
\[ \ll N^2 \mathcal{L}^{-A} \ll NR. \]
Since, for some $Q \leq \mathcal{L}^{A+1}$,
\[ \sum_{q \leq \mathcal{L}^{A+1}} \sum_{\substack{a=1 \\ (a,q)=1}}^q E(N, N', \gamma, q, a)^2 \ll N \sum_{q \leq \mathcal{L}^{A+1}} \frac{1}{\varphi(q)} \sum_{\substack{a=1 \\ (a,q)=1}}^q E(N, N', \gamma, q, a) \ll \frac{N \mathcal{L}}{Q} \sum_{Q \leq q < 2Q} \max_{(a,q)=1} E(N, N', \gamma, q, a) , \]
it suffices to show for this $Q$ that
\begin{equation} \label{smallqest}
\sum_{Q \leq q < 2Q} \max_{(a,q)=1} E(N,N', \gamma, q, a) \ll QN \mathcal{L}^{-A-1} .
\end{equation}
We may suppose that $A$ is large. Arguing as in the proof of Theorem~\ref{beabomvino}, we need only show that \eqref{afterhdid2} follows from either \eqref{type2sumreq} or \eqref{type1sumreq}. By Dirichlet's theorem, there is a rational approximation $b/r$ to $\gamma$ satisfying \eqref{gammaratapprox}. For any $\eta > 0$,
\[ N^{-3/4} \geq \| \gamma r \| \gg \exp (- r^{\eta}) , \]
hence $r \gg \mathcal{L}^{5A}$. Now we apply Lemma~\ref{doubleaddmultsum} to prove the desired bound under \eqref{type2sumreq}. Since $D \leq Q \leq \mathcal{L}^{A+1}$, the term
\[ \| a \|_2 \| b \|_2 \mathcal{L}^2 D^{1/2} Q^{3/2} H^{1/2} N^{1/2} r^{-1/2} \]
presents no difficulty; the other terms are clearly all small enough. For the bound under \eqref{type1sumreq}, a similar remark applies to Lemma~\ref{Ssumestnew} and the terms
\[ Q^{3/2} \mathcal{L} DNHr^{-1} \]
if $K < N^{1-\varepsilon/4}$ and
\[ \mathcal{L} D Q^{3/2} r \]
if $K \geq N^{1-\varepsilon/4}$. This establishes \eqref{smallqest}. \newline
It remains to examine the contribution to the sum in \eqref{beabardavhaleq} from $q \in [Q, 2Q)$ with $\mathcal{L}^{A+1} \leq Q \leq R$. We have
\begin{equation*}
\begin{split}
\sum_{Q \leq q < 2Q} \sum_{\substack{a=1 \\ (a,q)=1}}^q E(N, N', \gamma, q, a)^2 \ll \sum_q \sum_a \sup_I & \left| \sum_{\substack{N < n \leq N' \\ \{ \gamma n \} \in I \\ n \equiv a \bmod{q}}} \Lambda(n) - |I| \sum_{\substack{N < n \leq N' \\ n \equiv a \bmod{q}}} \Lambda(n) \right|^2 \\
&+ \sum_q \sum_a \left( \sum_{\substack{N < n \leq N' \\ n \equiv a \bmod{q}}} \Lambda(n) - \frac{N'-N}{\varphi(q)} \right)^2 = T_1(Q) + T_2(Q),
\end{split}
\end{equation*}
say. Since $T_2(Q)$ is covered by a slight variant of the discussion in \cite[Chapter 29]{HD}, we focus our attention on $T_1(Q)$. By Lemma~\ref{Baklem},
\begin{equation*}
\begin{split}
T_1(Q) & \ll \sum_{Q \leq q < 2Q} \sum_{\substack{a=1 \\ (a,q)=1}}^q \mathcal{L}^{-2A} \left( \sum_{\substack{N < n \leq N' \\ n\equiv a \bmod{q}}} \Lambda(n) \right)^2 +\sum_{Q \leq q < 2Q} \sum_{\substack{a=1 \\ (a,q)=1}}^q \left( \sum_{h \leq \mathcal{L}^A} \frac{1}{h} \left| \sum_{\substack{N < n \leq N' \\ n \equiv a \bmod{q}}} \Lambda(n) e ( \gamma n h) \right| \right)^2 \\
& = T_3(Q) + T_4(Q),
\end{split}
\end{equation*}
say. The Brun-Titchmarsh Theorem gives a satisfactory bound for $T_3(Q)$. Applying Cauchy's inequality to $T_4(Q)$, we get
\begin{equation*}
\begin{split}
T_4(Q) & \leq \left( \sum_{h \leq \mathcal{L}^A} \frac{1}{h} \right) \sum_{h \leq \mathcal{L}^A} \frac{1}{h} \sum_{Q \leq q < 2Q} \sum_{\substack{a=1 \\ (a,q)=1}}^q \left| \sum_{\substack{N < n \leq N' \\ n \equiv a \bmod{q}}} \Lambda(n) e (\gamma n h ) \right|^2 \\
& \ll (\log \mathcal{L})^2 \sum_{Q \leq q < 2Q} \frac{1}{\varphi(q)} \sum_{\chi \bmod{q}} \left| \sum_{N < n \leq N'} \Lambda(n) \chi(n) e(\gamma n h) \right|^2,
\end{split}
\end{equation*}
for some $h \leq \mathcal{L}^A$. From this point, we can conclude the proof by following, with slight changes, the argument in \cite[pp. 170-171]{HD}.
\end{proof}
\section{Proof of Theorems \ref{bea1} and \ref{bea2}}
\begin{proof}[Proof of Theorem~\ref{bea1}]
Let $\gamma = \alpha^{-1}$ and $N \geq C_1(\alpha, t)$, $0 < \varepsilon < C_2(\alpha, t)$. By Dirichlet's theorem, there is a reduced fraction $b/r$ satisfying \eqref{gammaratapprox}. Our hypothesis on $\alpha$ implies that
\[ N^{-3/4} \geq \| \gamma r \| \gg r^{-3} , \; r \gg N^{1/4} . \]
Let $h_1''$, $\cdots$, $h_l''$ be the first $l$ primes in $(l, \infty)$. Any translate
\[ \mathcal{H} = \{ h_1' , \cdots, h_k' \} + h , \; h \in \mathbb{N} \]
with $\{ h_1' , \cdots, h_k' \} \subset \{ h_1'', \cdots, \; h_l'' \}$, is an admissible set. Using \eqref{Baklem1} of Lemma~\ref{Baklem}, we choose $h_1'$, $\cdots$, $h_k'$ so that
\begin{equation} \label{klowerbound}
k \geq \varepsilon \gamma l
\end{equation}
and for some real $\eta$,
\[ - \gamma h_m' \in ( \eta, \eta + \varepsilon \gamma ) \pmod{1} \]
for every $m = 1, \cdots , k$. Now choose $h \in \mathbb{N}$, $h \ll_{\gamma} 1$ so that
\[ h \gamma \in ( \eta - \varepsilon \gamma, \eta ) \pmod{1} . \]
Thus, writing $h_m = h_m'+h$, we have
\[ - \gamma h_m = - \gamma h_m' - \gamma h \in (0 , 2 \varepsilon \gamma) \pmod{1} . \]
We apply Theorem~\ref{gentheo} to the set
\[ \mathcal{A} = \{ n \in [N, 2N) : \gamma n \in I \pmod{1} \} \]
where $I = ( \gamma \beta - \gamma, \gamma \beta )$, taking $q_0 = q_1 = 1$, $s=1$, $\varrho(n) = X ( \mathbb{P}; n)$, $\theta = 1/4 - \varepsilon$, $b = 1-2 \varepsilon$,
\[ Y = \gamma N, \; Y_{1,m} = l_m \int\limits_N^{2N} \frac{1}{\log t} \mathrm{d} t = \frac{l_m Y}{\mathcal{L} \gamma} \left( 1 + o(1) \right) . \]
Here $J_m$, $l_m$ are the interval $J$ and its length $l$ in Lemma~\ref{intervallem} (with $\varepsilon \gamma$ in place of $\varepsilon$), so that
\[ \gamma > l_m > \gamma (1-2 \varepsilon) . \]
Since \eqref{regcond1} can be proved in a similar (but simpler) fashion to \eqref{regcond3}, we only show that \eqref{regcond3} holds. We can rewrite this in the form
\begin{equation} \label{bomvinoalt}
\sum_{q \leq x^{1/4-\varepsilon}} \mu^2(q) \tau_{3k}(q) \left| \sum_{\substack{ N+h_m \leq p < 2N \\ p \equiv a_q \bmod{q} \\ \gamma p \in J_m \bmod{1}}} 1 - \frac{l_m}{\varphi(q)} \int\limits_N^{2N} \frac{\mathrm{d} t}{\log t} \right| \ll N \mathcal{L}^{-k-\varepsilon} .
\end{equation}
The function $E(N, N' , \gamma, q, a)$ appearing in Theorem~\ref{beabomvino} is not quite in the form that we need. However, discarding prime powers and using partial summation in the standard way, we readily deduce a variant of \eqref{bomvinoalt} from Theorem~\ref{beabomvino}, in which $N \mathcal{L}^{-A}$ appears in place of $N \mathcal{L}^{-k-\varepsilon}$, and the weight $\mu^2(q) \tau_{3k} (q)$ is absent. We then obtain \eqref{bomvinoalt} by using Cauchy's inequality; see \cite[(5.20)]{May} for a very similar computation. \newline
We are now in a position to use Theorem~\ref{gentheo}, obtaining a set $\mathcal{S}$ of $t$ primes in $\mathcal{A} \cap [N, 2N)$, which of course have the form $[\alpha n + \beta]$, with
\[ D(\mathcal{S}) \leq h_k - h_1 \leq h_l'' \]
provided that
\begin{equation} \label{Mkreq}
M_k > \frac{2t-2}{(1-2\varepsilon)(1/4-\varepsilon)} .
\end{equation}
We take $l$ to be the least integer with
\[ \log (\varepsilon \gamma l) \geq \frac{2t-2}{(1-2\varepsilon)(1/4-\varepsilon)} + C \]
for a suitable absolute constant $C$, so that \eqref{Mkreq} follows from \eqref{klowerbound} and \eqref{mklowerbound}. Therefore,
\[ \gamma l \ll \exp (8t) , \; l \ll \alpha \exp (8t) , \; D( \mathcal{S} ) \ll l \log l \ll \alpha (t + \log \alpha ) \exp (8t) , \]
completing the proof.
\end{proof}
In the proof of Theorem~\ref{bea2}, we shall need the following.
\begin{lemma} \label{intbound}
Let $D$ be as in Lemma~\ref{buchstabdecomp} and let $\omega_0(t)$ denote Buchstab's function.
\begin{enumerate}[(i)]
\item \label{intbound1} The points of $D$ lie in two triangles $A_1$, $A_2$, where $A_1$ has vertices
\[ \left( \frac{5}{21}, \; \frac{5}{21} \right), \; \left( \frac{2}{7} , \frac{3}{14} \right) , \; \left( \frac{2}{7} , \frac{2}{7} \right) \]
and $A_2$ has vertices
\[ \left( \frac{1}{2}, \; \frac{3}{14} \right), \; \left( \frac{3}{7} , \frac{2}{7} \right) , \; \left( \frac{1}{2} , \frac{1}{4} \right) . \]
\item \label{intbound2} For $j=1,2$, let
\[ I_j = \int\limits_{A_j} \frac{1}{\alpha_1 \alpha_2^2} \omega_0 \left( \frac{1-\alpha_1 - \alpha_2}{\alpha_2} \right) \mathrm{d} \alpha_1 \mathrm{d} \alpha_2 . \]
Then $I_1 < 0.03925889$ and $I_2 < 0.0566295$.
\end{enumerate}
\end{lemma}
\begin{proof} Let $(\alpha_1, \alpha_2) \in D$. If $\alpha_1 + \alpha_2 > 5/7$, then we have
\[ \alpha_1 + \alpha_2 > \frac{5}{7}, \; \alpha_1 + 2 \alpha_2 \leq 1, \; \alpha_1 \leq \frac{1}{2} . \]
This defines a triangle which is easily verified to be $A_2$. If $\alpha_1 + \alpha_2 \leq 5/7$, then as $\mathbold{\alpha}_2$ is bad, we have in turn
\[ \alpha_1 + \alpha_2 < \frac{4}{7} , \; \alpha_1 < \frac{3}{7} , \; \alpha_1 < \frac{2}{7} . \]
Altogether, we have
\[ \alpha_1 + 2 \alpha_2 > \frac{5}{7} , \; \alpha_1 < \frac{2}{7} , \; \alpha_2 < \alpha_1 . \]
This defines a triangle which we can verify to be $A_1$. This proves \eqref{intbound1}. Now \eqref{intbound2} requires a computer calculation, which was kindly carried out by Andreas Weingartner.
\end{proof}
\begin{proof} [Proof of Theorem~\ref{bea2}]
With a different value of $l$, we choose $h_1''$, $\cdots$, $h_l''$ and $h_1$, $\cdots$, $h_k$ exactly as in the proof of Theorem~\ref{bea1}. In applying Theorem~\ref{gentheo}, we also take $I$, $\mathcal{A}$, $q_0$, $q_1$, $Y$, $J_m$, $l_m$ as in that proof, but now $\theta=2/7-\varepsilon$, $s=5$, $a=3$; the functions $\varrho_1(n)$, $\cdots$, $\varrho_5(n)$ are given in Lemma~\ref{buchstabdecomp}. \newline
There is little difficulty in verifying \eqref{regcond1} by a similar but simpler version of the proof of \eqref{regcond3}. So we concentrate on \eqref{regcond3}. We recall that this can be rewritten as
\begin{equation} \label{bomvinoalt2}
\sum_{q \leq x^{\theta}} \mu^2 (q) \tau_{3k} (q) \left| \sum_{\substack{ n \equiv a_q \bmod{q} \\ \gamma n \in J_m \bmod{1} \\ N + h_m \leq n < 2N}} \varrho_g(n) - \frac{Y_{g,m}}{\varphi(q)} \right| \ll N \mathcal{L}^{-k-\varepsilon} .
\end{equation}
We define $Y_{g,m}$ by
\[ Y_{g,m} = l_m \sum_{N \leq n < 2N} \varrho_g(n) . \]
It is well known that
\begin{equation} \label{Ygmapprox}
Y_{g,m} = \frac{l_m c_g N}{\mathcal{L}} \left( 1 + o(1) \right) ,
\end{equation}
where $c_g$ is given by a multiple integral. In fact, we have
\[ c_1 + c_2 + c_3 - c_4 - c_5 = 1 - \int\limits_{\mathbold{\alpha}_2 \in D} \frac{1}{\alpha_1 \alpha_2^2} \omega_0 \left( \frac{1-\alpha_1 - \alpha_2}{\alpha_2} \right) \mathrm{d} \alpha_1 \mathrm{d} \alpha_2 . \]
Similar calculations are found in \cite[Chapter 1]{GH}. \newline
Fix $m$ and $g$. By analogy with the proof of Theorem~\ref{beabomvino}, we can obtain \eqref{regcond3} by showing
\begin{equation} \label{regcond3req1}
\sum_{q \leq N^{2/7-\varepsilon}} \left| \sum_{\substack{N \leq n < N' \\ n \equiv a_q \bmod{q}}} \varrho_g(n) - \frac{1}{\varphi(q)} \sum_{N \leq n < N'} \varrho_g(n) \right| \ll N \mathcal{L}^{-A}
\end{equation}
for every $A>0$ and
\begin{equation} \label{regcong3req2}
\sum_{q \leq N^{2/7-\varepsilon}} \left| \sum_{\substack{N \leq n < N' \\ n \equiv a_q \bmod{q}}} \varrho_g(n) e ( \gamma nh) \right| \ll N \mathcal{L}^{-A}
\end{equation}
for $1 \leq h \leq \mathcal{L}^{A+1}$ and for every $A > 0$. Again adapting the argument of Theorem~\ref{beabomvino}, we see that \eqref{regcong3req2} is a consequence of Lemma~\ref{primechardetect}. \newline
For \eqref{regcond3req1}, it suffices to show, recalling Lemma~\ref{splitdiag}, that for arbitrary $\eta_{\chi} \ll 1$ and $Q \leq N^{2/7-\varepsilon}$,
\begin{equation} \label{regcond3req1redprim}
\sum_{Q \leq q < 2Q} \ \sideset{}{^{\star}} \sum_{\chi \bmod{q}} \eta_{\chi} \sum_{N \leq n < N'} \varrho_g(n) \chi(n) \ll QN\mathcal{L}^{-A}
\end{equation}
for every $A > 0$. This can be readily deduced from the Siegel-Walfisz theorem for $Q \leq \mathcal{L}^{2A}$, so we assume that $Q > \mathcal{L}^{2A}$. \newline
We apply Lemma~\ref{BaWelem} with
\[ W (n) = \sum_{Q \leq q < 2Q} \ \sideset{}{^{\star}} \sum_{\chi \bmod{q}} \eta_{\chi} \chi(n) \]
if $N \leq n < N'$ and $W(n)=0$ otherwise. \newline
For example, when $g=3$, the left-hand side of \eqref{regcond3req1redprim} is
\[ \sum_{\substack{N \leq p_1 p_2 p_3 n_4 < N' \\ (n_4, P((2N)^{1/7})) = 1 \\ \mathbold{\alpha}_3 \in E_3 \\ (\alpha_1, \alpha_2) \in E_2 \setminus D}} W (p_1 p_2 p_3 n_4) = \sum_{\substack{ \mathbold{\alpha}_3 \in E_3 \\ ( \alpha_1 , \alpha_2) \in E_2 \setminus D}} S^* (p_1 p_2 p_3, (2N)^{1/7} ) . \]
We shall show that \eqref{type1sumest} and \eqref{type2sumest} hold with $Y = QN \mathcal{L}^{-A-3}$, $c = 4/7$ and $d=1/7$. (We could reduce the constraints on $c$ and $d$, but that would not be useful in the present context.) Once we have done this, we can follow the proof of Lemma~\ref{primechardetect} to prove \eqref{regcond3req1redprim}. \newline
To prove \eqref{type1sumest}, we use the Polya-Vinogradov bound for character sums to obtain
\begin{equation*}
\begin{split}
\sum_{m \leq 2N^{4/7}} a_m \sum_k W(mk) & = \sum_{m \leq 2N^{4/7}} a_m \sum_{Q \leq q < 2Q} \sideset{}{^{\star}} \sum_{\substack{\chi \bmod{q} \\ N \leq mk < N'}} \eta_{\chi} \chi(mk) \\
& \ll \mathcal{L} \sum_{m \leq 2N^{4/7}} \sum_{Q \leq q < 2Q} q^{1/2} \ll \mathcal{L} Q^{3/2} N^{4/7-\varepsilon} \ll QN \mathcal{L}^{-A-3} .
\end{split}
\end{equation*}
Now to prove \eqref{type2sumest}, we note that by the method of \cite[Section 3.2]{GH} mentioned earlier, it suffices to show that
\[ \sum_{M \leq m < 2M} a_m \sum_{K \leq k < 2K} b_k W(mk) \ll QN \mathcal{L}^{-A} \]
whenever $|a_m| \leq 1$ and $|b_k| \leq \tau(k)$, $N^{4/7} \ll M \ll N^{5/7}$, $MK \asymp N$. That is, it suffices to show that
\begin{equation} \label{beforelargesieve}
\sum_{Q \leq q < 2Q} \ \sideset{}{^{\star}} \sum_{\chi \bmod{q}} \left| \sum_{M \leq m < 2M} a_m \chi(m) \right| \left| \sum_{K \leq k < 2K} b_k \chi(k) \right| \ll Q N \mathcal{L}^{-A} .
\end{equation}
Following the proof of (6) in \cite[Chapter 28]{HD}, the left-hand side of \eqref{beforelargesieve} is
\[ \ll \mathcal{L} ( M+Q^2)^{1/2} (K+Q^2)^{1/2} \| a \|_2 \| b \|_2 \ll \mathcal{L}^3 \left( N^{1/2} + M^{1/2} Q + Q^2 \right) N^{1/2} \ll QN \mathcal{L}^{-A} , \]
since $\mathcal{L}^3 Q^{-1} N \ll \mathcal{L}^{3-A} N$, $\mathcal{L}^3 M^{1/2} N^{1/2} \ll \mathcal{L}^3 N^{6/7} \ll N \mathcal{L}^{-A}$ and $\mathcal{L}^3 Q N^{1/2} \ll \mathcal{L}^3 N^{11/14} \ll N \mathcal{L}^{-A}$. This proves \eqref{regcond3} with the present choice of $\mathcal{A}$, $Y_{g,m}$, etc. \newline
Applying Theorem~\ref{gentheo}, we find that there is a set $\mathcal{S}$ of $t$ primes in $\mathcal{A}$ (and thus of the form $[ \alpha m + \beta ]$) having diameter
\[ D(\mathcal{S}) \leq h_k - h_1 \ll l \log l \]
provided that
\[ M_k > \frac{2t-2}{b (2/7- \varepsilon)} . \]
Here $b$ must have the property
\[ b_{1,m} + b_{2,m} + b_{3,m} - b_{4,m} - b_{5,m} \geq b > 0 ; \]
that is,
\[ l_m (c_1 + c_2 + c_3 - c_4 - c_5 ) \geq b \gamma > 0 . \]
We can choose
\[ b = (1-2 \varepsilon) \left( 1- \int\limits_{\mathbold{\alpha}_2 \in D} \frac{1}{\alpha_1 \alpha_2^2} \omega_0 \left( \frac{1-\alpha_1 - \alpha_2}{\alpha_2} \right) \mathrm{d} \alpha_1 \mathrm{d} \alpha_2 \right) . \]
Using Lemma~\ref{intbound}, we see that
\[ b > 0.90411. \]
Now we proceed just as the proof of Theorem~\ref{bea1}. We may choose any $l$ for which
\[ \log ( \varepsilon \gamma l ) \geq \frac{2t-2}{0.90411 (2/7-\varepsilon)} + C \]
for a suitable constant $C$, and now it is a simple matter to deduce that
\[ D( \mathcal{S} ) < C_4 \alpha ( \log \alpha + t) \exp ( 7.743 t) ,\]
where $C_4$ is an absolute constant.
\end{proof}
\noindent{\bf Acknowledgments.} This work was done while L. Z. held a visiting position at the Department of Mathematics of Brigham Young University (BYU). He wishes to thank BYU for its warm hospitality during his thoroughly enjoyable stay in Provo.
\vspace*{.5cm}
\noindent\begin{tabular}{p{8cm}p{8cm}}
Roger C. Baker & Liangyi Zhao \\
Department of Mathematics & School of Mathematics and Statistics \\
Brigham Young University & University of New South Wales\\
Provo, UT 84602, U. S. A. & Sydney, NSW 2052 Australia \\
Email: {\tt [email protected]} & Email: {\tt [email protected]} \\
\end{tabular}
\end{document} |
\begin{document}
\numberwithin{equation}{section}
\title{Analysis of the High Water Mark Convergents of Champernowne's Constant in Various Bases}
\begin{abstract}
\noindent In this paper, we show that patterns exist in the properties of the High Water Mark (HWM) convergents of Champernowne's Constant\xspace in various bases (\ifmmode C_b \else $C_b$\xspace \fi), specifically in bases 2 through 124. The convergents are formed by truncating the Continued Fraction Expansion (CFE) of \ifmmode C_b \else $C_b$\xspace \fi immediately before the CFE HWMs. These patterns have been extended from the known patterns in the CFE HWMs of Champernowne's Constant\xspace in base ten (\ifmmode C_{10} \else $C_{\mathit{10}}$\xspace \fi). We show that the patterns may be used to efficiently calculate the CFE coefficients of \ifmmode C_b \else $C_b$\xspace \fi, and to calculate and predict the lengths of the HWM coefficients, the number of correct digits of \ifmmode C_b \else $C_b$\xspace \fi as calculated by the convergent, and the convergent error. We have discovered that minor corrections to the pattern formulations in base 10 are required for some bases, and these corrections are presented and discussed. The resulting formulations may be used to make the calculations for \ifmmode C_b \else $C_b$\xspace \fi in any base.
\end{abstract}
\section{Introduction} \label{sec:intro}
Champernowne's Constant\xspace, \ifmmode C_{10} \else $C_{\mathit{10}}$\xspace \fi, was formulated in 1933 by British mathematician and economist D. G. Champernowne as an example of a normal number \cite{bbcn}. It is formed by concatenating the positive integers to the right of a decimal point, i.e., $0.123456789101112 \ldots$, without end. In 1937, Kurt Mahler proved that \ifmmode C_{10} \else $C_{\mathit{10}}$\xspace \fi is transcendental \cite{bbcnt}.
It is well known that the CFE of \ifmmode C_{10} \else $C_{\mathit{10}}$\xspace \fi consists mostly of coefficients with a reasonably small number of digits, interspersed with coefficients with a very large number of digits. A coefficient whose value is larger than that of every previous coefficient is called a HWM. As one progresses along the CFE, the pattern of relatively small coefficients sprinkled with very large coefficients continues between the HWMs, even though the other large coefficients are not themselves HWMs.
There are many patterns in the properties of the convergents formed by truncating the CFE of \ifmmode C_{10} \else $C_{\mathit{10}}$\xspace \fi immediately before the HWMs for HWM numbers greater than 3. These properties and their patterns have been formulated and discussed \cite[Conjectures~1-7]{bbjks}. The purpose of this paper is to extend the formulations and the conjectures from base 10 to bases 2 through 124, with exceptions noted and the pertinent formulae provided.
In addition, another conjecture which describes the lengths of the largest coefficients between the HWMs \cite[Conjecture~8]{bbjks2} is generalized from \ifmmode C_{10} \else $C_{\mathit{10}}$\xspace \fi to \ifmmode C_b \else $C_b$\xspace \fi. We show that deviations from the expected behavior occur in some bases and these deviations are noted and briefly discussed.
\section{Definitions} \label{sec:term}
{\setlength{\parindent}{0cm}
\begin{defn} \label{def:dcc}
\boldmath \ifmmode C_{10} \else $C_{\mathit{10}}$\xspace \fi \unboldmath - Champernowne's Constant\xspace in base 10.
\end{defn}
\begin{defn} \label{def:dcb}
\boldmath \ifmmode C_b \else $C_b$\xspace \fi \unboldmath - Champernowne's Constant\xspace in base $b$. For example, \csb{3} is $0.12101112202122100101 \ldots_3$.
\end{defn}
\begin{defn} \label{def:dpo}
\boldmath \ifmmode C_b \else $C_b$\xspace \fi \unboldmath \textbf{position\xspace}, or \textbf{position\xspace} - For an integer, it is the position\xspace of the first digit of the integer in the consecutive sequence used to generate \ifmmode C_b \else $C_b$\xspace \fi, starting at position\xspace 0 for the 0 to the left of the radix point. The radix point is not counted. Therefore, for \ifmmode C_{10} \else $C_{\mathit{10}}$\xspace \fi, the integer 1 is in position\xspace 1 and the integer 12 is in position\xspace 14. For $C_3$, the integer $11_3$ is in position\xspace 5.
\end{defn}
\begin{defn} \label{def:dcn}
\textbf{CFE \cn}, or \textbf{\cn} - The number of the CFE coefficient, starting at \cn 0. \Cn 0 is always 0 for \ifmmode C_b \else $C_b$\xspace \fi.
\end{defn}
\begin{defn} \label{def:dhwmn}
\textbf{HWM number}, or \textbf{HWM \#} - The number of the HWM. The first HWM is 0 and it is HWM number 1.
\end{defn}
\begin{defn} \label{def:dhwml}
\textbf{HWM length} - The number of digits of the HWM of \ifmmode C_b \else $C_b$\xspace \fi \emph{as represented in base} $b$.
\end{defn}
\begin{defn} \label{def:dtbp}
\boldmath \tbp \unboldmath - This is equal to $b^X$ and it is read one-zero, or ``ten'', to the power of $X$ in base $b$. This notation is used instead of $b^X$ primarily because the computer programs that were used in the analysis calculate the HWM convergents with two primary inputs: base $b$, and the power of ``ten'', $X$. This terminology was chosen so that there would be consistency between this paper and the programs.
\end{defn}
\begin{defn} \label{def:dpotbp}
\boldmath \ptbx \unboldmath \hspace{-0.4pc}, or \boldmath \ifmmode C_b \else $C_b$\xspace \fi \unboldmath \textbf{position\xspace of} \boldmath \tbp \unboldmath - The formula for the \ifmmode C_b \else $C_b$\xspace \fi position\xspace of the integer $\tbp, \ X \geq 0$, is:
\begin{equation} \label{eq:cbp}
\ifmmode C_b \else $C_b$\xspace \fi \text{ position\xspace of } \tbp=\ptbx=1+\sum_{x=0}^X (b-1)\cdot x \cdot b^{(x-1)}
\end{equation}
\end{defn}
Thus, the \ifmmode C_b \else $C_b$\xspace \fi position\xspace of \tbp[b][0] is 1, the position\xspace of \tbp[b][1] is $b$, and the position\xspace of \tbp[3][2] is 15.
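\Cref{eq:cbp} is simple enough to check mechanically. The following sketch (in Python rather than the Ruby and C used for this paper's computations; the helper name \texttt{position} is ours) reproduces the worked examples above:

```python
def position(b, X):
    # C_b position of the integer 10_b^X = b**X, per the closed form.
    # The x = 0 term of the sum vanishes, so we start at x = 1 to keep
    # the arithmetic in exact integers.
    return 1 + sum((b - 1) * x * b**(x - 1) for x in range(1, X + 1))

print(position(2, 0))   # -> 1
print(position(10, 1))  # -> 10
print(position(3, 2))   # -> 15
```

As a cross-check, \texttt{position(10, 2)} returns $190$, which is $1$ plus the $189$ digits of the integers $1$ through $99$.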
\begin{defn} \label{def:dtbpc}
\boldmath \tbp \textbf{HWM convergent}, \unboldmath or \boldmath{\tbp} \textbf{convergent} \unboldmath - The convergent formed from the numerator, \nbx, and the denominator, \dbx, as calculated using \Cref{conj:numer} and \Cref{conj:denom}, respectively. The convergent is named in this manner because the \ifmmode C_b \else $C_b$\xspace \fi position\xspace \tbp is used in the calculation of the convergent. The CFE derived from the convergent does not calculate the \tbp HWM coefficient as defined in \Cref{def:dtbpcc}. The HWM coefficient is, however, the next term in the CFE.
\end{defn}
\begin{defn} \label{def:dtbpcc}
\boldmath \tbp \textbf{HWM coefficient}, \unboldmath or \boldmath \tbp \textbf{coefficient} \unboldmath - The CFE coefficient immediately following the last coefficient as calculated by the \tbp convergent. With a few exceptions as noted later in this paper, these coefficients constitute the HWMs of the CFE of \ifmmode C_b \else $C_b$\xspace \fi.
\end{defn}
\begin{defn} \label{def:dbx}
\textbf{The denominator of the} \boldmath \tbp \unboldmath \textbf{HWM convergent}, or \boldmath \dbx \unboldmath - The denominator for the \tbp HWM convergent as constructed using \Cref{conj:denom}.
\end{defn}
\begin{defn} \label{def:nbx}
\textbf{The numerator of the} \boldmath \tbp \unboldmath \textbf{HWM convergent}, or \boldmath \nbx \unboldmath - The numerator for the \tbp HWM convergent as calculated using \Cref{conj:numer}.
\end{defn}
\begin{defn} \label{def:dncd}
\textbf{Number of correct digits}, or \textbf{NCD}, or $\boldsymbol{NCD(b,X)}$ - The number of correct consecutive digits of \ifmmode C_b \else $C_b$\xspace \fi as calculated by a convergent, starting at, and counting the 0 to the left of the radix point. The radix point itself is not counted. Therefore, if the position\xspace of the last correct digit, $p_c$, is known, the number of correct digits is $p_c+1$ since the 0 is counted as the first correct digit, but it is in position\xspace 0. When given with arguments, base $b$, and power of ``ten'' $X$, $NCD(b,X)$ is the NCD of the \tbp HWM convergent as defined in \Cref{def:dtbpc}.
\end{defn}
}
In general, in this paper we refer to the HWM convergents in a manner different from that in our previous paper which detailed the HWM convergent properties of \ifmmode C_{10} \else $C_{\mathit{10}}$\xspace \fi \cite{bbjks}. In the previous paper, a HWM convergent is referred to as ``the convergent before HWM \#N''. In this paper a HWM convergent is referred to as ``the \tbp HWM convergent'', or as the ``\tbp convergent'' as defined in \Cref{def:dtbpc}. The reason is that in this paper, bases 2 through 124 are discussed, rather than base 10 exclusively, and the HWM numbers are different for the various bases.\footnote{The HWM numbers match for bases 6 through 124, but not for bases 2 through 5. See \Cref{conj:cncalc} and \Cref{conj:hwml} for details.}
\section{Method} \label{sec:meth}
A program was written in Ruby to calculate \ifmmode C_b \else $C_b$\xspace \fi in the desired base by taking the digits of each of the integers used to form \ifmmode C_b \else $C_b$\xspace \fi, dividing by the appropriate power of the base for each digit, and summing. The Euclidean Algorithm applied to real numbers was then used to calculate the CFE coefficients of \ifmmode C_b \else $C_b$\xspace \fi. This method is numerically delicate, and care was taken to calculate \ifmmode C_b \else $C_b$\xspace \fi to sufficient accuracy that the desired number of coefficients could be computed correctly. Bases 2, 7, 8, 9, and 10 were used in the initial investigation.
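For illustration, the same computation can be carried out in exact rational arithmetic: truncate \ifmmode C_b \else $C_b$\xspace \fi to a rational number and apply the integer Euclidean Algorithm to its numerator and denominator. The sketch below (Python; an independent illustration, not the programs described above) recovers the first CFE coefficients of \ifmmode C_{10} \else $C_{\mathit{10}}$\xspace \fi:

```python
from fractions import Fraction

def cfe(x, terms):
    # Leading CFE coefficients of a nonnegative rational x, via the
    # Euclidean Algorithm on numerator and denominator.
    p, q = x.numerator, x.denominator
    out = []
    while q and len(out) < terms:
        a, r = divmod(p, q)
        out.append(a)
        p, q = q, r
    return out

# Truncation of C_10: the integers 1..30 concatenated after the radix point.
digits = "".join(str(n) for n in range(1, 31))
c10 = Fraction(int(digits), 10**len(digits))
print(cfe(c10, 4))  # -> [0, 8, 9, 1]
```

The next coefficient of \ifmmode C_{10} \else $C_{\mathit{10}}$\xspace \fi itself is the first large HWM, $149083$; recovering further coefficients correctly requires carrying the truncation to sufficient precision, which is the accuracy issue noted above.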
It was noticed that large coefficients (HWMs) in the CFE of \ifmmode C_b \else $C_b$\xspace \fi appeared in other bases, similar to the behavior of the CFE of \ifmmode C_{10} \else $C_{\mathit{10}}$\xspace \fi. The CFE coefficients of \ifmmode C_b \else $C_b$\xspace \fi, terminated before the HWMs, were then used to calculate the numerator and denominator of each convergent. Patterns were observed in the denominators of the convergents as given in \Cref{conj:denom}. The means of calculating the numerators of the convergents was determined as given in \Cref{conj:numer}.
Programs were then written in C to calculate the numerators and denominators using these conjectures and the resulting calculations led to the formulation of \Cref{conj:cncalc} through \Cref{conj:hwml}. These calculations were checked for each \tbp convergent and HWM as described in \Cref{ssec:rv}.
\section{Conjectures} \label{sec:ce}
In this section we extend the \ifmmode C_{10} \else $C_{\mathit{10}}$\xspace \fi conjectures \cite[Conjectures~1-7]{bbjks} to \ifmmode C_b \else $C_b$\xspace \fi. The order of the conjectures in the present paper matches the order of the conjectures in the previous paper for ease of comparison. However, \Cref{conj:cncalc} through \Cref{conj:hwml} follow from analyzing the convergents formed using \Cref{conj:denom} and \Cref{conj:numer}.
\begin{conj} \label{conj:cncalc}
The \tbp convergent, for $X \geq 0$, calculates the value of \ifmmode C_b \else $C_b$\xspace \fi correctly until the integer $\tbp[b][X+1]-\mni{2}$ in the integer sequence used to form \ifmmode C_b \else $C_b$\xspace \fi is reached. Furthermore, this integer is calculated as $\tbp[b][X+1]-\mni{1}$ instead of $\tbp[b][X+1]-\mni{2}$.
\end{conj}
\begin{excep} \label{excep:b2cbc}
\narrower
For base 2, the convergents that calculate \csb{2} per \Cref{conj:cncalc} start with the $\tbp[2][X]$ convergent, $X \geq 2$, rather than $X \geq 0$.
\end{excep}
\begin{excep} \label{excep:b34cbc}
\narrower
For base 3 and base 4, the HWM convergents that calculate \csb{3} and \csb{4} per \Cref{conj:cncalc} start with the $\tbp[3][X]$ and $\tbp[4][X]$ convergent, $X \geq 1$, rather than $X \geq 0$.
\end{excep}
\noindent The following are examples of the conjecture: the \tbp[10][1] HWM convergent calculates \ifmmode C_{10} \else $C_{\mathit{10}}$\xspace \fi as ``$0.123456\ldots95969799\ldots$'' instead of \\ ``$0.123456\ldots95969798\ldots$''. Similarly, the \tbp[5][2] HWM convergent calculates \csb{5} as \\ ``$0.12341011\ldots440441442444\ldots_5$'' instead of ``$0.12341011\ldots440441442443\ldots_5$''.
\begin{conj} \label{conj:ncd}
The NCD of the \tbp HWM convergent for base $b$, $X \geq 0$, is given by:
\begin{equation} \label{eq:ncd}
NCD(b,X) = \ifmmode C_b \else $C_b$\xspace \fi \text{ position\xspace of } \tbp[b][X+1] - X - 2
\end{equation}
\end{conj}
\begin{excep} \label{excep:b2ncd}
\narrower
For $b=2$, \Cref{eq:ncd} is valid for $X \geq 2$.
\end{excep}
\begin{excep} \label{excep:b34ncd}
\narrower
For $b=3$ and $b=4$, \Cref{eq:ncd} is valid for $X \geq 1$.
\end{excep}
\noindent The \ifmmode C_b \else $C_b$\xspace \fi position\xspace of \tbp[b][X] is given in \Cref{eq:cbp}. Thus, the NCD(10,0) is 8, the NCD(5,3) is 2340, the NCD(16,5) is 99544809, and the NCD(10,7) is 788888881.
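The examples above can be reproduced by a short script, assuming that the \ifmmode C_b \else $C_b$\xspace \fi position\xspace of \tbp[b][k] given by \Cref{eq:cbp} takes the form $1 + \sum_{d=1}^{k} (b-1)\,b^{d-1}\,d$, i.e. the index of the first digit of \tbp[b][k] in the base-$b$ concatenation:

```ruby
# Position of the first digit of b^k in the base-b Champernowne concatenation:
# there are (b-1)*b^(d-1) integers with d base-b digits, contributing d digits each.
def cb_position(b, k)
  1 + (1..k).sum { |d| (b - 1) * b**(d - 1) * d }
end

# Eq. (ncd): NCD(b, X) = (position of b^(X+1)) - X - 2.
def ncd(b, x)
  cb_position(b, x + 1) - x - 2
end

puts ncd(10, 0)  # => 8
puts ncd(5, 3)   # => 2340
puts ncd(16, 5)  # => 99544809
puts ncd(10, 7)  # => 788888881
```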
\begin{conj} \label{conj:ceven}
The \tbp HWM \cn{s} are even. The \cn{s} start at 0 per \Cref{def:dcn}.
\end{conj}
The \tbp HWM convergents calculate a value slightly larger than \ifmmode C_b \else $C_b$\xspace \fi. Per \Cref{def:dcn}, this means that the \tbp HWM \cn{s} will be even.
This conjecture is very useful at the end of the calculation of the CFE coefficients because the algorithm cannot distinguish the degenerate case, where the last coefficient is 1, from the case where the last coefficient is not 1. In the degenerate case, where the last coefficient equals 1, the algorithm calculates the last term as $Y+1$ instead of $Y$, and the calculation ends on an even numbered coefficient. However, if the calculation ends on an even numbered coefficient, \Cref{conj:ceven} implies that this is the degenerate case, that the even numbered coefficient is $Y$, and that the last coefficient (the one before the HWM) is 1.
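A minimal sketch of this end-of-calculation adjustment, with coefficient numbers starting at 0 as in \Cref{def:dcn}:

```ruby
# Run the Euclidean Algorithm to exhaustion. If it terminates on an
# even-numbered coefficient (indices start at 0), treat that as the degenerate
# case: the final quotient Y+1 is replaced by Y and a trailing 1 is appended.
def cfe_adjusted(num, den)
  terms = []
  while den != 0
    terms << num / den
    num, den = den, num % den
  end
  if (terms.length - 1).even? && terms.last > 1
    terms[-1] -= 1
    terms << 1
  end
  terms
end

# [0; 2, 3] and [0; 2, 2, 1] both represent 3/7; the adjustment selects the
# form whose final coefficient falls on an odd-numbered position.
p cfe_adjusted(3, 7)   # => [0, 2, 2, 1]
p cfe_adjusted(1, 2)   # => [0, 2]  (already ends on an odd index; unchanged)
```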
\begin{conj} \label{conj:error}
The error of the \tbp HWM convergent is given by:
\begin{align} \label{eq:err}
\text{error} =
\begin{cases}
1.0 \times b^{-(exponent-1)} & \text{base } b \geq 5, X=0 \\
(b-1).1_b \times b^{-exponent} & \text{base } b \geq 3, X=1 \\
1.01_2 \times b^{-exponent} & \text{base } b=2, X=2 \\
(b-1).\underbrace{00\ldots00}_{\mathclap{\text{number of zeroes}}}(b-1)(b-1)_b \times b^{-exponent} & b=2, X \geq 3; \ b \geq 3, X \geq 2 \\
\end{cases}
\end{align}
\begin{align}
\text{number of zeroes} &= X \nonumber \\
exponent &= NCD(b,X)+X+2 \nonumber
\end{align}
\end{conj}
For example, the \tbp[8][0] HWM has an error of $1.0 \times 8^{-7}$, the \tbp[3][1] HWM has an error of $2.1_3 \times 3^{-15_{10}}$, the \tbp[2][2] HWM has an error of $1.01_2 \times 2^{-18_{10}}$, the \tbp[10][2] HWM has an error of $9.0099_{10}\times 10^{-2890_{10}}$, and the \tbp[15][5] HWM has an error of $\text{e.00000ee}_{15}\times 15^{-67530135_{10}}$. Note that the mantissa is given in base $b$ while the exponent is given in base ten, similar to the output from the GNU Multiple Precision Arithmetic (GMP) library \cite{bbgmp} function \texttt{mpf\_out\_str}. This library was utilized by the programs used in the analysis.
The value of $exponent$ is also equal to the \ifmmode C_b \else $C_b$\xspace \fi position\xspace of \tbp[b][X+1]. Note that for the base $b=2, X=2$ case we could have explicitly provided the value of $exponent$, but we wanted to show that the value of $exponent$ was consistent for all of the given cases as formulated.
\begin{conj} \label{conj:hwml}
Let:
\begin{align}
(b==2) =
\begin{cases} \nonumber
0 & b \neq 2 \\
1 & b = 2
\end{cases}
\end{align}
\begin{align}
(b==4) =
\begin{cases} \nonumber
0 & b \neq 4 \\
1 & b = 4
\end{cases}
\end{align}
The length, \emph{as represented in base} $b$, of the HWM coefficient that occurs after the last coefficient of the CFE calculated by the \tbp HWM convergent for $X \geq 0$ is given by:
\begin{equation} \label{eq:hwml}
\text{length}=NCD(b,X)-2 \cdot NCD(b,X-1)-3 \cdot (X-(b==2))-2+(b==4)
\end{equation}
\end{conj}
\begin{excep} \label{excep:b2hwml}
\narrower
For base 2, \Cref{eq:hwml} is valid for $X \geq 2$, rather than $X \geq 0$. In addition, the \tbp[2][2] coefficient is \cn 6 and it is of length 3, per \Cref{conj:hwml}. It is not a HWM but the convergent does calculate \csb{2} per \Cref{conj:cncalc}. The \cn of the next HWM, which is the \tbp[2][3] HWM coefficient, is 14.\footnote{The \tbp[2][2] coefficient has a value of 5, one less than the value of \cn[s] 2 and 5, which are the largest terms along with \cn 10 until the \tbp[2][3] HWM coefficient at \cn 14.} All subsequent HWMs are \tbp[2][X] HWM coefficients.
\end{excep}
\begin{excep} \label{excep:b3hwml}
\narrower
For base 3 and base 4, \Cref{eq:hwml} is valid for $X \geq 1$, rather than $X \geq 0$. For base 3, the \tbp[3][1] HWM coefficient is \cn 6, and for base 4, the \tbp[4][1] HWM coefficient is \cn 12. Both are HWMs, and subsequent HWMs are \tbp HWM coefficients.
\end{excep}
\begin{excep} \label{excep:b5hwml}
\narrower
For base 5, the \tbp[5][0] coefficient is \cn 4 and it is of length 1, per \Cref{conj:hwml}.\footnote{The coefficient is 1.} It is not a HWM but the convergent does calculate \csb{5} per \Cref{conj:cncalc}. The \cn of the \tbp[5][1] HWM coefficient is 10. The HWMs at \cn 7 and \cn 9 are not \tbp[5][X] HWM coefficients.
\end{excep}
\noindent For example, the \tbp[2][3] HWM as represented in binary has a length of 9, the \tbp[10][2] HWM as represented in decimal has a length of 2504, and the \tbp[16][5] HWM as represented in hexadecimal has a length of 89198852.
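These lengths can be reproduced by a short script; the helper \texttt{cb\_position} encodes our reading of \Cref{eq:cbp}, and the indicator terms follow \Cref{eq:hwml}:

```ruby
# Position of the first digit of b^k in the base-b Champernowne concatenation
# (our reading of Eq. (cbp)).
def cb_position(b, k)
  1 + (1..k).sum { |d| (b - 1) * b**(d - 1) * d }
end

def ncd(b, x)
  cb_position(b, x + 1) - x - 2
end

# Eq. (hwml), with (b==2) and (b==4) as 0/1 indicator terms.
def hwm_length(b, x)
  ncd(b, x) - 2 * ncd(b, x - 1) -
    3 * (x - (b == 2 ? 1 : 0)) - 2 + (b == 4 ? 1 : 0)
end

puts hwm_length(2, 3)   # => 9
puts hwm_length(10, 2)  # => 2504
puts hwm_length(16, 5)  # => 89198852
```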
For the \tbp[b][0] coefficient, $b \geq 5$, the coefficient length is $b-4$. For bases greater than 5, the \tbp[b][0] coefficient is a HWM, and it will be HWM \#4 at \cn 4, per \Cref{def:dcn} and \Cref{def:dhwmn}. All subsequent HWMs are those generated by a \tbp convergent, $X \geq 1$.\footnote{There is a close call for base 6 as \cn 8 is not a \tbp[6][X] coefficient and it is only one less than \cn 4, the \tbp[6][0] coefficient.}
Furthermore, it appears that for $b \geq 5$, the first 4 CFE terms are [0; $b-2$, $b-1$, 1] = [0; $b-2$, $b$]. This gives a \tbp[b][0] convergent of
\begin{equation} \label{eq:ptz}
\tbp[b][0] \text{ convergent} =\frac{b}{(b-1)^2}, \ b \geq 5
\end{equation}
As stated above, the \tbp[b][0] convergent is a HWM for $b \geq 6$. For the numerators and denominators for other convergents, see \Cref{eq:numer} and \Cref{eq:denom}, respectively.
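The identity in \Cref{eq:ptz} follows from the CFE prefix by direct simplification, using $[0;\, b-2,\, b-1,\, 1] = [0;\, b-2,\, b]$:
\begin{align*}
[0;\, b-2,\, b] = \cfrac{1}{(b-2) + \cfrac{1}{b}} = \frac{b}{b(b-2) + 1} = \frac{b}{(b-1)^2}.
\end{align*}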
It should be noted that for the lowest value of $X$ allowed in any base, the $NCD(b,X-1)$ term is not the true NCD as the \tbp[b][X-1] convergent is not a valid HWM convergent. However, the calculated value as used in \Cref{eq:hwml} gives the correct HWM length. It should also be reiterated that the HWM following the \tbp HWM convergent is not calculated by the convergent. It is calculated by the \tbp[b][X+1] HWM convergent, or by other means \cite{bbdew}.
It is unknown why the HWM lengths in base 4, as formulated, are longer by 1 digit than the HWM lengths in other bases (other than base 2). It is also unknown why the HWM lengths in base 2, as formulated, are longer by 3 digits than the HWM lengths in other bases (other than base 4). However, in the base 2 case, we suspect that the absence of \tbp HWM convergents for $X < 2$ may account for the increased lengths of the HWM coefficients.
\begin{conj} \label{conj:denom}
The denominator, \dbx[b][0], used to calculate the \tbp[b][0] HWM convergent for base $b \geq 5$, $X=0$, is given in \Cref{eq:ptz}. The denominator used to calculate the \tbp HWM convergent for base $b$, $X \geq 1$, is given by:
\begin{align} \label{eq:denom}
\dbx =
\begin{cases}
\underbrace{(b-1).(b-1)\ldots (b-1)}_{\mathclap{\text{number of }(b-1)\text{ digits}}}(b-2)\overbrace{0\ldots 0}^{\mathclap{\text{number of zeroes}}}1 \times b^{exponent} & b \text{ is odd} \\
\text{number of } (b-1)\text{ digits} &= X \\
\text{number of zeroes } &= X \\ \\
\left(\frac{b}{2}-1\right).\underbrace{(b-1)\ldots (b-1)}_{\mathclap{\text{number of }(b-1)\text{ digits}}}\overbrace{0\ldots 0}^{\mathclap{\text{number of zeroes}}}\left(\frac{b}{2}\right) \times b^{exponent} & b \text{ is even} \\
\text{number of }(b-1)\text{ digits} &= X \\
\text{number of zeroes } &= X+1 \\ \\
exponent &= NCD(b, X-1)+2X+1
\end{cases}
\end{align}
\end{conj}
\begin{excep} \label{excep:b2denom}
\narrower
For base 2, \Cref{eq:denom} is valid for $X \geq 2$, rather than $X \geq 1$.\footnote{Although the \tbp[2][1] HWM convergent is not valid, $NCD(2,1)=3$ which gives a valid exponent of 8 for the \tbp[2][2] HWM convergent denominator, $0.110001_2 \times 2^{8_{10}}$.}
\end{excep}
\noindent For example, \dbx[2][4], the denominator for the \tbp[2][4] HWM convergent is $0.1111000001_2 \times 2^{54_{10}}$, \dbx[5][3] is $4.4430001_5 \times 5^{348_{10}}$, \dbx[10][8] is $4.999999990000000005_{10} \times 10^{788888898_{10}}$, and \dbx[15][5] is $\text{e.eeeed000001}_{15} \times \break 15^{3742640_{10}}$.
If $X=1$ for an odd base, then the $(b-1)$ term is to the left of the radix point and the $(b-2)$ term is immediately to the right of the radix point. For example, the denominator for the \tbp[7][1] HWM convergent is $6.501_7 \times 7^{8_{10}}$.
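The digit patterns of \Cref{eq:denom} can be assembled as exact integers. The sketch below assumes the \Cref{eq:cbp} position formula for \tbp[b][k] and is valid for $X \geq 1$ ($X \geq 2$ in base 2):

```ruby
# Our reading of Eq. (cbp): position of the first digit of b^k in C_b.
def cb_position(b, k)
  1 + (1..k).sum { |d| (b - 1) * b**(d - 1) * d }
end

def ncd(b, x)
  cb_position(b, x + 1) - x - 2
end

# Conjecture (denom): build the mantissa digit list (one digit left of the
# radix point), then scale by the remaining power of b.
def denominator(b, x)
  exponent = ncd(b, x - 1) + 2 * x + 1
  digits =
    if b.odd?
      [b - 1] * x + [b - 2] + [0] * x + [1]
    else
      [b / 2 - 1] + [b - 1] * x + [0] * (x + 1) + [b / 2]
    end
  mantissa = digits.inject(0) { |acc, d| acc * b + d }
  mantissa * b**(exponent - (digits.length - 1))
end

puts denominator(10, 1)  # => 490050000000  (4.9005 x 10^11)
```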
\begin{conj} \label{conj:numer}
The numerator, \nbx[b][0], used to calculate the \tbp[b][0] HWM convergent for base $b \geq 5$, $X=0$, is given in \Cref{eq:ptz}. For other $b$ and $X$, \nbx is calculated as described below.
\noindent Given:
\begin{enumerate}
\item The denominator, \dbx, of a \tbp convergent as calculated from \Cref{conj:denom}
\item The \ifmmode C_b \else $C_b$\xspace \fi position\xspace of the integer \tbp, \ptbx, as given by \Cref{eq:cbp}
\item $C_b[0.1 \ldots \ptbx]$ denotes \ifmmode C_b \else $C_b$\xspace \fi truncated to position\xspace \ptbx\footnote{The last digit is a 1 as it is the first digit of \tbp in the integer sequence used to form \ifmmode C_b \else $C_b$\xspace \fi.}
\item $\text{Ceil}(f)$ denotes the integer immediately above non-integer $f$.
\end{enumerate}
Then the numerator, $\nbx, \ X \geq 1$, of the convergent is calculated as:
\begin{align} \label{eq:numer}
\nbx =
\begin{cases}
\text{Ceil}(\dbx \cdot C_b[0.1 \ldots \ptbx]) + 1 & b \text{ is odd} \\
\text{Ceil}(\dbx \cdot C_b[0.1 \ldots \ptbx]) & b \text{ is even}
\end{cases}
\end{align}
\end{conj}
\begin{excep} \label{excep:b2numer}
\narrower
\noindent For base 2, \Cref{eq:numer} is valid for $X \geq 2$, rather than $X \geq 1$.
\end{excep}
For example, the calculation for \nbx[2][3], the numerator for the \tbp[2][3] HWM convergent is: \\
Ceil$(0.11100001_2 \times 2^{21_{10}} \cdot 0.110111001011101111_2) = 110000100000000100001_2$.
The calculation for \nbx[3][2] is: \\
Ceil$(2.21001_3 \times 3^{17_{10}} \cdot 0.121011122021221_3) + 1 = 112222220011222111_3$.
The calculation for \nbx[10][1] is: \\
Ceil$(4.9005_{10} \times 10^{11_{10}} \cdot 0.1234567891_{10}) = 60499999499_{10}$.
And the calculation for \nbx[15][1] is: \\
Ceil$(\text{e.d}01_{15} \times 15^{16_{10}} \cdot 0.123456789\text{abcde}1_{15}) + 1 = \text{120eeeeeeeeeedeed}_{15}$.
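The worked examples above can be reproduced with exact rational arithmetic. The sketch below encodes our reading of \Cref{eq:cbp} and \Cref{conj:denom}; Ruby's \texttt{Integer\#to\_s(base)} limits it to $b \leq 36$, and it is valid for $X \geq 1$ ($X \geq 2$ in base 2):

```ruby
# Our reading of Eq. (cbp) and Conjecture (denom), as in the earlier formulas.
def cb_position(b, k)
  1 + (1..k).sum { |d| (b - 1) * b**(d - 1) * d }
end

def ncd(b, x)
  cb_position(b, x + 1) - x - 2
end

def denominator(b, x)
  exponent = ncd(b, x - 1) + 2 * x + 1
  digits =
    if b.odd?
      [b - 1] * x + [b - 2] + [0] * x + [1]
    else
      [b / 2 - 1] + [b - 1] * x + [0] * (x + 1) + [b / 2]
    end
  mantissa = digits.inject(0) { |acc, d| acc * b + d }
  mantissa * b**(exponent - (digits.length - 1))
end

# Conjecture (numer): C_b[0.1...ptbx] is built by concatenating the base-b
# digits of 1, 2, ..., b^x - 1 and then the leading digit (1) of b^x.
def numerator(b, x)
  num, den = 0, 1
  (1...b**x).each do |n|
    len = n.to_s(b).length
    num = num * b**len + n
    den *= b**len
  end
  num = num * b + 1                 # leading digit of b^x is always 1
  den *= b
  ceil_val = (denominator(b, x) * Rational(num, den)).ceil
  b.odd? ? ceil_val + 1 : ceil_val  # the +1 applies to odd bases only
end

puts numerator(10, 1)  # => 60499999499
```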
\section{Results, Verification, and Data Location} \label{sec:rv}
\subsection{Results and Verification} \label{ssec:rv}
The \ifmmode C_b \else $C_b$\xspace \fi CFE coefficients may be calculated very efficiently from the \tbp HWM convergents. For $X \geq 1$, the denominator of the convergent is given by \Cref{conj:denom}, and the numerator of the convergent may be calculated by multiplying the denominator by a sufficient number of digits of \ifmmode C_b \else $C_b$\xspace \fi; the required number of digits is $\ptbx+1$, as given in \Cref{conj:numer}.\footnote{The required number of digits is $\ptbx+1$ because the zero to the left of the radix point has been included in the count.} The Euclidean Algorithm is then used to calculate the CFE coefficients from the numerator and the denominator. \Cref{conj:ceven} is used to determine if the last coefficient is 1. For $b \geq 5, \ X=0$, \Cref{eq:ptz} is used to calculate the numerator and the denominator.
The properties of the convergents as given in \Cref{conj:cncalc} through \Cref{conj:error} may be observed by dividing the numerator of the convergent by the denominator of the convergent and comparing the result with \ifmmode C_b \else $C_b$\xspace \fi for each base and power. The property of the convergents as given in \Cref{conj:hwml} may be checked by examination of the \ifmmode C_b \else $C_b$\xspace \fi CFE coefficients.
In the current study, the calculations and comparisons were performed for bases 2 to 124. \Cref{tbl:basesx} gives the base, $b$, and the powers of ``ten'', $X$, in base $b$ that were computed. The calculated results for \textit{each base and each power} were checked against the predictions made in \Cref{conj:cncalc} through \Cref{conj:hwml}.
The maximum power for each base was usually dictated by memory size. The amount of memory required to calculate \ifmmode C_b \else $C_b$\xspace \fi, the \ifmmode C_b \else $C_b$\xspace \fi CFE coefficients, and the error for the \tbp[b][X-1] HWM convergent is approximately the same as the amount of memory required to calculate the \ifmmode C_b \else $C_b$\xspace \fi CFE coefficients for the \tbp HWM convergent, without calculating \ifmmode C_b \else $C_b$\xspace \fi and the error. There are separate columns for each type of calculation in \Cref{tbl:basesx}. For more details, see \Cref{ssec:dlads} and \Cref{ssec:c}.
Sequences A244330 \cite{bboeis2hwmcn} and A244331 \cite{bboeis2hwml} for base 2, and A244332 \cite{bboeis3hwmcn} and A244333 \cite{bboeis3hwml} for base 3, have been added to the OEIS; they correspond to sequences A038705 \cite{bboeis10hwmcn} and A143534 \cite{bboeis10hwml} in base 10, respectively. The first member of each pair of sequences gives the CFE \cn of each HWM coefficient and the second gives the length of each HWM coefficient as represented in the specified base.\footnote{The \cn{s} start at 1 per the precedent set in A038705 \cite{bboeis10hwmcn}.}
In addition, sequences A244758 \cite{bboeis2allcl} for base 2 and A244759 \cite{bboeis3allcl} for base 3 have been added to the OEIS; these correspond to sequence A143532 \cite{bboeis10allcl} in base 10. These sequences give the number of digits, as represented in the specified base, for all of the calculated CFE coefficients.
We are considering adding similar sequences for base 4 through base 9. For these bases, there are existing OEIS sequences for the digits of \ifmmode C_b \else $C_b$\xspace \fi, but not for the CFE coefficients of \ifmmode C_b \else $C_b$\xspace \fi. Therefore, sequences for the CFE coefficients of \ifmmode C_b \else $C_b$\xspace \fi would be added as well as sequences for the HWM coefficients, the \cn of the HWMs, the length of all of the calculated coefficients, and the length of the HWM coefficients.
It should be noted that the maximum numbers of CFE coefficients calculated and presented in \Cref{tbl:basesx} are not necessarily the maximum numbers of coefficients that have been calculated in a particular base. For example, at present for \ifmmode C_{10} \else $C_{\mathit{10}}$\xspace \fi, 82328 coefficients have been calculated \cite{bbwmwcncfe}.
\begin{table}[!htbp]
\centering
\caption{Calculated Bases, Powers, and Number of CFE Coefficients}
\label{tbl:basesx}
\begin{tabular}{|c||c|c|c|c|}
\hline
\multirow{2}{*}{\textbf{base} $\boldsymbol{b}$} & \multirow{2}{*}{\textbf{Min} $\boldsymbol{X}$} & \multirow{2}{*}{\textbf{Max} $\boldsymbol{X}$ \textbf{w/ error}} & \multirow{2}{*}{\textbf{Max} $\boldsymbol{X}$ \textbf{w/o error}} & \textbf{Number of CFE Coefficients} \\
& & & & \textbf{(for max} $\boldsymbol{X}$ \textbf{w/o error)} \\
\hline
2 & 2 & 25 & 24 & 98093504 \\ \hline
3 & 1 & 15 & 16 & 2982556 \\ \hline
4 & 1 & 12 & 13 & 629420 \\ \hline
5 & 0 & 10 & 11 & 195554 \\ \hline
6 & 0 & 9 & 10 & 105806 \\ \hline
7 & 0 & 8 & 9 & 53596 \\ \hline
\multirow{2}{*}{8--10} & \multirow{2}{*}{0} & \multirow{2}{*}{7} & \multirow{2}{*}{8} & base 8: 26362 \\
& & & & base 10: 34062 \\ \hline
\multirow{2}{*}{11--14} & \multirow{2}{*}{0} & \multirow{2}{*}{6} & \multirow{2}{*}{7} & base 11: 14424 \\
& & & & base 14: 16386 \\ \hline
\multirow{2}{*}{15--16} & \multirow{2}{*}{0} & \multirow{2}{*}{5} & \multirow{2}{*}{6} & base 15: 6080 \\
& & & & base 16: 6258 \\ \hline
\multirow{2}{*}{17--27} & \multirow{2}{*}{0} & \multirow{2}{*}{4} & \multirow{2}{*}{5} & base 17: 2382 \\
& & & & base 27: 3184 \\ \hline
\multirow{2}{*}{28--62} & \multirow{2}{*}{0} & \multirow{2}{*}{3} & \multirow{2}{*}{4} & base 28: 1104 \\
& & & & base 62: 1830 \\ \hline
\multirow{2}{*}{63--90} & \multirow{2}{*}{0} & \multirow{2}{*}{2} & \multirow{2}{*}{3} & base 63: 540 \\
& & & & base 90: 574 \\ \hline
\multirow{2}{*}{91--124} & \multirow{2}{*}{0} & \multirow{2}{*}{1} & \multirow{2}{*}{2} & base 91: 136 \\
& & & & base 124: 128 \\ \hline
\end{tabular}
\end{table}
A tabular view of the properties for a specific base, $9$, appears in \Cref{tbl:basenine}. To the best of our knowledge, the values in the shaded cells are predictions, and they have not been confirmed. A similar table, for base 10, is given in \cite[Table 1]{bbjks}.
\begin{table}[!htbp]
\begin{small}
\centering
\caption{$C_{\mni{9}}$ HWM Convergents Summary}
\label{tbl:basenine}
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
& \textbf{HWM} & \textbf{Integer that} & & & & \\
& \textbf{Coefficient} & \textbf{Fails ; Fails} & & & & \textbf{Length of} \\
& \textbf{Number} & \textbf{As, in} & & & & \textbf{HWM} \\
\textbf{Power of} & \textbf{(next CFE} & \textbf{Convergent} & & \textbf{Convergent} & \textbf{Convergent} & \textbf{(next CFE} \\
$\boldsymbol{10_9}$, $\boldsymbol{X}$ & \textbf{term)} & \textbf{Calculation} & $\boldsymbol{NCD(\mni{9},X)}$ & \textbf{Error} & \textbf{Denominator} & \textbf{term)} \\ \hline
\rule{0pt}{2.6ex} 0 & 4 & $7_9;8_9$ & 7 & $1.0 \times 9^{-8_{10}}$ & $64_{10}$ & 5 \\ \hline
\rule{0pt}{2.6ex} 1 & 16 & $87_9;88_9$ & 150 & $8.1_9 \times 9^{-153_{10}}$ & $8.701_9 \times 9^{10_{10}}$ & 131 \\ \hline
\multirow{2}{*}{2} & \multirow{2}{*}{52} & \multirow{2}{*}{$887_9;888_9$} & \multirow{2}{*}{2093} & $8.0088_9$ & $8.87001_9$ & \multirow{2}{*}{1785} \\
& & & & $\times 9^{-2097_{10}}$ & $\times 9^{155_{10}}$ & \\ \hline
\multirow{2}{*}{3} & \multirow{2}{*}{152} & \multirow{2}{*}{$8887_9;8888_9$} & \multirow{2}{*}{25420} & $8.00088_9$ & $8.8870001_9$ & \multirow{2}{*}{21223} \\
& & & & $\times 9^{-25425_{10}}$ & $\times 9^{2100_{10}}$ & \\ \hline
\multirow{2}{*}{4} & \multirow{2}{*}{492} & \multirow{2}{*}{$88887_9;88888_9$} & \multirow{2}{*}{287859} & $8.000088_9$ & $8.888700001_9$ & \multirow{2}{*}{237005} \\
& & & & $\times 9^{-287865_{10}}$ & $\times 9^{25429_{10}}$ & \\ \hline
\multirow{2}{*}{5} & \multirow{2}{*}{1598} & $888887_9;$ & \multirow{2}{*}{3122210} & $8.0000088_9$ & $8.88887000001_9$ & \multirow{2}{*}{2546475} \\
& & $888888_9$ & & $\times 9^{-3122217_{10}}$ & $\times 9^{287870_{10}}$ & \\ \hline
\multirow{3}{*}{6} & \multirow{3}{*}{4512} & \multirow{2}{*}{$8888887_9;$} & \multirow{3}{*}{32882905} & \multirow{2}{*}{$8.00000088_9$} & $8.888887$ & \multirow{3}{*}{26638465} \\
& & \multirow{2}{*}{$8888888_9$} & & \multirow{2}{*}{$\times 9^{-32882913_{10}}$} & $0000001_9$ & \\
& & & & & $\times 9^{3122223_{10}}$ & \\ \hline
\multirow{3}{*}{7} & \multirow{3}{*}{12164} & \multirow{2}{*}{$88888887_9;$} & \multirow{3}{*}{338992920} & \multirow{2}{*}{$8.000000088_9$} & $8.8888887$ & \multirow{3}{*}{273227087} \\
& & \multirow{2}{*}{$88888888_9$} & & \multirow{2}{*}{$\times 9^{-338992929_{10}}$} & $00000001_9$ & \\
& & & & & $\times 9^{32882920_{10}}$ & \\ \hline
\multirow{3}{*}{8} & \multirow{3}{*}{30410} & \cellcolor[gray]{0.8} & \cellcolor[gray]{0.8} & \cellcolor[gray]{0.8} & $8.88888887$ & \cellcolor[gray]{0.8} \\
& & \cellcolor[gray]{0.8}\multirow{-2}{*}{$888888887_9;$} & \cellcolor[gray]{0.8} & \cellcolor[gray]{0.8}\multirow{-2}{*}{$8.0000000088_9$} & $000000001_9$ & \cellcolor[gray]{0.8} \\
& & \cellcolor[gray]{0.8}\multirow{-2}{*}{$888888888_9$} & \cellcolor[gray]{0.8}\multirow{-3}{*}{3438356831} & \cellcolor[gray]{0.8}\multirow{-2}{*}{$\times 9^{-3438356841_{10}}$} & $\times 9^{338992937_{10}}$ & \cellcolor[gray]{0.8}\multirow{-3}{*}{2760370965} \\ \hline
& \tiny{\textbf{\Cref{conj:ceven}}} & \multirow{3}{*}{\tiny{\textbf{\Cref{conj:cncalc}}}} & \multirow{2}{*}{\tiny{\textbf{\Cref{conj:cncalc},}}} & \multirow{2}{*}{\tiny{\textbf{\Cref{conj:cncalc},}}} & \multirow{2}{*}{\tiny{\textbf{\Cref{conj:denom},}}} & \multirow{3}{*}{\tiny{\textbf{\Cref{conj:hwml}}}} \\
& \tiny{\textbf{(even}} & & \multirow{2}{*}{\tiny{\textbf{\Cref{conj:ncd}}}} & \multirow{2}{*}{\tiny{\textbf{\Cref{conj:error}}}} & \multirow{2}{*}{\tiny{\textbf{\Cref{eq:ptz}}}} & \\
& \tiny{\textbf{numbered)}} & & & & & \\ \hline
\end{tabular}
\end{small}
\end{table}
\subsection{Data Location and Digit Symbols} \label{ssec:dlads}
The computed data may be downloaded \cite{bbdataprogloc} and found under the ``data'' folder. The data is available under the Open Data Commons Public Domain Dedication and License (ODC PDDL) \cite{bbodcpddl}.
The downloaded data must be unzipped and extracted. The data includes the numerator of the convergent and the CFE coefficients as calculated from the convergent for each base, $b$, and power of ``ten'', $X$, as given in \Cref{tbl:basesx} under the ``Max $X$ w/o error'' column. For the bases and powers under the ``Max $X$ w/ error'' column, the data also includes \ifmmode C_b \else $C_b$\xspace \fi as calculated by the convergent as well as the convergent error. However, for $b=2$, $X=25$, the CFE coefficients are not given due to the lengthy calculation time.\footnote{The calculation is in progress.}
The base, $b$, and power, $X$, appear in each file name. The following prefixes identify the types of files:
\begin{tabular}{llll}
$\bullet$ & \texttt{cn\_calc\_} & -- & \ifmmode C_b \else $C_b$\xspace \fi as calculated by the convergent \\
$\bullet$ & \texttt{cn\_cfe\_coeffs\_} & -- & the CFE coefficients calculated from the convergent \\
$\bullet$ & \texttt{cn\_error\_} & -- & \ifmmode C_b \else $C_b$\xspace \fi as calculated by the convergent minus \ifmmode C_b \else $C_b$\xspace \fi (approximately 30 digits) \\
$\bullet$ & \texttt{cn\_numer\_} & -- & the numerator of the convergent \\
\end{tabular}
It should be noted that there are various suffixes, after the base and power, that indicate the specific program that was used to generate the data. These suffixes include, for example, whether or not the error was calculated.
The symbols used for the digits in various bases are summarized in \Cref{tbl:basessym}. Of course, for any base, $b$, the symbol range is valid only for digits 0 through $b-1$. Referring to the table, it may be seen, for example, that when examining a data file for a base 36 calculation, the symbol for the digit $35_{10}$ is ``z''. However, when examining a data file for a base 37 calculation, the symbol for the digit $35_{10}$ is ``Z'' and the symbol for the digit $36_{10}$ is ``a''. When examining a data file for a base 64 calculation, the symbol for the digit $63_{10}$ is ``$|$1'', with the symbol representing a single character.
\begin{table}[!htbp]
\centering
\caption{Digit Symbols for the Various Bases}
\label{tbl:basessym}
\begin{tabular}{|c||c|c|c|c|c|c|}
\hline
& \multicolumn{6}{c|}{\textbf{Digit Symbols}} \\ \hline
\textbf{Base} $\boldsymbol{b}$ & \textbf{0--9} & \textbf{10--35} & \textbf{36--61} & \textbf{62--71} & \textbf{72--97} & \textbf{98--123} \\ \hline
2--10 & 0--9 & & & & & \\ \hline
11--36 & 0--9 & a--z & & & & \\ \hline
37--62 & 0--9 & A--Z & a--z & & & \\ \hline
63--72 & 0--9 & A--Z & a--z & $|$0--$|$9 & & \\ \hline
73--98 & 0--9 & A--Z & a--z & $|$0--$|$9 & $|$a--$|$z & \\ \hline
99--124 & 0--9 & A--Z & a--z & $|$0--$|$9 & $|$A--$|$Z & $|$a--$|$z \\ \hline
\end{tabular}
\end{table}
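One way to reproduce the mapping in \Cref{tbl:basessym} is the sketch below; the treatment of digit values 62 and above as a ``$|$'' prefix followed by a recursive lookup on $(d-62,\, b-62)$ is our reading of the table, not necessarily the implementation used in the programs:

```ruby
# Symbol for digit value d in base `base`, per Table (basessym):
#   d < 10            -> '0'..'9'
#   10 <= d < 62      -> lowercase a-z for bases up to 36,
#                        uppercase A-Z then lowercase a-z for bases 37-62
#   d >= 62           -> '|' followed by the symbol for (d - 62) in base - 62
def digit_symbol(d, base)
  if d >= 62
    '|' + digit_symbol(d - 62, base - 62)
  elsif d < 10
    d.to_s
  elsif base <= 36
    ('a'.ord + d - 10).chr
  elsif d < 36
    ('A'.ord + d - 10).chr
  else
    ('a'.ord + d - 36).chr
  end
end

puts digit_symbol(35, 36)  # => "z"
puts digit_symbol(35, 37)  # => "Z"
puts digit_symbol(36, 37)  # => "a"
puts digit_symbol(63, 64)  # => "|1"
```

The recursion reproduces the base-dependent rows of the table: in base 73, digit $97_{10}$ maps to ``$|$z'', while in base 99 it maps to ``$|$Z''.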
Discussions of the results of the processing of the data and compilations of the data may be found in \Cref{ssec:oprogdata}.
\section{\texorpdfstring{2\textsuperscript{nd}\xspace}{2nd} Generation HWM Length Extension to Bases 2 through 62}
\label{sec:sg}
It is well known that in addition to the \ifmmode C_{10} \else $C_{\mathit{10}}$\xspace \fi HWM CFE coefficients, there are other coefficients that have a large number of digits that are not themselves HWMs. For example, coefficient 101 (starting from 0) is 140 digits long, but it is not a HWM since \cn 40 is 2504 digits long and \cn 18 is 166 digits long. Coefficient 101 is an example of what we have designated a 2\textsuperscript{nd} Generation HWM\xspace \cite{bbjks2}.
It is known that for \ifmmode C_{10} \else $C_{\mathit{10}}$\xspace \fi, the lengths of the 2\textsuperscript{nd} Generation HWMs appear to be related to the lengths of the HWMs \cite[Conjecture 8]{bbjks2}. The differences in the lengths of a HWM and its corresponding 2\textsuperscript{nd} Generation HWM\xspace show a regular pattern, increasing by 10 for each HWM number increase. We have found that, in general, the same holds true for bases 2 through 62: the difference in the lengths of a HWM and its corresponding 2\textsuperscript{nd} Generation HWM\xspace increases by 10 for each HWM number increase.\footnote{The length is the number of digits as represented in the base of \ifmmode C_b \else $C_b$\xspace \fi.}\footnote{No analysis was performed for bases 63 through 124.} However, there were irregularities in some bases, as can be seen from the shaded cells in \Cref{tbl:bdhwml}, which is a compilation of the data. Note, for example, the entries for bases 4, 5, 9, 22, and 36.
It should also be noted that starting at base 24, the first 2\textsuperscript{nd} Generation HWM\xspace starts after HWM \#5 rather than after HWM \#6. Further analysis may show that this is true for base 23 and possibly base 22, but this would most likely involve looking at the error of the potential 2\textsuperscript{nd} Generation HWM\xspace convergents \cite[Conjecture 9]{bbjks2}.
\begin{center}
\begin{longtable}{|c||c|c|c|c|c|c|c|c|c|c|c|c|c|c|}
\caption{Bases, Difference Between HWM Lengths and 2\textsuperscript{nd} Generation HWM\xspace Lengths} \label{tbl:bdhwml} \\ \hline
\textbf{base} & \multicolumn{14}{c|}{\textbf{HWM Length - 2\textsuperscript{nd} Generation HWM\xspace Length}} \\ \hline \endfirsthead
\multicolumn{15}{c}{\tablename\ \thetable{} -- continued from previous page} \\ \hline
\textbf{base} & \multicolumn{14}{c|}{\textbf{HWM Length - 2\textsuperscript{nd} Generation HWM\xspace Length}} \\ \hline
\endhead
\multicolumn{15}{c}{continued on next page} \\
\endfoot
\endlastfoot
2 & 80 & 90 & 100 & 110 & 120 & 130 & 140 & 150 & 160 & 170 & 180 & 190 & 200 & 210 \\ \hline
3 & 48 & 58 & 68 & 78 & 88 & 98 & 108 & 118 & 128 & 138 & 148 & & & \\ \hline
4 & \cellcolor[gray]{0.8}36 & 47 & 57 & 67 & 77 & 87 & 97 & 107 & 117 & & & & & \\ \hline
5 & \cellcolor[gray]{0.8}25 & 37 & 47 & 57 & 67 & 77 & 87 & 97 & & & & & & \\ \hline
6 & 25 & 35 & 45 & 55 & 65 & 75 & 85 & & & & & & & \\ \hline
7 & 27 & 37 & 47 & 57 & 67 & 77 & & & & & & & & \\ \hline
8 & 25 & 35 & \cellcolor[gray]{0.8}42 & 55 & 65 & 75 & & & & & & & & \\ \hline
9 & \cellcolor[gray]{0.8}24 & 37 & 47 & 57 & 67 & & & & & & & & & \\ \hline
10 & 26 & 36 & 46 & 56 & 66 & & & & & & & & & \\ \hline
11 & 27 & 37 & 47 & 57 & 67 & & & & & & & & & \\ \hline
12 & 26 & 36 & 46 & 56 & & & & & & & & & & \\ \hline
13 & 27 & 37 & 47 & 57 & & & & & & & & & & \\ \hline
14 & 26 & 36 & 46 & 56 & & & & & & & & & & \\ \hline
15 & 27 & 37 & \cellcolor[gray]{0.8}44 & 57 & & & & & & & & & & \\ \hline
16 & 26 & 36 & 46 & 56 & & & & & & & & & & \\ \hline
17 & 27 & 37 & 47 & & & & & & & & & & & \\ \hline
18 & \cellcolor[gray]{0.8}24 & 36 & 46 & & & & & & & & & & & \\ \hline
19 & 27 & 37 & 47 & & & & & & & & & & & \\ \hline
20 & 26 & 36 & 46 & & & & & & & & & & & \\ \hline
21 & 27 & 37 & 47 & & & & & & & & & & & \\ \hline
22 & 26 & 36 & \cellcolor[gray]{0.8}43 & & & & & & & & & & & \\ \hline
23 & 27 & 37 & 47 & & & & & & & & & & & \\ \hline
24 & 16 & 26 & 36 & 46 & & & & & & & & & & \\ \hline
25 & 17 & 27 & 37 & 47 & & & & & & & & & & \\ \hline
26 & \cellcolor[gray]{0.8}14 & \cellcolor[gray]{0.8}24 & 36 & 46 & & & & & & & & & & \\ \hline
27 & 17 & 27 & 37 & 47 & & & & & & & & & & \\ \hline
28 & 16 & 26 & 36 & & & & & & & & & & & \\ \hline
29 & \cellcolor[gray]{0.8}15 & 27 & 37 & & & & & & & & & & & \\ \hline
30 & 16 & 26 & 36 & & & & & & & & & & & \\ \hline
31 & 17 & 27 & 37 & & & & & & & & & & & \\ \hline
32 & 16 & 26 & \cellcolor[gray]{0.8}34 & & & & & & & & & & & \\ \hline
33 & 17 & 27 & 37 & & & & & & & & & & & \\ \hline
34 & 16 & 26 & 36 & & & & & & & & & & & \\ \hline
35 & 17 & \cellcolor[gray]{0.8}25 & 37 & & & & & & & & & & & \\ \hline
36 & \cellcolor[gray]{0.8}15 & 26 & 36 & & & & & & & & & & & \\ \hline
37 & 17 & 27 & \cellcolor[gray]{0.8}34 & & & & & & & & & & & \\ \hline
38 & 16 & 26 & 36 & & & & & & & & & & & \\ \hline
39 & 17 & 27 & 37 & & & & & & & & & & & \\ \hline
40 & 16 & 26 & 36 & & & & & & & & & & & \\ \hline
41 & 17 & 27 & 37 & & & & & & & & & & & \\ \hline
42 & 16 & 26 & 36 & & & & & & & & & & & \\ \hline
43 & \cellcolor[gray]{0.8}14 & \cellcolor[gray]{0.8}25 & 37 & & & & & & & & & & & \\ \hline
44 & 16 & 26 & 36 & & & & & & & & & & & \\ \hline
45 & 17 & 27 & 37 & & & & & & & & & & & \\ \hline
46 & 16 & 26 & 36 & & & & & & & & & & & \\ \hline
47 & 17 & 27 & 37 & & & & & & & & & & & \\ \hline
48 & 16 & 26 & 36 & & & & & & & & & & & \\ \hline
49 & 17 & 27 & 37 & & & & & & & & & & & \\ \hline
50 & \cellcolor[gray]{0.8}15 & 26 & 36 & & & & & & & & & & & \\ \hline
51 & 17 & 27 & 37 & & & & & & & & & & & \\ \hline
52 & 16 & \cellcolor[gray]{0.8}24 & 36 & & & & & & & & & & & \\ \hline
53 & 17 & 27 & 37 & & & & & & & & & & & \\ \hline
54 & 16 & 26 & 36 & & & & & & & & & & & \\ \hline
55 & 17 & 27 & 37 & & & & & & & & & & & \\ \hline
56 & 16 & 26 & 36 & & & & & & & & & & & \\ \hline
57 & \cellcolor[gray]{0.8}16 & 27 & 37 & & & & & & & & & & & \\ \hline
58 & 16 & 26 & 36 & & & & & & & & & & & \\ \hline
59 & 17 & 27 & 37 & & & & & & & & & & & \\ \hline
60 & \cellcolor[gray]{0.8}15 & \cellcolor[gray]{0.8}24 & 36 & & & & & & & & & & & \\ \hline
61 & 17 & 27 & 37 & & & & & & & & & & & \\ \hline
62 & 16 & 26 & 36 & & & & & & & & & & & \\ \hline
\end{longtable}
\end{center}
\section{Notes on Computing} \label{sec:nc}
\subsection{Ruby Programs} \label{ssec:r}
As mentioned in \Cref{sec:meth}, a program was written in Ruby to look for the patterns in the CFE coefficients of \ifmmode C_b \else $C_b$\xspace \fi in various bases. In particular, the HWM coefficients were noted and another Ruby program was written to calculate the numerators and the denominators, as represented in the particular base, of the convergents terminated immediately before the HWM coefficients. The patterns in the denominators and the methods of calculating the numerators as given in \Cref{conj:denom} and \Cref{conj:numer}, respectively, were determined. The main CFE calculating programs were then written in C due to speed considerations. Other programs were written in Ruby and C to compile the statistics from the C program outputs and to formulate and check the conjectures.
\subsection{C Programs} \label{ssec:c}
The main \ifmmode C_b \else $C_b$\xspace \fi CFE coefficient calculation programs were written in C. Both the MPIR 32 bit \cite{bbmpir} and GMP 64 bit multiple precision libraries were used. In general, the GMP library was used for the highest power of ``ten'', $X$, for each base due to the speed advantage of GMP on Linux versus MPIR on Windows and also due to the larger data handling capability of the 64 bit GMP library versus the 32 bit MPIR library.
There are three main types of programs. The first two main types of programs each have two subtypes since the limit on the input and output base range of the MPIR and GMP libraries is 2 to 62, while the study was conducted to base 124. The last main type finds use only for low bases and large values of $X$, so the extension to higher bases is unnecessary. The following list shows the main types and subtypes of programs:
\begin{itemize}
\item Programs that calculate the convergent, the CFE coefficients of the convergent, \ifmmode C_b \else $C_b$\xspace \fi as calculated by the convergent, and the convergent error
\begin{itemize}
\item Programs for bases 2 through 62
\item Programs for bases 63 through 124
\end{itemize}
\item Programs that calculate the convergent and the CFE coefficients of the convergent
\begin{itemize}
\item Programs for bases 2 through 62
\item Programs for bases 63 through 124
\end{itemize}
\item A program that calculates the convergent, \ifmmode C_b \else $C_b$\xspace \fi as calculated by the convergent, and the convergent error
\end{itemize}
\Cref{tbl:basest} gives the calculation times for the various bases and powers of ``ten'', $X$. See \Cref{ssec:hw} for more details on the calculations. It may be observed that the calculation times for bases 63 through 124 increase rapidly with $X$, because the base conversion processing is not as efficient as the MPIR and GMP input and output functions.
Currently, for bases from approximately 4 to 62, the limiting factor on the maximum power, $X$, for a given base, $b$, is usually memory. However, for the lower bases, particularly base 2 and base 3, and higher powers, the number of coefficients that are calculated becomes very large. In these cases, the calculation time of the CFE coefficients becomes a major factor. As more memory becomes available, the CFE coefficient calculation time may become more of a factor for bases higher than base 3 at the higher powers.
The program that calculates the convergent, \ifmmode C_b \else $C_b$\xspace \fi as calculated by the convergent, and the convergent error, but not the CFE coefficients, was written to further check the validity of the applicable conjectures at the highest power of ``ten'' in the lower bases, without the very time-consuming calculation of the coefficients. This program may also be used for higher bases, but the savings in calculation time is much smaller there.
For bases 63 to 124, the limitation is calculation time rather than memory due to the inefficient base conversion. However, as the base increases, the difference in the number of digits between powers of ``ten'' increases, and this factors into the increase in calculation time as well.
\begin{table}[!htbp]
\centering
\caption{Calculation Times}
\label{tbl:basest}
\begin{tabular}{|c||c|c|c|c|}
\hline
& & \multicolumn{3}{c|}{\textbf{calculation time in seconds}} \\ \hline
\multirow{3}{*}{\textbf{base} $\boldsymbol{b}$} & \multirow{3}{*}{$\boldsymbol{X}$} & \textbf{CFE} & \textbf{CFE} & \textbf{With} \\
& & \textbf{Calculation} & \textbf{Calculation} & \textbf{Error, No} \\
& & \textbf{with Error} & \textbf{without Error} & \textbf{Coefficients} \\
\hline
\multirow{2}{*}{2} & 25 & & & 219 \\ \cline{2-5}
& 24 & & 2041980 & 105 \\ \hline
\multirow{2}{*}{3} & 15 & 21551 & & \\ \cline{2-4}
& 16 & & 176523 & \\ \hline
\multirow{2}{*}{4} & 12 & 6048 & & \\ \cline{2-4}
& 13 & & 59031 & \\ \hline
\multirow{2}{*}{5} & 10 & 1206 & & \\ \cline{2-4}
& 11 & & 13169 & \\ \hline
\multirow{2}{*}{6} & 9 & 869 & & \\ \cline{2-4}
& 10 & & 8908 & \\ \hline
\multirow{2}{*}{7} & 8 & 397 & & \\ \cline{2-4}
& 9 & & 3161 & \\ \hline
\multirow{4}{*}{8--10} & \multirow{2}{*}{7} & base 8: 56 & & \\
& & base 10: 928 & & \\ \cline{2-4}
& \multirow{2}{*}{8} & & base 8: 569 & \\
& & & base 10: 5524 & \\ \hline
\multirow{4}{*}{11--14} & \multirow{2}{*}{6} & base 11: 130 & & \\
& & base 14: 903 & & \\ \cline{2-4}
& \multirow{2}{*}{7} & & base 11: 476 & \\
& & & base 14: 3330 & \\ \hline
\multirow{4}{*}{15--16} & \multirow{2}{*}{5} & base 15: 60 & & \\
& & base 16: 36 & & \\
\hhline{~---~}
& \multirow{2}{*}{6} & & base 15: 139 & \\
& & & base 16: 152 & \\ \hline
\multirow{4}{*}{17--27} & \multirow{2}{*}{4} & base 17: 5 & & \\
& & base 27: 88 & & \\
\hhline{~---~}
& \multirow{2}{*}{5} & & base 17: 9 & \\
& & & base 27: 172 & \\ \hline
\multirow{4}{*}{28--62} & \multirow{2}{*}{3} & base 28: 1 & & \\
& & base 62: 82 & & \\
\hhline{~---~}
& \multirow{2}{*}{4} & & base 28: 3 & \\
& & & base 62: 144 & \\ \hline
\multirow{4}{*}{63--90} & \multirow{2}{*}{2} & base 63: 5271 & & \\
& & base 90: 51821 & & \\
\hhline{~---~}
& \multirow{2}{*}{3} & & base 63: 14275 & \\
& & & base 90: 53217 & \\ \hline
\multirow{4}{*}{91--124} & \multirow{2}{*}{1} & base 91: 1 & & \\
& & base 124: 3 & & \\
\hhline{~---~}
& \multirow{2}{*}{2} & & base 91: 1 & \\
& & & base 124: 3 & \\ \hline
\end{tabular}
\end{table}
\subsection{Program Location and Preferred Programs} \label{ssec:plapp}
All of the programs that were used in the study, including ones that are not mentioned, are available under the GNU LGPL \cite{bbgnulgpl}. To download the programs, see reference \cite{bbdataprogloc}, under the ``programs'' folder. As there are many programs, some of which were used to check results, the preferred program for each main type used in the HWM convergent calculations is indicated in \Cref{tbl:pp}. The preferred programs for MPIR are Microsoft Visual Studio C++ projects, while the GMP programs consist of a C program and a C header file. For instructions on how to run the programs, see the \texttt{readme\_mpir.txt} and \texttt{readme\_gmp.txt} files that may be found in the appropriate folders.
\subsection{Other Programs and Data of Interest} \label{ssec:oprogdata}
As already mentioned, several programs were written to process the data and compare the output of the main programs with the conjectures. One processed data set that we feel may be of general interest is a list of the CFE coefficient lengths for all \cn[s], not just the HWMs. A separate file has been created for each base, $b$, and the maximum power of ``ten'', $X$, in which the coefficients were calculated. In some of the lower base cases, where the files may be very large (over 1 GB), compilations for lower values of $X$ are included.
Once the \texttt{vcpp.tar.gz} file has been unzipped and extracted, these files may be found in the following folders:
\begin{itemize}
\item \texttt{get\_hwm\_lengths\_generic\_base}
\item \texttt{get\_hwm\_lengths\_generic\_base\_auto}
\item \texttt{get\_hwm\_lengths\_generic\_base\_v\_hi\_base\_auto}
\end{itemize}
\noindent The name of the file includes $b$ and $X$. An example name is \\ \texttt{cn\_cfe\_coeffs\_base\_2\_pow\_10\_24\_no\_err\_all\_coeffs\_lengths.txt}.
There are other files in each of these directories for each $b$ and $X$. These files have been processed from the files containing all of the coefficient lengths. The files with \texttt{\_powers} at the end of the name give the HWM lengths for each $b$ and $X$.
\begin{table}[!htbp]
\centering
\caption{Preferred Programs}
\label{tbl:pp}
\begin{tabular}{|c|c|c|}
\hline
\textbf{Type of Program} & \textbf{Applicable Bases} & \textbf{Preferred Program} \\ \hline
\multirow{4}{*}{w/ error} & \multirow{2}{*}{2--62} & MPIR: \texttt{cn\_generic\_base\_w\_err\_no\_itoa\_auto} \\
& & GMP: \texttt{cn\_generic\_base\_gmp\_w\_err\_auto} \\ \cline{2-3}
& \multirow{2}{*}{63--124} & MPIR: \texttt{cn\_generic\_base\_w\_err\_very\_hi\_base\_auto} \\
& & GMP: \texttt{cn\_generic\_base\_gmp\_w\_err\_very\_hi\_base\_auto} \\ \hline
\multirow{4}{*}{w/o error} & \multirow{2}{*}{2--62} & MPIR: \texttt{cn\_generic\_base\_no\_err\_no\_itoa\_auto} \\
& & GMP: \texttt{cn\_generic\_base\_gmp\_no\_err} \\ \cline{2-3}
& \multirow{2}{*}{63--124} & MPIR: \texttt{cn\_generic\_base\_no\_err\_very\_hi\_base\_auto} \\
& & GMP: \texttt{cn\_generic\_base\_gmp\_no\_err\_very\_hi\_base\_auto} \\ \hline
w/ error, no coeffs & 2--62 & GMP: \texttt{cn\_generic\_base\_gmp\_w\_err\_auto\_no\_coeffs\_calc} \\ \hline
\end{tabular}
\end{table}
\subsection{Hardware} \label{ssec:hw}
The results given in \Cref{tbl:basest} were obtained on a desktop computer with a 3.40 GHz i7-2600 CPU, running a single thread, with 8 GB of 1333 MHz DDR3 SDRAM. The programs were run with GMP on Linux, and GMP was built via the command line using \texttt{--build=x86\_64-intel-linux-gnu} to optimize GMP for 64 bit operation on an Intel x86 processor. Additional tuning per the GMP instructions was attempted, but no performance increase was observed. Building GMP in this manner resulted in code that ran approximately twice as fast as the generic C code.
The MPIR programs were run in 32 bit mode on Windows and the build mode was for generic C code. The GMP programs ran about 8 times faster than the MPIR programs.
\section{Suggestions for Future Analysis} \label{sec:fa}
\subsection{Extending the powers of ``ten''} \label{ssec:ept}
Naturally, extending the maximum power of ``ten'', $X$, for each base and checking the calculations against the conjectures is desirable. Generally, the amount of memory required increases by a factor of approximately $1.25\,b$ for each increment in the power. When extending the calculations to a higher power $X$, we feel that it is very important to make the calculations both with the program that calculates \ifmmode C_b \else $C_b$\xspace \fi as calculated by the convergent and with the program that calculates the CFE coefficients without calculating \ifmmode C_b \else $C_b$\xspace \fi, with no more than one power of difference between the two. We believe that ensuring that the calculation of \ifmmode C_b \else $C_b$\xspace \fi conforms to \Cref{conj:cncalc} for power $X-1$ increases the credibility of the calculations of the CFE coefficients for power $X$.
As explained in \Cref{ssec:c}, for the lower bases, computing time becomes problematic. In general, for bases 62 and lower, each increment in the power increases the time by a factor of approximately $2.5\,b$, with a minimum factor of approximately 8 for base 2 at high powers of $X$.
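The growth factors quoted above can be turned into a rough cost projection. The following Python sketch is our own illustration, built only from the factors stated in the text (roughly $2.5\,b$ per increment of $X$, with a floor of about 8 for base 2); it is not one of the study's programs.

```python
# Back-of-envelope cost model using only the factors quoted above:
# each increment of the power X multiplies the run time by roughly 2.5*b,
# with a minimum factor of about 8 observed for base 2 at high powers.
def time_growth_factor(base):
    """Approximate multiplicative cost of increasing X by one."""
    return 8.0 if base == 2 else 2.5 * base

def projected_time(base, t_now, x_now, x_target):
    """Project the run time at power x_target from a time t_now measured at x_now."""
    t = t_now
    for _ in range(x_now, x_target):
        t *= time_growth_factor(base)
    return t

# Example: from the 21551 s base-3 run at X = 15 in the table, one more
# power is projected to cost roughly 2.5 * 3 = 7.5 times as much.
print(projected_time(3, 21551.0, 15, 16))  # -> 161632.5
```

The projection of about $1.6\times10^{5}$ s is of the same order as the measured 176523 s for base 3 at $X=16$ in \Cref{tbl:basest}, which is all such a crude model can be expected to give.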
\subsection{Analysis of the ratio of adjacent HWM coefficient numbers} \label{ssec:hwmsp}
We feel that it would be interesting to compare the ratios of the \cn[s] of successive HWMs in the various bases. A base-dependent formula may exist for the limit of this ratio as the power of ``ten'', $X$, increases without bound.
\subsection{More efficient algorithm for calculating the CFE coefficients} \label{ssec:cfee}
Implementation of a faster, more efficient algorithm to calculate the CFE coefficients would be of great benefit for the lower bases and higher powers of ``ten''.
\subsection{Further analysis of the \texorpdfstring{2\textsuperscript{nd}\xspace}{2nd} Generation HWM coefficients} \label{ssec:sghwma}
Further analysis of the 2\textsuperscript{nd} Generation HWMs is suggested, to see whether the difference between the HWM and 2\textsuperscript{nd} Generation HWM\xspace lengths stabilizes at 10 as the powers of ``ten'' go higher. It would also be of interest to see whether the 2\textsuperscript{nd} Generation HWM\xspace convergent errors and denominators in other bases have patterns similar to those in base 10 \cite{bbjks2}.
\subsection{More efficient base conversion for bases greater than base 62} \label{ssec:mebc}
The current programs for bases greater than base 62 have inefficient methods for calculating \ifmmode C_b \else $C_b$\xspace \fi and for converting between bases. This results in greatly increased calculation times for higher powers of ``ten''. Therefore, it is desired that more efficient algorithms be employed.
\section{Conclusions} \label{sec:cc}
We have determined that patterns exist in the CFE HWM convergents for Champernowne's Constant\xspace in bases 2 to 124, similar to the patterns that exist in the CFE HWM convergents for Champernowne's Constant\xspace in base 10. These patterns have been presented in the form of formulae in conjectures. In some bases, slight changes in the formulae are required. There are also a few exceptions to the conjectures, depending upon the base and the power of ``ten''. These differences and exceptions have been presented and discussed.
A cursory analysis of the 2\textsuperscript{nd} Generation HWM\xspace lengths in base 2 to base 62 has been presented and the results and exceptions to the expected results have been discussed. It was suggested that further study of the 2\textsuperscript{nd} Generation HWM\xspace convergents would be worthwhile.
The programs used in the analysis, and the data calculated by the programs, were discussed. The location of the programs and data was given. The programs are available under the GNU LGPL. The data is available under the ODC PDDL.
\def1em{1em}
\end{document} |
\begin{document}
\def\btt#1{{\tt$\backslash$#1}}
\newcommand{\bra}[1]{\langle #1|}
\newcommand{\ket}[1]{|#1\rangle}
\newcommand{\braket}[2]{\langle #1|#2\rangle}
\newcommand{\half}{\textstyle{\frac{1}{2}}}
\title{Effect of conformations on charge transport in a thin elastic tube}
\author{Radha Balakrishnan}
\affiliation{The Institute of Mathematical Sciences, Chennai 600 113,
India}
\email {[email protected]}
\author{Rossen Dandoloff}
\affiliation{Laboratoire de Physique Th\'eorique et Mod\'elisation,
Universit\'e de Cergy-Pontoise, F-95302 Cergy-Pontoise, France}
\email{[email protected]}
\pacs{87.15.He,\, 87.15.-v,\, 05.45.Yv}
\begin{abstract}
We study the effect of conformations
on charge transport in a thin elastic tube.
Using the Kirchhoff model for a tube with any given Poisson ratio,
cross-sectional shape and intrinsic twist,
we obtain a class of exact solutions for its
conformation.
The tube's torsion is found in terms of its intrinsic twist and its
Poisson ratio, while its curvature satisfies a nonlinear differential
equation which supports exact {\it periodic} solutions in the form of
Jacobi elliptic functions,
which we call {\it conformon lattice} solutions.
These solutions
typically describe conformations with loops.
Each solution induces a corresponding quantum effective {\it periodic}
potential in the Schr\"{o}dinger equation for an electron in the tube.
The wave function describes the delocalization of the electron
along the central axis of the tube.
We discuss some possible applications of this novel mechanism
of charge transport.
\end{abstract}
\maketitle
\section{Introduction}
Nanotubes and nanowires have attracted considerable attention due to
their potential technological applications \cite{baug}. Carbon nanotubes and DNA are well known
examples. One important problem is to
understand their mechanical properties. Another is to gain some insight into
the effect of quantum confinement on
electronic transport in such nanostructures.\\
One striking observation concerning the DNA molecule is that its central
axis can take on
various curved conformations. Similarly, nanotubes need
not be always straight.
Helix-shaped nanotubes have been
observed
by Volodin {\it et al} \cite{volo}.
One way to theoretically model these twisted thin tubes (wires)
would be to assume that they are elastic filaments described
within the framework of the Kirchhoff rod model \cite{kirc}, derive the
various possible curved conformations that can arise from it,
and see if they are in accordance with the shapes actually
observed.
This would also help in studying the elastic properties
of these structures within this model \cite{fons}.
The natural question that arises
is how the confinement of a particle to a curved path would affect
its transport.\\
Now, the problem of quantum transport of a free
particle in a thin curved tube has been studied by several authors
\cite{daco, gold, clar}.
They have shown that the curved geometry of the tube
essentially induces a potential in the path of the particle,
thereby affecting its transport properties.\\
In a recent paper \cite{jpha}, we had analyzed the case when the thin
tube is made of elastic material, and applied it to electron transport
in a biopolymer.
We analyzed the
statics and dynamics of the thin elastic tube
using the Kirchhoff
model\cite{kirc}.
Under certain conditions, the curvature function of the tube supports
a {\it spatially localized} traveling wave solution called
a Kovalevskaya solitary wave \cite{cole, gor1}.
We showed that this solution corresponds to a
conformation which is a localized twisted loop
traveling along the tube \cite{jpha}.
The localized bend
induces a quantum potential well in the Schr\"{o}dinger
equation for an electron in the tube \cite{daco,
gold, clar}, which traps it
in the twisted loop, resulting in its efficient motion,
without change of form,
along the polymer. Our result formalizes the
concept of a {\it conformon} that has been
hypothesized in biology \cite{volk, scot, keme}.\\
The motivation for the present paper is as follows. Firstly,
the analysis of the Kirchhoff model given in \cite{jpha,gor1}
had assumed that the elastic tube had no intrinsic twist. It was therefore
not strictly applicable either to DNA (which has an intrinsic twist
of about $10^{\circ}/$\AA, even in the straight relaxed state) or to
the class of intrinsically twisted nanotubes.
In what follows, we present
a more general analysis of the Kirchhoff model, by incorporating
an intrinsic twist in the tube, and
point out the nontrivial role that it plays in
determining the geometrical shape of the axis of the tube.
Further, our analysis is valid for general values of the
Poisson ratio.
Secondly, we show that the nonlinear differential equation for the curvature
function obtained from the static Kirchhoff equations can, in addition to the
localized solution
discussed in \cite{jpha}, also support a class of {\it spatially periodic}
solutions in the form of Jacobi
elliptic functions. We call them {\it conformon lattice solutions}.
Each such solution induces a quantum periodic potential.
Finally, we show explicitly that the Schr\"{o}dinger equation supports
an exact solution, which corresponds to
the electron getting {\it delocalized} along the axis of the
quantum wire,
even in the static case.
Thus this mechanism for electron transport
in a quantum nanotube is distinct from the conformon mechanism
discussed in Ref. \cite{jpha}.
\section{The Kirchhoff model}
We consider a thin elastic rod \cite{kirc}
whose central axis is described by a
space curve $\mathbf{R}(s,t)$.
Here, $s$ denotes the arc length of the
curve and $t$ the time.
Let the rod remain inextensible with time evolution, with
the unit tangent to its axial curve given by
${\bf t} = {\bf R}_s$.
(The subscript $s$ denotes the derivative with respect to
$s$). In the plane perpendicular to ${\bf t}$,
we define as usual \cite{struik}, two orthogonal unit vectors
${\bf n}$ and ${\bf b}$,
where the principal normal ${\bf n}$ is the unit vector
along ${\bf t}_{s}$, and the binormal ${\bf b} = ({\bf t} \times {\bf n})$.
The rotation of the Frenet frame $({\bf t}, {\bf n}, {\bf b})$ as
it moves along the curve,
is given by the well
known Frenet-Serret equations, ${\bf
t}_{s} = k{\bf
n}$, ${\bf n}_{s} = -k {\bf t} + \tau {\bf b}$
and ${\bf b}_{s} = -\tau {\bf n}$, where
the curvature $k=|\mathbf{t}_{s}|$ and
the torsion
$\tau=\mathbf{t}~\cdot~
(\mathbf{t}_{s}~\times~\mathbf{t}_{ss})~/~k^{2}$.
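These formulas for $k$ and $\tau$ are easy to check numerically. The following Python sketch (an illustration of the definitions above, not part of the original analysis) evaluates them by finite differences for a circular helix $\mathbf{R}(s) = (r\cos(s/c),\, r\sin(s/c),\, hs/c)$ with $c=\sqrt{r^2+h^2}$, whose exact curvature and torsion are $r/c^2$ and $h/c^2$.

```python
import math

# Check k = |t_s| and tau = t . (t_s x t_ss) / k^2 for a circular helix,
# whose exact curvature and torsion are k = r/c^2 and tau = h/c^2.
r, h = 2.0, 1.0
c = math.sqrt(r * r + h * h)

def tangent(s):
    # Unit tangent t = R_s of the arclength-parametrized helix.
    return (-r / c * math.sin(s / c), r / c * math.cos(s / c), h / c)

def diff(f, s, eps=1e-5):
    # Central-difference derivative of a vector-valued function.
    fp, fm = f(s + eps), f(s - eps)
    return tuple((p - m) / (2.0 * eps) for p, m in zip(fp, fm))

def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0])

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

s = 0.7
t = tangent(s)
ts = diff(tangent, s)
tss = diff(lambda x: diff(tangent, x), s)
k = math.sqrt(dot(ts, ts))
tau = dot(t, cross(ts, tss)) / k**2
print(k, r / c**2)    # numeric vs exact curvature
print(tau, h / c**2)  # numeric vs exact torsion
```

Both quantities agree with the closed forms to the accuracy of the finite differences.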
For elastic tubes, instead of ${\bf n}$ and ${\bf b}$,
it is more natural to use `material' orthogonal unit
vectors ${\bf d}_1$ and ${\bf d}_2$ lying on the
cross-section of the tube,
perpendicular to its axial curve. For example, they could lie along the
principal axes of inertia of the cross-section. Let
${\bf d}_{1}$ make an angle
$\phi$ with ${\bf n}$.
Hence we get ${\bf
d}_{1} = {\bf n }
\cos \phi + {\bf b} \sin \phi $ and ${\bf d}_{2} = -{\bf n} \sin
\phi + {\bf b} \cos \phi$. Denoting the tangent ${\bf t}$
by ${\bf d}_{3}$, and using the Frenet-Serret
equations, it is easy to show that the
rotation of the `material' frame $({\bf d}_1, {\bf
d}_2, {\bf d}_3)$ along the curve is given by
\begin{eqnarray}
\mathbf{d}_{i,s}=\mathbf{k} \times \mathbf{d}_i,
\label{FE}
\end{eqnarray}
where $i=1,2,3$, and the vector $\mathbf{k}$
is given by
\begin{eqnarray}
\mathbf{k}= k_1\mathbf{d}_1 + k_2\mathbf{d}_2 +k_3 \mathbf{d}_3.
\label{k}
\end{eqnarray}
Here
\begin{eqnarray}
(k_1, k_2, k_3) = (k\sin\phi,~ k\cos\phi,~ \tau + \phi_s).
\label{kcomp}
\end{eqnarray}
Let the internal elastic force (or tension)
and the total torque, that act on each cross section of the rod be
given by $\mathbf{g}$ and $\mathbf{m}$ respectively. The Kirchhoff equations that
result from the conservation of linear and angular momentum at every point $s$
are given in dimensionless form by \cite{cole, gor1}
\begin{eqnarray} \mathbf{g}_{s}= \mathbf{R}_{tt} \label{K1}\end{eqnarray}
and
\begin{eqnarray}
\mathbf{m}_{s}+ \mathbf{d}_3 \times
\mathbf{g} = a \mathbf{d}_1 \times \mathbf{d}_{1,tt} + \mathbf{d}_2 \times
\mathbf{d}_{2,tt}\,,
\label{K2}
\end{eqnarray}
where the subscript $t$ stands for the derivative with respect to time.
The internal force is written in a general form as
\begin{eqnarray}
\mathbf{g} =
g_1\mathbf{d}_1 + g_2 \mathbf{d}_2 + g_3\mathbf{d}_3\,.
\label{g}
\end{eqnarray}
We take into account the intrinsic twist $k_3^{0}$ of the elastic tube,
by writing the constitutive relation for the internal
torque $\mathbf{m}$ as
\begin{eqnarray}
\mathbf{m} = k_1\mathbf{d}_1 + ak_2 \mathbf{d}_2 + b(k_3-k_3^{0}) \mathbf{d}_3.
\label{K3}
\end{eqnarray}
In the equations above,
the parameter $a$ is a measure of the bending rigidity of the cross section of
the rod. It is the ratio $I_{1}/I_{2}$
of the two moments of inertia
of the cross-sections of the nanotube in the directions
$\mathbf{d}_1$ and $\mathbf{d}_2$.
These are conventionally oriented such that
$I_{1} \leq I_{2}$. Hence $0 <a \leq 1$.
(For a circular cross section, $a=1$.)
$b$ is a measure of the twisting rigidity of the rod.
It is given in terms of
$a$ and the Poisson ratio $\sigma$, which
is a measure of the change in volume of the rod as it is
stretched:
\begin{eqnarray}
b =2a/[(1+\sigma)(1+a)].
\label{b}
\end{eqnarray}
In Refs. \cite{gor1, jpha}, the term involving $k_3^{0}$
in Eq. (\ref{K3}) is absent. Hence the analysis
given there is valid for tubes
without an intrinsic twist. In the next section, we analyze
the Kirchhoff equations (\ref {K1}) and (\ref {K2})
with $k_3^{0}\neq 0$.
Our analysis will be valid for general values of $a$ and $\sigma$.
\section{Role of the
intrinsic twist}
We first analyze the static conformations.
Substituting Eqs. (\ref{g}) and (\ref {K3}) in Eqs.
(\ref{K1}) and (\ref{K2}),
we get
\begin{eqnarray} g_{1,s} + k_2g_3 - k_3g_2 = 0,
\label{e4}
\end{eqnarray}
\begin{eqnarray} g_{2,s} + k_3g_1 - k_1g_3 =0,
\label{e5}
\end{eqnarray}
\begin{eqnarray} g_{3,s} + k_1g_2 - k_2g_1 = 0,
\label{e6}
\end{eqnarray}
\begin{eqnarray} g_2 = k_{1,s} + (b-a)k_2k_3- b k_3^{0} k_2\,,
\label{e7}
\end{eqnarray}
\begin{eqnarray} g_1 = -ak_{2,s} + (b-1)k_1k_3-b k_3^{0} k_1\,,
\label{e8}
\end{eqnarray}
and
\begin{eqnarray} bk_{3,s} + (a-1)k_1k_2 = 0.
\label{e9}
\end{eqnarray}
As is clear from Eq. (\ref{kcomp}), the equations above
represent a
set of {\it nonlinear} coupled differential equations involving
$k$, $\tau$ and $\phi$.
To understand the role played by the intrinsic twist, we proceed as in
Refs. \cite{gor1,jpha}, and look for solutions that correspond
to $\phi = \frac{1}{2}n \pi,\,\, n = 0,1,\ldots\,$.
Using this in Eq. (\ref{kcomp}), we see that
Eq. (\ref{e9}) yields
\begin{eqnarray}
\tau =\tau_0\,.
\label{tau0}
\end{eqnarray}
Thus the torsion of the tube, which is a measure of its
nonplanarity, is an arbitrary constant, to be determined consistently.
We focus on nontrivial conformations with
a nonvanishing $\tau_0$. Two cases arise in the analysis of
Eqs. (\ref{e4})--(\ref{e8}): \\
\noindent
Case (i):\,\,$\phi = j\pi,\,\,j = 0,1\,;\,\, k_1=0\,; \,\, k_2=(-)^{j} k$.\\
We find in this case
\begin{eqnarray}
(b-2a)\,\tau_0 = b \,k_3^{0}\,.
\label{b2a}\end{eqnarray}
Using Eq. (\ref{b}) in this equation, we get
\begin{eqnarray}
\tau_0 = - \,k_3^{0}\big/[a+\sigma(a+1)].
\label{T1}
\end{eqnarray}
The internal elastic force is found to be
\begin{eqnarray}
\mathbf{g} = (-1)^{j}a
(\tau_0 \,k \,\mathbf{d}_2 -k_s\, \mathbf{d}_1)
+\left(C -\textstyle{\frac{1}{2}}a k^{2}\right)\,\mathbf{d}_3\,,
\label{G1}
\end{eqnarray}
where the constant $C$ represents the tension in the rod when it is
straight.\\
\noindent Case (ii):\,\,
$\phi = (j+\frac{1}{2})\pi\,,\,\, j = 0,1\,;
\,\, k_1=(-)^j k\,;\,\, k_2=0.$ \\
We now obtain
\begin{eqnarray}
(b-2)\,\tau_0 = b \,k_3^{0}\,.
\label{b2}
\end{eqnarray}
As before, using Eq. (\ref{b}) in this equation yields
\begin{eqnarray}
\tau_0= - ak_3^{0}/[1+\sigma(a+1)].
\label{T2}
\end{eqnarray}
The elastic force is determined as
\begin{eqnarray}
\mathbf{g}= (-1)^{j}(\tau_0 \,k \,\mathbf{d}_1+k_s \,\mathbf{d}_2)
+\left(C - \textstyle{\frac{1}{2}}k^{2}\right)\,\mathbf{d}_3\,.
\label{G2}
\end{eqnarray}
We now show that the intrinsic twist indeed plays a nontrivial role
in determining the torsion $\tau_0$ of the
conformation of the elastic tube.
First, if $k_3^{0}=0$, then Eqs. (\ref{b2a}) and (\ref{b2})
yield $b=2a$ and $b=2$,
respectively, because $\tau_0 \neq 0$.
Substituting
these two values of $b$ into Eq. (\ref{b}) successively,
we get $\sigma=-1/(1+a)$ and $\sigma=-a/(1+a)$, respectively.
Since $0 < a \leq 1$, it follows that the Poisson ratio $\sigma$
has to be {\it negative}, and in the range
$-1 < \sigma \leq -\frac{1}{2}$ and $-\frac{1}{2} \leq \sigma < 0$,
respectively, for the two cases.
Although thermodynamic stability arguments \cite{land}
merely restrict $\sigma$ to the range
$-1 \leq \sigma \leq \frac{1}{2}$, and
$\sigma$ can indeed be negative for
some biopolymers \cite{gor1},
for most elastic media
one finds that $0 < \sigma < \frac{1}{2}$.
Thus, for both cases (i) and (ii), {\it setting}
$k_3^{0}=0$ {\it determines}
the numerical value of $\sigma$, which turns
out to be negative
for any $a$. Further, $\tau_0$ can take on any arbitrary value
in this instance.
In contrast, if $k_3^{0}\neq 0$, the
corresponding torsion $\tau_0$ is
not arbitrary, but is determined in terms of $k_3^{0}$, $a$ and
$\sigma$ (which are material properties), as we
may expect on physical grounds.
This dependence can be seen from
Eqs. (\ref{T1}) and (\ref{T2}).
These equations also show that
positive as well as negative values of $\sigma$ are allowed
in this instance,
which is a desirable feature.
We may note that when $\sigma > 0$, the torsion $\tau_0$ and the
intrinsic twist
$k_3^{0}$ have opposite signs.
In addition,
on imposing the condition $0< a \leq 1$,
both the equations (\ref{T1}) and (\ref{T2}) lead to
the same inequality,
$\sigma < - k_3^{0}/\tau_0 \leq 2 \sigma + 1$.
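As a small numerical illustration of Eqs. (\ref{T1}) and (\ref{T2}), the following Python snippet (our own sketch; the sample values of $a$, $\sigma$ and $k_3^{0}$ are hypothetical, not taken from the paper) evaluates the torsion in the two cases and exhibits the opposite-sign property noted above for $\sigma>0$.

```python
# Torsion tau_0 induced by an intrinsic twist k3_0, Eqs. (T1) and (T2).
# a: ratio of the moments of inertia (0 < a <= 1); sigma: Poisson ratio.
def tau0_case1(k30, a, sigma):
    # Case (i): tau_0 = -k3_0 / [a + sigma (a + 1)]
    return -k30 / (a + sigma * (a + 1.0))

def tau0_case2(k30, a, sigma):
    # Case (ii): tau_0 = -a k3_0 / [1 + sigma (a + 1)]
    return -a * k30 / (1.0 + sigma * (a + 1.0))

# Sample (hypothetical) material parameters:
k30, a, sigma = 1.0, 0.8, 0.3
t1 = tau0_case1(k30, a, sigma)
t2 = tau0_case2(k30, a, sigma)
print(t1, t2)  # both negative: opposite in sign to k3_0 when sigma > 0
```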
\section{Exact conformations}
In the last section, we found the solutions for the torsion $\tau_0$.
Using Eq. (\ref{G1}) in Eq. (\ref{e4}), and
Eq. (\ref{G2}) in Eq. (\ref{e5}), respectively,
we can derive the nonlinear
differential equation for the curvature $k$ in
Cases (i) and (ii). They are seen to have
the same form
\begin{eqnarray}
k_{ss}+\textstyle{\frac{1}{2}} k^{3}=(C_2 - \tau_0^{2})\, k,
\label{koe}
\end{eqnarray}
where $C_2=C/a$ for Case (i), while $C_2=C$ for Case (ii).
Note that Eq. (\ref{koe}) has
the same form as that obtained earlier
\cite{gor1, jpha}, for the case $k_3^{0}=0$ as well, but with the
important difference that $\tau_0$ is not arbitrary any more, but
depends on $k_3^{0}$
as shown in the last section.
Eq. (\ref{koe}) has a solution
of the form
\begin{eqnarray}
k(s)=2(C_2-\tau_0^{2})^{1/2}\, {\rm sech}\,\left[
(C_2-\tau_0^{2})^{1/2}\,s\right],
\label{sech}
\end{eqnarray}
for $(C_{2}-\tau_{0}^{2}) > 0$.
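One can verify this solution directly: the following Python check (ours, with arbitrary sample values of $C_2$ and $\tau_0$ that are not from the paper) confirms numerically that the sech profile satisfies Eq. (\ref{koe}).

```python
import math

# Numerical check that k(s) = 2 A sech(A s), with A = sqrt(C2 - tau0^2),
# satisfies the curvature equation (koe): k_ss + k^3/2 = (C2 - tau0^2) k.
C2, tau0 = 1.25, 0.5  # sample values with C2 - tau0^2 > 0
A = math.sqrt(C2 - tau0**2)

def k(s):
    return 2.0 * A / math.cosh(A * s)

def koe_residual(s, eps=1e-4):
    # Central-difference second derivative of k.
    k_ss = (k(s + eps) - 2.0 * k(s) + k(s - eps)) / eps**2
    return k_ss + k(s)**3 / 2.0 - (C2 - tau0**2) * k(s)

for s in (-1.0, 0.0, 0.3, 2.0):
    print(s, koe_residual(s))  # residuals are near zero
```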
This represents a static conformon. Now, due to the scale and
Galilean invariance \cite{cole} of the
dynamic Kirchhoff equations (\ref{K1}) and (\ref{K2}),
traveling wave solutions are possible for $k$, with $s$ replaced by
$(s-vt)$ in Eq. (\ref{sech}).
Here $v$ represents the velocity of the wave. This leads to a moving
conformon\cite{jpha}, which is like a solitary wave.
Now, in general, we find that the nonlinear differential equation
(\ref{koe}) supports solutions of the form
\begin{eqnarray}
k(s,\kappa) = 2 \sqrt{\frac{C_{2}-\tau_{0}^{2}}{2-\kappa^{2}}}
~ {\rm dn}\, (\sqrt{\frac{C_{2}-\tau_{0}^{2}}{2-\kappa^{2}}}~s, \kappa),
\label{dn}
\end{eqnarray}
where ${\rm dn}\,$ is the usual Jacobi elliptic function, with modulus
$\kappa$ in the range $0 \le \kappa \le 1$ \cite{abra}.
For $\kappa=1$, the solution (\ref{dn}) becomes (\ref{sech}),
which was considered in Ref. \cite{jpha}.
But in contrast to that solution,
which was spatially localized, the
solution Eq. (\ref{dn})
is a spatially periodic function, with a finite period $2 K(\kappa)$
for all $\kappa \ne 1$.
Here $K(\kappa)$ is the complete elliptic integral of the first kind,
which tends to infinity as $\kappa \to 1$.
Thus we call Eq. (\ref{dn}) for all $\kappa \ne 1$, a {\it conformon
lattice} solution.
In Figs. 1 to 5, we have displayed the conformations of a thin elastic
tube (rod) that correspond to
the conformon lattice solutions for the curvature,
given in Eq. (\ref{dn}), for various values of $\kappa$, and constant torsion
$\tau_0=0.5$. In all the plots,
$\sqrt{C_2-\tau_0^{2}}$ has been set equal to unity, since its inverse
merely scales the length.
For $\kappa=1$, the conformation is given in Fig. 1, and it corresponds
to a single loop or conformon.
Its shape is (as expected) different from that obtained
in Ref. \cite{jpha} for $\tau_0=1$.
\begin{figure}
\caption{Conformation for $\kappa=1$ and $\tau_0=0.5$: a single twisted loop (conformon).}
\label{fig1}
\end{figure}
We find that even a slight decrease to $\kappa=0.995$
leads to a fairly large change in the conformation,
with two loops, and a partially complete loop, as seen in Fig. 2.
\begin{figure}
\caption{Conformation for $\kappa=0.995$: two loops and a partially complete loop.}
\label{fig2}
\end{figure}
From $\kappa=0.995$ to $\kappa=0.75$, there are steady changes in the
conformation.
In contrast, below $\kappa =0.75$, the conformational
changes are much slower as $\kappa$ varies, as seen by comparing Fig. 3
($\kappa = 0.75$) with Fig. 4 ($\kappa=0.25$), which differ very slightly.
\begin{figure}
\caption{Conformation for $\kappa=0.75$.}
\label{fig3}
\end{figure}
\begin{figure}
\caption{Conformation for $\kappa=0.25$.}
\label{fig4}
\end{figure}
For $\kappa=0$, the curvature becomes a constant
independent of $s$. Hence this conformation
represents a structure in which the
axis of the tube is coiled into a helix, with
a constant curvature and torsion. This is displayed in Fig. 5.
We parenthetically remark that while this helical tube was
essentially the only
conformation considered in
\cite{fons} in the context of the Kirchhoff model, we see that
a wide class of coiled conformations corresponding to nonzero $\kappa$
is supported by the model.
\begin{figure}
\caption{Conformation for $\kappa=0$: the axis coils into a helix of constant curvature and torsion.}
\label{fig5}
\end{figure}
As in the case of the conformon solution Eq. (\ref{sech}),
it is possible to have traveling wave versions of the solutions
(\ref{dn}), with $s$ replaced by $(s-vt)$.
We conclude this section with the remark that the solutions for the curvature
(Eq. (\ref{dn})) will also be applicable for a closed tube
of length $L$ satisfying periodic boundary conditions.
We get
$L \sqrt{\frac{C_{2}-\tau^{2}_{0}}{2-\kappa^{2}}}~
=2m K(\kappa)$, where $m$ is an integer.
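The closed-tube condition is easy to evaluate numerically. Below is a small sketch (ours, not part of the original analysis) that computes the complete elliptic integral $K(\kappa)$ by the arithmetic-geometric mean, $K=\pi/(2\,\mathrm{AGM}(1,\sqrt{1-\kappa^2}))$, and the admissible lengths $L$:

```python
import math

def ellip_k(kappa):
    """Complete elliptic integral of the first kind K(kappa), for 0 <= kappa < 1,
    computed from the arithmetic-geometric mean: K = pi / (2 * AGM(1, kappa')),
    where kappa' = sqrt(1 - kappa**2) is the complementary modulus."""
    a, b = 1.0, math.sqrt(1.0 - kappa**2)
    while a - b > 1e-15:
        a, b = 0.5 * (a + b), math.sqrt(a * b)
    return math.pi / (2.0 * a)

def closed_tube_length(m, kappa, C2, tau0):
    """Length L of a closed tube carrying m full periods of dn, from
    L * sqrt((C2 - tau0**2) / (2 - kappa**2)) = 2 m K(kappa)."""
    return 2 * m * ellip_k(kappa) / math.sqrt((C2 - tau0**2) / (2 - kappa**2))
```

For example, $K(0)=\pi/2$, and $K(\kappa)$ increases with $\kappa$, so longer tubes are needed to close a fixed number of periods as $\kappa\to 1$.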
\section{Quantum mechanical charge transport in a tube}
In this section, we consider the implications of the
conformon lattice solution (\ref{dn})
on electron transport along a nanotube. We recall some salient steps
given in \cite{jpha}, where we discussed the conformon, for the sake
of completeness.
As has been shown \cite{gold, clar}, a thin tube
with curvature $k(s)$ and
torsion $\tau_{0}$ induces a quantum potential
$V_{eff}(s)=(\hbar^{2}/2\mu)[-k^{2}/4~ +~\tau_{0}^{2}/2]$
for an electron moving on it.
Here $\mu$ is the mass of the electron.
In the time-dependent Schr\"{o}dinger equation with the
above potential, we
can eliminate the constant torsion $\tau_{0}$
using a simple gauge transformation $\psi_{1}=\psi(s,t)\exp (-i\hbar
\tau_{0}^{2} t/4\mu)$. This leads to
\begin{eqnarray}
i\hbar \frac{\partial
}{\partial t}\psi(s,t)
=-\frac{\hbar^{2}}{2\mu}\left(\frac{\partial^{2}}{\partial
s^{2}} + \frac{k^{2}(s)}{4} \right)\psi(s,t)
\label{e13}
\end{eqnarray}
Using the transformation
$(t,s) \rightarrow (\frac{4\mu}{\hbar}u, \sqrt{2}s_1)$,
Eq. (\ref{e13}) becomes
\begin{eqnarray}
i~\psi_{u}+\psi_{s_1s_1} + \frac{k^{2}}{2}\psi =0,
\label{knls}
\end{eqnarray}
where $k=k(s_1)$, and the subscripts $s_1$ and $u$ stand
for the partial derivatives $\frac{\partial}{\partial s_1}$ and
$\frac{\partial}{\partial u}$.
Looking for stationary solutions of the Schr\"{o}dinger equation
(\ref{knls}) in the form
\begin{eqnarray}
\psi(s_1,u)=k(s_1)\exp (-i E~u),
\label{e15}
\end{eqnarray}
we get
\begin{eqnarray}
\left (k_{s_{1}s_{1}} + \frac{k^{3}}{2}\right)= -E ~k.
\label{e16}
\end{eqnarray}
Comparing this with Eq. (\ref{koe}), for self-consistency, we
must have
\begin{eqnarray}
E = -(C_2 -\tau_0^{2}),
\label{E}
\end{eqnarray}
which is negative. We write Eq. (\ref {e16})
in the form of a time-independent
Schr\"{o}dinger equation with a potential term $V$
\begin{eqnarray}
-k_{s_{1}s_{1}} - V(s, \kappa) k = E ~k.
\label{se}
\end{eqnarray}
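The self-consistency can also be checked numerically in the $\kappa\to 1$ limit, where the lattice degenerates to the single conformon $k(s)=2a\,{\rm sech}(as)$ with $a=\sqrt{C_2-\tau_0^2}$. A finite-difference sketch (all names ours):

```python
import math

def check_sech_limit(C2=1.5, tau0=0.5, s=0.7, h=1e-4):
    """In the kappa -> 1 limit the lattice solution reduces to the single
    conformon k(s) = 2a sech(a s), a = sqrt(C2 - tau0**2).  Verify by a
    central finite difference that k'' + k**3/2 = -E k with
    E = -(C2 - tau0**2); the return value is the residual."""
    a = math.sqrt(C2 - tau0**2)
    k = lambda x: 2 * a / math.cosh(a * x)
    kpp = (k(s - h) - 2 * k(s) + k(s + h)) / h**2   # finite-difference k''(s)
    E = -(C2 - tau0**2)
    return kpp + k(s)**3 / 2 + E * k(s)
```

The residual is of the order of the finite-difference error, far below the scale of the individual terms.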
On using Eq. (\ref{dn}), we see that the potential is given by
\begin{eqnarray}
V(s, \kappa) = -2 \frac{(C_{2} - \tau_{0}^{2})}{(2 - \kappa^{2})}
{\rm dn}\,^{2}(\sqrt{\frac{C_{2}-\tau_{0}^{2}}{2-\kappa^{2}}}~s, \kappa),
\label{v}
\end{eqnarray}
which is negative as well.
Since ${\rm dn}\,^{2}(s,\kappa) \le 1$ for all $s$ and $\kappa$,
the minimum value of the potential in Eq. (\ref{v}) is
\begin{eqnarray}
V_{min} = -\left(C_{2}-\tau_{0}^{2}\right)/[1 - \frac{\kappa^{2}}{2}].
\label{vmin}
\end{eqnarray}
Thus we have the inequality $0 \ge E \ge V_{min}$.
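The bracketing $0\ge E\ge V_{min}$ is elementary arithmetic; a trivial sketch (names ours) for sample parameters:

```python
def energy_bounds(C2, tau0, kappa):
    """Return (V_min, E) with E = -(C2 - tau0**2) and
    V_min = -(C2 - tau0**2) / (1 - kappa**2 / 2), so that the
    inequality in the text reads V_min <= E <= 0."""
    d = C2 - tau0**2
    return -d / (1 - kappa**2 / 2), -d
```

Note that at $\kappa=1$ one has $V_{min}=2E$, and as $\kappa\to 0$ the two bounds coincide.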
On using Eq. (\ref{dn}) and Eq. (\ref{E}) in Eq. (\ref{e15}), the {\it exact}
stationary solution of the time dependent
Schr\"{o}dinger equation for the electron (Eq. (\ref{knls}))
is of the form
\begin{widetext}
\begin{eqnarray}
\psi(s_{1},\kappa,u)= 2\sqrt{\frac{C_{2}-\tau^{2}_{0}}{2-\kappa^{2}}}~
{\rm dn}\, (\sqrt {\frac{C_{2}-\tau^{2}_{0}}{2-\kappa^{2}}}~s_{1}, \kappa)~
\exp i~ (C_{2}-\tau^{2}_{0})u.
\label{psi}
\end{eqnarray}
\end{widetext}
Hence we see that the conformon lattice solution, a spatially periodic solution
for $k(s_{1})$ given
in Eq. (\ref{dn}),
which determines the conformation of the elastic tube,
is also the amplitude of the electron
wave function in (\ref{psi}).
Note that when $\kappa$ goes to $1$, the amplitude of the above
solution reduces to Eq. (\ref{sech}), which corresponds to the
static single-loop conformation in Fig. 1.
This would in turn localize the electron at the center of the loop.
However, as we explained below Eq. (\ref{sech}),
a dynamical solution with $s$ replaced by $(s-vt)$ is possible for the Kirchhoff equations
(\ref{K1}) and (\ref{K2}), which would describe
a moving loop.
Turning to electron transport on a tube, we showed in \cite{jpha}
that the quantum effective potential induced by the
above (moving) curvature traps the electron, which then travels with
it along the tube.
This is
the conformon mechanism
described in Ref. \cite{jpha} and applied to a biopolymer.
Now, we see that the conformon lattice solution for the curvature (Eq. (\ref{dn}))
leads to a different type of charge transport mechanism.
While for $\kappa = 1$, it was the {\it motion} of the
conformon that led to electron transport, we see that
for $\kappa \ne 1$, {\it even} the static conformon lattice
contributes to electron transport.
This happens because the electron wave function
Eq. (\ref{psi}) corresponds to a
spatially {\it delocalized state}, due to the finite periodicity
of ${\rm dn}\,$. Indeed, Eq. (\ref{psi})
shows that since for any $\kappa$,
${\rm dn}\,(s,\kappa)$ does not vanish for any $s$, the probability
of finding the electron on the polymer,
${\rm dn}\,^{2}(s,\kappa)$, is non-vanishing for all $s$.
Note that the period of ${\rm dn}\,(s,\kappa)$ increases as $\kappa$ increases,
and tends to infinity as $\kappa$ tends to $1$. Further,
${\rm dn}\,(s,\kappa)$ itself goes to unity for all $s$ as $\kappa$
tends to zero.
The conformon lattice mechanism therefore contributes
towards enhancing electron transport along the nanotube
as $\kappa$ is decreased from $1$ to $0$.
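The three facts used here, that ${\rm dn}\,(s,\kappa)$ never vanishes, is bounded below by $\sqrt{1-\kappa^2}$, and has period $2K(\kappa)$, can be checked numerically. The sketch below (names ours) integrates the identity ${\rm dn}\,'' = (2-\kappa^2)\,{\rm dn}\, - 2\,{\rm dn}\,^3$, which follows from the standard derivative formulas for the Jacobi elliptic functions, with a classical RK4 step:

```python
import math

def ellip_k(kappa):
    # complete elliptic integral K(kappa) via the arithmetic-geometric mean
    a, b = 1.0, math.sqrt(1.0 - kappa**2)
    while a - b > 1e-15:
        a, b = 0.5 * (a + b), math.sqrt(a * b)
    return math.pi / (2.0 * a)

def dn_profile(kappa, n_steps=4000):
    """Integrate dn'' = (2 - kappa**2) dn - 2 dn**3 with dn(0) = 1, dn'(0) = 0
    over one period 2 K(kappa) by classical RK4; return the sampled values."""
    h = 2.0 * ellip_k(kappa) / n_steps
    f = lambda y, p: (p, (2 - kappa**2) * y - 2 * y**3)
    y, p, ys = 1.0, 0.0, [1.0]
    for _ in range(n_steps):
        k1 = f(y, p)
        k2 = f(y + h / 2 * k1[0], p + h / 2 * k1[1])
        k3 = f(y + h / 2 * k2[0], p + h / 2 * k2[1])
        k4 = f(y + h * k3[0], p + h * k3[1])
        y += h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        p += h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
        ys.append(y)
    return ys
```

The trajectory oscillates between $\sqrt{1-\kappa^2}$ and $1$ and returns to $1$ after one period $2K(\kappa)$, confirming that the electron density ${\rm dn}\,^2$ is nowhere zero.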
\section{Discussion}
With regard to applications, let us first consider
carbon nanotubes.
It has been shown using Green function techniques in the context of
an ``armchair'' carbon nanotube that bending of a
nanotube increases its electrical resistance \cite{roch}.
This agrees with our finding that a single bend (as in Fig. 1)
would cause localization of the electron wave function.
Thus although the
phenomenological model we have presented is a
highly idealized continuum model in the sense that it ignores
details such as electronic structure, it is able to capture the essence
of the curved geometric effects quite succinctly.
Its merits include the use of a realistic elastic model that takes into
account an intrinsic twist
and a possible asymmetry in the bending rigidities.
We have obtained a helical conformation (Fig. 5) which has been observed
experimentally. While the fact that a helical solution {\it per se}
can be supported by the Kirchhoff model
is well known \cite{gor2},
here it emerges as a special case of an elliptic function solution.
It would be interesting to experimentally
measure its electrical resistance as a
function of the number of coils. We conjecture that the
resistance of nanotubes where the coils appear very close to
each other may be less
than that in which the coils are far apart, since the delocalization
of the electron will be very efficient in the former case.
Hence design of suitable experiments to verify the above would be of
interest.
More importantly, we have shown
that in addition to the helical conformation,
the Kirchhoff model supports
shapes such as those given in Figs. 1 to 4, which have
constant torsion but varying curvature. While we have
seen in \cite{jpha} that a static loop conformation
(Fig. 1) localizes the charge on the loop, the new
multiloop solutions (Figs. 2 to 5) are shown to
lead to delocalization of the wave function,
which provides a novel mechanism for charge transport.
Conformations corresponding to closed tubes are also possible.
First, the simplest case of a circular ring (and hence planar)
solution
is easily obtained
by setting the curvature $k$ to be a constant $K_{0}$,
and the torsion $\tau_{0} = 0$ in the basic equations
(\ref{e4}) to (\ref{e9}).
We obtain $K_{0} =\sqrt{2 C_{2}}$.
This is the same as the
solution of $k$ found from Eq. (\ref{koe})
by inspection in this case. In addition,
closed tube
conformations for the general elliptic function solution
(\ref{dn}) exist when its length
$L$ is such that
$L \sqrt{\frac{C_{2}-\tau^{2}_{0}}{2-\kappa^{2}}}$
is an integral multiple of its period, $2K(\kappa)$,
with periodic
boundary conditions ${\bf R}(L) = {\bf R}(0)$.
In the context of DNA, Shi and Hearst \cite{shi} have
analysed the Kirchhoff equations, by treating DNA
as a thin elastic rod with a circular cross-section,
to obtain exact solutions of curvature and torsion.
They have plotted
closed tube configurations of a
supercoiled DNA in the form of a toroidal helix.
Our analysis and plots are for open tubes with
non-circular cross-section, the helix being a special case.
Over the years, physicists have used various types of
experiments to measure DNA conductivity, but the results
contradict each other \cite{endr}.
Bioscientists have employed
other types of methods\cite{boon} such as oxidative
damage in DNA, to study charge transport.
They have firmly established
that there is an efficient transport of electrons (or holes)
through the DNA base pair stack, which proceeds over
fairly long molecular distances ($\sim 200$ \AA)
under ordinary thermal conditions.
Since the central axis of DNA can take on curved conformations,
especially when it interacts with
various proteins during replication and transcription,
the quantum mechanical motion of
the electron is not always along
a straight line, but rather, on a curved path.
This curvature perhaps
plays a role in charge transport,
since the sensitivity of transport to
sequence-dependent conformations as well as local
flexibility and distortions
of the soft DNA chain have been noted in
experiments \cite{tran, boon}.
It is indeed not easy
to measure coherent charge transport in DNA, because of effects due to
temperature, disorder, dissipation and interaction
with molecular vibrations, etc \cite{endr}. In addition,
issues connected with
the source of the charge, whether it is transport between the
leads or due to a donor-acceptor scenario, will need
careful experimental scrutiny. But we believe that
irrespective of how the charge gets injected,
given the same ambient conditions like temperature,
a coiled elastic nanotube induces a quantum potential
in its path, while
a straight tube of the same length does not, and
this should lead to observable effects on charge transport.
We note that the
formation of DNA loops (such as in Figs. 1 to 5)
has been frequently observed \cite{matt},
and is considered
to be an important biological mechanism for regulating
gene expression.
Thus the exact conformations that we have obtained using
the elastic model indeed seem to have physical significance.
In view of the above, systematic experiments should be
designed to study the
dependence of charge transport on DNA conformations.
We suggest that in
such experiments, long molecules (with length greater than the
persistence length of 50 nm) which are not stretched out or attached to a
surface, but rather in their more
natural, free curved conformation (with loops, if possible)
should be used
to investigate for what kinds of conformations there would be either
enhancement or depletion of charge migration.
In summary, as a result of an intricate interplay between classical
elasticity of the nanotube and {\it quantum} motion of the electron
in it, certain
conformations with constant torsion are shown to lead to a class of
corresponding exact electronic
wave functions. This suggests
a charge transport mechanism that underscores the role played by
the curved geometry of the axis of the tube, and in particular, the
nonlinear differential equation satisfied by the curvature function.
Hence curved geometry
should be taken into account in order
to obtain a full understanding of charge conduction in
nanotubes.
Our work should be regarded as a first step
in this direction.
RB thanks the Council of Scientific and Industrial
Research, India, for financial support
under the Emeritus Scientist Scheme.
\end{document}
\begin{document}
\title{Ordering the representations of $S_n$ using the interchange process}
\author{Gil Alon and Gady Kozma}
\address{Gady Kozma\\
Department of Mathematics,\\
Weizmann Institute of Science,\\
Rehovot 76100, Israel}
\email{[email protected]}
\address{Gil Alon\\
Division of Mathematics,\\
The Open University of Israel,\\
1 University Road, Raanana 43107, Israel}
\email{[email protected]}
\begin{abstract}
Inspired by Aldous' conjecture for
the spectral gap of the interchange process and its recent
resolution by Caputo, Liggett and Richthammer \cite{CLR09}, we define
an associated order $\prec$ on the irreducible representations of $S_n$. Aldous'
conjecture is equivalent to certain representations being comparable
in this order, and hence determining the ``Aldous order'' completely is a
generalization of that question. We show a few additional entries for this order.
\end{abstract}
\keywords{Aldous' conjecture, interchange process, symmetric group,
representations}
\subjclass[2000]{82C22 (60B15, 43A65, 20B30, 60J27, 60K35)}
\maketitle
\section{Aldous' order}
Let $G$ be a finite graph with vertex set $\{1,\dotsc,n\}$, and equip each edge $\{i,j\}$ with an alarm clock
that rings with exponential rate $a_{i,j}$. Put a marble in every
vertex of $G$, all different, and whenever the clock of $\{i,j\}$ rings,
exchange the two marbles. Each marble therefore does a standard
continuous-time random walk on the graph but the different walks are
dependent. This process is called {\em the interchange process} and is
one of the standard examples of an interacting particle system,
related to exclusion processes (where the marbles have only a few
possible colors) but typically more complicated. Further, when one
considers the evolution of the permutation taking the initial
positions of the marbles to their positions at time $t$, one gets a
continuous-time random walk on a (weighted) \emph{Cayley graph} of the
group of permutations $S_n$.
The first landmark in the understanding of this process was the work
of Diaconis \& Shahshahani \cite{DS81}. For the case of $G$ being the
complete graph they diagonalized the relevant $n!\times n!$ matrix
completely using representation theory and achieved very fine results
on the mixing properties.
If one cannot get the whole spectrum, the second eigenvalue (the
so-called spectral gap) allows one to get significant partial information on the
process. In 1992 Aldous made the bold conjecture that the spectral gap
of the interchange process is in fact equal to the spectral gap of the
simple random walk on $G$, for every $G$\label{pg:aldous}. This was
the focus of much
research \cite{B94,HJ96,M08,SC08,C09a,D09} and was finally resolved by
Caputo, Liggett and Richthammer \cite{CLR09}. Our focus in this paper
is however the spectrum as a whole, and for this we need to discuss
the problem from a representation theoretical point of view. More details
on representation theory will be given below in \S\ref{sec:rep}, for
now we continue assuming the reader has basic familiarity with the
subject.
Let $n\in\mathbb{N}$ and let
$A=\left\{a_{i,j}\right\}_{1\le i<j\le n}$ with all $a_{i,j}$
non-negative. Examine the following {\em formal sum of permutations with real
coefficients}\footnote{Or element of the group ring ${\mathbb R}[S_n]$, if you prefer}
\[
\Delta_A=\sum_{i<j}a_{i,j}(\id-(ij))
\]
where id stands for the identity permutation. Let $\rho$ be
any representation of $S_n$. Then
\begin{equation}\label{eq:defrhoDelta}
\rho(\Delta_A)=\sum_{i<j}a_{i,j}\big(\rho(\id)-\rho((ij))\big)
\end{equation}
is some $\dim\rho\times\dim\rho$ matrix. It is well known that
$\rho(\Delta_A)$ is a positive-semidefinite matrix, indeed $(ij)$ is an
involution so all the eigenvalues of $\rho((ij))$ are $\pm 1$ and the
eigenvalues of $\rho(\id-(ij))$ are in $\{0,2\}$, so each term in
(\ref{eq:defrhoDelta}) is positive semidefinite and hence so is their sum. We shall
denote the eigenvalues of $\rho(\Delta_A)$ by
\[
\lambda_1(A;\rho)\le\dotsb\le\lambda_{\dim(\rho)}(A;\rho)
\]
We will occasionally drop the $A$ from the notation.
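This positivity argument can be checked concretely in the defining permutation representation: since $\rho((ij))^2=\rho(\id)$, the matrix $M=\rho(\id)-\rho((ij))$ satisfies $M^2=2M$, forcing its eigenvalues into $\{0,2\}$. A small sketch (names ours):

```python
from itertools import combinations

def transposition_matrix(n, i, j):
    """Permutation matrix of the transposition (i j) acting on R^n (0-indexed)."""
    M = [[1.0 if r == c else 0.0 for c in range(n)] for r in range(n)]
    M[i][i] = M[j][j] = 0.0
    M[i][j] = M[j][i] = 1.0
    return M

def mat_sub(A, B):
    return [[a - b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[r][k] * B[k][c] for k in range(n)) for c in range(n)]
            for r in range(n)]

def check_psd_identity(n=4):
    """Each term id - (i j) satisfies M**2 = 2M in the defining representation,
    since P**2 = I for a transposition matrix P; hence its eigenvalues lie in
    {0, 2} and Delta_A is positive semidefinite for non-negative a_{ij}."""
    I = [[1.0 if r == c else 0.0 for c in range(n)] for r in range(n)]
    for i, j in combinations(range(n), 2):
        M = mat_sub(I, transposition_matrix(n, i, j))
        M2 = mat_mul(M, M)
        if any(abs(M2[r][c] - 2 * M[r][c]) > 1e-12
               for r in range(n) for c in range(n)):
            return False
    return True
```

The identity $M^2=2M$ holds exactly, since $M = I - P$ and $P^2 = I$ give $M^2 = 2I - 2P = 2M$.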
Now, the irreducible representations of $S_n$ are indexed
by partitions of $n$. Namely, for each integer sequence $r_1\ge r_2\ge
\dotsb\ge r_k>0$ with $\sum_{i=1}^k r_i=n$ there exists a unique
irreducible representation, which we shall denote by
$[r_1,\dotsc,r_k]$. Such a partition has a nice graphical
representation given by the associated \emph{Young diagram} obtained
by drawing each $r_i$ as a line of boxes from top to bottom,
\[
[5,1]={\tiny\yng(5,1)}\qquad [3,2,1]={\tiny\yng(3,2,1)} \qquad [2,1^3]={\tiny\yng(2,1,1,1)}
\]
and we will occasionally use it. We may now state Aldous' conjecture
\begin{thm*}[Caputo, Liggett \& Richthammer \cite{CLR09}] For any $A$
and any irreducible $\rho$ different from the trivial representation $[n]$,
\begin{equation}\label{eq:Aldous}
\lambda_1(A;[n-1,1])\le\lambda_1(A;\rho).
\end{equation}
\end{thm*}
\noindent The formulation of the result in \cite{CLR09} does not use
representation theory. As discussed on page \pageref{pg:aldous}, they
showed that the interchange process has the same spectral gap as
the simple random walk. See \cite{C09a} for how to get from one
formulation to the other.
Faced with (\ref{eq:Aldous}) one is
tempted to generalize the question. Define the {\bf Aldous order} on
irreducible representations by
\[
\rho\preceq\sigma \iff \forall A\; \lambda_1(A;\rho)\ge\lambda_1(A;\sigma)
\]
where again by $\forall A$ we mean for all $n\times n$ matrices with
non-negative coefficients. It is rather unfortunate that the
\emph{largest} representation in the $\prec$ order has the
\emph{smallest} $\lambda_1$, but we wish to make $\prec$ consistent
with the \emph{domination order}, for which there is already an established
direction. We say that $\rho\prec\sigma$ if $\rho\preceq\sigma$ and
they are different, and remark that $\rho\preceq\sigma$ and
$\rho\succeq\sigma$ imply that $\rho=\sigma$, see page
\pageref{pg:precsucceq}, remark 1.
As can be seen in figure \ref{fig:ao}, $\prec$ is an interesting object,
and it seems to be correlated with the domination order
$\vartriangleleft$. We say that $\sigma\vartriangleleft\rho$
if $\sigma$ can be obtained from $\rho$ by a sequence of steps, such that
in each step one box is dropped to the row below it, in a way that
leaves a Young diagram. Cesi \cite{C09a} remarked that it would be
nice if Aldous' order were identical to the domination order, but also
noted a counterexample in $n=4$: one has
${\tiny\yng(2,2)}\vartriangleright{\tiny\yng(2,1,1)}$ but
${\tiny\yng(2,2)}\nsucc{\tiny\yng(2,1,1)}$. See
\cite[Counterexample 8.2]{C09a}. Such a counterexample exists for
every $n\ge 4$: we will see below in corollary \ref{cor:asympval},
page \pageref{cor:asympval}, that $[2,2,1^{n-4}]\nsucc
[2,1^{n-2}]$, although clearly $[2,2,1^{n-4}] \vartriangleright [2,1^{n-2}]$. The graph demonstrating this is a star. Shannon Starr
discovered numerically another asymptotic family of counterexamples
for even sizes, $[n+1,n-1]\nsucc[n,n]$ (private communication, checked
numerically up to size 14, $n=7$). In this case the graph
demonstrating it is the cycle. We remark also that the results of
Diaconis \& Shahshahani \cite{DS81} imply that if $\sigma\vartriangleleft\tau$
then $\sigma\nsucc\tau$. We do not know if it is generally true that
if two representations are $\vartriangleleft$-incomparable then they are
also $\prec$-incomparable, so we cannot quite state that $\prec$ is a
sub-order of $\vartriangleleft$, though it is a very natural conjecture.
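The domination order itself is computable from prefix sums of the row lengths. A short sketch (names ours) that reproduces Cesi's pair $[2,2]\vartriangleright[2,1,1]$ and exhibits an incomparable pair:

```python
def dominates(rho, sigma):
    """Domination order on partitions of the same n: rho dominates sigma
    iff every prefix sum of rho is >= the corresponding prefix sum of sigma."""
    assert sum(rho) == sum(sigma)
    s_r = s_s = 0
    for k in range(max(len(rho), len(sigma))):
        s_r += rho[k] if k < len(rho) else 0
        s_s += sigma[k] if k < len(sigma) else 0
        if s_r < s_s:
            return False
    return True
```

For instance $(3,1,1,1)$ and $(2,2,2)$ are incomparable: the prefix sums $3,4,5$ versus $2,4,6$ fail in both directions.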
\begin{figure}
\caption{\label{fig:ao}
\label{fig:ao}
\end{figure}
Despite the similarity with the completely explicit
$\vartriangleleft$, it is not easy to prove any entry in the Aldous
order. In fact, the only entries in the Aldous order
known previous to this paper are $[n]\succeq\rho\succeq [1,\dotsc,1]$
for all $\rho$ (this is easy once one identifies these
representations, see corollary \ref{cor:n1n}, page \pageref{cor:n1n}), a result of Bacher \cite{B94} that the
hook-shaped diagrams are ordered among themselves
\[
[n]\succ [n-1,1]\succ [n-2,1^2]\succ \dotsb \succ [2,1^{n-2}]
\succ [1^n],
\]
(we fill some details about this in the appendix, page
\pageref{sec:gamma}) and of course the Caputo et al.\ result,
$[n-1,1]\succ\rho$ for all $\rho\ne [n],[n-1,1]$.
We may now state our result.
\begin{thm*}\label{thm:ao}
Let $n\ge 4k^2+4k$. Let $\tau$ be an irreducible representation whose Young
diagram has $\ge n-k$ boxes in the first row; and let $\sigma$ be an
irreducible
representation whose Young diagram has $\ge n-k$ boxes in
the leftmost column. Then $\tau\succ\sigma$.
\end{thm*}
\noindent Again we see the relationship with $\vartriangleright$. What we show
is that if ``$\tau \vartriangleright\!\!> \sigma$'' i.e.\ if $\tau$
is much larger in the domination order than $\sigma$, then
$\tau\succ\sigma$.
Let us end this introduction returning to the work of Diaconis \&
Shahshahani and to the mixing time of Markov chains. Better
understanding of Aldous' order will allow one to extend their results to
other graphs. Of particular interest are the hook-shaped
representations because for them all eigenvalues are explicitly known
\cite{B94}. Determining which representations are $\prec$ a given
hook-shaped representation will allow one to quickly estimate their
contribution to the mixing of the process. Let us formulate some modest
questions
\begin{ques*}
Describe the representations $\sigma \prec[n-2,1^2]$. The natural
generalization of Aldous' conjecture is that $\sigma
\vartriangleleft [n-2,1^2]$ implies $\sigma\prec[n-2,1^2]$. Is this
true? If not, maybe there exists some absolute constant $K$ such that any
$\sigma\vartriangleleft [n-K,K-1,1]$ satisfies $\sigma\prec [n-2,1^2]$?
\end{ques*}
There are of course many other natural questions about this order. How
many entries does it have? What is the longest chain? Is it really a
subset of the domination order? What
is the longest chain in the domination order which is completely
incomparable in the Aldous order? etc. The simulation results seem to
indicate that $\sigma\succ\rho\;\triangleright\;\tau$ implies
$\sigma\succ\tau$. We see no particular reason for this to be true,
but if it is it would be interesting. We chose to highlight the question
above because we believe it has relevance to questions which do not
need representation theory to state, e.g.~mixing time and the
quantum Heisenberg ferromagnet (see \cite{AK10} for the latter).
\section{Some representation theory}\label{sec:rep}
This section will contain only the minimal set of facts needed for the
paper. For a thorough introduction to the topic see one of the books
\cite{FH91,G93,JK81,S00} and the influential paper \cite{OV96}.
A representation of $S_n$ is a group homomorphism $\rho:S_n\to \GL(V)$ where
$V$ is some linear space over $\mathbb{C}$ and $\GL(V)$ is the space of all
linear transformations of $V$. We will assume throughout that $V$ is
finite dimensional. It can be assumed \cite[Theorem 1.5.3]{S00} that
$\rho(g)$ is a unitary matrix, and so we will assume this a-priori for
all our representations. We denote $\dim\rho=\dim V$.
Given two representations $\rho_i:S_n\to\GL(V_i)$, $i=1,2$ one may
construct their direct sum $\rho_1\oplus\rho_2:S_n\to\GL(V_1\oplus
V_2)$ by
\[
(\rho_1\oplus\rho_2)(g)=\left(\begin{array}{cc}
\rho_1(g) & 0\\
0 & \rho_2(g)
\end{array}\right).
\]
On the other hand, if there is a decomposition $V=V_1\oplus V_2$ such that
for any $g\in S_n$, $\rho(V_i)\subset V_i$ for $i=1,2$ then one may
construct $\rho_i(g)=\rho(g)|_{V_i}$ and get
$\rho\cong\rho_1\oplus\rho_2$. If no such decomposition exists we say
that $\rho$ is irreducible. Every representation can be written as a
direct sum of irreducible representations, and the isomorphism classes of
the factors are unique up to order
\cite[Proposition 1.7.10]{S00}. Recall also Schur's lemma which states
that a linear map from an irreducible $V$ to $V$ which commutes with
the action of every $g\in S_n$ is a constant multiple of the identity
\cite[Corollary 1.6.8]{S00}.
\subsection{Young diagrams}
We will use one specific method that constructs all the irreducible
representations as explicit subspaces of the group ring $R$. The
construction is somewhat abstract, but we will only need a few
properties which will be easy to deduce, and we will do so in lemma
\ref{lem:Q} and (\ref{eq:reverse}) below and forget about the actual
definition of the representations.
Recall that the group ring ${\mathbb R}[S_n]$ is simply the collection of all formal
sums $\sum_{g\in S_n} a_g g$ with coefficients $a_g\in{\mathbb R}$. We will
denote $R={\mathbb R}[S_n]$. $S_n$ acts
on $R$ by
\[
h\left(\sum a_g g\right)=\sum a_g hg
\]
which makes $R$ into a (left) representation known as the
\emph{regular representation}. It is true generally for any finite group
that any irreducible representation can be embedded into the regular
representation \cite[Proposition 1.10.1]{S00}. For a general finite
group this requires to work over $\mathbb{C}$, but as we will see shortly, the
specific structure of the representation of $S_n$ allows to work over
${\mathbb R}$, which is more natural in our setting.
Let now $\tau_1\ge\tau_2\ge\dotsb\ge\tau_m$ with $\sum\tau_i=n$. We define $H$ to be the
group of permutations $h$ that preserve the rows of the diagram
$[\tau_1,\dotsc,\tau_m]$ in the sense that
\begin{equation}\label{eq:DefH}
i\in [1,\tau_1] \Rightarrow h(i)\in[1,\tau_1];\quad
i\in [\tau_1+1,\tau_1+\tau_2] \Rightarrow
h(i)\in[\tau_1+1,\tau_1+\tau_2]
\quad \mbox{etc.}
\end{equation}
Let $V$ be the group of permutations that preserve the columns of the
diagram $[\tau_1,\dotsc,\tau_m]$ e.g.~any $v\in V$ must preserve the
set $\{1,\tau_1+1,\tau_1+\tau_2+1,\dotsc,n-\tau_m+1\}$. We define the following
elements of the group ring $R$,
\[
a_\tau=\sum_{h\in H}h \quad b_\tau=\sum_{v\in V}\sign(v)v\quad
c_\tau=a_\tau b_\tau\,.
\]
Then the representation $[\tau_1,\dotsc,\tau_m]$ is
defined to be $Rc_\tau=\{rc_\tau:r\in R\}$, with the group acting by
multiplication from the left. This is a subspace of $R$
which is easily seen to be closed under the action of $S_n$. By
\cite[Theorem 4.3]{FH91} these representations are
irreducible and exhaust all the irreducible representations of $S_n$.
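As a concrete check of the construction (a sketch, with all names ours), one can assemble $c_\tau$ for $\tau=[2,1]$ inside ${\mathbb R}[S_3]$ and verify that the left ideal $Rc_\tau$ is $2$-dimensional, matching the standard representation:

```python
from itertools import permutations
from fractions import Fraction

def compose(g, h):
    """(g h)(i) = g(h(i)); permutations as tuples of images, 0-indexed."""
    return tuple(g[h[i]] for i in range(len(h)))

def inverse(g):
    inv = [0] * len(g)
    for i, gi in enumerate(g):
        inv[gi] = i
    return tuple(inv)

def sign(g):
    s, seen = 1, set()
    for i in range(len(g)):
        j, ln = i, 0
        while j not in seen:
            seen.add(j)
            j, ln = g[j], ln + 1
        s *= (-1) ** max(ln - 1, 0)
    return s

def rank(rows):
    """Rank over the rationals by Gaussian elimination."""
    rows, r = [list(row) for row in rows], 0
    for col in range(len(rows[0])):
        piv = next((i for i in range(r, len(rows)) if rows[i][col] != 0), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        for i in range(len(rows)):
            if i != r and rows[i][col] != 0:
                f = rows[i][col] / rows[r][col]
                rows[i] = [a - f * b for a, b in zip(rows[i], rows[r])]
        r += 1
    return r

def young_module_dim():
    """dim R c_tau for tau = [2,1] in S_3, with row group H = {id, (1 2)}
    and column group V = {id, (1 3)}; the expected value is 2."""
    G = list(permutations(range(3)))
    idx = {g: k for k, g in enumerate(G)}
    H = [(0, 1, 2), (1, 0, 2)]          # preserves rows {1,2}, {3}
    V = [(0, 1, 2), (2, 1, 0)]          # preserves columns {1,3}, {2}
    c = [Fraction(0)] * len(G)
    for h in H:
        for v in V:
            c[idx[compose(h, v)]] += sign(v)
    # coefficient of y in g*c is c[g^{-1} y]; span these vectors over g in S_3
    span = [[c[idx[compose(inverse(g), y)]] for y in G] for g in G]
    return rank(span)
```

Here $c_\tau = \id + (12) - (13) - (132)$, and the six left translates $g\,c_\tau$ span a $2$-dimensional subspace of ${\mathbb R}[S_3]$, as expected for $[2,1]$.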
To shed a little light on the definition let us take two
examples. The first is $[n]$. In this case $H=S_n$ and
$V=\{\id\}$. Hence $c_\tau=\sum_{g\in S_n}g$ so $hc_\tau=c_\tau$ for
any $h\in S_n$. This means that $[n]$ is one-dimensional with a
trivial action of $S_n$. This representation is also known as the
\emph{trivial representation}. A second example is $[1^n]$. In
this case $H=\{\id\}$ and $V=S_n$ so this time $c_\tau=\sum_{g\in
S_n}\sign(g)g$. We get that $hc_\tau=\sign(h)c_\tau$ so this
representation is also one-dimensional, but this time the action of
$S_n$ is by multiplication with the sign of the permutation, the
so-called \emph{sign representation}, which we will denote by
$\mathbf{sgn}$. In general, if $\tau$ is any Young diagram and if $\tau'$ is
the diagram one gets by reflecting $\tau$ along the main diagonal (so
that the lengths of the rows of $\tau$ become the lengths of the columns
of $\tau'$) then
\begin{equation}\label{eq:reverse}
[\tau']=[\tau]\otimes\mathbf{sgn}\,.
\end{equation}
See \cite[2.1.8]{JK81}.
Now, $Ra_\tau$ is also a representation. Generally it is reducible,
but it is much more convenient to work with. Indeed, $ga_\tau$ is
simply $\sum_{h\in gH}h$ so $Ra_\tau$ is isomorphic to the natural
action of $S_n$ on the set of cosets $\{gH:g\in S_n/H\}$. Further,
each coset can be thought of as a coloring of $n$ by $m$ colors with
exactly $\tau_1$ numbers colored in the first color, exactly $\tau_2$
numbers colored in the second color, etc. Formally we define
\[
Q=Q(\tau)=\{q:\{1,\dotsc,n\}\to\{1,\dotsc,m\}:\#q^{-1}(i)=\tau_{i}\}\,.
\]
and let $L^2(Q)$ be a representation of $S_n$ with the natural
action $(gq)(i)=q(g^{-1}i)$. We will mainly work with these representations, and we relate
them to the irreducible ones by
\begin{lem}\label{lem:Q}
Let $\sigma_1\ge\dotsc\ge\sigma_m$ with $\sum \sigma_i=n$. Then
\begin{enumerate}
\item $[\sigma_1,\dotsc,\sigma_m]$ can be embedded in $L^2(Q(\sigma))$.
\item For any $q\in Q$ there is a non-zero element of this embedding which is
invariant under any permutation $\phi$ that preserves the coloring
$q$ i.e.~to any $\phi$ for which $q(\phi(i))=q(i)$ for all $i$.
\end{enumerate}
\end{lem}
For example, for $[n-1,1]$ we have that $m=2$ and an element $q\in Q$
is uniquely
identified by $q^{-1}(2)$ which is an element of
$\{1,\dotsc,n\}$. Hence $|Q|=n$ and $L^2(Q)$ can be thought of as
${\mathbb R}^n$ with $S_n$ acting by permutation matrices
(this representation is known as the \emph{standard
representation} \label{pg:stdrep} of
$S_n$). Clearly the constant vectors form a one-dimensional invariant
subspace, and so is their orthogonal complement, the vectors whose
entries sum to 0. It is not difficult to see (directly from the
definition) that both are irreducible representations. The first is
the trivial, hence the second is $[n-1,1]$.
Now, the second clause of the lemma in this example is as follows. Take some $q\in Q$
i.e.~$q(i)=1$ for all $i\in\{1,\dotsc,n\}$ except one $k$ for which
$q(k)=2$. A permutation $\phi$ preserves $q$ if and only if
$\phi(k)=k$. An element of $L^2(Q)$ invariant to any such $\phi$ must
be constant on $\{1,\dotsc,n\}\setminus\{k\}$, and the only such
element (up to multiplication by constants) in the subspace isomorphic to $[n-1,1]$ is
$(1,\dotsc,1,1-n,1,\dotsc,1)$ where the position of the negative entry
is $k$ (we are not interested in the uniqueness, only in the existence).
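The properties just used, that this vector sums to zero and transforms correctly under permutations, take only a few lines to verify (a sketch, names ours):

```python
def invariant_vector(n, k):
    """Entries 1 everywhere except 1 - n at position k (0-indexed); it sums to
    zero, hence lies in the copy of [n-1,1] inside R^n, and is fixed by every
    permutation fixing k."""
    return [1 - n if i == k else 1 for i in range(n)]

def act(g, v):
    """(g v)(i) = v(g^{-1} i) for a permutation g given as a tuple of images."""
    w = [0] * len(v)
    for i, gi in enumerate(g):
        w[gi] = v[i]
    return w
```

A permutation fixing $k$ leaves the vector unchanged, while a general $g$ carries it to the corresponding vector for $g(k)$.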
We will prove lemma \ref{lem:Q}
immediately after this simple claim
\begin{lem} \label{lem:sum} Let $\rho:S_n\to\GL(V)$ be a representation and let
$V_1,\dotsc,V_m$ be subspaces of $V$ invariant under the action of
$S_n$. Let $W$ be an irreducible component of $\sum V_i$. Then $W$
is isomorphic to a component of one of the $V_i$.
\end{lem}
\begin{proof} Clearly it is enough to prove this for just two
subspaces $V_1$ and $V_2$. Denote $U=V_1\cap V_2$. Then $U$ is
invariant under the action of $S_n$. Since every invariant subspace is
complemented \cite[Proposition 1.5.2]{S00} we can write $V_i=U\oplus
U_i$ and $V_1+V_2 = U\oplus U_1\oplus U_2$. The lemma now follows by
the uniqueness of decomposition into irreducible representations.
\end{proof}
\begin{proof}[Proof of lemma \ref{lem:Q}] Examine
\[
Ra_\sigma R=\sum_{g\in S_n} Ra_\sigma g\,.
\]
It is easy to check that each $Ra_\sigma g$ is a representation which is
isomorphic to $Ra_\sigma$ ($g$ is an invertible element of the group
ring $R$). Since $[\sigma]\cong Rc_\sigma\subset Ra_\sigma R$
we get by lemma \ref{lem:sum} that $[\sigma]$ can be embedded into
$Ra_\sigma$. By the discussion before the statement of the lemma,
$Ra_\sigma\cong L^2(Q)$ so the first claim of lemma \ref{lem:Q} is
proved.
For the second claim, note that the permutations $\phi$ as above form
a group, which we will denote by $H_q$. Now, it is not important which $q$
one takes, since if $v$ is invariant to the action of $H_q$ then
$gv$ is invariant to the action of $gH_qg^{-1}$; and
$gH_qg^{-1}=H_{gq}$ which can give any $q'\in Q$. So we will verify
the claim for the $H$ defined in (\ref{eq:DefH}). But in this case it
is clear that $c_\sigma$ itself is invariant to the action of $H$. Since
the property of existence of a vector invariant to $H$ is an abstract
property of a representation, then it is not important that the
vector $c_\sigma$ is not necessarily in $Ra_\sigma$ but just in some
isomorphic copy. The lemma is thus proved.
\end{proof}
It is interesting to note that the irreducible components of $L^2(Q)$
are known and are all $[\tau]$ with $\tau\trianglerighteq\sigma$
\cite[Lemma 2.1.10]{JK81}. But we will not use it.
\section{Star graphs and the Gelfand-Tsetlin basis}
For $n\geq k \geq 1$, let $K_{n,k}$ be the graph with vertices
$\{1,\dotsc,n\}$ where all the vertices $\{1,\dotsc,k\}$ are connected
with each other, and the remaining $n-k$ vertices are isolated. By
abuse of notation, we will also denote by $K_{n,k}$ the adjacency
matrix of this graph. For any matrix $A$ we denote $\wt(A)=\sum_{i<j}a_{i,j}$.
We put $\str_{n,k}=K_{n,k}-K_{n,k-1}$, a star graph having the vertex
$k$ connected with each of $1,\dotsc,k-1$. We remark that as elements
of the group ring, $\str_{n,k}$ are known as the Jucys-Murphy elements.
In this section, we shall find all the eigenvalues of
$\sigma(\Delta_{\str_{n,k}})=\sum_{i<k}\left(\sigma(\id)-\sigma((ik))\right)$ for any
irreducible representation $\sigma$ of $S_n$. These will serve as
useful examples, but more
importantly will be used in the proof of the main theorem in section
\ref{sec:main}. We will use the Gelfand-Tsetlin basis, an idea also
used in \cite{D09}.
Now, the theory of Gelfand-Tsetlin pairs and bases is very deep with
many analogs in different categories (see e.g.~\cite{M06} for a
survey) but we will not need any of it here. For our purposes it is
enough to define the basis inductively as follows: If $n=1$ then the
representation must be one dimensional and we take a nonzero vector as
the basis vector. For $n>1$, we consider the natural embedding
$S_{n-1}\hookrightarrow S_n$ whose image is $\{\pi \in S_n:
\pi(n)=n\}$. We decompose the restriction $\sigma|_{S_{n-1}}$ into
irreducible representations of $S_{n-1}$ as $\bigoplus_i V_i$, take the Gelfand-Tsetlin basis of each $V_i$ and define our basis as the union of the resulting bases.
If $\sigma=[\alpha]$, where $\alpha$ is a Young diagram of size $n$,
then $\sigma|_{S_{n-1}}= \bigoplus_{\beta}[\beta]$, where
$\beta$ goes over all the Young diagrams of size $n-1$, obtained by
removing, in all possible ways, one box from $\alpha$. See \cite[\S 2.8]{S00}.
Hence, the elements of the Gelfand-Tsetlin basis of $[\alpha]$ are in one-to-one correspondence with the sequences $\alpha=\alpha_n, \alpha_{n-1},\dotsc, \alpha_1 = \tiny\yng(1)$ of Young diagrams, in which each $\alpha_i$ for $i<n$ is obtained from $\alpha_{i+1}$ by removing one box.
\begin{lem} \label{lem9} Let $G=\str_{n,k}$ for some $2\leq k \leq n$, and let $\sigma$ be an irreducible representation of $S_n$. \begin{enumerate} \item The Gelfand-Tsetlin basis of $\sigma$ is a basis of eigenvectors for $e_G$ acting on $\sigma$. \item Let $\alpha=\alpha_n, \alpha_{n-1},\dotsc, \alpha_1 = \tiny\yng(1)$ be a sequence of Young diagrams, where $\alpha_{i-1}$ is obtained from $\alpha_i$ by removing the $x_i$th box at the $y_i$th row, and $\sigma=[\alpha_n]$. Let $v$ be the Gelfand-Tsetlin basis element corresponding to the above sequence. Then, the eigenvalue of $\Delta_G$ with respect to $v$ is $(k-1)+y_k-x_k$. \end{enumerate} \end{lem}
\begin{proof}
(1) Let us first prove that each vector in the Gelfand-Tsetlin basis of
$\sigma$ is an eigenvector for each of
$K_{n,1},K_{n,2},\dotsc,K_{n,n}$. We will do it by induction on
$n$. For $n=1$ the claim is vacuous. For $n>1$, we decompose
$\sigma|_{S_{n-1}}$ into irreducible representations and using the
induction hypothesis, conclude that the Gelfand-Tsetlin basis vectors
are eigenvectors of $K_{n,1},\dotsc,K_{n,n-1}$. As for $K_{n,n}$, the
element $\Delta_{K_{n,n}}$ lies in the center of $\mathbb{R}[S_n]$ (being a linear
combination of the sum of all transpositions and the identity). Hence,
by Schur's lemma it acts as a scalar on each irreducible representation of $S_n$. This finishes the inductive step. Hence, each vector of the Gelfand-Tsetlin basis is an eigenvector of $G=K_{n,k}-K_{n,k-1}$.
(2) Let us now find the scalar by which $K_{n,i}$ acts on the
irreducible $S_i$-representation $[\alpha_i]$, assuming that
$\alpha_i$ has row lengths $l_1,\dotsc,l_m$. For that matter, we
invoke the trace formula from \cite[Lemma 7]{DS81}, by which the trace of a transposition acting on $[\alpha_i]$ is
$$ \frac{\dim [\alpha_i]}{\binom{i}{2}}\sum_{j=1}^m\left (\binom{l_j-j+1}{2}-\binom{j}{2}\right)$$
where we define $\binom{x}{2}=\frac{1}{2}x(x-1)$ also for $x<2$. Hence, $\Delta_{K_{n,i}}=\sum_{1\leq j<k\leq i}(\id-(jk))$ acts on $[\alpha_i]$ via the scalar
$$ c_i := \wt(K_{n,i})-\sum_{j=1}^m\left (\binom{l_j-j+1}{2}-\binom{j}{2}\right).$$
Let us find the eigenvalue of $\Delta_G=\Delta_{K_{n,k}}-\Delta_{K_{n,k-1}}$ with respect to our basis vector $v$. This eigenvalue is equal to $c_k-c_{k-1}$, and we will now simplify it.
Recall that passing from $\alpha_k$ to $\alpha_{k-1}$ is done by
removing the rightmost box from the $y_k^\textrm{th}$ row. We
distinguish two cases.
\emph{Case 1:} $\alpha_k$ and
$\alpha_{k-1}$ have the same number of rows. In that case, all the
summands except the $y_k^\textrm{th}$ one cancel out, and we are
left with
\begin{align*}
\wt(\str_{n,k})-\left(\binom{l_{y_k}-y_k+1}{2} - \binom{l_{y_k}-y_k}{2}\right)
&= \wt(\str_{n,k}) - (l_{y_k}-y_k) \\ &= k-1-(x_k-y_k).
\end{align*}
\emph{Case 2:} $\alpha_{k-1}$ is obtained from $\alpha_k$ by removing the unique box in the $y_k^\textrm{th}$ row. In that case, we have $l_{y_k}=1$, and the scalar is
\begin{align*}
\wt(\str_{n,k})&-\left(\binom{l_{y_k}-y_k+1}{2}-\binom{y_k}{2}\right)
=
\wt(\str_{n,k})-\left(\binom{2-y_k}{2}-\binom{y_k}{2}\right)= \\
& = \wt(\str_{n,k})-(1-y_k) = k-1-(x_k-y_k).
\end{align*}
Hence, $\Delta_{\str_{n,k}}v= (k-1-(x_k-y_k))v$, and the result follows.
\end{proof}
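Lemma \ref{lem9} can be sanity-checked numerically without constructing the irreducible representations themselves: on the regular representation of $S_3$, the spectrum of $\Delta_{\str_{3,3}}$ must be the multiset of values $(k-1)+y_k-x_k$ over all standard-tableau paths, each eigenvalue repeated $\dim\sigma$ times. The following stdlib-only Python sketch (our illustration, not part of the paper) compares traces of powers instead of diagonalizing:

```python
from itertools import permutations

# Regular representation of S_3: index group elements, build the matrix of
# Delta_str(3,3) = (id - (1 3)) + (id - (2 3)) acting by left multiplication.
elems = list(permutations(range(3)))          # the 6 elements of S_3
idx = {g: i for i, g in enumerate(elems)}

def compose(g, h):                            # (g o h)(x) = g(h(x))
    return tuple(g[h[x]] for x in range(3))

def transposition(a, b, n=3):
    t = list(range(n)); t[a], t[b] = t[b], t[a]
    return tuple(t)

n = 6
M = [[0] * n for _ in range(n)]
for t in (transposition(0, 2), transposition(1, 2)):   # (1 3) and (2 3)
    for j, h in enumerate(elems):
        M[idx[h]][j] += 1                      # + id
        M[idx[compose(t, h)]][j] -= 1          # - transposition

# Predicted spectrum from lemma 9, eigenvalue (k-1) + y_k - x_k per standard
# tableau path, each repeated dim(sigma) times; for k = n = 3 this gives:
predicted = [0, 1, 1, 3, 3, 4]

def matmul(A, B):
    return [[sum(A[i][l] * B[l][j] for l in range(n)) for j in range(n)]
            for i in range(n)]

# Two integer matrices have the same spectrum with multiplicity iff the
# traces of their first n powers agree; check tr(M^p) against the prediction.
P = [[int(i == j) for j in range(n)] for i in range(n)]
for p in range(6):
    assert sum(P[i][i] for i in range(n)) == sum(l ** p for l in predicted)
    P = matmul(P, M)
print("spectrum of Delta_str(3,3) on the regular representation:", predicted)
```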
\begin{corollary}\label{cor:asympval}For any $n$, $[2,2,1^{n-4}]\nsucc
[2,1^{n-2}]$.
\end{corollary}
\begin{proof} Examine the star graph $G=\str_{n,n}$. For a Young diagram $\alpha$ we get that
$\lambda_1(G;[\alpha])=\min\{ (n-1)+y-x\}$ where the minimum is taken over
all boxes $(x,y)$ which can be removed from $\alpha$ to get a legal Young
diagram. For $[2,2,1^{n-4}]$ these boxes are $(2,2)$ and (if $n>4$)
$(1,n-2)$. The minimum is achieved at $(2,2)$ so
$\lambda_1(G;[2,2,1^{n-4}])=n-1$. For $[2,1^{n-2}]$ the minimum is
achieved at $(2,1)$ giving that $\lambda_1(G;[2,1^{n-2}])=n-2$.
\end{proof}
Since $[2,2,1^{n-4}] \vartriangleright [2,1^{n-2}]$ this shows that
$\succ$ and $\vartriangleright$ differ for every $n$. Similarly one
can show that $[2^i,1^{n-2i}]\nsucc[2^j,1^{n-2j}]$ for any $j< i \le n/2$
giving a chain of length $\lfloor n/2\rfloor$ in the domination
order, no two elements of which are comparable in the Aldous order.
\subsection{Quasi-complete graphs} Let $a_2,a_3,\dotsc,a_n$ be non-negative numbers and examine
\[
\sum_{k=2}^n a_k \str_{n,k}
\]
This is not just any combination of stars, but stars formed in a special order, each added vertex connected to all existing ones. We call such graphs quasi-complete graphs. See figure \ref{fig:qc}.
\begin{figure}
\caption{A quasi-complete graph.}
\label{fig:qc}
\end{figure}
Since the Gelfand-Tsetlin basis of $[\alpha]$ is a basis of eigenvectors for
$\str_{n,k}$ for all $k$, it is also a basis of eigenvectors for any
linear combination of them. Further, the basis element corresponding to a sequence $\alpha_n,\dotsc,\alpha_1$ as in lemma \ref{lem9} gives the eigenvalue
\begin{equation}\label{eq:qc}
\wt(G)-\sum_{k=2}^n a_k (x_k-y_k).
\end{equation}
Let us make three remarks about quasi-complete graphs:
1. Taking the $a_k$ to decrease very rapidly ($a_k=n^{-2k}$ is good enough), it is easy to see that the minimal eigenvalue is achieved when the boxes are removed as follows: first remove the lowest row completely, then the second lowest row completely, etc. This shows that if $\alpha < \beta$ in the lexicographical order then $\alpha \nsucc \beta$. As a corollary we get that $\alpha \preceq \beta$ and $\beta\preceq\alpha$ imply $\alpha=\beta$.\label{pg:precsucceq}
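The greedy claim of remark 1 can be brute-forced on small diagrams: enumerate every removal sequence, score it by the sum appearing in (\ref{eq:qc}) with $a_k=n^{-2k}$, and check that the minimal eigenvalue peels the lowest row first. A Python sketch (ours, not the paper's; the shape $(3,2,1)$ and exact rational weights are our choices):

```python
from fractions import Fraction

def removal_sequences(shape):
    """All ways to peel a Young diagram down to a single box; yields the
    list of removed boxes (x = column, y = row), from size n down to 2."""
    if sum(shape) == 1:
        yield []
        return
    rows = list(shape)
    for y in range(len(rows)):
        # a box is removable from row y if the next row is strictly shorter
        if y == len(rows) - 1 or rows[y] > rows[y + 1]:
            box = (rows[y], y + 1)            # (x, y), 1-indexed
            new = rows[:]; new[y] -= 1
            if new[y] == 0: new.pop()
            for rest in removal_sequences(tuple(new)):
                yield [box] + rest

shape = (3, 2, 1)                             # a small test diagram, n = 6
n = sum(shape)
a = {k: Fraction(1, n ** (2 * k)) for k in range(2, n + 1)}

def score(seq):
    # eigenvalue = wt(G) - score, so the minimal eigenvalue maximizes this;
    # removing box (x_k, y_k) passes from size k to size k-1
    return sum(a[k] * (x - y) for (x, y), k in zip(seq, range(n, 1, -1)))

best = max(removal_sequences(shape), key=score)
# Greedy claim: remove the lowest row first, then the next lowest, etc.
assert best == [(1, 3), (2, 2), (1, 2), (3, 1), (2, 1)]
print("optimal removal order:", best)
```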
2. This family of graphs is not rich enough to determine the Aldous order.
For example, take the fact that $[n+1,n-1]\nsucc [n,n]$ which can be verified for small $n\ge 3$ by direct calculation for the circle graph (we have no proof that it holds for all $n$ but this is not relevant at this point). Quasi-complete graphs cannot demonstrate this fact, because any sequence of Young diagrams as in lemma \ref{lem9} for $[n,n]$ must start by removing the box at $(n,2)$, while a sequence for $[n+1,n-1]$ \emph{may} start by removing the box at $(n+1,1)$, and continue by mimicking the first sequence, since in both cases $\alpha_{2n-1}=[n,n-1]$. So we see that
\[
\lambda_1(G;[n+1,n-1])\le \lambda_1(G;[n,n])
\]
for any quasi-complete graph $G$.
3. As an approximation to the Aldous order, one may ask whether two Young diagrams $\sigma, \tau$ of size $n$ satisfy $\lambda_1(G; \sigma) \leq \lambda_1(G; \tau)$ for all quasi-complete graphs $G$. Using formula (\ref{eq:qc}), the answer can be put in terms of the following combinatorial game: Let players $A$ and $B$ get the Young diagrams $\sigma$ and $\tau$, respectively. Both players fill out their diagrams as follows: On each square at position $(i,j)$ they write the number $j-i$.
The game has $n$ steps. At each step of the game, player $B$ breaks off a square from his diagram, in a way that leaves a legal Young diagram, and announces the number on that square. Then, player $A$ does the same. We say that player $A$ wins the game if at each step, her number is no less than player $B$'s number.
It is not hard to see that player $A$ has a winning strategy if, and
only if, $\lambda_1(G; \sigma) \leq \lambda_1(G; \tau)$ for all
quasi-complete graphs $G$.
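The game above is finite with perfect information, so whether player $A$ has a winning strategy can be decided by a memoized search over pairs of diagrams. A Python sketch (our illustration, not from the paper; diagrams are encoded as tuples of row lengths, and the number written on a box is its content, column minus row):

```python
from functools import lru_cache

def moves(shape):
    """Removable corner boxes of a Young diagram given by row lengths,
    together with the number written on them (content = column - row)."""
    out = []
    for r, length in enumerate(shape):
        if r == len(shape) - 1 or length > shape[r + 1]:
            new = list(shape); new[r] -= 1
            if new[r] == 0: new.pop()
            out.append((length - (r + 1), tuple(new)))  # (number, new shape)
    return out

@lru_cache(maxsize=None)
def a_wins(sigma, tau):
    """True iff player A (holding sigma) can always answer player B
    (holding tau) with a number >= B's, down to the empty diagram."""
    if sum(tau) == 0:
        return True
    return all(any(ca >= cb and a_wins(sa, ta)
                   for ca, sa in moves(sigma))
               for cb, ta in moves(tau))

# Mirror strategy: A always wins against her own diagram.
assert a_wins((3, 2), (3, 2))
# The one-row diagram [n] always holds the largest available number...
assert a_wins((4,), (2, 1, 1))
# ...while [1^n] can never answer the opening number n-1 of [n].
assert not a_wins((1, 1, 1), (3,))
```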
\section{Proof of the main theorem}\label{sec:main}
The proof requires that we examine closely the \emph{maximal}
eigenvalue of $\sigma(\Delta_A)$ (which is of course also its norm as an
$l^2$ operator, as this is a positive matrix). We denote it by
\[
\lambda_{\max}(A;\sigma).
\]
We keep from the previous section the notation
$\wt(A)=\sum_{i<j}a_{i,j}$. For $k<n$,
we denote by $\mathcal{S}_{k}$ the set of representations
corresponding to Young diagrams of size $n$ such
that the first row has $\geq n-k$ boxes (so all the other rows
have, combined, $\leq k$ boxes). Denote
\[
\mathcal{S}_k\otimes\mathbf{sgn} = \{\sigma\otimes\mathbf{sgn} :
\sigma\in\mathcal{S}_k\}
\]
which by (\ref{eq:reverse}) is also the set of representations
corresponding to Young diagrams of size $n$ such that the leftmost
column has $\geq n-k$ boxes.
As in the previous section, we will use ``matrix'' and ``weighted graph''
interchangeably, understanding that all our matrices have non-negative
entries, and that the corresponding graph has an edge at any $(i,j)$
for which $a_{i,j}\ne 0$. In particular when we subtract graphs and
weighted graphs, we are in fact subtracting the corresponding
matrices. Let us start with two standard facts which we prove for the
convenience of the reader.
\begin{lem}
For any $A$ with entries $a_{ij}$ and any representation $\sigma$
\begin{equation}
\lambda_{\max}(A;\sigma)=2\wt(A)-\lambda_1(A;\sigma \otimes \mathbf{sgn}). \label{eq:dual}
\end{equation}
and
\begin{equation}
\lambda_{\max}(A;\sigma)\le2\wt(A)\label{eq:triv}
\end{equation}
\end{lem}
\begin{proof}
The action of $\Delta_A=\sum_{i<j}a_{ij}(\id-(ij))$ on $\sigma$ is linearly isomorphic to the action of $$\sum_{i<j}a_{ij}(\id+(ij))=2\wt(A)-\Delta_A$$ on $\sigma \otimes \mathbf{sgn}$. This gives (\ref{eq:dual}). The second part follows immediately since $\lambda_1(A;\sigma \otimes \mathbf{sgn}) \geq 0$.
\end{proof}
\begin{corollary}\label{cor:n1n} For any $\sigma$,
$[n]\succeq\sigma\succeq[1^n]$.
\end{corollary}
\begin{proof}Recall that $[n]$ is the trivial representation, so
$\lambda_1(A;[n])=0$ which shows $[n]\succeq\sigma$. In the other
direction, $[1^n]=\mathbf{sgn}$ so $\lambda_1(A;[1^n])=\lambda_{\max}(A;[1^n])=2\wt(A)\ge\lambda_{\max}(A;\sigma)\ge\lambda_1(A;\sigma)$.
\end{proof}
\begin{lem}
\label{lem:matching}Assume $n\geq4k$ and let $\sigma\in\mathcal{S}_{k}$.
Let $G$ be a graph with $2k$ disjoint edges (i.e.\ $4k$ vertices
have degree $1$, and the remaining $n-4k$ vertices are isolated).
Then\[
\lambda_{\max}(G;\sigma)\leq2k.\]
\end{lem}
\begin{proof} Recall the representation $L^2(Q)$ of lemma \ref{lem:Q},
namely if $\sigma_{1}\geq\dotsc\geq\sigma_{m}>0$ are the lengths of the
rows of (the Young diagram corresponding to) $\sigma$ then
\[
Q=Q(\sigma)=\{q:\{1,\dotsc,n\}\to\{1,\dotsc,m\}:\#q^{-1}(i)=\sigma_{i}\}.
\]
By lemma \ref{lem:Q} we know that the representation $\sigma$ can be
embedded in $L^{2}(Q)$. Hence it is enough to show that
\[
\lambda_{\max}(G;L^{2}(Q))\leq 2k.
\]
Let $f\in L^{2}(Q)$ be an eigenvector for $\lambda_{\max}$, and
let $q\in Q$ be the point where the maximum of $|f|$ is attained.
Fix one edge $(i,j)\in G$ and examine
\begin{equation}
((\id-(ij))f)(q)=f(q)-f((ij)q).\label{eq:oneij}
\end{equation}
If $q(i)=q(j)=1$ (recall that the elements $q$ of $Q$ are themselves
functions) then $(ij)q=q$ and (\ref{eq:oneij}) is zero. However,
by definition of $Q$ the number of $i$ such that $q(i)\neq1$ is
$\leq k$. Because every vertex of $G$ has degree $\leq1$, for
any $i$ there is at most one $j$ such that $(i,j)\in G$; hence
there are in total no more than $k$ edges $(i,j)\in G$ for
which (\ref{eq:oneij}) is non-zero. Each such term is bounded in absolute value by $2|f(q)|$, hence
\[
\Big|\sum_{(i,j)\in G}((\id-(ij))f)(q)\Big|\leq2k|f(q)|.
\]
Since $f$ is an eigenfunction of $\lambda_{\max}$, we also have \[
\sum_{(i,j)\in G}((\id-(ij))f)(q)=\lambda_{\max}f(q)\]
and the lemma is proved.
\end{proof}
\begin{lem}
\label{lem:onestar}Let $\sigma\in\mathcal{S}_{k}$ and let $G=\str_{n,l+1}$
i.e.~a star graph with $l$ edges and the rest of the vertices isolated. Then\[
\lambda_{\max}(G;\sigma)\leq l+k\]
\end{lem}
\begin{proof}
This is a corollary of lemma \ref{lem9}.
The eigenvector for the maximal eigenvalue corresponds to an element
of the Gelfand-Tsetlin basis, which corresponds to a sequence of boxes
$(x_i,y_i)$ of $\sigma$ as in lemma \ref{lem9}. Since $\sigma$ has no
more than $k+1$ rows, we have $y_i \leq k+1$ for all $i$, and we
always have $x_i \geq 1$. According to lemma \ref{lem9},
\[
\lambda_{\max}(G;\sigma)=((l+1)-1) + (y_{l+1}-x_{l+1})\leq l+k+1-1=l+k.
\qedhere\]
\end{proof}
\begin{lem}
\label{lem:weightedstar}Let $A$ be a weighted star, i.e.\ assume
there are some $a_{2}\geq\dotsb\geq a_{n}\geq0$ such that $(1,i)$
has weight $a_{i}$ but $(i,j)$ has weight $0$ when both $i>1$
and $j>1$. Let $\sigma\in\mathcal{S}_{k}$. Then\[
\lambda_{\max}(A;\sigma)\leq2a_{2}+\dotsb+2a_{k+1}+a_{k+2}+\dotsb+a_{n}.\]
\end{lem}
\begin{proof}
Write $A=A_{2}+\dotsb+A_{n}$ where $A_{i}$ is a weighted star with
weights
\begin{align*}
& \overbrace{a_{i}-a_{i+1},\dotsc,a_{i}-a_{i+1}}^{i-1\mbox{ times}},\!\!\!\overbrace{0,\dotsc,0}^{n-i\mbox{ times}} & i & =2,\dotsc,n-1\\
& \overbrace{a_{n},\dotsc,a_{n}}^{n-1\mbox{ times}} & i & =n.
\end{align*}
Since $\lambda_{\max}$ is a norm (recall that $\sigma(\Delta_A)$ is a
positive matrix so $\lambda_{\max}$ is its norm as an $l^2$ operator),
we have that
\[
\lambda_{\max}(A;\sigma)\le\sum_{i=2}^{n}\lambda_{\max}(A_{i};\sigma).
\]
Each summand may be estimated by lemma \ref{lem:onestar}, and we
get\[
\lambda_{\max}(A_{i};\sigma)\leq(i-1+k)(a_{i}-a_{i+1}).\qquad\mbox{(define }a_{n+1}:=0\mbox{)}\]
However, for $i\leq k$ we actually get a better estimate from the
trivial bound (\ref{eq:triv}),\[
\lambda_{\max}(A_{i};\sigma)\leq2(i-1)(a_{i}-a_{i+1}).\]
Summing we get
\begin{align*}
\lambda_{\max}(A;\sigma) & \leq\sum_{i=2}^{k}2(i-1)(a_{i}-a_{i+1})+\sum_{i=k+1}^{n}(i-1+k)(a_{i}-a_{i+1})=\\
& =\sum_{i=2}^{k+1}2a_{i}+\sum_{i=k+2}^{n}a_{i}
\end{align*}
as was to be proved.
\end{proof}
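The summation by parts at the end of the proof is easy to verify mechanically; the following Python sketch (ours, not the paper's) checks the identity with exact rational arithmetic for random non-increasing weights:

```python
from fractions import Fraction
from random import Random

# Check the summation-by-parts step in the weighted-star lemma:
#   sum_{i=2}^{k} 2(i-1)(a_i - a_{i+1}) + sum_{i=k+1}^{n} (i-1+k)(a_i - a_{i+1})
#     = sum_{i=2}^{k+1} 2 a_i + sum_{i=k+2}^{n} a_i,   with a_{n+1} := 0.
rng = Random(0)
for n in range(3, 9):
    for k in range(1, n):
        # random non-increasing non-negative weights a_2 >= ... >= a_n >= 0
        raw = sorted((Fraction(rng.randint(0, 20), 7) for _ in range(n - 1)),
                     reverse=True)
        a = {i: raw[i - 2] for i in range(2, n + 1)}
        a[n + 1] = Fraction(0)
        lhs = (sum(2 * (i - 1) * (a[i] - a[i + 1]) for i in range(2, k + 1))
               + sum((i - 1 + k) * (a[i] - a[i + 1])
                     for i in range(k + 1, n + 1)))
        rhs = (sum(2 * a[i] for i in range(2, k + 2))
               + sum(a[i] for i in range(k + 2, n + 1)))
        assert lhs == rhs, (n, k)
print("summation-by-parts identity verified for all n < 9")
```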
\begin{lem}\label{reduce}Let $A,H$ be two weighted graphs with $n$ vertices, and let $\sigma, \tau$ be two irreducible representations of $S_n$. If
\begin{equation} \lambda_1(H;\tau)\geq \lambda_{\max}(H;\sigma) \label{eq:reduce}
\end{equation} and
$$ \lambda_1(A;\tau) \geq \lambda_1(A;\sigma)$$ then
$$ \lambda_1(A+H;\tau) \geq \lambda_1(A+H;\sigma).$$ \end{lem}
\begin{proof}
Recall the variational characterization of $\lambda_1$ which states
that for any positive matrix $M$, its lowest eigenvalue is the minimum
of $\langle Mv,v\rangle$ over all unit vectors $v$. Hence we may upper
bound $\lambda_1(A+H;\sigma)$ with any such $v$, and we choose $v$ to be a unit eigenvector corresponding to $\lambda_1(A;\sigma)$. Then
\begin{align*}
\lambda_1(A+H;\sigma) & \leq \langle(\Delta_A+\Delta_H)v,v\rangle= \\
&=\lambda_1(A;\sigma)+\langle \Delta_Hv,v\rangle \leq \lambda_1(A;\sigma)+\lambda_{\max}(H;\sigma) \leq\\
&\leq \lambda_1(A;\tau)+ \lambda_1(H;\tau) \leq
\lambda_1(A+H;\tau).\qedhere
\end{align*}
\end{proof}
We remark that even though lemma \ref{reduce} works for any matrix $H$, we
will apply it only to matrices whose entries take two values (one of
which is 0) i.e.~to graphs whose weights are all the same.
\begin{definition} Let $G,H$ be two weighted graphs, and $\sigma, \tau$ representations of $S_n$. \begin{enumerate}
\item
We call $H$ a reducing graph for $\sigma, \tau$ if $H$ satisfies (\ref{eq:reduce}).
\item
We say that $G$ is $H$-irreducible if there does not exist a graph $H'$ isomorphic to $H$ and a number $\epsilon >0$ such that $\epsilon H' \leq G$.
\end{enumerate}
\end{definition}
The identities (\ref{eq:reverse}) and (\ref{eq:dual}) show that equation (\ref{eq:reduce}) is equivalent to $$ \lambda_{\max}(H;\sigma) + \lambda_{\max}(H;\tau\otimes\mathbf{sgn}) \leq 2\wt(H). $$
We will use this reformulation to show that a given graph $H$ is reducing.
Lemma \ref{reduce} is the basis to our strategy of reduction: In proving that $\sigma \succ \tau$, if $H$ is a reducing graph, it is enough to prove that $\lambda_1(A;\sigma) \leq \lambda_1(A;\tau)$ only for $H$-irreducible matrices $A$. Indeed, if $A$ is not $H$-irreducible, then we can find $H' \cong H$ and $\epsilon>0$ such that $A-\epsilon H'$ has nonnegative weights on the edges, and fewer nonzero weights than $A$. According to lemma \ref{reduce}, it is enough to prove the inequality for $A-\epsilon H'$. Repeating this procedure, we reduce the problem to $H$-irreducible graphs.
\begin{proof}[Proof of the theorem]
Let $n\ge 4k^2+4k$ and let $\sigma\in\mathcal{S}_k$ and $\tau\in\mathcal{S}_k\otimes\mathbf{sgn}$. The claim of
the theorem is that under these conditions $\sigma\succ\tau$.
Let $H$ be the graph with $2k$ disjoint edges i.e.~as a matrix its
coefficients $h_{ij}$ are given by
\[
h_{ij}=\begin{cases}
1&\mbox{if $i=2m-1$ and $j=2m$ for some $m\leq 2k$}\\
0&\mbox{otherwise.}
\end{cases}
\]
By lemma \ref{lem:matching},
$$
\lambda_{\max}(H; \tau\otimes\mathbf{sgn})+\lambda_{\max}(H;\sigma)\leq
4k=2\wt(H).
$$
Hence, $H$ is a reducing graph for $\sigma$ and $\tau$.
It is hence enough to prove that $\lambda_1(A;\sigma) \leq \lambda_1(A;\tau)$ for any $H$-irreducible matrix $A$.
It is well known that an $H$-irreducible graph can be written as a union of $4k-2$ weighted stars. Indeed, choose
an edge $e$ of $A$ arbitrarily and remove from $A$ the two stars
centered at the two vertices of $e$. If the resulting graph is non-empty,
choose again some edge arbitrarily and remove two stars. This process
must stop after $2k-1$ steps, since otherwise we would have found $2k$
disjoint edges in $A$. Hence we wrote $A$ as a union of $4k-2$
stars. Denote
\[
A=\sum_{i=1}^{4k-2}S_{i}.
\]
We now use lemma \ref{lem:weightedstar} for each of the $S_i$ and sum
over $i$. Recall that the $k$ edges with the largest weights of a
weighted star play a special role in lemma \ref{lem:weightedstar} ---
their weights were multiplied by 2 rather than by 1. Collecting these
special edges for the $4k-2$ stars gives a total of $k(4k-2)$ special
edges. Denote them by $e_{i}$. Thus the conclusion of lemma
\ref{lem:weightedstar} is
\[
\lambda_{\max}(A;\tau\otimes\mathbf{sgn})\leq
\sum_{i<j}a_{ij}+\sum_{i=1}^{k(4k-2)}a_{e_{i}}
\]
(where if $e=(i,j)$ then we denote $a_{e}=a_{ij}$). The edges $e_{i}$
combined have no more than $(k+1)\cdot(4k-2)=4k^2+2k-2$ vertices.
Let us now move to the estimate of $\lambda_1(A;\sigma)$. For that matter,
pick $2k$ vertices which do not belong to any of the $e_{i}$'s (here we use the condition $n\geq4k^{2}+4k$). Denote these vertices by $v_{1},\dotsc,v_{2k}$.
For every $i$, let $W(i)$ be the weight of $v_{i}$ i.e.\[
W(i)=\sum_{j=1}^{n}a_{v_{i}j}.\]
Assume w.l.o.g.\ that the $v_{i}$ are arranged so that $W(i)$ are
increasing. Examine again the representation $L^{2}(Q)$ from lemma
\ref{lem:Q}. By clause (2) of that lemma, since $\sigma \in
\mathcal{S}_{k}$, for any set of $n-k$ vertices there exists a nonzero
element $f\in V\subset L^{2}(Q)$, where $V$ is an invariant subspace
of $L^2(Q)$ isomorphic to $\sigma$, such that $f$ is invariant to
permutations of elements from this set. Normalize $f$ to have $||f||=1$.
We choose the set to be all vertices except
$v_{1},\dotsc,v_{k}$. Examine now
\[
\sum_{i<j}a_{ij}(\id-(ij))f.
\]
If both $i$ and $j$ are different from $v_{1},\dotsc,v_{k}$ then
$(ij)f=f$ and the contribution to the sum is $0$. Otherwise we simply
estimate\[
||a_{ij}(\id-(ij))f||\leq2a_{ij}\]
(here $||\cdot ||$ is the norm in $L^2(Q)$) and we get
\[
\left\Vert \sum_{i<j}a_{ij}(\id-(ij))f\right\Vert \leq2\sum_{i=1}^{k}W(i)\leq\sum_{i=1}^{2k}W(i)\]
where the second inequality comes from the fact that we chose the
$W(i)$ increasing. This bounds $\lambda_1(A;\sigma)$ and we get
\[
\lambda_1(A;\sigma)+\lambda_{\max}(A;\tau\otimes\mathbf{sgn})\leq\sum_{i=1}^{2k}W(i)+\sum_{i<j}a_{ij}+\sum_{i=1}^{k(4k-2)}a_{e_{i}}\]
Since the vertices $v_{1},\dotsc,v_{2k}$ are different from the vertices on the edges $e_{i}$, the above sum contains each edge no more than twice. Hence,
\[ \lambda_1(A; \sigma) \leq 2\wt(A)-\lambda_{\max} (A; \tau\otimes\mathbf{sgn}) = \lambda_1(A;\tau). \qedhere\]
\end{proof}
\appendix
\section{\label{sec:gamma}The hook-shaped diagrams}
In this appendix we prove the claim appearing in the introduction that
\begin{equation}\label{eq:Bacher}
[n]\succ [n-1,1] \succ \dotsb \succ [1^n].
\end{equation}
This is basically a result of Bacher \cite{B94},
who showed that the eigenvalues $\lambda_i(A;[n-k,1^k])$ are simply
all the sums of all $k$-tuples of the eigenvalues
$\lambda_i(A;[n-1,1])$. This immediately implies (\ref{eq:Bacher})
(recall that the eigenvalues are all non-negative). However, he used a
different description of these representations, as
wedge products of $[n-1,1]$. We will now prove that the two
descriptions coincide. This was known before (for example, it is
mentioned without proof in \cite{CLR09} in the penultimate paragraph
of the introduction) but we found no proof in the literature.
\begin{lem} $ \wedge^k[n-1,1]\cong[n-k,1^k] $
\end{lem}
The proof will use the Murnaghan-Nakayama formula for the characters of
the irreducible representations of $S_n$. See e.g.~\cite[Theorem 4.10.2]{S00}. We will not give
a full description of this formula here.
\begin{proof}
Recall that for a representation $\rho$ the character is a function
$\chi:G\to \mathbb{C}$ defined by $\chi(g)=\tr \rho(g)$ and that two
representations are isomorphic if and only if their characters
coincide \cite[Corollary 1.9.4(5)]{S00}. Thus it is enough to prove that the two representations have the same character.
Let us denote the character of the representation on the left hand
side by $\chi_k^\wedge $ and the right hand side by $\chi_k^\Gamma$
(the letter $\Gamma$ reminds us of a hook). We will prove that
$\chi_k^\wedge+\chi_{k-1}^\wedge=\chi_k^\Gamma+\chi_{k-1}^\Gamma$.
Since the lemma is true for $k=1$, that will be enough.
Let $V$ be the $n$-dimensional Euclidean space, with the standard
basis $e_1,\dotsc,e_n$, viewed as the standard representation of $S_n$
(see page \pageref{pg:stdrep}). Since
$V \cong [n-1,1] \oplus [n]$, and $[n]$ is one dimensional, it can be
easily seen that
$$\wedge^k V \cong \wedge^{k-1}[n-1,1] \oplus \wedge^k[n-1,1].$$
Therefore $\chi_{\wedge^kV}=\chi_k^\wedge + \chi_{k-1}^\wedge$. Recall now the
standard basis for $\wedge^k V$: for any
subset $K=\{i_1<i_2<\dotsb<i_k\}$ of $\{1,2,\dotsc,n\}$ let
$e_K=e_{i_1} \wedge \dotsb \wedge e_{i_k}$. We will calculate
the trace of a permutation $g$ (acting on $\wedge^k V$) using this
basis. Let $g\in S_n$ have cycles of lengths $c_1\geq c_2 \geq \dotsb \geq c_r$. We have $ge_K=\pm e_{g(K)}$.
The only
contribution to the trace comes from $e_K$'s for which $g(K)=K$, and
in this case the $\pm$ above is simply $\sgn(g|_K)$. Hence
we get
$$ \chi_{\wedge^kV}(g)=\sum_{\epsilon_1,\dotsc,\epsilon_r \in \{0,1\}: \sum \epsilon_i c_i = k} (-1)^{\sum \epsilon_i (c_i-1)}. $$
Let us now calculate $\chi_k^\Gamma+\chi_{k-1}^\Gamma$. We use the Murnaghan-Nakayama rule, which takes a nice form for hook-shaped diagrams:
For a hook-shaped diagram $\gamma$, let $\mathbf{S}(\gamma)$ be the
set of all sequences of Young diagrams
$\gamma=\gamma_1,\dotsc,\gamma_{r+1}=\emptyset$ such that for $1 \leq
i\leq r$, $\gamma_{i+1}$ is obtained from $\gamma_i$ by removing $c_i$
consecutive boxes. Note that except for the last stage, all the
removed parts do not contain the corner $(1,1)$ and hence are either
horizontal or vertical bars. We will call the removed parts of the
first $r-1$ stages the \emph{bars of the sequence}. For any set of
consecutive boxes we define its \emph{height} to be the number of rows it
occupies. And for an element $(\gamma_1,\dotsc,\gamma_{r+1})$ of
$\mathbf{S}(\gamma)$, we define its height to be $\sum_{i=1}^r
(1+\height(\gamma_i \setminus \gamma_{i+1}))$. Then, according to the
Murnaghan-Nakayama rule,
$$ \chi_k^\Gamma(g) = \sum_{s\in \mathbf{S}([n-k,1^k])} (-1)^{\height(s)}. $$
Let $f$ be the bijective function from the Young
diagram $[n-k,1^k]$ to the Young diagram $[n-k+1,1^{k-1}]$ taking the
box at position $(1,i)$ to the box $(1,i-1)$ for all $i>1$, and taking
the box at $(i,1)$ to $(i+1,1)$ for all $i$. See figure \ref{fig:f}.
\begin{figure}
\caption{The map $f$ from $[n-k,1^k]$ to $[n-k+1,1^{k-1}]$.}
\label{fig:f}
\end{figure}
Let $\mathbf{A}$ be the subset of $\mathbf{S}([n-k,1^k])$ of all sequences for which
no bar has $(1,2)$ as an endpoint, and let $\mathbf{B}$ be the subset of
$\mathbf{S}([n-k+1,1^{k-1}])$ of all sequences for which no bar has $(2,1)$
as an endpoint. Let $F:\mathbf{A}\rightarrow \mathbf{B}$ be the function defined by
applying $f$ to each stage of the sequence. Then $F$ is well defined
because the condition that $(1,2)$ is not an end point ensures that
the image is indeed a set of legal Young diagrams; and $F$ is a bijection, since an inverse function can be defined using $f^{-1}$. Moreover, the corresponding terms for $s$ and $F(s)$ cancel out in the sum for $\chi_k^\Gamma+\chi_{k-1}^\Gamma$.
Hence, $$\chi_k^\Gamma+\chi_{k-1}^\Gamma= \sum_{s\in
\mathbf{S}([n-k,1^k])\setminus \mathbf{A}} (-1)^{\height(s)}+\sum_{s\in
\mathbf{S}([n-k+1,1^{k-1}])\setminus \mathbf{B}} (-1)^{\height(s)}.$$ But this is equal
to $\chi_{\wedge^kV}(g)$: a sequence $\epsilon_1,\dotsc,\epsilon_r\in \{0,1\}$
such that $\sum \epsilon_i c_i=k$ determines a unique term in one of
the above two summands in the following way: at each stage, we remove
a vertical bar of size $c_i$ if $\epsilon_i=1$ and a horizontal one if
$\epsilon_i=0$. We get a sequence of Young
diagrams in the first summand when $\epsilon_r=0$, and in the second
summand when $\epsilon_r=1$. Further, it is easy to check that all terms
in both summands are obtained in this way, and that the
corresponding term is $(-1)^{\sum {\epsilon_i (c_i-1)}}$.
\end{proof}
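The equality between the two expressions for $\chi_{\wedge^kV}$ can be tested directly on examples: compute the trace of $g$ on $\wedge^k V$ by summing $\sgn(g|_K)$ over $g$-invariant $k$-subsets $K$, and compare with the cycle-type formula above. A Python sketch (our illustration, not part of the paper):

```python
from itertools import combinations, product

def cycle_type(perm):
    """Cycle lengths of a permutation given as a 0-indexed tuple."""
    n, seen, cycles = len(perm), set(), []
    for s in range(n):
        if s not in seen:
            c, x = 0, s
            while x not in seen:
                seen.add(x); x = perm[x]; c += 1
            cycles.append(c)
    return cycles

def sign_on_subset(perm, K):
    """Sign of the permutation restricted to an invariant subset K."""
    sub = sorted(K)
    pos = {v: i for i, v in enumerate(sub)}
    restricted = tuple(pos[perm[v]] for v in sub)
    return (-1) ** (len(sub) - len(cycle_type(restricted)))

def chi_wedge_direct(perm, k):
    """tr(g on wedge^k V): sum of sgn(g|_K) over invariant k-subsets K."""
    return sum(sign_on_subset(perm, K)
               for K in combinations(range(len(perm)), k)
               if set(perm[v] for v in K) == set(K))

def chi_wedge_formula(perm, k):
    """The cycle-type formula from the appendix."""
    c = cycle_type(perm)
    return sum((-1) ** sum(e * (ci - 1) for e, ci in zip(eps, c))
               for eps in product((0, 1), repeat=len(c))
               if sum(e * ci for e, ci in zip(eps, c)) == k)

g = (1, 2, 0, 4, 3, 5)          # cycle type (3, 2, 1) in S_6
for k in range(7):
    assert chi_wedge_direct(g, k) == chi_wedge_formula(g, k)
print([chi_wedge_direct(g, k) for k in range(7)])
```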
\end{document} |
\begin{document}
\date{}
\title{Periodic Minimizers of a Ternary Non-Local Isoperimetric Problem}
\begin{abstract}
We study a two-dimensional ternary inhibitory system derived as a sharp-interface limit of the Nakazawa-Ohta density functional theory of triblock copolymers. This free energy functional combines
an interface energy favoring micro-domain growth with a
Coulomb-type long range interaction energy which prevents micro-domains from unlimited spreading. Here we consider a limit in which two species are vanishingly small, but interactions are correspondingly large to maintain a nontrivial limit. In this limit two energy levels are distinguished: the highest order limit encodes information on the geometry of local structures as a two-component isoperimetric problem,
while the second level describes the spatial distribution of components in global minimizers. We provide a sharp rigorous derivation of the asymptotic limit, both for minimizers and in the context of Gamma-convergence. Geometrical descriptions of limit configurations are derived; among other results, we will show
that, quite unexpectedly, coexistence of single and double bubbles can arise.
The main difficulties are hidden in the
optimal solution of the two-component isoperimetric problem: compared to binary systems, not only does it lack an explicit formula but, more crucially, it can be neither concave nor convex on parts of its domain.
\end{abstract}
\numberwithin{equation}{section}
\section{Introduction}
An $ABC$ triblock copolymer is a linear-chain molecule consisting of three subchains, joined covalently to each other. A subchain of type $A$ monomer is connected to one of type $B$, which in turn is connected to another subchain of type $C$ monomer. Because of the repulsive forces between different types of monomers, different types of subchain tend to segregate. However, since subchains are chemically bonded in molecules, segregation can lead to a phase separation only at microscopic level,
where $A, B$ and $C$-rich micro-domains emerge, forming morphological phases, many of which have been observed experimentally: see Figure \ref{laAndJanus}. Bonding of distinct monomer subchains provides an inhibition mechanism in block copolymers.
\begin{figure}[!htb]
\centering
\includegraphics[width=3.8cm]{lattice.eps}
\caption{ Electron microscopy image of the cross-section of a multi-cylinder morphology of ABC triblock copolymers with two cylinder types packed in a tetragonal lattice \cite{Mogi}.
Reproduced with permission from the American Chemical Society 1994.
}
\label{laAndJanus}
\end{figure}
This paper will address the asymptotic behavior of the energy functional derived from Nakazawa and Ohta's density functional theory for triblock copolymers \cite{microphase, lameRW} in two dimensions.
Let $u = (u_1, u_2)$, and $u_0 = 1 - u_1 - u_2 $.
The order parameters $u_i, i = 0, 1, 2, $ are defined on $\mathbb{T}^2 = \mathbb{R}^2 / \mathbb{Z}^2=[ - \frac{1}{2}, \frac{1}{2} ]^2$ i.e., the two dimensional flat torus
of unit volume, with periodic boundary conditions.
Define
\begin{eqnarray} \label{energyu}
\mathcal{E} (u) := \frac{1}{2} \sum_{i=0}^2 \int_{\mathbb{T}^2} |\nabla u_i | + \sum_{i,j = 1}^2 \frac{\gamma_{ij}}{2} \int_{\mathbb{T}^2} \int_{\mathbb{T}^2} G_{\mathbb{T}^2}(x-y)\; u_i (x) \; u_j (y) dx dy
\end{eqnarray}
on $BV(\mathbb{T}^2; \{0,1\})$.
Each $u_i$, which represents the relative monomer density, has two preferred states: $u_i = 0$ and $u_i = 1$.
Case $u_1 = 1$ corresponds to a pure-$A$ region, $u_2 = 1$ to a pure-$B$ region, and $u_0 = 1$ to a pure-$C$ region. Thus, each $u_i=\chi_{\Omega_i}$ with supports $\Omega_i, \ i=0,1,2,$ which partition ${\mathbb{T}^2}$: $\Omega_i$ are assumed to be mutually disjoint
and $u_1+u_2 + u_0=1$ a.e. on ${\mathbb{T}^2}$.
The energy is minimized under two mass or area constraints
\begin{eqnarray} \label{constrain}
\frac{1}{| \mathbb{T}^2 |} \int_{\mathbb{T}^2} u_i = M_i, \quad i = 1, 2.
\end{eqnarray}
Here $M_1$ and $M_2$ are the area fractions of type-$A$ and type-$B$ regions, respectively. Constraints \eqref{constrain} model the fact
that, during an experiment, the compositions of the molecules do not change.
The first term in \eqref{energyu} counts the perimeter of the interfaces: indeed, for $u_i\in BV({\mathbb{T}^2}; \{0,1\})$,
\[ \int_{\mathbb{T}^2} | \nabla u_i | : = \sup \left \{ \int_{\mathbb{T}^2} u_i \ \text{div} \varphi \ dx: \varphi = (\varphi_1, \varphi_2)\in C^1 (\mathbb{T}^2 ; \mathbb{R}^2), |\varphi(x)| \leq 1 \right \}, \nonumber
\]
defines the total variation of the characteristic function $u_i$. The factor $\frac12$ acknowledges that each interface between the phases is counted twice in the sum.
The second part of \eqref{energyu} is the long range interaction energy, associated with the connectivity of sub-chains in the triblock copolymer macromolecule.
The long range interaction coefficients $\gamma_{ij}$ form a symmetric matrix $\gamma = [\gamma_{ij}]\in \mathbb{R}^{2\times 2}$.
Here $G_{\mathbb{T}^2}$ is the zero-mean Green's function for $- \triangle$ on $\mathbb{T}^2$ with periodic boundary conditions, satisfying
\begin{equation} \label{GLap}
-\Delta G_{\mathbb{T}^2}(\cdot - y) = \delta(\cdot-y)- 1
\text{ in } \mathbb{T}^2;
\qquad
\int_{\mathbb{T}^2} {G_{\mathbb{T}^2} (x - y)} dx=0
\end{equation}
for each $ y\in \mathbb{T}^2$. In two dimensions, the Green's function $G_{\mathbb{T}^2}$ has the local representation
\begin{equation} \label{G2def}
G_{\mathbb{T}^2}(x - y)= - \frac{1}{2\pi}\log | x-y | + R_{\mathbb{T}^2} (x - y),
\end{equation}
for $|x-y|<\frac12$.
Here $R_{\mathbb{T}^2}\in C^\infty(\mathbb{T}^2)$ is the regular part of the Green's function.
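For intuition, $G_{\mathbb{T}^2}$ can be approximated by its truncated Fourier series. The following Python sketch is purely illustrative (the cutoff $N$ is an arbitrary choice and not a value from this paper); dropping the zero mode enforces the mean-zero condition in \eqref{GLap}, while near the diagonal the partial sums grow logarithmically, consistent with \eqref{G2def}.

```python
import math

def green_torus(x, y, N=40):
    """Truncated Fourier series for the zero-mean Green's function of
    the negative Laplacian on the unit torus T^2:
        G(x, y) ~ sum over 0 < |k| <= N of
                  cos(2*pi*(k1*x + k2*y)) / (4*pi^2 * |k|^2).
    Omitting the k = 0 mode enforces the zero-mean condition; the cutoff
    N is an illustrative choice only."""
    total = 0.0
    for k1 in range(-N, N + 1):
        for k2 in range(-N, N + 1):
            if k1 == 0 and k2 == 0:
                continue  # removed zero mode <=> zero mean on the torus
            total += math.cos(2.0 * math.pi * (k1 * x + k2 * y)) \
                     / (4.0 * math.pi ** 2 * (k1 * k1 + k2 * k2))
    return total
```

Evenness and periodicity of $G_{\mathbb{T}^2}$ hold term by term in this sum, and the values increase as $|x-y|\to 0$, reflecting the local logarithmic singularity.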
\begin{figure}[!htb]
\centering
\includegraphics[width=5.2cm]{co8.eps}
\includegraphics[width=5.2cm]{dou9.eps}
\includegraphics[width=5.2cm]{asqu8.eps}
\caption{Numerical simulations: coexistence, all double bubble, and all single bubble patterns of $ABC$ triblock copolymers. Type $A$ micro-domains are in red, and type $B$ are in yellow. The rest of the region is filled by type $C$ monomers, in blue.}
\label{terDisk1}
\end{figure}
As was the case for the Ohta-Kawasaki model of diblock copolymers, nonlocal ternary systems are of great mathematical interest because of the diverse patterns expected to be exhibited by their minimizers.
In the same way that the binary nonlocal isoperimetric functional is obtained as a sharp-interface limit of Ohta-Kawasaki, the triblock energy \eqref{energyu} is the sharp-interface limit (in the sense of $\Gamma$-convergence) of a ternary phase-field model introduced by Nakazawa-Ohta (see \cite{rwtri}),
\begin{eqnarray} \label{energyu2}
\mathcal{E}^{\epsilon} (u) := \frac{1}{2} \sum_{i=0}^2 \left [ \epsilon \int_{\mathbb{T}^2} |\nabla u_i |^2 dx +\frac{1}{\epsilon} \int_{\mathbb{T}^2} u_i^2 (1- u_i^2) dx \right] +
\sum_{i,j = 1}^2 \frac{\gamma_{ij}}{2} \int_{\mathbb{T}^2} \int_{\mathbb{T}^2} G_{\mathbb{T}^2}(x-y)\; u_i (x) \; u_j (y) dx dy
\end{eqnarray}
defined on $H^1(\mathbb{T}^2) $.
Just as the diblock copolymer problem may be formulated as a nonlocal isoperimetric problem (NLIP) which partitions space into two components, the triblock model is a NLIP based on partitions into three disjoint components.
Physicists (see e.g. Bates-Fredrickson \cite{block}) have predicted a wide variety of both two dimensional and three dimensional patterns.
The shape of minimizers is generally believed to come from perimeter minimization, while the nonlocal interactions promote fragmentation. For partitions of $\mathbb{R}^n$, $n=2,3$, into three components, the minimizers of perimeter are known to be {\it double bubbles} \cite{FABHZ}, \cite{db1}, \cite{db2}. In addition, there are {\it core shell} configurations (annuli in two or three dimensions) which are non-minimizing critical points of perimeter. The presence of these additional structures adds to the complexity of the energy landscape of the triblock functional. Figure~\ref{terDisk1} presents numerical simulations of three two-dimensional morphologies. These are obtained as the $L^2$ gradient flow dynamics of \eqref{energyu2}, which is solved by a semi-implicit Fourier spectral method \cite{wrz}.
A recent series of papers by Ren \& Wei \cite{doubleAs} and Ren \& Wang \cite{disc, stationary} considers the triblock energy in a parameter regime where two of the components are very dilute with respect to the third. They use perturbative arguments to generate stationary configurations consisting of assemblies of double bubbles, core shells, or single bubbles of both species, in an array. These solutions are not constructed by minimization, and it is unknown whether they are local or global minimizers. The purpose of this article is to consider global minimizers of the triblock energy in 2D, in an asymptotic regime where the two minority phases have vanishingly small area but strong interaction ensures a bounded number of phase domains in the limit. In particular, we are interested in describing the possible morphologies of minimizers in this dilute limit.
The appropriate ``droplet'' scaling we use was introduced by Choksi \& Peletier \cite{bi1} in the diblock case. We introduce a new parameter $\eta$ representing the characteristic length scale of the droplet components. Thus, areas scale as $\eta^2$, and so we choose mass constraints on $u=(u_1,u_2)$,
\[ \int_{\mathbb{T}^2} u_i = \eta^2 M_i \]
for some fixed $M_i$, $i=1, 2$. We then rescale $u_i$ as
\begin{eqnarray}
v_{i, \eta}^{} = \frac{ u_i }{\eta^2}, \quad i=0,1,2, \quad\text{with}\quad \int_{\mathbb{T}^2} v_{i, \eta} = M_i, \quad i=1,2.
\end{eqnarray}
The matrix $\gamma=[\gamma_{ij}]$ is also scaled, in such a way that both terms contribute at the same order in $\eta$. This may be accomplished by choosing
\begin{eqnarray}
\gamma_{ij} = \frac{1}{|\log \eta| \, \eta^3} \Gamma_{ij}, \nonumber
\end{eqnarray}
with fixed constants $\Gamma_{ij}\ge 0$. Throughout the paper we will assume
$$ \Gamma_{ii}>0 \quad i=1,2, \qquad \Gamma_{12}\ge 0, \qquad\text{and}\qquad \Gamma_{11}\Gamma_{22}-\Gamma_{12}^2>0 . $$
The hypotheses $\Gamma_{11},\Gamma_{22}>0$ and $\Gamma_{12}\ge 0$ are essential to our results. Positive definiteness of the matrix $\Gamma$ is a consequence of the derivation of the model from density functional theory \cite{rwtri}, and it can be omitted in most of our results; however, the nature of minimizers would be quite different if the matrix $\Gamma$ were not positive definite.
Denote $v_{\eta} = (v_{1,\eta}, v_{2, \eta})$. As the supports of the component functions $v_{i, \eta}$ should be of finite perimeter and disjoint, we will assume $v_\eta$ lies in the space
\begin{equation}\label{Xspace}
X_\eta:=\left\{ (v_{1,\eta}, v_{2, \eta}) \ | \ \eta^2 v_{i,\eta}\in BV({\mathbb{T}^2}; \{0,1\}), \ v_{1,\eta}\, v_{2, \eta} = 0 \ \ a.e.\right\}.
\end{equation}
With these definitions, for $v_\eta\in X_\eta$ we define our functional
\begin{eqnarray} \label{Eeta}
E_{\eta}^{} (v_{\eta}) := \frac{1}{\eta} \mathcal{E} ( u )
=
\frac{\eta}{2} \sum_{i=0}^2 \int_{\mathbb{T}^2} |\nabla v_{i,\eta} | + \sum_{i,j = 1}^2 \ \frac{ \Gamma_{ij} }{2 |\log \eta| } \int_{\mathbb{T}^2} \int_{\mathbb{T}^2} G_{\mathbb{T}^2}(x - y) v_{i, \eta}(x) v_{j, \eta}(y) dx dy,
\end{eqnarray}
and $E_\eta(v_\eta)=+\infty$ otherwise.
Heuristically, we expect that (for large enough $M_i>0$) this choice of parameters will lead to fragmentation of a minimizing sequence $v_\eta = \sum_{k=1}^K v_\eta^k$ into $K$ isolated components, each concentrating at a distinct point $\xi^k\in{\mathbb{T}^2}$ and supported on a pair of sets $(\Omega_{1,\eta}^k,\Omega_{2,\eta}^k)$ with characteristic length scale $O(\eta)$. Apart from the vectorial nature of the order parameters, this was the result described in \cite{bi1,ABCT2} in the binary case. Blowing up at $\eta$-scale, we would express the minimizing components as
$v_{i, \eta}^k(\eta x + \xi^k ) = \eta^{-2} z^k_i (x)$, with limiting profile $z_i^k:=\chi_{A_i^k}$ for pairs of sets $A^k=(A^k_1,A^k_2)$ in $\mathbb{R}^2$. With this as an ansatz, the minimizer $v_\eta$ may be treated as a superposition of point particles,
$$ v_\eta \rightharpoonup \sum_{k=1}^K (m^k_1,m^k_2)\, \delta_{\xi^k}, $$
for $m_i^k=|A_i^k|$, and
a formal calculation yields an expansion of the energy of the form:
\begin{align*}
E_\eta(v_\eta)&=\sum_{k=1}^K \sum_{i=0}^2 \frac{\eta}{2} \int_{{\mathbb{T}^2}}|\nabla v_{i, \eta}^k |
+{1 \over 2 |\log \eta|} \sum_{k,\ell=1}^K \sum_{i,j=1}^2 \Gamma_{ij}
\int_{{\mathbb{T}^2}}\int_{{\mathbb{T}^2}} v_{i, \eta}^k (x)\, G_{\mathbb{T}^2} (x-y)\, v_{j, \eta}^\ell(y)\, dx\, dy \\
& = \sum_{k=1}^K \sum_{i=0}^2 \frac{1}{2} \int_{\mathbb{R}^2} |\nabla z_{i}^k|
+{1 \over 2|\log \eta|} \sum_{k,\ell=1}^K \sum_{i,j=1}^2 \Gamma_{ij}
\int_{ A_i^k}\int_{ A_j^\ell} G_{\mathbb{T}^2} (\xi^k+ \eta \tilde x - \xi^\ell - \eta \tilde y)\, d \tilde x\, d \tilde y \\
&= \sum_{k=1}^K \left( \text{Per}_{\mathbb{R}^2} (A^k) + \sum_{i,j=1}^2 {\Gamma_{ij}\over 4\pi} |A^k_i|\, |A^k_j|\right) + O(|\log\eta|^{-1}),
\end{align*}
where we define the perimeter of the {\it 2-cluster} (see \cite[Chapter 29]{maggi}) $A=(A_1,A_2)$ of sets $A_1,A_2\subset\mathbb{R}^2$ with $|A_1\cap A_2|=0$ as
\begin{equation}\label{cluster_per}
\text{Per}_F(A) = \frac{1}{2} \sum_{i=0}^2 \mathcal{H}^1 (\partial^* A_i \cap F), \text{ where } A_0 = (A_1 \cup A_2)^C.
\end{equation}
Thus, to highest order, energy minimization defines the {\it shape} of the minimizing components $A^k$ at scale $\eta$, as minimizers of an isoperimetric problem for clusters in $\mathbb{R}^2$. We define
$$ \mathcal{G} (A) : = \text{Per}_{\mathbb{R}^2 } (A) + \sum_{i,j=1}^2 \frac{\Gamma_{ij} {m}_i {m}_j }{4 \pi}, $$
and for given $ m =( m_1 , m_2 )$, $ m_i \ge 0$,
$$ e_0( m ):=\min \left\{ \mathcal{G}( A ) \ | \ A =(A_1,A_2) \text{ 2-cluster, with $| A_i |= m_i $, $i=1,2$}\right \}. $$
If both $ m_i >0$, $i=1,2$, then \cite{FABHZ} the minimum is attained at a {\it double bubble}, whose geometry is uniquely determined by $m$; that is,
\begin{eqnarray} \label{e0m}
e_0 (m) = p(m_1, m_2) + \sum_{i, j=1}^2 \frac{\Gamma_{ij} m_i m_j }{4\pi}.
\end{eqnarray}
The expression $p(m_1,m_2)=\text{Per}_{\mathbb{R}^2}(A)$ gives the perimeter of the minimizing cluster $A=(A_1,A_2)$ with $m_i=|A_i|$, and represents the total perimeter of a double bubble when $m_1,m_2>0$.
In the case $m_2 = 0$ (or $m_1 =0$), the minimizer is a single bubble, with
$p(m_1,0)=2\sqrt{\pi m_1}$ (and similarly for $p(0,m_2)$), and so for single bubbles $e_0$ simplifies to
\begin{eqnarray} \label{e0s}
e_0(m_1, 0) = 2\sqrt{\pi m_1} + \frac{\Gamma_{11} m_1^2 }{4\pi}, \qquad e_0(0, m_2) = 2\sqrt{\pi m_2} + \frac{\Gamma_{22} m_2^2 }{4\pi}.
\end{eqnarray}
Thus, we expect that minimizers of $E_\eta$ will always form an array consisting of single or double bubbles (or both); no other shapes are expected for the components of $\Omega_\eta$. The spatial distribution of the single or double bubbles on ${\mathbb{T}^2}$ should be determined by the higher order terms in a more detailed energy expansion.
However, this heuristic description says nothing about how the total masses $M=(M_1,M_2)$ (at scale $\eta^2$) are to be divided. Indeed, if either $M_i$ is large then it may well happen that the total energy is reduced by further splitting into smaller components, so as to decrease the quadratic term in $e_0$. Following \cite{bi1} we define
\begin{eqnarray} \label{mine0}
\overline{e_0 }(M ) := \inf \left\{ \sum_{k=1}^{\infty} e_0 (m^k ) : m^k = (m_1^k, m_2^k ), \ m_i^k \geq 0,\ \sum_{k=1}^{\infty} m_i^k = M_i, i = 1, 2 \right\},
\label{e0bar}
\end{eqnarray}
which effectively allows for splitting of sets with large area. We remark that the problem \eqref{e0bar} is highly non-convex, and we do not expect uniqueness of minimizers for $\overline{e_0 }(M)$.
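To see why splitting can be favorable, consider for illustration a single species ($M_2=0$) divided into $K$ equal single bubbles, using formula \eqref{e0s}. The following Python sketch is our own toy computation (equal splitting is only an ansatz here, whereas \eqref{e0bar} allows arbitrary configurations): for $\Gamma_{11}=0$ one bubble is optimal, while for large $\Gamma_{11}$ the quadratic term forces fragmentation.

```python
import math

def e0_single(m, gamma):
    # Single-bubble energy from (e0s): perimeter of a disk of area m,
    # plus the self-interaction term gamma * m^2 / (4*pi).
    return 2.0 * math.sqrt(math.pi * m) + gamma * m * m / (4.0 * math.pi)

def best_equal_split(M, gamma, kmax=500):
    """Minimize K * e0_single(M/K, gamma) over K = 1, ..., kmax.
    Returns (optimal K, corresponding energy)."""
    K = min(range(1, kmax + 1), key=lambda k: k * e0_single(M / k, gamma))
    return K, K * e0_single(M / K, gamma)
```

With $\gamma=0$ the total energy $2\sqrt{\pi M K}$ is increasing in $K$, so $K=1$ is optimal; for large $\gamma$ the optimal $K$ exceeds one and the split energy is strictly smaller than that of a single bubble of mass $M$.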
Our main results confirm the heuristic behavior above, and provide some description of the geometry of the limiting component clusters for minimizers. In Theorem~\ref{MinThm} we prove that $\overline{e_0}(M)$ indeed determines the distribution of masses and the resulting shapes of the components:
$$ \lim_{\eta\to 0}\min\left\{ E_\eta(v_{\eta}) \ | \ v_{\eta} \in X_\eta, \ \int_{\mathbb{T}^2} v_{\eta} = M\right\} = \overline{e_0}(M). $$
For large enough masses $M=(M_1,M_2)$, minimizers do split into a finite number $K$ of disjoint components, each of which minimizes $\mathcal{G}(A)$ upon blow-up at scale $\eta$. Furthermore, the spatial arrangement of the limiting bubbles is determined by minimization of the interaction energy
$$ \mathcal{F}_K (y^1,\dots,y^K ; \{m^1,\dots,m^K\})= \sum_{k,\ell=1\atop k\neq\ell}^K \sum_{i,j=1}^2 \frac{ \Gamma_{ij} }{2}\, m_i^k\, m_j^\ell\, G_{{\mathbb{T}^2}}(y^k-y^\ell). $$
Thus, global minimization should indeed produce a crystalline lattice of double and/or single bubbles, as in the stationary assemblies constructed in \cite{doubleAs,disc,stationary}.
Theorem~\ref{MinThm} gives a more precise statement, describing the fine structure of minimizers $v_\eta$ of $E_\eta$. In the same section we also show that $E_\eta$ and $\overline{e_0}$ are connected via $\Gamma$-convergence (see Theorem~\ref{twodfirst}), and the interaction energy $\mathcal{F}_K$ arises as a second-level $\Gamma$-limit. These results both sharpen those for the binary (diblock) case \cite{bi1} and generalize them to the more complex triblock model.
The most important and original results concern the minimizers of $\overline{e_0}(M)$.
First, minimizing configurations can contain only a finite number of nontrivial components:
\begin{theorem}[Finiteness]\label{finiteness}
For any $M= (M_1, M_2)$ with $M_1, M_2 > 0$, a minimizing configuration for $\overline{e_0}(M)$ has finitely many nontrivial components. That is, there exist $ K<\infty$ and pairs $m^1,\dots, m^K$, with $m^k=(m_1^k,m_2^k)\neq (0,0)$, for which $\overline{e_0}(M)=\sum_{k=1}^K e_0(m^k)$.
\end{theorem}
This is proven in the binary case in \cite{bi1}, using the concavity of the perimeter for small masses. However, in the ternary case the proof is much more delicate, as the expression for the perimeter of double bubbles is not explicitly known, and in fact it is unknown whether it is concave for small $m$.
Given the number of parameters appearing in the limiting description $\overline{e_0}(M)$, it is impossible to make a simple statement concerning its minimizers. Numerical simulations suggest a wide variety of potential morphologies, and we prove that the following may indeed be observed for appropriate parameter values.
\begin{theorem}\label{properties}
\begin{enumerate}
\item[(a)] (Coexistence)
Given $K_1$ and $K_2 >0$, and $\Gamma_{12}= 0$, there exist $\overline{M}_1$ and $\overline{M}_2$ such that for all $M_1>\overline{M}_1$ and $M_2>\overline{M}_2$ minimizing configurations of \eqref{mine0} have at least $K_1$ double bubbles and $K_2$ single bubbles.
\item[(b)] (All single bubbles) There exist constants $M_i^*$, depending only on $\Gamma_{ii}$, $i=1,2$, such that
for any given $M_1>4 M_1^*$, $M_2> 4 M_2^*$, there exists a threshold $\Gamma_{12}^{\ast}$ such that
for all $\Gamma_{12}> \Gamma_{12}^{\ast}$, any minimizing configuration of \eqref{mine0}
has no double bubbles. Moreover, all single bubbles have the same size (see Lemma \ref{finiteSin}).
\item[(c)] (One double bubble) There exist constants $m_i^*$, depending only on $\Gamma_{ii}$, $i=1,2$, such that for any given $M_i< \min\{m_i^*, \pi \Gamma_{ii}^{-2/3}\}$, $i=1,2$, and for sufficiently small $\Gamma_{12}>0$ satisfying
\[\frac{\Gamma_{12}}{2\pi} M_1M_2 +p(M_1,M_2)<2\sqrt{\pi}(\sqrt{M_1}+\sqrt{M_2}),\]
there is a unique minimizer of
\eqref{mine0} consisting of one double bubble. Here $p$ denotes the perimeter (see equation \eqref{e0m} and below).
\end{enumerate}
The specific values of $m_i^*$, $M_i^*$ and $\Gamma_{12}^*$ are given in the proof of Theorem~\ref{properties} and in the lemmas derived in Section~\ref{section 2}.
\end{theorem}
These are proven via delicate comparison arguments based on the geometry of double bubbles, in Section~\ref{section 2}.
For small $\Gamma_{12}$ and $|M_1-M_2|$, intuition and numerics suggest that minimizers should consist of all double bubbles, which after all are preferred by the isoperimetric inequality for 2-clusters. However, the non-explicit nature of the perimeter function for double bubbles makes such intricate comparison arguments very challenging.
In Section~\ref{section 3} we consider the interaction terms of order $|\log \eta|^{-1}$ and prove a second-level $\Gamma$-convergence result, Theorem~\ref{twodsecond}. Minimizers of the functional $F_0$ defined there will determine the crystalline lattice of the concentration points defined by the limit of the minimizers $v_\eta$.
Although experimentally an almost unlimited number of architectures can be synthetically accessed in ternary systems like triblock copolymers \cite{block}, the
mathematical study of \eqref{energyu} is still in its early stages, due to its complexity. One-dimensional stationary points of the Euler-Lagrange equations of \eqref{energyu} were found in \cite{lameRW, blendCR}.
Two and three dimensional stationary configurations were studied recently in \cite{double, doubleAs, stationary, disc, evolutionTer}.
While mathematical interest in triblock copolymers via the energy functional \eqref{energyu} is relatively recent, there has been much progress in the mathematical analysis of nonlocal binary systems. Much early work concentrated on the diffuse interface Ohta-Kawasaki density functional theory for diblock copolymers \cite{equilibrium, nishiura, onDerivation},
\begin{eqnarray} \label{energyB}
\mathcal{E} (u) := \int_{\mathbb{T}^n} |\nabla u | + \gamma \int_{\mathbb{T}^n} \int_{\mathbb{T}^n} G_{\mathbb{T}^n}(x-y)\; u (x) \; u (y) dx dy,
\end{eqnarray}
with a single mass or volume constraint. The dynamics of a gradient flow for \eqref{energyB} with small volume fraction were developed in \cite{hnr, gc}.
All stationary solutions to the Euler-Lagrange equation of \eqref{energyB} in one dimension were shown to be local minimizers \cite{miniRW}, and
many stationary points in two and three dimensions have been found that match the morphological phases in diblock copolymers \cite{oshita, many, spherical, oval, ihsan, Julin3, cristoferi, afjm}.
Sharp interface nonlocal isoperimetric problems have been the object of great interest, both for applications and for their connection to problems of minimal or constant curvature surfaces.
Global minimizers of \eqref{energyB}, and of the related Gamow liquid drop model describing atomic nuclei, were studied in \cite{otto, muratov, bi1, st, GMSdensity, knupfer1, knupfer2, Julin, ms, fl} for various parameter ranges.
Variants of the Gamow liquid drop model, with a background potential or with an anisotropic surface energy replacing the perimeter, are studied in \cite{ABCT1,luotto, cnt}.
Higher dimensions are considered in \cite{BC, cisp}.
Applications of the second variation of \eqref{energyB} and its connections to minimality and $\Gamma$-convergence are to be found in \cite{cs,afm,Julin2}.
The bifurcation from spherical, cylindrical and lamellar shapes, with Yukawa instead of Coulomb interaction, was studied in \cite{fall}.
Blends of diblock copolymers and nanoparticles \cite{nano, ABCT2}, and blends of diblock copolymers and homopolymers \cite{BK,blendCR}, have also been studied.
Extension of the local perimeter term to nonlocal $s$-perimeters is studied in \cite{figalli}.
\section{Geometric Properties of Global Minimizers} \label{section 2}
In this section we analyze the geometric properties of minimizers of $\overline{e_0}(M)$. We recall \eqref{mine0}
\begin{eqnarray}
\overline{e_0}(M)=
\inf \left \{ \sum_{k=1}^{\infty} e_0 (m^k) : m^k = (m_1^k, m_2^k), \ m_i^k \geq 0, \sum_{k=1}^{\infty} m_i^k = M_i, i = 1, 2 \right \}, \notag
\end{eqnarray}
where $M=(M_1,M_2)$, and \eqref{e0m}
\begin{eqnarray}
e_0 (m) = p(m_1, m_2) + \sum_{i, j=1}^2 \frac{\Gamma_{ij} m_i m_j }{4\pi}. \notag
Unfortunately, unlike the single bubble case \eqref{e0s}, for double bubbles the perimeter $p(m_1,m_2)$ admits no simple explicit formula.
The following Lemmas \ref{c' sharp}, \ref{all but one}, and \ref{flex} will help us overcome this difficulty. We present the proofs in the case of double bubbles; the degenerate single bubble cases are completely analogous and in most cases much simpler.
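Although no closed formula is available, $p(m_1,m_2)$ can be computed numerically from the classical double-bubble geometry of three circular arcs meeting at $120^\circ$ angles, i.e. the system \eqref{eq1}--\eqref{e6} recalled in the proof of Lemma \ref{flex} below. The following Python sketch is our own illustrative implementation, not part of the analysis: it normalizes $r_1=1$, uses the angle relations $\theta_1=\frac{2\pi}{3}-\theta_0$, $\theta_2=\frac{2\pi}{3}+\theta_0$ (see \eqref{e8} below), bisects in $\theta_0$ to match the mass ratio, and assumes $0<m_1<m_2$ (the symmetric case corresponds to $\theta_0=0$).

```python
import math

def double_bubble_perimeter(m1, m2):
    """Solve the double-bubble system (eq1)-(e6) numerically for 0 < m1 < m2.
    Normalizes r1 = 1, then rescales lengths to match the target masses.
    Returns (p, a1, a2): total perimeter and reconstructed lobe masses."""
    assert 0.0 < m1 < m2
    def lobes(theta0):
        th1 = 2.0 * math.pi / 3.0 - theta0
        th2 = 2.0 * math.pi / 3.0 + theta0
        h = math.sin(th1)                 # h = r1 sin(theta1), with r1 = 1
        r2 = h / math.sin(th2)            # from h = r2 sin(theta2)
        r0 = h / math.sin(theta0)         # from h = r0 sin(theta0)
        lens = r0 * r0 * (theta0 - math.cos(theta0) * math.sin(theta0))
        a1 = (th1 - math.cos(th1) * math.sin(th1)) + lens            # (eq1)
        a2 = r2 * r2 * (th2 - math.cos(th2) * math.sin(th2)) - lens  # (eq2)
        return th1, th2, r2, r0, a1, a2
    # a1/a2 decreases from 1 (theta0 -> 0) to 0 (theta0 -> pi/3): bisect.
    lo, hi = 1e-8, math.pi / 3.0 - 1e-8
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        a1, a2 = lobes(mid)[4:6]
        if a1 / a2 > m1 / m2:
            lo = mid
        else:
            hi = mid
    theta0 = 0.5 * (lo + hi)
    th1, th2, r2, r0, a1, a2 = lobes(theta0)
    s = math.sqrt(m1 / a1)                # length rescaling to match m1
    p = 2.0 * s * (th1 + r2 * th2 + r0 * theta0)  # three arcs, 2*r*theta each
    return p, s * s * a1, s * s * a2
```

As a consistency check, the computed perimeter lies strictly below that of two disjoint disks, $2\sqrt{\pi}(\sqrt{m_1}+\sqrt{m_2})$ (the double bubble theorem \cite{FABHZ}), and, by the isoperimetric inequality applied to the outer boundary, above $2\sqrt{\pi(m_1+m_2)}$.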
\begin{figure}[!htb]
\centering
\includegraphics[width=10cm]{con1t2.eps}
\caption{Construction for the upper bound of $p(m_1+\varepsilon,m_2)$.}
\label{con1}
\end{figure}
\begin{lemma}\label{c' sharp}
It holds that
\[\frac{\partial}{\partial m_i} p(m_1,m_2) =\frac{1}{r_i},\qquad i=1,2,\]
where $r_i = r_i (m_1, m_2)$ is the radius of the circular arc bounding the lobe of mass $m_i$ in the minimizing double bubble.
\end{lemma}
\begin{proof}
Since
\[\frac{\partial }{\partial m_1}p(m_1,m_2)
=\lim_{\varepsilon\to 0^+}\frac{p(m_1+\varepsilon,m_2)-p(m_1,m_2)}{\varepsilon}
=\lim_{\varepsilon\to 0^+}\frac{p(m_1,m_2)-p(m_1-\varepsilon,m_2)}{\varepsilon}, \]
we need to bound $p(m_1\pm\varepsilon,m_2)$ from above.
Denote by $B$ the double bubble with masses $(m_1,m_2)$.
Denote by $C_i$ the circular arc of the boundary of the lobe with mass $m_i$, radius $r_i$ and center $O_i$, $i=1,2$.
Also denote by $C_0$ the central arc, by $P$ one of the triple junction points, and by $\tau_i$ the tangent lines
to $C_i$ at $P$, $i=0,1,2$. Being a double bubble, the angle between each two $\tau_i$
and $\tau_j$ with $i\neq j$ is $2\pi/3$.
Let $T_t(C_1)$ be the scaling of $C_1$ with ratio $t>0$, still centered at $O_1$.
{\it Upper bound.}
We first bound $p(m_1+\varepsilon,m_2)$ from above.
To this purpose, it suffices to construct an admissible competitor $B_t$ (which has mass $m_1+\varepsilon$
of type I constituent, and mass $m_2$ of type II constituent), which need not be
a double bubble. We describe only the construction near $P$, since the construction near
the other triple junction $\widetilde{P}$ will be analogous. In the construction, we do not need to alter the right lobe
(the one with mass $m_2$); see Figure \ref{con1}.
\begin{itemize}
\item First, we enlarge $C_1$, replacing it with $T_t(C_1)$ with $t=1+\delta$, for some $\delta=\delta(\varepsilon)$
that will be determined later.
\item We connect the triple junction point $P\in C_0\cup C_2$ to $T_{t}(C_1)$ with the segment
$S_t:= \overline{PQ_t}$ where $Q_t:= T_t(C_1)\cap \tau_1$.
\item Similarly repeat this for the other triple junction point $\widetilde{P}$, which we connect to
$T_{t}(C_1)$ by a segment $\widetilde{S_t} := \overline{\widetilde{P}\widetilde{Q}_t}$, where $\widetilde{Q}_t$
denotes the reflection
of $Q_t$ with respect to $\overline{O_1O_2}$.
\end{itemize}
The competitor will be the region inside
\[ B_{t}:=C_0\cup C_2 \cup \wideparen{Q_t\widetilde{Q}_t} \cup S_t \cup \widetilde{S}_t.\]
Let $\theta_t:=\angle P O_1 Q_t$. Note that the triangle
$\triangle P O_1 Q_t$ satisfies
\[|O_1-Q_t|=r_1t,\quad |O_1-P|=r_1, \quad \cos \theta_t = \frac{|O_1-P|}{|O_1-Q_t|}=\frac{1}{t},\quad \mathcal{H}^1(S_t)= r_1\tan \theta_t.\]
Choose $t=1+\delta$, with $0<\delta\ll1$, and note that
\[\cos \theta_t = 1-\frac{\theta_t^2 }{2} +O(\theta_t^4) = \frac{1}{1+\delta} = 1-\delta+o(\delta),\]
hence $\theta_t = \sqrt{2\delta} +o(\sqrt{\delta})$.
The piece of arc of $C_1$
inside $\triangle P O_1 Q_t$ has length $r_1 \theta_t$. Thus
\[|\mathcal{H}^1(S_t)-\mathcal{H}^1(C_1\cap \triangle P O_1 Q_t)| = r_1(\tan \theta_t-\theta_t)
=r_1\Big( \frac{\theta_t^3}3+O(\theta_t^5)\Big)=
O(\delta\sqrt{\delta}),\]
and similarly
\[|\mathcal{H}^1(\widetilde{S}_t)-\mathcal{H}^1(C_1\cap \triangle \widetilde{P } O_1 \widetilde{Q}_t)| = O(\delta\sqrt{\delta}).\]
Thus, the difference in perimeter is
\begin{eqnarray*}
&& \mathcal{H}^1(\partial B_{t})-\mathcal{H}^1( \partial B)\\
& = & \left [\mathcal{H}^1\left(\wideparen{Q_t\widetilde{Q}_t} \right) +
\mathcal{H}^1(S_t)+\mathcal{H}^1(\widetilde{S}_t ) + \mathcal{H}^1(C_0)+\mathcal{H}^1(C_2) \right ] -\left [ \mathcal{H}^1(C_1) + \mathcal{H}^1(C_0)+\mathcal{H}^1(C_2) \right ]\\
&=& 2 r_1 (1+\delta)(\theta_1 - \theta_t) - 2 r_1 (\theta_1 - \theta_t) + O(\delta\sqrt{\delta}) \\
&=& 2\theta_1r_1\delta+ O(\delta\sqrt{\delta}).
\end{eqnarray*}
Now we need to estimate the difference in area:
\begin{eqnarray*}
\mathcal{H}^2(B_t) - \mathcal{H}^2(B)
&=& (\theta_1-\theta_t ) r_1^2[(1+\delta)^2 -1] + 2 \left [ \mathcal{H}^2(\triangle P O_1 Q_t)-\frac{\theta_t r_1^2}2 \right ]\\
&=& 2\theta_1 r_1^2 \delta +O(\delta\sqrt{\delta}) + r_1^2 (\tan \theta_t -\theta_t ) \\
&=& 2\theta_1 r_1^2 \delta +O(\delta\sqrt{\delta}).
\end{eqnarray*}
Thus the difference in area between the competitor $B_t$ and the original double bubble
$B$ is
\[2\theta_1 r_1^2 \delta +O(\delta\sqrt{\delta}),\]
which has to be equal to $\varepsilon$. Thus $\delta=\varepsilon/(2\theta_1 r_1^2) +o(\varepsilon)$, and
\[
\lim_{\varepsilon\to 0^+}\frac{p(m_1+\varepsilon,m_2)-p(m_1,m_2)}{\varepsilon}
\le\lim_{\varepsilon\to 0^+}\frac{\mathcal{H}^1(\partial B_t)-p(m_1,m_2)}{\varepsilon}
=\lim_{\varepsilon\to 0^+}\frac{2\theta_1r_1\delta+ O(\delta\sqrt{\delta})}{\varepsilon}
=\frac1{r_1}.
\]
\begin{figure}[!htb]
\centering
\includegraphics[width=10cm]{con2t2.eps}
\caption{Construction for the lower bound of $p(m_1-\varepsilon,m_2)$.}
\label{con2}
\end{figure}
{\it Lower bound.}
The construction for the lower bound is very similar. Instead of enlarging $C_1$, we now have to shrink it; see Figure \ref{con2}.
\begin{itemize}
\item First, we shrink $C_1$, replacing it with $T_t(C_1)$ with $t=1-\delta$, for some $\delta=\delta(\varepsilon)$
that will be determined later.
\item
Let $\theta_t$ be the unique angle such that the segment
$S_t:= \overline{PQ_t}$ is tangent to $T_t(C_1)$ at $Q_t$.
\item Similarly repeat this for the other triple junction point $\widetilde{P}$, which we connect to
$T_{t}(C_1)$ by a segment $\widetilde{S_t} := \overline{\widetilde{P}\widetilde{Q}_t}$, where $\widetilde{Q}_t$
denotes the reflection
of $Q_t$ with respect to $\overline{O_1O_2}$.
\end{itemize}
Let the competitor be the region inside
\[ B_{t}:=C_0\cup C_2 \cup \wideparen{Q_t\widetilde{Q}_t} \cup S_t \cup \widetilde{S}_t.\]
Note that our geometric construction gives
\[|O_1-Q_t|=r_1t,\quad \mathcal{H}^1(S_t)=r_1\sin \theta_t, \quad \cos\theta_t = \frac{|O_1-Q_t|}{|O_1-P|}=t, \quad \text{where } \theta_t:=\angle PO_1 Q_t.\]
Choosing $t=1-\delta$ gives again $\theta_t= \sqrt{2\delta}+o(\sqrt{\delta})$.
The difference in perimeter is thus
\begin{eqnarray*}
&&\mathcal{H}^1( \partial B) - \mathcal{H}^1(\partial B_{t})\\
& = & \left [ \mathcal{H}^1(C_1) + \mathcal{H}^1(C_0)+\mathcal{H}^1(C_2) \right ] - \left [\mathcal{H}^1\left(\wideparen{Q_t\widetilde{Q}_t} \right) +
\mathcal{H}^1(S_t)+\mathcal{H}^1(\widetilde{S}_t ) + \mathcal{H}^1(C_0)+\mathcal{H}^1(C_2) \right ] \\
&=& 2 \theta_1 r_1 -2 (\theta_1-\theta_t)r_1(1-\delta) - 2r_1 \sin \theta_t + O(\delta\sqrt{\delta}) \\
&=& 2\theta_1r_1\delta+ O(\delta\sqrt{\delta}).
\end{eqnarray*}
And the difference in area is:
\begin{eqnarray*}
\mathcal{H}^2(B) - \mathcal{H}^2(B_t)
&=& (\theta_1-\theta_t ) r_1^2[1-(1-\delta)^2 ] + 2 \left [ \frac{\theta_t r_1^2}2 - \mathcal{H}^2(\triangle P O_1 Q_t) \right ]\\
&=& 2\theta_1 r_1^2 \delta +O(\delta\sqrt{\delta}) + r_1^2 (\theta_t - \sin \theta_t \cos \theta_t) \\
&=& 2\theta_1 r_1^2 \delta +O(\delta\sqrt{\delta}).
\end{eqnarray*}
Since we need the area difference to be $\varepsilon$, we get $2\theta_1 r_1^2 \delta +O(\delta\sqrt{\delta})=\varepsilon$.
Thus $\delta=\varepsilon/(2\theta_1 r_1^2) +o(\varepsilon)$, and
\[
\lim_{\varepsilon\to 0^+}\frac{p(m_1,m_2)-p(m_1-\varepsilon,m_2)}{\varepsilon}
\geq \lim_{\varepsilon\to 0^+}\frac{p(m_1,m_2) - \mathcal{H}^1(\partial B_t)}{\varepsilon}
=\lim_{\varepsilon\to 0^+}\frac{2\theta_1r_1\delta+ O(\delta\sqrt{\delta})}{\varepsilon}
=\frac1{r_1},
\]
concluding the proof.
\end{proof}
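The Taylor expansions used in the proof above can be sanity-checked numerically. The following Python snippet (a quick illustrative check, not part of the proof) recovers $\theta_t=\sqrt{2\delta}+o(\sqrt{\delta})$ from $\cos\theta_t=1/(1+\delta)$ and confirms that $\tan\theta_t-\theta_t=O(\delta\sqrt{\delta})$.

```python
import math

def junction_angle(delta):
    # cos(theta_t) = 1/t with t = 1 + delta, as in the upper-bound construction
    return math.acos(1.0 / (1.0 + delta))

delta = 1e-4
theta = junction_angle(delta)
# theta / sqrt(2*delta) should approach 1 as delta -> 0
ratio_sqrt = theta / math.sqrt(2.0 * delta)
# (tan(theta) - theta) / delta^{3/2} should stay bounded (it tends to 2*sqrt(2)/3)
ratio_cubic = (math.tan(theta) - theta) / delta ** 1.5
```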
\begin{lemma}\label{all but one}
Consider an arbitrary minimizing configuration $\mathcal{B}$ of \eqref{mine0} containing at least two double bubbles, denoted by $B_k$, $k=1,2,\dots$.
Let $m_1^k$ and $m_2^k$ denote the masses of the two lobes of $B_k$.
Then
the pure second derivatives satisfy
\[\frac{\partial^2 e_0(m_1^k,m_2^k)}{\partial (m_1^k)^2},\ \frac{\partial^2 e_0 (m_1^h,m_2^h)}{\partial (m_2^h)^2} \geq 0 \]
for all except at most one such index $k$ (resp.\ $h$).
\end{lemma}
\begin{proof}
For notational convenience in the proof we denote $x_k:=m_1^k$, $y_k:=m_2^k$. Consider two (arbitrary) different double bubbles $B_k$, $B_h$. Then
\begin{align*}
e_0 (x_k+\varepsilon,y_k)-e_0 (x_k,y_k) & = \varepsilon \frac{\partial e_0 (x_k,y_k)}{\partial x_k} +\frac{\varepsilon^2}{2} \frac{\partial^2 e_0 (x_k,y_k)}{\partial x_k^2} +O(\varepsilon^3),\\
e_0 (x_h-\varepsilon,y_h)-e_0 (x_h,y_h) & = -\varepsilon \frac{\partial e_0 (x_h,y_h)}{\partial x_h} +\frac{\varepsilon^2}{2} \frac{\partial^2 e_0 (x_h,y_h)}{\partial x_h^2}+O(\varepsilon^3),
\end{align*}
hence the minimality of $\mathcal{B}$ gives the necessary condition
\begin{align*}
0&\le e_0 (x_k+\varepsilon,y_k)+e_0 (x_h-\varepsilon,y_h)+ \sum_{j\geq 1,\ j\neq k,h}e_0 (x_j,y_j) -\sum_{j\geq 1}e_0 (x_j,y_j) \\
&=\varepsilon \bigg(\frac{\partial e_0 (x_k,y_k)}{\partial x_k}- \frac{\partial e_0 (x_h,y_h)}{\partial x_h} \bigg)+\frac{\varepsilon^2}{2}\bigg( \frac{\partial^2 e_0 (x_k,y_k)}{\partial x_k^2}+\frac{\partial^2 e_0 (x_h,y_h)}{\partial x_h^2}\bigg) +O(\varepsilon^3),
\end{align*}
hence, by the arbitrariness of $\varepsilon$,
\[ \frac{\partial e_0^{} (x_k,y_k)}{\partial x_k}=\frac{\partial e_0^{} (x_h,y_h)}{\partial x_h}, \quad \frac{\partial^2 e_0^{} (x_k,y_k)}{\partial x_k^2}+ \frac{\partial^2 e_0^{} (x_h,y_h)}{\partial x_h^2}
\geq 0, \qquad \forall k\neq h.\]
The proof for the pure second derivative in $y_k$ is completely analogous.
\end{proof}
\begin{lemma}\label{flex}
Given $\Gamma_{ii}$, there exist constants $m_i^*$,
such that
\[ \frac{\partial^2 e_0 (m_1,m_2)}{\partial m_i^2} <0 \qquad \text{for all } m_i< m_i^*,\ i=1,2,\]
where $m_i^*$ only depends on $\Gamma_{ii}$.
\end{lemma}
In particular, it is quite important for our constructions that the $m_i^*$ depend neither on
$\Gamma_{12}$ nor on the total masses $M_i$, $i=1,2$.
\begin{proof}
We prove the result for $i=1$. The case $i=2$ is completely analogous.
By Lemma \ref{c' sharp}, we have
\[\frac{\partial e_0 (m_1,m_2)}{\partial m_1} = \frac{\Gamma_{11}m_1+\Gamma_{12}m_2 }{2\pi} +\frac{1}{r_1},\qquad
\frac{\partial^2 e_0 (m_1,m_2)}{\partial m_1^2} = \frac{\Gamma_{11} }{2\pi} +\frac{\partial}{\partial m_1}\frac{1}{r_1},\]
where $r_1 = r_1 (m_1, m_2)$.
So we need to show that there exists a threshold $m_1^*$ such that, for any $m_1<m_1^*$,
\[\frac{\partial}{\partial m_1}\frac{1}{r_1}<-\frac{\Gamma_{11}}{2\pi}.\]
Thus it suffices to show that
\begin{equation}
\lim_{m_1\rightarrow 0} \frac{\partial}{\partial m_1}\frac{1}{r_1} =-\infty.
\label{liminf}
\end{equation}
\begin{figure}[!htb]
\centering
\includegraphics[width=8cm]{dbs3.eps}
\caption{ An asymmetric double bubble with radii $r_i$ and half-angles $\theta_i$, $i=0,1,2$.}
\label{dbs2}
\end{figure}
For an asymmetric double bubble bounded by three circular arcs of radii $r_1, r_2$ and $r_0$ with $m_1 < m_2$,
notice that $r_1, r_2, r_0$ and $\theta_1, \theta_2, \theta_0$, the half-angles associated with the three arcs, depend on $m_1$ and $m_2$ implicitly through the equations \cite{isenberg}
\begin{eqnarray}
\label{eq1}
m_1 &=& r_1^2 (\theta_1 - \cos \theta_1 \sin \theta_1) + r_0^2 (\theta_0 - \cos \theta_0 \sin \theta_0), \\
\label{eq2}
m_2 &=& r_2^2 (\theta_2 - \cos \theta_2 \sin \theta_2) - r_0^2 (\theta_0 - \cos \theta_0 \sin \theta_0), \\
\label{e3}
h &=& r_0 \sin \theta_0= r_1 \sin \theta_1 = r_2 \sin \theta_2, \\
\label{e5}
(r_0)^{-1} &=&(r_1)^{-1} - (r_2)^{-1}, \\
\label{e6}
0 &=& \cos \theta_1 + \cos \theta_2 + \cos \theta_0,
\end{eqnarray}
where $r_0$ is the radius of the common boundary of the two lobes of the double bubble;
$\theta_0$ is half of the angle associated with the middle arc;
and $h$ is half of the distance between two triple junction points; see Figure \ref{dbs2}.
From \eqref{e3} and \eqref{e5}, we have
\begin{eqnarray} \label{e7}
\sin \thetaeta_1 - \sin \thetaeta_2 - \sin \thetaeta_0 = 0.
\end{eqnarray}
Combining \eqref{e6} with \eqref{e7}, we get
\begin{eqnarray}
\cos(\theta_1+\theta_0) = - \frac{1}{2}, \text{ and } \cos(\theta_2-\theta_0) = - \frac{1}{2}. \nonumber
\end{eqnarray}
That is,
\begin{eqnarray} \label{e8}
\theta_1 = \frac{2\pi}{3} - \theta_0, \text{ and } \; \theta_2 = \frac{2\pi}{3} + \theta_0.
\end{eqnarray}
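The relations \eqref{e8} can be checked independently of the derivation above: substituting $\theta_1 = 2\pi/3-\theta_0$ and $\theta_2=2\pi/3+\theta_0$ back into \eqref{e6} and \eqref{e7} yields identities in $\theta_0$. The short script below (a numerical sanity check, not part of the argument) verifies this for a few sample half-angles:

```python
import math

def check(theta0):
    """Verify that theta1 = 2*pi/3 - theta0 and theta2 = 2*pi/3 + theta0
    satisfy the angle relations (e6) and (e7):
        cos(theta1) + cos(theta2) + cos(theta0) = 0,
        sin(theta1) - sin(theta2) - sin(theta0) = 0."""
    t1 = 2 * math.pi / 3 - theta0
    t2 = 2 * math.pi / 3 + theta0
    c = math.cos(t1) + math.cos(t2) + math.cos(theta0)
    s = math.sin(t1) - math.sin(t2) - math.sin(theta0)
    return c, s

# sample half-angles in the physical range (0, pi/3)
for theta0 in (0.1, 0.3, math.pi / 6):
    c, s = check(theta0)
    assert abs(c) < 1e-12 and abs(s) < 1e-12
```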
We are interested in the case $m_1 \to 0$. This implies immediately
$h\to 0$, and $r_2\to \sqrt{m_2/\pi}$. Thus $\theta_2\to \pi$, hence $\theta_0,\theta_1\to \pi/3$.
Let $\theta_0=\pi/3-\varepsilon$, $\theta_1=\pi/3+\varepsilon$, $\theta_2 = \pi - \varepsilon$. Thus
from \eqref{e3} we get
\begin{equation*}
h=r_2\sin( \pi -\varepsilon) =r_1\sin (\pi/3+\varepsilon) =r_0\sin (\pi/3-\varepsilon).
\end{equation*}
Thus,
\begin{equation} \label{h and r}
r_1=r_2\frac{\sin \varepsilon}{\sin (\pi/3+\varepsilon)},\ r_0=r_2\frac{\sin \varepsilon}{\sin (\pi/3-\varepsilon)},
\end{equation}
and \eqref{eq1}, \eqref{eq2} now read
\begin{align}
m_2&= r_2^2 \bigg[\pi- \varepsilon + \frac12 \sin(2\varepsilon)
- \frac{\sin^2 \varepsilon}{\sin^2 (\pi/3-\varepsilon)}\bigg( \frac{\pi}3 -\varepsilon - \frac12\sin \Big(\frac{2\pi}3 -2\varepsilon\Big)\bigg)\bigg]
\label{m2 vep}\\
m_1 &= r_2^2 \left [\frac{\sin^2 \varepsilon}{\sin^2 (\pi/3+\varepsilon)}\bigg( \frac{\pi}3 +\varepsilon - \frac12\sin \Big(\frac{2\pi}3 +2\varepsilon\Big)\bigg)
+ \frac{\sin^2 \varepsilon}{\sin^2 (\pi/3-\varepsilon)}\bigg( \frac{\pi}3 -\varepsilon - \frac12\sin \Big(\frac{2\pi}3 -2\varepsilon\Big)\bigg) \right ]
\notag
\\
&=m_2\frac{\frac{\sin^2 \varepsilon}{\sin^2 (\pi/3+\varepsilon)}( \frac{\pi}3 +\varepsilon - \frac12\sin (\frac{2\pi}3
+2\varepsilon))
+\frac{\sin^2 \varepsilon}{\sin^2 (\pi/3-\varepsilon)}( \frac{\pi}3 -\varepsilon - \frac12\sin (\frac{2\pi}3 -2\varepsilon))
}{\pi- \varepsilon + \frac12 \sin(2\varepsilon)
- \frac{\sin^2 \varepsilon}{\sin^2 (\pi/3-\varepsilon)}( \frac{\pi}3 -\varepsilon - \frac12\sin (\frac{2\pi}3 -2\varepsilon))}.
\notag
\end{align}
Let
\begin{align*}
D(\varepsilon)&:= \pi- \varepsilon + \frac12 \sin(2\varepsilon)
- \frac{\sin^2 \varepsilon}{\sin^2 (\pi/3-\varepsilon)} \Big( \frac{\pi}3 -\varepsilon - \frac12\sin\Big(\frac{2\pi}3 -2\varepsilon\Big)\Big)
=\pi+O(\varepsilon^2),\\
N(\varepsilon)&:=\frac{\sin^2 \varepsilon}{\sin^2 (\pi/3+\varepsilon)}\Big( \frac{\pi}3 +\varepsilon - \frac12\sin \Big(\frac{2\pi}3
+2\varepsilon\Big)\Big)
+\frac{\sin^2 \varepsilon}{\sin^2 (\pi/3-\varepsilon)}\Big( \frac{\pi}3 -\varepsilon - \frac12\sin \Big(\frac{2\pi}3 -2\varepsilon\Big)\Big)
=O(\varepsilon^2),
\end{align*}
so we have $m_1=\frac{N(\varepsilon)}{D(\varepsilon)}m_2$, and by direct computation,
\begin{align*}
N'(\varepsilon) &= \frac{\sin 2\varepsilon}{\sin^2 (\pi/3+\varepsilon)}\Big( \frac{\pi}3 +\varepsilon - \frac12\sin \Big(\frac{2\pi}3
+2\varepsilon\Big)\Big)
+\frac{\sin 2\varepsilon}{\sin^2 (\pi/3-\varepsilon)}\Big( \frac{\pi}3 -\varepsilon - \frac12\sin \Big(\frac{2\pi}3 -2\varepsilon\Big)\Big)\\
&-\frac{2\sin^2 \varepsilon\cos(\pi/3+\varepsilon)}{\sin^3 (\pi/3+\varepsilon)}\Big( \frac{\pi}3 +\varepsilon - \frac12\sin \Big(\frac{2\pi}3
+2\varepsilon\Big)\Big)\\
&+\frac{2\sin^2 \varepsilon\cos(\pi/3-\varepsilon)}{\sin^3 (\pi/3-\varepsilon)}\Big( \frac{\pi}3 -\varepsilon - \frac12\sin \Big(\frac{2\pi}3 -2\varepsilon\Big)\Big)\\
&+\frac{\sin^2 \varepsilon}{\sin^2 (\pi/3+\varepsilon)}\Big( 1 - \cos \Big(\frac{2\pi}3 +2\varepsilon\Big)\Big)
+\frac{\sin^2 \varepsilon}{\sin^2 (\pi/3-\varepsilon)}\Big( -1 +\cos \Big(\frac{2\pi}3 -2\varepsilon\Big)\Big)\\
&=\frac{16}{3}\Big(\frac{\pi}3 -\frac{\sqrt{3}}4 \Big) \varepsilon +O(\varepsilon^2),
\end{align*}
and similarly
\begin{align*}
D'(\varepsilon) &=- 1 + \cos(2\varepsilon)
-\frac{\sin 2\varepsilon}{\sin^2 (\pi/3-\varepsilon)}\Big( \frac{\pi}3 -\varepsilon - \frac12\sin \Big(\frac{2\pi}3 -2\varepsilon\Big)\Big)\\
&-\frac{2\sin^2 \varepsilon\cos(\pi/3-\varepsilon)}{\sin^3 (\pi/3-\varepsilon)}\Big( \frac{\pi}3 -\varepsilon - \frac12\sin \Big(\frac{2\pi}3 -2\varepsilon\Big)\Big)
-\frac{\sin^2 \varepsilon}{\sin^2 (\pi/3-\varepsilon)}\Big( -1 +\cos \Big(\frac{2\pi}3 -2\varepsilon\Big)\Big)\\
&=-\frac{8}{3}\Big(\frac{\pi}3 -\frac{\sqrt{3}}4 \Big)\varepsilon +O(\varepsilon^2).
\end{align*}
Thus
\begin{align*}
\frac1{m_2}\frac{d m_1}{d \varepsilon} = \frac{N'(\varepsilon)D(\varepsilon)-D'(\varepsilon)N(\varepsilon)}{D(\varepsilon)^2}
=\frac{16}{3\pi}\Big(\frac{\pi}3 -\frac{\sqrt{3}}{4} \Big) \varepsilon+O(\varepsilon^2).
\end{align*}
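The expansions above admit a quick numerical sanity check (not part of the proof). The script below evaluates $N$, $D$, and a central-difference approximation of $\frac{1}{m_2}\frac{dm_1}{d\varepsilon}=\frac{d}{d\varepsilon}\frac{N}{D}$ at a small $\varepsilon$ and compares them with the stated leading-order terms:

```python
import math

def N(e):
    """Numerator N(eps) in the ratio m1/m2 = N(eps)/D(eps)."""
    s = math.sin
    return (s(e)**2 / s(math.pi/3 + e)**2) * (math.pi/3 + e - 0.5 * s(2*math.pi/3 + 2*e)) \
         + (s(e)**2 / s(math.pi/3 - e)**2) * (math.pi/3 - e - 0.5 * s(2*math.pi/3 - 2*e))

def D(e):
    """Denominator D(eps)."""
    s = math.sin
    return math.pi - e + 0.5 * s(2*e) \
         - (s(e)**2 / s(math.pi/3 - e)**2) * (math.pi/3 - e - 0.5 * s(2*math.pi/3 - 2*e))

c = math.pi / 3 - math.sqrt(3) / 4   # the recurring constant pi/3 - sqrt(3)/4

e = 1e-4
# D(eps) = pi + O(eps^2)
assert abs(D(e) - math.pi) < 1e-6
# N(eps) = (8/3) c eps^2 + higher order, i.e. N'(eps) ~ (16/3) c eps
assert abs(N(e) / e**2 - 8 * c / 3) < 1e-3
# d(m1/m2)/deps = (N'D - D'N)/D^2 ~ (16/(3 pi)) c eps, via central differences
h = 1e-6
ratio_prime = (N(e + h) / D(e + h) - N(e - h) / D(e - h)) / (2 * h)
assert abs(ratio_prime - 16 * c / (3 * math.pi) * e) < 1e-6
```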
Now we compute the derivative $\frac{\partial r_1}{\partial \varepsilon}$. From \eqref{h and r} and \eqref{m2 vep}
we get
\[ r_1^2=r_2^2\frac{\sin^2 \varepsilon}{\sin^2 (\pi/3+\varepsilon) } =\frac{\sin^2 \varepsilon}{\sin^2 (\pi/3+\varepsilon) D(\varepsilon)}m_2,\]
hence
\begin{align*}
\frac1{m_2}\frac{\partial r_1^2}{\partial\varepsilon}=\frac{\sin 2\varepsilon}{\sin^2 (\pi/3+\varepsilon) D(\varepsilon)}
- \frac{2\sin^2 \varepsilon \cos(\pi/3+\varepsilon)}{\sin^3 (\pi/3+\varepsilon) D(\varepsilon)}
-\frac{\sin^2 \varepsilon D'(\varepsilon)}{\sin^2 (\pi/3+ \varepsilon) D^2(\varepsilon)}
= \frac8{3\pi}\varepsilon +O(\varepsilon^2),
\end{align*}
which then gives
\begin{align*}
\frac{\partial r_1}{\partial m_1}&=\frac1{2r_1} \frac{ \frac{\partial r_1^2}{\partial\varepsilon} }{\frac{d m_1 }{d \varepsilon}}
= \frac1{2r_1} \frac{\frac8{3\pi}\varepsilon +O(\varepsilon^2)}{\frac{16}{3\pi}\Big(\frac{\pi}3 -\frac{\sqrt{3}}{4} \Big) \varepsilon+O(\varepsilon^2)}
\ge \frac{C}{r_1}>0,
\end{align*}
for all sufficiently small $\varepsilon<\varepsilon_0$, with $C$ and $\varepsilon_0$ universal constants independent of $\Gamma_{ij}$ and $M_i$, $i,j=1,2$.
Using the fact that in a double bubble we have $\theta_1\in (\pi/3,2\pi/3)$, we get
\[ \frac{\pi}{3}r_1^2\le m_1 \le \pi r_1^2,\]
and hence there exists another constant $C'>0$ such that
\[\frac{\partial }{\partial m_1 } \frac{1}{r_1} =-\frac1{r_1^2}\frac{\partial r_1}{\partial m_1}
\le - \frac{C'}{m_1^{3/2}} \]
as $m_1\to 0$, and \eqref{liminf} is proven. Then note
\[\frac{\partial^2 e_0^{} (m_1,m_2) }{\partial m_1^2}= \frac{\Gamma_{11}}{2\pi}+ \frac{\partial }{\partial m_1 } \frac{1}{r_1}
\le \frac{\Gamma_{11}}{2\pi} - \frac{C'}{m_1^{3/2}},\]
and the proof is complete.
\end{proof}
The next result shows that in a minimizing configuration of \eqref{mine0}, no single bubble, or lobe of a double bubble, can be too large.
\begin{lemma}\label{max mass}
Let $\mathcal{B}$ be a minimizing configuration of \eqref{mine0}. Then
there exists no single bubble, nor lobe of a double bubble, of the $i$-th constituent having mass greater than
\[M_i^*:= \frac{8 \pi }{\Gamma_{ii}^{2/3}},\qquad i=1,2.\]
\end{lemma}
\begin{proof}
Assume there exists a single bubble of type I material with mass $m$. Replacing it with two single bubbles of mass $m/2$ changes the energy by
\[ \Delta = 2\Big[\frac{\Gamma_{11} m^2}{16\pi}+\sqrt{2\pi m} \Big] -\Big[\frac{\Gamma_{11} m^2}{4\pi}+2\sqrt{\pi m} \Big]
=-\frac{\Gamma_{11} m^2}{8\pi}+2\sqrt{\pi m}(\sqrt{2}-1). \]
The minimality of $\mathcal{B}$ requires $\Delta \ge 0$, which is possible only if
\[ m^{3/2}\le \frac{16 \pi \sqrt{\pi}(\sqrt{2}-1)}{\Gamma_{11}} . \]
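The bound in the last display is exactly the zero of $\Delta$. As a sanity check outside the proof, the following script (with the illustrative choice $\Gamma_{11}=1$) confirms that $\Delta$ changes sign at this threshold:

```python
import math

G11 = 1.0  # illustrative value of Gamma_11 (any positive value works)

def delta(m):
    """Energy change when a single bubble of mass m is split in two of mass m/2."""
    return -G11 * m**2 / (8 * math.pi) + 2 * math.sqrt(math.pi * m) * (math.sqrt(2) - 1)

# threshold m^{3/2} = 16 pi sqrt(pi) (sqrt(2)-1) / Gamma_11 from the display above
m_star = (16 * math.pi * math.sqrt(math.pi) * (math.sqrt(2) - 1) / G11) ** (2 / 3)

assert delta(0.9 * m_star) > 0    # below the threshold, splitting raises the energy
assert delta(1.1 * m_star) < 0    # above it, splitting lowers the energy
assert abs(delta(m_star)) < 1e-9  # the threshold is the zero of delta
```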
Now assume there exists a lobe of a double bubble of type I material with mass $m$.
Denote by $m_2$ the mass of the other lobe (of type II material).
Removing such a lobe and replacing it with two single bubbles of mass $m/2$ changes the energy by
\begin{align*}
\Delta & = 2\Big[\frac{\Gamma_{11} m^2}{16\pi}+ \sqrt{2\pi m} \Big] +
\frac{\Gamma_{22} m_2^2}{4\pi}+2\sqrt{\pi m_2}
-\Big[\frac{\Gamma_{11} m^2 +2 \Gamma_{12} m m_2+\Gamma_{22} m_2^2}{4\pi}+p(m,m_2) \Big]\\
&\le-\frac{\Gamma_{11} m^2}{8\pi}+2 \sqrt{2\pi m},
\end{align*}
where we used the fact that $2\sqrt{\pi m_2}=p(0,m_2)\le p(m,m_2)$, which is a direct consequence of Lemma \ref{c' sharp}.
The minimality of $\mathcal{B}$ requires $\Delta \ge 0$, which is possible only if
\[ m^{3/2}\le \frac{16 \pi \sqrt{2\pi}}{\Gamma_{11}} . \]
The proof for type II material is completely analogous.
\end{proof}
\begin{lemma}\label{single_lower}
There exist constants $\overline{m_i}^s>0$, $i=1,2$, depending on $\Gamma_{ii}$ only, such that at most one single bubble of $i$-th constituent in a minimizing configuration has mass $m^k_i <\overline{m_i}^s$.
\end{lemma}
\begin{proof}
Assume $i=1$; the case $i=2$ is the same.
Assume that for some $M$ there is a minimizing configuration with (at least) two single bubbles of type I, whose masses are $m^k_1=x\le y=m^\ell_1$. If we replace this pair by one single bubble with mass $x+y$, the change
in energy may be estimated by:
\begin{align*}
\Delta &= \frac{\Gamma_{11} (x+y)^2}{4\pi} + 2\sqrt{\pi(x+y)}- \bigg[\frac{\Gamma_{11} (x^2+y^2)}{4\pi} +2\sqrt{\pi}(\sqrt{x}+\sqrt{y}) \bigg]\\
&= \frac{\Gamma_{11} xy}{2\pi} +2\sqrt{\pi}(\sqrt{x+y}-\sqrt{x}-\sqrt{y}).
\end{align*}
By the minimality of the given configuration, we must have $\Delta \ge 0$, that is,
\[\frac{\Gamma_{11} xy}{2\pi} \ge 2\sqrt{\pi}(\sqrt{x}+\sqrt{y}-\sqrt{x+y}). \]
Thus,
\[ \frac{\Gamma_{11} }{4\pi\sqrt{\pi}} xy \ge \frac{2\sqrt{xy}}{\sqrt{x}+\sqrt{y}+\sqrt{x+y}}
\ge \frac{\sqrt{xy}}{\sqrt{x}+\sqrt{y}} \overset{(x\le y)}\ge \frac{\sqrt{x}}{2}.\]
Since Lemma \ref{max mass} gives $x,y\le M_1^*$ (which depends on $\Gamma_{11}$ only), it follows
\[\frac{\Gamma_{11} }{2\pi\sqrt{\pi}} \sqrt{x} M_1^*\ge \frac{\Gamma_{11} }{2\pi\sqrt{\pi}} \sqrt{x} y \ge 1,\]
thus there exists a constant $\overline{m_1}^{s} := 4 \pi^3 / ( \Gamma_{11} M_1^{\ast} )^2$ such that $y\ge x\ge \overline{m_1}^s$.
\end{proof}
We are now ready to prove Theorem~\ref{finiteness} on the finiteness of minimizing configurations.
\begin{proof}[\bf{Proof of Theorem~\ref{finiteness}}]
Combining Lemmas \ref{all but one} and \ref{flex}, we conclude that for any minimizing configuration for $\overline{e_0}(M)$ there exists at most one double bubble
whose lobe of the $i$-th constituent has mass less than $m_i^*$, $i=1,2$. Thus there exist at most
\[2+\min\bigg\{\frac{M_1}{m_1^*},\frac{M_2}{m_2^*}\bigg\}\]
double bubbles.
A similar argument based on Lemma \ref{single_lower} yields the same conclusion for single bubbles in a minimizing configuration.
\end{proof}
\begin{proof}[\bf{Proof of Theorem \ref{properties} (a)}]
When $\Gamma_{12}= 0$,
\eqref{e0m} becomes
\begin{eqnarray} \label{d1}
e_0(m) = p(m_1, m_2) + \frac{\Gamma_{11} (m_1)^2 }{4\pi} + \frac{\Gamma_{22} (m_2)^2 }{4\pi}.
\end{eqnarray}
If there were two single bubbles with different constituent types, \eqref{e0s} would imply that
\begin{eqnarray} \label{s2}
e_0(m_1, 0) + e_0(0, m_2) = 2\sqrt{\pi m_1} + 2\sqrt{\pi m_2} + \frac{\Gamma_{11} (m_1)^2 }{4\pi} + \frac{\Gamma_{22} (m_2)^2 }{4\pi}.
\end{eqnarray}
Comparing \eqref{d1} with \eqref{s2}: since $p(m_1, m_2) \leq 2\sqrt{\pi m_1} + 2\sqrt{\pi m_2} $ by \cite{FABHZ}, the two single bubbles of different types are more costly than a double bubble with the same masses.
Therefore, all single bubbles must be of the same type of constituent.
Case 1: If all single bubbles (if any) are of type II constituent,
choose $M_1= K_1 M_1^*$, where $M_i^*$ are defined in Lemma~\ref{max mass}. By Lemma \ref{all but one} and Lemma \ref{flex}, there is at most one double bubble whose lobe of type I constituent has mass less than $m_1^*$.
Combined with Lemma \ref{max mass}, for the other double bubbles, their lobes of type I constituent must have mass between $m_1^*$ and $M_1^*$. Therefore, there are at least $K_1$ double bubbles.
Let $K_d$ be the total number of double bubbles. Clearly $K_d < 1 + M_1/m_1^*$. Again by Lemmas \ref{all but one}, \ref{flex} and \ref{max mass}
there is at most one double bubble whose lobe of type II constituent
has mass less than $m_2^*$ and for the other double bubbles, their lobes of type II constituent must have mass between $m_2^*$ and $M_2^*$.
Choose
\[M_2 \ge (1+ M_1 /m_1^*)M_2^* + K_2 \; M_2^* = (1+(K_1 M_1^*)/m_1^*)M_2^* + K_2 \; M_2^*. \]
The type II constituent used by all double bubbles is at most $K_d M_2^{\ast}$; all the remaining type II constituent must go into single bubbles, each of mass at most $M_2^*$.
Therefore, there are at least $K_2$ single bubbles.
Case 2: If all single bubbles (if any) are of type I constituent, via similar arguments, choose $M_2= K_2 M_2^*$ and $M_1 \ge (1+(K_2 M_2^*)/m_2^*)M_1^* + K_1 \; M_1^*$.
Finally, choose
\begin{eqnarray*}
\overline{M}_1 \ge \max \left \{ K_1 M_1^*, \left(1+ \frac{K_2 M_2^*}{m_2^*} \right)M_1^* + K_1 \; M_1^* \right \} = \left(1+ \frac{K_2 M_2^*}{m_2^*} \right)M_1^* + K_1 \; M_1^* , \\
\overline{M}_2 \ge \max \left \{ K_2 M_2^*, \left (1+ \frac{K_1 M_1^*}{m_1^*} \right )M_2^* + K_2 \; M_2^* \right \} = \left (1+ \frac{K_1 M_1^*}{m_1^*} \right )M_2^* + K_2 \; M_2^*.
\end{eqnarray*}
Then for all masses $M_1\ge\overline{M}_1$ for the first component and $M_2\ge\overline{M}_2$ for the second, minimizing configurations of \eqref{mine0} have at least $K_1$ double bubbles and $K_2$ single bubbles.
\end{proof}
\begin{lemma} \label{atMostTwo}
Given $\Gamma_{11}>0$, $\Gamma_{22}>0$, $M_1>0$, $M_2>0$, and
\[\Gamma_{12}> \frac{4\pi\sqrt{\pi}(\sqrt{M_1^*}+\sqrt{M_2^*}) }{m_1^* m_2^*},\]
then any minimizing configuration of \eqref{mine0}
has at most two double bubbles.
\end{lemma}
\begin{proof}
Consider an arbitrary minimizing configuration $\mathcal{B}$ of \eqref{mine0}.
By Lemma \ref{flex}, there exists at most one double bubble with lobe of $i$-th constituent having mass less than $m_i^*$, $i=1,2$.
Thus any remaining
double bubble $B_k = (m_1^k, m_2^k )$ satisfies $m_1^k\ge m_1^*$ and $m_2^k\ge m_2^*$. Splitting such a double bubble
into two single bubbles changes the energy by a quantity
\begin{align*}
\Delta &= \sum_{i=1}^2 \frac{\Gamma_{ii}(m_i^k)^2}{4\pi} +2\sqrt{\pi}(\sqrt{m_1^k}+\sqrt{m_2^k}) -
\bigg[ \frac{\Gamma_{12} m_1^k m_2^k}{2\pi}+ \sum_{i=1}^2 \frac{\Gamma_{ii}(m_i^k)^2}{4\pi}+p(m_1^k, m_2^k) \bigg]\\
&\le 2\sqrt{\pi}(\sqrt{m_1^k}+\sqrt{m_2^k}) -\frac{\Gamma_{12} m_1^k m_2^k}{2\pi}.
\end{align*}
Using the minimality of $\mathcal{B}$ and Lemma \ref{max mass}, we need
\begin{equation}
0\le \Delta \le 2\sqrt{\pi}(\sqrt{m_1^k}+\sqrt{m_2^k}) -\frac{\Gamma_{12} m_1^k m_2^k}{2\pi}\le 2\sqrt{\pi}(\sqrt{M_1^*}+\sqrt{M_2^*}) -\frac{\Gamma_{12} m_1^* m_2^*}{2\pi},
\label{min gamma}
\end{equation}
and recalling that $M_i^*$ and $m_i^*$ depend only on $\Gamma_{ii}$, $i=1,2$, \eqref{min gamma} can hold only when
\[\Gamma_{12}\le \frac{4\pi\sqrt{\pi}(\sqrt{M_1^*}+\sqrt{M_2^*}) }{m_1^* m_2^*}.\]
Thus for our choice of $\Gamma_{12}$, no double bubble with masses $m_1^k\ge m_1^*$ and $m_2^k\ge m_2^*$
can exist, since splitting it into two single bubbles decreases the energy.
\end{proof}
\begin{proof}[\bf{Proof of Theorem \ref{properties} (b)}]
Denote the minimizing configuration of \eqref{mine0} by $\mathcal{B}$.
\begin{itemize}
\item Claim: there exist constants $\overline{m_i}^d >0$, depending on $\Gamma_{ii}$ only, such that any lobe of the $i$-th constituent of a double bubble in $\mathcal{B}$ has mass at least $\overline{m_i}^d$.
\end{itemize}
Assume there exists a double bubble $D = (x, m_2)$. Condition $M_1>4M_1^*$, combined with Lemma \ref{atMostTwo},
gives that there are at least two single bubbles
of type I constituent. By Lemma~\ref{single_lower} there is a single bubble $S$ of type I constituent with mass $m\ge \overline{m_1}^s$. Removing mass $\varepsilon$ from the lobe of type I constituent of $D$, and adding it to $S$
changes the energy by a quantity
\begin{eqnarray*}
\Delta
&=& \left [ \frac{\Gamma_{11} (x -\varepsilon)^2}{4 \pi} + \frac{\Gamma_{12} (x - \varepsilon) m_2}{ 2 \pi} + \frac{\Gamma_{22} m_2^2}{4\pi} + p(x-\varepsilon, m_2) + \frac{\Gamma_{11} (m+\varepsilon)^2}{4 \pi} + 2\sqrt{\pi (m+\varepsilon)} \right ] \\
&& - \left [\frac{\Gamma_{11} x^2 }{4 \pi} + \frac{\Gamma_{12} x m_2}{ 2 \pi} + \frac{\Gamma_{22} m_2^2}{4\pi} + p(x, m_2) + \frac{\Gamma_{11} m^2}{4 \pi} + 2\sqrt{\pi m } \right ] \\
&=& - \frac{ \Gamma_{11} x \varepsilon}{ 2 \pi} - \frac{ \Gamma_{12} m_2 \varepsilon }{ 2 \pi} + p(x-\varepsilon, m_2) - p(x,m_2) + \frac{\Gamma_{11} m \varepsilon}{2 \pi} + 2\sqrt{\pi (m+\varepsilon)} - 2\sqrt{\pi m } \\
& =&\bigg(\frac{\Gamma_{11}m}{2\pi}+\sqrt{\frac{\pi}{m}}-\frac{\Gamma_{11} x +\Gamma_{12}m_2}{2\pi}-\frac{1}{r_1} \bigg)\varepsilon+O(\varepsilon^2),
\end{eqnarray*}
where $r_1$ denotes the radius of the lobe of mass $x$. By
the minimality of $\mathcal{B}$, we need
\begin{equation*}
0\le\frac{\Gamma_{11}m}{2\pi}+\sqrt{\frac{\pi}{m}}-\frac{\Gamma_{11}x+\Gamma_{12}m_2}{2\pi}-\frac{1}{r_1}
\le \frac{\Gamma_{11}M_1^*}{2\pi}+\sqrt{\frac{\pi}{\overline{m_1}^s}}-\sqrt{\frac{\pi}{3 x} },
\end{equation*}
and noting that $M_1^*$ and $\overline{m_1}^s$ depend only on $\Gamma_{11}$, we get a lower bound on $x$. For any lobe of type II constituent, the proof is the same. Thus, the claim is proven.
Assume there exists a double bubble, whose lobes of $i$-th constituent
have masses $x_i$, $i=1,2$. Splitting such a double bubble into two single bubbles (with masses
$x_1$ and $x_2$) changes the energy by
\begin{align*}
\Delta &= \sum_{i=1}^2\frac{\Gamma_{ii}x_i^2}{4\pi} +2\sqrt{\pi}(\sqrt{x_1}+\sqrt{x_2})
-\bigg[\frac{\Gamma_{12}x_1x_2}{2\pi}+ \sum_{i=1}^2\frac{\Gamma_{ii}x_i^2}{4\pi}+p(x_1,x_2)\bigg]\\
&\le2\sqrt{\pi}(\sqrt{M_1^*}+\sqrt{M_2^*})-\frac{\Gamma_{12} \overline{m_1}^d \overline{m_2}^d}{2\pi},
\end{align*}
and by the minimality of $\mathcal{B}$, we need
\[0\le 2\sqrt{\pi}(\sqrt{M_1^*}+\sqrt{M_2^*})-\frac{\Gamma_{12} \overline{m_1}^d \overline{m_2}^d }{2\pi}.\]
Let
\[ \Gamma_{12}^{\ast} := \frac{4\pi\sqrt{\pi}(\sqrt{M_1^*}+\sqrt{M_2^*}) }{\overline{m_1}^d \overline{m_2}^d}. \]
Since $M_i^*$ and $\overline{m_i}^d$ depend only on $\Gamma_{ii}$, $i=1,2$, for all sufficiently
large $\Gamma_{12}$, that is, $\Gamma_{12} > \Gamma_{12}^*$,
no double bubble can exist.
\end{proof}
\begin{lemma}[Finite and uniform single bubbles] \label{finiteSin}
Given $\Gamma_{11}$, $\Gamma_{12}$, $\Gamma_{22}$, $M_1$, and $M_2$, if any minimizing configuration of \eqref{mine0}
has only single bubbles, then there are finitely many single bubbles, and all single bubbles of the same constituent have the same size.
\end{lemma}
\begin{proof} The proof closely follows that of \cite[Lemma 6.2]{bi1} for the binary case. We provide a sketch for the reader's convenience.
By Theorem~\ref{finiteness} there are only finitely many single bubbles. Assume there are $K_1$ type-I single bubbles with masses $\{ m_1^1, m_1^2, \cdots, m_1^{K_1} \}$ and there are $K_2$ type-II single bubbles with masses $\{ m_2^1, m_2^2, \cdots, m_2^{K_2} \}$.
Let $ F_i(x) = \frac{\Gamma_{ii} x^2 }{4\pi} + 2\sqrt{\pi x} $ be the contribution to $e_0$ from a single bubble of mass $x$ of the $i$-th constituent.
Note that
\begin{eqnarray}
F_i(x) = \frac{4\pi}{ (\Gamma_{ii})^{1/3}} f \left(\frac{ x}{4 \pi (\Gamma_{ii})^{-2/3}} \right ), \nonumber
\end{eqnarray}
where $f(x) = x^2 + \sqrt{x} $.
Calculations show that $F_i(x)$ is concave on $(0, \pi \Gamma_{ii}^{-2/3}] $ and convex on $[ \pi \Gamma_{ii}^{-2/3}, \infty) $.
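The concavity/convexity switch can be confirmed directly: the second derivative $F_i''(x)=\frac{\Gamma_{ii}}{2\pi}-\frac{\sqrt{\pi}}{2}x^{-3/2}$ vanishes precisely at $x=\pi \Gamma_{ii}^{-2/3}$. The script below (a numerical sanity check with an illustrative value of $\Gamma_{ii}$, not part of the proof) verifies the sign change:

```python
import math

def F(x, G):
    """Single-bubble contribution F_i(x) = G x^2 / (4 pi) + 2 sqrt(pi x)."""
    return G * x**2 / (4 * math.pi) + 2 * math.sqrt(math.pi * x)

def second_derivative(x, G, h=1e-4):
    """Central second difference of F in x."""
    return (F(x + h, G) - 2 * F(x, G) + F(x - h, G)) / h**2

G = 2.0                           # illustrative value of Gamma_ii
x_infl = math.pi * G ** (-2 / 3)  # claimed inflection point pi * Gamma_ii^{-2/3}

assert second_derivative(0.5 * x_infl, G) < 0   # concave to the left
assert second_derivative(2.0 * x_infl, G) > 0   # convex to the right
# exact second derivative G/(2 pi) - (sqrt(pi)/2) x^{-3/2} vanishes at x_infl
assert abs(G / (2 * math.pi) - (math.sqrt(math.pi) / 2) * x_infl ** (-1.5)) < 1e-12
```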
Thus, for each $i$, the following can be proved:
\begin{itemize}
\item[(1).] There is at most one $m_i^k \leq \pi \Gamma_{ii}^{-2/3} $.
\item[(2).] The set of elements $\{ m_i^k: m_i^k \ge \pi \Gamma_{ii}^{-2/3} \}$ is a singleton since $F_i(x)$ is convex on $[ \pi \Gamma_{ii}^{-2/3}, \infty) $.
\item[(3).] Any masses of the form $\{ x, \underset{K_i-1}{\underbrace{y,\cdots,y}}
\}$ with $x < \pi \Gamma_{ii}^{-2/3} \leq y$ cannot be a minimizer of $F_i$.
\end{itemize}
\end{proof}
\begin{proof}[\bf{Proof of Theorem~\ref{properties} (c)}]
Consider a minimizing configuration $\mathcal{B}$ of \eqref{mine0}.
{\em At most one double bubble}: assume the opposite, i.e.\ that there were two double
bubbles; then each lobe (of the $i$-th constituent) would have mass less than $m_i^*$, which is prohibited by Lemma \ref{flex}.
{\em At most one single bubble of each constituent}: assume there exist two single bubbles of type I constituent,
with masses $m_1^1$ and $m_1^2$, respectively. Then
\begin{eqnarray*}
&& [ e_0(m_1^1-\varepsilon,0) + e_0(m_1^2 +\varepsilon,0) ] - [ e_0^{} (m_1^1, 0 ) + e_0^{} (m_1^2, 0) ] \\
& = & \left [ \frac{\partial}{\partial m_1} e_0^{} (m_1^2, 0) - \frac{\partial}{ \partial m_1}
e_0^{} (m_1^1, 0) \right ] \varepsilon +\frac{1}{2} \left [ \frac{\partial^2}{ (\partial m_1)^2} e_0^{} (m_1^1, 0) + \frac{\partial^2}{ (\partial m_1)^2} e_0^{} (m_1^2, 0) \right ] \varepsilon^2 + O (\varepsilon^3).
\end{eqnarray*}
The minimality of $\mathcal{B}$ requires $\frac{\partial}{\partial m_1} e_0^{} (m_1^1, 0 )= \frac{\partial}{\partial m_1} e_0^{} (m_1^2, 0)$. However,
since $m_1^1,m_1^2 < M_1 <\min\{m_1^*, \pi \Gamma_{11}^{-2/3}\}$, based on the proof in Lemma \ref{finiteSin}, we have
\[ \frac{\partial^2}{ (\partial m_1)^2} e_0^{} (m_1^1, 0 )<0, \qquad \frac{\partial^2}{ (\partial m_1)^2} e_0^{} (m_1^2, 0 )<0 ,\]
which is prohibited by the minimality of $\mathcal{B}$.
{\em No coexistence}: assume the opposite, i.e.
there exists a single bubble with mass $m$ which, without loss of generality, we assume made of type I constituent,
and a double bubble with lobes of masses $m_1$ and $m_2$.
Then
\begin{eqnarray*}
&& [ e_0^{} (m-\varepsilon, 0) + e_0^{} (m_1 +\varepsilon, m_2) ] - [ e_0^{} (m,0) + e_0^{} (m_1, m_2) ] \\
& = & \left [ \frac{\partial}{\partial m_1}e_0^{} (m_1,m_2) - \frac{\partial}{\partial m_1} e_0^{} (m, 0) \right ] \varepsilon +\frac{1}{2} \left [ \frac{\partial^2}{ (\partial m_1)^2} e_0^{} (m, 0) + \frac{\partial^2}{ (\partial m_1)^2} e_0^{} (m_1, m_2) \right ] \varepsilon^2 + O (\varepsilon^3).
\end{eqnarray*}
The minimality of $\mathcal{B}$ requires $ \frac{\partial}{\partial m_1} e_0^{} (m, 0)= \frac{\partial}{\partial m_1} e_0^{} (m_1,m_2) $. However,
since $m,m_1 < M_1 <\min\{m_1^*, \pi \Gamma_{11}^{- 2/3}\}$, based on Lemma \ref{flex} and the proof in Lemma \ref{finiteSin}, we have
\[ \frac{\partial^2}{ (\partial m_1)^2} \ e_0^{}(m, 0) <0, \qquad \frac{\partial^2}{ (\partial m_1)^2} \ e_0^{} (m_1, m_2) < 0 ,\]
which is prohibited by the minimality of $\mathcal{B}$.
Finally, we need to compare the case of one double bubble (with lobes of masses
$M_1$ and $M_2$) against the case of two single bubbles of different constituents (of masses $M_1$ and $M_2$).
By our choice of $\Gamma_{12}$, we have
\[\frac{\Gamma_{12}}{2\pi} M_1M_2 +p(M_1,M_2)<2\sqrt{\pi}(\sqrt{M_1}+\sqrt{M_2}),\]
hence the double bubble has lower energy.
\end{proof}
\section{Convergence Theorems} \label{section 3}
In this section we formulate and prove two theorems on first-order convergence of $E_\eta$.
First, we consider global minimizers of $E_\eta$ with given mass condition $\int_{{\mathbb{T}^2}} v_\eta = M$. Let $v_\eta^{\ast}$ be minimizers of $E_{\eta}$, that is,
\begin{equation}\label{globmin}
E_\eta(v_\eta^{\ast})= \min\left\{ E_\eta(v_{\eta}) \ | \ v_{\eta}= ( v_{1, \eta},v_{2, \eta} ) \in X_\eta, \ \int_{\mathbb{T}^2} v_{\eta} = M\right\},
\end{equation}
where the space $X_\eta$ is defined in \eqref{Xspace}.
\begin{theorem}\label{MinThm}
Let $v_\eta^{\ast}=\eta^{-2}\chi_{\Omega_\eta}$ be minimizers of problem \eqref{globmin} for all $\eta>0$. Then, there exists a subsequence $\eta \to 0$ (still denoted by $\eta$) and $K \in {\mathbb{N}}$ such that:
\begin{enumerate}
\item there exist connected clusters $A^1,\dots, A^K$ in ${\mathbb{R}}^2$ and points $x_{\eta}^k \in {\mathbb{T}^2}$, $k=1,\dots,K$, for which
\begin{equation} \label{MT1}
\eta^{-2}\left| \Omega_\eta \ \triangle \ \bigcup_{k=1}^K \left(\eta A^k + x_{\eta}^k \right)
\right| \xrightarrow{\eta \rightarrow 0} 0;
\end{equation}
\item each $A^k$, $k=1,\dots,K $ is a minimizer of $\mathcal{G}$:
\begin{equation}\label{MT2}
\mathcal{G}(A^k)= e_0(m^k), \qquad m^k=(m_1^k, m_2^k)=|A^k|;
\end{equation}
Moreover,
\begin{equation}\label{MT3}
\overline{e_0}(M) =
\lim_{\eta\to 0} E_\eta(v_\eta^{\ast})=\sum_{k=1}^K \mathcal{G}(A^k) =
\sum_{k=1}^K e_0(m^k).
\end{equation}
\item $x_{\eta}^k \xrightarrow{\eta\rightarrow 0}x^k $, $\forall k=1,\dots,K$, and
$\{x^1,\dots,x^K\}$ attains the minimum of
$\mathcal{F}_K(y^1,\dots,y^K;\{m^1,\dots,m^K\})$ over all $\{y^1,\dots,y^K\}$ in ${\mathbb{T}^2}$.
\end{enumerate}
\end{theorem}
We recall that
$$ \mathcal{F}_K (y^1,\dots,y^K ; \{m^1,\dots,m^K\})= \sum_{k,\ell=1\atop k\neq\ell}^K \sum_{i,j=1}^2 \frac{ \Gamma_{ij} }{2}\, m_i^k\, m_j^\ell\, G_{{\mathbb{T}^2}}(y^k-y^\ell). $$
Thus, minimizers of $E_\eta$ concentrate on a finite number of connected clusters, each of which blows up to a minimizer of the limit energy $\mathcal{G}$, each converging to a {\it different} point in ${\mathbb{T}^2}$.
Note that an equivalent way to write \eqref{MT1} is using $BV({\mathbb{T}^2}; \{0,\eta^{-2}\} )$ functions:
if we define $\Theta_\eta:= \bigcup_{k=1}^K \left(\eta A^k + x_{\eta}^k \right)$ and $w_\eta=\eta^{-2} \chi_{\Theta_\eta}$, then
$ \Vert v_\eta - w_\eta \Vert_{L^1({\mathbb{T}^2})}\xrightarrow{\eta \rightarrow 0} 0$.
Applying the regularity theory of \cite{maggi} as in \cite{ABCT1, ABCT2} one could show that the convergence actually occurs in a much stronger $C^{1,1}$ sense. (In which case, we could conclude that minimizers $\Omega_\fbox{}silonta$ of $E_\fbox{}silonta$ must also have a bounded number of connected components.)
We note that even in the diblock (binary) case, Theorem~\ref{MinThm} provides a more detailed description of energy minimizers than that of \cite{bi1}.
We also formulate the limit in terms of $\Gamma$-convergence. In this vein we follow the model of \cite{bi1}. Gamma-limits allow us to consider non-minimizing configurations with energy of the same order as minimizers, and obtain a weaker form of the structure given above. However, as we will see we can no longer prevent ``coalescence'' of concentration points at the same limit point $\xi\in{\mathbb{T}^2}$ unless we have some second-order information, which is available for global minimizers.
We define a class of measures with countable support on ${\mathbb{T}^2}$,
$$ Y:=\left\{ v_0=\sum_{k=1}^\infty (m_1^k,m_2^k)\, \delta_{x^k} \ | \ m_i^k\ge 0, \ x^k\in{\mathbb{T}^2} \ \text{distinct points}\right\},
$$
and a functional on $Y$,
\begin{eqnarray} \label{E0first}
E_0^{} (v_0 ) : =
\left\{
\begin{array}{rcl}
\sum_{k=1}^{\infty} \overline{e_0} (m^k ), & & \text{ if } v_0\in Y, \\
\infty, & & \text{ otherwise. }
\end{array}
\right.
\end{eqnarray}
Then we have a first Gamma-convergence result:
\begin{theorem}[First $\Gamma$-limit] \label{twodfirst}
We have
\begin{eqnarray}
E_{\eta}^{} \overset{\Gamma}{\longrightarrow} E_0^{} \ \text{ as } \ \eta \rightarrow 0. \nonumber
\end{eqnarray}
That is,
\begin{enumerate}
\item Let $ v_{\eta}\in X_\eta $ be a sequence with $\sup_{\eta>0} E_{\eta}^{} (v_{\eta} )<\infty$. Then there exists a subsequence $\eta\to 0$ and $v_0\in Y$ such that
$ v_{\eta} \rightharpoonup v_{0} $ (in the weak topology of the space of measures), and
\begin{eqnarray}
\liminf_{\eta\rightarrow 0} E_{\eta}^{} (v_{\eta} ) \geq E_0^{} (v_{0}). \nonumber
\end{eqnarray}
\item Let $v_0\in Y$ with $E_0^{} (v_{0}) < \infty$. Then there exists a sequence $ v_{\eta} \rightharpoonup v_{0} $ weakly as measures, such that
\begin{eqnarray}
\limsup_{\eta\rightarrow 0} E_{\eta}^{} (v_{\eta} ) \leq E_0^{} (v_{0}). \nonumber
\end{eqnarray}
\end{enumerate}
\end{theorem}
We may also formulate a second $\Gamma$-convergence result at the level of $|\log\eta|^{-1}$ in the energy, which expresses the interaction energy between components at the minimal energy $\overline{e_0}(M)$.
For $v_\eta\in X_\eta$, let
\begin{eqnarray}
F_{\eta}^{} (v_{\eta} ) : = |\log \eta | \left [ E_{\eta}^{} (v_{\eta} ) - \overline{e_0 } \left ( \int_{\mathbb{T}^2} v_{\eta} \right ) \right ].
\end{eqnarray}
Given the behavior described for minimizers in Theorem~\ref{MinThm}, we expect that for configurations with near-minimal energy $E_\eta(v_\eta)\simeq \overline{e_0}(M)$, $F_\eta(v_\eta)$ should describe the interaction energy of the disjoint clusters making up $v_\eta$.
For $K \in \mathbb{N}$, $m_1^k \geq 0$, $m_2^k \geq 0$ and $(m_1^k)^2 + (m_2^k)^2 > 0 $, the sequence $K \otimes (m_1^k, m_2^k) $ is defined by
\begin{eqnarray*}
( K \otimes (m_1^k, m_2^k) )^k : = \left \{
\begin{aligned}
&(m_1^k, m_2^k),& 1 \leq k \leq K, \\
& (0, 0),& K+1 \leq k < \infty.
\end{aligned}
\right.
\end{eqnarray*}
Let ${\cal{M}}_{M}$ be the set of optimal sequences made of all clusters for the problem \eqref{mine0}:
\begin{eqnarray}
{\cal{M}}_{M} := \Bigg \{ K \otimes (m_1^k, m_2^k) : K \otimes (m_1^k, m_2^k) \text{ minimizes \fbox{}silonqref{mine0} for } M_i , i = 1, 2, \nonumber \\
\text{ and } \ \overline{e_0} (m^k) = e_0^{} (m^k), \ m^k = (m_1^k, m_2^k ) \Bigg \}. \nonumber
\fbox{}silonnd{eqnarray}
Let $Y_M$ denote the space of all measures $v_0=\sum_{k=1}^K m^k \,{\operatorname{d}}elta_{x^k}$ with $\{x^1,\,{\operatorname{d}}ots,x^K\}$ distinct points in ${\mathbb{T}^2}$ and $K\otimes m^k\in \mathcal{M}_M$.
For $v_0\in Y_M$, we define the functional
$$ F_0(v_0)= \sum_{i,j =1}^2 \frac{\Gamma_{ij} }{2} \left \{ \sum_{k=1}^{K} \left [ f (m^k_i,m^k_j) + m_i^k m_j^k R_{\mathbb{T}^2}( 0 ) \right ]
+ \sum_{k,\ell=1\atop k\neq\ell}^K m_i^k m_j^\ell G_{\mathbb{T}^2}(x^k - x^\ell ) \right \},
$$
and $F_0(v_0)=+\infty$ otherwise. The term
$$ f(m^k_i,m^k_j) =\frac{1}{2\pi} \int_{A^k_i}\int_{A^k_j} \log { 1 \over |x-y|} dx\, dy, $$
where $A^k$ are the minimizers of $e_0(m^k)$, is determined by the first $\Gamma$-limit, and thus is a constant in $F_0$ and does not change with the locations $x^k$ of the bubbles.
\begin{theorem}[Second $\Gamma$-limit] \label{twodsecond}
We have
\begin{eqnarray}
F_{\eta}^{} \overset{\Gamma}{\longrightarrow} F_0^{} \ \text{ as } \ \eta \rightarrow 0. \nonumber
\end{eqnarray}
That is,
conditions 1 and 2 of Theorem \ref{twodfirst} hold with $E_{\eta} $ and $E_{0}$ replaced by $F_{\eta} $ and $F_{0} $.
\end{theorem}
The proof of the second $\Gamma$-limit follows that of \cite{bi1}, with some modifications as in the proof of statement 3.~of Theorem~\ref{MinThm}, and is left to the reader. We remark that for minimizers, statement 3.~of Theorem~\ref{MinThm} is stronger than that of Theorem~\ref{twodsecond}, as the control of errors from the order-one $\Gamma$-limit is effectively incorporated in the hypothesis that $\sup_{\eta>0}F_\eta(v_\eta)<\infty$.
\subsection*{A concentration lemma}
We begin by making precise the heuristic idea that finite energy configurations $v_\eta$ are composed of a disjoint union of well-separated components of size $\eta$. Ideally, we would hope that each is connected, but it is sufficient to show that they are separated by open sets of diameter $O(\eta)$.
\begin{lemma}\label{components}
Let $v_\eta=\eta^{-2}\chi_{\Omega_\eta}\in X_{\eta}$ with $\eta\int_{{\mathbb{T}^2}} |\nabla v_{\eta} | \le C$. Then, there exists an at most countable collection $v_{\eta}^k=\eta^{-2}\chi_{\Omega_{\eta}^k}\in X_{\eta}$, with clusters $\Omega_{\eta}^k=(\Omega_{1,\eta}^k, \Omega_{2,\eta}^k)$, such that
\begin{enumerate}
\item[(a)] $\Omega_{i, \eta}^k\cap \Omega_{j, \eta}^\ell=\emptyset$, for $k\neq\ell$ and $i,j=1,2$.
\item[(b)] $v_\eta = \sum_{k=1}^\infty v_\eta^k$ in $X_{\eta}$; in particular,
$$ \int_{{\mathbb{T}^2}} |\nabla v_{i, \eta}| = \sum_{k=1}^\infty \int_{{\mathbb{T}^2}} |\nabla v_{i, \eta}^k|.$$
\item[(c)] There exists $C>0$ with $\text{diam}(\Omega_{\eta}^k) \le C\eta$ for all $k\in{\mathbb{N}}$.
\end{enumerate}
\end{lemma}
\begin{proof} Our first step is to identify ``components'' of the clusters $\Omega_\eta$. As we are not assuming these are minimizers, these sets do not have any higher regularity, so we will instead define disjoint open sets $\{\widetilde\Sigma_\eta^k\}_{k\in{\mathbb{N}}}$ which disconnect $\Omega_\eta$ at the $\eta$-scale.
To this end, we first
let $\rho_\eps(x)=\eps^{-2}\rho(x/\eps)$ be a family of $C^\infty$ mollifiers supported in $B_\eps(0)$, with $\int_{{\mathbb{T}^2}}\rho_\eps =1$. Let $\widetilde\Omega_\eta=\Omega_{1,\eta}\cup\Omega_{2, \eta}$ and $\varphi_\eta=\rho_{\eta^2}*\chi_{\widetilde\Omega_\eta}$, which is $C^\infty({\mathbb{T}^2})$. Following \cite[Theorem~3.42]{AFP}, we may choose $t=t(\eta)\in(0,1)$ for which the set $\Sigma_\eta:=\{ x: \ \varphi_\eta(x)>t\}$ is an open set with smooth boundary $\partial\Sigma_\eta\subset{\mathbb{T}^2}$, with
$$
\text{Per}_{{\mathbb{T}^2}}(\Sigma_\eta)\le \text{Per}_{{\mathbb{T}^2}}(\Omega_\eta) + \eta^2 \le C\eta.
$$
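The existence of such a level $t$ is the content of the cited result from \cite{AFP}; we sketch the standard argument, which combines the coarea formula with the fact that mollification does not increase total variation:
$$ \int_0^1 \text{Per}_{{\mathbb{T}^2}}\bigl(\{\varphi_\eta>s\}\bigr)\,ds
= \int_{{\mathbb{T}^2}} |\nabla \varphi_\eta|
\le \int_{{\mathbb{T}^2}} |\nabla \chi_{\widetilde\Omega_\eta}|
= \text{Per}_{{\mathbb{T}^2}}(\widetilde\Omega_\eta), $$
so some $t\in(0,1)$ achieves at most the average value, and by Sard's theorem $t$ may in addition be taken to be a regular value of $\varphi_\eta$, making $\partial\Sigma_\eta$ smooth.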
Moreover, by construction, the Hausdorff distance satisfies $d(\Sigma_\eta,\widetilde\Omega_\eta)\le \eta^2$. As $\partial\Sigma_\eta$ is smooth, $\Sigma_\eta$ decomposes into an at most countable collection of smooth, disjoint, open, connected components, $\Sigma_\eta=\bigcup_{k=1}^\infty \Sigma_\eta^k$, $\Sigma_\eta^k\cap\Sigma_\eta^\ell=\emptyset$, $k\neq\ell$.
As the $\Sigma_\eta^k$ are connected, we also obtain a bound on the sum of the diameters,
$$ \sum_{k=1}^\infty \text{diam}_{\mathbb{T}^2}(\Sigma_\eta^k) \le C'\eta, $$
for a constant $C'$ independent of $\eta$.
We next grow our sets $\Sigma_\eta^k$ by passing to their $\eta^2$-neighborhoods,
$$ \widetilde\Sigma_\eta^k:= \bigcup_{y\in\Sigma_\eta^k} B_{\eta^2}(y). $$
In this way, the collection $\{\widetilde\Sigma_\eta^k\}$ covers the original set,
\begin{equation}\label{opencover}
\Omega_\eta, \Sigma_\eta \subset \bigcup_{k=1}^\infty \widetilde\Sigma_\eta^k.
\end{equation}
In expanding $\Sigma_\eta^k$ to $\widetilde\Sigma_\eta^k$, these may no longer be disjoint, but this problem may be overcome by fusing together components which intersect. Indeed, if
$\widetilde\Sigma_\eta^k\cap \widetilde\Sigma_\eta^\ell \neq\emptyset$ for some $\ell\neq k$, then replace that pair in the list by $\widehat\Sigma_\eta^k:=\widetilde\Sigma_\eta^k\cup\widetilde\Sigma_\eta^\ell$
and reorder (if necessary). The resulting sets $\{\widetilde\Sigma_\eta^k\}_{k\in{\mathbb{N}}}$ will in fact be an at most countable collection of disjoint open sets, each of diameter $O(\eta)$, which cover the clusters $\Omega_\eta$.
Now we may define our disjoint components of $\Omega_\eta$ via
$$ \Omega_\eta^k:= \Omega_\eta \cap \widetilde\Sigma_\eta^k. $$
By \eqref{opencover} we may conclude $\Omega_\eta=\bigcup_{k=1}^\infty \Omega_\eta^k \, $, and since each $\widetilde\Sigma_\eta^k$ is open and the collection is disjoint, we thus obtain (a), (b) and (c).
\end{proof}
Now that we have decomposed $\Omega_\eta$ into disjoint clusters, we describe the limiting structure of the set at the $\eta$-scale, in terms of sets minimizing the blow-up energy $\mathcal{G}$:
\begin{lemma}\label{ConcLem1}
Let $w_\eta=\eta^{-2}\chi_{\Omega_\eta}$
with $\int_{\mathbb{T}^2} w_\eta =M>0$, and $\sup_{\eta>0}E_\eta(w_\eta)<\infty$. Then there exists a subsequence of $\eta \to 0$,
points $\{x_{\eta}^k \}$ in ${\mathbb{T}^2}$, and clusters $\{A^k\}$ in $\mathbb{R}^2$, such that
$\Omega_\eta=\bigcup_{k\in{\mathbb{N}}}\Omega_\eta^k$ satisfies:
\begin{equation}\label{CL12} \left| A^k \ \triangle \,\left( \eta^{-1}\left[ \Omega_{\eta}^k -x_{\eta}^k \right]\right) \right|
\xrightarrow{\eta \rightarrow 0} 0, \qquad\forall k.
\end{equation}
Moreover,
\begin{gather}
\label{CL13}
M_i=\lim_{\eta \to 0} \sum_{k=1}^\infty \eta^{-2} |\Omega_{i, \eta}^k| = \sum_{k=1}^\infty |A_i^k |, \quad i=1,2, \ \text{and} \\
\label{CL14} \liminf_{\eta \to 0} E_{\eta}(w_{\eta}) \ge
\sum_{k=1}^\infty \mathcal{G}(A^k) \ge \overline{e_0}(M).
\end{gather}
\end{lemma}
That is, up to sets of negligible area, $\Omega_\eta \simeq \bigcup_{k} \left[x_{\eta}^k +\eta A^k\right]$, a disjoint, at most countable, union of {\it fixed} ($\eta$-independent) clusters scaled by $\eta$.
\begin{proof} Applying Lemma~\ref{components} to the set $\Omega_\eta$ for each fixed $\eta>0$, we obtain an at most countable disjoint collection $\Omega_\eta^k=(\Omega_{1,\eta}^k,\Omega_{2,\eta}^k)$ of clusters in ${\mathbb{T}^2}$, and corresponding $v^k_\eta=\eta^{-2}(\chi_{\Omega_{1,\eta}^k}, \chi_{\Omega_{2,\eta}^k})\in[BV(\TT; \{0,\eta^{-2}\})]^2$, satisfying conditions (a), (b), (c) of Lemma~\ref{components}. We denote by
$$m_\eta^k=(m_{1,\eta}^k, m_{2,\eta}^k)=\eta^{-2}|\Omega^k_{\eta}|=
(\eta^{-2}|\Omega^k_{1,\eta}|\ ,\ \eta^{-2}|\Omega^k_{2,\eta}|). $$
The collection may be a finite union of size $N_\eta\in\mathbb{N}$,
in which case we take $\Omega_{\eta}^k=(\emptyset,\emptyset)$ for $k>N_\eta$, and the choice of $x_\eta^k$ is irrelevant. We also recall that it is possible that only one of the $m_{i, \eta}^k$ is positive. As
$$ M=(M_1,M_2)=\sum_{k=1}^\infty m_{\eta}^k $$
is either a finite sum or a convergent series, without loss of generality we may assume that each sequence $\{\Omega_{\eta}^k \}$ is ordered by decreasing cluster mass: that is,
$|m_{\eta}^k |=m_{1, \eta}^k+m_{2, \eta}^k \ge |m_{\eta}^{k+1} |$ holds for all $k$.
From the proof of Lemma~\ref{components} we note that each disjoint cluster $\Omega^k_{\eta}\subset\Sigma^k_\eta$, with $\{\Sigma^k_\eta\}_{k\in{\mathbb{N}}}$ a collection of disjoint open sets. This disconnection of $\Omega_\eta$ also induces a corresponding disconnection on ${\mathbb{T}^2}\setminus\Omega_\eta$, and hence the perimeter of the cluster $\Omega_\eta$ (see \eqref{cluster_per}) decomposes as
\begin{equation}\label{per_decomp}
\text{Per}_{{\mathbb{T}^2}}(\Omega_\eta)=\sum_{k=1}^\infty \text{Per}_{{\mathbb{T}^2}}(\Omega^k_{\eta}).
\end{equation}
For any $k\in\mathbb{N}$, take any $x_{\eta}^k \in \Omega_{1,\eta}^k \cup \Omega_{2, \eta}^k$.
By (c) of Lemma~\ref{components} each individual disjoint cluster $\Omega_{\eta}^k $ has diameter bounded by $C\eta$, and thus there exists $R>0$ independent of $\eta$ with
\begin{equation}\label{esti}
\Omega_{\eta}^k \subset B_{\eta R}(x_{\eta}^k ).
\end{equation}
An immediate consequence of \eqref{esti} is that we can think of each cluster $\Omega_{\eta}^k $ as a subset of $\mathbb{R}^2$, and do a blow-up at scale $\eta$. We define
$A_{\eta}^k = (A_{1, \eta}^k, A_{2, \eta}^k )$, a cluster in $\mathbb{R}^2$, via
$$ A_{i, \eta}^k = \eta^{-1} (\Omega_{i, \eta}^k - x_{\eta}^k ) \subset B_{R} (0 )\subset\mathbb{R}^2. $$
For each $k$, $\{A_{\eta}^k\}$ is a uniformly bounded family of finite perimeter sets in $\mathbb{R}^2$, and
$$ \text{Per}_{\mathbb{R}^2} (A_{\eta}^k ) \le \eta^{-1}\int_{{\mathbb{T}^2}} |\nabla \chi_{\Omega_{\eta}^k }|
\le \eta\int_{{\mathbb{T}^2}} |\nabla w_\eta| \le E_\eta(w_\eta), $$
for each $\eta>0$.
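For the reader's convenience we record the elementary scaling identities behind this chain of inequalities; this is a routine check using only $w_\eta=\eta^{-2}\chi_{\Omega_\eta}$, the definition of $A_{\eta}^k$, and (up to the normalization convention \eqref{cluster_per} for cluster perimeters) the decomposition \eqref{per_decomp}:
$$ \text{Per}_{\mathbb{R}^2}(A_{i,\eta}^k) = \eta^{-1}\,\text{Per}_{{\mathbb{T}^2}}(\Omega_{i,\eta}^k),
\qquad
\eta\int_{{\mathbb{T}^2}} |\nabla w_\eta| = \eta^{-1}\int_{{\mathbb{T}^2}} |\nabla \chi_{\Omega_\eta}|
= \eta^{-1}\sum_{\ell=1}^\infty \text{Per}_{{\mathbb{T}^2}}(\Omega_{\eta}^\ell), $$
since the blow-up $x\mapsto \eta^{-1}(x-x_\eta^k)$ scales one-dimensional (perimeter) measure by the factor $\eta^{-1}$, while the final inequality simply drops the nonnegative nonlocal part of $E_\eta$.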
By Proposition 29.5 in \cite{maggi}, for each $k$ there exists a subsequence (still denoted by $\eta$) $\eta\to 0$ and a cluster $A^k = (A_{1}^{k}, A_{2}^{k})$ in $\mathbb{R}^2$ with $\lim_{\eta \to 0}|A_{\eta}^{k} \ \triangle A^k |=0$, and for which
\begin{eqnarray}
\text{Per}_{\mathbb{R}^2} (A^k ) \leq \liminf_{\eta \rightarrow 0} \ \text{Per}_{\mathbb{R}^2} ( A_{\eta}^k ), \ \ |A^k | = \lim_{\eta \rightarrow 0} | A_{\eta}^k | = m^k = (m_1^k, m_2^k ). \nonumber
\end{eqnarray}
By a diagonal argument there exists a common subsequence, which we do not relabel, along which each $A_{\eta}^k \to A^k$ in the above sense.
Next we show that \eqref{CL13} holds.
As $M_i = \sum_{k=1}^\infty m_{i,\eta}^k $,
we obtain
$$ M_i=\lim_{\eta \to 0}\sum_{k=1}^\infty m_{i, \eta}^k \ge \sum_{k=1}^\infty m_i^k, \qquad i=1,2.
$$
To obtain the opposite inequality, let $\eps>0$ be given and $C_0=\sup_{\eta>0} E_\eta(w_\eta)$. By the convergence of the series above we may choose $N\in\mathbb{N}$ for which both
$$ \sum_{k=N}^\infty m_i^k < {\eps\over 2}, \quad \text{and} \quad m_i^N \le |m^N| < {\eps^2\pi\over 4 C_0^2}, \qquad i=1,2. $$
Since $m_{i, \eta}^k \to m_i^k $ as $\eta \to 0$, we may choose $\eta_0>0$ so that
$$ m_{i, \eta}^N < 2m_i^N < {\eps^2\pi\over 2 C_0^2}, \qquad \forall \eta <\eta_0. $$
Using the ordering $|m_{\eta}^{k+1} |\le |m_{\eta}^k |$ and the isoperimetric inequality,
\begin{align*}
\sum_{k=N}^\infty m_{i, \eta}^k &\le \sum_{k=N}^\infty \sqrt{|m_{\eta}^k |}\sqrt{m_{i,\eta}^k} \le \sqrt{|m_{\eta}^N |}\sum_{k=N}^\infty \sqrt{m_{i,\eta}^k}
\\
&\le \sqrt{ {|m_{\eta}^N |\over 4\pi}} \sum_{k=N}^\infty \text{Per}_{\mathbb{R}^2} (A_{i, \eta}^k ) \\
&\le \sqrt{ {|m_{\eta}^N|\over 4\pi}}\left[ \eta \int_{\mathbb{T}^2} |\nabla w_\eta | \right]< \sqrt{ {|m_{\eta}^N|\over 4\pi}} C_0 < {\eps\over 2},
\end{align*}
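The middle step above is the sharp isoperimetric inequality in the plane, applied to each component and combined with $|A_{i,\eta}^k|=m_{i,\eta}^k$:
$$ \text{Per}_{\mathbb{R}^2}(A_{i,\eta}^k) \ \ge\ 2\sqrt{\pi\,|A_{i,\eta}^k|}
\qquad\Longrightarrow\qquad
\sqrt{m_{i,\eta}^k} \ =\ \sqrt{|A_{i,\eta}^k|} \ \le\ \sqrt{\tfrac{1}{4\pi}}\ \text{Per}_{\mathbb{R}^2}(A_{i,\eta}^k). $$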
for all $\eta <\eta_0$. Finally, since $m_{\eta}^k \to m^k$ as $\eta \to 0$, by choosing $\eta_0$ smaller if necessary we have
$$ \sum_{k=1}^{N-1} m_{i, \eta}^k \le \sum_{k=1}^{N-1} m_i^k + {\eps\over 2}, \qquad \forall \eta <\eta_0. $$
Thus, for all $\eta <\eta_0$,
$$ M_i = \sum_{k=1}^\infty m_{i, \eta}^k <
\sum_{k=1}^{N-1} m_{i, \eta}^k + {\eps\over 2} <
\sum_{k=1}^{N-1} m_i^k + \eps \le \sum_{k=1}^\infty m_i^k + \eps,
$$
and \eqref{CL13} is verified.
It remains to calculate the energy in this limit. Let $\tilde w_{i, \eta}^k =\eta^{-2}\chi_{\Omega_{i, \eta}^k }$, $i=1,2$, and $\tilde w_{i,\eta}=\sum_{k\in{\mathbb{N}}} \tilde w_{i,\eta}^k $.
As the clusters $\Omega_{\eta}^k $ are separated by disjoint open sets, the perimeter term decomposes exactly at the $\eta$-scale, and by lower semicontinuity
$$ \eta \int_{\mathbb{T}^2} |\nabla \tilde w_\eta | = \eta^{-1} \sum_{k\in{\mathbb{N}}} \text{Per}_{\mathbb{T}^2} (\Omega_{\eta}^k )
\ge \sum_{k\in{\mathbb{N}}} \text{Per}_{\mathbb{R}^2} (A^k) +o(1). $$
The nonlocal terms also split into a double sum: for $i,j=1,2$,
\begin{gather*}
\int_{\mathbb{T}^2}\int_{\mathbb{T}^2} \tilde w_{i,\eta}(x) G_{\mathbb{T}^2}(x-y) \tilde w_{j,\eta} (y)\, dx\, dy
= \sum_{k,\ell\in{\mathbb{N}}} I_{i,j}^{k,\ell}, \quad\text{with} \\
I_{i,j}^{k,\ell}:= \int_{\mathbb{T}^2}\int_{\mathbb{T}^2} \tilde w_{i,\eta}^k (x) \, G_{\mathbb{T}^2}(x-y)\, \tilde w_{j, \eta}^{\ell} (y) \, dx\, dy = \int_{A_{i, \eta}^k }\int_{A_{j, \eta}^{\ell}} G_{\mathbb{T}^2}( (x_{\eta}^k +\eta \tilde x) - ( x_{\eta}^{\ell}+\eta \tilde y) ) d\tilde x\, d\tilde y.
\end{gather*}
First consider the self-interaction terms, $\ell=k$. These have two parts,
\begin{align}\nonumber
I_{i,j}^{k,k} &= \int_{A_{i, \eta}^k }\int_{A_{j, \eta}^k } \left[ {1\over 2 \pi} \log {1\over \eta |\tilde x-\tilde y|} + R_{\mathbb{T}^2}(\eta \tilde x - \eta \tilde y)\right] d\tilde x\, d\tilde y \\
\label{Ikk}
&={|\log\eta |\over 2\pi} |A_{i, \eta}^k | \, |A_{j, \eta}^k | + J_{i,j}(A_{\eta}^k),
\end{align}
where we define (for clusters $A=(A_1,A_2)$ in $\mathbb{R}^2$)
\begin{equation}\label{Jk}
J_{i,j}(A):= \int_{A_{i}} \int_{A_{j}} \left[
\frac{1}{2\pi} \log{1\over |\tilde x-\tilde y|}+R_{{\mathbb{T}^2}}(\eta(\tilde x-\tilde y))\right]d\tilde x\, d\tilde y.
\end{equation}
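The extraction of the $|\log\eta|$ term in \eqref{Ikk} rests on nothing more than the elementary identity (valid for $0<\eta<1$)
$$ \frac{1}{2\pi}\log\frac{1}{\eta\,|\tilde x-\tilde y|}
= \frac{|\log\eta|}{2\pi} + \frac{1}{2\pi}\log\frac{1}{|\tilde x-\tilde y|}, $$
which, integrated over $A_{i,\eta}^k\times A_{j,\eta}^k$, produces the leading term $(2\pi)^{-1}|\log\eta|\,|A_{i,\eta}^k|\,|A_{j,\eta}^k|$ and leaves exactly the remainder $J_{i,j}(A_{\eta}^k)$ of \eqref{Jk}.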
As the regular part $R_{{\mathbb{T}^2}}$ of the Green's function is bounded below on ${\mathbb{T}^2}$, say $R_{{\mathbb{T}^2}}(x-y)\ge C_1$, and (using \eqref{esti}) $A_{i, \eta}^k \subset B_R(0)$ for all $k$, we obtain the estimate
\begin{align*}\nonumber
I_{i,j}^{k,k}
&\ge \left[{|\log\eta | \over 2\pi} + C_1\right] |A_{i, \eta}^k | \, |A_{j, \eta}^k | -
C_2 |A_{i, \eta}^k |,
\end{align*}
with $C_2=(2\pi)^{-1}\sup_{\tilde y\in B_R(0)}\int_{B_R(0)} |\log|\tilde x-\tilde y||\,d\tilde x $. Thus, on the diagonal,
\begin{align} \label{CL15} \sum_{k\in{\mathbb{N}}} \sum_{i,j=1}^2 \frac{ \Gamma_{ij}}{2} I_{i,j}^{k,k} & \ge
\sum_{k\in{\mathbb{N}}}\sum_{i,j=1}^2 \Gamma_{ij} {|\log\eta| \over 4\pi} |A_{i, \eta}^k | \, |A_{j, \eta}^k | - O(1) \\
\label{CL16}
&= {|\log\eta|\over 4\pi}\sum_{k\in{\mathbb{N}}} \sum_{i,j=1}^2 \Gamma_{ij} m_i^k\, m_j^k
- o(|\log\eta|).
\end{align}
We estimate interaction terms between component clusters in terms of their distance,
\begin{equation}\label{rdef}
r_{\eta}^{k,\ell} := \max\{\operatorname{dist}_{\mathbb{T}^2}(x,y) \ | \ x\in\Omega_{\eta}^k, \ y\in\Omega_{\eta}^{\ell}\},
\end{equation}
for $k\neq\ell$ with $|\Omega_{\eta}^{k}|, |\Omega_{\eta}^{\ell}|\neq 0$.
The situation is different depending on whether $r_{\eta}^{k,\ell}$ is bounded from below or not. In case that (taking a further subsequence if necessary) $r_{\eta}^{k,\ell}\ge \delta_0>0$ for all $\eta$, for $k\neq\ell$ and $i,j=1,2$ we have
\begin{align}
\nonumber
I_{i,j}^{k,\ell}&= \eta^{-4}\int_{\Omega_{i,\eta}^k } \int_{\Omega_{j, \eta}^{\ell}}
G_{\mathbb{T}^2}(x-y)\, dx\, dy \\
\nonumber
&= \eta^{-4} \, |\Omega_{i,\eta}^{k}|\, |\Omega_{j,\eta}^{\ell}|
\left[G_{\mathbb{T}^2}(x_{\eta}^{k}- x_{\eta}^{\ell}) - O(\eta)\right] \\
\label{CL17}
&= |A_{i, \eta}^{k}|\, |A_{j, \eta}^{\ell}| \left[G_{\mathbb{T}^2}(x_{\eta}^{k}- x_{\eta}^{\ell}) - O(\eta)\right],
\end{align}
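Here we use the smoothness of $G_{\mathbb{T}^2}$ away from the origin: writing $x=x_\eta^k+\eta\tilde x$ and $y=x_\eta^\ell+\eta\tilde y$ with $|\tilde x|,|\tilde y|\le R$, the separation $r_\eta^{k,\ell}\ge\delta_0$ keeps the argument of $G_{\mathbb{T}^2}$ bounded away from its singularity, and a first-order Taylor expansion gives
$$ G_{\mathbb{T}^2}(x-y) = G_{\mathbb{T}^2}\bigl(x_{\eta}^{k}- x_{\eta}^{\ell} + \eta(\tilde x-\tilde y)\bigr)
= G_{\mathbb{T}^2}(x_{\eta}^{k}- x_{\eta}^{\ell}) + O(\eta), $$
with the constant in the $O(\eta)$ term depending only on $\delta_0$ and $R$.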
which is of lower order than the self-interaction term. If $r_{\eta}^{k,\ell}\to 0$ as $\eta\to 0$, we have coalescence of two or more clusters; in this case we estimate:
\begin{align}
\nonumber
I_{i,j}^{k,\ell}&= \eta^{-4}\int_{\Omega_{i, \eta}^{k}} \int_{\Omega_{j, \eta}^{\ell}}
G_{\mathbb{T}^2}(x-y)\, dx\, dy \\
\nonumber
&= \eta^{-4}\int_{\Omega_{i, \eta}^{k}} \int_{\Omega_{j, \eta}^{\ell}}
\left[ -{1\over 2\pi} \log |x-y| + R_{\mathbb{T}^2}(x-y)\right] dx\, dy \\
\nonumber
&\ge \eta^{-4} \, |\Omega_{i,\eta}^{k}|\, |\Omega_{j, \eta}^{\ell}|
\left[ {|\log r_{\eta}^{k,\ell}|\over 2\pi} - C \right]
\\ \label{CL18}
&=|A_{i, \eta}^{k}|\, |A_{j, \eta}^{\ell}| \left[ {|\log r_{\eta}^{k,\ell}| \over 2\pi} - C
\right].
\end{align}
This term may be of the same order as the self-interaction term, if the distance $r_{\eta}^{k,\ell}$ is comparable to $\eta$, in the sense that $|\log r_{\eta}^{k,\ell}|\sim |\log\eta|$. Later, we will need to distinguish these cases, but for this lemma we need only note that they are nonnegative apart from remainders of order $o(|\log\eta|)$. Thus, we have the lower bound,
\begin{align} \nonumber
E_\eta (w_\eta ) &\ge E_\eta(\tilde w_\eta) - O(\eta) \\ \nonumber
&= \sum_{k\in{\mathbb{N}}} \text{Per}_{\mathbb{R}^2} (A_{\eta}^k)
+{1\over|\log\eta|}\sum_{k,\ell\in{\mathbb{N}}} \sum_{i,j=1}^2 \frac{\Gamma_{ij}}{2} I_{i,j}^{k,\ell} -o(1)\\ \nonumber
&\ge \sum_{k\in{\mathbb{N}}} \left[ \text{Per}_{\mathbb{R}^2} (A^k) +
{1\over 4\pi}\sum_{i,j=1}^2 \Gamma_{ij} m_i^k\, m_j^k\right] -o(1) \\
&= \sum_{k\in{\mathbb{N}}} \mathcal{G}(A^k) - o(1). \label{CL19}
\end{align}
As $\sum_{k\in{\mathbb{N}}} m_i^k=M_i$, $i=1,2$, we obtain \eqref{CL14}.
\end{proof}
With the decomposition from Lemma~\ref{ConcLem1} we are now ready to prove the first $\Gamma$-convergence statement. We note that
\begin{eqnarray} \label{e0barInequal}
\overline{e_0 } \left ( \sum_{k=1}^{\infty} m_{}^k \right ) \leq \sum_{k=1}^{\infty} \overline{e_0 }(m^k),
\end{eqnarray}
where $m^k = (m_1^k, m_2^k )$ and $ \sum_{k=1}^{\infty} m_{}^k =(\sum_{k=1}^{\infty} m_1^k, \sum_{k=1}^{\infty} m_2^k) $.
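This subadditivity follows directly from the characterization of $\overline{e_0}$ via \eqref{mine0} as an infimum over splittings of the mass: given $\eps>0$, choose for each $k$ a splitting $\{m^{k \ell}\}_\ell$ with $\sum_{\ell} e_0(m^{k \ell}) \le \overline{e_0}(m^k)+\eps\, 2^{-k}$; concatenating these splittings yields a competitor for the total mass, so that
$$ \overline{e_0}\Bigl(\sum_{k=1}^{\infty} m^k\Bigr) \ \le\ \sum_{k=1}^{\infty}\sum_{\ell} e_0(m^{k \ell})
\ \le\ \sum_{k=1}^{\infty} \overline{e_0}(m^k) + \eps, $$
and \eqref{e0barInequal} follows on letting $\eps\to 0$.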
\subsection*{Proof of the first $\Gamma$-limit}
We can now prove the $\Gamma$-convergence of $E_\eta$ to $E_0$.
\begin{proof}[\bf{Proof of Theorem \ref{twodfirst}}]
Let $ v_{\eta} = (v_{1,\eta}, v_{2, \eta}) $ be a sequence such that the energies $E_{\eta}^{} (v_{\eta} )$ and masses $\int_{\mathbb{T}^2} v_{i, \eta}$, $i = 1, 2$, are bounded. Taking a subsequence (not relabeled), we may assume $\int_{\mathbb{T}^2} v_{i,\eta}\to M_i$, $i=1,2$.
\noindent {\sc Compactness and Lower Limit:} Applying Lemma~\ref{ConcLem1} to $v_\eta$, we obtain at most countably many points $x_\eta^k\in{\mathbb{T}^2}$, clusters $\Omega_\eta^k$ in ${\mathbb{T}^2}$ and $A^k$ in $\mathbb{R}^2$, satisfying \eqref{CL12}--\eqref{CL14}. In particular, by \eqref{CL12} we may conclude that
$$ \eta^{-2}\chi_{\Omega_{i,\eta}^k}- m_i^k\, \delta_{x^k_\eta} \rightharpoonup 0, $$
in the sense of measures. Applying Lemma~5.1 of \cite{bi1} (which is based on a general concentration-compactness result of Lions \cite{lions}), we may conclude that (taking a further subsequence) $v_\eta\rightharpoonup v_0$ with $v_0=(v_{1,0},v_{2,0})=\sum_{k=1}^\infty (m_1^k,m_2^k)\, \delta_{x^k}$, with distinct $x^k\in{\mathbb{T}^2}$ and $m^k_i\ge 0$, as desired. From \eqref{CL13} we may conclude that $M_i=\sum_{k\in{\mathbb{N}}} m^k_i$.
Applying \eqref{CL14} and \eqref{e0barInequal}, we obtain the lower limit.
\noindent {\sc Upper limit:} The upper bound follows from essentially the same argument as in the proof of Theorem~6.1 of \cite{bi1}; the fact that our $v_\eta$ are supported on 2-clusters does not affect the reasoning. We provide a brief summary here for completeness, as the bound is important for the proof of Theorem~\ref{MinThm}.
Let $v_0 = \sum_{k=1}^{\infty} (m_1^k,m_2^k) \delta_{x^k}$ with $ \{ x^k \}$ distinct and $ m_i^k \geq 0$, for $k\in{\mathbb{N}}$, $i=1,2$, with $M_i=\sum_k m^k_i<\infty$, and $E_{0} (v_{0} ) < \infty$. As the sums are convergent, we may approximate each by truncation to $K<\infty$ terms, $\tilde{v}_{i,0} = \sum_{k=1}^{ K } m_i^k \delta_{x^k}$, and in that case
\begin{eqnarray}
E_0 \left ( \tilde{v}_{0} \right ) = \sum_{k=1}^{K} \overline{e_0 }(m^k ) \leq \sum_{k=1}^{\infty} \overline{e_0 }(m^k ) = E_0^{} (v_0 ). \nonumber
\end{eqnarray}
Note also that $\overline{e_0 }(m^k )$ can be approximated to arbitrary precision by
\begin{eqnarray}
\sum_{\ell=1}^{L_k} e_0 (m^{k \ell}), \nonumber
\end{eqnarray}
where $m^{k \ell} = (m_1^{k \ell}, m_2^{k \ell})$ and $\sum_{\ell=1}^{\infty} m_i^{k \ell} = m_i^k$.
Therefore, it is sufficient to construct a sequence $ v_{\eta} \rightharpoonup
\tilde v_{0 } $ such that
\begin{eqnarray}
\limsup_{\eta\to 0} E_{\eta}^{} (v_{\eta} ) \leq \sum_{k=1}^{K} e_0 (m^k ) \text{ for } \tilde v_{i,0} = \sum_{k=1}^{K} m_i^k \delta_{x^k}. \label{reduced}
\end{eqnarray}
To do this, for each pair $m^k=(m^k_1,m^k_2)$ we let $A^k$ be the cluster in $\mathbb{R}^2$ which attains the minimum of the blow-up energy, $e_0(m^k)=\mathcal{G}(A^k)$, and $z^k:=\chi_{A^k}=(\chi_{A^k_1},\chi_{A^k_2})$. Choosing $K$ points $\xi^k\in{\mathbb{T}^2}$ with $\operatorname{dist}_{\mathbb{T}^2} (\xi^k,\xi^\ell)\ge \delta>0$ for $k\neq\ell$, we claim that the configuration
$$ v_\eta(x) = \eta^{-2}\sum_{k=1}^K z^k (\eta^{-1}(x-\xi^k)) $$
satisfies \eqref{reduced}. Indeed, the perimeter term splits exactly as in \eqref{per_decomp}, as do the self-interaction terms,
$$ I^{k,k}_{i,j} = {|\log\eta|\over 2\pi} m^k_i\, m^k_j + o(|\log\eta|). $$
The interaction terms $I^{k,\ell}_{i,j}$ for $k\neq\ell$ are uniformly bounded in $\eta$, as noted in \eqref{CL17}. This completes the proof of the first $\Gamma$-limit.
\end{proof}
\subsection*{Minimizers of $E_\eta$}
We now continue towards the proof of Theorem~\ref{MinThm} concerning global minimizers of $E_\eta$. We next consider configurations whose energy satisfies $E_\eta(w_\eta)\le \overline{e_0}(M)$, the minimum value suggested by the $\Gamma$-limit. With this tighter bound we may obtain more information about the component clusters $\Omega_{\eta}^{k}$ and their centers $x_{\eta}^{k}$. However, note that this is {\em not quite} sufficient to conclude that ``coalescence'' of minimizing clusters cannot occur.
\begin{lemma}\label{ConcLem2} Let $w_\eta=\eta^{-2}\chi_{\Omega_\eta}$ with $\int_{{\mathbb{T}^2}} w_\eta =M>0$, and
$$ \limsup_{\eta\to 0} E_\eta(w_\eta) \le \overline{e_0}(M). $$
Then there exists a subsequence (still denoted by) $\eta$, $K \in{\mathbb{N}}$, clusters $\Omega_{\eta}^{k}\subset{\mathbb{T}^2}$ and $A^k$ in $\mathbb{R}^2$ satisfying \eqref{MT1}, \eqref{MT2}, and \eqref{MT3}. In addition,
\begin{equation}\label{CL21}
\limsup_{\eta \to 0} {|\log r_{\eta}^{k,\ell}|\over |\log \eta |} =0, \qquad \forall k\neq\ell.
\end{equation}
\end{lemma}
We recall that $r_\eta^{k,\ell}$ is defined in \eqref{rdef}.
\begin{proof}
The existence of an at most countable collection of disjoint clusters $\Omega_{\eta}^{k}$ and their blow-up sets $A^k$, with $m^k=|A^k|$, follows from Lemma~\ref{ConcLem1}. From \eqref{CL14} we obtain
$$\overline{e_0}(M)\ge \sum_{k=1}^\infty \mathcal{G}(A^k)\ge \sum_{k=1}^\infty e_0(m^k)
\ge \overline{e_0}(M), $$
and so equality holds throughout. In particular,
$$ \sum_{k\in{\mathbb{N}}} \left[ \mathcal{G}(A^k)-e_0(m^k) \right] =0, $$
and since each term in the sum is non-negative, each must vanish. Therefore, $\mathcal{G}(A^k)=e_0(m^k)$ and $A^k$ is a minimizer for each $k$. By Theorem \ref{finiteness} we conclude that there are only a finite number $K \in{\mathbb{N}}$ of nontrivial connected clusters $A^1,\dots,A^K$ in the limit. Therefore, we have
$$\sum_{k=K+1}^\infty |A_\eta^k|=\sum_{k=K+1}^\infty \eta^{-2}|\Omega_\eta^k|\to 0,
$$
and it suffices to consider the finite union $\bigcup_{k=1}^K \Omega_\eta^k$.
To prove \eqref{MT1}, we note that for each component $k$,
$$ \eta^{-2}\left|\Omega_{\eta}^{k} \ \triangle \ (\eta A^k + x_{\eta}^{k})\right| =
\left| \eta^{-1}(\Omega_{\eta}^{k}-x_{\eta}^{k}) \ \triangle \ A^k\right|
\xrightarrow{\eta \rightarrow 0} 0, $$
by \eqref{CL12}. As the number of components is uniformly bounded, \eqref{MT1} follows.
Finally, we must show that the distance between distinct connected clusters is large relative to $\eta$. Assume, for contradiction, that there exist $k\neq\ell$ and $c >0$ with
${|\log r_{\eta}^{k,\ell}| \over |\log \eta |}\ge c$ along a subsequence. Returning to the lower bound \eqref{CL19}, we retain the term involving $I_{i,j}^{k,\ell}$ in the third line, and obtain:
\begin{align*}
\overline{e_0}(M) = E_\eta (\tilde w_\eta)+o(1) & \ge \sum_{k=1}^K \mathcal{G}(A^k)
+ {1\over 2 |\log\eta|}\sum_{i,j=1}^2 \Gamma_{ij} I_{i,j}^{k,\ell} + o(1) \\
&\ge \overline{e_0}(M) + {c \over 4\pi} \sum_{i,j=1}^2 \Gamma_{ij} m_i^k m_j^\ell +o(1).
\end{align*}
We conclude that at least one of $m^k$, $m^\ell$ vanishes, and thus no two connected clusters can accumulate at that scale.
\end{proof}
We remark that we have shown that {\em in the limit} $\eta\to 0$, only a finite number $K<\infty$ of nontrivial connected clusters remain. It is possible under the hypotheses of Lemma~\ref{ConcLem2} that for $\eta>0$, $\Omega_\eta$ has additional (and perhaps an unbounded number of) components with vanishing mass. For minimizers this should not be the case, but it requires further arguments involving regularity of minimizers (see \cite{ABCT2}) to make this conclusion.
\begin{proof}[\bf{Proof of Theorem~\ref{MinThm}}] Let $v_\eta^{\ast}$ be minimizers of $E_\eta$ with mass $\int_{\mathbb{T}^2} v_\eta^{\ast}=M$.
From the derivation of the Upper Bound in the proof of Theorem~\ref{twodfirst}, we have $\limsup_{\eta\to 0} E_\eta(v_\eta^{\ast})\le \overline{e_0}(M)$. So by Lemma~\ref{ConcLem1} we obtain the decomposition of $\Omega_\eta$ as in that lemma, and the convergence of the minimum values,
$$ \lim_{\eta\to 0} E_\eta(v_\eta^{\ast})=\overline{e_0}(M). $$
It remains to show that the centers $x_{\fbox{}silonta}^{k}$, $k=1,\,{\operatorname{d}}ots,K$, remain isolated from each other and in fact converge to minimizers of $\mathcal{F}_K$, where $K$ is determined in Lemma~\ref{ConcLem2}.
For this purpose we must refine the upper and lower bounds on the energy. Recall from the proof of Lemma~\ref{ConcLem1}, the definition of the clusters $A_\fbox{}silonta^k=\fbox{}silonta^{-1}(\Omega_\fbox{}silonta^k- x_\fbox{}silonta^k)$ in ${\mathbb{R}}R$, with mass $m_\fbox{}silonta^k$, which converge in measure to $A^k$, minimizers of $e_0(m^k)$. As Lemma~\ref{components} ensures that the components $\Omega_\fbox{}silonta^k$ of $\Omega_\fbox{}silonta$ are separated by disjoint open sets $\widetilde\Sigma_\fbox{}silonta^k$, we may deform $\Omega_\fbox{}silonta$ by moving the centers on ${\mathbb{T}^2}$, while avoiding overlapping. Choose $K$ distinct points $y^k\in{\mathbb{T}^2}$, $k=1,\,{\operatorname{d}}ots,K$, and define
$$ \hat\Omega_{\eta}^{k}:= \Omega_{\eta}^{k}+ y^k -x_{\eta}^{k}= \eta A_{\eta}^{k}+y^k,
$$
and
$$ w_\eta:=\eta^{-2}\sum_{k=1}^K \chi_{\hat\Omega_{\eta}^k}
= \sum_{k=1}^K v_\eta^k(\cdot -x_\eta^k + y^k).
$$
Note that if we were to choose $y^k=x_\eta^k$, then $w_\eta=v_\eta$. Moving the centers of the components leaves the perimeter and self-interaction terms unchanged, but the interaction terms between disjoint components are affected. Indeed,
\begin{equation}\label{wenergy}
E_\eta(w_\eta)= \sum_{k=1}^K \mathcal{G}(A_{\eta}^{k})
+ \sum_{i,j=1}^2 {\Gamma_{ij}\over 2 |\log\eta|}\left[\sum_{k=1}^K J_{i,j}(A_{\eta}^k)
+\sum_{k,\ell=1\atop k\neq\ell}^K \int_{A_{i,\eta}^k} \int_{A_{j,\eta}^\ell}
G_{{\mathbb{T}^2}}(y^k-y^\ell +\eta(x-y))\, dx\, dy
\right].
\end{equation}
In particular, let $y^1,\dots,y^K$ be minimizers of the point interaction energy,
$$ \mu_K:=
\min_{\xi^1,\dots,\xi^K\in{\mathbb{T}^2}}\mathcal{F}_K(\xi^1,\dots,\xi^K; \{m^1,\dots,m^K\})
=\mathcal{F}_K(y^1,\dots,y^K; \{m^1,\dots,m^K\}).$$
Because of the logarithmic repulsion at short range, the points $y^1,\dots,y^K$ are distinct and thus well separated, so $G_{\mathbb{T}^2}$ is smooth in each integral appearing in the last term of \eqref{wenergy}. We may thus pass to the limit in this term to obtain:
\begin{equation}\label{Glimit}
\int_{A_{i,\eta}^k} \int_{A_{j,\eta}^\ell}
G_{{\mathbb{T}^2}}(y^k-y^\ell +\eta(x-y))\, dx\, dy = G_{\mathbb{T}^2}( y^k-y^\ell)\, m_i^k\, m_j^\ell + o(1) \quad \text{as } \eta\to 0,
\end{equation}
which yields the upper bound
\begin{align}\nonumber
E_\eta(v_\eta^{\ast})&\le E_\eta(w_\eta) \\
&= \sum_{k=1}^K \mathcal{G}(A_{\eta}^{k})
+ \sum_{i,j=1}^2 {\Gamma_{ij}\over 2 |\log\eta|}\sum_{k=1}^K J_{i,j}(A_{\eta}^k)
+ {\mu_K\over |\log\eta|}
+ o(|\log\eta|^{-1}).
\label{bestupper}
\end{align}
We next claim that the centers $x^1_\eta,\dots,x^K_\eta$ are well separated: there exists $\delta>0$ for which $r_{\eta}^{k,\ell}\ge\delta>0$ for all $k\neq\ell$. By taking a further subsequence (still denoted by $\eta$) if necessary, we may then conclude that each sequence $\{x_{\eta}^{k}\}$ converges to a distinct $x^k\in{\mathbb{T}^2}$.
Indeed, assume the contrary, so there exist $k_0\neq\ell_0$ for which $r_{\eta}^{k_0,\ell_0}\to 0$ as $\eta \to 0$. We then apply the lower bound \eqref{CL18} and \eqref{wenergy} (with $w_\eta=v_\eta$) to derive a lower bound of the form:
\begin{align*}
E_\eta (v_\eta^{\ast})&\ge \sum_{k=1}^K \mathcal{G}(A_{\eta}^{k})
+ \sum_{i,j=1}^2 {\Gamma_{ij}\over 2 |\log\eta|}\sum_{k=1}^K J_{i,j}(A_{\eta}^k)
+ {|\log r_{\eta}^{k_0,\ell_0}|\over |\log\eta|} \sum_{i,j=1}^2 {\Gamma_{ij}\over 4\pi} |A_{\eta}^{k_0}|\, |A_{\eta}^{\ell_0}| - o(|\log\eta |^{-1}).
\end{align*}
Matching the above lower bound with the upper bound \eqref{bestupper}, we obtain:
$$ |\log r_{\eta}^{k_0,\ell_0}| \sum_{i,j=1}^2 {\Gamma_{ij}\over 4\pi} |A_{\eta}^{k_0}|\, |A_{\eta}^{\ell_0}| \le C, $$
with constant $C$ independent of $\eta$. As
$|A_{\eta}^{k_0}|, |A_{\eta}^{\ell_0}|\to m^{k_0},m^{\ell_0}$, which are both positive, while $|\log r_{\eta}^{k_0,\ell_0}|\to\infty$, this is impossible. Thus, each $r_{\eta}^{k,\ell}$ is bounded away from zero and the claim is verified.
Finally, we show that the limiting locations $x^k=\lim_{\eta\to 0} x_\eta^k$ must minimize $\mathcal{F}_K$. Using $E_\eta(v_\eta^{\ast})\le E_\eta(w_\eta)$, and writing the energy expansion \eqref{wenergy} for both $v_\eta^{\ast}$ and $w_\eta$ (choosing points $y^k$ which minimize $\mathcal{F}_K$), all terms cancel exactly except for the interactions, and we are left with:
$$ \sum_{i,j=1}^2 \frac{\Gamma_{ij}}{2} \sum_{k,\ell=1\atop k\neq\ell}^K \int_{A_{i,\eta}^k} \int_{A_{j,\eta}^\ell}
G_{{\mathbb{T}^2}}(x_\eta^k-x_\eta^\ell +\eta(x-y))\, dx\, dy
\le \sum_{i,j=1}^2 \frac{\Gamma_{ij}}{2} \sum_{k,\ell=1\atop k\neq\ell}^K \int_{A_{i,\eta}^k} \int_{A_{j,\eta}^\ell}
G_{{\mathbb{T}^2}}(y^k-y^\ell +\eta(x-y))\, dx\, dy.
$$
As the collections $\{y^1,\dots,y^K\}$ and $\{x^1,\dots,x^K\}$ are well separated on ${\mathbb{T}^2}$, we may pass to the limit as in \eqref{Glimit}: $|A_{i,\eta}^k\triangle A_i^k|\to 0$, $m^k_{i,\eta}\to m^k_i$, and $x_\eta^k\to x^k$. By Lebesgue dominated convergence we pass to the limit in each integral to obtain
$$ \mathcal{F}_K(x^1,\,{\operatorname{d}}ots,x^K; \{m^1,\,{\operatorname{d}}ots,m^K\})\le \mu_K, $$
and hence $\{x^1,\,{\operatorname{d}}ots,x^K\}$ are minimizers of $\mathcal{F}_K$. This completes the proof of Theorem~\ref{MinThm}.
\end{proof}
\begin{remark}
\rm We note that $J_{i,j}(A^k)$ is related to the constant term in the second $\Gamma$-limit $F_0$, via
$$ \lim_{\eta\to 0} \sum_{i,j =1}^2 \sum_{k=1}^{K} \frac{\Gamma_{ij} }{2} J_{i,j}(A^k) =
\sum_{i,j =1}^2 \sum_{k=1}^{K} \frac{\Gamma_{ij} }{2} \left [ f (m^k_i,m^k_j) + m_i^k m_j^k R_{\mathbb{T}^2}( 0 ) \right ] ,
$$
with $m^k_i=|A^k_i|$.
\end{remark}
\appendix
\renewcommand{\theequation}{A.\arabic{equation}}
\renewcommand{\thelemma}{A.\arabic{lemma}}
\setcounter{equation}{0}
\vskip 1cm
\noindent{\Large \bf Appendix A}
A double bubble is bounded by three circular arcs of radii $r_1, r_2$ and $r_0$, where $r_0$ is the radius of the common boundary of the two lobes of a double bubble.
Denote by $\theta_1, \theta_2$, and $\theta_0$ the angles associated with the three arcs.
{\em The asymmetric double bubble.}
Given an asymmetric double bubble, we assume without loss of generality that
\begin{eqnarray}
m_1 < m_2. \nonumber
\end{eqnarray}
We recall that the equations relating masses, angles, and radii are
\begin{eqnarray}
\label{e1A}
r_1^2 (\theta_1 - \cos \theta_1 \sin \theta_1) + r_0^2 (\theta_0 - \cos \theta_0 \sin \theta_0) &=& m_1 \\
\label{e2A}
r_2^2 (\theta_2 - \cos \theta_2 \sin \theta_2) - r_0^2 (\theta_0 - \cos \theta_0 \sin \theta_0) &=& m_2 \\
\label{e3A}
r_1 \sin \theta_1 &=& r_0 \sin \theta_0 \\
\label{e4A}
r_2 \sin \theta_2 &=& r_0 \sin \theta_0 \\
\label{e5A}
(r_1)^{-1} - (r_2)^{-1} &=& (r_0)^{-1}\\
\label{e6A}
\cos \theta_1 + \cos \theta_2 + \cos \theta_0 &=& 0.
\end{eqnarray}
From \eqref{e3A}-\eqref{e5A}, we get
\begin{eqnarray} \label{e7A}
\sin \theta_1 - \sin \theta_2 - \sin \theta_0 = 0.
\end{eqnarray}
Combining \eqref{e6A} with \eqref{e7A}, we get
\begin{eqnarray}
\cos(\theta_1+\theta_0) = - \frac{1}{2}, \text{ and } \cos(\theta_2-\theta_0) = - \frac{1}{2}. \nonumber
\end{eqnarray}
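In more detail, rewriting \eqref{e6A} and \eqref{e7A} as $\cos\theta_1 + \cos\theta_0 = -\cos\theta_2$ and $\sin\theta_1 - \sin\theta_0 = \sin\theta_2$, then squaring and adding the two identities eliminates $\theta_2$:
\begin{eqnarray}
(\cos\theta_1 + \cos\theta_0)^2 + (\sin\theta_1 - \sin\theta_0)^2 = 2 + 2\cos(\theta_1+\theta_0) = \cos^2\theta_2 + \sin^2\theta_2 = 1, \nonumber
\end{eqnarray}
which gives $\cos(\theta_1+\theta_0)=-\frac{1}{2}$; isolating $\cos\theta_1$ and $\sin\theta_1$ instead yields $\cos(\theta_2-\theta_0)=-\frac{1}{2}$ in the same way.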
That is,
\begin{eqnarray} \label{e8A}
\theta_1 = \frac{2\pi}{3} - \theta_0, \text{ and } \; \theta_2 = \frac{2\pi}{3} + \theta_0.
\end{eqnarray}
Based on \eqref{e8A}, since $\theta_2 < \pi$, we get
\begin{eqnarray}
0 < \theta_0 < \frac{\pi}{3}. \nonumber
\end{eqnarray}
Based on \eqref{e1A}, \eqref{e2A} and \eqref{e3A}, we have
\begin{eqnarray}
\frac{ \frac{2\pi/3 - \theta_0}{ \sin^2 (2\pi/3 - \theta_0)} - \frac{\cos( 2\pi/3 - \theta_0)}{\sin( 2\pi/3 - \theta_0)} + \frac{\theta_0}{\sin^2 \theta_0} - \frac{\cos \theta_0}{\sin \theta_0} }{ \frac{2\pi/3 + \theta_0}{ \sin^2 (2\pi/3 + \theta_0)} - \frac{\cos( 2\pi/3 + \theta_0)}{\sin( 2\pi/3 + \theta_0)} - \frac{\theta_0}{\sin^2 \theta_0} + \frac{\cos \theta_0}{\sin \theta_0} } = \frac{m_1}{m_2}.
\end{eqnarray}
So $\theta_0$ depends on $m_1/m_2$ implicitly; that is, $\theta_0$ is a function of $m_1/m_2$.
Thus, $\theta_1$ and $\theta_2$ are also functions of $m_1/m_2$ by \eqref{e8A}.
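For concreteness, this implicit equation is easy to solve numerically. The following Python sketch (ours, not part of the analysis; the function names are illustrative) recovers $\theta_0$ from a prescribed ratio $m_1/m_2 \in (0,1)$ by bisection, using the numerically observed monotonicity of the left-hand side in $\theta_0$.

```python
import math

def shape(t):
    """t/sin^2(t) - cos(t)/sin(t): the building block of the implicit equation."""
    return t / math.sin(t) ** 2 - math.cos(t) / math.sin(t)

def mass_ratio(t0):
    """m1/m2 as a function of theta_0 in (0, pi/3), per the displayed equation."""
    t1 = 2 * math.pi / 3 - t0   # theta_1, from (e8A)
    t2 = 2 * math.pi / 3 + t0   # theta_2, from (e8A)
    return (shape(t1) + shape(t0)) / (shape(t2) - shape(t0))

def theta0_from_ratio(r, tol=1e-12):
    """Invert mass_ratio by bisection; it decreases from 1 to 0 on (0, pi/3)."""
    lo, hi = 1e-9, math.pi / 3 - 1e-9
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mass_ratio(mid) > r:
            lo = mid   # ratio too close to 1: theta_0 must grow
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

As $m_1/m_2 \to 1$ the solver returns $\theta_0 \to 0$, consistent with the symmetric case below.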
Based on \eqref{e1A}, \eqref{e3A}, and \eqref{e4A}, we get
\begin{eqnarray}
r_0^2 &=& \frac{m_1}{\sin^2 \theta_0 \left[ \frac{\theta_1}{\sin^2 \theta_1 } - \frac{\cos \theta_1}{\sin \theta_1 } + \frac{\theta_0}{\sin^2 \theta_0 } - \frac{\cos \theta_0}{\sin \theta_0 } \right] }, \nonumber \\
r_1^2 &=& \frac{ m_1}{ \sin^2 \theta_1 \left[ \frac{\theta_1}{\sin^2 \theta_1 } - \frac{\cos \theta_1}{\sin \theta_1 } + \frac{\theta_0}{\sin^2 \theta_0 } - \frac{\cos \theta_0}{\sin \theta_0 } \right] }, \nonumber \\
r_2^2 &=& \frac{m_1 }{ \sin^2 \theta_2 \left[ \frac{\theta_1}{\sin^2 \theta_1 } - \frac{\cos \theta_1}{\sin \theta_1 } + \frac{\theta_0}{\sin^2 \theta_0 } - \frac{\cos \theta_0}{\sin \theta_0 } \right] }. \nonumber
\end{eqnarray}
Thus the total perimeter of a double bubble is
\begin{eqnarray}
p(m_1, m_2) = 2 \sum_{i=0}^2 \theta_i r_i = \sqrt{m_1}\, g \left ( \frac{m_1}{m_2} \right ), \text{ when } m_1 < m_2, \nonumber
\end{eqnarray}
where $g$ only depends on the ratio $m_1/m_2$.
{\em The symmetric double bubble.} For a symmetric double bubble, we assume that $ m_1 = m_2 $.
The middle arc of the double bubble becomes a straight line, with $\theta_1 = \theta_2 = \frac{2\pi}{3}$, $\theta_0 = 0$, $r_1 = r_2$, and $r_0 = \infty$. We then have
\begin{eqnarray}
r_1^2 \left(\frac{2\pi}{3} - \cos \frac{2\pi}{3} \sin \frac{2\pi}{3} \right) = m_1. \nonumber
\end{eqnarray}
Therefore,
\begin{eqnarray}
p(m_1, m_2) = 2 \sqrt{2} \sqrt{\frac{4}{3} \pi + \frac{\sqrt{3}}{2}} \sqrt{m_1}. \nonumber
\end{eqnarray}
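As a numerical sanity check (ours, not from the paper), evaluating the asymmetric perimeter formulas at a small $\theta_0$ should reproduce the symmetric value above, since $\theta_0 \to 0$ as $m_1/m_2 \to 1$; the function name is illustrative.

```python
import math

def perimeter_from_theta0(t0, m1=1.0):
    """Total perimeter 2*(theta_0*r_0 + theta_1*r_1 + theta_2*r_2) of the
    asymmetric double bubble, using the explicit radii formulas with mass m1."""
    t1 = 2 * math.pi / 3 - t0
    t2 = 2 * math.pi / 3 + t0
    f = lambda t: t / math.sin(t) ** 2 - math.cos(t) / math.sin(t)
    bracket = f(t1) + f(t0)   # the common bracket in the radii formulas
    r0 = math.sqrt(m1 / (math.sin(t0) ** 2 * bracket))
    r1 = math.sqrt(m1 / (math.sin(t1) ** 2 * bracket))
    r2 = math.sqrt(m1 / (math.sin(t2) ** 2 * bracket))
    return 2 * (t0 * r0 + t1 * r1 + t2 * r2)

# symmetric value: p = 2*sqrt(2)*sqrt(4*pi/3 + sqrt(3)/2)*sqrt(m1), with m1 = 1
p_sym = 2 * math.sqrt(2) * math.sqrt(4 * math.pi / 3 + math.sqrt(3) / 2)
```

The formulas also confirm the scaling $p(m_1,m_2)=\sqrt{m_1}\,g(m_1/m_2)$: multiplying both masses by $\lambda^2$ multiplies the perimeter by $\lambda$.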
\begin{thebibliography}{1}
\bibitem{afjm} E. Acerbi, N. Fusco, V. Julin and M. Morini. Nonlinear stability results for the modified Mullins-Sekerka and the surface diffusion flow. \textit{J. Differential Geom.}, 113(1):1-53, 2019.
\bibitem{afm} E. Acerbi, N. Fusco, and M. Morini. Minimality via second variation for a nonlocal isoperimetric problem. \textit{Comm. Math. Phys.}, 322(2): 515-557, 2013.
\bibitem{nano} S. Alama, L. Bronsard, I. Topaloglu. Sharp Interface Limit of an Energy Modeling Nanoparticle-Polymer Blends. \textit{Interfaces and Free Boundaries}, 18(2): 263-290, 2016.
\bibitem{ABCT1} S. Alama, L. Bronsard, Rustum Choksi, I. Topaloglu. Droplet breakup in the liquid drop model with background potential. \textit{Commun. Contemp. Math.}, 21(3), 1850022, 2019.
\bibitem{ABCT2} S. Alama, L. Bronsard, Rustum Choksi, I. Topaloglu. Droplet phase in a nonlocal isoperimetric problem under confinement. \textit{Comm. Pure Appl. Anal.}, 19, 175-202, 2020.
\bibitem{otto} G. Alberti, R. Choksi and F. Otto. Uniform energy distribution for an isoperimetric problem with long-range interactions. \textit{J. Amer. Math. Soc.}, 22(2):569-605, 2009.
\bibitem{AFP} L. Ambrosio, N. Fusco and D. Pallara. Functions of Bounded Variation and Free Discontinuity Problems. \textit{The Clarendon Press, Oxford University Press}, New York, 2000.
\bibitem{block} F. S. Bates and G. H. Fredrickson. Block copolymers - designer soft materials. \textit{Phys. Today}, 52(2): 32-38, 1999.
\bibitem{BC} M. Bonacini and R. Cristoferi. Local and global minimality results for a nonlocal isoperimetric problem on $\mathbb{R}^N$. \textit{SIAM J. Math. Anal.}, 46(4): 2310-2349, 2014.
\bibitem{BK} M. Bonacini and H. Kn\"upfer. Ground states of a ternary system including attractive and repulsive Coulomb-type interactions. \textit{Calc. Var. Partial Dif.}, 55:114, 2016.
\bibitem{cnt} R. Choksi, R. Neumayer and I. Topaloglu. Anisotropic liquid drop models, submitted.
\bibitem{bi1} R. Choksi and M.A. Peletier, Small Volume Fraction Limit of the Diblock Copolymer Problem: I. Sharp Interface Functional. \textit{SIAM J. Math. Anal.}, 42(3):1334-1370, 2010.
\bibitem{onDerivation} R. Choksi and X. Ren. On the derivation of a density functional theory for microphase separation of diblock copolymers. \textit{J. Statist. Phys.}, 113:151-176, 2003.
\bibitem{blendCR} R. Choksi and X. Ren. Diblock copolymer-homopolymer blends: derivation of a density functional theory. \textit{Physica D}. 203(1-2):100-119, 2005.
\bibitem{cs} R. Choksi and P. Sternberg. On the first and second variations of a nonlocal isoperimetric problem. \textit{J. Reine Angew. Math.}, 611, 75-108, 2007.
\bibitem{cisp} M. Cicalese and E. Spadaro. Droplet minimizers of an isoperimetric problem with long-range interactions. \textit{Comm. Pure Appl. Math.}, 66(8):1298-1333, 2013.
\bibitem{cristoferi} R. Cristoferi. On periodic critical points and local minimizers of the Ohta-Kawasaki functional. \textit{Nonlinear Anal.}, 168, 81-109, 2018.
\bibitem{fall} M. Fall. Periodic patterns for a model involving short-range and long-range interactions. \textit{Nonlinear Anal.}, 175:73-107, 2018.
\bibitem{figalli} A. Figalli, N. Fusco, F. Maggi, V. Millot and M. Morini. Isoperimetry and stability properties of balls with respect to nonlocal energies. \textit{Comm. Math. Phys.}, 336(1):441-507, 2015.
\bibitem{FABHZ} J. Foisy, M. Alfaro, J. Brock, N. Hodges, and J. Zimba. The standard double soap bubble in $\mathbb{R}^2$ uniquely minimizes perimeter. \textit{Pacific J. Math.}, 159(1):47-59, 1993.
\bibitem{fl} R. Frank and E. Lieb. A compactness lemma and its application to the existence of minimizers for the liquid drop model. \textit{SIAM J. Math. Anal.},47(6):4436-4450, 2015.
\bibitem{evolutionTer} K. Glasner. Evolution and competition of block copolymer nanoparticles. \textit{ SIAM J. Appl. Math.}, 79(1):28-54, 2019.
\bibitem{gc} K. Glasner and R. Choksi. Coarsening and self-organization in dilute diblock copolymer melts and mixtures. \textit{Phys. D}, 238:1241-1255, 2009.
\bibitem{GMSdensity} D. Goldman, C.B. Muratov and S. Serfaty. The $\Gamma$-limit of the two-dimensional Ohta-Kawasaki energy. I. droplet density. \textit{Arch. Rat. Mech. Anal.}, 210(2):581-613, 2013.
\bibitem{db1} J. Hass and R. Schlafly. Double bubbles minimize. \textit{Ann. Math.}, 151(2):459-515, 2000.
\bibitem{db2} M. Hutchings, F. Morgan, M. Ritor\'e, and A. Ros. Proof of the double bubble conjecture. \textit{Ann. Math.}, 155(2):459-489, 2002.
\bibitem{hnr} M. Helmers, B. Niethammer, and X. Ren. Evolution in off-critical diblock copolymer melts. \textit{Netw. Heterog. Media}, 3:615-632, 2008.
\bibitem{isenberg} C. Isenberg. The science of soap films and soap bubbles. Dover Publications, 1992.
\bibitem{Julin} V. Julin, Isoperimetric problem with a Coulomb repulsive term. \textit{Indiana Univ. Math. J.}, 63(1): 77-89, 2014.
\bibitem{Julin2} V. Julin and G. Pisante. Minimality via second variation for microphase separation of diblock copolymer melts. \textit{J. Reine Angew. Math.}, 729, 81, 2017.
\bibitem{Julin3} V. Julin. Remark on a nonlocal isoperimetric problem. \textit{Nonlinear Anal.}, 154:174-188, 2017.
\bibitem{luotto} J. Lu and F. Otto. An isoperimetric problem with Coulomb repulsion and attraction to a background nucleus, preprint, arXiv:1508.07172, 2015.
\bibitem{knupfer1} H. Kn\"upfer and C.B. Muratov. On an isoperimetric problem with a competing non-local term. I. The planar case. \textit{ Comm. Pure Appl. Math.}, 66, 1129-1162, 2013.
\bibitem{knupfer2} H. Kn\"upfer and C.B. Muratov. On an isoperimetric problem with a competing non-local term. II. The general case. \textit{ Comm. Pure Appl. Math.}, 67, 1974-1994, 2014.
\bibitem{lions} P.L. Lions, The Concentration-Compactness Principle in the Calculus of Variations. The Limit Case, Part I. \textit{ Rev. Mat. Iberoamercana}, 1(1):145-201, 1984.
\bibitem{ms} M. Morini and P. Sternberg. Cascade of minimizers for a nonlocal isoperimetric problem in thin domains. \textit{SIAM J. Math. Anal.}, 46(3):2033-2051, 2014.
\bibitem{microphase} H. Nakazawa and T. Ohta. Microphase separation of ABC-type triblock copolymers. \textit{Macromolecules}, 26(20):5503-5511, 1993.
\bibitem{nishiura} Y. Nishiura and I. Ohnishi. Some mathematical aspects of the micro-phase separation in diblock copolymers. \textit{Phys. D}, 84, 31-39, 1995.
\bibitem{maggi} F. Maggi. Sets of finite perimeter and geometric variational problems: An introduction to geometric measure theory. New York: Cambridge University Press. 2012.
\bibitem{Min} N. Min, T. Choi, S. Kim. Bicolored Janus microparticles created by phase separation in emulsion drops. \textit{Macromol. Chem. Phys.}, 218:1600265, 2017.
\bibitem{Mogi} Y. Mogi, {\it{et al.}}, Superlattice structures in morphologies of the ABC Triblock copolymers. \textit{Macromolecules}, 27(23): 6755-6760, 1994.
\bibitem{muratov} C.B. Muratov. Droplet phases in non-local Ginzburg-Landau models with Coulomb repulsion in two dimensions. \textit{Comm. Math. Phys.}, 299(1):45-87,2010.
\bibitem{oshita} Y. Oshita. Singular limit problem for some elliptic systems. \textit{SIAM J. Math. Anal.}, 38(6):1886-1911, 2007.
\bibitem{equilibrium} T. Ohta and K. Kawasaki. Equilibrium morphology of block copolymer melts. \textit{Macromolecules}, 19(10):2621-2632,1986.
\bibitem{stationary} X. Ren and C. Wang. A stationary core-shell assembly in a ternary inhibitory system. \textit{Discrete Contin. Dyn. Syst.}, 37(2):983-1012, 2017.
\bibitem{disc} X. Ren and C. Wang. Stationary disk assemblies in a ternary system with long range interaction. \textit{Commun. Contemp. Math.}, 1850046, 2018.
\bibitem{miniRW} X. Ren and J. Wei. On the multiplicity of solutions of two nonlocal variational problems. \textit{SIAM J. Math. Anal.}, 31(4):909-924, 2000.
\bibitem{lameRW} X. Ren and J.Wei. Triblock copolymer theory: Ordered ABC lamellar phase. \textit{J. Nonlinear Sci.}, 13(2):175-208, 2003.
\bibitem{rwtri} X. Ren and J. Wei. Triblock copolymer theory: free energy, disordered phase and weak segregation. \textit{Physica D}, 178(1-2):103-117, 2003.
\bibitem{many} X. Ren and J. Wei. Many droplet pattern in the cylindrical phase of diblock copolymer morphology. \textit{Rev. Math. Phys.}, 19(8):879-921, 2007.
\bibitem{spherical} X. Ren and J. Wei. Spherical solutions to a nonlocal free boundary problem from diblock copolymer morphology. \textit{SIAM J. Math. Anal.}, 39(5):1497-1535, 2008.
\bibitem{oval} X. Ren and J. Wei. Oval shaped droplet solutions in the saturation process of some pattern formation problems. \textit{SIAM J. Appl. Math.}, 70(4):1120-1138, 2009.
\bibitem{doubleAs} X. Ren and J. Wei. A double bubble assembly as a new phase of a ternary inhibitory system. \textit{Arch. Rat. Mech. Anal.}, 215(3):967-1034, 2015.
\bibitem{double} X. Ren and J. Wei. A double bubble in a ternary system with inhibitory long range interaction. \textit{SIAM J. Math. Anal.}, 46(4):2798-2852, 2014.
\bibitem{Schwarz} H. A. Schwarz. Beweis des Satze, dass die Kugel kleinere Oberfl\"ache besitzt, als jeder andere K\"orper gleichen Volumens. \textit{Nach. K\"oniglichen Ges. Wiss. G\"ottingen}, pages 1-13, 1884.
\bibitem{st} P. Sternberg and I. Topaloglu. A note on the global minimizers of the nonlocal isoperimetric problem in two dimensions. \textit{Interfaces Free Bound.}, 13(1):155-169, 2011.
\bibitem{ihsan} I. Topaloglu. On a nonlocal isoperimetric problem on the two-sphere. \textit{Comm. Pure Appl. Anal.}, 12(1):597-620, 2013.
\bibitem{wrz} C. Wang, X. Ren and Y. Zhao. Bubble assemblies in ternary systems with long range interaction. \textit{Comm. Math. Sci.}, accepted, 2019.
\end{thebibliography}
\end{document}
\begin{document}
\title{RIP-Based Near-Oracle Performance Guarantees for Subspace-Pursuit,
CoSaMP, and Iterative Hard-Thresholding}
\author{Raja Giryes and Michael Elad \\
{\small Department of Computer Science --
Technion -- Israel Institute of Technology} \\
{\small Haifa 32000, Israel} \\
{\small \{raja,elad\}@cs.technion.ac.il}}
\markboth{Preprint}
{R.~Giryes and M.~Elad}
\maketitle
\begin{abstract}
This paper presents an average case denoising performance analysis
for the Subspace Pursuit (SP), the CoSaMP and the IHT algorithms.
This analysis considers the recovery of a noisy signal, with the
assumptions that (i) it is corrupted by an additive random white
Gaussian noise; and (ii) it has a $K$-sparse representation with
respect to a known dictionary $\matr{D}$. The proposed analysis is
based on the Restricted-Isometry-Property (RIP), establishing a
near-oracle performance guarantee for each of these algorithms. The
results for the three algorithms differ in the bounds' constants and
in the cardinality requirement (the upper bound on $K$ for which the
claim is true).
Similar RIP-based analysis was carried out previously for the
Dantzig Selector (DS) and the Basis Pursuit (BP). Past work also
considered a mutual-coherence-based analysis of the denoising
performance of the DS, BP, the Orthogonal Matching Pursuit (OMP) and
the thresholding algorithms. This work differs from the above as it
addresses a different set of algorithms. Also, despite the fact that
SP, CoSaMP, and IHT are greedy-like methods, the performance
guarantees developed in this work resemble those obtained for the
relaxation-based methods (DS and BP), suggesting that the
performance is independent of the contrast and magnitude of the
sparse representation entries.
\end{abstract}
\section{Introduction}
\subsection{General -- Pursuit Methods for Denoising}
The area of sparse approximation (and compressed sensing as one
prominent manifestation of its applicability) is an emerging field
that has received much attention in the last decade. In one of the most basic
problems posed in this field, we consider a noisy measurement vector
$\vect{y}\in \mathbb R^{m}$ of the form
\begin{equation}
\label{eq:meas_vec}
\vect{y}=\matr{D}\vect{x}+\vect{e},
\end{equation}
where $\vect{x}\in\mathbb R^N$ is the signal's representation with
respect to the dictionary $\matr{D}\in \mathbb R^{m \times N} $ where $N
\ge m$. The vector $\vect{e}\in\mathbb R^m$ is additive noise, which
is assumed to be either an adversarial disturbance or a random vector --
e.g., white Gaussian noise with zero mean and variance $\sigma^2$.
We further assume that the columns of $\matr{D}$ are normalized, and
that the representation vector $\vect{x}$ is $K$-sparse, or nearly
so.\footnote{A more exact definition of nearly sparse vectors will
be given later on.} Our goal is to find the $K$-sparse vector
$\vect{x}$ that approximates the measured signal $\vect{y}$. Put
formally, this reads
\begin{equation}
\label{eq:formal_def}
\min_{\vect{x}}~\norm{\vect{y}-\matr{D}\vect{x}}_2~~\text{ subject
to }\norm{\vect{x}}_0=K,
\end{equation}
where $\norm{\vect{x}}_0$ is the $\ell_0$ pseudo-norm that counts
the number of non-zeros in the vector $\vect{x}$. This problem is
quite hard
\cite{Donoho06Stable,Donoho06OnTheStability,Tropp06JustRelax,Bruckstein09From}.
A straightforward search for the solution of (\ref{eq:formal_def})
is NP-hard, as it requires a combinatorial search over
the support of $\vect{x}$ \cite{NP-Hard}. For this reason,
approximation algorithms were proposed -- these are often referred
to as pursuit algorithms.
One popular pursuit approach is based on $\ell_1$ relaxation and
known as the Basis Pursuit (BP) \cite{Chen98overcomplete} or the
Lasso \cite{Tibshirani96Regression}. The BP aims at minimizing the
relaxed objective
\begin{eqnarray}
\label{eq:p_1} (P1):& \min_{\vect{x}} \norm{\vect{x}}_1 \text{ subject to } \norm{\vect{y} - \matr{D}\vect{x}}_2^2 \le
\epsilon_{BP}^2,
\end{eqnarray}
where $\epsilon_{BP}$ is a constant related to the noise power.
This minimization problem has an equivalent form:
\begin{eqnarray}
\label{eq:bp} (BP):& \min_{\vect{x}} & \frac{1}{2}\norm{\vect{y} -
\matr{D}\vect{x}}_2^2 + \gamma_{BP} \norm{\vect{x}}_1,
\end{eqnarray}
where $\gamma_{BP}$ is a constant related to $\epsilon_{BP}$.
Another $\ell_1$-based relaxed algorithm is the Dantzig Selector
(DS), as proposed in \cite{Candes07Dantzig}. The DS aims at
minimizing
\begin{eqnarray}
\label{eq:ds} (DS): \min_{\vect{x}} \norm{\vect{x}}_1 \text{
subject to } \norm{\matr{D}^*(\vect{y} - \matr{D}\vect{x})}_\infty
\le \epsilon_{DS},
\end{eqnarray}
where $\epsilon_{DS}$ is a constant related to the noise power.
A different pursuit approach towards the approximation of the
solution of (\ref{eq:formal_def}) is the greedy strategy
\cite{Chen89Orthogonal,MallatZhang93,Davis97Adaptive-greedy},
leading to algorithms such as the Matching Pursuit (MP) and the
Orthogonal Matching Pursuit (OMP). These algorithms build the
solution $\vect{x}$ one non-zero entry at a time, while greedily
aiming to reduce the residual error $\norm{\vect{y} -
\matr{D}\vect{x}}_2^2$.
The last family of pursuit methods we mention here are greedy-like
algorithms that differ from MP and OMP in two important ways: (i)
Rather than accumulating the desired solution one element at a time,
a group of non-zeros is identified together; and (ii) As opposed to
the MP and OMP, these algorithms enable removal of elements from the
detected support. Algorithms belonging to this group are the
Regularized OMP (ROMP) \cite{Needell10Signal}, the Compressive
Sampling Matching Pursuit (CoSaMP) \cite{Needell09CoSaMP}, the
Subspace-Pursuit (SP) \cite{Dai09Subspace}, and the Iterative Hard
Thresholding (IHT) \cite{Blumensath09Iterative}. This paper focuses
on this specific family of methods, as it poses an interesting
compromise between the simplicity of the greedy methods and the
strong abilities of the relaxed algorithms.
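In the same hedged spirit, IHT (the simplest member of this family) can be sketched in a few lines: a gradient step on the data-fidelity term followed by hard thresholding to the $K$ largest entries. The step size below is a standard sufficient condition for the residual not to increase; it is an illustrative choice, not the tuning analyzed in \cite{Blumensath09Iterative}.

```python
import numpy as np

def hard_threshold(v, K):
    """Keep the K largest-magnitude entries of v, zero out the rest."""
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-K:]
    out[idx] = v[idx]
    return out

def iht(D, y, K, n_iter=100):
    """Iterative Hard Thresholding sketch: x <- H_K(x + mu * D^T (y - D x)).
    mu <= 1/||D||_2^2 ensures the residual norm never increases (majorization)."""
    mu = 1.0 / np.linalg.norm(D, 2) ** 2
    x = np.zeros(D.shape[1])
    for _ in range(n_iter):
        x = hard_threshold(x + mu * D.T @ (y - D @ x), K)
    return x
```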
\subsection{Performance Analysis -- Basic Tools}
Recall that we aim at recovering the (deterministic!) sparse
representation vector $\vect{x}$. We measure the quality of the
approximate solution $\hat{\vect{x}}$ by the Mean-Squared-Error
(MSE)
\begin{eqnarray}
\label{eq:objective} \MSE(\hat{\vect{x}}) = E\norm{\vect{x}
-\hat{\vect{x}}}_2^2,
\end{eqnarray}
where the expectation is taken over the distribution of the noise.
Therefore, our goal is to make this error as small as possible. The
question is, how small can it be? In order to answer this
question, we first define two features that characterize the
dictionary $\matr{D}$ -- the mutual coherence and the Restricted
Isometry Property (RIP). Both are used extensively in formulating
the performance guarantees of the sort developed in this paper.
The mutual-coherence $\mu$
\cite{Donoho01Uncertainty,Elad02generalized,Donoho03Optimal} of a
matrix $\matr{D}$ is the largest absolute normalized inner product
between different columns from $\matr{D}$. The larger it is, the
more problematic the dictionary is, because in such a case we get
that columns in $\matr{D}$ are too much alike.
Turning to the RIP, $\matr{D}$ is said to satisfy the $K$-RIP
condition with constant $\delta_K$, defined as the smallest value
satisfying
\begin{equation}
(1-\delta_K) \norm{\vect{x}}_2^2 \le \norm{\matr{D}\vect{x}}_2^2 \le
(1+\delta_K) \norm{\vect{x}}_2^2
\end{equation}
for any $K$-sparse vector $\vect{x}$ \cite{Candes06Near,Candes05Decoding}.
These two measures are related by $\delta_K \le (K-1)\mu$
\cite{BenHaim09Coherence}. The RIP is a stronger descriptor of
$\matr{D}$ as it characterizes groups of $K$ columns from
$\matr{D}$, whereas the mutual coherence ``sees'' only pairs. On the
other hand, computing $\mu$ is easy, while the evaluation of
$\delta_K$ is prohibitive in most cases. An exception to this are
random matrices $\matr{D}$ for which the RIP constant is known (with
high probability). For example, if the entries of $\sqrt{m}\matr{D}$
are drawn from a white Gaussian distribution\footnote{The
multiplication by $\sqrt{m}$ serves to normalize the columns of the
effective dictionary $\matr{D}$.} and $m \ge
CK\log(N/K)/\epsilon^2$,
then with a very high probability $\delta_K \le \epsilon$
\cite{Candes06Near,Rudelson06Sparse}.
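To make these two measures concrete, the following Python sketch (ours; illustrative only) computes $\mu$ exactly from the Gram matrix, and a Monte Carlo \emph{lower} bound on $\delta_K$ by sampling random $K$-column submatrices, since the exact $\delta_K$ requires a combinatorial search over supports. Any sampled value must respect $\hat\delta_K \le \delta_K \le (K-1)\mu$.

```python
import numpy as np

def mutual_coherence(D):
    """Largest absolute inner product between distinct (unit-norm) columns."""
    G = np.abs(D.T @ D)
    np.fill_diagonal(G, 0.0)
    return G.max()

def rip_lower_bound(D, K, trials=200, seed=0):
    """Monte Carlo LOWER bound on delta_K: worst deviation of the squared
    singular values of sampled K-column submatrices from 1."""
    rng = np.random.default_rng(seed)
    N = D.shape[1]
    worst = 0.0
    for _ in range(trials):
        S = rng.choice(N, size=K, replace=False)
        s = np.linalg.svd(D[:, S], compute_uv=False)
        worst = max(worst, abs(s.max() ** 2 - 1), abs(s.min() ** 2 - 1))
    return worst
```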
We return now to the question we posed above: how small can the
error $\MSE(\hat{\vect{x}})$ be? Consider an oracle estimator that
knows the support of $\vect{x}$, i.e. the locations of the $K$
non-zeros in this vector. The oracle estimator obtained as a direct
solution of the problem posed in (\ref{eq:formal_def}) is easily
given by
\begin{eqnarray}
\label{eq:oracle} \hat{\vect{x}}_{oracle} = \matr{D}_T^\dag
\vect{y},
\end{eqnarray}
where $T$ is the support of $\vect{x}$ and $\matr{D}_T$ is a
sub-matrix of $\matr{D}$ that contains only the columns involved in
the support $T$. Its MSE is given by \cite{Candes07Dantzig}
\begin{eqnarray}
\label{eq:oracle_perf}\MSE(\hat{\vect{x}}_{oracle}) =
E\norm{\vect{x} - \hat{\vect{x}}_{oracle}}_2^2 =
E\norm{\matr{D}_T^\dag \vect{e}}_2^2.
\end{eqnarray}
In the case of a random noise, as described above, this error
becomes
\begin{eqnarray}
\label{eq:oracle_perf2} \MSE (\hat{\vect{x}}_{oracle}) &=&
E\norm{\matr{D}_T^\dag \vect{e}}_2^2 \\
\nonumber
& =& \trace\left\{(\matr{D}^*_T\matr{D}_T)^{-1}\right\} \sigma^2 \\
\nonumber
&\le& \frac{K}{1-\delta_K} \sigma^2.
\end{eqnarray}
This is the smallest possible error, and it is proportional to the
number of non-zeros $K$ multiplied by $\sigma^2$. It is natural to
ask how close do we get to this best error by practical pursuit
methods that do not assume the knowledge of the support. This brings
us to the next sub-section.
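The oracle estimator itself is straightforward to simulate (our sketch; names are illustrative): knowing the support $T$, the estimate is the least-squares fit $\matr{D}_T^\dag \vect{y}$, and the empirical MSE can be checked against the exact value $\sigma^2\trace\{(\matr{D}^*_T\matr{D}_T)^{-1}\}$ appearing in (\ref{eq:oracle_perf2}).

```python
import numpy as np

def oracle_mse(D, T, sigma, trials=2000, seed=0):
    """Empirical MSE of the oracle estimate x_hat = pinv(D_T) y for y = D_T x_T + e,
    averaged over Gaussian noise draws; exact value is sigma^2 * trace((D_T^* D_T)^{-1})."""
    rng = np.random.default_rng(seed)
    DT = D[:, T]
    pinv = np.linalg.pinv(DT)
    x_T = rng.standard_normal(len(T))   # arbitrary fixed coefficients on T
    y0 = DT @ x_T                        # noiseless measurement
    err = 0.0
    for _ in range(trials):
        e = sigma * rng.standard_normal(D.shape[0])
        x_hat = pinv @ (y0 + e)
        err += np.sum((x_hat - x_T) ** 2)
    return err / trials
```

Since $\hat{\vect{x}}_{oracle} - \vect{x}_T = \matr{D}_T^\dag \vect{e}$, the empirical average concentrates around the trace formula as the number of trials grows.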
\subsection{Performance Analysis -- Known Results}
There are various attempts to bound the MSE of pursuit algorithms.
Early works considered the adversary case, where the noise can admit
any form as long as its norm is bounded \cite{Tropp06Just,
Candes06Stable,Donoho06OnTheStability,Donoho06Stable}. These works
gave bounds on the reconstruction error in the form of a constant
factor ($Const> 1$) multiplying the noise power,
\begin{eqnarray}
\label{eq:perf_Advers} \norm{\vect{x} - \hat{\vect{x}}}^2_2 \le
Const \cdot \norm{\vect{e}}_2^2.
\end{eqnarray}
Notice that the cardinality of the representation plays no role in
this bound, and all the noise energy is manifested in the final
error.
One such example is the work by Cand\`{e}s and Tao, reported in
\cite{Candes05Decoding}, which analyzed the BP error. This work has
shown that if the dictionary $\matr{D}$ satisfies $\delta_{K}
+\delta_{2K} + \delta_{3K} < 1$ then the BP MSE is bounded by a
constant times the energy of the noise, as shown above. The
condition on the RIP was improved to $\delta_{2K} < \sqrt{2} -1 $ in
\cite{Candes08Comptes}. Similar tighter bounds are $\delta_{1.625K}
< \sqrt{2} - 1$ and $\delta_{3K} < 4 - 2\sqrt{3} $
\cite{Cai01Shifting}, or $\delta_{K} <0.307$ \cite{Cai01New}. The
advantage of using the RIP in the way described above is that it
gives a uniform guarantee: it is related only to the dictionary and
sparsity level.
Next in line to be analyzed were the greedy methods (MP, OMP, and thresholding)
\cite{Tropp06Just,Donoho06Stable}. Unlike the BP, these algorithms
were shown to be more sensitive, incapable of providing a uniform
guarantee for the reconstruction. Rather, beyond the dependence on
the properties of $\matr{D}$ and the sparsity level, the guarantees
obtained depend also on the ratio between the noise power and the
absolute values of the signal representation entries.
Interestingly, the greedy-like approach, as practiced in the ROMP,
the CoSaMP, the SP, and the IHT algorithms, was found to be closer
in spirit to the BP, all leading to uniform guarantees on the
bounded MSE. The ROMP was the first of these algorithms to be
analyzed \cite{Needell10Signal}, leading to the more strict
requirement $\delta_{2K}<0.03/\sqrt{\log K}$. The CoSaMP
\cite{Needell09CoSaMP} and the SP \cite{Dai09Subspace} that came
later have similar RIP conditions without the $\log K$ factor, where
the SP result is slightly better.
The IHT algorithm was also shown to have a uniform guarantee for
bounded error of the same flavor as shown above
\cite{Blumensath09Iterative}.
All the results mentioned above deal with adversarial noise, and
therefore give bounds that are related only to the noise power with
a coefficient that is larger than $1$, implying that no effective
denoising is to be expected. This is natural since we consider the
worst case results, where the noise can be concentrated in the
places of the non-zero elements of the sparse vector. To obtain
better results, one must change the perspective and consider a
random noise drawn from a certain distribution.
The first to realize this and exploit this alternative point of view
were Cand\`{e}s and Tao in the work reported in \cite{Candes07Dantzig}
that analyzed the DS algorithm. As mentioned above, the noise was
assumed to be random zero-mean white Gaussian noise with a known
variance $\sigma^2$. For the choice $\epsilon_{DS} = \sqrt{2(1+a)
\log{N}}\cdot\sigma$, and requiring $\delta_{2K} +\delta_{3K} < 1$,
the minimizer of (\ref{eq:ds}), $\hat{\vect{x}}_{DS}$, was shown to
obey
\begin{eqnarray}
\label{eq:ds_perf} \norm{\vect{x} - \hat{\vect{x}}_{DS}}^2_2 \le
C_{DS}^2 \cdot (2 (1+a) \log N) \cdot K \sigma^2,
\end{eqnarray}
with probability exceeding $1- (\sqrt{\pi (1+a)\log N}\cdot N^a)^{-1}$,
where $C_{DS} = 4/(1-2\delta_{3K})$.\footnote{In
\cite{Candes07Dantzig} a slightly different constant was presented.}
Up to a constant and a $\log N$ factor, this bound is the same as
the oracle's one in (\ref{eq:oracle_perf}). The $\log N$ factor in
(\ref{eq:ds_perf}) is unavoidable, as proven in
\cite{Candes06Modern}, and therefore this bound is optimal up to a
constant factor.
A similar result was presented in \cite{Bickel09Simultaneous} for
the BP, showing that the solution of (\ref{eq:bp}) for the choice
$\gamma_{BP} = \sqrt{8\sigma^2(1+a)\log N}$, and requiring
$\delta_{2K} + 3\delta_{3K} < 1$, satisfies
\begin{eqnarray}
\label{eq:bp_perf} \norm{\vect{x} - \hat{\vect{x}}_{BP}}^2_2 \le
C_{BP}^2 \cdot (2 (1+a) \log N) \cdot K \sigma^2
\end{eqnarray}
with probability exceeding $1- (N^a)^{-1}$. This result is weaker
than the one obtained for the DS in three ways: (i) It gives a
smaller probability of success; (ii) The constant $C_{BP}$ is
larger, as shown in \cite{BenHaim09Coherence} ($C_{BP} \ge
32/\kappa^4$, where $\kappa < 1$ is defined in
\cite{Bickel09Simultaneous}); and (iii) The condition on the RIP is
stronger.
Mutual-Coherence based results for the DS and BP were derived in
\cite{Cai09Stable,BenHaim09Coherence}. In \cite{BenHaim09Coherence}
results were developed also for greedy algorithms -- the OMP and the
thresholding. These results rely on the contrast and magnitude of
the entries of $\vect{x}$. Denoting by $\hat{\vect{x}}_{greedy}$ the
reconstruction result of the thresholding and the OMP, we have
\begin{eqnarray}
\label{eq:greedy_perf} \norm{\vect{x} - \hat{\vect{x}}_{greedy}}^2_2
\le C_{greedy}^2 \cdot (2 (1+a) \log N) \cdot K \sigma^2,
\end{eqnarray}
where $C_{greedy} \le 2$, with probability exceeding $1-
(\sqrt{\pi (1+a) \log N}\cdot N^a)^{-1}$. This result is true for
the OMP and thresholding under the condition
\begin{eqnarray}
\label{eq:OMP_and_THR_cond} \frac{\abs{\vect{x}_{min}} - 2\sigma
\sqrt{2(1+a)\log N }}{(2K-1)\mu} \ge \left\{
\begin{array}{cc}\abs{ \vect{x}_{min}} & \text{OMP} \\
|\vect{x}_{max}| & \text{THR} \end{array}\right.,
\end{eqnarray}
where $|\vect{x}_{min}|$ and $|\vect{x}_{max}|$ are the minimal and
maximal non-zero absolute entries in $\vect{x}$.
\subsection{This Paper's Contribution}
We have seen that greedy algorithms' success is dependent on the
magnitude of the entries of $\vect{x}$ and the noise power, which is
not the case for the DS and BP. It seems that there is a need for
pursuit algorithms that, on one hand, will enjoy the simplicity and
ease of implementation as in the greedy methods, while being
guaranteed to perform as well as the BP and DS. Could the
greedy-like methods (ROMP, CoSaMP, SP, IHT) serve this purpose? The
answer was shown to be positive under the adversarial noise assumption,
but these results are too weak, as they do not show the true
denoising effect that such algorithms may achieve. In this work we
show that the answer remains positive for the random noise
assumption.
More specifically, in this paper we present RIP-based near-oracle
performance guarantees for the SP, CoSaMP and IHT algorithms (in
this order). We show that these algorithms get uniform guarantees,
just as for the relaxation based methods (the DS and BP). We present
the analysis that leads to these results and we provide explicit
values for the constants in the obtained bounds.
The organization of this paper is as follows: In Section
\ref{sec:notation} we introduce the notation and propositions used
for our analysis. In Section~\ref{sec:near_orac_perf} we develop RIP-based
bounds for the SP, CoSaMP and the IHT algorithms for the adversarial case.
Then we show how we can derive from these a new set of guarantees for near
oracle performance that consider the noise as random.
We develop fully the steps for the SP, and
outline the steps needed to get the results for the CoSaMP and IHT.
In Section \ref{sec:experiments} we present some experiments that
show the performance of the three methods, and a comparison between
the theoretical bounds and the empirical performance. In Section
\ref{sec:non_exact_sparse} we consider the nearly-sparse case, extending
all the above results. Section \ref{sec:conc} concludes our work.
\section{Notation and Preliminaries}
\label{sec:notation} The following notations are used in this paper:
\begin{itemize}
\item $\supp(\vect{x})$ is the support of $\vect{x}$ (a set with the locations of
the non-zero elements of $\vect{x}$).
\item $\abs{\supp(\vect{x})}$ is the size of the set $\supp(\vect{x})$.
\item $\supp(\vect{x},K)$ is the support of the largest $K$
magnitude elements in $\vect{x}$.
\item $\matr{D}_T$ is the sub-matrix composed of the columns of
$\matr{D}$ indexed by the set $T$.
\item In a similar way, $\vect{x}_T$ is a vector composed of the entries
of the vector $\vect{x}$ over the set $T$.
\item $T^C$ symbolizes the complementary set of $T$.
\item $T - \tilde{T}$ is the set of all the elements contained
in $T$ but not in $\tilde{T}$.
\item We will denote by $T$ the set of the non-zero places of
the original signal $\vect{x}$; As such, $\abs{T} \le K$ when
$\vect{x}$ is $K$-sparse.
\item $\vect{x}_K$ is the vector with the $K$ dominant elements
of $\vect{x}$.
\item The projection of a vector $\vect{y}$ to the subspace
spanned by the columns of the matrix $\matr{A}$ (assumed to have
more rows than columns) is denoted by $\proj(\vect{y},\matr{A}) =
\matr{A}\matr{A}^\dag\vect{y}$. The residual is
$\resid(\vect{y},\matr{A}) = \vect{y} -
\matr{A}\matr{A}^\dag\vect{y}$.
\item $T_e$ is the subset of columns of size $K$ in $\matr{D}$ that
gives the maximum correlation with the noise vector $\vect{e}$,
namely,
\begin{equation}
\label{eq:T_e_def} T_e = \operatornamewithlimits{argmax}_{T:\,\abs{T}=K}
\norm{\matr{D}_{T}^*\vect{e}}_2.
\end{equation}
\item $T_{\vect{e},p}$ is a generalization of $T_{\vect{e}}$
where $T$ in (\ref{eq:T_e_def}) is of size $pK$, $p \in \mathbb{N}$.
It is clear that $\norm{\matr{D}_{T_{\vect{e},p}}^*\vect{e}}_2 \le
p\norm{\matr{D}_{T_{\vect{e}}}^*\vect{e}}_2$.
\end{itemize}
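The projection and residual operators in the list above admit a direct NumPy transcription (a minimal sketch of the definitions $\proj(\vect{y},\matr{A}) = \matr{A}\matr{A}^\dag\vect{y}$ and $\resid(\vect{y},\matr{A}) = \vect{y}-\matr{A}\matr{A}^\dag\vect{y}$; the function names mirror the notation, they are not library routines):

```python
import numpy as np

def proj(y, A):
    """Orthogonal projection of y onto the column span of A: A A^+ y."""
    return A @ (np.linalg.pinv(A) @ y)

def resid(y, A):
    """Projection residual y - A A^+ y; orthogonal to the columns of A."""
    return y - proj(y, A)
```

By construction, \texttt{A.T @ resid(y, A)} vanishes, i.e. the residual is orthogonal to the columns of $\matr{A}$.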
\noindent The proofs in this paper use several propositions from
\cite{Needell09CoSaMP, Dai09Subspace}. We restate them in this
section, so as to keep the discussion self-contained.
\begin{prop}[Proposition 3.1 in \cite{Needell09CoSaMP}]\label{prop1}
Suppose $\matr{D}$ has a restricted isometry constant $\delta_{K}$.
Let $T$ be a set of $K$ indices or fewer. Then
\begin{eqnarray*}
\norm{\matr{D}_T^*\vect{y}}_2 \le \sqrt{1+\delta_{K}}\norm{\vect{y}}_2 \\
\norm{\matr{D}_T^\dag\vect{y}}_2 \le \frac{1}{\sqrt{1-\delta_{K}}}\norm{\vect{y}}_2 \\
\norm{\matr{D}_T^*\matr{D}_T\vect{x}}_2 \lesseqqgtr (1\pm\delta_{K})\norm{\vect{x}}_2 \\
\norm{(\matr{D}_T^*\matr{D}_T)^{-1}\vect{x}}_2 \lesseqqgtr
\frac{1}{1\pm\delta_{K}}\norm{\vect{x}}_2
\end{eqnarray*}
where the last two statements contain upper and lower bounds,
depending on the sign chosen.
\end{prop}
\begin{prop}[Lemma 1 in \cite{Dai09Subspace}]\label{prop2} Consequences of the RIP:
\begin{enumerate}
\item (Monotonicity of $\delta_K$) For any two integers $K\le K'$,
$\delta_{K} \le \delta_{K'}$.
\item (Near-orthogonality of columns) Let $I,J \subset \{1,...,N\}$ be
two disjoint sets ($I \cap J = \emptyset$). Suppose that
$\delta_{\abs{I}+\abs{J}} < 1$. For arbitrary vectors $\vect{a} \in
\mathbb R^{\abs{I}}$ and $\vect{b}\in \mathbb R^{\abs{J}}$,
\begin{eqnarray}
\nonumber \abs{\langle \matr{D}_I \vect{a}, \matr{D}_J\vect{b}
\rangle} \le \delta_{\abs{I}+\abs{J}} \norm{\vect{a}}_2
\norm{\vect{b}}_2
\end{eqnarray}
and
\begin{eqnarray}
\nonumber
\norm{\matr{D}_I^*\matr{D}_J\vect{b}}_2 \le \delta_{\abs{I}+\abs{J}}
\norm{\vect{b}}_2.
\end{eqnarray}
\end{enumerate}
\end{prop}
\begin{prop}[Lemma 2 in \cite{Dai09Subspace}]\label{prop3} Projection and
Residue:
\begin{enumerate}
\item (Orthogonality of the residue) For an arbitrary
vector $\vect{y} \in \mathbb R^m$ and a sub-matrix $\matr{D}_I \in
\mathbb R^{m \times K}$ of full column-rank, let $\vect{y}_r =
\resid(\vect{y},\matr{D}_I)$. Then $\matr{D}^*_I\vect{y}_r = 0$.
\item (Approximation of the projection residue)
Consider a matrix $\matr{D} \in \mathbb R^{m \times N}$. Let $I,J
\subset \{1,...,N\}$ be two disjoint sets, $I\cap J = \emptyset$,
and suppose that $\delta_{\abs{I}+\abs{J}} < 1$. Let $\vect{y} \in
\spanned(\matr{D}_I)$, $\vect{y}_p = \proj(\vect{y},\matr{D}_J)$ and
$\vect{y}_r = \resid(\vect{y},\matr{D}_J)$. Then
\begin{eqnarray*}
\norm{\vect{y}_p}_2 \le \frac{\delta_{\abs{I}+\abs{J}}}{1-\delta_{\max(\abs{I},\abs{J})}}\norm{\vect{y}}_2
\end{eqnarray*}
and
\begin{eqnarray*}
\left( 1 -
\frac{\delta_{\abs{I}+\abs{J}}}{1-\delta_{\max(\abs{I},\abs{J})}}
\right)\norm{\vect{y}}_2 \le \norm{\vect{y}_r}_2 \le
\norm{\vect{y}}_2.
\end{eqnarray*}
\end{enumerate}
\end{prop}
\begin{prop}[Corollary 3.3 in \cite{Needell09CoSaMP}]\label{prop4}
Suppose that $\matr{D}$ has an RIP constant $\delta_{\tilde{K}}$.
Let $T_1$ be an arbitrary set of indices, and let $\vect{x}$ be a
vector. Provided that $\tilde{K} \ge \abs{T_1\cup \supp(\vect{x})}$,
we obtain that
\begin{eqnarray}
\norm{\matr{D}^*_{T_1} \matr{D}_{T^C_1}\vect{x}_{T^C_1}}_2 \le
\delta_{\tilde{K}} \norm{\vect{x}_{T^C_1}}_2.
\end{eqnarray}
\end{prop}
\section{Near oracle performance of the algorithms}
\label{sec:near_orac_perf}
Our goal in this section is to find error bounds for the SP, CoSaMP
and IHT reconstructions given the measurement from
(\ref{eq:meas_vec}). We will first find bounds for the case where
$\vect{e}$ is an adversarial noise, using the same techniques as in
\cite{Dai09Subspace,Needell09CoSaMP}. In these works and in
\cite{Blumensath09Iterative}, the reconstruction error was bounded
by a constant times the noise power in the same form as in
(\ref{eq:perf_Advers}). In this work, we will derive a bound that is
a constant times $\norm{\matr{D}_{T_\vect{e}}^*\vect{e}}_2$ (where
$T_\vect{e}$ is as defined in the previous section). Armed with this
bound, we will change perspective and look at the case where
$\vect{e}$ is a white Gaussian noise, and derive a near-oracle
performance result of the same form as in (\ref{eq:ds_perf}), using
the same tools used in \cite{Candes07Dantzig}.
\subsection{Near oracle performance of the SP algorithm}
\label{sec:SP_perf} We begin with the SP pursuit method, as
described in Algorithm~\ref{alg:SP}. SP maintains a temporary solution
with $K$ non-zero entries, and in each iteration it adds an
additional set of $K$ candidate non-zeros that are most correlated
with the residual, and prunes this list back to $K$ elements by
choosing the dominant ones. We use a constant number of iterations
as a stopping criterion but different stopping criteria can be
sought, as presented in \cite{Dai09Subspace}.
\begin{algorithm}
\caption{Subspace Pursuit Algorithm [Algorithm 1 in
\cite{Dai09Subspace}]} \label{alg:SP}
\begin{algorithmic}
\REQUIRE $K, \matr{D}, \vect{y}$ where $\vect{y} = \matr{D}\vect{x}
+ \vect{e}$, $K$ is the cardinality of $\vect{x}$ and $\vect{e}$ is
the additive noise.
\ENSURE $\hat{\vect{x}}_{SP}$: $K$-sparse approximation of
$\vect{x}$
\STATE Initialize the support: $T^0 =\emptyset$.
\STATE Initialize the residual: $\vect{y}_r^0 = \vect{y}$.
\WHILE{halting criterion is not satisfied}
\STATE Find new support elements: $T_\Delta =
\supp(\matr{D}^*\vect{y}^{\ell - 1}_r,K)$.
\STATE Update the support: $\tilde{T}^\ell = T^{\ell -1} \cup
T_\Delta$.
\STATE Compute the representation: $\vect{x}_p =
\matr{D}^{\dag}_{\tilde{T}^\ell}\vect{y}$.
\STATE Prune small entries in the representation: $T^\ell =
\supp(\vect{x}_p,K)$.
\STATE Update the residual: $\vect{y}_r^\ell =
\resid(\vect{y},\matr{D}_{T^\ell})$.
\ENDWHILE \STATE Form the final solution:
$\hat{\vect{x}}_{SP,(T^\ell)^C} = 0$ and
$\hat{\vect{x}}_{SP,T^\ell} = \matr{D}^{\dag}_{T^\ell}\vect{y}$.
\end{algorithmic}
\end{algorithm}
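A direct NumPy transcription of Algorithm~\ref{alg:SP} might look as follows (an illustrative sketch with a fixed iteration count; \texttt{subspace\_pursuit} is our own naming, not reference code from \cite{Dai09Subspace}):

```python
import numpy as np

def subspace_pursuit(D, y, K, n_iter=10):
    """Sketch of the Subspace Pursuit algorithm."""
    N = D.shape[1]
    T = np.array([], dtype=int)   # current support, |T| <= K
    y_r = y.copy()                # current residual
    for _ in range(n_iter):
        # Add the K columns most correlated with the residual.
        T_delta = np.argsort(np.abs(D.T @ y_r))[-K:]
        T_tilde = np.union1d(T, T_delta)
        # Least-squares representation on the merged support.
        x_t = np.linalg.pinv(D[:, T_tilde]) @ y
        # Prune back to the K largest-magnitude entries.
        keep = np.argsort(np.abs(x_t))[-K:]
        T = T_tilde[keep]
        # Update the residual by projecting y off the pruned support.
        y_r = y - D[:, T] @ (np.linalg.pinv(D[:, T]) @ y)
    # Final solution: least squares on the last support.
    x_hat = np.zeros(N)
    x_hat[T] = np.linalg.pinv(D[:, T]) @ y
    return x_hat
```

In the noiseless, well-conditioned regime the residual drops to zero once the true support is captured, after which the iterations are stable.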
\begin{thm} \label{thm:SP-1} The SP solution at the $\ell$-th iteration
satisfies the recurrence inequality
\begin{eqnarray}
\label{eq:SP_x_diff_bound} \norm{\vect{x}_{T-T^\ell}}_2 &\le&
\frac{2\delta_{3K}(1+\delta_{3K})}{(1-\delta_{3K})^3}
\norm{\vect{x}_{T-T^{\ell-1}}}_2 \\
\nonumber &&+
\frac{6 - 6\delta_{3K} + 4\delta_{3K}^2}
{(1-\delta_{3K})^3}
\norm{\matr{D}_{T_\vect{e}}^*\vect{e}}_2.
\end{eqnarray}
For $\delta_{3K} \le 0.139$ this leads to
\begin{equation}
\label{eq:SP_x_diff_bound_consts} \norm{\vect{x}_{T-T^\ell}}_2 \le
0.5\norm{\vect{x}_{T-T^{\ell-1}}}_2+8.22\norm{\matr{D}_{T_\vect{e}}^*\vect{e}}_2.
\end{equation}
\end{thm}
{\em Proof:} The proof of the inequality in
(\ref{eq:SP_x_diff_bound}) is given in Appendix
\ref{sec:SP_x_diff_bound_proof}. Note that the recursive formula
given (\ref{eq:SP_x_diff_bound}) has two coefficients, both
functions of $\delta_{3K}$. Fig.~\ref{fig:SP-Coefficients} shows
these coefficients as a function of $\delta_{3K}$. As can be seen,
under the condition $\delta_{3K} \le 0.139$, it holds that the
coefficient multiplying $\norm{\vect{x}_{T-T^{\ell-1}}}_2$ is less
than or equal to $0.5$, while the coefficient multiplying
$\norm{\matr{D}_{T_\vect{e}}^*\vect{e}}_2$ is less than or equal to
$8.22$, which completes our proof.
$\Box$
\begin{figure}
\caption{The coefficients in (\ref{eq:SP_x_diff_bound}) as functions of $\delta_{3K}$.}
\label{fig:SP-Coefficients}
\end{figure}
\begin{cor}
\label{cor:SP_bound_cor} Under the condition $\delta_{3K} \le
0.139$, the SP algorithm satisfies
\begin{equation}
\label{eq:SP_bound_consts} \norm{\vect{x}_{T-T^\ell}}_2 \le
2^{-\ell}\norm{\vect{x}}_2+2\cdot
8.22\norm{\matr{D}_{T_\vect{e}}^*\vect{e}}_2.
\end{equation}
In addition, after at most
\begin{equation}\label{eq:Num_of_Iter_SP}
\ell^* = \ceil{\log_2 \left(
\frac{\norm{\vect{x}}_2}{\norm{\matr{D}_{T_\vect{e}}^*\vect{e}}_2}
\right)}
\end{equation}
iterations, the solution $\hat{\vect{x}}_{SP}$ leads to an accuracy
\begin{equation}
\label{eq:SP_error_bound_consts} \norm{\vect{x} -
\hat{\vect{x}}_{SP}}_2 \le
C_{SP}\norm{\matr{D}_{T_\vect{e}}^*\vect{e}}_2,
\end{equation}
where
\begin{equation}
C_{SP} = 2\cdot \frac{7 -9\delta_{3K}+7\delta_{3K}^2 -
\delta_{3K}^3 }{(1-\delta_{3K})^4} \le 21.41
\end{equation}
\end{cor}
{\em Proof:} Starting with (\ref{eq:SP_x_diff_bound_consts}), and
applying it recursively we obtain
\begin{eqnarray}
&&\norm{\vect{x}_{T-T^\ell}}_2 \le
0.5\norm{\vect{x}_{T-T^{\ell-1}}}_2 +8.22\norm{\matr{D}_{T_\vect{e}}^*\vect{e}}_2
\\ \nonumber && ~~~~~\le
0.5^2\norm{\vect{x}_{T-T^{\ell-2}}}_2+8.22\cdot(0.5+1)
\norm{\matr{D}_{T_\vect{e}}^*\vect{e}}_2
\\ \nonumber && ~~~~~\le \ldots \\ \nonumber && ~~~~~\le
0.5^k\norm{\vect{x}_{T-T^{\ell-k}}}_2+ 8.22\cdot
\left(\sum_{j=0}^{k-1}
0.5^j\right)\norm{\matr{D}_{T_\vect{e}}^*\vect{e}}_2.
\end{eqnarray}
Setting $k=\ell$ leads easily to (\ref{eq:SP_bound_consts}), since
$\norm{\vect{x}_{T-T^{0}}}_2 = \norm{\vect{x}_{T}}_2 =
\norm{\vect{x}}_2$.
Plugging the number of iterations $\ell^*$ as in
(\ref{eq:Num_of_Iter_SP}) to (\ref{eq:SP_bound_consts})
yields\footnote{Note that we have replaced the constant $8.22$ with
the equivalent expression that depends on $\delta_{3K}$ -- see
(\ref{eq:SP_x_diff_bound}).}
\begin{eqnarray}\label{eq:temp_bound1}
&& \norm{\vect{x}_{T-T^{\ell^*}}}_2 \\ \nonumber && ~~~~~\le
2^{-\ell^*}\norm{\vect{x}}_2+2\cdot
\frac{6 - 6\delta_{3K} + 4\delta_{3K}^2}{(1-\delta_{3K})^3}
\norm{\matr{D}_{T_\vect{e}}^*\vect{e}}_2
\\ \nonumber && ~~~~~\le \left(1+ 2 \cdot \frac{6 - 6\delta_{3K} + 4\delta_{3K}^2}
{(1-\delta_{3K})^3}\right)
\norm{\matr{D}_{T_\vect{e}}^*\vect{e}}_2.
\end{eqnarray}
We define $\hat{T} \triangleq T^{\ell^*}$ and bound the
reconstruction error $\norm{\vect{x} - \hat{\vect{x}}_{SP}}_2$.
First, notice that $\vect{x} =
\vect{x}_{\hat{T}}+\vect{x}_{T-\hat{T}}$, simply
because the true support $T$ can be divided into\footnote{The vector
$\vect{x}_{\hat{T}}$ is of length $|\hat{T}|=K $ and it contains
zeros in locations that are outside $T$.} $\hat T$ and the
complementary part, $T-\hat{T}$. Using the facts that
$\hat{\vect{x}}_{SP} =\matr{D}^\dag_{\hat{T}}\vect{y}$, $\vect{y}=
\matr{D}_T\vect{x}_T+\vect{e}$, and the triangle inequality, we get
\begin{eqnarray}
&&\norm{\vect{x} - \hat{\vect{x}}_{SP}}_2\\ \nonumber && ~~~~~\le
\norm{\vect{x}_{\hat{T}} - \matr{D}^\dag_{\hat{T}}\vect{y}}_2 +
\norm{\vect{x}_{T-\hat{T}}}_2\\
\nonumber
&& ~~~~~= \norm{\vect{x}_{\hat{T}} - \matr{D}^\dag_{\hat{T}}(\matr{D}_T\vect{x}_T+
\vect{e})}_2 + \norm{\vect{x}_{T-\hat{T}}}_2 \\
\nonumber
&& ~~~~~\le \norm{\vect{x}_{\hat{T}} - \matr{D}^\dag_{\hat{T}}\matr{D}_T\vect{x}_T}_2
+ \norm{ \matr{D}^\dag_{\hat{T}}\vect{e}}_2+
\norm{\vect{x}_{T-\hat{T}}}_2.
\end{eqnarray}
We proceed by breaking the term $\matr{D}_T\vect{x}_T$ into the sum
$\matr{D}_{T\cap \hat{T}}\vect{x}_{T\cap \hat{T}} + \matr{D}_{T -
\hat{T}}\vect{x}_{T - \hat{T}}$, and obtain
\begin{IEEEeqnarray}{rCl}
\norm{\vect{x} - \hat{\vect{x}}_{SP}}_2 & \le& \norm{\vect{x}_{\hat{T}}
- \matr{D}^\dag_{\hat{T}}\matr{D}_{T\cap
\hat{T}}\vect{x}_{T\cap \hat{T}}}_2
\\ \nonumber
&& + \norm{\matr{D}^\dag_{\hat{T}}\matr{D}_{T-\hat{T}}\vect{x}_{T-\hat{T}}}_2
+ \norm{
(\matr{D}^*_{\hat{T}}\matr{D}_{\hat{T}})^{-1}\matr{D}^*_{\hat{T}}\vect{e}}_2+
\norm{\vect{x}_{T-\hat{T}}}_2.
\end{IEEEeqnarray}
The first term in the above inequality vanishes, since
$\matr{D}_{T\cap \hat{T}}\vect{x}_{T\cap \hat{T}} =
\matr{D}_{\hat{T}}\vect{x}_{\hat{T}}$ (recall that
$\vect{x}_{\hat{T}}$ outside the support $T$ has zero entries that
do not contribute to the multiplication). Thus, we get that
$\vect{x}_{\hat{T}} - \matr{D}^\dag_{\hat{T}}\matr{D}_{T\cap
\hat{T}}\vect{x}_{T\cap \hat{T}} = \vect{x}_{\hat{T}} -
\matr{D}^\dag_{\hat{T}}\matr{D}_{\hat{T}}\vect{x}_{\hat{T}}=0$. The
second term can be bounded using Propositions \ref{prop1} and \ref{prop2},
\begin{eqnarray}
\nonumber
&& \norm{\matr{D}^\dag_{\hat{T}}\matr{D}_{T-\hat{T}}\vect{x}_{T-\hat{T}}}_2
= \norm {(\matr{D}^*_{\hat{T}}\matr{D}_{\hat{T}})^{-1}
\matr{D}^*_{\hat{T}}\matr{D}_{T-\hat{T}}\vect{x}_{T-\hat{T}}}_2 \\
\nonumber && ~~~~~ \le \frac{1}{1-\delta_K} \norm{
\matr{D}^*_{\hat{T}}\matr{D}_{T-\hat{T}}\vect{x}_{T-\hat{T}}}_2
\le \frac{\delta_{2K}}{1-\delta_K} \norm {
\vect{x}_{T-\hat{T}}}_2.
\end{eqnarray}
Similarly, the third term is bounded using Propositions \ref{prop1},
and we obtain
\begin{IEEEeqnarray}{rCl}
\nonumber
\norm{\vect{x} - \hat{\vect{x}}_{SP}}_2 & \le &
\left( 1+\frac{\delta_{2K}}{1-\delta_{K}}\right)\norm{ \vect{x}_{T-\hat{T}}}_2 +
\frac{1}{1-\delta_{K}}\norm{ \matr{D}^*_{\hat{T}}\vect{e}}_2 \\
\nonumber & \le & \frac{1}{1-\delta_{3K}}\norm{
\vect{x}_{T-\hat{T}}}_2 + \frac{1}{1-\delta_{3K}}\norm{
\matr{D}^*_{\hat{T}}\vect{e}}_2,
\end{IEEEeqnarray}
where we have replaced $\delta_K$ and $\delta_{2K}$ with
$\delta_{3K}$, thereby bounding the existing expression from above.
Plugging (\ref{eq:temp_bound1}) into this inequality leads to
\begin{IEEEeqnarray}{rCl}
\nonumber
\norm{\vect{x} - \hat{\vect{x}}_{SP}}_2 & \le &
\frac{1}{1-\delta_{3K}}\left(2 + 2 \cdot \frac{6 - 6\delta_{3K} + 4\delta_{3K}^2}
{(1-\delta_{3K})^3} \right)\norm{ \matr{D}^*_{\hat{T}}\vect{e}}_2\\
\nonumber
& = & 2\cdot \frac{7 - 9\delta_{3K} + 7\delta_{3K}^2 - \delta_{3K}^3}
{(1-\delta_{3K})^4}\norm{ \matr{D}^*_{\hat{T}}\vect{e}}_2.
\end{IEEEeqnarray}
Applying the condition $\delta_{3K} \le 0.139$ on this equation
leads to the result in (\ref{eq:SP_error_bound_consts}).
$\Box$
For practical use we may suggest a simpler expression for $\ell^*$. Since
$\norm{\matr{D}_{T_\vect{e}}^*\vect{e}}_2$ is defined by the subset
that gives the maximal correlation with the noise, and it appears in
the denominator of $\ell^*$, it can be replaced with the average
correlation, thus $\ell^* \approx \ceil{\log_2 \left(
\norm{\vect{x}}_2/\sqrt{K}\sigma \right)}$.
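This rule of thumb is trivial to compute; the following hypothetical helper (our own naming, sketching the approximation above) implements it:

```python
import numpy as np

def sp_iteration_estimate(x_norm, K, sigma):
    """Practical estimate l* = ceil(log2(||x||_2 / (sqrt(K) * sigma)))."""
    return int(np.ceil(np.log2(x_norm / (np.sqrt(K) * sigma))))
```

For example, $\norm{\vect{x}}_2 = 10$, $K = 16$, and $\sigma = 0.05$ give $\ell^* = \lceil\log_2 50\rceil = 6$ iterations.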
Now that we have a bound for the SP algorithm for the adversarial
case, we proceed and consider a bound for the random noise case,
which will lead to a near-oracle performance guarantee for the SP
algorithm.
\begin{thm}
\label{thm:SP_oracle_thm} Assume that $\vect{e}$ is a white Gaussian
noise vector with variance $\sigma^2$ and that the columns of
$\matr{D}$ are normalized. If the condition $\delta_{3K}\le 0.139$
holds, then with probability exceeding $1-(\sqrt{\pi (1+a) \log{N}}
\cdot N^a)^{-1}$ we obtain
\begin{equation}
\label{eq:SP_error_bound_near_oracle} \norm{\vect{x} -
\hat{\vect{x}}_{SP}}_2^2 \le C_{SP}^2 \cdot (2(1+a)\log{N})\cdot K
\sigma^2.
\end{equation}
\end{thm}
{\em Proof:} Following Section 3 in \cite{Candes07Dantzig} it holds
true that
${\bf P}\left(\sup_i\abs{\matr{D}_i^*\vect{e}} > \sigma\cdot
\sqrt{2(1+a)\log{N}}\right) \le 1-(\sqrt{\pi (1+a) \log{N}}\cdot
N^a)^{-1}.$ Combining this with
(\ref{eq:SP_error_bound_consts}), and bearing in mind that
$|T_e|=K$, we get the stated result.
$\Box$
As can be seen, this result is similar to the one posed in
\cite{Candes07Dantzig} for the Dantzig-Selector, but with a
different constant -- the one corresponding to DS is $\approx 5.5$
for the RIP requirement used for the SP. For both algorithms,
smaller values of $\delta_{3K}$ provide smaller constants.
\subsection{Near oracle performance of the CoSaMP algorithm}
\label{sec:CoSaMP_perf}
We continue with the CoSaMP pursuit method, as described in
Algorithm~\ref{alg:CoSaMP}. CoSaMP, in a similar way to the SP,
maintains a temporary solution with $K$ non-zero entries, with the
difference that in each iteration it adds an additional set of $2K$
(instead of $K$) candidate non-zeros that are most correlated with
the residual. Another difference is that after the pruning step the
SP uses a matrix inversion in order to calculate a new projection for
the $K$ dominant elements, while the CoSaMP simply keeps the
largest $K$ elements. Here also, we use a constant number of
iterations as a stopping criterion, while different stopping criteria
can be sought, as presented in \cite{Needell09CoSaMP}.
\begin{algorithm}
\caption{CoSaMP Algorithm [Algorithm 2.1 in
\cite{Needell09CoSaMP}]} \label{alg:CoSaMP}
\begin{algorithmic}
\REQUIRE $K, \matr{D}, \vect{y}$ where $\vect{y} = \matr{D}\vect{x}
+ \vect{e}$, $K$ is the cardinality of $\vect{x}$ and $\vect{e}$ is
the additive noise.
\ENSURE $\hat{\vect{x}}_{CoSaMP}$: $K$-sparse approximation of
$\vect{x}$
\STATE Initialize the support: $T^0 =\emptyset$.
\STATE Initialize the residual: $\vect{y}_r^0 = \vect{y}$.
\WHILE{halting criterion is not satisfied}
\STATE Find new support elements: $T_\Delta =
\supp(\matr{D}^*\vect{y}^{\ell - 1}_r,2K)$.
\STATE Update the support: $\tilde{T}^\ell = T^{\ell -1} \cup
T_\Delta$.
\STATE Compute the representation: $\vect{x}_p =
\matr{D}^{\dag}_{\tilde{T}^\ell}\vect{y}$.
\STATE Prune small entries in the representation: $T^\ell =
\supp(\vect{x}_p,K)$.
\STATE Update the residual: $\vect{y}_r^\ell = \vect{y} - \matr{D}_{T^\ell}(\vect{x}_p)_{T^\ell}
$.
\ENDWHILE \STATE Form the final solution:
$\hat{\vect{x}}_{CoSaMP,(T^\ell)^C} = 0$ and
$\hat{\vect{x}}_{CoSaMP,T^\ell} = (\vect{x}_p)_{T^\ell}$.
\end{algorithmic}
\end{algorithm}
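The differences from the SP sketch are small, as a NumPy transcription of Algorithm~\ref{alg:CoSaMP} shows (again an illustrative sketch with a fixed iteration count; \texttt{cosamp} is our own naming, not reference code from \cite{Needell09CoSaMP}):

```python
import numpy as np

def cosamp(D, y, K, n_iter=10):
    """Sketch of CoSaMP: like SP, but adds 2K candidates per iteration
    and keeps the pruned least-squares values without re-projecting."""
    N = D.shape[1]
    x_hat = np.zeros(N)
    y_r = y.copy()
    for _ in range(n_iter):
        # 2K new candidates most correlated with the residual.
        T_delta = np.argsort(np.abs(D.T @ y_r))[-2 * K:]
        # Merge with the support of the current estimate.
        T_tilde = np.union1d(np.flatnonzero(x_hat), T_delta)
        # Least squares on the merged support.
        x_t = np.linalg.pinv(D[:, T_tilde]) @ y
        # Prune: keep the K largest entries and their LS values.
        keep = np.argsort(np.abs(x_t))[-K:]
        x_hat = np.zeros(N)
        x_hat[T_tilde[keep]] = x_t[keep]
        # Residual against the pruned estimate (no extra projection).
        y_r = y - D @ x_hat
    return x_hat
```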
In the analysis of the CoSaMP that comes next, we follow the same
steps as for the SP to derive a near-oracle performance guarantee.
Since the proofs are very similar to those of the SP, and those
found in \cite{Needell09CoSaMP}, we omit most of the derivations and
present only the differences.
\begin{thm} The CoSaMP solution at the $\ell$-th iteration
satisfies the recurrence inequality\footnote{The observant reader
will notice a delicate difference in terminology between this
theorem and Theorem \ref{thm:SP-1}. While here the recurrence
formula is expressed with respect to the estimation error,
$\norm{\vect{x}-\hat{\vect{x}}_{CoSaMP}^\ell}_2$, Theorem
\ref{thm:SP-1} uses a slightly different error measure,
$\norm{\vect{x}_{T-T^\ell}}_2$.}
\begin{IEEEeqnarray}{rCl}
\label{eq:CoSaMP_x_diff_bound}
\norm{\vect{x}-\hat{\vect{x}}_{CoSaMP}^\ell}_2 &\le &
\frac{4\delta_{4K}}{(1-\delta_{4K})^2}\norm{\vect{x}-\hat{\vect{x}}_{CoSaMP}^{\ell-1}}_2
\\ \nonumber && + \frac{14-6\delta_{4K} }{(1-\delta_{4K})^2}
\norm{\matr{D}_{T_{\vect{e}}}^*\vect{e}}_2
\end{IEEEeqnarray}
For $\delta_{4K} \le 0.1$ this leads to
\begin{equation}
\label{eq:CoSaMP_x_diff_bound_consts} \norm{\vect{x}-\hat{\vect{x}}_{CoSaMP}^\ell}_2 \le
0.5\norm{\vect{x}-\hat{\vect{x}}_{CoSaMP}^{\ell-1}}_2+16.6\norm{\matr{D}_{T_\vect{e}}^*\vect{e}}_2.
\end{equation}
\end{thm}
{\em Proof:} The proof of the inequality in
(\ref{eq:CoSaMP_x_diff_bound}) is given in Appendix
\ref{sec:CoSaMP_x_diff_bound_proof}. In a similar way to the proof
in the SP case, under the condition $\delta_{4K} \le 0.1$, it holds
that the coefficient multiplying
$\norm{\vect{x}-\hat{\vect{x}}_{CoSaMP}^{\ell-1}}_2$ is smaller
than or equal to $0.5$, while the coefficient multiplying
$\norm{\matr{D}_{T_\vect{e}}^*\vect{e}}_2$ is smaller than or equal to
$16.6$, which completes our proof.
$\Box$
\begin{cor}
\label{cor:CoSaMP_bound_cor} Under the condition $\delta_{4K} \le
0.1$, the CoSaMP algorithm satisfies
\begin{equation}
\label{eq:CoSaMP_bound_consts} \norm{\vect{x}-\hat{\vect{x}}_{CoSaMP}^\ell}_2 \le
2^{-\ell}\norm{\vect{x}}_2+2\cdot
16.6\norm{\matr{D}_{T_\vect{e}}^*\vect{e}}_2.
\end{equation}
In addition, after at most
\begin{equation}\label{eq:Num_of_Iter_CoSaMP}
\ell^* = \ceil{\log_2 \left(
\frac{\norm{\vect{x}}_2}{\norm{\matr{D}_{T_\vect{e}}^*\vect{e}}_2}
\right)}
\end{equation}
iterations, the solution $\hat{\vect{x}}_{CoSaMP}$ leads to an accuracy
\begin{equation}
\label{eq:CoSaMP_error_bound_consts} \norm{\vect{x}-\hat{\vect{x}}_{CoSaMP}^\ell}_2 \le
C_{CoSaMP}\norm{\matr{D}_{T_\vect{e}}^*\vect{e}}_2,
\end{equation}
where
\begin{equation}
C_{CoSaMP} = \frac{29-14\delta_{4K} + \delta_{4K}^2
}{(1-\delta_{4K})^2} \le 34.1.
\end{equation}
\end{cor}
{\em Proof:} Starting with (\ref{eq:CoSaMP_x_diff_bound_consts}),
and applying it recursively, in the same way as was done in the
proof of Corollary \ref{cor:SP_bound_cor}, we obtain
\begin{eqnarray}
\norm{\vect{x}-\hat{\vect{x}}_{CoSaMP}^\ell}_2 & \le &
0.5^k\norm{\vect{x}-\hat{\vect{x}}_{CoSaMP}^{\ell-k}}_2 \\ \nonumber
&&+ 16.6\cdot \left(\sum_{j=0}^{k-1}
0.5^j\right)\norm{\matr{D}_{T_\vect{e}}^*\vect{e}}_2.
\end{eqnarray}
Setting $k=\ell$ leads easily to (\ref{eq:CoSaMP_bound_consts}), since
$\norm{\vect{x}-\hat{\vect{x}}_{CoSaMP}^0}_2 = \norm{\vect{x}}_2$.
Plugging the number of iterations $\ell^*$ as in
(\ref{eq:Num_of_Iter_CoSaMP}) to
(\ref{eq:CoSaMP_bound_consts}) yields\footnote{As before, we replace
the constant $16.6$ with the equivalent expression that depends on
$\delta_{4K}$ -- see (\ref{eq:CoSaMP_x_diff_bound}).}
\begin{IEEEeqnarray}{rcl}
\nonumber
\norm{\vect{x}-\hat{\vect{x}}_{CoSaMP}^\ell}_2
& \le & 2^{-\ell^*}\norm{\vect{x}}_2+2\cdot \frac{14-6\delta_{4K}
}{(1-\delta_{4K})^2}\norm{\matr{D}_{T_\vect{e}}^*\vect{e}}_2
\\ \nonumber ~~~~~& \le & \left(1+2\cdot
\frac{14-6\delta_{4K} }{(1-\delta_{4K})^2}\right)
\norm{\matr{D}_{T_\vect{e}}^*\vect{e}}_2
\\ \nonumber ~~~~~& \le & \frac{29-14\delta_{4K}+ \delta_{4K}^2 }
{(1-\delta_{4K})^2}\norm{\matr{D}_{T_\vect{e}}^*\vect{e}}_2.
\end{IEEEeqnarray}
Applying the condition $\delta_{4K} \le 0.1$ on this equation
leads to the result in (\ref{eq:CoSaMP_error_bound_consts}).
$\Box$
As for the SP, we move now to the random noise case, which leads to
a near-oracle performance guarantee for the CoSaMP algorithm.
\begin{thm}
\label{thm:CoSaMP_oracle_thm} Assume that $\vect{e}$ is a white
Gaussian noise vector with variance $\sigma^2$ and that the columns
of $\matr{D}$ are normalized. If the condition $\delta_{4K}\le 0.1$
holds, then with probability exceeding $1-(\sqrt{\pi (1+a) \log{N}}
\cdot N^a)^{-1}$ we obtain
\begin{equation}
\label{eq:CoSaMP_error_bound_near_oracle} \norm{\vect{x} -
\hat{\vect{x}}_{CoSaMP}}_2^2 \le C_{CoSaMP}^2 \cdot (2(1+a)\log{N})\cdot K
\sigma^2.
\end{equation}
\end{thm}
{\em Proof:} The proof is identical to the one of Theorem
\ref{thm:SP_oracle_thm}.
$\Box$
Fig.~\ref{fig:C_SP_CoSaMP} shows a graph of $C_{CoSaMP}$ as a
function of $\delta_{4K}$. In order to compare the CoSaMP to SP, we
also introduce in this figure a graph of $C_{SP}$ versus
$\delta_{4K}$ (replacing $\delta_{3K}$). Since $\delta_{3K} \le
\delta_{4K}$, the constant $C_{SP}$ is actually better than the
values shown in the graph, and yet, it can be seen that even in this
case we get $C_{SP}<C_{CoSaMP}$. In addition, the requirement for
the SP is expressed with respect to $\delta_{3K}$, while the
requirement for the CoSaMP is stronger and uses $\delta_{4K}$.
\begin{figure}
\caption{The constants of the SP and CoSaMP algorithms as a function of $\delta_{4K}$.}
\label{fig:C_SP_CoSaMP}
\end{figure}
With comparison to the results presented in
\cite{BenHaim09Coherence} for the OMP and the thresholding, the
results obtained for the CoSaMP and SP are uniform, expressed only
with respect to the properties of the dictionary $\matr{D}$. These
algorithms' validity is not dependent on the values of the input
vector $\vect{x}$, its energy, or the noise power. The condition
used is the RIP, which implies constraints only on the used
dictionary and the sparsity level.
\subsection{Near oracle performance of the IHT algorithm}
\label{sec:IHT_perf}
The IHT algorithm, presented in Algorithm \ref{alg:IHT}, uses a
different strategy than the SP and the CoSaMP. It applies only
multiplications by $\matr{D}$ and $\matr{D}^*$, and a hard
thresholding operator. In each iteration it calculates a new
representation and keeps its $K$ largest elements. As for the SP and
CoSaMP, here as well we employ a fixed number of iterations as a
stopping criterion.
\begin{algorithm}
\caption{IHT Algorithm [Equation 7 in \cite{Blumensath09Iterative}]} \label{alg:IHT}
\begin{algorithmic}
\REQUIRE $K, \matr{D}, \vect{y}$ where $\vect{y} = \matr{D}\vect{x}
+ \vect{e}$, $K$ is the cardinality of $\vect{x}$ and $\vect{e}$ is
the additive noise.
\ENSURE $\hat{\vect{x}}_{IHT}$: $K$-sparse approximation of
$\vect{x}$
\STATE Initialize the support: $T^0 =\emptyset$.
\STATE Initialize the representation: $\vect{x}^0_{IHT} = 0$.
\WHILE{halting criterion is not satisfied}
\STATE Compute the representation: $\vect{x}_p =
\hat{\vect{x}}_{IHT}^{\ell-1} + \matr{D}^*(\vect{y} -
\matr{D}\hat{\vect{x}}_{IHT}^{\ell-1})$.
\STATE Prune small entries in the representation: $T^\ell =
\supp(\vect{x}_p,K)$.
\STATE Update the representation: $\hat{\vect{x}}_{IHT,(T^\ell)^C}^{\ell} = 0$ and
$\hat{\vect{x}}_{IHT,T^\ell}^{\ell} = (\vect{x}_p)_{T^\ell}$.
\ENDWHILE \STATE Form the final solution:
$\hat{\vect{x}}_{IHT,(T^\ell)^C} = 0$ and
$\hat{\vect{x}}_{IHT,T^\ell} = (\vect{x}_p)_{T^\ell}$.
\end{algorithmic}
\end{algorithm}
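The pseudo-code above translates almost line-for-line into NumPy; the following is a minimal sketch (function and variable names are ours), using a fixed iteration count as the stopping criterion:

```python
import numpy as np

def iht(D, y, K, n_iter=50):
    """Iterative hard thresholding: a gradient step followed by
    keeping the K largest-magnitude entries (Algorithm IHT)."""
    x = np.zeros(D.shape[1])
    for _ in range(n_iter):
        x_p = x + D.T @ (y - D @ x)        # x_p = x^{l-1} + D*(y - D x^{l-1})
        T = np.argsort(np.abs(x_p))[-K:]   # T^l = supp(x_p, K)
        x = np.zeros_like(x_p)
        x[T] = x_p[T]                      # prune: keep the K largest entries
    return x
```

On an orthogonal dictionary the update reduces to $\vect{x}_p = \matr{D}^*\vect{y}$ and the method converges in a single iteration; for general dictionaries the analysis below requires $\delta_{3K} \le 1/\sqrt{32}$.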
Results similar to those of the SP and CoSaMP methods can be derived
for the IHT method. Again, the proofs are very similar to the ones
shown before for the SP and the CoSaMP, and thus only the
differences will be presented.
\begin{thm} The IHT solution at the $\ell$-th iteration
satisfies the recurrence inequality
\begin{IEEEeqnarray}{c}
\label{eq:IHT_x_diff_bound} \norm{\vect{x} -
\hat{\vect{x}}^\ell_{IHT}}_2 \le
\sqrt{8}\delta_{3K}\norm{\vect{x}-\hat{\vect{x}}_{IHT}^{\ell-1}}_2 + 4\norm{
\matr{D}^*_{T_{\vect{e}}}\vect{e}}_2.
\end{IEEEeqnarray}
For $\delta_{3K} \le \frac{1}{\sqrt{32}}$ this leads to
\begin{IEEEeqnarray}{c}
\label{eq:IHT_x_diff_bound_consts} \norm{\vect{x} -
\hat{\vect{x}}^\ell_{IHT}}_2 \le
0.5\norm{\vect{x}-\hat{\vect{x}}_{IHT}^{\ell-1}}_2 + 4\norm{
\matr{D}^*_{T_{\vect{e}}}\vect{e}}_2.
\end{IEEEeqnarray}
\end{thm}
{\em Proof:} Our proof is based on that of Theorem 5 in
\cite{Blumensath09Iterative}, and only the major modifications are
presented here. Using the definition $\vect{r}^\ell
\triangleq \vect{x}-\hat{\vect{x}}_{IHT}^\ell$, and an inequality
taken from Equation (22) in \cite{Blumensath09Iterative}, it holds
that
\begin{IEEEeqnarray}{rcl}
\label{eq:IHT_proof_step1}
&& \norm{\vect{x} - \hat{\vect{x}}^\ell_{IHT}}_2 \le
2 \norm{\vect{x}_{T\cup T^\ell} - (\vect{x}_p)_{T\cup T^\ell}}_2 \\
\nonumber && ~~~~= 2 \norm{\vect{x}_{T\cup T^\ell} -
\hat{\vect{x}}^{\ell-1}_{T\cup T^\ell} -\matr{D}^*_{T\cup
T^\ell}\matr{D}\vect{r}^{\ell-1} - \matr{D}^*_{T\cup
T^\ell}\vect{e}}_2
\\ \nonumber && ~~~~ = 2 \norm{\vect{r}_{T\cup
T^\ell}^{\ell-1} -\matr{D}^*_{T\cup T^\ell}\matr{D}\vect{r}^{\ell-1}
- \matr{D}^*_{T\cup T^\ell}\vect{e}}_2,
\end{IEEEeqnarray}
where the equality emerges from the definition $\vect{x}_p =
\hat{\vect{x}}_{IHT}^{\ell-1} + \matr{D}^*(\vect{y} -
\matr{D}\hat{\vect{x}}_{IHT}^{\ell-1}) =
\hat{\vect{x}}_{IHT}^{\ell-1} + \matr{D}^*(\matr{D}\vect{x} +
\vect{e}- \matr{D}\hat{\vect{x}}_{IHT}^{\ell-1}).$
The support of $\vect{r}^{\ell-1}$ is over $T\cup T^{\ell-1} $ and
thus it is also over $T \cup T^\ell \cup T^{\ell -1}$. Based on
this, we can divide $\matr{D}\vect{r}^{\ell - 1}$ into a part
supported on $T^{\ell-1}- T\cup T^\ell$ and a second part supported
on $T\cup T^\ell$. Using this and the triangle inequality with
(\ref{eq:IHT_proof_step1}), we obtain
\begin{IEEEeqnarray}{rCl}
\label{eq:IHT_proof_step2}
&&\norm{\vect{x} - \hat{\vect{x}}^\ell_{IHT}}_2 \\ \nonumber && ~~ \le 2
\norm{\vect{r}^{\ell-1}_{T\cup T^\ell} -
\matr{D}^*_{T\cup T^\ell}\matr{D}\vect{r}^{\ell-1}}_2 +
2\norm{ \matr{D}^*_{T\cup T^\ell}\vect{e}}_2 \\
\nonumber && ~~ = 2\left\Vert (\matr{I} - \matr{D}^*_{T\cup
T^\ell}\matr{D}_{T\cup T^\ell}) \vect{r}^{\ell-1}_{T\cup T^\ell} \right. \\ \nonumber &&
~~~~\left. -\matr{D}^*_{T\cup T^\ell}\matr{D}_{T^{\ell-1} - T\cup
T^\ell}\vect{r}^{\ell-1}_{T^{\ell-1} - T\cup T^\ell} \right\Vert _2
+2\norm{ \matr{D}^*_{T\cup T^\ell}\vect{e}}_2 \\
\nonumber && ~~
\le 2\norm{(\matr{I} - \matr{D}^*_{T\cup T^\ell}\matr{D}_{T\cup T^\ell})
\vect{r}^{\ell-1}_{T\cup T^\ell}}_2 \\ \nonumber && ~~~~
+ 2\norm{ \matr{D}^*_{T\cup T^\ell}
\matr{D}_{T^{\ell-1} - T\cup T^\ell}\vect{r}^{\ell-1}_{T^{\ell-1} - T\cup T^\ell}}_2
+ 2\norm{ \matr{D}^*_{T_{\vect{e},2}}\vect{e}}_2 \\
\nonumber && ~~ \le 2\delta_{2K}\norm{\vect{r}^{\ell-1}_{T\cup
T^\ell}}_2 + 2\delta_{3K}\norm{\vect{r}^{\ell-1}_{T^{\ell-1} - T\cup
T^\ell}}_2 + 4\norm{ \matr{D}^*_{T_{\vect{e}}}\vect{e}}_2.
\end{IEEEeqnarray}
The last inequality holds because the eigenvalues of $(\matr{I} -
\matr{D}^*_{T\cup T^\ell}\matr{D}_{T\cup T^\ell})$ lie in the range
$[-\delta_{2K},\delta_{2K}]$, the size of the set $T\cup T^\ell$ is
at most $2K$, the sets $T\cup T^\ell$ and $T^{\ell-1} - T\cup
T^\ell$ are disjoint, and their total size together is at most
$3K$. Note that we have used the definition of
$T_{\vect{e},2}$ as given in Section \ref{sec:notation}.
We proceed by observing that $\norm{\vect{r}^{\ell-1}_{T^{\ell-1} -
T\cup T^\ell}}_2 + \norm{\vect{r}^{\ell-1}_{T\cup T^\ell}}_2 \le
\sqrt{2}\norm{\vect{r}^{\ell-1}}_2$, since these vectors are
orthogonal. Using the fact that $\delta_{2K} \le \delta_{3K}$ we get
(\ref{eq:IHT_x_diff_bound}) from (\ref{eq:IHT_proof_step2}).
Finally, under the condition $\delta_{3K} \le 1/\sqrt{32}$, it holds
that the coefficient multiplying
$\norm{\vect{x}-\hat{\vect{x}}_{IHT}^{\ell-1}}_2$ is smaller than or
equal to $0.5$ (indeed, $\sqrt{8}/\sqrt{32} = 0.5$), which completes
our proof.
$\Box$
\begin{cor}
\label{cor:IHT_bound_cor} Under the condition $\delta_{3K} \le
1/\sqrt{32}$, the IHT algorithm satisfies
\begin{equation}
\label{eq:IHT_bound_consts} \norm{\vect{x}-\hat{\vect{x}}_{IHT}^\ell}_2 \le
2^{-\ell}\norm{\vect{x}}_2+
8\norm{\matr{D}_{T_\vect{e}}^*\vect{e}}_2.
\end{equation}
In addition, after at most
\begin{equation}\label{eq:Num_of_Iter_IHT}
\ell^* = \ceil{\log_2 \left(
\frac{\norm{\vect{x}}_2}{\norm{\matr{D}_{T_\vect{e}}^*\vect{e}}_2}
\right)}
\end{equation}
iterations, the solution $\hat{\vect{x}}_{IHT}$ leads to an accuracy
\begin{equation}
\label{eq:IHT_error_bound_consts} \norm{\vect{x}-\hat{\vect{x}}_{IHT}^\ell}_2 \le
C_{IHT}\norm{\matr{D}_{T_\vect{e}}^*\vect{e}}_2,
\end{equation}
where
\begin{equation}
C_{IHT} = 9.
\end{equation}
\end{cor}
{\em Proof:} The proof is obtained following the same steps as in
Corollaries \ref{cor:SP_bound_cor} and \ref{cor:CoSaMP_bound_cor}.
$\Box$
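The constant $C_{IHT}=9$ can be checked by unrolling the contraction (\ref{eq:IHT_x_diff_bound_consts}) numerically; in the sketch below, $r_0$ and $e$ are arbitrary stand-ins for $\norm{\vect{x}}_2$ and $\norm{\matr{D}_{T_\vect{e}}^*\vect{e}}_2$:

```python
import math

def unroll(r0, e, n):
    """Unroll the recurrence r_l <= 0.5 * r_{l-1} + 4 * e for n iterations."""
    r = r0
    for _ in range(n):
        r = 0.5 * r + 4.0 * e
    return r

r0, e = 100.0, 0.3                      # stand-in values of our choosing
l_star = math.ceil(math.log2(r0 / e))   # iteration count (eq:Num_of_Iter_IHT)
bound = unroll(r0, e, l_star)
```

After $\ell^*$ iterations the transient term $2^{-\ell^*}\norm{\vect{x}}_2$ has dropped below $\norm{\matr{D}_{T_\vect{e}}^*\vect{e}}_2$, and the geometric series of noise terms contributes at most $8$, giving $C_{IHT} = 1 + 8 = 9$.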
Finally, considering a random noise instead of an adversarial one, we
get a near-oracle performance guarantee for the IHT algorithm, as
was achieved for the SP and CoSaMP.
\begin{thm}
\label{thm:IHT_oracle_thm} Assume that $\vect{e}$ is a white
Gaussian noise with variance $\sigma^2$ and that the columns of
$\matr{D}$ are normalized. If the condition $\delta_{3K}\le
1/\sqrt{32}$ holds, then with probability exceeding $1-(\sqrt{\pi
(1+a) \log{N}} \cdot N^a)^{-1}$ we obtain
\begin{equation}
\label{eq:IHT_error_bound_near_oracle} \norm{\vect{x} -
\hat{\vect{x}}_{IHT}}_2^2 \le C_{IHT}^2 \cdot (2(1+a)\log{N})\cdot K
\sigma^2.
\end{equation}
\end{thm}
{\em Proof:} The proof is identical to the one of Theorem
\ref{thm:CoSaMP_oracle_thm}.
$\Box$
A comparison between the constants achieved by the IHT, SP and DS is
presented in Fig.~\ref{fig:C_SP_IHT_DS}. The CoSaMP constant was
omitted since it is larger than that of the SP and it depends on
$\delta_{4K}$ rather than $\delta_{3K}$. The figure shows that the
constants of the IHT and the DS are better than that of the SP (and
as such better than that of the CoSaMP), and that the constant of
the DS is the smallest. It is interesting to note that the constant
of the IHT is independent of $\delta_{3K}$.
\begin{figure}
\caption{The constants of the SP, IHT and DS algorithms as a function
of $\delta_{3K}$.}
\label{fig:C_SP_IHT_DS}
\end{figure}
In Table \ref{tbl:comparison} we summarize the performance
guarantees of several different algorithms -- the DS
\cite{Candes07Dantzig}, the BP \cite{Bickel09Simultaneous}, and the
three algorithms analyzed in this paper.
\begin{table*}[!t]
\begin{center}\small{
\begin{tabular}{|l|l|l|l|l|}
\hline Alg. & RIP Condition & Probability of Correctness
& Constant & The Obtained Bound \\
\hline DS & $\delta_{2K} + \delta_{3K} \le 1$ & $1-(\sqrt{\pi
(1+a)\log{N}}\cdot N^a)^{-1}$ & $ \frac{4}{1-2\delta_{3K}} $ &
$C_{DS}^2 \cdot (2(1+a)\log{N})\cdot K \sigma^2$ \\ \hline BP &
$\delta_{2K} + 3\delta_{3K} \le 1$ & $1-(N^a)^{-1}$ &
$>\frac{32}{\kappa^4}$ & $C_{BP}^2 \cdot (2(1+a)\log{N})\cdot K
\sigma^2$ \\ \hline SP & $\delta_{3K}\le 0.139$ & $ 1-(\sqrt{\pi
(1+a)\log{N}}\cdot N^a)^{-1}$ &$ \le 21.41$ & $C_{SP}^2 \cdot
(2(1+a)\log{N})\cdot K \sigma^2$ \\ \hline CoSaMP & $\delta_{4K}
\le 0.1$ & $ 1-(\sqrt{\pi (1+a)\log{N}}\cdot N^a)^{-1}$ & $ \le
34.2$ & $C_{CoSaMP}^2 \cdot (2(1+a)\log{N})\cdot K \sigma^2$ \\
\hline IHT & $\delta_{3K} \le \frac{1}{\sqrt{32}}$ & $ 1-(\sqrt{\pi
(1+a)\log{N}}\cdot N^a)^{-1}$
& $9$ & $C_{IHT}^2 \cdot (2(1+a)\log{N})\cdot K \sigma^2$ \\
\hline
\end{tabular}}
\caption{Near oracle performance guarantees for the DS, BP, SP,
CoSaMP and IHT techniques.} \label{tbl:comparison}
\end{center}
\end{table*}
\noindent We can observe the following:
\begin{enumerate}
\item In terms of the RIP: DS and BP are the best, then IHT, then SP and last is
CoSaMP.
\item In terms of the constants in the bounds: the smallest
constant is achieved by DS. Then come IHT, SP, CoSaMP and BP in this
order.
\item In terms of the probability: all have the same probability except
the BP which gives a weaker guarantee.
\item Though the CoSaMP has a weaker guarantee compared to the SP, it has
an efficient implementation that saves the matrix inversion in the
algorithm.\footnote{The proofs of the guarantees in this paper are
not valid for this case, though it is not hard to extend them for
it.}
\end{enumerate}
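The constant column of Table \ref{tbl:comparison} can be compared numerically; the sketch below (the function name is ours) evaluates the DS constant $4/(1-2\delta_{3K})$ at the IHT admissible limit $\delta_{3K}=1/\sqrt{32}$:

```python
import math

def C_DS(delta3K):
    """Dantzig-selector constant 4 / (1 - 2*delta_{3K}) from the table."""
    return 4.0 / (1.0 - 2.0 * delta3K)

delta = 1.0 / math.sqrt(32.0)   # IHT's admissible limit delta_{3K} <= 1/sqrt(32)
c_ds = C_DS(delta)              # approximately 6.19
c_iht = 9.0
```

Even at this relatively large $\delta_{3K}$ the DS constant stays below $C_{IHT}=9$, consistent with item 2 above.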
For completeness of the discussion here, we also refer to the
algorithms' complexity: the IHT is the cheapest, the CoSaMP and SP
come next with a similar complexity (with a slight advantage to the
CoSaMP), and the DS and BP seem to be the most complex.
Interestingly, the guarantees of the OMP and the thresholding in
\cite{BenHaim09Coherence} exhibit better constants. However, these
results, as mentioned before, hold under mutual-coherence based
conditions, which are more restrictive. In addition, their validity
relies on the magnitude of the entries of $\vect{x}$ and the noise
power, which is not the case for the results presented in this
section for the greedy-like methods. Furthermore, though we get
bigger constants with these methods, the conditions are not tight,
as will be seen in the next section.
\section{Experiments}
\label{sec:experiments}
In our experiments we use a random dictionary with entries drawn
from the standard normal distribution. The columns of the dictionary
are normalized and the dimensions are $m=512$ and $N=1024$. The
vector $\vect{x}$ is generated by selecting a support uniformly at
random. Then the elements in the support are generated using the
following model\footnote{This model is taken from the experiments
section in \cite{Candes07Dantzig}.}:
\begin{equation}
\vect{x}_i = 10\epsilon_i(1+ \abs{n_i})
\end{equation}
where $\epsilon_i$ is $\pm 1$ with probability $0.5$, and $n_i$ is a
standard normal random variable. The support and the non-zero values
are statistically independent. We repeat each experiment $1500$
times.
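The setup just described can be reproduced with a few lines of NumPy; the sketch below follows our protocol with the same $m$, $N$ and signal model (variable names are ours):

```python
import numpy as np

rng = np.random.default_rng(0)
m, N, K = 512, 1024, 10

# Dictionary with i.i.d. standard-normal entries and normalized columns.
D = rng.standard_normal((m, N))
D /= np.linalg.norm(D, axis=0)

# Support drawn uniformly at random; amplitudes x_i = 10*eps_i*(1 + |n_i|).
T = rng.choice(N, size=K, replace=False)
eps = rng.choice([-1.0, 1.0], size=K)
n = rng.standard_normal(K)
x = np.zeros(N)
x[T] = 10.0 * eps * (1.0 + np.abs(n))

y = D @ x + rng.standard_normal(m)   # measurements with sigma = 1 noise
```

Note that the model forces $\abs{\vect{x}_i} \ge 10$ on the support, so every non-zero entry stands well above the unit noise level.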
In the first experiment we calculate the error of the SP, CoSaMP and
IHT methods for different sparsity levels. The noise variance is set
to $\sigma = 1$. Fig.~\ref{fig:sparsity_guarantee} presents the
squared-error $\norm{\vect{x}-\vect{\hat x}}_2^2$ of all the
instances of the experiment for the three algorithms. Our goal is to
show that with high-probability the error obtained is bounded by the
guarantees we have developed. For each algorithm we add the
theoretical guarantee and the oracle performance. As can be seen,
the theoretical guarantees are too loose and the actual performance
of the algorithms is much better. However, we see that both the
theoretical and the empirical performance curves show a
proportionality to the oracle error. Note that the actual
performance of the algorithms may be better than the oracle's --
this happens because the oracle is the Maximum-Likelihood Estimator
(MLE) in this case \cite{BenHaim10Cramer}, and by adding a bias one
can perform even better in some cases.
\begin{figure}
\caption{The squared-error as
achieved by the SP, the CoSaMP and the IHT algorithms as a function
of the cardinality. The graphs also show the theoretical guarantees
and the oracle performance.}
\label{fig:sp_sparsity_guarantee}
\label{fig:cosamp_sparsity_guarantee}
\label{fig:IHT_sparsity_guarantee}
\label{fig:sparsity_guarantee}
\end{figure}
Fig.~\ref{fig:sparsity_error_range_sp_cosamp_iht} presents the
mean-squared-error (by averaging all the experiments) for the range
where the RIP-condition seems to hold, and
Fig.~\ref{fig:sparsity_error_wide_sp_cosamp_iht} presents this error for
a wider range, where it is likely to be violated. It can be seen
that in the average case, though the algorithms get different
constants in their bounds, they achieve almost the same performance.
We also see a near-linear curve describing the error as a function
of $K$. Finally, we observe that the SP and the CoSaMP, which were
shown to have worse constants in theory, have better performance and
are more stable in the case where the RIP-condition does not hold
anymore.
\begin{figure}
\caption{The
mean-squared-error of the SP, the CoSaMP and the IHT algorithms as a
function of the
cardinality.}
\label{fig:sparsity_error_range_sp_cosamp_iht}
\label{fig:sparsity_error_wide_sp_cosamp_iht}
\label{fig:sparsity_error_sp_cosamp_iht}
\end{figure}
In a second experiment we calculate the error of the SP, the CoSaMP
and the IHT methods for different noise variances. The sparsity is
set to $K = 10$. Fig.~\ref{fig:sigma_guarantee} presents the error
of all the instances of the experiment for the three algorithms.
Here as well we add the theoretical guarantee and the oracle
performance. As we saw before, the guarantee is not tight but the
error is proportional to the oracle estimator's error.
\begin{figure}
\caption{The squared-error as
achieved by the SP, the CoSaMP and the IHT algorithms as a function
of the noise variance. The graphs also show the theoretical
guarantees and the oracle performance.}
\label{fig:sp_sigmas_guarantee}
\label{fig:cosamp_sigma_guarantee}
\label{fig:IHT_sigma_guarantee}
\label{fig:sigma_guarantee}
\end{figure}
Fig.~\ref{fig:power_error_sp_cosamp_iht} presents the
mean-squared-error as a function of the noise variance, by averaging
over all the experiments. It can be seen that the error behaves
linearly with respect to the variance, as expected from the
theoretical analysis. Again we see that the constants are not tight
and that the algorithms behave in a similar way. Finally, we note
that the algorithms succeed in meeting the bounds even in very low
signal-to-noise ratios, where simple greedy algorithms are expected
to fail.
\begin{figure}
\caption{The mean-squared-error of the SP, the CoSaMP and the IHT
algorithms as a function of the noise
variance.}
\label{fig:power_error_sp_cosamp_iht}
\end{figure}
\section{Extension to the non-exact sparse case}
\label{sec:non_exact_sparse}
In the case where $\vect{x}$ is not exactly $K$-sparse, our analysis
has to change. Following the work reported in
\cite{Needell09CoSaMP}, we have the following error bounds for all
algorithms (with the different RIP condition and constant).
\begin{thm}
For the SP, CoSaMP and IHT algorithms, under their appropriate RIP
conditions, it holds that after at most
\begin{equation}
\ell^* = \ceil{\log_2 \left( \frac{\norm{\vect{x}}_2}
{\norm{\matr{D}_{T_\vect{e}}^*\vect{e}}_2} \right)}
\end{equation}
iterations, the estimation $\hat{\vect{x}}$ gives an accuracy of the
form
\begin{IEEEeqnarray}{rCl}
\label{eq:approx_error_bound_consts}
\norm{\vect{x} - \hat{\vect{x}}}_2 &\le& C
\bigg( \norm{\matr{D}_{T_\vect{e}}^*\vect{e}}_2 \\ \nonumber && + (1+\delta_K)
\norm{\vect{x} - \vect{x}_K}_2 + \frac{1+\delta_K}{\sqrt{K}}
\norm{\vect{x} - \vect{x}_K}_1 \bigg) .
\end{IEEEeqnarray}
where $\vect{x}_K$ is a $K$-sparse vector that nulls all entries in
$\vect{x}$ apart from the $K$ dominant ones. $C$ is the appropriate
constant value, dependent on the algorithm.
If we assume that $\vect{e}$ is a white Gaussian noise with variance
$\sigma^2$ and that the columns of $\matr{D}$ are normalized, then
with probability exceeding $1-(\sqrt{\pi (1+a)\log{N}}\cdot N^a)^{-1}$ we
get that
\begin{IEEEeqnarray}{rCl}
\label{eq:approx_error_bound_near_oracle}
\norm{\vect{x} - \hat{\vect{x}}}_2^2 &\le& 2\cdot C^2
\bigg( \sqrt{(1+a)\log{N}\cdot K} \cdot \sigma \\ \nonumber && ~~~~~~~~+
\norm{\vect{x} - \vect{x}_K}_2 + \frac{1}{\sqrt{K}}
\norm{\vect{x} - \vect{x}_K}_1 \bigg)^2.
\end{IEEEeqnarray}
\end{thm}
{\em Proof:} Proposition 3.5 from \cite{Needell09CoSaMP} provides us
with the following claim
\begin{eqnarray}
\label{eq:near_approx_bound}
\norm{\matr{D}\vect{x}}_2 \le \sqrt{1+\delta_K}
\left( \norm{\vect{x}}_2 +\frac{1}{\sqrt{K}}\norm{\vect{x}}_1 \right).
\end{eqnarray}
When $\vect{x}$ is not exactly $K$-sparse we get that the effective
error in our results becomes $\tilde{\vect{e}} = \vect{e} +
\matr{D}(\vect{x} - \vect{x}_K)$. Thus, using the error bounds of
the algorithms with the inequality in (\ref{eq:near_approx_bound})
we get
\begin{eqnarray}
\norm{\vect{x} - \hat{\vect{x}}}_2 &\le& C
\norm{\matr{D}_{T_\vect{e}}^*\vect{\tilde{e}}}_2 \\
\nonumber
& \le & C\norm{\matr{D}_{T_\vect{e}}^*\left(\vect{e} +
\matr{D}(\vect{x} - \vect{x}_K)\right)}_2 \\
\nonumber
& \le & C\norm{\matr{D}_{T_\vect{e}}^*\vect{e}}_2 +
C\norm{\matr{D}_{T_\vect{e}}^*\matr{D}(\vect{x} - \vect{x}_K)}_2 \\
\nonumber
& \le & C\norm{\matr{D}_{T_\vect{e}}^*\vect{e}}_2 +
C\sqrt{1+\delta_{K}}\norm{\matr{D}(\vect{x} - \vect{x}_K)}_2 \\
\nonumber & \le & C\bigg( \norm{\matr{D}_{T_\vect{e}}^*\vect{e}}_2
+ (1+\delta_K)\norm{\vect{x} - \vect{x}_K}_2 \\ \nonumber && +
\frac{1+\delta_K}{\sqrt{K}}\norm{\vect{x} - \vect{x}_K}_1 \bigg),
\end{eqnarray}
which proves (\ref{eq:approx_error_bound_consts}). Using the same
steps taken in Theorems \ref{thm:SP_oracle_thm},
\ref{thm:CoSaMP_oracle_thm}, and \ref{thm:IHT_oracle_thm} leads us
to
\begin{IEEEeqnarray}{rCl}
\label{eq:approx_error_bound_near_oracle_proof} \norm{\vect{x} -
\hat{\vect{x}}}_2^2 &\le& C^2\bigg( \sqrt{(2(1+a)\log{N})\cdot K}
\cdot \sigma \\ \nonumber &&+ (1+\delta_{K})\norm{\vect{x} - \vect{x}_K}_2 +
\frac{1+\delta_{K}}{\sqrt{K}}\norm{\vect{x} - \vect{x}_K}_1
\bigg)^2 .
\end{IEEEeqnarray}
Since the RIP condition for all the algorithms satisfies $\delta_K
\le \sqrt{2} - 1$, plugging this into
(\ref{eq:approx_error_bound_near_oracle_proof}) gives
(\ref{eq:approx_error_bound_near_oracle}), and this concludes the
proof.
$\Box$
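A sketch of the best $K$-term approximation $\vect{x}_K$ used throughout this section (the helper name is ours):

```python
import numpy as np

def best_K_term(x, K):
    """x_K: keep the K largest-magnitude entries of x, null the rest."""
    xK = np.zeros_like(x)
    keep = np.argsort(np.abs(x))[-K:]
    xK[keep] = x[keep]
    return xK
```

With this helper, the effective noise appearing in the proof above is $\vect{e} + \matr{D}(\vect{x}-\vect{x}_K)$, which vanishes back to $\vect{e}$ when $\vect{x}$ is exactly $K$-sparse.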
Just as before, we should wonder how close this bound is to the one
obtained by an oracle that knows the support $T$ of the $K$ dominant
entries in $\vect{x}$. Following \cite{Blumensath09Iterative}, we
derive an expression for such an oracle. Using the fact that the
oracle is given by $\matr{D}_T^\dag \vect{y} = \matr{D}_T^\dag
(\matr{D}\vect{x}_T+\matr{D}(\vect{x}-\vect{x}_T)+\vect{e})$, its
MSE is bounded by
\begin{IEEEeqnarray}{rcl}
&&E\norm{\vect{x} - \hat{\vect{x}}_{oracle}}_2^2 = E\norm{\vect{x} -
\matr{D}_T^\dag \vect{y}}_2^2 \\
\nonumber && = E\norm{\vect{x} - \vect{x}_T -\matr{D}_T^\dag
\vect{e} - \matr{D}_T^\dag
\matr{D}(\vect{x} - \vect{x}_T)}_2^2 \\
\nonumber && \le \left( \norm{\vect{x} - \vect{x}_T}_2 + \norm{
\matr{D}_T^\dag \matr{D}(\vect{x} - \vect{x}_T)}_2 +
E\norm{\matr{D}_T^\dag \vect{e} }_2 \right)^2,
\end{IEEEeqnarray}
where we have used the triangle inequality. Using the relation given
in (\ref{eq:oracle_perf2}) for the last term, and
properties of the RIP for the second, we obtain
\begin{IEEEeqnarray}{rcl}
&&E\norm{\vect{x} - \hat{\vect{x}}_{oracle}}_2^2 \le \\ \nonumber && ~~\left(
\norm{\vect{x} - \vect{x}_T}_2 + \frac{1}{\sqrt{1-\delta_{K}}}\norm{
\matr{D}(\vect{x} - \vect{x}_T)}_2 +
\frac{\sqrt{K}}{\sqrt{1-\delta_K}}\sigma \right)^2.
\end{IEEEeqnarray}
Finally, the middle-term can be further handled using
(\ref{eq:near_approx_bound}), and we arrive to
\begin{IEEEeqnarray}{rCl}
E\norm{\vect{x} - \hat{\vect{x}}_{oracle}}_2^2 &\le& \frac{1}{1-\delta_K}\bigg( (1
+\sqrt{1+\delta_{K}}) \norm{\vect{x} - \vect{x}_T}_2 \\ \nonumber &&+
\frac{\sqrt{1+\delta_{K}}}{\sqrt{K}} \norm{\vect{x} - \vect{x}_K}_1
+ \sqrt{K}\sigma \bigg)^2.
\end{IEEEeqnarray}
Thus we see again that the error bound in the non-exact sparse
case is, up to a constant and the $\log{N}$ factor, the same as the
one of the oracle estimator.
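The oracle estimator $\matr{D}_T^\dag \vect{y}$ discussed above can be sketched as a least-squares fit on the known support (a minimal implementation; names are ours):

```python
import numpy as np

def oracle_estimate(D, y, T):
    """Oracle estimator D_T^dagger y: least squares on the known support T."""
    sol, *_ = np.linalg.lstsq(D[:, T], y, rcond=None)
    x_hat = np.zeros(D.shape[1])
    x_hat[T] = sol
    return x_hat
```

In the noiseless, exactly $K$-sparse case this estimator recovers $\vect{x}$ exactly; the bounds above quantify its degradation under approximate sparsity and noise.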
\section{Conclusion}
\label{sec:conc} In this paper we have presented near-oracle
performance guarantees for three greedy-like algorithms -- the
Subspace Pursuit, the CoSaMP, and the Iterative Hard-Thresholding.
The approach taken in our analysis is RIP-based (as opposed to
mutual-coherence based). Despite their resemblance to greedy
algorithms, such as the OMP and the thresholding, our study leads to
uniform guarantees for the three algorithms explored, i.e., the
near-oracle error bounds are dependent only on the dictionary
properties (RIP constant) and the sparsity level of the sought
solution. We have also presented a simple extension of our results
to the case where the representations are only approximately sparse.
\section*{Acknowledgment}
The authors would like to thank Zvika Ben-Haim for fruitful
discussions and relevant references to existing literature, which
helped in shaping this work.
\appendices
\section{Proof of Theorem \ref{thm:SP-1} -- inequality (\ref{eq:SP_x_diff_bound})}
\label{sec:SP_x_diff_bound_proof}
In the proof of (\ref{eq:SP_x_diff_bound}) we use two main inequalities:
\begin{eqnarray}
\label{eq:x_diff_tilde_prev_bound}
\norm{\vect{x}_{T-\tilde{T}^\ell}}_2 &\le& \frac{2\delta_{3K}}
{(1-\delta_{3K})^2}\norm{\vect{x}_{T-{T}^{\ell-1}}}_2 \\ \nonumber &&+
\frac{2}{(1-\delta_{3K})^2}
\norm{\matr{D}_{T_\vect{e}}^*\vect{e}}_2,
\end{eqnarray}
and
\begin{eqnarray}
\label{eq:x_diff_curr_tilde_bound} \norm{\vect{x}_{T-T^\ell}}_2 &\le&
\frac{1+\delta_{3K}}{1-\delta_{3K}}\norm{\vect{x}_{T-\tilde{T}^\ell}}_2 \\ \nonumber &&
+ \frac{4}{1-\delta_{3K}}\norm{\matr{D}_{T_{\vect{e}}}^*\vect{e}}_2.
\end{eqnarray}
Their proofs are in Appendices
\ref{sec:x_diff_tilde_prev_bound_proof} and
\ref{sec:x_diff_curr_tilde_bound_proof} respectively. The inequality
(\ref{eq:SP_x_diff_bound}) is obtained by substituting
(\ref{eq:x_diff_tilde_prev_bound}) into
(\ref{eq:x_diff_curr_tilde_bound}) as shown below:
\begin{IEEEeqnarray}{rCl}
\norm{\vect{x}_{T-T^\ell}}_2 &\le& \frac{1+\delta_{3K}}
{1-\delta_{3K}}\norm{\vect{x}_{T-\tilde{T}^{\ell}}}_2+\frac{4}
{1-\delta_{3K}}\norm{\matr{D}_{T_{\vect{e}}}^*\vect{e}}_2 \\
\nonumber
&\le& \frac{1+\delta_{3K}}{1-\delta_{3K}}\left[\frac{2\delta_{3K}}
{(1-\delta_{3K})^2}\norm{\vect{x}_{T-T^{\ell-1}}}_2\right. \\{} \nonumber && \left.+ \frac{2}
{(1-\delta_{3K})^2}\norm{\matr{D}_{T_\vect{e}}^*\vect{e}}_2 \right]+
\frac{4}{1-\delta_{3K}}\norm{\matr{D}_{T_{\vect{e}}}^*\vect{e}}_2 \\
\nonumber
&\le& \frac{2\delta_{3K}(1+\delta_{3K})}{(1-\delta_{3K})^3}
\norm{\vect{x}_{T-T^{\ell-1}}}_2 \\ \nonumber &&+\frac{2(1+\delta_{3K})}
{(1-\delta_{3K})^3}\norm{\matr{D}_{T_\vect{e}}^*\vect{e}}_2 +\frac{4}
{1-\delta_{3K}}\norm{\matr{D}_{T_{\vect{e}}}^*\vect{e}}_2
\\ \nonumber &\le&
\frac{2\delta_{3K}(1+\delta_{3K})}{(1-\delta_{3K})^3}
\norm{\vect{x}_{T-T^{\ell-1}}}_2 \\ \nonumber &&+\frac{6 - 6\delta_{3K} + 4\delta_{3K}^2}
{(1-\delta_{3K})^3}
\norm{\matr{D}_{T_\vect{e}}^*\vect{e}}_2,
\end{IEEEeqnarray}
and this concludes the proof.
$\Box$
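The coefficient collection in the last chain of inequalities can be verified numerically; in this sketch $a$ and $e$ are stand-ins for $\norm{\vect{x}_{T-T^{\ell-1}}}_2$ and $\norm{\matr{D}_{T_\vect{e}}^*\vect{e}}_2$:

```python
def lhs(delta, a, e):
    """Substituting (eq:x_diff_tilde_prev_bound) into
    (eq:x_diff_curr_tilde_bound), before collecting coefficients."""
    return ((1 + delta) / (1 - delta)) * (
        2 * delta / (1 - delta) ** 2 * a + 2 / (1 - delta) ** 2 * e
    ) + 4 / (1 - delta) * e

def rhs(delta, a, e):
    """The collected form with coefficients 2d(1+d)/(1-d)^3
    and (6 - 6d + 4d^2)/(1-d)^3."""
    return (2 * delta * (1 + delta) / (1 - delta) ** 3 * a
            + (6 - 6 * delta + 4 * delta ** 2) / (1 - delta) ** 3 * e)
```

The two sides agree exactly, since $2(1+\delta) + 4(1-\delta)^2 = 6 - 6\delta + 4\delta^2$.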
\section{Proof of inequality (\ref{eq:x_diff_tilde_prev_bound})}
\label{sec:x_diff_tilde_prev_bound_proof}
\begin{lem}\label{lemma:A1}
The following inequality holds true for the SP algorithm:
\begin{eqnarray}
\nonumber \norm{\vect{x}_{T-\tilde{T}^\ell}}_2 &\le& \frac{2\delta_{3K}}
{(1-\delta_{3K})^2}\norm{\vect{x}_{T-{T}^{\ell-1}}}_2 \\ \nonumber &&+
\frac{2}{(1-\delta_{3K})^2}
\norm{\matr{D}_{T_\vect{e}}^*\vect{e}}_2.
\end{eqnarray}
\end{lem}
{\em Proof:} We start by the residual-update step in the SP
algorithm, and exploit the relation $\vect{y} =
\matr{D}\vect{x}+\vect{e} =
\matr{D}_{T-T^{\ell-1}}\vect{x}_{T-T^{\ell-1}} + \matr{D}_{T\cap
T^{\ell-1}}\vect{x}_{T\cap T^{\ell-1}} +\vect{e}$. This leads to
\begin{IEEEeqnarray}{rCl}
\label{eq:resid_l1_composition_first}
\vect{y}_r^{\ell - 1} &=& \resid(\vect{y},\matr{D}_{T^{\ell-1}}) \\
\nonumber
&=& \resid(\matr{D}_{T-T^{\ell-1}}\vect{x}_{T-T^{\ell-1}},\matr{D}_{T^{\ell-1}}) \\
\nonumber
&& + \resid(\matr{D}_{T\cap T^{\ell-1}}
\vect{x}_{T\cap T^{\ell-1}},\matr{D}_{T^{\ell-1}})
+ \resid(\vect{e},\matr{D}_{T^{\ell-1}}).
\end{IEEEeqnarray}
Here we have used the linearity of the operator $\resid(\cdot,
\matr{D}_{T^{\ell-1}})$ with respect to its first entry. The second
term in the right-hand-side (rhs) is $0$ since $\matr{D}_{T\cap
T^{\ell-1}}\vect{x}_{T\cap T^{\ell-1}}\in
\spanned(\matr{D}_{T^{\ell-1}})$. For the first term in the rhs we
have
\begin{IEEEeqnarray}{rcl}
\label{eq:resid_D_T_l_xr}
&&\resid(\matr{D}_{T-T^{\ell-1}}\vect{x}_{T-T^{\ell-1}},\matr{D}_{T^{\ell-1}}) \\ \nonumber
&& ~~= \matr{D}_{T-T^{\ell-1}}\vect{x}_{T-T^{\ell-1}} -
\proj(\matr{D}_{T-T^{\ell-1}}\vect{x}_{T-T^{\ell-1}},\matr{D}_{T^{\ell-1}}) \\
\nonumber
&& ~~= \matr{D}_{T-T^{\ell-1}}\vect{x}_{T-T^{\ell-1}} +
\matr{D}_{T^{\ell-1}}\vect{x}_{p,T^{\ell - 1}} \\
&& ~~ = \left[ \matr{D}_{T-T^{\ell-1}},\matr{D}_{T^{\ell-1}} \right]
\left[ \begin{array}{c}
\vect{x}_{T-T^{\ell-1}} \\
\nonumber \vect{x}_{p,T^{\ell - 1}} \end{array}\right] \triangleq
\matr{D}_{T\cup T^{\ell-1}}\vect{x}_r^{\ell-1},
\end{IEEEeqnarray}
where we have defined
\begin{IEEEeqnarray}{rCl}
\label{eq:xptdef}
\vect{x}_{p,T^{\ell - 1}} &=& -(\matr{D}^*_{T^{\ell -
1}}\matr{D}_{T^{\ell - 1}})^{-1}\matr{D}^*_{T^{\ell - 1}}
\matr{D}_{T-T^{\ell-1}}\vect{x}_{T-T^{\ell-1}}.
\end{IEEEeqnarray}
Combining (\ref{eq:resid_l1_composition_first}) and
(\ref{eq:resid_D_T_l_xr}) leads to
\begin{eqnarray}\label{eq:yrldef}
\vect{y}_r^{\ell - 1} = \matr{D}_{T\cup
T^{\ell-1}}\vect{x}_r^{\ell-1} +
\resid(\vect{e},\matr{D}_{T^{\ell-1}}).
\end{eqnarray}
By the definition of $T_\Delta$ in Algorithm \ref{alg:SP} we obtain
\begin{IEEEeqnarray}{rcl}
\label{eq:x_diff_tilde_prev_bound_lower_bound_step1}
&& \norm{\matr{D}^*_{T_\Delta}\vect{y}_r^{\ell - 1}}_2 \ge
\norm{\matr{D}^*_T\vect{y}_r^{\ell - 1}}_2 \ge
\norm{\matr{D}^*_{T-T^{\ell-1}}\vect{y}_r^{\ell - 1}}_2\\
\nonumber &&
~~ \ge \norm{\matr{D}^*_{T-T^{\ell-1}}
\matr{D}_{T\cup T^{\ell - 1}}\vect{x}_r^{\ell - 1}}_2 \\ \nonumber && ~~~~ -
\norm{\matr{D}^*_{T-T^{\ell-1}} \resid(\vect{e},\matr{D}_{T^{\ell-1}})}_2 \\
\nonumber &&
~~ \ge \norm{\matr{D}^*_{T-T^{\ell-1}}
\matr{D}_{T\cup T^{\ell - 1}}\vect{x}_r^{\ell - 1}}_2 -
\norm{\matr{D}^*_{T-T^{\ell-1}} \vect{e}}_2
\\ \nonumber && ~~~~- \norm{\matr{D}^*_{T-T^{\ell-1}} \proj(\vect{e},\matr{D}_{T^{\ell-1}})}_2.
\end{IEEEeqnarray}
We will bound $\norm{\matr{D}^*_{T-T^{\ell-1}}
\proj(\vect{e},\matr{D}_{T^{\ell-1}})}_2$ from above using RIP
properties from Section \ref{sec:notation},
\begin{eqnarray}
\label{eq:x_diff_tilde_prev_bound_lower_bound_step2}
&& \norm{\matr{D}^*_{T-T^{\ell-1}} \proj(\vect{e},\matr{D}_{T^{\ell-1}})}_2 \\ \nonumber
&& ~~= \norm{\matr{D}^*_{T-T^{\ell-1}} \matr{D}_{T^{\ell-1}}(\matr{D}_{T^{\ell-1}}^*
\matr{D}_{T^{\ell-1}})^{-1}\matr{D}_{T^{\ell-1}}^*\vect{e}}_2 \\
\nonumber
&&~~\le\frac{\delta_{3K}}{1-\delta_{3K}}\norm{\matr{D}_{T^{\ell-1}}^*\vect{e}}_2.
\end{eqnarray}
Combining (\ref{eq:x_diff_tilde_prev_bound_lower_bound_step1}) and
(\ref{eq:x_diff_tilde_prev_bound_lower_bound_step2}) leads to
\begin{IEEEeqnarray}{rCl}
\label{eq:x_diff_tilde_prev_bound_lower_bound}
&& \norm{\matr{D}^*_{T_\Delta}\vect{y}_r^{\ell - 1}}_2
\ge\norm{\matr{D}^*_{T-T^{\ell-1}}\matr{D}_{T\cup T^{\ell - 1}}
\vect{x}_r^{\ell - 1}}_2 \\ \nonumber &&~~~~~~~~~~~~~~ - \norm{\matr{D}^*_{T} \vect{e}}_2 -
\frac{\delta_{3K}}{1-\delta_{3K}}\norm{\matr{D}_{T^{\ell-1}}^*\vect{e}}_2 \\
\nonumber && ~~~~~~~~ \ge\norm{\matr{D}^*_{T-T^{\ell-1}}
\matr{D}_{T\cup T^{\ell - 1}}\vect{x}_r^{\ell - 1}}_2 -
\frac{1}{1-\delta_{3K}}\norm{\matr{D}_{T_\vect{e}}^*\vect{e}}_2.
\end{IEEEeqnarray}
By the definition of $T_\Delta$ and $\vect{y}_r^{\ell-1}$ it holds
that $T_\Delta \cap T^{\ell-1} = \emptyset$ since
$\matr{D}_{T^{\ell-1}}^*\vect{y}_r^{\ell-1} = 0$. Using
(\ref{eq:yrldef}), the left-hand-side (lhs) of
(\ref{eq:x_diff_tilde_prev_bound_lower_bound}) is upper bounded by
\begin{IEEEeqnarray}{rcl}
\label{eq:x_diff_tilde_prev_bound_upper_bound}
\norm{\matr{D}^*_{T_\Delta}\vect{y}_r^{\ell - 1}}_2 \\ \nonumber && \le
\norm{\matr{D}^*_{T_\Delta}\matr{D}_{T\cup T^{\ell - 1}}\vect{x}_r^{\ell - 1}}_2 +
\norm{\matr{D}^*_{T_\Delta}\resid(\vect{e},\matr{D}_{T^{\ell-1}})}_2\\
\nonumber &&
\le \norm{\matr{D}^*_{T_\Delta}
\matr{D}_{T\cup T^{\ell - 1}}\vect{x}_r^{\ell - 1}}_2 +
\norm{\matr{D}^*_{T_\Delta}\vect{e}}_2
\\ \nonumber && ~~~~
+ \norm{\matr{D}^*_{T_\Delta} \matr{D}_{T^{\ell-1}}
(\matr{D}_{T^{\ell-1}}^*\matr{D}_{T^{\ell-1}})^{-1}
\matr{D}_{T^{\ell-1}}^*\vect{e}}_2\\
\nonumber && \le \norm{\matr{D}^*_{T_\Delta}\matr{D}_{T\cup T^{\ell
- 1}}\vect{x}_r^{\ell - 1}}_2 +
\norm{\matr{D}^*_{T_\Delta}\vect{e}}_2 \\ \nonumber && ~~~~+
\frac{\delta_{3K}}{1-\delta_{3K}}
\norm{\matr{D}_{T^{\ell-1}}^*\vect{e}}_2\\
\nonumber && \le \norm{\matr{D}^*_{T_\Delta}\matr{D}_{T\cup T^{\ell
- 1}}\vect{x}_r^{\ell - 1}}_2 +
\frac{1}{1-\delta_{3K}}\norm{\matr{D}^*_{T_\vect{e}}\vect{e}}_2.
\end{IEEEeqnarray}
Combining (\ref{eq:x_diff_tilde_prev_bound_lower_bound}) and
(\ref{eq:x_diff_tilde_prev_bound_upper_bound}) gives
\begin{eqnarray}
&&\norm{\matr{D}^*_{T_\Delta}\matr{D}_{T\cup T^{\ell -
1}}\vect{x}_r^{\ell - 1}}_2 +
\frac{2}{1-\delta_{3K}}\norm{\matr{D}_{T_\vect{e}}^*\vect{e}}_2 \\
\nonumber && ~~~~~~\ge
\norm{\matr{D}^*_{T-T^{\ell - 1}}\matr{D}_{T\cup T^{\ell -
1}}\vect{x}_r^{\ell - 1}}_2.
\end{eqnarray}
Removing the common rows in $\matr{D}^*_{T_\Delta}$ and
$\matr{D}^*_{T-T^{\ell - 1}}$ we get
\begin{IEEEeqnarray}{rCl}
\label{eq:first_inequality_only_error_bounded}
&& \norm{\matr{D}^*_{T_\Delta - T}\matr{D}_{T\cup T^{\ell - 1}}
\vect{x}_r^{\ell - 1}}_2 + \frac{2}{1-\delta_{3K}}
\norm{\matr{D}_{T_\vect{e}}^*\vect{e}}_2 \\
\nonumber
&& ~~~~\ge
\norm{\matr{D}^*_{T-T^{\ell - 1}- T_{\Delta}}
\matr{D}_{T\cup T^{\ell - 1}}\vect{x}_r^{\ell - 1}}_2 \\
\nonumber
&& ~~~~=
\norm{\matr{D}^*_{T- \tilde{T}^{\ell}}
\matr{D}_{T\cup T^{\ell - 1}}\vect{x}_r^{\ell - 1}}_2 .
\end{IEEEeqnarray}
The last equality is true because
$T-T^{\ell - 1}- T_{\Delta} = T-T^{\ell - 1}-
(\tilde{T}^{\ell}-T^{\ell - 1}) = T - \tilde{T}^{\ell}$.
Now we turn to bound the lhs and rhs terms of
(\ref{eq:first_inequality_only_error_bounded}) from above and below,
respectively. For the lhs term we exploit the fact that the supports
$T_\Delta - T$ and $T \cup T^{\ell-1}$ are disjoint, leading to
\begin{IEEEeqnarray}{rCl}
\norm{\matr{D}^*_{T_\Delta - T}\matr{D}_{T\cup T^{\ell - 1}}
\vect{x}_r^{\ell - 1}}_2 &\le&
\delta_{\abs{T_\Delta\cup T^{\ell-1}\cup T}}\norm{\vect{x}^{\ell-1}_r}_2
\\ \nonumber &\le& \delta_{3K}\norm{\vect{x}^{\ell-1}_r}_2.
\end{IEEEeqnarray}
For the rhs term in (\ref{eq:first_inequality_only_error_bounded}),
we obtain
\begin{IEEEeqnarray}{rcl}
&& \norm{\matr{D}^*_{T - \tilde{T}^{\ell}}
\matr{D}_{T\cup T^{\ell - 1}}\vect{x}_r^{\ell - 1}}_2 \\ \nonumber &&
~~\ge \norm{\matr{D}^*_{T - \tilde{T}^{\ell}}\matr{D}_{T - \tilde{T}^{\ell}}
(\vect{x}_r^{\ell - 1})_{T - \tilde{T}^{\ell}}}_2 \\
\nonumber && ~~~~-
\norm{\matr{D}^*_{T - \tilde{T}^{\ell}}\matr{D}_{(T\cup T^{\ell - 1}) -
(T - \tilde{T}^{\ell})}(\vect{x}_r^{\ell - 1})_{(T\cup T^{\ell - 1})
- (T - \tilde{T}^{\ell})}}_2 \\
\nonumber
&& ~~\ge (1-\delta_{K})\norm{(\vect{x}_r^{\ell - 1})_{T - \tilde{T}^{\ell}}}_2
- \delta_{3K}\norm{\vect{x}_r^{\ell - 1}}_2 \\
\nonumber
&& ~~\ge (1-\delta_{3K})\norm{(\vect{x}_r^{\ell - 1})_{T - \tilde{T}^{\ell}}}_2
- \delta_{3K}\norm{\vect{x}_r^{\ell - 1}}_2.
\end{IEEEeqnarray}
Substitution of the two bounds derived above into
(\ref{eq:first_inequality_only_error_bounded}) gives
\begin{eqnarray}
\label{eq:first_inequality_only_error_bounded_x_r}
&& 2\delta_{3K}\norm{\vect{x}^{\ell-1}_r}_2 +
\frac{2}{1-\delta_{3K}}\norm{\matr{D}_{T_\vect{e}}^*\vect{e}}_2 \\ \nonumber && ~~~~~~\ge
(1-\delta_{3K})\norm{(\vect{x}_r^{\ell - 1})_{T - \tilde{T}^{\ell}}}_2.
\end{eqnarray}
The above inequality uses $\vect{x}^{\ell-1}_r$, which was defined
in (\ref{eq:resid_D_T_l_xr}), a definition that relies in turn on
that of the vector $\vect{x}_{p,T^{\ell - 1}}$ in (\ref{eq:xptdef}).
We proceed by bounding $\norm{\vect{x}_{p,T^{\ell - 1}}}_2$ from
above,
\begin{IEEEeqnarray}{rCl}
\label{eq:x_p_T_upper_bound}
&& \norm{\vect{x}_{p,T^{\ell - 1}}}_2 \\ \nonumber
&& ~~= \norm{-(\matr{D}^*_{T^{\ell - 1}}
\matr{D}_{T^{\ell - 1}})^{-1}\matr{D}^*_{T^{\ell - 1}}
\matr{D}_{T-T^{\ell-1}}\vect{x}_{T-T^{\ell-1}}}_2 \\
\nonumber
&& ~~\le \frac{1}{1-\delta_K}\norm{-\matr{D}^*_{T^{\ell - 1}}
\matr{D}_{T-T^{\ell-1}}\vect{x}_{T-T^{\ell-1}}}_2\\
\nonumber &&~~\le
\frac{\delta_{2K}}{1-\delta_K}\norm{\vect{x}_{T-T^{\ell-1}}}_2 \le
\frac{\delta_{3K}}{1-\delta_{3K}}\norm{\vect{x}_{T-T^{\ell-1}}}_2,
\end{IEEEeqnarray}
and get
\begin{eqnarray}
\label{eq:x_r_l_norm_upper_bound}
\norm{\vect{x}_r^{\ell-1}}_2 &\le& \norm{\vect{x}_{T-T^{\ell-1}}}_2 +
\norm{\vect{x}_{p,T^{\ell-1}}}_2 \\ \nonumber &\le& \left( 1+ \frac{\delta_{3K}}
{1-\delta_{3K}}\right) \norm{\vect{x}_{T-T^{\ell-1}}}_2 \\
\nonumber
&\le& \frac{1}{1-\delta_{3K}} \norm{\vect{x}_{T-T^{\ell-1}}}_2.
\end{eqnarray}
In addition, since $(\vect{x}_r^{\ell - 1})_{T - T^{\ell-1}} =
\vect{x}_{T - T^{\ell-1}}$, we also have $(\vect{x}_r^{\ell - 1})_{T -
\tilde{T}^{\ell}} =\vect{x}_{T - \tilde{T}^{\ell}}$. Using this fact
and (\ref{eq:x_r_l_norm_upper_bound}) with
(\ref{eq:first_inequality_only_error_bounded_x_r}) leads to
\begin{eqnarray}
&& \norm{\vect{x}_{T-\tilde{T}^\ell}}_2 \\ \nonumber
&& ~~\le \frac{2\delta_{3K}}
{(1-\delta_{3K})^2}\norm{\vect{x}_{T-{T}^{\ell-1}}}_2 +
\frac{2}{(1-\delta_{3K})^2}\norm{\matr{D}_{T_\vect{e}}^*\vect{e}}_2,
\end{eqnarray}
which proves the inequality in (\ref{eq:x_diff_tilde_prev_bound}).
$\Box$
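The repeated steps of the form $\norm{\matr{D}^*_{S}\matr{D}_{S'}\vect{v}}_2 \le \delta\norm{\vect{v}}_2$ for disjoint supports $S, S'$ rest on the restricted isometry property: small column subsets of $\matr{D}$ are nearly orthonormal. The numpy sketch below is an illustration only — a seeded Gaussian dictionary is a typical RIP matrix, but no specific constant $\delta$ is certified here:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, K = 256, 512, 8

# Random dictionary with unit-norm columns; such matrices satisfy the RIP
# with high probability, though no specific delta is certified here.
D = rng.standard_normal((n, m))
D /= np.linalg.norm(D, axis=0)

S, Sp = np.arange(K), np.arange(K, 2 * K)    # two disjoint column supports

# Cross-Gram block D_S^* D_{S'}: its spectral norm plays the role of the
# constant delta in bounds of the form ||D_S^* D_{S'} v||_2 <= delta ||v||_2.
delta_cross = np.linalg.norm(D[:, S].T @ D[:, Sp], 2)

# Diagonal Gram block D_S^* D_S: eigenvalues cluster in [1 - delta, 1 + delta].
eigs = np.linalg.eigvalsh(D[:, S].T @ D[:, S])
print(delta_cross, eigs.min(), eigs.max())
```

Both the cross-Gram norm being well below $1$ and the Gram eigenvalues clustering near $1$ are the two facts the derivation above uses, via $\delta_{K}\le\delta_{2K}\le\delta_{3K}$.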
\section{Proof of inequality (\ref{eq:x_diff_curr_tilde_bound})}
\label{sec:x_diff_curr_tilde_bound_proof}
\begin{lem}\label{lemma:A2}
The following inequality holds true for the SP algorithm:
\begin{eqnarray}
\nonumber \norm{\vect{x}_{T-T^\ell}}_2 &\le &
\frac{1+\delta_{3K}}{1-\delta_{3K}}\norm{\vect{x}_{T-\tilde{T}^\ell}}_2
+ \frac{4}{1-\delta_{3K}}\norm{\matr{D}_{T_{\vect{e}}}^*\vect{e}}_2.
\end{eqnarray}
\end{lem}
{\em Proof:} We define the smear vector $\epsilonbf =
\vect{x}_p - \vect{x}_{\tilde{T}^\ell}$, where $\vect{x}_p$ is the
outcome of the representation computation over ${\tilde T}^{\ell}$,
given by
\begin{eqnarray}
\label{eq:SP_x_p_explicit}
\vect{x}_p = \matr{D}_{\tilde{T}^\ell}^\dag\vect{y} =
\matr{D}_{\tilde{T}^\ell}^\dag(\matr{D}_T\vect{x}_T + \vect{e}),
\end{eqnarray}
as defined in Algorithm~\ref{alg:SP}. Expanding the first term in the last equality gives:
\begin{IEEEeqnarray}{rCl}
\label{eq:SP_x_p_explicit_first_element}
&& \matr{D}_{\tilde{T}^\ell}^\dag\matr{D}_T\vect{x}_T =
\matr{D}_{\tilde{T}^\ell}^\dag \matr{D}_{T\cap \tilde{T}^\ell}\vect{x}_{T\cap \tilde{T}^\ell} +
\matr{D}_{\tilde{T}^\ell}^\dag \matr{D}_{T- \tilde{T}^\ell}\vect{x}_{T- \tilde{T}^\ell} \\
\nonumber
&& ~ = \matr{D}_{\tilde{T}^\ell}^\dag \left[ \matr{D}_{T\cap \tilde{T}^\ell},
\matr{D}_{\tilde{T}^\ell - T} \right] \left[ \begin{array}{c}
\vect{x}_{T\cap \tilde{T}^\ell} \\
\vect{0} \end{array}\right] + \matr{D}_{\tilde{T}^\ell}^\dag
\matr{D}_{T- \tilde{T}^\ell}\vect{x}_{T- \tilde{T}^\ell} \\
\nonumber
&& ~= \matr{D}_{\tilde{T}^\ell}^\dag \matr{D}_{\tilde{T}^\ell}
\vect{x}_{\tilde{T}^\ell} + \matr{D}_{\tilde{T}^\ell}^\dag
\matr{D}_{T- \tilde{T}^\ell}\vect{x}_{T- \tilde{T}^\ell} \\
\nonumber
&& ~= \vect{x}_{\tilde{T}^\ell} + \matr{D}_{\tilde{T}^\ell}^\dag
\matr{D}_{T- \tilde{T}^\ell}\vect{x}_{T- \tilde{T}^\ell}.
\end{IEEEeqnarray}
The equalities follow from the definition of
$\matr{D}_{\tilde{T}^\ell}^\dag$ and from the fact that $\vect{x}$ is
$0$ outside of $T$. Using (\ref{eq:SP_x_p_explicit_first_element})
we bound the smear energy from above, obtaining
\begin{eqnarray}
\label{eq:smear_upper_bound} \norm{\epsilonbf}_2 &\le&
\norm{\matr{D}_{\tilde{T}^\ell}^\dag\matr{D}_T\vect{x}_T
}_2 + \norm{\matr{D}_{\tilde{T}^\ell}^\dag\vect{e}}_2 \\
\nonumber
& = &
\norm{(\matr{D}_{\tilde{T}^\ell}^*\matr{D}_{\tilde{T}^\ell})^{-1}
\matr{D}_{\tilde{T}^\ell}^* \matr{D}_{T- \tilde{T}^\ell}\vect{x}_{T-
\tilde{T}^\ell}}_2 \\ \nonumber && + \norm{(\matr{D}_{\tilde{T}^\ell}^*
\matr{D}_{\tilde{T}^\ell})^{-1}\matr{D}_{\tilde{T}^\ell}^*\vect{e}}_2 \\
\nonumber
& \le &
\frac{\delta_{3K}}{1-\delta_{3K}}\norm{\vect{x}_{T- \tilde{T}^\ell}}_2 +
\frac{1}{1-\delta_{3K}}\norm{\matr{D}_{\tilde{T}^\ell}^*\vect{e}}_2.
\end{eqnarray}
We now turn to bound $\norm{\epsilonbf}_2$ from below. We denote the
support of the $K$ smallest coefficients in $\vect{x}_p$ by $\Delta
T \triangleq \tilde{T}^\ell - T^\ell$. Thus, for any set $T' \subset
\tilde{T}^\ell$ of cardinality $K$, it holds that
$\norm{(\vect{x}_p)_{\Delta T}}_2 \le \norm{(\vect{x}_p)_{T'}}_2$.
In particular, we shall choose $T'$ such that $T' \cap T =
\emptyset$; such a set necessarily exists because $\tilde{T}^\ell$ is of
cardinality $2K$, and therefore at least $K$ entries in this
support lie outside $T$. Thus, using the relation $\epsilonbf =
\vect{x}_p - \vect{x}_{\tilde{T}^\ell}$ we get
\begin{eqnarray}\label{eq:xpdeltat}
\norm{(\vect{x}_p)_{\Delta T}}_2 &\le& \norm{(\vect{x}_p)_{T'}}_2 =
\norm{\left(\vect{x}_{\tilde{T}^\ell} \right)_{T'}+
\epsilonbf_{T'}}_2 \\ \nonumber &=& \norm{ \epsilonbf_{T'}}_2 \le \norm{
\epsilonbf}_2.
\end{eqnarray}
Because $\vect{x}$ is supported on $T$, we have that
$\norm{\vect{x}_{\Delta T}}_2 = \norm{\vect{x}_{\Delta T \cap
T}}_2$. An upper bound for this quantity is obtained by
\begin{eqnarray}
\label{eq:smear_lower_bound} \norm{\vect{x}_{\Delta T \cap T}}_2 &=&
\norm{(\vect{x}_p)_{\Delta T \cap T} -\epsilonbf_{\Delta T \cap
T}}_2 \\ \nonumber &\le& \norm{(\vect{x}_p)_{\Delta T \cap T}}_2 +
\norm{\epsilonbf_{\Delta T \cap T}}_2 \\
\nonumber & \le & \norm{(\vect{x}_p)_{\Delta T}}_2 +
\norm{\epsilonbf}_2 \le 2 \norm{\epsilonbf}_2,
\end{eqnarray}
where the last step uses (\ref{eq:xpdeltat}). The vector
$\vect{x}_{T-T^\ell}$ can be decomposed as $\vect{x}_{T-T^\ell} =
\left[ \vect{x}_{T\cap \Delta T}^*,
\vect{x}^*_{T-\tilde{T}^\ell}\right]^*$. Using
(\ref{eq:smear_upper_bound}) and (\ref{eq:smear_lower_bound}) we get
\begin{IEEEeqnarray}{c}
\nonumber
\norm{\vect{x}_{T-T^\ell}}_2 \le \norm{\vect{x}_{T\cap \Delta T}}_2 +
\norm{\vect{x}_{T-\tilde{T}^\ell}}_2 \le 2\norm{\epsilonbf}_2 + \norm{\vect{x}_{T-\tilde{T}^\ell}}_2 \\
\nonumber
~~ \le \left(1 + \frac{2\delta_{3K}}{1-\delta_{3K}}\right)
\norm{\vect{x}_{T-\tilde{T}^\ell}}_2 + \frac{2}
{1-\delta_{3K}}\norm{\matr{D}_{\tilde{T}^\ell}^*\vect{e}}_2\\
\nonumber ~~\le
\frac{1+\delta_{3K}}{1-\delta_{3K}}\norm{\vect{x}_{T-\tilde{T}^\ell}}_2
+ \frac{4}{1-\delta_{3K}}\norm{\matr{D}_{T_{\vect{e}}}^*\vect{e}}_2,
\end{IEEEeqnarray}
where the last step uses the property $\norm{\matr{D}_{{\tilde
T}^{\ell}}^*\vect{e}}_2 \le
2\norm{\matr{D}_{T_{\vect{e}}}^*\vect{e}}_2$ taken from Section
\ref{sec:notation}, and this concludes the proof.
$\Box$
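The cancellation $\matr{D}_{\tilde{T}^\ell}^\dag \matr{D}_{\tilde{T}^\ell}\vect{x}_{\tilde{T}^\ell} = \vect{x}_{\tilde{T}^\ell}$ in (\ref{eq:SP_x_p_explicit_first_element}) is simply the fact that, for a full-column-rank block, $\matr{A}^\dag\matr{A}=(\matr{A}^*\matr{A})^{-1}\matr{A}^*\matr{A}$ is the identity. A quick numpy check (illustrative only; the random matrix stands in for $\matr{D}$ restricted to $\tilde{T}^\ell$):

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 50, 6

# Tall full-column-rank block, standing in for D restricted to \tilde{T}^l.
A = rng.standard_normal((n, k))

# For full column rank, pinv(A) = (A^* A)^{-1} A^*, so pinv(A) @ A = I_k;
# this is the cancellation used in the expansion of x_p above.
P = np.linalg.pinv(A)
print(np.allclose(P @ A, np.eye(k)))
```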
\section{Proof of inequality (\ref{eq:CoSaMP_x_diff_bound})}
\label{sec:CoSaMP_x_diff_bound_proof}
\begin{lem}\label{lemma:D1}
The following inequality holds true for the CoSaMP algorithm:
\begin{eqnarray}
\nonumber \norm{\vect{x}-\hat{\vect{x}}_{CoSaMP}^\ell}_2 \le
\frac{4\delta_{4K}}{(1-\delta_{4K})^2}\norm{\vect{x}-\hat{\vect{x}}_{CoSaMP}^{\ell-1}}_2
+ \frac{14-6\delta_{4K} }{(1-\delta_{4K})^2}
\norm{\matr{D}_{T_{\vect{e}}}^*\vect{e}}_2.
\end{eqnarray}
\end{lem}
{\em Proof:} We denote by $\hat{\vect{x}}_{CoSaMP}^\ell$ the
solution of CoSaMP at the $\ell$-th iteration:
$\hat{\vect{x}}_{CoSaMP,(T^\ell)^C}^{\ell} = 0$ and
$\hat{\vect{x}}_{CoSaMP,T^\ell}^{\ell} = (\vect{x}_p)_{T^\ell}$. We
further define $\vect{r}^\ell \triangleq
\vect{x}-\hat{\vect{x}}_{CoSaMP}^\ell$ and use the definition of
$T_{\vect{e},p}$ (Section \ref{sec:notation}). Our proof is based on
the proof of Theorem 4.1 in \cite{Needell09CoSaMP} and on the lemmas
used with it.
Since we choose $T_\Delta$ to contain the largest $2K$ elements in
$\matr{D}^*\vect{y}_r^{\ell}$ and $\abs{T^{\ell} \cup T}\le 2K$,
it holds that $ \norm{(\matr{D}^*\vect{y}_r^{\ell})_{T^{\ell }
\cup T}}_2 \le \norm{(\matr{D}^*\vect{y}_r^{\ell})_{T_\Delta}}_2 $.
Removing the common elements from both sides we get
\begin{equation}
\label{eq:CoSaMP_resid_basic_inequality}
\norm{(\matr{D}^*\vect{y}_r^{\ell})_{(T^{\ell } \cup T) - T_\Delta}}_2
\le \norm{(\matr{D}^*\vect{y}_r^{\ell})_{T_\Delta - (T^{\ell} \cup T)}}_2.
\end{equation}
We proceed by bounding the rhs and lhs of
(\ref{eq:CoSaMP_resid_basic_inequality}), from above and from below
respectively, using the triangle inequality. We use Propositions
\ref{prop1} and \ref{prop2}, the definition of $T_{\vect{e},2}$, and
the fact that $\norm{\vect{r}^{\ell}}_2 = \norm{\vect{r}_{T^{\ell }
\cup T}^{\ell}}_2$ (this holds true since the support of
$\vect{r}^{\ell}$ is over $T \cup T^{\ell}$). For the rhs we obtain
\begin{eqnarray}
\label{eq:CoSaMP_resid_basic_inequality_rhs}
\norm{(\matr{D}^*\vect{y}_r^{\ell})_{T_\Delta - (T^{\ell} \cup T)}}_2 & = &
\norm{\matr{D}^*_{T_\Delta - (T^{\ell} \cup T)}(\matr{D}\vect{r}^{\ell}+\vect{e})}_2 \\
\nonumber
& \le & \norm{\matr{D}^*_{T_\Delta - (T^{\ell} \cup T)}
\matr{D}_{T^{\ell} \cup T}\vect{r}_{T^{\ell} \cup T}^{\ell}}_2 +
\norm{\matr{D}^*_{T_\Delta - (T^{\ell} \cup T)}\vect{e}}_2 \\
\nonumber & \le & \delta_{4K}\norm{\vect{r}^{\ell}}_2 +
\norm{\matr{D}^*_{T_{\vect{e},2}}\vect{e}}_2,
\end{eqnarray}
and for the lhs:
\begin{eqnarray}
\label{eq:CoSaMP_resid_basic_inequality_lhs}
\norm{(\matr{D}^*\vect{y}_r^{\ell})_{(T^{\ell} \cup T) - T_\Delta}}_2
&=& \norm{\matr{D}^*_{(T^{\ell} \cup T) - T_\Delta}(\matr{D}\vect{r}^{\ell}+\vect{e})}_2 \\
\nonumber
&\ge & \norm{\matr{D}^*_{(T^{\ell} \cup T) -
T_\Delta}\matr{D}_{(T^{\ell} \cup T) - T_\Delta}
\vect{r}^{\ell}_{(T^{\ell} \cup T) - T_\Delta}}_2 \\
\nonumber &&- \norm{\matr{D}^*_{(T^{\ell} \cup T) - T_\Delta}
\matr{D}_{T_{\Delta}}\vect{r}^{\ell}_{T_{\Delta}}}_2 -
\norm{\matr{D}^*_{(T^{\ell} \cup T) - T_\Delta}\vect{e}}_2\\
\nonumber &\ge & (1-\delta_{2K})\norm{\vect{r}^{\ell}_{(T^{\ell}
\cup T) - T_\Delta}}_2 -
\delta_{4K}\norm{\vect{r}^{\ell}_{T_{\Delta}}}_2 -
\norm{\matr{D}^*_{T_{\vect{e},2}}\vect{e}}_2.
\end{eqnarray}
Because $\vect{r}^{\ell}$ is supported over $T \cup T^{\ell}$, it
holds true that $\norm{\vect{r}^\ell_{(T \cup T^{\ell}) -
T_\Delta}}_2 = \norm{\vect{r}^\ell_{T_\Delta^C}}_2$. Combining
(\ref{eq:CoSaMP_resid_basic_inequality_lhs}) and
(\ref{eq:CoSaMP_resid_basic_inequality_rhs}) with
(\ref{eq:CoSaMP_resid_basic_inequality}), gives
\begin{eqnarray}
\label{eq:CoSaMP_identification}
\norm{\vect{r}^{\ell}_{T_\Delta^C}}_2 &\le&
\frac{2\delta_{4K}\norm{\vect{r}^{\ell}}_2+
2\norm{\matr{D}_{T_{\vect{e},2}}^*\vect{e}}_2}{1-\delta_{2K}} \\
&\le& \frac{2\delta_{4K}\norm{\vect{r}^{\ell}}_2+
4\norm{\matr{D}_{T_{\vect{e}}}^*\vect{e}}_2}{1-\delta_{4K}}.\nonumber
\end{eqnarray}
For brevity of notation, we hereafter denote ${\tilde T}^{\ell}$ by
${\tilde T}$. Using $\vect{y} = \matr{D}\vect{x} + \vect{e} =
\matr{D}_{\tilde{T}}\vect{x}_{\tilde{T}} +
\matr{D}_{\tilde{T}^C}\vect{x}_{\tilde{T}^C} + \vect{e}$, we
observe that
\begin{eqnarray}
\norm{\vect{x}_{\tilde{T}} - (\vect{x}_p)_{\tilde{T}}}_2 &=&
\norm{\vect{x}_{\tilde{T}} - \matr{D}_{\tilde{T}}^\dag
\left(\matr{D}_{\tilde{T}}\vect{x}_{\tilde{T}} + \matr{D}_{\tilde{T}^C}\vect{x}_{\tilde{T}^C} + \vect{e}\right)}_2 \\
\nonumber
&=& \norm{\matr{D}_{\tilde{T}}^\dag \left( \matr{D}_{\tilde{T}^C}\vect{x}_{\tilde{T}^C} + \vect{e}\right)}_2 \\
\nonumber
& \le & \norm{(\matr{D}_{\tilde{T}}^*\matr{D}_{\tilde{T}})^{-1} \matr{D}_{\tilde{T}}^* \matr{D}_{\tilde{T}^C}\vect{x}_{\tilde{T}^C}}_2 + \norm{(\matr{D}_{\tilde{T}}^*\matr{D}_{\tilde{T}})^{-1} \matr{D}_{\tilde{T}}^* \vect{e}}_2 \\
\nonumber
&\le& \frac{1}{1-\delta_{3K}}\norm{\matr{D}_{\tilde{T}}^* \matr{D}_{\tilde{T}^C}\vect{x}_{\tilde{T}^C}}_2 + \frac{1}{1-\delta_{3K}}\norm{\matr{D}_{T_{\vect{e},3}}^*\vect{e}}_2 \\
&\le&
\frac{\delta_{4K}}{1-\delta_{4K}}\norm{\vect{x}_{\tilde{T}^C}}_2 +
\frac{3}{1-\delta_{4K}}\norm{\matr{D}_{T_{\vect{e}}}^*\vect{e}}_2,
\nonumber
\end{eqnarray}
where the last inequality holds because of Proposition
\ref{prop4} and the fact that $|\tilde{T}| = 3K$. Using the triangle
inequality and the fact that $\vect{x}_p$ is supported on ${\tilde
T}$, we obtain
\begin{eqnarray}
\norm{\vect{x}-\vect{x}_p}_2 \le \norm{\vect{x}_{\tilde{T}^C}}_2 +
\norm{\vect{x}_{\tilde{T}} - (\vect{x}_p)_{\tilde{T}}}_2,
\end{eqnarray}
which leads to
\begin{eqnarray}
\label{eq:CoSaMP_estimation}
\norm{\vect{x} - \vect{x}_p}_2 &\le& \left( 1+ \frac{\delta_{4K}}{1-\delta_{4K}} \right) \norm{\vect{x}_{\tilde{T}^C}}_2 + \frac{3}{1-\delta_{4K}}\norm{\matr{D}_{T_{\vect{e}}}^*\vect{e}}_2 \\
&=& \frac{1}{1-\delta_{4K}}\norm{\vect{x}_{\tilde{T}^C}}_2 +
\frac{3}{1-\delta_{4K}}\norm{\matr{D}_{T_{\vect{e}}}^*\vect{e}}_2.
\nonumber
\end{eqnarray}
With the above results, we can obtain
(\ref{eq:CoSaMP_x_diff_bound}) by
\begin{eqnarray}
\label{eq:CoSaMP_bound_first_deriv}
\norm{\vect{x}-\hat{\vect{x}}_{CoSaMP}^\ell}_2 &\le& 2\norm{\vect{x}-\vect{x}_p}_2 \\
\nonumber
&\le& 2\left( \frac{1}{1-\delta_{4K}}\norm{\vect{x}_{\tilde{T}^C}}_2 + \frac{3}{1-\delta_{4K}}\norm{\matr{D}_{T_{\vect{e}}}^*\vect{e}}_2 \right) \\
\nonumber
&\le& \frac{2}{1-\delta_{4K}} \norm{\vect{r}^{\ell-1}_{T_\Delta^C}}_2 + \frac{6}{1-\delta_{4K}}\norm{\matr{D}_{T_{\vect{e}}}^*\vect{e}}_2 \\
\nonumber
&\le&\frac{2}{1-\delta_{4K}} \left( \frac{2\delta_{4K}}{1-\delta_{4K}}\norm{\vect{r}^{\ell - 1}}_2+ \frac{4}{1-\delta_{4K}}\norm{\matr{D}_{T_{\vect{e}}}^*\vect{e}}_2 \right) + \frac{6}{1-\delta_{4K}}\norm{\matr{D}_{T_{\vect{e}}}^*\vect{e}}_2 \\
\nonumber & = &
\frac{4\delta_{4K}}{(1-\delta_{4K})^2}\norm{\vect{r}^{\ell - 1}}_2
+ \frac{14-6\delta_{4K} }{(1-\delta_{4K})^2}
\norm{\matr{D}_{T_{\vect{e}}}^*\vect{e}}_2,
\end{eqnarray}
where the inequalities are based on Lemma 4.5 from
\cite{Needell09CoSaMP}, (\ref{eq:CoSaMP_estimation}), Lemma 4.3 from
\cite{Needell09CoSaMP} and (\ref{eq:CoSaMP_identification})
respectively.
$\Box$
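For concreteness, here is a minimal numpy sketch of the CoSaMP iteration analyzed above — identify the largest $2K$ proxy entries ($T_\Delta$), merge with the current support ($\tilde{T}$), solve least squares ($\vect{x}_p$), and prune to $K$ ($T^\ell$). It follows \cite{Needell09CoSaMP} only in outline; stopping rules are omitted and all concrete choices (seed, sizes, amplitudes) are purely illustrative:

```python
import numpy as np

def cosamp_step(D, y, x_hat, K):
    y_r = y - D @ x_hat                          # current residual
    proxy = D.T @ y_r
    # identify: indices of the 2K largest proxy magnitudes (the set T_Delta)
    T_delta = np.argsort(np.abs(proxy))[-2 * K:]
    # merge with the current support (the set \tilde{T})
    T_tilde = np.union1d(T_delta, np.flatnonzero(x_hat))
    # representation computation: least squares over \tilde{T} (this is x_p)
    x_p = np.zeros(D.shape[1])
    sol, *_ = np.linalg.lstsq(D[:, T_tilde], y, rcond=None)
    x_p[T_tilde] = sol
    # prune: keep the K largest coefficients (the new support T^l)
    T_new = np.argsort(np.abs(x_p))[-K:]
    x_new = np.zeros_like(x_p)
    x_new[T_new] = x_p[T_new]
    return x_new

rng = np.random.default_rng(2)
n, m, K = 50, 100, 2
D = rng.standard_normal((n, m)) / np.sqrt(n)
x = np.zeros(m)
x[rng.choice(m, size=K, replace=False)] = [1.5, -2.0]
y = D @ x                                        # noiseless measurements

x_hat = np.zeros(m)
for _ in range(10):
    x_hat = cosamp_step(D, y, x_hat, K)
err = np.linalg.norm(x - x_hat)
print(err)
```

In the noiseless, well-conditioned regime above the iteration typically recovers $\vect{x}$ exactly, matching the error bound of Lemma \ref{lemma:D1} with $\vect{e}=\vect{0}$.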
\end{document}
\begin{document}
\title[Symmetry and inverse-closedness of some $p$-Beurling algebras]{Symmetry and inverse-closedness of some $p$-Beurling algebras}
\author[P. A. Dabhi]{Prakash A. Dabhi}
\author[K. B. Solanki]{Karishman B. Solanki}
\address{Department of Mathematics, Institute of Infrastructure Technology Research and Management(IITRAM), Maninagar (East), Ahmedabad - 380026, Gujarat, India}
\email{[email protected]}
\email{[email protected]}
\thanks{The authors are very much grateful to Professor W. \.Zelazko for his help by sending his book. The first author would like to thank SERB, India, for the MATRICS grant no. MTR/2019/000162. The second author gratefully acknowledges Junior Research Fellowship (NET) from CSIR, India.}
\subjclass[2010]{47B37, 43A15, 46K05, 47G10}
\keywords{$p$-Banach algebra, Hulanicki's lemma, Barnes' lemma, symmetry, inverse-closedness, weight, twisted convolution}
\date{}
\dedicatory{}
\commby{}
\begin{abstract}
Let $(G,d)$ be a metric space with the counting measure $\mu$ satisfying some growth conditions. Let $\omega(x,y)=(1+d(x,y))^\delta$ for some $0<\delta\leq1$. Let $0<p\leq1$. Let $\mathcal A_{p\omega}$ be the collection of kernels $K$ on $G\times G$ satisfying $\max\{\sup_x\sum_y |K(x,y)|^p\omega(x,y)^p, \sup_y\sum_x |K(x,y)|^p\omega(x,y)^p\}<\infty$. Each $K \in \mathcal A_{p\omega}$ defines a bounded linear operator on $\ell^2(G)$. If in addition, $\omega$ satisfies the weak growth condition, then we show that $\mathcal A_{p\omega}$ is inverse closed in $B(\ell^2(G))$. We shall also discuss inverse-closedness of $p$-Banach algebra of infinite matrices over $\mathbb Z^d$ and the $p$-Banach algebra of weighted $p$-summable sequences over $\mathbb Z^{2d}$ with the twisted convolution. In order to show these results, we prove Hulanicki's lemma and Barnes' lemma for $p$-Banach algebras.
\end{abstract}
\maketitle
\section{Introduction}
Let $0<p\leq1$, and let $\mathcal{A}$ be an algebra. A mapping $\|\cdot\| : \mathcal{A} \to [0,\infty)$ is a \emph{$p$-norm} \cite{Ze} on $\mathcal{A}$ if the following conditions hold for all $x,y\in \mathcal{A}$ and $\alpha \in \mathbb{C}$.
\begin{enumerate}
\item $\|x\|=0$ if and only if $x=0$;
\item $\|x+y\|\leq\|x\|+\|y\|$;
\item $\|\alpha x\|= |\alpha|^p \|x\|$;
\item $\|xy\|\leq\|x\|\|y\|$.
\end{enumerate}
If $\mathcal{A}$ is complete in the $p$-norm, then $(\mathcal{A},\|\cdot\|)$ is a \emph{$p$-Banach algebra} \cite{Ze}. When $p=1$, $\mathcal A$ is a Banach algebra.
A $p$-normed (Banach) $\ast$-algebra is a $p$-normed (Banach) algebra along with an isometric involution $\ast$. A \emph{$p$-$C^\ast$-algebra} is a $C^\ast$-algebra $(\mathcal A,\|\cdot\|)$ with the \emph{$p$-$C^\ast$-norm} $|x|=\|x\|^p\;(x \in \mathcal A)$. Let $\mathcal{A}$ be a $p$-Banach algebra with unit $e$, and let $x\in\mathcal{A}$. The set $\sigma_\mathcal{A}(x)=\{\lambda\in\mathbb{C}: \lambda e-x \ \text{is not invertible in}\ \mathcal{A} \}$ is the \emph{spectrum} of $x$ in $\mathcal{A}$ and the number $r_\mathcal{A}(x)=\sup\{|\lambda|^p:\lambda\in\sigma_\mathcal{A}(x)\}$ is the \emph{spectral radius} of $x$. The spectral radius formula gives $r_\mathcal{A}(x)=\lim_{n\to\infty}\|x^n\|^\frac{1}{n}$ \cite{Ze}. We shall just write $\sigma(x)$ and $r(x)$ when the algebra in consideration is clear.
Let $\mathcal{A}$ be a commutative $p$-Banach algebra. A nonzero linear map $\varphi:\mathcal A \to \mathbb C$ satisfying $\varphi(ab)=\varphi(a)\varphi(b)\;(a,b \in \mathcal A)$ is a \emph{complex homomorphism} on $\mathcal A$. Let $\Delta(\mathcal A)$ be the collection of all complex homomorphisms on $\mathcal A$. For $a \in \mathcal A$, let $\widehat a:\Delta(\mathcal A)\to \mathbb C$ be $\widehat a(\varphi)=\varphi(a)\;(\varphi\in \Delta(\mathcal A))$. The smallest topology on $\Delta(\mathcal A)$ making each $\widehat a$, $a\in \mathcal A$, continuous is the \emph{Gel'fand topology} on $\Delta(\mathcal A)$, and $\Delta(\mathcal A)$ with the Gel'fand topology is the \emph{Gel'fand space} of $\mathcal{A}$. For more details, we refer to \cite{Ge, Ze}.
Let $\mathcal{H}$ be a Hilbert space. Then $B(\mathcal{H})$, the collection of all bounded linear operators on $\mathcal{H}$, is a $C^\ast$-algebra with the operator norm $\|T\|_{op}=\sup\{\|T(x)\|:x\in\mathcal{H}, \|x\|\leq1\}$ for all $T \in B(\mathcal H)$.
Note that for given $0<p\leq1$ and a normed (Banach) algebra $\mathcal{A}$ with norm $\|\cdot\|$, we may consider the $p$-norm $\|\cdot\|_p$ on $\mathcal{A}$ given by $\|x\|_p=\|x\|^p \ (x\in\mathcal{A})$, making $\mathcal{A}$ a $p$-normed ($p$-Banach) algebra without changing the topology of $\mathcal{A}$. The fact that $(a+b)^p\leq a^p+b^p$ for all $a,b\in [0,\infty)$ and $0<p\leq 1$ will be used many times in this paper. All algebras considered here are complex algebras, i.e., algebras over the complex field $\mathbb{C}$.
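The passage from a norm to the $p$-norm $\|x\|_p=\|x\|^p$ can be checked numerically. The sketch below (numpy, with purely illustrative values of $p$, $x$, $y$, $\alpha$) verifies $p$-homogeneity and the triangle inequality, the latter via the inequality $(a+b)^p\leq a^p+b^p$ quoted above:

```python
import numpy as np

rng = np.random.default_rng(3)
p = 0.5                                  # an arbitrary exponent in (0, 1]

def p_norm(x):
    # the p-norm obtained from the Euclidean norm: ||x||_p := ||x||^p
    return np.linalg.norm(x) ** p

x, y = rng.standard_normal(10), rng.standard_normal(10)
alpha = -2.7

# p-homogeneity: ||alpha x||_p = |alpha|^p ||x||_p
hom_ok = np.isclose(p_norm(alpha * x), abs(alpha) ** p * p_norm(x))

# triangle inequality, via monotonicity of t -> t^p and (a+b)^p <= a^p + b^p
tri_ok = p_norm(x + y) <= p_norm(x) + p_norm(y) + 1e-12
print(hom_ok, tri_ok)
```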
In \cite{Hu}, Hulanicki proved that if $\mathcal{A}$ is a Banach $\ast$-algebra, $S$ is a subalgebra of $\mathcal{A}$ (not necessarily closed) and if $T:\mathcal{A}\to B(\mathcal{H})$ is a faithful representation for some Hilbert space $\mathcal{H}$ such that $\|T_x\|=\lim_{n\to\infty}\|x^n\|^\frac{1}{n}$ for all $x=x^\ast\in S$, then $\sigma_\mathcal{A}(x)=\sigma(T_x)$ for all $x=x^\ast\in S$. The corrected proof of this theorem can be found in \cite{Fe}. We prove this result for $p$-Banach algebras.
Let $(G,d)$ be a metric space, and let $\mu$ be a measure on $G$. For $\delta>0$, let $\Gamma[\delta]=\{(x,y) \in G \times G : d(x,y)\leq\delta \}$, and for $x\in G$, let $\Gamma_x[\delta]=\{y\in G: d(x,y)\leq\delta \}$. Assume that there are constants $C>0, b>0$ such that $\mu(\Gamma_x[\delta])\leq C\delta^b$ for all $x\in G$ and $\delta>0$. Let $0<\delta\leq1$ be fixed, and let $\omega:G\times G \to [1,\infty)$ be $\omega(x,y)=(1+d(x,y))^\delta$. Let $0<p\leq 1$, and let $\mathcal A_{p\omega}$ be the collection of all complex-valued measurable functions $K=K(x,y)$ on $G \times G$ such that $$\|K\|_{p\omega}=\max\Big\{\sup_x \int_G |K(x,y)|^p\omega(x,y)^pd\mu(y),\sup_y \int_G|K(x,y)|^p\omega(x,y)^pd\mu(x)\Big\}<\infty.$$ Note that $\mathcal A_{1\omega}$ is a Banach $\ast$-algebra with the above norm, the convolution multiplication $$(K\star J)(x,y)=\int_G K(x,z)J(z,y)d\mu(z)$$ and the involution $K \mapsto K^\ast$, where $K^\ast(x,y)=\overline{K(y,x)}$. By \cite{Ba}, $K$ defines a bounded linear operator $K_2$ on $L^2(G)$ by $K_2(f)(x)=\int_G f(y)K(x,y)d\mu(y)$ for all $f \in L^2(G)$. Barnes proved in \cite{Ba} that the spectrum of $K$ as an element of $\mathcal A_{1\omega}$ is the same as the spectrum of $K_2$ in $B(L^2(G))$.
Let $0<p<1$, and let $K, J \in \mathcal A_{p\omega}$. Then $|\int_G K(x,z)J(z,y) d\mu(z)|^p$ may not be smaller than $\int_G|K(x,z)|^p|J(z,y)|^p d\mu(z)$. So, if we want this inequality to remain true or if we want $\mathcal A_{p\omega}$ to be an algebra, then we should take $\mu$ to be the counting measure. One more reason for taking $\mu$ to be the counting measure on $G$ is as follows. Let $G$ be a locally compact group with the Haar measure $\mu$, let $\omega$ be a measurable weight on $G$ and let $L^p(G,\omega)$ be the collection of all measurable functions on $G$ satisfying $\int_G |f|^p\omega^p d\mu<\infty$. Then by \cite{Ze}, $L^p(G)$ is closed under convolution if and only if $G$ is a discrete group and by \cite{Bh}, $L^p(G,\omega)$ is closed under convolution if and only if $G$ is a discrete group.
So, we shall consider the counting measure $\mu$ on a metric space $G$. In this case, $\mathcal A_{p\omega}$, $0<p\leq 1$, will be the collection of all functions $K:G \times G \to \mathbb C$ satisfying $$\|K\|_{p\omega}=\max\Big\{\sup_x \sum_y |K(x,y)|^p\omega(x,y)^p, \sup_y \sum_x |K(x,y)|^p\omega(x,y)^p\Big\}<\infty.$$ Then $\mathcal A_{p\omega}$ is a $p$-Banach $\ast$-algebra with the above norm, the convolution $$(K\star J)(x,y)=\sum_z K(x,z)J(z,y)\quad(K, J \in \mathcal A_{p\omega}, (x,y)\in G \times G)$$ and the involution $K^\ast(x,y)=\overline{K(y,x)}$. We shall extend the Barnes' lemma for the case $0<p<1$.
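On a finite model of $G$ the norm $\|\cdot\|_{p\omega}$ and the submultiplicativity $\|K\star J\|_{p\omega}\leq\|K\|_{p\omega}\|J\|_{p\omega}$ can be checked directly. In the sketch below (numpy; the choice $G=\{0,\dots,n-1\}$ with $d(x,y)=|x-y|$ and the parameter values are only an example), kernels are matrices and the convolution $\star$ is the matrix product:

```python
import numpy as np

rng = np.random.default_rng(4)
n, p, delta = 30, 0.5, 1.0

# Finite model: G = {0, ..., n-1} with d(x, y) = |x - y|.  Kernels are then
# n x n matrices, and the convolution K * J is the matrix product.
idx = np.arange(n)
omega = (1.0 + np.abs(idx[:, None] - idx[None, :])) ** delta

def norm_pw(K):
    # max of the weighted p-th-power row sums and column sums
    W = np.abs(K) ** p * omega ** p
    return max(W.sum(axis=1).max(), W.sum(axis=0).max())

K = rng.standard_normal((n, n))
J = rng.standard_normal((n, n))
lhs, rhs = norm_pw(K @ J), norm_pw(K) * norm_pw(J)
print(lhs <= rhs)
```

The inequality holds deterministically, by $|\sum_z a_z|^p\leq\sum_z|a_z|^p$ and the submultiplicativity $\omega(x,y)\leq\omega(x,z)\omega(z,y)$.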
Let $d\in \mathbb N$, and let $\omega$ be an admissible weight on $\mathbb Z^d$ satisfying weak growth condition, i.e., there is a constant $C>0$ and there is $0<\delta\leq 1$ such that $\omega(x)\geq C(1+|x|)^\delta$ for all $x$. We consider the $p$-Banach $\ast$-algebra $\mathcal A_{p\omega}$ of infinite matrices $A=(a_{kl})_{k,l \in \mathbb Z^d}$ satisfying $$\|A\|_{p\omega}=\max\Big\{\sup_{k \in \mathbb Z^d}\sum_{l \in \mathbb Z^d}|a_{kl}|^p \omega(k-l)^p, \sup_{l \in \mathbb Z^d}\sum_{k \in \mathbb Z^d}|a_{kl}|^p \omega(k-l)^p\Big\}<\infty.$$ If $A \in \mathcal A_{p\omega}$, then it defines a bounded linear operator on $\ell^2(\mathbb Z^d)$. We show that $\mathcal A_{p\omega}$ is inverse closed in $B(\ell^2(\mathbb Z^d))$.
Let $0<p\leq 1$, $d \in \mathbb N$, and let $\omega$ be an admissible weight on $\mathbb Z^{2d}$ satisfying the weak growth condition. Let $\ell^p(\mathbb Z^{2d},\omega)$ be the collection of all sequences $a=(a_{kl})_{k,l \in \mathbb Z^d}$ satisfying $\|a\|=\sum_{k,l \in \mathbb Z^d}|a_{kl}|^p\omega(k-l)^p<\infty$. Let $\theta >0$. The twisted convolution of two sequences $a=(a_{kl})_{k,l \in \mathbb Z^d}$ and $b=(b_{kl})_{k,l \in \mathbb Z^d}$ in $\ell^p(\mathbb Z^{2d},\omega)$ is given by $$(a\star_\theta b)(m,n) = \sum_{k,l\in\mathbb{Z}^d} a_{kl}b_{m-k,n-l}e^{2\pi i\theta(m-k)\cdot l}.$$ Then $\ell^p(\mathbb Z^{2d},\omega)$ is a $p$-Banach $\ast$-algebra with the twisted convolution and the involution $a^\ast_{kl}=\overline{a_{-k,-l}}e^{2\pi i \theta k\cdot l}$ for $a=(a_{kl})_{k,l\in\mathbb{Z}^d}\in\ell^p(\mathbb{Z}^{2d},\omega)$. Each $a \in \ell^p(\mathbb Z^{2d},\omega)$ defines a convolution operator $L_a$ on $\ell^2(\mathbb Z^{2d})$ given by $L_a(b)=a\star_\theta b\;(b \in \ell^2(\mathbb Z^{2d}))$. We show that $L_a$ is invertible in $B(\ell^2(\mathbb Z^{2d}))$ if and only if $a$ is invertible in $\ell^p(\mathbb Z^{2d},\omega)$ and in this case, $L_a^{-1}=L_{a^{-1}}$.
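The twisted convolution and the involution above can be implemented directly for finitely supported sequences. The following sketch (plain Python with $d=1$; the value of $\theta$ and the supports are arbitrary illustrative choices) checks the $\ast$-algebra identity $(a\star_\theta b)^\ast=b^\ast\star_\theta a^\ast$:

```python
import cmath
import numpy as np

theta = 0.3          # arbitrary illustrative value of theta; here d = 1

def twisted_conv(a, b):
    # (a *_theta b)(m, n) = sum_{k,l} a[k,l] b[m-k, n-l] e^{2 pi i theta (m-k) l}
    out = {}
    for (k, l), av in a.items():
        for (kb, lb), bv in b.items():
            key = (k + kb, l + lb)                       # (m, n); m - k = kb
            phase = cmath.exp(2j * cmath.pi * theta * kb * l)
            out[key] = out.get(key, 0) + av * bv * phase
    return out

def involution(a):
    # a*(k, l) = conj(a(-k, -l)) e^{2 pi i theta k l}
    return {(-k, -l): v.conjugate() * cmath.exp(2j * cmath.pi * theta * k * l)
            for (k, l), v in a.items()}

rng = np.random.default_rng(5)
a = {k: complex(*rng.standard_normal(2)) for k in [(0, 0), (1, -1), (-2, 1)]}
b = {k: complex(*rng.standard_normal(2)) for k in [(0, 1), (2, 0)]}

lhs = involution(twisted_conv(a, b))
rhs = twisted_conv(involution(b), involution(a))
err = max(abs(lhs[k] - rhs.get(k, 0)) for k in lhs)
print(err)
```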
A $p$-Banach $\ast$-algebra $\mathcal{A}$ is \emph{symmetric} if $\sigma(aa^\ast)\subset [0,\infty)$ for all $a\in\mathcal{A}$ or, equivalently, $\sigma(a)\subset\mathbb{R}$ for all $a=a^\ast\in\mathcal{A}$. Let $\mathcal A$ and $\mathcal B$ be $p$-Banach algebras with $\mathcal A \subset \mathcal B$, and let $\mathcal A$ and $\mathcal B$ have the same unit. Then $\mathcal A$ is \emph{inverse closed} (\emph{spectrally invariant}) in $\mathcal B$ if $a \in \mathcal A$ and $a^{-1}\in \mathcal B$ imply $a^{-1}\in \mathcal A$. The property of symmetry is important in itself in the theory of Banach algebras, as symmetric Banach algebras have many properties of $C^\ast$-algebras. Even though symmetry is defined for a single algebra while inverse-closedness concerns a pair of nested algebras, these two topics are closely related, to such an extent that most of the time the symmetry of a Banach algebra $\mathcal{A}$ is shown using the inverse-closedness of $\mathcal{A}$ in some $C^\ast$-algebra, and this is done using Hulanicki's lemma.
With this in mind, we first prove Hulanicki's lemma for $p$-Banach algebras in Section 2. Barnes' lemma for $p$-Banach algebras is proved in Section 3. In Section 4, we apply these lemmas to prove the inverse-closedness of the $p$-Banach algebra of infinite matrices over $\mathbb Z^d$ in $B(\ell^2(\mathbb Z^d))$ and the inverse-closedness of the $p$-Banach algebra $\ell^p(\mathbb Z^{2d},\omega)$ with the twisted convolution in $B(\ell^2(\mathbb Z^{2d}))$.
\section{Hulanicki's lemma for $p$-Banach algebras}
The following theorem is Hulanicki's lemma \cite[Proposition 2.5]{Hu} for $p$-Banach algebras. See \cite[6.1 Proposition]{Fe} for a proof of it for Banach algebras, i.e., for the case of $p=1$.
\begin{theorem}\label{hul}
Let $0<p\leq 1$. Let $\mathcal{A}$ be a $p$-Banach $\ast$-algebra, $S$ be a $\ast$-subalgebra of $\mathcal{A}$, and let $T$ be a faithful $\ast$-representation of $\mathcal{A}$ on a Hilbert space $\mathcal{H}$ satisfying $$\|T_x\|_{op}^p=\lim_{n\to\infty}\|x^n\|^{\frac{1}{n}}\quad(x=x^\ast \in S).$$ If $\mathcal{A}$ has a unit $e$, then assume in addition that $T_e=I$, the identity operator in $B(\mathcal{H})$. If $x=x^\ast\in S$, then $\sigma_\mathcal{A}(x)=\sigma(T_x).$
\end{theorem}
We shall require the following lemma.
\begin{lemma}
Let $0<p\leq 1$. Let $\mathcal A$ be a $p$-Banach $\ast$-algebra, let $\mathcal{B}$ be the $\|\cdot\|$-closure of some commutative $\ast$-subalgebra of $\mathcal{A}$, and let $T$ be a faithful $\ast$-representation of $\mathcal A$ on a Hilbert space $\mathcal H$ satisfying $\|T_x\|_{op}^p=\lim_{n\to\infty}\|x^n\|^{\frac{1}{n}}$ for all $x=x^\ast \in \mathcal B$. If $I$ is in the operator norm closure of $T(\mathcal{B})$, then there is $e\in\mathcal{B}$ such that $T_e=I$ and $\mathcal{A}$ is unital with $e$ as unit.
\end{lemma}
\begin{proof}
For all $x\in\mathcal{B}$, let $\mu(x)=\|T_x\|_{op}^p$, and let $r(x)$ be the spectral radius of $x$. Then $\mu$ and $r$ are equivalent $p$-norms on $\mathcal{B}$, as $r$ is subadditive on $\mathcal{B}$, $r(x)=\mu(x)$ for all $x=x^\ast\in\mathcal{B}$, and $\mu(x)=\mu(x^\ast),\ r(x)=r(x^\ast)$ for all $x\in\mathcal{B}$. The completion of $\mathcal{B}$ with $\mu$, $\mathcal{B}^\mu$, is a commutative $p$-$C^\ast$-algebra isomorphic to $\overline{T(\mathcal{B})}^\mu$, and by assumption $\mathcal{B}^\mu$ has a unit. As $\mathcal{B}$ is dense in $\mathcal{B}^\mu$, $\mu(x)\leq\|x\| \ (x\in\mathcal{B})$, and every $\phi\in\Delta(\mathcal{B})$ can be extended to $\widetilde{\phi}\in\Delta(\mathcal{B}^\mu)$, the Gel'fand spaces of $\mathcal{B}^\mu$ and $\mathcal{B}$ are homeomorphic via the map $\widetilde{\phi}\mapsto\widetilde{\phi}_{|\mathcal{B}}$. Since the unit of $\mathcal{B}^\mu$ has the Gel'fand transform $\mathbf{1}$, there is $x\in\mathcal{B}$ such that $\|\widehat{x}-\mathbf{1}\|_\infty<\frac{1}{2}$. Since $|\widehat{x}|\geq\frac{1}{2}$ on $\Delta(\mathcal{B})$, there is a unit $e\in\mathcal{B}$, and $T_e=I$. For $a\in\mathcal{A}$, $T_{a-ae}=T_a-T_aI=0$ and $T_{a-ea}=T_a-IT_a=0$. Since $T$ is faithful, $a=ae=ea$ and so $e$ is the unit of $\mathcal{A}$.
\end{proof}
\begin{proof}[Proof of Theorem \ref{hul}]
For $x=x^\ast\in S$, let $\mathcal{B}$ be a commutative $\|\cdot\|$-closed $\ast$-subalgebra of $\mathcal{A}$ containing $x$.
If $I\in\mathcal{B}^\mu$, then the facts that the spectrum of $x$ does not separate the complex plane, $\mathcal{A}$ and $\mathcal{B}$ have the same unit, and $\mathcal{B}^\mu$ and $B(\mathcal{H})$ have the same unit imply that \begin{equation}\label{star} \sigma_\mathcal{A}(x)=\sigma_\mathcal{B}(x)=\{\phi(x):\phi\in\Delta(\mathcal{B})=\Delta(\mathcal{B}^\mu)\}=\sigma_{\mathcal{B}^\mu}(x)=\sigma(T_x). \end{equation}
If $I\notin\mathcal{B}^\mu$ and $\mathcal{A}$ has no unit, then $0\in\sigma_\mathcal{A}(x)$. Since $\mathcal{B}^\mu+\mathbb{C}I\cong\mathcal{B}^\mu\oplus\mathbb{C}$ and $\mathcal{B}^\mu+\mathbb{C}I$ and $B(\mathcal{H})$ have the same unit, $0\in\sigma_{\mathcal{B}^\mu+\mathbb{C}I}(x)=\sigma(T_x)$. So, $\sigma_\mathcal{A}(x)=\sigma(T_x)$ as the case of non-zero spectral values follows from (\ref{star}).
If $I\notin\mathcal{B}^\mu$ and $\mathcal{A}$ has unit, say $e$, then $T_e=I$ and $e\notin\mathcal{B}$. Since $\mathcal{B}^\mu+\mathbb{C}e\cong\mathcal{B}^\mu\oplus\mathbb{C}$ and $\mathcal{B}^\mu+\mathbb{C}e$ and $\mathcal{A}$ have the same unit $e$, $0\in\sigma_{\mathcal{B}^\mu+\mathbb{C}e}(x)=\sigma_\mathcal{A}(x)$. Also $0\in\sigma(T_x)$ as seen above. Combining it with (\ref{star}), we have $\sigma_\mathcal{A}(x)=\sigma(T_x)$.
\end{proof}
\section{Barnes' lemma for $p$-Banach algebras}
Let $(G,d)$ be a metric space with the counting measure $\mu$. For a subset $A$ of $G$, $\chi(A)$ denotes the characteristic function of $A$. For $\delta>0$, let $\Gamma[\delta]=\{(x,y) \in G \times G : d(x,y)\leq\delta \}$, and for $x\in G$, let $\Gamma_x[\delta]=\{y\in G: d(x,y)\leq\delta \}$.
\textbf{Assumption:} There are constants $C>0, b>0$ such that $\mu(\Gamma_x[\delta])\leq C\delta^b$ for all $x\in G$ and $\delta>0$.
A \emph{kernel} $K=K(x,y)$ is a complex-valued function on $G\times G$. Let $0<p\leq 1$. Let $\mathcal{A}_p$ be the collection of all kernels $K(x,y)$ such that $$\|K\|_p=\max\Big\{\sup_x\sum_y|K(x,y)|^p, \sup_y\sum_x|K(x,y)|^p \Big\}<\infty.$$ Then $(\mathcal{A}_p,\|\cdot\|_p)$ is a $p$-Banach $\ast$-algebra with the convolution $$(K\star J)(x,y)=\sum_zK(x,z)J(z,y) \quad (K,J\in\mathcal{A}_p)$$ and the involution $K^\ast(x,y)=\overline{K(y,x)} \quad (K\in\mathcal{A}_p)$. Indeed, if $K,J\in\mathcal{A}_p$, then
\begin{align*} \sum_x |(K\star J)(x,y)|^p = \sum_x \left|\sum_z K(x,z)J(z,y)\right|^p &\leq \sum_x \sum_z |K(x,z)|^p|J(z,y)|^p \\&\leq \|K\|_p\|J\|_p<\infty,
\end{align*}
and the same inequality follows by reversing the roles of $x$ and $y$, so we obtain $\|K\star J\|_p\leq\|K\|_p\|J\|_p$.
Let $\delta\in(0,1]$ be fixed and define a weight $\omega:G\times G \to [1,\infty)$ by $$\omega(x,y)=(1+d(x,y))^\delta \quad ((x,y)\in G\times G).$$ Denote by $\mathcal{A}_{p\omega}$ the $p$-Banach $\ast$-algebra consisting of all kernels $K$ with the norm $$\|K\|_{p\omega}=\max\Big\{\sup_x\sum_y|K(x,y)|^p\omega(x,y)^p, \sup_y\sum_x|K(x,y)|^p\omega(x,y)^p \Big\}<\infty$$ and with the involution and convolution the same as those of $\mathcal{A}_p$. Let $x,y,z\in G$. Then $d(x,y)\leq d(x,z)+d(z,y)$ implies that $\omega(x,y)\leq\omega(x,z)\omega(z,y)$, and this gives $\|K\star J\|_{p\omega}\leq \|K\|_{p\omega}\|J\|_{p\omega}$.
If $p>1$, then $\mathcal{A}_p$ is a Banach space \cite[Theorem 11.5]{Jo} with the norm $$\|K\|_p=\max\Bigg\{\sup_x\left(\sum_y|K(x,y)|^p\right)^\frac{1}{p}, \sup_y\left(\sum_x|K(x,y)|^p\right)^\frac{1}{p} \Bigg\}.$$
\begin{lemma}\label{inc}
Let $0<p\leq1$. If $K\in\mathcal{A}_{p\omega}$, then $K\in\mathcal{A}_q$ for $q\geq p$.
\end{lemma}
Let $0<p\leq1$, $q\geq p$, and let $K\in\mathcal{A}_p$. Then $K$ defines a bounded linear operator $K_q$ on $\ell^q(G)$ in the following manner: $$K_q(f)(x)=\sum_yK(x,y)f(y) \quad (f\in\ell^q(G)).$$ The spectra of $K$ in $\mathcal{A}_{p\omega}$ and in $\mathcal{A}_p$ are denoted by $\sigma_{p\omega}(K)$ and $\sigma_p(K)$ respectively, and the corresponding spectral radii are $r_{p\omega}(K)$ and $r_p(K)$. The spectrum and spectral radius of the operator $K_q$ in $B(\ell^q(G))$ are denoted by $\sigma(K_q)$ and $r(K_q)$ respectively.
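That a kernel with finite row and column sums defines a bounded operator on $\ell^2(G)$ is an instance of the Schur test: $\|K_2\|_{op}\leq\sqrt{(\sup_x\sum_y|K(x,y)|)(\sup_y\sum_x|K(x,y)|)}$. A numerical illustration on a finite $G$ (numpy; the random sparse kernel is an arbitrary example):

```python
import numpy as np

rng = np.random.default_rng(6)
n = 40

# a random sparse finite kernel on G = {0, ..., n-1}
K = rng.standard_normal((n, n)) * (rng.random((n, n)) < 0.2)

row = np.abs(K).sum(axis=1).max()     # sup_x sum_y |K(x, y)|
col = np.abs(K).sum(axis=0).max()     # sup_y sum_x |K(x, y)|
schur = np.sqrt(row * col)            # Schur-test bound on the l^2 operator norm
op = np.linalg.norm(K, 2)             # actual operator norm (largest singular value)
print(op, schur)
```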
\begin{theorem}\label{rwww}
Let $0<p\leq1$, and let $K\in\mathcal{A}_{p\omega}$. Then $r_{p\omega}(K)=r_p(K)$.
\end{theorem}
\begin{proof}
Let $0<\varepsilon\leq1$. Define a weight $\omega_\varepsilon:G\times G\to[1,\infty)$ by $$\omega_\varepsilon(x,y)=(1+\varepsilon d(x,y))^\delta.$$ Since $d(x,y)\leq d(x,z)+d(z,y)$, $1\leq\omega_\varepsilon(x,y) \leq \omega_\varepsilon(x,z)\omega_\varepsilon(z,y)$. So, $\mathcal{A}_{p\omega_\varepsilon}$ is a $p$-Banach algebra with the norm $\|K\|_{p\omega_\varepsilon}=\|K\omega_\varepsilon\|_p$. As $\omega_\varepsilon \leq \omega \leq \varepsilon^{-\delta}\omega_\varepsilon$ on $G\times G$, $\|K\|_{p\omega}\leq \varepsilon^{-p\delta} \|K\|_{p\omega_\varepsilon}$ and for $n\in\mathbb{N}$, $\|K^n\|^\frac{1}{n}_{p\omega}\leq \varepsilon^{-\frac{p\delta}{n}} \|K^n\|_{p\omega_\varepsilon}^\frac{1}{n}$. This implies that \begin{equation}\label{r1} r_{p\omega}(K)\leq r_{p\omega_\varepsilon}(K)\leq \|K\|_{p\omega_\varepsilon}. \end{equation} Since $1\leq\omega_\varepsilon(x,y)$ for all $x,y\in G$, \begin{align*} \sup_x \sum_y |K(x,y)|^p &\leq \sup_x \sum_y|K(x,y)|^p \omega_\varepsilon(x,y)^p \\&\leq \sup_x \sum_y|K(x,y)|^p (1+\varepsilon^{p\delta} d(x,y)^{p\delta}) \quad (\text{as} \ 0<p\delta\leq1) \\&\leq \sup_x \sum_y|K(x,y)|^p + \varepsilon^{p\delta} \sup_x \sum_y|K(x,y)|^p d(x,y)^{p\delta}. \end{align*} Since the same inequality holds with $x$ and $y$ interchanged, $$\lim_{\varepsilon\to0}\|K\|_{p\omega_\varepsilon}=\|K\|_p.$$ This along with (\ref{r1}) gives $r_{p\omega}(K)\leq\|K\|_p$. But then $$r_{p\omega}(K)^n=r_{p\omega}(K^n)\leq\|K^n\|_p$$ and so $r_{p\omega}(K)\leq r_p(K)$. Since $\mathcal{A}_{p\omega}\subset\mathcal{A}_p$, $r_p(K)\leq r_{p\omega}(K)$. Hence, $r_{p\omega}(K)=r_p(K)$.
\end{proof}
The following lemma is a generalization of \cite[Lemma 4.4.6]{Ri} to $p$-Banach algebras.
\begin{lemma}\label{ri}
Let $0<p\leq1$, let $a\mapsto T_a$ be a continuous $\ast$-representation of a $p$-normed $\ast$-algebra $\mathcal{A}$ on a Hilbert space $\mathcal{H}$, and let $a=a^\ast\in\mathcal{A}$. Then $\|T_a\|_{op}^p\leq r(a)$, where $r(a)$ is the spectral radius of $a$ in $\mathcal{A}$.
\end{lemma}
\begin{proof}
Since the representation is continuous, there is a constant $C\geq1$ such that $\|T_x\|_{op}\leq C$ for all $x\in\mathcal{A}$ with $\|x\|\leq1$. Let $x\in\mathcal{A}$. If $x\neq0$, then $$\bigg\|\frac{x}{\|x\|^\frac{1}{p}}\bigg\|=1 \ \text{and so} \ \bigg\|T_{\frac{x}{\|x\|^\frac{1}{p}}}\bigg\|_{op}\leq C.$$ This gives $\|T_x\|_{op}^p\leq C^p \|x\| \leq C\|x\|$; if $x=0$, the inequality is trivial. Thus $\|T_x\|_{op}^p\leq C\|x\|$ for all $x\in\mathcal{A}$. Since $a=a^\ast$, we have $\|T_{a^2}\|_{op}=\|T_a^\ast T_a\|_{op}=\|T_a\|_{op}^2$, and by induction $\|T_{a^n}\|_{op}=\|T_a\|_{op}^{n}$ for $n=2^j$. For such $n$, $\|T_a\|_{op}^{np}=\|T_{a^n}\|_{op}^p\leq C \|a^n\|$, that is, $\|T_a\|_{op}^p\leq C^\frac{1}{n} \|a^n\|^\frac{1}{n}$. Letting $n=2^j\to\infty$, we get $\|T_a\|_{op}^p\leq r(a)$.
\end{proof}
The next theorem generalizes Barnes' lemma \cite[Theorem 4.7]{Ba} to $p$-Banach algebras.
\begin{theorem}\label{ba}
Let $0<p\leq1$. If $K=K^\ast\in\mathcal{A}_{p\omega}$, then $\sigma_{p\omega}(K)=\sigma(K_2)$.
\end{theorem}
\begin{proof}
By Lemma \ref{inc}, $K\in\mathcal{A}_p$. Let $n\in\mathbb{N}$. Then \begin{equation}\label{ba1} \|K^{n+1}\|_p \leq \|\chi(\Gamma[2^n])K^{n+1}\|_p + \|\chi(\Gamma[2^n]^c)K^{n+1}\|_p, \end{equation}
where $\Gamma[2^n]^c$ denotes the complement of the set $\Gamma[2^n]$ in $G\times G$. Since $2^{n\delta}\leq\omega(x,y)$ for all $(x,y)\in\Gamma[2^n]^c$, \begin{equation}\label{ba2} \|\chi(\Gamma[2^n]^c)K^{n+1}\|_p \leq \|K^{n+1}\|_{p\omega} 2^{-np\delta}. \end{equation} Choose $m\in\mathbb{N}$ such that $\frac{1}{2^m}<p\leq\frac{1}{2^{m-1}}$. Then $1<2^mp$ and so $\|K^{n+1}\|_{2^mp}\leq\|K^{n+1}\|_1$. Using this together with H\"older's inequality and the Assumption, we get
\begin{align*} & \ \quad \sum_x |K^{n+1}(x,y)|^p \big(\chi(\Gamma[2^n])(x,y) \big)^p \\ &\leq \left(\sum_x |K^{n+1}(x,y)|^{2p} \big(\chi(\Gamma[2^n])(x,y)\big)^p\right)^\frac{1}{2} \left(\sum_x \big(\chi(\Gamma[2^n])(x,y)\big)^p\right)^\frac{1}{2} \\ & \leq \left(\sum_x |K^{n+1}(x,y)|^{2^mp}\right)^\frac{1}{2^m} \left(\sum_x \big(\chi(\Gamma[2^n])(x,y)\big)^{2p}\right)^\frac{1}{2^m} \left(\sum_x \big(\chi(\Gamma[2^n])(x,y)\big)^p\right)^\frac{1}{2^{m-1}} \\ & \qquad \cdots \left(\sum_x \big(\chi(\Gamma[2^n])(x,y)\big)^p\right)^\frac{1}{4} \left(\sum_x \big(\chi(\Gamma[2^n])(x,y)\big)^p\right)^\frac{1}{2} \\ & \leq \|K^{n+1}\|_{2^mp}^p (C2^{nb})^\frac{1}{2^m} (C2^{nb})^\frac{1}{2^{m-1}} \cdots (C2^{nb})^\frac{1}{4} (C2^{nb})^\frac{1}{2} \\ & \leq \|K^{n+1}\|_1^p (C2^{nb})^{\sum_{i=1}^m\frac{1}{2^i}} .\end{align*}
Since a similar inequality holds with the roles of $x$ and $y$ interchanged, \begin{equation}\label{ba3} \|\chi(\Gamma[2^n])K^{n+1}\|_p \leq \|K^{n+1}\|_1^p (C2^{nb})^{\sum_{i=1}^m\frac{1}{2^i}}. \end{equation} So, by (\ref{ba1}), (\ref{ba2}) and (\ref{ba3}), \begin{align*} \|K^{n+1}\|_p^\frac{1}{n+1} \leq \|K^{n+1}\|_1^\frac{p}{n+1} (C^\frac{1}{n+1} 2^{\frac{n}{n+1}b})^{\sum_{i=1}^m\frac{1}{2^i}} + \|K^{n+1}\|_{p\omega}^\frac{1}{n+1} 2^{-\frac{n}{n+1}p\delta}. \end{align*} This gives $r_p(K)\leq r_1(K)^p (2^b)^{\sum_{i=1}^m\frac{1}{2^i}} + r_{p\omega}(K) 2^{-p\delta}$. By Theorem \ref{rwww}, $r_p(K)=r_{p\omega}(K)$ and so $$r_p(K) \leq r_1(K)^p \frac{(2^b)^{\sum_{i=1}^m\frac{1}{2^i}}}{1-2^{-p\delta}}.$$ Now, $$r_p(K)=r_p(K^n)^\frac{1}{n} \leq r_1(K^n)^\frac{p}{n} \left(\frac{(2^b)^{\sum_{i=1}^m\frac{1}{2^i}}}{1-2^{-p\delta}}\right)^\frac{1}{n} = r_1(K)^p \left(\frac{(2^b)^{\sum_{i=1}^m\frac{1}{2^i}}}{1-2^{-p\delta}}\right)^\frac{1}{n}. $$ Letting $n\to\infty$, we get $r_p(K) \leq r_1(K)^p$. By \cite[Theorem 4.7]{Ba}, $r_1(K)\leq \|K_2\|_{op}$ and thus $r_p(K) \leq \|K_2\|_{op}^p$. Combining this with Lemma \ref{ri} and Theorem \ref{rwww}, we get $$r_{p\omega}(K)=r_p(K)=\|K_2\|_{op}^p.$$
The result follows from Theorem \ref{hul}.
\end{proof}
\section{Inverse-closedness of some $p$-Banach algebras}
\subsection{Inverse-closedness of $p$-Beurling algebras of infinite matrices}
A weight $\omega$ on $\mathbb{R}^d$ is a non-negative measurable function satisfying $$ \omega(x+y)\leq\omega(x)\omega(y) \quad (x,y\in\mathbb{R}^d).$$ Following \cite{Gr}, we impose the following conditions on the weight $\omega$ in order to study decay conditions of infinite matrices:
\begin{enumerate}
\item[(i)] Let $|\cdot|$ be a norm on $\mathbb{R}^d$, and let $\rho:[0,\infty)\to[0,\infty)$ be a continuous concave function such that $\rho(0)=0$. We take $\omega$ to be of the form $$\omega(x)=e^{\rho(|x|)} \quad (x\in\mathbb{R}^d).$$ Then $\omega(0)=1$ and $\omega$ is even, i.e., $\omega(x)=\omega(-x)$.
\item[(ii)] $\omega$ satisfies the GRS-condition (Gel'fand-Raikov-Shilov condition \cite{Ge}) $$\lim_{n\to\infty} \omega(nx)^\frac{1}{n}=1 \quad \text{for all} \ x\in\mathbb{R}^d.$$
\end{enumerate}
Condition (ii) implies that $\lim_{\alpha\to\infty}\frac{\rho(\alpha)}{\alpha}=0$; such a weight is called an \emph{admissible weight}. From now on we consider only admissible weights, mostly on $\mathbb{Z}^d$, where the weight is obtained by restricting $\omega$ to $\mathbb{Z}^d$.
Let $0<p\leq1$. Let $\mathcal{A}_{p\omega}$ be the collection of all matrices $A=(a_{kl})_{k,l\in\mathbb{Z}^d}$ satisfying $$\|A\|_{p\omega}=\max \Big\{ \sup_{k\in\mathbb{Z}^d} \sum_{l\in\mathbb{Z}^d} |a_{kl}|^p \omega(k-l)^p, \sup_{l\in\mathbb{Z}^d} \sum_{k\in\mathbb{Z}^d} |a_{kl}|^p\omega(k-l)^p \Big\} <\infty.$$ Then $\mathcal{A}_{p\omega}$ is a $p$-Banach $\ast$-algebra with norm $\|\cdot\|_{p\omega}$, involution $\ast:A=(a_{kl})\mapsto A^\ast=(a^\ast_{kl})$ where $a^\ast_{kl}=\overline{a_{lk}}$ and convolution as multiplication defined by $(A\star B)_{kl}=\sum_{j\in\mathbb{Z}^d} a_{kj}b_{jl}$ for $A=(a_{kl}),B=(b_{kl})\in\mathcal{A}_{p\omega}$.
We will usually omit $\mathbb{Z}^d$ from the indices when it is clear from context, and $(A)_{kl}$ denotes the $(k,l)$-th entry of the matrix $A$. When the trivial weight $\omega\equiv1$ is considered, the corresponding space is denoted by $\mathcal{A}_p$.
If $A\in\mathcal{A}_{p\omega}$, then $A\in\mathcal{A}_q$ for all $q\geq p$ and so the standard Schur test implies that $A\in B(\ell^q(\mathbb{Z}^d))$ for all $q\geq p$. So, $\mathcal{A}_{p\omega}$ can be seen as a $\ast$-subalgebra of bounded operators acting on $\ell^2(\mathbb{Z}^d)$. The spectrum of $A$ in $\mathcal{A}_{p\omega}$, $\mathcal{A}_q \ (q\geq p)$ and as an operator in $B(\ell^2(\mathbb{Z}^d))$ will be denoted by $\sigma_{p\omega}(A)$, $\sigma_q(A) $ and $\sigma(A) $ respectively and the corresponding spectral radii are denoted by $r_{p\omega}(A)$, $r_q(A)$ and $r(A)$.
A weight $\omega$ is said to satisfy the \emph{weak growth condition} if there are a constant $C>0$ and $0<\delta\leq1$ such that $$\omega(x)\geq C(1+|x|)^\delta \quad \text{for all}\ x.$$
The following is the main theorem of this section.
\begin{theorem}\label{grm}
Let $\omega$ be an admissible weight satisfying the weak growth condition, and let $A=A^\ast\in\mathcal{A}_{p\omega}$. Then $$r_{p\omega}(A)=\|A\|_{op}^p.$$ Consequently, $\sigma_{p\omega}(A)=\sigma(A)$ and $\mathcal{A}_{p\omega}$ is symmetric.
\end{theorem}
We record a corollary of the above theorem that states the symmetry and inverse-closedness properties explicitly.
\begin{corollary}\label{grc7}
Let $\omega$ be an admissible weight satisfying the weak growth condition, i.e., $\omega(x)\geq C(1+|x|)^\delta$ for some positive constant $C$ and some $\delta\in(0,1]$. If $A\in B(\ell^2(\mathbb{Z}^d))$ satisfies the weighted Schur-type condition $$\max \Big\{ \sup_{k\in\mathbb{Z}^d} \sum_{l\in\mathbb{Z}^d} |a_{kl}|^p \omega(k-l)^p, \sup_{l\in\mathbb{Z}^d} \sum_{k\in\mathbb{Z}^d} |a_{kl}|^p\omega(k-l)^p \Big\} <\infty,$$ then the inverse matrix $A^{-1}=(b_{kl})_{k,l\in\mathbb{Z}^d}$ satisfies the same Schur-type condition $$\max \Big\{ \sup_{k\in\mathbb{Z}^d} \sum_{l\in\mathbb{Z}^d} |b_{kl}|^p \omega(k-l)^p, \sup_{l\in\mathbb{Z}^d} \sum_{k\in\mathbb{Z}^d} |b_{kl}|^p\omega(k-l)^p \Big\} <\infty.$$ If in addition $A$ is a positive operator, then the matrices corresponding to $A^\alpha$ for each $\alpha\in\mathbb{R}$ are also in $\mathcal{A}_{p\omega}$.
\end{corollary}
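The corollary predicts that the inverse of a matrix with off-diagonal decay again has off-diagonal decay. A small numerical sketch (ours; pure-Python Gauss-Jordan inversion of a finite section of a tridiagonal Toeplitz matrix, chosen purely for illustration) exhibits the expected exponential decay of the inverse entries:

```python
def invert(M):
    """Gauss-Jordan inversion of a small dense matrix (lists of lists)."""
    n = len(M)
    A = [row[:] + [1.0 if i == j else 0.0 for j in range(n)]
         for i, row in enumerate(M)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        d = A[col][col]
        A[col] = [v / d for v in A[col]]
        for r in range(n):
            if r != col and A[r][col] != 0.0:
                f = A[r][col]
                A[r] = [a - f * b for a, b in zip(A[r], A[col])]
    return [row[n:] for row in A]

n = 30
# finite section of the Toeplitz matrix with diagonal 3 and off-diagonals 1
A = [[3.0 if i == j else (1.0 if abs(i - j) == 1 else 0.0) for j in range(n)]
     for i in range(n)]
B = invert(A)
# off-diagonal decay of the inverse: |B[0][k]| ~ r^k with r < 1
ratios = [abs(B[0][k + 1]) / abs(B[0][k]) for k in range(10)]
assert all(r < 0.5 for r in ratios)
```

For this matrix the decay rate of the inverse in the infinite-volume limit is $r=(3-\sqrt5)/2\approx0.38$, comfortably below the $0.5$ threshold tested above.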
We shall require the following two lemmas, the first of which constructs a sequence of auxiliary weights $\omega_n$ using techniques developed in \cite{Le} and \cite{Py}.
\begin{lemma}\label{grl8}\cite[Lemma 8]{Gr}
Let $\omega$ be an unbounded admissible weight. Then there is a sequence of admissible weights $\omega_n$ such that
\begin{enumerate}
\item[(i)] $\omega_{n+1}\leq\omega_n\leq\omega$ for all $n\in\mathbb{N}$,
\item[(ii)] there are constants $c_n>0$ such that $\omega\leq c_n\omega_n$, and
\item[(iii)] $\lim_{n\to\infty} \omega_n=1$ uniformly on compact subsets of $\mathbb{R}^d$.
\end{enumerate}
\end{lemma}
Note that all $\omega_n$ are equivalent (by (i) and (ii)) and satisfy the GRS-condition (by (i)). So, $\mathcal{A}_{p\omega}$ and $\mathcal{A}_{p\omega_n}$ coincide and have equivalent norms; thus for all $A\in\mathcal{A}_{p\omega}$, $$r_{p\omega}(A)=r_{p\omega_n}(A) \quad (n\in\mathbb{N}).$$
We only sketch the construction of $\omega_n$, as it will be needed below; for a detailed proof we refer to \cite{Gr}.
\textbf{Construction of $\omega_n$:} For $n\in\mathbb{N}$, let $$\gamma_n=\sup_{\mu\geq\rho^{-1}(n)} \frac{\rho(\mu)-n}{\mu} >0.$$ Since $\rho$ is continuous and $\lim_{\mu\to\infty} \frac{\rho(\mu)-n}{\mu}=0$, there is some $\beta_n\geq\rho^{-1}(n)$ such that $$\gamma_n=\frac{\rho(\beta_n)-n}{\beta_n}.$$ Define $\rho_n:[0,\infty)\to[0,\infty)$ by \begin{align*} \rho_n(\mu)=\begin{cases} \gamma_n\mu, \ &0\leq\mu\leq\beta_n, \\ \rho(\mu)-n, \ &\mu\geq\beta_n. \end{cases} \end{align*}
Define the corresponding weight $\omega_n$ by $$\omega_n(x)=e^{\rho_n(|x|)} \quad (x\in\mathbb{R}^d).$$
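For the concrete admissible choice $\rho(\mu)=\sqrt{\mu}$ (our example, not from the text), elementary calculus gives $\beta_n=4n^2$ and $\gamma_n=1/(4n)$; the following sketch (ours) reproduces this by grid search:

```python
import math

def gamma_beta(n, rho, grid):
    """Grid search for gamma_n = sup_{mu >= rho^{-1}(n)} (rho(mu) - n)/mu
    and a maximiser beta_n (illustration only; names are ours)."""
    best_val, best_mu = -1.0, None
    for mu in grid:
        if rho(mu) >= n:          # equivalent to mu >= rho^{-1}(n)
            val = (rho(mu) - n) / mu
            if val > best_val:
                best_val, best_mu = val, mu
    return best_val, best_mu

rho = math.sqrt                    # an admissible choice: rho(mu)/mu -> 0
n = 3
grid = [i / 100.0 for i in range(1, 20001)]   # mu in (0, 200]
gam, beta = gamma_beta(n, rho, grid)
# closed form for rho = sqrt: the sup is attained at beta_n = 4 n^2,
# with gamma_n = 1/(4 n)
assert abs(beta - 4 * n * n) < 1e-9
assert abs(gam - 1 / (4 * n)) < 1e-9
```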
\begin{lemma}\label{grl9}
With the assumptions of Theorem $\ref{grm}$ and $\omega_n$ as in Lemma $\ref{grl8}$, for every $A=A^\ast\in\mathcal{A}_{p\omega}$, $$\lim_{n\to\infty} \|A\|_{p\omega_n}=\|A\|_{p}$$ and \begin{equation}\label{gr23} r_{p\omega}(A)=r_p(A)=\|A\|_{op}^p. \end{equation}
\end{lemma}
\begin{proof}
Let $\epsilon>0$ and let $A=A^\ast\in\mathcal{A}_{p\omega}$. Since $A=A^\ast$ and each $\omega_n$ is even, the row and column suprema coincide, so $$\|A\|_{p\omega_n}=\sup_k\sum_l|a_{kl}|^p\omega_n(k-l)^p<\infty.$$ By the construction of $\omega_n$, $\omega_n(x)=e^{-n}\omega(x)$ for all $|x|\geq\beta_n$. So, there is $n_0\in\mathbb{N}$ such that $$\sup_k\sum_{l:|k-l|\geq\beta_{n_0}}|a_{kl}|^p\omega_{n_0}(k-l)^p\leq e^{-pn_0}\|A\|_{p\omega}<\epsilon.$$ Since $\omega_{n+1}\leq\omega_n\leq\omega$ for all $n$, if $n\geq n_0$, then $$\sup_k\sum_{l:|k-l|\geq\beta_{n_0}}|a_{kl}|^p\omega_n(k-l)^p<\epsilon.$$ On the region $|x|\leq\beta_{n_0}$ we have $\omega_n\to1$ uniformly, so there is $n_1\in\mathbb{N}$ such that for $n\geq n_1$, $$\sup_k \sum_{l:|k-l|\leq\beta_{n_0}} |a_{kl}|^p\omega_n(k-l)^p \leq (1+\epsilon^p) \sup_k \sum_l |a_{kl}|^p. $$ So, for $n\geq\max\{n_0,n_1\}$, $$\|A\|_{p\omega_n} \leq \epsilon + (1+\epsilon^p) \|A\|_p.$$ Thus, $\limsup_{n\to\infty} \|A\|_{p\omega_n}\leq\|A\|_p$. Since $\omega_n\geq1$, the reverse inequality $\|A\|_{p\omega_n}\geq\|A\|_p$ always holds, and the first claim follows.
Since $\omega$ and $\omega_n$ are equivalent weights for all $n\in\mathbb{N}$, $$r_{p\omega}(A)^k=r_{p\omega}(A^k)=r_{p\omega_n}(A^k)\leq\|A^k\|_{p\omega_n} \quad (k,n\in\mathbb{N}).$$ So, $$r_{p\omega}(A)^k\leq\lim_{n\to\infty}\|A^k\|_{p\omega_n}=\|A^k\|_p \quad (k\in\mathbb{N})$$ and this gives $r_{p\omega}(A)\leq r_p(A)$. Since $\mathcal{A}_{p\omega}\subset\mathcal{A}_p$, $r_p(A)\leq r_{p\omega}(A)$ is always true. Now, as $\omega(x)\geq C(1+|x|)^\delta=\tau_\delta(x)$ and $0<\delta\leq1$, $\mathcal{A}_{p\omega}\subset\mathcal{A}_{p\tau_\delta}$, and so by Theorem \ref{ba}, $r_{p}(A)=\|A\|_{op}^p$. This completes the proof.
\end{proof}
\begin{proof}[Proof of Theorem $\ref{grm}$ and Corollary $\ref{grc7}$]
Combining Theorem \ref{hul} with (\ref{gr23}), we get $\sigma_{p\omega}(A)=\sigma(A)$ for all $A=A^\ast\in\mathcal{A}_{p\omega}$, and the symmetry of $\mathcal{A}_{p\omega}$ follows.
Now, if $A\in\mathcal{A}_{p\omega}$ is an invertible positive operator in $B(\ell^2(\mathbb{Z}^d))$, then $\sigma(A)\subset[\delta_0,\infty)$ for some $\delta_0>0$, and it follows that $\sigma_{p\omega}(A)\subset[\delta_0,\infty)$. The last statement of the corollary then follows from the Riesz functional calculus (see \cite{Ru} and \cite{Ze}).
\end{proof}
\subsection{Wiener's Lemma for Twisted Convolution}
\begin{definition}\cite{Le}
Let $\theta>0$. The \emph{twisted convolution} of two sequences $a=(a_{kl})_{k,l\in\mathbb{Z}^d}$ and $b=(b_{kl})_{k,l\in\mathbb{Z}^d}$ is defined as \begin{equation}\label{gr29} (a\star_\theta b)(m,n) = \sum_{k,l\in\mathbb{Z}^d} a_{kl}b_{m-k,n-l}e^{2\pi i\theta(m-k)\cdot l} = \sum_{k,l\in\mathbb{Z}^d} a_{m-k,n-l}b_{kl}e^{2\pi i\theta k\cdot(n-l)}.\end{equation}
\end{definition}
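The two sums in (\ref{gr29}) match under the substitution $(k,l)\mapsto(m-k,n-l)$. The following sketch (ours, $d=1$, finitely supported sequences stored as dictionaries keyed by $(k,l)$) checks this numerically, together with the Young-type estimate used below:

```python
import cmath

theta = 0.25   # any theta > 0

def twisted_conv(a, b):
    """First expression: (a *_th b)(m,n) = sum a[k,l] b[m-k,n-l] e^{2 pi i th (m-k) l}."""
    out = {}
    for (k, l), av in a.items():
        for (u, v), bv in b.items():        # (u, v) plays the role of (m-k, n-l)
            key = (k + u, l + v)
            out[key] = out.get(key, 0) + av * bv * cmath.exp(2j * cmath.pi * theta * u * l)
    return out

def twisted_conv_alt(a, b):
    """Second expression: sum a[m-k,n-l] b[k,l] e^{2 pi i th k (n-l)}."""
    out = {}
    for (k, l), bv in b.items():
        for (u, v), av in a.items():        # (u, v) plays the role of (m-k, n-l)
            key = (k + u, l + v)
            out[key] = out.get(key, 0) + av * bv * cmath.exp(2j * cmath.pi * theta * k * v)
    return out

a = {(0, 0): 1.0, (1, 2): 0.5 - 1j, (-2, 1): 2.0}
b = {(0, 1): 1j, (3, -1): 0.25}
c1, c2 = twisted_conv(a, b), twisted_conv_alt(a, b)
assert set(c1) == set(c2) and all(abs(c1[k] - c2[k]) < 1e-12 for k in c1)
# Young-type estimate ||a * b||_1 <= ||a||_1 ||b||_1
norm1 = lambda c: sum(abs(v) for v in c.values())
assert norm1(c1) <= norm1(a) * norm1(b) + 1e-12
```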
Let $0<p\leq1$, and let $q\geq 1$. Since $$\|a\star_\theta b\|_q \leq \||a|\star|b|\|_q \leq \|a\|_1\|b\|_q \leq \|a\|_p^\frac{1}{p}\|b\|_q,$$ the twisted convolution operator $L_a(b)=a\star_\theta b$ is in $B(\ell^q(\mathbb{Z}^{2d}))$ for any $a\in\ell^p(\mathbb{Z}^{2d})$.
In this section we consider the space $\ell^p(\mathbb{Z}^{2d})$ with twisted convolution as product and involution $a^\ast_{kl}=\overline{a_{-k,-l}}e^{2\pi i \theta k\cdot l}$ for $a=(a_{kl})_{k,l\in\mathbb{Z}^d}\in\ell^p(\mathbb{Z}^{2d})$.
\begin{theorem}
Let $0<p\leq1$, let $\omega$ be an admissible weight satisfying the weak growth condition, and let $a\in\ell^p(\mathbb{Z}^{2d},\omega)$ be such that the twisted convolution operator $L_a$ is invertible in $B(\ell^2(\mathbb{Z}^{2d}))$. Then $a$ is invertible in $\ell^p(\mathbb{Z}^{2d},\omega)$ and $L_a^{-1}=L_b$ for some $b\in\ell^p(\mathbb{Z}^{2d},\omega)$.
\end{theorem}
\begin{proof}
For $L_a\in B(\ell^2(\mathbb{Z}^{2d}))$, by (\ref{gr29}), the matrix $A$ associated with it has the entries $$A_{(k,l),(m,n)}=a_{m-k,n-l}e^{2\pi i\theta k\cdot(n-l)}.$$ Now, \begin{equation}\label{gr31} \sup_{(k,l)\in\mathbb{Z}^{2d}} \sum_{(m,n)\in\mathbb{Z}^{2d}} |A_{(k,l),(m,n)}|^p \omega(k-m,l-n)^p=\|a\|_{p\omega}<\infty, \end{equation} and likewise with the indices interchanged. This gives $\|A\|_{p\omega}=\|a\|_{p\omega}$ and $A\in\mathcal{A}_{p\omega}$. By Theorem \ref{grm}, $B=A^{-1}\in\mathcal{A}_{p\omega}$. So, it remains to show that there is some $b\in\ell^p(\mathbb{Z}^{2d},\omega)$ such that $B=L_b$. Let $b=L_a^{-1}\delta_0\in\ell^2(\mathbb{Z}^{2d})$, so that $L_ab=\delta_0$, where $\delta_0(0)=1$ and $\delta_0(m)=0$ for non-zero $m\in\mathbb{Z}^{2d}$. Let $c\in c_{00}=\{d=(d_{kl})_{k,l\in\mathbb{Z}^d}: \mathrm{supp}(d) \ \text{is finite}\}$. Then $$L_a(L_b-B)c=a\star_\theta(b\star_\theta c)-L_aL_a^{-1}c=(a\star_\theta b)\star_\theta c-c=\delta_0\star_\theta c-c=c-c=0.$$ So, $L_b=B$ on $c_{00}$. Since $c_{00}$ is dense in $\ell^2(\mathbb{Z}^{2d})$, it follows that the matrices of $L_b$ and $B$ coincide and, by (\ref{gr31}) applied to $b$, $b\in\ell^p(\mathbb{Z}^{2d},\omega)$. The rest follows.
\end{proof}
\end{document} |
\begin{document}
\title{On identities of infinite dimensional Lie superalgebras}
\author[D. Repov\v s, and M. Zaicev]
{Du\v san Repov\v s and Mikhail Zaicev}
\address{Du\v san Repov\v s \\Faculty of Mathematics and Physics, and
Faculty of Education, University of Ljubljana,
P.~O.~B. 2964, Ljubljana, 1001, Slovenia}
\email{[email protected]}
\address{Mikhail Zaicev \\Department of Algebra\\ Faculty of Mathematics and
Mechanics\\ Moscow State University \\ Moscow,119992 Russia}
\email{[email protected]}
\thanks{The first author was supported by the Slovenian Research Agency
grants
P1-0292-0101 and J1-4144-0101.
The second author was partially supported by RFBR grant No 13-01-00234a. We thank the referee for several comments and suggestions.}
\keywords{Polynomial identity, Lie algebra, codimensions,
exponential growth}
\subjclass[2010]{Primary 17C05, 16P90; Secondary 16R10}
\begin{abstract}
We study codimension growth of infinite dimensional Lie superalgebras over an
algebraically closed field of characteristic zero. We prove that if a Lie superalgebra
$L$ is a Grassmann envelope of a finite dimensional simple Lie algebra then the
PI-exponent of $L$ exists and is a positive integer.
\end{abstract}
\maketitle
\section{Introduction}\label{intr}
We shall
consider algebras over a field $F$ of characteristic zero. One of the approaches
to the investigation
of associative and non-associative algebras is to study numerical
invariants associated with their identical relations. Given an algebra $A$, we can
associate the sequence of its codimensions $\{c_n(A)\}_{n\in \mathbb N}$
(all
notions and definitions will be given in the next section).
This sequence gives some information not only about the identities of $A$ but
also about the structure of $A$. For example, $A$ is nilpotent if and only if
$c_n(A )=0$ for all large enough $n$. If $A$ is an associative non-nilpotent
$F$-algebra then $A$ is
commutative if and only if $c_n(A)=1$ for all $n\ge 1$.
For an associative algebra $A$ with a non-trivial polynomial identity the sequence $c_n(A)$ is exponentially bounded by a celebrated theorem of Regev \cite{Reg72},
while $c_n(A)=n!$ if $A$ does not satisfy any non-trivial polynomial identity.
In the
non-associative case the sequence of codimensions may
have even
faster growth.
For example, if $A$ is an
absolutely free algebra then
$$
c_n(A)=a_n n!
$$
where
$$
a_n=\frac{1}{n}{2n-2\choose n-1}
$$
is the $(n-1)$-st Catalan number, i.e. the number of all possible arrangements of brackets in
a word of length $n$.
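As a sanity check, the number $a_n$ of complete bracketings satisfies the recursion $a_n=\sum_{k=1}^{n-1}a_ka_{n-k}$, $a_1=1$, and a short script (ours) confirms the Catalan closed form $a_n=\frac{1}{n}\binom{2n-2}{n-1}$:

```python
from math import comb
from functools import lru_cache

@lru_cache(None)
def bracketings(n):
    """Number of complete bracketings of a (non-associative) word of length n:
    split the word into a left factor of length k and a right factor of length n-k."""
    if n == 1:
        return 1
    return sum(bracketings(k) * bracketings(n - k) for k in range(1, n))

for n in range(2, 12):
    assert bracketings(n) == comb(2 * n - 2, n - 1) // n
```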
For a Lie algebra $L$ the sequence $\{c_n(L)\}_{n\in \mathbb N}$ is in general not exponentially bounded,
even if $L$ satisfies non-trivial Lie identities (see for example \cite{P}).
Nevertheless, the class of Lie algebras with exponentially bounded codimensions is sufficiently wide.
It includes, in particular, all finite dimensional algebras
\cite{B-Dr,GZ-TAMS2010}, Kac-Moody algebras \cite{Z1,Z2}, infinite
dimensional simple Lie algebras of Cartan type \cite{M1}, the Virasoro algebra, and many others.
In the case when
$\{c_n(A)\}_{n\in \mathbb N}$ is exponentially bounded,
the upper and the lower limits of
the sequence $\{\sqrt[n]{c_n(A)}\}_{n\in \mathbb N}$ exist and
a natural question arises: does the ordinary limit
$$
\lim_{n\to\infty} \sqrt[n]{c_n(A)}
$$
exist?
If this limit exists, we call it the PI-exponent of $A$ and denote it by $exp(A)$.
Amitsur conjectured in the 1980's
that for any associative PI-algebra such a limit exists and
is a non-negative integer. This conjecture was confirmed first for verbally
prime PI-algebras in \cite{BR1,R2}, and later in the general case in \cite{GZ1,GZ2}.
For Lie algebras a series of positive results was obtained for finite dimensional
algebras \cite{GRZ2,GRZ1,Z3},
for algebras with nilpotent commutator subalgebras \cite{PM},
for affine Kac-Moody algebras \cite{Z1,Z2}, and some other classes (see \cite{M2}).
For Lie superalgebras
there exist
only partial results \cite{MZ4,MZ3,MZ2,MZ1}.
On the other hand it was shown in \cite{ZM} that there exists a Lie algebra $L$ with
$$
3.1 < \liminf_{n\to\infty} \sqrt[n]{c_n(L)} \le
\limsup_{n\to\infty} \sqrt[n]{c_n(L)} < 3.9.
$$
This algebra $L$ is soluble and almost nilpotent, i.e. it contains a nilpotent ideal
of finite codimension. In the
general non-associative case there exists,
for any real number $\alpha >1$,
an algebra $A_\alpha$ such that
$$
\lim_{n\to\infty} \sqrt[n]{c_n(A_\alpha)}=\alpha.
$$
(see \cite{GMZ}). Note also that by
a recent result \cite{GZ} there exist finite
dimensional Lie superalgebras $L$ with a
fractional PI-exponent $\lim_{n\to\infty}\sqrt[n]{c_n(L)}$.
In the
present paper we
shall
study Grassmann envelopes of finite dimensional simple Lie
algebras. Our main result is the following theorem:
\begin{theorem}\label{t1}
Let $L=L_0\oplus L_1$ be a finite dimensional simple Lie algebra over an
algebraically closed field $F$ of characteristic zero with some ${\mathbb Z}_2$-grading. Let also $\widetilde L=L_0\otimes G_0\oplus L_1\otimes G_1$ be the
Grassmann envelope of $L$. Then the limit
$$
exp(\widetilde L)=\lim_{n\to \infty}\sqrt[n]{c_n(\widetilde L)}
$$
exists and
is a positive integer. Moreover, $exp(\widetilde L)=\dim L$.
\end{theorem}
Another result of our paper concerns graded identities. Since any Lie superalgebra
$L$ is ${\mathbb Z}_2$-graded one can consider ${\mathbb Z}_2$-graded identities of
$L$ and the corresponding graded codimensions $c_n^{gr}(L)$. We shall
prove that graded codimensions have similar properties.
\begin{theorem}\label{t2}
Let $L=L_0\oplus L_1$ be a finite dimensional simple Lie algebra over an
algebraically closed field $F$ of characteristic zero with some ${\mathbb Z}_2$-grading. Let also $\widetilde L=L_0\otimes G_0\oplus L_1\otimes G_1$ be a
Grassmann envelope of $L$. Then the limit
$$
exp^{gr}(\widetilde L)=\lim_{n\to \infty}\sqrt[n]{c_n^{gr}(\widetilde L)}
$$
exists and
is a positive integer. Moreover, $exp^{gr}(\widetilde L)=\dim L$.
\end{theorem}
In other words, both PI-exponent $exp(\widetilde L)$ and graded PI-exponent
$exp^{gr}(\widetilde L)$ exist, they are integers and they coincide. Note that for an
arbitrary
${\mathbb Z}_2$-graded algebra the
growth of ordinary codimensions and graded codimensions
may differ. For example, if $A=M_k(F)\otimes F{\mathbb Z}_2$ with the canonical
${\mathbb Z}_2$-grading induced from the group algebra $F{\mathbb Z}_2$, where $M_k(F)$ is the
full $k\times k$ matrix algebra, then $exp(A)=k^2$ while $exp^{gr}(A)=2k^2$ (see
\cite{GZbook} for details). In the Lie case one can take $L=L_0\oplus L_1$ to be
a
two-dimensional metabelian algebra with $L_0=\langle e\rangle$, $L_1=\langle f\rangle$ and with only one
non-trivial product $[e,f]=f$. Then $c_n(L)=n-1$ for all $n\ge 2$, hence $exp(L)=1$. On
the other hand $exp^{gr}(L)=2$.
\section{The main constructions and definitions}
Let $A$ be an arbitrary non-associative algebra over a field $F$ and let $F\{X\}$ be
the absolutely free $F$-algebra with a countable generating set $X$. A polynomial
$f=f(x_1,\ldots,x_n)$ is said to be an identity of $A$ if $f(a_1,\ldots,a_n)=0$ for
any $a_1,\ldots,a_n\in A$. The set of all identities of $A$ forms a T-ideal $Id(A)$
in $F\{X\}$, that is, an ideal stable under all endomorphisms of $F\{X\}$. Denote by
$P_n=P_n(x_1,\ldots,x_n)$ the subspace of all multilinear polynomials in
$x_1,\ldots,x_n$ in $F\{X\}$. Then $P_n\cap Id(A)$ is the subspace of all multilinear
identities of $A$ of degree $n$. In the case when ${\rm char}~F=0$,
the T-ideal $Id(A)$
is completely determined by the subspaces $\{P_n\cap Id(A)\}$, $n=1,2,\ldots~$.
To estimate how many identities an algebra $A$ has,
one can define the so-called $n$-th
codimension of the identities of $A$ or, for short, the
codimension of $A$:
$$
c_n(A)=\dim \frac{P_n}{P_n\cap Id(A)},~
n=1,2,\ldots~.
$$
As mentioned above, the class of associative and non-associative algebras with an
exponentially bounded sequence $\{c_n(A)\}$ is sufficiently wide. In the
case when $c_n(A) < a^n$ for some real $a$, one can define the lower and the upper PI-exponents of $A$ as
follows:
$$
\underline{exp}(A)=\liminf_{n\to\infty} \sqrt[n]{c_n(A)},\quad
\overline{exp}(A)=\limsup_{n\to\infty} \sqrt[n]{c_n(A)}
$$
and the ordinary PI-exponent
\begin{equation}\label{e1}
exp(A)=\lim_{n\to \infty}\sqrt[n]{c_n(A)},
\end{equation}
provided that $\underline{exp}(A)=\overline{exp}(A)$.
For ${\mathbb Z}_2$-graded algebras one can also consider graded identities. Let $X$
and $Y$ be two infinite sets of variables and let $F\{X\cup Y\}$ be the absolutely free
algebra generated by $X\cup Y$. If we suppose that all elements of $X$ are even and all
elements of $Y$ are odd, i.e. $\deg(x)=0, \deg(y)=1$ for any $x\in X,y\in Y$, then
$F\{X\cup Y\}$ is naturally endowed with a ${\mathbb Z}_2$-grading. A polynomial
$f=f(x_1,\ldots,x_m,y_1,\ldots, y_n)\in F\{X\cup Y\}$ is said to be a graded identity
of a superalgebra $A=A_0\oplus A_1$ if $f(a_1,\ldots,a_m,b_1,\ldots, b_n)=0$ for all
$a_1,\ldots,a_m\in A_0,b_1,\ldots, b_n\in A_1$. Fix $0\le k\le n$ and denote by
$P_{k,n-k}$ the subspace of $F\{X\cup Y\}$ spanned by all multilinear polynomials in
$x_1,\ldots,x_k \in X$, $y_1,\ldots, y_{n-k}\in Y$. Then $P_{k,n-k}\cap Id(A)$ is the
set of all multilinear polynomial identities of the superalgebra $A=A_0\oplus A_1$ in
$k$ even and $n-k$ odd variables.
One of the equivalent definitions of graded codimensions of $A$ is
$$
c_n^{gr}(A)=\sum_{k=0}^n {n\choose k} c_{k,n-k}(A),
$$
where
$$
c_{k,n-k}(A)=\dim\frac{P_{k,n-k}}{P_{k,n-k}\cap Id(A)}.
$$
Starting from a ${\mathbb Z}_2$-graded algebra of some class (Lie, Jordan, alternative,
etc.) one can construct a ${\mathbb Z}_2$-graded algebra of a different class using the
notion of the Grassmann envelope. Grassmann envelopes play an exceptional role in
PI-theory. For example, any variety of associative algebras is generated by the
Grassmann envelope of some finite dimensional associative superalgebra \cite{Kem}.
In the Lie case any so-called special variety is generated by the Grassmann envelope of a
finitely generated Lie superalgebra \cite{Va}.
We recall this construction for the Lie and super Lie cases. Let $G$ be the Grassmann
algebra generated by $1$ and the infinite set $\{e_1,e_2,\ldots \}$ satisfying the
following relations: $e_i e_j=-e_j e_i$, $i,j=1,2,\ldots~$. It is known that $G$ has a
natural ${\mathbb Z}_2$-grading $G=G_0\oplus G_1$ where
$$
G_0={\rm Span}\langle e_{i_1}\cdots e_{i_n}\mid n=2k,\ k=0,1,\ldots\rangle,
$$
$$
G_1={\rm Span}\langle e_{i_1}\cdots e_{i_n}\mid n=2k+1,\ k=0,1,\ldots\rangle.
$$
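The defining relations and the ${\mathbb Z}_2$-grading of $G$ are easy to verify mechanically on basis monomials $e_{i_1}\cdots e_{i_n}$. The following sketch (ours, monomials stored as sorted index tuples carrying a sign; note the relations force $e_i^2=0$ in characteristic zero) checks anticommutativity, the parity of products, and the centrality of even elements:

```python
def mono_mul(s, t):
    """Multiply basis monomials e_s * e_t (s, t: sorted tuples of distinct
    generator indices); returns (sign, sorted tuple), or (0, None) when a
    generator repeats (e_i^2 = 0)."""
    if set(s) & set(t):
        return 0, None
    word = list(s) + list(t)
    # bubble-sort while counting transpositions: each swap of adjacent
    # generators e_i e_j contributes a factor -1
    sign = 1
    for i in range(len(word)):
        for j in range(len(word) - 1 - i):
            if word[j] > word[j + 1]:
                word[j], word[j + 1] = word[j + 1], word[j]
                sign = -sign
    return sign, tuple(word)

def mul(a, b):
    """Multiply linear combinations of basis monomials (dicts: tuple -> coeff)."""
    out = {}
    for s, av in a.items():
        for t, bv in b.items():
            sg, u = mono_mul(s, t)
            if sg:
                out[u] = out.get(u, 0) + sg * av * bv
    return {k: v for k, v in out.items() if v != 0}

e = lambda *idx: {tuple(idx): 1}
# anticommutation of generators: e1 e2 = - e2 e1
assert mul(e(1), e(2)) == {(1, 2): 1}
assert mul(e(2), e(1)) == {(1, 2): -1}
# Z2-grading: a product of two even monomials is even
assert all(len(k) % 2 == 0 for k in mul(e(1, 2), e(3, 4)))
# even elements are central: (e1 e2) e3 = e3 (e1 e2)
assert mul(e(1, 2), e(3)) == mul(e(3), e(1, 2))
```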
Given a Lie algebra $L$ with ${\mathbb Z}_2$-grading $L=L_0\oplus L_1$, its Grassmann
envelope
$$
G(L)=L_0\otimes G_0\oplus L_1\otimes G_1 \subset L\otimes G
$$
is a Lie superalgebra. Conversely, if $L=L_0\oplus L_1$ is a Lie superalgebra, then
$G(L)$ is an ordinary Lie algebra with a
${\mathbb Z}_2$-grading.
\section{Cocharacters of Grassmann envelopes}
The main tool in the study of codimension asymptotics is the representation theory of
symmetric groups. We refer the reader to \cite{JK} for details. The symmetric group $S_n$
acts naturally on multilinear polynomials in $F\{X\}$ by
\begin{equation}\label{e2}
\sigma f(x_1,\ldots, x_n)= f(x_{\sigma(1)},\ldots, x_{\sigma(n)}).
\end{equation}
Hence $P_n$ is an $FS_n$-module, and $P_n\cap Id(L)$ and
$$
P_n(L)=\frac{P_n}{P_n\cap Id(L)}
$$
are $FS_n$-modules as well. The $S_n$-character $\chi(P_n(L))$ is called the $n$-th cocharacter of $L$,
and we shall write
$$
\chi_n(L)=\chi(P_n(L)).
$$
Recall that any irreducible $FS_n$-module corresponds to a partition $\lambda$ of $n$,
$\lambda\vdash n$, $\lambda=(\lambda_1,\ldots, \lambda_k)$, where
$\lambda_1\ge\ldots\ge \lambda_k$ are
positive integers and $\lambda_1+\cdots+ \lambda_k=n$. By Maschke's Theorem, any
finite dimensional $FS_n$-module $M$ decomposes into the direct sum of irreducible
components and hence its character $\chi(M)$ has a decomposition
$$
\chi(M)=\sum_{\lambda\vdash n} m_\lambda \chi_\lambda
$$
where $m_\lambda$ are non-negative integers. In particular, for the algebra $L$ we have
\begin{equation}\label{e3}
\chi_n(L)=\sum_{\lambda\vdash n} m_\lambda \chi_\lambda.
\end{equation}
The integers $m_\lambda$ in (\ref{e3}) are called the multiplicities of $\chi_\lambda$ in
$\chi_n(L)$ and $d_\lambda=\deg \chi_\lambda=\chi_\lambda(1)$ are the dimensions of the
corresponding irreducible representations. Therefore
\begin{equation}\label{e4}
c_n(L)=\dim P_n(L)=\sum_{\lambda\vdash n} m_\lambda d_\lambda.
\end{equation}
For any partition $\lambda=(\lambda_1,\ldots, \lambda_k)\vdash n$ one can construct the
Young diagram $D_\lambda$ containing $\lambda_1$ boxes in the first row, $\lambda_2$
boxes in the second row, and so on:
$$
D_\lambda=\; \begin{array}{|c|c|c|c|c|c|c|} \hline & &
\cdots & & & \cdots & \\ \hline & & \cdots & \\
\cline{1-4}\vdots \\ \cline{1-1} \\ \cline{1-1} \end{array}
$$
Given integers $k,l,d \ge 0$, we define the partition
$$
h(k,l,d)=(\underbrace{l+d,\ldots, l+d}_{k},\underbrace{l,\ldots,l}_{d})
$$
of $n=kl+d(k+l)$. The Young diagram associated with $h(k,l,d)$ is hook shaped, and we define
$H(k,l)$, an infinite hook, as the union of all $D_\lambda$ with $\lambda=h(k,l,d)$, $d=1,2,\ldots~$.
For shortness we will say that a partition $\lambda\vdash n$ lies in the hook $H(k,l)$,
$\lambda\in H(k,l)$, if $D_\lambda\subset H(k,l)$. In other words,
$\lambda\in H(k,l)$ if $\lambda=(\lambda_1,\ldots,\lambda_t)$ and $\lambda_{k+1}\le l$.
According to this definition we will say that the cocharacter of $L$ lies in the hook
$H(k,l)$ if $m_\lambda=0$ in (\ref{e3}) as soon as $\lambda\not\in H(k,l)$.
A particular case of $H(k,l)$ is an infinite strip $H(k,0)$. In this case $\lambda\in H(k,0)$
if $\lambda_{k+1}=0$.
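The hook bookkeeping above is elementary to check mechanically; a small sketch (ours) verifies $|h(k,l,d)|=kl+d(k+l)$ and the membership criterion $\lambda_{k+1}\le l$:

```python
def h(k, l, d):
    """The partition h(k,l,d): k parts equal to l+d followed by d parts equal to l."""
    return (l + d,) * k + (l,) * d

def in_hook(lam, k, l):
    """lambda lies in H(k,l) iff lambda_{k+1} <= l (vacuous if lambda has <= k parts)."""
    return len(lam) <= k or lam[k] <= l

for k, l, d in [(2, 3, 4), (1, 1, 5), (3, 1, 2)]:
    lam = h(k, l, d)
    assert sum(lam) == k * l + d * (k + l)     # |h(k,l,d)| = kl + d(k+l)
    assert in_hook(lam, k, l)
```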
The following fact is well-known and we state it without proof.
\begin{lemma}\label{l1}
Let $L$ be a finite dimensional algebra, $\dim L=d<\infty$. Then $\chi_n(L)$ lies in the hook
$H(d,0)$ for all $n\ge 1$.
\end{lemma}
$\Box$
Another important numerical invariant of the identities of $L$ is the colength $l_n(L)$.
By definition
\begin{equation}\label{e5}
l_n(L)=\sum_{\lambda\vdash n} m_\lambda
\end{equation}
where $m_\lambda$ are taken from (\ref{e3}). It easily follows from (\ref{e4}) and (\ref{e5}) that
\begin{equation}\label{e6}
\max\{d_\lambda|m_\lambda\ne 0\}\le c_n(L)\le l_n(L)\cdot\max\{d_\lambda|m_\lambda\ne 0\}.
\end{equation}
For studying graded identities of $L=L_0\oplus L_1$ we need to act separately on even
and odd variables. More precisely, the space $P_{k,n-k}= P_{k,n-k}(x_1,\ldots,
x_k,y_1,\ldots,y_{n-k})$ is an $S_k\times S_{n-k}$-module, where the symmetric groups $S_k$ and
$S_{n-k}$ act on $x_1,\ldots, x_k$ and on $y_1,\ldots,y_{n-k}$, respectively. Any
irreducible $S_k\times S_{n-k}$-module is a tensor product of an $S_k$-module and an
$S_{n-k}$-module and corresponds to a pair $\lambda, \mu$ of partitions,
$\lambda\vdash k, \mu\vdash n-k$. As before, the subspace $P_{k,n-k}\cap Id(L)$ is an
$S_k\times S_{n-k}$-stable subspace and one can consider the quotient
$$
P_{k,n-k}(L)=\frac{P_{k,n-k}}{P_{k,n-k}\cap Id(L)}
$$
as an $S_k\times S_{n-k}$-module. Its $S_k\times S_{n-k}$-character $\chi_{k,n-k}(L)=
\chi(P_{k,n-k}(L))$ decomposes into irreducible components:
\begin{equation}\label{e7}
\chi_{k,n-k}(L)=\sum_{{\lambda\vdash k \atop \mu\vdash n-k}} m_{\lambda,\mu} \chi_{\lambda,\mu}
\end{equation}
and we define the
$(k,n-k)$-colength of $L$ as
$$
l_{k,n-k}(L)=\sum_{{\lambda\vdash k \atop \mu\vdash n-k}} m_{\lambda,\mu}
$$
with $m_{\lambda,\mu}$ taken from (\ref{e7}).
First, we prove some relations between graded and non-graded numerical invariants.
We begin by recalling
the correspondence between multilinear homogeneous polynomials in a
free
${\mathbb Z}_2$-graded Lie algebra and in a
free Lie superalgebra. Let $f=f(x_1,\ldots,x_k,
y_1,\ldots, y_m)$ be a non-associative polynomial multilinear in $x_1,\ldots,x_k$,
$y_1,\ldots, y_m$, where $x_1,\ldots,x_k$ are supposed to be even and $y_1,\ldots, y_m$
odd indeterminates. Then $f$ is a linear combination of monomials from $P_{k,m}$.
Let $M=M(x_1,\ldots,x_k,y_1,\ldots, y_m)$ be such a monomial. We fix the positions of
$y_1,\ldots, y_m$ in $M$ and write $M$ for brevity in the following form
$$
M=X_0y_{\sigma(1)}X_1\cdots X_{m-1}y_{\sigma(m)}X_m
$$
where $X_0,\ldots, X_m$ are some words (possibly empty) consisting of
left and right brackets
and indeterminates $x_1,\ldots,x_k$. Now we define a monomial $\widetilde M$
in even indeterminates $x_1,\ldots,x_k$ and odd indeterminates $y_1,\ldots,y_m$ of the
free Lie superalgebra as
$$
\widetilde M ={\rm sgn}(\sigma) X_0y_{\sigma(1)}X_1\cdots X_{m-1}y_{\sigma(m)}X_m.
$$
Extending the map $f\mapsto \widetilde f$ by linearity we obtain a linear isomorphism
$P_{k,m}\rightarrow P_{k,m}$ between two subspaces of
a
${\mathbb Z}_2$-graded free Lie
algebra and a
free Lie superalgebra, respectively. Although the monomials in $P_{k,m}$
are not linearly independent, it easily follows
from the Jacobi and super-Jacobi identities
that the map $f\mapsto \widetilde f$
is well-defined. Similarly, we can define the
inverse map from a free Lie superalgebra to a free ${\mathbb Z}_2$-graded Lie algebra.
Following the same argument as in the
associative case (see \cite[Lemma 3.4.7]{GZbook}) we
obtain for any ${\mathbb Z}_2$-graded Lie algebra $L$ and its Grassmann envelope
$G(L)=G_0\otimes L_0\oplus G_1\otimes L_1$ the following result.
\begin{lemma}\label{l2}
Let $f\in P_{k,m}$ be a multilinear polynomial in the free Lie algebra. Then
\begin{itemize}
\item
$f$ is a graded identity of $L$ if and only if $\widetilde f$ is a graded identity
of $G(L)$; and
\item
$\widetilde{\widetilde f} = f$.
\end{itemize}
\end{lemma}
\vskip -0.2in
$\Box$
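To illustrate the sign rule defining $\widetilde M$, here is a minimal example (added for the reader) with $m=2$ odd indeterminates:
```latex
M=[[y_1,y_2],x_1]\ (\sigma=\mathrm{id})\ \Rightarrow\ \widetilde M=M,
\qquad
M=[[y_2,y_1],x_1]\ (\sigma=(1\,2))\ \Rightarrow\ \widetilde M=-[[y_2,y_1],x_1].
```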
The next lemma is an obvious generalization of Lemma \ref{l1}.
\begin{lemma}\label{l3}
Let $L=L_0\oplus L_1$ be a finite dimensional Lie algebra, $\dim L_0=k, \dim L_1=l$, and
let
$$
\chi_{q,n-q}(L)=\sum_{{\lambda\vdash q \atop \mu\vdash n-q}} m_{\lambda,\mu} \chi_{\lambda,\mu}
$$
be its $(q,n-q)$-graded cocharacter. If $m_{\lambda,\mu}\ne 0$ then $\lambda\in H(k,0)$
and $\mu\in H(l,0)$.
\end{lemma}
$\Box$
Using this lemma we can restrict the shape of the
graded cocharacter of the Grassmann envelope $G(L)$.
\begin{lemma}\label{l4}
Let $L=L_0\oplus L_1$ be a finite dimensional Lie algebra, $\dim L_0=k, \dim L_1=l$, and let $\widetilde L$ be its Grassmann envelope. If
\begin{equation}\label{e8}
\chi_{q,n-q}(\widetilde L)=\sum_{{\lambda\vdash q \atop \mu\vdash n-q}} m_{\lambda,\mu} \chi_{\lambda,\mu}
\end{equation}
and $m_{\lambda,\mu}\ne 0$ in (\ref{e8}) then $\lambda\in H(k,0)$ and $\mu\in H(0,l)$.
\end{lemma}
{\em Proof}. Suppose $m_{\lambda, \mu}\ne 0$ in (\ref{e8}) for some $\lambda\vdash q$,
$\mu \vdash n-q$.
Then there exists a multilinear polynomial $g=g(x_1,\ldots,x_q,
y_1,\ldots,y_{n-q})$ such that
$$
f= e_{T_\lambda} e_{T_\mu} g(x_1,\ldots, y_{n-q})
$$
is not a graded identity of $\widetilde L$, where $e_{T_\lambda}\in FS_q$,
$e_{T_\mu}\in FS_{n-q}$ are essential idempotents generating minimal left ideals in
$FS_q$ and $FS_{n-q}$, respectively. The inclusion $\lambda\in H(k,0)$ follows immediately from
Lemma \ref{l3} since $L$ and $G(L)$ have the same cocharacters on even indeterminates.
Since $e_{T_\lambda}$ and $e_{T_\mu}$ commute, applying Lemma 4.8.6 from \cite{GZbook}
we get
$$
\widetilde f= a e_{T_\lambda} g,
$$
where $a\in I_{\mu'}$. Here $\mu'$ is the partition of $n-q$ conjugate to $\mu$ and
$I_{\mu'}$ is the minimal two-sided ideal of $FS_{n-q}$ generated by $e_{T_{\mu'}}$. That is,
$I_{\mu'}$ has the character $r\cdot\chi_{\mu'}$, where $r=d_{\mu'}=\deg \chi_{\mu'}$.
By Lemma \ref{l2}, $\widetilde f$ is not a graded identity of $G(\widetilde L)$. Since
$\widetilde{\widetilde h}=h$ for any $h\in P_{q,n-q}$, we see that $\widetilde f$ is not a graded
identity of $L$, and $\mu'\in H(l,0)$ by Lemma \ref{l3}. In other words, the number
of rows of the Young diagram $D_{\mu'}$ does not exceed $l$. This number equals the
number of columns of $D_\mu$, hence $\mu\in H(0,l)$ and we are done.
$\Box$
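The conjugation step at the end of the proof can be checked on a small example (an added illustration): the number of rows of $D_{\mu'}$ equals the number of columns of $D_\mu$, namely $\mu_1$.
```latex
\mu=(3,2)\vdash 5,\qquad \mu'=(2,2,1),\qquad
\#\{\text{rows of }D_{\mu'}\}=3=\mu_1=\#\{\text{columns of }D_\mu\},
```
so here $\mu'\in H(3,0)$ forces $\mu\in H(0,3)$.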
Using the previous lemma we restrict the shape of the non-graded cocharacter of $G(L)$.
\begin{lemma}\label{l5}
Let $L=L_0\oplus L_1$ be a finite dimensional Lie algebra, $\dim L_0=k, \dim L_1=l$, and let
$$
\chi(\widetilde L)=\sum_{\lambda\vdash n} m_{\lambda} \chi_{\lambda}
$$
be
the
$n$-th (non-graded) cocharacter of $\widetilde L= G(L)$. Then $m_\lambda\ne 0$
only if $\lambda\in H(k,l)$.
\end{lemma}
{\em Proof}. Suppose $f\in P_n$ is not an identity of $\widetilde L$. Since $f$ is
multilinear, we may assume that $f(x_1,\ldots, x_q, y_1,\ldots, y_{n-q})\in P_{q,n-q}$
is not an identity of $\widetilde L$ for some $0\le q \le n$. Moreover, we may consider only the case when the graded polynomial $f$ generates in $P_{q,n-q}$ an irreducible $S_q\times S_{n-q}$-submodule $M$ with the character $(\chi_\lambda,\chi_\mu)$, $\lambda \vdash q$, $\mu\vdash n-q$.
Now we lift
the
$S_q\times S_{n-q}$-action up to an
$S_n$-action and consider a
decomposition
of $FS_n M$ into irreducible components:
$$
\chi(FS_n M)= \sum_{\nu\vdash n} m_\nu \chi_\nu.
$$
By Lemma \ref{l4}, $\lambda$ lies in $H(k,0)$, the horizontal strip of height $k$,
and $\mu$ lies in $H(0,l)$, the vertical strip of width $l$. It then follows
from the
Littlewood-Richardson rule for induced representations (\cite[2.8.13]{JK}, see also
\cite[Theorem 2.3.9]{GZbook})
that $m_\nu=0$ as soon as $\nu\not\in H(k,l)$,
and the proof is complete.
$\Box$
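A minimal instance of the Littlewood-Richardson (here, Pieri) computation used above, added for illustration: inducing from $S_3\times S_2$ to $S_5$ with $\lambda=(3)\in H(1,0)$ and $\mu=(1,1)\in H(0,1)$ gives
```latex
\chi_{(3)}\otimes\chi_{(1,1)}\big\uparrow^{S_5}_{S_3\times S_2}
=\chi_{(4,1)}+\chi_{(3,1,1)},
```
and both $(4,1)$ and $(3,1,1)$ lie in the hook $H(1,1)$, as the lemma predicts.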
\begin{lemma}\label{l6}
Let $G(L)=\widetilde L=\widetilde L_0\oplus \widetilde L_1$ be the Grassmann envelope
of a finite dimensional Lie algebra $L=L_0\oplus L_1$ with $\dim L_0=k, \dim L_1=l$. Then its colength sequence $\{l_n( \widetilde L)\}$ is polynomially
bounded.
\end{lemma}
{\em Proof.}
We use the notation $\{z_1,z_2,\ldots\}$ for non-graded indeterminates here since
$\{x_1,x_2,\ldots\}$ were even variables in the previous statements.
Let
\begin{equation}\label{eqq1}
\chi(\widetilde L)=\sum_{\lambda\vdash n} m_\lambda\chi_\lambda
\end{equation}
be the
$n$-th cocharacter of $\widetilde L$. By Lemma~\ref{l5} we have
$\lambda\in H(k,l)$ as soon as $m_\lambda\ne 0$ in (\ref{eqq1}). Fix
$\lambda\vdash n$ with $m_\lambda =m\ne 0$ and consider
the
$FS_n$-submodule
\begin{equation}\label{eqq2}
W_1\oplus\cdots\oplus W_m\subseteq P_n(\widetilde L)
\end{equation}
with $\chi(W_i)=\chi_\lambda$ for all $i=1,\ldots, m$.
We shall prove that
\begin{equation}\label{eqq3}
m\le (k+l) 2^{2kl}n^{k^2+l^2}
\end{equation}
in (\ref{eqq2}). Denote by $\lambda'_1,\ldots, \lambda'_l$ the heights of
the
first
$l$ columns of
the
Young diagram $D_\lambda$.
Clearly, it suffices
to prove the
inequality (\ref{eqq3}) only for $\lambda$ with $\lambda_k>l$ and
$\lambda'_l>k$; otherwise $\lambda\in H(k',l')$ with $k'\le k$, $l'\le l$ and
$k'+l'<k+l$.
Denote
$$
\mu_1=\lambda'_1-k,\ldots,\mu_l=\lambda'_l-k.
$$
Then $\lambda_1+\cdots+\lambda_k+\mu_1+\cdots+\mu_l=n$.
It is well-known (see, for example, \cite{MZ5}) that one can choose multilinear
$f_1\in W_1,\ldots, f_m\in W_m$ such that $FS_nf_1=W_1,\ldots, FS_nf_m=W_m$
and each $f_i$, $i=1,\ldots, m$, is symmetric on $k$ sets of indeterminates of orders
$\lambda_1,\ldots, \lambda_k$ and is alternating on $l$ sets of orders
$\mu_1,\ldots, \mu_l$.
According to this decomposition into
symmetric and alternating sets we rename
$z_1,\ldots, z_n$ as follows:
\begin{equation}\label{eqq4}
\{z_1,\ldots, z_n\}=\{z^1_1,\ldots,z^1_{\lambda_1},\ldots, z^k_1,\ldots,z^k_{\lambda_k},
\bar z^1_1,\ldots,\bar z^1_{\mu_1},\ldots, \bar z^l_1,\ldots,\bar z^l_{\mu_l}
\},
\end{equation}
where each $f_i$ is symmetric on any set $\{z^j_1,\ldots,z^j_{\lambda_j}\}$,
$j=1,\ldots,k$, and is alternating on any set
$\{\bar z^s_1,\ldots,\bar z^s_{\mu_s}\}$, $s=1,\ldots, l$.
We shall find $\delta_1,\ldots, \delta_m\in F$ such that
$$
f=\delta_1f_1+\cdots + \delta_mf_m
$$
is an identity of $\widetilde L$ if (\ref{eqq3}) does not hold.
Note that for any $\delta_1,\ldots, \delta_m\in F$ the polynomial
$f$ is also symmetric on each subset $\{z^i_1,\ldots,z^i_{\lambda_i}\}$,
$1\le i\le k$, and alternating on each subset $\{\bar z^s_1,\ldots,\bar
z^s_{\mu_s}\}$, $s=1,\ldots, l$.
Let $E=\{e_1,\ldots, e_{k+l}\}$ be a homogeneous basis of $L$ with
$E_0=\{e_1,\ldots, e_k\}\subset L_0$, $E_1=\{e_{k+1},\ldots, e_{k+l}\}\subset L_1$.
Then $f$ is an identity of $\widetilde L$ if and only if $\varphi(f)=0$ for any
evaluation $\varphi: Z\to \widetilde L$ such that $\varphi(z_i)=g_i\otimes a_i$,
$1\le i \le n$, where $a_i$ is a basis element from $E$ and $g_i\in G$ has the
same parity as $a_i$ and $g_1\cdots g_n\ne 0$ in $G$.
Note also that $\varphi(f)=0$
implies $\varphi'(f)=0$ for any evaluation $\varphi'$ such that $\varphi'(z_i)=
g_i'\otimes a_i$, $1\le i\le n$, provided that $g_1'\cdots g_n'\ne 0$.
Using these two remarks we shall find an upper bound for the number of evaluations
needed to decide whether $f$ is an identity of $\widetilde L$ or not.
Consider first one symmetric subset $Z_1=\{z^1_1,\ldots,z^1_{\lambda_1}\}$. If
$\varphi(z^1_i)=g\otimes e$, $\varphi(z^1_j)=h\otimes e$ for some $i\ne j$ with
$e\in E_1$, then $\varphi(f)=0$, as follows from the symmetry on $Z_1$. Hence we
need to check only evaluations with at most $r\le l$ odd values
$\varphi(z^1_{i_1})=g_1\otimes e_{t_1}, \ldots, \varphi(z^1_{i_r})=g_r\otimes e_{t_r}$, where $e_{t_1},\ldots, e_{t_r}\in E_1$ are distinct. Since $Z_1$ is a
symmetric set of
variables, the result of the evaluation $\varphi$ does not depend (up to sign)
on the choice of $i_1,\ldots, i_r$. Hence we have ${l\choose r}$ possibilities.
Given $0\le r\le l$, we estimate the number of evaluations of the remaining
$\lambda_1-r$ variables in the even component of $\widetilde L$.
First, let $r=0$ and $\varphi(z^1_i)=g_i\otimes a_i$, $a_i\in E_0$, $1\le i\le
\lambda_1$. If $e_1$ appears in the row $(a_1,\ldots, a_{\lambda_1})$ exactly
$\alpha_1$ times, $e_2$ appears $\alpha_2$ times and so on,
then the result
of such a substitution depends only on $\alpha_1,\ldots, \alpha_k$ since $f$ is
symmetric on $Z_1$. Hence we have no more than $(\lambda_1+1)^k$ variants
since $0\le \alpha_1,\ldots, \alpha_k\le \lambda_1$. In particular, we need at most
$(n+1)^k$ evaluations if $r=0$.
Let now $r=1$. We can replace an arbitrary variable from $Z_1$ by an odd element and
get (up to sign) the same value $\varphi(f)$ since $f$ is symmetric on $Z_1$.
Suppose, say, that
$\varphi(z^1_{\lambda_1})=h\otimes e$, $e\in E_1$, and
$\varphi(z^1_1)=g_1\otimes a_1,\ldots, \varphi(z^1_{\lambda_1-1}) =
g_{\lambda_1-1}\otimes a_{\lambda_1-1}$,
where all $a_j$ are even.
If
$\alpha_1,\ldots,\alpha_k$ are the same integers as in the
case $r=0$, then the result
of the substitution again depends only on $\alpha_1,\ldots,\alpha_k$. Hence for $r=1$
we have at most
$$
{l\choose 1}\lambda^k_1 \le {l\choose 1}(n+1)^k
$$
variants for $\varphi$ since $0\le \alpha_1,\ldots,\alpha_k\le \lambda_1-1$.
Similarly, for general $0\le r\le l$ we have at most
$$
{l\choose r}(\lambda_1+1-r)^k \le {l\choose r}(n+1)^k
$$
variants. Therefore for evaluating all variables from $Z_1$ it
suffices to use
$$
\sum_{r=0}^l {l\choose r}(n+1)^k = 2^l(n+1)^k
$$
substitutions, and for all symmetric variables we need at most
$$
(2^l(n+1)^k)^k
$$
substitutions.
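The counting identity above can be sanity-checked numerically; the snippet below (an illustration added here, not part of the proof) verifies that $\sum_{r=0}^{l}\binom{l}{r}(n+1)^k = 2^l(n+1)^k$ for a few small parameter values.

```python
from math import comb

def substitution_bound(n: int, k: int, l: int) -> int:
    # Sum over the number r of odd values used in the symmetric set Z_1:
    # for each r there are C(l, r) choices of odd basis elements and at
    # most (n+1)^k patterns of multiplicities for the even values.
    return sum(comb(l, r) * (n + 1) ** k for r in range(l + 1))

# The closed form factors (n+1)^k out of the binomial sum.
for n, k, l in [(5, 2, 3), (10, 3, 1), (7, 1, 4)]:
    assert substitution_bound(n, k, l) == 2 ** l * (n + 1) ** k
```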
Now consider the alternating set $Z_1'=\{\bar z^1_1,\ldots, \bar z^1_{\mu_1}\}$.
If $\varphi(\bar z^1_i)=g\otimes e$, $\varphi(\bar z^1_j)=h\otimes e$
for some
$i\ne j$ with the same $e\in E_0$,
then $\varphi(f)=0$. Hence we can choose only
$0\le r\le k$ distinct basis elements $b_1,\ldots,b_r\in E_0$ for the values of
$\bar z^1_{i_1}, \ldots, \bar z^1_{i_r}$ of the type $g_i\otimes b_i$. Up to
sign,
the result of the
substitution does not depend on $i_1,\ldots, i_r$, and we have only
${k\choose r}$ options.
Suppose now that all $\varphi(\bar z^1_i)$, $1\le i\le r$, are fixed even values. Let
$$
\varphi(\bar z^1_{r+1})=g_1\otimes b_1,\ldots,
\varphi(\bar z^1_{\mu_1})=g_{\mu_1-r}\otimes b_{\mu_1-r},\quad b_1,\ldots,
b_{\mu_1-r}\in E_1.
$$
Then (up to sign) the result of $\varphi$ depends only on the numbers of occurrences
of $e_{k+1},\ldots, e_{k+l}$ in the row $(b_1,\ldots, b_{\mu_1-r})$. Hence we have
at most $(\mu_1-r+1)^l$ variants for the substitution of odd variables. As in the
symmetric case we obtain the upper bound
$$
\sum_{r=0}^k {k\choose r}(n+1)^l = 2^k(n+1)^l
$$
for one subset and $(2^k(n+1)^l)^l$ for all skew variables.
We have proved that one can find $T\le 2^{2kl}(n+1)^{k^2+l^2}$ evaluations
$\varphi_1,\ldots,\varphi_T$ such that the relations
\begin{equation}\label{eqq5}
\varphi_1(f)=\cdots=\varphi_T(f)=0
\end{equation}
imply $\varphi(f)=0$ for any evaluation $\varphi$, that is, $f$ is an identity
of $\widetilde L$. Recall that $f=\delta_1f_1+\cdots+\delta_mf_m$. Therefore for any
evaluation $\varphi$ the equality $\varphi(f)=0$ can be viewed as a system of $k+l$
homogeneous linear equations in the algebra $\widetilde L$ in the unknown coefficients
$\delta_1,\ldots,\delta_m$. If (\ref{eqq3}) does not hold, then the system
(\ref{eqq5}) has a non-trivial solution $\bar\delta_1,\ldots,\bar\delta_m$ and
$f=\bar\delta_1f_1+\cdots+\bar\delta_mf_m$ is an identity of $\widetilde L$, a
contradiction.
We have proved the inequality (\ref{eqq3}). From this inequality it follows that
all multiplicities in (\ref{eqq1}) are bounded by $(k+l)2^{2kl}n^{k^2+l^2}$.
Finally
note that the number of partitions $\lambda\in H(k,l)$ is bounded by $n^{k+l}$.
Hence
$$
l_n(\widetilde L)<(k+l)2^{2kl}n^{k^2+l^2+kl}
$$
and we have thus completed the proof.
$\Box$
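The final counting step, that the number of partitions of $n$ lying in $H(k,l)$ is at most $n^{k+l}$, can be checked by brute force for small parameters; the snippet below is an added illustration under the definition $\lambda\in H(k,l)\iff\lambda_{k+1}\le l$.

```python
def partitions(n, max_part=None):
    """Generate all partitions of n as non-increasing tuples."""
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    for first in range(min(n, max_part), 0, -1):
        for rest in partitions(n - first, first):
            yield (first,) + rest

def in_hook(lam, k, l):
    # lambda lies in H(k,l) iff lambda_{k+1} <= l
    return len(lam) <= k or lam[k] <= l

# Brute-force check of the bound #{lambda in H(k,l)} <= n^(k+l).
for n in range(1, 13):
    for k, l in [(1, 1), (2, 1), (2, 2)]:
        count = sum(1 for lam in partitions(n) if in_hook(lam, k, l))
        assert count <= n ** (k + l)
```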
As a corollary of the previous results we obtain the following:
\begin{proposition}\label{p1}
Let $L=L_0\oplus L_1$ be a finite dimensional ${\mathbb Z}_2$-graded Lie algebra
with $\dim L_0=k, \dim L_1=l$ and let $\widetilde L=G(L)$ be its Grassmann envelope.
Then there exist constants $\alpha,\beta\in {\mathbb R}$ such that
$$
c_n(\widetilde L)\le \alpha n^\beta(k+l)^n.
$$
In particular,
$$
\overline{exp}(\widetilde L)=\limsup_{n\to\infty} \sqrt[n]{c_n(\widetilde L)} \le k+l.
$$
\end{proposition}
{\em Proof}. By \cite[Lemma 6.2.5]{GZbook},
there exist constants $C$ and $r$ such that
$$
\sum_{\lambda\in H(k,l)} d_\lambda \le C n^r(k+l)^n
$$
for all $n=1,2,\ldots$. In particular,
$$
\max\{d_\lambda|\lambda\vdash n, \lambda\in H(k,l)\} \le C n^r(k+l)^n.
$$
Now Lemma \ref{l6} and the inequality (\ref{e6}) complete the proof.
$\Box$
\section{Existence of PI-exponents}
\begin{proposition}\label{p2}
Let $L$ be a finite dimensional simple Lie algebra over an algebraically
closed field of characteristic zero with some ${\mathbb Z}_2$-grading,
$L=L_0\oplus L_1$, $\dim L_0=k, \dim L_1=l$. Let also $\widetilde L=G(L)$ be its
Grassmann envelope. Then there exist constants $\gamma>0,\delta\in {\mathbb R}$
such that
$$
c_n(\widetilde L)\ge \gamma n^\delta(k+l)^n.
$$
In particular,
$$
\underline{exp}(\widetilde L)=\liminf_{n\to\infty} \sqrt[n]{c_n(\widetilde L)} \ge k+l.
$$
\end{proposition}
{\em Proof}. Denote $d=k+l=\dim L$. By \cite[Theorem 12.1]{Razm},
for the adjoint
representation of $L$ there exists a multilinear associative polynomial
$h=h(u^1_1,\ldots, u^1_d,\ldots,$ $ u^m_1,\ldots, u^m_d)$ alternating on each subset of
indeterminates $\{u^i_1,\ldots, u^i_d\}$, $1\le i\le m$, such that under any evaluation
$\varphi: u^i_j\to ad~ b^i_j$, $b^i_j\in L$, the value $\varphi(h)$ is a scalar linear
transformation of $L$, and $\varphi(h)\ne 0$ for some $\varphi$. It follows that for
any integer $t\ge 1$ there exists a multilinear Lie polynomial
$$
f_t=f_t(u^1_1,\ldots, u^1_d,\ldots, u^{mt}_1,\ldots, u^{mt}_d, w)
$$
alternating on each set $\{u^i_1,\ldots, u^i_d\}$, $1\le i\le mt$, such that
$\varphi(f_t)\ne 0$ for some evaluation $\varphi: \{u^1_1,\ldots,u^{mt}_d,w\}\to
L_0\cup L_1$. Since $f_t$ is multilinear and alternating on each set
$\{u^i_1,\ldots, u^i_d\}$ and $d=\dim L_0+\dim L_1$, it follows that for any $t\ge 1$
we get a graded multilinear polynomial
$$
f_t=f_t(x^1_1,\ldots, x^1_k,\ldots, x^{mt}_1,\ldots, x^{mt}_k,
y^1_1,\ldots, y^1_l,\ldots, y^{mt}_1,\ldots, y^{mt}_l,w)
$$
which is not a graded identity of $L$ and is alternating on each subset
$\{x^i_1,\ldots, x^i_k\}$ and on each subset $\{y^i_1,\ldots, y^i_l\}$,
$1\le i\le mt$, where the $x^i_j$'s are even and the $y^i_j$'s are odd variables. The latter
indeterminate $w$ can be taken of arbitrary parity, say, $w=x_0$ is even.
Consider the
$S_p\times S_q$-action on
$$
P_{p+1,q}=P_{p+1,q}(x_0,x^1_1,\ldots, x^{mt}_k, y^1_1,\ldots, y^{mt}_l),
$$
where $p=mtk$, $q=mtl$ and $S_p$, $S_q$ act on $\{x^i_j\}$, $\{y^i_j\}$, respectively.
It follows from Lemma \ref{l3}
that the
$S_p\times S_q$-character of the
submodule generated
by $f$ in $P_{p+1,q}$ lies in the pair of strips $H(k,0)$, $H(l,0)$, that is,
$$
\chi(F[S_p\times S_q]f) = \sum_{{\lambda\vdash p\atop \mu\vdash q}} m_{\lambda,\mu} \chi_{\lambda,\mu}
$$
with $m_{\lambda,\mu}=0$ unless $\lambda \in H(k,0)$, $\mu \in H(l,0)$. Hence $\lambda$
is a partition of $mtk$ with at most $k$ rows. On the other hand, $f$ depends on $mt$
alternating subsets of even indeterminates of order $k$ each. It is well-known that in this
case $m_{\lambda,\mu}=0$ if $\lambda=(\lambda_1, \lambda_2,\ldots)$ and $\lambda_1
\ge mt+1$. It follows that only the rectangular partition
\begin{equation}\label{e10}
\lambda=(\underbrace{mt,\ldots, mt}_k)
\end{equation}
can appear in $F[S_p\times S_q]f$ with non-zero multiplicity. Similarly,
\begin{equation}\label{e11}
\mu=(\underbrace{mt,\ldots, mt}_l)
\end{equation}
if $m_{\lambda,\mu}\ne 0$. Hence we can assume that $f$ has the form
$$
f=e_{T_\lambda} e_{T_\mu} g(x^1_1,\ldots, y^{mt}_l,w)
$$
with $\lambda$ and $\mu$ of the types (\ref{e10}), (\ref{e11}), respectively.
By Lemma \ref{l2}, the polynomial $\widetilde f$ is not an identity of the Lie
superalgebra $\widetilde L=G(L)$ and, by Lemma 4.8.6 from \cite{GZbook}, the
graded
polynomial $\widetilde f$ generates in $P_{p+1,q}(\widetilde L)$ an irreducible
$S_{p}\times S_q$-submodule with the character $(\chi_\lambda,\chi_{\mu'})$, where
$$
\mu'=(\underbrace{l,\ldots, l}_{mt})
$$
is the partition of $mtl$ conjugate to $\mu$.
First we apply the Littlewood-Richardson rule and induce this
$S_{p}\times S_q$-module up to an $S_n$-module. Then we induce the
obtained
$S_n$-module up to an $S_{n+1}$-module, where $n=p+q=mt(k+l)$. It follows from
the
Littlewood-Richardson rule that the induced $S_{n+1}$-module
can contain only simple submodules corresponding to
partitions $\nu\vdash n+1$ such that the Young diagram $D_\nu$ contains a
subdiagram $D_{\nu_0}$, where
$$
\nu_0=h(k,l,t_0)=(\underbrace{l+t_0,\ldots, l+t_0}_{k},\,
\underbrace{l,\ldots, l}_{t_0})
$$
is a finite hook with $t_0\ge\max\{l-k,\,mt-kl\}$. Since we are interested
in the
asymptotics of codimensions, we may assume that $mt-kl>l-k$ and then $t_0=mt-kl$.
In particular, $\nu_0$ is a partition of $n_0=(k+l)t_0+kl$. Then $n+1-n_0 =
(k+l-1)kl+1$ and by \cite[Lemma 6.2.4]{GZbook}
$$
d_{\nu_0} \le d_\nu \le n^c d_{\nu_0}
$$
where $c=(k+l-1)kl+1$ and
$$
d_{h(k,l,t_0)}\simeq an_0^b(k+l)^{n_0} \quad {\rm if}~ n_0\to\infty
$$
for some constants $a,b$, by Lemma 6.2.5 from \cite{GZbook}.
Here the relation
$f(n)\simeq g(n)$ means that $\lim_{n\to \infty}\frac{f(n)}{g(n)}=1$.
Since $c_{n+1}(\widetilde L) \ge d_\nu$ we get the inequality
\begin{equation}\label{e12}
c_{n+1}(\widetilde L) \ge \alpha(n+1)^\beta(k+l)^{n+1}
\end{equation}
for all $n=m(k+l)t$, $t=1,2,\ldots$, for some constants $\alpha>0$ and $\beta$.
Since the Lie algebra $L$ is simple, the Grassmann envelope $\widetilde L$ is a
centerless
Lie superalgebra. It is not difficult to see that $c_{r+1}(\widetilde L) \ge
c_r(\widetilde L)$ in this case for all $r\ge 1$. Hence by (\ref{e12}) we have
$$
c_{n+j}(\widetilde L) \ge \alpha (n+1)^\beta (k+l)^{n+1}
$$
for any $1\le j \le m(k+l)$. Since $n=m(k+l)t$, one can find constants
$\gamma >0$ and $\delta$ such that
$$
c_{r}(\widetilde L) \ge \gamma r^\delta (k+l)^{r}
$$
for all positive integers $r$, and the proof is complete.
$\Box$
Theorem \ref{t1}
now
easily follows from Propositions \ref{p1} and \ref{p2}.
{\em Proof of Theorem \ref{t2}}. First we obtain an upper bound for
$c_n^{gr}(\widetilde L)$:
$$
c_n^{gr}(\widetilde L)=\sum_{q=0}^n {n\choose q} c_{q,n-q}(\widetilde L),
$$
where
\begin{equation}\label{e13}
c_{q,n-q}(\widetilde L)=\sum_{\lambda \vdash q\atop \mu\vdash n-q}
m_{\lambda,\mu} d_{\lambda,\mu}
\end{equation}
and $d_{\lambda,\mu}=\deg \chi_{\lambda,\mu}=\deg \chi_{\lambda}\cdot \deg
\chi_{\mu}=d_\lambda d_\mu$. Moreover, $\lambda\in H(k,0)$, $\mu\in H(0,l)$ by
Lemma \ref{l4}. Applying Lemma 6.2.5 from \cite{GZbook}, we obtain
$$
\sum_{\lambda \in H(k,0)\atop \lambda\vdash q} d_\lambda \le
Cn^r k^q,\quad
\sum_{\mu \in H(0,l)\atop \mu\vdash n-q}d_\mu \le Cn^r l^{n-q}
$$
for some constants $C,r$ and hence
\begin{equation}\label{e14}
\sum_{\lambda \in H(k,0), \lambda\vdash q \atop \mu \in H(0,l), \mu\vdash n-q}
d_\lambda d_\mu \le C^2 n^{2r}k^q l^{n-q}.
\end{equation}
On the other hand, the graded colength
$$
l_{q,n-q}(\widetilde L)=\sum_{\lambda\vdash q \atop \mu\vdash n-q} m_{\lambda,\mu}
$$
is not greater than the non-graded colength $l_n(\widetilde L)$. Since $l_n(\widetilde L)$
is polynomially bounded by Lemma \ref{l6}, one can find a polynomial $\varphi(n)$
such that
\begin{equation}\label{e15}
m_{\lambda,\mu}\le \varphi(n)
\end{equation}
for any $m_{\lambda,\mu}$ in (\ref{e13}). It now follows from (\ref{e13}), (\ref{e14}) and
(\ref{e15})
that for $\psi(n)= C^2 n^{2r}\varphi(n)$ we have
\begin{equation}\label{e16}
c_n^{gr}(\widetilde L)\le \psi(n)\sum_{q=0}^n {n\choose q} k^q l^{n-q}=
\psi(n)(k+l)^n
\end{equation}
and we have obtained an upper bound for $c_n^{gr}(\widetilde L)$.
On the other hand, in \cite[Lemma 3.1]{BGR} it is proved that for
any associative $G$-graded algebra $A$, where $G$ is a finite group,
the ordinary $n$-th codimension is less than
or equal to the
graded $n$-th
codimension, for any $n$. The proof of this lemma does not use
associativity. Hence
\begin{equation}\label{e17}
c_n^{gr}(\widetilde L)\ge c_n(\widetilde L).
\end{equation}
Theorem \ref{t2}
now
follows from (\ref{e16}), (\ref{e17}) and Proposition
\ref{p2} and we have completed the proof.
$\Box$
\end{document} |
\begin{document}
\begin{abstract}
Let $X$ be a Banach space and $Y \subseteq X$ be a closed subspace. We prove that if the quotient $X/Y$ is weakly Lindelöf determined
or weak Asplund, then for every $w^*$-convergent sequence $(y_n^*)_{n\in \mathbb N}$ in~$Y^*$ there exist
a subsequence $(y_{n_k}^*)_{k\in \mathbb N}$ and a $w^*$-convergent sequence $(x_k^*)_{k\in \mathbb N}$ in~$X^*$
such that $x_k^*|_Y=y_{n_k}^*$ for all $k\in \mathbb N$. As an application we obtain that $Y$ is Grothendieck
whenever $X$ is Grothendieck and $X/Y$ is reflexive, which answers a question raised by Gonz\'{a}lez and Kania.
\end{abstract}
\title{On weak$^*$-extensible subspaces of Banach spaces}
\section{Introduction}
Throughout this paper $X$ is a Banach space. We denote by $w^*$ the weak$^*$ topology on its (topological) dual~$X^*$.
The space $X$ is said to be {\em Grothendieck} if every $w^*$-convergent sequence in~$X^*$ is weakly convergent.
This property has been widely studied over the years; we refer the reader to the recent survey \cite{GonzalezKania}
for complete information on it. By a ``subspace'' of a Banach space we mean a closed linear subspace.
If $Y \subseteq X$ is a subspace, then: (i)~the quotient $X/Y$ is Grothendieck whenever~$X$ is Grothendieck, and (ii)~$X$
is Grothendieck whenever $Y$ and $X/Y$ are Grothendieck (see, e.g., \cite[2.4.e]{cas-gon}). In general, the property of being Grothendieck is not
inherited by subspaces (for instance, $c_0$ is not Grothendieck while~$\ell_\infty$ is). However, this is the case for complemented subspaces or, more generally,
subspaces satisfying the following property:
\begin{defi}\label{defi:extensible}
A subspace $Y \subseteq X$ is said to be
{\em $w^*$-extensible} in~$X$ if for every $w^*$-convergent sequence $(y_n^*)_{n\in \mathbb{N}}$ in~$Y^*$ there exist
a subsequence $(y_{n_k}^*)_{k\in \mathbb{N}}$ and a $w^*$-convergent sequence $(x_k^*)_{k\in \mathbb{N}}$ in~$X^*$
such that $x_k^*|_Y=y_{n_k}^*$ for all $k\in \mathbb{N}$.
\end{defi}
Indeed, it is easy to show that a Banach space is Grothendieck if (and only if) every $w^*$-convergent sequence in its dual admits
a weakly convergent subsequence. Thus, a subspace $Y \subseteq X$ is Grothendieck whenever $X$ is Grothendieck
and $Y$ is $w^*$-extensible in~$X$. The concept of $w^*$-extensible subspace was studied in~\cite{cas-gon-pap,mor-wan,wan-alt}
(there the definition was given by replacing ``$w^*$-convergent'' by ``$w^*$-null''; both definitions are easily seen to be equivalent).
Note that every subspace is $w^*$-extensible in~$X$ whenever $B_{X^*}$ (the closed unit ball of~$X^*$, that we just call the ``dual ball'' of~$X$) is
$w^*$-sequentially compact (cf. \cite[2.4.f]{cas-gon}). However, this observation does not provide new results on the stability of the Grothendieck property
under subspaces, because the only Grothendieck spaces having $w^*$-sequentially compact dual ball are the reflexive ones.
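For the reader's convenience we recall the standard witness (an illustration added here) that $c_0$ fails the Grothendieck property: the coordinate functionals $e_n^*\in\ell_1=c_0^*$ satisfy
```latex
e_n^*(x)=x_n\to 0\quad\text{for every }x=(x_j)_{j\in\mathbb N}\in c_0,
\qquad\text{while}\qquad
\langle \mathbf 1,e_n^*\rangle=1\ \text{ for all }n,\quad
\mathbf 1=(1,1,\ldots)\in\ell_\infty=\ell_1^*,
```
so $(e_n^*)_{n\in\mathbb N}$ is $w^*$-null but not weakly null.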
In this note we focus on finding sufficient conditions for the $w^*$-extensibility of a subspace~$Y\subseteq X$
which depend only on the quotient $X/Y$. Our motivation stems from the question (raised in \cite[Problem~23]{GonzalezKania}) of whether $Y$ is Grothendieck whenever
$X$ is Grothendieck and $X/Y$ is reflexive.
It is known that the separable injectivity of~$c_0$ (Sobczyk's theorem)
implies that if $X/Y$ is separable, then $Y$ is $w^*$-extensible in~$X$ in a stronger sense, namely,
the condition of Definition~\ref{defi:extensible} holds without passing to subsequences (see, e.g., \cite[Theorem~2.3 and Proposition~2.5]{avi-alt-4}).
Our main result is the following (see below for unexplained notation):
\begin{theo}\label{theo:Main}
Let $Y \subseteq X$ be a subspace. Suppose that $X/Y$ satisfies one of the following conditions:
\begin{enumerate}
\item[(i)] Every non-empty $w^*$-closed subset of~$B_{(X/Y)^*}$ has a $G_\delta$-point
(in the relative $w^*$-topology).
\item[(ii)] ${\rm dens}(X/Y)<\mathfrak{s}$.
\end{enumerate}
Then $Y$ is $w^*$-extensible in~$X$.
\end{theo}
Given a compact Hausdorff topological space~$K$, a point $t\in K$ is called a {\em $G_\delta$-point} (in~$K$) if there is a sequence
of open subsets of~$K$ whose intersection is $\{t\}$. Corson compacta have $G_\delta$-points
(see, e.g., \cite[Theorem~14.41]{fab-ultimo}), and the same holds for any non-empty $w^*$-closed subset in the dual ball of a weak Asplund space
(see, e.g., the proof of \cite[Theorem~2.1.2]{FabianDifferentiability}). Thus, we get the following corollary covering the case
when $X/Y$ is reflexive:
\begin{cor}\label{cor:classes}
Let $Y \subseteq X$ be a subspace. If $X/Y$ is weakly Lindelöf determined or weak Asplund, then
$Y$ is $w^*$-extensible in~$X$.
\end{cor}
As an application of the above we get an affirmative answer to \cite[Problem~23]{GonzalezKania}:
\begin{cor}\label{cor:Grothendieck}
Let $Y \subseteq X$ be a subspace. If $X$ is Grothendieck and $X/Y$ is reflexive, then $Y$ is Grothendieck.
\end{cor}
As to condition~(ii) in Theorem~\ref{theo:Main}, recall that the density character of a Banach space~$Z$, denoted by~${\rm dens}(Z)$, is the
smallest cardinality of a dense subset of~$Z$. For our purposes, we just mention that
the {\em splitting number}~$\mathfrak{s}$ is the minimum of all cardinals~$\kappa$ for which there is a compact Hausdorff topological space
of weight~$\kappa$ that is not sequentially compact. In general, $\omega_1 \leq \mathfrak{s} \leq \mathfrak{c}$. So, under CH, cardinality
strictly less than~$\mathfrak{s}$ just means countable. However,
in other models there are uncountable sets of cardinality strictly less than~$\mathfrak{s}$.
We refer the reader to~\cite{dou2} for detailed information on~$\mathfrak{s}$ and other cardinal characteristics of the continuum.
Any of the conditions in Theorem~\ref{theo:Main} implies that $B_{(X/Y)^*}$ is $w^*$-sequentially compact
(see \cite[Lemma~2.1.1]{FabianDifferentiability} and note that the weight of $(B_{(X/Y)^*},w^*)$ coincides with ${\rm dens}(X/Y)$),
which is certainly not enough to guarantee that $Y$ is $w^*$-extensible in~$X$ (see Remark~\ref{rem:BourgainSchlumprecht}). The ideas that we use in this paper are similar to those used by Hagler and Sullivan~\cite{HaglerSullivan} to study sufficient conditions for the dual ball of a Banach space to be $w^*$-sequentially compact. Incidentally,
as another consequence of Theorem~\ref{theo:Main} we obtain a generalization of \cite[Theorem~1]{HaglerSullivan} (see Corollary~\ref{cor:seqcompactness}).
The proof of Theorem~\ref{theo:Main} and some further remarks are included in the next section. We follow
standard Banach space terminology as it can be found in \cite{FabianDifferentiability} and~\cite{fab-ultimo}.
\section{Proof of Theorem~\ref{theo:Main} and further remarks}\label{section:proofs}
By a ``compact space'' we mean a compact Hausdorff topological space. The {\em weight} of a compact space~$K$, denoted by~${\rm weight}(K)$,
is the smallest cardinality of a base of~$K$. The following notion was introduced in~\cite{MP18}:
\begin{defi}\label{defi:CDE}
Let $L$ be a compact space and $K \subseteq L$ be a closed set. We say that $L$ is a \textit{countable discrete extension} of~$K$ if
$L \setminus K$ consists of countably many isolated points.
\end{defi}
Countable discrete extensions turn out to be a useful tool to study twisted sums of~$c_0$ and $C(K)$-spaces, see \cite{avi-mar-ple} and~\cite{MP18}.
As it can be seen in the proof of Theorem~\ref{theo:Main}, countable discrete extensions appear in a natural way when dealing with sequential properties and twisted sums.
Lemma \ref{LemmaCDE} below isolates two properties of compact spaces
which are stable under countable discrete extensions, both of them implying sequential compactness. Nevertheless, sequential compactness itself is not stable under countable discrete extensions (see Remark~\ref{rem:BourgainSchlumprecht}).
\begin{lem}
\label{LemmaCDE}
Let $L$ be a compact space which
is a countable discrete extension of a closed set $K \subseteq L$. Then:
\begin{enumerate}
\item[(i)] if every non-empty closed subset of~$K$ has a $G_\delta$-point (in its relative topology), then the same property holds for~$L$;
\item[(ii)] ${\rm weight}(L)={\rm weight}(K)$ whenever $K$ is infinite.
\end{enumerate}
Therefore, if either every non-empty closed subset of~$K$ has a $G_\delta$-point (in its relative topology) or ${\rm weight}(K)<\mathfrak{s}$,
then $L$ is sequentially compact.
\end{lem}
\begin{proof} (i) Let $M \subseteq L$ be a non-empty closed set. If $M \subseteq K$, then $M$ has a $G_\delta$-point (in the relative topology) by hypothesis.
On the other hand, if $M \cap (L \setminus K)\neq \emptyset$, then $M$ contains a point which is isolated in~$L$
and so a $G_\delta$-point (in~$M$).
(ii) Let $R: C(L) \to C(K)$ be the bounded linear operator defined by $R(f)=f|_K$ for all $f\in C(L)$. Then $R$ is surjective
and $C(K)$ is isomorphic to~$C(L)/\ker R$.
The fact that $L$ is a countable discrete extension of~$K$ implies that $\ker R$ is finite-dimensional or isometrically isomorphic to~$c_0$.
In any case, $\ker R$ is separable and so
$$
{\rm dens}(C(K)) = {\rm dens}(C(L)).
$$
The conclusion follows from the equality
${\rm dens}(C(S))={\rm weight}(S)$, which holds for any infinite compact space~$S$
(see, e.g., \cite[Proposition 7.6.5]{Semadeni} or \cite[Exercise~14.36]{fab-ultimo}).
The last statement of the lemma follows from \cite[Lemma 2.1.1]{FabianDifferentiability} and \cite[Theorem~6.1]{dou2}, respectively.
\end{proof}
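The identification of~$\ker R$ invoked in part~(ii) can be sketched as follows (when $L\setminus K$ is infinite; the finite case gives a finite-dimensional kernel):

```latex
% ker R = { f in C(L) : f|_K = 0 }.  Restriction to the isolated points
% gives a linear isometry onto c_0(L \setminus K):
\[
\ker R\ \longrightarrow\ c_0(L\setminus K),\qquad
f\ \longmapsto\ \big(f(x)\big)_{x\in L\setminus K}.
\]
% If f|_K = 0 and \varepsilon > 0, then \{|f| \ge \varepsilon\} is a
% compact subset of L \setminus K whose points are all isolated, hence
% finite; so the restriction of f lies in c_0(L \setminus K).
% Conversely, a c_0-sequence on L \setminus K, extended by 0 on K,
% is continuous on L by the same finiteness argument.
```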
We will use below the well-known fact that ${\rm dens}(Z)={\rm weight}(B_{Z^*},w^*)$ for any Banach space~$Z$.
\begin{proof}[Proof of Theorem~\ref{theo:Main}]
Let $(y_n^*)_{n\in \mathbb{N}}$ be a $w^*$-convergent sequence in~$Y^*$.
Without loss of generality, we can assume that $(y_n^*)_{n\in \mathbb{N}}$ is $w^*$-null and contained in~$B_{Y^*}$. Clearly, there is nothing
to prove if $y_n^*=0$ for infinitely many $n\in \mathbb{N}$.
So, we can assume further that $y_n^*\neq 0$ for all $n\in \mathbb{N}$.
By the Hahn-Banach theorem, for each $n\in \mathbb{N}$ there is $z_n^* \in X^*$ with $z_n^*|_Y=y_n^*$ and $\|z_n^*\|=\|y_n^*\| \leq 1$.
Let $q:X\rightarrow X/Y$ be the quotient operator. It is well-known that its adjoint $q^*:(X/Y)^* \rightarrow X^*$ is an isometric isomorphism
from $(X/Y)^*$ onto~$Y^\perp$. In addition, $q^*$ is $w^*$-to-$w^*$-continuous, hence $K:=B_{X^*}\cap Y^\perp=q^*(B_{(X/Y)^*})$ is $w^*$-compact.
Observe that for every $x^* \in X^* \setminus Y^\perp$ there exist $x\in Y$, $\alpha>0$ and $m\in \mathbb{N}$
such that $x^*(x) > \alpha > z_n^*(x)$ for every $n \geq m$. Hence $L:=K \cup \{z_n^*: n \in \mathbb{N}\} \subseteq B_{X^*}$
is $w^*$-closed (so that $L$ is $w^*$-compact) and each $z_n^*$ is $w^*$-isolated in~$L$ (bear in mind that $z_n^*|_Y=y_n^*\neq 0$).
Then $(L,w^*)$ is a countable discrete extension of $(K,w^*)$, with
$(K,w^*)$ and $(B_{(X/Y)^*},w^*)$ being homeomorphic.
Bearing in mind that ${\rm dens}(X/Y)$ coincides with the weight of $(B_{(X/Y)^*},w^*)$, from Lemma~\ref{LemmaCDE}
it follows that $L$ is sequentially compact and, therefore, $(z_n^*)_{n\in \mathbb{N}}$ admits a $w^*$-convergent subsequence. The proof is finished.
\end{proof}
If $X$ is Grothendieck and $Y \subseteq X$ is a subspace such that $X/Y$ is separable, then $Y$ is Grothendieck
(see \cite[Proposition~3.1]{gon-alt}). This fact can also be seen as a consequence of Corollary~\ref{cor:Grothendieck}, because
every separable Grothendieck space is reflexive.
As an immediate application of Theorem~\ref{theo:Main} we also obtain an affirmative answer to
\cite[Problem~22]{GonzalezKania} (bear in mind that $\mathfrak{p}\leq \mathfrak{s}$, see, e.g., \cite[Theorem~3.1]{dou2}):
\begin{cor}\label{cor:Grothendieck-s}
Let $Y \subseteq X$ be a subspace such that ${\rm dens}(X/Y)<\mathfrak{s}$. If $X$ is Grothendieck, then $Y$ is Grothendieck.
\end{cor}
\begin{rem}\label{rem:alternate}
In fact, the previous corollary is a particular case of Corollary~\ref{cor:Grothendieck}. Indeed, on the one hand,
the assumption that ${\rm dens}(X/Y)<\mathfrak{s}$ implies that $B_{(X/Y)^*}$ is $w^*$-sequentially compact.
On the other hand, the Grothendieck property is preserved by quotients
and any Grothendieck space with $w^*$-sequentially compact dual ball is reflexive (cf. \cite[Proposition~6.18]{avi-alt-4}).
\end{rem}
\begin{rem}\label{rem:BourgainSchlumprecht}
In general, the $w^*$-sequential compactness
of~$B_{(X/Y)^*}$ is not enough to guarantee that a subspace $Y \subseteq X$ is $w^*$-extensible in~$X$. Indeed,
it is easy to check that {\em if both $B_{Y^*}$ and $B_{(X/Y)^*}$ are $w^*$-sequentially compact and $Y$
is $w^*$-extensible in~$X$, then $B_{X^*}$ is $w^*$-sequentially compact as well}
(see the proof of \cite[Proposition~6]{cas-gon-pap}).
On the other hand, there exists a Banach space $X$ such that $B_{X^*}$ is not $w^*$-sequentially compact
although $B_{(X/Y)^*}$ is $w^*$-sequentially compact for some separable subspace $Y \subseteq X$ (see \cite{HaglerSullivan}, cf. \cite[Section~4.8]{cas-gon}).
\end{rem}
Hagler and Sullivan proved in \cite[Theorem~1]{HaglerSullivan} that $B_{X^*}$ is $w^*$-sequentially compact whenever there is
a subspace $Y \subseteq X$ such that $B_{Y^*}$ is $w^*$-sequentially compact and $X/Y$ has an equivalent G\^{a}teaux smooth norm.
Since every Banach space admitting an equivalent G\^{a}teaux smooth norm is weak Asplund (see, e.g., \cite[Corollary~4.2.5]{FabianDifferentiability}),
the following corollary generalizes that result:
\begin{cor}
\label{cor:seqcompactness}
Let $Y \subseteq X$ be a subspace such that $B_{Y^*}$ is $w^*$-sequentially compact.
If $X/Y$ satisfies any of the conditions in Theorem~\ref{theo:Main},
then $B_{X^*}$ is $w^*$-sequentially compact.
\end{cor}
\end{document} |
\begin{document}
\title{Bare Demo of IEEEtran.cls for Journals}
\author{Michael~Shell,~\IEEEmembership{Member,~IEEE,}
John~Doe,~\IEEEmembership{Fellow,~OSA,}
and~Jane~Doe,~\IEEEmembership{Life~Fellow,~IEEE}
\thanks{M. Shell is with the Department
of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta,
GA, 30332 USA e-mail: (see http://www.michaelshell.org/contact.html).}
\thanks{J. Doe and J. Doe are with Anonymous University.}
\thanks{Manuscript received April 19, 2005; revised January 11, 2007.}}
\markboth{Journal of \LaTeX\ Class Files,~Vol.~6, No.~1, January~2007}
{Shell \MakeLowercase{\textit{et al.}}: Bare Demo of IEEEtran.cls for Journals}
\maketitle
\begin{abstract}
The abstract goes here.
\end{abstract}
Note to Practitioners:
\begin{abstract}
"Note to Practitioners" goes here. For format and style, please see:
http://www.ieee-ras.org/tase/ntp
\end{abstract}
Primary and Secondary Keywords
\begin{IEEEkeywords}
Primary Topics: Primary Topic Number 1, Primary Topic Number 2, ...,
Secondary Topic Keywords: ...
See list
\end{IEEEkeywords}
\IEEEpeerreviewmaketitle
\section{Introduction}
\IEEEPARstart{T}{his} demo file is intended to serve as a ``starter file''
for IEEE journal papers produced under \LaTeX\ using
IEEEtran.cls version 1.7 and later.
I wish you the best of success.
mds
January 11, 2007
\subsection{Subsection Heading Here}
Subsection text here.
\subsubsection{Subsubsection Heading Here}
Subsubsection text here.
\section{Conclusion}
The conclusion goes here.
\appendices
\section{Proof of the First Zonklar Equation}
Appendix one text goes here.
\section{}
Appendix two text goes here.
\section*{Acknowledgment}
The authors would like to thank...
\ifCLASSOPTIONcaptionsoff
\fi
\begin{IEEEbiography}{Michael Shell}
Biography text here.
\end{IEEEbiography}
\begin{IEEEbiographynophoto}{John Doe}
Biography text here.
\end{IEEEbiographynophoto}
\begin{IEEEbiographynophoto}{Jane Doe}
Biography text here.
\end{IEEEbiographynophoto}
\end{document} |
\begin{document}
\title{ Some Operators Associated to Rarita-Schwinger Type Operators}
\author{Junxia Li and John Ryan\\
\emph{\small Department of Mathematics, University of Arkansas, Fayetteville, AR 72701, USA}}
\date{}
\maketitle
\begin{abstract}
In this paper we study some operators associated to the Rarita-Schwinger operators. They arise from the difference between the Dirac operator and the Rarita-Schwinger operators. These operators are called remaining operators. They are based on the Dirac operator and the projection operators $I-P_k.$ Their fundamental solutions are constructed from harmonic polynomials homogeneous of degree $k$. First we study the remaining operators and their representation theory in Euclidean space. Second, we extend the remaining operators from Euclidean space to the sphere under the Cayley transformation.
\end{abstract}
{\bf Keywords:}\quad Clifford algebra, Rarita-Schwinger operators, remaining operators, Cayley transformation,
Almansi-Fischer decomposition.
\\
\\
\par \emph{This paper is dedicated to Michael Shapiro on the occasion of his 65th birthday.}
\section{Introduction}
Rarita-Schwinger operators in Clifford analysis arise in representation theory for the covering groups of $SO(n)$ and $O(n)$. They are generalizations of the Dirac operator. We denote a Rarita-Schwinger operator by $R_k$, where $k=0, 1, 2, \ldots$; when $k=0$ it is the Dirac operator. The Rarita-Schwinger operators $R_k$ in Euclidean space have been studied in \cite{BSSV, BSSV1, DLRV, Va1, Va2}. Rarita-Schwinger operators on the sphere, denoted by $R_k^S$, have also been studied in \cite{LRV}.
In this paper we study the remaining operators, $Q_k$, which are related to the Rarita-Schwinger operators. In fact, the remaining operators are the difference between the Dirac operator and the Rarita-Schwinger operators.
Let $\mathcal{H}_{k}$ be the space of harmonic polynomials homogeneous of degree $k$ and $\mathcal{M}_{k}$, $\mathcal{M}_{k-1}$ be the spaces of $Cl_n$-valued monogenic polynomials, homogeneous of degree $k$ and $k-1$ respectively. Instead of considering $P_k: \mathcal{H}_k\rightarrow \mathcal{M}_{k}$ as in \cite{DLRV}, we look at the projection map $I-P_k: \mathcal{H}_k\rightarrow u\mathcal{M}_{k-1}$ and the Dirac operator to define the $Q_k$ operators and construct their fundamental solutions in $\mathbb{R}^n$. We introduce basic results for these operators, including Stokes' Theorem, the Borel-Pompeiu Theorem, Cauchy's Integral Formula and a Cauchy Transform. In Section 5, by considering the Cayley transformation and its inverse, we extend the results for the remaining operators in $\mathbb{R}^n$ to the sphere $\mathbb{S}^n.$ We construct the fundamental solutions to the remaining operators on the sphere by applying the Cayley transformation to the fundamental solutions in $\mathbb{R}^n.$ We also obtain the intertwining operators for the $Q_k$ and $Q_k^S$ operators. In turn, we establish the conformal invariance of the remaining equations under the Cayley transformation and its inverse. We conclude by giving some basic integral formulas with detailed proofs, and by pointing out that the results obtained for Rarita-Schwinger operators in \cite{LRV} for real projective space readily carry over to the context presented here.
\section{Preliminaries}
\par A Clifford algebra, $Cl_{n},$ can be generated from $\mathbb{R}^n$ by considering the
relationship $${x}^{2}=-\|{x}\|^{2}$$ for each
${x}\in \mathbb{R}^n$. We have $\mathbb{R}^n\subseteq Cl_{n}$. If $e_1,\ldots, e_n$ is an orthonormal basis for $\mathbb{R}^n$, then ${x}^{2}=-\|{x}\|^{2}$ tells us that $e_i e_j + e_j e_i= -2\delta_{ij},$ where $\delta_{ij}$ is the Kronecker delta function. Let $A=\{j_1, \cdots, j_r\}\subset \{1, 2, \cdots, n\}$ and $1\leq j_1< j_2 < \cdots < j_r \leq n$. An arbitrary element of the basis of the Clifford algebra can be written as $e_A=e_{j_1}\cdots e_{j_r}.$ Hence for any element $a\in Cl_{n}$, we have $a=\sum_Aa_Ae_A,$ where $a_A\in \mathbb{R}.$
We define the Clifford conjugation as the following:
$$\bar{a}=\sum_A(-1)^{|A|(|A|+1)/2}a_Ae_A$$
satisfying $\overline{e_{j_1}\cdots e_{j_r}}=(-1)^r e_{j_r}\cdots e_{j_1}$ and $\overline{ab}= \bar{b}\bar{a}$ for $a, b \in Cl_n.$
For each $a=a_0+\cdots +a_{1\cdots n}e_1\cdots e_n\in Cl_n$ the scalar part of $\bar{a}a$ gives the square of the norm of $a,$ namely $a_0^2+\cdots +a_{1\cdots n}^2$\,.
The reversion is given by
$$\tilde{a}=\sum_A(-1)^{|A|(|A|-1)/2}a_Ae_A,$$ where $|A|$ is the cardinality of $A$. In particular, $\widetilde{e_{j_1}\cdots e_{j_r}}=e_{j_r}\cdots e_{j_1}.$ Also $\widetilde{ab}=\tilde{b}\tilde{a}$ for $a, b \in Cl_{n}.$
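As a quick check of the sign conventions (a routine computation in $Cl_3$), the two involutions agree in degree $2$ but differ in degrees $1$ and $3$:

```latex
\[
\bar{e}_1=-e_1,\qquad \tilde{e}_1=e_1,\qquad
\overline{e_1e_2}=\widetilde{e_1e_2}=e_2e_1=-e_1e_2,
\]
\[
\overline{e_1e_2e_3}=(-1)^3e_3e_2e_1=e_1e_2e_3,\qquad
\widetilde{e_1e_2e_3}=e_3e_2e_1=-e_1e_2e_3,
\]
% consistent with the sign factors (-1)^{|A|(|A|+1)/2} and
% (-1)^{|A|(|A|-1)/2} for |A| = 1, 2, 3.
```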
The Pin and Spin groups play an important role in Clifford analysis. The Pin group can be defined as
$$Pin(n): =\{a\in Cl_n : a=y_1 \ldots y_p:
{y_1,\ldots , y_p}\in \mathbb{S}^{n-1}, p\in \mathbb{N}\}$$
and is clearly a group under multiplication in $Cl_n$.
Now suppose that $y\in \mathbb{S}^{n-1}\subseteq \mathbb{R}^n$. Look at $yxy=yx^{\parallel _y}y+yx^{\perp_y}y=-x^{\parallel _y}+x^{\perp_y}$ where $x^{\parallel _y}$ is the projection of $x$ onto $y$
and $x^{\perp_y}$ is perpendicular to $y$. So $yxy$ gives a reflection of $x$ in the $y$ direction. By the Cartan$-$Dieudonn\'{e} Theorem each $O \in O(n)$ is the composition of a finite number of reflections. If $a=y_1\ldots y_p\in Pin(n)$, then $\tilde{a}:=y_p\ldots y_1$ and $ax\tilde{a}=O_a(x)$ for some $O_a\in O(n).$ Choosing $y_1, \ldots, y_p$ arbitrarily in $\mathbb{S}^{n-1}$, we see that the group homomorphism $$\theta: Pin(n)\longrightarrow O(n): a\longmapsto O_a$$ with $a=y_1\ldots y_p$
and $O_a(x)=ax\tilde{a}$ is surjective. Further $-ax(-\tilde{a})=ax\tilde{a}$, so $1, -1\in ker(\theta)$. In fact $ker(\theta)=\{\pm 1\}.$
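For instance (a routine verification), taking $y=e_1$ and $x=x_1e_1+\cdots+x_ne_n$ makes the reflection explicit:

```latex
\[
e_1xe_1=x_1\,e_1e_1e_1+\sum_{i=2}^{n}x_i\,e_1e_ie_1
=-x_1e_1+\sum_{i=2}^{n}x_ie_i,
\]
% using e_1^2 = -1 and e_1e_i = -e_ie_1 for i \neq 1: the component
% along e_1 is negated while the orthogonal part is fixed, i.e.
% yxy = -x^{\parallel_y} + x^{\perp_y} for y = e_1, as stated above.
```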
The Spin group is defined as
$$
Spin(n):=\{a\in Pin(n): a=y_1\ldots y_ p \mbox{ and } p \mbox{ even}\}
$$
and is a subgroup of $Pin(n)$. There is a group homomorphism
$$\theta: Spin(n)\longrightarrow SO(n)$$ which is surjective with kernel $\{1, -1\}$. See \cite{P} for details.
The Dirac Operator in $\mathbb{R}^n$ is defined to be $$D :=\sum_{j=1}^{n} e_j \frac{\partial}{\partial x_j}.$$ Note $D^2=-\Delta_{n},$ where $\Delta_n$ is the Laplacian in $\mathbb{R}^n$.
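The identity $D^2=-\Delta_n$ follows directly from the defining relations and the symmetry of mixed partial derivatives:

```latex
\[
D^2=\sum_{i,j=1}^{n} e_ie_j\,\frac{\partial^2}{\partial x_i\partial x_j}
=\sum_{i=1}^{n} e_i^2\,\frac{\partial^2}{\partial x_i^2}
+\sum_{i<j}(e_ie_j+e_je_i)\,\frac{\partial^2}{\partial x_i\partial x_j}
=-\sum_{i=1}^{n}\frac{\partial^2}{\partial x_i^2}=-\Delta_n,
\]
% since e_i^2 = -1 and e_ie_j + e_je_i = 0 for i \neq j.
```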
Let $\mathcal{H}_{k}$ be the space of harmonic polynomials homogeneous of degree $k.$ Let $\mathcal{M}_k$ denote the space of $Cl_n$-valued polynomials, homogeneous of degree $k$ and such that if $p_k\in \mathcal{M}_k$ then $Dp_k=0.$ Such a polynomial is called a left monogenic polynomial homogeneous of degree $k$. Note if $h_k\in \mathcal{H}_k$, the space of $Cl_n$-valued harmonic polynomials homogeneous of degree $k$, then $Dh_k\in \mathcal{M}_{k-1}$. But $D(up_{k-1}(u))=(-n-2k+2)p_{k-1}(u),$
so $$\mathcal{H}_k=\mathcal{M}_k\oplus u\mathcal{M}_{k-1}, \qquad h_k=p_k+up_{k-1}.$$
This is the so-called Almansi-Fischer decomposition of $\mathcal{H}_k$, where $\mathcal{M}_{k-1}$ is the space of $Cl_n$-valued left monogenic polynomials, homogeneous of degree $k-1$. See \cite{BDS, R}.
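The identity $D(up_{k-1}(u))=(-n-2k+2)p_{k-1}(u)$ quoted above can be verified directly; here $D$ acts in the variable $u$ and $E_u=\sum_j u_j\partial_{u_j}$ denotes the Euler operator, so that $E_u p=(k-1)p$ for $p\in\mathcal{M}_{k-1}$:

```latex
% Using e_j u = -u e_j - 2u_j (from the anticommutation relations):
\[
D(u\,p)=\sum_j e_j\frac{\partial}{\partial u_j}(u\,p)
=\sum_j e_j^2\,p+\sum_j e_ju\,\frac{\partial p}{\partial u_j}
=-n\,p-u\,Dp-2E_u p.
\]
% For p = p_{k-1} monogenic (Dp = 0) and homogeneous of degree k-1:
\[
D(u\,p_{k-1})=-n\,p_{k-1}-2(k-1)\,p_{k-1}=(-n-2k+2)\,p_{k-1}.
\]
```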
\par Note that if $Dg(u)=0$ then $\bar{g}(u)\bar{D}=-\bar{g}(u)D=0$. So we can talk of right monogenic polynomials, homogeneous of degree $k$, and by conjugation we obtain a right Almansi-Fischer decomposition, $$\mathcal{H}_k=\overline{\mathcal{M}}_k\oplus \overline{\mathcal{M}}_{k-1}u,$$ where $\overline{\mathcal{M}}_k$ stands for the space of right monogenic polynomials homogeneous of degree $k$.
Let $P_k$ be the left projection map
$$P_k: \mathcal{H}_k\rightarrow \mathcal{M}_k,$$ then the left Rarita-Schwinger operator $R_k$ is defined by (see \cite{BSSV,BSSV1,DLRV,Va1,Va2})
$$R_kg(x,u)=P_kD_xg(x,u),$$
where $D_x$ is the Dirac operator with respect to $x$ and $g(x,u): U\times \mathbb{R}^n\to Cl_n$ is a monogenic polynomial homogeneous of degree $k$ in $u,$ and $U$ is a domain in $\mathbb{R}^n$.
The left Rarita-Schwinger equation is defined to be
$$
R_k g(x,u)=0.
$$
We also have a right projection $P_{k,r}: \mathcal{H}_k \rightarrow \overline{\mathcal{M}_k},$ and a right Rarita-Schwinger equation $g(x,u)D_xP_{k,r}=g(x,u)R_k=0.$
A M\"{o}bius transformation is a finite composition of orthogonal transformations, inversions, dilations, and translations.
Ahlfors \cite{A} and Vahlen \cite{V} show that a M\"{o}bius transformation $y=\phi(x)$ on $\mathbb{R}^n\cup\{\infty\}$ can be expressed as $y=(ax+b)(cx+d)^{-1}$ where $a, b, c, d\in Cl_n$ satisfy the following conditions:
\begin{enumerate}
\item $a, b, c, d$ are all products of vectors in $\mathbb{R}^n.$
\item $a\tilde b, c\tilde d, \tilde bc, \tilde da \in\mathbb{R}^n.$
\item $a\tilde d-b\tilde c=\pm1.$
\end{enumerate}
When $c=0$, $\phi(x)=(ax+b)(cx+d)^{-1}=axd^{-1}+bd^{-1}=\pm ax\tilde a+bd^{-1}.$ Now assume $c\neq 0$; then $\phi(x)=(ax+b)(cx+d)^{-1}=ac^{-1}\pm(cx\tilde{c}+d\tilde{c})^{-1}.$ These are the so-called Iwasawa decompositions. Using this notation and the conformal weights, $f(\phi(x))$ is changed to $J(\phi,x)f(\phi(x)),$ where $J(\phi,x)=\displaystyle\frac{\widetilde{cx+d}}{\|cx+d\|^n}$. Note that when $\phi(x)=x+a$, $J(\phi,x)\equiv 1.$
\section{The $Q_k$ operators and their kernels}
As $$I-P_k: \mathcal{H}_k\rightarrow u\mathcal{M}_{k-1},$$ where $I$ is the identity map, then we can define the left remaining operators
$$Q_k:=(I-P_k)D_x:u\mathcal{M}_{k-1}\to u\mathcal{M}_{k-1},\qquad uf(x,u)\mapsto (I-P_k)D_xuf(x,u).$$ See \cite{BSSV}.
The left remaining equation is defined to be $(I-P_k)D_xuf(x,u)=0$ or $Q_kuf(x,u)=0,$ for each $x$ and $(x,u)\in U\times \mathbb{R}^n$, where $U$ is a domain in $\mathbb{R}^n$ and $f(x,u)\in \mathcal{M}_{k-1}.$
We also have a right remaining operator $$Q_{k,r}:=D_x(I-P_{k,r}):\overline{\mathcal{M}}_{k-1}u\to \overline{\mathcal{M}}_{k-1}u,\qquad g(x,u)u\mapsto g(x,u)uD_x(I-P_{k,r}),$$ where $g(x,u)\in \overline{\mathcal{M}}_{k-1}$.
Consequently, the right remaining equation is $g(x,u)uD_x(I-P_{k,r})=0$ or $g(x,u)uQ_{k,r}=0.$
Now let us establish the conformal invariance of the remaining equation $Q_kuf(x,u)=0$.
It is easy to see that $I-P_k$ is conformally invariant under M\"{o}bius transformations, since the projection operator $P_k$ is conformally invariant (see \cite{BSSV, DLRV}). By considering orthogonal transformations, inversions, dilations and translations, and applying the same arguments as those used in \cite{DLRV} to establish the intertwining operators for the Rarita-Schwinger operators, we obtain the intertwining operators for the $Q_k$ operators:
\begin{theorem}
$$J_{-1}(\phi,x)Q_{k,u}uf(y,u)=Q_{k,w}wJ(\phi,x)f(\phi(x),\displaystyle\frac{\widetilde{(cx+d)}w(cx+d)}{\|cx+d\|^2}),$$
where $Q_{k,u}$ and $Q_{k,w}$ are the remaining operators with respect to $u$ and $w$ respectively, $y=\phi(x)$ is the M\"{o}bius transformation, $J(\phi,x)=\displaystyle\frac{\widetilde{cx+d}}{\|cx+d\|^n},J_{-1}(\phi,x)=\displaystyle\frac{cx+d}{\|cx+d\|^{n+2}},$ and $u=\displaystyle\frac{\widetilde{(cx+d)}w(cx+d)}{\|cx+d\|^2}$ for some $w\in \mathbb{R}^n.$
\end{theorem}
Consequently, we have \\
$Q_{k,u}uf(x,u)=0$ implies $Q_{k,w}wJ(\phi,x)f(\phi(x),\displaystyle\frac{\widetilde{(cx+d)}w(cx+d)}{\|cx+d\|^2})=0$. This tells us that the remaining equation $Q_kuf(x,u)=0$ is conformally invariant under M\"{o}bius transformations.
The reproducing kernel of $\mathcal{M}_k$ with respect to integration over $\mathbb{S}^{n-1}$ is given by (see \cite{BDS,DLRV}) $$Z_k(u,v):=\displaystyle\sum_\sigma P_\sigma(u)V_\sigma(v)v,$$ where
$$
P_\sigma(u)=\displaystyle\frac{1}{k!}\displaystyle\Sigma(u_{i_1}-u_1e_1^{-1}e_{i_1})\ldots(u_{i_k}-u_1e_1^{-1}e_{i_k}),
V_\sigma(v)=\displaystyle\frac{\partial^kG(v)}{\partial v_{2}^{j_2}\ldots \partial v_{n}^{j_{n}}}\,
$$
$j_2+\ldots +j_{n}=k,~~\mbox{and}~~ i_k\in\{2,\cdots,n\}.$ Here the summation is taken over all permutations of the monomials without repetition. This function is left monogenic in $u$, right monogenic in $v$, and homogeneous of degree $k$ in each variable. See \cite{BDS} and elsewhere.
Let us consider the polynomial $uZ_{k-1}(u,v)v$, which is harmonic and homogeneous of degree $k$ in both $u$ and $v$. Since $uZ_{k-1}(u,v)v$ does not depend on $x$, $Q_kuZ_{k-1}(u,v)v=0$.
Now applying inversion from the left, we obtain $$H_k(x,u,v):=\displaystyle\frac{-1}{\omega_n c_k}u\displaystyle\frac{x}{\|x\|^n}Z_{k-1}(\displaystyle\frac{xux}{\|x\|^2},v)v$$ is a non-trivial solution to $Q_kuf(x,u)=0,$ where $c_k=\displaystyle\frac{n-2}{n-2+2k}.$
Similarly, applying inversion from the right, we obtain $$\displaystyle\frac{-1}{\omega_n c_k}uZ_{k-1}(u,\displaystyle\frac{xvx}{\|x\|^2})\displaystyle\frac{x}{\|x\|^n}v$$ is a non-trivial solution to $f(x,v)vQ_{k,r}=0.$ Using similar arguments to those in \cite{DLRV}, we can show that the two representations of the solution are equal. The details are given in the following:
$$\begin{array}{ll}
\displaystyle\frac{-1}{\omega_n c_k}uZ_{k-1}(u,\displaystyle\frac{xvx}{\|x\|^2})\displaystyle\frac{x}{\|x\|^n}v\\
\\
=\displaystyle\frac{-1}{\omega_n c_k}u\displaystyle\frac{-x}{\|x\|}Z_{k-1}(\displaystyle\frac{xux}{\|x\|^2},v)\displaystyle\frac{x}{\|x\|}\displaystyle\frac{x}{\|x\|^n}v
=\displaystyle\frac{-1}{\omega_n c_k}u\displaystyle\frac{x}{\|x\|^n}Z_{k-1}(\displaystyle\frac{xux}{\|x\|^2},v)v.
\end{array}$$
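The last equality above uses only $x^2=-\|x\|^2$ and the fact that scalars commute with Clifford-valued factors:

```latex
\[
\frac{x}{\|x\|}\,\frac{x}{\|x\|^{n}}
=\frac{x^{2}}{\|x\|^{n+1}}
=\frac{-1}{\|x\|^{n-1}},
\qquad\text{so}\qquad
\frac{-x}{\|x\|}\,Z_{k-1}\Big(\frac{xux}{\|x\|^{2}},v\Big)\,\frac{-1}{\|x\|^{n-1}}
=\frac{x}{\|x\|^{n}}\,Z_{k-1}\Big(\frac{xux}{\|x\|^{2}},v\Big).
\]
% The scalar -\|x\|^{1-n} passes through the kernel Z_{k-1}, which
% absorbs the two sign changes.
```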
In fact $H_k(x,u,v)$ is the fundamental solution to the $Q_k$ operator.
\section{Some basic integral formulas related to $Q_k$ operators}
In this section, we will establish some basic integral formulas associated with $Q_k$ operators.
\begin{definition}\cite{DLRV} \quad For any $Cl_n-$valued polynomials $P(u), Q(u)$, the inner product $(P(u), Q(u))_u$ with respect to $u$ is given by $$(P(u), Q(u))_u=\displaystyle\int_{\mathbb{S}^{n-1}}P(u)Q(u)ds(u).$$
\end{definition}
For any $p_k \in \mathcal{M}_k,$ one obtains (see \cite{BDS})
$$
p_k(u)=(Z_k(u,v), p_k(v))_v=\int_{\mathbb{S}^{n-1}}Z_k(u,v)p_k(v)ds(v).
$$
\par Now if we combine Stokes' Theorems for the Dirac operator and the Rarita-Schwinger operators, we obtain two versions of Stokes' Theorem for the $Q_k$ operators.
\begin{theorem}(Stokes' Theorem for $Q_k$ operators) Let $\Omega'$ and $\Omega$ be domains in $\mathbb{R}^n$ and suppose the closure of $\Omega$ lies in $\Omega'$. Further suppose the closure of $\Omega$ is compact and the boundary of $\Omega,$ $\partial\Omega$, is piecewise smooth. Then for $f, g \in C^1(\Omega', \mathcal{M}_k)$, we have version 1
$$\begin{array}{ll}
\displaystyle\int_\Omega[(g(x,u)Q_{k,r}, f(x,u))_u+(g(x,u), Q_kf(x,u))_u]dx^n\\
\\
=\displaystyle\int_{\partial\Omega}\left(g(x,u), (I-P_k)d\sigma_xf(x,u)\right)_u\\
\\
=\displaystyle\int_{\partial\Omega}\left(g(x,u)d\sigma_x(I-P_{k,r}), f(x,u)\right)_u.
\end{array}$$
Then for $f, g \in C^1(\Omega', \mathcal{M}_{k-1})$, we have version 2
$$\begin{array}{ll}
\displaystyle\int_\Omega[(g(x,u)uQ_{k,r}, uf(x,u))_u+(g(x,u)u, Q_kuf(x,u))_u]dx^n\\
\\
=\displaystyle\int_{\partial\Omega}\left(g(x,u)u, (I-P_k)d\sigma_xuf(x,u)\right)_u\\
\\
=\displaystyle\int_{\partial\Omega}\left(g(x,u)ud\sigma_x(I-P_{k,r}), uf(x,u)\right)_u.
\end{array}$$
\end{theorem}
\par {\bf Proof:}\quad It is easy to obtain version 1 of Stokes' Theorem for the $Q_k$ operators by combining Stokes' Theorems for the Dirac operator and the Rarita-Schwinger operators.
Now we shall prove version 2 of Stokes' Theorem.
First of all, we want to prove that $$\displaystyle\int_{\partial\Omega}\left(g(x,u)u, (I-P_k)d\sigma_xuf(x,u)\right)_u
=\displaystyle\int_{\partial\Omega}\left(g(x,u)ud\sigma_x(I-P_{k,r}), uf(x,u)\right)_u.$$
Here $d\sigma_x=n(x)d\sigma(x)$.
By the Almansi-Fischer decomposition, we have $$g(x,u)un(x)uf(x,u)=g(x,u)u[f_1(x,u)+uf_2(x,u)]=[g_1(x,u)+g_2(x,u)u]uf(x,u),$$
so $$\begin{array}{ll}g(x,u)ud\sigma_xuf(x,u)\\
\\
=g(x,u)u[f_1(x,u)+uf_2(x,u)]d\sigma(x)=[g_1(x,u)+g_2(x,u)u]uf(x,u)d\sigma(x),\end{array}$$where $f_1(x,u), f_2(x,u), g_1(x,u), g_2(x,u)$ are left or right monogenic polynomials in $u.$ Now integrating the above formula over the unit sphere in $\mathbb{R}^n$, one gets
$$ \begin{array}{ll}\displaystyle\int_{\mathbb{S}^{n-1}}g(x,u)ud\sigma_xuf(x,u)ds(u)\\
\\
=\displaystyle\int_{\mathbb{S}^{n-1}}g(x,u)uuf_2(x,u)d\sigma(x)ds(u)= \displaystyle\int_{\mathbb{S}^{n-1}}g_2(x,u)uuf(x,u)d\sigma(x)ds(u).\end{array}$$ This follows from the fact that $$\displaystyle\int_{\mathbb{S}^{n-1}}g(x,u)uf_1(x,u)ds(u)=\displaystyle\int_{\mathbb{S}^{n-1}}g_1(x,u)uf(x,u)ds(u)=0.$$ See \cite{BDS}.
Thus \begin{eqnarray*}\displaystyle\int_{\partial\Omega}\left(g(x,u)u, (I-P_k)d\sigma_xuf(x,u)\right)_u&=&\displaystyle\int_{\partial\Omega} \int_{\mathbb{S}^{n-1}}g(x,u)u((I-P_k)d\sigma_xuf(x,u))ds(u)\\
&=&\displaystyle\int_{\partial\Omega} \int_{\mathbb{S}^{n-1}}g(x,u)u uf_2(x,u)ds(u)d\sigma(x)\\
&= &\displaystyle\int_{\partial\Omega}\int_{\mathbb{S}^{n-1}} g_2(x,u)u u f(x,u)ds(u)d\sigma(x)\\
&=&\displaystyle\int_{\partial\Omega} \int_{\mathbb{S}^{n-1}}(g(x,u)d\sigma_x(I-P_{k,r}))uf(x,u)ds(u)\\
&=&\displaystyle\int_{\partial\Omega}\left(g(x,u)ud\sigma_x(I-P_{k,r}), uf(x,u)\right)_u.
\end{eqnarray*}
Secondly, we need to show $$\begin{array}{ll}\displaystyle\int_\Omega[(g(x,u)uQ_{k,r}, uf(x,u))_u+(g(x,u)u, Q_kuf(x,u))_u]dx^n\\
\\
=\displaystyle\int_{\partial\Omega}\left(g(x,u)u, (I-P_k)d\sigma_xuf(x,u)\right)_u.\end{array}$$
Consider the integral
\begin{eqnarray}\label{1}\displaystyle\int_\Omega[(g(x,u)uD_xP_{k,r},uf(x,u))_u+(g(x,u)u,P_kD_xuf(x,u))_u]dx^n\nonumber\\
\nonumber\\
=\displaystyle\int_\Omega\int_{\mathbb{S}^{n-1}}[(g(x,u)uD_xP_{k,r})uf(x,u)+g(x,u)u(P_kD_xuf(x,u))]ds(u)dx^n. \end{eqnarray}
Since $g(x,u)uD_xP_{k,r}, f(x,u), g(x,u)$ and $P_kD_xuf(x,u)$ are monogenic functions in $u$,
$$\int_{\mathbb{S}^{n-1}}(g(x,u)uD_xP_{k,r})uf(x,u)ds(u)=0=\int_{\mathbb{S}^{n-1}}g(x,u)u(P_kD_xuf(x,u))ds(u).$$
Thus the previous integral (\ref{1}) equals zero.
By Stokes' Theorem for the Dirac operator, we have
$$\begin{array}{lll}\displaystyle\int_\Omega[(g(x,u)uD_x,uf(x,u))_u+(g(x,u)u,D_xuf(x,u))_u]dx^n\\
\\
=\displaystyle\int_\Omega\int_{\mathbb{S}^{n-1}}[(g(x,u)uD_x)uf(x,u)+g(x,u)u(D_xuf(x,u))]ds(u)dx^n\\
\\
=\displaystyle\int_{\partial\Omega}\int_{\mathbb{S}^{n-1}}[(g(x,u)ud\sigma_xuf(x,u))]ds(u)\\
\\
=\displaystyle\int_{\partial\Omega}(g(x,u)u,d\sigma_xuf(x,u))_u.\end{array}$$
But $$\displaystyle\int_{\partial\Omega}(g(x,u)u,P_kd\sigma_xuf(x,u))_u=\displaystyle\int_{\partial\Omega}\int_{\mathbb{S}^{n-1}}g(x,u)u (P_kd\sigma_xuf(x,u))ds(u)=0,$$
since $\displaystyle\int_{\mathbb{S}^{n-1}}g(x,u)u (P_kd\sigma_xuf(x,u))ds(u)=0.$
Therefore we have shown
$$\begin{array}{ll}\displaystyle\int_\Omega[(g(x,u)uQ_{k,r}, uf(x,u))_u+(g(x,u)u, Q_kuf(x,u))_u]dx^n\\
\\
=\displaystyle\int_{\partial\Omega}\left(g(x,u)u, (I-P_k)d\sigma_xuf(x,u)\right)_u. \quad \blacksquare\end{array}$$
\begin{remark}
In the proof of the previous theorem it is proved that
\begin{eqnarray}\label{2}
\displaystyle\int_{\partial\Omega}\left(g(x,u)u, (I-P_k)d\sigma_xuf(x,u)\right)_u=\displaystyle\int_{\partial\Omega}\left(g(x,u)u, d\sigma_xuf(x,u)\right)_u.
\end{eqnarray}
\end{remark}
\begin{theorem}(Borel-Pompeiu Theorem) Let $\Omega'$ and $\Omega$ be as in the previous theorem. Then for $f\in C^1(\Omega', \mathcal{M}_{k-1})$ and $y\in \Omega,$ we obtain
$$\begin{array}{ll}
uf(y,u)=\displaystyle\int_\Omega(H_k(x-y,u,v),Q_kvf(x,v))_vdx^n\\
\\
-\displaystyle\int_{\partial\Omega}\left(H_k(x-y,u,v), (I-P_k)d\sigma_xvf(x,v)\right)_v.
\end{array}$$ Here we will use the representation $H_k(x-y,u,v)=\displaystyle\frac{-1}{\omega_n c_k}uZ_{k-1}(u,\displaystyle\frac{(x-y)v(x-y)}{\|x-y\|^2})\displaystyle\frac{x-y}{\|x-y\|^n}v$. \end{theorem}
\par{\bf Proof:}\qquad Consider a ball $B(y,r)$ centered at $y$ with radius $r$ such that $\overline{B(y,r)}\subset \Omega$. We have
$$\begin{array}{ll}
\displaystyle\int_\Omega(H_k(x-y,u,v),Q_kvf(x,v))_vdx^n\\
\\
=\displaystyle\int_{\Omega\setminus{B(y,r)}}(H_k(x-y,u,v),Q_kvf(x,v))_vdx^n\\
\\
+\displaystyle\int_{B(y,r)}(H_k(x-y,u,v),Q_kvf(x,v))_vdx^n.
\end{array}$$
\par The last integral in the previous equation tends to zero as $r$ tends to zero. This follows from the degree of homogeneity of $x-y$ in $H_k(x-y,u,v)$. Now applying Stokes' Theorem version 2 to the first integral, one gets
$$\begin{array}{ll}\displaystyle\int_{\Omega\setminus{B(y,r)}}(H_k(x-y,u,v),Q_kvf(x,v))_vdx^n\\
\\
=\displaystyle\int_{\partial\Omega}(H_k(x-y,u,v),(I-P_k)d\sigma_xvf(x,v))_v-\displaystyle\int_{\partial {B(y,r)}}(H_k(x-y,u,v),(I-P_k)d\sigma_xvf(x,v))_v.
\end{array}$$
\par Now let us look at the integral $$\begin{array}{ll}\displaystyle\int_{\partial {B(y,r)}}(H_k(x-y,u,v),(I-P_k)d\sigma_xvf(x,v))_v\\
\\
=\displaystyle\int_{\partial {B(y,r)}}(H_k(x-y,u,v),(I-P_k)d\sigma_xvf(y,v))_v\\
\\
+\displaystyle\int_{\partial {B(y,r)}}(H_k(x-y,u,v),(I-P_k)d\sigma_xv[f(x,v)-f(y,v)])_v.
\end{array}$$
Since the second integral on the right hand side tends to zero as $r$ goes to zero because of the continuity of $f$, we only need to deal with the first integral
$$\begin{array}{llll}\displaystyle\int_{\partial {B(y,r)}}(H_k(x-y,u,v),(I-P_k)d\sigma_xvf(y,v))_v \\
\\
=\displaystyle\int_{\partial{B(y,r)}}\int_{\mathbb{S}^{n-1}}H_k(x-y,u,v)(I-P_k)d\sigma_xvf(y,v)ds(v)\\
\\
=\displaystyle\int_{\partial{B(y,r)}}\int_{\mathbb{S}^{n-1}}\displaystyle\frac{-1}{\omega_n c_k}uZ_{k-1}\left(u,\displaystyle\frac{(x-y)v(x-y)}{\|x-y\|^2}\right)\displaystyle\frac{x-y}{\|x-y\|^n}v(I-P_k)n(x)vf(y,v)ds(v)d\sigma(x),
\end{array}$$ where $n(x)$ is the unit outer normal vector and $d\sigma(x)$ is the scalar measure on $\partial B(y,r).$ Now $n(x)$ here is $\displaystyle\frac{y-x}{\|x-y\|}.$ Using equation (\ref{2}) the previous integral becomes
$$\begin{array}{ll}
\displaystyle\int_{\partial{B(y,r)}}\int_{\mathbb{S}^{n-1}}\displaystyle\frac{1}{\omega_n c_k}uZ_{k-1}\left(u,\displaystyle\frac{(x-y)v(x-y)}{\|x-y\|^2}\right)\displaystyle\frac{x-y}{\|x-y\|^n}v\displaystyle\frac{x-y}{\|x-y\|}
vf(y,v) ds(v)d\sigma(x)\\
\\
=\displaystyle\frac{1}{\omega_n c_k}\displaystyle\int_{\partial{B(y,r)}}\displaystyle\frac{1}{r^{n-1}}\int_{\mathbb{S}^{n-1}}uZ_{k-1}\left(u,\displaystyle\frac{(x-y)v(x-y)}{\|x-y\|^2}\right)\displaystyle\frac{x-y}{\|x-y\|}v\displaystyle\frac{x-y}{\|x-y\|}
vf(y,v) ds(v)d\sigma(x)
\end{array}$$
\par Since $Z_{k-1}\left(u,\displaystyle\frac{(x-y)v(x-y)}{\|x-y\|^2}\right)\displaystyle\frac{x-y}{\|x-y\|}v\displaystyle\frac{x-y}{\|x-y\|}$ is a harmonic polynomial of degree $k$ in $v$, we can apply Lemma~5 in \cite{DLRV}; then the integral is equal to
$$\begin{array}{ll}
\displaystyle\int_{\mathbb{S}^{n-1}}uZ_{k-1}(u,v)vvf(y,v)ds(v)\\
\\
=-u\displaystyle\int_{\mathbb{S}^{n-1}}Z_{k-1}(u,v)f(y,v)ds(v)=-uf(y,u).
\end{array}$$
\par Therefore, when the radius $r$ tends to zero, we obtain the desired result. \qquad $\blacksquare$
\par Now if the function has compact support in $\Omega$, then by the Borel-Pompeiu Theorem we obtain:\\
\begin{theorem} $\displaystyle\iint_{\mathbb{R}^n}(H_k(x-y,u,v),Q_kv\phi(x,v))_vdx^n=u\phi(y,u)$ for each $\phi\in C_0^{\infty}(\mathbb{R}^n)$.\end{theorem}
\par Now suppose $vf(x,v)$ is a solution to the $Q_k$ operator, then using the Borel-Pompeiu Theorem we have:\\
\begin{theorem} (Cauchy Integral Formula) If $Q_kvf(x,v)=0,$ then for $y\in \Omega,$
$$\begin{array}{ll}
uf(y,u)=-\displaystyle\int_{\partial\Omega}\left(H_k(x-y,u,v), (I-P_k)d\sigma_xvf(x,v)\right)_v\\
=-\displaystyle\int_{\partial\Omega}\left(H_k(x-y,u,v)d\sigma_x(I-P_{k,r}), vf(x,v)\right)_v.
\qquad \blacksquare \end{array}$$ \end{theorem}
We also can talk about a Cauchy transform for the $Q_k$ operators:
\begin{definition}\quad For a domain $\Omega\subset \mathbb{R}^n$ and a function $f: \Omega\times\mathbb{R}^n \longrightarrow Cl_n,$ where $f(x,u)$ is monogenic of degree $k-1$ in $u$, the Cauchy transform $($or $T_k$-transform$)$ of $f$ is formally defined to be $$(T_kvf)(y,v)=-\iint_\Omega \left(H_k(x-y,u,v), uf(x,u)\right)_udx^n,\qquad y\in \Omega.$$\end{definition}
\begin{theorem} \quad $$Q_k\displaystyle\iint_{\mathbb{R}^n}\left(H_k(x-y,u,v), v\phi(x,v)\right)_vdx^n=u\phi(y,u), \mbox{for}~~ \phi\in C_0^{\infty}(\mathbb{R}^n).$$ Here we use the representation $H_k(x-y,u,v)=\displaystyle\frac{-1}{\omega_n c_k}uZ_{k-1}(u,\displaystyle\frac{(x-y)v(x-y)}{\|x-y\|^2})\displaystyle\frac{x-y}{\|x-y\|^n}v.$ \end{theorem}
\par {\bf Proof:} \quad For each fixed $y\in \mathbb{R}^n$, we can construct a bounded rectangle $R(y)$ centered at $y$ in $\mathbb{R}^n$.
\par Then
$$\begin{array}{lll}Q_k\displaystyle\iint_{\mathbb{R}^n\setminus R(y)}(H_k(x-y,u,v), v\phi(x,v))_vdx^n\\
\\
=(I-P_k)D_y\displaystyle\iint_{\mathbb{R}^n\setminus R(y)}(H_k(x-y,u,v), v\phi(x,v))_vdx^n=0.\end{array}$$
Now consider $$\begin{array}{lll}
\displaystyle\frac{\partial}{\partial y_i}\displaystyle\iint_{R(y)}(H_k(x-y,u,v), v\phi(x,v))_vdx^n\\
\\
=\lim_{\varepsilon\to 0}\displaystyle\frac{1}{\varepsilon}[\displaystyle\iint_{R(y)}(H_k(x-y,u,v), v\phi(x,v))_vdx^n\\
\\
-\displaystyle\iint_{R(y+\varepsilon e_i)}(H_k(x-y-\varepsilon e_i,u,v), v\phi(x,v))_vdx^n]\end{array}$$
$$\begin{array}{lll}=\displaystyle\iint_{R(y)}(H_k(x-y,u,v), \displaystyle\frac{\partial v\phi(x,v)}{\partial x_i})_vdx^n\\
\\
+\displaystyle\int_{\partial R_1(y)\cup \partial R_2(y)}(H_k(x-y,u,v), v\phi(x,v))_vd\sigma(x)
\end{array}$$
where $\partial R_1(y)$ and $\partial R_2(y)$ are the two faces of $R(y)$ with normal vectors $\pm e_i.$
So $$\begin{array}{ll}D_y\displaystyle\iint_{R(y)}(H_k(x-y,u,v), v\phi(x,v))_vdx^n\\
\\
=\displaystyle\iint_{R(y)}\sum_{i=1}^n e_i(H_k(x-y,u,v), \displaystyle\frac{\partial {v\phi(x,v)}}{\partial {x_i}})_vdx^n\\
\\
+\displaystyle\int_{\partial R(y)}n(x)(H_k(x-y,u,v),v\phi(x,v))_vd\sigma(x).\end{array}$$
The first integral tends to zero as the volume of $R(y)$ tends to zero. Thus we turn our attention to the integral
$$(I-P_k)\displaystyle\int_{\partial R(y)}n(x)(H_k(x-y,u,v),v\phi(x,v))_vd\sigma(x).$$
\par This is equal to
$$(I-P_k)\displaystyle\int_{\partial R(y)}\displaystyle\int_{\mathbb{S}^{n-1}}n(x)H_k(x-y,u,v)v\phi(x,v)ds(v)d\sigma(x),$$ which in turn is equal to
$$\begin{array}{ll}
(I-P_k)\displaystyle\int_{\partial R(y)}\displaystyle\int_{\mathbb{S}^{n-1}}n(x)H_k(x-y,u,v)v\phi(y,v)ds(v)d\sigma(x)\\
\\
+(I-P_k)\displaystyle\int_{\partial R(y)}\displaystyle\int_{\mathbb{S}^{n-1}}n(x)H_k(x-y,u,v)v(\phi(x,v)-\phi(y,v))ds(v)d\sigma(x).\end{array}$$
But the last integral on the right-hand side of the above formula tends to zero as the surface area of $\partial R(y)$ tends to zero, because of the degree of homogeneity of $x-y$ in $H_k$ and the continuity of the function $\phi$. Hence we are left with
$$(I-P_k)\displaystyle\int_{\partial R(y)}\displaystyle\int_{\mathbb{S}^{n-1}}n(x)H_k(x-y,u,v)v\phi(y,v)ds(v)d\sigma(x).$$
\par By Stokes' Theorem this is equal to $$(I-P_k)\displaystyle\int_{\partial B(y,r)}\displaystyle\int_{\mathbb{S}^{n-1}}n(x)H_k(x-y,u,v)v\phi(y,v)ds(v)d\sigma(x).$$ In turn this is equal to
$$\begin{array}{ll}
(I-P_k)\displaystyle\int_{\partial B(y,r)}\displaystyle\int_{\mathbb{S}^{n-1}}-\displaystyle\frac{x-y}{\|x-y\|}\displaystyle\frac{-1}{\omega_n c_k}uZ_{k-1}(u,\displaystyle\frac{(x-y)v(x-y)}{\|x-y\|^2})\displaystyle\frac{x-y}{\|x-y\|^n}vv\phi(y,v)ds(v)d\sigma(x)\\
\\
=(I-P_k)\displaystyle\int_{\partial B(y,r)}\displaystyle\frac{-1}{\omega_n c_k}\displaystyle\int_{\mathbb{S}^{n-1}}\displaystyle\frac{x-y}{\|x-y\|}uZ_{k-1}(u,\displaystyle\frac{(x-y)v(x-y)}{\|x-y\|^2},v)\displaystyle\frac{x-y}{\|x-y\|^n}\phi(y,v)ds(v)d\sigma(x).
\end{array}$$
Since $Z_{k-1}(u,v)$ is the reproducing kernel of $\mathcal{M}_{k-1}$, $\pm\tilde{a}Z_{k-1}(au\tilde{a},av\tilde{a})a$ is also a reproducing kernel of $\mathcal{M}_{k-1}$ for each $a\in Pin(n)$; see \cite{DLRV}. Now, letting $a=\displaystyle\frac{x-y}{\|x-y\|},$ the previous integral equals
$$\begin{array}{ll}
(I-P_k)\displaystyle\int_{\partial B(y,r)}\displaystyle\frac{1}{\omega_n c_k}\displaystyle\int_{\mathbb{S}^{n-1}}\displaystyle\frac{x-y}{\|x-y\|}u\displaystyle\frac{x-y}{\|x-y\|}Z_{k-1}(\displaystyle\frac{(x-y)u(x-y)}{\|x-y\|^2},v)\displaystyle\frac{x-y}{\|x-y\|}\displaystyle\frac{x-y}{\|x-y\|^n}\\
\\
\phi(y,v)ds(v)d\sigma(x)\\
\\
=(I-P_k)\displaystyle\int_{\partial B(y,r)}\displaystyle\int_{\mathbb{S}^{n-1}}\displaystyle\frac{1}{\omega_n c_k}\displaystyle\frac{1}{r^{n-1}}\displaystyle\frac{x-y}{\|x-y\|}u\displaystyle\frac{x-y}{\|x-y\|}Z_{k-1}(\displaystyle\frac{(x-y)u(x-y)}{\|x-y\|^2},v)\phi(y,v)ds(v)d\sigma(x).
\end{array}$$
Applying Lemma 5 in \cite{DLRV}, the integral becomes
$$
(I-P_k)\displaystyle\int_{\mathbb{S}^{n-1}}uZ_{k-1}(u,v)\phi(y,v)ds(v)=(I-P_k)u\phi(y,u)=u\phi(y,u).\quad \blacksquare
$$
\section{The $Q_k$ operators on the sphere}
In this section, we will extend the results for the $Q_k$ operators in $\mathbb{R}^n$ from the previous sections to the sphere.
Consider the Cayley transformation
$ C : \mathbb{R}^n \to \mathbb{S}^n $, where $\mathbb{S}^n$ is the unit sphere in $\mathbb{R}^{n+1}$,
defined by $C(x)= (e_{n+1}x +1)(x + e_{n+1})^{-1} $, where $ x= x_1e_1 + \cdots + x_ne_n \in \mathbb{R}^n $,
and $e_{n+1}$ is a unit vector in $\mathbb{R}^{n+1}$ which is orthogonal to $\mathbb{R}^n $.
Now $C(\mathbb{R}^n) = \mathbb{S}^n \setminus\{e_{n+1}\}$. Suppose $ x_s \in \mathbb{S}^n $ and
$x_s = x_{s_1}e_1 + \cdots + x_{s_n}e_n + x_{s_{n+1}}e_{n+1}$, then we have
$C^{-1}(x_s) = (-e_{n+1}x_s +1)(x_s - e_{n+1})^{-1}$.
\\
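As a quick consistency check (under the standard assumption in this setting that $e_j^2=-1$ for $1\leq j\leq n+1$, so that $e_{n+1}$ anticommutes with each $x\in\mathbb{R}^n$ and the norm is multiplicative on products of vectors), note that $e_{n+1}x+1=e_{n+1}x-e_{n+1}^2=e_{n+1}(x-e_{n+1}),$ so
$$
\|C(x)\|=\|e_{n+1}\|\,\|x-e_{n+1}\|\,\|(x+e_{n+1})^{-1}\|=\displaystyle\frac{\sqrt{\|x\|^2+1}}{\sqrt{\|x\|^2+1}}=1,
$$
since $x\perp e_{n+1}$ gives $\|x\pm e_{n+1}\|^2=\|x\|^2+1$. Thus $C$ does map $\mathbb{R}^n$ into $\mathbb{S}^n$.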
The Dirac operator over the $n$-sphere $\mathbb{S}^n$ has the form $D_{s} = w(\Gamma + \frac{n}{2})$, where
$w \in \mathbb{S}^n $ and $ \Gamma =
\displaystyle\sum_{i<j, i=1}^{n}{e_ie_j(w_i\frac{\partial}{\partial w_j} - w_j\frac{\partial}{\partial w_i})}$. See \cite{CM,LR,R1,R2,Va3}.
Let $U$ be a domain in $\mathbb{R}^n$. Consider a function $f_{\star}: U \times \mathbb{R}^{n} \to Cl_{n+1}$ such that for each $x\in U$,
$f_{\star}(x, u)$ is a left monogenic polynomial homogeneous of degree $k-1$ in $u$.
This function reduces to $f(x_s, u)$ on $C(U) \times \mathbb{R}^{n} $ and $f(x_s, u)$ takes its values in $Cl_{n+1},$
where $C(U)\subset \mathbb{S}^n$ and $f(x_s, u)$ is a left monogenic polynomial homogeneous of degree $k-1$ in $u$.
The
left $n$-spherical remaining operator on the sphere is defined to be
$$
Q_k^S:=(I-P_k)D_{s,x_s},
$$ where $D_{s,x_s}$ is the Dirac operator on the sphere with respect to $x_s$.
Hence the left $n$-spherical remaining equation is defined to be $$Q_k^Suf(x_s, u)=0.$$
On the other hand, the right $n$-spherical remaining operator is defined to be
$$
Q_{k,r}^S:=D_{s,x_s}(I-P_{k,r}).
$$
The right $n$-spherical remaining equation is defined to be $g(x_s,v)vQ_{k,r}^S=0,$ where $g(x_s,v)\in \overline{\mathcal{M}}_{k -1}.$
\subsection{The intertwining operators for $Q_k^S$ and $Q_k$ operators and the conformal invariance of $Q_k^Suf(x_s,u)=0$}
First let us recall that if $ f(u)\in \mathcal{M}_{k -1}$ then it trivially extends to $F(v) = f(u + u_{n+1}e_{n+1})$
with $u_{n+1}\in \mathbb{R}$ and $F(v)= f(u)$ for all $u_{n+1}\in \mathbb{R}$. Consequently $D_{n+1}F(v)= 0$
where $D_{n+1}= \displaystyle\sum_{j=1}^{n+1} {e_j \frac{\partial}{\partial u_j}}$.
If $ f(u)\in \mathcal{M}_{k-1} $ then for any boundary of a piecewise smooth bounded domain
$U \subseteq \mathbb{R}^{n} $ by Cauchy's Theorem
\begin{eqnarray}\label{3}\int_{\partial U}{n(u)f(u)d\sigma(u)} = 0. \end{eqnarray}
Suppose now $a\in \mbox{Pin}(n+1)$ and $u= aw\tilde{a}$. Then, although $u \in \mathbb{R}^{n}$, in
general $w$ belongs to the hyperplane $a^{-1}\mathbb{R}^{n}\tilde{a}^{-1}$ in $\mathbb{R}^{n+1}$.
By applying a change of variable, up to a sign the integral (\ref{3}) becomes
\begin{eqnarray}\label{4}
\int_{a^{-1}\partial U\tilde{a}^{-1}}{ an(w)\tilde{a}F(aw\tilde{a})d\sigma(w)} = 0.\end{eqnarray}
As $\partial U$ is arbitrary, on applying Stokes' Theorem to (\ref{4}) we see
that
$$
D_a \tilde{a}F(aw\tilde{a}) = 0,~~ \mbox{where} ~~ D_a : = D_{n+1}\bigl|_{a^{-1}\mathbb{R}^{n}\tilde{a}^{-1}}.
$$ See \cite{LRV}.
From now on all functions on spheres take their values in $Cl_{n+1}.$
Now let $f(x_s,u): U_s \times \mathbb{R}^{n}\to Cl_{n+1}$ be a monogenic polynomial
homogeneous of degree $k$ in $u$ for each $x_s \in U_s$, where $U_s$ is a domain in $ \mathbb{S}^n.$
It is known from section 3 that $I-P_k$ is conformally invariant under a general M\"{o}bius transformation
over $\mathbb{R}^{n}$. This trivially extends to M\"{o}bius transformations on $\mathbb{R}^{n+1}$.
It follows that if we restrict $x_s $ to $ \mathbb{S}^n,$ then $I-P_k$ is also conformally invariant under
the Cayley transformation $C$ and its inverse $C^{-1},$ with $x\in \mathbb{R}^n$.
We can use the intertwining formulas for $D_x$ and $D_{s,x_s}$ given in \cite{LR} to establish the intertwining
formulas for $Q_k$ and $Q_k^S.$
\begin{theorem}
$$\begin{array}{ll}
-J_{-1}(C^{-1},x_s)Q_{k,u}uf(x,u)
=Q_{k,w}^SwJ(C^{-1},x_s)f(C^{-1}(x_s),\displaystyle\frac{(x_s-e_{n+1})w(x_s-e_{n+1})}{\|x_s-e_{n+1}\|^2}),
\end{array}$$
where $Q_{k,u}$ are the remaining operators with respect to $u \in \mathbb{R}^{n}$, $Q_{k,w}^S$ are the remaining operators on the sphere with respect to $w \in \mathbb{S}^{n}$,
$u=\displaystyle\frac{(x_s - e_{n+1})w(x_s - e_{n+1})}{\|x_s - e_{n+1}\|^2},$
$J(C^{-1},x_s)=\displaystyle\frac{x_s - e_{n+1}}{\|x_s - e_{n+1}\|^n}$ is the conformal weight for the inverse of the Cayley transformation
and $J_{-1}(C^{-1},x_s)=\displaystyle\frac{x_s - e_{n+1}}{\|x_s - e_{n+1}\|^{n+2}}.$
\end{theorem}
{\bf Proof:}\quad In \cite{LR} it is shown that $D_x=J_{-1}(C^{-1},x_s)^{-1}D_{s,x_s}J(C^{-1},x_s).$
Set $u=\displaystyle\frac{J(C^{-1},x_s)wJ(C^{-1},x_s)}{\|J(C^{-1},x_s)\|^2}$ for some $w \in \mathbb{R}^{n+1}$. Consequently,
$$\begin{array}{ll}Q_{k,u}uf(x,u)=(I-P_{k,u})D_xuf(x,u)\\
\\
=(I-P_{k,u})J_{-1}(C^{-1},x_s)^{-1}D_{s,x_s}J(C^{-1},x_s)uf(C^{-1}(x_s),u)\\
\\
=J_{-1}(C^{-1},x_s)^{-1}(I-P_{k,w})D_{s,x_s}J(C^{-1},x_s)\displaystyle\frac{J(C^{-1},x_s)wJ(C^{-1},x_s)}{\|J(C^{-1},x_s)\|^2}\\
\\
f(C^{-1}(x_s),\displaystyle\frac{J(C^{-1},x_s)wJ(C^{-1},x_s)}{\|J(C^{-1},x_s)\|^2})
\end{array}$$
Since $\displaystyle\frac{J(C^{-1},x_s)wJ(C^{-1},x_s)}{\|J(C^{-1},x_s)\|^2}=\displaystyle\frac{(x_s - e_{n+1})w(x_s - e_{n+1})}{||x_s - e_{n+1}||^2},$ the previous equation becomes
$$\begin{array}{ll}
Q_{k,u}uf(x,u)
=-J_{-1}(C^{-1},x_s)^{-1}Q_{k,w}^SwJ(C^{-1},x_s)f(C^{-1}(x_s),\displaystyle\frac{(x_s-e_{n+1})w(x_s-e_{n+1})}{\|(x_s-e_{n+1})\|^2}).\quad \blacksquare
\end{array}$$
Similarly, we have the following result for the remaining operators under the Cayley transformation.
\begin{theorem}
$$
-J_{-1}(C,x)Q_{k,u}^Sug(x_s,u)=Q_{k,w}wJ(C,x)g(C(x),\displaystyle\frac{(x+e_{n+1})w(x+e_{n+1})}{\|x+e_{n+1}\|^2}),
$$
where $u=\displaystyle\frac{(x+e_{n+1})w(x+e_{n+1})}{\|x+e_{n+1}\|^2},$
$J(C,x)=\displaystyle\frac{x +e_{n+1}}{\|x+e_{n+1}\|^n}$ and $J_{-1}(C,x)=\displaystyle\frac{x+e_{n+1}}{\|x +e_{n+1}\|^{n+2}}$ is the conformal weight for the Cayley transformation.
\end{theorem}
As a consequence of the two previous theorems we have the conformal invariance of the equation $Q_k^S uf(x_s,u) = 0$:
\begin{theorem}
$Q_{k,u}uf(x,u)=0$ if and only if
$$
Q_{k,w}^SwJ(C^{-1},x_s)f(C^{-1}(x_s),\displaystyle\frac{(x_s-e_{n+1})w(x_s-e_{n+1})}{\|x_s-e_{n+1}\|^2})= 0
$$
and $Q_{k,u}^Sug(x_s,u) = 0 $ if and only if
$$
Q_{k,w}wJ(C,x)g(C(x),\displaystyle\frac{(x+e_{n+1})w(x+e_{n+1})}{\|x+e_{n+1}\|^2})= 0.
$$
\end{theorem}
\subsection{A kernel for the $Q_k^S$ operator }
Now consider the kernel in $ \mathbb{R}^n $
$$\begin{array}{ll}\displaystyle\frac{-1}{\omega_n c_k}w\displaystyle\frac{x-y}{\|x-y\|^n}Z_{k-1}(\displaystyle\frac{(x-y)w(x-y)}{\|x-y\|^2},v)v\\
\\
=\displaystyle\frac{-1}{\omega_n c_k}\displaystyle\frac{J(C^{-1},x_s)^{-1}uJ(C^{-1},x_s)^{-1}}{\|J(C^{-1},x_s)^{-1}\|^2}\\
\\
J(C^{-1},x_s)^{-1}\displaystyle\frac{x_s-y_s}{\|x_s-y_s\|^n}J(C^{-1},y_s)^{-1}Z_{k-1}(\displaystyle\frac{(x-y)w(x-y)}{\|x-y\|^2},v)v,
\end{array}$$ where $w=\displaystyle\frac{J(C^{-1},x_s)^{-1}uJ(C^{-1},x_s)^{-1}}{\|J(C^{-1},x_s)^{-1}\|^2}.$
Multiplying by $J(C^{-1},x_s)$ and applying the Cayley transformation to the above kernel, we obtain the kernel
\begin{eqnarray}\label{secondeq}
H_k^S(x-y,u,v):=
\displaystyle\frac{-1}{\omega_nc_k}u\displaystyle\frac{x_s-y_s}{\|x_s-y_s\|^n}J(C^{-1},y_s)^{-1}Z_{k-1}(au\tilde{a},v)v,
\end{eqnarray}
where $a=a(x_s,y_s)=\displaystyle\frac{J(C^{-1},x_s)^{-1}(x_s-y_s)J(C^{-1},y_s)^{-1}}{\|J(C^{-1},x_s)^{-1}\|\|x_s-y_s\|\|J(C^{-1},y_s)^{-1}\|}.$
This is a fundamental solution to $Q_k^Suf(x_s,u)= 0$ on $ \mathbb{S}^n,$ for $x_s,y_s \in \mathbb{S}^n.$
Similarly, we obtain that
\begin{eqnarray}\label{firsteq}
\displaystyle\frac{-1}{\omega_nc_k}uZ_{k-1}(u,\tilde{a}va)J(C^{-1},y_s)^{-1}\displaystyle\frac{x_s-y_s}{\|x_s-y_s\|^n}v
\end{eqnarray}
is a nontrivial solution to $g(x_s, v)v Q_{k,r}^S= 0$.
We can see that the representations (\ref{secondeq}) and (\ref{firsteq}) agree up to a reflection:
$$
\begin{array}{ll}
\displaystyle\frac{-1}{\omega_nc_k}uZ_{k-1}(u,\tilde{a}va)J(C^{-1},y_s)^{-1}\displaystyle\frac{x_s-y_s}{\|x_s-y_s\|^n}v\\
\\
=\displaystyle\frac{1}{\omega_nc_k}u\tilde{a}Z_{k-1}(au\tilde{a},v)aJ(C^{-1},y_s)^{-1}\displaystyle\frac{x_s-y_s}{\|x_s-y_s\|^n}v\\
\\
=\displaystyle\frac{-1}{\omega_nc_k}uJ(C^{-1},y_s)^{-1}\displaystyle\frac{x_s-y_s}{\|x_s-y_s\|^n}\displaystyle\frac{J(C^{-1},x_s)^{-1}}{\|J(C^{-1},x_s)^{-1}\|}
Z_{k-1}(au\tilde{a},v)\displaystyle\frac{J(C^{-1},x_s)^{-1}}{\|J(C^{-1},x_s)^{-1}\|}v\\
\\
=u\displaystyle\frac{J(C^{-1},x_s)^{-1}}{\|J(C^{-1},x_s)^{-1}\|}\displaystyle\frac{-1}{\omega_nc_k}\displaystyle\frac{x_s-y_s}{\|x_s-y_s\|^n}J(C^{-1},y_s)^{-1}
Z_{k-1}(au\tilde{a},v)\displaystyle\frac{J(C^{-1},x_s)^{-1}}{\|J(C^{-1},x_s)^{-1}\|}v.
\end{array}
$$
\subsection{Some basic integral formulas for the remaining operators on spheres}
In this section we will study some basic integral formulas related to the remaining operators on the sphere.
\begin{theorem} (Stokes' Theorem for the $n$-spherical Dirac operator $D_{s}$) \cite{LR}
Suppose $U_s$ is a domain on $\mathbb{S}^n$ and $f,g: U_s \times \mathbb{R}^{n} \to Cl_{n+1}$
are $C^1$, then for a subdomain $V_s$ of $U_s$, we have
$$\begin{array}{ll}
\displaystyle\int_{\partial V_s} g(x_s, u)n(x_s)f(x_s, u)d\Sigma(x_s)\\
\\
=\displaystyle\int_{V_s} (g(x_s, u)D_{s,x_s})f(x_s, u) + g(x_s, u)(D_{s,x_s}f(x_s, u))dS(x_s) ,
\end{array}$$
where $dS(x_s)$ is the $n$-dimensional area measure on $V_s $, $d\Sigma(x_s)$ is the $(n-1)$-dimensional scalar
Lebesgue measure on $\partial V_s$ and $n(x_s)$ is the unit outward normal vector to $\partial V_s$ at $x_s$.
\end{theorem}
Applying arguments similar to those used to prove Stokes' Theorem for the $Q_k$ operators in section 4, we obtain
\begin{theorem}(Stokes' Theorem for the $Q_k^S$ operator )
Let $U_s, V_s, \partial V_s$ be as in the previous Theorem. Then for $ f,g \in C^1(U_s \times \mathbb{R}^{n}, {\mathcal M}_{k})$,
we have
(version 1)
$$\begin{array}{ll}
\displaystyle\int_{V_s}[(g(x_s,u)Q_{k,r}^S, f(x_s,u))_u+(g(x_s,u), Q_k^Sf(x_s,u))_u]dS(x_s)\\
\\
=\displaystyle\int_{\partial V_s}\left(g(x_s,u), (I-P_k)n(x_s)f(x_s,u)\right)_ud\Sigma(x)\\
\\
=\displaystyle\int_{\partial V_s}\left(g(x_s,u)n(x_s)(I-P_{k,r}), f(x_s,u)\right)_ud\Sigma(x).
\end{array}$$
Then for $f, g \in C^1(U_s \times \mathbb{R}^{n}, \mathcal{M}_{k-1})$, we have (version 2)
$$\begin{array}{ll}
\displaystyle\int_{V_s}[(g(x_s,u)uQ_{k,r}^S, uf(x_s,u))_u+(g(x_s,u)u, Q_k^Suf(x_s,u))_u]dS(x_s)\\
\\
=\displaystyle\int_{\partial V_s}\left(g(x_s,u)u, (I-P_k)n(x_s)uf(x_s,u)\right)_ud\Sigma(x)\\
\\
=\displaystyle\int_{\partial V_s}\left(g(x_s,u)un(x_s)(I-P_{k,r}), uf(x_s,u)\right)_ud\Sigma(x).
\end{array}$$
\end{theorem}
\begin{remark}
Using arguments similar to those used to show the conformal invariance of Stokes' Theorem for the Rarita-Schwinger operators in \cite{LRV}, we obtain that Stokes' Theorem for the $Q_k$ operators is conformally invariant under the Cayley transformation and its inverse.
\end{remark}
\begin{remark}
We also have the following fact
\begin{eqnarray}\label{7}
\displaystyle\int_{\partial V_s}\left(g(x_s,u)u, (I-P_k)n(x_s)uf(x_s,u)\right)_ud\Sigma(x)=\displaystyle\int_{\partial V_s}\left(g(x_s,u)u,n(x_s)uf(x_s,u)\right)_ud\Sigma(x)
\end{eqnarray}\end{remark}
\begin{theorem} (Borel-Pompeiu Theorem)
Suppose $U_s$, $V_s$ and $\partial V_s$ are as stated in Theorem $10$ and $y_s \in V_s.$
Then for $f \in C^1(U_s \times \mathbb{R}^{n},{\mathcal M}_{k-1}) $ we have
$$\begin{array}{ll}
u'f(y_s, u') =J(C^{-1},y_s)\displaystyle\int_{\partial V_s} (H_k^{S}(x_s-y_s, u, v), (I-P_{k})n(x_s) vf(x_s,v))_{v}d\Sigma(x_s) \\
\\
-J(C^{-1},y_s)\displaystyle\int_{V_s} (H_k^{S}(x_s-y_s, u, v),Q_k^Svf(x_s,v))_{v} dS(x_s)
\end{array}$$
where $u'=\displaystyle\frac{(y_s-e_{n+1})u(y_s-e_{n+1})}{\|y_s-e_{n+1}\|^2},$
$dS(x_s)$ is the $n$-dimensional area measure on $V_s \subset \mathbb{S}^n $, $n(x_s)$ and $d\Sigma(x_s)$ as in Theorem $10$.
\end{theorem}
{\bf Proof:} In the proof we use the representation
$$ \begin{array}{ll}H_k^S(x_s-y_s,u,v)=
\displaystyle\frac{-1}{\omega_nc_k}uZ_{k-1}(u,\tilde{a}va)J(C^{-1},y_s)^{-1}\displaystyle\frac{x_s-y_s}{\|x_s-y_s\|^n}v.
\end{array}$$
Let $B_s(y_s, \epsilon)$ be the ball centered at $y_s \in \mathbb{S}^n$ with radius $\epsilon$. We denote
$C^{-1}(B_s(y_s, \epsilon))$ by $B(y, r)$, and
$C^{-1}(\partial B_s(y_s, \epsilon))$ by $\partial B(y, r),$ where $y = C^{-1}(y_s)\in \mathbb{R}^n $ and $r$ is the radius of $B(y, r)$ in $\mathbb{R}^n$. Using arguments similar to those in the proof of Theorem $2$, we only need to deal with
$$\begin{array}{ll}
\displaystyle\int_{\partial B_s(y_s, \epsilon)} (H_k^{S}(x_s-y_s, u, v),(I-P_k)n(x_s)vf(y_s,v))_{v} d\Sigma(x_s)\\
\\
=\displaystyle\int_{\partial B_s(y_s, \epsilon)}\int_{\mathbb{S}^{n-1}} H_k^{S}(x_s-y_s, u, v)(I-P_k)n(x_s)vf(y_s,v) ds(v)d\Sigma(x_s).\end{array}$$
Now applying (\ref{7}), the integral is equal to
$$\begin{array}{lll}
\displaystyle\int_{\partial B_s(y_s, \epsilon)}\int_{\mathbb{S}^{n-1}} H_k^{S}(x_s-y_s, u, v)n(x_s)vf(y_s,v) ds(v)d\Sigma(x_s)\\
\\
=\displaystyle\int_{\partial B_s(y_s, \epsilon)}\int_{\mathbb{S}^{n-1}} \displaystyle\frac{-1}{\omega_nc_k}uZ_{k-1}(u,\tilde{a}va)
J(C^{-1},y_s)^{-1}\displaystyle\frac{x_s-y_s}{\|x_s-y_s\|^n}v n(x_s)vf(y_s,v) ds(v)d\Sigma(x_s)\end{array}$$
Applying the inverse of the Cayley transformation to the previous integral, we have
$$\begin{array}{ll}
=\displaystyle\int_{\partial B(y, r)}\int_{\mathbb{S}^{n-1}} \displaystyle\frac{-1}{\omega_nc_k}u
Z_{k-1}(u,\displaystyle\frac{(x-y)w(x-y)}{\|x-y\|^2})
J(C^{-1},y_s)^{-1}J(C,y)^{-1}\displaystyle\frac{x-y}{\|x-y\|^n}J(C,x)^{-1}\\
\\
vJ(C,x)n(x)J(C,x)vf(C(y),\displaystyle\frac{J(C,y)wJ(C,y)}{\|J(C,y)\|^2}) ds(v)d\sigma(x),
\end{array}$$
where $v=\displaystyle\frac{J(C,y)wJ(C,y)}{\|J(C,y)\|^2}.$
Now we replace $v$ with $\displaystyle\frac{J(C,y)wJ(C,y)}{\|J(C,y)\|^2}$ in the previous integral and write $J(C,x)=(J(C,x)-J(C,y))+J(C,y)$; since $J(C,x)-J(C,y)$ tends to zero as $x$ approaches $y$, the previous integral
can be replaced by
$$\begin{array}{ll}
=\displaystyle\int_{\partial B(y, r)}\int_{\mathbb{S}^{n-1}} \displaystyle\frac{-1}{\omega_nc_k}u
Z_{k-1}(u,\displaystyle\frac{(x-y)w(x-y)}{\|x-y\|^2})
\displaystyle\frac{x-y}{\|x-y\|^n}J(C,y)^{-1}\\
\\
\displaystyle\frac{J(C,y)wJ(C,y)}{\|J(C,y)\|^2}J(C,y)n(x)J(C,y)\displaystyle\frac{J(C,y)wJ(C,y)}{\|J(C,y)\|^2}f(C(y),\displaystyle\frac{J(C,y)wJ(C,y)}{\|J(C,y)\|^2}) ds(w)d\sigma(x)\\
\\
=\displaystyle\int_{\partial B(y, r)}\int_{\mathbb{S}^{n-1}} \displaystyle\frac{-1}{\omega_nc_k}u
Z_{k-1}(u,\displaystyle\frac{(x-y)w(x-y)}{\|x-y\|^2})
\displaystyle\frac{x-y}{\|x-y\|^n}
wn(x)\\
\\
wJ(C,y)f(C(y),\displaystyle\frac{J(C,y)wJ(C,y)}{\|J(C,y)\|^2}) ds(w)d\sigma(x)\\
\\
=\displaystyle\int_{\partial B(y, r)}\int_{\mathbb{S}^{n-1}} \displaystyle\frac{1}{\omega_nc_k}\displaystyle\frac{1}{r^{n-1}}u
Z_{k-1}(u,\displaystyle\frac{(x-y)w(x-y)}{\|x-y\|^2})
\displaystyle\frac{x-y}{\|x-y\|}
w\displaystyle\frac{x-y}{\|x-y\|}\\
\\
wJ(C,y)f(C(y),\displaystyle\frac{J(C,y)wJ(C,y)}{\|J(C,y)\|^2}) ds(w)d\sigma(x).
\end{array}$$
Using Lemma $5$ in \cite{DLRV}, the previous integral becomes
\begin{eqnarray}\label{8}
&&\displaystyle\int_{\mathbb{S}^{n-1}}uZ_{k-1}(u,w)wwJ(C,y)f(C(y),\displaystyle\frac{J(C,y)wJ(C,y)}{\|J(C,y)\|^2})ds(w)\nonumber\\
\nonumber\\
&&=-\displaystyle\int_{\mathbb{S}^{n-1}}uZ_{k-1}(u,w)J(C,y)f(C(y),\displaystyle\frac{J(C,y)wJ(C,y)}{\|J(C,y)\|^2})ds(w)\nonumber\\
\nonumber\\
&&=-uJ(C,y)f(C(y),\displaystyle\frac{J(C,y)uJ(C,y)}{\|J(C,y)\|^2}).
\end{eqnarray}
If we set $u'=\displaystyle\frac{J(C,y)uJ(C,y)}{\|J(C,y)\|^2}=\displaystyle\frac{J(C^{-1},y_s)^{-1}uJ(C^{-1},y_s)^{-1}}{\|J(C^{-1},y_s)^{-1}\|^2}=\displaystyle\frac{(y_s-e_{n+1})u(y_s-e_{n+1})}{\|y_s-e_{n+1}\|^2},$ then\\ $uJ(C,y)=uJ(C^{-1},y_s)^{-1}=J(C^{-1},y_s)\|J(C^{-1},y_s)^{-1}\|^2u'.$ Now if we multiply both sides of equation (\ref{8}) by
$\displaystyle\frac{J(C^{-1},y_s)^{-1}}{\|J(C^{-1},y_s)^{-1}\|^2}=-J(C^{-1},y_s)$, then we obtain
$$\begin{array}{ll}
J(C^{-1},y_s)\displaystyle\int_{\mathbb{S}^{n-1}}uZ_{k-1}(u,w)J(C,y)f(C(y),\displaystyle\frac{J(C,y)wJ(C,y)}{\|J(C,y)\|^2})ds(w)\\
\\
=u'f(C(y),u')=u'f(y_s,u').\quad \blacksquare\end{array}$$
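\begin{remark}
The identity $\displaystyle\frac{J(C^{-1},y_s)^{-1}}{\|J(C^{-1},y_s)^{-1}\|^2}=-J(C^{-1},y_s)$ used in the last step can be checked directly. Assuming $e_j^2=-1$, a nonzero vector $p$ satisfies $p^2=-\|p\|^2$, so $p^{-1}=-\displaystyle\frac{p}{\|p\|^2}$. Writing $p=y_s-e_{n+1}$, so that $J(C^{-1},y_s)=\displaystyle\frac{p}{\|p\|^n}$, we get
$$
J(C^{-1},y_s)^{-1}=-p\|p\|^{n-2},\qquad \|J(C^{-1},y_s)^{-1}\|=\|p\|^{n-1},
$$
and hence
$$
\displaystyle\frac{J(C^{-1},y_s)^{-1}}{\|J(C^{-1},y_s)^{-1}\|^2}=-\displaystyle\frac{p\|p\|^{n-2}}{\|p\|^{2n-2}}=-\displaystyle\frac{p}{\|p\|^{n}}=-J(C^{-1},y_s).
$$
\end{remark}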
\begin{corollary}
Let $\psi$ be a function in $C^\infty (V_s,\mathcal{M}_{k-1})$ with supp $\psi\subset V_s$. Then
$$
u'\psi(y_s, u') = - J(C^{-1},y_s)\int_{V_s} (H_k^{S}(x_s-y_s, u, v),Q_k^Sv\psi(x_s,v))_{v} dS(x_s),
$$
where $u'=\displaystyle\frac{(y_s-e_{n+1})u(y_s-e_{n+1})}{\|y_s-e_{n+1}\|^2}.$
\end{corollary}
\begin{corollary} (Cauchy Integral Formula for $Q_k^S$ operators)
If $Q_k^Svf(x_s, v) = 0$, then for $y_s \in V_s$ we have
\begin{eqnarray*}
u'f(y_s, u') &=&J(C^{-1},y_s)\int_{\partial V_s} (H_k^{S}(x_s-y_s, u, v), (I-P_k)n(x_s)v f(x_s,v))_{v} d\Sigma(x_s) \\
&=&J(C^{-1},y_s)\int_{\partial V_s} (H_k^{S}(x_s-y_s, u, v) n(x_s)(I-P_{k,r}),v f(x_s,v))_{v} d\Sigma(x_s),
\end{eqnarray*}
where $u'=\displaystyle\frac{(y_s-e_{n+1})u(y_s-e_{n+1})}{\|y_s-e_{n+1}\|^2}.$
\end{corollary}
\begin{remark}
By factoring out $\mathbb{S}^n$ by the group $\mathbb{Z}_2=\{\pm 1\}$ we obtain real projective space, $\mathbb{R}P^n$. Using arguments similar to those used to obtain the results for Rarita-Schwinger operators on real projective space in \cite{LRV}, we can easily extend the corresponding results for the $Q_k$ operators to real projective space.
\end{remark}
Junxia Li \qquad Email: [email protected]\\
John Ryan \qquad Email: [email protected]
\end{document}
\begin{document}
\title{Quantum Equilibrium and the Role of Operators as Observables in
Quantum Theory\footnote{Dedicated to Elliott Lieb on the occasion of
his 70th birthday. Elliott will be (we fear unpleasantly) surprised
to learn that he bears a greater responsibility for this paper
than he could possibly imagine. We would of course like to think
that our work addresses in some way the concern suggested by the
title of his recent talks, {\it The Quantum-Mechanical World View:
A Remarkably Successful but Still Incomplete Theory}, but we
recognize that our understanding of incompleteness is much more
naive than Elliott's. He did, however, encourage us in his capacity
as an editor of the Reviews of Modern Physics to submit a paper on
the role of operators in quantum theory. That was 12 years ago.
Elliott is no longer an editor there and the paper that developed
is not quite a review.} }
\author{ Detlef D\"{u}rr\\
Mathematisches Institut der Universit\"{a}t M\"{u}nchen\\
Theresienstra{\ss}e 39, 80333 M\"{u}nchen, Germany\\
E-mail: [email protected] \and
Sheldon Goldstein\\
Departments of Mathematics, Physics, and Philosophy, Rutgers
University\\
110 Frelinghuysen Road, Piscataway, NJ 08854-8019, USA\\
E-mail: [email protected] \and
Nino Zangh\`{\i}\\
Dipartimento di Fisica dell'Universit\`a di Genova\\Istituto
Nazionale di Fisica Nucleare
--- Sezione di Genova\\
via Dodecaneso 33, 16146 Genova, Italy\\
E-mail: [email protected]} \date{} \maketitle
\begin{abstract}
Bohmian mechanics\ is the most naively obvious embedding imaginable of Schr\"{o}dinger's
equation into a completely coherent physical theory. It describes a
world in which particles move in a highly non-Newtonian sort of way,
one which may at first appear to have little to do with the spectrum
of predictions of quantum mechanics. It turns out, however, that as
a consequence of the defining dynamical equations of Bohmian mechanics, when a
system has wave function\ $\psi$ its configuration is typically random, with
probability density $\rho$ given by $|\psi|^2$, the quantum equilibrium\
distribution. It also turns out that the entire quantum formalism, operators as
observables and all the rest, naturally emerges in Bohmian mechanics
{}from the analysis of ``measurements.'' This analysis reveals the
status of operators as observables in the description of quantum
phenomena, and facilitates a clear view of the range of
applicability of the usual quantum mechanical formulas.
\end{abstract}
\tableofcontents
Schr\"odinger's equationction{Introduction}
Schr\"odinger's equationtcounter{equation}{0}
It is often argued that the quantum mechanical association of
observables with self-adjoint operators is a straightforward
generalization of the notion of classical observable, and that quantum
theory should be no more conceptually problematic than classical
physics {\it once this is appreciated}. The classical physical
observables---for a system of particles, their positions
$q=(\mathbf{q}_k)$, their momenta $p=(\mathbf{p}_k)$, and the
functions thereof, i.e., functions on phase space---form a commutative
algebra. It is generally taken to be the essence of quantization, the
procedure which converts a classical theory to a quantum one, that
\(q\), \(p\), and hence all functions \(f(q,p)\) thereof are replaced
by appropriate operators, on a Hilbert space (of possible wave
functions) associated with the system under consideration. Thus
quantization leads to a noncommutative operator algebra of
``observables,'' the standard examples of which are provided by
matrices and linear operators. Thus it seems perfectly natural that
classical observables are functions on phase space and quantum
observables are self-adjoint operators.
However, there is much less here than meets the eye. What should be
meant by ``measuring" a quantum observable, a self-adjoint operator?
We think it is clear that this must be specified---without such
specification it can have no meaning whatsoever. Thus we should be
careful here and use safer terminology by saying that in quantum
theory observables are {\it associated} with self-adjoint operators,
since it is difficult to see what could be meant by more than an
association, by an {\it identification} of observables, regarded as
somehow having independent meaning relating to observation or
measurement (if not to intrinsic ``properties"), with such a
mathematical abstraction as a self-adjoint operator.
We are insisting on ``association" rather than identification in
quantum theory, but not in classical theory, because there we begin
with a rather clear notion of observable (or property) which is
well-captured by the notion of a function on the phase space, the
state space of {\it complete descriptions}. If the state of the
system were observed, the value of the observable would of course be
given by this function of the state $(q,p)$, but the observable might
be observed by itself, yielding only a partial specification of the
state. In other words, measuring a classical observable means
determining to which level surface of the corresponding function the
state of the system, the phase point---which is at any time {\it
definite} though probably unknown---belongs. In the quantum realm
the analogous notion could be that of function on Hilbert space, not
self-adjoint operator. But we don't measure the wave function, so
that functions on Hilbert space are not physically measurable, and
thus do not define ``observables.''
The problematical character of the way in which measurement is treated
in orthodox quantum theory has been stressed by John Bell:
\begin{quotation}\setlength{\baselineskip}{12pt}\noindent
The concept of `measurement' becomes so fuzzy on reflection that it
is quite surprising to have it appearing in physical theory {\it at
the most fundamental level.\/} Less surprising perhaps is that
mathematicians, who need only simple axioms about otherwise
undefined objects, have been able to write extensive works on
quantum measurement theory---which experimental physicists do not
find it necessary to read. \dots Does not any {\it analysis\/} of
measurement require concepts more {\it fundamental\/} than
measurement? And should not the fundamental theory be about these
more fundamental concepts?~\cite{Bel81}
\end{quotation}
\begin{quotation}\setlength{\baselineskip}{12pt}\noindent
\dots in physics the only observations we must consider are position
observations, if only the positions of instrument pointers. It is a
great merit of the de Broglie-Bohm picture to force us to consider
this fact. If you make axioms, rather than definitions and
theorems, about the `measurement' of anything else, then you commit
redundancy and risk inconsistency.~\cite{Bel82}
\end{quotation}
The de Broglie-Bohm theory, Bohmian mechanics, is a physical theory for
which the concept of `measurement' does not appear at the most
fundamental level---in the very formulation of the theory. It is a
theory about concepts more fundamental than `measurement,' in terms of
which an analysis of measurement can be performed. In a previous work
\cite{DGZ92a} we have shown how probabilities for positions of
particles given by $|\psi|^2$ emerge naturally {}from an analysis of
``equilibrium'' for the deterministic dynamical system defined by
Bohmian mechanics, in much the same way that the Maxwellian velocity
distribution emerges {}from an analysis of classical thermodynamic
equilibrium. Our analysis entails that Born's statistical rule
$\rho=|\psi|^{2}$ should be regarded as a local manifestation of a
global equilibrium state of the universe, what we call \emph{quantum
equilibrium}, a concept analogous to, but quite distinct {}from,
thermodynamic equilibrium: a universe in quantum equilibrium evolves
so as to yield an appearance of randomness, with empirical
distributions in agreement with all the predictions of the quantum
formalism.
While in our earlier work we have proven, {}from the first principles
of Bohmian mechanics{}, the ``quantum equilibrium hypothesis'' that \emph{when a
system has wave function\ $\psi$, the distribution $\rho$ of its configuration
satisfies $\;\rho = |\psi|^2$}, our goal here is to show that it
follows {}from this hypothesis, not merely that Bohmian mechanics{} makes the same
predictions as does orthodox quantum theory for the results of any
experiment, but that \emph{the quantum formalism of operators as
observables emerges naturally and simply as the very expression of
the empirical import of Bohmian mechanics{}}.
More precisely, we shall show here that self-adjoint{} operators arise in
association with specific experiments: insofar as the statistics for
the values which result {}from the experiment are concerned, the
notion of self-adjoint{} operator compactly expresses and represents the
relevant data. It is the association ``$\mbox{$\mathscr{E}$}\mapsto A$'' between an
experiment \mbox{$\mathscr{E}$}{} and an operator $A$---an association that we shall
establish in Section 2 and upon which we shall elaborate in the other
sections---that is the central notion of this paper. According to this
association the notion of operator-as-observable in no way implies
that anything is measured in the experiment, and certainly not the
operator itself. We shall nonetheless speak of such experiments as
measurements, since this terminology is unfortunately standard. When
we wish to emphasize that we really mean measurement---the
ascertaining of the value of a quantity---we shall often speak of
genuine measurement.
Much of our analysis of the emergence and role of operators as
observables in Bohmian mechanics{}, including the von Neumann-type picture of
measurements at which we shall arrive, applies as well to orthodox
quantum theory{}. Indeed, the best way to understand the status of the quantum
formalism---and to better appreciate the minimality of Bohmian
mechanics---is Bohr's way: What are called quantum observables obtain
meaning \emph{only} through their association with specific
\emph{experiments}. We believe that Bohr's point has not been taken
to heart by most physicists, even those who regard themselves as
advocates of the Copenhagen interpretation.
Indeed, it would appear that the argument provided by our analysis
against taking operators too seriously as observables has even greater
force {}from an orthodox perspective: Given the initial wave function{}, at
least in Bohmian mechanics{} the outcome of the particular experiment is determined
by the initial configuration of system and apparatus, while for
orthodox quantum theory there is nothing in the initial state which
completely determines the outcome. Indeed, we find it rather
surprising that most proponents of standard quantum measurement
theory, that is the von Neumann analysis of measurement \cite{vNe55},
beginning with von Neumann, nonetheless seem to retain an uncritical
identification of operators with properties. Of course, this is
presumably because more urgent matters---the measurement problem and
the suggestion of inconsistency and incoherence that it entails---soon
force themselves upon one's attention. Moreover such difficulties
perhaps make it difficult to maintain much confidence about just what
{\it should\/} be concluded {}from the ``measurement'' analysis, while
in Bohmian mechanics, for which no such difficulties arise, what should be concluded
is rather obvious.
Moreover, a great many significant real-world experiments are simply
not at all associated with operators in the usual way. Because of
these and other difficulties, it has been proposed that we should go
beyond operators-as-observables, to {\it generalized observables\/},
described by mathematical objects (positive-operator-valued measures,
POVMs) even more abstract than operators (see, e.g., the books of
Davies \cite{Dav76}, Holevo \cite{Hol82} and Kraus \cite{Kra83}). It
may seem that we would regard this development as a step in the wrong
direction, since it supplies us with a new, much larger class of
abstract mathematical entities about which to be naive realists. We
shall, however, show that these generalized observables for Bohmian mechanics\ form
an extremely natural class of objects to associate with experiments,
and that the emergence and role of these observables are merely an
expression of quantum equilibrium\ together with the linearity of Schr\"{o}dinger's evolution. It
is therefore rather dubious that the occurrence of generalized
observables---the simplest case of which are self-adjoint{} operators---can be
regarded as suggesting any deep truths about reality or about
epistemology.
As a byproduct of our analysis of measurement we shall obtain a
criterion of measurability and use it to examine the genuine
measurability of some of the properties of a physical system. In this
regard, it should be stressed that measurability is theory-dependent:
different theories, though empirically equivalent, may differ on what
should be regarded as genuinely measurable \emph{within} each theory.
This important---though very often ignored---point was made long ago
by Einstein and has been repeatedly stressed by Bell. It is best
summarized by Einstein's remark~\cite{Hei58}: \emph{``It is the theory
which decides what we can observe.''}
We note in passing that measurability and reality are different
issues. Indeed, for Bohmian mechanics{} most of what is ``measurable'' (in a sense
that we will explain) is not real and most of what is real is not
genuinely measurable. (The main exception, the position of a particle,
which is both real and genuinely measurable, is, however, constrained
by absolute uncertainty \cite{DGZ92a}).
In focusing here on the role of operators as observables, we don't
wish to suggest that there are no other important roles played by
operators in quantum theory. In particular, in addition to the
familiar role played by operators as generators of symmetries and
time-evolutions, we would like to mention the rather different role
played by the {\em field operators} of quantum field theory: to link
abstract Hilbert-space to space-time and structures therein,
facilitating the formulation of theories describing the behavior of an
indefinite number of particles \cite{crea1,crea2}.
Finally, we should mention what should be the most interesting sense
of measurement for a physicist, namely the determination of the
coupling constants and other parameters that define our physical
theories. This has little to do with operators as observables in
quantum theory and shall not be addressed here.
\subsection*{Notations and Conventions}
$Q=(\mathbf{Q}_1,\dots,\mathbf{Q}_N)$ denotes the actual configuration of a
system of $N$ particles with positions $\mathbf{Q}_k$;
$q=(\mathbf{q}_1,\dots,\mathbf{q}_N)$ is its generic configuration. Whenever we
deal with a system-apparatus composite, $x$ ($X$) will denote the
generic (actual) configuration of the system and $y$ ($Y$) that of the
apparatus. Sometimes we shall refer to the system as the $x$-system
and the apparatus as the $y$-system. Since the apparatus should be
understood as including all systems relevant to the behavior of the
system in which we are interested, this notation and terminology is
quite compatible with that of Section \ref{sec:CEWF}, in which $y$
refers to the environment of the $x$-system.
For a system in the state $\Psi$, $\rho_{\Psi}$ will denote the
quantum equilibrium measure, $\rho_{\Psi}(dq)= |\Psi(q)|^2dq$. If
$Z=F(Q)$ then $\rho_{\Psi}^Z$ denotes the measure induced by $F$, i.e.
$\rho_{\Psi}^Z= \rho_{\Psi}\circ F^{-1}$.
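The induced measure $\rho_{\Psi}^Z=\rho_{\Psi}\circ F^{-1}$ is just a pushforward of measures. The following Python fragment is a minimal illustrative sketch of ours (for a discrete measure; not part of the paper's formalism) that makes the bookkeeping explicit:

```python
from collections import defaultdict

def pushforward(rho, F):
    """Induced measure rho^Z = rho o F^{-1} for a discrete measure.

    rho: dict mapping configurations q to probabilities.
    F:   a function of q (the statistic Z = F(Q)).
    Returns a dict mapping z = F(q) to the total probability of F^{-1}({z}).
    """
    rho_Z = defaultdict(float)
    for q, p in rho.items():
        rho_Z[F(q)] += p
    return dict(rho_Z)
```

For instance, pushing `{0: 0.1, 1: 0.2, 2: 0.3, 3: 0.4}` forward under `F(q) = q % 2` sums the probabilities over each fiber.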
\section{Bohmian Experiments}\label{sec:BE}
\setcounter{equation}{0}
\subsection{Bohmian Mechanics}\label{sec:BM}
According to Bohmian mechanics{}, the complete description or state of an
$N$-particle system is provided by its wave function{} $\Psi(q,t)$, where
$q=(\mathbf{q}_1,\dots,\mathbf{q}_N) \in \mathbb{R}^{3N}$, \emph{and} its configuration
$Q=(\mathbf{Q}_1,\dots,\mathbf{Q}_N)\in \mathbb{R}^{3N}$, where the $\mathbf{Q}_k$ are
the positions of the particles. The wave function{}, which evolves according to
Schr\"odinger's equation{},
\begin{equation}
i\hbar\pder{\Psi}{t} = H\Psi \,,
\label{eq:eqsc}
\end{equation}
choreographs the motion of the particles: these evolve according to
the equation
\begin{equation}
\oder{\mathbf{Q}_{k}}{t} = \frac{\hbar}{m_{k}} {\rm Im}\frac{
\Psi^*\boldsymbol{\nabla}_{k}\Psi}{\Psi^*\Psi}\,
(\mathbf{Q}_1,\dots,\mathbf{Q}_N)
\label{eq:velo}
\end{equation}
where $\mybold{\nabla}_{\!k}=\partial/\partial \mathbf{q}_{\!k}.$ In
equation (\ref{eq:eqsc}), $H$ is the usual nonrelativistic Schr\"{o}dinger\
Hamiltonian; for spinless particles it is of the form
\begin{equation}
H=-{\sum}_{k=1}^{N}
\frac{{\hbar}^{2}}{2m_{k}}\boldsymbol{\nabla}^{2}_{k} + V,
\label{sh}
\end{equation}
containing as parameters the masses $m_1,\dots, m_N$ of the particles
as well as the potential energy function $V$ of the system. For an
$N$-particle system of nonrelativistic particles, equations
(\ref{eq:eqsc}) and (\ref{eq:velo}) form a complete specification of
the theory (magnetic fields\footnote{When a magnetic field is present,
the gradients $\boldsymbol{\nabla}_{k}$ in the equations
(\ref{eq:eqsc}) and (\ref{eq:velo}) must be understood as the
covariant derivatives involving the vector potential
$\boldsymbol{A}$. } and spin,\footnote{See Section \ref{secSGE}.}
as well as Fermi and Bose-Einstein statistics,\footnote{For
indistinguishable particles, a careful analysis \cite{DGZ94} of the
natural configuration space, which is no longer $\mathbb{R}^{3N}$, leads to
the consideration of wave functions on $\mathbb{R}^{3N}$ that are either symmetric or
antisymmetric under permutations.} can easily be dealt with and in
fact arise in a natural manner \cite{Bel66,Boh52,Nel85,Gol87,DGZ94}).
There is no need, and indeed no room, for any further \emph{axioms},
describing either the behavior of other observables or the effects of
measurement.
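The dynamics defined by (\ref{eq:eqsc}) and (\ref{eq:velo}) can be made concrete in the simplest textbook case. In the following purely illustrative Python sketch we assume $\hbar=m=1$ and a free one-dimensional Gaussian packet of initial width $1$, for which the velocity field of (\ref{eq:velo}) reduces to $v(x,t)=xt/(1+t^2)$ and the exact Bohmian trajectory is $X(t)=X(0)\sqrt{1+t^2}$; forward Euler integration of the guiding equation recovers this:

```python
import math

def guide(x0, t_final, dt=1e-4):
    """Integrate the guiding equation dX/dt = v(X, t) by forward Euler.

    For a free 1D Gaussian packet (hbar = m = 1, initial width 1) the
    Bohmian velocity field is v(x, t) = x t / (1 + t^2), so the exact
    trajectory is X(t) = X(0) * sqrt(1 + t^2).
    """
    x, t = x0, 0.0
    steps = int(round(t_final / dt))
    for _ in range(steps):
        x += x * t / (1.0 + t * t) * dt  # Euler step along the velocity field
        t += dt
    return x
```

A particle starting at the center of the packet never moves, while one starting at $X(0)=1$ reaches $\sqrt{2}$ at $t=1$, up to the $O(dt)$ Euler error.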
\subsection{Equivariance and Quantum Equilibrium}\label{sec.e}
It is important to bear in mind that regardless of which observable
one chooses to measure, the result of the measurement can be assumed
to be given configurationally, say by some pointer orientation or by a
pattern of ink marks on a piece of paper. Then the fact that Bohmian mechanics{}
makes the same predictions as does orthodox quantum theory for the
results of any experiment---for example, a measurement of momentum or
of a spin component---\emph{provided we assume a random distribution
for the configuration of the system and apparatus at the beginning
of the experiment given by $|\Psi(q)|^2$}---is a more or less
immediate consequence of (\ref{eq:velo}). This is because of the
quantum continuity equation
\begin{displaymath}
\pder{|\Psi|^2}{t} + \mathrm{div}\, J^{\Psi} =0,
\end{displaymath}
which is a simple consequence of Schr\"odinger's equation{}. Here $ J^{\Psi}=
(\mathbf{J}^{\Psi}_1,\dots,\mathbf{J}^{\Psi}_N) $ with
\begin{displaymath}
\mathbf{J}^{\Psi}_k= \frac{\hbar}{m_k} \mathrm{Im}\,
(\Psi^*\mybold{\nabla}_{\!k}\Psi)
\end{displaymath}
the \emph{quantum probability current}. This equation becomes the
classical continuity equation
\begin{equation}
\pder{\rho}{t} + \mathrm{div}\, \rho\, v =0
\label{eq:contieq}
\end{equation}
for the system of equations $dQ/dt=v$ defined by
(\ref{eq:velo})---governing the evolution of the probability density
$\rho$ under the motion defined by the guiding equation
(\ref{eq:velo}) for the particular choice $\rho=|\Psi|^2=\Psi^*\Psi$.
In other words, if the probability density for the configuration
satisfies $\rho(q,t_0)=|\Psi(q,t_0)|^2$ at some time $t_0$, then the
density to which this is carried by the motion (\ref{eq:velo}) at any
time $t$ is also given by $\rho(q,t)=|\Psi(q,t)|^2$. This is an
extremely important property of any Bohmian system, as it expresses a
certain compatibility between the two equations of motion defining the
dynamics, which we call the
\emph{equivariance}\footnote{\label{fn:equivariance} Equivariance can
  be formulated in very general terms: consider the transformations $
  U: \Psi \to U \Psi$ and $f: Q \to f(Q) $, where $U$ is a unitary
  transformation on $L^{2}(dq)$ and $f$ is a transformation on
  configuration space that may depend on $\Psi$. We say that the map
  $\Psi\mapsto\mu_{\Psi}$ from wave functions to measures on configuration
  space is equivariant with respect to $U$ and $f$ if $ \mu_{\,U\Psi}
  = \mu_{\Psi}\circ f^{-1} $. The above argument based on the
  continuity equation (\ref{eq:contieq}) shows that $ \Psi\mapsto
  |\Psi|^{2} dq$ is equivariant with respect to $U\equiv U_{t} =
  e^{-i\,\frac{t}{\hbar}\,H}$, where $H$ is the Schr\"{o}dinger{} Hamiltonian
  (\ref{sh}) and $f\equiv f_{t}$ is the solution map of
  (\ref{eq:velo}). In this regard, it is important to observe that
  for a Hamiltonian $H$ which is not of Schr\"{o}dinger{} type we shouldn't expect
  \eq{eq:velo} to be the appropriate velocity field, that is, a field
  which generates an evolution in configuration space having $
  |\Psi|^{2} $ as equivariant density. For example, for $H=
  c\frac{\hbar}{i}\frac{\partial}{\partial q}$, where $c$ is a
  constant (for simplicity we are assuming configuration space to be
  one-dimensional), we have that $ |\Psi|^{2} $ is equivariant
  \emph{provided} the evolution of configurations is given by
  ${dQ}/{dt} = c$. In other words, for $U_{t}= e^{ct
  \frac{\partial}{\partial q}}$ the map $ \Psi\mapsto |\Psi|^{2} dq$
  is equivariant if $f_{t}: Q\to Q +ct$.} of $|\Psi|^2$.
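Equivariance of $|\Psi|^2$ can also be checked numerically in the simplest example. The sketch below is ours and purely illustrative: assuming $\hbar=m=1$ and the free one-dimensional Gaussian packet $\psi_0(x)=\pi^{-1/4}e^{-x^2/2}$ (so that $|\psi_0|^2$ is a normal law of variance $1/2$ and the Bohmian trajectories are $X(t)=X(0)\sqrt{1+t^2}$), it transports quantum-equilibrium samples along the trajectories and recovers the variance $(1+t^2)/2$ of $|\psi_t|^2$:

```python
import math
import random
import statistics

def equivariance_check(t, n=200_000, seed=0):
    """Monte Carlo check of equivariance for the free Gaussian packet.

    Sample X(0) from |psi_0|^2 (a normal law with variance 1/2),
    transport each sample along its Bohmian trajectory
    X(t) = X(0) * sqrt(1 + t^2), and return the empirical variance,
    which equivariance says must match Var_{|psi_t|^2} = (1 + t^2)/2.
    """
    rng = random.Random(seed)  # fixed seed for reproducibility
    scale = math.sqrt(1.0 + t * t)
    xs = [rng.gauss(0.0, math.sqrt(0.5)) * scale for _ in range(n)]
    return statistics.pvariance(xs)
```

At $t=0$ the empirical variance is close to $1/2$, and at $t=1$ it is close to $1$, as equivariance demands.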
The above assumption guaranteeing agreement between Bohmian mechanics{} and quantum
mechanics regarding the results of any experiment is what we call the
``quantum equilibrium hypothesis'':
\begin{equation}
\mbox{
\begin{minipage}{0.85\textwidth}
\emph{When a system has wave function\ $\Psi$ its configuration $Q$ is random
with probability distribution given by the measure
$\rho_{\Psi}(dq)=|\Psi(q)|^2dq$.}
\end{minipage}}
\label{def:qe}
\end{equation}
When this condition is satisfied we shall say that the system is in
quantum equilibrium and we shall call $\rho_{\Psi}$ the quantum
equilibrium distribution. While the meaning and justification of
(\ref{def:qe}) is a delicate matter, which we have discussed at length
elsewhere \cite{DGZ92a}, it is important to recognize that, merely as
a consequence of (\ref{eq:velo}) and (\ref{def:qe}), Bohmian mechanics{} is a
counterexample to all of the claims to the effect that a deterministic
theory cannot account for quantum randomness in the familiar
statistical mechanical way, as arising {}from averaging over
ignorance: Bohmian mechanics{} is clearly a deterministic theory, and, as we have
just explained, it does account for quantum randomness as arising
{}from averaging over ignorance given by $|\Psi(q)|^2$.
\subsection{Conditional and Effective Wave Functions}
\mbox{\boldmath $\lambda $} dabda_{\alpha}bel{sec:CEWF}
Which systems should be governed by Bohmian mechanics? An $n$-particle subsystem
of an $N$-particle system ($n<N$) need not in general be governed by
Bohmian mechanics, since no wave function\ for the subsystem need exist. This will be so even
with trivial interaction potential $V$, if the wave function\ of the system does
not properly factorize; for nontrivial $V$ the Schr\"{o}dinger\ evolution would in
any case quickly destroy such a factorization. Therefore in a
universe governed by Bohmian mechanics\ there is a priori only one wave function,
namely that of the universe, and there is a priori only one system
governed by Bohmian mechanics, namely the universe itself.
Consider then an $N$-particle nonrelativistic universe governed by
Bohmian mechanics{}, with (universal) wave function{} $\Psi$. Focus on a subsystem with
configuration variables $x$, i.e., on a splitting $q=(x,y)$ where $y$
represents the configuration of the \emph{environment} of the
\emph{$x$-system}. The actual particle configurations at time $t$ are
accordingly denoted by $X_t$ and $Y_t$, i.e., $Q_t=(X_t ,Y_t)$. Note
that $\Psi_t=\Psi_t(x,y)$. How can one assign a wave function{} to the
$x$-system? One obvious possibility---\emph{afforded by the existence
of the actual configuration}---is given by what we call the
\emph{conditional} wave function of the $x$-system
\begin{equation}
\psi_t(x) = \Psi_t (x,Y_t).
\label{eq:con}
\end{equation}
To get familiar with this notion consider a very simple
one-dimensional universe made of two particles with Hamiltonian
($\hbar=1$)
\begin{displaymath}
H =H^{(x)}+H^{(y)} +H^{(xy)} = -\frac{1}{2}\big(
\frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial
y^2}\big) + \frac{1}{2} (x-y)^2
\end{displaymath}
and initial wave function{}
\begin{displaymath}
\Psi_0 = \psi \otimes \Phi_0 \quad\hbox{with}\quad
\psi(x)=\pi^{-\frac{1}{4}} e^{-\frac{x^2}{2}}\quad\hbox{and}\quad
\Phi_0 (y)=\pi^{-\frac{1}{4}} e^{-\frac{y^2}{2}}.
\end{displaymath}
Then (\ref{eq:eqsc}) and (\ref{eq:velo}) are easily solved:
\begin{displaymath}
\Psi_t (x,y) =\pi^{-\frac{1}{2}} (1+it)^{-\frac{1}{2}}
e^{-\frac{1}{4}\big[(x- y)^2+\frac{(x+y)^2}{1+2it}\big]},
\end{displaymath}
\begin{displaymath}
X_t = a(t)X + b(t)Y \quad\hbox{and}\quad Y_t = b(t)X +a(t) Y ,
\end{displaymath}
where $a(t)= \frac{1}{2}[ (1+t^2)^{\frac{1}{2}}+1] $, $b(t)=
\frac{1}{2}[ (1+t^2)^{\frac{1}{2}}-1] $, and $X$ and $Y$ are the
initial positions of the two particles. Focus now on one of the two
particles (the $x$-system) and regard the other one as its environment
(the $y$-system). The conditional wave function{} of the $x$-system
\begin{displaymath}
\psi_t(x) = \pi^{-\frac{1}{2}} (1+it)^{-\frac{1}{2}}
e^{-\frac{1}{4}\big[(x- Y_{t})^2+\frac{(x+Y_{t})^2}{1+2it}\big]},
\end{displaymath}
depends, through $Y_t$, on \emph{both} the initial condition $Y$ for
the environment \emph{and} the initial condition $X$ for the particle.
As these are random, so is the evolution of $\psi_t$, with probability
law determined by $|\Psi_0|^2$. In particular, $\psi_t$ does not
satisfy Schr\"{o}dinger's equation for any $H^{(x)}$.
We remark that even when the $x$-system is dynamically decoupled
{}from its environment, its conditional wave function\ will not in general
evolve according to Schr\"{o}dinger's equation. Thus the conditional wave function\ lacks
the {\it dynamical} implications {}from which the wave function\ of a system
derives much of its physical significance. These are, however,
captured by the notion of effective wave function:
\begin{equation}
\mbox{
\begin{minipage}{0.85\textwidth}\openup 1.2\jot
\setlength{\baselineskip}{12pt}\emph{Suppose that $\;\Psi(x,y) =
\psi(x)\Phi(y) + \Psi^{\perp}(x,y)\, ,\;$ where $\Phi$ and
$\Psi^{\perp}$ have macroscopically disjoint $y$-supports. If $\;
Y \in \hbox{\rm supp}\;\Phi \;$ we say that $\psi$ is the
\emph{effective wave function} of the $x$-system.}
\end{minipage}}
\label{def:ewf}
\end{equation}
Of course, $\psi$ is also the conditional wave function{} since nonvanishing scalar
multiples of wave functions are naturally
identified.\footnote{\label{foosti}Note that in Bohmian mechanics\ the wave function\ is
  naturally a projective object since wave functions differing by a
  multiplicative constant---possibly time-dependent---are associated
  with the same vector field, and thus generate the same dynamics. }
\subsection{Decoherence}
\label{sec:DD}
One might wonder why systems possess an effective wave function at
all. In fact, in general they don't! For example the $x$-system will
not have an effective wave function{} when, for instance, it belongs to a larger
microscopic system whose effective wave function{} doesn't factorize in the appropriate
way. However, the \emph{larger} the environment of the $x$-system,
the \emph{greater} is the potential for the existence of an effective wave function{} for
this system, owing in effect to the abundance of ``measurement-like''
interaction with a larger environment.\footnote{To understand how this
comes about one may suppose that initially the $y$-supports of
  $\Phi$ and $\Psi^{\perp}$ (cf.\ the definition above of effective
wave function{}) are just ``sufficiently'' (but not macroscopically) disjoint.
Then, due to the interaction with the environment, the amount of
$y$-disjointness will tend to increase dramatically as time goes on,
with, as in a chain reaction, more and more degrees of freedom
participating in this disjointness. When the effect of this
``decoherence'' is taken into account, one finds that even a small
amount of $y$-disjointness will often tend to become ``sufficient,''
and quickly ``more than sufficient,'' and finally macroscopic.}
We remark that it is the relative stability of the macroscopic
disjointness employed in the definition of the effective wave function, arising {}from
what are nowadays often called mechanisms of decoherence---the
destruction of the coherent spreading of the wave function, the
effectively irreversible flow of ``phase information'' into the
(macroscopic) environment---which accounts for the fact that the
effective wave function{} of a system obeys Schr\"{o}dinger's equation for the system alone whenever
this system is isolated. One of the best descriptions of the
mechanisms of decoherence, though not the word itself, can be found in
Bohm's 1952 ``hidden variables'' paper \cite{Boh52}.
Decoherence plays a crucial role in the very formulation of the
various interpretations of quantum theory\ loosely called decoherence
theories (Griffiths \cite{Gri84}, Omn\`es \cite{Omn88}, Leggett
\cite{Leg80}, Zurek \cite{Zur82}, Joos and Zeh \cite{JZ85}, Gell-Mann
and Hartle \cite{GMH90}). In this regard we wish to emphasize,
however, as did Bell in his article ``Against Measurement''
\cite{Bel90}, that decoherence in no way comes to grips with the
measurement problem itself, being arguably a {\it necessary}, but
certainly not a sufficient, condition for its complete resolution. In
contrast, for Bohmian mechanics decoherence is purely
phenomenological---it plays no role whatsoever in the formulation (or
interpretation) of the theory itself\footnote{However, decoherence
plays an important role in the emergence of Newtonian mechanics as
the description of the macroscopic regime for Bohmian mechanics,
supporting a picture of a macroscopic Bohmian particle, in the
classical regime, guided by a macroscopically well-localized wave
packet with a macroscopically sharp momentum moving along a
classical trajectory. It may, indeed, seem somewhat ironic that the
gross features of our world should appear classical because of
interaction with the environment and the resulting wave function
entanglement, the characteristic quantum innovation.}---and the very
notion of effective wave function\ accounts at once for the reduction of the wave packet
in quantum measurement.
According to orthodox quantum measurement theory \cite{vNe55, Boh51,
Wig63, Wig83}, after a measurement, or preparation, has been
performed on a quantum system, the $x$-system, the wave function\ for the
composite formed by system and apparatus is of the form
\begin{equation}
\sum_\alpha{\psi_\alpha\otimes\Phi_\alpha}
\label{eq:msum}
\end{equation}
with the different $\Phi_\alpha$ supported by the macroscopically distinct
(sets of) configurations corresponding to the various possible
outcomes of the measurement, e.g., given by apparatus pointer
orientations. Of course, for Bohmian mechanics\ the terms of \eq{eq:msum} are not
all on the same footing: one of them, and only one, is selected, or
more precisely supported, by the outcome---corresponding, say, to
$\alpha_0$---which {\it actually\/} occurs. To emphasize this we may write
(\ref{eq:msum}) in the form
$$
\psi\otimes\Phi+\Psi^\perp
$$
where $\psi=\psi_{\alpha_0}$, $\Phi=\Phi_{\alpha_0}$, and
$\Psi^\perp=\sum_{\alpha\neq\alpha_0}{\psi_\alpha\otimes\Phi_\alpha}$. By comparison
with (\ref{def:ewf}) it follows that after the measurement the
$x$-system has effective wave function\ $\psi_{\alpha_0}$. This is how {\it collapse} (or
{\it reduction}) of the effective wave function\ to the one associated with the outcome
$\alpha_0$ arises in Bohmian mechanics.
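The selection of the collapsed branch is a trivial piece of bookkeeping, which the following schematic Python sketch of ours makes explicit (the branch wave functions $\psi_\alpha$ are represented by opaque labels and the macroscopically disjoint $y$-supports of the $\Phi_\alpha$ by predicates; none of this is part of the theory's formalism):

```python
def effective_wavefunction(branches, Y):
    """Select the effective wave function after a measurement.

    branches: list of (psi_alpha, support_alpha) pairs, where
    support_alpha is a predicate deciding whether an apparatus
    configuration lies in the support of Phi_alpha.  The branch whose
    support contains the actual configuration Y supplies the
    (collapsed) effective wave function; the remaining branches make
    up Psi^perp and are dynamically irrelevant for the system.
    """
    selected = [psi for psi, supp in branches if supp(Y)]
    # Macroscopic disjointness: exactly one support contains Y.
    assert len(selected) == 1, "supports must be disjoint and cover Y"
    return selected[0]
```

For a pointer pointing "up" ($Y>0$) versus "down" ($Y<0$), the actual $Y$ picks out exactly one branch.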
While in orthodox quantum theory the ``collapse'' is merely
superimposed upon the unitary evolution---without a precise
specification of the circumstances under which it may legitimately be
invoked---we have now, in Bohmian mechanics{}, that the evolution of the effective wave function{} is
actually given by a stochastic process, which consistently embodies
\emph{both} unitarity \emph{and} collapse as appropriate. In
particular, the effective wave function{} of a subsystem evolves according to Schr\"odinger's equation{} when
this system is suitably isolated. Otherwise it ``pops in and out'' of
existence in a random fashion, in a way determined by the continuous
(but still random) evolution of the conditional wave function{} $\psi_t$.
Moreover, it is the critical dependence on the state of the
environment and the initial conditions which is responsible for the
random behavior of the (conditional or effective) wave function{} of the system.
\subsection{Wave Function and State}
\label{sec:WFS}
As an important consequence of (\ref{eq:con}) we have, for the
conditional probability distribution of the configuration $X_{t}$ of a
system at time $t$, given the configuration $Y_{t}$ of its
environment, the \emph{fundamental conditional probability formula}
\cite{DGZ92a}:
\begin{equation}
\mbox{Prob}_{\Psi_{0}} \bigl(X_t \in dx \bigm| Y_t\bigr)=|\psi_t(x)|^2\,dx,
\label{eq:fpf}
\end{equation}
where
\begin{displaymath}
\mbox{Prob}_{\Psi_{0}}(dQ)={|\Psi_0(Q)|}^2\,dQ,
\end{displaymath}
with $Q=(X,Y)$ the configuration of the universe at the (initial) time
$t=0$. Formula (\ref{eq:fpf}) is the cornerstone of our analysis
\cite{DGZ92a} on the origin of randomness in Bohmian mechanics{}. Since the right
hand side of (\ref{eq:fpf}) involves only the effective wave function{}, it
follows that \emph{the wave function{} $\psi_t$ of a subsystem represents
maximal information about its configuration $X_t$.} In other words,
given the fact that its wave function{} is $\psi_t$, it is in principle
impossible to know more about the configuration of a system than what
is expressed by the right hand side of (\ref{eq:fpf}), even when the
detailed configuration $Y_t$ of its environment is taken into account
\cite{DGZ92a}:
\begin{equation}
\mbox{Prob}_{\Psi_{0}}\bigl(X_t \in dx \bigm| Y_t\bigr)=
\mbox{Prob}_{\Psi_{0}}\bigl(X_t \in dx \bigm|
\psi_t\bigr)=|\psi_t(x)|^2\,dx.
\label{eq:fpfp}
\end{equation}
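In a discrete caricature (an illustrative sketch of ours, not part of the analysis), the fundamental conditional probability formula (\ref{eq:fpf}) is ordinary conditioning of the measure $|\Psi|^2$ on the environment variable:

```python
def conditional_distribution(Psi, Y):
    """Discrete version of the fundamental conditional probability formula.

    Psi: dict mapping configurations (x, y) to complex amplitudes.
    Conditioning the quantum equilibrium measure |Psi(x, y)|^2 on the
    environment value Y yields the normalized |psi(x)|^2 with
    psi(x) = Psi(x, Y), i.e., the conditional wave function squared.
    """
    weights = {x: abs(a) ** 2 for (x, y), a in Psi.items() if y == Y}
    Z = sum(weights.values())  # normalization over the fiber y = Y
    return {x: w / Z for x, w in weights.items()}
```

For example, with equal amplitudes $1$ and $i$ on the fiber $y=0$, the conditional distribution of $x$ is uniform, regardless of the amplitudes on other fibers.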
The fact that the knowledge of the configuration of a system must be
mediated by its wave function{} may partially account for the possibility of
identifying the \emph{state} of a system---its complete
description---with its wave function{} without encountering any \emph{practical}
difficulties. This is primarily because of the wave function{}'s statistical
role, but its dynamical role is also relevant here. Thus it is
natural, even in Bohmian mechanics{}, to regard the wave function{} as the ``\emph{state}'' of
the system. This attitude is supported by the asymmetric roles of
configuration and wave function{}: while the \emph{fact} that the wave function{} is
$\psi$ entails that the configuration is distributed according to
$|\psi|^2$, the \emph{fact} that the configuration is $X$ has no
implications whatsoever for the wave function{}.\footnote{The ``fact'' (that the
configuration is $X$) shouldn't be confused with the ``knowledge of
the fact'': the latter does have such implications \cite{DGZ92a}!}
Indeed, such an asymmetry is grounded in the dynamical laws \emph{and}
in the initial conditions. $\psi$ is always assumed to be fixed,
being usually under experimental control, while $X$ is always taken as
random, according to the quantum equilibrium{} distribution.
When all is said and done, it is important to bear in mind that
regarding $\psi$ as the ``state'' is only of practical value, and
shouldn't obscure the more important fact that the most detailed
description---\emph{the complete state description}---is given (in
Bohmian mechanics) by the wave function{} \emph{and} the configuration.
\subsection{The Stern-Gerlach Experiment}
\label{secSGE}
Information about a system does not spontaneously pop into our heads
or into our (other) ``measuring'' instruments; rather, it is generated
by an \emph{experiment}: some physical interaction between the system
of interest and these instruments, which together (if there is more
than one) comprise the \emph{apparatus} for the experiment. Moreover,
this interaction is defined by, and must be analyzed in terms of, the
physical theory governing the behavior of the composite formed by
system and apparatus. If the apparatus is well designed, the
experiment should somehow convey significant information about the
system. However, we cannot hope to understand the significance of
this ``information''---for example, the nature of what it is, if
anything, that has been measured---without some such theoretical
analysis.
As an illustration of such an analysis we shall discuss the
Stern-Gerlach experiment {}from the standpoint of Bohmian mechanics. But first we
must explain how {\it spin} is incorporated into Bohmian mechanics: If $\Psi$ is
spinor-valued, the bilinear forms appearing in the numerator and
denominator of (\ref{eq:velo}) should be understood as
spinor-inner-products; e.g., for a single spin $\frac{1}{2}$ particle
the two-component wave function{}
$$
\Psi \equiv \left(\begin{array}{c} \Psi_+ ({\bf x})\\ \Psi_- ({\bf x})
\end{array} \right)
$$
generates the velocity
\begin{equation}
\label{vspin}
{\bf v}^{\Psi} = \frac{\hbar}{m}{\rm Im}
\frac{(\Psi,\boldsymbol{\nabla}\Psi)} {(\Psi, \Psi)}
\end{equation}
where $(\,\cdot\,,\,\cdot\,)$ denotes the scalar product in the spin
space $\mathbb{C}^{2}$. The wave function\ evolves via (\ref{eq:eqsc}), where now the
Hamiltonian $H$ contains the Pauli term, for a single particle
proportional to $\mathbf{B}\cdot\boldsymbol{ \sigma}$, that represents
the coupling between the ``spin'' and an external magnetic field
$\mathbf{B}$; here $\boldsymbol{\sigma} =(\sigma_x,\sigma_y,\sigma_z)$
are the Pauli spin matrices which can be taken to be
$$
\sigma_x \,=\, \left(\begin{array}{cc} 0 & 1 \\ 1 & 0\end{array}
\right) \quad \sigma_y\,=\, \left( \begin{array}{cc} 0 & -i \\ i& 0
\end{array} \right) \quad \sigma_z\,=\, \left( \begin{array}{cc} 1 &
0\\ 0& -1 \end{array} \right).
$$
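The spinor velocity formula (\ref{vspin}) can be probed numerically. The following Python sketch is ours and purely illustrative: we take $\hbar=m=1$, work in one dimension, and let a central finite difference stand in for $\boldsymbol{\nabla}$; for a plane-wave spinor $(e^{ikx},0)$ it returns $v\approx k$, as it must:

```python
import cmath

def spin_velocity(psi_plus, psi_minus, x, h=1e-3):
    """Velocity field of eq. (vspin) for a two-component spinor, hbar = m = 1.

    v = Im[ (Psi, dPsi/dx) / (Psi, Psi) ], with the spinor inner product
    (Psi, Phi) = Psi_+^* Phi_+ + Psi_-^* Phi_-.  The spatial derivative
    is approximated by a central finite difference of step h.
    """
    def d(f):
        # central difference approximation to df/dx at x
        return (f(x + h) - f(x - h)) / (2.0 * h)
    num = (psi_plus(x).conjugate() * d(psi_plus)
           + psi_minus(x).conjugate() * d(psi_minus))
    den = abs(psi_plus(x)) ** 2 + abs(psi_minus(x)) ** 2
    return (num / den).imag
```

For a general spinor the two components contribute jointly, which is exactly the spinor-inner-product prescription stated above.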
Let's now focus on a Stern-Gerlach ``measurement of the operator
$\sigma_z$'': An inhomogeneous magnetic field $\mathbf{B}$ is
established in a neighborhood of the origin, by means of a suitable
arrangement of magnets. This magnetic field is oriented in the
positive $z$-direction, and is increasing in this direction. We also
assume that the arrangement is invariant under translations in the
$x$-direction, i.e., that the geometry does not depend upon the
$x$-coordinate. A particle with a fairly definite momentum is
directed towards the origin along the negative $y$-axis. For
simplicity, we shall consider a neutral spin-$1/2$ particle whose
wave function{} $\Psi$ evolves according to the Hamiltonian
\begin{equation}
\label{sgh}
H = -\frac{\hbar^{2}}{2m} \boldsymbol{\nabla}^{2} -
\mu\boldsymbol{\sigma}{\bf \cdot B},
\end{equation}
where $\mu$ is a positive constant (if one wishes, one might think of
a fictitious electron not feeling the Lorentz force).
The inhomogeneous field generates a vertical deflection of $\Psi$ away
{}from the $y$-axis, which for Bohmian mechanics\ leads to a similar deflection of
the particle trajectory according to the velocity field defined by
\eq{vspin}: if its wave function\ $\Psi$ were initially an eigenstate of
$\sigma_z$ of eigenvalue $1$ or $-1$, i.e., if it were of the form
$$
\Psi^{(+)}=\psi^{(+)}\otimes\Phi_0 ({\bf x})\qquad\text{or}\qquad
\Psi^{(-)}=\psi^{(-)}\otimes\Phi_0 ({\bf x})
$$
where
\begin{equation}
\psi^{(+)} \equiv \left(\begin{array}{c} 1\\ 0
\end{array} \right) \quad \mbox{and} \quad
\psi^{(-)} \equiv \left(\begin{array}{c} 0\\ 1
\end{array} \right)
\label{eq:spinbasis}
\end{equation}
then the deflection would be in the positive (negative) $z$-direction
(by a rather definite angle). This limiting behavior is readily seen
for $\Phi_0 = \Phi_0(z)\phi(x,y)$ and ${\bf B}=(0,0,B)$, so that the
$z$-motion is completely decoupled from the motion along the other
two directions, and by making the standard (albeit unphysical)
assumption \cite{Boe79}, \cite{Boh51}
\begin{equation}
\label{consg}
\frac{\partial B}{\partial z} = \mathrm{const} > 0\,,
\end{equation}
whence
$$
\mu\boldsymbol{\sigma}\cdot\mathbf{B}=( b+ az) \sigma_{z},
$$
where $a>0$ and $b$ are constants. Then
$$
\Psi^{(+)}_{t} = \left(\begin{array}{c}
\Phi^{(+)}_{t}(z)\phi_{t}(x,y)\\ 0
\end{array} \right) \quad \mbox{and} \quad
\Psi^{(-)}_{t} = \left(\begin{array}{c} 0\\
\Phi^{(-)}_{t}(z)\phi_{t}(x,y)
\end{array} \right)
$$
where $\Phi^{(\pm)}_{t}$ are the solutions of
\begin{equation}
i\hbar\frac{\partial {\Phi_t}^{(\pm)}}{\partial t}=
-\frac{\hbar^2}{2m} \frac{\partial^2 {\Phi_t}^{(\pm)} }{\partial
z^2}\, \mp \, (b + a\,z){\Phi_t}^{(\pm)},
\label{eq:SGequ}
\end{equation}
for initial conditions ${\Phi_0}^{(\pm)}=\Phi_0(z)$. Since $z$
generates translations of the $z$-component of the momentum, the
behavior described above follows easily. More explicitly, the limiting
behavior for $t\to\infty$ readily follows by a stationary phase
argument on the explicit solution\footnote{Eq.~(\ref{eq:SGequ}) is
readily solved:
$$
\Phi^{(\pm)}_{t}(z) = \int G^{(\pm)}(z,z';t) \Phi_{0}(z')\,
dz'\,,
\label{eq:solpauli}
$$
where (by the standard rules for the Green's function of linear and
quadratic Hamiltonians)
$$
G^{(\pm)}(z,z';t) = \sqrt{\frac{m}{ 2 \pi i \hbar t}}\,
e^{\frac{i}{\hbar} \left( \frac{m}{2t}\left( z-z' -
(\pm)\frac{at^{2}}{m}\right)^{2} + \frac{(\pm) at}{2} \left(
z-z' - (\pm)\frac{at^{2}}{m}\right) -(\pm) (az'+b) t +
\frac{at^{3}}{3m} \right)}
\label{eq:proppauli}
$$} of (\ref{eq:SGequ}). More simply, we may consider the initial
Gaussian state
$$
\Phi_{0}= \frac {e^{ -\frac {z^{2}}{4d^{2}} }}{ (2 d^{2}\pi)^{\frac{1}{4}} }
$$
for which $|\Phi_{t}^{(\pm)}(z)|^2$, the
probability density of the particle being at a point of $z$-coordinate
$z$, is, by the linearity of the interaction in (\ref{eq:SGequ}), a
Gaussian with mean and mean square deviation given respectively by
\begin{equation}
\bar{z}(t) =(\pm) \frac {a\,t^{2}}{2 m}\quad\qquad d(t) =d \sqrt{1+
\frac{\hbar^{2} t^{2} }{2 m^{2}d^{4}} }\,.
\label{eq:mmsd}
\end{equation}
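The limiting behavior encoded in \eq{eq:mmsd} is easy to examine numerically. The following Python sketch evaluates $\bar{z}(t)$ and $d(t)$; the constants $a$, $m$, $\hbar$, $d$ are illustrative placeholders in arbitrary units, not values taken from the text:

```python
import math

# Numerical sketch of the mean and spread of the deflected Gaussian packet,
# per the formulas above. All constants below are illustrative placeholders.
def mean_and_spread(t, sign=+1, a=0.01, m=1.0, hbar=1.0, d=1.0):
    zbar = sign * a * t**2 / (2.0 * m)           # zbar(t) = (+/-) a t^2 / (2m)
    spread = d * math.sqrt(1.0 + hbar**2 * t**2 / (2.0 * m**2 * d**4))
    return zbar, spread
```

The mean deflection grows quadratically in $t$ while the spread grows only linearly for large $t$, so the two beams eventually separate cleanly.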
For a more general initial wave function,
\begin{equation}
\label{sgpsi}
\Psi = \psi \otimes \Phi_0\qquad \psi=
\alpha
\psi^{(+)}\,+\,
\beta\psi^{(-)}\,
\end{equation}
passage through the magnetic field will, by linearity, split the wave function
into an upward-deflected piece (proportional to $\psi^{(+)}$) and a
downward-deflected piece (proportional to $\psi^{(-)}$), with
corresponding deflections of the trajectories. The outcome is
registered by detectors placed in the paths of these two possible
``beams.'' Thus of the four kinematically possible outcomes
(``pointer orientations'') the occurrence of no detection and of
simultaneous detection can be ignored as highly unlikely, and the two
relevant outcomes correspond to registration by either the upper or
the lower detector. Accordingly, for a measurement of $\sigma_z$ the
experiment is equipped with a ``calibration'' (i.e., an assignment of
numerical values to the outcomes of the experiment) $\lambda_{+}=1$
for upper detection and $\lambda_{-}=-1$ for lower detection (while
for a measurement of the $z$-component of the spin angular momentum
itself the calibration is given by $\frac 12\hbar\lambda_{\pm}$).
Note that one can completely understand what's going on in this
Stern-Gerlach experiment without invoking any putative property of the
electron such as its actual $z$-component of spin that is supposed to
be revealed in the experiment. For a general initial wave function there is no
such property. What is more, the transparency of the analysis of this
experiment makes it clear that there is nothing the least bit
remarkable (or for that matter ``nonclassical'') about the {\it
nonexistence\/} of this property. But the failure to pay attention
to the role of operators as observables, i.e., to precisely what we
should mean when we speak of measuring operator-observables, helps
create a false impression of quantum peculiarity.
\subsection{A Remark on the Reality of Spin in Bohmian Mechanics}
Bell has said that (for Bohmian mechanics) spin is not real. Perhaps he should
better have said: {\it ``Even\/} spin is not real,'' not merely
because, of all observables, it is spin which is generally regarded as
quantum mechanically most paradigmatic, but also because spin is
treated in orthodox quantum theory very much like position, as a ``degree of
freedom''---a discrete index which supplements the continuous degrees
of freedom corresponding to position---in the wave function.
Be that as it may, his basic meaning is, we believe, this: Unlike
position, spin is not {\it primitive\/}, i.e., no {\it actual\/}
discrete degrees of freedom, analogous to the {\it actual\/} positions
of the particles, are added to the state description in order to deal
with ``particles with spin.'' Roughly speaking, spin is {\it
merely\/} in the wave function. At the same time, as explained in Section
\ref{secSGE}, ``spin measurements'' are completely clear, and merely
reflect the way spinor wave functions are incorporated into a description of
the motion of configurations.
In this regard, it might be objected that while spin may not be
primitive, so that the result of our ``spin measurement'' will not
reflect any initial primitive property of the system, nonetheless this
result {\it is\/} determined by the initial configuration of the
system, i.e., by the position of our electron, together with its
initial wave function, and as such---as a function $X_{\sigma_z}(\mathbf{q},
\psi)$ of the state of the system---it is some property of the system
and in particular it is surely real. We shall address this issue in
Sections \ref{sec:context} and \ref{sec:agcontext}.
\subsection{The Framework of Discrete Experiments}
\label{sec:FDE}
We shall now consider a generic experiment. Whatever its
significance, the information conveyed by the experiment is registered
in the apparatus as an \emph{output}, represented, say, by the
orientation of a pointer. Moreover, when we speak of a generic
experiment, we have in mind a fairly definite initial state of the
apparatus, the ready state $\Phi_0=\Phi_{0}(y)$, one for which the
apparatus should function as intended, and in particular one in which
the \emph{pointer} has some ``null'' orientation, as well as a
definite initial state of the system $\psi=\psi(x)$ on which the
experiment is performed. Under these conditions it turns out
\cite{DGZ92a} that the initial $t=0$ wave function $\Psi_0=\Psi_{0}(q)$ of the
composite system formed by system and apparatus, with generic
configuration $q=(x,y)$, has a product form, i.e.,
$$
\Psi_0 = \psi \otimes \Phi_0 .$$
Such a product form is an expression
of the \emph{independence} of system and apparatus immediately before
the experiment begins.\footnote{It might be argued that it is somewhat
unrealistic to assume a sharp preparation of $\psi$, as well as the
possibility of resetting the apparatus always in the same initial
state $\Phi_0$. We shall address this issue in Section 6.}
For Bohmian mechanics we should expect in general, as a consequence of the quantum equilibrium
hypothesis, that the outcome of the experiment---the final pointer
orientation---will be random: Even if the system and apparatus
initially have definite, known wave functions, so that the outcome is
determined by the initial configuration of system and apparatus, this
configuration is random, since the composite system is in quantum equilibrium, and the
distribution of the final configuration is given by
$|\Psi_{T}(x,y)|^2$, where $\Psi_{T}$ is the wave function of the
system-apparatus composite at the time $t=T$ when the experiment ends,
and $x$, respectively $y$, is the generic system, respectively
apparatus, configuration.
Suppose now that $\Psi_{T}$ has the form (\ref{eq:msum}), which
roughly corresponds to assuming that the experiment admits, i.e., that
the apparatus is so designed that there is, only a finite (or
countable) set of possible outcomes, given, say, by the different
possible macroscopically distinct pointer orientations of the
apparatus and corresponding to a partition of the apparatus
configuration space into macroscopically disjoint regions $G_{\alpha}$,
$\alpha=1,2, \ldots$.\footnote{Note that to assume there are only
finitely, or countably, many outcomes is really no assumption at
all, since the outcome should ultimately be converted to digital
form, whatever its initial representation may be.} We arrive in this
way at the notion of \emph{discrete experiment}, for which the time
evolution arising from the interaction of the system and apparatus
from $t=0$ to $t=T$ is given by the unitary map
\begin{equation}
U : \;{\mathcal{H}}\otimes\Phi_0 \to \bigoplus_{\alpha} {\mathcal{H}} \otimes\Phi_{\alpha} \, , \quad
\psi \otimes \Phi_0 \mapsto \Psi_{T} =\sum_{\alpha } \psi_{\alpha} \otimes \Phi_{\alpha}
\label{eq:ormfin}
\end{equation}
where $\mathcal{H}$ is the system Hilbert space of square-integrable
wave functions with the usual inner product
$$
\langle\psi,\phi\rangle=\int{{\psi}^*( x)\,\phi( x)\,dx},
$$
and the $\Phi_{\alpha}$ are a \emph{fixed} set of (normalized) apparatus
states supported by the macroscopically distinct regions $G_{\alpha}$ of
apparatus configurations.
The experiment usually comes equipped with an assignment of numerical
values $\lambda_{\alpha}$ (or a vector of such values) to the various outcomes
$\alpha$. This assignment is defined by a ``calibration'' function $F$ on
the apparatus configuration space assuming on each region $G_{\alpha}$ the
constant value $\lambda_{\alpha}$. If for simplicity we assume that these values
are in one-to-one correspondence with the outcomes\footnote{We shall
consider the more general case later on in Subsection
\ref{sec:StrM}.} then
\begin{equation}
p_{\alpha} = \int_{F^{-1}(\lambda_{\alpha})} |\Psi_{T}(x,y)|^{2}
dx\,dy=\int_{G_{\alpha}} |\Psi_{T}(x,y)|^{2} dx\,dy
\label{eq:page}
\end{equation}
is the probability of finding $\lambda_{\alpha}$, for initial system wave function $\psi$.
Since $\Phi_{\alpha'}(y)=0$ for $y\in G_{\alpha}$ unless $\alpha=\alpha'$, we obtain
\begin{equation}
p_{\alpha}=\int dx\int_{G_{\alpha}} \Big|\sum_{\alpha'} \psi_{\alpha'} (x)
\Phi_{\alpha'} (y)\Big|^{2}\,dy = \int | \psi_{\alpha} (x) |^2 dx = \|\psi_{\alpha} \|^2 .
\label{eq:pr}
\end{equation}
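The computation in \eq{eq:pr} can be checked on a toy discretization in which the apparatus states $\Phi_\alpha$ have disjoint supports $G_\alpha$; integrating $|\Psi_T|^2$ over $G_\alpha$ then picks out $\|\psi_\alpha\|^2$. A minimal Python sketch (grids and amplitudes are illustrative, not taken from the text):

```python
# Toy discretization: x-grid of 3 points, two apparatus regions G_1, G_2.
# All amplitudes below are illustrative.
x_dim = 3
psi = {1: [0.6 + 0j, 0.0 + 0j, 0.0 + 0j],      # psi_1(x)
       2: [0.0 + 0j, 0.8 + 0j, 0.0 + 0j]}      # psi_2(x)
Phi = {1: [1.0, 0.0], 2: [0.0, 1.0]}           # Phi_alpha(y), disjoint supports

def p(alpha):
    # p_alpha = sum over G_alpha (the y with Phi_alpha(y) != 0) of |Psi_T(x,y)|^2
    total = 0.0
    for iy in range(len(Phi[alpha])):
        if Phi[alpha][iy] == 0.0:
            continue
        for ix in range(x_dim):
            amp = sum(psi[b][ix] * Phi[b][iy] for b in psi)  # Psi_T(x,y)
            total += abs(amp) ** 2
    return total

def norm_sq(alpha):
    # ||psi_alpha||^2
    return sum(abs(c) ** 2 for c in psi[alpha])
```

Because the supports are disjoint, the cross terms in $|\sum_{\alpha'}\psi_{\alpha'}\Phi_{\alpha'}|^2$ vanish on each $G_\alpha$, which is exactly why $p_\alpha$ collapses to $\|\psi_\alpha\|^2$.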
Note that when the result $\lambda_{\alpha}$ is obtained, the effective wave
function of the system undergoes the transformation $\psi \to
\psi_\alpha.$
A simple example of a discrete experiment is provided by the map
\begin{equation}
U: \psi\otimes\Phi_0 \mapsto \sum_{\alpha } c_\alpha
\psi\otimes\Phi_{\alpha},
\label{eq:extrva}
\end{equation}
where the $c_{\alpha}$ are complex numbers such that $\sum_{\alpha }
|c_{\alpha}|^{2}=1$; then $ p_{\alpha}=|c_{\alpha}|^{2}$. Note that the
experiment defined by \eq{eq:extrva} resembles a coin-flip more than a
measurement since the outcome $\alpha$ occurs with a probability
independent of $\psi$.
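In this example the outcome statistics are fixed by the $c_\alpha$ alone. A minimal Python sketch (the coefficients are illustrative):

```python
import random

# Outcome statistics for the "coin-flip" experiment above:
# p_alpha = |c_alpha|^2, independent of the system wave function psi.
# The coefficients are illustrative.
c = [0.6, 0.8j]                        # sum_alpha |c_alpha|^2 = 1
p = [abs(z) ** 2 for z in c]           # outcome probabilities

# Sampling an outcome is literally a biased coin flip.
random.seed(0)
outcome = random.choices(range(len(p)), weights=p, k=1)[0]
```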
\subsection{Reproducibility and its Consequences}
\label{sec:RC}
Though for a generic discrete experiment there is no reason to expect
the sort of ``measurement-like'' behavior typical of familiar quantum
measurements, there are, however, special experiments whose outcomes
are somewhat less random than we might have thought possible.
According to Schr\"{o}dinger{} \cite{Sch35}:
\begin{quotation}\setlength{\baselineskip}{12pt}\noindent
The systematically arranged interaction of two systems (measuring
object and measuring instrument) is called a measurement on the
first system, if a directly-sensible variable feature of the second
(pointer position) is always reproduced within certain error limits
when the process is immediately repeated (on the same object, which
in the mean time must not be exposed to additional influences).
\end{quotation}
To implement the notion of ``measurement-like'' experiment considered
by Schr\"{o}dinger{}, we first make some preliminary observations concerning the
unitary map (\ref{eq:ormfin}). Let $P_{[\Phi_{\alpha}]}$ be the orthogonal
projection in the Hilbert space $\bigoplus_{\alpha} \mathcal{H}\otimes\Phi_{\alpha}$ onto the
subspace ${\mathcal{H}}\otimes\Phi_{\alpha}$ and let $\widetilde{\mathcal{H}_{\alpha}}$ be the
subspaces of $\mathcal{H}$ defined by
\begin{equation}
P_{[\Phi_{\alpha}]}\left[ U({\mathcal{H}}\otimes\Phi_0) \right]
=\widetilde{\mathcal{H}_{\alpha}}\otimes\Phi_{\alpha}\,.
\label{eq:htilde}
\end{equation}
(Since the vectors in $\widetilde{\mathcal{H}}_{\alpha}$ arise from
projecting $\Psi_{T}=\sum_{\alpha } \psi_{\alpha} \otimes \Phi_{\alpha}$ onto its $\alpha$-component,
$\widetilde{\mathcal{H}_{\alpha}}$ is the space of the ``collapsed''
wave functions associated with the occurrence of the outcome $\alpha$.) Then
\begin{equation}
U({\mathcal{H}}\otimes\Phi_0) \subseteq
\bigoplus_{\alpha}\widetilde{\mathcal{H}_{\alpha}}\otimes\Phi_{\alpha}.
\label{eq:rep2}
\end{equation}
Note, however, that it need not be the case that
$U({\mathcal{H}}\otimes\Phi_0)=\bigoplus_{\alpha}\widetilde{\mathcal{H}_{\alpha}}\otimes\Phi_{\alpha}$, and that
the spaces $\widetilde{\mathcal{H}_{\alpha}}$ need be neither orthogonal
nor distinct; e.g., for (\ref{eq:extrva})
$\widetilde{\mathcal{H}_{\alpha}}=\mathcal{H}$ and $U({\mathcal{H}}\otimes\Phi_0)=\mathcal{H}\otimes\sum_\alpha
c_\alpha\Phi_{\alpha}\neq\bigoplus_{\alpha} {\mathcal{H}}\otimes\Phi_{\alpha}$.\footnote{Note that if $\mathcal{H}$ has finite
dimension $n$ and the number of outcomes $\alpha$ is $m$, then $\mbox{dim
}[U({\mathcal{H}}\otimes\Phi_0)]= n$, while $\mbox{dim }[\bigoplus_{\alpha} {\mathcal{H}}\otimes\Phi_{\alpha}] =
n\cdot m$.}
A ``measurement-like'' experiment is one which is reproducible in the
sense that it will yield the same outcome as originally obtained if it
is immediately repeated. (This means in particular that the apparatus
must be immediately reset to its ready state, or a fresh apparatus
must be employed, while the system is not tampered with so that its
initial state for the repeated experiment is its final state produced
by the first experiment.) Thus the experiment is \emph{reproducible}
if
\begin{equation}
U(\widetilde{\mathcal{H}_{\alpha}}\otimes\Phi_0) \subseteq
\widetilde{\mathcal{H}_{\alpha}}\otimes\Phi_{\alpha} \label{eq:repconold}
\end{equation}
or, equivalently, if there are spaces ${{\mathcal{H}}_{\alpha}}'\subseteq
\widetilde{\mathcal{H}_{\alpha}}$ such that
\begin{equation}
U(\widetilde{\mathcal{H}_{\alpha}}\otimes\Phi_0) = {{\mathcal{H}}_{\alpha}}'\otimes\Phi_{\alpha}
\label{eq:repcon}\,.
\end{equation}
Note that it follows from the unitarity of $U$ and the orthogonality
of the subspaces $\widetilde{{\mathcal{H}}_{\alpha}}\otimes\Phi_{\alpha}$ that the subspaces
$\widetilde{\mathcal{H}_{\alpha}}\otimes\Phi_0$ and hence the
$\widetilde{\mathcal{H}_{\alpha}}$ are also orthogonal. Therefore, by
taking the orthogonal sum over $\alpha$ of both sides of
(\ref{eq:repcon}), we obtain
\begin{equation}
\bigoplus_{\alpha} U(\widetilde{\mathcal{H}_{\alpha}}\otimes\Phi_0)= U \left( \bigoplus_{\alpha}
\widetilde{\mathcal{H}_{\alpha}}\otimes\Phi_0\right) = \bigoplus_{\alpha} {{\mathcal{H}}_{\alpha}}'\otimes\Phi_{\alpha}.
\label{eq:orto}
\end{equation}
If we now make the simplifying assumption that the subspaces
$\widetilde{\mathcal{H}_{\alpha}}$ are finite dimensional, we have from
unitarity that $ \widetilde{\mathcal{H}_{\alpha}}= {{\mathcal{H}}_{\alpha}}'$, and thus, by
comparing \eq{eq:rep2} and (\ref{eq:orto}), that equality holds in
\eq{eq:rep2} and that
\begin{equation}
{\mathcal{H}}=\bigoplus_{\alpha} {{\mathcal{H}}_{\alpha}}
\label{eq:sum}
\end{equation}
with
\begin{equation}
U({\mathcal{H}_\alpha} \otimes \Phi_0 )= {{\mathcal{H}}_{\alpha}} \otimes \Phi_{\alpha}
\label{eq:rep4}
\end{equation}
for $${\mathcal{H}}_{\alpha} \equiv \widetilde{\mathcal{H}_{\alpha}}={{\mathcal{H}}_{\alpha}}' \, .$$
Therefore if the wave function of the system is initially in $\mathcal{H}_\alpha$, outcome
$\alpha$ definitely occurs and the value $\lambda_\alpha$ is thus definitely
obtained (assuming again for simplicity one-to-one correspondence
between outcomes and results). It then follows that, for a general
initial system wave function
\begin{displaymath}
\psi =\sum_{\alpha } P_{ {\mathcal{H}_{\alpha} } } \psi ,
\end{displaymath}
where $ P_{ {\mathcal{H}_{\alpha} } } $ is the projection in $\mathcal{H}$ onto the subspace ${\mathcal{H}}_{\alpha}$,
the outcome $\alpha$, with result $\lambda_{\alpha}$, is obtained with (the usual)
probability
\begin{equation}
p_\alpha = \| P_{ {\mathcal{H}_{\alpha} } } \psi\|^2= \langle\psi, P_{ {\mathcal{H}_{\alpha} } } \psi \rangle,
\label{eq:prr}
\end{equation}
which follows from (\ref{eq:rep4}), \eq{eq:pr}, and \eq{eq:ormfin}
since $U\big( P_{ {\mathcal{H}_{\alpha} } } \psi\otimes\Phi_0\big) = \psi_{\alpha} \otimes\Phi_{\alpha}$ and hence $ \|
P_{ {\mathcal{H}_{\alpha} } } \psi\| = \| \psi_{\alpha} \|$ by unitarity. In particular, when the $\lambda_{\alpha}$
are real-valued, the expected value obtained is
\begin{equation}
\sum_{\alpha }{p_\alpha \lambda_{\alpha}}=\sum_{\alpha }{\lambda_{\alpha}{ \| P_{ {\mathcal{H}_{\alpha} } } \psi\|}^2} = \langle
\psi, A\psi \rangle \label{eq:meanz}
\end{equation}
where
\begin{equation}
A=\sum_{\alpha }{\lambda_{\alpha} P_{ {\mathcal{H}_{\alpha} } } }
\label{eq:A}
\end{equation}
is the self-adjoint operator with eigenvalues $\lambda_{\alpha}$ and spectral projections
$ P_{ {\mathcal{H}_{\alpha} } } $.
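In finite dimension the identity \eq{eq:meanz} can be verified directly. A Python sketch with $\mathcal{H}=\mathbb{C}^3$, $\mathcal{H}_1=\mathrm{span}\{e_1,e_2\}$, $\mathcal{H}_2=\mathrm{span}\{e_3\}$, and illustrative values $\lambda_1=+1$, $\lambda_2=-1$ (state and eigenvalues are placeholders, not from the text):

```python
import math

# H = C^3 with H_1 = span{e1, e2} (lambda_1 = +1) and H_2 = span{e3}
# (lambda_2 = -1). All numbers are illustrative.
psi = [0.5 + 0j, 0.5j, complex(math.sqrt(0.5))]   # normalized: 0.25 + 0.25 + 0.5 = 1
subspaces = {1: [0, 1], 2: [2]}                   # index sets of the H_alpha
lam = {1: +1.0, 2: -1.0}

def project(alpha):
    # P_{H_alpha} psi: keep only the components lying in H_alpha
    return [psi[k] if k in subspaces[alpha] else 0j for k in range(len(psi))]

p = {a: sum(abs(c) ** 2 for c in project(a)) for a in subspaces}   # p_alpha
expected = sum(p[a] * lam[a] for a in subspaces)                   # sum_a p_a lambda_a

# <psi, A psi> with A = sum_alpha lambda_alpha P_{H_alpha}; A acts diagonally here.
A_psi = [lam[1] * c if k in subspaces[1] else lam[2] * c
         for k, c in enumerate(psi)]
inner = sum(a.conjugate() * b for a, b in zip(psi, A_psi)).real
```

Both routes give the same number, which is the content of \eq{eq:meanz} in this toy setting.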
\subsection{Operators as Observables}\label{subsec.oao}
What we wish to emphasize here is that, insofar as the statistics for
the values which result from the experiment are concerned,
\begin{equation}
\mbox{
\begin{minipage}{0.85\textwidth}\openup 1.4\jot
\setlength{\baselineskip}{12pt}\emph{the relevant data for the
experiment are the collection $\{ {\mathcal{H}}_{\alpha}\}$ of special orthogonal
subspaces, together with the corresponding calibration $\{\lambda_{\alpha} \},$
}
\end{minipage}}
\label{def:exptoa}
\end{equation}
and \emph{this data is compactly expressed and represented by the
self-adjoint operator $A$, on the system Hilbert space $\mathcal{H}$, given
by \eq{eq:A}.} Thus, under the assumptions we have made, with a
reproducible experiment $\mathscr{E}$ we naturally associate an operator
$A=A_{\mathscr{E}}$, a single mathematical object, defined on the system alone,
in terms of which an efficient description \eq{eq:prr} of the
statistics of the possible results is achieved; we shall denote this
association by
\begin{equation}
\mathscr{E}\mapsto A\,.
\label{eq:fretoe}
\end{equation}
If we wish we may speak of ``operators as observables,'' and when an
experiment $\mathscr{E}$ is associated with a self-adjoint operator $A$, as described
above, we may say that \emph{the experiment $\mathscr{E}$ is a ``measurement''
of the observable represented by the self-adjoint operator $A$.} If we do
so, however, it is important that we appreciate that in so speaking we
merely refer to what we have just derived: the role of operators in
the description of certain experiments.\footnote{Operators as
observables also naturally convey information about the system's
wave function after the experiment. For example, for an ideal measurement,
when the outcome is $\alpha$ the wave function of the system after the experiment
is (proportional to) $P_{\mathcal{H}_\alpha}\psi$. We shall elaborate upon this
in the next section.}
So understood, the notion of operator-as-observable in no way implies
that anything is genuinely measured in the experiment, and certainly
not the operator itself! In a general experiment no system property
is being measured, even if the experiment happens to be
measurement-like. (Position measurements in Bohmian mechanics are of course an
important exception.) What in general is going on in obtaining
outcome $\alpha$ is completely straightforward and in no way suggests,
or assigns any substantive meaning to, statements to the effect that,
prior to the experiment, observable $A$ somehow had a value
$\lambda_\alpha$---whether this be in some determinate sense or in the
sense of Heisenberg's ``potentiality'' or some other ill-defined fuzzy
sense---which is revealed, or crystallized, by the experiment. Even
speaking of the observable $A$ as having value $\lambda_\alpha$ when the
system's wave function is in $\mathcal{H}_\alpha$, i.e., when this wave function is an eigenstate of
$A$ of eigenvalue $\lambda_\alpha$---insofar as it suggests that something
peculiarly quantum is going on when the wave function is not an eigenstate
whereas in fact there is nothing the least bit peculiar about the
situation---perhaps does more harm than good.
It might be objected that we are claiming to arrive at the quantum formalism under
somewhat unrealistic assumptions, such as, for example,
reproducibility or finite dimensionality. We agree. But this
objection misses the point of the exercise. The quantum formalism itself is an
idealization; when applicable at all, it is only as an approximation.
Beyond illuminating the role of operators as ingredients in this
formalism, our point was to indicate how naturally it emerges. In
this regard we must emphasize that the following question arises for
quantum orthodoxy, but does not arise for Bohmian mechanics: For precisely which
theory is the quantum formalism an idealization?
We shall discuss how to go beyond the idealization involved in the
quantum formalism in Section 4---after having analyzed it thoroughly
in Section 3. First we wish to show that many more experiments than
those satisfying our assumptions can indeed be associated with
operators in exactly the manner we have described.
\subsection{The General Framework of Bohmian Experiments}
\label{sec:E}\label{sec:GFE}
According to (\ref{eq:page}) the statistics of the results of a
discrete experiment are governed by the probability measure
$\rho_{\Psi_T}\circ F^{-1}$, where $\rho_{\Psi_T}(dq)
=|\Psi_{T}(q)|^{2}dq$ is the quantum equilibrium measure. Note that
discreteness of the value space of $F$ plays no role in the
characterization of this measure. This suggests that we may consider
a more general notion of experiment, not based on the assumption of a
countable set of outcomes, but only on the \emph{unitarity} of the
operator $U$, which transforms the initial state $\psi\otimes\Phi_{0}$
into the final state $\Psi_{T}$, and on a generic \emph{calibration
function} $F$ from the configuration space of the composite system
to some value space, e.g., $\mathbb{R}$, or ${\mathbb{R}}^m$, giving the result of the
experiment as a function $ F(Q_T)$ of the final configuration $Q_T$ of
system and apparatus. We arrive in this way at the notion of
\emph{general experiment}
\begin{equation}
\mathscr{E}\equiv\{\Phi_{0}, U, F\},
\label{eq:generalexperiment}
\end{equation}
where the unitary $U$ embodies the interaction of system and apparatus
and the function $F$ could be completely general. Of course, for
application to the results of real-world experiments $F$ might
represent the ``orientation of the apparatus pointer'' or some
coarse-graining thereof.
Performing $\mathscr{E}$ on a system with initial wave function $\psi$ leads to the
result ${Z}= F(Q_T)$ and since $Q_{T}$ is randomly distributed
according to the quantum equilibrium measure $\rho_{\Psi_T}$, the
probability distribution of $Z$ is given by the induced measure
\begin{equation}
\rho^{ Z}_{\psi}= \rho_{\Psi_T}\circ
F^{-1}\,.
\label{eq:indumas}
\end{equation}
(We have made explicit only the dependence of the measure on $\psi$,
since the initial apparatus state $\Phi_{0}$ is of course fixed,
defined by the experiment $\mathscr{E}$.) Note that this more general notion
of experiment eliminates the slight vagueness arising from the
imprecise notion of macroscopic upon which the notion of discrete
experiment is based. Note also that the structure
\eq{eq:generalexperiment} conveys information about the wave function
\eq{eq:con} of the system after a certain result $F(Q_T)$ is obtained.
Note, however, that this somewhat formal notion of experiment may not
contain enough information to determine the detailed Bohmian dynamics,
which would require specification of the Hamiltonian of the
system-apparatus composite, that might not be captured by $U$. In
particular, the final configuration $Q_T$ may not be determined, for
given initial wave function, as a function of the initial configuration of
system and apparatus. $\mathscr{E}$ does, however, determine what is relevant
for our purposes about the random variable $Q_T$, namely its
distribution, and hence that of $Z=F(Q_T)$.
Let us now focus on the right-hand side of equation (\ref{eq:prr}),
which establishes the association of operators with experiments:
$\langle\psi, P_{ {\mathcal{H}_{\alpha} } } \psi \rangle$ is the probability that ``the operator
$A$ has value $\lambda_{\alpha}$,'' and according to standard quantum mechanics
the statistics of the results of measuring a general self-adjoint operator
$A$, not necessarily with pure point spectrum, in the (normalized)
state $\psi$ are described by the probability measure
\begin{equation}
\Delta\mapsto\mu^{A}_\psi(\Delta) \equiv \langle \psi,
P^{A }(\Delta) \psi \rangle
\label{eq:spectrmeas}
\end{equation}
where $\Delta$ is a (Borel) set of real numbers and $P^A:
\Delta\mapsto P^{A }(\Delta)$ is the \emph{projection-valued measure}
(PVM) uniquely associated with $A$ by the spectral theorem. (We
recall \cite{RS80} that a PVM is a normalized, countably additive set
function whose values are, instead of nonnegative reals, orthogonal
projections on a Hilbert space $\mathcal{H}$. Any PVM $P$ on $\mathcal{H}$ determines,
for any given $\psi\in \mathcal{H}$, a probability measure
$\mu_\psi\equiv\mu_\psi^P : \Delta \mapsto \langle\psi ,
P(\Delta)\psi\rangle$ on $\mathbb{R}$. Integration against a projection-valued measure is analogous
to integration against ordinary measures, so that $B\equiv \int
f(\lambda) P(d\lambda) $ is well-defined, as an operator on $\mathcal{H}$.
Moreover, by the spectral theorem every self-adjoint operator $A$ is of the
form $ A= \int \lambda\, P(d\lambda)$, for a unique PVM $ P =P^{A}$,
and $\int f(\lambda) P(d\lambda)= f(A)$.)
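In finite dimension the spectral objects just recalled reduce to sums over eigenvalues. The following Python sketch makes this concrete; the spectrum and state are illustrative, and $A$ is taken diagonal so that $P(\Delta)$ simply selects eigencomponents:

```python
# Finite-dimensional sketch of the spectral calculus: for A with eigenvalues
# lam_k and orthonormal eigenbasis e_k, P(Delta) projects onto the e_k with
# lam_k in Delta, and f(A) multiplies each eigencomponent by f(lam_k).
# Spectrum and state below are illustrative.
lam = [1.0, 2.0, 5.0]                         # spectrum of A (A diagonal here)
psi = [0.2 ** 0.5, 0.3 ** 0.5, 0.5 ** 0.5]    # normalized state, real for simplicity

def mu(delta):
    # mu_psi(Delta) = <psi, P(Delta) psi>
    return sum(psi[k] ** 2 for k in range(3) if lam[k] in delta)

def f_of_A(f):
    # f(A) psi, with f(A) = int f(lambda) P(dlambda)
    return [f(lam[k]) * psi[k] for k in range(3)]
```

In particular $\mu_\psi$ is a probability measure on the spectrum, and $f(\lambda)=\lambda^2$ reproduces the action of $A^2$.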
It is then rather clear how (\ref{eq:fretoe}) extends to general self-adjoint
operators: \emph{a general experiment $\mathscr{E}$ is a measurement of the
self-adjoint operator $A$ if the statistics of the results of $\mathscr{E}$ are
given by (\ref{eq:spectrmeas})}, i.e.,
\begin{equation}
\mathscr{E}\mapsto A \qquad
\mbox{if and only if}\qquad \rho^{ Z}_{\psi} =\mu^A_\psi \,.
\label{eq:prdeltan}
\end{equation}
In particular, if $\mathscr{E}\mapsto A $, then the moments of the result of
$\mathscr{E}$ are the moments of $A$:
$$
\langle Z^n\rangle= \int \lambda^n \langle\psi ,P(d\lambda)\psi\rangle=
\langle\psi ,A^n\psi\rangle. $$
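In the discrete case the moment formula reduces to $\langle Z^n\rangle = \sum_\alpha \lambda_\alpha^n\,\|P_{\mathcal{H}_\alpha}\psi\|^2$. A Python sketch with illustrative spectral data (eigenvalues and weights are placeholders):

```python
# Moments of the result Z when E -> A, computed from the spectral measure:
# <Z^n> = sum_k lam_k^n w_k with weights w_k = ||P_k psi||^2.
# Eigenvalues and weights below are illustrative.
lam = [0.0, 1.0]          # eigenvalues of A
w = [0.25, 0.75]          # spectral weights of psi, summing to 1

def moment(n):
    return sum(l ** n * wk for l, wk in zip(lam, w))

mean = moment(1)                   # <Z>
variance = moment(2) - mean ** 2   # <Z^2> - <Z>^2
```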
Schr\"odinger's equationction{The Quantum Formalism} Schr\"odinger's equationtcounter{equation}{0} The spirit of
this section will be rather different {}from that of the previous
one. Here the focus will be on the formal structure of experiments
measuring self-adjoint operators. Our aim is to show that the
standard quantum formalism emerges {}from a \emph{formal} analysis
of the association $\mbox{$\mathscr{E}$}\mapsto A$ between operator and experiment
provided by (\ref{eq:prdeltan}). By ``formal analysis'' we mean not
only that the detailed physical conditions under which
$\mbox{$\mathscr{E}$}\mapsto A$ might hold (e.g., reproducibility) will play no role, but
also that the practical requirement that \mbox{$\mathscr{E}$}{} be physically
realizable will be of no relevance whatsoever.
Note that such a formal approach is unavoidable in order to recover
the quantum formalism. In fact, within the quantum formalism one
may consider measurements of arbitrary self-adjoint{} operators, for example,
the operator $A= \hat{X}^2\hat{P} + \hat{P}\hat{X}^{2}$, where $\hat{X}$
and $\hat{P}$ are respectively the position and the momentum
operators. However, it may very well be the case that no ``real
world'' experiment measuring $A$ exists. Thus, in order to allow
for measurements of arbitrary self-adjoint operators we shall regard
(\ref{eq:generalexperiment}) as characterizing an ``\emph{abstract
experiment}''; in particular, we shall not regard the unitary map
$U$ as arising necessarily {}from a (realizable) Schr\"{o}dinger{} time
evolution. We may also speak of virtual experiments.
In this regard one should observe that to resort to a formal
analysis is indeed quite common in physics. Consider, e.g., the
Hamiltonian formulation of classical mechanics that arose {}from an
abstraction of the physical description of the world provided by
Newtonian mechanics. Here we may freely speak of completely general
Hamiltonians, e.g. $H(p,q)= p^{6}$, without being concerned about
whether they are physical or not. Indeed, only very few
Hamiltonians correspond to physically realizable motions!
A warning: As we have stressed in the introduction and in Section
\ref{subsec.oao}, when we speak here of a measurement we don't
usually mean a {\em genuine} measurement---an experiment revealing
the pre-existing value of a quantity of interest, the measured
quantity or property. (We speak in this unfortunate way because it
is standard.) Genuine measurement will be discussed much later, in
Section \ref{secMO}.
\subsection{Weak Formal Measurements}
\label{sec:MO}
The first formal notion we shall consider is that of weak formal
measurement, formalizing the relevant data of an experiment measuring
a self-adjoint operator:
\begin{equation}
\mbox{
\begin{minipage}{0.85\textwidth}\openup 1.4\jot
\setlength{\baselineskip}{12pt}\emph{Any orthogonal decomposition
${\mbox{$\mathcal{H}$}}=\bigoplus_{\alpha} {{\mbox{$\mathcal{H}$}}_{\alpha}}$, i.e., any complete collection $\{ {\mbox{$\mathcal{H}$}}_{\alpha}\}$ of
mutually orthogonal subspaces, paired with any set $\{\lambda_{\alpha} \}$ of
distinct real numbers, defines the weak formal measurement
$\mbox{$\mathcal{M}$}\equiv\{({\mbox{$\mathcal{H}$}}_{\alpha}, \lambda_{\alpha} )\}\equiv\{{\mbox{$\mathcal{H}$}}_{\alpha}, \lambda_{\alpha} \}$.}
\end{minipage}}
\label{def:wfm}
\end{equation}
(Compare (\ref{def:wfm}) with (\ref{def:exptoa}) and note that now we
are not assuming that the spaces ${\mbox{$\mathcal{H}$}}_{\alpha}$ are finite-dimensional.) The
notion of weak formal measurement is aimed at expressing the minimal
structure that all experiments (some or all of which might be virtual)
measuring the same operator $A= \sum\lambda_{\alpha} P_{ {\mathcal{H}_{\alpha} } } $ have in common ($ P_{ {\mathcal{H}_{\alpha} } } $ is
the orthogonal projection onto the subspace ${\mbox{$\mathcal{H}$}}_{\alpha}$). Then, ``to
perform \mbox{$\mathcal{M}$}'' shall mean to perform (at least virtually) any one of
these experiments, i.e., any experiment such that
\begin{equation}
p_{\alpha}=\langle \psi, P_{ {\mathcal{H}_{\alpha} } } \psi \rangle
\label{eq:prdeltass}
\end{equation}
is the probability of obtaining the result $\lambda_{\alpha}$ on a system initially
in the state $\psi$. (This is of course equivalent to requiring that
the result $\lambda_{\alpha}$ is definitely obtained if and only if the initial
wave function $\psi\in {\mbox{$\mathcal{H}$}}_{\alpha}$.)
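The simplest nontrivial example is provided by a spin-$\frac{1}{2}$ degree of
freedom, $\mbox{$\mathcal{H}$}=\mathbb{C}^{2}$: the decomposition into the two
one-dimensional eigenspaces $\mathcal{H}_{\pm}$ of the spin matrix
$\sigma_{z}$, paired with the values $\pm 1$, defines the weak formal
measurement
$$
\mbox{$\mathcal{M}$}=\{(\mathcal{H}_{+},+1),(\mathcal{H}_{-},-1)\}\,,
$$
and for $\psi = c_{+}\psi_{+}+c_{-}\psi_{-}$ with unit vectors
$\psi_{\pm}\in\mathcal{H}_{\pm}$, (\ref{eq:prdeltass}) gives
$p_{\pm}=|c_{\pm}|^{2}$.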
Given $\mbox{$\mathcal{M}$}\equiv\{{\mbox{$\mathcal{H}$}}_{\alpha}, \lambda_{\alpha} \}$ consider the set function
\begin{equation}
P:\Delta\mapsto P (\Delta)\equiv \sum_{\lambda_{\alpha}\in \Delta} P_{ {\mathcal{H}_{\alpha} } } ,
\label{eq:disfr}
\end{equation}
where $\Delta$ is a set of real numbers (technically, a Borel set).
Then
\begin{itemize}
\item[1)] $P$ is \emph{normalized}, i.e., $P(\mathbb{R})= I$, where $I$ is the
identity operator and $\mathbb{R}$ is the real line,
\item[2)] $P(\Delta)$ is an \emph{orthogonal projection}, i.e.,
$P(\Delta)^{2}=P(\Delta)=P(\Delta)^{*}$,
\item[3)] $P$ is \emph{countably additive}, i.e., $ P(\bigcup_{n}
\Delta_n) = \sum_{n} P(\Delta_n)$, for $\Delta_n$ disjoint sets.
\end{itemize}
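These properties are immediate from (\ref{eq:disfr}): 1) is the completeness
of the collection $\{{\mbox{$\mathcal{H}$}}_{\alpha}\}$, i.e., $\sum_{\alpha} P_{ {\mathcal{H}_{\alpha} } } =I$, while 2)
follows from the mutual orthogonality of the subspaces,
$ P_{ {\mathcal{H}_{\alpha} } } P_{ {\mathcal{H}_{\beta} } } =\delta_{\alpha\beta} P_{ {\mathcal{H}_{\alpha} } } $, so that
$$
P(\Delta)^{2}=\sum_{\lambda_{\alpha},\lambda_{\beta}\in\Delta} P_{ {\mathcal{H}_{\alpha} } } P_{ {\mathcal{H}_{\beta} } }
= \sum_{\lambda_{\alpha}\in\Delta} P_{ {\mathcal{H}_{\alpha} } } = P(\Delta)\,.
$$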
Thus $P$ is a PVM and therefore the notion of weak formal
measurement is indeed equivalent to that of ``discrete'' PVM, that is,
a PVM supported by a countable set $\{\lambda_{\alpha}\}$ of values.
More general PVMs, e.g. PVMs supported by a continuous set of values,
will arise if we extend (\ref{def:wfm}) and base the notion of weak
formal measurement upon the general association (\ref{eq:prdeltan})
between experiments and operators. If we stipulate that
\begin{equation}
\mbox{
\begin{minipage}{0.90\textwidth}\openup 1.4\jot
\setlength{\baselineskip}{12pt}\emph{any PVM $P$ on $\mbox{$\mathcal{H}$}$ defines a
weak formal measurement $\mbox{$\mathcal{M}$}\equiv P$,}
\end{minipage}}
\label{def:wfmg}
\end{equation}
then ``to perform $\mbox{$\mathcal{M}$}$'' shall mean to perform any experiment $\mbox{$\mathscr{E}$}$
associated with $A=\int \lambda P(d\lambda)$ in the sense of
(\ref{eq:prdeltan}).
Note that since by the spectral theorem there is a natural one-to-one
correspondence between PVMs and self-adjoint{} operators, we may speak
equivalently of \emph{the} operator $A=A_{\mathcal{M}}$, for given
$\mbox{$\mathcal{M}$}$, or of \emph{the} weak formal measurement $\mbox{$\mathcal{M}$}=\mbox{$\mathcal{M}$}_A$, for given $A$. In
particular, the weak formal measurement $\mbox{$\mathcal{M}$}_{A}$ represents the
equivalence class of \emph{all} experiments $\mbox{$\mathscr{E}$}{}\mapsto A$.
\subsection{Strong Formal Measurements}
We wish now to classify the different experiments \mbox{$\mathscr{E}$}{} associated with
the same self-adjoint{} operator $A$ by taking into account the effect of \mbox{$\mathscr{E}$}{}
on the state of the system, i.e., the state transformations $\psi \to
\psi_{\alpha}$ induced by the occurrence of the various results $\lambda_{\alpha}$ of \mbox{$\mathscr{E}$}{}.
Accordingly, unless otherwise stated, {}from now on we shall assume
$\mbox{$\mathscr{E}$}$ to be a discrete experiment measuring $A=\sum\lambda_{\alpha} P_{ {\mathcal{H}_{\alpha} } } $, for which
the state transformation $\psi \to \psi_{\alpha}$ is defined by
\eq{eq:ormfin}. This leads to the notion of strong formal
measurements. For the most important types of strong formal
measurements, ideal, normal and standard, there is a one-to-one
correspondence between $\alpha$'s and numerical results $\lambda_{\alpha}$.
\subsubsection{Ideal Measurements} \label{sec:IM}
Given a weak formal measurement of $A$, the simplest possibility for
the transition $\psi\to\psi_{\alpha}$ is that when the result $\lambda_{\alpha}$ is
obtained, the initial state $\psi$ is projected onto the corresponding
space ${\mbox{$\mathcal{H}$}}_{\alpha}$, i.e., that
\begin{equation}
\psi \to \psi_{\alpha} = P_{ {\mathcal{H}_{\alpha} } } \psi.
\label{eq:ideal}
\end{equation}
This prescription defines uniquely the \emph{ideal measurement} of
$A$. (The transformation $\psi\to\psi_{\alpha}$ should be regarded as defined
only in the projective sense: $\psi \to \psi_\alpha$ and $\psi \to
c\psi_\alpha$ ($c\neq 0$) should be regarded as the same transition.)
``To perform an ideal measurement of $A$'' shall then mean to perform
a discrete experiment \mbox{$\mathscr{E}$}{} whose results are statistically distributed
according to (\ref{eq:prdeltass}) and whose state transformations
\eq{eq:ormfin} are given by (\ref{eq:ideal}).
Under an ideal measurement the wave function{} changes as little as possible: an
initial $\psi \in {\mbox{$\mathcal{H}$}}_{\alpha}$ is unchanged by the measurement. Ideal
measurements have always played a privileged role in quantum
mechanics. It is the ideal measurements that are most frequently
discussed in textbooks. It is for ideal measurements that the standard
collapse rule is obeyed. When Dirac \cite{Dir30} wrote: ``a
measurement always causes the system to jump into an eigenstate of the
dynamical variable that is being measured'' he was referring to an
ideal measurement.
\subsubsection{Normal Measurements}
\label{sec:NM}
The rigid structure of ideal measurements can be weakened by requiring
only that ${\mbox{$\mathcal{H}$}}_{\alpha}$ as a whole, and not the individual vectors in ${\mbox{$\mathcal{H}$}}_{\alpha}$,
is unchanged by the measurement and therefore that the state
transformations induced by the measurement are such that when the
result $\lambda_{\alpha}$ is obtained the transition
\begin{equation}
\psi \to\psi_{\alpha} = U_\alpha P_{ {\mathcal{H}_{\alpha} } } \psi
\label{eq:norm}
\end{equation}
occurs, where the $U_\alpha$ are operators on ${\mbox{$\mathcal{H}$}}_{\alpha}$ ($U_\alpha :{\mbox{$\mathcal{H}$}}_{\alpha}\to{\mbox{$\mathcal{H}$}}_{\alpha}$).
Then for any such discrete experiment \mbox{$\mathscr{E}$}{} measuring $A$, the $U_\alpha$
can be chosen so that \eq{eq:norm} agrees with \eq{eq:ormfin}, i.e.,
so that for $\psi \in {\mbox{$\mathcal{H}$}}_{\alpha}$, $U(\psi\otimes\Phi_0) =
U_\alpha\psi\otimes\Phi_\alpha$, and hence so that $U_\alpha$ is unitary (or at
least a partial isometry). Such a measurement, with unitaries $U_\alpha
:{\mbox{$\mathcal{H}$}}_{\alpha}\to{\mbox{$\mathcal{H}$}}_{\alpha}$, will be called a \emph{normal measurement} of $A$.
In contrast with an ideal measurement, a normal measurement of an
operator is not uniquely determined by the operator itself: additional
information is needed to determine the transitions, and this is
provided by the family $\{U_{\alpha}\}$. Different families define
different normal measurements of the same operator. Note that ideal
measurements are, of course, normal (with $U_{\alpha}= I_{\alpha} \equiv$
identity on ${\mbox{$\mathcal{H}$}}_{\alpha}$), and that normal measurements with one-dimensional
subspaces ${\mbox{$\mathcal{H}$}}_{\alpha}$ are necessarily ideal.
Since the transformations (\ref{eq:norm}) leave invariant the
subspaces ${\mbox{$\mathcal{H}$}}_{\alpha}$, the notion of normal measurement characterizes
completely the class of reproducible measurements of self-adjoint{} operators.
Following the terminology introduced by Pauli \cite{Pau58}, normal
measurements are sometimes called {\it measurements of the first kind\/}.
Normal measurements are also \emph{quantum nondemolition (QND)
measurements\/} \cite{Brag}, defined as measurements such that the
operators describing the induced state transformations, i.e., the
operators $R_{\alpha}\equiv U_{\alpha} P_{ {\mathcal{H}_{\alpha} } } $, commute with the measured operator
$A=\sum\lambda_{\alpha} P_{ {\mathcal{H}_{\alpha} } } $. (This condition is regarded as expressing that the
measurement leaves the measured observable $A$ unperturbed.)
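Indeed, the commutativity can be checked directly: since $U_{\alpha}$ maps
${\mbox{$\mathcal{H}$}}_{\alpha}$ into itself,
$ P_{ {\mathcal{H}_{\beta} } } U_{\alpha} P_{ {\mathcal{H}_{\alpha} } } = \delta_{\alpha\beta}U_{\alpha} P_{ {\mathcal{H}_{\alpha} } } $, so that
$$
A R_{\alpha} = \sum_{\beta}\lambda_{\beta} P_{ {\mathcal{H}_{\beta} } } \, U_{\alpha} P_{ {\mathcal{H}_{\alpha} } }
= \lambda_{\alpha}R_{\alpha}
= U_{\alpha} P_{ {\mathcal{H}_{\alpha} } } \sum_{\beta}\lambda_{\beta} P_{ {\mathcal{H}_{\beta} } } = R_{\alpha}A\,.
$$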
\subsubsection{Standard Measurements}
\label{sec:SM}
We may now drop the condition that the ${\mbox{$\mathcal{H}$}}_{\alpha}$ are left invariant by the
measurement and consider the very general state transformations
\begin{equation}
\psi \to \psi_{\alpha}=T_\alpha P_{ {\mathcal{H}_{\alpha} } } \psi
\label{eq:stsm}
\end{equation}
with operators $T_\alpha : {\mbox{$\mathcal{H}$}}_{\alpha}\to\mbox{$\mathcal{H}$}$. Then, exactly as for the case of
normal measurements, it follows that $T_\alpha$ can be chosen to be
unitary {}from ${\mbox{$\mathcal{H}$}}_{\alpha}$ onto its range $\widetilde{{\mbox{$\mathcal{H}$}}_{\alpha}}$. The subspaces
$\widetilde{{\mbox{$\mathcal{H}$}}_{\alpha}}$ need be neither orthogonal nor distinct. We shall
write $R_\alpha=T_\alpha P_{ {\mathcal{H}_{\alpha} } } $ for the general transition operators. With
$T_\alpha$ as chosen, $R_\alpha$ is characterized by the equation
$R_{\alpha}^{\ast}R_{\alpha} = P_{ {\mathcal{H}_{\alpha} } } $ (where $R_{\alpha}^{\ast}$ denotes the adjoint of
$R_{\alpha}$).
The state transformations (\ref{eq:stsm}), given by unitaries $T_\alpha:
{\mbox{$\mathcal{H}$}}_{\alpha}\to\widetilde{{\mbox{$\mathcal{H}$}}_{\alpha}}$, or equivalently by bounded operators $R_\alpha$ on
$\mbox{$\mathcal{H}$}$ satisfying $R_{\alpha}^{\ast}R_{\alpha} = P_{ {\mathcal{H}_{\alpha} } } $, define what we shall call a
\emph{standard measurement} of $A$. Note that normal measurements are
standard measurements with $\widetilde{{\mbox{$\mathcal{H}$}}_{\alpha}}={\mbox{$\mathcal{H}$}}_{\alpha}$ (or $
\widetilde{{\mbox{$\mathcal{H}$}}_{\alpha}}\subset {\mbox{$\mathcal{H}$}}_{\alpha}$). Although standard measurements are in a
sense more realistic than normal measurements (real world measurements
are seldom reproducible in a strict sense), they are very rarely
discussed in textbooks. We emphasize that the crucial data in a
standard measurement is given by $R_\alpha$, which governs both the state
transformations ($\psi\to R_\alpha\psi$) and the probabilities ($p_\alpha =
\langle\psi, P_{ {\mathcal{H}_{\alpha} } } \psi\rangle= \| R_\alpha\psi\|^2$).
We shall illustrate the main features of standard measurements by
considering a very simple example: Let $\{e_0, e_{1}, e_{2}, \ldots
\}$ be a fixed orthonormal basis of \mbox{$\mathcal{H}$}{} and consider the standard
measurement whose results are the numbers $0,1,2,\ldots $ and whose
state transformations are defined by the operators
\begin{displaymath}
R_{\alpha}\equiv |e_0\rangle\langle e_\alpha| \qquad \mbox{i.e.,}\qquad
R_{\alpha} \psi = \langle e_\alpha, \psi \rangle
e_{0},\qquad\alpha=0,1,2,\ldots
\end{displaymath}
With such $R_{\alpha}$'s are associated the projections
$P_{\alpha}=R_{\alpha}^{\ast}R_{\alpha}=|e_\alpha\rangle\langle e_\alpha|\,$, i.e., the
projections onto the one-dimensional spaces ${\mbox{$\mathcal{H}$}}_{\alpha}$ spanned respectively
by the vectors $e_{\alpha}$. Thus, this is a measurement of the operator
$ A = \sum_{\alpha} \alpha |e_\alpha\rangle\langle e_\alpha| $. Note that the spaces
$\widetilde{{\mbox{$\mathcal{H}$}}_{\alpha}}$, i.e. the ranges of the $R_{\alpha}$'s, are all the same
and equal to the space $\mbox{$\mathcal{H}$}_{0}$ generated by the vector $e_0$. The
measurement is then not normal since ${\mbox{$\mathcal{H}$}}_{\alpha}\neq \widetilde{{\mbox{$\mathcal{H}$}}_{\alpha}}$.
Finally, note that this measurement could be regarded as giving a
simple model for a photo detection experiment, where any state is
projected onto the ``vacuum state'' $e_0$ after the detection.
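Explicitly, for a general initial state $\psi = \sum_{\alpha}c_{\alpha}e_{\alpha}$
this measurement yields the result $\alpha$ with probability
$$
p_{\alpha}= \| R_{\alpha}\psi\|^{2} = |\langle e_{\alpha},\psi\rangle|^{2} = |c_{\alpha}|^{2},
$$
after which the system is left (projectively) in the state
$R_{\alpha}\psi = c_{\alpha}e_{0}$, i.e., in the ``vacuum state'' $e_{0}$,
whatever the result obtained.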
\subsubsection{Strong Formal Measurements}
\label{sec:StrM}
We shall now relax the condition that $\alpha\mapsto \lambda_{\alpha}$ is one-to-one,
as we would have to do for an experiment having a general calibration
$\alpha\mapsto\lambda_{\alpha}$, which need not be invertible. This leads to (what we
shall call) a \emph{strong formal measurement}. Since this notion
provides the most general formalization of the notion of a
``measurement of a self-adjoint{} operator'' that takes into account the effect
of the measurement on the state of the system, we shall spell it out
precisely as follows:
\begin{equation}
\mbox{
\begin{minipage}{0.85\textwidth}\openup 1.4\jot
\setlength{\baselineskip}{12pt}\emph{Any complete (labelled)
collection $\{ {\mbox{$\mathcal{H}$}}_{\alpha}\}$ of mutually orthogonal subspaces, any
(labelled) set $\{\lambda_{\alpha} \}$ of not necessarily distinct real
numbers, and any (labelled) collection $\{R_{\alpha}\}$ of bounded
operators on $\mbox{$\mathcal{H}$}$, such that $R_{\alpha}^{\ast}R_{\alpha}\equiv P_{ {\mathcal{H}_{\alpha} } } $ (the
projection onto ${\mbox{$\mathcal{H}$}}_{\alpha}$), defines a strong formal measurement.}
\end{minipage}}
\label{def:sfm}
\end{equation}
A strong formal measurement will be compactly denoted by $\mbox{$\mathcal{M}$}\equiv
\{({\mbox{$\mathcal{H}$}}_{\alpha}, \lambda_{\alpha}, R_{\alpha}) \}\equiv\{{\mbox{$\mathcal{H}$}}_{\alpha}, \lambda_{\alpha}, R_{\alpha} \}$, or even more compactly
by $\mbox{$\mathcal{M}$}\equiv \{\lambda_{\alpha}, R_{\alpha} \}$ (the spaces ${\mbox{$\mathcal{H}$}}_{\alpha}$ can be extracted {}from
the projections $ P_{ {\mathcal{H}_{\alpha} } } = R_{\alpha}^{\ast}R_{\alpha}$). With \mbox{$\mathcal{M}$}{} is associated the
operator $A=\sum\lambda_{\alpha} P_{ {\mathcal{H}_{\alpha} } } $. Note that since the $\lambda_{\alpha}$ are not
necessarily distinct numbers, $ P_{ {\mathcal{H}_{\alpha} } } $ need not be the spectral
projection $P^A (\lambda_{\alpha})$ associated with $\lambda_{\alpha}$; in general
$$P^A (\lambda) = \sum_{\alpha : \lambda_{\alpha} =\lambda} P_{ {\mathcal{H}_{\alpha} } } ,$$
i.e., it is the sum of all
the $ P_{ {\mathcal{H}_{\alpha} } } $'s that are associated with the value $\lambda$.\footnote{It
is for this reason that it would be pointless and inappropriate to
similarly generalize weak measurements. It is only when the state
transformation is taken into account that the distinction between
the outcome $\alpha$ (which determines the transformation) and the
result $\lambda_{\alpha}$ (whose probability the formal measurement is to supply)
becomes relevant.} ``\emph{To perform the measurement $\mbox{$\mathcal{M}$}$}'' on a
system initially in $\psi$ shall accordingly mean to perform a
discrete experiment \mbox{$\mathscr{E}$}{} such that: 1) the probability $p(\lambda)$ of
getting the result $\lambda$ is governed by $A$, i.e., $ p(\lambda) =
\langle \psi, P^A (\lambda) \psi \rangle$, and 2) the state
transformations of \mbox{$\mathscr{E}$}{} are those prescribed by \mbox{$\mathcal{M}$}{}, i.e., $ \psi \to
\psi_{\alpha}= R_{\alpha}\psi$.
Observe that strong formal measurements do provide a more realistic
formalization of the notion of measurement of an operator than
standard measurements: the notion of discrete experiment does not
imply a one-to-one correspondence between outcomes, i.e, final
macroscopic configurations of the pointer, and the numerical results
of the experiment.
The relationship between (weak or strong) formal measurements, self-adjoint{}
operators, and experiments can be summarized by the following sequence
of maps:
\begin{equation}
\mbox{$\mathscr{E}$} \mapsto \mbox{$\mathcal{M}$} \mapsto A
\label{eq:etomtoa}
\end{equation}
The first map expresses that $\mbox{$\mathcal{M}$}$ (weak or strong) is a formalization
of \mbox{$\mathscr{E}$}{}---it contains the ``relevant data'' about \mbox{$\mathscr{E}$}{}---and it will
be many-to-one if \mbox{$\mathcal{M}$}{} is a weak formal measurement\footnote{There is
an obvious natural unitary equivalence between the preimages \mbox{$\mathscr{E}$}{} of
a strong formal measurement \mbox{$\mathcal{M}$}{}.}; the second map expresses that
\mbox{$\mathcal{M}$}{} is a formal measurement of $A$ and it will be many-to-one if \mbox{$\mathcal{M}$}{}
is (required to be) strong and one-to-one if \mbox{$\mathcal{M}$}{} is weak. \emph{Note
that $\mbox{$\mathscr{E}$}\mapsto A$ is always many-to-one}.
\subsection{From Formal Measurements to Experiments}\label{subsec.exp}
Given a strong measurement $\mbox{$\mathcal{M}$}\equiv \{{\mbox{$\mathcal{H}$}}_{\alpha}, \lambda_{\alpha}, R_{\alpha} \}$ one may
easily construct a map \eq{eq:ormfin} defining a discrete experiment
$\mbox{$\mathscr{E}$}{}=\mbox{$\mathscr{E}$}_{\mathcal{M}}$ associated with $\mbox{$\mathcal{M}$}$:
\begin{equation}
U: \;\psi \otimes \Phi_0 \mapsto
\sum_{\alpha} (R_{\alpha}\psi) \otimes \Phi_{\alpha}
\label{standu}
\end{equation}
The unitarity of $U$ ({}from $ \mbox{$\mathcal{H}$}\otimes\Phi_0 $ onto the range of $U$)
follows then immediately {}from the orthonormality of the $\{\Phi_{\alpha}\}$
since
\begin{equation}
\sum_{\alpha} \|R_{\alpha}\psi\|^{2} = \sum_{\alpha} \langle \psi, R_{\alpha}^{\ast}R_{\alpha}
\psi \rangle
= \langle \psi, \sum_{\alpha} P_{ {\mathcal{H}_{\alpha} } } \psi \rangle = \langle
\psi,\psi\rangle = \|\psi\|^2 .
\label{eq:unide}
\end{equation}
This experiment is abstractly characterized by: 1) the finite or
countable set $I$ of outcomes $\alpha$, 2) the apparatus ready state
$\Phi_{0}$ and the set $\{\Phi_{\alpha}\}$ of normalized apparatus states, 3)
the unitary map $U : \;{\mbox{$\mathcal{H}$}}\otimes\Phi_0 \to \bigoplus_{\alpha} {\mbox{$\mathcal{H}$}} \otimes\Phi_{\alpha}$ given by
(\ref{standu}), 4) the calibration $\alpha \mapsto \lambda_{\alpha}$ assigning
numerical values (or a vector of such values) to the various outcomes
$\alpha$. Note that $U$ need not arise {}from a Schr\"{o}dinger{} Hamiltonian
governing the interaction between system and apparatus. Thus \mbox{$\mathscr{E}$}{}
should properly be regarded as an ``abstract'' experiment as we have
already pointed out in the introduction to this section.
\subsection{Von Neumann Measurements}
\label{sec:vNM}
We shall now briefly comment on the relation between our approach,
based on formal measurements, and the widely used formulation of
quantum measurement in terms of von Neumann measurements \cite{vNe55}.
A {\it von Neumann measurement\/} of $A=\sum \lambda_{\alpha} P_{ {\mathcal{H}_{\alpha} } } $ on a system
initially in the state $\psi$ can be described as follows (while the
nondegeneracy of the eigenvalues of $A$---i.e., that
$\mbox{dim}({\mbox{$\mathcal{H}$}}_{\alpha})=1$---is usually assumed, we shall not do so): Assume
that the (relevant) configuration space of the apparatus, whose
generic configuration shall be denoted by $y$, is one-dimensional, so
that its Hilbert space $\mbox{$\mathcal{H}$}_{\mathcal{A}}\simeq L^{2}(\mathbb{R})$, and that
the interaction between system and apparatus is governed by the
Hamiltonian
\begin{equation}
H= H_{\text{vN}}= \gamma A\otimes \hat{P}_{y}
\label{vontrans}
\end{equation}
where $\hat{P}_{y}\equiv -i\hbar\partial/\partial y$ is the
momentum operator of the apparatus. Let $\Phi_0 = \Phi_0 (y) $ be the
ready state of the apparatus. Then for $\psi= P_{ {\mathcal{H}_{\alpha} } } \psi$ one easily sees
that the unitary operator $U\equiv e^{-i TH/\hbar}$ transforms the
initial state $\psi_\alpha \otimes \Phi_0$ into $ \psi_\alpha \otimes \Phi_{\alpha}$ where
$\Phi_{\alpha} = \Phi_{0}(y - \lambda_{\alpha}\gamma T)$, so that the action of $U$ on
general $\psi =\sum P_{ {\mathcal{H}_{\alpha} } } \psi$ is
\begin{equation}
U: \;\psi \otimes \Phi_0 \to
\sum_{\alpha} ( P_{ {\mathcal{H}_{\alpha} } } \psi) \otimes \Phi_{\alpha}
\label{eq:vNm}
\end{equation}
If $\Phi_{0}$ has sufficiently narrow support, say around $y=0$, the
$\Phi_{\alpha}$ will have disjoint supports around the ``pointer positions''
$y_\alpha = \lambda_{\alpha}\gamma T$, and thus will be orthogonal, so that, with
calibration $F(y)= y /\gamma T$ (more precisely, $F(y)= y_\alpha /\gamma
T$ for $y$ in the support of $\Phi_\alpha$), the resulting von Neumann
measurement becomes a discrete experiment measuring $A$; comparing
(\ref{eq:vNm}) and (\ref{eq:ideal}) we see that it is an ideal
measurement of $A$.\footnote{It is usually required that von Neumann
measurements be impulsive ($\gamma$ large, $T$ small) so that only
the interaction term \eq{vontrans} contributes significantly to the
total Hamiltonian over the course of the measurement.}
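The computation behind \eq{eq:vNm} is simply the standard fact that the
momentum operator generates translations: with $\hat{p}= -i\hbar\,\partial/\partial y$,
$$
e^{-ia\hat{p}/\hbar}\,\Phi(y) = e^{-a\,\partial/\partial y}\,\Phi(y)=\Phi(y-a)\,,
$$
and on ${\mbox{$\mathcal{H}$}}_{\alpha}$ the coupling (\ref{vontrans}) exponentiates to precisely
such a translation of the pointer coordinate, with $a=\lambda_{\alpha}\gamma T$.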
Thus, the framework of von Neumann measurements is less general than
that of discrete experiments, or equivalently of strong formal
measurements; at the same time, since the Hamiltonian $H_{\text{vN}}$
is not of Schr\"odinger type, von Neumann measurements are just as
formal. (We note that more general von Neumann measurements of $A$
can be obtained by replacing $H_{\text{vN}}$ with more general
Hamiltonians; for example, $H_{\text{vN}}'= H_{0} + H_{\text{vN}}$,
where $H_0$ is a self-adjoint operator on the system Hilbert space
which commutes with $A$, gives rise to a \emph{normal measurement} of
$A$, with $R_{\alpha} = e^{-iT H_0/\hbar} P_{ {\mathcal{H}_{\alpha} } } $. Thus by proper extension of the
von Neumann measurements one may arrive at a framework of measurements
completely equivalent to that of strong formal measurements.)
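That $H_{\text{vN}}'$ indeed yields a normal measurement is easily checked:
since $H_{0}$ commutes with $A$, the unitary $W\equiv e^{-iT H_{0}/\hbar}$
commutes with each $ P_{ {\mathcal{H}_{\alpha} } } $ and hence maps ${\mbox{$\mathcal{H}$}}_{\alpha}$ onto itself, so that
$$
R_{\alpha}= W P_{ {\mathcal{H}_{\alpha} } } = U_{\alpha} P_{ {\mathcal{H}_{\alpha} } } \qquad\mbox{with }
U_{\alpha}\equiv W\big|_{{\mathcal{H}}_{\alpha}}\colon {\mathcal{H}}_{\alpha}\to{\mathcal{H}}_{\alpha}
\mbox{ unitary},
$$
which is exactly of the form (\ref{eq:norm}).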
\subsection{Preparation Procedures}
\label{sec:PP}
Before discussing further extensions of the association between
experiments and operators, we shall comment on an implicit assumption
apparently required for the measurement analysis to be relevant: that
the system upon which measurements are to be performed can be prepared
in any prescribed state $\psi$.
Firstly, we observe that the system can be prepared in a prescribed
state $\psi$ by means of an appropriate standard measurement \mbox{$\mathcal{M}$}{}
performed on the system when it is initially in an unknown state
$\psi'$. We have to choose $\mbox{$\mathcal{M}$}\equiv \{{\mbox{$\mathcal{H}$}}_{\alpha}, \mbox{\boldmath $\lambda $} dabda_{\alpha}, R_{\alpha} \}$ in such a
way that $R_{\alpha_{0}}\psi'=\psi$, for some $\alpha_{0}$ and all $\psi'$,
i.e., that Ran($R_{\alpha_{0}}$) = span($\psi$); then {}from reading the
result $\lambda_{\alpha_0}$ we may infer that the system has collapsed to
the state $\psi$. The simplest possibility is for $\mbox{$\mathcal{M}$}$ to be an ideal
measurement with at least a one-dimensional subspace $\mbox{$\mathcal{H}$}_{\alpha_{0}}$
that is spanned by $\psi$. Another possibility is to perform a
(nonideal) standard measurement like that of the example at the end of
Section \ref{sec:SM}, which can be regarded as defining a preparation
procedure for the state $e_{0}$.
Secondly, we wish to emphasize that the existence of preparation
procedures is not as crucial for relevance as it may seem. If we had
only statistical knowledge about the initial state $\psi$, nothing
would change in our analysis of Bohmian experiments of Section 2, and
in our conclusions concerning the emergence of self-adjoint{} operators, except
that the uncertainty about the final configuration of the pointer
would originate {}from both quantum equilibrium and randomness in
$\psi$. We shall elaborate upon this later when we discuss Bohmian
experiments for initial states described by a density matrix.
\subsection{Measurements of Commuting Families of Operators}
\label{secMCFO}
As hinted in Section \ref{sec:FDE}, the result of an experiment \mbox{$\mathscr{E}$}{}
might be more complex than we have suggested until now in Section 3:
it might be given by the vector ${\mbox{\boldmath $\lambda$}}_{\alpha}\equiv(
\lambda_{\alpha}^{(1)},\ldots,\lambda_{\alpha}^{(m)})$ corresponding to the orientations of $m$
pointers. For example, the apparatus itself may be a composite of $m$
devices with the possible results $\lambda_{\alpha}^{(i)}$ corresponding to the
final state of the $i$-th device. Nothing much will change in our
discussion of measurements if we now replace the numbers $\lambda_{\alpha}$ with
the vectors ${\mbox{\boldmath $\lambda$}}_{\alpha}\equiv( \lambda_{\alpha}^{(1)},\ldots,\lambda_{\alpha}^{(m)})$, since
the dimension of the value space was not very relevant. However \mbox{$\mathscr{E}$}{}
will now be associated, not with a single self-adjoint operator, but
with a commuting family of such operators. In other words, we arrive
at the notion of an experiment \mbox{$\mathscr{E}$}{} that is a \emph{measurement of a
commuting family} of self-adjoint{} operators,\footnote{We recall some basic
facts about commuting families of self-adjoint{} operators
\cite{vNe55,RN55,Pru71}. The self-adjoint{} operators $A_1,\ldots,A_m$ form a
commuting family if they are bounded and pairwise commute, or, more
generally, if this is so for their spectral projections, i.e., if
$[P^{A_{i}} (\Delta), P^{A_{j}} (\Gamma)] =0$ for all $i,j =
1,\ldots,m$ and (Borel) sets $\Delta, \Gamma \subset \mathbb{R} $. A
commuting family $A\equiv(A_1,\ldots,A_m)$ of self-adjoint{} operators is called
\emph{complete} if every self-adjoint{} operator $C$ that commutes with all
members of the family can be expressed as $C= g(A_1,A_2,\dots )$ for
some function $g$. The set of all such operators cannot be extended
in any suitable sense (it is closed in all relevant operator
topologies). For any commuting family $(A_1,\ldots,A_m)$ of self-adjoint{}
operators there is a self-adjoint{} operator $B$ and measurable functions
$f_i$ such that $A_i= f_i(B)$. If the family is complete, then this
operator has simple (i.e., nondegenerate) spectrum.} namely the
family
\begin{equation}
{A} \equiv \sum_{\alpha} {\mbox{\boldmath $\lambda$}}_{\alpha} P_{ {\mathcal{H}_{\alpha} } } = \left(\sum_{\alpha}\lambda_{\alpha}^{(1)}
P_{ {\mathcal{H}_{\alpha} } } ,\ldots, \sum_{\alpha}\lambda_{\alpha}^{(m)} P_{ {\mathcal{H}_{\alpha} } } \right) \equiv (A_1,\ldots,A_m).
\label{eq:ccff}
\end{equation}
Then the notions of the various kinds of formal measurements---weak,
ideal, normal, standard, strong---extend straightforwardly to formal
measurements of commuting families of operators. In particular, for
the general notion of weak formal measurement given by \ref{def:wfmg},
$P$ becomes a PVM on $\mathbb{R}^m$, with associated operators $ A_i=
\int_{\mathbb{R}^m} \lambda^{(i)} P(d\lambda)\quad [\lambda=(
\lambda^{(1)},\ldots,\lambda^{(m)})\in\mathbb{R}^m]$. And just as for PVMs on
$\mathbb{R}$ and self-adjoint{} operators, this association in fact yields, by the
spectral theorem, a one-to-one correspondence between PVMs on $\mathbb{R}^m$
and commuting families of $m$ self-adjoint{} operators. The PVM corresponding to
the commuting family $(A_1,\ldots,A_m)$ is in fact simply the product PVM
$P= P^A= P^{A_1}\times \cdots\times P^{A_m}$ given on product sets by
\begin{equation}
P^{{A}}(\Delta_1\times\cdots \times\Delta_m)= P^{A_{1}}
(\Delta_{1})\cdots P^{A_{m}} (\Delta_{m}),
\label{eq:factpvm}
\end{equation}
where $P^{A_{1}} ,\ldots, P^{A_{m}}$ are the PVMs of $A_1,\ldots,A_m$,
and $\Delta_i\subset \mathbb{R}$, with the associated probability
distributions on $\mathbb{R}^m$ given by the spectral measures for $A$
\begin{equation}
\mu^{{A}}_{\psi}(\Delta)
=\langle\psi, P^{{A}}(\Delta)
\psi\rangle
\label{plcf}
\end{equation}
for any (Borel) set $\Delta\subset\mathbb{R}^m$.
In particular, for a PVM on $\mathbb{R}^m$, corresponding to $A=
(A_1,\ldots,A_m)$, the $i$-th marginal distribution, i.e., the distribution
of the $i$-th component $\lambda^{(i)}$, is
$$
\mu^{A}_{\psi }(\mathbb{R} \times \cdots\times\mathbb{R} \times \Delta_i \times \mathbb{R} \times
\cdots \times \mathbb{R}) =\langle \psi, P^{A_i}( \Delta_i) \psi\rangle=
\mu^{A_i}_{\psi}(\Delta_i),
$$
the spectral measure for $A_i$. Thus, by focusing on the respective
pointer variables $\lambda^{(i)}$, we may regard an experiment
measuring (or a weak formal measurement of) $A= (A_1,\ldots,A_m)$ as
providing an experiment measuring (or a weak formal measurement of)
each $A_i$, just as would be the case for a genuine measurement of $m$
quantities $A_1, \ldots, A_m$. Note also the following: If $\{{\mbox{$\mathcal{H}$}}_{\alpha},
{\mbox{\boldmath $\lambda$}}_{\alpha}, R_{\alpha} \}$ is a strong formal measurement of $A= (A_1,\ldots,A_m)$,
then $\{{\mbox{$\mathcal{H}$}}_{\alpha}, \lambda_\alpha^{(i)}, R_{\alpha} \}$ is a strong formal measurement
of $A_i$; but if $\{{\mbox{$\mathcal{H}$}}_{\alpha}, {\mbox{\boldmath $\lambda$}}_{\alpha}, R_{\alpha} \}$ is an ideal, resp.\ normal, resp.\
standard, measurement of $A$, then $\{{\mbox{$\mathcal{H}$}}_{\alpha}, \lambda_\alpha^{(i)}, R_{\alpha} \}$ need
not be ideal, resp.\ normal, resp.\ standard.
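The following NumPy sketch (an illustration of ours, not part of the text; the commuting diagonal matrices standing in for $A_1, A_2$ and the state $\psi$ are arbitrary choices) builds the product PVM of Eq.~(\ref{eq:factpvm}) in finite dimension and verifies that its first marginal reproduces the spectral measure of $A_1$ alone:

```python
import numpy as np

# Two commuting self-adjoint operators, diagonal in the same basis
# (a finite-dimensional stand-in for a commuting family A = (A_1, A_2)).
A1 = np.diag([0., 0., 1., 1.])
A2 = np.diag([0., 1., 0., 1.])

def spectral_projections(A):
    """Map each eigenvalue of a diagonal self-adjoint A to its projection."""
    d = np.diag(A)
    return {v: np.diag((d == v).astype(float)) for v in np.unique(d)}

P1, P2 = spectral_projections(A1), spectral_projections(A2)

rng = np.random.default_rng(0)
psi = rng.normal(size=4) + 1j * rng.normal(size=4)
psi /= np.linalg.norm(psi)

# Product PVM on R^2: P(l1, l2) = P^{A_1}(l1) P^{A_2}(l2), with joint
# distribution mu^A_psi(l1, l2) = <psi, P(l1, l2) psi>.
joint = {(l1, l2): np.vdot(psi, Q1 @ Q2 @ psi).real
         for l1, Q1 in P1.items() for l2, Q2 in P2.items()}

# The marginal over l2 reproduces the spectral measure of A_1 alone.
marginal_1 = {l1: sum(p for (a, _), p in joint.items() if a == l1)
              for l1 in P1}
spectral_1 = {l1: np.vdot(psi, Q1 @ psi).real for l1, Q1 in P1.items()}
```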
There is a crucial point to observe: the same operator may belong to
different commuting families. Consider, for example, a measurement of
${A} = (A_1,\ldots,A_m)$ and one of ${B} = (B_1,\ldots,B_m)$, where
$A_{1}=B_{1}\equiv C$. Then while both measurements provide a
measurement of $C$, they could be totally different: the operators
$A_{i}$ and $B_{i}$ for $i\neq1 $ need not commute and the PVMs of
${A}$ and ${B}$, as well as any corresponding experiments $\mbox{$\mathscr{E}$}_A$ and
$\mbox{$\mathscr{E}$}_B$, will be in general essentially different.
To emphasize this point we shall recall a famous example, the EPRB
experiment~\cite{EPR, Boh51}: A pair of spin one-half particles,
prepared in a spin-singlet state $$
\psi =
\frac{1}{\sqrt{2}}\left(\psi^{(+)}\otimes\psi^{(-)} -
\psi^{(-)}\otimes\psi^{(+)}\right)\,,$$
are moving freely in
opposite directions. Measurements are made, say by Stern-Gerlach
magnets, on selected components of the spins of the two particles.
Let $\bf {a} ,\, {b} ,\, {c}$ be three different unit vectors in
space, let $\mybold{\sigma}_{1} \equiv \mybold{\sigma}\otimes I$ and let
$\mybold{\sigma}_{2} \equiv I \otimes \mybold{\sigma}, $ where $
\mybold{\sigma} =(\sigma_x,\sigma_y,\sigma_z)$ are the Pauli matrices.
Then we could measure the operator $\mybold{\sigma}_{1}{\bf\cdot {a}}$
by measuring either of the commuting families $( \mybold{\sigma}_{1}
{\bf \cdot {a}}\,, \mybold{\sigma}_{2} {\bf \cdot {b}})$ and $(
\mybold{\sigma}_{1} {\bf \cdot {a}} \,, \mybold{\sigma}_{2} {\bf \cdot
{c}}) $. However these measurements are different, both as weak and
as strong measurements, and of course as experiments. In Bohmian mechanics{} the
result obtained at one place at any given time will in fact depend
upon the choice of the measurement simultaneously performed at the
other place (i.e., on whether the spin of the other particle is
measured along $\bf {b}$ or along $\bf{c}$). However, the statistics
of the results won't be affected by the choice of measurement at the
other place because both choices yield measurements of the same
operator and thus their results must have the same statistical
distribution.
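This statistical indifference can be checked directly in a numerical sketch (ours, not part of the original discussion; the unit vectors playing the roles of ${\bf a}, {\bf b}, {\bf c}$ below are arbitrary choices): in the singlet state, the marginal distribution of the $\mybold{\sigma}_{1}\cdot{\bf a}$ result is the same whether $\mybold{\sigma}_{2}\cdot{\bf b}$ or $\mybold{\sigma}_{2}\cdot{\bf c}$ is measured alongside it.

```python
import numpy as np

# Pauli matrices.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def proj(n, s):
    """Projection onto the eigenvalue s = +/-1 of n . sigma, n a unit vector."""
    return (np.eye(2) + s * (n[0]*sx + n[1]*sy + n[2]*sz)) / 2

up = np.array([1, 0], dtype=complex)
dn = np.array([0, 1], dtype=complex)
singlet = (np.kron(up, dn) - np.kron(dn, up)) / np.sqrt(2)

a = np.array([0., 0., 1.])
b = np.array([1., 0., 0.])
c = np.array([np.sin(1.0), 0., np.cos(1.0)])   # an arbitrary third direction

def dist_first_spin(other):
    """Distribution of the sigma_1 . a result when sigma_2 . other is
    measured jointly: a marginal of the product PVM in the singlet state."""
    return {s1: sum(np.vdot(singlet,
                            np.kron(proj(a, s1), proj(other, s2)) @ singlet).real
                    for s2 in (+1, -1))
            for s1 in (+1, -1)}
```

Since the projections for the second particle sum to the identity, the marginal for the first particle is independent of the choice made at the other place, exactly as stated above.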
\subsection{Functions of Measurements}
One of the most common experimental procedures is to recalibrate the
scale of an experiment \mbox{$\mathscr{E}$}{}: if $Z$ is the original result and $f$ an
appropriate function, recalibration by $f$ leads to $f({Z})$ as the
new result. Thus $f(\mbox{$\mathscr{E}$})$ has an obvious meaning. Moreover, if
$\mbox{$\mathscr{E}$}\mapsto A$ according to (\ref{eq:prdeltan}) then $ \mu^{
f(Z)}_{\psi} =\mu^{ Z}_{\psi} \circ f^{-1} = \mu^A_\psi \circ f^{-1}
$, and
$$
\mu^A_\psi \circ f^{-1}(d\lambda) =
\langle\psi,P^{A}(f^{-1}(d\lambda))\psi\rangle =
\langle\psi,P^{f(A)}(d\lambda)\psi\rangle
$$
where the last equality follows {}from the very definition of
$$f(A) = \int f(\lambda) P^{A} (d\lambda) = \int \lambda
P^{A}(f^{-1}(d\lambda))$$
provided by the spectral theorem. Thus,
\begin{equation}
\mbox{if }\quad \mu^{ Z}_{\psi} =\mu^A_\psi \qquad
\mbox{then }\qquad \mu^{ f(Z)}_{\psi} =\mu^{f(A)}_\psi\,,
\label{eq:prfm}
\end{equation}
i.e.,
\begin{equation}
\text{if}\qquad \mbox{$\mathscr{E}$} \mapsto A \qquad\text{then}\qquad f(\mbox{$\mathscr{E}$}) \mapsto
f(A).
\end{equation}
The notion of \emph{function of a formal measurement} has then an
unequivocal meaning: if $\mbox{$\mathcal{M}$}$ is a weak formal measurement defined by
the PVM $P$ then $f(\mbox{$\mathcal{M}$})$ is the weak formal measurement defined by the
PVM $P\circ f^{-1}$, so that if $\mbox{$\mathcal{M}$}$ is a measurement of $A$ then
$f(\mbox{$\mathcal{M}$})$ is a measurement of $f(A)$; for a strong formal measurement
$\mbox{$\mathcal{M}$}=\{{\mbox{$\mathcal{H}$}}_{\alpha}, \lambda_{\alpha}, R_{\alpha} \}$ the self-evident requirement that the
recalibration not affect the wave function{} transitions induced by \mbox{$\mathcal{M}$}{} leads
to $ f(\mbox{$\mathcal{M}$})= \{{\mbox{$\mathcal{H}$}}_{\alpha}, f(\lambda_{\alpha}), R_{\alpha} \}$. Note that if $\mbox{$\mathcal{M}$}$ is
a standard measurement, $f(\mbox{$\mathcal{M}$})$ will in general not be standard (since
in general $f$ can be many-to-one).
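A small numerical sketch (ours; a diagonal toy operator and the many-to-one function $f(\lambda)=\lambda^2$ are arbitrary choices) confirms that the pushforward $\mu^A_\psi\circ f^{-1}$ coincides with the spectral measure of $f(A)$, whose PVM lumps together eigenvalues with the same $f$-value:

```python
import numpy as np

# A self-adjoint operator with eigenvalues -1, 0, 1 and f(lam) = lam^2.
A = np.diag([-1., 0., 1., 1.])
f = lambda lam: lam ** 2

eigvals = np.unique(np.diag(A))
P = {v: np.diag((np.diag(A) == v).astype(float)) for v in eigvals}

rng = np.random.default_rng(2)
psi = rng.normal(size=4) + 1j * rng.normal(size=4)
psi /= np.linalg.norm(psi)

# Spectral measure of A, and its pushforward under f (mu^A o f^{-1}):
# f is many-to-one here, so -1 and +1 get lumped together.
mu_A = {v: np.vdot(psi, Q @ psi).real for v, Q in P.items()}
pushforward = {}
for v, p in mu_A.items():
    pushforward[f(v)] = pushforward.get(f(v), 0.0) + p

# PVM of f(A): sum the projections over eigenvalues with the same f-value.
Pf = {}
for v, Q in P.items():
    Pf[f(v)] = Pf.get(f(v), np.zeros_like(Q)) + Q
mu_fA = {m: np.vdot(psi, Q @ psi).real for m, Q in Pf.items()}
```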
To highlight some subtleties of the notion of function of measurement
we shall discuss two examples: Suppose that $\mbox{$\mathcal{M}$}$ and $\mbox{$\mathcal{M}$}'$ are
respectively measurements of the commuting families $A = (A_{1}, A_{
2})$ and $ B = (B_{1}, B_{ 2})$, with $A_{1}A_{ 2}= B_{1}B_{ 2}=C$.
Let $f:\mathbb{R}^{2}\to\mathbb{R}$, $f (\lambda_{1}, \lambda_{2}) =
\lambda_{1}\lambda_{2}$. Then both $f(\mbox{$\mathcal{M}$})$ and $f(\mbox{$\mathcal{M}$}')$ are
measurements of the same self-adjoint{} operator $C$. Nevertheless, as strong
measurements or as experiments, they could be very different: if
$A_{2}$ and $B_{2}$ do not commute they will be associated with
different families of spectral projections. (Even more simply,
consider measurements $\mbox{$\mathcal{M}$}_x$ and $\mbox{$\mathcal{M}$}_y$ of $\sigma_x$ and $\sigma_y$
and let $f(\lambda)= \lambda^2$. Then $f(\mbox{$\mathcal{M}$}_x)$ and $f(\mbox{$\mathcal{M}$}_y)$ are
measurements of $I$---so that the result must be $1$---but the two
strong measurements, as well as the corresponding experiments, are
completely different.)
The second example is provided by measurements designed to determine
whether the operator $A=\sum_{\alpha} \lambda_{\alpha} P_{ {\mathcal{H}_{\alpha} } } $ (the $\lambda_{\alpha}$'s are distinct) has
values in some given set $\Delta$. This determination can be
accomplished in at least two different ways: Suppose that $\mbox{$\mathcal{M}$}$ is an
ideal measurement of $A$ and let ${\sf 1}_\Delta(\lambda)$ be the
characteristic function of the set $\Delta$. Then we could perform
${\sf 1}_\Delta(\mbox{$\mathcal{M}$})$, that is, we measure $A$ and see whether
``$A\in\Delta$''. But we could also perform an ``\emph{ideal
determination} of $A\in\Delta$'', that is, an ideal measurement of
${\sf 1}_\Delta(A) = P^A(\Delta)$. Now, both measurements provide a
``measurement of $A\in\Delta $'' (i.e., of the operator $ {\sf
1}_\Delta(A)$), since in both cases the results 1 and 0 get assigned
the same probabilities. However, as strong measurements, they are
different: when ${\sf 1}_\Delta(\mbox{$\mathcal{M}$})$ is performed, and the result 1 is
obtained, $\psi$ undergoes the transition
$$
\psi \to P_{ {\mathcal{H}_{\alpha} } } \psi
$$
where $\alpha$ is the outcome with $\lambda_{\alpha}\in \Delta$ that actually
occurs. On the other hand, for an ideal measurement of ${\sf
1}_\Delta(A)$, the occurrence of the result 1 will generate the
transition
$$
\psi \to P^{A}(\Delta)\psi = \sum_{\lambda_{\alpha}\in \Delta} P_{ {\mathcal{H}_{\alpha} } } \psi.
$$
Note that in this case the state of the system is changed as little
as possible. For example, suppose that two eigenvalues, say
$\lambda_{\alpha_1}, \lambda_{\alpha_2}$, belong to $\Delta$ and $\psi
= \psi_{\alpha_1} + \psi_{\alpha_2}$; then determination by performing
${\sf 1}_\Delta(\mbox{$\mathcal{M}$})$ will lead to either $\psi_{\alpha_1}$ or $
\psi_{\alpha_2}$, while the ideal determination of $A\in\Delta$ will
not change the state.
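The contrast between the two determinations can be made concrete in a small numerical sketch (ours; the toy operator on $\mathbb{C}^3$, with two of its three eigenvalues in $\Delta$, is an arbitrary choice):

```python
import numpy as np

# A = sum_alpha lam_alpha P_alpha on C^3; Delta contains two eigenvalues.
lams = [1., 2., 5.]
Ps = [np.diag([1., 0., 0.]), np.diag([0., 1., 0.]), np.diag([0., 0., 1.])]
Delta = {1., 2.}

# psi = psi_{alpha_1} + psi_{alpha_2}, both eigenvalues in Delta.
psi = np.array([1., 1., 0.]) / np.sqrt(2)

# 1_Delta(M): measure A ideally, then ask whether the result lies in Delta.
# Each outcome alpha with lam_alpha in Delta collapses psi to one eigenvector.
collapsed = [P @ psi / np.linalg.norm(P @ psi)
             for lam, P in zip(lams, Ps) if lam in Delta]

# Ideal determination of "A in Delta": measure P^A(Delta) directly.
P_Delta = sum(P for lam, P in zip(lams, Ps) if lam in Delta)
determined = P_Delta @ psi   # result 1 yields psi -> P^A(Delta) psi
```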
\subsection{Measurements of Operators with Continuous Spectrum}
\label{subsec.mocs}
We shall now reconsider the status of measurements of self-adjoint{} operators
with continuous spectrum. First of all, we remark that while on the
weak level such measurements arise very naturally---and, as already
stressed in Section \ref{sec:MO}, are indeed the first to appear in
Bohmian mechanics---there is no straightforward extension of the notion of strong
measurement to operators with continuous spectrum.
However, for a given set of real numbers $\Delta$, one may consider any
determination of $A \in \Delta$, that is, any strong measurement of
the spectral projection $P^{A}(\Delta)$. More generally, for any
choice of a \emph{simple function}
\begin{displaymath}
f (\lambda) = \sum_{i=1}^{N} c_i\, {\sf 1}_{\Delta_i}(\lambda) ,
\end{displaymath}
one may consider the strong measurements of $f(A)$. In particular,
let $\{ f^{(n)} \}$ be a sequence of simple functions converging to
the identity, so that $f^{(n)}(A) \rightarrow A$, and let $\mbox{$\mathcal{M}$}_n $ be
measurements of $f^{(n)}(A)$. Then the $\mbox{$\mathcal{M}$}_n$ are \emph{approximate
measurements} of $A$.
Observe that the foregoing applies to operators with discrete
spectrum, as well as to operators with continuous spectrum. But note
that while on the weak level we always have
\begin{displaymath}
\mbox{$\mathcal{M}$}_n \to \mbox{$\mathcal{M}$} \,,
\end{displaymath}
where $\mbox{$\mathcal{M}$}$ is a (general) weak measurement of $A$ (in the sense of
(\ref{def:wfmg})), if $A$ has continuous spectrum $\mbox{$\mathcal{M}$}$ will not exist
as a strong measurement (in any reasonable generalized sense, since
this would imply the existence of a bounded-operator-valued function
$R_\lambda$ on the spectrum of $A$ such that
$R^{\ast}_{\lambda}R_\lambda\, d\lambda = P^A (d\lambda)$, which is
clearly impossible). In other words, in this case there can be no
actual (generalized) strong measurement that the approximate
measurements $\mbox{$\mathcal{M}$}_{n}$ approximate---which is perfectly reasonable.
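A minimal numerical sketch (ours; the diagonal stand-in for $A$ and the dyadic rounding functions $f^{(n)}$ are arbitrary choices) shows the approximation $f^{(n)}(A)\to A$ by simple functions:

```python
import numpy as np

# A self-adjoint operator with several closely spaced eigenvalues, standing
# in for an operator whose spectrum is resolved only approximately.
A = np.diag(np.linspace(0., 1., 8))

def f_n(n):
    """Simple function rounding lam down to a grid of spacing 2^-n."""
    return lambda lam: np.floor(lam * 2**n) / 2**n

# f_n(A) takes only finitely many values, so it admits strong measurements;
# as the grid is refined, f_n(A) -> A.
errors = [np.linalg.norm(np.diag(f_n(n)(np.diag(A))) - A) for n in range(1, 6)]
```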
\subsection{Sequential Measurements}
\label{sec:SeqM}
Suppose that $n$ measurements (with, for each $i$, the
$\lambda^{(i)}_{\alpha_{i}}$ distinct)
\begin{displaymath}
\mbox{$\mathcal{M}$}_{1}\equiv \{ \mbox{$\mathcal{H}$}^{(1)}_{\alpha_{1}} , \lambda^{(1)}_{\alpha_{1}} ,
R^{(1)}_{\alpha_{1}} \},\;
\dots\;,\; \mbox{$\mathcal{M}$}_{n}\equiv \{
\mbox{$\mathcal{H}$}^{(n)}_{\alpha_{n}},\lambda^{(n)}_{\alpha_{n}},
R^{(n)}_{\alpha_{n}} \}
\end{displaymath}
of operators (which need not commute)
\begin{displaymath}
A_{1}= \sum_{\alpha_{1}}\lambda^{(1)}_{\alpha_{1}}
P^{(1)}_{\alpha_{1}},\;
\dots\;,\; A_{n}= \sum_{\alpha_{n}}\lambda^{(n)}_{\alpha_{n}}
P^{(n)}_{\alpha_{n}}
\end{displaymath}
are successively performed on our system at times $0 < t_1 < t_2 < \dots
< t_n$. Assume that the duration of any single measurement is small
with respect to the time differences $t_{i}-t_{i-1}$, so that the
measurements can be regarded as instantaneous. If in between two
successive measurements the system's wave function{} changes unitarily with the
operators $U_{t}$ then, using obvious notation,
\begin{equation}
\mbox{Prob}_{\psi} (A_{1}=\lambda^{(1)}_{\alpha_{1}},\ldots , A_{n} =
\lambda^{(n)}_{\alpha_{n}} ) = \|
R^{(n)}_{\alpha_{n}}(t_{n})
\cdots\,R^{(1)}_{\alpha_{1}}(t_{1}) \, \psi\|^2 ,
\label{eq:conprop}
\end{equation}
where $R_{\alpha_{i}}^{(i)}(t) = U_{t}^{-1} R_{\alpha_{i}}^{(i)}U_{t}$ and
$\psi$ is the initial ($t=0$) wave function{}.
To understand how (\ref{eq:conprop}) comes about consider first the
case where $n=2$ and $t_2\approx t_1\approx 0$. According to standard
probability rules, the probability of obtaining the results
$Z_{1}=\lambda^{(1)}_{\alpha_{1}}$ for the first measurement and
$Z_{2}=\lambda^{(2)}_{\alpha_{2}}$ for the second one is the
product\footnote{This is so because of the \textit{conditional
independence} of the outcomes of two successive measurements
\textit{given} the final conditional wave function{} for the first measurement. More
generally, the outcome of any measurement depends only on the wave function{}
resulting {}from the preceding one. For Bohmian experiments this
independence is a direct consequence of (\ref{eq:fpfp}). One may
wonder about the status of this independence for orthodox quantum theory{}. We stress
that while this issue might be problematical for orthodox quantum theory{}, it is not a
problem for Bohmian mechanics: the conditional independence of two successive
measurements is a consequence of the theory. (For more on this
point, see \cite{DGZ92a}.) We also would like to stress that this
independence assumption is in fact crucial for orthodox quantum theory{}. Without it,
it is hard to see how one could ever be justified in invoking the
quantum formalism. Any measurement we may consider will follow many
earlier measurements.}
\begin{displaymath}
\mbox{Prob}_{\psi} (Z_{2}= \lambda^{(2)}_{\alpha_{2}}\,|\, Z_{1} =
\lambda^{(1)}_{\alpha_{1}}) \cdot
\mbox{Prob}_{\psi}(Z_{1}=\lambda^{(1)}_{\alpha_{1}})
\end{displaymath}
where the first term is the probability of obtaining
$\lambda^{(2)}_{\alpha_{2}}$ given that the result of the first
measurement is $\lambda^{(1)}_{\alpha_{1}}$. Since $\mbox{$\mathcal{M}$}_1$ then
transforms the wave function{} $\psi$ to $R^{(1)}_{\alpha_{1}}\psi$, the
(normalized) initial wave function{} for $\mbox{$\mathcal{M}$}_2$ is
${R^{(1)}_{\alpha_{1}}\psi}/{\|R^{(1)}_{\alpha_{1}}\psi\| }$, and this
probability is equal to
\begin{displaymath}
\frac{\|
R^{(2)}_{\alpha_{2}} R^{(1)}_{\alpha_{1}}\psi\|^2}{\|
R^{(1)}_{\alpha_{1}}\psi\|^2}.
\end{displaymath}
The second term, the probability of obtaining
$\lambda^{(1)}_{\alpha_{1}}$, is of course $\| R^{(1)}_{\alpha_{1}}
\psi\|^2 $. Thus
\begin{displaymath}
\mbox{Prob}_{\psi}(A_{1}=\lambda^{(1)}_{\alpha_{1}},A_{2}=
\lambda^{(2)}_{\alpha_{2}}) =\| R^{(2)}_{\alpha_{2}}
R^{(1)}_{\alpha_{1}}\psi\|^{2}
\end{displaymath}
in this case. Note that, in agreement with the analysis of discrete
experiments (see Eq.~\eq{eq:pr}), the probability of obtaining the
results $\lambda^{(1)}_{\alpha_{1}}$ and $\lambda^{(2)}_{\alpha_{2}}$
turns out to be the square of the norm of the final system wave function{}
associated with these results. Now, for general times $t_{1}$ and
$t_{2}-t_{1}$ between the preparation of $\psi$ at $t=0$ and the
performance of $\mbox{$\mathcal{M}$}_{1}$ and between $\mbox{$\mathcal{M}$}_{1}$ and $\mbox{$\mathcal{M}$}_{2}$,
respectively, the final system wave function{} is
\begin{math}
R^{(2)}_{\alpha_{2}} U_{t_{2}-t_{1}}
R^{(1)}_{\alpha_{1}}U_{t_{1}}\psi= R^{(2)}_{\alpha_{2}}U_{t_{2}}
U^{-1}_{t_{1}} R^{(1)}_{\alpha_{1}} U_{t_{1}}\psi.
\end{math}
But
\begin{math}
\|R^{(2)}_{\alpha_{2}} U_{t_{2}}U^{-1}_{t_{1}} R^{(1)}_{\alpha_{1}}
U_{t_{1}}\psi\|= \|U^{-1}_{t_{2}}R^{(2)}_{\alpha_{2}}
U_{t_{2}}U^{-1}_{t_{1}}R^{(1)}_{\alpha_{1}}U_{t_{1}}\psi\| ,
\end{math}
and it is easy to see, just as for the simple case just considered,
that the square of the latter is the probability for the corresponding
result, whence (\ref{eq:conprop}) for $n=2$. Iterating, i.e., by
induction, we arrive at (\ref{eq:conprop}) for general $n$.
We note that when the measurements $\mbox{$\mathcal{M}$}_{1},\ldots, \mbox{$\mathcal{M}$}_{n}$ are ideal,
the operators $R^{(i)}_{\alpha_{i}}$ are the orthogonal projections
$P^{(i)}_{\alpha_{i}}$, and equation (\ref{eq:conprop}) becomes the
standard formula for the joint probabilities of the results of a
sequence of measurements of quantum observables, usually known as
Wigner's formula \cite{Wig63}.
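For ideal measurements with trivial evolution between them ($U_t = I$), Wigner's formula can be evaluated directly; the following sketch (ours; a $\sigma_z$ measurement followed by a $\sigma_x$ measurement on a spin-$1/2$ system is an arbitrary choice) computes the joint probabilities $\|P^{(2)}_{\alpha_2} P^{(1)}_{\alpha_1}\psi\|^2$:

```python
import numpy as np

# Ideal measurements (R = P) of sigma_z then sigma_x; trivial evolution U = I.
sz = np.diag([1., -1.])
sx = np.array([[0., 1.], [1., 0.]])
Pz = {s: (np.eye(2) + s * sz) / 2 for s in (+1, -1)}
Px = {s: (np.eye(2) + s * sx) / 2 for s in (+1, -1)}

psi = np.array([1., 0.])          # initial wave function: sigma_z = +1

# Wigner's formula for n = 2: Prob(s1, s2) = || P^x(s2) P^z(s1) psi ||^2.
joint = {(s1, s2): np.linalg.norm(Px[s2] @ Pz[s1] @ psi) ** 2
         for s1 in (+1, -1) for s2 in (+1, -1)}
```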
It is important to observe that, even for ideal measurements, the
joint probabilities given by (\ref{eq:conprop}) are not in general a
consistent family of joint distributions: summation in
(\ref{eq:conprop}) over the outcomes of the $i$-th measurement does
not yield the joint probabilities for the results of the measurements
of the operators
\begin{math}
A_{1}, \ldots, A_{i-1},A_{i+1},\ldots, A_{n}
\end{math}
performed at the times $t_{1}, \ldots, t_{i-1}, t_{i+1}, \ldots,
t_{n}$. (By rewriting the right hand side of (\ref{eq:conprop}) as
\begin{math}
\langle \psi, R^{(1)}_{\alpha_{1}}(t_{1})^\ast \cdots
R^{(n)}_{\alpha_{n}} (t_{n})^\ast R^{(n)}_{\alpha_{n}}(t_{n}) \cdots
R^{(1)}_{\alpha_{1}}(t_{1}) \psi\rangle
\end{math}
one easily sees that the ``sum rule'' will be satisfied when $i=n$ or
if the operators $ R^{(i)}_{\alpha_{i}}(t_{i})$ commute. More
generally, the consistency is guaranteed by the ``decoherence
conditions'' of Griffiths, Omn\`es, Gell-Mann and Hartle, and
Goldstein and Page~\cite{Gri84, GMH90, GoldPage}.)
This failure of consistency means that the marginals of the joint
probabilities given by (\ref{eq:conprop}) are not themselves given by
the corresponding case of the formula. This should, however, come as
no surprise: Since performing the measurement $\mbox{$\mathcal{M}$}_{i}$ affects the
state of the system, the outcome of $\mbox{$\mathcal{M}$}_{i+1}$ should in general
depend on whether or not $\mbox{$\mathcal{M}$}_{i}$ has been performed. Note that there
is nothing particularly quantum in the fact that measurements matter
in this way: They matter even for genuine measurements (unlike those
we have been considering, in which nothing need be genuinely
measured), and even in classical physics, if the measurements are such
that they affect the state of the system.
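The failure of the sum rule is easy to exhibit numerically (our sketch, with an arbitrarily chosen setup: an intervening ideal $\sigma_x$ measurement between the preparation of a $\sigma_z$ eigenstate and an ideal $\sigma_z$ measurement, with trivial evolution):

```python
import numpy as np

sz = np.diag([1., -1.])
sx = np.array([[0., 1.], [1., 0.]])
Pz = {s: (np.eye(2) + s * sz) / 2 for s in (+1, -1)}
Px = {s: (np.eye(2) + s * sx) / 2 for s in (+1, -1)}

psi = np.array([1., 0.])          # a sigma_z = +1 eigenstate

# Sequence: ideal sigma_x at t_1, ideal sigma_z at t_2 (U = I).
# Sum the joint probabilities over the intermediate sigma_x outcome:
summed = {s2: sum(np.linalg.norm(Pz[s2] @ Px[s1] @ psi) ** 2
                  for s1 in (+1, -1))
          for s2 in (+1, -1)}

# The sigma_z distribution with no intervening measurement:
alone = {s2: np.linalg.norm(Pz[s2] @ psi) ** 2 for s2 in (+1, -1)}
```

The two distributions differ, as they must: performing the intervening measurement changes the state of the system.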
The sequences of results $
\lambda_{\alpha}\equiv(\lambda^{(1)}_{\alpha_{1}},
\ldots,\lambda^{(n)}_{\alpha_{n}}),$ the associated state
transformations $R_{\alpha} \equiv R^{(n)}_{\alpha_{n}} U_{t_n -t_{n-1}}
R^{(n-1)}_{\alpha_{n-1}} \cdots\,R^{(1)}_{\alpha_{1}} U_{t_1},$ and
the probabilities (\ref{eq:conprop}) (i.e., given by $p_\alpha = \|
R_{\alpha}\psi\|^2$) define what we shall call a \emph{sequential
measurement} of $\mbox{$\mathcal{M}$}_1, \ldots, \mbox{$\mathcal{M}$}_n$, which we shall denote by
$\mbox{$\mathcal{M}$}_{n}\otimes \cdots\otimes \mbox{$\mathcal{M}$}_{1}$. A sequential measurement does
not in general define a formal measurement, neither weak nor strong,
since $R_{\alpha}^{\ast}R_{\alpha}$ need not be a projection. This
fact might seem disturbing (see, e.g., \cite{Dav76}); we shall take up
this issue in the next section.
\subsection{Some Summarizing Remarks}
The notion of formal measurement we have explored in this section is
at the heart of the quantum formalism. It embodies the two essential
ingredients of a quantum measurement: the self-adjoint operator $A$
which represents the measured observable and the set of state
transformations $R_{\alpha}$ associated with the measured results. The
operator always carries the information about the statistics of
possible results. The state transformations prescribe how the state of
the system changes when the measurement is performed. For ideal
measurement the latter information is also provided by the operator,
but in general additional structure (the $R_{\alpha}$'s) is required.
There are some important morals to draw. \emph{The association
between measurements and operators is many-to-one:} the same
operator $A$ can be measured by many different measurements, for
example ideal, or normal but not ideal. Among the possible
measurements of $A$, we must consider all possible measurements of
commuting families of operators that include $A$, each of which may
correspond to entirely different experimental setups.
A related fact: \emph{not all measurements are ideal
measurements}.\footnote{In this regard we observe that the vague
belief in a universal collapse rule is as old, almost, as quantum
mechanics. It is reflected in von Neumann's formulation of quantum
mechanics \cite{vNe55}, based on two distinct dynamical laws: a
unitary evolution {\it between measurements\/}, and a nonunitary
evolution {\it when measurements are performed}. However, von
Neumann's original proposal \cite{vNe55} for the nonunitary
evolution---that when a measurement of $A=\sum_{\alpha}\lambda_{\alpha} P_{ {\mathcal{H}_{\alpha} } } $ is
performed upon a system in the state given by the density matrix
$W$, the state of the system after the measurement is represented by
the density matrix $$
W' = \sum_{\alpha} \sum_{\beta}\langle \phi_{\alpha
\beta}, W\phi_{\alpha \beta} \rangle P_{[\phi_{\alpha \beta}]}$$
where,
for each $\alpha$, $\{\phi_{\alpha \beta}\}$ is a basis for ${\mbox{$\mathcal{H}$}}_{\alpha}
$---does not treat the general measurement as ideal. Moreover, this
expression in general depends on the choice of the basis $\{\phi_{\alpha
\beta}\}$, and was thus criticized by L\"uders \cite{Lud51}, who
proposed the transformation $$
W \to W' = \sum_{\alpha } P_{ {\mathcal{H}_{\alpha} } } W P_{ {\mathcal{H}_{\alpha} } } \,,$$
as it
gives a {\it unique} prescription. Note that for $W=P_{[\psi]}$,
where $P_{[\psi]}$ is the projection onto the initial pure state
$\psi$, $ W'= \sum_{\alpha } p_{\alpha} P_{[ \psi_{\alpha}]}$, where $ p_\alpha =
\langle \psi, P_{ {\mathcal{H}_{\alpha} } } \psi\rangle $ and $\psi_{\alpha} = P_{ {\mathcal{H}_{\alpha} } } \psi$,
corresponding to an ideal measurement.} No argument, physical or
mathematical, suggests that ideal measurements should be regarded as
``more correct'' than any other type. In particular, the Wigner
formula for the statistics of a sequence of ideal measurements is no
more correct than the formula \eq{eq:conprop} for a sequence of more
general measurements. Granting a privileged status to ideal
measurements amounts to a drastic and arbitrary restriction on the
quantum formalism {\it qua measurement formalism}, since many (in fact
most) real world measurements would be left out.
In this regard we note that the arbitrary restriction to ideal
measurements affects the research program of ``decoherent'' or
``consistent'' histories \cite{GMH90,Omn88,Gri84}, since Wigner's
formula for a sequence of ideal measurements is unquestionably at its
basis. (It should be emphasized however that the special status
granted to ideal measurements is probably not the main difficulty with
this approach. The no-hidden-variables theorems, which we shall
discuss in Section 7, show that the totality of different families of
weakly decohering histories, with their respective probability
formulas, is genuinely inconsistent. While such inconsistency is
perfectly acceptable for a measurement formalism, it is hard to see
how it can be tolerated as the basis of what is claimed to be a
fundamental theory. For more on this, see \cite{DGZ92a, ShellyPT}.)
Schr\"odinger's equationction{The Extended Quantum Formalism} Schr\"odinger's equationtcounter{equation}{0}
As indicated in Section 2.9, the textbook quantum formalism\ is merely an
idealization. As just stressed, not all real world measurements are
ideal. In fact, in the real world the projection postulate---that when
the measurement of an observable yields a specific value, the wave function\ of
the system is replaced by its projection onto the corresponding
eigenspace---is rarely obeyed. More importantly, a great many
significant real-world experiments are simply not at all associated
with operators in the usual way. Consider for example an electron
with fairly general initial wave function, and surround the electron with a
``photographic'' plate, away {}from (the support of the wave function\ of) the
electron, but not too far away. This setup measures the position of
``escape'' of the electron {}from the region surrounded by the plate.
Notice that since in general the time of escape is random, it is not
at all clear which operator should correspond to the escape
position---it should not be the Heisenberg position operator at a
specific time, and a Heisenberg position operator at a random time has
no meaning. In fact, there is presumably no such operator, so that
for the experiment just described the probabilities for the possible
results cannot be expressed in the form \eq{eq:prdeltan}, and in fact
are not given by the spectral measure for any operator.
Time measurements, for example escape times or decay times, are
particularly embarrassing for the quantum formalism. This subject remains mired in
controversy, with various research groups proposing their own favorite
candidates for the ``time operator'' while paying little attention to
the proposals of the other groups. For an analysis of time
measurements within the framework of Bohmian mechanics, see \cite{dau97}; in this
regard see also \cite{Lea90, leavens2, leavens3, grubl}.
Because of these and other difficulties, it has been proposed that we
should go beyond operators-as-observables, to ``{\it generalized
observables\/},'' described by mathematical objects even more
abstract than operators (see, e.g., the books of Davies \cite{Dav76},
Holevo \cite{Hol82} and Kraus \cite{Kra83}). The basis of this
generalization lies in the observation that, by the spectral theorem,
the concept of self-adjoint operator is completely equivalent to that
of (a normalized) projection-valued measure (PVM), an
orthogonal-projection-valued additive set function, on the value space
$\mathbb{R}$. Orthogonal projections are among the simplest examples of
positive operators, and a natural generalization of a ``quantum
observable'' is provided by a positive-operator-valued measure (POVM):
a normalized, countably additive set function $O$ whose values are
positive operators on a Hilbert space \mbox{$\mathcal{H}$}{}. When a POVM is sandwiched
by a wave function{} it generates a probability distribution
\begin{equation}
\mu^O_\psi: \Delta\mapsto \mu^O_\psi (\Delta) \equiv \langle\psi ,
O(\Delta)\psi\rangle
\label{mupsipov}
\end{equation}
in exactly the same manner as a PVM.
\subsection{POVMs and Bohmian Experiments}
\label{secpovbe}
{}From a fundamental perspective, it may seem that we would regard
this generalization, to positive-operator-valued measures, as a step
in the wrong direction, since it supplies us with a new, much larger
class of fundamentally unneeded abstract mathematical entities far
removed {}{}from the basic ingredients of Bohmian mechanics{}. However {}{}from the
perspective of Bohmian phenomenology positive-operator-valued measures
form an extremely natural class of objects---\emph{indeed more natural
than projection-valued measures}.
To see how this comes about observe that \eq{eq:ormfin} defines a
family of bounded linear operators $R_{\alpha}$ by
\begin{equation}
P_{[\Phi_{\alpha}]}\left[ U({\psi }\otimes\Phi_0) \right] = (R_{\alpha}
\psi)\otimes\Phi_{\alpha},
\label{DISCRE}
\end{equation}
in terms of which we may rewrite the probability \eq{eq:pr} of
obtaining the result $\lambda_{\alpha}$ (distinct) in a generic discrete experiment
as
\begin{equation}
p_{\alpha} = \|\psi_{\alpha} \|^2 = \|R_{\alpha} \psi \|^2= \langle
\psi, R_{\alpha}^{\ast} R_{\alpha} \psi\rangle\, .
\label{paA}
\end{equation}
By the unitarity of the overall evolution of system and apparatus we
have that $ \sum_{\alpha} \|\psi_{\alpha} \|^2 = \sum_{\alpha}\langle \psi, R_{\alpha}^{\ast} R_{\alpha}
\psi\rangle = 1 $ for all $\psi\in \mbox{$\mathcal{H}$}$, whence
\begin{equation}
\sum_{\alpha } R_{\alpha}^{\ast} R_{\alpha} = I \, .
\label{eq:uni}
\end{equation}
The operators $ O_\alpha \equiv R_{\alpha}^{\ast} R_{\alpha} $ are obviously positive, i.e.,
\begin{equation}
\langle \psi, O_{\alpha}\psi \rangle\ge 0\qquad\mbox{for all}\quad
\psi\in \mbox{$\mathcal{H}$}
\label{eq:posoper}
\end{equation}
and by (\ref{eq:uni}) sum up to the identity,
\begin{equation}
\sum_{\alpha} O_{\alpha}= I \, .
\label{eq:sumone}
\end{equation}
Thus we may associate with a generic discrete experiment \mbox{$\mathscr{E}$}---with no
assumptions about reproducibility or anything else, but merely
\emph{unitarity}---a POVM
\begin{equation}
O (\Delta) = \sum_{\lambda_{\alpha} \in \Delta} O_\alpha \equiv \sum_{\lambda_{\alpha} \in
\Delta} R_{\alpha}^{\ast} R_{\alpha},
\label{oeslaa}
\end{equation}
in terms of which the statistics of the results can be expressed in a
compact way: the probability that the result of the experiment lies in
a set $\Delta$ is given by
\begin{equation}
\sum_{\lambda_{\alpha} \in \Delta} p_{\alpha} =
\sum_{\lambda_{\alpha} \in \Delta} \langle
\psi,O_\alpha \psi\rangle =\langle
\psi, O(\Delta) \psi\rangle \, .
\label{pro}
\end{equation}
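As a concrete check of this construction, the following NumPy sketch (the dimensions, the random operators, and the state are arbitrary illustrative choices, not taken from the text) builds a family $\{R_\alpha\}$ satisfying $\sum_\alpha R_{\alpha}^{\ast}R_{\alpha}=I$, verifies that the operators $O_\alpha = R_{\alpha}^{\ast}R_{\alpha}$ are positive and sum to the identity, and confirms that $p_\alpha=\|R_\alpha\psi\|^2=\langle\psi,O_\alpha\psi\rangle$ defines a probability distribution:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 4, 3  # illustrative system dimension and number of results

# Build a complete family {R_alpha}: take arbitrary matrices A_alpha and
# normalize them with the inverse square root of S = sum_alpha A_alpha^* A_alpha,
# so that sum_alpha R_alpha^* R_alpha = I automatically.
A = rng.normal(size=(n, d, d)) + 1j * rng.normal(size=(n, d, d))
S = sum(a.conj().T @ a for a in A)
w, V = np.linalg.eigh(S)
S_inv_sqrt = V @ np.diag(w**-0.5) @ V.conj().T
R = np.array([a @ S_inv_sqrt for a in A])

# The effects O_alpha = R_alpha^* R_alpha are positive and sum to the identity.
O = np.array([r.conj().T @ r for r in R])
assert np.allclose(sum(O), np.eye(d))
assert all(np.linalg.eigvalsh(o).min() > -1e-12 for o in O)

# For a unit vector psi, p_alpha = <psi, O_alpha psi> = ||R_alpha psi||^2
# is a probability distribution.
psi = rng.normal(size=d) + 1j * rng.normal(size=d)
psi /= np.linalg.norm(psi)
p = np.array([np.vdot(psi, o @ psi).real for o in O])
assert np.allclose(p, [np.linalg.norm(r @ psi) ** 2 for r in R])
assert np.isclose(p.sum(), 1.0)
```

The normalization by $S^{-1/2}$ mirrors the unitarity constraint \eq{eq:uni}: any collection of bounded operators can be rescaled this way into a complete family.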
Moreover, it follows {}{}from \eq{eq:ormfin} and \eq{DISCRE} that \mbox{$\mathscr{E}$}{}
generates state transformations
\begin{equation}
\psi \to \psi_{\alpha}=R_{\alpha} \psi\,.
\label{gentr}
\end{equation}
\subsection{Formal Experiments}\label{secFE}\label{subsec.comfefm}
The association between experiments and POVMs can be extended to a
general experiment (\ref{eq:generalexperiment}) in a straightforward
way. In analogy with (\ref{eq:prdeltan}) we shall say that the POVM
$O$ is associated with the experiment \mbox{$\mathscr{E}$}{} whenever the probability
distribution (\ref{eq:indumas}) of the results of \mbox{$\mathscr{E}$}{} is equal to the
probability measure (\ref{mupsipov}) generated by $O$,
i.e.,\footnote{Whenever (\ref{etoo}) is satisfied we may say that the
experiment \mbox{$\mathscr{E}$}{} is a measurement of the generalized observable $O$.
We shall however avoid this terminology in connection with
generalized observables; even when it is standard (so that we use
it), i.e., when $ O$ is a PVM and thus equivalent to a self-adjoint\
operator, it is in fact improper.}
\begin{equation}
\mbox{$\mathscr{E}$}\mapsto O \qquad\mbox{if and only if}\qquad \rho^{ Z}_{\psi}
=\mu^O_\psi\,.
\label{etoo}
\end{equation}
We may now proceed as in Section 3 and analyze on a formal level the
association (\ref{etoo}) by introducing the notions of \emph{weak} and
\emph{strong} formal experiment as the obvious generalizations of
(\ref{def:wfmg}) and (\ref{def:sfm}):
\begin{equation}
\mbox{
\begin{minipage}{0.85\textwidth}\openup 1.4\jot
\setlength{\baselineskip}{12pt}\emph{Any positive-operator-valued
measure $O$ defines the weak formal experiment $\mbox{$\mathscr{E}$}x\equiv O$. Any
set $\{\lambda_{\alpha} \}$ of not necessarily distinct real numbers (or
vectors of real numbers) paired with any collection $\{R_{\alpha}\}$ of
bounded operators on $\mbox{$\mathcal{H}$}$ such that $\sum R_{\alpha}^{\ast}R_{\alpha}=I$
defines the strong formal experiment $ \mbox{$\mathscr{E}$}x\equiv\{\lambda_{\alpha}, R_{\alpha} \}$
with associated POVM \eq{oeslaa} and state transformations
\eq{gentr}. }
\end{minipage}}
\label{def:wfe}
\end{equation}
The notion of formal experiment is a genuine extension of that of
formal measurement, the latter being the special case in which $O$ is
a PVM and the $R_{\alpha}^{\ast}R_{\alpha}$ are projections.
Formal experiments share with formal measurements many features. This
is so because all measure-theoretic properties of projection-valued
measures extend to positive-operator-valued measures. For example,
just as for PVMs, integration of real functions against a
positive-operator-valued measure is a meaningful operation that
generates self-adjoint\ operators: for a given real (and measurable) function
$f$, the operator $B=\int f(\lambda) O(d\lambda)$ is a self-adjoint{} operator
defined, say, by its matrix elements $\langle \phi,B \psi\rangle =\int
f(\lambda)\, \mu_{\phi,\psi}(d\lambda)$ for all $\phi$ and $\psi$ in $\mbox{$\mathcal{H}$}$,
where $\mu_{\phi,\psi}$ is the complex measure
$\mu_{\phi,\psi}(d\lambda) = \langle \phi,O(d\lambda) \psi\rangle$.
(We ignore the difficulties that might arise if $f$ is not bounded.)
In particular, with $O$ is associated the self-adjoint{} operator
\begin{equation}
\label{sawpov}
A_{O} \equiv \int \lambda \, O (d\lambda).
\end{equation}
It is however important to observe that this association (unlike the
case of PVMs, for which the spectral theorem provides the inverse) is
not invertible, since the self-adjoint{} operator $A_{O}$ is always associated
with the PVM provided by the spectral theorem. Thus, unlike PVMs,
POVMs are not equivalent to self-adjoint{} operators. In general, the operator
$A_{O}$ will carry information only about the mean value of the
statistics of the results,
$$
\int \lambda\;\langle \psi, O(d\lambda)\psi\rangle = \langle\psi,
A_{O}\psi\rangle \,,
$$
while for the higher moments we should expect that
$$
\int \lambda^n\;\langle \psi, O(d\lambda)\psi\rangle \neq \langle\psi,
A_{O}^n\psi\rangle \,
$$
unless $O$ is a PVM.
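The failure of the higher moments can be seen in the simplest nontrivial case. In the following numerical sketch (the two-outcome POVM below is an illustrative choice, not taken from the text), the first moment of $\mu^O_\psi$ agrees with $\langle\psi,A_O\psi\rangle$ while the second moment differs from $\langle\psi,A_O^2\psi\rangle$:

```python
import numpy as np

lams = np.array([+1.0, -1.0])
# A two-outcome POVM on C^2 that is not a PVM: the effects are not projections.
O = np.array([np.diag([1.0, 0.5]), np.diag([0.0, 0.5])])
assert np.allclose(O.sum(axis=0), np.eye(2))

A_O = sum(l * o for l, o in zip(lams, O))  # A_O = sum_a lambda_a O_a

psi = np.array([0.0, 1.0])  # unit vector concentrated on the "fuzzy" direction

def moment(k):
    """k-th moment of the result statistics: sum_a lambda_a^k <psi, O_a psi>."""
    return sum(l**k * np.vdot(psi, o @ psi).real for l, o in zip(lams, O))

# The first moment always agrees with <psi, A_O psi> ...
assert np.isclose(moment(1), np.vdot(psi, A_O @ psi).real)
# ... but the second moment does not (here 1.0 versus 0.0).
assert not np.isclose(moment(2), np.vdot(psi, A_O @ A_O @ psi).real)
```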
What we have just described is an important difference between general
formal experiments and formal measurements. This and other
differences originate {}from the fact that a POVM is a much weaker
notion than a PVM. For example, a POVM $O$ on $\mathbb{R}^m$---like ordinary
measures and unlike PVMs---need not be a product measure: If $
O_1,\ldots, O_m$ are the \emph{marginals} of $ O$,
$$
O_1(\Delta_1) = O(\Delta_1 \times \mathbb{R}^{m-1})\,,\;\ldots\,,\;
O_m(\Delta_m) = O(\mathbb{R}^{m-1} \times \Delta_m ),
$$
the product POVM $ O_1\times\cdots\times O_m$ will be in general
different {}from $ O$. (This is trivial since any probability measure
on $\mathbb{R}^m$ times the identity is a POVM.)
Another important difference between the notion of POVM and that of
PVM is this: while the projections $P(\Delta)$ of a PVM, for different
$\Delta$'s, commute, the operators $O(\Delta)$ of a generic POVM need
not commute. An illustration of how this may naturally arise is
provided by sequential measurements.
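Before turning to sequential measurements, the bare fact that the effects of a POVM need not commute can already be checked in a minimal numerical sketch, using the standard ``trine'' POVM on $\mathbb{C}^2$ (an illustrative example of our own choosing, not from the text):

```python
import numpy as np

# The "trine" POVM on C^2: three effects proportional to projections onto
# unit vectors at 120 degrees; the weights 2/3 make them sum to the identity.
vecs = [np.array([np.cos(2 * np.pi * k / 3), np.sin(2 * np.pi * k / 3)])
        for k in range(3)]
O = [(2 / 3) * np.outer(v, v) for v in vecs]

assert np.allclose(sum(O), np.eye(2))                          # normalization
assert all(np.linalg.eigvalsh(o).min() >= -1e-12 for o in O)   # positivity

# Unlike the projections of a PVM, these effects do not commute ...
assert not np.allclose(O[0] @ O[1], O[1] @ O[0])
# ... and they are not projections: O_k^2 != O_k.
assert not np.allclose(O[0] @ O[0], O[0])
```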
A sequential measurement (see Section \ref{sec:SeqM}) $\mbox{$\mathcal{M}$}_{n}\otimes
\ldots\otimes \mbox{$\mathcal{M}$}_{1}$ is indeed a very simple example of a formal
experiment that in general is not a formal measurement (see also
Davies \cite{Dav76}). We have that
$$\mbox{$\mathcal{M}$}_{n}\otimes \ldots\otimes \mbox{$\mathcal{M}$}_{1}= \{\lambda_{\alpha}, R_{\alpha}\}$$
where
$$
\lambda_{\alpha}\equiv(\lambda^{(1)}_{\alpha_{1}},
\ldots,\lambda^{(n)}_{\alpha_{n}})
$$
and
$$
R_{\alpha} \equiv R^{(n)}_{\alpha_{n}} U_{t_n -t_{n-1}}
R^{(n-1)}_{\alpha_{n-1}} \cdots\, R^{(1)}_{\alpha_{1}} U_{t_1}\,.
$$
Note that since $p_\alpha = \|R_{\alpha}\psi\|^2$ and the $p_\alpha$ sum
to $1$ for every unit $\psi$, we have that
$$
\sum_{\alpha} R_{\alpha}^{\ast}R_{\alpha} =I\,,
$$
which also follows directly using
$$
\sum_{\alpha_{j}}R^{(j)}_{\alpha_{j}}{}^{\ast} R^{(j)}_{\alpha_{j}} =
I\,,\qquad j= 1,\ldots,n\,.
$$
Now, with $\mbox{$\mathcal{M}$}_{n}\otimes \ldots\otimes \mbox{$\mathcal{M}$}_{1}$ is associated the POVM
\begin{displaymath}
O (\Delta) = \sum_{\lambda_{\alpha} \in \Delta} R_{\alpha}^{\ast}R_{\alpha} \,.
\end{displaymath}
Note that $O(\Delta)$ and $O(\Delta ')$ in general don't commute since
in general $R_{\alpha}$ and $R_{\beta}$ may fail to do so.
An interesting class of POVMs for which $O(\Delta)$ and $O(\Delta ')$
do commute arises in association with the notion of an
``\emph{approximate measurement}'' of a self-adjoint{} operator: suppose that
the result $Z$ of a measurement $\mbox{$\mathcal{M}$}=P^A$ of a self-adjoint{} operator $A$ is
distorted by the addition of an independent noise $N$ with symmetric
probability distribution $\eta (\lambda)$. Then the result $Z+N$ of
the experiment, for initial system wave function{} $\psi$, is distributed
according to $$\Delta \mapsto \int_{\Delta}\int_{\mathbb{R}} \eta(\lambda -
\lambda ') \langle \psi, P^A (d\lambda ')\psi\rangle \, d\lambda \, ,
$$
which can be rewritten as
$$
\Delta \mapsto \langle\psi,\int_{\Delta} \eta(\lambda -A)
d\lambda\; \psi\rangle\, .$$
Thus the result $Z+N$ is governed by the
POVM
\begin{equation}
O (\Delta)=\int_{\Delta} \eta(\lambda
-A)\, d\lambda \, .
\label{eq:appro}
\end{equation}
The formal experiment defined by this POVM can be regarded as
providing an approximate measurement of $A$. For example, let
\begin{equation}
\eta (\lambda) = \frac{1}{\sigma\sqrt{2\pi}}
e^{-\frac{\lambda^2}{2\,\sigma^2}}\, .
\label{gauss}
\end{equation}
Then for $\sigma\to 0$ the POVM (\ref{eq:appro}) becomes the PVM of
$A$ and the experiment becomes a measurement of $A$.
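Both behaviors can be verified numerically. In the sketch below, $A$ is an arbitrary $2\times 2$ diagonal matrix (an illustrative choice); the smeared effects $O(\Delta)=\int_\Delta \eta(\lambda-A)\,d\lambda$ commute, fail to be projections for $\sigma>0$, and approach the spectral projections of $A$ as $\sigma\to 0$:

```python
import numpy as np
from math import erf, sqrt

# A = diag(0, 1): an illustrative self-adjoint operator on C^2.
eigs = np.array([0.0, 1.0])

def effect(lo, hi, sigma):
    """O([lo, hi)) = int_{lo}^{hi} eta(lambda - A) dlambda for Gaussian eta:
    a function of A, hence diagonal in the eigenbasis of A."""
    cdf = lambda x: 0.5 * (1 + erf(x / sqrt(2)))
    return np.diag([cdf((hi - a) / sigma) - cdf((lo - a) / sigma)
                    for a in eigs])

E1 = effect(-0.5, 0.5, sigma=0.3)
E2 = effect(0.5, 1.5, sigma=0.3)

assert np.allclose(E1 @ E2, E2 @ E1)   # effects commute (functions of A)
assert not np.allclose(E1 @ E1, E1)    # but are not projections
# As sigma -> 0 the effect approaches the spectral projection onto eig 0.
assert np.allclose(effect(-0.5, 0.5, sigma=1e-6), np.diag([1.0, 0.0]))
```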
Concerning the POVM (\ref{eq:appro}) we wish to make two remarks. The
first is that the $O(\Delta)$'s commute since they are all functions
of $A$. The second is that this POVM has a continuous density, i.e.,
$$
O(d\lambda) = o(\lambda)\, d\lambda\qquad\mbox{where}\qquad
o(\lambda)= \eta(\lambda -A)\,. $$
This is another difference between
POVMs and PVMs: like ordinary measures and unlike PVMs, POVMs may have
a continuous density. The reason this is possible for POVMs is that,
for a POVM $O$, unlike for a PVM, given $\psi\in \mbox{$\mathcal{H}$}$, the vectors $
O(\Delta)\psi$ and $ O(\Delta ')\psi$, for $\Delta$ and $\Delta '$
disjoint and arbitrarily small, need not be orthogonal. Otherwise, no
density $o(\lambda)$ could exist, because this would imply that there
is a continuous family $\{o(\lambda)\psi\}$ of orthogonal vectors in
$\mbox{$\mathcal{H}$}$.
Finally, we observe that unlike strong measurements, the notion of
strong formal experiment can be extended to a POVM with continuous
spectrum (see Section \ref{subsec.mocs}). One may in fact define a
strong experiment by $\mbox{$\mathscr{E}$}x =\{\lambda, R_{\lambda}\}$, where $\lambda
\mapsto R_{\lambda}$ is a continuous \emph{bounded-operator-valued function}
such that $\int R_{\lambda}^{\ast} R_{\lambda} \,d\lambda \,=\,I $. Then the statistics
for the results of such an experiment is governed by the POVM
$O(d\lambda) \equiv R_{\lambda}^{\ast} R_{\lambda}\, d\lambda$. For example, let
$$
R_\lambda = \xi\, ( \lambda -A) \quad\mbox{where}\quad \xi\,
(\lambda) = \frac{1}{\sqrt{\sigma}\sqrt[4]{2\pi}}
e^{-\frac{\lambda^2}{4\,\sigma^2}}\,.
$$
Then $O(d\lambda) = R_{\lambda}^{\ast} R_{\lambda} \,d\lambda$ is the POVM
(\ref{eq:appro}) with $\eta$ given by (\ref{gauss}). We observe that
the state transformations (cf. the definition \eq{eq:con} of the
conditional wave function{})
\begin{equation}
\psi \to R_{\lambda}\psi= \frac{1}{\sqrt{\sigma}\sqrt[4]{2\pi}}
e^{-\frac{(\lambda -A)^2}{4\,\sigma^2}} \psi
\label{eq:aha}
\end{equation}
can be regarded as arising {}from a von Neumann interaction with
Hamiltonian (\ref{vontrans}) (and $\gamma T=1$) and ready state of the
apparatus
$$
\Phi_{0}(y) = \frac{1}{\sqrt{\sigma}\sqrt[4]{2\pi}}
e^{-\frac{y^2}{4\,\sigma^2}}.
$$
Experiments with state transformations (\ref{eq:aha}), for large
$\sigma$, have been considered by Aharonov and coworkers (see, e.g.,
Aharonov, Anandan, and Vaidman \cite{AAV93}) as providing ``weak
measurements'' of operators. (The effect of the measurement on the
state of the system is ``small'' if $\sigma$ is sufficiently large).
This terminology notwithstanding, it is important to observe that such
experiments are not measurements of $A$ in the sense we have discussed
here. They give information about the average value of $A$, since $
\int \lambda\;\langle \psi,R_{\lambda}^{\ast} R_{\lambda}\, \psi\rangle\,d\lambda = \langle\psi,
A\psi\rangle $, but presumably none about its higher moments.
\subsection{From Formal Experiments to Experiments}\label{subsec.ffete}
\label{subsec.repexp}
Just as with a formal measurement (see Section \ref{subsec.exp}), with
a formal experiment $ \mbox{$\mathscr{E}$}x\equiv\{\lambda_{\alpha}, R_{\alpha} \}$, we may associate a
discrete experiment \mbox{$\mathscr{E}$}{}. The unitary map (\ref{eq:ormfin}) of \mbox{$\mathscr{E}$}{}
will be given again by (\ref{standu}), i.e.,
\begin{equation}
U: \;\psi \otimes \Phi_0 \mapsto
\sum_{\alpha} (R_{\alpha}\psi) \otimes \Phi_{\alpha},
\label{standun}
\end{equation}
but now the $R_{\alpha}^{\ast}R_{\alpha}$ of course need not be projections. The unitarity of
$U$ follows immediately {}from the orthonormality of the $\Phi_{\alpha}$ using
$\sum R_{\alpha}^{\ast} R_{\alpha} = I $. (Note that with a weak formal experiment
$\mbox{$\mathscr{E}$}x\equiv O=\{O_{\alpha}\}$ we may associate many inequivalent discrete
experiments, defined by \eq{standun} with operators $R_{\alpha}\equiv
U_{\alpha} \sqrt{O_{\alpha}}$, for \emph{any} choice of unitary operators
$U_{\alpha}$.)
We shall now discuss a concrete example of a discrete experiment
defined by a formal experiment, which will allow us to make some
further comments on the issue of reproducibility discussed in Section
\ref{sec:RC}.
Let $\{ \dots, e_{-1},e_0,e_1,\dots \}$ be an orthonormal basis in the
system Hilbert space \mbox{$\mathcal{H}$}, let $P_{-}\,,P_0\,, P_{+}$ be the orthogonal
projections onto the subspaces $\widetilde{\mathcal{H}}_{-}$, $\mbox{$\mathcal{H}$}_0$,
$\widetilde{\mathcal{H}}_{+}$ spanned by $\{e_\alpha\}_{\alpha < 0}$, $\{e_0\}$,
$\{e_\alpha\}_{\alpha >0}$ respectively, and let $V_+$, $V_-$ be the right and
left shift operators,
$$V_+ e_{\alpha} = e_{\alpha+1}\,, \qquad V_-e_\alpha = e_{\alpha-1}\,.$$
Consider the
strong formal experiment \mbox{$\mathscr{E}$}x\ with the two possible results
$\lambda_{\pm}=\pm 1$ and associated state transformations
\begin{equation}
R_{\pm1} = V_{\pm}(P_{\pm} +\frac{1}{\sqrt{2}}P_0).
\label{eq:pove}
\end{equation}
Then the unitary $U$ of the corresponding discrete experiment \mbox{$\mathscr{E}$}{} is
given by
$$
U: \;\psi \otimes \Phi_0 \to R_{-}\psi \otimes \Phi_{-} + R_{+}\psi \otimes
\Phi_{+},$$
where $\Phi_0$ is the ready state of the apparatus and
$\Phi_{\pm}$ are the apparatus states associated with the results $\pm
1$. If we now consider the action of $U$ on the basis vectors
$e_{\alpha}$,
\begin{eqnarray}
U(e_\alpha\otimes\Phi_0)&=&e_{\alpha + 1}\otimes \Phi_{+}\qquad \mbox{for $\alpha>0$}
\nonumber\\
U(e_\alpha\otimes\Phi_0)&=&e_{\alpha - 1}\otimes \Phi_{-}\qquad \mbox{for $\alpha
<0$}
\nonumber\\
U(e_0\otimes\Phi_0)&=&
\frac{1}{\sqrt{2}}(e_1\otimes\Phi_{+}
+e_{-1}\otimes\Phi_{-})\, ,\nonumber
\end{eqnarray}
we see immediately that $$U(\widetilde{\mathcal{H}}_{\pm}\otimes\Phi_0)
\subset \widetilde{\mathcal{H}}_{\pm}\otimes\Phi_{\pm1}.$$
Thus
(\ref{eq:repconold}) is satisfied and \mbox{$\mathscr{E}$}{} is a reproducible
experiment. Note however that the POVM $ O = \{ O_{-1}, O_{+1}\}$
associated with (\ref{eq:pove}),
$$
O_{\pm1} ={R}_{\pm1}^{\ast}{R}_{\pm1} = P_{\pm} +
\frac{1}{2}P_0\,,
$$
is not a PVM since the positive operators $ O_{\pm 1}$ are not
projections, i.e., $ O_{\pm 1}^2 \ne O_{\pm 1}$. Thus \mbox{$\mathscr{E}$}{} is not a
measurement of any self-adjoint operator, which shows that without the
assumption of the finite dimensionality of the subspaces
$\widetilde{\mathcal{H}}_{\alpha}$ a reproducible discrete experiment need
not be a measurement of a self-adjoint operator.
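The effects of this example are easy to inspect on a truncated basis $e_{-n},\ldots,e_n$. The following finite-dimensional sketch ignores the boundary behavior of the truncated shifts $V_{\pm}$ and examines only the effects $O_{\pm 1}=P_{\pm}+\frac{1}{2}P_0$ themselves:

```python
import numpy as np

n = 3
idx = np.arange(-n, n + 1)  # truncated index set for the basis e_alpha

# Orthogonal projections onto the negative, zero, and positive index spaces.
P_minus = np.diag((idx < 0).astype(float))
P_zero = np.diag((idx == 0).astype(float))
P_plus = np.diag((idx > 0).astype(float))

O_plus = P_plus + 0.5 * P_zero
O_minus = P_minus + 0.5 * P_zero

# The two effects form a POVM ...
assert np.allclose(O_plus + O_minus, np.eye(2 * n + 1))
# ... but neither is a projection, so {O_{-1}, O_{+1}} is not a PVM.
assert not np.allclose(O_plus @ O_plus, O_plus)
```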
\subsection{Measure-Valued Quadratic Maps}\label{sec:mvqf}
We conclude this section with a remark about POVMs. Via \eq{mupsipov}
every POVM $O$ defines a ``normalized quadratic map'' {}from \mbox{$\mathcal{H}$}{} to
measures on some space (the value-space for the POVM). Moreover, every
such map comes {}from a POVM in this way. Thus the two notions are
equivalent:
\begin{equation}
\mbox{
\begin{minipage}{0.85\textwidth}\openup 1.4\jot
\setlength{\baselineskip}{12pt}\emph{ \eq{mupsipov} defines a
canonical one-to-one correspondence between POVMs and normalized
measure-valued quadratic maps on \mbox{$\mathcal{H}$}. }
\end{minipage}}
\label{def:mvqm}
\end{equation}
To say that a measure-valued map on \mbox{$\mathcal{H}$}{}
\begin{equation}
\psi \mapsto \mu_{\psi}
\label{eq:qumap}
\end{equation}
is quadratic means that
\begin{equation}
\mu_{\psi}= B(\psi, \psi)
\label{eq:qumapb}
\end{equation}
is the diagonal part of a sesquilinear map $B$, {}from $\mbox{$\mathcal{H}$}\times\mbox{$\mathcal{H}$}$ to
the complex measures on some value space $\Lambda$. If $B(\psi, \psi)$
is a probability measure whenever $\|\psi\| =1$, we say that the
map is normalized.\footnote{A sesquilinear map $B(\phi,\psi)$ is one
that is linear in the second slot and conjugate linear in the first:
\begin{eqnarray}
B(\phi, \alpha \psi_1 +\beta\psi_2) &=& \alpha B(\phi,\psi_1)+\beta
B(\phi,\psi_2)
\nonumber\\
B(\alpha\phi_1 +\beta\phi_2,\psi)&=&\bar {\alpha} B(\phi_1,\psi)+\bar
{\beta}B(\phi_2,\psi) \,. \nonumber \end{eqnarray}
Clearly any such normalized $B$ can be chosen to be conjugate
symmetric,
$ B(\psi, \phi)= \overline{B(\phi, \psi)}$,
without affecting its diagonal, and it follows
{}from polarization that any such $B$ must in fact
\emph{be} conjugate symmetric.}
Proposition (\ref{def:mvqm}) is a consequence of the following
considerations: For a given POVM $O$ the map $\psi \mapsto
\mu_{\psi}^O$, where $ \mu^O_\psi (\Delta) \equiv \langle\psi ,
O(\Delta)\psi\rangle$, is manifestly quadratic, with $B(\phi,\psi) =
\langle\phi , O(\cdot)\psi\rangle$, and it is obviously normalized.
Conversely, let $\psi \mapsto \mu_{\psi}$ be a normalized
measure-valued quadratic map, corresponding to some $B$, and write
$B_\Delta (\phi,\psi)= B (\phi,\psi)[\Delta]$ for the complex measure
$B$ at the Borel set $\Delta$. By the Schwarz inequality, applied to
the positive form $ B_\Delta (\phi,\psi) $, we have that $ |B_\Delta
(\phi,\psi)|\le \|\psi\| \|\phi\| $. Thus, using Riesz's
lemma \cite{RS80}, there is a unique bounded operator $ O(\Delta)$ on
\mbox{$\mathcal{H}$}\ such that
$$
B_\Delta(\phi,\psi) = \mbox{\boldmath $\lambda $} dabda_{\alpha}ngle\phi, O(\Delta)\psi\rangle .
$$
Moreover, $ O(\Delta) $, like $B_\Delta$, is countably additive in
$\Delta$, and since $B (\psi,\psi)$ is a (positive) measure, $O$ is a
positive-operator-valued measure, normalized because $B$ is.
A simple example of a normalized measure-valued quadratic map is
\begin{equation}
\label{qem}
\Psi\mapsto \rho^{\Psi} (dq) = |\Psi|^2 dq \, ,
\end{equation}
whose associated POVM is the PVM $P^{\hat{Q}}$ for the position
(configuration) operator
\begin{equation}
{\hat Q}\Psi(q) = q\Psi(q)\,.
\label{eq:posiope}
\end{equation}
Note also that if the quadratic map $\mu_\psi$ corresponds to the POVM
$O$, then, for any unitary $U$, the composite map
$\psi\mapsto\mu_{_{U\psi}}$ corresponds to the POVM $U^*OU$, since $
\langle U\psi, O(\Delta)U\psi\rangle = \langle\psi,
U^*O(\Delta)U\psi\rangle$. In particular for the map \eq{qem} and
$U=U_T$, the composite map corresponds to the PVM $P^{\hat{Q}_T}$,
with $ \hat{Q}_T= U^*\hat{Q} U $, the Heisenberg position
(configuration) at time $T$, since $ U_T^* P^{\hat{Q}} U_T = P^{U_T^*
\hat{Q} U_T } $.
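This covariance property is a simple identity, and the following sketch (with an arbitrary diagonal POVM and a random unitary, both illustrative choices) confirms it numerically:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 3
# An arbitrary diagonal POVM with two effects summing to the identity.
O = [np.diag([0.5, 0.2, 0.1]), np.diag([0.5, 0.8, 0.9])]
assert np.allclose(sum(O), np.eye(d))

# A random unitary from the QR decomposition of a complex Gaussian matrix.
U, _ = np.linalg.qr(rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)))

psi = rng.normal(size=d) + 1j * rng.normal(size=d)
psi /= np.linalg.norm(psi)

# <U psi, O(Delta) U psi> = <psi, U^* O(Delta) U psi> for every effect.
for o in O:
    assert np.isclose(np.vdot(U @ psi, o @ (U @ psi)),
                      np.vdot(psi, U.conj().T @ o @ U @ psi))
```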
Schr\"odinger's equationction{The General Emergence of Operators}\mbox{\boldmath $\lambda $} dabda_{\alpha}bel{secGEO}\mbox{\boldmath $\lambda $} dabda_{\alpha}bel{5}
Schr\"odinger's equationtcounter{equation}{0} \mbox{\boldmath $\lambda $} dabda_{\alpha}bel{GEBM}
For Bohmian mechanics\ POVMs emerge naturally, not for discrete experiments, but for
a general experiment (\ref{eq:generalexperiment}). To see how this
comes about consider the probability measure (\ref{eq:indumas}) giving
the probability distribution of the result ${Z}= F(Q_T)$ of the
experiment, where $Q_T$ is the final configuration of system and
apparatus and $F$ is the calibration function expressing the numerical
result, for example the orientation $\Theta$ of a pointer. Then the
map
\begin{equation}
\psi \mapsto \rho^{Z}_{\psi} = \rho_{\Psi_{T}} \circ F^{-1},
\label{eq:basquamap}
\end{equation}
{}from the initial wave function{} of the system to the probability distribution
of the result, is quadratic since it arises {}from the sequence of
maps
\begin{equation}
\psi \mapsto
\Psi = \psi \otimes \Phi_0 \mapsto
\Psi_T = U( \psi \otimes \Phi_0) \mapsto \rho_{\Psi_{T}}(dq) =
\Psi_T^{*} \Psi_T dq
\mapsto \rho^{Z}_{\psi} = \rho_{\Psi_{T}} \circ F^{-1},
\label{seqmap}
\end{equation}
where the middle map, to the quantum equilibrium\ distribution, is obviously
quadratic, while all the other maps are linear, all but the second
trivially so. Now, by (\ref{def:mvqm}), the notion of such a
quadratic map (\ref{eq:basquamap}) is completely equivalent to that of
a POVM on the system Hilbert space \mbox{$\mathcal{H}$}{}. (The sesquilinear map $B$
associated with \eq{seqmap} is $B(\psi_1,\psi_2)= \Psi_{1\,T}^{*}
\Psi_{2\,T} dq \circ F^{-1}$, where $\Psi_{i\,T}= U (\psi_i \otimes
\Phi_0)$.)
Thus the emergence and role of POVMs as generalized observables in
Bohmian mechanics\ is merely an expression of the sesquilinearity of quantum equilibrium\ together
with the linearity of the Schr\"{o}dinger{} evolution. Indeed, the fact that with
every experiment is associated a POVM, which forms a compact
expression of the statistics for the possible results, is a near
mathematical triviality. It is therefore rather dubious that the
occurrence of POVMs---the simplest case of which is that of PVMs---as
observables can be regarded as suggesting any deep truths about
reality or about epistemology.
An explicit formula for the POVM defined by the quadratic map
(\ref{eq:basquamap}) follows immediately {}from (\ref{seqmap}):
$$
\rho^{Z}_{\psi}(d\lambda) = \langle\psi\otimes\Phi_{0}, U^{*}
P^{\hat{Q}}(F^{-1}(d\lambda)) U\, \psi\otimes\Phi_{0}\rangle =
\langle\psi\otimes\Phi_{0}, P_0U^{*} P^{\hat{Q}}(F^{-1}(d\lambda))
UP_0\, \psi\otimes\Phi_{0}\rangle
$$
where $P^{\hat{Q}}$ is the PVM for the position (configuration)
operator (\ref{eq:posiope}) and $P_{0}$ is the projection onto
$\mbox{$\mathcal{H}$}\otimes\Phi_{0}$, whence
\begin{equation}
O(d\lambda) = 1_{\Phi_0}^{-1}
P_{0}\, U^{*} P^{\hat{Q}}(F^{-1}(d\lambda)) U P_0 1_{\Phi_0}\,,
\label{eq:genpovm}
\end{equation}
where $ 1_{\Phi_0}\psi = \psi\otimes\Phi_0 $ is the natural identification
of \mbox{$\mathcal{H}$}{} with $\mbox{$\mathcal{H}$}\otimes\Phi_0$. This is the obvious POVM reflecting the
essential structure of the experiment.\footnote{This POVM can also be
written as
\begin{equation}
O(d\lambda) = \mbox{${\rm tr}\,$}_A\left[
P_{0}\, U^{*} P^{\hat{Q}}(F^{-1}(d\lambda)) U \right],
\label{eq:pto}
\end{equation}
where $\mbox{${\rm tr}\,$}_A$ is the partial trace over the apparatus variables. The
partial trace is a map $\mbox{${\rm tr}\,$}_{A}\,:\, W \mapsto \mbox{${\rm tr}\,$}_{A}(W)$, {}from
trace class operators on the Hilbert space $\mbox{$\mathcal{H}$}_{S}\otimes\mbox{$\mathcal{H}$}_{A}$ to
trace class operators on $\mbox{$\mathcal{H}$}_{S}$, uniquely defined by $ \mbox{${\rm tr}\,$}_{S} (
\mbox{${\rm tr}\,$}_{A}(W) B)= \mbox{${\rm tr}\,$}_{S+A} (W B\otimes I)$, where $\mbox{${\rm tr}\,$}_{S+A}$ and
$\mbox{${\rm tr}\,$}_{S}$ are the usual (scalar-valued) traces of operators on
$\mbox{$\mathcal{H}$}_{S}\otimes\mbox{$\mathcal{H}$}_{A}$ and $\mbox{$\mathcal{H}$}_{S}$, respectively. For a trace class
operator $B$ on $L^2(dx)\otimes L^2(dy)$ with kernel $B(x,y, x',y')$
we have $\mbox{${\rm tr}\,$}_{A}\left(B\right) (x,x') = \int\, B(x,y, x',y) dy .$ In
\eq{eq:pto} $\mbox{${\rm tr}\,$}_A$ is applied to operators that need not be trace
class---nor need the operator on the left be trace class---since,
e.g., $O(\Lambda)= I$. The formula nonetheless makes sense. }
Note that the POVM \eq{eq:genpovm} is unitarily equivalent to
\begin{equation}
P_0 P^{F(\hat{Q}_T)}(d\lambda) P_0
\label{eq:genpovmue}
\end{equation}
where $\hat{Q}_T$ is the Heisenberg configuration of system and
apparatus at time $T$. This POVM, acting on the subspace
$\mbox{$\mathcal{H}$}\otimes\Phi_{0}$, is the projection to that subspace of a PVM, the
spectral projections for $F(\hat{Q}_T)$. Naimark has shown (see, e.g.,
\cite{Dav76}) that every POVM is equivalent to one that arises in this
way, as the orthogonal projection of a PVM to a subspace.\footnote{If
$O(d\lambda)$ is a POVM on $\Sigma$ acting on \mbox{$\mathcal{H}$}{}, then the Hilbert
space on which the corresponding PVM acts is the natural Hilbert
space associated with the data at hand, namely $L^2(\Sigma, \mbox{$\mathcal{H}$},
O(d\lambda))$, the space of \mbox{$\mathcal{H}$}-valued functions $\psi(\lambda)$ on
$\Sigma$, with inner product given by $\int \langle\psi(\lambda),
O(d\lambda)\phi(\lambda)\rangle$. (If this is not, in fact, positive
definite, then the quotient with its kernel should be
taken---$\psi(\lambda)$ should, in other words, be understood as the
appropriate equivalence class.) Then $ O(d\lambda)$ is equivalent to
$PE(d\lambda)P$, where $E(\Delta) =\hat{{\sf 1}}_{\Delta}(\lambda)$,
multiplication by ${\sf 1}_{\Delta}(\lambda)$, and $P$ is the
orthogonal projection onto the subspace of constant \mbox{$\mathcal{H}$}-valued
functions $\psi(\lambda)=\psi$.}
We shall now illustrate the association of POVMs with experiments by
considering some special cases of (\ref{seqmap}).
\subsection{``No Interaction'' Experiments}
\label{sec:nie}
Let $U=U_{S} \otimes U_{A}$ in (\ref{seqmap}) (hereafter the indices
``$S$'' and ``$A$'' shall refer, respectively, to system and
apparatus). Then for $F(x,y)=y$ the measure-valued quadratic map
defined by (\ref{seqmap}) is
$$
\psi\mapsto c(y) \|\psi\|^2 dy
$$
where $c(y) = |U_{A}\Phi_{0}|^{2}(y)$, with POVM $O_1(dy)= c(y) dy
\;I_S$, while for $F(q)=q=(x,y)$ the map is
$$
\psi \mapsto c(y)\, |U_{S}\psi|^{2}(x)\, dq$$
with corresponding
POVM $O_2(dq) = c(y)\, U_{S}^{\ast} P^{\hat X}(dx)U_{S}\, dy$.
Neither $O_1$ nor $O_2$ is a PVM. However, if $F$ is independent of
$y$, $F(x,y) =F(x)$, then the apparatus can be ignored in
(\ref{seqmap}) or (\ref{eq:genpovm}) and $O= U_{S}^{*}
P^{\hat{X}}U_{S}\circ F^{-1}$, i.e.,
$$
O(d\lambda) = U_{S}^{*} P^{\hat{X}} (F^{-1}(d\lambda))U_{S} \, ,
$$
which is manifestly a PVM---in fact corresponding to $
F(\hat{X}_T)$, where $\hat{X}_T$ is the Heisenberg configuration of
the system at the end of the experiment.
This case is somewhat degenerate: with no interaction between system
and apparatus it hardly seems anything like a measurement. However,
it does illustrate that it is ``true'' POVMs (i.e., those that aren't
PVMs) that typically get associated with experiments---i.e., unless
some special conditions hold (here that $F=F(x)$).
\subsection{``No $X$'' Experiments}\label{noXexp}
The map (\ref{seqmap}) is well defined even when the system (the
$x$-system) has no translational degrees of freedom, so that there is
no $x$ (or $X$). This will be the case, for example, when the system
Hilbert space $\mbox{$\mathcal{H}$}_S$ corresponds to the spin degrees of freedom. Then
$\mbox{$\mathcal{H}$}_S=\mathbb{C}^n$ is finite dimensional.
In such cases, the calibration $F$ of course is a function of $y$
alone, since there is no $x$. For $F=y$ the measure-valued quadratic
map defined by (\ref{seqmap}) is
\begin{equation}
\psi \mapsto |
[U(\psi\otimes\Phi_{0})](y)|^{2} dy\,,
\label{eq:nox}
\end{equation}
where $|\cdots |$ denotes the norm in $\mathbb{C}^n$.
This case is physically more interesting than the previous one, though
it might appear rather puzzling since until now our measured systems
have always involved configurations. After all, without
configurations there is no Bohmian mechanics! However, what is
relevant {}from a Bohmian perspective is that the composite of system
and apparatus be governed by Bohmian mechanics{}, and this may well be the case if
the apparatus has configurational degrees of freedom, even if what is
called the system doesn't. Moreover, this case provides the prototype
of many real-world experiments, e.g., spin measurements.
For the measurement of a spin component of a spin--$1/2$
particle---recall the description of the Stern-Gerlach experiment
given in Section \ref{secSGE}---we let $\mbox{$\mathcal{H}$}_{S}= \mathbb{C}^2$, the spin
space, with ``apparatus'' configuration $y= {\bf x}$, the position of
the particle, and with suitable calibration $F({\bf x}) $. (For a
real world experiment there would also have to be a genuine
apparatus---a detector---that measures where the particle
\emph{actually is} at the end of the experiment, but this would not in
any way affect our analysis. We shall elaborate upon this below.)
The unitary $U$ of the experiment is the evolution operator up to time
$T$ generated by the Pauli Hamiltonian (\ref{sgh}), which under the
assumption (\ref{consg}) becomes
\begin{equation}
H = -\frac{\hbar^{2}}{2m} \boldsymbol{\nabla}^{2} - ( b+ az) \sigma_{z}
\label{eq:pahamagain}
\end{equation}
Moreover, as in Section \ref{secSGE}, we shall assume that the initial
particle wave function{} has the form $\Phi_{0}({\bf x})= \Phi_{0}(z)
\phi(x,y)$.\footnote{We abuse notation here in using the notation $ y
= {\bf x} = (x,y,z)$. The $y$ on the right should of course not be
confused with the one on the left.} Then for $F({\bf x}) = z$ the
quadratic map (\ref{seqmap}) is
\begin{eqnarray*}
\psi &\mapsto& \left(
|\langle\psi^+, \psi\rangle|^2 |\Phi^{(+)}_{T}(z)|^{2} +
|\langle\psi^-, \psi\rangle|^2 |\Phi^{(-)}_{T}(z)|^{2} \right)dz\\
&=& \left\langle\psi\,,\;
|\psi^+\rangle\langle\psi^+|\,|\Phi^{(+)}_{T}(z)|^{2}
+
|\psi^-\rangle\langle\psi^-|\,|\Phi^{(-)}_{T}(z)|^{2}\;
\psi \right\rangle\, dz
\end{eqnarray*}
with POVM
\begin{equation}
O(dz)\; = \;
\left( \begin{array}{cc} |\Phi^{(+)}_{T}(z)|^{2} & 0 \\ 0 &
|\Phi^{(-)}_{T}(z)|^{2} \end{array} \right) \, dz \, ,
\label{eq:povmspin}
\end{equation}
where $\psi^{\pm}$ are the eigenvectors (\ref{eq:spinbasis}) of
$\sigma_{z}$ and $\Phi^{(\pm)}_{T}$ are the solutions of
(\ref{eq:SGequ}) computed at $t=T$, for initial conditions
${\Phi_0}^{(\pm)}=\Phi_0(z)$.
Consider now the appropriate calibration for the Stern-Gerlach
experiment, namely the function
\begin{equation}
F({\bf x}) =\begin{cases} +1 & \text{if $z>0$},\\
-1& \text{if $z<0$}
\end{cases}
\label{eq:rigcal}
\end{equation}
which assigns to the outcomes of the experiment the desired numerical
results: if the particle goes up in the $z$-direction the spin is $+1$,
while if the particle goes down the spin is $-1$. The corresponding
POVM $O_{T}$ is defined by
$$
O_{T}(+1) \,=\, \left( \begin{array}{cc} p_{T}^{+} & 0 \\ 0 &
p_{T}^- \end{array} \right) \qquad O_{T}(-1) \,=\, \left(
\begin{array}{cc} 1-p_{T}^{+} & 0 \\ 0 & 1-p_{T}^-
\end{array} \right)
$$
where $$
p_{T}^+ = \int_0^\infty|{\Phi_T}^{(+)}|^2(z)dz,\qquad
p_{T}^- = \int_0^\infty |{\Phi_T}^{(-)}|^2(z)dz\, .$$
It should be noted that $O_{T}$ is not a PVM. However, as indicated in
Section \ref{secSGE}, as $T\to\infty$, $p_{T}^+\to 1$ and $p_{T}^-\to
0$, and the POVM $O_{T}$ becomes the PVM of the operator $\sigma_{z}$,
i.e., $O_{T}\to P^{\sigma_{z}}$, defined by
\begin{equation}
\label{opm}
P(+1) \,=\,
\left( \begin{array}{cc} 1 & 0 \\ 0 & 0 \end{array} \right) \qquad
P(-1)
\,=\, \left( \begin{array}{cc} 0 & 0 \\ 0 & 1 \end{array} \right)
\end{equation}
and the experiment becomes a measurement of the operator $\sigma_{z}$.
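The approach of $O_T$ to the PVM $P^{\sigma_z}$ can be illustrated numerically. The sketch below is our own illustration, not part of the analysis above: it assumes (hypothetically) Gaussian packets $\Phi_T^{(\pm)}$ of initial width \texttt{sigma0}, accelerated by the constant force $\pm a$, with $\hbar=m=1$ by default, so that $p_T^{\pm}$ has a closed form in terms of the error function.

```python
import math

def p_T(T, sign=+1, a=1.0, m=1.0, hbar=1.0, sigma0=1.0):
    """Probability that the packet Phi_T^(sign) lies in z > 0 at time T.

    Illustrative assumption: a Gaussian packet of initial width sigma0,
    driven by the constant force sign*a (units hbar = m = 1 by default).
    """
    zc = sign * a * T**2 / (2.0 * m)   # classically accelerated center
    sigma_T = sigma0 * math.sqrt(1.0 + (hbar * T / (2.0 * m * sigma0**2))**2)
    # probability mass of a Gaussian centered at zc, width sigma_T, in z > 0
    return 0.5 * (1.0 + math.erf(zc / (math.sqrt(2.0) * sigma_T)))

# p_T^+ -> 1 and p_T^- -> 0 as T grows, so O_T approaches P^{sigma_z}
for T in (0.5, 2.0, 10.0):
    print(T, p_T(T, +1), p_T(T, -1))
```

The drift of the packet center grows like $T^2$ while the spreading grows only like $T$, which is why the overlap probabilities converge.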
\subsection{``No $Y$'' Experiments}
Suppose now that the ``apparatus'' involves no translational degrees of
freedom, i.e., that there is no $y$ (or $Y$). For example, suppose the
apparatus Hilbert space $\mbox{$\mathcal{H}$}_A$ corresponds to certain spin degrees of
freedom, with $\mbox{$\mathcal{H}$}_{A}= \mathbb{C}^n$ finite dimensional. Then, of course,
$F=F(x)$.
This case illustrates what measurements are not. If the apparatus has
no configurational degrees of freedom, then neither in Bohmian mechanics{} nor in
orthodox quantum mechanics is it a \emph{bona fide} apparatus:
Whatever virtues such an apparatus might otherwise have, it certainly
can't generate any directly observable results (at least not when the
system itself is microscopic). According to Bohr (\cite{Boh58}, pages
73 and 90): ``Every atomic phenomenon is closed in the sense that its
observation is based on registrations obtained by means of suitable
amplification devices with irreversible functioning such as, for
example, permanent marks on the photographic plate'' and ``the
quantum-mechanical formalism permits well-defined applications only to
such closed phenomena.'' To stress this point, discussing particle
detection Bell has said~\cite{Bel80}: ``Let us suppose that a
discharged counter pops up a flag saying `Yes' just to emphasize that
it is a macroscopically different thing {}from an undischarged
counter, in a very different region of configuration space.''
Experiments based on certain micro-apparatuses, e.g., ``one-bit
detectors'' \cite{SEW91}, provide a nice example of ``No Y''
experiments. We may think of a one-bit detector as a spin-$1/2$-like
system (e.g., a two-level atom), with ``down'' state $\Phi_{0}$ (the
ready state) and ``up'' state $\Phi_{1}$ and which is such that its
configurational degrees of freedom can be ignored. Suppose that this
``spin-system,'' in its ``down'' state, is placed in a small spatial
region $\Delta_1$ and consider a particle whose wave function{} has been
prepared in such a way that at $t=0$ it has the form $\psi = \psi_1 +
\psi_2$, where $\psi_{1}$ is supported by $\Delta_1$ and $\psi_{2}$ by
$\Delta_2$ disjoint {}from $\Delta_1$. Assume that the particle
interacts locally with the spin-system, in the sense that were
$\psi=\psi_1$ the ``spin'' would flip to the ``up'' state, while were
$\psi=\psi_2$ it would remain in its ``down'' state, and that the
interaction time is negligibly small, so that other contributions to
the Hamiltonian can be ignored. Then the initial state $\psi \otimes
\Phi_0$ undergoes the unitary transformation
\begin{equation}
\label{unos}
U\,:\,\psi \otimes \Phi_0 {\to}
\Psi \,=\, \psi_{1} \otimes \Phi_1 + \psi_{2} \otimes\Phi_0 \,.
\end{equation}
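In a toy model in which the particle's configuration is truncated to two sites, one in $\Delta_1$ and one in $\Delta_2$ (our simplification, purely for illustration), this unitary is just a flip of the detector state conditioned on the particle's position:

```python
import numpy as np

# Basis |site, det>: |1,down>, |1,up>, |2,down>, |2,up>
# (site 1 sits in Delta_1, site 2 in Delta_2 -- a two-site toy truncation)
U = np.array([[0, 1, 0, 0],   # |1,down> -> |1,up>   (particle at site 1 flips spin)
              [1, 0, 0, 0],   # |1,up>   -> |1,down>
              [0, 0, 1, 0],   # |2,down> -> |2,down> (particle at site 2: no flip)
              [0, 0, 0, 1]], dtype=float)

a1, a2 = 0.6, 0.8                        # amplitudes of psi_1 and psi_2
psi_in = np.array([a1, 0.0, a2, 0.0])    # (psi_1 + psi_2) tensor Phi_0 (down)
psi_out = U @ psi_in
expected = np.array([0.0, a1, a2, 0.0])  # psi_1 x Phi_1 + psi_2 x Phi_0
```

The output state is entangled between position and detector, exactly the structure of the transformation displayed above.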
We may now ask whether $U$ defines an experiment genuinely measuring
whether the particle is in $\Delta_{1}$ or $\Delta_{2}$. The answer
of course is no (since in this experiment there is no apparatus
property at all with which the position of the particle could be
correlated) \emph{unless} the experiment is (quickly) completed by a
measurement of the ``spin'' by means of another (macroscopic)
apparatus. In other words, we may conclude that the particle is in
$\Delta_{1}$ only if the spin-system in effect pops up a flag saying
``up''.
\subsection{``No $Y$ no $\Phi$'' Experiments}\label{secnoy}
Suppose there is no apparatus at all: no apparatus configuration $y$
nor Hilbert space $\mbox{$\mathcal{H}$}_A$, or, what amounts to the same thing,
$\mbox{$\mathcal{H}$}_{A}=\mathbb{C}$. For calibration $F=x$ the measure-valued quadratic map
defined by (\ref{seqmap}) is $$
\psi \mapsto | U\psi(x)|^{2}\,,$$
with
POVM $U^* P^{\hat{X}}U$, while the POVM for general calibration $F(x)$
is
\begin{equation}
O(d\lambda) = U^{*} P^{\hat{X}}(F^{-1}(d\lambda)) U\,.
\label{eq:noYnophi}
\end{equation}
$O$ is a PVM, as mentioned in Section \ref{sec:nie}, corresponding to
the operator $U^* F(\hat{X})U= F(\hat{X}_T)$, where $\hat{X}_T$ is the
Heisenberg position (configuration) operator at time $T$.
It is important to observe that even though these experiments suffer
{}from the defect that no correlation is established between the
system and an apparatus, this can easily be remedied---by adding a
final {\it detection measurement} that measures the final actual
configuration ${X}_T$---without in any way affecting the essential
formal structure of the experiment. For these experiments the
apparatus thus does not introduce any additional randomness, but
merely reflects what was already present in ${X}_T$. All randomness
in the final result
\begin{equation}
Z=F(X_{T})
\label{eq:finresnoy}
\end{equation}
arises {}from randomness in the initial configuration of the
system.\footnote{Though passive, the apparatus here plays an important
role in recording the final configuration of the system. However,
for experiments involving detections at different times, the
apparatus plays an active role: Consider such an experiment, with
detections at times $t_1,\ldots,t_n$, and final result $ Z =
F(X_{t_1}, \ldots, X_{t_n})$. Though the apparatus introduces no
extra randomness, it plays an essential role by changing the wave
function of the system at the times $t_1,\ldots,t_n$ and thus
changing the evolution of its configuration. These changes are
reflected in the POVM structure that governs the statistical
distribution of $Z$ for such experiments (see Section
\ref{sec:SeqM}).}
For $F=x$ and $U=I$ the quadratic map is $\psi \mapsto
|\psi(x)|^{2}$ with PVM $P^{\hat{X}}$, so that this (trivial)
experiment corresponds to the simplest and most basic operator of
quantum mechanics: the position operator. How other basic operators
arise {}from experiments is what we are going to discuss next.
\subsection{The Basic Operators of Quantum Mechanics}
\label{subsec.basop}
According to Bohmian mechanics{}, a particle whose wave function{} is real (up to a global
phase), for example an electron in the ground state of an atom, has
vanishing velocity, even though the quantum formalism{} assigns a nontrivial
probability distribution to its momentum. It might thus seem that we
are faced with a conflict between the predictions of Bohmian mechanics{} and those
of the quantum formalism. This, however, is not so. The quantum
predictions about momentum concern the results of an experiment that
happens to be called a momentum measurement and a conflict with Bohmian mechanics{}
with regard to momentum must reflect disagreement about the results of
such an experiment.
One may base such an experiment on free motion followed by a final
measurement of position.\footnote{The emergence of the momentum
operator in such so-called time-of-flight measurements was discussed
by Bohm in his 1952 article \cite{Boh52}. A similar derivation of
the momentum operator can be found in Feynman and Hibbs
\cite{FH65}.} Consider a particle of mass $m$ whose wave function{} at $t=0$
is $\psi=\psi({\bf x})$. Suppose no forces are present, that is, that
all the potentials acting on the particle are turned off, and let the
particle evolve freely. Then we measure the position ${\bf X}_T$ that
it has reached at the time $t=T$. It is natural to regard ${\bf V}_T
={\bf X}_T/T $ and ${\bf P}_T =m {\bf X}_T/T $ as providing, for large
$T$, approximations to the asymptotic velocity and momentum of the
particle. It turns out that the probability distribution of ${\bf P}_T
$, in the limit $T\to\infty$, is exactly what quantum mechanics
prescribes for the momentum, namely $|\tilde{\psi}({\bf p})|^2$, where
$$
\tilde{\psi}({\bf p}) = (\mathcal{F}\psi)({\bf
p})=\frac{1}{\sqrt{(2\pi \hbar)^3}} \int e^{ -\frac{i}{\hbar} {\bf p
\cdot x}} \psi( {\bf x})\, d{\bf x}
$$
is the Fourier transform of $\psi$.
This result can be easily understood: Observe that $|\psi_{T}({\bf
x})|^{2}\, d{\bf x}$, the probability distribution of ${\bf X}_{T}$,
is the spectral measure $\mu_{\psi}^{ \hat{\bf X}_{T}} (d\,{\bf x})
=\langle\psi, P^{\hat{\bf X}_{T} }(d\,{\bf x})\,\psi\rangle\, $ of
$\,\hat{\bf X}_{T}= U_{T}^{*} \hat{\bf X} U_{T}$, the (Heisenberg)
position operator at time $t=T$; here $U_{t}$ is the free evolution
operator and $\hat{\bf X}$ is, as usual, the position operator at time
$t=0$. By elementary quantum mechanics (specifically, the Heisenberg
equations of motion), $\hat{\bf X}_T = \frac{1}{m}\hat{\bf P}\,T\, +\,
\hat{\bf X}$, where $\hat{\bf P}\equiv -i\hbar\boldsymbol{\nabla}$ is
the momentum operator. Thus as $T\to\infty$ the operator $ m{\hat{\bf
X}_T}/{T}$ converges to the momentum operator $ \hat{\bf P} $,
since $ \hat{\bf X}/T $ is $ O(1/T) $, and the distribution of the
random variable ${\bf P}_{T}$ accordingly converges to the spectral
measure of $\hat {\bf P}$, given by $|\tilde{\psi}({\bf
p})|^2$.\footnote{\label{foot:conv} This formal argument can be
turned into a rigorous proof by considering the limit of the
characteristic function of ${\bf P}_T $, namely of the function
$f_{T}(\boldsymbol{\lambda})= \int e^{i \boldsymbol{\lambda} \cdot
{\bf p} }\,\rho_T(d{\bf p})$, where $\rho_{T}$ is the distribution
of $m{{\bf X}_T}/{ T} $: $f_{T}(\boldsymbol{\lambda})=\left\langle
\psi,\, \exp \left(i\boldsymbol{\lambda} \cdot m{\hat{{\bf
X}}_T}/{ T} \right) \, \psi \right\rangle $, and using the
dominated convergence theorem \cite{RS80} this converges as
$T\to\infty$ to $ f(\boldsymbol{\lambda})=\left\langle \psi,
\exp\left(i \boldsymbol{\lambda} \cdot \hat{\bf
P}\right)\psi\right\rangle$, implying the desired result. The
same result can also be obtained using the well known asymptotic
formula (see, e.g., \cite{RS75}) for the solution of the free Schr\"{o}dinger{}
equation with initial condition $\psi =\psi({\bf x})$,
$$
\psi_T({\bf x}) \sim \left(\frac{m}{iT}\right)^{\frac{3}{2}} e^{i
\frac{m{\bf x}^2}{2\hbar T}}\; \tilde{\psi}\left(\frac{m{\bf x}}{T}\right)
\quad\mbox{for}\quad T\to\infty. $$}
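The emergence of $|\tilde{\psi}({\bf p})|^2$ from time-of-flight statistics is also easy to probe numerically. The following sketch is our illustration (1D, $\hbar=m=1$, a Gaussian initial packet): it evolves $\psi$ freely by multiplying its Fourier transform by the free propagator phase, then compares the distribution of $m X_T/T$ with the quantum momentum distribution.

```python
import numpy as np

hbar = m = 1.0
N, L = 4096, 800.0                    # grid size and spatial box
dx = L / N
x = (np.arange(N) - N // 2) * dx

# initial Gaussian packet (unit norm); its momentum density is pi^(-1/2) e^(-p^2)
psi0 = np.pi ** -0.25 * np.exp(-x ** 2 / 2)

# free evolution: each Fourier mode picks up the phase exp(-i p^2 T / (2 m hbar))
p = 2 * np.pi * hbar * np.fft.fftfreq(N, d=dx)
T = 100.0
psi_T = np.fft.ifft(np.fft.fft(psi0) * np.exp(-1j * p ** 2 * T / (2 * m * hbar)))

# distribution of P_T = m X_T / T as a density in p (change of variables x -> p)
p_grid = m * x / T
rho_PT = np.abs(psi_T) ** 2 * (T / m)

# quantum prediction |psi~(p)|^2 for this packet
rho_QM = np.pi ** -0.5 * np.exp(-p_grid ** 2)

err = np.max(np.abs(rho_PT - rho_QM))
print("max deviation at T = %g: %.2e" % (T, err))
```

Already at $T=100$ (in these units) the two densities agree to a few parts in $10^4$, and the residual discrepancy shrinks as $T$ grows.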
The momentum operator arises {}from a ($T\to\infty$) limit of ``no $Y$
no $\Phi$'' single-particle experiments, each experiment being defined
by the unitary operator $U_{T}$ (the free particle evolution operator
up to time $T$) and calibration $ F_{T}({\bf x}) = m{\bf x}/{ T}$.
Other standard quantum-mechanical operators emerge in a similar
manner, i.e., {}from a $T\to\infty$ limit of appropriate
single-particle experiments.
This is the case, for example, for the spin operator $\sigma_{z}$. As
in Section \ref{noXexp}, consider the evolution operator $U_{T}$
generated by Hamiltonian (\ref{eq:pahamagain}), but instead of
(\ref{eq:rigcal}), consider the calibration $ F_{T}({\bf x}) = 2 m
\,z/\,a\, T^2 $. This calibration is suggested by (\ref{eq:mmsd}), as
well as by the explicit form of the $z$-component of the position
operator at time $t=T$,
\begin{equation}
\hat{Z}_T= U_{T}^{*} \hat{Z} U_{T} = \hat{Z} +
\frac{\hat{P}_z}{m}\,T + \frac{a}{2m}
\sigma_z\,T^{2}\,,
\label{eq:heispin}
\end{equation}
which follows {}from the Heisenberg equations
$$
m \frac{d^{2} \hat{Z}_{t}}{d\,t^2} = a\,\sigma_{z} \,, \qquad
\left.\frac{d\, \hat{Z}_{t}}{d\,t}\right|_{t=0}\! = \hat{P}_{z}
\equiv -i\hbar\frac{\partial}{\partial z}\,,\qquad
\hat{Z}_{0}=\hat{Z}\,.
$$
Then, for initial state $\Psi = \psi\otimes\Phi_0$ with suitable $
\Phi_0 $, where $\psi= \alpha \psi^{(+)}\,+\, \beta\psi^{(-)}$, the
distribution of the random variable
$$
{\Sigma_{z}}_{T} = F_T({\bf X}_T) = \frac{2\,m \, Z_T }{a\, T^2}$$
converges as $T\to\infty$ to the spectral measure of $\sigma_z$, with
values $+1$ and $-1$ occurring with probabilities $|\alpha|^{2}$ and
$|\beta|^{2}$, respectively.\footnote{For the Hamiltonian
\eq{eq:pahamagain} no assumption on the initial state $\Psi$ is
required here; however \eq{eq:pahamagain} will be a reasonably good
approximation only when $\Psi$ has a suitable form, expressing in
particular that the particle is appropriately moving towards the
magnet.} This is so, just as with the momentum, because as
$T\to\infty$ the operator $ \frac{2\,m \, \hat{Z}_T }{a\, T^2}$
converges to $\sigma_{z}$.
We remark that we've made use above of the fact that simple algebraic
manipulations on the level of random variables correspond
automatically to the same manipulations for the associated operators.
More precisely, suppose that
\begin{equation}
Z \mapsto A
\label{eq:assrvop}
\end{equation}
in the sense (of \eq{eq:prdeltan}) that the distribution of the random
variable $Z$ is given by the spectral measure for the self-adjoint
operator $A$. Then it follows {}from (\ref{eq:prfm}) that
\begin{equation}
f(Z) \mapsto f(A)
\label{eq:funcom}
\end{equation}
for any (Borel) function $f$. For example, since ${\bf X}_T\mapsto
\hat{\bf X}_T$, $m{\bf X}_T/T\mapsto m\hat{\bf X}_T/T$, and since $Z_T
\mapsto \hat{Z}_T$, $ \frac{2\,m \, {Z}_T }{a\, T^2} \mapsto \frac{2\,m \,
\hat{Z}_T }{a\, T^2}$. Similarly, if a random variable $P\mapsto
\hat{P}$, then $ P^2/(2m)\mapsto H_0= \hat{P}^2/(2m)$. This is rather
trivial, but it is not as trivial as the failure even to distinguish
$Z$ and $\hat{Z}$ would make it seem.
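In finite dimensions this correspondence can be verified directly. The sketch below (illustrative matrices of our own choosing) builds the spectral distribution of a Hermitian $A$ in a state $\psi$ and checks that applying $f$ to the random variable reproduces the spectral measure of $f(A)$:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])           # illustrative Hermitian operator
psi = np.array([1.0, 0.0])           # unit state vector

evals, evecs = np.linalg.eigh(A)
probs = np.abs(evecs.conj().T @ psi) ** 2   # spectral measure of A in state psi

# f(Z) takes value f(lambda_k) with probability probs[k] ...
vals_fZ, probs_fZ = evals ** 2, probs

# ... and this is exactly the spectral measure of f(A) = A @ A
evals_fA, evecs_fA = np.linalg.eigh(A @ A)
probs_fA = np.abs(evecs_fA.conj().T @ psi) ** 2
```

Here $f(z)=z^2$ and the eigenvalues of $A$ are $1$ and $3$, so both routes yield the values $1$ and $9$ with probability $1/2$ each.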
\subsection{From Positive-Operator-Valued Measures to
Experiments}\label{subsec.fpovtoex} We wish here to point out that
to a very considerable extent the association $\mbox{$\mathscr{E}$}\mapsto O(d\lambda)$ of
experiments with POVMs is onto. It is more or less the case that every
POVM arises {}from an experiment.
We have in mind two distinct remarks. First of all, it was pointed out
in the first paragraph of Section 4.3 that every discrete POVM $O_\alpha$
(weak formal experiment) arises {}from some discrete experiment \mbox{$\mathscr{E}$}{}.
Thus, for every POVM $O(d\lambda)$ there is a sequence $\mbox{$\mathscr{E}$}^{(n)}$ of
discrete experiments for which the corresponding POVMs $O^{(n)}$
converge to $O$.
The second point we wish to make is that to the extent that every PVM
arises {}from an experiment $\mbox{$\mathscr{E}$}=\{\mbox{$\mathbb{P}$}hi_0,U, F\}$, so too does every
POVM. This is based on the fact, mentioned at the end of the
introduction to Section 5, that every POVM $O(d\lambda)$ can be regarded
as arising {}from the projection of a PVM $E(d\lambda)$, acting on
$\mbox{$\mathcal{H}$}^{(1)}$, onto the subspace $\mbox{$\mathcal{H}$}\subset\mbox{$\mathcal{H}$}^{(1)}$. We may assume
without loss of generality that both $\mbox{$\mathcal{H}$}$ and $\mbox{$\mathcal{H}$}^{(1)}\ominus\mbox{$\mathcal{H}$}$ are
infinite dimensional (by some otherwise irrelevant enlargements if
necessary). Thus we can identify $\mbox{$\mathcal{H}$}^{(1)}$ with
$\mbox{$\mathcal{H}$}\otimes\mbox{$\mathcal{H}$}_{\text{apparatus}^{(1)}}$ and the subspace with
$\mbox{$\mathcal{H}$}\otimes\Phi_0^{(1)}$, for any choice of $\Phi_0^{(1)}$. Suppose now
that there is an experiment $\mbox{$\mathscr{E}$}^{(1)}=\{\Phi_0^{(2)},U, F\}$ that
measures the PVM $E$ (i.e., that measures the observable $A=\int
\lambda\, E(d\lambda)$) where $\Phi_0^{(2)}\in
\mbox{$\mathcal{H}$}_{\text{apparatus}^{(2)}}$, $U$ acts on $\mbox{$\mathcal{H}$}\otimes\mbox{$\mathcal{H}$}_{\text{apparatus}}$
where $\mbox{$\mathcal{H}$}_{\text{apparatus}}= \mbox{$\mathcal{H}$}_{\text{apparatus}^{(1)}}\otimes
\mbox{$\mathcal{H}$}_{\text{apparatus}^{(2)}}$ and $F$ is a function of the
configuration of the composite of the 3 systems: system,
apparatus$^{(1)}$ and apparatus$^{(2)}$. Then, with $\Phi_0=
\Phi_0^{(1)}\otimes \Phi_0^{(2)}$, $\mbox{$\mathscr{E}$}=\{\Phi_0,U, F\}$ is associated with
the POVM $O$.
\subsection{Invariance Under Trivial Extension}\label{subsec:iute}
Suppose we change an experiment $\mbox{$\mathscr{E}$}$ to $\mbox{$\mathscr{E}$}'$ by regarding its
$x$-system as containing more of the universe than the $x$-system for
$\mbox{$\mathscr{E}$}$, without in any way altering what is physically done in the
experiment and how the result is specified. One would imagine that
$\mbox{$\mathscr{E}$}'$ would be equivalent to $\mbox{$\mathscr{E}$}$. This would, in fact, be trivially
the case classically, as it would if $\mbox{$\mathscr{E}$}$ were a genuine measurement,
in which case $\mbox{$\mathscr{E}$}'$ would obviously measure the same thing as $\mbox{$\mathscr{E}$}$.
This remains true for the more formal notion of measurement under
consideration here. The only source of nontriviality in arriving at
this conclusion is the fact that with $\mbox{$\mathscr{E}$}'$ we have to deal with a
different, larger class of initial wave function{}s.
We will say that $\mbox{$\mathscr{E}$}'$ is a trivial extension of $\mbox{$\mathscr{E}$}$ if the only
relevant difference between $\mbox{$\mathscr{E}$}$ and $\mbox{$\mathscr{E}$}'$ is that the $x$-system for
$\mbox{$\mathscr{E}$}'$ has generic configuration $x'=(x,\hat{x})$, whereas the
$x$-system for $\mbox{$\mathscr{E}$}$ has generic configuration $x$. In particular, the
unitary operator $U'$ associated with $\mbox{$\mathscr{E}$}'$ has the form $U'=
U\otimes\hat{U}$, where $U$ is the unitary associated with \mbox{$\mathscr{E}$}{},
implementing the interaction of the $x$-system and the apparatus,
while $\hat{U}$ is a unitary operator describing the independent
evolution of the $\hat{x}$-system, and the calibration $F$ for $\mbox{$\mathscr{E}$}'$
is the same as for $\mbox{$\mathscr{E}$}$. (Thus $F$ does not depend upon $\hat{x}$.)
The association of experiments with (generalized) observables (POVMs)
is \emph{invariant under trivial extension}: if $\mbox{$\mathscr{E}$}\mapsto O$ in the
sense of \eq{etoo} and $\mbox{$\mathscr{E}$}'$ is a trivial extension of $\mbox{$\mathscr{E}$}$, then
$\mbox{$\mathscr{E}$}'\mapsto O\otimes I$, where $I$ is the identity on the Hilbert space of
the $\hat{x}$-system.
To see this note that if $\mbox{$\mathscr{E}$}\mapsto O$ then the sesquilinear map $B$
arising {}from \eq{seqmap} for $\mbox{$\mathscr{E}$}'$ is of the form
$$
B(\psi_1\otimes \hat{\psi}_1, \psi_2\otimes \hat{\psi}_2) = \langle\psi_1,
O\psi_2\rangle \langle\hat{\psi}_1, \hat{\psi}_2\rangle$$
on product
wave function{}s $\psi'= \psi\otimes \hat{\psi}$, which easily follows {}from the
form of $U'$ and the fact that $F$ doesn't depend upon $\hat{x}$, so
that the $\hat{x}$-degrees of freedom can be integrated out. Thus the
POVM $O'$ for $\mbox{$\mathscr{E}$}'$ agrees with $ O\otimes I$ on product wave function{}s, and since
such wave functions span the Hilbert space for the
$(x,\hat{x})$-system, we have that $O'= O\otimes I$. Thus $\mbox{$\mathscr{E}$}'\mapsto O\otimes
I$.
In other words, if \mbox{$\mathscr{E}$}{} is a measurement of $O$, then $\mbox{$\mathscr{E}$}'$ is a
measurement of $O\otimes I$. In particular, if \mbox{$\mathscr{E}$}{} is a measurement of the
self-adjoint operator $A$, then $\mbox{$\mathscr{E}$}'$ is a measurement of $A\otimes I$.
This result is not quite so trivial as it would be were it concerned
with genuine measurements, rather than with the more formal notion
under consideration here.
Now suppose that $\mbox{$\mathscr{E}$}'$ is a trivial extension of a discrete experiment
$\mbox{$\mathscr{E}$}$, with state transformations given by $R_{\alpha}$. Then the state
transformations for $\mbox{$\mathscr{E}$}{}'$ are given by $R_{\alpha}' = R_{\alpha} \otimes \hat{U}$.
This is so because $R_{\alpha}'$ must agree with $R_{\alpha} \otimes \hat{U}$ on product
wave function{}s $\psi'= \psi\otimes \hat{\psi}$, and these span the Hilbert space
of the $(x,\hat{x}$)-system.
\subsection{POVMs and the Positions of Photons and Dirac Electrons}
We have indicated how POVMs emerge naturally in association with
Bohmian experiments. We wish here to indicate a somewhat different
role for a POVM: to describe the probability distribution of the
actual (as opposed to measured\footnote{The accurate measurement of
the position of a Dirac electron is presumably impossible.})
position. The probability distribution of the position of a Dirac
electron in the state $\psi$ is $\psi^+\psi$. This is given by a PVM
$E(d{\bf x})$ on the one-particle Hilbert space $\mbox{$\mathcal{H}$}$ spanned by
positive and negative energy electron wave functions. However, the
physical one-particle Hilbert space $\mbox{$\mathcal{H}$}_+$ consists solely of positive
energy states, and this is not invariant under the projections $E$.
Nonetheless the probability distribution of the position of the
electron is given by the POVM $P_+ E(d{\bf x}) P_+$ acting on $\mbox{$\mathcal{H}$}_+$,
where $P_+$ is the orthogonal projection onto $\mbox{$\mathcal{H}$}_+$. Similarly,
constraints on the photon wave function require the use of POVMs for
the localization of photons~\cite{Kraus, Emch}.\footnote{For example,
on the one-photon level, both the proposal
$\boldsymbol{\Psi}=\mathbf{E}+i \mathbf{B}$ (where $\mathbf{E}$ and
$\mathbf{B}$ are the electric and the magnetic free fields)
\cite{Birula}, and the proposal $\boldsymbol{\Psi}=\mathbf{A}$
(where $\mathbf{A}$ is the vector potential in the Coulomb gauge)
\cite{Emch}, require the constraint $\boldsymbol{\nabla}\cdot
\boldsymbol{\Psi}=0$.}
Schr\"odinger's equationction{Density Matrices} Schr\"odinger's equationtcounter{equation}{0}
The notion of a density matrix, a positive (trace class) operator with
unit trace on the Hilbert space of a system, is often regarded as
providing the most general characterization of a quantum state of that
system. According to the quantum formalism, when a system is
described by the density matrix $W$, the expected value of an
observable $A$ is given by $ \mbox{${\rm tr}\,$}(WA)$. If $A$ has PVM $O$, and more
generally for any POVM $O$, the probability that the (generalized)
observable $O$ has value in $\Delta$ is given by
\begin{equation}
\mbox{Prob}(O\in\Delta) = \mbox{${\rm tr}\,$} (W O(\Delta)).
\label{eq:den1}
\end{equation}
A density matrix that is a one-dimensional projection, i.e., of the
form $|\psi\rangle\langle\psi|$ where $\psi$ is a unit vector in the
Hilbert space of the system, describes a \emph{pure state} (namely,
$\psi$), and a general density matrix can be decomposed into a
\emph{mixture} of pure states $\psi_{k}$,
\begin{equation}
W =\sum_k p_k |\psi_k\rangle\langle\psi_k| \qquad\mbox{where}\qquad
\sum_{k} p_{k} =1.
\label{eq:dmsd}
\end{equation}
Naively, one might regard $p_{k}$ as the probability that the system
\emph{is} in the state $\psi_{k}$. This interpretation is, however,
untenable, for a variety of reasons. First of all, the decomposition
\eq{eq:dmsd} is not unique. A density matrix $W$ that does not
describe a pure state can be decomposed into pure states in a variety
of different ways.
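A minimal numerical illustration of this non-uniqueness (our example, using the completely mixed state of a two-level system): the same $W$ arises both from an equal mixture of $\sigma_z$ eigenstates and from an equal mixture of $\sigma_x$ eigenstates.

```python
import numpy as np

def proj(v):
    """Projection |v><v| onto the normalization of v."""
    v = v / np.linalg.norm(v)
    return np.outer(v, v.conj())

up, down = np.array([1.0, 0.0]), np.array([0.0, 1.0])      # sigma_z eigenstates
plus, minus = np.array([1.0, 1.0]), np.array([1.0, -1.0])  # sigma_x eigenstates

W1 = 0.5 * proj(up) + 0.5 * proj(down)      # mixture of sigma_z eigenstates
W2 = 0.5 * proj(plus) + 0.5 * proj(minus)   # mixture of sigma_x eigenstates
# two physically distinct preparation procedures, one density matrix
```

Both mixtures yield $W = \tfrac{1}{2}I$, so no measurement statistics can distinguish the two ensembles.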
It is always possible to decompose a density matrix $W$ in such a way
that its components $\psi_k$ are orthonormal. Such a decomposition
will be unique except when $W$ is degenerate, i.e., when some $p_k$'s
coincide. For example, if $p_1=p_2$ we may replace $\psi_{1}$ and
$\psi_{2}$ by any other orthonormal pair of vectors in the subspace
spanned by $\psi_{1}$ and $\psi_{2}$. And even if $W$ were
nondegenerate, it need not be the case that the system is in one of
the states $\psi_k$ with probability $p_k$, because for any
decomposition \eq{eq:dmsd}, regardless of whether the $\psi_k$ are
orthogonal, if the wave function{} of the system were $\psi_k$ with probability
$p_k$, this situation would be described by the density matrix $W$.
Thus a general density matrix carries no information---not even
statistical information---about the actual wave function{} of the system.
Moreover, a density matrix can describe a system that has no wave
function at all! This happens when the system is a subsystem of a
larger system whose wave function{} is entangled, i.e., does not properly
factorize (in this case one usually speaks of the reduced density
matrix of the subsystem).
This impossibility of interpreting density matrices as real mixtures
of pure states has been regarded by many authors (e.g., von Neumann
\cite{vNe55} and Landau \cite{LL}) as a further indication that
quantum randomness is inexplicable within the realm of classical logic
and probability. However, {}from the point of view of Bohmian
mechanics, there is nothing mysterious about density matrices.
Indeed, their role and status within the quantum formalism can be
understood very easily in terms of the general framework of
experiments of Section \ref{GEBM}. (It can, we believe, be reasonably
argued that even {}from the perspective of orthodox quantum theory,
density matrices can be understood in a straightforward way.)
\subsection{Density Matrices and Bohmian Experiments}
\label{secRWF}
Consider a general experiment $\mbox{$\mathscr{E}$}\mapsto O$ (see equation \eq{etoo})
and suppose that the initial wave function{} of the system is random with
probability distribution $p (d\psi)$ (on the set of unit vectors in
\mbox{$\mathcal{H}$}). Then nothing will change in the general argument of Section
\ref{GEBM} except that now $\rho^Z_\psi$ in \eq{etoo} and \eq{seqmap}
should be interpreted as the conditional probability {\it given}
$\psi$. It follows then {}from (\ref{eq:den1}), using the fact that
$\langle \psi , O (\Delta) \psi \rangle = \mbox{${\rm tr}\,$} (|\psi \rangle \langle
\psi| \, O(\Delta) ) $, that the probability that the result of \mbox{$\mathscr{E}$}{}
lies in $\Delta$ is given by
\begin{equation}
\label{stcon} \int p(d \psi )\,\langle \psi , O
(\Delta) \psi \rangle = \mbox{${\rm tr}\,$}\left( \int p(d\psi )\,| \psi \rangle
\langle \psi| \, O(\Delta)\right)= \mbox{${\rm tr}\,$}\left( W
O(\Delta)\right)
\end{equation}
where\footnote{Note that since $p$ is a probability measure on the
unit sphere in $\mbox{$\mathcal{H}$}$, $W$ is a positive trace class operator with
unit trace.}
\begin{equation}
W\equiv \int p(d\psi )\,| \psi \rangle \langle \psi |
\label{eq:ensdm}
\end{equation}
is the \emph{ensemble density matrix} arising {}from a random wave
function with (ensemble) distribution~$p$.
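As a check on \eq{stcon}, one may verify for a discrete ensemble that averaging the quadratic form $\langle \psi, O(\Delta)\psi\rangle$ over the ensemble agrees with $\mbox{${\rm tr}\,$}(WO(\Delta))$. The sketch below is purely illustrative (Python with numpy); the random states and the projection standing in for $O(\Delta)$ are assumptions made for definiteness:

```python
import numpy as np

rng = np.random.default_rng(0)

# A discrete ensemble: random unit vectors psi_k with weights p_k.
dim, n = 3, 5
states = rng.normal(size=(n, dim)) + 1j * rng.normal(size=(n, dim))
states /= np.linalg.norm(states, axis=1, keepdims=True)
p = rng.random(n)
p /= p.sum()

# Ensemble density matrix W = sum_k p_k |psi_k><psi_k|.
W = sum(pk * np.outer(s, s.conj()) for pk, s in zip(p, states))

# A POVM value O(Delta): here a projection onto the first basis vector.
O = np.zeros((dim, dim))
O[0, 0] = 1.0

lhs = sum(pk * (s.conj() @ O @ s).real for pk, s in zip(p, states))
rhs = np.trace(W @ O).real
print(np.isclose(lhs, rhs))  # the averaged quadratic form equals tr(W O)
```

The equality is an exact consequence of linearity of the trace, as in \eq{stcon}.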
Now suppose that instead of having a random wave function{}, our system has no
wave function{} at all because it is entangled with another system. Then
there is still an object that can naturally be regarded as the state
of our system, an object associated with the system itself in terms
of which the results of experiments performed on our system can be
simply expressed. This object is a density matrix $W$ and the
results are governed by \eq{eq:den1}. $W$ is the \emph{reduced
density matrix} arising {}from the state of the larger system.
This is more or less an immediate consequence of invariance under
trivial extension, described in Section \ref{subsec:iute}:
Consider a trivial extension $\mbox{$\mathscr{E}$}'$ of an experiment $\mbox{$\mathscr{E}$}\mapsto O$ on
our system---precisely what we must consider if the larger system
has a wave function{} $\psi'$ while our (smaller) system does not. The
probability that the result of $\mbox{$\mathscr{E}$}'$ lies in $\Delta$ is given by
\begin{equation}
\label{stcon2}
\langle \psi' , O(\Delta)\otimes I \psi' \rangle = \mbox{${\rm tr}\,$} ' \left( |
\psi' \rangle \langle \psi'| \, O(\Delta)\otimes I\right)=
\mbox{${\rm tr}\,$}\left( W O(\Delta)\right) \, ,
\end{equation}
where $\mbox{${\rm tr}\,$}'$ is the trace for the $x'$-system (the big system) and
$\mbox{${\rm tr}\,$}$ is the trace for the $x$-system. In agreement with standard
quantum mechanics, the last equality of (\ref{stcon2}) defines $W$ as
the reduced density matrix of the $x$-system, i.e.,
\begin{equation}
W\equiv \widehat{\mbox{${\rm tr}\,$}}\left( | \psi' \rangle \langle \psi' |\right)
\label{eq:reddm}
\end{equation}
where $\widehat{\mbox{${\rm tr}\,$}}$ denotes the partial trace over the coordinates
of the $\hat{x}$-system.
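For a concrete instance of \eq{eq:reddm}, the singlet-like two-qubit state $(|01\rangle - |10\rangle)/\sqrt{2}$ has reduced density matrix $I/2$. The following minimal numerical sketch (our illustration, Python with numpy) computes the partial trace directly:

```python
import numpy as np

# Entangled state of a pair of qubits: (|01> - |10>)/sqrt(2).
psi = np.zeros(4)
psi[1], psi[2] = 1.0, -1.0
psi /= np.linalg.norm(psi)

# Density matrix of the composite (x, x-hat) system.
rho = np.outer(psi, psi.conj())

# Partial trace over the hat factor: W_ij = sum_b rho[(i,b),(j,b)].
rho4 = rho.reshape(2, 2, 2, 2)   # indices (i, b, j, b')
W = np.einsum('ibjb->ij', rho4)  # reduced density matrix of the x-system

# The subsystem has no wave function of its own, yet it has W = I/2.
print(np.allclose(W, np.eye(2) / 2))  # True
```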
\subsection{Strong Experiments and Density Matrices}\label{secSEI}
A strong formal experiment $\mathcal{E}\equiv\{\lambda_\alpha, R_{\alpha}\}$ generates
state transformations $\psi\to R_{\alpha}\psi$. This suggests the following
action on an initial state described by a density matrix $W$: When the
outcome is $\alpha$, we have the transformation
\begin{equation}
W \to \frac{\mathcal{R}_\alpha W}{\mbox{${\rm tr}\,$}\left(\mathcal{R}_\alpha W\right) }
\equiv \frac{R_{\alpha} W R_{\alpha}^{*}}{\mbox{${\rm tr}\,$}\left( R_{\alpha} W R_{\alpha}^{*} \right)}
\label{eq:axdens}
\end{equation}
where
\begin{equation}
\mathcal{R}_\alpha W = R_{\alpha} W R_{\alpha}^{*}\,.
\label{eq:axdens2}
\end{equation}
After all, (\ref{eq:axdens}) is a density matrix naturally associated
with $R_{\alpha}$ and $W$, and it agrees with $\psi\to R_{\alpha}\psi$ for a pure
state, $W=| \psi \rangle \langle\psi|$. In order to show that
(\ref{eq:axdens}) is indeed correct, we must verify it for the two
different ways in which our system might be assigned a density matrix
$W$, i.e., for $W$ an ensemble density matrix and for $W$ a reduced
density matrix.
Suppose the initial wave function is random, with distribution
$p(d\psi)$. Then the initial state of our system is given by the
density matrix \eq{eq:ensdm}. When the outcome $\alpha$ is obtained, two
changes must be made in \eq{eq:ensdm} to reflect this information: $|
\psi \rangle \langle\psi|$ must be replaced by $ (R_{\alpha}| \psi \rangle
\langle\psi|R_{\alpha}^{*})/ \|R_{\alpha}\psi\|^2 $, and $p(d\psi)$ must be
replaced by $p(d\psi|\alpha)$, the conditional distribution of the initial
wave function{} given that the outcome is $\alpha$. For the latter we have
$$
p(d\psi|\alpha)= \frac{\|R_{\alpha}\psi\|^2}{\mbox{${\rm tr}\,$}( R_{\alpha} WR_{\alpha}^{*})} p(d\psi)
$$
($ \|R_{\alpha}\psi\|^2\,p(d\psi)$ is the joint distribution of $\psi$
and $\alpha$, and the denominator is the probability of obtaining the
outcome $\alpha$.) Therefore $W$ undergoes the transformation
$$
W= \int p(d\psi )\,| \psi \rangle \langle \psi | \quad\to\quad \int
p(d\psi|\alpha) \,\frac{R_{\alpha}| \psi \rangle \langle
\psi|R_{\alpha}^{*}}{\|R_{\alpha}\psi\|^2} = \int p (d\psi )\, \frac{R_{\alpha} | \psi
\rangle \langle \psi| R_{\alpha}^{*}}{\mbox{${\rm tr}\,$}( R_{\alpha} WR_{\alpha}^{*})} = \frac{R_{\alpha} W
R_{\alpha}^{*}}{\mbox{${\rm tr}\,$}( R_{\alpha} WR_{\alpha}^{*})} .
$$
We wish to emphasize that this demonstrates in particular the
nontrivial fact that the density matrix $\mathcal{R}_\alpha
W/\mbox{${\rm tr}\,$}(\mathcal{R}_\alpha W)$ produced by the experiment depends only upon
the initial density matrix $W$. Though $W$ can arise in many different
ways, corresponding to the multiplicity of different probability
distributions $p(d\psi)$ yielding $W$ via \eq{eq:ensdm}, insofar as
the final state is concerned, these differences don't matter.
This does not, however, establish \eq{eq:axdens} when $W$ arises not
{}from a random wave function but as a reduced density matrix. To deal
with this case we consider a trivial extension $\mbox{$\mathscr{E}$}'$ of a discrete
experiment \mbox{$\mathscr{E}$}{} with state transformations $R_{\alpha}$. Then $\mbox{$\mathscr{E}$}'$ has state
transformations $R_{\alpha}\otimes \hat{U}$ (see Section \ref{subsec:iute}).
Thus, when the initial state of the $x'$-system is $\psi'$, the final
state of the $x$-system is given by the partial trace
$$
\frac{\widehat{\mbox{${\rm tr}\,$}} \left(R_{\alpha}\otimes \hat{U}| \psi '\rangle \langle
\psi' |R_{\alpha}^{*}\otimes \hat{U}^{*}\right)}{\mbox{${\rm tr}\,$}'\left(R_{\alpha}\otimes
\hat{U}| \psi '\rangle \langle \psi' |R_{\alpha}^{*}\otimes
\hat{U}^{*}\right)} = \frac{\widehat{\mbox{${\rm tr}\,$}} \left(R_{\alpha}\otimes I| \psi
'\rangle \langle \psi' |R_{\alpha}^{*}\otimes I\right)}{\mbox{${\rm tr}\,$}'
\left(R_{\alpha}\otimes I| \psi '\rangle \langle \psi' |R_{\alpha}^{*}\otimes
I\right)} =\frac{R_{\alpha} \, \widehat{\mbox{${\rm tr}\,$}} (| \psi' \rangle \langle
\psi '|) R_{\alpha}^{*}}{\mbox{${\rm tr}\,$}\!\left(R_{\alpha} \widehat{\mbox{${\rm tr}\,$}} (| \psi' \rangle \langle
\psi '|) R_{\alpha}^{*}\right)}$$
$$= \frac{R_{\alpha} W R_{\alpha}^{*}}{\mbox{${\rm tr}\,$}\!\left(R_{\alpha} W R_{\alpha}^{*}\right)} \, ,
$$
where the cyclicity of the trace has been used.
To sum up, when a strong experiment $\mathcal{E}\equiv\{\lambda_\alpha, R_{\alpha}\}$ is
performed on a system described by the initial density matrix $W$ and
the outcome $\alpha$ is obtained, the final density matrix is given by
(\ref{eq:axdens}); moreover, {}from the results of the previous
section it follows that the outcome $\alpha$ will occur with probability
\begin{equation}
p_{\alpha}= \mbox{${\rm tr}\,$}( W O_{\alpha})= \mbox{${\rm tr}\,$}\left( W R_{\alpha}^{*}R_{\alpha}\right) = \mbox{${\rm tr}\,$}\left(
\mathcal{R}_\alpha W \right),
\label{eq:pexdm}
\end{equation}
where the last equality follows {}from the cyclicity of the trace.
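Both the collapse rule \eq{eq:axdens} and the probability formula \eq{eq:pexdm} can be checked in a toy case. The sketch below is our illustration (Python with numpy); the choice of projective $R_\alpha$ on $\mathbb{C}^3$, i.e.\ an ideal measurement, is an assumption made for simplicity:

```python
import numpy as np

# State transformations: projections onto the basis vectors of C^3.
dim = 3
R = [np.zeros((dim, dim)) for _ in range(dim)]
for a in range(dim):
    R[a][a, a] = 1.0

rng = np.random.default_rng(1)
psi = rng.normal(size=dim) + 1j * rng.normal(size=dim)
psi /= np.linalg.norm(psi)
W = np.outer(psi, psi.conj())  # a pure initial state

# p_alpha = tr(W R_alpha^* R_alpha)
p = [np.trace(R[a].conj().T @ R[a] @ W).real for a in range(dim)]
print(np.isclose(sum(p), 1.0))  # the outcome probabilities sum to 1

# Collapse: W -> R_alpha W R_alpha^* / tr(...) is again a density matrix.
a = 0
Wa = R[a] @ W @ R[a].conj().T
Wa /= np.trace(Wa).real
print(np.isclose(np.trace(Wa).real, 1.0))
```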
\subsection{The Notion of Instrument}
We shall briefly comment on the relationship between the notion of
strong formal experiment and that of \emph{instrument} (or
\emph{effect}) discussed by Davies \cite{Dav76}.
Consider an experiment $\mathcal{E}\equiv\{\lambda_\alpha, R_{\alpha}\}$ on a system
with initial density matrix $W$. Then a natural object associated
with $\mathcal{E}$ is the set function
\begin{equation}
\mathcal{R}(\Delta) W \equiv\sum_{\lambda_\alpha \in
\Delta}\mathcal{R}_\alpha W =\sum_{\lambda_\alpha\in \Delta}R_{\alpha} WR_{\alpha}^{*} \, .
\label{eq:ins}
\end{equation}
The set function $\mathcal{R}: \Delta \mapsto \mathcal{R} (\Delta)$
compactly expresses both the statistics of $\mathcal{E}$ for a general initial
system density matrix $W$ and the effect of $\mathcal{E}$ on $W$
\emph{conditioned} on the occurrence of the event ``the result of
$\mathcal{E}$ is in $\Delta$''.
To see this, note first that it follows {}from \eq{eq:pexdm} that the
probability that the result of the experiment lies in the set $\Delta$
is given by
$$
p(\Delta)= \mbox{${\rm tr}\,$}\left(\mathcal{R}(\Delta) W \right)\,. $$
The
conditional distribution $p(\alpha|\Delta)$ that the outcome is $\alpha$ given
that the result $\lambda_\alpha\in\Delta$ is then $\mbox{${\rm tr}\,$}(\mathcal{R}_\alpha W)/ \mbox{${\rm tr}\,$}(
\mathcal{R}(\Delta) W )$. The density matrix that reflects the
knowledge that the result is in $\Delta$, obtained by averaging
\eq{eq:axdens} over $\Delta$ using $p(\alpha|\Delta)$, is thus
$\mathcal{R}(\Delta) W / \mbox{${\rm tr}\,$} (\mathcal{R}(\Delta) W )$.
It follows {}from (\ref{eq:ins}) that $\mathcal{R}$ is a countably
additive set function whose values are positivity-preserving linear
transformations on the space of trace-class operators in \mbox{$\mathcal{H}$}. Any map
with these properties, not necessarily of the special form
(\ref{eq:ins}), is called an \emph{instrument}.
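A toy instrument of the special form \eq{eq:ins} makes the additivity and the formula $p(\Delta)=\mbox{${\rm tr}\,$}(\mathcal{R}(\Delta)W)$ concrete. This is an illustrative Python sketch; the projective $R_\alpha$ and the uniform-superposition state are assumptions chosen for simplicity:

```python
import numpy as np

# R(Delta)W = sum_{alpha in Delta} R_alpha W R_alpha^* with projective
# R_alpha on C^4.
dim = 4
R = [np.zeros((dim, dim)) for _ in range(dim)]
for a in range(dim):
    R[a][a, a] = 1.0

def instrument(Delta, W):
    return sum(R[a] @ W @ R[a].conj().T for a in Delta)

# Pure state: the uniform superposition, so W has all entries 1/4.
W = np.full((dim, dim), 0.25)

# Countable additivity on disjoint outcome sets:
lhs = instrument({0, 1, 2}, W)
rhs = instrument({0, 1}, W) + instrument({2}, W)
print(np.allclose(lhs, rhs))

# p(Delta) = tr(R(Delta) W):
print(np.isclose(np.trace(instrument({0, 1}, W)).real, 0.5))
```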
\subsection{On the State Description Provided by Density Matrices} So
far we have followed the standard terminology and have spoken of a
density matrix as describing the {\it state} of a physical system. It
is important to appreciate, however, that this is merely a frequently
convenient way of speaking, for Bohmian mechanics{} as well as for orthodox quantum theory{}. Insofar
as Bohmian mechanics{} is concerned, the significance of density matrices is neither
more nor less than what is implied by their role in the quantum
formalism as described in Sections \ref{secRWF} and \ref{secSEI}.
While many aspects of the notion of (effective) wave function\ extend to density
matrices, in particular with respect to weak and strong experiments,
density matrices lack the dynamical implications of wave function{}s for the
evolution of the configuration, a point that has been emphasized by
Bell \cite{Bel80}:
\begin{quotation}\setlength{\baselineskip}{12pt}\noindent
In the de Broglie-Bohm theory a fundamental significance is given to
the wave function, and it cannot be transferred to the density
matrix. \ldots Of course the density matrix retains all its usual
practical utility in connection with quantum statistics.
\end{quotation}
That this is so should be reasonably clear, since it is the wave function{} that
determines, in Bohmian mechanics{}, the evolution of the configuration, and the
density matrix of a system does not determine its wave function{}, even
statistically. To underline the point we shall recall the analysis of
Bell \cite{Bel80}: Consider a particle described by a density matrix
$W_t$ evolving autonomously, so that $W_{t} =U_{t}W_{0}U_{t}^{-1}$,
where $U_{t}$ is the unitary group generated by a Schr\"{o}dinger{} Hamiltonian.
Then $ \rho^{W_{t}}(x) \equiv W_{t}(x,x)\equiv \langle x| W_{t}|
x\rangle $ gives the probability distribution of the position of the
particle. Note that $\rho^{W}$ satisfies the continuity equation
$$
\frac{\partial \rho^W}{\partial t} + \hbox{\rm div}\, J^W
\,=\,0\qquad\mbox{where}\qquad J^{W} (x) = \frac{\hbar}{m}{\rm Im}\,
\left[ \nabla_x W(x,x')\right]_{x'=x}.
$$
This might suggest that the velocity of the particle should be
given by $ v =J^W /\rho^W $, which indeed agrees with the usual
formula when $W$ is a pure state ($W(x,x') = \psi (x) \psi^*(x')$).
However, this extension of the usual formula to arbitrary density
matrices, though mathematically ``natural,'' is not consistent with
what Bohmian mechanics\ prescribes for the evolution of the configuration. Consider,
for example, the situation in which the wave function\ of a particle is random,
either $\psi_1$ {\it or } $\psi_2$, with equal probability. Then the
density matrix is $ W(x,x') = \frac12\left( \psi_1(x) \psi_1^* (x')+
\psi_2(x)\psi_2^*(x')\right) $. But the velocity of the particle
will be always {\it either} $v_1$ or $v_2$ (according to whether the
actual wave function{} is $\psi_1$ or $\psi_2$), and---unless $\psi_1$ and
$\psi_2$ have disjoint supports---this does not agree with $J^W /
\rho^W $, an average of $v_1$ and $v_2$.
What we have just said is correct, however, only when spin is ignored.
For particles with spin a novel kind of density matrix emerges, a {\em
conditional density matrix}, analogous to the conditional wave
function \eq{eq:con} and with an analogous dynamical role: Even though
no conditional wave function need exist for a system entangled with
its environment when spin is taken into account, a conditional density
matrix $W$ always exists, and is such that the velocity of the system
is indeed given by $ J^W /\rho^W $. See \cite{Rodiden} for details.
A final remark: the statistical role of density matrices is basically
different {}from that provided by statistical ensembles, e.g., by Gibbs
states in classical statistical mechanics. This is because, as
mentioned earlier, even when it describes a random wave function{} via
\eq{eq:ensdm}, a density matrix $W$ does not determine the ensemble
$p(d\psi)$ {}from which it emerges. The map defined by
(\ref{eq:ensdm}) {}from probability measures $p$ on the unit sphere in
\mbox{$\mathcal{H}$}{} to density matrices $W$ is many-to-one.\footnote{This is relevant
to the foundations of quantum statistical mechanics, for which the
state of an isolated thermodynamic system is usually described by
the microcanonical density matrix $\mathcal{Z}^{-1} \delta ( H-E)$,
where $\mathcal{Z}=\mbox{${\rm tr}\,$} \delta ( H-E)$ is the partition function.
Which ensemble of wave function{}s should be regarded as forming the
thermodynamic ensemble? A natural choice is the uniform measure on
the subspace $H=E$, which should be thought of as fattened in the
usual way. Note that this choice is quite distinct {}from another
one that people often have in mind: a uniform distribution over a
basis of energy eigenstates of the appropriate energy. Depending
upon the choice made, we obtain different notions of typical
equilibrium wave function{}.} Consider, for example, the density matrix
$\frac{1}{n} I $ where $I$ is the identity operator on an
$n$-dimensional Hilbert space \mbox{$\mathcal{H}$}{}. Then a uniform distribution over
the vectors of any given orthonormal basis of \mbox{$\mathcal{H}$}{} leads to this
density matrix, as well as does the continuous uniform measure on the
sphere $\|\psi\|=1$. However, since the statistical distribution of
the results of any experiment depends on $p$ only through $W$,
different $p$'s associated with the same $W$ are {\it empirically
equivalent} in the sense that they can't be distinguished by
experiments performed on a system prepared somehow in the state $W$.
\section{Genuine Measurements}
\label{secMO}
\setcounter{equation}{0}
We have so far discussed various interactions between a system and an
apparatus relevant to the quantum measurement formalism, {}from the
very special ones formalized by ``ideal measurements'' to the general
situation described in Section 5. It is important to recognize that
nowhere in this discussion was there any implication that anything was
actually being measured. The fact that an interaction with an
apparatus leads to a pointer orientation that we call the result of
the experiment or ``measurement'' in no way implies that this result
reflects anything of significance concerning the system under
investigation, let alone that it reveals some preexisting property of
the system---and this is what is supposed to be meant by the word
measurement. After all \cite{Sch35}, ``any old playing around with an
indicating instrument in the vicinity of another body, whereby at any
old time one then takes a reading, can hardly be called a measurement
of this body,'' and the fact that the experiment happens to be associated,
say, with a self-adjoint operator in the manner we have described, so
that the experiment is spoken of, in the quantum formalism, as a
measurement of the corresponding observable, certainly offers little
support for using language in this way.
We shall elaborate on this point later on. For now we wish to observe
that the very generality of our analysis, particularly that of Section
5, covering as it does all possible interactions between system and
apparatus, covers as well those particular situations that in fact are
genuine measurements. This allows us to make some definite statements
about what can be measured in Bohmian mechanics.
For a physical quantity, describing an objective property of a system,
to be measurable means that it is possible to perform an experiment on
the system that measures the quantity, i.e., an experiment whose
result conveys its value. In Bohmian mechanics\ a physical quantity $\xi$ is
expressed by a function
\begin{equation} {\xi}= f (X, \psi) \label{pq}
\end{equation}
of the complete state $(X, \psi)$ of the system. An experiment \mbox{$\mathscr{E}$}\
measuring $\xi$ is thus one whose result ${Z}=F(X_T,Y_T)\equiv
{Z}(X,Y,\Psi)$ equals $\xi=f(X,\psi)\equiv {\xi}(X,\psi)$,
\begin{equation}
{Z}(X,Y,\Psi)={\xi}(X,\psi),\label{xz}
\end{equation}
where $X$, $Y$, $\psi$ and $\Psi$ refer, as in Section 5, to the
initial state of system and apparatus, immediately prior to the
measurement, and where the equality should be regarded as approximate,
holding to any desired degree of accuracy.
The most basic quantities are, of course, the state components
themselves, namely $X$ and $\psi$, as well as the velocities
\begin{equation}\label{velox}{{\bf v}_k} = \frac{\hbar}{m_k}{\rm
Im}\frac{{\boldsymbol{\nabla}_k}\psi(X)}{\psi(X)}
\end{equation}
of the particles. One might also consider quantities describing the
future behavior of the system, such as the configuration of an
isolated system at a later time, or the time of escape of a particle
{}from a specified region, or the asymptotic velocity discussed in
Section \ref{subsec.basop}. (Because the dynamics is deterministic,
all of these quantities are functions of the initial state of the
system and are thus of the form (\ref{pq}).)
We wish to make a few remarks about the measurability of these
quantities. In particular, we wish to mention, as an immediate
consequence of the analysis at the beginning of Section 5, a condition
that must be satisfied by any quantity if it is to be measurable.
\subsection{A Necessary Condition for Measurability}
Consider any experiment \mbox{$\mathscr{E}$}\ measuring a physical quantity $\xi$. We
showed in Section 5 that the statistics of the result $Z$ of \mbox{$\mathscr{E}$}\ must
be governed by a POVM, i.e., that the probability distribution of $Z$
must be given by a measure-valued quadratic map on the system Hilbert
space \mbox{$\mathcal{H}$}. Thus, by (\ref{xz}),
\begin{equation}
\mbox{
\begin{minipage}{0.85\textwidth}\openup 1.4\jot
\setlength{\baselineskip}{12pt}\emph{$\xi$ is measurable only if its
probability distribution $\mu_{\xi}^{\psi}$ is a measure-valued
quadratic map on \mbox{$\mathcal{H}$}. }
\end{minipage}}
\label{MC1}
\end{equation}
As indicated earlier, the position ${\bf X}$ and the asymptotic
velocity or momentum ${\bf P}$ have distributions quadratic in $\psi$,
namely $\mu_{\bf X}^{\psi}(d{\bf x})=|\psi({\bf x})|^2\,d{\bf x}$ and $\mu_{\bf
P}^{\psi}(d{\bf p})=|\tilde{\psi}({\bf p})|^2\,d{\bf p}$, respectively.
Moreover, they are both measurable, basically because suitable local
interactions exist to establish appropriate correlations with the
relevant macroscopic variables. For example, in a bubble chamber a
particle following a definite path triggers a chain of reactions that
leads to the formation of (macroscopic) bubbles along the path.
The point we wish to make now, however, is simply this: the
measurability of these quantities is not a consequence of the fact
that these quantities obey this measurability condition. We emphasize
that this condition is merely a necessary condition for measurability,
and not a sufficient one. While it does follow that if $\xi$ satisfies
this condition there exists a discrete experiment that is an
approximate formal measurement of $\xi$ (in the sense that the
distribution of the result of the experiment is approximately $
\mu_{\xi}^{\psi}$), this experiment need not provide a genuine
measurement of $\xi$ because the interactions required for its
implementation need not exist and because, even if they did, the
result $Z$ of the experiment might not be related to the quantity
$\xi$ in the right way, i.e., via (\ref{xz}).
We now wish to illustrate the use of this condition, first
transforming it into a weaker but more convenient form. Note that any
quadratic map $\mu^\psi$ must satisfy
\[
\mu^{\psi_1 + \psi_2} + \mu^{\psi_1 - \psi_2} = 2(\mu^{\psi_1} +
\mu^{\psi_2})
\]
and thus if $\mu^\psi$ is also positive we have the inequality
\begin{equation}\label{ineq} \mu^{\psi_1+\psi_2} \le 2(\mu^{\psi_1} +
\mu^{\psi_2}).
\end{equation}
Thus it follows {}from \eq{MC1} that a quantity\footnote{This
conclusion is also a more or less direct consequence of the
linearity of the Schr\"odinger evolution: If
$\psi_i\otimes\Phi_0\mapsto\Psi_i$ for all $i$, then
$\sum\psi_i\otimes\Phi_0\mapsto\sum\Psi_i$. But, again, our purpose
here has been mainly to illustrate the use of the measurability
condition itself.}
\begin{equation}
\mbox{
\begin{minipage}{0.85\textwidth}\openup 1.4\jot
\setlength{\baselineskip}{12pt}\emph{$\xi$ must fail to be
measurable if it has a possible value (one with nonvanishing
probability or probability density) when the wave function of the
system is $\psi_1+ \psi_2$ that is neither a possible value when
the wave function is $\psi_1$ nor a possible value when the wave
function is $\psi_2$. }
\end{minipage}}
\label{MC2}
\end{equation}
(Here neither $\psi_1$ nor $\psi_2$ need be normalized.)
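The quadratic-map identity and the resulting inequality \eq{ineq} are easily confirmed numerically for a representative quadratic map $\mu^{\psi}(\Delta)=\langle\psi, O(\Delta)\psi\rangle$. This is an illustrative sketch; the projection standing in for $O(\Delta)$ and the random (unnormalized) vectors are assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
dim = 4

# Representative POVM value O(Delta): a projection on C^4.
O = np.diag([1.0, 1.0, 0.0, 0.0])

def mu(psi):
    """mu^psi(Delta) = <psi, O(Delta) psi>, a quadratic map in psi."""
    return (psi.conj() @ O @ psi).real

psi1 = rng.normal(size=dim) + 1j * rng.normal(size=dim)
psi2 = rng.normal(size=dim) + 1j * rng.normal(size=dim)

# Parallelogram identity for quadratic maps ...
lhs = mu(psi1 + psi2) + mu(psi1 - psi2)
rhs = 2 * (mu(psi1) + mu(psi2))
print(np.isclose(lhs, rhs))

# ... and positivity then gives mu(psi1 + psi2) <= 2(mu(psi1) + mu(psi2)).
print(mu(psi1 + psi2) <= rhs + 1e-12)
```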
\subsection{The Nonmeasurability of Velocity, Wave Function and
Deterministic Quantities}\label{vwf}
It is an immediate consequence of \eq{MC2} that neither the velocity
nor the wave function is measurable, the latter because the value ``$
\psi_1+ \psi_2$'' is neither ``$\psi_1$'' nor ``$\psi_2$,'' and the
former because every wave function $\psi$ may be written as
$\psi=\psi_1+ \psi_2$ where $\psi_1$ is the real part of $\psi$ and
$\psi_2$ is $i$ times the imaginary part of $\psi$, for both of which
the velocity (of whatever particle) is 0.
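The observation about real wave functions can be made concrete: for real $\psi$ the velocity field $(\hbar/m)\,{\rm Im}(\nabla\psi/\psi)$ vanishes identically, and the same holds for $i$ times a real function. A minimal numerical check (our illustration; the units $\hbar=m=1$ and the Gaussian $\psi$ are assumptions):

```python
import numpy as np

# Velocity field v = Im(psi'/psi) in units hbar = m = 1.
x = np.linspace(-1.0, 1.0, 201)
h = x[1] - x[0]

psi_real = np.exp(-x**2)          # a real wave function (e.g. Re psi)
dpsi = np.gradient(psi_real, h)   # numerical derivative
v = np.imag(dpsi / psi_real)      # Im of a real quantity vanishes

print(np.allclose(v, 0.0))  # True: psi_1 and i*psi_2 both give velocity 0
```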
Note that this is a very strong and, in a sense, surprising
conclusion, in that it establishes the {\it impossibility} of
measuring what is, after all, a most basic dynamical variable for a
{\it deterministic} mechanical theory of particles in motion. It
should probably be regarded as even more surprising that the proof
that the velocity---or wave function---is not measurable seems to rely
almost on nothing, in effect just on the linearity of the evolution of
the wave function. However, one should not overlook the crucial role of quantum
equilibrium.
We observe that the nonmeasurability of the wave function\ is related to the
{\it impossibility of copying} the wave function. (This question arises
sometimes in the form, ``Can one clone the wave function?''
\cite{ghiraun, WoZu, ghira}.) Copying would be accomplished, for
example, by an interaction leading, for all $\psi$, {}from
$\psi\otimes\phi_0\otimes\Phi_0$ to $\psi\otimes\psi\otimes\Phi$, but
this is clearly incompatible with unitarity. We wish here merely to
remark that the impossibility of cloning can also be regarded as a
consequence of the nonmeasurability of the wave function. In fact, were cloning
possible one could---by making many copies---measure the wave function\ by
performing suitable measurements on the various copies. After all, any
wave function $\psi$ is determined by $\langle \psi, A \psi\rangle$
for sufficiently many observables $A$ and these expectation values can
of course be computed using a sufficiently large ensemble.
By a deterministic quantity we mean any function ${\xi}=f(\psi)$ of
the wave function alone (which thus does not inherit any irreducible
randomness associated with the random configuration $X$). It follows
easily {}from \eq{MC2} that no (nontrivial) deterministic quantity is
measurable.\footnote{Note also that
$\mu_{\xi}^\psi(d\lambda)=\delta(\lambda-f(\psi))\, d\lambda$ seems
manifestly nonquadratic in $\psi$ (unless $f$ is constant).} In
particular, the mean value $\langle \psi, A \psi\rangle$ of an
observable $A$ (not a multiple of the identity) is not
measurable---though it would be were it possible to copy the wave
function, and it can of course be measured by a nonlinear experiment,
see Section \ref{secnl}.
\subsection{Initial Values and Final Values}\label{secIVFV}
Measurement is a tricky business. In particular, one may wonder how,
if it is not measurable, we are ever able to know the wave function{} of a
system---which in orthodox quantum theory often seems to be the only
thing that we do know about it.
In this regard, it is important to appreciate that we were concerned
in the previous section only with initial values, with the wave
function and the velocity {\it prior\/} to the measurement. We shall
now briefly comment upon the measurability of final values, produced
by the experiment.
The nonmeasurability argument of Section \ref{vwf} does not cover
final values. This may be appreciated by noting that the crucial
ingredient in the analysis involves a fundamental time-asymmetry: The
probability distribution $\mu^\psi$ of the result of an experiment is
a quadratic functional of the {\it initial\/} wave function $\psi$,
not the final one---of which it is not a functional at all. Moreover,
the final velocity can indeed be measured, by a momentum measurement
as described in Section \ref{subsec.basop}. (That such a measurement
yields also the final velocity follows {}from the formula in footnote
\ref{foot:conv} for the asymptotic wave function.) And the final wave
function can be measured by an ideal measurement of any nondegenerate
observable, and more generally by any strong formal measurement whose
subspaces ${\mbox{$\mathcal{H}$}}_{\alpha}$ are one-dimensional, see Section \ref{sec:PP}: If the
outcome is $\alpha$, the final wave function{} is $R_{\alpha} \psi= R_{\alpha} P_{ {\mathcal{H}_{\alpha} } } \psi$, which is
independent of the initial wave function{} $\psi$ (up to a scalar multiple).
We also wish to remark that this distinction between measurements of
initial values and measurements of final values has no genuine
significance for passive measurements, which merely reveal preexisting
properties without in any way affecting the measured system. However,
quantum measurements are usually active; for example, an ideal
measurement transforms the wave function of the system into an
eigenstate of the measured observable. But passive or active, a
measurement, by its very meaning, is concerned strictly speaking with
properties of a system just before its performance, i.e., with initial
values. At the same time, to the extent that any property of a system
is conveyed by a typical quantum ``measurement,'' it is a property
defined by a final value.
For example, according to orthodox quantum theory a position
measurement on a particle with a spread-out wave function, to the
extent that it measures anything at all, measures the final position
of the particle, created by the measurement, rather than the initial
position, which is generally regarded as not existing prior to the
measurement. And even in Bohmian mechanics, in which such a
measurement may indeed reveal the initial position, which---if the
measurement is suitably performed---will agree with the final
position, this measurement will still be active since the wave
function of the system must be transformed by the measurement into one
that is compatible with the sharper knowledge of the position that it
provides, see Section 2.1.
\subsection{Nonlinear Measurements and the Role of Prior Information}
\label{secnl}
The basic idea of measurement is predicated on initial ignorance. We
think of a measurement of a property of a system as conveying that
property by a procedure that does not seriously depend upon the state
of the system,\footnote{This statement must be taken with a grain of
salt. Some things must be known about the system prior to
measurement, for example, that it is in the vicinity of the measurement
apparatus, or that an atom whose angular momentum we wish to measure
is moving towards the relevant Stern-Gerlach magnets, as well as a
host of similar, often unnoticed, pieces of information. This sort
of thing does not much matter for our purposes in this paper and can
be safely ignored. Taking them into account would introduce
pointless complications without affecting the analysis in an
essential way.} any details of which must after all be unknown prior
to at least some engagement with the system. Be that as it may, the
notion of measurement as codified by the quantum formalism is indeed
rooted in a standpoint of ignorance: the experimental procedures
involved in the measurement do not depend upon the state of the
measured system. And our entire discussion of measurement up to now
has been based upon that very assumption, that \mbox{$\mathscr{E}$}\ itself does not
depend on $\psi$ (and certainly not on $X$).
If, however, some prior information on the initial system wave
function $\psi$ were available, we could exploit this information to
measure quantities that would otherwise fail to be measurable. For
example, for a single-particle system, if we somehow knew its initial
wave function{} $\psi$ then a measurement of the initial position of the
particle would convey its initial velocity as well, via
(\ref{velox})---even though, as we have shown, this quantity isn't
measurable without such prior information.
By a nonlinear measurement or experiment $\mbox{$\mathscr{E}$}=\mbox{$\mathscr{E}$}^\psi$ we mean one in
which, unlike those considered so far, one or more of the defining
characteristics of the experiment depends upon $\psi$. For example, in
the measurement of the initial velocity described in the previous
paragraph, the calibration function $F=F^\psi$ depends upon
$\psi$.\footnote{Suppose that ${Z}_1=F_1(Q_T)=X$ is the result of the
measurement of the initial position. Then $F^\psi=G^\psi\circ F_1$
where $G^\psi(\cdot)= \frac{\hbar}{m}{\rm
Im}\frac{\boldsymbol{\nabla}\psi}{\psi}(\cdot)$.} More generally
we might have that $U=U^\psi$ or $\Phi_0=\Phi_0^\psi$.
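The nonlinear calibration in the footnote above rests on the velocity formula $v = \frac{\hbar}{m}\,\mathrm{Im}\,\frac{\boldsymbol{\nabla}\psi}{\psi}$. A minimal numerical sketch of this map from $\psi$ to the velocity field (in illustrative units $\hbar = m = 1$, with the hypothetical helper name \verb|bohm_velocity|) is:

```python
import numpy as np

HBAR = 1.0   # illustrative units: hbar = m = 1
M = 1.0

def bohm_velocity(psi, x):
    """Velocity field v(x) = (hbar/m) Im(psi'/psi) on a grid."""
    dpsi = np.gradient(psi, x)          # second-order finite differences
    return (HBAR / M) * np.imag(dpsi / psi)

# Gaussian packet with mean momentum hbar*k: psi = exp(-x^2/4s^2) e^{ikx}.
# For this psi, Im(psi'/psi) = k exactly, so v should be ~ hbar*k/m everywhere.
x = np.linspace(-2.0, 2.0, 2001)
k, s = 3.0, 1.0
psi = np.exp(-x**2 / (4 * s**2)) * np.exp(1j * k * x)

v = bohm_velocity(psi, x)
print(v[1000])   # velocity at x = 0, close to k = 3
```

Knowing $\psi$, a measured initial position $X$ thus determines the initial velocity by evaluating this field at $X$, which is the content of the nonlinear calibration $G^\psi$.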
The wave function can of course be measured by a nonlinear
measurement---just let $F^\psi\equiv\psi$. Somewhat less trivially,
the initial wave function can be measured, at least formally, if it is
known to be a member of a given orthonormal basis, by measuring any
nondegenerate observable whose eigenvectors form that basis. The
proposals of Aharonov, Anandan and Vaidman \cite{AAV93} for measuring
the wave function, though very interesting, are of this
character---they involve nonlinear measurements that depend upon a
choice of basis containing $\psi$---and thus remain
controversial.\footnote{In one of their proposals the wave function is
``protected'' by a procedure that depends upon the basis; in
another, involving adiabatic interactions, $\psi$ must be a
nondegenerate eigenstate of the Hamiltonian $H$ of the system, but
it is not necessary that the latter be known.}
\subsection{A Position Measurement that Does not Measure Position}
\label{secapm}
We began this section by observing that what is spoken of as a
measurement in quantum theory need not really measure anything. We
mentioned, however, that in Bohmian mechanics the position can be
measured, and the experiment that accomplishes this would of course be
a measurement of the position operator. We wish here to point out, by
means of a very simple example, that the converse is not true, i.e.,
that a measurement of the position operator need not be a measurement
of the position.
Consider the harmonic oscillator in 2 dimensions with Hamiltonian
$$H = -\frac{\hbar^2}{2m}\big( \frac{\partial^2}{\partial x^2} +
\frac{\partial^2}{\partial y^2}\big) \ + \frac{\omega^2 m}{2} (x^2
+y^2)\,.
$$
Except for an irrelevant time-dependent phase factor, the evolution
$\psi_t$ is periodic, with period $\tau =2\pi/\omega$. The Bohm
motion of the particle, however, need not have period $\tau$. For
example, the $(n=1, m=1)$-state, which in polar coordinates is of the
form
\begin{equation}
\psi_t (r, \phi)
=\frac{m\omega}{\hbar\sqrt \pi} r e^{-\frac{m\omega}{2\hbar}r^2}
e^{i\phi}e^{-i\frac 32 \omega t},
\label{nm}
\end{equation}
generates a circular motion of the particle around the origin with
angular velocity $\hbar/(mr^2)$, and hence with periodicity depending
upon the initial position of the particle---the closer to the origin,
the faster the rotation. Thus, in general, $${\bf X}_\tau \neq {\bf
X}_0.$$
Nonetheless, ${\bf X}_\tau$ and ${\bf X}_0 $ are identically
distributed random variables, since
$|\psi_\tau|^2=|\psi_0|^2\equiv|\psi|^2$.
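The trajectory described above is explicit enough to check directly: in the $(n=1,m=1)$ state the radius is conserved and the angle advances at the rate $\hbar/(mr^2)$. A minimal sketch (illustrative units $\hbar = m = \omega = 1$; the helper name \verb|bohm_position| is ours) confirms that ${\bf X}_\tau \neq {\bf X}_0$ while $|{\bf X}_\tau| = |{\bf X}_0|$:

```python
import numpy as np

HBAR = M = OMEGA = 1.0            # illustrative units
TAU = 2 * np.pi / OMEGA           # period of the wave function

def bohm_position(r0, phi0, t):
    """Bohmian trajectory in the (n=1, m=1) state: the radius is
    conserved and the angle advances at rate hbar/(m r0^2)."""
    phi = phi0 + HBAR * t / (M * r0**2)
    return r0 * np.cos(phi), r0 * np.sin(phi)

x0, y0 = bohm_position(0.7, 0.0, 0.0)
xt, yt = bohm_position(0.7, 0.0, TAU)

# After one period of psi the particle has not returned to its starting
# point (its angular velocity 1/0.49 is incommensurate with omega = 1)...
print(abs(xt - x0) > 0.01)                                 # True
# ...but its distance from the origin is conserved, consistent with
# |psi_tau|^2 = |psi_0|^2.
print(abs(np.hypot(xt, yt) - np.hypot(x0, y0)) < 1e-12)    # True
```

The closer the chosen $r_0$ is to the origin, the more turns the particle completes in one period of $\psi$.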
We may now focus on two different experiments: Let \mbox{$\mathscr{E}$}\ be a
measurement of the actual position ${\bf X}_0$, the {\it initial\/}
position, and hence of the position operator, and let $\mbox{$\mathscr{E}$}'$ be an
experiment beginning at the same time as \mbox{$\mathscr{E}$}\ but in which it is the
position ${\bf X}_\tau$ at time $\tau$ that is actually measured.
Since for all $\psi$ the result of $\mbox{$\mathscr{E}$}'$ has the same distribution as
the result of \mbox{$\mathscr{E}$}, $\mbox{$\mathscr{E}$}'$ is also a measurement of the position
operator. But $\mbox{$\mathscr{E}$}'$ is not a measurement of the initial position
since the position at time $\tau$ does not in general agree with the
initial position: A measurement of the position at time $\tau$ is not
a measurement of the position at time $0$. Thus, while a measurement
of position is always a measurement of the position operator,
\begin{equation}gin{quotation}Schr\"odinger's equationtlength{\baselineskip}{12pt}\noindent{\it
A measurement of the position operator is not necessarily a
genuine measurement of position!}
\end{quotation}
\subsection{Theory Dependence of Measurement}
The harmonic oscillator example provides a simple illustration of an
elementary point that is often ignored: in discussions of measurement
it is well to keep in mind the theory under consideration. The theory
we have been considering here has been Bohmian mechanics. If, instead,
we were to analyze the harmonic oscillator experiments described above
using different theories our conclusions about results of measurements
would in general be rather different, even if the different theories
were empirically equivalent. So we shall analyze the above experiment
$\mbox{$\mathscr{E}$}'$ in terms of various other formulations or interpretations of
quantum theory.
In strict orthodox quantum theory there is no such thing as a genuine
particle, and thus there is no such thing as the genuine position of a
particle. There is, however, a kind of operational definition of
position, in the sense of an experimental setup, where a measurement
device yields results the statistics of which are given by the
position operator.
In naive orthodox quantum theory one does speak loosely about a
particle and its position, which is thought of---in a somewhat
uncritical way---as being revealed by measuring the position operator.
Any experiment that yields statistics given by the position operator
is considered a genuine measurement of the particle's
position.\footnote{This, and the failure to appreciate the theory
dependence of measurements, has been a source of unfounded
criticisms of Bohmian mechanics (see \cite{ESSW92, DFGZ93f, DHS93}).} Thus $\mbox{$\mathscr{E}$}'$
would be considered as a measurement of the position of the particle
at time zero.
The decoherent (or consistent) histories formulation of quantum
mechanics \cite{GMH90, Omn88, Gri84} is concerned with the
probabilities of certain coarse-grained histories, given by the
specification of finite sequences of events, associated with
projection operators, together with their times of occurrence. These
probabilities are regarded as governing the occurrence of the
histories, regardless of whether any of the events are measured or
observed, but when they are observed, the probabilities of the
observed histories are the same as those of the unobserved histories.
The experiments \mbox{$\mathscr{E}$}{} and $\mbox{$\mathscr{E}$}'$ are measurements of single-event
histories corresponding to the position of the particle at time $0$
and at time $\tau$, respectively. Since the Heisenberg position
operators $\hat{\bf X}_\tau =\hat{\bf X}_0$ for the harmonic
oscillator, it happens to be the case, according to the decoherent
histories formulation of quantum mechanics, that for this system the
position of the particle at time $\tau$ is the same as its position at
time $0$ when the positions are unobserved, and that $\mbox{$\mathscr{E}$}'$ in fact
measures the position of the particle at time $0$ (as well as the
position at time $\tau$).
The spontaneous localization or dynamical reduction models
\cite{GRW,GRP90} are versions of quantum theory in which there are no
genuine particles; in these theories reality is represented by the
wave function alone (or, more accurately, by entities entirely
determined by the wave function{}). In these models Schr\"{o}dinger{}'s equation is
modified by the addition of a stochastic term that causes the wave function{} to
collapse during measurement in a manner more or less consistent with
the quantum formalism. In particular, the performance of \mbox{$\mathscr{E}$}{} or $\mbox{$\mathscr{E}$}'$
would lead to a random collapse of the oscillator wave function onto a
narrow spatial region, which might be spoken of as the position of the
particle at the relevant time. But $\mbox{$\mathscr{E}$}'$ could not be regarded in any
sense as measuring the position at time $0$, because the localization
does not occur for $\mbox{$\mathscr{E}$}'$ until time $\tau$.
Finally we mention stochastic mechanics \cite{Nel85}, a theory
ontologically very similar to Bohmian mechanics{} in that the basic entities with
which it is concerned are particles described by their positions.
Unlike Bohmian mechanics{}, however, the positions evolve randomly, according to a
diffusion process. Just as with Bohmian mechanics{}, for stochastic mechanics the
experiment $\mbox{$\mathscr{E}$}'$ is not a measurement of the position at time zero,
but in contrast to the situation in Bohmian mechanics{}, where the result of the
position measurement at time $\tau$ determines, given the wave function{}, the
position at time zero (via the Bohmian equation of motion), this is
not so in stochastic mechanics because of the randomness of the
motion.
Schr\"odinger's equationction{Hidden Variables}\mbox{\boldmath $\lambda $} dabda_{\alpha}bel{secHV}
The issue of hidden variables concerns the question of whether quantum
randomness arises in a completely ordinary manner, merely {}from the
fact that in orthodox quantum theory we deal with an incomplete
description of a quantum system. According to the hidden-variables
hypothesis, if we had at our disposal a sufficiently complete
description of the system, provided by supplementary parameters
traditionally called hidden variables, the totality of which is
usually denoted by $\lambda$, the behavior of the system would thereby
be determined, as a function of $\lambda$ (and the wave function). In
such a hidden-variables theory, the randomness in results of
measurements would arise solely {}from randomness in the unknown
variables $\lambda$. On the basis of a variety of ``impossibility
theorems,'' the hidden-variables hypothesis has been widely regarded
as having been discredited.
Note that Bohmian mechanics is just such a hidden-variables theory,
with the hidden variables $\lambda$ given by the configuration $Q$ of
the total system. We have seen in particular that in a Bohmian
experiment, the result $Z$ is determined by the initial configuration
$Q=(X,Y)$ of the system and apparatus. Nonetheless, there remains
much confusion about the relationship between Bohmian mechanics and
the various theorems supposedly establishing the impossibility of
hidden variables. In this section we wish to make several comments on
this matter.
\subsection{Experiments and Random Variables}\label{seerv}
In Bohmian mechanics\ we understand very naturally how random variables arise in
association with experiments: the initial complete state $(Q, \Psi)$
of system and apparatus evolves deterministically and uniquely
determines the outcome of the experiment; however, as the initial
configuration $Q$ is in quantum equilibrium, the outcome of the
experiment is random.
A general experiment \mbox{$\mathscr{E}$}\ is then {\it always} associated with a random
variable (RV) $Z$ describing its result. In other words, according to
Bohmian mechanics, there is a natural association
\begin{equation}\label{extrv} \mbox{$\mathscr{E}$} \mapsto {Z}, \end{equation}
between experiments and RVs. Moreover, whenever the statistics of the
result of \mbox{$\mathscr{E}$}\ is governed by a self-adjoint\ operator $A$ on the Hilbert space
of the system, with the spectral measure of $A$ determining the
distribution of $Z$, for which we shall write $Z\mapsto A$ (see
\eq{eq:prdeltan}), Bohmian mechanics{} establishes thereby a natural association
between \mbox{$\mathscr{E}$}\ and $A$
\begin{equation}\label{extop} \mbox{$\mathscr{E}$} \mapsto A . \end{equation}
While for Bohmian mechanics the result $Z$ depends in general on both
$X$ and $Y$, the initial configurations of the system and of the
apparatus, for many real-world experiments $Z$ depends only on $X$ and
the randomness in the result of the experiment is thus due solely to
randomness in the initial configuration of the system alone. This is
most obvious in the case of genuine position measurements (for which
$Z(X,Y)= X$). That in fact the apparatus need not introduce any extra
randomness for many other real-world experiments as well follows then
{}from the observation that the role of the apparatus in many
real-world experiments is to provide suitable background fields, which
introduce no randomness, as well as a final detection, a measurement
of the actual positions of the particles of the system. In
particular, this is the case for those experiments most relevant to
the issue of hidden variables, such as Stern-Gerlach measurements of
spin, as well as for momentum measurements and more generally
scattering experiments, which are completed by a final detection of
position.
The result of these experiments is then given by a random variable $$
{Z}= F(X_T)= G(X)\, ,$$
where $T$ is the final time of the
experiment,\footnote{Concerning the most common of all real-world
quantum experiments, scattering experiments, although they are
completed by a final detection of position, this detection usually
occurs, not at a definite time $T$, but at a random time, for
example when a particle enters a localized detector. Nonetheless,
for computational purposes the final detection can be regarded as
taking place at a definite time $T$. This is a consequence of the
flux-across-surfaces theorem \cite{dau96,det3,det2}, which
establishes an asymptotic equivalence between flux across surfaces
(detection at a random time) and scattering into cones (detection at
a definite time).} on the probability space $\{ \Omega, {\mbox{$\mathbb{P}$}} \}$,
where $\Omega=\{ X\}$ is the set of initial configurations of the
system and ${\mbox{$\mathbb{P}$}}(dx)= |\psi|^2dx$ is the quantum equilibrium
distribution associated with the initial wave function\ $\psi$ of the system.
For these experiments (see Section \ref{secnoy}) the distribution of
${Z}$ is always governed by a PVM, corresponding to some self-adjoint{}
operator $A$, $Z\mapsto A$, and thus Bohmian mechanics\ provides in these cases a
natural map $\mbox{$\mathscr{E}$} \mapsto A$.
\subsection{Random Variables, Operators, and the Impossibility
Theorems}
\label{sec:RVOIT}
We would like to briefly review the status of the so-called
impossibility theorems for hidden variables, the most famous of which
are due to von Neumann~\cite{vNe55}, Gleason~\cite{Glea57}, Kochen and
Specker~\cite{KoSp67}, and Bell~\cite{Bel64}. Since Bohmian mechanics
exists, these theorems can't possibly establish the impossibility of
hidden variables, the widespread belief to the contrary
notwithstanding. What these theorems do establish, in great
generality, is that there is no ``{\it good\/}'' map {}from self-adjoint\
operators on a Hilbert space \mbox{$\mathcal{H}$}{} to random variables on a common
probability space,
\begin{equation}\label{dax} A\mapsto {Z}\equiv {Z}_A\, ,
\end{equation}
where ${Z}_A={Z}_A(\lambda)$ should be thought of as the result of
``measuring $A$'' when the hidden variables, that complete the quantum
description and restore determinism, have value $\lambda$. Different
senses of ``good'' correspond to different impossibility theorems.
For any particular choice of $\lambda$, say $\lambda_0$, the map \eq{dax} is
transformed to a \emph{value} map
\begin{equation}\label{dax2} A\mapsto v(A)
\end{equation}
{}from self-adjoint{} operators to real numbers (with $v(A)= {Z}_A(\lambda_0)$).
The stronger impossibility theorems establish the impossibility of a
good value map, again with different senses of ``good'' corresponding
to different theorems.
Note that such theorems are not very surprising. One would not expect
there to be a ``good'' map {}from a noncommutative algebra to a
commutative one.
One of von Neumann's assumptions was, in effect, that the map \eq{dax}
be linear. While mathematically natural, this assumption is physically
rather unreasonable and in any case is entirely unnecessary. In order
to establish that there is no good map \eq{dax}, it is sufficient to
require that the map be good in the minimal sense that the following
{\it agreement condition} is satisfied:
\begin{equation}gin{quotation}Schr\"odinger's equationtlength{\baselineskip}{12pt}{\it
Whenever the quantum mechanical joint distribution of a set of
self-adjoint{} operators $(A_1,\ldots, A_m)$ exists, i.e., when they form a
commuting family, the joint distribution of the corresponding set
of random variables, i.e., of $(Z_{A_1}, \ldots, Z_{A_m})$,
agrees with the quantum mechanical joint
distribution.}\end{quotation}
The agreement condition implies that all deterministic relationships
among commuting observables must be obeyed by the corresponding random
variables. For example, if $A$, $B$ and $C$ form a commuting family
and $C=AB$, then we must have that $Z_C =Z_AZ_B$ since the joint
distribution of $Z_A$, $Z_B$ and $Z_C$ must assign probability
$0$ to the set $\{(a,b,c)\in \mathbb{R}^3 | c\neq ab\}$. This leads to a
minimal condition for a good value map $A\mapsto v(A)$, namely that it
preserve functional relationships among commuting observables: For any
commuting family $A_1,\ldots, A_m$, whenever $f(A_1, \dots, A_m)=0$
(where $f:\mathbb{R}^m\to\mathbb{R}$ represents a linear, multiplicative, or any other
relationship among the $A_i$'s), the corresponding values must satisfy
the same relationship, $f(v(A_1),\ldots, v(A_m))=0 $.
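For a commuting family the minimal condition is automatically satisfiable, since the operators can be simultaneously diagonalized and the hidden variable can simply select a joint eigenstate. A minimal sketch of this (the matrices and the helper name \verb|value_map| are our own illustrative choices, not from the text):

```python
import numpy as np

# Model a commuting family by diagonal matrices: the joint eigenbasis
# is the standard basis, and lam0 indexes a joint eigenstate.
A = np.diag([1.0, -1.0, 2.0])
B = np.diag([3.0, 0.0, -1.0])
C = A @ B                      # the functional relationship C = AB

def value_map(op, lam0):
    """Hidden-variables value map for a commuting family: v(op) is the
    eigenvalue of op in the joint eigenstate labeled by lam0."""
    return op[lam0, lam0]

lam0 = 2
vA, vB, vC = (value_map(X, lam0) for X in (A, B, C))
# Within one commuting family the relationship C = AB is preserved:
print(vC == vA * vB)   # True, for every choice of lam0
```

The impossibility theorems say precisely that no such assignment can be made consistently across {\it all\/} self-adjoint operators at once, i.e., across noncommuting families sharing common members.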
The various impossibility theorems correctly demonstrate that there
are no maps, {}from self-adjoint operators to random variables or to
values, that are good, merely in the minimal senses described
above.\footnote{Another natural sense of good map $A\mapsto v(A)$ is
given by the requirement that $v({\bf A})\in \mbox{sp}\,({\bf A})$,
where ${\bf A}=(A_1,\ldots, A_m)$ is a commuting family, $v({\bf
A})= (v(A_1),\ldots, v(A_m))\in \mathbb{R}^m$ and $\mbox{sp}\,({\bf A})$
is the joint spectrum of the family. That a map good in this sense
is impossible follows {}from the fact that if ${\bf
\alpha}=(\alpha_1,\ldots,\alpha_m)\in \mbox{sp}\,({\bf A})$, then
$\alpha_1,\ldots,\alpha_m$ must obey all functional relationships for
$A_1,\ldots, A_m $.}
We note that while the original proofs of the impossibility of a good
value map, in particular that of the Kochen-Specker theorem, were
quite involved, in more recent years drastically simpler proofs have
been found (for example, by Peres \cite{Per91}, by Greenberger, Horne,
Shimony, and Zeilinger \cite{GHSZ89}, and by Mermin \cite{merm93}).
In essence, one establishes the impossibility of a good map
$A\mapsto Z_A$ or $A\mapsto v(A)$ by showing that the $v(A)$'s, or
$Z_A$'s, would have to satisfy impossible relationships. These
impossible relationships are very much like the following:
$Z_A=Z_B=Z_C \neq Z_A$. However no impossible relationship can
arise for only three quantum observables, since they would have to
form a commuting family, for which quantum mechanics would supply a
joint probability distribution. Thus the quantum relationships can't
possibly lead to an inconsistency for the values of the random
variables in this case.
With four observables $A,B,C$, and $D$ it may easily happen that
$[A,B]=0$, $[B,C]=0$, $[C,D]=0$, and $[D,A]=0$ even though they don't
form a commuting family (because, say, $[A,C]\neq 0$). It turns out,
in fact, that four observables suffice for the derivation of
impossible quantum relationships. Perhaps the simplest example of
this sort is due to Hardy~\cite{hardy}, who showed that for almost
every quantum state for two spin 1/2 particles there are four
observables $A,B,C$, and $D$ (two of which happen to be spin
components for one of the particles while the other two are spin
components for the other particle) whose quantum mechanical pair-wise
distributions for commuting pairs are such that a good map to random
variables must yield random variables $Z_A,Z_B,Z_C$, and $Z_D$
obeying the following relationships:
\begin{itemize}
\item[(1)] The event $\{ Z_A=1\;\mbox{and}\; Z_B =1\}$ has
positive probability (with an optimal choice of the quantum state,
about $.09$).
\item[(2)] If $\{ Z_A=1\}$ then $\{ Z_D=1\}$.
\item[(3)] If $\{ Z_B=1\}$ then $\{ Z_C=1\}$.
\item[(4)] The event $\{ Z_D=1\;\mbox{and}\; Z_C =1\}$ has
probability $0$.
\end{itemize}
Clearly, there exist no such random variables.
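That no such random variables exist can be checked by brute force: a sketch enumerating all deterministic $0/1$ assignments of $(Z_A, Z_B, Z_C, Z_D)$ consistent with conditions (2)--(4) shows that none of them has $Z_A = Z_B = 1$, so condition (1) cannot hold.

```python
from itertools import product

# Keep only the 0/1 assignments consistent with Hardy's conditions (2)-(4).
admissible = []
for za, zb, zc, zd in product((0, 1), repeat=4):
    if za == 1 and zd != 1:       # (2): Z_A = 1 forces Z_D = 1
        continue
    if zb == 1 and zc != 1:       # (3): Z_B = 1 forces Z_C = 1
        continue
    if zd == 1 and zc == 1:       # (4): {Z_D = 1 and Z_C = 1} has probability 0
        continue
    admissible.append((za, zb, zc, zd))

# No surviving assignment has Z_A = Z_B = 1, so the event in (1) would
# have to have probability 0 -- contradicting its positive probability.
print(any(za == 1 and zb == 1 for za, zb, _, _ in admissible))  # False
```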
The point we wish to emphasize here, however, is that although they
are correct and although their hypotheses may seem minimal, these
theorems are nonetheless far less relevant to the possibility of a
deterministic completion of quantum theory than one might imagine. In
the next subsection we will elaborate on how that can be so. We shall
explain why we believe such theorems have little physical significance
for the issues of determinism and hidden variables. We will
separately comment later in this section on Bell's related nonlocality
analysis \cite{Bel64}, which does have profound physical implications.
\subsection{Contextuality}\label{sec:context}
It is a simple fact that there can be no map $A\mapsto {Z}_A$, {}from
self-adjoint{} operators on \mbox{$\mathcal{H}$}{} (with $\mbox{dim}\,(\mbox{$\mathcal{H}$})\ge{}3$) to random
variables on a common probability space, that is good in the minimal
sense that the joint probability distributions for the random
variables agree with the corresponding quantum mechanical
distributions, whenever the latter ones are defined. But does not
Bohmian mechanics{} yield precisely such a map? After all, have we not emphasized
how Bohmian mechanics{} naturally associates with any experiment a random variable
$Z$ giving its result, in a manner that is in complete agreement with
the quantum mechanical predictions for the result of the experiment?
Given a quantum observable $A$, let $Z_A$ be then the result of a
measurement of $A$. What gives?
Before presenting what we believe to be the correct response, we
mention some possible responses that are off-target. It might be
objected that measurements of different observables will involve
different apparatuses and hence different probability spaces. However,
one can simultaneously embed all the relevant probability spaces into
a huge common probability space. It might also be objected that not
all self-adjoint{} operators can realistically be measured. But to arrive at
inconsistency one need consider, as mentioned in the last subsection,
only $4$ observables, each of which is a spin component and is thus
certainly measurable, via Stern-Gerlach experiments. Thus, in fact, no
enlargement of probability spaces need be considered to arrive at a
contradiction, since as we emphasized at the end of Section
\ref{seerv}, the random variables giving the results of Stern-Gerlach
experiments are functions of initial particle positions, so that for
joint measurements of pairs of spin components for 2-particles the
corresponding results are random variables on the common probability
space of initial configurations of the 2 particles, equipped with the
quantum equilibrium distribution determined by the initial wave function{}.
There must be a mistake. But where could it be? The mistake occurs, in
fact, so early that it is difficult to notice it. It occurs at square
one. The difficulty lies not so much in any conditions on the map
$A\mapsto Z_A$, but in the conclusion that Bohmian mechanics{} supplies such a map
at all.
What Bohmian mechanics{} naturally supplies is a map $\mbox{$\mathscr{E}$}\mapsto{}Z_{\mbox{$\mathscr{E}$}}$, {}from
experiments to random variables. When $Z_{\mbox{$\mathscr{E}$}}\mapsto{}A$, so that we
speak of \mbox{$\mathscr{E}$}{} as a measurement of $A$ ($\mbox{$\mathscr{E}$}\mapsto{}A$), this very
language suggests that insofar as the random variable is concerned all
that matters is that \mbox{$\mathscr{E}$}{} measures $A$, and the map
$\mbox{$\mathscr{E}$}\mapsto{}Z_{\mbox{$\mathscr{E}$}}$ becomes a map $A\mapsto{}Z_A$. After all, if \mbox{$\mathscr{E}$}{}
were a genuine measurement of $A$, revealing, that is, the preexisting
(i.e., prior to the experiment) value of the observable $A$, then $Z$
would have to agree with that value and hence would be an unambiguous
random variable depending only on $A$.
But this sort of argument makes sense only if we take the quantum talk
of operators as observables too seriously. We have emphasized in this
paper that operators do naturally arise in association with quantum
experiments. But there is little if anything in this association,
beyond the unfortunate language that is usually used to describe it,
that supports the notion that the operator $A$ associated with an
experiment \mbox{$\mathscr{E}$}{} is in any meaningful way genuinely measured by the
experiment. From the nature of the association itself, it is
difficult to imagine what this could possibly mean. And for those who
think they imagine some meaning in this talk, the impossibility
theorems show they are mistaken.
The bottom line is this: in Bohmian mechanics{} the random variables $Z_{\mbox{$\mathscr{E}$}}$ giving
the results of experiments \mbox{$\mathscr{E}$}{} depend, of course, on the experiment,
and there is no reason that this should not be the case when the
experiments under consideration happen to be associated with the same
operator. Thus with any self-adjoint{} operator $A$, Bohmian mechanics{} naturally may
associate many different random variables $Z_{\mbox{$\mathscr{E}$}}$, one for each
different experiment $\mbox{$\mathscr{E}$}\mapsto{}A$ associated with $A$. A crucial
point here is that the map $\mbox{$\mathscr{E}$}\mapsto{}A$ is many-to-one.\footnote{We
wish to remark that, quite aside {}from this many-to-oneness, the
random variables $Z_{\mbox{$\mathscr{E}$}}$ cannot generally be regarded as
corresponding to any sort of natural property of the ``measured''
system. $Z_{\mbox{$\mathscr{E}$}}$, in general a function of the initial configuration
of the system-apparatus composite, may fail to be a function of the
configuration of the system alone. And even when, as is often the
case, $Z_{\mbox{$\mathscr{E}$}}$ does depend only on the initial configuration of the
system, owing to chaotic dynamics this dependence could have an
extremely complex character.}
Suppose we define a map $A\mapsto{}Z_A$ by selecting, for each $A$,
one of the experiments, call it $\mbox{$\mathscr{E}$}_A$, with which $A$ is associated,
and define $Z_A$ to be $Z_{\mbox{$\mathscr{E}$}_A}$. Then the map so defined can't be
good, because of the impossibility theorems; moreover there is no
reason to have expected the map to be good. Suppose, for example, that
$[A,B]=0$. Should we expect that the joint distribution of $Z_A$ and
$Z_B$ will agree with the joint quantum mechanical distribution of $A$
and $B$? Only if the experiments $\mbox{$\mathscr{E}$}_A$ and $\mbox{$\mathscr{E}$}_B$ used to define
$Z_A$ and $Z_B$ both involved a common experiment that
``simultaneously measures $A$ and $B$,'' i.e., an experiment that is
associated with the commuting family $(A,B)$. If we consider now a
third operator $C$ such that $[A,C]=0$, but $[B,C]\neq 0$, then there
is no choice of experiment \mbox{$\mathscr{E}$}{} that would permit the definition of a
random variable $Z_A$ relevant both to a ``simultaneous measurement of
$A$ and $B$'' and a ``simultaneous measurement of $A$ and $C$'' since
no experiment is a ``simultaneous measurement of $A$, $B$, and $C$.''
In the situation just described we must consider at least two random
variables associated with $A$, $Z_{A,B}$ and $Z_{A,C}$, depending upon
whether we are considering an experiment ``measuring $A$ and $B$'' or
an experiment ``measuring $A$ and $C$.'' It should be clear that when
the random variables are assigned to experiments in this way, the
possibility of conflict with the predictions of orthodox quantum theory{} is eliminated.
It should also be clear, in view of what we have repeatedly stressed,
that quite aside {}from the impossibility theorems, this way of
associating random variables with experiments is precisely what
emerges in Bohmian mechanics.
The dependence of the result of a ``measurement of the observable
$A$'' upon the other observables, if any, that are ``measured
simultaneously together with $A$''---e.g., that $Z_{A,B}$ and
$Z_{A,C}$ may be different---is called \emph{contextuality}: the
result of an experiment depends not just on ``what observable the
experiment measures'' but on more detailed information that conveys
the ``context'' of the experiment. The essential idea, however, if we
avoid misleading language, is rather trivial: that the result of an
experiment depends on the experiment.
To underline this triviality we remark that for two experiments, $\mbox{$\mathscr{E}$}$
and $\mbox{$\mathscr{E}$}'$, that ``measure $A$ and only $A$'' and involve no
simultaneous ``measurement of another observable,'' the results
$Z_{\mbox{$\mathscr{E}$}}$ and $Z_{\mbox{$\mathscr{E}$}'}$ may disagree. For example in Section
\ref{secapm} we described experiments $\mbox{$\mathscr{E}$}$ and $\mbox{$\mathscr{E}$}'$ both of which
``measured the position operator'' but only one of which measured the
actual initial position of the relevant particle, so that for these
experiments in general $Z_{\mbox{$\mathscr{E}$}}\neq Z_{\mbox{$\mathscr{E}$}'}$.
One might feel, however, that in the example just described the
experiment that does not measure the actual position is somewhat
disreputable---even though it is in fact a ``measurement of the
position operator.'' We shall therefore give another example, due to
D. Albert~\cite{albert}, in which the experiments are as simple and
canonical as possible and are entirely on the same footing. Let
$\mbox{$\mathscr{E}$}_{\uparrow}$ and $\mbox{$\mathscr{E}$}_{\downarrow}$ be Stern-Gerlach measurements of
$A=\sigma_z$, with $\mbox{$\mathscr{E}$}_{\downarrow}$ differing {}from $\mbox{$\mathscr{E}$}_{\uparrow}$
only in that the polarity of the Stern-Gerlach magnet for
$\mbox{$\mathscr{E}$}_{\downarrow}$ is the reverse of that for $\mbox{$\mathscr{E}$}_{\uparrow}$. (In
particular, the geometry of the magnets for $\mbox{$\mathscr{E}$}_{\uparrow}$ and
$\mbox{$\mathscr{E}$}_{\downarrow}$ is the same.) If the initial wave function{}
$\psi_{\text{symm}}$ and the magnetic field $\pm B$ have sufficient
reflection symmetry with respect to a plane between the poles of the
Stern-Gerlach magnets, the particle whose spin component is being
``measured'' cannot cross this plane of symmetry, so that if the
particle is initially above, respectively below, the symmetry plane,
it will remain above, respectively below, that plane. But because
their magnets have opposite polarity, $\mbox{$\mathscr{E}$}_{\uparrow}$ and
$\mbox{$\mathscr{E}$}_{\downarrow}$ involve opposite calibrations: $F_{\uparrow}=
-F_{\downarrow}$. It follows that
$$
Z^{\psi_{\text{symm}}}_{\mbox{$\mathscr{E}$}_{\uparrow}}= -
Z^{\psi_{\text{symm}}}_{\mbox{$\mathscr{E}$}_{\downarrow}}
$$
and the two experiments completely disagree about the ``value of
$\sigma_z$'' in this case.
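The sign flip between the two experiments can be captured in a schematic toy model (a sketch only, not the actual Bohmian dynamics; the function and variable names are ours): the outcome is fixed by which side of the symmetry plane the particle starts on, and the two experiments apply opposite calibrations $F_{\uparrow}=-F_{\downarrow}$.

```python
# Toy illustration (not the actual dynamics) of the symmetric
# Stern-Gerlach example: by the reflection symmetry the particle never
# crosses the symmetry plane, so its final side equals its initial side;
# the "value of sigma_z" is that side times the calibration sign.

def result(initial_side, calibration):
    """Assigned 'value of sigma_z': the side of the symmetry plane the
    particle ends on (= the side it starts on), times the calibration."""
    return calibration * initial_side

F_up, F_down = +1, -1             # opposite calibrations
for side in (+1, -1):             # particle initially above / below plane
    Z_up = result(side, F_up)
    Z_down = result(side, F_down)
    assert Z_up == -Z_down        # the two experiments completely disagree
```

The point of the sketch is only that identical initial conditions, fed through two equally legitimate "measurements of $\sigma_z$," yield opposite results.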
The essential point illustrated by the previous example is that
instead of having in Bohmian mechanics{} a natural association
$\sigma_z\mapsto{}Z_{\sigma_z}$, we have a rather different pattern of
relationships, given in the example by
$$
\genfrac{}{}{0pt}{}{\mbox{$\mathscr{E}$}_{\uparrow}
\to{Z_{\mbox{$\mathscr{E}$}_{\uparrow}}}}{\mbox{$\mathscr{E}$}_{\downarrow} \to
{Z_{\mbox{$\mathscr{E}$}_{\downarrow}}}}\,^{\searrow}_{\nearrow}\, \sigma_z,$$
\subsection{Against ``Contextuality''}\label{sec:agcontext}
The impossibility theorems require the assumption of noncontextuality,
that the random variable $Z$ giving the result of a ``measurement of
quantum observable $A$'' should depend on $A$ alone, further
experimental details being irrelevant. How big a deal is
contextuality, the violation of this assumption? Here are two ways of
describing the situation:
\begin{enumerate}
\item In quantum mechanics (or quantum mechanics supplemented with
hidden variables), observables and properties have a novel, highly
nonclassical aspect: they (or the result of measuring them) depend
upon which other compatible properties, if any, are measured
together with them.
In this spirit, Bohm and Hiley~\cite{bohi} write that (page 109)
\begin{quotation}\setlength{\baselineskip}{12pt}\noindent
the quantum properties imply \dots that measured properties are not
intrinsic but are inseparably related to the apparatus. It follows
that the customary language that attributes the results of
measurements \dots to the observed system alone can cause confusion,
unless it is understood that these properties are actually dependent
on the total relevant context.
\end{quotation}
They later add that (page 122)
\begin{quotation}\setlength{\baselineskip}{12pt}\noindent
The context dependence of results of measurements is a further
indication of how our interpretation does not imply a simple return
to the basic principles of classical physics. It also embodies, in a
certain sense, Bohr's notion of the indivisibility of the combined
system of observing apparatus and observed object.
\end{quotation}
\item The result of an experiment depends upon the experiment. Or, as
expressed by Bell \cite{Bel87} (pg.166),
\begin{quotation}\setlength{\baselineskip}{12pt}\noindent
A final moral concerns terminology. Why did such serious people take
so seriously axioms which now seem so arbitrary? I suspect that they
were misled by the pernicious misuse of the word `measurement' in
contemporary theory. This word very strongly suggests the
ascertaining of some preexisting property of some thing, any
instrument involved playing a purely passive role. Quantum
experiments are just not like that, as we learned especially {}from
Bohr. The results have to be regarded as the joint product of
`system' and `apparatus,' the complete experimental set-up. But the
misuse of the word `measurement' makes it easy to forget this and
then to expect that the `results of measurements' should obey some
simple logic in which the apparatus is not mentioned. The resulting
difficulties soon show that any such logic is not ordinary logic. It
is my impression that the whole vast subject of `Quantum Logic' has
arisen in this way {}from the misuse of a word. I am convinced that
the word `measurement' has now been so abused that the field would
be significantly advanced by banning its use altogether, in favour
for example of the word `experiment.'
\end{quotation}
\end{enumerate}
With one caveat, we entirely agree with Bell's observation. The caveat
is this: We do not believe that the difference between quantum
mechanics and classical mechanics is quite as crucial for Bell's moral
as his language suggests it is. For any experiment, quantum or
classical, it would be a mistake to regard any instrument involved as
playing a purely passive role, unless the experiment is a genuine
measurement of a property of a system, in which case the result is
determined by the initial conditions of the system alone. However, a
relevant difference between classical and quantum theory remains:
Classically it is usually taken for granted that it is in principle
possible to measure any observable without seriously affecting the
observed system, which is clearly false in quantum mechanics (or
Bohmian mechanics{}).\footnote{The assumption could (and probably should) also be
questioned classically.}
Mermin has raised a similar question~\cite{merm93} (pg. 811):
\begin{quotation}\setlength{\baselineskip}{12pt}\noindent
Is noncontextuality, as Bell seemed to suggest, as silly a condition
as von Neumann's~\dots?
\end{quotation}
To this he answers:
\begin{quotation}\setlength{\baselineskip}{12pt}\noindent
I would not characterize the assumption of noncontextuality as a
silly constraint on a hidden-variables theory. It is surely an
important fact that the impossibility of embedding quantum mechanics
in a noncontextual hidden-variables theory rests not only on Bohr's
doctrine of the inseparability of the objects and the measuring
instruments, but also on a straightforward contradiction,
independent of one's philosophic point of view, between some
quantitative consequences of noncontextuality and the quantitative
predictions of quantum mechanics.
\end{quotation}
This is a somewhat strange answer. First of all, it applies to von
Neumann's assumption (linearity), which Mermin seems to agree is
silly, as well as to the assumption of noncontextuality. And the
statement has a rather question-begging flavor, since the importance
of the fact to which Mermin refers would seem to depend on the
nonsilliness of the assumption which the fact concerns.
Be that as it may, Mermin immediately supplies his real argument for
the nonsilliness of noncontextuality. Concerning two experiments for
``measuring observable $A$,'' he writes that
\begin{quotation}\setlength{\baselineskip}{12pt}\noindent
it is \dots\ an elementary theorem of quantum mechanics that the
joint distribution \dots\ for the first experiment yields precisely
the same marginal distribution (for $A$) as does the joint
distribution \dots\ for the second, in spite of the different
experimental arrangements. \dots\ The obvious way to account for
this, particularly when entertaining the possibility of a
hidden-variables theory, is to propose that both experiments reveal
a set of values for $A$ in the individual systems that is the same,
regardless of which experiment we choose to extract them {}from.
\dots\ A {\it contextual} hidden-variables account of this fact
would be as mysteriously silent as the quantum theory on the
question of why nature should conspire to arrange for the marginal
distributions to be the same for the two different experimental
arrangements.
\end{quotation}
A bit later, Mermin refers to the ``striking insensitivity of the
distribution to changes in the experimental arrangement.''
For Mermin there is a mystery, something that demands an explanation.
It seems to us, however, that the mystery here is very much in the eye
of the beholder. It is first of all somewhat odd that Mermin speaks of
the mysterious silence of quantum theory concerning a question whose
answer, in fact, emerges as an ``elementary theorem of quantum
mechanics.'' What better way is there to answer questions about nature
than to appeal to our best physical theories?
More importantly, the ``two different experimental arrangements,'' say
$\mbox{$\mathscr{E}$}_1$ and $\mbox{$\mathscr{E}$}_2$, considered by Mermin are not merely any two
randomly chosen experimental arrangements. They obviously must have
something in common. This is that they are both associated with the
same self-adjoint{} operator $A$ in the manner we have described: $\mbox{$\mathscr{E}$}_1\mapsto
A $ and $\mbox{$\mathscr{E}$}_2\mapsto A$. It is quite standard to say in this situation
that both $\mbox{$\mathscr{E}$}_1$ and $\mbox{$\mathscr{E}$}_2$ measure the observable $A$, but both for
Bohmian mechanics{} and for orthodox quantum theory{} the very meaning of the association with the
operator $A$ is merely that the distribution of the result of the
experiment is given by the spectral measures for $A$. Thus there is
no mystery in the fact that $\mbox{$\mathscr{E}$}_1$ and $\mbox{$\mathscr{E}$}_2$ have results governed by
the same distribution, since, when all is said and done, it is on this
basis, and this basis alone, that we are comparing them.
(One might wonder how it could be possible that there are two
different experiments that are related in this way. This is a somewhat
technical question, rather different {}from Mermin's, and it is one
that Bohmian mechanics{} and quantum mechanics readily answer, as we have explained
in this paper. In this regard it would probably be good to reflect
further on the simplest example of such experiments, the Stern-Gerlach
experiments $\mbox{$\mathscr{E}$}_{\uparrow}$ and $\mbox{$\mathscr{E}$}_{\downarrow}$ discussed in the
previous subsection.)
It is also difficult to see how Mermin's proposed resolution of the
mystery, ``that both experiments reveal a set of values for $A$ \dots\
that is the same, regardless of which experiment we choose to extract
them {}from,'' could do much good. He is faced with a certain pattern
of results in two experiments that would be explained if the
experiments did in fact genuinely measure the same thing. The
experiments, however, as far as any detailed quantum mechanical
analysis of them is concerned, don't appear to be genuine measurements
of anything at all. He then suggests that the mystery would be
resolved if, indeed, the experiments did measure the same thing, the
analysis to the contrary notwithstanding. But this proposal merely
replaces the original mystery with a bigger one, namely, of how the
experiments could in fact be understood as measuring the same thing,
or anything at all for that matter. It is like explaining the mystery
of a talking cat by saying that the cat is in fact a human being,
appearances to the contrary notwithstanding.
A final complaint about contextuality: the terminology is misleading.
It fails to convey with sufficient force the rather definitive
character of what it entails: {\it ``Properties'' that are merely
contextual are not properties at all; they do not exist, and their
failure to do so is in the strongest sense possible!}
\subsection{Nonlocality, Contextuality and Hidden Variables}
There is, however, a situation where contextuality is physically
relevant. Consider the EPRB experiment, outlined at the end of Section
\ref{secMCFO}. In this case the dependence of the result of a
measurement of the spin component $\boldsymbol{\sigma}_{1}\cdot
\mathbf{a}$ of a particle upon which spin component of a distant
particle is measured together with it---the difference between
$Z_{\boldsymbol{\sigma}_{1}\cdot \mathbf{a},\;
\boldsymbol{\sigma}_{2}\cdot \mathbf{b}}$ and
$Z_{\boldsymbol{\sigma}_{1}\cdot \mathbf{a},\;
\boldsymbol{\sigma}_{2}\cdot \mathbf{c}}$ (using the notation
described in the seventh paragraph of Section \ref{sec:context})---is
an expression of {\em nonlocality}, of, in Einstein's words, a ``spooky
action at a distance.'' More generally, whenever the relevant context is
distant, contextuality implies nonlocality.
Nonlocality is an essential feature of Bohmian mechanics: the
velocity, as expressed in the guiding equation (\ref{eq:velo}), of any
one of the particles of a many-particle system will typically depend
upon the positions of the other, possibly distant, particles whenever
the wave function of the system is entangled, i.e., not a product of
single-particle wave functions. In particular, this is true for the
EPRB experiment under examination. Consider the extension of the
single particle Hamiltonian (\ref{sgh}) to the two-particle case,
namely $$
H = -\frac{\hbar^{2}}{2m_{1}} \boldsymbol{\nabla}_{1}^{2}
-\frac{\hbar^{2}}{2m_{2}} \boldsymbol{\nabla}_{2}^{2}-
\mu_{1}\boldsymbol{\sigma}_1 \cdot \mathbf{B}(\mathbf{x}_{1})
-\mu_{2}\boldsymbol{\sigma}_2 \cdot \mathbf{B}(\mathbf{x}_{2}) . $$
Then for the initial singlet state, and spin measurements as described in
Sections \ref{secSGE} and \ref{noXexp}, it easily follows {}from the
laws of motion of Bohmian mechanics that
$$Z_{\boldsymbol{\sigma}_{1}\cdot \mathbf{a},\;
\boldsymbol{\sigma}_{2}\cdot \mathbf{b}} \neq
Z_{\boldsymbol{\sigma}_{1}\cdot \mathbf{a},\;
\boldsymbol{\sigma}_{2}\cdot \mathbf{c}}\;.$$
This was observed long ago by Bell \cite{Bel66}. In fact, Bell's
examination of Bohmian mechanics led him to his celebrated nonlocality
analysis. In the course of his investigation of Bohmian mechanics he
observed that (\cite{Bel87}, p. 11)
\begin{quotation}\setlength{\baselineskip}{12pt}\noindent
in this theory an explicit causal mechanism exists whereby the
disposition of one piece of apparatus affects the results obtained
with a distant piece.
Bohm of course was well aware of these features of his scheme, and
has given them much attention. However, it must be stressed that,
to the present writer's knowledge, there is no {\em proof} that {\em
any} hidden variable account of quantum mechanics {\em must} have
this extraordinary character. It would therefore be interesting,
perhaps, to pursue some further ``impossibility proofs," replacing
the arbitrary axioms objected to above by some condition of
locality, or of separability of distant systems.
\end{quotation}
\noindent In a footnote, Bell added that ``Since the completion of
this paper such a proof has been found.'' This proof was published in
his 1964 paper \cite{Bel64}, ``On the Einstein-Podolsky-Rosen Paradox,''
in which he derives Bell's inequality, the basis of his conclusion of
quantum nonlocality.
We find it worthwhile to reproduce here the analysis of Bell, deriving
a simple inequality equivalent to Bell's, in order to highlight the
conceptual significance of Bell's analysis and, at the same time, its
mathematical triviality. The analysis involves two parts. The first
part, the Einstein-Podolsky-Rosen argument applied to the EPRB
experiment, amounts to the observation that for the singlet state the
assumption of locality implies the existence of noncontextual hidden
variables. More precisely, it implies, for the singlet state, the
existence of random variables $ Z^{i}_{\boldsymbol{\alpha}}=
Z_{\boldsymbol{\alpha}\cdot \boldsymbol{\sigma}_i}$, $i=1, 2$,
corresponding to all possible spin components of the two particles,
that obey the agreement condition described in Section
\ref{sec:RVOIT}. In particular, focusing on components in only 3
directions $\mathbf{a}$, $\mathbf{b}$ and $\mathbf{c}$ for each
particle, locality implies the existence of 6 random variables
$$
Z^{i}_{\boldsymbol{\alpha}}\qquad i=1,2\quad {\boldsymbol{\alpha}}=
\mathbf{a},\; \mathbf{b},\; \mathbf{c}
$$
such that
\begin{eqnarray}
Z^{i}_{\boldsymbol{\alpha}} &=& \pm 1 \label{eq:pc1}\\
Z^{1}_{{\boldsymbol{\alpha}}}& =&
-Z^{2}_{{\boldsymbol{\alpha}}}\label{eq:pc2}
\end{eqnarray}
and, more generally,
\begin{equation}
\text{Prob}(Z^{1}_{\boldsymbol{\alpha}}\neq
Z^{2}_{\boldsymbol{\beta}}) =
q_{ {\boldsymbol{\alpha}} {\boldsymbol{\beta}} }, \label{eq:pc3}
\end{equation}
the corresponding quantum mechanical probabilities. This conclusion
amounts to the idea that measurements of the spin components reveal
preexisting values (the $Z^{i}_{\boldsymbol{\alpha}}$), which,
assuming locality, is implied by the perfect quantum mechanical
anticorrelations \cite{Bel64}:
\begin{quotation}\setlength{\baselineskip}{12pt}\noindent
Now we make the hypothesis, and it seems one at least worth
considering, that if the two measurements are made at places remote
{}from one another the orientation of one magnet does not influence
the result obtained with the other. Since we can predict in advance
the result of measuring any chosen component of
${\boldsymbol{\sigma}}_2$, by previously measuring the same
component of ${\boldsymbol{\sigma}}_1$, it follows that the result
of any such measurement must actually be predetermined.
\end{quotation}
People very often fail to appreciate that the existence of such
variables, given locality, is not an assumption but a consequence of
Bell's analysis. Bell repeatedly stressed this point (by determinism
Bell here means the existence of hidden variables):
\begin{quotation}\setlength{\baselineskip}{12pt}
It is important to note that to the limited degree to which {\em
determinism} plays a role in the EPR argument, it is not assumed
but {\em inferred}. What is held sacred is the principle of `local
causality' -- or `no action at a distance'. \ldots
It is remarkably difficult to get this point across, that
determinism is not a {\em presupposition} of the analysis.
(\cite{Bel87}, p. 143)
Despite my insistence that the determinism was inferred rather
than assumed, you might still suspect somehow that it is a
preoccupation with determinism that creates the problem. Note well
then that the following argument makes no mention whatever of
determinism. \ldots\ Finally you might suspect that the very
notion of particle, and particle orbit \ldots\ has somehow led us
astray. \ldots\ So the following argument will not mention
particles, nor indeed fields, nor any other particular picture of
what goes on at the microscopic level. Nor will it involve any
use of the words `quantum mechanical system', which can have an
unfortunate effect on the discussion. The difficulty is not
created by any such picture or any such terminology. It is
created by the predictions about the correlations in the visible
outputs of certain conceivable experimental set-ups.
(\cite{Bel87}, p. 150)
\end{quotation}
The second part of the analysis, which unfolds the ``difficulty
\ldots\ created by the \ldots\ correlations,'' involves only very
elementary mathematics. Clearly,
$$
\text{Prob}\left( \{Z^{1}_{\mathbf{a}} = Z^{1}_{\mathbf{b}}\} \cup
\{Z^{1}_{\mathbf{b}} = Z^{1}_{\mathbf{c}}\} \cup
\{Z^{1}_{\mathbf{c}} = Z^{1}_{\mathbf{a}}\} \right) =1\;,$$
since at
least two of the three (2-valued) variables
$Z^{1}_{\boldsymbol{\alpha}}$ must have the same value. Hence, by
elementary probability theory,
$$
\text{Prob} \left( Z^{1}_{\mathbf{a}} = Z^{1}_{\mathbf{b}}\right) +
\text{Prob} \left( Z^{1}_{\mathbf{b}} = Z^{1}_{\mathbf{c}}\right) +
\text{Prob} \left( Z^{1}_{\mathbf{c}} = Z^{1}_{\mathbf{a}} \right) \ge
1, $$
and using the perfect anticorrelations (\ref{eq:pc2}) we have
that
\begin{equation}
\text{Prob}
\left( Z^{1}_{\mathbf{a}} = -Z^{2}_{\mathbf{b}}\right)
+ \text{Prob}
\left( Z^{1}_{\mathbf{b}} = -Z^{2}_{\mathbf{c}}\right)
+ \text{Prob}
\left( Z^{1}_{\mathbf{c}} =
-Z^{2}_{\mathbf{a}}
\right) \ge 1, \label{eq:bellineq}
\end{equation}
which is equivalent to Bell's inequality and in conflict with
(\ref{eq:pc3}). For example, when the angles between $\mathbf{a}$,
$\mathbf{b}$ and $\mathbf{c}$ are $120^{\circ}$, the 3 relevant quantum
correlations $q_{ {\boldsymbol{\alpha}} {\boldsymbol{\beta}} }$ are
all $1/4$.
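The two steps of this elementary argument can be checked mechanically. The following sketch (our own illustration; the quantum value $q=1/4$ at $120^{\circ}$ is taken {}from the text) enumerates every noncontextual $\pm 1$ assignment and confirms that each satisfies the inequality, while the sum of the quantum probabilities does not.

```python
import itertools

# A noncontextual hidden-variables assignment gives each Z^1_alpha
# (alpha = a, b, c) a definite value +1 or -1; by the perfect
# anticorrelation Z^2_alpha = -Z^1_alpha, the events
# Z^1_a = -Z^2_b, Z^1_b = -Z^2_c, Z^1_c = -Z^2_a reduce to
# equalities among the particle-1 values alone.

def agreement_probs(assignment):
    """Indicators of the three agreement events for one assignment
    (Za, Zb, Zc) of particle 1."""
    Za, Zb, Zc = assignment
    return [Za == Zb, Zb == Zc, Zc == Za]

# Part 2 of the argument: three +/-1 values cannot be pairwise
# distinct, so at least one indicator holds for every assignment,
# hence the sum of the three probabilities is >= 1 for any
# distribution over assignments.
for assignment in itertools.product([+1, -1], repeat=3):
    assert sum(agreement_probs(assignment)) >= 1

# Quantum prediction for the singlet state with a, b, c at 120 degrees:
# each agreement probability is 1/4, so the sum is 3/4 < 1.
quantum_sum = 3 * (1 / 4)
assert quantum_sum < 1
print(quantum_sum)  # 0.75, violating the inequality
```

The enumeration over eight assignments is the whole content of the "elementary mathematics" of the second part; the conflict lies entirely in the comparison of the bound $1$ with the quantum value $3/4$.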
To summarize the argument, let H be the hypothesis of the existence of
the noncontextual hidden variables we have described above. Then the
logic of the argument is:
\begin{eqnarray}
\text{Part 1:}&\qquad \mbox{quantum mechanics} + \mbox{locality}
&\Rightarrow\quad
\mbox{H} \label{qmlic}\\
\text{Part 2:}&\qquad \mbox{quantum mechanics} &\Rightarrow\quad
\mbox{not H} \label{qmic}\\
\text{Conclusion:}&\qquad \mbox{quantum mechanics} &\Rightarrow\quad
\mbox{not locality} \label{qmiccon}
\end{eqnarray}
To fully grasp the argument it is important to appreciate that the
identity of H---the existence of the noncontextual hidden
variables---is of little substantive importance. What is important is
not so much the identity of H as the fact that H is incompatible with
the predictions of quantum theory. The identity of H is, however, of
great historical significance: It is responsible for the misconception
that Bell proved that hidden variables are impossible, a belief until
recently almost universally shared by physicists.
Such a misconception has not been the only reaction to Bell's
analysis. Roughly speaking, we may group the different reactions into
three main categories, summarized by the following statements:
\begin{enumerate}
\item Hidden variables are impossible.
\item Hidden variables are possible, but they must be contextual.
\item Hidden variables are possible, but they must be nonlocal.
\end{enumerate}
Statement 1 is plainly wrong. Statement 2 is correct but not terribly
significant. Statement 3 is correct, significant, but nonetheless
rather misleading. It follows {}from (\ref{qmlic}) and (\ref{qmic})
that {\em any} account of quantum phenomena must be nonlocal, not just
any hidden variables account. Bell's argument shows that nonlocality
is implied by the predictions of standard quantum theory itself. Thus
if nature is governed by these predictions, then {\em nature is
nonlocal}. (That nature is so governed, even in the crucial
EPR-correlation experiments, has by now been established by a great
many experiments, the most conclusive of which is perhaps that of
Aspect \cite{Aspect1982}.)
\section{Against Naive Realism About Operators}
Traditional naive realism is the view that the world {\it is\/} pretty
much the way it {\it seems,\/} populated by objects which force
themselves upon our attention as, and which in fact are, the locus of
sensual qualities. A naive realist regards these ``secondary
qualities,'' for example color, as objective, as out there in the
world, much as perceived. A decisive difficulty with this view is
that once we understand, say, how our perception of what we call color
arises, in terms of the interaction of light with matter, and the
processing of the light by the eye, and so on, we realize that the
presence out there of color per se would play no role whatsoever in
these processes, that is, in our understanding what is relevant to our
perception of ``color.'' At the same time, we may also come to realize
that there is, in the description of an object provided by the
scientific world-view, as represented say by classical physics,
nothing which is genuinely ``color-like.''
A basic problem with quantum theory, more fundamental than the
measurement problem and all the rest, is a naive realism about
operators, a fallacy which we believe is far more serious than
traditional naive realism: With the latter we are deluded partly by
language but in the main by our senses, in a manner which can scarcely
be avoided without a good deal of scientific or philosophical
sophistication; with the former we are seduced by language alone, to
accept a view which can scarcely be taken seriously without a large
measure of (what often passes for) sophistication.
Not many physicists---or for that matter philosophers---have focused
on the issue of naive realism about operators, but Schr\"odinger and
Bell have expressed similar or related concerns:
\begin{quotation}\setlength{\baselineskip}{12pt}\noindent \dots the
new theory [quantum theory] \dots considers the [classical] model
suitable for guiding us as to just which measurements can in
principle be made on the relevant natural object. \dots Would it
not be pre-established harmony of a peculiar sort if the
classical-epoch researchers, those who, as we hear today, had no
idea of what {\it measuring\/} truly is, had unwittingly gone on to
give us as legacy a guidance scheme revealing just what is
fundamentally measurable for instance about a hydrogen
atom!?~\cite{Sch35}
\end{quotation}
\begin{quotation}\setlength{\baselineskip}{12pt}\noindent
Here are some words which, however legitimate and necessary in
application, have no place in a {\it formulation\/} with any
pretension to physical precision: {\it system; apparatus;
environment; microscopic, macroscopic; reversible, irreversible;
observable; information; measurement.\/}
\dots The notions of ``microscopic'' and ``macroscopic'' defy
precise definition. \dots Einstein said that it is theory which
decides what is ``observable''. I think he was right. \dots
``observation'' is a complicated and theory-laden business. Then
that notion should not appear in the {\it formulation\/} of
fundamental theory. \dots
On this list of bad words {}from good books, the worst of all is
``measurement''. It must have a section to itself.~\cite{Bel90}
\end{quotation}
We agree almost entirely with Bell here. We insist, however, that
``observable'' is just as bad as ``measurement,'' maybe even a little
worse. Be that as it may, after listing Dirac's measurement
postulates Bell continues:
\begin{quotation}\setlength{\baselineskip}{12pt}\noindent
It would seem that the theory is exclusively concerned about
``results of measurement'', and has nothing to say about anything
else. What exactly qualifies some physical systems to play the role
of ``measurer''? Was the wave function of the world waiting to jump
for thousands of millions of years until a single-celled living
creature appeared? Or did it have to wait a little longer, for some
better qualified system \dots with a Ph.D.? If the theory is to
apply to anything but highly idealized laboratory operations, are we
not obliged to admit that more or less ``measurement-like''
processes are going on more or less all the time, more or less
everywhere. Do we not have jumping then all the time?
The first charge against ``measurement'', in the fundamental axioms
of quantum mechanics, is that it anchors the shifty split of the
world into ``system'' and ``apparatus''. A second charge is that
the word comes loaded with meaning {}from everyday life, meaning
which is entirely inappropriate in the quantum context. When it is
said that something is ``measured'' it is difficult not to think of
the result as referring to some {\it preexisting property\/} of the
object in question. This is to disregard Bohr's insistence that in
quantum phenomena the apparatus as well as the system is essentially
involved. If it were not so, how could we understand, for example,
that ``measurement'' of a component of ``angular momentum'' \dots
{\it in an arbitrarily chosen direction\/} \dots yields one of a
discrete set of values? When one forgets the role of the apparatus,
as the word ``measurement'' makes all too likely, one despairs of
ordinary logic \dots hence ``quantum logic''. When one remembers
the role of the apparatus, ordinary logic is just fine.
In other contexts, physicists have been able to take words {}from
ordinary language and use them as technical terms with no great harm
done. Take for example the ``strangeness'', ``charm'', and
``beauty'' of elementary particle physics. No one is taken in by
this ``baby talk''. \dots Would that it were so with
``measurement''. But in fact the word has had such a damaging
effect on the discussion, that I think it should now be banned
altogether in quantum mechanics. ({\sl Ibid.\/})
\end{quotation}
While Bell focuses directly here on the misuse of the word
``measurement'' rather than on that of ``observable,'' it is worth
noting that the abuse of ``measurement'' is in a sense inseparable
{}from that of ``observable,'' i.e., {}from naive realism about
operators. After all, one would not be very likely to speak of
measurement unless one thought that something, some ``observable''
that is, was somehow there to be measured.
Operationalism, so often used without a full appreciation of its
consequences, may lead many physicists to beliefs which are the
opposite of what one might expect. Namely, by believing somehow that
a physical property {\it is} and {\it must be} defined by an
operational definition, many physicists come to regard properties such
as spin and polarization, which can easily be operationally defined,
as intrinsic properties of the system itself, the electron or photon,
despite all the difficulties that this entails. If operational
definitions were banished, and ``real definitions'' were required,
there would be far less reason to regard these ``properties'' as
intrinsic, since they are not defined in any sort of intrinsic way; in
short, we have no idea what they really mean, and there is no reason
to think they mean anything beyond the behavior exhibited by the
system in interaction with an apparatus.
There are two primary sources of confusion, mystery and incoherence in
the foundations of quantum mechanics: the insistence on the
completeness of the description provided by the wave function, despite the
dramatic difficulties entailed by this dogma, as illustrated most
famously by the measurement problem; and naive realism about operators. While the second
seems to point in the opposite direction {}from the first, the dogma
of completeness is in fact nourished by naive realism about operators. This is because
naive realism about operators tends to produce the belief that a more complete description
is impossible because such a description should involve preexisting
values of the quantum observables, values that are revealed by
measurement. And this is impossible. But without naive realism about operators---without
being misled by all the quantum talk of the measurement of
observables---most of what is shown to be impossible by the
impossibility theorems would never have been expected to begin with.
\addcontentsline{toc}{section}{Acknowledgments}
\section*{Acknowledgments}
An early version of this paper had a fourth author: Martin Daumer.
Martin left our group a long time ago and has not participated since
in the very substantial changes in both form and content that the
paper has undergone. His early contributions are very much
appreciated. We thank Roderich Tumulka for a careful reading of this
manuscript and helpful suggestions. This work was supported in part by
NSF Grant No. DMS--9504556, by the DFG, and by the INFN. We are
grateful for the hospitality that we have enjoyed, on more than one
occasion, at the Mathematisches Institut of
Ludwig-Maximilians-Universit\"at M\"unchen, at the Dipartimento di
Fisica of Universit\`a degli Studi di Genova, and at the Mathematics
Department of Rutgers University.
\addcontentsline{toc}{section}{References}
\begin{thebibliography}{10}
\bibitem{AAV93} Y.~Aharonov, J.~Anandan, and L.~Vaidman. \newblock
Meaning of the {W}ave {F}unction. \newblock {\em Physical Review
A}, {\bf 47}: 4616--4626, 1993.
\bibitem{albert} D.~Z.~Albert. \newblock {\it Quantum Mechanics and
Experience\/}. \newblock Cambridge, MA, Harvard University Press,
1992.
\bibitem{Emch} S.~T.~Ali and G.~G.~Emch. \newblock Fuzzy Observables
in Quantum Mechanics. \newblock {\em Journal of Mathematical
Physics}, {\bf 15}: 176--182, 1974.
\bibitem{Aspect1982} A.~Aspect, J.~Dalibard, and G.~Roger. \newblock
Experimental Test of Bell's Inequalities using Time-Varying
Analyzers. \newblock {\em Phys. Rev. Lett.} {\bf 49}: 1804--1807,
1982.
\bibitem{Bel64} J.~S. Bell. \newblock On the
{E}instein-{P}odolsky-{R}osen {P}aradox. \newblock {\em Physics},
{\bf 1}: 195--200, 1964. \newblock Reprinted in \cite{WZ83}, and in
\cite{Bel87}.
\bibitem{Bel66} J.~S. Bell. \newblock On the {P}roblem of {H}idden
{V}ariables in {Q}uantum {M}echanics. \newblock {\em Reviews of
Modern Physics}, {\bf 38}: 447--452, 1966. \newblock Reprinted in
\cite{WZ83} and in \cite{Bel87}.
\bibitem{Bel80} J.~S. Bell. \newblock De {B}roglie-{B}ohm,
{D}elayed-{C}hoice {D}ouble-{S}lit {E}xperiment, and {D}ensity
{M}atrix. \newblock {\em International Journal of Quantum
Chemistry: A Symposium}, {\bf 14}: 155--159, 1980. \newblock
Reprinted in \cite{Bel87}.
\bibitem{Bel81} J.~S. Bell. \newblock Quantum Mechanics for
Cosmologists. In {\it Quantum Gravity 2}, C. Isham, R. Penrose, and
D. Sciama (eds.), Oxford University Press, New York, pp. 611--637, 1981.
Reprinted in \cite{Bel87}.
\bibitem{Bel82} J.~S. Bell. \newblock On the {I}mpossible {P}ilot
{W}ave. \newblock {\em Foundations of Physics}, {\bf 12}: 989--999,
1982. \newblock Reprinted in \cite{Bel87}.
\bibitem{Bel87} J.~S. Bell. \newblock {\em Speakable and unspeakable
in quantum mechanics}. \newblock Cambridge University Press,
Cambridge, 1987.
\bibitem{Bel90} J.~S. Bell. \newblock Against ``Measurement''.
\newblock {\em Physics World}, {\bf 3}: 33--40, 1990. \newblock
Also in \cite{Mil90}.
\bibitem{Birula} I.~ Bialynicki-Birula. \newblock On the Wave
Function of the Photon. \newblock {\em Acta Physica Polonica} {\bf
86}: 97--116, 1994.
\bibitem{Boe79} A.~B{\"o}hm. \newblock {\em Quantum {M}echanics}.
\newblock Springer-Verlag, New York-Heidelberg-Berlin, 1979.
\bibitem{Boh51} D.~Bohm. \newblock {\em Quantum {T}heory}. \newblock
Prentice-Hall, Englewood Cliffs, N.J., 1951.
\bibitem{Boh52} D.~Bohm. \newblock A {S}uggested {I}nterpretation of
the {Q}uantum {T}heory in {T}erms of ``{H}idden'' {V}ariables: Parts
{I} and {II}. \newblock {\em Physical Review}, {\bf 85}: 166--193,
1952. \newblock Reprinted in \cite{WZ83}.
\bibitem{bohi} D.~Bohm and B.~J.~Hiley. \newblock {\it The Undivided
Universe: An Ontological Interpretation of Quantum Theory\/}.
\newblock London, Routledge \& Kegan Paul, 1993.
\bibitem{Boh58} N.~Bohr. \newblock {\em Atomic {P}hysics and {H}uman
{K}nowledge}. \newblock Wiley, New York, 1958.
\bibitem{Brag} V.~B.~Braginsky, Y.~I.~Vorontsov, and K.~S.~Thorne.
\newblock Quantum Nondemolition Measurements. \newblock {\em
Science}, {\bf 209}: 547--557, 1980. \newblock Reprinted in
\cite{WZ83}.
\bibitem{dau96} M.~Daumer, D.~D\"urr, S.~Goldstein and N.~Zangh\`\i.
\newblock{On the Flux-Across-Surfaces Theorem}, \newblock{\em Lett.
Math. Phys.} {\bf 38}: 103--116, 1996.
\bibitem{dau97} M.~Daumer, D.~D\"urr, S.~Goldstein and N.~Zangh\`\i,
\newblock{On the Quantum Probability Flux Through Surfaces},
\newblock{\em J. Stat. Phys.} {\bf 88}: 967--977, 1997.
\bibitem{Dav76} E.~B.~Davies. \newblock {\em Quantum {T}heory of
{O}pen {S}ystems}. \newblock Academic Press, London-New York-San
Francisco, 1976.
\bibitem{DHS93} C.~Dewdney, L.~Hardy, and E.~J. Squires. \newblock
How {L}ate {M}easurements of {Q}uantum {T}rajectories {C}an {F}ool a
{D}etector. \newblock {\em Phys. Lett. A} {\bf 184}: 6--11, 1993.
\bibitem{Dir30} P.~A.~M. Dirac. \newblock {\em The {P}rinciples of
{Q}uantum {M}echanics}. \newblock Oxford University Press,
Oxford, 1930.
\bibitem{DFGZ93f} D.~D{\"u}rr, W.~Fusseder, S.~Goldstein, and
N.~Zangh\`{\i}. \newblock Comment on: {S}urrealistic {B}ohm
{T}rajectories. \newblock {\em Z.f.Naturforsch.}, {\bf 48a}:
1261--1262, 1993.
\bibitem{DGZ92a} D.~D{\"u}rr, S.~Goldstein, and N.~Zangh\`{\i}.
\newblock Quantum {E}quilibrium and the {O}rigin of {A}bsolute
{U}ncertainty. \newblock {\em Journal of Statistical Physics}, {\bf
67}: 843--907, 1992.
\bibitem{det3} D.~D\"urr, K.~M\"unch-Berndl, and S.~Teufel.
\newblock{The Flux Across Surfaces Theorem for Short Range
Potentials without Energy Cutoffs}. \newblock{\em J. Math. Phys.}
{\bf 40}: 1901--1922, 1999.
\bibitem{det2} D.~D\"urr, S.~Goldstein, S.~Teufel, and N.~Zangh\`\i.
\newblock{Scattering Theory from Microscopic First Principles}.
\newblock{\em Physica A} {\bf 279}: 416--431, 2000.
\bibitem{DGZ94} D.~D\"urr, S.~Goldstein J.~Taylor, and N.~Zangh{\`\i}.
\newblock Bosons, Fermions, and the Topology of Configuration Space.
\newblock In preparation.
\bibitem{crea2} D.~D\"urr, S.~Goldstein, R.~Tumulka, and
N.~Zangh{\`\i}. \newblock Quantum Hamiltonians and Stochastic
Jumps. \newblock quant-ph/0303056.
\bibitem{crea1} D.~D\"urr, S.~Goldstein, R.~Tumulka, and
N.~Zangh{\`\i}. \newblock Trajectories and Particle Creation and
Annihilation in Quantum Field Theory. \newblock \textit{J.\ Phys.\
A: Math.\ Gen.}\ \textbf{36}: 4143--4149, 2003.
\bibitem{Rodiden} D.~D\"urr, S.~Goldstein, R.~Tumulka, and
N.~Zangh{\`\i}. \newblock On the Role of Density Matrices in
Bohmian Mechanics. \newblock In preparation.
\bibitem{EPR} A.~Einstein, B.~Podolsky, and N.~Rosen. \newblock Can
Quantum-Mechanical Description of Physical Reality Be Considered
Complete? \newblock {\em Phys. Rev.} {\bf 47}: 777--780, 1935.
\bibitem{ESSW92} B.~G. Englert, M.~O. Scully, G.~S{\"u}ssmann, and
H.~Walther. \newblock Surrealistic {B}ohm {T}rajectories.
\newblock {\em Z. f. Naturforsch.}, {\bf 47a}: 1175, 1992.
\bibitem{FH65} R.~P. Feynman and A.~R. Hibbs. \newblock {\em Quantum
{M}echanics and {P}ath {I}ntegrals}. \newblock McGraw-Hill, New
York, 1965.
\bibitem{GMH90} M.~Gell-Mann and J.~B. Hartle. \newblock Quantum
{M}echanics in the {L}ight of {Q}uantum {C}osmology. \newblock In
W.~Zurek, editor, {\em Complexity, Entropy, and the Physics of
Information}, pages 425--458. Addison-Wesley, Reading, 1990.
\newblock Also in \cite{KEMN90}.
\bibitem{ghiraun} G.~C.~Ghirardi. Unpublished, 1981 (private
communication).
\bibitem{ghira} G.~C.~Ghirardi and T.~Weber. \newblock Quantum Mechanics and Faster-Than-Light
Communication: Methodological Considerations. \newblock {\em Nuovo
Cimento} {\bf 78B}: 9, 1983.
\bibitem{GRW} G.~C. Ghirardi, A.~Rimini, and T.~Weber. \newblock
Unified {D}ynamics for {M}icroscopic and {M}acroscopic {S}ystems.
\newblock {\em Physical Review D}, {\bf 34}: 470--491, 1986.
\bibitem{GRW90} G.~C. Ghirardi and A.~Rimini. \newblock Old and {N}ew
{I}deas in the {T}heory of {Q}uantum {M}easurements. \newblock In
\cite{Mil90}.
\bibitem{GRP90} G.~C. Ghirardi, P.~Pearle, and A.~Rimini. \newblock
Markov {P}rocesses in {H}ilbert {S}pace and {C}ontinous
{S}pontaneous {L}ocalization of {S}ystems of {I}dentical {P}articles.
\newblock {\em Physical Review A}, {\bf 42}: 78--89, 1990.
\bibitem{Glea57} A.~M.~Gleason. \newblock Measures on the Closed
Subspaces of a Hilbert Space. \newblock {\em J. Math. and Mech.}
{\bf 6}: 885--893, 1957.
\bibitem{Gol87} S.~Goldstein. \newblock Stochastic Mechanics and
Quantum Theory. \newblock {\em Journal of Statistical Physics},
{\bf 47}: 645--667, 1987.
\bibitem{ShellyPT} S.~Goldstein. \newblock Quantum Theory Without
Observers. \newblock Part One: {\em Physics Today}, March 1998,
42-46. Part Two: {\em Physics Today}, April 1998, 38-42.
\bibitem{GoldPage} S.~Goldstein and D.~Page. \newblock Linearly
Positive Histories: Probabilities for a Robust Family of Sequences
of Quantum Events. \newblock {\em Phys. Rev. Lett.} {\bf 74}:
3715--3719, 1995.
\bibitem{GHSZ89} D.~M.~Greenberger, M.~A.~Horne, A.~Shimony and
A.~Zeilinger. \newblock Bell's {T}heorem {W}ithout {I}nequalities.
\newblock {\em American Journal of Physics}, {\bf 58}: 1131--1143,
1990.
\bibitem{Gri84} R.~B. Griffiths. \newblock Consistent {H}istories and
the {I}nterpretation of {Q}uantum {M}echanics. \newblock {\em
Journal of Statistical Physics}, {\bf 36}: 219--272, 1984.
\bibitem{grubl} G.~Gr\"ubl and K.~Rheinberger. \newblock{Time of
Arrival from Bohmian Flow}. \newblock{\em J. Phys. A: Math. Gen.}
{\bf 35}: 2907--2924, 2002.
\bibitem{hardy} L.~Hardy. \newblock Nonlocality for Two Particles
Without Inequalities for Almost All Entangled States.
\newblock {\em Phys. Rev. Lett.} {\bf 71}: 1665, 1993.
\bibitem{Hei58} W.~Heisenberg. \newblock {\em Physics and Beyond;
Encounters and Conversations}. \newblock Harper \& Row, New York,
1971.
\bibitem{Hol82} A.~S. Holevo. \newblock {\em Probabilistic and
{S}tatistical {A}spects of {Q}uantum {T}heory}, Volume~1 of {\em
North-Holland Series in Statistics and Probability}. \newblock
North-Holland, Amsterdam-New York-Oxford, 1982.
\bibitem{JZ85} E.~Joos and H.~D. Zeh. \newblock The {E}mergence of
{C}lassical {P}roperties through {I}nteraction with the
{E}nvironment. \newblock {\em Zeitschrift f\"ur Physik B}, {\bf
59}: 223--243, 1985.
\bibitem{KEMN90} S.~Kobayashi, H.~Ezawa, Y.~Murayama, and S.~Nomura,
editors. \newblock {\em Proceedings of the 3rd {I}nternational
{S}ymposium on {Q}uantum {M}echanics in the {L}ight of {N}ew
{T}echnology}. Physical Society of Japan, 1990.
\bibitem{KoSp67} S.~Kochen and E.~P. Specker. \newblock The {P}roblem
of {H}idden {V}ariables in {Q}uantum {M}echanics. \newblock {\em
Journal of Mathematics and Mechanics}, {\bf 17}:59--87, 1967.
\bibitem{Kraus} K.~Kraus. \newblock Position Observables of the
Photon, pp.~293--320 in W.C.~Price and S.S.~Chissick (eds.),
\textit{The Uncertainty Principle and Foundations of Quantum
Mechanics}. New York: Wiley (1977)
\bibitem{Kra83} K.~Kraus. \newblock States, {E}ffects, and
{O}perations. \newblock {\em Lecture Notes in Physics} {\bf 190},
1983.
\bibitem{LL} L.~D. Landau and E.~M. Lifshitz. \newblock {\em
{Q}uantum {M}echanics: {N}on-relativistic {T}heory}. \newblock
Pergamon Press, Oxford and New York, 1958. \newblock Translated
{}from the Russian by J. B. Sykes and J. S. Bell.
\bibitem{Lea90} C.~R. Leavens. \newblock Transmission, {R}eflection
and {D}well {T}imes within {B}ohm's {C}ausal {I}nterpretation of
{Q}uantum {M}echanics. \newblock {\em Solid State Communications},
{\bf 74}: 923, 1990.
\bibitem{leavens2} W.~R.~McKinnon and C.~R.~Leavens.
\newblock{Distributions of Delay Times and Transmission Times in
Bohm's Causal Interpretation of Quantum Mechanics}. \newblock{\em
Phys. Rev. A} {\bf 51}: 2748--2757, 1995.
\bibitem{leavens3} C.~R.~Leavens. \newblock{Time of Arrival in Quantum
and Bohmian Mechanics}. \newblock{\em Phys. Rev. A} {\bf 58}:
840--847, 1998.
\bibitem{Leg80} A.~J. Leggett. \newblock Macroscopic {Q}uantum
{S}ystems and the {Q}uantum {T}heory of {M}easurement. \newblock
{\em Supplement of the Progress of Theoretical Physics}, {\bf 69}:
80--100, 1980.
\bibitem{Lud51} G.~L{\" u}ders. \newblock {\"U}ber die
Zustands{\"a}nderung durch den {M}essprozess. \newblock {\em
Annalen der Physik}, {\bf 8}: 322--328, 1951.
\bibitem{merm93} D.~N.~Mermin. \newblock Hidden Variables and the Two
Theorems of John Bell. \newblock {\em Review of Modern Physics}
{\bf 65}: 803--815, 1993.
\bibitem{Mil90} A.~I. Miller, editor. \newblock {\em Sixty-{T}wo
{Y}ears of {U}ncertainty: {H}istorical, {P}hilosophical, and
{P}hysical {I}nquiries into the {F}oundations of {Q}uantum
{M}echanics}. \newblock Plenum Press, New York, 1990.
\bibitem{Nel85} E.~Nelson. \newblock {\em Quantum {F}luctuations}.
\newblock Princeton University Press, Princeton, N.J., 1985.
\bibitem{Omn88} R.~Omnes. \newblock Logical {R}eformulation of
{Q}uantum {M}echanics. \newblock {\em Journal of Statistical
Physics}, {\bf 53}: 893--932, 1988.
\bibitem{Pau58} W.~Pauli. \newblock In S.~Fl{\"u}gge, editor, {\em
Encyclopedia of Physics}, Volume~60. Springer, Berlin,
Heidelberg, New York, 1958.
\bibitem{Per91} A.~Peres. \newblock Two {S}imple {P}roofs of the
{K}ochen-{S}pecker {T}heorem. \newblock {\em Journal of Physics},
{\bf A24}: 175--178, 1991.
\bibitem{Pru71} E.~Prugovecki. \newblock {\em Quantum {M}echanics in
{H}ilbert {S}pace}. \newblock Academic Press, New York and
London, 1971.
\bibitem{RS75} M.~Reed and B.~Simon. \newblock {\em Methods of
{M}odern {M}athematical {P}hysics {II}}. \newblock Academic
Press, New York, 1975.
\bibitem{RS80} M.~Reed and B.~Simon. \newblock {\em Methods of
{M}odern {M}athematical {P}hysics {I}. {F}unctional {A}nalysis,
revised and enlarged edition}. \newblock Academic Press, New
York, 1980.
\bibitem{RN55} F.~Riesz and B.~Sz.-{N}agy. \newblock {\em Functional
{A}nalysis}. \newblock F. Ungar, New York, 1955.
\bibitem{Sch35} E.~Schr{\"o}dinger. \newblock Die gegenw{\"a}rtige
{S}ituation in der {Q}uantenmechanik. \newblock {\em
Naturwissenschaften}, {\bf 23}: 807--812, 1935. \newblock English
translation by J. D. Trimmer, {\it The present situation in quantum
mechanics: a translation of Schr{\"o}dinger's ``cat paradox''
paper}, Proceedings of the American Philosophical Society, {\bf
124}: 323--338, 1980. Reprinted in \cite{WZ83}.
\bibitem{SEW91} M.~O. Scully, B.~G. Englert, and H.~Walther.
\newblock Quantum {O}ptical {T}ests of {C}omplementarity. \newblock
{\em Nature}, {\bf 351}: 111--116, 1991.
\bibitem{vNe55} J.~von Neumann. \newblock {\em Mathematische
{G}rundlagen der {Q}uantenmechanik}. \newblock Springer Verlag,
Berlin, 1932. \newblock English translation by R. T. Beyer, {\it
Mathematical Foundations of Quantum Mechanics.} Princeton
University Press, Princeton, N.J., 1955.
\bibitem{WZ83} J.~A. Wheeler and W.~H. Zurek. \newblock {\em Quantum
{T}heory and {M}easurement}. \newblock Princeton University
Press, Princeton, N.J., 1983.
\bibitem{Wig63} E.~P. Wigner. \newblock The {P}roblem of
{M}easurement. \newblock {\em American Journal of Physics}, {\bf
31}: 6--15, 1963. \newblock Reprinted in \cite{WZ83}.
\bibitem{Wig83} E.~P. Wigner. \newblock Interpretation of {Q}uantum
{M}echanics. \newblock In \cite{WZ83}, 1976.
\bibitem{WoZu} W.~K.~Wootters and W.~H.~Zurek. \newblock A Single
Quantum cannot be cloned. \newblock {\em Nature} {\bf 299}:
802--803, 1982.
\bibitem{Zur82} W.~H. Zurek. \newblock Environment-{I}nduced
{S}uperselection {R}ules. \newblock {\em Physical Review D} {\bf
26}: 1862--1880, 1982.
\end{thebibliography}
\end{document}
\begin{document}
\title[Galerkin, Hermite and kinetic Fokker Planck equations]{
A Galerkin type method for kinetic
\\
Fokker Planck equations based
\\
on Hermite expansions}
\address{Benny Avelin\\Department of Mathematics, Uppsala University\\
S-751 06 Uppsala, Sweden}
\email{[email protected]}
\address{Mingyi Hou\\Department of Mathematics, Uppsala University\\
S-751 06 Uppsala, Sweden}
\email{[email protected]}
\address{Kaj Nystr\"{o}m\\Department of Mathematics, Uppsala University\\
S-751 06 Uppsala, Sweden}
\email{[email protected]}
\author{Benny Avelin, Mingyi Hou and Kaj Nystr{\"o}m}
\begin{abstract}
In this paper, we develop a Galerkin-type approximation, with quantitative error estimates, for weak solutions to the Cauchy problem for kinetic Fokker-Planck equations in the domain $(0, T) \times D \times \mathbb{R}^d$, where $D$ is either $\T^d$ or $\mathbb{R}^d$. Our approach is based on a Hermite expansion in the velocity variable only, on the hyperbolic system that arises as the truncation of the Brinkman hierarchy, and on ideas from \cite{albrittonVariationalMethodsKinetic2021a} combined with additional energy-type estimates that we develop. We also establish the regularity of the solution based on the regularity of the initial data and the source term.
\noindent
{\it Keywords and phrases: Kinetic Fokker-Planck operator, Hermite expansion, Galerkin approximation, hyperbolic system, Brinkman hierarchy, energy estimates, weak solution, Cauchy problem, regularity.}
\end{abstract}
\maketitle
\setcounter{equation}{0} \setcounter{theorem}{0}
\section{Introduction and statement of main results}
The kinetic Fokker-Planck equation is
\begin{align}\label{eq:kfp}
\partial_t p = \nabla_v\cdot(\nabla_v p + vp) - v\cdot\nabla_x p + \nabla_x U(x)\cdot \nabla_v p,
\end{align}
where $x\in\mathbb{R}^d$ represents position, $v\in\mathbb{R}^d$ represents velocity, $t$ is time, and $U(x)$ is a potential that depends only on $x$. Subject to smoothness and growth conditions on $U(x)$, this equation describes the probability density $p(t,x,v)$ of the stochastic process $(x, v) = (x(t), v(t))$ where
\begin{align*}
\begin{cases}
\, \mathrm{d} x = v \, \mathrm{d} t, \\
\, \mathrm{d} v = - (v +\nabla_x U(x))\, \mathrm{d} t + \sqrt{2}\, \mathrm{d} B_t,
\end{cases}
\end{align*}
and where $B_t$ is a standard Brownian motion in $\mathbb{R}^d$. It is well-known that
\begin{align*}
\rho(x, v) := e^{-\abs{v}^2/2-U(x)}
\end{align*}
is the stationary solution of \cref{eq:kfp}. Defining
\begin{align*}
u(t,x,v):=p(t,x,v)/\rho(x, v),
\end{align*}
it follows that $u$ satisfies the conjugated form of \cref{eq:kfp}, i.e.
\begin{align}\label{eq:kfpconj}
\partial_t u = \mathcal{L} u, \quad \mathcal{L}:= (\Delta_v -v\cdot \nabla_v ) + (\nabla_x U\cdot\nabla_v - v\cdot \nabla_x ).
\end{align}
The operators in \cref{eq:kfp,eq:kfpconj} are sometimes referred to as Kramers-Smoluchowski-Fokker-Planck operators.
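Both facts can be checked by a direct computation, which we record for the reader's convenience: since $\nabla_v \rho = -v\rho$ and $\nabla_x \rho = -\nabla_x U(x)\,\rho$, the right-hand side of \cref{eq:kfp} vanishes at $p=\rho$,
\begin{align*}
\nabla_v\cdot(\nabla_v \rho + v\rho) - v\cdot\nabla_x \rho + \nabla_x U(x)\cdot \nabla_v \rho
= \nabla_v\cdot 0 + (v\cdot\nabla_x U(x))\rho - (\nabla_x U(x)\cdot v)\rho = 0,
\end{align*}
and substituting $p=\rho u$ into \cref{eq:kfp}, using the same two identities and dividing by $\rho$, the terms containing $u$ itself cancel and one is left with \cref{eq:kfpconj}.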
The kinetic Fokker-Planck operator appears naturally in various fields of kinetic theory and statistical physics, including plasma physics, condensed matter physics, and more recently, in machine learning and in the optimization of deep neural networks using the method of stochastic gradient descent with momentum.
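As a purely illustrative aside, the stochastic dynamics above is straightforward to simulate with an Euler-Maruyama scheme; in the following sketch the quadratic potential $U(x)=|x|^2/2$, the dimension, the step size, the horizon, and the initial law are our own choices and are not taken from the analysis below.

```python
import numpy as np

# Euler-Maruyama discretization of the underdamped Langevin dynamics
#   dx = v dt,   dv = -(v + grad U(x)) dt + sqrt(2) dB_t.
# Illustrative choices (not from the paper): U(x) = |x|^2/2, d = 1,
# step size dt = 1e-2, horizon T = 10, Gaussian initial positions.

def grad_U(x):
    return x  # gradient of the quadratic potential U(x) = |x|^2 / 2

def simulate(n_paths=20000, d=1, dt=1e-2, T=10.0, seed=0):
    rng = np.random.default_rng(seed)
    x = 3.0 * rng.normal(size=(n_paths, d))  # initial spread, variance 9
    v = np.zeros((n_paths, d))
    for _ in range(int(T / dt)):
        x = x + v * dt
        v = v - (v + grad_U(x)) * dt + np.sqrt(2.0 * dt) * rng.normal(size=(n_paths, d))
    return x, v

x, v = simulate()
# The stationary density rho(x, v) is proportional to exp(-|v|^2/2 - U(x));
# with U quadratic, both marginals are standard Gaussians.
print(np.var(x), np.var(v))
```

With $U$ quadratic, both marginals of the stationary density $\rho$ are standard Gaussians, so the printed empirical variances should be close to $1$.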
The purpose of this paper is to contribute to the study of Cauchy problems for the kinetic Fokker-Planck equation by constructing Galerkin-type approximations of weak solutions to the problem in \cref{ivp1} with quantitative error estimates. More precisely, we study the Cauchy problem
\begin{align}\label{ivp1}
\begin{cases}
\partial_t u - \mathcal{L}u = f &\mbox{in } (0,T)\times\Omega,\\
u = g &\mbox{on } \{t=0\}\times\Omega,
\end{cases}
\end{align}
where $u=u(t,x,v)$, $T\in (0,\infty)$, $\Omega = D\times\mathbb{R}^d$, and either $D=\T^d$ or $D=\mathbb{R}^d$.
In \cref{ivp1}, $g(x,v)$ is the initial data, and $f(t,x,v)$ is a given source term.
As we will explain, we restrict our attention to $D=\T^d$ and $D=\mathbb{R}^d$ for several reasons. One reason is the current lack of understanding regarding the existence and, in particular, uniqueness of weak solutions to the Cauchy-Dirichlet problem for the operator in \cref{eq:kfpconj}.
Various versions of the problem in \cref{ivp1} have been extensively studied from both theoretical and numerical perspectives over the past few decades. Notably, in \cite{villaniHypocoercivity2009}, Villani established the existence and exponential decay to equilibrium, under assumptions on the potential $U=U(x)$. In \cite{villaniHypocoercivity2009}, Villani also introduced the notion of {hypocoercivity} which essentially amounts to redeeming the initial lack of coercivity of the hypoelliptic operators at hand, by introducing auxiliary norms in which solutions are proved to dissipate and quantitative decay estimates can be deduced. In \cite{herauIsotropicHypoellipticityTrend2004a}, Herau and Nier gave explicit estimates of the rate of decay to the equilibrium using hypoelliptic techniques and by making the connection to the Witten Laplacian and its spectrum and spectral gaps.
Spectral gaps were further studied in a series of papers by Herau et al., including \cite{herauSemiclassicalAnalysisKramersFokkerPlanck2005,herauTunnelEffectKramersFokkerPlanck2008,herauTunnelEffectSymmetries2011a}.
In these papers the authors used semiclassical analysis to study the relationship between small eigenvalues and minimums of the potential $U$.
More recently, Bony et al.~\cite{bony2022eyringkramers} generalized the results from \cite{herauTunnelEffectSymmetries2011a} to general Fokker-Planck type operators admitting a Gibbs stationary distribution and established an Eyring-Kramers formula for the eigenvalues of the operator.
In contrast to the approaches mentioned above, Albritton et al.~\cite{albrittonVariationalMethodsKinetic2021a} proved the existence of weak solutions in $\T^d$ and exponential decay in an $\mathsf{L}^2$-setting using a variational approach based on energy methods.
Cao et al.~\cite{caoExplicitConvergenceRate2023} extended the results of \cite{albrittonVariationalMethodsKinetic2021a} to the whole space and developed a new method for determining the rate of decay to equilibrium in the $\mathsf{L}^2$ norm. Their approach involves expressing the estimate in terms of the Poincaré constant of the underlying space.
The work presented in this paper is inspired by the approach developed in \cite{albrittonVariationalMethodsKinetic2021a}, and our idea has been to develop Galerkin-type approximations, with quantitative error estimates, of weak solutions to the problem in \cref{ivp1} in the framework of \cite{albrittonVariationalMethodsKinetic2021a}.
In the case $D=\T^d$, our starting point is an ansatz for the solution to the problem in \cref{ivp1} in the form of a spectral expansion using Hermite polynomials in the velocity variable only, an approach which in the one-dimensional case was introduced by Risken in \cite{riskenFokkerplanckEquationMethods1996}. Through this ansatz we are led to an infinite system of equations, originally discovered in \cite{brinkmanBrownianMotionField1956}, in the $(t,x)$ variables only.
By truncating this system or hierarchy of equations, often referred to as the Brinkman hierarchy, we obtain a hyperbolic system \cref{eq:hyperbolic_intro+} which plays a central role in our paper.
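For orientation, the hierarchy can be derived formally in the case $d=1$: with the normalized Hermite polynomials satisfying $\partial_v \Psi_n = \sqrt{n}\,\Psi_{n-1}$, $v\,\Psi_n = \sqrt{n+1}\,\Psi_{n+1} + \sqrt{n}\,\Psi_{n-1}$ and $(\partial_v^2 - v\partial_v)\Psi_n = -n\,\Psi_n$, inserting the expansion $u = \sum_n c^n(t,x)\Psi_n(v)$ into $\partial_t u = \mathcal{L}u + f$ and matching the coefficient of $\Psi_n$ yields, up to normalization conventions,
\begin{align*}
\partial_t c^n + \sqrt{n}\,\partial_x c^{n-1} + \sqrt{n+1}\,\bigl(\partial_x c^{n+1} - \partial_x U\, c^{n+1}\bigr) = -n\, c^n + f^n, \qquad n \geq 0,
\end{align*}
with $c^{-1}:=0$ and $f^n$ the $n$th Hermite coefficient of $f$; truncating at level $m$ gives, in this one-dimensional setting, a prototype of the hyperbolic system \cref{eq:hyperbolic_intro+}.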
A byproduct of our approach and energy estimates is that they yield regularity estimates for the solution that depend on the regularity of the initial data and the source term.
It is worth noting that a well-posedness result for the hyperbolic system was given in \cite{meyerCommentsGradProcedure1983b}.
However, a novelty in our approach is that we use a vanishing viscosity method to prove the well-posedness of the hyperbolic system and to obtain the corresponding energy estimates. In addition, our approach gives an alternative proof of the existence and uniqueness of weak solutions to \cref{ivp1} established in \cite{albrittonVariationalMethodsKinetic2021a} in the case $D=\T^d$. Furthermore, compared to \cite{meyerCommentsGradProcedure1983b} and the numerical papers mentioned below, we are able to quantify to what extent the (truncated) hyperbolic system approximates the Cauchy problem \cref{ivp1}, and we deduce error estimates depending essentially only on the dimension $d$, the level of truncation, the potential $U$, the initial data and the source term, see \cref{Thm1}. In the case $D=\mathbb{R}^d$, we prove, under additional restrictions on the potential $U(x)$, that the solution in $\mathbb{R}^d\times\mathbb{R}^d$ can be approximated by a series of bounded periodic solutions, each of which can be computed by the Galerkin method developed here, see \cref{Thm2}. Furthermore, also in this case we prove the existence and uniqueness of weak solutions to \cref{ivp1}.
Concerning the literature devoted to numerical approaches to the problem in \cref{ivp1}, we note that Risken developed a matrix continued-fraction method in \cite{riskenFokkerplanckEquationMethods1996} based on the spectral expansion.
In \cite{fokCombinedHermiteSpectralfinite2002a}, Fok et al.~gave detailed explicit and implicit discretization schemes for the hyperbolic system \cref{eq:hyperbolic_intro+} in the case $d=1$.
A similar numerical scheme for the periodic case can be found in \cite{chaiMixedGeneralizedHermiteFourier2018}.
In addition, for the Fokker-Planck-Landau equation, the nonlinear version of equation \cref{ivp1}, a numerical scheme based on Hermite expansion can be found in \cite{liHermiteSpectralMethod2021a}.
We also note that different spectral methods may be used to solve the kinetic Fokker-Planck equations, for example, a generalized Laguerre-Legendre spectral method can be found in \cite{guoCompositeGeneralizedLaguerreLegendre2009a}. In addition, we refer to \cite{pavliotisStochasticProcessesApplications2014a} for a good introduction to Monte Carlo methods and the simulation of probability densities. Compared to the previous literature we develop Galerkin-type approximations, with quantitative error estimates, of the (unique) weak solutions to the Cauchy problem in \cref{ivp1} in domains $(0, T)\times D\times \mathbb{R}^d$ where
either $D=\T^d$ or $D=\mathbb{R}^d$.
\subsection{Weak solutions} Points in $\mathbb{R}\times\mathbb{R}^d\times\mathbb{R}^d$ will be denoted by $(t,x,v)$ where $x\in \mathbb{R}^d$ is position, $v\in\mathbb{R}^d$ is velocity, and $t$ is time. We introduce Gibbs measures $\mu$ and $\eta$ on $\mathbb{R}^d$,
\begin{align}\label{measures}
\, \mathrm{d} \mu = \, \mathrm{d}\mu(v) = e^{-\mathsf{a}bs{v}^2/2}\, \mathrm{d} v, \quad \, \mathrm{d} \eta=\, \mathrm{d}\eta(x) = e^{-U(x)}\, \mathrm{d} x,
\end{align}
and we will consistently assume that
$e^{-{U(x)}}\in \mathrm{L}^1(\mathbb{R}^d,\, \mathrm{d} x)$.
Given $D\subset\mathbb{R}^d$ and $T>0$, we will, when unambiguous, drop either some or all of the domains of definition in the Bochner spaces $\mathsf{L}^2(\mathsf{L}^2_\eta(\mathrm{H}^{1}_\mu))=\mathsf{L}^2((0,T);\mathsf{L}^2_\eta (D;\mathrm{H}^{1}_\mu(\mathbb{R}^d)))$ and
$\mathsf{L}^2(\mathsf{L}^2_\eta(\mathrm{H}^{-1}_\mu))=\mathsf{L}^2((0,T);\mathsf{L}^2_\eta (D;\mathrm{H}^{-1}_\mu(\mathbb{R}^d)))$; here the subscripts $\eta$ and $\mu$ indicate the measure with respect to which integration is performed. Furthermore, as in \cite{albrittonVariationalMethodsKinetic2021a}, given $D \subset \mathbb{R}^d$ and $T>0$, we define $\mathrm{H}^1_{\text{kin}}:=\mathrm{H}^1_{\text{kin}}((0,T)\times D\times \mathbb{R}^d)$ as the set
\begin{align*}
\left \{u:u \in \mathsf{L}^2(\mathsf{L}^2_\eta(\mathrm{H}^{1}_\mu)), \partial_t u + v\cdot\nabla_x u -\nabla_x U\cdot \nabla_v u \in \mathsf{L}^2(\mathsf{L}^2_\eta(\mathrm{H}^{-1}_\mu)) \right \}.
\end{align*}
We remark that, equipped with the norm
\begin{align*}
\|u\|_{\mathrm{H}^1_{\text{kin}}}:= \|u\|_{\mathsf{L}^2(\mathsf{L}^2_\eta(\mathrm{H}^1_\mu))}
+
\|\partial_t u + v\cdot\nabla_x u -\nabla_x U\cdot \nabla_v u\|_{\mathsf{L}^2(\mathsf{L}^2_\eta (\mathrm{H}^{-1}_\mu))},
\end{align*}
$\mathrm{H}^1_{\text{kin}}$ is a Banach space.
Our functional notation, including the Sobolev space $\mathrm{H}^1_\mu(\mathbb{R}^d)$ and its dual $\mathrm{H}^{-1}_\mu(\mathbb{R}^d)$, is detailed in \cref{sec:pre}.
\begin{definition}\label{def:weak}
Given a domain $D \subset \mathbb{R}^d$ and $T > 0$, if $g\in \mathsf{L}^2_\eta(D;\allowbreak \mathsf{L}^2_\mu)$ and $f\in \mathsf{L}^2((0,T);\mathsf{L}^2_\eta(D;\mathrm{H}^{-1}_\mu))$, we say that a function $u\in \mathrm{H}^1_{\mathrm{kin}}$ is a weak solution to \cref{ivp1} if
\begin{multline*}
\underset{(0,T) \times D}{\iint} (\nabla_v u, \nabla_v \varphi)_{\mathsf{L}^2_\mu}\, \mathrm{d}\eta\, \mathrm{d} t
=
\underset{(0,T) \times D}{\iint} (f-\partial_t u- v\cdot\nabla_x u +\nabla_x U\cdot\nabla_v u, \varphi )_{\mathrm{H}^{-1}_\mu,\mathrm{H}^1_\mu}\, \mathrm{d}\eta\, \mathrm{d} t,
\end{multline*}
holds for all $\varphi\in \mathsf{L}^2((0,T);\mathsf{L}^2_\eta(D;\mathrm{H}^1_\mu))$ and if
\begin{align*}
u(0,x,v) = g(x,v)\mbox{ for }(x,v)\in D\times\mathbb{R}^d.
\end{align*}
\end{definition}
\begin{remark}
The initial condition in \cref{def:weak} is well-defined as any function $u\in \mathrm{H}^1_{\mathrm{kin}}((0,T)\times D\times \mathbb{R}^d)$ can be identified with an element of
$C([0,T];\mathsf{L}^2_\eta(D;\mathsf{L}^2_\mu(\mathbb{R}^d)))$, see \cite[Lemma 6.12]{albrittonVariationalMethodsKinetic2021a}.
\end{remark}
To formulate our main results we also introduce, for $k\geq 1$, $D \subset \mathbb{R}^d$, and $T > 0$, the function space $\mathrm{H}^k_{t,x,v}:=\mathrm{H}^k_{t,x,v}((0,T)\times D\times\mathbb{R}^d) $
which
is defined as
\begin{align*}
\mathrm{H}^k_{t,x,v}((0,T)\times D\times\mathbb{R}^d):=\left \{u: \partial_t^i\nabla_x^j\nabla_v^{2l} u \in \mathsf{L}^2(\mathsf{L}^2_\eta(\mathsf{L}^2_\mu)), \forall i,j,l \in \N \text{ s.t. } i+j+l \leq k \right\}.
\end{align*}
We equip this space with the norm
\begin{align*}
\norm{ u }^2_{\mathrm{H}^k_{t,x,v}}:= \sum_{i+j+l = 0}^k \norm{\partial_t^i \nabla_x^j \nabla_v^{2l} u}^2_{\mathsf{L}^2(\mathsf{L}^2_\eta(\mathsf{L}^2_\mu))}.
\end{align*}
The spaces $\mathrm{H}^k_{t,x}:=\mathrm{H}^k_{t,x}((0,T)\times D)$ and $\mathrm{H}^k_{x,v}:=\mathrm{H}^k_{x,v}(D\times\mathbb{R}^d)$ are defined analogously.
We finally remark that, if $U(x) \in C^\infty(\T^d)$, then
\begin{align*}
\mathrm{H}^k_{t,x,v}((0,T)\times\T^d\times\mathbb{R}^d)\subset \mathrm{H}^1_{\text{kin}}((0,T)\times \T^d\times \mathbb{R}^d),
\end{align*}
for all $k\geq 1$, but this is not the case if $\T^d$ is replaced by $\mathbb{R}^d$.
\subsection{Statement of main results} Our main results concern the geometric situations $D=\T^d$ and $D=\mathbb{R}^d$.
In the case $D=\T^d$, we consider the problem in \cref{ivp1} and the ansatz
\begin{align} \label{eq:ansatz}
u_m(t,x,v) = \sum_{\abs{\alpha}=0}^m c_m^\alpha(t,x)\Psi_\alpha (v),
\end{align}
where $\{\Psi_\alpha(v)\}$ are normalized Hermite polynomials in $\mathbb{R}^d$ introduced in \cref{eq:hermite}, with the coefficients $\{c_m^\alpha\}$ satisfying the hyperbolic system stated in \cref{eq:hyperbolic_intro+}. In this case our main result can be stated as follows.
\begin{theorem}\label{Thm1}
Let $D=\T^d$, $T > 0$, and let $k \in \Z$ be such that $k\geq 2$. If we assume that
\begin{align}\label{eq:assump:um1}
\begin{cases}
U(x) \in C^\infty(\T^d),\ g\in \mathrm{H}^{k}_{x,v}(\T^d\times\mathbb{R}^d),\ f\in \mathrm{H}^{k}_{t,x,v}((0,T)\times\T^d\times\mathbb{R}^d), \\
\nabla_x^{k+1} g\in \mathsf{L}^2_\eta(\T^d;\mathsf{L}^2_\mu(\mathbb{R}^d)), \\
\partial_t^i \nabla_x^{j} f \in \mathsf{L}^2((0,T); \mathsf{L}^2_\eta(\T^d;\mathsf{L}^2_\mu(\mathbb{R}^d))), \text{ whenever } i+j =k+1,
\end{cases}
\end{align}
then there exists, for each $m\in\Z_+$, a unique function
\begin{align*}
u_m\in\mathrm{H}^k_{t,x,v}((0,T)\times\T^d\times\mathbb{R}^d)
\end{align*}
of the form \cref{eq:ansatz} with $\{c_m^\mathsf{a}lpha\}$ satisfying the hyperbolic system \cref{eq:hyperbolic_intro+}. Also, there exists a constant
$C=C(d,T,\norm{\nabla_x^2 U}_\infty,dots, \norm{\nabla_x^{k+1}U}_\infty,k) \geq 1$, such that
\mathsf{b}egin{align*}
\norm{u_m}^2_{\mathrm{H}^k_{t,x,v}} \leq C (\norm{g}^2_{\mathrm{H}^k_{x,v}} + \norm{f}^2_{\mathrm{H}^k_{t,x,v}}).
\end{align*}
Furthermore, after passing to a subsequence,
\begin{align*}
u_m\rightharpoonup u\in \mathrm{H}^{k}_{t,x,v}((0,T)\times\T^d\times\mathbb{R}^d),
\end{align*}
where $u$ is the unique weak solution to \cref{ivp1} in the sense of \cref{def:weak}, and
\begin{align}\label{thm1:error}
\norm{u-u_m}^2_{\mathsf{L}^2(\mathsf{L}^2_\eta(\mathsf{L}^2_\mu))} \leq \frac{C}{(1+m)^{2k-3}}\bigl( \norm{g}^2_{\mathrm{H}^k_{x,v}} + \norm{f}^2_{\mathrm{H}^k_{t,x,v}}\bigr).
\end{align}
\end{theorem}
Using \cref{Thm1} and an approximation argument, we can also deduce the following well-posedness result.
\begin{corollary}\label{cor:l2initial}
Let $D=\T^d$, $T > 0$ and assume that
\begin{align*}
U(x)\in C^\infty(\T^d),\ g\in \mathsf{L}^2_\eta(\T^d; \mathsf{L}^2_\mu(\mathbb{R}^d)),\ f \in \mathsf{L}^2((0,T); \mathsf{L}^2_\eta(\T^d;\mathsf{L}^2_\mu(\mathbb{R}^d))).
\end{align*}
Then there exists a unique weak solution $u\in \mathrm{H}^1_{\mathrm{kin}}((0,T)\times\T^d\times\mathbb{R}^d)$ to the initial value problem in \cref{ivp1}, and there exists a constant $C = C(d,\allowbreak T,\allowbreak \norm{\nabla_x^2 U}_\infty,\allowbreak \norm{\nabla_x^{3}U}_\infty) \geq 1$, such that
\begin{align*}
\norm{u}_{\mathrm{H}^1_{\mathrm{kin}}} &\leq C\bigl( \norm{g}_{\mathsf{L}^2_\eta(\mathsf{L}^2_\mu)} +
\norm{f}_{\mathsf{L}^2(\mathsf{L}^2_\eta(\mathsf{L}^2_\mu))}\bigr).
\end{align*}
\end{corollary}
In the case $D =\mathbb{R}^d$, we prove the following theorem.
\begin{theorem}\label{Thm2}
Let $D=\mathbb{R}^d$, $T > 0$, and assume that $U(x)$ has the form
\begin{align}\label{assump:global:potential}
U(x) = a |x|^2 + P(x),\ a\in \mathbb{R}_+,\ P(x)\in C_0^\infty(\mathbb{R}^d),
\end{align}
and that
\begin{align}\label{assump:global:potential+}
g\in C_0^\infty(\mathbb{R}^{2d}),\ f\in C^\infty_0([0,T]\times\mathbb{R}^{2d}),
\end{align}
where $C_0^\infty(X)$ denotes the space of all compactly supported smooth functions on $X$.
Then there exists a sequence of explicit smooth periodic functions $\{u_R\}$
such that, after passing to a subsequence,
\begin{align*}
u_R\rightharpoonup u\in \mathrm{H}^1_{\mathrm{kin}}((0,T)\times\mathbb{R}^d\times\mathbb{R}^d),
\end{align*}
where $u$ is the unique weak solution to \cref{ivp1}.
\end{theorem}
Using \cref{Thm2} and an approximation argument, we can also deduce the following well-posedness result.
\begin{corollary}\label{cor:global:l2initial}
Let $D$, $U$, and $T$ be as in \cref{Thm2}. If $g\in \mathsf{L}^2_\eta(\mathbb{R}^d;\mathsf{L}^2_\mu(\mathbb{R}^d))$, and $f \in \mathsf{L}^2((0,T); \mathsf{L}^2_\eta(\mathbb{R}^d;\mathsf{L}^2_\mu(\mathbb{R}^d)))$,
then there exists a unique weak solution $u\in \mathrm{H}^1_{\mathrm{kin}}((0,T)\times\mathbb{R}^d\times\mathbb{R}^d)$ to the initial value problem \cref{ivp1}.
\end{corollary}
\subsection{Outline of the proofs}
To prove \cref{Thm1} we deduce that
\begin{align*}
(\partial_t - \mathcal{L}) u_m = f_m + {E}_m
\end{align*}
where $f_m$ is a finite dimensional projection of $f$ and ${E}_m$ is considered as an error term, see \cref{eq:Lum}.
We then prove that $\{u_m\}$ and $\{\partial_t u_m + v\cdot\nabla_x u_m -\nabla_x U\cdot \nabla_v u_m\}$ are uniformly bounded sequences in $\mathsf{L}^2((0,T);\mathsf{L}^2_\eta(\T^d;\mathrm{H}^1_\mu))$ and $\mathsf{L}^2((0,T);\allowbreak \mathsf{L}^2_\eta(\T^d;\allowbreak \mathsf{L}^2_\mu))$, respectively, see \cref{thm:regularity}.
Furthermore, we prove that ${E}_m \to 0$ strongly in $\mathsf{L}^2((0,T);\mathsf{L}^2_\eta(\T^d; \mathsf{L}^2_\mu))$, see \cref{lem:Em}. Together, this allows us to conclude that
\begin{align*}
u_m \rightharpoonup u
\end{align*}
in $\mathsf{L}^2((0,T);\mathsf{L}^2_\eta(\T^d; \mathrm{H}^1_\mu))$, that
\begin{align*}
\partial_t u_m + v\cdot\nabla_x u_m -\nabla_x U\cdot \nabla_v u_m \rightharpoonup \partial_t u + v\cdot\nabla_x u -\nabla_x U\cdot \nabla_v u
\end{align*}
in $\mathsf{L}^2((0,T);\allowbreak \mathsf{L}^2_\eta(\T^d;\allowbreak \mathrm{H}^{-1}_\mu))$, and that
\begin{align*}
{E}_m \rightharpoonup 0
\end{align*}
in $\mathsf{L}^2((0,T);\mathsf{L}^2_\eta(\T^d; \mathrm{H}^{-1}_\mu))$.
The error estimate in the theorem follows from the energy estimates for $u_m$, which are equivalent to the energy estimates for the coefficients $\{c_m^\alpha\}_\alpha$, and a delicate bootstrapping argument (see \cref{thm:energy}).
To prove \cref{Thm2}, we construct a sequence of bounded periodic problems indexed by the truncation scale $R$, using certain cutoff functions. We denote the solutions to these problems by $\{u_R\}$ and show that, after passing to a subsequence, they converge to a solution $u\in\mathrm{H}^{1}_\mathrm{kin}((0,T)\times\mathbb{R}^d\times\mathbb{R}^d)$ of the problem \cref{ivp1}. The proof uses the logarithmic Sobolev inequality for the measure $\, \mathrm{d}\eta\, \mathrm{d}\mu$ and an entropy-type inequality from \cite{ledouxConcentrationMeasureLogarithmic1999a}.
\subsection{Organization of the paper.}
In \cref{sec:pre}, we introduce essential notations and cover preliminaries.
In \cref{sec:hsys}, we establish well-posedness and regularity estimates for the hyperbolic system that arises as the truncation of the Brinkman hierarchy.
In \cref{sec:uniformestimate}, we establish the essential energy estimates for the Galerkin approximations $\{u_m\}$.
In \cref{sec:bddresult}, we prove both \cref{Thm1} and \cref{cor:l2initial}.
Finally, in \cref{sec:global}, we prove \cref{Thm2} and \cref{cor:global:l2initial}.
\section{Preliminaries}\label{sec:pre}
Our function spaces are defined with respect to the measures $\mu$ and $\eta$ introduced in \cref{measures}. Given $D \subset \mathbb{R}^d$, and $\zeta\in\{\mu,\eta\}$,
we let $\mathsf{L}^2_\zeta(D)$ be the Hilbert space with inner product
\begin{align*}
(u,w)_{\mathsf{L}^2_\zeta}:=\int_D u w \, \, \mathrm{d} \zeta,
\end{align*}
and norm
\begin{align*}
\|u\|_{\mathsf{L}^2_\zeta} := (u,u)_{\mathsf{L}^2_\zeta}^{1/2}=\bigl ( \int_D u^2 \, \, \mathrm{d} \zeta \bigr )^{1/2}.
\end{align*}
We let $\mathrm{H}^1_\zeta(D)$ be the first order Sobolev space with inner product
\begin{align*}
( u,w )_{\mathrm{H}^1_\zeta}:=(u,w)_{\mathsf{L}^2_\zeta}+(\nabla_v u ,\nabla_v w )_{\mathsf{L}^2_\zeta},
\end{align*}
and with norm
\begin{align*}
\|u\|_{\mathrm{H}^1_\zeta} := \bigl(\|u\|^2_{\mathsf{L}^2_\zeta} + \|\nabla_v u\|^2_{\mathsf{L}^2_\zeta}\bigr)^{1/2}.
\end{align*}
We also let $(\cdot,\cdot)_{\mathrm{H}^{-1}_\zeta, \mathrm{H}^1_\zeta}$ denote the pairing between $\mathrm{H}^{-1}_\zeta $ and $\mathrm{H}^1_\zeta$ where $\mathrm{H}^{-1}_\zeta$ is the dual space of $\mathrm{H}^1_\zeta$.
It is well known that, under rather weak conditions on the tail of the measure $\eta$, the spaces $\mathsf{L}^2_\zeta(\mathbb{R}^d)$, $\mathrm{H}^1_\zeta(\mathbb{R}^d)$, $\mathsf{L}^2_\zeta(\T^d)$, $\mathrm{H}^1_\zeta(\T^d)$ are the closures of the spaces of compactly supported smooth functions, $C_0^\infty(\mathbb{R}^d)$ and $C_0^\infty(\T^d)$ respectively, in the corresponding norms, see
e.g.~\cite{adamsSobolevSpaces1975a,tolleUniquenessWeightedSobolev2012b}.
The spaces $\mathrm{H}^k_\mu(\mathbb{R}^d)$ and $\mathrm{H}^k_\eta(\T^d)$, for $k\geq 2$, are defined analogously. Given an arbitrary Banach space $X$ we define the Bochner space $\mathsf{L}^2_\zeta(D;X)$ to be the set of all measurable functions $f:D \to X$ such that
\begin{align*}
\|f\|_{\mathsf{L}^2_\zeta(D;X)} := \bigl ( \int_D \|f\|_X^2 \, \, \mathrm{d}\zeta \bigr )^{1/2} < \infty.
\end{align*}
The space $\mathsf{L}^2((0,T);X)$ is defined analogously; in particular, the space $\mathsf{L}^2((0,T);\mathsf{L}^2_\eta(\T^d;\mathsf{L}^2_\mu(\mathbb{R}^d)))$ is well defined.
We will, if unambiguous, drop either some or all of the domains of definition in the function spaces for ease of notation.
\subsection{Operators and their adjoints} Consider, for $i=1,\dots,d$, the operators $\partial_{v_i}$ and $\partial_{x_i}$. The formal adjoints of these operators in $\mathsf{L}^2_\mu$ and $\mathsf{L}^2_\eta$, respectively, are
\begin{align}\label{eq:formaldual}
\partial_{v_i}^* = -\partial_{v_i} + {v_i}, \quad \partial_{x_i}^* = -\partial_{x_i} + {\partial_{x_i}U}.
\end{align}
The corresponding nabla operators $\nabla_{v},\nabla_{v}^\ast,\nabla_{x},\nabla_{x}^\ast$ are defined in the canonical way.
Using this notation we can rewrite the operator $\mathcal{L}$ in \cref{eq:kfpconj} as
\begin{align*}
\mathcal{L} = - \nabla_v^*\cdot\nabla_v - \bigl( \nabla_v^*\cdot\nabla_x -\nabla_v\cdot\nabla_x^*\bigr).
\end{align*}
\subsection{Hermite basis} In the case $d=1$, the so-called Hermite polynomials form a basis for $\mathsf{L}^2_\mu(\mathbb{R})$ as well as for $\mathrm{H}^1_\mu(\mathbb{R})$. We will work extensively with this basis, and we here record some of its useful properties. The $k$-th Hermite polynomial is defined, see \cite{wienerFourierIntegralCertain1988}, by
\begin{align*}
h_k(v) := (-1)^k e^{v^2/2} \frac{\, \mathrm{d}^k}{\, \mathrm{d} v^k} e^{-v^2/2}.
\end{align*}
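For instance, the first few Hermite polynomials, computed directly from this definition, are
\begin{align*}
h_0(v) = 1,\quad h_1(v) = v,\quad h_2(v) = v^2-1,\quad h_3(v) = v^3-3v.
\end{align*}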
We introduce the normalized Hermite functions as
\begin{align*}
\psi_k(v) := \frac{1}{Z_k} h_k(v),\quad Z_k = \bigl(\int_\mathbb{R} h_k^2(v) e^{-\frac{v^2}{2}}\, \mathrm{d} v \bigr)^{1/2}.
\end{align*}
We also introduce the Ornstein--Uhlenbeck operator
\begin{align}\label{eq:ornuhl}
K = \partial_{v}^* \partial_v = -\partial_v^2 + v\partial_v.
\end{align}
Then
\begin{align*}
K\psi_k = k\psi_k,
\end{align*}
i.e.~$\{\psi_k\}$ are eigenfunctions of the one-dimensional Ornstein--Uhlenbeck operator $K$. The methodology outlined in this paper rests on the following elementary and well-known result.
We include the proof here for completeness.
\begin{lemma} \label{lem:simultaneous}
The normalized Hermite functions $\{\psi_k\}_{k=0}^\infty$ form an \textit{orthonormal basis} in $\mathsf{L}^2_\mu(\mathbb{R})$. Furthermore, $\{\psi_k\}_{k=0}^\infty$ is an \textit{orthogonal basis} in $\mathrm{H}^1_\mu(\mathbb{R})$.
\end{lemma}
\begin{proof}
For the first conclusion, it suffices to note that $\{\psi_k\}_{k=0}^\infty$ is dense in $\mathsf{L}^2_\mu(\mathbb{R})$, see \cite[Chapter 8, Theorem 1]{wienerFourierIntegralCertain1988}, and that the functions in $\{\psi_k\}_{k=0}^\infty$ are normalized and pairwise orthogonal in $\mathsf{L}^2_\mu(\mathbb{R})$. For the second conclusion, we note that \begin{align}\label{calco}
( \psi_k, \psi_l )_{\mathrm{H}^1_\mu} &= (\psi_k,\psi_l)_{\mathsf{L}^2_\mu} + (\psi_k', \psi_l')_{\mathsf{L}^2_\mu}\notag \\
&= (\psi_k,\psi_l)_{\mathsf{L}^2_\mu} + (\psi_k, K \psi_l)_{\mathsf{L}^2_\mu} = (\psi_k, (1+l)\psi_l)_{\mathsf{L}^2_\mu} = (1+l)\, \delta_{kl},
\end{align}
i.e.~$\{\psi_k\}_{k=0}^\infty$ is orthogonal in $\mathrm{H}^1_\mu(\mathbb{R})$. To prove that $\{\psi_k\}_{k=0}^\infty$ is dense in $\mathrm{H}^1_\mu(\mathbb{R})$, suppose that there exists $u\in \mathrm{H}^1_\mu(\mathbb{R})$, not identically zero, such that $(u,\psi_k )_{\mathrm{H}^1_\mu} = 0$ for all $k=0,1,2,\dots$.
As in \cref{calco} this implies that
\begin{align*}
(u, (1+k)\psi_k)_{\mathsf{L}^2_\mu}=0,
\end{align*}
for all $k=0,1,2,\dots$. Therefore $u$ must be identically zero in $\mathsf{L}^2_\mu(\mathbb{R})$ since $\{\psi_k\}_{k=0}^\infty$ is dense in $\mathsf{L}^2_\mu(\mathbb{R})$, which is a contradiction. Hence $\{\psi_k\}_{k=0}^\infty$ is a simultaneous basis for $\mathsf{L}^2_\mu(\mathbb{R})$ and $\mathrm{H}^1_\mu(\mathbb{R})$.
\end{proof}
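As a quick illustration of the eigenvalue relation used in the proof, take $k=2$: since $h_2(v)=v^2-1$, a direct computation gives
\begin{align*}
K h_2 = -\partial_v^2 h_2 + v\,\partial_v h_2 = -2 + 2v^2 = 2\,h_2,
\end{align*}
and hence $K\psi_2 = 2\psi_2$, in agreement with $K\psi_k = k\psi_k$.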
Given $d\in\Z$, $d\geq 1$, and a multi-index $\alpha=(\alpha_1, \dots, \alpha_d)$, we introduce the Hermite basis for $\mathbb{R}^d$,
\begin{align}\label{eq:hermite}
\Psi_\alpha(v) = \prod_{i=1}^d \psi_{\alpha_i}(v_i).
\end{align}
Then by construction $\norm{\Psi_\alpha}_{\mathsf{L}^2_\mu(\mathbb{R}^d)} = 1$. Furthermore, for any multi-indices $\alpha,\alpha'$, we have
\begin{align*}
(\Psi_\alpha,\Psi_{\alpha'})_{\mathsf{L}^2_\mu} = \delta_{\alpha\alpha'},
\end{align*}
and, for each $i=1,\dots,d$,
\begin{align*}
\partial_{v_i}^\ast\partial_{v_i} \Psi_\alpha = \alpha_i \Psi_\alpha.
\end{align*}
It is immediate from \cref{lem:simultaneous} that $\{\Psi_\alpha\}_{|\alpha|=0}^\infty$, where $\abs{\alpha} = \sum_{i=1}^d \alpha_i$, forms an orthonormal basis for $\mathsf{L}^2_\mu(\mathbb{R}^d)$ and an orthogonal basis for $\mathrm{H}^1_\mu(\mathbb{R}^d)$.
Let, for each $i=1,\dots,d$, $\mathbf{e}_i$ be the multi-index with a $1$ in position $i$ and zeros elsewhere. Then the following useful and well-known recurrence relations are easily verified:
\begin{align}\label{eq:Hermiterelations}
\begin{split}
& \partial_{v_i} \Psi_\alpha (v)= \sqrt{\alpha_i}\Psi_{\alpha-\mathbf{e}_i}(v), \quad \alpha_i > 0,\\
& \partial_{v_i}^\ast \Psi_\alpha (v) = \sqrt{\alpha_i + 1}\Psi_{\alpha+\mathbf{e}_i}(v),\\
& v_i \Psi_\alpha = \sqrt{\alpha_i + 1}\Psi_{\alpha+\mathbf{e}_i} + \sqrt{\alpha_i}\Psi_{\alpha-\mathbf{e}_i}.
\end{split}
\end{align}
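In the case $d=1$ these relations can be verified directly from the standard identities $h_k' = k\, h_{k-1}$ and $Z_k = \sqrt{k}\, Z_{k-1}$; for instance,
\begin{align*}
\partial_v \psi_2 = \frac{h_2'}{Z_2} = \frac{2 h_1}{\sqrt{2}\, Z_1} = \sqrt{2}\,\psi_1,
\qquad
v\,\psi_1 = \frac{h_2 + h_0}{Z_1} = \sqrt{2}\,\psi_2 + \psi_0,
\end{align*}
in agreement with the first and third relations in \cref{eq:Hermiterelations} for $\alpha = 2$ and $\alpha = 1$, respectively.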
Furthermore, using the above relations and orthogonality, we can write
\begin{align*}
\|\Psi_\alpha\|^2_{\mathrm{H}^1_\mu(\mathbb{R}^d)} = (\Psi_\alpha,\Psi_\alpha)_{\mathsf{L}^2_\mu}+ (\nabla_v \Psi_\alpha,\nabla_v \Psi_\alpha)_{\mathsf{L}^2_\mu} = 1+\abs{\alpha}.
\end{align*}
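Indeed, the last equality follows from the first relation in \cref{eq:Hermiterelations} and orthonormality:
\begin{align*}
\norm{\nabla_v \Psi_\alpha}^2_{\mathsf{L}^2_\mu} = \sum_{i=1}^d \norm{\partial_{v_i}\Psi_\alpha}^2_{\mathsf{L}^2_\mu} = \sum_{i=1}^d \alpha_i\norm{\Psi_{\alpha-\mathbf{e}_i}}^2_{\mathsf{L}^2_\mu} = \sum_{i=1}^d \alpha_i = \abs{\alpha}.
\end{align*}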
\subsection{Sobolev spaces with respect to Gaussian measure}
For any function $u\in \mathrm{H}^1_\mu(\mathbb{R}^d)$ we can use \cref{lem:simultaneous} to expand it in terms of the Hermite basis \cref{eq:hermite},
\begin{align*}
u = \sum_{\abs{\alpha}\geq 0} c^\alpha(u) \Psi_\alpha, \quad c^\alpha(u) = (u, \Psi_\alpha)_{\mathsf{L}^2_\mu}.
\end{align*}
The expansion converges in $\mathrm{H}^1_\mu(\mathbb{R}^d)$ with
\begin{align*}
\|u \|^2_{\mathrm{H}^1_\mu(\mathbb{R}^d)} = \norm{u}^2_{\mathsf{L}^2_\mu(\mathbb{R}^d)} + \norm{\nabla_v u}^2_{\mathsf{L}^2_\mu(\mathbb{R}^d)} = \sum_{\abs{\alpha}\geq 0} (1+\abs{\alpha})|c^\alpha(u)|^2.
\end{align*}
For $u\in \mathrm{H}^k_\mu(\mathbb{R}^d)$ we also have
\begin{align} \label{eq:fractional_norm}
\|u\|^2_{\mathrm{H}^{k}_\mu(\mathbb{R}^d)} = \sum_{\alpha} (1+\abs{\alpha})^{k} |(u,\Psi_\alpha)_{\mathsf{L}^2_\mu}|^2 .
\end{align}
\section{The associated hyperbolic system}\label{sec:hsys}
In this section we consider the ansatz \cref{eq:ansatz} and derive a hyperbolic system for the coefficients $c_m^\alpha$. Specifically, for $m\geq 1$ fixed, we let
\begin{align*}
c_m^{\alpha-\mathbf{e}_i} := 0\mbox{ if }\alpha_i = 0,\ c_m^{\alpha+\mathbf{e}_i} := 0\mbox{ if }|\alpha+\mathbf{e}_i| > m,
\end{align*}
and applying $\partial_t-\mathcal{L}$ to $u_m$ from \cref{eq:ansatz} we get
\begin{align}\label{eq:um:expansion}
(\partial_t - \mathcal{L} )u_m(t,x,v) =& \sum_{\abs{\alpha}=0}^m \bigl( \partial_t c_m^\alpha(t,x) \Psi_\alpha(v) + \abs{\alpha} c_m^\alpha(t,x) \Psi_\alpha(v)\bigr)\notag\\
&- \sum_{\abs{\alpha}=0}^m \bigl(\sum_{i=1}^d \sqrt{\alpha_i}\partial^*_{x_i} c_m^\alpha(t,x) \Psi_{\alpha-\mathbf{e}_i}(v)\bigr)\notag\\
&+\sum_{\abs{\alpha}=0}^m \bigl(\sum_{i=1}^d\sqrt{\alpha_i+1}\partial_{x_i} c_m^\alpha(t,x) \Psi_{\alpha+\mathbf{e}_i}(v)\bigr).
\end{align}
Expand the right hand side $f$ of \cref{ivp1} in the Hermite basis,
\begin{align*}
f(t,x,v) = \sum_{|\alpha|=0}^\infty f^\alpha(t,x) \Psi_\alpha(v).
\end{align*}
Then, setting the right hand side of \cref{eq:um:expansion} equal to $\sum_{|\alpha|=0}^m f^\alpha \Psi_\alpha$ in the space spanned by $\{\Psi_{\alpha}\}_{|\alpha|= 0}^m$, we get the hyperbolic system
\begin{multline}\label{eq:hyperbolic_intro+}
f^{\alpha}(t,x)= \ \partial_t c_m^\alpha(t,x) + \abs{\alpha} c_m^{\alpha}(t,x) \\
- \sum_{i=1}^d \sqrt{\alpha_i+1} \partial^*_{x_i} c_m^{\alpha+\mathbf{e}_i}(t,x)+ \sum_{i=1}^d\sqrt{\alpha_i}\partial_{x_i} c_m^{\alpha-\mathbf{e}_i}(t,x),
\end{multline}
where $0\leq \abs{\alpha}\leq m$, $f^\alpha(t,x) := (f(t,x,\cdot),\Psi_\alpha)_{\mathsf{L}^2_\mu}$, and $\partial_{x_i}^*$ is defined in \cref{eq:formaldual}.
This is the truncated hyperbolic system that appeared in \cite{brinkmanBrownianMotionField1956,riskenFokkerplanckEquationMethods1996}.
Assuming that we have a solution to \cref{eq:hyperbolic_intro+} and inserting it into \cref{eq:um:expansion}, we get
\begin{align}\label{eq:um:expansion_err}
(\partial_t - \mathcal{L} )u_m(t,x,v) = \sum_{|\alpha| = 0}^m f^\alpha \Psi_\alpha + \sum_{\abs{\alpha} = m} \sum_{i=1}^d \sqrt{\alpha_i+1} \partial_{x_i} c_m^\alpha \Psi_{\alpha+\mathbf{e}_i} .
\end{align}
Thus, in order to show that \cref{eq:ansatz} is a reasonable ansatz, we need to establish existence and regularity for solutions to \cref{eq:hyperbolic_intro+}.
To write \cref{eq:hyperbolic_intro+} in a more compact form, we first order all multi-indices $0 \leq |\alpha| \leq m$ into an array $\{\alpha^i\}$ such that $|\alpha^i| \leq |\alpha^j|$ for all $i \leq j$. Let $\mathbf{c}_m$ denote the column vector $\mathbf{c}_m :=(c_m^{\alpha^1},\ldots,c_m^{\alpha^n})$, where
\begin{align*}
n = \#\lbrace \alpha \textrm{ multi-index s.t. } 0\leq \abs{\alpha}\leq m \rbrace.
\end{align*}
Using this notation we prove the following auxiliary lemma.
\begin{lemma} \label{lem:skew}
The system in \cref{eq:hyperbolic_intro+} can be written as
\begin{align*}
\partial_t \mathbf{c}_m + A \mathbf{c}_m - \sum_{i=1}^d B_i \partial_{x_i}^\ast \mathbf{c}_m + \sum_{i=1}^d B_i^T \partial_{x_i} \mathbf{c}_m = \mathbf{f}_m,
\end{align*}
where $\mathbf{f}_m = (f^{\alpha^1},\dots,f^{\alpha^n})$, $f^{\alpha^j} := (f,\Psi_{\alpha^j})_{\mathsf{L}^2_\mu}$, $A = \mathrm{diag}(\{|\alpha^j|\}_{j=1}^n)$, and each $B_i$ is a constant $n \times n$ upper triangular matrix with zeros on the diagonal.
\end{lemma}
\begin{proof}
Fix $i \in \{1,\ldots,d\}$. By the definition of $B_i$ we have, for $1 \leq j \leq n$,
\begin{align*}
\{B_i \partial_{x_i}^\ast \mathbf{c}_m\}_{j} = \sqrt{\alpha_i^j+1}\ \partial^*_{x_i} c_m^{\alpha^j+\mathbf{e}_i},
\end{align*}
if $\abs{\alpha^j} < m$, and otherwise $\{B_i \partial_{x_i}^\ast \mathbf{c}_m\}_{j}=0$. Assume that $\abs{\alpha^j} < m$ and let $j < l \leq n$ be the index such that $\alpha^j+\mathbf{e}_i = \alpha^l$. Then the above reads $\{B_i\}_{jl} = \sqrt{\alpha_i^l}$. Thus $B_i$ is upper triangular with zeros on the diagonal. Similarly, one easily checks that $\{B_i^T \partial_{x_i} \mathbf{c}_m\}_j = \sqrt{\alpha^j_i}\partial_{x_i} c_m^{\alpha^j-\mathbf{e}_i}$,
where $B_i^T$ is the transpose of $B_i$.
\end{proof}
Using that $\partial_{x_i}^\ast$ is the adjoint of $\partial_{x_i}$ in $\mathsf{L}^2_\eta(\T^d)$ we see that
\begin{align*}
(B_i \partial_{x_i}^\ast \mathbf{c}_m, \mathbf{c}_m )_{\mathsf{L}^2_\eta} = (B_i \mathbf{c}_m, \partial_{x_i} \mathbf{c}_m )_{\mathsf{L}^2_\eta} = (B_i^T \partial_{x_i} \mathbf{c}_m, \mathbf{c}_m)_{\mathsf{L}^2_\eta}.
\end{align*}
Therefore
\begin{align}\label{eq:skew}
\bigl ( A \mathbf{c}_m - \sum_{i=1}^d B_i \partial_{x_i}^\ast \mathbf{c}_m + \sum_{i=1}^d B_i^T \partial_{x_i} \mathbf{c}_m, \mathbf{c}_m \bigr )_{\mathsf{L}^2_\eta} = \bigl ( A \mathbf{c}_m, \mathbf{c}_m \bigr )_{\mathsf{L}^2_\eta}.
\end{align}
\begin{remark}
Note that the matrix $A$ in \cref{lem:skew} arises from the spectral representation of the Ornstein--Uhlenbeck operator \cref{eq:ornuhl}, while $B_i$ and $B_i^T$ correspond to $\nabla_v$ and $\nabla_v^*$.
It can therefore be seen, see \cref{lem:Em} below, that $B_i\partial_{x_i}^\ast \mathbf{c}_m$ and $B_i^T \partial_{x_i} \mathbf{c}_m$ are mixed second order terms, with one derivative with respect to $v$ and one derivative with respect to $x_i$.
\end{remark}
\begin{remark}
In the case $d=1$ the system in \cref{eq:hyperbolic_intro+} reads
\begin{align*}
\begin{array}{lllll}
\partial_t c_m^0 & & - \sqrt{1} \partial_x^* c_m^1 & & = f^0, \\
\partial_t c_m^1 & + c_m^1 & - \sqrt{2} \partial_x^* c_m^2 & + \sqrt{1} \partial_x c_m^0 & = f^1,\\
\vdots & \vdots & \vdots & \vdots & \vdots \\
\partial_t c_m^{m-1} & + (m-1) c_m^{m-1} & - \sqrt{m}\partial_x^* c_m^m & + \sqrt{m-1} \partial_x c_m^{m-2} & = f^{m-1},\\
\partial_t c_m^m & + m c_m^m & & + \sqrt{m} \partial_x c_m^{m-1} & = f^m,
\end{array}
\end{align*}
and the matrices $A$ and $B$ appearing in \cref{lem:skew} are given as
\begin{align*}
A = \begin{pmatrix}
0 & & & & \\
& {1} & & & \\
& & {2} & & \\
& & & \ddots & \\
& & & & {m}
\end{pmatrix},
\quad
B = \begin{pmatrix}
0 & \sqrt{1} & 0 & & \\
& 0 & \sqrt{2} & & \\
& & 0 & \ddots & \\
& & & \ddots & \sqrt{m} \\
& & & & 0
\end{pmatrix}.
\end{align*}
\end{remark}
\subsection{Well-posedness for the hyperbolic system}
From \cref{lem:skew} we see that if we define
\begin{align*}
H := A - \sum_{i=1}^d B_i \partial_{x_i}^\ast + \sum_{i=1}^d B_i^T \partial_{x_i},
\end{align*}
then the initial value problem corresponding to \cref{eq:hyperbolic_intro+} is
\begin{align}\label{ivp2}
\begin{cases}
\partial_t \mathbf{c}_m + H \mathbf{c}_m = \mathbf{f}_m & \text{ in } (0,T)\times\T^d,\\
\hphantom{\partial_t \mathbf{c}_m + H } \mathbf{c}_m = \mathbf{g}_m & \text{ on } \{t=0\}\times\T^d,
\end{cases}
\end{align}
where
\begin{align*}
\mbox{$\mathbf{g}_m = (g_m^1,\dots, g_m^n)$ with $g_m^j = (g,\Psi_{\alpha^j})_{\mathsf{L}^2_\mu}$,}
\end{align*}
and
\begin{align*}
\mbox{$\mathbf{f}_m = (f^{\alpha^1}, \dots,f^{\alpha^n})$ with $f^{\alpha^j} = (f, \Psi_{\alpha^j})_{\mathsf{L}^2_\mu}$ for all $1\leqslant j\leqslant n$}.
\end{align*}
\begin{definition} \label{def:brinkman_weak} We say that a vector-valued function
$\mathbf{c}_m \in \mathsf{L}^2((0,T);\mathrm{H}^1_\eta(\T^d;\mathbb{R}^n))$ is a weak solution to \cref{ivp2} if $\partial_t \mathbf{c}_m \in \mathsf{L}^2((0,T);\mathrm{H}^1_\eta(\T^d;\mathbb{R}^n))$,
$\mathbf{c}_m(0) = \mathbf{g}_m$, and for a.e.\ $0 \leq t \leq T$ and all test functions
$\varphi \in \mathrm{H}^1_\eta(\T^d;\mathbb{R}^n)$ it holds that
\begin{align*}
(\partial_t \mathbf{c}_m(t,\cdot) + H \mathbf{c}_m(t,\cdot) - \mathbf{f}_m(t,\cdot), \varphi(\cdot))_{\mathsf{L}^2_\eta} = 0.
\end{align*}
\end{definition}
To prove existence of weak solutions to \cref{ivp2} we will use the method of vanishing viscosity, i.e.~we consider, for $\varepsilon>0$ small, the problem
\begin{align}\label{ivp3}
\begin{cases}
\partial_t \mathbf{c}^\varepsilon_m + \varepsilon \nabla_x^\ast \nabla_x \mathbf{c}_m^\varepsilon + H \mathbf{c}_m^\varepsilon= \mathbf{f}^\varepsilon_m &\text{ in } (0,T)\times\T^d, \\
\hphantom{\partial_t \mathbf{c}^\varepsilon_m + \varepsilon \nabla_x^\ast \nabla_x \mathbf{c}_m^\varepsilon + H}\mathbf{c}^\varepsilon_m = \mathbf{g}^\varepsilon_m & \text{ on } \{t=0\}\times\T^d,
\end{cases}
\end{align}
where
\begin{align*}
\mbox{$\mathbf{g}^\varepsilon_m := \phi_\varepsilon * \mathbf{g}_m$ and $\mathbf{f}_m^\varepsilon := \phi_\varepsilon * \mathbf{f}_m$},
\end{align*}
where $\phi_\varepsilon(x) = \varepsilon^{-d} \phi(x/\varepsilon)$ and $\phi$ is a standard mollifier on $\T^d$. Note that the convolution is applied componentwise to the vector-valued functions.
\begin{proposition}\label{thm:hsys:approx}
Let $k\geq 0$ be a fixed integer and assume that
\begin{align}\label{eq:assumphsys}
U(x)\in C^\infty(\T^d),\ \mathbf{g}_m\in\mathrm{H}^{k+1}_\eta(\T^d),\ \mathbf{f}_m\in \mathrm{H}^{k+1}_{t,x}((0,T)\times\T^d).
\end{align}
Then there exists, for each $\varepsilon>0$, a unique weak solution $\mathbf{c}_m^\varepsilon$ to \cref{ivp3} such that
\begin{align}\label{regga}
\partial_t^{i} \mathbf{c}_m^\varepsilon \in \mathsf{L}^2((0,T);\mathrm{H}^{2k+4-2i}_\eta(\T^d)), \text{ for every } i=0,\dots,k+2.
\end{align}
\end{proposition}
\begin{proof} Note that the measure $\, \mathrm{d} \eta = e^{-U(x)} \, \mathrm{d} x$ is comparable to the Lebesgue measure on $\T^d$, and therefore we can and will in the following use function spaces defined with respect to $\, \mathrm{d} x$ instead of $\, \mathrm{d} \eta$. Introducing
\begin{align*}
X := \mathsf{L}^\infty (0,T; \mathrm{H}^1(\T^d; \mathbb{R}^{n})),
\end{align*}
we consider, for given $\mathbf{w}\in X$, the linear system
\begin{align*}
\begin{cases}
\partial_t \mathbf{c}_m - \varepsilon \Delta_{x} \mathbf{c}_m = \mathbf{f}_m^\varepsilon- \varepsilon \nabla_x U\cdot\nabla_x\mathbf{w}- H \mathbf{w} & \text{ in } (0,T)\times\T^d, \\
\hphantom{\partial_t \mathbf{c}_m - \varepsilon \Delta_{x}} \mathbf{c}_m = \mathbf{g}_m^\varepsilon& \text{ on } \{t=0\}\times\T^d.
\end{cases}
\end{align*}
Given our information on the right hand side of this system, and on $\mathbf{g}_m^\varepsilon$, we can conclude that this system has a unique solution $\mathbf{c}_m$ such that $\mathbf{c}_m\in \mathsf{L}^2((0,T);\allowbreak\mathrm{H}^2(\T^d;\allowbreak\mathbb{R}^n))$ and $\partial_t \mathbf{c}_m\in \mathsf{L}^2((0,T);\allowbreak\mathsf{L}^2(\T^d;\allowbreak\mathbb{R}^n))$. Therefore, it follows from a standard fixed point argument, see \cite[Sec 7.3.2.b]{evansPartialDifferentialEquations2010a}, that there exists a unique solution $\mathbf{c}_m$ to \cref{ivp3} satisfying $\mathbf{c}_m\in \mathsf{L}^2((0,T); \mathrm{H}^1(\T^d;\mathbb{R}^n))$ and $\partial_t \mathbf{c}_m \in \mathsf{L}^2((0,T);\mathrm H^{-1}(\T^d;\mathbb{R}^n))$.
The regularity result stated in \cref{regga} is a consequence of parabolic regularity theory as we have smooth initial data and regular coefficients, see for instance \cite[Sec 7.1.3]{evansPartialDifferentialEquations2010a}.
\end{proof}
Having established the existence of solutions to \cref{ivp3}, we next prove uniform energy estimates w.r.t.~$\varepsilon$ in order to complete the method of vanishing viscosity.
\begin{proposition}\label{thm:energy}
Let $k\geq 0$ be a fixed integer, assume \cref{eq:assumphsys}, and let $\mathbf{c}_m^\varepsilon$ be the unique weak solution to \cref{ivp3} obtained in \cref{thm:hsys:approx}. Then there exists a constant $C = C(d,T,m,U,k)>0$ such that
\begin{align}\label{thm:energy:estimate}
\norm{\mathbf{c}_m^\varepsilon}^2_{\mathrm H^{k+1}_{t,x}((0,T)\times \T^d)} \leq C \bigl(\norm{\mathbf{g}_m}^2_{\mathrm H^{k+1}_\eta(\T^d)} + \norm{\mathbf{f}_m}^2_{\mathrm H^{k+1}_{t,x}((0,T)\times \T^d)}\bigr).
\end{align}
\end{proposition}
\begin{proof}
We will prove the proposition by induction with respect to $k$. Throughout this proof we use the convention that $a \lesssim b$ means that $a\leq Cb$ for some positive constant $C$ which may depend on $(d,T,m,U,k)$ but is independent of $\varepsilon$. In the following we will use that the commutator $[\partial_{x_j}, \partial_{x_i}^\ast]$ equals multiplication by $\partial_{x_j}\partial_{x_i} U$.
\noindent \emph{Step 1: }
In this step we establish \cref{thm:energy:estimate} in the case $k=0$, assuming $\mathbf{g}_m\in \mathrm H^1_\eta(\T^d)$ and $\mathbf{f}_m\in \mathrm H^1_{t,x}((0,T)\times\T^d)$. By \cref{eq:skew} and the self-adjointness of $\nabla_x^*\nabla_x$ in $\mathsf{L}^2_\eta(\T^d)$,
\begin{align}\label{eq:hsys:main}
\begin{split}
\partial_t \frac{1}{2} \norm{\mathbf{c}_m^\varepsilon}^2_{\mathsf{L}^2_\eta} =& - \norm{A^{1/2}\mathbf{c}_{m}^\varepsilon}^2_{\mathsf{L}^2_\eta} - \varepsilon \norm{\nabla_x \mathbf{c}_m^\varepsilon}^2_{\mathsf{L}^2_\eta} + (\mathbf{c}_m^\varepsilon, \mathbf{f}_m^\varepsilon)_{\mathsf{L}^2_\eta} \\
\leq & (\mathbf{c}_m^\varepsilon, \mathbf{f}_m^\varepsilon)_{\mathsf{L}^2_\eta} \leq \norm{\mathbf{c}_m^\varepsilon}^2_{\mathsf{L}^2_\eta} + \norm{\mathbf{f}_m^\varepsilon}^2_{\mathsf{L}^2_\eta}.
\end{split}
\end{align}
Applying Grönwall's inequality we get
\begin{align}\label{ineq:hsys:main}
\max_{0\leq t\leq T} \norm{\mathbf{c}_m^\varepsilon}^2_{\mathsf{L}^2_\eta} \lesssim \norm{\mathbf{g}_m^\varepsilon}^2_{\mathsf{L}^2_\eta} + \norm{\mathbf{f}_m^\varepsilon}^2_{\mathsf{L}^2((0,T);\mathsf{L}^2_\eta)}.
\end{align}
Using \cref{thm:hsys:approx} we see that we can now differentiate \cref{ivp3} w.r.t.~$x$. Given a multi-index $\theta = (\theta_1, \dots,\theta_d)$, we set $\nabla_x^\theta := \partial_{x_{1}}^{\theta_1}\cdots\partial_{x_{d}}^{\theta_d}$, and we let $\mathbf{w}^{(\theta)} :=\nabla_x^\theta \mathbf{c}_m^\varepsilon$ and similarly for $(\mathbf{g}_m^\varepsilon)^{(\theta)}$. We also let
\begin{align*}
\norm{\nabla_x^l \mathbf{c}_m^\varepsilon}^2_{\mathsf{L}^2_\eta}:= \sum_{\abs{\theta}=l} \norm{\mathbf{w}^{(\theta)}}^2_{\mathsf{L}^2_\eta},
\end{align*}
for all integers $l\geq 0$. Using this notation we have, in particular, that $\mathbf{w}^{(\mathbf{e}_j)}=\partial_{x_j}\mathbf{c}_m^\varepsilon$, and
differentiating the equation in \cref{ivp3} w.r.t.~$x_j$, for $1\leq j\leq d$, we deduce
\begin{align} \label{eq:hsys:dx}
\begin{cases}
\partial_t \mathbf{w}^{(\mathbf{e}_j)} + H \mathbf{w}^{(\mathbf{e}_j)} + \varepsilon \nabla_{x}^\ast \nabla_{x} \mathbf{w}^{(\mathbf{e}_j)}
= \widetilde{\mathbf{f}}^{(\mathbf{e}_j)} & \text{ in } (0,T)\times \T^d ,\\
\hphantom{\partial_t \mathbf{w}^{(\mathbf{e}_j)} + H \mathbf{w}^{(\mathbf{e}_j)} + \varepsilon \nabla_{x}^\ast \nabla_{x}} \mathbf{w}^{(\mathbf{e}_j)} = \partial_{x_j} \mathbf{g}_m^\varepsilon &\text{ on } \{t=0\}\times\T^d.
\end{cases}
\end{align}
Here
\begin{align*}
\widetilde{\mathbf{f}}^{(\mathbf{e}_j)} = \sum_{i=1}^d [\partial_{x_j}, \partial_{x_i}^\ast] \bigl (B_i \mathbf{c}_m^\varepsilon - \varepsilon \mathbf{w}^{(\mathbf{e}_i)} \bigr ) + \partial_{x_j} \mathbf{f}_m^\varepsilon,
\end{align*}
and it follows that
\begin{align*}
\norm{{\widetilde{\mathbf{f}}}^{(\mathbf{e}_j)}}_{\mathsf{L}^2_\eta}^2 \lesssim \norm{\mathbf{c}_m^\varepsilon}_{\mathsf{L}^2_\eta}^2 + \norm{\nabla_x \mathbf{c}_m^\varepsilon}_{\mathsf{L}^2_\eta}^2 + \norm{\partial_{x_j}\mathbf{f}_m^\varepsilon}_{\mathsf{L}^2_\eta}^2.
\end{align*}
Arguing again as in \cref{eq:hsys:main} and applying Gr\"onwall's inequality we obtain
\begin{align}\label{ineq:hsys:dx}
\max_{0\leq t\leq T} \norm{\nabla_x \mathbf{c}_m^\varepsilon}^2_{\mathsf{L}^2_\eta} \lesssim \norm{\mathbf{g}_m^\varepsilon}^2_{\mathrm{H}^1_\eta} + \norm{\mathbf{f}_m^\varepsilon}^2_{\mathsf{L}^2((0,T);\mathrm{H}^1_\eta)}.
\end{align}
Next, letting $\hat{\mathbf{w}} = \partial_t \mathbf{c}_m^\varepsilon$ we see that
\begin{align}\label{eq:hsys:dt}
\begin{cases}
\partial_t \hat{\mathbf{w}} + H \hat{\mathbf{w}} + \varepsilon \nabla_{x}^\ast \nabla_{x} \hat{\mathbf{w}}
= \partial_t \mathbf{f}_m^\varepsilon & \text{ in } (0,T)\times \T^d ,\\
\hat{\mathbf{w}} = \mathbf{f}_m^\varepsilon(0) - H \mathbf{g}_m^\varepsilon - \varepsilon\nabla_x^\ast \nabla_x \mathbf{g}_m^\varepsilon & \text{ on } \{t=0\}\times\T^d.
\end{cases}
\end{cases}
\end{align}
Arguing again as in \cref{eq:hsys:main}, we obtain
\begin{align}\label{ineq:hsys:dt:0}
\begin{split}
\norm{\hat{\mathbf{w}}}^2_{\mathsf{L}^2_\eta} &\lesssim \norm{\hat{\mathbf{w}}(0)}^2_{\mathsf{L}^2_\eta} + \norm{\partial_t \mathbf{f}_m^\varepsilon}^2_{\mathsf{L}^2((0,T);\mathsf{L}^2_\eta)}\\
& \lesssim \norm{\mathbf{f}_m^\varepsilon(0)}^2_{\mathsf{L}^2_\eta} +\norm{ H \mathbf{g}_m^\varepsilon}^2_{\mathsf{L}^2_\eta} + \varepsilon^2 \norm{\nabla_x^\ast \nabla_x \mathbf{g}_m^\varepsilon}^2_{\mathsf{L}^2_\eta} + \norm{\partial_t \mathbf{f}_m^\varepsilon}^2_{\mathsf{L}^2((0,T);\mathsf{L}^2_\eta)}.
\end{split}
\end{split}
\end{align}
Since $\mathbf{g}_m^\varepsilon = \phi_\varepsilon * \mathbf{g}_m$ and $\T^d$ is bounded, it holds that
\begin{align}\label{ineq:hsys:dt:1}
\norm{\nabla_x^\ast \nabla_x \mathbf{g}_m^\varepsilon}^2_{\mathsf{L}^2_\eta} \leq \frac{C}{\varepsilon^2} \norm{ \nabla_x \mathbf{g}_m}^2_{\mathsf{L}^2_\eta}.
\end{align}
Also, by the fundamental theorem of calculus and Jensen's inequality, we have
\begin{align}\label{ineq:hsys:dt:2}
\norm{\mathbf{f}_m^\varepsilon(0)}^2_{\mathsf{L}^2_\eta} \lesssim \norm{\mathbf{f}_m^\varepsilon}^2_{\mathsf{L}^2((0,T);\mathsf{L}^2_\eta)} + \norm{\partialrtial_t \mathbf{f}_m^\varepsilon}^2_{\mathsf{L}^2((0,T);\mathsf{L}^2_\eta)}.
\end{align}
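For the reader's convenience we spell out this step: writing $\mathbf{f}_m^\varepsilon(0) = \mathbf{f}_m^\varepsilon(t) - \int_0^t \partial_s \mathbf{f}_m^\varepsilon(s)\,\mathrm{d}s$ and averaging over $t \in (0,T)$, Jensen's inequality gives
\begin{align*}
\norm{\mathbf{f}_m^\varepsilon(0)}^2_{\mathsf{L}^2_\eta} \leq \frac{2}{T}\norm{\mathbf{f}_m^\varepsilon}^2_{\mathsf{L}^2((0,T);\mathsf{L}^2_\eta)} + 2T\norm{\partial_t \mathbf{f}_m^\varepsilon}^2_{\mathsf{L}^2((0,T);\mathsf{L}^2_\eta)},
\end{align*}
which is \cref{ineq:hsys:dt:2} with an implicit constant depending only on $T$.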
Furthermore, by the definition of $H$,
\begin{align}\label{ineq:hsys:dt:3}
\norm{ H \mathbf{g}_m^\varepsilon}^2_{\mathsf{L}^2_\eta} \lesssim \norm{\mathbf{g}_m^\varepsilon}^2_{\mathrm{H}^1_\eta}.
\end{align}
Combining \cref{ineq:hsys:dt:0,ineq:hsys:dt:1,ineq:hsys:dt:2,ineq:hsys:dt:3} we see that
\begin{align}\label{ineq:hsys:dt}
\norm{\partial_t \mathbf{c}_m^\varepsilon} ^2_{\mathsf{L}^2_\eta} \lesssim \norm{\mathbf{g}_m^\varepsilon}^2_{\mathrm{H}^1_\eta} + \norm{\mathbf{f}_m^\varepsilon}^2_{\mathsf{L}^2((0,T);\mathsf{L}^2_\eta)} + \norm{\partial_t \mathbf{f}_m^\varepsilon}^2_{\mathsf{L}^2((0,T);\mathsf{L}^2_\eta)}.
\end{align}
Finally, we note that
\begin{align*}
\norm{\mathbf{g}_m^\varepsilon}^2_{\mathrm{H}^1_\eta} \leq \norm{\mathbf{g}_m}^2_{\mathrm{H}^1_\eta} \textrm{ and } \norm{\mathbf{f}_m^\varepsilon}^2_{\mathrm{H}^1_{t,x}} \leq \norm{\mathbf{f}_m}^2_{\mathrm{H}^1_{t,x}}.
\end{align*}
These two inequalities, combined with \cref{ineq:hsys:main,ineq:hsys:dx,ineq:hsys:dt}, complete the proof of \cref{thm:energy:estimate} in the case $k=0$.
\noindent \emph{Step 2: }
In this step we assume that \cref{thm:energy:estimate} holds for some $k\geq 0$ and we want to prove
the estimate for $k+1$ assuming that $\mathbf{g}_m\in \mathrm{H}^{k+2}_{\eta}(\T^d)$ and $\mathbf{f}_m\in\mathrm{H}^{k+2}_{t,x}((0,T)\times \T^d)$. As $\mathrm{H}^{k+2}_{\eta}(\T^d)\subset \mathrm{H}^{k+1}_{\eta}(\T^d)$ and $\mathrm{H}^{k+2}_{t,x}((0,T)\times \T^d)\subset\mathrm{H}^{k+1}_{t,x}((0,T)\times \T^d)$ it follows by the induction hypothesis that
\begin{align}\label{ineq:hsys:induction:main}
\norm{\mathbf{c}_m^\varepsilon}^2_{\mathrm H^{k+1}_{t,x}((0,T)\times \T^d)} \leq C \bigl(\norm{\mathbf{g}_m}^2_{\mathrm H^{k+1}_\eta} + \norm{\mathbf{f}_m}^2_{\mathrm H^{k+1}_{t,x}((0,T)\times \T^d)}\bigr).
\end{align}
In a way similar to \cref{eq:hsys:dt}, just noting that $\partial_t \mathbf{f}^\varepsilon_m$ and $\hat{\mathbf{w}}(0)$ now satisfy the assumption \cref{eq:assumphsys} for $k$, it follows that
\begin{align*}
\sum_{i+j=0}^{k+1} \norm{{\partial_t^i \nabla_x^j \hat{\mathbf{w}}}}^2_{\mathsf{L}^2((0,T);\mathsf{L}^2_\eta)} \lesssim \norm{\hat{\mathbf{w}}(0)}^2_{\mathrm{H}^{k+1}_\eta}+ \sum_{i+j=0}^{k+1} \norm{\partial_t^{i+1} \nabla_x^{j} \mathbf{f}_m}^2_{\mathsf{L}^2((0,T);\mathsf{L}^2_\eta)},
\end{align*}
i.e.
\begin{align}\label{ineq:hsys:induction:dt}
\sum_{i=1}^{k+2} \norm{\partial_t^i \mathbf{c}_m^\varepsilon}^2_{\mathsf{L}^2((0,T);\mathrm{H}^{k+2-i}_\eta)} \lesssim \norm{\mathbf{g}_m}^2_{\mathrm{H}^{k+2}_\eta}
+ \sum_{i=0}^{k+2} \norm{\partial_t^i \mathbf{f}_m}^2_{\mathsf{L}^2((0,T);\mathrm{H}^{k+2-i}_\eta)}.
\end{align}
It remains to control $\norm{\nabla_x^{k+2} \mathbf{c}_m^\varepsilon}_{\mathsf{L}^2_\eta}$. Recall the notations in \cref{eq:hsys:dx} and assume that $\theta$ is a multi-index such that $\abs{\theta} = k+1$. Then, for any $\theta+\mathbf{e}_j$ with $1\leq j\leq d$ we have
\begin{align*}
\begin{cases}
\partial_t \mathbf{w}^{(\theta+\mathbf{e}_j)}+ H \mathbf{w}^{(\theta+\mathbf{e}_j)} + \varepsilon \nabla_{x}^\ast \nabla_{x} \mathbf{w}^{(\theta+\mathbf{e}_j)}
= \widetilde{\mathbf{f}}^{(\theta+\mathbf{e}_j)} & \text{ in } (0,T)\times \T^d,\\
\hphantom{\partial_t \mathbf{w}^{(\theta+\mathbf{e}_j)}+ H \mathbf{w}^{(\theta+\mathbf{e}_j)} + \varepsilon \nabla_{x}^\ast \nabla_{x}} \mathbf{w}^{(\theta+\mathbf{e}_j)} = {(\mathbf{g}_m^\varepsilon)}^{(\theta+\mathbf{e}_j)} & \text{ on } \{t=0\}\times\T^d,
\end{cases}
\end{align*}
where the functions $\{\widetilde{\mathbf{f}}^{(\theta+\mathbf{e}_j)}\}$ are defined inductively as
\begin{align*}
\widetilde{\mathbf{f}}^{(\theta+\mathbf{e}_j)}= \sum_{i=1}^d [\partial_{x_j}, \partial_{x_i}^*] \bigl( B_i \mathbf{w}^{(\theta)} - \varepsilon \mathbf{w}^{(\theta+\mathbf{e}_i)} \bigr) + \partial_{x_j} \widetilde{\mathbf{f}}^{(\theta)},
\end{align*}
with $\widetilde{\mathbf{f}}^{(0)} = \mathbf{f}_m^\varepsilon$. Proceeding similarly to the deduction in \cref{ineq:hsys:dx} we can conclude
\begin{align*}
\norm{\nabla_x^{k+2} \mathbf{c}_m^\varepsilon}^2_{\mathsf{L}^2_\eta} \lesssim \norm{\mathbf{g}_m}^2_{\mathrm{H}^{k+2}_\eta(\T^d)} + \norm{\mathbf{f}_m}^2_{\mathsf{L}^2((0,T); \mathrm{H}^{k+2}_\eta)}.
\end{align*}
This inequality, together with \cref{ineq:hsys:induction:main,ineq:hsys:induction:dt}, completes the induction and hence the proof of the proposition.
\end{proof}
\begin{proposition} \label{prop:existence}
Let $k\geq 0$ be a fixed integer and assume \cref{eq:assumphsys}. Then there exists a unique weak solution in the sense of \cref{def:brinkman_weak}, $\mathbf{c}_m \in \mathrm{H}^{k+1}_{t,x}((0,T)\times \T^d)$, to the initial value problem \cref{ivp2}. Furthermore, there exists a constant $C = C(d,T,m,U,k)>0$ such that
\begin{align*}
\norm{\mathbf{c}_m}^2_{\mathrm H^{k+1}_{t,x}((0,T)\times \T^d)} \leq C \bigl(\norm{\mathbf{g}_m}^2_{\mathrm H^{k+1}_\eta(\T^d)} + \norm{\mathbf{f}_m}^2_{\mathrm H^{k+1}_{t,x}((0,T)\times \T^d)}\bigr).
\end{align*}
\end{proposition}
\begin{proof}
Using the uniform in $\varepsilon$ estimates in \cref{thm:energy}, we can argue as in \cite[Chapter 7.3]{evansPartialDifferentialEquations2010a} to obtain the existence and uniqueness of a weak solution. The quantitative estimate follows readily.
\end{proof}
\section{Uniform energy estimates for the Galerkin approximations}\label{sec:uniformestimate}
In this section we prove uniform energy estimates for the Galerkin approximations $\{u_m\}$ stated in \cref{eq:ansatz}, assuming that $\mathbf{c}_m$ solves \cref{ivp2}. Note that by the orthogonality of the Hermite functions we have
\begin{align*}
\norm{u_m}^2_{\mathsf{L}^2((0,T);\mathrm{H}^k_\eta(\mathsf{L}^2_\mu))} = \sum_{\abs{\alpha}=1}^m \norm{c_m^\alpha}_{\mathsf{L}^2((0,T);\mathrm{H}^k_\eta)}^2.
\end{align*}
Using this observation and \cref{prop:existence} we can conclude the validity of the following lemma.
\begin{lemma}
Let $k\geq 0$ be a fixed integer and assume \cref{eq:assumphsys}. Let $u_m$ be of the form \cref{eq:ansatz} with $\mathbf{c}_m$ solving \cref{ivp2}.
Then
\begin{align*}
\partial_t^i u_m \in\mathsf{L}^2\bigl(0,T; \mathrm{H}^{k+1-i}_\eta(\mathsf{L}^2_\mu) \bigr),\quad \textrm{ for all } i=0,\dots, k+1.
\end{align*}
Furthermore, $v\mapsto \partial_t^i\nabla_x^j u_m(\cdot, \cdot, v)$ is smooth for all non-negative integers $i$, $j$ such that $i+j\leq k+1$.
\end{lemma}
Thus the derivation \cref{eq:um:expansion_err} is now established, i.e.
\begin{align}\label{eq:Lum}
(\partial_t - \mathcal{L}) u_m = f_m + E_m,
\end{align}
where
\begin{align}\label{eq:Lum+}
f_m = \sum_{\abs{\alpha}=0}^m f^\alpha \Psi_\alpha \text{ and } {E}_m = \sum_{\abs{\alpha} = m} \sum_{i=1}^d \sqrt{\alpha_i+1} \partial_{x_i} c_m^\alpha \Psi_{\alpha+\mathbf{e}_i} .
\end{align}
As described in the introduction, we will prove the following conclusions:
\begin{enumerate}
\item \label{item1} $u_m$ is uniformly bounded in $\mathsf{L}^2((0,T);\mathsf{L}^2_\eta(\T^d; \mathrm{H}^1_\mu))$,
\item \label{item2} $\partial_t u_m + v \cdot \nabla_x u_m$ is uniformly bounded in $\mathsf{L}^2((0,T);\mathsf{L}^2_\eta(\T^d; \mathsf{L}^2_\mu))$,
\item \label{item3} ${E}_m \to 0$ in $\mathsf{L}^2((0,T);\mathsf{L}^2_\eta(\T^d; \mathsf{L}^2_\mu))$.
\end{enumerate}
The next lemma establishes what norms we need to control in order to conclude (\cref{item2}) and (\cref{item3}).
\begin{lemma} \label{lem:Em}
\hphantom{1em}
\begin{enumerate}
\item Let $u_m$ be of the form \cref{eq:ansatz}. Then
\begin{align*}
\norm{v\cdot \nabla_x u_m(t,\cdot,\cdot)}_{\mathsf{L}^2_\eta(\mathsf{L}^2_\mu)} \leq C\norm{\nabla_x u_m(t,\cdot,\cdot)}_{\mathsf{L}^2_\eta(\mathrm{H}^1_\mu)},
\end{align*}
for a constant $C$ which depends only on the dimension $d$.
\item
Let ${E}_m$ be as defined in \cref{eq:Lum+}. Then for $k \in \N_+$, we have
\begin{align}\label{eq:um:Em}
\norm{{E}_m(t,\cdot,\cdot)}^2_{\mathsf{L}^2_\eta(\mathsf{L}^2_\mu)} \leq \frac{d}{(1+m)^k} \norm{\nabla_x u_m(t,\cdot,\cdot)}^2_{\mathsf{L}^2_\eta(\mathrm{H}^{k+1}_\mu)}.
\end{align}
\end{enumerate}
\end{lemma}
\begin{proof}
Using the relation $v_i \Psi_\alpha = (\sqrt{\alpha_i+1}\Psi_{\alpha+\mathbf{e}_i}+\sqrt{\alpha_i}\Psi_{\alpha-\mathbf{e}_i})$, see \cref{eq:Hermiterelations}, we can conclude that there is a constant $C=C(d)$ such that
\begin{align*}
\|v \cdot \nabla_x u_m(t,\cdot,\cdot)\|_{\mathsf{L}^2_\eta(\mathsf{L}^2_\mu)}^2
=&
\Big\|\sum_{\abs{\alpha}=0}^m \sum_{i=1}^d \partial_{x_i} c_m^\alpha \Big( \sqrt{\alpha_i+1} \Psi_{\alpha+\mathbf{e}_i} + \sqrt{\alpha_i}\Psi_{\alpha-\mathbf{e}_i} \Big) \Big\|_{\mathsf{L}^2_\eta(\mathsf{L}^2_\mu)}^2 \\
\leq&
C \Big\|\sum_{\abs{\alpha}=0}^m \sum_{i=1}^d \partial_{x_i} c_m^\alpha \sqrt{\alpha_i+1} \Psi_{\alpha+\mathbf{e}_i}\Big\|^2_{\mathsf{L}^2_\eta(\mathsf{L}^2_\mu)} \\
&+
C\Big\|\sum_{\abs{\alpha}=0}^m \sum_{i=1}^d \partial_{x_i} c_m^\alpha \sqrt{\alpha_i}\Psi_{\alpha-\mathbf{e}_i} \Big\|_{\mathsf{L}^2_\eta(\mathsf{L}^2_\mu)}^2
\\
\leq& C \sum_{\abs{\alpha}=0}^m (1+\abs{\alpha})\| \nabla_{x} c_m^\alpha\|_{\mathsf{L}^2_\eta}^2 \leq C\norm{\nabla_x u_m(t,\cdot,\cdot) }^2_{\mathsf{L}^2_\eta(\mathrm{H}^1_\mu)} .
\end{align*}
This proves the first statement in the lemma. To prove the second statement, using the orthogonality of the $\Psi_\alpha$ we have
\begin{align*}
\|{E}_m(t,\cdot,\cdot)\|_{\mathsf{L}^2_\eta(\mathsf{L}^2_\mu)}^2 = \sum_{\abs{\alpha} = m}(d+m) \| \nabla_{x} c_m^\alpha(t,\cdot)\|_{\mathsf{L}^2_\eta}^2 \leq d \sum_{\abs{\alpha}=m} (1+m)\norm{\nabla_x c_m^\alpha(t,\cdot)}^2_{\mathsf{L}^2_\eta}.
\end{align*}
We then observe that by definition \cref{eq:fractional_norm},
\begin{align*}
\sum_{\abs{\alpha}=m} (1+m) \norm{\nabla_x c_m^\alpha(t,\cdot)}^2_{\mathsf{L}^2_\eta}
&=
\frac{1}{(1+m)^k} \sum_{\abs{\alpha}=m} (1+m)^{k+1} \norm{\nabla_x c_m^\alpha(t,\cdot)}^2_{\mathsf{L}^2_\eta}
\\
&\leq \frac{1}{(1+m)^k}
\norm{\nabla_x u_m(t,\cdot,\cdot)}^2_{\mathsf{L}^2_\eta(\mathrm{H}^{k+1}_\mu)}.
\end{align*}
Combining the two inequalities above we get the inequality in \cref{eq:um:Em}.
\end{proof}
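For later use we record the spectral characterization behind these estimates: in view of \cref{eq:fractional_norm}, for $s\geq 0$,
\begin{align*}
\norm{u_m(t,\cdot,\cdot)}^2_{\mathsf{L}^2_\eta(\mathrm{H}^{2s}_\mu)} = \sum_{\abs{\alpha}\leq m} (1+\abs{\alpha})^{2s} \norm{c_m^\alpha(t,\cdot)}^2_{\mathsf{L}^2_\eta},
\end{align*}
so bounds on regularity in $v$ are equivalent to weighted decay of the Hermite coefficients w.r.t.~$\abs{\alpha}$.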
To control the norms appearing on the right-hand side of the inequalities in \cref{lem:Em}, we need to prove regularity in $v$ of $u_m$.
Previously we proved regularity in the $x$ variable using a standard bootstrap technique; we now continue by proving regularity in the velocity variable, phrased in our spectral space.
Recall from \cref{eq:fractional_norm} that regularity in $v$ corresponds to the decay of $\norm{c_m^\alpha}_{\mathsf{L}^2_\eta}$ w.r.t.~$\abs{\alpha}$.
Furthermore, differentiation w.r.t.~$v$ can, in the spectral space, essentially be written as multiplication by powers of the matrix $A$.
When deriving the energy estimates for $(I+A)^s \mathbf{c}_m$, the cancellation seen in \cref{eq:skew} will be destroyed; specifically, the commutator $[B_i,(I+A)^s] \neq 0$ when $s > 0$. However, as we will see in the proof, the remainder term will be of lower order.
\begin{proposition} \label{thm:regularity}
Let the assumptions in
\cref{eq:assump:um1}
hold for a fixed integer $k\geq0$
and let $u_m$ be of the form \cref{eq:ansatz} with $\mathbf{c}_m$ solving \cref{ivp2} with initial data $\mathbf{g}_m$ and source term $\mathbf{f}_m$.
Then there exists a constant $C=C(d,T,\norm{\nabla_x^2 U}_\infty, \allowbreak\dots,\allowbreak \norm{\nabla_x^{k+1}U}_\infty,k)>0$, independent of $m$, such that
\begin{align}\label{eq:um:regularity}
\norm{u_m}^2_{\mathrm{H}^k_{t,x,v}((0,T)\times\T^d\times\mathbb{R}^d)} \leq C\bigl(\norm{g}^2_{\mathrm{H}^k_{x,v}(\T^d\times\mathbb{R}^d)} + \norm{f}^2_{\mathrm{H}^k_{t,x,v}((0,T)\times\T^d\times\mathbb{R}^d)}\bigr).
\end{align}
\end{proposition}
\begin{proof} Throughout this proof we use the convention that $a \lesssim b$ means that $a\leq Cb$ for some positive constant $C$ which may depend on $(d,T,\norm{\nabla_x^2 U}_\infty, \allowbreak\dots,\allowbreak \norm{\nabla_x^{k+1}U}_\infty,k)$ but which is independent of $m$. To start the argument we first observe that
\begin{align}\label{esta}
\norm{ u_m }^2_{\mathrm{H}^k_{t,x,v}((0,T)\times \T^d\times\mathbb{R}^d)} =& \sum_{i+j+l = 0}^k \norm{\partial_t^i \nabla_x^j \nabla_v^{2l} u_m}^2_{\mathsf{L}^2((0,T);\mathsf{L}^2_\eta(\T^d;\mathsf{L}^2_\mu(\mathbb{R}^d)))}\notag\\
=& \sum_{i=0}^k \norm{\partial_t^i u_m}^2_{\mathsf{L}^2((0,T);\mathrm{H}^{k-i}_{x,v}(\T^d\times\mathbb{R}^d))}.
\end{align}
Recall that $g_m = \sum_{\abs{\alpha}=0}^m (g,\Psi_\alpha)_{\mathsf{L}^2_\mu}\Psi_\alpha$ and $f_m=\sum_{\abs{\alpha}=0}^m (f,\Psi_\alpha)_{\mathsf{L}^2_\mu}\Psi_\alpha$. Hence, given $k$ we have that $g_m$ and $f_m$ satisfy
\cref{eq:assump:um1}.
We first estimate the term in \cref{esta} which corresponds to
$i=0$. We claim that
\begin{align}\label{eq:um:claim1}
\sup_{0\leq t\leq T}\norm{u_m(t,\cdot,\cdot)}^2_{\mathrm{H}^k_{x,v}} \lesssim \bigl(\norm{g}^2_{\mathrm{H}^k_{x,v}(\T^d\times\mathbb{R}^d)} + \norm{f}^2_{\mathrm{H}^k_{t,x,v}((0,T)\times\T^d\times\mathbb{R}^d)}\bigr).
\end{align}
For efficiency we introduce, for $s,j$ non-negative integers,
\begin{align*}
N(s,j, t):= \norm{\nabla_x^j u_m(t,\cdot,\cdot)}^2_{\mathsf{L}^2_\eta(\mathrm{H}^{2s}_\mu)} \text{ and } F(s,j,t) := \norm{\nabla_x^j f(t,\cdot,\cdot)}^2_{\mathsf{L}^2_\eta(\mathrm{H}^{2s}_\mu)}.
\end{align*}
To prove the claim \cref{eq:um:claim1}, it suffices to prove that
\begin{align}\label{eq:um:sum}
\partial_t \sum_{s+j\leq k } N(s,j,t) \lesssim \sum_{s+j\leq k} N(s,j,t) + \sum_{s+j\leq k} F(s,j,t).
\end{align}
Indeed, we need only apply Gr\"onwall's inequality to \cref{eq:um:sum} to conclude \cref{eq:um:claim1}.
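Spelled out, setting $\Phi(t) := \sum_{s+j\leq k} N(s,j,t)$ and $\Psi(t) := \sum_{s+j\leq k} F(s,j,t)$, \cref{eq:um:sum} reads $\Phi'(t) \lesssim \Phi(t) + \Psi(t)$, and hence
\begin{align*}
\max_{0\leq t\leq T} \Phi(t) \lesssim \Phi(0) + \int_0^T \Psi(s)\,\mathrm{d}s \lesssim \norm{g}^2_{\mathrm{H}^k_{x,v}} + \norm{f}^2_{\mathrm{H}^k_{t,x,v}},
\end{align*}
where in the last step we used that $u_m(0) = g_m$ is a projection of $g$, together with \cref{eq:fractional_norm}.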
To prove \cref{eq:um:sum} we note that
\begin{align*}
N(s,j,t) = \sum_{\abs{\theta} = j}\norm{(I+A)^{s} \nabla_x^\theta \mathbf{c}_m}^2_{\mathsf{L}^2_\eta},
\end{align*}
and a similar identity holds for $F$.
Hence we need to go back to the equation in \cref{ivp2}. Similar to the proof of \cref{thm:energy}, we let for a multi-index
$\theta$,
\begin{align}\label{eq:um:bj}
\mathbf{w}^{(\theta)} &= \nabla_{x}^\theta \mathbf{c}_m,\quad \mathbf{g}^{(\theta)} = \nabla_{x}^{\theta} \mathbf{g}_m,\quad \mathbf{f}^{(\theta)} = \nabla_x^\theta \mathbf{f},\notag \\
\mathbf{b}^{(\theta+\mathbf{e}_j)} &= \sum_{i=1}^d [\partial_{x_j}, \partial_{x_i}^\ast] B_i \mathbf{w}^{(\theta)} + \partial_{x_j} \mathbf{b}^{(\theta)},
\end{align}
and $\mathbf{b}^{(0)} = 0$. In addition, for a non-negative integer $j$ we introduce
\begin{align*}
\norm{(I+A)^{s} \mathbf{w}^{(j)}}^2_{\mathsf{L}^2_\eta} := \sum_{\abs{\theta}=j} \norm{(I+A)^{s} \mathbf{w}^{(\theta)}}^2_{\mathsf{L}^2_\eta},
\end{align*}
and similarly for $\norm{(I+A)^{s} \mathbf{b}^{(j)}}^2_{\mathsf{L}^2_\eta}$ and $\norm{(I+A)^{s} \mathbf{f}^{(j)}}^2_{\mathsf{L}^2_\eta}$.
Then $\mathbf{w}^{(\theta+\mathbf{e}_j)}$ solves the problem
\begin{align*}
\begin{cases}
\partial_t \mathbf{w}^{(\theta+\mathbf{e}_j)} + H \mathbf{w}^{(\theta+\mathbf{e}_j)} = \mathbf{f}^{(\theta+\mathbf{e}_j)}+\mathbf{b}^{(\theta+\mathbf{e}_j)} & \text{in } (0,T)\times\T^d,\\
\hphantom{\partial_t \mathbf{w}^{(\theta+\mathbf{e}_j)} + H } \mathbf{w}^{(\theta+\mathbf{e}_j)} = \mathbf{g}^{(\theta+\mathbf{e}_j)} & \text{on } \{t=0\}\times\T^d.
\end{cases}
\end{cases}
\end{align*}
We will use the following lemma.
\begin{lemma}\label{lem:um:iteration}
Let $k,u_m,\mathbf{f}_m,\mathbf{g}_m$ be as in \cref{thm:regularity},
and let $s,j$ be nonnegative integers such that $s\leq k$ and $j\leq k$. Then
\begin{align}\label{ineq:um:all}
\partial_t \frac{1}{2} \norm{(I+A)^{s} \mathbf{w}^{(j)}}^2_{\mathsf{L}^2_\eta} \lesssim & \norm{(I+A)^s \mathbf{w}^{(j)}}^2_{\mathsf{L}^2_\eta}
+ \mathbb{I}_{[s > 0]}\norm{(I+A)^{s-1} \mathbf{w}^{(j+1)}}^2_{\mathsf{L}^2_\eta}\notag \\
&+ \norm{(I+A)^{s-1/2} \mathbf{b}^{(j)}}^2_{\mathsf{L}^2_\eta} + \norm{(I+A)^{s} \mathbf{f}^{(j)}}^2_{\mathsf{L}^2_\eta}.
\end{align}
\end{lemma}
Assuming \cref{lem:um:iteration} we proceed with the proof of \cref{eq:um:sum} and of \cref{thm:regularity}. First we note that, by \cref{eq:um:bj} and the definition of the matrices $B_i$, an iteration gives, for $s,j \geq 0$,
\begin{align}\label{eq:b:bound}
\norm{(I+A)^{s-1/2} \mathbf{b}^{(j)}}^2_{\mathsf{L}^2_\eta} \lesssim \sum_{r=0}^{j-1} N(s,r,t).
\end{align}
Now by \cref{lem:um:iteration,eq:b:bound} we have for $s,j \geq 0$ that
\begin{align}\label{eq:um:master}
\partial_t N(s,j,t) \lesssim N(s,j,t) + \mathbb{I}_{[s>0]}N(s-1,j+1,t)
+ \sum_{r=0}^{j-1} N(s, r,t) + F(s,j,t).
\end{align}
Summing the inequalities in \cref{eq:um:master} over $s+j\leq k$ we obtain, after some reindexing, inequality \cref{eq:um:sum}; hence the claim \cref{eq:um:claim1} is proved.
Finally, by differentiating w.r.t.~$t$, for all $i\geq 1$, we see that the term $\partial^i_t u_m$ is of the form \cref{eq:ansatz} but with $\mathbf{c}_m$ replaced by $\partial^i_t \mathbf{c}_m$. It is also easy to check that $\partial^i_t \mathbf{c}_m$ satisfies \cref{ivp2} with initial condition $\partial^i_t \mathbf{c}_m(0) = \partial_t^{i-1} \mathbf{f}_m(0)-H\partial_t^{i-1}\mathbf{c}_m(0)$, $\partial_t^0 \mathbf{c}_m(0)=\mathbf{g}_m(0)$, and with source term $\partial_t^i \mathbf{f}_m$.
Therefore the initial data $\partial_t^i u_m(0)$ and the source term $\partial_t^i f_m$ satisfy the assumptions in
\cref{eq:assump:um1}
for order $k-i$. Finally, by an induction on $0\leq i\leq k$, and by recalling the inequality
\begin{align*}
\norm{g_m}^2_{\mathrm{H}^k_{x,v}}+\norm{f_m}^2_{\mathrm{H}^k_{t,x,v}}\lesssim \norm{g}^2_{\mathrm{H}^k_{x,v}}+\norm{f}^2_{\mathrm{H}^k_{t,x,v}},
\end{align*}
the proof of the proposition is complete.
\end{proof}
\subsection{Proof of \cref{lem:um:iteration}} Using \cref{prop:existence} and a calculation we see that the formula
\begin{align}\label{eq:um:energy}
\partial_t \frac{1}{2} \norm{(I+A)^{s} \mathbf{w}^{(\theta)}}^2_{\mathsf{L}^2_\eta}
= \bigl( (I+A)^{2s}\mathbf{w}^{(\theta)}, -H \mathbf{w}^{(\theta)} + \mathbf{f}^{(\theta)}+\mathbf{b}^{(\theta)} \bigr)_{\mathsf{L}^2_\eta},
\end{align}
is valid for any $s\geq0$ and $\abs{\theta} = j\geq 0$. We claim that for $\delta,\varepsilon\in(0,1)$ there exists $C_1=C_1(s)$ such that $C_1(0) = 0$ and
\begin{align}\label{eq:um:energyestimate}
\partial_t \frac{1}{2} \norm{(I+A)^{s} \mathbf{w}^{(\theta)}}^2_{\mathsf{L}^2_\eta} \leq& -\norm{A^{1/2}(I+A)^s \mathbf{w}^{(\theta)}}^2_{\mathsf{L}^2_\eta}\notag \\
&+ \bigl( C_1 \delta + \varepsilon \bigr)\norm{(I+A)^{s+1/2} \mathbf{w}^{(\theta)}}^2_{\mathsf{L}^2_\eta}+ \norm{(I+A)^{s} \mathbf{w}^{(\theta)}}^2_{\mathsf{L}^2_\eta}\notag\\
&+\frac{C_1 }{\delta} \sum_{i=1}^d \norm{(I+A)^{s-1} \mathbf{w}^{(\theta+\mathbf{e}_i)}}^2_{\mathsf{L}^2_\eta}\notag \\
&+ \frac{1}{\varepsilon} \norm{ (I+A)^{s-1/2} \mathbf{b}^{(\theta)}}^2_{\mathsf{L}^2_\eta} + \norm{ (I+A)^{s} \mathbf{f}^{(\theta)}}^2_{\mathsf{L}^2_\eta}.
\end{align}
Furthermore, we claim that if we choose $\delta=\delta(s)$ and $\varepsilon=\varepsilon(s)$ small enough so that $C_1\delta + \varepsilon < 1/2$, then
\begin{align}\label{eq:um:claim}
&-\norm{A^{1/2}(I+A)^s \mathbf{w}^{(\theta)}}^2_{\mathsf{L}^2_\eta} + \bigl( C_1 \delta + \varepsilon \bigr)\norm{(I+A)^{s+1/2} \mathbf{w}^{(\theta)}}^2_{\mathsf{L}^2_\eta}\notag\\
&\leq \frac{1}{2} \norm{(I+A)^s \mathbf{w}^{(\theta)}}^2_{\mathsf{L}^2_\eta}.
\end{align}
Indeed, by reformulating the above inequality using multi-indices we find that the coefficient of $\norm{(w^{(\theta)})^\alpha}^2_{\mathsf{L}^2_\eta}$ in
\cref{eq:um:claim} equals
\begin{align*}
- \abs{\alpha} (1+\abs{\alpha})^{2s} + (C_1 \delta + \varepsilon) (1+\abs{\alpha})^{2s+1}.
\end{align*}
Letting $l:=\abs{\alpha}$ and using that $C_1\delta + \varepsilon < 1/2$, we have
\begin{align*}
- l(1+l)^{2s} + \frac{1}{2} (1+l)^{2s+1} \leq \frac{1}{2} (1+l)^{2s},
\end{align*}
which proves \cref{eq:um:claim}. The lemma is then proved by substituting \cref{eq:um:claim} into \cref{eq:um:energyestimate} and applying Gr\"onwall's inequality. It remains to prove the claim \cref{eq:um:energyestimate}.
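For completeness, the elementary inequality used above follows from the factorization
\begin{align*}
- l(1+l)^{2s} + \frac{1}{2} (1+l)^{2s+1} = (1+l)^{2s}\Bigl(\frac{1+l}{2} - l\Bigr) = \frac{1-l}{2}\,(1+l)^{2s} \leq \frac{1}{2}\,(1+l)^{2s},
\end{align*}
valid for every integer $l\geq 0$.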
To prove the claim \cref{eq:um:energyestimate} we note that
\begin{align*}
\bigl( (I+A)^{2s}\mathbf{w}^{(\theta)}, - H \mathbf{w}^{(\theta)} \bigr)_{\mathsf{L}^2_\eta}=-\text{I}+\text{II},
\end{align*}
where
\begin{align*}
\text{I}:=\sum_{i=1}^d
\norm{A^{1/2}(I+A)^s \mathbf{w}^{(\theta)}}^2_{\mathsf{L}^2_\eta},\qquad
\text{II}:=\sum_{i=1}^d \bigl( \partial_{x_i} \mathbf{w}^{(\theta)}, M_i \mathbf{w}^{(\theta)} \bigr)_{\mathsf{L}^2_\eta}
\end{align*}
and $M_i:= \bigl[ (I+A)^{2s} , B_i \bigr]$.
Furthermore, letting
\begin{align*}
\text{III} := \bigl((I+A)^{2s} \mathbf{w}^{(\theta)}, \mathbf{b}^{(\theta)} \bigr)_{\mathsf{L}^2_\eta},\qquad
\text{IV} := \bigl((I+A)^{2s} \mathbf{w}^{(\theta)}, \mathbf{f}^{(\theta)} \bigr)_{\mathsf{L}^2_\eta},
\end{align*}
we see that we can rewrite \cref{eq:um:energy} as
\begin{align*}
\partial_t \frac{1}{2} \norm{(I+A)^{s} \mathbf{w}^{(\theta)}}^2_{\mathsf{L}^2_\eta} = -\text{I}+\text{II}+\text{III}+\text{IV}.
\end{align*}
We need to estimate the terms II, III and IV. To estimate the term II we see, by expanding it in terms of multi-indices, that
\begin{align}\label{eq:um:II}
\text{II} = \sum_{i=1}^d \sum_{\abs{\alpha}=0}^m \sqrt{\alpha_i} \bigl( (1+\abs{\alpha})^{2s} - \abs{\alpha}^{2s}\bigr) \bigl( w^{(\theta)}_\alpha , \partial_{x_i} w^{(\theta)}_\alpha \bigr)_{\mathsf{L}^2_\eta}.
\end{align}
Observe that the coefficient $\sqrt{\alpha_i} \bigl( (1+\abs{\alpha})^{2s} - \abs{\alpha}^{2s}\bigr)$ is of order $\abs{\alpha}^{2s-\frac12} = \abs{\alpha}^{s-1}\abs{\alpha}^{s+\frac12}$. We claim that
\begin{align}\label{ineq:um:ii}
\text{II} \leq \frac{C_1}{\delta} \sum_{i=1}^d \norm{(I+A)^{s-1} \mathbf{w}^{(\theta+\mathbf{e}_i)}}^2_{\mathsf{L}^2_\eta} + C_1 \delta \norm{(I+A)^{s+\frac12} \mathbf{w}^{(\theta)}}^2_{\mathsf{L}^2_\eta},
\end{align}
where $C_1 = C_1(s)$, $C_1(0) = 0$, and $\delta > 0$ is a degree of freedom. Indeed, for all $l\in \N$ we have
\begin{align*}
\abs{(1+l)^{2s}-l^{2s}} \leq C_1 (1+l)^{2s-1},
\end{align*}
for a constant $C_1 = C_1(s)$ satisfying $C_1(0) = 0$ (by the mean value theorem one may take $C_1 = 2s$). Applying this to \cref{eq:um:II} we have, for $i=1,\dots,d$, that
\begin{align*}
\abs{\sqrt{\alpha_i} \bigl( (1+\abs{\alpha})^{2s} - \abs{\alpha}^{2s}\bigr)}\leq \sqrt{1+\abs{\alpha}}\bigl( (1+\abs{\alpha})^{2s} - \abs{\alpha}^{2s}\bigr) \leq C_1 (1+\abs{\alpha})^{2s-\frac12}.
\end{align*}
Note that $2s-\frac12 = (s-1) + (s+ \frac12)$; by Young's inequality with parameter $\delta > 0$ we have
\begin{align*}
\text{II} \leq \frac{C_1}{\delta} \sum_{i=1}^d \sum_{\abs{\alpha}=0}^m \norm{(1+\abs{\alpha})^{s-1} \partial_{x_i} w_\alpha^{(\theta)}}^2_{\mathsf{L}^2_\eta} + C_1 \delta \sum_{i=1}^d \sum_{\abs{\alpha}=0}^m \norm{(1+\abs{\alpha})^{s+\frac12} w_\alpha^{(\theta)}}^2_{\mathsf{L}^2_\eta},
\end{align*}
which proves \cref{ineq:um:ii}. To estimate the term III we proceed analogously: observing that $(I+A)^{2s} = (I+A)^{s+\frac12}(I+A)^{s-\frac12}$ and using Young's inequality, we obtain, for any $\varepsilon > 0$,
\begin{align}\label{ineq:um:iii}
\text{III} \leq \varepsilon \norm{(I+A)^{s+\frac12} \mathbf{w}^{(\theta)}}^2_{\mathsf{L}^2_\eta} + \frac{1}{\varepsilon} \norm{ (I+A)^{s-\frac12} \mathbf{b}^{(\theta)}}^2_{\mathsf{L}^2_\eta}.
\end{align}
For term IV we have
\begin{align}\label{ineq:um:iv}
\text{IV} \leq \norm{(I+A)^{s} \mathbf{w}^{(\theta)}}^2_{\mathsf{L}^2_\eta} + \norm{ (I+A)^{s} \mathbf{f}^{(\theta)}}^2_{\mathsf{L}^2_\eta}.
\end{align}
Combining \cref{ineq:um:ii,ineq:um:iii,ineq:um:iv} we get the claim in \cref{eq:um:energyestimate}.
\section{Proof of \cref{Thm1} and \cref{cor:l2initial}}\label{sec:bddresult}
To start the proof of \cref{Thm1} we first recall from \cref{eq:Lum} that $u_m$ satisfies
\begin{align*}
(\partial_t - \mathcal{L}) u_m = f_m+{E}_m.
\end{align*}
Referring to the discussion after \cref{eq:Lum}, we have that
$u_m$ is uniformly bounded in $\mathsf{L}^2((0,T);\mathsf{L}^2_\eta(\T^d; \mathrm{H}^1_\mu))$, see \cref{thm:regularity}, while $\partial_t u_m + v \cdot \nabla_x u_m$ is uniformly bounded in $\mathsf{L}^2((0,T);\mathsf{L}^2_\eta(\T^d; \mathsf{L}^2_\mu))$, see \cref{lem:Em,thm:regularity}.
Furthermore, using \cref{thm:regularity} we see there is a subsequence, still denoted by $\{u_m\}$, that converges weakly to a function $u\in\mathrm{H}^{k}_{t,x,v}((0,T)\times\T^d\times\mathbb{R}^d)\subset\mathrm{H}^1_{\mathrm{kin}}((0,T)\times\T^d\times\mathbb{R}^d)$.
Moreover, using \cref{lem:Em,thm:regularity} we see that $\|{E}_m\|_{\mathsf{L}^2_\eta(\mathsf{L}^2_\mu)} \to 0$ as $m \to \infty$. Using
all this information it follows by standard arguments, see \cite[Sec 7.2.2.c]{evansPartialDifferentialEquations2010a} for instance, that $u$ is a weak solution to the equation $(\partial_t -\mathcal{L})u = f$, $u(0, x, v) = g(x,v)$, in the sense of \cref{def:weak}. The regularity estimates claimed in \cref{Thm1} follow easily from \cref{thm:regularity}. By linearity of the equation, these estimates also imply the uniqueness of $u$.
To prove the error estimates in \cref{thm1:error} we let $w_m:=u-u_m$. Recalling \cref{eq:Lum} and inserting $w_m$ we get
\begin{align*}
(\partial_t - \mathcal{L} ) w_m = -E_m + f-f_m,
\end{align*}
where $f_m$ and $E_m$ are given by \cref{eq:Lum+}.
Testing the equation above with $w_m$ and using Young's inequality we get the energy estimate
\begin{align*}
\partial_t \frac{1}{2} \norm{w_m}^2_{\mathsf{L}^2_\eta(\mathsf{L}^2_\mu)}
\leq&
- \norm{\nabla_v w_m}^2_{\mathsf{L}^2_\eta(\mathsf{L}^2_\mu)}
+ \int_{\T^d} (E_m, w_m)_{\mathsf{L}^2_\mu} \, \mathrm{d}\eta
+ \int_{\T^d} (f-f_m, w_m)_{\mathsf{L}^2_\mu} \, \mathrm{d}\eta
\\
\leq &
\norm{w_m}^2_{\mathsf{L}^2_\eta(\mathsf{L}^2_\mu)}
+ \frac{1}{2}\norm{E_m}^2_{\mathsf{L}^2_\eta(\mathsf{L}^2_\mu)}
+ \frac{1}{2}\norm{f-f_m}^2_{\mathsf{L}^2_\eta(\mathsf{L}^2_\mu)}.
\end{align*}
Using Grönwall's inequality we thus get for a constant $C = C(T) > 1$
\begin{align}\label{eq:energy:diff}
\notag
\norm{w_m}^2_{\mathsf{L}^2(\mathsf{L}^2_\eta(\mathsf{L}^2_\mu))}
&\leq
C \bigl ( \norm{w_m(0)}^2_{\mathsf{L}^2_\eta(\mathsf{L}^2_\mu)}
+
\norm{E_m}^2_{\mathsf{L}^2(\mathsf{L}^2_\eta(\mathsf{L}^2_\mu))} + \norm{f-f_m}^2_{\mathsf{L}^2(\mathsf{L}^2_\eta(\mathsf{L}^2_\mu))}\bigr)
\\
&\leq
C \frac{1}{(1+m)^{2k}}\left( \norm{g}^2_{\mathsf{L}^2_\eta(\mathrm{H}^{2k}_\mu)}
+ \norm{f}^2_{\mathsf{L}^2(\mathsf{L}^2_\eta(\mathrm{H}^{2k}_\mu))} \right)
+
C\norm{E_m}^2_{\mathsf{L}^2(\mathsf{L}^2_\eta(\mathsf{L}^2_\mu))} ,
\end{align}
where in the last inequality we have used the fact that $w_m(0)$ and $f_m$ are the orthogonal projections of $g$ and $f$, respectively, onto the space spanned by the first $m$ Hermite basis functions.
Furthermore, applying the estimate in \cref{eq:um:Em} and the regularity estimate \cref{eq:um:regularity}, we obtain for $0 \leq t \leq T$ and $C$ as in \cref{eq:um:regularity}
\begin{align}\label{eq:Em:bound}
\notag
\norm{E_m}^2_{\mathsf{L}^2(\mathsf{L}^2_\eta(\mathsf{L}^2_\mu))}
&\leq
\frac{d}{(1+m)^{2k-3}} \norm{\nabla_x u_m(t)}^2_{\mathsf{L}^2(\mathsf{L}^2_\eta(\mathrm{H}^{2k-2}_\mu))}
\\
&\leq
\frac{Cd}{(1+m)^{2k-3}}\bigl( \norm{g}^2_{\mathrm{H}^k_{x,v}} + \norm{f}^2_{\mathrm{H}^k_{t,x,v}}\bigr).
\end{align}
Combining the estimates in \cref{eq:energy:diff,eq:Em:bound} completes the proof of \cref{Thm1}.
\subsection{Proof of \cref{cor:l2initial}}
Let $\{g_\varepsilon(x,v)\}$ and $\{f_\varepsilon(t,x,v)\}$, $\varepsilon > 0$, be sequences of smooth functions such that $g_\varepsilon \to g$ in $\mathsf{L}^2_\eta(\mathsf{L}^2_\mu)$ and $f_\varepsilon\to f$ in $\mathsf{L}^2((0,T);\mathsf{L}^2_\eta(\mathsf{L}^2_\mu))$ as $\varepsilon \to 0$.
Then by \cref{Thm1}, for each $\varepsilon > 0$ there exists a unique weak solution $u_\varepsilon$ in the sense of \cref{def:weak} to the equation $(\partial_t-\mathcal{L}) u_\varepsilon = f_\varepsilon$ with initial data given by $g_\varepsilon$. It follows that
\begin{align}\label{energy}
\partial_t \frac{1}{2} \norm{u_\varepsilon}^2_{\mathsf{L}^2_\eta(\mathsf{L}^2_\mu)} \leq - \norm{\nabla_v u_\varepsilon}^2_{\mathsf{L}^2_\eta(\mathsf{L}^2_\mu)} + \frac{1}{2}\norm{u_\varepsilon}^2_{\mathsf{L}^2_\eta(\mathsf{L}^2_\mu)} + \frac{1}{2}\norm{f_\varepsilon}^2_{\mathsf{L}^2_\eta(\mathsf{L}^2_\mu)}.
\end{align}
Using Grönwall's inequality we have
\begin{align*}
\max_{0\leq t\leq T} \|u_\varepsilon\|^2_{\mathsf{L}^2_\eta(\mathsf{L}^2_\mu)}
\lesssim \norm{g}^2_{\mathsf{L}^2_\eta(\mathsf{L}^2_\mu)}+ \norm{f}^2_{\mathsf{L}^2((0,T);\mathsf{L}^2_\eta(\mathsf{L}^2_\mu))}.
\end{align*}
Furthermore, integrating \cref{energy} in $t$ we get
\begin{multline*}
\frac{1}{2}\|u_\varepsilon(T)\|^2_{\mathsf{L}^2_\eta(\mathsf{L}^2_\mu)}
-
\frac{1}{2}\|u_\varepsilon(0)\|^2_{\mathsf{L}^2_\eta(\mathsf{L}^2_\mu)}
+
\|\nabla_v u_\varepsilon\|^2_{\mathsf{L}^2((0,T);\mathsf{L}^2_\eta(\mathsf{L}^2_\mu))}
\\
\leq
\frac{1}{2} \bigl( \norm{u_\varepsilon}^2_{\mathsf{L}^2((0,T);\mathsf{L}^2_\eta(\mathsf{L}^2_\mu))}
+
\norm{f_\varepsilon}^2_{\mathsf{L}^2((0,T);\mathsf{L}^2_\eta(\mathsf{L}^2_\mu))}\bigr).
\end{multline*}
Hence
\begin{align*}
\|\nabla_v u_\varepsilon\|^2_{\mathsf{L}^2((0,T);\mathsf{L}^2_\eta(\mathsf{L}^2_\mu))} \leq C(T) \bigl( \|g\|^2_{\mathsf{L}^2_\eta(\mathsf{L}^2_\mu)}+ \norm{f}^2_{\mathsf{L}^2((0,T);\mathsf{L}^2_\eta(\mathsf{L}^2_\mu))}\bigr).
\end{align*}
Next, we rewrite equation \cref{eq:kfpconj} as
\begin{align*}
\partial_t u_\varepsilon +v\cdot \nabla_x u_\varepsilon -\nabla_x U\cdot \nabla_v u_\varepsilon = \nabla^*_v \nabla_v u_\varepsilon + f.
\end{align*}
From the characterization of $\mathrm{H}^{-1}_\mu$, arguing as in the Lebesgue setting (see for instance \cite[Sec 5.9.1]{evansPartialDifferentialEquations2010a}), it holds that
\begin{multline*}
\|\partial_t u_\varepsilon +v\cdot \nabla_x u_\varepsilon -\nabla_x U\cdot \nabla_v u_\varepsilon\|_{\mathsf{L}^2((0,T);\mathsf{L}^2_\eta(\mathrm{H}^{-1}_\mu))}
\\
\lesssim
\|\nabla_v u_\varepsilon\|_{\mathsf{L}^2((0,T);\mathsf{L}^2_\eta(\mathsf{L}^2_\mu))}
+
\norm{f}_{\mathsf{L}^2((0,T);\mathsf{L}^2_\eta(\mathsf{L}^2_\mu))}.
\end{multline*}
Hence there exists a $u\in \mathrm{H}^1_{\mathrm{kin}}$ such that after passing to a subsequence $u_\varepsilon \rightharpoonup u$ in $\mathsf{L}^2_\eta(\mathrm{H}^1_\mu)$ with $\partial_t u_\varepsilon + v\cdot\nabla_x u_\varepsilon \rightharpoonup \partial_t u+ v\cdot\nabla_x u$ in $\mathsf{L}^2_\eta(\mathrm{H}^{-1}_\mu)$.
Uniqueness follows easily from the estimate in \cref{energy} and linearity.
\section{Proof of \cref{Thm2}}\langlebel{sec:global}
In this section we consider $D=\mathbb{R}^d$ and we prove \cref{Thm2} assuming \cref{assump:global:potential} and \cref{assump:global:potential+}. To start the construction of the family of smooth periodic approximations $\{U_R\}$ of $U$, let $Q_R$ denote the cube with side length $2R$ centered at the origin. Let $\chi: \mathbb{R} \to \mathbb{R}$ be a smooth cutoff function such that $\chi = 1$ on $[0,1]$, $0\leq \chi \leq 1$ on $(1,2)$ and $\chi = 0$ on $[2,\infty)$. For $R>0$, we introduce
\begin{align*}
\varphi_R(x) := \chi(2\abs{x}/R).
\end{align*}
It follows that $\varphi_R = 1$ on $B_{R/2}$, $0\leq \varphi_R\leq 1$ on $B_R\setminus B_{R/2}$ and $\varphi_R = 0$ when $\abs{x} \geq R$. Using $\varphi_R$ we define
\begin{align}\label{def:ur}
U_R (x) := U(x)\varphi_R(x), \text{ for } x\in Q_R,
\end{align}
and we extend $U_R$ to be a periodic function of period $2R$ on $\mathbb{R}^d$. In the following lemma we collect some elementary properties of the construction and $U_R$.
\begin{lemma}\label{lem:global:tech}
Let $U(x)$ satisfy the assumption \cref{assump:global:potential} and let $U_R$ be given by \cref{def:ur}. Then $U_R(x)\leq U(x)$ on $\mathbb{R}^d$.
In addition, there exists an $R_0 > 1$ such that, for $R\geq R_0$,
\begin{align*}
\abs{\nabla_x^k U_R(x)}\leq C_k
\end{align*}
on $\mathbb{R}^d$ for all $k\geq 2$ and for some constants $\{C_k\}$ which are independent of $R$. Furthermore, there exists a constant $C_1$ independent of $R$ such that for $R \geq R_0$
\begin{align*}
\abs{\nabla_x (U-U_R)(x)}\leq C_1 \abs{\nabla_x U(x)}
\end{align*}
on $\mathbb{R}^d \setminus B_{R_0}$.
\end{lemma}
\begin{proof} First, obviously $U_R(x)\leq U(x)$ since $\abs{\varphi_R}\leq 1$. Secondly,
\begin{align*}
\nabla_x U_R(x) = \nabla_x U(x) \varphi_R(x) + \nabla_x \varphi_R(x) U(x).
\end{align*}
Let $R_0$ be so large that $\mathrm{supp}(P) \subset B_{R_0/4}$ and consider $R\geq R_0$. Then $U(x) = a\abs{x}^2$ in
$\mathbb{R}^d\setminus B_{R_0/2}$, and consequently
\begin{align*}
\abs{\nabla_x U_R(x)} \leq \abs{\nabla_x U(x)} + C\frac{\abs{U(x)}}{R}\leq C_1 \abs{\nabla_x U(x)},
\end{align*}
on $\mathbb{R}^d \setminus B_{R_0}$, where $C_1$ is a constant which is independent of $R$. The proof for higher-order derivatives is analogous.
\end{proof}
\begin{lemma}\label{lem:global:energy}
Let $U$ satisfy the assumption in \cref{assump:global:potential}, and let $U_R$ and $R_0$ be constructed as above and in \cref{lem:global:tech}, respectively. Consider $R\geq R_0$, and let $\, \mathrm{d} \eta_{R} := e^{-U_R(x)}\, \mathrm{d} x$. Assume that $u=u(x)$ is a function which is periodic with period $2R$ on $\mathbb{R}^d$ and satisfies $u\in \mathsf{L}^2_{\eta_R}(Q_R)$. Then $u\in \mathsf{L}^2_\eta(\mathbb{R}^d)$ and
\begin{align*}
\norm{u}_{\mathsf{L}^2_\eta(\mathbb{R}^d)} \leq C\norm{u}_{\mathsf{L}^2_{\eta_R}(Q_R)},
\end{align*}
where $C$ is a constant independent of $R$.
\end{lemma}
\begin{proof} Note that
\begin{align*}
\int_{\mathbb{R}^d} \abs{u(x)}^2 e^{-U(x)} \, \mathrm{d} x =& \sum_{z\in\Z^d} \int_{Q_R} \abs{u(x)}^2 \frac{e^{-U(x+2Rz)}}{e^{-U_R(x)}} e^{-U_R(x)} \, \mathrm{d} x \\
\leq & \norm{u}^2_{\mathsf{L}^2_{\eta_R}(Q_R)}\cdot \sum_{z\in \Z^d} \sup_{x\in Q_R} e^{-(U(x+2Rz)-U_R(x))}.
\end{align*}
Using this, and
\begin{align*}
\sum_{z\in \Z^d} \sup_{x\in Q_R} e^{-(U(x+2Rz)-U_R(x))} \leq& 1 + \sum_{\substack{z\in \Z^d, \\z\neq 0}} \sup_{x\in Q_R} e^{-a\abs{x+2Rz}^2+U_R(x)},
\end{align*}
we see that it suffices to estimate the sum
\begin{align*}
\sum_{\substack{z\in \Z^d, \\z\neq 0}} \sup_{x\in Q_R} e^{-a\abs{x+2Rz}^2+U_R(x)},
\end{align*}
for $R\geq R_0$. Using the radial symmetry and monotonicity of $U(x)$ outside $B_R$ and \cref{assump:global:potential}, we deduce
\begin{align}\label{eq:global:expsum}
\notag \sum_{\substack{z\in \Z^d, \\z\neq 0}} \sup_{x\in Q_R} e^{-a\abs{x+2Rz}^2+U_R(x)}
&\leq
\sum_{k=1}^\infty \sum_{\substack{z\in \Z^d, \\ k \leq |z| < k+1}} \sup_{x\in Q_R} e^{-a\abs{x+2Rz}^2+U_R(x)}
\\
&\leq
C(P) \sum_{k=1}^\infty k^{d-1} e^{-4aR^2 (k^2-1)}.
\end{align}
It is now easily seen that the sum on the right-hand side in \cref{eq:global:expsum} is bounded independently of $R$ as long as we choose $R\geq R_0$.
\end{proof}
In the following we enlarge, if necessary, $R_0$ so that $\mathrm{supp}(g(x,v))\Subset B_{R_0}(0)\times\mathbb{R}^d$ and $\mathrm{supp}(f(t,x,v))\Subset [0,T]\times B_{R_0}(0)\times\mathbb{R}^d$.
Let $\mathcal{L}_R$ denote the operator $\mathcal{L}$ but with $U$ replaced by $U_R$.
By the construction outlined we are thus led, for $R\geq R_0$, to the following family of problems with periodic boundary conditions
\begin{align}\label{ivp4}
\begin{cases}
(\partial_t - \mathcal{L}_R)u_R = f & \text{ in } (0,T)\times Q_R\times\mathbb{R}^d,\\
u_R = g & \text{ on } \{t=0\}\times Q_R\times \mathbb{R}^d.
\end{cases}
\end{align}
\begin{corollary}\label{cor:global:urexist}
Let $U$, $f$, $g$ satisfy \cref{assump:global:potential} and \cref{assump:global:potential+}, and let $R_0$ be as above. Then, given $R\geq R_0$ there exists a unique smooth weak solution $u_R$ to the problem \cref{ivp4}. Furthermore,
\begin{multline*}
\norm{u_R }^2_{\mathsf{L}^2((0,T);\mathsf{L}^2_{\eta_R}(Q_R; \mathrm{H}^{4}_\mu(\mathbb{R}^d)))} + \norm{\nabla_x u_R }^2_{\mathsf{L}^2((0,T);\mathsf{L}^2_{\eta_R}(Q_R;\mathrm{H}^{2}_\mu(\mathbb{R}^d)))}\\
\leq C\big(\norm{g}^2_{\mathrm{H}^2_{x,v}(Q_R \times \mathbb{R}^d)} + \norm{f}^2_{\mathrm{H}^2_{t,x,v}((0,T)\times Q_R \times \mathbb{R}^d)}\big),
\end{multline*}
for a constant $C=C(d,T,\norm{\nabla_x^2 U}_\infty, \norm{\nabla_x^3 U}_\infty,R_0)>0$, independent of $R$.
\end{corollary}
\begin{proof}
First of all, by the compact support of $g$ and $f$ we can extend them periodically to all of $\mathbb{R}^{2d}$ and $[0,T] \times \mathbb{R}^{2d}$ respectively, thus applying \cref{Thm1,thm:regularity} we get
\begin{multline*}
\norm{u_R }^2_{\mathsf{L}^2((0,T);\mathsf{L}^2_{\eta_R}(Q_R; \mathrm{H}^{4}_\mu(\mathbb{R}^d)))} + \norm{\nabla_x u_R }^2_{\mathsf{L}^2((0,T);\mathsf{L}^2_{\eta_R}(Q_R;\mathrm{H}^{2}_\mu(\mathbb{R}^d)))}\\
\leq C'\big(\norm{g}^2_{\mathrm{H}^2_{x,v}(Q_R \times \mathbb{R}^d)} + \norm{f}^2_{\mathrm{H}^2_{t,x,v}((0,T)\times Q_R \times \mathbb{R}^d)}\big),
\end{multline*}
for a constant $C'=C'(d,T,\norm{\nabla_x^2 U_R}_\infty, \norm{\nabla_x^3 U_R}_\infty)>0$. Retracing our estimates in \cref{thm:regularity}, we remark that $C'$ only depends on $R$ through the norms $\norm{\nabla_x^2 U}_\infty$, $\norm{\nabla_x^3 U}_\infty$. An application of \cref{lem:global:tech} now completes the proof.
\end{proof}
Consequently, combining \cref{lem:global:energy} and \cref{cor:global:urexist} we see that
\begin{multline}\label{eq:ur:globalbnorm}
\norm{u_R}^2_{\mathsf{L}^2((0,T);\mathsf{L}^2_\eta(\mathbb{R}^d;\mathrm{H}^4_\mu(\mathbb{R}^d)))} + \norm{\nabla_x u_R}^2_{\mathsf{L}^2((0,T);\mathsf{L}^2_\eta(\mathbb{R}^d;\mathrm{H}^2_\mu(\mathbb{R}^d)))}\\
\leq C \big(\norm{g}^2_{\mathrm{H}^2_{x,v}(\mathbb{R}^{d}\times\mathbb{R}^d)} + \norm{f}^2_{\mathrm{H}^2_{t,x,v}((0,T)\times\mathbb{R}^{d}\times\mathbb{R}^d)} \big),
\end{multline}
for a constant $C$ which is independent of $R$. In particular, the family $\{u_R\}$ is uniformly bounded in the $\mathrm{H}^2_{t,x,v}((0,T)\times\mathbb{R}^d\times\mathbb{R}^d)$ norm, and $u_R$ satisfies the equation
\begin{align*}
(\partial_t - \mathcal{L}) u_R = \nabla_x(U-U_R)\cdot\nabla_v u_R+ f.
\end{align*}
Hence, after passing to a subsequence, $u_R\rightharpoonup u$ in $\mathrm{H}^2_{t,x,v}((0,T)\times\mathbb{R}^{d}\times\mathbb{R}^d)$.
Furthermore, since we assume that $f$ and $g$ are smooth, by \cref{thm:regularity} we have $u\in \mathrm{H}^k_{t,x,v}((0,T)\times\mathbb{R}^d\times\mathbb{R}^d)$ for all $k\geq 2$.
In particular, by the Sobolev embedding theorem we have that $u$ is smooth. However, this does not necessarily imply that $u$ is in the space $\mathrm{H}^1_\mathrm{kin}((0,T)\times\mathbb{R}^d\times\mathbb{R}^d)$ as the term $\nabla_x U\cdot\nabla_v u$ cannot readily be controlled on $(0,T)\times\mathbb{R}^d\times\mathbb{R}^d$ by the $\mathrm{H}^2_{t,x,v}((0,T)\times\mathbb{R}^d\times\mathbb{R}^d)$ norm.
Therefore, proceeding as in the proof of \cref{cor:l2initial} and \cref{Thm1}, we see that to conclude $u_R\rightharpoonup u$ in $\mathrm{H}^1_\mathrm{kin}((0,T)\times\mathbb{R}^d\times\mathbb{R}^d)$, and that $u$ is a weak solution to \cref{ivp1}, we have to prove that
\begin{align}\label{claimsu-}
&\mbox{$\nabla_x(U-U_R)\cdot\nabla_v u_R \rightharpoonup 0$ in $\mathsf{L}^2((0,T);\mathsf{L}^2_\eta(\mathbb{R}^d;\mathsf{L}^2_\mu(\mathbb{R}^d)))$}.
\end{align}
Note that, by construction, since $R \geq R_0$ we have $\operatorname{supp}(P) \subset B_R(0)$. As such, if $(U-U_R)(x)\neq 0$, it holds that
\begin{align*}
(U-U_R)(x)=a(1-\varphi_R(x))|x|^2.
\end{align*}
In particular,
\begin{align*}
\nabla_x (U-U_R)(x)=2a(1-\varphi_R(x))x-a\nabla_x\varphi_R(x)|x|^2,
\end{align*}
and, given $x\in\mathbb{R}^d$, we have that
$\nabla_x (U-U_R)(x)\to 0$ pointwise as $R\to\infty$, since $\varphi_R \equiv 1$ in a neighborhood of $x$ once $R$ is sufficiently large.
Using this, the fact that functions in $\mathsf{L}^2((0,T);\mathsf{L}^2_\eta(\mathbb{R}^d;\mathsf{L}^2_\mu(\mathbb{R}^d)))$ with compact support are dense in $\mathsf{L}^2((0,T);\mathsf{L}^2_\eta(\mathbb{R}^d;\mathsf{L}^2_\mu(\mathbb{R}^d)))$, and the global bound of
$\nabla_v u_R$ in \cref{eq:ur:globalbnorm}, we see that to prove \cref{claimsu-} it suffices to prove that
\begin{align}\label{claimsu}
&\norm{\nabla_x(U-U_R)\cdot\nabla_v u_R}_{\mathsf{L}^2((0,T);\mathsf{L}^2_\eta(\mathbb{R}^d;\mathsf{L}^2_\mu(\mathbb{R}^d)))}
\end{align}
is uniformly bounded as a function of $R$ if $R\geq R_0$.
To prove the claim in \cref{claimsu} we introduce the measure
\begin{align*}
\, \mathrm{d}\zeta = Z_\zeta^{-1}\, \mathrm{d}\eta\, \mathrm{d}\mu,
\end{align*}
where $Z_\zeta$ is a normalizing factor making $\zeta$ a probability measure on $\mathbb{R}^d\times\mathbb{R}^d$. To complete the argument we will use the following proposition.
\begin{proposition}\label{prop:global:entropy}
Let $\zeta$ be given as above. Assume that $f(x, v) \in \mathrm{H}^1_\zeta(\mathbb{R}^{2d})$, and that $g(x,v)$ is a function such that
\begin{align*}
\iint_{\mathbb{R}^{2d}} e^{g^2} \, \mathrm{d}\zeta <\infty.
\end{align*}
Then there exists a constant $C= C(\zeta)$ such that
\begin{align*}
\iint f^2 g^2 \, \mathrm{d}\zeta \leq \bigl( C \bigl (\|\nabla_x f\|^2_{\mathsf{L}^2_\zeta}+\|\nabla_v f\|^2_{\mathsf{L}^2_\zeta}\bigr ) + 2\|f\|^2_{\mathsf{L}^2_\zeta}\bigr)\iint e^{g^2}\, \mathrm{d}\zeta.
\end{align*}
\end{proposition}
\begin{proof} Note that by homogeneity it suffices to prove the stated inequality assuming that $\|f\|_{\mathsf{L}^2_\zeta}=1$. Given $a,b\geq 0$, the following inequality is valid, see \cite{ledouxConcentrationMeasureLogarithmic1999a},
\begin{align*}
a b \leq a \log a + a + e^b.
\end{align*}
Using this inequality we see that
\begin{align*}
\iint f^2 g^2 \, \mathrm{d}\zeta
&\leqslant \iint f^2\log(f^2) \, \mathrm{d}\zeta + 1 + \iint e^{g^2} \, \mathrm{d}\zeta\\
&\leqslant \bigl(2\iint f^2\log(f) \, \mathrm{d}\zeta + 2\bigr)\iint e^{g^2}\, \mathrm{d}\zeta,
\end{align*}
where we have used the fact that $e^{g^2}\geq 1$ and that $\zeta$ is a probability measure. Finally, applying the logarithmic Sobolev inequality for $\zeta$, see \cite{bakryAnalysisGeometryMarkov2014a}, we see that
\begin{align*}
\iint f^2\log(f) \, \mathrm{d}\zeta\leq C \bigl (\|\nabla_x f\|^2_{\mathsf{L}^2_\zeta}+\|\nabla_v f\|^2_{\mathsf{L}^2_\zeta}\bigr ),
\end{align*}
which completes the proof.
\end{proof}
Now, given $\delta>0$, we have
\begin{align*}
&\norm{\nabla_x (U-U_R)\cdot\nabla_v u_R}^2_{\mathsf{L}^2_\eta(\mathbb{R}^d;\mathsf{L}^2_\mu(\mathbb{R}^d))} = Z_\zeta \iint_{\mathbb{R}^{2d}} \frac{\abs{\nabla_v u_R}^2}{\delta} \delta \abs{\nabla_x (U-U_R)}^2 \, \mathrm{d}\zeta.
\end{align*}
Letting
\begin{align*}
I_{R,\delta}:=\iint_{\mathbb{R}^{2d}} e^{\delta \abs{\nabla_x (U-U_R)}^2} \, \mathrm{d}\zeta,
\end{align*}
and applying \cref{prop:global:entropy} we obtain for a constant $C=C(\zeta) > 1$ that
\begin{align*}
&
\norm{\nabla_x (U-U_R)\cdot\nabla_v u_R}^2_{\mathsf{L}^2((0,T);\mathsf{L}^2_\eta(\mathbb{R}^d;\mathsf{L}^2_\mu(\mathbb{R}^d)))}
\\
&
\leq
\frac{CZ_\zeta}{\delta}\bigl ( \|\nabla_x \nabla_v u_R\|^2_{\mathsf{L}^2((0,T);\mathsf{L}^2_\zeta(\mathbb{R}^{2d}))}
+
\|\nabla_v \nabla_v u_R\|^2_{\mathsf{L}^2((0,T);\mathsf{L}^2_\zeta(\mathbb{R}^{2d}))}\bigr )I_{R,\delta}
\\
&
+
2\frac{Z_\zeta}{\delta}\|\nabla_v u_R\|^2_{\mathsf{L}^2((0,T);\mathsf{L}^2_\zeta(\mathbb{R}^{2d}))}I_{R,\delta}
.
\end{align*}
Thus by the above and \cref{eq:ur:globalbnorm} we have for a constant $C > 1$ independent of $R$, that
\begin{multline*}
\norm{\nabla_x (U-U_R)\cdot\nabla_v u_R}^2_{\mathsf{L}^2((0,T);\mathsf{L}^2_\eta(\mathbb{R}^d;\mathsf{L}^2_\mu(\mathbb{R}^d)))}
\\
\leq
C \frac{I_{R,\delta}}{\delta}
\left (\norm{g}^2_{\mathrm{H}^2_{x,v}(\mathbb{R}^d \times \mathbb{R}^d)} + \norm{f}^2_{\mathrm{H}^2_{t,x,v}((0,T) \times \mathbb{R}^d \times \mathbb{R}^d)} \right ).
\end{multline*}
Furthermore, by \cref{lem:global:tech} we also have $\abs{\nabla_x (U-U_R)}^2 \leq C \abs{\nabla_x U}^2$ in $\mathbb{R}^d \setminus B_{R_0}$ for a constant $C$ independent of $R$. Thus, choosing $\delta$ small and independent of $R$, we can conclude that there exists a constant $C$ independent of $R$ such that
\begin{align*}
I_{R,\delta} = \iint_{\mathbb{R}^{2d}} e^{\delta \abs{\nabla_x (U-U_R)}^2} \, \mathrm{d}\zeta \leq C.
\end{align*}
This completes the proof of \cref{claimsu} and \cref{Thm2}.
\emergencystretch=2em
\printbibliography
\end{document} |
\begin{document}
\title{Shared-Memory Parallel Maximal Clique Enumeration}
\author{\IEEEauthorblockN{Apurba Das}
\IEEEauthorblockA{
Iowa State University\\
[email protected]}
\and
\IEEEauthorblockN{Seyed-Vahid Sanei-Mehri}
\IEEEauthorblockA{
Iowa State University\\
[email protected]}
\and
\IEEEauthorblockN{Srikanta Tirthapura}
\IEEEauthorblockA{
Iowa State University\\
[email protected]}
}
\maketitle
\begin{abstract}
We present shared-memory parallel methods for Maximal Clique Enumeration (MCE) from a graph. MCE is a fundamental and well-studied graph analytics task, and is a widely used primitive for identifying dense structures in a graph. Due to its computationally intensive nature, parallel methods are imperative for dealing with large graphs. However, surprisingly, there do not yet exist scalable and parallel methods for MCE on a shared-memory parallel machine. In this work, we present efficient shared-memory parallel algorithms for MCE, with the following properties: (1)~the parallel algorithms are provably work-efficient relative to a state-of-the-art sequential algorithm, (2)~the algorithms have a provably small parallel depth, showing that they can scale to a large number of processors, and (3)~our implementations on a multicore machine show a good speedup and scaling behavior with increasing number of cores, and are substantially faster than prior shared-memory parallel algorithms for MCE.
\end{abstract}
\section{Introduction}
\label{sec:intro}
We study the problem of Maximal Clique Enumeration (MCE) from a graph, which requires enumerating all cliques (complete subgraphs) in the graph that are maximal. A clique $C$ in a graph $G = (V,E)$ is a dense subgraph such that every pair of vertices in $C$ is directly connected by an edge. A clique $C$ is said to be maximal when there is no clique $C'$ such that $C$ is a proper subgraph of $C'$. Maximal cliques are perhaps the most fundamental dense subgraphs, and MCE has been widely used in diverse research areas, such as clustering and community detection in social and biological networks~\cite{PD+05} and in genomics~\cite{RWY07}. It also has applications in finding common substructures in chemical compounds~\cite{KA+14}, mining from biological data~\cite{GA+93, HO+03, MBR04, CC05, JB06, ZP+08}, and inference from graphical models~\cite{KF09}.
MCE is a computationally hard problem since it is harder than the problem of finding the {\em maximum} clique, which is a classical NP-hard combinatorial problem. The computational cost of enumerating maximal cliques can be higher than the cost of finding the maximum clique, since the output size (set of all maximal cliques) may itself be very large, in the worst case. In particular, Moon and Moser~\cite{MM65} showed that a graph on $n$ vertices can have as many as $3^{n/3}$ maximal cliques, which is proven to be a tight bound. Real-world networks typically do not have cliques of such high complexity and it is possible to enumerate maximal cliques from large graphs. The literature is rich on sequential algorithms for MCE. Bron and Kerbosch~\cite{BK73} introduced a backtracking search method to enumerate maximal cliques. Tomita et al.~\cite{TTT06} used the idea of ``pivoting'' in the backtracking search, which led to a significant improvement in the runtime. This has been followed up by further work, such as that due to Eppstein et al.~\cite{ELS10}, who used a degeneracy-based vertex ordering scheme on top of the pivot selection strategy.
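The Moon--Moser bound is attained by the complete multipartite graph $K_{3,3,\dots,3}$: each maximal clique picks exactly one vertex from each part, giving $3^{n/3}$ of them. A brute-force check of this for $n=9$ (the helper functions below are ours, written only for illustration):

```python
from itertools import combinations

def is_clique(edges, S):
    # S is a clique iff every pair of vertices in S is joined by an edge.
    return all(frozenset(p) in edges for p in combinations(S, 2))

def count_maximal_cliques(n, edges):
    cliques = [set(S) for k in range(1, n + 1)
               for S in combinations(range(n), k) if is_clique(edges, S)]
    # A clique is maximal iff it is not properly contained in another clique.
    return sum(1 for C in cliques if not any(C < D for D in cliques))

# Moon-Moser graph for n = 9: complete tripartite K_{3,3,3}; vertices u, v
# are adjacent iff they lie in different parts (u // 3 != v // 3).
edges = {frozenset((u, v)) for u in range(9) for v in range(9)
         if u // 3 != v // 3}
print(count_maximal_cliques(9, edges))  # 27 = 3^(9/3)
```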
Sequential approaches to MCE can lead to high runtimes on large graphs. Based on our experiments, a real-world network {\tt orkut} with approximately $3$ million vertices and $117$ million edges requires approximately $8$ hours to enumerate all maximal cliques using an efficient sequential algorithm due to Tomita et al.~\cite{TTT06}. Graphs that are larger and/or more complex cannot be handled by sequential algorithms with a reasonable turnaround time, and the high computational complexity of MCE calls for parallel methods.
In this work, we consider shared memory parallel methods for MCE. In the shared memory model, the input graph can reside within globally shared memory, and multiple threads can work in parallel on enumerating maximal cliques. Shared memory parallelism is attractive today since machines with tens to hundreds of cores and hundreds of Gigabytes of shared memory are readily available. The advantages of using a shared memory approach over a distributed memory approach are: (1)~Unlike distributed memory, it is not necessary to divide the graph into subgraphs and communicate the subgraphs among processors; in shared memory, different threads can work with a single shared copy of the graph. (2)~Subproblems generated during MCE are often irregular, and it is hard to predict which subproblems are small and which are large while initially dividing the problem into subproblems. With a shared memory method, it is easy to further subdivide subproblems and process them in parallel; with a distributed memory method, handling such irregularly sized subproblems in a load-balanced manner requires greater coordination and is more complex.
Prior works on parallel MCE have largely focused on distributed memory algorithms~\cite{WY+09, SS+09, LGG10, SMT+15, XCF16}. There are a few works on shared-memory parallel algorithms~\cite{ZA+05, DW+09, LP+17}. However, these algorithms do not scale to larger graphs due to memory or computational bottlenecks -- either the algorithms miss out on significant pruning opportunities as in~\cite{DW+09}, or they need to generate a large number of non-maximal cliques as in~\cite{ZA+05, LP+17}.
\subsection{Our Contributions}
We design shared-memory parallel algorithms for enumerating all maximal cliques in a simple graph. Our contributions are as follows:
\textbf{Theoretically Efficient Parallel Algorithm: } We present a shared-memory parallel algorithm ${\tt ParTTT}\xspace$ that takes as input a graph $G$ and enumerates all maximal cliques in $G$. ${\tt ParTTT}\xspace$ is an efficient parallelization of the algorithm due to Tomita et al.~\cite{TTT06}. Our analysis of ${\tt ParTTT}\xspace$ using a work-depth model~\cite{S17} of computation shows that it is work-efficient when compared with~\cite{TTT06} and has a low parallel depth. To our knowledge, this is the first shared memory parallel algorithm for MCE with such provable properties.
\textbf{Optimized Parallel Algorithm:} We present the following ideas to further improve the practical performance of ${\tt ParTTT}\xspace$, leading to Algorithm~${\tt ParMCE}\xspace$. First, instead of starting with a single task that spawns recursive subtasks as it proceeds, which leads to a lack of parallelism at the top level of recursion, we start with multiple parallel subtasks. To achieve this, we consider per-vertex parallelization, where a separate subproblem is created for each vertex and the different subproblems are processed in parallel. Each subproblem is required to enumerate cliques that contain the assigned vertex, where care is taken to prevent overlap between subproblems, and to balance the load between subproblems.
Each per-vertex subproblem is further processed in parallel using ${\tt ParTTT}\xspace$. This additional (recursive) level of parallelism is useful since the different per-vertex subproblems may have significantly different computational costs, and having each run as a separate sequential task may lead to uneven load balance. To further address load balance, we consider different methods for ranking the vertices, so that the ranking functions can be used in creating subproblems that are balanced as much as possible. For ranking the vertices, we use metrics such as degree, triangle count, and degeneracy number of the vertices.
\textbf{Experimental Evaluation: }We experimentally evaluate our algorithm and show that ${\tt ParMCE}\xspace$ is \textbf{15x-31x} faster than an efficient sequential algorithm (due to Tomita et al.~\cite{TTT06}) on a multicore machine with $32$ physical cores and $256$G RAM. For example, on the {\tt orkut} network with around $3$M vertices, $117$M edges, and $2$B maximal cliques\footnote{M and B stand for million and billion, respectively.}, {\tt ParTTT}\xspace achieves a \textbf{14x} parallel speedup over the sequential algorithm, and the optimized ${\tt ParMCE}\xspace$ achieves a \textbf{16x} speedup. In contrast, prior shared memory parallel algorithms for MCE~\cite{ZA+05,DW+09,LP+17} failed to handle the input graphs that we considered, and either ran out of memory (\cite{ZA+05,LP+17}) or did not complete in 5 hours (\cite{DW+09}).
\textbf{Roadmap.} The rest of the paper is organized as follows. We present preliminaries in Section~\ref{sec:prelims}, followed by a description of the algorithm and analysis in Section~\ref{sec:algo}, an experimental evaluation in Section~\ref{sec:exp}, and conclusions in Section~\ref{sec:conclude}.
\section{Related Work}
\label{sec:related}
Maximal Clique Enumeration (MCE) from a graph is a fundamental problem that has been extensively studied for more than two decades, and there are multiple prior works on sequential and parallel algorithms. We first discuss sequential algorithms for MCE, followed by parallel algorithms.
\textbf{Sequential MCE:} Bron and Kerbosch~\cite{BK73} presented an algorithm for MCE based on depth-first-search. Following their work, a number of algorithms have been presented~\cite{TI+77,CN85,TTT06,Koch01,MU04,CK08,ELS10}. The algorithm of Tomita et al.~\cite{TTT06} has a worst-case time complexity $O(3^{\frac{n}{3}})$ for an $n$ vertex graph, which is optimal in the worst case, since the size of the output can be as large as $O(3^{\frac{n}{3}})$~\cite{MM65}. Eppstein et al.~\cite{ELS10,ES11} present an algorithm for sparse graphs whose complexity can be parameterized by the degeneracy of the graph, a measure of graph sparsity.
Another approach to MCE is a class of ``output-sensitive" algorithms whose time complexity for enumerating maximal cliques is a function of the size of the output. There exist many such output-sensitive algorithms for MCE, including~\cite{CN85,MU04,TI+77}, which can be viewed as instances of a general paradigm called ``reverse-search''~\cite{AF93}. The output-sensitive algorithm due to Makino and Uno~\cite{MU04} provides the best theoretical worst-case time complexity among output-sensitive algorithms. In terms of practical performance, the best output-sensitive algorithms~\cite{CN85,MU04} are not as efficient as the best depth-first-search based algorithms such as~\cite{TTT06,ELS10}. Other sequential methods for MCE include algorithms due to Kose et al.~\cite{KW+01}, Johnson et al.~\cite{JYP88}, and Cheng et al.~\cite{CK+11}. There have also been works on maintaining maximal cliques in a dynamic graph~\cite{DST16,S04,OV10}.
\textbf{Parallel MCE: }There are multiple prior works on parallel algorithms for MCE~\cite{ZA+05,DW+06,WY+09,SS+09,LGG10,CZ+12,SMT+15,XCF16}. We first discuss shared memory algorithms and then distributed memory algorithms.
Zhang et al.~\cite{ZA+05} presented a shared memory parallel algorithm based on the sequential algorithm due to Kose et al.~\cite{KW+01}. This algorithm computes maximal cliques in an iterative manner, and in each iteration, it maintains a set of cliques that are not necessarily maximal and, for each such clique, maintains the set of vertices that can be added to form larger cliques. This algorithm does not provide a theoretical guarantee on the runtime and suffers from a large memory requirement. Du et al.~\cite{DW+06} present an output-sensitive shared-memory parallel algorithm for MCE, but their algorithm suffers from poor load balancing, as also pointed out by Schmidt et al.~\cite{SS+09}. Lessley et al.~\cite{LP+17} present a shared memory parallel algorithm that generates maximal cliques using an iterative method, where in each iteration, cliques of size $(k-1)$ are extended to cliques of size $k$. The algorithm of~\cite{LP+17} is memory-intensive, since it needs to store a number of intermediate non-maximal cliques in each iteration. Note that the number of non-maximal cliques may be far higher than the number of maximal cliques that are finally emitted, and a number of distinct non-maximal cliques may finally lead to a single maximal clique. In the extreme case, a complete graph on $n$ vertices has $(2^n-1)$ non-maximal cliques, and only a single maximal clique. We present a comparison of our algorithm with~\cite{LP+17,ZA+05,DW+06} in later sections.
Distributed memory parallel algorithms for MCE include works due to Wu et al.~\cite{WY+09}, designed for the MapReduce framework, Lu et al.~\cite{LGG10}, which is based on the sequential algorithm due to Tsukiyama et al.~\cite{TI+77}, Xu et al.~\cite{XCF16}, and Svendsen et al.~\cite{SMT+15}.
Other works on parallel algorithms for enumerating dense subgraphs from a massive graph include parallel algorithms for enumerating $k$-cores~\cite{MD+13,DDZ14,KM17,SSP17}, $k$-trusses~\cite{SSP17,KM17-2,KM17-3}, nuclei~\cite{SSP17}, and distributed memory algorithms for enumerating bicliques~\cite{MT17}.
\section{Preliminaries}
\label{sec:prelims}
We consider a simple undirected graph without self loops or multiple edges. For graph $G$, let $V(G)$ denote the set of vertices in $G$ and $E(G)$ denote the set of edges in $G$. Let $n$ denote the size of $V(G)$, and $m$ denote the size of $E(G)$. For vertex $u \in V(G)$, let $\Gamma_G(u)$ denote the set of vertices adjacent to $u$ in $G$. When the graph $G$ is clear from the context, we use $\Gamma(u)$ to mean $\Gamma_G(u)$. Let $\mathcal{C}(G)$ denote the set of all maximal cliques in $G$.
\textbf{Sequential Algorithm ${\tt TTT}\xspace$: } The algorithm due to Tomita, Tanaka, and Takahashi~\cite{TTT06}, which we call ${\tt TTT}\xspace$, is a recursive backtracking-based algorithm for enumerating all maximal cliques in an undirected graph, with a worst-case time complexity of $O(3^{n/3})$ where $n$ is the number of vertices in the graph. In practice, this is one of the most efficient sequential algorithms for MCE. Since we use ${\tt TTT}\xspace$ as a subroutine in our parallel algorithms, we present a short description here.
In any recursive call, ${\tt TTT}\xspace$ maintains three disjoint sets of vertices $K$, ${\tt cand}\xspace$, and ${\tt fini}\xspace$ where $K$ is a candidate clique to be extended, ${\tt cand}\xspace$ is the set of vertices that can be used to extend $K$, and ${\tt fini}\xspace$ is the set of vertices that are adjacent to $K$, but need not be used to extend $K$ (these are being explored along other search paths). Each recursive call iterates over vertices from ${\tt cand}\xspace$ and in each iteration, a vertex $q \in {\tt cand}\xspace$ is added to $K$ and a new recursive call is made with parameters $K\cup\{q\}$, ${\tt cand}\xspace_q$, and ${\tt fini}\xspace_q$ for generating all maximal cliques of $G$ that extend $K\cup\{q\}$ but do not contain any vertices from ${\tt fini}\xspace_q$. The sets ${\tt cand}\xspace_q$ and ${\tt fini}\xspace_q$ can only contain vertices that are adjacent to all vertices in $K\cup \{q\}$. The clique $K$ is a maximal clique when both ${\tt cand}\xspace$ and ${\tt fini}\xspace$ are empty.
The ingredient that distinguishes ${\tt TTT}\xspace$ from the algorithm due to Bron and Kerbosch~\cite{BK73} is the use of a ``pivot'': a vertex $u\in {\tt cand}\xspace\cup{\tt fini}\xspace$ is selected that maximizes $\vert {\tt cand}\xspace \cap \Gamma(u) \vert$. Once the pivot $u$ is computed, it suffices to iterate over the vertices of ${\tt cand}\xspace \setminus \Gamma(u)$ instead of all vertices of ${\tt cand}\xspace$. The pseudocode of ${\tt TTT}\xspace$ is presented in Algorithm~\ref{algo:tomita}. For the initial call, $K$ and ${\tt fini}\xspace$ are initialized to the empty set, and ${\tt cand}\xspace$ is the set of all vertices of $G$.
\begin{algorithm}
\DontPrintSemicolon
\caption{${\tt TTT}\xspace(\mathcal{G},K,{\tt cand}\xspace,{\tt fini}\xspace)$}
\label{algo:tomita}
\KwIn{$\mathcal{G}$ - The input graph \\ \hspace{1cm} $K$ - a clique to extend, \\
${\tt cand}\xspace$ - Set of vertices that can be used to extend $K$, \\
${\tt fini}\xspace$ - Set of vertices that have been used to extend $K$}
\KwOut{Set of all maximal cliques of $G$ containing $K$ and vertices from ${\tt cand}\xspace$ but not containing any vertex from ${\tt fini}\xspace$}
\If{$({\tt cand}\xspace = \emptyset)$ \& $({\tt fini}\xspace = \emptyset)$}{
Output $K$ and return\;
}
${\tt pivot}\xspace \gets (u \in {\tt cand}\xspace \cup {\tt fini}\xspace)$ such that $u$ maximizes the size of ${\tt cand}\xspace \cap \Gamma_{\mathcal{G}}(u)$\;
${\tt ext}\xspace \gets {\tt cand}\xspace - \Gamma_{\mathcal{G}}({\tt pivot}\xspace)$\;
\For{$q \in$ {\tt ext}\xspace}{
$K_q \gets K\cup\{q\}$\;
${\tt cand}\xspace_q \gets {\tt cand}\xspace\cap\Gamma_{\mathcal{G}}(q)$\;
${\tt fini}\xspace_q \gets {\tt fini}\xspace\cap\Gamma_{\mathcal{G}}(q)$\;
${\tt cand}\xspace \gets {\tt cand}\xspace-\{q\}$\;
${\tt fini}\xspace \gets {\tt fini}\xspace \cup\{q\}$\;
${\tt TTT}\xspace(\mathcal{G},K_q,{\tt cand}\xspace_q,{\tt fini}\xspace_q)$\;
}
\end{algorithm}
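The recursion above can be rendered compactly; the following is a minimal sequential Python sketch of Algorithm~\ref{algo:tomita} (the graph is a dictionary mapping each vertex to its neighbor set; the names \texttt{ttt} and \texttt{maximal\_cliques} are ours and do not refer to any implementation discussed in this paper):

```python
def ttt(adj, K, cand, fini, out):
    """Pivot-based TTT recursion: adj maps each vertex to its neighbor
    set; K is the clique built so far; cand/fini are the candidate and
    finished sets. Maximal cliques are appended to out as frozensets."""
    if not cand and not fini:
        out.append(frozenset(K))  # K is maximal
        return
    # Pivot: a vertex of cand | fini maximizing |cand & Gamma(u)|.
    pivot = max(cand | fini, key=lambda u: len(cand & adj[u]))
    # It suffices to branch on cand \ Gamma(pivot).
    for q in list(cand - adj[pivot]):
        ttt(adj, K | {q}, cand & adj[q], fini & adj[q], out)
        cand.remove(q)  # q is finished: move it from cand to fini
        fini.add(q)

def maximal_cliques(adj):
    out = []
    ttt(adj, set(), set(adj), set(), out)
    return out
```

On a triangle $\{1,2,3\}$ with a pendant edge $\{3,4\}$, this reports exactly the two maximal cliques $\{1,2,3\}$ and $\{3,4\}$.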
\textbf{Parallel Cost Model: } For analyzing our shared-memory parallel algorithms, we use the CRCW PRAM model~\cite{BM10}, a model of shared-memory parallel computation that allows concurrent reads and concurrent writes. Our parallel algorithm can also work in other related models of shared memory, such as EREW PRAM (exclusive reads and exclusive writes), with a logarithmic-factor increase in work as well as in parallel depth. We measure the effectiveness of a parallel algorithm using the {\em work-depth} model~\cite{S17}. Here, the ``work'' of a parallel algorithm is the total number of operations it performs, and the ``depth'' (also called the ``parallel time'' or the ``span'') is the longest chain of dependent computations in the algorithm. A parallel algorithm is said to be {\em work-efficient} if its total work is of the same order as the work of the best sequential algorithm\footnote{Note that work-efficiency in the CRCW PRAM model does not imply work-efficiency in the EREW PRAM model}. We aim for work-efficient algorithms with a low depth, ideally poly-logarithmic in the size of the input. Using Brent's theorem~\cite{BM10}, a parallel algorithm with work $w$ and depth $d$ can theoretically achieve $\Theta(p)$ speedup on $p$ processors as long as $p = O(w/d)$.
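As a quick sanity check of this bound, the following sketch evaluates the Brent bound $T_p = O(w/p + d)$ for hypothetical values of work and depth (the numbers are illustrative, not measurements):

```python
def brent_time(work, depth, p):
    """Brent's theorem: with p processors, parallel time is O(work/p + depth)."""
    return work / p + depth

def speedup(work, depth, p):
    # Sequential time is proportional to the total work.
    return work / brent_time(work, depth, p)

# With p <= work/depth, the speedup stays within a constant factor of p:
w, d = 10**6, 100            # hypothetical work and depth
s = speedup(w, d, 1000)      # p = 1000 <= w/d = 10000
```

Here $T_{1000} = 10^6/1000 + 100 = 1100$, so the speedup is about $909$, i.e.\ within a constant factor of $p = 1000$.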
\section{Parallel MCE Algorithms}
\label{sec:algo}
In this section, we present shared-memory parallel algorithms for MCE. We first describe a parallel algorithm ${\tt ParTTT}\xspace$ and an analysis of its theoretical properties, where ${\tt ParTTT}\xspace$ is a parallel version of the ${\tt TTT}\xspace$ algorithm. Then, we discuss practical bottlenecks in ${\tt ParTTT}\xspace$, leading us to another algorithm ${\tt ParMCE}\xspace$ with a better practical runtime performance.
\subsection{Algorithm ${\tt ParTTT}\xspace$}
Our first algorithm ${\tt ParTTT}\xspace$ is a work-efficient parallelization of the sequential ${\tt TTT}\xspace$ algorithm. The two main components of ${\tt TTT}\xspace$ (Algorithm~\ref{algo:tomita}) are (1)~Selection of the pivot element (Line~$3$) and (2)~Sequential backtracking for extending candidate cliques until all maximal cliques are explored (Line~$5$ to Line~$11$). We discuss how to parallelize each of these steps.
\textbf{Parallel Pivot Selection: } Within a single recursive call of ${\tt ParTTT}\xspace$, the pivot element is computed in parallel using two steps, as described in ${\tt ParPivot}\xspace$ (Algorithm~\ref{algo:pivot}). In the first step, the size of the intersection ${\tt cand}\xspace \cap \Gamma(u)$ is computed in parallel for each vertex $u \in {\tt cand}\xspace \cup {\tt fini}\xspace$. In the second step, the vertex with the maximum intersection size is selected. The parallel algorithm for selecting a pivot is presented in Algorithm~\ref{algo:pivot}.
\begin{lemma}
\label{lem:parpivot}
The total work of ${\tt ParPivot}\xspace$ is $O\left(\sum_{w \in {\tt cand}\xspace \cup {\tt fini}\xspace} \min\{|{\tt cand}\xspace|, |\Gamma(w)|\}\right)$, which is $O(n^2)$, and its depth is $O(\log n)$.
\end{lemma}
\begin{IEEEproof}
If the sets ${\tt cand}\xspace$ and $\Gamma(w)$ are stored as hash sets, then for a vertex $w$ the size $t_w = \vert{\tt intersect}\xspace({\tt cand}\xspace, \Gamma(w))\vert$ can be computed sequentially in time $O(\min\{|{\tt cand}\xspace|, |\Gamma(w)|\})$: the intersection of two sets $S_1$ and $S_2$ can be found by taking the smaller of the two, say $S_2$, and searching for its elements within the larger one, say $S_1$. The computation of ${\tt intersect}\xspace(S_1,S_2)$ can be parallelized by executing the searches for the elements of $S_2$ in parallel, followed by counting the number of elements that lie in the intersection, which can also be done in parallel in a work-efficient manner with logarithmic depth. Since the maximum of a set of $n$ numbers can be computed using work $O(n)$ and depth $O(\log n)$, for each vertex $w$, $t_w$ can be computed using work $O(\min\{|{\tt cand}\xspace|, |\Gamma(w)|\})$ and depth $O(\log n)$. Once the different $t_w$ are computed, $argmax(\{t_w : w \in {\tt cand}\xspace\cup{\tt fini}\xspace\})$ can be computed using additional work $O(\vert {\tt cand}\xspace \cup {\tt fini}\xspace \vert)$ and depth $O(\log n)$. Hence, the total work of ${\tt ParPivot}\xspace$ is $O\left(\sum_{w \in {\tt cand}\xspace \cup {\tt fini}\xspace} \min\{|{\tt cand}\xspace|, |\Gamma(w)|\}\right)$. Since the sizes of ${\tt cand}\xspace$, ${\tt fini}\xspace$, and $\Gamma(w)$ are bounded by $n$, this is $O(n^2)$, but typically much smaller.
\end{IEEEproof}
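The intersection-by-smaller-set argument in the proof can be made concrete; the following Python sketch (the function names are our own illustration) counts $|S_1 \cap S_2|$ by probing the larger hash set with the elements of the smaller one, and uses it to select a pivot as in ${\tt ParPivot}\xspace$:

```python
def intersect_size(s1, s2):
    # Probe the larger set with elements of the smaller one:
    # expected time O(min(|s1|, |s2|)) with hash sets.
    small, large = (s1, s2) if len(s1) <= len(s2) else (s2, s1)
    return sum(1 for x in small if x in large)

def select_pivot(adj, cand, fini):
    # argmax over cand | fini of t_w = |cand & Gamma(w)|; each t_w is
    # independent, so in the parallel version the t_w are computed
    # concurrently, followed by a parallel max-reduction.
    return max(cand | fini, key=lambda w: intersect_size(cand, adj[w]))
```

On the triangle-plus-pendant graph used earlier, vertex $3$ is adjacent to every other vertex, so it is selected as the initial pivot.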
\begin{algorithm}
\DontPrintSemicolon
\caption{${\tt ParPivot}\xspace(\mathcal{G},K, {\tt cand}\xspace,{\tt fini}\xspace)$}
\label{algo:pivot}
\KwIn{
$K$ - a clique in $G$ that may be further extended \\
$\mathcal{G}$ - Input graph \\
${\tt cand}\xspace$ - Set of vertices that may extend $K$ \\
${\tt fini}\xspace$ - vertices that have been used to extend $K$}
\KwOut{
pivot vertex $u \in {\tt cand}\xspace\cup{\tt fini}\xspace$
}
\ForPar {$w \in {\tt cand}\xspace\cup{\tt fini}\xspace$}{
In parallel, compute $t_w \gets |{\tt intersect}\xspace({\tt cand}\xspace, \Gamma_{\mathcal{G}}(w))|$\;
}
In parallel, find $v \gets argmax(\{t_w : w \in {\tt cand}\xspace\cup{\tt fini}\xspace\})$\;
\Return{$v$}\;
\end{algorithm}
\textbf{Parallelization of Backtracking: } We first note that there is a sequential dependency among the different iterations within a recursive call of ${\tt TTT}\xspace$. In particular, the contents of the sets ${\tt cand}\xspace$ and ${\tt fini}\xspace$ in a given iteration are derived from their contents in the previous iteration. This sequential dependence of the updates of ${\tt cand}\xspace$ and ${\tt fini}\xspace$ prevents us from making the recursive calls of ${\tt TTT}\xspace$ for different vertices of ${\tt ext}\xspace$ in parallel. To remove this dependency, we adopt a different view of ${\tt TTT}\xspace$ that enables us to make the recursive calls in parallel: the elements of ${\tt ext}\xspace$, the vertices to be considered for extending a maximal clique, are arranged in a predefined total order, and we unroll the loop and explicitly compute the parameters ${\tt cand}\xspace$ and ${\tt fini}\xspace$ for the recursive calls.
Suppose $\langle v_1, v_2, ..., v_{\kappa} \rangle$ is the order in which the vertices of ${\tt ext}\xspace$ are to be processed. Each vertex $v_i \in {\tt ext}\xspace$, once added to $K$, should be removed from further consideration in ${\tt cand}\xspace$. To ensure this, instead of incrementally updating ${\tt cand}\xspace$ and ${\tt fini}\xspace$ with $v_i$ as in ${\tt TTT}\xspace$, in ${\tt ParTTT}\xspace$ we explicitly remove the vertices $v_1, v_2, ..., v_{i-1}$ from ${\tt cand}\xspace$ and add them to ${\tt fini}\xspace$ before making the recursive call. This way, the parameters of the $i$th iteration are computed independently of the prior iterations.
\begin{algorithm}
\DontPrintSemicolon
\caption{${\tt ParTTT}\xspace(\mathcal{G},K,{\tt cand}\xspace,{\tt fini}\xspace)$}
\label{algo:partomita}
\KwIn{$\mathcal{G}$ - The input graph \\ \hspace{1cm} $K$ - a non-maximal clique to extend \\
${\tt cand}\xspace$ - Set of vertices that may extend $K$ \\
${\tt fini}\xspace$ - vertices that have been used to extend $K$}
\KwOut{Set of all maximal cliques of $G$ containing $K$ and vertices from ${\tt cand}\xspace$ but not containing any vertex from ${\tt fini}\xspace$}
\If{$({\tt cand}\xspace = \emptyset)$ \& $({\tt fini}\xspace = \emptyset)$}{
Output $K$ and return\;
}
${\tt pivot}\xspace \gets {\tt ParPivot}\xspace(\mathcal{G},K,{\tt cand}\xspace,{\tt fini}\xspace)$\;
${\tt ext}\xspace[1..\kappa] \gets {\tt cand}\xspace - \Gamma_{\mathcal{G}}({\tt pivot}\xspace)$ \tcp{in parallel} \;
\ForPar{$i \in$ $[1..\kappa]$}{
$q\gets{\tt ext}\xspace[i]$\;
$K_q \gets K\cup\{q\}$\;
${\tt cand}\xspace_q \gets {\tt intersect}\xspace({\tt cand}\xspace\setminus{\tt ext}\xspace[1..i-1], \Gamma_{\mathcal{G}}(q))$\;
${\tt fini}\xspace_q \gets {\tt intersect}\xspace({\tt fini}\xspace\cup{\tt ext}\xspace[1..i-1], \Gamma_{\mathcal{G}}(q))$\;
${\tt ParTTT}\xspace(\mathcal{G},K_q,{\tt cand}\xspace_q,{\tt fini}\xspace_q)$\;
}
\end{algorithm}
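The loop unrolling in ${\tt ParTTT}\xspace$ can be checked on a small example. The sketch below (our own illustration) computes the parameters of every iteration independently, exactly as in Lines~8--9 of Algorithm~\ref{algo:partomita}, and verifies that they coincide with the incrementally updated parameters of ${\tt TTT}\xspace$:

```python
def unrolled_params(adj, cand, fini, ext):
    """Parameters of the i-th recursive call of ParTTT, computed without
    reference to earlier iterations: the prefix ext[0..i-1] is removed
    from cand and added to fini explicitly."""
    out = []
    for i, q in enumerate(ext):
        prefix = set(ext[:i])
        out.append((q, (cand - prefix) & adj[q], (fini | prefix) & adj[q]))
    return out

def sequential_params(adj, cand, fini, ext):
    """The same parameters, produced by the incremental updates of TTT."""
    c, f, out = set(cand), set(fini), []
    for q in ext:
        out.append((q, c & adj[q], f & adj[q]))
        c.remove(q)  # incremental update, as in Algorithm 1
        f.add(q)
    return out
```

Since the parameters of iteration $i$ no longer depend on iteration $i-1$, the iterations of the \texttt{for} loop can be issued in parallel.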
Next, we present an analysis of the total work and depth of the ${\tt ParPivot}\xspace$ and ${\tt ParTTT}\xspace$ algorithms.
\begin{lemma}
\label{lem:partomita}
The total work of ${\tt ParTTT}\xspace$ (Algorithm~\ref{algo:partomita}) is $O(3^{n/3})$ and its depth is $O(M\log{n})$, where $n$ is the number of vertices in the graph and $M$ is the size of a maximum clique in $G$.
\end{lemma}
\begin{IEEEproof}
First, we analyze the total work. The computation in ${\tt ParTTT}\xspace$ differs from ${\tt TTT}\xspace$ at Lines~9 and~10 of ${\tt ParTTT}\xspace$, where at iteration $i$ we remove all vertices $\{v_1, v_2, ..., v_{i-1}\}$ from ${\tt cand}\xspace$ and add all of them to ${\tt fini}\xspace$, as opposed to removing the single vertex $v_{i-1}$ from ${\tt cand}\xspace$ and adding it to ${\tt fini}\xspace$, as in ${\tt TTT}\xspace$ (Lines~9 and~10 of Algorithm~\ref{algo:tomita}). Therefore, ${\tt ParTTT}\xspace$ requires an additional $O(n)$ work per iteration for the independent computation of ${\tt cand}\xspace_q$ and ${\tt fini}\xspace_q$. The total work, excluding the call to ${\tt ParPivot}\xspace$, is $O(n^2)$. Adding the $O(n^2)$ work of ${\tt ParPivot}\xspace$ gives $O(n^2)$ total work for each single call of ${\tt ParTTT}\xspace$, excluding further recursive calls (Algorithm~\ref{algo:partomita}, Line 11), which is the same as in the original sequential algorithm ${\tt TTT}\xspace$ (Section 4, \cite{TTT06}). Hence, using Lemma~2 and Theorem~3 of \cite{TTT06}, the total work of ${\tt ParTTT}\xspace$ is the same as that of the sequential algorithm ${\tt TTT}\xspace$ and is bounded by $O(3^{n/3})$.
Next, we analyze the depth of the algorithm. The depth of ${\tt ParTTT}\xspace$ is the sum of the following components: (1)~the depth of ${\tt ParPivot}\xspace$, (2)~the depth of computing ${\tt ext}\xspace$, and (3)~the maximum depth of an iteration of the for loop from Line~6 to Line~11. By Lemma~\ref{lem:parpivot}, the depth of ${\tt ParPivot}\xspace$ is $O(\log{n})$. The depth of computing ${\tt ext}\xspace$ is $O(\log n)$, because it takes $O(1)$ time to check whether an element of ${\tt cand}\xspace$ lies in the neighborhood of ${\tt pivot}\xspace$ via a set membership query on the set of vertices adjacent to ${\tt pivot}\xspace$. Similarly, the depths of computing ${\tt cand}\xspace_q$ and ${\tt fini}\xspace_q$ at Lines~8 and~9 are $O(\log n)$ each. What remains is the depth of the recursive call of ${\tt ParTTT}\xspace$ at Line~10. The recursion continues until there is no further vertex with which to expand $K$; since each recursive call increases the size of $K$ by $1$, the recursion depth is at most the size $M$ of a maximum clique. Thus, the overall depth of ${\tt ParTTT}\xspace$ is $O(M\log{n})$.
\end{IEEEproof}
\begin{corollary}
\label{cor:partomita}
Using $P$ parallel processors that share memory, ${\tt ParTTT}\xspace$ (Algorithm~\ref{algo:partomita}) is a parallel algorithm for MCE with a worst-case parallel time of $O\left(\frac{3^{n/3}}{P} + M \log n\right)$. It is work-efficient as long as $P = O\left(\frac{3^{n/3}}{M \log n}\right)$, and it is also work-optimal.
\end{corollary}
\begin{IEEEproof}
The parallel time follows from Brent's theorem~\cite{BM10}, which states that the parallel time using $P$ processors is $O(w/P + d)$, where $w$ and $d$ are the work and the depth of the algorithm, respectively. If the number of processors is $P= O\left(\frac{3^{n/3}}{M \log n}\right)$, then using Lemma~\ref{lem:partomita} the parallel time is $O\left(\max\left\{\frac{3^{n/3}}{P} , M \log n\right\}\right) = O\left(\frac{3^{n/3}}{P}\right)$. The total work across all processors is $O(3^{n/3})$, which is worst-case optimal, since a graph can have as many as $3^{n/3}$ maximal cliques (Moon and Moser~\cite{MM65}).
\end{IEEEproof}
\subsection{Algorithm ${\tt ParMCE}\xspace$}
While ${\tt ParTTT}\xspace$ is a theoretically work-efficient parallel algorithm, it is less efficient in practice. One reason is the implementation of ${\tt ParPivot}\xspace$. While the worst-case work of ${\tt ParPivot}\xspace$ matches that of the pivoting routine in ${\tt TTT}\xspace$, in practice it may have a higher overhead, since the pivoting routine in ${\tt TTT}\xspace$ may take time less than $O(n^2)$. This can cause ${\tt ParTTT}\xspace$ to perform more work than ${\tt TTT}\xspace$, resulting in a lower speedup than theoretically expected.
We set out to derive a more efficient parallel implementation through a more selective use of ${\tt ParPivot}\xspace$: the cost of pivoting can be reduced by carefully choosing many pivots in parallel, instead of a single pivot element as at the beginning of ${\tt ParTTT}\xspace$. We first note that the cost of ${\tt ParPivot}\xspace$ is highest in the iteration in which the parameter $K$ (the clique so far) is empty. During this iteration, the set of vertices still to be considered, ${\tt cand}\xspace \cup {\tt fini}\xspace$, can be as large as the vertex set of the graph. To improve upon this, we can perform the first few steps of pivoting, when $K$ is empty, using a sequential algorithm. Once $K$ has at least one element, the number of vertices in ${\tt cand}\xspace \cup {\tt fini}\xspace$ still to be considered drops to no more than the size of the intersection of the neighborhoods of the vertices in $K$, which is typically much smaller than the number of vertices in the graph (it is at most the smallest degree of a vertex in $K$). Problem instances with $K$ set to a single vertex can be viewed as subproblems, and on each of these subproblems the overhead of ${\tt ParPivot}\xspace$ is much smaller, since the number of vertices that must be dealt with is also much smaller.
Based on this observation, we present a parallel algorithm ${\tt ParMCE}\xspace$ that works as follows. The algorithm can be viewed as considering, for each vertex $v \in V(G)$, the subgraph $G_v$ induced by the vertex $v$ and its neighborhood $\Gamma_G(v)$. It enumerates all maximal cliques from each subgraph $G_v$ in parallel using ${\tt ParTTT}\xspace$. While processing the subproblem $G_v$, it is important not to enumerate maximal cliques that are enumerated elsewhere, in other subproblems. To handle this, the algorithm considers a specific ordering of all vertices in $V$ such that $v$ is the least-ranked vertex in each maximal clique enumerated from $G_v$. The subgraphs $G_v$ are handled in parallel -- they need not be processed in any particular order. However, the ordering allows us to populate the ${\tt cand}\xspace$ and ${\tt fini}\xspace$ sets so that each maximal clique is enumerated in exactly one subproblem. The order in which the vertices are considered is defined by a ``rank'' function \textbf{rank}, which indicates the position of a vertex in the total order. The specific ordering that is used influences the total work of the algorithm, as well as the load balance of the parallel implementation.
\textbf{Load Balancing:} Observe that the costs of the subproblems $G_v$ may vary widely for two reasons: (1)~the subgraphs themselves may be of different sizes, depending on the vertex degrees, and (2)~the number and sizes of the maximal cliques containing $v$ can vary widely from one vertex to another. Clearly, the subproblems that deal with a large number of maximal cliques, or with maximal cliques of large size, are more expensive than others.
In order to keep the sizes of the subproblems approximately balanced, we use an idea from PECO~\cite{SMT+15}: we choose the rank function on the vertices in such a way that for any two vertices $v$ and $w$, \textbf{rank}($v$) $>$ \textbf{rank}($w$) if the complexity of enumerating maximal cliques from $G_v$ is higher than that of enumerating maximal cliques from $G_w$. By giving a higher rank to $v$ than to $w$, we decrease the complexity of the subproblem $G_v$, since the subproblem at $G_v$ need not enumerate maximal cliques that involve any vertex whose rank is less than that of $v$. Hence, the higher the rank of vertex $v$, the lower is its ``share'' of maximal cliques enumerated in $G_v$. We use this idea to approximately balance the workload across subproblems. The enhancements in ${\tt ParMCE}\xspace$ over PECO are as follows: (1)~PECO is designed for distributed memory, so the subgraphs and subproblems must be explicitly copied across the network, whereas ${\tt ParMCE}\xspace$ works in shared memory; and (2)~in ${\tt ParMCE}\xspace$, the per-vertex subproblem on $G_v$ is itself handled by a parallel algorithm, ${\tt ParTTT}\xspace$, whereas in PECO the subproblem for each vertex is handled by a sequential algorithm.
Note that it is computationally expensive to count the number of maximal cliques within $G_v$ exactly, and hence it is not practical to rank each vertex exactly according to the complexity of handling $G_v$. Instead, we estimate the complexity of handling $G_v$ using easy-to-evaluate metrics on the subgraphs. In particular, we consider the following:
\begin{itemize}
\item
\textbf{Degree Based Ranking: } For vertex $v$, define ${\tt rank}\xspace(v) = (d(v), id(v))$, where $d(v)$ and $id(v)$ are the degree and identifier of $v$, respectively. For two vertices $v$ and $w$, ${\tt rank}\xspace(v) > {\tt rank}\xspace(w)$ if $d(v) > d(w)$, or if $d(v) = d(w)$ and $id(v) > id(w)$; otherwise ${\tt rank}\xspace(v) < {\tt rank}\xspace(w)$.
\item
\textbf{Triangle Count Based Ranking: } For vertex $v$, define ${\tt rank}\xspace(v) = (t(v), id(v))$ where $t(v)$ is the number of triangles containing vertex $v$. This is more expensive to compute than degree based ranking, but may yield a better estimate of the complexity of maximal cliques within $G_v$.
\item
\textbf{Degeneracy Based Ranking~\cite{ELS10}: } For a vertex $v$, define ${\tt rank}\xspace(v) = (degen(v), id(v))$, where $degen(v)$ is the degeneracy of vertex $v$. A vertex $v$ has degeneracy number $k$ when it belongs to a $k$-core but not to a $(k+1)$-core, where a $k$-core is a maximal induced subgraph in which every vertex has degree at least $k$. The computational overhead of this ranking is due to computing the degeneracy of the vertices, which takes $O(n+m)$ time, where $n$ is the number of vertices and $m$ is the number of edges.
\end{itemize}
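The three ranking heuristics can be sketched as follows (a simple illustration, not our tuned implementation; the $O(n+m)$ degeneracy computation uses bucket queues, whereas the version below repeatedly takes a plain minimum for brevity):

```python
def degree_rank(adj):
    # rank(v) = (d(v), id(v)), compared lexicographically.
    return {v: (len(adj[v]), v) for v in adj}

def triangle_rank(adj):
    # rank(v) = (t(v), id(v)); t(v) counts triangles through v. Each
    # triangle at v is seen once per incident edge, hence the // 2.
    return {v: (sum(len(adj[v] & adj[u]) for u in adj[v]) // 2, v)
            for v in adj}

def degeneracy_rank(adj):
    # rank(v) = (degen(v), id(v)); peel a minimum-degree vertex
    # repeatedly, tracking the largest degree seen at removal time.
    deg = {v: len(adj[v]) for v in adj}
    remaining, k, degen = set(adj), 0, {}
    while remaining:
        v = min(remaining, key=lambda u: deg[u])
        k = max(k, deg[v])      # core number of v
        degen[v] = k
        remaining.remove(v)
        for u in adj[v]:
            if u in remaining:
                deg[u] -= 1
    return {v: (degen[v], v) for v in adj}
```

On the triangle-plus-pendant graph, the triangle vertices get degeneracy $2$ and the pendant vertex gets degeneracy $1$, matching the $k$-core definition above.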
The implementations of ${\tt ParMCE}\xspace$ using degree, triangle, and degeneracy rankings are called ${\tt ParMCE}\xspacedegree$, ${\tt ParMCE}\xspacetriangle$, and ${\tt ParMCE}\xspacedegen$, respectively.
\begin{algorithm}
\DontPrintSemicolon
\caption{${\tt ParMCE}\xspace(\mathcal{G})$}
\label{algo:parmce}
\KwIn{$\mathcal{G}$ - The input graph}
\KwOut{$\mathcal{C}(\mathcal{G})$ - set of all maximal cliques of $\mathcal{G}$}
\ForPar {$v \in V(\mathcal{G})$}{
Create $\mathcal{G}_v$, the subgraph of $\mathcal{G}$ induced by $\Gamma_{\mathcal{G}}(v)\cup\{v\}$\;
$K \gets \{v\}$\;
${\tt cand}\xspace \gets \emptyset$\;
${\tt fini}\xspace \gets \emptyset$\;
\ForPar {$w \in \Gamma_{\mathcal{G}}(v)$}{
\If{$rank(w) > rank(v)$}{
${\tt cand}\xspace \gets {\tt cand}\xspace\cup\{w\}$\;
}
\Else {
${\tt fini}\xspace \gets {\tt fini}\xspace\cup\{w\}$\;
}
}
${\tt ParTTT}\xspace(\mathcal{G}_v, K, {\tt cand}\xspace, {\tt fini}\xspace)$\;
}
\end{algorithm}
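A sequential rendering of Algorithm~\ref{algo:parmce} may clarify how the rank function prevents duplicate enumeration. The sketch below is our own illustration in Python; in ${\tt ParMCE}\xspace$ both the outer loop over vertices and the inner recursion run in parallel, and each subproblem operates on the induced subgraph $G_v$, whereas here we simply pass the full adjacency structure (equivalent, since the recursion only ever intersects with neighborhoods of clique members):

```python
def ttt(adj, K, cand, fini, out):
    # Pivot-based recursion of Algorithm 1 (TTT).
    if not cand and not fini:
        out.append(frozenset(K))
        return
    pivot = max(cand | fini, key=lambda u: len(cand & adj[u]))
    for q in list(cand - adj[pivot]):
        ttt(adj, K | {q}, cand & adj[q], fini & adj[q], out)
        cand.remove(q)
        fini.add(q)

def par_mce(adj, rank):
    """One subproblem per vertex v: seed K = {v}, put higher-ranked
    neighbors in cand and lower-ranked ones in fini, so that each
    maximal clique is reported only from its lowest-ranked vertex."""
    out = []
    for v in adj:  # these iterations run in parallel in ParMCE
        cand = {w for w in adj[v] if rank[w] > rank[v]}
        fini = {w for w in adj[v] if rank[w] < rank[v]}
        ttt(adj, {v}, cand, fini, out)
    return out
```

On the triangle-plus-pendant graph with degree-based ranks, the clique $\{1,2,3\}$ is reported only from the subproblem of vertex $1$ and $\{3,4\}$ only from that of vertex $4$, each being the lowest-ranked vertex of its clique.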
\section{Experiments}
\label{sec:exp}
In this section, we present results from an experimental evaluation of the performance of the parallel algorithms for MCE. For our experiments, we used an Intel Xeon CPU on a Compute Engine instance in the Google Cloud Platform, with $32$ physical cores and $256$ GB RAM. We implemented all algorithms in Java 1.8 with a maximum of $100$ GB heap memory for the JVM.
\subsection{Datasets}
We use large real-world network datasets from KONECT~\cite{konnect13}, SNAP~\cite{JA14}, and the Network Repository~\cite{RN15}. Table~\ref{graph:summary} contains a summary of the datasets. All networks used in our experiments are undirected graphs. Self-loops are removed, and if the input graph was directed, we ignored the directions of the edges to derive an undirected graph.
\begin{table*}[htp!]
\centering
\begin{tabular}{| l | r | r | r | r | r |}
\toprule
\textbf{Dataset} & $|V|$ & $|E|$ & \# maximal cliques & avg. size of a maximal clique & size of the maximum clique\\
\midrule
{\tt dblp-coauthors} & \num{540486} & \num{15245729} & \num{139340} & 11 & 337\\
\hline
{\tt orkut} & \num{3072441} & \num{117184899} & \num{2270456447} & 20 & 51\\
\hline
{\tt as-skitter} & \num{1696415} & \num{11095298} & \num{37322355} & 19 & 67\\
\hline
{\tt wiki-talk} & \num{2394385} & \num{4659565} & \num{86333306} & 13 & 26\\
\bottomrule
\end{tabular}
\caption{\textbf{Undirected graphs used for evaluation, and their properties.}}
\label{graph:summary}
\end{table*}
\subsection{Implementation of the algorithms}
In our implementations of ${\tt ParTTT}\xspace$ and ${\tt ParMCE}\xspace$, we implement a parallel \texttt{For} loop using the {\tt parallelStream()} primitive provided by Java 1.8. For computing the intersection of two sets, as required for computing ${\tt pivot}\xspace$ and for updating ${\tt cand}\xspace$ and ${\tt fini}\xspace$ in Algorithm~\ref{algo:partomita}, we use a sequential implementation, because the sizes of the sets ${\tt cand}\xspace$ and ${\tt fini}\xspace$ are typically not large enough to benefit from parallelism. For garbage collection in Java, we use the flag {\tt -XX:+UseParallelGC} so that parallel garbage collection is run by the JVM whenever required.
To compare with prior work on maximal clique enumeration, we implemented several prior algorithms~\cite{TTT06,ELS10,DW+09,SMT+15,ZA+05} in Java, except for the sequential algorithm ${\tt GreedyBB}\xspace$~\cite{SAS18} and the parallel algorithm ${\tt Hashing}\xspace$~\cite{LP+17}, for which we used the executables provided by the authors (code written in C++). See Subsection~\ref{sec:exp-prior} for more details.
We call our implementation of ${\tt ParMCE}\xspace$ using degree based vertex ordering ${\tt ParMCE}\xspacedegree$, using degeneracy based vertex ordering ${\tt ParMCE}\xspacedegen$, and using triangle count based vertex ordering ${\tt ParMCE}\xspacetriangle$. We compute the degeneracy number and triangle count of each vertex using sequential procedures. While the computation of per-vertex triangle counts and of the degeneracy ordering could potentially be parallelized, implementing a parallel method to rank vertices by degeneracy number or triangle count is in itself a non-trivial task. We decided not to parallelize these routines, since the degeneracy- and triangle-based orderings did not yield significant benefits over degree-based ordering, whereas the degree-based ordering is trivially available without any additional computation.
We assume that the entire graph is stored in shared global memory. The runtime of ${\tt ParMCE}\xspace$ consists of (1)~the time required to rank the vertices of the graph based on the ranking metric used in the algorithm, i.e.\ the degree, degeneracy number, or triangle count of the vertices, and (2)~the time required to enumerate all maximal cliques. For the ${\tt ParMCE}\xspacedegen$ and ${\tt ParMCE}\xspacetriangle$ algorithms, the runtime of ranking is also reported. Figures~\ref{fig:parallel-speedup-linear-scale} and~\ref{fig:runtimes-per-thread} show the parallel speedup (with respect to the runtime of ${\tt TTT}\xspace$) and the total computation time of ${\tt ParMCE}\xspace$ using different vertex ordering strategies, respectively. Table~\ref{result:runtime-splitup} shows the breakdown of the runtime into the time for ordering and the time for clique enumeration.
\subsection{Performance of Parallel Clique Enumeration Algorithms}
\begin{table*}[htp!]
\centering
\begin{tabular}{| l | r | r | r | r | r |}
\toprule
\textbf{DataSet} & {\tt TTT}\xspace & {\tt ParTTT}\xspace & {\tt ParMCE}\xspacedegree & {\tt ParMCE}\xspacedegen & {\tt ParMCE}\xspacetriangle \\
\midrule
{\tt dblp-coauthors} & 356 & 28 & 14 & 21.4 & 152.2\\
\hline
{\tt orkut} & \num{26407} & \num{1886} & \num{1362} & \num{2141.1} & \num{2278}\\
\hline
{\tt as-skitter} & 807 & 60 & 45 & 71.9 & 85.6\\
\hline
{\tt wiki-talk} & \num{1022} & \num{85} & 62 & 70.1 & 89.2\\
\bottomrule
\end{tabular}
\caption{\textbf{Comparison of the total computation time (in sec.) of ${\tt ParMCE}\xspace$ (with degree based, degeneracy based, and triangle count based vertex orderings) and of ${\tt ParTTT}\xspace$ (with $32$ threads) against ${\tt TTT}\xspace$.}}
\label{result:parallel-runtimes}
\end{table*}
The total runtimes of the parallel algorithms with $32$ threads are shown in Table~\ref{result:parallel-runtimes}. We observe that ${\tt ParTTT}\xspace$ achieves a speedup of \textbf{12x} to \textbf{14x} over the sequential algorithm ${\tt TTT}\xspace$. The three versions of ${\tt ParMCE}\xspace$, namely ${\tt ParMCE}\xspacedegree$, ${\tt ParMCE}\xspacedegen$, and ${\tt ParMCE}\xspacetriangle$, achieve speedups of \textbf{15x} to \textbf{31x} with 32 threads when we consider only the runtime for maximal clique enumeration. These speedups are smaller for ${\tt ParMCE}\xspacedegen$ and ${\tt ParMCE}\xspacetriangle$ when we add the time taken by the ranking strategies (see Figure~\ref{fig:parallel-speedup-linear-scale}).
The reason for the higher runtime of ${\tt ParTTT}\xspace$ when compared with ${\tt ParMCE}\xspace$ is the greater cumulative overhead of computing the pivot and of processing the ${\tt cand}\xspace$ and ${\tt fini}\xspace$ sets in ${\tt ParTTT}\xspace$. For example, on the {\tt dblp-coauthors} graph, the cumulative overhead in ${\tt ParTTT}\xspace$ of computing ${\tt pivot}\xspace$ is 248 sec.\ and that of updating ${\tt cand}\xspace$ and ${\tt fini}\xspace$ is 38 sec., whereas in ${\tt ParMCE}\xspace$ these numbers are 156 sec.\ and 21 sec., respectively. These reduced cumulative times are reflected in the overall reduction of the parallel enumeration time of ${\tt ParMCE}\xspace$ over ${\tt ParTTT}\xspace$ by a factor of $2$.
\begin{figure*}
\caption{\textbf{Parallel speedup of ${\tt ParMCE}\xspace$, relative to sequential ${\tt TTT}\xspace$, as a function of the number of threads, for different vertex ordering strategies.}}
\label{fig:parallel-speedup-linear-scale}
\end{figure*}
\begin{figure*}
\caption{\textbf{Total computation time of ${\tt ParMCE}\xspace$ as a function of the number of threads, for different vertex ordering strategies.}}
\label{fig:runtimes-per-thread}
\end{figure*}
\begin{figure*}
\caption{\textbf{Frequency distribution of the sizes of maximal cliques across different input graphs.}}
\label{fig:szMC-distribution}
\end{figure*}
\paragraph{Impact of vertex ordering on overall performance of ${\tt ParMCE}\xspace$}
Next, we consider the influence of the different vertex ordering strategies (degree, degeneracy, and triangle count) on the performance of ${\tt ParMCE}\xspace$. The total computation times when using different vertex ordering strategies are presented in Table~\ref{result:runtime-splitup}. Overall, we observe that degree based ordering (${\tt ParMCE}\xspacedegree$) usually achieves the smallest (or close to the smallest) runtime for clique enumeration, even without taking into account the time to compute the ordering. If we add in the time for computing the ordering, {\em degree based ordering is clearly better than triangle count or degeneracy based orderings}, since degree based ordering is available for free, while the degeneracy based and triangle count based orderings require additional computational overhead.
\paragraph{Scaling up with the degree of parallelism}
As the number of threads (and hence the degree of parallelism) increases, the runtimes of ${\tt ParMCE}\xspace$ and of ${\tt ParTTT}\xspace$ decrease; the speedup as a function of the number of threads is shown in Figure~\ref{fig:parallel-speedup-linear-scale}, and the runtimes are shown in Figure~\ref{fig:runtimes-per-thread}. We see that ${\tt ParMCE}\xspacedegree$ achieves a speedup of more than 15x on all graphs, using 32 threads. On the {\tt dblp-coauthors} graph, the speedup with 32 threads was nearly 30x.
To better understand the variation of the speedups achieved on different input graphs, we plotted the distribution of the sizes of maximal cliques for the different input graphs; see Figure~\ref{fig:szMC-distribution}. We observe that the speedup of ${\tt ParMCE}\xspace$ is higher on graphs that have large maximal cliques. For instance, {\tt dblp-coauthors} has many maximal cliques with sizes in the range $100$ to $330$, and we observed the highest speedup, of nearly 30x with 32 threads, on {\tt dblp-coauthors}. A good speedup of nearly 20x was also observed for the {\tt orkut} graph, which has a large number of maximal cliques of relatively large size (the average size of a maximal clique is 20). Overall, the speedup obtained is roughly correlated with the complexity of the graph, measured in terms of the presence and number of large maximal cliques.
\begin{table*}[htp!]
\centering
\scalebox{0.9}{
\begin{tabular}{| l | r | r | r | r | r | r | r |}
\toprule
\multirow{2}{*}{\textbf{DataSet}} & \multirow{2}{*}{{\tt ParMCE}\xspacedegree} & \multicolumn{3}{|c|}{{\tt ParMCE}\xspacedegen} & \multicolumn{3}{|c|}{{\tt ParMCE}\xspacetriangle} \\
\cline{3-8}
& & RT & ET & TT & RT & ET & TT \\
\midrule
{\tt dblp-coauthors} & \textbf{14} & 8.4 & 13 & \textbf{21.4} & 138.2 & 14 & \textbf{152.2}\\
\hline
{\tt orkut} & \textbf{1362} & 599.1 & 1542 & \textbf{2141.1} & 786.7 & 1492 & \textbf{2278}\\
\hline
{\tt as-skitter} & \textbf{45} & 26.9 & 45 & \textbf{71.9} & 43.6 & 42 & \textbf{85.6}\\
\hline
{\tt wiki-talk} & \textbf{62} & 8.1 & 62 & \textbf{70.1} & 30.2 & 59 & \textbf{89.2}\\
\bottomrule
\end{tabular}
}
\caption{\textbf{Total computation time (in sec.) of {\tt ParMCE}\xspacedegree, {\tt ParMCE}\xspacedegen, and {\tt ParMCE}\xspacetriangle. ``RT" stands for time for computing the vertex ranking, ``ET" stands for parallel enumeration time, and ``TT" stands for total computation time.}}
\label{result:runtime-splitup}
\end{table*}
\subsection{Comparison with prior work \label{sec:exp-prior}}
We compare the performance of ${\tt ParMCE}\xspace$ with prior sequential and parallel algorithms for MCE. We consider the following sequential algorithms: ${\tt GreedyBB}\xspace$ due to Segundo et al.~\cite{SAS18}, ${\tt TTT}\xspace$ due to Tomita et al.~\cite{TTT06}, and ${\tt BKDegeneracy}\xspace$ due to Eppstein et al.~\cite{ELS10}. For the comparison with parallel algorithms, we consider ${\tt Clique Enumerator}\xspace$ due to Zhang et al.~\cite{ZA+05}, ${\tt Peamc}\xspace$ due to Du et al.~\cite{DW+09}, {\tt PECO}\xspace due to Svendsen et al.~\cite{SMT+15}, and the most recent parallel algorithm, ${\tt Hashing}\xspace$ due to Lessley et al.~\cite{LP+17}. The parallel algorithms ${\tt Clique Enumerator}\xspace$, ${\tt Peamc}\xspace$, and ${\tt Hashing}\xspace$ are designed for the shared memory model, while ${\tt PECO}\xspace$ is designed for distributed memory. We modified ${\tt PECO}\xspace$ to work with shared memory, by reusing its method for subproblem construction and eliminating the need to communicate subgraphs by storing a single copy of the graph in shared memory. We considered three different ordering strategies for ${\tt PECO}\xspace$, which we call ${\tt PECO}\xspacedegree$, ${\tt PECO}\xspacedegen$, and ${\tt PECO}\xspacetri$. The comparison of the performance of {\tt ParMCE}\xspace with {\tt PECO}\xspace is presented in Table~\ref{result:compare-with-peco}. We note that the performance of ${\tt ParMCE}\xspace$ is significantly better than that of ${\tt PECO}\xspace$, regardless of which ordering strategy is used.
\begin{table*}[htp!]
\centering
\begin{tabular}{| l | r | r || r | r || r | r |}
\toprule
\textbf{DataSet} & {\tt PECO}\xspacedegree & {\tt ParMCE}\xspacedegree & {\tt PECO}\xspacedegen & {\tt ParMCE}\xspacedegen & {\tt PECO}\xspacetri &{\tt ParMCE}\xspacetriangle \\
\midrule
{\tt dblp-coauthors} & 73 & \textbf{14} & 78 & \textbf{14} & 74 & \textbf{13}\\
\hline
{\tt orkut} & \num{2001} & \textbf{\num{1362}} & \num{7502} & \textbf{\num{1542}} & \num{2500} & \textbf{\num{1492}} \\
\hline
{\tt as-skitter} & 272 & \textbf{45} & 450 & \textbf{45} & 267 & \textbf{42} \\
\hline
{\tt wiki-talk} & \num{1423} & \textbf{62} & \num{1776} & \textbf{62} & \num{1534} & \textbf{59} \\
\bottomrule
\end{tabular}
\caption{\textbf{Comparison of parallel enumeration time (in sec.) of ${\tt ParMCE}\xspace$ with ${\tt PECO}\xspace$ (modified to use shared memory), using 32 threads. Three different variants are considered for each algorithm, based on the ordering strategy used.}}
\label{result:compare-with-peco}
\end{table*}
\begin{table*}[htp!]
\centering
\scalebox{0.9}{
\begin{tabular}{| r | r | r | r | r |}
\toprule
\textbf{DataSet} & {\tt ParMCE}\xspacedegree & {\tt Hashing}\xspace & {\tt Clique Enumerator}\xspace & {\tt Peamc}\xspace\\
\midrule
{\tt dblp-coauthors} & 14 & ran out of memory in 3 min. & ran out of memory in 10 min. & did not complete in 5 hours.\\
\hline
{\tt orkut} & 1362 & ran out of memory in 7 min. & ran out of memory in 20 min. & did not complete in 5 hours.\\
\hline
{\tt as-skitter} & 45 & ran out of memory in 5 min. & ran out of memory in 10 min. & did not complete in 5 hours.\\
\hline
{\tt wiki-talk} & 62 & ran out of memory in 10 min. & ran out of memory in 20 min. & did not complete in 5 hours.\\
\bottomrule
\end{tabular}
}
\caption{\textbf{Comparison of total computation time (in sec.) of ${\tt ParMCE}\xspace$ with ${\tt Hashing}\xspace$, ${\tt Clique Enumerator}\xspace$, and ${\tt Peamc}\xspace$.}}
\label{result:compare-with-hashing}
\end{table*}
The comparison of ${\tt ParMCE}\xspace$ with the other shared memory algorithms ${\tt Peamc}\xspace$, ${\tt Clique Enumerator}\xspace$, and ${\tt Hashing}\xspace$ is shown in Table~\ref{result:compare-with-hashing}. The performance of ${\tt ParMCE}\xspace$ is much better than that of any of these prior shared memory parallel algorithms. For the graph {\tt dblp-coauthors}, ${\tt Peamc}\xspace$ did not finish within $5$ hours, whereas ${\tt ParMCE}\xspace$ takes around $50$ secs for enumerating $139$K maximal cliques. The poor running time of ${\tt Peamc}\xspace$ is due to the following two reasons: (1)~the algorithm does not apply efficient pruning techniques such as pivoting, used in ${\tt TTT}\xspace$, and (2)~its method for determining the maximality of a clique in the search space is not efficient. The ${\tt Clique Enumerator}\xspace$ algorithm runs out of memory after a few minutes. The reason is that ${\tt Clique Enumerator}\xspace$ maintains a bit vector for each vertex that is as large as the size of the input graph and, additionally, needs to store intermediate non-maximal cliques; for each such non-maximal clique, it must maintain a bit vector of length equal to the size of the vertex set of the original graph. Therefore, for ${\tt Clique Enumerator}\xspace$ a memory issue is inevitable on a graph with millions of vertices.
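For reference, the pivoting technique mentioned above (the pruning step that ${\tt Peamc}\xspace$ lacks) is the classical Bron--Kerbosch recursion with a pivot, which is the core of ${\tt TTT}\xspace$-style algorithms. The following is an illustrative Python sketch, not the paper's implementation:

```python
def enumerate_maximal_cliques(adj):
    """Bron-Kerbosch with pivoting: R is the current clique, P the
    candidate vertices that extend it, X the vertices already processed.
    Picking the pivot with the most neighbours in P prunes the search,
    since neighbours of the pivot need not be branched on."""
    out = []

    def expand(R, P, X):
        if not P and not X:
            out.append(sorted(R))      # R is maximal
            return
        pivot = max(P | X, key=lambda u: len(P & adj[u]))
        for v in list(P - adj[pivot]):
            expand(R | {v}, P & adj[v], X & adj[v])
            P.remove(v)
            X.add(v)

    expand(set(), set(adj), set())
    return sorted(out)

# Triangle {0,1,2} plus pendant edge (2,3): maximal cliques {0,1,2}, {2,3}
g = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
cliques = enumerate_maximal_cliques(g)   # [[0, 1, 2], [2, 3]]
```

Without the pivot restriction the recursion would branch on every vertex of $P$, generating far more intermediate calls.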
${\tt Hashing}\xspace$, a recent parallel algorithm from the literature, also has a significant memory overhead, and ran out of memory on the input graphs that we considered. The reason for its high memory requirement is that ${\tt Hashing}\xspace$ enumerates intermediate non-maximal cliques before finally outputting maximal cliques. The number of such intermediate non-maximal cliques may be very large, even for graphs with a small number of maximal cliques; for example, a maximal clique of size $c$ contains $2^c-1$ non-maximal cliques.
\begin{table*}[htp!]
\centering
\scalebox{0.9}{
\begin{tabular}{| l | r | r | r | r | r |}
\toprule
\textbf{DataSet} & {\tt BKDegeneracy}\xspace & {\tt GreedyBB}\xspace & {\tt ParMCE}\xspacedegree & {\tt ParMCE}\xspacedegen & {\tt ParMCE}\xspacetriangle \\
\midrule
{\tt dblp-coauthors} & 231 & did not finish in 30 min. & 14 & 21.4 & 152.2\\
\hline
{\tt orkut} & \num{19958} & ran out of memory in 5 min. & 1362 & 2141.1 & 2278\\
\hline
{\tt as-skitter} & 588 & ran out of memory in 10 min. & 45 & 71.9 & 85.6\\
\hline
{\tt wiki-talk} & 844 & ran out of memory in 10 min. & 62 & 70.1 & 89.2\\
\bottomrule
\end{tabular}
}
\caption{\textbf{Total computation time (sec.) of ${\tt ParMCE}\xspace$ (with $32$ threads) and sequential ${\tt BKDegeneracy}\xspace$ and ${\tt GreedyBB}\xspace$.}}
\label{results:compare-with-sequential}
\end{table*}
Next, we compare the performance of ${\tt ParMCE}\xspace$ with that of the sequential algorithms ${\tt BKDegeneracy}\xspace$ and the recent ${\tt GreedyBB}\xspace$; the results are in Table~\ref{results:compare-with-sequential}. For large graphs, the performance of ${\tt BKDegeneracy}\xspace$ is similar to that of ${\tt TTT}\xspace$, whereas ${\tt GreedyBB}\xspace$ performs much worse than ${\tt TTT}\xspace$. Since ${\tt ParMCE}\xspace$ outperforms ${\tt TTT}\xspace$, we can conclude that ${\tt ParMCE}\xspace$ is significantly faster than these sequential algorithms as well.
\subsection{Summary of Experimental Results}
We found that both ${\tt ParTTT}\xspace$ and ${\tt ParMCE}\xspace$ yield significant speedups over the sequential algorithm ${\tt TTT}\xspace$, sometimes as much as the number of cores available. ${\tt ParMCE}\xspace$ using the degree based vertex ranking always performs better than ${\tt ParTTT}\xspace$. The runtime of ${\tt ParMCE}\xspace$ using degeneracy or triangle count based vertex ranking is sometimes worse than that of ${\tt ParTTT}\xspace$, because of the overhead of sequentially computing the vertex ranking, an overhead that ${\tt ParTTT}\xspace$ does not incur. The parallel speedup of ${\tt ParMCE}\xspace$ is better when the input graph has many large maximal cliques. Overall, ${\tt ParMCE}\xspace$ consistently outperforms prior sequential and parallel algorithms for MCE.
\section{Conclusion}
\label{sec:conclude}
We presented shared memory parallel algorithms for enumerating maximal cliques from a graph. ${\tt ParTTT}\xspace$ is a work-efficient parallelization of a sequential algorithm due to Tomita et al.~\cite{TTT06}, and ${\tt ParMCE}\xspace$ is a practical adaptation of ${\tt ParTTT}\xspace$ that has more opportunities for parallelization and better load balancing. Our algorithms are significant improvements over the current state of the art for MCE. Our experiments show that ${\tt ParMCE}\xspace$ achieves a speedup of up to 31x (on a 32 core machine) when compared with an efficient sequential baseline. In contrast, prior shared memory parallel methods for MCE were either unable to process the same graphs in a reasonable time, or ran out of memory. Many questions remain open: (1)~Can these methods scale to even larger graphs, and to machines with larger numbers of cores? (2)~How can one adapt these methods to other parallel systems, such as a cluster of computers with a combination of shared and distributed memory, or GPUs?
\end{document} |
\begin{document}
\title{Density theorems for rational numbers}
\author{Andreas Koutsogiannis}
\begin{abstract}
Introducing the notion of a rational system of measure preserving transformations and proving a recurrence result for such systems, we give sufficient conditions for a subset of the rational numbers to contain arbitrarily long arithmetic progressions.
\end{abstract}
\keywords{IP-systems, density-type theorems, rational numbers}
\subjclass{Primary 11R45; 28D05}
\maketitle
\baselineskip=18pt
\pagestyle{plain}
\section*{Introduction}
In 1927, van der Waerden proved (in \cite{vdW}) that for any finite partition of the set of natural numbers, some cell of the partition contains arbitrarily long arithmetic progressions; this is a fundamental result (perhaps the most fundamental) of Ramsey theory. The density version of van der Waerden's theorem, that any set of positive upper density in ${\mathbb N}$ contains arbitrarily long arithmetic progressions (the \textit{upper density} of a subset $A\subseteq {\mathbb N}$ is defined by $\overline{d}(A)=\limsup_n \frac{|A\cap\{1,\ldots,n\}|}{n},$ where $|A|$ denotes the cardinality of $A$), was conjectured by Erd\H{o}s and Tur\'{a}n in the 1930s and established by Szemer\'{e}di in 1975 (\cite{Sz}).
In 1977, Furstenberg reproved Szemer\'{e}di's theorem by introducing a correspondence principle, which provides the link between density Ramsey theory and ergodic theory, and by proving a multiple recurrence theorem for measure preserving systems (\cite{Fu}).
In this paper we prove a multiple recurrence result and two density results for the set of rational numbers, giving sufficient conditions for a set of rational numbers to contain arbitrarily long arithmetic progressions. Using a representation of the rational numbers (proved in \cite{BIP}), according to which every rational number can be represented as a dominated located word over an infinite alphabet, we define the notion of a rational system (Definition~\ref{rmps}). We obtain:
(a) a multiple recurrence result for rational systems of measure preserving transformations, in Theorem~\ref{thm:2}, using the analogous result of Furstenberg and Katznelson for IP-systems of measure preserving transformations;
(b) a sufficient condition, via F{\o}lner sequences, for a subset of the rational numbers to contain arbitrarily long arithmetic progressions, in Theorem~\ref{thm:4}, using a result of Hindman and Strauss for infinite countable left cancellative semigroups; and
(c) a density result viewing the rational numbers as located words, in Theorem~\ref{l}, which follows from the density Hales-Jewett theorem of Furstenberg and Katznelson.
\begin{note}
Let ${\mathbb N}=\{1,2,\ldots\}$ be the set of natural numbers, $\mathbb{Z}=\{\ldots,-2,-1,0,1,2,\ldots\}$ be the set of integer numbers, ${\mathbb Q}=\{\frac{m}{n}: m\in {\mathbb Z},\;n\in {\mathbb N}\}$ the set of rational numbers and ${\mathbb Z}^\ast={\mathbb Z}\setminus\{0\},$ ${\mathbb Q}^\ast={\mathbb Q}\setminus\{0\}.$
\end{note}
\section{A multiple recurrence result for rational numbers}
In this section, using the representation of the rational numbers as dominated located words over an infinite alphabet, we define the rational systems (Definition~\ref{rmps}) and we prove, in Theorem~\ref{thm:2} below, a recurrence result for rational systems of measure preserving transformations, using the analogous result of Furstenberg and Katznelson for IP-systems.
According to \cite{BIP}, every rational number $q$ has a unique expression in the form $$q=\sum^{\infty}_{s=1}q_{-s}\frac{(-1)^{s}}{(s+1)!}\;+\;\sum^{\infty}_{r=1}q_{r}(-1)^{r+1}r! $$ where $(q_n)_{n \in \mathbb{Z}^\ast}\subseteq {\mathbb N}\cup\{0\}$ with $\;0\leq q_{-s}\leq s$ for every $s>0$, $ 0\leq q_r\leq r$ for every $r> 0$ and $q_{-s}=q_r=0$ for all but finitely many $r,s$.
So, for a non-zero rational number $q$, there exist a unique $l\in {\mathbb N},$ a unique non-empty finite subset of non-zero integers, the \textit{domain} of $q,$ $\{t_1<\ldots<t_l\}=dom(q)\in [{\mathbb Z}^\ast]^{<\omega}_{>0}$ and a unique subset of natural numbers $\{q_{t_1},\ldots,q_{t_l}\}\subseteq {\mathbb N}$ with $1\leq q_{t_i}\leq -t_i$ if $t_i<0$ and $1\leq q_{t_i}\leq t_{i}$ if $t_i> 0$ for every $1\leq i\leq l,$ such that, setting $dom^-(q)=\{t\in dom(q):\;t<0\}$ and $dom^+(q)=\{t\in dom(q):\;t>0\},$ we have $$ q=\sum_{t\in dom^-(q)}q_t\frac{(-1)^{-t}}{(-t+1)!}\;+\;\sum_{t\in dom^+(q)}q_t(-1)^{t+1}t!\;\;\;(\text{we set}\;\;\sum_{t\in \emptyset}=0).$$ Consequently, $q$ can be represented as the word \begin{center} $q=q_{t_1}\ldots q_{t_l}.$ \end{center} It is easy to see that $$e^{-1}-1=-\sum^{\infty}_{t=1}\frac{2t-1}{(2t)!}< \sum_{t\in dom^-(q)}q_t\frac{(-1)^{-t}}{(-t+1)!} <\sum^{\infty}_{t=1}\frac{2t}{(2t+1)!}=e^{-1}$$ and that $$ \sum_{t\in dom^+(q)}q_t(-1)^{t+1}t! \in {\mathbb Z}^\ast\;\;\text{if}\;\;dom^+(q)\neq\emptyset.$$
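To illustrate the representation numerically (our illustration, not part of the paper), one can evaluate the two sums with exact arithmetic; for instance the word with $q_{-1}=1$ and $q_1=1$ represents $-\tfrac{1}{2}+1=\tfrac{1}{2}$:

```python
from fractions import Fraction
from math import factorial

def eval_word(coeffs):
    """coeffs maps each non-zero integer t in dom(q) to q_t; returns
    the rational number the word represents, as an exact Fraction."""
    total = Fraction(0)
    for t, qt in coeffs.items():
        if t < 0:
            assert 1 <= qt <= -t          # the domination condition
            total += qt * Fraction((-1) ** (-t), factorial(-t + 1))
        else:
            assert 1 <= qt <= t
            total += qt * (-1) ** (t + 1) * factorial(t)
    return total

print(eval_word({-1: 1, 1: 1}))   # 1/2
print(eval_word({2: 1}))          # -2: the positive part is a non-zero integer
```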
We will now recall the notion of IP-systems introduced by Furstenberg and Katznelson in \cite{FuKa}.
\begin{defn}
Let $\{T_n\}_{n\in {\mathbb N}}$ be a set of commuting transformations of a space. To every multi-index $\alpha=\{t_1,\ldots,t_l\}\in [{\mathbb N}]^{<\omega}_{>0}, \;t_1<\ldots<t_l,$ we attach the transformation \begin{center} $\mathcal{T}_{\alpha}=T_{t_1}\ldots T_{t_l}.$ \end{center} The corresponding family $\{\mathcal{T}_{\alpha}\}_{\alpha\in [{\mathbb N}]^{<\omega}_{>0}}$ is an \textit{IP-system} (of transformations).
Two IP-systems $\{\mathcal{T}^{(1)}_{\alpha}\}_{\alpha\in [{\mathbb N}]^{<\omega}_{>0}},$ $\{\mathcal{T}^{(2)}_{\alpha}\}_{\alpha\in [{\mathbb N}]^{<\omega}_{>0}}$ defined by $\{T^{(1)}_{n}\}_{n\in {\mathbb N}}$ and $\{T^{(2)}_{n}\}_{n\in {\mathbb N}}$ respectively, are \textit{commuting} if $T^{(1)}_{n}T^{(2)}_{m}=T^{(2)}_{m}T^{(1)}_{n}$ for every $n,$ $m\in {\mathbb N}.$
\end{defn}
\begin{thm}[Furstenberg-Katznelson, \cite{FuKa}]\label{thm:1}
Let $\{\mathcal{T}^{(1)}_{\alpha}\}_{\alpha\in [{\mathbb N}]^{<\omega}_{>0}},\ldots,\{\mathcal{T}^{(k)}_{\alpha}\}_{\alpha\in [{\mathbb N}]^{<\omega}_{>0}}$ be $k$ commuting IP-systems defined by the measure preserving transformations $\{T^{(j)}_{n}\}_{n\in {\mathbb N}},$ $1\leq j\leq k$ of a measure space $(X,{\mathcal B},\mu)$ with $\mu(X)=1$ (i.e. $T_n^{(j)}$ is ${\mathcal B}$-measurable with $\mu(T_n^{(j)-1}(A))=\mu(A)$ for every $A\in{\mathcal B},$ $1\leq j\leq k,$ $n\in {\mathbb N}$). If $A\in {\mathcal B}$ with $\mu(A)>0,$ then there exists an index $\alpha$ with
\begin{center} $\mu(A\cap \mathcal{T}^{(1)-1}_{\alpha}(A)\cap\ldots \cap \mathcal{T}^{(k)-1}_{\alpha}(A))>0.$ \end{center}
\end{thm}
We will define the notion of a rational system (see also \cite{K}), extending the notion of an IP-system.
\begin{defn}\label{rmps} Let $\{T_n\}_{n\in {\mathbb Z}^{\ast}}$ be a set of commuting transformations of a set $X.$ For a non-zero rational number $q$ represented as the word $q=q_{t_1}\ldots q_{t_l},$ we define \begin{center} $\mathcal{T}^q(x)=T_{t_1}^{q_{t_1}}\ldots T_{t_l}^{q_{t_l}}(x)$ and $\mathcal{T}^0(x)=x$ for every $x\in X.$\end{center} The corresponding family $\{\mathcal{T}^{q}\}_{q\in{\mathbb Q}}$ is a \textit{rational system} (of transformations).
Two rational systems $\{\mathcal{T}_1^{q}\}_{q\in{\mathbb Q}},$ $\{\mathcal{T}_2^{q}\}_{q\in{\mathbb Q}}$ defined by $\{T_{n,1}\}_{n\in {\mathbb Z}^{\ast}}$ and $\{T_{n,2}\}_{n\in {\mathbb Z}^{\ast}}$ respectively, are \textit{commuting} if $T_{n,1}T_{m,2}=T_{m,2}T_{n,1}$ for every $n,$ $m\in {\mathbb Z}^{\ast}.$
\end{defn}
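A toy instance of Definition~\ref{rmps} (ours, not the paper's): take the commuting translations $T_n(x)=x+n$ on the integers; then $\mathcal{T}^q$ is the translation by $\sum_{t\in dom(q)} q_t\,t$. A Python sketch:

```python
def make_rational_system(T):
    """T maps each non-zero integer n to a transformation T_n; the
    transformations are assumed to commute.  Given the word of q as a
    dict {t: q_t}, apply T_t a total of q_t times for each t."""
    def T_q(coeffs, x):
        for t, qt in coeffs.items():
            for _ in range(qt):
                x = T[t](x)
        return x
    return T_q

# Translations T_n(x) = x + n on the integers (they commute):
T = {n: (lambda x, n=n: x + n) for n in range(-5, 6) if n != 0}
T_q = make_rational_system(T)
# Word with q_{-1} = 1 and q_2 = 2: composite is translation by -1 + 2*2 = 3
```

Commutativity is what makes the order of the factors $T_{t_1}^{q_{t_1}}\ldots T_{t_l}^{q_{t_l}}$ irrelevant, so $\mathcal{T}^q$ is well defined by the word of $q$.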
Using Theorem~\ref{thm:1}, we obtain the following:
\begin{thm}\label{thm:2}
Let $\{\mathcal{T}_1^{q}\}_{q\in{\mathbb Q}},\ldots,\{\mathcal{T}_k^{q}\}_{q\in{\mathbb Q}}$ be $k$ commuting rational systems defined by the measure preserving transformations $\{T_{n,j}\}_{n\in {\mathbb Z}^{\ast}},$ $1\leq j\leq k$ respectively of a measure space $(X,{\mathcal B},\mu)$ with $\mu(X)=1.$ If $A\in {\mathcal B}$ and $\mu(A)>0$ then there exists $q\in {\mathbb Q}^\ast$ with
\begin{center} $\mu(A\cap (\mathcal{T}_1^q)^{-1}(A)\cap\ldots \cap (\mathcal{T}_k^q)^{-1}(A))>0.$ \end{center}
\end{thm}
\begin{proof}
Let $A\in {\mathcal B}$ with $\mu(A)>0.$ For every $t\in {\mathbb Z}^\ast$ choose a natural number $q_t$ with $1\leq q_t\leq |t|.$ For every $t\in {\mathbb N},$ $1\leq i\leq k,$ set $\phi^{(i)}_{2t-1}=T_{t,i}^{q_t},$ $\phi^{(i)}_{2t}=T_{-t,i}^{q_{-t}}$ and consider the corresponding IP-systems $\{\phi^{(i)}_{\alpha}\}_{\alpha\in [{\mathbb N}]^{<\omega}_{>0}}.$
According to Theorem~\ref{thm:1} there exists $\alpha=\{t_1<\ldots<t_l\}\in [{\mathbb N}]^{<\omega}_{>0}$ with \begin{center} $\mu(A\cap \phi^{(1)-1}_{\alpha}(A)\cap\ldots \cap \phi^{(k)-1}_{\alpha}(A))>0.$\end{center} Since $\phi^{(i)}_{\alpha}=\mathcal{T}_i^q$ for all $1\leq i\leq k,$ where $$q=\sum_{2t\in \alpha}q_{-t}\frac{(-1)^{t}}{(t+1)!}+\sum_{2t-1\in \alpha}q_t (-1)^{t+1}t!\in {\mathbb Q}^\ast,$$ we have that $\mu(A\cap (\mathcal{T}_1^q)^{-1}(A)\cap\ldots \cap (\mathcal{T}_k^q)^{-1}(A))>0.$
\end{proof}
\begin{remark}\label{r}
With the same arguments as in Theorem~\ref{thm:2}, we can prove that:
$(1)$ There exists $q\in {\mathbb Z}^\ast$ which satisfies the conclusion of Theorem~\ref{thm:2}, setting $\phi^{(i)}_{t}=T_{t,i}^{q_t}$ for every $t\in {\mathbb N}$ and $1\leq i\leq k$.
$(2)$ There exists $q\in (e^{-1}-1,e^{-1})\cap {\mathbb Q}^{\ast}$ which satisfies the conclusion of Theorem~\ref{thm:2}, setting $\phi^{(i)}_{t}=T_{-t,i}^{q_{-t}}$ for every $t\in {\mathbb N}$ and $1\leq i\leq k$.
\end{remark}
\section{Density conditions for rational numbers}
In this section, using Theorem~\ref{thm:2}, we give, via left F{\o}lner sequences, a sufficient condition (Theorem~\ref{thm:4}) for a subset of the rational numbers to contain arbitrarily long arithmetic progressions, using a result of Hindman and Strauss (Theorem~\ref{thm:33}). Also, using Furstenberg and Katznelson's density Hales-Jewett theorem for words over a finite alphabet (Theorem~\ref{thm:6}), we prove in Theorem~\ref{l} a density result viewing the rational numbers as located words.
First, we define left F{\o}lner sequences.
\begin{defn}\label{lfs}
Let $(S,+)$ be a semigroup. A \textit{left F{\o}lner sequence} in $[S]^{<\omega}_{>0}$ is a sequence $\{F_n\}_{n\in {\mathbb N}}$ in $[S]^{<\omega}_{>0}$ such that for each $s\in S,$ $$\lim_{n\rightarrow\infty}\frac{|(s+F_n)\triangle F_n|}{|F_n|}=0,$$ where $A\triangle B=(A\setminus B)\cup (B\setminus A).$
\end{defn}
\begin{remark}[\cite{HS}]
If $(S,+)$ is an infinite countable left cancellative semigroup (i.e. $a+b=a+c\;\Rightarrow\;b=c$ for every $a,b,c\in S$), then we can find a left F{\o}lner sequence in $[S]^{<\omega}_{>0}.$
\end{remark}
Given a left F{\o}lner sequence ${\mathcal F}=\{F_n\}_{n\in {\mathbb N}}$ in $[S]^{<\omega}_{>0},$ there is a natural notion of upper density associated with ${\mathcal F},$ namely $$\overline{d}_{{\mathcal F}}(A)=\limsup_{n\rightarrow\infty}\frac{|A\cap F_n|}{|F_n|}.$$
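For intuition (our example, not from the paper): in $({\mathbb N},+)$ the intervals $F_n=\{1,\ldots,n\}$ form a left F{\o}lner sequence, since $|(s+F_n)\triangle F_n|=2s$ for $s<n$, so the ratio tends to $0$. A quick numerical check:

```python
def folner_ratio(s, F):
    """|(s + F) symmetric-difference F| / |F| for a finite set F."""
    shifted = {s + x for x in F}
    return len(shifted ^ F) / len(F)

F_100 = set(range(1, 101))      # F_n = {1, ..., n} with n = 100
r = folner_ratio(3, F_100)      # |(3 + F) triangle F| = 2*3 = 6, ratio 0.06
```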
In order to prove Theorem~\ref{thm:4}, which gives a sufficient condition for a subset of the rational numbers to contain arbitrarily long arithmetic progressions, we need some notions from the theory of ultrafilters, as well as Theorem~\ref{thm:33}, a fundamental result of Hindman and Strauss, which we state below.
\subsection*{Ultrafilters}
Let $X$ be a non-empty set. An \textit{ultrafilter} on the set $X$ is a zero-one finite additive measure $\mu$ defined on all the subsets of $X$. The set of all ultrafilters on the set $X$ is denoted by $\beta X.$ So, $\mu\in\beta X$ if and only if
\begin{itemize}
\item[{(i)}] $\mu(A)\in\{0,1\}$ for every $A\subseteq X$, $\mu(X)=1$, and
\item[{(ii)}] $\mu(A\cup B)=\mu(A)+\mu(B)$ for every $A,B\subseteq X$ with $A\cap B=\emptyset$.
\end{itemize}
For $\mu\in\beta X,$ it is easy to see that $\mu(A\cap B)=1$ if $\mu(A)=1$ and $\mu(B)=1$. For every $x\in X$, the \textit{principal ultrafilter} $\mu_x$ on $X$ is defined by $\mu_x(A)=1$ if $x\in A$ and $\mu_x(A)=0$ if $x\notin A$, for $A\subseteq X$. Hence, $\mu$ is a non-principal ultrafilter on $X$ if and only if $\mu(A)=0$ for every finite subset $A$ of $X$.
The set $\beta X$ becomes a compact Hausdorff space when endowed with the topology ${\mathfrak T}$ which has as a basis the family $\{\overline{A} : A\subseteq X \}$, where $\overline{A}=\{\mu\in\beta X : \mu(A)=1 \}$. It is easy to see that $\overline{A\cap B}=\overline{A}\cap \overline{B}$, $\overline{A\cup B}=\overline{A}\cup \overline{B}$ and $\overline{X\setminus A}=\beta X\setminus \overline{A}$ for every $A,B\subseteq X$. We always consider the set $\beta X$ endowed with the topology ${\mathfrak T}.$ The space $\beta X$ is called the \textit{Stone-\v{C}ech compactification} of the set $X.$
If $(X,+)$ is a semigroup, then a binary operation $+$ is defined on $\beta X,$ extending the operation $+$ on $X,$ corresponding to every $\mu_1, \mu_2\in \beta X$ the ultrafilter $\mu_1 + \mu_2\in \beta X,$ with
\begin{center}
$(\mu_1 + \mu_2)(A)= \mu_1(\{x\in X : \mu_2(\{y\in X : x+y\in A\})=1\})$ for every $A\subseteq X$.
\end{center}
With this operation $(\beta X,+,{\mathfrak T})$ becomes a right topological semigroup; in particular, for every $x_0\in X$ the function $f_{x_0} : \beta X\rightarrow \beta X$ with $f_{x_0}(\mu)=\mu_{x_0}+\mu$ is continuous.
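For principal ultrafilters the operation reduces to the semigroup operation, $\mu_x+\mu_y=\mu_{x+y}$. The following sketch (ours, not from the paper) evaluates the defining formula directly for principal ultrafilters on the integers, restricted to a finite window so that the inner set is computable:

```python
def principal(point):
    """The principal ultrafilter mu_point: mu(A) = 1 iff point is in A."""
    return lambda A: 1 if point in A else 0

def add(mu1, mu2, universe):
    """(mu1 + mu2)(A) = mu1({x : mu2({y : x + y in A}) = 1}),
    evaluated over a finite window `universe` standing in for X."""
    def mu(A):
        inner = {x for x in universe
                 if mu2({y for y in universe if x + y in A})}
        return mu1(inner)
    return mu

window = range(-50, 51)
mu = add(principal(1), principal(2), window)
# mu agrees with the principal ultrafilter at 1 + 2 = 3 on subsets of the window
```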
Hindman and Strauss in \cite{HS} proved the following result concerning left cancellative semigroups. For $A\subseteq S$ and $t\in S$ we set $-t+A=\{s\in S:\;t+s\in A\}.$
\begin{thm}\label{thm:33}
Let $S$ be an infinite countable left cancellative semigroup, let ${\mathcal F}=\{F_n\}_{n\in {\mathbb N}}$ be a left F{\o}lner sequence in $[S]^{<\omega}_{>0},$ and let $A\subseteq S.$ There is a countably additive measure $\mu$ on the set ${\mathcal B}$ of Borel subsets of $\beta S$ such that
$(1)$ $\mu(\overline{A})=\overline{d}_{{\mathcal F}}(A),$
$(2)$ for all $B\subseteq S, $ $\mu(\overline{B})\leq \overline{d}_{{\mathcal F}}(B),$
$(3)$ for all $B\in {\mathcal B}$ and all $t\in S,$ $\mu(-t+B)=\mu(B)=\mu(t+B),$ and
$(4)$ $\mu(\beta S)=1.$
\end{thm}
Now we can state our theorem and give a proof analogous to that of Theorem 5.6 in \cite{HS}.
\begin{thm}\label{thm:4}
Let ${\mathcal F}=\{F_n\}_{n\in {\mathbb N}}$ be a left F{\o}lner sequence in $[{\mathbb Q}]^{<\omega}_{>0},$ and let $A\subseteq {\mathbb Q}$ such that $\overline{d}_{{\mathcal F}}(A)>0.$ Then for each $k\in {\mathbb N}$ there exists $q\in {\mathbb Q}^\ast$ such that \begin{center} $\overline{d}_{{\mathcal F}}(A\cap(-q+A)\cap\ldots\cap (-kq+A))>0.$ \end{center}
\end{thm}
\begin{proof}
Let ${\mathcal B}$ be the set of Borel subsets of $\beta {\mathbb Q}.$ Pick a countably additive measure $\mu$ on ${\mathcal B}$ which satisfies the conditions of Theorem~\ref{thm:33}. Let $k\in {\mathbb N}.$ For $l\in \{1,\ldots,k\}$ and $\nu\in \beta {\mathbb Q}$ let $T_{l,m}(\nu)=\mu_{l(-1)^{m+1}m!}+\nu$ for $m\in {\mathbb N}$ and $T_{l,m}(\nu)=\mu_{l\frac{(-1)^{-m}}{(-m+1)!}}+\nu$ for $-m\in {\mathbb N}.$ Each $T_{l,n},$ $1\leq l\leq k,\;n\in {\mathbb Z}^\ast$ is a continuous function from $\beta{\mathbb Q}$ to $\beta{\mathbb Q},$ since $\beta{\mathbb Q}$ is a right topological semigroup. Let $\{\mathcal{T}_l^{q}\}_{q\in{\mathbb Q}}$ be the rational system defined from $\{T_{l,n}\}_{n\in {\mathbb Z}^\ast}$ for every $1\leq l\leq k$ respectively.
The transformations $T_{l,n},$ $1\leq l\leq k,\;n\in {\mathbb Z}^\ast$ preserve $\mu,$ since $\mu$ satisfies the condition (3) of Theorem~\ref{thm:33}. Consequently, $\{\mathcal{T}_l^{q}\}_{q\in{\mathbb Q}},$ $1\leq l\leq k$ are commuting rational systems of measure preserving transformations on the space $(\beta {\mathbb Q},{\mathcal B},\mu).$ According to the condition (1) of Theorem~\ref{thm:33}, we have $\mu(\overline{A})=\overline{d}_{{\mathcal F}}(A)>0.$ So, using Theorem~\ref{thm:2}, we can find $q\in {\mathbb Q}^\ast$ such that \begin{center} $\mu(\overline{A}\cap (\mathcal{T}_1^q)^{-1}(\overline{A})\cap\ldots \cap (\mathcal{T}_k^q)^{-1}(\overline{A}))>0.$ \end{center} This gives \begin{center} $\overline{d}_{{\mathcal F}}(A\cap(-q+A)\cap\ldots\cap (-kq+A))=\mu(\overline{A\cap(-q+A)\cap\ldots\cap (-kq+A)})=$\\ $ \;\;\;\;\;\;\;\;\;\; \;\;\;\;\;\;\;\;\;\; \;\;\;\;\;\;\;\;\;\; \;\;\;\;\;\;\;\;\;\; \;\;\;\;\;\;\;\;\;\; \;\;\;\;\;\;\;\;\;\;\;\;=\mu(\overline{A}\cap (\mathcal{T}_1^q)^{-1}(\overline{A})\cap\ldots \cap (\mathcal{T}_k^q)^{-1}(\overline{A}))>0,$ \end{center} which finishes the proof.
\end{proof}
\begin{cor}\label{c}
Let ${\mathcal F}=\{F_n\}_{n\in {\mathbb N}}$ be a left F{\o}lner sequence in $[{\mathbb Q}]^{<\omega}_{>0},$ and let $A\subseteq {\mathbb Q}$ such that $\overline{d}_{{\mathcal F}}(A)>0.$ Then for each $k\in {\mathbb N}$ there exist $q\in {\mathbb Q}^\ast$ and $p\in A$ such that \begin{center} $p+jq\in A$ for every $0\leq j\leq k.$ \end{center}
\end{cor}
\begin{proof}
Let $k\in {\mathbb N}.$ By Theorem~\ref{thm:4} there exists $q\in {\mathbb Q}^\ast$ such that $\overline{d}_{{\mathcal F}}(A\cap(-q+A)\cap\ldots\cap (-kq+A))>0.$ Pick $p\in A\cap(-q+A)\cap\ldots\cap (-kq+A).$
\end{proof}
\begin{remarks} $(1)$ In the statements of Theorem~\ref{thm:4} and Corollary~\ref{c} the rational number $q$ can be located either in ${\mathbb Z}^\ast$ or in $(e^{-1}-1,e^{-1})\cap{\mathbb Q}^{\ast},$ using in the respective proof the results of Remark~\ref{r}.
$(2)$ Defining for a left F{\o}lner sequence ${\mathcal F}=\{F_n\}_{n\in {\mathbb N}}$ in $[{\mathbb Q}]^{<\omega}_{>0},$ and $A\subseteq {\mathbb Q}$ the density \begin{center} $d^{\ast}_{{\mathcal F}}(A)=\sup\big\{\alpha:\;(\forall m\in {\mathbb N})(\exists n\geq m)(\exists q\in {\mathbb Q})(|A\cap (q+F_n)|\geq \alpha |F_n|)\big\}$ \end{center} we can replace $\overline{d}_{{\mathcal F}}$ with $d^{\ast}_{{\mathcal F}}$ in the statements of Theorem~\ref{thm:4} and Corollary~\ref{c}, using Theorem 4.6 in \cite{HS}, instead of Theorem~\ref{thm:33}.
\end{remarks}
Viewing the rational numbers as words and using the density Hales-Jewett theorem of Furstenberg and Katznelson (\cite{FuKa}), we prove in Theorem~\ref{l} another density result for the set of rational numbers. Let us start with the necessary notation.
Let $\Sigma=\{\alpha_1,\ldots,\alpha_k\},$ $k\in {\mathbb N},$ be a finite set and let $\upsilon\notin\Sigma.$ We denote by $W(\Sigma)$ the set of all words $w=w_1\ldots w_n,$ where $n\in {\mathbb N}$ and $w_i\in \Sigma$ for every $1\leq i\leq n,$ and by $W(\Sigma,\upsilon)$ the set of all the (variable) words in $W(\Sigma\cup\{\upsilon\})$ with at least one occurrence of the symbol $\upsilon.$ A \textit{combinatorial line} in $W(\Sigma)$ is a set $\{w(\alpha):\;\alpha\in \Sigma\}$ obtained by substituting the variable $\upsilon$ of a variable word $w(\upsilon)$ by the symbols $\alpha_1,\ldots,\alpha_k.$ We also denote by $W_n(\Sigma)$ the subset of $W(\Sigma)$ consisting of all the words of length $n.$
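Concretely (our illustration): substituting the variable of $w(\upsilon)$ by each symbol of $\Sigma$ yields the $k$ words of the line. In Python:

```python
def combinatorial_line(variable_word, alphabet, var="v"):
    """All words obtained by replacing every occurrence of the
    variable symbol by one fixed letter of the alphabet."""
    assert var in variable_word, "a variable word must contain the variable"
    return {variable_word.replace(var, a) for a in alphabet}

# Over Sigma = {1, 2}, the variable word 1v2v spans a line of 2 words:
line = combinatorial_line("1v2v", ["1", "2"])   # {"1121", "1222"}
```

Note that every occurrence of the variable receives the same letter, which is what distinguishes a combinatorial line from an arbitrary set of substitutions.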
Furstenberg and Katznelson in \cite{FuKa2} proved the following theorem:
\begin{thm}\label{thm:6}
Let $\Sigma=\{\alpha_1,\ldots,\alpha_k\},$ $k\in {\mathbb N},$ be a finite alphabet. If $A\subseteq W(\Sigma)$ and $\limsup_n \frac{|A\cap W_n(\Sigma)|}{k^{n}}>0,$ then $A$ contains a combinatorial line.
\end{thm}
\noindent For every $n,\;k\in {\mathbb N}$ and integers $t^n_1<\ldots<t_n^n$ with $|t^n_j|\geq k$ for every $1\leq j\leq n$ we define, in analogy to the representation of rational numbers, the subset ${\mathbb Q}(t^n_1,\ldots,t_n^n,k)\subseteq {\mathbb Q}^{\ast}$ as \begin{center}${\mathbb Q}(t^n_1,\ldots,t_n^n,k)=\{\sum^{n}_{j=1}q_{t_j^n}c_{t_j^n}:$ $1\leq q_{t_j^n}\leq k,$ $c_{t_j^n}=\frac{(-1)^{-t_j^n}}{(-t_j^n+1)!}$ if $t_j^n<0$\\ $\;\;\;\;\;$ and $c_{t_j^n}=(-1)^{t_j^n+1}t_j^n!$ if $t_j^n>0 \}.$\end{center}
For $\Sigma=\{1,\ldots,k\},\;k\in {\mathbb N}$ we define \begin{center} $g:W(\Sigma)\rightarrow \bigcup_{n\in {\mathbb N}} {\mathbb Q}(t^n_1,\ldots,t_n^n,k)$ with $g(w_1\ldots w_n)=\sum^{n}_{j=1}w_jc_{t_j^n}.$ \end{center} Note that $g|_{W_n(\Sigma)}:W_n(\Sigma)\rightarrow {\mathbb Q}(t^n_1,\ldots,t_n^n,k)$ is $1-1$ and onto for every $n\in {\mathbb N}.$
Using Theorem~\ref{thm:6} we have the following density theorem:
\begin{thm}\label{l}
Let $k\in {\mathbb N}$ and a sequence $((t^n_1,\ldots,t_n^n))_{n\in {\mathbb N}}$ with $t^n_1<\ldots<t_n^n$ and $|t^n_j|\geq k$ for every $1\leq j\leq n,$ $n\in {\mathbb N}.$ If $A\subseteq {\mathbb Q}$ with $\limsup_{n}\frac{|A\cap {\mathbb Q}(t^n_1,\ldots,t_n^n,k)|}{k^{n}}>0,$ then there exist $p\in A$ and $q\in {\mathbb Q}^\ast$ with $dom(p),$ $dom(q)\subseteq \{t^n_1,\ldots,t_n^n\}$ for some $n\in {\mathbb N},$ such that \begin{center} $p+iq\in A$ for every $i=0,1,\ldots,k-1.$\end{center}
\end{thm}
\begin{proof}
Let $\Sigma=\{1,\ldots,k\}.$ Since $|g^{-1}(A)\cap W_n(\Sigma)|=|A\cap {\mathbb Q}(t^n_1,\ldots,t_n^n,k)|$ for every $n\in {\mathbb N},$ the set $g^{-1}(A)$ contains a combinatorial line $\{w(\alpha):\;\alpha\in \Sigma\}$ obtained by a variable word $w(\upsilon),$ $\upsilon\notin \Sigma,$ according to Theorem~\ref{thm:6}. Let $n$ be the length of $w(\upsilon).$ Then, $\{g(w(\alpha)):\;\alpha\in \Sigma\}\subseteq A\cap {\mathbb Q}(t^n_1,\ldots,t_n^n,k).$ So, there exist $F_1=\{t\in dom(w):w_t\in \Sigma\},$ $F_2=\{t\in dom(w):w_t=\upsilon\}$ with $F_1,\;F_2\subseteq \{t^n_1,\ldots,t_n^n\}$ and $F_1\cap F_2=\emptyset$ such that if $q=\sum_{t\in F_2}c_t$ and $p=q+\sum_{t\in F_1}w_tc_t,$ where $c_{t}=\frac{(-1)^{-t}}{(-t+1)!}$ if $t<0$ and $c_{t}=(-1)^{t+1}t!$ if $t>0,$ we have that $g(w(i))=p+iq\in A$ for every $0\leq i\leq k-1.$
\end{proof}
\section*{Acknowledgments}
The author wishes to thank Professor V. Farmaki for helpful discussions and support during the preparation of this paper.
{\footnotesize
\noindent
\newline
Andreas Koutsogiannis:
\newline
{\sc Department of Mathematics, Athens University, Panepistemiopolis, 15784 Athens, Greece}
\newline
E-mail address: [email protected]}
\end{document} |
\begin{document}
\date{}
\author{D. Salmer\'on$^{1, 2}$ and J.A. Cano$^3$\\
\textit{\small{$^1$CIBER Epidemiolog\'ia y Salud P\'ublica-CIBERESP.}}\\
\textit{\small{$^2$Departamento de Ciencias Sociosanitarias,
Universidad
de Murcia, Spain.}}\\
\textit{\small{$^3$Departamento de Estad\'istica e Investigaci\'on
Operativa, Univeridad de Murcia, Spain.}}\\
}
\title{\textbf{Monte Carlo error in the Bayesian estimation of risk ratios using log-binomial regression models: an efficient MCMC method}}
\maketitle
\begin{abstract}
In cohort studies binary outcomes are very often analyzed by logistic regression. However, it is well-known that when the goal is to estimate a risk ratio, the logistic regression is inappropriate if the outcome is common. In these cases, a log-binomial regression model is preferable. On the other hand, the estimation of the regression coefficients of the log-binomial model is difficult due to the constraints that must be imposed on these coefficients. Bayesian methods allow a straightforward approach for log-binomial regression models, produce smaller mean squared errors and the posterior inferences can be obtained using the software WinBUGS. However, the Markov chain Monte Carlo (MCMC) methods implemented in WinBUGS can lead to a high Monte Carlo error. To avoid this drawback we propose an MCMC algorithm that uses a reparameterization based on a Poisson approximation and has been designed to efficiently explore the constrained parameter space.
\newline
\noindent \textit{\textbf{Keywords}}: Bayesian inference, Binomial regression models, Epidemiology, Markov chain Monte Carlo, Risk ratio.
\end{abstract}
\section{Introduction}
The odds ratio is a measure of association widely used in Epidemiology that can be estimated using logistic regression. On the other hand, when one wants to communicate a risk ratio, the logistic regression is not recommended if the outcome is common, see \cite{mcnutt}, \cite{pedersen2003}, \cite{greenland2004}, \cite{Spiegelman2005}, \cite{pedersen}, and \cite{deddens} among others.
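The discrepancy between the two measures is easy to see numerically. The following sketch (in Python rather than the R used later in this paper; the cohort counts are made up for illustration) shows the odds ratio exaggerating the risk ratio when the outcome is common:

```python
# Hypothetical 2x2 cohort (illustrative counts only, not data from this paper).
exposed_events, exposed_total = 60, 100      # risk in the exposed: 0.60
unexposed_events, unexposed_total = 30, 100  # risk in the unexposed: 0.30

r1 = exposed_events / exposed_total
r0 = unexposed_events / unexposed_total
risk_ratio = r1 / r0                            # 2.0
odds_ratio = (r1 / (1 - r1)) / (r0 / (1 - r0))  # 3.5, overstating the RR

print(risk_ratio, odds_ratio)
```

With a rare outcome the two measures nearly coincide; here, at 60% versus 30% risk, the odds ratio of 3.5 badly overstates the risk ratio of 2.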
If one wants to estimate the adjusted risk ratio, a log-binomial model is preferable to a logistic model. The log-binomial model assumes that the distribution of the outcome $y_i$ is the Bernoulli distribution
\begin{equation}
y_i\sim Ber(p_i),\,\,\,
\log p_i=x_i\beta,\,\,\,i\in\mathbb{N}_n=\{1,...,n\},\label{Modelo}
\end{equation}
where $x_i\beta=(x_{i1},x_{i2},...,x_{ik})(\beta_1,...,\beta_k)^T$, and $x_i$ includes variables denoting exposures, confounders, predictors and product terms. Usually $x_{i1}=1$ and therefore $\beta_1$ is the intercept. Since $p_i=\exp(x_i\beta)\in(0,1)$, we have to impose the constraints $x_i\beta<0$, $i\in\mathbb{N}_n$, on the values of $\beta$ which complicates its maximum likelihood estimation. \cite{zou2004} and \cite{Spiegelman2005} have suggested a Poisson model without the constraints, that is,
\begin{equation}
y_i\sim Poisson(\mu_i),\,\,\,\log \mu_i=x_i\beta,\,\,\,i\in\mathbb{N}_n,\label{Zou_poisson}
\end{equation}
to approximate the log-binomial maximum likelihood estimator and they consider a robust sandwich variance estimator to estimate the standard errors. Model (\ref{Zou_poisson}) can be fitted with standard statistical packages like R, STATA or SAS. Nevertheless, if $\hat{\beta}$ is the estimate obtained fitting the Poisson model then $x_i\hat{\beta}$ can be greater than zero. On the other hand, \cite{pedersen} and \cite{deddens} have proposed a different approximation using an \textit{expanded dataset} and a maximum likelihood estimator.
In this article we consider a Bayesian analysis of the log-binomial regression model (\ref{Modelo}). In this context \cite{chucole} have proposed to incorporate the constraints $x_i\beta<0$, $i\in\mathbb{N}_n$ as part of the likelihood function for log-binomial regression models and they have shown that the Bayesian approach provides estimates similar to the maximum likelihood estimates and produces smaller mean squared errors. Posterior computations can be carried out using the WinBUGS code that appears in \cite{chucole}; however, WinBUGS can lead to a poor convergence and a high Monte Carlo error. Furthermore, the instrumental distribution implemented in WinBUGS to simulate each full conditional distribution does not account for the constraints in an efficient way: the constraints must be evaluated \textit{a posteriori} each time a simulation from the instrumental distribution is proposed, and this simulation is rejected if the new proposed value of the parameter does not satisfy the $n$ constraints.
In this paper we overcome these two drawbacks using an MCMC method based on a reparameterization and an instrumental distribution that directly generates values of the parameters in the constrained parameter space.
\section{Simulation from the posterior distribution}
To introduce the problems that can arise we consider the following example discussed in \cite{chucole}. The data are $\mathbf{y}=(0, 0, 0, 0, 1, 0, 1, 1, 1, 1)$ and $x_{i2}=i$, so that
\begin{equation}
\log p_i=\beta_1+i\beta_2,\,\,\,i\in\{1,2,...,10\}.\label{toy}
\end{equation}
We have used the WinBUGS code proposed by \cite{chucole} and provided in the Appendix with the prior distribution $\pi(\beta_1,\beta_2)=1$. We have run a Markov chain with 10000 iterations and an adaptive phase of 500 iterations using the method \textit{UpdaterMetnormal}, and the last 9500 iterations have been used to carry out the inferences.
Figures (\ref{fig_ejemplo_1_beta1}) and (\ref{fig_ejemplo_1_beta2}), first row, show a poor convergence of the chains that may be explained in part by the high posterior correlation (-0.97) between $\beta_1$ and $\beta_2$ and by the constraints. The same results were obtained increasing the adaptive phase to 2000 and using the last 8000 iterations. If we consider orthogonal covariates, that is $x_{i2}=i-5.5$ instead of $x_{i2}=i$, the autocorrelation functions show a moderate improvement (Figures (\ref{fig_ejemplo_1_beta1}) and (\ref{fig_ejemplo_1_beta2}), second row), but a slow convergence again. A better performance would be attained with a reparameterization for which the new parameters were approximately uncorrelated given the data. This reparameterization may be obtained using the estimated covariance matrix of the maximum likelihood estimator of $\beta$. However, very often neither the maximum likelihood estimate nor the estimated covariance matrix can be calculated and this is the first problem we consider. To avoid this drawback we propose a reparameterization based on a Poisson model, see Zou (\citeyear{zou2004}).
\subsection{Reparameterization based on a Poisson model}
Let $\hat{\Sigma}$ be the estimated covariance matrix of the maximum likelihood estimator $\hat{\beta}$ obtained fitting the Poisson model (\ref{Zou_poisson}) and let $L$ be the upper triangular factor of the Cholesky decomposition of $\hat{\Sigma}=L^TL$. The likelihood function associated with the log-binomial regression model (\ref{Modelo}) is
\begin{equation}
f(\mathbf{y}|\beta)=\prod_{i=1}^np_i^{y_i}(1-p_i)^{1-y_i},\label{verosi}
\end{equation}
where $p_i=\exp(x_i\beta)$ and $x_i\beta<0$, $i\in\mathbb{N}_n$. The reparameterization we propose is $\theta=L^{-T}\beta$. If $\pi(\beta)$ is the prior distribution then the posterior distribution of $\beta$ is $\pi(\beta|\mathbf{y})\propto\pi(\beta)f(\mathbf{y}|\beta)$ and hence, given the data $\mathbf{y}$, the distribution of $\theta=L^{-T}\beta$ is
\[
\pi(\theta|\mathbf{y})\propto\pi(L^T\theta)\prod_{i=1}^np_i^{y_i}(1-p_i)^{1-y_i},\,\,\,\theta\in\Theta
\]
where now, $p_i=\exp(z_i\theta)$, $z_i=x_iL^T$, $i\in\mathbb{N}_n$ and
\[
\Theta=\{\theta\in\mathbb{R}^k;\,z_i\theta<0\,\,\,\forall i\in\mathbb{N}_n\}.
\]
Using WinBUGS we can simulate a Markov chain with stationary distribution $\pi(\theta|\mathbf{y})$ and therefore this chain can be used to carry out Bayesian inference on $\beta=L^T\theta$. If the posterior distribution of $\beta$ is approximately the multivariate normal distribution $N(\hat{\beta},\hat{\Sigma})$ restricted to $\{\beta\in\mathbb{R}^k;\,x_i\beta<0\,\forall i\in\mathbb{N}_n\}$, then the distribution $\pi(\theta|\mathbf{y})$ is approximately the multivariate normal distribution with mean $\hat{\theta}=L^{-T}\hat{\beta}$ and covariance matrix $L^{-T}\hat{\Sigma}L^{-1}=L^{-T}(L^TL)L^{-1}=\mathbf{I}$, restricted to $\Theta$. Therefore, WinBUGS would get a better convergence if it is used to simulate from $\pi(\theta\vert\mathbf{y})$ instead of directly simulating from the posterior distribution of $\beta$.
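The whitening identity $L^{-T}\hat{\Sigma}L^{-1}=\mathbf{I}$ that motivates the reparameterization can be checked directly. A minimal numerical verification (a Python sketch with a made-up $2\times 2$ matrix, not the paper's R code):

```python
# Made-up covariance matrix Sigma and its upper-triangular factor L with
# Sigma = L^T L (illustrative values only; the paper works in R).

def matmul(A, B):
    """2x2 matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def transpose(A):
    return [[A[j][i] for j in range(2)] for i in range(2)]

Sigma = [[4.0, 2.0], [2.0, 5.0]]
L = [[2.0, 1.0], [0.0, 2.0]]       # upper triangular, L^T L = Sigma
Linv = [[0.5, -0.25], [0.0, 0.5]]  # L^{-1}

# Covariance of theta = L^{-T} beta: L^{-T} Sigma L^{-1} equals the identity.
whitened = matmul(matmul(transpose(Linv), Sigma), Linv)
print(whitened)
```

Because the transformed coordinates are approximately uncorrelated with unit variance, componentwise Metropolis updates on $\theta$ mix far better than updates on the highly correlated $\beta$.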
For model (\ref{toy}) we have used WinBUGS to simulate from $\pi(\theta\vert\mathbf{y})$. We have run a chain with 10000 iterations and an adaptive phase of 500 iterations using the method \textit{UpdaterMetnormal}, and the last 9500 iterations have been used to carry out the inferences. After that, we have transformed the simulations using $\beta=L^T\theta$. Figures (\ref{fig_ejemplo_1_beta1}) and (\ref{fig_ejemplo_1_beta2}), third row, show a better convergence of the chains compared with the chains obtained from WinBUGS when the target was $\pi(\beta|\mathbf{y})$. This improvement is due to the reparameterization based on the Poisson model.
On the other hand, the methods implemented with WinBUGS to simulate from $\pi(\beta|\mathbf{y})$ or from $\pi(\theta\vert\mathbf{y})$ have the drawback that the instrumental distribution of the Metropolis-Hastings step used to simulate each full conditional distribution does not account for the constraints. Therefore, the constraints must be evaluated \textit{a posteriori} each time a simulation from the instrumental distribution is proposed. This can increase the computational time and the \textit{probability of rejection} in the Metropolis-Hastings steps. This is the second problem we consider. To overcome it we propose an instrumental distribution designed to efficiently explore the parameter space.
\subsection{Instrumental distribution}
We propose a Metropolis-within-Gibbs algorithm that generates a Markov chain with stationary distribution $\pi(\theta\vert\mathbf{y})$ and therefore it can be used to carry out Bayesian inference on $\beta=L^T\theta$. It is based on an efficient simulation from the full conditional distributions. For $j\in\{1,2,...,k\}$ and $\theta_{\sim j}\in\mathbb{R}^{k-1}$ such that $\pi(\theta_{\sim j}\vert\mathbf{y})=\int\pi(\theta\vert\mathbf{y})d\theta_j>0$, the full conditional distribution is $\pi(\theta_j\vert\mathbf{y},\theta_{\sim j})\propto\pi(\theta|\mathbf{y})$. The set
\[
\Theta_j=\{\theta_j\in\mathbb{R};\pi(\theta_j\vert\mathbf{y},\theta_{\sim j})>0\},
\]
is a key ingredient of our Metropolis-within-Gibbs algorithm. The following proposition, proved in the Appendix, establishes that the set $\Theta_j$ is an interval of the real line.
\begin{proposition}
If $\pi(\theta_{\sim j}\vert\mathbf{y})>0$ then the set $\Theta_j$ is the interval $(a_j,b_j)$ where
\[
a_j=\max_{i\in A_j}\sum_{s\neq j}-z_{is}\theta_s/z_{ij},\,\,\,A_j=\{i\in\mathbb{N}_n;z_{ij}<0\},
\]
and
\[
b_j=\min_{i\in B_j}\sum_{s\neq j}-z_{is}\theta_s/z_{ij},\,\,\,B_j=\{i\in\mathbb{N}_n;z_{ij}>0\},
\]
with the convention that $a_j=-\infty$ if $A_j=\emptyset$ and $b_j=+\infty$ if $B_j=\emptyset$.
\end{proposition}
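The bounds in the proposition are cheap to compute: each row of $Z$ with $z_{ij}<0$ contributes a lower bound and each row with $z_{ij}>0$ an upper bound, while rows with $z_{ij}=0$ impose no restriction on $\theta_j$. A small self-contained sketch (Python with made-up $Z$ and $\theta$; the paper's actual implementation is the R function in the Appendix):

```python
import math

# Made-up rows z_i and a current theta satisfying z_i . theta < 0 for all i
# (illustrative values only).
Z = [[-1.0, 2.0],
     [ 1.0, 1.0],
     [ 0.5, 3.0]]
theta = [-1.0, -1.0]
j = 0  # coordinate being updated (0-based here; the paper indexes from 1)

a_j, b_j = -math.inf, math.inf
for z in Z:
    rest = sum(z[s] * theta[s] for s in range(len(theta)) if s != j)
    if z[j] < 0:              # i in A_j contributes a lower bound
        a_j = max(a_j, -rest / z[j])
    elif z[j] > 0:            # i in B_j contributes an upper bound
        b_j = min(b_j, -rest / z[j])

print((a_j, b_j))  # the support Theta_j of the full conditional
```

By construction the current value $\theta_j$ always lies inside $(a_j,b_j)$, so the interval is never empty along the chain.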
To get an appropriate instrumental distribution we argue that the multivariate normal distribution $N(\hat{\theta},\mathbf{I})$ restricted to $\Theta$ is an approximation to the distribution $\pi(\theta|\mathbf{y})$ and hence, the distribution $N(\hat{\theta}_j,1)$ restricted to $\Theta_j=(a_j,b_j)$ would be an appropriate instrumental distribution to perform the Metropolis-Hastings step. However, the simulation from a truncated normal distribution can increase the computational time. Instead, we propose the Cauchy distribution with location $\hat{\theta}_j$ and scale $1$ truncated to $\Theta_j$ with density
\begin{equation}
\mathcal{C}(\theta_j^{\prime})\propto\frac{1_{\Theta_j}(\theta_j^{\prime})}
{\pi(1+(\theta_j^{\prime}-\hat{\theta}_j)^2)},\label{cauchy}
\end{equation}
where $1_{\Theta_j}(\theta_j^{\prime})=1$ if $\theta_j^{\prime}\in\Theta_j$ and $0$ otherwise. To simulate $\theta_j^{\prime}$ from this instrumental distribution we simulate $u\sim U(0,1)$ and compute
\[
\theta_j^{\prime}=\hat{\theta}_j-\tan\left((u-1)\arctan(a_j-\hat{\theta}_j)+u\arctan(\hat{\theta}_j-b_j)\right).
\]
The instrumental distribution (\ref{cauchy}) reduces the autocorrelation, as shown in the examples. The proposed Metropolis-within-Gibbs algorithm has been implemented in R and is provided in the Appendix.
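The displayed inverse-CDF formula can be checked directly: it maps $u\in(0,1)$ monotonically onto $(a_j,b_j)$, so every proposal automatically satisfies the constraints and no \textit{a posteriori} rejection is needed. A Python sketch (illustrative values for $\hat{\theta}_j$, $a_j$, $b_j$; the paper's R implementation is in the Appendix):

```python
import math
import random

def rtrunc_cauchy(location, a, b, u):
    """Inverse-CDF draw from Cauchy(location, 1) truncated to (a, b)."""
    return location - math.tan((u - 1.0) * math.atan(a - location)
                               + u * math.atan(location - b))

random.seed(1)
theta_hat_j, a_j, b_j = 0.3, -2.0, 1.0  # made-up values
draws = [rtrunc_cauchy(theta_hat_j, a_j, b_j, random.random())
         for _ in range(1000)]
print(min(draws), max(draws))  # every draw lands inside (a_j, b_j)
```

Setting $u=0$ recovers $a_j$ and $u=1$ recovers $b_j$, which confirms that the formula is the quantile function of the truncated Cauchy distribution.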
Figures (\ref{fig_ejemplo_1_beta1}) and (\ref{fig_ejemplo_1_beta2}), last row, show the results obtained using our MCMC algorithm with 10000 iterations. Our algorithm produces a satisfactory acceptance rate and a quickly decreasing autocorrelation. This improvement, compared with the WinBUGS code used to simulate from $\pi(\theta|\mathbf{y})$ is due to the proposed instrumental distribution (\ref{cauchy}).
\section{Examples}
In this section we present three examples to illustrate our MCMC algorithm.
For each example we have used WinBUGS to simulate from $\pi(\beta|\mathbf{y})$ and from $\pi(\theta\vert\mathbf{y})$ running a chain with an adaptive phase of 500 iterations out of a total of 10000 iterations for the method \textit{UpdaterMetnormal} and after that we have transformed the simulations from $\pi(\theta\vert\mathbf{y})$ using $\beta=L^T\theta$. We have also used our MCMC algorithm with 10000 iterations and the simulations have been transformed using $\beta=L^T\theta$. We have used uniform prior distributions. The efficiency of each algorithm has been measured in terms of the effective sample size and the computational speed.
\subsection{Breast cancer mortality}
We consider the data relating receptor level and stage to 5-year
survival in a cohort of 192 women with breast cancer, see Table (\ref{Greenland}), discussed in \cite{greenland2004}. In this example the percentage of deaths was 28.13\%.
Figure (\ref{fig-greeland}), first row, shows the autocorrelation functions for the parameters $e^{\beta_1}$, $e^{\beta_2}$, $e^{\beta_3}$ and $e^{\beta_4}$, obtained from WinBUGS with target $\pi(\beta|\mathbf{y})$ (dotted), $\pi(\theta|\mathbf{y})$ (dashed) and using our algorithm (vertical lines). Our algorithm has a satisfactory acceptance rate and a quickly decreasing autocorrelation function. The effective sample sizes are shown in Table (\ref{tabla-ESS-breast}). The results show that our method converges faster than the chains obtained with WinBUGS. Regarding the computational speed, WinBUGS and our MCMC algorithm took seven seconds. Table (\ref{tabla-res-breast}) shows the estimation of the risk ratios obtained with our MCMC algorithm.
\subsection{Low birth weight}
We use the data from a 1986 cohort study conducted at the Baystate Medical Center, Springfield Massachusetts, see \cite{Lemeshow}. The study was designed to identify risk factors associated with an increased risk of low birth weight (weighing less than 2500 grams). Data were collected on 189 pregnant women, 59 of whom had low birth weight infants. We have studied the association between the low birth weight and uterine irritability (ui: yes/no), smoking status during pregnancy (smoke: yes/no), mother's race (race: white, black, other), previous premature labours (ptl$>0$: yes/no), and mother's age (age: $\leq 18$, (18,20], (20,25], (25,30] and $>30$).
Figures (\ref{fig1-pesoNacer}) and (\ref{fig2-pesoNacer}), first row, show the autocorrelation functions for the parameters $e^{\beta_j}$, $j=1,\dots,10$, obtained from WinBUGS with target $\pi(\beta|\mathbf{y})$ (dotted), $\pi(\theta|\mathbf{y})$ (dashed) and using our algorithm (vertical lines). The effective sample sizes are shown in Table (\ref{tabESSpesoNacer}). Again, the results show that our method converges faster than WinBUGS (regarding the computational speed, our MCMC algorithm and WinBUGS took 20 seconds). Table (\ref{tabSUMMARYpesoNacer}) shows the estimates of the risk ratios obtained with our MCMC algorithm.
\subsection{Simulated example}
We have simulated data from the log-binomial regression model
\[
\log\,p_i=x_i\beta,\,\,\,i\in\mathbb{N}_{1500}=\{1,...,1500\}
\]
where $x_i=(1,x_{i2},...,x_{i9})$ and $x_{ij}$ has been simulated as follows. For $i=1,...,1500$
\begin{itemize}
\item
$
x_{i2}=E_i-1/2,\,
x_{i3}=F_{i1}-1/2,\,
x_{i4}=F_{i2}-1/2,\,
x_{i5}=F_{i3}-1/2,\,
x_{i6}=F_{i4}-1/2,\,
$
where $F_{ij}\sim Ber(1/2)$ for $j=1,2$, $F_{ij}\sim U(0,1)$ for $j=3,4$ and the distribution of $E_i$ is the Bernoulli distribution
\[
Ber(\exp(\alpha_1+\alpha_2(F_{i1}-1/2)+\alpha_3(F_{i2}-1/2)+\alpha_4(F_{i3}-1/2)+\alpha_5(F_{i4}-1/2))).
\]
\item We have simulated $(w_{i1},w_{i2})$ from a multivariate normal distribution with mean $(0,0)$, $Var(w_{i1})=Var(w_{i2})=1$ and $Cov(w_{i1},w_{i2})=0.5$. Then we have calculated
$\tilde{w}_{ij}=w_{ij}-\min_iw_{ij}$ and $V_{ij}=\tilde{w}_{ij}/\max_i\tilde{w}_{ij}$, $j=1,2$, and $V_{i3}\sim U(0,1)$. Finally,
\[
x_{i7}=V_{i1}-1/2,\,
x_{i8}=V_{i2}-1/2,\,
x_{i9}=V_{i3}-1/2.
\]
\end{itemize}
The values of the parameters $(e^{\beta_1},...,e^{\beta_9})$ and $(e^{\alpha_1},...,e^{\alpha_5})$ used to simulate the data $\mathbf{y}$ were
\[
(0.379, 1.400, 1.200, 1.300, 1.100, 1.250, 1.500, 1.400, 1.100)
\]
and
\[
(0.512, 1.400, 1.200, 1.600, 1.400),
\]
respectively. Thus, $E$ may represent an exposure, $F_1$, $F_2$, $F_3$ and $F_4$ confounders and $V_1$, $V_2$ and $V_3$ predictors. With this value of $\beta$ we have computed $p_i=\exp(x_i\beta)$ and we have simulated the outcome $y_i\sim Ber(p_i)$, $i\in\mathbb{N}_{1500}$, obtaining $\overline{y}=\sum_i y_i/n=0.39$.
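The outcome generation step (simulate $y_i\sim Ber(p_i)$ with $p_i=\exp(x_i\beta)$) can be sketched as follows, on a deliberately tiny made-up design rather than the paper's simulation (Python, illustrative values only):

```python
import math
import random

random.seed(7)
beta = [math.log(0.4), math.log(1.5)]   # intercept and one log-risk-ratio (made up)
X = [[1.0, x2] for x2 in (-0.5, 0.0, 0.5)] * 2000  # made-up design, n = 6000

p = [math.exp(sum(xi * bi for xi, bi in zip(row, beta))) for row in X]
assert all(0.0 < pi < 1.0 for pi in p)  # the constraints x_i beta < 0 hold
y = [1 if random.random() < pi else 0 for pi in p]
print(sum(y) / len(y))                  # close to the average of the p_i
```

Checking $0<p_i<1$ before drawing is essential: parameter values violating $x_i\beta<0$ would not define a valid Bernoulli probability.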
Figure (\ref{fig1}) shows the autocorrelation functions obtained from WinBUGS with target $\pi(\beta|\mathbf{y})$ (dotted), $\pi(\theta|\mathbf{y})$ (dashed) and using our algorithm (vertical lines). For some parameters the autocorrelation functions obtained from WinBUGS are virtually identical, and for other parameters, WinBUGS with target $\pi(\theta|\mathbf{y})$ converges faster than WinBUGS with target $\pi(\beta|\mathbf{y})$. For all the parameters our proposed MCMC method produced a satisfactory acceptance rate (see Figures \ref{fig2} and \ref{fig3}) and it was superior to the method implemented with WinBUGS. Regarding the computational speed, our MCMC algorithm took between 86 and 87 seconds while WinBUGS took between 163 and 193 seconds. The effective sample sizes are shown in Table (\ref{tabla_2}). Table (\ref{tabSUMMARYEjemploSimulado}) shows the estimation of the risk ratios obtained with our MCMC algorithm. The posterior mean and the 95\% CI for $e^{\beta_1}$ were 0.379 and (0.354, 0.405), respectively.
\section{Conclusions}
Despite recent efforts made by several authors, logistic regression is still used frequently in cohort studies and clinical trials with a common outcome and equal follow-up times, even when one wants to communicate a risk ratio. It is well known that the more frequent the outcome is, the more the odds ratio overestimates the risk ratio when it is greater than 1 (or underestimates it if it is less than 1).
If one wants to estimate an adjusted risk ratio, the log-binomial model is preferable to the logistic one, but the constrained parameter space makes it difficult to find the maximum likelihood estimate. Bayesian methods implemented with WinBUGS can work with a constrained parameter space in a natural way. Moreover, \cite{chucole} have shown that Bayesian methods produce smaller mean squared errors than likelihood based methods. However, WinBUGS can lead to a high Monte Carlo error.
To avoid this drawback, we have proposed an efficient MCMC algorithm to estimate risk ratios from a Bayesian point of view using log-binomial regression models. Our method is based on two strategies: first, a reparameterization based on a Poisson model, and second, an appropriate Cauchy instrumental distribution. It converges to the posterior distribution faster than the methods implemented with WinBUGS. Regarding the computational speed, our MCMC algorithm is similar to WinBUGS for moderate sample sizes and faster for large sample sizes. Furthermore, the possibility of easily carrying out the estimations using our R functions is an important added value.
\newline
\section*{Acknowledgment}
This research was supported by the S\'eneca Foundation Programme
for the Generation of Excellence Scientific Knowledge under
Project 15220/PI/10.
\par
\begin{thebibliography}{}
\bibitem[Chu and Cole, 2010]{chucole}
Chu, H., Cole, S.R. (2010). Estimation of Risk Ratios in Cohort Studies With Common Outcomes: a Bayesian approach. Epidemiology, 21:855--862.
\bibitem[Deddens \textit{et al.}, 2003]{pedersen2003} Deddens, J.A., Petersen, M.R., Lei, X. (2003). Estimation of prevalence ratios when PROC GENMOD does not converge. Proceedings of the 28th Annual SAS Users Group International Conference, Seattle, Washington.
\bibitem[Deddens and Petersen, 2008]{deddens} Deddens, J.A., Petersen, M.R. (2008). Approaches for estimating prevalence ratios. Occupational and Environmental Medicine, 65:501--506.
\bibitem[Greenland, 2004]{greenland2004}
Greenland, S. (2004). Model-based Estimation of Relative Risks
and Other Epidemiologic Measures in Studies of Common Outcomes and in Case-Control Studies. American Journal of Epidemiology, 160:301--305.
\bibitem[Hosmer and Lemeshow, 2000]{Lemeshow}
Hosmer, D. W. and Lemeshow, S. (2000). Applied Logistic Regression, 2nd edition. New York: John Wiley and Sons.
\bibitem[McNutt \textit{et al.}, 2003]{mcnutt}
McNutt, L.A., Wu, C., Xue, X. and Hafner, J.P. (2003). Estimating the Relative Risk in Cohort Studies and Clinical Trials of Common Outcomes. American Journal of Epidemiology, 157:940--943.
\bibitem[Petersen and Deddens, 2006]{pedersen} Petersen, M.R., Deddens, J.A. (2006). RE: "Easy SAS calculations for risk or prevalence ratios and differences". American Journal of Epidemiology, 163:1158--1159.
\bibitem[Spiegelman and Hertzmark, 2005]{Spiegelman2005} Spiegelman, D. and Hertzmark, E. (2005). Easy SAS calculations for risk or prevalence
ratios and differences. American Journal of Epidemiology, 162:199--200.
\bibitem[Zou, 2004]{zou2004} Zou, G.Y. (2004). A modified Poisson regression approach to prospective studies with binary data. American Journal of Epidemiology, 159:702--706.
\end{thebibliography}
\section*{Appendix}
\subsubsection*{WinBUGS code for model (\ref{toy}) proposed by \cite{chucole} to simulate from $\pi(\beta|\mathbf{y})$}
\begin{verbatim}
model{
for(i in 1:N){
p[i]<-exp(beta1+i*beta2)
y[i]~dbern(p[i])
}
beta1~flat()
beta2~flat()
for(i in 1:N){
ones[i]<-1
ones[i]~dbern(q[i])
q[i]<-step(1-p[i])}
}
\end{verbatim}
\subsubsection*{Proof of proposition 1}
Since $\pi(\theta_{\sim j}\vert\mathbf{y})>0$, there exists $\theta_j^*\in\mathbb{R}$ such that $\pi(\theta_j^*,\theta_{\sim j}\vert\mathbf{y})>0$ and hence $\theta_j^*\in\Theta_j$. Using that $\pi(\theta_j^*,\theta_{\sim j}\vert\mathbf{y})>0$ it follows that $z_{ij}\theta^*_j+\sum_{s\neq j} z_{is}\theta_s<0$ for $i\in\mathbb{N}_n$ and then
\[
\sum_{s\neq j} z_{is}\theta_s<0,\,\forall i\in\mathbb{N}_n\,\,\, \mathrm{such}\,\mathrm{that}\,z_{ij}=0.
\]
Let $\theta_j$ be a real number. Then $\theta_j\in\Theta_j$ if and only if
\[
z_{ij}\theta_j+\sum_{s\neq j} z_{is}\theta_s<0,\,\forall i\in\mathbb{N}_n,
\]
that is, if and only if
\[
\theta_j>\sum_{s\neq j}-z_{is}\theta_s/z_{ij},\,\forall i\in A_j,
\]
\[
\theta_j<\sum_{s\neq j}-z_{is}\theta_s/z_{ij},\,\forall i\in B_j
\]
and
\[
\sum_{s\neq j} z_{is}\theta_s<0,\,\forall i\in\mathbb{N}_n\,\,\, \mathrm{such}\,\mathrm{that}\,z_{ij}=0.
\]
It follows that $\Theta_j= (a_j,b_j)$.
\subsubsection*{R functions}
\begin{verbatim}
gibbsLogBinomial=function(j){
ztheta=Z[,-j]%*%theta[-j]
A=Aind[[j]];B=Bind[[j]]
suma1=sum(Z[,j]<0);a=-Inf
if(suma1!=0){a=max(-ztheta[A]/Z[A,j])}
suma2=sum(Z[,j]>0);b=Inf
if(suma2!=0){b=min(-ztheta[B]/Z[B,j])}
u=runif(1,0,1)
location=theta.hat[j]
thetaj.star=location-tan((u-1)*atan(a-location)+u*atan(location-b))
theta.new=theta;theta.new[j]=thetaj.star
p.new=exp(Z[,j]*(thetaj.star-theta[j]))*p
logvalue.new=sum(log(p.new[y==1]))+sum(log(1-p.new[y==0]))
priortheta.new=prior(theta.new)
rho=exp(logvalue.new-logvalue)
rho=rho*priortheta.new/priortheta
rho=rho*(1+(thetaj.star-location)^2)/(1+(theta[j]-location)^2)
rho=min(1,rho)
u=runif(1,0,1)
if(u<rho){theta<<-theta.new;logvalue<<-logvalue.new;p<<-p.new;
priortheta<<-priortheta.new}
}
prior=function(theta){return(1)}
inicial.beta=function(){
coef=summary(glm(y ~ 1,family=binomial))$coeff
mu=coef[1,1];serror=coef[1,2]
musim=rnorm(1,mu,serror)
beta1=log(exp(musim)/(1+exp(musim)))
return(c(beta1,rep(0,k-1)))}
initialize=function(){
#Reparameterization
X<<-model.ini$x;n<<-nrow(X);k<<-ncol(X);beta=model.ini$coeff
Sigma<<-summary(model.ini)$cov.unscaled
L<<-chol(Sigma)
Z<<-X%*%t(L)
model.ini.0<<-glm(y ~ Z-1,family=poisson,x=TRUE)
theta.hat<<-solve(t(L))%*%beta
#Sets in proposition 1
Aind<<-{}
for(j in 1:k){
Aind[[j]]<<-(1:n)[Z[,j]<0]}
Bind<<-{}
for(j in 1:k){
Bind[[j]]<<-(1:n)[Z[,j]>0]}
#Initial point. The following lines are always the same, although
#the user can change punto to another initial point
punto<<-solve(t(L))%*%beta
theta<<-punto
p<<-exp(Z%*%theta)
logvalue<<-sum(log(p[y==1]))+sum(log(1-p[y==0]))
priortheta<<-prior(theta)
}
\end{verbatim}
\subsubsection*{Using the R function \textsf{gibbsLogBinomial} with the breast cancer mortality example}
\begin{verbatim}
#The data
datos<-rbind(cbind(rep(1,12),rep(1,12),c(rep(1,2),rep(0,10))),
cbind(rep(1,55),rep(2,55),c(rep(1,5),rep(0,50))),
cbind(rep(2,22),rep(1,22),c(rep(1,9),rep(0,13))),
cbind(rep(2,74),rep(2,74),c(rep(1,17),rep(0,57))),
cbind(rep(3,14),rep(1,14),c(rep(1,12),rep(0,2))),
cbind(rep(3,15),rep(2,15),c(rep(1,9),rep(0,6))))
datos<-data.frame(datos)
names(datos)<-c("Stage","Receptor_Level","Dead")
#Recoding Receptor_level
datos$Receptor_Level=as.integer(datos$Receptor_Level==1)
#Outcome
y=datos$Dead
##############################################################
##############################################################
############### Running the MCMC algorithm ###################
#Poisson model. The following line depends on covariates
model.ini=glm(y~factor(Receptor_Level)+factor(Stage),
family=poisson,data=datos,x=TRUE)
#The following lines compute the needed input for
#the algorithm and fix the length of the chain to 10000
initialize()
longChain=10000
theta.sim=matrix(rep(NA,longChain*k),ncol=k)
#Finally the chain is simulated as follows
for(h in 1:longChain){
theta.sim[h,]=theta
for(j in 1:k){
gibbsLogBinomial(j)
}
}
beta.sim=theta.sim%*%L
#The object beta.sim contains the simulations
#Posterior estimation of exp(beta) using the coda package
library(coda)
RR=mcmc(exp(beta.sim))
summary(RR)
autocorr.plot(RR)
effectiveSize(RR)
plot(RR)
\end{verbatim}
\begin{figure}
\caption{Parameter $\beta_1$, model (\ref{toy}).}
\label{fig_ejemplo_1_beta1}
\end{figure}
\begin{figure}
\caption{Parameter $\beta_2$, model (\ref{toy}).}
\label{fig_ejemplo_1_beta2}
\end{figure}
\begin{figure}
\caption{Breast cancer mortality example. Parameters $e^{\beta_1}$, $e^{\beta_2}$, $e^{\beta_3}$, $e^{\beta_4}$.}
\label{fig-greeland}
\end{figure}
\begin{figure}
\caption{Low birth weight example. Parameters $e^{\beta_j}$.}
\label{fig1-pesoNacer}
\end{figure}
\begin{figure}
\caption{Low birth weight example. Parameters $e^{\beta_j}$.}
\label{fig2-pesoNacer}
\end{figure}
\begin{figure}
\caption{MCMC output for the simulated data. Parameters $e^{\beta_j}$.}
\label{fig1}
\end{figure}
\begin{figure}
\caption{MCMC output for the simulated data. Parameters $e^{\beta_j}$.}
\label{fig2}
\end{figure}
\begin{figure}
\caption{MCMC output for the simulated data. Parameters $e^{\beta_j}$.}
\label{fig3}
\end{figure}
\begin{table}[h]
\centering \caption{Data relating receptor level (low (1) and high (2)) and stage to
5-year breast cancer mortality.}
\begin{tabular}{cccc}
\\
\hline
Stage & Receptor level & Deaths & Total \\
\hline
1 & 1 & 2 & 12\\
1 & 2 & 5 & 55\\
2 & 1 & 9 & 22\\
2 & 2 & 17 & 74\\
3 & 1 & 12 & 14\\
3 & 2 & 9 & 15\\
\hline
\end{tabular}
\label{Greenland}
\end{table}
\begin{table}[h]
\centering
\caption{Breast cancer mortality example. Effective sample sizes obtained with our algorithm (first row), WinBUGS with target $\pi(\theta|\mathbf{y})$ (second row) and WinBUGS with target $\pi(\beta|\mathbf{y})$ (third row).}
\begin{tabular}{cccc}\\
\hline\\[-1.5ex]
$e^{\beta_1}$ & $e^{\beta_2}$ & $e^{\beta_3}$ & $e^{\beta_4}$\\
\hline
5636.9 & 4464.8 & 5450.6 & 4685.2\\
1829.3 & 1360.8 & 1600.2 & 1433.4\\
70.7 & 324.8 & 61.2 & 59.4\\
\hline
\end{tabular}
\label{tabla-ESS-breast}
\end{table}
\begin{table}[h]
\centering
\caption{Bayesian estimation of the risk ratios obtained using our MCMC algorithm for the breast cancer mortality data: posterior mean, $E(RR|\mathbf{y})$, and 95\% credible interval (95\% CI).}
\begin{tabular}{lccc}
\\
\hline
& $E(RR|\mathbf{y})$ & 95\% CI\\
\hline
receptor low & 1.576 & (1.041, 2.364)\\
stage 2 & 2.939 & (1.256, 6.404)\\
stage 3 & 6.626 & (2.871, 14.258)\\
\hline
\end{tabular}
\label{tabla-res-breast}
\end{table}
\begin{table}[h]
\centering
\caption{Bayesian estimation of the risk ratios obtained using our MCMC algorithm for the low birth weight data: posterior mean, $E(RR|\mathbf{y})$, and 95\% credible interval (95\% CI).}
\begin{tabular}{lccc}
\\
\hline
& $E(RR|\mathbf{y})$ & 95\% CI\\
\hline
ui yes & 1.242 & (0.780, 1.863)\\
smoke yes & 1.586 & (1.022, 2.377)\\
race black & 1.757 & (0.926, 2.893)\\
race other & 1.573 & (0.969, 2.439)\\
age (18,20] & 1.120 & (0.554, 1.921)\\
age (20,25] & 1.226 & (0.739, 1.944)\\
age (25,30] & 0.934 & (0.485, 1.574)\\
age $> 30$ & 0.532 & (0.115, 1.199)\\
ptl$>0$ yes & 1.727 & (1.133, 2.514)\\
\hline
\end{tabular}
\label{tabSUMMARYpesoNacer}
\end{table}
\begin{table}[h]
\centering
\caption{Low birth weight example. Effective sample sizes obtained with our MCMC algorithm (rows a and d), WinBUGS with target $\pi(\theta|\mathbf{y})$ (rows b and e) and WinBUGS with target $\pi(\beta\vert\mathbf{y})$ (rows c and f).}
\begin{tabular}{cccccc}\\
\hline\\[-1.5ex]
& $e^{\beta_1}$ & $e^{\beta_2}$ & $e^{\beta_3}$ & $e^{\beta_4}$ & $e^{\beta_5}$\\
\hline
a & 4239.9 & 4756.9 & 4084.8 & 3965.8 & 4074.3\\
b & 852 & 1426.3 & 1067.7 & 956.9 & 1161.9\\
c & 85.1 & 449.9 & 235.8 & 259.5 & 276.9\\
\hline\\[-1.5ex]
& $e^{\beta_6}$ & $e^{\beta_7}$ & $e^{\beta_8}$ & $e^{\beta_9}$ & $e^{\beta_{10}}$\\
\hline
d & 5753.7 & 5786.8 & 5937.3 & 5837.2 & 2414.2\\
e & 1283.1 & 1163.8 & 1563 & 1560.7 & 845.6\\
f & 318.4 & 243.5 & 486.5 & 693.5 & 367.1\\
\hline
\end{tabular}
\label{tabESSpesoNacer}
\end{table}
\begin{table}[h]
\centering
\caption{Bayesian estimation of the risk ratios obtained using our MCMC algorithm for the simulated data: posterior mean, $E(RR|\mathbf{y})$, and 95\% credible interval (95\% CI).}
\begin{tabular}{lcc}
\\
\hline
& $E(RR|\mathbf{y})$ & 95\% CI\\
\hline
$x_2$ & 1.367 & (1.195, 1.565)\\
$x_3$ & 1.214 & (1.073, 1.372)\\
$x_4$ & 1.320 & (1.164, 1.497)\\
$x_5$ & 1.008 & (0.807, 1.259)\\
$x_6$ & 1.295 & (1.044, 1.588)\\
$x_7$ & 1.511 & (0.955, 2.244)\\
$x_8$ & 1.755 & (1.046, 2.765)\\
$x_9$ & 1.115 & (0.902, 1.371)\\
\hline
\end{tabular}
\label{tabSUMMARYEjemploSimulado}
\end{table}
\begin{table}[h]
\centering
\caption{Simulated example. Effective sample sizes obtained with our MCMC algorithm (first row), WinBUGS with target $\pi(\theta|\mathbf{y})$ (second row) and WinBUGS with target $\pi(\beta|\mathbf{y})$ (third row).}
\begin{tabular}{ccccccccc}\\
\hline\\[-1.5ex]
$e^{\beta_1}$ & $e^{\beta_2}$ & $e^{\beta_3}$ & $e^{\beta_4}$ & $e^{\beta_5}$ & $e^{\beta_6}$ & $e^{\beta_7}$ & $e^{\beta_8}$ & $e^{\beta_9}$\\
\hline
4299.5 & 5058.5 & 4930.9 & 4979 & 4466.4 & 5495.6 & 5546.3 & 5677.3 & 5557\\
1911 & 1854.2 & 1822.1 & 2002 & 1899.7 & 1951.8 & 1914.5 & 2052 & 2136\\
1405.4 & 1684.7 & 1934.6 & 1818 & 1841.3 & 1799.6 & 975 & 1141.7 & 2050\\
\hline
\end{tabular}
\label{tabla_2}
\end{table}
\end{document} |